
IBM InfoSphere Admin / Metadata Administrator Resume

Chicago, IL

SUMMARY:

  • A competent professional with over 9 years of experience as an IBM InfoSphere Administrator, Metadata Administrator, and Information Analyzer developer, and in IT business analysis, data warehousing, and data governance.
  • Skilled in IBM InfoSphere 11.3/11.5 (Information Governance Catalog, Data Lineage, Information Analyzer, DataStage/QualityStage, Metadata Asset Manager, Metadex, Stewardship Center), databases (Oracle, Microsoft SQL Server, Teradata, DB2), and UNIX shell scripting.
  • Performed server assessment, volume estimation, topology design, resources estimation, time estimation before installing the IBM InfoSphere
  • Installed and configured IBM InfoSphere 11.3 and 9.1 in new environments to start Data Governance projects.
  • Experienced in creating Data Lineage and Impact Analysis reports through IBM Data Governance Catalog and IBM Metadata Workbench (8.7).
  • Played an integral part in building a multi-server, multi-database enterprise data warehouse, using DataStage ETL (extract, transform, and load) tools and SQL Server to load legacy business data.
  • Expertise in data analysis, data conversion, design, and data modeling, specializing in data warehousing and business intelligence; experienced in design and in preparing HLDs (high-level designs) for business requirements.
  • Experienced in Database Management, Data Mining, Software Development Fundamentals, Strategic Planning, Operating Systems, Requirements Analysis, Data warehousing, Data Modeling and Data Marts.
  • Extensive experience with Parallel Extender. Efficient in all phases of the development lifecycle, including data cleansing, data conversion, performance tuning, and system testing. Expertise in creating reusable components such as shared containers and local containers.
  • Experienced in creating entity-relationship and dimensional data models using the Kimball methodology, i.e., Star Schema and Snowflake Schema.
  • Worked on Slowly Changing Dimensions (Type1, 2, 3 & 6) & Change Data Capture and its implementation to keep track of historical data.
  • Worked with QualityStage stages such as Data Rules, Match Frequency, QualityStage Legacy, MNS, Investigate, Reference Match, Standardize, Unduplicate Match, and Survive to keep data exact per customer requests.
  • Developed industry-standard solutions with DataStage ETL jobs based on business requirements, using various DataStage stages such as Sort, Column Import, Modify, Aggregator, Filter, Funnel, Join, Lookup, Merge, Change Capture, Datasets, MQ Series, Sequential Stage, and Transformer.
  • Migrated Data Stage from Version 7.x to 8.x.
  • Experience in troubleshooting DataStage jobs and addressing production issues such as performance tuning and enhancement. Handled complex web services in InfoSphere Information Server through DataStage ASB Packs v2.0.
  • Designed and developed Oracle PL/SQL Procedures, experience in writing PL/SQL Packages, Stored Procedures, functions, Materialized Views and Triggers using TOAD developer/SQL Developer. Leveraged Explain Plan and TKPROF to improve query performance.
  • Knowledge of BI reporting tools (Cognos).
  • Experienced in FTP & Connect Direct file transfer mechanism.
  • Experienced in making prototypes for ETL process with Mapping Templates and making reports on it.
  • Extensive domain knowledge of Telecom, Banking & Finance Domain.
  • Experienced in writing Autosys Scheduler scripts by analyzing ETL jobs & time dependencies.
  • Good Knowledge of ITIL process, AGILE Methodology, SDLC process.
  • Experienced with the ERwin data modeling tool (9.5) and IBM InfoSphere Data Architect for data modeling.
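Autosys dependencies like those described in the scheduling bullet above are normally written in JIL (Job Information Language); a minimal, hypothetical job definition, where the job names, machine, and script paths are all placeholders:

```
insert_job: etl_daily_load
job_type: c
command: /opt/etl/scripts/run_daily_load.sh
machine: etlhost01
owner: dsadm
condition: s(etl_daily_extract)
start_times: "02:00"
std_out_file: /var/log/etl/etl_daily_load.out
std_err_file: /var/log/etl/etl_daily_load.err
```

The `condition: s(etl_daily_extract)` line encodes the time dependency analysis: the load only starts after the extract job succeeds.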

PROFESSIONAL EXPERIENCE:

Confidential, Chicago, IL

IBM Infosphere Admin/ MetaData Administrator

Responsibilities:

  • Installed IBM Infosphere 11.5 version
  • Configured WAS authentication with Active directory with federated model
  • Configured Information Server security for users and groups
  • Worked closely with the IBM support team to resolve open PMRs and installed fix packs
  • Created the dsenv, uvconfig, uvparam, and ODBC DSN configuration files in the new IBM InfoSphere installation
  • Troubleshoot issues using system logs
  • Migrated InfoSphere 11.3 to 11.5 component-wise, i.e., security, Information Analyzer projects, IGC, and DataStage projects
  • Assisted business users in creating data quality rules and fixed issues related to Information Analyzer data quality rules.
  • Create extension mapping document and help business to create data lineage
  • Built UNIX shell automation scripts to start and stop the environment, i.e., WAS and DataStage.
  • Imported databases into IMAM
  • Helped business to use open IGC
  • Worked on a Hadoop POC to configure Open IGC with Hive tables and HDFS files.
  • Generated data lineage for DataStage jobs
  • Worked with the IGC tool to build the glossary, assign assets to terms, create data classes and labels, build and configure lineage, and produce custom lineage reports
  • Defined users, roles, access levels, Catalog permissions, Custom attributes and custom views
  • Created and manage workflow, operational metadata, extended data sources and mappings
  • Imported Data model with IDA.
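The start/stop automation described above can be sketched as a small wrapper script. The install root and the `MetadataServer.sh` / `uv -admin` commands reflect a typical default Linux install and should be treated as assumptions to adjust for the local topology:

```shell
#!/bin/sh
# Hypothetical start/stop wrapper for an InfoSphere environment (WAS + DataStage).
# IS_HOME and the script paths below are assumptions based on a default install.
IS_HOME=${IS_HOME:-/opt/IBM/InformationServer}
WAS_BIN="$IS_HOME/ASBServer/bin"       # services tier (WebSphere) scripts
DS_BIN="$IS_HOME/Server/DSEngine/bin"  # engine tier scripts

start_env() {
    "$WAS_BIN/MetadataServer.sh" run &      # start the WebSphere services tier
    . "$IS_HOME/Server/DSEngine/dsenv"      # load the engine environment
    "$DS_BIN/uv" -admin -start              # start the DataStage engine
}

stop_env() {
    "$DS_BIN/uv" -admin -stop               # stop the engine first
    "$WAS_BIN/MetadataServer.sh" stop       # then the services tier
}

# dispatch only when an action argument is supplied
if [ $# -gt 0 ]; then
    case "$1" in
        start) start_env ;;
        stop)  stop_env ;;
        *)     echo "usage: $0 {start|stop}" >&2; exit 1 ;;
    esac
fi
```

Wrapping both tiers in one script keeps the stop order (engine before WAS) consistent across environments.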

Confidential, Malvern, PA

IBM Infosphere Admin/ MetaData Administrator/ BPM Administrator

Responsibilities:

  • Performed server assessment, volume estimation, topology design, resources estimation, time estimation before installing the IBM InfoSphere
  • Installed IBM Infosphere 11.3 version
  • Configured WAS authentication with Active directory with federated model
  • Configured Information Server security for users and groups
  • Performed the roles of DataStage operator and manager while balancing ETL jobs
  • Worked closely with IBM support team to resolve opened PMR's.
  • Created the dsenv, uvconfig, odbc dsn's, uvparam environment variable files into the new IBM InfoSphere
  • Manage InfoSphere Information Server sessions/job scheduling
  • Manage Information Server components: DataStage, the Admin console, and other InfoSphere components
  • Configure logging and reporting and troubleshoot audit trace files
  • Configure DataStage projects and objects, including environment variables, data sets, job message handlers, and configuration files
  • Analyzed and tuned performance using the Performance Analysis and Resource Estimator tools
  • Establish database connectivity with Information Server
  • Gather and process operational metadata
  • Manage system performance to meet required end user performance requirements
  • Created scripts using istool to run IGC queries
  • Create, build, and deploy packages using Information Server Manager
  • Scheduled backups using Information Server
  • Managed enterprise-wide job management
  • Configured Information Server clients, including Information Analyzer, Data Governance Catalog, and Data Lineage.
  • Create Categories, Terms, defining relationships between assets, External Assets/mappings
  • Analyze existing ETL jobs and provide recommendations for performance improvement
  • Installed fix packs 1 and 2 and all RU1-RU20 patches to fix InfoSphere issues, e.g., the Data Governance Catalog custom attributes issue, the Information Analyzer authentication issue, and IMAM Cognos report imports
  • Built REST API integration between DataStage and IGC for catalog updates and deletes.
  • Imported Custom asset in IGC.
  • Develop shell scripts to automate DataStage jobs runs
  • Promoted DataStage code from the DEV environment to CAT and Production
  • Enabled event notification, integrated with IBM BPM.
  • Actively participated in IGC access management and administration.
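The DataStage job-run automation above usually wraps the `dsjob` command; a minimal sketch that maps the job's finishing status to pass/fail. The engine path is an assumption, and the status-code mapping (1 = finished OK, 2 = finished with warnings) should be verified against the installed release:

```shell
#!/bin/sh
# Hypothetical dsjob wrapper: run a DataStage job and translate its finishing
# status into a pass/fail result. Override DSJOB to point at the local
# engine's binary (or a stub when testing).
DSJOB=${DSJOB:-/opt/IBM/InformationServer/Server/DSEngine/bin/dsjob}

run_ds_job() {
    project=$1
    job=$2
    # -run starts the job; -jobstatus waits and exits with the finishing status
    $DSJOB -run -jobstatus "$project" "$job"
    rc=$?
    # assumed mapping: 1 = finished OK, 2 = finished with warnings;
    # anything else is treated as a failure
    case $rc in
        1|2) echo "OK: $project/$job (status=$rc)"; return 0 ;;
        *)   echo "FAILED: $project/$job (status=$rc)" >&2; return 1 ;;
    esac
}
```

A scheduler (Autosys, cron) can then call `run_ds_job <project> <job>` and rely on the script's exit code.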

MetaData Administrator

Confidential

Responsibilities:

  • Imported 200+ Oracle, MS SQL Server, Sybase, and DB2 databases into IGC through IMAM with an automated script; re-imports were scheduled nightly based on dates.
  • Imported Cognos reports metadata in IGC .
  • Used Metadex (Compact BI) to import views and stored procedures from Oracle, MS SQL Server, and DB2 databases to generate data lineage.
  • Imported metadata from Java code (JAR files) into IBM Information Governance Catalog.
  • Worked on a POC with the Compact BI product to import COBOL code into IGC.
  • Created scripts to load XML and CSV files into IBM Information Governance Catalog (IGC) using istool.
  • Architected a Data Quality Management solution implementing IGC (Business Glossary) and metadata management across various business domains, with end-to-end capture of metadata from various sources: Excel, data warehouse, data mart(s), and the SAP Business Objects repository.
  • Generated data lineage for applications based on business requirements, producing data lineage, impact analysis, and business lineage reports across source database tables, views, DataStage jobs, reports, etc.
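The automated, nightly-scheduled IMAM re-imports described above can be driven from a shell script. This is a hypothetical sketch: the `imam.sh` path, the `--action`/`--importArea` flags, and the import-area names are all assumptions to verify against `imam.sh --help` on the installed version:

```shell
#!/bin/sh
# Hypothetical nightly re-import of staged IMAM import areas so IGC picks up
# schema changes. Paths, flags, and area names are placeholders.
IMAM=${IMAM:-/opt/IBM/InformationServer/ASBNode/bin/imam.sh}

build_reimport_cmd() {
    area=$1
    # credentials come from the environment, never hard-coded in the script
    echo "$IMAM --action reimport --username $IMAM_USER --password $IMAM_PASS --importArea $area"
}

reimport_all() {
    # one import area per source database, passed as arguments
    for area in "$@"; do
        cmd=$(build_reimport_cmd "$area")
        echo "$(date '+%Y-%m-%d %H:%M:%S') reimporting: $area"
        $cmd || echo "reimport failed for $area" >&2
    done
}
```

Scheduling `reimport_all STG_ORACLE STG_MSSQL STG_DB2` from cron each night approximates the dated nightly refresh the bullet describes.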

BPM Administrator

Confidential

Responsibilities:

  • Installed, configured, administered, and troubleshot IBM WebSphere Application Server, Business Process Manager 8.5.5, and WebSphere Process Server 7.x/8.x on Linux platforms.
  • Configured BPM/WebSphere resources such as J2C, JMS, JDBC, resource adapters, mail providers, and shared libraries; called Jython scripts to perform configuration steps not covered by the Build Forge library.
  • Performed daily log analysis using heap and core dump analyzers for incidents and outages.
  • Deployed EAR files using Jython scripts and the admin console for Dev, QA, Prod, and Prod Support environments.
  • Worked closely with IBM on PMRs for solutions and implemented them; created many critical-situation tickets for WPS and WAS bugs.
  • Configured team binding for User & Group access
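EAR deployments like those above are commonly scripted through wsadmin's Jython interface. A hedged sketch, where the wsadmin path and EAR/app names are placeholders; `AdminApp.install` and `AdminConfig.save` are standard wsadmin scripting objects:

```shell
#!/bin/sh
# Hypothetical EAR deployment via wsadmin's Jython interface.
WSADMIN=${WSADMIN:-/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh}

build_deploy_cmd() {
    ear=$1
    app=$2
    # -lang jython runs the inline command through wsadmin's Jython engine;
    # AdminConfig.save() persists the change to the configuration repository
    echo "$WSADMIN -lang jython -c \"AdminApp.install('$ear', ['-appname', '$app']); AdminConfig.save()\""
}

deploy_ear() {
    cmd=$(build_deploy_cmd "$1" "$2")
    echo "deploying $2 from $1"
    eval "$cmd"
}
```

Building the command string in one place makes it easy to log what would run in each environment (Dev, QA, Prod) before executing it.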

Confidential, Greensboro, NC

IBM Infosphere Administrator

Responsibilities:

  • Installed and Configured Infosphere Information Server 9.1.2
  • Performed day-to-day InfoSphere platform activities such as creating projects, granting users project permissions, configuring source/target connections, unlocking jobs, and monitoring disk space and memory usage.
  • Opened PMRs with IBM and worked with them to closure.
  • Provided primary on-call support for production issues.
  • Experienced in using other components of the InfoSphere suite such as Information Analyzer, Metadata Asset Manager (IMAM), Metadata Workbench, Data Quality Console (DQC), and the operational database.
  • Experienced in deploying and undeploying Information Services (MDM NM ADDRESS CLEANS).
  • Applied patches on all tiers.
  • Installed and Configured the SAP Packs.
  • Experienced working with Information Server Suite on high availability and grid platforms.
  • Used the Export/Import wizard to move DataStage jobs between the Dev, QA, and Prod servers.
  • Configured the server files dsenv and .odbc.ini, and added DSN entries in the uvconfig file.
  • Used Data stage Director for Scheduling the sequences and jobs.
  • Responsible for monitoring/troubleshooting Data Stage jobs during the production data load processes
  • Fine-tuned the environment to avoid bottlenecks and performance issues.
  • Subject matter expert for any critical issues related to DataStage.
  • Extensively used multiple configuration files (environment variable) to increase the nodes according to the varying processing needs.
  • Expert in using the IBM DataStage Connector Migration Tool.
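A dsenv/.odbc.ini setup like the one above typically pairs a DataDirect branded driver with a DSN. A hypothetical Oracle Wire Protocol entry; the driver library file name varies by driver release, and the host, port, and SID are placeholders:

```
[PROD_ORA]
Driver=/opt/IBM/InformationServer/Server/branded_odbc/lib/VMora28.so
Description=DataDirect Oracle Wire Protocol
HostName=dbhost01
PortNumber=1521
SID=PRODDB
```

The DSN name in brackets is what DataStage jobs reference, so keeping names consistent across Dev, QA, and Prod lets the same job run unchanged in each environment.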

Confidential, Middletown NJ

IBM InfoSphere Administrator/Developer

Responsibilities:

  • Performed server assessment, volume estimation, topology design, resources estimation, time estimation before installing the IBM InfoSphere
  • Installed IBM InfoSphere versions 8.5 and 8.7
  • Configured Information Server security for users and groups
  • Performed the roles of DataStage operator and manager while balancing ETL jobs
  • Worked closely with IBM support team to resolve opened PMR's.
  • Created the dsenv, uvconfig, uvparam, and ODBC DSN configuration files in the new IBM InfoSphere installation
  • Manage InfoSphere Information Server sessions/job scheduling
  • Manage Information Server components: DataStage, the Admin console, and other InfoSphere components
  • Project Creation in all environments from Development to Production.
  • Performed DataStage client installations on VMware VMs and desktops.
  • Users Creation and assigning required roles in all environments.
  • Configured different database connections such as SQL Server and DB2, including Kerberos authentication.
  • Stopped and started the DataStage server as well as WAS services.
  • Cleared locks in DataStage and UNIX, and cleared zombie/defunct processes.
  • Debugging and resolving all DataStage related issues like login, permission & Project Corruption etc.
  • Health Check activities on different environments.
  • Involved in upgrading Information Server from 8.0.1 to 8.1 on Dev, Test and Prod Servers.
  • Installed and configured fixpacks, patches on Service, Engine and Client tiers.
  • Set up the projects, roles, users, privileges in different environments.
  • Setting up job parameter defaults and environment variables.
  • Coordinated with Customer and IBM technical Support for Problem Management Requests.
  • Tune the DataStage server settings for optimum performance.
  • Unlocked DataStage jobs from Administrator Client and also from OS level.
  • Deleted unwanted datasets from Designer Client and also from UNIX command line.
  • Defined the APT_CONFIG_FILE and configured it for multiple nodes.
  • Worked with other technical teams like UNIX admin, DB2 DBA, Oracle DBA to set-up and configure the DataStage environments.
  • Analyzed the requirements, functional specifications and identifying the source data to be moved to the warehouse.
  • Extracted data from various source systems.
  • Used the DataStage Designer to develop processes for extracting, cleansing, transforming, integrating, and loading data into data warehouse.
  • Validated, Scheduled, Run and monitored developed jobs using Data Stage Director.
  • Exported and imported jobs between development and production environments using Data Stage Manager.
  • Extensively used built-in Processing stages, which includes Aggregator, Funnel, Remove Duplicates, Join, Transformer, Sort and Merge in most of the jobs.
  • Scheduling jobs for daily loads.
  • Involved in planning for building a new DataStage Environment.
  • Involved in unit testing of the system.
  • Defined Migration approach for the new version.
  • Analyze business requirements and created document for the source to target mapping for the ETL development. Involved in preparing high level and detailed design documents and acceptable differences documents for the end users.
  • Created projects, added data sources, and wrote, configured, and executed rules/rule sets within Information Analyzer.
  • Developed data profiling solutions, ran analysis jobs, viewed results, and created and managed data quality controls using Information Analyzer.
  • Extracted data from fixed-width files, transformed it as per the requirements, and loaded it into the IW Oracle tables using SQL*Loader scripts.
  • Created Data stage Parallel jobs using Designer and extracted data from various sources, transformed data according to the requirement and loaded into target databases like Oracle 10g and Sybase.
  • Extensively worked with Data Stage Designer for developing various jobs in formatting the data from different sources, cleansing the data, summarizing, aggregating, transforming, implementing the partitioning and sorting methods and finally loading the data into the data warehouse.
  • Extensively did the Data Quality Checks on the source data.
  • Used Data stage Designer for creating new job categories, metadata definitions, and data elements, import/export of projects, jobs and data stage components, viewing and editing the contents of the repository.
  • Used the Data stage designer to design summary tables for monthly sales and UNIX model scripts to automate and run the jobs.
  • Worked with Oracle Connector and Enterprise, DB2 Connector, Peek, Dataset, Lookup, File Set, Filter, Copy, Join, Remove Duplicates, Modify, Surrogate Key Generator, Change Capture, Funnel stages. Involved in Integration testing, Co-ordination of the development activities, maintenance of ETL Jobs.
  • Preparing HLD for requirement
  • Preparing UNIT test case package, Preparation of Unit Test cases and Unit test Logs and performing Unit Testing of Code.
  • Creation of jobs sequences using Job Activity, Wait for File Activity and Notification Activity etc.
  • Performing the System Integration testing for data sources and checking the connectivity.
  • Developing the Framework job on which other main job is depended for Notification and failed status.
  • Performance tuning: identified job process issues and resolved performance problems.
  • Implemented QualityStage to filter out unwanted data from tables and finalize data types and lengths, which helped reduce database utilization.
  • Interacted with business users for requirement gathering and system analysis; gathered information on the software and hardware required for the project.
  • Prepared the timeline for code development, DIT, and unit-testing durations.
  • Prepared analysis reports of database (tablespace/schema) sizes.
  • Conducting meeting with Data modeler for table structure definition and other mandatory requirement for Development
  • Used different types of stages like Transformer, CDC, Remove Duplicate, Aggregator, ODBC, Join, Funnel, dataset and Merge for developing different jobs.
  • Involved in performance tuning of the jobs while developing the jobs
  • Handled Complex Web Services in InfoSphere Information Server through DataStage ASB Packs
  • Transform and integrate data using WebSphere DataStage XML and Web services packs
  • Discussing with other team DBA/UNIX team for mandatory requirement to develop the code.
  • Providing overview to team member for low level design
  • Conducting meeting with Testing team for test cases, How they are going to perform Testing for Code
  • Code (jobs) development / code (jobs) modifications as per requirement changes.
  • Preparing the Job parameter (Local parameter and global parameter) for code rather than using manual parameter.
  • Worked on standardization of files to audit the input data and make sure that the data is valid.
  • Exporting & Importing Jobs and Importing the Metadata from repository as and when required.
  • Involved in the development of data stage Jobs, UNIX Shell scripts for data loading.
  • Writing reconciliation queries as per business requirements. Using shared containers, created reusable components for local and shared use in the ETL process.
  • Providing UAT support, Deployment support.
  • Developed Data Stage Loads into Oracle for STAR Schema
  • Used Subversion (SVN source control management) to push code to higher environments (QA/IT/Staging/PROD)
  • Migrated jobs from development to QA to Production environments.
  • Preparing the UNIX script for post job completion activity.
  • Responsible for overseeing the Quality procedures, Standards related to the project.
  • Migrated 7.5 server jobs to 8.7 parallel jobs on a stringent timeline, which was highly appreciated by the customer.
  • Designing & Configuring the routines and scripts for sending the critical alerts from the production support environment prospective.
  • Monitoring process progress as per scheduled deadlines for various tasks and taking necessary steps to ensure completion within time, cost and effort parameters.
  • Analyzed business requirements and created business rules to identify Enterprise Critical Data Elements (ECDEs) and Business Value Data Elements (BVDEs)
  • Imported metadata into Information Analyzer from the production environment
  • Created Automated DQ solution and improved the performance by 600 times.
  • Profiled data in IBM Information Analyzer using rule and column analysis against identified ECDEs and BVDEs.
  • Submitted analysis of results and provided LoB/EF recommendations for data quality rule development.
  • Configured and coded data quality validation rules within IBM Information Analyzer (or via SQL) and scheduled, reviewed, and packaged monthly/quarterly LoB/EF data quality results.
  • Performed column analysis, rule analysis, primary key analysis, natural key analysis, foreign-key analysis, and cross-domain analysis.
  • Imported/exported projects along with rules and bindings from one environment to another.
  • Created scorecards for all requested elements and shared them with the business.
  • Develop SQL, run and analyze data quality results.
  • Strong knowledge of data system platforms, practices and data management policies.
  • Designed, developed, and documented data-related policies, standards, procedures, and processes.
  • Used Import Export Manager to bring metadata about data files, data tables, business terms, reports, and models into IBM Metadata Workbench.
  • Established manual and automated links between assets in IBM Metadata Workbench.
  • Created Data Lineage, Impact Analysis, and Business Lineage reports through IBM Metadata Workbench.
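The multi-node APT_CONFIG_FILE mentioned above uses a simple block syntax; a two-node sketch where the hostname and resource paths are placeholders to replace with the local engine host and storage mounts:

```
{
    node "node1"
    {
        fastname "etlhost01"
        pools ""
        resource disk "/data/datastage/datasets" {pools ""}
        resource scratchdisk "/data/datastage/scratch" {pools ""}
    }
    node "node2"
    {
        fastname "etlhost01"
        pools ""
        resource disk "/data/datastage/datasets" {pools ""}
        resource scratchdisk "/data/datastage/scratch" {pools ""}
    }
}
```

Keeping several such files (one, two, four nodes) and switching the APT_CONFIG_FILE job parameter is how the node count is varied to match processing needs without changing job designs.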

Confidential

Sr. Datastage Developer

Responsibilities:

  • Used DataStage Manager for importing metadata from the repository and creating new job categories and data elements; exported and imported jobs between the production, development, and test servers.
  • Used several stages like sequential file stage, datasets, Copy, Aggregator, Row Generator, Join, Merge,Lookup, Funnel, Filter, Column Export etc in development of parallel jobs.
  • Used the Data Stage Director and its run-time engine to schedule running the solution, testing and debugging its components, and monitoring the resulting executable versions (on an ad hoc or scheduled basis).
  • Creation of jobs sequences using Job Activity, Wait for File Activity and Notification Activity etc.
  • Code (jobs) development / code (jobs) modifications as per requirement changes
  • Performance Tuning, Jobs process identifying and resolution of performance Issues.
  • Running, monitoring and scheduling of Data Stage jobs through data stage director.
  • Exporting & Importing Jobs and Importing the Metadata from repository as and when required.
  • Involved in the development of data stage Jobs, UNIX Shell scripts for data loading.
  • Involved in Performance Tuning of Queries and Jobs.
  • Responsible for overseeing the Quality procedures, Standards related to the project
  • Successfully delivered one of the major milestones and created a process to provide RCA for production defects post-deployment.
  • Received source data as Oracle tables, sequential files, and Excel sheets; developed processes to extract the source data and load it into the data warehouse after cleansing, transformation, and integration.
  • Developed various shared Container jobs for Re-Usability
  • Worked with hash files, Parallel Extender for parallel processing, and IPC stages to improve job performance.
  • Wrote SQL queries, PL/SQL procedures to ensure database integrity
  • Created shell scripts for production and the scripts for small changes using before-after subroutine. Used Korn Shell scripts for scheduling DS jobs.
  • Used Partition methods and collecting methods for implementing parallel processing.
  • Tuned of SQL queries for better performance for processing business logic in the database

Confidential

Datastage Developer

Responsibilities:

  • Used stages like Transformer, sequential, Oracle, ODBC, Aggregator, Data Set, File Set, CFF, Remove Duplicates, Sort, Join, Lookup, Funnel, Copy, Modify, Filter, Change Data Capture, Change Apply, Head, Tail, Sample, Surrogate Key and SCD.
  • Extensively worked on capturing the Change Data.
  • Extensively worked on slowly changing Dimension concepts.
  • Used DataStage Manager for importing metadata from the repository and creating new job categories and data elements; exported and imported jobs between the production, development, and test servers.
  • Used the Data Stage Director and its run-time engine to schedule running the solution, testing and debugging its components, and monitoring the resulting executable versions (on an ad hoc or scheduled basis).
  • Wrote SQL queries, PL/SQL procedures to ensure database integrity
  • Creation of jobs sequences using Job Activity, Wait for File Activity and Notification Activity etc.
  • Code (jobs) development / code (jobs) modifications as per requirement changes
  • Preparation of Unit Test cases and Unit test Logs.
  • Performance Tuning, Jobs process identifying and resolution of performance Issues.
  • Running, monitoring and scheduling of Data Stage jobs through data stage director.
  • Exporting & Importing Jobs and Importing the Metadata from repository as and when required.
  • Involved in the development of data stage Jobs, UNIX Shell scripts for data loading.
  • Writing Reconciliation Queries as per Business Requirements.
  • Involved in Performance Tuning of Queries.
  • Worked with XML Transformer stage to convert and load XML data into Data Warehouse. Designed Jobs, scripts and process to rectify the data corruption and cleanup activity.
  • Responsible for overseeing the Quality procedures, Standards related to the project

Confidential

Datastage Developer

Responsibilities:

  • Extensively used DataStage Designer stages such as ODBC, Native plug-in, Sequential File, Remove duplicates, filter, Aggregator, Transformer, Join, Pivot, Lookup, XML input, XML output, MQ Connector, Sort,Funnel, Dataset, Copy, Modify, Row generator and Merge.
  • Used DataStage Software Development Kit (SDK) transforms; used Director for executing jobs, analyzing logs, and scheduling jobs.
  • Preparation of Unit Test cases and Unit test Logs.
  • Code (jobs) development and modification as per requirement changes; ran, monitored, and scheduled DataStage jobs through DataStage Director; wrote reconciliation queries as per business requirements.
  • Exporting & Importing Jobs and Importing the Metadata from repository as and when required.
  • Involved in the development of data stage Jobs.
  • Designed Jobs, scripts and process to rectify the data corruption and cleanup activity.
  • Preparing Knowledge base repository for knowledge sharing.
