Informatica MDM Product 360 PIM Consultant Resume
Joliet, IL
SUMMARY:
- IT consultant with 21+ years of professional experience, including extensive client- and business-team-facing, functional, and technical experience delivering end-to-end Data Integration, Data Lake, Data Warehousing, Data Governance, Business Intelligence (BI), Data Modeling, OLAP, ETL, centralized data and log processing using the ELK stack, and OLTP client/server product development projects.
- Worked on Informatica PowerCenter, Informatica IDQ, Informatica Analyst, Informatica Administrator, Informatica DIH (Data Integration Hub), Informatica Advanced Data Transformation (ADT), Informatica AXON, Informatica EDC, AWS S3, AWS Glue, AWS Athena, Databricks Spark, Hadoop, the ELK stack, IBM InfoSphere Information Server DataStage, Information Analyzer, IBM Information Server Fast Track, Cognos, Business Objects, and other ETL tools.
- Strong analytical and problem-solving skills, effective verbal and written communication, and good planning and organization skills. Able to handle multiple tasks, with a proven track record of meeting targets and exceeding expectations.
TECHNICAL SKILLS:
Informatica ETL Products: Informatica PowerCenter 10.2.x/10.1/9.x/8.x/7.x/6.x/5.x, Informatica Data Integration Hub (DIH), Informatica Data Quality (IDQ) 10.2/10.1, Informatica Advanced Data Transformation (ADT) 10.2/10.1, INFA MDM Product 360, Informatica MDM
IBM ETL Products: IBM Infosphere Information Server (DataStage) 11.5/9.1/8.5/8.1/7.5.1 (Enterprise Edition- PX)
DataBricks ETL & Other: Databricks Spark big data ETL, Scala 2.1.1
Amazon AWS Products: AWS S3, AWS Glue, Athena, Redshift spectrum, AWS Quicksight
Cloud Platforms: Amazon AWS, GCP and Azure
Hadoop Administration: Hadoop Administration, HortonWorks Administration
Apache Products: Apache Spark, PIG, HIVE, SQOOP, HBASE, Cassandra, Oozie, Zookeeper, Ambari, Flume, Impala, Kafka
ELK/Elastic Stack: ElasticSearch, Logstash, Kibana, Filebeat
Data Governance: Informatica EDC/EIC, Informatica AXON
Data Profiling: Informatica Data profiling, Informatica Data Analyst, IBM Information Analyzer (IA)
Data Archive/Purge: Informatica Information Lifecycle Management (ILM)
Other ETL tools: Oracle Warehouse Builder (OWB), Oracle ODIEE, Hummingbird Genio ETL, Microsoft SSIS, Hitachi Vantara Pentaho Data Integration, SAP Data Services (BODS), SAP SDI, Talend ETL
InfoSphere Tools: Information Server Manager, Information Services Director, Fast Track, Business Glossary, IBM Metadata Workbench, IBM Master Data Management (MDM)
Business Intelligence: IBM Cognos 10.x/8.x, COGNOS suite (Impromptu 7.x/6.0, PowerPlay Transformer 6.6/7.x, PowerPlay 6.x/7.x, Access Manager, Report Administration, Server Administration, Cognos Upfront), Cognos ReportNet 1.1 (Cognos Framework Manager, Report Studio, Query Studio, Cognos Connection), Business Objects 5.1.6/Xi, Crystal Reports, SAS Base, SQL Server Analysis Services (SSAS), Tableau, QlikView, TIBCO Spotfire, Hitachi Vantara Pentaho Business Analytics Report Designer, SSRS, Microsoft Power BI
RDBMS: Oracle 12c/11g/10g/9i/8.x/7.x, MS SQL Server 2012/2008/2005/2000, Sybase 11.9.2/12.5/15.2/15.5, DB2 v10/v9.5, Teradata, MySQL
Non-RDBMS: MongoDB, Cassandra, HBase
Data Model tool: ERWIN
Languages: PL/SQL, SQL*Loader, C, C++, COBOL
Scripting Languages: UNIX Shell Script, Python, Perl, MS-DOS Script
Incident Management: Remedy, HP SM7, HP SM 9
GUI: Visual Basic 6.0, Visual InterDev and VBA
Internet Technologies: ASP, COM, DCOM, MTS, VB script, Java, Java Script, XML
Tools: TOAD, SQL Navigator, DB-Artisan, XML Spy, Ultra Edit, DB Visualizer, Rapid SQL Developer, SOAPUI
Document Management: Microsoft SharePoint
Defect Tracking: HPQC 10, JIRA
Version Management Tools: GitHub, SVN, Visual Source Safe, PVCS, CVS, MKS, ClearCase
DevOps: Jenkins, IBM Urbancode
ERP: SAP-ABAP 4.6B
Schedule Management: JAMS, TIDAL V5/V6, AutoSys, CRONTAB
MS Office: MS Access, MS Excel, MS PowerPoint
WORK EXPERIENCE:
Confidential, Joliet, IL
Informatica MDM Product 360 PIM Consultant
Responsibilities:
- Attending meetings with stakeholders, business users, and the client technical team to gather requirements for Informatica PIM to receive data from SAP MDM, SAP ECC, and SAP EHSM
- Understanding the client's current SAP MDM system in order to migrate its data to Informatica MDM PIM
- As part of PIM demos to the client, answered client questions on the Informatica PIM rich client, the Informatica web client, and parts of Media Manager
- Transforming source Excel files to load data into Informatica PIM structure preset values
- Transformed source PDF data into CSV and loaded it into Informatica PIM structure features
- Preparing source files for structure groups and structure group features to be loaded into Informatica PIM
- Used the Informatica Developer tool to read the source files and transform them into the format PIM requires
- Used Excel Power Query to transform some of the data before sending it to Informatica PIM; a sketch of this kind of file preparation follows this list
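A minimal sketch of the kind of source-file preparation described above, written in Python with pandas; the file names, sheet name, source columns, and target layout are hypothetical and would in practice be driven by the PIM import mappings configured for the project.

```python
# Hypothetical sketch: reshape a source Excel extract into a flat CSV
# that a PIM import mapping could consume. All names are illustrative.
import pandas as pd

SOURCE_XLSX = "sap_structure_extract.xlsx"   # hypothetical source extract
TARGET_CSV = "pim_structure_features.csv"    # hypothetical PIM import file

# Read the raw extract as text (sheet and column names are assumptions).
src = pd.read_excel(SOURCE_XLSX, sheet_name="Features", dtype=str)

# Trim whitespace and drop rows without a feature identifier.
src = src.apply(lambda col: col.str.strip())
src = src.dropna(subset=["FEATURE_ID"])

# Map the source columns onto the flat layout expected by the import mapping.
out = pd.DataFrame({
    "Structure group identifier": src["STRUCTURE_GROUP"],
    "Feature identifier": src["FEATURE_ID"],
    "Feature name (English)": src["FEATURE_NAME_EN"],
    "Preset value": src["PRESET_VALUE"].fillna(""),
})

# Write a semicolon-delimited CSV for the import job to pick up.
out.to_csv(TARGET_CSV, sep=";", index=False, encoding="utf-8")
print(f"Wrote {len(out)} rows to {TARGET_CSV}")
```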
Environment: Informatica Product 360 (PIM) 8.1.1, Informatica Developer 10.2.1, SQL Server 2016, Windows 2016
Confidential
ETL Solution Architect/Specialist/Team lead
Responsibilities:
- Overall ETL design and architecture of the Bedrock project and Data Quality process flow.
- Interviewed senior Informatica ETL consultants to set up the project team and interviewed Informatica developers for the support team.
- Based on the Bedrock data delivery requirements, designed logical and physical database objects in SQL Server 2012.
- Designed and developed an XML and JSON file splitter using Informatica Advanced Data Transformation (ADT) to store XML and JSON source records as-is in the master tables.
- Developed XML and JSON parsing using Advanced Data Transformation; used the Unstructured Data Transformation (UDT) to invoke the ADT code from Informatica PowerCenter
- Designed and developed custom DIH publication workflows and mappings that read data from XML, JSON, and CSV files to publish data.
- Coordinating with the source data providers Murex and MDS, and with other departments.
- Developed custom DIH subscriptions to deliver XML, JSON and CSV data to consumers
- Developed DIH auto publications and subscriptions to deliver data to DIH API adapter
- Saved the client around $300K by designing and developing roughly 75% of the initial Informatica DIH publications and subscriptions solo.
- Created DIH export and import specifications to export objects and import them into different environments
- Created XML and JSON parsers, PowerCenter mappings and workflows, and DIH publications and subscriptions to deliver data for reconciliation between MarkIT data and MDS data.
- Developed Python scripts to parse JSON documents and to automate day-to-day support tasks.
- Designed and developed reconciliation code in Python to download AWS S3 files and strip the attached metadata before formatting them; see the first sketch after this list
- Created scripts to export and import Informatica PowerCenter objects for use by Jenkins builds and UrbanCode deployments
- Provided a demo to directors and stakeholders on the initial deployment of Informatica DIH.
- Coordinated with QA while QA testing was in progress
- Coordinated with various source teams to prepare data for UAT, and with business users while they performed UAT.
- Used Microsoft Power BI to generate DQ reports from the custom DQ repository.
- Created deployment JIRAs to obtain approvals to deploy code to production
- Arranged multiple meetings to walk the support team through the overall ETL design and the developed DIH publications and subscriptions, and to obtain approval to deploy to production
- Created Jenkins jobs to commit the code to GitHub, which was used as the version management tool
- Deployed code to production using the IBM UrbanCode DevOps tool.
- Supported Informatica DIH and Informatica PowerCenter during the initial deployment of the code to production
- Provided L4 support on a rotational basis
- Leading the team and resolving any technical issues they come across.
- Went through the ECO business requirements to consume Instrument Mapping data.
- Designed and developed DataStage jobs and sequences in IBM InfoSphere DataStage
- Unit tested and deployed to QA and UAT.
- Supporting QA and UAT
- Performed capacity planning and submitted a request to the infrastructure team to build the Informatica servers for DEV, QA, UAT, and PROD
- Submitted a request for NAS storage and coordinated with the infrastructure team to get the task done.
- Held conference calls with the Informatica sales head, professional services directors, and other product R&D specialists on product features.
- Suggesting new enhancements and features for the Informatica DIH and Informatica PowerCenter products.
- Downloaded the Informatica software and installed Informatica DIH, Informatica Developer, Informatica PowerCenter, and Informatica Information Lifecycle Management (ILM) in the DEV, QA, UAT, and PROD environments
- Configured Informatica DIH and Informatica PowerCenter in the Informatica Administrator and configured the ILM product in its own console. Configured the domain, Analyst Service, Content Management Service, Data Integration Service, PowerCenter Integration Service, Repository Service, Web Services Hub, Catalog Service, and Cluster Service.
- Created required DB connections in the Informatica administrator.
- Downloaded INFA AXON, INFA IDQ and INFA EDC/EIC products and installed the products.
- Configured INFA AXON, INFA IDQ and INFA EDC by creating various services in the Informatica Administrator.
- Installed Informatica DIH 10.2 and configured it to run workflows developed in Informatica Developer.
- Configured LDAP (Microsoft Active Directory) in the Informatica Administrator for Windows authentication.
- Set up IBM MQ to load messages to MQ and configured connection factories and connection destinations
- Configured JNDI and JMS in the Informatica Workflow Manager to send messages to IBM MQ via JMS
- Set up an SFTP server on Windows 2012 R2 to receive files from source systems into DIH
- Set up meetings with the DevOps team to implement Jenkins builds and IBM UrbanCode deployments to the target environments
- Developed Informatica repository backup scripts to back up the repositories periodically.
- Set up meetings with the Zabbix server monitoring team and provided them with commands to monitor the servers and auto-restart the services in case any of the Informatica services goes down.
- Created server build runbooks for the support team
- As the overall ETL administrator, monitored the DEV, QA, UAT, and PROD servers daily to avoid interruptions and supported developers whenever they ran into Informatica server issues.
- Opened tickets with Informatica support for product issues and coordinated with them to fix the issues or obtain EBF patches.
- Coordinating with DBAs on database issues
- Creating groups and users and granting privileges in the Informatica Administrator
- Held initial discussions on validating the tools
- Performed analysis and built a prototype on AWS using AWS S3, AWS Glue ETL, AWS Athena, AWS Redshift Spectrum, and AWS QuickSight
- Provided a demo to project managers on how AWS Glue works in the ETL space and how PySpark runs from AWS Glue
- Performed analysis on Databricks Spark using Python (PySpark) and HQL (Hive)
- Working on Databricks Spark to process data and deliver it to downstream consumers
- Used Scala to read data from the RIMES files in the Databricks environment and converted the files to Parquet format.
- Developed Spark jobs on Databricks to read files from AWS S3, generate broken links between the transactional and related data, and load the data back to S3; see the PySpark sketch after this list
- Discussed the requirements for processing application logs with the business teams
- Architected the overall project, sized the servers, and submitted a request to the infrastructure team to build the ELK servers
- Setup required folder structure on the Linux server
- Installed the Elasticsearch, Logstash, and Kibana stack on Linux servers
- Configured the ELK products to process and store the log data and display it in Kibana
- Installed filebeat on application server.
- Design and develop ELK stack solutions.
- Developed Logstash scripts
- Conducted demo to Senior portfolio managers and project managers to explain them about ELK
- Configured Filebeat to deliver the application logs to the ELK server
- Indexing the log data to deliver it to business users
- Designed and created Kibana visualizations and dashboards
- Day to day support of the ELK stack server and applications
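The Python-based S3 reconciliation preparation mentioned in the list above could look roughly like the sketch below; the bucket name, key prefix, and the assumption that each file carries metadata lines marked with a leading tag are illustrative rather than the project's actual conventions.

```python
# Hypothetical sketch of the S3 reconciliation prep step: download files
# from a bucket, strip attached metadata lines, and keep clean copies for
# comparison. Bucket, prefix, and the metadata marker are assumptions.
import os
import boto3

BUCKET = "recon-input-bucket"      # hypothetical bucket
PREFIX = "markit/daily/"           # hypothetical key prefix
LOCAL_DIR = "recon_work"
METADATA_MARKER = "#METADATA"      # assumed marker for metadata lines

s3 = boto3.client("s3")
os.makedirs(LOCAL_DIR, exist_ok=True)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):
            continue  # skip folder placeholder keys
        local_path = os.path.join(LOCAL_DIR, os.path.basename(key))
        s3.download_file(BUCKET, key, local_path)

        # Drop the assumed metadata lines so only data rows remain.
        with open(local_path, encoding="utf-8") as fh:
            lines = [ln for ln in fh if not ln.startswith(METADATA_MARKER)]
        with open(local_path, "w", encoding="utf-8") as fh:
            fh.writelines(lines)
        print(f"Cleaned {key} -> {local_path} ({len(lines)} data lines)")
```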
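Similarly, a rough PySpark sketch of the Databricks pattern referenced above (read delimited files from S3, flag broken links, write Parquet back to S3); the S3 paths, join key, and header handling are placeholders, not the project's actual layout.

```python
# Hypothetical Databricks/PySpark sketch: read delimited files from S3,
# keep transactional rows whose key has no match in the reference data
# ("broken links"), and write the result back to S3 as Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-broken-links").getOrCreate()

TXN_PATH = "s3a://example-bucket/landing/transactions/"   # hypothetical
REF_PATH = "s3a://example-bucket/landing/reference/"      # hypothetical
OUT_PATH = "s3a://example-bucket/curated/broken_links/"   # hypothetical

txn = spark.read.option("header", "true").csv(TXN_PATH)
ref = spark.read.option("header", "true").csv(REF_PATH)

# A left anti join keeps transactions with no matching reference row.
broken = (
    txn.join(ref, on="instrument_id", how="left_anti")  # assumed join key
       .withColumn("load_date", F.current_date())
)

broken.write.mode("overwrite").parquet(OUT_PATH)
print(f"Broken link rows written: {broken.count()}")
```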
Environment: Elasticsearch 6.3, Logstash 6.3, Kibana 6.3, Filebeat 6.2, RedHat Linux 7.3
Confidential
Sr. ETL Consultant/Lead Consultant
Responsibilities:
- CDR RECON
- GL ATTESTATION
- SRDR Rating Service
- Migrating GSS Datastage ETL jobs from 8.5 to 11.5
- Profiled BDR file data using Information Analyzer
- Tableau for CDR and BDR reporting.
- Hadoop Data Lake project which is developed to load data into HDFS.
Environment: IBM InfoSphere Information Server 11.5/8.5, Information Analyzer 8.5, Tableau 9, Red Hat Linux, TIDAL scheduler, SQL Server 2008, Sybase 15.2.
Confidential
Sr. ETL Consultant
Responsibilities:
- Designed and developed CASL enhancements and FATCA reconciliation projects.
- Point of contact for any technical DataStage issues
Environment: IBM Infosphere information Server 8.5, DB2 10, IBM Infosphere MDM Server 10, GNU Linux, ClearCase.
Confidential
Sr. ETL Consultant
Responsibilities:
- Prototyping for disaster server setup using an ACTIVE-PASSIVE topology
- Prototyping to call the SOLACE messaging system from DataStage
Environment: IBM InfoSphere Information Server DataStage 8.5/8.1/7.5, XML, Web services, Information Analyzer 8.5, Fast Track, Business Glossary, Red Hat Linux, Sun Solaris, AIX, Perl, TIDAL scheduler, SQL Server 2008, Sybase 15.2/11.2.
Confidential
DataStage Technical Specialist & SME
Responsibilities:
- Understanding the client's current infrastructure.
- Went through the client's requirements for low-latency alert messaging.
- Designed the solution to load the low-latency alert messages into the database.
- Designed the XML schema model definition for the low-latency messages.
- Developed DataStage jobs to demonstrate the design to the client and show how to process the real-time messages into the database.
- Used MQ Connector, XML Input, Difference, Join, Lookup, Data Set, and Oracle Enterprise stages to demonstrate the low-latency alert messaging.
- Created sequences to continuously run the jobs.
- Designed the invalid-message log file and the strategy for retrieving invalid XML messages.
- Reviewed the jobs developed by the IBM India team and provided recommendations for improvement.
- Reviewed the client's infrastructure and DataStage setup and suggested changes to the environment.
- Provided the low-latency design document to the client and recommended that the client follow the designed approach for processing low-latency messages as well as for error handling.
Environment: IBM Websphere DataStage EE 8.1, Oracle 10G/11G, Websphere MQ, Linux
Confidential
Sr. ETL & BI Consultant - DW & BI
Responsibilities:
- Saved the project $750,000 to $1 million by developing the IPODS project solo within a short span of time (actual development in 3-4 months)
- Developed 80% of the DataStage jobs for the Policyholder Document Print project, saving the company up to $1.5 million
Environment: IBM Websphere DataStage EE 8.1/7.5.1,IBM MQ Series, RTM, PMS, GuideWire Claims system, Oracle 10G, AIX 5.3, CA7, JCL, MKS
Confidential
Sr. Data Warehouse ETL Consultant
Responsibilities:
- Analyzed source systems and Business user requirements
- Designed Mapping between source and target RSPM data warehouse using OWB
- Developed ETL mappings using OWB (Oracle warehouse Builder) to populate Reporting Layer tables from ODS
- Designed and developed code to Identify fraud using Oracle 9i PL/SQL
- Developed Unix shell script for job scheduling
- Designed and Developed Dialer conversion from CDSE to ODS database using PL/SQL
- Developed SQL*Loader scripts to populate data into the database from inbound files; a sketch of this pattern follows this list
- Modified the existing Perl script to incorporate new changes
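A minimal sketch of the SQL*Loader pattern mentioned above, driven from Python; the table, columns, file names, and connection string are placeholders, and the real control files matched the actual inbound file layouts.

```python
# Hypothetical sketch: write a minimal SQL*Loader control file and invoke
# sqlldr via subprocess. All object names and credentials are placeholders.
import subprocess
from pathlib import Path

CTL_FILE = Path("load_payments.ctl")     # hypothetical control file
DATA_FILE = "inbound_payments.csv"       # hypothetical inbound file
CONNECT = "rspm_user/password@RSPMDB"    # placeholder credentials

CTL_FILE.write_text(f"""\
LOAD DATA
INFILE '{DATA_FILE}'
APPEND
INTO TABLE stg_payments
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(payment_id,
 account_no,
 amount,
 posted_date DATE 'YYYY-MM-DD')
""")

# Run SQL*Loader and surface a failure if it returns a non-zero exit code.
result = subprocess.run(
    ["sqlldr", f"userid={CONNECT}", f"control={CTL_FILE}", "log=load_payments.log"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    raise RuntimeError(f"sqlldr failed with exit code {result.returncode}")
```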
Environment: OWB (Oracle Warehouse Builder) 9i, ERWIN 4.0, Sun OS 5.8, SQL * LOADER, ORACLE 9i, PERL Script