
Sr. Informatica Engineer / Data Analyst Resume

Minneapolis, MN


  • 7+ years of IT experience in all phases of the Software Development Life Cycle (SDLC), including user interaction, business analysis/modeling, design, development, integration, planning, testing, and documentation in data warehouse applications, ETL processing, and distributed applications.
  • Excellent domain knowledge of Banking, Finance, Insurance.
  • Expertise in using the ETL tool Informatica PowerCenter 8.x/9.x/10.2 (Mapping Designer, Workflow Manager, Repository Manager, Informatica Data Quality (IDQ)) and ETL concepts.
  • Extensive knowledge in RDBMS, Business Intelligence and Data Warehousing Concepts with emphasis on ETL and System Development Life Cycle (SDLC).
  • Hands-on experience working with the Waterfall model as well as the Agile model, including implementation of various sprints.
  • Interacted with end users and functional analysts to identify and develop Business Requirement Documents (BRDs) and transform them into technical requirements.
  • Strong experience with the Informatica mapping tools, i.e., Source Analyzer, Warehouse Designer, Mapping Designer, Mapplet Designer, Transformation Developer, and Informatica Repository.
  • Designed and developed complex mappings to move data from multiple sources into a common target area such as Data Marts and Data Warehouse using lookups, Source Qualifier, Router, Filter, Expression, Aggregator, Joiner, Normalizer, Sequence Generator, Update Strategy, and Stored Procedure from varied transformation logics in Informatica.
  • Worked with various Teradata utilities such as FastLoad, MultiLoad, TPump, and Teradata Parallel Transporter (TPT); highly experienced in Teradata SQL programming.
  • Expertise in tuning the performance of Mappings and sessions in Informatica and determining the Performance bottlenecks.
  • Strong hands-on experience in extracting data from various source systems, ranging from mainframe sources such as DB2, flat files, and VSAM files to RDBMSs such as Oracle, SQL Server, and Teradata.
  • Extensively used Slowly Changing Dimension (SCD Type 1, 2, and 3) techniques in ETL transformations.
  • Expertise in OLTP/OLAP System Study, Analysis, E-R diagram, developing Dimensional Models using Star schema and Snowflake schema techniques used in relational, dimensional and multidimensional modeling.
  • Optimized mappings by creating reusable transformations and Mapplets. Performed debugging and performance tuning of sources, targets, mappings, transformations, and sessions.
  • Experienced in writing SQL, PL/SQL programming, Stored Procedures, Package, Functions, Triggers, Views, Materialized Views.
  • Experience in task automation using UNIX scripts, job scheduling, and communicating with the server using the pmcmd command. Extensively used Autosys for job monitoring and scheduling. Automated the ETL process using UNIX shell scripting.
  • Proficient in converting logical data models to physical database designs in Data Warehouse Environment and in-depth understanding in Database Hierarchy, Data Integrity concepts and Data Analysis.
  • Experience in defining standards, methodologies and performing technical design reviews.
  • Excellent communication skills, interpersonal skills, self-motivated, quick learner and outstanding team player.
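As an illustrative sketch of the SCD Type 2 technique listed above, the following generates the SQL such a flow effectively issues (expire the old row, insert the new version). All table and column names are hypothetical, not actual project objects:

```shell
#!/bin/sh
# Hypothetical sketch of an SCD Type 2 load, expressed as the SQL an
# Update Strategy mapping would effectively issue. "dim_customer" and
# "stg_customer" are invented names for illustration only.

cat > scd2_customer.sql <<'EOF'
-- Expire the current version when a tracked attribute changes
UPDATE dim_customer d
SET    eff_end_dt = CURRENT_DATE - 1,
       current_flag = 'N'
WHERE  d.current_flag = 'Y'
AND    EXISTS (SELECT 1 FROM stg_customer s
               WHERE  s.cust_id = d.cust_id
               AND    s.cust_addr <> d.cust_addr);

-- Insert the new version as the current row
-- (filtering of unchanged rows omitted for brevity)
INSERT INTO dim_customer (cust_id, cust_addr, eff_start_dt, eff_end_dt, current_flag)
SELECT s.cust_id, s.cust_addr, CURRENT_DATE, DATE '9999-12-31', 'Y'
FROM   stg_customer s;
EOF

echo "Wrote scd2_customer.sql"
```

The same expire/insert pair is what Type 2 mappings implement with Lookup, Expression, and Update Strategy transformations.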


ETL Tools: Informatica PowerCenter 10.2/9.x/8.x, PowerExchange 9.x, Informatica Data Quality 9.x, PowerConnect for IBM MQ Series, PowerConnect for Mainframes

BI Tools: Cognos 8/9, Tableau 10

Data Modeling: ERwin 9.5.2/7.3/4.1, MS Visio

Databases: Teradata 15/14, Oracle 12c/11g/10g/9i, SQL Server 2008, DB2, MySQL, PostgreSQL

Languages: Java, C, C++, SQL, PL/SQL, UNIX Shell Scripting, XML, HTML

Big Data: Hadoop Ecosystem (HDFS, Hive, Pig)

Operating Systems: MS-DOS, HP-UX, Windows, Sun OS

Methodologies: Ralph Kimball's Star Schema and Snowflake Schema

Other Tools: MS Word, MS Access, T-SQL, TOAD, SQL Developer, Microsoft Office, Teradata Viewpoint, Teradata SQL Assistant, IceScrum, Rally, JIRA, Control-M, Autosys, GitHub


Confidential, Minneapolis, MN

Sr. Informatica Engineer / Data Analyst


  • Actively involved in interacting with business users to record user requirements and perform business analysis.
  • Translated requirements into business rules and made recommendations for innovative IT solutions.
  • Outlined the complete process flow and documented the data conversion, integration and load mechanisms to verify specifications for this data migration project.
  • Parsed high-level design specifications into simple ETL coding and mapping standards.
  • Worked with PowerCenter Designer tools in developing mappings and Mapplets to extract and load the data from flat files and Oracle database.
  • Maintained warehouse metadata, naming standards and warehouse standards for future application development.
  • Created the design and technical specifications for the ETL process of the project.
  • Used Informatica as an ETL tool to create source/target definitions, mappings and sessions to extract, transform and load data into staging tables from various sources.
  • Responsible for mapping and transforming existing feeds into the new data structures and standards, utilizing Router, connected and unconnected Lookup, Expression, Aggregator, Update Strategy, and Stored Procedure transformations.
  • Worked on Informatica PowerCenter tool - Source Analyzer, Data Warehousing Designer, Mapping Designer & Mapplets, and Transformations.
  • Prepared Unit/ Systems Test Plan and the test cases for the developed mappings.
  • Worked with slowly changing dimension Type1, Type2 and Type3.
  • Maintained Development, Test and Production Mappings, migration using Repository Manager. Involved in enhancements and Maintenance activities of the data warehouse.
  • Performed performance tuning of the process at the mapping, session, source, and target levels.
  • Utilized Informatica IDQ to complete the initial data profiling and matching/removing duplicate data for the process of data migration from the legacy systems to the target Oracle Database.
  • Designed and developed Informatica DQ Jobs, Mapplets using different transformation like Address validator, matching, consolidation, rules etc. for data loads and data cleansing.
  • Extensively used Informatica Data Quality tool (IDQ Developer) to create rule-based data validations for profiling.
  • Created dictionary tables using IDQ analyst tool for data validations.
  • Heavily used BTEQ scripts for loading data into target tables.
  • Worked on loading of data from several flat files to Staging using Teradata MLOAD, FLOAD and BTEQ.
  • Implemented various new components like increasing the DTM Buffer Size, Database Estimation, Incremental Loading, Incremental aggregation, Validation Techniques, and load efficiency.
  • Built exception-handling mappings for data quality, data cleansing, and data validation.
  • Created Workflows containing command, email, session, decision and a wide variety of tasks.
  • Developed parameter files for passing values to the mappings for each type of client.
  • Scheduled batch and sessions within Informatica using Informatica scheduler and wrote shell scripts for job scheduling.
  • Customized shell scripts to run mappings in Control-M.
  • Understood the entire functionality and major algorithms of the project and adhered to the company testing process.
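The pmcmd-driven scheduling described above can be sketched as a small wrapper script; the domain, service, folder, and workflow names below are placeholders, not the actual project values:

```shell
#!/bin/sh
# Hypothetical sketch: wrapper that starts an Informatica workflow via pmcmd
# and waits for completion. All names are assumed for illustration.

INFA_DOMAIN="Domain_Dev"          # assumed domain name
INFA_SERVICE="IS_Dev"             # assumed Integration Service name
INFA_USER="etl_user"              # assumed repository user

# Build the pmcmd startworkflow command for a given folder/workflow.
build_pmcmd_cmd() {
    folder="$1"
    workflow="$2"
    echo "pmcmd startworkflow -sv $INFA_SERVICE -d $INFA_DOMAIN" \
         "-u $INFA_USER -p \$INFA_PASS -f $folder -wait $workflow"
}

cmd=$(build_pmcmd_cmd "FIN_DM" "wf_load_customer_dim")
echo "$cmd"
# Uncomment to actually run (requires an Informatica client install):
# eval "$cmd"
```

A scheduler such as Control-M then calls this wrapper and keys success or failure off the script's exit code.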

Environment: Informatica PowerCenter 10.2, IDQ 10.2, Oracle 11g, DB2, Teradata 15, MS SQL Server 2012, Erwin 9.2, Shell Scripting, ClearCase, Putty, WinSCP, Notepad++, JIRA, Control-M, Cognos 10.


Sr. Informatica Engineer / Data Analyst


  • Analyze business requirements, technical specification, source repositories and physical data models for ETL mapping and process flow.
  • Worked extensively with mappings using expressions, aggregators, filters, lookup, joiners, update strategy and stored procedure transformations.
  • Extensively used Pre-SQL and Post-SQL scripts for loading the data into the targets according to the requirement.
  • Developed mapping to load Fact and Dimension tables for Type 1, Type 2 dimensions, Incremental loading and Unit tested the mappings.
  • Extracted data from a Web Service source, transform data using a web service, and load data into a web service target.
  • Coordinate and develop all documents related to ETL design and development.
  • Involved in designing the Data Mart models with Erwin using Star schema methodology.
  • Used Repository Manager to create repositories, users, and groups, and managed users by setting up privileges and profiles.
  • Used debugger to debug the mapping and correct them.
  • Performed Database tasks such as creating database objects (tables, views, procedures, functions).
  • Responsible for debugging and performance tuning of targets, sources, mappings and sessions.
  • Optimized the mappings and implemented complex business rules by creating reusable transformations and Mapplets.
  • Involved in writing BTEQ, MLOAD and TPUMP scripts to load the data into Teradata tables.
  • Optimized the source queries in order to control temp space, and added delay intervals, depending on the business requirements, for performance.
  • Used Informatica Workflow Manager for creating and running batches and sessions and scheduling them to run at specified times.
  • Executed sessions, sequential and concurrent batches for proper execution of mappings and set up email delivery after execution.
  • Preparing Functional Specifications, System Architecture/Design, Implementation Strategy, Test Plan & Test Cases.
  • Implemented and documented all the best practices used for the data warehouse.
  • Improving the performance of the ETL by indexing and caching.
  • Created Workflows, tasks, database connections, FTP connections using workflow manager.
  • Responsible for identifying bugs in existing mappings by analyzing data flow, evaluating transformations and fixing bugs.
  • Conducted code walkthroughs with team members.
  • Developed stored procedures using PL/SQL and driving scripts using Unix Shell Scripts.
  • Created UNIX shell scripting for automation of ETL processes.
  • Used UNIX for check-ins and check-outs of workflows and config files into ClearCase.
  • Automated ETL workflows using Control-M Scheduler.
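The BTEQ-based Teradata loads mentioned above can be sketched as a script-generating wrapper; the logon string and table names are invented for illustration, not the actual project objects:

```shell
#!/bin/sh
# Hypothetical sketch: generate a BTEQ script that loads a staging table
# from a landed work table. Logon details and table names are placeholders.

BTEQ_SCRIPT="load_stg_account.bteq"

cat > "$BTEQ_SCRIPT" <<'EOF'
.LOGON tdprod/etl_user,password;
.SET ERRORLEVEL 2 SEVERITY 8;

INSERT INTO stg.account_stg (acct_id, acct_name, load_dt)
SELECT acct_id, acct_name, CURRENT_DATE
FROM   work.account_landing;

.IF ERRORCODE <> 0 THEN .QUIT 8;
.LOGOFF;
.QUIT 0;
EOF

echo "Generated $BTEQ_SCRIPT"
# Uncomment to execute (requires Teradata Tools and Utilities):
# bteq < "$BTEQ_SCRIPT"
```

MLOAD and FLOAD jobs follow the same pattern, with the utility's own control script generated in place of the BTEQ commands.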

Environment: Informatica PowerCenter 9.1.1, IDQ 9.1.1, Oracle 11g, Teradata 14.0, Teradata SQL Assistant, MS SQL Server 2012, MySQL, Erwin 9.2, Putty, Shell Scripting, Bitbucket, WinSCP, Notepad++, Rally, Control-M, Tableau 9.2.

Confidential, New York City, NY

Informatica Developer


  • Involved in gathering business requirements, logical modeling, physical database design, data sourcing and data transformation, data loading, SQL and performance tuning.
  • Participated in data analysis, data profiling, data dictionary and metadata management. Used SQL to do the Data Profiling.
  • Collaborated with business users to collect requirements and prepared ETL technical specifications.
  • Developed, supported, and maintained the ETL processes for exporting data from other applications into the reporting data mart using Informatica PowerCenter 8.5.1.
  • Designed, built and maintained mappings, sessions and workflows for the target data load process using Informatica, PL/SQL and UNIX.
  • Implemented customer history data capture for catalogue tables using SCD Type 2.
  • Designed mappings for Slowly Changing Dimensions, used Lookup (connected and unconnected), Update strategy and filter transformations for loading historical data.
  • Developed stored procedures and used them in Stored Procedure transformations for data processing; also used data migration tools.
  • Worked with SQL*Loader to load data from flat files obtained from various facilities.
  • Experience in debugging execution errors using Data Services logs (trace, statistics and error) and by examining the target data.
  • Worked extensively with Informatica tools such as Source Analyzer, Warehouse Builder and Workflow Manager.
  • Extensively used transformations like router, aggregator, lookup, source qualifier, joiner, expression and sequence generator transformations in extracting data in compliance with the business logic developed.
  • Wrote SQL overrides in source qualifier to filter data according to business requirements.
  • Wrote Unix shell scripts for scheduling Informatica pre/post session operations.
  • Created different parameter files and started sessions with them using the pmcmd command to change session parameters, mapping parameters, and variables at runtime.
  • Tuned the mappings by removing the Source/Target bottlenecks and Expressions to improve the throughput of the data loads.
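The runtime parameter-file technique described above can be sketched as follows; the folder, workflow, and parameter names are hypothetical, chosen only to show the file format pmcmd's -paramfile option expects:

```shell
#!/bin/sh
# Hypothetical sketch: build a per-client Informatica parameter file and
# pass it to pmcmd at run time. All names are invented for illustration.

CLIENT="$1"
PARAM_FILE="wf_load_${CLIENT:-demo}.par"

# Section header is [Folder.WF:workflow]; $$ marks a mapping parameter.
cat > "$PARAM_FILE" <<EOF
[FIN_DM.WF:wf_client_load]
\$\$CLIENT_CODE=${CLIENT:-demo}
\$\$LOAD_DATE=$(date +%Y-%m-%d)
EOF

echo "Wrote $PARAM_FILE"
# pmcmd startworkflow -sv IS_Dev -d Domain_Dev -u etl_user -p "$INFA_PASS" \
#   -f FIN_DM -paramfile "$PARAM_FILE" -wait wf_client_load
```

One workflow can then serve every client, with the scheduler selecting the parameter file per run.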

Environment: Informatica PowerCenter 8.5.1, IDQ 8.5.1, Oracle 10g, TOAD, SQL Developer, UNIX, BO XI, PL/SQL, Autosys, Putty, WinSCP.
