Sr. Informatica Developer Resume

Irving, TX

PROFESSIONAL SUMMARY:

  • 8+ years of experience in Information Technology with a strong background in database development, data warehousing (OLAP), and ETL processes using Informatica Power Center 9.x/8.x/7.x/6.x, Power Exchange, Designer (Source Analyzer, Warehouse Designer, Mapping Designer, Metadata Manager, Mapplet Designer, Transformation Developer), Repository Manager, Repository Server, Workflow Manager, and Workflow Monitor.
  • Experience in integrating various data sources with multiple relational databases such as DB2, Oracle, and SQL Server, and in integrating data from XML files and flat files (fixed-width and delimited).
  • Experience in writing stored procedures and functions.
  • Experience in SQL tuning using hints and materialized views.
  • Led the team in driving the development of IFRS business rules in EDQ.
  • Worked on the Power Exchange bulk data movement process using the Power Exchange Change Data Capture (CDC) method.
  • Tuned complex SQL queries by reviewing their Explain plans.
  • Involved in the data analysis for source and target systems.
  • Knowledge of Master Data Management (MDM) concepts and methodologies, with the ability to apply them in building MDM solutions.
  • Experience in data mining, finding correlations or patterns among fields in databases.
  • Implemented performance tuning techniques at application, database and system levels
  • Worked in Agile methodology and Kanban method for faster development.
  • Experience in UNIX shell programming.
  • Expertise in the Teradata RDBMS using FastLoad, MultiLoad, TPump, FastExport, Teradata SQL Assistant, and BTEQ utilities.
  • Worked with Informatica Cloud to create source and target objects and developed source-to-target mappings.
  • Expertise in dimensional data modeling using Star and Snowflake schemas. Designed data models using Erwin.
  • Understanding of relational (ROLAP) and multidimensional (MOLAP) modeling, data warehousing concepts, star and snowflake schemas, database design methodologies, and metadata management.
  • Experience preparing functional specifications, low-level and high-level design specifications, and source-to-target mapping documents for ETL programs/business processes.
  • Experience working with Oracle stored programs, packages, cursors, triggers, database links, snapshots, roles, privileges, tables, constraints, views, indexes, sequences, synonyms, dynamic SQL, and SQL*Loader in a distributed environment.
  • Experience in creating and managing user accounts, security, rights, disk space, and process monitoring in Solaris and Red Hat Linux.
  • Extensive knowledge in handling slowly changing dimensions (SCD) type 1/2/3.
  • Experience in using Informatica to populate the data into Teradata DWH.
  • Experience in understanding Fact Tables, Dimension Tables, Summary Tables
  • Experience with AWS cloud services such as EC2, S3, RDS, ELB, EBS, VPC, Route 53, Auto Scaling groups, CloudWatch, CloudFront, and IAM for installing, configuring, and troubleshooting various Amazon images during server migration from physical hardware into the cloud.
  • Designed architectural diagrams for different applications before migrating them into the Amazon cloud for flexible, cost-effective, reliable, scalable, high-performance, and secure deployments.
  • Good knowledge in Azure Data Factory, SQL Server 2014 Management Studio
  • Migrated existing development from on-premises servers into Microsoft Azure, a cloud-based service.
  • Built servers using AWS: importing volumes, launching EC2 instances, and creating security groups, Auto Scaling groups, load balancers, Route 53, SES, and SNS in the defined virtual private cloud.
  • Created alarms in the CloudWatch service for monitoring server performance, CPU utilization, disk usage, etc.
  • Wrote ad hoc data normalization jobs for new data ingested into Redshift.
  • Familiar with advanced Amazon Redshift and MPP database concepts.
  • Moved the company from a SQL Server database structure to an AWS Redshift data warehouse and was responsible for ETL and data validation using SQL Server Integration Services.
  • Optimized and tuned the Redshift environment, enabling queries to perform up to 100x faster for Tableau and SAS Visual Analytics (a minimal tuning sketch follows this summary).
  • Designed and built multi-terabyte, full end-to-end data warehouse infrastructure from the ground up on Amazon Redshift for large-scale data, handling millions of records every day.
  • Used Amazon Kinesis as a platform for streaming data on AWS.
  • Built custom streaming data applications for specialized needs using Kinesis as a platform.
  • Configured and managed Elastic Load Balancing (ELB) to provide fault tolerance and avoid a single point of failure for applications, providing high availability and network load balancing.
  • Worked with networking teams in configuring AWS Direct Connect to establish dedicated connections between data centers and the AWS Cloud.
  • Designed and configured the AWS Simple Notification Service (SNS) and Simple Email Service (SES) architecture of the solution, working with the client.
  • Managed users and groups using the Amazon Identity and Access Management (IAM).
  • Exposure to development, testing, debugging, implementation, user training & production support.
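
As an illustration of the Redshift tuning mentioned above, the following is a minimal SQL sketch; the table and column names are hypothetical, and the actual distribution and sort keys depend on each workload's join and filter patterns.

    -- Choose a distribution key and sort key so large joins stay co-located
    -- on one node and date-range filters can skip blocks.
    CREATE TABLE sales_fact (
        sale_id      BIGINT,
        customer_id  BIGINT,
        sale_date    DATE,
        amount       DECIMAL(12,2)
    )
    DISTSTYLE KEY
    DISTKEY (customer_id)   -- co-locates rows joined on customer_id
    SORTKEY (sale_date);    -- prunes blocks on date-range predicates

    -- Keep optimizer statistics and block layout current after large loads.
    ANALYZE sales_fact;
    VACUUM sales_fact;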

TECHNICAL SKILLS:

ETL: Informatica 9.6.1/9.x/8.x/7.x, Data Stage 7.5/6.0, SSIS

Cloud Services: Amazon Web Services (Certified)

Databases: Oracle 12c/11g/10g/9i/8i/7.x, DB2, Teradata V13/V12/V2R5, SQL Server

Reporting Tools: Business Objects XI 3.1/r2, Web Intelligence, Crystal Reports 8.X/10.X

Database Modeling: Erwin 4.0, Rational Rose

Languages: SQL, COBOL, PL/SQL, JAVA, C

DB Tools: SQL*Plus, SQL*Loader, SQL*Forms, TOAD

Web Tools: HTML, XML, JavaScript, Servlets, EJB

OS: Windows NT/2000/2003/7, UNIX, Linux, AIX

PROFESSIONAL EXPERIENCE:

Confidential - Irving TX

Sr. Informatica Developer

Responsibilities:

  • Provided strategic direction for the planning, development and implementation of various projects in support of enterprise data governance office and data quality team.
  • Designed & Developed Scripts | Jobs in EDQ to carry out the profiling of data for different data fields on source schema tables for different business scenarios and report the statistics to the Business Team.
  • Designed & Developed Scripts | Jobs in EDQ to carry out profiling of data on different Data Fields from source schema tables along with the concerned reference tables meant for migration purpose from source to target systems.
  • Ownership of the development of the ‘Integrated Control Framework’ (ICF) for data quality
  • Extensively used Informatica Data Quality (IDQ) to profile the project source data, define or confirm the definition of the metadata, cleanse and accurately check the data.
  • Ran checks for duplicate or redundant records and provided information on how to proceed with the backend ETL process (a minimal duplicate-check sketch follows this list).
  • Loaded data into Teradata tables using the Teradata utilities BTEQ, FastLoad, MultiLoad, FastExport, and TPT.
  • Partnered with data stewards to provide summary results of data quality analysis, which will be used to make decisions regarding how to measure business rules and quality of the data.
  • Implemented data quality processes including translation, parsing, analysis, standardization, and enrichment in point-of-entry and batch modes. Deployed mappings to run in scheduled, batch, or real-time environments.
  • Collaborated with various business and technical teams to gather requirements around data quality rules and propose the optimization of these rules if applicable, then design and develop these rules with IDQ
  • Worked on the designing and development of custom objects and rules, reference data tables and create/import/export mappings
  • As per business requirements, performed thorough data profiling with multiple usage patterns, root cause analysis, and data cleansing, and developed scorecards utilizing Informatica Data Quality (IDQ).
  • Developed matching plans, helped determine the best matching algorithm, configured identity matching, and analyzed duplicates.
  • Worked on building human task workflows to implement data stewardship and exception processing
  • Developed KPIs & KRIs for the data quality function
  • Drove improvements to maximize the value of data quality (e.g., drove changes to gain access to required metadata to maximize the impact of data quality and quantify the cost of poor data quality).
  • Performed data quality activities i.e. data quality rule creation, edit checks, identification of issues, root cause analysis, value case analysis and remediation
  • Worked on strengthening the data steward role within Bank’s 1st and 2nd lines of defense
  • Augmented and executed the data quality plan to ensure achievable and measurable milestones were mapped out for delivery and effectively communicated.
  • Prepared detailed documents on all mappings, mapplets, and rules and handed over the documentation to the customer.
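
As an illustration of the duplicate and redundancy checks described above, a minimal SQL sketch is shown below; the table and key columns are hypothetical placeholders, since the actual rules were built as IDQ/EDQ profiles.

    -- Flag natural keys that occur more than once in the source schema.
    SELECT customer_id,
           source_system,
           COUNT(*) AS occurrence_count
    FROM   src.customer_master
    GROUP  BY customer_id, source_system
    HAVING COUNT(*) > 1
    ORDER  BY occurrence_count DESC;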

Environment: Informatica Developer 10.1.1 HotFix 1, Informatica EDC 10.2, Informatica Axon 6.1/6.2, Teradata 14.0, Oracle 11g, PL/SQL, Flat Files (XML/XSD, CSV, Excel), UNIX/Linux, Shell Scripting, Rational Rose/Jazz, SharePoint

Confidential - New Jersey City

Sr. Informatica Developer

Responsibilities:

  • Involved in gathering and analyzing the requirements and preparing business rules.
  • Gathered information from source system, Business Documents and prepared the Data conversion and migration technical design document.
  • Extensively worked on performance tuning of Teradata SQL, ETL, and other processes to optimize performance.
  • Designed and developed complex mappings using Lookup, Expression, Update Strategy, Sequence Generator, Aggregator, Router, Stored Procedure, and other transformations to implement complex logic within a mapping.
  • Worked in requirement analysis, ETL design and development for extracting data from the mainframes
  • Worked with the Informatica Power Center 10.0.2 Designer, Workflow Manager, Workflow Monitor, and Repository Manager.
  • Developed and maintained ETL (extract, transform, and load) mappings to extract data from multiple source systems such as Oracle, SQL Server, and flat files and load it into Oracle.
  • Developed Informatica workflows and sessions associated with the mappings using workflow manager.
  • Used Power Exchange to read source data from mainframe systems, Power Center for ETL, and DB2 as the target; created files for Business Objects.
  • Involved in creating new table structures and modifying existing tables to fit the existing data model.
  • Responsible for normalizing COBOL files using the Normalizer transformation.
  • Responsible for testing, modifying, debugging, documenting, and implementing Informatica mappings; performed unit and integration testing and wrote test cases.
  • Debugged issues through session logs and fixed them, utilizing the database for efficient transformation of data.
  • Worked with pre-session and post-session UNIX scripts for automation of ETL jobs. Also involved in migration/conversion of ETL processes from development to production.
  • Extracted data from different databases such as Oracle and from external source systems such as flat files using the ETL tool.
  • Extracted data from sources such as SQL Server and fixed-width and delimited flat files, transformed the data according to the business requirements, and then loaded it into the target.
  • Involved in debugging Informatica mappings and testing stored procedures and functions.
  • Performed unit testing on deliverables and documented the results.
  • Developed mapplets, reusable transformations, source and target definitions, and mappings using Informatica 10.0.2.
  • Generated SQL queries to check the consistency of the data in the tables and to update the tables as per the business requirements (a minimal example follows this list).
  • Involved in performance tuning of mappings in Informatica.
  • Migrated the ETL workflows, mappings, and sessions to the QA and production environments.
  • Good understanding of source to target data mapping and business rules associated with the ETL process.
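
As an illustration of the consistency checks referenced above, the sketch below is a minimal example; the staging and target table names, the summed measure, and the load date are hypothetical placeholders.

    -- Compare row counts and a summed measure between the staging source
    -- and the warehouse target for one load date.
    SELECT 'STG' AS layer, COUNT(*) AS row_cnt, SUM(order_amt) AS total_amt
    FROM   stg.orders
    WHERE  load_date = DATE '2018-06-30'
    UNION ALL
    SELECT 'DWH', COUNT(*), SUM(order_amt)
    FROM   dwh.fact_orders
    WHERE  load_date = DATE '2018-06-30';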

Environment: Informatica 10.0.2, UNIX, flat files (delimited), COBOL files, Teradata 12.0/13.0/14.0

Confidential - MI

Sr. Informatica Developer

Responsibilities:

  • Collaborated with Business Analysts on requirements gathering, business analysis, and design of the Enterprise Data Warehouse. Worked on requirement analysis, ETL design, and development for extracting data from heterogeneous source systems such as DB2, MS SQL Server, Oracle, flat files, and Mainframes and loading it into the Staging and Star Schema.
  • Created/Modified Business requirement documentation.
  • Created ETL Specifications using GAP Analysis.
  • Did Production Support on Monthly rotation basis.
  • Applied Master Data Management (MDM) concepts and methodologies in building MDM solutions.
  • Used Informatica Data Quality (IDQ) to profile the data and apply rules to Membership and Provider subject areas to get Master Data Management (MDM)
  • Involved in migration of the maps from IDQ to power center
  • Eliminated and prevented duplicate and inconsistent data using Informatica MDM.
  • Worked on CISM tickets for Production issues for Trucks and Cars Data Marts.
  • Extracted Mainframe data for Local Landscapes like Credit Check, Application etc.
  • Created Unit Test cases, supported SIT and UAT.
  • Involved in massive data profiling using IDQ (Analyst Tool) prior to data staging.
  • Used IDQ’s standardized plans for addresses and names clean ups.
  • Applied the rules and profiled the source and target table's data using IDQ.
  • Worked with Informatica Power exchange and Informatica Cloud to integrate and load data to Oracle db.
  • Worked with Informatica Cloud for creating source and target object, developed source to target mappings.
  • Expertise in dimensional data modeling using Star and Snowflake schemas. Designed data models using Erwin.
  • Involved in Dimensional modeling (Star Schema) of the Data warehouse and used Erwin to design the business process, dimensions and measured facts
  • Migrated Informatica code to different environment.
  • Developed the Mappings with Best Practices and Standards.
  • Hands On experience with various Modelling tools including Erwin, Power Designer, ER Studio, Oracle Data Modeler.
  • Worked with Informatica 9.5.1, Oracle 11.6, Teradata 13.0, UNIX, Hadoop, ER Studio, and DB2.
  • Implemented Incremental load logic.
  • Worked on Power Exchange bulk data movement process by using Power Exchange Change Data Capture (CDC) method, Power Exchange Navigator, Power Exchange Bulk Data movement. Power Exchange CDC can retrieve updates at user-defined intervals or in near real time.
  • Involved in design and development of technical specifications using Hadoop technology.
  • Worked on debugging code to find bottlenecks for performance tuning.
  • Extracted data from sources like SQL server, and Fixed width and Delimited Flat files. Transformed the data according to the business requirement and then Loaded into Oracle and Teradata databases.
  • Experience in writing SQL queries and optimizing the queries in SQL Server.
  • Performed data analysis and data profiling using SQL on various sources systems including SQL Server
  • Designed and developed complex mappings to load the Historical and Weekly data from the Legacy Systems to Oracle database.
  • Worked with Teradata Database in writing BTEQ queries and Loading Utilities using Multiload, Fastload, FastExport.
  • Created CISM ticket with HP to fix DB2 issues.
  • Modified MORIS (Trucks) data mart load with enhancements.
  • Created Defects in ALM software and assigned to developers.
  • Fine-tuned complex SQL by checking the Explain plans of the queries (a minimal tuning sketch follows this list).
  • Improved ETL job duration/run time from hours to minutes where pushdown optimization and partitioning were unavailable.
  • Created mappings to load data from ODS to DWH.
  • Used DB2 Loader utility to load large tables efficiently.
  • Created session partitions to load data faster.
  • Extensively used pmcmd commands on command prompt and executed Unix Shell scripts to automate workflows and to populate parameter files.
  • Used Informatica Metadata Manager to find the data lineage in the Mappings.
  • Worked in an Agile SDLC methodology.
  • Did Analysis for ABC (Automated Balance and Control) for each Business Areas and recommended the changes to capture Stats for each Workflow and its Sessions.
  • Tuned ETL Load by checking Load on the Server CPU.
  • FTP'd files to the Informatica server using the session-level FTP connection property.
  • Used Informatica Power Exchange to read Mainframe files.
  • Documented ETL test plans, test cases, test scripts, test procedures, assumptions, and validations based on design specifications for unit testing, system testing, expected results, preparing test data and loading for testing, error handling and analysis.
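
As an illustration of the Explain-plan-driven SQL tuning noted above, the following Teradata-style sketch is minimal and hypothetical: the tables, columns, and choice of statistics stand in for whatever the plan shows as the bottleneck.

    -- Inspect the optimizer plan for a slow query.
    EXPLAIN
    SELECT c.region, SUM(s.amount)
    FROM   dwh.sales s
    JOIN   dwh.customer c ON c.customer_id = s.customer_id
    WHERE  s.sale_date BETWEEN DATE '2016-01-01' AND DATE '2016-12-31'
    GROUP  BY c.region;

    -- Collect statistics on join and filter columns so the optimizer can
    -- choose better join and redistribution strategies.
    COLLECT STATISTICS ON dwh.sales    COLUMN (customer_id);
    COLLECT STATISTICS ON dwh.sales    COLUMN (sale_date);
    COLLECT STATISTICS ON dwh.customer COLUMN (customer_id);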

Environment: Informatica Power Center 9.6.1, Informatica Metadata Manager, DB2 9.8, Rapid SQL 8.2, IBM Data Studio, Erwin, PLSQL, Redhat Linux, Power Exchange 9.5.1, Copy Book, Tivoli, ALM, Shell Scripting.

Confidential, Pittsburgh, PA

Informatica Developer

Responsibilities:

  • Designed mappings to load data from various sources such as Oracle, flat files, XML files, Teradata, Sybase, and MS SQL Server into Oracle, Teradata, XML, and SQL Server targets.
  • Moved the data from source systems into different schemas, based on the dimension and fact tables, using Type 2 and Type 1 slowly changing dimensions (a minimal SCD Type 2 sketch follows this list).
  • Experience in data mining, finding correlations or patterns among fields in databases.
  • Hands on working experience in building MDM composite services using Services Integration Framework (SIF) including setup and configuring SIF SDK.
  • Analyzed the suspects in MDM while loading the data.
  • Interacted with the client for the requirements on MDM.
  • Worked on Teradata databases in profiling the data from the production tables from different sources.
  • Involved in Project Agile Roadmap Estimation, Pursuits, Solutions Architecture, Hadoop best practice, and architecture design/implementation within the company
  • Involved in column profiling of the tables using Informatica Analyst Tool (IDQ).
  • Analyzing the requirements for profiling from MDM.
  • Worked with Informatica cloud to create Source/Target connections, monitor, and synchronize the data.
  • Extensive experience in Relational and Dimensional Data modeling for creating Logical and Physical Design of Database and ER Diagrams using multiple data modeling tools like ER Studio and ERWIN.
  • Strong experience in Dimensional Modeling using Star and Snow Flake Schema, Identifying Facts and Dimensions, Physical and logical data modeling using Erwin
  • Extensively used Erwin for data modeling and Dimensional Data Modeling.
  • Experience in implementing Data Quality rules on Hive Data sources using Informatica Big Data Edition 10.x.
  • Worked with Informatica Data Quality (IDQ) 9.6.1 and 10.1.1, Teradata 15.10, Oracle 11g, UC4, Windows, WinSCP, Toad, Big Data Edition 10.1.0, Hortonworks Hadoop, and Hive 2.4.
  • Worked with Informatica Power exchange and Informatica Cloud to integrate and load data to Oracle db.
  • Worked with Informatica Cloud for creating source and target object, developed source to target mappings.
  • Used IDQ to profile the project source data, define or confirm the definition of the metadata, cleanse and check the accuracy of the project data, check for duplicate or redundant records, and provide information on how to proceed with ETL processes.
  • Involved in data profiling using IDQ prior to data staging.
  • Extensively used XML, Normalizer, Lookup, Expression, Aggregator, Sequence Generator, Sorter, and Joiner transformations.
  • Developed XML mappings for the ETL team for transformation and loading.
  • Defined code values, made changes according to the values in the mapping, and explained the transformation rules to the ETL team.
  • Worked on XML Source Files.
  • Created complex mappings in Power Center Designer using Aggregator, Expression, Filter, Sequence Generator, Update Strategy, Union, Lookup, Joiner, XML Source Qualifier, and Stored Procedure transformations.
  • Responsible for migration of the work from dev environment to testing environment
  • Responsible for solving the testing issues.
  • Created documents that have the detail explanation of the mappings, test cases and the expected results and the actual results.
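
As an illustration of the Type 2 slowly changing dimension loads mentioned above, a minimal Oracle-style SQL sketch follows; the dimension and staging tables, columns, and flag conventions are hypothetical, and in Power Center the equivalent logic is typically built with Lookup, Expression, Router, and Update Strategy transformations.

    -- 1. Expire the current version of rows whose tracked attributes changed.
    UPDATE dim_customer d
    SET    current_flag = 'N',
           effective_end_date = TRUNC(SYSDATE) - 1
    WHERE  d.current_flag = 'Y'
    AND    EXISTS (SELECT 1
                   FROM   stg_customer s
                   WHERE  s.customer_id = d.customer_id
                   AND    (s.address <> d.address OR s.status <> d.status));

    -- 2. Insert a new current version for changed and brand-new customers.
    INSERT INTO dim_customer
          (customer_id, address, status,
           effective_start_date, effective_end_date, current_flag)
    SELECT s.customer_id, s.address, s.status,
           TRUNC(SYSDATE), DATE '9999-12-31', 'Y'
    FROM   stg_customer s
    LEFT JOIN dim_customer d
           ON d.customer_id = s.customer_id AND d.current_flag = 'Y'
    WHERE  d.customer_id IS NULL;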

Environment: Informatica Power Center 9.5, Oracle 11g, Teradata V13.0, FastLoad, MultiLoad, Teradata SQL Assistant, Erwin, ER Studio, MS SQL Server 2012, SQL, PL/SQL, T-SQL, SQL*Plus, IDQ, XML, XSD, Oracle SQL Developer.

Confidential, Pittsburgh, PA

ETL Developer

Responsibilities:

  • Worked on data mining to analyze data from different perspectives and summarize it into useful information.
  • Involved in requirement analysis, ETL design and development for extracting data from the heterogeneous source systems like MS SQL Server, Oracle, flat files and loading into Staging and Star Schema.
  • Used SQL Assistant to querying Teradata tables.
  • Experience in the various stages of the System Development Life Cycle (SDLC) and approaches such as Waterfall, Spiral, and Agile.
  • Used Teradata Data Mover to copy data and objects such as tables and statistics from one system to another. Involved in analyzing and building the Teradata EDW using Teradata ETL utilities and Informatica.
  • Worked on Teradata Multi-Load, Teradata Fast-Load utility to load data from Oracle and SQL Server to Teradata.
  • Involved in massive data cleansing prior to data staging.
  • Configured workflows with Email tasks to send mail with the session log on session failure and on target failed rows.
  • Worked on CDC (Change Data Capture) to implement SCD (Slowly Changing Dimensions).
  • Worked on Change Data Capture (CDC) using CHKSUM to handle changes in the data when no flag or date column is present to represent a changed row (a minimal sketch follows this list).
  • Worked on reusable code known as tie-outs to maintain data consistency; it compared the source and target after ETL loading completed to validate that no data was lost during the ETL process.
  • Cleanse, standardize and enrich customer information using Informatica MDM.
  • Worked on data extraction, transformation, and loading using BTEQ, FastLoad, and MultiLoad.
  • Used the Teradata FastLoad and MultiLoad utilities to load data into tables.
  • Worked with UNIX shell scripts extensively for job execution and automation.
  • Documented ETL test plans, test cases, test scripts, test procedures, assumptions, and validations based on design specifications for unit testing, system testing, expected results, preparing test data and loading for testing, error handling and analysis.
  • Involved in Unit testing, User Acceptance Testing to check whether the data is loading into target, which was extracted from different source systems according to the user requirements.
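
As an illustration of the checksum-based CDC mentioned above, a minimal SQL Server-style sketch follows; the table and column names are hypothetical, and because CHECKSUM comparisons can occasionally collide, a column-by-column comparison is a safer fallback for borderline rows.

    -- Find changed rows by comparing a row checksum between the incoming
    -- stage and the current target when no flag or date column exists.
    SELECT s.customer_id
    FROM   stg.customer s
    JOIN   dwh.customer t
           ON t.customer_id = s.customer_id
    WHERE  CHECKSUM(s.name, s.address, s.status)
        <> CHECKSUM(t.name, t.address, t.status);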

Environment: Informatica Power Center 9.5 Repository Manager, Mapping Designer, Workflow Manager and Workflow Monitor, Oracle 11g, Teradata V 13.0, Fast load, Multiload, Teradata SQL Assistant, MS SQL Server 2012, SQL, PL/SQL, T-SQL, Unix, Cognos, Tidal.

Confidential, Dallas, TX

Programmer Analyst

Responsibilities:

  • Involved with requirement gathering and analysis for the data marts focusing on data analysis, data quality, data mapping between staging tables and data warehouses/data marts.
  • Designed and developed processes to support data quality issues and detection and resolutions of error conditions.
  • Wrote SQL scripts to validate and correct inconsistent data in the staging database before loading data into downstream databases (a minimal sketch follows this list).
  • Analyzed the session logs, bad files and error tables for troubleshooting mappings and sessions.
  • Wrote shell script utilities to detect error conditions in production loads and take corrective actions, wrote scripts to backup/restore repositories, backup/restore log files.
  • Scheduled workflows using Autosys job plans.
  • Provided production support and involved with root cause analysis, bug fixing and promptly updating the business users on day-to-day production issues.
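
As an illustration of the staging validation scripts described above, the following minimal sketch uses hypothetical staging table and column names; the real rules depended on each data mart's requirements.

    -- Report rows that fail basic validation rules before the warehouse load.
    SELECT stg_row_id, customer_id, email
    FROM   stg_customer_load
    WHERE  customer_id IS NULL
       OR  email NOT LIKE '%@%.%';

    -- Correct a common inconsistency in place in the staging table.
    UPDATE stg_customer_load
    SET    state_code = UPPER(TRIM(state_code))
    WHERE  state_code <> UPPER(TRIM(state_code));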

Environment: Informatica Power Center 8.6.1/7.x, Oracle 10g, Autosys, Erwin 4.5, CMS, TOAD 9.0, SQL, PL/SQL, UNIX, SQL Loader, MS SQL Server 2008.
