- Utilized Informatica Data Quality (IDQ) for data profiling, matching and removing duplicate records, correcting bad data, and handling null values.
- Expertise in using both connected and unconnected Lookup transformations. Exported IDQ mappings and mapplets to PowerCenter and automated the scheduling process.
- Extensively used Informatica PowerCenter (10.1/9.x/8.x/7.1/6.x/5.x) for ETL (Extraction, Transformation and Loading) of data from multiple source database systems to Data Warehouses in UNIX/Windows environments.
- Extensive experience with tools and technologies such as Oracle 11g/10g/9i/8i, SQL Server, DB2 UDB, Sybase, Teradata, MS Access, flat files, SQL Developer, SQL Navigator, SQL*Loader, PL/SQL, Sun Solaris, Erwin, TOAD, stored procedures and triggers.
- Strong working experience in Extraction, Transformation and Loading (ETL) of data from various sources into Data Warehouses and Data Marts using Informatica PowerCenter (Repository Manager, Designer, Workflow Manager, Workflow Monitor), PowerExchange, PowerMart, PowerAnalyzer and PowerConnect.
- Extensively worked on the Informatica Designer components - Source Analyzer, Warehouse Designer, Transformation Developer, Mapplet Designer and Mapping Designer.
- Expert developer skills in the Teradata RDBMS, including initial Teradata environment setup, development and production support, and use of the FastLoad, MultiLoad, TPump and BTEQ utilities and Teradata SQL.
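As a minimal sketch of the BTEQ work described above, a batch script of this shape loads staged rows and fails fast on error (system, credentials and table names are illustrative, not from an actual engagement):

```sql
.LOGON tdprod/etl_user,password;

/* Load staged rows into the target, keeping only new business keys */
INSERT INTO dw.customer_dim (cust_id, cust_name, load_dt)
SELECT s.cust_id, s.cust_name, CURRENT_DATE
FROM   stg.customer s
WHERE  s.cust_id NOT IN (SELECT cust_id FROM dw.customer_dim);

/* Abort with a non-zero return code if the insert failed,
   so the scheduler can detect the failure */
.IF ERRORCODE <> 0 THEN .QUIT 8;

.LOGOFF;
.QUIT 0;
```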
- Developed SQL and PL/SQL scripts to validate and load data into interface tables using SQL*Loader. Implemented strategies for incremental data extraction as well as data migration to load data into Teradata.
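A SQL*Loader control file for landing a delimited flat file into an interface table, as described above, typically follows this shape (file, table and column names are hypothetical):

```sql
LOAD DATA
INFILE  'customers.csv'
BADFILE 'customers.bad'        -- rejected rows land here for review
APPEND
INTO TABLE intf_customers
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( cust_id    INTEGER EXTERNAL,
  cust_name  CHAR,
  load_dt    "SYSDATE"          -- stamp each row with the load date
)
```

Invoked as `sqlldr userid=etl_user control=customers.ctl log=customers.log`, with the bad file checked afterward as part of validation.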
- Performed extraction, transformation and loading of data from RDBMS tables and Flat File sources into Oracle 11g RDBMS in accordance with requirements and specifications.
- Experience in writing, testing and implementation of the PL/SQL triggers, stored procedures, functions and packages.
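To illustrate the PL/SQL procedure work, a load procedure of this kind (all object names are hypothetical) validates interface rows before moving the clean ones to a target table:

```sql
CREATE OR REPLACE PROCEDURE load_valid_customers AS
BEGIN
  -- Flag rows that fail basic validation instead of loading them
  UPDATE intf_customers
  SET    status = 'ERROR'
  WHERE  cust_id IS NULL OR cust_name IS NULL;

  -- Load only the clean rows into the target table
  INSERT INTO dw_customers (cust_id, cust_name, load_dt)
  SELECT cust_id, cust_name, SYSDATE
  FROM   intf_customers
  WHERE  status IS NULL;

  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;  -- leave both tables untouched on failure
    RAISE;
END load_valid_customers;
/
```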
- Used Informatica PowerCenter to extract, transform and load customer data from operational sources such as Oracle, SQL Server, flat files (CSV), XML files and COBOL files into the Teradata warehouse.
- Participated in peer code reviews to ensure all standards were implemented, and tuned mappings, using Informatica partitioning where needed, to bring down run times.
- Informatica PowerCenter 10.1/9.x/8.x
- Windows, Linux, UNIX
- SQL, PL/SQL, Unix shell scripting
- Oracle 11g, Microsoft SQL Server, IBM DB2, Netezza, Teradata
- Informatica PowerCenter (9.x/10.x), Data Warehouse.
Sr. Informatica/Data Warehouse Developer
- Over 8 years of solid understanding and expertise in Data Warehouse development, testing, deployment, maintenance and production support of data integration using Informatica PowerCenter (10.1/9.x/8.x/7.1/6.x/5.x), IDQ and DW/BI ETL.
- Expertise in the Extraction, Transformation and Loading (ETL) process and in dimensional data modeling: Star schema/Snowflake modeling, Fact and Dimension tables, multidimensional modeling and de-normalization techniques.
- Extensive experience in creating the Workflows, Worklets, Mappings, Mapplets, Reusable transformations and scheduling the Workflows and sessions using Informatica Power Center.
- Experience in performance tuning of mappings, transformations and sessions by identifying performance bottlenecks and implementing techniques such as partitioning and pushdown optimization.
- Worked extensively with debugger and tuning of ETL processes to improve the overall performance.
Sr. ETL Informatica Developer
- Used Informatica PowerCenter Designer to analyze source data and to extract and transform it from various source systems (SQL Server and flat files), incorporating business requirement rules.
- Used Informatica PowerCenter 9.5.1 to extract data from flat files and SQL Server, and applied business requirement logic to load the Facets, Vendor and Pharmacy data into Data Warehouse and Data Mart tables.
- Designed and developed Mappings using different transformations such as Source Qualifier, Expression, Aggregator, Filter, Joiner, and Lookup to load data from source to target tables.
- Created Stored Procedures to handle the selection criteria such as Address, Provider, Specialty, Chapters and Credentialing and to load the data for the Extract and Exclusion reports based on the business requirements.
- Created Stored Procedures to load the crosswalks data for Medicare Provider and Printed Directories to SQL Staging tables.
- Created mapping variables and parameter files for mappings and sessions so that code could be migrated easily across environments and databases.
- Worked with reusable sessions, Decision tasks, Control tasks and Email tasks for on-success/on-failure notifications.
- Migrated ETL code from Development to QA and from QA to Production environments.
- Worked with business analysts to resolve issues related to the migration of designs from the Dev server to the Prod server.
- Participated in Sprint planning and created user stories for the tasks to be worked on during the Sprint.
- Performed thorough analysis of project requirements and coordinated with business users to understand their needs and minimize gaps.
- Analyzed the current process before making changes and performed impact analysis using prepared queries against database and Informatica metadata.
- Compiled the complete list of required changes and estimated the tasks to be worked on.
- Limited each two-week Sprint to 50 hours of work, adding more tasks if all the hours were burned before the Sprint ended.
- Considered error handling, auditing and performance tuning in all code developed.
- Performed thorough unit testing and integration testing, then moved the code to UAT and supported the users until sign-off.
- Performed a one-time movement of historical data from Oracle to Netezza in partitions, parameterized at the AppWorx level.
- Used parameter files extensively to parameterize connections, file paths, file names, source filter conditions and other mapping parameters and variables.
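An Informatica parameter file of the kind described here groups parameters by folder, workflow and session; the folder, workflow, connection and path names below are purely illustrative:

```
[ProjectFolder.WF:wf_daily_load.ST:s_m_load_customers]
; session-level connection and file parameters
$DBConnection_Src=ORA_SRC_PROD
$DBConnection_Tgt=NZ_DW_PROD
$InputFile_Customers=/data/inbound/customers.csv
; mapping parameters/variables ($$ prefix), e.g. a source filter
$$SourceFilter=load_dt >= TO_DATE('2017-01-01','YYYY-MM-DD')
$$LastRunDate=2017-06-01
```

Keeping these values outside the mapping lets the same code run unchanged in Dev, QA and Production by swapping parameter files.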
- Built unit test cases and a project document containing the test cases, and specified the deployment needs for the Operations team to move the code to Production.
- Specified in the project document the versions to be migrated, along with the deployment steps, the backout plan and the historical-load approach.
- Supported the Operations team for at least two weeks after each migration until the code stabilized, making any necessary changes as production break-fixes.
- Worked closely with Data Architects and DBAs to understand the architecture and to design new table structures.
- Performed thorough unit and integration testing using sample data from PROD; once the code was reviewed and met standards, promoted it to higher environments.
- Communicated any data or structural changes to downstream applications so they could perform their own impact analysis.
- Extensively used reusable transformations, mapplets and parameter files to reduce the redundancy of the logic implementation.
- Maintained all the standards and implemented the best practices in our development process.