Senior ETL Developer Resume
Hartford, CT
SUMMARY:
- Over 10 years of IT experience in ETL architecture, analysis, development, testing, implementation, maintenance, and support of enterprise-level Data Integration, Enterprise Data Warehouse (EDW)/Data Mart (DM), ETL, OLAP, ROLAP, client/server, and web applications on Windows and UNIX platforms
- One year of experience in big data analysis and Hadoop development
- Hands-on experience working across all stages of the Software Development Life Cycle (SDLC) in Agile, Waterfall, and hybrid methodologies, including requirement gathering, analysis, design, development, and testing
- Experience working in fast-paced Agile (Scrum/Kanban) environments
- Working knowledge of AWS services such as EC2, RDS, and S3
- Hands-on experience tuning Oracle databases and database objects for performance when working with large datasets
- Good knowledge of key Oracle performance features such as the query optimizer, execution plans, and indexes
- Created complex Informatica mappings using Unconnected Lookup, Joiner, Rank, Source Qualifier, Sorter, Aggregator, Dynamic Lookup, and Router transformations to extract, transform, and load data into data mart areas, using Informatica PowerCenter with multiple database systems (SQL Server, Oracle, and various cloud technologies)
- Hands-on experience in support projects, fixing production issues and improving match percentages for reconciliation
- Developed Informatica workflows/worklets/sessions associated with the mappings using Workflow Manager
- Extensive experience developing ETL to support data extraction, transformation, and loading using Informatica PowerCenter
- Knowledge of dimensional data modeling: star schema modeling, snowflake modeling, fact and dimension tables, and physical and logical data modeling
- Worked extensively with complex mappings using transformations such as Source Qualifier, Expression, Filter, Joiner, Router, Union, Unconnected/Connected Lookup, Aggregator, Stored Procedure, and Normalizer
- Good knowledge of applying rules and policies using the ILM (Information Lifecycle Management) workbench for the Data Masking transformation and loading into targets
- Excellent knowledge of Oracle SQL and PL/SQL, including queries, packages, and defining objects such as tables, stored procedures, views, indexes, triggers, and functions
- Experience with performance tuning on Oracle RDBMS using Explain Plan and optimizer hints
- Well versed in Autosys JIL (Job Information Language) scripts; a representative job definition is sketched at the end of this summary
- Primary technical skills in HDFS, MapReduce, Hive, Sqoop
- Part of the Centre of Excellence team within the account whose responsibilities include suggesting best practices and focusing on emerging technologies like Big Data and Hadoop.
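A minimal sketch of the kind of Autosys JIL definition referenced above; the job names, machine, and paths are illustrative placeholders, not a real production schedule:

```sh
# Load a command job and its success dependency through the jil CLI.
jil << 'EOF'
/* All names and paths below are hypothetical. */
insert_job: etl_stage_load   job_type: CMD
machine: etlhost01
command: /opt/etl/bin/run_stage_load.sh
/* run only after the file-watcher job succeeds */
condition: s(etl_src_file_watch)
std_out_file: /var/log/etl/etl_stage_load.out
std_err_file: /var/log/etl/etl_stage_load.err
alarm_if_fail: 1
EOF
```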
TECHNICAL SKILLS:
Data Warehousing: Informatica PowerCenter 10.2/9.1/8.6.1/8.1.1/8.0, Informatica Designer, Workflow Manager, Workflow Monitor, ETL, Data Mart, OLAP, OLTP, Mapplets, Transformations, Toad, SQL*Loader, SQL, Oracle PL/SQL.
IDE: Eclipse, Microsoft Visual Studio
Databases: Oracle 7.x/8i/9i/10g/11g (SQL, PL/SQL, Stored Procedures, Triggers), SQL*Plus, MS SQL Server 2000/2005/2008, DB2/UDB, Teradata, SAP tables and MS Access
Programming/GUI: Visual Basic 6.0/5.0, C, C++, HTML, Unix Shell Scripting
Environment: Sun Solaris 2.6/2.7/2.8/8.0, IBM AIX 4.2/4.3, MS DOS 6.22, Win 3.x/95/98, Win XP.
Tools: TOAD, Visio, Eclipse, Tortoise SVN, PuTTY, HP Quality Center, Rally, Jira, Netezza
Execution Methodologies: Waterfall, Agile
PROFESSIONAL EXPERIENCE:
Confidential, Hartford, CT
Senior ETL Developer
Technical Platform:
Languages: Oracle PL/SQL, Informatica 10.2, UNIX shell scripts, Hadoop (HDFS, Hive)
Databases: Oracle 10g, SQL*Plus, Netezza
Tools: SQL Developer, SQL*Loader
Job Responsibilities
- Attended daily status/Scrum meetings to discuss the current status of the project and assigned tasks, as well as any impediments to meeting project milestones
- Involved in technical discussions to shape new designs and future scope, such as connecting Netezza to the data lake and Cognos to Netezza
- Reviewed designs and documentation to ensure quality deliverables; prepared functional and technical documentation for data warehouses; built ETL code using SQL*Loader, Oracle PL/SQL, and Informatica to load data available in various source systems
- Executed queries using Hive and developed MapReduce jobs to analyze data
- Developed MapReduce programs to parse the raw data, populate staging tables and store the refined data in partitioned tables in the EDW.
- Managed Hadoop log files.
- Analyzed web log data using HiveQL; a representative query is sketched after this list
- Used Oozie to automate data loading into the Hadoop Distributed File System, enabling speedy reviews and first-mover advantages
- Prepared the runbook document to ensure all existing flows and dependencies between source and target are accurate
- Created SQL*Loader scripts to load legacy data into Oracle staging tables and wrote SQL queries to perform data validation and data integrity testing; involved in performance tuning of SQL queries, sources, and targets, and added partitions to larger tables to rectify performance bottlenecks
- Created around 15 to 20 PL/SQL procedures and database objects used to load data into Oracle tables
- Created an Oracle directory object to store the logs generated by those procedures; a condensed loader wrapper is sketched after this list
- Designed source-to-target mappings to extract data from heterogeneous sources such as Oracle tables, flat files, and SQL Server, and load it to the target tables
- Worked with various Informatica transformations such as Lookup, Aggregator, Expression, Normalizer, Joiner, Router, Filter, Update Strategy, and Sequence Generator.
- Created Reusable Transformations, Mapplets, Sessions and Worklets and made use of the Shared Folder concept using shortcuts wherever possible to avoid redundancy.
- Provided design recommendations and thought leadership to sponsors/stakeholders that improved review processes and resolved technical problems.
- Created Informatica mappings to process source data from Oracle into Hadoop (the data lake) and to move data from the data lake to Netezza; Cognos is built on top of Netezza for reporting purposes
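A minimal sketch of the kind of HiveQL web-log analysis referenced above; the table, columns, and partition key are illustrative assumptions, not the actual schema:

```sh
# Hypothetical web-log rollup in HiveQL, run from the shell.
# Assumes a table web_logs(ip, url, status, ...) partitioned by dt.
hive -e "
SELECT url,
       COUNT(*)                                       AS hits,
       SUM(CASE WHEN status >= 500 THEN 1 ELSE 0 END) AS server_errors
FROM   web_logs
WHERE  dt = '2018-01-15'
GROUP  BY url
ORDER  BY hits DESC
LIMIT  20;
"
```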
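A condensed sketch of the SQL*Loader-plus-PL/SQL load flow described above; the control file, table names, directory object, and procedure are hypothetical:

```sh
#!/bin/sh
# Hypothetical wrapper: stage a legacy flat file with SQL*Loader, then
# call a PL/SQL procedure that logs through an Oracle DIRECTORY object.
# Assumed control file stage_load.ctl:
#   LOAD DATA INFILE 'legacy.dat'
#   TRUNCATE INTO TABLE stg_policy
#   FIELDS TERMINATED BY '|'
#   (policy_id, holder_name, eff_date DATE 'YYYY-MM-DD')
sqlldr userid="$ORA_CONN" control=stage_load.ctl log=stage_load.log || exit 1

sqlplus -s "$ORA_CONN" << 'EOF'
WHENEVER SQLERROR EXIT FAILURE
-- pkg_policy_load is assumed; it writes its run log via UTL_FILE
-- to the (hypothetical) ETL_LOG_DIR directory object described above.
EXEC pkg_policy_load.load_from_stage;
EOF
```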
Confidential, Hartford, CT
Senior ETL Developer
Technical Platform
Languages - PL/SQL, UNIX, shell script, Autosys
Databases - Oracle 11g Exadata, SQL*Plus
Job Responsibilities
- Involved in project startup phases such as resource planning and effort estimation using the Delphi estimation template
- Assisted the Project Manager with scope estimation and the work breakdown structure (WBS) to meet project milestones
- Worked with business analysts and other department heads on requirements gathering and prepared the Technical Specification Document in line with each Business Requirement Document (BRD) and Functional Specification Document (FSD)
- Installed, configured, and deployed Hadoop clusters for development, testing, and production
- Transferred bulk data between Oracle databases and the Hadoop ecosystem with Sqoop
- Exported the analysed data to the relational databases using Sqoop for visualization and to generate reports for the BI team.
- Imported data using Sqoop from MySQL to HDFS on a regular basis; a representative import is sketched after this list
- Created Hive tables, loaded them with data, and wrote Hive queries that run internally as MapReduce jobs
- Developed Hive Queries to analyze the data in HDFS to identify issues and behavioral patterns
- Implemented MapReduce programs to perform joins using secondary sorting and distributed cache
- Involved in loading data from UNIX file system to HDFS.
- Strong knowledge of dimensional modeling concepts such as star schema, snowflake schema, and dimension and fact tables
- Good knowledge of applying rules and policies using the ILM (Information Lifecycle Management) workbench for the Data Masking transformation and loading into targets; implementation experience with Informatica Cloud Application and Data Integration and other major cloud-based platforms
- Designed source-to-target mappings to extract data from heterogeneous sources such as Oracle tables, flat files, and SQL Server, and load it to the target tables
- Worked with various Informatica transformations such as Lookup, Aggregator, Expression, Normalizer, Joiner, Router, Filter, Update Strategy, and Sequence Generator.
- Created Reusable Transformations, Mapplets, Sessions and Worklets and made use of the Shared Folder concept using shortcuts wherever possible to avoid redundancy.
- Performed tuning of Informatica Mappings and sessions to get optimum performance.
- Created database objects for the system like tables, views, sequences, functions, synonyms, indexes, triggers, packages and stored procedures.
- Created PL/SQL scripts to extract data from the operational database into simple flat text files using the UTL_FILE package; a sketch appears after this list
- Created shell scripts to execute the procedures automatically as part of process automation
- Involved in performance tuning of SQL queries, sources, targets, and sessions, identifying bottlenecks and adding BULK COLLECT processing to rectify them
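A minimal sketch of the recurring Sqoop import described above; the host, database, credentials, table, and target directory are placeholders:

```sh
# Hypothetical nightly Sqoop import from MySQL into HDFS.
sqoop import \
  --connect jdbc:mysql://mysql-host:3306/salesdb \
  --username etl_user \
  --password-file /user/etl/.mysql.pass \
  --table orders \
  --target-dir /data/raw/orders/$(date +%Y-%m-%d) \
  --num-mappers 4 \
  --fields-terminated-by '\t'
```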
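A minimal sketch of the UTL_FILE flat-file extract mentioned above, run from the shell; the directory object, table, and columns are assumptions:

```sh
# Hypothetical PL/SQL extract to a pipe-delimited flat file via UTL_FILE.
# EXTRACT_DIR must be an Oracle DIRECTORY object the schema can write to.
sqlplus -s "$ORA_CONN" << 'EOF'
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('EXTRACT_DIR', 'claims_extract.txt', 'w');
  FOR r IN (SELECT claim_id, status, paid_amt FROM claims) LOOP
    UTL_FILE.PUT_LINE(f, r.claim_id || '|' || r.status || '|' || r.paid_amt);
  END LOOP;
  UTL_FILE.FCLOSE(f);
END;
/
EOF
```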
Confidential, Hartford, CT
Senior Informatica Developer
Technical Platform
Languages - PL/SQL, UNIX, shell script, Autosys (Job Information Language)
Databases - Oracle 11g Exadata, SQL*Plus
Job Responsibilities
- Involved with the Business analysts in requirements gathering and data mapping.
- Assisted in the design of an efficient reliable ETL mechanism for the creation of the Rating Data Store data mart.
- Prepared Impact Analysis and Design Specifications for ETL Coding and mapping standards.
- Created Informatica mappings to build business rules to load data, using transformations such as Source Qualifier, Aggregator, Expression, Joiner, Connected and Unconnected Lookup, Filter, Sequence Generator, External Procedure, Router, and Update Strategy
- Extensively used the ILM tool for data masking
- Used PL/SQL procedures within Informatica mappings to truncate data in target tables at run time
- Implemented Custom application support to quickly apply data masking algorithms to any sensitive data, in any format
- Modified existing ETL (Informatica) jobs to report on each load, including the run time and the number of records loaded and extracted
- Modified existing UNIX shell scripts to incorporate the automation
- Developed Oracle procedures, packages, functions, and triggers using SQL and PL/SQL to pull data from various source environments into the application in the desired format so it can blend into the customized framework
- Developed materialized views for data replication in distributed environments.
- Partitioned large tables using the range partitioning technique
- Used advanced bulk techniques (FORALL, BULK COLLECT) to improve performance; a sketch appears after this list
- Migrated more than 110 applications spanning 4 TB of Data from Oracle 10g to 11g.
- Extensively used Oracle hints and Explain Plan to direct the optimizer toward an optimal query execution plan; see the Explain Plan sketch after this list
- Automated Informatica jobs using UNIX shell scripting; see the pmcmd sketch after this list
- Created database objects such as tables, views, materialized views, procedures, and packages using Oracle tools such as Toad, PL/SQL Developer, and SQL*Plus
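A minimal sketch of the BULK COLLECT/FORALL pattern referenced above, run via SQL*Plus; the source and target tables are illustrative:

```sh
# Hypothetical bulk-load pattern: batched fetch plus one bulk DML per batch.
sqlplus -s "$ORA_CONN" << 'EOF'
DECLARE
  TYPE t_rows IS TABLE OF src_claims%ROWTYPE;
  l_rows t_rows;
  CURSOR c IS SELECT * FROM src_claims;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 5000;  -- fetch in 5000-row batches
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT                 -- one bulk INSERT per batch
      INSERT INTO tgt_claims VALUES l_rows(i);
    COMMIT;
  END LOOP;
  CLOSE c;
END;
/
EOF
```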
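A minimal sketch of the hint-plus-Explain-Plan check referenced above; the table, index, and predicate are assumptions:

```sh
# Hypothetical: steer the optimizer toward an index, then inspect the plan.
sqlplus -s "$ORA_CONN" << 'EOF'
EXPLAIN PLAN FOR
  SELECT /*+ INDEX(p pol_eff_dt_ix) */ p.policy_id, p.eff_date
  FROM   policies p
  WHERE  p.eff_date >= DATE '2017-01-01';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
EOF
```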
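A minimal sketch of the shell automation of Informatica jobs referenced above; the domain, service, folder, workflow, and notification address are placeholders:

```sh
#!/bin/sh
# Hypothetical: start a workflow with pmcmd, wait, and alert on failure.
pmcmd startworkflow \
  -sv INT_SVC -d DOM_PROD \
  -u "$INFA_USER" -p "$INFA_PASS" \
  -f CLAIMS_DM -wait wf_load_claims_dm

if [ $? -ne 0 ]; then
  echo "wf_load_claims_dm failed" | mailx -s "ETL failure" etl-support@example.com
  exit 1
fi
```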
Confidential, Hartford, CT
Lead Oracle PL/SQL / Informatica Developer
Technical Platform
Languages - PL/SQL, UNIX, shell script, Autosys (Job Information Language)
Databases - Oracle 11g Exadata, SQL*Plus
Job Responsibilities
- Understood the existing ETL code and identified the equivalent components in the new architecture
- Involved in developing the ETL components in the Exadata environment
- Designed the architecture for the Informatica workflows to migrate 4 TB of history data
- Designed Source to Target mappings to extract data from heterogeneous sources such as Oracle and Mainframe files as source and load it to the target tables.
- Worked with various transformations such as Lookup, Aggregator, Expression, Normalizer, Joiner, Router, Filter, Update Strategy, and Sequence Generator.
Confidential
Lead Oracle PL/SQL developer
Job Responsibilities
- Involved in project activities from preparing the component inventory and impact analysis through building, implementing, integrating, and testing the components
- Addressed issues and maintained a history of defect logs with proper resolutions
- Maintained component versions using Subversion
- Worked closely with the QA team to ensure the test cases are meeting the requirements documented.
- Regular communication with clients on various aspects.