Lead ETL/Cloud Developer Resume
EXPERIENCE SUMMARY:
- 9+ years of experience in Informatica PowerCenter 9.x/10.x, Informatica Developer 10.x and Informatica Cloud.
- 9+ years of experience with Oracle SQL and PL/SQL programming, Netezza, SQL Server and UNIX.
- 3+ years of experience with ETL implementation using Redshift, AWS S3, EC2, Data Pipeline and Glue.
- Big data implementation using Apache Spark and Scala.
- Extensive experience in data mart and data warehouse design and development.
- Strong experience in dimensional modelling, star schemas and data lake implementation.
- End-to-end project delivery experience, from requirement gathering through development, testing, delivery and hypercare.
- Strong experience working with Amazon Redshift and other AWS components.
- Extensively worked with Oracle PL/SQL stored procedures, triggers, functions and packages, and involved in query optimization.
- Able to write the complex SQL needed for ETL jobs and data analysis; proficient with Oracle 11g/10g/9i, Netezza, SQL Server 2008/2005, MySQL, flat files and COBOL files.
- Strong experience working with Hadoop Hive and Pig and managing files in HDFS.
- Extensive experience in the Insurance, Banking, Financial and Telecom domains.
- Knowledge of QlikView and Tableau reporting tools.
- Knowledge of Python scripting.
SOFTWARE PRODUCTS:
Operating Systems: Windows (NT/XP/2000), Linux, UNIX
Programming Languages: PL/SQL, SQL, Shell Programming and Python
DBMS: Oracle, Microsoft SQL Server, AWS Redshift and Netezza
Tools: Informatica PowerCenter 9.x, Control-M, QlikView, AWS Data Pipeline and AWS Glue
ASSIGNMENT:
Confidential
Lead ETL/Cloud Developer
Responsibilities:
- Helped create the roadmap for moving data from on-premise systems to the AWS cloud.
- Led and managed the project from requirement analysis to delivery.
- Created data pipelines and ETL processes to load data from AWS S3 into RDS and Redshift (see the sketch after this list).
- Developed and maintained data marts in Redshift and Oracle using a star schema approach.
- Created the dimensional model for the project implementation.
- Implemented performance improvement techniques in Redshift and Oracle.
- Analysed, designed and developed the ETL project.
- Developed the design document for the ETL process.
- Moved data from S3 to Oracle and then to Redshift using Informatica and AWS services.
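A minimal sketch of the S3-to-Redshift load pattern referenced above. This is not the project's actual code; the bucket, IAM role and table names (stg_sales, sales_fact) are hypothetical placeholders for the standard Redshift COPY-then-upsert approach.

```sql
-- Hypothetical sketch: bulk-load a daily S3 extract into a staging table,
-- then refresh the target with Redshift's classic delete-and-insert upsert.
COPY stg_sales
FROM 's3://example-bucket/sales/2019-01-01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
CSV IGNOREHEADER 1 REGION 'us-east-1';

BEGIN;
DELETE FROM sales_fact
USING stg_sales
WHERE sales_fact.sale_id = stg_sales.sale_id;   -- drop rows being refreshed
INSERT INTO sales_fact SELECT * FROM stg_sales; -- insert the new versions
COMMIT;
```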
Solution Environment:
- Informatica Developer
- Amazon AWS
- Amazon Redshift
- Amazon EC2
- Data Pipeline
- Glue
- Oracle 11g database
- PL/SQL
- Unix
- Autosys
Confidential
Lead ETL Developer
Responsibilities:
- Led and managed the project from requirement analysis to delivery.
- Developed and maintained the data mart and data warehouse.
- Created dimensional models for review and implementation.
- Analysed, designed and developed the ETL project.
- Developed the design document for the ETL process.
- Responsible for designing and developing mappings, mapplets, sessions and workflows to load data from source to target databases using Informatica PowerCenter, and tuned mappings to improve performance.
- Built data pipelines.
- Moved data from databases to HDFS.
- Loaded data into Hive.
- Transformed data using Hive and Pig (see the HiveQL sketch after this list).
- Developed the SQL and PL/SQL code required for the implementation.
- Developed UNIX shell scripts as per the requirements.
- Extracted data from sources such as flat files and Oracle and loaded it into target systems using Informatica 9.5.
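A minimal HiveQL sketch of the Hive load-and-transform steps above; the table names and HDFS path are hypothetical, and the Pig portion of the flow is omitted.

```sql
-- Hypothetical sketch: move a file landed in HDFS into a Hive staging table,
-- then write an aggregated result into a table partitioned by load date.
LOAD DATA INPATH '/data/landing/orders/orders.csv' INTO TABLE orders_stg;

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE orders_daily PARTITION (order_dt)
SELECT cust_id,
       SUM(amount) AS total_amount,
       order_dt                -- partition column must come last
FROM orders_stg
GROUP BY cust_id, order_dt;
```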
Solution Environment:
- Informatica PowerCenter 9.5
- Informatica Cloud
- SAP ECC
- SAP HANA
- Oracle 11g database
- PL/SQL
- Unix
- Tivoli
Confidential
ETL/Big Data Developer
Responsibilities:
- Worked on a data warehousing project for a US-based financial institution's Confidential group.
- The vision of the project was to create a comprehensive, aggregated view of risk data to allow assessment and analytics across financial instruments; the project used Hadoop, Redshift and Tableau.
- Analysed, designed and built the extraction, transformation and load processes for building a data lake.
- Built data models in Redshift to help the reporting layer read data faster (see the sketch after this list).
- Transformed data from Hadoop and loaded it into the final data warehouse tables.
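As an illustration of the Redshift modelling mentioned above: distribution and sort keys are the main physical-design levers for making reporting reads faster. The table and column names below are hypothetical, not the project's actual model.

```sql
-- Hypothetical sketch: pick keys so the reporting layer's joins and
-- date-range filters hit co-located, sorted data.
CREATE TABLE risk_position_fact (
  position_id   BIGINT        NOT NULL,
  instrument_id BIGINT        NOT NULL,
  as_of_date    DATE          NOT NULL,
  exposure_amt  DECIMAL(18,2)
)
DISTSTYLE KEY
DISTKEY (instrument_id)   -- co-locate rows joined to the instrument dimension
SORTKEY (as_of_date);     -- speed up the date-range scans reports run
```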
Solution Environment:
- Informatica PowerCenter 9.1
- Hadoop
- Hive
- Sqoop
- Redshift
- Netezza
- Oracle 11g database
- PL/SQL
- Unix
Confidential
ETL/Big Data Developer
Responsibilities:
- Performed requirement analysis and prepared the mapping document.
- Maintained and enhanced the existing data mart model.
- Analysed, designed and developed the ETL project.
- Developed the design document for the ETL process.
- Responsible for designing and developing mappings, mapplets, sessions and workflows to load data from source to target databases using Informatica PowerCenter, and tuned mappings to improve performance.
- Moved data from databases to HDFS.
- Loaded data into Hive.
- Transformed data using Hive and Pig.
- Developed the SQL and PL/SQL blocks required for the implementation.
- Developed UNIX shell scripts as per the requirements.
- Extracted data from sources such as flat files and Oracle and loaded it into target systems using Informatica 9.5.
- Used Informatica PowerCenter Workflow Manager to create sessions and batches to run with the logic embedded in the mappings.
- Wrote Oracle stored procedures, triggers and packages to incorporate business logic effectively, wrapping the contents to provide security.
- Handled PL/SQL compile-time and runtime errors, and debugged stored procedures for business logic changes (see the sketch after this list).
- Query optimization and performance tuning.
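A minimal PL/SQL sketch of the stored-procedure error-handling pattern described above; the procedure, source and target tables, and the error-log table (etl_error_log) are all hypothetical.

```sql
-- Hypothetical sketch: run the load, and on any runtime error roll back,
-- log the Oracle message, and re-raise so the scheduler sees the failure.
CREATE OR REPLACE PROCEDURE load_customer_dim IS
BEGIN
  MERGE INTO customer_dim d
  USING customer_stg s
  ON (d.customer_id = s.customer_id)
  WHEN MATCHED THEN
    UPDATE SET d.cust_name = s.cust_name
  WHEN NOT MATCHED THEN
    INSERT (customer_id, cust_name)
    VALUES (s.customer_id, s.cust_name);
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;
    INSERT INTO etl_error_log (proc_name, err_msg, logged_at)
    VALUES ('LOAD_CUSTOMER_DIM', SQLERRM, SYSTIMESTAMP);
    COMMIT;
    RAISE;   -- propagate the original error after logging it
END load_customer_dim;
/
```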
Solution Environment:
- Informatica PowerCenter 9.1
- Hadoop
- Hive
- Pig
- Sqoop
- Control-M
- Oracle 11g database
- PL/SQL
- Unix
Confidential
ETL/Big Data Developer
Responsibilities:
- Analysed, designed and developed the extraction, transformation and load (ETL) processes for a data warehousing project.
- Developed the design document for the ETL process.
- Responsible for designing and developing mappings, mapplets, sessions and workflows to load data from source to target databases using Informatica PowerCenter, and tuned mappings to improve performance.
- Extracted data from sources such as flat files and Oracle and loaded it into target systems using Informatica 9.1.
- Developed mappings using transformations such as Update Strategy, Lookup, Stored Procedure, Router, Joiner, Sequence Generator and Expression to build the business rules that load the data (the sketch after this list shows the equivalent set-based logic).
- Used Informatica PowerCenter Workflow Manager to create sessions and batches to run with the logic embedded in the mappings.
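Informatica mappings are built in the Designer GUI rather than written as code, but the insert-versus-update decision made by a Lookup feeding an Update Strategy corresponds to a set-based MERGE. A hypothetical Oracle sketch, not the project's actual mapping:

```sql
-- Hypothetical sketch: the Update Strategy's DD_INSERT / DD_UPDATE routing,
-- expressed as an Oracle MERGE against the target dimension.
MERGE INTO product_dim tgt
USING product_stg src
ON (tgt.product_code = src.product_code)   -- the Lookup condition
WHEN MATCHED THEN
  UPDATE SET tgt.product_name = src.product_name,
             tgt.updated_dt   = SYSDATE    -- DD_UPDATE path
WHEN NOT MATCHED THEN
  INSERT (product_code, product_name, created_dt)
  VALUES (src.product_code, src.product_name, SYSDATE);  -- DD_INSERT path
```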
Solution Environment:
- Informatica PowerCenter 9.1
- Netezza, Oracle 11g database, MySQL, Hadoop
- PL/SQL
- Unix
Confidential
ETL/Big Data Developer
Responsibilities:
- Connected to Oracle and MySQL databases from Apache Sqoop.
- Migrated data from Oracle databases and flat files to Hadoop HDFS.
- Created tables in Hadoop Hive (see the sketch after this list).
- Wrote queries in Hadoop Hive.
- Transferred data from Hive back to Oracle.
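A minimal HiveQL sketch of the table creation and querying described above, layered over the kind of HDFS directory a Sqoop import populates; the table name, columns and path are hypothetical.

```sql
-- Hypothetical sketch: an external Hive table over Sqoop-imported delimited
-- files, plus a summary query against it.
CREATE EXTERNAL TABLE accounts (
  account_id BIGINT,
  branch_cd  STRING,
  balance    DECIMAL(14,2)
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/etl/sqoop/accounts/';

SELECT branch_cd,
       COUNT(*)     AS account_count,
       SUM(balance) AS total_balance
FROM accounts
GROUP BY branch_cd;
```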
Solution Environment:
- Apache Hadoop
- Apache Sqoop
- Apache Hive
- Oracle 11g database
