
Sr ETL Developer/Lead Resume


SUMMARY

  • Around 14 years of IT consulting experience in analysis, design, coding, development, testing, and maintenance of data warehouse systems. Hands-on experience includes developing Data Warehouses/Data Marts/ODS within the Energy Utilities and Healthcare industries.
  • Primary technical skills: Teradata (TD15/TD14/TD13/TD12), Informatica PowerCenter (9.6/9.0.1/8.6/8.1), Hadoop (2.0/1.0), SQL Server, and Oracle (7.x to 11g).
  • Involved in full lifecycle implementation of multiple projects, including requirement gathering, system design, application development, enhancement, deployment, maintenance, and support.
  • Hands-on experience with Teradata (Teradata Administrator, Teradata SQL Assistant, BTEQ, FastLoad, MultiLoad, TPump, FastExport) in a data warehousing environment.
  • Strong experience in Teradata database design, implementation, and maintenance, mainly in large-scale data warehouse environments.
  • Strong Teradata SQL experience developing ETL with complex, tuned queries, including analytical functions and BTEQ scripts.
  • Expert in coding Teradata SQL, stored procedures, macros, and triggers.
  • Expertise in query analysis, performance tuning, and testing.
  • Hands-on experience monitoring and managing the varying mixed workload of an active data warehouse using tools such as PMON, Teradata Workload Analyzer, and Viewpoint.
  • Extensively worked with query tools such as SQL Assistant and SQL Developer.
  • Strong experience using the TDCH Teradata connector.
  • Proficient in Informatica PowerCenter tools: Designer, Repository Manager, Workflow Manager, and Workflow Monitor.
  • Good knowledge of logical and physical modeling using Erwin. Hands-on experience with 3NF, star/snowflake schema design, and denormalization techniques.
  • Solid experience designing and developing extraction, transformation, and loading (ETL) processes using Informatica PowerCenter.
  • Expertise in tuning the performance of Informatica mappings and sessions and in identifying performance bottlenecks.
  • Practical experience working with transformations like Filter, Router, Expression, Sequence Generator, Update Strategy, Joiner, Stored Procedure, Aggregator and Union to develop robust mappings in the Informatica Designer.
  • Proficient in implementing Complex business rules by creating re-usable transformations, workflows/worklets and Mappings/Mapplets. Comfortable with both technical and functional applications of RDBMS, Data Mapping, Data management, Data transportation and Data Staging.
  • Experience in working with various heterogeneous Source Systems like Oracle, SQL Server, Hadoop, ERP, SFDC, Flat files and Legacy systems.
  • Proficient in converting logical data models to physical database designs in Data warehousing Environment and in-depth understanding in Database Hierarchy, Data Integrity concepts and Data Analysis.
  • One year of hands-on experience working with Hadoop, HDFS, the MapReduce framework, and Hadoop ecosystem tools such as Hive, HBase, and Sqoop.
  • Extensive experience developing PySpark and Python scripts.
  • Experience with loading data to Hive.
  • Worked with the CTL-M and Workload Manager schedulers.
  • Experience writing UNIX shell scripts to support and automate ETL processes.
  • Involved in Unit Testing, Integration Testing and preparing test cases.
  • Involved in 24/7 production support activities while on call and resolved database issues.
  • Capably used mainframe technologies (COBOL-II, DB2, CICS, JCL, VSAM, ISPF, TSO, File Aid, Xpeditor, and MQ Series). Extensively worked on creating JCLs in the MVS environment.
  • Excellent communication and interpersonal skills, with a strong ability to work as part of a team as well as handle independent responsibilities.
  • Completed AWS training and am working toward certification.

TECHNICAL SKILLS

Databases: Teradata 15/14/13/12, Cloudera Hadoop, Oracle 10g/9i/8i, DB2, MS-SQL Server, MS-Access

DB Tools/Utilities: Teradata SQL Assistant, BTEQ, FastLoad, MultiLoad, FastExport, TPump, Teradata Manager, Teradata Query Manager, Teradata Administrator, PMON, SQL*Loader, TOAD 8.0.

ETL Tools: Informatica PowerCenter 9.6/9.0.1/8.6/8.1, Data Services

Data Modeling: Logical/Physical/Dimensional, Star/Snowflake/Extended-star schema, OLAP, ERwin.

Scheduling Tools: CTL-M, Workload Manager

Operating Systems: Sun Solaris 5.0/2.6/2.7/2.8/8.0, Linux, Windows, UNIX.

PROFESSIONAL EXPERIENCE

Confidential

Sr ETL Developer/Lead

Responsibilities:

  • Technology skill set: Teradata, Informatica, UNIX shell scripting, BTEQ scripting, Mainframes, CA-7.
  • Designing, developing, and testing BTEQ scripts to load data into the Teradata warehouse (a minimal sketch follows this list).
  • Analyzing existing BTEQ scripts to understand their current functionality and, in turn, determine where specific enhancements should be applied.
  • Creating new BTEQ modules and procedures based on business needs, and enhancing existing applications based on new business requirements.
  • Developing ad hoc processes to serve immediate requests from the business.
  • Effectively communicating with business users and stakeholders to gather, review, analyze, profile, validate, and map system data into the data warehouse to meet reporting and analytical business needs.
  • Working on a Hadoop POC to implement the NGE application on Hadoop.
  • Coordinating with development and business teams to commit to and meet target release timelines per agile methodology.
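
A minimal sketch of the kind of BTEQ load script described above, wrapped in a UNIX shell script. The logon file path and the database and table names (EDW_WRK.SALES_WRK, EDW_STG.SALES_STG) are hypothetical placeholders, not names from an actual engagement.

#!/bin/ksh
# Minimal BTEQ load sketch: insert/select from a landed work table into a
# staging table, quitting with a non-zero code on any SQL error.
# All object names and paths below are illustrative only.

LOGON_FILE=/home/etl/.tdlogon   # assumed to contain: .LOGON tdpid/user,password

bteq <<EOF
.RUN FILE=${LOGON_FILE}
.IF ERRORCODE <> 0 THEN .QUIT 8

INSERT INTO EDW_STG.SALES_STG (sale_id, sale_dt, amount)
SELECT sale_id, CAST(sale_dt AS DATE FORMAT 'YYYY-MM-DD'), amount
FROM   EDW_WRK.SALES_WRK;

.IF ERRORCODE <> 0 THEN .QUIT 8

.LOGOFF
.QUIT 0
EOF

rc=$?
if [ ${rc} -ne 0 ]; then
    echo "BTEQ load failed with return code ${rc}" >&2
    exit ${rc}
fi
echo "BTEQ load completed successfully"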

Environment: Teradata/Informatica/Hadoop/Mainframes/CA7/SQL Server/DMExpress/Hive/Spark

Confidential

Systems Analyst

Responsibilities:

  • Involved in requirements gathering, business analysis, design and development, testing, and implementation of business rules.
  • Developed scripts for loading data into the base tables in the EDW using the FastLoad, MultiLoad, and BTEQ utilities of Teradata.
  • Collected multi-column statistics on all non-indexed columns used in join operations and all columns used in residual conditions (see the statistics and compression sketch after this list).
  • Extensively used derived tables, volatile tables, and global temporary (GTT) tables in many of the ETL scripts.
  • Tuned Teradata SQL statements using EXPLAIN, analyzing data distribution among AMPs and index usage, collecting statistics, defining indexes, revising correlated subqueries, using hash functions, etc.
  • Used SQL to query the databases and performed as much crunching as possible in Teradata, applying complex SQL query optimization (explain plans, collect statistics, data distribution across AMPs, primary and secondary indexes, locking, etc.) to achieve better performance.
  • Performed bulk data loads from multiple data sources (Oracle 8i, legacy systems) to the Teradata RDBMS.
  • Used the BTEQ and SQL Assistant (Queryman) front-end tools to issue SQL commands matching the business requirements to the Teradata RDBMS.
  • Developed Teradata macros and stored procedures to load data into incremental/staging tables and then move data from staging into base tables.
  • Performed space management for perm and spool space.
  • Reviewed SQL for missing joins and join constraints, data format issues, mismatched aliases, and casting errors.
  • Developed procedures to populate the customer data warehouse with transaction data, cycle and monthly summary data, and historical data.
  • Dealt with initial, delta, and incremental data as well as migration data loaded into Teradata.
  • Analyzed data and implemented multi-value compression for optimal space usage.
  • Performed query analysis using EXPLAIN to check for unnecessary product joins, confidence factors, join types, and the order in which tables are joined.
  • Performed performance tuning, monitoring, and index selection using the Statistics Wizard, Index Wizard, and Teradata Visual Explain to view the flow of SQL queries as icons and make join plans more effective and faster.
  • Very good understanding of database skew, PPI, join methods and join strategies, and join indexes, including sparse, aggregate, and hash join indexes.
  • Excellent experience in performance tuning and query optimization of Teradata SQL.
  • Worked closely with the end users in writing the functional specifications based on the business needs.
  • Developed, documented and executed unit test plans for the components.
  • Automated reports and developed mail alerts for loads using shell scripts.
  • Extensively used performance-tuning techniques to minimize run time and created pre-session caches with the load balanced on the server.
  • Developed Spark applications using Scala and Java, and implemented an Apache Spark data processing project to handle data from various RDBMS and streaming sources.
  • Worked with Spark to improve performance and optimize existing algorithms in Hadoop using SparkContext, Spark SQL, Spark MLlib, DataFrames, pair RDDs, and Spark on YARN.
  • Experience creating Hive tables, loading them with data, and aggregating data by writing Hive queries.
  • Developed PySpark scripts to load data from CSV files into HDFS.
  • Worked on a TDCH wrapper script to load data from Hive tables to Teradata (see the TDCH sketch after this list).
  • Performed schema design for Hive and optimized the Hive configuration.
  • Involved in designing the data model in Hive for migrating the ETL process into Hadoop and wrote PySpark scripts to load data into the Hadoop environment.
  • Designed and built a Spark Streaming application using StreamSets that analyzes and evaluates streaming data against business rules through a rules engine and then sends alerts to business users to address customer preferences and run product promotions.
  • Developed shell scripts to validate files and transfer them via SFTP (see the validation and SFTP sketch after this list).
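
A minimal sketch of the statistics collection and multi-value compression work described above, run through BTEQ from a shell script. The database, table, and column names and the compressed values (EDW.CLAIM, claim_status, 'A'/'D'/'P') are hypothetical placeholders.

#!/bin/ksh
# Sketch: multi-column statistics, multi-value compression, and an EXPLAIN
# check on a representative join. All object names are illustrative only.

bteq <<'EOF'
.RUN FILE=/home/etl/.tdlogon

/* Multi-column statistics on non-indexed join columns. */
COLLECT STATISTICS ON EDW.CLAIM COLUMN (member_id, claim_dt);
COLLECT STATISTICS ON EDW.CLAIM COLUMN (claim_status);

/* Multi-value compression defined on a low-cardinality column to save perm space. */
CREATE TABLE EDW.CLAIM_NEW
( claim_id     INTEGER NOT NULL,
  member_id    INTEGER,
  claim_dt     DATE,
  claim_status CHAR(1) COMPRESS ('A','D','P')
)
PRIMARY INDEX (claim_id);

/* Inspect the optimizer plan for a typical join after the changes. */
EXPLAIN
SELECT m.member_id, COUNT(*)
FROM   EDW.CLAIM c
JOIN   EDW.MEMBER m ON m.member_id = c.member_id
WHERE  c.claim_status = 'P'
GROUP BY m.member_id;

.LOGOFF
.QUIT 0
EOF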
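
The TDCH wrapper mentioned above can be sketched as a shell script around the Teradata Connector for Hadoop export tool. The jar location, JDBC URL, credentials, and table names are hypothetical, and the option names follow TDCH's documented command-line tool but may differ between releases, so treat the exact flags as an assumption to verify against the installed version.

#!/bin/ksh
# Sketch: export a Hive table to a Teradata table with TDCH.
# Jar path, URL, credentials, and table names are illustrative only;
# confirm option names against the TDCH version actually installed.

TDCH_JAR=/usr/lib/tdch/teradata-connector.jar   # assumed install location
TD_URL="jdbc:teradata://tdprod/database=EDW"    # hypothetical Teradata system
TD_USER=etl_user
TD_PASS_FILE=/home/etl/.tdpass                  # assumed credential file

hadoop jar "${TDCH_JAR}" com.teradata.connector.common.tool.ConnectorExportTool \
    -url "${TD_URL}" \
    -username "${TD_USER}" \
    -password "$(cat ${TD_PASS_FILE})" \
    -jobtype hive \
    -sourcedatabase default \
    -sourcetable claims_daily \
    -targettable CLAIMS_DAILY \
    -method batch.insert \
    -nummappers 8

if [ $? -ne 0 ]; then
    echo "TDCH export of claims_daily to Teradata failed" >&2
    exit 1
fi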
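
A minimal sketch of the file validation and SFTP transfer scripts described above, including the kind of mail alert used for load notifications. The file path, remote host, directory, and alert address are hypothetical, and key-based SFTP authentication is assumed.

#!/bin/ksh
# Sketch: validate an extract file, then transfer it via SFTP, mailing an
# alert on any failure. All paths, hosts, and addresses are illustrative only.

FILE=/data/out/claims_extract.dat
REMOTE_HOST=sftp.partner.example.com
REMOTE_DIR=/inbound
ALERT_TO=etl-support@example.com

# Basic validation: the file must exist and be non-empty.
if [ ! -s "${FILE}" ]; then
    echo "Extract file ${FILE} is missing or empty" | mailx -s "ETL extract validation failed" "${ALERT_TO}"
    exit 1
fi

rec_count=$(wc -l < "${FILE}")
echo "Validated ${FILE}: ${rec_count} records"

# Transfer via SFTP using a batch of commands read from stdin.
sftp -b - "${REMOTE_HOST}" <<EOF
cd ${REMOTE_DIR}
put ${FILE}
bye
EOF

if [ $? -ne 0 ]; then
    echo "SFTP transfer of ${FILE} to ${REMOTE_HOST} failed" | mailx -s "ETL SFTP transfer failed" "${ALERT_TO}"
    exit 1
fi
echo "Transferred ${FILE} to ${REMOTE_HOST}:${REMOTE_DIR}"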

Environment: Teradata 13/14/15, Teradata SQL Assistant, Teradata Viewpoint, Teradata Utilities, Informatica 9.0, Mainframes DB2, Erwin Designer, UNIX/Linux, Windows 2000, Hadoop, Hive, PySpark, Python, StreamSets.

Confidential

Programmer Analyst

Responsibilities:

  • Involved in gathering business requirements, logical modeling, physical database design, data sourcing and data transformation, data loading, SQL and performance tuning.
  • Performed source data analysis and provided various business reports to validate the business requirements.
  • Participated in data analysis, data profiling, data dictionary, and metadata management. Used SQL for data profiling.
  • Developed scripts to load data from source to target tables using the FastLoad, MultiLoad, and BTEQ utilities of Teradata. Wrote scripts for data cleansing, validation, and transformation of data coming from different source systems.
  • Involved in dimensional modeling to design and develop star schemas, using ERwin to design fact and dimension tables.
  • Worked closely with user decision makers to develop the transformation logic used in Informatica PowerCenter.
  • Identified and tracked slowly changing dimensions and heterogeneous sources, and determined the hierarchies in dimensions.
  • Used Workflow Manager for workflow and session management, database connection management, and scheduling of jobs to run in the batch process.
  • Developed a number of complex Informatica mappings, mapplets, and reusable transformations for different types of studies for daily, monthly, and yearly loading of data.
  • Fixed invalid mappings and tested stored procedures.
  • Created reusable transformations and reusable mapplets for use in Mappings.
  • Extensively used environment SQL commands in workflows prior to extracting data in the ETL tool.
  • Created Stored Procedures for data transformation purpose.
  • Monitored the sessions using Workflow Monitor.
  • Used stored procedures to create a standard time dimension and to drop and re-create indexes before and after loading data into the targets (see the index-maintenance sketch after this list).
  • Removed bottlenecks at the source, transformation, and target levels for optimum usage of sources, transformations, and target loads.
  • Captured data error records, corrected them, and loaded them into the target system.
  • Created mappings, mapplets, and transformations that remove any duplicate records from the source.
  • Implemented efficient and effective performance-tuning procedures.
  • Tuned the source and target systems based on performance details; once source and target were optimized, sessions were run again to determine the impact of the changes.
  • Worked with the Informatica PowerCenter tools: Designer, Repository Manager, Workflow Manager, and Workflow Monitor.
  • Used various transformations such as Filter, Expression, Update Strategy, Joiner, Stored Procedure, Aggregator, and Java to develop robust mappings in Informatica Designer.
  • Wrote complex SQL using joins, subqueries, and correlated subqueries. Expertise in SQL queries for cross-verification of data.
  • Reduced the number of conversion errors by analyzing, identifying, and validating the data, and provided fixes after every conversion run, which increased data posting in the DPI system.
  • Audited and certified the mock data.
  • Assisted Informatica ETL developers and provided clarification and solutions during development activity.
  • Performed unit testing to validate the ETL development and ensure it meets the business requirements and data mapping specification.
  • Coordinated and worked with the UAT and testing teams to reduce the number of defects raised.
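
A minimal sketch of the pre-load/post-load index handling mentioned above, using SQL*Plus from a shell script around an Informatica-driven load of a fact table. The connect string, schema, table, and index names are hypothetical placeholders.

#!/bin/ksh
# Sketch: drop a non-unique index before the bulk load and re-create it after.
# Connect string, table, and index names are illustrative only; ETL_PWD is
# assumed to be supplied by the environment or the scheduler.

ORA_CONN="etl_user/${ETL_PWD}@DWPROD"

sqlplus -s "${ORA_CONN}" <<EOF
WHENEVER SQLERROR EXIT SQL.SQLCODE
DROP INDEX sales_fact_dt_idx;
EXIT
EOF

# ... the Informatica session loads SALES_FACT here (started by the scheduler) ...

sqlplus -s "${ORA_CONN}" <<EOF
WHENEVER SQLERROR EXIT SQL.SQLCODE
CREATE INDEX sales_fact_dt_idx ON sales_fact (date_key);
EXIT
EOF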

Environment: Informatica PowerCenter 7.1/6.2, ERwin 4.0, Oracle 9i, Shell Scripting, SQL, PL/SQL, SQL*Loader, Toad, Visual Basic 8.0, IIS, UNIX, Sun Solaris 2.6, Windows 2000 Server

Confidential

Programmer Analyst

Responsibilities:

  • Responsible for Business Analysis and Requirements Collection.
  • Translated requirements into business rules and made recommendations for innovative IT solutions.
  • Developed scripts for loading data into the base tables in the EDW using the FastLoad, MultiLoad, and BTEQ utilities of Teradata.
  • Assisted the DBA in creating tables and views in the Teradata and Oracle databases.
  • Dealt with incremental data as well as migration data loaded into Teradata.
  • Tuned batch BTEQ queries.
  • Enhanced queries in other applications to run faster and more efficiently.
  • Extracted data from Teradata, processed/transformed it using ksh programs, and loaded it into the data mart.
  • Used various Teradata index techniques to improve query performance.
  • Used the BTEQ and SQL Assistant (Queryman) front-end tools to issue SQL commands matching the business requirements to the Teradata RDBMS.
  • Updated numerous BTEQ/SQL scripts, made appropriate DDL changes, and completed unit and system testing.
  • Created a series of macros for various applications in Teradata SQL Assistant (see the macro sketch after this list).
  • Responsible for loading data into the warehouse from different sources, using MultiLoad and FastLoad to load millions of records (see the FastLoad sketch after this list).
  • Performed tuning and optimization of complex SQL queries using Teradata Explain.
  • Documented data conversion, integration, load and verification specifications.
  • Parsed high-level design specifications into simple ETL coding and mapping standards.
  • Worked with the various enterprise groups to document user requirements, translate requirements into system solutions and produce development, testing and implementation plan and schedule.
  • Used transformations such as Joiner, Expression, Connected and Unconnected Lookup, Filter, Aggregator, Stored Procedure, Rank, Update Strategy, Router, and Sequence Generator.
  • Developed, documented and executed unit test plans for the components.
  • Documented the developed code and ran sessions and workflows while keeping track of source and target row counts.
  • Collected performance data for sessions and tuned performance by adjusting Informatica session parameters.
  • Extensively used unconnected lookups to minimize run time and improve server performance, while making sure to avoid duplicate data from other source systems.
  • Created pre-session and post-session shell scripts and mail-notifications.
  • Extensively worked on the Informatica Designer, Repository Manager, Repository Server, Workflow Manager/Server Manager and Workflow Monitor.
  • Extensively used performance-tuning techniques to minimize run time and created pre-session caches with the load balanced on the server.
  • Created Workflows containing command, email, session, decision and a wide variety of tasks.
  • Employed performance tuning to improve the performance of the entire system.
  • Used Informatica as an ETL tool to create source/target definitions, mappings and sessions to extract, transform and load data into staging tables from various sources.
  • Understood the entire functionality and major algorithms of the project and adhered to the company's testing process.
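
A minimal sketch of the kind of Teradata macro described above, created through BTEQ from a shell script and then executed from SQL Assistant. The database, macro, table, and column names (EDW.GET_OPEN_CLAIMS, EDW.CLAIM) are hypothetical placeholders.

#!/bin/ksh
# Sketch: create a parameterized macro; object names are illustrative only.

bteq <<'EOF'
.RUN FILE=/home/etl/.tdlogon

REPLACE MACRO EDW.GET_OPEN_CLAIMS (in_member_id INTEGER) AS
( SELECT claim_id, claim_dt, claim_status
  FROM   EDW.CLAIM
  WHERE  member_id = :in_member_id
  AND    claim_status = 'P'; );

/* Typical invocation from SQL Assistant or BTEQ: */
EXEC EDW.GET_OPEN_CLAIMS (1001);

.LOGOFF
.QUIT 0
EOF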
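
A minimal sketch of a FastLoad script of the kind used for the high-volume loads above, driven from a shell script. The logon, staging table, error tables, and input file are hypothetical placeholders.

#!/bin/ksh
# Sketch: FastLoad a pipe-delimited file into an empty staging table.
# Logon details, table names, and file path are illustrative only.

fastload <<'EOF'
SESSIONS 4;
ERRLIMIT 25;
LOGON tdpid/etl_user,password;

DROP TABLE EDW_STG.SALES_ERR1;
DROP TABLE EDW_STG.SALES_ERR2;
DELETE FROM EDW_STG.SALES_STG;

BEGIN LOADING EDW_STG.SALES_STG
      ERRORFILES EDW_STG.SALES_ERR1, EDW_STG.SALES_ERR2;
SET RECORD VARTEXT "|";
DEFINE sale_id (VARCHAR(10)),
       sale_dt (VARCHAR(10)),
       amount  (VARCHAR(15)),
FILE=/data/in/sales.txt;

INSERT INTO EDW_STG.SALES_STG (sale_id, sale_dt, amount)
VALUES (:sale_id, :sale_dt, :amount);

END LOADING;
LOGOFF;
EOF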

Environment: Informatica PowerCenter 7.1, Oracle 10g, PL/SQL Developer, SQL Server 2005, UNIX, Oracle 8i, MS-Access, TOAD, Teradata Administrator, Teradata V2R5, Teradata SQL Assistant, BTEQ, Windows 2003
