Sr ETL Developer Resume

Kansas City, MO

SUMMARY

  • IBM InfoSphere DataStage certified professional with over 12 years of experience in developing, testing, and maintaining Data Warehouse applications using ETL tools such as IBM InfoSphere DataStage (versions 11.5/9.x/8.x), QualityStage, Information Analyzer, and Informatica
  • Experience in working with reporting and analytics tools like SAP Crystal Reports and Tableau
  • Experience in writing SQL code, views, and PL/SQL procedures and functions
  • Experience in the Banking, Insurance, Automobile, and Retail domains
  • Experience in UNIX, Oracle, SQL Server, DB2, and Teradata
  • Experience in Production Support.
  • Project lead for multiple projects.
  • Imported large data sets from RDBMS into HDFS using Sqoop (see the sketch after this list)
  • Created Hive tables and was involved in data loading and writing Hive UDFs
  • Used Hive features such as partitioning, bucketing, and indexing
  • Provided ad-hoc queries and data metrics to business users using Hive
  • Extracted data from Mainframe sources through the Complex Flat File (CFF) stage
  • Scheduled jobs through Autosys JIL
  • Experience in SAP, including extracting data from SAP sources
  • Experience in the Hadoop stack
  • Experience in estimation, planning, forecasting and tracking for projects
  • Coordinated with multiple development teams to track project progress
  • Experience in all Phases of System/Software Development Life Cycle (Process Consulting, Architecture, Design, Development, Testing, Deployment and Support).
  • Extensive experience in design and architecture of ETL interfaces and data marts.
  • Experience in Python Scripting
  • Familiar with aspects of technology projects like Business Requirements, Technical Architecture, Design Specification, Development and Deployment.
  • Proficiency in data warehousing techniques for data cleansing, slowly changing dimensions, surrogate key assignment, and change data capture
  • Experience in ETL and data warehouse tools such as DataStage and Informatica
  • Experience in Agile Methodology
  • Experience in Toad
  • Excellent knowledge in Business Intelligence, Analytics and Optimization.
  • Proficiency in Data Warehouse architecture and in designing Star Schemas, Snowflake Schemas, fact and dimension tables, and physical and logical data models using Erwin and Designer 2000
  • Extensive experience in loading high volume data, and performance tuning.
  • Excellent interpersonal communication skills and ability to work effectively in a team environment
  • Good understanding of RDBMSs such as Oracle, Teradata, DB2, and SQL Server; worked extensively on data integration using DataStage for the extraction, transformation, and loading of data from various database systems
  • Experience in data warehousing concepts and ETL programming using DataStage Parallel Extender, including analysis, cleansing, transformation, debugging/testing, and data loading across source systems
  • Experience in writing UNIX scripts
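
The Sqoop-to-Hive workflow above follows a common pattern; below is a minimal sketch of it. The connection string, credentials, paths, and table names are hypothetical placeholders, not values from an actual engagement.

    # Illustrative only: import an RDBMS table into HDFS with Sqoop.
    # Host, credentials, table, and target directory are hypothetical.
    sqoop import \
      --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
      --username etl_user \
      --password-file /user/etl/.db_pwd \
      --table CUSTOMERS \
      --target-dir /data/raw/customers/load_dt=2018-01-01 \
      --num-mappers 4 \
      --fields-terminated-by ','

    # Expose the imported files as a partitioned external Hive table.
    hive -e "
      CREATE EXTERNAL TABLE IF NOT EXISTS customers (
        cust_id   BIGINT,
        cust_name STRING
      )
      PARTITIONED BY (load_dt STRING)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
      LOCATION '/data/raw/customers';
      ALTER TABLE customers ADD IF NOT EXISTS PARTITION (load_dt='2018-01-01')
        LOCATION '/data/raw/customers/load_dt=2018-01-01';
    "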

TECHNICAL SKILLS

ETL Tools: IBM InfoSphere DataStage 11.x/9.x/8.x/7.5x2, QualityStage, Business Glossary, Information Analyzer, FastTrack, Informatica PowerCenter 10.1.1/9.6, SSIS

Big Data Ecosystems: HDFS, MapReduce, Hive, Pig, Sqoop, Flume, Oozie, Hadoop Streaming, Apache Spark, Scala, Kafka, Ambari, Tez

Reporting Tools: SAP Crystal Reports 2016 (BI 4.2), Tableau

RDBMS: Snowflake, Teradata V13/V14, Oracle 12c/11g/10g/9i, DB2, SQL Server, Netezza, MySQL

Scheduling Tools: Autosys, Crontab, CA7

Version Control Tools: GitHub, PVCS, AccuRev

Tracking Tools: JIRA, Bugzero, ServiceNow

Change Management Tools: Remedy, Cherwell

Other Technologies: Citrix, PuTTY, Hummingbird, WinSCP

Programming Languages: Python, SQL, PL/SQL, Shell Scripting, XML, Java, J2EE

PROFESSIONAL EXPERIENCE

Confidential, Kansas City, MO

Sr ETL Developer

Responsibilities:

  • Involved in design, analysis, and architecture meetings. Created architecture diagrams and flow charts using Microsoft Visio.
  • Developed ETL code using parallel stages such as Join, Merge, Lookup, Filter, Remove Duplicates, Funnel, Row Generator, Slowly Changing Dimension, Modify, and Peek for development and debugging purposes.
  • Experience in writing UNIX shell scripts
  • Used SQL to extract data from Oracle database by creating user defined SQL statements.
  • Transferred files to other systems using FTP, SFTP, and AS2.
  • Analyzed performance issues and provided quick approaches to resolve them.
  • Met SLAs and provided accurate results to client management.
  • Solved performance issues whenever production team raised a concern on performance.
  • Participated in the Analysis phase to analyze the source data to gather the requirements.
  • Performed Unit Testing, Integration Testing and User Acceptance testing for code change and enhancement.
  • Worked in an Agile methodology.
  • Created indexes on the tables for faster retrieval of the data to enhance database performance.
  • Developed Complex database objects like Stored Procedures, Functions using SQL and PL/SQL.
  • Worked on Autosys for scheduling and automating the daily/monthly job flows (a JIL sketch follows this list).
  • Involved in the build and testing of small to medium-scale projects.
  • Utilized business knowledge to collaborate and offer technical solutions.
  • Extracted data from SAP IDOC source systems.
  • Created Hive tables in Hadoop.
  • Worked with the Hadoop file system (HDFS) and Hive.
  • Used the File Connector stage to load Hive tables.
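
A hedged sketch of the Autosys JIL scheduling mentioned above, fed to the standard jil command-line utility; the job names, script paths, machine, owner, and start times are hypothetical placeholders.

    # Illustrative JIL piped to the AutoSys jil utility; all names are
    # hypothetical. Defines a daily command job plus a dependent job.
    jil <<'EOF'
    insert_job: DW_DAILY_LOAD
    job_type: c
    command: /opt/etl/scripts/run_daily_load.sh
    machine: etlserver01
    owner: etluser
    start_times: "02:00"
    days_of_week: all
    std_out_file: /var/log/etl/dw_daily_load.out
    std_err_file: /var/log/etl/dw_daily_load.err
    alarm_if_fail: 1

    insert_job: DW_DAILY_RPT
    job_type: c
    command: /opt/etl/scripts/run_daily_report.sh
    machine: etlserver01
    owner: etluser
    condition: success(DW_DAILY_LOAD)
    EOF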

Environment: IBM InfoSphere DataStage v11.5, Python, SAP IDOC, Teradata, DB2, Oracle 12c, UNIX shell scripting, ServiceNow, SAP Crystal Reports 2016 (BI 4.2), Seeburger, Hive, Sqoop, Big Data

Confidential, Kansas City, MO

ETL Developer

Responsibilities:

  • Performed debugging, troubleshooting, monitoring and performance tuning using DataStage.
  • Used different Parallel Extender Partitioning techniques in the stages to facilitate the best parallelism in the Parallel Extender jobs.
  • Developed complex stored procedures using input/output parameters and complex queries using temp tables and joins
  • Extracted data from Oracle database.
  • Worked on DataStage server routines.
  • Responsible for analyzing and preparing dependencies and sequence in which job/processes should be scheduled in Enterprise Scheduling tools using Autosys.
  • Ran and monitored jobs using IBM InfoSphere DataStage Director and checked logs in lower environments to ensure proper testing (a dsjob sketch follows this list).
  • Worked with the Hadoop file system (HDFS) and Hive.
  • Used the Hive Connector stage to load Hive tables.
  • Analyzed and fixed production issues.
  • Extracted data from flat files and XML sources.
  • Experience in Web Services (SOAP and REST)
  • Designed and tracked component changes and checked for error-free code builds across instances throughout the project.
  • Performed defect tracking and resolution.
  • Coordinated with business and testing teams as applicable.
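
A minimal sketch of running and checking a DataStage job from the shell with the dsjob utility, as in the monitoring work above; the project and job names are hypothetical.

    # Illustrative use of IBM's dsjob CLI; PROJECT/JOB are placeholders.
    PROJECT=DW_PROJECT
    JOB=J_LOAD_CUSTOMER

    # Run the job and wait; the exit status reflects how the job finished.
    dsjob -run -jobstatus "$PROJECT" "$JOB"
    rc=$?

    # Summarize the job log to spot warnings and errors.
    dsjob -logsum "$PROJECT" "$JOB"

    if [ "$rc" -ne 0 ]; then
        echo "Job $JOB did not finish cleanly (status $rc)" >&2
        exit "$rc"
    fi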

Environment: IBM InfoSphere DataStage v11.3, Oracle 12c, UNIX shell scripting, JIRA, Bugzero, HP Service Manager, Autosys, SQL Server, SAP Crystal Reports 2016 (BI 4.2), Hive, Sqoop, Big Data, Podium

Confidential, New Jersey

ETL Lead

Responsibilities:

  • Worked on full software development cycle activities including analysis, design, development, testing, deployment to production and support.
  • Developed detailed ETL design documents, ETL specifications and data mappings based on requirements
  • Wrote UNIX shell scripts to automate daily batch runs, including catch-up runs, by passing the last run date and last pull date and using infrastructure ETL run tables (see the catch-up sketch after this list)
  • Scheduled jobs through Autosys JIL.
  • Involved in documentation for Design Review, Code Review and Production Implementation.
  • Experience in writing rule sets using QualityStage.
  • Experience in writing survivorship rules using QualityStage.
  • Experience with the Investigate, Standardize, and Match stages
  • Performed column analysis, rule analysis, primary key analysis, natural key analysis, foreign key analysis, and cross-domain analysis using IBM Information Analyzer
  • Configured and coded data quality validation rules within IBM Information Analyzer
  • Performed Information Analyzer administrative tasks such as managing logs, schedules, active sessions, and security roles
  • Imported/exported projects, along with rules and bindings, from one environment to another using Information Analyzer
  • Involved in low-level design documents for mapping files from source to target and implementing business logic.
  • Conducted all review meetings and explained the process of new implementations.
  • Coordinated with team members and administered all onsite and offshore work packages.
  • Prepared documentation, including requirement specifications
  • Worked with Developers to troubleshoot and resolve issues in job logic as well as performance.
  • Conducted weekly status meetings.
  • Maintained Data Warehouse by loading dimensions and facts as part of project.
  • Experienced in developing DataStage parallel and server jobs using various development/debug stages (Peek, Head & Tail, Row Generator, Column Generator, Sample) and processing stages (Aggregator, Change Capture, Filter, Sort, Merge, Funnel, Remove Duplicates)
  • Generated surrogate keys for the dimension and fact tables for indexing and faster data access in the Data Warehouse.
  • Debugged, tested, and fixed the transformation logic applied in DataStage parallel and server jobs
  • Successfully implemented pipeline and partitioning parallelism techniques and ensured load balancing of data.
  • Deployed different partitioning methods, such as Hash by column, Round Robin, Entire, Modulus, and Range, for bulk data loading and for a performance boost.
  • Developed UNIX shell scripts to automate file manipulation and data loading procedures.
  • Tuned transformations and jobs for Performance Enhancement.
  • Scheduled the jobs using AutoSys
  • Involved in reading and writing XML
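
A minimal sketch of the catch-up logic described above, assuming the last run date is persisted in a parameter file and that the DataStage job accepts a RunDate parameter; all paths and names are hypothetical.

    #!/bin/sh
    # Illustrative catch-up driver: replay one DataStage run per missed day.
    PARAM_FILE=/opt/etl/params/last_run_date.txt   # hypothetical path
    PROJECT=DW_PROJECT                             # hypothetical project
    JOB=J_DAILY_BATCH                              # hypothetical job

    LAST_RUN=$(cat "$PARAM_FILE")   # e.g. 2018-01-01
    TODAY=$(date +%Y-%m-%d)

    RUN_DATE=$LAST_RUN
    while [ "$RUN_DATE" != "$TODAY" ]; do
        # GNU date arithmetic; AIX would need a different date idiom.
        RUN_DATE=$(date -d "$RUN_DATE + 1 day" +%Y-%m-%d)
        dsjob -run -jobstatus -param RunDate="$RUN_DATE" "$PROJECT" "$JOB" || exit 1
        echo "$RUN_DATE" > "$PARAM_FILE"   # persist progress after each day
    done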

Environment: IBM InfoSphere DataStage, QualityStage & Information Analyzer 11.x, SQL, Business Objects XI R2, DB2, Teradata, Netezza, SQL Server, Oracle, Zena, PVCS, SAP, PL/SQL, TOAD, UNIX (AIX), MS Word, Excel, IBM InfoSphere Master Data Management Enterprise Edition, XML

Confidential

ETL Developer

Responsibilities:

  • Used the Autosys job scheduler to schedule DataStage daily and weekly jobs.
  • Production Support - Identify issues, review errors, approve resolution, and advise of potential downstream impacts
  • Experience in debugging ETL jobs to check for errors and warnings associated with each job run
  • Worked on data analysis, data correction, and cleanup for warehouse tables
  • Responsible for debugging ticketing issues in the production phase
  • Maintained passwords in the production environment
  • Performed code migrations in the production environment
  • Worked on new generic UNIX scripts and existing scripts to automate some of the warehouse functionality.
  • Monitored Automobile-domain environments and ran defined data quality checks
  • Maintain relationships with IT and Business.
  • First line support for Business Users and Analysts including mailbox support, researching questions and maintaining web-based knowledge base.
  • Created a shell script to run DataStage jobs from UNIX, then scheduled that script through the scheduling tool.
  • Performed performance tuning of the jobs by interpreting performance statistics of the jobs developed.
  • Documented ETL test plans, test cases, test scripts, and validations based on design specifications for unit testing, system testing, functional testing, prepared test data for testing, error handling and analysis.
  • Developed Test Plan that included the scope of the release, entrance and exit criteria and overall test strategy. Created detailed Test Cases and Test sets and executed them manually
  • Worked with Teradata FastLoad and MultiLoad to load data into the Teradata database (a FastLoad sketch follows this list)
  • Provided production support and performed enhancement on existing multiple projects.
  • Automated DataStage jobs to handle multiple-day catch-up runs for any missed days, using UNIX shell scripts that pass the last run date and last pull date and use infrastructure ETL run tables
  • Validated the current production data, existing data and the programming logic involved
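
A hedged sketch of a Teradata FastLoad script for the bulk loads noted above; the TDP id, credentials, staging table, and input file are hypothetical placeholders.

    # Illustrative FastLoad invocation; all names are hypothetical.
    # Note: FastLoad requires an empty target table.
    fastload <<'EOF'
    LOGON tdprod/etl_user,etl_password;
    DATABASE dw_stage;
    BEGIN LOADING stg_customer
        ERRORFILES stg_customer_err1, stg_customer_err2;
    SET RECORD VARTEXT "|";
    DEFINE cust_id   (VARCHAR(18)),
           cust_name (VARCHAR(100))
        FILE = /data/in/customer.dat;
    INSERT INTO stg_customer (cust_id, cust_name)
        VALUES (:cust_id, :cust_name);
    END LOADING;
    LOGOFF;
    EOF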

Environment: IBM InfoSphere DataStage, QualityStage & Information Analyzer 8.x, Autosys, Teradata, Oracle, Netezza, SQL Server, DB2, UNIX, PVCS

Confidential

ETL Developer

Responsibilities:

  • Designed, developed, tested, and tuned DataStage jobs and mappings. Analyzed and modified existing ETL objects to incorporate new changes according to project requirements
  • Extensively developed DataStage parallel and sequence jobs.
  • Worked on performance tuning of ETL jobs using SQL queries, data connections, configuration files, parameter sets and environment variables.
  • Used Autosys to schedule jobs and e-mailed ETL job status to the operations team daily (a status-mail sketch follows this list)
  • Developed DataStage parallel jobs using Sequential File, Transformer, Lookup, Funnel, Filter, Remove Duplicates, Oracle, and Teradata stages
  • Performed knowledge transfer (KT) to team members and led the whole project in all phases.
  • Used shared containers to reuse stages and links in DataStage parallel jobs.
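
A small sketch of the daily status e-mail mentioned above, assuming dsjob and mailx are available on the ETL server; the project, job list, and address are hypothetical.

    # Illustrative daily status mail; all names are hypothetical.
    PROJECT=DW_PROJECT
    REPORT=/tmp/etl_status_$(date +%Y%m%d).txt

    # Collect run information for each job into the report file.
    for JOB in J_LOAD_DIM J_LOAD_FACT; do
        echo "=== $JOB ===" >> "$REPORT"
        dsjob -jobinfo "$PROJECT" "$JOB" >> "$REPORT" 2>&1
    done

    # Mail the consolidated report to the operations team.
    mailx -s "Daily ETL status $(date +%Y-%m-%d)" ops-team@example.com < "$REPORT"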

Environment: Datastage 7.5, Autosys, Teradata, Oracle, UNIX, SQL Server, DB2
