ETL DataStage Developer Resume
Minnetonka, MN
SUMMARY:
- Around 5 years of IT experience in the design, development, and implementation of data warehouses, integrating a vast array of data sources using Confidential InfoSphere Information Server 9.1/8.1 and Confidential DataStage Enterprise 11.3/9.1/8.7/8.5/8.1 in a client-server environment.
- Experienced in all phases of the Software Development Life Cycle (SDLC): analysis, design, coding, and testing.
- Expertise in working with different DataStage editions: DataStage Standard (server and sequence jobs) and DataStage Enterprise (server, sequence, and parallel jobs), whose parallel-processing features allow jobs to be developed on a UNIX or Windows server and transferred to the mainframe to be compiled and run.
- Extensively worked with DataStage client tools: DS Designer, DS Director, DS Manager, and DS Administrator.
- Designed DataStage jobs to extract data from XML files using the XML Input stage, and used the XML Transformer stage to cleanse and transform the data before loading it into the data mart.
- Experienced in converting PL/SQL code into DataStage ETL jobs.
- Strong experience with and knowledge of Relational Database Management System (RDBMS) concepts across several databases, including SQL Server 2012/2008/2005, Oracle 12c/11i/10g/9i, and Teradata 12.0/15.0.
- Hands-on experience creating indexed views, complex stored procedures, and triggers to support efficient data manipulation and data consistency.
- Experienced in UNIX shell scripting (Korn shell) for process automation, file manipulation, file transfer between servers, and scheduling DataStage jobs.
- Experienced in creating test cases and scripts and in performing unit testing.
- Experienced with code deployments, version controlling and production support (on - call).
- Strong understanding of data warehouse concepts like Star Schema, Snowflake modeling, Fact and Dimensions tables, Physical and logical data modeling.
- Strong skills in data mapping for Slowly Changing Dimensions (SCD Type 1 and SCD Type 2); implemented complex business rules by creating reusable transformations and mappings/mapplets.
- Proficient in creating data mappings and data dictionaries and understanding business data relationships.
- Interacted with developers, business analysts, DBAs, and deployment and release planning teams.
- Effective leadership skills with strong written, verbal, and presentation skills; self-motivated with a results-oriented approach; committed collaborator and team leader.
TECHNICAL SKILLS:
ETL Tools: IBM InfoSphere DataStage 11.3/9.1/8.7/8.5/8.1 (Designer, Director, Parallel Extender)
RDBMS: DB2, Teradata V12/15, Microsoft SQL Server v11/10, Oracle 12.1/8i/9i/10g
Languages: SQL, PLSQL, UNIX Shell, C, C++, HTML, XML
Other Tools & Utilities: Autosys, Tivoli Workload Scheduler (TWS), SQL Assistant 12.0, Erwin 4.0, SVN, UltraEdit
Operating Systems: UNIX, Linux, Windows 2000/2003/XP/VISTA/7/8
PROFESSIONAL EXPERIENCE:
Confidential, Minnetonka, MN
ETL DataStage Developer
Responsibilities:
- Used DataStage 11.3 as an ETL tool for enhancements and new development.
- Interacted with End users to understand the business requirements and in identifying data sources.
- Analyzed business requirements and provided ETL designs.
- Extracted data from flat files, transformed the data according to requirements, and loaded it into target sequential files.
- Extensively worked on stages such as Copy, Funnel, Transformer, Merge, Join, Lookup, Sort, Pivot, Remove Duplicates, and Containers (parallel) when developing jobs.
- Responsible for reading input data, processing, and reporting.
- Created multi-instance jobs to save time and increase efficiency.
- Developed DataStage jobs and executed them via shell scripts.
- Created job sequences for easy maintenance and improved performance.
- Prepared shared containers for reusability.
- Attended daily business meetings to discuss project status and any issues or concerns.
- Performed unit testing on developed jobs to ensure they met requirements.
- Wrote SQL queries for data extraction and loading.
- Modified shell scripts to run DataStage jobs from UNIX.
- Removed special characters from input files.
- Transferred files between UNIX and Windows servers.
- Developed UNIX scripts for verification and validation of input and output files.
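The special-character cleanup and file-validation steps above can be sketched as a small shell script. This is a minimal illustration, not the production script; the file name, delimiter, and expected record count are assumptions.

```shell
#!/bin/sh
# Illustrative pre-load cleanup: strip carriage returns and other
# non-printable characters from a feed file before a DataStage job reads it.
clean_file() {
    # keep printable ASCII plus tabs and newlines; drop CR, NUL, etc.
    tr -cd '[:print:]\t\n' < "$1" > "$1.clean"
}

# Illustrative validation: fail fast if the record count is not as expected.
validate_file() {
    file="$1"; expected="$2"
    actual=$(wc -l < "$file")
    [ "$actual" -eq "$expected" ] || { echo "count mismatch: $actual" >&2; return 1; }
}

printf 'id|name\r\n1|alice\r\n' > input.txt   # sample feed with Windows CRs
clean_file input.txt
validate_file input.txt.clean 2 && echo "input.txt.clean OK"
```

The same pattern extends to per-feed character whitelists and header checks driven by a control file.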
Environment: IBM Information Server DataStage 11.3, Microsoft SQL Server v11, SQL, Linux, TWS, UltraEdit
Confidential, Charleston, SC
ETL DataStage Developer
Responsibilities:
- Used DataStage 9.1 as an ETL tool for both enhancements and new development.
- Extensively worked on different types of stages like Sequential file, ODBC, Hashed File, Aggregator, Transformer, Merge, Join, Lookup, Sort and Containers (Server, Parallel) for developing jobs.
- Analysis and design of ETL processes.
- Gathered requirements and provided development estimates.
- Responsible for managing enhancements to existing data marts as well as developing new data marts to meet new business requirements.
- Wrote SQL queries for joins and other table modifications.
- Created multi-instance jobs to save time and increase efficiency.
- Attended weekly business meetings to discuss project status and any issues or concerns.
- Documented test cases and performed Unit testing.
- Modified shell scripts to run DataStage jobs from UNIX.
- Monitored DataStage jobs on a daily basis by running UNIX shell scripts.
- Created job sequences for easy maintenance and improved performance.
- Converted complex job designs to different job segments and executed through job sequencer for better performance and easy maintenance.
- Created a proactive capability to enhance the management of supplier performance.
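The daily job-monitoring bullet above can be sketched as a log scan in shell. The log layout and job names below are illustrative assumptions; the actual checks ran against DataStage Director/scheduler output.

```shell
#!/bin/sh
# Illustrative daily monitor: scan a run log for aborted DataStage jobs
# and print one alert line per failure. Log format is an assumption.
cat > ds_run.log <<'EOF'
2015-06-01 02:00:13 LoadOrders Finished OK
2015-06-01 02:15:44 LoadCustomers Aborted
2015-06-01 02:30:02 LoadProducts Finished OK
EOF

# fields: $1 date, $2 time, $3 job name, $4 status
awk '$4 == "Aborted" { print "ALERT: job " $3 " aborted at " $1, $2 }' ds_run.log
```

In practice the alert line would feed a mail or paging command rather than stdout.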
Environment: IBM Information Server DataStage 9.1, Oracle 12.1, Teradata 15.0, Microsoft SQL Server v10, SQL, PL/SQL, UNIX, AIX 6.1
Confidential, Tampa, FL
ETL DataStage Developer
Responsibilities:
- Used DataStage 8.7 as an ETL tool to extract data from source systems, loaded the data into the DB2 database.
- Designed and developed DataStage jobs to extract data from multiple sources, applied transformation logic to extracted data, and loaded the data into Data Warehouse Databases.
- Created PX jobs using stages such as Aggregator, Change Data Capture, Copy, Filter, Funnel, Join, Lookup, Merge, Remove Duplicates, Sample, Surrogate Key, Sort, Column Generator, Row Generator, and Peek.
- Used UNIX scripts to archive files to a permanent folder and to run DataStage jobs.
- Worked on deploying DataStage jobs from one environment to another and on bug fixing.
- Converted complex job designs into separate job segments and executed them through a job sequencer for better performance and easier maintenance.
- Performed performance tuning of jobs at both the DB2 and DataStage levels by creating indexes, partitions, and triggers.
- Created parameter sets and user-defined environment variables to improve job execution.
- Involved in all phases through production deployment, including bug fixing during the UAT and SIT phases.
- Created job sequences that include Notification Activity, Execute Command Activity, Sequencer, and User Variables Activity stages.
- Maintained Data Warehouse by loading dimensions and facts as part of the overall project.
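The file-archival step mentioned above can be sketched as a shell fragment that moves processed feeds into a date-stamped permanent folder. Directory names and the feed file are illustrative assumptions.

```shell
#!/bin/sh
# Illustrative post-load archival: after a DataStage job consumes a feed,
# compress it into a date-stamped archive folder and remove the original.
LANDING=./landing
ARCHIVE=./archive/$(date +%Y%m%d)

mkdir -p "$LANDING" "$ARCHIVE"
touch "$LANDING/claims_feed.dat"          # stand-in for a processed feed file

for f in "$LANDING"/*.dat; do
    gzip -c "$f" > "$ARCHIVE/$(basename "$f").gz" && rm "$f"
done
ls "$ARCHIVE"
```

A retention sweep (e.g. `find "$ARCHIVE/.." -mtime +90 -delete`) typically follows in the same script.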
Environment: IBM Information Server DataStage 8.7, Oracle 12.1, Confidential DB2, SQL, UNIX, SVN
Confidential, Lakeville, MN
ETL Developer/Support
Responsibilities:
- Extensively used Confidential Information Server Designer 8.7/8.5 to develop jobs that extract, cleanse, transform, integrate, and load data into target tables and interface files.
- Extracted data from various sources, performed data transformations, and loaded the results into target interface files and tables.
- Unit tested and documented all results.
- Developed jobs for extracting, cleansing, transforming, integrating, and loading data into the data warehouse.
- Prepared technical designs/specifications for data extraction, transformation, cleansing, and loading.
- Developed jobs to load history data from legacy data marts into the Enterprise Data Warehouse.
- Developed numerous transformations to populate data warehouse tables.
- Developed parallel jobs to reduce complexity and speed up processing.
- Designed and developed Aggregator, Lookup, Join, and Merge transformations according to business rules.
- Implemented surrogate key functionality for newly inserted rows in the data warehouse, making the data more readily available.
- Worked on different databases, including Oracle and SQL Server.
- Developed jobs to handle Type 1 and Type 2 Slowly Changing Dimension loads from Type 1/Type 2 data sources (Member and Provider Master) using Lookup, Transformer, and Join stages.
- Optimized job performance with various partitioning techniques in the parallel processing environment.
- Provided support through the successful execution of all jobs in production.
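The SCD Type 2 logic described above can be sketched in awk from the shell. The real implementation used DataStage Lookup/Transformer stages; the pipe-delimited layout (key|attribute|effective_date|end_date), the sample rows, and the load date below are all assumptions for illustration.

```shell
#!/bin/sh
# Illustrative SCD Type 2 load: expire the current row when an attribute
# changes, insert a new version, and append brand-new keys.
cat > dim.txt <<'EOF'
101|Smith|2014-01-01|9999-12-31
102|Jones|2014-01-01|9999-12-31
EOF
cat > src.txt <<'EOF'
101|Smyth
103|Brown
EOF

TODAY=2015-06-01
awk -F'|' -v OFS='|' -v today="$TODAY" '
    NR == FNR { new[$1] = $2; next }          # pass 1: incoming source rows
    {
        if ($1 in new && $4 == "9999-12-31") {   # current row for an incoming key
            if (new[$1] != $2) {
                print $1, $2, $3, today                  # expire changed row
                print $1, new[$1], today, "9999-12-31"   # insert new version
            } else {
                print                                    # unchanged: keep as-is
            }
            delete new[$1]
        } else {
            print                                # history rows pass through
        }
    }
    END { for (k in new) print k, new[k], today, "9999-12-31" }  # new keys
' src.txt dim.txt > dim_new.txt
cat dim_new.txt
```

Surrogate key assignment (a running counter over the output rows) would normally be layered on top of this, as the Surrogate Key stage does in DataStage.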
Environment: IBM Information Server (DataStage) 8.7/8.5, Oracle 12.1, UNIX, SQL, UNIX Shell Scripting.
Confidential, Milwaukee, WI
ETL DataStage Developer
Responsibilities:
- Involved in complete Data Warehouse Life Cycle from requirements gathering to end user support.
- Provided day-to-day and month-end production support for various applications like Business Intelligence Center, and Management Data Warehouse by monitoring servers.
- Involved in Designing, Testing and Supporting DataStage jobs.
- Developed parallel jobs using stages such as Join, Merge, Lookup, Surrogate Key, SCD, Funnel, Sort, Transformer, Copy, Remove Duplicates, Filter, Pivot, and Aggregator for grouping and summarizing key performance indicators used in decision support systems.
- Accomplished various development requests through mainframe utilities and CICS conversations.
- Met with clients on a weekly basis to provide better service and maintain SLAs.
- Used Control-M to schedule jobs by defining the required parameters and monitor the flow of jobs.
- Automated the process of generating daily and monthly status reports for the processing jobs.
- Dropped indexes, removed duplicates, rebuilt indexes, and reran jobs that failed due to incorrect source data.
- Involved in the process of two client bank mergers by taking care of the customer account numbers, Bank numbers, and their respective applications.
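The automated daily/monthly status reporting mentioned above can be sketched as a shell summary over a run log. The log layout and job names are illustrative assumptions; the real reports were built from Control-M scheduler output.

```shell
#!/bin/sh
# Illustrative status report: count successes and failures in a run log
# and emit a one-line daily summary. Log format is an assumption.
cat > run_history.log <<'EOF'
2013-03-04 LoadAccounts OK
2013-03-04 LoadBranches OK
2013-03-04 LoadTxns FAILED
EOF

ok=$(grep -c ' OK$' run_history.log)
failed=$(grep -c ' FAILED$' run_history.log)
printf 'Daily ETL status %s: %d succeeded, %d failed\n' "$(date +%F)" "$ok" "$failed"
```

A cron entry would run the same script at end of day and mail the summary to the support distribution list.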
Environment: IBM InfoSphere DataStage 8.5/8.1, SQL, Teradata 12, Erwin, Autosys, Toad.