
Data Stage Developer Resume


SUMMARY:

  • To pursue a challenging career in the field of software development in a professional environment, contributing hard work, enthusiasm, and constant learning to deliver quality products and to play an active part in the growth of the company.
  • 10 years of experience across the Software Development Life Cycle (SDLC), including design, development, and testing on a variety of technology platforms, with special emphasis on Data Warehouse and Business Intelligence applications, plus 1 year of experience as a Hadoop Developer.
  • Good experience in IBM DataStage 7.5.1/8.1/9.1/11.5.
  • Good knowledge of troubleshooting DataStage jobs and addressing issues such as performance tuning.
  • Experienced in processing large datasets of different forms, including structured, semi-structured, and unstructured data.
  • Experienced with major Hadoop ecosystem projects such as Hive, Pig, HBase, Sqoop, Spark, Scala, and Oozie with Cloudera Manager.
  • Hands-on experience with Cloudera and multi-node clusters on the Cloudera sandbox.
  • Expertise in designing tables in Hive, Pig, and MySQL, and in using Sqoop to import and export database data to and from HDFS.
  • Experienced in working with data architecture, including data ingestion pipeline design, Hadoop architecture, data modeling, machine learning, and advanced data processing.
  • Experience optimizing ETL workflows that process data coming from different sources.
  • Hands-on experience with the MapReduce and Pig programming models and with installation and configuration of Hadoop, HBase, Hive, Pig, Sqoop, and Flume using Linux commands.
  • Handled text, JSON, XML, Avro, SequenceFile, and Parquet log data using Hive (SerDe) and Pig, and filtered the data based on query criteria (see the first sketch after this list).
  • ETL: data extraction, management, aggregation, and loading into the NoSQL database HBase.
  • Good understanding of SDLC and STLC.
  • Hands-on experience in developing Pig Latin scripts and Hive Query Language (HiveQL) queries.
  • Proficiency in Linux (UNIX) and Windows operating systems.
  • Experience with the Oozie workflow engine in running workflow jobs with actions that run Hadoop MapReduce and Pig jobs.
  • Hands-on experience with Spark to handle streaming data.
  • Hands-on experience with Scala for batch processing and Spark streaming (see the streaming sketch after this list).
  • Good understanding of Hadoop architecture and its daemons, including NameNode, DataNode, JobTracker, TaskTracker, and ResourceManager.
  • Hands-on experience in ingesting data into the data warehouse using various data loading techniques.
  • Hands-on experience with the STLC as well as software testing tools, including Test Drive.
  • Expertise in using version control systems such as SVN, SCM Suite, and RTC.
  • Excellent Problem-Solving skills, Documentation Skills and Communication Skills.
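The multi-format handling above was done with Hive SerDes and Pig; as a rough parallel in Spark/Scala (also listed in this summary), the sketch below reads hypothetical JSON and Parquet inputs, filters on a threshold, and aggregates per key. The paths, column names, and local master are illustrative assumptions, not details of the original jobs.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object MultiFormatFilter {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("multi-format-filter")
          .master("local[*]")            // local master for illustration only
          .getOrCreate()

        // Hypothetical input paths; the original work exposed this data
        // through Hive tables defined with the appropriate SerDes.
        val jsonEvents    = spark.read.json("hdfs:///data/events/json/")
        val parquetOrders = spark.read.parquet("hdfs:///data/orders/parquet/")

        // Filter and aggregate: the same kind of "filter on a query factor"
        // step the Hive/Pig jobs performed.
        val highValue = parquetOrders
          .filter(col("amount") > 1000)
          .groupBy(col("customer_id"))
          .agg(sum("amount").as("total_amount"), count("*").as("order_count"))

        // Join against the JSON events and write the result back as Parquet.
        val enriched = highValue.join(jsonEvents, Seq("customer_id"), "left")
        enriched.write.mode("overwrite").parquet("hdfs:///data/out/high_value/")

        spark.stop()
      }
    }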
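For the Spark streaming and Scala experience noted above, here is a minimal Structured Streaming sketch that watches a landing directory for new JSON files and keeps a running per-key aggregate. The schema, directory, and console sink are assumptions made for the example; a production job would write to HBase, Kafka, or HDFS instead.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._
    import org.apache.spark.sql.types._

    object StreamingSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("streaming-sketch")
          .master("local[*]")
          .getOrCreate()

        // Streaming file sources need an explicit schema; these fields are hypothetical.
        val schema = StructType(Seq(
          StructField("event_time", TimestampType),
          StructField("user_id", StringType),
          StructField("amount", DoubleType)
        ))

        // Watch a landing directory for new JSON files (micro-batch streaming).
        val events = spark.readStream
          .schema(schema)
          .json("hdfs:///data/landing/events/")

        // Simple running aggregate per user over the stream.
        val totals = events
          .groupBy(col("user_id"))
          .agg(sum("amount").as("total_amount"))

        // Print each micro-batch result; swap the sink for a real target in practice.
        val query = totals.writeStream
          .outputMode("complete")
          .format("console")
          .start()

        query.awaitTermination()
      }
    }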

TECHNICAL SKILLS:

Big Data: Spark, Scala, Pig, Hive, Sqoop, HBase, Oozie, Kafka, Zookeeper, MapReduce

ETL Tools: IBM DataStage 11.5/9.1/8.0.1/7.5.2

Programming Languages: Core Java, UNIX shell scripting, SQL; working knowledge of Python.

Web Services: WSDL, SOAP, Apache, REST.

Operating Systems: UNIX, Windows, LINUX

Databases: Netezza, Oracle 8i/9i/10g, Microsoft SQL Server, DB2 & MySQL 4.x/5.x

IDE: Eclipse 3.x, Scala IDE

Tools: RTC, RSA, Control-M, Oozie, Hue, SQL Developer, SOAP UI

PROFESSIONAL EXPERIENCE:

Confidential

Data Stage Developer

Environment: IBM DataStage 9.1, Netezza, Oracle, Control-M

Responsibilities:

  • Performed Project Requirements Gathering, Requirements Analysis, Design and Development.
  • Effectively designed, developed, and delivered the project within the timelines, meeting all technical and USAA quality process requirements.
  • Actively participated in meetings with different teams, such as Workday and Scheduling.
  • Involved in end-to-end deliverables such as table and ETL migration to production.
  • Solely responsible for preparing the release document and the transition plan for the support team.
  • Used different stages such as Transformer, Lookup, Funnel, Change Data Capture, Aggregator, Sort, Filter, Modify, Merge, Copy, Remove Duplicates, Join, Sequential File, and Data Set.
  • Worked with various partitioning (Round Robin, Hash, Entire, Same, Modulus, etc.) and collection (Round Robin, Ordered, and Sort Merge) techniques.
  • Developed job sequencers with proper job dependencies, job control stages, and triggers.
  • Worked on troubleshooting, performance tuning, and performance monitoring to enhance DataStage jobs and builds across the Development and QA environments.

Confidential

Data Stage Developer

Environment: IBM DataStage 9.1, Netezza, MS SQL Server, SharePoint, Clarabridge, Control-M

Responsibilities:

  • Performed Project Requirements Gathering, Requirements Analysis, Design and Development.
  • Effectively designed, developed, and delivered the project within the timelines, meeting all technical and USAA quality process requirements.
  • Actively participated in meetings with different teams, such as SharePoint, D3, and BO.
  • Involved in end-to-end deliverables such as table and ETL migration to production.
  • Used different stages such as Transformer, Lookup, Funnel, Change Data Capture, Aggregator, Sort, Filter, Modify, Merge, Copy, Remove Duplicates, Join, Sequential File, and Data Set.
  • Worked with various partitioning (Round Robin, Hash, Entire, Same, Modulus, etc.) and collection (Round Robin, Ordered, and Sort Merge) techniques.
  • Developed job sequencers with proper job dependencies, job control stages, and triggers.
  • Worked on troubleshooting, performance tuning, and performance monitoring to enhance DataStage jobs and builds across the Development and QA environments.
  • Worked on UNIX shell scripting to trigger the DataStage jobs, as well as archive, tar.gz, purging, SFTP, and various other scripts.
  • Prepared the source-to-target technical mapping document.

Confidential

Data Stage Developer

Environment: DataStage 9.1, Oracle 9i, IAM Suite, Control-M, RTC

Responsibilities:

  • Created jobs in DataStage to extract data from heterogeneous sources such as Oracle, SQL Server, and flat files.
  • Designed jobs to extract and cleanse data, parameterized the jobs to allow portability and flexibility at runtime, applied business rules and logic at the transformation stage, and loaded the data into the data warehouse.
  • Used IAM tools such as OIA and OIM to load the feeds with user access information.
  • Used Control-M to develop and schedule the jobs.

Confidential

ETL Developer

Environment: DataStage, Unix, Oracle, Netezza

Responsibilities:

  • Designed jobs to extract and cleanse data, parameterized the jobs to allow portability and flexibility at runtime, applied business rules and logic at the transformation stage, and loaded the data into the data warehouse.
  • Developed and implemented job sequences to integrate the DataStage jobs.
  • Effectively designed, developed, and delivered the project within the timelines, meeting all technical and USAA quality process requirements.
  • Created jobs in DataStage to extract data from heterogeneous sources such as Oracle, SQL Server, and flat files.
  • Used IAM tools such as OIA and OIM to load the feeds with user access information.
  • Used Control-M to develop and schedule the jobs.

Confidential

ETL Developer

Environment: DataStage, Unix, Oracle

Responsibilities:

  • Gathered requirements from the client and business analysts.
  • Created jobs in DataStage to extract data from heterogeneous sources such as Oracle, SQL Server, and flat files.
  • Designed jobs to extract and cleanse data, parameterized the jobs to allow portability and flexibility at runtime, applied business rules and logic at the transformation stage, and loaded the data into the data warehouse.
  • Developed logic to accommodate SCD Type 1 and Type 2 (a sketch follows this list).
  • Involved in environment setup for SIT and UAT.
  • Provided support for system testing and user acceptance testing, and resolved issues raised by the SIT and UAT teams.
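The SCD handling in this project was built in DataStage; purely to illustrate the Type 2 pattern itself, below is a minimal Spark/Scala sketch that expires changed current rows and inserts new current versions. The table paths and column names (cust_id, name, addr, eff_date, end_date, current_flag) are hypothetical, not taken from the original design.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object Scd2Sketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("scd2-sketch")
          .master("local[*]")
          .getOrCreate()

        // Hypothetical layout: the dimension keeps history via eff_date /
        // end_date / current_flag; the source is today's full extract.
        val dim = spark.read.parquet("hdfs:///dw/customer_dim/")
        val src = spark.read.parquet("hdfs:///staging/customer_src/")

        val current = dim.filter(col("current_flag") === "Y")
        val history = dim.filter(col("current_flag") === "N")

        // Rename source attributes so the comparison columns stay unambiguous.
        val srcR = src
          .withColumnRenamed("name", "src_name")
          .withColumnRenamed("addr", "src_addr")

        // New business keys or changed attributes trigger a Type 2 action.
        val joined  = srcR.join(current, Seq("cust_id"), "left")
        val changed = joined.filter(
          col("name").isNull ||
          col("src_name") =!= col("name") ||
          col("src_addr") =!= col("addr"))

        // Step 1: close out the old current rows for the changed keys.
        val expired = current.join(changed.select("cust_id"), Seq("cust_id"))
          .withColumn("end_date", current_date())
          .withColumn("current_flag", lit("N"))

        // Step 2: insert a fresh current version for changed or new keys.
        val inserts = changed
          .select(col("cust_id"), col("src_name").as("name"), col("src_addr").as("addr"))
          .withColumn("eff_date", current_date())
          .withColumn("end_date", lit(null).cast("date"))
          .withColumn("current_flag", lit("Y"))

        // Unchanged current rows carry over untouched.
        val unchanged = current.join(changed.select("cust_id"), Seq("cust_id"), "left_anti")

        val newDim = history.unionByName(expired).unionByName(unchanged).unionByName(inserts)
        newDim.write.mode("overwrite").parquet("hdfs:///dw/customer_dim_new/")

        spark.stop()
      }
    }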

Confidential

ETL Developer

Environment: DataStage, Oracle, Unix

Responsibilities:

  • Gathered requirements from the client and business analysts.
  • Created jobs in DataStage to extract data from heterogeneous sources such as Oracle, SQL Server, and flat files.
  • Designed jobs to extract and cleanse data, parameterized the jobs to allow portability and flexibility at runtime, applied business rules and logic at the transformation stage, and loaded the data into the data warehouse.
  • Developed logic to accommodate SCD Type 1 and Type 2.
  • Involved in environment setup for SIT and UAT.
  • Provided support for system testing and user acceptance testing, and resolved issues raised by the SIT and UAT teams.

Confidential

ETL Developer

Environment: Informatica, Oracle, Unix.

Responsibilities:

  • Involved in the Design, Development and Production Implementation of Data Warehouse.
  • Interacted with Business users for gathering requirements.
  • Interacted with SIT team to solve the SIT Issues.
  • Provided knowledge transfer to the production team on the module.
  • Worked on Production issues and User Acceptance issues.
  • Supported the team in development and in resolving technical issues.
  • Designed and developed jobs using DataStage Designer to extract data from different sources, cleanse it, apply various business rules, and load it into the data warehouse.
  • Involved in performance tuning of the ETL process and performed data warehouse testing.

Confidential

ETL Developer

Environment: DataStage, Oracle, Unix.

Responsibilities:

  • Involved in the Design, Development and Production Implementation of Data Warehouse.
  • Interacted with Business users for gathering requirements.
  • Interacted with SIT team to solve the SIT Issues.
  • Provided knowledge transfer to the production team on the module.
  • Worked on Production issues and User Acceptance issues.
  • Supported the team in development and in resolving technical issues.
  • Designed and developed jobs using DataStage Designer to extract data from different sources, cleanse it, apply various business rules, and load it into the data warehouse.
  • Involved in performance tuning of the ETL process and performed data warehouse testing.
