
Sr. Talend Big Data Developer Resume


Chicago, IL

SUMMARY

  • Expert in Talend Big Data, with information technology experience spanning data warehousing and legacy applications built with Talend ETL, Big Data and mainframe technologies.
  • Talend Certified Big Data v6 Developer. Built an ETL framework using Talend Big Data.
  • Extensive working experience in the Healthcare, Manufacturing, Credit Card, Insurance, Capital Markets and Life Sciences domains.
  • Expertise in extracting data from multiple sources, data cleansing and validation based on business requirements.
  • Experience in designing and deploying ETL solutions for large-scale OLAP and OLTP instances using Talend ETL.
  • Experience in troubleshooting and improving the performance of Talend ETL jobs.
  • Experienced in interacting with business users to gather requirements.
  • Experience in analysis, estimation, design, construction, problem solving, and ongoing maintenance and enhancements for evolving business needs.
  • Prepared estimates using the complexity-point methodology.
  • Experience in project planning, assigning tasks and monitoring activities with Microsoft Project and the PMSmart application.
  • Experience in designing and developing mainframe applications using COBOL, JCL, VSAM, DB2 and CICS online functions, and batch systems using COBOL/DB2.
  • Acted as the single point of contact between the customer and offshore team members.
  • Involved in end-to-end SDLC processes and QA activities. Experience in writing test cases, debugging, and testing online systems using InterTest and batch systems using Xpediter.
  • Expertise in application software analysis, architecture, design, development, testing, implementation and quality assurance.
  • Highly motivated and adaptive, with the ability to grasp things quickly and excellent interpersonal, technical and communication skills.

TECHNICAL SKILLS

  • Talend Big Data / Talend ETL
  • Hadoop, Hive, Pig, Spark, Sqoop, HBase
  • IBM InfoSphere DataStage
  • COBOL
  • JCL
  • VSAM
  • DB2
  • CICS
  • Oracle, MySQL, Teradata
  • UNIX/Linux shell scripting
  • Control-M
  • Microsoft Project
  • PMSmart application

PROFESSIONAL EXPERIENCE

Confidential, Chicago, IL

Sr. Talend Big Data Developer

Responsibilities:

  • Analyzed the requirements and framed the business logic for the ETL process using Talend Big Data. Involved in the ETL design and its documentation.
  • Interacted with various counterparts such as data SMEs, business users and Meridian management. Developed Talend Spark batch jobs.
  • Designed and developed reusable jobs, components and routines.
  • Extensive experience working with tAdvancedFileOutputXML, tFileOutputMSXML, tHDFSInput, tHDFSOutput, tHiveLoad, tHiveInput, tSqoopImport and tSqoopExport.
  • Used tStatsCatcher, tDie and tLogRow to create a generic joblet that stores processing statistics in a database table to record job history (a sketch of this kind of insert follows this list).
  • Followed the organization-defined naming conventions for naming flat file structures, Talend jobs and the daily batches that execute Talend jobs.
  • Participated in weekly end-user meetings to discuss data quality, performance issues, ways to improve data accuracy, and new requirements.
  • Involved in migrating objects from Dev to QA, testing them and then promoting them to production. Involved in automating the FTP process in Talend and FTPing files on Linux.
  • Created Talend development standards describing the general guidelines for Talend developers, the naming conventions to be used in transformations, and the development and production environment structures.
  • Resolved business queries raised by the client and customers.
  • Responsible for identifying corrective actions to address risks and issues. Scheduled jobs using Control-M.
  • Involved in reviews of design and unit test results. Involved in SIT and UAT support.
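
A minimal sketch of the kind of job-history insert the statistics joblet performed. The table name JOB_RUN_STATS, its columns and the connection details are hypothetical placeholders, not the project's actual schema; in the real jobs the tStatsCatcher output was mapped to the table through Talend components rather than hand-written JDBC.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Timestamp;

    /**
     * Illustrative sketch only: writes one job-history row per run, similar to
     * what the tStatsCatcher-based joblet captured.
     */
    public class JobStatsLogger {

        public static void logRun(String jobName, Timestamp startTime, Timestamp endTime,
                                  long rowsProcessed, String status, String errorMessage) throws Exception {
            String sql = "INSERT INTO JOB_RUN_STATS "
                    + "(JOB_NAME, START_TIME, END_TIME, ROWS_PROCESSED, STATUS, ERROR_MESSAGE) "
                    + "VALUES (?, ?, ?, ?, ?, ?)";
            // Placeholder JDBC URL and credentials; real values would come from Talend context variables.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "etl_user", "etl_password");
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, jobName);
                ps.setTimestamp(2, startTime);
                ps.setTimestamp(3, endTime);
                ps.setLong(4, rowsProcessed);
                ps.setString(5, status);        // e.g. "SUCCESS", or "FAILED" after a tDie
                ps.setString(6, errorMessage);  // null on success
                ps.executeUpdate();
            }
        }
    }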

Environment: Talend Big Data 6.4, Hadoop, Hive, Spark, Sqoop, Oracle and Linux

Confidential, Harrisburg, PA

Senior Talend Big Data Developer

Responsibilities:

  • Designed the ETL framework and set it up as a reference design for others to follow. Designed the error-handling framework.
  • Performed end-to-end ETL development on Hadoop. Interacted with various counterparts such as data SMEs, business users and TE management.
  • Prepared the gap analysis report for migrating Hive-script-based big data jobs to Talend Big Data.
  • Analyzed the requirements and framed the business logic for the ETL process using Talend Big Data. Involved in the ETL design and its documentation.
  • Developed jobs in Talend Big Data Enterprise edition from staging to source, intermediate, conversion and target layers.
  • Did a POC on Talend Spark batch jobs by converting Talend MapReduce jobs to Talend Spark batch jobs.
  • Designed and developed reusable jobs, components and routines (an illustrative routine sketch follows this list).
  • Worked on Talend ETL to load data from various sources using tFileInputMSDelimited, tMysqlInput, tMap, tReplicate, tFilterRow, tWaitForFile and various other features of Talend.
  • Worked on Talend ETL features such as context variables and database components like tMysqlInput and tMysqlOutput, file components, ELT components, etc.
  • Extensive experience working with tHDFSInput, tHDFSOutput, tHiveLoad, tHiveInput, tSqoopImport and tSqoopExport.
  • Used tStatsCatcher, tDie and tLogRow to create a generic joblet that stores processing statistics in a database table to record job history.
  • Followed the organization-defined naming conventions for naming flat file structures, Talend jobs and the daily batches that execute Talend jobs.
  • Participated in weekly end-user meetings to discuss data quality, performance issues, ways to improve data accuracy, and new requirements.
  • Involved in migrating objects from Dev to QA, testing them and then promoting them to production.
  • Involved in automating the FTP process in Talend and FTPing files on Linux.
  • Created Talend development standards describing the general guidelines for Talend developers, the naming conventions to be used in transformations, and the development and production environment structures.
  • Resolved business queries raised by the client and customers.
  • Responsible for identifying corrective actions to address risks and issues. Scheduled jobs using Control-M.
  • Involved in reviews of design and unit test results. Involved in SIT and UAT support, defect fixing and implementation activities.
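
Talend routines are plain Java classes with static methods that can be called from tMap and tFilterRow expressions. The sketch below shows the flavour of reusable validation routine referred to above; the class name, method names and rules are illustrative assumptions, not the project's actual error-handling framework.

    import java.text.ParseException;
    import java.text.SimpleDateFormat;

    public class ValidationRoutines {

        /** True when the value is non-null and not blank after trimming. */
        public static boolean isPresent(String value) {
            return value != null && !value.trim().isEmpty();
        }

        /** True when the value parses as a date in the given pattern, e.g. "yyyy-MM-dd". */
        public static boolean isValidDate(String value, String pattern) {
            if (!isPresent(value)) {
                return false;
            }
            SimpleDateFormat sdf = new SimpleDateFormat(pattern);
            sdf.setLenient(false);   // reject impossible dates such as 2024-02-31
            try {
                sdf.parse(value.trim());
                return true;
            } catch (ParseException e) {
                return false;
            }
        }

        /** Returns a default when the input is missing, to standardise reject handling. */
        public static String orDefault(String value, String defaultValue) {
            return isPresent(value) ? value.trim() : defaultValue;
        }
    }

In a job, an expression such as ValidationRoutines.isValidDate(row1.eff_date, "yyyy-MM-dd") in a tFilterRow would route failing rows to a reject flow (row1.eff_date is a hypothetical column name).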

Environment: Talend Big Data 6.4, Hadoop, Hive, Spark, HBase, Sqoop, Oracle, MySQL, Linux and Control-M

Confidential, Austin, TX

Talend/Big Data Developer

Responsibilities:

  • Prepared the data mapping document capturing the source-to-target mapping rules. Analyzed the requirements and framed the business logic for the ETL process using Talend.
  • Involved in the ETL design and its documentation. Worked on developing and maintaining a data lake using Talend Big Data components.
  • Wrote Talend Big Data jobs (Pig, Hive) to handle data ingestion, data management and client consumption.
  • Worked on Talend ETL to load data from various sources using tFileInputMSDelimited, tMysqlInput, tMap, tReplicate, tFilterRow, tWaitForFile and various other features of Talend.
  • Worked on Talend ETL features such as context variables and database components like tMysqlInput and tMysqlOutput, file components, ELT components, etc.
  • Extensive experience working with tHDFSInput, tHDFSOutput, tPigLoad, tPigFilterRow, tPigFilterColumn, tPigStoreResult, tHiveLoad, tHiveInput, tHBaseInput, tHBaseOutput, tSqoopImport and tSqoopExport.
  • Used tStatsCatcher, tDie and tLogRow to create a generic joblet that stores processing statistics in a database table to record job history.
  • Followed the organization-defined naming conventions for naming flat file structures, Talend jobs and the daily batches that execute Talend jobs.
  • Worked on context variables and defined contexts for database connections and file paths so jobs could be migrated easily between environments in a project.
  • Implemented error handling in Talend to validate data integrity and data completeness for data from flat files (a sketch of this kind of check follows this list).
  • Participated in weekly end-user meetings to discuss data quality, performance issues, ways to improve data accuracy, and new requirements.
  • Involved in migrating objects from Dev to QA, testing them and then promoting them to production.
  • Responsible for development, support and maintenance of the ETL using the Talend integration suite.
  • Involved in automating the FTP process in Talend and FTPing files on Linux.
  • Created Talend development standards describing the general guidelines for Talend developers, the naming conventions to be used in transformations, and the development and production environment structures.
  • Resolved business queries raised by the client and customers. Involved in reviews of design and unit test results.
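
An illustrative sketch of the flat-file completeness check described above: verify that every record has the expected number of delimited fields and that the key field is populated, routing bad records to a reject file. The file names, delimiter and column positions are assumptions; in the actual jobs this logic lived in Talend components rather than standalone Java.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class FlatFileValidator {

        public static void main(String[] args) throws IOException {
            String delimiter = "\\|";      // assume a pipe-delimited source file
            int expectedColumns = 12;      // assumed record layout width
            int keyColumnIndex = 0;        // assumed position of the mandatory key

            try (BufferedReader reader = Files.newBufferedReader(Paths.get("member_feed.dat"));
                 PrintWriter rejects = new PrintWriter(Files.newBufferedWriter(Paths.get("member_feed.rej")))) {
                String line;
                long good = 0, bad = 0;
                while ((line = reader.readLine()) != null) {
                    String[] fields = line.split(delimiter, -1); // -1 keeps trailing empty fields
                    boolean complete = fields.length == expectedColumns
                            && !fields[keyColumnIndex].trim().isEmpty();
                    if (complete) {
                        good++;
                    } else {
                        bad++;
                        rejects.println(line); // keep the reject record for reconciliation
                    }
                }
                System.out.printf("records ok=%d, rejected=%d%n", good, bad);
            }
        }
    }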

Environment: Talend 5.6, Hadoop, Hive, Pig, HBase, Sqoop, DB2, MySQL, Oracle and Linux

Confidential, Minneapolis, MN

Talend/Big Data Developer

Responsibilities:

  • Prepared the data mapping document capturing the source-to-target mapping rules. Analyzed the requirements and framed the business logic for the ETL process using Talend.
  • Involved in the ETL design and its documentation.
  • Developed jobs in Talend Enterprise edition from staging to source, intermediate, conversion and target layers.
  • Worked on Talend ETL to load data from various sources using tFileInputMSDelimited, tMysqlInput, tMap, tReplicate, tFilterRow, tWaitForFile and various other features of Talend.
  • Worked on Talend ETL features such as context variables and database components like tMysqlInput and tMysqlOutput, file components, ELT components, etc.
  • Extensive experience working with tHDFSInput, tHDFSOutput, tPigLoad, tPigFilterRow, tPigFilterColumn, tPigStoreResult, tHiveLoad, tHiveInput, tHBaseInput, tHBaseOutput, tSqoopImport and tSqoopExport.
  • Used tStatsCatcher, tDie and tLogRow to create a generic joblet that stores processing statistics in a database table to record job history.
  • Followed the organization-defined naming conventions for naming flat file structures, Talend jobs and the daily batches that execute Talend jobs.
  • Worked on context variables and defined contexts for database connections and file paths so jobs could be migrated easily between environments in a project.
  • Implemented error handling in Talend to validate data integrity and data completeness for data from flat files.
  • Participated in weekly end-user meetings to discuss data quality, performance issues, ways to improve data accuracy, and new requirements.
  • Involved in migrating objects from Dev to QA, testing them and then promoting them to production.
  • Responsible for development, support and maintenance of the ETL using the Talend integration suite.
  • Involved in automating the FTP process in Talend and FTPing files on Linux (a sketch of the kind of transfer automated is shown after this list).
  • Created Talend development standards describing the general guidelines for Talend developers, the naming conventions to be used in transformations, and the development and production environment structures.
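
A sketch of the kind of file transfer that was automated, written here with Apache Commons Net for illustration; the actual jobs used Talend's FTP components, and the host, credentials and paths below are placeholders.

    import java.io.FileInputStream;
    import java.io.InputStream;

    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;

    public class FtpUploader {

        public static void main(String[] args) throws Exception {
            FTPClient ftp = new FTPClient();
            try {
                ftp.connect("ftp.example.com");               // placeholder host
                if (!ftp.login("etl_user", "etl_password")) { // placeholder credentials
                    throw new IllegalStateException("FTP login failed: " + ftp.getReplyString());
                }
                ftp.enterLocalPassiveMode();                  // typical behind batch-server firewalls
                ftp.setFileType(FTP.BINARY_FILE_TYPE);
                try (InputStream local = new FileInputStream("/data/outbound/member_feed.dat")) {
                    if (!ftp.storeFile("/inbound/member_feed.dat", local)) {
                        throw new IllegalStateException("Upload failed: " + ftp.getReplyString());
                    }
                }
                ftp.logout();
            } finally {
                if (ftp.isConnected()) {
                    ftp.disconnect();
                }
            }
        }
    }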

Environment: Talend 5.6, Hadoop, Hive, Pig, HBase, Sqoop, XML, JSON, DB2, MySQL, Oracle and Linux

Confidential, Houston, TX

ETL Developer

Responsibilities:

  • Worked closely with business analysts and business users to understand the requirements and build technical specifications.
  • Involved in all business meetings to understand the existing logic, the different member information and the agency data in order to come up with the best IT solution.
  • Responsible for creating Source-to-Target (STT) mappings. Involved in day-to-day production support activities.
  • Worked on various defects raised by the concerned business teams from various entities.
  • Developed and supported the Extract, Transform and Load (ETL) processes for a data warehouse, extracting from various data sources and loading the target tables using DataStage Designer.
  • Worked on various stages such as Transformer, Join, Lookup, Sort, Filter, Change Capture and Change Apply.
  • Coded Teradata BTEQ SQL and wrote UNIX scripts to validate, format and execute the SQL in the UNIX environment (a sketch of this kind of wrapper follows this list).
  • Developed parallel jobs using various development/debug stages (Peek, Row Generator, Column Generator, Sample) and processing stages (Aggregator, Change Capture, Change Apply, Filter, Sort, Merge, Funnel, Remove Duplicates).
  • Designed job sequencers to run multiple jobs with dependencies and email notifications. Involved in unit testing, SIT and UAT; worked with users on data validations.
  • Extensively worked on improving job performance by minimizing unnecessary Transformer stages.
  • Prepared documentation for unit, integration and final end-to-end testing.
  • Responded to customer needs; self-starter and customer service oriented.
  • Worked within the team to make appropriate, effective decisions related to project responsibilities and to initiate and follow-up on assigned tasks without supervision.
  • Provided support and guidance by creating Release, Deployment & Operation guide documents.
  • Involved in Performance tuning of complex queries.
  • Interacted with Teradata DBA team for creation of primary and secondary indexes on Data Warehouse tables.
  • Developed the Universe model per business requirements for use with the Web Intelligence (Webi) Rich Client.
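
The validate-and-execute scripts mentioned above were UNIX shell scripts; as a language-neutral illustration, the sketch below shows the same idea as a small Java wrapper that feeds a script to the Teradata bteq client and checks its return code. The script and log names are placeholders, and bteq is assumed to be on the PATH.

    import java.io.File;

    public class BteqRunner {

        public static void main(String[] args) throws Exception {
            ProcessBuilder pb = new ProcessBuilder("bteq");
            pb.redirectInput(new File("load_claims.bteq")); // bteq reads the script from stdin
            pb.redirectErrorStream(true);                   // merge stderr into the log
            pb.redirectOutput(new File("load_claims.log"));
            Process process = pb.start();
            int returnCode = process.waitFor();
            // bteq returns a non-zero code when a statement fails at the configured severity
            if (returnCode != 0) {
                System.err.println("BTEQ failed with return code " + returnCode);
                System.exit(returnCode);
            }
            System.out.println("BTEQ completed successfully");
        }
    }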

Environment: IBM InfoSphere DataStage 8.5 (Designer, Director, Administrator), Teradata, Oracle 10g, Sequential files and Mainframe files, COBOL and UNIX Shell Scripting.
