Sr. Talend Developer Resume

Austin, TX

PROFESSIONAL SUMMARY:

  • 7+ years of experience in Analysis, Design, Development, Testing, Implementation, Enhancement and Support of ETL applications, including strong experience in OLTP and OLAP environments as a Data Warehouse/Business Intelligence Consultant.
  • 4+ years of experience in Talend Open Studio (6.x/5.x) for Data Integration, Data Quality and Big Data.
  • Experience in working with Data Warehousing concepts such as OLAP, OLTP, Star Schema, Snowflake Schema, Logical Data Modeling, Physical Modeling and Dimensional Data Modeling.
  • 2+ years of expertise with the Talend Data Integration and Big Data Integration suites for the design and development of ETL/Big Data code and mappings for enterprise DWH ETL Talend projects.
  • 1+ years of administration experience and good knowledge of Talend Administration Center (TAC) and job scheduling.
  • Extensive experience using Talend features such as context variables, triggers, and connectors for databases and flat files.
  • Experience with scheduling tools such as AutoSys, Control-M and Job Conductor (Talend Administration Center).
  • Hands-on involvement with many components in the palette to design Jobs, and used context variables to parameterize Talend Jobs.
  • Experienced with Talend Data Fabric ETL components, including context variables and MySQL, Oracle and Hive database components.
  • Tracked daily data loads and monthly data extracts and sent them to the client for verification.
  • Strong experience in designing and developing Business Intelligence solutions in Data Warehousing using ETL Tools.
  • Excellent understanding of data warehousing concepts and best practices; involved in the full development life cycle of data warehousing projects.
  • Experienced in analyzing, designing and developing ETL strategies and processes, writing ETL specifications.
  • Involved in extracting users' data from various data sources into the Hadoop Distributed File System (HDFS).
  • Experience with the MapReduce and Pig programming models, and with installation and configuration of Hadoop, HBase, Hive, Pig, Sqoop and Flume using Linux commands.
  • Experienced in using Talend Data Fabric tools (Talend DI, Talend MDM, Talend DQ, Talend Data Preparation, ESB, TAC)
  • Experienced in working with different data sources like Flat files, Spreadsheet files, log files and Databases.
  • Knowledge of Data Flow Diagrams, Process Models and E-R diagrams with modeling tools such as ERwin and ERStudio.
  • Experience in AWS S3, RDS (MySQL) and Redshift cluster configuration.
  • Extensive experience on the J2EE platform, including developing both front-end and back-end applications using Java, Servlets, JSP, EJB, AJAX, Spring, Struts, Hibernate, JAXB, JMS, JDBC and Web Services.
  • Strong understanding of data modeling (relational, dimensional, star and snowflake schema) and data analysis for data warehouse implementation on Windows and Unix.
  • Extensive experience in developing stored procedures, functions, views, triggers and complex queries using SQL Server.
  • Hands-on experience across all stages of Software Development Life Cycle (SDLC) including business requirement analysis, data mapping, build, unit testing, systems integration and user acceptance testing.
  • Worked in all phases of the BW/BI full life cycle, including Analysis, Design, Development, Testing, Deployment, Post-Production Support/Maintenance, Documentation and End-User Training.
  • Highly proficient in Agile, Test-Driven, Iterative, Scrum and Waterfall software development methodologies.
  • Highly motivated with the ability to work effectively in teams as well as independently.
  • Excellent interpersonal and communication skills; experienced in working with senior-level managers, business people and developers across multiple disciplines. Ability to grasp and apply new concepts quickly and effectively.
  • Strong experience transferring data from relational databases to cloud targets such as Amazon S3 and Redshift using Talend Big Data Spark Jobs, as sketched below.
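
For illustration, a minimal sketch of the kind of Redshift load behind the bullet above. The bucket, IAM role and table names are hypothetical; in practice the Talend Spark Job stages the extracted data to S3 and issues an equivalent load:

    -- Load staged S3 data into a Redshift table (all object names are hypothetical)
    COPY staging.customer_orders
    FROM 's3://example-bucket/exports/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-load-role'
    FORMAT AS PARQUET;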

PROFESSIONAL EXPERIENCE:

Sr. Talend Developer

Confidential, Austin, TX

Responsibilities:

  • Interacted with business team to understand business needs and to gather requirements.
  • Designed target tables as per the requirements from the reporting team, and designed the Extraction, Transformation and Loading (ETL) processes using Talend.
  • Designed, built, installed, configured and supported Hadoop.
  • Created Technical Design Documents for source-to-stage and stage-to-target mappings.
  • Worked with Talend Studio (Development area) & Admin Console (Admin area)
  • Created Java routines, reusable transformations and Joblets using Talend as an ETL tool.
  • Created complex Jobs using components such as tMap, tOracle components, tLogCatcher, tStatCatcher, tFlowMeterCatcher, File Delimited components and error-handling components (tWarn, tDie).
  • Developed simple to complex MapReduce jobs using Hive and Pig for analyzing the data.
  • Assisted in migrating the existing data center into the AWS environment.
  • Wrote Hive queries for data analysis and to process the data for visualization.
  • Identified performance issues in existing sources, targets and Jobs by analyzing the data flow, evaluating transformations and tuned accordingly for better performance.
  • Managed all technical aspects of the ETL Job process with other team members.
  • Worked with Parallel connectors for parallel processing to improve job performance while working with bulk data sources.
  • Optimized Map/Reduce Jobs to use HDFS efficiently by using various compression mechanisms.
  • Developed mappings to load fact and dimension tables, including SCD Type 1 and SCD Type 2 dimensions and incremental loading (see the SCD sketch after this list).
  • Created contexts to share values throughout the process and to pass them from parent Jobs to child Jobs and from child Jobs back to parent Jobs.
  • Analyzed and performed data integration using Talend Cloud hybrid integration suite.
  • Worked on Joblets (reusable code) & Java routines in Talend.
  • Performed unit testing, created UNIX shell scripts and provided on-call support.
  • Scheduled Talend Jobs using Job Conductor (the scheduling tool available in TAC).
  • Retrieved data from Oracle and loaded it into the SQL Server data warehouse.
  • Created many complex ETL jobs for data exchange from and to database servers and various other systems, including RDBMS, XML, CSV and flat file structures.
  • Created and reviewed scripts to create new tables, views, queries for new enhancement in the applications using TOAD.
  • Monitored data quality and generated weekly/monthly/yearly statistics reports on production processes (success/failure rates) for causal analysis as part of maintenance, and enhanced existing production ETL processes.
  • Developed a high-level data dictionary of ETL data mappings and transformations from a series of complex Talend data integration jobs, enabling the Enterprise Data Governance program to save time when harvesting metadata and performing data quality assessment. This was a large effort, done in collaboration with Voya's Technology Risk and Security Management (TRSM) team.
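
A minimal sketch of the SCD Type 2 pattern referenced in the fact/dimension bullet above, assuming hypothetical stg.customer and dw.dim_customer tables; the actual Jobs implement equivalent logic through tMap and the database output components:

    -- Expire the current dimension row when a tracked attribute changes
    UPDATE dim
    SET    dim.current_flag = 'N',
           dim.effective_end_date = GETDATE()
    FROM   dw.dim_customer AS dim
    JOIN   stg.customer    AS src ON src.customer_id = dim.customer_id
    WHERE  dim.current_flag = 'Y'
      AND (dim.address <> src.address OR dim.phone <> src.phone);

    -- Insert a new current version for changed and brand-new customers
    INSERT INTO dw.dim_customer
        (customer_id, address, phone, effective_start_date, effective_end_date, current_flag)
    SELECT src.customer_id, src.address, src.phone, GETDATE(), '9999-12-31', 'Y'
    FROM   stg.customer src
    LEFT JOIN dw.dim_customer dim
           ON dim.customer_id = src.customer_id AND dim.current_flag = 'Y'
    WHERE  dim.customer_id IS NULL;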

Environment: Talend 6.2.1/6.0.1, Talend Open Studio Big Data/DQ/DI, Talend Administration Center (TAC), Oracle 11g/10g, Teradata V14.0, Hive, HANA, PL/SQL, DB2, XML, Java, ERwin 7, UNIX Shell Scripting.

ETL/Talend Developer

Confidential, New Jersey

Responsibilities:

  • Worked closely with Business Analysts to review the business specifications of the project and to gather the ETL requirements.
  • Created Talend jobs to copy the files from one server to another and utilized Talend FTP components.
  • Created and managed Source to Target mapping documents for all Facts and Dimension tables
  • Analyzed the source data to assess data quality using Talend Data Quality.
  • Wrote SQL queries and used joins to access data from Oracle and MySQL.
  • Assisted in migrating the existing data center into the AWS environment.
  • Prepared ETL mapping Documents for every mapping and Data Migration document for smooth transfer of project from development to testing environment and then to production environment.
  • Designed and implemented ETL for data loads from heterogeneous sources to SQL Server and Oracle target databases, including fact tables and Slowly Changing Dimensions (SCD Type 1 and SCD Type 2).
  • Utilized Big Data components like tHDFSInput, tHDFSOutput, tPigLoad, tPigFilterRow, tPigFilterColumn, tPigStoreResult, tHiveLoad, tHiveInput, tHbaseInput, tHbaseOutput, tSqoopImport and tSqoopExport.
  • Used the most common Talend components (tMap, tDie, tConvertType, tFlowMeter, tLogCatcher, tRowGenerator, tSetGlobalVar, tHashInput, tHashOutput and many more).
  • Created many complex ETL jobs for data exchange from and to Database Server and various other systems including RDBMS, XML, CSV, and Flat file structures.
  • Experienced in using debug mode of Talend to debug a job to fix errors.
  • Responsible for the development, support and maintenance of the ETL (Extract, Transform and Load) processes using Talend Integration Suite.
  • Conducted JAD sessions with business users and SMEs for a better understanding of the reporting requirements.
  • Developed Talend jobs to populate the claims data into the data warehouse star schema (see the fact-load sketch after this list).
  • Used Talend Admin Console Job conductor to schedule ETL Jobs on daily, weekly, monthly and yearly basis.
  • Worked on various Talend components such as tMap, tFilterRow, tAggregateRow, tFileExist, tFileCopy, tFileList, tDie etc.
  • Worked extensively on the Talend Administration Console and scheduled Jobs in Job Conductor.
  • Analyzed and performed data integration using Talend Cloud hybrid integration suite.
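
A rough sketch of the star-schema fact load mentioned in the claims bullet above; table and column names are hypothetical, and in the actual Jobs the surrogate-key resolution is done through tMap lookups rather than hand-written SQL:

    -- Populate a claims fact table by resolving surrogate keys from the dimensions
    INSERT INTO dw.fact_claim (claim_date_key, member_key, provider_key, claim_amount)
    SELECT d.date_key,
           m.member_key,
           p.provider_key,
           s.claim_amount
    FROM   stg.claim s
    JOIN   dw.dim_date     d ON d.calendar_date = s.claim_date
    JOIN   dw.dim_member   m ON m.member_id   = s.member_id   AND m.current_flag = 'Y'
    JOIN   dw.dim_provider p ON p.provider_id = s.provider_id AND p.current_flag = 'Y';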

Environment: Talend Enterprise Big Data Edition 5.1, Talend Administrator Console, MS SQL Server 2012/2008, Oracle 11g, Hive, HDFS, Sqoop, TOAD, UNIX, Enterprise Platform for Data Integration.

Talend Developer

Confidential, Atlanta, GA

Responsibilities:

  • Developed an ETL process to load Oracle data into a SQL Server system using the following Talend components:
  • Oracle components - tOracleConnection, tOracleInput, tOracleBulkExec.
  • Worked on Linux system (Red Hat) to deploy the Talend code.
  • Deployed the code using shell scripts in other machines.
  • Worked extensively on SQL Queries for validating the records.
  • Worked on paginating the SQL statements in the ETL flow to handle memory issues and improve performance (see the pagination sketch after this list).
  • Worked on handling deadlock errors while updating the SQL Server tables in the ETL flow.
  • Parameterized the overall workflow to execute the code in different environments.
  • Parallelized the workflows to reduce execution time.
  • Developed ETL mappings.
  • Developed and tested all the backend programs, Informatica mappings and update processes.
  • Developed Informatica mappings to load data into various dimensions and fact tables from various source systems.
  • Worked on Informatica PowerCenter Designer tools such as Source Analyzer, Target Designer, Transformation Developer, Mapping Designer and Mapplet Designer.
  • Worked on Informatica PowerCenter Workflow Manager tools such as Task Developer, Workflow Designer and Worklet Designer.
  • Designed and developed Informatica PowerCenter medium to complex mappings using transformations such as Source Qualifier, Aggregator, Expression, Lookup, Filter, Router, Rank, Sequence Generator, Stored Procedure and Update Strategy.
  • Worked as a key project resource taking day-to-day work direction and accepting accountability for technical aspects of development.
  • Developed the Business rules for cleansing, validating and standardization of data using Informatica Data Quality.
  • Designed and developed multiple reusable cleanse components.
  • Successfully partnered with and led the metadata harvest and management effort using Collibra Connect with the Data Services Team.
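
A minimal sketch of the source-side pagination described above, assuming a hypothetical src.orders table with the page bounds supplied through the job's context variables; the classic Oracle ROWNUM pattern reads the source in fixed-size chunks so the full result set is never held in memory:

    -- Read one page of the source at a time (:page_start / :page_end are hypothetical bind variables)
    SELECT *
    FROM (
        SELECT t.*, ROWNUM AS rn
        FROM (
            SELECT order_id, customer_id, order_amount
            FROM   src.orders
            ORDER BY order_id
        ) t
        WHERE ROWNUM <= :page_end
    )
    WHERE rn > :page_start;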

Environment: Talend Open Studio 5.0.1, Informatica Power center, UNIX, SQL Server, TOAD, AutoSys.

Informatica Developer

Confidential

Responsibilities:

  • Assisted in gathering business requirements and worked closely with various application and business teams to develop the data model and ETL procedures to design the data warehouse.
  • Designed and developed a star schema model for the target database using ERwin data modeling.
  • Extensively used ETL Informatica tool to extract data stored in MS SQL 2000, Excel, and Flat files and finally loaded into a single Data Warehouse.
  • Used various active and passive transformations such as Aggregator, Expression, Sorter, Router, Joiner, connected/unconnected Lookup, and Update Strategy transformations for data control, cleansing, and data movement.
  • Designed and developed Mapplets for faster development, standardization and reusability purposes.
  • Implemented Slowly Changing Dimension Type 1 and Type 2 for inserting and updating Target tables for maintaining the history.
  • Used the Debugger to validate transformations by creating breakpoints to analyze and monitor data flow.
  • Tuned Informatica session performance by increasing block size, data cache size, sequence buffer length and target-based commit interval, and tuned mappings by dropping and recreating indexes (see the sketch after this list).
  • Worked along with the QA Team and provided production support by monitoring the processes running daily.
  • Involved in pre-and post-session migration planning for optimizing data load performance.
  • Interfaced with the Portfolio Management and Global Asset Management Groups to define reporting requirements and project plan for intranet applications for Fixed Income and Equities.
  • Performed Unit testing during the mapping phase to ensure proper and efficient implementation of the transformations.
  • Wrote UNIX shell scripts and used the pmcmd command-line utility to interact with the Informatica Server from command mode.
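
A sketch of the drop-and-recreate-index step from the tuning bullet above, with hypothetical index and table names; the pre- and post-session SQL of the Informatica session is a common place to run these statements:

    -- Pre-session SQL: drop the index so the bulk load avoids per-row index maintenance
    DROP INDEX dw.idx_fact_sales_customer;

    -- (the Informatica session performs the bulk load here)

    -- Post-session SQL: rebuild the index once the load completes
    CREATE INDEX dw.idx_fact_sales_customer
        ON dw.fact_sales (customer_key);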

Environment: Informatica Power Center 8.x, Informatica Repository Manager, Oracle10g/9i, DB2, ERwin, TOAD, UNIX - AIX, PL/SQL, SQL Developer.

SQL/BI Developer

Confidential

Responsibilities:

  • Responsible for designing and developing mappings, mapplets, sessions and workflows to load data from source to target databases using Informatica PowerCenter, and tuned mappings to improve performance.
  • Created database objects like views, indexes, user defined functions, triggers and stored procedures.
  • Involved in ETL process from development to testing and production environments.
  • Extracted data from various sources such as flat files and Oracle, and loaded it into target systems using Informatica 7.x.
  • Developed mappings using various transformations like update strategy, lookup, stored procedure, router, joiner, sequence generator and expression transformation.
  • Developed PL/SQL triggers and master tables for the automatic creation of primary keys (see the trigger sketch after this list).
  • Used Informatica PowerCenter Workflow Manager to create sessions and batches to run with the logic embedded in the mappings.
  • Tuned mappings and SQL queries for better performance and efficiency.
  • Automated existing ETL operations using Autosys.
  • Created & Ran shell scripts in UNIX environment.
  • Created and ran the Workflows using Workflow manager in Informatica Maintained stored definitions, transformation rules and targets definitions using Informatica repository manager.
  • Created tables and partitions in database Oracle.
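
A minimal sketch of the primary-key generation described in the trigger bullet above, assuming a hypothetical customer table with a customer_id column; a sequence plus a BEFORE INSERT trigger assigns the surrogate key automatically:

    -- Sequence that supplies the surrogate key values
    CREATE SEQUENCE customer_seq START WITH 1 INCREMENT BY 1;

    -- Assign the key before each insert when the application has not supplied one
    CREATE OR REPLACE TRIGGER trg_customer_pk
    BEFORE INSERT ON customer
    FOR EACH ROW
    BEGIN
      IF :NEW.customer_id IS NULL THEN
        SELECT customer_seq.NEXTVAL INTO :NEW.customer_id FROM dual;
      END IF;
    END;
    /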

Environment: Informatica Power Center 8.x, Oracle, SQL developer, MS Access, PL/SQL, UNIX Shell Scripting, SQL Server 2005, Windows XP.
