
Talend Developer Resume


Austin, TX

SUMMARY

  • Around 6 years of experience in the IT industry, with progressive experience in product specifications, design, analysis, development, documentation, coding, and implementation of business technology solutions for data warehousing applications.
  • Experience in Talend administration, installation, and configuration; worked extensively with Talend Big Data to load data into HDFS, S3, Hive, and Redshift.
  • Created mappings using Lookup, Aggregator, Joiner, Expression, Filter, Router, Update strategy and Normalizer Transformations.
  • Hands-on experience in Talend Big Data with MDM for creating data models, data containers, views, and workflows. Used Talend MDM components such as tMDMInput, tMDMOutput, tMDMBulkLoad, tMDMConnection, tMDMReceive, and tMDMRollback.
  • Created complex mappings in Talend 6.4.1 using tMap, tJoin, tReplicate, tParallelize, tJava, tJavaFlex, tAggregateRow, tDie, tWarn, tLogCatcher, etc.
  • Experience working with Talend Open Studio, Talend Enterprise, and Talend Cloud.
  • Extensively used Talend components such as tFileInputDelimited, tParquetInput, tSparkRow, tSetGlobalVar, tMap, tReplicate, tJoin, tFileList, tSortRow, tBufferInput, tBufferOutput, tDenormalize, tNormalize, tParseRecordSet, tUniqueRow, tS3Put, tS3Get, tS3FileList, tRedshiftInput, tRedshiftOutput, tRedshiftRow, tSnowflakeInput, tSnowflakeOutput, and tSnowflakeRow.
  • Extensively worked on Talend bulk components such as tMysqlBulkExec, tMysqlOutputBulk, tOracleBulkExec, and tOracleOutputBulkExec for multiple databases.
  • Extensive experience in development and maintenance of a corporate-wide ETL solution using SQL, PL/SQL, and Talend 5.x/6.x/7.x on Unix and Windows platforms.
  • Experience working with standard, batch, and streaming jobs using Talend for Big Data and Talend for Real-Time Big Data.
  • Experienced in ETL migration projects converting code from one ETL tool to another, with data warehouse migration carried out at the same time.
  • Extensive experience in ETL methodology for data profiling, data migration, extraction, transformation, and loading using Talend; designed data conversions from a wide variety of source systems including Oracle, DB2, SQL Server, Teradata, Hive, and HANA, and non-relational sources such as flat files, XML, and mainframe files.
  • Extensively worked with Informatica ETL transformations, including Source Qualifier, Connected/Unconnected Lookup, Filter, Expression, Router, Union, Normalizer, Joiner, Update Strategy, Rank, Aggregator, Stored Procedure, Sorter, and Sequence Generator, and created complex mappings.
  • Created configurations for mailboxes, users, profiles, and auditing for Managed file transfer (MFT) using SFG and multiple protocols (e.g., FTP, SFTP).
  • Extensively worked on Error logging components like tLogCatcher, tStatCatcher, tAssertCatcher, tFlowMeter, tFlowMeterCatcher.

TECHNICAL SKILLS

ETL/Middleware Tools: Talend 5.5/5.6/6.2/7.1, Informatica Power Center 9.5.1/9.1.1/8.6.1/7.1.1

Data Modelling: Dimensional Data Modeling, Star Join Schema Modeling, Snow-Flake Modeling, Fact and Dimension tables, Physical and Logical Data Modeling.

Business Intelligence Tools: Business Objects 6.0, Cognos 8 BI/7.0, Sybase, OBIEE 11g/10.1.3.x

RDBMS: Oracle 11g/10g/9i, MS SQL Server 2014/2008/2005/2000, MySQL, MS Access.

Programming Skills: SQL, Oracle PL/SQL, Unix Shell Scripting, HTML, DHTML, XML, Java.

Modelling Tool: Erwin 4.1/5.0, MS Visio.

Tools: TOAD, SQL Plus, SQL*Loader, Soap UI, Subversion, Share Point, IP switch user.

Operating Systems: Windows 8/7/XP/NT/2x, Linux.

PROFESSIONAL EXPERIENCE

Confidential, Austin, TX

Talend Developer

Responsibilities:

  • Designed, developed, and improved databases and ETL processes in the scope of application development.
  • Created Talend jobs to copy the files from one server to another and utilized Talend FTP components.
  • Designed and implemented ETL for data loads from heterogeneous sources to SQL Server and Oracle target databases, covering fact tables and Slowly Changing Dimensions (SCD Type 1 and SCD Type 2); a SQL sketch of the Type 2 pattern follows this list.
  • Utilized Big Data components like tHDFSInput, tHDFSOutput, tPigLoad, tPigFilterRow, tPigFilterColumn, tPigStoreResult, tHiveLoad, tHiveInput, tHbaseInput, tHbaseOutput, tSqoopImport and tSqoopExport.
  • Used the most commonly used Talend components (tMap, tDie, tConvertType, tFlowMeter, tLogCatcher, tRowGenerator, tSetGlobalVar, tHashInput, tHashOutput, and many more).
  • Created many complex ETL jobs for data exchange from and to Database Server and various other systems including RDBMS, XML, CSV, and Flat file structures.
  • Created Talend Mappings to load data from S3 to Amazon Redshift DWH.
  • Created Implicit, local and global Context variables in the job. Worked on Talend Administration Console (TAC) for scheduling jobs and adding users.
  • Worked on various Talend components such as tMap, tFilterRow, tAggregateRow, tFileExist, tFileCopy, tFileList.
  • Designed the ETL processes using DataStage to load data from Teradata, DB2, Informix, flat files (fixed width), and XML files to the staging database, and from staging to the target Oracle data warehouse database.
  • Extensively used the Assignment task for assigning file names, the Command task for copying and archiving target files from the outgoing path to the archive path, and pre-session commands at the session level for deleting existing output files.
  • Debugged the mappings extensively, hardcoding test data to verify the logic instance by instance. Involved in unit testing and documented the test results, workflows, and jobs.
  • Generated surrogate keys for composite attributes using key management functions while loading data into the data warehouse.
  • Responsible for developing data pipelines on Amazon AWS to extract data from weblogs and store it in HDFS.
  • Installed Hadoop, MapReduce, and HDFS, and developed multiple MapReduce jobs in Pig and Hive for data cleaning and pre-processing.
  • Used Spark SQL to perform big data analytics on ingested data in Amazon S3.
  • Imported data from Amazon S3 into Spark RDDs and performed transformations and actions on the RDDs.
  • Hands-on experience with many components in the palette for designing jobs, and used context variables to parameterize Talend jobs.
  • Experience with TWS, Talend Open Studio, and building Jenkins/GitHub CI/CD pipelines for supported projects.
  • Worked in an Agile environment, attending daily scrum meetings, bi-weekly sprint planning meetings, retrospectives, and demos.
  • Created parameter files for parameterizing source/target file locations, file names, and DB connections, and wrote Unix shell scripts (transferred via FileZilla) to run the workflows from PuTTY.
  • Migrated code between environments: from DEV to QA using Repository Manager by exporting and importing XML files, and from DEV to PROD using ServiceNow by attaching the XML file to the change ticket.
  • Migrated parameter files and KSH scripts from Test to QA manually, and from Test to PROD by attaching them to the change ticket in ServiceNow.
  • Scheduled the jobs using ZENA scheduler by checking dependencies.
  • Provided warranty support for the jobs that are in Production until sign off is done by external vendor.
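A minimal, Oracle-flavored SQL sketch of the SCD Type 2 load pattern referenced above. The dim_customer dimension, customer_stg staging table, column names, and dim_customer_seq sequence are illustrative assumptions, not objects from the original project:

    -- Close out the current version of any customer whose tracked attributes changed.
    UPDATE dim_customer d
    SET    eff_end_dt   = CURRENT_DATE - 1,
           current_flag = 'N'
    WHERE  d.current_flag = 'Y'
      AND EXISTS (SELECT 1
                  FROM   customer_stg s
                  WHERE  s.customer_id = d.customer_id
                    AND (s.customer_name <> d.customer_name
                         OR s.customer_city <> d.customer_city));

    -- Insert new customers and the new versions of changed customers as current rows.
    INSERT INTO dim_customer (customer_sk, customer_id, customer_name, customer_city,
                              eff_start_dt, eff_end_dt, current_flag)
    SELECT dim_customer_seq.NEXTVAL,          -- surrogate key from a sequence
           s.customer_id, s.customer_name, s.customer_city,
           CURRENT_DATE, DATE '9999-12-31', 'Y'
    FROM   customer_stg s
    LEFT JOIN dim_customer d
           ON d.customer_id = s.customer_id AND d.current_flag = 'Y'
    WHERE  d.customer_id IS NULL;             -- no open current row exists for this key

The same end-date/flag pattern can be wired into a Talend tMap with an insert/update output split instead of running it as standalone SQL.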

Environment: Talend, Hadoop, MapReduce, HDFS, Hive, HBase, Spark, Java, SQL, Tableau, Sqoop, Teradata, Oozie, etc.

Confidential, Dallas, TX

Talend Developer

Responsibilities:

  • Worked with the business team in analyzing and validating requirements; created, implemented, maintained, documented, and enhanced test plans, test scripts, and test methodologies to ensure thorough testing of the application for each release.
  • Participated in all phases of the development life cycle, with extensive involvement in definition and design meetings and functional and technical walkthroughs.
  • Created Talend jobs to copy the files from one server to another and utilized Talend FTP components.
  • Created and managed Source to Target mapping documents for all Facts and Dimension tables.
  • Used ETL methodologies and best practices to create Talend ETL jobs. Followed and enhanced programming and naming standards.
  • Created and deployed physical objects including custom tables, custom views, stored procedures, and Indexes to SQL Server for Staging and Data-Mart environment.
  • Designed and implemented ETL for data loads from heterogeneous sources to SQL Server and Oracle target databases, covering fact tables and Slowly Changing Dimensions (SCD Type 1 and SCD Type 2).
  • Utilized Bigdata components like tHDFSInput, tHDFSOutput, tPigLoad, tPigFilterRow, tPigFilterColumn, tPigStoreResult, tHiveLoad, tHiveInput, tHbaseInput, tHbaseOutput, tSqoopImport and tSqoopExport.
  • Extensively used the tMap component for lookup and joiner functions, along with tJava, tOracle, XML, delimited file, tLogRow, and logging components, etc.
  • Worked on development, enhancement, and migration projects, migrating the entire data warehouse using AWS services together with Apache Spark and Sqoop applications driven largely by Scala scripts.
  • Used the most commonly used Talend components (tMap, tDie, tConvertType, tFlowMeter, tLogCatcher, tRowGenerator, tSetGlobalVar, tHashInput, tHashOutput, and many more).
  • Created many complex ETL jobs for data exchange from and to Database Server and various other systems including RDBMS, XML, CSV, and Flat file structures.
  • Migrated the entire network into AWS cloud services, creating Virtual Private Clouds and provisioning servers; as part of this effort, migrated all sources from the Altegra legacy environment to AWS Cloud CHC 2.0.
  • Took part in loading data into the data warehouse using Talend Big Data (Hadoop) ETL components, AWS S3 buckets, and AWS services for the Redshift database.
  • Designed Talend Big Data jobs to pick files from AWS S3 buckets and load them into the AWS Redshift database (see the SQL sketch after this list).
  • As part of Redshift database maintenance, ran VACUUM and ANALYZE on the Redshift tables.
  • Used Talend Big Data components for Hadoop, S3 buckets, and AWS services for Redshift.
  • Created Implicit, local, and global Context variables in the job. Worked on Talend Administration Console (TAC) for scheduling jobs and adding users.
  • Worked on various Talend components such as tMap, tFilterRow, tAggregateRow, tFileExist, tFileCopy, tFileList, tDie etc.
  • Experienced in building Talend jobs outside of Talend Studio as well as on the TAC server.
  • Experienced in writing expressions within tMap as per business needs; handled insert and update strategies using tMap. Used ETL methodologies and best practices to create Talend ETL jobs.
  • Extracted data from flat files and databases, applied business logic, and loaded it into the staging database as well as flat files.
  • Used tRunJob component to run child job from a parent job and to pass parameters from parent to child job.
  • Developed complex Talend ETL jobs to migrate the data from flat files to database. Implemented custom error handling in Talend jobs and worked on different methods of logging. Created ETL/Talend jobs both design and code to process data to target databases.
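A hedged SQL sketch of the S3-to-Redshift load and table maintenance described above (the kind of statements a job might issue, for example via tRedshiftRow or a bulk component). The bucket, IAM role, and table names are illustrative only:

    -- Bulk-load delimited files from an S3 prefix into a Redshift staging table.
    COPY stg_claims
    FROM 's3://example-bucket/incoming/claims/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load-role'
    DELIMITER '|'
    IGNOREHEADER 1
    GZIP
    REGION 'us-east-1';

    -- Routine Redshift maintenance after heavy loads or deletes:
    VACUUM stg_claims;     -- reclaims space and re-sorts rows
    ANALYZE stg_claims;    -- refreshes table statistics for the query planner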

Environment: Talend 7.1/5.6, Ab Initio 3.3.5, Oracle 11g, Autosys, TAC, PL/SQL, Shell Script, JIRA, Git, Jenkins, uDeploy, MySQL, Unix, Apache Hue.

Confidential

Talend Developer

Responsibilities:

  • Identified test cases to automate and automated them using Selenium WebDriver, TestNG, and Java technologies (Eclipse IDE). Contributed to framework creation.
  • Worked closely with Business Analysts to review the business specifications of the project and gather the ETL requirements.
  • Created Talend jobs to copy the files from one server to another and utilized Talend FTP components.
  • Analyzed the source data to assess data quality using Talend Data Quality.
  • Designed and implemented ETL for data loads from heterogeneous sources to SQL Server and Oracle target databases, covering fact tables and Slowly Changing Dimensions (SCD Type 2) to capture changes.
  • Used components like tHDFSInput, tHDFSOutput, tPigLoad, tPigFilterRow, tPigFilterColumn, tPigStoreResult, tHiveLoad, tHiveInput, tHbaseInput, tHbaseOutput, tSqoopImport and tSqoopExport.
  • Migrated on-premise database structures to the Amazon Redshift data warehouse. Performance tuning: used tMap cache properties, multi-threading, and tParallelize components for better performance with large source volumes, and tuned the SQL source queries to restrict unwanted data in the ETL process (see the query sketch after this list).
  • Extensively used the tMap component for joiner functions, along with tJava, tOracle, tXML, delimited file, tLogRow, and logging components in many jobs; worked with over 100 components across jobs.
  • Implemented File Transfer Protocol (FTP) operations using Talend Studio to transfer files between network folders, using components such as tFTPConnection, tFTPFileList, tFTPGet, and tFTPPut.
  • Designed, developed, and improved complex ETL structures to extract transform and load data from multiple data sources into data warehouse and other databases based on business requirements.
  • Used custom code components such as tJava, tJavaRow, and tJavaFlex. Created Talend jobs to load data into various Oracle tables. Utilized Oracle stored procedures and wrote Java code to capture global map variables and use them in the job.
  • Experienced in using debug mode of Talend to debug a job to fix errors.
  • Used TAC (Talend Administration Center) to implement new users, projects, and tasks within multiple TAC environments (Dev, Test, Prod, DR).
  • Scheduled the ETL jobs in TAC using file-based and time-based triggers. Experience in Agile methodology.
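A minimal sketch of the kind of tuned, incremental source query mentioned above, assuming a hypothetical orders source table with a last_updated_dt audit column; the bind value would come from a Talend context variable in practice:

    -- Pull only rows changed since the last successful run instead of a full extract.
    SELECT order_id,
           customer_id,
           order_status,
           order_amount,
           last_updated_dt
    FROM   orders
    WHERE  last_updated_dt > TO_DATE(:last_load_dt, 'YYYY-MM-DD HH24:MI:SS')  -- value supplied by the job
      AND  order_status <> 'CANCELLED';  -- restrict unwanted rows at the source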

Environment: Talend Enterprise for Big Data (v6.0.1/5.6.2/5.6.1), UNIX, SQL, Hadoop, Hive, Oracle, Unix Shell Scripting, Microsoft SQL Server Management Studio.

Confidential

SQL/ETL Developer

Responsibilities:

  • Involved in requirements gathering by interacting with the users and other management personnel to get a better understanding of the business process.
  • Analyzed business requirements and built logical data models describing all the data and the relationships between them using Data Vault.
  • Created new database objects like Procedures, Functions, Packages, Triggers, Indexes and Views using T-SQL in SQL Server 2005.
  • Validated change requests and made appropriate recommendations. Standardized the implementation of data.
  • Responsible for designing and developing mappings, mapplets, sessions, and workflows for loading data from source to target databases using Informatica PowerCenter, and tuned mappings to improve performance.
  • Created database objects like views, indexes, user defined functions, triggers, and stored procedures. Tuned mappings & SQL queries for better performance and efficiency.
  • Developed PL/SQL triggers and master tables for automatic creation of primary keys (see the sketch after this list).
  • Involved in ETL process from development to testing and production environments.
  • Extracted data from various sources such as flat files and Oracle and loaded it into target systems using Informatica 8.x.
  • Used Informatica PowerCenter Workflow Manager to create sessions and batches to run with the logic embedded in the mappings.
  • Automated existing ETL operations using Autosys.
  • Created and Executed shell scripts in Unix Environment.
  • Created and ran workflows using Workflow Manager in Informatica; maintained source definitions, transformation rules, and target definitions using Informatica Repository Manager.
  • Created tables and partitions in database Oracle.
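A minimal Oracle sketch of the sequence-plus-trigger pattern for automatic primary key generation referenced above; the table, sequence, and trigger names are illustrative assumptions:

    -- Sequence that supplies the surrogate primary key values.
    CREATE SEQUENCE customer_master_seq START WITH 1 INCREMENT BY 1 NOCACHE;

    -- Before-insert trigger that populates the primary key automatically.
    CREATE OR REPLACE TRIGGER trg_customer_master_pk
    BEFORE INSERT ON customer_master
    FOR EACH ROW
    WHEN (NEW.customer_key IS NULL)
    BEGIN
      SELECT customer_master_seq.NEXTVAL INTO :NEW.customer_key FROM dual;
    END;
    /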

Environment: Informatica Power Center 8.x, Oracle, SQL developer, MS Access, PL/SQL, UNIX Shell Scripting, SQL Server.
