
Senior Teradata, Informatica, Hadoop Developer Resume


Charlotte, NC

SUMMARY

  • 8+ years of IT experience implementing Data Warehouse projects using Teradata, Informatica, and Unix shell scripting.
  • 8 years of experience in the Banking/Finance domain.
  • 2 years of working experience in the Hadoop ecosystem.
  • 7+ years of experience with the Autosys job scheduling tool.
  • Strong experience in data modeling using PowerDesigner: conceptual, logical, and physical data models.
  • Expertise in dimensional and relational data modeling, normalization, star and snowflake schemas, data mining, and slowly changing dimensions.
  • Good knowledge of and experience in the Anti-Money Laundering (AML) and Fraud Integration domains.
  • Extensive experience in project planning, effort estimation, and business analysis to meet client needs and develop efficient solutions.
  • Project experience in both Waterfall and Agile methodologies, with participation in daily Scrum calls and effort estimation meetings.
  • Designed and developed a framework for Data Quality (DQ) and reconciliation checks.
  • Good experience with the Software Development Life Cycle (SDLC) and with building complete source-to-target architectures using multiple ETL tools and data mover utilities.
  • Good data warehousing ETL experience using Informatica PowerCenter client tools: Designer (Source Analyzer, Target Designer, Transformation Developer, Mapping Designer, Mapplet Designer), Workflow Manager, Workflow Monitor, and Repository Manager.
  • Expertise in Informatica session partitioning (round-robin and pass-through), pushdown optimization, and concurrent workflows to improve performance.
  • Expertise in Teradata utilities: TPT, BTEQ, FastLoad, MultiLoad, and TPump scripts.
  • Excellent knowledge of Teradata SQL performance tuning, including collected statistics, derived tables, volatile/global temporary tables, join strategies, join types, and EXPLAIN/optimizer plans (see the sketch after this list).
  • Excellent knowledge of writing Teradata macros, semantic views, stored procedures, triggers, join indexes, and partitioned primary indexes; developed macros for data validation and attribute/object usage reports.
  • Excellent knowledge of implementing Teradata standards and best practices, ANSI SQL, and compression features (Multi-Value Compression).
  • Sound knowledge of Teradata architecture (Parsing Engine, AMPs, BYNET, indexes, data distribution, data retrieval, data protection, and locking).
  • Proposed the design of and developed Hadoop migration projects that re-implemented complex Informatica and Teradata code using HDFS, Hive, Pig, Sqoop, and Oozie.
  • Good experience using Teradata Viewpoint to manage query execution and monitor performance, resource, and space utilization.
  • Good experience with the Datameer tool for big data analytics through effective use of workbooks and jobs.
  • Good knowledge of ETL processing using SSIS.
  • Expertise in implementing NDM, SFTP, and FTP services to source and deliver files.
  • Experience working with business analysts and Subject Matter Experts (SMEs) to identify, prioritize, and implement solutions that improve efficiency, reduce risk, and support new business.
  • Coordinated with upstream and downstream teams for end-to-end testing.
  • Good experience working in a global delivery model.
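
A minimal sketch of the kind of Teradata tuning step described above, run through BTEQ from a shell script. The logon variables, database, table, and column names are placeholders, not from any actual project.

#!/bin/ksh
# Sketch only: stage recent rows into a volatile table, collect statistics,
# and review the optimizer plan before joining to a large target table.
# TDPID/TD_USER/TD_PASS are assumed environment variables; all object names
# are hypothetical.
bteq <<EOF
.LOGON ${TDPID}/${TD_USER},${TD_PASS};

CREATE VOLATILE TABLE stg_acct AS (
    SELECT acct_id, acct_type, open_dt
    FROM   edw.account_src
    WHERE  open_dt >= CURRENT_DATE - 30
) WITH DATA PRIMARY INDEX (acct_id)
  ON COMMIT PRESERVE ROWS;

COLLECT STATISTICS ON stg_acct COLUMN (acct_id);

EXPLAIN
SELECT t.acct_id, t.acct_type
FROM   edw.account_dim t
JOIN   stg_acct s ON s.acct_id = t.acct_id;

.LOGOFF;
.QUIT;
EOF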

TECHNICAL SKILLS

Databases: Teradata v12/v13/v14/v15, MS SQL Server, Oracle

ETL Tools: Informatica PowerCenter v8.x, v9.5.1, and v9.6.1 (Designer, Workflow Manager, Workflow Monitor, Repository Manager); SSIS

Teradata Utilities: TPT (Load, Update, Stream, SQL Selector, ODBC, Export, and Data Connector operators), BTEQ, FastLoad, MultiLoad, TPump, FastExport

Hadoop/Big Data: HDFS, Hive, Pig, Sqoop, Oozie, Datameer

Data Modeling Tool: Power Designer

Scheduling Tools: Autosys, Oozie

DB Tools: Teradata Viewpoint, Teradata SQL Assistant, Toad, Teradata Studio Express

Languages: Unix/Linux Shell Scripting, ANSI SQL, PL/SQL

Other Tools: JIRA, SVN version control, ITSM/Remedy, Visio, SFTP, FTP, NDM (Network Data Mover), PuTTY, WinSCP, Quality Center

PROFESSIONAL EXPERIENCE

Confidential, Charlotte, NC

Senior Teradata, Informatica, Hadoop Developer

Responsibilities:

  • Responsible for development, support, and maintenance of ETL processes using Informatica PowerCenter 9.6.1 client tools (Source Analyzer, Warehouse Designer, Mapping Designer, Mapplet Designer, Repository Manager, and Workflow Manager) along with Teradata load utilities such as TPT, FastLoad, and MultiLoad.
  • Integrate heterogeneous data sources such as Oracle, SQL Server, and flat files (fixed-width and delimited) into the staging area using Informatica mappings, workflows, and session objects.
  • Extensively work with various lookup caches: static, dynamic, persistent, and shared.
  • Extensively work on transformations such as Expression, Filter, Aggregator, Lookup, Stored Procedure, Sequence Generator, and Joiner.
  • Create parameter files and Unix scripts for operations such as file validation, file processing, report generation, and data reconciliation checks between source and target.
  • Understand business requirements and the current system implementation of that functionality, and perform system analysis of the proposed changes.
  • Create wrapper shell scripts that connect to the Informatica repositories from the Autosys server to execute jobs and generate reports (see the wrapper sketch after this list).
  • Implement NDM, SFTP, and FTP services to retrieve flat files from external sources; create shell scripts to connect to external servers using the NDM, SFTP, and FTP processes.
  • Increase code reusability using shortcuts, mapplets, reusable transformations, and reusable sessions to reduce redundancy.
  • Create/modify Teradata tables, views, and stored procedures; create semantic-layer views for business users.
  • Prepare Teradata stored procedures for masking data when it is copied from the production environment to lower environments for development.
  • Prepare a reusable macro to switch the view layer, maintaining referential integrity between parent and child profiles during data processing.
  • Prepare Teradata BTEQ scripts for Change Data Capture (CDC) between historical and current data (see the CDC sketch after this list).
  • Code and unit test Teradata SQL, TPT (Load, Update, Stream, SQL Selector, SQL Inserter, ODBC, Export, and Data Connector operators), FastLoad, MultiLoad, FastExport, and BTEQ scripts.
  • Create Autosys jobs that invoke Unix scripts to run TPT scripts from the command line with local and global job variable files (see the TPT launcher sketch after this list).
  • Apply the Teradata Multi-Value Compression feature to save space in the Teradata target database.
  • Create XML scripts to send reconciliation reports and daily job success/failure status to business users.
  • Create join indexes to improve the performance of Teradata queries used by the user-interface transaction monitoring system.
  • Interact with the business on defect resolution and user support for any design/requirement questions.
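
Below is a minimal sketch of the kind of Autosys wrapper script referred to above; the service, domain, folder, and log locations are hypothetical, and the Informatica password is assumed to be supplied through an environment variable.

#!/bin/ksh
# Hypothetical wrapper invoked by an Autosys command job: starts an
# Informatica workflow via pmcmd and passes its exit status back to Autosys.
# Assumes INFA_PASSWD is set in the job profile; all names are placeholders.
INFA_USER=infa_batch
DOMAIN=Domain_EDW
INT_SVC=IS_EDW
FOLDER=FIN_DW
WORKFLOW=$1                 # workflow name passed in by the Autosys job

pmcmd startworkflow \
    -sv "$INT_SVC" -d "$DOMAIN" \
    -u "$INFA_USER" -pv INFA_PASSWD \
    -f "$FOLDER" -wait "$WORKFLOW"
rc=$?

[ $rc -ne 0 ] && echo "$(date): workflow $WORKFLOW failed, rc=$rc" >> wrapper.log
exit $rc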
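
A minimal sketch of a BTEQ change-data-capture step of the sort described above: expire changed rows in a history table, then insert the new versions from a current-day staging table. TDPID/TD_USER/TD_PASS and all object names are placeholders.

#!/bin/ksh
# Hypothetical CDC step run from a shell wrapper via BTEQ.
bteq <<EOF
.LOGON ${TDPID}/${TD_USER},${TD_PASS};

/* close out open history rows whose attributes changed in today's extract */
UPDATE h
FROM edw.customer_hist AS h, stg.customer_curr AS c
SET end_dt = CURRENT_DATE - 1
WHERE h.cust_id = c.cust_id
  AND h.end_dt = DATE '9999-12-31'
  AND (h.cust_name <> c.cust_name OR h.cust_status <> c.cust_status);

/* insert new and changed rows as the open version */
INSERT INTO edw.customer_hist (cust_id, cust_name, cust_status, start_dt, end_dt)
SELECT c.cust_id, c.cust_name, c.cust_status, CURRENT_DATE, DATE '9999-12-31'
FROM   stg.customer_curr c
LEFT JOIN edw.customer_hist h
       ON h.cust_id = c.cust_id
      AND h.end_dt  = DATE '9999-12-31'
WHERE  h.cust_id IS NULL;

.IF ERRORCODE <> 0 THEN .QUIT 8;
.LOGOFF;
.QUIT 0;
EOF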
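
A minimal sketch of the kind of Autosys-invoked TPT launcher mentioned above; the script path, job variable file, and job name are hypothetical, and the global job variable file is assumed to be configured at the TPT installation level.

#!/bin/ksh
# Hypothetical TPT launcher: runs a TPT script with a local job variable file;
# installation-wide defaults are assumed to come from the globally configured
# job variable file. All paths and names are placeholders.
TPT_DIR=/opt/etl/tpt
SCRIPT=$TPT_DIR/acct_load.tpt          # Data Connector -> Load operator script
LOCAL_VARS=$TPT_DIR/acct_load.jobvars  # TDPID, credentials, file names, etc.

tbuild -f "$SCRIPT" -v "$LOCAL_VARS" -j acct_load_$(date +%Y%m%d)
exit $?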

Environment: Teradata v13/v14/v15, Informatica v9.5.x/v9.6.1, Oracle, UNIX shell scripting, Autosys, SVN

Confidential, Charlotte, NC

Senior Teradata, Informatica, Hadoop Developer

Responsibilities:

  • Created new Informatica mappings to extract data from heterogeneous upstream sources such as Oracle and SQL Server, apply the transformation logic, and load the data into target databases.
  • Acted as lead developer, mentoring new offshore resources and managing the project.
  • Understood the existing business processes and the project's functional and technical specifications.
  • Proposed using multiple Common Table Expressions in the Source Qualifier query so that a single mapping could extract different sets of data.
  • Applied session partitioning in Informatica using round-robin and pass-through partitions, which reduced the batch run time by 6 hours.
  • Used the NTILE SQL function to split source records across session partitions.
  • Used reusable mapplets to calculate Value Added Process (VAP) measures such as credit risk rating, counterparty risk rating, and obligor risk rating.
  • Worked on various Teradata loading techniques: TPT, FastLoad, MultiLoad, TPump, and BTEQ.
  • Prepared BTEQ scripts applying Slowly Changing Dimension (SCD Type II) logic at the target level.
  • Wrote UNIX shell scripts for report generation, value manipulation, and file processing.
  • Scheduled the application processes using the Autosys scheduling tool, determining dependencies and SLAs.
  • Managed existing ETL workflows and changed existing mappings based on scope changes.
  • Implemented a complete Equity application, replacing complex regulatory mappings with Hadoop, Sqoop, Pig, and Hive.
  • Used the Sqoop import feature to transfer data from Teradata to HDFS, performed ETL transformations using HiveQL, and used Sqoop export to transfer the processed data from HDFS back to Teradata (see the Sqoop sketch after this list).
  • Implemented and represented the mappings in Datameer on Hadoop.
  • Successfully re-engineered a module from an Informatica/Unix/Teradata system to a Hadoop system.
  • Redesigned the existing system for faster data delivery to users for regulatory calculations.
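
A minimal sketch of the Sqoop import/transform/export round trip described above; the connection details, schemas, paths, and Hive script are placeholders, and a Teradata JDBC driver/connector is assumed to be available to Sqoop on the edge node.

#!/bin/ksh
# Hypothetical round trip: pull a Teradata table into HDFS, transform it with
# a HiveQL script, then push the aggregated result back to Teradata.
TD_URL="jdbc:teradata://tdprod/DATABASE=EDW"

sqoop import \
  --connect "$TD_URL" \
  --username etl_batch --password-file /user/etl/.td_pass \
  --table TRADE_DAILY \
  --target-dir /data/raw/trade_daily \
  --num-mappers 8

hive -f /opt/etl/hql/build_trade_agg.hql     # assumed to write /data/out/trade_agg

sqoop export \
  --connect "$TD_URL" \
  --username etl_batch --password-file /user/etl/.td_pass \
  --table TRADE_AGG \
  --export-dir /data/out/trade_agg \
  --input-fields-terminated-by '\001'        # matches the assumed Hive output delimiter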

Environment: Informatica 9.0.1, Oracle, Teradata, Hadoop (Sqoop, Oozie, Hive, Pig), UNIX, Autosys, Datameer
