
Sr. Teradata Developer and Hadoop Developer Resume


Boston, MA

PROFESSIONAL SUMMARY:

  • Over 8 years of total IT experience in Big Data and Data Warehousing (ETL/ELT) technologies, including requirements gathering, data analysis, design, development, system integration testing, deployment, and documentation.
  • Hands-on experience building big data solutions using Hadoop, HDFS, MapReduce, Spark, Hive, Sqoop, ZooKeeper, and Oozie.
  • Excellent knowledge of and hands-on experience with Hadoop architecture and its components, such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, the MapReduce programming paradigm, and monitoring systems.
  • Hands-on experience installing, configuring, managing, and using Hadoop ecosystem components.
  • Experience importing and exporting data with Sqoop between HDFS/Hive and relational database systems (a minimal sketch appears after this summary).
  • Well versed in writing and using Hive UDFs in Java.
  • Excellent understanding of different storage concepts such as block storage, object storage, columnar storage, and compressed storage.
  • Extensive experience in extraction, transformation, and loading (ETL and ELT) of data from various sources into data warehouses and data marts following industry best practices.
  • Experience with Informatica ETL for data movement, transformations, and loads.
  • Experience working with Teradata utilities and tools such as BTEQ, FastLoad, MultiLoad, XML Import, FastExport, Teradata SQL Assistant, Teradata Administrator, and PMON.
  • Expertise in the Informatica ETL suite, including PowerCenter, PowerExchange, PowerConnect, Designer, Workflow Manager, Workflow Monitor, Repository Manager, Repository Server Administration Console, IDE (Informatica Data Explorer), and IDQ (Informatica Data Quality).
  • In-depth understanding and use of Teradata OLAP functions. Proficient in Teradata SQL, stored procedures, macros, views, and indexes (primary, secondary, PPI, join indexes, etc.).
  • Experience working with the business intelligence tools MicroStrategy, Crystal Reports, and BusinessObjects, and with Informatica as the ETL tool.
  • Good experience with UNIX/Windows/Mainframe environments for running TPump batch processes for Teradata CRM.
  • Good understanding of Teradata MPP architecture, including partitioning, primary indexes, shared-nothing design, nodes, AMPs, BYNET, etc.
  • Experience in Teradata production support.
  • Good working experience with different relational database systems.
  • Very good understanding of building data warehouses and data marts, including OLTP vs. OLAP, star vs. snowflake schemas, and normalization vs. denormalization methods.
  • Hands-on experience building wrapper shell scripts and using shell commands for analysis.
  • Supported various reporting teams and have experience with the data visualization tool Tableau.
  • Very good at SQL, data analysis, unit testing, and debugging data quality issues.
  • Excellent communication, problem-solving, and leadership skills; creative and technically competent.
  • Focused on customer satisfaction and driving results, both as a team player and as an individual contributor with strong collaboration skills.
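
A minimal sketch of the kind of Sqoop import/export described above, assuming a MySQL source; the connection string, credentials, and table names are illustrative placeholders, not details from an actual engagement:

# Pull a relational table into Hive, then export a processed Hive table back out.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user --password-file /user/etl/.dbpass \
  --table orders \
  --hive-import --hive-table staging.orders \
  --num-mappers 4

sqoop export \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user --password-file /user/etl/.dbpass \
  --table orders_summary \
  --export-dir /user/hive/warehouse/reporting.db/orders_summary \
  --input-fields-terminated-by '\001'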

SKILLS:

Languages and Big Data Tools: Hive, Sqoop, SQL, PL/SQL, UNIX shell scripting.

DB Utilities: BTEQ, Viewpoint, FastLoad, MultiLoad, FastExport, TPump, SQL*Loader, Exp/Imp, Teradata Administrator, Teradata Manager, TSET, SQL Assistant, Visual Explain, TASM.

Scheduling and Version Control Tools: AutoSys, VSS.

Tools: CVS, VSS, ARCMAIN, Teradata Administrator, Visual Explain, SQL Assistant, Toad, PuTTY, WinSCP, Cygwin, Oracle Developer 2000, SQL*Plus.

PROFESSIONAL EXPERIENCE:

Sr. Teradata Developer and Hadoop Developer

Confidential, Boston, MA

Responsibilities:

  • Evaluated business requirements and prepared detailed specifications, following project guidelines, for the programs to be developed.
  • Responsible for building scalable distributed data solutions using Hadoop.
  • Imported data from SQL Server into Hive and HDFS using Sqoop, for both one-time and daily loads.
  • Worked on a multi-node Hadoop big data environment.
  • Implemented Hive tables and HQL queries for reports.
  • Performed data validation using Hive dynamic partitioning and bucketing (a minimal sketch follows this list).
  • Extracted data from Hive and loaded it into an RDBMS using Sqoop.
  • Performed extensive data validation using Hive and wrote Hive UDFs.
  • Created Hive tables, loaded them with data, and wrote Hive queries that run internally as MapReduce jobs.
  • Exported the results to SQL Server, where they are used to generate business reports.
  • Tuned HiveQL to improve performance.
  • Troubleshot performance issues and tuned the Hadoop cluster.
  • Loaded flat files from HDFS paths into local Informatica (ETL) file system directories, then used them as source files in Informatica PowerCenter for transformations and for loading processed data into the final destination database to support decision making.
  • Used Teradata utilities: FastLoad, MultiLoad, TPump, FastExport, BTEQ, and TPT.
  • Used EXPLAIN and COLLECT STATISTICS for Teradata performance tuning.
  • Used Informatica PDO (pushdown optimization) and session partitioning for better performance.
  • Created stored procedures and macros in Teradata.
  • Used BTEQ for SQL scripts and batch scripts, and created batch programs using shell scripts (see the BTEQ wrapper sketch after this list).
  • Developed Sqoop scripts to enable interaction between Pig and the MySQL database.
  • Developed script files for processing data and loading it to HDFS, wrote HDFS CLI commands, and developed UNIX shell scripts for creating reports from Hive data.
  • Ran cron jobs to delete Hadoop logs, old local job files, and cluster temp files; set up Hive with MySQL as a remote metastore.
  • Moved all log/text files generated by various products into an HDFS location and created external Hive tables on top of the parsed data.
  • Worked on different phases of the data warehouse development lifecycle, from mappings to extracting data from various sources into tables and flat files. Created reusable objects such as mapplets and reusable transformations for business logic.
  • Worked on transformations such as Rank, Expression, Aggregator, and Sequence Generator.
  • Worked with complex mappings using Expression, Router, Lookup, Aggregator, and Filter transformations; also worked with Update Strategy and Joiner transformations, session partitioning, cache memory, and connected and unconnected lookups.
  • Used Teradata 13 and Oracle databases with the Informatica DW tool to load source data.
  • Created Teradata schemas with constraints, created macros, loaded data using the FastLoad utility, and created functions and procedures in Teradata.
  • Wrote SQL queries, performed PL/SQL programming, and did query-level performance tuning.
  • Developed and tested database subprograms (packages, stored procedures, functions) according to business and technical requirements.
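
A minimal sketch of the dynamically partitioned, bucketed Hive table mentioned above; database, table, and column names are illustrative placeholders:

# Create a partitioned, bucketed table and load it from a staging table
# using dynamic partitioning.
hive -e "
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.enforce.bucketing=true;

CREATE TABLE IF NOT EXISTS reporting.orders_part (
    order_id    BIGINT,
    customer_id BIGINT,
    amount      DECIMAL(18,2)
)
PARTITIONED BY (order_date STRING)
CLUSTERED BY (customer_id) INTO 32 BUCKETS
STORED AS ORC;

INSERT OVERWRITE TABLE reporting.orders_part PARTITION (order_date)
SELECT order_id, customer_id, amount, order_date
FROM   staging.orders;
"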
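
A minimal sketch of a BTEQ batch step wrapped in a shell script, as referenced above; the logon string, database, and table names are placeholders:

#!/bin/sh
# Run a BTEQ batch step and fail the job if BTEQ returns a nonzero code.
bteq > refresh_stats.log 2>&1 <<'EOF'
.LOGON tdprod/etl_user,password;

COLLECT STATISTICS ON sales.orders COLUMN (order_date);

SELECT order_date, COUNT(*)
FROM   sales.orders
GROUP  BY 1
ORDER  BY 1;

.LOGOFF;
.QUIT;
EOF

rc=$?
if [ $rc -ne 0 ]; then
    echo "BTEQ step failed with return code $rc" >&2
    exit $rc
fi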

Sr. Hadoop Developer

Confidential, San Jose, CA

Responsibilities:

  • Performed data analysis and issue identification.
  • Proposed architectural design changes to improve data warehouse performance.
  • Visualized data architecture designs from high level to low level and designed performance objects for each level.
  • Troubleshot database issues related to performance, queries, and stored procedures.
  • Created ER diagrams and conceptual, logical, and physical data models.
  • Fine-tuned existing scripts and processes to achieve increased performance and reduced load times for faster user query performance.
  • Accountable for architecture-related deliverables, ensuring all project goals were met within the project timelines.
  • Performed mapping between source and target data, logical-to-physical model mapping, and mapping from third normal form to the dimensional (presentation) layer.
  • Created, validated, and updated the data dictionary and analyzed documentation to make sure the captured information was correct.
  • Designed logical and physical data models using the Erwin data modeling tool and Visio.
  • Provided architecture and design support and solutions for business-initiated requests and projects.
  • Wrote Teradata SQL queries for joins and table modifications.
  • Created customized MultiLoad scripts on the UNIX platform for Teradata loads (see the sketch after this list).
  • Provided designs for CDC (change data capture) implementation for real-time data solutions.
  • Interacted with the business to collect critical business metrics and provided solutions to certify data for business use.
  • Analyzed and recommended solutions for data issues.
  • Wrote Teradata BTEQ scripts to implement business logic.
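
A minimal sketch of a customized MultiLoad job driven from UNIX, as referenced above; the logon string, work/error tables, input file, and layout are placeholders:

# Load a pipe-delimited flat file into a Teradata table with MultiLoad.
mload > mload_orders.log 2>&1 <<'EOF'
.LOGTABLE work.orders_log;
.LOGON tdprod/etl_user,password;

.BEGIN IMPORT MLOAD
    TABLES      sales.orders
    WORKTABLES  work.orders_wt
    ERRORTABLES work.orders_et work.orders_uv;

.LAYOUT order_layout;
.FIELD order_id   * VARCHAR(20);
.FIELD order_date * VARCHAR(10);
.FIELD amount     * VARCHAR(20);

.DML LABEL insert_orders;
INSERT INTO sales.orders (order_id, order_date, amount)
VALUES (:order_id, :order_date, :amount);

.IMPORT INFILE /data/incoming/orders.dat
    FORMAT VARTEXT '|'
    LAYOUT order_layout
    APPLY insert_orders;

.END MLOAD;
.LOGOFF;
EOF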

Teradata Developer

Confidential

Responsibilities:

  • Understood the specifications and analyzed data according to client requirements. Worked extensively on data extraction, transformation, and loading from source to target systems using BTEQ, FastLoad, and MultiLoad.
  • Designed process-oriented UNIX scripts and ETL processes for loading data into the data warehouse.
  • Created database automation scripts using stored procedures to create databases in different environments.
  • Wrote BTEQ, FastLoad, and MultiLoad scripts for loading data into the target data warehouse.
  • Handled errors and tuned performance in Teradata queries and utilities.
  • Tested the functionality of the systems in the development phase and designed the test plans for the data warehouse projects.
  • Translated high-level design specs into simple ETL coding and mapping standards.
  • Maintained warehouse metadata, naming standards, and warehouse standards for future application development.
  • Performed Teradata performance tuning via EXPLAIN, PPI, AJIs, indexes, COLLECT STATISTICS, and code rewrites.
  • Developed BTEQ scripts to load data from the Teradata staging area to the Teradata data mart.
  • Designed the complete workflow, with dependency hierarchy, for all extract mappings to serve the business requirements.
  • Performed performance tuning for Teradata SQL statements running against large data volumes.
  • Created FastLoad, FastExport, MultiLoad, TPump, and BTEQ scripts to load data from the Oracle database and flat files into the primary data warehouse (a FastLoad sketch follows this list).
  • Worked as a member of the Big Data team on deliverables such as design, construction, unit testing, and deployment.
  • Loaded data from large data files into Hive tables.
  • Performed the initial setup to receive data from external sources.
  • Designed and developed a Hive job to merge incremental files (see the merge sketch after this list).
  • Wrote MapReduce jobs using Java.
  • Used external loaders such as MultiLoad and FastLoad to load data into the Teradata database.
  • Translated functional and technical requirements into detailed architecture and design.
  • Managed data coming from different sources.
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Provided operational support for the production system.
  • Tuned Informatica Mappings and Sessions for optimum performance.
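
A minimal sketch of the incremental merge pattern mentioned above (latest record per key wins); database, table, and column names are illustrative placeholders:

# Merge a delta (already loaded into a staging table) into the base Hive table.
hive -e "
INSERT OVERWRITE TABLE warehouse.customers
SELECT customer_id, name, address, updated_at
FROM (
    SELECT c.*,
           ROW_NUMBER() OVER (PARTITION BY customer_id
                              ORDER BY updated_at DESC) AS rn
    FROM (
        SELECT customer_id, name, address, updated_at FROM warehouse.customers
        UNION ALL
        SELECT customer_id, name, address, updated_at FROM staging.customers_delta
    ) c
) ranked
WHERE rn = 1;
"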
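
A minimal FastLoad sketch of the flat-file load mentioned above; it assumes the staging table already exists and is empty, and the logon string, table, and file path are placeholders:

# Bulk-load a pipe-delimited flat file into an empty Teradata staging table.
fastload > fload_orders.log 2>&1 <<'EOF'
.LOGON tdprod/etl_user,password;

SET RECORD VARTEXT "|";

DEFINE order_id   (VARCHAR(20)),
       order_date (VARCHAR(10)),
       amount     (VARCHAR(20))
FILE = /data/incoming/orders.dat;

BEGIN LOADING stage.orders_ft
    ERRORFILES stage.orders_err1, stage.orders_err2;

INSERT INTO stage.orders_ft VALUES (:order_id, :order_date, :amount);

END LOADING;
.LOGOFF;
EOF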
