
Teradata Developer Resume


Concord, CA

SUMMARY

  • 7+ years of experience in the design and development of ETL methodology supporting data transformation and processing in a corporate-wide ETL solution using Informatica and Teradata, including administration, analyzing clients' business needs, developing effective and efficient solutions, and ensuring client deliverables within committed deadlines.
  • Proven track record in planning, building, and managing successful large-scale Data Warehouse and decision support systems. Comfortable with both technical and functional applications of RDBMS, Data Mapping, Data Management, Data Transportation and Data Staging.
  • 5+ years of OLTP, ODS and EDW data modeling (logical and physical design, and schema generation) using Erwin, ER/Studio and other tools for Teradata, Oracle, DB2/UDB and MS SQL Server models & repositories.
  • Proficiency in data warehousing techniques such as Slowly Changing Dimensions (SCD) (Type I, Type II and Type III) and surrogate key assignment (a hedged Type II sketch follows this summary).
  • Extensively worked on Informatica Power Center Transformations such as Source Qualifier, Lookup, Filter, Expression, Router, Normalizer, Joiner, Union, Update Strategy, Rank, Aggregator, SQL, XML, Stored Procedure, Sorter, Sequence Generator.
  • Implemented various Performance tuning techniques on Sources, Targets, Mappings and Workflows.
  • Good understanding of ETL/Informatica standards and best practices.
  • Experience in administering large Teradata database systems in development, staging and production.
  • Expert developer skills in Teradata RDBMS, including initial Teradata DBMS environment setup and the development and use of the FastLoad, MultiLoad, TPump, Teradata SQL and BTEQ utilities.
  • Proficient in Teradata database design (conceptual and physical), Query optimization, Performance Tuning.
  • Experience implementing data warehouse and database applications with Ab Initio and Informatica ETL, together with data modeling and reporting tools, on Teradata, Oracle, DB2 and Sybase RDBMS.
  • Extensive experience managing and leading complex data warehousing applications, including installations, integrations, upgrades and maintenance of ETL, Siebel 7.0/2000/5.5 Datamart, Siebel Warehouse 6.3, Siebel EIM, OLAP, OLTP, Autosys, Control-M, Sybase, BI, data cleansing and data profiling tools.
  • Strong hands on experience using Teradata utilities (FastExport, MultiLoad, FastLoad, Tpump, BTEQ and QueryMan).
  • 3 years of Data Cleansing experience using Trillium 7.6/6.5 (Converter, Parser & Geocoder), Firstlogic 4.2/3.6, UNIX Shell Scripting and SQL coding.
  • Good Knowledge in Dimensional Data modeling, Star/Snowflake schema design, Fact and Dimensional tables, Physical and Logical data modeling.
  • Experience with business intelligence reporting tools such as Business Objects, Cognos and Hyperion.
  • Experience in supporting large databases and troubleshooting problems.
  • Experience in all phases of the SDLC, including system analysis, application design, development, testing and implementation of data warehouse and non-data warehouse projects.
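A minimal illustration of the SCD Type II pattern referenced above, written in Teradata SQL. The table and column names (dim_customer, stg_customer, customer_nk, eff_start_dt, eff_end_dt, curr_ind) are hypothetical placeholders, and the surrogate key is assumed to be an identity column; this is a sketch of the technique, not the project's actual code.

    /* Step 1: expire the current dimension row when a tracked attribute changed.
       All object names here are illustrative placeholders.                      */
    UPDATE dim
    FROM dim_customer AS dim, stg_customer AS stg
    SET eff_end_dt = CURRENT_DATE - 1,
        curr_ind   = 'N'
    WHERE dim.customer_nk = stg.customer_nk
      AND dim.curr_ind    = 'Y'
      AND dim.address     <> stg.address;

    /* Step 2: insert a fresh version for new customers and for the rows expired above.
       customer_sk is assumed to be a GENERATED ... AS IDENTITY column, so it is
       omitted from the insert column list.                                       */
    INSERT INTO dim_customer (customer_nk, address, eff_start_dt, eff_end_dt, curr_ind)
    SELECT stg.customer_nk, stg.address, CURRENT_DATE, DATE '9999-12-31', 'Y'
    FROM stg_customer AS stg
    LEFT JOIN dim_customer AS dim
      ON  dim.customer_nk = stg.customer_nk
      AND dim.curr_ind    = 'Y'
    WHERE dim.customer_nk IS NULL;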

TECHNICAL SKILLS

Teradata Tools: ARCMAIN, BTEQ, Teradata SQL Assistant, Teradata Manager, PMON, Teradata Administrator.

ETL Tools: Informatica Power Center 6.2/7.1/7.1.3, Ab Initio (GDE 1.15/1.11, Co>Op 2.14), SSIS

DB Tools: SQL*Plus, SQL Loader, TOAD 8.0, BTEQ, Fast Load, Multiload, FastExport, SQL Assistant, Teradata Administrator, PMON, Teradata Manager

Databases: Teradata 14/13/12/V2R6.2/V2R5, Oracle 10g, DB2, MS-SQL Server 2000/2005/2008, MS-Access.

Scheduling Tools: Autosys, Tivoli Maestro

Version Control Tools: Subversion, ClearCase

Programming Languages: C, C++, Java, J2EE, Visual Basic, SQL, PL/SQL and UNIX Shell Scripting

Data Modeling/Methodologies: Logical/Physical/Dimensional, Star/Snowflake, ETL, OLAP, Complete Software Development Cycle, Erwin 4.0

Operating Systems: Sun Solaris 2.6/2.7/2.8/8.0, Linux, Windows, UNIX

PROFESSIONAL EXPERIENCE

Confidential, Concord, CA

Teradata Developer

Responsibilities:

  • Involved in requirements gathering, business analysis, design, development, testing and implementation of business rules.
  • Developed entity diagrams and data dictionaries; worked closely with the team to develop logical and physical data models that capture current-state/future-state data elements and data flow using Erwin.
  • Analyzed the data model to fit the ETL that loads data into its input tables with proper linking of surrogate keys.
  • Derived the dimensions and facts for the given data and loaded them at regular intervals per the business requirements.
  • Actively involved in the Design and development of the STAR schema data model.
  • Extracted data from source systems like flat files as per the requirements and loaded it to Teradata using FASTLOAD, TPUMP and MLOAD
  • Used Teradata SQL Assistant for analysis and BTEQ scripts to load data from Teradata staging tables to Teradata warehouse tables.
  • Created and Configured Workflows, Worklets and Sessions to transport the data to target warehouse Teradata tables using Informatica Workflow Manager.
  • Wrote SQL commands (pre-session and post-session commands) executed in the target database to drop the target table's index before loading data and recreate it afterward (see the BTEQ sketch after this list).
  • Identified bottlenecks in sources, targets, mappings and sessions, and resolved them with performance-tuning techniques such as increasing block size, data cache size and buffer length.
  • Developed the UNIX shell scripts to send out an E-mail on success of the process indicating the destination folder where the files are available.
  • Used Incremental Aggregation technique to load data into Aggregation tables for improved performance.
  • Extensively used parameter files to override mapping parameters, mapping variables, workflow variables, session parameters, FTP session parameters and source/target application connection parameters.
  • Performed risk and gap analysis. Followed Informatica best practices for error handling (logging record-level errors in metadata tables) and auditing (capturing source/target record counts in every phase of the process flow).
  • Used Autosys for scheduling the jobs.
  • Involved in Pre-Prod Migration and Mock Prod Migration test Activity.
  • Developed scripts to load data from source to staging and from the staging area to target tables using load utilities such as BTEQ, FastLoad and MultiLoad.
  • Regular interactions with DBAs.
  • Involved in Data Modeling to identify the gaps with respect to business requirements and transforming the business rules.
  • Developed and reviewed detailed design documents and technical specification documents for the end-to-end ETL process flow for each source system.
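A hedged BTEQ sketch of the pre-session/post-session index handling and staging-to-warehouse load pattern described in this list. The TDPID, credentials, database, table and index names (edw_stg.sales_stg, edw.sales_fct, si_sales_dt) are placeholders, not the actual project objects.

    .LOGON tdprod/etl_user,placeholder_pwd;

    /* Pre-load: drop the secondary index so the insert avoids index maintenance */
    DROP INDEX si_sales_dt ON edw.sales_fct;
    .IF ERRORCODE <> 0 THEN .QUIT 1;

    /* Load from the Teradata staging table into the warehouse table */
    INSERT INTO edw.sales_fct (sale_id, store_id, sale_dt, sale_amt, load_ts)
    SELECT sale_id, store_id, sale_dt, sale_amt, CURRENT_TIMESTAMP
    FROM edw_stg.sales_stg;
    .IF ERRORCODE <> 0 THEN .QUIT 2;

    /* Post-load: recreate the index and refresh optimizer statistics */
    CREATE INDEX si_sales_dt (sale_dt) ON edw.sales_fct;
    COLLECT STATISTICS ON edw.sales_fct COLUMN (sale_dt);

    .LOGOFF;
    .QUIT 0;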

Confidential, St. Louis, MO

Teradata Developer

Responsibilities:

  • Involved in analysis of end user requirements and business rules based on given documentation and working closely with tech leads and analysts in understanding the current system and developing new strategies for ETL process.
  • Worked with different Data sources ranging from Sybase, Teradata, Flat files, Oracle, and SQL server databases.
  • Involved heavily in writing complex SQL queries based on the given requirements. Used volatile tables and derived queries to break complex queries into simpler ones (see the sketch after this list).
  • Involved in a migration effort to convert all the MultiLoad and FastLoad jobs to TPT.
  • Involved in performance tuning of scripts using EXPLAIN plans to decrease the CPU consumption time and increase the parallel efficiency.
  • Created views for the users and extensively involved in generating Line of Business reports using Actuate.
  • Involved in Unit Testing, Integration Testing and preparing test cases for SIT and UAT teams.
  • Actively involved in reviewing the codes for each project before deploying the codes into production.
  • Involved in troubleshooting the production issues and providing production support.
  • Regular interactions with DBAs; performed table rebuilds to back up data from the final tables.
  • Provided development and ongoing support of data warehouse, business analytics and reporting systems, services and tools.
  • Involved in developing ETL program for supporting Data Extraction, transformations and loading using Informatica Power Center as a part of migration effort in the current project.
  • Involved in Creating the Unix Shell Scripts/Wrapper Scripts that are used for scheduling jobs.
  • Involved in developing MultiLoad, FastLoad and BTEQ scripts.
  • Deployed the reviewed scripts into Dev, SIT, UAT, PROD boxes using Tortoise Subversion.
  • Analyzed data quality, data organization, metadata and data profiling, and documented the quality of source data to be used within the warehouse.
  • Tuned non-compliant queries by analyzing EXPLAIN plans and Visual Explain output to understand optimizer plans; also used COLLECT STATISTICS and recommended indexes.
  • Worked with other work stream developers to promote their code and solve day to day issues.
  • Produced documentation and procedures for best practices in Teradata development and administration.
  • Analyzed application data access and handled access categories to ensure optimal database security.
  • Monitored database space, identified tables with high skew, and worked with the data modeling team to change the primary index on highly skewed tables.
  • Member of Change Advisory Board (CAB) and Release management for Enterprise Application.
  • Involved in Unit Testing and Preparing test cases
  • Involved in Peer Reviews
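A hedged sketch of the volatile-table technique mentioned in this list for breaking a complex query into simpler steps, followed by a statistics collection and an EXPLAIN check on the downstream join. The claim/member tables and columns are hypothetical placeholders.

    /* Stage an intermediate result in a session-local volatile table */
    CREATE VOLATILE TABLE vt_open_claims AS
    (
        SELECT claim_id, member_id, claim_amt
        FROM edw.claim_fct
        WHERE claim_status_cd = 'OPEN'
    )
    WITH DATA
    PRIMARY INDEX (member_id)
    ON COMMIT PRESERVE ROWS;

    COLLECT STATISTICS ON vt_open_claims COLUMN (member_id);

    /* EXPLAIN the downstream join to review the optimizer plan before running it */
    EXPLAIN
    SELECT m.member_nm, SUM(c.claim_amt) AS open_amt
    FROM vt_open_claims AS c
    JOIN edw.member_dim AS m
      ON m.member_id = c.member_id
    GROUP BY m.member_nm;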

Confidential, Eldorado Hills, CA

Teradata Developer/ ETL consultant

Responsibilities:

  • Involved in requirements gathering, business analysis, design, development, testing and implementation of business rules.
  • Developed mappings to load data from source systems such as Oracle and AS400 to the Data Warehouse.
  • Developed scripts for loading the data into the base tables in the EDW using the FastLoad, MultiLoad and BTEQ utilities of Teradata.
  • Wrote MultiLoad, FastLoad and BTEQ scripts for loading data into stage tables and then processing it into BID.
  • Handled both incremental and migration data loads into Teradata.
  • Designed and Developed the Informatica workflows/Worklets/sessions/Mappings to extract, transform and load the data into Target.
  • Created proper Teradata Primary Indexes (PI), taking into consideration both planned data access and even distribution of data across all the available AMPs; considering the same business requirements and factors, created appropriate Teradata NUSIs for fast, easy access of data (see the DDL sketch after this list).
  • Worked on exporting data to flat files using Teradata FastExport
  • Analyzed the data distribution and reviewed the index choices.
  • Applied in-depth expertise in the Teradata cost-based query optimizer and identified potential bottlenecks.
  • Worked with PPI Teradata tables and performed Teradata-specific SQL fine-tuning to increase performance of the overall ETL process.
  • Debugged and monitored the code using GDB commands.
  • Developed procedures to populate the customer data warehouse with transaction data, cycle and monthly summary data, and historical data.
  • Worked with different Informatica tuning issues and fine-tuned the transformations to make them more efficient in terms of performance.
  • Involved in designing the data flow diagram.
  • Extensively worked on several ETL Ab Initio assignments to extract, transform and load data into tables as part of Data Warehouse development with highly complex data models of relational, star and snowflake schemas.
  • Documented the mappings used in ETL processes
  • Worked on UNIX shell scripts.
  • Involved in Unit Testing and Preparing test cases
  • Involved in Peer Reviews
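A hedged DDL sketch of the primary index, partitioned primary index (PPI) and NUSI choices described in this list. The table, columns and date range are illustrative placeholders, not the actual warehouse objects.

    /* Non-unique PI chosen for even AMP distribution and join access;
       monthly RANGE_N partitioning (PPI) supports date-bounded scans. */
    CREATE MULTISET TABLE edw.txn_fct
    (
        txn_id       DECIMAL(18,0) NOT NULL,
        account_id   INTEGER       NOT NULL,
        txn_dt       DATE          NOT NULL,
        txn_type_cd  CHAR(2),
        txn_amt      DECIMAL(15,2)
    )
    PRIMARY INDEX (account_id)
    PARTITION BY RANGE_N (
        txn_dt BETWEEN DATE '2010-01-01' AND DATE '2014-12-31'
               EACH INTERVAL '1' MONTH
    );

    /* NUSI to cover the planned reporting access path on transaction type */
    CREATE INDEX nusi_txn_type (txn_type_cd) ON edw.txn_fct;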

Confidential, Englewood, DE

Teradata/ETL Developer

Responsibilities:

  • Involved in Data Extraction, Transformation and Loading from source systems.
  • Created Informatica mappings to populate data into fact tables and dimension tables.
  • Developed complex mappings using multiple sources and targets in different databases, flat files.
  • Developed BTEQ scripts for Teradata.
  • Automated Workflows and BTEQ scripts using scheduling tool Cronacle.
  • Responsible for tuning the performances of Informatica mappings and Teradata BTEQ scripts.
  • Used Repository Server Administration Console to create and backup Repositories.
  • Worked with DBAs to tune the performance of the applications and Backups.
  • Writing UNIX Shell Scripts for processing/cleansing incoming text files.
  • Used CVS as a versioning tool.
  • Performed Unit testing, Integration testing and generated various Test Cases.
  • Performed Data analysis and Data validations.
  • Mapped data from the source Oracle database to staging to the target Teradata database, and built ETL processes across the complete project lifecycle; involved in analysis and high-level and low-level design documentation.
  • Responsible for creating users, databases, profiles, roles, tables, views, indexes (primary, secondary, partitioned, etc.), triggers and macros in different phases and segments of the project (a hedged DDL sketch follows this list); also responsible for database space for present and future needs, assigning and revoking space based on analysis and tech leads' guidelines.
  • Involved in directing the migration of data from different data marts to Teradata and in mapping data from source to staging to the target database.
  • Migrated data from one system to another with the help of Teradata FastExport, INSERT/SELECT and flat files.
  • Involved in test data set up and data quality testing and Unit testing and Integration Testing.
  • Used NetVault for backup and recovery, and used journals extensively in the disaster recovery process for rollback and roll-forward. Used archived data as a source with FastExport and FastLoad or MultiLoad to migrate data from one system to another, depending on the space and business requirements.
  • Performed performance tuning, monitoring and index selection using PMON, Teradata Dashboard, Statistics Wizard, Index Wizard and Teradata Visual Explain to see the flow of SQL queries in the form of icons and make the join plans more effective and fast.
  • Extensively used Teradata Manager, Teradata Query Manager and Teradata Administrator to manage systems in the production, test and development environments.
  • Provided suggestions for the best join plans by visualizing SQL queries with Visual Explain and EXPLAIN, and recommended the best join indexes, such as single-table or multi-table join indexes.
  • Supported jobs running in production for failure resolution, tracking failure reasons and providing the best resolution in a timely manner.
  • Interacted with different teams to find the causes of job failures in production systems, provided solutions, restarted the jobs, and made sure jobs completed within the specified time window.
  • Provided 24x7 production support for the Teradata ETL jobs on daily, weekly and monthly schedules.
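A hedged sketch of the user, database, profile and role creation work described in this list, in Teradata DDL. The object names, space allocations and password are placeholders chosen for illustration; actual values would come from the project's space analysis and security guidelines.

    /* Space allocations and names below are illustrative placeholders */
    CREATE DATABASE edw_stage FROM dbc AS PERM = 50e9, SPOOL = 100e9;

    CREATE PROFILE etl_profile AS SPOOL = 100e9, DEFAULT DATABASE = edw_stage;

    CREATE ROLE etl_role;
    GRANT SELECT, INSERT, UPDATE, DELETE ON edw_stage TO etl_role;

    /* Batch ETL user: no PERM of its own, governed by the profile and role */
    CREATE USER etl_batch FROM dbc
        AS PERM = 0,
           PASSWORD = TempPwd123,
           PROFILE = etl_profile,
           DEFAULT ROLE = etl_role;

    GRANT etl_role TO etl_batch;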

Confidential, Cincinnati, OH

Teradata/ETL Developer

Responsibilities:

  • Involved in understanding the Requirements of the End Users/Business Analysts and Developed Strategies for ETL processes.
  • The project involved extracting data from various sources, then applying transformations before loading the data into target (warehouse) stage tables and stage files.
  • Worked on Informatica power center tools-Source Analyzer, Warehouse designer, Mapping Designer, Transformation Developer.
  • Created the mappings using transformations such as the Source Qualifier, Aggregator, Expression, Lookup, Router, Filter, and Update Strategy.
  • Used the following Ab Initio components in creating graphs: dataset components (Input File, Output File, Lookup File and Intermediate File), database components (Input Table, Output Table, Run SQL, Truncate Table), transform components (Aggregate, Dedup Sorted, Filter by Expression, Join, Normalize, Reformat, Rollup and Scan), partitioning components (Broadcast, Partition by Expression, Partition by Key and Partition by Round-robin), plus Gather Logs, Redefine Format, Replicate and Run Program components.
  • Extensively used the Ab Initio tool’s feature of Component, Data and Pipeline parallelism.
  • Configured the source and target database connections using .dbc files
  • Used the BTEQ and SQL Assistant (QueryMan) front-end tools to issue SQL commands matching the business requirements to the Teradata RDBMS.
  • Implemented star-schema models for the above data marts. Identified the grain for the fact table. Identified and tracked the slowly changing dimensions and determined the hierarchies within the dimensions.
  • Worked with DBA team to ensure implementation of the databases for the physical data models intended for the above data marts.
  • Created proper Teradata Primary Indexes (PI), taking into consideration both planned data access and even distribution of data across all the available AMPs; considering the same business requirements and factors, created appropriate Teradata NUSIs for fast, easy access of data.
  • Followed an Agile software development methodology while building the ETL for the above data marts.
  • Created mapping documents from EDS to Data Mart. Created several loading strategies for fact and dimensional loading.
  • Designed the mappings between sources (external files and databases) to Operational staging targets.
  • Performed performance tuning of Teradata SQL statements using the Teradata EXPLAIN command.
  • Created .dml files to specify record formats.
  • Created checkpoints and phases to avoid deadlocks, tested the graphs with sample data, then committed the graphs and related files into the repository from the sandbox environment. Scheduled the graphs using Autosys and loaded data into target tables from the staging area using SQL*Loader.
  • Worked heavily with various built-in transform components to solve slowly changing dimension problems and to create process flow graphs using Ab Initio GDE and the Co>Operating System.
  • Analyzed the data distribution and reviewed the index choices (see the skew-check sketch after this list).
  • Tuned Teradata SQL statements using EXPLAIN, analyzing the data distribution among AMPs and index usage, collecting statistics and defining indexes.
  • Extensively worked in the UNIX environment using shell scripts.
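A hedged sketch of the AMP-level data-distribution check behind the tuning work above, using the standard DBC.TableSizeV dictionary view; the target database name 'edw' is a placeholder.

    /* Per-table skew: compare the largest per-AMP footprint to the average.
       A skew_ratio well above 1 flags a poorly distributing primary index. */
    SELECT  DatabaseName,
            TableName,
            SUM(CurrentPerm)                                AS total_perm,
            MAX(CurrentPerm)                                AS max_amp_perm,
            AVG(CurrentPerm)                                AS avg_amp_perm,
            MAX(CurrentPerm) / NULLIFZERO(AVG(CurrentPerm)) AS skew_ratio
    FROM    DBC.TableSizeV
    WHERE   DatabaseName = 'edw'
    GROUP BY DatabaseName, TableName
    ORDER BY skew_ratio DESC;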
