Teradata Developer Resume
Chicago, IL
SUMMARY:
- Around 7 years of professional IT experience in the analysis, design and development of ETL applications and Business Intelligence solutions for Data Warehousing and Reporting across multiple databases.
- Expertise in the full Software Development Life Cycle (SDLC) of Data Warehousing projects.
- Strong experience in providing ETL solutions using Informatica Power Center 9.x/8.x/7.x.
- Experience with dimensional modeling using star schema and snowflake models.
- Highly proficient in integrating data across multiple databases including Teradata, Oracle, MySQL, SQL Server, DB2 and Mainframe, as well as flat files (delimited and fixed-width).
- Developed complex mappings in Informatica using various transformations such as Source Qualifier, Joiner, Aggregator, Update Strategy, Rank, Router, Java, Connected and Unconnected Lookup, Sequence Generator, Filter, Sorter and Stored Procedure.
- Good experience with Change Data Capture (CDC) for pulling delta data.
- Worked on Initial Load Mapping templates for historical load.
- Expertise in handling and creating processes for Slowly Changing Dimension (SCD) Type 1 and Type 2 to maintain history in dimension tables (a Type 2 sketch follows this summary).
- Experience in creating Stored Procedures, Functions, Views and Triggers.
- Proficiency in developing SQL with various Relational Databases.
- Solid experience with combined Informatica and Teradata load processes.
- Strong experience using Teradata utilities such as MultiLoad, FastLoad, TPump, FastExport and TPT to improve Teradata target load performance; also created BTEQ scripts to load data into base tables.
- Strong experience in performance tuning of Teradata BTEQ scripts.
- Strong experience in tuning different relational databases.
- Extensively worked on Informatica performance tuning, resolving source-level, target-level and mapping-level bottlenecks.
- Expertise in creating mappings for Data Quality, Data Cleansing and Data Validation.
- Created shell scripts to run Informatica workflows and control the ETL flow.
- Experience providing DWH production support on a rotational basis.
- Experience with using Scheduling tools - Informatica Scheduler, Maestro, Cron Job, Control M, and Autosys.
- Experience in query optimization using explain plans, collect statistics, and primary and secondary indexes. Used volatile tables and derived tables to break complex queries into simpler ones. Streamlined the migration of Teradata scripts and shell scripts on the UNIX box.
- Experience in Technical and User Documentation.
- Experience in Data Migration projects from DB2 and Oracle to Teradata. Created automated scripts to do the migration using UNIX shell scripting, Oracle/TD SQL, TD Macros and Procedures.
- Experienced in working with both Waterfall and Agile methodologies.
- Experience with Agile tools (Rally, Jira) and the Scrum process.
- Experience in offshore and onsite coordination.
- Able to work independently and collaborate proactively & cross functionally within a team.
- Good team player with ability to solve problems, organize and prioritize multiple tasks.
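A minimal Teradata SQL sketch of the SCD Type 2 pattern referenced above; the DIM_CUSTOMER and STG_CUSTOMER tables and their columns are illustrative placeholders, not taken from any specific engagement:

    -- Step 1: close out current dimension rows whose attributes changed
    -- (assumes one row per customer in the staging table).
    UPDATE dim
    FROM DIM_CUSTOMER AS dim, STG_CUSTOMER AS stg
    SET END_DT   = CURRENT_DATE - 1,
        CURR_FLG = 'N'
    WHERE dim.CUST_ID  = stg.CUST_ID
      AND dim.CURR_FLG = 'Y'
      AND (dim.CUST_NAME <> stg.CUST_NAME OR dim.CUST_SEGMENT <> stg.CUST_SEGMENT);

    -- Step 2: insert a new current version for new customers and for the
    -- changed customers whose old rows were just closed out in step 1.
    INSERT INTO DIM_CUSTOMER (CUST_ID, CUST_NAME, CUST_SEGMENT, START_DT, END_DT, CURR_FLG)
    SELECT stg.CUST_ID, stg.CUST_NAME, stg.CUST_SEGMENT,
           CURRENT_DATE, DATE '9999-12-31', 'Y'
    FROM STG_CUSTOMER AS stg
    LEFT JOIN DIM_CUSTOMER AS dim
           ON dim.CUST_ID = stg.CUST_ID
          AND dim.CURR_FLG = 'Y'
    WHERE dim.CUST_ID IS NULL;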
TECHNICAL SKILLS:
Teradata Utilities: BTEQ, Fast Load, Multiload, TPT, TPump, SQL Assistant, Viewpoint, Query Monitor.
Tools: Informatica Analyst 9.1, Informatica Developer 9.1, Informatica Power Center (Source Analyzer, Mapping Designer, Workflow Manager, Workflow Monitor), Power Mart, Informatica Data Analyzer 9.1, Power Exchange 8, Repository, Metadata, Data Mart, OLAP, OLTP, Web Services, Data Cleansing, Data Quality.
Languages: Teradata SQL, BTEQ, SQL, Shell Scripting and Perl Scripting.
Databases: Teradata 13/12/V2R6.2, Oracle 10g/9i, DB2/UDB, SQL Server
Operating Systems: Windows, Linux, Unix
Scheduling tools: Control M, Autosys
Data Modeling: Erwin and ER Studio
Methodologies: Agile (Scrum), Waterfall, Kanban
PROFESSIONAL EXPERIENCE:
Confidential, Chicago-IL
Teradata Developer
Responsibilities:
- Interacted with the business team to understand business needs and gather requirements.
- Prepared requirements documents to achieve business goals and meet end-user expectations.
- Created mapping documents for source-to-stage and stage-to-target mappings.
- Performed Unit testing and created Unix Shell Scripts and provided on call support.
- Created and modified several UNIX shell Scripts according to the changing needs of the project and client requirements.
- Wrote UNIX shell scripts to process the data received from source systems on a daily basis.
- Extensively involved in using hints to direct the optimizer to choose an optimum query execution plan.
- Partitioned the fact tables and materialized views to enhance the performance.
- Created records, tables and collections (nested tables and arrays) to improve query performance by reducing context switching.
- Involved in Data Migration projects from DB2 and Oracle to Teradata. Created automated scripts to do the migration using UNIX shell scripting, Oracle/TD SQL, TD Macros and Procedures.
- Worked with TPT wizards to generate the TPT scripts for the Incoming Claims data.
- Implemented pipeline partitioning concepts like Hash-key, Round-Robin, Key-Range, Pass Through techniques in mapping transformations. Used Control-M for Scheduling.
- Wrote scripts for data extraction, transformation and loading from legacy systems into the target data warehouse using BTEQ, FastLoad, MultiLoad and TPump.
- Performed tuning and optimization of complex SQL queries using Teradata EXPLAIN and collected statistics.
- Created a BTEQ script for pre-population of the work tables prior to the main load process (see the sketch below).
- Extensively used Derived Tables, Volatile Table and Global Temporary tables in many of the ETL scripts.
- Developed MLOAD scripts to load data from Load Ready Files to Teradata Warehouse.
- Performed performance tuning of sources, targets, mappings and SQL queries in transformations.
- Worked on exporting data to flat files using Teradata FastExport.
- Analyzed the Data Distribution and Reviewed the Index choices.
- Applied in-depth knowledge of the Teradata cost-based query optimizer to identify potential bottlenecks.
- Worked with PPI Teradata tables and was involved in Teradata specific SQL fine-tuning to increase performance of the overall ETL process.
Environment: Teradata 14.0 (FastLoad, MultiLoad, FastExport, BTEQ), TPT, Teradata SQL Assistant, fixed-width files.
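A minimal BTEQ sketch of the kind of work-table pre-population script mentioned in this role; the logon values, databases, tables and columns are illustrative placeholders:

    .LOGON tdprod/etl_user,password;

    -- The TDPID, credentials, databases, tables and columns in this script
    -- are placeholders for illustration only.
    -- Clear the work table and pre-populate it from staging before the main load.
    DELETE FROM WORK_DB.WRK_CLAIMS ALL;

    INSERT INTO WORK_DB.WRK_CLAIMS (CLAIM_ID, MEMBER_ID, CLAIM_AMT, LOAD_DT)
    SELECT CLAIM_ID, MEMBER_ID, CLAIM_AMT, CURRENT_DATE
    FROM   STG_DB.STG_CLAIMS
    WHERE  LOAD_DT = CURRENT_DATE;

    -- Fail the batch with a non-zero return code so the scheduler can alert.
    .IF ERRORCODE <> 0 THEN .QUIT 8;

    -- Refresh statistics so the main load gets a good plan against the work table.
    COLLECT STATISTICS ON WORK_DB.WRK_CLAIMS COLUMN (CLAIM_ID);

    .LOGOFF;
    .QUIT 0;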
Confidential, Louisville, KY
Teradata Developer/ETL Consultant
Responsibilities:
- My main responsibility was to code scripts for sourcing, transforming and loading of external data into the Data mart.
- Developed mappings, sessions and workflows in Informatica Power Center.
- Coordinated with cross-functional teams in different locations to ensure quality data and analysis.
- Designed and implemented the data model for Data Quality Audits.
- Performed analysis on business requirements and KPIs and provided solutions to certify data for business use.
- Daily transactional data and claims data from hospitals become available by 4 AM the next day for loading into the data warehouse.
- A file-check process (shell script) waits for these files and kicks off the actual load process once they arrive.
- As the initial load step, the file is loaded into a Teradata staging table using MultiLoad and archived into an archive folder after the MultiLoad completes successfully.
- The main load process consists mostly of Teradata stored procedures and is scheduled to run every day through a third-party scheduling tool.
- After the load completes, a validation process checks the data loaded into the target tables against the source (see the sketch below).
- Developed FastLoad scripts to load data from host files into the Landing Zone tables.
- Involved in writing the ETL specifications and unit test plans for the mappings.
- Applied business transformations using BTEQ scripts.
- Created database tables, views, functions, procedures and packages, as well as database sequences, triggers and database links.
- Designed and developed Extract, Transform and Load (ETL) code to migrate and integrate data from disparate data sources into Hadoop for specific projects.
- Analyzed large data sets by running Hive queries and Pig scripts.
- Wrote SQL queries to retrieve the required data from the database.
- Tested and Debugged PL/SQL packages.
- Created BTEQ scripts to load data from staging tables into target tables.
- Used standard packages such as UTL_FILE and DBMS_SQL along with PL/SQL collections and BULK binding while writing database procedures, functions and packages.
- Provided required support in Multiple (SIT, UAT & PROD) stages of the project.
- Prepared BTEQ import, export scripts for tables.
- Wrote BTEQ, FastLoad and MultiLoad scripts.
- Involved in unit testing; prepared test cases and validated the target data against the source data.
- Automated the sourcing of file feeds, loading and transformation using UNIX scripts and Control-M jobs.
- Worked with the offshore team, updating them regularly on client requirements and testing all code developed offshore.
Environment: Teradata 14, Teradata SQL Assistant, Teradata Manager, PL/SQL, UNIX, Scripts, Hadoop, Pig, Hive, MLOAD, BTEQ, FASTLOAD.
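A minimal Teradata SQL sketch of the kind of post-load validation described in this role, comparing the day's staging feed against the loaded target; the STG_DB/DW_DB databases, tables and columns are illustrative placeholders:

    -- Compare row counts and amounts between staging and target for today's load;
    -- a mismatch flags the load for investigation.
    SELECT 'STAGING' AS data_layer,
           COUNT(*)       AS row_cnt,
           SUM(CLAIM_AMT) AS total_amt
    FROM   STG_DB.STG_CLAIMS
    WHERE  LOAD_DT = CURRENT_DATE
    UNION ALL
    SELECT 'TARGET' AS data_layer,
           COUNT(*)       AS row_cnt,
           SUM(CLAIM_AMT) AS total_amt
    FROM   DW_DB.FACT_CLAIMS
    WHERE  LOAD_DT = CURRENT_DATE;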
Confidential
Teradata Developer/ETL Consultant
Responsibilities:
- Responsible for coordinating with the Business System Analysts in the requirement gathering process and creating technical specific document.
- Gathered requirements from the various systems' business users.
- Performed gap analysis between the existing ETL process and standard business content; created gap analysis documents for each business area identified.
- Responsible for design, data mapping analysis and mapping rules.
- Responsible for project management and planning.
- Fixed issues with the existing FastLoad/MultiLoad scripts for smoother, more effective loading of data into the warehouse.
- Worked on a POC using Java, Hadoop, Hive and NoSQL databases such as Cassandra for data analysis; this was the initial implementation of a Hadoop-related project where LM wanted to see the claims loss transactions on Hadoop.
- Worked on loading of data from several flat files sources to Staging using MLOAD, FLOAD.
- Created BTEQ scripts with data transformations for loading the base tables.
- Generated reports using Teradata BTEQ.
- Worked on optimizing and tuning the Teradata SQLs to improve the performance of batch and response time of data for users.
- Used the FastExport utility to extract large volumes of data and send files to downstream applications.
- Provided performance tuning and physical and logical database design support in projects for Teradata systems and managed user level access rights through the roles.
- Developed TPump scripts to load low volume data into Teradata RDBMS at near real-time.
- Created BTEQ scripts to load data from the Teradata staging area into the Teradata target tables.
- Performed tuning of queries for optimum performance.
- Prepared test data for unit testing and data validation tests to confirm the transformation logic.
- Tuned various queries by collecting statistics on columns in the WHERE and JOIN expressions (see the sketch below).
- Performed performance tuning of Teradata SQL statements using Teradata EXPLAIN.
- Collected statistics periodically on tables to improve system performance.
- Instrumental in Team discussions, Mentoring and Knowledge Transfer.
- Responsible for implementation and post-implementation support.
- Documentation of scripts, specifications and other processes.
Environment: Teradata V2R12, Teradata SQL Assistant, MLOAD, FASTLOAD, BTEQ, TPUMP, Erwin, UNIX Shell Scripting, VBA Macros, Windows XP
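A minimal sketch of the COLLECT STATISTICS and EXPLAIN tuning pattern used in this role; the databases, tables, columns and date range are illustrative placeholders:

    -- Refresh statistics on the join and filter columns the optimizer relies on.
    COLLECT STATISTICS ON DW_DB.FACT_CLAIM COLUMN (MEMBER_ID);
    COLLECT STATISTICS ON DW_DB.FACT_CLAIM COLUMN (SERVICE_DT);

    -- Review the optimizer's plan (join order, row redistribution, confidence
    -- levels) before and after collecting statistics.
    EXPLAIN
    SELECT m.MEMBER_ID,
           SUM(f.CLAIM_AMT) AS total_claim_amt
    FROM   DW_DB.FACT_CLAIM AS f
    JOIN   DW_DB.DIM_MEMBER AS m
           ON f.MEMBER_ID = m.MEMBER_ID
    WHERE  f.SERVICE_DT BETWEEN DATE '2012-01-01' AND DATE '2012-12-31'
    GROUP BY m.MEMBER_ID;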
Confidential
Informatica/Teradata Developer
Responsibilities:
- Responsible for requirements gathering for an enhancement requested by client. Involved in analysis and implementation.
- Extensively used transformations to implement the business logic, such as Sequence Generator, Normalizer, Expression, Filter, Router, Rank, Aggregator, Connected and Unconnected Lookup (target as well as source), Update Strategy, Source Qualifier and Joiner; designed complex mappings involving target load order and constraint-based loading.
- Developed Informatica mappings, Reusable transformations. Developed and wrote procedures for getting the data from the Source systems to the Staging and to Data Warehouse system.
- Extensively used the Teradata utilities like BTEQ, Fastload, Multiload, DDL Commands and DML Commands (SQL).
- Performed tuning and optimization of complex SQL queries using Teradata EXPLAIN and collected statistics.
- Created a BTEQ script for pre-population of the work tables prior to the main load process.
- Created Primary Indexes (PI) for both planned access of data and even distribution of data across all available AMPs; created appropriate Teradata NUPIs for smooth (fast and easy) access of data (see the DDL sketch below).
- Worked on exporting data to flat files using Teradata Fast Export.
- In-depth expertise in the Teradata cost based query optimizer, identified potential bottlenecks.
- Responsible for designing ETL strategy for both Initial and Incremental loads.
- Developed Teradata Macros and Stored Procedures to load data into the incremental/staging tables, then move the data from staging to the journal tables and from the journal into the base tables.
- Interacted with business community and gathered requirements based on changing needs. Incorporated identified factors into Informatica mappings to build the DataMart.
- Provided scalable, high speed, parallel data extraction, loading and updating using TPT.
- Developed UNIX scripts to transfer the data from operational data sources to the target warehouse.
- Extracted data from various source systems such as Oracle, SQL Server and flat files as per the requirements.
- Implemented full Pushdown Optimization (PDO) for the semantic layer implementation for some of the complex aggregate/summary tables instead of using the ELT approach. Worked on Informatica Power Center tools: Designer, Repository Manager, Workflow Manager and Workflow Monitor.
Environment: Informatica Power Center 9.6, Teradata 15.10, Oracle 11g, SQL Server 2012/2008, UNIX, Flat Files, Toad.
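A minimal DDL sketch of the primary index and partitioning choices described in this role (a NUPI for even AMP distribution on a common join key, and a PPI on the date column for partition elimination); the table, columns and date range are illustrative placeholders:

    CREATE MULTISET TABLE DW_DB.FACT_SALES
    (
        SALE_ID   DECIMAL(18,0) NOT NULL,
        CUST_ID   INTEGER       NOT NULL,
        SALE_DT   DATE          NOT NULL,
        SALE_AMT  DECIMAL(15,2)
    )
    PRIMARY INDEX (CUST_ID)   -- NUPI: even distribution and typical access path
    PARTITION BY RANGE_N (SALE_DT BETWEEN DATE '2010-01-01'
                                      AND DATE '2020-12-31'
                                      EACH INTERVAL '1' MONTH);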