
Data Architect / Senior ETL Developer Resume


SUMMARY

  • Over 14 years of extensive IT experience, with special emphasis on software architecture, analysis, design, development, integration testing, end-to-end testing, and production support, working in Agile/Scrum iterative development and other software development life cycle methodologies.
  • Certified in Informatica Designer, IBM WebSphere DataStage Designer, Oracle 9i/11g PL/SQL Programming, and Teradata V2R5 SQL Specialist.
  • Expertise in ETL methodologies across all phases of the data warehousing life cycle with IBM InfoSphere DataStage, Informatica, and Teradata, working as an architect, lead developer, and senior developer.
  • Programming experience in databases such as Teradata, Oracle 9i/11g, DB2, and MS SQL Server 2005 using PL/SQL. Involved in database scripting, query performance tuning, and analysis of SQL and PL/SQL scripts to fix data and logic issues.
  • Experience in designing process flows and scheduling between various interfaces, i.e. project high-level design and low-level design. Involved in dimensional data modeling, star and snowflake schemas, the Bill Inmon and Ralph Kimball DW approaches, and dimension and fact tables.
  • Knowledge of Extract, Transform and Load (ETL) processes, decision support systems, data warehousing, and Online Analytical Processing (OLAP) technology using DataStage, Informatica, Teradata, Talend, Amazon Web Services (AWS), and Big Data Hadoop tools along with other BI tools.
  • Involved in all phases of the software development life cycle and the Agile/Scrum iterative and incremental model, with a strong understanding of business processes and domain knowledge in the information technology industry.
  • Experienced in the retail, financial, telecommunications, and other business domains.
  • Excellent knowledge of source and target systems such as Teradata, Oracle, MS SQL Server, Sybase, DB2/UDB, XML, and flat files using DataStage, Informatica, and other tools.
  • Effective in cross-functional and global/international environments with onsite/offshore development models; able to manage multiple tasks and assignments concurrently, both as a leader and as a team player, with strong communication skills including mentoring developers.

TECHNICAL SKILLS

DW and ETL Tools: DataStage Designer, Manager, and Director; Informatica PowerCenter Mapping Designer, Workflow Manager, and Monitor; Teradata utilities (FastLoad, MultiLoad, FastExport, TPump, TPT scripts, BTEQ scripting); Oracle Exadata with SQL Developer; PL/SQL; Toad; Teradata SQL Assistant; UNIX shell scripting; Talend.

Database Systems: Teradata 15, Oracle 9i / 11g / Exadata 12c with PL/SQL, Microsoft SQL Server 2005, Sybase, DB2 UDB, Oracle Enterprise Manager (OEM), Teradata Viewpoint.

Scheduling Tools: AutoSys, Tivoli Workload Scheduler (TWS), Cisco Dollar Universe tool, Confidential ESP Scheduling, Amex EngineG.

Versioning Tools: GitHub, Serena ChangeMan DS Client, CVS, Power systems and Visual SourceSafe tools, TortoiseSVN.

Business Intelligence Tools: Knowledge of MicroStrategy, OBIEE, Business Objects XI R2.

Programming Languages: Knowledge of Core Java, JavaScript, C# .NET, and UNIX shell scripting.

Amazon Web Services (AWS) and Big Data Tools: EC2, S3, RDS, Redshift, DynamoDB, Elastic MapReduce, and the Hadoop framework: HDFS, MapReduce, Hive, Sqoop, Oozie, Spark, Cassandra, and related tools.

Other Tools: Knowledge of VersionOne Agile software, Rally Agile software, Confluence and Jira Agile, Microsoft Project Plan (MPP), BMC Remedy incident and change management requests, CA ERwin data modeling, Gromit tool for data architecture and metadata management, MSBI (SSIS), Amazon Web Services (AWS).

PROFESSIONAL EXPERIENCE

Confidential

Data Architect/Senior ETL Developer

Responsibilities:

  • Gathered requirements and designed the new architecture of the project; created the high-level interface design document along with low-level detail design documents.
  • Studied the existing system and gained domain knowledge, along with the process flows and technical architecture of the applications.
  • Designed and developed the data warehouse in Java and UNIX shell scripting, provided production support, and used GitHub versioning, Maven, and other supporting tools.
  • Followed data architecture standards, procedures, and guidelines and worked with various teams through the review and approval process.
  • Developed Informatica mappings, sessions, workflows, and parameter files based on the ETL interface design specification documents.
  • Analyzed Oracle PL/SQL packages and stored procedures and implemented Teradata migration scripts, stored procedures, and macro scripts.
  • Groomed and prioritized backlog stories prior to sprint planning/iterations and worked with testing teams on unit, UAT, integration, and E2E testing.
  • Tested and validated the Informatica mappings, debugging and validating data from source systems to target systems.
  • Actively worked on performance tuning: analyzed statistics, tuned SQL and PL/SQL queries, partitioned tables, and used parallel hints and effective joins.
  • Performed unit testing and database source and target data validation, verifying the developed code and comparing results between the existing and target systems.
  • Participated in discussions and meetings with client teams, offshore development teams, and system integration testing teams to identify and resolve issues.
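
As a small illustration of the parameter files mentioned above, a shell step can generate a run-specific Informatica parameter file before a workflow is launched. This is only a sketch: the folder, workflow, and variable names are hypothetical, not taken from the project.

```shell
#!/bin/sh
# Sketch: build a run-specific Informatica parameter file for a workflow.
# Folder (FLD_SALES), workflow (wf_load_sales_stg), and $$ parameter
# names are illustrative only.
RUN_DATE=$(date +%Y%m%d)
PARAM_FILE="wf_load_sales_stg.par"

cat > "$PARAM_FILE" <<EOF
[FLD_SALES.WF:wf_load_sales_stg]
\$\$LOAD_DATE=$RUN_DATE
\$\$SRC_SCHEMA=STG
\$\$TGT_SCHEMA=EDW
EOF

echo "Wrote $PARAM_FILE"
```

The workflow would then typically be started with `pmcmd startworkflow ... -paramfile` pointing at the generated file, so each run picks up its own load date.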

Environment: Informatica 10.2, Oracle 12c, PL/SQL, Teradata 14, Amazon Web Services (AWS), Confluence and Jira Agile tools, GitHub versioning, Big Data Hadoop framework, UNIX shell scripting.

Confidential

Data Architect / Senior ETL Developer

Responsibilities:

  • Analyzed the business requirements of the project, performed impact analysis of the components, analyzed the system requirement documents, and created high-level and low-level design documents. Gained domain knowledge of the process flows and technical architecture of the existing system, from the various source systems through the target data warehouse to the data marts.
  • Based on the understanding of the new project architecture, created the application interface design document, followed by application detail design documents along with run book details, production support handover documents, and SLA documents.
  • Followed data architecture standards, procedures, and guidelines, maintained the DA process checklist, and worked with various teams through the review and approval process.
  • Created logical and physical data models for new tables and views and modified existing data models according to data modeling standards in the Erwin tool, following the processes set by the data architecture teams.
  • Created data architecture specifications for each new or impacted table, column, and view; updated the transformation logic and column-level metadata in the Gromit tool; and ensured the metadata was maintained in the enterprise metadata system.
  • Included source-to-target data element mappings with business rules and transformation logic in the technical design documents and conducted several review and approval meetings with business, upstream, and downstream teams.
  • Conducted several technical review sessions with the technical forum team, principal technical architects, system analysts, and developers covering the end-to-end development flow: source system tables, file handling, extraction of data into load-ready files, staging-layer processing, transformation logic with business rules, loading data into base tables and then target tables in the warehouse, and creation of base, custom, and business user views. Also created extract and export scripts for processing data in the reporting cubes.
  • Developed Teradata MultiLoad, BTEQ, and Teradata Parallel Transporter (TPT) scripts, stored procedures, and macros with PL/SQL scripts, as well as Informatica mappings and workflows, and reviewed the scripts.
  • Participated in discussions and meetings with offshore and nearshore development teams, system integration testing teams, source and target system teams, end users, and various business teams to identify and resolve issues.
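
For context, a BTEQ load step of the kind described above can be sketched as a generated script. The logon, database, table, and column names here are all hypothetical, and the script is only written out, not executed against a real Teradata system.

```shell
#!/bin/sh
# Sketch: emit a BTEQ script that moves staged rows into a base table.
# All object names (EDW_STG, EDW_BASE, CUSTOMER...) are made up for
# illustration; the <password> placeholder is deliberately left unfilled.
BTEQ_SCRIPT="load_base_customer.bteq"

cat > "$BTEQ_SCRIPT" <<'EOF'
.LOGON tdprod/etl_user,<password>;

INSERT INTO EDW_BASE.CUSTOMER (cust_id, cust_name, load_dt)
SELECT cust_id, cust_name, CURRENT_DATE
FROM   EDW_STG.CUSTOMER_STG
WHERE  load_status = 'R';

.IF ERRORCODE <> 0 THEN .QUIT 8;

.LOGOFF;
.QUIT 0;
EOF

echo "Generated $BTEQ_SCRIPT"
```

The `.IF ERRORCODE ... .QUIT 8` check lets a scheduler treat a failed insert as a non-zero exit code instead of silently continuing.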

Environment: Informatica, Teradata 14, Rally Agile tool, CA ERwin data modeling, Gromit data architecture and metadata tool, UNIX shell scripting, Oracle Database, Tivoli Workload Scheduler (TWS), Amazon Web Services (AWS), data lakes and Big Data Hadoop framework.

Confidential

Senior Integration Engineer / Data Engineer

Responsibilities:

  • Extensively worked on requirement analysis, design, source-to-target data element mapping, and technical design documents in an Agile/Scrum development model.
  • Understood the existing end-to-end architecture and ETL technical process flows of the various projects and systems being maintained and supported.
  • Worked on logical and physical data models for tables and views according to data modeling standards in the Erwin tool and maintained the various project data models.
  • Prepared high-level and detail-level design documents; created data architecture specifications for new and existing tables and views and updated the metadata and business rules for each column.
  • Extracted LAWSON financial system data and SSMS corporate-wide organizational data, implemented the ETL logic using DataStage jobs and sequences, and loaded the data into the staging area, the operational data store (ODS), and then the target dimension and fact tables.
  • Implemented DataStage job stages such as DB2 connectors, lookups, transformers, joins, modulus source-extract partitioning, merge, Oracle connectors, pivot, remove duplicates, and filter.
  • Extensively used pre- and post-SQL merge statements, statistics gathering, index disabling, partitioning of job flows, and full and incremental loads to populate ODS target tables along with enterprise data warehouse dimension and fact tables using surrogate keys.
  • Implemented many stored procedures, views, materialized views, and scripts to process and load data into the target database system.
  • Actively worked on performance tuning: analyzed statistics, tuned SQL and PL/SQL queries, partitioned tables, used parallel hints and effective joins, and archived historical table data. Also loaded CLOB data from source to target systems.
  • Created DataStage jobs with a focus on performance tuning and built sequences that load master and lookup table data with exception handling, then transaction table data, and finally full and incremental loads of dimension and fact tables from incremental source system extracts.
  • Extensively worked with Enterprise Application Studio (EAS) application development teams and BI MicroStrategy and OBIEE developers to populate dashboards, static reports, and ad hoc reports with prompts and filters applied at the Confidential end-user level, generating charts and graphs for analysis.
  • Fixed data-related issues, debugged scripts, modified logic, and ensured the data matched between the source and target systems.
  • Groomed and prioritized backlog stories prior to sprint planning/iterations and worked with testing teams on unit, UAT, integration, and E2E testing.
  • Followed daily stand-ups (DSU), grooming, and planning, and attended retrospective ceremonies with the team to ensure sound project forecasting and realistic commitments.
  • Delivered the project incrementally with Scrum demos and constant reviews with customers and end users to gather feedback and identify further improvements to the deliverables.
  • Collaborated with development teams, the product owner, the scrum master, functional and cross-functional teams, and managers to understand business needs and requirements and deliver successful products.
  • Estimated stories and tasks from the backlog items associated with the current sprint; moved tasks through in-progress, completed, ready-to-validate, and accepted states; closed work items; and constantly updated effort and remaining to-do hours so the burndown chart and velocity reflected the state of the project.
  • Performed production support on a daily and weekly basis and closed stories as they were worked and validated.
  • Coordinated and conducted meetings with offshore development teams and worked with various cross-functional teams across different locations.
  • Proactively documented issues and solutions across applications, ETL, and database scripts for efficient resolution of recurring problems and to minimize future issues.
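
DataStage jobs and sequences like the ones described above are usually driven from shell via the standard `dsjob` CLI. The sketch below uses made-up project, job, and parameter names; `DRY_RUN=1` (the default) only records the command, since no DataStage engine is assumed to be present.

```shell
#!/bin/sh
# Sketch: drive a DataStage job from shell with dsjob.
# PROJECT, JOB, and LOAD_DATE are illustrative assumptions.
PROJECT="DW_FIN"
JOB="jb_load_gl_fact"
LOAD_DATE=$(date +%Y-%m-%d)

CMD="dsjob -run -jobstatus -param LOAD_DATE=$LOAD_DATE $PROJECT $JOB"
echo "$CMD" > dsjob_cmd.txt

if [ "${DRY_RUN:-1}" -eq 0 ]; then
    $CMD
    RC=$?
    # With -jobstatus, dsjob exits with the job status:
    # 1 = finished OK, 2 = finished with warnings; anything else fails.
    [ "$RC" -eq 1 ] || [ "$RC" -eq 2 ] || exit "$RC"
fi

echo "Recorded: $CMD"
```

Mapping the `-jobstatus` exit code back to "OK / warnings / failed" is what lets a scheduler such as ESP decide whether downstream jobs may proceed.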

Environment: IBM InfoSphere DataStage, Oracle Database 11g / 12c, Teradata 14, Sybase, DB2/UDB, MS SQL Server 2005, VersionOne Agile tool, CA ERwin data modeling, UNIX shell scripting, ESP scheduling.

Confidential

Lead Developer / Project Manager

Responsibilities:

  • Actively involved in designing an efficient architecture for the ETL technical process flow, i.e. the high-level and low-level design documents, for implementation across multiple countries.
  • Understood the Teradata migration framework and the technical architecture of the source-to-target migration approach, as well as the Information Delivery Network (IDN) architecture and end-to-end data warehouse transformation processes.
  • Developed and documented technical integration specifications for the low-level detailed design and created the data mapping document from source system to target system with the transformation logic to be implemented.
  • Extensively used AtanaSoft Suite Compress tool results to analyze tables, data types, and data, and to generate multi-value compression DDL for the implementation of Teradata tables.
  • Developed FastLoad, MultiLoad, FastExport, and BTEQ scripts for loading data from Sybase to Teradata, and used generic Teradata Parallel Transporter (TPT) scripts to extract data from Sybase to UNIX CSV/data files and load it into the Teradata staging environment.
  • Implemented partitioned primary indexes (PPI) and multilevel partitioned primary indexes (MLPPI), modified the PPI of critical tables to improve performance, and identified secondary indexes for various tables.
  • Developed scripts to validate and verify the data and logic between parallel production systems.
  • Developed scripts for migrating data from Sybase to Teradata using Teradata Parallel Transporter and implemented the extract-transform-load (ETL) process through ETL tools.
  • Developed source-to-target mappings, implemented transformations, business rules, and logic in the scripts, scheduled jobs through scheduler tools, and monitored, maintained, and fixed production jobs.
  • Converted Sybase stored procedures, views, and tables to the Teradata universal data warehouse production environment.
  • Worked with offshore teams and managed resource allocation, project estimation, and status meetings.
  • Performed unit testing, database testing, and data verification and validation of the developed code, comparing results between the Sybase and Teradata systems.
  • Identified performance bottlenecks in production processes and key places where SQL could be tuned to improve the overall performance of the production process.
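
A data validation of the kind described above, comparing parallel Sybase and Teradata systems, can be sketched as a row-count reconciliation. The "table_name count" extract format and the sample numbers below are assumptions for illustration; in practice the count files would come from `isql` and BTEQ extracts.

```shell
#!/bin/sh
# Sketch: compare per-table row counts from the legacy (Sybase) and
# target (Teradata) systems and report mismatches. Sample data stands
# in for real extract output; inputs are sorted on the join key.

cat > sybase_counts.txt <<'EOF'
customer 10000
orders 250000
EOF

cat > teradata_counts.txt <<'EOF'
customer 10000
orders 249998
EOF

# Join on table name, then report any table whose counts disagree.
join sybase_counts.txt teradata_counts.txt \
  | awk '$2 != $3 { print $1 ": source=" $2 " target=" $3 }' > count_diffs.txt

cat count_diffs.txt
```

With the sample data above, only the `orders` table is flagged as a mismatch; matching tables produce no output, so an empty diff file means the two systems agree.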

Environment: Teradata 14, Sybase IQ, DB2/UDB, Informatica PowerCenter, UNIX shell scripting, Perl scripting, CVS versioning tool, CA ERwin data modeling, Amex EngineG migration and scheduling.
