
Tech Lead/Sr. ETL Informatica Developer Resume

SUMMARY

  • Professional experience of over 8 years in the field of Enterprise Data Warehousing, Data Integration and Data/Code Migration.
  • Experience in all the phases of Data Warehousing life cycle involving Requirement analysis, Design, Coding, Testing and Deployment.
  • Experience working with Informatica Data Quality, Informatica Analyst, Informatica Metadata Manager & Informatica Cloud Services (IICS).
  • Interacted with subject matter experts and Data Stewards to get information about the business rules for cleaning the source system data as part of Data Governance.
  • Addressed the dimensions of the Data Governance maturity model (Metadata, Data Quality, Operations and Infrastructure) using various tools.
  • Extensively experienced in implementing B2B and EDI solutions using Informatica, EDIFECS, GXS and WebMethods tools.
  • Participated in requirement gathering session with business users to understand and document the business requirements as well as the goals of the project.
  • Worked as an onsite project coordinator once the design of the database was finalized to implement the data warehouse according to the implementation standards.
  • Strong Experience in implementing Data warehouse solutions in Confidential Redshift; Worked on various projects to migrate data from on premise databases to Confidential Redshift, RDS and S3.
  • Extensive experience with the Teradata database, analyzing clients' business needs, developing effective and efficient solutions and ensuring client deliverables within committed timelines.
  • Communicated with offshore teams regarding questions raised during development.
  • Extensively used Informatica Developer/Informatica Data Quality(IDQ) tool to build rules and mappings.
  • Used OPTIM Test data management tool for masking sensitive data in Production environment.
  • Extensive experience in Netezza database design and workload management
  • Proficient with all major PostgreSQL procedural languages (PL/pgSQL, PL/Perl, PL/Python, PL/Tcl).
  • Extensively worked on various ETL mappings, analysis, documentation of OLAP reports requirements and Big Data Hadoop Development.
  • Experience in creating configuration files to deploy the SSIS packages across all environments.
  • Experience in importing/exporting data between different sources like Oracle/Access/Excel etc. using SSIS/DTS utility.
  • Experience with the Snowflake cloud data warehouse and AWS S3 buckets for integrating data from multiple source systems, including loading nested JSON-formatted data into Snowflake tables.
  • Used Informatica PowerCenter 9.6 to extract, transform and load data into a Netezza data warehouse from various sources such as Oracle and flat files.
  • Provided extensive Production Support for Data Warehouse for internal and external data flows to Netezza, Oracle DBMS from ETL servers via remote servers.
  • Solid understanding of Data Modeling (Dimensional & Relational) concepts such as Star Schema and Snowflake Schema modeling.
  • Experience working with complex transformations such as Normalizer, Transaction Control and Aggregator.
  • Strong knowledge of Entity-Relationship concepts, fact and dimension tables, slowly changing dimensions and Dimensional Modeling such as Star Schema and Snowflake Schema.
  • Experience in writing various ETL procedures to load data from different sources like Oracle 10g, DB2, XML Files, Flat Files, MS SQL Server and MS Access into Data marts, Data warehouse using Informatica Power Center.
  • Experience on Cloud Databases and Data warehouses ( SQL Azure and Confidential Redshift/RDS).
  • Designed highly available, elastic and performant cloud infrastructure, planning and testing new solutions and executing quick POCs to try out new functionality and features.
  • Extensive experience in writing UNIX shell, Python and Perl scripts and automating ETL processes using UNIX shell scripting.
  • Extensive experience on working with scheduling tools like JSS.
  • Prepared test cases, including the test plan document and test strategy, as per requirements.
  • Worked on performance tuning, identifying and resolving performance bottlenecks at the source, mapping and session levels.

TECHNICAL SKILLS

ETL TOOLS: Informatica 9.5.1/9.6 (PowerCenter), IICS (Informatica Cloud Services), IDQ 9.6/10.0, Informatica Analyst, Informatica PowerExchange, SSIS, Azure Data Factory, Glue

Data Modelling: Relational Modelling, Dimensional Modelling (Star Schema, Snowflake, Facts, Dimensions), Data Quality, Entities, Attributes, ER Diagrams.

Databases: Oracle 11g/9i/8i, MS SQL Server 2005/2000, Confidential Redshift, SQL Azure, MS Access, Teradata, DB2, Snowflake Cloud DWH, NoSQL, PostgreSQL, DynamoDB

Programming/Query Languages: SQL, PL/SQL, C, C++, UNIX Shell Script, Visual Basic

Job Scheduling Tools: Crontab, Control-M

Operating Systems: UNIX, Windows NT/2000/2003/XP/Vista/7, MS-DOS

Tools: SQL*Plus, PL/SQL Developer, Toad, SQL*Loader, Tableau, Python, Perl, Hive

PROFESSIONAL EXPERIENCE

Confidential

Tech Lead/Sr. ETL Informatica Developer

Responsibilities:

  • Worked on-site with clients to understand the requirements.
  • Interacted with subject matter experts and Data Stewards to get information about the business rules for cleaning the source system data.
  • Analyzed FRDs to design solutions based on the requirements.
  • Prepared design documents in Visio to help explain the system model.
  • Researched the AWS DMS data replication service for migrating data from MySQL to Aurora MySQL.
  • Coordinated with offshore to pass on the technical requirements needed to design a solution.
  • Wrote scripts and indexing strategy for a migration to Confidential Redshift from SQL Server and MySQL databases.
  • Provided the user stories/tasks to be entered into TFS as we followed agile methodology.
  • Worked on AWS Data Pipeline to configure data loads from S3 into Redshift.
  • Worked on data synchronization from a MySQL database (external vendor) to the Aurora MySQL database.
  • Worked on creating connections to the respective DBs, including the Aurora MySQL DB, S3 connectors and the on-premise SQL Server DB; used two different runtime integrations to connect to the on-premise and cloud DBs.
  • Used the components below in the Data Integration module; created business services used in the Web Services transformation to receive responses from the Mule API.
  • Created fixed-width files to be used in mappings as per the requirement.
  • Created file listeners to set up file dependencies.
  • Used components including Mappings and Taskflows, and tasks including Assignment, Notification, Command, File Watcher and Decision tasks.
  • Used multiple transformations including Expression, Union, Java, Aggregator, Joiner, Normalizer, Transaction Control and Web Services.
  • Created schedules on the various mapping tasks and taskflows to schedule the interfaces depending on the requirement.
  • Made use of parameter files wherever necessary to pick up in/in-out variables; passed mapping variables into the taskflow and assigned them to taskflow variables, which drive the decision to call data task 1 or data task 2.
  • Worked with the PowerCenter task as well, noting that it only connects to the mapping and does not import the entire flow.
  • Implemented the concept of Slowly changing Dimension Type 2 from the Amazon S3 files we receive on a daily basis and load them to Aurora MySQL Database.
  • Provided status updates to the client on the build effort and various activities around the design of the interfaces.
  • Used a JSON schema to define the table and column mapping from S3 data to Redshift; see the sketch after this list.
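A minimal sketch of the S3-to-Redshift load pattern described above, using psycopg2 to issue a COPY with a JSONPaths mapping file; the cluster, bucket, table and IAM role names are illustrative placeholders, not values from the project.

```python
import psycopg2

# Placeholder connection details -- illustrative only.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dw", user="etl_user", password="secret",
)

copy_sql = """
    COPY staging.customer_events
    FROM 's3://example-bucket/events/2020/01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS JSON 's3://example-bucket/jsonpaths/customer_events.jsonpaths'
    TIMEFORMAT 'auto'
    TRUNCATECOLUMNS;
"""

with conn, conn.cursor() as cur:
    # The JSONPaths file maps attributes in the S3 JSON documents to target columns.
    cur.execute(copy_sql)
```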

Tools/Technologies: Informatica PowerCenter 9.6/10.0, IICS, SQL Developer for Oracle, Confidential Redshift, AWS Data Pipeline, S3, SQL Server 2005/2008, UNIX Shell Scripting, Python, Amazon Web Services.

Confidential, FL

Sr. Informatica Developer / Solution Specialist

Responsibilities:

  • Worked on-site with clients to understand the requirements of the State Project.
  • Interacted with subject matter experts and Data Stewards to get information about the business rules for cleaning the source system data.
  • Involved in requirement analysis, ETL design and development for extracting data from source systems such as Salesforce, Mainframe, DB2, Sybase, Oracle and flat files and loading it into Netezza.
  • Responsible for identifying bottlenecks and fixing them through performance tuning on the Netezza database.
  • Created Netezza SQL scripts to verify that tables loaded correctly.
  • Provided the user stories/tasks to scrum master to be entered into TFS as we followed agile methodology.
  • Analyzed the project and came up with the required components.
  • Utilized the Zena tool to schedule Informatica components as well as UNIX scripts.
  • Coordinated with offshore teams to provide them the requirements and get the work done as needed.
  • Used various transformations in Informatica, such as Router, Aggregator and Transaction Control, based on the business needs.
  • Implemented Slowly Changing Dimensions (SCD Type 2) using the effective-date approach in Informatica.
  • Used Erwin for reverse engineering to connect to existing database and ODS to create graphical representation in the form of Entity Relationships and elicit more information.
  • Used Model Mart of Erwin for effective model management of sharing, dividing and reusing model information and design for productivity improvement
  • Requested RFCs/RITMs to promote code to higher environments, including test and production.
  • Worked on PL/SQL procedures, functions and triggers depending on the requirement.
  • Coordinated with and assisted the SIT team in case of any issues.
  • Provided warranty support once the project was completed and handed over to the Ops team.
  • Prepared various documentation such as ASM, TWS job linkage and mapping documents.
  • Worked with tools such as GitHub, Bamboo, Bitbucket and a release automation tool to automatically migrate code from Dev to SIT and from SIT to production.
  • Worked with concurrent and continuous workflows to meet the project requirements.
  • Utilized various databases such as Oracle and SQL Server to read/write data.
  • Worked with delimited and fixed-width flat files and designed UNIX scripts to capture empty-file and duplicate-file scenarios, ensuring the required revalidation conditions were met for the batch run.
  • Designed a control-table approach to capture each batch run and identify the latest batch once the batch process completed.
  • Worked with data architects, providing them requirements to obtain the LDM/PDM on an as-needed basis.
  • Design, Development, Testing and Implementation of ETL processes using Informatica Cloud
  • Developed the audit activity for all the cloud mappings in IICS.
  • Automated/Scheduled the IICS cloud jobs to run daily with email notifications for any failures.
  • Created Filewatcher jobs to setup the dependency between Cloud and PowerCenter jobs.
  • Converted specifications to programs and data mappings in an Informatica Cloud ETL environment.
  • Experience in using SSIS tools like Import and Export Wizard, Package Installation, and SSIS Package Designer.
  • Experience in importing/exporting data between different sources like Oracle/Access/Excel etc. using SSIS/DTS utility.
  • Experience in creating configuration files to deploy the SSIS packages across all environments.
  • Designed SSIS Packages to transfer data from flat files to SQL Server using Business Intelligence Development Studio.
  • Created SSIS Reusable Packages to extract data from Multi formatted Flat files, Excel, XML files into UL Database and DB2 Billing Systems.
  • Developed, deployed, and monitored SSIS Packages.
  • Created SSIS packages using SSIS Designer to export heterogeneous data from OLE DB sources (Oracle) and Excel spreadsheets to SQL Server 2005/2008.
  • Extensively used SSIS transformations such as Lookup, Derived column, Data conversion, Aggregate, Conditional split, SQL task, Script task and Send Mail task etc.
  • Extensive experience working with the Informatica tools to debug and find issues when workflows fail.
  • Coordinated with the admin team in case of any platform issues.
  • Worked on loading data from different sources, including Google Analytics and Adobe Analytics, into a SQL Server DB using Azure Data Factory.
  • Utilized Azure Data Lake Storage to store the files and then utilized Azure Data Factory to load to the respective targets.
  • Experienced in DevOps/Agile operations processes and tooling (code review, unit test automation, build & release automation, environment, incident and change management) using various tools.
  • Designed Python scripts using pandas DataFrames to read data from the SQL Server DB; see the sketch after this list.
  • Worked with the Power BI reporting tool to design sample reports for the customer to view the data.
  • Integrated Power BI with various sources including MySQL, Azure Data Lake and Google Analytics.
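A minimal sketch of the pandas-based read mentioned above, assuming a SQLAlchemy/pyodbc connection to SQL Server; the server, database, table and column names are illustrative placeholders, not values from the project.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string -- illustrative only.
engine = create_engine(
    "mssql+pyodbc://etl_user:secret@sqlserver-host/StagingDB"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)

# Pull a slice of the source data into a DataFrame for validation and ad hoc analysis.
df = pd.read_sql(
    "SELECT session_id, channel, visit_date, page_views FROM dbo.web_analytics",
    engine,
    parse_dates=["visit_date"],
)

# Simple sanity checks before handing the data to the downstream load.
print(df.shape)
print(df["channel"].value_counts().head())
```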

Tools/Technologies: Informatica PowerCenter 9.6/10.0, IICS, ERWIN R7.1, SQL Developer for Oracle, SQL Server 2005/2008, Tableau, UNIX Shell Scripting, Netezza, Python, Azure Data Factory, Azure Data Lake, Power BI.

Confidential, Minnesota

Tech Lead/Sr. ETL Informatica Developer

Responsibilities:

  • Used Informatica PowerCenter command tasks to trigger IDQ applications and Greenplum functions.
  • Worked with Informatica Metadata Manager to create lineage between database and Informatica objects, and on Data Governance activities.
  • Utilized the Data Governance tool to capture all the metadata and for profiling purposes.
  • Loaded ETL mappings, database objects and other data movement objects into Informatica Metadata Manager for lineage.
  • Used the Informatica PowerExchange tool to read data from mainframe files and load it into the database; PowerExchange reads the copybooks of the mainframe files.
  • This lineage is used by other ETL teams to see the flow of their data across the whole organization.
  • Researched other tools such as D3 to replace Informatica Metadata Manager.
  • Used the Tableau tool for reporting data quality to data stewards.
  • Incorporated various filters in Tableau to make the reports meaningful to data stewards.
  • Researched Python as a possible replacement for the Tableau reports.
  • Involved in Test data management using IBM OPTIM tool which will be helpful for masking sensitive data from Production environment.
  • Integrated with EDC & Data Quality for the business glossary, scorecards and discovery.
  • Setup the EDC resources for Oracle, SQL servers and Azure cloud DBs for metadata lineage.
  • Loaded the business terms of the organization's data dictionary into EDC, assigned them to the data assets and created lineages for the data flow.
  • Created different scanners (Oracle/PowerCenter/Platform) to load the data assets into a single repository, EDC.
  • Performed bulk loads of JSON data from an S3 bucket to Snowflake.
  • Used Snowflake functions to parse semi-structured data entirely with SQL statements; see the Snowflake sketch after this list.
  • Used Tableau for reporting data issues to the business users, who take the necessary action based on the issue encountered; the reason for each issue is also shown against each record.
  • Involved in migrating IDQ from 9.6 version to IDQ 10.0 version. Communicated with Admin team based on requirement and provided necessary information regarding the same.
  • Followed Agile methodology; logged all the user stories that needed to be developed each sprint in the VersionOne tool.
  • Worked on PL/SQL procedures, functions and triggers depending on the requirement.
  • Used Appworx for scheduling the IDQ objects.
  • Optimized and tuned the Redshift environment, enabling queries to perform up to 100x faster for Tableau and SAS Visual Analytics.
  • Wrote various data normalization jobs for new data ingested into Redshift.
  • Advanced knowledge on Confidential Redshift and MPP database concepts.
  • Migrated on premise database structure to Confidential Redshift data warehouse.
  • Implemented Workload Management (WLM) in Redshift to prioritize basic dashboard queries over more complex, longer-running ad hoc queries. This allowed for a more reliable and faster reporting interface, giving sub-second query response for basic queries.
  • Designed and developed ETL processes in AWS Glue to migrate campaign data from external sources such as S3 (ORC/Parquet/text files) into AWS Redshift.
  • Performed data extraction, aggregation and consolidation of Adobe data within AWS Glue using PySpark; see the PySpark sketch after this list.
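A minimal sketch of the S3-to-Snowflake JSON pattern described above, using the snowflake-connector-python package; the stage, table, column and connection names are hypothetical placeholders, not values from the project.

```python
import snowflake.connector

# Placeholder credentials and identifiers -- illustrative only.
conn = snowflake.connector.connect(
    account="example_account", user="etl_user", password="secret",
    warehouse="LOAD_WH", database="DW", schema="STAGING",
)
cur = conn.cursor()

# Bulk-load raw JSON from an external S3 stage into a VARIANT column.
cur.execute("""
    COPY INTO staging.claims_raw (payload)
    FROM @claims_s3_stage
    FILE_FORMAT = (TYPE = 'JSON')
""")

# Parse the semi-structured payload entirely in SQL.
cur.execute("""
    INSERT INTO staging.claims_flat (claim_id, member_id, amount)
    SELECT payload:claim_id::string,
           payload:member_id::string,
           payload:amount::number(12,2)
    FROM   staging.claims_raw
""")

cur.close()
conn.close()
```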
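And a minimal sketch of a Glue PySpark aggregation job along the lines described above; the bucket paths and column names are hypothetical placeholders.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw Adobe clickstream extracts from S3 (Parquet) -- paths are placeholders.
raw = spark.read.parquet("s3://example-bucket/adobe/clickstream/")

# Aggregate and consolidate per day and campaign before loading downstream to Redshift.
daily = (
    raw.groupBy("visit_date", "campaign_id")
       .agg(F.countDistinct("visitor_id").alias("visitors"),
            F.sum("page_views").alias("page_views"))
)
daily.write.mode("overwrite").parquet("s3://example-bucket/curated/adobe_daily/")

job.commit()
```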

Tools/Technologies: Informatica Data Quality 9.6/10.0, Snowflake Cloud Data Warehouse, Informatica PowerCenter 9.6/10.0, EDC (Enterprise Data Catalog) 10.2.1, Informatica Analyst, Informatica PowerExchange, SQL Developer for Oracle, Greenplum, SQL Server 2005/2008, Confidential Redshift, AWS Data Pipeline, S3, Elastic Load Balancer, Tableau, IBM OPTIM for Test Data Management, AWS Glue, UNIX Shell Scripting, Python.

Confidential, Richmond, VA

Informatica IDQ Developer

Responsibilities:

  • Interacted with subject matter experts and Data Stewards to get information about the business rules for cleaning the source system data.
  • Worked on Data Governance activities and put in place the required metadata and IDQ rules.
  • Used the Data Governance tool to design the rules and populate them using scorecards.
  • Genworth is a claims-processing company with millions of claims processed every year.
  • Worked in the healthcare domain (Medicare, Medicaid & insurance), ensuring compliance with HIPAA regulations and requirements.
  • Collaboratively worked with executive stakeholders, including the CTO and VP of EDI Solutions, on key business strategy decisions and proposals.
  • Used the Metadata Manager tool to capture metadata across the entire company, which is useful for tracking the end-to-end flow of the data.
  • Extracted data from multiple data sources; the immediate objective was to develop a data mart to handle health insurance members, their policy needs and claims.
  • Developed multiple mappings for healthcare institutional and professional data.
  • Ensured that all the claims/Calypso healthcare data was loaded into the respective data warehouse appropriately after applying all the rules provided by data stewards.
  • Met with the application team to review the previous month's report, gather all the rules that needed to be applied to specific columns/tables, and use the metadata to turn flags on/off, based on which quality is measured for that set of columns.
  • Used the Informatica Analyst tool, which is helpful for maintaining reference tables and running the scorecards.
  • Used the Informatica Analyst tool to perform profiling, which measures the quality of the data.
  • Worked with Informatica Business Glossary which helps in maintaining Metadata of all the systems.
  • Made use of a UI where data stewards log in and turn flags on/off for the required columns; these are reviewed by lead data stewards for approval. Once approval is complete, we review the flags and send them to the Data Quality jobs.
  • Utilized Business Glossary Desktop, which stores the metadata and all the definitions of the terms and categories related to the business.
  • Used various transformations in the Informatica Data Quality (IDQ)/Informatica Developer tool to calculate the data quality factors, populating all the analysis into a flat file and, based on the metadata, summarizing all the issues at the table and column level.
  • Data stewards review this report and provide an update on which flags need to be turned on/off and which user-defined rules should be applied on top of each column.
  • Since the IDQ/Informatica Developer tool requires a separate mapping for each table, maintenance of the code was becoming very complex; to handle this scenario, we designed a dynamic process using the Informatica PowerCenter tool that helps measure the quality of the data.
  • It measures the quality of the data using the factors Completeness, Accuracy, Timeliness and Consistency.
  • This process takes a table name as input and provides the necessary results with the various analyses, based on the column data types, that meet the above factors, e.g. null analysis, null override, business rules, string junk, string pattern, valid value and number range; see the data-quality sketch after this list.
  • Used different PostgreSQL Greenplum Database functions to aggregate across source summary, column summary and issue summary based on requirement.
  • Optimized postgresql.conf for performance improvements and reviewed all PostgreSQL logs for problems.
  • Installed and configured PostgreSQL from source or packages on Linux machines; experience designing database structures, indexes, views and partitioning.
  • Ensured nightly cron jobs for backups, re-indexing, vacuuming and materialized views executed properly on 24/7 high-availability Postgres production and development databases; see the maintenance sketch after this list.
  • Experienced in creating EDI maps to translate IDOCs to EDI standards and vice versa using GXS Application Integrator, Specbuilder and GXS Enterprise System for external communication.
  • Expertise in handling EDI standards like ANSI X12, EDIFACT and HIPAA mandated transactions (837/835, 270/271, 276/277, 834 and 999/TA1).
  • Involved in Test data management using IBM OPTIM tool which will be helpful for masking sensitive data from Production environment.
  • Used Unix shell scripting to trigger various functions in Greenplum and to trigger Informatica PowerCenter workflows.
  • Utilized the Spotfire reporting tool to expose our data for reporting.
  • Performed some simple calculations in the Spotfire tool so the data could be visualized.
  • Planned and led Go-Live activities for the entire EDI solution and the production stabilization period.
  • Optimized performance by using a persistent cache across multiple Lookup transformations, which is useful for building different rules.
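A minimal pandas sketch of the kind of column-level checks described above (null analysis, valid values, number ranges); the table, columns and thresholds are hypothetical, and in the project these checks ran as dynamic PowerCenter/IDQ mappings rather than Python.

```python
import pandas as pd

# Hypothetical sample of a source table -- illustrative only.
claims = pd.DataFrame({
    "claim_id": ["C001", "C002", None, "C004"],
    "status":   ["OPEN", "CLOSED", "OPEN", "??"],
    "amount":   [1200.0, -5.0, 300.0, 800.0],
})
valid_status = {"OPEN", "CLOSED", "PENDING"}

# Completeness: share of non-null values per column.
completeness = claims.notna().mean()

# Valid-value and number-range checks for selected columns.
issues = pd.DataFrame({
    "null_claim_id":       [claims["claim_id"].isna().sum()],
    "invalid_status":      [(~claims["status"].isin(valid_status)).sum()],
    "amount_out_of_range": [((claims["amount"] < 0) | (claims["amount"] > 100000)).sum()],
})

print(completeness)
print(issues)   # in the real process these figures were written to a flat file and summarized
```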
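A minimal sketch of the nightly maintenance described above, issued here through psycopg2; the object names and connection string are placeholders, and the real jobs were driven from cron shell scripts.

```python
import psycopg2

# Placeholder DSN -- illustrative only.
conn = psycopg2.connect("dbname=claims_dw user=maint_user host=pg-prod")
conn.autocommit = True  # VACUUM cannot run inside a transaction block

with conn.cursor() as cur:
    # Reclaim dead tuples and refresh planner statistics on the busiest staging table.
    cur.execute("VACUUM ANALYZE staging.claims_raw;")
    # Rebuild a heavily updated index.
    cur.execute("REINDEX INDEX staging.claims_raw_claim_id_idx;")
    # Refresh the reporting materialized view used by downstream dashboards.
    cur.execute("REFRESH MATERIALIZED VIEW reporting.claims_monthly_summary;")

conn.close()
```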

Tools/Technologies: Informatica Data Quality 9.6/10.0, Informatica PowerCenter 9.6/10.0, Informatica PowerExchange, Informatica Analyst, SQL Server Integration Services (SSIS), SQL Developer for Oracle, Greenplum, SQL Server 2005/2008, Tableau, Spotfire, IBM OPTIM for Test Data Management, UNIX Shell Scripting, Python, PostgreSQL 9.0/9.1.

Confidential, Chicago, IL

ETL Informatica Developer

Responsibilities:

  • Worked on Informatica PowerCenter 9.5.1 in the Confidential AMI project.
  • Handled large amounts of data, usually millions of records per day.
  • Confidential is a utilities project where we handle live data that usually arrives in the millions of records, depending on the number of customers.
  • Worked with the dynamic SQL transformation, which involves many joins on fact and dimension tables.
  • Used the pre-SQL of the Source Qualifier, which involves handling many SQL queries.
  • Worked with mapping and workflow variables, including using mapping variables in the workflow and vice versa.
  • Used functions such as SETVARIABLE to assign the incoming values to mapping variables.
  • Created mappings so that all mappings within a single workflow are connected using mapping and workflow variables.
  • Used pre-session and post-session success variable assignments on non-reusable session tasks to pass the workflow variables to the mapping.
  • Created UNIX shell scripts to schedule or unschedule the workflows based on requirements.
  • Created a concurrent workflow that is called from a different workflow using a command task.
  • Worked on continuous scheduling of Informatica workflows.
  • Used Decision, Assignment and Email tasks based on requirements.
  • Created parameter files at the workflow level and merged the parameter files used at the workflow and session levels.
  • Created unit test cases and documented all of them.
  • Used a Lookup transformation with a lookup override to split comma-separated values into multiple rows using Oracle's CONNECT BY clause; see the sketch after this list.
  • Worked on PL/SQL procedures, functions and triggers depending on the requirement.
  • Used Oracle MINUS queries to compare the data between two schemas.
  • Created a mapplet that can be reused across two mappings and functions to give different output based on the incoming data.
  • Used the Transaction Control transformation to generate dynamic files at the mapping level.
  • Also used an approach to generate the dynamic files by splitting on timestamp at the session level.
  • Worked on performance tuning, increasing throughput by switching from the SQL transformation to using the same query in the Source Qualifier.
  • Used the PARALLEL hint to increase performance, dividing the work across 16 CPUs to run an Oracle query with multiple joins.
  • Worked on Informatica dynamic partitioning, which can be used for loading large amounts of history data.
  • Worked on calling the same workflow multiple times until a condition is satisfied, which is handled in a UNIX shell script.
  • Involved in data analysis using Python and handled ad hoc requests as per requirements.
  • Developed Python scripts for automating tasks.
  • Involved in code migration from development to integration testing, Staging and finally to Production Environment.
  • Strong hands-on experience using Teradata utilities (SQL, BTEQ, FastLoad, MultiLoad, FastExport, TPump, Visual Explain, Queryman), Teradata parallel support and UNIX shell scripting.
  • Proficient in coding optimized Teradata batch-processing scripts for data transformation, aggregation and loading using BTEQ.
  • Worked on optimizing and tuning Teradata views and SQL to improve batch performance and data response time for users.
  • Prepared unit test cases covering all aspects of testing the developed code.
  • Prepared documentation required to migrate all the objects across different environments.
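A minimal sketch of the comma-separated-value split mentioned above, assuming the python-oracledb driver; the table, column and connection details are hypothetical placeholders, and in the project the same SQL pattern lived in the lookup SQL override.

```python
import oracledb

# Placeholder credentials/DSN -- illustrative only.
conn = oracledb.connect(user="etl_user", password="secret", dsn="dbhost:1521/ORCLPDB1")

# Split a comma-separated column into one row per value with CONNECT BY + REGEXP_SUBSTR;
# the PRIOR SYS_GUID() condition keeps the hierarchy from cycling across source rows.
split_sql = """
    SELECT t.account_id,
           REGEXP_SUBSTR(t.tag_list, '[^,]+', 1, LEVEL) AS tag
    FROM   accounts t
    CONNECT BY REGEXP_SUBSTR(t.tag_list, '[^,]+', 1, LEVEL) IS NOT NULL
           AND PRIOR t.account_id = t.account_id
           AND PRIOR SYS_GUID() IS NOT NULL
"""

with conn.cursor() as cur:
    for account_id, tag in cur.execute(split_sql):
        print(account_id, tag)

conn.close()
```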

Tools/Technologies: Informatica PowerCenter 9.6, Informatica PowerExchange, SQL Developer for Oracle, SQL Server 2005/2008, Spotfire, UNIX Shell Scripting, Python.

Confidential, OH

ETL Informatica Developer

Responsibilities:

  • Prepared technical specifications for the development of Informatica (ETL) mappings to load data into various target tables and defining ETL standards.
  • Created Source to Target Mapping Documents as per requirement.
  • Responsible for working with DBA to calculate the estimated space required for each Source System depending on the requirement.
  • Involved in working with 4-5 source systems, loading them to build the EDW environment.
  • Used UNIX environment to run the batches where the jobs are grouped based on the requirement.
  • Involved in basic testing required before migrating the objects to QA/UAT environment.
  • Performed testing at both the UNIX shell and Oracle level based on requirements; also provided production support by monitoring the processes running daily.
  • Involved in performing Reconciliation based on the Source and Target.
  • Responsible for developing complex Informatica PowerCenter mappings using different types of transformations like UNION transformation, Connected and Unconnected LOOKUP transformations, Router, Filter, Aggregator, Expression and Update strategy transformations for Large volumes of Datasets.
  • Implemented SCD Type 2 using the effective-dates approach as per requirement; see the sketch after this list.
  • Extensively used parameter files and mapping variables while developing the mappings for all the dimension tables.
  • Worked with Informatica and other consultants to develop IDQ plans to identify possible data issues.
  • Involved in generating and applying rules to profile data for flat files and relational data by creating rules to case cleanse, parse, standardize data through mappings in IDQ and generated as Mapplets in PC.
  • Responsible for Error Handling in Workflows and sessions using Informatica workflow manager and monitor the workflow run and statistic properties on Informatica Workflow Monitor.
  • Created Connected, Unconnected and Dynamic Lookup transformations for better performance.
  • Worked on handling performance issues, Troubleshooting of Informatica Mappings, evaluating current logic for tuning possibilities.
  • Responsible for creating production support documentation.
  • Worked on creating various test cases, test plan document and test plan strategy as part of the testing effort required for the project.
  • Involved in weekly and bi-monthly team status meetings.
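A minimal sketch of the effective-date SCD Type 2 logic mentioned above, expressed in pandas purely for illustration; in the project this was implemented as Informatica mappings (Lookup plus Update Strategy), and the column names and dates here are hypothetical.

```python
from datetime import date

import pandas as pd

HIGH_DATE = date(9999, 12, 31)
load_dt = date(2020, 6, 1)

# Current dimension rows and today's source extract -- sample data only.
dim = pd.DataFrame([
    {"cust_id": 1, "city": "Dayton", "eff_from": date(2019, 1, 1), "eff_to": HIGH_DATE, "current": True},
])
src = pd.DataFrame([{"cust_id": 1, "city": "Columbus"}])  # attribute changed

# Detect rows whose tracked attribute differs from the current dimension version.
merged = src.merge(dim[dim["current"]], on="cust_id", how="left", suffixes=("", "_dim"))
changed = merged[merged["city"] != merged["city_dim"]]

# Close out the old version with the load date ...
close_mask = dim["cust_id"].isin(changed["cust_id"]) & dim["current"]
dim.loc[close_mask, ["eff_to", "current"]] = [load_dt, False]

# ... and insert the new version with an open-ended effective range.
new_rows = changed[["cust_id", "city"]].assign(eff_from=load_dt, eff_to=HIGH_DATE, current=True)
dim = pd.concat([dim, new_rows], ignore_index=True)

print(dim)
```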

Tools/Technologies: Informatica Data Quality 9.6, Informatica PowerCenter 9.6, Hive 0.12, Informatica Analyst, SQL Developer for Oracle, SQL Server 2005/2008, Netezza, UNIX Shell Scripting.

Confidential

Student Assistant/DB Developer

Responsibilities:

  • Involved in understanding how the data flows across the university portal.
  • Prepared the ETL Requirements utilizing the Stored Procedures, Functions and Views.
  • Involved in requirement-gathering sessions to understand the complexities in the data and provided a solution to design the data warehouse.
  • Designed scheduling in MySQL using the built-in event scheduler available through MySQL Workbench.
  • Created documents including the data dictionary, source to target mapping documents and functional document which explains the end to end process.
  • Monitoring all the environments regularly and provided 24x7 support for all the modules designed.

Technologies: MySQL, Oracle 11g

Confidential

Jr. Informatica Developer

Responsibilities:

  • Involved in KT sessions to understand how the Informatica tool can be used for the ETL process.
  • Understood the business-level requirements from the already-prepared documentation of the overall process.
  • Created Workflow, Decision, Event Wait, Event Raise and Email tasks, and scheduled tasks and workflows based on client requirements.
  • Implemented different Slowly Changing Dimensions (SCD) using Informatica.
  • Monitoring all the environments regularly and provided 24x7 production support for Informatica.
  • Prepared Software engineering quality documents related to design, development, testing.
  • Documented the mapping process and the process that can be used to facilitate future development.

Technologies: Informatica Power Center 9.0, Oracle 11g, TIVOLI
