Data Support Analyst Resume
Bentonville, Arkansas
TECHNICAL SKILLS:
ETL Tools: IBM InfoSphere DataStage 11.5, 8.5, 8.1
RDBMS: Oracle, IBM DB2, Sybase, Teradata, Informix
Languages: SQL, UNIX Shell Scripting
Third-Party Tools: SQL Developer, ServiceNow, Plutora, Control-M and CA7 scheduling tools
Big Data: HANA, Hive databases
Agile Tools: GitHub, Jira, LeanKit
EXPERIENCE:
Confidential, Bentonville, Arkansas
Data Support Analyst
Skills: DataStage V9.1 & 11.5, SQL, DB2 UDB, Informix, Teradata, HANA, Hive
Responsibilities:
- Handling regular enhancement and support activities; production abends paged to the team are resolved within the SLAs. This generally involves extensive analysis of the jobs; recommendations to restart or cancel are given based on each DataStage job's restart logic and its downstream jobs.
- Analyzing and providing solutions for repeat abends; performance tuning long-running jobs; addressing job failures caused by data issues and providing workarounds.
- Coordinating with other service and support teams across geographies, both internally and externally; driving incidents and issues to resolution, handling escalations and triaging based on need.
- Reporting status to management through standard and ad hoc reporting
- Implemented an enhancement in a DataStage flow per business needs without disrupting the normal execution cycle: modified an existing database load flow to dynamically load two more databases based on their availability and need. The enhancement was implemented with minimal change after extensive analysis.
- Implementing enhancements to fix jobs with intermittent database locks and data issues that arise from causes such as NLS settings and source data when large volumes are loaded in production through multiple flows. Changes to DataStage jobs are implemented iteratively in production.
- Implemented three-attempt restart ability (using UNIX scripts) in DataStage jobs that abort due to temporary issues, reducing abends.
- Handling extensive escalations that arise during environment outages when data is not displayed in the front end; participating in war rooms after outages and taking time-critical decisions based on the DataStage job flows.
- Using the Rapid Deployment Tool and GitHub as part of DevOps practices to implement changes in production
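The UNIX-script restart ability for jobs that abort on temporary issues can be sketched as a small retry wrapper. This is a minimal sketch: the function name, attempt count and back-off delay are illustrative assumptions, and the real scripts would invoke the DataStage `dsjob` CLI as shown in the commented example.

```shell
#!/bin/sh
# Retry wrapper: run a command up to three times before declaring failure.
run_with_retries() {
  max=3
  attempt=1
  while [ "$attempt" -le "$max" ]; do
    if "$@"; then
      echo "succeeded on attempt $attempt"
      return 0
    fi
    echo "attempt $attempt failed" >&2
    attempt=$((attempt + 1))
    sleep 1   # brief pause before retrying; transient locks/outages often clear
  done
  echo "failed after $max attempts" >&2
  return 1
}

# Example with DataStage's dsjob CLI (project and job names are placeholders):
#   run_with_retries dsjob -run -jobstatus MYPROJECT DailyLoadJob
```

Keeping the retry logic outside the job itself means the DataStage job's own restart logic stays unchanged.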
Confidential, Bentonville, Arkansas
Lead Developer
Skills: DataStage V9.1 & 11.5, SQL, DB2 UDB
Responsibilities:
- Leading a team of six spread across geographical locations (three members in Chennai and three in Kolkata), keeping team members highly motivated so that work gets done with minimal supervision.
- Analyzing the architecture of the cross-platform DataStage jobs, including the mainframe, UNIX and FTP servers.
- Onboarding and enabling team members to accomplish the project activities and clearing roadblocks along the way.
- Collaborating with the client's different teams and vendors; reporting progress to senior management and the Confidential client.
- Planning project activities from analysis and testing through go-live with respect to resource utilization and project timelines; tracking tasks using a task matrix.
- Helping the Team on a need basis, when they encounter issues with DataStage job development and testing.
- Identifying inbound/outbound connections via the parameter sets and connection strings in DataStage, and raising requests in the Security Portal to establish the same connections in the higher version.
- Updating UNIX scripts to reflect 11.5 connection details
- Testing by running the jobs in versions 9.1 and 11.5 with the same input data; run times were compared for each job, and jobs were migrated to the higher environment only when performance was on par with or exceeded the lower environment.
- Making sure all interfacing applications refer to the higher-version DataStage jobs.
- Adding, removing and altering environment variables, NLS values and DataStage stages as required for compatibility with the upgraded version; using the Connector Migration Tool to migrate DataStage connectors.
- Addressing data issues and job aborts when the job runs in the new version.
- Rapid-deploying the code to the Dev, Test and Prod environments with approvals from stakeholders and the Change Approval Board (CAB).
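The migration gate used when comparing run times between the lower and higher environments could be sketched as below. The function name and the 10% tolerance are assumptions for illustration; actual timings would come from Director logs or `dsjob -report`.

```shell
#!/bin/sh
# Decide whether a job may be migrated: the 11.5 run time must be on par
# with or better than the 9.1 run time, within an assumed tolerance.
compare_runtimes() {
  old_secs=$1   # elapsed seconds in the 9.1 environment
  new_secs=$2   # elapsed seconds in the 11.5 environment
  tol=${3:-10}  # percent slowdown still considered "on par" (assumed default)
  limit=$(( old_secs + old_secs * tol / 100 ))
  if [ "$new_secs" -le "$limit" ]; then
    echo "PASS: ${new_secs}s vs ${old_secs}s (limit ${limit}s)"
  else
    echo "FAIL: ${new_secs}s vs ${old_secs}s (limit ${limit}s)"
    return 1
  fi
}
```

Running this per job over both environments' timings gives an objective, repeatable go/no-go list rather than an eyeball comparison.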
Confidential, Phoenix
DataStage Developer & Initiative Lead
Skills: DataStage V11.5, SQL, DB2 UDB
Responsibilities:
- Loading data into the application database from segmented flat files provided by the State of Wyoming.
- Analyzing the source data for data anomalies and understanding the source data with respect to the Confidential table structure
- Data Profiling
- Collaborating with the client to obtain data in a standardized format and of good quality
- Working with users to understand and gather detailed Business Requirements and propose solutions.
- Data Mapping between the source systems and the Confidential systems and delivering Data Mapping Specification Documents
- Created an ‘Extraction Issue’ spreadsheet to inform the client and hold the respective teams accountable for poor-quality data
- Working along with the client on data mapping between the source columns and the Confidential columns.
- Exclusively using DB2 Connector Stage, Sequential File Stage, Transformer Stage, Join Stage, Lookup Stage, Surrogate Key Stage, Remove Duplicate Stage, Funnel, Filter and other stages for developing jobs to reflect the business rules.
- Using Job Sequences to control the execution of the job flow using various activities like Job Activity, Email Notification, Sequencer, Routine activity and Exec Command Activities.
- Used DataStage Director to schedule, monitor, cleanup resources and run jobs with several invocation ids for various run streams for a particular system.
- Constructing and testing of conversion scripts/programs
- Loading data into the UI Modernization database after performing the necessary transformations on the source data; experienced in handling large volumes of data with performance tuning and capacity planning.
- Verifying and validating the data loaded in collaboration with the state
- Validating and verifying the data loaded using ETL Reconciliation Programs or Database Scripts.
- Defect Fixing and performing Root Cause Analysis on the defects by categorizing them as Extraction, Mapping or Code defects
- Performing impact analysis on change requests and data model changes, and on proposed defect solutions, since a change or enhancement in one module might affect other modules. Proposed changes and solutions are analyzed and prioritized based on the impact, time constraints and resource constraints of the application and team
- Delivering the Strategy, Mapping, Weekly Defect Tracker and other relevant Data Migration Documents
- Documenting the Database Changes, Extraction Issues, Performance Improvement steps, Review Comments, Best Practices, Verification and Validation scenarios during the Development Phase and Defects during Testing Phases.
- Being responsible for the initiatives I own, managing work between two life cycles (Phase 1 and Phase 2), and helping my team
- Reporting Status through standard and ad hoc reporting, analysis dashboards for Management
- Importing, Exporting Metadata and Code; Managing Code movement through different environments
- Making effective decisions on approach and design to meet the client’s current and future needs; reviewing other team members’ work to ensure code quality, backed by strong analytical, relationship, collaborative and organizational skills.
- Demonstrating hands-on data analysis skills to understand the data
- Working with business analyst teams to understand data quality requirements
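The verification and validation of loaded data via reconciliation programs can be sketched as a record-count check between the extract file and the target table. This sketch assumes one record per line with no header or trailer; in practice the loaded count would come from a database query (the `db2` example in the comment uses placeholder names).

```shell
#!/bin/sh
# Reconcile the number of records extracted from a flat file against the
# number of rows reported as loaded in the target table.
reconcile() {
  file=$1     # extract file, one record per line (assumed layout)
  loaded=$2   # row count reported by the database
  extracted=$(wc -l < "$file")
  if [ "$extracted" -eq "$loaded" ]; then
    echo "MATCH: $extracted records extracted and loaded"
  else
    echo "MISMATCH: extracted=$extracted loaded=$loaded"
    return 1
  fi
}

# In practice the loaded count would come from a query, e.g.
#   loaded=$(db2 -x "SELECT COUNT(*) FROM SCHEMA.TARGET_TABLE")
# (db2 CLP; schema and table names here are placeholders)
```

A mismatch flags the load for root cause analysis before signing off with the state.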
Confidential, Minneapolis
Technical Lead & Onsite Coordinator
Skills: DataStage V9.1, SQL, DB2 UDB, MS SQL
Responsibilities:
- Analyzing jobs on data performance issues; performance tuning long running jobs.
- Implementing CRs after analyzing the impact to the application, making changes to jobs wherever necessary.
- Working with scheduling tools like Control-M Enterprise Manager
- Coordinating with other service and support teams across geographies, both internally and externally; driving incidents and issues to resolution and handling escalations as the onsite-offshore coordinator
- Independently performing analysis of issues, participating in the development of potential solutions, and making recommendations to ensure accurate and timely resolution.
- Encouraging performance tuning of ETL jobs using parallel pipelining techniques and proper use of stages
- Understanding ETL job flows between applications through jobs scheduled in Control-M and providing quicker resolutions.
- Worked with DataStage Designer to import/export metadata from database, jobs and routines between DataStage projects
- Implemented Error Handling scenarios in DataStage jobs to write the rejected records to a table.
- Performed debugging, troubleshooting, monitoring and performance tuning using DataStage.
- Used different Parallel Extender Partitioning techniques in the stages to facilitate the best parallelism in the Parallel Extender jobs.
- Used DataStage director to run and monitor the jobs for performance statistics
- Performance tuning of jobs by interpreting performance statistics of developed job
- Experienced in developing efficient queries against relational databases.
- Understanding, analyzing and documenting requirements, asking the right questions to the Client
- Single Point of Contact for the Client, Infrastructure and DBA Teams maintaining multiple environments and Onsite Coordinator for Offshore Team
- Constantly questioning the status quo & evaluating the current processes and identifying newer simpler means as solutions
- Reporting and abiding by industry standards, following ITIL principles
- Moderating between different Interfacing Teams, identifying issues, prioritizing the solutions based on time and resource constraints in the Project/Team
Confidential
DataStage Support & Enhancements
Skills: DataStage V9.1, SQL, DB2 UDB, Oracle.
Responsibilities:
- Analyzing DataStage job statistics in Director and performing performance tuning to reduce table load times and the load on the computing nodes involved.
- Triaging various problems through root cause analysis and resolving issues in a timely manner to minimize impact to systems and environments
- Exercising judgment within defined procedures and practices to determine appropriate action
- Using communication and interpersonal skills to build lasting working relationships
- Understand existing ETL Jobs, and make changes as per new requirements
- Identify, propose & implement improvements in ETL jobs
- Work closely with DBAs to fix performance bottlenecks
- Oversee day-to-day ETL processes: job monitoring, issue identification, documentation, analysis and resolution
- Drive completion of projects and initiatives, follow through on execution of strategies and demonstrate ability to stay the course despite obstacles
- Drive root cause analysis and long-term solution for critical and high impact issues;
- Streamline all ETL jobs, SLAs and communication channels.
- Maintain and constantly refine failure and escalation records; capture, articulate and create a knowledge base of ETL functioning and potential points of failure.
- Train and coach junior and senior ETL engineers, setting the right expectations for those on development jobs as well as on support
- Collect and create knowledge articles for an issue, removing the need to analyze and resolve it from scratch the next time it occurs.
- Documenting ETL processes & solutions by following the company standards for Support and Production Release
- Spearheaded the movement across the support team and successfully created a database of issues integrated into the ServiceNow (ITSM) tool
Confidential
Developer
Skills: DataStage V8.5, SQL, UNIX Shell Scripting, Oracle
Responsibilities:
- Extensively used DataStage Designer to develop various parallel jobs to extract claims-, policy-, provider- and practitioner-related data, perform the necessary transformations, and load into Confidential tables or send output files.
- Used DataStage Director to schedule, monitor, cleanup resources and run jobs with several invocation ids for various run streams for a particular system.
- Worked with DataStage designer to import metadata from database, import/export jobs and routines between DataStage projects
- Developed various SQL scripts for extracting the source data
- Designed DataStage jobs for one-time, daily and biweekly incremental loading.
- Developed SQL scripts for validating the data loaded into Confidential tables; also prepared flow charts, design documents and unit test documents.
- Wrote UNIX shell scripts for file validation, for scheduling DataStage jobs, and for pre- and post-processing of the files.
- Extensively worked on Parallel Jobs using various stages like Join, Transformer, Lookup, Pivot, Modify, InfoSphere Change Data Capture, Difference, Filter, Funnel, Copy, Sort, FTP, Sequential File, Complex Flat File, Data set, DB2 enterprise/connector, XML Stage, Merge, Aggregator, Row Generator, Shared Container, Remove Duplicates.
- Supported inbound and outbound data integrations with the help of message queues and by sending data as XML
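The UNIX file-validation scripts used in pre-processing might look like the following sketch. The `TRAILER|<count>` convention is an assumed layout for illustration; real segment files would follow whatever format was agreed with the source system.

```shell
#!/bin/sh
# Validate an incoming flat file before the DataStage load: the file must
# be non-empty and its data-record count must match the trailer count.
validate_file() {
  f=$1
  [ -s "$f" ] || { echo "empty or missing: $f" >&2; return 1; }
  # Assumed trailer convention: last line is "TRAILER|<record count>".
  expected=$(tail -n 1 "$f" | awk -F'|' '$1 == "TRAILER" {print $2}')
  if [ -z "$expected" ]; then
    echo "no trailer record: $f" >&2
    return 1
  fi
  actual=$(( $(wc -l < "$f") - 1 ))   # data records, excluding the trailer
  if [ "$actual" -eq "$expected" ]; then
    echo "valid: $actual records"
  else
    echo "count mismatch: trailer=$expected actual=$actual" >&2
    return 1
  fi
}
```

Files failing validation would be quarantined and reported back to the source rather than passed to the load jobs.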