Data Analyst Resume
SUMMARY
- A goal-oriented professional with over 7.6 years of experience in Information Technology.
- Strong in data warehouse design, development, and analytics.
- Extensive experience in the design and development of ETL processes in IBM DataStage across all versions.
- Strong in shell and Python programming; proficient in RDBMS platforms such as Teradata, Oracle, SQL Server, MySQL, and DB2.
- Experience across platforms and technologies including analytics, Hadoop, and Spark.
- Experience in data modeling and design.
- Experience in cloud computing and Microsoft Azure cloud services.
- Experienced in GitHub and Jenkins for CI/CD and end-to-end automation of builds and deployments.
- Good knowledge of Apache Spark with Python, Airflow, Databricks, Jupyter, and Hadoop big data.
- Experience with Agile/Scrum methodology
- Able to present and absorb complex ideas quickly and accurately.
- Strong focus on time-management, decision-making, and project delivery.
- Excellent knowledge of Application Lifecycle Management, Change & Release Management, and ITIL processes.
- Exposed to all phases of the software development life cycle (SDLC): analysis, planning, development, testing, implementation, and post-production analysis.
- Interacts well with developers, managers, and team members to coordinate tasks; strong commitment to work.
PROFESSIONAL EXPERIENCE
Confidential
Data Analyst
Responsibilities:
- Provided application development and support expertise for the Case Management application.
- Developed the data warehouse model for the backend database to meet reporting requirements of state and federal Health and Human Services agencies.
- Developed the core ETL data extraction model to support end-user reporting UI interfaces.
- Created multiple pipelines and activities for full and incremental data loads into Azure Data Lake Store and Azure Synapse (see the sketch after this list).
- Designed and implemented redundant systems, policies, and procedures for disaster recovery and archiving of data assets.
- Worked on coding, review, unit testing, debugging, deployment, operations, and support, and maintained the Azure Data Factory service.
- Worked on ETL packages for data migration from internal source systems to the Azure cloud.
- Prepared design specifications, installation instructions, implementation guidelines, and other system-related documentation.
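The loads themselves were orchestrated as Azure Data Factory pipelines; the sketch below is only a minimal PySpark illustration of the same full-versus-incremental pattern, assuming a hypothetical watermark column and placeholder Data Lake paths rather than the actual pipeline parameters.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental_load_sketch").getOrCreate()

# Placeholder paths and watermark column; the real pipeline parameters differ.
SOURCE_PATH = "abfss://raw@datalake.dfs.core.windows.net/case_mgmt/cases/"
TARGET_PATH = "abfss://curated@datalake.dfs.core.windows.net/case_mgmt/cases/"
WATERMARK_COL = "last_modified_ts"

# Find the high-water mark from data already loaded; if nothing is there yet,
# fall back to a full load.
try:
    last_watermark = (
        spark.read.parquet(TARGET_PATH)
        .agg(F.max(WATERMARK_COL).alias("wm"))
        .collect()[0]["wm"]
    )
except Exception:
    last_watermark = None

source_df = spark.read.parquet(SOURCE_PATH)

# Incremental load keeps only rows newer than the last watermark; otherwise load everything.
delta_df = (
    source_df.filter(F.col(WATERMARK_COL) > F.lit(last_watermark))
    if last_watermark is not None
    else source_df
)

# Append the new slice to the curated zone; Synapse reads it downstream.
delta_df.write.mode("append").parquet(TARGET_PATH)
```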
Confidential
Sr. Data Engineering Analyst / Hadoop Developer
Responsibilities:
- Collaborate with Business Analysts to gather business requirements and evaluate the scope of the design and its technical feasibility.
- Analyze the complexity and technical impact of requirements against the existing design and work with Business Analysts to further refine the requirements.
- Create high-level and detailed-level designs, conduct design reviews and design verification, and produce the final work estimate.
- End-to-end implementation, maintenance, optimization, and enhancement of the application.
- Review code and present code and design to the Technical Review Board.
- Lead the team and ensure delivery happens on time with high quality and minimal defects, in keeping with Optum's quality standards.
- Used the import and export facilities in IBM InfoSphere Information Server Manager for importing and exporting jobs and projects (see the sketch after this list).
- Verify production code, support the first three production executions, and transition the code and processes to the maintenance/support team.
- Coordinate with business partners, analytics teams, and stakeholders to provide status reporting.
- Actively participated in team meetings, day-to-day calls, meeting reviews, status calls, and batch reviews.
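The import/export described above was done through the InfoSphere Information Server Manager client; a comparable command-line route is the istool export command, shown here as a hypothetical sketch wrapped in Python. The services-tier host, credentials, project name, archive path, and exact option spellings are assumptions, not the actual environment.

```python
import subprocess

# Hypothetical values only; the real services-tier host, engine, project, and
# credentials are placeholders for illustration.
cmd = [
    "istool", "export",
    "-domain", "isdomain:9080",                  # Information Server services tier
    "-username", "dsadm",
    "-password", "********",
    "-archive", "/tmp/datastage_jobs.isx",       # target .isx archive
    "-datastage", "dsengine/PROJECT_NAME/*/*.*", # DataStage assets to export
]

# Run the export and surface the CLI output so failures are visible in the log.
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    raise RuntimeError(f"istool export failed: {result.stderr}")
```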
Environment: Teradata TD14, DataStage 8.5, Unix/Linux, DB2 v10.1, TWS, HPSM, Mercury ITG, SQL, BTEQ
Confidential
Data Engineering Analyst / ETL Developer
Responsibilities:
- Extensively used DataStage Designer and Teradata to build ETL processes that pull data and apply grouping techniques in the job design.
- Developed master jobs (job sequences) to control the flow of both parallel and server jobs.
- Parameterized variables rather than hardcoding values; used DataStage Director extensively to monitor job flow and processing speed.
- Based on this analysis, performed performance tuning to improve job processing speed.
- Developed Autosys jobs for scheduling, including box jobs, command jobs, file watcher jobs, and creating ITG requests.
- Closely monitored schedules and investigated failures to complete all ETL/load processes within the SLA.
- Designed and developed SQL scripts and extensively used Teradata utilities such as BTEQ, FastLoad, and MultiLoad to perform bulk database loads and updates (see the sketch after this list).
- After completing ETL activities, sent the corresponding load files to the cube team for building cubes.
- Created specification documents for automating manual processes.
- Worked closely with onshore and business teams to resolve critical issues that occurred during the load process.
- Later, all UGAP jobs were migrated from Autosys to TWS; was involved in the end-to-end migration process and completed it successfully.
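A minimal sketch of the BTEQ-based bulk load pattern mentioned above, assuming placeholder logon details, database, and table names; in practice these scripts ran as standalone BTEQ/shell jobs scheduled through Autosys/TWS rather than from Python.

```python
import subprocess

# Placeholder logon, database, and table names; real values came from job parameters.
BTEQ_SCRIPT = """
.LOGON tdprod/etl_user,etl_password;

-- Refresh the staging table from the landing area, then append to the target table.
DELETE FROM stage_db.claims_stg;
INSERT INTO stage_db.claims_stg SELECT * FROM load_db.claims_landing;
INSERT INTO target_db.claims SELECT * FROM stage_db.claims_stg;

.IF ERRORCODE <> 0 THEN .QUIT 8;
.LOGOFF;
.QUIT 0;
"""

# BTEQ reads its commands from stdin; a nonzero return code marks the load as failed.
result = subprocess.run(["bteq"], input=BTEQ_SCRIPT, text=True, capture_output=True)
print(result.stdout)
if result.returncode != 0:
    raise RuntimeError(f"BTEQ load failed with return code {result.returncode}")
```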