Data Engineer Resume

TECHNICAL SKILLS

Source Control: Git, Azure DevOps, GitHub

Cloud Services: Snowflake, Azure SQL Data Warehouse, Google BigQuery, Azure Databricks

Data Orchestration: Snowpipe, Azure Data Factory, StreamSets

ETL: Oracle PL/SQL, Informatica

Cloud Platforms: AWS, Azure

Languages: Python, Oracle PL/SQL

Scripting & Other Tools: Unix Shell (ksh, bash), PuTTY, SnowSQL, QlikView, PowerShell, Airflow

PROFESSIONAL EXPERIENCE

Data Engineer

Confidential

Responsibilities:

  • Worked on a POC covering Snowflake, Azure SQL Data Warehouse, and Google BigQuery to understand the functionality of the different cloud DWaaS providers and presented results to senior management to support decision making.
  • Worked with enterprise architects on solution design for the POC and on the EDW cloud journey.
  • Worked with the team on decisions about how to migrate data from on-prem to the cloud and which tools to use for ETL or ELT in the cloud.
  • Converted and reviewed code migrated from Oracle PL/SQL to Snowflake, made performance changes, and tested.
  • Performed load testing using JMeter and performance testing to ensure Snowflake could handle the real-time load seen in the EDW; compared on-prem vs. cloud on various parameters.
  • Created notebooks to load XML files into Azure SQL Data Warehouse using Azure Databricks (see the first sketch after this list).
  • Set up a DevOps CI/CD build and release pipeline using Python code to run test queries (see the second sketch after this list).
  • Involved in setting up repositories, branch policies, and automated test cases in Azure DevOps.
  • Set up data pipelines in StreamSets to copy data from Oracle to Snowflake.
  • Created dashboards in QlikView on the Snowflake cost model and usage.
  • Test-ran Spark code using the Snowflake native connector as part of the POC.
  • Created a Python program to handle PL/SQL constructs such as cursors and loops that are not supported by Snowflake (see the third sketch after this list).
  • Trained other developers on Snowflake and provided support as needed.
  • Prepared documentation and presentations on the future roadmap of the EDW cloud and how it will integrate with other input and output sources such as Informatica, mainframe, QlikView, and Power BI.
  • Built pipelines in Azure Data Factory to move data from on-prem to Azure SQL Data Warehouse and from Amazon S3 buckets to Azure Blob Storage.
  • Administered Azure SQL Data Warehouse: creating users and roles, setting cost alerts, and whitelisting IP addresses (see the fourth sketch after this list).
  • Received security alerts from DivvyCloud for Azure and ensured they were fixed in the Azure subscription.
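
First sketch: a minimal example of the kind of Databricks notebook cell described above, assuming the spark-xml package and the Azure SQL DW connector are attached to the cluster; the row tag, paths, JDBC URL, and table name are hypothetical placeholders.

    # Read XML files with the spark-xml package ("record" is a hypothetical
    # row tag; the source path is a hypothetical DBFS mount).
    raw = (spark.read
           .format("com.databricks.spark.xml")
           .option("rowTag", "record")
           .load("/mnt/landing/xml/"))

    # Write to Azure SQL Data Warehouse via the SQL DW connector, staging
    # through a hypothetical blob container.
    (raw.write
        .format("com.databricks.spark.sqldw")
        .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=edw")
        .option("forwardSparkAzureStorageCredentials", "true")
        .option("dbTable", "stg.xml_records")
        .option("tempDir", "wasbs://staging@mystorageacct.blob.core.windows.net/tmp")
        .mode("append")
        .save())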
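
Second sketch: a minimal example of a pipeline test step of the sort described above, using the Snowflake Python connector; the environment variables, warehouse, and test table are hypothetical and would normally come from pipeline secrets.

    import os
    import snowflake.connector

    # Connect using credentials injected by the CI/CD pipeline (hypothetical env vars).
    conn = snowflake.connector.connect(
        account=os.environ["SF_ACCOUNT"],
        user=os.environ["SF_USER"],
        password=os.environ["SF_PASSWORD"],
        warehouse="TEST_WH",
        database="EDW",
    )

    # Run a smoke-test query and fail the build if it returns no rows.
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM STG.CUSTOMER")
    assert cur.fetchone()[0] > 0, "Smoke test failed: STG.CUSTOMER is empty"
    conn.close()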
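
Third sketch: a minimal example of porting a PL/SQL cursor loop to Python, as described above; the row-by-row logic moves into the client via the Snowflake connector, and the table, columns, and statuses are hypothetical.

    import os
    import snowflake.connector

    conn = snowflake.connector.connect(
        account=os.environ["SF_ACCOUNT"],
        user=os.environ["SF_USER"],
        password=os.environ["SF_PASSWORD"],
        warehouse="ETL_WH",
        database="EDW",
    )
    cur = conn.cursor()

    # Equivalent of a PL/SQL cursor FOR loop: fetch rows, apply per-row
    # conditional logic, and issue follow-up DML as needed.
    cur.execute("SELECT ORDER_ID, STATUS FROM ORDERS WHERE LOAD_DATE = CURRENT_DATE")
    for order_id, status in cur:
        if status == 'PENDING':
            conn.cursor().execute(
                "UPDATE ORDERS SET STATUS = 'PROCESSED' WHERE ORDER_ID = %s",
                (order_id,),
            )
    conn.close()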
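
Fourth sketch: a minimal example of the user and role administration described above, run from Python with pyodbc; the server, credentials, and principal names are hypothetical, and the T-SQL is standard contained-user security DDL.

    import pyodbc

    # Connect as an admin (hypothetical server and credentials); autocommit
    # avoids wrapping security DDL in an explicit transaction.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver.database.windows.net;DATABASE=edw;"
        "UID=sqladmin;PWD=placeholder",
        autocommit=True,
    )
    cur = conn.cursor()

    # Create a contained database user and grant read access via a built-in role.
    cur.execute("CREATE USER [report_user] WITH PASSWORD = 'S7rong!Passw0rd';")
    cur.execute("ALTER ROLE db_datareader ADD MEMBER [report_user];")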

ETL Developer / Tech Lead

Confidential

Responsibilities:

  • Gathered requirements and built an understanding of the functional specifications through regular face-to-face interaction.
  • Prepared data flow diagrams and data models and designed the ETL.
  • Set up a CI/CD pipeline for build and release of Oracle SQL code and ran automated jobs using UFT.
  • Designed, validated, and transformed data from various sources.
  • Wrote and validated complex SQL procedures and blocks to load data into the data warehouse (see the sketch after this list).
  • Wrote SQL scripts to apply the transformation logic.
  • Identified performance bottlenecks in the design and evaluated and proposed alternative solutions.
  • Reviewed work to ensure all quality prerequisites for delivery were met; coordinated with business users on user acceptance testing of the developed application.
  • Led the team onsite and provided support to offshore.
  • Designed the scheduling document for jobs created in Dollar Universe (UniViewer).
  • Monitored long-running critical jobs and provided solutions to reduce run times (performance tuning).
  • Analyzed data and provided support to business users during the UAT phase and post go-live.
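
A minimal sketch of how such a load procedure might be invoked and validated from Python with cx_Oracle; the connect string, procedure, table, and business date are hypothetical.

    import datetime
    import cx_Oracle

    business_date = datetime.date(2018, 6, 30)

    # Hypothetical credentials and EZConnect string.
    conn = cx_Oracle.connect("etl_user", "etl_password", "dbhost/DWPDB")
    cur = conn.cursor()

    # Invoke a hypothetical PL/SQL load procedure for one business date.
    cur.callproc("dw_load_pkg.load_sales", [business_date])

    # Validate the load: the target table should have rows for that date.
    cur.execute("SELECT COUNT(*) FROM dw_sales WHERE load_date = :d", d=business_date)
    assert cur.fetchone()[0] > 0, "Load validation failed: no rows loaded"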

PL/SQL Oracle Developer

Confidential

Responsibilities:

  • Involved in the software development life cycle for the project.
  • Customized front-end screens for various modules in Finacle.
  • Worked on the LMS (Liquidity Management System) module in Finacle; created automatic sweep code along with PL/SQL procedures.
  • Designed and provided solutions to the client; created functional documents from requirements.
  • Wrote complex PL/SQL procedures and functions to extract data in the required format for interfaces (see the sketch after this list).
  • Tracked issues raised in JIRA and managed SLAs to ensure none were missed during UAT and post-production support.
  • Designed job scheduling in IBM Tivoli.
  • Tuned the performance of long-running jobs in the database.
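
A minimal sketch of consuming such an extraction procedure from Python with cx_Oracle, assuming it returns rows through an OUT ref cursor; all names are hypothetical.

    import cx_Oracle

    # Hypothetical credentials and EZConnect string.
    conn = cx_Oracle.connect("app_user", "app_password", "dbhost/FINDB")
    cur = conn.cursor()

    # A hypothetical procedure that opens a ref cursor of interface-ready rows.
    ref_cur = conn.cursor()
    cur.callproc("iface_pkg.get_sweep_extract", [ref_cur])

    # Write the extract in a pipe-delimited format an interface might expect.
    with open("sweep_extract.txt", "w") as out:
        for row in ref_cur:
            out.write("|".join(str(col) for col in row) + "\n")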
