
Azure Data Engineer Resume

PROFESSIONAL EXPERIENCE

Azure Data Engineer

Confidential

Responsibilities:

  • Designed and developed data models, data structures, and ETL jobs for data acquisition and manipulation purposes.
  • Developed a deep understanding of the data sources; implemented data standards and maintained data quality and master data management.
  • Expert in developing JSON definitions for deploying data-processing pipelines in Azure Data Factory (ADF).
  • Expert in using Databricks with Azure Data Factory (ADF) to process large volumes of data.
  • Performed ETL operations in Azure Databricks by connecting to different relational database source systems using JDBC connectors.
  • Developed Python scripts to do file validations in Databricks and automated the process using ADF.
  • Developed an automated process in Azure that ingests data daily from a web service and loads it into Azure SQL DB.
  • Developed streaming pipelines using Azure Event Hubs and Stream Analytics to analyze dealer efficiency and open-table counts from data coming in from IoT-enabled poker and other pit tables.
  • Analyzed data in place by mounting Azure Data Lake and Blob Storage in Databricks.
  • Used Logic Apps to take conditional actions based on workflow state.
  • Developed custom alerts using Azure Data Factory, Azure SQL DB, and Logic Apps.
  • Developed Databricks ETL pipelines using notebooks, Spark DataFrames, Spark SQL, and Python scripting.
  • Developed complex SQL queries using stored procedures, common table expressions (CTEs), and temporary tables to support Power BI reports.
  • Implemented complex business logic through T-SQL stored procedures, functions, views, and advanced query concepts.
  • Worked with enterprise Data Modeling team on creation of Logical models.
  • Development-level experience in Microsoft Azure, providing data movement and scheduling functionality for cloud services such as Azure Blob Storage and Azure SQL Database.
  • Independently managed ETL processes from development to delivery.
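The Databricks file-validation scripts mentioned above can be sketched in plain Python; the required column names and checks here are hypothetical stand-ins for each feed's actual data contract:

```python
import csv
import io

# Hypothetical required columns; in practice these come from each feed's contract.
REQUIRED_COLUMNS = {"customer_id", "event_date", "amount"}

def validate_feed(text: str) -> list[str]:
    """Validate a comma-delimited feed before loading.

    Checks that the header contains the required columns and that no data
    row is ragged (wrong field count). Returns a list of human-readable
    error strings; an empty list means the file passed validation.
    """
    errors: list[str] = []
    rows = list(csv.reader(io.StringIO(text)))
    if not rows:
        return ["file is empty"]
    header = rows[0]
    missing = REQUIRED_COLUMNS - set(header)
    if missing:
        errors.append(f"missing columns: {sorted(missing)}")
    for i, row in enumerate(rows[1:], start=2):
        if len(row) != len(header):
            errors.append(f"row {i}: expected {len(header)} fields, got {len(row)}")
    return errors
```

In an ADF-orchestrated setup, a notebook would run this over each landed file and fail the pipeline activity (raising on a non-empty error list) so the failure path in the pipeline can route to the alerting Logic App.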

Azure Data Engineer/ETL Developer

Confidential, New Brunswick, NJ

Responsibilities:

  • Involved in requirements gathering, business analysis, design, development, testing, and implementation of business rules.
  • Understood business use cases and integration needs; wrote business and technical requirements documents, logic diagrams, process flow charts, and other application-related documents.
  • Used Pandas in Python for Data Cleansing and validating the source data.
  • Designed and developed an ETL pipeline in Azure that retrieves customer data from an API and processes it into Azure SQL DB.
  • Orchestrated all Data pipelines using Azure Data Factory and built a custom alerts platform for monitoring.
  • Created custom alerts queries in Log Analytics and used Web hook actions to automate custom alerts.
  • Created Databricks job workflows that extract data from SQL Server and upload the files to SFTP using PySpark and Python.
  • Used Azure Key Vault as the central repository for secrets, referencing them in Azure Data Factory and in Databricks notebooks.
  • Built Teradata ELT frameworks that ingest data from different sources using Teradata legacy load utilities.
  • Built a common SFTP download/upload framework using Azure Data Factory and Databricks.
  • Maintained and supported the Teradata architectural environment for EDW applications.
  • Involved in the full project lifecycle, including requirements gathering, system design, application development, enhancement, deployment, maintenance, and support.
  • Involved in logical modeling, physical database design, data sourcing and data transformation, data loading, SQL and performance tuning.
  • Provided project development estimates to the business and, upon agreement, delivered projects accordingly.
  • Created appropriate Teradata Primary Indexes (PIs), taking into consideration both planned access of data and even distribution of data across all available AMPs; considering both the business requirements and these factors, created appropriate Teradata NUSIs for fast, easy access of data.
  • Developed data extraction, transformation, and loading jobs from flat files, Oracle, SAP, and Teradata sources into Teradata using BTEQ, FastLoad, FastExport, MultiLoad, and stored procedures.
  • Designed process-oriented UNIX scripts and ETL processes for loading data into the data warehouse.
  • Developed mappings in Informatica to load data from various sources into the data warehouse, using transformations such as Source Qualifier, Expression, Lookup, Aggregator, Update Strategy, and Joiner.
  • Worked on Informatica advanced concepts, including implementation of pushdown optimization and pipeline partitioning.
  • Performed bulk data loads from multiple data sources (Oracle 8i, legacy systems) into Teradata RDBMS using BTEQ, MultiLoad, and FastLoad.
  • Used transformations such as Source Qualifier, Aggregator, Lookup, Filter, Sequence Generator, Router, Update Strategy, Expression, Sorter, Normalizer, Stored Procedure, and Union.
  • Used Informatica PowerExchange to handle change data capture (CDC) data from the source and load it into the data mart following a slowly changing dimension (SCD) Type II process.
  • Used PowerCenter Workflow Manager to create workflows and sessions, and used tasks such as Command, Event Wait, Event Raise, and Email.
  • Designed, created and tuned physical database objects (tables, views, indexes, PPI, UPI, NUPI, and USI) to support normalized and dimensional models.
  • Created a cleanup process for removing all the Intermediate temp files that were used prior to the loading process.
  • Used volatile tables and derived queries to break complex queries into simpler ones.
  • Responsible for performance monitoring, resource and priority management, space management, user management, index management, access control, and execution of disaster recovery procedures.
  • Used Python and shell scripts to automate Teradata ELT and admin activities.
  • Performed application-level DBA activities such as creating tables and indexes, and monitored and tuned Teradata BTEQ scripts using the Teradata Visual Explain utility.
  • Performance tuning, monitoring, UNIX shell scripting, and physical and logical database design.
  • Developed UNIX scripts to automate different tasks involved as part of loading process.
  • Worked with Tableau for reporting needs.
  • Created Tableau dashboard reports and heat map charts, and supported numerous dashboards, pie charts, and heat maps built on the Teradata database.
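The Python-driven Teradata ELT automation described above can be sketched as a small wrapper that renders a BTEQ load script and pipes it to the `bteq` utility; the TDPID, file, table, and column names below are hypothetical:

```python
import subprocess

def render_bteq_load(tdpid, user, password_ref, src_file, table, columns):
    """Render a BTEQ script that bulk-inserts a delimited file into a table.

    columns: list of target column names; each input field is read as VARTEXT.
    password_ref: a credential reference (e.g. a wallet/env placeholder),
    never a literal password embedded in the script.
    """
    using = ", ".join(f"c{i} VARCHAR(255)" for i in range(len(columns)))
    values = ", ".join(f":c{i}" for i in range(len(columns)))
    cols = ", ".join(columns)
    return "\n".join([
        f".LOGON {tdpid}/{user},{password_ref};",
        f".IMPORT VARTEXT ',' FILE = {src_file};",
        f"USING ({using})",
        f"INSERT INTO {table} ({cols}) VALUES ({values});",
        ".QUIT;",
    ])

def run_bteq(script):
    """Pipe the rendered script into the bteq CLI (requires Teradata Tools)."""
    return subprocess.run(["bteq"], input=script, text=True,
                          capture_output=True, check=True)
```

A shell or Python scheduler can call `render_bteq_load` per feed and hand the result to `run_bteq`, keeping the load logic in one templated place instead of many hand-edited scripts.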

ETL Developer/Teradata Consultant

Confidential

Responsibilities:

  • Maintained and supported the Teradata architectural environment for EDW applications.
  • Interacted with business community and gathered requirements based on changing needs.
  • Designed the logical data model using Erwin and transformed the logical model to a physical database using PowerDesigner.
  • Performed data mart and dimensional modeling, including star and snowflake schemas, for the data warehouse.
  • Developed mappings/scripts to extract data from Oracle, flat files, SQL Server, and DB2 and load it into the data warehouse using Mapping Designer, BTEQ, FastLoad, and MultiLoad.
  • Exported data from the Teradata database using FastExport and BTEQ.
  • Wrote SQL Queries, Triggers, Procedures, Macros, Packages and Shell Scripts to apply and maintain the Business Rules.
  • Coded and implemented PL/SQL packages to perform batch job scheduling.
  • Performed Teradata and Informatica tuning to improve the performance of the Load.
  • Performed error handling using error tables and log files.
  • Used Informatica Designer to create complex mappings using different transformations like Filter, Router, Connected & Unconnected lookups, Stored Procedure, Joiner, Update Strategy, Expressions and Aggregator transformations to pipeline data to Data Warehouse.
  • Performed DML and DDL operation with the help of SQL transformation in Informatica.
  • Collaborated with the Informatica admin on the upgrade from PowerCenter 7.1 to PowerCenter 8.1.
  • Used the SQL Transformation for sequential loads in Informatica PowerCenter ETL processes.
  • Worked closely with the business analysts' team to resolve problem tickets and service requests; helped the 24/7 production support team.
  • Developed mappings in Informatica to load data from various sources into the data warehouse, using transformations such as Source Qualifier, Expression, Lookup, Aggregator, Update Strategy, and Joiner.
  • Worked on Informatica advanced concepts, including implementation of pushdown optimization and pipeline partitioning.
  • Performed bulk data loads from multiple data sources (Oracle 8i, legacy systems) into Teradata RDBMS using BTEQ, MultiLoad, and FastLoad.
  • Used transformations such as Source Qualifier, Aggregator, Lookup, Filter, Sequence Generator, Router, Update Strategy, Expression, Sorter, Normalizer, Stored Procedure, and Union.
  • Used Informatica PowerExchange to handle change data capture (CDC) data from the source and load it into the data mart following a slowly changing dimension (SCD) Type II process.
  • Used PowerCenter Workflow Manager to create workflows and sessions, and used tasks such as Command, Event Wait, Event Raise, and Email.
  • Designed, created and tuned physical database objects (tables, views, indexes, PPI, UPI, NUPI, and USI) to support normalized and dimensional models.
  • Created a cleanup process for removing all the Intermediate temp files that were used prior to the loading process.
  • Used volatile tables and derived queries to break complex queries into simpler ones.
  • Worked with Tableau for reporting needs.
  • Created Tableau dashboard reports and heat map charts, and supported numerous dashboards, pie charts, and heat maps built on the Teradata database.
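The SCD Type II process used with the CDC feed above can be illustrated in plain Python (the Informatica mapping implements the same expire-and-insert logic); the dimension key and tracked attribute names here are hypothetical:

```python
from datetime import date

HIGH_DATE = date(9999, 12, 31)  # open-ended end date marking the current row

def apply_scd2(dim, incoming, load_date):
    """Apply a Slowly Changing Dimension Type II change set.

    dim      -- existing dimension rows: dicts with natural key 'id',
                tracked attribute 'city', and 'eff_from'/'eff_to' dates.
    incoming -- latest source rows: dicts with 'id' and 'city'.
    Returns a new row list: changed rows are expired as of load_date and
    new versions inserted; unchanged rows pass through untouched.
    """
    out = [r.copy() for r in dim]
    current = {r["id"]: r for r in out if r["eff_to"] == HIGH_DATE}
    for src in incoming:
        cur = current.get(src["id"])
        if cur is None:
            # brand-new natural key: insert as the current version
            out.append({"id": src["id"], "city": src["city"],
                        "eff_from": load_date, "eff_to": HIGH_DATE})
        elif cur["city"] != src["city"]:
            # attribute changed: expire the old version, add a new one
            cur["eff_to"] = load_date
            out.append({"id": src["id"], "city": src["city"],
                        "eff_from": load_date, "eff_to": HIGH_DATE})
    return out
```

Keeping the full version history this way is what lets the warehouse answer "what was this customer's city on a given date," which a Type I overwrite would lose.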
