
Azure/Cloud Data Engineer Resume

SUMMARY

  • Dedicated IT professional with 14 years of experience in the design, development, and implementation of large-scale distributed applications. Significant work with Azure Data Factory, Azure Data Lake, Azure Databricks, Azure Cosmos DB, and the Hadoop ecosystem: HDFS, Hive, Pig, HBase, Sqoop, Spark, and Scala.
  • 2+ years of experience with the Microsoft Azure cloud, including data storage (Azure Data Lake, Azure Blob Storage, Azure Cosmos DB), messaging systems (Azure Event Hubs, Azure IoT Hub), and data processing engines (Azure Data Lake Analytics, Azure HDInsight).
  • 5+ years of experience in the Hadoop/Big Data ecosystem: HDFS, Hive, Pig, Sqoop, Spark, and Scala.
  • Experience in architecting, developing, and delivering solutions on the Azure data analytics platform, including Azure Data Lake, Azure Databricks, Azure HDInsight, Azure Cosmos DB, Azure Data Factory, Azure Logic Apps, Azure Functions, Azure Storage & Azure SQL Data Warehouse.
  • Developed methodologies for cloud migration, implemented best practices, and helped develop backup and recovery techniques for applications and databases on virtualization platforms.
  • Experience in manipulating/analyzing large datasets and finding patterns and insights within structured and unstructured data.
  • Develop information processes for data acquisition, data transformation, data migration, data verification, data modeling, and data mining.
  • Experience in Data Security & Platform Security designs.
  • Monitor Azure infrastructure through System Center Operations Manager (SCOM).
  • Expertise in Microsoft Technologies including C#.Net, WPF, jQuery & JavaScript.
  • Good knowledge on Test Processes, Test Automation, Performance Tuning, Unit test Frameworks and Test-Driven Development.
  • Participated in Agile/Scrum ceremonies (user stories, sprint planning, retrospectives).
  • Quick learner with a proven ability to work independently.
  • Excellent organizational skills with a proven ability to manage multiple assignments and priorities while delivering high-quality results.

TECHNICAL SKILLS

Azure Cloud: Azure Data Lake, Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Synapse, Azure Cosmos DB, Azure Active Directory (AD), Azure Functions, Azure Event Hubs, Azure Web App, Azure App Store, Terraform, Kubernetes & Docker.

Programming: Spark, Scala, Unix Shell Scripting, PowerShell, Visual Studio & C#.Net

Data & Analytics: Data Migration, Data Integration, Data Visualization & Analytic Problem-Solving

PROFESSIONAL EXPERIENCE

Confidential

Azure/Cloud Data Engineer

Responsibilities:

  • Analyze, design, build & deploy modern data solutions using Microsoft Azure PaaS/SaaS services to support visualization of data. Understand the current production state of the application and determine the impact of new implementations on existing business processes.
  • Responsible for building scalable distributed data solutions using Azure Data Lake, Azure Databricks, Azure HDInsight & Azure Cosmos DB.
  • Develop Spark applications using Spark SQL in Azure Databricks to extract, transform, and aggregate data from multiple file formats, uncovering insights into customer usage patterns (see the sketch after this list).
  • Write transformation logic to extract real-time data, store it in Azure Data Lake, and process it.
  • Coordinate with business teams (Membership, Retail, Finance, and Claims) to process their data through the data lake and store it for future use.
  • Design & develop an audit framework for inbound & outbound data transfers.
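
The Spark SQL work in Azure Databricks can be illustrated with a minimal sketch along these lines; the storage account, container paths, and column names (member_id, claim_date, claim_amount) are hypothetical placeholders rather than the actual project artifacts.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object UsageAggregation {
  def main(args: Array[String]): Unit = {
    // In Azure Databricks a SparkSession is already provided as `spark`;
    // getOrCreate() simply reuses it when this runs as a notebook or job.
    val spark = SparkSession.builder().appName("UsageAggregation").getOrCreate()

    // Hypothetical ADLS Gen2 locations (placeholder account/container names).
    val rawCsv     = "abfss://raw@storageacct.dfs.core.windows.net/claims/csv/"
    val rawParquet = "abfss://raw@storageacct.dfs.core.windows.net/membership/parquet/"
    val curated    = "abfss://curated@storageacct.dfs.core.windows.net/usage_summary/"

    // Extract: two source feeds arriving in different file formats.
    val claims = spark.read.option("header", "true").csv(rawCsv)
      .withColumn("claim_amount", col("claim_amount").cast("double"))
    val membership = spark.read.parquet(rawParquet)

    // Transform: join the feeds and aggregate usage per member and month.
    val usage = claims
      .join(membership, Seq("member_id"))
      .withColumn("month", date_format(col("claim_date"), "yyyy-MM"))
      .groupBy("member_id", "month")
      .agg(count("*").as("claim_count"), sum("claim_amount").as("total_amount"))

    // Load: write the curated aggregate back to the data lake, partitioned by month.
    usage.write.mode("overwrite").partitionBy("month").parquet(curated)
  }
}
```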

Environment: Azure Data Lake, Azure Data Factory, Azure Databricks, Azure Cosmos DB, Azure Stream Analytics, HDFS, Hive, Sqoop, Spark, Scala, Shell Script

Confidential

Hadoop Developer

Responsibilities:

  • Design and development of commercial data service applications.
  • Develop multiple Hive external tables with partitions to load staging data dynamically and analyze it using Hive queries (see the sketch after this list).
  • Develop Spark applications using Spark SQL to extract, transform, and aggregate data from multiple file formats, uncovering insights into customer usage patterns.
  • Tune Spark applications for the right batch interval, the correct level of parallelism, and memory usage.
  • Design & develop a Sqoop script that imports data from DB2 into Hive external tables.
  • Design & develop an audit framework for inbound & outbound data transfers.
  • Analyze YARN ResourceManager metrics for performance.
  • Participate in daily Scrum meetings and iterative development.
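
A minimal sketch of the partitioned Hive staging pattern, expressed through Spark's Hive support to stay in Scala; the database, table, column names, and HDFS location are placeholder assumptions, and raw.orders_landing stands in for a Sqoop-imported landing table.

```scala
import org.apache.spark.sql.SparkSession

object StagingTables {
  def main(args: Array[String]): Unit = {
    // Hive support lets Spark create and query Hive external tables directly.
    val spark = SparkSession.builder()
      .appName("StagingTables")
      .enableHiveSupport()
      .getOrCreate()

    // Allow fully dynamic partition inserts.
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")

    // Hypothetical external staging table partitioned by load date.
    spark.sql("CREATE DATABASE IF NOT EXISTS staging")
    spark.sql("""
      CREATE EXTERNAL TABLE IF NOT EXISTS staging.orders (
        order_id    BIGINT,
        customer_id BIGINT,
        amount      DECIMAL(12,2)
      )
      PARTITIONED BY (load_date STRING)
      STORED AS ORC
      LOCATION '/data/staging/orders'
    """)

    // Dynamic-partition load from a raw landing table (e.g. Sqoop-imported from DB2).
    spark.sql("""
      INSERT OVERWRITE TABLE staging.orders PARTITION (load_date)
      SELECT order_id, customer_id, amount, load_date
      FROM raw.orders_landing
    """)

    // Analyze the staged data with a Hive query.
    spark.sql("""
      SELECT load_date, COUNT(*) AS orders, SUM(amount) AS revenue
      FROM staging.orders
      GROUP BY load_date
      ORDER BY load_date
    """).show()
  }
}
```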

Environment: Hortonworks, YARN, Hive, Pig, Sqoop, Spark, Scala, Unix Shell Script

Confidential

Senior Software Engineer

Responsibilities:

  • Involved in all phases of the SDLC, including requirements gathering, design & analysis of client specifications, and development and deployment of the Ulysse application.
  • Interacted with customers to understand and analyze requirements and to estimate effort for design, coding, and testing.
  • Implemented UI designs using HTML and CSS.
  • Analyzed data that had previously gone unused because of its structure or volume.
  • Used a third-party library for multi-language support.
  • Fixed software defects in JavaScript and CSS.
  • Followed Agile development and consistently delivered new features on time during sprints.
  • Maintained close communication with UI/UX team to enhance product quality.

Environment: C#.NET, MVC, JavaScript, SQL Server 2005, HTML, CSS
