Platform Engineering Lead Resume

Sunnyvale, CA

SUMMARY:

  • 11 years of professional IT experience in the development, testing, and support of web-based applications, including 5 years in the Big Data ecosystem.
  • Working in product development as a Platform Engineering Lead.
  • Member of a Research and Development group, providing input on building the platform from open-source Hadoop components.
  • Strong skills in developing applications with Big Data technologies such as Hadoop, Spark, MapReduce, YARN, Hive, Pig, Kafka, Oozie, Sqoop, Hue, Zeppelin, Hortonworks, Cloudera, and Scala.
  • Experience in designing and implementing a data ingestion framework to load data into the data lake for analytical purposes.
  • Good knowledge of ETL development for processing large-scale data on a Big Data platform.
  • Developed data pipelines using Hive, Pig, and MapReduce.
  • Experienced in integrating Kafka with Spark Streaming for high-speed log data processing.
  • Hands-on experience in writing MapReduce jobs.
  • Experience administering clusters in the Hadoop ecosystem.
  • Experience in installing and configuring Hadoop clusters for the major Hadoop distributions.
  • Designed a reporting application that uses Spark SQL to fetch data from Hive and generate reports (see the sketch after this list).
  • Good experience in analyzing data using Hive; have written User Defined Functions (UDFs) in Java.
  • Designed and implemented best practices for cloud-based cluster deployments of Hadoop, Spark, and other Big Data ecosystem tools.
  • Experience in developing REST services using Spring Boot to submit Hadoop jobs to the cluster.
  • Experience in migrating SQL and PL/SQL queries to Hive queries.
  • Experience working with NoSQL databases such as MongoDB to store Hadoop and Spark job metadata, config files, etc.
  • Experience in building Scala and Java code for Spark Streaming and deploying it with Maven and Gradle across environments.
  • Extensively used visualization tools such as Tableau and Pentaho.
  • Extensive experience in writing UNIX shell scripts and automating ETL processes with them.
  • Experience in developing a framework for ad-hoc queries to generate reports.
  • Involved in performance tuning of ETL processes, addressing performance issues at the extraction and transformation stages.
  • Experience working with Bitbucket, JIRA, JAMA, and JUnit.
  • Expert in developing multi-tier web applications using Java/J2EE, design patterns, Struts, and Hibernate.
  • Experience working in both Agile and Waterfall development methodologies.
  • Software developer in Java application development, client/server applications, and internet/intranet-based database applications; experienced in developing, testing, and implementing application environments using J2EE, JDBC, JSP, Servlets, Web Services, Oracle, PL/SQL, and relational databases.
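As an illustration of the Spark SQL reporting pattern above, here is a minimal sketch in Java. The Hive table (sales), its columns (region, amount), and the output path are hypothetical placeholders, not details from any specific engagement.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class HiveReportJob {
        public static void main(String[] args) {
            // Hive support lets Spark SQL read tables registered in the Hive metastore.
            SparkSession spark = SparkSession.builder()
                    .appName("HiveReportJob")
                    .enableHiveSupport()
                    .getOrCreate();

            // Aggregate a hypothetical "sales" Hive table into a per-region report.
            Dataset<Row> report = spark.sql(
                    "SELECT region, SUM(amount) AS total_amount "
                  + "FROM sales GROUP BY region ORDER BY total_amount DESC");

            // Persist the report to HDFS for downstream consumers.
            report.write().mode("overwrite").parquet("/reports/sales_by_region");

            spark.stop();
        }
    }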

PROFESSIONAL EXPERIENCE:

Confidential, Sunnyvale, CA

Platform Engineering Lead

Responsibilities:

  • Developed a data ingestion framework using Spark to read data from Kafka topics and store it in HDFS (see the sketch after this list).
  • Developed JUnit test cases for unit testing the ingestion framework.
  • Worked as the onshore lead, managing the offshore team.
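A minimal sketch of this kind of Kafka-to-HDFS ingestion, using Spark Structured Streaming as one common way to implement it; the topic name, broker address, and paths below are hypothetical.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;

    public class KafkaToHdfsIngest {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder()
                    .appName("KafkaToHdfsIngest")
                    .getOrCreate();

            // Subscribe to a hypothetical "events" topic; Kafka records arrive as
            // binary key/value columns, so cast the value to a string payload.
            Dataset<Row> events = spark.readStream()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", "broker1:9092")
                    .option("subscribe", "events")
                    .load()
                    .selectExpr("CAST(value AS STRING) AS payload");

            // Append each micro-batch to HDFS as Parquet; the checkpoint directory
            // lets the query recover its progress after restarts.
            StreamingQuery query = events.writeStream()
                    .format("parquet")
                    .option("path", "hdfs:///data/landing/events")
                    .option("checkpointLocation", "hdfs:///checkpoints/events")
                    .start();

            query.awaitTermination();
        }
    }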

Skills: Spark, Scala, Kafka, Spring Boot, Gradle, HDFS, MongoDB.

Confidential, Plano, TX

Platform Engineering Lead

Responsibilities:

  • Responsible for ingesting data from various sources, such as Oracle, SQL, and Postgres, into the data lake (see the sketch after this list).
  • Responsible for implementing a real-time streaming application using Kafka and Spark.
  • Responsible for client interfacing in support activities.
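One plausible shape for the database-to-data-lake ingestion described above is a Spark JDBC read; the sketch below uses Postgres with hypothetical connection details, credentials, and table names. A Sqoop import would be an equally valid route for batch loads.

    import java.util.Properties;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class JdbcToDataLake {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("JdbcToDataLake")
                    .getOrCreate();

            // Hypothetical Postgres connection details; the Postgres JDBC driver
            // must be on the Spark classpath (e.g. supplied via --jars).
            Properties props = new Properties();
            props.setProperty("user", "etl_user");
            props.setProperty("password", "secret");
            props.setProperty("driver", "org.postgresql.Driver");

            // Read a source table over JDBC and land it in the lake as Parquet.
            Dataset<Row> customers = spark.read()
                    .jdbc("jdbc:postgresql://db-host:5432/crm", "customers", props);

            customers.write().mode("overwrite").parquet("hdfs:///data/lake/customers");

            spark.stop();
        }
    }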

Confidential, Billerica, MA

Platform Engineering Lead

Responsibilities:

  • Responsible for writing Hive queries to perform transformations once the data is ingested into the data lake.
  • Responsible for writing a Java program that uses a REST API to retrieve data from Salesforce, insert it into the data lake, and perform calculations for visualization in QlikSense.
  • Responsible for writing a MapReduce Java program that processes the input CSV files into a generic format and stores them in the data lake for reporting purposes (see the sketch after this list).
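A minimal sketch of such a map-only MapReduce job that rewrites input CSV rows into a generic delimited layout; the three-column schema and the pipe-delimited target format are assumptions for illustration.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CsvNormalizeJob {

        // Map-only job: each input CSV line is rewritten into a generic
        // pipe-delimited layout (id|name|amount) assumed for the data lake.
        public static class NormalizeMapper
                extends Mapper<LongWritable, Text, NullWritable, Text> {
            private final Text out = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split(",", -1);
                if (fields.length < 3) {
                    return; // skip malformed rows
                }
                out.set(fields[0].trim() + "|" + fields[1].trim() + "|" + fields[2].trim());
                context.write(NullWritable.get(), out);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "csv-normalize");
            job.setJarByClass(CsvNormalizeJob.class);
            job.setMapperClass(NormalizeMapper.class);
            job.setNumReduceTasks(0); // no aggregation needed, map-only
            job.setOutputKeyClass(NullWritable.class);
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }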

Skills: Spark, Hive, Sqoop, Tableau, Java

Confidential, Malvern, PA

Platform Engineering Lead

Responsibilities:

  • Responsible for migrating Oracle SQL and PL/SQL queries to Hive queries.
  • Responsible for implementing the ETL workflow and scheduling the daily and monthly jobs.
  • Responsible for implementing Spark code to read data from CSV files and store it in Hive (see the sketch after this list).
  • Responsible for managing the offshore team members.
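A minimal sketch of the CSV-to-Hive load in Spark; the input path and target table name are hypothetical.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SaveMode;
    import org.apache.spark.sql.SparkSession;

    public class CsvToHiveLoader {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("CsvToHiveLoader")
                    .enableHiveSupport() // register output in the Hive metastore
                    .getOrCreate();

            // Read the CSV with a header row, letting Spark infer column types.
            Dataset<Row> input = spark.read()
                    .option("header", "true")
                    .option("inferSchema", "true")
                    .csv("hdfs:///data/incoming/transactions.csv");

            // Persist as a managed Hive table so downstream Hive queries can use it.
            input.write().mode(SaveMode.Overwrite).saveAsTable("staging.transactions");

            spark.stop();
        }
    }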

Skills: Spark, Hive, Sqoop, SQL

Confidential, Charlotte, NC

Platform Engineering Lead

Responsibilities:

  • Responsible for writing MapReduce code to retrieve data from HDFS, perform calculations using Hive, and store the results back in HDFS.
  • Responsible for writing Pig macros for use in the Pig scripts.
  • Responsible for writing Hive and Pig scripts and UDFs for transformations (see the sketch after this list).
  • Responsible for designing workflows using Oozie.
  • Responsible for integrating the Oozie workflows with the Autosys scheduler.
  • Responsible for handling the offshore team.
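As an example of the transformation UDFs mentioned above, here is a minimal Pig UDF sketch in Java; the function itself, a trim-and-uppercase normalizer, is a hypothetical stand-in.

    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    // A Pig EvalFunc that trims and upper-cases a chararray field; a
    // hypothetical stand-in for the transformation UDFs used in the scripts.
    public class NormalizeString extends EvalFunc<String> {
        @Override
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null; // Pig treats null as missing data
            }
            return input.get(0).toString().trim().toUpperCase();
        }
    }

In a Pig script the compiled jar would be registered with REGISTER udfs.jar; and the function invoked as NormalizeString(field).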

Skills: HDFS, Hive, Oozie, Pig, Sqoop, MapReduce, Avro, Windows, UNIX, Autosys

Confidential, San Jose, CA

Services Analytics

Responsibilities:

  • Developed reporting applications with the Pentaho BI tool, connecting to Hive and Oracle.
  • Responsible for importing data from Oracle to Hive using Sqoop.
  • Responsible for writing automated unit test cases (see the sketch after this list).
  • Responsible for handling the team.
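A minimal sketch of the style of automated unit test written here, using JUnit 4; the helper under test is a hypothetical CSV formatter included only to make the example self-contained.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class RowFormatterTest {

        // Hypothetical helper mirroring the kind of formatting logic under test.
        static String toPipeDelimited(String csvLine) {
            String[] fields = csvLine.split(",", -1);
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < fields.length; i++) {
                if (i > 0) sb.append('|');
                sb.append(fields[i].trim());
            }
            return sb.toString();
        }

        @Test
        public void trimsFieldsAndJoinsWithPipes() {
            assertEquals("1|acme|42.50", toPipeDelimited("1, acme ,42.50"));
        }

        @Test
        public void preservesEmptyTrailingFields() {
            assertEquals("1||", toPipeDelimited("1,,"));
        }
    }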

Skills: Pentaho, HDFS, Hive, MapReduce, Sqoop, Oracle, Windows, UNIX

Confidential

Platform Engineering Lead

Responsibilities:

  • Developed the application using MapReduce, Hive, and UDFs (see the sketch after this list).
  • Responsible for writing automated unit test cases.
  • Responsible for handling the team.
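A minimal sketch of a Hive UDF in Java using the classic UDF API; the masking logic is a hypothetical stand-in for the transformation UDFs described above.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // A simple Hive UDF that masks all but the last four characters of a value.
    public final class MaskUdf extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null; // propagate SQL NULLs unchanged
            }
            String s = input.toString();
            if (s.length() <= 4) {
                return new Text(s);
            }
            StringBuilder masked = new StringBuilder();
            for (int i = 0; i < s.length() - 4; i++) {
                masked.append('*');
            }
            masked.append(s.substring(s.length() - 4));
            return new Text(masked.toString());
        }
    }

In Hive, the jar would be added with ADD JAR and the function registered via CREATE TEMPORARY FUNCTION mask_val AS 'MaskUdf' before use in queries.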

Skills: Java, HDFS, MapReduce, Hive, Oozie
