
Hadoop/Spark Developer Resume


PROFESSIONAL EXPERIENCE

Hadoop/Spark Developer

Confidential

  • 10 years of IT experience across the full System Development Life Cycle (analysis, design, development, testing, deployment, and support) using various methodologies.
  • Expert in Big Data/Hadoop, with strong skills in providing solutions to business problems using Big Data analytics.
  • Cloudera Certified Developer with 4 years of strong experience in Big Data and Hadoop Ecosystems.
  • Extensive experience in implementing, consulting on, and managing Hadoop clusters and ecosystem components such as HDFS, MapReduce, Pig, Hive, Sqoop, Flume, Oozie, Event Engine, ZooKeeper, and HBase.
  • Hands-on experience with Spark and Scala programming, with good knowledge of the Spark architecture and its in-memory processing.
  • Experience in Cloudera and MapR distributions.
  • Strong architectural experience in building large-scale distributed data processing systems, with in-depth knowledge of Hadoop architecture in both MR1 and MR2 (YARN).
  • Experience in importing and exporting data using Sqoop from HDFS to Relational Database Systems (RDBMS) and vice-versa.
  • Experience in analyzing data using HiveQL, Pig Latin, and custom MapReduce programs in Java; extended Hive and Pig core functionality with custom UDFs.
  • Expertise in implementing Spark using Scala and Spark SQL for faster testing and processing of data.
  • Real-time data streaming using Spark Streaming with Kafka.
  • Experience in NoSQL databases such as HBase and Cassandra.
  • Worked on job workflow scheduling and monitoring tools such as Oozie, Event Engine, and ZooKeeper.
  • Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs and Scala.
  • Very good understanding of static and dynamic partitions and bucketing concepts in Hive; designed both managed and external tables in Hive to optimize performance.
  • Experienced in creating API proxies and configurations using the Apigee dashboard, and in working with the API Ops team to promote changes to E2 and E3 through RFCs.
  • Experience in creating the Spring Cloud Netflix - Eureka Client and Service applications.
  • Explored Spark for improving the performance and optimization of existing algorithms in Hadoop using Spark Context, Spark SQL, DataFrames, pair RDDs, and Spark on YARN.
  • Good knowledge of Python.
  • Good knowledge of NoSQL databases such as HBase, Cassandra, CouchDB, and MongoDB.
  • In-depth understanding of Hadoop architecture and its components, such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, and MapReduce.
  • Expert in Java-based technologies including Core Java, JSP, Servlets, JDBC, Ajax, Hibernate, and web services; frameworks such as Struts and Spring; and the SDLC process.
  • Experience in web design technologies such as JSP, HTML, JavaScript, AJAX, JSON, jQuery, JSTL, and CSS.

TECHNICAL SKILLS

Big Data Ecosystems: Hadoop, MapReduce, Hive, Pig, HBase, Sqoop, Spark, Oozie, Scala, Kafka, ZooKeeper, Cassandra, YARN, Cloudera CDH, MapR.

Programming Languages: Core Java, J2EE Technologies, Python, Scala.

Scripting Languages: JavaScript, XML, HTML, and shell scripting.

Databases: MySQL, Oracle.

Tools: ServiceNow, CR Database, Log4j, Logback, Postman, CVS, Rally.

Web Technologies: JSP, JavaScript, XML, HTML, CSS and Web Services.

Methodologies: Agile, SDLC.
