
Sr Big Data Hadoop Developer Resume


SUMMARY

  • 9 years of IT experience across the full System Development Life Cycle (Analysis, Design, Development, Testing, Deployment, and Support) using various methodologies. Expert in Hadoop with strong skills in providing solutions to business problems using Big Data analytics.
  • Cloudera Certified Developer with 4+ years of strong experience in Big Data & Hadoop Ecosystems.
  • Extensive experience in implementing and managing Hadoop clusters and ecosystem components such as HDFS, MR1 & MR2 (YARN), Spark, Scala, Hive, HBase, Sqoop, Oozie, Kafka, and Zookeeper.
  • Hands-on experience in Spark and Scala programming, with good knowledge of Spark architecture and its in-memory processing.
  • Experience with different distributions such as Cloudera and MapR.
  • Strong architectural experience in building large-scale distributed data processing systems and in-depth knowledge of Hadoop architecture, MR1 & MR2 (YARN).
  • Experience using Sqoop to import and export data between HDFS and RDBMS.
  • Expertise in implementing Spark and Scala programs for faster data processing (see the Spark-Scala sketch after this list).
  • Used a Kafka Consumer origin to consume Kafka-produced data and store it in HDFS (see the Kafka-to-HDFS sketch after this list).
  • Experience in NoSQL databases such as HBase and Cassandra.
  • Worked on different job workflow scheduling and monitoring tools like Oozie, Event Engine, Tivoli.
  • Experienced with static and dynamic partitioning and bucketing concepts in Hive; designed both managed and external tables in Hive to optimize performance (see the Hive DDL sketch after this list).
  • Experienced in creating API proxies and configurations using the Apigee dashboard and working with the API Ops team to promote changes to E2 and E3 through RFCs.
  • Experienced in data profiling using Informatica Analyst for data quality check.
  • Good experience with Spark SQL, DataFrames, pair RDDs, and Spark on YARN.
  • Good knowledge of NoSQL databases such as HBase, Cassandra, CouchDB, and MongoDB.
  • In depth understanding of Hadoop Architecture and various components such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node and MapReduce.
  • Good knowledge on Amazon Web Services (AWS).
  • Solid understanding of Object-Oriented designs, MVC Architecture, Enterprise Java Beans and successful implementation of the same.
  • Experienced in testing APIs using SoapUI, Postman, and HttpMaster Express.
  • Experience in J2EE technologies such as Servlets, JDBC, and Web Services.
  • Experience with various databases, such as Oracle, DB2, and MySQL.
  • Working knowledge of different software development methodologies, such as Agile and Waterfall.
  • Extensive experience in Requirements gathering, Analysis, Design, Reviews, Coding and Code Reviews, Unit and Integration Testing.
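
A minimal Spark-Scala sketch of the kind of batch DataFrame processing described above. The application name, HDFS paths, and column names (status, customer_id, amount) are illustrative placeholders, not details from an actual project.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object CustomerAggregation {
  def main(args: Array[String]): Unit = {
    // SparkSession; cluster/resource settings are supplied via spark-submit on YARN
    val spark = SparkSession.builder()
      .appName("CustomerAggregation")
      .getOrCreate()

    // Hypothetical HDFS input path; schema is read from the Parquet footer
    val orders = spark.read.parquet("hdfs:///data/raw/orders")

    // Simple filter + aggregation using Spark SQL functions
    val totalsByCustomer = orders
      .filter(col("status") === "COMPLETED")
      .groupBy(col("customer_id"))
      .agg(sum(col("amount")).alias("total_amount"), count("*").alias("order_count"))

    // Write results back to HDFS for downstream consumers
    totalsByCustomer.write
      .mode("overwrite")
      .parquet("hdfs:///data/curated/customer_totals")

    spark.stop()
  }
}
```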
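A minimal sketch of consuming Kafka-produced data and landing it in HDFS, shown here with Spark Structured Streaming's Kafka source rather than any specific pipeline tool; broker addresses, the topic name, and HDFS paths are assumptions for illustration.

```scala
import org.apache.spark.sql.SparkSession

object KafkaToHdfs {
  def main(args: Array[String]): Unit = {
    // Requires the spark-sql-kafka connector on the classpath
    val spark = SparkSession.builder()
      .appName("KafkaToHdfs")
      .getOrCreate()

    // Subscribe to a hypothetical topic; brokers and topic are placeholders
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
      .option("subscribe", "events")
      .option("startingOffsets", "latest")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    // Land the consumed records in HDFS as Parquet; checkpointing tracks offsets
    val query = events.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/landing/events")
      .option("checkpointLocation", "hdfs:///checkpoints/events")
      .start()

    query.awaitTermination()
  }
}
```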
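A minimal sketch of the Hive table design mentioned above: an external, partitioned table over raw HDFS data and a managed, partitioned and bucketed table. The DDL is issued through spark.sql with Hive support enabled, but the same statements can run in the Hive CLI or Beeline; table names, columns, paths, and the bucket count are placeholders.

```scala
import org.apache.spark.sql.SparkSession

object HiveTableSetup {
  def main(args: Array[String]): Unit = {
    // Hive support so DDL is registered in the Hive metastore
    val spark = SparkSession.builder()
      .appName("HiveTableSetup")
      .enableHiveSupport()
      .getOrCreate()

    // External table over an existing HDFS location: dropping it keeps the data
    spark.sql(
      """CREATE EXTERNAL TABLE IF NOT EXISTS sales_raw (
        |  order_id STRING,
        |  customer_id STRING,
        |  amount DOUBLE
        |)
        |PARTITIONED BY (order_date STRING)
        |STORED AS PARQUET
        |LOCATION 'hdfs:///data/raw/sales'""".stripMargin)

    // Managed table, partitioned by date and bucketed by customer_id to help joins;
    // loading bucketed tables is typically done with a Hive INSERT ... SELECT
    // using dynamic partitioning
    spark.sql(
      """CREATE TABLE IF NOT EXISTS sales_curated (
        |  order_id STRING,
        |  customer_id STRING,
        |  amount DOUBLE
        |)
        |PARTITIONED BY (order_date STRING)
        |CLUSTERED BY (customer_id) INTO 32 BUCKETS
        |STORED AS ORC""".stripMargin)

    spark.stop()
  }
}
```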

TECHNICAL SKILLS

Big Data Ecosystems: Hadoop, MR1 & MR2 (YARN), Spark, Scala, Hive, HBase, Sqoop, Kafka, Zookeeper, Oozie, Cassandra; Distributions: CDH (Cloudera), MapR.

Programming Languages: Core Java, J2EE Technologies, Scala, Shell Scripting.

Databases: MySQL, Oracle, DB2.

Tools: Eclipse, IntelliJ, Log4j, Logback, Postman, SoapUI, WinSCP, MobaXterm, SQL Developer, CVS, Rally, JIRA, XMLSpy, GitHub.

Web Technologies: JSP, JavaScript, XML, HTML, CSS and Web Services.

Methodologies: Agile, Waterfall.
