Java Developer Resume
SUMMARY
- 2+ years of experience in Hadoop, Scala, Spark, ZooKeeper, Kafka, MapReduce, Hive, Impala, and Pig, with good knowledge of HBase.
- Good knowledge of Scala; involved in processing streaming data.
- 1+ years of experience developing web applications using Java, JDBC, Servlets, Struts, Spring, and Hibernate.
- Installed CDH in distributed mode and integrated Hadoop components with ZooKeeper and HBase.
- Good knowledge of the ZooKeeper distributed coordination service.
- Involved in installing CDH and setting up the Hadoop ecosystem.
- Hands-on experience with the Hadoop stack and MapReduce programming.
- Loaded files to HDFS and wrote MapReduce jobs to mine the data (a representative sketch follows this summary).
- Solid understanding of HDFS, the MapReduce framework, and the Hadoop ecosystem.
- Involved in projects developed using the Struts, Spring, and Hibernate frameworks.
- Proficient in Java and J2EE web-component technologies.
- Work experience with Tomcat and WebLogic servers.
- Good knowledge of design patterns.
- Highly self-motivated and adaptable, with the ability to grasp new concepts quickly and excellent interpersonal, technical, and communication skills.
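The sketch mentioned in the summary is shown below. It is a minimal, generic illustration of the kind of MapReduce job described (a standard Hadoop word count over files already loaded to HDFS), not code from the projects themselves; the class names, input/output paths, and tokenization rule are placeholders for the example.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/** Counts word occurrences in files already loaded to HDFS. */
public class WordCountJob {

    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (word, 1) for every whitespace-separated token in the line
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the per-mapper counts to get the total per word
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word-count");
        job.setJarByClass(WordCountJob.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```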
TECHNICAL SKILLS
Primary Skills: Java, Scala, Hadoop, Spark
Languages: Java, Scala
Big Data Platforms: Cloudera
Big Data Ecosystem: HDFS, MapReduce, Hive, Pig, Spark, Impala, Sqoop
Messaging Queue: Kafka
Operating System: Windows, Ubuntu 14.04
Server-side Technologies: JDBC, Servlets
Frameworks: CDH 5.x (HDFS), Struts, Spring, Hibernate
Databases: Oracle 10g, HBase (NoSQL)
Web/Application Servers: Apache Tomcat, WebLogic
PROFESSIONAL EXPERIENCE
Confidential
Java Developer
Technologies/Frameworks: CDH 5.x, Ubuntu 14.04, Apache Kafka, Spark Streaming, Spark Core, ZooKeeper, HDFS, HBase, Sqoop, Hive, Pig, Impala
Responsibilities:
- Used Kafka as the messaging server.
- Worked with Spark Core, Spark SQL, and Spark Streaming.
- Developed the Spark Streaming side of the solution (see the sketch after this list).
- Experience with SparkContext, StreamingContext, and SQLContext.
- Processed streaming data in Spark.
- Performed batch processing in Spark.
- Involved in the development and deployment phases.
- Prepared analysis documents for the existing code.
- Created technical design documents based on business process requirements.
- Used Sqoop to import legacy database data into the Hadoop cluster as Hive tables, partitions, and buckets (a query sketch over such a table also follows this list).
- Wrote Pig scripts to validate the records.
- Exported data from HDFS to an Oracle database.
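The streaming sketch referenced above is shown here: a minimal illustration of consuming a Kafka topic with Spark Streaming on the CDH 5.x / Spark 1.x stack listed for this role. The broker address, topic name, batch interval, and class name are assumptions made for the example, not details from the actual project.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

/** Consumes a Kafka topic with Spark Streaming and counts records per micro-batch. */
public class KafkaStreamingSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("kafka-streaming-sketch");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

        // Broker address and topic name are placeholders for illustration
        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker-host:9092");
        Set<String> topics = new HashSet<>(Arrays.asList("events"));

        // Direct (receiver-less) Kafka stream of (key, value) string pairs
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        // Keep only the message value, then print the per-batch record count
        stream.map(record -> record._2())
              .count()
              .print();

        jssc.start();
        jssc.awaitTermination();
    }
}
```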
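For the Sqoop-imported Hive tables mentioned above, the following is a hypothetical batch query using Spark SQL's HiveContext (the Spark 1.x API bundled with CDH 5.x); the database, table, and column names are placeholders, not the real schema.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;

/** Runs a batch Spark SQL query against a Hive table populated through Sqoop. */
public class HiveBatchQuerySketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("hive-batch-query-sketch");
        JavaSparkContext sc = new JavaSparkContext(conf);
        HiveContext hiveContext = new HiveContext(sc.sc());

        // Placeholder database, table, and column names for a Sqoop-imported Hive table
        DataFrame orders = hiveContext.sql(
                "SELECT region, COUNT(*) AS order_count FROM legacy_db.orders GROUP BY region");
        orders.show();

        sc.stop();
    }
}
```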
Confidential
Technologies/Frameworks: Java, Spring, Hibernate, WebLogic
Responsibilities:
- Designed and developed Struts Action classes and Form Bean classes.
- Implemented validation using the Struts validation framework.
- Developed Hibernate POJO classes and Hibernate mapping files.
- Coded and implemented Spring dependency injection across the controller, service, and DAO layers (see the sketch below).
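The sketch referenced in the last bullet is a compact illustration of a Hibernate-mapped POJO with a Spring-injected DAO and service layer. The Account entity, table name, and HQL query are hypothetical examples for the sketch, not artifacts from the actual application.

```java
import java.util.List;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

/** Hibernate-mapped POJO for a hypothetical ACCOUNTS table. */
@Entity
@Table(name = "ACCOUNTS")
class Account {
    @Id
    @GeneratedValue
    private Long id;
    private String ownerName;

    public Long getId() { return id; }
    public String getOwnerName() { return ownerName; }
    public void setOwnerName(String ownerName) { this.ownerName = ownerName; }
}

/** DAO layer: the Hibernate SessionFactory is injected by Spring. */
@Repository
class AccountDao {
    @Autowired
    private SessionFactory sessionFactory;

    @SuppressWarnings("unchecked")
    public List<Account> findAll() {
        // HQL query against the mapped entity, not the raw table
        return sessionFactory.getCurrentSession()
                .createQuery("from Account")
                .list();
    }
}

/** Service layer: the DAO is injected by Spring and calls run in a read-only transaction. */
@Service
class AccountService {
    @Autowired
    private AccountDao accountDao;

    @Transactional(readOnly = true)
    public List<Account> listAccounts() {
        return accountDao.findAll();
    }
}
```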