
Java, J2EE Developer Resume

PROFESSIONAL SUMMARY:

  • 6+ years of IT experience in the analysis, design, development, and implementation of business applications, with thorough knowledge of Java, J2EE, Big Data, the Hadoop ecosystem, and RDBMS technologies, and domain exposure to Retail, Healthcare, Banking, E-commerce, Insurance, Logistics, and Financial (Mortgage) systems.
  • Expertise with Hadoop ecosystem tools, including Pig, Hive, HDFS, MapReduce, Sqoop, Storm, Spark, Kafka, YARN, Oozie, and ZooKeeper.
  • Excellent knowledge of Hadoop architecture, including HDFS, JobTracker, TaskTracker, NameNode, DataNode, and the MapReduce programming paradigm.
  • Strong experience with Hadoop distributions such as Cloudera, MapR, and Hortonworks.
  • Experience developing MapReduce programs on Apache Hadoop to analyze big data per requirements (see the MapReduce sketch after this list).
  • Highly capable of processing large structured, semi-structured, and unstructured datasets in support of Big Data applications.
  • Good experience with Hive partitioning and bucketing, performing different types of joins on Hive tables, and implementing Hive SerDes such as JSON and ORC (see the Hive sketch after this list).
  • Worked with different file formats (ORCFile, TextFile) and compression codecs (Gzip, Snappy, LZO).
  • Proficient with Hadoop data formats such as Avro and Parquet.
  • Good knowledge of NoSQL databases such as HBase, Cassandra, and MongoDB.
  • Proficient in implementing HBase.
  • Used ZooKeeper to provide coordination services to the cluster.
  • Experience with the Oozie workflow scheduler, managing Hadoop jobs as Directed Acyclic Graphs (DAGs) of actions with control flows.
  • Experience using Sqoop to import data from RDBMS into HDFS and vice versa (see the Sqoop sketch after this list).
  • Extensive experience importing and exporting data using stream processing platforms such as Flume and Kafka.
  • Implemented a POC to migrate MapReduce jobs to Spark RDD transformations using Scala (see the Spark sketch after this list).
  • Experience creating Spark Contexts, Spark SQL Contexts, and Spark Streaming Contexts to process large datasets.
  • Used Spark to improve the performance of and optimize existing Hadoop algorithms, working with Spark Context, Spark SQL, DataFrames, pair RDDs, and Spark on YARN.
  • Experienced in Spark Core, Spark RDDs, pair RDDs, and Spark deployment architectures.
  • Extensive experience using Maven and Ant as build tools to produce deployable artifacts from source code.
  • Worked with Big Data distributions such as Cloudera (CDH 3 and 4) with Cloudera Manager.
  • Knowledge of cloud technologies such as AWS and Amazon Elastic MapReduce (EMR).
  • Experience in database development using SQL and PL/SQL on Oracle 9i/10g, SQL Server, and MySQL.
  • Strong knowledge of object-oriented and distributed application development.
  • Wrote unit test cases using JUnit and MRUnit for MapReduce jobs (see the MRUnit sketch after this list).
  • Experience with development tooling such as GitHub and Jenkins.
  • Good understanding of Hadoop Gen1/Gen2 architecture, with hands-on experience with components such as JobTracker, TaskTracker, NameNode, Secondary NameNode, and DataNode, MapReduce concepts, and the YARN architecture, including the NodeManager, ResourceManager, and ApplicationMaster.
  • Comprehensive knowledge of the Software Development Life Cycle (SDLC), with a thorough understanding of phases such as requirements analysis, design, development, and testing.
  • Worked in both Agile and Waterfall life cycles, including estimating project timelines.
  • Ability to quickly master new concepts and applications.
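
A minimal MapReduce sketch in the spirit of the bullets above, counting words as a stand-in for the analysis jobs described; all class and path names are illustrative, not taken from any actual project.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: emit (word, 1) for every token in the input line.
    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    ctx.write(word, ONE);
                }
            }
        }
    }

    // Reducer: sum the counts for each word.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class); // combiner is safe for a sum
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}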
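
A minimal Hive sketch showing partitioning, bucketing, and a JSON SerDe as described above, issued over Hive JDBC; the host, database, table, and column names are assumptions for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HivePartitionDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection con = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "", "");
             Statement st = con.createStatement()) {

            // Partition by date, bucket by customer id, store as ORC.
            st.execute("CREATE TABLE IF NOT EXISTS orders (" +
                       " order_id BIGINT, customer_id BIGINT, amount DOUBLE)" +
                       " PARTITIONED BY (order_date STRING)" +
                       " CLUSTERED BY (customer_id) INTO 32 BUCKETS" +
                       " STORED AS ORC");

            // JSON SerDe: external table over raw JSON events.
            st.execute("CREATE EXTERNAL TABLE IF NOT EXISTS events_json (" +
                       " id STRING, payload STRING)" +
                       " ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'" +
                       " LOCATION '/data/raw/events'");
        }
    }
}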
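
A minimal Sqoop sketch as described above, invoked programmatically through Sqoop 1.x's runTool entry point (the shell command `sqoop import ...` takes the same arguments); the connection string, table, and paths are placeholders.

import org.apache.sqoop.Sqoop;

public class SqoopImportDemo {
    public static void main(String[] args) {
        String[] importArgs = {
            "import",
            "--connect", "jdbc:mysql://dbhost:3306/sales", // placeholder RDBMS
            "--username", "etl_user",
            "--password-file", "/user/etl/.dbpass",
            "--table", "orders",
            "--target-dir", "/data/staging/orders",        // HDFS landing dir
            "--num-mappers", "4"
        };
        // Equivalent to running `sqoop import ...` from the shell.
        int exitCode = Sqoop.runTool(importArgs);
        System.exit(exitCode);
    }
}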
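
A minimal sketch of the MapReduce-to-Spark migration pattern mentioned above, expressing the same word count as RDD transformations; the summary names Scala, but the equivalent Spark 2.x Java API is shown here to keep all sketches in the resume's primary language.

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("spark-word-count");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaRDD<String> lines = sc.textFile(args[0]); // HDFS input path
            JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey(Integer::sum); // replaces the reduce phase
            counts.saveAsTextFile(args[1]); // HDFS output path
        }
    }
}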
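
A minimal MRUnit sketch testing the mapper from the MapReduce sketch above; assumes the MRUnit hadoop2 artifact and JUnit 4 on the test classpath.

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.Before;
import org.junit.Test;

public class TokenMapperTest {
    private MapDriver<LongWritable, Text, Text, IntWritable> mapDriver;

    @Before
    public void setUp() {
        mapDriver = MapDriver.newMapDriver(new WordCount.TokenMapper());
    }

    @Test
    public void emitsOneCountPerToken() throws Exception {
        // Expected outputs are asserted in emission order.
        mapDriver.withInput(new LongWritable(0), new Text("spark spark hive"))
                 .withOutput(new Text("spark"), new IntWritable(1))
                 .withOutput(new Text("spark"), new IntWritable(1))
                 .withOutput(new Text("hive"), new IntWritable(1))
                 .runTest();
    }
}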

TECHNICAL SKILLS:

Big Data Technologies: Hadoop (HDFS & MapReduce), Pig, Hive, HBase, ZooKeeper, Sqoop, Apache Storm, Flume, Spark (Spark Streaming, MLlib, Spark SQL and DataFrames, GraphX), Scala

Programming & Scripting Languages: Java, SQL, Python, Scala

Frameworks: Spring 3.5 (Spring MVC, Spring ORM, Spring Security, Spring Roo), Hibernate, Struts

Application Servers: IBM WebSphere, JBoss, WebLogic

Web Servers: Apache Tomcat

Databases: MS SQL Server & SQL Server Integration Services (SSIS), MySQL, MongoDB

Designing Tools: UML, Visio

IDEs: Eclipse, NetBeans

Operating Systems: Unix, Windows, Linux, CentOS

Others: PuTTY, WinSCP, Data Lake, Talend, Tableau, GitHub, SVN, CVS

PROFESSIONAL EXPERIENCE

Confidential

Java, J2EE Developer

Responsibilities:

  • Worked with different file formats (ORCFile, TextFile) and compression codecs (Gzip, Snappy, LZO).
  • Processed data in Hadoop formats such as Avro and Parquet.
  • Worked with NoSQL databases including HBase, Cassandra, and MongoDB, and implemented HBase (see the HBase sketch after this list).
  • Used ZooKeeper to provide coordination services to the cluster.
  • Used the Oozie workflow scheduler to manage Hadoop jobs as Directed Acyclic Graphs (DAGs) of actions with control flows.
  • Used Sqoop to import data from RDBMS into HDFS and to export it back.
  • Imported and exported data using stream processing platforms such as Flume and Kafka (see the Kafka sketch after this list).
  • Implemented a POC to migrate MapReduce jobs to Spark RDD transformations using Scala.
  • Created Spark Contexts, Spark SQL Contexts, and Spark Streaming Contexts to process large datasets (see the streaming sketch after this list).
  • Used Spark to improve the performance of and optimize existing Hadoop algorithms, working with Spark Context, Spark SQL, DataFrames, pair RDDs, and Spark on YARN.
  • Worked with Spark Core, Spark RDDs, pair RDDs, and Spark deployment architectures.
  • Used Maven and Ant as build tools to produce deployable artifacts from source code.
  • Worked with Big Data distributions such as Cloudera (CDH 3 and 4) with Cloudera Manager.
  • Worked with cloud technologies including AWS and Amazon Elastic MapReduce (EMR).
  • Performed database development using SQL and PL/SQL on Oracle 9i/10g, SQL Server, and MySQL.
  • Developed object-oriented and distributed applications.
  • Wrote unit test cases using JUnit and MRUnit for MapReduce jobs.
  • Used development tooling such as GitHub and Jenkins.
  • Worked across Hadoop Gen1/Gen2 architecture, with hands-on experience with components such as JobTracker, TaskTracker, NameNode, Secondary NameNode, and DataNode, MapReduce concepts, and the YARN architecture, including the NodeManager, ResourceManager, and ApplicationMaster.
  • Participated in all SDLC phases, including requirements analysis, design, development, and testing.
  • Worked in both Agile and Waterfall life cycles, including estimating project timelines.
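
A minimal sketch of basic HBase writes and reads with the Java client, in the spirit of the HBase bullet above; the table, column family, and qualifier names are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("customers"))) {

            // Write one cell: row key "cust-001", family "info", qualifier "name".
            Put put = new Put(Bytes.toBytes("cust-001"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
            table.put(put);

            // Read it back by row key.
            Result result = table.get(new Get(Bytes.toBytes("cust-001")));
            byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
            System.out.println(Bytes.toString(name));
        }
    }
}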
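
A minimal sketch of publishing records to Kafka for ingestion pipelines like those described above; the broker address, topic name, and payload are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("clickstream", "user-42", "{\"page\":\"/home\"}"));
        } // close() flushes any pending records
    }
}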
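
A minimal sketch of creating a Spark Streaming context as mentioned above; the socket source, port, and 10-second batch interval are assumptions for illustration.

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamingDemo {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("streaming-demo");
        // Micro-batches every 10 seconds.
        JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(10));

        JavaDStream<String> lines = ssc.socketTextStream("localhost", 9999);
        lines.count().print(); // records per batch

        ssc.start();
        ssc.awaitTermination();
    }
}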
