
Sr. Java Developer Resume


Dallas, TX

SUMMARY

  • Proactive IT developer with 8 years of experience designing and building high-performance, scalable systems in Java and the Big Data ecosystem on Windows and Linux environments.
  • Highly dedicated, results-oriented Hadoop developer with 2+ years of strong end-to-end Hadoop development experience across Big Data projects of varying complexity.
  • Experience in Hadoop distributions like Cloudera (CDH4 & CDH5.5).
  • Procedural knowledge in cleansing and analyzing data using HiveQL, Pig Latin, and custom MapReduce programs in Java.
  • Experience installing Hadoop clusters using different distributions, including Apache Hadoop and Cloudera.
  • Experience with Apache Hadoop technologies: the Hadoop Distributed File System (HDFS), the MapReduce framework, YARN, Pig, Hive, Sqoop, and Flume.
  • Expertise in developing MapReduce programs to perform data transformation.
  • Hands-on experience writing MapReduce jobs across the Hadoop ecosystem, including Hive and Pig.
  • Hands-on experience installing, configuring, and using ecosystem components such as Hadoop MapReduce, HDFS, Pig, Hive, and Sqoop.
  • Good knowledge of Hadoop architecture and various Hadoop Stack elements.
  • Strong knowledge of Spark Core and its implementation, including Spark SQL, MLlib, GraphX, and Spark Streaming.
  • Familiarity with real-time streaming data processing in Spark for fast, large-scale, in-memory computation.
  • Experience developing pipelines that ingest data from various sources and process it with Hive and Pig.
  • Good understanding of NoSQL databases and hands on work experience in writing applications on NoSQL databases like HBase.
  • Experience in Cluster Monitoring Tools such as Ganglia.
  • Hands-on experience in the development and deployment of custom Hadoop applications in both in-house clusters and cloud-based environments.
  • Explored Spark to improve the performance and optimization of existing Hadoop algorithms, using SparkContext, Spark SQL, DataFrames, pair RDDs, and Spark on YARN.
  • Quick to learn and apply new skills; a good communicator and team player.

TECHNICAL SKILLS

Big Data: Apache Hadoop, HDFS, MapReduce, Hive, Pig, Oozie, Hue, Sqoop, Spark

Databases: MySQL, Oracle, SQL Server, HBase

IDEs: Eclipse, NetBeans

Languages: C, Java, Pig Latin, Unix shell scripting, Python

Reporting Tool: Tableau

Web Technologies: HTML, CSS, DHTML, XML, JavaScript, jQuery

Web/Application Servers: Apache Tomcat, WebLogic

PROFESSIONAL EXPERIENCE

Hadoop/Spark Developer

Confidential, Atlanta, GA

Responsibilities:

  • Developed PIG scripts to transform the raw data into intelligent data as specified by business users.
  • Worked closely with the data modelers to model the new incoming data sets.
  • Expertise in the design and deployment of Hadoop clusters and various Big Data analytics tools, including Pig, Hive, HBase, Oozie, ZooKeeper, Sqoop, Flume, Spark, Impala, and Cassandra.
  • Installed Hadoop, MapReduce, and HDFS, and developed multiple MapReduce jobs in Pig and Hive for data cleaning and pre-processing.
  • Assisted in upgrading, configuring, and maintaining Hadoop infrastructure components such as Pig, Hive, and HBase.
  • Explored Spark to improve the performance and optimization of existing Hadoop algorithms, using SparkContext, Spark SQL, DataFrames, pair RDDs, and Spark on YARN.
  • Developed Spark code using Scala and Spark SQL/Streaming for faster testing and processing of data.
  • Imported data from different sources such as HDFS and HBase into Spark RDDs.
  • Built a proof of concept for Single Member Debug on Hive/HBase and Spark.
  • Performed transformations, cleaning and filtering on imported data using Hive, Map Reduce, and loaded final data into HDFS.
  • Loaded data into Spark RDDs and performed in-memory computation to generate the output response.
  • Loaded data into HBase using both bulk and non-bulk loads.
  • Used the Oozie workflow scheduler to manage Hadoop jobs as Directed Acyclic Graphs (DAGs) of actions with control flows.
  • Expertise in data modeling and data warehouse design and development.
  • Developed a data pipeline using Kafka and Storm to store data into HDFS.
  • Performed real time analysis on the incoming data.
  • Automated the extraction of data from warehouses and weblogs by developing workflows and coordinator jobs in Oozie.
  • Performed transformations such as event joins, filtering bot traffic, and pre-aggregations using Pig.
  • Developed business-specific custom UDFs in Hive and Pig.
  • Configured Oozie workflow to run multiple Hive and Pig jobs which run independently with time and data availability.
  • Optimized MapReduce code and Pig scripts; performed performance tuning and analysis.

Environment: MapReduce, HDFS, Hive, Pig, Spark, Spark-Streaming, Spark SQL, Apache Kafka, Sqoop, Java, Scala, CDH4, CDH5.5, Eclipse, Oracle, Git, Shell Scripting and Cassandra.
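The MapReduce and Pig transformation work above follows the classic map-shuffle-reduce pattern. As a rough illustration (not code from these projects — the log-line dataset, mapper, and reducer here are invented for the sketch, and plain Python stands in for a Hadoop cluster), a word-count-style job looks like:

```python
from itertools import groupby
from operator import itemgetter

def mapper(record):
    # Map phase: emit (key, 1) pairs for each token in a raw input line.
    for word in record.strip().lower().split():
        yield (word, 1)

def reducer(key, values):
    # Reduce phase: aggregate all counts emitted for a single key.
    return (key, sum(values))

def run_job(records):
    # Flatten the pairs emitted by the mapper over every input record.
    mapped = [pair for rec in records for pair in mapper(rec)]
    # Shuffle phase: sort and group by key, as Hadoop does between map and reduce.
    mapped.sort(key=itemgetter(0))
    return dict(
        reducer(key, (count for _, count in group))
        for key, group in groupby(mapped, key=itemgetter(0))
    )

raw_lines = ["error warn error", "warn info", "error"]  # stand-in input split
counts = run_job(raw_lines)
print(counts)  # {'error': 3, 'info': 1, 'warn': 2}
```

In a real job the same mapper/reducer logic would run distributed across the cluster, with HDFS supplying the input splits and the framework handling the shuffle.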

Hadoop Developer

Confidential - Hoffman Estates, IL

Responsibilities:

  • Optimized Map/Reduce Jobs to use HDFS efficiently by using various compression mechanisms.
  • Responsible for building scalable distributed data solutions using Hadoop.
  • Developed Pig Latin scripts to extract the data from the web server output files to load into HDFS.
  • Handled importing of data from various data sources, performed transformations using Hive and MapReduce, loaded the data into HDFS, and extracted data from Oracle into HDFS using Sqoop.
  • Analyzed the data by performing Hive queries and running Pig scripts to study customer behavior.
  • Installed and configured Cloudera Manager for easy management of existing Hadoop cluster.
  • Developed workflow in Oozie to automate the tasks of loading the data into HDFS and pre-processing with Pig.
  • Responsible for managing and reviewing Hadoop log files. Designed and developed data management system using MySQL.
  • Developed entire frontend and backend modules using Python on Django Web Framework.
  • Wrote Python scripts to parse XML documents and load the data in database.
  • Performed cluster maintenance, including creation and removal of nodes, using tools such as Cloudera Manager Enterprise.
  • Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the BI team.
  • Gained hands-on experience with data processing using Spark.
  • Worked on NoSQL databases including HBase and ElasticSearch.
  • Worked on importing and exporting data from Oracle and DB2 into HDFS and Hive using Sqoop.
  • Used Tableau as a reporting and data visualization tool.
  • Worked with Java, J2EE, Struts, Web Services, and Hibernate in a fast-paced development environment.
  • Followed Agile methodology; interacted directly with the client to give and receive feedback on features, suggest and implement optimal solutions, and tailor the application to customer needs.
  • Set up application proxy rules in the Apache server and created Spark SQL queries for faster requests.
  • Designed and Developed database design document and database diagrams based on the Requirements.
  • Developed UI of Web Service using Struts MVC Framework.
  • Implemented Struts validation framework.
  • Installed and configured Hive, Pig, Sqoop and Oozie on the Hadoop cluster.
  • Implemented Web Service security on JBoss Server.
  • Used the Hibernate ORM framework as the persistence engine; actively engaged in mapping POJOs and writing Hibernate queries.
  • Implemented DAOs for data access using Spring ORM with Hibernate.
  • Implemented/optimized complex stored procedures for performance enhancements.
  • Designed the XML Schema for data transmission using xml documents.
  • Wrote Unit tests using JUnit.
  • Responsible and involved in the start to end process of Hadoop cluster installation, configuration and monitoring.

Environment: HDFS, Hive, Pig, UNIX, SQL, Java MapReduce, Spark, Hadoop cluster, HBase, Sqoop, Oozie, Linux, Data Pipeline, Cloudera Hadoop Distribution, Python, MySQL, Git, MapR-DB.
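One bullet above mentions Python scripts that parse XML documents and load the data into a database. A minimal sketch of that pattern follows; the `<orders>` document layout is a hypothetical example (the resume does not give the real schema), and an in-memory SQLite database stands in for the production MySQL/Oracle target:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical XML layout, invented for illustration.
XML_DOC = """
<orders>
  <order id="1001"><customer>Acme</customer><total>250.00</total></order>
  <order id="1002"><customer>Globex</customer><total>99.50</total></order>
</orders>
"""

def load_orders(xml_text, conn):
    # Parse the document and extract one row per <order> element.
    root = ET.fromstring(xml_text)
    rows = [
        (int(order.get("id")),
         order.findtext("customer"),
         float(order.findtext("total")))
        for order in root.iter("order")
    ]
    # Load the parsed rows into the database in a single transaction.
    with conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders "
            "(id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
        )
        conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    return len(rows)

conn = sqlite3.connect(":memory:")  # stand-in for the real database
inserted = load_orders(XML_DOC, conn)
print(inserted)  # 2
```

Swapping the connection for a MySQL or Oracle driver leaves the parse-and-load logic unchanged, which is the point of separating parsing from persistence.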

Sr. Java Developer

Confidential - Dallas, TX

Responsibilities:

  • Used segment widget along with segment templates to present data and to reuse the template with tablets.
  • Worked with Java, J2EE, Struts, Web Services, and Hibernate in a fast-paced development environment.
  • Followed Agile methodology; interacted directly with the client to give and receive feedback on features, suggest and implement optimal solutions, and tailor the application to customer needs.
  • Developed UI of Web Service using Struts MVC Framework.
  • Designed UI forms for different size devices (phones and tablets).
  • Involved in design, development, and testing phases of software development life cycle.
  • Developed use case diagrams, class diagrams, and database tables, and provided mapping between relational database tables and object-oriented Java objects using Hibernate.
  • Created and deployed web pages using HTML, JSP, JavaScript, CSS and Angular JS framework.
  • Developed web-tier components of web stores using the Spring Web MVC framework, which leverages the Model-View-Controller (MVC) architecture, and used Spring Tool Suite.
  • Implemented SOA based web services, designed and built SOAP web service interface.
  • Used MVC-Struts framework in the front-end to develop the User Interface.
  • Involved in implementing business logic in the Struts framework and Hibernate on the back-end.
  • Developed various DAOs in the application using Spring JDBC support to fetch, insert, update, and delete data in the database tables.
  • Involved in code reviews and ensured code quality across the project.
  • Participated in all aspects of application design, development, testing, and implementation, using OOA/OOD or model driven design, and SOA.
  • Hands-on development and testing with web services and SOA.
  • Used the Hibernate framework in the persistence layer to map the object-oriented domain model to a relational database.
  • Developed Web Services and exposed (provider) them to other team for consumption.
  • Actively involved in software development life cycle starting from requirements gathering and performing Object Oriented Analysis.
  • Interacted with the business users to gather requirements and provided high-level design with sequence and state-chart diagrams.
  • Used Validator framework to perform JSP form validation.
  • Implemented JDBC to connect with the database and read/write the data.
  • Provided 24/7 production support; worked on bug fixes and enhancements for change requests.
  • Created the database roles, users, tables, views, procedures, packages, functions and triggers in Oracle using Toad.
  • Developed test cases and performed unit testing using JUnit.
  • Involved in Sprint meetings and followed agile software development methodologies.
  • Designed the database; created tables and wrote complex SQL queries and stored procedures as per the requirements.
  • Wrote JUnit test cases and used ANT to build the application.

Environment: Java/J2EE, JSP, SQL, PL/SQL, Apache Tomcat, Eclipse, MVC, Oracle 10g, EJB, Struts, Hibernate, WebLogic 9.0, HTML, AJAX, JavaScript, JDBC, XML, JMS, XSLT, UML, JUnit, log4j, MyEclipse 6.0

Java Developer

Confidential

Responsibilities:

  • Involved in the analysis, design, development, and testing phases of the Software Development Life Cycle (SDLC).
  • Developed test cases and performed unit testing using JUnit.
  • Designed and developed framework components, involved in designing MVC pattern using Struts and Spring framework.
  • Used Ant build scripts to create EAR files and deployed the application on the WebLogic app server.
  • Coded HTML pages using CSS for static content generation with JavaScript for validations.
  • Implemented Java Naming/Directory Interface (JNDI) to support transparent access to distributed components, directories and services.
  • Used JDBC API to connect to the database and carry out database operations. Coded Java Server Pages for the Dynamic front end content that use Servlets and EJBs.
  • Used JSP and JSTL Tag Libraries for developing User Interface components.
  • Used JUnit framework for unit testing and ANT to build and deploy the application on WebLogic Server.
  • Used SOAP for exchanging XML based messages.
  • Responsible for developing Use case, Class diagrams and Sequence diagrams for the modules using UML and Rational Rose.
  • Actively involved in designing and implementing Factory method, Singleton, MVC and Data Access Object design patterns.
  • Used web services to send and receive data between applications via SOAP messages, then used a DOM XML parser for data retrieval.
  • Developed Custom Tags to simplify the JSP code. Designed UI screens using JSP and HTML.
  • Developed the user interface using JSP and Java Script to view all online trading transactions.
  • Developed both Session and Entity beans representing different types of business logic abstractions.
  • Used application servers such as WebSphere and Tomcat.
  • Developed the Action Classes, Action Form Classes, created JSPs using Struts tag libraries and configured in Struts-config.xml, Web.xml files.
  • Involved in Deploying and Configuring applications in Web Logic Server.
  • Used Microsoft VISIO for developing Use Case Diagrams, Sequence Diagrams and Class Diagrams in the design phase.

Environment: Java, J2EE, JSP, Oracle, VSAM, Eclipse, HTML, MVC, ANT, WebLogic
