
Bigdata Developer Resume

Carrollton, TX


  • Over eight years of professional IT experience, with five plus years of experience in analysis, architectural design, prototyping, development, integration and testing of applications using Java/J2EE technologies and three plus years of experience in Big Data analytics as a Hadoop Developer.
  • Three plus years of experience as a Hadoop Developer with good knowledge of Hadoop ecosystem technologies.
  • Experience in developing MapReduce programs using Apache Hadoop to analyze big data per requirements.
  • Experienced with major Hadoop ecosystem projects such as Pig, Hive and HBase, and in monitoring them with Cloudera Manager.
  • Extensive experience in developing Pig Latin Scripts and using Hive Query Language for data analytics.
  • Hands-on experience working with NoSQL databases including HBase and Cassandra and their integration with a Hadoop cluster.
  • Experience in implementing Spark applications in Scala using higher-order functions for both batch and interactive analysis requirements.
  • Good working experience using Sqoop to import data into HDFS from RDBMS and vice versa.
  • Good knowledge in using job scheduling and monitoring tools like Oozie and Zookeeper.
  • Experience in Hadoop administration activities such as installation and configuration of clusters using Apache, Cloudera and AWS.
  • Developed UML diagrams for object-oriented design: use cases, sequence diagrams and class diagrams using Rational Rose, Visual Paradigm and Visio.
  • Hands on experience in solving software design issues by applying design patterns including Singleton Pattern, Business Delegator Pattern, Controller Pattern, MVC Pattern, Factory Pattern, Abstract Factory Pattern, DAO Pattern and Template Pattern.
  • Experienced in creative and effective front-end development using JSP, JavaScript, HTML5, DHTML, XHTML, Ajax and CSS.
  • Good working experience using different Spring modules such as the Spring Core Container, Spring Application Context, Spring MVC Framework and Spring ORM modules in web applications.
  • Used Hibernate and JPA as the persistence services for object-relational mapping with the database; configured XML mapping files and integrated them with other frameworks such as Spring and Struts.
  • Used jQuery to select and manipulate HTML elements and to implement AJAX in web applications; used available plug-ins to extend jQuery functionality.
  • Working knowledge of databases such as Oracle 8i/9i/10g, Microsoft SQL Server and DB2.
  • Experience in writing numerous test cases using the JUnit framework with Selenium.
  • Strong experience in database design, writing complex SQL Queries and Stored Procedures.
  • Experienced in using version control tools such as Subversion and Git.
  • Experience in Building, Deploying and Integrating with Ant, Maven.
  • Experience in development of logging standards and mechanism based on Log4J.
  • Strong work ethic with desire to succeed and make significant contributions to the organization.
  • Strong problem solving skills, good communication, interpersonal skills and a good team player.
  • Have the motivation to take independent responsibility as well as ability to contribute and be a productive team member.
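The MapReduce development experience above can be illustrated with a minimal Hadoop Streaming-style word-count sketch in Python. This is a generic illustration of the map → shuffle/sort → reduce pattern, not code from any project listed here; the sample input is hypothetical.

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Emit (word, 1) pairs, as a Hadoop Streaming mapper would from stdin."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Sum counts per key; the framework's shuffle delivers pairs grouped by key,
    which sorting simulates locally."""
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield word, sum(count for _, count in group)

# Local simulation of the pipeline on a few hypothetical log lines.
counts = dict(reducer(mapper(["error timeout", "error retry", "timeout"])))
```

In a real Hadoop Streaming job, `mapper` and `reducer` would be separate scripts reading stdin and writing tab-separated key/value lines; the in-memory `sorted()` stands in for the cluster's shuffle phase.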


Hadoop/Big Data Technologies: HDFS, MapReduce, Hive, Pig, Sqoop, Flume, HBase, Cassandra, Oozie, ZooKeeper, YARN, Spark.

Programming Languages: Java (JDK 1.4/1.5/1.6), C/C++, MATLAB, R, HTML, SQL, PL/SQL

Frameworks: Hibernate 2.x/3.x, Spring 2.x/3.x, Struts1.x/2.x and JPA

Client Technologies: jQuery, JavaScript, AJAX, CSS, HTML, XHTML

Operating Systems: UNIX, Windows, LINUX

Application Servers: IBM WebSphere, Tomcat, WebLogic

Web technologies: JSP, Servlets, Socket Programming, JNDI, JDBC, Java Beans, JavaScript, Web Services (JAX-WS)

Databases: Oracle 8i/9i/10g, Microsoft SQL Server, DB2 & MySQL 4.x/5.x

Tools: TOAD, SQL Developer, SOAP UI, Ant, Maven, Visio, Rational Rose


Confidential, Carrollton, TX

BigData Developer


  • Worked on analyzing the Hadoop cluster and different Big Data analytic tools including Pig, Hive, the HBase database and Sqoop.
  • Coordinated with business customers to gather business requirements, interacted with other technical peers to derive technical requirements, and delivered the BRD and TDD documents.
  • Extensively involved in Design phase and delivered Design documents.
  • Installed Hadoop (MapReduce, HDFS) and developed multiple Pig and Hive jobs for data cleaning and pre-processing.
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Mapped the relational database architecture to Hadoop's file system and built databases on top of it using Cloudera Impala.
  • Migrated huge amounts of data from different databases (Netezza, Oracle, SQL Server) to Hadoop.
  • Developed Spark programs in Scala and Java.
  • Wrote Hive jobs to parse the logs and structure them in tabular format to facilitate effective querying on the log data.
  • Involved in creating Hive tables, loading them with data and writing Hive queries that run internally as MapReduce jobs.
  • Developed Apache Spark jobs using Scala in test environment for faster data processing and used SparkSQL for querying.
  • Migrated HiveQL queries on structured data to Spark SQL to improve performance.
  • Analyzed data using the Hadoop components Hive and Pig and created Hive tables for the end users.
  • Involved in writing Hive queries and Pig scripts for data analysis to meet the business requirements.
  • Wrote Oozie workflows and shell scripts to automate the flow.
  • Optimized MapReduce and Hive jobs to use HDFS efficiently by using Gzip, LZO, Snappy and ORC compression techniques.
  • Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting.
  • Experienced in managing and reviewing the Hadoop log files.
  • Used Pig as an ETL tool to perform transformations, joins and pre-aggregations before storing the data in HDFS.
  • Loaded and transformed large sets of structured and semi-structured data.
  • Responsible for managing data coming from different sources.
  • Utilized the Cloudera distribution of the Apache Hadoop environment.
  • Created Data model for Hive tables.
  • Exported data from HDFS environment into RDBMS using Sqoop for report generation and visualization purpose.
  • Involved in Unit testing and delivered Unit test plans and results documents.

Environment: Hadoop, MapReduce, HDFS, Hive, Pig, Spark, Sqoop, Flume, Oozie, HBase, Cloudera Manager, Java (JDK 1.6), Python, SQL, Eclipse.
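The log-parsing work described above (raw logs structured into tabular form for effective querying) can be sketched in plain Python. The log line format and field names here are hypothetical assumptions for illustration, not the actual project schema:

```python
import re

# Hypothetical log line format: "<ip> <timestamp> <status> <url>"
LOG_PATTERN = re.compile(
    r"(?P<ip>\S+)\s+(?P<ts>\S+)\s+(?P<status>\d{3})\s+(?P<url>\S+)"
)

def parse_log_line(line):
    """Turn one raw log line into a dict (one 'row' of a Hive-style table);
    return None for malformed lines so they can be filtered out."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

rows = [r for r in map(parse_log_line, [
    "10.0.0.1 2016-03-01T10:00:00 200 /home",
    "10.0.0.2 2016-03-01T10:00:05 404 /missing",
    "garbage line",  # malformed rows are dropped
]) if r]
```

In Hive itself the same effect is typically achieved with a SerDe or a regex-based table definition; the point of the sketch is the parse-then-tabulate step that makes the raw logs queryable.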

Confidential, Plano, Texas

Java/Hadoop developer


  • Hands on experience in loading data from UNIX file system to HDFS.
  • Experienced in loading and transforming large sets of structured, semi-structured and unstructured data from HBase through Sqoop, placed in HDFS for further processing.
  • Installed and configured Flume, Hive, Pig, Sqoop and Oozie on the Hadoop cluster.
  • Managed and scheduled jobs on a Hadoop cluster using Oozie.
  • Involved in creating Hive tables, loading data and running Hive queries on that data.
  • Extensive working knowledge of partitioned tables, UDFs, performance tuning, compression-related properties and the Thrift server in Hive.
  • Involved in writing, optimizing, developing and testing Pig Latin scripts.
  • Working knowledge of writing Pig Load and Store functions.
  • Developed Java MapReduce programs to transform log data into structured form and extract user location, age group and time spent.
  • Developed optimal strategies for distributing the web log data over the cluster; imported and exported the stored web log data into HDFS and Hive using Sqoop.
  • Collected and aggregated large amounts of web log data from different sources such as webservers, mobile and network devices using Apache Flume and stored the data into HDFS for analysis.
  • Monitored multiple Hadoop clusters environments using Ganglia.
  • Monitored workload, job performance and capacity planning using Cloudera Manager.
  • Developed PIG scripts for the analysis of semi structured data.
  • Developed and was involved in industry-specific UDFs (user-defined functions).
  • Used Flume to collect, aggregate, and store the web log data from different sources like web servers, mobile and network devices and pushed to HDFS.
  • Analyzed the web log data using the HiveQL to extract number of unique visitors per day, page views, visit duration, most purchased product on website.
  • Integrated Oozie with the rest of the Hadoop stack supporting several types of Hadoop jobs out of the box (such as Map-Reduce, Pig, Hive, and Sqoop) as well as system specific jobs (such as Java programs and shell scripts).

Environment: Amazon EC2, Apache Hadoop 1.0.1, MapReduce, HDFS, CentOS 6.4, HBase, Hive, Pig, Oozie, Flume, Java (JDK 1.6), Eclipse, Sqoop, Ganglia.
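The web-log analysis described above (e.g. unique visitors per day) is the kind of aggregation HiveQL expresses as `COUNT(DISTINCT ip) ... GROUP BY day`. A minimal Python equivalent, with hypothetical sample data, might look like:

```python
from collections import defaultdict

def unique_visitors_per_day(events):
    """events: iterable of (day, visitor_ip) pairs.
    Returns {day: distinct visitor count} -- the Python analogue of
    SELECT day, COUNT(DISTINCT ip) FROM logs GROUP BY day."""
    seen = defaultdict(set)
    for day, ip in events:
        seen[day].add(ip)  # sets deduplicate repeat visits per day
    return {day: len(ips) for day, ips in seen.items()}

sample = [
    ("2016-03-01", "10.0.0.1"),
    ("2016-03-01", "10.0.0.1"),  # repeat visit, counted once
    ("2016-03-01", "10.0.0.2"),
    ("2016-03-02", "10.0.0.3"),
]
```

At web-log scale this grouping is exactly what Hive distributes across the cluster; the local sketch just makes the semantics of the metric explicit.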

Confidential, St Louis, Missouri

Java/Big Data Developer


  • Involved in development of business domain concepts into Use Cases, Sequence Diagrams, Class Diagrams, Component Diagrams and Implementation Diagrams.
  • Implemented various J2EE design patterns such as Model-View-Controller and Data Access Object.
  • Responsible in gathering requirements from users and designing Use cases, Technical Design and Implementation.
  • Extensively worked on Spring and Hibernate Frameworks.
  • Worked on Front Controller, Dependency Injection, MVC, Data Access Objects and other J2EE core patterns.
  • Used RAD as an IDE for developing application.
  • Implemented complex MapReduce algorithms using Java.
  • Used an internal tool to design data flows with Cassandra/MongoDB NoSQL databases.
  • Dealt with Stored Procedures to consolidate and centralize logic that was originally implemented in applications
  • Developed the entire set of front-end screens using Ajax, JSP, JSP Tag Libraries, CSS, HTML and JavaScript.
  • Used JavaScript and jQuery for front-end validations and functionality.
  • Contributed significantly in applying the MVC Design pattern using Spring.
  • Developed EJBs from scratch for communicating with remote applications.
  • Implemented ActionForm classes for data transfer and server-side data validation.
  • Performed Unit Testing (JUnit), System Testing and Integration Testing.
  • Involved in Maintenance and Bug Fixing.
  • Actively participated in design and technical discussions.
  • Involved in web service design and development.
  • Developed web services using SOAP and WSDL.
  • Application deployment was done on WebSphere 8.5 and JBoss servers.
  • Involved in the complete software development life cycle.
  • Involved in unit testing and user documentation and used Log4j for creating the logs.

Environment: Java, J2EE (Servlets, JSP 1.2), Apache Ant, Linux/UNIX, SOAP Web Services (WSDL), Hibernate 4, Spring 3, RAD 7, Hadoop, Pig, Hive, MapReduce, Zookeeper, Sqoop, Flume, JUnit, SAX, XML, XSLT, HTML, JavaScript, jQuery, AJAX, CSS, Oracle 9i/10g, SQL, Maven, TOAD, JDBC, Git, CVS, SVN, SOAP UI, ANT Hill PRO, Log4j, Windows 7.

Confidential, Minneapolis MN

Java /J2EE Developer


  • Responsible for gathering and analyzing requirements and converting them into technical specifications
  • Created Use case, Sequence diagrams, functional specifications and User Interface diagrams using Star UML.
  • Involved in complete requirement analysis, design, coding and testing phases of the project.
  • Participated in JAD meetings to gather the requirements and understand the End Users System.
  • Developed user interfaces using JSP, HTML, XML and JavaScript.
  • Generated XML Schemas and used XML Beans to parse XML files.
  • Created Stored Procedures & Functions. Used JDBC to process database calls for DB2/AS400 and SQL Server databases.
  • Developed the code which will create XML files and Flat files with the data retrieved from Databases and XML files.
  • Created Data sources and Helper classes which will be utilized by all the interfaces to access the data and manipulate the data.
  • Developed web application called iHUB (integration hub) to initiate all the interface processes using Struts Framework, JSP and HTML.
  • Heavily used bug trackers such as Pivotal, Jira, Bugzilla and Target Process.
  • Developed the interfaces using Eclipse 3.1.1 and JBoss 4.1; involved in integration testing, bug fixing and production support.

Environment: Java 1.3, Servlets, JSPs, Java Mail API, JavaScript, HTML, MySQL 2.1, Java Web Server 2.0, JBoss 2.0, RMI, Rational Rose, Red Hat Linux 7.1.


Program analyst


  • Involved in the design and development phases of Rational Unified Process (RUP)
  • Involved in creation of UML diagrams like Class, Activity, and Sequence Diagrams using modeling tools of IBM Rational Rose
  • Coding using Java, JSP, and HTML.
  • Developed front end validations using JavaScript and developed design and layouts of JSPs and custom taglibs for all JSPs.
  • Developed the Data Exchange Objects and integrated them with the presentation layer to exchange business data (fetched from the Oracle database) with the Pinnacle user interfaces.
  • GUI development using HTML, XML, JSP, Servlets, JavaScript with the help of MVC Architecture.
  • Worked extensively on the JSPs using MVC architecture.
  • Responsible for writing the Design Specifications for the user interfaces and the business logic layers.
  • Front-end validations in JavaScript.
  • Analyzed user tasks and developed task models and usage scenarios (UML).
  • Prepared exhaustive test cases to comprehensively test functionality and code.

Environment: Java, J2EE, Tomcat Application Server, JSP, Struts Framework, Tiles, custom tag libraries, Eclipse, SQL, Oracle, HTML, JavaScript, Servlets, XML, EJB.
