
Hadoop Developer Resume


Bloomington, IL

SUMMARY

  • Over eight years of professional IT experience: five-plus years in analysis, architectural design, prototyping, development, integration, and testing of applications using Java/J2EE technologies, and three-plus years in Big Data analytics as a Hadoop Developer.
  • Three-plus years of experience as a Hadoop Developer with strong knowledge of Hadoop ecosystem technologies.
  • Experience in developing MapReduce programs on Apache Hadoop to analyze big data according to requirements (a minimal job sketch follows this list).
  • Experienced with major Hadoop ecosystem projects such as Pig, Hive, and HBase, and with monitoring them using Cloudera Manager.
  • Extensive experience in developing Pig Latin scripts and using Hive Query Language for data analytics.
  • Hands-on experience with NoSQL databases, including HBase and Cassandra, and their integration with Hadoop clusters.
  • Good working experience using Sqoop to import data from RDBMS into HDFS and vice versa.
  • Good knowledge of job scheduling and monitoring with Oozie and of cluster coordination with ZooKeeper.
  • Experience in Hadoop administration activities such as installation and configuration of clusters using Apache, Cloudera, and AWS.
  • Developed UML diagrams for object-oriented design: use case, sequence, and class diagrams using Rational Rose, Visual Paradigm, and Visio.
  • Hands-on experience in solving software design issues by applying design patterns, including the Singleton, Business Delegate, Controller, MVC, Factory, Abstract Factory, DAO, and Template patterns.
  • Experienced in creative and effective front-end development using JSP, JavaScript, HTML5, DHTML, XHTML, Ajax, and CSS.
  • Expert-level skills in programming with the Struts framework, custom tag libraries, Spring tag libraries, and JSTL.
  • Good working experience with different Spring modules, such as the Spring core container, application context, Spring MVC, and Spring ORM modules, in web applications.
  • Proficient with the persistence services Hibernate and JPA for object-relational mapping; configured XML mapping files and integrated them with frameworks such as Spring and Struts.
  • Used jQuery to select and manipulate HTML elements and to implement Ajax in web applications; used available plug-ins to extend jQuery functionality.
  • Good exposure to web services using CXF/XFire and Apache Axis for publishing and consuming SOAP messages.
  • Working knowledge of databases such as Oracle 8i/9i/10g, Microsoft SQL Server, and DB2.
  • Experience in writing numerous test cases using the JUnit framework with Selenium.
  • Strong experience in database design and in writing complex SQL queries and stored procedures.
  • Experienced in using version control tools such as Subversion and Git.
  • Extensive experience in building and deploying applications on web/application servers such as WebLogic, WebSphere, and Tomcat.
  • Experience in building, deploying, and integrating applications with Ant and Maven.
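
As a concrete illustration of the MapReduce work summarized above, here is a minimal sketch of a Hadoop MapReduce job in Java. It counts occurrences of the first token on each input line (for example, a log level); the class names and the counting task itself are illustrative, not taken from any engagement listed below.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/** Counts occurrences of the first token (e.g. a log level) on each input line. */
public class LogLevelCount {

    public static class TokenMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text level = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split("\\s+");
            if (fields.length > 0 && !fields[0].isEmpty()) {
                level.set(fields[0]);           // emit (level, 1) per line
                context.write(level, ONE);
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();                  // total count per level
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "log level count");
        job.setJarByClass(LogLevelCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);  // safe: counting is associative
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Packaged into a jar, a job like this would typically be launched with hadoop jar <jar> LogLevelCount <input-path> <output-path>.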

TECHNICAL SKILLS

Operating Systems: Windows 8/7/XP, Unix, Ubuntu 13.x, Mac OS X, CentOS

Hadoop Ecosystem: Hadoop 1.x/2.x (YARN), HDFS, MapReduce, HBase, Hive, Pig, ZooKeeper, Sqoop, Spark

Java Tools: Java, J2EE (JSP, Servlets, EJB, JDBC, JMS, JNDI, RMI), JSF, MapReduce, Web Services, Ajax, jQuery, JavaScript, DOM, SAX, HTML, DHTML, CSS, XML, XSLT, JSON, tag libraries, Ant, JUnit, Log4j

APIs: Servlets, EJB, Java Naming and Directory Interface (JNDI), MapReduce

Development Tools: Eclipse, RAD/RSA (Rational Software Architect), IBM DB2 Command Editor, QTOAD, SQL Developer, Microsoft Office suite (Word, Excel, PowerPoint, Access), OpenOffice suite (Editor, Calc, etc.), VMware

Databases: IBM DB2 9.x, Oracle 11g/10g

NoSQL Databases: HBase

Servers: WebSphere (WAS) 6.x/7.0, WebLogic 10g-12c, JBoss 5.1/6.0

PROFESSIONAL EXPERIENCE

Confidential, Bloomington, IL

Hadoop Developer

Responsibilities:

  • Worked on analyzing Hadoop clusters and different Big Data analytic tools, including Pig, Hive, the HBase database, and Sqoop.
  • Installed Hadoop, MapReduce, and HDFS, and developed multiple MapReduce jobs in Pig and Hive for data cleaning and pre-processing.
  • Coordinated with business customers to gather business requirements, interacted with technical peers to derive technical requirements, and delivered the BRD and TDD documents.
  • Extensively involved in the design phase and delivered design documents.
  • Involved in testing and coordinated user testing with the business.
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Wrote Hive jobs to parse logs and structure them in tabular format to facilitate effective querying of the log data.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Experienced in defining job flows.
  • Used Hive to analyze partitioned and bucketed data and compute various metrics for reporting (see the query sketch after this list).
  • Experienced in managing and reviewing Hadoop log files.
  • Used Pig as an ETL tool for transformations, event joins, and some pre-aggregations before storing the data in HDFS.
  • Loaded and transformed large sets of structured and semi-structured data.
  • Responsible for managing data coming from different sources.
  • Utilized the Apache Hadoop environment distributed by Cloudera.
  • Created the data model for Hive tables.
  • Involved in unit testing and delivered unit test plans and results documents.
  • Exported data from HDFS into an RDBMS using Sqoop for report generation and visualization purposes.
  • Worked on the Oozie workflow engine for job scheduling.
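
To illustrate the partitioned-table reporting described above, here is a minimal sketch of querying Hive from Java through the HiveServer2 JDBC driver. The host, credentials, and the web_logs table with its event_date partition column are placeholders, not details from this engagement.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/** Runs a reporting query against a partitioned Hive table over JDBC. */
public class HiveReportQuery {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // Host, port, user, and table/column names below are placeholders.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://hive-server:10000/default", "hive", "");
             Statement stmt = conn.createStatement()) {
            // Filtering on the partition column keeps the underlying
            // MapReduce job scoped to a single day of log data.
            ResultSet rs = stmt.executeQuery(
                "SELECT status, COUNT(*) AS hits " +
                "FROM web_logs WHERE event_date = '2014-06-01' " +
                "GROUP BY status");
            while (rs.next()) {
                System.out.println(rs.getString("status") + "\t" + rs.getLong("hits"));
            }
        }
    }
}
```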

Environment: Apache Solr 4.5, Hadoop 2, Sqoop, Pig, Hive, HCatalog, Java 6, Eclipse, Apache Tomcat 6.0, Oracle, J2EE, Spring 3.0, Hibernate 3.

Confidential, Richmond, VA

Hadoop Developer

Responsibilities:

  • Worked as a Hadoop and Java developer, involved in the design and development of a three-tier application using Java/J2EE, Hive, Pig, Apache Tomcat, Spring, and Oracle.
  • Installed, configured, and maintained Apache Hadoop clusters for application development, along with Hadoop tools such as Hive, Pig, HBase, ZooKeeper, and Sqoop.
  • Wrote shell scripts to monitor the health of Hadoop daemon services and respond to warning or failure conditions.
  • Installed and configured Hadoop, MapReduce, and HDFS (Hadoop Distributed File System); developed multiple MapReduce jobs in Java for data cleaning.
  • Developed a data pipeline using Flume, Sqoop, Pig, and Java MapReduce to ingest customer behavioral data and financial histories into HDFS for analysis.
  • Involved in collecting and aggregating large amounts of log data using Apache Flume and staging the data in HDFS for further analysis.
  • Collected log data from web servers and integrated it into HDFS using Flume.
  • Worked on cluster installation, commissioning and decommissioning of DataNodes, NameNode recovery, capacity planning, and slots configuration.
  • Implemented NameNode backup using NFS for high availability.
  • Developed Pig Latin scripts to extract data from web server output files and load it into HDFS.
  • Used Pig as an ETL tool for transformations, event joins, and some pre-aggregations before storing the data in HDFS.
  • Responsible for developing a data pipeline using HDInsight, Flume, Sqoop, and Pig to extract data from weblogs and store it in HDFS.
  • Installed the Oozie workflow engine to run multiple Hive and Pig jobs.
  • Used Sqoop to import and export data between HDFS and RDBMS.
  • Used Hive, created Hive tables, and was involved in data loading and in writing Hive UDFs (see the UDF sketch after this list).
  • Exported the analyzed data to relational databases using Sqoop for visualization and report generation.
  • Involved in migrating ETL processes from Oracle to Hive to test easier data manipulation.
  • Used Hive to analyze partitioned and bucketed data and compute various metrics for reporting.
  • Worked on importing and exporting data from Oracle and DB2 into HDFS and Hive using Sqoop.
  • Worked on NoSQL databases including HBase, MongoDB, and Cassandra.
  • Created Hive external tables, loaded data into them, and queried the data using HiveQL.
  • Wrote shell scripts to automate rolling day-to-day processes.
  • Automated workflows using shell scripts to pull data from various databases into Hadoop.
  • Deployed Hadoop clusters in fully distributed and pseudo-distributed modes.
  • Supported the setup of the QA environment and updated configurations for implementing scripts with Pig and Sqoop.
  • Experience in AWS (Amazon Web Services).
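
As a sketch of the Hive UDF work mentioned above, here is a simple UDF in the classic org.apache.hadoop.hive.ql.exec.UDF style. The masking behavior and function name are hypothetical, chosen only to show the evaluate-method pattern.

```java
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

/**
 * Hive UDF that masks all but the last four characters of a value,
 * e.g. SELECT mask_tail(account_no) FROM accounts;
 */
public class MaskTail extends UDF {
    public Text evaluate(Text input) {
        if (input == null) {
            return null;                        // pass NULLs through unchanged
        }
        String s = input.toString();
        int keep = Math.min(4, s.length());
        StringBuilder masked = new StringBuilder();
        for (int i = 0; i < s.length() - keep; i++) {
            masked.append('*');                 // mask leading characters
        }
        masked.append(s.substring(s.length() - keep));
        return new Text(masked.toString());
    }
}
```

Compiled into a jar, a UDF like this would be registered in a Hive session with ADD JAR followed by CREATE TEMPORARY FUNCTION mask_tail AS 'MaskTail'.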

Environment: Hadoop, MapReduce, Hive, HDFS, PIG, Sqoop, Oozie, Cloudera, Flume, HBase, Zookeeper, CDH3, MongoDB, Cassandra, Oracle, NoSQL and Unix/Linux.

Confidential, Dallas, TX

Hadoop Developer

Responsibilities:

  • Installed and configured Hadoop; responsible for maintaining the cluster and for managing and reviewing Hadoop log files.
  • Worked on analyzing the Hadoop cluster and different big data analytic tools, including Pig, the HBase NoSQL database, and Sqoop.
  • Installed and configured Hadoop MapReduce and HDFS; developed multiple MapReduce jobs in Java for data cleaning, pre-processing, and general data processing.
  • Performed data analysis in Hive by creating tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Implemented the Fair Scheduler on the JobTracker to share cluster resources among the users' MapReduce jobs.
  • Developed MapReduce programs to apply business rules to the data.
  • Used Sqoop to import and export data between HDFS and RDBMS.
  • Created HBase tables to store various formats of PII data coming from different portfolios, and implemented MapReduce jobs for loading data from an Oracle database into the NoSQL store (see the client sketch after this list).
  • Exported data from DB2 to HDFS using Sqoop and an NFS-mount approach.
  • Worked on custom Pig loader and storage classes to handle a variety of data formats such as JSON and compressed CSV.
  • Used Cloudera Manager for installation and management of the Hadoop cluster.
  • Wrote Pig scripts to run ETL jobs on the data in HDFS.
  • Involved in scheduling the Oozie workflow engine to run multiple Hive and Pig jobs.
  • Moved data from Hadoop to Cassandra using the bulk output format class.
  • Used Sqoop to import data into HDFS and Hive from other data systems.
  • Worked with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.
  • Developed and executed Hive queries for denormalizing the data.
  • Involved in regular Hadoop cluster maintenance such as patching security holes and updating system packages.
  • Automated the workflow using shell scripts.
  • Performed performance tuning of Hive queries written by other developers.
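
To illustrate the HBase work above, here is a minimal sketch of a single-row write using the classic HTable client API of that era (CDH3-vintage HBase). The table name, column family, row key, and values are placeholders, not actual PII schema details.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

/** Writes one record into an HBase table keyed by customer id. */
public class HBaseCustomerWriter {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml
        HTable table = new HTable(conf, "customer_pii");  // placeholder table name
        try {
            Put put = new Put(Bytes.toBytes("cust-00042")); // row key
            put.add(Bytes.toBytes("pii"), Bytes.toBytes("ssn_masked"),
                    Bytes.toBytes("***-**-6789"));
            put.add(Bytes.toBytes("pii"), Bytes.toBytes("dob"),
                    Bytes.toBytes("1980-01-01"));
            table.put(put);                               // single-row write
        } finally {
            table.close();
        }
    }
}
```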

Environment: Hadoop, MapReduce, Pig, Hive, HBase, Oozie, HDFS, Sqoop, Cloudera, Flume, Zookeeper, CDH3, MongoDB, Cassandra, Oracle, NoSQL, DB2 and Unix/Linux.

Confidential, Dallas, TX

Core Java/J2EE Developer

Responsibilities:

  • Designed and developed the interface modules between Oracle and Java.
  • Participated in sprint meetings following the Agile development methodology.
  • Involved in all phases of the project life cycle, from requirements gathering to quality assurance testing.
  • Developed class diagrams and sequence diagrams using Rational Rose.
  • Responsible for developing rich web interface modules with Struts tags, JSP, JSTL, CSS, JavaScript, Ajax, and GWT.
  • Developed the presentation layer using the Struts framework and performed validations using the Struts Validator plug-in.
  • Created SQL scripts for the Oracle database.
  • Implemented the business logic using Spring transactions and Spring AOP.
  • Implemented the persistence layer using Spring JDBC to store and update data in the database.
  • Produced web services using the WSDL/SOAP standard.
  • Implemented J2EE design patterns such as Singleton combined with Factory.
  • Extensively involved in the creation of session beans and MDBs using EJB 3.0.
  • Used the Hibernate framework for the persistence layer.
  • Extensively involved in writing stored procedures for data retrieval, storage, and updates in the Oracle database using Hibernate.
  • Built and deployed the application using Maven.
  • Performed testing using JUnit.
  • Used JIRA to track bugs.
  • Extensively used Log4j for logging throughout the application.
  • Produced a web service using REST with the Jersey implementation to provide customer information (see the resource sketch after this list).
  • Used SVN for source code versioning and as the code repository.
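
As a sketch of the Jersey REST service mentioned above, here is a minimal JAX-RS resource exposing customer information. The path, payload fields, and stubbed lookup are illustrative only.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.xml.bind.annotation.XmlRootElement;

/** REST endpoint providing customer information. */
@Path("/customers")
public class CustomerResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Customer getCustomer(@PathParam("id") String id) {
        // Lookup is stubbed; a real service would delegate to a DAO or facade.
        return new Customer(id, "Jane Doe");
    }

    /** Minimal JAXB-friendly payload type. */
    @XmlRootElement
    public static class Customer {
        public String id;
        public String name;

        public Customer() { }                  // required for JAXB/JSON binding
        public Customer(String id, String name) {
            this.id = id;
            this.name = name;
        }
    }
}
```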

Environment: Java (JDK 1.5), J2EE, Eclipse, JSP, JavaScript, JSTL, Ajax, GWT, Log4j, CSS, XML, Spring, EJB, MDB, Hibernate, WebLogic, REST, Rational Rose, JUnit, Maven, JIRA, SVN.

Confidential, Minnetonka, MN

Java Developer

Responsibilities:

  • Extensively involved in designing the database.
  • Implemented the project according to the Software Development Life Cycle (SDLC).
  • Implemented JDBC to map an object-oriented domain model to a traditional relational database (see the DAO sketch after this list).
  • Created stored procedures to manipulate the database and apply the business logic according to the user's specifications.
  • Developed generic classes encapsulating frequently used functionality so it can be reused.
  • Implemented an exception management mechanism using exception handling application blocks.
  • Designed and developed user interfaces using JSP, JavaScript, and HTML.
  • Involved in database design and in developing SQL queries and stored procedures on MySQL.
  • Used CVS for maintaining the source code.
  • Updated day-to-day work status via email to onsite and offshore leads.
  • Constructed unit test cases and performed unit testing.
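
To illustrate the JDBC-based domain mapping above, here is a minimal DAO sketch in plain JDBC. The users table, its columns, and the connection details are hypothetical.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/** Maps rows of a hypothetical USERS table onto domain data via plain JDBC. */
public class UserDao {
    private final String url;
    private final String user;
    private final String password;

    public UserDao(String url, String user, String password) {
        this.url = url;
        this.user = user;
        this.password = password;
    }

    /** Returns the user's name, or null if the id is not found. */
    public String findUserName(int userId) throws SQLException {
        String sql = "SELECT name FROM users WHERE id = ?";
        try (Connection conn = DriverManager.getConnection(url, user, password);
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, userId);              // bind the primary key
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}
```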

Environment: Java, JavaScript, HTML, JDBC drivers, SOAP web services, UNIX, shell scripting, SQL Server.

Confidential

Java Developer

Responsibilities:

  • Involved in the design, development, and support phases of the Software Development Life Cycle (SDLC).
  • Reviewed the functional, design, source code, and test specifications.
  • Involved in developing the complete front end using JavaScript and CSS.
  • Authored the functional, design, and test specifications.
  • Implemented the backend, configuration DAO, and XML generation modules of DIS.
  • Analyzed, designed, and developed the components.
  • Used JDBC for database access.
  • Used the Data Transfer Object (DTO) design pattern.
  • Performed unit testing and rigorous integration testing of the whole application.
  • Wrote and executed test scripts using JUnit (see the test sketch after this list).
  • Actively involved in system testing.
  • Developed an XML parsing tool for regression testing.
  • Prepared the installation, customer guide, and configuration documents delivered to the customer along with the product.
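
As a sketch of the JUnit test scripts mentioned above, here is a small, self-contained JUnit 4-style test around XML parsing, in the spirit of the regression-testing work; the XML snippets are invented for illustration.

```java
import static org.junit.Assert.assertEquals;

import java.io.StringReader;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.junit.Test;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

/** Regression tests for XML parsing using the standard JAXP DOM parser. */
public class XmlParsingTest {

    private Document parse(String xml) throws Exception {
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        return builder.parse(new InputSource(new StringReader(xml)));
    }

    @Test
    public void readsElementTextFromWellFormedDocument() throws Exception {
        Document doc = parse("<order><id>42</id></order>");
        assertEquals("42",
                doc.getElementsByTagName("id").item(0).getTextContent());
    }

    @Test(expected = org.xml.sax.SAXException.class)
    public void rejectsMalformedDocument() throws Exception {
        parse("<order><id>42</order>");        // mismatched tags must fail
    }
}
```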

Environment: Java, JavaScript, HTML, CSS, JDK 1.5.1, JDBC, Oracle 10g, XML, XSL, Solaris, and UML
