
Hadoop Developer Resume


Eden Prairie, MN


  • Excellent understanding and thorough knowledge of Hadoop architecture and components such as MapReduce and the Hadoop Distributed File System (HDFS).
  • Hands on experience in writing Pig Latin scripts, developing Apache Hadoop MapReduce programs for data processing.
  • Experience in reading and loading Hive tables using the HCatalog loader (HCatLoader).
  • Experience in Importing and exporting data from RDBMS into HDFS using Sqoop Import and Export.
  • Experience in designing both time and data driven workflows using Oozie.
  • Good understanding of Cloudera, Hortonworks and MapR distributions of Hadoop.
  • Experience in creating and maintaining Stored Procedures, Triggers and functions and strong RDBMS concepts using SQL server.
  • Analyzed large datasets using Hive queries and extended Hive and Pig core functionality with custom UDFs.
  • Experience in working with NoSQL databases like HBase.
  • In-depth knowledge of all the Hadoop daemons: JobTracker, TaskTracker, NameNode, DataNode and Secondary NameNode.
  • Experience in installing, configuring and administering Hadoop clusters for major Hadoop distributions.
  • Experience in developing and modifying the shell scripts to define the system variables and invoke map reduce jobs.
  • Experience in working with Hadoop in standalone, pseudo-distributed and fully distributed modes.
  • Experience using Flume for efficiently collecting, aggregating and moving large amounts of log data.
  • Extensive understanding and experience in design and development of Java/J2EE applications using Core Java, Java collections framework, JDBC, JSP, Servlets, JSON, HTML and Java Script.
  • Experience with IDEs such as Eclipse and MyEclipse, and web servers like Apache Tomcat.
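The MapReduce model referenced throughout this resume can be sketched in plain Python — a hedged, single-process illustration of the map, shuffle and reduce phases using a word count, not Hadoop code itself:

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit (word, 1) pairs for each word in the input line.
    for word in line.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Shuffle phase: group values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    # Reduce phase: sum the counts for each word.
    return key, sum(values)

def run_job(lines):
    # A local stand-in for a distributed MapReduce run.
    mapped = (pair for line in lines for pair in mapper(line))
    return dict(reducer(k, v) for k, v in shuffle(mapped).items())

counts = run_job(["big data big cluster", "big data"])
print(counts)  # {'big': 3, 'data': 2, 'cluster': 1}
```

In Hadoop the shuffle is performed by the framework between distributed map and reduce tasks; here it is collapsed into one in-memory grouping step.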


Tools/Methods: Hive, Pig, Sqoop, Oozie, MapReduce, HDFS, Flume, Zookeeper

Languages: C, C++, Java (Core Java, Servlets, JSP, JDBC, Java Beans)

NoSQL Databases: HBase, Cassandra

Scripting: Shell scripting, Perl, Python

Databases: Oracle 11g, MS SQL Server, MySQL

Web Technologies: Apache Tomcat, HTML, XML, JavaScript, PHP, CSS, Ajax, JSON

IDEs: Eclipse, MyEclipse

Protocols: HTTP, TCP/IP, DNS


Hadoop Developer

Confidential, Eden Prairie, MN


  • Responsible for writing Pig Latin scripts to filter data loaded from Hive tables and to apply business logic to that data.
  • Involved in creating Hive tables, loading them with data and writing Hive queries, which invoke and run MapReduce jobs in the backend.
  • Developed workflows on Zaloni Bedrock servers to run MapReduce jobs; created file patterns that routed data to the desired locations, using a control file as the trigger.
  • Responsible for converting incoming fixed-width and delimited data to Avro through MapReduce jobs and loading it into Hive tables.
  • Refactored the schema builder to create the Avro schema file from the actual meta file along with the data.
  • Used HCatLoader to load data from Hive tables into Pig relations and performed analysis on top of those relations.
  • Used Ant Hill Pro (AHP) to build the projects and deploy them to development, test and production environments.
  • Responsible for refactoring and writing the new shell scripts according to the new MapR environment.
  • Developed UDFs in Pig to handle different date formats, check coverage areas and pad digits where required.
  • Followed Agile methodology and updated assigned task status in Rally.
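The date-format and digit-padding UDFs mentioned above might encapsulate logic along these lines — a plain-Python sketch of the normalization (the actual UDFs ran inside Pig; the function names and input formats here are illustrative assumptions):

```python
from datetime import datetime

# Input date formats the data is assumed to arrive in (illustrative).
KNOWN_FORMATS = ["%m/%d/%Y", "%Y-%m-%d", "%d-%b-%Y"]

def normalize_date(value):
    # Try each known format and emit a single canonical form.
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(value, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    return None  # unparseable dates pass through as nulls

def pad_digits(value, width=10):
    # Left-pad numeric identifiers to a fixed width, as a padding UDF would.
    return value.zfill(width)

print(normalize_date("12/31/2014"))  # 2014-12-31
print(pad_digits("42", 5))           # 00042
```

In Pig, each function would be registered and invoked per-tuple in a FOREACH ... GENERATE statement; the parsing and padding logic itself is what the UDF carries.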

Environment: MapR 3.0.2, Eclipse, Java (JDK 1.7), MySQL, Hive, Pig, Linux, Apache Maven, AnthillPro build tool, Rally, Artifactory, SVN.

Hadoop Developer/Admin

Confidential, NC


  • Responsible for coding MapReduce programs and Hive queries, and for testing and debugging the MapReduce programs.
  • Developed Pig Latin scripts to analyze large data sets in areas where extensive coding needed to be reduced.
  • Used the Sqoop tool to extract data from a relational database into Hadoop.
  • Worked closely with data warehouse architect and business intelligence analyst to develop solutions.
  • Responsible for performing peer code reviews, troubleshooting issues and maintaining status report.
  • Involved in creating Hive tables, loading them with data and writing Hive queries, which invoke and run MapReduce jobs in the backend.
  • Installed and configured Hadoop cluster in DEV, QA and Production environments.
  • Performed upgrades to the existing Hadoop clusters.
  • Enabled Kerberos authentication for the Hadoop cluster and integrated it with Active Directory for managing users and application groups.
  • Implemented commissioning and decommissioning of nodes in the existing cluster.
  • Worked with systems engineering team for planning new Hadoop environment deployments, expansion of existing Hadoop clusters.
  • Monitored workload, job performance and capacity planning using Cloudera Manager.
  • Worked with application teams to install OS level updates, patches and version upgrades required for Hadoop cluster environments.
  • Supported in setting up QA environment and updating configurations for implementing scripts with Pig, Hive and Sqoop.
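A Sqoop import like the one described above pulls table rows out of an RDBMS and lands them as delimited text files in HDFS. As a hedged sketch of that data flow (not Sqoop itself — the table and data are illustrative), the same extraction can be shown against an in-memory SQLite table:

```python
import sqlite3

def export_table(conn, table):
    # Read every row and render it as comma-delimited lines,
    # mirroring Sqoop's default text output format.
    rows = conn.execute(f"SELECT * FROM {table}").fetchall()
    return "\n".join(",".join(str(col) for col in row) for row in rows)

# Stand-in for the source relational database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, item TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, "disk"), (2, "rack")])

print(export_table(conn, "orders"))
# 1,disk
# 2,rack
```

Real Sqoop parallelizes this by splitting the table across mapper tasks on a split-by column and writing one output file per mapper; the row-to-delimited-text conversion is the common core.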

Environment: Apache Hadoop, Java (JDK 1.7), MySQL, Hive, Pig, Sqoop, Linux, CentOS.

Hadoop Developer



  • Responsible for creating Hive tables, loading data and writing Hive queries.
  • Wrote custom UDFs in Hive and Pig.
  • Responsible for building scalable distributed data solutions using Hadoop.
  • Responsible for Cluster maintenance, adding and removing cluster nodes, Cluster Monitoring and Troubleshooting, Manage and review data backups and log files.
  • Analyzed data using Hadoop components Hive and Pig.
  • Responsible for running Hadoop Streaming jobs to process terabytes of XML data.
  • Loaded and transformed large sets of structured, semi-structured and unstructured data using Hadoop/Big Data concepts.
  • Handled importing data from various data sources, performed transformations using Hive and MapReduce, and loaded data into HDFS.
  • Imported and exported the data using Sqoop Export and Sqoop Import.
  • Installed Oozie workflow engine to run multiple Hive and Pig jobs, which run independently with time and data availability.
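Hadoop Streaming jobs like the XML-processing one above run external scripts that read records on stdin and write tab-separated key/value pairs on stdout. A minimal, hedged mapper/reducer pair in that style — driven locally here rather than by Hadoop, with a toy tag-counting job standing in for the real XML processing:

```python
from itertools import groupby

def mapper(lines):
    # Emit one "tag\t1" record per opening XML-ish tag seen in the input.
    for line in lines:
        for token in line.split():
            if token.startswith("<") and not token.startswith("</"):
                yield f"{token.strip('<>')}\t1"

def reducer(records):
    # Streaming reducers receive records sorted by key; sum counts per key.
    keyed = (record.split("\t") for record in sorted(records))
    for key, group in groupby(keyed, key=lambda kv: kv[0]):
        yield f"{key}\t{sum(int(v) for _, v in group)}"

output = list(reducer(mapper(["<order> <item> </item>", "<order>"])))
print(output)  # ['item\t1', 'order\t2']
```

Under Hadoop, the same two functions would be standalone scripts passed via `-mapper` and `-reducer`, with the framework handling the sort between them.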

Environment: Apache Hadoop, Java (JDK 1.6), Oracle, MySQL, Sqoop, Linux, Hive, Pig.

Java Developer



  • Designed, developed JSP, Servlets and deployed them on Apache Tomcat Web Server.
  • Extensively developed User defined Custom JSP tags to separate presentation from application logic.
  • Involved in writing SQL, stored procedures and PL/SQL for the back end; used views and functions at the Oracle database end, and developed PL/SQL scripts for rebuilding the application's Oracle database.
  • Developed HTML, JavaScript and scripts for UNIX platform deployment.
  • Used ANT for compilation and building JAR, WAR and EAR files.
  • Involved in System Analysis and Design methodology as well as Object Oriented Design and Development using OOAD methodology to capture and model business requirements.
  • Performed Object-Oriented Design using UML with Rational Rose.
  • Review and guide the application architects with Object Oriented Design using Rational Unified Process (RUP).
  • Involved in writing JUnit test cases and in unit and integration testing of the application.
  • Used Apache Subversion for checking code in and out on a daily basis.

Environment: Java, J2EE, JSP, Servlets, HTML, Apache Subversion, CSS, XML, JavaScript, AJAX, Apache Tomcat, Oracle 10g/9i, JUnit, JDBC, PL/SQL, Eclipse, ANT.

UNIX Administrator



  • Installed and maintained the Linux servers.
  • Installed CentOS on multiple servers using Preboot Execution Environment (PXE) boot and the Kickstart method.
  • Monitored System Metrics and logs for any problems.
  • Configured crontab jobs to back up data.
  • Worked with Telnet, FTP, TCP/IP and rlogin to interoperate between hosts.
  • Carried out various systems administration tasks under CentOS and Red Hat Linux environments.
  • Performed regular day-to-day system administrative tasks including User Management, Backup, Network Management, and Software Management including Documentation.
  • Recommended system configurations for clients based on estimated requirements.
  • Added, removed and updated user account information, and reset passwords.
  • Upgraded software, applied patches and added new hardware on UNIX machines.
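The cron-driven backups mentioned above typically wrap a directory into a dated tar archive. A hedged Python sketch of that backup step — the paths here are throwaway temporary directories, not the production layout:

```python
import os
import tarfile
import tempfile
from datetime import date

def backup_dir(src_dir, dest_dir):
    # Create a dated .tar.gz of src_dir, the way a nightly cron backup would.
    archive = os.path.join(dest_dir, f"backup-{date.today():%Y%m%d}.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src_dir, arcname=os.path.basename(src_dir))
    return archive

# Demonstrate against temporary directories.
src = tempfile.mkdtemp()
dest = tempfile.mkdtemp()
with open(os.path.join(src, "data.txt"), "w") as f:
    f.write("important records")

archive = backup_dir(src, dest)
print(os.path.exists(archive))  # True
```

In practice the equivalent shell one-liner would be registered in crontab (e.g. a nightly entry invoking `tar czf`), with rotation of old archives handled separately.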
