
Sr. Lead Hadoop Developer Resume


San Ramon, CA

SUMMARY:

  • 12+ years of professional IT experience, including 7+ years as a Hadoop Developer working with Big Data ecosystem technologies.
  • Experienced in understanding, identifying, and architecting Big Data solutions using Hadoop and related ecosystems powered by NoSQL.
  • Excellent understanding/knowledge of Hadoop architecture and its components, such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node, and the MapReduce programming paradigm.
  • Experience in managing and reviewing Hadoop log files.
  • Hands-on experience in import/export of data using the Hadoop data management tool Sqoop.
  • Strong experience in writing MapReduce programs for data analysis. Hands-on experience in writing custom partitioners for MapReduce (a minimal sketch follows this summary).
  • Performed data analysis using Hive and Pig.
  • Capable of setting up Hadoop infrastructure on Multiple operating systems, using various distributions of Hadoop including Open Source, Cloudera and Hortonworks.
  • Experience with troubleshooting failed jobs and analyzing the root cause.
  • Excellent understanding and knowledge of NoSQL databases like HBase.
  • Hands on experience in application development using RDBMS, and Linux shell scripting.
  • Development experience in DBMS like Oracle, MS SQL Server and MYSQL.
  • Hands on experience on writing Queries, Stored procedures, Functions and Triggers by using SQL.
  • An excellent team player and self-starter with good communication skills and proven abilities to finish tasks before target deadlines.
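
A minimal sketch of a custom MapReduce partitioner like the ones mentioned above; the class name, key layout, and region prefix are hypothetical:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Routes records to reducers by the key's region prefix (e.g. "west|sales"),
    // so all records for one region land in the same reducer.
    public class RegionPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            String region = key.toString().split("\\|")[0];
            // Mask the sign bit so a negative hashCode cannot produce a negative partition.
            return (region.hashCode() & Integer.MAX_VALUE) % numPartitions;
        }
    }

The partitioner would be registered on the job with job.setPartitionerClass(RegionPartitioner.class).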

TECHNICAL SKILLS:

Tools: Microsoft Office (Word, Excel, PowerPoint), Photoshop, QlikView, Tableau, SharePoint

O.S: Windows, Unix

Language: Java, C++, Visual Basic, PHP, .NET, SQL, T-SQL, DAL, HTML, JavaScript, CSS

Hadoop Technologies: HDFS, MapReduce, Hive, Pig, Sqoop, Flume, Oozie

Database: Oracle, SQL Server, MS Access, PostgreSQL

Others: PuTTY, batch files, FTP

PROFESSIONAL EXPERIENCE:

Confidential, San Ramon, CA

Sr. Lead Hadoop Developer

Responsibilities:

  • Handled importing of data from various data sources; performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from MySQL into HDFS using Sqoop.
  • Responsible for building scalable distributed data solutions using Hadoop.
  • Developed simple to complex MapReduce jobs using Hive and Pig.
  • Installed, configured, and used Hadoop ecosystem components.
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Managed and reviewed Hadoop log files.
  • Installed the Hadoop cluster on multiple servers as well as on a single machine.
  • Participated in development/implementation of the Cloudera Hadoop environment.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data.
  • Responsible for managing data coming from different sources.
  • Gained good experience with NoSQL databases (a minimal sketch follows this list).
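
One possible shape of the NoSQL work mentioned above, sketched with the HBase Java client API (HBase is the NoSQL store named elsewhere in this resume); the table, column family, and row key are hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseRoundTrip {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("customers"))) {
                // Write one cell: row key "cust_1001", column family "info", qualifier "name".
                Put put = new Put(Bytes.toBytes("cust_1001"));
                put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Acme Corp"));
                table.put(put);

                // Read the same cell back.
                Result result = table.get(new Get(Bytes.toBytes("cust_1001")));
                System.out.println(Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));
            }
        }
    }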

Confidential, NJ

Hadoop/Big Data Developer

Responsibilities:

Confidential Track lets executives and salespeople track the latest news on their competitors, customers, or key markets in a timely fashion, from a single point of access. Confidential Search is designed so that information workers spend less time searching and more time using the information to make their best business decisions.
  • Participated in the design of the system using various diagrams.
  • Wrote Java classes on the server side to send SOAP responses to different clients.
  • Developed simple to complex MapReduce jobs using Hive and Pig.
  • Installed, configured, and used Hadoop ecosystem components.
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Managed and reviewed Hadoop log files.
  • Installed the Hadoop cluster on multiple servers as well as on a single machine.
  • Participated in development/implementation of the Cloudera Hadoop environment.
  • Developed the front end using JSP and Servlets.
  • Used both the Apache SOAP project and the Web Services Toolkit to build Web services and Web service clients using WSDL, registering the services in the UDDI Preview edition.
  • Created tables, indexes, sequences, constraints and snapshots.
  • Also used the new WSAD features to create and publish Web services.
  • Client memory footprints were important and user flow had to be tracked, so XSL data transformations were done on the server side and the results presented to the browser (a minimal sketch follows this section).
  • Gained good experience with NoSQL databases.
  • Implemented XSL and XSLT for formatting.
  • Used both DOM2 and SAX2 parsers.

Environment: JSP, Servlet, XML Schema 1.0, XSLT 1.0, XSL, SAX2, JSF, AJAX, XPath 1.0, Namespaces, DTDs, Xerces, J2EE (Servlets, JSP, JDBC, Struts), WebLogic 7.0, Tomcat 4.0, ANT, Oracle 10g, Oracle Reports 10g, JSTL 1.0, SOAP 1.0, UDDI 1.0, WSDL 1.0, UNIX, XSL
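
A minimal sketch of the server-side XSL transformation described above, using the standard javax.xml.transform API; the file names are hypothetical:

    import java.io.StringWriter;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class XslFormatter {
        // Applies a stylesheet to an XML document and returns the formatted markup,
        // keeping the transformation on the server so the client stays lightweight.
        public static String transform(String xmlPath, String xslPath) throws Exception {
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(xslPath));
            StringWriter out = new StringWriter();
            transformer.transform(new StreamSource(xmlPath), new StreamResult(out));
            return out.toString();
        }
    }

A servlet could call XslFormatter.transform("response.xml", "format.xsl") and write the result to the browser.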


Confidential

Hadoop Developer

Project Responsibilities:

  • Architected and delivered projects for large customers on Big Data platforms.
  • Designed and built Hadoop solutions for big data problems.
  • Developed MapReduce applications using Hadoop, MapReduce programming, and HBase.
  • Developed transformations using custom MapReduce, Pig, and Hive.
  • Involved in developing Pig scripts.
  • Involved in developing Hive reports.
  • Implemented map-side join and reduce-side join in Java MapReduce (a map-side join sketch follows this section).
  • Developed Sqoop scripts to enable interaction between Pig and the MySQL database.
  • Involved in converting Hive/SQL queries into Spark functionality and analyzing them using the Scala API.
  • Involved in HBase data modeling and row key design.
  • Developed and configured HBase and Hive tables to load data into HBase and Hive respectively.
  • Ingested data into HDFS using tools like Sqoop, Flume, and the HDFS client APIs.
  • Implemented a POC using Spark.
  • Implemented test scripts to support test-driven development and continuous integration.
  • Created Hive external tables, added partitions, and worked to improve Hive performance.
  • Performed a POC on single-member debugging in Spark and Hive.
  • Configured various big data workflows to run on top of Hadoop; these workflows comprised heterogeneous jobs such as Pig, Hive, Sqoop, and MapReduce.
  • Imported/exported data between relational and NoSQL databases and Hadoop using Sqoop.
  • Worked on tuning the performance of Hive and Pig queries.
  • Wrote Java code for custom partitioners and writables.

Environment: Hadoop, MapReduce, Apache Pig, Hive, HBase, Oozie, Flume, Spark SQL, Sqoop, UNIX, MySQL, Teradata, Cassandra, Linux/UNIX shell scripting, Java, Linux, SQL, Big Data, Spark, Cloudera Hadoop Distribution
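
A minimal sketch of the map-side join mentioned above: the smaller table travels to every mapper through the distributed cache, is loaded into memory in setup(), and the join happens in map() with no shuffle. The file names and record layouts are hypothetical:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class MapSideJoinMapper extends Mapper<LongWritable, Text, Text, Text> {
        private final Map<String, String> customers = new HashMap<>();

        @Override
        protected void setup(Context context) throws IOException {
            // Small lookup table (customer_id,name), shipped to each mapper with
            // job.addCacheFile(...) and localized into the task's working directory.
            try (BufferedReader reader = new BufferedReader(new FileReader("customers.csv"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] parts = line.split(",", 2);
                    customers.put(parts[0], parts[1]);
                }
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Large table rows look like: customer_id,order_id,amount
            String[] fields = value.toString().split(",", 3);
            String name = customers.getOrDefault(fields[0], "UNKNOWN");
            // Emit the joined record; a pure map-side join needs no reducer.
            context.write(new Text(fields[1]), new Text(name + "," + fields[2]));
        }
    }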

Confidential - Princeton, NJ

Hadoop Developer

Responsibilities:

  • Gained knowledge of real-time message processing systems (Storm, S4).
  • Collected business requirements from business partners and experts.
  • Involved in installing Hadoop ecosystem components.
  • Responsible for managing data coming from different sources.
  • Used Apache Flume to ingest log data from multiple sources directly into HDFS.
  • Customized Flume to enrich data with LDAP and GeoIP lookups.
  • Involved in writing MapReduce programs running on the cluster.
  • Involved in HDFS maintenance and loading of structured and unstructured data.
  • Installed and configured Pig and wrote Pig Latin scripts.
  • Wrote MapReduce jobs using the Java API.
  • Wrote MapReduce jobs using Pig Latin.
  • Imported data from MySQL to HDFS using Sqoop.
  • Developed scripts and batch jobs to schedule various Hadoop programs.
  • Wrote Hive queries for data analysis to meet business requirements and generated reports.
  • Created Hive tables using HiveQL and worked on them.
  • Wrote Hive UDFs for frequently used HiveQL queries (a minimal sketch follows this section).
  • Utilized Agile Scrum Methodology to help manage and organize a team of 4 developers with regular code review sessions.
  • Regular meetings with technical teams and active participation in code review sessions with other developers.
  • Used Continuum for integration testing and JUnit for unit testing.

Environment: Hadoop, HDFS, MapReduce, UNIX, Flume, Python, Pig, MySQL, MySQL Workbench, Hive
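
A minimal sketch of a Hive UDF like the ones mentioned above; the function name and normalization logic are hypothetical:

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Normalizes free-form status values so reports can GROUP BY a clean code, e.g.:
    //   SELECT normalize_status(status), COUNT(*) FROM orders GROUP BY normalize_status(status);
    public final class NormalizeStatus extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;
            }
            return new Text(input.toString().trim().toUpperCase());
        }
    }

After packaging into a jar, it would be registered with ADD JAR and CREATE TEMPORARY FUNCTION normalize_status AS 'NormalizeStatus'.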

Confidential, New York City, NY

Java/J2EE Developer

Responsibilities:

  • Responsible for understanding the scope of the project and requirement gathering.
  • Reviewed and analyzed the design and implementation of software components/applications and outlined the development process strategies.
  • Coordinated with project managers and the development and QA teams during the course of the project.
  • Used Spring JDBC to write DAO classes that interact with the database to access account information (a minimal sketch follows this list).
  • Used the Tomcat web server for development purposes.
  • Involved in the creation of test cases for JUnit testing.
  • Used Oracle as the database and Toad for query execution; wrote SQL scripts and PL/SQL code for procedures and functions.
  • Used CVS and Perforce as configuration management tools for code versioning and release.
  • Developed the application using Eclipse and used Maven as the build and deployment tool.
  • Used Log4J to print logging, debugging, warning, and info messages on the server console.
  • Extensively used Core Java, Servlets, JSP, and XML.
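
A minimal sketch of a Spring JDBC DAO of the kind described above; the accounts table, its columns, and the Account class are hypothetical:

    import java.math.BigDecimal;
    import java.util.List;
    import javax.sql.DataSource;
    import org.springframework.jdbc.core.JdbcTemplate;

    public class AccountDao {
        private final JdbcTemplate jdbcTemplate;

        public AccountDao(DataSource dataSource) {
            this.jdbcTemplate = new JdbcTemplate(dataSource);
        }

        // Maps each row of the accounts table onto a simple Account object.
        public List<Account> findByCustomerId(long customerId) {
            return jdbcTemplate.query(
                    "SELECT account_id, balance FROM accounts WHERE customer_id = ?",
                    (rs, rowNum) -> new Account(rs.getLong("account_id"), rs.getBigDecimal("balance")),
                    customerId);
        }
    }

    class Account {
        final long id;
        final BigDecimal balance;
        Account(long id, BigDecimal balance) { this.id = id; this.balance = balance; }
    }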

Confidential - Dallas, TX

Java Developer

Responsibilities:

  • Involved in Design, Development and Support phases of Software Development Life Cycle (SDLC).
  • Reviewed the functional, design, source code, and test specifications.
  • Involved in developing the complete front end using JavaScript and CSS.
  • Authored functional, design, and test specifications.
  • Analyzed, designed, and developed the component.
  • Used JDBC for database access.
  • Experienced in reading log files and responding to issues quickly.
  • Used the Data Transfer Object (DTO) design pattern (a minimal sketch follows this section).
  • Followed UML standards; created class and sequence diagrams.
  • Performed unit testing and rigorous integration testing of the whole application.
  • Prepared and executed test cases.
  • Actively involved in system testing.
  • Performed unit testing and documented the test results.
  • Prepared the installation, customer guide, and configuration documents, which were delivered to the customer along with the product.

Environment: Java/J2EE, SQL, Oracle 10g, JSP 2.0, EJB, AJAX, JavaScript, WebLogic 8.0, HTML, JDBC 3.0, Log4j, JUnit, Servlets, MVC, MyEclipse
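
A minimal sketch of the Data Transfer Object pattern mentioned above: a plain serializable carrier that moves a whole record between tiers in one call instead of many fine-grained remote getters. The fields are hypothetical:

    import java.io.Serializable;

    // Carries order data across tiers, e.g. from a session EJB to a JSP.
    public class OrderDTO implements Serializable {
        private static final long serialVersionUID = 1L;

        private String orderId;
        private String customerName;
        private double totalAmount;

        public String getOrderId() { return orderId; }
        public void setOrderId(String orderId) { this.orderId = orderId; }
        public String getCustomerName() { return customerName; }
        public void setCustomerName(String customerName) { this.customerName = customerName; }
        public double getTotalAmount() { return totalAmount; }
        public void setTotalAmount(double totalAmount) { this.totalAmount = totalAmount; }
    }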

Confidential PA, Harrisburg, PA

Java/J2EE Developer

Responsibilities:

  • Interacted with the client on a regular basis to gather requirements.
  • Understood the business, technical, and functional requirements.
  • Checked for timely delivery of various milestones.
  • Developed web services using the Spring Framework and Axis, including the design of the XML request/response structure.
  • Implemented the Hibernate/Spring framework for the database and business layers (a minimal sketch follows this list).
  • Configured Oracle with Hibernate; wrote Hibernate mapping and configuration files for database operations (create, update, select).
  • Involved in creating Oracle stored procedures for data/business logic.
  • Created T-SQL stored procedures for the contract generation module.
  • Involved in configuring and deploying code to different environments: Integration, QA, and UAT.
  • Prepared and designed system/acceptance test cases and executed them.
  • Created Ant build scripts to build artifacts.
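
A minimal sketch of the Hibernate usage described above, assuming a hibernate.cfg.xml that points at Oracle and a Contract.hbm.xml mapping file; the Contract entity and its fields are hypothetical:

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;
    import org.hibernate.cfg.Configuration;

    public class ContractDao {
        // Builds the factory from hibernate.cfg.xml, which lists the Oracle
        // connection settings and the Contract.hbm.xml mapping resource.
        private static final SessionFactory sessionFactory =
                new Configuration().configure().buildSessionFactory();

        public void save(Contract contract) {
            Session session = sessionFactory.openSession();
            Transaction tx = session.beginTransaction();
            try {
                session.save(contract); // INSERT generated from the hbm.xml mapping
                tx.commit();
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            } finally {
                session.close();
            }
        }
    }

    // Hypothetical mapped entity with the no-arg constructor Hibernate requires.
    class Contract {
        private Long id;
        private String contractNumber;
        Contract() { }
        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }
        public String getContractNumber() { return contractNumber; }
        public void setContractNumber(String contractNumber) { this.contractNumber = contractNumber; }
    }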
