
Hadoop Developer Resume


SUMMARY:

  • 12 years of experience in Enterprise Application Development, Web Applications, Client-Server Technologies and Web Programming with various languages and tools such as Java and JavaScript.
  • 5 years of experience in the design, development, analysis, maintenance and support of Big Data analytics using Hadoop ecosystem tools such as HDFS, Hive, Sqoop and Pig.
  • Experienced in building distributed systems that leverage Hadoop/Big Data, HDFS and MapReduce; analyzed performance bottlenecks and recommended tuning optimizations.
  • Proficient in Apache Hadoop ecosystem components including Pig scripts, Flume, HBase, ZooKeeper, Hive and Sqoop, with a strong understanding of HDFS architecture.
  • Solid working experience with ingestion, storage, querying, processing and analysis of large datasets.
  • Experience with Hadoop architecture and the Hadoop daemons: MapReduce, HDFS, JobTracker, TaskTracker, NameNode and DataNode.
  • Implemented proofs of concept on the Hadoop stack and various big data analytics tools, including migration from relational databases (e.g., Teradata, Oracle and MySQL) to Hadoop.
  • Hands-on experience developing MapReduce jobs in the Hadoop ecosystem (a minimal job sketch follows this list).
  • Experienced in job workflow scheduling and coordination tools such as Oozie and ZooKeeper.
  • Experience in configuring Hadoop clusters and HDFS.
  • Successfully loaded files into Hive and HDFS from MongoDB and HBase, and loaded datasets into Hive for ETL operations.
  • In-depth understanding of data structures and algorithms.
  • Experience in developing and deploying applications using WebLogic, Tomcat and JBoss.
  • Experience with backend databases such as Oracle, DB2, MySQL and SQL Server.
  • Team player, quick learner and self-starter with effective communication, motivation and organizational skills, combined with attention to detail and business process improvement.
  • Hands-on experience installing, configuring and using Apache Hadoop ecosystem components such as HDFS, Hadoop MapReduce, ZooKeeper, Oozie, Hive, Sqoop, Pig and Flume.
  • Expertise in writing Hadoop jobs for analyzing data using Hive and Pig.
  • Experience in writing MapReduce programs using Apache Hadoop to work with Big Data.
  • Experience importing and exporting data between HDFS and Relational Database Management Systems (RDBMS) using Sqoop (a Sqoop sketch also follows this list).
  • In-depth understanding of Hadoop architecture and its components: HDFS, JobTracker, TaskTracker, NameNode, DataNode and MapReduce concepts.
  • Good understanding of Data Mining and Machine Learning techniques.
  • Broad experience with SQL, PL/SQL and database concepts.
  • Experience optimizing MapReduce jobs using combiners and partitioners to deliver the best solutions.
  • Excellent programming skills in Java, JSP, JDBC, XML, HTML, XHTML and JavaScript, and in developing client-server, web and distributed applications.
  • Experience in database design, entity relationships, database analysis, programming, SQL, stored procedures, PL/SQL, packages and triggers in Oracle and SQL Server on Windows and UNIX.
  • Expertise in developing distributed business applications using EJB, implementing session beans for business logic, entity beans for persistence and message-driven beans for asynchronous communication.
  • Worked on different operating systems including UNIX/Linux and Windows XP.
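
A minimal sketch of a MapReduce job of the kind referenced above, written against the Hadoop 0.20 Java API used on these projects; the class names, tokenization rule and paths are illustrative only. Note the reducer doubling as a combiner, per the combiner/partitioner bullet:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            protected void map(LongWritable key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        ctx.write(word, ONE);   // emit (word, 1) for each token
                    }
                }
            }
        }

        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) sum += v.get();
                ctx.write(key, new IntWritable(sum));   // total count per word
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class);     // combiner cuts shuffle volume
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Such a job would be packaged into a jar and submitted to the cluster with the hadoop jar command.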
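Likewise, a sketch of a Sqoop import of the kind mentioned in the Sqoop bullet, driven from Java via the org.apache.sqoop.Sqoop entry point available in Sqoop 1.x rather than from the shell; the connection string, credentials, table name and paths are hypothetical:

    import org.apache.sqoop.Sqoop;

    public class SqoopImportDemo {
        public static void main(String[] args) {
            // Equivalent to `sqoop import ...` on the command line;
            // all connection details below are placeholders.
            String[] importArgs = {
                "import",
                "--connect", "jdbc:mysql://dbhost/sales",
                "--username", "etl_user",
                "--password", "changeme",       // prefer -P or a password file in practice
                "--table", "orders",
                "--target-dir", "/data/raw/orders",
                "--num-mappers", "4"            // degree of parallel import
            };
            System.exit(Sqoop.runTool(importArgs));
        }
    }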

TECHNICAL PROFILE:

Big Data Ecosystem: Hadoop, MapReduce, HDFS, HBase, ZooKeeper, Hive, Pig, Sqoop, Oozie and Flume

Programming Languages: C, C++, Java, SQL, PL/SQL, UNIX/Linux Shell Scripts

Web Technologies: HTML, XML, JavaScript, JSON

Frameworks: JUnit, Log4j, Spring, Hibernate

Databases: Oracle, DB2, MySQL, HBase, MongoDB

Application Server: Apache Tomcat 5.5.0

IDEs, Utilities & Web: Eclipse, HTML, CSS, JavaScript

Operating Systems: Linux, UNIX, Windows 7

Methodologies: Agile, UML, OOP

Protocols: TCP/IP, HTTP, SOAP and HTTPS

PROFESSIONAL EXPERIENCE:

Confidential

Hadoop Developer

Responsibilities:

  • Converted the existing relational database model to the Hadoop ecosystem.
  • Generated datasets and loaded them into the Hadoop ecosystem.
  • Designed the technical architectural workflow.
  • Optimized Hive queries with appropriate algorithms and built customer attributes using Hive.
  • Integrated the Hive queries with Oozie.
  • Compared the Hive query output against existing data model outputs.
  • Performed a POC on data ingestion with different tools.
  • Designed and developed MapReduce programs in Java.
  • Developed Hive UDFs and UDAFs in Java (a sketch follows this list).
  • Orchestrated hundreds of Hive queries using Oozie workflows.
  • Analyzed customer patterns based on the attributes.
  • Followed agile methodology for the entire project.
  • Conducted daily scrum calls.
  • Prepared technical design documents and detailed design documents.
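
A sketch of the kind of Hive UDF developed here, using the classic org.apache.hadoop.hive.ql.exec.UDF API of the Hive 0.7 era; the class name and the normalization it performs are hypothetical:

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Normalizes free-text values, e.g. "  ny " -> "NY", for use in
    // customer-attribute queries.
    public final class NormalizeCode extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;   // pass SQL NULLs through unchanged
            }
            return new Text(input.toString().trim().toUpperCase());
        }
    }

Once compiled into a jar, such a function is registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION before being called from HiveQL.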

Environment: Hadoop 0.20 (MR1), HDFS, HBase, Flume 1.4, Sqoop 1.4.3, Hive 0.7.1, Java 1.6, Linux, Spring 3.2.3, Eclipse Juno, XML, JSON.

Confidential

Hadoop Developer

Responsibilities:

  • Replaced the default Derby metastore for Hive with MySQL.
  • Executed queries using Hive and developed MapReduce jobs to analyze data.
  • Developed Pig Latin scripts to pull information from web server output files and load it into HDFS.
  • Built Pig UDFs to preprocess the data for analysis (a sketch follows this list).
  • Developed Hive queries for the analysts.
  • Worked within the Apache Hadoop environment.
  • Involved in loading data from Linux and UNIX file systems to HDFS.
  • Supported setting up the QA environment and updating configurations for implementing Pig scripts.
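
A sketch of a Pig UDF for log preprocessing of the sort described above, written against the org.apache.pig.EvalFunc API; the class name and the cleanup rule are hypothetical:

    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    // Strips the query string from request URLs in web-server log records.
    public class StripQueryString extends EvalFunc<String> {
        @Override
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null;   // propagate missing fields as null
            }
            String url = (String) input.get(0);
            int q = url.indexOf('?');
            return q < 0 ? url : url.substring(0, q);
        }
    }

In a Pig script the function would be registered with REGISTER and then invoked like any built-in.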

Environment: Hadoop 0.20 (MR1), CDH3u6, HDFS, HBase 0.90.0, Flume 0.9.3, Sqoop 1.x, Hive 0.7.1, Java 1.6, Linux, Spring 3.0, Eclipse Juno, XML, JSON

Confidential

Hadoop Developer

Responsibilities:

  • Installed and configured Hadoop, MapReduce and HDFS, and developed multiple MapReduce jobs in Java for data preprocessing.
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Proactively monitored systems and services; handled architecture design and implementation of the Hadoop deployment, configuration management, backup, and disaster recovery schemes and routines.
  • Used Flume to collect, aggregate and store web log data from different sources such as web servers, mobile and network devices, and pushed it to HDFS.
  • Loaded and transformed large sets of structured, semi-structured and unstructured data to support MapReduce programs running on the cluster.
  • Wrote shell scripts to monitor the health of Hadoop daemon services and respond to warning or failure conditions.
  • Involved in loading data from the UNIX file system to HDFS, configuring Hive and writing Hive UDFs.
  • Used Java and MySQL day to day to debug and fix issues with client processes; managed and reviewed log files.
  • Implemented partitioning, dynamic partitions and buckets in Hive (see the sketch after this list).
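
A sketch of the partitioning and bucketing work in the last bullet, issued here over a HiveServer2 JDBC connection; the URL, table and column names are assumptions, and the same statements could equally be run from the Hive CLI:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class HivePartitionDemo {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");   // HiveServer2 driver
            Connection con = DriverManager.getConnection(
                    "jdbc:hive2://localhost:10000/default", "hive", "");
            Statement st = con.createStatement();

            // Partition by day and bucket by customer for faster sampling and joins.
            st.execute("CREATE TABLE IF NOT EXISTS events_part "
                     + "(customer_id BIGINT, action STRING) "
                     + "PARTITIONED BY (dt STRING) "
                     + "CLUSTERED BY (customer_id) INTO 32 BUCKETS");

            // Let Hive derive partition values from the data (dynamic partitions).
            st.execute("SET hive.exec.dynamic.partition=true");
            st.execute("SET hive.exec.dynamic.partition.mode=nonstrict");
            st.execute("SET hive.enforce.bucketing=true");

            // The partition column dt must come last in the SELECT list.
            st.execute("INSERT OVERWRITE TABLE events_part PARTITION (dt) "
                     + "SELECT customer_id, action, dt FROM events_staging");
            con.close();
        }
    }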

Confidential

Technical Lead

Responsibilities:

  • Contributed to the drafting of the Technical Business Design Documents based on the Conceptual System Design.
  • Worked with the client's main office to manage the critical application during the knowledge transition, shadowing and reverse-shadowing periods.
  • Responsible for detailed planning and preparation of the system appreciation documents for several applications and all their subsets.
  • Performed business area analysis, including review of business plans, in order to apply performance-improvement principles at client offices.
  • Identified significant problems and opportunities in clients' operations and developed an understanding of clients' current systems, procedures, overall business operations and industries.
  • Responsible for applying technical analysis and design principles to formulate detailed application plans and procedures implementing clients' requests for new or modified functionality.
  • Analyzed clients' requests for new or modified applications through interviews and design sessions.
  • Designed, built and tested proposed enhancements, with client interaction for verification.
  • Developed software programs using JCL, COBOL and DB2.

Environment: z/OS, DB2, COBOL, JCL, CICS, VSAM, SPUFI, QMF, SQL, ACF2, SharePoint, Xpeditor

Confidential

Senior Technical Analyst

Responsibilities:

  • Contributed to the drafting of the Technical Business Design Documents based on the Conceptual System Design. Coded complex COBOL/DB2 modules that perform dynamic file allocation.
  • Provided escalated support by investigating and resolving complicated business application issues and their associated systems.
  • Developed an expert understanding of the technical and business process flow of applications and provided recommendations for improvement.
  • Provided appropriate communication, facilitated bringing other parties together, and completed post-resolution reporting for high-severity incidents.
  • Participated in development efforts, provided input for future requirements, and informed the Service Desk and others of release notes and any known issues.
  • Extensively worked on problem investigation, analysis and development for existing and new modules.
  • Set up, configured, maintained and monitored assigned business applications and related systems, and worked with mainframe transaction processing facilities.
  • Worked with clients to gather business requirements and enhancements for the organization.
  • Involved in production batch support: scheduling jobs, restarting, fixing abends and bypassing cases; fixing bugs and ensuring timely, defect-free delivery.
  • Coordinated with interface teams to resolve technical and business questions.

Environment: OS/390, TSO/ISPF, VSAM, COBOL, JCL, DB2, PLATINUM, CMTS, SPUFI, Toad, JIRA, SQL Explorer, MS-Office, MS Project.

Confidential

Program Analyst

Responsibilities:

  • Responsible for implementing, customizing and integrating components of the client application.
  • Designed, built and tested proposed enhancements, with client interaction for verification.
  • Monitored operation and functionality throughout the execution process by testing applications to ensure optimum user benefit; designed and configured application modifications and enhancements as necessary.
  • Planned and created web front-end applications to integrate with host-side operations.
  • Implemented the integration and customization of customer-specific system packages.
  • Provided first-level production support for post go-live operations.
  • Integrated and programmed middleware components.

Environment: z/OS, DB2, COBOL, JCL, VSAM, SPUFI, QMF, SQL, ACF2, SharePoint, Microsoft Office (Word, Excel, Access)

Confidential

Mainframe Analyst

Responsibilities:

  • Researched, updated and maintained quality testing measures and procedures.
  • Assisted in the planning, creation and control of the test environments.
  • Identified, collected and created test data.
  • Facilitated and participated in structured walkthroughs and peer reviews.
  • Took part in the coordination and implementation of system and assembly testing.
  • Informed the Test Architect and Team Lead of any events that might affect the schedule, budget or quality of the product and the testing process.
  • Validated fixes, executed test scripts, and recorded problems and events in accordance with the project's problem and issue management plans.
  • Performed project management, testing and reporting of outcomes.
  • Documented all testing results and maintained ABEND logs.
  • Assisted and coordinated with new team members in understanding the nature of the work and the testing procedures.
  • Created the conceptual design, test approach, test plans and test scripts, both manually and in Mercury Quality Center and CMTS. Set up and managed test environments and test data based on the test requirements. Coordinated with onshore teams and handled client communication.

Environment: z/OS, DB2, COBOL, JCL, VSAM, SPUFI, QMF, SQL.
