
Hadoop Stack Developer Resume


Rolling Meadows, Illinois

CAREER PROFILE:

Experienced in Big Data Hadoop and its ecosystem components - Hadoop, HDFS concepts, Hive, Sqoop, Pig, and Oozie. Expertise in gathering requirements and preparing design, estimation, code development/changes, unit testing, test plan and test case documents, test summary reports, and the RTM.

PROFESSIONAL SUMMARY:

  • 4+ years of experience as a Hadoop professional, creating, installing, configuring, and testing Hadoop ecosystem components.
  • Design, develop, integrate, test, and implement information systems business solutions.
  • Customization of open-source tools on Linux, and web/database application support.
  • Experienced in building event-driven and scheduled AWS Lambda functions that trigger various AWS resources.
  • Capable of processing large sets of structured, semi-structured, and unstructured data and supporting systems application architecture.
  • Familiar with data architecture, including data ingestion pipeline design, Hadoop information architecture, data modeling and data mining, machine learning, and advanced data processing.
  • Helped design and implement Hadoop architectures and configurations for customers.
  • Solid experience deploying, configuring, upgrading, and troubleshooting across UNIX flavors.
  • Involved in technical support, enhancements, bug fixes, and process improvements.
  • Experienced in installing and using the AWS CLI to control various AWS services through shell/Bash scripting.
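The AWS CLI work described above can be sketched as a small Bash wrapper. This is a minimal, hypothetical example: the bucket name, instance id, and log path are placeholders, and `RUN` defaults to `echo` so the commands are printed for review rather than executed against a live AWS account.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: driving AWS services from shell/Bash via the AWS CLI.
# All resource names are placeholders; RUN defaults to echo (dry run), so no
# AWS credentials are needed. Set RUN="" to actually execute the commands.
set -euo pipefail

RUN="${RUN:-echo}"

sync_logs() {
  # Copy local Hadoop logs to an S3 bucket (bucket name is illustrative)
  $RUN aws s3 sync /var/log/hadoop "s3://example-hadoop-logs/$(hostname)/"
}

stop_idle_instance() {
  # Stop an EC2 instance by id (id is illustrative)
  $RUN aws ec2 stop-instances --instance-ids "$1"
}

sync_logs
stop_idle_instance "i-0123456789abcdef0"
```

Such a wrapper is typically invoked from cron or an operations runbook; the dry-run default makes it safe to review before pointing at real resources.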

CAREER SKILLS:

  • HDP 2.x and 3.0
  • Spark
  • Scala
  • Hive
  • Sqoop
  • HDFS
  • MapReduce
  • Apache Ambari
  • Kafka
  • Nifi
  • HBase
  • Oozie
  • YARN

TECHNICAL SKILLS:

  • Java
  • Unix, Shell Scripting
  • Red Hat 5,6,7
  • JavaScript, jQuery
  • JSON, XML
  • Oracle, MySQL, PostgreSQL
  • Virtualization: VMware

PROFESSIONAL EXPERIENCE:

Confidential, Rolling Meadows, Illinois

Hadoop Stack Developer

Responsibilities:

  • Developed a Spark application in Scala to perform data quality checks before ETL.
  • Leveraged Sqoop to import and export data between Oracle and Hadoop.
  • Cluster maintenance, including creation and removal of nodes.
  • HDFS support and maintenance, backup and restore, cluster monitoring, and troubleshooting.
  • Managed and reviewed Hadoop log files; installed operating system, database, and Hadoop updates, patches, and version upgrades.
  • Created EBS volumes to store application files, mounted to EC2 instances as needed.
  • Involved in the installation, configuration, and optimization of three 20-node Hadoop clusters using CDH 5.8.
  • Developed MapReduce jobs in Java for log analysis, analytics, and data cleaning.
  • Performed cleaning and filtering of imported data using Spark.
  • Responsible for preparing design documents and delivering the batch application, unblocking technical impediments for the team, cross-team interaction, and code review.
  • Ensured secure data transfer over the network data layer, and supported AWS web services alongside ongoing cluster maintenance, monitoring, backup and restore, and troubleshooting.
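The Oracle-to-Hadoop transfers described above are typically driven by shell wrappers around Sqoop. Below is a minimal sketch under stated assumptions: the JDBC URL, username, password file, table names, and HDFS paths are all placeholders, and `RUN` defaults to `echo` so the Sqoop commands are printed rather than executed (no Sqoop or Oracle installation required to review it).

```shell
#!/usr/bin/env bash
# Hypothetical sketch of Oracle <-> HDFS transfers with Sqoop.
# Connection details and paths are placeholders; RUN defaults to echo (dry run).
set -euo pipefail

RUN="${RUN:-echo}"
JDBC_URL="jdbc:oracle:thin:@//db.example.com:1521/ORCL"   # placeholder

import_table() {            # Oracle table -> HDFS directory
  $RUN sqoop import \
    --connect "$JDBC_URL" --username etl_user --password-file /user/etl/.pw \
    --table "$1" --target-dir "/data/raw/$1" \
    --num-mappers 4 --as-parquetfile
}

export_table() {            # HDFS directory -> Oracle table
  $RUN sqoop export \
    --connect "$JDBC_URL" --username etl_user --password-file /user/etl/.pw \
    --table "$1" --export-dir "/data/out/$1"
}

import_table EMPLOYEES
export_table DAILY_SUMMARY
```

Using `--password-file` instead of an inline password keeps credentials off the command line and out of shell history, which matters on shared cluster edge nodes.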

Confidential

Hadoop Stack Developer

Responsibilities:

  • Plan, implement, install, operate and maintain systems hardware, software applications and Information Technology infrastructure.
  • Participated in the installation, configuration, and benchmarking of Hadoop on 18+ nodes using the HDP platform.
  • Developed Spark Streaming application using Scala for Enterprise Fraud Management.
  • Developed Oozie workflow to automate the loading of data into HDFS and Hive for data pre-processing.
  • Involved in performance tuning and debugging errors during Sqoop execution.
  • Created test plan in coordination with Release schedule of the Development team and requirement specification of the product.
  • Regularly tuned the performance of Hive and Pig queries to improve data processing and retrieval.
  • Involved in batch processing using the Spring Batch framework to extract data from the data stage process and load it into an Oracle RDBMS.
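An Oozie workflow like the HDFS/Hive loading job above is usually launched from a properties file plus the `oozie job` CLI. The sketch below is hypothetical: the NameNode, ResourceManager, Oozie server hosts, and workflow paths are placeholders, and `RUN` defaults to `echo` so the submit command is printed rather than sent to a live Oozie server.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: submitting an Oozie workflow that loads data into
# HDFS/Hive. Hosts and paths are placeholders; RUN defaults to echo (dry run).
set -euo pipefail

RUN="${RUN:-echo}"

# job.properties consumed by the workflow; ${nameNode} is resolved by Oozie,
# so the heredoc is quoted to keep it literal.
PROPS="$(mktemp)"
cat > "$PROPS" <<'EOF'
nameNode=hdfs://nn.example.com:8020
jobTracker=rm.example.com:8032
oozie.wf.application.path=${nameNode}/user/etl/workflows/daily-load
inputDir=/data/raw/events
hiveScript=load_events.hql
EOF

submit_workflow() {
  # -run submits and immediately starts the workflow from the properties file
  $RUN oozie job -oozie http://oozie.example.com:11000/oozie \
    -config "$PROPS" -run
}

submit_workflow
```

In practice the `workflow.xml` at `oozie.wf.application.path` chains the HDFS load and Hive actions; wrapping submission in a script like this makes it easy to schedule from cron or a coordinator.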

Confidential

Software Associate

Responsibilities:

  • Developed new JSP pages and enhanced existing functionality per the requirements documentation.
  • Developed page layouts and navigation, and presented designs and concepts to clients and management for review.
  • Designed and developed business logic using UML Modeling tools, code complexity tools.
  • Wrote interfaces and test clients to facilitate testing of scheduled jobs.
  • Production support including analyzing and fixing defects.
  • Customized the Java Bean Validation framework to propagate constraint violations to the corresponding fields in the UI layer.
  • Designed and developed RESTful services, UI handlers, core services, and the data access layer.
  • Developed the user interface using HTML and JSP; validated data using JavaScript.
  • Developed Servlets for retrieving/updating the data from tables in the database.
  • Wrote build scripts using the Maven software project management tool. Assisted the WebSphere admin team in setting up a Maven repository on the build server.
  • Tested applications and resolved complex problems throughout the software development life cycle (SDLC), including preparing detailed program specifications.
  • Gathered information regarding entire plant setup, Business communications within the plant and type of tools (H/W and S/W) used by the management.
  • Gained an understanding of data management using the MVC model.
  • Assisted development and production teams by determining application design flaws, thereby optimizing SDLC.
