
Hadoop Developer Resume


Jersey City, NJ

SUMMARY

  • 4+ years of professional experience in IT, including 1 year of comprehensive experience as an Apache Hadoop Developer working with Hadoop ecosystem technologies.
  • Experience in the full Software Development Life Cycle (SDLC), including analysis, design, coding, testing, implementation, and production support.
  • Expertise in writing Hadoop jobs for analyzing data using MapReduce, Hive, Sqoop, HBase, Kafka, Flume, and Spark.
  • Working expertise in handling petabytes of structured and unstructured data in large cluster environments.
  • In-depth understanding of Hadoop architecture and components such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, and MapReduce concepts.
  • Good understanding of and working experience with Hadoop distributions such as Cloudera.
  • Good experience installing, configuring, and using Cloudera Hadoop ecosystem components such as HDFS, MapReduce, Hive, Sqoop, Kafka, and Pig.
  • Experience importing and exporting multiple terabytes of data between HDFS and relational database systems (RDBMS) using Sqoop.
  • Good exposure to JD Edwards upgrade projects involving migration from XE (legacy) systems to E9.1.
  • Worked on different operating systems, including Linux and Windows.
  • Exceptional ability to quickly master new concepts; capable of working in a group as well as independently, with excellent communication skills.
  • Highly self-motivated; able to set effective priorities to achieve immediate and long-term goals and meet project and operational deadlines.
  • Excellent analytical and object-oriented design skills.
  • Vibrant team player who takes a positive approach to issue resolution.
  • Able to communicate effectively with a wide range of clients and coworkers.

TECHNICAL SKILLS

Languages: Java, SQL, Python, Shell Scripting

Databases: MS SQL Server, MySQL, MS Access

Big Data Ecosystem Components: Hadoop, MapReduce, Hive, Sqoop, Spark, Flume, HBase, Kafka

Development Frameworks/Tools: Eclipse IDE, MS Visual Studio, MS Excel, Tableau, Stata, PuTTY, FileZilla, WinSCP

Project Management Methodologies: Waterfall, Agile

PROFESSIONAL EXPERIENCE

Confidential, Jersey City, NJ

Hadoop Developer

Responsibilities:

  • Analyzed client business requirements and worked with the business and design teams to refine and implement them.
  • Built an ETL framework with Sqoop and Hive to regularly ingest data from source systems and make it available for consumption.
  • Developed Hive scripts and Hive UDFs to load data files.
  • Bucketed each Hive table into 30 buckets, clustered by user ID, for better performance when updating the tables (see the Hive sketch after this list).
  • Extracted data from other data sources into HDFS using Sqoop (see the Sqoop sketch after this list).
  • Processed HDFS data, created external tables in Hive, and developed reusable scripts to ingest data and repair tables across the project.
  • Exported processed data from Hadoop to relational databases or external file systems using Sqoop export, hdfs dfs -get, or -copyToLocal.
  • Imported data from various sources, performed transformations using Hive and MapReduce, loaded the results into HDFS, and extracted data from MySQL into HDFS using Sqoop.
  • Imported and exported data into and out of HDFS using Sqoop and Flume.
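
A rough sketch of what the bucketing and table-repair work above could look like, run through the Hive CLI from the shell. The table and column names (user_events, raw_events, user_id) are hypothetical placeholders, not the actual project schema.

  # Bucketed, ORC-backed table clustered by user ID into 30 buckets.
  # On Hive versions of this era, in-place (ACID) updates also required
  # the table to be transactional.
  hive -e "
    CREATE TABLE user_events (
      user_id    BIGINT,
      event_type STRING,
      event_ts   TIMESTAMP)
    CLUSTERED BY (user_id) INTO 30 BUCKETS
    STORED AS ORC
    TBLPROPERTIES ('transactional'='true');"

  # External table over raw files already sitting in HDFS.
  hive -e "
    CREATE EXTERNAL TABLE raw_events (user_id BIGINT, payload STRING)
    PARTITIONED BY (load_date STRING)
    LOCATION '/data/raw/events';"

  # 'Repair' step: register partitions written directly to HDFS.
  hive -e "MSCK REPAIR TABLE raw_events;"

Clustering on the update key means rows with the same user ID land in the same bucket file, so an update touches a handful of bucket files rather than rewriting the whole table.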
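
Likewise, a minimal sketch of the Sqoop import/export commands described above; the JDBC URL, credentials, table names, and HDFS paths are illustrative placeholders.

  # Import a MySQL table into HDFS; -P prompts for the password.
  sqoop import \
    --connect jdbc:mysql://dbhost:3306/sales \
    --username etl_user -P \
    --table orders \
    --target-dir /data/raw/orders \
    --num-mappers 4

  # Export processed results from HDFS back to an RDBMS table.
  sqoop export \
    --connect jdbc:mysql://dbhost:3306/sales \
    --username etl_user -P \
    --table orders_summary \
    --export-dir /data/processed/orders_summary

  # Or pull result files down to the local file system instead.
  hdfs dfs -get /data/processed/orders_summary /tmp/orders_summary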

Confidential, Sarasota, FL

Systems Analyst

Responsibilities:

  • Automated the migration of setups and configurations, reducing cycle time by 50%, which enabled faster deployments across instances and improved patch-impact analysis.
  • Automated bulk loading into Oracle applications, reducing human error, time, and effort.
  • Implemented the order-to-cash (O2C) process and deployed the necessary web applications to increase revenue inflows.
  • Collaborated with the BI team to learn the Hadoop framework and architecture.
  • Installed Hadoop ecosystem components under the Cloudera distribution.
  • Managed data coming from different sources.
  • Supported MapReduce programs running on the cluster (see the monitoring sketch after this list).
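
A few illustrative shell commands for the kind of job and cluster monitoring this support work involves, assuming a YARN-based Cloudera cluster; the application ID shown is a placeholder.

  # List MapReduce applications currently running on the cluster.
  yarn application -list -appStates RUNNING

  # Check the status and logs of one application.
  yarn application -status application_1700000000000_0001
  yarn logs -applicationId application_1700000000000_0001

  # Report HDFS capacity and DataNode health.
  hdfs dfsadmin -report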

Confidential

Technical consultant

Responsibilities:

  • Gained exposure to Enterprise Resource Planning for procurement and inventory systems.
  • Worked extensively with JD Edwards EnterpriseOne technical and development tools.
  • Collaborated with process owners to analyze user requirements and designed matching workflow models in MS Visio.
  • Customized applications and reports for Purchase Order, Sales Order, and Inventory Management based on user requirements.
  • Resolved a table-index bug in multiple applications, resulting in 200+ objects passing the L1 test.
  • Performed requirements definition, design, analysis, custom reporting, programming, implementation, retrofitting of modifications, documentation, and testing in different environments.
  • Deployed builds using PuTTY and FileZilla, and maintained code documentation in a Tortoise SVN repository.
  • Managed production go-live and maintenance support, handling change requests across 80+ customer locations.
  • Accelerated business integration and developed IT services, contributing to increased productivity and $8 billion in business growth.
