
Hadoop Developer Resume


Syracuse, NY

SUMMARY

  • 7+ years of experience in the analysis, design, development, implementation, and management of the full software development life cycle (SDLC) in application development.
  • 2+ years of extensive experience as a Hadoop Developer and Big Data Analyst, with primary technical skills in HDFS, MapReduce, YARN, Pig, Spark, Scala, Hive, Sqoop, HBase, Flume, Oozie, and ZooKeeper.
  • Working experience with Big Data and the Hadoop Distributed File System (HDFS). In-depth understanding of Hadoop architecture and components such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, and MapReduce concepts.
  • Experience developing MapReduce programs on Apache Hadoop for Big Data analysis.
  • Hands-on experience with ecosystem components such as Hive, Sqoop, Spark, MapReduce, Flume, and Oozie. Strong knowledge of Hive analytical functions and of extending Hive functionality by writing custom UDFs.
  • Hands-on experience with Scala language features: language fundamentals, classes, objects, traits, collections, case classes, higher-order functions, pattern matching, extractors, etc. (see the Scala sketch after this summary).
  • Experience creating internal and external tables in Hive and improving performance with partitioned and bucketed tables (a DDL sketch follows this summary).
  • Developed Sqoop scripts to import data from RDBMS into Hive and HDFS, and to export data from HDFS back to RDBMS.
  • Experience handling file formats such as JSON, Avro, ORC, and Parquet.
  • Knowledge of job workflow scheduling and monitoring tools such as Control-M and Oozie.
  • Knowledge of NoSQL databases such as HBase, including creating Phoenix tables that map to HBase so it can be queried with SQL (see the Phoenix sketch after this summary).
  • Experience with RDBMS databases such as MySQL and Oracle, and expertise in writing SQL and HQL queries.
  • Experience with scripting languages such as Bash and Unix shell, and working knowledge of Python.
  • Expertise in preparing test cases, documenting, and performing unit and integration testing.
  • Working experience with data ingestion tools such as Apache NiFi, and with loading data into a Common Data Lake using HiveQL.
  • Working experience developing wrapper shell scripts that run HiveQL data loads as scheduled batch jobs.
  • Developed Sqoop scripts for importing large datasets from RDBMS to HDFS; created UDFs in Java and registered them in Pig and Hive.
  • Fast learner with strong analytical, interpersonal, and communication skills; a self-motivated team player with a positive attitude who adheres to strict deadlines and works well under pressure. Skilled in progressing from problem statement to well-documented design, with solid business knowledge and an interest in problem solving and troubleshooting.
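
The Scala bullet above names specific language features; the following is a minimal, self-contained sketch (all names hypothetical) showing a case class, pattern matching against its generated extractor, and a higher-order function over a collection.

    object ScalaFeaturesSketch {
      // Case class: immutable data with equals/hashCode and an extractor (unapply) for free
      case class Trade(symbol: String, qty: Int, price: Double)

      // Higher-order function: the valuation logic is passed in as a function value
      def totalValue(trades: Seq[Trade], valueOf: Trade => Double): Double =
        trades.map(valueOf).sum

      // Pattern matching with a guard, using the case-class extractor
      def describe(t: Trade): String = t match {
        case Trade(sym, qty, _) if qty > 1000 => s"large order of $sym"
        case Trade(sym, _, _)                 => s"regular order of $sym"
      }

      def main(args: Array[String]): Unit = {
        val trades = Seq(Trade("AAPL", 1500, 180.0), Trade("MSFT", 10, 410.0))
        trades.map(describe).foreach(println)
        println(totalValue(trades, t => t.qty * t.price))
      }
    }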
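
For the internal/external table and partitioning bullet, here is a hedged sketch of the kind of DDL involved, run through Spark's Hive support to keep the examples in Scala. The database, table, and path names are assumptions; a bucketed variant would add a CLUSTERED BY ... INTO n BUCKETS clause to the managed table.

    import org.apache.spark.sql.SparkSession

    object HiveTableSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("hive-table-sketch")   // hypothetical app name
          .enableHiveSupport()
          .getOrCreate()

        // External table: Hive tracks only metadata; the data stays at the HDFS path.
        spark.sql("""
          CREATE EXTERNAL TABLE IF NOT EXISTS staging.orders_raw (
            order_id BIGINT, customer_id BIGINT, amount DOUBLE, order_dt STRING)
          ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
          LOCATION '/data/staging/orders'""")

        // Internal (managed) table, partitioned by date so queries filtering
        // on order_dt prune whole partitions instead of scanning everything.
        spark.sql("""
          CREATE TABLE IF NOT EXISTS cdl.orders (
            order_id BIGINT, customer_id BIGINT, amount DOUBLE)
          PARTITIONED BY (order_dt STRING)
          STORED AS ORC""")

        // Dynamic-partition load from the external staging table.
        spark.sql("SET hive.exec.dynamic.partition=true")
        spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
        spark.sql("""
          INSERT OVERWRITE TABLE cdl.orders PARTITION (order_dt)
          SELECT order_id, customer_id, amount, order_dt FROM staging.orders_raw""")

        spark.stop()
      }
    }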
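
For the Phoenix-on-HBase bullet, a minimal sketch of mapping an existing HBase table as a Phoenix view and querying it over JDBC. The ZooKeeper host, table name, and column family are placeholders, and the VARCHAR columns assume string-encoded cell values.

    import java.sql.DriverManager

    object PhoenixSketch {
      def main(args: Array[String]): Unit = {
        // Thick-client JDBC URL is jdbc:phoenix:<ZooKeeper quorum>; the host is a
        // placeholder, and the Phoenix client jar must be on the classpath.
        val conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181")
        val stmt = conn.createStatement()

        // Map an existing HBase table (assumed here to be "WEB_EVENTS" with a
        // column family "cf") as a Phoenix view; the PRIMARY KEY maps the row key.
        stmt.execute(
          """CREATE VIEW IF NOT EXISTS "WEB_EVENTS" (
            |  pk VARCHAR PRIMARY KEY,
            |  "cf"."user_id" VARCHAR,
            |  "cf"."url" VARCHAR)""".stripMargin)

        // Ordinary SQL over HBase data via the view.
        val rs = stmt.executeQuery(
          "SELECT \"user_id\", COUNT(*) FROM \"WEB_EVENTS\" GROUP BY \"user_id\"")
        while (rs.next()) println(s"${rs.getString(1)} -> ${rs.getLong(2)}")
        conn.close()
      }
    }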

TECHNICAL SKILLS

Programming Languages: Scala, Java

Databases: MySQL, DB2, Oracle

Hadoop Framework: HDFS, MapReduce

Hadoop Ecosystem: Hive, HBase, Pig, Impala, Spark, Sqoop, YARN, Ambari

Real-Time Messaging Systems: Kafka, Flume

Ingestion Tools/ETL: Apache NiFi

Distributed Query Engines: Phoenix on HBase

Operating Systems: UNIX, Linux

Cloud Platforms: Hadoop

Scripting Languages: Shell, Bash, Python

Version Control Tools: Git/Stash, SVN

Build Tools: Maven, SBT

CI Tools: Jenkins

Bug Tracking Tools: JIRA, ServiceNow

Scheduling Tools: Control-M, Oozie

BI Tools: MicroStrategy, Tableau

IDE: Eclipse, IntelliJ

PROFESSIONAL EXPERIENCE

Confidential, Syracuse, NY

Hadoop Developer

Responsibilities:

  • Perform requirement analysis
  • Prepare Technical Specification document based on the High-Level Design
  • Development of Sqoop scripts to ingest data from RDBMS into HDFS and Hive
  • Development of Unix wrapper scripts to execute the Sqoop jobs
  • Development of HiveQLs to create Hive schemas and tables for the Common Data Lake (CDL) and Data Mart (DM)
  • Development of HQL scripts to load data into the private Data Mart
  • Solved performance issues in Hive scripts through an understanding of how joins, grouping, and aggregation translate to MapReduce jobs.
  • Developed Spark batch applications in Scala to ingest data into the Common Data Lake (a sketch follows this list).
  • Built applications using Spark Core RDDs, Spark SQL, and DataFrames.
  • Analyzed Spark application logs using the Spark UI.
  • Applied various performance-tuning techniques to Spark applications.
  • Ingested streaming data into the data lake via Flume
  • Prepare test cases and perform unit testing
  • Peer Reviews for Technical specification documents and code
  • Support SIT and UAT testing
  • Act as release coordinator, preparing the release checklist and helping the release team compile the component list
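
A hedged sketch of the kind of Spark batch ingestion described above, touching the RDD, DataFrame, and SparkSQL APIs. The paths, schema handling, and target table name are assumptions, not the engagement's actual code.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{col, current_date}

    object CdlBatchIngest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("cdl-batch-ingest")      // hypothetical app name
          .enableHiveSupport()
          .getOrCreate()

        // Read the day's staged extracts; the path and CSV options are illustrative.
        val raw = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("/data/staging/customers")

        // Light cleanup with the DataFrame API before landing in the lake.
        val cleaned = raw
          .filter(col("customer_id").isNotNull)
          .withColumn("load_dt", current_date())

        // SparkSQL view over the same data for ad hoc checks.
        cleaned.createOrReplaceTempView("customers_clean")
        spark.sql("SELECT COUNT(*) AS rows_loaded FROM customers_clean").show()

        // The RDD view of a DataFrame is still available for row-level logic.
        val badRowCount = raw.rdd.filter(_.anyNull).count()
        println(s"rows with nulls: $badRowCount")

        // Append into the Common Data Lake table (table name is an assumption).
        cleaned.write.mode("append").saveAsTable("cdl.customers")
        spark.stop()
      }
    }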

Confidential

Hadoop Developer

Responsibilities:

  • Perform requirement analysis
  • Prepare Technical Specification document based on the High-Level Design
  • Development of Sqoop import and export jobs (see the wrapper sketch after this list)
  • Development of HiveQLs to load data from the Staging Layer into the Common Data Lake (CDL)
  • Development of Unix scripts to execute the loading HQLs as scheduled jobs
  • Prepare test cases and perform unit testing
  • Peer Reviews for Technical specification documents and code
  • Support SIT and UAT testing
  • Act as release coordinator, preparing the release checklist and helping the release team compile the component list
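
The Sqoop jobs above were driven by Unix wrapper scripts; to keep all examples in Scala, the sketch below expresses the same wrapper logic with scala.sys.process rather than shell. The connection URL, credentials path, and table name are placeholders; the flags are standard Sqoop import options.

    import scala.sys.process._

    object SqoopImportWrapper {
      def main(args: Array[String]): Unit = {
        // Standard Sqoop import flags; URL, paths, and table name are placeholders.
        val sqoopImport = Seq(
          "sqoop", "import",
          "--connect", "jdbc:mysql://db-host:3306/sales",
          "--username", "etl_user",
          "--password-file", "/user/etl/.sqoop.pwd", // keeps the secret out of argv
          "--table", "ORDERS",
          "--target-dir", "/data/staging/orders",
          "--num-mappers", "4",
          "--fields-terminated-by", ",")

        // Run the job and fail loudly so a scheduler such as Control-M
        // sees the non-zero exit code.
        val exitCode = sqoopImport.!
        if (exitCode != 0) sys.error(s"sqoop import failed with exit code $exitCode")
      }
    }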

Confidential

Software Engineer

Responsibilities:

  • Member of an Agile Scrum team; participated in daily scrum calls, sprint planning, and task estimation using Rally.
  • Developed reports using the Jasper report designer.
  • Developed the data access layer using ADO.NET and third-party libraries such as Enterprise Library and Breeze for Hibernate.
  • Developed RESTful APIs using ASP.NET Web API and WCF services.
  • Created web pages using traditional web forms and user controls.
  • Customized log4net and the Enterprise Library to create the ADP-standard logging framework.
  • Created interfaces to communicate with the Kronos database using its Time programming API.
  • Created custom validation annotations for incoming web messages such as JSON, SOAP, and XML.
  • Implemented new features for legacy systems built in Classic ASP and supported applications developed in VB.NET.
  • Implemented front-end validations and custom functions using JavaScript.
  • Developed ASP.NET web pages with user controls.
  • Created web applications using ASP.NET MVC with Razor views.
  • Deploying applications and maintaining databases in Microsoft Azure.
  • Used Coherence Cache for data caching and architected the TaaS layer with interceptors built on Castle Core's DynamicProxy, using log4net for logging.
  • Developed both REST- and SOAP-based web services using C#.
  • Wrote test cases using the MSTest framework.
  • Developed data transmission packages using SSIS.
  • Created Crystal Reports using the Crystal Reports designer, loading and binding data via the Crystal Reports libraries.
  • Deployed applications on IIS on both QA and UAT servers, and performed ClickOnce deployments for Windows applications.
  • Created setup files using InstallShield.

Confidential

Software Engineer

Responsibilities:

  • Analyzed and documented the requirements.
  • Developed templates using LIMS Basic.
  • Developed reports using the Crystal Reports designer.
  • Developed ASP.NET applications using C# in code-behind pages.
  • Developed applications using MVC 4.0.
  • Created Crystal and BOXI reports and deployed them to the BOXI servers.
  • Deployed code to IIS on both QA and UAT servers.
  • Configured master data for analysts.
  • Supported LIMS applications and performed root-cause analysis for various sites.
  • Developed ASP.NET web forms in the Visual Studio environment.
  • Used JavaScript to handle front-end validations.
  • Developed business objects and the data access layer using C# and ADO.NET.
  • Used CSS to have a uniform look and feel throughout the project.
  • Created tables, views, stored procedures, packages, functions, and triggers in Oracle 10g.
  • Handled deployment on the client server.
