
Senior/Lead Hadoop Developer Resume

Bellevue, Washington


Seeking a challenging Hadoop Lead/Dev Manager position where I can leverage over 13 years of software development experience to deliver innovative technology solutions, expand my knowledge, and make an impact using Big Data/Hadoop analytics and related technologies that can dramatically improve an organization’s ability to drive revenue, profitability, market share, and competitiveness.


  • Over 13 years of IT experience, including around 4 years of hands-on experience in Big Data and Hadoop ecosystem technologies.
  • Architected Big Data solutions and led/co-led teams on several Hadoop ingestion project tracks.
  • Extensive experience with C#/VB.NET, Java, Spring, Hibernate, jQuery/JavaScript, and ASP.NET web development.
  • Excellent exposure to RDBMS design and writing SQL queries: MSSQL, MySQL, ODBC/JDBC.
  • Sound knowledge of Hadoop architecture and its various components: the Hadoop Distributed File System (HDFS), YARN, MapReduce, NameNode, DataNode, JobTracker, TaskTracker, and Secondary NameNode.
  • Excellent understanding of the Hadoop MapReduce programming paradigm.
  • Hands-on experience with Cloudera CDH4, Hortonworks, and cloud platforms (AWS EC2).
  • Hands-on experience with Hadoop ecosystem components such as HBase, Oozie, Hive, ZooKeeper, Sqoop, Pig, Flume, Hue, Impala, and Spark.
  • Extended Hive and Pig core functionality by writing Java UDFs.
  • Experience using the Oozie scheduler to set up workflows of MapReduce and Pig jobs.
  • Extracted and processed streaming log data from various sources and integrated it into HDFS using Flume and Storm.
  • Imported and exported data between HDFS and relational database management systems using Sqoop.
  • Experience designing and implementing MapReduce jobs to support distributed processing using Java, Hive, and Pig.
  • Sound knowledge of the NoSQL database HBase and its architecture.
  • Worked with different file formats (TextFile, Avro, SequenceFile, JSON, XML, Parquet/ORC) for Hive querying and processing.
  • Experience using MRUnit to unit-test MapReduce Java code.
  • Exposure to various SDLC methodologies, including Agile/Scrum and Waterfall.
  • Ability to adapt to evolving technology and a strong sense of responsibility.
  • Enjoy working in a fast-changing environment and learning new technologies effortlessly.


C#, ASP.NET/VB.NET, AJAX, WinForms (controls and components), ADO.NET, XML Web Services, .NET/J2EE, jQuery, Java, Scala, JavaScript, Visual Studio 2003, Eclipse, Hue, .NET Framework 1.1, ASP.NET, IIS 5 and 6, SOAP, ASP.NET handlers, SQL, SQL Server 2000, SSIS, SSAS, SSRS; Hadoop ecosystem and its tools: HDFS, MapReduce, Hive, HBase, Sqoop, Oozie, ZooKeeper, Spark, Impala; Windows XP, Visual SourceSafe 6, XSLT, XML Schema, XMLSpy 4.0, XPath, MSMQ, COM, COM+ interoperability, Linux, UNIX, Korn shell, Bash shell, Perl, Apache, IIS, sequence diagrams, use cases, RUP/Unified Process, Control-M, GitHub, Gerrit, Jenkins


Senior/Lead Hadoop Developer

Confidential, Bellevue, Washington


  • Led development efforts on several Hadoop Data Lake ingestion tracks, coordinating between developers, analysts, system administrators, app support, and other stakeholders.
  • Worked on flat-file and DB ingestion jobs using a custom framework, utilizing Unix shell, Oozie, Sqoop, JAR files, Pig, Hive, Hue, HBase, and HDFS.
  • Re-architected the SIS deployment document, reducing Hadoop job deployment and execution to just a few steps, and built corresponding automation scripts for yum installation, HDFS/Hive cleanup, and post-deployment validation.
  • Architected a custom file monitoring and movement solution and put together a corresponding PowerPoint presentation to address a client need.
  • Participated in POC projects dealing with streaming data sources, using tools such as Storm and Spark.
  • Designed and coded a count-validation shell script as part of ABC (audit/balance/control) efforts covering over 600 active ingestion jobs.
  • Automated mandatory deletion of sensitive files from the Hadoop dev environment (edge node and HDFS).
  • Designed and wrote a Pig script to extract Hadoop job error logs from HBase, and a shell-script driver to sort and email the top N errors per job to relevant stakeholders.
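
The count validation described above can be sketched in plain shell. This is a minimal illustration, not the actual framework: the job name, counts, and function name are hypothetical placeholders, and in practice the source count would come from an RDBMS query and the target count from a `hive -e 'SELECT COUNT(*) ...'` call.

```shell
#!/bin/sh
# Minimal sketch of a source-vs-HDFS row-count validation step
# (hypothetical names; real framework not shown in this resume).
validate_counts() {
  job_name=$1
  src_count=$2
  hdfs_count=$3
  if [ "$src_count" -eq "$hdfs_count" ]; then
    echo "PASS: $job_name ($src_count rows)"
    return 0
  else
    echo "FAIL: $job_name (source=$src_count hdfs=$hdfs_count)"
    return 1
  fi
}

validate_counts customer_ingest 1042 1042
# prints: PASS: customer_ingest (1042 rows)
```

A driver script would loop such a check over each active ingestion job and collect the FAIL lines for reporting.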

Environment: Linux, Hortonworks Hadoop 2.7.1, Oozie, Pig, Hive, Sqoop, HBase, MapReduce, Spark, Hue, HCatalog, Oracle/MS-SQL/MySQL/Teradata, shell scripts, Control-M, GitHub, Jenkins, Gerrit

Java/.NET/Big Data - Owner

Confidential, New York, New York


  • Developed .NET/Web adapters and views for VMS
  • Handled importing of data from machine logs using Flume.
  • Extracted the data from MySQL/RDS into HDFS using Sqoop.
  • Worked with cloud services: Amazon Web Services (AWS) EC2.
  • Developed simple to complex MapReduce jobs using Java, Hive, and Pig.
  • Built analytics queries and exported reports for display on the web.
  • Involved in installation and setup of the Hadoop cluster.
  • Wrote Pig jobs and Java UDFs to perform data cleaning, transformations, and joins.
  • Used Sqoop to import data into Hadoop for processing and to export results for presentation.
  • Installed the Oozie workflow engine and set it up to run multiple Hive and Pig jobs.
  • Created Hive tables and wrote Hive queries using HiveQL.
  • Developed MapReduce programs to transform raw data into a meaningful format used by various Hadoop ecosystem components such as Hive and HBase.
  • Created the front-end administrative and presentation layer to display Hadoop search results.
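
A Sqoop import of the kind described above (MySQL/RDS into HDFS) can be sketched as a dry run. The connection string, credentials, table, and target path below are hypothetical; the function only assembles and prints the command rather than executing it, since a live cluster is assumed for the real thing.

```shell
#!/bin/sh
# Dry-run sketch: assemble the Sqoop import command used to pull an
# RDBMS table into HDFS (all names hypothetical).
build_sqoop_import() {
  table=$1
  target_dir=$2
  echo "sqoop import" \
    "--connect jdbc:mysql://rds-host:3306/appdb" \
    "--username etl_user -P" \
    "--table $table" \
    "--target-dir $target_dir" \
    "--num-mappers 4" \
    "--as-textfile"
}

build_sqoop_import customers /data/raw/customers
```

Running the printed command on an edge node with Sqoop installed would perform the actual import; `-P` prompts for the password instead of embedding it in the script.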

Environment: Hadoop, Pig, Hive, Sqoop, HBase, Java, MapReduce, ASP.NET, JavaScript, .NET Entity Framework, MySQL

Java/ Big Data - Hadoop Developer

Confidential, New York, New York


  • Worked on a .NET/JEE/Spring/Hibernate application that provided an analytics UI and was required to run on multiple application servers.
  • Joined the “Big Data IT” team responsible for building the Hadoop stack and different big data analytic tools, and for migration from different databases (i.e., Teradata, Oracle, MySQL) and multiple file formats (JSON, XML, text, etc.) to Hadoop.
  • Implemented Big Data tools within the Hadoop ecosystem (HDFS, HBase, Hive, Pig, Sqoop, Oozie, Flume).
  • Senior Developer specializing in ETL, web log analysis, e-commerce, and Hadoop to augment and/or replace a traditional relational warehouse.
  • Worked with Flume to pull log files from application servers to a central location in HDFS.
  • Extensively worked with the Oozie workflow manager to schedule Hadoop Pig and Hive jobs.
  • Handled importing of data from various data sources, performed transformations using Pig, Hive, UDFs, and MapReduce, and loaded data into HDFS and HBase.
  • Wrote Hive queries to analyze data in the Hive warehouse using Hive Query Language (HQL). Supported data analysts in running Pig and Hive queries.
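
The web-log rollups described above ran as Pig and Hive jobs over HDFS; the core aggregation can be illustrated in plain shell. The sample log lines are hypothetical, in common log format, where the request URL lands in field 7 when split on spaces.

```shell
#!/bin/sh
# Plain-shell illustration of a web-log top-N rollup; in production
# this kind of aggregation ran as Pig/Hive jobs over HDFS.
top_urls() {
  # $1 = how many top URLs to report
  awk '{ print $7 }' | sort | uniq -c | sort -rn | head -n "$1"
}

printf '%s\n' \
  '10.0.0.1 - - [01/Jan/2015:00:00:00 +0000] "GET /cart HTTP/1.1" 200 512' \
  '10.0.0.2 - - [01/Jan/2015:00:00:01 +0000] "GET /home HTTP/1.1" 200 128' \
  '10.0.0.3 - - [01/Jan/2015:00:00:02 +0000] "GET /cart HTTP/1.1" 200 512' \
  | top_urls 1
# prints the most-requested URL (/cart) with its count (2)
```

The equivalent HiveQL would be a `GROUP BY url ... ORDER BY count DESC LIMIT N` over an external table on the Flume landing directory.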

Environment: Hadoop, HDFS, Pig, Hive, MapReduce, MRUnit, Sqoop, Cloudera Manager, Linux, Cloudera CDH4, Flume, HBase, Oracle, MySQL, Teradata, .NET, Agile/Scrum, JEE, Java, Spring, Hibernate, Eclipse

.NET/Java Web Developer

Confidential, New York, New York


  • Designed and developed a new C#, Java, Hibernate, ASP.NET web application utilizing .NET 4.5, jQuery, Modernizr, Bootstrap, JavaScript, XHTML, and CSS. We used Microsoft SQL Server 2008 R2 for the database and SVN for source control.
  • Working with a Project Manager and several other developers, I was the Senior Developer responsible for the UI design, development of C# and Java code, setting coding standards, and overseeing the code developed by the other developers.
  • I was also responsible for updating and maintaining an existing legacy contingent-worker tracking system. The legacy application was developed in Microsoft Access 2003, VBA, and Microsoft SQL Server.

IT/Technical Recruiter

Confidential, New York, New York


  • Performed full-cycle recruiting of IT professionals for this consulting and staffing firm: sourced, interviewed, checked references, negotiated terms and rates, extended offers, and closed candidates.
  • Delivered qualified, interested, and available candidates promptly and within budget for Confidential 500 clients.
  • Successfully sourced resumes using traditional techniques (referrals, internal database, etc.) as well as advanced internet sourcing via job sites, company websites, colleges, resume banks, newsgroups, search engines, home pages, virtual communities, user groups, and e-mail lists.

Senior .NET Web Developer

Confidential, Burbank, California


  • Saved thousands of man-hours and simplified multiple business processes by creating extensive web-based applications for medical laboratory administrators, various departments, and clients of one of the largest privately owned medical laboratories in California.
  • Researched and developed custom software solutions to meet unique client and internal requests.
  • Accountable for most of the database design and development: database architecture, stored procedures, triggers, and performance tuning on MS-SQL.
  • Created distinctive solutions by integrating in-house software with custom/third-party ActiveX components.
