
Lead / Sr. Hadoop Developer Resume


Bellevue, WA

PROFESSIONAL SUMMARY:

  • 15+ years of experience in software development, architecture decisions, and leading projects from concept through the release process
  • 4+ years of experience leading, developing, and testing Hadoop Big Data solutions
  • Cloudera Certified Developer for Apache Hadoop
  • Very good experience in solution building and data architecture
  • Good understanding of and experience with the Cloudera Hadoop stack
  • Capable of designing and architecting Hadoop applications and recommending the right solutions and technologies for the application
  • Proficient in all phases of the SDLC (Analysis, Design, Development, Testing and Deployment), gathering user requirements, and converting them into software requirement specifications
  • Work closely with business clients
  • Good knowledge of BI tools such as Denodo and Tableau
  • Worked as a liaison between the customer and the offshore and onshore teams
  • Excellent analytical, problem-solving, programming, and logical skills
  • Capable of handling multiple projects at the same time
  • Capable of processing large sets of structured, semi-structured and unstructured data and supporting systems application architecture
  • Good Experience as a Tech / Project Lead
  • Knowledge of NoSQL databases such as HBase
  • Strong understanding of data modeling in a data warehouse environment, such as star schema and snowflake schema
  • Experienced working with offshore vendors and establishing offshore teams and processes.
  • Good working knowledge of distributed data processing systems
  • Expertise in Hadoop Lambda Architecture

TECHNICAL SKILLS:

Big Data Ecosystem: Cloudera Distribution for Hadoop (CDH), MapReduce, HDFS, YARN, Hive, Pig, Sqoop, Storm, Impala, Elasticsearch, Scala, Spark, Kibana, Parquet, Flume, AWS, Snappy, Avro, HBase

Programming Languages: Core Java

Scripting Languages: Shell Scripting

Operating Systems: LINUX, UNIX, Windows

Database: Oracle, MySQL, Teradata

Tools: Eclipse, Toad

Other Technologies: Informatica 9.1, Denodo

Methodologies: Waterfall, Agile

WORK EXPERIENCE:

Confidential, Bellevue, WA

Architect

Responsibilities:

  • Manage the BEAM Ingestion team for different tracks
  • Provided design recommendations and thought leadership to sponsors /stakeholders that improved review processes and resolved technical problems.
  • Coordinate between the business and the offshore team
  • Gather requirements and prepare the design
  • Work with the different business units and stakeholders for each track
  • Export and import data into HDFS, HBase, and Hive; create Hive tables, load data, and write Hive queries
  • Bulk load HBase using Pig (see the sketch after this list)
  • Perform initial and incremental data loads into HBase through BEAM via GG
  • Implemented solutions using Hadoop, HBase, Hive, Sqoop, Java API, etc.
  • Work closely with the business and analytics team in gathering the system requirements
  • Load and transform large sets of structured and semi structured data.
  • Loading data into HBase tables using Java MapReduce
  • Loading data into Hive partitioned tables
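
For reference, the Pig-based HBase load typically followed a pattern like the minimal sketch below; the HDFS path, HBase table name, column family, and field names are hypothetical placeholders rather than project values.

```sh
# Minimal sketch of loading an HBase table from Pig via HBaseStorage.
# HDFS path, table name, column family, and fields are hypothetical placeholders.
cat > load_customers.pig <<'EOF'
-- Load delimited records previously landed on HDFS
raw = LOAD '/data/staging/customers' USING PigStorage('\t')
      AS (rowkey:chararray, name:chararray, city:chararray);

-- Store into an existing HBase table; the first field becomes the row key,
-- the remaining fields map to the listed columns in family 'cf'
STORE raw INTO 'hbase://customer_profile'
      USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:name cf:city');
EOF

pig -x mapreduce load_customers.pig
```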

Technologies: Hortonworks, HDFS, Core Java, MapReduce, Hive, Pig, Flume, Storm, Hue, Sqoop, Shell script, UNIX, Oracle, Toad, DMF, ActiveMQ.

Confidential, Greenville, SC

Lead / Sr. Hadoop Developer

Responsibilities:

  • Provided design recommendations and thought leadership to sponsors /stakeholders that improved review processes and resolved technical problems.
  • Coordinate between the business and the offshore team
  • Gather requirements and prepare the design
  • Export and import data into HDFS, HBase, and Hive using Sqoop (see the sketch after this list)
  • Involved in creating Hive tables, loading with data and writing Hive queries
  • Bulk loading HBase using Pig
  • Implemented solutions using Hadoop, HBase, Hive, Sqoop, Java API, etc.
  • Work closely with the business and analytics team in gathering the system requirements
  • Load and transform large sets of structured and semi structured data.
  • Loading data into HBase tables using Java MapReduce
  • Loading data into Hive partitioned tables
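
A representative Sqoop import for this kind of ingestion is sketched below; the JDBC URL, credentials, and table names are hypothetical placeholders.

```sh
# Minimal sketch of a Sqoop import from an RDBMS into a Hive table.
# The JDBC URL, credentials, and table names are hypothetical placeholders.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user -P \
  --table ORDERS \
  --hive-import \
  --hive-table stage.orders \
  --num-mappers 4
```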

Technologies: CDH, HDFS, Core Java, MapReduce, Hive, Pig, Flume, Storm, Elasticsearch, Scala, Spark, Kibana, Shell scripting, UNIX.

Confidential, Greenville, SC

Lead / Sr. Hadoop Developer

Responsibilities:

  • Worked on a Hadoop cluster with a current size of 56 nodes and 896 terabytes of capacity.
  • Wrote MapReduce jobs, HiveQL queries, and Pig scripts.
  • Imported data using Sqoop into Hive and HBase from an existing SQL Server database.
  • Supported code/design analysis, strategy development and project planning.
  • Created reports for the BI team using Sqoop to export data into HDFS and Hive.
  • Developed multiple MapReduce jobs in Java for data cleaning and preprocessing (see the sketch after this list).
  • Involved in Requirement Analysis, Design, and Development.
  • Export and Import data into HDFS, HBase and Hive using Sqoop.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Work closely with the business and analytics team in gathering the system requirements
  • Load and transform large sets of structured and semi structured data.
  • Loading data into HBase tables using Java MapReduce
  • Loading data into Hive partitioned tables
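
Submitting one of the Java cleaning jobs generally looked like the sketch below; the jar name, driver class, and HDFS paths are hypothetical placeholders.

```sh
# Minimal sketch of submitting a Java data-cleaning MapReduce job.
# The jar, driver class, and HDFS paths are hypothetical placeholders.
hadoop jar data-cleaning.jar com.example.CleanRecordsDriver \
  /data/raw/events/2015-06-01 /data/clean/events/2015-06-01

# Check the cleaned output before downstream Hive loads pick it up
hdfs dfs -ls /data/clean/events/2015-06-01
```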

Confidential, Greenville, SC

Lead / Sr. Hadoop Developer

Responsibilities:

  • Imported data using Sqoop into Hive and HBase from an existing SQL Server database.
  • Supported code/design analysis, strategy development and project planning.
  • Created reports for the BI team using Sqoop to export data into HDFS and Hive.
  • Involved in Requirement Analysis, Design, and Development.
  • Export and import data into HDFS, HBase, and Hive using Sqoop.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Load and transform large sets of structured and semi-structured data.
  • Loading data into partitioned Hive tables (see the sketch after this list)
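
Loading partitioned Hive tables typically relied on dynamic partitioning, as in the minimal sketch below; the database, table, and column names are hypothetical placeholders.

```sh
# Minimal sketch of a dynamic-partition load into a partitioned Hive table.
# Database, table, and column names are hypothetical placeholders.
hive -e "
  SET hive.exec.dynamic.partition=true;
  SET hive.exec.dynamic.partition.mode=nonstrict;

  CREATE TABLE IF NOT EXISTS sales.orders_part (
      order_id  BIGINT,
      amount    DOUBLE
  )
  PARTITIONED BY (order_date STRING)
  STORED AS PARQUET;

  -- The partition value is taken from the last column of the SELECT
  INSERT INTO TABLE sales.orders_part PARTITION (order_date)
  SELECT order_id, amount, order_date
  FROM sales.orders_stage;
"
```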

Confidential, Greenville, SC

Lead / Sr. Hadoop Developer

Technologies: CDH, HDFS, Core Java, MapReduce, Hive, Pig, Hbase, Sqoop, Shell scripting, UNIX.

Responsibilities:

  • Supported code/design analysis, strategy development and project planning.
  • Created reports for the BI team using Sqoop to export data into HDFS and Hive.
  • Involved in Requirement Analysis, Design, and Development.
  • Export and import data into HDFS, HBase, and Hive using Sqoop (see the sketch after this list).
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Work closely with the business and analytics team in gathering the system requirements
  • Load and transform large sets of structured and semi structured data.
  • Loading data into Hive partitioned tables
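
Pushing aggregated results back out to a relational reporting table with Sqoop followed a pattern like the sketch below; the JDBC URL, target table, and warehouse path are hypothetical placeholders.

```sh
# Minimal sketch of exporting a Hive result set to a relational reporting table.
# The JDBC URL, table, and HDFS path are hypothetical placeholders.
sqoop export \
  --connect jdbc:mysql://dbhost:3306/reporting \
  --username report_user -P \
  --table daily_sales_summary \
  --export-dir /user/hive/warehouse/sales.db/daily_sales_summary \
  --input-fields-terminated-by '\001' \
  --num-mappers 4
```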

Confidential, Greenville, SC

Architect

Responsibilities:

  • Understanding the ETL Specification Documents for mapping requirements.
  • Extract data from multiple sources such as flat files, Oracle, and FTP sites into a staging database (see the sketch after this list).
  • Extensively worked on Informatica tools such as Source Analyzer, Data Warehouse Designer, Transformation Designer, Mapplet Designer, and Mapping Designer to design, develop, and test complex mappings and mapplets that load data from external flat files and RDBMS sources.
  • Created mappings using transformations such as Source Qualifier, Aggregator, Expression, Lookup, Router, Filter, Joiner, Union, Sequence Generator, Rank, Normalizer, Transaction Control, Stored Procedure, and Update Strategy.
  • Involved in performance tuning of mappings, identifying source and target bottlenecks, and working with session and workflow properties to improve performance.
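
A common wrapper around this kind of ETL is a small shell script that pulls the source file from the FTP drop and then triggers the Informatica workflow via pmcmd; the hosts, credentials, folder, and workflow names below are hypothetical placeholders.

```sh
# Minimal sketch of staging a flat file from an FTP drop and kicking off the
# Informatica workflow that loads it. Hosts, credentials, folder, and workflow
# names are hypothetical placeholders.
curl -u etl_user:etl_pass \
     "ftp://ftp.example.com/outbound/customers_20150601.dat" \
     -o /staging/inbound/customers_20150601.dat

pmcmd startworkflow -sv INT_SVC -d DOM_DEV -u infa_user -p infa_pass \
      -f FLD_STAGING wf_load_customers_stage
```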
