Hadoop Developer Resume

PROFESSIONAL SUMMARY:

  • 7 years of professional IT experience, including over 2 years in Big Data ecosystem technologies.
  • 5 years of experience in the analysis, design, and development of various web-based applications.
  • Over 2 years of extensive experience in Hadoop/Spark development and components such as Hadoop MapReduce, HDFS, Spark Core, Spark SQL, Spark Streaming, Hive, and Sqoop.
  • Experience using SQL queries to access and manipulate data in MySQL.
  • Self-driven, quick learner, and excellent team player.
  • Excellent verbal and written communication, Excel, and presentation skills.

TECHNICAL SKILLS:

Big Data: Knowledge of Hadoop, Sqoop, Hive, Spark

Data Analytics & Visualization: Tableau

Databases: MySQL

Programming/Scripting Languages: Familiarity with Java, Python, R

Operating Systems: Windows Vista, XP and 98

PROFESSIONAL EXPERIENCE:

Confidential

Hadoop Developer

Environment: Hadoop Ecosystem, HDFS, Sqoop, Hive, Spark, Python, Tableau.

Responsibilities:

  • Created the design architecture for data ingestion from multiple sources such as RDBMS systems and Cloudera.
  • Developed a Sqoop incremental import job, shell script, and cron job for importing data into HDFS (see the import sketch after this list).
  • Imported data from HDFS into Hive using Hive commands.
  • Created Hive partitions on dates and stocks for the imported data (see the PySpark sketch after this list).
  • Developed a PySpark script that dynamically downloads data files into HDFS.
  • Created PySpark RDDs for data transformation.
  • Proficient in SQL queries and triggers.
  • Worked with structured and unstructured data from RDBMS and CSV sources.
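
The Sqoop incremental import described above normally boils down to a single command scheduled from cron. A minimal sketch of the pattern, wrapped in Python so it can be scheduled and logged like the rest of the pipeline; the JDBC URL, credentials file, table, and check column are placeholders, not actual project values:

    import subprocess

    # Pull only rows added since the last run; Sqoop tracks progress via
    # --check-column/--last-value (all names below are hypothetical).
    sqoop_cmd = [
        "sqoop", "import",
        "--connect", "jdbc:mysql://dbhost/stocks",   # placeholder JDBC URL
        "--username", "etl_user",                    # placeholder credentials
        "--password-file", "/user/etl/.dbpass",
        "--table", "stock_quotes",                   # placeholder source table
        "--target-dir", "/data/raw/stock_quotes",
        "--incremental", "append",
        "--check-column", "id",
        "--last-value", "0",  # in practice a Sqoop saved job stores the last value
    ]
    subprocess.run(sqoop_cmd, check=True)

Scheduling the wrapper nightly from cron (for example, 0 2 * * * python3 /opt/etl/sqoop_import.py) reproduces the shell-script-plus-cron-job setup described above.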
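
For the Hive partitioning and RDD transformation steps, a minimal PySpark sketch follows; the table name, column names, and HDFS path are assumptions for illustration, not the project's actual schema:

    from pyspark.sql import SparkSession

    # enableHiveSupport lets Spark create and write Hive-managed tables.
    spark = (SparkSession.builder
             .appName("stock-ingest")          # hypothetical job name
             .enableHiveSupport()
             .getOrCreate())

    # Read the raw files landed in HDFS by the import job.
    raw = spark.read.csv("hdfs:///data/raw/stock_quotes",
                         header=True, inferSchema=True)

    # Partitioning on trade_date and stock means queries that filter on
    # either column scan only the matching partition directories.
    (raw.write
        .partitionBy("trade_date", "stock")
        .mode("append")
        .format("parquet")
        .saveAsTable("stocks_partitioned"))

    # The same data as an RDD: total traded volume per stock.
    volume_per_stock = (raw.rdd
                        .map(lambda row: (row["stock"], row["volume"]))
                        .reduceByKey(lambda a, b: a + b))
    print(volume_per_stock.take(10))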

Confidential

Hadoop Developer/Data Analyst

Responsibilities:

  • The project involved implementing a big data solution for Confidential using machine learning techniques.

Confidential

Hadoop Developer/Data Analyst

Environment: Hadoop Ecosystem, HDFS, Sqoop, Hive, Spark, Python, Tableau.

Responsibilities:

  • Built data pipelines to load and transform large sets of structured, semi-structured, and unstructured data.
  • Imported data from HDFS into Hive using HiveQL.
  • Involved in creating Hive tables and in loading and analyzing data using Hive queries.
  • Created Hive partitioned and bucketed tables to improve performance (see the first sketch after this list).
  • Developed a Sqoop import job, shell script, and cron job for importing data into HDFS.
  • Used Tableau for visualization and for building dashboards.
  • Explored components such as SparkContext, Spark SQL, DataFrames, pair RDDs, and accumulators to improve the performance and optimization of existing algorithms (see the pair-RDD sketch after this list).
  • Processed millions of records using Hadoop jobs.
  • Implemented Spark code in Python for RDD transformations and actions in Spark applications.
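
A minimal sketch of the partitioned-and-bucketed table creation, using the DataFrame writer rather than raw HiveQL; the table, columns, bucket count, and HDFS path are illustrative assumptions:

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("events-etl")            # hypothetical job name
             .enableHiveSupport()
             .getOrCreate())

    # Semi-structured JSON landed in HDFS (path and schema are placeholders).
    events = spark.read.json("hdfs:///data/raw/events")

    # Partitions prune whole directories at scan time; bucketing by user_id
    # pre-shuffles the data so joins and aggregations on user_id run faster.
    (events.write
        .partitionBy("event_date")
        .bucketBy(32, "user_id")
        .sortBy("user_id")
        .mode("overwrite")
        .saveAsTable("events_by_day"))

    # Analysis over the resulting table with an ordinary Hive-style query.
    spark.sql("""
        SELECT event_date, COUNT(*) AS events
        FROM events_by_day
        GROUP BY event_date
        ORDER BY event_date
    """).show()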
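
The pair-RDD and accumulator exploration might look like the following minimal sketch; the record format ("date,stock,price") and the HDFS path are assumptions for illustration:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("quote-cleanup").getOrCreate()
    sc = spark.sparkContext

    # Accumulator: counts malformed lines across all executors.
    bad_lines = sc.accumulator(0)

    def parse(line):
        # Assumed record format: "date,stock,price".
        try:
            date, stock, price = line.split(",")
            return (stock, float(price))
        except ValueError:
            bad_lines.add(1)
            return None

    pairs = (sc.textFile("hdfs:///data/raw/quotes")   # placeholder path
               .map(parse)
               .filter(lambda p: p is not None))

    # Pair-RDD transformations: average price per stock key.
    avg_price = (pairs
                 .mapValues(lambda price: (price, 1))
                 .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
                 .mapValues(lambda s: s[0] / s[1]))

    results = avg_price.collect()   # action: triggers the whole job
    print(results[:10])
    # Accumulator values are only dependable after an action has run
    # (and may over-count if tasks are retried).
    print("malformed lines skipped:", bad_lines.value)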

Software Engineer

Confidential

Responsibilities:

  • Involved mainly in end-user functionality.
  • Involved in developing Action and Bean classes.
  • Wrote validation rules using Struts.
  • Wrote workflow classes.
  • Wrote Hibernate queries.
  • Worked on the DAO layer.

Programmer Analyst

Confidential

Responsibilities:

  • Involved mainly in Admin functionality.
  • Involved in developing Action and Bean classes.
  • Wrote validation rules using Struts.
  • Wrote workflow classes.
  • Wrote Hibernate queries.
  • Worked on the DAO layer.
