
Sr. Software/Hadoop Developer Resume


Phoenix, AZ

SUMMARY:

  • 11+ years of professional IT experience analyzing requirements and designing and building highly distributed, mission-critical products and applications.
  • Highly dedicated, results-oriented Hadoop developer with 4+ years of strong end-to-end Hadoop development experience across a range of Big Data environments and projects.
  • Expertise in the core Hadoop technology stack, including HDFS, Hive, Sqoop, Pig, Flume, HBase, and Spark.
  • Experience implementing Spark operations on RDDs and optimizing Spark transformations and actions.
  • Reviewed and managed Hadoop log files, consolidating logs from multiple machines/sources using Flume.
  • Collected log data from web servers and ingested it into HDFS using Flume.
  • Experience importing and exporting data with Sqoop between HDFS and relational database systems.
  • Hands-on experience in application development using Python, Scala, Perl, and Linux shell scripting.
  • Knowledge of ETL methods for data extraction, transformation, and loading in corporate-wide ETL solutions, and of data warehouse tools for reporting and data analysis.
  • Strong hands-on experience with PySpark, using Spark libraries from Python scripts for data analysis.
  • Experience analyzing data using HiveQL and Pig Latin; experience working with structured data in HiveQL, including join operations, Hive UDFs, partitions, bucketing, and internal/external tables. Good understanding of Apache Hue.
  • Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs in Scala and Python.
  • Excellent working knowledge of Spark Core, Spark SQL, and Spark Streaming.
  • Techno-functional responsibilities include interfacing with users, identifying functional and technical gaps, producing estimates, designing custom solutions, development, leading developers, producing documentation, and production support.
  • Experienced with version control systems such as GitHub and SVN.
  • Rich experience in the banking and financial domain.
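As an illustration of the Hive features listed above (external tables, partitioning, and bucketing), a minimal HiveQL sketch; all table, column, and path names are hypothetical, not taken from any actual project:

```sql
-- Hypothetical external table, partitioned by load date and
-- bucketed by customer id (names are illustrative only).
CREATE EXTERNAL TABLE IF NOT EXISTS txn_history (
  customer_id BIGINT,
  txn_amount  DECIMAL(18,2),
  txn_type    STRING
)
PARTITIONED BY (load_date STRING)
CLUSTERED BY (customer_id) INTO 32 BUCKETS
STORED AS ORC
LOCATION '/data/warehouse/txn_history';
```

Partitioning prunes entire directories at query time, while bucketing hashes rows into a fixed number of files, which can speed up joins and sampling on the bucketed column.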

TECHNICAL SKILLS:

Hadoop Distribution: Cloudera (CDH4, CDH5), Apache, MapR

Hadoop Data Services: HDFS, MapReduce, YARN, Hive, Pig, HBase, Sqoop, Oozie, Spark, Scala, Flume, Avro, Parquet, Snappy, ORC

Languages: SQL, PL/SQL, Pig Latin, HiveQL, Unix shell scripting, HTML, XML, C, C++, Python, Scala

Application Servers: WebSphere, Tomcat

Databases: Oracle, MySQL, DB2, MS SQL Server, Netezza; NoSQL: HBase

Operating Systems: UNIX, Windows, Linux (RHEL)

Methodologies: Agile (Scrum), Waterfall

Other Tools: PuTTY, WinSCP, IBM Unica, DMExpress

PROFESSIONAL EXPERIENCE:

Confidential, Phoenix, AZ

Sr. Software/Hadoop Developer

Responsibilities:

  • Migrated an existing shell-based ETL architecture to Python-based Spark extraction.
  • Wrote Spark code in Scala, using Spark SQL for faster data processing.
  • Performed SQL joins among Hive tables to produce input for Spark batch processes.
  • Developed shell scripts to extract data from HBase and designed the solution for implementation in PySpark.
  • Analyzed SQL scripts and designed solutions to implement them in PySpark.
  • Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs.
  • Loaded data from the Linux file system into HDFS and vice versa.
  • Participated in the development and implementation of a MapR Hadoop environment with Impala.
  • Utilized an Apache Hadoop environment distributed by MapR.
  • Manipulated, serialized, and modeled data in multiple formats, such as JSON and XML.
  • Prepared Avro schema files for generating Hive tables.
  • Worked on physical transformations of the data model, including creating tables, indexes, joins, views, and partitions.
  • Involved in analysis, system architecture design, process interface design, detailed design, and documentation.
  • Used Jira for bug tracking and Bitbucket to check in and check out code changes.
  • Involved in Netezza and DB2 table modeling.
  • Used Agile/Scrum methodology to help manage and organize a team of developers, with regular code review sessions.
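The JSON/XML manipulation and serialization mentioned above can be sketched in plain Python with the standard library; the record fields here are hypothetical, since the actual data model is not described in the resume:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical record; the real data model is not shown in the resume.
record = {"id": 42, "name": "sample", "tags": ["etl", "spark"]}

# Serialize to JSON text and parse it back into a dict.
as_json = json.dumps(record)
round_tripped = json.loads(as_json)

# Model the same record as an XML tree and serialize it to a string.
root = ET.Element("record", id=str(record["id"]))
ET.SubElement(root, "name").text = record["name"]
for tag in record["tags"]:
    ET.SubElement(root, "tag").text = tag
as_xml = ET.tostring(root, encoding="unicode")
```

The same dict round-trips losslessly through JSON, while the XML form makes the structure explicit as nested elements and attributes.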

Environment: HDFS, Spark, Python, Hive, Scala, RDBMS (Netezza, DB2), Shell, Perl, DMExpress

Confidential, WI

Sr. Software Developer

Responsibilities:

  • Analyzed critical application issues and provided support to fix them.
  • Developed a C++ UI product framework.
  • Involved in gathering business requirements from business partners and subject matter experts.
  • Wrote Managed C++ modules to enhance the product.
  • Worked on the Qt UI development framework within the product.

Environment: C++, UNIX, Managed C++, Qt, VC++, IBM Rational DOORS (requirements management tool)

Confidential

Engineer

Responsibilities:

  • Fully responsible for developing a tool named “bitmap font tool,” which measures the width of each character in a bitmap font. The application inspects pixel colors to find each character’s extent and generates a .txt file for each letter of the bitmap.
  • Performed ongoing upgrades and maintenance.
  • Designed, developed, and tested game modules.
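The width-measurement idea behind the bitmap font tool can be sketched as follows. The original tool was written in C/C++ with SDL; this is a simplified, illustrative Python version that treats the glyph as a 0/1 pixel grid rather than reading real pixel colors:

```python
def glyph_width(bitmap, background=0):
    """Measure a glyph's width as the span of columns containing
    any non-background pixel (bitmap is a list of pixel rows)."""
    used_cols = [
        x
        for row in bitmap
        for x, pixel in enumerate(row)
        if pixel != background
    ]
    if not used_cols:
        return 0  # blank glyph (e.g. a space character)
    return max(used_cols) - min(used_cols) + 1

# A 4x6 grid where the letter occupies columns 1..3.
letter = [
    [0, 1, 1, 1, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
]
```

In the real tool, the per-letter widths measured this way would be written out to the per-character .txt files the bullet describes.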

Environment/Tech Skills: Linux, C, C++, SDL
