
Sr. Hadoop/Spark Developer Resume


San Jose, CA

SUMMARY:

  • Overall 8+ years of progressive experience in the IT industry with proven expertise in implementing software solutions.
  • 4 years of experience in batch analytics using Hadoop; working environment includes MapReduce, HDFS, Hive, Pig, Spark, ZooKeeper, Oozie, and Sqoop.
  • In-depth understanding of Hadoop architecture and its components, such as ResourceManager, NodeManager, ApplicationMaster, NameNode, and DataNode.
  • Experience in importing and exporting data using Sqoop from relational database systems to HDFS and vice versa.
  • Extended Hive and Pig core functionality with custom User-Defined Functions (UDFs), User-Defined Table-Generating Functions (UDTFs), and User-Defined Aggregate Functions (UDAFs); a hedged Hive UDF sketch appears after this list.
  • Experience in analyzing data using HiveQL, Pig Latin, and custom MapReduce programs in Java.
  • Developed Pig Latin scripts for data cleansing and transformation.
  • Job workflow scheduling and monitoring using tools like Oozie.
  • Good experience with Cloudera, Hortonworks, and Apache Hadoop distributions.
  • Worked with relational database systems (RDBMS) such as MySQL, MS SQL Server, and Oracle.
  • Assisted with performance tuning and monitoring Hive.
  • Used Shell Scripting to move log files into HDFS.
  • Good hands-on experience creating RDDs and DataFrames for the required input data and performing data transformations using Spark.
  • Developed Scala scripts and UDFs using both DataFrames/Spark SQL and RDDs/MapReduce in Spark for data aggregation and queries, and wrote data back into the RDBMS through Sqoop; a minimal Spark sketch also follows this list.
  • Good understanding of real-time data processing using Spark.
  • Imported data from different sources such as HDFS and HBase into Spark RDDs.
  • Experience writing MapReduce jobs in Python for complex queries.
  • Experienced with different file formats like Parquet, ORC, CSV, Text, Sequence, XML, JSON and Avro files.
  • Good knowledge of data modeling and data mining to model data per business requirements.
  • Involved in unit testing of MapReduce programs using Apache MRUnit.
  • Good knowledge of Python and Bash scripting.
  • Expert in Data Visualization development using Tableau to create complex and innovative dashboards.
  • Generated ETL reports using Tableau and created statistics dashboards for Analytics.
  • Reported and classified bugs and played a major role in carrying out different types of testing; well versed in smoke, functional, integration, system, data comparison, and regression testing.
  • Experience creating master test plans, test cases, test result reports, and requirements traceability matrices, and preparing status reports for project management.
  • Skilled in designing and developing data access layer modules with the Hibernate framework for new functionality.
  • Working knowledge of Agile and Waterfall development models.
  • Worked with version control, bug tracking, and code review systems such as Jira, CVS, and ClearCase.
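The bullet on extending Hive with UDFs can be illustrated with a minimal sketch. This is not code from any of the projects below; it is a hedged example in Scala against the classic org.apache.hadoop.hive.ql.exec.UDF API (hive-exec on the classpath is assumed), and the class name, function name, and table are hypothetical.

```scala
// Hedged sketch of a simple Hive UDF, written in Scala; names are illustrative only.
import org.apache.hadoop.hive.ql.exec.UDF
import org.apache.hadoop.io.Text

class TrimLowerUDF extends UDF {
  // Hive resolves evaluate() reflectively; null-safe trim + lowercase of a string column.
  def evaluate(input: Text): Text =
    if (input == null) null
    else new Text(input.toString.trim.toLowerCase)
}

// In Hive, after packaging into a jar (hypothetical jar and table names):
//   ADD JAR trim_lower_udf.jar;
//   CREATE TEMPORARY FUNCTION trim_lower AS 'TrimLowerUDF';
//   SELECT trim_lower(customer_name) FROM staging.customers;
```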
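The Spark DataFrame/UDF aggregation bullet can likewise be sketched. The Hive table, JDBC URL, columns, and credentials below are hypothetical placeholders, and the write-back uses Spark's JDBC writer rather than Sqoop simply to keep the example self-contained.

```scala
// Minimal Spark 2.x sketch: register a UDF, aggregate a Hive table, write results to an RDBMS.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, sum, udf}

object DailyAggregation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DailyAggregation")
      .enableHiveSupport()
      .getOrCreate()

    // Simple UDF that normalizes a region code before grouping.
    val normalizeRegion = udf((r: String) => if (r == null) "UNKNOWN" else r.trim.toUpperCase)

    val orders = spark.table("staging.orders")          // hypothetical Hive table
    val totals = orders
      .withColumn("region", normalizeRegion(col("region")))
      .groupBy("region", "order_date")
      .agg(sum("amount").as("total_amount"))

    // Write the aggregates back to the RDBMS over JDBC (illustrative connection details).
    totals.write
      .format("jdbc")
      .option("url", "jdbc:mysql://db-host:3306/analytics")
      .option("dbtable", "daily_totals")
      .option("user", "etl_user")
      .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
      .mode("append")
      .save()

    spark.stop()
  }
}
```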

TECHNICAL SKILLS:

Big Data/Hadoop Ecosystem: HDFS, MapReduce, Hive, Pig, HBase, Sqoop, Flume, Oozie, Spark, Storm, Kafka, HCatalog, Impala.

Programming Languages: Java, Scala, SQL, PL/SQL, Linux shell scripts, HL7.

Database: Oracle 11g/10g, DB2, MS-SQL Server, MySQL, Teradata.

Web Technologies: HTML, XML, JDBC, JSP, CSS, JavaScript, AJAX, SOAP, Angular JS

Tools Used: Eclipse, IntelliJ, PuTTY, WinSCP, NetBeans, QC, QlikView, IssueTrack, Selenium, Splunk, Tableau.

Operating Systems: Ubuntu (Linux), Windows 95/98/2000/XP, Mac OS, Red Hat

Methodologies: Agile/Scrum, Rational Unified Process and Waterfall

Distributed Platforms: Hortonworks, Cloudera, MapR

Monitoring tools: Ganglia, Nagios.

PROFESSIONAL EXPERIENCE:

Confidential, San Jose, CA

Sr. Hadoop /Spark Developer

Responsibilities:

  • Worked on Hadoop technologies such as Hive, Sqoop, and Spark SQL, along with big data testing.
  • Developed automated scripts for ingesting data from Teradata, refreshing around 200 TB of data bi-weekly.
  • Developed Hive scripts to meet end-user/analyst requirements for ad hoc analysis.
  • Used partitioning and bucketing concepts in Hive and designed both managed and external tables in Hive for optimized performance; a hedged sketch of the external-table pattern follows this list.
  • Solved performance issues in Hive and Pig scripts with an understanding of joins, grouping, and aggregation and how they translate to MapReduce jobs.
  • Extensively used Apache Sqoop for efficiently transferring bulk data between Apache Hadoop and relational databases (Oracle) for product-level forecasting.
  • Tuned Hive to improve performance and developed UDFs in Java as needed for Hive queries.
  • Extracted the data from Teradata into HDFS using Sqoop.
  • Created Sqoop job with incremental load to populate Hive External tables.
  • Developed TWS workflow for scheduling and orchestrating the ETL process.
  • Created Tableau dashboards with interactive views, trends, and drill-downs, along with user-level security.
  • Used Impala to read, write, and query Hadoop data in HDFS.
  • Used Datameer to increase business agility and responsiveness.
  • Performed functional, non-functional, and performance testing of key systems prior to cutover to AWS.
  • Developed programs in Spark based on the application for faster data processing than standard MapReduce programs.
  • Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and troubleshooting, managing and reviewing data backups, and managing and reviewing Hadoop log files.
  • Worked with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.
  • Configured Hadoop system files to accommodate new data sources and updated the existing Hadoop cluster configuration.
  • Involved in gathering business requirements and prepared detailed specifications that follow project guidelines required to develop written programs.
  • Worked on importing and exporting data from different databases like Oracle, Teradata into HDFS and Hive using Sqoop.
  • Actively participated in code reviews and meetings and resolved any technical issues.
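The partitioned external Hive table pattern mentioned above can be sketched as follows. Database, table, column, and HDFS path names are illustrative, and the DDL is issued through Spark SQL with Hive support only to keep the example in one language.

```scala
// Hedged sketch of a partitioned external Hive table for bi-weekly ingested data.
import org.apache.spark.sql.SparkSession

object CreateSalesTable {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("CreateSalesTable")
      .enableHiveSupport()
      .getOrCreate()

    // External table: dropping it removes only metadata, not the ORC files on HDFS.
    spark.sql(
      """CREATE EXTERNAL TABLE IF NOT EXISTS forecast.sales (
        |  product_id BIGINT,
        |  store_id   INT,
        |  amount     DOUBLE
        |)
        |PARTITIONED BY (load_dt STRING)
        |STORED AS ORC
        |LOCATION '/data/forecast/sales'""".stripMargin)

    // Register partitions that ingestion jobs have already written under the table location.
    spark.sql("MSCK REPAIR TABLE forecast.sales")

    spark.stop()
  }
}
```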

Environment: Hadoop, MapReduce, HDFS, Spark, Scala, Hive, Maven, Jenkins, Pig, UNIX, Python, Git, Hortonworks, Oozie.

Confidential

Sr. Hadoop Developer

Responsibilities:

  • Imported data from different relational data sources, including Teradata, into HDFS using Sqoop.
  • Imported bulk data into HBase using MapReduce programs.
  • Designed and implemented Incremental Imports into Hive tables.
  • Used a REST API to access HBase data for analytics.
  • Developed Spark code using Scala, Spark SQL, and Spark Streaming for faster testing and processing of data.
  • Involved in converting MapReduce programs into Spark transformations using Spark RDDs in Scala; a hedged sketch of this pattern follows this list.
  • Implemented Spark using Scala and Spark SQL for faster testing and processing of data.
  • Experienced in working with various kinds of data sources such as Teradata and Oracle; successfully loaded files from Teradata into HDFS and loaded data from HDFS into Hive and Impala.
  • Experienced in running query using Impala and used BI tools to run ad-hoc queries directly on Hadoop.
  • Experienced with batch processing of data sources using Apache Spark and Elasticsearch.
  • Developed wrappers using shell scripting for Hive, Pig, Sqoop, and Scala jobs.
  • Worked on developing UNIX shell scripts to automate Spark SQL jobs.
  • Performed advanced procedures, such as text analytics and processing, using the in-memory computing capabilities of Spark with Scala.
  • Worked on loading and transforming large sets of structured, semi-structured, and unstructured data.
  • Involved in collecting, aggregating and moving data from servers to HDFS using Apache Flume.
  • Wrote Hive jobs to parse logs and structure them in tabular format to facilitate effective querying of the log data.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Developed Java RESTful web services to upload data from local storage to Amazon S3, list S3 objects, and perform file manipulation operations; a hedged S3 sketch also follows this list.
  • Configured a 20-30 node Hadoop cluster on Amazon EC2 spot instances to transfer data between Amazon S3 and HDFS and to direct input and output to the Hadoop MapReduce framework.
  • Experienced in managing and reviewing the Hadoop log files.
  • Successfully ran all Hadoop MapReduce programs on Amazon Elastic MapReduce framework by using Amazon S3 for input and output.
  • Involved in data asset inventory to gather, analyze, and document business requirements, functional requirements, and data specifications for member retention from SQL and Hadoop sources.
  • Worked on solving performance issues and limiting queries from workbooks connected to a live database by using the data extract option in Tableau.
  • Designed and developed Dashboards for Analytical purposes using Tableau.
  • Designed and implemented facts, dimensions, measure groups, measures, and OLAP cubes using dimensional data modeling standards in SQL Server 2008.
  • Created and designed OLAP cubes using SAS OLAP Cube Studio.
  • Designed sources, jobs, and targets using SAS OLAP Cube Studio and SAS/DIS.
  • Analyzed OLAP cubes using SAS OLAP Viewer and SAS datasets using SAS/EG.
  • Migrated ETL jobs to Pig scripts to perform transformations, joins, and some pre-aggregations before storing the data in HDFS.
  • Worked with Avro Data Serialization system to work with JSON data formats.
  • Worked on different file formats like Sequence files, XML files and Map files using Map Reduce Programs.
  • Involved in unit testing and delivered unit test plans and results documents using JUnit and MRUnit.
  • Exported data from HDFS environment into RDBMS using Sqoop for report generation and visualization purpose.
  • Worked on Oozie workflow engine for job scheduling.
  • Created and maintained Technical documentation for launching HADOOP Clusters and for executing Pig Scripts.
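The MapReduce-to-Spark conversion mentioned above can be illustrated with a short, hedged sketch. The input path, delimiter, and field positions are hypothetical; the point is only that a map/reduce count becomes two RDD transformations.

```scala
// Hedged sketch: a classic MapReduce-style count expressed as Spark RDD transformations.
import org.apache.spark.sql.SparkSession

object LogEventCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("LogEventCounts").getOrCreate()
    val sc = spark.sparkContext

    // Equivalent of the mapper: parse each log line and emit (eventType, 1).
    val pairs = sc.textFile("hdfs:///logs/app/*.log")
      .map(_.split("\\t"))
      .filter(_.length > 2)            // drop malformed lines
      .map(fields => (fields(2), 1L))

    // Equivalent of the reducer: sum the counts per event type.
    val counts = pairs.reduceByKey(_ + _)

    counts.saveAsTextFile("hdfs:///output/event_counts")
    spark.stop()
  }
}
```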
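The S3 upload and listing operations behind the RESTful services can also be sketched. The original services were in Java; this is an equivalent Scala sketch against the AWS SDK for Java v1 (Scala 2.13 collection converters assumed), and the bucket, prefix, region, and file paths are illustrative.

```scala
// Hedged sketch of uploading a file to S3 and listing objects under a prefix.
import java.io.File
import com.amazonaws.services.s3.AmazonS3ClientBuilder
import scala.jdk.CollectionConverters._

object S3Ops {
  def main(args: Array[String]): Unit = {
    // Credentials are resolved from the default provider chain (env vars, profile, role).
    val s3 = AmazonS3ClientBuilder.standard().withRegion("us-east-1").build()

    // Upload a local file to S3 (hypothetical bucket and key).
    s3.putObject("member-retention-data", "uploads/report.csv", new File("/tmp/report.csv"))

    // List the objects under the uploads/ prefix.
    s3.listObjectsV2("member-retention-data", "uploads/")
      .getObjectSummaries.asScala
      .foreach(obj => println(s"${obj.getKey} (${obj.getSize} bytes)"))
  }
}
```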

Environment: Hadoop, HDFS, Pig 0.10, Hive, AWS, MapReduce, Sqoop, Java, Eclipse, SQL Server, Shell Scripting.

Confidential, SFO, CA

Hadoop Developer

Responsibilities:

  • Installed and configured Hadoop MapReduce and HDFS; developed multiple MapReduce jobs in Java for data cleaning and preprocessing (a hedged sketch follows this list).
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Designed and developed Big Data analytics platform for processing customer viewing preferences and social media comments using Java, Hadoop, Hive and Pig.
  • Integrated Hadoop into traditional ETL, accelerating the extraction, transformation, and loading of massive structured and unstructured data.
  • Worked on analyzing the Hadoop cluster and different big data components, including Pig, Hive, Spark, HBase, Kafka, and Sqoop.
  • Experienced in defining job flows.
  • Developed and executed custom MapReduce programs, Pig Latin scripts, and HQL queries.
  • Used Hadoop FS scripts for HDFS (Hadoop File System) data loading and manipulation.
  • Performed Hive test queries on local sample files and HDFS files.
  • Developed and optimized Pig and Hive UDFs (User-Defined Functions) to implement the functionality of external languages as and when required.
  • Extensively used Pig for data cleaning and optimization.
  • Developed Hive queries to analyze data and generate results.
  • Exported data from HDFS to RDBMS via Sqoop for Business Intelligence, visualization and user report generation.
  • Analyzed business requirements and cross-verified them with the functionality and features of NoSQL databases such as HBase and Cassandra to determine the optimal database.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data.
  • Installed and configured Apache Hadoop, Hive and Pig environment on the prototype server. Configured SQL database to store Hive metadata.
  • Loaded unstructured data into Hadoop File System (HDFS).
  • Created ETL jobs to load Twitter JSON data and server data into MongoDB and transported MongoDB into the Data Warehouse.
  • Responsible to manage data coming from different sources. Responsible for implementing MongoDB to store and analyze unstructured data.
  • Supported MapReduce programs running on the cluster.
  • Involved in loading data from the UNIX file system to HDFS.
  • Installed and configured Hive and wrote Hive UDFs.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Implemented CDH3 Hadoop cluster on CentOS.
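The data-cleansing MapReduce jobs mentioned at the top of this list were written in Java; the following is an equivalent, hedged Scala sketch against the org.apache.hadoop.mapreduce API. The pipe-delimited field layout and the "first field is a record id" assumption are illustrative.

```scala
// Hedged sketch of a data-cleansing mapper: trim fields, drop malformed rows, re-emit cleaned records.
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.Mapper

class CleansingMapper extends Mapper[LongWritable, Text, Text, Text] {

  override def map(key: LongWritable, value: Text,
                   context: Mapper[LongWritable, Text, Text, Text]#Context): Unit = {
    // Split a pipe-delimited record and trim whitespace from every field.
    val fields = value.toString.split("\\|", -1).map(_.trim)
    // Keep only rows with enough fields and a non-empty record id.
    if (fields.length >= 3 && fields(0).nonEmpty) {
      // Emit the record id as the key and a cleaned tab-separated row as the value.
      context.write(new Text(fields(0)), new Text(fields.mkString("\t")))
    }
  }
}
```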

Environment: Hadoop, MapReduce, HDFS, Hive, Pig, Java (JDK 1.6), SQL, Cloudera Manager, Sqoop, Flume, Cassandra, Oozie, Eclipse

Confidential

Software Engineer

Responsibilities:

  • Participated in requirement gathering and converting the requirements into technical specifications.
  • Created UML diagrams like use cases, class diagrams, interaction diagrams, and activity diagrams.
  • Created business logic using servlets and POJOs and deployed them on the WebLogic server.
  • Wrote complex SQL queries and stored procedures.
  • Developed the XML Schema and Web services for the data maintenance and structures.
  • Implemented the Web Service client for the login authentication, credit reports and applicant information using Apache Axis 2 Web Service.
  • Developed and implemented custom data validation stored procedures for metadata summarization for the data warehouse tables, for aggregating telephone subscribers switching data, for identifying winning and losing carriers, and for identifying value subscribers.
  • Identified issue and developed a procedure for correcting the problem which resulted in the improved quality of critical tables by eliminating the possibility of entering duplicate data in a Data Warehouse.
  • Designed and implemented SQL-based tools, stored procedures, and functions for reporting daily data volume and aggregation status.
  • Responsible to manage data coming from different sources.
  • Developed MapReduce algorithms.
  • Gained good experience with NoSQL databases.
  • Involved in loading data from the UNIX file system to HDFS.
  • Installed and configured Hive and wrote Hive UDFs.
  • Worked with cloud services like Amazon Web Services (AWS).
  • Designed the logical and physical data model, generated DDL scripts, and wrote DML scripts for Oracle 10g database.
  • Used Hibernate ORM framework with Spring framework for data persistence and transaction management.
  • Wrote test cases in JUnit for unit testing of classes.
  • Involved in creating templates and screens in HTML and JavaScript.
  • Involved in integrating Web Services using SOAP.

Environment: Hive 0.7.1, Apache Solr 3.x, HBase 0.90.x/0.20.x, JDK, Spring MVC, WebSphere 6.1, HTML, XML, JavaScript, JUnit 3.8, Oracle 10g, Amazon Web Services.
