Senior Hadoop Developer Resume

Phoenix, AZ

SUMMARY:

  • Over 8 years of IT experience in software development and support, including developing strategic methods for deploying Big Data technologies to efficiently solve large-scale data processing requirements.
  • Expertise in Hadoop ecosystem components such as HDFS, MapReduce, YARN, HBase, Pig, Sqoop, Spark, Spark SQL, Spark Streaming, and Hive for scalable, distributed, high-performance computing.
  • Experience in using Hive Query Language (HQL) for data analytics.
  • Experienced in installing, configuring, and maintaining Hadoop clusters.
  • Strong knowledge of creating and monitoring Hadoop clusters on Amazon EC2 and VMs, using Hortonworks Data Platform 2.1 & 2.2 and CDH3/CDH4 with Cloudera Manager, on Linux and Ubuntu.
  • Capable of processing large sets of structured, semi-structured, and unstructured data and supporting systems and application architecture.
  • Good knowledge of single-node and multi-node cluster configurations.
  • Strong knowledge of NoSQL databases such as HBase, Cassandra, and MongoDB, and their integration with Hadoop clusters.
  • Expertise in the Scala programming language and Spark Core.
  • Worked with AWS-based data ingestion and transformations.
  • Experienced with job workflow scheduling, monitoring, and coordination tools such as Oozie and ZooKeeper.
  • Good knowledge of Amazon EMR, Amazon RDS, S3 buckets, DynamoDB, and Redshift.
  • Analyze data, interpret results, and convey findings in a concise and professional manner.
  • Partner with the Data Infrastructure team and business owners to implement new data sources and ensure consistent definitions are used in reporting and analytics.
  • Promote a full-cycle approach including request analysis, dataset creation/extraction, report creation and implementation, and delivery of the final analysis to the requestor.
  • Very good understanding of SQL, ETL, and data warehousing technologies.
  • Knowledge of MS SQL Server 2012/2008/2005, Oracle 11g/10g/9i, and E-Business Suite.
  • Expert in T-SQL, creating and using stored procedures, views, and user-defined functions, and implementing Business Intelligence solutions using SQL Server 2000/2005/2008.
  • Developed web services modules for integration using SOAP and REST.
  • NoSQL database experience with HBase, Cassandra, and DynamoDB.
  • Flexible with Unix/Linux and Windows environments, working with operating systems such as CentOS 5/6, Ubuntu 13/14, and Cosmos.
  • Knowledge of the Java Virtual Machine (JVM) and multithreaded processing.
  • Strong programming skills in designing and implementing applications using Core Java, J2EE, JDBC, JSP, HTML, Spring Framework, Spring Batch, Spring AOP, Struts, JavaScript, and Servlets.
  • Responsible for collaborating with SMB customers on assessing, designing, and implementing Azure cloud solutions.
  • Java developer with extensive experience with various Java libraries, APIs, and frameworks.
  • Hands-on development experience with RDBMSs, including writing complex SQL queries, stored procedures, and triggers.
  • Sound knowledge of designing data warehousing applications using tools such as Teradata, Oracle, and SQL Server.
  • Experience using the Talend ETL tool.
  • Experience working with job schedulers such as Autosys and Maestro.
  • Strong in databases such as Sybase, DB2, Oracle, MS SQL, and Clickstream.
  • Strong understanding of Agile Scrum and Waterfall SDLC methodologies.
  • Strong working experience with Snowflake.
  • Hands-on experience with automation tools such as Puppet and Jenkins.
  • Strong communication, collaboration, and team-building skills, with proficiency at grasping new technical concepts quickly and applying them productively.
  • Adept at analyzing information system needs, evaluating end-user requirements, custom-designing solutions, and troubleshooting information systems.
  • Strong analytical and problem-solving skills.

TECHNICAL SKILLS:

Hadoop/Big Data Technologies: HDFS, MapReduce, Sqoop, Flume, Pig, Hive, Oozie, Impala, Spark, ZooKeeper, Cloudera Manager, Splunk, Kafka

NoSQL Databases: HBase, Cassandra

Monitoring and Reporting: Tableau, custom shell scripts

Hadoop Distributions: Hortonworks, Cloudera, AWS, MapR

Build Tools: Maven, SQL Developer

Programming & Scripting: Java, C, SQL, Shell Scripting, Python, Scala, Storm

Java Technologies: Servlets, JavaBeans, JDBC, Spring, Hibernate, SOAP/REST services

Databases: Oracle, MySQL, MS SQL Server, Teradata

Web Dev. Technologies: HTML, XML, JSON, CSS, jQuery, JavaScript, AngularJS

Version Control: SVN, CVS, Git

Operating Systems: Linux, Unix, Mac OS X, CentOS, Windows 10, Windows 8, Windows 7, Windows Server 2008/2003

PROFESSIONAL EXPERIENCE:

Confidential - Phoenix, AZ

Senior Hadoop Developer

Responsibilities:

  • Installed, configured, and maintained Apache Hadoop clusters for analytics and application development, along with Hadoop tools such as Hive, HQL, Pig, HBase, OLAP, ZooKeeper, Avro, Parquet, and Sqoop on Linux.
  • Implemented appropriate test cases for new processes, performed testing on changes to existing processes, and ensured that data was transformed properly through unit testing and review of output data and reports.
  • Installed and configured Hadoop, MapReduce, and HDFS (Hadoop Distributed File System), and developed multiple MapReduce jobs in Java for data cleaning.
  • Experienced in applying structured modeling to unstructured data.
  • Developed data pipelines using Flume, Sqoop, Pig, and Java MapReduce to ingest customer behavioral data and financial histories into HDFS for analysis.
  • Involved in collecting and aggregating large amounts of log data using Apache Flume and staging the data in HDFS for further analysis.
  • Worked on cluster installation, commissioning and decommissioning of DataNodes, NameNode recovery, capacity planning, and slot configuration.
  • Developed Pig Latin scripts to extract data from web server output files and load it into HDFS.
  • Created and implemented workflow process automation and monitoring using HiveQL.
  • Used Pig as an ETL tool for transformations, event joins, and pre-aggregations before storing the data in HDFS.
  • Implemented Azure cloud services for all new production servers and the web portal.
  • Worked with AWS Data Pipeline.
  • Worked with Elasticsearch, Postgres, and Apache NiFi.
  • Developed detailed specifications that define how to transform incoming data into the appropriate target output.
  • Responsible for developing data pipelines using Flume, Sqoop, and Pig to extract data from weblogs and store it in HDFS.
  • Installed the Oozie workflow engine to run multiple Hive and Pig jobs, and used Sqoop to import and export data between HDFS and RDBMSs for visualization and report generation.
  • Involved in migrating ETL processes from Oracle to Hive to test easier data manipulation.
  • Worked on functional, system, and regression testing activities within an Agile methodology.
  • Worked on a Python plugin for MySQL Workbench to upload CSV files.
  • Used Hive to analyze partitioned and bucketed data and compute various metrics for reporting.
  • Worked with HDFS storage formats such as Avro and ORC.
  • Worked with NoSQL databases such as HBase.
  • Worked with AWS-based data ingestion and transformations.
  • Worked on importing and exporting data from Oracle and DB2 into HDFS and Hive using Sqoop.
  • Responsible for building scalable distributed data solutions using Hadoop, including cluster maintenance, adding and removing cluster nodes, cluster monitoring and troubleshooting, and managing and reviewing data backups and log files.
  • Developed several new MapReduce programs to analyze and transform the data and uncover insights into customer usage patterns.
  • Worked extensively on importing metadata into Hive using Sqoop and migrated existing tables and applications to work on Hive.
  • Transferred data from one server to other servers using tools such as Bulk Copy Program (BCP) and SQL Server Integration Services (SSIS).
  • Responsible for running Hadoop Streaming jobs to process terabytes of XML data, utilizing cluster coordination services through ZooKeeper.
  • Extensive experience using message-oriented middleware (MOM) with ActiveMQ, Apache Storm, Apache Spark, Kafka, Maven, and ZooKeeper.
  • Worked on the core and Spark SQL modules of Spark extensively.
  • Developed Kafka producers and consumers, HBase clients, Spark, Shark, and Streams jobs, and Hadoop MapReduce jobs, along with components on HDFS and Hive (a minimal producer sketch follows this section).
  • Analyzed the SQL scripts and designed the solution to implement using PySpark.
  • Experience using Spark.
  • Experience in writing large Scala programs for batch processing.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data using Hadoop/Big Data concepts.
  • Responsible for creating Hive external tables, loading data into them, and querying the data using HQL.
  • Handled importing data from various data sources, performed transformations using Hive and MapReduce, and loaded the data into HDFS.
  • Worked in an Agile environment.

Environment: Hadoop Cluster, HDFS, Hive, Pig, Sqoop, OLAP, AWS, data modeling, Linux, Hadoop MapReduce, HBase, Shell Scripting, MongoDB, Cassandra, Apache Spark, SSAS, SSIS.
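
As a representative illustration of the Kafka producer work noted above, the following is a minimal Java sketch; the broker address, topic name, key, and JSON payload are placeholder assumptions rather than actual project values.

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ClickEventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder broker address; real cluster endpoints would come from configuration.
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            // try-with-resources flushes and closes the producer on exit.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Illustrative topic, key, and payload.
                producer.send(new ProducerRecord<>("customer-events", "user-123", "{\"action\":\"login\"}"));
            }
        }
    }

A matching consumer would subscribe to the same topic and hand records to Spark Streaming or an HBase client for persistence.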

Confidential - New Orleans, LA

Big Data Software Developer

Responsibilities:

  • Performed benchmarking of HDFS and the ResourceManager using TestDFSIO and TeraSort.
  • Worked on Sqoop to import data from various relational data sources.
  • Worked with Flume to bring clickstream data from front-facing application logs.
  • Worked on strategizing Sqoop jobs to parallelize data loads from source systems.
  • Participated in providing inputs for the design of ingestion patterns.
  • Participated in strategizing loads without impacting front-facing applications.
  • Worked in an Agile environment using Jira and Git.
  • Worked on the design of Hive and ANSI SQL data stores to hold data from various data sources.
  • Involved in brainstorming sessions for sizing the Hadoop cluster.
  • Involved in providing inputs to the analyst team for functional testing.
  • Worked with source system load testing teams to perform loads while ingestion jobs were in progress.
  • Worked with Continuous Integration and related tools (e.g., Jenkins, Maven).
  • Worked on performing data standardization using Pig scripts.
  • Worked with query engines such as Tez and Apache Phoenix.
  • Worked on installation and configuration of a Hortonworks cluster from the ground up.
  • Managed various groups for users with different queue configurations.
  • Worked on building analytical data stores for the data science team's model development.
  • Worked on design and development of Oozie workflows to orchestrate Pig and Hive jobs.
  • Worked on performance tuning of Hive queries using partitioning and bucketing.
  • Worked on the core and Spark SQL modules of Spark extensively (see the Spark SQL sketch after this section).
  • Developed Kafka producers and consumers, HBase clients, Spark jobs, and Hadoop MapReduce jobs, along with components on HDFS and Hive.
  • Worked with source code management tools such as GitHub, ClearCase, SVN, and CVS.
  • Working experience with testing tools.
  • Analyzed SQL scripts and designed solutions to implement them using PySpark.
  • Worked with code quality governance tools (Sonar, PMD, FindBugs, Emma, Cobertura, etc.).

Environment: Hadoop, HDFS, MapReduce, Flume, Pig, Sqoop, Hive, Oozie, HBase, Shell Scripting, Apache Spark.
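
A brief sketch of the Spark SQL usage referenced above, written against the Java API; the database, table, column names, and output path are illustrative assumptions, and enableHiveSupport() presumes a reachable Hive metastore.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class ClickstreamAggregation {
        public static void main(String[] args) {
            // enableHiveSupport() assumes Hive is configured for this Spark installation.
            SparkSession spark = SparkSession.builder()
                    .appName("ClickstreamAggregation")
                    .enableHiveSupport()
                    .getOrCreate();

            // Illustrative Hive table partitioned by event_date; counts events per day.
            Dataset<Row> daily = spark.sql(
                    "SELECT event_date, COUNT(*) AS events "
                  + "FROM clickstream.page_views "
                  + "GROUP BY event_date");

            // Write the aggregate back to HDFS as Parquet for downstream reporting.
            daily.write().mode("overwrite").parquet("/data/aggregates/daily_page_views");

            spark.stop();
        }
    }

In practice a job of this shape would be scheduled through Oozie alongside the Pig and Hive steps described above.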

Confidential, Detroit, MI

Hadoop Developer

Responsibilities:

  • Worked on distributed/cloud computing (MapReduce/Hadoop, Hive, Pig, HBase, Sqoop, Spark, Avro, ZooKeeper, etc.) on Cloudera's Hadoop distribution.
  • Installed and configured Hadoop MapReduce and HDFS, and developed multiple MapReduce jobs in Java for data cleaning and processing.
  • Involved in installing Hadoop ecosystem components.
  • Imported and exported data into HDFS, Pig, Hive, and HBase using Sqoop.
  • Responsible for managing data coming from different sources, ingested via Flume and from relational database management systems using Sqoop.
  • Involved in requirements gathering, design, development, and testing.
  • Worked on loading and transforming large sets of structured and semi-structured data into the Hadoop system.
  • Developed simple and complex MapReduce programs in Java for data analysis.
  • Loaded data from various data sources into HDFS using Flume.
  • Developed Pig UDFs to pre-process the data for analysis (see the UDF sketch after this section).
  • Worked with the Hue interface for querying the data.
  • Created Hive tables to store the processed results in a tabular format.
  • Developed Hive scripts implementing dynamic partitions.
  • Developed Pig scripts for data analysis and extended their functionality with custom UDFs.
  • Extensive knowledge of Pig scripts using bags and tuples.
  • Experience in managing and reviewing Hadoop log files.
  • Developed workflows in Oozie to automate loading data into HDFS and pre-processing with Pig.
  • Exported analyzed data to relational databases using Sqoop for visualization and report generation for the BI team.

Environment: Hadoop, UNIX, Eclipse, HDFS, Java, MapReduce, Pig, Hive, HBase, Oozie, Sqoop, MySQL, Cloudera, ZooKeeper, Flume.
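
The custom Pig UDFs mentioned above followed the standard EvalFunc pattern; below is a minimal, hypothetical example that normalizes a text field (the class name and cleaning rule are illustrative, not the actual UDFs from the project).

    import java.io.IOException;

    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    // Hypothetical UDF: trims and lower-cases a single string field, returning null for empty input.
    public class NormalizeText extends EvalFunc<String> {
        @Override
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null;
            }
            String value = input.get(0).toString().trim().toLowerCase();
            return value.isEmpty() ? null : value;
        }
    }

In a Pig Latin script the compiled jar would be loaded with REGISTER and the function invoked by its class name inside a FOREACH ... GENERATE statement.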

Confidential

JAVA/J2EE Developer

Responsibilities:

  • Worked with Java, J2EE, Struts, web services, and Hibernate in a fast-paced development environment.
  • Followed Agile methodology, interacted directly with the client on features, implemented optimal solutions, and tailored the application to customer needs.
  • Involved in design and implementation of the web tier using Servlets and JSP.
  • Used Apache POI for reading Excel files.
  • Developed the user interface using JSP and JavaScript to view all online trading transactions.
  • Designed and developed Data Access Objects (DAOs) to access the database (see the DAO sketch after this section).
  • Used the DAO Factory and Value Object design patterns to organize and integrate the Java objects.
  • Coded JavaServer Pages for dynamic front-end content that uses Servlets and EJBs.
  • Coded HTML pages using CSS for static content generation, with JavaScript for validations.
  • Used the JDBC API to connect to the database and carry out database operations.
  • Used JSP and JSTL tag libraries for developing user interface components.
  • Performed code reviews.
  • Knowledgeable about Spring Batch, which provides functions for processing large volumes of records, including job processing statistics, job restart, skip, and resource management.
  • Implemented various design patterns in the project, such as Business Delegate, Data Transfer Object, Service Locator, Data Access Object, and Singleton.
  • Developed web services for web store components using JAXB, and was involved in generating stubs and annotation-based JAXB data model classes.
  • Worked with REST APIs and NodeJS.
  • Developed XML configuration and data descriptions using Hibernate; the Hibernate Transaction Manager was used to maintain transaction persistence.
  • Designed and developed a web-based application using HTML5, CSS, JavaScript, AJAX, and the JSP framework.
  • Performed unit testing, system testing, and integration testing.
  • Involved in building and deploying the application in a Linux environment.

Environment: Java, J2EE, JDBC, Struts, SQL, Hibernate, Eclipse, Apache POI, CSS, JDK 5.0, Servlets, JSP, Spring, HTML, JavaScript Prototypes, XML, JSTL, XPath, jQuery, Oracle 10, RAD.
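
To illustrate the DAO and JDBC work described above, here is a minimal sketch; the connection URL, credentials, and table and column names are placeholders rather than the actual trading schema, and it uses try-with-resources for brevity where the original JDK 5 code would have used explicit finally blocks.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Minimal DAO sketch using plain JDBC; in practice the instance would be obtained through the DAO Factory pattern noted above.
    public class TradeDao {

        // Placeholder connection details.
        private static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL";
        private static final String USER = "app_user";
        private static final String PASSWORD = "changeit";

        public String findTradeStatus(long tradeId) throws SQLException {
            String sql = "SELECT status FROM trades WHERE trade_id = ?";
            try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
                 PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setLong(1, tradeId);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("status") : null;
                }
            }
        }
    }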
