
Senior Hadoop Developer Resume


Madison, WI

SUMMARY

  • Over 8 years of IT experience in software development and support, with experience in developing strategic methods for deploying Big Data technologies to efficiently solve Big Data processing requirements.
  • Expertise in Hadoop ecosystem components HDFS, MapReduce, YARN, HBase, Pig, Sqoop, Spark, Spark SQL, Spark Streaming, and Hive for scalability, distributed computing, and high-performance computing.
  • Experience in using Hive Query Language for data analytics.
  • Experienced in installing, maintaining, and configuring Hadoop clusters.
  • Strong knowledge of creating and monitoring Hadoop clusters on Amazon EC2, VMs, Hortonworks Data Platform 2.1 & 2.2, and CDH3/CDH4 with Cloudera Manager on Linux, Ubuntu, etc.
  • Capable of processing large sets of structured, semi-structured, and unstructured data and supporting systems application architecture.
  • Good knowledge of single-node and multi-node cluster configurations.
  • Strong knowledge of NoSQL column-oriented databases like HBase, Cassandra, MongoDB, and MarkLogic, and their integration with Hadoop clusters.
  • Expertise in the Scala programming language and Spark Core.
  • Worked with AWS-based data ingestion and transformations.
  • Worked with Cloudbreak and Ambari Blueprints to configure the AWS platform.
  • Worked with data warehouse tools like Informatica and Talend.
  • Experienced in job workflow scheduling and monitoring tools like Oozie and ZooKeeper.
  • Good knowledge of Amazon EMR, Amazon RDS, S3 buckets, DynamoDB, and Redshift.
  • Analyze data, interpret results, and convey findings in a concise and professional manner.
  • Partner with the Data Infrastructure team and business owners to implement new data sources and ensure consistent definitions are used in reporting and analytics.
  • Promote a full-cycle approach including request analysis, creating/pulling the dataset, report creation and implementation, and providing the final analysis to the requestor.
  • Good experience with Kafka and Storm.
  • Worked with Docker to establish a connection between Spark and a Neo4j database.
  • Knowledge of the Java Virtual Machine (JVM) and multithreaded processing.
  • Hands-on experience working with ANSI SQL.
  • Strong programming skills in designing and implementing applications using Core Java, J2EE, JDBC, JSP, HTML, Spring Framework, Spring Batch framework, Spring AOP, Struts, JavaScript, and Servlets.
  • Experience writing build scripts using Maven and working with continuous integration systems like Jenkins.
  • Java developer with extensive experience with various Java libraries, APIs, and frameworks.
  • Hands-on development experience with RDBMSs, including writing complex SQL queries, stored procedures, and triggers.
  • Very good understanding of SQL, ETL, and data warehousing technologies.
  • Knowledge of MS SQL Server 2012/2008/2005, Oracle 11g/10g/9i, and E-Business Suite.
  • Expert in T-SQL, creating and using stored procedures, views, and user-defined functions, and implementing business intelligence solutions using SQL Server 2000/2005/2008.
  • Developed Web-Services module for integration using SOAP and REST.
  • NoSQL database experience with HBase, Cassandra, and DynamoDB.
  • Flexible with Unix/Linux and Windows environments, working with operating systems like CentOS 5/6, Ubuntu 13/14, and Cosmos.
  • Sound knowledge of designing data warehousing applications using tools like Teradata, Oracle, and SQL Server.
  • Experience working with Solr for text search.
  • Experience using the Talend ETL tool.
  • Experience working with job schedulers like Autosys and Maestro.
  • Strong in databases like Sybase, DB2, Oracle, MS SQL, and Clickstream.
  • Strong understanding of Agile Scrum and Waterfall SDLC methodologies.
  • Strong working experience in Snowflake.
  • Hands-on experience with automation and monitoring tools such as Puppet, Jenkins, Chef, Ganglia, and Nagios.
  • Strong communication, collaboration, and team-building skills, with proficiency at grasping new technical concepts quickly and utilizing them in a productive manner.
  • Adept in analyzing information system needs, evaluating end-user requirements, custom designing solutions, and troubleshooting information systems.
  • Strong analytical and problem-solving skills.

TECHNICAL SKILLS

Hadoop/Big Data Technologies: HDFS, MapReduce, Sqoop, Flume, Pig, Hive, Oozie, Impala, Spark, ZooKeeper, Cloudera Manager, Splunk

NoSQL Databases: HBase, Cassandra

Monitoring and Reporting: Tableau, Custom shell scripts

Hadoop Distributions: Hortonworks, Cloudera, MapR

Build Tools: Maven, SQL Developer

Programming & Scripting: Java, C, SQL, Shell Scripting, Python, Scala

Java Technologies: Servlets, JavaBeans, JDBC, Spring, Hibernate, SOAP/REST services

Databases: Oracle, MySQL, MS SQL Server, Teradata

Web Dev. Technologies: HTML, XML, JSON, CSS, jQuery, JavaScript, AngularJS

Version Control: SVN, CVS, GIT

Operating Systems: Linux, Unix, Mac OS X, CentOS, Windows 10, Windows 8, Windows 7, Windows Server 2008/2003

PROFESSIONAL EXPERIENCE

Confidential, Madison, WI

Senior Hadoop Developer

Responsibilities:

  • Installed, configured, and maintained Apache Hadoop clusters for analytics and application development, along with Hadoop tools like Hive, HSQL, Pig, HBase, OLAP, ZooKeeper, Avro, Parquet, and Sqoop on Arch Linux.
  • Responsible for developing a data pipeline using Azure HDInsight, Flume, Sqoop, and Pig to extract the data from weblogs and store it in HDFS.
  • Installed the Oozie workflow engine to run multiple Hive and Pig jobs, and used Sqoop to import and export data between HDFS and RDBMS for visualization and to generate reports.
  • Involved in the migration of ETL processes from Oracle to Hive to test the easy data manipulation.
  • Worked in functional, system, and regression testing activities with agile methodology.
  • Worked on a Python plugin for MySQL Workbench to upload CSV files.
  • Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting.
  • Worked with HDFS storage formats like Avro, ORC, and Parquet.
  • Worked with Accumulo to modify server-side key-value pairs.
  • Working experience with Shiny and R.
  • Working experience with business analytics tools like Microsoft Power BI.
  • Working experience with Vertica, QlikSense, QlikView, and SAP BOE.
  • Worked with NoSQL databases like HBase, Cassandra, and DynamoDB.
  • Worked with AWS-based data ingestion and transformations.
  • Good experience with Python, Pig, Sqoop, Oozie, Hadoop Streaming, Hive, and Phoenix.
  • Worked on importing and exporting data from Oracle and DB2 into HDFS and HIVE using Sqoop.
  • Responsible for building scalable distributed data solutions using Hadoop and for cluster maintenance: adding and removing cluster nodes, cluster monitoring and troubleshooting, and managing and reviewing data backups and log files.
  • Developed several new MapReduce programs to analyze and transform the data to uncover insights into the customer usage patterns.
  • Working experience with Cloudbreak and Ambari Blueprints to maintain, configure, and scale the AWS platform.
  • Worked extensively with importing metadata into Hive using Sqoop and migrated existing ACID tables and applications to work on Hive.
  • Performed extract, transform, and load (ETL) operations through Talend.
  • Responsible for running Hadoop Streaming jobs to process terabytes of XML data; utilized cluster coordination services through ZooKeeper.
  • Extensive experience in using message-oriented middleware (MOM) with ActiveMQ, Apache Storm, Apache Spark, Kafka, Maven, and ZooKeeper.
  • Wrote the shell scripts to monitor the health of Hadoop daemon services and respond accordingly to any warning or failure conditions.
  • Worked with Docker to establish a connection between Spark and a Neo4j database.
  • Have experience working with DevOps.
  • Installed and configured Hadoop, MapReduce, and HDFS (Hadoop Distributed File System), and developed multiple MapReduce jobs in Java for data cleaning.
  • Have experience in structured modelling of unstructured data.
  • Developed data pipeline using Flume, Sqoop, Pig and Java MapReduce to ingest customer behavioral data and financial histories into HDFS for analysis.
  • Involved in collecting and aggregating large amounts of log data using Apache Flume and staging data in HDFS for further analysis.
  • Worked on installing cluster, commissioning & decommissioning of Data Nodes, Name Node recovery, capacity planning, and slots configuration.
  • Developed Pig Latin scripts to extract the data from the web server output files to load into HDFS.
  • Worked on Hortonworks Data Platform (HDP).
  • Worked with Splunk to analyze and visualize data.
  • Worked on Mesos cluster and Marathon.
  • Experience working with integration of data from Spark to NoSQL databases like MarkLogic.
  • Used Pig as an ETL tool to do transformations, event joins, and some pre-aggregations before storing the data onto HDFS.
  • Worked with Orchestration tools like Airflow.
  • Wrote test cases and analyzed and reported test results to product teams.
  • Good experience with Clojure, Kafka, and Storm.
  • Worked with AWS Data Pipeline.
  • Worked with Elasticsearch, Postgres, and Apache NiFi.
  • Hadoop workflow management using Oozie, Azkaban, and Hamake.
  • Worked on automating the streaming process using Puppet.
  • Worked on the core and Spark SQL modules of Spark extensively.
  • Worked on descriptive statistics using R.
  • Developed Kafka producers and consumers (see the producer sketch after this list), HBase clients, Spark, Shark, and Streams jobs, and Hadoop MapReduce jobs, along with components on HDFS and Hive.
  • Strong working experience in Snowflake and Clickstream.
  • Experience working with Elasticsearch and Kibana for data search and visualization.
  • Worked on Hadoop with EMC Greenplum, GemStone, and GemFire.
  • Experience working with Spark machine learning and SPSS.
  • Analyzed the SQL scripts and designed the solution to implement using PySpark.
  • Experience using Spark with Neo4j to acquire the interrelated graph information of the insurer and to query the data from the stored graphs.
  • Worked on Neo4j and Spark integration to move data to and from Neo4j and analyzed the data using Tableau.
  • Experience in writing large batch-processing programs in Scala.
  • Worked with data warehouse tools to perform ETL using Informatica and Talend.
  • Queried with ANSI SQL, which also works on Oracle SQL.
  • Worked on text file processing using the command-line utility grep and Perl scripting.
  • Load and transform large sets of structured, semi structured, and unstructured data using Hadoop/Big Data concepts.
  • Responsible for creating Hive external tables, loading the data into the tables, and querying data using HQL.
  • Experience working with ML libraries.
  • Handled importing data from various data sources, performed transformations using Hive, MapReduce, and loaded data into HDFS.
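
A minimal sketch of a Kafka producer of the kind referenced in the bullet above, written in Java with the standard Kafka client; the broker address, topic name, key, and payload are illustrative assumptions rather than values from the project.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Minimal Kafka producer sketch. The broker address, topic name, and
    // payload below are illustrative assumptions, not project values.
    public class ClickEventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Publish one event keyed by a hypothetical customer id.
                producer.send(new ProducerRecord<>("click-events", "customer-42",
                        "{\"page\":\"/home\",\"ts\":1700000000}"));
                producer.flush();
            }
        }
    }

A consumer written with the same client would subscribe to the topic and poll records in a loop before handing them to downstream HBase or HDFS writers.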

Environment: Hadoop Cluster, HDFS, Hive, Pig, Sqoop, OLAP, data modelling, Linux, Hadoop MapReduce, HBase, Shell Scripting, MongoDB, Cassandra, Apache Spark, Neo4j.

Confidential, Harrisburg, PA

Senior Hadoop Developer

Responsibilities:

  • Worked on distributed/cloud computing (MapReduce/Hadoop, Hive, Pig, HBase, Sqoop, Spark, Avro, ZooKeeper, etc.) on Cloudera's distribution of Hadoop (CDH4).
  • Installed and configured Hadoop MapReduce and HDFS, and developed multiple MapReduce jobs in Java for data cleaning and processing.
  • Involved in installing Hadoop Ecosystem components.
  • Imported and exported data into HDFS, Pig, Hive, and HBase using Sqoop.
  • Responsible for managing data coming from different data sources.
  • Ingested log data using Flume and imported data from relational database management systems using Sqoop.
  • Developed Pig scripts for data analysis and extended their functionality by developing custom UDFs (a sketch of one follows this list).
  • Extensive knowledge of Pig scripts using bags and tuples.
  • Experience in managing and reviewing Hadoop log files.
  • Involved in gathering the requirements, designing, development, and testing.
  • Worked on loading and transforming large sets of structured and semi-structured data into the Hadoop system.
  • Developed simple and complex MapReduce programs in Java for data analysis.
  • Loaded data from various data sources into HDFS using Flume.
  • Developed Pig UDFs to pre-process the data for analysis.
  • Worked on the Hue interface for querying the data.
  • Created Hive tables to store the processed results in a tabular format.
  • Developed Hive scripts for implementing dynamic partitions.
  • Developed workflows in Oozie to automate the tasks of loading the data into HDFS and pre-processing with Pig.
  • Exported analyzed data to relational databases using Sqoop for visualization and to generate reports for the BI team.
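
A minimal sketch of the kind of custom Pig UDF mentioned above; the class name and the normalization rule are illustrative assumptions, not the project's actual UDFs.

    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    // Minimal Pig EvalFunc sketch: normalizes a string field before analysis.
    // The class name and normalization rule are illustrative assumptions.
    public class NormalizeField extends EvalFunc<String> {
        @Override
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null;
            }
            return input.get(0).toString().trim().toUpperCase();
        }
    }

Packaged into a jar, it would be made available to a Pig script with REGISTER and called like any built-in function.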

Environment: Hadoop (CDH4), UNIX, Eclipse, HDFS, Java, MapReduce, Apache Pig, Hive, HBase, Oozie, Sqoop, and MySQL.

Confidential, New York, NY

Big Data Software Developer

Responsibilities:

  • Worked on Sqoop to import data from various relational data sources.
  • Worked with Flume to bring clickstream data from front-facing application logs.
  • Worked on strategizing Sqoop jobs to parallelize data loads from source systems.
  • Participated in providing inputs for the design of the ingestion patterns.
  • Participated in strategizing loads without impacting front facing applications.
  • Worked on performance tuning of Hive queries with partitioning and bucketing.
  • Worked on the core and Spark SQL modules of Spark extensively.
  • Developed Kafka producers and consumers, HBase clients (see the client sketch after this list), Spark jobs, and Hadoop MapReduce jobs, along with components on HDFS and Hive.
  • Worked with Solr to do the full-text search and the NoSQL search of the structured and unstructured data.
  • Worked with big data tools like Apache Phoenix, Apache Kylin, AtScale, and Hue.
  • Worked with security tools like Knox, Apache Ranger, Atlas, Sentry, and Kerberos.
  • Worked with BI concepts: Dataguru, Talend.
  • Worked in an agile environment using Jira and Git.
  • Worked on the design of Hive and ANSI data stores to store the data from various data sources.
  • Involved in brainstorming sessions for sizing the Hadoop cluster.
  • Involved in providing inputs to analyst team for functional testing.
  • Worked with source system load testing teams to perform loads while ingestion jobs are in progress.
  • Worked with Continuous Integration and related tools (i.e. Nagios,Jenkins, Maven, Puppet, Chef, Ganglia).
  • Worked on performing data standardization using Pig scripts.
  • Worked with the query engines Tez and Apache Phoenix.
  • Worked with business intelligence (BI) concepts and data warehousing technologies using Power BI and R statistics.
  • Worked on installing and configuring a Hortonworks cluster from the ground up.
  • Managed various groups for users with different queue configurations.
  • Worked on building analytical data stores for data science team’s model development.
  • Worked on the design and development of Oozie workflows to perform orchestration of Pig and Hive jobs.
  • Worked with source code management tools: GitHub, ClearCase, SVN, and CVS.
  • Working experience with the testing tools JUnit and SoapUI.
  • Experienced in analyzing the SQL scripts and designing the solution to implement using PySpark.
  • Worked with code quality governance related tools (Sonar, PMD, FindBugs, Emma, Cobertura, etc.).
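
A minimal sketch of an HBase client of the kind referenced above, using the standard HBase Java client API; the table, column family, qualifier, and row key are illustrative assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    // Minimal HBase client sketch: one write and one read against a table.
    // Table, column family, qualifier, and row key are illustrative assumptions.
    public class CustomerProfileClient {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("customer_profile"))) {

                // Write a single cell keyed by a hypothetical customer id.
                Put put = new Put(Bytes.toBytes("customer-42"));
                put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("segment"),
                        Bytes.toBytes("gold"));
                table.put(put);

                // Read the same row back and print the stored value.
                Result result = table.get(new Get(Bytes.toBytes("customer-42")));
                byte[] segment = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("segment"));
                System.out.println("segment = " + Bytes.toString(segment));
            }
        }
    }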

Environment: Hadoop, HDFS, MapReduce, Flume, Pig, Sqoop, Hive, Oozie, Solr, Ganglia, HBase, Shell Scripting, Apache Spark.

Confidential, Chicago, IL

JAVA/J2EE Developer

Responsibilities:

  • Involved in the project from requirements gathering through various stages like design, testing, and production, following agile methodology.
  • Implemented the Spring MVC framework, which included writing Controller classes for handling requests and processing form submissions, and performed validations using Commons Validator (see the controller sketch after this list).
  • Implemented the business layer using Hibernate with Spring DAO, and developed mapping files and POJO Java classes using the ORM tool.
  • Designed and developed Business Services using Spring Framework (Dependency Injection) and DAO Design Patterns.
  • Knowledge of Spring Batch, which provides functions for processing large volumes of records, including job processing statistics, job restart, skip, and resource management.
  • Implemented various design patterns in the project, such as Business Delegate, Data Transfer Object, Service Locator, Data Access Object, and Singleton.
  • Set up the build environment with Maven deployment descriptors, writing the Maven build XML, taking builds, and configuring and deploying the application on all the servers.
  • Implemented all the business logic in the middle tier using Java classes and JavaBeans; used the JUnit framework for unit testing of the application.
  • Developed web service for web store components using JAXB and involved in generating stub and JAXB data model class based on annotation.
  • Worked on the platform's REST APIs and NodeJS.
  • Developed XML configuration and data description using Hibernate; the Hibernate Transaction Manager was used to maintain the transaction persistence.
  • Designed and developed a web-based application using HTML5, CSS, JavaScript, AJAX, and the JSP framework.
  • Involved in various testing efforts as per the specifications and test cases using test-driven development.
  • Applied the MVC pattern of the Ajax framework, which involved creating controllers for implementing classes.
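
A minimal sketch of the kind of Spring MVC controller described above; the request mapping, view names, and form-backing bean are illustrative assumptions, and the Commons Validator wiring is omitted.

    import org.springframework.stereotype.Controller;
    import org.springframework.ui.Model;
    import org.springframework.validation.BindingResult;
    import org.springframework.web.bind.annotation.ModelAttribute;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestMethod;

    // Minimal Spring MVC controller sketch for a form submission.
    // URL, view names, and the Order bean are illustrative assumptions.
    @Controller
    @RequestMapping("/orders")
    public class OrderController {

        @RequestMapping(method = RequestMethod.GET)
        public String showForm(Model model) {
            model.addAttribute("order", new Order());
            return "orderForm";                 // logical view name
        }

        @RequestMapping(method = RequestMethod.POST)
        public String submit(@ModelAttribute("order") Order order, BindingResult result) {
            if (result.hasErrors()) {
                return "orderForm";             // redisplay the form on validation errors
            }
            // Hand off to the service layer (omitted in this sketch).
            return "redirect:/orders/confirmation";
        }

        // Hypothetical form-backing bean.
        public static class Order {
            private String item;
            public String getItem() { return item; }
            public void setItem(String item) { this.item = item; }
        }
    }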

Environment: JDK 5.0, J2EE, Servlets, JSP, Spring, HTML, JavaScript Prototypes, XML, JSTL, XPath, jQuery, Oracle 10, RAD, TDD, WebSphere Application Server, SVN, Maven, JDBC, Windows XP, Hibernate.

Confidential

JAVA/J2EE Developer

Responsibilities:

  • Involved in Java, J2EE, Struts, web services, and Hibernate in a fast-paced development environment.
  • Followed agile methodology, interacted directly with the client on the features, implemented optimal solutions, and tailored the application to customer needs.
  • Involved in design and implementation of web tier using Servlets and JSP.
  • Used Apache POI for reading Excel files.
  • Developed the user interface using JSP and JavaScript to view all online trading transactions.
  • Used JSP and JSTL Tag Libraries for developing User Interface components.
  • Performed code reviews.
  • Performed unit testing, system testing and integration testing.
  • Designed and developed Data Access Objects (DAO) to access the database.
  • Used the DAO factory and value object design patterns to organize and integrate the Java objects.
  • Coded JavaServer Pages for the dynamic front-end content that use Servlets and EJBs.
  • Coded HTML pages using CSS for static content generation with JavaScript for validations.
  • Used the JDBC API to connect to the database and carry out database operations (a DAO sketch follows this list).
  • Involved in building and deployment of application in Linux environment.
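
A minimal sketch of a JDBC-backed DAO along the lines described above; the DataSource wiring, table, and column names are illustrative assumptions tied loosely to the online-trading context.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Minimal JDBC DAO sketch following the DAO pattern described above.
    // The "trades" table and its columns are illustrative assumptions.
    public class TradeDao {
        private final DataSource dataSource;

        public TradeDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public String findSymbolById(long tradeId) throws SQLException {
            String sql = "SELECT symbol FROM trades WHERE trade_id = ?";
            try (Connection conn = dataSource.getConnection();
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, tradeId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("symbol") : null;
                }
            }
        }
    }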

Environment: Java, J2EE, JDBC, Struts, SQL, Hibernate, Eclipse, Apache POI, CSS.

Confidential

Software Engineer

Responsibilities:

  • Used WebSphere for developing use cases, sequence diagrams, and preliminary class diagrams for the system in UML.
  • Extensively used WebSphere Studio Application Developer for building, testing, and deploying applications.
  • Used the Spring Framework based on Model View Controller (MVC) and designed GUI screens using HTML and JSP.
  • Developed the user interface using JSP pages and DHTML to design the dynamic HTML pages.
  • Developed session beans on WebSphere for the transactions in the application.
  • Developed the presentation layer and GUI framework in HTML and JSP, and performed client-side validations.
  • Involved in Java code that generated XML documents, which in turn used XSLT to translate the content into HTML for presentation in the GUI.
  • Implemented XQuery and XPath for querying and node selection based on the client input XML files to create Java objects (see the XPath sketch after this list).
  • Used WebSphere to develop the entity beans where transaction persistence was required, and JDBC was used to connect to the MySQL database.
  • Utilized WSAD to create JSPs, Servlets, and EJBs that pulled information from a DB2 database and sent it to a front-end GUI for end users.
  • On the database end, responsibilities included the creation of tables, triggers, stored procedures, sub-queries, joins, integrity constraints, and views.
  • Worked on MQSeries with J2EE technologies (EJB, JavaMail, JMS, etc.) on the WebSphere server.
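
A minimal sketch of XPath-based node selection over a client input XML file, as referenced above, using the standard JDK XPath API; the file name and element names are illustrative assumptions.

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    // Minimal XPath sketch: select nodes from a client input XML file.
    // The file name and element names are illustrative assumptions.
    public class ClientXmlReader {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse("client-input.xml");

            XPath xpath = XPathFactory.newInstance().newXPath();
            NodeList accounts = (NodeList) xpath.evaluate(
                    "/clients/client/account", doc, XPathConstants.NODESET);

            for (int i = 0; i < accounts.getLength(); i++) {
                // Each selected node could be mapped onto a Java object here.
                System.out.println(accounts.item(i).getTextContent());
            }
        }
    }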

Environment: Java, EJB, IBM WebSphere Application Server, Spring, JSP, Servlets, JUnit, JDBC, XML, XSLT, CSS, DOM, HTML, MySQL, JavaScript, Oracle, UML, ClearCase, Ant.
