
Hadoop Developer Resume


West Lake, TX

SUMMARY:

  • 8 years of experience in the IT industry, comprising extensive work on the Build Engineering & Release Management process, including end-to-end code configuration, building binaries, and deployment of artifacts across the entire life cycle of enterprise applications, as well as general systems administration, Change Management, and Software Configuration Management (SCM)
  • 6+ years of experience in Big Data technologies and 1.5 years of experience in Java and mainframe technologies
  • Worked in the finance, insurance, and e-commerce domains
  • Expertise in various components of the Hadoop ecosystem: MapReduce, Hive, Pig, Sqoop, Impala, Flume, Oozie, HBase, Apache Solr, Apache Storm, YARN
  • Hands-on Experience in working with Cloudera Hadoop Distribution
  • Wrote, executed, and deployed complex MapReduce Java code using various Hadoop APIs (a minimal sketch follows this list)
  • Experienced in MapReduce code tuning and performance optimization
  • Knowledge in installing, configuring and using Hadoop ecosystem components
  • Proficient in Hive Query Language (HQL) and experienced in Hive performance optimization using partitioning, dynamic partitioning, and bucketing
  • Expertise in developing Pig scripts; wrote and implemented custom UDFs in Pig for data filtering
  • Used Impala for data analysis.
  • Hands-on experience with the data ingestion tools Sqoop and Flume
  • Collected log data from various sources (web servers, application servers, and consumer devices) using Flume and stored it in HDFS to perform analysis
  • Performed data transfer between HDFS and relational database systems (MySQL, SQL Server, Oracle, and DB2) using Sqoop
  • Used the Oozie job scheduler to schedule MapReduce, Hive, and Pig jobs; experience in automating job execution
  • Experience with NoSQL databases like HBase and fair knowledge in MongoDB and Cassandra.
  • Knowledge in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions
  • Experience in working with different relational databases like MySQL, SQL Server, Oracle, and DB2
  • Strong experience in database design, writing complex SQL Queries
  • Expertise in developing multi-tiered, web-based enterprise applications using J2EE technologies like Servlets, JSP, JDBC, and Hibernate
  • Extensive coding experience in Java and mainframes: COBOL, CICS, and JCL
  • Experience working in all phases of software development across various methodologies
  • Strong background in writing test plans and performing unit, user acceptance, integration, and system testing
  • Proficient in software documentation and technical report writing.
  • Worked coherently with multiple teams. Conducted peer reviews, organized and participated in knowledge transfer (technical and domain) sessions.
  • Experience working in an onsite-offshore model.
  • Developed various UDFs in MapReduce and Python for Pig and Hive.
  • Decent experience and knowledge in other SQL and NoSQL Databases like MySQL, MS SQL, MongoDB, HBase, Accumulo, Neo4j and Cassandra.
  • Good Data Warehouse experience in MS SQL.
  • Good knowledge and firm understanding of J2EE frontend/backend, SQL and database concepts.
  • Good experience in Linux and Mac OS environments.
  • Used various development tools such as Eclipse, Git, Android Studio, and Subversion.
  • Knowledge of Cloudera Hadoop and MapR distribution components and their custom packages.
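
As a hedged illustration of the MapReduce work noted above, the following is a minimal word-count style job written against the standard org.apache.hadoop.mapreduce API; the class names and paths are illustrative only and do not come from any project listed below.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Minimal word-count style MapReduce job (illustrative class names).
public class WordCountJob {

    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (token, 1) for every whitespace-separated token in the line.
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountJob.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);   // combiner cuts shuffle volume
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

A job like this would typically be packaged into a jar and submitted with hadoop jar, passing the input and output HDFS paths as arguments.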

TECHNICAL SKILLS:

Hadoop/Big Data: MapReduce, Hive, Pig, Impala, Sqoop, Flume, Spark, HDFS, Oozie, Hue, HBase, ZooKeeper

Operating Systems: Windows, Ubuntu, Red Hat Linux, Unix

Java & J2EE Technologies: Core Java, Servlets, JSP, JDBC, Talend

Frameworks: Hibernate, Spring (MVC, Boot), Kafka

Cloud: AWS (EC2, S3, CloudFormation, VPC)

Databases/Database Languages: Oracle 11g/10g/9i, MySQL, DB2, SQL Server, SQL, HQL, NoSQL (HBase)

Web Technologies: JavaScript, HTML, XML, REST, CSS

Programming Languages: Java, C#, Unix shell scripting, COBOL, CICS, JCL

IDEs: Eclipse, NetBeans

Web Servers: Apache Tomcat 6

Methodologies: Waterfall, Agile and Scrum

PROFESSIONAL EXPERIENCE:

Confidential, West Lake, TX

Hadoop Developer

Responsibilities:

  • Set up a Hadoop cluster on Amazon EC2 using Apache Whirr for a POC.
  • Worked on analyzing the Hadoop cluster and different big data analytic tools, including Pig, the HBase database, and Sqoop, with development done using Talend.
  • Responsible for building scalable distributed data solutions using Hadoop
  • Installed and configured Flume, Hive, Pig, Sqoop, and HBase on the Hadoop cluster.
  • Managed and scheduled jobs on the Hadoop cluster.
  • Implemented a nine-node CDH3 Hadoop cluster on Red Hat Linux.
  • Worked on cluster installation, commissioning and decommissioning of DataNodes, NameNode recovery, capacity planning, and slot configuration; used Spark to develop the application.
  • Handled resource management of the Hadoop cluster, including adding/removing cluster nodes for maintenance and capacity needs, using Talend and C#.
  • Involved in loading data from UNIX file system to HDFS.
  • Created HBase tables to store variable data formats of PII data coming from different portfolios.
  • Implemented best income logic using Pig scripts.
  • Developed microservices and APIs using Spring Boot and used an Apache Kafka cluster as the messaging system between the APIs and microservices (a producer sketch follows this list).
  • Implemented test scripts to support test driven development and continuous integration.
  • Responsible to manage data coming from different sources.
  • Installed and configured Hive and wrote Hive UDFs.
  • Experienced in loading and transforming large sets of structured, semi-structured, and unstructured data.
  • Provided cluster coordination services through ZooKeeper.
  • Good knowledge of scalable, secure cloud architecture based on Amazon Web Services (leveraging AWS cloud services: EC2, CloudFormation, VPC, S3).
  • Hands on experience in installing, configuring, and using Hadoop ecosystem components like Hadoop MapReduce, HDFS, HBase, Hive, Sqoop, Pig, Zookeeper, Storm, Spark, Kafka and Flume.
  • Implemented multi-tiered architectures using both microservices and monolithic approaches.
  • Developed microservices using Dropwizard and Spring Boot; implemented the UI using AngularJS 1.x.
  • Exported the analyzed data to relational databases using Sqoop for visualization and to generate reports for the BI team.
  • Analyzed large data sets to determine the optimal way to aggregate and report on them.
  • Supported in setting up QA environment and updating configurations for implementing scripts with Pig and Sqoop.
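
As a hedged sketch of the Kafka messaging mentioned above, the following minimal Java producer publishes an event to a topic; the broker address, topic name, and payload are placeholders rather than values from this project.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Minimal sketch of publishing an event to Kafka from a service.
public class EventPublisher {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Key = customer id, value = JSON payload (both illustrative only).
            producer.send(new ProducerRecord<>("orders", "customer-123",
                    "{\"event\":\"ORDER_CREATED\"}"));
            producer.flush();
        }
    }
}

A consumer service would subscribe to the same topic and process the event asynchronously, which is what decouples the APIs from the downstream microservices.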

Environment: Hadoop, HDFS, Hive, Flume, HBase, Sqoop, Pig, Java (JDK 1.6), Eclipse, MySQL, Spring Boot, Spring (MVC, Core), Talend, Dropwizard, C#, Spray-can (Scala), Ubuntu, ZooKeeper, Amazon EC2, Solr, Spark, Kafka

Confidential, Chino Hills, CA

Hadoop Developer

Responsibilities:

  • Working on the development of a web application and Spring batch applications. The web application allows the customers to sign up and get the cellular and music services.
  • Tools: MySQL, Tomcat Server, Mybatis, Spring MVC, REST, AWS (Amazon Web Services)
  • Working on the development of User Interface
  • Tools: AngularJS, Backbone.js, JavaScript, Velocity
  • Working on the mobile payment functionality using PayPal, AngularJS, and Spring MVC
  • Have been involved in Spring Integration
  • Have been involved in building and deploying the applications using Ant builds.
  • Involved in fixing production bugs and in the deployment process.
  • Worked on Spring Batch applications to ensure that customer cellular and music services get renewed.
  • Involved in deploying the applications in AWS.
  • Proficiency in Unix/Linux shell commands.
  • Maintained EC2 (Elastic Compute Cloud) and RDS (Relational Database Service) instances in Amazon Web Services.
  • Created RESTful web services interfaces supporting XML message transformation (a controller sketch follows this list).
  • Developed unit test cases using JUnit and TestNG.
  • Involved in designing the web applications, working closely with the architect.
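
As a hedged sketch of the RESTful interface work noted above, here is a minimal Spring MVC controller exposing a read endpoint; it assumes current Spring annotations (@RestController, @GetMapping), and the resource name and fields are illustrative rather than taken from the actual application.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Minimal Spring MVC REST controller sketch (illustrative resource).
@RestController
@RequestMapping("/api/subscriptions")
public class SubscriptionController {

    @GetMapping("/{id}")
    public Subscription getSubscription(@PathVariable("id") String id) {
        // In the real service this would delegate to a service/DAO layer.
        return new Subscription(id, "ACTIVE");
    }

    // Simple response payload; serialized by the configured message converter.
    public static class Subscription {
        private final String id;
        private final String status;

        public Subscription(String id, String status) {
            this.id = id;
            this.status = status;
        }
        public String getId() { return id; }
        public String getStatus() { return status; }
    }
}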

Environment: Hadoop (CDH), MapReduce, HDFS, Hive, Pig, Sqoop, Flume, Oozie, Java, SQL, Kafka, Cassandra

Confidential, San Mateo, CA

Hadoop Developer

Responsibilities:

  • Involved in providing architecture, design, development, and testing services for sub-system components within the data aggregation infrastructure
  • Installed/Configured/Maintained Apache Hadoop clusters for application development and Hadoop tools like Hive, Pig, HBase, Flume and Sqoop.
  • Performed system administration activities on Linux, CentOS, and Ubuntu.
  • Developed Java MapReduce jobs for trip calibration, trip summarization, and data filtering.
  • Developed Hive UDFs for rating aggregation (a hedged UDF sketch follows this list).
  • Handled importing of data from various data sources, performed transformations using Hive, MapReduce, and loaded data into HDFS
  • Extracted the data from Teradata into HDFS using Sqoop.
  • Analyzed the data by performing Hive queries and running Pig scripts to understand user behavior such as shopping enthusiasts, travelers, and music lovers.
  • Exported the analyzed patterns back into Teradata using Sqoop.
  • Continuously monitored and managed the Hadoop cluster through Cloudera Manager.
  • Installed the Oozie workflow engine to run multiple Hive jobs.
  • Monitored workload, job performance, and capacity planning using Cloudera Manager.
  • Developed Hive queries to process the data and generate data cubes for visualization
  • Importing and exporting data into HDFS and Hive using Sqoop.
  • Experienced in defining job flows.
  • Writing shell scripts for manipulating data.
  • Experienced in managing and reviewing Hadoop log files.
  • Responsible to manage data coming from different sources.
  • Used Oozie tool for job scheduling
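
As a hedged illustration of the Hive UDF work mentioned above (the actual rating-aggregation logic is not shown in this resume), here is a minimal Java UDF built on the simple org.apache.hadoop.hive.ql.exec.UDF API.

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// Minimal Hive UDF sketch: normalizes a raw rating string to a value in [0, 5].
public final class NormalizeRating extends UDF {

    public Text evaluate(Text rawRating) {
        if (rawRating == null) {
            return null;
        }
        try {
            double value = Double.parseDouble(rawRating.toString().trim());
            double clamped = Math.max(0.0, Math.min(5.0, value));
            return new Text(String.valueOf(clamped));
        } catch (NumberFormatException e) {
            return null;   // malformed input is treated as missing
        }
    }
}

Once packaged into a jar, a function like this would be registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION before being used in queries.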

Environment: Apache Hadoop 1.0.1, MapReduce, HDFS, Java 6, CentOS, ZooKeeper, Sqoop, Hive, Pig, Oozie, Eclipse, Amazon EC2, JSP, Servlets, Oracle.

Confidential

Jr. Hadoop Developer

Responsibilities:

  • Worked closely with the Development Team in the design phase and developed use case diagrams using Rational Rose.
  • Responsible for building scalable distributed data solutions using Hadoop.
  • Installed and configured Hive, Pig, Sqoop, Flume and Oozie on the Hadoop cluster.
  • Setup and benchmarked Hadoop/HBase clusters for internal use.
  • Developed simple to complex MapReduce jobs using Hive and Pig.
  • Optimized Map/Reduce Jobs to use HDFS efficiently by using various compression mechanisms
  • Handled importing of data from various data sources, performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from MySQL into HDFS using Sqoop.
  • Analyzed the data by performing Hive queries and running Pig scripts to study customer behavior.
  • Used UDFs to implement business logic in Hadoop.
  • Implemented business logic by writing UDFs in Java and used various UDFs from Piggybank and other sources (see the Pig UDF sketch after this list).
  • Continuous monitoring and managing the Hadoop cluster using Cloudera Manager.
  • Developed Map-Reduce programs in Java for Data Analysis.
  • Developed Pig Latin scripts to extract the data from the web server output files to load into HDFS.
  • Developed HQL for the analysis of semi structured data.
  • Handled the installation and configuration of a Hadoop cluster.
  • Built and maintained scalable data pipelines using the Hadoop ecosystem and other open-source components like Hive and Cassandra instead of HBase.
  • Used Hive and created Hive tables and involved in data loading and writing Hive UDFs.
  • Used Sqoop to import data into HDFS and Hive from other data systems.
  • Handled data exchange between HDFS and different web sources using Flume and Sqoop
  • Installed Kafka on the Hadoop cluster and wrote producer and consumer code in Java to establish connections
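
As a hedged sketch of the Java UDF work referenced above, the following minimal Pig filter UDF keeps records whose first field is a non-empty string; it is illustrative only and not the project's actual business logic.

import java.io.IOException;
import org.apache.pig.FilterFunc;
import org.apache.pig.data.Tuple;

// Minimal Pig filter UDF sketch: true when the first field is non-empty.
public class NonEmptyField extends FilterFunc {

    @Override
    public Boolean exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0) {
            return false;
        }
        Object field = input.get(0);
        return field != null && !field.toString().trim().isEmpty();
    }
}

In Pig Latin, such a UDF would be registered with REGISTER and then applied in a FILTER ... BY clause.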

Environment: Hadoop (CDH), MapReduce, HDFS, Hive, Pig, Sqoop, Flume, Oozie, Java, SQL, Kafka, Cassandra.

Confidential

JAVA Developer

Responsibilities:

  • Installation, Configuration & Upgrade of Solaris and Linux operating system.
  • Actively participated in requirements gathering, analysis, design, and testing phases
  • Designed use case diagrams, class diagrams, and sequence diagrams as a part of Design Phase
  • Developed the entire application implementing MVC Architecture integrating JSF with Hibernate and Spring frameworks.
  • Developed the Enterprise Java Beans (Stateless Session beans) to handle different transactions such as online funds transfer, bill payments to the service providers.
  • Implemented Service Oriented Architecture (SOA) using JMS for sending and receiving messages while creating web services
  • Developed XML documents and generated XSL files for Payment Transaction and Reserve Transaction systems.
  • Developed SQL queries and stored procedures.
  • Developed Web Services for data transfer from client to server and vice versa using Apache Axis, SOAP and WSDL.
  • Used the JUnit framework for unit testing of all the Java classes.
  • Implemented various J2EE Design patterns like Singleton, Service Locator, DAO, and SOA.
  • Worked on AJAX to develop an interactive Web Application and JavaScript for Data Validations.
  • Developed the application under JEE architecture; designed dynamic and browser-compatible user interfaces using JSP, custom tags, HTML, CSS, and JavaScript.
  • Deployed and maintained the JSP and Servlet components on WebLogic 8.0
  • Developed the application server persistence layer using JDBC, SQL, and Hibernate (a JDBC sketch follows this list).
  • Used JDBC to connect the web applications to databases.
  • Implemented test-first development using the JUnit unit testing framework.
  • Developed and utilized J2EE services and JMS components for messaging communication in WebLogic.
  • Configured the development environment using the WebLogic application server for developers' integration testing
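
As a hedged sketch of the JDBC persistence work noted above, here is a minimal DAO-style lookup; the connection URL, credentials, and table/column names are placeholders, and the example uses try-with-resources from a modern JDK rather than the older J2EE stack this project targeted.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Minimal JDBC DAO sketch (placeholder URL, credentials, and schema).
public class AccountDao {

    private static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder
    private static final String USER = "app_user";                            // placeholder
    private static final String PASSWORD = "changeit";                        // placeholder

    public double getBalance(String accountId) throws SQLException {
        String sql = "SELECT balance FROM accounts WHERE account_id = ?";
        try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, accountId);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getDouble("balance") : 0.0;
            }
        }
    }
}

In a container-managed deployment such as WebLogic, the DriverManager call would normally be replaced by a DataSource obtained via JNDI.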

Environment: Java/J2EE, SQL, Oracle 10g, JSP 2.0, EJB, AJAX, JavaScript, WebLogic 8.0, HTML, JDBC 3.0, XML, JMS, log4j, JUnit, Servlets, MVC, MyEclipse
