
Sr. Hadoop Big Data Developer Resume

Fremont, CA

SUMMARY

  • Result-oriented professional with 9+ years of IT experience, including analysis, design, development, integration, deployment and maintenance of quality software applications using Java/J2EE technologies and Big Data Hadoop technologies.
  • Around 4+ years of experience in the Big Data Hadoop ecosystem, covering ingestion, storage, querying, processing and analysis of big data.
  • Good knowledge of Hadoop development and various components such as HDFS, Job Tracker, Task Tracker, Data Node, Name Node and MapReduce concepts.
  • Responsible for writing MapReduce programs (an illustrative sketch follows this summary).
  • Experience using various Hadoop distributions (Cloudera, Hortonworks, MapR, etc.) to fully implement and leverage new Hadoop features.
  • Expertise in deployment of Hadoop, YARN, Spark and Storm, and their integration with Cassandra, Ignite, RabbitMQ, Kafka, etc.
  • Good exposure to Apache Hadoop MapReduce programming, Hive, Pig scripting and HDFS.
  • Expertise with the tools in Hadoop Ecosystem including Pig, Hive, HDFS, Map Reduce, Sqoop, Spark, Kafka, Yarn, Oozie, and Zookeeper.
  • Experience in installation, configuration, management, support and monitoring of Hadoop clusters using various distributions such as Apache Spark and Cloudera.
  • Good exposure to the design, development and support of the Apache Spark, Hadoop and Big Data ecosystem using Apache Spark 1.6 (SQL + DataFrames, Spark Streaming, MLlib, GraphX), IBM InfoSphere BigInsights 4.0, Cloudera CDH 5.5, Hortonworks HDP 2.3 and MapR 5.0.
  • Excellent programming skills with experience in Java, C, SQL and Python Programming.
  • Experience in tuning and troubleshooting performance issues in Hadoop cluster.
  • Experience in using Cloudera Manager for installation and management of single-node and multi-node Hadoop cluster (CDH3, CDH4 & CDH5).
  • Experience in cloud stack such as Amazon AWS and VMWARE stack.
  • Experience in importing and exporting data using Sqoop from HDFS file system to Relational Database Systems and vice-versa.
  • Hands-on NoSQL database experience with HBase; also used the NoSQL databases Cassandra and MongoDB.
  • Worked on the data warehouse product Amazon Redshift, which is part of AWS (Amazon Web Services).
  • Experience in developing Pig scripts and Hive Query Language.
  • Experience in Data Load Management, importing and exporting data from HDFS to RDBMS and vice versa using Sqoop and Flume.
  • Experience in Elastic search technologies in creating custom Lucene/Solr Query components.
  • Well experienced in and possess strong knowledge of UNIX shell scripting.
  • Hands on experience in application development using Java, RDBMS and Linux Shell Scripting.
  • Developed Java applications using various IDEs such as IBM RAD 7 and Eclipse.
  • Good at writing Python, shell and Perl scripts to automate activities via Jenkins and to monitor the build system.
  • Good experience using the graph database Neo4j and the visualization tool Tableau.
  • Good knowledge of machine learning algorithms; used Apache Mahout and Spark MLlib.
  • Good experience in Apache Storm based streaming analytics.
  • Expertise in Amazon Web Services (AWS) Redshift, EMR and EC2, which provide fast and efficient processing of Big Data.
  • Extensive experience in developing applications using Java, JavaBeans, JSP, Servlets, Spring MVC, Spring JDBC, JDBC, JNDI, Spring, Hibernate, Ajax, JUnit, Oracle, Test-Driven Development, MS SQL Server and MS Access.
  • Hands on experience in Atlassian tools (JIRA Service Desk, Confluence, Crucible and Bamboo) and Subversion.
  • Expertise in client-side design and validations using HTML 4/5, XHTML, CSS, Bootstrap, JavaScript, JSP, jQuery, AngularJS, Cache and JSTL.
  • Excellent experience in the design and development of design patterns and DAOs using Hibernate, J2EE architecture, object modeling, data modeling and UML.
  • Expertise in REST and SOAP web services, microservices, CXF, JAX-WS, JAX-RS, Axis and REST APIs.
  • Have worked on data visualization using Tableau.
  • Extensive knowledge in Python, JAVA, MYSQL, UNIX and Linux.
  • Extensive experience in programming, deploying and configuring web servers such as IBM WebSphere, BEA WebLogic and Apache Tomcat, along with JUnit, JBuilder and Amazon EC2.
  • Web development using Python, Flask and Django.
  • Very good experience with both MapReduce 1 (JobTracker) and MapReduce 2 (YARN) setups.
  • Very good experience in monitoring and managing Hadoop clusters using Cloudera Manager.
  • Excellent understanding and knowledge of NoSQL databases like MongoDB, HBase, and Cassandra.
  • Implemented Kerberos authentication for Hadoop services.
  • Experience in using Impala to analyze stored data using HiveQL.
  • Hands-on experience in ETL tools such as Informatica (8.x and 9.x) and Talend.
  • Strong understanding of data warehousing concepts, OLTP and OLAP data models.
  • Strong experience in working with UNIX/LINUX environments, writing shell scripts.
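
The MapReduce experience summarized above can be illustrated with a minimal sketch. The following word-count-style job is a hypothetical example written purely for illustration, not code from any listed project; it shows a mapper that emits (word, 1) pairs for each line read from HDFS and a reducer that sums the counts per key.

    // Hypothetical word-count job illustrating the Hadoop MapReduce API.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Split each input line on whitespace and emit (word, 1).
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                // Sum all counts emitted for a given word.
                int sum = 0;
                for (IntWritable value : values) {
                    sum += value.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }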

TECHNICAL SKILLS

Languages: Java, C, C++, SQL, PL/SQL, and Python

Application Frameworks: J2EE, Struts, Spring, Spring IOC, Spring AOP, Spring JPA, EJB, Hibernate, Cache, Node.js, Backbone.js, Bootstrap, CSS3, AngularJS

Hadoop Ecosystem: Hadoop, HDFS, Hive, Pig, Sqoop, HBase, Oozie, Scala

Technologies/API: JSP, JavaBeans, JDBC, JMS, OSGI, JNDI, Servlets, AJAX, JSF, JUnit, Log4j, JPA, JAX-B, JAX-P

Web Services / API: SOAP, WSDL, REST, JAX-RPC, JAX-WS, JAX-RS, CXF, REST API, Microservices

Web Technologies: XML, XSL, XSLT, HTML5, JavaScript, jQuery

Web/Application Servers: Apache/Jakarta Tomcat, Web Logic, IBM WebSphere, JBoss.

Design Patterns: MVC, Front Controller, Singleton, DAO patterns

Database: MS SQL Server, Oracle, MS Access, NoSQL (MongoDB)

Build Tools: Maven, ANT, Git, JIRA, SVN, Perforce, GWT, SOAP UI

ETL Tools: SSRS, SSIS

Operating System: Windows XP/2000/98, UNIX, Linux, DOS.

PROFESSIONAL EXPERIENCE

Confidential, Fremont, CA

Sr. Hadoop Big Data Developer

Responsibilities:

  • Analyzed the requirements, estimated the level of effort, provided timelines to the business and gave weekly updates; met those timelines, delivered quality output to the business and fixed production issues.
  • Migrated the existing data to Hadoop from RDBMS using Sqoop for processing the data.
  • Used the Hive data warehouse tool to analyze the data in HDFS and developed Hive queries.
  • Developed simple to complex MapReduce jobs using Hive.
  • Created partitioned tables in Hive.
  • Responsible for creating Hive tables, loading the structured data resulted from MapReduce jobs into the tables and writing Hive Queries to further analyze the data
  • Wrote NoSQL queries and performed back-end testing for data validation to check data integrity during migration from back end to front end.
  • Worked on setting up Pig, Hive, Redshift and HBase on multiple nodes and developed using Pig, Hive and MapReduce.
  • Wrote MRUnit tests for unit testing the MapReduce jobs (a test sketch follows this list).
  • Implemented the MR optimization techniques for the heavy jobs.
  • Experienced in running job schedulers for the MR jobs on the cluster.
  • Worked on Spark architecture and implementation in Java/Scala.
  • Implemented daily workflow for extraction processing and analysis of data with Oozie.
  • Used Hue to view the HDFS directory structure, monitor jobs and work with the query editors (Hive, Impala).
  • Used Impala for live connections between Hive-derived tables and Tableau data sources.
  • Responsible for implementing the business requirements using Spring MVC Framework.
  • Developed RESTFUL and SOAP services on Apache SOLR Java search server data.
  • Worked on Motion chart and Drill down analysis.
  • Worked on drill-through reports using HTML.
  • Made updates in the Oracle database using SQL and PL/SQL by writing materialized views, procedures, functions and triggers; deployed application changes in test and prod environments (UNIX boxes) using Linux commands.
  • Responsible for developing a data pipeline using Flume, Sqoop and Pig to extract the data from weblogs and store it in HDFS.
  • Proficient work experience with the NoSQL database MongoDB.
  • Developed Hadoop Streaming MapReduce jobs using Python.
  • Administered Tableau Server, backing up the reports and providing privileges to users.
  • Worked on Tableau for generating reports on HDFS data.
  • Represented the retrieved results through Tableau.
  • Extracted and updated the data into MongoDB using Mongo import and export command line utility interface.
  • Data scrubbing and processing with Oozie.
  • Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig Scripts.
  • Used the NoSQL databases Cassandra and MongoDB.
  • Involved in loading data from LINUX file system to HDFS.
  • Worked on performance tuning of SQL, ETL and other processes to optimize session performance.
  • Involved in Business requirements and documentation.
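
A minimal sketch of how the MRUnit tests mentioned above might look, assuming the hypothetical TokenMapper and SumReducer classes from the word-count sketch in the summary section; the inputs and expected outputs are illustrative only.

    // MRUnit drivers exercise a mapper and a reducer in isolation, without a cluster.
    import java.util.Arrays;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mrunit.mapreduce.MapDriver;
    import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;
    import org.junit.Before;
    import org.junit.Test;

    public class WordCountTest {

        private MapDriver<LongWritable, Text, Text, IntWritable> mapDriver;
        private ReduceDriver<Text, IntWritable, Text, IntWritable> reduceDriver;

        @Before
        public void setUp() {
            mapDriver = MapDriver.newMapDriver(new WordCount.TokenMapper());
            reduceDriver = ReduceDriver.newReduceDriver(new WordCount.SumReducer());
        }

        @Test
        public void mapperEmitsOneCountPerToken() throws Exception {
            mapDriver.withInput(new LongWritable(0), new Text("hadoop hive hadoop"))
                     .withOutput(new Text("hadoop"), new IntWritable(1))
                     .withOutput(new Text("hive"), new IntWritable(1))
                     .withOutput(new Text("hadoop"), new IntWritable(1))
                     .runTest();
        }

        @Test
        public void reducerSumsCountsPerKey() throws Exception {
            reduceDriver.withInput(new Text("hadoop"),
                                   Arrays.asList(new IntWritable(1), new IntWritable(1)))
                        .withOutput(new Text("hadoop"), new IntWritable(2))
                        .runTest();
        }
    }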

Environment: Core Java, J2EE, Spring, SQL, REST API, Hadoop, Hive, Impala, Spark, MapReduce, Sqoop, Linux, Scala, HDFS, NoSQL.

Confidential, Troy, MI

Hadoop Developer

Responsibilities:

  • Developed MapReduce programs to parse the raw data, populate staging tables and store the refined data in partitioned tables
  • Installed & maintained Cloudera Hadoop distribution.
  • Installed and configured Flume, Hive, Pig, Sqoop and Oozie on the Hadoop cluster.
  • Involved in loading the data from Linux file system to HDFS.
  • Implemented MapReduce programs on log data to transform it into a structured form to find user information.
  • Performed performance tuning and troubleshooting of MapReduce jobs by analyzing and reviewing Hadoop log files.
  • Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the BI team.
  • Monitored workload, job performance and capacity planning using Cloudera Manager.
  • Installed Oozie workflow engine to run multiple MapReduce, Hive and Pig jobs.
  • Responsible for creating Hive tables, loading the structured data resulted from MapReduce jobs into the tables and writing hive queries to further analyze the logs to identify issues and behavioral patterns.
  • Imported data frequently from MS SQL Server, Cassandra and other NoSQL stores to HDFS using Sqoop.
  • Managed and upgraded the MVC framework and Scala.
  • Experience in refactoring the existing Spark batch processes for different logs written in Scala (a simplified sketch follows this list).
  • Worked on utilizing Spark machine learning techniques implemented in Scala.
  • Supported operation team in Hadoop cluster maintenance activities including commissioning and decommissioning nodes and upgrades.
  • Used the ETL tool Talend to do transformations, event joins, filtering and some pre-aggregations.
  • Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig scripts.
  • Used Tableau for visualization and to generate reports.
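
The Spark batch work described above was written in Scala; to keep the examples in this document in one language, the following is a simplified Java sketch of a comparable log-parsing batch job. The class name, the assumption that the first token of each line is the log level, and the input/output paths are all hypothetical.

    // Counts log lines per level (e.g. INFO/WARN/ERROR) with the Spark Java API.
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class LogLevelCounts {
        public static void main(String[] args) {
            // args[0]: HDFS path of raw log files, args[1]: HDFS output path.
            SparkConf conf = new SparkConf().setAppName("log-level-counts");
            JavaSparkContext sc = new JavaSparkContext(conf);

            JavaRDD<String> lines = sc.textFile(args[0]);

            // Drop empty lines, key each remaining line by its first token
            // (assumed to be the log level) and sum the counts per key.
            JavaPairRDD<String, Long> counts = lines
                    .filter(line -> !line.trim().isEmpty())
                    .mapToPair(line -> new Tuple2<>(line.trim().split("\\s+")[0], 1L))
                    .reduceByKey(Long::sum);

            counts.saveAsTextFile(args[1]);
            sc.stop();
        }
    }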

Environment: Hadoop, Cloudera, MapReduce, Hive, Sqoop, Spark, Flume, Talend, Python, MS SQL Server, Tableau, ETL, NoSQL.

Confidential

Hadoop Developer with Java

Responsibilities:

  • Participated in all stages of software development life-cycle including architecture, design, implementation, and unit testing.
  • Developed Facelets (XHTML) pages representing the view. Implemented business logic on the server side while keeping the focus on developing standard, optimized, generic and loosely coupled code.
  • Integrated JSF with Spring to implement MVC (Model-View-Controller) pattern.
  • Developed JavaScript methods to perform various client-side validations, display errors, update components and invoke various other events.
  • Developed validators to perform additional validation on JSF components (a validator sketch follows this list).
  • Created composite components for common functionality of different pages.
  • Developed custom components to perform special functionalities as per requirement.
  • Performed test cases throughout the integration and the regression environments.
  • Consumed SOAP-based web services in the application. Created usability prototypes for the UI screens using Node.js, JavaScript and jQuery.
  • Implemented several design patterns like Singleton, MVC and Factory design patterns.
  • Performed smoke testing, functional testing, white-box testing, black-box testing, integration testing and regression testing to find bugs. Logged messages using Log4j to capture all the system events. Developed JUnit test cases to test the Java code base.
  • Configured servers and resolved server issues. Developed and ran Ant automated scripts to do project builds. Worked with the production team to study, analyze and fix bugs. Participated in team Agile scrums and meetings.
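
A small sketch of the kind of custom JSF validator referred to above; the ZIP-code rule and the validator id are hypothetical examples rather than code from the actual project.

    // Registered under the id "zipCodeValidator", so a Facelets page can attach it
    // with <f:validator validatorId="zipCodeValidator"/>.
    import javax.faces.application.FacesMessage;
    import javax.faces.component.UIComponent;
    import javax.faces.context.FacesContext;
    import javax.faces.validator.FacesValidator;
    import javax.faces.validator.Validator;
    import javax.faces.validator.ValidatorException;

    @FacesValidator("zipCodeValidator")
    public class ZipCodeValidator implements Validator {

        @Override
        public void validate(FacesContext context, UIComponent component, Object value)
                throws ValidatorException {
            if (value == null) {
                return; // leave empty values to the component's required="true" attribute
            }
            String zip = value.toString().trim();
            if (!zip.matches("\\d{5}(-\\d{4})?")) {
                FacesMessage message = new FacesMessage(FacesMessage.SEVERITY_ERROR,
                        "Invalid ZIP code", "Enter a 5-digit ZIP or ZIP+4 value.");
                throw new ValidatorException(message);
            }
        }
    }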

Environment: J2EE, Java JDK 5, JSF, JUnit, JavaBeans, JavaScript, REST API, Spring, Hibernate, JNDI, Ant, Log4j, Node.js, CSS, JBoss, Agile.

Confidential, CT

Java, J2EE Developer

Responsibilities:

  • Involved in creation of Low Level Design including sequence diagrams and class diagrams to comprehend the existing architecture.
  • Involved in the integration of Spring for implementing dependency injection.
  • Developed code for obtaining beans in the Spring IOC framework.
  • Focused primarily on the MVC components such as Dispatcher Servlets, Controllers, Model and View Objects, View Resolver.
  • Involved in creating the Hibernate POJO Objects and utilizing Hibernate Annotations.
  • Involved in development of Web Services using Spring MVC to extract client related data from databases. Worked in Agile development environment.
  • Developed Web Services using WSDL, SOAP to communicate with the other modules.
  • Developed Graphical User Interfaces using UI frameworks and Webpage's using HTML and JSP's for user interaction.
  • Involved in Java MQ.
  • Involved in the implementation of DAOs using Spring-Hibernate ORM (a DAO sketch follows this list).
  • Involved in the creation of exhaustive JUnit Unit Test Cases using Test Driven Development (TDD) technique.
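
A brief sketch of the Spring-Hibernate DAO pattern mentioned above. It assumes a hypothetical Customer entity mapped elsewhere with Hibernate; the class, query and property names are illustrative only.

    // Data-access object that delegates persistence to a Spring-managed SessionFactory.
    import java.util.List;
    import org.hibernate.SessionFactory;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Repository;
    import org.springframework.transaction.annotation.Transactional;

    @Repository
    public class CustomerDao {

        private final SessionFactory sessionFactory;

        @Autowired
        public CustomerDao(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        @Transactional
        public void save(Customer customer) {
            // Customer is assumed to be a Hibernate-mapped entity defined elsewhere.
            sessionFactory.getCurrentSession().saveOrUpdate(customer);
        }

        @Transactional(readOnly = true)
        @SuppressWarnings("unchecked")
        public List<Customer> findByLastName(String lastName) {
            return sessionFactory.getCurrentSession()
                    .createQuery("from Customer c where c.lastName = :lastName")
                    .setParameter("lastName", lastName)
                    .list();
        }
    }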

Environment: JDK 5, J2EE, Spring, Hibernate, Web Services, AWS, JMS, JavaScript, JSP, JUnit, Agile/Scrum Methodology, Oracle 10g, Web Logic Server, Eclipse IDE, DAO, Design patterns, Log4j.

Confidential

Java Developer

Responsibilities:

  • Worked extensively in creation of the Land Parcel database.
  • Designed and developed the incorporation of existing digital data into the main Oracle database. Highly involved in high-volume production of hardcopy farm maps.
  • Created programs, procedures, queries and documentation.
  • Developed data manipulation Servlets with the help of ArcSDE APIs.
  • Developed Enterprise Java Beans like Entity Beans, Session Beans.
  • Developed different modules using J2EE (EJBs and JDBC). Designed and developed JSP pages using Struts. Wrote client-side validations using JavaScript.
  • Involved in the design of the Referential Data Service module to interface with various databases using JDBC (a JDBC sketch follows this list). Extensively worked on PL/SQL and SQL.
  • Developed database objects such as PL/SQL packages, stored procedures, triggers, cursors and views to maintain referential integrity of the database.
  • Interacted with the Users and Documented the Application.
  • Experienced in ArcSDE setup and ArcIMS.
  • Debugged and unit tested the Java Beans and other Java classes.
  • Prepared documentation and participated in preparing user’s manual for the application.
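
A minimal JDBC sketch of the kind of database access used for the Referential Data Service and land-parcel work described above; the connection URL, credentials and the land_parcel table are hypothetical placeholders.

    // Looks up parcel ids for an owner from an Oracle table via plain JDBC.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class ParcelLookup {

        private static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder

        public List<String> findParcelIdsByOwner(String ownerName) throws SQLException {
            String sql = "SELECT parcel_id FROM land_parcel WHERE owner_name = ?";
            List<String> parcelIds = new ArrayList<String>();
            try (Connection conn = DriverManager.getConnection(URL, "user", "password");
                 PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, ownerName);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        parcelIds.add(rs.getString("parcel_id"));
                    }
                }
            }
            return parcelIds;
        }
    }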

Environment: Java, Oracle 9i, MapObjects, HTML, JSP, JDBC, Servlets, Arc Macro Language, ArcInfo 8.3, ArcSDE and ArcIMS.
