Sr. Java / Hadoop Developer Resume

Roseland, NJ

SUMMARY

  • Over 8 years of experience in the IT industry with strong emphasis on Object Oriented Analysis, Design, Development, Implementation, Testing, and Deployment of Big Data software applications and web-enabled applications.
  • Experienced with Hadoop and its ecosystem, including HDFS, MapReduce, Pig, Hive, HBase, Sqoop, ZooKeeper, and Oozie.
  • Excellent hands-on experience developing Hadoop architectures on Windows and Linux platforms.
  • Experience working with multi-terabyte data sets using relational databases (RDBMS), SQL, and NoSQL.
  • Basic understanding of schedulers, workload management, availability, scalability and distributed data platforms.
  • Experience developing large scale analytical databases / platforms across multiple operating systems (Microsoft and Linux).
  • Experience in large-scale production software development using multiple languages.
  • Experience using agile/scrum methodologies to iterate quickly on product changes, developing user stories and working through backlogs.
  • Excellent experience in large-scale software development (Java and C#).
  • Excellent experience working with multi-terabyte data sets in MapReduce/MPP systems.
  • Excellent experience with data modeling & schema design for data warehouses.
  • Solid experience in data analysis for analytic systems.
  • 2 years of experience with agile/scrum methodologies.
  • Solid experience with Hadoop internals, Hive, Java and Map/Reduce.
  • Experience with IBM PureData Systems for Analytics (PDA) / Netezza.
  • Experience with IBM Open Platform / BigInsights.
  • Expertise in writing Java MapReduce jobs and HiveQL for data architects and data scientists.
  • Excellent understanding of NoSQL databases and hands-on experience writing applications on NoSQL databases like Cassandra and MongoDB.
  • Experienced in the Software Development Life Cycle (Requirements Analysis, Design, Development, Testing, Deployment, and Support).
  • Expertise with SQL, PL/SQL and database concepts.
  • Experienced in working with different data sources like flat files, XML files, and databases.
  • Expertise in programming and data mining with R, Python, Java, and Scala.
  • Hands-on experience in advanced Big Data technologies like the Spark ecosystem (Spark SQL, MLlib, SparkR, and Spark Streaming), Kafka, and predictive analytics (MLlib and R ML packages, including 0xdata's H2O ML library).
  • Experienced in creating complex SQL queries and SQL tuning, and in writing PL/SQL blocks like stored procedures, functions, cursors, indexes, triggers, and packages.
  • Expertise in cross-platform (PC/Mac, desktop, laptop, tablet) and cross-browser (IE, Chrome, Firefox, Safari) development.
  • Passionate about Big Data wrangling, predictive modeling, and visualizing data in interactive, meaningful ways to drive business value effectively.

TECHNICAL SKILLS

Big Data Technologies: Hadoop, HDFS, Hive, MapReduce, Pig, Sqoop, Flume, ZooKeeper, Oozie, Avro, HBase

Languages: Java, C, FoxPro, Linux shell script, SQL, R, Python, and Scala

Web Technologies: ASP, HTML, XML, JavaScript, JSON

IDE/Tools: Eclipse, VMware, Apache, VSS, TFS 2008, Visio

GUI: Visual Basic 6.0, Oracle, MS Office (Word, Excel, Outlook, PowerPoint, Access)

Browsers: Google Chrome, Mozilla Firefox, IE8

Reporting Tools: Crystal XI, SAP BO 4.1, Dashboard, Info View

DB Languages: MS SQL, MS Access, MySQL, Pervasive SQL, and Oracle

Operating Systems: Windows XP/7/8, Linux/UNIX

PROFESSIONAL EXPERIENCE

Confidential, Wilmington, DE

Sr. Big Data/Hadoop Developer

Responsibilities:

  • Installed, configured, and maintained Apache Hadoop clusters for application development, along with Hadoop tools like Hive, Pig, and HBase.
  • Adapted existing tools and processes to handle performance, scale, availability, usability, accuracy and monitoring.
  • Built data expertise and guided data quality across multiple business areas.
  • Maintained expertise in the MPP / Hadoop / Big Data ecosystem, including industry trends, strategies, and products.
  • Created a POC to store server log data in MongoDB to identify system alert metrics.
  • Developed machine-learning-driven models using R and Spark MLlib.
  • Implemented a Hadoop framework to capture user navigation across the application, validate the user interface, and provide analytic feedback to the UI team.
  • Loaded data into the cluster from dynamically generated files using Flume and from relational database management systems using Sqoop.
  • Performed analysis on the unused user navigation data by loading it into HDFS and writing MapReduce jobs (a minimal sketch of one such job follows this list).
  • Extensively involved in installation and configuration of the Cloudera Hadoop distribution, NameNode, Secondary NameNode, JobTracker, TaskTrackers, and DataNodes.
  • Loaded data from Teradata to HDFS using the Teradata Hadoop connectors.
  • Wrote Pig scripts to run ETL jobs on the data in HDFS.
  • Used Hive to do analysis on the data and identify different correlations.
  • Worked on importing and exporting data from Oracle and DB2 into HDFS and Hive using Sqoop.
  • Worked on NoSQL databases including HBase and MongoDB. Configured a MySQL database to store Hive metadata.
  • Imported data using Sqoop to load data from MySQL to HDFS on a regular basis.
  • Extracted files from MongoDB through Sqoop, placed them in HDFS, and processed them.
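
A minimal sketch of the kind of MapReduce job used for the navigation-data analysis, assuming a tab-delimited log layout (userId, pageUrl, timestamp); the PageHitCount class name and field positions are illustrative assumptions, not the production code:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Counts hits per page from tab-delimited navigation logs.
    public class PageHitCount {

        public static class HitMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
            private static final LongWritable ONE = new LongWritable(1);
            private final Text page = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split("\t");
                if (fields.length >= 2) {        // skip malformed records
                    page.set(fields[1]);         // pageUrl is the second column
                    context.write(page, ONE);
                }
            }
        }

        public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
            @Override
            protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                    throws IOException, InterruptedException {
                long sum = 0;
                for (LongWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new LongWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "page-hit-count");
            job.setJarByClass(PageHitCount.class);
            job.setMapperClass(HitMapper.class);
            job.setCombinerClass(SumReducer.class); // safe as a combiner: summing is associative
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }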

Environment: Hadoop, MapReduce, HDFS, Flume, Pig, Hive, HBase, Sqoop, ZooKeeper, Cloudera, Oozie, MongoDB, Cassandra, SQL*Plus, NoSQL, ETL, MySQL, Agile, Windows, UNIX shell scripting.

Confidential, Bellevue, WA

Sr. Big Data/Hadoop Developer

Responsibilities:

  • Developed and supported MapReduce programs running on the cluster.
  • Created Hive tables and worked on them using HiveQL.
  • Involved in installing Hadoop Ecosystem components.
  • Validated NameNode and DataNode status in an HDFS cluster.
  • Managed and reviewed the Hadoop log files.
  • Responsible for managing data coming from different sources.
  • Supported the HBase architecture design with Hadoop to develop a database design in HDFS.
  • Involved in HDFS maintenance and loading of structured and unstructured data.
  • Wrote MapReduce jobs using Java API.
  • Migrated 100+ TB of data from different databases (i.e., Netezza, Oracle, SQL Server) to Hadoop.
  • Copied data from HDFS to MongoDB using Pig/Hive/MapReduce scripts and visualized the processed streaming data in a Tableau dashboard.
  • Wrote Hive queries for data analysis to meet the business requirements.
  • Developed UDFs for Pig data analysis.
  • Developed scripts and batch jobs to schedule various Hadoop programs.
  • Upgraded the Hadoop cluster from CDH3 to CDH4, set up a high-availability cluster, and integrated Hive with existing applications.
  • Handled importing of data from various data sources and performed transformations using Hive and MapReduce.
  • Analyzed the data by performing Hive queries and running Pig scripts to understand user behavior.
  • Installed the Oozie workflow engine to run multiple Hive and Pig jobs.
  • Developed Hive queries to process the data and generate data cubes for visualization.
  • Optimized the mappings using various optimization techniques and debugged existing mappings.
  • Developed HiveQL scripts to manipulate the data in HDFS.
  • Worked on the HBase architecture design for the loan volumes in HDFS.
  • Installed and configured Hive and wrote Hive UDFs (a minimal UDF sketch follows this list).
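
A minimal sketch of a Hive UDF of the sort described above, using Hive's classic UDF API (org.apache.hadoop.hive.ql.exec.UDF); the NormalizeState name and the normalization rule are illustrative assumptions, not the actual business logic:

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Normalizes free-text state codes (e.g. " wa " -> "WA").
    // Registered in Hive with:
    //   ADD JAR /path/to/udfs.jar;
    //   CREATE TEMPORARY FUNCTION normalize_state AS 'NormalizeState';
    public final class NormalizeState extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;                  // preserve SQL NULL semantics
            }
            return new Text(input.toString().trim().toUpperCase());
        }
    }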

Environment: Java, Hadoop, MapReduce, HDFS, Hive, Pig, Linux, XML, Eclipse, Cloudera CDH3/CDH4 distribution, SQL Server, Oracle 11g, MySQL

Confidential, Roseland, NJ

Sr. Java / Hadoop Developer

Responsibilities:

  • Extensively worked on Web Service technologies like SOAP and WSDL to send a large volume of requests to other systems for storing call-related information.
  • Tested the web services using SOAPUI 2.5.1.
  • Developed a Java application using socket connections to connect to VoiceGenie servers for running the VXML files.
  • Supported MapReduce programs running on the cluster.
  • Implemented solutions for ingesting data from various sources and processing the data at rest utilizing Big Data technologies such as Hadoop, MapReduce frameworks, HBase, and Hive.
  • Imported bulk data into HBase using MapReduce programs.
  • Developed and wrote Apache Pig scripts and Hive scripts to process the HDFS data.
  • Performed analytics on time-series data in HBase using the HBase API (a minimal scan sketch follows this list).
  • Designed and implemented incremental imports into Hive tables.
  • Involved in collecting, aggregating and moving data from servers to HDFS using Apache Flume.
  • Wrote Hive jobs to parse the logs and structure them in tabular format to facilitate effective querying on the log data.
  • Wrote multiple Java programs to pull data from HBase.
  • Optimized MapReduce algorithms using combiners and partitioners to deliver the best results, and worked on application performance optimization for an HDFS cluster.
  • Implemented business logic by writing UDFs in Java and used various UDFs from Piggybank and other sources.
  • Worked on debugging and performance tuning of Hive and Pig jobs.
  • Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig scripts.
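
A minimal sketch of a time-series range scan through the HBase Java client API of that era; the "metrics" table, the "<metricId>#<epochMillis>" row-key layout, and the column names are assumptions for illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    // Scans one metric's readings for a time window. Because HBase row keys
    // sort lexicographically, a start/stop row pair bounds the range scan.
    public class TimeSeriesScan {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "metrics");
            Scan scan = new Scan(Bytes.toBytes("cpu.load#1400000000000"),
                                 Bytes.toBytes("cpu.load#1400003600000"));
            scan.addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"));
            ResultScanner scanner = table.getScanner(scan);
            try {
                for (Result row : scanner) {
                    byte[] value = row.getValue(Bytes.toBytes("d"), Bytes.toBytes("value"));
                    System.out.println(Bytes.toString(row.getRow()) + " = " + Bytes.toDouble(value));
                }
            } finally {
                scanner.close();
                table.close();
            }
        }
    }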

Environment: Java, Hadoop 2.1.0, MapReduce 2, Pig 0.12.0, Hive 0.13.0, Linux, Sqoop 1.4.2, Flume 1.3.1, Eclipse, AWS EC2, and Cloudera CDH4

Confidential, San Jose, CA

Java Developer

Responsibilities:

  • Designed and developed frameworks for the Payment Workflow System, Confirmations Workflow System, and Collateral System using GWT, Core Java, Servlets, JavaScript, XML, AJAX, J2EE design patterns, and OOP/J2EE technologies.
  • Implemented various core framework-level changes using Core Java, design patterns, and efficient data structures.
  • Created terabytes of mock data that closely simulates real payments and people data.
  • Used Spring Batch for reading, validating and writing the daily batch files into the database.
  • Used Hibernate in the data access layer to access and update information in the database.
  • Developed Struts2 server-side and client-side validations using action errors and validation rules.
  • Implemented a JAX-RS REST service using Spring REST technology (a minimal resource sketch follows this list).
  • Created user interfaces using AJAX, JavaScript, HTML5, and CSS3.
  • Handled creation of charts using JavaScript Fusion Charts for the reporting module.
  • Developed XSLT style sheets for transformations on XML objects.
  • Implemented J2EE design patterns such as MVC, Service Locator, Factory Pattern, and Singleton.
  • Implemented core Java and J2EE technologies: JSP, Servlets, JSTL, EJB transactions (Message-Driven Beans), JMS, Spring, Hibernate, JavaBeans, JDBC, XML, Web Services, JNDI, multithreading, etc.
  • Proficient in implementation of frameworks like Spring, AJAX frameworks, and ORM frameworks like Hibernate.
  • Used GitHub, Maven, and Log4j for version control, build dependencies, and logging.
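
A minimal sketch of a JAX-RS resource of the kind described above; the PaymentResource path, the Payment type, and the stub PaymentService are hypothetical placeholders for the real domain classes:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    // Exposes a payment lookup over REST; a JSON provider (e.g. Jackson)
    // handles serialization of the returned entity.
    @Path("/payments")
    public class PaymentResource {

        private final PaymentService service = new PaymentService();

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public Response getPayment(@PathParam("id") long id) {
            Payment payment = service.findById(id);
            if (payment == null) {
                return Response.status(Response.Status.NOT_FOUND).build();
            }
            return Response.ok(payment).build();
        }
    }

    // Illustrative stand-ins so the sketch is self-contained.
    class Payment {
        public long id;
        public String status;
        Payment(long id, String status) { this.id = id; this.status = status; }
    }

    class PaymentService {
        Payment findById(long id) {
            return id > 0 ? new Payment(id, "CONFIRMED") : null; // stub lookup
        }
    }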

Environment: J2EE, JDK 8, … Hibernate 4, JSP, JSTL, JavaScript, Servlets, JNDI, JBoss, JAX-RS, JAX-WS, HTML5, jQuery, CSS3, Fusion Charts, AngularJS, Oracle Database, SQL, UNIX, JUnit, Agile, Web Services, QTP, SOAP, RDBMS, CVS, OSB, API, SourceForge, Apache Maven.

Confidential

Java Developer

Responsibilities:

  • Involved in the design, coding, testing, integration, deployment, and production phases.
  • Implemented Spring MVC with JSP 2.0 and JSP tag libraries to facilitate user interface design.
  • Applied various design patterns such as Business Delegate, Singleton, Service Locator, Session Façade, Data Transfer Object (DTO), and Data Access Object (DAO).
  • Responsible for designing the Collateral module.
  • Responsible for developing front-end pages using JSP.
  • Integrated the Spring framework with Hibernate for database operations.
  • Worked on loading and storing objects using Hibernate 3.0.
  • Involved in configuring Hibernate mapping files.
  • Implemented Hibernate and JDBC to interact with an Oracle 10g database.
  • Designed a RESTful XML web service for handling AJAX requests.
  • Consumed and produced RESTful web services for transferring data between different applications.
  • Used various core Java concepts such as multithreading, exception handling, the Collections APIs, and JEE to implement various features and enhancements.
  • Used AJAX for asynchronously exchanging data with the server and updating the UI.
  • Developed the UI using HTML, JavaScript, jQuery, and JSP for interactive cross-browser functionality and complex user interfaces.
  • Used Log4j to create log files for debugging.
  • Responsible for supporting integration and testing environment.
  • Deployed the application to the JBoss application server.
  • Used the JUnit testing framework for testing DAOs (a minimal JUnit/Mockito sketch follows this list).
  • Parsed data using JSON parsing with schema modeling.
  • Used Maven to build, run, and create Aerial-related JAR and WAR files, among other uses.
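
A minimal sketch of the JUnit/Mockito style of DAO-layer test; AccountDao, AccountService, and the interest calculation are illustrative stand-ins for the project's real classes:

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    // Verifies that the service applies interest to the balance it reads
    // through the (mocked) DAO, without touching a real database.
    public class AccountServiceTest {

        interface AccountDao {
            double findBalance(String accountId);
        }

        static class AccountService {
            private final AccountDao dao;
            AccountService(AccountDao dao) { this.dao = dao; }
            double balanceWithInterest(String accountId, double rate) {
                return dao.findBalance(accountId) * (1 + rate);
            }
        }

        @Test
        public void appliesInterestToBalanceFromDao() {
            AccountDao dao = mock(AccountDao.class);
            when(dao.findBalance("A-100")).thenReturn(200.0);

            AccountService service = new AccountService(dao);

            assertEquals(210.0, service.balanceWithInterest("A-100", 0.05), 1e-9);
            verify(dao).findBalance("A-100");
        }
    }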

Environment: Spring, Hibernate, Java, JDK, J2EE, JSP, Maven, XML, Web Services, JAX-RS, RESTful, JUnit, Mockito, Spring Tool Suite (STS), Spring Integration, RSA v7.1, Oracle, Log4j, TOAD, Subversion (SVN), JavaScript, jQuery, HTML, Ant 1.7, AJAX, Jasper Reports, Windows, Linux
