
Java Developer Resume Profile


NE

Professional Summary

  • Over 7 years of IT experience as a Developer, Designer, and Quality Reviewer, with cross-platform integration experience using Hadoop, Java, J2EE, and SOA.
  • Effective leadership qualities with good skills in strategy, business development, client management, and project management.
  • Excellent global exposure to various work cultures and client interaction with diverse teams
  • Good understanding of the Hadoop Distributed File System (HDFS) and ecosystem components: MapReduce, Pig, Hive, Sqoop, and HBase.
  • Hands-on experience in installing, configuring, and using Apache Hadoop ecosystem components such as MapReduce, Hive, Pig, Sqoop, Flume, and Oozie.
  • Hands-on experience with Hortonworks and Cloudera Hadoop environments.
  • Strong understanding of Hadoop daemons and MapReduce concepts.
  • Experienced in importing and exporting data to and from HDFS.
  • Experienced in analyzing big data using Hadoop environment.
  • Experienced in handling Hadoop Ecosystem Projects such as Hive, Pig and Sqoop.
  • Experienced in developing UDFs for Hive using Java.
  • Strong understanding of NoSQL databases such as HBase, MongoDB, and Cassandra.
  • Hands-on experience with Hadoop, HDFS, MapReduce, and the Hadoop ecosystem (Pig, Hive, Oozie, Flume, and HBase).
  • Extensive experience in design, development, and support of Model-View-Controller (MVC) applications using the Struts and Spring frameworks.
  • Developed reusable solutions to maintain consistent coding standards across different Java projects.
  • Proficient with application servers such as WebSphere, WebLogic, JBoss, and Tomcat.
  • Developed core modules in large cross-platform applications using JAVA, J2EE, Spring, Struts, Hibernate, JAX-WS Web Services, and JMS.
  • Expertise in debugging and performance tuning of Oracle and Java applications, with strong knowledge of Oracle 11g and SQL.
  • Ability to work effectively in cross-functional team environments and experience of providing training to business users.

Technical Skill Set

Hadoop/Big Data

HDFS, MapReduce, Hive, Pig, Sqoop, Flume, Oozie, and ZooKeeper

NoSQL Databases

HBase, Cassandra, MongoDB

Languages

C, C++, Java, J2EE, PL/SQL, Pig Latin, HiveQL, UNIX shell scripts

Java/J2EE Technologies

Applets, Swing, JDBC, JNDI, JSON, JSTL, RMI, JMS, JavaScript, JSP, Servlets, EJB, JSF, jQuery

Frameworks

MVC, Struts, Spring, Hibernate

Operating Systems

Sun Solaris, HP-UX, Red Hat Linux, Ubuntu Linux, and Windows XP/Vista/7/8

Web Technologies

HTML, DHTML, XML, AJAX, WSDL, SOAP

Web/Application servers

Apache Tomcat, WebLogic, JBoss

Databases

Oracle 9i/10g/11g, DB2, SQL Server, MySQL, Teradata

Tools and IDE

Eclipse, NetBeans, Toad, Maven, ANT, Hudson, Sonar, JDeveloper, Assent PMD, DB Visualizer

Version control

SVN, CVS

Network Protocols

TCP/IP, UDP, HTTP, DNS, DHCP

PROFESSIONAL EXPERIENCE

Confidential

Role: Hadoop Developer/Admin

The Metropolitan Utilities District, or M.U.D., is the political subdivision and public corporation of the State of Nebraska that operates the drinking water and natural gas systems for Omaha, Nebraska and surrounding areas. M.U.D. is the only metropolitan utilities district in the State of Nebraska, and the fifth largest public natural gas utility in the U.S.

Responsibilities:

  • Installed and configured Apache Hadoop, Hive and Pig environment on Amazon EC2
  • Extensively involved in installation and configuration of the Cloudera distribution of Hadoop, including NameNode, JobTracker, TaskTrackers, and DataNodes.
  • Configured MySQL Database to store Hive metadata.
  • Responsible for loading unstructured data into the Hadoop Distributed File System (HDFS).
  • Created MapReduce jobs using Pig Latin and Hive Queries.
  • Used Sqoop tool to load data from RDBMS into HDFS.
  • Carried out proofs of concept (POCs) on the Cloudera and Hortonworks distributions.
  • Installed and configured Hadoop ecosystem components such as HBase, Flume, Pig, and Sqoop.
  • Involved in Hadoop cluster tasks such as adding and removing nodes without affecting running jobs or data.
  • Managed and reviewed Hadoop log files.
  • Loaded log data into HDFS using Flume; worked extensively in creating MapReduce jobs to power data for search and aggregation (a minimal MapReduce sketch follows this list).
  • Worked extensively with Sqoop for importing metadata from Oracle.
  • Responsible for smooth, error-free configuration of the DWH ETL solution and its integration with Hadoop.
  • Designed a data warehouse using Hive
  • Designed and implemented a semi-structured data analytics platform leveraging Hadoop and Solr.
  • Created partitioned tables in Hive
  • Developed Pig Latin scripts to extract the data from the web server output files to load into HDFS
  • Developed workflows in Oozie to automate the tasks of loading the data into HDFS and pre-processing it with Pig.
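The MapReduce work called out above can be illustrated with a minimal Java sketch. The class name, the assumed web-server log layout, and the column position below are hypothetical placeholders rather than the actual project code; the sketch simply shows a map/reduce pair that aggregates hit counts per requested URL.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PageHitCount {

    // Mapper: emits (requested URL, 1) for each web server log line.
    public static class HitMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);
        private final Text url = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split("\\s+");
            if (fields.length > 6) {               // assumed combined-log layout
                url.set(fields[6]);                // assumed request-path column
                context.write(url, ONE);
            }
        }
    }

    // Reducer: sums the hit counts per URL.
    public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable v : values) {
                sum += v.get();
            }
            context.write(key, new LongWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "page hit count");
        job.setJarByClass(PageHitCount.class);
        job.setMapperClass(HitMapper.class);
        job.setCombinerClass(SumReducer.class);   // sum is associative, so the reducer doubles as combiner
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Reusing the reducer as a combiner cuts shuffle traffic on large log volumes, which matters when the same job powers both search and aggregation use cases.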

Environment: Hadoop, MapReduce, HDFS, Pig, Hive, HBase, Java, Oracle 10g, MySQL, Ubuntu

Confidential

Role: Hadoop Developer

The Coca-Cola Company is the world's largest beverage company, refreshing consumers with more than 500 sparkling and still brands. Led by Coca-Cola, the world's most valuable brand, the Company's portfolio features 15 billion dollar brands including Diet Coke, Fanta, Sprite, Coca-Cola Zero, vitaminwater, Powerade, Minute Maid, Simply, Georgia and Del Valle.

Responsibilities:

  • Developed shell scripts to automate the cluster installation.
  • Played a major role in choosing the right configurations for Hadoop.
  • Developed Pig Latin scripts to extract and filter relevant data from the web server output files to load into HDFS.
  • Involved in the end-to-end process of Hadoop cluster installation, configuration, and monitoring.
  • Responsible for building scalable distributed data solutions using Hadoop
  • Installed and configured Hive, Pig, Sqoop, Flume and Oozie on the Hadoop cluster
  • Setup and benchmarked Hadoop/HBase clusters for internal use
  • Developed simple to complex MapReduce jobs using Hive and Pig
  • Optimized MapReduce jobs to use HDFS efficiently by using various compression mechanisms
  • Handled importing of data from various data sources, performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from MySQL into HDFS using Sqoop
  • Analyzed the data by performing Hive queries and running Pig scripts to study customer behavior
  • Used UDFs to implement business logic in Hadoop
  • Implemented business logic by writing UDFs in Java and used various UDFs from Piggybank and other sources (a Hive UDF sketch follows this list)
  • Continuously monitored and managed the Hadoop cluster using Cloudera Manager
  • Worked with application teams to install operating system and Hadoop updates, patches, and version upgrades as required
  • Installed the Oozie workflow engine to run multiple Hive and Pig jobs
  • Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the BI team
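As an illustration of the Java UDF work mentioned above, here is a minimal, hypothetical Hive UDF sketch; the function name and the normalization rule are assumptions for illustration, not the project's actual business logic.

import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

@Description(name = "normalize_sku",
             value = "_FUNC_(str) - trims and upper-cases a product code")
public final class NormalizeSku extends UDF {

    // Hive calls evaluate() once per row; returning null preserves null semantics.
    public Text evaluate(final Text input) {
        if (input == null) {
            return null;
        }
        return new Text(input.toString().trim().toUpperCase());
    }
}

Once packaged into a jar, a class like this would typically be registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION before being called from HiveQL queries.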

Environment: Hadoop, MapReduce, HDFS, Hive, Pig, Java, SQL, Cloudera Manager, Sqoop, Flume, Oozie, Java JDK 1.6, Eclipse

Confidential

Role: Hadoop Developer

American Express provides innovative payment, travel and expense management solutions for individuals and businesses of all sizes. It helps customers realize their dreams and aspirations through industry-leading benefits, access to unique experiences, business-building insights, and global customer care. The purpose of the project is to create an Enterprise Data Hub so that various business units can use the data from Hadoop for data analytics. The solution is based on the Cloudera Hadoop distribution. The data is stored in the Hadoop file system and processed using MapReduce jobs.

Responsibilities:

  • Installed and configured Hadoop MapReduce and HDFS, and developed multiple MapReduce jobs in Java for data cleansing and preprocessing (a cleansing-job sketch follows this list).
  • Involved in loading data from UNIX file system to HDFS.
  • Installed and configured Hive and wrote Hive UDFs.
  • Evaluated business requirements and prepared detailed specifications that follow project guidelines for the programs to be developed.
  • Devised procedures that solve complex business problems with due consideration for hardware/software capacity and limitations, operating times, and desired results.
  • Analyzed large amounts of data sets to determine optimal way to aggregate and report on it.
  • Provided quick responses to ad hoc internal and external client requests for data and created ad hoc reports.
  • Responsible for building scalable distributed data solutions using Hadoop.
  • Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and troubleshooting, managing and reviewing data backups, and managing and reviewing Hadoop log files.
  • Worked hands on with ETL process.
  • Handled importing of data from various data sources, performed transformations using Hive and MapReduce, and loaded the data into HDFS.
  • Extracted the data from Teradata into HDFS using Sqoop.
  • Analyzed the data by performing Hive queries and running Pig scripts to understand user behavior such as shopping enthusiasts, travelers, music lovers, etc.
  • Exported the patterns analyzed back into Teradata using Sqoop.
  • Continuously monitored and managed the Hadoop cluster through Cloudera Manager.
  • Installed the Oozie workflow engine to run multiple Hive jobs.
  • Developed Hive queries to process the data and generate the data cubes for visualizing.
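The data-cleansing MapReduce work described above can be sketched as a small map-only Java job. The delimiter, expected field count, and class names are illustrative assumptions, not the actual project code; valid records are written straight back to HDFS and rejects are tracked through a job counter.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RecordCleanser {

    // Keeps only records with the expected number of delimited fields and
    // counts the rejects so data quality can be checked from job counters.
    public static class CleanseMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
        private static final int EXPECTED_FIELDS = 12;   // assumed record layout

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split("\\|", -1);   // assumed pipe delimiter
            if (fields.length == EXPECTED_FIELDS) {
                context.write(NullWritable.get(), value);
            } else {
                context.getCounter("cleansing", "malformed_records").increment(1);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "record cleansing");
        job.setJarByClass(RecordCleanser.class);
        job.setMapperClass(CleanseMapper.class);
        job.setNumReduceTasks(0);                 // map-only: pass valid records through
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}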

Environment: Hadoop, MapReduce, HDFS, Hive, Oozie, Java JDK 1.6, Cloudera, NoSQL, Oracle 11g/10g, PL/SQL, SQL*Plus, Toad 9.6, Windows NT, UNIX Shell Scripting.

Confidential

Business Intelligence/ETL Developer

Description:

The ED Parachute application is an intranet application that represents a near-term opportunity in which a compound would move quickly into development. The purpose of this application is to create a simplified process in which ideas are generated, tracked, and reviewed using existing governance. It enables a creative environment where scientists can brainstorm across TAs to bring novel ideas forward.

Responsibilities:

  • Involved in design and development of operational data sources and data marts in Oracle
  • Reviewed source data and recommended a data acquisition and transformation strategy
  • Involved in conceptual, logical and physical data modeling and used star schema in designing the data warehouse
  • Designed ETL process using Informatica Designer to load the data from various source databases and flat files to target data warehouse in Oracle
  • Used PowerCenter Workflow Manager to design sessions and used event wait/raise, assignment, e-mail, and command tasks to execute mappings
  • Created parameter based mappings, Router and lookup transformations
  • Created mapplets to reuse the transformation in several mappings
  • Used PowerCenter Workflow Monitor to monitor the workflows
  • Optimized mappings using transformation features like Aggregator, filter, Joiner, Expression and Lookups
  • Created daily and weekly workflows and scheduled them to run based on business needs

Environment:

Data modeling, Informatica PowerCenter 9.0, SQL Server SSIS, SSRS, Oracle 10g, Teradata 6, XML, TOAD, SQL, PL/SQL, IBM AIX, UNIX Shell Scripts, Web Intelligence, DSBASIC, Cognos, Erwin, StarTeam, Remedy, Maestro job scheduler, Mercury Quality Center, Control-M

Confidential

JAVA Developer

Description:

This project is the In Home Agent Message Center, where the server receives client messages from different channels and transmits them to different client PCs.

Responsibilities:

  • Implemented the project according to the Software Development Life Cycle (SDLC).
  • Developed the web layer using the Spring MVC framework (Spring MVC and JDBC sketches follow this list).
  • Implemented JDBC for mapping an object-oriented domain model to a traditional relational database.
  • Created stored procedures to manipulate the database and to apply the business logic according to the user's specifications (a JDBC stored-procedure call sketch also follows this list).
  • Involved in analyzing, designing, implementing and testing of the project.
  • Developed UML diagrams like Use cases and Sequence diagrams as per requirement.
  • Developed generic classes encapsulating frequently used functionality so that it can be reused.
  • Implemented an exception-management mechanism using exception-handling application blocks to handle exceptions.
  • Designed and developed user interfaces using JSP, JavaScript, HTML, and the Struts framework.
  • Involved in Database design and developing SQL Queries, stored procedures on MySQL.
  • Developed Action Forms and Action classes in the Struts framework.
  • Programmed session and entity EJBs to handle user information tracking and profile-based transactions.
  • Involved in writing JUnit test cases, unit and integration testing of the application.
  • Developed user and technical documentation.
  • Used CVS for maintaining the Source Code.
  • Logging was done through log4j.
  • Monitored failures of the site.
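The Spring MVC, JDBC, and stored-procedure work described above can be illustrated with two small, hypothetical Java sketches. The class, table, and procedure names (Message, messages, mark_message_delivered) are assumptions for illustration, not the actual application code. The first sketch shows a Spring MVC controller delegating to a JDBC-backed DAO that maps relational rows to a domain object:

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.stereotype.Controller;
import org.springframework.stereotype.Repository;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class MessageController {

    @Autowired
    private MessageDao messageDao;

    // Renders the messages pending for one client PC (returns a JSP view name).
    @RequestMapping("/messages/{clientId}")
    public String listMessages(@PathVariable("clientId") long clientId, Model model) {
        model.addAttribute("messages", messageDao.findByClient(clientId));
        return "messageList";
    }
}

@Repository
class MessageDao {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    // Maps each relational row to a Message domain object.
    private static final RowMapper<Message> MAPPER = new RowMapper<Message>() {
        public Message mapRow(ResultSet rs, int rowNum) throws SQLException {
            return new Message(rs.getLong("id"), rs.getString("channel"), rs.getString("body"));
        }
    };

    List<Message> findByClient(long clientId) {
        return jdbcTemplate.query(
                "SELECT id, channel, body FROM messages WHERE client_id = ?",
                MAPPER, clientId);
    }
}

// Simple domain object assumed for the sketch.
class Message {
    private final long id;
    private final String channel;
    private final String body;

    Message(long id, String channel, String body) {
        this.id = id;
        this.channel = channel;
        this.body = body;
    }

    public long getId() { return id; }
    public String getChannel() { return channel; }
    public String getBody() { return body; }
}

The second sketch shows a plain JDBC call to a stored procedure of the kind mentioned above, again with an assumed procedure name and signature:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Types;

public class MessageStatusUpdater {

    // Calls a hypothetical procedure that marks a message delivered and
    // returns the number of rows it touched through an OUT parameter.
    public int markDelivered(String jdbcUrl, String user, String password, long messageId)
            throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             CallableStatement call = conn.prepareCall("{call mark_message_delivered(?, ?)}")) {
            call.setLong(1, messageId);
            call.registerOutParameter(2, Types.INTEGER);
            call.execute();
            return call.getInt(2);
        }
    }
}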

Environment:

Java, JavaScript, HTML, JDBC Drivers, SOAP Web Services, UNIX, Shell Scripting, SQL Server.
