Java Developer Resume Profile

KS

PROFESSIONAL SUMMARY:

  • Over six years of experience in Java and around three years of experience in Big Data, implementing complete Hadoop solutions.
  • Hands-on experience installing, configuring, and using Apache Hadoop ecosystem components such as HDFS, Hadoop MapReduce, ZooKeeper, Oozie, Hive, Sqoop, Pig, and Flume.
  • Expertise in writing Hadoop jobs for analyzing data using Hive and Pig.
  • Experience developing MapReduce programs on Apache Hadoop for working with Big Data.
  • Experience importing and exporting data between HDFS and Relational Database Management Systems (RDBMS) using Sqoop.
  • In-depth understanding of Hadoop architecture and its components, such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, and MapReduce concepts.
  • Extended Hive and Pig core functionality by writing custom UDFs (a minimal UDF sketch in Java follows this list).
  • Good understanding of data mining and machine learning techniques.
  • Experience analyzing data using HiveQL, Pig Latin, and custom MapReduce programs in Java.
  • Efficient in building Hive, Pig, and MapReduce scripts.
  • Extensive experience with SQL, PL/SQL, and database concepts.
  • Knowledge of job workflow scheduling and monitoring tools such as Oozie and ZooKeeper.
  • Experience optimizing MapReduce algorithms using combiners and partitioners to deliver the best results.
  • Proficient in using Cloudera Manager, an end-to-end tool to manage Hadoop operations.
  • Expertise in Core Java, J2EE, multithreading, and JDBC, and proficient in using Java APIs for application development.
  • Good understanding of NoSQL databases.
  • Good hands-on experience in Tableau 7 and Tableau 8.
  • Used Hadoop ecosystem components for storing and processing data; exported data to Tableau using a live connection.
  • Experience creating databases, tables, and views in HiveQL, Impala, and Pig Latin.
  • Strong knowledge of Hadoop and Hive analytical functions.
  • Experience with storage and processing through Hue, covering all Hadoop ecosystem components.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data using Hadoop ecosystem components.
  • Experience working with different data sources such as flat files, XML files, and databases.
  • Good knowledge of sentiment analysis.
  • Experience in database design, entity relationships, database analysis, SQL programming, PL/SQL stored procedures, packages, and triggers in Oracle and SQL Server on Windows and UNIX.
  • Strong experience interacting with stakeholders and customers; gathering requirements through interviews, workshops, and existing system documentation or procedures; defining business processes; and identifying and analyzing risks using appropriate templates and analysis tools.
  • Experience in various phases of the Software Development Life Cycle (analysis, requirements gathering, and design), with expertise in documenting requirement specifications, functional specifications, test plans, source-to-target mappings, and SQL joins.
  • Worked on different operating systems, including UNIX/Linux, Windows XP, and Windows 2000.
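
For illustration, a minimal custom Hive UDF of the kind referenced above might look like the following Java sketch. The class name and behavior are hypothetical; the actual UDFs written on these projects are not shown in this resume.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Hive resolves evaluate() by reflection; this hypothetical UDF trims and
    // lower-cases a string column before analysis.
    public final class NormalizeText extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;
            }
            return new Text(input.toString().trim().toLowerCase());
        }
    }

Once packaged in a JAR, such a function would typically be registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION before being used in a query.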

TECHNICAL SKILLS:

  • Hadoop/Big Data: HDFS, MapReduce, Pig, Hive, Sqoop, Oozie, Flume, Zookeeper
  • Java/J2EE Technologies: Core Java, Servlets, JSP, JDBC, JNDI, Java Beans
  • IDE Tools: Eclipse, NetBeans, IBM WebSphere Studio Application Developer (WSAD)
  • Programming languages: C, C++, Java, Python, VB.NET
  • Databases: Oracle 11g/10g/9i, MySQL, DB2, MS-SQL Server, MongoDB
  • Web Technologies: HTML, XML, JavaScript
  • Operating Systems: Windows 95/98/2000/XP/Vista/7, Macintosh, UNIX
  • Monitoring/Reporting: Nagios

PROFESSIONAL EXPERIENCE:

Confidential

Role: Sr. Hadoop Developer

The Cerner systems include several applications that support analytics on patient data. We support the UI teams by providing back-end work and data processing using Apache Hadoop, enabling affordable and efficient applications.

Responsibilities:

  • Installed and configured Hadoop MapReduce and HDFS, and developed multiple MapReduce jobs in Java for data cleansing and preprocessing (see the map-only cleansing sketch after this list).
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Wrote HiveQL scripts to analyze customer data and determine patient health patterns.
  • Wrote HiveQL scripts to create, load, and query tables in Hive.
  • Wrote HiveQL scripts to perform sentiment analysis on customer comments and product ratings.
  • Installed and configured Hive and wrote Hive UDFs.
  • Migrated data from Teradata to Hadoop to build advanced data analytics with better performance.
  • Worked in an Apache Hadoop environment distributed by Hortonworks.
  • Experienced in defining job flows.
  • Experienced in managing and reviewing Hadoop log files.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data.
  • Responsible for managing data coming from different sources.
  • Supported MapReduce programs running on the cluster.
  • Involved in loading data from the UNIX file system to HDFS.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Gained strong business knowledge of health insurance, claims processing, fraud suspect identification, and the appeals process.
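
As a hedged illustration of the data-cleansing MapReduce work above, a map-only job in Java might look like the sketch below. The delimiter, field count, and class names are assumptions for illustration; the real record formats are not described in this resume.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CleanseRecords {
        // Drops rows that do not have the expected number of delimited fields.
        public static class CleanseMapper
                extends Mapper<LongWritable, Text, NullWritable, Text> {
            private static final int EXPECTED_FIELDS = 5; // assumed for illustration

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split(",", -1);
                if (fields.length == EXPECTED_FIELDS) {
                    context.write(NullWritable.get(), value);
                } // malformed rows are silently filtered out here
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "cleanse-records");
            job.setJarByClass(CleanseRecords.class);
            job.setMapperClass(CleanseMapper.class);
            job.setNumReduceTasks(0); // map-only: no shuffle needed for filtering
            job.setOutputKeyClass(NullWritable.class);
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Setting the reducer count to zero keeps filtering cheap, since cleansed records never cross the network in a shuffle.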

Environment: Core Java, Apache Hadoop (Hortonworks), HDFS, Pig, Hive, HBase, Sqoop, Flume, Shell Scripting, MySQL, Linux, UNIX

Confidential

Hadoop Developer

World Wide Technology (WWT) is an award-winning systems integrator and supply chain solutions provider that brings an innovative and proven approach to how organizations explore, evaluate, architect, and implement technology. It also offers a complete array of configuration and integration services, along with a full suite of advanced logistics solutions enabled by sophisticated supply chain management infrastructure.

Roles and Responsibilities:

  • Collected and aggregated large amounts of log data using Apache Flume and staged the data in HDFS for further analysis.
  • Extracted files from MongoDB through Sqoop, placed them in HDFS, and processed them.
  • Analyzed large data sets by running Hive queries and Pig scripts (a hedged example of running a Hive query from Java follows this list).
  • Worked with the Data Science team to gather requirements for various data mining projects.
  • Involved in creating Hive tables and loading and analyzing data using Hive queries.
  • Developed simple to complex MapReduce jobs using Hive and Pig.
  • Involved in running Hadoop jobs for processing millions of records of text data.
  • Worked with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.
  • Developed multiple MapReduce jobs in Java for data cleaning and pre-processing.
  • Involved in loading data from the Linux file system to HDFS.
  • Responsible for managing data from multiple sources.
  • Experienced in running Hadoop streaming jobs to process terabytes of XML-format data.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data.
  • Responsible for managing data coming from different sources.
  • Assisted in exporting analyzed data to relational databases using Sqoop.
  • Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig scripts.
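
One way to run Hive queries programmatically, as referenced above, is over JDBC against HiveServer2. The sketch below assumes the standard org.apache.hive.jdbc.HiveDriver; the host, table, and column names are invented, and the resume does not state which access path the team actually used.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQueryExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:hive2://hive-host:10000/default", "user", "");
                 Statement stmt = conn.createStatement();
                 // Hypothetical aggregation over a Flume-staged log table.
                 ResultSet rs = stmt.executeQuery(
                     "SELECT level, COUNT(*) FROM web_logs GROUP BY level")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
                }
            }
        }
    }

In practice such queries are just as often run from the Hive CLI; the JDBC route matters when results feed back into a Java application.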

Environment: Hadoop, HDFS, Pig, Hive, MapReduce, Sqoop, Flume, Linux, and Big Data

Confidential

Hadoop Developer

Ascension Information Services (AIS) is one of the largest healthcare IT services companies in North America. Ascension created AIS to provide better access to IT resources for the organization and to support the achievement of its long-term Strategic Direction goals. Ascension Health is transforming healthcare by providing the highest quality care to all, with special attention to the poor and vulnerable.

Roles and Responsibilities:

  • Worked on analyzing the Hadoop cluster using different big data analytic tools, including Pig, Hive, and MapReduce.
  • Supported MapReduce programs running on the cluster.
  • Collected and aggregated large amounts of log data using Apache Flume and staged the data in HDFS for further analysis.
  • Worked on debugging and performance tuning of Hive and Pig jobs.
  • Implemented test scripts to support test-driven development and continuous integration.
  • Worked on tuning the performance of Pig queries.
  • Involved in loading data from the Linux file system to HDFS.
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Experience processing unstructured data using Pig and Hive.
  • Gained experience managing and reviewing Hadoop log files.
  • Involved in scheduling the Oozie workflow engine to run multiple Hive and Pig jobs (see the Oozie client sketch after this list).
  • Assisted in monitoring the Hadoop cluster using tools such as Nagios.
  • Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig scripts.
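
For the Oozie scheduling noted above, workflows are normally defined in a workflow.xml on HDFS and submitted through the Oozie CLI or its Java client. The Java sketch below uses the documented org.apache.oozie.client.OozieClient API; the server URL, HDFS paths, and property values are illustrative assumptions, not details from this resume.

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;
    import org.apache.oozie.client.WorkflowJob;

    public class SubmitWorkflow {
        public static void main(String[] args) throws Exception {
            OozieClient client = new OozieClient("http://oozie-host:11000/oozie");
            Properties conf = client.createConfiguration();
            // Assumed HDFS location of a workflow that chains Hive and Pig actions.
            conf.setProperty(OozieClient.APP_PATH,
                    "hdfs://namenode:8020/user/hadoop/apps/hive-pig-flow");
            conf.setProperty("jobTracker", "jobtracker-host:8021");
            conf.setProperty("nameNode", "hdfs://namenode:8020");

            String jobId = client.run(conf);           // submit and start the workflow
            WorkflowJob job = client.getJobInfo(jobId); // poll its status
            System.out.println("Workflow " + jobId + " status: " + job.getStatus());
        }
    }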

Environment: Hadoop, HDFS, Pig, Hive, MapReduce, Sqoop, Oozie, Nagios, Linux, and Big Data.

Confidential

Java Developer

This project was designed and developed to process online order requests. It consists of different modules, such as online user registration, updating user information, submitting orders online, order processing, and order delivery.

Responsibilities:

  • Involved in all phases of the Software Development Life Cycle (SDLC).
  • Used CVS for version control and Test Director for bug tracking.
  • Involved in developing applications using Java, J2EE, EJB, Struts, JSP, and Servlets.
  • Created the UI validations using the Struts validation framework.
  • Strategized and developed enhancements to support the migration process.
  • User training: worked closely with the user community to train users and explain various features.
  • Developed the database schema and SQL queries for querying the database on Oracle 9i (a hedged JDBC sketch follows this list).
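
A hedged sketch of the kind of JDBC access such Oracle 9i queries typically involve is shown below; the table, columns, and connection details are invented for illustration and are not taken from the project itself.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class OrderLookup {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "app_user", "secret");
            try {
                // Hypothetical query against an orders table in the online-order schema.
                PreparedStatement ps = conn.prepareStatement(
                        "SELECT order_id, status FROM orders WHERE user_id = ?");
                ps.setLong(1, 42L);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + " -> " + rs.getString(2));
                }
                rs.close();
                ps.close();
            } finally {
                conn.close(); // explicit cleanup, matching the pre-Java-7 era of this project
            }
        }
    }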

Environment: Java, J2EE, JSP, HTML, JavaScript, Oracle, SQL, JDBC, XML, Servlets, IBM WebSphere, ANT, C++, SQL Server

Confidential

Java/J2EE Developer

Description: BellSouth Corporation is an American telecommunications holding company based in Atlanta, Georgia. BellSouth was one of the seven original Regional Bell Operating Companies after the U.S. Department of Justice forced the American Telephone and Telegraph Company (AT&T) to divest itself of its regional telephone companies.

Responsibilities:

  • Coded end to end, i.e., from the GUI on the client side through the middleware to the database, connecting the back-end systems on a subset of sub-modules belonging to the above modules.
  • Worked extensively on Swing.
  • Most of the business logic is provided in session beans, and the database transactions are performed using container-managed entity beans.
  • Worked on parsing XML using DOM and SAX (a minimal SAX sketch follows this list).
  • Implemented EJB transactions.
  • Used JMS for messaging with IBM MQ-Series.
  • Wrote stored procedures.
  • Mentored other programmers.
  • Studied the implementation of Struts.
  • Implemented security access control on both the client and server sides, including applet signing and JAR signing.
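
For the DOM/SAX parsing noted above, a minimal SAX handler in Java might look like the following; the element and attribute names are hypothetical, since the BellSouth message formats are not part of this resume.

    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class OrderXmlHandler extends DefaultHandler {
        @Override
        public void startElement(String uri, String localName, String qName,
                                 Attributes attributes) {
            if ("order".equals(qName)) { // hypothetical element name
                System.out.println("order id=" + attributes.getValue("id"));
            }
        }

        public static void main(String[] args) throws Exception {
            // SAX streams the document and fires callbacks, so it handles large
            // XML payloads without building a full DOM tree in memory.
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            parser.parse(args[0], new OrderXmlHandler()); // path to an XML file
        }
    }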

Environment: Java, Java Swing, JSP, Servlets, JDBC, Applets, JCE 1.2, RMI, EJB, XML/XSL, VisualAge for Java (VAJ), Visual C++, J2EE.
