Big Data/Java Developer Resume
Hoffman Estates, IL
SUMMARY:
- Over 9 years of experience in software design and development using the Hadoop Big Data ecosystem and Core Java/J2EE in the Telecommunications, Finance, and Health Care domains.
- Extensive experience in analyzing, designing, developing, implementing, and testing software applications using HDFS, YARN, MapReduce, Hive, Sqoop, Spark Core, Spark SQL, Spark Streaming, HBase, Kafka, Zeppelin notebooks, NiFi data flows, and Kerberos.
- Extensive experience in software design and development using Core Java, Spring Core, Spring AOP, Spring with Hibernate, JDBC, and XML on the UNIX operating system.
- Excellent understanding of Hadoop architecture and the components of a Hadoop cluster (Job Tracker, Task Tracker, Name Node, and Data Node).
- Involved in data ingestion to HDFS from various data sources (AMQ, Flash Blade, and Kafka).
- Involved in loading data from the Linux file system to HDFS.
- Developed MapReduce programs to process and generate large data sets with parallel, distributed algorithms on a cluster.
- Imported and exported data between relational databases, NoSQL databases, and HDFS using Sqoop.
- Analyzed large data sets by running Hive queries.
- Involved in optimizing Hive queries.
- Analyzed large data sets represented as data flows, applied NiFi processors to process the data, and stored the results in HDFS for further processing.
- Automated NiFi data flows.
- Managed a large set of hosts and coordinated and managed services in a distributed environment using ZooKeeper.
- Used the column-oriented HBase NoSQL database for flexibility, performance, and horizontal scaling of big data.
- Wrote an API to store documents in the MongoDB NoSQL database.
- Ability to analyze different file formats such as Avro.
- Good knowledge of Oozie workflows.
- Configured, deployed, and maintained multi-node Dev clusters.
- Developed multiple Kafka producers and consumers from scratch per business requirements (a minimal producer sketch follows this summary).
- Strong experience with Software Development Life Cycle (SDLC) methodologies such as Agile, along with experience using the Waterfall methodology.
- Experience working on multiple projects and managing teams productively in all situations.
- Experience using the Ant build tool and performing unit testing.
- Used PuTTY to invoke commands on remote systems over SSH and Telnet.
- Experience handling the complete software development cycle: impact analysis, program specification and preparation, code review, unit testing, and integration testing.
- Experienced in using source code change management and version control tools such as Subversion and CVS.
- Designed and coded from specifications; analyzed, evaluated, tested, debugged, documented, and implemented complex applications.
- Good experience working with, managing, and coordinating offshore teams.
- Well-regarded communication skills, resourcefulness, and personal presentation.
- Excellent team player with the ability to create an environment of trust and cooperation through an open exchange of ideas towards achieving team goals.
- Good verbal and written communication skills. Ability to deal with people diplomatically.
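As a minimal illustration of the Kafka producer work noted above, the sketch below shows a basic Java producer. The broker address, topic name, key, and payload are hypothetical placeholders, not details from any project listed here.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

/** Minimal Kafka producer sketch; broker address, topic, and payload are assumed values. */
public class LogEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");                            // wait for the full commit

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Send one log line keyed by a hypothetical host name.
            producer.send(new ProducerRecord<>("web-server-logs", "host-01",
                    "GET /index.html 200"));
            producer.flush();
        }
    }
}
```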
TECHNICAL SKILLS:
Web/Application Servers: Apache Tomcat 5.x/6.x, WebLogic
Java/J2EE Technologies: Core Java, Servlets, JSP, JDBC, MVC, Hibernate 2.0/2.3/3.0, Spring, LOG4J, JNDI, JUNIT, JAXB, SAX/DOM, ANT, Jenkins, Maven
Messaging Systems: JMS, AMQ, Kafka
Languages: Java, Shell
Web Technologies/Web Services: JavaScript, CSS, HTML, DHTML, XML (SAX, DOM Parser), JSON, XSL, XSLT, REST
Databases: Oracle 8i/9i/10g/11g, SQL Server 2008/2005/2000, MySQL, HBase and MongoDB
Operating Systems: Windows, UNIX, LINUX.
Design Tools: UML, UMLet
Big Data/Hadoop Ecosystem: Hadoop, HDFS, MapReduce, Pig, Hive, HBase, Sqoop, ZooKeeper, Spark 2.2 (Spark Core, Spark Streaming and Spark SQL), Oozie, Hortonworks NiFi (HDF), Kafka and Kerberos
IDE Tools: RAD, Eclipse
Version Control Tools: CVS, SVN, GIT
Testing Tools: JUnit
PROFESSIONAL EXPERIENCE:
Confidential, Hoffman Estates, IL
Big Data/Java Developer
Responsibilities:
- Responsible for building scalable distributed data solutions using Hadoop.
- Gathered business requirements from the business partners and subject matter experts.
- Implemented advanced procedures such as text analytics and processing using the in-memory computing capabilities of Apache Spark.
- Implemented complex logic and controlled data flow using Apache Spark as the in-memory processing tool.
- Wrote multiple Spark jobs for data extraction, transformation, and aggregation from multiple file formats, including XML, JSON, CSV, and other compressed formats (a batch-job sketch follows this list).
- Optimized Spark jobs to use executor memory efficiently by using broadcast variables, accumulators, and checkpoints.
- Implemented Spark jobs to load batch files from HDFS and process them further by applying various transformations and actions.
- Involved in designing the Spark ecosystem during Ambari installations.
- Implemented Spark jobs to read streaming data from various sources such as AMQ queues, Kafka, and application sockets (a streaming sketch follows this project's environment line).
- Loaded and transformed large sets of structured, semi-structured, and unstructured data.
- Worked with different file formats such as JSON, Avro, and ORC for Hive querying and processing.
- Created Hive tables and was involved in data loading.
- Leveraged Kafka's sequential disk I/O to achieve high throughput on spinning disks.
- Worked extensively with Sqoop and NiFi to move data from DB2 to HDFS.
- Collected log data from web servers and integrated it into HDFS using Kafka.
- Provided ad-hoc queries and data metrics to the business users using Hive.
- Facilitated performance optimizations such as using the distributed cache for small datasets, partitioning and bucketing in Hive, and performing map-side joins.
- Performed real-time analysis along with continuous operations management using Spark Streaming.
- Worked on importing and exporting data from Oracle and DB2 into HDFS and Hive using Sqoop for analysis, visualization, and report generation with Zeppelin.
- Worked on NoSQL databases including HBase.
- Scheduled NiFi data workflows that execute multiple NiFi processors to fetch data from Flash Blade, unzip the files, filter the attributes, convert the data to Avro format, merge the contents, and store the results in HDFS for further processing; each processor runs independently based on time and data availability.
- Involved in running Hadoop streaming jobs to process Terabytes of data.
- Used JIRA for bug tracking and GitHub for version control.
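The sketch below illustrates, under assumed paths, column names, and lookup values, the kind of Spark batch job described in this project: it reads JSON from HDFS, filters rows against a small broadcast lookup set, aggregates, and writes the result back to HDFS using the Spark 2.x Java Dataset API. It is a minimal sketch, not the actual project code.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FilterFunction;
import org.apache.spark.broadcast.Broadcast;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.count;

/** Sketch of a Spark batch job: read JSON from HDFS, filter via a broadcast lookup, aggregate, write back. */
public class BatchTransformJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("BatchTransformJob")
                .getOrCreate();

        // Small lookup set broadcast once to every executor instead of shipping it with each task.
        Set<String> allowedEventTypes = new HashSet<>(Arrays.asList("CALL", "SMS", "DATA"));
        Broadcast<Set<String>> allowed =
                new JavaSparkContext(spark.sparkContext()).broadcast(allowedEventTypes);

        // Hypothetical HDFS input path; each line is one JSON event.
        Dataset<Row> events = spark.read().json("hdfs:///data/incoming/events/");

        // Keep only rows whose event type appears in the broadcast lookup set.
        FilterFunction<Row> keepAllowed =
                row -> allowed.value().contains(row.<String>getAs("eventType"));

        Dataset<Row> counts = events
                .filter(keepAllowed)
                .groupBy(col("eventType"))
                .agg(count("*").alias("eventCount"));

        counts.write().mode("overwrite").parquet("hdfs:///data/curated/event_counts/");
        spark.stop();
    }
}
```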
Environment: Hadoop, MapReduce, Hive, HDFS, Sqoop, Hortonworks, NiFi, Zeppelin, Kafka, Java 8.0, Spark 2.2.0, HBase, MySQL, Unix/Linux, Eclipse IDE, JIRA
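Related to the streaming bullet in the project above, the following is a minimal Structured Streaming sketch that subscribes to a Kafka topic and persists the messages to HDFS with a checkpoint. The broker address, topic, and paths are assumed values used only for illustration.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.StreamingQueryException;

/** Sketch of a Structured Streaming job reading from Kafka; broker, topic, and paths are assumed. */
public class KafkaStreamJob {
    public static void main(String[] args) throws StreamingQueryException {
        SparkSession spark = SparkSession.builder()
                .appName("KafkaStreamJob")
                .getOrCreate();

        // Subscribe to a hypothetical topic; Kafka records arrive as binary key/value columns.
        Dataset<Row> stream = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker1:9092")
                .option("subscribe", "device-events")
                .load()
                .selectExpr("CAST(value AS STRING) AS json");

        // Persist the raw messages to HDFS; the checkpoint directory makes the query restartable.
        StreamingQuery query = stream.writeStream()
                .format("parquet")
                .option("path", "hdfs:///data/streaming/device_events/")
                .option("checkpointLocation", "hdfs:///checkpoints/device_events/")
                .start();

        query.awaitTermination();
    }
}
```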
Confidential, Hoffman Estates, IL
Hadoop/Java Engineer
Responsibilities:
- Involved in requirements gathering, understanding, and grooming with the team.
- Developed Mobile Manager User interface (UI) which logs all the transaction flow between different interfaces.
- Used JSP, JSP Tag Libraries (JSTL), Core Java, Custom Tag Libraries, JavaScript, XML, and HTML for developing Mobile Manager.
- Implemented the Struts Framework for the front end and used Action Controller, ActionForm, the Validation Framework, Struts Tiles, Struts Tag Libraries, and LookupDispatchAction for the presentation logic.
- Developed the application in Eclipse Environment.
- Worked on BPM and BRMS tools for asynchronous processing and rules management.
- Responsible for designing XML, XSD, XSLT using Altova XML tool.
- Developed Web services using SOAP protocol for transferring data between different applications.
- Used Log4j for logging purposes.
- Set up Cron/Quartz schedulers to invoke web service URLs at regular intervals.
- Used Hibernate 3.0 to develop the persistence layer; developed custom DAOs to retrieve records from the Oracle database (a DAO sketch follows this list).
- Imported and exported data into HDFS and Hive using Sqoop.
- Involved in developing Pig Scripts for change data capture and delta record processing between newly arrived data and already existing data in HDFS.
- Involved in implementing POCs using Spark shell commands to process large data sets and compare processing times.
- Involved in creating Hive tables, loading data, and writing queries that run internally as MapReduce jobs.
- Used Pig to perform transformations, event joins, filtering, and some pre-aggregations.
- Involved in processing ingested raw data using MapReduce, Apache Pig and HBase.
- Maintained communication between different applications using JMS MQ and web service calls.
- Used the SVN version control system to check in and check out developed artifacts.
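As a hedged illustration of the custom Hibernate DAOs mentioned above, the sketch below shows a DAO that queries a hypothetical mapped entity (TransactionRecord) through a SessionFactory using the Hibernate 3 API. The entity, fields, and HQL are placeholders rather than the actual project code.

```java
import java.util.List;

import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

/** Sketch of a custom Hibernate DAO; TransactionRecord is an assumed mapped entity. */
public class TransactionDao {

    private final SessionFactory sessionFactory;

    public TransactionDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    /** Fetch all transactions for a device, newest first. */
    @SuppressWarnings("unchecked")
    public List<TransactionRecord> findByDeviceId(String deviceId) {
        Session session = sessionFactory.openSession();
        try {
            Query query = session.createQuery(
                    "from TransactionRecord t where t.deviceId = :deviceId order by t.createdAt desc");
            query.setParameter("deviceId", deviceId);
            return query.list();
        } finally {
            session.close();   // release the JDBC connection back to the pool
        }
    }
}
```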
Environment: Java/J2EE, JSP, XML, HTML, Eclipse, JMS, Struts, Log4j, XSD, XSLT, SOAP Web Services, Hibernate, Spark, MapReduce, Pig Scripts, Azkaban, HBase, BPM, BRMS, Java 7, REST Services.
Confidential, Hoffman Estates, IL
Senior Java Developer
Responsibilities:
- Worked with end customer to analyze requirements.
- Designed, developed, tested, documented, and maintained software applications.
- Involved in complete development lifecycle using Agile Development Methodology/SCRUM and tested the application in each iteration.
- Developed the server-side RESTful web service APIs to process requests from iOS and Android mobile clients.
- Developed the Java service layer using Java 8, Jersey, Jackson, JSON, and the Spring framework (a resource-class sketch follows this list).
- Developed the persistence layer using JDBC, Hibernate ORM and Oracle RDBMS.
- Designed and developed unit, functional, integration and regression testing using Junit, and Spring test framework.
- Built and deployed the packaged application to the WebLogic application server using Maven, Jenkins, and Eclipse.
- Wrote SQL queries and PL/SQL stored procedures.
- Implemented messaging system using JMS, MQ and Spring.
- Followed software engineering best practices and OOA/D.
- Developed stored procedures and prepared statements for updating and accessing data from database.
- Wrote shell and python scripts to analyze logs and create test data.
- Maintained documentation using the Swagger API and TWiki.
- Participated in code review using SourceTree and git.
- Trained and mentored junior developers.
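A minimal sketch of the kind of Jersey (JAX-RS) resource described above, returning a JSON payload that Jackson serializes for mobile clients. The resource path, bean, and values are hypothetical; in the real service the handler would delegate to the Spring service and persistence layers.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

/** Sketch of a JAX-RS (Jersey) resource returning JSON; path and payload are hypothetical. */
@Path("/accounts")
public class AccountResource {

    /** Simple payload bean; Jackson serializes its public fields to JSON. */
    public static class AccountSummary {
        public String accountId;
        public double balance;

        public AccountSummary(String accountId, double balance) {
            this.accountId = accountId;
            this.balance = balance;
        }
    }

    @GET
    @Path("/{accountId}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getAccount(@PathParam("accountId") String accountId) {
        // Placeholder data; a real implementation would call the service layer here.
        AccountSummary summary = new AccountSummary(accountId, 125.40);
        return Response.ok(summary).build();
    }
}
```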
Environment: Core Java, Java 8, Spring framework, Hibernate, Maven, Eclipse, Jersey, Jackson, JMS, MQ, JAXB, JSON, JIRA, Git, Source Tree, SQL, Oracle, Junit, Jenkins, WebLogic, Ubuntu.
Confidential, Omaha, NE
Software Engineer
Responsibilities:
- Involved in the analysis, design, development, and testing phases of the Software Development Lifecycle (SDLC).
- Designed use case, class, sequence, and object diagrams using Microsoft Visio to model the detailed design of the application.
- Used Jakarta Struts Framework for rapid development and ease of maintenance.
- Developed the application front end using Jakarta Struts Framework. Developed action classes, form beans and Java Server Pages using WSAD.
- Developed a web-based reporting for credit monitoring system with HTML, JSTL and custom tags using Struts framework.
- Developed Session beans which encapsulate the workflow logic.
- Wrote SQL queries to load, retrieve, and update data in the database.
- Used VB/COM and ASP to develop supporting web page applications.
- Designed and implemented Business Delegate, Session Facade and DTO Design Patterns.
- Involved in implementing the DAO pattern for database access and used the JDBC API extensively.
- Used XML Web services for transferring data between different applications and retrieving credit information from the credit bureau.
- Used the JAXB API to bind XML schemas to Java classes.
- Used the JMS-MQ Bridge to send messages securely, reliably, and asynchronously to WebSphere MQ, which connects to the legacy systems (a JMS sender sketch follows this list).
- Used JCAPS for system Integration.
- Tested the application functionality with JUnit Struts Test Cases.
- Developed a logging module using Log4J to create log files for debugging and tracing the application.
- Used CVS for version control.
- Responsible for supporting the application in different phases from QA to production and fixing QC bugs.
- Extensively used ANT as a build tool.
- Used shell scripts for batch processing in a Unix environment.
- Deployed the applications on IBM WebSphere Application Server.
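As an illustration of the JMS messaging work mentioned above, the sketch below sends a text message to a queue using the plain JMS API resolved through JNDI. The JNDI names and payload are hypothetical; in the project described here, messages were routed to WebSphere MQ through the JMS-MQ bridge.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;
import javax.naming.NamingException;

/** Sketch of sending a message to a JMS queue; the JNDI names are hypothetical. */
public class CreditRequestSender {

    public void send(String payload) throws JMSException, NamingException {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) jndi.lookup("jms/CreditRequestQueue");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);

            TextMessage message = session.createTextMessage(payload);
            producer.send(message);
        } finally {
            connection.close();   // closing the connection also closes its sessions and producers
        }
    }
}
```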
Environment: WAS 7.0, WSAD 5.1.2, WebSphere MQ, Java SDK 1.4, Hibernate 3.0, Struts 1.2.4, Servlet 2.2, JSP 2.0, JNDI, JDBC, SQL, PL/SQL, XML Web Services, Core Java, VB/COM, ASP, Spring 1.0.2, SOAP, WSDL, JavaScript, Windows, Oracle 9i, JUnit, JCAPS, CVS, ANT 1.5 and Log4J.
Confidential
J2EE Designer/Developer
Responsibilities:
- Involved in analysis, design and development of CDF Processing system and developed specifications that include Use Cases, Class Diagrams, Sequence Diagrams, and Activity Diagrams.
- Developed the application using Struts Framework that leverages classical Model View Controller (MVC) architecture.
- Deployed the applications on BEA WebLogic Application Server.
- Used AJAX Framework for Dynamic Searching of covered products for funding.
- Involved in designing the Graphical User Interfaces using HTML, JSP, AJAX4JSF and JSF.
- Used Hibernate in data access layer to access and update information in the database.
- Used JNDI for naming and directory services.
- Developed the web application by integrating JSF ICEfaces, employing AJAX-enabled client-side components to build rich web applications without any separate plugins.
- Used web services (WSDL and SOAP) to obtain credit card information for insured patients from a third party.
- Used soapUI for load testing the Web Services.
- Used XML parser APIs such as JAXP and JAXB for marshalling and unmarshalling web service request and response data.
- Wrote SQL queries and PL/SQL stored procedures.
- Implemented JMS Listeners and Message Driven Beans (MDB) (a message-driven bean sketch follows this list).
- Wrote shell scripts for schedulers in a UNIX environment.
- Developed JUnit test cases for all the developed modules.
- Used Log4J to capture the log that includes runtime exceptions, monitored error logs.
- Used CVS for version control across common source code used by developers.
- Used Maven scripts to build the application and deployed on BEA WebLogic Application Server.
- Designed and normalized databases in Oracle 10g and used stored procedures and triggers in the application.
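A minimal sketch of a message-driven bean like those mentioned above, consuming text messages from a JMS queue using EJB 3 annotations. The queue name, activation configuration, and processing logic are hypothetical placeholders, not the actual project code.

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

/** Sketch of a message-driven bean consuming from a JMS queue; the queue name is hypothetical. */
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "jms/FundingRequestQueue")
})
public class FundingRequestMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String body = ((TextMessage) message).getText();
                // Hand the payload off to the service layer for processing (omitted here).
                System.out.println("Received funding request: " + body);
            }
        } catch (JMSException e) {
            throw new RuntimeException("Failed to read JMS message", e);
        }
    }
}
```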
Environment: Core Java, JDK 1.6, JSF ICEfaces, Hibernate 3.6, JSP, Servlets, JMS, XML, SOAP, WSDL, JDBC, JavaScript, HTML, JNDI, CVS, Log4J, Eclipse Indigo, BEA WebLogic Application Server, Rational Rose for UML modeling, JUnit, SQL, Oracle 10g.