
Big Data/Hadoop Developer Resume


Bloomington, IL

SUMMARY:

  • Over 8 years of experience in requirement analysis, design, development, testing, implementation, and maintenance of big data environments, enterprise applications, and client-server applications using object-oriented methodologies and Java/J2EE technologies.
  • In-depth experience and good knowledge of Hadoop ecosystem tools including HDFS, Spark, Clojure, Flambo, MapReduce, YARN, Hive, Hue, Impala, Sqoop, Oozie, and Flume. Worked with Spark SQL, DataFrames, and RDDs; experienced in handling large DataFrames using partitioning, Spark's in-memory capabilities, and effective and efficient joins, actions, and transformations during the ingestion process itself (see the first sketch after this list).
  • Excellent understanding and extensive knowledge of Hadoop architecture and its ecosystem components, such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, and the MapReduce programming paradigm.
  • Experienced with Apache Hadoop as well as the enterprise Cloudera and Hortonworks distributions; good knowledge of the MapR distribution.
  • Good knowledge of data modeling, use-case design, and object-oriented concepts.
  • Well versed in installing, configuring, supporting, and managing Hadoop clusters and the underlying big data infrastructure.
  • Good knowledge of Spark components such as Spark SQL, MLlib, Spark Streaming, and GraphX.
  • Experience in converting Hive/SQL queries into RDD transformations using Apache Spark with Scala and Python/Clojure.
  • Experience in data processing: collecting, aggregating, and moving data from various sources.
  • Created User-Defined Functions (UDFs) and User-Defined Aggregate Functions (UDAFs) in Pig and Hive.
  • Good knowledge of job scheduling and monitoring tools such as Oozie (workflows, coordinators, and bundles) and ZooKeeper.
  • Integrated Hive queries into the Spark environment using Spark SQL.
  • Hands-on experience performing real-time analytics on big data in Hadoop clusters.
  • Good knowledge of developing data pipelines using Flume, Sqoop, and Pig to extract data from weblogs and store it in HDFS.
  • Good working experience with different file formats (Parquet, text, Avro) and compression codecs (gzip, Snappy, LZO); see the second sketch after this list.
  • Valuable experience in the practical implementation of AWS cloud technologies, including IAM, Elastic Compute Cloud (EC2), Simple Storage Service (S3), Virtual Private Cloud (VPC), Route 53, Lambda, and EBS.
  • Strong development experience in Java enterprise technologies based on MVC frameworks such as Spring, Struts, and JSF, as well as EJB.
  • Experience working with ORM frameworks such as Hibernate, and with Spring JDBC.
  • Experience using Spring MVC, Spring Core, Spring Boot, Spring JDBC, Spring IoC, Spring annotations, Spring AOP, Spring Transactions, and Spring Security.
  • Experienced in numerous design patterns such as Singleton, Factory, Abstract Factory, MVC, and Data Access Object, as well as UML and Enterprise Application Integration.
  • Strong work experience in application integration and communication using SOA and web services: JAX-RS, JAX-WS, SOAP, WSDL, XML over HTTP, Apache CXF, JAXB, XSD, and Axis2, plus RESTful web services using the Jersey framework and Spring's RestController.
  • Experience developing applications on a microservices architecture, including migrating from a legacy monolithic architecture to microservices.
  • Experience with Java lambdas, functional interfaces, and the Stream API integrated into the Collections API to perform bulk operations on collections, such as sequential or parallel map-reduce transformations; also collections, multithreading, event handling, exception handling, generics, and other utility classes.
  • Expertise in client-side and server-side scripting languages and technologies: HTML5, CSS3, JavaScript, jQuery, Ajax, AngularJS, JSP, JSF, Servlets, Bootstrap, and Foundation.
  • Strong experience in database design, ER modeling, and schema design; used PL/SQL to write stored procedures, functions, triggers, and indexes; proficient in writing complex queries against Oracle, IBM DB2, SQL Server, and Sybase.
  • Expertise in XML technologies such as XSL, XSLT, XML Schema, XPath, XForms, and XSL-FO, and parsers such as SAX and DOM.
  • Experience in developing logging and standards mechanisms based on Log4j and SLF4J.
  • Proficient in version control tools such as Git, GitLab, ClearCase, ClearQuest, Harvest, IBM RTC, and SVN.
  • Hands-on experience with Continuous Integration (CI) and build-automation tools such as Maven, UrbanCode Deploy (UCD), GitLab, Jenkins, Harvest, and Apache Ant.
  • Hands-on experience with Windows, UNIX, and Linux environments; experienced in shell scripting, Python scripts, and deploying applications to servers. Experience in web/application server administration on WebSphere, WebLogic, and Apache Tomcat.
  • Experience with development models such as Iterative, Waterfall, and Agile (Scrum, SAFe, and dojo practices); involved in daily stand-up meetings, grooming and planning sessions, sprint demos, PI planning, and retrospectives.
  • Experience in unit testing applications using JUnit, test-driven development (TDD), and generative testing.
  • Excellent written and verbal communication, analytical, and problem-solving skills; strict attention to detail; able to work independently and to lead or work within a team.
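
To make the Spark handling above concrete, here is a minimal PySpark sketch of ingestion-time partitioning, caching, and a broadcast join. The paths, column names, and partition count are illustrative assumptions, not details taken from an actual engagement:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast, col

spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

# Large fact table; path and columns are placeholders.
events = spark.read.parquet("/data/raw/events")

# Repartition on the join key so downstream shuffles stay balanced,
# and cache because the frame is reused by several actions.
events = events.repartition(200, "customer_id").cache()

# Small dimension table: broadcast it to avoid a shuffle join.
customers = spark.read.parquet("/data/raw/customers")
enriched = events.join(broadcast(customers), "customer_id")

# Filter during ingestion itself, before anything lands in the curated zone.
enriched.filter(col("amount") > 0) \
        .write.mode("overwrite").parquet("/data/curated/events")
```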
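
And a second sketch of writing the file formats and codecs named above; paths are placeholders, and the Avro sink assumes the external spark-avro package is on the classpath:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("formats-sketch").getOrCreate()
df = spark.read.option("header", True).csv("/data/in/sample.csv")

# Parquet with Snappy, a common pairing for analytic workloads.
df.write.option("compression", "snappy").parquet("/data/out/parquet")

# Avro with deflate (requires the spark-avro package).
df.write.format("avro").option("compression", "deflate").save("/data/out/avro")

# Plain text with gzip; the text sink expects a single string column.
df.select(F.concat_ws(",", *df.columns).alias("value")) \
  .write.option("compression", "gzip").text("/data/out/text")
```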

TECHNICAL SKILLS:

Hadoop Technologies: HDFS, YARN, MapReduce, Apache Spark, Impala, Hue, Oozie, Hive, Pig, Sqoop

Hadoop Distributions: Cloudera CDH, Hortonworks HDP, MapR

Languages: Java, Python, Clojure, SQL, PL/SQL, HTML

J2EE Technologies: Struts, Spring, EJB, Hibernate, Spring JDBC, Servlets, JSP, JSF, JMS, JUnit, JavaMail API, JSTL, Log4j

RDBMS: Oracle, SQL Server, DB2, MySQL, Sybase

Web Technologies: XML, Ajax, CSS, jQuery, JavaScript, AngularJS

Web Services: JAX-WS, WSDL, SOAP, REST

Application Servers: IBM WebSphere, WebLogic, Apache Tomcat

Design Patterns: MVC, Singleton, Factory, Abstract Factory Pattern

Development Tools: Eclipse, RAD, STS, IntelliJ

Version Control Tools: SVN, ClearCase, ClearQuest, Harvest, RTC, Git, GitLab

Build Tools: Ant, Maven

DevOps Tools: UrbanCode Deploy, Jenkins, GitLab Runner

Agile Tools: VersionOne, Trackplus, RTC

PROFESSIONAL EXPERIENCE:

Confidential, Bloomington, IL

Big Data/Hadoop Developer

Responsibilities:

  • Worked on Hadoop cluster and data querying tools to store and retrieve data.
  • Involved in the complete Software Development Life Cycle (SDLC) while developing applications.
  • Implemented Spark jobs using Python/Clojure and Spark SQL for faster testing and processing of data.
  • Handled data cleansing and transformation tasks using Spark (Python) and Hive.
  • Worked extensively with Spark SQL, DataFrames, and RDDs; used DataFrames, and the RDDs underlying them, for data transformations.
  • Worked on real-time data processing using Spark Streaming and Kafka with Python (see the first sketch after this list).
  • Handled large DataFrames using partitioning, Spark's in-memory capabilities, broadcasts, and effective and efficient joins, actions, and transformations during the ingestion process itself.
  • Actively applied Spark tuning techniques, caching RDDs and increasing the number of executors per node.
  • Strong experience analyzing large data sets by writing PySpark scripts and Hive queries.
  • Tuned the Cassandra cluster by changing read-operation, compaction, memory-cache, and row-cache parameters.
  • Loaded all datasets from source CSV files into Hive and Cassandra using Spark/PySpark.
  • Used PySpark RDDs to transform and filter log lines containing "ERROR", "FAILURE", or "WARNING", then stored the results in HDFS (see the second sketch after this list).
  • Built an ingestion framework that moved files from SFTP to HDFS and ingested financial data into HDFS.
  • Experienced with the RDD architecture, implementing Spark operations on RDDs, and optimizing Spark transformations and actions.
  • Converted Hive/SQL queries into Spark transformations using Spark RDDs.
  • Performed advanced procedures such as data analytics and processing using the in-memory computing capabilities of Spark with Scala/Python/Clojure.
  • Used both DataFrames/Spark SQL and RDDs for data-aggregation queries, feeding results back to the mainframe.
  • Worked on Python scripts to read data in CSV, JSON, and Parquet formats for bulk processing.
  • Developed Python/Clojure scripts to sort, join, and filter enterprise data.
  • Wrote shell and Python scripts to compare flat files line by line and stop processing when the comparison fails (see the third sketch after this list).
  • Performed data querying and summarization using Pig and Hive; created User-Defined Functions (UDFs) and User-Defined Aggregate Functions (UDAFs) (see the fourth sketch after this list).
  • Implemented Sqoop jobs for large data exchanges between RDBMSs and Hadoop clusters.
  • Scheduled and ran complex Hadoop jobs on the cluster at given schedules in a distributed environment using Apache Oozie.
  • Created and implemented Oozie workflows, coordinators, and bundles for both the Auto and Fire lines of business, each with its own instance, to control production flows in TSMAS, triggered by a file arriving in HDFS from DAT.
  • Prepared Avro schema files and Parquet files for generating Impala tables; created Impala tables, loaded data into them, and queried the data. Worked with Impala for data retrieval.
  • Experienced in working with data in CSV, XML, JSON, and Parquet formats at different compression levels, writing the output to S3 with the desired partitioning.
  • Loaded data from Linux file systems, servers, and Java web services using Kafka producers and consumers.
  • Implemented test scripts to support test driven development and integration.
  • Reviewed and managed Hadoop log files by consolidating logs from multiple machines.
  • Installed and Configured Hadoop cluster using Amazon Web Services (AWS) for POC purposes.
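
The first sketch below shows the kind of Kafka-to-HDFS pipeline described in this list. The resume does not say whether DStreams or Structured Streaming was used; this version uses Structured Streaming, and the broker, topic, and paths are placeholder assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

# Requires the spark-sql-kafka package; broker and topic are placeholders.
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "app-logs")
       .load())

# Kafka delivers values as bytes; cast to string before processing.
lines = raw.select(col("value").cast("string").alias("value"))

# Persist the stream to HDFS, with checkpointing for fault tolerance.
query = (lines.writeStream.format("text")
         .option("path", "hdfs:///data/stream/out")
         .option("checkpointLocation", "hdfs:///data/stream/_chk")
         .start())
query.awaitTermination()
```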
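
The second sketch is the batch-side equivalent of the log filtering described above, with placeholder paths:

```python
from pyspark import SparkContext

sc = SparkContext(appName="log-filter-sketch")

# Input and output paths are placeholders.
logs = sc.textFile("hdfs:///logs/app/*.log")

# Keep only lines carrying one of the flagged markers.
flagged = logs.filter(
    lambda line: any(tag in line for tag in ("ERROR", "FAILURE", "WARNING")))

flagged.saveAsTextFile("hdfs:///logs/flagged")
```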
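
The third sketch is a plain-Python version of the flat-file comparison; the command-line wiring is an assumption about how such a script would fit into the pipeline:

```python
import sys
from itertools import zip_longest

def compare_flat_files(path_a: str, path_b: str) -> bool:
    """Return True if the two files match line by line."""
    with open(path_a) as fa, open(path_b) as fb:
        for n, (a, b) in enumerate(zip_longest(fa, fb), start=1):
            if a != b:
                print(f"Mismatch at line {n}: {a!r} != {b!r}")
                return False
    return True

if __name__ == "__main__":
    # Stop processing (non-zero exit) when the comparison fails,
    # as described in the bullet above.
    if not compare_flat_files(sys.argv[1], sys.argv[2]):
        sys.exit(1)
```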
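
The fourth sketch shows a UDF. Pig and Hive UDFs are typically written in Java; as a stand-in, this registers an equivalent PySpark UDF for use from SQL, with toy logic:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = (SparkSession.builder.enableHiveSupport()
         .appName("udf-sketch").getOrCreate())

def normalize_state(s):
    # Toy logic: trim, uppercase, keep the two-letter code.
    return s.strip().upper()[:2] if s else None

# Wrap for the DataFrame API and register for SQL/HiveQL use.
normalize_state_udf = udf(normalize_state, StringType())
spark.udf.register("normalize_state", normalize_state, StringType())

spark.sql("SELECT normalize_state(' il ') AS state").show()
```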

Environment: HDFS, Spark, Spark SQL, Clojure, Python, Eclipse, Java, DataFrames, RDD, AWS, Impala, Oozie, Kafka, Maven, Git, GitLab, GitLab Runner, Bikeshed, Cloverage, Eastwood, shell scripting.

Confidential, Chattanooga, TN

Java / Big Data Developer

Responsibilities:

  • Developed Spark scripts using Python for data aggregation and validation, and verified their performance against MR jobs.
  • Collaborated on insights with Data Scientists, Business Analysts and Partners.
  • Performed advanced procedures such as text analytics and processing using the in-memory computing capabilities of Spark with Python.
  • Used Python pandas DataFrames for data analysis (see the first sketch after this list).
  • Used Spark Streaming APIs to perform transformations and actions on the fly to build the common learner data model, which receives data from Kafka in near real time and persists it to Cassandra (see the second sketch after this list).
  • Enhanced and optimized Spark scripts to aggregate, group and run data mining tasks.
  • Loaded data into Spark RDDs and performed in-memory computation to generate the output response.
  • Converted Hive/SQL queries into Spark transformations using Spark RDDs and PySpark (see the third sketch after this list).
  • Optimized existing Hadoop algorithms using SparkContext, Spark SQL, DataFrames, and pair RDDs.
  • Used the Spark API over Hadoop YARN to perform analytics on data and to monitor scheduling.
  • Implemented schema extraction for the Parquet and Avro file formats (see the fourth sketch after this list).
  • Experienced in performance tuning of Spark applications: setting the right batch interval, choosing the correct level of parallelism, and tuning memory.
  • Developed Hive queries to process data and generate data cubes for visualization.
  • Built dedicated functions to ingest columns into schemas for Spark applications.
  • Handled large data sets using partitioning, Spark's in-memory capabilities, and effective and efficient joins and transformations during the ingestion process itself.
  • Developed data-integration programs in Hadoop and RDBMS environments with both traditional and non-traditional data sources for data access and analysis.
  • Analyzed SQL scripts and designed solutions to implement them using PySpark.
  • Primarily focused on Spring components such as Spring MVC, dispatcher servlets, controllers, model-and-view objects, and view resolvers.
  • Used Spring MVC in the web layer and the abstract factory design pattern and DAOs in the business layer; developed DAOs that communicate with the database using Spring JDBC.
  • Worked with the Spring Framework's dependencies and annotations. Implemented a controller and mapped it to a URL in the servlet.xml file; implemented the JSP corresponding to the controller, into which data was propagated through the controller's model-and-view object.
  • Built a RESTful client to consume the APIs we had written; accessed a third-party REST service inside a Spring application using the Spring RestTemplate class.
  • Used an Ajax module to handle RESTful calls, enabling communication between view components and servers.
  • Used the Java Message Service (JMS) for reliable, asynchronous exchange of important information related to health-care insurance.
  • Used a client-side MVC framework, AngularJS, in UI development for data binding and consuming RESTful web services.
  • Implemented the Ping WS-Trust Security Token Service (STS), which allows organizations to extend SSO identity management to SOAP and RESTful web services.
  • Designed and developed JSP pages and servlets with HTML, widgets, JavaScript, jQuery, XML, and XSL, and implemented the front-end validations.
  • Extensively used CSS, Foundation, and Bootstrap for styling HTML elements.
  • Worked on a SOAP Apache Axis client to consume XML-based web services; tested with SoapUI.
  • Designed and implemented an MVC architecture using the Spring Framework, which involved writing action classes, forms, custom tag libraries, and JSP pages.
  • Used IBM Rational Software Analyzer (RSAR), an extensible code-quality analysis solution that enables code reviews, bug identification, and policy enforcement early in the development cycle.
  • Used IBM AppScan to identify security vulnerabilities and generate reports and fix recommendations; developed fixes for vulnerabilities found via SonarQube reports.
  • Worked on IBM UrbanCode Deploy to automate application deployments through environments; it is designed to facilitate rapid feedback and continuous delivery in agile development while providing the audit trails, versioning, and approvals needed in production.
  • Extensively used the Maven build tool to generate EARs and deploy them on WAS 8.5.
  • Used the Nexus repository manager to store and retrieve build artifacts.
  • Used the JUnit framework for unit testing of the application.
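
A minimal pandas sketch of the kind of analysis described in this list; the file and column names are invented for illustration:

```python
import pandas as pd

# Illustrative claims extract; columns are assumptions.
df = pd.read_csv("claims.csv", parse_dates=["service_date"])

# Quick profile of the data set.
print(df.describe(include="all"))

# Example aggregation: paid amount per provider per month.
monthly = (df.assign(month=df["service_date"].dt.to_period("M"))
             .groupby(["provider_id", "month"])["paid_amount"]
             .sum()
             .reset_index())
print(monthly.head())
```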
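
The second sketch outlines the Kafka-to-Cassandra path using Structured Streaming's foreachBatch with the DataStax spark-cassandra-connector; the broker, topic, keyspace, and table are placeholders, and the single payload column is assumed to match the table schema:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("learner-model-sketch").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "learner-events")
          .load()
          .select(col("value").cast("string").alias("payload")))

def write_to_cassandra(batch_df, batch_id):
    # Assumes the spark-cassandra-connector is on the classpath and
    # the keyspace/table already exist with a matching schema.
    (batch_df.write.format("org.apache.spark.sql.cassandra")
             .options(keyspace="learning", table="learner_events")
             .mode("append").save())

query = events.writeStream.foreachBatch(write_to_cassandra).start()
query.awaitTermination()
```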
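
The third sketch shows the shape of a Hive-to-Spark conversion; the table and columns are invented for illustration:

```python
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder.enableHiveSupport()
         .appName("convert-sketch").getOrCreate())

# Original HiveQL (illustrative):
#   SELECT member_id, SUM(claim_amount) AS total
#   FROM claims GROUP BY member_id HAVING SUM(claim_amount) > 1000;

claims = spark.table("claims")
totals = (claims.groupBy("member_id")
                .agg(F.sum("claim_amount").alias("total"))
                .filter(F.col("total") > 1000))
totals.show()
```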
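
The fourth sketch covers schema extraction: Parquet and Avro files carry their schema with them, so reading the file is enough to recover it. Paths are placeholders, and Avro again assumes the spark-avro package:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-sketch").getOrCreate()

# Both formats are self-describing; no rows need to be processed.
parquet_schema = spark.read.parquet("/data/in/events.parquet").schema
avro_schema = spark.read.format("avro").load("/data/in/events.avro").schema

print(parquet_schema.json())          # serializable for reuse elsewhere
print(avro_schema.simpleString())
```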

Environment: Hadoop (HDFS), Hive, YARN, Scala, Spark, Python, Kafka, Linux, Java/J2EE, Spring MVC, Spring JDBC, Tiles, SOAP, REST, JavaScript, jQuery, AJAX, JSON, Maven, Nexus, WebSphere Application Server, RestTemplate, Rational Team Concert, HTML5, CSS3, Foundation, JUnit, RSAR, AppScan, SonarQube, SoapUI, Postman.

Confidential, Dallas, TX

Sr. Java Full Stack Developer

Responsibilities:

  • Involved in the full SDLC of the project, including analysis, design, development, and testing.
  • Developed the application using the Spring MVC framework, which included writing controller classes for handling requests and processing form submissions; performed validations using Commons Validator.
  • Created service classes in Spring MVC to implement the business layer.
  • Developed DAO classes for the data layer using Spring JDBC.
  • Produced RESTful web services for multiple HMS and third-party interfaces.
  • Used a client-side MVC framework, AngularJS, in UI development for data binding and consuming RESTful web services.
  • In AngularJS, worked on factory methods, controller methods, filters, and directives.
  • Worked on various DB2 queries to obtain user profiles for the application. Developed TWS jobs that run nightly to process and load claims into the database.
  • Used the Java Collections Framework extensively (ArrayList, HashMap) for data manipulation.
  • Transformed documents into different formats such as PDF, .xls, .xml, and .txt using XML and XSL-FO documents, with asynchronous processing of data via Ajax.
  • Developed the front-end screens using HTML5, CSS, JSP, and JSTL.
  • Wrote client-side validations in JavaScript; used jQuery and Ajax extensively for client-side validation; made extensive use of JSTL and tag libraries.
  • Expertise in analyzing the DOM layout, JavaScript functions, and cascading styles across browsers using Firebug and Developer Toolbar.
  • Built the application using Ant and used Log4j to generate log files for the application.
  • Was responsible for impact analysis and preparing the analysis document.
  • Knowledge of installing, configuring, and administering the WebLogic application server.
  • Used the JUnit framework for unit testing of the application.
  • Used Log4j to log various levels of information, such as error, info, and debug, into the log files.
  • Responsible for maintaining source-code versions using the Harvest tool used by the client (HMS).
  • Involved in daily agile stand-up meetings, grooming, planning sessions, sprint demos, and retrospective meetings using the VersionOne tool.

Environment: JDK 1.8, Core Java, J2EE, Eclipse, Servlets, JSP, Spring MVC, Spring Boot, HTML, TWS, XML, JSTL, XPath, jQuery, AngularJS, AJAX, DB2, REST web services, TDD, WebLogic Server, Harvest, ANT, JDBC, Spring JDBC, VersionOne.

Confidential, SPR, IL

Sr. Java Developer

Responsibilities:

  • Involved in developing the application on the Java/J2EE platform. Implemented the Model-View-Controller (MVC) structure using Struts.
  • Developed the Action classes and ActionForm classes, created JSPs using Struts tag libraries, and configured them in the struts-config.xml and web.xml files.
  • Developed JSPs and servlets using the Struts framework for the UI interaction of different modules.
  • Designed and developed EJB stateless session beans for business logic and business access per the use cases, and to store persistent data.
  • Developed distributed applications using the EJB 2.1 and 3.0 specifications. Worked on migration from EJB 2.1 to EJB 3.0.
  • Mapped SQL databases to Java objects using iBATIS; worked on the SQL map configuration file (SqlMap.xml) for POJO classes, writing DB2 queries and mapping queries to objects.
  • Worked on the mapper configuration file (SqlMapConfig.xml), which contains settings and properties.
  • Involved in designing the user interfaces using the Struts Tiles framework (tiles-defs.xml), HTML, and JSPs.
  • Developed JSP pages for the presentation layer (UI) using Struts, with client-side validations via the Struts Validator framework and JavaScript.
  • Worked on the migration of another application, Customer Service Agency (CSA), from JSF 1.2 to JSF 2.0.
  • Worked on front-end validations using JavaScript, removing previously used Dojo scripts.
  • Implemented Facelets, creating templates to serve the role of Tiles.
  • Removed JSPs and converted them to XHTML to gain its advantages.
  • Worked with Rational ClearCase and Rational ClearQuest, on both individual views and integration views.
  • Performed unit testing using the JUnit framework and used StrutsTestCase for testing Action classes.
  • Built the application using Ant and used Log4j to generate log files for the application.

Environment: J2EE, Struts, JSF, EJB, WebSphere Application Server, DB2, Ant, Log4j, HTML, JSP, XHTML, JavaScript, ClearCase, ClearQuest, Dojo, iBATIS, Tiles.

Confidential

Java/J2EE Developer


Responsibilities:

  • Used the Spring Framework to implement the Model-View-Controller (MVC) architecture, promoting loose coupling and making the application more scalable for the future.
  • Developed the Hibernate mapping files to retrieve and update customer information from/to the Oracle database; built UNIX shell scripts for data migration and batch processing.
  • Used the WebLogic application server for deploying various components of the application.
  • Developed the user interface using JSP, JavaScript, HTML, and CSS.
  • Developed custom tags, JSTL, and TLDs to support custom user interfaces.
  • Developed and deployed web services using Apache Axis2 in Java with SOAP/WSDL on an SOA architecture.
  • Implemented web services using WSDL.
  • Developed test cases and performed unit testing using the JUnit framework.
  • Used the TOAD and PuTTY extensions in the IDE.
  • Extensively used Maven scripts to build the project.
  • Used Log4j for logging and debugging.
  • Experienced in build and release processes and configuration management.

Environment: J2EE, Spring, Hibernate, WebLogic Application Server, Maven, Log4j, JSP, JavaScript, HTML, JSTL, TLD, Apache Axis2, SOAP/WSDL, Web Services, SVN, XSLT, PuTTY, TOAD.

Confidential

System Analyst


Responsibilities:

  • Responsible for developing DAOs (Data Access Objects) to interact with the database using JDBC.
  • Used JavaScript/jQuery, HTML, and CSS extensively to develop client-side code; used JavaScript extensively for client-side validations.
  • Used the Tomcat application server for deploying the application.
  • Performed extensive test-driven development, using JUnit for unit testing.
  • Designed and developed stored procedures and triggers in Oracle to meet the needs of the entire application.
  • Worked on core Java concepts: exception handling, generics, and collections.
  • Developed the build script using Ant.

Environment: DAOs, JDBC, JSP, Servlets, JavaScript, jQuery, HTML, Tomcat, JUnit, Oracle, exception handling, generics, collections, and Ant.
