
Big Data/Java Developer Resume

Plano, TX

SUMMARY:

  • Over 5 years of extensive experience designing, developing, and debugging web-based enterprise applications using Java/J2EE technologies.
  • Over 2 years of experience as a Hadoop developer with strong knowledge of the Hadoop ecosystem: HDFS, MapReduce, and YARN.
  • Hands-on experience with MapReduce jobs; experience installing, configuring, and administering Hadoop clusters across the major Hadoop distributions.
  • Developed Spark code in Scala for faster testing and processing of data.
  • Experienced in performing CRUD operations through the HBase Java client API and REST API (see the sketch after this list).
  • Experienced in working with Amazon Web Services (AWS), using EC2 for computing and S3 for storage.
  • Experience importing and exporting terabytes of data between HDFS and relational database systems using Sqoop.
  • Extensive experience with object-oriented/core Java concepts such as exception handling, collections, and multithreading, and with advanced J2EE frameworks such as Spring and Hibernate.
  • Experience designing web applications using Node.js, AngularJS, JavaScript, jQuery, HTML5, and CSS3.
  • Expertise in relational databases such as MySQL and SQL Server (tables, views, indexes, sequences, stored procedures, functions, triggers, and packages) and in NoSQL databases such as HBase and MongoDB (document-oriented data, cluster management, and CRUD operations).
  • Experience with version control tools such as GitHub and SVN for source code management and merging after intermittent project releases, and with development IDEs such as Eclipse, Visual Studio, and Sublime Text 3.
  • Experience in J2EE middleware development/integration using the Tomcat application server and RESTful and SOAP web services.
  • Experience handling and executing projects using Agile methodologies (Scrum) along with test-driven development techniques.
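As an illustration of the HBase CRUD work mentioned above, a minimal sketch using the HBase Java client API; the table name, column family, and row key are hypothetical, not taken from any project described here:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseCrudSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("users"))) { // "users" is a hypothetical table

                // Create/update: put a row with one column
                Put put = new Put(Bytes.toBytes("row1"));
                put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("alice"));
                table.put(put);

                // Read: get the row back and extract the cell value
                Result result = table.get(new Get(Bytes.toBytes("row1")));
                byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
                System.out.println("name = " + Bytes.toString(name));

                // Delete: remove the row
                table.delete(new Delete(Bytes.toBytes("row1")));
            }
        }
    }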

TECHNICAL SKILLS:

Programming Languages: C, C#, Java, J2EE

Big Data Ecosystem: Hadoop (HDFS, YARN, MapReduce), Hive, Pig, Sqoop, Flume, Spark, Amazon AWS (EC2, S3, EMR)

Web Technologies: AngularJS, Node.js, HTML5, CSS3, PHP, JavaScript, jQuery, Servlets, JSP, JDBC, Spring, JSF, REST API, EJB, Hibernate

Web Servers: Apache Tomcat, WebLogic

Databases: MS SQL Server 2014/2012/2008 R2, MySQL, MongoDB, HBase, MS Access

IDE: Eclipse, NetBeans, IntelliJ, Visual Studio

Version Control Tools: SVN, GIT

Operating Systems: Linux; Windows 7, 8, 8.1, and 10

PROFESSIONAL EXPERIENCE:

Confidential - Plano, TX

Big Data/Java Developer

Responsibilities:

  • Installed, configured, managed, supported, and monitored Hadoop clusters across distributions and platforms such as Apache Spark, Cloudera, and the AWS service console.
  • Used Kafka's distributed, partitioned, replicated commit-log functionality to maintain messaging feeds, and created applications that monitor consumer lag within Apache Kafka clusters (see the Kafka sketch after this list).
  • Involved in converting MapReduce programs into Spark transformations using Spark RDDs in Scala.
  • Worked with the monitoring and maintenance group on cluster management.
  • Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs and Scala.
  • Developed Pig scripts to perform cleansing on datasets.
  • Designed a messaging system using Apache Kafka to send messages across teams.
  • Used shell scripting to automate routine jobs.
  • Involved in developing an application that reads from and writes to multiple file formats.
  • Used Spark libraries to load Excel data and write it to Amazon S3 in Parquet format (see the S3/Parquet sketch after this list).
  • Worked on reading data from Amazon S3 buckets into Spark RDDs and performing actions on those RDDs.
  • Loaded datasets into Hive for ETL operations.
  • Worked on reading multiple data formats on HDFS using Scala.
  • Used Sqoop to move data between HDFS and relational databases for analysis.
  • Used Apache Kafka to aggregate log data from multiple servers and make it available to downstream systems for analysis using Spark Streaming.
  • Hands-on experience using YARN and tools such as Pig and Hive for data analysis, Sqoop for data ingestion, Oozie for scheduling, and ZooKeeper for coordinating cluster resources.
  • Experience handling data in different file formats: text, SequenceFile, Avro, and JSON.
  • Used the AWS SDK to connect to Amazon S3 buckets, which served as the object storage service for the application's media files.
  • Responsible for building scalable distributed data solutions using MongoDB.
  • Used Eclipse as the IDE for developing the application and JIRA for bug and issue tracking.
  • Experience working in Agile development following the Scrum process, with sprints and daily stand-up meetings.
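A minimal sketch of the consumer-lag monitoring described above, assuming a Kafka 2.x Java client; the broker address and group name are hypothetical, and the original application may have been structured quite differently:

    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.ByteArrayDeserializer;

    public class ConsumerLagSketch {
        public static void main(String[] args) throws Exception {
            Properties admProps = new Properties();
            admProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker

            try (AdminClient admin = AdminClient.create(admProps)) {
                // Committed offsets for a hypothetical consumer group
                Map<TopicPartition, OffsetAndMetadata> committed =
                        admin.listConsumerGroupOffsets("example-group")
                             .partitionsToOffsetAndMetadata().get();

                // End offsets for the same partitions, via a throwaway consumer
                Properties conProps = new Properties();
                conProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
                conProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
                conProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
                try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(conProps)) {
                    Map<TopicPartition, Long> ends = consumer.endOffsets(committed.keySet());
                    // Lag per partition = log end offset minus committed offset
                    committed.forEach((tp, om) ->
                            System.out.printf("%s lag=%d%n", tp, ends.get(tp) - om.offset()));
                }
            }
        }
    }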
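And a minimal sketch of the S3-to-Parquet flow, assuming the Excel data was first exported to CSV (reading .xlsx directly would need a connector such as spark-excel, plus hadoop-aws for the s3a scheme); bucket names and paths are hypothetical:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class S3ToParquetSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("s3-to-parquet")
                    .getOrCreate();

            // Read CSV with a header row from a hypothetical S3 bucket
            Dataset<Row> input = spark.read()
                    .option("header", "true")
                    .csv("s3a://example-bucket/input/");

            // Write back to S3 in Parquet format, as in the bullets above
            input.write().mode("overwrite").parquet("s3a://example-bucket/output/parquet/");

            spark.stop();
        }
    }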

Environment: Hive, Kafka, Spark Streaming, MapReduce, Spark, Scala, NoSQL (HBase), Oozie, YARN, Pig, ZooKeeper, AWS.

Confidential - Albany, NY

Big Data/Java Developer

Responsibilities:

  • Extensively worked on converting existing MS SQL Server stored procedures to Hadoop using Spark with Scala, Spark SQL, and Hive.
  • Developed a generic Spark utility for pulling data from RDBMS systems over multiple parallel connections (see the sketch after this list).
  • Part of the Aquila tool development team, which converts SQL Server queries to Spark SQL and also handles automated deployment.
  • Developed batch and streaming workflows with the built-in Stonebranch scheduler and bash scripts to automate the data lake systems.
  • Worked on a POC to pull near-real-time data from CDC-enabled tables in SQL Server into HDFS using Spark APIs on an intraday schedule.
  • Worked on a Hortonworks distribution with a 120-node cluster handling petabytes of transaction data.
  • Experience resolving Spark and YARN resource-management issues such as shuffle failures, out-of-memory and heap-space errors, NullPointerExceptions, and schema incompatibilities in Spark.
  • Worked on AWS clusters using Qubole to process near-real-time data at hourly intervals.
  • Designed and implemented data-check and data-quality frameworks in Spark for the initial load process and the final publish stages.
  • Worked on ORC, Avro, Parquet, and JSON file formats and various compression techniques.
  • Responsible for preserving code and design integrity using GitHub.
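A minimal sketch of that kind of parallel RDBMS pull using Spark's Java API: the partitioned jdbc() read opens numPartitions parallel connections, splitting the key range across them. The connection string, table, and bounds are hypothetical; a generic utility would take them as parameters:

    import java.util.Properties;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class JdbcParallelPullSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().appName("rdbms-pull").getOrCreate();

            Properties props = new Properties();
            props.put("user", "etl_user");   // hypothetical credentials
            props.put("password", "secret");

            // Partitioned read: 8 parallel JDBC connections over the id range [1, 1000000]
            Dataset<Row> df = spark.read().jdbc(
                    "jdbc:sqlserver://dbhost:1433;databaseName=sales", // hypothetical server
                    "dbo.transactions",                                // hypothetical table
                    "id",          // partition column
                    1L,            // lower bound
                    1_000_000L,    // upper bound
                    8,             // number of parallel connections
                    props);

            // Land the pulled data in the data lake as Parquet
            df.write().mode("overwrite").parquet("hdfs:///data/lake/transactions/");
            spark.stop();
        }
    }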

Environment: Apache Spark 2.0, Hadoop stack, Scala SDK, Java, Spark SQL, Hive, Stonebranch, SQL Server, data warehouse, Tableau, AWS S3, EMR, Qubole, IntelliJ IDEA, Jupyter Notebook, Git

Confidential

Java Developer

Responsibilities:

  • Actively participated in the analysis, system study, and design of the project. Worked in an Agile software development environment.
  • Applied current development approaches, including MVC and event-driven, object-oriented applications. Built applications using HTML5, JSP, CSS3, and AngularJS.
  • Used AngularJS directives to develop a single-page web application, which simplifies development and maintenance of the project.
  • Used the Atom editor to develop front-end code.
  • Based the development configuration on Spring Boot, which provides a developer-friendly environment with minimal setup. The persistence layer is built on Spring Data's CrudRepository, one of the framework's higher-level abstractions (see the sketch after this list).
  • Used Spring Tool Suite as the IDE for developing the web application.
  • Used the Hibernate framework to map object relationships in the development environment to database tables.
  • Configured the mapping between objects and tables through Java Persistence API annotations.
  • Developed web services using the SOAP protocol to extend the utilization of resources.
  • Used SOAP to integrate with mainframe components such as SOAP for CICS, CICS Transaction Gateway, and CICS Web Support.
  • Deployed Java code to mainframe and distributed systems using Maven.
  • Involved in developing the web application, including configuring WebLogic Application Server on Windows XP systems.
  • Used Maven as the build management tool to pull dependency JARs from the central repository and maintain the project structure for both test and development environments.
  • Extensively used Git for version control management.
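A minimal sketch of a CrudRepository-based persistence layer of the kind described above; the entity, table name, and query method are hypothetical. Spring Data derives the CRUD implementation at runtime from the interface, so no SQL or DAO boilerplate is needed:

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.Table;
    import org.springframework.data.repository.CrudRepository;

    // JPA annotations map the class to a hypothetical "customers" table
    @Entity
    @Table(name = "customers")
    class Customer {
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        private String name;

        protected Customer() {}                      // no-arg constructor required by JPA
        public Customer(String name) { this.name = name; }
        public Long getId() { return id; }
        public String getName() { return name; }
    }

    // save(), findById(), delete(), etc. come from CrudRepository for free
    interface CustomerRepository extends CrudRepository<Customer, Long> {
        List<Customer> findByName(String name);      // query derived from the method name
    }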

Environment: HTML, CSS, JavaScript, Atom, AngularJS, Bootstrap, JSP, JSTL, Servlets, Spring, Spring Boot, Spring Tool Suite, Hibernate, SOAP, WebLogic, Mainframes, Maven, Git.

Confidential

Java Developer

Responsibilities:

  • Involved in the design, coding, deployment, and maintenance of the project.
  • Involved in the design and implementation of the web tier using Servlets and JSP.
  • Performed client-side validations using JavaScript.
  • Maintained responsibility for database design, implementation, and administration.
  • Tested the functionality and behavioral aspects of the software.
  • Responsible for customer interaction, requirements analysis, and project scheduling.
  • Performed web development using AJAX techniques in combination with the Struts and JPF frameworks.
  • Created utility scripts for using AJAX effectively.
  • Designed and developed a new module, Report Framework, to simplify report generation for users.
  • Created dashboards for tracking application usage.
  • Developed stored procedures, triggers, and queries using T-SQL in SQL Server (see the sketch after this list).
  • Extensively used Eclipse while writing code.
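For illustration, a minimal servlet sketch that calls a T-SQL stored procedure of the kind described above through JDBC; the procedure name, connection string, credentials, and JSP view are all hypothetical:

    import java.io.IOException;
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class UsageServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String page = req.getParameter("page");
            String url = "jdbc:sqlserver://dbhost:1433;databaseName=app"; // hypothetical
            // JDBC escape syntax invokes the hypothetical dbo.usp_LogUsage procedure
            try (Connection conn = DriverManager.getConnection(url, "app_user", "secret");
                 CallableStatement cs = conn.prepareCall("{call dbo.usp_LogUsage(?)}")) {
                cs.setString(1, page);
                cs.execute();
            } catch (SQLException e) {
                throw new ServletException("usage logging failed", e);
            }
            // Forward to a hypothetical dashboard view
            req.getRequestDispatcher("/dashboard.jsp").forward(req, resp);
        }
    }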

Environment: Java, J2EE, Tomcat, SQL Server, Eclipse, AJAX, JSP, JavaScript, CSS, HTML.
