
Big Data Hadoop/Spark Developer/Lead Resume

Newark, DE

SUMMARY:

  • 11+ years of strong IT experience in the design, development, deployment, support and implementation of enterprise applications using Java, J2EE, Spring, Struts, Web Services, Design Patterns, SOA, SOAP, JMS, JSP/Servlets, JSON, jQuery and Hibernate, as well as the Big Data Hadoop technology stack.
  • More than 5.5 years of experience working with Hadoop, HDFS, the MapReduce framework and the Hadoop ecosystem.
  • Good experience working with Teradata and Teradata utilities such as BTEQ and TPT Export.
  • Experienced in implementing Big Data Technologies - Hadoop ecosystem/HDFS/ Map-Reduce Framework, Flume, Impala, Sqoop, Oozie, Storm, Spark, Scala, Kafka, HBase, Cassandra, Python, Zookeeper and HIVE data warehousing tool.
  • Good experience in using TDCH for exporting and importing data efficiently from Teradata into Hadoop ecosystem.
  • Strong experience on Hadoop distributions like Cloudera and HortonWorks.
  • Experience setting up data import/export tools such as Sqoop (Teradata to HDFS) and Flume for real-time streaming.
  • Worked with the Teradata RDBMS and utilities such as FastLoad, MultiLoad, TPump and FastExport.
  • Good experience in classic Hadoop administration and development and in the YARN architecture, along with the various Hadoop daemons such as JobTracker, TaskTracker, NameNode, DataNode, ResourceManager, NodeManager and ApplicationMaster.
  • Experience in writing Impala and Hive Queries for processing and analyzing large volumes of data.
  • Worked on migrating the old Java stack to the Typesafe stack, using Scala for backend programming.
  • Strong experience with Pig, Hive and Impala analytical functions, extending Hive, Impala and Pig core functionality by writing custom UDFs.
  • Experience with Apache Spark, Spark Streaming and DataFrames using Scala and Python.
  • Experience in Managing Hadoop Clusters using Cloudera Manager Tool.
  • Experience on Spark SQL and Spark Streaming applications.
  • Designed and deployed SBT-built Apache Spark applications on the cluster for data processing.
  • Developed Kafka connectors for landing data in HDFS and designed Spark Streaming jobs.
  • Ability to analyze different file formats like Avro and Parquet.
  • Automated Sqoop and Hive using Oozie scheduling.
  • Proficient in Object Oriented Modelling, Design and Implementation.
  • Expertise in Database Design, Creation and Management of Schemas, writing Stored Procedures, Constraints and SQL queries, Views, Export/Import etc.
  • Have worked on application servers like Tomcat, JBoss, Websphere, Weblogic and JRun.
  • Expertise in developing REST and SOAP based Java web services.
  • Expertise in writing Python and Unix shell scripts.
  • Used testing frameworks such as JUnit, HttpUnit, DBUnit and JMock.
  • Proficient in Java multithreading and socket programming.
  • Strong Experience in AGILE and Waterfall SDLC.
  • Worked with onshore, offshore and international client, business, product and technical teams.
  • Great team player and team builder; highly motivated, willing to lead, a fast learner who picks up new technology quickly and manages workload seamlessly to meet deadlines.
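
The custom Hive processing mentioned above can be illustrated with a minimal script: Hive's TRANSFORM clause streams rows to an external program as tab-separated text on stdin and reads transformed rows back from stdout, so a plain Python script can act as row-level custom logic. This is a hedged sketch; the column layout (user_id, amount) is hypothetical.

```python
#!/usr/bin/env python
# Streaming script for Hive's TRANSFORM clause (illustrative sketch).
# Hive pipes each row in as tab-separated fields on stdin and reads
# the transformed row back from stdout. Column names are hypothetical.
import sys

def transform_line(line):
    """Uppercase the user id and round the amount to two decimals."""
    user_id, amount = line.rstrip("\n").split("\t")
    return "%s\t%.2f" % (user_id.upper(), float(amount))

if __name__ == "__main__":
    for line in sys.stdin:
        if line.strip():
            print(transform_line(line))
```

In HiveQL such a script would be invoked along the lines of SELECT TRANSFORM(user_id, amount) USING 'transform.py' AS (user_id, amount) FROM some_table.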

TECHNICAL SKILLS:

Technologies: Java (7 & 8), Spring, Struts, Hibernate, JAX-RS, SOAP, SOA, REST, AngularJS, HTML5, AJAX, jQuery, Python, Scala, Linux, Unix shell scripting, JUnit.

Big Data Ecosystem: HDFS, MapReduce, Hive, Sqoop, Cassandra, Oozie, Pig, Zookeeper, Impala, Flume, Kafka, Storm, Spark and Scala.

Data warehousing tools: Teradata, Talend Open Studio.

Database and Tools: Oracle 10g/11g, JDBC, MySQL 5.0/5.1, Teradata, Hive, SQLite, DB2

Version Control Tools: GIT, SVN, CVS

Web Technologies: HTML, CSS, XML, JavaScript, jQuery, Servlets, JSP, REST, SOAP

Framework and Libraries: Apache Ant, Maven, Spring 2/2.5/3, Jakarta Struts, JSTL, log4j, Hibernate 3.5.

Web/Application Servers: Apache Tomcat, BEA WebLogic, JBoss and GlassFish

IDE & Tools: Eclipse, IntelliJ and Netbeans

PROFESSIONAL EXPERIENCE:


Confidential, Newark, DE

Big Data Hadoop/Spark Developer/Lead

Responsibilities:

  • Designed and developed an end-to-end ETL framework that moved data between Teradata/Oracle and HDFS in both directions.
  • Loaded the datasets into Hive and Impala for ETL (Extract, Transform and Load) operations.
  • Worked with the Oozie workflow engine to schedule time-based jobs that perform multiple actions.
  • Used Spark API over Cloudera Hadoop YARN to perform analytics on data in Hive.
  • Developed Scala scripts and UDFs using both DataFrames/Datasets/SQL and RDDs/MapReduce in Spark 1.8 for data aggregation and queries, and wrote data back into the OLTP system through Sqoop.
  • Optimized existing Hadoop algorithms using SparkContext, Spark SQL, DataFrames and pair RDDs.
  • Created Spring Boot web services for triggering Hadoop and Teradata jobs.
  • Experienced in running interactive queries using Impala.
  • Performed data analysis on user data to create models for forecasting engines.
  • Worked on the Impala Python driver.
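
The data-aggregation step described above was done with Spark DataFrames/RDDs; as a stdlib-only stand-in, the same reduce-by-key shape looks like this (the field names account/amount are hypothetical):

```python
# Pure-Python stand-in for the Spark aggregation described above:
# summing a measure per key, analogous to
# df.groupBy("account").sum("amount") on a Spark DataFrame.
# Field names are hypothetical.
from collections import defaultdict

def aggregate_by_key(records):
    """Sum amounts per account key."""
    totals = defaultdict(float)
    for account, amount in records:
        totals[account] += amount
    return dict(totals)

rows = [("A1", 10.0), ("A2", 5.0), ("A1", 2.5)]
print(aggregate_by_key(rows))  # {'A1': 12.5, 'A2': 5.0}
```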

Environment: Spark Core, Spark SQL, Scala, Python, Hive, Sqoop, Impala, Jenkins, Teradata, TDCH, TPT utilities.

Confidential, Richmond, VA

Senior Hadoop/Spark Developer/Lead

Responsibilities:

  • Designed and developed an end-to-end ETL framework that moved data between Teradata/Oracle and HDFS in both directions.
  • Loaded the datasets into Hive for ETL (Extract, Transform and Load) operations.
  • Used Spark API over Cloudera Hadoop YARN to perform analytics on data in Hive.
  • Developed Scala scripts and UDFs using both DataFrames/Datasets/SQL and RDDs/MapReduce in Spark 1.8 for data aggregation and queries, and wrote data back into the OLTP system through Sqoop.
  • Optimized existing Hadoop algorithms using SparkContext, Spark SQL, DataFrames and pair RDDs.
  • Implemented the ELK (Elasticsearch, Logstash, Kibana) stack to collect and analyze the logs produced by the Spark cluster.
  • Worked on a 130-node cluster.
  • Responsible for developing a data pipeline on Amazon AWS to extract data from weblogs and store it in HDFS.
  • Implemented partitioning, dynamic partitions and bucketing in Hive.
  • Experienced in running interactive queries using Impala.
  • Good knowledge of Amazon AWS services such as EMR and EC2, which provide fast and efficient processing of big data.
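
The Hive bucketing mentioned above (CLUSTERED BY ... INTO n BUCKETS) assigns each row to a bucket by hashing the bucket column and taking the result modulo the bucket count; for string columns Hive uses a Java-style polynomial hash. A hedged Python sketch of that assignment:

```python
# Sketch of Hive bucket assignment for a string column. Hive hashes
# the CLUSTERED BY column and takes it modulo the bucket count; for
# strings it uses a Java String.hashCode()-style polynomial hash,
# reproduced here. Illustrative only.

def java_string_hash(s):
    """Java String.hashCode(): h = 31*h + ord(ch), as signed 32-bit."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def bucket_for(key, num_buckets):
    """Bucket index for a string key (non-negative hash mod buckets)."""
    return (java_string_hash(key) & 0x7FFFFFFF) % num_buckets

print(bucket_for("account-42", 8))
```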

Environment: Hadoop YARN, Spark Core, Spark Streaming, Spark SQL, Scala, Python, Kafka, Hive, Sqoop, Amazon AWS, Elasticsearch, Impala, Cassandra, Jenkins, Cloudera, Oracle 12c, Linux.

Confidential, Sunnyvale, CA

Senior Big Data Developer/Lead

Responsibilities:

  • Designed and developed an end-to-end ETL framework that moved data between Teradata/Oracle and HDFS in both directions.
  • Loaded the datasets into Hive for ETL (Extract, Transform and Load) operations.
  • Loaded unstructured data into Hive tables using regular expressions and the SerDe interface (RegexSerDe).
  • Worked on Hive queries to categorize data from different iTunes transactional reports.
  • Involved in loading data from the UNIX file system into HDFS.
  • Hive external tables were used for raw data and managed tables were used for intermediate processing.
  • Wrote user-defined functions (UDFs) for Pig and Hive.
  • Worked on the integration of Cassandra with Kafka, Apache Storm and Apache Spark.
  • Integrated Spark with Kafka to feed Kafka events into Spark Streaming input.
  • Designed RDD pipelines with Spark Streaming and Spark SQL.
  • Used the Scalaz core library for functional programming with Scala in Spark.
  • Worked on the Hortonworks Hadoop distribution, versions 1.8 and 2.3.
  • Worked with the lambda functionality of Java 8.
  • Designed and developed Java RESTful web services that communicate with the Oracle database on the back end and return JSON-format data to the AngularJS framework, using Spring, Jersey, JAX-RS and AngularJS.
  • Participated in performance tuning of the application using JProfiler and Java VisualVM.
  • Wrote extensive queries using DataFrames.
  • Wrote test cases using JUnit as a framework.
  • Implemented FTP upload and download of JSON files.
  • Developed Autosys and Unix shell scripts to run cron jobs every midnight to update the EI database.
  • Handled the multithreading in the back end of the components.
  • Developed Hive queries and UDFs to analyze/transform the data in HDFS.
  • Migrated data across Hadoop clusters using DistCp.
  • Involved in writing Stored Procedures in Oracle.
  • Designed and developed configuration wizard using Spring MVC, used Spring WebFlow to create business rule based flow mechanism, and configuration entity model in JPA.
  • Involved in developing application modules in Java and scripting languages such as Python and Unix shell to extract/load ETL metadata from the Oracle database.
  • Troubleshot production issues and found resolutions; performance-tuned wherever applicable.
  • Created high-level designs during the design phase.
  • Maintained Git branches during project development; conducted merges and used Maven for building and deploying the applications to the JBoss server.
  • Installed monitoring tools to investigate memory leaks on the production server.
  • Used Oozie and Autosys as automation tools for running jobs.
  • Used Splunk tool for log analysis.
  • Worked on a POC comparing the processing time of Impala against Apache Hive for batch applications, with the goal of adopting Impala in the project.
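
The regex-based loading described above maps regular-expression capture groups to table columns (via a Hive SerDe such as org.apache.hadoop.hive.serde2.RegexSerDe); the parsing itself can be sketched in plain Python. The log-line format and pattern here are hypothetical.

```python
# Sketch of the regex parsing behind loading unstructured rows into
# Hive via a RegexSerDe, where each capture group becomes a table
# column. The line format and pattern below are hypothetical.
import re

# e.g. "2017-03-01 GET /track 200"
LINE_RE = re.compile(r"^(\S+) (\S+) (\S+) (\d{3})$")

def parse_line(line):
    """Return (date, method, path, status) or None for malformed lines."""
    m = LINE_RE.match(line.strip())
    return m.groups() if m else None

print(parse_line("2017-03-01 GET /track 200"))
# ('2017-03-01', 'GET', '/track', '200')
```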

Environment: Java, Apache Hadoop, HDFS, MapReduce, Hive, Impala, Pig, Sqoop, HBase, Oozie, PL/SQL, Zookeeper, Cassandra, Spark, Scala, Kafka, Spring DI & AOP, Web Services, Log4j, Oracle 10g, JUnit 4, Eclipse, Maven, Python, Unix shell scripting, AngularJS.

Confidential

Senior Developer

Responsibilities:

  • Written functional, design and test case specifications.
  • Developed modules based on the Struts MVC architecture.
  • Developed the UI using JavaScript, JSP, JSON, jQuery, HTML and CSS for interactive cross-browser functionality and a complex user interface.
  • Implemented the Web Service client for the login authentication, credit reports and applicant information using Apache Axis 2 Web Service.
  • Designed use case diagrams, class diagrams, and sequence diagrams as a part of Design Phase.
  • Created complex SQL Queries, PL/SQL Stored procedures, Functions for back end.
  • Migrated the SQL Server stored procedures to Hibernate.
  • Used Hibernate to do the object relational mapping between the tables and java objects.

Environment: Java, J2EE, JSP, Servlets 2.5/3, JMS 1.1, Hibernate 3.5, Spring DI & AOP, Web Services, UML, HTML, DHTML, JavaScript, Struts 1.1, CSS, XML, WebLogic, Log4j, Oracle 10g, SQL Server, JUnit 4, JNDI 1.2, Eclipse 3.6

Confidential

Senior Developer

Responsibilities:

  • Involved in all phases of the software development life cycle, including requirements gathering, application design, implementation, testing and maintenance support.
  • Used AJAX components in developing UI.
  • Worked with APIs to connect to third-party services (such as social media).
  • Built apps that communicate with RESTful services.
  • Created an online survey in Flex that retrieves the questions and answers from the database.
  • Developed and used JSP custom tags in the web tier to dynamically generate web pages.

Environment: Java, J2EE, JSP, Servlets 2.5/3, JMS 1.1, Hibernate 3.5, Spring DI & AOP, Web Services, UML, HTML, DHTML, JavaScript, Struts 1.1, CSS, XML, WebLogic, Log4j, Oracle 10g, SQL Server, JUnit 4, JNDI 1.2, Eclipse 3.6

Confidential

Senior Developer

Responsibilities:

  • Used the Google Maps API to enable Google Maps search, and enabled GPS/Wi-Fi detection of the user's location.
  • Used multithreading to implement parallel processing.
  • Integrated Struts, Hibernate and JBoss Application Server to provide efficient data access.
  • Create and/or update system requirements and associated technical design documentation.
  • Created web services based on SOA to support Confidential business sales operations.
  • Involved in designing and developing back-end Java beans using OOP principles.
  • Troubleshoot critical production issues.
  • Wrote XML Pull Parser to import data into Oracle database.

Environment: Java, J2EE, XML Pull Parser, Web Services, JDBC, Oracle9i, JBOSS, TOAD 8.0, Cruise Control 2.6, Maven 2.0, JUnit, HttpUnit, DBUnit, Eclipse 3.1

Confidential

Senior System Engineer

Responsibilities:

  • Worked on the Struts MVC framework for application design.
  • Worked on the Hibernate ORM framework with the Spring framework for data persistence and transaction management.
  • Provided technical support for production environments, resolving issues, analyzing defects, and providing and implementing solutions.
  • Worked on writing the DAO layer using Hibernate to access the Oracle database.
  • Wrote Hibernate XML-based mappings between Java classes and Oracle database tables.

Environment: Java, J2EE, JSP, Servlets 2.5/3, JMS 1.1, Hibernate 3.5, Spring DI & AOP, Web Services, UML, HTML, DHTML, JavaScript, Struts 1.1, CSS, XML, WebLogic, Log4j, Oracle 10g, SQL Server, JUnit 4, JNDI 1.2, Eclipse 3.6/3.4
