
Senior Big Data Developer Resume

Urbana, Maryland

SUMMARY:

  • Overall 8+ years of professional IT experience in Software Development, which includes 4+ years of experience in the ingestion, storage, querying, processing and analysis of Big Data using Hadoop technologies and solutions.
  • Excellent understanding of Hadoop Distributed File System (HDFS) and various components such as Spark, YARN and MapReduce concepts.
  • Understanding of Spark for Big Data integration.
  • Hands-on experience with Spark and Scala programming, with good knowledge of Spark architecture and its in-memory processing; used Apache Spark, Storm and Kafka.
  • Knowledge of job workflow scheduling and monitoring using Oozie.
  • Excellent understanding and knowledge of NoSQL databases like HBase and MongoDB.
  • Experience with distributed systems, large-scale non-relational data stores, and NoSQL map-reduce systems, data modeling, performance tuning, and multi-terabyte data warehouses.
  • In-depth knowledge of writing Hadoop MapReduce programs to collect logs and feed them into NoSQL databases like HBase for future analytical purposes.
  • Good experience in handling different file formats like text files, AVRO and ORC data files in Hive.
  • Extended Hive and Pig core functionality by writing custom UDFs.
  • Experience in importing and exporting data using SQOOP between HDFS and Relational Database Management Systems (RDBMS).
  • Experience in J2EE, JDBC, Servlets, Struts, Hibernate, Ajax, JavaScript, jQuery, XML and HTML, as well as SQL, PL/SQL and database concepts.
  • Expertise in distributed and web environments, focused on core Java technologies such as Collections, multithreading, IO, exception handling and memory management.
  • Led solution design and implementation of Big Data solutions using Cloudera.
  • Engaged with the Costa Rica development team to lead and guide technical decisions.
  • Developed solutions and provided direction on the development, integration and maintenance of Big Data analytic solutions and application systems to support Intel's business needs and meet IT standards.
  • Project engagement management responsibilities included owning TDD and QAC reviews, design and architecture reviews, and pre-reviews as required for larger project/development efforts.
  • Measured project quality; defined and improved the governance process.
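The MapReduce-into-HBase pattern mentioned above can be sketched in plain Python, with no Hadoop cluster required; the log format, field layout and counts here are invented purely for illustration:

```python
from collections import defaultdict

# Minimal sketch of the MapReduce log-aggregation pattern.
# The log lines and their fields are hypothetical examples.
log_lines = [
    "2017-03-01 ERROR payment timeout",
    "2017-03-01 INFO request served",
    "2017-03-02 ERROR payment timeout",
]

def map_phase(line):
    """Mapper: emit a (key, 1) pair keyed on the log level."""
    _, level, *_ = line.split()
    yield level, 1

def reduce_phase(pairs):
    """Reducer: sum the counts for each key."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

pairs = [pair for line in log_lines for pair in map_phase(line)]
level_counts = reduce_phase(pairs)
print(level_counts)  # {'ERROR': 2, 'INFO': 1}
```

In a real job the reducer output would be written to an HBase table rather than printed, but the map/shuffle/reduce shape is the same.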

TECHNICAL SKILLS:

Big Data: Apache Spark, Scala, MapReduce, HDFS, HBase, Hive, Pig, SQOOP, PostgreSQL

Databases: Oracle 9i/11g, MySQL, SQL Server 2000/2005

Hadoop distributions: Cloudera, Hortonworks, AWS

Languages: SQL, PL/SQL, Java

UI: HTML, CSS, JavaScript

Defect Tracking Tools: Quality Center, JIRA

Tools: SQL Tools, TOAD, Eclipse

Version Control: Tortoise SVN, GitHub

Operating Systems: Windows ..., Linux/Unix

Methodologies: Agile, Design Patterns

Java Technologies: Core Java, JDBC, JSP, Tomcat

PROFESSIONAL EXPERIENCE:

Senior Big Data Developer

Confidential, Urbana, Maryland

Responsibilities:

  • Analyzed large amounts of data sets to determine optimal way to aggregate and report on it.
  • Hands-on experience in Spark and Spark Streaming: created RDDs and applied transformations and actions.
  • Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting.
  • Handled importing of data from various data sources, performed transformations using Hive, Spark and loaded data into HDFS.
  • Developed PySpark and Spark-SQL code for faster testing and processing of data.
  • Moved the cleansed data to the Analytics Cluster for business reporting.
  • Hands on experience on AWS platform with S3 & EMR.
  • Experience working with different data formats such as flat files, ORC, AVRO and JSON.
  • Provided design recommendations and thought leadership to sponsors and stakeholders, improving review processes and resolving technical problems.
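The transformation/action flow described in the bullets above can be sketched without a cluster, using plain Python built-ins as stand-ins for the Spark operations; the sales records and the metric are invented for illustration:

```python
from functools import reduce

# Minimal sketch of a Spark-style pipeline over (region, amount) records.
# The data is hypothetical; in Spark these would be RDD operations.
records = [("east", 120.0), ("west", 80.0), ("east", 40.0), ("west", -5.0)]

# "Transformations" (lazy in Spark; plain iterators here)
valid = filter(lambda r: r[1] > 0, records)   # like rdd.filter(...)
amounts = map(lambda r: r[1], valid)          # like rdd.map(...)

# "Action" that materialises a result, like rdd.reduce(...)
total_sales = reduce(lambda a, b: a + b, amounts)
print(total_sales)  # 240.0
```

The key idea mirrored here is that transformations only describe the computation, while an action such as `reduce` actually produces a value.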

Environment: Apache Spark, Scala, Spark-Core, Spark-SQL, Hadoop, MapReduce, HDFS, Hive, Pig, MongoDB, Sqoop, Oozie, Python, MySQL, Java (jdk1.7), AWS

Big Data Consultant

Confidential, Raleigh, NC

Responsibilities:

  • Built patterns according to business requirements to help find violations in the market and generate alerts, using Big Data technologies (Hive, Tez and Spark) on AWS.
  • Installed and configured Hadoop, HDFS and Spark; developed multiple MapReduce and Spark jobs for data cleaning and pre-processing.
  • Solid understanding of the REST architectural style and its application to well-performing web sites for global usage.
  • Created Hive tables for data analysis to meet the business requirements.
  • Created Hive tables and worked on them using HiveQL; imported and exported data between HDFS and Oracle Database using Sqoop.
  • Implemented test scripts to support test driven development and continuous integration.
  • Responsible for managing data coming from different sources.
  • Provided a batch-processing solution for large volumes of unstructured data using Spark.
  • Experience reading from and writing to NoSQL databases such as HBase and MongoDB.
  • Experience in managing and reviewing Hadoop log files.
  • Worked on Hive for exposing data for further analysis and for generating and transforming files from different analytical formats to text files.
  • Importing and exporting data into HDFS and Hive using Sqoop.
  • Involved in writing Hive scripts to extract, transform and load the data into the database.
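The Sqoop-style round trip between an RDBMS and delimited files can be sketched with SQLite and an in-memory buffer standing in for Oracle and HDFS; the table and column names are hypothetical:

```python
import csv
import io
import sqlite3

# Minimal sketch of the import/export pattern: RDBMS -> delimited file -> RDBMS.
# SQLite and StringIO stand in for Oracle and HDFS; the schema is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])

# "Export" the table to a comma-delimited file, as Sqoop writes to HDFS.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerows(conn.execute("SELECT id, amount FROM orders"))

# "Import" the delimited data back into a second table.
buf.seek(0)
rows = [(int(i), float(a)) for i, a in csv.reader(buf)]
conn.execute("CREATE TABLE orders_copy (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders_copy VALUES (?, ?)", rows)
print(conn.execute("SELECT COUNT(*) FROM orders_copy").fetchone()[0])  # 2
```

Sqoop parallelises this same extract-to-delimited-files step across mappers; the sketch shows only the single-threaded shape of the data movement.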

Environment: Apache Spark, Scala, Spark-Core, Spark-Streaming, Spark-SQL, Hadoop, MapReduce, HDFS, Hive, Pig, MongoDB, Sqoop, Oozie, MySQL, Java (jdk1.7), AWS

Bigdata Developer

Confidential, Columbus, Ohio

Responsibilities:

  • Involved in the analysis, design, development and testing phases of the Software Development Life Cycle (SDLC).
  • Used Struts framework to add a module that would capture progress notes entered into the system.
  • Designed and developed the web tier using HTML, CSS, JSP and Servlets.
  • Implemented the exception-handling mechanism and used the Struts error-message mechanism.
  • Developed and implemented intranet website using JSPs, Servlets, HTML and JavaScript to provide information about the application.
  • Used JSTL and developed the required tiles and tile definitions for templating, and defined the configuration in struts-config.xml.
  • Implemented MVC architecture to separate the presentation, business and database logic in the application.
  • Front-end development using HTML, CSS, JSP and client-side validations performed by using JavaScript.
  • Developed JSP pages using custom tags, the Tiles framework and the Struts framework.

Environment: Java 7, JSP, Servlets, Spring, HTML, CSS, Hibernate, Maven, Oracle, WinSCP and Tortoise SVN.

Java Program Analyst

Confidential

Responsibilities:

  • Involved in the complete Software Development Life Cycle including Requirement Analysis, Design, Implementation, Testing and Maintenance.
  • Utilized in-depth functional and technical experience in Java/J2EE and other leading-edge products and technologies, in conjunction with industry and business skills, to deliver solutions to customers.
  • Designed the front end of the application using Ember.js, HTML and CSS3.
  • Used Hibernate for mapping data representation from MVC model to Oracle Relational data model with a SQL-based schema.
  • Implemented Spring Batch jobs to send alerts to clients based on configured business rules.
  • Implemented multithreading for parallel processing of requests using various features of Concurrent API.
  • Worked on Oracle 11g databases and wrote SQL queries as well as stored procedures for the application.
  • Assisted with production support activities using JIRA when necessary to help identify and resolve escalated production issues based on the SLA.
  • In addition, when a fix required a code change, documented the Change Request, assigned it to the respective teams and tracked it to closure.
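The parallel request processing described above was done with the Java Concurrent API; the same fan-out pattern can be sketched in Python with `concurrent.futures` (`handle_request` is a hypothetical stand-in for the real request handler):

```python
from concurrent.futures import ThreadPoolExecutor

# Minimal sketch of parallel request processing with a thread pool.
# handle_request is an invented placeholder for the real per-request work.
def handle_request(request_id):
    """Pretend to service one client request and return its result."""
    return request_id * 2

requests = [1, 2, 3, 4]
with ThreadPoolExecutor(max_workers=4) as pool:
    # pool.map fans the requests out across worker threads and
    # returns results in the original submission order.
    results = list(pool.map(handle_request, requests))
print(results)  # [2, 4, 6, 8]
```

In Java the equivalent shape is an `ExecutorService` with submitted `Callable` tasks collected via `Future.get()`.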

Environment: Java, JSP, Servlets, Spring, HTML, CSS, Hibernate, XML, Maven, Oracle.

Oracle Database Developer

Confidential

Responsibilities:

  • Designed, developed and maintained an internal interface application allowing one application to share data with another.
  • Analyzed 90% of all changes and modifications to the interface application.
  • Coordinated development work efforts that spanned multiple applications and developers.
  • Developed and maintained data models for internal and external interfaces.
  • Worked with other Bureaus in the Department of State to implement data sharing interfaces.
  • Attended Configuration Management Process Working Group and Configuration Control Board meetings.
  • Performed DDL (CREATE, ALTER, DROP, TRUNCATE and RENAME), DML (INSERT, UPDATE, DELETE and SELECT) and DCL (GRANT and REVOKE) operations where permitted.
  • Designed and developed database applications.
  • Designed the database structure for an application.
  • Estimated storage requirements for an application.
  • Specified modifications of the database structures for an application.
  • Kept the database administrator informed of required changes.
  • Tuned the application during development.
  • Established an application's security requirements during development.
  • Created Functions, Procedures and Packages as part of the development.
  • Assisted the Configuration Management group to design new procedures and processes.
  • Led the Interfaces Team, with responsibility for maintaining and supporting both internal and external interfaces.
  • Responsible for following all processes and procedures in place for the entire Software Development Life Cycle.
  • Wrote documents in support of the SDLC phases. Documents include requirements and analysis reports, design documents, and technical documentation.
  • Created MS Project schedules for large work efforts.
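The DDL and DML operations listed above can be exercised in miniature against SQLite, standing in for Oracle 9i; DCL (GRANT/REVOKE) is omitted because SQLite has no user accounts, and the schema is invented for illustration:

```python
import sqlite3

# Minimal sketch of the DDL/DML operations described above.
# SQLite stands in for Oracle; the interface_msgs table is hypothetical.
conn = sqlite3.connect(":memory:")

# DDL: create a table, then alter it to add a column.
conn.execute("CREATE TABLE interface_msgs (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("ALTER TABLE interface_msgs ADD COLUMN payload TEXT")

# DML: insert, update, select and delete a row.
conn.execute("INSERT INTO interface_msgs (id, status) VALUES (1, 'NEW')")
conn.execute("UPDATE interface_msgs SET status = 'SENT' WHERE id = 1")
status = conn.execute(
    "SELECT status FROM interface_msgs WHERE id = 1"
).fetchone()[0]
conn.execute("DELETE FROM interface_msgs WHERE id = 1")
print(status)  # SENT
```

In the Oracle work these statements would live inside PL/SQL procedures and packages rather than being issued ad hoc.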

Environment: Oracle 9i, Informatica 7.1.x, Control-M, TOAD, Linux/Unix
