
Sr. Big Data/Hadoop Developer/Engineer Resume


Chicago, IL

SUMMARY:

  • 9+ years of experience in Application development/Administration/Architecture and Data Analytics with specialization in Java and Big Data Technologies
  • Experienced in installing, configuring, and testing Hadoop ecosystem components (Hive, Pig, Sqoop, etc.) on Linux/UNIX, including Hadoop administration.
  • Expertise in Java, Hadoop MapReduce, Pig, Hive, Oozie, Sqoop, Flume, Zookeeper, Impala, and NoSQL databases.
  • Excellent experience with the Hadoop ecosystem and in-depth understanding of MapReduce and the Hadoop infrastructure.
  • Excellent experience with the Amazon, Cloudera, and Hortonworks Hadoop distributions, and in maintaining and optimizing AWS infrastructure (EMR, EC2, S3, EBS).
  • Expertise in developing Spark code using Scala and Spark-SQL/Streaming for faster testing and processing of data.
  • Excellent knowledge on Hadoop Architecture and ecosystems such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node, YARN and Map Reduce programming paradigm.
  • Experienced working with Hadoop Big Data technologies (HDFS and MapReduce programs), Hadoop ecosystem components (HBase, Hive, Pig), and the NoSQL database MongoDB.
  • Experienced in using the column-oriented NoSQL database HBase.
  • Extensive experience working with semi-structured and unstructured data by implementing complex MapReduce programs using design patterns.
  • Experienced with major Hadoop ecosystem components including Hive, Sqoop, and Flume, with knowledge of the MapReduce/HDFS framework.
  • Experienced in applying MapReduce design patterns to solve complex MapReduce problems.
  • Excellent knowledge of Talend Big Data integration for meeting business demands with Hadoop and NoSQL.
  • Hands-on programming experience in various technologies like JAVA, J2EE, HTML, XML
  • Excellent working knowledge of Sqoop and Flume for data processing.
  • Expertise in loading data from different data sources (Teradata and DB2) into HDFS using Sqoop and loading it into partitioned Hive tables.
  • Experienced on Hadoop cluster maintenance including data and metadata backups, file system checks, commissioning and decommissioning nodes and upgrades.
  • Extensive experience writing custom MapReduce programs for data processing and UDFs for both Hive and Pig in Java (a minimal MapReduce sketch follows this summary).
  • Strong experience in analyzing large data sets by writing Pig scripts and Hive queries.
  • Extensive experience working with structured data using HiveQL and join operations, writing custom UDFs, and optimizing Hive queries.
  • Experienced in importing and exporting data using Sqoop between HDFS and relational databases.
  • Expertise in job workflow scheduling and monitoring tools like Oozie.
  • Experienced in Apache Flume for collecting, aggregating, and moving huge chunks of data from various sources such as web server and telnet sources.
  • Extensively designed and executed SQL queries in order to ensure data integrity and consistency at the backend.
  • Strong experience in architecting batch-style, large-scale distributed computing applications using tools like Flume, MapReduce, and Hive.
  • Experience using various Hadoop distributions (Cloudera, Hortonworks, MapR, etc.) to fully implement and leverage new Hadoop features.
  • Worked on custom Pig Loaders and Storage classes to work with a variety of data formats such as JSON and compressed CSV.
  • Experienced in working with different scripting technologies like Python and UNIX shell scripts.
  • Strong experience working in UNIX/Linux environments and writing shell scripts.
  • Excellent knowledge of and working experience in Agile and Waterfall methodologies.
  • Expertise in web page development using JSP, HTML, JavaScript, jQuery, and Ajax.
  • Experienced in writing database objects like Stored Procedures, Functions, Triggers, PL/SQL packages and Cursors for Oracle, SQL Server, and MySQL & Sybase databases.
  • Great team player and quick learner with effective communication, motivation, and organizational skills combined with attention to detail and business improvements.
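
To make the custom MapReduce claim above concrete, the following is a minimal, hedged Java sketch of a mapper and reducer that count records per event type; the class name, field layout, and tab-delimited input format are hypothetical illustrations rather than code from any project listed below.

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Hypothetical example: count log records per event type (column 0 of a tab-delimited line).
    public class EventCount {

        public static class EventMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text eventType = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split("\t");
                if (fields.length > 0 && !fields[0].isEmpty()) {
                    eventType.set(fields[0]);
                    context.write(eventType, ONE);   // emit (eventType, 1)
                }
            }
        }

        public static class EventReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();                  // sum the partial counts for this key
                }
                context.write(key, new IntWritable(sum));
            }
        }
    }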

TECHNICAL SKILLS:

Hadoop/Big Data Technologies: HDFS, MapReduce, Sqoop, Flume, Pig, Hive, Oozie, Impala, Zookeeper, Cloudera Manager; NoSQL databases: HBase, Cassandra, MongoDB

Monitoring and Reporting: Tableau, custom shell scripts

Hadoop Distributions: Hortonworks, Cloudera, MapR

Build Tools: Maven, SQL Developer

Programming & Scripting: Java, J2EE, HTML, JavaScript, jQuery, PL/SQL, C, SQL, Shell Scripting, Python

Databases: Oracle, MySQL, MS SQL Server, Teradata

Web Technologies: HTML, XML, JSON, CSS, jQuery, JavaScript, AngularJS

Version Control: SVN, CVS, GIT

Operating Systems: Linux, Unix, Mac OS-X, Windows 8, Windows 7, Windows Server

PROFESSIONAL EXPERIENCE

Sr. Big Data/Hadoop Developer/Engineer

Confidential, Chicago IL

Responsibilities:

  • Defined data architecture (data classification, formats, transformations, sources, targets, and persistence mechanisms) and application architecture (data flow between components, functional dependencies, and middleware layers).
  • Evaluated Hadoop infrastructure requirements and designed/deployed solutions (high availability, big data clusters, elastic load tolerance, etc.).
  • Wrote scripts to distribute queries for performance-test jobs in the Amazon data lake.
  • Created Hive tables, loaded transactional data from Teradata using Sqoop, and worked with highly unstructured and semi-structured data sets of 2 petabytes in size.
  • Developed MapReduce (YARN) jobs for cleaning, accessing, and validating the data, and created Sqoop jobs with incremental load to populate Hive external tables.
  • Developed optimal strategies for distributing the web log data over the cluster, importing and exporting the stored web log data into HDFS and Hive using Sqoop.
  • Installed and configured Apache Hadoop on multiple AWS EC2 nodes and developed Pig Latin scripts to replace the existing legacy process in Hadoop, with the data fed to AWS S3.
  • Responsible for building scalable distributed data solutions using Hadoop Cloudera.
  • Designed and developed automation test scripts using Python, analyzed the SQL scripts, and designed the solution for implementation using PySpark.
  • Analyzed daily application/service monitoring data and forecast data volumes using Spark, Python, Hive, and HDFS.
  • Integrated Apache Storm with Kafka to perform web analytics and to move clickstream data from Kafka to HDFS.
  • Wrote Pig scripts to transform raw data from several data sources into baseline data and implemented Hive GenericUDFs to incorporate business logic into Hive queries (see the UDF sketch after this list).
  • Responsible for developing a data pipeline with Amazon AWS to extract data from web logs and store it in HDFS.
  • Uploaded streaming data from Kafka to HDFS, HBase, and Hive by integrating with Storm.
  • Analyzed the web log data using HiveQL to extract the number of unique visitors per day, page views, visit duration, and the most visited pages on the website.
  • Integrated Tableau with Impala and Hive for visualization reports and dashboards, and created custom queries for Tableau.
  • Supported data analysis projects using Elastic MapReduce on the Amazon Web Services (AWS) cloud and performed export and import of data to and from S3.
  • Implemented elastic Solr search, both batch and real time, using Flume with advanced Morphline indexing and Interceptor concepts for data-quality checks on unstructured data.
  • Worked on MongoDB using CRUD (Create, Read, Update, and Delete) operations, indexing, replication, and sharding features.
  • Designed HBase row keys to store text and JSON as key values, structuring the key so rows can be retrieved and scanned in sorted order (see the row-key sketch after this list).
  • Integrated Oozie with the rest of the Hadoop stack, supporting several types of Hadoop jobs out of the box (such as MapReduce, Pig, Hive, and Sqoop) as well as system-specific jobs (such as Java programs and shell scripts).
  • Designed and implemented static and dynamic partitioning and bucketing in Hive, creating Hive tables and working with them using HiveQL.
  • Developed multiple POCs using PySpark and deployed them on the YARN cluster, compared the performance of Spark with Hive and SQL, and was involved in end-to-end implementation of the ETL logic.
  • Developed syllabus/curriculum data pipelines from syllabus/curriculum web services to HBase and Hive tables.
  • Worked on cluster coordination services through Zookeeper.
  • Monitored workload, job performance, and capacity planning using Cloudera Manager, and worked on custom Talend jobs to ingest, enrich, and distribute data in the Cloudera Hadoop ecosystem.
  • Built applications using Maven and integrated them with CI servers like Jenkins to build jobs.
  • Exported the analyzed data to the RDBMS using Sqoop to generate reports for the BI team, and participated in Agile practices including daily scrum meetings and sprint planning.
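
As referenced in the Pig/Hive bullet above, here is a minimal sketch of a Hive GenericUDF in Java; the function name (normalize_channel) and the business rule it applies are hypothetical, shown only to illustrate the initialize/evaluate/getDisplayString pattern.

    import org.apache.hadoop.hive.ql.exec.Description;
    import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
    import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
    import org.apache.hadoop.hive.ql.metadata.HiveException;
    import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
    import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
    import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

    // Hypothetical GenericUDF: normalizes a raw channel code to a trimmed, upper-cased value
    // so downstream Hive queries can group on it consistently.
    @Description(name = "normalize_channel",
                 value = "_FUNC_(str) - trims and upper-cases a channel code")
    public class NormalizeChannelUDF extends GenericUDF {

        @Override
        public ObjectInspector initialize(ObjectInspector[] arguments) throws UDFArgumentException {
            if (arguments.length != 1) {
                throw new UDFArgumentLengthException("normalize_channel expects exactly one argument");
            }
            // Declare the return type: a plain Java String.
            return PrimitiveObjectInspectorFactory.javaStringObjectInspector;
        }

        @Override
        public Object evaluate(DeferredObject[] arguments) throws HiveException {
            Object raw = arguments[0].get();
            if (raw == null) {
                return null;               // pass NULLs through unchanged
            }
            return raw.toString().trim().toUpperCase();
        }

        @Override
        public String getDisplayString(String[] children) {
            return "normalize_channel(" + children[0] + ")";
        }
    }

Once packaged into a JAR, such a function would typically be registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION before being used in queries.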
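
The HBase row-key bullet above is illustrated by the sketch below, assuming the HBase 2.x Java client; the table name, column family, and the "<entityId>|<reversed, zero-padded timestamp>" key layout are hypothetical choices that keep one entity's rows contiguous and newest-first.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    // Hypothetical example: composite row key "<entityId>|<reversed timestamp>" so that one
    // entity's rows sort together and its newest events come back first on a scan.
    public class RowKeyExample {

        static byte[] rowKey(String entityId, long eventTimeMillis) {
            // Zero-pad the reversed timestamp so lexicographic order matches numeric order.
            String reversedTs = String.format("%019d", Long.MAX_VALUE - eventTimeMillis);
            return Bytes.toBytes(entityId + "|" + reversedTs);
        }

        public static void main(String[] args) throws IOException {
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("events"))) {

                // Store a JSON payload under the composite key.
                Put put = new Put(rowKey("user-123", System.currentTimeMillis()));
                put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("json"),
                        Bytes.toBytes("{\"action\":\"click\"}"));
                table.put(put);

                // Scan only the rows for one entity; they come back in sorted (newest-first) order.
                Scan scan = new Scan()
                        .withStartRow(Bytes.toBytes("user-123|"))
                        .withStopRow(Bytes.toBytes("user-123|~"));
                try (ResultScanner scanner = table.getScanner(scan)) {
                    scanner.forEach(result -> System.out.println(Bytes.toString(result.getRow())));
                }
            }
        }
    }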

Environment: Hadoop, HDFS, MapReduce, Hive, Pig, HBase, Sqoop, Oozie, Maven, Python, Shell Scripting, CDH, MongoDB, Cloudera, AWS (S3, EMR), SQL, Scala, Spark, PySpark, RDBMS, Java, HTML, JavaScript, Web Services, Kafka, Storm, Talend.

Sr. Big Data/Hadoop Developer/Engineer

Confidential, NYC NY

Responsibilities:

  • Responsible for installation and configuration of Hive, Pig, HBase, and Sqoop on the Hadoop cluster, and created Hive tables to store the processed results in a tabular format.
  • Configured Spark Streaming to receive real-time data from Apache Kafka and store the streamed data to HDFS using Scala (see the streaming sketch after this list).
  • Built analytical data pipelines to port data in and out of Hadoop/HDFS from structured and unstructured sources, and designed and implemented the system architecture for an Amazon EC2-based cloud-hosted solution for the client.
  • Processed data into HDFS by developing solutions and analyzed the data using MapReduce, Pig, and Hive to produce summary results from Hadoop for downstream systems.
  • Built servers using AWS: importing volumes, launching EC2 instances, and creating security groups, auto-scaling, load balancers, Route 53, SES, and SNS in the defined virtual private cloud (VPC).
  • Wrote MapReduce code to process and parse data from various sources and store the parsed data in HBase and Hive using HBase-Hive integration.
  • Designed and provisioned the platform architecture to execute Hadoop and Machine Learning use cases under Cloud infrastructure, AWS, EMR, and S3.
  • Worked on writing Spark scripts to improve the performance and optimization of existing algorithms in Hadoop using SparkContext/SparkSession, Spark SQL, DataFrames, and pair RDDs (see the Spark SQL sketch after this list).
  • Streamed AWS log groups into a Lambda function to create ServiceNow incidents.
  • Involved in loading and transforming large sets of structured, semi-structured, and unstructured data, analyzed them by running Hive queries and Pig scripts, and created managed and external tables in Hive, loading data from HDFS.
  • Developed Spark code by using Scala and Spark-SQL for faster processing and testing and performed complex HiveQL queries on Hive tables.
  • Scheduled several time-based Oozie workflows by developing Python scripts.
  • Developed Pig Latin scripts using operators such as LOAD, STORE, DUMP, FILTER, DISTINCT, FOREACH, GENERATE, GROUP, COGROUP, ORDER, LIMIT, UNION, SPLIT to extract data from data files to load into HDFS.
  • Exported data using Sqoop to RDBMS servers and processed that data for ETL operations.
  • Worked on S3 buckets on AWS to store CloudFormation templates and worked on AWS to create EC2 instances.
  • Designed ETL data pipeline flows to ingest data from RDBMS sources into Hadoop using shell scripts, Sqoop packages, and MySQL.
  • Handled end-to-end architecture and implementation of client-server systems using Scala, Akka, Java, JavaScript, and related technologies on Linux.
  • Optimized Hive tables using techniques like partitioning and bucketing to provide better query performance.
  • Used the Oozie workflow engine to manage interdependent Hadoop jobs and to automate several types of Hadoop jobs such as Java MapReduce, Hive, Pig, and Sqoop jobs.
  • Implemented Hadoop on the AWS EC2 system using a few instances for gathering and analyzing log files.
  • Involved in Spark and Spark Streaming, creating RDDs and applying transformations and actions, and created partitioned tables and loaded data using both static and dynamic partitioning.
  • Developed custom Apache Spark programs in Scala to analyze and transform unstructured data.
  • Handled importing of data from various data sources, performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from Oracle into HDFS using Sqoop.
  • Used Kafka for publish-subscribe messaging as a distributed commit log, gaining experience with its speed, scalability, and durability.
  • Followed a Test-Driven Development (TDD) process, with extensive experience in Agile and Scrum methodologies.
  • Implemented a POC to migrate MapReduce jobs into Spark RDD transformations using Scala and scheduled MapReduce jobs in the production environment using the Oozie scheduler.
  • Involved in cluster maintenance, cluster monitoring and troubleshooting, and managing and reviewing data backups and log files.
  • Designed and implemented MapReduce jobs to support distributed processing using Java, Hive, and Apache Pig.
  • Analyzed the Hadoop cluster and different Big Data analytic tools including Pig, Hive, HBase, and Sqoop.
  • Researched, evaluated, and utilized new technologies, tools, and frameworks around the Hadoop ecosystem, and improved performance by tuning Hive and MapReduce.
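
The Spark Streaming bullet above describes a Scala implementation; the sketch below shows the same Kafka-to-HDFS flow using Spark's Java API and the spark-streaming-kafka-0-10 integration, with the broker, topic, consumer group, and output path as hypothetical placeholders.

    import java.util.Arrays;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaDStream;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka010.ConsumerStrategies;
    import org.apache.spark.streaming.kafka010.KafkaUtils;
    import org.apache.spark.streaming.kafka010.LocationStrategies;

    // Hypothetical sketch: consume a Kafka topic and land each micro-batch in HDFS.
    public class KafkaToHdfs {
        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf().setAppName("kafka-to-hdfs");
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(30));

            Map<String, Object> kafkaParams = new HashMap<>();
            kafkaParams.put("bootstrap.servers", "broker1:9092");   // hypothetical broker
            kafkaParams.put("key.deserializer", StringDeserializer.class);
            kafkaParams.put("value.deserializer", StringDeserializer.class);
            kafkaParams.put("group.id", "weblog-ingest");            // hypothetical group
            kafkaParams.put("auto.offset.reset", "latest");

            Collection<String> topics = Arrays.asList("weblogs");    // hypothetical topic

            JavaInputDStream<ConsumerRecord<String, String>> stream =
                    KafkaUtils.createDirectStream(
                            jssc,
                            LocationStrategies.PreferConsistent(),
                            ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));

            // Keep only the record values (the raw log lines).
            JavaDStream<String> lines = stream.map(ConsumerRecord::value);

            // Write each non-empty micro-batch to a time-stamped HDFS directory.
            lines.foreachRDD((rdd, time) -> {
                if (!rdd.isEmpty()) {
                    rdd.saveAsTextFile("hdfs:///data/weblogs/batch-" + time.milliseconds());
                }
            });

            jssc.start();
            jssc.awaitTermination();
        }
    }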
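
For the Spark SQL/DataFrame bullet above, here is a hedged Java sketch of a batch job that summarizes web logs with Spark SQL; the input path, view name, and column names (event_time, visitor_id) are assumptions for illustration only.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    // Hypothetical sketch: express a slow aggregation as a Spark SQL query over raw logs.
    public class LogSummary {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("log-summary")
                    .enableHiveSupport()   // assumes a Hive metastore is available; drop otherwise
                    .getOrCreate();

            // Read raw JSON logs from HDFS (path is hypothetical).
            Dataset<Row> logs = spark.read().json("hdfs:///data/raw/weblogs/");
            logs.createOrReplaceTempView("weblogs");

            // Daily unique visitors and page views, expressed in Spark SQL.
            Dataset<Row> summary = spark.sql(
                    "SELECT to_date(event_time) AS day, " +
                    "       COUNT(DISTINCT visitor_id) AS unique_visitors, " +
                    "       COUNT(*) AS page_views " +
                    "FROM weblogs GROUP BY to_date(event_time)");

            // Persist the summarized result as Parquet for downstream reporting.
            summary.write().mode("overwrite").parquet("hdfs:///data/summary/weblogs_daily/");

            spark.stop();
        }
    }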

Environment: HDFS, MapReduce, Hive, Sqoop, Pig, Flume, Vertica, Oozie Scheduler, Java, Shell Scripts, Teradata, Oracle, HBase, MongoDB, Cassandra, Cloudera, AWS, JavaScript, JSP, Kafka, Spark, Scala, ETL, Python.

Confidential - Minneapolis, MN

Hadoop Developer

Responsibilities:

  • Responsible for managing, analyzing, and transforming petabytes of data, as well as quick validation checks on FTP file arrivals from the S3 bucket to HDFS.
  • Responsible for analyzing large data sets and derive customer usage patterns by developing new MapReduce programs.
  • Involved in creating Hive tables and loading data incrementally into the tables using dynamic partitioning, and worked on Avro files and JSON records.
  • Involved in using Pig for data cleansing and developed Pig Latin scripts to extract data from web server output files and load it into HDFS.
  • Worked on distributed frameworks such as Apache Spark and Presto in Amazon EMR and Redshift, and interacted with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB.
  • Worked with application teams to install operating system, Hadoop updates, patches, version upgrades as required.
  • Developed customized Hive UDFs and UDAFs in Java, set up JDBC connectivity with Hive, and developed and executed Pig scripts and Pig UDFs (see the Hive JDBC sketch after this list).
  • Worked on Hive by creating external and internal tables, loading them with data, and writing Hive queries.
  • Performed offline analysis on HDFS and sent the results to MongoDB databases to update the information in existing tables; the move from Hadoop to MongoDB was done using MapReduce and Hive/Pig scripts through the Mongo-Hadoop connectors.
  • Involved in developing and using UDTFs and UDAFs for decoding log record fields and conversions, generating minute buckets for specified time intervals, and extracting JSON fields.
  • Responsible for debugging and optimizing Hive scripts, implementing duplicate-handling logic in Hive using a rank key function (UDF), and developing Pig and Hive UDFs to analyze complex data for specific user behavior.
  • Experienced in writing Hive validation scripts used in the validation framework (for daily analysis through graphs presented to business users).
  • Developed workflow in Oozie to automate the tasks of loading data into HDFS and pre-processing with Pig and Hive.
  • Involved in Cassandra database schema design and pushed data to Cassandra databases using the bulk-load utility.
  • Responsible for creating dashboards on Tableau Server and generated reports for Hive tables in different scenarios using Tableau.
  • Responsible for scheduling using Active Batch jobs and cron jobs, and involved in JAR builds triggered by commits to GitHub using Jenkins.
  • Explored new data-tagging tools such as Tealium (POC report).
  • Actively updated upper management with daily updates on the progress of the project, including the classification levels achieved on the data.
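
As referenced in the UDF/JDBC bullet above, the sketch below shows a minimal HiveServer2 JDBC query from Java; the connection URL, credentials, table, and partition column are hypothetical, and the Hive JDBC driver JAR is assumed to be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Hypothetical sketch: query a Hive table over HiveServer2 JDBC.
    public class HiveJdbcQuery {
        public static void main(String[] args) throws Exception {
            // Explicit driver load, in case JDBC 4 auto-registration is unavailable.
            Class.forName("org.apache.hive.jdbc.HiveDriver");

            // Host, port, database, and table names are placeholders.
            String url = "jdbc:hive2://hiveserver2-host:10000/default";

            try (Connection conn = DriverManager.getConnection(url, "hive_user", "");
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT page, COUNT(*) AS views FROM weblogs " +
                         "WHERE dt = ? GROUP BY page ORDER BY views DESC LIMIT 10")) {

                stmt.setString(1, "2016-05-01");   // hypothetical partition value
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("page") + "\t" + rs.getLong("views"));
                    }
                }
            }
        }
    }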

Environment: Hadoop, Map Reduce, HDFS, Pig, Hive, HBase, Zookeeper, Oozie, Impala, Cassandra, Java (jdk1.6), Cloudera, Oracle 11g/10g, Windows NT, UNIX Shell Scripting, Tableau, Tealium, AWS, S3, SQL, Python.

Confidential, Charlotte, NC

Sr. Java Developer

Responsibilities:

  • Developed detail design document based on design discussions and involved in designing the database tables and java classes used in the application.
  • Involved in development, Unit testing and system integration testing of the travel network builder side of application.
  • Involved in the design, development, and building of the travel network file system stored on NAS drives.
  • Set up the Linux environment to interact with the Route Smart library (.so) file and NAS drive file operations using JNI.
  • Implemented and configured Hudson as the Continuous Integration server and Sonar for maintaining code quality and removing redundant code.
  • Extensively worked with Hibernate Query Language (HQL) to store and retrieve data from the Oracle database (see the HQL sketch after this list).
  • Developed Java web applications using JSP and Servlets, Struts, Hibernate, Spring, REST web services, and SOAP.
  • Provided support in all phases of the software development life cycle (SDLC), quality management systems, and project life cycle processes, utilizing databases such as MySQL and following HTTP and WSDL standards to design REST/SOAP-based web APIs using XML, JSON, HTML, and DOM technologies.
  • Involved in migrating the existing distributed JSP framework to the Struts framework, and designed and researched the Struts MVC framework.
  • Designed Graphical User Interface (GUI) applications using HTML, JSP, JavaScript (jQuery), CSS, and AJAX.
  • Worked with Route Smart C++ code to interact with the Java application using SWIG and the Java Native Interface.
  • Developed the user interface for requesting a travel network build using JSP and Servlets.
  • Built business logic so that users can specify which version of the travel network files to use for the solve process.
  • Used Spring Data Access Objects to access data through the data source and built an independent property subsystem to ensure that each request always picks up the latest set of properties.
  • Implemented a thread monitor system to monitor threads and used JUnit for unit testing of the development modules.
  • Wrote SQL queries and procedures for the application, interacted with third-party ESRI functions to retrieve map data, and handled building and deployment of JAR, WAR, and EAR files on dev and QA servers.
  • Provided bug fixing (Log4j for logging) and testing support after development, and prepared requirements and research for moving the map data using the Hadoop framework for future use.
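
The HQL bullet above is illustrated by the following sketch, written against the classic Hibernate 3-style API; the NetworkBuild entity, its fields, and the hibernate.cfg.xml mapping are hypothetical stand-ins for the real travel-network model.

    import java.util.Date;
    import java.util.List;

    import javax.persistence.Entity;
    import javax.persistence.Id;

    import org.hibernate.Query;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    // Hypothetical DAO using HQL to fetch travel-network build records.
    public class TravelNetworkDao {

        private final SessionFactory sessionFactory =
                new Configuration().configure().buildSessionFactory();   // reads hibernate.cfg.xml

        /** Returns the builds for one region, newest first (entity and fields are illustrative). */
        @SuppressWarnings("unchecked")
        public List<NetworkBuild> findBuildsByRegion(String region) {
            Session session = sessionFactory.openSession();
            try {
                Query query = session.createQuery(
                        "from NetworkBuild b where b.region = :region order by b.createdOn desc");
                query.setParameter("region", region);
                return (List<NetworkBuild>) query.list();
            } finally {
                session.close();
            }
        }
    }

    // Hypothetical mapped entity standing in for the real travel-network model.
    @Entity
    class NetworkBuild {
        @Id private Long id;
        private String region;
        private Date createdOn;
        // getters and setters omitted for brevity
    }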

Environment: Java 1.6.21, J2EE, Oracle 10g, Log4j 1.17, Windows 7 and Red Hat Linux, Subversion, Spring 3.1.0, ICEfaces 3, ESRI, WebLogic 10.3.5, Eclipse Juno, JUnit 4.8.2, Maven 3.0.3, Hudson 3.0.0 and Sonar 3.0.0, HTML, CSS, JSON, JSP, jQuery, JavaScript.

Software Programmer

Confidential

Responsibilities:

  • Involved in the analysis & design of the application using Rational Rose and developed the various action classes to handle the requests and responses.
  • Designed and created Java Objects, JSP pages, JSF, JavaBeans and Servlets to achieve various business functionalities and created validation methods using JavaScript and Backing Beans.
  • Involved in writing client side validations using JavaScript, CSS.
  • Involved in the design of the Referential Data Service module to interface with various databases using JDBC (see the JDBC sketch after this list).
  • Used Hibernate framework to persist the employee work hours to the database.
  • Developed classes to interface with the underlying web services layer, prepared documentation, and participated in preparing the user manual for the application.
  • Prepared Use Cases, Business Process Models and Data flow diagrams, User Interface models.
  • Performed back-end, server-side coding and development using Java collections (Set, List, Map), exception handling, Vaadin, Spring with dependency injection, the Struts framework, Hibernate, Servlets, Actions, ActionForms, JavaBeans, etc.
  • Responsible for enhancing the UI using HTML, JavaScript, XML, JSP, and CSS per the requirements, and providing client-side validations using jQuery.
  • Wrote application-level code to interact with APIs and web services using AJAX, JSON, and XML.
  • Wrote numerous JSPs for maintenance and enhancement of the application, working on the front end using Servlets and JSP and on the back end using Hibernate.
  • Gathered & analyzed requirements for EAuto, designed process flow diagrams.
  • Defined business processes related to the project and provided technical direction to development workgroup.
  • Analyzed the legacy system and the Financial Data Warehouse, and participated in database design sessions and database normalization meetings.
  • Handled change request management and defect management, managed UAT testing, and developed test strategies and test plans, reviewing QA test plans for appropriate test coverage.
  • Developed JSPs, action classes, form beans, response beans, and EJBs, and extensively used XML to code configuration files.
  • Developed PL/SQL stored procedures, triggers and performed functional, integration, system and validation testing.
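
As referenced in the JDBC bullet above, here is a minimal sketch of the kind of JDBC lookup a Referential Data Service module might perform; the DataSource wiring, table, and column names are hypothetical.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    import javax.sql.DataSource;

    // Hypothetical sketch: a small JDBC-based lookup for reference data.
    public class ReferenceCodeDao {

        private final DataSource dataSource;

        public ReferenceCodeDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        /** Returns the active codes for one reference domain (table/column names are illustrative). */
        public List<String> findActiveCodes(String domain) throws SQLException {
            String sql = "SELECT code FROM reference_codes WHERE domain = ? AND active = 'Y' ORDER BY code";
            List<String> codes = new ArrayList<>();
            try (Connection conn = dataSource.getConnection();
                 PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, domain);          // bind the domain parameter
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        codes.add(rs.getString("code"));
                    }
                }
            }
            return codes;
        }
    }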

Environment: Java, J2EE, JSP, JCL, DB2, Struts, SQL, PL/SQL, Eclipse, Oracle, Windows XP, HTML, CSS, JavaScript, and XML.
