
Big Data/Hadoop Developer Resume


New York

SUMMARY

  • 7+ years of overall IT experience in a variety of industries, which includes hands on experience in Big Data technologies.
  • 3+ years of comprehensive experience as a Hadoop Developer.
  • Passionate about working in Big Data and Analytics environments.
  • Experience in installation, configuration, management and deployment of Hadoop clusters, HDFS, MapReduce, Pig, Hive, Sqoop, Flume, Oozie, HBase and ZooKeeper.
  • Experience in Extraction, Transformation and Loading (ETL) of data from multiple sources like flat files, XML files, and databases. Used Informatica for ETL processing based on business needs.
  • Well versed in installation, configuration, supporting and managing of Big Data and underlying infrastructure of Hadoop Cluster.
  • Expertise in writing Hadoop Jobs for analyzing data using Hive and Pig.
  • Experience in working with MapReduce programs using Apache Hadoop.
  • Good understanding of Zookeeper and Kafka for monitoring and managing Hadoop jobs.
  • Experience in importing and exporting data using Sqoop from HDFS to Relational Database Systems.
  • Good understanding of Data Mining and Machine Learning techniques.
  • Good knowledge on Spark and Scala.
  • Experience in analyzing data using Hive QL, Pig Latin, and custom MapReduce programs in Java.
  • Extensive experience with SQL, PL/SQL and database concepts.
  • Knowledge of NoSQL databases such as HBase and MongoDB.
  • Used NoSQL technologies like HBase, Cassandra and Neo4j for data extraction and for storing huge volumes of data. Also experienced in the Data Warehouse life cycle, its methodologies, and its tools for reporting and data analysis.
  • Knowledge of job workflow scheduling and monitoring tools like Oozie and ZooKeeper.
  • Experience with databases like DB2, Oracle 11g, MySQL, SQL Server and MS Access.
  • Strong programming skills in Core Java and J2EE technologies.
  • Experienced in Web Services approach for Service Oriented Architecture (SOA).
  • Extensive use of Open Source Software such as Web/Application Servers like Apache Tomcat 6.0 and Eclipse 3.x IDE.
  • Experience in communicating with team members and discussing designs and solutions to problems.

TECHNICAL SKILLS

Programming Languages: C, Java and Python

Big Data Ecosystem: HDFS, MapReduce, HBase, Pig, Hive, Sqoop, Oozie, Spark, Apache Kafka

Web technologies: Core Java, JSP, JDBC, Servlets

Database Systems: Oracle 11g/10g, MS - SQL Server, MS-Access

Application Servers: WebSphere 5.1, WebLogic 9.1/9.2

Frameworks: Struts, Spring, Hibernate

Operating Systems: Linux, Unix, Windows 7/8

Programming Tools: Eclipse 2.1/3.7, NetBeans

PROFESSIONAL EXPERIENCE

Confidential, New York

BigData/Hadoop Developer

Responsibilities:

  • Analyzed large data sets by running Hive queries and Pig scripts.
  • Worked with the Data Science team to gather requirements for various data mining projects.
  • Involved in creating Hive tables and loading and analyzing data using hive queries.
  • Developed simple to complex MapReduce jobs using Hive and Pig.
  • Involved in running Hadoop jobs for processing millions of records of text data.
  • Worked with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.
  • Used Spark to hold intermediate results in memory rather than writing them to disk when working on the same dataset multiple times (see the sketch after this section).
  • Used MongoDB extensively for data extraction and for storing huge amounts of data.
  • Developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
  • Involved in loading data from LINUX file system to HDFS.
  • Responsible for managing data from multiple sources.
  • Used Python to implement the partitioner and record writer for all input/output.
  • Used Talend to develop, integrate, and manage big data, removing the need for users to learn, write, or maintain complicated Hadoop code.
  • Installed and configured Hive and wrote Hive UDFs.
  • Extracted files from CouchDB through Sqoop, placed them in HDFS, and processed them.
  • Experienced in running Hadoop streaming jobs to process terabytes of XML-format data.
  • Loaded and transformed large sets of structured, semi-structured and unstructured data.
  • Responsible for managing data coming from different sources.
  • Translated ETL jobs to MapReduce jobs using Talend.
  • Maintained and developed ETL code written in Java that pulls data from disparate internal and external sources.
  • Assisted in exporting analyzed data to relational databases using Sqoop.
  • Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig scripts.

Environment: Hadoop, HDFS, Pig, Hive, MapReduce, Sqoop, LINUX, Spark, ETL, Python and Big Data.
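
As a rough illustration of the Spark in-memory reuse pattern described above, the sketch below caches a dataset so that repeated actions run against the cached partitions instead of re-reading HDFS. The application name, input path, and filter logic are illustrative assumptions, not project specifics.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class CachedDatasetJob {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("CachedDatasetJob");
            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                // Hypothetical HDFS path; the real input location is not given above.
                JavaRDD<String> records = sc.textFile("hdfs:///data/events/*.txt")
                                            .filter(line -> !line.isEmpty())
                                            .cache();   // keep intermediate results in memory

                // Because the RDD is cached, both actions below reuse the in-memory
                // partitions instead of re-reading and re-filtering the HDFS files.
                long total = records.count();
                long errors = records.filter(line -> line.contains("ERROR")).count();

                System.out.println("total=" + total + ", errors=" + errors);
            }
        }
    }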

Confidential, NJ

BigData/Hadoop Developer

Responsibilities:

  • Installed/Configured/Maintained Apache Hadoop clusters for application development and Hadoop tools like Hive, Pig, HBase, Zookeeper and Sqoop.
  • Involved in analyzing system failures, identifying root causes, and recommending courses of action.
  • Wrote shell scripts to monitor the health of Hadoop daemon services and respond accordingly to any warning or failure conditions.
  • Managed and scheduled jobs on a Hadoop cluster.
  • Designed a data warehouse using Hive.
  • Developed Pig Latin scripts to extract the data from the web server output files to load into HDFS.
  • Developed Pig UDFs to pre-process the data for analysis.
  • Worked on Talend for complicated Spark code.
  • Developed Hive queries for the analysts.
  • Developed workflow in Oozie to automate the tasks of loading the data into HDFS and pre-processing with Pig.
  • Worked with Python to improve performance.
  • Provided cluster coordination services through ZooKeeper.
  • Collected log data from web servers and integrated it into HDFS using Flume.
  • Implemented the Fair Scheduler on the JobTracker to share cluster resources among the MapReduce jobs submitted by users.
  • Responsible for managing data coming from different sources and involved in loading data from the UNIX file system into HDFS.
  • Managed and reviewed Hadoop log files.
  • Used Spark to analyze point-of-sale data and coupon usage.
  • Used Apache Kafka for large-scale data processing, handling real-time analytics and streaming of data (see the sketch after this section).
  • Used Scala to integrate Spark with Hadoop.
  • Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports.
  • Worked with highly engaged Informatics, Scientific Information Management and enterprise IT teams.

Environment: Hadoop, HBase, HDFS, Hive, Java (JDK 1.6), Pig, ZooKeeper, Oozie, Kafka, Spark, Flume.
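
A minimal sketch of how web-server log events can be published to Kafka for real-time consumption, in the spirit of the Kafka bullet above. The broker address, topic name, and sample log line are placeholders rather than values from this engagement.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class LogEventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker address and serializers; "localhost:9092" is a placeholder.
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                // Each web-server log line becomes one message on the "weblogs" topic,
                // from which a streaming job can consume it for real-time analytics.
                String logLine = "127.0.0.1 - - [10/Oct/2023:13:55:36] \"GET /index.html HTTP/1.1\" 200";
                producer.send(new ProducerRecord<>("weblogs", logLine));
            }
        }
    }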

Confidential - Carson City, NV

BigData/Hadoop Developer

Responsibilities:

  • Responsible for building scalable distributed data solutions using Hadoop.
  • Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and troubleshooting, and managing and reviewing data backups and log files.
  • Analyzed data using Hadoop components Hive and Pig.
  • Created partitioned tables in Hive.
  • Worked hands on with ETL process.
  • Experience in Extraction, Transformation, and Loading (ETL) of data from multiple sources like Flat files, XML files, and Databases.
  • Used Informatica for ETL processing based on business needs and extensively used Oozie workflow engine to run multiple Hive and Pig jobs.
  • Involved in loading data from the Linux file system to HDFS using Sqoop and exported the analyzed data to relational databases using Sqoop for visualization and to generate reports for the Business Intelligence (BI) team.
  • Responsible for running Hadoop streaming jobs to process terabytes of XML data.
  • Loaded and transformed large sets of structured, semi-structured and unstructured data using Hadoop/Big Data concepts.
  • Involved in loading data from UNIX file system to HDFS.
  • Installed and configured Flume, Hive, Pig, Sqoop and Oozie on the Hadoop cluster.
  • Responsible for creating Hive tables, loading data, and writing Hive queries.
  • Handled importing data from various data sources, performed transformations using Hive and MapReduce, and loaded the data into HDFS.
  • Developed MapReduce programs on log data to transform it into a structured form and derive user location, age group, and time spent (see the sketch after this section).
  • Extracted data from Teradata into HDFS using Sqoop.
  • Exported the patterns analyzed back to Teradata using Sqoop.
  • Installed Oozie workflow engine to run multiple Hive and Pig jobs which run independently with time and data availability.
  • Managed and reviewed Hadoop log files.

Environment: Hadoop Cluster, HDFS, Hive, Pig, Sqoop, Linux, Hadoop MapReduce, HBase, ETL and UNIX Shell Scripting.
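
The sketch below shows one plausible shape of the log-structuring MapReduce job mentioned above, assuming a tab-separated input of userId, location, ageGroup, and secondsSpent; the actual log schema from the project is not specified, so the field layout is an assumption.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class UserLocationCount {

        public static class LogMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text location = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Assumed layout: userId \t location \t ageGroup \t secondsSpent
                String[] fields = value.toString().split("\t");
                if (fields.length >= 2) {               // skip malformed records
                    location.set(fields[1]);            // emit (location, 1)
                    context.write(location, ONE);
                }
            }
        }

        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum)); // users per location
            }
        }
    }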

Confidential, OH

J2EE Developer

Responsibilities:

  • Implemented connectivity with MySQL and Oracle databases.
  • Involved in writing the database integration code.
  • Developed various user controls for use across the application.
  • Worked on the presentation layer using Spring, JSPs and Servlets.
  • Developed parser classes to parse the data received from the front tier and pass it to the back end.
  • Developed Servlets in the web application.
  • Worked with J2EE and core Java concepts like OOP, GUI and networking in Java.
  • Created quality, working J2EE code to implement use cases within design, schedule, and cost constraints.
  • Developed web pages and applets.
  • Responsible for Oracle 10g logical and physical database design, implementation, and maintenance.
  • Implemented Service Oriented Architecture (SOA) using JMS for sending and receiving messages while creating web services.
  • Used the Hibernate framework to persist employee work hours to the database (see the sketch after this section).
  • Used the Hibernate ORM framework with the Spring framework for data persistence and transaction management.
  • Developed controllers and actions encapsulating the business logic.
  • Developed classes to interface with underlying web services layer.
  • Prepared the design document based on requirements and sent project status reports on a weekly basis.

Environment: Java, J2EE 1.4, JSP, Servlets 3.0, MySQL 5.1, Oracle 10g, Apache Tomcat 6.0.
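
A small sketch of the Hibernate persistence described above: a hypothetical WorkHours entity saved inside a transaction. The table, column, and field names are assumptions, and hibernate.cfg.xml is assumed to hold the actual MySQL/Oracle connection settings and class mapping.

    import java.util.Date;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Table;
    import javax.persistence.Temporal;
    import javax.persistence.TemporalType;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;
    import org.hibernate.cfg.Configuration;

    @Entity
    @Table(name = "EMPLOYEE_WORK_HOURS")   // hypothetical table name
    public class WorkHours {

        @Id @GeneratedValue
        private Long id;

        private Long employeeId;

        @Temporal(TemporalType.DATE)
        private Date workDate;

        private double hoursWorked;

        public WorkHours() { }

        public WorkHours(Long employeeId, Date workDate, double hoursWorked) {
            this.employeeId = employeeId;
            this.workDate = workDate;
            this.hoursWorked = hoursWorked;
        }

        /** Persists one work-hours record inside a transaction. */
        public static void save(SessionFactory factory, WorkHours hours) {
            Session session = factory.openSession();
            try {
                Transaction tx = session.beginTransaction();
                session.save(hours);
                tx.commit();
            } finally {
                session.close();
            }
        }

        public static void main(String[] args) {
            // hibernate.cfg.xml is assumed to define the database connection and map this class.
            SessionFactory factory = new Configuration().configure().buildSessionFactory();
            save(factory, new WorkHours(42L, new Date(), 7.5));
            factory.close();
        }
    }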

Confidential 

Junior JAVA Developer

Responsibilities:

  • Involved in the analysis, design, implementation, and testing of the project.
  • Developed this application based on MVC Architecture using the open-source Spring framework.
  • Implemented the presentation layer with HTML and JavaScript.
  • Developed web components using JSP, Servlets and JDBC.
  • Designed tables and indexes.
  • Developed server-side validations using the Spring Validation Framework (see the sketch after this section).
  • Developed client-side validations using JavaScript.
  • Deployed the applications on WebSphere Application Server.
  • Wrote complex SQL queries and stored procedures.
  • Involved in building templates and screens in HTML and JavaScript.
  • Involved in fixing bugs and unit testing with test cases using JUnit.
  • Actively involved in system testing and implementing the service layer using Spring IoC.
  • Extensively used Spring framework AOP features.
  • Monitored logs using Log4j.
  • Prepared the installation, customer guide, and configuration documents, which were delivered to the customer along with the product.
  • Provided technical support for production environments, resolving issues, analyzing defects, and providing and implementing solutions.

Environment: Java, J2EE 1.4, Servlets 3.0, JDBC, JavaScript, Spring 2.0, MySQL 5.0, JUnit, Eclipse IDE 2.1.
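
The sketch below illustrates a server-side validator in the Spring Validation Framework style referenced above; the UserForm bean, its fields, and the message codes are hypothetical examples rather than artifacts from this project.

    import org.springframework.validation.Errors;
    import org.springframework.validation.ValidationUtils;
    import org.springframework.validation.Validator;

    public class UserFormValidator implements Validator {

        /** Hypothetical form-backing bean. */
        public static class UserForm {
            private String username;
            private String email;
            public String getUsername() { return username; }
            public void setUsername(String username) { this.username = username; }
            public String getEmail() { return email; }
            public void setEmail(String email) { this.email = email; }
        }

        @Override
        public boolean supports(Class<?> clazz) {
            return UserForm.class.isAssignableFrom(clazz);
        }

        @Override
        public void validate(Object target, Errors errors) {
            // Reject empty required fields; message codes resolve against the message bundle.
            ValidationUtils.rejectIfEmptyOrWhitespace(errors, "username", "username.required");
            ValidationUtils.rejectIfEmptyOrWhitespace(errors, "email", "email.required");

            UserForm form = (UserForm) target;
            if (form.getEmail() != null && !form.getEmail().contains("@")) {
                errors.rejectValue("email", "email.invalid");
            }
        }
    }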
