Hadoop Developer Resume
San Rafael, CA
SUMMARY:
- Over 7 years of professional experience in Big Data systems, Java, Android, and web UI development for finance and university applications, spanning object-oriented programming, web and mobile development, and Hadoop technologies.
- Experienced in implementing solutions using Hadoop ecosystem tools such as Sqoop, MapReduce, Hive, Pig, Oozie, Flume, Hue, Spark, and Kafka.
- Hands-on experience developing Java/J2EE applications using REST and SOAP web services.
- Worked with cloud platforms and Hadoop distributions including Amazon Web Services (AWS), Google Cloud Platform (GCP), Cloudera Enterprise (CDH), and Hortonworks Data Platform (HDP).
- Hands-on experience importing and exporting data between HDFS/S3 and RDBMS using Sqoop.
- Hands-on experience cleansing semi-structured and unstructured data using SerDes and UDFs, and performing aggregations with Hive and Pig.
- Hands-on experience developing workflows using shell scripting and Oozie.
- Knowledgeable in HDFS architecture.
- Hands-on experience developing websites and apps using HTML, CSS, JavaScript, and jQuery.
- Knowledgeable in NoSQL and columnar data stores such as HBase and Amazon Redshift.
- Excellent communication skills and flexibility in adapting to evolving technologies.
TECHNICAL SKILLS:
Programming Skills: Java, C/C++, Scala, Python
Big Data Ecosystem: Hadoop, HDFS, MapReduce (MRv1, MRv2/YARN), Spark, Spark Streaming, Spark SQL, Hive, Pig, Sqoop, Oozie, Impala, Kafka, Flume.
Databases: SQL, Oracle, DB2, Teradata
Hadoop Distributions: Amazon AMI, Cloudera (CDH3, CDH4, CDH5), Hortonworks Data Platform (HDP2), Apache Open Source.
NoSQL / Columnar Databases: HBase, Cassandra, Amazon Redshift.
IDE: Eclipse, NetBeans.
Build Tools: Ant, Maven, Make
Java Frameworks: Spring, Hibernate, Struts
Web Technologies & Services: JSP, Servlets, XML, SOAP, REST, WSDL.
Web Development Technologies: HTML, CSS, JavaScript, jQuery, Bootstrap
Operating Systems: Linux (AMI, CentOS, Ubuntu, Fedora, Red Hat), Windows, Mac OS.
PROFESSIONAL EXPERIENCE:
Confidential, San Rafael, CA
Hadoop Developer
Responsibilities:
- Collected source data using AWS Lambda functions written in Node.js and stored the data in Amazon S3 buckets.
- Integrated AWS services such as S3, EMR, and CloudWatch as required by the project.
- Ingested data using Sqoop, stored it in S3 buckets, and created Hive tables on top of it.
- Cleaned and processed data from multiple sources using Apache Hive.
- Developed Hive UDFs such as date formatters and data-masking functions (illustrative sketch at the end of this role).
- Performed POCs on Hive 0.14 ACID transactional tables and integrated them into existing requirements using CSV and ORC file formats.
- Researched and implemented data deduplication techniques with Hive and Impala.
- Implemented Redshift queries used for reporting.
- Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs.
- Configured and tuned Spark jobs written in Scala to optimize data processing.
- Implemented Spark Streaming to continuously clean, aggregate, and push data to downstream data stores.
- Implemented workflows using Shell scripting and Oozie.
- Created Splunk dashboards for log management and monitoring of daily workflows.
- Implemented Oozie workflows, coordinators, and bundles as per requirements.
- Monitored and re-ran Oozie workflow jobs from the Hue dashboard.
- Wrote automated test cases and regression test cases.
- Worked on AWS CloudFormation scripts and automated the process of launching clusters.
- Developed bootstrap scripts for cluster creation and health-check monitoring.
- Involved in data integration with Informatica, RDBMS, AWS S3, Hive, Redshift and Denodo.
- Upgraded clusters across all environments to newer versions of Amazon EMR.
- Conducted knowledge transfer (KT) sessions for the production and admin teams before releases.
Environment: Amazon AMI, EMR 4.2, Sqoop 1.4.6, Hive 0.13, Hive 1.0.0, Impala, Redshift, Oozie, Presto 0.125, Hue, Shell scripting, Java, Kafka, SQL, PL/SQL, MDS, Lambda, Splunk, Denodo, Git
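Illustrative sketch (not the actual project code): a minimal Hive date-formatter UDF of the kind referenced above, written against Hive's classic UDF API. The class name, input/output date patterns, and null handling are assumptions.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    import java.text.ParseException;
    import java.text.SimpleDateFormat;

    // Hypothetical date-formatter UDF: normalizes "MM/dd/yyyy" strings to "yyyy-MM-dd".
    public class DateFormatterUDF extends UDF {
        private final SimpleDateFormat in = new SimpleDateFormat("MM/dd/yyyy");
        private final SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd");

        public Text evaluate(Text value) {
            if (value == null) {
                return null;
            }
            try {
                return new Text(out.format(in.parse(value.toString())));
            } catch (ParseException e) {
                return null; // unparseable dates are returned as NULL
            }
        }
    }

A UDF like this would typically be packaged in a JAR, added with ADD JAR, and registered with CREATE TEMPORARY FUNCTION before being called from HiveQL.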
Confidential, Detroit, MI
Big Data Developer
Responsibilities:
- Imported the data from MySQL to HDFS using Sqoop.
- Involved in ingesting real-time server log data into HDFS using Flume.
- Involved in implementing Hive tables as per business requirement with appropriate partitions.
- Developed UDFs in Java for use in Hive and Pig scripts.
- Involved in production cluster setup, administration, maintenance, monitoring, and support.
- Wrote shell scripts to automate Hive, Pig, Sqoop, and MapReduce jobs via the cron scheduler.
- Exported the analyzed data to MySQL using Sqoop for visualization and to generate reports for the business intelligence team.
- Worked with the QA team to set up environments for running test cases using Sqoop, Hive, and Pig.
- Performed POCs on real-time streaming using Spark Streaming and Kafka (illustrative sketch at the end of this role).
Environment: CDH 5.3, Hadoop 2.x, Sqoop, Flume, Hive 0.10, Pig 0.11, Shell Scripting, Java, MySQL, Spark, Kafka.
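Illustrative sketch (not the actual POC code): a minimal Spark Streaming consumer for a Kafka topic, using the receiver-based KafkaUtils.createStream API from that Spark/CDH generation. The ZooKeeper quorum, consumer group, topic name, and batch interval are placeholder assumptions.

    import java.util.Collections;

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    // Hypothetical POC: count server-log lines per 10-second batch from a Kafka topic.
    public class LogStreamPoc {
        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf().setAppName("log-stream-poc");
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

            // "zk-host:2181", "poc-group", and "server-logs" are placeholder values.
            JavaPairDStream<String, String> messages = KafkaUtils.createStream(
                    jssc, "zk-host:2181", "poc-group",
                    Collections.singletonMap("server-logs", 1));

            messages.count().print(); // prints one count per batch interval

            jssc.start();
            jssc.awaitTermination();
        }
    }

A real POC would replace the count with the cleansing/aggregation logic and write the results to the target data store.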
Confidential, Dearborn, MI
Graduate Assistant
Responsibilities:
- Worked on a school project that involved creating a web/mobile app for guiding university students with a campus map.
- Implemented custom overlays and icons to display university landmarks.
- Worked on showing students their current location on the map.
- Assisted the professor in preparing content and examples for class tutorials covering mobile and web development.
- Assisted undergraduate and high-school students with their course projects.
Confidential
Software Developer
Responsibilities:
- Designed and developed Web Services using Java/J2EE in WebLogic environment.
- Implemented Java objects and HTML views to keep the application well structured and to interact with controllers that fetch data from the Oracle database.
- Developed web pages using Java Servlets, CSS, JavaScript, and HTML.
- Wrote Ant scripts to build and deploy the application.
- Developed the application using Spring Web MVC framework.
- Worked with Spring Configuration to add new content to the website.
- Implemented business logic and generated WSDL for the web services using SOAP.
- Worked on Spring DAO module and ORM using Hibernate.
- Used HibernateTemplate and HibernateDaoSupport for Spring-Hibernate communication (illustrative sketch at the end of this role).
- Configured association mappings in Hibernate.
- Involved in working with features like Posting, Reporting, View list, Wish list, and Image management in the application.
- Worked closely with project management to understand and analyze the search techniques.
- Collaborated with business team to fix defects.
Environment: Java 1.6, J2EE, Eclipse, WebLogic, Spring, Hibernate, Oracle, XML, HTML, Ant.
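Illustrative sketch (not the actual application code): the HibernateDaoSupport/HibernateTemplate pattern referenced above, as commonly wired with Spring and Hibernate 3. The Posting entity, its fields, and the HQL query are assumptions for illustration.

    import java.util.List;

    import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

    // Hypothetical entity; the real Hibernate mapping (hbm.xml or annotations) is assumed to exist.
    class Posting {
        private Long id;
        private String author;
        // getters and setters omitted for brevity
    }

    // Hypothetical DAO built on Spring's HibernateDaoSupport; the SessionFactory is injected via Spring configuration.
    public class PostingDao extends HibernateDaoSupport {

        public void save(Posting posting) {
            // HibernateTemplate handles session handling and exception translation.
            getHibernateTemplate().saveOrUpdate(posting);
        }

        @SuppressWarnings("unchecked")
        public List<Posting> findByAuthor(String author) {
            return (List<Posting>) getHibernateTemplate()
                    .find("from Posting p where p.author = ?", author);
        }
    }

In the Spring configuration, the DAO bean would receive its SessionFactory through the sessionFactory property inherited from HibernateDaoSupport.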