SUMMARY:
- 6+ years of experience in designing and developing web applications and big data applications using Java, J2EE, Oracle, and Cloudera Hadoop-based big data technologies.
- Experience in writing MapReduce programs in Java for use alongside both Hive and Pig.
- Experience in data load management, importing and exporting data using Sqoop and Flume.
- Experience in creating Hive internal/external tables and views using a shared metastore, writing scripts in HiveQL, and data transformation and file processing using Pig Latin scripts (a HiveQL sketch follows this summary).
- Knowledge of installation, configuration, support, and maintenance of Cloudera's Hadoop platform, including CDH4 and CDH5 clusters.
- Excellent understanding of Hadoop architecture and its components, including HDFS, JobTracker, TaskTracker, NameNode, DataNode, and MapReduce.
- Trained in the Cloudera Hadoop distribution.
- Good working knowledge of core and advanced Java, J2EE, Oracle, JavaScript, Struts, Spring, and Hibernate.
- Experience in application programming using Servlets and EJBs.
- Designed and developed web-based UI applications using HTML, CSS, and JSP.
- Well versed in MVC (Model View Controller) architecture using Spring and JSF, including JSTL (JSP Standard Tag Library), custom tag development, and Tiles.
- Good exposure to web/application servers such as Apache Tomcat and WebLogic.
- Experience in developing database objects such as tables, views, functions, triggers, stored procedures, and packages using PL/SQL in Oracle.
- Strong analytical and logical skills; able to work independently or in a team.
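A minimal HiveQL sketch of the external-table and view work described above; the table name, columns, and HDFS location are illustrative assumptions, not taken from an actual project.

    -- Hypothetical external table over raw, tab-delimited web logs already in HDFS
    CREATE EXTERNAL TABLE IF NOT EXISTS web_logs (
      ip STRING,
      request_time STRING,
      url STRING,
      status INT
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE
    LOCATION '/data/raw/web_logs';

    -- A view narrowing the table to server errors for downstream analysis
    CREATE VIEW IF NOT EXISTS error_logs AS
    SELECT ip, request_time, url FROM web_logs WHERE status >= 500;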
TECHNICAL SKILLS:
Big Data Technologies: Hadoop, HDFS, Hive, Pig, HBase, Sqoop, Flume, ZooKeeper, Cloudera CDH4, CDH5, AWS, HiveQL, Pig Latin.
Java/J2EE Technologies: JSF, Struts, Servlets, JSP, EJB, JUnit, and JDBC
Programming Languages: C, C++, Java, SQL, PL/SQL, HTML, XML
Web Development: HTML5, DHTML, XHTML, CSS, JavaScript, AJAX
Frameworks: Struts, Hibernate, Spring, JSTL
PROFESSIONAL EXPERIENCE:
Confidential, NJ
Java/Hadoop Big Data Consultant
Responsibilities:
- Imported logs from web servers with Flume to ingest the data into HDFS.
- Implemented custom interceptors for Flume to filter data, and defined channel selectors to multiplex the data into different sinks (configuration sketch below).
- Exported data from HDFS into relational databases with Sqoop (example command below).
- Parsed, cleansed, and mined meaningful data in HDFS using MapReduce for further analysis (mapper sketch below).
- Fine-tuned Hive jobs for optimized performance (sample settings below).
- Implemented UDFs, UDAFs, and UDTFs in Java for Hive to process data that cannot be handled with Hive's built-in functions (UDF sketch below).
- Designed and implemented Pig UDFs for evaluating, filtering, loading, and storing data (UDF sketch below).
- Developed workflows in Oozie to automate the tasks of loading data into HDFS and pre-processing it with Pig (workflow sketch below).
Environment: Hadoop, Hive, Pig, Sqoop, Oracle 10g, HDFS, Oozie, Flume.
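ILLUSTRATIVE SKETCHES:
A minimal Flume agent configuration of the kind described above, assuming a custom interceptor that tags each event with a "type" header; the agent, channel, and class names are hypothetical.

    agent.sources = weblog
    agent.channels = c_error c_normal
    agent.sinks = s_error s_normal

    # Tail a web server log (exec source; illustrative only)
    agent.sources.weblog.type = exec
    agent.sources.weblog.command = tail -F /var/log/httpd/access_log
    agent.sources.weblog.channels = c_error c_normal

    # Hypothetical custom interceptor that sets a "type" header on each event
    agent.sources.weblog.interceptors = i1
    agent.sources.weblog.interceptors.i1.type = com.example.flume.LogFilterInterceptor$Builder

    # Multiplexing channel selector routes events by the "type" header
    agent.sources.weblog.selector.type = multiplexing
    agent.sources.weblog.selector.header = type
    agent.sources.weblog.selector.mapping.error = c_error
    agent.sources.weblog.selector.default = c_normal

Each channel then feeds its own HDFS sink, so error and normal traffic land in separate directories.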
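A Sqoop export of the shape described above; the connection string, table, and directory are placeholders.

    sqoop export \
      --connect jdbc:oracle:thin:@//db-host:1521/ORCL \
      --username etl_user -P \
      --table DAILY_AGGREGATES \
      --export-dir /user/hive/warehouse/daily_aggregates \
      --input-fields-terminated-by '\001'

The --input-fields-terminated-by value matches Hive's default field delimiter, a common pairing when exporting Hive-managed data.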
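A minimal Java mapper illustrating the parse-and-cleanse step; the field layout and class names are assumptions.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    /** Hypothetical cleansing mapper: skips malformed log lines and counts hits per URL. */
    public class CleanseMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
      private static final LongWritable ONE = new LongWritable(1);
      private final Text url = new Text();

      @Override
      protected void map(LongWritable key, Text value, Context ctx)
          throws IOException, InterruptedException {
        String[] fields = value.toString().split("\t");
        if (fields.length < 3 || fields[2].isEmpty()) {
          return; // drop malformed records
        }
        url.set(fields[2]);   // assumed: third field is the URL
        ctx.write(url, ONE);  // pair with a sum reducer to get hit counts
      }
    }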
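Session settings of the kind adjusted when tuning Hive jobs on CDH4/5-era clusters; the values are examples, not recommendations.

    SET hive.exec.parallel=true;          -- run independent stages concurrently
    SET hive.exec.compress.output=true;   -- compress final job output
    SET mapred.reduce.tasks=32;           -- cap the reducer count for this query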
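A minimal Hive UDF in Java using the classic UDF API, which matches the CDH4/5 era; the class and package names are hypothetical.

    package com.example.hive; // hypothetical package

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    /** Hypothetical UDF: strips the query string from a URL. */
    public final class StripQuery extends UDF {
      public Text evaluate(Text url) {
        if (url == null) {
          return null; // pass NULLs through
        }
        String s = url.toString();
        int q = s.indexOf('?');
        return new Text(q < 0 ? s : s.substring(0, q));
      }
    }

Registered in a session with ADD JAR followed by CREATE TEMPORARY FUNCTION strip_query AS 'com.example.hive.StripQuery';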
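A minimal Pig eval UDF in Java of the kind described above; the class and package names are hypothetical.

    package com.example.pig; // hypothetical package

    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    /** Hypothetical eval UDF: upper-cases its first argument, returning null on bad input. */
    public class ToUpper extends EvalFunc<String> {
      @Override
      public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
          return null; // Pig treats null as "no result" for this row
        }
        return input.get(0).toString().toUpperCase();
      }
    }

Used from a script after REGISTER myudfs.jar, e.g. B = FOREACH A GENERATE com.example.pig.ToUpper(name);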
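An Oozie workflow sketch automating a Pig pre-processing step; the app name, script, and parameters are placeholders.

    <workflow-app name="clean-web-logs" xmlns="uri:oozie:workflow:0.4">
      <start to="pig-clean"/>
      <action name="pig-clean">
        <pig>
          <job-tracker>${jobTracker}</job-tracker>
          <name-node>${nameNode}</name-node>
          <script>clean_logs.pig</script>
          <param>INPUT=${inputDir}</param>
          <param>OUTPUT=${outputDir}</param>
        </pig>
        <ok to="end"/>
        <error to="fail"/>
      </action>
      <kill name="fail">
        <message>Pig pre-processing failed</message>
      </kill>
      <end name="end"/>
    </workflow-app>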