Big Data Developer Resume
SUMMARY
- Overall 10 years of experience in the IT industry, spanning Big Data technologies, Java/J2EE, and Business Rule Management Systems (IBM ODM, FICO Blaze Advisor).
- 1.8+ years of experience as a Spark/Hadoop developer working across the Spark and Hadoop ecosystems.
- In-depth understanding of and hands-on experience with Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, and Spark MLlib.
- Good understanding of Hadoop architecture and hands-on experience with Hadoop components such as JobTracker, TaskTracker, NameNode, and DataNode, as well as MapReduce concepts and the HDFS framework.
- Experience with the Big Data ecosystem: MapReduce programming, Pig, Sqoop, Hive, Flume, YARN, and ZooKeeper.
- Experience with Apache Kafka, including Apache Spark Streaming integration with Kafka.
- Experience in Core Java.
- Good experience with Business Rule Management Systems using IBM Operational Decision Manager (ODM) and FICO Blaze Advisor.
- Worked on Java/J2EE applications using the Struts framework.
- Good understanding of HDFS, Parquet, and other big data storage formats and systems.
- Excellent communication skills spanning client interaction and team management.
- Strong analytical and problem-solving skills; a team player able to communicate at all levels of the development process.
TECHNICAL SKILLS
Specialties: Data engineering with Big Data technologies, Java/J2EE, Business Rule Management System tools
Data Engineering: Spark Core, Spark SQL, Spark Streaming, MLlib, Sqoop, Flume, Hive, Apache Pig
Business Rule Management Systems: IBM ODM, IBM ILOG JRules, FICO Blaze Advisor 7.2
Java and Client-Side Programming: Core Java, JDBC, SQL, Servlets, JSP, JSTL, Java APIs, Struts 2, Spring, CSS, HTML, Ajax, JSON, JavaScript
Platforms: Windows NT/2002/2007/2010, Linux, UNIX
Databases: SQL Server 2005/2008/2016, HDFS, MongoDB, Hive
Data Formats: JSON, XML
Languages: Java, Scala, SQL, PL/SQL, UNIX shell scripting, HiveQL, Impala, Pig Latin
Development/Productivity Tools: Rational Application Developer, Eclipse, Scala IDE, Spark shell, Hue, PuTTY, WinSCP, SQL Server, SoapUI
PROFESSIONAL EXPERIENCE
Big Data Developer
Confidential
Responsibilities:
- Developed Spark programs in Scala to consume data from Kafka, analyze each record, perform sentiment analysis, and store the results as Parquet files in HDFS.
- Developed multiple Kafka producers and consumers in Java/J2EE to ingest data from source systems and channel it to the Spark module.
- Implemented Spark programs using Scala and Spark SQL to process stored data faster and feed the analysis into ETL reporting.
- Responsible for data cleaning and filtering before sending records to the Kafka producer.
- Implemented Apache Pig scripts to perform statistical analysis on the stored data and load the results into Hive.
- Used Spark for parallel data processing and better performance.
- Implemented Apache Pig scripts to load and store data in Hive.
- Imported data from AWS S3, converted it into Spark RDDs, and performed transformations and actions on the RDDs.
- Used Avro, Parquet, and CSV data formats for storage in HDFS.
- Imported required tables from RDBMS using Sqoop and used Kafka to ingest real-time streaming data into HDFS.
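The cleaning-and-filtering step described above can be sketched in plain Java. This is a minimal illustration, not the original implementation: the class name, the CSV field count, and the specific rules (trimming, stripping control characters, dropping blank or malformed records) are all assumptions, and the surviving records would be handed to a Kafka producer downstream.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch of record cleaning/filtering ahead of a Kafka producer.
// The validation rules below are assumptions, not the original logic.
public class RecordCleaner {

    /** Normalize a raw record: trim whitespace and strip control characters. */
    public static String clean(String raw) {
        return raw == null ? "" : raw.trim().replaceAll("\\p{Cntrl}", "");
    }

    /** Keep only non-empty records that look like CSV rows with 3 fields. */
    public static boolean isValid(String record) {
        return !record.isEmpty() && record.split(",", -1).length == 3;
    }

    /** Clean and filter a batch; survivors would be sent to the Kafka producer. */
    public static List<String> cleanBatch(List<String> rawRecords) {
        return rawRecords.stream()
                .map(RecordCleaner::clean)
                .filter(RecordCleaner::isValid)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> raw = List.of("  id1,hello,0.9 ", "", "bad_record", "id2,world,0.4\u0007");
        System.out.println(cleanBatch(raw));
    }
}
```

Keeping the cleaning logic in pure functions like this makes it easy to unit-test independently of the Kafka client.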
Developer
Confidential
Responsibilities:
- Analyzed and understood the legacy business rules.
- Prepared design documents and pseudocode and submitted them for SME review.
- Upon SME approval, developed the rules using Blaze Advisor 7.2.
- Provided daily production support, including troubleshooting, management, and resolution of production issues, and supported scheduled job runs.
- Worked on Java-Blaze integration for the rule project.
- Performed unit and integration testing for the rule project.
- Provided QA and UAT support and bug fixing.
ILOG Developer
Confidential
Responsibilities:
- Analyzed and understood the existing application as well as new functionality.
- Developed modules based on use cases.
- Responsible for all changes related to business rules in ODM (ILOG at that time).
- Modified existing rules (decision tables, rule flows) in ILOG.
- Reviewed all code components under development.