- 8+ years as a professional software developer with technical expertise in all phases of the Software Development Life Cycle (SDLC) across various industrial sectors, specializing in Big Data analytics frameworks and Java/J2EE technologies.
- 4+ years of industry experience in row-key and schema design with NoSQL databases such as MongoDB, HBase and Cassandra.
- Extensively worked on Spark with Scala on clusters for computational analytics; installed Spark on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL/Oracle.
- Excellent programming skills at a high level of abstraction using Scala, Java and Python.
- Experience using D-Streams, accumulators, broadcast variables and RDD caching for Spark Streaming.
- Hands-on experience developing Spark applications using RDD transformations, Spark Core, Spark MLlib, Spark Streaming and Spark SQL.
- Strong experience and knowledge of real-time data analytics using Spark Streaming, Kafka and Flume.
- Working knowledge of Amazon Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
- Ran Apache Hadoop, CDH and MapR distributions, as well as Elastic MapReduce (EMR), on EC2.
- Expertise in developing Pig Latin scripts and Hive Query Language.
- Developed customized UDFs and UDAFs in Java to extend Hive and Pig core functionality.
- Created Hive tables to store structured data into HDFS and processed it using HiveQL.
- Experience in validating and cleansing data using Pig statements, and hands-on experience developing Pig macros.
- Working knowledge of installing and maintaining Cassandra, configuring the cassandra.yaml file per business requirements, and performing reads/writes through Java JDBC connectivity.
- Wrote multiple MapReduce jobs using the Java API, Pig and Hive for data extraction, transformation and aggregation from multiple file formats, including Parquet, Avro, XML, JSON, CSV and ORC, as well as compressed formats with codecs such as Gzip, Snappy and LZO.
- Good experience optimizing MapReduce algorithms using mappers, reducers, combiners and partitioners to deliver the best results for large datasets.
- Good knowledge of build tools like Maven and Ant, and of logging with Log4j.
- Hands-on experience using various Hadoop distributions: Cloudera (CDH 4/CDH 5), Hortonworks, MapR, IBM BigInsights, Apache and Amazon EMR.
- Experienced in writing ad hoc queries using Cloudera Impala, including Impala analytical functions.
- In-depth understanding of Hadoop architecture and its components, such as HDFS, the MapReduce programming paradigm and the YARN architecture.
- Proficient in developing, deploying and managing Solr from development to production.
- Used project-management services like JIRA for issue tracking and GitHub for code reviews; worked with version control tools such as CVS, Git and SVN.
- Hands-on knowledge of core Java concepts such as exceptions, collections, data structures, I/O, multithreading, and serialization/deserialization for streaming applications.
- Experience maintaining an Apache Tomcat, MySQL, LDAP and web-service environment.
- Designed ETL workflows in Tableau and deployed data from various sources to HDFS.
- Performed clustering, regression and classification using the machine-learning libraries Mahout and Spark MLlib.
- Good experience with use-case development and with software methodologies such as Agile and Waterfall.
- Proven ability to manage all stages of project development; strong problem-solving and analytical skills, with the ability to make balanced, independent decisions.
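The Spark constructs mentioned above (broadcast variables, accumulators and RDD caching) combine naturally in a single job. A minimal Scala sketch, in which the input path, lookup map, app name and output path are all illustrative placeholders:

```scala
import org.apache.spark.sql.SparkSession

object EnrichEvents {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("enrich-events").getOrCreate()
    val sc = spark.sparkContext

    // Small lookup table shipped once to every executor via a broadcast variable
    val countryByCode = sc.broadcast(Map("US" -> "United States", "DE" -> "Germany"))
    // Accumulator to count malformed records without a separate pass over the data
    val badRecords = sc.longAccumulator("badRecords")

    val events = sc.textFile("hdfs:///data/events/*.csv") // illustrative path
      .map(_.split(","))
      .filter { fields =>
        val ok = fields.length >= 2
        if (!ok) badRecords.add(1)
        ok
      }
      .map(fields => (countryByCode.value.getOrElse(fields(1), "unknown"), 1L))
      .cache() // reused below, so keep the parsed RDD in memory

    val byCountry = events.reduceByKey(_ + _)
    byCountry.saveAsTextFile("hdfs:///out/events_by_country")
    println(s"malformed records: ${badRecords.value}")
    spark.stop()
  }
}
```

The broadcast avoids re-shipping the lookup map with every task, and the accumulator surfaces the malformed-record count as a side effect of the main pass.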
Hadoop Technologies and Distributions: HDP, Cloudera
Hadoop Ecosystem: HDFS, Hive, Pig, Sqoop, Oozie, Flume, Spark, ZooKeeper, MapReduce, Spark SQL, Spark Streaming and Spark MLlib
NoSQL Databases: HBase, Cassandra
Programming: C, C++, Python, Java, Scala, PL/SQL
Build Tools: SBT, Maven
RDBMS: ORACLE, MySQL, SQL Server
IDE: Eclipse 4.x, NetBeans, Microsoft Visual Studio
Operating Systems: Linux (Red Hat, CentOS), Windows XP/7/8 and z/OS (mainframes)
Web Servers: Apache Tomcat
Cluster Management Tools: Cloudera Manager, Hortonworks Ambari and Hadoop security tools
Confidential, Philadelphia, Pennsylvania
Sr. Spark/AWS Developer
- Developed Spark applications using Scala and Java, and implemented an Apache Spark data-processing project to handle data from various RDBMS and streaming sources.
- Worked with Spark to improve the performance and optimization of existing algorithms in Hadoop using Spark Context, Spark SQL, Spark MLlib, DataFrames, pair RDDs and Spark on YARN.
- Used Spark Streaming APIs to perform transformations and actions on the fly, building a common learner data model that receives data from Kafka in near real time and persists it to Cassandra.
- Developed a Kafka consumer API in Scala for consuming data from Kafka topics.
- Consumed XML messages using Kafka and processed the XML files using Spark Streaming to capture UI updates.
- Developed a preprocessing job using Spark DataFrames to flatten JSON documents into flat files.
- Loaded D-Stream data into Spark RDDs and performed in-memory computation to generate the output response.
- Experienced in writing live real-time processing and core jobs using Spark Streaming with Kafka as a data-pipeline system.
- Migrated an existing on-premises application to AWS; used AWS services such as EC2 and S3 for small-data-set processing and storage, and maintained the Hadoop cluster on AWS EMR.
- Imported data from AWS S3 into Spark RDDs and performed transformations and actions on the RDDs.
- Good understanding of Cassandra architecture, including replication strategies, gossip and snitches.
- Designed column families in Cassandra; ingested data from RDBMS, performed data transformations, and exported the transformed data to Cassandra per the business requirements.
- Used the DataStax Spark Cassandra Connector to load data to and from Cassandra.
- Experienced in creating data models for clients' transactional logs; analyzed the data in Cassandra tables for quick searching, sorting and grouping using the Cassandra Query Language (CQL).
- Tested cluster performance using the cassandra-stress tool to measure and improve read/write throughput.
- Used HiveQL to analyze partitioned and bucketed data, executing Hive queries on Parquet tables to perform data analysis meeting the business specification logic.
- Used Apache Kafka to aggregate web log data from multiple servers and make it available in downstream systems for data-analysis and engineering roles.
- Experience using Avro, Parquet, RCFile and JSON file formats; developed UDFs in Hive and Pig.
- Developed custom Pig UDFs in Java and used UDFs from PiggyBank for sorting and preparing the data.
- Developed custom loaders and storage classes in Pig to work with data formats such as JSON, XML and CSV, and generated bags for processing with Pig.
- Developed Sqoop and Kafka jobs to load data from RDBMS and external systems into HDFS and Hive.
- Developed Oozie coordinators to schedule Pig and Hive scripts and create data pipelines.
- Wrote several MapReduce jobs using the Java API; used Jenkins for continuous integration.
- Set up and worked on Kerberos authentication principals to establish secure network communication on the cluster, and tested HDFS, Hive, Pig and MapReduce cluster access for new users.
- Continuously monitored and managed the Hadoop cluster through Cloudera Manager.
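The Kafka-to-Cassandra streaming flow described above typically pairs Spark's Kafka direct stream with the DataStax connector. A hedged Scala sketch: the broker, topic, keyspace/table names and the simple `userId,score` payload format are all illustrative assumptions, not details from the project itself.

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import com.datastax.spark.connector._            // SomeColumns
import com.datastax.spark.connector.streaming._  // saveToCassandra on DStreams

object LearnerModelStream {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("learner-model-stream")
      .set("spark.cassandra.connection.host", "cassandra-host") // illustrative host
    val ssc = new StreamingContext(conf, Seconds(10)) // 10-second micro-batches

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "kafka-broker:9092", // illustrative broker
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "learner-model")

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Set("learner-events"), kafkaParams))

    // Parse "userId,score" payloads and persist each micro-batch to Cassandra
    stream.map(_.value.split(","))
      .filter(_.length == 2)
      .map(f => (f(0), f(1).toDouble))
      .saveToCassandra("learner_ks", "scores", SomeColumns("user_id", "score"))

    ssc.start()
    ssc.awaitTermination()
  }
}
```

The direct stream gives per-partition parallelism without a receiver, and the connector writes each RDD in the DStream straight to the assumed `learner_ks.scores` table.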
Environment: Spark, Spark Streaming, Spark SQL, AWS EMR, MapR, HDFS, Hive, Pig, Apache Kafka, Sqoop, Java (JDK SE 6, 7), Scala, shell scripting, Linux, MySQL, Oracle Enterprise DB, Solr, Jenkins, Eclipse, Oracle, Git, Oozie, Tableau, SOAP, Cassandra and Agile methodologies.
Confidential, Houston, TX
- Worked on migrating MapReduce programs into Spark transformations using Spark and Scala; the jobs were initially written in Python (PySpark).
- Developed Spark jobs using Scala on top of YARN/MRv2 for interactive and batch analysis.
- Experienced in querying data using Spark SQL on top of the Spark engine for faster processing of data sets.
- Worked on implementing the Spark Framework, a Java-based web framework.
- Worked with, and learned a great deal from, AWS cloud services such as EC2, S3, EBS, RDS and VPC.
- Implemented Elasticsearch on a Hive data-warehouse platform.
- Worked with Elastic MapReduce and set up a Hadoop environment on AWS EC2 instances.
- Wrote Java code to format XML documents and uploaded them to the Solr server for indexing.
- Optimized HiveQL scripts by using the Tez execution engine.
- Worked on ad hoc queries, indexing, replication, load balancing and aggregation in MongoDB.
- Processed web server logs by developing multi-hop Flume agents using an Avro sink and loaded the results into MongoDB for further analysis; also extracted files from MongoDB through Flume and processed them.
- Expert knowledge of MongoDB NoSQL data modeling, tuning and disaster-recovery backups; used it for distributed storage and processing via CRUD operations.
- Extracted and restructured data into MongoDB using the import and export command-line utilities.
- Experience setting up fan-out flows in Flume to design a V-shaped architecture that takes data from many sources and ingests it into a single sink.
- Experience creating, dropping and altering tables at run time, without blocking updates and queries, using HBase and Hive.
- Experience working with different join patterns; implemented both map-side and reduce-side joins.
- Wrote Flume configuration files for importing streaming log data into HBase.
- Imported several transactional logs from web servers with Flume to ingest the data into HDFS, using Flume with a spooling directory to load data from the local file system (LFS) to HDFS.
- Installed and configured Pig and wrote Pig Latin scripts to convert data from text to Avro format.
- Created Partitioned Hive tables and worked on them using HiveQL.
- Loaded data into HBase using both bulk and non-bulk loads.
- Worked with the continuous-integration tool Jenkins and automated jar builds at the end of each day.
- Worked with Tableau; integrated Hive with Tableau Desktop reports and published them to Tableau Server.
- Developed MapReduce programs in Java for parsing raw data and populating staging tables.
- Experience setting up the whole application stack; set up and debugged Logstash to send Apache logs to AWS Elasticsearch.
- Used ZooKeeper to coordinate the servers in clusters and to maintain data consistency.
- Experienced in designing RESTful services using Java-based APIs such as Jersey.
- Used Oozie operational services for batch processing and scheduling workflows dynamically.
- Supported setting up the QA environment and updated configurations for implementing scripts with Pig, Hive and Sqoop.
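A multi-hop Flume flow like the one described above (spooling-directory source, Avro hop, HDFS landing) is driven entirely by agent configuration. An illustrative two-agent sketch, in which hostnames, ports and paths are assumptions:

```properties
# First-hop agent "web": pick up rotated web-server logs and forward over Avro
web.sources  = logsrc
web.channels = mem
web.sinks    = fwd

web.sources.logsrc.type = spooldir
web.sources.logsrc.spoolDir = /var/log/httpd/spool
web.sources.logsrc.channels = mem

web.channels.mem.type = memory
web.channels.mem.capacity = 10000

web.sinks.fwd.type = avro
web.sinks.fwd.hostname = collector-host
web.sinks.fwd.port = 4545
web.sinks.fwd.channel = mem

# Second-hop agent "coll": receive Avro events from many web agents (fan-in) and land in HDFS
coll.sources  = avrosrc
coll.channels = mem
coll.sinks    = hdfssink

coll.sources.avrosrc.type = avro
coll.sources.avrosrc.bind = 0.0.0.0
coll.sources.avrosrc.port = 4545
coll.sources.avrosrc.channels = mem

coll.channels.mem.type = memory
coll.channels.mem.capacity = 10000

coll.sinks.hdfssink.type = hdfs
coll.sinks.hdfssink.hdfs.path = hdfs:///flume/weblogs/%Y-%m-%d
coll.sinks.hdfssink.hdfs.fileType = DataStream
coll.sinks.hdfssink.hdfs.useLocalTimeStamp = true
coll.sinks.hdfssink.channel = mem
```

Many first-hop agents point at the same collector, which is what makes the V-shaped, many-sources-into-one-sink topology work.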
Environment: HDP 2.3, Hadoop, HDFS, Hive, MapReduce, AWS EC2, Solr, Impala, MySQL, Oracle, Sqoop, Flume, Spark, SQL, Talend, Python, PySpark, YARN, Pig, Oozie, Linux (Ubuntu), Scala, Ab Initio, Tableau, Maven, Jenkins, Java (JDK 1.6), Cloudera, JUnit, Agile methodologies
Big Data Hadoop Developer
- Analyzed and wrote Hadoop MapReduce jobs using the Java API, Pig and Hive.
- Exported data using Sqoop from HDFS to Teradata on a regular basis.
- Wrote scripts to automate application deployments and configurations; monitored YARN applications.
- Wrote MapReduce programs to clean and pre-process the data coming from different sources.
- Implemented various output formats, such as sequence files and Parquet, in MapReduce programs; also implemented multiple output formats in the same program to match the use cases.
- Used Pig to apply transformations, cleaning and deduplication to data from raw data sources.
- Installed Oozie workflows to run multiple Hive jobs.
- Implemented test scripts to support test-driven development (TDD) and continuous integration.
- Converted text files to Avro and then to Parquet format so the files could be used with other Hadoop ecosystem tools.
- Experienced in loading and transforming large sets of structured, semi-structured and unstructured data.
- Exported the analyzed data to HBase using Sqoop to generate reports for the BI team.
- Analyzed large data sets to determine the optimal way to aggregate and report on them.
- Participated in the requirement-gathering and analysis phase of the project, documenting business requirements through workshops and meetings with various business users.
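Oozie runs like the Hive scheduling described above are defined declaratively. A minimal illustrative workflow that invokes one Hive script; the app name, script name and the `${jobTracker}`/`${nameNode}` properties are assumptions supplied by a job.properties file:

```xml
<workflow-app name="hive-etl" xmlns="uri:oozie:workflow:0.5">
  <start to="hive-node"/>
  <action name="hive-node">
    <hive xmlns="uri:oozie:hive-action:0.5">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <script>daily_agg.hql</script>
    </hive>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <fail name="fail">
    <message>Hive action failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
  </fail>
  <end name="end"/>
</workflow-app>
```

A coordinator definition then triggers this workflow on a schedule, which is how recurring Hive jobs are chained into data pipelines.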
Environment: Hadoop 1.0.4, Python, MapReduce, HDFS, Hive 0.10, Pig, Hue, Spark, Kafka, Oozie, Core Java, Eclipse, HBase, Flume, Cloudera Manager, Greenplum DB, IDMS, VSAM, SQL*Plus, Toad, PuTTY, Windows NT, UNIX shell scripting, Linux 5, Pentaho Big Data, YARN, HAWQ, Spring XD, Java SDK 1.6
- Involved in requirements analysis and the design of an object-oriented domain model.
- Involved in detailed documentation; wrote functional specifications for the module.
- Involved in development of the application with Java and J2EE technologies.
- Developed and maintained an elaborate services-based architecture utilizing open-source technologies such as Hibernate ORM and the Spring Framework.
- Developed server-side services using Java multithreading, Struts MVC, Java, EJB, Spring and Web Services (SOAP, WSDL, Axis).
- Responsible for developing the DAO layer using Spring MVC and configuration XMLs for Hibernate, and for managing CRUD operations (insert, update and delete).
- Designed, developed and implemented JSPs in the presentation layer for the Submission, Application and Reference implementations.
- Deployed web, presentation and business components on the Apache Tomcat application server.
- Developed PL/SQL procedures for different use-case scenarios.
- Involved in post-production support and testing; used JUnit for unit testing of the module.
Environment: Java/J2EE, JSP, XML, Spring Framework, Hibernate, Eclipse (IDE), JavaScript, Ant, SQL, PL/SQL, Oracle, Windows, UNIX, SOAP, Jasper Reports.
- Actively involved in development of the project, from requirement gathering to quality-assurance testing.
- Coded and developed a multi-tier architecture in Java, J2EE and Servlets.
- Conducted analysis, requirements study and design according to various design patterns, and developed features according to the use cases, taking ownership of those features.
- Implemented PL/SQL queries and procedures to perform database operations.
- Configured Log4j to enable/disable logging in the application.
- Designed, developed and maintained the data layer using the Hibernate ORM framework.
- Used the Hibernate framework for the persistence layer; involved in writing stored procedures for data retrieval, storage and updates in the Oracle database using Hibernate.
- Developed and deployed archive files (EAR, WAR, JAR) using the Ant build tool.
- Developed a rich user interface using HTML, JSP, AJAX, JavaScript, jQuery and CSS.
- Used software-development best practices for object-oriented design and methodologies throughout the object-oriented development cycle.
- Strictly followed the Waterfall development methodology for implementing projects.
- Thoroughly documented the detailed process flow with UML diagrams and flowcharts for distribution across various teams.
- Thoroughly involved in the testing phase; implemented test cases using JUnit.
- Involved in developing training presentations for developers (offshore support), QA and production support.
- Involved in designing use-case diagrams, class diagrams and interaction diagrams using UML.
Environment: Java JDK 1.5, Java J2EE, Informatica, Oracle 11g (Toad and SQL Developer), Servlets, JBoss application server, Waterfall, JSPs, EJBs, DB2, RAD, XML, web server, JUnit, Hibernate, MS Access, Microsoft Excel.