Senior Developer Resume
Tampa, Florida
SUMMARY
- Software professional with 8+ years of industry experience as a Big Data/Java technical consultant.
- Hands-on experience installing, configuring, and using Hadoop ecosystem components such as MapReduce, HDFS, HBase, Phoenix, Hive, Sqoop, Pig, and Flume.
- Around 4 years of extensive experience with Hadoop ecosystem components such as HDFS, MapReduce, Pig, HBase, Sqoop, and Hive for scalable, distributed, high-performance computing.
- In-depth understanding of Hadoop architecture and its components, including HDFS, YARN, ResourceManager, NodeManager, Job History Server, JobTracker, TaskTracker, NameNode, DataNode, and MapReduce.
- Expertise in writing Hadoop jobs for analyzing data using MapReduce, Hive, and Pig running on YARN.
- Experience with Spark 2.0.0: writing Spark Streaming jobs and creating DataFrames and Datasets from existing data to run transformations and actions on different types of data (see the sketch after this list).
- Experience importing and exporting data with Sqoop between HDFS and relational database systems.
- Hands-on experience with full software development life cycle implementation, including business interaction, requirement analysis, design, development, testing, and documentation phases.
- Capable of processing large sets of structured, semi-structured, and unstructured data, and of supporting the systems' application architecture.
- Experience in Java, JSP, Servlets, Hibernate, Spring, JBoss, JDBC, JavaScript, XML, and HTML.
- Ability to adapt to evolving technology, with a strong sense of responsibility and accomplishment.
- Hands-on experience handling multi-terabyte datasets.
- Self-starter: proactive, with good communication skills and an understanding of business workflow.
- Excellent analytical, problem-solving, communication, and interpersonal skills, with the ability to interact with individuals at all levels and to work both as part of a team and independently.
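As an illustration of the Spark 2.x experience above, a minimal sketch of creating a DataFrame from an existing dataset and running a transformation and an action, using Spark's Java API; the app name, input path, and column names are hypothetical:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class OrdersSummary {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("OrdersSummary") // hypothetical app name
                .getOrCreate();

        // Create a DataFrame from an existing dataset (hypothetical HDFS path and schema)
        Dataset<Row> orders = spark.read().json("hdfs:///data/orders.json");

        // Transformations (filter, groupBy) followed by an action (show)
        orders.filter(col("amount").gt(100))
              .groupBy("status")
              .count()
              .show();

        spark.stop();
    }
}
```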
TECHNICAL SKILLS
- Big Data: Spark, Spark Streaming, Spark SQL, Hadoop, HDFS, MapReduce, YARN, Hive, Pig, Sqoop, Flume, Kafka, Storm, Phoenix, Oozie, ZooKeeper, Apache Solr, Impala
- NoSQL: HBase, Cassandra
- Languages/Frameworks: Java, J2EE, Scala, Spring, Hibernate, Servlets, JSP, Jakarta Struts
- Databases: Teradata, MS SQL Server, Oracle, Sybase
- Testing: Cucumber, JUnit, Mockito, MRUnit, Apache JMeter
- Build/CI/Deployment: Ant, Maven, Gradle, Jenkins, TeamCity, uDeploy
- Tools: Tomcat, ALM (QC), JIRA, Git, CVS, SVN, Log4j, WinSCP, PuTTY
- Web: HTML, XHTML, XML, XSL
- IDEs: Eclipse, NetBeans, IntelliJ
- Operating Systems: Linux, Windows
PROFESSIONAL EXPERIENCE
Confidential, Tampa, Florida
Senior Developer
Environment: Java, Spark, Sqoop, Scala, Hadoop YARN, Hive, Pig, Flume, Oozie, Phoenix, Apache Solr, Vert.x API
Responsibilities:
- Implemented data ingestion using Sqoop, Spark, and HBase, loading data from various RDBMS sources and CSV files.
- Used Phoenix as a SQL layer on top of HBase while processing data on the Hadoop cluster.
- Created Phoenix tables with primary keys, indices, and views, and joined multiple tables (see the sketch after this list).
- Wrote Phoenix join queries and created indices on the primary keys of each table.
- Created partitioned Impala tables and wrote a MapReduce program to copy billions of historical records from HBase to Impala.
- Implemented various tuning and indexing mechanisms to load billions of records in minimal time.
- Wrote Cucumber and JUnit/Mockito test cases across the entire application to cover regression test scenarios.
- Set up a load balancer across the servers to distribute user requests in parallel.
- Implemented the Eclipse Vert.x API in the IOD and DOD integration to let users submit queries.
- Installed and configured Apache Solr to implement the indexing mechanism for data processing and retrieval.
- Uploaded data to Hive and combined new tables with existing databases.
- Set up the application to run in different environments (DEV, UAT, PROD, COB).
- Built the application (creating artifacts, JAR files, etc.) on a continuous integration server using TeamCity.
- Used uDeploy to automate application deployments across environments by writing shell scripts and adding the components and processes.
- Provided support and bug fixes for production issues in currently running applications.
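A minimal sketch of the Phoenix table and index work described above, using Phoenix's JDBC driver; the ZooKeeper host, table name, and columns are hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PhoenixTableSetup {
    public static void main(String[] args) throws Exception {
        // The Phoenix JDBC URL points at the HBase ZooKeeper quorum (hypothetical host);
        // the Phoenix client JAR must be on the classpath
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement()) {

            // Phoenix table backed by HBase, with a declared primary key
            stmt.execute("CREATE TABLE IF NOT EXISTS ORDERS ("
                    + " ORDER_ID BIGINT NOT NULL,"
                    + " CUSTOMER_ID BIGINT,"
                    + " STATUS VARCHAR,"
                    + " CONSTRAINT PK PRIMARY KEY (ORDER_ID))");

            // Secondary index to speed up joins and filters on CUSTOMER_ID
            stmt.execute("CREATE INDEX IF NOT EXISTS IDX_ORDERS_CUST ON ORDERS (CUSTOMER_ID)");
        }
    }
}
```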
Confidential
Developer
Environment: Spark, Sqoop, Scala, Hadoop YARN, Hive, Pig, Flume, Oozie, Cassandra.
Responsibilities:
- Implemented data ingestion using Sqoop and Spark, loading data from various RDBMS sources and CSV and XML files.
- Handled data cleansing and transformation tasks using Spark (with Scala) and Hive.
- Implemented data consolidation using Spark and Hive to generate data in the required formats, applying ETL tasks for data repair, massaging data to identify sources for audit purposes, and filtering data before storing it back to HDFS.
- Developed scripts to load log data using Flume and store it in HDFS on a daily basis.
- Worked on real-time data processing using Spark Streaming and Kafka with Scala.
- Used Spark (Scala) RDDs to transform and filter log lines containing "ERROR", "FAILURE", or "WARNING", then stored the results in HDFS (see the sketch after this list).
- Uploaded data to Hive and combined new tables with existing databases.
- Worked on writing Scala programs using Spark on YARN to analyze data.
- Created HBase tables to load large sets of structured data.
- Created Pig script jobs with a focus on query optimization.
- Responsible for writing Hive queries to analyze data in the Hive warehouse using Hive Query Language (HQL).
- Maintained project documentation for the module.
- Exported the analyzed data from HDFS to relational databases (MySQL, DB2) using Sqoop for visualization and to generate reports for the BI team.
- Monitored system metrics and logs for problems.
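A minimal sketch of the log-filtering step described above; the original work used Scala RDDs, so this Java-API version is illustrative, with hypothetical HDFS paths:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class LogLineFilter {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("LogLineFilter");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Read raw log lines from HDFS (hypothetical input path)
        JavaRDD<String> lines = sc.textFile("hdfs:///logs/app/");

        // Keep only the lines flagged as problems
        JavaRDD<String> problems = lines.filter(line ->
                line.contains("ERROR") || line.contains("FAILURE") || line.contains("WARNING"));

        // Store the filtered lines back into HDFS (hypothetical output path)
        problems.saveAsTextFile("hdfs:///logs/app-problems");
        sc.stop();
    }
}
```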
Confidential
Bug fix/Development
Environment: Java 1.6, J2EE, SQL, Struts 2, JSP, XML, Hibernate.
Responsibilities:
- Developed and enhanced web applications.
- Analyzed legacy applications for deprecated and/or obsolete code to clean up bloated legacy codebases.
- Built the EAR using an Ant script and deployed the application to the integration test and production environments.
- Supported the web applications on reported issues.
- Coordinated source code integration and deployment.
- Supported the application by addressing incidents and defects.
Confidential
Developer
Environment: Java 1.6, JSP, Struts 1.x, Spring 3.2, Hibernate 4.6, Eclipse, and Oracle 10g
Responsibilities:
- Implemented Struts for the controller logic.
- Extensively used SQL queries and PL/SQL stored procedures and triggers to retrieve and update information in the Oracle database via JDBC.
- Wrote, configured, and maintained Hibernate configuration files, and wrote and updated Hibernate mapping files for each Java object to be persisted.
- Wrote Hibernate Query Language (HQL) queries and tuned them for better performance (see the sketch after this list).
- Updated the Struts configuration file and the validation and Tiles XML documents.
- Implemented client-side validation using JavaScript.
- Implemented the application based on MVC architecture.
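A minimal sketch of the HQL usage described above; the Employee entity, its department field, and the query are hypothetical:

```java
import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class EmployeeDao {
    // SessionFactory built from hibernate.cfg.xml and the mapping files
    private static final SessionFactory FACTORY =
            new Configuration().configure().buildSessionFactory();

    @SuppressWarnings("unchecked")
    public List<Employee> findByDepartment(String dept) {
        Session session = FACTORY.openSession();
        try {
            // HQL runs against the mapped Employee class (hypothetical entity),
            // not the underlying table
            return session.createQuery("from Employee e where e.department = :dept")
                          .setParameter("dept", dept)
                          .list();
        } finally {
            session.close();
        }
    }
}
```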
Confidential
Development
Environment: Oracle, JDK, Struts, Hibernate, Tomcat.
Responsibilities:
- Designed and developed user interfaces using HTML, JSP, and Struts tags.
- Involved in application performance tuning (code refactoring).
- Wrote test cases using JUnit, following test-first development (see the sketch after this list).
- Wrote build files using Ant; used Maven in conjunction with Ant to manage builds.
- Involved in implementing Data Access Object (DAO) classes.
- Involved in developing the business logic as per functional specification using Core Java and J2EE.
- Used Hibernate persistence logic to interact with the database.
- Involved in writing Hibernate mapping files to map Java objects to database tables.
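A minimal sketch of the test-first JUnit style mentioned above; the PriceCalculator class and its discount rule are hypothetical:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    // Written before the implementation: the test pins down the expected behavior
    @Test
    public void appliesTenPercentDiscountAboveThreshold() {
        PriceCalculator calc = new PriceCalculator(); // hypothetical class under test
        assertEquals(90.0, calc.discountedPrice(100.0), 0.001);
    }
}
```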
Confidential
Development
Environment: Oracle, JDK, Struts, Hibernate, Tomcat, Windows 2000.
Responsibilities:
- Designed and developed user interfaces using HTML, JSP and Struts tags.
- Developed applications using Java/J2EE technologies such as Servlets, JSP, EJB, and JDBC.
- Validated the views using the Validator plug-in in the Struts framework.
- Wrote test cases using JUnit, following test-first development.
- Wrote build files using Ant; used Maven in conjunction with Ant to manage builds.
- Used Hibernate for data persistence and database interaction.
- Involved in developing the Struts Action classes (see the sketch after this list).
- Developed test cases for unit and sanity testing.
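A minimal sketch of a Struts 1.x Action class like those mentioned above; the action name, forward name, and dispatch logic are hypothetical:

```java
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

public class LoginAction extends Action {

    @Override
    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request,
                                 HttpServletResponse response) throws Exception {
        // Controller logic would go here; on success, dispatch to the
        // "success" forward defined in struts-config.xml (hypothetical name)
        return mapping.findForward("success");
    }
}
```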
