
Hadoop Developer Resume


West Lake, TX

SUMMARY

  • 8+ years of experience in the IT industry, comprising extensive work on the Build Engineering & Release Management process, including end-to-end code configuration, building binaries, and deploying artifacts across the entire life cycle of enterprise applications, as well as general systems administration, Change Management, and Software Configuration Management (SCM)
  • 6+ years of IT experience, which includes 6+ years in Big Data technologies and 1.5 years in Java and mainframe technologies
  • Worked in the finance, insurance, and e-commerce domains
  • Expertise in various components of the Hadoop ecosystem - MapReduce, Hive, Pig, Sqoop, Impala, Flume, Oozie, HBase, Apache Solr, Apache Storm, YARN
  • Hands-on Experience in working with Cloudera Hadoop Distribution
  • Wrote, executed, and deployed complex MapReduce Java code using various Hadoop APIs
  • Experienced in MapReduce code tuning and performance optimization
  • Knowledge in installing, configuring and using Hadoop ecosystem components
  • Proficient in Hive Query Language (HiveQL) and experienced in Hive performance optimization using partitioning, dynamic partitioning, and bucketing
  • Expertise in developing Pig scripts; wrote and implemented custom UDFs in Pig for data filtering (see the Pig UDF sketch after this list)
  • Used Impala for data analysis.
  • Hands-on experience with the data ingestion tools Sqoop and Flume
  • Collected log data from various sources (web servers, application servers, and consumer devices) using Flume and stored it in HDFS to perform various analyses
  • Performed data transfer between HDFS and other relational database systems (MySQL, SQL Server, Oracle, and DB2) using Sqoop
  • Used the Oozie job scheduler to schedule MapReduce, Hive, and Pig jobs; experience in automating job execution
  • Experience with NoSQL databases like HBase and fair knowledge in MongoDB and Cassandra.
  • Knowledge in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions
  • Experience in working with different relational databases like MySQL, SQL Server, Oracle, and DB2
  • Strong experience in database design, writing complex SQL Queries
  • Expertise in the development of multi-tiered, web-based enterprise applications using J2EE technologies like Servlets, JSP, JDBC, and Hibernate
  • Extensive coding experience in Java and Mainframes - COBOL, CICS and JCL
  • Experience working in all phases of software development across various methodologies
  • Strong background in writing test plans and performing unit testing, user acceptance testing, integration testing, and system testing
  • Proficient in software documentation and technical report writing.
  • Working knowledge of the NoSQL graph database Neo4j.
  • Worked coherently with multiple teams. Conducted peer reviews, organized and participated in knowledge transfer (technical and domain) sessions.
  • Experience in working with the onsite-offshore model.
  • Developed various UDFs in MapReduce and Python for Pig and Hive.
  • Solid experience and knowledge of other SQL and NoSQL databases like MySQL, MS SQL, MongoDB, HBase, Accumulo, Neo4j, and Cassandra.
  • Good Data Warehouse experience in MS SQL.
  • Leveraged Talend to ingest data into the data lake of the data fabric; unit tested Talend workflows for data correctness.
  • Assisted project teams with their queries on data lake tables
  • Good knowledge and firm understanding of J2EE frontend/backend, SQL and database concepts.
  • Good experience in Linux, Mac OS environment.
  • Used various development tools like Eclipse, GIT, Android Studio and Subversion.
  • Knowledge of Cloudera Hadoop and MapR distribution components and their custom packages.
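
As a companion to the Pig UDF bullet above, the following is a minimal, hypothetical sketch of a Java filter UDF of the kind used for data filtering; the class name IsValidRecord and the field layout are illustrative assumptions, not taken from any specific project.

    // Hypothetical Pig filter UDF: keeps only tuples whose first field is a
    // non-empty string. Requires the pig and hadoop-common jars on the classpath.
    import java.io.IOException;
    import org.apache.pig.FilterFunc;
    import org.apache.pig.data.Tuple;

    public class IsValidRecord extends FilterFunc {
        @Override
        public Boolean exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0) {
                return false;
            }
            Object field = input.get(0);
            return field != null && !field.toString().trim().isEmpty();
        }
    }

Such a UDF would typically be registered in the Pig script with REGISTER and applied in a FILTER ... BY clause.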

TECHNICAL SKILLS

Hadoop/Big Data: MapReduce, Hive, Pig, Impala, Sqoop, Flume, HDFS, Oozie, Hue, HBase, Zookeeper, Spark, Scala

Operating Systems: Windows, Ubuntu, RedHat Linux, Unix

Java & J2EE Technologies: Core Java, Servlets, JSP, JDBC

Frameworks: Hibernate

Databases/Database Languages: Oracle 11g/10g/9i, MySQL, DB2, SQL Server, SQL, HQL, NoSQL (HBase)

Web Technologies: JavaScript, HTML, XML, REST, CSS

Programming Languages: Java, Unix shell scripting, COBOL, CICS, JCL

IDEs: Eclipse, NetBeans

Web Servers: Apache Tomcat 6

Methodologies: Waterfall, Agile and Scrum

PROFESSIONAL EXPERIENCE

Confidential, West Lake, TX

Hadoop Developer

Responsibilities:

  • Set up a Hadoop cluster on Amazon EC2 using Apache Whirr for a POC.
  • Worked on analyzing the Hadoop cluster and different big data analytic tools, including Pig, the HBase database, and Sqoop.
  • Created HBase tables to store variable data formats of PII data coming from different portfolios (see the HBase sketch after this list).
  • Responsible for building scalable distributed data solutions using Hadoop.
  • Developed business logic using Scala.
  • Installed and configured Flume, Hive, Pig, Sqoop, and HBase on the Hadoop cluster.
  • Managed and scheduled jobs on the Hadoop cluster.
  • Coordinated with project teams on data lake related changes and took initiatives such as standardizing naming conventions
  • Developed Kafka consumer in Scala on Spark Streaming.
  • Worked on data extraction, transformation, and load using Hive, Pig, and HBase
  • Worked extensively on design, development, and deployment of jobs to extract data, filter the data, and load it into the data lake.
  • Implemented ETL processes with Java MapReduce programs, Pig, and Hive.
  • Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs and Scala.
  • Design and implementation of real-time applications using Apache Storm, Storm Trident, Kafka, Apache Ignite's in-memory data grid, and Accumulo.
  • Implemented a nine-node CDH3 Hadoop cluster on Red Hat Linux.
  • Worked on AWS to create EC2 instances.
  • Worked on cluster installation, commissioning and decommissioning of DataNodes, NameNode recovery, capacity planning, and slot configuration.
  • Resource management of the Hadoop cluster, including adding/removing cluster nodes for maintenance and capacity needs
  • Involved in loading data from the UNIX file system to HDFS.
  • Implemented File Transfer Protocol operations using Talend Studio to transfer files between network folders.
  • Used Spark to create APIs in Scala for big data analysis.
  • Used AWS's secure global infrastructure and its range of security features to protect data in the cloud
  • Leveraged a variety of AWS computing and networking services to meet application needs
  • Implemented best-income logic using Pig scripts.
  • Developed microservices & APIs using Spring Boot and used an Apache Kafka cluster as the messaging system between the APIs and microservices.
  • Ability to spin up different AWS instances, including EC2-Classic and EC2-VPC, using CloudFormation templates.
  • Experience in working with NoSQL databases like HBase and MongoDB for real-time data analytics using Apache Spark with Scala.
  • Implemented test scripts to support test driven development and continuous integration.
  • Responsible to manage data coming from different sources.
  • Tested possible use of a graph database (Neo4j) by combining different sources to find the relevant path for a node
  • Installed and configured Hive and also wrote Hive UDFs.
  • Experience with NoSQL column-oriented databases like HBase, Cassandra, and MongoDB and their integration with the Hadoop cluster.
  • Build AWS secured solutions by creating VPC with private and public subnets.
  • Worked on S3 buckets on AWS to store Cloud Formation Templates.
  • Experienced in loading and transforming large sets of structured, semi-structured, and unstructured data.
  • Cluster coordination services through Zookeeper.
  • Experience in managing and reviewing Hadoop log files.
  • Implemented multi-tiered architecture using both Microservices and Monolithic architecture.
  • Development of microservices using Dropwizard and Spring Boot. UI implementation using AngularJS 1.x.
  • Exported the analyzed data to relational databases using Sqoop for visualization and to generate reports for the BI team.
  • Analyzed large data sets to determine the optimal way to aggregate and report on them.
  • Supported in setting up QA environment and updating configurations for implementing scripts withPigandSqoop.
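
As a companion to the HBase bullet above, here is a minimal sketch of writing a row through the HBase Java client API; the table name portfolio_pii, the column family, the row key, and the value are hypothetical placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PortfolioWriter {
        public static void main(String[] args) throws Exception {
            // Picks up hbase-site.xml from the classpath.
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("portfolio_pii"))) {
                // Row key, column family, and value below are illustrative placeholders.
                Put put = new Put(Bytes.toBytes("cust-0001"));
                put.addColumn(Bytes.toBytes("pii"), Bytes.toBytes("ssn_hash"),
                              Bytes.toBytes("ab12cd34"));
                table.put(put);
            }
        }
    }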

Environment: Hadoop, HDFS, Hive, Flume, HBase, Sqoop, Pig, Java JDK 1.6, Eclipse, MySQL, data lake, Accumulo, ETL, MongoDB, Spring Boot, Spark, Neo4j, Scala, web frameworks Spring (MVC, Core), Dropwizard, Talend, Spray-can (Scala), Ubuntu, Zookeeper, Amazon EC2, Solr.

Confidential, Chino Hills, CA

Hadoop Developer

Responsibilities:

  • Working on the development of a web application and Spring Batch applications. The web application allows customers to sign up and get cellular and music services.
  • Tools: MySQL, Tomcat Server, Mybatis, Spring MVC, REST, AWS (Amazon Web Services)
  • Working on the development of User Interface.
  • Tools: AngularJS, Backbone.js, JavaScript, Velocity.
  • Working on the mobile payment functionality using PayPal, AngularJS, and Spring MVC.
  • Have been involved in Spring Integration.
  • Developed traits, case classes, etc., in Scala.
  • Experience in creating DStreams from sources like Flume and Kafka and performing different Spark transformations and actions on them (see the Spark Streaming sketch after this list).
  • Developed Spark SQL programs for handling different data sets for better performance.
  • Experience working with the NoSQL databases MongoDB and Apache Cassandra.
  • Gained good experience with the NoSQL stores Solr and HBase.
  • Have been involved in building and deploying the applications using Ant builds.
  • Involved in fixing production bugs and in the deployment process.
  • Involved in performing the Linear Regression using Scala API and Spark.
  • Have been working on Spring Batch applications to make sure the customer cellular and music services get renewed.
  • Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs in Scala.
  • Involved in deploying the applications in AWS.
  • Installed and configured Hadoop ecosystem components like HBase, Flume, Pig, and Sqoop
  • Developed a Python application using MongoDB.
  • Good knowledge of creating event-processing data using Spark Streaming.
  • Translated high level requirements into ETL process.
  • Experienced in fixing errors by using debug mode of Talend.
  • Proficiency in Unix/Linux shell commands.
  • Maintained the EC2 (Elastic Compute Cloud) and RDS (Relational Database Service) instances in Amazon Web Services.
  • Created RESTful web services interface for supporting XML message transformation.
  • Developed unit test cases using JUnit and TestNG.
  • Involved in designing the web applications, working closely with the architect
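
The DStream work described above is sketched below in Java (the Scala jobs mentioned in this resume would look similar), assuming the spark-streaming-kafka-0-10 integration; the broker address, topic name, and group id are illustrative placeholders.

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka010.ConsumerStrategies;
    import org.apache.spark.streaming.kafka010.KafkaUtils;
    import org.apache.spark.streaming.kafka010.LocationStrategies;

    public class KafkaStreamJob {
        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf().setAppName("KafkaStreamJob");
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

            Map<String, Object> kafkaParams = new HashMap<>();
            kafkaParams.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            kafkaParams.put("key.deserializer", StringDeserializer.class);
            kafkaParams.put("value.deserializer", StringDeserializer.class);
            kafkaParams.put("group.id", "stream-demo");              // placeholder group id

            JavaInputDStream<ConsumerRecord<String, String>> stream =
                KafkaUtils.createDirectStream(
                    jssc,
                    LocationStrategies.PreferConsistent(),
                    ConsumerStrategies.<String, String>Subscribe(
                        Collections.singletonList("events"), kafkaParams)); // placeholder topic

            // Trivial transformation/action: count records per micro-batch.
            stream.map(ConsumerRecord::value)
                  .count()
                  .print();

            jssc.start();
            jssc.awaitTermination();
        }
    }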

Environment: Hadoop (CDH), MapReduce, HBase, HDFS, Hive, Pig, Sqoop, Scala, Flume, Spark, Oozie, MongoDB, ETL, Java, Talend, SQL, Kafka, Cassandra

Confidential, San Mateo, CA

Hadoop Developer

Responsibilities:

  • Involved in providing architecture, design, development, and testing services for sub-system components within the data aggregation infrastructure
  • Installed/Configured/Maintained Apache Hadoop clusters for application development and Hadoop tools like Hive, Pig, HBase, Flume and Sqoop.
  • Performed system administration activities on Linux, CentOS & Ubuntu.
  • Developed Java MapReduce jobs for trip calibration, trip summarization, and data filtering.
  • Developed Hive UDFs for rating aggregation (see the Hive UDF sketch after this list).
  • Handled importing of data from various data sources, performed transformations using Hive, MapReduce, and loaded data into HDFS
  • Extracted the data from Teradata into HDFS using Sqoop.
  • Analyzed the data by performing Hive queries and running Pig scripts to learn user behavior, such as shopping enthusiasts, travelers, music lovers, etc.
  • Exported the analyzed patterns back into Teradata using Sqoop.
  • Continuously monitored and managed the Hadoop cluster through Cloudera Manager.
  • Installed the Oozie workflow engine to run multiple Hive jobs.
  • Monitored workload, job performance, and capacity planning using Cloudera Manager.
  • Developed Hive queries to process the data and generate data cubes for visualization
  • Importing and exporting data into HDFS and Hive using Sqoop.
  • Experienced in defining job flows.
  • Writing shell scripts for manipulating data.
  • Tuned the developed ETL jobs for better performance
  • Experienced in managing and reviewing Hadoop log files.
  • Responsible to manage data coming from different sources.
  • Used Oozie tool for job scheduling
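
A minimal sketch of the style of Hive UDF described above; the class name and the 0-5 clamping rule are assumptions for illustration, not the actual rating-aggregation logic.

    import org.apache.hadoop.hive.ql.exec.UDF;

    // Hypothetical Hive UDF: clamps a raw rating into the 0.0-5.0 range so it
    // can be safely aggregated (e.g., with AVG) in a Hive query.
    public final class NormalizeRating extends UDF {
        public Double evaluate(Double rawRating) {
            if (rawRating == null) {
                return null;
            }
            return Math.max(0.0, Math.min(5.0, rawRating));
        }
    }

After packaging, a UDF like this is typically registered with ADD JAR and CREATE TEMPORARY FUNCTION and then called inside an aggregation query.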

Environment: Hadoop (Apache Hadoop 1.0.1), MapReduce, HDFS, Java 6, ETL, CentOS, Zookeeper, Sqoop, Hive, Pig, Oozie, Eclipse, Amazon EC2, JSP, Servlets, Oracle.

Confidential

Jr. Hadoop Developer

Responsibilities:

  • Worked closely with the Development Team in the design phase and developed use case diagrams using Rational Rose.
  • Responsible for building scalable distributed data solutions using Hadoop.
  • Installed and configured Hive, Pig, Sqoop, Flume and Oozie on the Hadoop cluster.
  • Setup and benchmarked Hadoop/HBase clusters for internal use.
  • Developed simple to complex MapReduce jobs using Hive and Pig.
  • Optimized Map/Reduce Jobs to use HDFS efficiently by using various compression mechanisms
  • Handled importing of data from various data sources, performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from MySQL into HDFS using Sqoop.
  • Analyzed the data by performing Hive queries and running Pig scripts to study customer behavior.
  • Used UDFs to implement business logic in Hadoop.
  • Implemented business logic by writing UDFs in Java and used various UDFs from Piggybanks and other sources.
  • Continuous monitoring and managing the Hadoop cluster using Cloudera Manager.
  • Developed MapReduce programs in Java for data analysis (see the MapReduce sketch after this list).
  • Developed Pig Latin scripts to extract the data from the web server output files to load into HDFS.
  • Developed HQL for the analysis of semi structured data.
  • Handled the installation and configuration of a Hadoop cluster.
  • Built and maintained scalable data pipelines using the Hadoop ecosystem and other open-source components like Hive and Cassandra instead of HBase.
  • Used Hive and created Hive tables and involved in data loading and writing Hive UDFs.
  • Used Sqoop to import data into HDFS and Hive from other data systems.
  • Handled data exchange between HDFS and different web sources using Flume and Sqoop
  • Installed Kafka on the Hadoop cluster and configured producer and consumer code in Java to establish connections
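
A minimal sketch of a Java MapReduce job in the spirit of the data-analysis bullets above, counting records per customer from comma-separated input lines; the field layout and class names are assumptions.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class EventCountJob {

        // Map: emit (customerId, 1) for each comma-separated input line.
        public static class EventMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
            private static final LongWritable ONE = new LongWritable(1);
            private final Text customerId = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split(",");
                if (fields.length > 0 && !fields[0].isEmpty()) {
                    customerId.set(fields[0]);
                    context.write(customerId, ONE);
                }
            }
        }

        // Reduce: sum the counts for each customer.
        public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
            @Override
            protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                    throws IOException, InterruptedException {
                long sum = 0;
                for (LongWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new LongWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "event-count");
            job.setJarByClass(EventCountJob.class);
            job.setMapperClass(EventMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }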

Environment: Hadoop (CDH), MapReduce, HDFS, Hive, Pig, Sqoop, Flume, Oozie, Java, SQL, Kafka, Cassandra.

Confidential

JAVA Developer

Responsibilities:

  • Installation, configuration & upgrade of Solaris and Linux operating systems.
  • Actively participated in requirements gathering, analysis, design, and testing phases
  • Designed use case diagrams, class diagrams, and sequence diagrams as a part of Design Phase
  • Developed the entire application implementing MVC Architecture integrating JSF with Hibernate and Spring frameworks.
  • Developed the Enterprise Java Beans (Stateless Session beans) to handle different transactions such as online funds transfer, bill payments to the service providers.
  • Implemented Service Oriented Architecture (SOA) using JMS for sending and receiving messages while creating web services
  • Developed XML documents and generated XSL files for Payment Transaction and Reserve Transaction systems.
  • Developed SQL queries and stored procedures.
  • Developed Web Services for data transfer from client to server and vice versa using Apache Axis, SOAP and WSDL.
  • Used the JUnit framework for unit testing of all the Java classes.
  • Implemented various J2EE Design patterns like Singleton, Service Locator, DAO, and SOA.
  • Worked on AJAX to develop an interactive Web Application and JavaScript for Data Validations.
  • Developed the application under the JEE architecture; designed dynamic and browser-compatible user interfaces using JSP, custom tags, HTML, CSS, and JavaScript.
  • Deployed and maintained the JSP and Servlet components on WebLogic 8.0. Involved in writing MapReduce programs to process data from various sources, parse it, and store the parsed data into Accumulo and Hive using proper integration
  • Developed the application server persistence layer using JDBC, SQL, and Hibernate (see the JDBC sketch after this list).
  • Used JDBC to connect the web applications to databases.
  • Implemented test-first unit testing driven by JUnit.
  • Experience with design & build of Web Applications using Java/J2EE Technology, AWS and open source technologies.
  • Developed and utilized J2EE services and JMS components for messaging communication in WebLogic.
  • Configured the development environment using the WebLogic application server for developers' integration testing
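
A minimal sketch of the JDBC-based persistence access described above, written with modern try-with-resources for brevity; the connection URL, credentials, and ACCOUNT table are hypothetical.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class AccountDao {
        // Hypothetical Oracle JDBC URL and credentials; in practice these would
        // come from a container-managed DataSource or external configuration.
        private static final String URL = "jdbc:oracle:thin:@//db-host:1521/ORCL";

        public double findBalance(String accountId) throws SQLException {
            String sql = "SELECT balance FROM account WHERE account_id = ?";
            try (Connection conn = DriverManager.getConnection(URL, "app_user", "app_pass");
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, accountId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getDouble("balance") : 0.0;
                }
            }
        }
    }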

Environment: Java/J2EE, SQL, Oracle 10g, JSP 2.0, EJB, AJAX, JavaScript, Accumulo, WebLogic 8.0, HTML, JDBC 3.0, XML, JMS, log4j, JUnit, Servlets, MVC, MyEclipse
