
Developer Resume


Sacramento, CA

SUMMARY

  • Over 6 years of programming experience with skills in analysis, design, development, and deployment of large-scale distributed data processing using Hadoop, Pig, and Java, as well as other software applications, with an emphasis on object-oriented programming.
  • Good exposure to MapReduce programming, Pig scripting, distributed applications, and HDFS.
  • Worked on importing and exporting data between HDFS and relational database systems using Sqoop, managing the Hadoop cluster with Cloudera Manager, and managing and reviewing Hadoop log files.
  • Good exposure to the complete software development life cycle (SDLC) of client-server and web applications.
  • Good knowledge of administering, installing, configuring, troubleshooting, securing, backing up, performance monitoring, and fine-tuning Red Hat Linux.
  • Working knowledge of Oracle, DB2, SQL Server, and MySQL databases.
  • Hands-on application development using Java, RDBMS, and Linux shell scripting.
  • Good knowledge of J2EE design patterns and core Java design patterns.
  • Knowledge of Web services, Struts, Servlets, JSP, BEA WebLogic, WebSphere (WAS), XML, XSL, and XSD.
  • Developed front ends using JSP with custom tag libraries, JSTL, Struts tag libraries, GWT, Adobe Flex, MXML, HTML, and CSS.
  • Working knowledge of popular frameworks such as Struts, Hibernate, and Spring MVC.
  • Worked on continuous integration workflows and deployments using Jenkins, Maven, and Artifactory.
  • Worked with Agile engineering practices.
  • Worked on processing XML, JSON, XLS, and CSV data using Python.
  • Techno-functional responsibilities included interfacing with users, identifying functional and technical gaps, providing estimates, development, producing documentation, and production support.
  • Excellent interpersonal and communication skills; creative, research-minded, technically competent, and results-oriented with strong problem-solving skills.

TECHNICAL SKILLS

Big Data Ecosystem: HDFS, HBase, Hadoop MapReduce, Hive, Pig, Sqoop, Zookeeper

Languages: Java, Python 2.7/3, SQL, PL/SQL

Java Technologies: JSE, JSP, JDBC, Hibernate

Methodologies: Agile, V-model

Databases: MySQL, Teradata, Oracle

IDE: Eclipse

Tools: Apache Mahout, AWS cloud computing (EC2), Git, Jenkins, Maven

Scripts: JavaScript, Shell Scripting, Python

PROFESSIONAL EXPERIENCE

Confidential, Sacramento, CA

Developer

Responsibilities:

  • Managed and scheduled jobs on a Hadoop cluster using Oozie workflows and Java schedulers.
  • Handled importing of data from various data sources, performed transformations using Pig, loaded data into HDFS, and extracted data from MySQL into HDFS using Sqoop.
  • Developed MapReduce programs to process Avro files and compute results over the data, including map-side joins and other operations.
  • Developed a MapReduce program to search production log files for application issues and download performance (a minimal sketch follows this list).
  • Implemented MapReduce programs to handle semi-structured and unstructured data such as XML, JSON, and Avro data files, and sequence files for log data.
  • Wrote MapReduce jobs, HiveQL, and Pig Latin to process source data into structured data and store it in relational databases or NoSQL databases (HBase, Cassandra).
  • Responsible for performing extensive data validation using Hive.
  • Involved in loading data from the UNIX file system into HDFS.
  • Created Hive tables, dynamic partitions, and buckets for sampling, and worked on them using HiveQL.
  • Worked on optimization techniques to get better performance from Hive queries.
  • Loaded and transformed large sets of semi-structured and unstructured data using Pig Latin operations.
  • Exported the analyzed data to relational databases using Sqoop for visualization and to generate reports for the BI team.
  • Worked on Oozie to automate data loading into HDFS and on Pig to pre-process the data.
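
A minimal sketch of the kind of log-searching MapReduce job described above, written against the Hadoop Java API. The class names, log-level keywords, and input/output paths are illustrative assumptions, not details from the original project.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

/** Counts ERROR and WARN occurrences in production log files (illustrative only). */
public class LogLevelCount {

    public static class LogLevelMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text level = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            // Emit one count per matching log level found in the line.
            if (line.contains("ERROR")) {
                level.set("ERROR");
                context.write(level, ONE);
            } else if (line.contains("WARN")) {
                level.set("WARN");
                context.write(level, ONE);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "log level count");
        job.setJarByClass(LogLevelCount.class);
        job.setMapperClass(LogLevelMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // sums counts locally on each mapper
        job.setReducerClass(IntSumReducer.class);    // sums counts per log level
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /logs/prod
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. /reports/log-levels
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}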

Confidential, Irvine, CA

Developer

Responsibilities:

  • Managed and scheduled jobs on a Hadoop cluster using Oozie workflows and Java schedulers.
  • Handled importing of data from various data sources, performed transformations using Pig, loaded data into HDFS, and extracted data from MySQL into HDFS using Sqoop.
  • Developed MapReduce programs to process Avro files and compute results over the data, including map-side joins and other operations.
  • Developed a MapReduce program to search production log files for application issues and download performance.
  • Implemented MapReduce programs to handle semi-structured and unstructured data such as XML, JSON, and Avro data files, and sequence files for log data.
  • Wrote MapReduce jobs, HiveQL, and Pig Latin to process source data into structured data and store it in relational databases or NoSQL databases (HBase, Cassandra).
  • Responsible for performing extensive data validation using Hive.
  • Involved in loading data from the UNIX file system into HDFS.
  • Created Hive tables, dynamic partitions, and buckets for sampling, and worked on them using HiveQL (see the sketch after this job entry).
  • Worked on optimization techniques to get better performance from Hive queries.
  • Loaded and transformed large sets of semi-structured and unstructured data using Pig Latin operations.
  • Exported the analyzed data to relational databases using Sqoop for visualization and to generate reports for the BI team.
  • Worked on Oozie to automate data loading into HDFS and on Pig to pre-process the data.

Environment: Hadoop, HDFS, MapReduce, Hive, Flume, Sqoop, Pig, Java (JDK 1.6), Eclipse, MySQL, Ubuntu, ZooKeeper, CDH, SQL Server, Shell Scripting.
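
A minimal sketch of the kind of partitioned, bucketed Hive table creation and query mentioned in the responsibilities above, shown here through the HiveServer2 JDBC driver; the table, columns, connection URL, and credentials are hypothetical, and the original work may equally well have used the Hive CLI.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/** Creates a dynamically partitioned Hive table and runs a validation query (illustrative only). */
public class HivePartitionExample {
    public static void main(String[] args) throws Exception {
        // HiveServer2 JDBC driver; URL and credentials are placeholders.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hive-host:10000/default", "user", "");
             Statement stmt = conn.createStatement()) {

            // Allow dynamic partition inserts for this session.
            stmt.execute("SET hive.exec.dynamic.partition = true");
            stmt.execute("SET hive.exec.dynamic.partition.mode = nonstrict");

            // Partitioned, bucketed table used for sampling and validation queries.
            stmt.execute("CREATE TABLE IF NOT EXISTS web_events (" +
                    " user_id STRING, url STRING, latency_ms INT)" +
                    " PARTITIONED BY (event_date STRING)" +
                    " CLUSTERED BY (user_id) INTO 32 BUCKETS" +
                    " STORED AS ORC");

            // Load from a staging table, letting Hive derive the partition value.
            stmt.execute("INSERT INTO TABLE web_events PARTITION (event_date)" +
                    " SELECT user_id, url, latency_ms, event_date FROM staging_web_events");

            // Simple validation query over one partition.
            ResultSet rs = stmt.executeQuery(
                    "SELECT COUNT(*) FROM web_events WHERE event_date = '2015-01-01'");
            while (rs.next()) {
                System.out.println("rows: " + rs.getLong(1));
            }
        }
    }
}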

Confidential, Chicago, IL

Developer

Responsibilities:

  • Enabled speedy reviews and first-mover advantages by using Oozie to automate data loading into the Hadoop Distributed File System and Pig to pre-process the data (see the sketch after this job entry).
  • Developed MapReduce jobs for the users.
  • Involved in maintaining, updating, and scheduling periodic jobs, ranging from recurring MapReduce jobs to ad hoc jobs for business users.
  • Performed customer profiling jobs and ran clustering algorithms on top of the datasets.
  • Loaded traffic data into HDFS using Apache Flume, from higher environments to lower environments.
  • Created reports for the BI team, using Sqoop to move data into HDFS and Hive.
  • Provided aggregated datasets for downstream GT management teams.
  • Developed and maintained UNIX scripts that load Salesforce data into HDFS.
  • Developed and supported the weekly BI reporting system by providing consumable, RDBMS-like datasets, aggregating, joining, filtering, and sub-setting the cluster data for Hive analysis.
  • Involved in writing Apache Hive based queries.
  • Created requested queries over the datasets using Hive.

Environment: Hadoop, Java, MapReduce, Apache Pig, Flume, Sqoop, Shell scripting, Teradata
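
A minimal sketch of submitting a data-loading workflow through the Oozie Java client, as a rough illustration of the Oozie automation described above; the Oozie URL, HDFS application path, and workflow properties are placeholders and assume the workflow definition already exists in HDFS.

import java.util.Properties;
import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.WorkflowJob;

/** Submits an Oozie workflow that automates a data-loading step (illustrative only). */
public class SubmitLoadWorkflow {
    public static void main(String[] args) throws Exception {
        // Oozie server URL and HDFS application path are placeholders.
        OozieClient oozie = new OozieClient("http://oozie-host:11000/oozie");

        Properties conf = oozie.createConfiguration();
        conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode:8020/apps/load-traffic");
        conf.setProperty("nameNode", "hdfs://namenode:8020");
        conf.setProperty("jobTracker", "jobtracker-host:8021");
        conf.setProperty("inputDir", "/staging/traffic");   // consumed by the workflow XML

        // Submit and start the workflow, then poll until it finishes.
        String jobId = oozie.run(conf);
        System.out.println("submitted workflow " + jobId);
        while (oozie.getJobInfo(jobId).getStatus() == WorkflowJob.Status.RUNNING) {
            Thread.sleep(10000);
        }
        System.out.println("final status: " + oozie.getJobInfo(jobId).getStatus());
    }
}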

Confidential, Charlotte, NC

Developer

Responsibilities:

  • Created use case diagrams, sequence diagrams, functional specifications, and user interface diagrams using StarUML.
  • Involved in the requirement analysis, design, coding, and testing phases of the project.
  • Participated in JAD meetings to gather requirements and understand the end users' system.
  • Developed user interfaces using JSP, HTML, XML, and JavaScript.
  • Generated XML schemas and used XMLBeans to parse XML files.
  • Used JDBC to process database calls for DB2/AS400 and SQL Server databases (see the sketch after this job entry).
  • Created data source and helper classes utilized by all the interfaces to access and manipulate data.
  • Developed a web application called iHUB (integration hub) to initiate all the interface processes, using the Struts framework, JSP, and HTML.
  • Developed the interfaces using Eclipse 3.1.1 and JBoss 4.1.
  • Involved in integration testing, bug fixing, and production support.

Environment: Java 1.3, Servlets, JSPs, JavaMail API, JavaScript, HTML, MySQL 2.1, Swing, Java Web Server 2.0, JBoss 2.0, RMI, Rational Rose, Red Hat Linux 7.1.
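
A minimal sketch of the kind of JDBC database call described above, using the AS/400 (DB2) Toolbox driver as one example; the connection URL, credentials, table, and columns are hypothetical, and the SQL Server side of the original application would differ only in the driver and URL.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

/** Looks up interface records over JDBC (illustrative only). */
public class InterfaceDao {
    public static void main(String[] args) throws Exception {
        // AS/400 (DB2) Toolbox JDBC driver; URL, credentials, and schema are placeholders.
        Class.forName("com.ibm.as400.access.AS400JDBCDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:as400://db-host/IHUBLIB", "user", "password");
        try {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT ORDER_ID, STATUS FROM INTERFACE_ORDERS WHERE STATUS = ?");
            ps.setString(1, "PENDING");
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString("ORDER_ID") + " -> " + rs.getString("STATUS"));
            }
            rs.close();
            ps.close();
        } finally {
            conn.close();   // always release the connection
        }
    }
}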

Confidential

Software Engineer

Responsibilities:

  • Participated in the requirements analysis.
  • Involved in UML design, data model design, and development of the project.
  • Worked on the development of the web application using Struts 1.3 and Servlets.
  • Involved in the development of the controller logic of the application.
  • Developed and performed unit testing of the application code using JUnit (see the sketch after this job entry).
  • Performed performance tuning of the SQL queries in the controller layer.
  • Designed and developed the user interface using Struts tags, JSP, HTML, and JavaScript.
  • Involved in multi-tiered J2EE design utilizing MVC architecture (Struts framework) and Hibernate.
  • Involved in designing the user interfaces using HTML, CSS, and JSPs.

Environment: Java, Struts, JDBC, SQL, PL/SQL, Oracle, UML.
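
A minimal sketch of the kind of JUnit unit test described above, assuming JUnit 4; the helper class under test is hypothetical and is inlined to keep the example self-contained.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

/** JUnit 4 tests for a small, hypothetical helper of the kind used by the controller layer. */
public class OrderAmountCalculatorTest {

    /** Hypothetical class under test, inlined here so the sketch compiles on its own. */
    static class OrderAmountCalculator {
        double totalWithTax(double amount, double taxRate) {
            return Math.round(amount * (1 + taxRate) * 100.0) / 100.0;  // round to cents
        }
        boolean isValidQuantity(int quantity) {
            return quantity > 0;
        }
    }

    private final OrderAmountCalculator calculator = new OrderAmountCalculator();

    @Test
    public void totalIncludesTaxRoundedToCents() {
        // 100.00 at 8.25% tax should come to 108.25.
        assertEquals(108.25, calculator.totalWithTax(100.00, 0.0825), 0.0001);
    }

    @Test
    public void quantityValidationRejectsZeroAndNegative() {
        assertTrue(calculator.isValidQuantity(3));
        assertFalse(calculator.isValidQuantity(0));
    }
}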
