Sr. Hadoop Developer/Admin Resume

Seattle, WA

SUMMARY

  • Over 8 years of experience in the IT industry, including 3+ years developing large-scale applications using Hadoop and other big data tools.
  • Well experienced with Hadoop ecosystem components such as MapReduce, HBase, Oozie, Hive, Sqoop, Pig, and Flume, along with the Cloudera distribution and Mahout.
  • Experience in efficiently developing solutions through the analysis of large data sets.
  • Experience with distributed systems, large-scale non-relational data stores, MapReduce systems, data modeling, and big data systems.
  • In-depth understanding of Hadoop architecture and its components, including HDFS, JobTracker, TaskTracker, NameNode, and DataNode.
  • Extensive hands-on experience writing complex MapReduce jobs, Pig scripts, and Hive data models.
  • Experience in converting MapReduce applications to Spark.
  • Good working experience using Sqoop to import data from an RDBMS into HDFS and vice versa.
  • Good knowledge of job scheduling and workflow design tools such as Oozie.
  • Experience working with BI teams to translate big data requirements into Hadoop-centric solutions.
  • Good knowledge of Hadoop cluster administration, monitoring, and management using Cloudera Manager.
  • Good experience creating real-time data streaming solutions using Apache Spark/Spark Streaming, Kafka, and Flume.
  • Extended Hive and Pig core functionality by writing custom UDFs (a minimal sketch appears after this list).
  • Good understanding of data mining and machine learning techniques.
  • Experience handling messaging services using Apache Kafka.
  • Experience fine-tuning MapReduce jobs for better scalability and performance.
  • Worked extensively with dimensional modeling, data migration, data cleansing, data profiling, and ETL processes for data warehouses.
  • Strong experience in Spring, Struts, and Hibernate.
  • Expertise in design patterns including Front Controller, Data Access Object, Session Facade, Business Delegate, Service Locator, MVC, Data Transfer Object and Singleton.
  • Sound grasp of relational database concepts; worked extensively with MySQL, DB2, and SQL Server.
  • Expertise in database tools such as SQL Workbench, SQL Developer, and TOAD for accessing database servers.
  • Experience in developing and implementing web applications using Java, JSP, jQuery UI, jQuery, ExtJS, CSS, HTML, HTML5, XHTML, JavaScript, AJAX, JSON, XML, JDBC, and JNDI.
  • Expertise in writing shell scripts, cron automation, and regular expressions.
  • Expertise in Web Services architecture in SOAP and WSDL using JAX-RPC.
  • Expertise in using configuration management tools such as Subversion (SVN), Rational ClearCase, CVS, and Git for version control.
  • Expert in various Agile methodologies, including Scrum, Test-Driven Development, incremental and iterative development, and pair programming.
  • Experience writing SQL and PL/SQL queries and stored procedures for accessing and managing databases such as SQL Server 2014/2012, MySQL, and IBM DB2.
  • Involved in all phases of Software Development Life Cycle (SDLC) in large scale enterprise software using Object Oriented Analysis and Design.
  • Working experience with version control tools such as SVN, CVS, ClearCase, and PVCS.
  • Highly motivated team player with zeal to learn new technologies.
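
For illustration, a minimal sketch of a custom Hive UDF of the kind referenced above; the class name and cleansing rule are hypothetical, and it assumes the hive-exec dependency on the classpath:

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Hypothetical UDF that normalizes free-text codes before aggregation.
    public final class NormalizeCode extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null; // preserve NULLs, as Hive expects
            }
            // Trim, upper-case, and strip non-alphanumeric characters.
            String cleaned = input.toString().trim()
                    .toUpperCase().replaceAll("[^A-Z0-9]", "");
            return new Text(cleaned);
        }
    }

Such a function would be packaged in a JAR, registered with ADD JAR and CREATE TEMPORARY FUNCTION, and then called from HiveQL like any built-in.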

TECHNICAL SKILLS

Languages: Java, C, C++, R, Scala 2.0, Python.

Big Data: Hadoop (YARN), Hive, Pig Latin, Spark, Storm, Sqoop, Flume, ZooKeeper, Oozie, Kafka, and MRUnit.

J2EE Standards: JDBC, JNDI, JMS, JavaMail, and XML deployment descriptors.

Web/Distributed Technologies: J2EE, Servlets 2.1/2.2, JSP 2.0, Struts 1.1, Hibernate 3.0, JSF, JSTL 1.1, EJB 1.1/2.0, RMI, JNI, XML, JAXP, XSL, XSLT, UML, MVC, Spring 2.0, CORBA, Java Threads.

Operating Systems: Windows 95/98/NT/2000/XP, MS-DOS, UNIX, Linux 6.2

Databases: MS SQL Server 2000, DB2, MS Access, MySQL, and HBase.

Markup Languages: HTML, XHTML, CSS, XML, JSON, XSL, XSD, XSLT.

Browser Scripting: JavaScript, HTML DOM, DHTML, AJAX, AngularJS.

App/Web Servers: IBM WebSphere 5.1.2/5.0/4.0/3.5, BEA WebLogic 5.1/7.0, JDeveloper, Apache Tomcat, JBoss.

Frameworks: Struts, Hibernate, Apache Mahout.

Testing & Case Tools: JUnit, Log4j, Rational ClearCase, CVS, JBuilder.

Networking Protocols: HTTP, HTTPS, FTP, UDP, TCP/IP, SNMP, SMTP, POP3.

PROFESSIONAL EXPERIENCE

Confidential, Seattle, WA

Sr. Hadoop Developer/Admin

Responsibilities:

  • Installed and configured Apache Hadoop clusters for application development, along with Hadoop tools such as Hive, Pig, Oozie, ZooKeeper, and HBase.
  • Implemented multiple MapReduce jobs in Java for data cleansing and pre-processing (a minimal sketch appears after this list).
  • Worked with the team to grow the cluster from 28 to 42 nodes; the additional DataNodes were configured through the Hadoop commissioning process.
  • Added new DataNodes when needed and ran the HDFS balancer.
  • Responsible for managing data coming from different sources.
  • Imported real-time data into Hadoop using Kafka and implemented an Oozie job for daily imports.
  • Worked with various compression codecs and file formats such as Snappy, Gzip, Bzip2, and Avro.
  • Performance-tuned Hive jobs.
  • Involved in defining job flows, managing and reviewing log files.
  • Developed Spark scripts using Scala shell commands as per requirements.
  • Wrote Scala programs that run on Spark and used the Hue interface to query the data.
  • Created HBase tables to store data arriving in variable formats from different portfolios.
  • Experienced in managing and reviewing Hadoop log files.
  • Involved in forecasting based on current results and insights derived from data analysis.
  • Prepared and executed MRUnit test cases.
  • Implemented test scripts to support test driven development and continuous integration.
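
A minimal sketch of a map-only cleansing job along the lines described above; the comma-delimited schema, field count, and bad-record rule are hypothetical:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CleanseJob {

        // Drops malformed rows and trims whitespace from each field.
        public static class CleanseMapper
                extends Mapper<LongWritable, Text, Text, NullWritable> {
            private static final int EXPECTED_FIELDS = 5; // hypothetical schema width

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split(",", -1);
                if (fields.length != EXPECTED_FIELDS) {
                    context.getCounter("cleanse", "malformed").increment(1);
                    return; // skip bad records, but count them
                }
                StringBuilder out = new StringBuilder();
                for (int i = 0; i < fields.length; i++) {
                    if (i > 0) out.append(',');
                    out.append(fields[i].trim());
                }
                context.write(new Text(out.toString()), NullWritable.get());
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "cleanse");
            job.setJarByClass(CleanseJob.class);
            job.setMapperClass(CleanseMapper.class);
            job.setNumReduceTasks(0); // map-only: no shuffle needed for cleansing
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(NullWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }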

Environment: Hadoop, MapReduce, Spark, Kafka, HDFS, ZooKeeper, Oozie, Core Java, Eclipse, HBase, Sqoop, Flume, UNIX Shell Scripting.

Confidential, Herndon, VA

Hadoop Developer

Responsibilities:

  • Experience with professional software engineering practices and best practices for the full software development life cycle including coding standards, code reviews, source control management and build processes.
  • Responsible for building scalable distributed data solutions using Hadoop.
  • Worked on analyzing and writing Hadoop MapReduce jobs using the Java API, Pig, and Hive.
  • Responsible for managing data coming from different sources; involved in HDFS maintenance and the loading of structured and unstructured data.
  • Used Sqoop to import data from MySQL into HDFS on a regular basis.
  • Worked extensively on ingesting data using Flume and transporting it to HBase.
  • Responsible for creating complex tables and queries using Hive.
  • Created partitioned tables in Hive for best performance and faster querying.
  • Hands-on working experience in a Linux environment.
  • Converted Hive queries to Spark SQL, using Parquet as the storage format (see the sketch after this list).
  • Worked with the NoSQL database HBase to create tables and store data.
  • Worked on custom Pig loaders and storage classes to handle a variety of data formats such as JSON and XML.
  • Analyzed large amounts of data sets to determine optimal way to aggregate and report on it.
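
A minimal sketch of the Hive-to-Spark SQL conversion described above, using the Spark 1.x Java API; the table, columns, and output path are hypothetical:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.DataFrame;
    import org.apache.spark.sql.hive.HiveContext;

    public class HiveToSparkSql {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("hive-to-spark-sql");
            JavaSparkContext sc = new JavaSparkContext(conf);
            HiveContext sqlContext = new HiveContext(sc.sc());

            // The same HQL that previously ran in Hive, executed by Spark SQL.
            DataFrame result = sqlContext.sql(
                "SELECT customer_id, SUM(amount) AS total "
                + "FROM transactions GROUP BY customer_id");

            // Persist with Parquet as the storage format, as described above.
            result.write().mode("overwrite").parquet("/warehouse/reports/customer_totals");

            sc.stop();
        }
    }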

Environment: Hadoop, HDFS, HBase, MapReduce, AWS, JDK 1.5, J2EE 1.4, Struts 1.3, Hive, Pig, Sqoop, Flume, Kafka, Oozie, Hue, Storm, ZooKeeper, Avro files, SQL, ETL, Cloudera Manager, MySQL

Confidential, Memphis, TN

Hadoop Developer

Responsibilities:

  • Involved in running Hadoop jobs to process millions of records of text data.
  • Analyzed large data sets by running Hive queries and Pig scripts.
  • Involved in creating Hive tables and loading and analyzing data using Hive queries.
  • Responsible for managing data coming from different sources.
  • Experienced in running Hadoop streaming jobs to process terabytes of XML-format data.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data.
  • Implemented partitioning, dynamic partitions, and buckets in Hive.
  • Assisted in exporting analyzed data to relational databases using Sqoop.
  • Worked with Informatica 8.6.x: Source Analyzer, Mapping Designer, Mapplet Designer, Transformation Designer, Warehouse Designer, Repository Manager, and Workflow Manager/Server Manager.
  • Learned Talend out of special interest and used it on the project to simplify integration tasks.
  • Performed joins with the tMap component and implemented filters using Talend.
  • Worked on data conversion by extracting data from databases, reformatting it, and loading it into Cassandra nodes.
  • Developed unit test cases for Hadoop MapReduce jobs with MRUnit (a minimal sketch appears after this list).
  • Supported MapReduce programs running on the cluster.
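
A minimal MRUnit sketch of the kind of unit test described above; the mapper under test and its three-field record layout are hypothetical:

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mrunit.mapreduce.MapDriver;
    import org.junit.Before;
    import org.junit.Test;

    public class FilterMapperTest {

        // Hypothetical mapper under test: keeps rows with exactly three fields.
        public static class FilterMapper
                extends Mapper<LongWritable, Text, Text, NullWritable> {
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                if (value.toString().split(",", -1).length == 3) {
                    context.write(value, NullWritable.get());
                }
            }
        }

        private MapDriver<LongWritable, Text, Text, NullWritable> mapDriver;

        @Before
        public void setUp() {
            mapDriver = MapDriver.newMapDriver(new FilterMapper());
        }

        @Test
        public void keepsWellFormedRecord() throws IOException {
            mapDriver.withInput(new LongWritable(0), new Text("a,b,c"))
                     .withOutput(new Text("a,b,c"), NullWritable.get())
                     .runTest();
        }

        @Test
        public void dropsMalformedRecord() throws IOException {
            // No expected output: the two-field record should be filtered out.
            mapDriver.withInput(new LongWritable(0), new Text("a,b"))
                     .runTest();
        }
    }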

Environment: Hadoop, HDFS, Pig, Hive, MapReduce, Spark, Sqoop, Kafka, Oozie, Python, Java (JDK 1.6), DataStax, flat files, MySQL, TOAD, Windows NT, Linux, Cassandra.

Confidential, Madison, WI

Sr. Java/J2EE Developer

Responsibilities:

  • Involved in installing and configuring Eclipse IDE, Ant, WebLogic, and Maven for development.
  • Implemented the entire application using core Java, Java Collections, Struts, and the Spring 3.0 MVC design framework.
  • Involved in injecting dependencies into code using Spring core module concepts such as IoC.
  • Extensively used various J2EE design patterns such as Factory, Singleton, Data Access Object, Data Transfer Object, Business Delegate, and Session Façade, which facilitates a clean distribution of roles and responsibilities across the various layers of processing.
  • Implemented Stateless Session Beans to handle complex business logic and transaction management in various modules.
  • Designed and implemented XML/WSDL/SOAP/RESTful web services providing an interface to various clients running both Java and non-Java applications.
  • Extensively involved in developing core persistence classes using the Hibernate 3.0 framework, writing HQL queries, and creating Hibernate mapping (.hbm) files (a minimal DAO sketch appears after this list).
  • Used CVS (Concurrent Version System) as the configuration management tool.
  • Prepared Test Cases to perform Unit, Integration and System Testing. Tested the developed components using JUnit 4.0.
  • Involved in configuring Hibernate mapping files and POJO objects.
  • Used Hibernate Transaction Management, Hibernate Batch Transactions and Hibernate cache concepts.
  • Used Hibernate Query Language (HQL) for writing the queries.
  • Used Log4J components for logging. Perform daily monitoring of log files and resolve issues.
  • Responsible and active across the full software development life cycle (SDLC) of the project: analysis, definition, design, implementation, and deployment.
  • Actively involved in getting the production issues resolved.
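
A minimal sketch of an HQL-based persistence class like those described above, using the Hibernate 3.x API; the Account entity, its fields, and the SessionFactory wiring are hypothetical and not shown:

    import java.util.List;

    import org.hibernate.Query;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;

    public class AccountDao {
        private final SessionFactory sessionFactory;

        public AccountDao(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        @SuppressWarnings("unchecked")
        public List<Account> findByStatus(String status) {
            // Account is a hypothetical entity mapped via Account.hbm.xml.
            Session session = sessionFactory.getCurrentSession();
            // HQL queries the mapped entity and its properties, not the table.
            Query query = session.createQuery(
                "from Account a where a.status = :status order by a.createdDate");
            query.setParameter("status", status);
            return query.list();
        }
    }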

Environment: Java 1.7/J2EE, Struts Framework 2.0, Spring 3.0, JSP 2.0, Web Services, Hibernate 3.0, JPA, HTML, JavaScript, jQuery, AJAX, Eclipse IDE, Java Beans, Log4j, CVS, WebLogic, Rational Rose, JUnit, Maven.

Confidential

Java/J2EE Developer

Responsibilities:

  • Implemented features such as logging and user session validation using the Spring AOP module.
  • Used the Spring MVC framework in the business tier and the Spring BeanFactory for initializing services.
  • Worked extensively with Spring IoC/dependency injection and configured cross-cutting concerns such as logging and security using Spring AOP (a minimal aspect sketch appears after this list).
  • Integrated Spring and Hibernate, injecting the HibernateTemplate class into the DAOs.
  • Coded numerous DAOs using HibernateDaoSupport.
  • Used Criteria, HQL, and SQL as the query languages in Hibernate mappings.
  • Developed shell scripts to call stored procedures which reside on the Database.
  • Used XML for data exchange and schemas (XSDs) for XML validation. Used XSLT for transformation of XML.
  • Implemented SOA architecture with web services using SOAP, WSDL and XML.
  • Employed the waterfall model and best practices for software development.
  • Deployed the application in JBoss Application Server.
  • Used SVN for version control.
  • Implemented the Java Message Service (JMS) for asynchronous messaging using Message-Driven Beans, which were used to call the EJB.
  • Worked on Junit for creating test cases for all the Business Rules and the application code.
  • Communicated with ILOG Rules using EJB Remote Lookup.
  • Used JiBX binding to convert Java objects to XML and vice versa.
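
A minimal sketch of a cross-cutting logging aspect as described above; the pointcut package (com.example.service) is hypothetical, and it assumes the aspectjweaver dependency with AspectJ auto-proxying enabled in the Spring configuration:

    import org.apache.log4j.Logger;
    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class ServiceLoggingAspect {
        private static final Logger LOG = Logger.getLogger(ServiceLoggingAspect.class);

        // Matches every method in the hypothetical service package.
        @Around("execution(* com.example.service..*(..))")
        public Object logInvocation(ProceedingJoinPoint pjp) throws Throwable {
            long start = System.currentTimeMillis();
            LOG.debug("Entering " + pjp.getSignature().toShortString());
            try {
                return pjp.proceed(); // invoke the actual service method
            } finally {
                LOG.debug("Exiting " + pjp.getSignature().toShortString()
                        + " after " + (System.currentTimeMillis() - start) + " ms");
            }
        }
    }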

Environment: Java (JDK 1.5), J2EE, Spring, Hibernate, XML, JDBC, Web Services, SOAP, UDDI, Subversion, Maven, JBoss, EJB 3.0, Apache CXF, JMS, UNIX, and Eclipse.

Confidential

Java Developer

Responsibilities:

  • Involved in Architecture and System Design and development process.
  • Set up the basic Struts-Hibernate application project from scratch based on the design.
  • Created the user interface screens using Struts MVC for logging into the system and performing various operations on network elements.
  • Users are classified into various organizations to differentiate their privileges when accessing the system.
  • Developed against the use cases and business logic, and unit tested the Struts-based application.
  • Developed JSP pages using custom tags and the Tiles and Struts frameworks.
  • Developed the user interface screens for presentation logic using JSP, Struts Tiles, and HTML.
  • Used the display tag to render large volumes of data; Bean, HTML, and Logic tags were used extensively to avoid Java expressions and scriptlets in JSPs.
  • Design patterns such as Session Façade, Command, Singleton, and DAO were implemented in the business layer.
  • JMS is used to send message objects to client queues and topics (a minimal sender sketch appears after this list).
  • JUnit test cases were created for unit testing.
  • Log4j is used for logging, with debug levels defined to control what is logged.
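
A minimal sketch of sending a message object to a queue as described above, using the JMS 1.1 API; the JNDI names are hypothetical and come from the container's configuration:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.ObjectMessage;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class OrderEventSender {
        public void send(java.io.Serializable event) throws Exception {
            InitialContext ctx = new InitialContext();
            // Hypothetical JNDI names, defined in the application server.
            ConnectionFactory factory =
                (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/OrderQueue");

            Connection connection = factory.createConnection();
            try {
                Session session =
                    connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                ObjectMessage message = session.createObjectMessage(event);
                producer.send(message); // consumed asynchronously, e.g. by an MDB
            } finally {
                connection.close();
            }
        }
    }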

Environment: Java 1.5, J2EE, JSP, EJB, Struts 1.2, Apache TOMCAT, Web Services, Hibernate, JMS, XML, XSL, HTML, JavaScript, CSS, AJAX.
