Hadoop Developer Resume
- Seeking a Hadoop Developer position to advance my career in Big Data.
- 6 years of overall IT experience: 4 years in Java development and 2 years in Hadoop/Big Data.
- Extensive experience using Spark RDDs and Scala.
- Experienced in software systems analysis, design, development, maintenance, implementation, unit testing, and documentation across distributed, client/server, and Internet/intranet applications.
- Hands on experience with Hadoop, Flume, Pig, Hive and Spark
- Worked with the Oozie workflow engine to manage interdependent Hadoop jobs and to automate several types of Hadoop jobs, such as Java MapReduce, Hive, Pig, and Sqoop.
- Comprehensive knowledge of Web/Client Server Development using n-tier architecture in J2EE framework.
- Analyzed business logic to capture all use cases and prepared design documents with UML methodologies (Use Case, Activity, Sequence, and Class Diagrams) using Rational Rose.
- Experience coordinating with multiple teams and achieving results on time.
- Excellent written and verbal communication skills; able to communicate effectively with all stakeholders, including customers, senior management, and business partners.
- Extensive experience using Oracle, PL/SQL, SQL, and UNIX shell scripts.
- Experienced with Windows 7, UNIX (Sun Solaris) and Linux operating systems.
J2EE Technologies: JDBC, JSP, Servlets
Database: MySQL, MongoDB, Oracle, DB2, SQL Server, HBase
Big data: Hadoop HDFS, MapReduce, Pig, HBase, Spark, Hive, Yarn, Sqoop, Kafka
Webserver: Tomcat 8
Application Server: BEA Weblogic server 10.3.6
Software Development: SDLC methodologies -Waterfall, Agile, V-Model, SCRUM
Cloud: Amazon Web Services, Microsoft Azure
Operating system: Linux, Windows
Automation tools: Jenkins, Chef
Version Control: Git
Confidential, New York
- Created HBase tables to load large sets of structured, semi-structured and unstructured data coming from UNIX, NoSQL and a variety of portfolios.
- Worked on loading data from the Linux file system to HDFS.
- Worked on migrating MapReduce programs into Spark RDD transformations using Spark and Scala.
- Responsible for importing log files from various sources into HDFS using Flume.
- Managed elastic Hadoop clusters.
- Worked on custom Pig Loaders and storage classes to handle a variety of data formats, such as JSON and XML.
- Used the Oozie workflow engine to manage interdependent Hadoop jobs and to automate several types of Hadoop jobs, such as Java MapReduce, Hive, Pig, and Sqoop.
- Scheduled data pipelines for automated data ingestion in AWS.
- Developed Spark scripts to process streaming data.
- Created Hive partitions and buckets based on State to enable bucket-based Hive joins for further processing.
- Integrated Apache Spark and Zeppelin service to present data graphically.
- Used Kafka and Amazon Kinesis for streaming data.
- Integrated Amazon EMR and Spark to provide faster analysis on streaming data.
- Estimated the hardware requirements for NameNode and DataNodes & worked as a team on the cluster planning.
- Utilized the AWS framework for content storage and Elasticsearch for document search.
- Used spot instances for Spark workloads to minimize cost and speed up Spark processing.
Environment: Hadoop, Flume, Pig, Hive, Spark, Oozie, Yarn, Apache Zeppelin, AWS, Amazon Kinesis, HBase, Amazon Redshift, Amazon EMR
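The MapReduce-to-Spark migrations above follow a standard pattern: rewriting a mapper/reducer pair as a chain of RDD transformations. A minimal pure-Python sketch of that flatMap → map → reduceByKey chain, with no Spark dependency and hypothetical input lines standing in for records read from HDFS:

```python
from functools import reduce
from itertools import groupby

# Hypothetical input -- stands in for lines read from HDFS.
lines = ["spark makes mapreduce jobs simpler", "spark jobs run in memory"]

# Classic MapReduce word count, written as the chained transformations
# an RDD port would use: flatMap -> map -> (shuffle) -> reduceByKey.
flat_mapped = [word for line in lines for word in line.split()]  # flatMap
mapped = [(word, 1) for word in flat_mapped]                     # map to (key, value)
grouped = groupby(sorted(mapped), key=lambda kv: kv[0])          # shuffle/group by key
counts = {key: reduce(lambda a, b: a + b, (v for _, v in pairs))  # reduceByKey
          for key, pairs in grouped}

print(counts["spark"])  # prints 2: "spark" appears once in each line
```

In Spark itself the same chain would be `rdd.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)`, with the shuffle handled by the framework instead of `sorted`/`groupby`.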
- Developed front-end screens using HTML and JSPs.
- Developed the interface to pre-populate XML Enrollment forms using SP Data Collector with Data Map techniques at the middle tier.
- Developed server-side servlets to start the data collector wizard and to map data transfer from existing Java Beans to DCML forms.
- Developed plug-ins to store the data maps in the database using middle-tier objects.
- Established database access through JDBC and created servlets for transfer and retrieval of data.
- Designed and developed an employee database maintenance system in Oracle and a Product Tracing System for the intranet.
- Designed user interfaces using HTML and Java applets (AWT and Swing); implemented server-side validations using servlets.
- Established database connections with the JDBC-ODBC bridge driver (Type 1) to retrieve data from and pass data to the database.
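The JDBC access pattern in the bullets above (connect, transfer via parameterized statements, retrieve via a result set) can be sketched with Python's built-in sqlite3 module standing in for the Oracle connection behind the JDBC-ODBC bridge; the `employees` table and its rows are illustrative, not from the original project:

```python
import sqlite3

# In-memory database stands in for the Oracle instance; schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")

# Transfer: parameterized inserts, the same pattern as a JDBC PreparedStatement.
conn.executemany("INSERT INTO employees (id, name) VALUES (?, ?)",
                 [(1, "Alice"), (2, "Bob")])
conn.commit()

# Retrieval: read the result set, as a servlet would with a JDBC ResultSet.
rows = conn.execute("SELECT id, name FROM employees ORDER BY id").fetchall()
print(rows)  # [(1, 'Alice'), (2, 'Bob')]
conn.close()
```

The parameterized form is what a Type 1 bridge driver would ultimately execute; it also avoids SQL injection, which string-concatenated queries do not.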