
Java/Big Data Engineer Resume

Framingham, MA

PROFESSIONAL SUMMARY:

  • 8+ years of professional IT experience with full project lifecycle development in J2EE and ETL technologies: requirements analysis, design, development, testing, Big Data, deployment and production support of software applications.
  • 4+ years of experience in Hadoop Development.
  • Experience in Software Development Life Cycle (Requirements Analysis, Design, Development, Testing, Deployment and Support).
  • Experienced in setting up standards and processes for Hadoop based application design and implementation.
  • Hands on experience in installing, configuring and using ecosystem components like Hadoop MapReduce, HDFS, HBase, Oozie, Hive, Pig, Spark and Flume.
  • Experienced in developing MapReduce programs using Apache Hadoop for working with Big Data.
  • Experience in NoSQL databases including HBase.
  • Experience in importing and exporting data using Sqoop from HDFS to relational database systems/mainframe and vice versa.
  • Hands on experience in application development using Java, RDBMS, and Linux shell scripting.
  • Performed data analysis using Hive and Pig.
  • Loaded log data into HDFS using Flume.
  • Experience in using Sqoop, ZooKeeper and Cloudera Manager.
  • Extensive experience in designing analytical OLAP and transactional OLTP databases.
  • Expertise in back-end procedure development for RDBMS database applications using SQL and PL/SQL. Hands on experience writing queries, stored procedures, functions and triggers using SQL.
  • Sound knowledge of ETL workflows, transformations and report creation using various technologies.
  • Adept at creating Unified Modeling Language (UML) diagrams such as Use Case diagrams, Activity diagrams, Class diagrams and Sequence diagrams using Rational Rose and Microsoft Visio.
  • Extensive experience in developing applications using Java, JSP, Servlets, JavaBeans, JSTL, JSP Custom Tag Libraries, JDBC, JNDI, SQL, AJAX, JavaScript and XML.
  • Experienced in using Agile methodologies including Extreme Programming (XP), Scrum and Test-Driven Development (TDD).
  • Proficient in integrating and configuring the Object-Relational Mapping tool Hibernate in J2EE applications, along with other open source frameworks like Struts and Spring.
  • Developed web applications based on different Design Patterns including Model-View-Controller (MVC), Data Access Object (DAO), Front Controller, Business Delegate, Service Locator etc.
  • Configured and developed web applications in Spring, employing the Spring MVC architecture and Inversion of Control.
  • Experience in building and deploying web applications on multiple application servers and middleware platforms including WebLogic, WebSphere, Apache Tomcat and JBoss.
  • Experience in writing test cases in Java Environment using JUnit.
  • Hands on experience in development of logging standards and mechanism based on Log4j.
  • Experience in writing SQL queries, stored procedures, views, functions and triggers in Oracle 9i/10g/11g and MySQL 4.x/5.x.
  • Good knowledge of Web Services, SOAP programming, WSDL, XML parsers like SAX and DOM, AngularJS, and responsive design/Bootstrap.
  • Demonstrated technical expertise, organization and client service skills in various projects undertaken.
  • Strong commitment to organizational work ethics, value based decision-making and managerial skills.

TECHNICAL SKILLS:

Big Data Ecosystem: Hadoop, MapReduce, YARN, HDFS, HBase, ZooKeeper, Hive, Hue, Pig, Sqoop, Cassandra, Spark, Oozie, Storm, Flume, Talend, Cloudera Manager, MapR, Hortonworks clusters.

Languages: C, Java, PL/SQL, Pig Latin, Python, HiveQL, Scala, SQL

ETL, Reporting & Web Technologies: Informatica, SSIS, BOXI, Tableau, SSRS, HTML, XHTML, CSS, XML.

Frameworks: Struts, Spring 3.x, JDBC

Web Services: SOAP, RESTful, JAX-WS

Web Servers: WebLogic, WebSphere, Apache Tomcat.

Scripting Languages: Shell Scripting, JavaScript.

Databases: Oracle 9i/10g, Microsoft SQL Server, MySQL, DB2, Teradata, MongoDB, Cassandra, HBase

Design: UML, Rational Rose, E-R Modelling, Microsoft Visio

IDE & Build Tools: Eclipse, NetBeans, Ant and Maven.

Version Control Systems: CVS, SVN, GitHub.

PROFESSIONAL EXPERIENCE:

Confidential, Framingham, MA

Java/Big Data Engineer

Responsibilities:

  • Installed and configured Hadoop MapReduce and HDFS, and developed multiple MapReduce jobs in Java for data cleansing and preprocessing (a minimal mapper sketch appears after this list).
  • Evaluated business requirements and prepared detailed specifications that follow project guidelines required to develop written programs.
  • Implemented AWS solutions using services such as EC2, S3, RDS, Redshift and VPC.
  • Responsible for building scalable distributed data solutions using Hadoop.
  • Analyzed large data sets to determine the optimal way to aggregate and report on them.
  • Handled importing of data from various data sources, performed transformations using Hive, MapReduce, and loaded data into HDFS.
  • Extracted data from MySQL and AWS Redshift into HDFS using Sqoop.
  • Wrote MapReduce code to convert unstructured data into semi-structured data and loaded it into Hive tables.
  • Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and troubleshooting.
  • Worked extensively on creating MapReduce jobs to power data for search and aggregation.
  • Worked extensively with Sqoop for importing metadata from Oracle.
  • Extensively used Pig for data cleansing.
  • Experienced working with Hadoop Big Data technologies (HDFS and MapReduce programs), Hadoop ecosystem components (HBase, Hive, Pig) and the NoSQL databases MongoDB and Cassandra.
  • Created partitioned tables in Hive.
  • Managed and reviewed Hadoop log files.
  • Involved in creating Hive tables, loading them with data and writing Hive queries that run internally as MapReduce jobs.
  • Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting.
  • Installed and configured Pig and wrote Pig Latin scripts.
  • Developed Pig Latin scripts to extract the data from the web server output files to load into HDFS.
  • Experience in Hadoop 2.x with Spark and Scala.
  • Configured Spark Streaming to receive real-time data from Kafka and store the streamed data in HDFS using Scala (prototype).
  • Implemented Spark using Scala and SparkSQL for faster testing and processing of data.
  • Created HBase tables to store data arriving in various formats from different portfolios (see the HBase sketch after this list).
  • Developed MapReduce jobs to automate the transfer of data from HBase.
  • Used SVN and TortoiseSVN version control tools for code management (check-ins, check-outs and synchronizing the code with the repository).
  • Worked hands-on with the ETL process.
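
A minimal sketch of the kind of data-cleansing Mapper referenced above, against the Hadoop mapreduce Java API. The record layout, delimiter, counter names and class name are hypothetical, not from the original project:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Map-only cleansing step: drop malformed records and trim fields.
    public class CleansingMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
        private static final int EXPECTED_FIELDS = 5; // hypothetical record width

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",", -1);
            if (fields.length != EXPECTED_FIELDS) {
                // Track bad records in a job counter instead of emitting them.
                context.getCounter("cleansing", "malformed").increment(1);
                return;
            }
            StringBuilder out = new StringBuilder();
            for (int i = 0; i < fields.length; i++) {
                if (i > 0) out.append(',');
                out.append(fields[i].trim());
            }
            context.write(NullWritable.get(), new Text(out.toString()));
        }
    }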
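
And a sketch of the HBase table creation referenced above, against the HBase 1.x Java client API. The table name, column family and row-key format are hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PortfolioTableSetup {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                TableName name = TableName.valueOf("portfolio_events"); // hypothetical
                HTableDescriptor desc = new HTableDescriptor(name);
                desc.addFamily(new HColumnDescriptor("d")); // single column family
                admin.createTable(desc);

                // Write one cell to verify the new table end to end.
                try (Table table = conn.getTable(name)) {
                    Put put = new Put(Bytes.toBytes("acct1#20150101"));
                    put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("format"),
                                  Bytes.toBytes("csv"));
                    table.put(put);
                }
            }
        }
    }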

Environment: Spark/Scala, Python, Hadoop, MapReduce, Storm, Hive, HBase, HDFS, Java (JDK 1.6), Linux, Cloudera, MongoDB, Cassandra, AWS, Oracle 10g, PL/SQL, SQL*Plus, Toad 9.6, UNIX Shell Scripting, Eclipse.

Confidential, Austin, TX

Sr. Big Data Developer

Responsibilities:

  • Worked with business teams and created Hive queries for ad hoc access.
  • Continuously monitored and managed the Hadoop cluster through Cloudera Manager.
  • Involved in review of functional and non-functional requirements.
  • Responsible for managing data coming from different sources.
  • Installed and configured Hadoop ecosystem components such as HBase, Flume, Pig, Storm and Sqoop.
  • Loaded daily data from websites into the Hadoop cluster using Flume.
  • Involved in loading data from UNIX file system to HDFS.
  • Created Hive tables and worked on them using HiveQL.
  • Created complex Hive tables and executed complex Hive queries against the Hive warehouse.
  • Wrote MapReduce code to convert unstructured data to semi-structured data.
  • Used Pig for the extraction, transformation and loading of semi-structured data.
  • Installed and configured Hive and wrote Hive UDFs (a UDF sketch appears after this list).
  • Developed Hive queries for the analysts.
  • Developed workflows in Oozie to automate the tasks of loading data into HDFS and pre-processing it with Pig.
  • Provided cluster coordination services through ZooKeeper.
  • Collected log data from web servers and integrated it into HDFS using Flume.
  • Worked on Hive to expose data for further analysis and to transform files from different analytical formats to text files.
  • Designed and implemented MapReduce jobs to support distributed data processing.
  • Supported MapReduce programs running on the cluster.
  • Involved in HDFS maintenance and loading of structured and unstructured data.
  • Wrote MapReduce jobs using the Java API.
  • Designed NoSQL schemas in HBase.
  • Worked with the NoSQL database MongoDB and worked heavily on Hive, HBase and HDFS.
  • Wrote the shell scripts to monitor the health check of Hadoop daemon services and respond accordingly to any warning or failure conditions.
  • Involved in Hadoop cluster tasks such as adding and removing nodes without affecting running jobs and data.
  • Developed Pig UDFs to pre-process the data for analysis.
  • Configured Spark Streaming to receive real-time data from Kafka and store the streamed data in HDFS using Scala.
  • Implemented Spark using Scala and SparkSQL for faster testing and processing of data.
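
A minimal sketch of the classic Hive UDF pattern referenced above. The phone-normalization logic, function name and jar name are hypothetical; the real UDFs targeted project-specific rules:

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Registered and used from Hive roughly as:
    //   ADD JAR normalize-udf.jar;
    //   CREATE TEMPORARY FUNCTION normalize_phone AS 'NormalizePhone';
    //   SELECT normalize_phone(phone) FROM customers;
    public final class NormalizePhone extends UDF {
        private final Text result = new Text(); // reused to avoid per-row allocation

        public Text evaluate(Text input) {
            if (input == null) {
                return null; // Hive passes NULL through
            }
            result.set(input.toString().replaceAll("[^0-9]", ""));
            return result;
        }
    }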

Environment: Spark, Scala, Python, MapReduce, Storm, HDFS, Hive, Pig, HBase, Java 6, Cloudera, Linux, XML, MySQL, MySQL Workbench, Eclipse, Cassandra

Confidential, Chicago, IL

Hadoop Developer

Responsibilities:

  • Installed and configured Apache Hadoop to test the maintenance of log files in the Hadoop cluster.
  • Installed and configured Hive, Pig, Sqoop, Flume and Oozie on the Hadoop cluster.
  • Installed the Oozie workflow engine to run multiple Hive and Pig jobs.
  • Set up and benchmarked Hadoop/HBase clusters for internal use.
  • Developed Java MapReduce programs for the analysis of sample log files stored in the cluster (a driver sketch appears after this list).
  • Developed simple to complex MapReduce jobs using Hive and Pig.
  • Developed MapReduce programs for data analysis and data cleaning.
  • Developed Pig Latin scripts for the analysis of semi-structured data.
  • Developed industry-specific UDFs (user-defined functions).
  • Used Hive and created Hive tables and involved in data loading and writing Hive UDFs.
  • Used Sqoop to import data into HDFS and Hive from other data systems.
  • Continuously monitored and managed the Hadoop cluster using Cloudera Manager.
  • Migrated ETL processes from Oracle to Hive to test easier data manipulation.
  • Developed Hive queries to process the data for visualization.
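
A sketch of how the Java MapReduce log-analysis programs above could be wired together. The space-delimited log layout and the status-code field position are hypothetical assumptions; IntSumReducer is Hadoop's stock summing reducer:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

    public class LogAnalysisDriver {

        // Counts log lines by HTTP status code, assumed here to be the 9th
        // space-separated field (a hypothetical combined-log-format layout).
        public static class StatusCodeMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);

            @Override
            protected void map(LongWritable key, Text line, Context context)
                    throws IOException, InterruptedException {
                String[] parts = line.toString().split(" ");
                if (parts.length > 8) {
                    context.write(new Text(parts[8]), ONE);
                }
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "log-analysis");
            job.setJarByClass(LogAnalysisDriver.class);
            job.setMapperClass(StatusCodeMapper.class);
            job.setCombinerClass(IntSumReducer.class); // sums counts map-side
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }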

Environment: Apache Hadoop, HDFS, Cloudera Manager, Java, MapReduce, Eclipse Indigo, Hive, Pig, Sqoop, Oozie and SQL.

Confidential, Emeryville, CA

Java Developer

Roles & Responsibilities:

  • Involved in Requirements gathering & analysis.
  • Involved in Design, Development, Testing and Integration of the application.
  • Designed JSPs using JavaBeans.
  • Used HTML, DHTML, JavaScript and AJAX for implementing dynamic media playout.
  • Involved in preparation of scope and traceability matrix for requirements and test scripts.
  • Implemented business logic and database connectivity.
  • Performed client-side installation and configuration of the project.
  • Implemented Struts (Action and Controller classes) for dispatching requests to the appropriate classes.
  • Used simple Struts validation to validate user input per the business logic and initial data loading.
  • Coordinated application testing with the help of the testing team.
  • Wrote database queries on Oracle 9i and worked on JDBC queries as part of the implementation.
  • Ability to quickly adjust priorities and take on projects with limited specifications.
  • Maintained a separate DAO layer for CRUD operations (a DAO sketch appears after this list).
  • Effective team player with excellent logical and analytical abilities.
  • Followed coding guidelines and updated leads on status in a timely manner.
  • Supported the applications through production and maintenance releases.
  • Involved in the company's Level 5 assessment and followed the process.
  • Instrumental in tuning the framework to meet the performance standards.
  • Excellent written and verbal communication skills, inter-personal skills and self-learning attitude.
  • Excellent in designing and developing stored procedures.
  • Wrote JUnit test cases and managed code versioning using SVN.
  • Involved in building the code using Ant and in the deployment.
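
A minimal sketch of the DAO layer referenced above. The table, column and class names are hypothetical, and try-with-resources is used for brevity even though the project itself targeted Java 1.4, where explicit finally blocks would close the resources instead:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Keeps all JDBC access for the accounts table behind one class.
    public class AccountDao {
        private final DataSource dataSource;

        public AccountDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public String findName(long id) throws SQLException {
            String sql = "SELECT name FROM accounts WHERE id = ?";
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("name") : null;
                }
            }
        }

        public void updateName(long id, String name) throws SQLException {
            String sql = "UPDATE accounts SET name = ? WHERE id = ?";
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, name);
                ps.setLong(2, id);
                ps.executeUpdate();
            }
        }
    }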

Environment: Java 1.4, JSP, Servlets, Struts framework, tag libraries, JavaScript, CSS, AJAX, JDBC, JNDI, Oracle 8i, JavaBeans, Struts validation framework, Windows/UNIX, Ant, JUnit, SVN, QC, EditPlus, WebLogic application server, SQL Developer

Confidential

Java Developer

Responsibilities:

  • Implemented new features such as highly performant, multi-threaded transforms that process incoming messages into the trading object model using Java and Struts 1.2.
  • Conducted client-side validations using JavaScript.
  • Coded JDBC calls in the servlets to access the Oracle database tables, and invoked EJB 2.1 stateless session beans for business service implementation.
  • Used Spring Batch for reading, validating and writing the daily batch files into the database (a validator sketch appears after this list).
  • Developed user management screens using JSF framework, business components using Spring framework and DAO classes using Hibernate framework for persistence management and involved in integrating the frameworks for the project.
  • Implemented J2EE design patterns such as Session Facade, Factory, DAO, DTO, and MVC.
  • Designed and developed the persistence service using the Hibernate framework (see the Hibernate sketch after this list).
  • Configured and Integrated JSF, Spring and Hibernate frameworks.
  • Responsible for writing Java code to convert HTML files to PDF file using Apache FOP.
  • Involved in the performance tuning of PL/SQL statements.
  • Developed database triggers and procedures to update the real-time cash balances.
  • Worked closely with the testing team in creating new test cases, and created the use cases for the module before the testing phase.
  • Wrote Ant build scripts to compile Java classes and create JARs, performed unit testing and packaged them into EAR files.
  • Coordinated work with the DB team, QA team, business analysts and client reps to complete the client requirements efficiently.
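
A minimal sketch of the Spring Batch validation step referenced above; in Spring Batch, an ItemProcessor that returns null filters the record out before the write. The TradeRecord type and its rule are hypothetical:

    import java.math.BigDecimal;
    import org.springframework.batch.item.ItemProcessor;

    // Hypothetical record type bound from a line of the daily batch file.
    class TradeRecord {
        private BigDecimal amount;
        public BigDecimal getAmount() { return amount; }
        public void setAmount(BigDecimal amount) { this.amount = amount; }
    }

    public class TradeRecordValidator implements ItemProcessor<TradeRecord, TradeRecord> {
        @Override
        public TradeRecord process(TradeRecord item) {
            // Returning null tells Spring Batch to drop the record instead
            // of passing it on to the ItemWriter.
            if (item.getAmount() == null
                    || item.getAmount().compareTo(BigDecimal.ZERO) < 0) {
                return null;
            }
            return item;
        }
    }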
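
And a sketch of the Hibernate persistence service referenced above, using the session-per-operation pattern typical of Hibernate 3.0. The DAO and entity names are hypothetical:

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;

    public class TradeDao {
        private final SessionFactory sessionFactory;

        public TradeDao(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        // Persists one mapped entity inside its own transaction.
        public void save(Object trade) {
            Session session = sessionFactory.openSession();
            Transaction tx = session.beginTransaction();
            try {
                session.save(trade); // INSERT for the mapped entity
                tx.commit();
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            } finally {
                session.close();
            }
        }
    }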

Environment: Java/J2EE, JMS, JNDI, JSP, JSF, MyFaces, Spring, Tiles, Hibernate 3.0, HTML, DHTML, CSS, WebSphere 5.1.2, Ant, ClearQuest, Oracle 9i, AJAX, JSTL, Eclipse, JUnit, JavaScript.
