Hadoop Developer Resume

Colorado

PROFESSIONAL SUMMARY:

  • 8+ years of experience in the development of Big Data projects using Hadoop, Hive, HDP, Pig, Flume, Storm and MapReduce open-source tools/technologies.
  • Experience in Big Data Analytics with hands on experience in Data Extraction, Transformation, Loading and Data Analysis, Data Visualization using Cloudera Platform (Map Reduce, HDFS, Hive, Pig, Sqoop, Flume, Hbase, Oozie).
  • Substantial experience writing MapReduce jobs in Java and working with Pig, Flume, Tez, ZooKeeper, Hive and Storm.
  • Hands-on experience installing, configuring and using ecosystem components such as Hadoop MapReduce, HDFS, HBase, Avro, ZooKeeper, Oozie, Hive, HDP, Cassandra, Sqoop, Pig and Flume.
  • Experience in web-based languages such as HTML, CSS, PHP, XML and other web methodologies including Web Services and SOAP. Worked in a multi-clustered environment and set up the Cloudera Hadoop ecosystem.
  • Good experience importing and exporting data between HDFS and relational database management systems using Sqoop. Extensive knowledge of NoSQL databases such as HBase.
  • Background with traditional databases such as Oracle, Teradata, Netezza, SQL Server, ETL tools / processes and data warehousing architectures. Extensive Knowledge on automation tools such as Puppet and Chef.
  • Proficient in working with MapReduce programs using Apache Hadoop for working with Big Data. Experienced in working with different Hadoop ecosystem components such as HDFS, MapReduce, HBase, Spark, Yarn, Kafka, Zookeeper, PIG, HIVE, Sqoop, Storm, Oozie, Impala and Flume.
  • Experience in transferring Streaming data from different data sources into HDFS and HBase using Apache Flume.
  • Experienced in using Zookeeper and OOZIE Operational Services for coordinating the cluster and scheduling workflows. Experience in working with Cloudera & Hortonworks Distribution of Hadoop.
  • Extensive experience in Java and J2EE technologies like Servlets, JSP, Enterprise Java Beans (EJB), JDBC.
  • Experienced in importing data from various sources, performing transformations using Hive and MapReduce, loading data into HDFS, and extracting data from relational databases such as Oracle, MySQL and Teradata into HDFS and Hive using Sqoop.
  • Expertise in writing Hive queries, Pig and MapReduce scripts and loading large volumes of data from the local file system and HDFS into Hive. Experienced in loading data into Hive partitions and creating buckets in Hive.
  • Hands-on experience fetching live stream data from DB2 into HBase tables using Spark Streaming and Apache Kafka. Good experience working with cloud environments such as Amazon Web Services (AWS) EC2 and S3.
  • Hands-on experience working with the Amazon EMR framework and transferring data to EC2 servers.
  • Expertise in Software Development Life Cycle (SDLC) and Software Testing Life Cycle (STLC).
  • Expertise in writing MapReduce jobs in Java for processing large structured, semi-structured and unstructured data sets and storing them in HDFS (a minimal sketch follows this list).
  • Experienced in Application Development using Java, Hadoop, RDBMS and Linux shell scripting and performance tuning. Experienced in relational databases like MySQL, Oracle and NoSQL databases like HBase and Cassandra.
  • Expertise in working with NoSQL databases including Hbase, Cassandra and its integration with Hadoop cluster.
  • Expertise in developing Hadoop clusters on public and private cloud environments such as Amazon AWS and OpenStack.
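
The sketch below illustrates the kind of Java MapReduce job referenced above: a simple token count over text input in HDFS. The class name, paths and counting logic are illustrative placeholders rather than code from any specific engagement.

    // Minimal illustrative MapReduce job: counts occurrences of each token in
    // text input and writes the results back to HDFS. Names and paths are
    // examples only.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class RecordCountJob {

        public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);   // emit (token, 1) for each token
                    }
                }
            }
        }

        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();                 // aggregate counts per token
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "record-count");
            job.setJarByClass(RecordCountJob.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));     // HDFS input path
            FileOutputFormat.setOutputPath(job, new Path(args[1]));   // HDFS output path
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }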

TECHNICAL SKILLS:

Big Data Technologies: Hadoop, HDFS, Hive, MapReduce, Pig, Sqoop, Flume, Oozie, Spark, Tableau

Programming Languages: C, C++, Java, Scala

Java/J2ee Technologies: Java, Java Beans, J2EE (JSP, Servlets, EJB), JDBC

Databases/ETL: Oracle 11g, SQL Server 2000, MySQL, SQL, PL/SQL, Informatica v8.x

NoSQL Databases: HBase and Cassandra

Web Technologies: HTML, JSP, JavaScript, Ajax, XML, PHP, AWS

Servers: WebSphere, Apache Web Server, Tomcat Server 7.0

Methodologies: UML, OOP, OOA, OOD and Agile

Version Controls: Tortoise CVS Client, SVN

Mapping Tools: Hibernate 3.0

Operating Systems: LINUX (Centos and Ubuntu), Windows XP, 7, MS DOS, UNIX

Scripting Languages: Perl, Python, Shell scripts

Build Tools: ANT, Maven 2.2.1

IDE/Tools/Utilities: Eclipse Helios, MS Visio, MS Office 2010, Control M, SQL Programmer

PROFESSIONAL EXPERIENCE:

Confidential, Colorado

Hadoop Developer

Responsibilities:

  • Implemented a CDH5 Hadoop cluster on CentOS. Assisted with performance tuning and monitoring. Experienced in Spark Context, Spark SQL, pair RDDs and Spark on YARN.
  • Conducted code reviews to ensure systems operations and prepare code modules for staging.
  • Ran a scrum-based development group; handled file system management and monitoring.
  • Played a key role in driving a high-performance infrastructure strategy, architecture and scalability.
  • Involved in converting Hive/SQL queries into Spark Transformations using Spark RDD's and Scala.
  • Created end-to-end Spark-Solr applications using Scala to perform data cleansing, validation and transformation according to requirements. Managed and reviewed Hadoop log files.
  • Used Spark Streaming APIs to perform transformations and actions on the fly to build a common learner data model that receives data from Kafka in near real time and persists it to Cassandra (a minimal sketch follows this list).
  • Utilized high-level information architecture to design modules for complex programs.
  • Wrote scripts to automate application deployments and configurations. Monitored YARN applications.
  • Implemented HAWQ to serve queries with lower latency than the cluster's other Hadoop-based query interfaces.
  • Wrote map reduce programs to clean and pre-process the data coming from different sources.
  • Implemented various output formats like Sequence file and parquet format in Map reduce programs. Also, implemented multiple output formats in the same program to match the use cases.
  • Implemented test scripts to support test driven development and continuous integration.
  • Converted text files to Avro and then to Parquet so the files could be used with other Hadoop ecosystem tools.
  • Experienced in loading and transforming large sets of structured, semi-structured and unstructured data.
  • Exported the analyzed data to HBase using Sqoop to generate reports for the BI team.
  • Analyzed large amounts of data sets to determine optimal way to aggregate and report on it.
  • Participated in the requirement gathering and analysis phase of the project, documenting business requirements by conducting workshops/meetings with various business users.
  • Worked on external HAWQ tables where data is loaded directly from CSV files and then loaded into internal tables. Responsible for implementation and ongoing administration of the Hadoop infrastructure.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
  • Point of Contact for Vendor escalation.
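
The learner-data-model bullet above describes a Kafka-to-Spark-Streaming-to-Cassandra flow; the sketch below is a minimal Java illustration of that pattern, not the production job. The topic name, keyspace, table and the LearnerEvent bean are hypothetical placeholders.

    // Illustrative sketch only: consumes events from a Kafka topic with Spark
    // Streaming and persists each micro-batch to Cassandra. Topic, keyspace,
    // table and the LearnerEvent bean are assumptions for the example.
    import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
    import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapToRow;

    import java.io.Serializable;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaDStream;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka010.ConsumerStrategies;
    import org.apache.spark.streaming.kafka010.KafkaUtils;
    import org.apache.spark.streaming.kafka010.LocationStrategies;

    public class LearnerStreamJob {

        // Simple bean whose properties map to Cassandra columns (id, name, score).
        public static class LearnerEvent implements Serializable {
            private String id; private String name; private double score;
            public LearnerEvent() {}
            public LearnerEvent(String id, String name, double score) {
                this.id = id; this.name = name; this.score = score;
            }
            public String getId() { return id; }       public void setId(String id) { this.id = id; }
            public String getName() { return name; }   public void setName(String name) { this.name = name; }
            public double getScore() { return score; } public void setScore(double score) { this.score = score; }
        }

        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf()
                    .setAppName("learner-stream")
                    .set("spark.cassandra.connection.host", "127.0.0.1");  // assumed Cassandra host
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

            Map<String, Object> kafkaParams = new HashMap<>();
            kafkaParams.put("bootstrap.servers", "localhost:9092");        // assumed broker
            kafkaParams.put("key.deserializer", StringDeserializer.class);
            kafkaParams.put("value.deserializer", StringDeserializer.class);
            kafkaParams.put("group.id", "learner-model");

            JavaInputDStream<ConsumerRecord<String, String>> stream = KafkaUtils.createDirectStream(
                    jssc,
                    LocationStrategies.PreferConsistent(),
                    ConsumerStrategies.<String, String>Subscribe(Arrays.asList("learner-events"), kafkaParams));

            // Parse CSV-style messages "id,name,score" into the bean.
            JavaDStream<LearnerEvent> events = stream.map(record -> {
                String[] parts = record.value().split(",");
                return new LearnerEvent(parts[0], parts[1], Double.parseDouble(parts[2]));
            });

            // Persist each micro-batch to the Cassandra table learner.events.
            events.foreachRDD(rdd ->
                    javaFunctions(rdd)
                            .writerBuilder("learner", "events", mapToRow(LearnerEvent.class))
                            .saveToCassandra());

            jssc.start();
            jssc.awaitTermination();
        }
    }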

Environment: MapReduce, HDFS, Hive, Pig, Hue, Spark, Kafka, Oozie, Core Java, Eclipse, HBase, Flume, Cloudera, Oracle 11g/12c, DB2, IDMS, VSAM, SQL*Plus, Toad, PuTTY, UNIX Shell Scripting, Pentaho Big Data, YARN, HAWQ, Spring XD, CDH.

Confidential, Livonia, MI

Hadoop Developer

Responsibilities:

  • Developed Java code that streams web log data into Hive using REST services.
  • Developed Java code that streams Salesforce data into Hive using the Spark Streaming API.
  • Executed Hive queries on tables stored in Hive to perform data analysis that meets the business requirements.
  • Worked on Configuring Zookeeper, Kafka cluster. Worked on migrating data from MongoDB to Hadoop.
  • Worked on creating Kafka topics and partitions and writing custom partitioner classes.
  • Worked on Big Data Integration and Analytics based on Hadoop, Spark and Kafka.
  • Worked with Kafka for the proof of concept for carrying out log processing on a distributed system.
  • Integrated Apache Storm with Kafka to perform web analytics. Uploaded click stream data from Kafka to HDFS, HBase and Hive by integrating with Storm. Implementation of the Business logic layer for MongoDB Services.
  • Performed real-time analysis of the incoming data using the Kafka consumer API, Kafka topics and Spark Streaming with Scala. Used DataStax OpsCenter for maintenance operations and keyspace and table management.
  • Implemented advanced procedures like text analytics and processing using the in-memory computing capabilities like Spark. Real time streaming the data using Spark with Kafka.
  • Developed Spark code using Python and Spark-SQL/Streaming for faster processing of data.
  • Built real time pipeline for streaming data using Kafka and SparkStreaming.
  • Installation & configuration of a Hadoop cluster using Ambari along with Hive.
  • Developed Kafka producer and consumers, Cassandra clients and Spark along with components on HDFS, Hive.
  • Processing large data sets in parallel across the Hadoop cluster for pre-processing.
  • Developed the code for Importing and exporting data into HDFS using Sqoop.
  • Developed Map Reduce programs to join data from different data sources using optimized joins by implementing bucketed joins or map joins depending on the requirement.
  • Imported data from structured data source into HDFS using Sqoop incremental imports.
  • Implemented custom Kafka partitioners to send data to different categorized topics (a minimal sketch follows this list).
  • Implemented Storm topology with Streaming group to perform real time analytical operations.
  • Experience in implementing Kafka Spouts for streaming data and different bolts to consume data.
  • Created Hive tables and partitions and implemented incremental imports to perform ad-hoc queries on structured data. Wrote Java scripts that execute different MongoDB queries.
  • Wrote shell scripts that run multiple Hive jobs to incrementally refresh Hive tables used to generate Tableau reports for the business.
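
A custom Kafka partitioner such as the one referenced above implements Kafka's Partitioner interface. The sketch below is a minimal, hypothetical example that pins a "priority" category to one partition and hashes all other keys over the remaining partitions; the categorization rule is made up for illustration.

    // Illustrative custom Kafka partitioner: routes records whose key starts
    // with "priority" to partition 0 and spreads all other records across the
    // remaining partitions. The category rule is an example only.
    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;
    import org.apache.kafka.common.PartitionInfo;

    public class CategoryPartitioner implements Partitioner {

        @Override
        public void configure(Map<String, ?> configs) {
            // no configuration needed for this sketch
        }

        @Override
        public int partition(String topic, Object key, byte[] keyBytes,
                             Object value, byte[] valueBytes, Cluster cluster) {
            List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
            int numPartitions = partitions.size();
            String k = key == null ? "" : key.toString();
            if (k.startsWith("priority")) {
                return 0;                                   // dedicated partition for the priority category
            }
            // Hash all other keys over the remaining partitions (1..n-1).
            int bucket = Math.abs(k.hashCode() % Math.max(1, numPartitions - 1));
            return numPartitions > 1 ? 1 + bucket : 0;
        }

        @Override
        public void close() {
            // nothing to release
        }
    }

A producer would select it via the partitioner.class producer property (ProducerConfig.PARTITIONER_CLASS_CONFIG).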

Environment: Hadoop, Hive, Flume, Linux, Shell Scripting, Java, Eclipse, MongoDB, Kafka, Spark, Zookeeper, Sqoop, Ambari, HDFS, Pig, Data Lake, MapReduce, Cloudera, Tableau, Snappy, HBase, Scala 2.10/2.11, Windows 7/Vista/ XP, Linux, Unix, NoSQL, MySQL, Shell Scripting, Ubuntu, Teradata.

Confidential, Memphis, TN

Hadoop Developer

Responsibilities:

  • Involved in the full project life cycle: design, analysis, logical and physical architecture modeling, development, implementation and testing.
  • Designed, deployed and managed cluster nodes for data platform operations (racking/stacking).
  • Installed and configured the cluster; set up Puppet for centralized configuration management.
  • Monitored the cluster using various tools to track how the nodes were performing.
  • Expertise in cluster tasks such as adding and removing nodes without any effect on running jobs or data.
  • Worked on the AngularJS and Node.js frameworks. Worked on Impala and Pig to provide end-user access.
  • Wrote scripts to automate application deployments and configurations. Monitored YARN applications. Troubleshot and resolved cluster-related system problems.
  • Developed MapReduce programs to parse the raw data and store the refined data in tables.
  • Designed and Modified Database tables and used HBASE Queries to insert and fetch data from tables.
  • Involved in moving all log files generated from various sources to HDFS for further processing through Flume.
  • Implemented nine nodes CDH3 Hadoop cluster on Red hat LINUX.
  • Developed data pipelines using Pig and Hive from Teradata and DB2 data sources. These pipelines used customized UDFs to extend the ETL functionality (a minimal sketch follows this list). Designed a conceptual model with Spark for performance optimization.
  • Developed a speech recognition engine through acoustic and language model training and testing using IBM Voice Tailor, shell and Perl in a Linux environment. Involved in fixing issues arising out of duration testing.
  • Worked on changing and maintaining the data warehouse and optimizing dimensions.
  • Developed algorithms for identifying influencers within specified social network channels.
  • Developed and updated social media analytics dashboards on regular basis.
  • Involved in loading and transforming large sets of structured, semi structured and unstructured data from relational databases into HDFS using Sqoop imports. Analyzing data with Hive, Pig and Hadoop Streaming.
  • Responsible for analyzing and cleansing raw data by performing Hive queries and running Pig scripts on data.
  • Developed Pig Latin scripts to extract the data from the web server output files to load into HDFS.
  • Experience configuring spouts and bolts in various Apache Storm topologies and validating data in the bolts.
  • Created Hive tables, loaded data and wrote Hive queries that run internally as MapReduce jobs.
  • Used Oozie operational services for batch processing and scheduling workflows dynamically.
  • Populated HDFS and Cassandra with huge amounts of data using Apache Kafka.
  • Exported the analyzed data to HBase using Sqoop to generate reports for the BI team.
  • Analyzed large amounts of data sets to determine optimal way to aggregate and report on it.
  • Participated in the requirement gathering and analysis phase of the project, documenting business requirements by conducting workshops/meetings with various business users.
  • Developed Pig Latin Scripts to pre-process the data. Experienced in working with Apache Storm.
  • Hands on experience in application development using Java, RDBMS, and Linux shell scripting.
  • Involved in fetching brand data from social media applications such as Facebook and Twitter.
  • Performed data mining investigations to find new insights related to customers.
  • Involved in forecasting based on current results and insights derived from data analysis.
  • Involved in collecting data and identifying data patterns to build a trained model using machine learning.
  • Created a complete processing engine based on Cloudera's distribution, enhanced for performance.
  • Researched high-performance Hadoop on AWS, writing Pig and MapReduce jobs.
  • Managed and reviewed Hadoop log files. Reviewed technical documentation and provided feedback.
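
The customized Pig UDFs mentioned above extend Pig's EvalFunc; the sketch below is a minimal, hypothetical example that normalizes a raw text field (the project's actual cleansing rules were business-specific). In a Pig Latin script it would be registered with REGISTER and then called like a built-in function.

    // Illustrative Pig EvalFunc UDF of the kind used to extend ETL pipelines:
    // normalizes a raw string field (trim, lower-case, collapse whitespace).
    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    public class NormalizeText extends EvalFunc<String> {

        @Override
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null;                       // pass nulls through untouched
            }
            String raw = input.get(0).toString();
            return raw.trim().toLowerCase().replaceAll("\\s+", " ");
        }
    }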

Environment: Java, NLP, HBase, Machine Learning, Hadoop, HDFS, MapReduce, Accumulo, Perl, Python, Hive, Apache Storm, Sqoop, Flume, Oozie, Apache Kafka, JavaBeans, Apache Spark, RHEL 7.1, Zookeeper, Solr, DWH, Impala, AWS, MongoDB, MySQL, Eclipse, Pig, Linux, SQL, Autosys, Tableau, Cassandra.

Confidential, Albany, NY

Hadoop Developer

Responsibilities:

  • Ingested data into the Indie data lake using an open-source Hadoop distribution to process structured, semi-structured and unstructured datasets with open-source Apache tools such as Flume and Sqoop into the Hive environment (on the IBM BigInsights 4.1 platform).
  • Developed Spark code using Scala and Spark-SQL for faster testing and data processing.
  • Developed MapReduce jobs in Java API to parse the raw data and store the refined data.
  • Developed Kafka producers and consumers, HBase clients, Spark and Hadoop MapReduce jobs along with components on HDFS and Hive. Developed predictive analytics using Apache Spark Scala APIs.
  • Imported millions of structured data from relational databases using Sqoop import to process using Spark and stored the data into HDFS in parquet format. Worked on Database designing, Stored Procedures, and PL/SQL.
  • Implemented Spark Data Frames transformations, actions to migrate Map reduce algorithms.
  • Explored Spark to improve the performance and optimization of existing algorithms in Hadoop using Spark Context, Spark SQL, Data Frames, pair RDDs and Spark on YARN (a minimal sketch follows this list).
  • Developed solutions to pre-process large sets of structured data in different file formats (text files, Avro data files, sequence files, XML and JSON files, ORC and Parquet). Used the DataFrame API in Java to work with distributed collections of data organized into named columns. Used AWS S3 for backup storage via Lambda.
  • Automating and scheduling the Sqoop jobs in a timely manner using Unix Shell Scripts.
  • Involved in interactive analytic queries using Presto on structured data at petabyte scale.
  • Involved in identifying job dependencies to design workflow for Oozie & YARN resource management.
  • Responsible for managing existing data extraction jobs and played a vital role in building new data pipelines from various structured and unstructured sources into Hadoop.
  • Worked on a product team using the Agile Scrum methodology to design, develop, deploy and support solutions that leverage the client's big data platform. Experience with batch processing of data sources using Apache Spark.
  • Integrated Apache Storm with Kafka to perform web analytics. Uploaded clickstream data from Kafka to HDFS, HBase and Hive by integrating with Storm. Implemented Cloudera Manager on an existing cluster.
  • Designed and coded from specifications; analyzed, evaluated, tested, debugged and implemented complex software applications.
  • Developed Sqoop Scripts to extract data from DB2 EDW source databases into HDFS.
  • Worked on tuning Hive and Pig to improve performance and solved performance issues in both by understanding joins, grouping and aggregation and how they translate to MapReduce jobs.
  • Created Partitions, Buckets based on State to further process using Bucket based Hive joins.
  • Troubleshooting experience debugging and fixing wrong or missing data for both Oracle Database and MongoDB. Extensively worked with the Cloudera Distribution of Hadoop (CDH 5.x, CDH 4.x).
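
The Sqoop-import-then-Spark pattern described above can be illustrated with the minimal Java DataFrame sketch below: it reads Parquet data landed in HDFS, applies transformations equivalent to a Hive GROUP BY, and writes the aggregate back as Parquet. The paths and column names (state, amount) are placeholders, not the project's schema.

    // Illustrative Java sketch: read Sqoop-landed Parquet from HDFS, apply
    // DataFrame transformations, and write the aggregated result as Parquet.
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.sum;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SaveMode;
    import org.apache.spark.sql.SparkSession;

    public class AggregateImportedOrders {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("aggregate-imported-orders")
                    .getOrCreate();

            // Parquet files written by the Sqoop import job (example path).
            Dataset<Row> orders = spark.read().parquet("hdfs:///data/ingest/orders_parquet");

            // Equivalent of a Hive GROUP BY, expressed as DataFrame transformations.
            Dataset<Row> totalsByState = orders
                    .filter(col("amount").gt(0))
                    .groupBy(col("state"))
                    .agg(sum(col("amount")).alias("total_amount"));

            totalsByState.write()
                    .mode(SaveMode.Overwrite)
                    .parquet("hdfs:///data/curated/order_totals_by_state");

            spark.stop();
        }
    }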

Environment: HDFS, MapReduce, Java API, JSP, JavaBeans, Pig, Hive, Sqoop, Flume, Oozie, HBase, Kafka, Impala, Spark Streaming, Storm, YARN, Eclipse, Spring, PL/SQL, Unix Shell Scripting, Cloudera

Confidential, St. Louis, MO

Hadoop Developer

Responsibilities:

  • Responsible for building Scalable distributed data solutions using Hadoop.
  • Gathered business requirements from the business partners and subject matter experts.
  • Wrote multiple MapReduce programs for data extraction, transformation and aggregation from multiple file formats including XML, JSON, CSV and other compressed file formats.
  • Optimized Map/Reduce Jobs to use HDFS efficiently by using various compression mechanisms.
  • Developed PIG UDFs to provide Pig capabilities for manipulating the data according to business requirements and worked on developing custom PIG Loaders. Implemented various requirements using Pig scripts.
  • Loaded and transformed large sets of structured, semi structured, and unstructured data.
  • Expert in implementing advanced procedures like text analytics and processing using in-memory computing capabilities such as Apache Spark. Achieved high throughput with Kafka by exploiting the sequential I/O performance of spinning disks.
  • Implemented complex logic and controlled data flow through in-memory processing with Apache Spark. Created Hive tables and was involved in data loading and writing Hive UDFs.
  • Generated Scala and Java classes from the respective APIs so that they could be incorporated in the overall application. Wrote Scala classes to interact with the database. Also wrote Scala test cases to test Scala codes.
  • Worked with different file formats like TEXTFILE, AVROFILE, ORC and PARQUET for Hive querying and processing.
  • Implemented functionality using machine learning tools such as Mahout to surface the products best suited to user profiles by performing sentiment analysis and trend analysis of the products.
  • Worked on data loading into Hive for data ingestion history and data content summary.
  • Developed Hive UDFs for rating aggregation and used the HBase Java client API for CRUD operations (a minimal sketch follows this list).
  • Generated Java APIs for retrieval and analysis on NoSQL database such as HBase and Cassandra.
  • Worked extensively with Sqoop to move data from DB2 and Teradata to HDFS.
  • Collected the logs data from web servers and integrated in to HDFS using Flume.
  • Provided ad-hoc queries and data metrics to the business users using Hive and Pig.
  • Facilitated performance optimizations such as using distributed cache for small datasets, partition and bucketing in Hive and completed map side Joins. Experience managing and reviewing Hadoop log files.
  • Performed real-time analysis along with continuous operations management using Storm.
  • Worked on importing and exporting data from Oracle and DB2 into HDFS and HIVE using Sqoop for analysis, visualization and to generate reports. Worked on NoSQL databases including Hbase and Cassandra.
  • Scheduled Oozie workflow engine to run multiple Hive and Pig jobs, which independently run with time and data availability. Used TDP, Rally, JIRA for bug tracking and CVS for version control.
  • Worked on custom Pig Loaders and Storage classes to work with a variety of data formats such as JSON and Compressed CSV. Involved in running Hadoop streaming jobs to process Terabytes of data.
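
The HBase Java client CRUD work referenced above follows the standard Connection/Table API; the sketch below is a minimal illustration. The table name (ratings), column family (d) and columns are placeholders, not the project's actual schema.

    // Illustrative HBase Java client CRUD sketch with placeholder schema.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RatingsCrudExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();      // reads hbase-site.xml from the classpath
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("ratings"))) {

                byte[] row = Bytes.toBytes("product#1001");
                byte[] cf = Bytes.toBytes("d");

                // Create / update: write an average-rating cell for the row.
                Put put = new Put(row);
                put.addColumn(cf, Bytes.toBytes("avg_rating"), Bytes.toBytes("4.2"));
                table.put(put);

                // Read: fetch the cell back.
                Result result = table.get(new Get(row));
                String avgRating = Bytes.toString(result.getValue(cf, Bytes.toBytes("avg_rating")));
                System.out.println("avg_rating = " + avgRating);

                // Delete: remove the row.
                table.delete(new Delete(row));
            }
        }
    }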

Environment: Hadoop, Map Reduce, Hive, HDFS, PIG, Sqoop, Oozie, Cloudera, Flume, HBase, CDH5, Cassandra, Oracle, J2EE, Oracle/SQL, DB2, Unix/Linux, JavaScript, Ajax, Eclipse IDE, RALLY, TDP, JIRA

Confidential

Java/ J2EE/ Hadoop Developer

Responsibilities:

  • Developed the application using the Struts Framework, which leverages the classical Model-View-Controller (MVC) architecture. UML diagrams such as use cases, class diagrams, interaction diagrams and activity diagrams were used.
  • Participated in requirement gathering and converting the requirements into technical specifications
  • Extensively worked on User Interface for few modules using JSPs, JavaScript and Ajax
  • Created business logic using Servlets and session beans and deployed them on the WebLogic server.
  • Developed the XML Schema and Web services for the data maintenance and structures
  • Implemented the web service client for login authentication, credit reports and applicant information using Apache Axis 2 web services. Managed and scheduled jobs on a Hadoop cluster.
  • Successfully integrated Hive tables and MongoDB collections and developed a web service that queries MongoDB collections and returns the required data to the web UI. Involved in integrating web services using WSDL and UDDI.
  • Developed templates and screens in HTML and JavaScript.
  • Developed workflows using custom MapReduce, Pig, Hive and Sqoop.
  • Built reusable Hive UDF libraries for business requirements, enabling users to call these UDFs from Hive queries (a minimal sketch follows this list). Developed a data pipeline using Kafka and Storm to store data in HDFS.
  • Maintained Hadoop, Hadoop ecosystems, third-party software and databases with updates/upgrades, performance tuning and monitoring. Responsible for modification of API packages.
  • Extracted feeds from social media sites such as Facebook and Twitter using Python scripts.
  • Installed and configured Hadoop MapReduce and HDFS; developed multiple MapReduce jobs in Java for data cleaning and preprocessing. Wrote test cases in JUnit for unit testing of classes.
  • Created UDFs to calculate the pending payment for the given Residential or Small Business customer, and used in Pig and Hive Scripts. Responsible to manage data coming from different sources.
  • Developed Shell, Perl and Python scripts to automate and provide Control flow to Pig scripts.
  • Gained good experience with NoSQL databases. Experience in managing and reviewing Hadoop log files.
  • Used Hibernate ORM framework with spring framework for data persistence and transaction management.
  • Participated in development/implementation of Cloudera Hadoop environment.
  • Built and deployed Java applications into multiple Unix based environments and produced both unit and functional test results along with release notes
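
A reusable Hive UDF of the kind referenced above typically extends Hive's UDF class with an evaluate method. The sketch below is a minimal, hypothetical example that masks an account identifier; in HiveQL it would be exposed with ADD JAR and CREATE TEMPORARY FUNCTION.

    // Illustrative reusable Hive UDF: masks all but the last four characters
    // of an account identifier. The business rule is a made-up example.
    import org.apache.hadoop.hive.ql.exec.Description;
    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    @Description(name = "mask_account",
                 value = "_FUNC_(str) - masks all but the last 4 characters of str")
    public class MaskAccount extends UDF {

        public Text evaluate(Text input) {
            if (input == null) {
                return null;                                   // keep NULL semantics
            }
            String s = input.toString();
            if (s.length() <= 4) {
                return new Text(s);
            }
            StringBuilder masked = new StringBuilder();
            for (int i = 0; i < s.length() - 4; i++) {
                masked.append('*');
            }
            masked.append(s.substring(s.length() - 4));
            return new Text(masked.toString());
        }
    }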

Environment: Hadoop, HDFS, Pig, Cloudera, JDK 1.5, J2EE 1.4, Struts 1.3, Kafka, Storm, JSP, Servlets 2.5, WebSphere 6.1, HTML, XML, ANT 1.6, Perl, Python, JavaScript, JUnit 3.8

Confidential

Programming Analyst (Java/J2EE)

Responsibilities:

  • Installed and configured Hadoop and Hadoop stack on a 16-node cluster.
  • Developed MapReduce programs to parse the raw data, populate staging tables and store the refined data in partitioned tables. Involved in loading data from the UNIX file system to HDFS (a minimal sketch follows this list).
  • Involved in data ingestion into HDFS using Sqoop from a variety of sources using connectors such as JDBC and import parameters. Used Hadoop Streaming to process terabytes of data in XML format.
  • Analyzed large and critical datasets of the Global Risk Investment and Treasury Technology (GRITT) domain using Cloudera, HDFS, HBase, MapReduce, Hive, Hive UDFs, Pig, Sqoop, Zookeeper and Mahout.
  • Worked with NoSQL database Hbase to create tables and store data.
  • Designed and implemented MapReduce-based large-scale parallel relation-learning system.
  • Worked with NoSQL databases like HBase, creating HBase tables to load large sets of semi-structured data coming from various sources. Involved in scheduling the Oozie workflow engine to run multiple Hive and Pig jobs.
  • Exported the data from Avro files and indexed the documents in sequence file format.
  • Installed, configured and operated data integration and analytic tools (Informatica, Chorus, SQLFire and GemFire XD) for business needs. Developed scripts to automate routine DBA tasks (refreshes, backups, vacuuming, etc.).
  • Installed and configured Hive and wrote Hive UDF's that helped spot market trends.
  • Implemented the Fair Scheduler on the JobTracker with appropriate parameters to share cluster resources across users' MapReduce jobs.
  • Involved in creating Hive tables, loading the data using it and in writing Hive queries to analyze the data.
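
Loading files from the UNIX file system into HDFS, as mentioned above, is commonly done with the HDFS FileSystem Java API; the sketch below is a minimal illustration with example paths.

    // Illustrative sketch: copy a local file into HDFS using the FileSystem API.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LoadToHdfs {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
            try (FileSystem fs = FileSystem.get(conf)) {
                Path source = new Path("/var/log/app/events.log");      // local file (example)
                Path target = new Path("/data/raw/logs/");              // HDFS directory (example)

                if (!fs.exists(target)) {
                    fs.mkdirs(target);                  // create the landing directory if needed
                }
                fs.copyFromLocalFile(source, target);   // upload the local file into HDFS
            }
        }
    }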

Environment: CDH4 with Hadoop 1.x, HDFS, Pig, Cloudera, Hive, HBase, ZooKeeper, MapReduce, Java, Sqoop, Oozie, Linux, UNIX Shell Scripting and Big Data.

Confidential

Java/J2EE Developer

Responsibilities:

  • Involved in all the Web module UI design and development using HTML, CSS, jQuery, JavaScript, Ajax.
  • Designed and modified User Interfaces using JSP, JavaScript, CSS and jQuery.
  • Developed UI screens using Bootstrap, CSS and jQuery. Implemented Spring AOP for admin services.
  • Developed user interfaces using JSP and the JSF framework with AJAX, JavaScript, HTML, DHTML and CSS.
  • Involved in multi-tiered J2EE design utilizing MVC architecture Struts Framework, Hibernate and EJB deployed on WebSphere Application Server connecting to an Oracle database.
  • Developed software in JAVA/J2EE, XML, Oracle EJB, Struts, and Enterprise Architecture
  • Developed and Implemented Web Services and used Spring Framework.
  • Implemented the caching mechanism in Hibernate to load data from Oracle database.
  • Implemented application-level persistence using Hibernate and Spring (a minimal sketch follows this list).
  • Implemented the persistence layer using Hibernate to interact with the Oracle database, using the Hibernate framework for object-relational mapping and persistence. Developed AJAX scripting to interact with server-side JSP processing.
  • Developed Servlets and JSPs based on MVC pattern using Spring Framework.
  • Maintained the business standards in EJB and deployed them on to WebLogic Application Server
  • Developed REST-based web services to facilitate communication between clients and servers.
  • Used Eclipse as the IDE; configured and deployed the application onto the WebLogic application server using Maven.
  • Created applications, connection pools, deployment of JSPs, Servlets, and EJBs in WebLogic.
  • Created SQL queries, PL/SQL Stored Procedures, Functions for the Database layer by studying the required business objects and validating them with Stored Procedures using DB2. Also used JPA with Hibernate provider.
  • Implemented FTP utility program for copying the contents of an entire directory recursively up to two levels from a remote location using Socket Programming.
  • Wrote test cases using JUnit testing framework and configured applications on WebLogic Server.
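
The Hibernate-with-Spring persistence work described above generally pairs a JPA-annotated entity with a DAO whose methods run inside Spring-managed transactions. The sketch below is a minimal, hypothetical illustration; the Applicant entity and its fields are placeholders rather than the application's actual domain model.

    // Illustrative Hibernate-with-Spring persistence sketch: a JPA-annotated
    // entity and a DAO whose methods join Spring-managed transactions.
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    import org.hibernate.SessionFactory;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Repository;
    import org.springframework.transaction.annotation.Transactional;

    @Entity
    class Applicant {
        @Id
        @GeneratedValue
        private Long id;
        private String name;

        public Long getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    @Repository
    @Transactional
    class ApplicantDao {

        private final SessionFactory sessionFactory;

        @Autowired
        public ApplicantDao(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        public Long save(Applicant applicant) {
            // getCurrentSession() joins the transaction opened by Spring
            return (Long) sessionFactory.getCurrentSession().save(applicant);
        }

        public Applicant findById(Long id) {
            return sessionFactory.getCurrentSession().get(Applicant.class, id);
        }
    }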

Environment: Java, J2EE, Spring, Hibernate, Struts, JSF, EJB, MySQL, Oracle, SQL Server, DB2, PL/SQL, JavaScript, jQuery, Servlets, JSP, HTML, CSS, Agile Methodology, Eclipse, WebLogic Application Server, UNIX, XML, Junit, SOAP, Restful Webservices, JDBC.
