
Sr. Big Data Architect Resume

Malvern, PA

SUMMARY:

  • Over 9 years of experience as a Java/J2EE full-stack programmer across the entire Software Development Life Cycle (SDLC), including analysis, design, implementation, integration, testing, and maintenance of applications using Java/J2EE and object-oriented client-server technologies.
  • Hands-on experience in developing, installing, configuring, and using Hadoop and its ecosystem components, including MapReduce, Spark, Scala, HDFS, HBase, Hive, Impala, Sqoop, Pig, Flume, Kafka, Storm, and Elasticsearch.
  • Sound experience building web and client-server applications; hands-on experience in analysis, design, development, implementation, and testing using Java/J2EE, JMS, JSP, Servlets, Spring, JSF, JPA, AngularJS, jQuery, Hibernate, XML, SOA, JavaBeans, JDBC, JSON, WebSphere, UML, WebLogic, JBoss, Apache Tomcat, and Spring Security, plus Big Data technologies including Hadoop, Spark, Scala, Hive, HBase, MongoDB, Elasticsearch, HDFS, Sqoop, and Impala.
  • Completed a brief MongoDB project, then Hadoop/Hive work on AWS using both EMR and non-EMR Hadoop on EC2; tasks included EC2-to-S3 data synchronization, Hive stand-up, and AWS profiling.
  • Excellent experience in end-to-end application design and development using RESTful APIs, microservices, Spring Boot, Spring Cloud, HTML5, CSS3, Bootstrap, and jQuery.
  • Extensive experience in designing and developing multi-tier, web-based client-server applications and Intranet/Internet enterprise software for healthcare, financial, and e-commerce organizations using Java, JSP, Servlets, EJB, AJAX, JMS, ORM, JDO, JAAS, JNDI, Web Services, JDBC, JAXP, RMI, Applets, Swing, XML, and JavaScript.
  • Good exposure to Apache Hadoop MapReduce programming, Pig scripting, distributed applications, and HDFS.
  • Expert skills in Java multithreading, exception handling, Servlets, JSP, custom tag libraries, JavaScript, AJAX, CSS, HTML, Enterprise JavaBeans, JDBC, RMI, JNDI, and XML-related technologies.
  • Experienced in the software development life cycle using various methodologies such as waterfall, agile, and test-driven development.
  • Experience with the Oozie workflow engine, running workflow jobs with actions that execute Hadoop MapReduce and Pig jobs.
  • Hands-on experience in installation, configuration, management, and deployment of Big Data solutions and the underlying infrastructure of a Hadoop cluster using the Cloudera and Hortonworks distributions.
  • Experienced in designing both front-end and back-end applications using Java, J2EE web frameworks, JSP, HTML, CSS, AngularJS, JavaScript, AJAX, jQuery, XSL, Node.js, Bootstrap, JSON, Ext.JS, and XSLT.
  • Strong experience and knowledge in Object Oriented Design Pattern concepts.
  • Hands on Experience in developing applications using Spring Framework’s Spring Web Flow, Inversion of Control and Dependency Injection.
  • Good knowledge of using Apache NiFi to automate data movement between different Hadoop systems.
  • Good hands-on experience with NoSQL databases such as HBase, Cassandra, and MongoDB.
  • Hands-on experience with the client-side scripting languages HTML, DHTML, JavaScript, and CSS, including special handling for browsers such as IE, Netscape, and Mozilla.
  • Strong experience and Knowledge in XML technologies including XML, XSD, XSLT, JAXP (DOM, SAX, JDOM), JAXB (Castor, XML Beans).
  • Experienced in Service Oriented Architecture (SOA) and publishing Web Services that include several components like WSDL, SOAP, UDDI, Axis and JAX-WS.
  • Hands on exposure to multiple Application Servers like JBoss, IBM WebSphere, and Weblogic.
  • Expertise in IDEs and tools such as IBM RAD, Eclipse, JDeveloper, JBuilder, Visio, Rational Rose, TOAD, SQL Developer, Jenkins, CruiseControl, SOAP UI, REST Client, LOAD UI, Wily, and Memory Analyzer.
  • Expertise in back-end procedure development for database applications using Oracle, DB2, and SQL Server with SQL and PL/SQL.
  • Hands-on experience writing queries, stored procedures, functions, and triggers in PL/SQL.
  • Expertise in using Design Patterns including Singleton, Business Delegate, Factory Method, Prototype and Session Facade, MVC as well as Data Access Object (DAO) pattern.
  • Excellent interpersonal, communication and problem solving skills, quick learner, organized, resilient and self-motivated.

TECHNICAL SKILLS:

Operating System: Windows 2000/NT/XP/Vista/7, Red Hat Linux, Ubuntu, UNIX.

Languages: C, C++, Scala, Java 1.4/1.5/1.6/1.7.

Big Data Technologies: HDFS, Hive, Hana, AWS, MapReduce, Pig, Sqoop, Oozie, Zookeeper, YARN, Avro, Spark, Kafka.

Frameworks and Utilities: Spring (Spring Core, Spring AOP, Spring MVC, Spring Batch), Hibernate, Struts.

Databases: Oracle 12c/11g/10g/9i/8i, DB2 UDB, MySQL, MS SQL Server 2000, MS Access, HBase, MongoDB, Cassandra.

Web technologies: JSP, Servlets, EJB, JNDI, JDBC, JavaBeans, HTML, CSS, DHTML, JavaScript, Web Services, SOAP, WSDL, AJAX, jQuery, Angular.JS, Node.JS, Bootstrap, Ext.JS, JSON.

XML technologies: XML, XSL, XSLT, SAX, DOM, AXIS, XML Beans and JAXB.

IDE: RAD 6.x, IBM WSAD 5.1.2, Eclipse, NetBeans, JBuilder.

App Server: WebSphere 8.X/7.X/6.X/5.X, WebLogic 7.1/6.1, JBoss, iPlanet.

Web Server: Apache Tomcat 6.0/5.5, Java Web Server 2.0, IIS.

Version Control: ClearCase, CVS, RTC, Git.

Testing: JUnit, JUnitPerf, JMock, Cactus, IBM RPT.

Build tool: Ant, Maven, Jenkins

Tools: TOAD, SQL Developer, DB Visualizer, XML Spy, Rational Rose, Server Studio, SOAP UI, REST Client, LOAD UI, Wily, Memory Analyzer.

System Design and Development: Requirement gathering and analysis, design, development, testing, delivery.

PROFESSIONAL EXPERIENCE:

Confidential

Sr. Big Data Architect

Responsibilities:

  • Involved in the high-level design of the Hadoop architecture for the existing data structure and Business process.
  • Extensively involved in the design phase, delivering design documents for the Hadoop ecosystem covering HDFS, Hive, Pig, Sqoop, and Spark with Scala.
  • Collected the logs from the physical machines and the OpenStack controller and integrated into HDFS using Kafka.
  • Participated in configuring and deploying the Hadoop cluster in the AWS cloud, and worked with clients to better understand their reporting and dashboarding needs, presenting solutions using a structured plus Agile project methodology.
  • Worked on analyzing the Hadoop cluster and different Big Data components, including Pig, Hive, Spark, HBase, Kafka, Elasticsearch, and Sqoop.
  • Worked on MapReduce programs on the Amazon Elastic MapReduce framework, using Amazon S3 for input and output.
  • Implemented partitioning, dynamic partitions, and buckets in Hive to improve query performance and organize data logically.
  • Installed Hadoop, MapReduce, and HDFS, and developed multiple MapReduce jobs in Pig and Hive for data cleaning and pre-processing.
  • Developed Kafka producer and consumers, HBase clients, Spark and Hadoop MapReduce jobs along with components on HDFS, Hive.
  • Worked on Sequence files, RC files, map-side joins, bucketing, and partitioning for Hive performance and storage improvements; created tables in HBase to store variable-format PII data coming from different portfolios.
  • Used cloud computing on a multi-node cluster, deployed the Hadoop application on S3, and used Elastic MapReduce (EMR) to run MapReduce jobs.
  • Explored MLlib algorithms in Spark to understand the machine learning functionality that could serve the use case.
  • Started using Apache NiFi to copy data from the local file system to HDP.
  • In the preprocessing phase of data extraction, used Spark to remove missing data and transform the data to create new features.
  • Developed data pipeline using Flume, Sqoop, Pig and Java map reduce to ingest customer behavioral data and financial histories into HDFS for analysis.
  • Involved in loading data from UNIX file system to HDFS using Flume and HDFS API.
  • Configured Spark Streaming to receive real-time data from Kafka and store the stream data to HDFS (see the streaming sketch after this list).
  • Exported the analyzed data to HBase for visualization and to generate reports for the Business Intelligence team using SAS.
  • Used various HBase commands, generated different datasets per requirements, and provided access to the data when required using GRANT and REVOKE.
  • Created Hive tables as per requirement as internal or external tables, intended for efficiency.
  • Developed MapReduce programs for the files generated by Hive query processing to generate key/value pairs and upload the data to the NoSQL databases HBase, MongoDB, and Cassandra.
  • Implemented installation and configuration of multi-node cluster on the cloud using Amazon Web Services (AWS) on EC2.
  • Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig scripts; tuned Hive and Pig to improve performance and resolved performance issues in both.
  • Worked with Elastic MapReduce (EMR) and setting up environments on Amazon AWS EC2 instances.
  • Developed various data connections from the data source to SSIS and Tableau Server for report and dashboard development.
  • Involved in unit testing, interface testing, system testing, and user acceptance testing of the workflow tool.
  • Used JIRA for bug tracking and GIT for version control.
  • Developed Pig scripts for data analysis and extended their functionality with custom UDFs written in Java or Python (see the UDF sketch after this list); developed Shell and Python scripts to automate and provide control flow to Pig scripts.
  • Involved in story-driven agile development methodology and actively participated in daily scrum meetings.
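
Illustrative sketch of the Kafka-to-HDFS streaming path above, assuming the Spark Streaming Kafka 0.10 direct-stream integration; the Java API is used for consistency across this resume's examples, and the broker address, topic, consumer group, and HDFS path are placeholders, not project values.

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka010.ConsumerStrategies;
    import org.apache.spark.streaming.kafka010.KafkaUtils;
    import org.apache.spark.streaming.kafka010.LocationStrategies;

    public class KafkaToHdfsStream {
      public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("kafka-to-hdfs");
        // Pull a micro-batch from Kafka every 30 seconds.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(30));

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "broker1:9092");   // placeholder broker
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "log-ingest");              // placeholder group

        JavaInputDStream<ConsumerRecord<String, String>> stream =
            KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, String>Subscribe(
                    Collections.singletonList("controller-logs"), kafkaParams));

        // Persist each batch of raw log lines to HDFS as text files.
        stream.map(ConsumerRecord::value)
              .dstream()
              .saveAsTextFiles("hdfs:///data/raw/logs", "txt");

        jssc.start();
        jssc.awaitTermination();
      }
    }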
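And a minimal sketch of a custom Pig eval UDF of the kind mentioned above; the class name and cleansing rule are illustrative, not the original code. It would be registered from Pig Latin with REGISTER and invoked inside a FOREACH ... GENERATE.

    import java.io.IOException;

    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    // Normalizes an id field so downstream joins and GROUP BYs match cleanly.
    public class NormalizeId extends EvalFunc<String> {
      @Override
      public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
          return null; // Pig treats a null return as a null field
        }
        return input.get(0).toString().trim().toUpperCase();
      }
    }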

Environment: Apache Hadoop, AWS, MLlib, MySQL, Kafka, HDFS, Hive, Pig, MapReduce, NiFi, Flume, Cloudera, Oozie, UNIX, Oracle 12c, Tableau, Git, Python, Spark, Scala, SQL, EC2, EMR, Redshift, HBase, MongoDB, and Cassandra.

Confidential, Malvern, PA

Sr. Java/BigData Developer

Roles and Responsibilities:

  • Involved in Requirement gathering, Business Analysis and translated business requirements into Technical design in Hadoop and Big Data.
  • Worked on Big Data integration and analytics based on Hadoop, SOLR, Spark, Kafka, Storm, and webMethods.
  • Implemented SOA architecture with web services using SOAP, WSDL, UDDI and XML.
  • Involved in developing the application using Spring Web MVC and other components of the Spring Framework, with Spring's DispatcherServlet as the front controller; also implemented dependency injection using the Spring Framework.
  • Delivered working widget software using Ext JS 4, HTML5, RESTful web services, JSON Store, Linux, Hadoop, Zookeeper, NoSQL databases, Java, Spring Security, and JBoss Application Server for Big Data analytics.
  • Implemented Spring Batch to process large volumes of information that are most efficiently processed without user interaction.
  • Loaded all data from our relational databases into Hive using Sqoop; we also received four flat files from different vendors, all in different formats, e.g. text, EDI, and XML.
  • Involved in migrating data from the existing RDBMSs (Oracle and SQL Server) to Hadoop using Sqoop for data processing.
  • Wrote Hive join queries to fetch information from multiple tables, and wrote multiple MapReduce jobs to collect output from Hive.
  • Worked on migrating MapReduce programs into Spark transformations using Spark and Scala (a Java sketch of this pattern follows this list).
  • Strongly recommended bringing in Elasticsearch and was responsible for its installation, configuration, and administration.
  • Designed and implemented ETL processes using Talend, and worked extensively with Sqoop to import and export data between HDFS and relational database systems/mainframes.
  • Developed data pipeline using Flume, Sqoop, Pig and Java MapReduce to ingest behavioral data into HDFS for analysis.
  • Developed Spark jobs using Scala in test environment for faster data processing and used Spark SQL for querying.
  • Involved in developing the MapReduce framework, writing queries, and scheduling MapReduce jobs.
  • Installed and configured Hadoop and responsible for maintaining cluster and managing and reviewing Hadoop log files.
  • Developed a custom Avro framework capable of solving the small-files problem in Hadoop, and extended Pig and Hive to work with it.
  • Developed Shell, Perl and Python scripts to automate and provide Control flow to Pig scripts.
  • Continuously monitored and managed the Hadoop cluster through Cloudera Manager.
  • Performed Filesystem management and monitoring on Hadoop log files.
  • Utilized Oozie workflows to run Pig and Hive jobs; extracted files from MongoDB through Sqoop, placed them in HDFS, and processed them.
  • Implemented Installation and configuration of multi-node cluster on Cloud using Amazon Web Services (AWS) on EC2.
  • Used Flume to collect, aggregate, and store the web log data from different sources like web servers, mobile and network devices and pushed to HDFS.
  • Implemented partitioning, dynamic partitions and buckets in HIVE.
  • Implemented jobs using Scala and SQL for faster testing and processing of data, with real-time data streaming through Kafka.
  • Analyzed large amounts of data sets to determine optimal way to aggregate and report on it.
  • Involved in configuring core-site.xml and mapred-site.xml for the multi-node cluster environment.
  • Used Apache Maven 3.x to build and deploy application to various environments.
  • Wrote shell scripts to monitor the health of Hadoop daemon services and respond accordingly to any warning or failure conditions.
  • Implemented object/relational persistence (Hibernate) for the domain model and designed and implemented the Hibernate Domain Model for the services.
  • Developed and implemented the MVC Architectural Pattern using Struts Framework including JSP, Servlets and Action classes.
  • Used parsers such as SAX and DOM for parsing XML documents, and performed XML transformations using XSLT.
  • Used ANT automated build scripts to compile and package the application and implemented Log4j for the project.
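
Illustrative sketch of the MapReduce-to-Spark migration pattern referenced above: the old mapper's (key, 1) emission and the reducer's sum collapse into mapToPair and reduceByKey. The original work was in Scala; the Java RDD API is shown for consistency, and the input path and field layout are assumptions.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class VendorRecordCounts {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("vendor-record-counts");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
          // One record per line; assume the first comma-separated field is the vendor id.
          JavaRDD<String> lines = sc.textFile("hdfs:///data/vendor_feed/*");

          JavaPairRDD<String, Long> counts = lines
              .map(line -> line.split(",", -1))
              .filter(fields -> fields.length > 0 && !fields[0].isEmpty())
              .mapToPair(fields -> new Tuple2<>(fields[0], 1L)) // old map() step
              .reduceByKey(Long::sum);                          // old reduce() step

          counts.saveAsTextFile("hdfs:///data/vendor_counts");
        }
      }
    }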

Environment: Java 1.8, Servlets 3.0, Struts 2.x MVC Framework, Apache Hadoop, HDFS, Hive, MapReduce, Cloudera, Pig, Sqoop, Kafka, Apache Cassandra, Spark, Scala, Oozie, Impala, Flume, Zookeeper, Hibernate 3, Ant, JDBC, Web Services, IBM WebSphere 7.2, Oracle 11g, Spring Framework 3.1, Spring Batch 2.2, jQuery 1.4, JPA 2.0, JMS, Eclipse Helios 3.6, IBM RTC, JAX-RPC, JAX-WS, PSE-HSM, Maven, Jenkins, HP QC, Wily, REST Client, SOAP UI, LOAD UI.

Confidential, Dallas TX

Sr. Java/BigData Developer

Responsibilities:

  • Responsible for building scalable distributed data solutions using Hadoop and worked on Data Lake architecture.
  • Worked on installation, configuration, monitoring and troubleshooting Hadoop cluster and eco-system components including Flume, Oozie & Kafka.
  • Extracted data from different databases and copied it into HDFS using Sqoop, with expertise in using compression techniques to optimize data storage.
  • Developed MapReduce jobs for data cleanup, validation, and ETL.
  • Wrote Hive/Impala queries for ad-hoc reporting, summarizations and ETL.
  • Ingested data into Hadoop from RDBMSs using Sqoop on a regular basis and validated the data.
  • Exported data from HDFS/Hive to RDBMS for BI reporting using Sqoop.
  • Worked with different file formats such as Text, Sequence files, Avro, ORC, and Parquet.
  • Developed Spark programs using Scala, created Spark SQL queries, and developed Oozie workflows for Spark jobs (see the Spark SQL sketch after this list).
  • Handled application deployment and data migration on AWS Redshift, and wrote Java APIs for AWS Lambda to manage some of the AWS services.
  • Managed, defined, and scheduled Oozie jobs on a Hadoop cluster.
  • Resource management of Hadoop Cluster including adding/removing cluster nodes for maintenance and capacity needs.
  • Worked on scalable distributed computing systems, software architecture, data structures and algorithms using Hadoop, Apache Spark, and Apache Storm etc.
  • Worked on Talend to integrate with Hadoop and ingest data into Hadoop.
  • Integrated Tableau with Hive, Impala and Spark to build the dashboard reports.
  • Performed analysis on implementing Spark using Scala, Python, and Java.
  • Worked on real-time data integration using Kafka, Storm, and Spark to move HL7 messages between hospitals, pharmacies, and laboratories (see the producer sketch after this list).
  • Responsible for monitoring the Hadoop cluster using Zabbix/Nagios.
  • Responsible for maintaining the clusters in different environments.
  • Ingested streaming data into Hadoop using Spark, Storm Framework and Scala.
  • Involved in upgrading the Hadoop cluster from CDH4 to CDH5.
  • Enabled faster search by loading data from HDFS into Elasticsearch.
  • Performed analysis on integrating Kibana with Elasticsearch.
  • Implemented test scripts to support test driven development and continuous integration.
  • Experienced in loading and transforming large sets of structured, semi-structured, and unstructured data.
  • Worked on the NoSQL databases MongoDB, Cassandra, and HBase.
  • Analyzed large amounts of data sets to determine optimal way to aggregate and report on it.
  • Supported in setting up QA environment and updating configurations for implementing scripts.
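
Illustrative sketch of the Spark SQL work above, reading an ORC data set and aggregating it; written against the Java API for consistency (the project used Scala), with the paths, view name, and columns as assumptions.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class ClaimsSummary {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("claims-summary")
            .enableHiveSupport()   // allows reading/writing Hive tables too
            .getOrCreate();

        // ORC (like Parquet) carries its schema, so no manual schema is needed.
        Dataset<Row> claims = spark.read().orc("hdfs:///data/claims_orc");
        claims.createOrReplaceTempView("claims");

        Dataset<Row> summary = spark.sql(
            "SELECT provider_id, COUNT(*) AS claim_count "
            + "FROM claims GROUP BY provider_id");

        // Write back as Parquet for downstream BI export.
        summary.write().mode("overwrite").parquet("hdfs:///data/claims_summary");
        spark.stop();
      }
    }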
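And a minimal Kafka producer sketch of the sort that would feed HL7 messages into the Kafka/Storm/Spark pipeline above; the broker, topic, keying choice, and truncated sample segment are placeholders, and real HL7 parsing is out of scope.

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class Hl7MessageProducer {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");  // placeholder broker
        props.put("acks", "all");                        // wait for a full commit
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
          String rawHl7 = "MSH|^~\\&|LAB|HOSP|...";      // truncated sample segment
          // Key by sending facility so messages from one source stay ordered.
          producer.send(new ProducerRecord<>("hl7-events", "HOSP", rawHl7));
        }
      }
    }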

Environment: Hadoop, MapReduce, Spark, Shark, Hive, Impala, Pig, Sqoop, Storm, Kafka, Datameer, Oracle, Teradata, SAS, Tableau, Talend, DataStage, AWS (EC2, EMR, S3), Java, Python, Nagios, Zabbix, Cloudera Manager, Kibana, Log4j, JUnit, MRUnit, SVN, JIRA.

Confidential, Newton, MA

Sr. Java/J2EE Developer

Responsibilities:

  • Followed Agile methodology to analyze, define, and document how the application would support functional and business requirements, coordinating these efforts with functional architects.
  • Developed the frontend using JSP, CSS, and JavaScript, and integrated Spring with Hibernate.
  • Implemented the Spring Batch to process large volumes of information that is most efficiently processed without user interaction.
  • Used design patterns such as MVC, Facade, Controller Servlet, Business Delegate, Service Locator, Singleton, Value Object, and Factory while implementing the framework.
  • Designed and developed the UI using Struts view component JSP, HTML, Angular JS, JavaScript, AJAX, and JSON.
  • Used Hibernate, an object/relational mapping (ORM) solution, to map the data representation from the MVC model to the Oracle relational data model with a SQL-based schema.
  • Used Apache Tomcat as the web server and WebLogic as the application server to deploy various components of the application.
  • Implemented GUI screens using Servlets, JSP, tag libraries, JSTL, JavaBeans, HTML, JavaScript, and the Struts framework with the MVC design pattern.
  • Wrote Hibernate configuration file, Hibernate mapping files and defined persistence classes to persist the data into Oracle Database.
  • Configured the Hibernate session factory to integrate Hibernate with Spring.
  • Employed Spring JDBC to implement batch jobs that pull organization-structure data.
  • Developed front-end content using FTL, HTML, and CSS, with client-side validations in JavaScript.
  • Used Spring Core for dependency injection/inversion of control (IoC), and integrated frameworks such as Struts and Hibernate.
  • Worked on Oracle & SQL Server as the backend databases and integrated with Hibernate to retrieve Data Access Objects.
  • Used SOAP web services for transmission of large blocks of XML data over HTTP, and developed web services for data transfer using SOAP and WSDL.
  • Developed and implemented the web application using design patterns like MVC, Singleton, DAO, Front Controller, and Factory.
  • Developed and debugged the servlets and EJB with WebSphere Application server.
  • Used the Spring MVC framework to reduce complexity and achieve faster, better results.
  • Used Hibernate as the database framework.
  • Wrote various PL/SQL queries for transactional data and created JDBC connections.
  • Used AJAX for form validation in JSPs, and used Log4j to log error, info, and debug messages.
  • Extensively used Git as the version control tool and was involved in configuration management with Git.
  • Developed the project using Agile/Scrum methodologies.
  • Developed web services in Java, with experience in SOAP, WSDL, and RESTful web services.
  • Developed Spring Framework-based RESTful web services for handling and persisting requests, with Spring MVC returning responses to the presentation tier (see the controller sketch after this list).
  • Used Ant build tool for building and deploying the application.
  • Used Spring annotations to create controller and service-layer classes.
  • Used Spring Framework for Dependency Injection and integrated with Hibernate DAOs.
  • Developed Web Services (SOAP & REST) to interact with different Components.
  • Configured Hibernate's second-level cache using Ehcache to reduce the number of hits for configuration table data.
  • Implemented object/relational persistence (Hibernate) for the domain model.
  • Designed and implemented the Hibernate Domain Model for the services.
  • Created RESTful Web Services and SOAP Web Services to neutralize claims and get information related mappings and codes.
  • Used Jenkins to build the Maven project.
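
Illustrative sketch of the Spring MVC REST controller pattern above, in the annotation-driven Spring 3.x style of this project; the Account type and AccountService are assumed stand-ins for the real domain model.

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.*;

    @Controller
    @RequestMapping("/accounts")
    public class AccountController {

      private final AccountService accountService;

      @Autowired
      public AccountController(AccountService accountService) {
        this.accountService = accountService;
      }

      // GET /accounts/{id} -> serialized to JSON by the configured message converters
      @RequestMapping(value = "/{id}", method = RequestMethod.GET)
      @ResponseBody
      public Account find(@PathVariable("id") long id) {
        return accountService.findById(id);
      }

      // POST /accounts persists the request body through the service/DAO layer
      @RequestMapping(method = RequestMethod.POST)
      @ResponseBody
      public Account create(@RequestBody Account account) {
        return accountService.save(account);
      }
    }

    // Hypothetical collaborators, sketched only so the example is self-contained.
    class Account { public long id; public String name; }

    interface AccountService {
      Account findById(long id);
      Account save(Account account);
    }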

Environment: Java 1.6, Servlets 3.0, Struts 2.0 MVC Framework, Hibernate 3, Ant, JDBC, Web Services, IBM WebSphere 7.2, Oracle 11g, Spring Framework 3.1, HTML, CSS, JSON, JSP, AJAX, Angular.JS, Ext.JS, Node.JS, Spring Batch 2.2, jQuery 1.4, JPA 2.0, JMS, Eclipse Helios 3.6, IBM RTC, JAX-RPC, JAX-WS, PSE-HSM, Maven, Jenkins, HP QC, Wily, REST Client, SOAP UI, LOAD UI.

Confidential, Washington, DC

Sr. Java/J2EE Developer

Roles and Responsibilities:

  • Designed and integrated the full-scale Struts/Hibernate/Spring/EJB persistence solution with the application architecture.
  • Responsible for the architecture and implementation of a new annotation-based stateless session bean (EJB) for the entity-manager lookup module (see the sketch after this list).
  • Developed and implemented the MVC architectural pattern using the Struts framework, including JSP, Servlets, and Action classes.
  • Developed web components using JSP, Servlets, and JDBC.
  • Implemented J2EE standards and MVC2 architecture using the Struts framework, and implemented the database using MySQL.
  • Developed various J2EE components such as Servlets, JSP, JSTL, AJAX, SAX, XSLT, JAXP, JNDI, LDAP, JMS, and MQ Series using the RAD application framework.
  • Used JSP, JavaScript, JQuery, AJAX, Angular JS, Bootstrap, CSS, and HTML as data and presentation layer technology.
  • Set up JBoss Server, Configured MySQL Cluster in Linux OS and installed Open Call XDMS.
  • Used various Core Java concepts such as Multi-Threading, Exception Handling, Collection APIs to implement various features and enhancements.
  • Developed the JSP, JSTL, Servlets and Enterprise Java Beans to communicate between JSP and Servlets. Built JSP custom tags for common presentation components.
  • Designed and developed UI screens with XSLT and JSF (MVC) to provide interactive screens to display data.
  • Built client pages using HTML, JSP, and XML to interact with users, and implemented the business logic using Servlets and JavaBeans.
  • Implemented Web Services using SOAP, REST and XML/HTTP technologies.
  • Used the Hibernate framework in the DAO layer to access the Oracle database, and used JavaScript for client-side validations.
  • Implemented and used Web Services with the help of WSDL, SOAP and JAX-WS to get updates from the third parties.
  • Involved in writing POJOs and hibernate.cfg.xml files and configuring them for application development.
  • Deployed the application on Tomcat server.
  • Used the DAO pattern, and the J2EE framework facilitated the integration and deployment of DAOs, Servlets, JSP, and XML.
  • Created dynamic HTML pages, used JavaScript for client-side validations, and AJAX to create interactive front-end GUI.
  • Worked with Git version control for project configuration management.
  • Extensively used SQL, PL/SQL in constructing views, indexes, stored procedures, triggers, cursors, functions, relational database models.
  • Used Git version control to track and maintain different versions of the application.
  • Used JavaScript, jQuery, and AJAX to control the display per user selection and avoid server round trips carrying unnecessary data.
  • Designed various tables required for the project in Oracle database and used the Stored Procedures in the application.
  • Coded SQL, PL/SQL, and views using IBM DB2 for the database.
  • Developed UI using HTML, JavaScript, and JSP, and developed Business Logic and Interfacing components using Business Objects, XML, and JDBC.
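
Illustrative sketch of the annotation-based stateless session bean described above, wrapping a container-managed EntityManager; the entity, persistence-unit name, and methods are assumptions, not the original module.

    import javax.ejb.Stateless;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class CustomerLookupBean {

      // Container-injected entity manager; the unit name maps to persistence.xml.
      @PersistenceContext(unitName = "appPU")
      private EntityManager em;

      public Customer findById(long id) {
        return em.find(Customer.class, id);
      }

      public Customer save(Customer customer) {
        return em.merge(customer); // runs inside the container-managed transaction
      }
    }

    // Hypothetical entity, shown minimally; constructor and accessors omitted.
    @Entity
    class Customer {
      @Id
      private long id;
      private String name;
    }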

Environment: Java 1.6, JSP 2.2, Java EE 1.5, Servlets 3.0, Struts 2.0 MVC Framework, Hibernate 3, Ant, JDBC, Web Services, Axis, Eclipse, WebLogic 10.3.2, Oracle 11g, HTML, CSS, JSON, JSP, EJB 3.0, Angular.JS, AJAX, Spring Framework 3.1, jQuery 1.4, JPA 2.0, JMS, Eclipse Helios 3.6, SVN, JAX-RPC.

Confidential

Java Web Developer

Roles & Responsibilities

  • Involved in various phases of Software Development Life Cycle (SDLC) of Search module.
  • Designed UI screens using JSP, Custom Tags, Struts tags and HTML.
  • Used Struts Framework in the application which is based on MVC2 design pattern.
  • Used EJBs in the application and developed Session beans to house business logic at the middle tier level and Entity Beans for persistence.
  • Used Spring MVC's REST-based JSON services for development.
  • Developed server side components using Spring MVC framework.
  • Developed the application using Struts, Servlets and JSP for the presentation layer along with JavaScript for the client side validations.
  • Developed user interfaces using Java Server Pages using HTML, DHTML, XHTML, AJAX, CSS & JavaScript.
  • Developed several DAO classes interacting with EOD DB2 database and participated in writing JPA criteria builders and predicates.
  • Configured Spring JDBC for database management.
  • Responsible for testing classes and methods using JUnit test cases.
  • Developed SOAP (JAX-WS) web service applications using the contract-last approach (see the endpoint sketch after this list).
  • Extensively developed stored procedures, triggers, functions, and packages in Oracle SQL and PL/SQL.
  • Developed the application used to create dynamic JSPs, taking as input a database table containing information about the contents of the JSP being developed.
  • Developed JavaBeans for the Forms and Action classes for Struts framework.
  • Used Eclipse as an IDE for developing the applications.
  • Designed, developed, and integrated REST-based web services into the application.
  • Used J2EE design patterns, namely Factory, MVC, Facade, DAO, and Singleton.
  • Used JDBC to retrieve data from Oracle database.
  • Developed build scripts using Ant.
  • Built component scheduling and configuration using Maven 2.
  • Full life cycle experience in development methodologies like Agile and RUP.
  • Actively involved in designing and implementing Session Facade, Service Locator, Data Access Objects, and Singleton and Data Transfer Object design patterns.
  • Consumed web services using WSDL and SOAP, and used Liferay to convert text to HTML for presentation.
  • Developed the application on Eclipse, deployed it on Tomcat Server, and developed SQL queries to test the back-end processes of the application.
  • Used JUnit framework for Unit testing of application and used Log4J to create log files to debug as well as trace application.
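
Illustrative sketch of a contract-last JAX-WS endpoint as described above, where the WSDL is generated from the annotated class rather than written first; the service name, operation, and publish address are placeholders.

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    @WebService(serviceName = "GreetingService")
    public class GreetingService {

      @WebMethod
      public String greet(String name) {
        return "Hello, " + name;
      }

      // Standalone publisher for local testing; the generated WSDL
      // is served at the publish address with ?wsdl appended.
      public static void main(String[] args) {
        Endpoint.publish("http://localhost:8080/ws/greeting", new GreetingService());
      }
    }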

Environment: Rational Rose, EJB 2.0, Struts 1.1, JSP 1.2, Servlets 2.3, JDBC, JavaScript, CSS, Web Services, UML, HTML, JNDI, JMS, Log4j, JUnit, Tomcat Server, Eclipse, Spring, Hibernate, Linux, Windows 2000.
