
Hadoop/Spark Developer Resume


Eden Prairie, MN

PROFESSIONAL SUMMARY:

  • Professional software developer with 8+ years of technical expertise in all phases of the Software Development Life Cycle (SDLC) across various industry sectors, specializing in Big Data analytics frameworks and Java/J2EE technologies.
  • 4+ years of industry experience in Big Data analytics and data manipulation using Hadoop ecosystem tools: MapReduce, HDFS, YARN/MRv2, Pig, Hive, HBase, Spark, Kafka, Flume, Sqoop, Oozie, Avro, AWS, Spark integration with Cassandra, Solr and ZooKeeper.
  • Hands-on expertise in row key and schema design with NoSQL databases such as MongoDB, HBase and Cassandra.
  • Extensively worked on Spark with Scala on clusters for analytics; installed Spark on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL/Oracle.
  • Excellent Programming skills at a higher level of abstraction using Scala, Java and Python.
  • Experience using DStreams, accumulators, broadcast variables and RDD caching for Spark Streaming (see the sketch following this summary).
  • Hands-on experience developing Spark applications using RDD transformations, Spark Core, Spark MLlib, Spark Streaming and Spark SQL.
  • Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
  • Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
  • Experience running Apache Hadoop, CDH and MapR distributions, and Elastic MapReduce (EMR) on EC2.
  • Expertise in developing Pig Latin scripts and Hive Query Language.
  • Developed customized UDFs and UDAFs in Java to extend Hive and Pig core functionality.
  • Created Hive tables to store structured data into HDFS and processed it using HiveQL.
  • Experience in validating and cleansing the data using Pig statements and hands-on experience in developing Pig MACROS.
  • Working knowledge in installing and maintaining Cassandra by configuring the cassandra.yaml file as per the business requirement and performed reads/writes using Java JDBC connectivity.
  • Wrote multiple MapReduce jobs using the Java API, Pig and Hive for data extraction, transformation and aggregation from multiple file formats including Parquet, Avro, XML, JSON, CSV and ORC, with compression codecs such as Gzip, Snappy and LZO.
  • Good experience in optimizing MapReduce algorithms using mappers, reducers, combiners and partitioners to deliver the best results for large datasets.
  • Good knowledge of build tools such as Maven and Ant, and of logging with Log4j.
  • Hands-on experience with various Hadoop distributions: Cloudera (CDH 4/CDH 5), Hortonworks, MapR, IBM BigInsights, Apache and Amazon EMR.
  • Experienced in writing Ad Hoc queries using Cloudera Impala, also used Impala analytical functions.
  • In depth understanding/knowledge of Hadoop Architecture and various components such as HDFS, MapReduce Programming Paradigm and YARN architecture.
  • Proficient in developing, deploying and managing Solr from development to production.
  • Used project management services like JIRA for issue tracking and GitHub for code reviews, and worked with version control tools such as CVS, Git and SVN.
  • Hands-on knowledge of core Java concepts such as exceptions, collections, data structures, I/O, multi-threading, and serialization/deserialization in streaming applications.
  • Experience in software design, development and implementation of client/server web-based applications using JSTL, jQuery, JavaScript, JavaBeans, JDBC, Struts, PL/SQL, SQL, HTML, CSS, PHP, XML and AJAX, with a high-level overview of the React JavaScript library.
  • Experience in maintaining an Apache Tomcat, MySQL, LDAP and web service environment.
  • Designed ETL workflows on Tableau and deployed data from various sources to HDFS.
  • Performed clustering, regression and classification using the machine learning libraries Mahout and Spark MLlib.
  • Good experience with use-case development and software methodologies such as Agile and Waterfall.
  • Proven ability to manage all stages of project development; strong problem-solving and analytical skills with the ability to make balanced, independent decisions.
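
As an illustration of the broadcast-variable and RDD-caching usage noted above, here is a minimal Spark Streaming sketch in Scala. It is only a sketch: the socket source, host/port and lookup map are hypothetical placeholders, not code from any specific engagement.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingEnrichment {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("StreamingEnrichment")
    val ssc  = new StreamingContext(conf, Seconds(10))

    // Reference data shipped to every executor once via a broadcast variable
    // instead of being serialized into each task closure (contents are hypothetical).
    val countryNames = Map("US" -> "United States", "IN" -> "India")
    val namesBc = ssc.sparkContext.broadcast(countryNames)

    // Hypothetical text source; a Kafka direct stream works the same way.
    val lines = ssc.socketTextStream("localhost", 9999)

    val enriched = lines.flatMap { line =>
      line.split(",") match {
        case Array(id, code) => Seq((id, namesBc.value.getOrElse(code, "Unknown")))
        case _               => Seq.empty[(String, String)]
      }
    }

    // Cache the underlying RDDs so multiple actions per batch reuse them.
    enriched.cache()
    enriched.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```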

TECHNICAL SKILLS:

Big Data Ecosystem: HDFS, MapReduce, Pig, Hive, Spark, YARN, Kafka, Flume, Sqoop, Solr, Impala, Oozie, ZooKeeper, Ambari, Mahout, MongoDB, Cassandra, Avro, Parquet and Snappy.

Hadoop Distributions: Cloudera (CDH3, CDH4, and CDH5), Hortonworks, MapR and Apache

Languages: Java, Python, Scala, SQL, JavaScript and C/C++

NoSQL Databases: Cassandra, MongoDB, HBase and Amazon DynamoDB.

Java Technologies: JSE, Servlets, JavaBeans, JSP, JDBC, JNDI, AJAX, EJB and Struts

Web Design Tools: HTML, DHTML, AJAX, JavaScript, jQuery and JSON

Development / Build Tools: Eclipse, Jenkins, Git, Ant, Maven, IntelliJ, JUnit and Log4j.

App/Web servers: WebSphere, WebLogic, JBoss and Tomcat

DB Languages: MySQL, PL/SQL, PostgreSQL and Oracle

RDBMS: Oracle 10g/11g, MS SQL Server, MySQL and DB2

Operating systems: UNIX, Red Hat Linux, macOS and Windows variants

Testing: JUnit

ETL Tools: Talend

PROFESSIONAL EXPERIENCE:

Confidential, Eden Prairie, MN

Hadoop/Spark Developer

Responsibilities:

  • Designed use cases for the Application as per the business requirements.
  • Involved in various phases of Software Development Life Cycle (SDLC).
  • Used MapR distribution for dealing with Hadoop eco system components.
  • Involved in loading data from Teradata database into HDFS using Sqoop Jobs.
  • Involved in performance tuning of Spark jobs using caching and by taking full advantage of the cluster environment.
  • Responsible for design & development of Spark SQL Scripts based on Functional Specifications and for optimizing the query performance.
  • Developed Spark SQL code to extract data from Teradata and the data lake and push it to an Elasticsearch cluster (see the sketch following this list).
  • Built centralized logging with Elasticsearch and Kibana to enable better debugging.
  • Developed a search solution using Elasticsearch to extract, transform and index the source data.
  • Built and updated Kibana dashboards, mostly to visualize logs written in production, which made it easier to diagnose and understand production issues.
  • Created indices in the Elasticsearch cluster using the provided mappings and uploaded the JSON data.
  • Wrote full Query DSL (Domain Specific Language) queries in JSON to retrieve data from the Elasticsearch cluster.
  • Built Kibana dashboards to monitor the health of the Elasticsearch cluster on an ongoing basis.
  • Developed RESTful APIs using the Spring Boot framework, built with Gradle, as per the business requirement.
  • Used the OpenShift application platform for deploying and configuring the application.
  • Used Jenkins as a continuous integration tool for building and deploying Java code; experienced in scheduling and monitoring builds and deployments in Jenkins.
  • Handled deployment of RESTful web services using Docker and OpenShift.
  • Used Sonar for the code quality assessment. Fixed all Critical and Major Quality issues as reported in SonarQube from time to time.
  • Used JUnit to test the persistence and service tiers; involved in unit test case preparation using the Mockito framework.
  • Performed Fortify scans from time to time to check for security issues.
  • Worked extensively with JMeter to create test plans for web applications to perform load and scalability testing.
  • Used tools like WinSCP and PuTTY to connect to and interact with the edge node.
  • Built and deployed the application using Gradle.
  • Used Git as a version control tool for merging branches and used SourceTree to resolve conflicts.
  • Used Rally as a ticketing tool for managing user stories under the Agile Scrum methodology.
  • Involved in sprint planning, code review, and daily standup meetings to discuss the progress of the application.
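
The Spark SQL work above (extracting from Teradata and the data lake and indexing into Elasticsearch) can be sketched roughly as below. This is a minimal illustration, not the production job: the JDBC URL, credentials, table, columns and index name are hypothetical placeholders, and it assumes the Teradata JDBC driver and the elasticsearch-spark connector are on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object TeradataToElasticsearch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("TeradataToElasticsearch")
      .config("es.nodes", "es-host")   // placeholder Elasticsearch host
      .config("es.port", "9200")
      .getOrCreate()

    // Pull a slice of data from Teradata over JDBC (URL, table and credentials are placeholders).
    val claims = spark.read.format("jdbc")
      .option("url", "jdbc:teradata://td-host/DATABASE=edw")
      .option("driver", "com.teradata.jdbc.TeraDriver")
      .option("dbtable", "edw.claims")
      .option("user", "svc_user")
      .option("password", sys.env.getOrElse("TD_PASSWORD", ""))
      .load()

    // Light Spark SQL transformation before indexing.
    claims.createOrReplaceTempView("claims")
    val recent = spark.sql(
      """SELECT claim_id, member_id, status, updated_ts
        |FROM claims
        |WHERE updated_ts >= date_sub(current_date(), 7)""".stripMargin)

    // Index into Elasticsearch via the elasticsearch-spark connector.
    recent.write
      .format("org.elasticsearch.spark.sql")
      .option("es.mapping.id", "claim_id")
      .mode("append")
      .save("claims/recent")
  }
}
```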

Environment: Hadoop, HDFS, Spark, MapR, YARN, Elasticsearch, Kibana, Log4j, Shell Scripting, Mockito, Java, SQL, DSL, SonarQube, Docker, Rally, JMeter, Fortify, WinSCP, PuTTY, JUnit, Agile, Jenkins, Linux, Teradata, Git.

Confidential, St. Louis, MO

Hadoop/Spark Developer

Responsibilities:

  • Performed data ingestion from various APIs holding geospatial location, weather and product-based information about fields and the products grown in them.
  • Worked on cleaning and processing the data obtained and performing statistical analysis on it to derive useful insights.
  • Explored Spark framework for improving the performance and optimization of the existing algorithms in Hadoop using Spark Core, Spark SQL, Spark Streaming APIs.
  • Ingested data from relational databases to HDFS on a regular basis using Sqoop incremental imports.
  • Extracted structured data from multiple relational data sources as DataFrames in SparkSQL.
  • Involved in schema extraction from file formats like Avro, Parquet.
  • Transformed the DataFrames as per the requirements of data science team.
  • Loaded the data into HDFS in Parquet, Avro formats with compression codecs like Snappy, LZO as per the requirement.
  • Worked on the integration of Kafka service for stream processing.
  • Worked on creating near real-time data streaming solutions using Spark Streaming and Kafka, persisting the data in Cassandra (see the sketch following this list).
  • Involved in data modeling, ingesting data into Cassandra using CQL, Java APIs and other drivers.
  • Implemented CRUD operations using CQL on top of Cassandra file system.
  • Analyzed the transactional data in HDFS using Hive and optimized query performance by segregating the data using clustering and partitioning.
  • Developed Spark Applications for various business logics using Scala.
  • Created dynamic visualizations displaying statistics of the data by location on maps.
  • Wrote RESTful APIs in Scala to implement the defined functionality.
  • Collaborated with other teams in the data pipeline to achieve desired goals.
  • Used Amazon DynamoDB to gather and track event-based metrics.
  • Imported data from AWS S3 into Spark RDDs and performed transformations and actions on them.
  • Worked with various AWS components such as EC2, S3, IAM, VPC, RDS, Route 53, SNS and SQS.
  • Involved in pulling data from the Amazon S3 data lake and built Hive tables using HiveContext in Spark.
  • Involved in running Hive queries and Spark jobs on data stored in S3.
  • Ran short-term ad hoc queries and jobs on the data stored in S3 using AWS EMR.
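
A minimal sketch of the Spark Streaming / Kafka / Cassandra flow referenced above, assuming the spark-streaming-kafka-0-10 and DataStax spark-cassandra-connector dependencies; the broker address, topic, keyspace, table and the CSV message layout are hypothetical placeholders.

```scala
import com.datastax.spark.connector._
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Mirrors a hypothetical Cassandra table telemetry.sensor_data(field_id, metric, value, event_ts).
case class SensorReading(fieldId: String, metric: String, value: Double, eventTs: Long)

object KafkaToCassandra {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("KafkaToCassandra")
      .set("spark.cassandra.connection.host", "cassandra-host") // placeholder

    val ssc = new StreamingContext(conf, Seconds(5))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "kafka-host:9092", // placeholder broker list
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "field-telemetry",
      "auto.offset.reset"  -> "latest")

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("field-telemetry"), kafkaParams))

    // Parse simple CSV messages and persist each micro-batch to Cassandra.
    stream.map(_.value)
      .flatMap { line =>
        line.split(",") match {
          case Array(id, metric, v, ts) => Seq(SensorReading(id, metric, v.toDouble, ts.toLong))
          case _                        => Seq.empty[SensorReading]
        }
      }
      .foreachRDD { rdd =>
        rdd.saveToCassandra("telemetry", "sensor_data",
          SomeColumns("field_id", "metric", "value", "event_ts"))
      }

    ssc.start()
    ssc.awaitTermination()
  }
}
```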

Environment: Hadoop, HDFS, Hive, Kafka, Sqoop, Shell Scripting, Spark, AWS EMR, Linux CentOS, AWS S3, Cassandra, Java, Scala, Eclipse, Maven, Agile.

Confidential, Columbus, OH

Hadoop/Spark Developer

Responsibilities:

  • Developed Spark Applications by using Scala and Implemented Apache Spark data processing project to handle data from various RDBMS and Streaming sources.
  • Worked with Spark to improve the performance and optimization of existing algorithms in Hadoop using Spark Context, Spark SQL, Spark MLlib, DataFrames, pair RDDs and Spark on YARN.
  • Used Spark Streaming APIs to perform transformations and actions on the fly to build a common learner data model that receives data from Kafka in near real time and persists it to Cassandra.
  • Developed Kafka consumers in Scala for consuming data from Kafka topics.
  • Consumed XML messages using Kafka and processed the XML files using Spark Streaming to capture UI updates.
  • Developed a preprocessing job using Spark DataFrames to flatten JSON documents into flat files (see the sketch following this list).
  • Loaded DStream data into Spark RDDs and performed in-memory computation to generate the output response.
  • Experienced in writing live Real-time Processing and core jobs using Spark Streaming with Kafka as a data pipe-line system.
  • Used Apache NiFi for ingestion of data from message queues.
  • Migrated an existing on-premises application to AWS; used AWS services like EC2 and S3 for small data set processing and storage, and maintained the Hadoop cluster on AWS EMR.
  • Imported data from AWS S3 into Spark RDDs and performed transformations and actions on them.
  • Good understanding of Cassandra architecture, replication strategy, gossip, snitch, etc.
  • Designed tables in Cassandra, ingested data from RDBMS, performed data transformations, and then exported the transformed data to Cassandra as per the business requirement.
  • Used the Spark DataStax Cassandra Connector to load data to and from Cassandra.
  • Experienced in creating data models for the client's transactional logs and analyzed the data from Cassandra tables for quick searching, sorting and grouping using the Cassandra Query Language (CQL).
  • Tested the cluster Performance using Cassandra-stress tool to measure and improve the Read/Writes.
  • Used HiveQL to analyze the partitioned and bucketed data, and executed Hive queries on Parquet tables stored in Hive to perform data analysis meeting the business specification logic.
  • Used Apache Kafka to aggregate web log data from multiple servers and make it available in downstream systems for data analysis and engineering roles.
  • Experience in using Avro, Parquet, RCFile and JSON file formats, developed UDFs in Hive and Pig.
  • Developed Custom Pig UDFs in Java and used UDFs from PiggyBank for sorting and preparing the data.
  • Developed custom loaders and storage classes in Pig to work with several data formats like JSON, XML and CSV, and generated bags for processing in Pig.
  • Developed Sqoop and Kafka Jobs to load data from RDBMS, External Systems into HDFS and HIVE.
  • Developed Oozie coordinators to schedule Pig and Hive scripts to create Data pipelines.
  • Used Impala wherever possible to achieve faster results than Hive during data analysis.
  • Wrote several MapReduce jobs using the Java API and used Jenkins for continuous integration.
  • Setting up and worked on Kerberos authentication principals to establish secure network communication on cluster and testing of HDFS, Hive, Pig and MapReduce to access cluster for new users.
  • Continuous monitoring and managing the Hadoop cluster through Cloudera Manager.
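
The JSON-flattening preprocessing job mentioned above follows roughly this shape; the input path, field names and output location are hypothetical placeholders rather than the actual schema.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, explode}

object FlattenJson {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("FlattenJson").getOrCreate()

    // Hypothetical input: one learner record per line with a nested "events" array.
    val raw = spark.read.json("hdfs:///data/learner/raw/*.json")

    // Explode the nested array and project nested fields into flat columns.
    val flat = raw
      .select(col("learnerId"), explode(col("events")).as("event"))
      .select(
        col("learnerId"),
        col("event.type").as("event_type"),
        col("event.timestamp").as("event_ts"),
        col("event.score").as("score"))

    // Write out as a delimited flat file for downstream consumers.
    flat.write.option("header", "true").csv("hdfs:///data/learner/flat")
  }
}
```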

Environment: Spark, Spark-Streaming, Spark SQL, AWS EMR, MapR, HDFS, Impala, NiFi, Hive, Pig, Apache Kafka, Sqoop, Java (JDK SE 6, 7), Scala, Shell scripting, Linux, MySQL, Oracle Enterprise DB, SOLR, Jenkins, Eclipse, Oracle, Git, Oozie, Soap, Cassandra and Agile Methodologies.

Confidential, McLean, VA

Hadoop/Spark Developer

Responsibilities:

  • Worked on analyzing Hadoop cluster using different big data analytic tools including Pig, Hive, Oozie, Zookeeper, Sqoop, Spark, Kafka and Impala with Cloudera distribution.
  • Developed Spark Applications by using Scala, Java and Implemented Apache Spark data processing project to handle data from various RDBMS and Streaming sources.
  • Worked with Spark to improve the performance and optimization of existing algorithms in Hadoop using Spark Context, Spark SQL, Spark MLlib, DataFrames, pair RDDs and Spark on YARN.
  • Experience in implementing Spark RDDs in Scala.
  • Configured Spark Streaming to receive ongoing information from Kafka and store the stream data in HDFS.
  • Used Kafka capabilities such as partitioning, replication and the replicated commit log service to maintain messaging feeds.
  • Involved in loading data from REST endpoints into Kafka producers and transferring the data to Kafka brokers (see the sketch following this list).
  • Used Apache Kafka to aggregate web log data from multiple servers and make it available in downstream systems for data analysis and engineering roles.
  • Developed a preprocessing job using Spark DataFrames to flatten JSON documents into flat files.
  • Loaded DStream data into Spark RDDs and performed in-memory computation to generate the output response.
  • Involved in performance tuning of Spark jobs using caching and by taking full advantage of the cluster environment.
  • Experienced in writing live Real-time Processing and core jobs using Spark Streaming with Kafka as a data pipe-line system.
  • Experienced in using Spark Core to join data for delivering reports and detecting fraudulent activity.
  • Designed column families in Cassandra, ingested data from RDBMS, performed data transformations, and then exported the transformed data to Cassandra as per the business requirement.
  • Used DataStax Spark-Cassandra connector to load data into Cassandra and used CQL to analyze data from Cassandra tables for quick searching, sorting and grouping.
  • Experienced in creating data models for the client's transactional logs and analyzed the data from Cassandra tables for quick searching, sorting and grouping using the Cassandra Query Language (CQL).
  • Tested the cluster Performance using Cassandra-stress tool to measure and improve the Read/Writes.
  • Good understanding of Cassandra architecture, replication strategy, gossip, snitch etc.
  • Developed Sqoop Jobs to load data from RDBMS, External Systems into HDFS and HIVE.
  • Developed Oozie coordinators to schedule Pig and Hive scripts to create Data pipelines.
  • Worked and learned a great deal from AWS Cloud services like EC2, S3, EBS, RDS and VPC.
  • Migrated an existing on-premises application to AWS. Used AWS services like EC2 and S3 for small data sets processing and storage.
  • Experienced in Maintaining the Hadoop cluster on AWS EMR.
  • Imported data from AWS S3 into Spark RDDs and performed transformations and actions on them.
  • Implemented Elasticsearch on the Hive data warehouse platform.
  • Worked with Elastic MapReduce and set up the Hadoop environment on AWS EC2 instances.
  • Used HiveQL to analyze the partitioned and bucketed data, and executed Hive queries on Parquet tables stored in Hive to perform data analysis meeting the business specification logic.
  • Experience in using Avro, Parquet, RCFile and JSON file formats, developed UDFs in Hive and Pig.
  • Worked with Log4j framework for logging debug, info & error data.
  • Developed Custom Pig UDFs in Java and used UDFs from PiggyBank for sorting and preparing the data.
  • Developed custom loaders and storage classes in Pig to work with several data formats like JSON, XML and CSV, and generated bags for processing in Pig.
  • Experienced with full-text search and faceted search using Solr, and implemented data querying with Solr.
  • Well versed in data warehousing ETL concepts using Informatica PowerCenter, OLAP, OLTP and AutoSys.
  • Setting up and worked on Kerberos authentication principals to establish secure network communication on cluster and testing of HDFS, Hive, Pig and MapReduce to access cluster for users.
  • Continuous monitoring and managing the Hadoop cluster through Cloudera Manager.
  • Used the external tables in Impala for data analysis.
  • Generated various kinds of reports using Power BI and Tableau based on Client specification.
  • Used Jira for bug tracking while checking in and checking out code changes.
  • Worked with Network, Database, Application, QA and BI teams to ensure data quality and availability.
  • Responsible for generating actionable insights from complex data to drive business results for various application teams and worked in Agile Methodology projects extensively.
  • Hands-on experience with container technologies such as Docker; embedded containers in existing CI/CD pipelines.
  • Worked with SCRUM team in delivering agreed user stories on time for every Sprint.
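
A rough sketch of loading data from a REST endpoint into a Kafka producer, as referenced above; the endpoint URL, broker address and topic name are hypothetical placeholders, and a production version would add error handling, batching and scheduling.

```scala
import java.util.Properties

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

import scala.io.Source

object RestToKafka {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "kafka-host:9092") // placeholder broker list
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)

    // Hypothetical REST endpoint returning one JSON document per call.
    val endpoint = "http://api-host/v1/transactions/latest"
    val payload  = Source.fromURL(endpoint).mkString

    // Publish to the topic; the brokers partition and replicate the commit log.
    producer.send(new ProducerRecord[String, String]("transactions", payload))
    producer.flush()
    producer.close()
  }
}
```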

Environment: Hadoop, Spark, Spark-Streaming, Spark SQL, AWS EMR, MapR, HDFS, Hive, Pig, Apache Kafka, Sqoop, Java (JDK SE 6, 7), Scala, Shell scripting, Linux, MySQL, SOLR, Jenkins, Eclipse, Oracle, Git, Oozie, Tableau, MySQL, Soap, NIFI, Cassandra and Agile.

Confidential, Shreveport, LA

Hadoop Developer

Responsibilities:

  • Primary responsibilities include building scalable distributed data solutions using Hadoop ecosystem.
  • Experienced in designing and deploying a Hadoop cluster and different big data analytic tools including Pig, Hive, Flume, HBase and Sqoop.
  • Imported weblogs and unstructured data using Apache Flume and stored them in a Flume channel.
  • Loaded CDRs from relational databases into the Hadoop cluster using Sqoop and from other sources using Flume.
  • Developed Pig Latin scripts to extract data from the web server output files and load it into HDFS.
  • Worked on migrating MapReduce programs into Spark transformations using Spark and Scala, initially done in Python (PySpark); see the sketch following this list.
  • Developed Spark jobs using Scala on top of Yarn/MRv2 for interactive and Batch Analysis.
  • Experienced in querying data using SparkSQL on top of Spark engine for faster data sets processing.
  • Worked on implementing the Spark Framework, a Java-based web framework.
  • Worked with Apache Solr to implement indexing, wrote custom Solr query segments to optimize search, and wrote Java code to format XML documents and upload them to the Solr server for indexing.
  • Experience in creating, dropping and altering tables at run time without blocking updates and queries, using HBase and Hive.
  • Experience in working with different join patterns and implemented both Map and Reduce Side Joins.
  • Wrote Flume configuration files for importing streaming log data into HBase with Flume.
  • Imported several transactional logs from web servers with Flume to ingest the data into HDFS.
  • Continuous monitoring and managing of the Hadoop cluster through the Hortonworks (HDP) distribution.
  • Configured various views in Ambari such as the Hive view, Tez view and YARN Queue Manager.
  • Installed and configured Pig, written Pig Latin scripts to convert the data from Text file to Avro format.
  • Created Partitioned Hive tables and worked on them using HiveQL.
  • Designed and developed Job flows using Oozie , managing and reviewing log files.
  • Wrote and implemented Teradata FastLoad and MultiLoad scripts, DML and DDL.
  • Collected data from distributed sources into Avro models, applied transformations and standardizations, and loaded it into HBase for further data processing.
  • Importing log files using Flume into HDFS and load into Hive tables to query data.
  • Used Oozie Operational Services for batch processing and scheduling workflows dynamically.
  • Worked on Ad hoc queries, Indexing, Replication, Load balancing, Aggregation in MongoDB.
  • Processed web server logs by developing multi-hop Flume agents using the Avro sink and loaded them into MongoDB for further analysis; also extracted files from MongoDB through Flume and processed them.
  • Expert knowledge of MongoDB NoSQL data modeling, tuning and disaster recovery backups; used it for distributed storage and processing with CRUD operations.
  • Extracted and restructured the data into MongoDB using the import and export command-line utility tools.
  • Implemented custom serializers and interceptors in Flume to mask confidential data and filter unwanted records from the event payload.
  • Installed and configured Talend ETL in single and multi-server environments.
  • Worked with the continuous integration tool Jenkins and automated jar file builds at the end of each day.
  • Worked with Tableau and Integrated Hive, Tableau Desktop reports and published to Tableau Server.
  • Developed a data pipeline using Pig and Java MapReduce to consume customer behavioral data and financial histories into HDFS for analysis.
  • Developed MapReduce programs in Java for parsing the raw data and populating staging tables.
  • Developed Unix shell scripts to load files into HDFS from Linux File System.
  • Collaborated with Database, Network, application and BI teams to ensure data quality and availability.
  • Supported in setting up QA environment and updating configurations for implementing scripts with Pig, Hive and Sqoop.
  • Experienced in designing RESTful services using Java-based APIs like Jersey.
  • Followed the Agile methodology for the entire project; experienced in Extreme Programming, Test-Driven Development and Agile Scrum.
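
To illustrate the MapReduce-to-Spark migration mentioned above: a typical mapper/reducer pair that counted weblog status codes collapses into a few RDD transformations. The paths and log layout below are hypothetical placeholders under the assumption of combined log format input.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WeblogStatusCounts {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("WeblogStatusCounts"))

    // The old MapReduce job emitted (statusCode, 1) from the mapper and summed in the reducer;
    // the same logic becomes two transformations and one action here.
    val logs = sc.textFile("hdfs:///data/weblogs/raw") // placeholder input path

    val statusCounts = logs
      .map(_.split(" "))
      .filter(_.length > 8)            // keep only well-formed combined-log lines
      .map(fields => (fields(8), 1L))  // status code is the 9th field in combined log format
      .reduceByKey(_ + _)              // equivalent of the reducer (and combiner)

    statusCounts.saveAsTextFile("hdfs:///data/weblogs/status_counts")
    sc.stop()
  }
}
```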

Environment: Hadoop, HDFS, Hive, MapReduce, Hortonworks (HDP), AWS EC2, SOLR, TEZ, MySQL, Oracle, Sqoop, Flume, Spark, SQL, Talend, Python, PySpark, Yarn, Pig, Oozie, Linux-Ubuntu, Scala, Ab Initio, Tableau, Maven, Jenkins, Java (JDK 1.6), Agile.

Confidential

Java/Web Developer

Responsibilities:

  • Implemented the project according to the Software Development Life Cycle (SDLC).
  • Analyzed and prepared the requirements analysis document.
  • Involved in developing Web Services using SOAP for sending and getting data from external interface.
  • Involved in requirement gathering, requirement analysis, defining scope, and design.
  • Worked with various J2EE components like Servlets, JSPs, JNDI and JDBC using the WebLogic application server.
  • Involved in developing and coding the Interfaces and classes required for the application and created appropriate relationships between the system classes and the interfaces provided.
  • Assisting project managers with drafting use case scenarios during the planning stages.
  • Developing the Use Cases, Class Diagrams and Sequence Diagrams.
  • Used JavaScript for client-side validation.
  • Used HTML, CSS and JavaScript to create web pages.
  • Involved in Database design and developing SQL Queries, stored procedures on MySQL.

Environment: Java, J2EE, JDBC, HTML, CSS, JavaScript, Servlets, JSP, Oracle, Eclipse, WebLogic, MySQL.

Confidential

Hadoop Developer

Responsibilities:

  • Involved in requirements analysis and the design of an object-oriented domain model.
  • Involved in detailed documentation and writing functional specifications for the module.
  • Involved in development of the application with Java and J2EE technologies.
  • Developed and maintained an elaborate service-based architecture utilizing open-source technologies like Hibernate ORM and the Spring Framework.
  • Developed server-side services using Java multithreading, Struts MVC, Java, EJB, Spring, Web Services (SOAP, WSDL, AXIS).
  • Responsible for developing the DAO layer using Spring MVC and configuration XMLs for Hibernate, and for managing CRUD operations (insert, update and delete).
  • Designed, developed and implemented JSPs in the presentation layer for the submission and application modules.
  • Developed JavaScript for client-side data entry validation and front-end validation.
  • Deployed Web, presentation and business components on Apache Tomcat Application Server.
  • Developed PL/SQL procedures for different use case scenarios.
  • Involved in post-production support and testing; used JUnit for unit testing of the module.

Environment: Java/J2EE, JSP, XML, Spring Framework, Hibernate, Eclipse (IDE), JavaScript, Ant, SQL, PL/SQL, Oracle, Windows, UNIX, Soap, Jasper Reports.
