Sr. Big Data/hadoop Developer/engineer Resume
Chicago, IL
SUMMARY:
- Over 10 years of working experience as a Big Data/Hadoop Developer/Engineer, designing and developing applications using Big Data, Hadoop, and Java/J2EE open-source technologies.
- Excellent knowledge on Hadoop Architecture and ecosystems such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node, Yarn and MapReduce programming paradigm.
- Experienced with major components in the Hadoop ecosystem, including Hive, Sqoop and Flume, with knowledge of the MapReduce/HDFS framework.
- Hands-on programming experience in technologies such as Java, J2EE, HTML, XML, JSON, CSS and Angular.js.
- Expertise in loading the data from the different Data sources like (Oracle, Teradata and DB2) into HDFS using Sqoop and load into partitioned Hive tables.
- Experienced in Amazon AWS cloud which includes services like: EC2, S3, EBS, ELB, AMI Route53, Auto scaling, Cloud Front, Cloud Watch, and Security Groups.
- Experience in Machine Learning and data analytics on Big Data sets; hands-on experience developing Spark applications using Spark APIs such as Spark Core, Spark Streaming, Spark MLlib and Spark SQL, and working with file formats such as Text, SequenceFile, Avro, ORC, JSON and Parquet.
- Very good experience and knowledge of Amazon Web Services (AWS) concepts such as EMR and EC2, which provide fast and efficient processing for Teradata Big Data Analytics.
- Experienced in Apache Flume for collecting, aggregating and moving huge chunks of data from various sources such as web servers and telnet sources.
- Experience using various Hadoop distributions (Cloudera, Hortonworks, MapR, etc.) to fully implement and leverage new Hadoop features.
- Expertise in Data Development in Hortonworks HDP platform & Hadoop ecosystem tools like Hadoop, HDFS, Spark, Zeppelin, Hive, HBase, SQOOP, flume, Atlas, SOLR, Pig, Falcon, Oozie, Hue, Tez, Apache NiFi, Kafka.
- Extensive experience with Talend ETL, database, data set, HBase, Hive, Pig, HDFS and Sqoop components; generated metadata and created Talend ETL jobs and mappings to load data warehouses and Data Lakes.
- Hands on experience in coding Map Reduce/Yarn Programs using Java, Scala and Python for analyzing Big data.
- Expertise in DevOps, Release Engineering, Configuration Management, and Cloud Infrastructure Automation, including Amazon Web Services (AWS), Ant, Maven, Jenkins, Chef, and GitHub.
- Strong experience working in Unix/Linux environments and writing shell scripts, with excellent knowledge of and working experience in Agile & Waterfall methodologies.
- Good experience in object-oriented concepts with Python; integrated different data sources and performed data wrangling: cleaning, transforming, merging and reshaping data sets by writing Python scripts.
- Good knowledge on Amazon web services: EC2, Redshift, S3, Elastic Load balancer, Cloud watch, Auto scaling etc.
- Expertise in writing Hadoop Jobs for analyzing data using Hive QL (Queries), Pig Latin (Data flow language), and custom MapReduce programs in Java.
- Expertise in web page development using JSP, HTML, JavaScript, jQuery and Ajax, with strong working knowledge of front-end technologies including JavaScript frameworks and Angular.js.
- Hands on experience with NoSQL Databases like HBase, MongoDB and Cassandra and relational databases like Oracle, Teradata, DB2 and MySQL.
- Proficiency in developing MVC-pattern web applications using Struts, creating forms with Struts Tiles and validating them with the Struts validation framework.
- Experience deploying applications on application servers such as Apache Tomcat and WebSphere; responsible for pushing scripts to the GitHub version control repository hosting service and deploying code using Jenkins, with web-based UI development experience using jQuery, Ext JS, CSS, HTML, HTML5, XHTML and JavaScript.
- Experience working with developer toolkits such as Force.com IDE, Force.com Ant Migration Tool, Eclipse IDE, and Maven.
- Experience in front-end technologies such as HTML, CSS, HTML5, CSS3, and Ajax, and in data migration on Azure integrated with a GitHub repository and Jenkins.
- Experienced in installing Kafka on a Hadoop cluster and configuring producers and consumers in Java to stream data from a Twitter source to HDFS filtered by popular hashtags.
TECHNICAL SKILLS:
Hadoop Ecosystem: Hadoop 3.0, HDFS, MapReduce, Hive 2.3, Impala 2.10, Apache Pig 0.17, Sqoop 1.4, Oozie 4.3, Zookeeper 3.4, Flume 1.8, Kafka 1.0.1, Spark, Spark SQL, Spark Streaming, AWS, Azure Data Lake, NoSQL.
Application Server: Web sphere, Weblogic, JBoss, Apache Tomcat
Databases: HBase 1.2, Cassandra 3.11, MongoDB 3.6, MySQL 8.0, SQL Server 2016, Oracle 12c
IDE: Eclipse, NetBeans, MySQL Workbench.
Agile Tools: Jira, Jenkins, Scrum
Build Management Tools: Maven, Apache Ant
Java & J2EE Technologies: Core Java, Servlets, JSP, JDBC, JNDI, Java Beans
Languages: C, C++, JAVA, SQL, PL/SQL, PIG Latin, HiveQL, UNIX shell scripting
Frameworks: MVC, Spring, Hibernate, Struts 1/2, EJB, JMS, JUnit, MR-Unit
Version control: Github, Jenkins
Methodology: RAD, RUP, UML, System Development Life Cycle (SDLC), Waterfall Model.
PROFESSIONAL EXPERIENCE:
Sr. Big Data/Hadoop Developer/Engineer
Confidential, Chicago IL
Responsibilities:
- Involved in gathering requirements from the client and estimating timelines for developing complex Hive queries for a logistics application; identified data sources, created source-to-target mappings, estimated storage, supported Hadoop cluster setup, and handled data partitioning.
- Worked with cloud provisioning team on a capacity planning and sizing of the nodes (Master and Slave) for an AWS EMR Cluster.
- Responsible for Cluster maintenance, Monitoring, commissioning and decommissioning Data nodes, Troubleshooting, Manage and review data backups, Manage & review log files.
- Worked with Amazon EMR to process data directly in S3 and to copy data from S3 to the Hadoop Distributed File System (HDFS) on the EMR cluster, setting up Spark Core for analysis work.
- Involved in the complete SDLC, Daily Scrum (Agile) including design of System Architecture, development of System Use Cases based on the functional requirements.
- Analyzed the existing data flow to the warehouses and took a similar approach to migrate the data into HDFS; created Partitioning, Bucketing, Map Side Joins and parallel execution to optimize Hive queries, decreasing execution time from hours to minutes.
- Responsible for creating an instance on Amazon EC2 (AWS) and deployed the application on it.
- Worked on Spark architecture and how RDDs work internally, processing data from local files, HDFS and RDBMS sources by creating RDDs and optimizing them for performance.
- Explored Spark for improving the performance and optimizing existing algorithms in Hadoop using Spark Context, Spark SQL, DataFrames, pair RDDs and Spark on YARN.
- Extensively used Talend Bigdata to build the data lake on Hadoop and design an efficient disaster recovery mechanism. Ensure efficient recovery and low latency environment by migrating to Hadoop servers.
- Developed Spark code using Scala and Spark SQL for faster processing and testing, and worked on real-time data streaming solutions using Apache Spark/Spark Streaming and Kafka.
- Involved in data pipeline using Pig, Sqoop to ingest cargo data and customer histories into HDFS for analysis.
- Imported data from different sources such as AWS S3 and the local file system into Spark RDDs, and worked on the Amazon Web Services cloud (EMR, S3, EC2, Lambda).
- Involved in ingesting data into HDFS using Apache NiFi; developed and deployed NiFi flows across various environments, optimized NiFi data flows, and wrote QA scripts in Python for tracking missing files.
- Imported and exported terabytes of data using Sqoop and real-time data using Flume and Kafka, and wrote programs in Spark using Scala and Python for data quality checks.
- Worked on importing data from MySQL DB to HDFS and vice-versa using Sqoop to configure Hive metastore with MySQL, which stores the metadata for Hive tables and worked with NoSQL databases like HBase in creating HBase tables to load large sets of semi structured data coming from various sources.
- Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs, Python and Scala; used the Spark API over Cloudera Hadoop YARN to perform analytics on data in Hive, and was involved in creating Hive tables, loading them with data and writing Hive queries that invoke and run MapReduce jobs in the backend.
- Responsible for loading the customer's data and event logs from Kafka into HBase using a REST API, and created custom UDFs in Scala for Spark and Kafka procedures to replace non-working functionality in the production environment.
- Developed workflows in Oozie and scheduling jobs in Mainframes by preparing data refresh strategy document & Capacity planning documents required for project development and support and worked with different actions in Oozie to design workflow like Sqoop action, pig action, hive action, shell action.
- Implemented Kafka consumers to move data from Kafka partitions into Cassandra for near real time analysis.
- Ingested all formats of structured and unstructured data including Logs/Transactions, Relation databases using Sqoop & Flume into HDFS and involved in collecting and aggregating large amounts of log data using Flume and staging data in HDFS for further analysis.
- Have used Enterprise Data Warehouse (EDW) architecture and various data modeling concepts like star schema, snowflake schema in the project.
Environment: AWS S3, EMR, Python 3.6, PySpark, Scala, Hadoop 3.0, MapReduce, Hive 2.3, impala, Sqoop 1.4, Spark 2.2 SQL, Spark Stream, Airflow, Jenkins, GIT, Bitbucket, R Language 3.4 and Tableau, Oozie, Flume, AWS EC2, Lambda, MongoDB, HDFS, Pig, Unix Shell Scripting, Kafka, HBase.
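A minimal sketch of the kind of Python QA script described above for tracking missing files in an ingest landing zone; the file names and manifest layout are illustrative assumptions, not the production format:

```python
def find_missing_files(manifest, landed):
    """Return manifest entries that never arrived in the landing zone."""
    landed_set = set(landed)  # O(1) membership checks
    return sorted(f for f in manifest if f not in landed_set)

# Hypothetical example: expected daily cargo extracts vs. files seen in HDFS
expected = ["cargo_2018-01-01.avro", "cargo_2018-01-02.avro", "cargo_2018-01-03.avro"]
arrived = ["cargo_2018-01-01.avro", "cargo_2018-01-03.avro"]
print(find_missing_files(expected, arrived))  # -> ['cargo_2018-01-02.avro']
```

In practice the `landed` list would come from an HDFS directory listing (e.g. `hdfs dfs -ls` output) rather than a hard-coded list.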
Sr. Big Data/Hadoop Developer/Engineer
Confidential, Dallas TX
Responsibilities:
- Involved in Agile methodologies, daily scrum meetings and sprint planning; wrote scripts for distributing query performance-test jobs in an Amazon Data Lake.
- Created Hive Tables, loaded transactional data from Teradata using Sqoop and worked with highly unstructured and semi structured data of 2 Petabytes in size.
- Developed MapReduce jobs for cleaning, accessing and validating the data and created and worked Sqoop jobs with incremental load to populate Hive External tables.
- Developed optimal strategies for distributing the web log data over the cluster importing and exporting the stored web log data into HDFS and Hive using Sqoop.
- Installed and configured Apache Hadoop on multiple nodes on AWS EC2, and developed Pig Latin scripts to replace the existing legacy process on Hadoop, feeding the data to AWS S3.
- Responsible for building scalable distributed data solutions using Cloudera Hadoop, and designed and developed automation test scripts using Python.
- Integrated Apache Storm with Kafka to perform web analytics and to move clickstream data from Kafka to HDFS.
- Analyzed the SQL scripts and designed a solution implemented using PySpark; implemented Hive generic UDFs to incorporate business logic into Hive queries.
- Responsible for developing data pipeline with Amazon AWS to extract the data from weblogs and store in HDFS.
- Uploaded streaming data from Kafka to HDFS, HBase and Hive by integrating with storm and writing Pig-scripts to transform raw data from several data sources into forming baseline data.
- Analyzed the web log data using the HiveQL to extract number of unique visitors per day, page views, visit duration, most visited page on website.
- Supported data analysis projects using Elastic MapReduce on the Amazon Web Services (AWS) cloud, and performed export and import of data to and from S3.
- Worked on MongoDB by using CRUD (Create, Read, Update and Delete), Indexing, Replication and Sharding features.
- Involved in designing the row key in HBase to store Text and JSON as key values in HBase table and designed row key in such a way to get/scan it in a sorted order.
- Integrated Oozie with the rest of the Hadoop stack supporting several types of Hadoop jobs out of the box (such as Map-Reduce, Pig, Hive, and Sqoop) as well as system specific jobs (such as Java programs and shell scripts)
- Worked on custom Talend jobs to ingest, enrich and distribute data in Cloudera Hadoop ecosystem.
- Created Hive tables and worked on them using HiveQL; designed and implemented static and dynamic partitioning and buckets in Hive.
- Developed multiple POCs using PySpark and deployed them on the YARN cluster; compared the performance of Spark with Hive and SQL, and was involved in end-to-end implementation of ETL logic.
- Used Spark-Streaming APIs to perform necessary transformations and actions on the fly for building the common learner data model which gets the data from Kafka in near real time and Persists into Cassandra.
- Developed syllabus/Curriculum data pipelines from Syllabus/Curriculum Web Services to HBASE and Hive tables.
- Worked on Cluster co-ordination services through Zookeeper and monitored workload, job performance and capacity planning using Cloudera Manager
- Involved in building applications using Maven and integrating with CI servers such as Jenkins to build jobs.
- Configured deployed and maintained multi-node Dev and Test Kafka Clusters and implemented data ingestion and handling clusters in real time processing using Kafka.
- Exported the analyzed data to RDBMS databases such as Teradata, MySQL and Oracle using Sqoop for visualization and to generate reports for the BI team, and worked collaboratively with all levels of business stakeholders to architect, implement and test a Big Data based analytical solution from disparate sources.
- Created cubes in Talend to build different types of aggregations over the data and to visualize them.
- Monitored Hadoop NameNode health status and the number of Task Trackers and Data Nodes running, and automated all the jobs from pulling data from sources such as MySQL to pushing the result set to the Hadoop Distributed File System.
Environment: Hive 2.3, Teradata r15, MapReduce, HDFS, Sqoop 1.4, AWS, Hadoop 3.0, Pig 0.17, Python 3.4, Kafka 1.1, Apache Storm, SQL scripts, data pipeline, HBase, JSON, Oozie, ETL, Zookeeper, Maven, Jenkins, RDBMS
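The web-log analysis above (unique visitors per day, page views) amounts to a GROUP BY day / COUNT(DISTINCT visitor) in HiveQL; a small pure-Python sketch of the same aggregation logic, with record fields and values invented for illustration:

```python
from collections import defaultdict

def daily_unique_visitors(log_records):
    """Count distinct visitor IDs per day from parsed web-log records,
    mirroring a HiveQL GROUP BY day / COUNT(DISTINCT visitor) query."""
    visitors_by_day = defaultdict(set)
    for day, visitor, _page in log_records:
        visitors_by_day[day].add(visitor)  # sets deduplicate repeat visits
    return {day: len(v) for day, v in visitors_by_day.items()}

# Hypothetical parsed log records: (day, visitor_id, page)
records = [
    ("2017-05-01", "u1", "/home"),
    ("2017-05-01", "u1", "/cart"),
    ("2017-05-01", "u2", "/home"),
    ("2017-05-02", "u3", "/home"),
]
print(daily_unique_visitors(records))  # -> {'2017-05-01': 2, '2017-05-02': 1}
```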
Sr. Hadoop Developer
Confidential, Oakland, CA
Responsibilities:
- Worked on a wide range of tasks related to a massive modernization effort (including the incorporation of a Hadoop Big Data platform, namely Hortonworks Data Platform) for the Health Informatics program.
- Wrote Hive queries for data analysis to meet the business requirements, and loaded and transformed large sets of structured, semi-structured and unstructured data coming from UNIX, NoSQL and a variety of portfolios.
- Performed advanced procedures like text analytics and processing, using the in-memory computing capabilities of Spark using Scala.
- Migrated data between RDBMS and HDFS/Hive with Sqoop and used Partitions, Bucketing concepts in Hive and designed both Managed and External tables in Hive for optimized performance and used Sqoop to import and export data among HDFS, MySQL database and Hive
- Implemented discretization and binning, data wrangling: cleaning, transforming, merging and reshaping data frames using Python and developed and maintained Python ETL scripts to scrape data from external sources and load cleansed data into a SQL Server.
- Worked with Spark eco system using SCALA and HIVE Queries on different data formats like Text file and parquet and used Scala to convert Hive/SQL queries into RDD transformations in Apache Spark.
- Handled importing of data from various data sources, performed transformations using Hive, MapReduce, and loaded data into HDFS.
- Involved in loading data from UNIX/LINUX file system to HDFS and analyzed the data by performing Hive queries and running Pig scripts.
- Worked on implementing the Spark Framework (a Java-based web framework) and designed and implemented Spark jobs to support distributed data processing.
- Involved in managing and reviewing the Hadoop log files, used Pig as ETL tool to do Transformations, even joins and some pre-aggregations before storing the data onto HDFS.
- Worked on Spark Code using Scala and Spark SQL for faster data sets processing and testing.
- Implemented Spark Scripts using Scala, Spark SQL to access hive tables into spark for faster processing of data.
- Processed the Web server logs by developing Multi-hop flume agents by using Avro Sink and loaded into MongoDB for further analysis and extracted and restructured the data into MongoDB using import and export command line utility tool.
- Worked on Python files to load data from CSV, JSON, MySQL and Hive sources into the Neo4j graph database.
- Managed and reviewed Hadoop and HBase log files. Worked on HBase in creating HBase tables to load large sets of semi structured data coming from various sources.
- Performed data analysis with HBase using Hive External tables and exported the analyzed data to HBase using Sqoop and to generate reports for the BI team.
- Imported the data from relational database to Hadoop cluster by using Sqoop and developed Hive queries to process the data and generate the data cubes for visualizing.
- Responsible for building scalable distributed data solutions using Hadoop. Create tables, dropping and altered at run time without blocking updates and queries using HBase and Hive.
- Wrote Flume configuration files for importing streaming log data into HBase with Flume.
- Imported logs from web servers with Flume to ingest the data into HDFS. Using Flume and Spool directory loading the data from local system to HDFS and developed UNIX shell scripts to load large number of files into HDFS from Linux File System.
- Installed and configured pig, written Pig Latin scripts to convert the data from Text file to Avro format and developed MapReduce programs in Java for parsing the raw data and populating staging Tables and created Partitioned Hive tables and worked on them using HiveQL and loaded Data into HBase using Bulk Load and Non-bulk load.
- Worked with Tableau and Integrated Hive, Tableau Desktop reports and published to Tableau Server.
- Used Zookeeper to coordinate the servers in clusters and to maintain the data consistency and developed interactive shell scripts for scheduling various data cleansing and data loading process.
- Worked in Agile development environment having KANBAN methodology. Actively involved in daily scrum and other design related meetings.
Environment: MapReduce, PIG Latin, Hive 1.9, Apache Crunch, Spark, Scala, HDFS, HBase, Core Java, J2EE, Eclipse, Sqoop, Impala, Flume, Oozie, MongoDB 3.0, Jenkins, Agile Scrum methodology
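The data-wrangling work noted above (cleaning, transforming and reshaping data before loading it downstream) can be sketched in a few lines of standard-library Python; the column names and rows here are invented for illustration, not the actual schema:

```python
import csv
import io

def clean_rows(raw_csv):
    """Strip whitespace, drop rows missing their key, and normalize case --
    the kind of cleansing step run before loading data into a SQL store."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        member_id = (row.get("member_id") or "").strip()
        if not member_id:
            continue  # discard rows that cannot be keyed
        rows.append({"member_id": member_id,
                     "plan": (row.get("plan") or "").strip().upper()})
    return rows

# Hypothetical messy extract: padded fields, one row with a missing key
raw = "member_id,plan\n 101 , gold \n,silver\n102,Bronze\n"
print(clean_rows(raw))
# -> [{'member_id': '101', 'plan': 'GOLD'}, {'member_id': '102', 'plan': 'BRONZE'}]
```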
Sr. Java/Hadoop Developer
Confidential, GA
Responsibilities:
- Developed the business solution to make data-driven decisions on the best ways to acquire customers and provide them business solutions.
- Actively participated in every stage of Software Development Lifecycle (SDLC) of the project
- Designed and developed user interface using JSP, Html and JavaScript for better user experience.
- Exported analyzed data to downstream systems using Sqoop-RDBMS for generating end-user reports, Business Analysis reports and payment reports
- Participated in developing different UML diagrams such as Class diagrams, Use case diagrams and Sequence diagrams
- Involved in developing UI (User Interface) using Html, CSS, JSP, JQuery, Ajax, and Java Script.
- Designed dynamic client-side JavaScript code to build web forms and simulate processes for the web application, page navigation and form validation.
- Imported and exported data jobs to perform operations like copying data from HDFS and to HDFS using Sqoop.
- Data integration into destination which is received from various providers using Sqoop onto HDFS for analysis and data processing.
- Managed the clustering environment using the Hadoop platform and worked with Pig, the NoSQL database HBase, and Sqoop for analyzing the Hadoop cluster and big data.
- Managed data using the ingestion tool Kafka, and wrote and implemented Apache Pig scripts to load data from and store data into Hive.
- Assisted admin for extending and setting up the nodes on to the cluster.
- Implemented the NoSQL database HBase and managed the other tools and processes observed running on YARN.
- Wrote Hive UDFS to extract data from staging tables and analyzed the web log data using the Hive QL.
- Used multi-threading and clustering concepts for data processing; managed clustering and design, and debugged issues when they arose.
- Involved in creating Hive tables, load data and writing hive queries, which runs map reduce in backend and further Partitioning and Bucketing was done when required.
- Used Zookeeper for various types of centralized configuration; tested the data coming from the source before processing and resolved any problems faced.
- Pushed and committed sample code to GitHub; ingested the raw data, populated staging tables, and stored the refined data.
- Developed programs to parse the raw data, populate staging tables and store the refined data in partitioned tables.
- Involved in the regular Hadoop Cluster maintenance such as patching security holes and updating system packages.
- Tested raw data and executed performance scripts and shared responsibility with administration of Hive and Pig.
- Worked in Apache Tomcat for deploying and testing the application and worked with different file formats like Text files, Sequence Files, Avro.
- Wrote Java programs to retrieve data from HDFS and provide REST services, and used automation tools such as Maven.
- Used the Spring framework to provide RESTful services, and provided design recommendations and thought leadership to stakeholders that improved review processes and resolved technical problems.
Environment: Java 8, Eclipse, Hadoop 2.8, Hive 1.5, HBase, Linux, Map Reduce, Pig 0.15, HDFS, Oozie, Shell Scripting, MySQL. 2012
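The Hive partitioning and bucketing mentioned above routes each row to a bucket via hash(key) modulo the bucket count. A tiny Python sketch of that idea, using CRC32 as a deterministic stand-in for Hive's internal hash function (the key values are illustrative):

```python
import zlib

def bucket_for(key, num_buckets):
    """Assign a row key to a bucket, as Hive does with hash(key) % num_buckets.
    CRC32 stands in here for Hive's own hash; this is a sketch, not Hive's exact math."""
    return zlib.crc32(key.encode("utf-8")) % num_buckets

# All rows sharing a key land in the same bucket file, which is what
# makes bucketed map-side joins possible.
keys = ["cust-1", "cust-2", "cust-1"]
buckets = [bucket_for(k, 4) for k in keys]
print(buckets[0] == buckets[2])  # True: same key, same bucket
```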
Sr. Java/J2EE Developer
Confidential, Eden Prairie, MN
Responsibilities:
- Involved in the complete Software Development Life Cycle (SDLC) including Requirement Analysis, Design, Implementation, Testing and Maintenance.
- Worked on designing and developing the Web Application User Interface and implemented its related functionality in Java/J2EE for the product.
- Used JSF framework to implement MVC design pattern.
- Developed and coordinated complex high quality solutions to clients using J2SE, J2EE, Servlets, JSP, HTML, Struts, Spring MVC, SOAP, JavaScript, JQuery, JSON and XML.
- Wrote JSF managed beans, converters and validators following framework standards and used explicit and implicit navigations for page navigations.
- Designed and developed Persistence layer components using Hibernate ORM tool and designed UI using JSF tags, Apache Tomahawk & Rich faces.
- Used Oracle 10g as backend to store and fetch data and used IDEs like Eclipse and Net Beans, integration with Maven.
- Created Real-time Reporting systems and dashboards using XML, MySQL, and Perl
- Worked on RESTful web services, which enforced a stateless client-server model and supported JSON (migrating a few services from SOAP to RESTful technology).
- Involved in detailed analysis based on the requirement documents.
- Involved in Design, development and testing of web application and integration projects using Object Oriented technologies such as Core Java, J2EE, Struts, JSP, JDBC, Spring Framework, Hibernate, Java Beans, Web Services (REST/SOAP), XML, XSLT, XSL and Ant.
- Designed and implemented SOA-compliant management and metrics infrastructure for the Mule ESB infrastructure, utilizing the SOA management components.
- Used NodeJS for server side rendering. Implemented modules into NodeJS to integrate with designs and requirements.
- Used JAX-WS to interact in front-end module with backend module as they are running in two different servers.
- Responsible for Offshore deliverables and provide design/technical help to the team and review to meet the quality and time lines.
- Migrated existing Struts application to Spring MVC framework and provided and implemented numerous solution ideas to improve the performance and stabilize the application.
- Extensively used LDAP with Microsoft Active Directory for user authentication during login, and developed unit test cases using JUnit.
- Created the project from scratch using Angular JS as frontend, Node Express JS as backend.
- Involved in developing Perl scripts and other scripts such as JavaScript; Tomcat was the web server used to deploy the OMS web application.
- Used SOAP Lite module to communicate with different web-services based on given WSDL.
- Prepared technical reports & documentation manuals during the program development.
- Tested the application functionality with JUnit Test Cases.
Environment: JDK 1.5, JSF, Hibernate 3.6, JIRA, NodeJS, Cruise control, Log4j, Tomcat, LDAP, JUNIT, NetBeans, Windows/Unix.
Java Developer
Confidential
Responsibilities:
- Involved in prototyping, proof of concept, design, Interface Implementation, testing and maintenance.
- Created use case diagrams, sequence diagrams, and preliminary class diagrams for the system using UML/Rational Rose.
- Used various Core Java concepts such as Multi-Threading, Exception Handling, Collection APIs to implement various features and enhancements.
- Developed reusable utility classes in core java for validation which are used across all modules.
- Actively designed, developed and integrated the Metrics module with all other components.
- Involved in Software Development Life Cycle (SDLC) of the application: Requirement gathering, Design Analysis and Code development.
- Implemented Struts framework based on the Model View Controller design paradigm.
- Designed the application by implementing Struts based on MVC Architecture, simple Java Beans as a Model, JSP UI Components as View and Action Servlets as a Controller.
- Used JNDI to perform lookup services for the various components of the system and involved in designing and developing dynamic web pages using HTML and JSP with Struts tag libraries.
- Used HQL (Hibernate Query Language) to query the Database System and used JDBC Thin Driver to connect to the database.
- Developed Hibernate entities, mappings and customized criterion queries for interacting with the database.
- Responsible for designing Rich user Interface Applications using JavaScript, CSS, HTML and AJAX and developed Web Services by using SOAP UI.
- Used JPA to persistently store large amount of data into database and implemented modules using Java APIs, Java collection, Threads, XML, and integrating the modules.
- Applied J2EE Design Patterns such as Factory, Singleton, and Business delegate, DAO, Front Controller Pattern and MVC.
- Used JPA for the management of relational data in application and designed and developed business components using Session and Entity Beans in EJB.
- Developed the EJBs (Stateless Session beans) to handle different transactions such as online funds transfer, bill payments to the service providers.
- Developed XML configuration files and properties files used in the Struts Validator framework for validating form inputs on the server side.
- Extensively used AJAX technology to add interactivity to the web pages and developed JMS Sender and Receivers for the loose coupling between the other modules and implemented asynchronous request processing using Message Driven Bean.
- Used JDBC for data access from Oracle tables and JUnit was used to implement test cases for beans.
- Successfully installed and configured the IBM WebSphere Application server and deployed the business tier components using EAR file.
- Involved in deployment of application on Weblogic Application Server in Development & QA environment.
- Used Log4j for External Configuration Files and debugging.
Environment: JSP 1.2, Servlets, Struts1.2.x, JMS, EJB 2.1, Java, OOPS, Spring, Hibernate, JavaScript, Ajax, Html, CSS, JDBC, JMS, Eclipse, WebSphere, DB2, JPA, ANT.
