Sr. Data Engineer Resume
Dallas, TX
SUMMARY
- 8+ years of professional IT experience in analyzing requirements and designing and building highly distributed, mission-critical products and applications.
- Highly dedicated, results-oriented Hadoop and Big Data professional with 7+ years of strong end-to-end Hadoop development experience across a range of Big Data environments and technologies, including MapReduce, YARN, HDFS, Apache Cassandra, HBase, Oozie, Hive, Sqoop, Pig, ZooKeeper and Flume.
- In-depth knowledge of HDFS, JobTracker, TaskTracker, NameNode, DataNode and MapReduce programming.
- Extensive experience working with Master Data Management (MDM) and the applications used for MDM.
- Efficient in all phases of the development lifecycle, including Data Cleansing, Data Conversion, Data Profiling, Data Mapping, Performance Tuning and System Testing.
- Expertise in converting MapReduce programs into Spark transformations using Spark RDDs.
- Expertise in Spark Architecture including Spark Core, Spark SQL, Data Frames, Spark Streaming and Spark MLlib.
- Configured Spark Streaming to receive real-time data from Kafka and store the streamed data in HDFS using Scala (a minimal sketch follows this summary).
- Experience implementing real-time event processing and analytics using messaging and stream-processing systems such as Kafka and Spark Streaming.
- Experience using Kafka brokers with Spark Streaming contexts to process live streaming data as RDDs.
- Good knowledge of AWS services such as EMR and EC2, which provide fast and efficient processing of Big Data.
- Experience with the major Hadoop distributions, including Cloudera, Hortonworks, MapR and Apache.
- Experience in installing, configuring, supporting and managing Hadoop clusters using Apache and Cloudera (5.x) distributions and on Amazon Web Services (AWS).
- Expertise in implementing Spark applications in Scala using higher-order functions for both batch and interactive analysis requirements.
- Extensive experience working with Spark features such as RDD transformations, Spark MLlib and Spark SQL.
- Hands-on experience in writing Hadoop jobs for analyzing data using HiveQL (queries), Pig Latin (data-flow language), and custom MapReduce programs in Java.
- Experienced in working with structured data using HiveQL, join operations, Hive UDFs, partitions, bucketing and internal/external tables.
- Proficient in Normalization/De-normalization techniques in relational/dimensional database environments and have done normalizations up to 3NF.
- Good understanding of Ralph Kimball (dimensional) and Bill Inmon (relational) modeling methodologies.
- Extensive experience in collecting and storing stream data like log data in HDFS using Apache Flume.
- Experienced in using Pig scripts to do transformations, event joins, filters and some pre-aggregations before storing the data onto HDFS.
- Created custom UDFs for Pig and Hive to incorporate Python/Java methods and functionality into Pig Latin and HQL (HiveQL).
- Skilled with Python, Bash/Shell, PowerShell, Ruby, Perl, YAML and Groovy. Developed shell and Python scripts to automate day-to-day administrative tasks and the build and release process.
- Good Experience with NoSQL Databases like HBase, MongoDB and Cassandra.
- Experience on using Cassandra CQL with Java APIs to retrieve data from Cassandra tables.
- Hands on experience in querying and analyzing data from Cassandra for quick searching, sorting and grouping through CQL.
- Experience working with MongoDB for distributed storage and processing.
- Good knowledge of and experience in extracting data from MongoDB through Sqoop, placing it in HDFS and processing it.
- Worked on importing data into HBase using HBase Shell and HBase Client API.
- Experience in designing and developing tables in HBase and storing aggregated data from Hive Table.
- Experience with Oozie Workflow Engine in running workflow jobs with actions that run Java MapReduce and Pig jobs.
- Strong hands-on experience with PySpark, using Spark libraries through Python scripting for data analysis.
- Implemented data science algorithms like shift detection in critical data points using Spark, doubling the performance.
- Extensive experience in working with various distributions of Hadoop like enterprise versions of Cloudera (CDH4/CDH5), Hortonworks and good knowledge on MAPR distribution, IBM Big Insights and Amazon’s EMR (Elastic MapReduce).
- Experience designing and developing POCs in Spark using Scala to compare the performance of Spark with Hive and SQL/Oracle.
- Developed automated processes for flattening upstream data from Cassandra, which arrives in JSON format, using Hive UDFs.
- Expertise in developing responsive Front-End components with JavaScript, JSP, HTML, XHTML, Servlets, Ajax, and AngularJS.
- Experience as a Java Developer in Web/intranet, client/server technologies using Java, J2EE, Servlets, JSP, JSF, EJB, JDBC and SQL.
- Good knowledge in working with scheduling jobs in Hadoop using FIFO, Fair scheduler and Capacity scheduler.
- Experienced in designing both time driven and data driven automated workflows using Oozie and Zookeeper.
- Experience in writing stored procedures and complex SQL queries using relational databases like Oracle, SQL Server, and MySQL.
- Experience in Extraction, Transformation and Loading (ETL) of data from multiple sources like Flat files, XML files, and Databases.
- Supported various reporting teams and experience with data visualization tool Tableau.
- Implemented data quality in the ETL tool Talend, with good knowledge of data warehousing and ETL tools such as IBM DataStage, Informatica and Talend.
- Experienced and in-depth knowledge of cloud integration with AWS using Elastic Map Reduce (EMR), Simple Storage Service (S3), EC2, Redshift and Microsoft Azure.
- Detailed understanding of Software Development Life Cycle (SDLC) and strong knowledge in project implementation methodologies like Waterfall and Agile.
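The Spark Streaming work summarized above followed the common Kafka-to-HDFS pattern. Below is a minimal Scala sketch of that pattern, assuming the spark-streaming-kafka-0-10 integration; the broker address, topic name, consumer group and HDFS path are illustrative placeholders, not values from any specific project.

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

object KafkaToHdfsStream {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaToHdfsStream")
    val ssc  = new StreamingContext(conf, Seconds(30))          // 30-second micro-batches

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "broker1:9092",                   // placeholder broker list
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "hdfs-sink",                      // hypothetical consumer group
      "auto.offset.reset"  -> "latest")

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("events"), kafkaParams))

    // Persist each non-empty micro-batch to HDFS as plain text files
    stream.map(_.value).foreachRDD { (rdd, time) =>
      if (!rdd.isEmpty())
        rdd.saveAsTextFile(s"hdfs:///data/raw/events/batch-${time.milliseconds}")
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```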
TECHNICAL SKILLS
Certifications: AWS Developer Associate, AWS DevOps Professional
Languages: C, C++, Python, R, PL/SQL, Java, HiveQL, Pig Latin, Scala, UNIX shell scripting.
Hadoop Ecosystem: HDFS, YARN, Scala, Map Reduce, Hive, Pig, Zookeeper, Sqoop, Oozie, Bedrock, Flume, Kafka, Impala, NiFi, MongoDB, HBase.
Databases: Oracle, MS-SQL Server, MySQL, PostgreSQL, NoSQL (HBase, Cassandra, MongoDB), Teradata.
Tools: Eclipse, NetBeans, Informatica, IBM DataStage, Talend, Maven, Jenkins.
Hadoop Platforms: Hortonworks, Cloudera, Azure, Amazon Web services (AWS).
Operating Systems: Windows XP/2000/NT, Linux, UNIX.
Amazon Web Services: Redshift, EMR, EC2, S3, RDS, Cloud Search, Data Pipeline, Lambda.
Version Control: GitHub, SVN, CVS.
Packages: MS Office Suite, MS Visio, MS Project Professional.
PROFESSIONAL EXPERIENCE
Confidential
Sr. Data Engineer
Responsibilities:
- Responsible for building scalable distributed data solutions using Hadoop.
- Created Kafka producers and consumers for Spark Streaming, which receives data from the patients' different learning systems (see the producer sketch at the end of this section).
- Configured Spark Streaming to receive real-time data from Kafka and store the streamed data in HDFS using Scala.
- Developed various Java objects (POJO) as part of persistence classes for OR mapping with databases.
- Developed HBase data model on top of HDFS data to perform real time analytics using Java API.
- Implemented data ingestion systems by creating Kafka brokers, Java producers and consumers, and custom encoders.
- Used Spark Streaming to divide streaming data into batches as an input to Spark engine for batch processing.
- Evaluated the performance of Apache Spark in analyzing genomic data.
- Performed advanced procedures like text analytics and processing using the in-memory computing capabilities of Spark.
- Experienced in AWS Cloud Services such as IAM, EC2, S3, AMI, VPC, Auto-Scaling, Security Groups, Route53, ELB, EBS, RDS, SNS, SQS, CloudWatch, CloudFormation, CloudFront, Snowball and Glacier.
- Used AWS S3 buckets to store files, ingested the files into Snowflake tables using Snowpipe, and ran deltas using data pipelines.
- Worked on complex SnowSQL and Python queries in Snowflake.
- Worked on optimizing volumes with EC2 instances, created multiple VPCs, deployed applications on AWS using Elastic Beanstalk, and implemented and set up Route53 for AWS web instances.
- Configured AWS Identity and Access Management (IAM) groups and users for improved login authentication; provided policies to groups using the policy generator and set permissions based on requirements, along with providing Amazon Resource Names (ARNs).
- Worked on all data management activities for the project's data sources, including data migration.
- Worked with data compliance and data governance teams to maintain data models, metadata and data dictionaries, and to define source fields and their definitions.
- Deployed microservice onboarding tools leveraging Python and Jenkins, allowing for easy creation and maintenance of build jobs and Kubernetes deployments and services.
- Worked with AWS CloudFormation templates and Terraform along with Ansible to render templates, and with Murano and Heat orchestration templates in an OpenStack environment.
- Developed REST APIs using Python with the Flask framework and integrated various data sources including RDBMS, shell scripts, spreadsheets, and text files.
- Generated Java APIs for retrieval and analysis on NoSQL databases such as HBase and Cassandra.
- Wrote HBase client programs in Java and web services.
- Implemented Agile Methodology for building an internal application.
- Developed machine learning algorithms such as classification, regression and deep learning using Python.
- Conducted statistical analysis on healthcare data using python and various tools.
- Hands-on experience with Amazon EC2, Amazon S3, Amazon RDS, VPC, IAM, Elastic Load Balancing, Auto Scaling, CloudFront, CloudWatch, SNS, SES, SQS and other services of the AWS family.
- Developed Python, Ant and UNIX shell scripts to automate the deployment process.
- Experience working with JFrog Artifactory to deploy artifacts; used Bash, Python and PowerShell scripts to automate tasks in Linux and Windows environments.
- Selecting appropriate AWS services to design and deploy an application based on given requirements.
- Created concurrent access for Hive tables with shared and exclusive locking, enabled in Hive through the ZooKeeper implementation in the cluster. Designed and implemented test environments on AWS.
- Stored and loaded data from HDFS to Amazon S3 and backed up the namespace data to NFS.
- Worked closely with EC2 infrastructure teams to troubleshoot complex issues.
- Worked with AWS cloud and created EMR clusters with spark for analyzing raw data processing and access data from S3 buckets.
- Involved in installing EMR clusters on AWS.
- Used AWS Data Pipeline to schedule an Amazon EMR cluster to clean and process web server logs stored in Amazon S3 bucket.
- Designed the NiFi/HBase pipeline to collect processed customer data into HBase tables.
- Applied transformation rules on top of DataFrames.
- Worked with different file formats such as TextFile, Avro, ORC and Parquet for Hive querying and processing.
- Developed Hive UDFs and UDAFs for rating aggregation.
- Developed Java client APIs for CRUD and analytical operations by building a RESTful server and exposing data from NoSQL databases such as Cassandra via the REST protocol.
- Created Hive tables and involved in data loading and writing Hive UDFs.
- Experience in managing and reviewing Hadoop Log files.
- Worked with core Java concepts such as Collections, multithreading and serialization.
- Worked extensively with Sqoop to move data from DB2 and Teradata to HDFS.
- Collected log data from web servers and integrated it into HDFS using Kafka.
- Provided ad-hoc queries and data metrics to the Business Users using Hive, Impala.
- Worked on various performance optimizations like using distributed cache for small datasets, partition, bucketing in hive, map side joins etc.
- Scheduled Oozie workflow engine to run multiple Hive and Pig jobs, which independently run with time and data availability
- Responsible for coding the business logic using Python, J2EE/Full Stack technologies and Core Java concepts.
- Utilized Spark Core, Spark Streaming and Spark SQL API for faster processing of data instead of using MapReduce in Java.
- Defined the reference architecture for Big Data Hadoop to maintain structured and unstructured data within the enterprise.
- Lead the efforts to develop and deliver the data architecture plan and data models for the multiple data warehouses and data marts attached to the Data Lake Project.
- Implemented a variety of AWS computing and networking services to meet application needs.
- Created Talend jobs to copy files from one server to another and utilized Talend FTP components.
- Created Talend jobs to load data into various Oracle tables; utilized Oracle stored procedures and wrote some Java code to capture global map variables and use them in the jobs. Used ETL methodologies and best practices to create Talend ETL jobs, and followed and enhanced programming and naming standards.
- Developed Talend jobs to populate the claims data to data warehouse - star schema.
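As a reference for the Kafka producer work described above, here is a minimal Scala sketch using the standard Kafka clients API; the broker address, topic, record key and payload are hypothetical placeholders rather than values from the actual project.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}

object PatientEventProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092")   // placeholder broker list
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringSerializer")
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringSerializer")
    props.put(ProducerConfig.ACKS_CONFIG, "all")                          // wait for full replication

    val producer = new KafkaProducer[String, String](props)
    try {
      // Key each record by a (hypothetical) patient id so related events land in one partition
      val record = new ProducerRecord[String, String](
        "patient-events", "patient-123", """{"event":"login"}""")
      producer.send(record)
    } finally {
      producer.flush()
      producer.close()
    }
  }
}
```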
Environment: Hadoop, Java, MapReduce, HDFS, PIG, Hive, Sqoop, Oozie, Storm, Data Modelling, MDM, Kafka, Spark, Spark Streaming, Scala, Cassandra, Cloudera, ZooKeeper, AWS, Snowflake, Snowpipe, Solr, MySQL, Shell Scripting, Tableau.
Confidential, Dallas, TX
Data Engineer
Responsibilities:
- Extensively migrated the existing architecture to Spark Streaming to process live streaming data.
- Responsible for Spark Core configuration based on the type of input source.
- Executed Spark code written in Scala for Spark Streaming/SQL for faster processing of data.
- Performed SQL joins among Hive tables to get input for the Spark batch process.
- Gathered the business requirements from the Business Partners and Subject Matter Experts.
- Developed PySpark code to mimic the transformations performed in the on-premises environment.
- Analyzed SQL scripts and designed solutions to implement them using PySpark; created new custom columns, depending on the use case, while ingesting data into the Hadoop lake using PySpark.
- Worked with the Data Governance, Data Quality and Metadata Management teams to understand the project.
- Implemented a data governance and data quality framework covering data cataloging, data lineage, MDM, a business glossary, data stewardship and operational metadata.
- Created S3 buckets, managed their policies, and used them for storage, backup and archival in AWS; worked on AWS Lambda, which runs code in response to events.
- Assisted application teams in creating complex IAM policies for administration within AWS and maintained DNS records using Route53; used Amazon Route53 to manage DNS zones and give public DNS names to Elastic Load Balancer IPs.
- Reviewed source feeds, delivery mechanisms (messaging, SFTP, etc.), frequency and full/partial loads; identified customer PII using fuzzy logic; merged customer information from multiple sources into a single MDM record maintained with the help of data stewards; and handled data management and data governance, including lineage and data quality rules, thresholds and alerts.
- Evaluated MDM tools such as Informatica MDM, Ataccama, IBM, Semarchy and Contentserv using Gartner Magic Quadrants, functionality metrics and product scores.
- Developed data pipeline using Flume, Sqoop, Pig and Java map reduce to ingest customer behavioral data and financial histories into HDFS for analysis.
- Automated AWS components such as EC2 instances, security groups, ELB, RDS and IAM through AWS CloudFormation templates.
- Created Snowflake warehouses, databases and tables; designed table structures and the data pipeline to the Snowflake stage, using SnowSQL to transform and load data into the Snowflake warehouse.
- Created a Redshift data warehouse for other clients who preferred Redshift to Snowflake; created tables and applied distribution keys, sort keys and a vacuum strategy.
- Analyzed the Cassandra database and compared it with other open-source NoSQL databases to determine which one better suits the current requirements.
- Integrated Cassandra as a distributed persistent metadata store to provide metadata resolution for network entities on the network.
- Developed multiple MapReduce jobs in java to clean datasets.
- Implemented Spark using Scala and used Pyspark using Python for faster testing and processing of data.
- Designed multiple Python packages that were used within a large ETL process used to load 2TB of data from an existing Oracle database into a new PostgreSQL cluster.
- Wrote ETL jobs to read from web APIs using REST and HTTP calls and loaded into HDFS using Java.
- Involved in converting Hive/Sql queries into Spark transformations using Spark RDD’s.
- Loading data from Linux file system to HDFS and vice-versa
- Developed UDFs using both DataFrames/SQL and RDDs in Spark for data aggregation queries, writing results back into OLTP systems through Sqoop.
- Manage AWS EC2 instances utilizing Auto Scaling, Elastic Load Balancing and Glacier for our QA and UAT environments as well as infrastructure servers for GIT.
- Migrated an existing on-premises application to AWS.
- Knowledge of ETL methods for data extraction, transformation and loading in corporate-wide ETL Solutions and Data warehouse tools for reporting and data analysis.
- Implementing advanced procedures like text analytics and processing using the in-memory computing capabilities like Apache Spark written in Scala.
- Extensively used Informatica Client tools like Designer, Workflow Manager, Workflow Monitor, Repository Manager and Server tools - Informatica Server, Repository Server
- Installed and monitored Hadoop ecosystems tools on multiple operating systems like Ubuntu, CentOS.
- Developed Scala scripts using DataFrames/SQL/Datasets and RDD/MapReduce in Spark for data aggregation and queries, writing data back into the OLTP system through Sqoop (see the sketch at the end of this section).
- Extensively used ZooKeeper as a job scheduler for Spark jobs.
- Extending Hive and Pig core functionality by writing custom UDFs.
- Designed, wrote and maintained Python scripting systems for administering Git, using Jenkins as a full-cycle continuous delivery tool involving package creation, distribution, and deployment onto Tomcat application servers via shell scripts embedded into Jenkins jobs.
- Involved in writing shell scripts to automate WebSphere admin tasks and application specific syncs / backups and other schedulers.
- Experience in moving data in and out of Windows Azure SQL Databases and Blob Storage.
- Experience in designing Kafka for multi data center cluster and monitoring it.
- Worked on migrating MapReduce programs into Spark transformations using Spark and Scala, initially done using python (PySpark).
- Implemented a distributed messaging queue to integrate with Cassandra using Apache Kafka and Zookeeper.
- Experience on Kafka and Spark integration for real time data processing.
- Developed Kafka producer and consumer components for real time data processing.
- Hands-on experience setting up Kafka MirrorMaker for data replication across clusters.
- Experience configuring, designing, implementing and monitoring Kafka clusters and connectors.
- Involved in loading data from the UNIX file system to HDFS using shell scripting.
- Hands-on experience with Linux shell scripting.
- Importing and exporting data into HDFS from Oracle database using NiFi.
- Started using Apache NiFi to copy data from the local file system to HDFS.
- Worked on a NiFi data pipeline to process large data sets and configured lookups for data validation and integrity.
- Worked with different file formats such as JSON, Avro and Parquet.
- Experienced in using Apache Hue and Ambari to manage and monitor Hadoop clusters.
- Experienced in using version control systems such as SVN and Git, the build tool Maven, and the continuous integration tool Jenkins.
- Worked on REST APIs in Java 7 to support internationalization, and on apps to help our buyer team visualize and set portfolio performance targets.
- Used Mockito to develop test cases for java bean components and test them through testing framework.
- Good experience using relational databases Oracle, SQL Server and PostgreSQL.
- Worked on developing Middle tier environment using SSIS, Python, Java in a J2EE/Full Stack environment.
- Worked with Agile, Scrum and the Confidential software development framework for managing product development.
- Used Ambari to monitor node health and the status of jobs in Hadoop clusters.
- Implemented Kerberos for strong authentication to provide data security.
- Involved in creating Hive tables and loading and analyzing data using Hive queries.
- Experience in creating dashboards and generating reports using Tableau by connecting to tables in Hive and HBase.
- Created Sqoop jobs to populate data from relational databases into Hive tables.
- Experience importing and exporting data using Sqoop between HDFS/Hive/HBase and relational database systems; skilled in data migration and data generation in the Big Data ecosystem.
- Tuned Oracle SQL using explain plans.
- Manipulated, serialized and modeled data in multiple formats such as JSON and XML.
- Involved in setting up MapReduce 1 and MapReduce 2.
- Prepared Avro schema files for generating Hive tables.
- Used Impala connectivity from the user interface (UI) and queried the results using Impala QL.
- Worked on physical transformations of the data model, which involved creating tables, indexes, joins, views and partitions.
- Involved in analysis, design, system architecture, process interface design and documentation.
- Used Jira for bug tracking and Bitbucket to check in and check out code changes.
- Involved in Cassandra data modelling to create keyspaces and tables in a multi-data-center DSE Cassandra DB.
- Utilized Agile and Scrum Methodology to help manage and organize a team of developers with regular code review sessions.
- Re-engineered n-tiered architecture involving technologies like EJB, XML and JAVA into distributed applications.
- Loaded and transformed large sets of structured and semi-structured data using Hive and Impala with Elasticsearch.
- Worked closely with different business teams to gather requirements and prepare functional and technical documents and the UAT process for creating data quality rules in Cosmos and Cosmos Streams.
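The Spark aggregation work noted above (writing aggregates back to an OLTP system via Sqoop) generally follows the shape of the minimal Scala/DataFrame sketch below; the database, table, column and HDFS path names are hypothetical, and the final export to the OLTP database is assumed to run as a separate sqoop export job.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{avg, count, sum}

object ClaimsAggregation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ClaimsAggregation")
      .enableHiveSupport()          // read curated tables from the Hive metastore
      .getOrCreate()

    // Hypothetical table and column names, for illustration only
    val claims = spark.table("lake.claims")

    val perMember = claims
      .groupBy("member_id")
      .agg(
        count("claim_id").as("claim_count"),
        sum("claim_amount").as("total_amount"),
        avg("claim_amount").as("avg_amount"))

    // Stage the aggregate in HDFS; a separate `sqoop export` job can then push it to the OLTP database
    perMember.write.mode("overwrite")
      .option("header", "false")
      .csv("hdfs:///staging/claims_per_member")

    spark.stop()
  }
}
```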
Environment: Data Governance, MDM, Hadoop, HDFS, PIG, Hive, Sqoop, Oozie, Cloudera, ZooKeeper, AWS, Snowflake, Redshift, Oracle, Shell Scripting, Nifi, Unix, Linux, BigSQL.
Confidential
Data Engineer
Responsibilities:
- Installed and configured Hadoop Map Reduce, HDFS, developed multiple Map Reduce jobs in java for data cleaning and preprocessing.
- Migrated existing SQL queries to HiveQL queries to move to big data analytical platform.
- Integrated Cassandra file system to Hadoop using Map Reduce to perform analytics on Cassandra data.
- Installed and configured Cassandra DSE multi-node, multi-data center cluster.
- Created Business Logic using Servlets, Session beans and deployed them on Web logic server.
- Wrote complex SQL queries and stored procedures.
- Developed the XML Schema and Amazon Web services for the data maintenance and structures.
- Worked on analyzing, writing Hadoop MapReduce jobs using Java API, Pig and hive.
- Selected the appropriate AWS services based on data, compute and system requirements.
- Implemented shell, Perl and Python scripts for release and build automation, then adapted and automated the scripts to suit the requirements.
- Designed and implemented a 24-node Cassandra cluster for a single-point inventory application.
- Analyzed the performance of the Cassandra cluster using nodetool tpstats and cfstats for thread analysis and latency analysis.
- Implemented real-time analytics on Cassandra data using the Thrift API.
- Responsible for managing data coming from different sources.
- Supported MapReduce programs running on the cluster.
- Involved in loading data from UNIX file system to HDFS.
- Worked on installing cluster, commissioning & decommissioning of data node, name node recovery, capacity planning, and slots configuration.
- Loaded and transformed large data sets into HDFS using Hadoop fs commands.
- Scheduled Oozie workflow engine to run multiple Hive and Pig jobs, which independently run with time and data availability.
- Implemented UDFs and UDAFs in Java and Python for Hive to handle processing that cannot be done with Hive's built-in functions (see the sketch at the end of this section).
- Performed various optimizations such as using the distributed cache for small datasets, partitioning and bucketing in Hive, and map-side joins.
- Worked on importing and exporting data from Oracle and DB2 into HDFS and Hive using Sqoop for analysis, visualization and report generation.
- Involved in writing optimized Pig scripts and in developing and testing Pig Latin scripts.
- Supported setting up and updating configurations for implementing scripts with Pig and Sqoop.
- Designed the logical and physical data models and wrote DML scripts for an Oracle 9i database.
- Used Hibernate ORM framework with Spring framework for data persistence.
- Wrote test cases in JUnit for unit testing of classes.
- Involved in building templates and screens in HTML and JavaScript.
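The Hive UDF work called out above was done in Java and Python; purely as an illustration, here is a minimal JVM UDF sketch written in Scala against the classic org.apache.hadoop.hive.ql.exec.UDF API, with a hypothetical class name and normalization rule rather than any of the original project's logic.

```scala
import org.apache.hadoop.hive.ql.exec.UDF
import org.apache.hadoop.io.Text

// Hypothetical example: normalize free-text status codes that Hive's
// built-in functions cannot map on their own.
class NormalizeStatus extends UDF {
  def evaluate(input: Text): Text = {
    if (input == null) {
      null
    } else {
      val normalized = input.toString.trim.toUpperCase match {
        case "A" | "ACT" | "ACTIVE"     => "ACTIVE"
        case "I" | "INACT" | "INACTIVE" => "INACTIVE"
        case other                      => other
      }
      new Text(normalized)
    }
  }
}
```

Once packaged into a jar, a UDF like this would typically be registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION before being used in HiveQL queries.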
Environment: Java, HDFS, Cassandra, Map Reduce, Sqoop, JUnit, HTML, JavaScript, Hibernate, Spring, Pig, Hive.
Confidential
Java Developer
Responsibilities:
- Experience coding Servlets on the server side, which receive requests from the client and process them by interacting with the Oracle database.
- Coded Java Servlets to control and maintain session state and handle user requests.
- Developed the GUI using HTML forms and frames and validated the data with JavaScript.
- Used JDBC to connect to the backend database and developed stored procedures.
- Developed code to handle web requests involving Request Handlers, Business Objects, and Data Access Objects.
- Created JSP pages, including JSP custom tags and other methods of JavaBean presentation, and all HTML and graphically oriented aspects of the site's user interface.
- Used XML for mapping the pages and classes and to transfer data universally among different data sources.
- Worked in unit testing and documentation.
- Hands-on experience with the J2EE framework Struts. Implemented Spring Model View Controller (MVC) architecture-based presentation using the JSF framework. Extensively used the Core Java API and Spring API in developing the business logic.
- Designed and developed agile applications and lightweight solutions, and integrated applications using frameworks such as Struts and Spring.
- Involved in all the phases of (SDLC) Software Development Life Cycle including analysis, designing, coding, testing and deployment of the application.
- Developed class diagrams, sequence diagrams and state diagrams using Rational Rose.
- Developed the user interface using JSP, JSP tag libraries (JSTL), HTML, CSS and JavaScript to simplify the complexities of the application.
- Applied various design patterns such as Business Delegate, Data Access Object and MVC.
- Used Spring framework to implement MVC Architecture.
- Implemented layout management using the Struts Tiles Framework.
- Used the Struts validation Framework in the presentation layer.
- Used Core Spring framework for Dependency injection.
- Developed JPA mapping to the Database tables to access the data from the Oracle database.
- Created JUnit test case design logic and implementations throughout the application.
- Extensively used ClearCase for version control.
Environment: Java 6, J2EE 5, Hibernate 3.0(JPA), Spring, XML, JSP, JSTL, CSS, JavaScript, HTML, AJAX, JUnit, Oracle 10g, Log4J 1.2.1, Eclipse 3.4, UNIX.