Sr. Big Data Engineer Resume
Charlotte, NC
SUMMARY
- 10+ years of IT experience in analysis, design, development, implementation, maintenance, and support, with experience in Big Data, Hadoop development and ecosystem analytics, AWS Cloud, data warehousing, data lakes, and the design and development of Java-based enterprise applications.
- Solid programming skills in both object-oriented and functional languages such as Java, Python, and Scala.
- Extensive experience in IT data analytics projects; hands-on experience migrating on-premises ETLs to Google Cloud Platform (GCP) using cloud-native tools such as BigQuery, Cloud Dataproc, Google Cloud Storage, and Cloud Composer.
- Evaluate technology stacks for building analytics solutions on the cloud by researching and identifying the right strategies and tools for end-to-end analytics solutions, and help design the technology roadmap for data ingestion, data lakes, data processing, and visualization.
- More than 5 years of experience with Hadoop and its ecosystem components: HDFS, MapReduce (MRv1, YARN), CDH, Pig, Hive, HBase, Sqoop, Flume, Kafka, Impala, and Oozie, as well as programming in Spark using Python and Scala.
- Good experience and knowledge of Amazon Web Services (AWS) offerings such as EMR, Redshift, Athena, SNS, SQS, AWS Glue, Lambda, and EC2, which provide fast and efficient processing of Teradata big data analytics.
- Working experience with Java/J2EE technologies and the design and development of scalable systems using Hadoop technologies across various environments.
- Experience creating notebooks in Databricks to pull data from S3, apply transformation rules, and load the data back to the persistence area in S3 in Apache Parquet format.
- Very good experience with Apache Spark, Spark Streaming, Spark SQL, and NoSQL databases such as Cassandra and HBase.
- Good knowledge of web services using the gRPC and GraphQL protocols.
- Strong experience in Splunk dashboard creation, app development, and validation; also familiar with quality concepts such as SCM.
- Worked on EC2, EMR, Data Pipeline, MSK, AWS Glue, CloudWatch, Lambda, Athena, and SageMaker.
- Experience working with React, Node.js, Redux, and Immutable.js to develop single-page applications with responsive web design, leveraging React's virtual DOM.
- Proficient in data visualization tools such as Tableau and Power BI; big data tools such as Hadoop HDFS, Spark, and MapReduce; MySQL, Oracle SQL, and Redshift SQL; and Microsoft Excel (VLOOKUP, pivot tables).
- Very keen on the newer technology stack that Google Cloud Platform (GCP) keeps adding.
- Expert in Amazon EMR, Spark, Kinesis, S3, Boto3, Elastic Beanstalk, ECS, CloudWatch, Lambda, ELB, VPC, ElastiCache, DynamoDB, Redshift, RDS, Athena, Zeppelin, and Airflow.
- Strong knowledge of creating and monitoring Hadoop clusters on Amazon EC2 and VMs, with Hortonworks Data Platform 2.1 and 2.2, CDH 4/5/6, and Cloudera Manager on Linux (Ubuntu, etc.).
- Experienced with Apache Mesos.
- Exposure to creating data pipelines for Kafka clusters and processing the data using Spark Streaming; worked on consuming streaming data from Kafka topics and loading it into the landing area for near-real-time reporting.
- Excellent exposure to Data Visualization with Tableau, PowerBI, Seaborn, Matplotlib and ggplot2.
- Expertise in JavaScript, JavaScript MVC patterns, object-oriented JavaScript design patterns, and AJAX; developed core modules in large cross-platform applications using Java, JSP, Servlets, JDBC, JavaScript, XML, and HTML.
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, and Spark MLlib; expertise in writing Spark RDD transformations, actions, DataFrames, and case classes for the required input data, and in performing data transformations using Spark Core (see the sketch following this summary).
- Good knowledge of Hadoop Architecture and various components such as YARN, HDFS, NodeManager, ResourceManager, JobTracker, TaskTracker, NameNode, DataNode and MapReduce concepts.
- Solid SQL skills: can write complex SQL queries, functions, triggers, and stored procedures for backend, database, and end-to-end testing.
- Experienced with Hadoop clusters on the Azure HDInsight platform and deployed data analytics solutions using Spark and BI reporting tools.
- Strong programming skills in designing and implementing applications using Core Java, J2EE, JDBC, JSP, HTML, Spring Framework, Spring Batch, Spring AOP, Struts, JavaScript, and Servlets.
- Can work in parallel across both GCP and Azure clouds.
- Very good understanding of SQL, ETL, and data warehousing technologies, with sound knowledge of designing data warehousing applications using tools such as Teradata, Oracle, and SQL Server.
- Experience writing build scripts using Maven and working with continuous integration tooling such as Jenkins, Ansible, Docker, and Kubernetes.
- Expertise in using Kafka as a messaging system to implement real-time streaming solutions; implemented Sqoop for large data transfers from RDBMS to HDFS/HBase/Hive and vice versa.
- Good knowledge of Object-Oriented Analysis and Design (OOAD) and Java design patterns, with solid experience in Core Java and JEE technologies such as JDBC, Servlets, and JSP.
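To illustrate the Spark and S3/Parquet work summarized above, here is a minimal PySpark sketch of a DataFrame transformation written back as Parquet; the bucket paths, column names, and filter logic are hypothetical placeholders rather than details from any actual project.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical sketch: read raw CSV from a staging bucket, apply simple
# transformation rules, and write the result to a persistence area as Parquet.
spark = SparkSession.builder.appName("transform-example").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("s3a://example-staging-bucket/raw/orders/"))   # placeholder path

transformed = (raw
               .withColumn("order_ts", F.to_timestamp("order_ts"))
               .filter(F.col("status") == "COMPLETE")
               .groupBy("customer_id")
               .agg(F.count("*").alias("order_count")))

(transformed.write
 .mode("overwrite")
 .parquet("s3a://example-persistence-bucket/curated/orders/"))  # placeholder path
```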
TECHNICAL SKILLS
Hadoop/Big Data: HDFS, MapReduce, NiFi, Druid, Hive, Pig, HBase, Sqoop, Impala, Oozie, Kafka, Spark, Presto, ZooKeeper, Storm, YARN.
Java & J2EE Technologies: Core Java, Servlets, JSP, JDBC, Spring, Hibernate.
Frameworks: MVC, Struts, Hibernate, Spring
Programming languages: Java, Core Java, J2EE, JavaScript, C++, Scala, CSH, Python, PySpark, Unix/Linux shell scripting, and SQL
Microservice Tools: gRPC, GraphQL
Google Cloud Platform: Cloud Storage, BigQuery, Cloud Composer, Cloud Dataproc, Cloud SQL, Cloud Functions, Cloud Pub/Sub.
Databases: Oracle … MySQL, DB2, Teradata, MS-SQL Server.
NoSQL Databases: HBase, Cassandra, MongoDB
Web Servers: WebLogic, WebSphere, Apache Tomcat
ETL and Data Warehousing Tools: Informatica, AWS Glue, Tableau, Power BI, Erwin.
Web Development: HTML, DHTML, XHTML, CSS, JavaScript, AJAX, React.js, Node.js, Redux
Cloud Utilities: AWS S3, EMR, EFS, Mesos, MSK, Redshift, RDS, Lambda, Athena, SNS, SQS, Glue, DynamoDB, Azure Databricks, Azure SQL, Storage Blob, Snowflake Data Warehouse.
PROFESSIONAL EXPERIENCE
Confidential, CHARLOTTE NC
SR. BIG DATA ENGINEER
RESPONSIBILITIES:
- Designed the architecture of data pipelines and optimized ETL workflows; involved in migrating batch and real-time data to the AWS cloud platform using Spark and Kafka.
- Used Apache Kafka to aggregate web log data from multiple servers and make it available to downstream systems for analysis; used Kafka Streams and configured Spark Streaming to pull the information and store it in HDFS.
- Designed and developed a full-text search feature with multi-tenant Elasticsearch after collecting the real-time data through Spark Streaming.
- Developed data pipelines for real-time use cases using Kafka, Flume, and Spark Streaming; worked on setting up and configuring AWS EMR clusters and used Amazon IAM to grant users fine-grained access to AWS resources.
- Installed HDP and HDF (NiFi) multi-node clusters.
- Created jobs using an open ingest framework that loads data to an S3 bucket in Parquet format, and created Glue jobs to process the data from the S3 staging area to the S3 persistence area.
- Performed data analysis, feature selection, and feature extraction using Apache Spark machine learning and streaming libraries in Python, and evaluated deep learning algorithms for text summarization using Python, Keras, and TensorFlow on a Cloudera Hadoop system.
- Extracted real-time data using Kafka and Spark Streaming by creating DStreams, converting them into RDDs, processing them, and storing the results in Cassandra (see the sketch following this list).
- Used AWS Data Pipeline to schedule an Amazon EMR cluster to clean and process web server logs stored in an Amazon S3 bucket.
- Used gRPC and GraphQL as data gateways.
- Built data pipelines in Airflow on GCP for ETL-related jobs using different Airflow operators.
- Used Sqoop to ingest from DBMS and Python to ingest logs from client data centers.
- Developed Python and Bash scripts for automation and implemented MapReduce jobs using the Java API and Python with Spark.
- Experience in moving data between GCP and Azure using Azure Data Factory.
- Developed NiFi workflows to pick up multiple retail files from an FTP location and move them to HDFS on a daily basis.
- Experience working with React, Node.js, Redux, and Immutable.js to develop single-page applications with responsive web design, leveraging React's virtual DOM.
- Built web applications using Python, Django, AWS, PostgreSQL, MySQL, and MongoDB, with continuous deployment using Heroku.
- Hands-on experience integrating external applications using Python.
- Imported data from RDBMS systems like MySQL into HDFS using Sqoop and developed Sqoop jobs to perform incremental imports into Hive tables.
- Involved in loading and transforming large sets of structured and semi-structured data; created data pipelines per business requirements and scheduled them using Oozie coordinators.
- Worked on migrating MapReduce programs into Spark transformations using Spark and Scala, initially implemented in Python (PySpark).
- Created Kafka/Spark Streaming data pipelines to consume data from external sources and perform transformations in Scala; contributed to developing a data pipeline to load data from different sources such as web, RDBMS, and NoSQL into Apache Kafka or the Spark cluster.
- Developed multiple POCs using Scala and PySpark, deployed them on the YARN cluster, and compared the performance of Spark and SQL.
- Designed, architected, and supported the Hadoop cluster: Hadoop, MapReduce, Hive, Sqoop, Ranger, and Presto.
- Hands-on experience with cloud-based IaaS (OpenStack, AWS) and distributed schedulers (Kubernetes, Mesos).
- Worked with XML, extracting tag information from compressed blob data types using XPath and Scala XML libraries.
- Involved in file movements between HDFS and AWS S3 and worked extensively with S3 buckets in AWS; created different Hive tables to incorporate CDC logic by writing Pig and HiveQL scripts that perform the CDC logic.
- Developed Spark jobs using Scala on top of YARN/MRv2 for interactive and batch analysis; involved in querying data using Spark SQL on top of the Spark engine for faster processing of data sets, and worked on implementing the Spark Framework, a Java-based web framework.
- Created Hive tables, loaded data, and wrote Hive queries that helped market analysts spot emerging trends by comparing fresh data with EDW reference tables and historical metrics; used HiveQL to analyze the partitioned and bucketed data and compute various metrics for reporting.
- Experience with Docker Compose, Dockerfiles, image creation/deployment, and orchestration technologies including Swarm, Compose, and Kubernetes.
- Used Docker and Kubernetes to manage microservices for continuous integration and continuous delivery.
- Designed solutions for high-volume data stream ingestion, processing, and low-latency data provisioning using Hadoop ecosystem tools: Hive, Pig, Sqoop, Kafka, Python, Spark, Scala, NoSQL, NiFi, and Druid.
- Developed Spark code using Scala and Spark SQL for faster processing and testing, and performed complex HiveQL queries on Hive tables.
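Below is a minimal sketch of the DStream-based Kafka consumer pattern referenced above, assuming Spark 2.x with the spark-streaming-kafka-0-8 integration; the topic, broker, and parsing logic are hypothetical, and the print step stands in for the Cassandra write used in the actual pipeline.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils  # Spark 2.x, Kafka 0.8 integration

# Hypothetical sketch: consume a Kafka topic as a DStream, transform each
# micro-batch, and hand the resulting RDDs to a sink (printed here as a
# stand-in for the Cassandra write done via the Spark-Cassandra connector).
sc = SparkContext(appName="kafka-dstream-example")
ssc = StreamingContext(sc, batchDuration=10)  # 10-second micro-batches

stream = KafkaUtils.createDirectStream(
    ssc,
    topics=["example-events"],                           # placeholder topic
    kafkaParams={"metadata.broker.list": "broker:9092"}  # placeholder broker
)

# Messages arrive as (key, value) pairs; keep only well-formed CSV values.
parsed = stream.map(lambda kv: kv[1].split(",")) \
               .filter(lambda fields: len(fields) >= 2)

parsed.foreachRDD(lambda rdd: print(rdd.take(5)))  # stand-in for the Cassandra write

ssc.start()
ssc.awaitTermination()
```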
Environment: Hadoop, Hive, HDFS, Pig, Sqoop, Python, Spark SQL, NiFi, Druid, Machine Learning, MongoDB, PostgreSQL, Django, AWS, AWS S3, AWS EC2, AWS EMR, GCP, Azure, Swarm, Docker, Oozie, React, Node.js, Redux, ETL, Tableau, Spark, Presto, Splunk, GraphQL, Spark Streaming, PySpark, Kafka, Netezza, Apache Solr, Cassandra, Cloudera Distribution, Java, Impala, Web Servers, Maven, MySQL, Grafana, Agile-Scrum.
Confidential, HILLSBORO OR
SR. BIG DATA ENGINEER/DEVELOPER
RESPONSIBILITIES:
- Explored Spark to improve the performance and optimization of the existing algorithms in Hadoop using Spark Context, Spark SQL, DataFrames, and Spark on YARN.
- Involved in file movements between HDFS and AWS S3, worked extensively with S3 buckets in AWS, and converted all Hadoop jobs to run on EMR by configuring the cluster according to the data size.
- Used Spark Streaming APIs to perform on-the-fly transformations and actions for building a common learner data model that gets data from Kafka in near real time and persists it to Hive.
- Wrote Spark applications for data validation, cleansing, transformations, and custom aggregations; imported data from different sources into Spark RDDs for processing, developed custom aggregate functions using Spark SQL, and performed interactive querying.
- Responsible for developing a data pipeline with Amazon AWS to extract data from web logs and store it in HDFS; created data pipelines for different ingestion and aggregation events and loaded consumer response data from an AWS S3 bucket into Hive external tables in HDFS to serve as a feed for Tableau dashboards.
- Responsible for creating on-demand tables on S3 files using Lambda functions and AWS Glue with Python and PySpark.
- Worked on data pipeline creation to convert incoming data to a common format, prepare data for analysis and visualization, migrate between databases, share data processing logic across web apps, batch jobs, and APIs, and consume large XML, CSV, and fixed-width files; created data pipelines in Kafka to replace batch jobs with real-time data.
- Extensively worked with Avro and Parquet files and converted data between the two formats; parsed semi-structured JSON data and converted it to Parquet using DataFrames in Spark.
- Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs and Scala, and in using Sqoop to import and export data between RDBMS and HDFS.
- Experience setting up Kubernetes clusters using kOps scripts and Rancher.
- Developed Spark Streaming programs in Scala to import data from Kafka topics into HBase tables; involved in moving relational data from legacy tables to HDFS and HBase tables using Sqoop and vice versa.
- Developed a Python script to load CSV files into S3 buckets; created AWS S3 buckets, performed folder management in each bucket, and managed logs and objects within each bucket.
- Worked with different file formats such as JSON, Avro, and Parquet, and compression techniques such as Snappy; developed Python code for tasks, dependencies, SLA watchers, and time sensors for each job for workflow management and automation using Airflow (see the sketch following this list).
- Scheduled Airflow DAGs to run multiple Hive and Pig jobs, which run independently based on time and data availability.
- Created AWS Glue jobs for archiving data from Redshift tables to S3 (online to cold storage) per data retention requirements; created monitors, alarms, notifications, and logs for Lambda functions, Glue jobs, and EC2 hosts using CloudWatch.
- Developed shell scripts for adding dynamic partitions to Hive staging tables, verifying JSON schema changes in source files, and checking for duplicate files in the source location.
- Worked on CI/CD automation using tools such as Jenkins, Git, Docker, Kubernetes, and Ansible; handled container management with Docker by writing Dockerfiles, set up automated builds on Docker Hub, and installed and configured Kubernetes.
- Worked with importing metadata into Hive using Python and migrated existing tables and applications to work on AWS cloud (S3).
- Involved in writing scripts against Oracle and SQL Server databases to extract data for reporting and analysis; worked on importing and cleansing high-volume data from various sources such as DB2, Oracle, and flat files into SQL Server.
- Worked extensively on importing metadata into Hive, migrated existing tables and applications to work on Hive and the AWS cloud, and made the data available in Athena and Snowflake.
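A minimal Airflow sketch of the time- and dependency-driven Hive/Pig scheduling referenced earlier in this list. It assumes Airflow 1.10-style imports; the DAG id, schedule, SLA, and script paths are hypothetical placeholders.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.sensors.time_delta_sensor import TimeDeltaSensor

# Hypothetical DAG: wait for a short delay, then run a Hive job followed by a
# Pig job, with an SLA so long-running tasks trigger an alert.
default_args = {
    "owner": "data-eng",
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
    "sla": timedelta(hours=1),
}

with DAG(
    dag_id="daily_hive_pig_example",          # placeholder DAG id
    default_args=default_args,
    start_date=datetime(2021, 1, 1),
    schedule_interval="0 2 * * *",            # daily at 02:00
    catchup=False,
) as dag:

    wait = TimeDeltaSensor(task_id="wait_for_window", delta=timedelta(minutes=15))

    hive_job = BashOperator(
        task_id="run_hive_job",
        bash_command="hive -f /opt/jobs/daily_agg.hql",   # placeholder script
    )

    pig_job = BashOperator(
        task_id="run_pig_job",
        bash_command="pig -f /opt/jobs/cleanup.pig",      # placeholder script
    )

    wait >> hive_job >> pig_job
```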
Environment: Spark, AWS, EC2, EMR, Hive, AWS Glue, SQL Workbench, Genie Logs, Kibana, Sqoop, Spark SQL, Spark Streaming, Scala, Python, Hadoop (Cloudera Stack), Hue, Spark, Netezza, Kafka, HBase, HDFS, Hive, Pig, Sqoop, Oracle, ETL, AWS S3, AWS EMR, GIT, Grafana.
Confidential, NASHVILLE TN
HADOOP DEVELOPER
RESPONSIBILITIES:
- Worked on loading disparate data sets from different sources into the BDPaaS (Hadoop) environment using Spark.
- Developed UNIX scripts for batch loads that bring huge amounts of data from relational databases into the big data platform.
- Delivery experience with major Hadoop ecosystem components such as Pig, Hive, Spark, Kafka, Elasticsearch, and HBase, with monitoring through Cloudera Manager.
- Used AWS Data Pipeline to schedule an Amazon EMR cluster to clean and process web server logs stored in an Amazon S3 bucket.
- Worked with the team on fetching live stream data from DB2 into HBase tables using Spark Streaming and Apache Kafka.
- Implemented machine learning algorithms using Spark with Python and worked on Spark, Storm, Apache Apex, and Python.
- Involved in analyzing data coming from various sources and creating Meta-files and control files to ingest the data into the Data Lake.
- Involved in configuring batch jobs to ingest the source files into the Data Lake and developed Pig queries to load data into HBase.
- Leveraged Hive queries to create ORC tables and developed Hive scripts for analysts' analysis requirements (see the sketch following this list).
- Implemented Kafka consumers to move data from Kafka partitions into Cassandra for near-real-time analysis; worked extensively with Hive to create, alter, and drop tables and wrote Hive queries.
- Developed Oozie workflows to automate loading data into HDFS and pre-processing with Pig, and translated high-level design specs into simple ETL coding and mapping standards.
- Created and altered HBase tables on top of data residing in the Data Lake, and created external Hive tables on the blobs to expose the data through the Hive metastore.
- Involved in the requirements and design phases to implement a streaming architecture for real-time streaming using Spark and Kafka.
- Used Spark for interactive queries, processing of streaming data, and integration with the HBase database for huge volumes of data.
- Used the Spark API for machine learning; translated a predictive model from SAS code to Spark and used the Spark API over Cloudera Hadoop YARN to perform analytics on data in Hive.
- Created Reports with different Selection Criteria from Hive Tables on the data residing in Data Lake.
- Worked on Hadoop Architecture and various components such as YARN, HDFS, NodeManager, Resource Manager, JobTracker, TaskTracker, NameNode, DataNode and MapReduce concepts.
- Deployed Hadoop components on the Cluster like Hive, HBase, Spark, Scala and others with respect to the requirement.
- Uploaded and processed terabytes of data from various structured and unstructured sources into HDFS (AWS cloud) using Sqoop.
- Implemented the business rules in Spark/Scala to put the business logic in place to run the rating engine.
- Developed Spark code using Scala and Spark SQL/Streaming for faster testing and processing of data.
- Used the Spark UI to observe submitted Spark jobs at the node level, and used Spark to do property-bag parsing of the data to extract the required fields.
- Extensively used ETL methodology to support data extraction, transformation, and loading, using Hadoop.
- Used both the Hive context and the SQL context of Spark for initial testing of Spark jobs, and used WinSCP and FTP to view the data storage structure on the server and to upload the JARs used for spark-submit.
- Developed Spark code from scratch in Scala according to the technical requirements.
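To illustrate the Hive/ORC reporting work referenced earlier in this list, here is a minimal PySpark sketch that registers an ORC-backed table in the Hive metastore and queries it; the database, table, and column names are hypothetical placeholders.

```python
from pyspark.sql import SparkSession

# Hypothetical sketch: use Spark with Hive support to build an ORC-backed
# reporting table and run an aggregate query for analysts.
spark = (SparkSession.builder
         .appName("hive-orc-example")
         .enableHiveSupport()
         .getOrCreate())

# Placeholder source table assumed to already exist in the Hive metastore.
claims = spark.sql("""
    SELECT claim_id, member_id, claim_amount, claim_date
    FROM staging.raw_claims
    WHERE claim_date >= '2020-01-01'
""")

# Persist the selection as an ORC table registered in the metastore.
(claims.write
 .format("orc")
 .mode("overwrite")
 .saveAsTable("reporting.daily_claims_orc"))

report = spark.sql("""
    SELECT claim_date, COUNT(*) AS claim_count, SUM(claim_amount) AS total_amount
    FROM reporting.daily_claims_orc
    GROUP BY claim_date
    ORDER BY claim_date
""")
report.show(10)
```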
Environment: Hadoop, MapReduce, YARN, Hive, Pig, HBase, Sqoop, Spark, Scala, MapR, Core Java, R, SQL, Python, Eclipse, Linux, Unix, HDFS, Impala, Cloudera, Kafka, Apache Cassandra, Oozie, ZooKeeper, MySQL, PL/SQL
Confidential, SAN ANTONIO TX
JAVA/HADOOP DEVELOPER
RESPONSIBILITIES:
- Imported data from different relational data sources such as RDBMS and Teradata into HDFS using Sqoop.
- Imported bulk data into HBase using MapReduce programs and performed analytics on time-series data in HBase using the HBase API.
- Designed and implemented incremental imports into Hive tables and used the REST API to access HBase data for analytics.
- Installed/Configured/Maintained Apache Hadoop clusters for application development and Hadoop tools like Hive, Pig, HBase, Flume, Oozie, Zookeeper and Sqoop.
- Created POC to store Server Log data in MongoDB to identify System Alert Metrics.
- Implemented usage of Amazon EMR for processing Big Data across a Hadoop Cluster of virtual servers on Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3).
- Imported data from various data sources, performed transformations using Hive and MapReduce, loaded the data into HDFS, and extracted data from MySQL into HDFS using Sqoop.
- Developed MapReduce/Spark Python modules for machine learning & predictive analytics in Hadoop on AWS.
- Worked on loading and transforming large sets of structured, semi-structured, and unstructured data.
- Involved in collecting, aggregating and moving data from servers to HDFS using Apache Flume
- Implemented end-to-end systems for data analytics and data automation, integrated with custom visualization tools, using R, Hadoop, MongoDB, and Cassandra.
- Involved in Installation and configuration of Cloudera distribution Hadoop, NameNode, Secondary NameNode, JobTracker, TaskTrackers and DataNodes.
- Wrote Hive jobs to parse the logs and structure them in tabular format to facilitate effective querying of the log data.
- Utilized Spark, Scala, Hadoop, HBase, Kafka, Spark Streaming, MLlib, and Python with a broad variety of machine learning methods, including classification, regression, and dimensionality reduction.
- Used an S3 bucket to store the JARs and input datasets, and used DynamoDB to store the processed output from the input dataset (see the sketch following this list).
- Worked with Cassandra for non-relational data storage and retrieval on enterprise use cases and wrote MapReduce jobs using Java API and Pig Latin.
- Improved the performance and optimization of existing algorithms in Hadoop using the Spark context, Spark SQL, and Spark on YARN.
- Involved in creating a Data Lake by extracting customers' big data from various data sources into Hadoop HDFS; this included data from Excel, flat files, Oracle, SQL Server, MongoDB, Cassandra, HBase, Teradata, and Netezza, as well as log data from servers.
- Performed data synchronization between EC2 and S3, Hive stand-up, and AWS profiling.
- Created reports for the BI team by using Sqoop to export data into HDFS and Hive; involved in creating Hive tables and loading them into dynamic partition tables.
- Involved in managing and reviewing Hadoop log files; migrated ETL jobs to Pig scripts to perform transformations, joins, and some pre-aggregations before storing the data in HDFS.
- Worked on the Talend ETL tool, using features such as context variables and database components like Oracle input, Oracle output, tFileCompare, tFileCopy, and Oracle close ETL components.
- Worked on NoSQL databases including HBase and MongoDB. Configured MySQL Database to store Hive metadata.
- Deployed and tested the system on a Hadoop MapR cluster and worked with different file formats such as sequence files, XML files, and map files using MapReduce programs.
- Developed multiple MapReduce jobs in Java for data cleaning and preprocessing, and imported data from RDBMS environments into HDFS using Sqoop for report generation and visualization in Tableau.
- Developed the ETL mappings using mapplets and re-usable transformations, and various transformations such as source qualifier, expression, connected and un-connected lookup, router, aggregator, filter, sequence generator, update strategy, normalizer, joiner and rank transformations in Power Center Designer.
- Worked on the Oozie workflow engine for job scheduling, and created and maintained technical documentation for launching Hadoop clusters and executing Pig scripts.
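The following is a minimal Boto3 sketch of the S3/DynamoDB pattern referenced earlier in this list: staging input files and a job JAR in S3, then recording processed output in DynamoDB. The bucket, table, key names, and item schema are hypothetical placeholders.

```python
import boto3

# Hypothetical sketch: stage an input dataset and job JAR in S3, then record
# processed output in DynamoDB; all names below are placeholders.
s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

# Upload the input dataset and the job JAR to the S3 bucket.
s3.upload_file("data/input.csv", "example-jobs-bucket", "input/input.csv")
s3.upload_file("target/etl-job.jar", "example-jobs-bucket", "jars/etl-job.jar")

# Store one processed result row in DynamoDB.
table = dynamodb.Table("processed_results")   # placeholder table name
table.put_item(Item={
    "record_id": "2021-01-01#customer-123",   # partition key (placeholder schema)
    "order_count": 42,
    "total_amount": "1234.56",                # stored as a string to avoid float issues
})
```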
Environment: Hadoop, HDFS, MapReduce, Hive, HBase, Oozie, Sqoop, Pig, Java, Tableau, REST API, Maven, Storm, Kafka, SQL, ETL, MapR, PySpark, JavaScript, Shell Scripting.
Confidential, CHICAGO-IL
SR. JAVA DEVELOPER
RESPONSIBILITIES:
- Involved in the design and development phases of Agile software development; analyzed the current mainframe system and designed new GUI screens.
- Developed the application using 3 Tier Architecture i.e., Presentation, Business and Data Integration layers in accordance with the customer/client standards.
- Played a vital role in a Scala framework for web-based applications and used FileNet for content management and for streamlining business processes.
- Created responsive layouts for multiple devices and platforms using the Foundation framework and implemented printable chart reports using HTML, CSS, and jQuery.
- Applied JavaScript for client-side form validation and worked on UNIX/Linux to move the project into the production environment.
- Created managed beans for handling JSF pages, including logic for processing the data on the page, and created a simple user interface for the application's configuration system using MVC design patterns and the Swing framework.
- Used the object/relational mapping tool Hibernate to achieve object-to-database-table persistence.
- Worked with Core Java to develop automated solutions to include web interfaces using HTML, CSS, JavaScript and Web services.
- Developed a web GUI involving HTML and JavaScript under MVC architecture; created WebLogic domains and set up admin and managed servers for Java/J2EE applications in non-production and production environments.
- Configured the WebSphere application server to connect to DB2, Oracle, and SQL Server in the back end by creating JDBC data sources, and configured MQ Series with IBM RAD and WAS to create new connection factories and queues.
- Extensively worked with TOAD for interacting with the database, developing stored procedures, and promoting SQL changes to QA and production environments.
- Used Apache Maven for project management and building the application; CVS was used for version management.
- Involved in configuring the Spring Framework and the Hibernate mapping tool, and in monitoring WebLogic/JBoss server health and security.
- Created connection pools and data sources in the WebLogic console and implemented Hibernate for database transactions on DB2.
- Implemented CI/CD using Jenkins for continuous development and delivery.
- Involved in configuring Hibernate to access and retrieve data from the database, and wrote web services (JAX-WS) for external systems via SOAP/HTTP calls.
- Used the Log4j framework to log and track the application, and was involved in developing SQL queries, stored procedures, and functions.
- Created and updated existing Ant build scripts for deployment; tested and deployed the application on the WAS server and used Rational ClearCase for version control.
Environment: FileNet, IBM RAD 6.0, Scala, Java 1.5, JSP, Servlets, Core Java, Spring, Swing, Hibernate, JSF, ICEfaces, HTML, CSS, Jenkins, JavaScript, Node.js, UNIX, Web Services (SOAP), WAS 6.1, XML, IBM WebSphere 6.1, Rational ClearCase, Log4j, IBM DB2.