
Senior Hadoop Consultant Resume


SUMMARY

  • IT consultant with more than 7 years of extensive experience in operating, developing, maintaining, monitoring, and upgrading Hadoop clusters.
  • Involved in all phases of the Software Development Life Cycle (SDLC) and worked on all activities related to the operations, implementation, administration, and support of ETL processes for large-scale data warehouses.
  • Good Experience in translating client's Big Data business requirements and transforming them into Hadoop centric technologies.
  • Hands on experience in installing/configuring/maintaining Apache Hadoop clusters for application development and Hadoop tools like Hive, Pig, Spark, Kafka, Zookeeper, Hue and Sqoop using Hortonworks.
  • Possess extensive analysis, design and development experience in Hadoop and Big Data platforms
  • Able to critically inspect and analyze large, complex, multi-dimensional data sets in Big Data platforms
  • Experience with Big Data technologies, distributed file systems, Hadoop, HDFS, Hive, and Hbase
  • Defined and executed appropriate steps to validate various data feeds to and from the organization
  • Collaborate with business partners to gain in-depth understanding of data requirements and desired business outcomes
  • Create scripts to extract, transfer, transform, load, and analyze data residing in Hadoop and RDBMS including Oracle and Teradata
  • Design, implement, and load table structures in Hadoop and RDBMS including Oracle and Teradata to facilitate detailed data analysis
  • Participate in user acceptance testing in a fast-paced Agile development environment
  • Troubleshoot data issues and work creatively and analytically to solve problems and design solutions
  • Create documentation to clearly articulate designs, use cases, test results, and deliverables to varied audiences
  • Highly proficient and extensive experience working with relational databases, particularly Oracle and Teradata
  • Experience in converting Hive/SQL queries into Spark transformations using Java, and in ETL development using Kafka, Flume, and Sqoop.
  • Built large-scale data processing pipelines and data storage platforms using open-source big data technologies.
  • Strong database experience including administration and maintenance of SQL Server as well as Oracle. Extensively worked in writing T-SQL, SQL queries, PL/SQL, Functions, Stored Procedures, database triggers, exception handlers, DTS Export & Import
  • Experience with AWS services like Kinesis, EMR, Databricks, S3, API Gateway, SQS, SNS, Cloudwatch, etc.
  • Experienced in working with Apache Ambari and AWS Cloud components like EC2, EMR, S3, and Elasticsearch.
  • In-depth knowledge of database imports; worked with imported data to populate tables in Hive, with exposure to exporting data from relational databases to the Hadoop Distributed File System.
  • Loaded data from relational databases into HBase using Sqoop and set up MapR metrics with a NoSQL database to log metrics data.
  • Experience in security requirements for Hadoop and integrating Kerberos authentication by implementing KDC server and creating realm /domain.
  • Experience in Commissioning, Decommissioning, Balancing, and Managing Nodes and tuning server for optimal performance of the cluster.
  • Hands-on experience with Amazon EC2 Spot integration and Amazon S3 integration.
  • Responsible for data extraction and data ingestion from different data sources into Hadoop Data Lake by creating ETL pipelines using Sqoop, Kafka (JSON) and Talend.
  • Closely monitored and analyzed MapReduce job executions on the cluster at the task level and optimized Hadoop cluster components to achieve high performance.
  • Developed Scala scripts and UDFs using both DataFrames/SQL and RDD/MapReduce in Spark for data aggregation and queries, and wrote data back into the OLTP system through Sqoop.
  • Hands on experience in application development using Java, UNIX Shell scripting and RDBMS.
  • Experience in building, deploying and integrating applications with Ant and Maven.
  • Created multiple Tableau dashboards with custom SQL queries to enhance the processing of complex visualizations.
  • Experience in tuning and debugging running Spark applications.
  • Experience with Software Development Processes & Models: Agile, Waterfall & Scrum Model.
  • Experience in UNIX shell scripting and has good understanding of OOPS and Data structures.
  • Very good understanding of partitioning and bucketing concepts in Hive; designed both internal and external Hive tables to optimize performance (a minimal sketch follows this summary).
  • Strong experience using different file formats like Avro, RCFile, ORC and Parquet. Hands-on experience in installing, configuring, and deploying Hadoop distributions in cluster environments.
  • Experience in NoSQL databases like HBase, Apache Cassandra and MongoDB, and their integration with Hadoop clusters.
  • In-depth understanding/knowledge of Hadoop Architecture and various components such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node, MapReduce programming paradigm.
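
To illustrate the Hive table design mentioned above (a partitioned external table plus a bucketed managed table), here is a minimal PySpark sketch. The database, table, column names, and paths are hypothetical examples, not details from this resume, and the staged data is assumed to already carry a load_date column.

```python
# Minimal sketch: partitioned external Hive table + bucketed managed table in PySpark.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-table-design-sketch")
         .enableHiveSupport()      # required so spark.sql() can manage Hive tables
         .getOrCreate())

# External table: data stays at an agreed HDFS location, partitioned by load date
# so queries filtering on load_date only scan the matching directories.
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS sales.transactions_ext (
        txn_id      STRING,
        customer_id STRING,
        amount      DOUBLE
    )
    PARTITIONED BY (load_date STRING)
    STORED AS ORC
    LOCATION '/data/warehouse/sales/transactions'
""")

# Internal (managed) table: written through Spark, partitioned by load date and
# bucketed by customer_id to speed up joins and aggregations on that key.
staged = spark.read.parquet("/data/staging/transactions")   # assumed to contain load_date
(staged.write
    .partitionBy("load_date")
    .bucketBy(32, "customer_id")
    .sortBy("customer_id")
    .format("orc")
    .mode("overwrite")
    .saveAsTable("sales.transactions_managed"))

# Partition pruning in action: only the 2020-01-01 partition directories are read.
spark.sql("""
    SELECT customer_id, SUM(amount) AS total_amount
    FROM sales.transactions_ext
    WHERE load_date = '2020-01-01'
    GROUP BY customer_id
""").show()
```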

TECHNICAL SKILLS

Big Data Ecosystems: Hadoop, MapReduce, HDFS, HBase, Zookeeper, Hive, Pig, Sqoop, Kafka, Cassandra, Oozie, Flume, Chukwa, Pentaho Kettle and Talend

Cloud: AWS (EC2, S3, ELB, EBS, VPC, Auto Scaling)

Programming Languages: Java, C/C++, eVB, Assembly Language (8085/8086)

Scripting Languages: PHP, JavaScript, Python and Bash

Databases: NoSQL, Oracle

UNIX Tools: Apache, Yum, RPM

Tools: Eclipse, JDeveloper, JProbe, CVS, Ant, MS Visual Studio

Platforms: Windows (2000/XP), Linux, Solaris, AIX, HPUX

Application Servers: Apache Tomcat 5.x/6.0, JBoss 4.0

IDEs: NetBeans, Eclipse, WSAD, RAD

Methodologies: Agile, UML, Design Patterns

PROFESSIONAL EXPERIENCE

Senior Hadoop Consultant

Confidential

Responsibilities:

  • Involved in data modeling, system/data analysis, and design and development of data warehouse process models using Hadoop ecosystem tools.
  • Involved in end-to-end data processing, including ingestion, processing, quality checks and splitting.
  • Interact with business users to consolidate data and processing requirements to support the Enterprise Business Intelligence.
  • Designed and implemented a MapReduce-based large-scale parallel relation-learning system.
  • Streamed data in real time using Spark Streaming with Kafka (see the streaming sketch at the end of this list).
  • Developed Spark scripts by using Scala as per the requirement.
  • Responsible for building scalable, distributed data solutions using Hive, Impala and Spark in Hadoop.
  • Exposure to performance tuning and optimization of Hive Queries.
  • Moved data from HDFS to RDBMS and vice versa using Sqoop and TPT.
  • Developed Pig and Hive scripts for transforming/formatting the data as per business rules.
  • Experienced in Extracting, Transforming, and Loading (ETL) of data from various sources like Flat, Sequence, RC, XML and ORC files, RDBMS and various Compression Techniques like gzip, bzip2, snappy.
  • Installed and configured Presto on the cluster and worked on benchmark testing between Presto, Hive and Teradata.
  • Worked on creating Presto views bridging Hive and the Teradata QueryGrid.
  • Implemented daily workflow for extraction and processing of data, scheduled jobs using Oozie engine.
  • Interact with team to review designs, code, test plans and complete the project analysis, scoping, scheduling and documentation to ensure quality.
  • Designed and implemented data processing using AWS Data Pipeline and worked on cluster coordination services through Zookeeper.
  • Deployed Data lake cluster with Hortonworks Ambari on AWS using EC2 and S3. Installed the Apache Kafka cluster and Confluent Kafka open source in different environments.
  • Worked on AWS Kinesis for processing huge amounts of real-time data
  • Involved in file movements between HDFS and AWS S3 and extensively worked with S3 bucket in AWS.
  • Worked on AWS DMS (Database Migration Service) to migrate data from RDS to Snowflake
  • Configured AWS IAM and Security Group in Public and Private Subnets in VPC.
  • Working on configuring and Maintaining Hadoop environment on AWS.
  • Created a Databricks Delta Lake process for real-time data loads from various sources (databases, Adobe and SAP) to the AWS S3 data lake using Python/PySpark code.
  • Deployed Kubernetes in both AWS and Google Cloud; set up clusters and replication and deployed multiple containers.
  • Worked extensively with importing metadata into Hive using Python and migrated existing tables and applications to work on AWS cloud (S3).
  • Contribute to preparing a big data platform for cloud readiness within AWS/Microsoft Azure platforms by containerizing modules using Docker.
  • Created automated pipelines in AWS Code Pipeline to deploy Docker containers in AWS ECS using S3.
  • Imported data from Cassandra databases and stored it in AWS.
  • Have been working with AWS cloud services (VPC, EC2, S3, RDS, Redshift, Data Pipeline, EMR, DynamoDB, WorkSpaces, Lambda, Kinesis, SNS, and SQS).
  • Design, development and implementation of performant ETL pipelines using python API (pySpark) of Apache Spark on AWS EMR.
  • Backed up AWS PostgreSQL to S3 via a daily job run on EMR using DataFrames.
  • Created custom CFTs (CloudFormation templates) for EMR, Lambda, and EC2 in AWS as per requirements.
  • Developed APIs using AWS Lambda to manage servers and run code in AWS.
  • Implemented highly scalable and robust ETL processes using AWS (EMR, CloudWatch, IAM, EC2, S3, Lambda functions, and DynamoDB).
  • Built a serverless ETL in AWS Lambda to process files in the S3 bucket so they are cataloged immediately.
  • Developed an AWS Lambda function to invoke a Glue job as soon as a new file arrives in the inbound S3 bucket (a sketch of this trigger follows at the end of this role's description).
  • Optimized EMRFS so Hadoop can read and write directly to AWS S3 in parallel with good performance.
  • Loaded data into Spark RDDs and performed in-memory data computation to generate the output response.
  • Performed different types of transformations and actions on the RDDs to meet the business requirements.
  • Developed a data pipeline using Kafka, Spark and Hive to ingest, transform and analyze data.
  • Also worked on analyzing Hadoop cluster and different big data analytic tools including Pig, HBase and Sqoop.
  • Involved in loading data from UNIX file system to HDFS.
  • Developed multiple MapReduce jobs in Java and Python for data cleaning and pre-processing.
  • Created HBase tables to store variable data formats of PII data coming from different portfolios.
  • Installed and configured Hive, wrote Hive UDFs, and implemented best-offer logic using Pig scripts and Pig UDFs.
  • Experience on loading and transforming of large sets of structured, semi structured and unstructured data.
  • Cluster coordination services through Zookeeper.
  • Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the BI team.
  • Built REST web services with a Node.js server in the back end to handle requests sent from front-end jQuery Ajax calls.
  • Wrote a Python program to maintain raw file archival in a GCS bucket.
  • Analyzed various types of raw files such as JSON, CSV and XML with Python using pandas, NumPy, etc.
  • Data extraction, data scaling, data transformations, data modeling and visualizations using Python, SQL and Tableau based on requirements.
  • Involved in data modeling and in sharding and replication strategies in MongoDB.
  • Imported and exported data from different relational databases like MySQL into HDFS and HBase using Sqoop.
  • Involved in managing and reviewing Hadoop log files.
  • Worked on the Kafka backup index, minimized logs with a Log4j appender, and pointed Ambari server logs to NAS storage.
  • Experience in Upgrades and Patches and Installation of Ecosystem Products through Ambari.
  • Imported data using Sqoop to load data from MySQL to HDFS on regular basis.
  • Responsible for creating Hive tables and working on them using HiveQL.
  • Responsible for importing and exporting data into HDFS and Hive using Sqoop.
  • Involved in creating Hive tables, loading them with data and writing Hive queries which run internally as MapReduce jobs.
  • Knowledge in Data warehousing and using ETL tools like Informatica and Pentaho.
  • Experienced with Spark processing frameworks such as Spark SQL, and with data warehousing and ETL processes.
  • Expert in understanding data and designing and implementing enterprise platforms like Hadoop data lakes and large data warehouses.
  • Pulled data from the data lake (HDFS) and massaged it with various RDD transformations.
  • Developed a data pipeline using Kafka and Storm to store data into HDFS.
  • Performed Real time event processing of data from multiple servers in the organization using Apache Storm by integrating with Apache Kafka.
  • Experience in setting up Kafka clusters for publishing topics; familiar with the Lambda architecture.
  • Utilized Kubernetes and Docker for the runtime environment for the CI/CD system to build, test, and deploy.
  • Experience working on several docker components like Docker Engine, Hub, Machine, creating docker images.
  • Configured Airflow workflow engine to run multiple Hive jobs.
  • Developed serverless ETL applications to process the data in S3 buckets using Glue.
  • Developed Airflow scripts to orchestrate complex workflows and schedule them using Apache Airflow.
  • Involved in migrating objects from Teradata to Snowflake.
  • Developed stored procedures/views in Snowflake and used them in Talend for loading dimensions and facts.
  • Provided consulting on Snowflake data platform solution architecture, design, development, and deployment, focused on bringing a data-driven culture across the enterprise.
  • Set up the environment for Snowflake and worked on the creation of warehouses and databases.
  • Optimized PySpark jobs to run on a Kubernetes cluster for faster data processing.
  • Extensive experience in implementing data warehousing projects, including extraction into HDFS and processing with Pig and Hive.
  • Involved in scheduling Oozie workflow engine to run multiple Hive jobs.
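
A minimal sketch of the Kafka-to-HDFS streaming ingestion referenced in this list, written with PySpark Structured Streaming (the DataFrame-based API; the role mentions Spark Streaming generally). The broker address, topic, schema, and paths are hypothetical, and the spark-sql-kafka connector package is assumed to be on the classpath.

```python
# Minimal sketch: consume a Kafka topic with Structured Streaming and land it on HDFS.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-streaming-sketch").getOrCreate()

# Hypothetical JSON event layout.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical broker
       .option("subscribe", "transactions")                 # hypothetical topic
       .option("startingOffsets", "latest")
       .load())

# Kafka delivers the payload as bytes; parse the JSON value into typed columns.
events = (raw
          .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

# Write micro-batches as Parquet on HDFS, checkpointing offsets for recovery.
query = (events.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/landing/transactions")
         .option("checkpointLocation", "hdfs:///checkpoints/transactions")
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()
```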

Environment: Hadoop, MapReduce, Hive, Pig, Sqoop, Java, Node.js, Oozie, HBase, Kafka, Spark, Scala, Eclipse, Linux, Oracle, Teradata.
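
As referenced in the Lambda/Glue bullet above, here is a minimal sketch of an S3-triggered Lambda handler that starts a Glue job for each newly arrived file. The Glue job name and argument keys are hypothetical, and the S3 event notification wiring to this function is assumed to be configured separately.

```python
# Minimal sketch: S3-event-driven Lambda that kicks off a Glue job run per new object.
import urllib.parse

import boto3

glue = boto3.client("glue")

GLUE_JOB_NAME = "inbound-file-etl"  # hypothetical Glue job name


def lambda_handler(event, context):
    runs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Pass the newly arrived object to the job as arguments so the Glue
        # script knows exactly which file to process/catalog.
        response = glue.start_job_run(
            JobName=GLUE_JOB_NAME,
            Arguments={
                "--source_bucket": bucket,
                "--source_key": key,
            },
        )
        runs.append(response["JobRunId"])

    return {"started_job_runs": runs}
```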

Senior Hadoop Developer

Confidential, Phoenix, AZ

Responsibilities:

  • Used Flume to collect, aggregate and store the web log data from different sources like web servers, mobile and network devices and pushed into HDFS.
  • Installed and configured Hadoop MapReduce, HDFS, Developed multiple MapReduce jobs in Java for data cleansing and processing.
  • Used Spark API over Hortonworks Hadoop YARN to perform analytics on data in Hive.
  • Explored Spark for improving the performance and optimization of existing algorithms in Hadoop using Spark Context, Spark SQL, DataFrames, pair RDDs, and Spark on YARN.
  • Designed and implemented distributed/cloud computing solutions (MapReduce/Hadoop, Pig, HBase, Avro, and Zookeeper) on Amazon Web Services.
  • Extracted the data from Teradata into HDFS using Sqoop.
  • Wrote MapReduce code to process and parse data from various sources and store the parsed data in HBase and Hive using Hive integration.
  • Worked with HBase and Hive scripts to extract, transform and load the data into HBase and Hive.
  • Designed appropriate partitioning/bucketing schema to allow faster data retrieval during analysis using HIVE.
  • Extensive working knowledge of partitioned table, UDFs, performance tuning, compression related properties in Hive.
  • Worked with different file formats like TextFile, AvroFile, and ORC for Hive querying and processing.
  • Worked on installing the cluster, commissioning and decommissioning of DataNodes, NameNode recovery, capacity planning and slots configuration.
  • Migrated a Snowflake database to Windows Azure and updated the connection strings based on requirements.
  • Created CI/CD Pipelines on Azure DevOps environments by providing their dependencies and tasks.
  • Design, development, and implementation of performant ETL pipelines using python API (pySpark) of Apache Spark on Azure Databricks
  • Implemented objects in Azure Databricks for big data analytics, mostly using SQL in notebooks and occasionally Python and Scala.
  • Worked on designing and data modeling for the Cosmos DB NoSQL database, and managed and reviewed Azure Application Insights log files.
  • Used Ambari on the Azure HDInsight cluster to record and manage NameNode and DataNode logs.
  • Experience with ETL/ELT patterns (preferably using Azure Data Factory and Databricks)
  • Designed schema and modeling of data in Cassandra
  • Used Cassandra CQL with Java API's to retrieve data from Cassandra tables
  • Wrote algorithms to store all validated data in Cassandra using Spring Data Cassandra REST.
  • Worked on Cassandra-Hadoop and Cassandra-Spark integrations.
  • Implemented Flume to collect data from various sources and load it into HDFS for further processing.
  • Developed workflows using custom MapReduce, Pig, Hive and Sqoop.
  • Built reusable Hive UDFs to sort structure fields and return complex datatypes.
  • Responsible for loading data from UNIX file system to HDFS.
  • Developed a suite of unit test cases for Mapper, Reducer and Driver classes using an MR testing library.
  • Used Maven extensively for building jar files of MapReduce programs and deployed to cluster.
  • Used OOZIE engine for creating workflow jobs for executing Hadoop jobs such as Hive and Sqoop operations.
  • Used Log4J for standard logging and used the Jackson API for JSON request handling.
  • Exported the analyzed data to relational databases using Sqoop for visualization and to generate reports.
  • Created a Python Flask login and dashboard with a Neo4j graph database and executed various Cypher queries for data analytics (a sketch follows at the end of this role's description).
  • Implemented PySpark using Python and Spark SQL for faster testing and processing of the data.
  • Worked with Python and Docker on an Ubuntu Linux platform using HTTP/REST interfaces with deployment into a multi-node Kubernetes environment.
  • Worked on Python 3.5.1 scripts to analyze customer data.
  • Developed MapReduce jobs in Python for data cleaning and data processing.
  • Data extraction, data scaling, data transformations, data modeling and visualizations using Python, SQL and Tableau based on requirements.
  • Performed data modeling and data migration, and consolidated and harmonized data. Hands-on data modeling experience using dimensional data modeling, star schema, snowflake schema, fact and dimension tables, and physical and logical data modeling with Erwin 3.x/4.x.
  • Performed data modeling and built dimensional schemas with entity relationship diagrams using tools like Lucidchart.
  • Extensive experience in developing and deploying applications using WebLogic, Apache Tomcat, JBoss and Apache Ambari.
  • Very good experience in monitoring and managing the Hadoop cluster using Ambari
  • Strong understanding of Data Lake, Business Intelligence and Data Warehousing Concepts with an emphasis on ETL and SDLC.
  • Monitoring production jobs for data warehousing applications
  • Implemented batch and near-real-time (MQ) data integration and data warehousing solutions on relational databases (Oracle, SQL Server, DB2).
  • Have sound knowledge of designing data warehousing applications using tools like Teradata, Oracle, and SQL Server.
  • Have a solid background working on DBMS technologies such as Oracle, MySQL and NoSQL and on data warehousing architectures, and performed migrations from different databases (SQL Server, Oracle, MySQL) to Hadoop.
  • Built ETL/ELT pipelines with data technologies like PySpark, Hive, Presto and Databricks.
  • Experience in building DevOps pipelines using OpenShift and Kubernetes for microservices architectures.
  • Working experience in CI/CD automation environment with GitHub, Bitbucket, Jenkins, Docker and Kubernetes
  • Expertise in creating Kubernetes cluster with cloud formation templates and PowerShell scripting to automate deployment in a cloud environment.
  • Installed and configured OpenShift platform in managing Docker containers and Kubernetes Clusters.
  • Deployment of Cloud service including Jenkins and Nexus on Docker using Terraform.
  • Optimization and troubleshooting, test case integration into CI/CD pipeline using Docker images.
  • Responsible for importing real-time data from sources into Kafka clusters.
  • Developed code to write canonical model JSON records from numerous input sources to Kafka queues.
  • Worked extensively on building NiFi data pipelines in a Docker container environment during the development phase.
  • Exported the analyzed data to relational databases using Sqoop for visualization and to generate reports for the BI team.
  • Implemented and deployed EMR (Elastic MapReduce) clusters leveraging Amazon Web Services with EC2 instances.
  • Created APIs using Lambda functions with DynamoDB.
  • Involved in requirement and design phase to implement Streaming Lambda Architecture to use real time streaming using Spark and Kafka and Scala.
  • Build data platforms, pipelines, and storage systems using the Apache Kafka, Apache Storm and search technologies such as Elastic search.
  • Utilized Kubernetes and Docker for the runtime environment for the CI/CD system to build, test, and deploy.
  • Set up NoSQL Cassandra and Neo4j graph databases, implemented connections to a Python application, and implemented CI/CD based on Docker, Kubernetes and a Swarm cluster.
  • Designed data models for a next-generation data lake by working with business SMEs and architecture teams.
  • Used Impala to read, write and query the Hadoop data in HDFS from HBase or Cassandra.
  • In the data exploration stage, used Hive and Impala to gain insights into the customer data.
  • Developed complex SQL queries, scripts, user-defined functions, views, and triggers for business logic implementation in the Snowflake Enterprise Data Warehouse.
  • Expertise in Creating, Debugging, Scheduling and Monitoring jobs using Airflow and Oozie.
  • Installed the Airflow workflow engine to execute Spark jobs that run independently based on time and data availability (see the DAG sketch after this list).
  • Assisted the admin team with the installation and configuration of additional nodes in the Hadoop cluster.
  • Used visualization tools such as Power View for Excel and Tableau for visualizing data and generating reports.
  • Used Git and ClearCase for software configuration management, in collaboration with ClearQuest to log and keep track of SCM activities.
  • Implemented MapReduce programs on log data to transform it into a structured form to find user information.
  • Extensive experience in writing Pig scripts to transform raw data from several data sources into forming baseline data.
  • Utilized Flume to filter out the JSON input data read from the web servers to retrieve only the required data needed to perform analytics.
  • Developed UDFs for Hive and wrote complex queries in Hive for data analysis.
  • Developed a well-structured and efficient ad-hoc environment for functional users.
  • Export the analyzed data to relational databases using Sqoop for visualizations and to generate reports for the BI team.
  • Developed workflow in Oozie to automate the tasks of loading the data into HDFS and pre-processing with Pig.
  • Wrote ETLs using Hive and processed the data as per business logic.
  • Extensive work in ETL process consisting of data transformation, data sourcing, mapping, conversion and loading using Informatica.
  • Extensively used ETL processes to load data from flat files into the target database by applying business logic on transformation mapping for inserting and updating records when loaded.
  • Created Talend ETL jobs to read the data from Oracle Database and import in HDFS.
  • Worked on data serialization formats for converting complex objects into sequence bits by using Avro, RC and ORC file formats.
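
A minimal sketch of the Airflow scheduling referenced in this list: a DAG that submits a Spark job once a day via spark-submit. The DAG id, schedule, owner, and script path are hypothetical, and the example assumes the Airflow 2.x BashOperator import path.

```python
# Minimal sketch: daily Airflow DAG that runs a Spark job through spark-submit.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-eng",            # hypothetical owner
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="daily_spark_aggregation",
    default_args=default_args,
    start_date=datetime(2021, 1, 1),
    schedule_interval="0 2 * * *",  # run once a day at 02:00
    catchup=False,
) as dag:

    # Hand the execution date to the job so each run processes its own slice.
    run_spark_job = BashOperator(
        task_id="run_spark_aggregation",
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "/opt/jobs/daily_aggregation.py --run-date {{ ds }}"
        ),
    )
```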

Environment: Apache Hadoop, Hortonworks HDP 2.0, HDFS, MapReduce, Sqoop, Flume, Pig, Hive, HBase, Oozie, Teradata, Talend, Avro, Java, Linux
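
A minimal sketch of the Flask/Neo4j dashboard work referenced above: one endpoint running a Cypher query through the official Neo4j Python driver. The connection URI, credentials, and the Customer/Order data model are hypothetical.

```python
# Minimal sketch: Flask endpoint backed by Neo4j, returning a dashboard metric.
from flask import Flask, jsonify
from neo4j import GraphDatabase

app = Flask(__name__)

# Hypothetical local Neo4j instance and credentials.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))


@app.route("/top-customers")
def top_customers():
    # Hypothetical graph model: (Customer)-[:PLACED]->(Order).
    cypher = """
        MATCH (c:Customer)-[:PLACED]->(o:Order)
        RETURN c.name AS customer, count(o) AS orders
        ORDER BY orders DESC
        LIMIT 10
    """
    with driver.session() as session:
        rows = [record.data() for record in session.run(cypher)]
    return jsonify(rows)


if __name__ == "__main__":
    app.run(debug=True)
```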

Hadoop Java Developer

Confidential

Responsibilities:

  • Interacted daily with the onshore counterparts to gather requirements.
  • Worked on a few development projects on Teradata and gained knowledge of Teradata utilities like MultiLoad, TPump, BTEQ, etc.
  • Performed Input data analysis, generated space estimation reports for Staging and Target tables in Testing and Production environments.
  • Worked on a Remediation project to optimize all the Teradata SQL queries in e-commerce line of business and applied several query tuning and query optimization techniques in Teradata SQL.
  • Converted projects from Teradata to Hadoop as Teradata proved to be costly to handle huge data of the Bank, thus gained knowledge on Hadoop right from scratch.
  • Installed and configured Hadoop MapReduce, HDFS and started to load data into HDFS instead of Teradata and performed Data Cleansing and Processing operations.
  • Have gained an in-depth knowledge on HDFS data storage and MapReduce data processing techniques.
  • Performed importing and exporting data into HDFS and Hive using Sqoop.
  • Designed and Scheduled workflows using Oozie.
  • Develop and own ETL Data Pipeline and Data Model solutions for integrating new data sets into the Snowflake Enterprise Data Warehouse
  • Exported data from Impala to Tableau reporting tool, created dashboards on live connection.
  • Maintain and extend existing (Informatica, Snowflake/Teradata, Tidal) data warehouse by developing business-centric solutions in a timely fashion.
  • Used IICS to pull data from Snowflake Stage DB to Reporting DB
  • Developed Airflow scripts to orchestrate complex workflows and schedule them using Apache Airflow.
  • Created UNIX shell scripts which in turn call several Hive and Oozie scripts.
  • Built several managed and external Hive tables and performed several joins on those tables to achieve the results (see the sketch at the end of this section).
  • Modeled Hive partitions extensively for data separation and faster processing, and followed Pig and Hive best practices for tuning. Partitioned each day's data into separate partitions for easy access and efficiency.
  • Scheduled several jobs, including Shell, Oozie, Hive and Sqoop scripts, using Autosys, and coded JIL scripts to define job dependencies while scheduling.
  • Performed Data analytics on the credit card transactional data and Coded Automatic Report Mailing Scripts in UNIX.
  • Worked in testing and production environments and learned to move components from one environment to the other using Subversion (SVN).
  • Performed Encryption and Decryption of key fields like Account Number of the Bank customers in several input and reporting files using COBOL.
  • Monitored Hadoop jobs on the performance and production clusters and provided production support for several successful runs.
  • Built complete ETL logic, generated transformations and workflows, and automated the scheduled runs.
  • Built an Automatic Query Performance Metrics Generation Tool in Teradata using SQL Procedures.
  • Generated Test scripts and Test plan, Data Analysis and Defect Reporting using HP Quality Center.
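
A minimal sketch of the managed/external Hive table joins referenced above, expressed in PySpark with a partition filter so only one day's data is scanned. The database, table, and column names are hypothetical examples.

```python
# Minimal sketch: join an external fact table with a managed dimension table,
# restricting the scan to a single partition.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("hive-join-sketch")
         .enableHiveSupport()
         .getOrCreate())

# External table of raw card transactions, partitioned by txn_date.
txns = (spark.table("staging.card_transactions")
        .where(F.col("txn_date") == "2019-06-01"))   # partition pruning

# Managed dimension table with account details.
accounts = spark.table("warehouse.accounts")

daily_summary = (txns.join(accounts, on="account_id", how="inner")
                 .groupBy("account_type")
                 .agg(F.sum("txn_amount").alias("total_amount"),
                      F.count("*").alias("txn_count")))

daily_summary.show()
```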

Java Developer

Confidential

Responsibilities:

  • Collecting and understanding the user requirements and functional specifications.
  • Developed message driven beans to listen to JMS.
  • Developed the web interface using Servlets, Java Server Pages, HTML, and CSS.
  • Development of GUI using HTML, CSS, JSP, and JavaScript.
  • Creating components for isolated business logic.
  • Used WebLogic to deploy applications on local and development environments of the application.
  • Extensively used the JDBC prepared statement to embed the SQL queries in the Java code.
  • Developed Data Access Objects (DAOs) using Spring Framework 3.
  • Developed web applications as Rich Internet Applications using Java applets and Silverlight.
  • Used JavaScript to perform client side validations and Struts-Validator Framework for Server-side validation.
  • Provided on call support based on the priority of the issues.
  • Deployment of application in J2EE architecture.
  • Implementing Session Façade Pattern using Session and Entity Beans.

Environment: Java, J2EE, JDBC, JSP, Struts, JMS, Spring, SQL, MS Access, JavaScript, HTML.
