Big Data Developer, Technical Lead Resume
Phoenix, AZ
SUMMARY
- 8+ years of IT experience as a Java Developer and Big Data/Hadoop Developer.
- Passionate about working with Hadoop and Big Data architectures, Big Data processing, analysis, and visualization.
- Good knowledge of the Hadoop Distributed File System and ecosystem components such as MapReduce, Hive, Pig, HBase, ZooKeeper, Flume, Splunk, Sqoop, Storm, Kafka, Oozie, Spark Streaming, and the core Spark API.
- Good understanding of the MapR Hadoop distribution.
- Ecosystem familiarity with Cloudera CDH1, CDH2, CDH3, CDH4, CDH5 and Hortonworks HDP 2.1.
- Executed batch jobs over data streams using Spark Streaming.
- Experience in Apache Spark, Spark Streaming, Spark SQL, and NoSQL databases such as Cassandra and HBase.
- Experience with Hadoop distributions from Amazon, Cloudera, and Hortonworks.
- Thorough knowledge of Spark architecture and how RDDs work internally; exposure to Spark Streaming and Spark SQL.
- Integrated Spark Streaming with Sprinkler using a pull mechanism and loaded JSON data from social media into HDFS.
- Experience in shell scripting as well as Scala and Python, used extensively with Spark for data processing.
- Detailed understanding of Hadoop internal architecture and the function of components such as JobTracker, TaskTracker, NameNode, DataNode, ApplicationMaster, ResourceManager, NodeManager, and the MapReduce programming paradigm.
- Developed cross-platform products while working with Hadoop file formats such as SequenceFile, RCFile, ORC, Avro, and Parquet.
- Analyzed data through HiveQL, Pig Latin, and MapReduce programs in Java; extended Hive and Pig core functionality by implementing custom UDFs.
- Installed and set up Hadoop environments in the cloud on Amazon Web Services.
- Implemented partitioning and bucketing in Hive for more efficient querying of data (see the sketch after this list).
- Extensively involved in the design, development, tuning, and maintenance of HBase and Cassandra databases.
- Extensive expertise in creating and automating workflows (Maven builds) using the Oozie workflow engine.
- Scheduled jobs using the Oozie Coordinator to execute on specific days (excluding weekends).
- Hands-on experience with Apache, Cloudera, and Hortonworks Hadoop environments.
- Good understanding of Hadoop administration with Hortonworks.
- Experience using Jenkins and Maven 2.0 to compile, package, and deploy to application servers.
- Extensive experience in requirements gathering, design, development, and reporting applications.
- Expertise in data modelling and deployment strategies in production environments meeting Agile requirements.
- Proficient in coding and optimizing Teradata batch-processing scripts for data transformation, aggregation, and load using BTEQ.
- Teradata consultant with experience in Teradata physical implementation and database tuning.
- Strong experience in coding and debugging Teradata utilities such as FastLoad, MultiLoad, and TPump for loading data from various sources into Teradata, and data export using FastExport.
- Release automation and troubleshooting (CA Automation, Jenkins, Ant and Maven builds, IBM SCM architectures).
- Experience in creating functional flows and technical design specification documents based on functional requirements, following Agile methodologies involving code check-in and checkout from repositories.
- Proficient in PL/SQL and data warehousing concepts, with strong experience implementing data warehousing methodologies.
- Experienced in Waterfall, Agile, and Scrum software development process frameworks.
- Experienced in using Jenkins and Maven 3.3 to compile, package, and deploy to application servers.
- Deployment, distribution, and implementation of enterprise applications in J2EE environments.
- Comprehensive knowledge of Software Development Life Cycle (SDLC), having thorough understanding of various phases like Requirements Analysis, Design, Development and Testing.
- Experienced with different Spring modules such as Spring AOP, IoC/Core, and MVC.
- Good understanding of Bootstrap, Spring REST, and Spring Integration.
- Extensive hands-on experience in the design, development, and deployment of N-tier enterprise applications for the J2EE platform using Java, JavaScript, Struts, Spring, EJB, Servlets, JSP, web services, AngularJS, Node.js, JNDI, JSON, JMS, JAXP, JUnit, and XML.
- Proficient in developing web pages quickly and effectively using HTML5, CSS3, JavaScript, and jQuery, with experience making web pages cross-browser compatible.
- Worked on generating reports with reporting tools such as Talend and Tableau.
- Experience in developing applications with AngularJS and Node.js.
- Experienced in Java multithreaded programming to develop multithreaded modules and applications, and in the development of multi-tier distributed applications using Java technologies.
- Experience in monitoring, troubleshooting and supporting J2EE based applications and infrastructure.
- Proven experience using application servers such as WebSphere, WebLogic, JBoss, and Tomcat.
- Good understanding of RDBMS, including writing queries and stored procedures using Oracle …, MS SQL Server, AS400, and DB2.
- Strong experience in Unix scripting and running Python and Scala scripts.
- Good working knowledge of reporting tools such as Talend and Tableau.
- Implemented database-driven applications in Java using JDBC, JSON, and XML APIs and the Hibernate framework.
- Proficient in using GoF design patterns (creational, structural, and behavioral) and J2EE design patterns such as MVC, Singleton, Front Controller, Business Delegate, Service Locator, DAO, VO, DTO, etc.
- Involved in software life cycle phases under Agile and Waterfall, estimating project timelines.
- Experience in installation, configuration, and re-configuration of Oracle Database on Red Hat Linux AS/ES 4/5 and Windows Advanced Server, including RAC environments.
- Strong knowledge of version control systems like SVN, CVS & GIT.
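Illustrative sketch of the Hive partitioning and bucketing approach mentioned above, assuming a Spark HiveContext; the table and column names (txn_raw, txn_curated, customer_id, txn_date) are hypothetical and not taken from any specific engagement.

```scala
// Sketch only: partition a curated table by date and bucket it by customer_id so
// queries that filter on txn_date or join on customer_id scan less data.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object HivePartitionBucketSketch {
  def main(args: Array[String]): Unit = {
    val sc   = new SparkContext(new SparkConf().setAppName("HivePartitionBucketSketch"))
    val hive = new HiveContext(sc)

    hive.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
    hive.sql("SET hive.enforce.bucketing=true")

    // Partitioned + bucketed target table (hypothetical names)
    hive.sql(
      """CREATE TABLE IF NOT EXISTS txn_curated (
        |  customer_id STRING, amount DOUBLE)
        |PARTITIONED BY (txn_date STRING)
        |CLUSTERED BY (customer_id) INTO 32 BUCKETS
        |STORED AS ORC""".stripMargin)

    // Load from a raw staging table, letting Hive route rows into partitions and buckets
    hive.sql(
      """INSERT OVERWRITE TABLE txn_curated PARTITION (txn_date)
        |SELECT customer_id, amount, txn_date FROM txn_raw""".stripMargin)
  }
}
```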
TECHNICAL SKILLS
Big Data Skillset - Frameworks & Environments: Cloudera CDH, Hortonworks HDP, Hadoop 1.0, Hadoop 2.0, HDFS, MapReduce, Pig, Hive, Impala, HBase, Data Lake, Cassandra, MongoDB, Mahout, Sqoop, Oozie, ZooKeeper, Flume, Splunk, Spark, Storm, Kafka, YARN, Falcon, Avro.
Amazon Web Services (AWS): Elastic MapReduce (EMR), Amazon EC2, Amazon S3, AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline, Amazon CloudFront, AWS Import/Export.
Java & J2EE Technologies: Core Java (Java 8 & JavaFX), Hibernate framework, Spring framework, JSP, Servlets, JavaBeans, JDBC, EJB 3.0, JSON, Java sockets, JavaScript, jQuery, JSF, PrimeFaces, XML, … Servlets, EJB, JDBC, HTML, XHTML, CSS, SOAP, XSLT, DHTML; Messaging Services: JMS, MQ Series, MDB; J2EE MVC Frameworks: Struts …, Struts 2.1, Spring 3.2, MVC, Spring Web Flow, AJAX.
IDE Tools: Eclipse, NetBeans, Spring Tool Suite, Hue (Cloudera specific).
Web Services & Technologies: XML, HTML, XHTML, JNDI, HTML5, AJAX, jQuery, CSS, JavaScript, AngularJS, VBScript, WSDL, SOAP, JDBC, ODBC; Architectures: REST, MVC.
Databases & Application Servers: Oracle, MySQL, DB2, Cassandra, MongoDB, HBase; Database Technologies: MySQL, Oracle 8i, 9i, 11i & 10g, MS Access, Teradata, Microsoft SQL Server 2000, DB2 8.x/9.x, PostgreSQL.
Other Tools: PuTTY, WinSCP, Data Lake, Talend, Tableau, GitHub, SVN, CVS.
PROFESSIONAL EXPERIENCE
Big Data Developer, Technical Lead
Confidential, Phoenix, AZ
Responsibilities:
- Implemented several scheduled Spark, Hive, and MapReduce jobs on the MapR Hadoop distribution.
- Responsible for achieving deliverables as Technical Lead for GCP & DQM (Global Corporate Payments on-cop access and Data Quality Management) for Confidential.
- Deployed several process-oriented scheduled jobs through crontabs and event engines, using wrapper scripts to invoke the Spark module.
- Developed various main and service classes in Scala using Spark SQL for requirement-specific tasks.
- Strong familiarity with Hive joins; used HQL for querying the databases, eventually leading to complex Hive UDFs.
- Implemented real-time data ingestion using Kafka.
- Expert knowledge of handling Unix environments, such as changing file and group permissions, and strong ability to work from the command-line interface.
- Involved in building runnable JARs for the module framework through Maven clean and Maven dependency management.
- Involved in Data Validation and fixing discrepancies by working in coordination with the Data Integration and Infra Teams.
- Implemented DStreams over resilient distributed datasets (RDDs) with various window operations while simultaneously updating log files for the streams (see the sketch after this list).
- Extensive experience in Spark Streaming (version 1.5.2) through the core Spark API, using Scala and Java to transform raw data from several data sources into baseline data.
- Hands-on expertise in running Spark and Spark SQL.
- Implemented Spark batch jobs.
- Developed tasks and set up the required environment for running Hadoop in the cloud on various instances.
- Developed Hive (version 0.11.0.2) and Impala (2.1.0 & 1.3.1) queries for end-user/analyst requirements to perform ad hoc analysis.
- Performed unit and integration testing with sample test cases, assisted the QA team, and addressed several performance issues according to business unit requirements.
- Proven expertise in handling exception scenarios and errored feed data in coordination with the Data Architects, Data Integration team, business partners, and stakeholders.
- Participated in regular stand-up meetings, status calls, and business owner meetings with stakeholders and risk management teams in an Agile environment.
- Developed a data pipeline using Kafka and Storm to store data into HDFS.
- Strong understanding of the high-level architecture of the business logic, decomposing complex modules into simple, achievable tasks for efficient development.
- Approachable in handling issues raised against the team, acting as a strong team player while also shouldering individual responsibility whenever it matters.
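A minimal sketch of the Kafka ingestion and windowed DStream processing described above, assuming Spark Streaming 1.5.x with the Kafka direct-stream integration; the broker address, topic name, HDFS paths, and window sizes are illustrative assumptions only.

```scala
// Sketch only: consume events from Kafka, process them in a 60-second sliding window,
// persist each window to HDFS, and log a line per batch.
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaFeedStreamingSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaFeedStreamingSketch")
    val ssc  = new StreamingContext(conf, Seconds(10))        // 10-second micro-batches
    ssc.checkpoint("hdfs:///tmp/streaming-checkpoint")        // optional checkpoint dir for recovery

    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")   // hypothetical broker
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("feed-topic"))                            // hypothetical topic

    // 60-second window over the raw feed, evaluated every 20 seconds
    val windowed = stream.map(_._2).window(Seconds(60), Seconds(20))

    windowed.foreachRDD { rdd =>
      // persist the raw window to HDFS and emit a simple log line per batch
      if (!rdd.isEmpty()) rdd.saveAsTextFile(s"hdfs:///data/feed/${System.currentTimeMillis}")
      println(s"processed ${rdd.count()} records in this window")
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```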
Environment: MapR Hadoop Distribution (M3 & M5), Hive, Scala, HBase, Sqoop, Maven builds, Spark, Kafka, Spark SQL, Oozie, Linux/Unix, SVN, Talend.
Hadoop Developer
Confidential, Houston, TX
Responsibilities:
- Worked on a live 60-node Hadoop cluster running CDH 5.4.4, CDH 5.2.0, and CDH 5.2.1.
- Responsible for migrating the code base from the Cloudera platform to other ecosystem components such as Redshift and DynamoDB.
- Involved in migrating MapReduce jobs to Spark jobs and used the Spark SQL and DataFrames API to load structured and semi-structured data into Spark clusters (see the sketch after this list).
- Involved in the requirements and design phases to implement a streaming Lambda architecture for real-time streaming using Spark and Kafka.
- Worked with highly unstructured and semi-structured data of 90 TB (270 TB with a replication factor of 3).
- Extracted data from Teradata into HDFS, databases, and dashboards using Spark Streaming.
- Worked on the Spark engine, creating batch jobs with incremental load through Storm, Kafka, Splunk, Flume, HDFS/S3, Kinesis, sockets, etc.
- Implemented DStreams over resilient distributed datasets (RDDs) with various window operations while simultaneously updating log files for the streams.
- Extensive experience in Spark Streaming (version 1.5.2) through the core Spark API, using Python, Scala, and Java scripts to transform raw data from several data sources into baseline data.
- Hands-on expertise in running Spark and Spark SQL.
- Implemented Spark batch jobs.
- Developed tasks and set up the required environment for running Hadoop in the cloud on various instances.
- Developed Hive (version 0.11.0.2) and Impala (2.1.0 & 1.3.1) queries for end-user/analyst requirements to perform ad hoc analysis.
- Implemented various MapReduce jobs in custom environments and updated HBase tables by generating Hive queries.
- Performed Sqoop-based file transfers through HBase tables for processing data into several NoSQL databases.
- Involved in developing Hive UDFs and reused them in other requirements.
- Worked on performing Join operations.
- Involved in creating partitioning on external tables.
- Good hands-on experience in writing HQL statements per user requirements.
- Implemented the Cassandra connector for Spark in Java.
- Implemented Cassandra connections with resilient distributed datasets (local and cloud).
- Data visualization and reporting were performed through Tableau and Talend.
- Very good understanding of partitioning and bucketing concepts in Hive; designed both managed and external tables in Hive to optimize performance.
- Solved performance issues in Hive and Pig scripts with an understanding of joins, grouping, and aggregation and how they translate to MapReduce jobs.
- Developed UDFs in Scala, Java, and Python as necessary for use in Pig and Hive queries.
- Developed Oozie workflows for scheduling and orchestrating the ETL process.
- Implemented authentication using Kerberos and authorization using Apache Sentry.
- Worked with the admin team in designing and upgrading CDH 4 to CDH 5
- Good working knowledge of Amazon Web Services components such as EC2, EMR, and S3.
- Examined job failures and performed troubleshooting.
- Very good experience in monitoring and managing the Hadoop cluster using Cloudera Manager.
- Good working knowledge of HBase.
- Good working knowledge of Talend and Tableau.
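A minimal sketch of loading semi-structured JSON through the Spark SQL / DataFrames API referenced above, written against the Spark 1.x SQLContext API; the paths, table name, and column names are hypothetical.

```scala
// Sketch only: infer a schema from semi-structured JSON, expose it to SQL, and write
// an aggregate back out as Parquet for downstream consumers.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object JsonToDataFrameSketch {
  def main(args: Array[String]): Unit = {
    val sc  = new SparkContext(new SparkConf().setAppName("JsonToDataFrameSketch"))
    val sql = new SQLContext(sc)

    // Schema is inferred from the semi-structured JSON records
    val events = sql.read.json("hdfs:///data/raw/events/*.json")
    events.printSchema()

    events.registerTempTable("events")          // Spark 1.x temp-table API
    val counts = sql.sql(
      "SELECT event_type, COUNT(*) AS cnt FROM events GROUP BY event_type")

    counts.write.mode("overwrite").parquet("hdfs:///data/curated/event_counts")
  }
}
```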
Environment: Cloudera CDH4, Teradata, Amazon Web Services, HBase, CDH5, Scala, Python, Java scripts, Hive, Sqoop, Splunk, Storm, Spark, Kafka, Flume, Avro, Oozie, CentOS, Ambari, Oracle, SVN, Kafka, Data Lake, GitHub, Java 1.7.x, JIRA, Talend.
Hadoop Developer
Confidential, Boston
Responsibilities:
- Determining the viability of a business problem for a Big Data solution.
- Worked on a 42-node Hortonworks Data Platform Hadoop cluster running the MapR distribution.
- Worked with highly unstructured and semi-structured data sets of 120 TB in size.
- Extracted data from RDBMS into HDFS using Sqoop.
- Created and operated Sqoop (version 1.4.3) jobs with incremental load to populate Hive external tables.
- Set up nodes for performing MapReduce jobs.
- Extensive experience in writing Pig (version 0.11) scripts to transform raw data from several data sources into baseline data.
- Developed Hive (version 0.10) scripts for end-user/analyst requirements to perform predictive analysis.
- Implemented data access jobs through Pig, Hive, HBase (0.98.0), and Storm (0.9.1).
- Solved performance issues in Hive and Pig scripts with an understanding of joins, grouping, and aggregation and how they translate to MapReduce jobs.
- Developed Oozie workflows for scheduling and orchestrating the process.
- Implemented authentication for NoSQL databases through MongoDB and Cassandra connector scripts to enable the scripts to run.
- Very good experience with both MapReduce 1 (JobTracker) and MapReduce 2 (YARN) setups.
- Involved in selecting the right products to implement a Big Data solution.
- Set up and installed a Hadoop MR1 cluster and an enterprise data warehouse.
- Implemented analytics using Pig and Hive.
- Planned and managed HDFS storage capacity; advised the team on tool selection, best practices, and optimal processes using Sqoop, Flume, Splunk, Pig, Oozie, Hive, and Bash/Python scripting.
- Facilitated access to large data sets utilizing Pig/Hive on the Hadoop ecosystem.
- Fetched HQL results into CSV files and handed them over to the reporting team.
- Good hands-on experience in working with Hive complex data types.
- Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs, Scala, and Python (see the sketch after this list), with good experience using the Spark shell and Spark Streaming.
- Developed Spark code using Scala and Python with Spark SQL for faster testing and data processing.
- Imported millions of structured records from relational databases using Sqoop, processed them with Spark, and stored the data in HDFS in CSV format.
- Used Spark SQL to process large amounts of structured data.
- Implemented Spark RDD transformations and actions to migrate MapReduce algorithms.
- Developed mappings/workflows to transfer data between Oracle tables and HDFS in both directions using the Hadoop HDFS connection.
- Developed mappings using the data processor transformation to load data from Word and PDF documents into HDFS.
- Used a parser to load structured and unstructured data.
- Resolved context-based scenarios for faster throughput.
- Good working knowledge of various NoSQL databases such as Cassandra, HBase, and MongoDB.
- User-defined functionality of the MongoDB and Cassandra architectures.
- Hands-on expertise with various architectures in MongoDB and Cassandra.
- Very good experience in monitoring and managing the Hadoop cluster using Hortonworks.
- Good working knowledge of Cassandra architecture.
- Reporting expertise through Talend.
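A minimal sketch of converting a Hive-style aggregation into Spark RDD transformations, as referenced above; the input file layout, column positions, and paths are assumptions for illustration.

```scala
// Sketch only: the RDD equivalent of
//   SELECT category, SUM(amount) FROM orders GROUP BY category
import org.apache.spark.{SparkConf, SparkContext}

object HiveQueryToRddSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("HiveQueryToRddSketch"))

    val orders = sc.textFile("hdfs:///data/orders/part-*")    // CSV exported by Sqoop (hypothetical)

    val totalsByCategory = orders
      .map(_.split(","))                                      // split CSV columns
      .filter(_.length >= 3)                                  // drop malformed rows
      .map(cols => (cols(1), cols(2).toDouble))               // (category, amount)
      .reduceByKey(_ + _)                                     // SUM(amount) GROUP BY category

    totalsByCategory.saveAsTextFile("hdfs:///data/order_totals_by_category")
    sc.stop()
  }
}
```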
Environment: MapR Data Platform, GitHub, Cassandra, MongoDB, HBase, Unix, Pig, Hive, HBase, Storm, Maven, Tableau, RDBMS, MS SQL, Visual Studio, Talend.
Hadoop Developer
Confidential, Bloomington, IL.
Responsibilities:
- Installed and configured Hadoop MapReduce, HDFS, developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
- Imported and exported data between an Oracle database and HDFS using Sqoop.
- Experience in installing and configuring Hadoop clusters for major Hadoop distributions.
- Experience in using Hive and Pig as ETL tools for event joins, filters, transformations, and pre-aggregations.
- Created partitions and bucketing by state in Hive to handle structured data.
- Implemented dashboards that handle HiveQL queries internally, such as aggregation functions, basic Hive operations, and different kinds of join operations.
- Implemented state-based business logic in Hive using generic UDFs.
- Developed workflows in Oozie to orchestrate a series of Pig scripts to cleanse data, such as removing personal information or merging many small files into a handful of large, compressed files, using Pig pipelines in the data preparation stage.
- Experienced in writing MapReduce programs for analyzing data per business requirements.
- Used Pig for three distinct workloads: pipelines, iterative processing, and research.
- Involved in moving all log files generated from various sources to HDFS for further processing through Kafka, Flume, and Splunk, and processed the files using PiggyBank.
- Extensively used Pig to communicate with Hive using HCatalog and with HBase using handlers.
- Implemented MapReduce jobs to write data into Avro format.
- Created Hive tables to store the processed results in a tabular format.
- Used Spark SQL through the Scala and Python interfaces, which automatically convert RDDs of case classes into schema RDDs.
- Used Spark SQL to read and write tables stored in Hive (see the sketch after this list).
- Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the BI team.
- Implemented various MapReduce jobs in custom environments and updated HBase tables by generating Hive queries.
- Performed Sqoop-based file transfers through HBase tables for processing data into several NoSQL databases, including Cassandra and MongoDB.
- Involved in developing Hive UDFs and reused them in other requirements.
- Worked on performing join operations.
- Involved in creating partitioning on external tables.
- Experience in using the Pentaho Data Integration tool for data integration, OLAP analysis, and ETL processes.
- Used Pig, Hive, and MapReduce on Hadoop to analyze health insurance data and transform it into meaningful data sets covering medicines, diseases, symptoms, opinions, geographic region detail, etc.
- Worked on data analytics using Pig and Hive on Hadoop.
- Evaluated Oozie for workflow orchestration in the automation of MapReduce jobs, Pig and Hive jobs.
- Created tables, secondary indexes, join indexes, and views in the Teradata development environment for testing.
- Assisted in batch processes using FastLoad, BTEQ, UNIX shell, and Teradata SQL to transfer, clean up, and summarize data.
- Developed MLOAD scripts to load data from load-ready files into the Teradata warehouse.
- Extracted files from MongoDB through Sqoop, placed them in HDFS, and processed them.
- Captured data logs from the web server into HDFS using Flume and Splunk for analysis.
- Experienced in writing Pig scripts and Pig UDFs to pre-process the data for analysis.
- Experienced in managing and reviewing Hadoop log files.
- Built the front end using JSP, JSON, Servlets, HTML, and JavaScript to create a user-friendly and appealing interface.
- Reporting expertise through Talend.
- Used JSTL and built custom tags whenever necessary.
- Used Expression Language to tie beans to UI components.
- Gained very good business knowledge on health insurance, claim processing, fraud suspect identification, appeals process etc.
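A minimal sketch of reading and writing Hive tables through Spark SQL and converting an RDD of case classes into a queryable table, as referenced above; the Hive table, case class, and column names are hypothetical.

```scala
// Sketch only: read from Hive, join with an RDD-derived table, and write the result
// back into Hive via the Spark 1.x HiveContext API.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

case class Claim(claimId: String, memberId: String, amount: Double)

object SparkSqlHiveSketch {
  def main(args: Array[String]): Unit = {
    val sc   = new SparkContext(new SparkConf().setAppName("SparkSqlHiveSketch"))
    val hive = new HiveContext(sc)
    import hive.implicits._

    // Read an existing Hive table (hypothetical name)
    hive.sql("SELECT COUNT(*) FROM member_dim").show()

    // RDD of case classes -> DataFrame (SchemaRDD in older releases) via reflection
    val claims = sc.textFile("hdfs:///data/claims.csv")
      .map(_.split(","))
      .map(c => Claim(c(0), c(1), c(2).toDouble))
      .toDF()
    claims.registerTempTable("claims")

    // Join the temp table with the Hive table and write the result back to Hive
    val byState = hive.sql(
      """SELECT m.state, SUM(c.amount) AS total_amount
        |FROM claims c JOIN member_dim m ON c.memberId = m.member_id
        |GROUP BY m.state""".stripMargin)
    byState.write.mode("overwrite").saveAsTable("claims_by_state")
  }
}
```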
Environment: Hive, Pig, MapReduce, Spark, Avro, Sqoop, Oozie, Flume, Teradata, Kafka, Storm, HBase, Unix, Python, SQL, Hadoop 1.x, HDFS, Talend, GitHub, Java, Splunk, Linux, UNIX shell & Python scripting.
Java Developer
Confidential, NYC
Responsibilities:
- Involved in design, development, and analysis documents shared with clients.
- Responsible for the Requirement Analysis and Design of Smart Systems Pro (SSP)
- Involved in Object Oriented Design (OOD) and Analysis (OOA).
- Analyzed and designed object models using Java/J2EE design patterns in various tiers of the application.
- Worked with RESTful web services and WSDL.
- Worked with Jenkins and the Maven build tool to build the project.
- Coded JavaScript for UI validation and worked on Struts validation frameworks.
- Analyzed client requirements and designed the specification document based on them.
- Worked on implementing directives and scope values using AngularJS and JSON for an existing web page.
- Familiar with state-of-the-art standards and design processes used in creating optimal UIs with Web 2.0 technologies such as AngularJS, Node.js, Ajax, JavaScript, CSS, and XSLT.
- Involved in the preparation of the program specification and unit test case documents.
- Designed the prototype according to the business requirements.
- Used application servers such as WebLogic, WebSphere, Apache Tomcat, GlassFish, and JBoss based on client requirements and project specifications.
- Involved in mapping all configuration files according to the JSF framework.
- Wrote SQL, PL/SQL, and stored procedures as part of database interaction.
- Testing and production support of a core Java-based multithreaded ETL tool for distributed loading of XML data into an Oracle 11g database using JPA/Hibernate.
- Used Hibernate framework and Spring JDBC framework modules for backend communication in the extended application.
- Developed Presentation Layer using HTML, CSS, and JSP and validated the data using AJAX and Ext JS and JavaScript.
- Involved in the development of Database Connections and Database Operations using JDBC.
- Involved in writing SQL queries and stored procedures.
- Defined and developed Action and Model classes.
- Wrote ActionForm and Action classes, used various HTML, Bean, and Logic tags, and configured struts-config.xml for global forwards, error forwards, and action forwards.
- Developed the UI using JSP, JSON, and Servlets, with server-side code in Java.
- Developed custom tags to display dynamic content and avoid large amounts of Java code in JSP pages.
- Designed and developed GUIs for a good user experience (including design of JSPs and Swing/Applets).
- Used the JavaMail API to send email notifications to users.
- Worked on database design and implementation (Oracle).
- Prepared checklist and guidelines documentation.
- Developed Maven build scripts using Jenkins and was involved in deploying the application on WebSphere.
- Created WSDLs as per wire frames, UI pages & generated client jars using JAX-WS.
- Used Apache CXF to create SOAP-based and RESTful web services.
- Designed and developed various stored procedures, functions and triggers in PL/SQL to implement business rules.
- Used SVN as version control repository.
Environment: Java/J2EE, JSP, JSON, Servlets, EJB, XML, XSLT, Struts, Rational Rose, Apache Struts Framework, Web Services, DB2, Beyond Compare, AngularJS, Node.js, GitHub, CVS, IBM WebSphere Studio Enterprise Developer, JUnit, Log4j, Windows XP, Red Hat Linux.
Associate Java Developer
Confidential, IL
Responsibilities:
- Involved in design, development, and analysis documents shared with clients.
- Developed web pages using the Struts framework, JSP, XML, JavaScript, HTML/DHTML, and CSS; configured the Struts application and used tag libraries.
- Developed applications using Spring, Hibernate, Spring Batch, and web services (SOAP and RESTful).
- Used the Spring Framework in the business tier and Spring's BeanFactory for initializing services.
- Used AJAX and JavaScript to create an interactive user interface.
- Implemented client-side validations using JavaScript as well as server-side validations.
- Developed a single-page application using AngularJS and Backbone.js.
- Developed the app using the Front Controller, Business Delegate, DAO, and Session Facade patterns.
- Implemented Hibernate to persist the data into Database and wrote HQL based queries to implement CRUD operations on the data.
- Used Hibernate annotations and created Hibernate POJOs.
- Developed various client-requirement-based AngularJS and Node.js pages.
- Developed Web Services to communicate to other modules using XML based SOAP and WSDL.
- Designed and implemented (SOA, SOAP) next generation system on distributed platform.
- Designed and developed most of the application's GUI screens using GWT framework.
- Used JAXP for XML parsing and JAXB for marshalling and unmarshalling.
- Followed a top-down approach to implement SOAP-based web services and used Axis commands to generate artifacts from the WSDL file.
- Expert knowledge of AngularJS and Node.js.
- Used SOAP-UI to test the Web Services using WSDL.
- Developed Schema/Component Template/Template Building Block components in SDL Tridion.
- Developed code using SQL and PL/SQL: queries, joins, views, procedures/functions, triggers, and packages.
- Involved in using Spring concepts: DI/IoC, AOP, Batch implementation, and Spring MVC.
- Involved in analyzing the DB schema for the new design in DB2, migrated from Oracle.
- Used the Dojo toolkit for UI development and for sending asynchronous AJAX requests to the server.
- Utilized Dojo in a proof of concept to transform an applet application into a web application.
- Used UNIX and Python/Scala scripting for coding and decoding.
- Used the Spring JdbcTemplate for database persistence.
- Provided security for REST using SSL and for SOAP using encryption with X.509 digital signatures.
- Involved in creating JUnit test cases and ran the test suite using the EMMA tool.
- Ran Checkstyle, PMD, and FindBugs and fixed the reported defects.
- Used CVS as source control for code changes.
Environment: Java/J2EE, JSP, JSON, Servlets, EJB, XML, XSLT, Struts, Rational Rose, Apache Struts Framework, Web Services, DB2, Beyond Compare, AngularJS, Node.js, CVS, IBM WebSphere Studio Enterprise Developer, JUnit, Log4j, Windows XP, Red Hat Linux.