Senior Software Engineer Resume

Temple Terrace, FL

SUMMARY

  • 11 years of relevant Java, JEE/J2EE experience architecting, designing, and developing web-based, distributed, object-oriented, enterprise-level, scalable, mission-critical applications.
  • 18 years of SDLC (software development life cycle) experience using iterative/incremental and sequential processes/methodologies such as Agile and Waterfall.
  • ESRI ArcGIS Specialist.
  • Specialized in GIS (Geographic Information Systems) applications and Geospatial technology.
  • 2+ years of extensive, hands-on experience with the ESRI JavaScript API, ArcMap, ArcCatalog, ArcGIS Server, and vector and raster publishing. Well versed with geodatabases; importing and exporting kml/kmz/shape/dgn/csv/xlsx/xls formats; working with “mxd” files and their REST endpoints.
  • Collected and analyzed new requirements from BAs (Business Analysts), transforming them into OO designs using visual UML models developed with modeling tools such as Oracle JDeveloper, NetBeans IDE, IBM Rational Rose Enterprise, EMF (Eclipse Modeling Framework), and ArgoUML.
  • Performed round-trip engineering (forward and reverse) with the above modeling tools, generating code and updating models to keep them in sync with the changing source code.
  • Thorough knowledge and understanding of OO principles and concepts, and of applying them to system design.
  • Have good understanding of GoF/Java/JSE and J2EE Design Patterns.
  • Very good hold on applying patterns such as MVC, Singleton, DAO, Business Delegate, Factory, Observer, Decorator, Front Controller, Chain of Responsibility, Adapter, etc., based on the need, the requirement, and the respective tier/layer of the application.
  • Proficient hands-on working experience developing microservices on a DevOps model, and deploying and managing them using Spring Boot consoles.
  • Hands-on development experience in RESTful web services, using both the Spring REST API and the JAX-RS API (Jersey implementation).
  • Hands-on development experience in SOAP web services, using both the JAX-WS API and the Apache Axis2 implementation.
  • Implemented web services security using OAuth 2.0/OAuth 1.0, TLS/SSL, SecurityContext, the JAX-RS Client API, and SAML.
  • Took the Confidential corporate intensive training on the AWS on-demand cloud computing platform and its components, such as Amazon Elastic Compute Cloud (EC2), S3, Amazon EC2 Container Service (Amazon ECS), EMR for Hadoop, DynamoDB for NoSQL, ElastiCache for Memcached and Redis, RDS database servers, SWF workflows, Kinesis streaming, Redshift, SimpleDB, ELB, and VPC services, and have continued exploring and delving into the AWS web services.
  • Hands-on development experience in Rapid Application Development using Spring Boot.
  • Implemented several personal projects on Spring Cloud, Spring Boot, and microservices, utilizing nights and weekends.
  • Solved scalability issues using technologies/tools for large data and analytics, such as Big Data, Hadoop, Hive, MapReduce, Pig, NoSQL, MongoDB, Cassandra, Memcached, and Redis, and visualized the results with Tableau for visual business intelligence data analysis.
  • Used the open-source distributed search engine ElasticSearch, built on top of Lucene (like Apache Solr), supporting real-time indexing and full-text search; it is highly scalable and supports a good number of enterprise search use cases.
  • Hands-on experience in developing web/enterprise applications using Spring ( Confidential modules, Batch), the Struts framework, EJB, and the various UI-stack technologies JSP, Servlet, JavaScript, Angular JS, jQuery, HTML5, CSS3, AJAX, DOJO, JSON, and GWT (for the presentation layer) on Weblogic, JBoss, Websphere (app servers) and Tomcat (web server).
  • Used the Hibernate ORM with JPA (Java Persistence API) annotations, HQL (Hibernate Query Language), and native SQL for the persistence layer (see the sketch after this list).
  • Used the Hibernate Spatial ORM for mapping to an Oracle Spatial database.
  • Expertise in designing, developing, and applying SOA for EAI, using the Oracle SOA Suite implementation and its components such as Mediator, ESB, OSB, and Complex Event Processing.
  • Used SCM (Source Code Management) repositories and version/revision control tools such as Stash, GitHub, GitHub Enterprise, Git, SVN, and CVS in various projects.
  • Good skills in the configuration and usage of CI (Continuous Integration) and build tools such as Jenkins, Cruise Control, Maven, and Ant.
  • Performed code reviews, code quality checks, and test code coverage with the help of tools such as SonarQube, Clover, and Cobertura.
  • Proficient working experience with MOM (Message-Oriented Middleware) and the JMS specification; used the Apache Camel “Enterprise Integration Patterns”, ActiveMQ, Kafka, RabbitMQ, JEE 7 (GlassFish), and MQSeries implementations.
  • Used Oracle, MySQL, and MS-SQL Server databases and the H2 and HSQLDB in-memory databases. Involved in the design of relational models (logical data models), ER (entity-relationship) models (logical, physical), and ER diagrams. Applied normalization techniques to minimize data redundancy.
  • Used the Oracle BPM Suite for developing business-process-model-oriented applications involving workflow and business rules, with BPM runtime components such as the BPM engine, Human Workflow, Business Rules, and Enterprise Manager. Also experienced in the JBoss BPM product, JBPM/Workflow, and the Drools engine.
  • With software tools such as Oracle JDeveloper, NetBeans IDE (Visual Paradigm plugin), and IBM Rational Rose Enterprise and their support for round-trip engineering, transformed object models into data models and vice versa, and generated the application logic code.
  • Well versed in writing complex SQL, PL/SQL, stored procedures, functions, triggers, cursors, indexes.
  • Good at test-driven development using the Spring Test module and the JUnit, JsUnit, TestNG, and Mockito frameworks.
  • Domain experience in Telecom, Health Care, Home Mortgage, Financial services, Automobile Insurance, Securities, Education, HRMS, Retail Industry, General Insurance, Family welfare, Power & Utility, Timeshare business and Pharmaceutical industry.
  • Strong interpersonal and communication skills; able to lead, mentor, and guide a technical development team independently, or perform a senior team member role as an individual contributor.
  • As a Lead/Senior Developer, helped co-developers understand the design and intended outcome of the product, gave solutions to blockers during coding and testing, and trained them from time to time to bring them up to speed. Filled the gap and acted as a bridge between the Project Manager and co-developers.
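
Below is a minimal, illustrative sketch of the Hibernate-with-JPA-annotations and HQL usage summarized above. The FiberCable entity, table, and DAO are hypothetical names, not code from any project here, and Hibernate 5.2+ is assumed.

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Table;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    @Entity
    @Table(name = "fiber_cable")
    public class FiberCable {
        @Id
        private Long id;
        private String status;
        // getters/setters omitted for brevity
    }

    class FiberCableDao {
        private final SessionFactory factory =
                new Configuration().configure().buildSessionFactory();

        @SuppressWarnings("unchecked")
        List<FiberCable> findByStatus(String status) {
            try (Session session = factory.openSession()) {
                // HQL queries the mapped entity, not the table directly
                return session.createQuery("from FiberCable f where f.status = :s")
                              .setParameter("s", status)
                              .list();
            }
        }
    }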

TECHNICAL SKILLS

Specification/Standard: REST, SOA, Jersey OAuth, OASIS WSS, JMS, EJB, JPA, XMI

Frameworks: Spring (Spring Boot RAD, Batch, AOP, MVC, Data access, Transaction management, Test), Guice, Struts, JUnit, JsUnit, TestNG, Mockito, log4j

Web Service Stack: RESTful (Spring RESTful API, JAX-RS Jersey), SOAP (JAX-WS), Apache Axis2

Cloud Platforms: AWS Cloud, Google Cloud, Spring Cloud, Cloud Foundry

Distributed Config Service: Apache Zookeeper

Microservices Platform: fabric8

Containers/Images: Docker, Amazon EC2 Container Service (Amazon ECS), Google Container Engine with Kubernetes, Apache Karaf

Microservices Registration: Google Container Registry, Kubernetes Service Discovery

Middleware Stack: Microservices on DevOps model, SOA, JBoss Fuse ESB, Apache ServiceMix, Oracle SOA Suite, Mediator, ESB, OSB, Mule ESB, MOM, JMS, JEE 7 (GlassFish), Apache Camel “Enterprise Integration Patterns”, ActiveMQ, Kafka, RabbitMQ, MQSeries, Pub/Sub, P2P, Topic, Queue

Geospatial Stack: ESRI JavaScript API, ArcMap, ArcCatalog, ArcGIS Server

Security Stack: Jersey OAuth (OAuth 2.0/OAuth 1.0), TLS/SSL, SecurityContext, JAX-RS Client API, SAML, WSS, JEE 7 (Application/Transport/Message Layer Security), Java 8 Authenticator, JSE (JAAS/Java GSS-API/JCE/JSSE)

Process/Methodologies: Agile (SCRUM), Waterfall

Scalability Stack: Big Data, Hadoop, Hive, MapReduce, Pig, NoSQL, MongoDB, Cassandra, Memcached, Redis

Distributed Search Engine: ElasticSearch

Visual BI Data Analysis: Tableau

Build/CI Tools: Jenkins CI, Cruise Control CI, Maven, Ant

SCM/Version Control: Stash, GitHub, GitHub Enterprise, Git, SVN, CVS, Perforce, CM Synergy

Code Quality/Coverage: SonarQube, Clover, Cobertura

DevOps Tools: Stash, JIRA

ORM Tools: Hibernate, Hibernate Spatial

Platforms/Languages: JEE, J2EE, Java, JavaScript, Angular JS, jQuery, HTML, CSS, XML, XSLT, WSDL, SQL, PL/SQL, HQL

UI Stack: JSP, Servlet, AJAX, DOJO, JSON, GWT

App/Web Servers: Weblogic, JBoss, Websphere, GlassFish, Tomcat

Databases: Oracle, Oracle Spatial, MySQL, MS-SQL Server

In-memory databases: H2, HSQLDB

Database Interfaces: Oracle SQL Developer, TOAD

BPM Stack: Oracle BPM Suite, Human Workflow, Oracle Business Rules, BPM/BPMN/BPEL/Rules Engines, JBoss JBPM/Workflow/Drools Engine

Design Patterns/Principles: GoF/Java/JSE, J2EE Patterns, OO Design Principles/Concepts

Modeling Language/Tools: UML, Oracle JDeveloper, NetBeans IDE (Visual Paradigm plugin), IBM Rational Rose Enterprise, EMF (Eclipse Modeling Framework), ATL, Acceleo, ArgoUML

Schedulers: Cron, Quartz, Cisco Tidal

IDE/Other Tools: Eclipse, NetBeans IDE, Oracle JDeveloper, JBoss developer studio, SoapUI, REST Client, Fiddler, QC, Rally, Putty, WinSCP

Protocols: SOAP, HTTP, FTP, SMTP, TCP/IP, IIOP

OS: Windows, Linux, HP UX

PROFESSIONAL EXPERIENCE

Senior Software Engineer

Confidential, Temple Terrace, FL

Responsibilities:

  • Confidential is a geospatial GIS, web-based, distributed, object-oriented, scalable application. It is an OSP (Outside Plant) engineering application used by the network engineers of Confidential Telecom and Confidential Business, who plan, design, and draw/route the fiber network involving various telecom components such as fiber cable, manholes, and underground spans. The application provides various modules such as Attribute View, Attribute Editor, Data View, Fiber Availability, Trace Network, Connection Editor, Association Editor, Special Construction, Splice Report, and Trace Report. The telecom components used to design the network are published as map service/feature service layers, accessible through a REST endpoint; the ESRI JavaScript API, ArcMap, ArcCatalog, and ArcGIS Server are used for this. Microsoft Bing Maps is used as the base map and the layers are added on top of it. Engineers create a Work Order (WO) to design their network, and it is versioned. Once the design is complete, the WO is transitioned to Inventory Update Complete status. Once the WO is implemented, the engineer reconciles/closes the WO and the WO is versioned to “SDE.DEFAULT”.
  • I have been the Lead/Senior Software Engineer for Confidential for the past 2+ years, responsible for the design, development, and testing of most of the modules of the Fiber Inventory Management ( Confidential ) application, including the frontend UI design, the business logic implementation, and the database interaction through the persistence layer.
  • The business problem we addressed with the development of Confidential is reducing the burden on OSP network engineers. Confidential helps them drastically reduce the time taken for their network routing and design, and makes it easy to identify the existing available network visually on the Bing map and fill the required gaps.
  • I loved the fast pace of the job and the ability to be innovative.
  • I asked my Technical Project Manager at Confidential to consider microservices SOA on a DevOps model, with CI/CD, to expose the business logic/rules of Confidential / Confidential .
  • I explained the two main benefits that software teams receive by implementing microservices SOA: agility and resilience. I also briefed my manager that microservices encourage software developers to decouple their software into smaller functional pieces that are expected to fail; done well, this provides increased agility and resilience. I was thus able to convince my manager to go for microservices SOA on a DevOps model, with CI/CD.
  • Used the Spring REST API (RestTemplate, @RestController, @RequestMapping, @RequestParam, etc.) to develop microservices as RESTful web services exposing the application business logic/rules (see the first sketch after this list).
  • Next, I explained the use of Docker containers, and how the benefits of microservice architectures are amplified when used in combination with Docker containers: the former encourages us to decouple our software into smaller functional pieces that are expected to fail, bringing agility and resilience, while the latter decouples our software from the underlying hardware, bringing portability and speed not seen before in VM-based solutions.
  • I also briefed the team on the portability of containers, which makes deployment of microservices a breeze: to push out a new version of a service running on a given host, the running container can simply be stopped and a new container started from a Docker image built with the latest version of the service code. All other containers running on the host are unaffected by this change.
  • Used Google Container Engine with Kubernetes to run the Docker containers.
  • Used Spring Boot to bootstrap our application and ran it in Google Container Engine.
  • Defined the Docker images for the app through Dockerfiles.
  • Used fabric8, an opinionated open-source microservices platform based on Docker, Kubernetes, and Jenkins; fabric8 makes it easy to create microservices, build, test, and deploy them via continuous delivery pipelines, and then run and manage them with continuous improvement and ChatOps.
  • Also utilized the out-of-the-box management facilities provided by fabric8, such as centralized logging and application metrics, ChatOps, and Chaos Monkey, along with deep management of Java containers using Hawtio and Jolokia.
  • Used the Maven plugin for fabric8. To build the Docker image with our Spring Boot artifact, ran the Maven command “mvn clean package docker:build”.
  • Understood and explained to the dev team that Docker images are built in layers: each command in the Dockerfile is another layer, and Docker works by caching these layers locally. The first time the Docker image is built, it takes longer, since all the layers are downloaded/built; on subsequent builds, the only layers that change are the one adding the new Spring Boot artifact and all commands after it. The layers before the Spring Boot artifact are unchanged, so their cached versions are used in the Docker build.
  • Ran the Spring Boot Docker image built in the earlier step using the "docker run" command, which starts the Docker container, echoes the ID of the started container, and brings up our app, bootstrapped by Spring Boot.
  • Used the "docker ps" command to view all running containers and the "docker stop" command to stop a running container.
  • Once the image worked as intended and was tagged with the $PROJECT_ID, I pushed it to the Google Container Registry, a private repository for Docker images accessible from every Google Cloud project (and also from outside the Google Cloud Platform), using the command "gcloud docker push gcr.io/$PROJECT_ID/hello-node:v1".
  • Everything went well, and I was able to see the container image listed in the console: Compute > Container Engine > Container Registry. We now had a project-wide Docker image available, which Kubernetes could access and orchestrate.
  • Services are implemented by one or more pods, for elasticity and resilience. In the cloud, pods can come and go when there are hardware failures or when pods get rescheduled onto different nodes to improve resource utilization.
  • In order to use a service, I dynamically discovered the pods implementing it, so that I could invoke it. This is called service discovery; the default way to discover the pods for a Kubernetes service is via DNS names.
  • The host and port values are fixed for the lifetime of the service, so we can resolve the environment variables on startup and we are all set; under the covers, Kubernetes load-balances over all the service endpoints for us (see the second sketch after this list).
  • Note that a pod can terminate at any time, so any network code should retry requests if a socket fails; Kubernetes then fails over to another pod for us.
  • Once Confidential bought the “Premier License” for the Amazon AWS on-demand cloud computing platform, I moved our microservices from Google Cloud and deployed them to AWS Cloud, so that I could take full advantage of AWS's out-of-the-box services.
  • Shared with my dev team that containers also help with the efficient utilization of resources on a host: if a given service isn't using all the resources on an Amazon EC2 instance, additional services can be launched in containers on that instance to make use of the idle resources. Of course, deploying services in containers, managing which services run on which hosts, and tracking the capacity utilization of all hosts running containers would quickly become unmanageable if done manually.
  • To automate this, I used the Amazon EC2 Container Service (Amazon ECS), which takes care of the problem for us with ease. With Amazon ECS, I defined a pool of compute resources called a "cluster", consisting of one or more Amazon EC2 instances. Amazon ECS manages the state of all container-based applications running in a cluster, provides telemetry and logging, and manages the capacity utilization of the cluster, allowing for efficient scheduling of work. Using the construct Amazon ECS provides called a "task definition", I defined the grouping of containers that comprise our application; each container in the task definition specifies the resources it requires, and Amazon ECS schedules the task for execution based on the available resources in the cluster (see the third sketch after this list).
  • I defined our application microservices as individual tasks. A task might consist of two containers: one running the service endpoint code, and another a database. Amazon ECS manages the dependencies between these containers, as well as the balancing of resources across the cluster. Using Amazon ECS, I also accessed important AWS services such as Elastic Load Balancing, Amazon EBS, Elastic Network Interfaces, and Auto Scaling, per our application's needs. I used all these essential Amazon ECS features, available to container-based applications, to deploy the microservices of our application using Amazon EC2 on the AWS cloud platform.
  • Utilized some of the best container management capabilities Amazon ECS provides through the implementation of "service discovery": because microservices are often deployed across multiple hosts, and often scale up and down based on load, service discovery is needed in order for one service to know how to locate other services.
  • In the simplest case, a load balancer addresses this issue. For our application, I used Apache ZooKeeper as a true distributed configuration service; the Amazon ECS API also makes it easy to integrate with ZooKeeper. I took advantage of Amazon ECS to manage the ZooKeeper cluster for our application: the containers comprising the ZooKeeper cluster were grouped using a task definition and scheduled for execution on the Amazon EC2 hosts in the cluster by the Amazon ECS service.
  • As a Lead/Senior Developer, helped co-developers understand the design and intended outcome of the product, gave solutions to the bottlenecks and blockers they faced while coding and testing, and mentored and trained them from time to time to bring them up to speed.
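
First sketch: a minimal Spring Boot REST controller of the kind described above for exposing business rules as a microservice. The TraceController class, the /api/v1/trace path, and the response shape are hypothetical, illustrative names, not the actual Confidential code.

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    @RestController
    @RequestMapping("/api/v1")
    public class TraceController {

        @GetMapping("/trace")
        public String traceNetwork(@RequestParam("workOrderId") String workOrderId) {
            // Real code would delegate to the trace-network business rules here.
            return "{\"workOrderId\":\"" + workOrderId + "\",\"status\":\"TRACED\"}";
        }

        public static void main(String[] args) {
            SpringApplication.run(TraceController.class, args);
        }
    }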
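
Second sketch: the environment-variable flavor of Kubernetes service discovery mentioned above, with a simple retry because pods can terminate at any time. The BILLING service name is a hypothetical placeholder; Kubernetes injects <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT variables for each service.

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ServiceDiscoveryClient {
        public static void main(String[] args) throws Exception {
            // Fixed for the lifetime of the service, so resolving once at startup is fine.
            String host = System.getenv("BILLING_SERVICE_HOST");
            String port = System.getenv("BILLING_SERVICE_PORT");
            URL url = new URL("http://" + host + ":" + port + "/health");

            for (int attempt = 1; attempt <= 3; attempt++) {
                try {
                    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                    System.out.println("HTTP " + conn.getResponseCode());
                    return; // success
                } catch (IOException e) {
                    // The pod may have terminated; retry, and Kubernetes will
                    // load-balance us onto another live endpoint.
                    System.err.println("Attempt " + attempt + " failed: " + e.getMessage());
                }
            }
        }
    }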
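
Third sketch: registering an ECS task definition that groups a service container with a database container, via the AWS SDK for Java (v1 assumed). The task family, images, and sizes are illustrative only, not project values.

    import com.amazonaws.services.ecs.AmazonECS;
    import com.amazonaws.services.ecs.AmazonECSClientBuilder;
    import com.amazonaws.services.ecs.model.ContainerDefinition;
    import com.amazonaws.services.ecs.model.PortMapping;
    import com.amazonaws.services.ecs.model.RegisterTaskDefinitionRequest;

    public class TaskDefinitionRegistrar {
        public static void main(String[] args) {
            AmazonECS ecs = AmazonECSClientBuilder.defaultClient();

            ContainerDefinition service = new ContainerDefinition()
                    .withName("order-service")              // service endpoint code
                    .withImage("example/order-service:1.0")
                    .withMemory(512)
                    .withEssential(true)
                    .withPortMappings(new PortMapping().withContainerPort(8080));

            ContainerDefinition database = new ContainerDefinition()
                    .withName("order-db")                   // backing database
                    .withImage("postgres:9.6")
                    .withMemory(512)
                    .withEssential(true);

            // ECS schedules this grouping onto the cluster based on available resources.
            ecs.registerTaskDefinition(new RegisterTaskDefinitionRequest()
                    .withFamily("order-task")
                    .withContainerDefinitions(service, database));
        }
    }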

Confidential, St. Louis, MO

NQ Automation

Responsibilities:

  • Confidential is in the healthcare services domain. The key stakeholders for Centene are state governments, members, providers, uninsured individuals and families, and other healthcare and commercial organizations. Centene offers a wide variety of health plans. It also offers a full range of healthcare solutions for the rising number of uninsured Americans. Centene also contracts with its health plans and other healthcare and commercial organizations to provide specialty services such as behavioral health, life and health management, managed vision, telehealth, pharmacy benefits management, and medication adherence. I have been working on two projects for Centene, namely "NQ Automation" and "FileNet Publish", both developed for Centene's Provider Data Management (PDM) IT division.
  • NQ Automation pulls the claims that are under NQ status, with the associated practitioner and provider data, from various Centene systems such as EDW and HTR, then checks whether those practitioner and provider data exist in Centene's Portico system. If they do not exist, the data is inserted into the Portico database.
  • FileNet Publish is a utility project developed using the IBM FileNet API to publish/upload various types of files to a FileNet CE (Content Engine) server. The files can be input/output/error files resulting from various Centene projects such as NQ Automation, Vendor Load, and Print Directory.
  • As a Senior Java/JEE Developer, I have been gathering the requirements and participating in the design and development of the "NQ Automation" and "FileNet Publish" projects.
  • Followed the Agile methodology for the development life cycle and participated in the sprint planning sessions.
  • Understood the requirements with the help of the BAs. Used Oracle JDeveloper to create the object model in UML, performed round-trip engineering, transformed the object model into a data model, and generated the POJO classes.
  • Used JIRA for defining and creating the user stories and tasks.
  • Used Spring Batch features to implement the batch jobs (see the first sketch after this list).
  • Used the Cisco Tidal Enterprise Scheduler to schedule the batch jobs.
  • Developed Spring Java bean service classes and used dependency injection to wire the beans.
  • Used Spring annotations instead of declarative configuration in the applicationContext.xml.
  • One of the key challenges I faced, and was asked by the client to resolve, was the scalability and performance of NQ Automation. As NQ Automation became well recognized across Centene, its popularity increased a lot, and with it the user head count. This increased the volume of hits to NQ Automation, and our initial design did not accommodate this situation.
  • The approach I took to resolve this problem: since NQ Automation is read-heavy, I implemented in-memory caching, for example caching the repeatedly executed queries and their results directly, as well as specific heavy objects (see the second sketch after this list).
  • I asked my Technical Project Manager at Centene to consider Cassandra, an open-source NoSQL database from Apache based on the principle of denormalization, as an alternative way of achieving high scalability. I also briefed him that with Cassandra we can very easily achieve high scalability for huge amounts of data, i.e., big data, such as terabytes or even petabytes of storage.
  • Below are some of the notable points about Cassandra that I used to convince my manager:
  • It is scalable, fault-tolerant, and consistent.
  • It is a column-oriented database.
  • Its distribution design is based on Amazon’s Dynamo and its data model on Google’s Bigtable.
  • Created at Facebook, it differs sharply from relational database management systems.
  • Cassandra implements a Dynamo-style replication model with no single point of failure, but adds a more powerful “column family” data model.
  • Cassandra is being used by some of the biggest companies, such as Facebook, Twitter, Cisco, Rackspace, eBay, Netflix, and more.
  • I also described the difference between Cassandra and MongoDB from the perspective of how the data is stored, detailing to my team that in Cassandra data is stored in column families, similar to tables, but rows do not need to share the same columns.
  • Used the “Spring Data Cassandra” support for persistence to the Cassandra datastore (see the third sketch after this list). Its notable features include:
  • Spring configuration support using Java-based @Configuration classes or an XML namespace for a Cassandra driver instance and replica sets.
  • A CassandraTemplate helper class that increases productivity when performing common Cassandra operations, including integrated object mapping between CQL tables and POJOs.
  • Exception translation into Spring's portable Data Access Exception hierarchy.
  • Feature-rich object mapping integrated with Spring's Conversion Service.
  • Annotation-based mapping metadata, extensible to support other metadata formats.
  • Persistence and mapping lifecycle events.
  • Java-based Query, Criteria, and Update DSLs.
  • Automatic implementation of Repository interfaces, including support for custom finder methods.
  • Used CQL, cqlsh, and the Java API offered by Cassandra extensively, for example to create, alter, update, and delete keyspaces, tables, and indexes, and to understand the data types and collections available in CQL and how to make use of user-defined data types.
  • Also, as the CRUD operations were happening very slowly and consuming a lot of bandwidth and resources, I got a chance to apply the system design principle that slow operations should ideally be done asynchronously; otherwise, a user might get stuck waiting for a process to complete.
  • Implemented asynchronous messaging using the Apache Camel “Enterprise Integration Patterns” on an ActiveMQ message broker, to make sure the user just submits a request/operation and does not wait for its completion. After completing the request in the background, an acknowledgement message is pushed to the user's desktop (see the fourth sketch after this list).
  • Next, I also suggested to my Project Manager that we go for database partitioning/sharding, which can provide good scale for NQ Automation, as its data is growing very fast.
  • To improve performance, I analyzed performance metrics, using properties such as average application response time, peak response time, error rate, concurrent users, requests per second, and throughput. I discussed with my manager and the key stakeholders from Centene whether we could go for low latency in the face of increasing requests, which automatically leads to high throughput. The client agreed with my decision, which played a big part in enabling scale for NQ Automation.
  • The client and my manager were quite happy with the results of my approach and design. I received the highest appreciation award Centene gives to a contractor/consultant; the client personally called and informed my employer's Centene account manager of the same, which resulted in a good performance bonus for me from my employer.
  • Apache Camel/ActiveMQ are used to communicate between the Portico system and the EDW and HTR systems.
  • Used Maven to build the application; wrote the Maven pom.xml files and generated the JAR file of the application to deploy on the Linux box.
  • These are Mavenized projects, configured with a Jenkins CI (Continuous Integration) build for every code check-in to SVN; they have been released to Centene's Nexus repository.
  • Created the executable JAR file of the application, which can be called/utilized in Unix shell scripts.
  • Used the Jackson parser to parse the application-specific metadata in JSON format into a Java HashMap (see the fifth sketch after this list, which also covers the command-line parsing below).
  • Used the Apache Commons CLI library API for parsing command-line options passed to Java programs.
  • Used a Subversion (SVN) repository for source code management and versioning.
  • Used the Mockito and JUnit open-source testing frameworks for Test-Driven Development (TDD) (see the sixth sketch after this list).
  • Used the Cobertura Maven Plugin (from the org.codehaus.mojo group) for code coverage reporting and always tried to keep the percentage of code coverage as high as possible.
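
First sketch: a minimal Spring Batch job configuration of the kind used above. The nqClaimJob/pullClaimsStep names and the tasklet body are hypothetical placeholders, and Spring Batch Java config (3.x/4.x) is assumed.

    import org.springframework.batch.core.Job;
    import org.springframework.batch.core.Step;
    import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
    import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
    import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
    import org.springframework.batch.repeat.RepeatStatus;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    @EnableBatchProcessing
    public class NqBatchConfig {

        @Bean
        public Job nqClaimJob(JobBuilderFactory jobs, Step pullClaimsStep) {
            return jobs.get("nqClaimJob").start(pullClaimsStep).build();
        }

        @Bean
        public Step pullClaimsStep(StepBuilderFactory steps) {
            return steps.get("pullClaimsStep")
                    .tasklet((contribution, context) -> {
                        // Real code would pull NQ-status claims from EDW/HTR here.
                        return RepeatStatus.FINISHED;
                    })
                    .build();
        }
    }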
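
Second sketch: the read-heavy caching idea, expressed with Spring's cache abstraction. The "providers" cache and lookup method are hypothetical; repeated calls with the same key are served from the in-memory cache instead of re-running the heavy query.

    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.cache.annotation.EnableCaching;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.stereotype.Service;

    @Configuration
    @EnableCaching   // with Spring Boot, a ConcurrentMap cache manager is auto-configured
    class CacheConfig { }

    @Service
    public class ProviderLookupService {

        // The first call for a given id runs the query; later calls hit the cache.
        @Cacheable("providers")
        public String findProviderName(String providerId) {
            // Real code would execute the repeatedly used SQL query here.
            return "provider-" + providerId;
        }
    }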
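
Third sketch: the Spring Data Cassandra style described above, with an annotated entity and a repository whose finder method is implemented automatically. The practitioner table and columns are illustrative; Spring Data Cassandra 2.x package names are assumed, and the finder presumes an index on the lastName column.

    import java.util.List;
    import org.springframework.data.cassandra.core.mapping.PrimaryKey;
    import org.springframework.data.cassandra.core.mapping.Table;
    import org.springframework.data.repository.CrudRepository;

    @Table("practitioner")
    public class Practitioner {
        @PrimaryKey
        private String npi;
        private String lastName;
        // getters/setters omitted for brevity
    }

    // Spring Data derives the CQL for this finder from the method name.
    interface PractitionerRepository extends CrudRepository<Practitioner, String> {
        List<Practitioner> findByLastName(String lastName);
    }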
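
Fourth sketch: the asynchronous request flow on Camel/ActiveMQ described above. The queue names are hypothetical; the user's submission lands on a request queue and returns immediately, the slow work runs in the background, and an acknowledgement is pushed to a reply queue for the desktop notification.

    import org.apache.activemq.camel.component.ActiveMQComponent;
    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class AsyncAckRoute {
        public static void main(String[] args) throws Exception {
            CamelContext ctx = new DefaultCamelContext();
            ctx.addComponent("activemq",
                    ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));
            ctx.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    from("activemq:queue:nq.requests")
                        .process(exchange -> {
                            // Slow CRUD work happens here, off the user's thread.
                        })
                        .to("activemq:queue:nq.acks"); // acknowledgement message
                }
            });
            ctx.start();
            Thread.sleep(Long.MAX_VALUE); // keep the consumer alive
        }
    }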
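
Fifth sketch: combining the two utility pieces above, Apache Commons CLI (1.3+ assumed) to parse a command-line option and Jackson to parse the JSON metadata into a Java Map. The --metadata option name is hypothetical.

    import java.io.File;
    import java.util.Map;
    import com.fasterxml.jackson.core.type.TypeReference;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.apache.commons.cli.CommandLine;
    import org.apache.commons.cli.DefaultParser;
    import org.apache.commons.cli.Option;
    import org.apache.commons.cli.Options;

    public class MetadataLoader {
        public static void main(String[] args) throws Exception {
            Options options = new Options();
            options.addOption(Option.builder("m").longOpt("metadata")
                    .hasArg().required().desc("path to the JSON metadata file").build());

            CommandLine cmd = new DefaultParser().parse(options, args);

            // Parse the JSON metadata into a plain Map (HashMap by default).
            Map<String, Object> metadata = new ObjectMapper().readValue(
                    new File(cmd.getOptionValue("m")),
                    new TypeReference<Map<String, Object>>() { });

            System.out.println("Loaded metadata keys: " + metadata.keySet());
        }
    }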
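
Sixth sketch: the Mockito + JUnit TDD style, testing the "insert into Portico if missing" rule described above. PorticoDao and ClaimLoader are hypothetical names defined inline so the test is self-contained.

    import static org.junit.Assert.assertTrue;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;
    import org.junit.Test;

    public class ClaimLoaderTest {

        interface PorticoDao {
            boolean exists(String npi);
            void insert(String npi);
        }

        static class ClaimLoader {
            private final PorticoDao dao;
            ClaimLoader(PorticoDao dao) { this.dao = dao; }
            boolean loadIfMissing(String npi) {
                if (dao.exists(npi)) return false;
                dao.insert(npi);   // only insert when Portico lacks the record
                return true;
            }
        }

        @Test
        public void insertsPractitionerWhenMissingFromPortico() {
            PorticoDao dao = mock(PorticoDao.class);
            when(dao.exists("12345")).thenReturn(false);

            assertTrue(new ClaimLoader(dao).loadIfMissing("12345"));
            verify(dao).insert("12345");
        }
    }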

Confidential, Urbandale, IA

Responsibilities:

  • Home Mortgage Product Confidential is designed to perform all the functionality required for a home mortgage application. It is an intranet application for different kinds of users. The user base includes sales, underwriters, processors, fulfillment, and others who are involved in the different stages of processing the mortgage loan for the customer. Various modules capture information such as Borrower, Credit, Pricing, Property, Closing and Funding, Other Credits, and Authorization. Processing involves assessing the gathered information, performing the credit check, setting up tasks for the users, creating milestones, performing risk review, and finally deciding on granting the loan to the customer.
  • Performed the Senior Java/JEE Developer role.
  • We used to have quarterly production releases based on the waterfall model.
  • The application is SOA-compliant.
  • Discussed with the BAs and took requirements from them, converting them into an OO design using a UML object model created with the NetBeans IDE (Visual Paradigm plugin).
  • Using the NetBeans IDE, the object model is used for forward engineering/code generation of Java interface templates.
  • I also performed reverse engineering to keep the design conformed to our source code.
  • Involved in the development of all three tiers, i.e., the presentation layer, business service layer, and persistence layer.
  • Developed the user interface using the XMI-GWT-Siriusforce framework.
  • Exposed the business logic/rules of Confidential / Confidential using microservices SOA.
  • Used the open-source Cloud Foundry cloud computing platform-as-a-service (PaaS) for deploying the microservices of Confidential .
  • I suggested to my Project Manager at Wells that we use Cassandra, an open-source NoSQL database from Apache, which can handle high scalability for huge amounts of data, i.e., big data, such as terabytes or even petabytes of storage.
  • I also described the difference between Cassandra and MongoDB from the perspective of how the data is stored, detailing to my team that in Cassandra data is stored in column families, similar to tables, but rows do not need to share the same columns.
  • Used CQL, cqlsh, and the Java API offered by Cassandra extensively, for example to create, alter, update, and delete keyspaces, tables, and indexes, and to understand the data types and collections available in CQL and how to make use of user-defined data types.
  • The JBoss app server is used for the application.
  • Used the Apache Axis2 implementation of the JAX-WS API to create SOAP web services (see the first sketch after this list).
  • Generated the WSDL, Axis archive, and stub files by running the Ant build.xml file with the targets generate.wsdl, generate.service, and generate.client. Deployed the generated “aar” file to JBoss, tested the web service using SoapUI, and wrote the client Java file that utilizes the stub files to invoke the web service.
  • Used Maven for the application build.
  • Used the Spring IoC/DI container for injecting all the service and DAO layer beans.
  • Using the TestNG framework, created the automated test suite and achieved TDD (see the second sketch after this list).
  • Did the code reviews and performed the code quality checks.
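
First sketch: a minimal JAX-WS SOAP endpoint of the kind described above. The LoanQuoteService name and operation are hypothetical; in the project the service was packaged as an Axis2 "aar" and deployed to JBoss rather than published with Endpoint.

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    @WebService
    public class LoanQuoteService {

        // Standard amortization formula: payment = P*r / (1 - (1+r)^-n)
        @WebMethod
        public double monthlyPayment(double principal, double annualRate, int years) {
            double r = annualRate / 12.0;   // monthly interest rate
            int n = years * 12;             // number of payments
            return principal * r / (1 - Math.pow(1 + r, -n));
        }

        public static void main(String[] args) {
            // Quick local test; SoapUI can then hit the generated WSDL at ?wsdl.
            Endpoint.publish("http://localhost:8080/loanQuote", new LoanQuoteService());
        }
    }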
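
Second sketch: a TestNG test in the automated-suite style mentioned above, exercising the hypothetical LoanQuoteService from the first sketch.

    import static org.testng.Assert.assertEquals;
    import org.testng.annotations.Test;

    public class LoanQuoteServiceTest {

        @Test
        public void monthlyPaymentForThirtyYearLoan() {
            LoanQuoteService service = new LoanQuoteService();
            // $100,000 at 6% APR over 30 years amortizes to about $599.55/month.
            assertEquals(service.monthlyPayment(100_000, 0.06, 30), 599.55, 0.01);
        }
    }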

Project Leader

Confidential, Fallon, MO

Responsibilities:

  • Confidential 's BillPay system has been developed to replace its legacy mainframe system called RPPS. I have been involved in the R2 release. This release provides various supporting and helper functionalities for Biller Parameter Services, for example Add Biller, Update Biller, and Delete Biller. We provide various services, and the third-party front-end team makes use of those services. To simulate the batch processes, services like the Mass Upload Service are used to upload a bulk list of billers to the database. Reports like the Daily Update Report have been used to provide the users with up-to-date biller data. The JBPM workflow has also been used for the biller authorization and approval process.
  • As a Project Leader from Wipro, I was deployed on-site and asked to play the on-site coordinator role, managing the 10-member team at offshore while also involved in hands-on development.
  • Followed the Agile process. Involved in the scrum meetings, playing planning poker to size the user stories in the iteration planning meeting; each iteration lasts 10 days.
  • Involved in the user story analysis, task breakdown, and task estimation.
  • Involved in the design, development, unit testing, deployment, and post-production maintenance of the application.
  • Used IBM Rational Rose Enterprise for creating the object and data models in UML, and generated the application code (Java classes) and the DB schema and tables.
  • Also performed reverse engineering using Rose from time to time to keep the model up to date.
  • The application is SOA- and BPM-compliant; used the Oracle SOA Suite, JBoss JBPM, and the Drools engine for this.
  • Involved in hands-on coding with Confidential Java, Spring, Hibernate, and JUnit.
  • Used Eclipse and JBoss Developer Studio as the IDEs.
  • Developed the RESTful web services using the Jersey implementation of the JAX-RS API (see the sketch after this list).
  • Used JSON input/output data.
  • SoapUI, REST Client, and Fiddler are used to test/run the web services.
  • Used Maven to build the application.
  • JBoss App Server is used for application deployment.
  • Cruise Control CI is used for creating the custom continuous build process, sending automatic emails in the case of a build failure or an error-prone code check-in.
  • The Clover code coverage report is used to check the percentage of code covered by JUnit.
  • HSQLDB is used for JUnit development.
  • TOAD is used as a database interface to Oracle DB.
  • Resolved JIRA tickets and QC defects.
  • Reported to the offshore delivery manager and the on-site program manager.
  • Responsible for the code reviews and the offshore team's code quality and delivery.
  • Provided daily status reports to the client, conducted daily touch-base calls with the offshore team, and attended weekly status meetings.
  • Confidential is used to connect to the Confidential network while working from home.
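
A sketch of a Jersey/JAX-RS resource with JSON input/output in the style described above. The BillerResource path and payloads are hypothetical, illustrative stand-ins for the Add Biller/Get Biller services.

    import javax.ws.rs.Consumes;
    import javax.ws.rs.GET;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    @Path("/billers")
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.APPLICATION_JSON)
    public class BillerResource {

        @GET
        @Path("/{id}")
        public Response getBiller(@PathParam("id") String id) {
            // Real code would look the biller up in the database.
            return Response.ok("{\"id\":\"" + id + "\",\"status\":\"ACTIVE\"}").build();
        }

        @POST
        public Response addBiller(String billerJson) {
            // "Add Biller" service: persist the biller, then return 201 Created.
            return Response.status(Response.Status.CREATED).entity(billerJson).build();
        }
    }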
