Senior Full Stack Developer Resume

SUMMARY:

Java microservice development using Spring Boot, Spring RestTemplate, Spring Cloud, and Spring Cloud Netflix, including the Feign REST client, Ribbon load balancer, and Eureka naming server for service registration. Real-life experience designing controllers and services within the microservices paradigm, emphasizing fine-grained, domain-centric service design to avoid bloated controllers and services.
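
As a minimal illustration of the Feign/Eureka/Ribbon setup described above (a sketch only; the service name, endpoint, and OrderDto payload are hypothetical, and the @FeignClient import path differs in older Netflix Feign releases):

```java
// Declarative Feign proxy for a service registered in Eureka under the
// logical name "order-service"; Ribbon load-balances across its instances.
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

@FeignClient(name = "order-service")
public interface OrderServiceProxy {

    @GetMapping("/orders/{id}")
    OrderDto findOrder(@PathVariable("id") long id);
}

// Simple payload type for the example above (hypothetical).
class OrderDto {
    public long id;
    public String status;
}
```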

TECHNICAL SKILLS SUMMARY:

LANGUAGES, APIs, and J2EE SERVICES: Java, Spring Boot, Spring Core, Spring MVC, Spring Data (Templates), JPA, Java microservices with Spring Cloud Netflix and Spring Cloud, AMQP, JMS, J2EE, Servlets, Python.

CONTAINER FRAMEWORKS: Docker, Docker-Compose, Kompose, moderate Kubernetes and OpenShift experience

DATABASE: MongoDB, Oracle, MySQL, Postgres, Data Modeling, DDL script creation.

JAVA DEVELOPMENT (IDE) and BUILD TOOLS: Eclipse Spring Tool Suite 4 is currently my Java IDE of choice, especially for Spring Boot microservices development, testing, and debugging. IntelliJ IDEA was previously my Java IDE of choice for a four-year period. Maven 3.x, Git, Bitbucket.

PYTHON DEVELOPMENT (IDE): Jupyter Notebook and Anaconda environment kernels.

WEB TIER FRAMEWORKS & CLIENT-SIDE: Spring MVC, Jackson JSON processor, Angular 7 and 8, jQuery, Bootstrap, D3, STOMP WebSockets

MACHINE LEARNING: Feature engineering using Python, including Scikit-learn and TensorFlow. Scikit-learn and TensorFlow ML frameworks with the Keras API; linear regression and classification deep feed-forward neural networks (DFF). Convolutional neural networks (CNN), specifically the EfficientNet-B0 architecture, fit to image data after using Librosa to transform audio samples into the equivalent of image data by generating spectrograms of the audio recordings and using the resulting arrays as the primary feature of the training, test, and validation data. Model evaluation and validation using confusion matrices and classification reports that include micro and macro precision, recall, and F1-score, plus true negative rate (specificity) or false positive rate, depending on dataset balance and where applicable. Training and hosting models within Amazon SageMaker using the SageMaker TensorFlow wrapper notebook integrated with Amazon S3 and the SageMaker hyperparameter tuner, including hyperparameter ranges.

BIG DATA: Elasticsearch, Kibana, Logstash, Filebeat (Elastic Stack v7.x), Spring Data Elasticsearch Java API. Integrated Elastic with a Java microservices architecture using the Logstash JSON-encoder jar; Logstash RabbitMQ, Nginx, and JDBC plugins.

AMQP/JMS FRAMEWORK TOOLS: RabbitMQ, Apache ActiveMQ, RabbitMQ integration with STOMP WebSockets

APPLICATION SERVERS: Tomcat, Nginx, Jetty embedded in IntelliJ, Apache Karaf, Red Hat JBoss Fuse.

WEB SERVICES: Spring RESTful web services (REST), Spring RestTemplate, Spring Feign REST client with proxies (for making REST calls to services within the microservices paradigm)

AMAZON WEB SERVICES: S3, EC2, Glue ETL, Kinesis Data Streams, Kinesis Data Analytics, Kinesis Data Firehose, SageMaker

OBJECT/RELATIONAL PERSISTENCE (ORM): Hibernate, JPA with annotations (EJB3), Spring JpaRepository, native JPA templates, Spring-managed transactions within service classes, Spring Data

KNOWLEDGE MANAGEMENT & TEXT ENGINEERING: Elasticsearch and Kibana, Protégé Ontology Framework and API

MULTI-THREADING: java.util.concurrent.ExecutorService with Callable, Spring task scheduler and task executor annotations
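
A minimal sketch of the ExecutorService-with-Callable pattern: value-returning tasks are submitted to a pool and their results collected via Futures (the price-lookup task is hypothetical).

```java
// Submits Callable tasks to a fixed thread pool and gathers results.
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PriceLookup {

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Callable<Double>> tasks = List.of(
                () -> fetchPrice("SKU-1"),
                () -> fetchPrice("SKU-2"));
        List<Future<Double>> results = pool.invokeAll(tasks);
        for (Future<Double> f : results) {
            System.out.println(f.get()); // blocks until each task completes
        }
        pool.shutdown();
    }

    // Stand-in for a real lookup (hypothetical).
    static double fetchPrice(String sku) {
        return Math.abs(sku.hashCode() % 100);
    }
}
```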

METHODOLOGIES: Design patterns and J2EE patterns, use case analysis, Unified Modeling Language (UML), Agile (Scrum) sprint iterations, JIRA, XPlanner and VersionOne Agile tools for planning sprints, Rational Unified Process, and Rational Rose modeling software (Rational 2000 Enterprise Edition). Some examples of design patterns that I have applied over the years (since 1999): Singleton, Proxy, Adapter, Command, Command Processor, Factory, Virtual Proxy (dynamic loading), Builder, Hashed Adapter, Request Broker, Service Locator (JNDI), Front Controller, Business Delegate, and combinations of the above. I focus on applying design patterns where there is a problem in search of a solution, not a solution in search of a problem, and I make use of parameterized (generic) types for generic designs such as DAO classes, as sketched below.
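
A minimal sketch of that generics-based DAO design, assuming plain JPA (the Customer entity in the usage comment is hypothetical):

```java
// One generic DAO serves any entity type; the client supplies the type.
import javax.persistence.EntityManager;

public class GenericDao<T, ID> {

    private final EntityManager em;
    private final Class<T> entityType;

    public GenericDao(EntityManager em, Class<T> entityType) {
        this.em = em;
        this.entityType = entityType;
    }

    public T findById(ID id) {
        return em.find(entityType, id);
    }

    public void save(T entity) {
        em.persist(entity);
    }
}

// Usage: new GenericDao<>(em, Customer.class).findById(42L);
```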

SOURCE CONTROL: Git, Bitbucket

CM/CI TOOLS: Jenkins, Bamboo, Nexus, Artifactory

PROVISIONING TOOLS: Some Vagrant, Ansible, and Packer (I use all of these as a developer, not as a provisioning expert)

OPERATING SYSTEMS: Linux (including CentOS 7, Ubuntu 18, Oracle Linux, and Red Hat VMs in Oracle VirtualBox). Able to set up my own VM, including Guest Additions, with everything that I need.

TESTING TOOLS: JUnit, Mockito, SpringJUnit4ClassRunner

SERVICE-ORIENTED ARCHITECTURE (SOA) - Legacy: Apache Karaf (OSGi container), Apache Camel, Apache CXF, JBoss Fuse ESB

WORKFLOW ENGINES: Activiti BPM, JBoss jBPM

WORK EXPERIENCE:

Confidential

Senior Full Stack Developer

Responsibilities:

  • The goal of the research/competition project was to identify a wide variety of bird vocalizations in soundscape recordings. The quality of the audio samples varied; some were weakly labeled and contained background noise such as other animals, running water, insects, airplanes, and city sounds.
  • My implementation approach was to use the EfficientNet-B0 convolutional neural network (CNN) architecture to fit image data after using Librosa to transform the birdcall audio samples into the equivalent of image data, generating spectrograms of the recordings and using the resulting transformed arrays as the primary feature of the training, test, and validation data. I also experimented with a third-party noise-reduction library to remove some of the background noise from the samples.
  • Model evaluation used a classification report and confusion matrix, including micro and macro recall, precision, and F1-score (standard definitions are sketched after this list).
  • What I gained from working on this project was experience with CNNs, including applying CNNs to audio/image detection. Most of my previous ML experience had been with linear regression, binary classification, and multi-class classification deep feed-forward neural networks (DFFs).
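
For reference, the per-class metrics in the classification report follow the standard definitions, where TP, FP, and FN are the per-class true positive, false positive, and false negative counts:

```latex
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
```

Macro averaging computes each metric per class and then averages across classes; micro averaging pools TP, FP, and FN across all classes before computing the metrics.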

Confidential

Senior Full Stack Developer

Responsibilities:

  • The application is implemented using Java 11, Spring Boot, Spring microservices, REST, RabbitMQ for messaging, MongoDB, the Elastic Stack (Elasticsearch, Logstash, Kibana, Filebeat), STOMP WebSockets, and Angular 7. I work directly with all of the above.
  • I implemented various business requirements working across all tiers (Java full stack). Implementations include constructing MongoDB Spring template repositories with CRUD operations supporting CRUD front-end functionality, MongoDB lookup (join) and aggregation queries for front-end lists, Spring services, Spring controllers, and various Angular components. Typical implementations use Java generic types whenever multiple business entities extend the same base entity, which allows for more code reuse on the server side.
  • I constructed MongoDB facet aggregation queries, which allow a single aggregate query to apply several filter stages, each mapped to a separate result set. This supported a business requirement that various categories of message counts be pushed to the front-end in real time (see the aggregation sketch after this list).
  • I implemented the STOMP WebSocket framework used across several business requirements, including capturing and displaying various user, activity, and system events for auditing, informational, warning, and real-time error-reporting purposes. The STOMP WebSocket is the gateway used to push content from the server side to the UI via the RabbitMQ amq.topic exchange (see the broker-relay sketch after this list).
  • Another business requirement I implemented allowed the user to create and manage multi-level nested entities, which required corresponding multi-level nested documents in MongoDB. I did both the back-end and front-end work for this requirement.
  • I implemented a Logstash/Elastic logging solution in which JSON-encoded log files are created with the aid of the Logstash encoder dependency jar, used to configure logback-spring.xml files, with MDC adding custom data to log events. Confidential is used to ingest log files across all Java microservices and forwards those log events to the Logstash pipeline for parsing, from which they are forwarded to an Elasticsearch index. I created the entire workflow/framework.
  • The same framework was used to ingest Nginx log files using a second Confidential instance. The Nginx log events are forwarded from Confidential to Logstash, where they are parsed using Grok before being sent to the same Elasticsearch index.
  • I created various vertical bar chart/timeline visualizations based on log events using Kibana.
  • We use Docker images of the Elastic components. The Elastic Docker images are deployed using a docker-compose YML file in the development environment, and using Kubernetes templates for production deployment into an OpenShift cluster.
  • Implemented a custom Logstash RabbitMQ plugin pipeline that captures message events from a RabbitMQ exchange, binding key, and queue and forwards them to an Elasticsearch index.
  • I also TLS/SSL-enabled the Elastic workflow at both the transport and HTTP layers, so that all components communicate via SSL using self-signed certificates created with OpenSSL. This includes Elasticsearch node-to-node communication, Kibana-to-Elasticsearch communication, and browser-to-Kibana communication.
  • I integrated Elastic with OpenAM and ForgeRock using OpenID Connect authorization token access via the OAuth 2.0 authorization code flow.
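
A minimal sketch of the facet-aggregation approach described above, using Spring Data MongoDB (the collection, field, and category names are hypothetical):

```java
// One $facet aggregation returns separate per-category counts in a single
// round trip, suitable for pushing real-time message counts to the UI.
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;

import org.bson.Document;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.aggregation.FacetOperation;
import org.springframework.data.mongodb.core.query.Criteria;

public class MessageCountsDao {

    private final MongoTemplate mongoTemplate;

    public MessageCountsDao(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    public Document countsByCategory() {
        FacetOperation facets = facet()
                .and(match(Criteria.where("level").is("ERROR")), count().as("count")).as("errors")
                .and(match(Criteria.where("level").is("WARN")), count().as("count")).as("warnings")
                .and(match(Criteria.where("level").is("INFO")), count().as("count")).as("infos");
        return mongoTemplate.aggregate(newAggregation(facets), "messages", Document.class)
                .getUniqueMappedResult();
    }
}
```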
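
And a minimal sketch of the STOMP-over-WebSocket gateway relaying to RabbitMQ (assumes spring-boot-starter-websocket and a RabbitMQ broker with its STOMP plugin enabled on port 61613; the endpoint and prefixes are hypothetical):

```java
// Browser clients connect to /ws; messages published to /topic destinations
// are relayed through RabbitMQ's STOMP adapter (backed by amq.topic).
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws").withSockJS();
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableStompBrokerRelay("/topic")
                .setRelayHost("localhost")
                .setRelayPort(61613);
        registry.setApplicationDestinationPrefixes("/app");
    }
}
```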

Confidential

Senior Software Developer

Responsibilities:

  • I work on modernization across multiple applications that GCE has, including containerization with Docker, upgrading to newer Spring frameworks (Spring MVC, JPA, Swagger, etc.) using Spring Boot, breaking monolithic code out into more manageable Java microservices, and upgrading the front-end to Angular 5. As a result of upgrading frameworks, I also refactor the affected code, configuration, and annotations and upgrade jars.
  • Related to the above item, I am currently refactoring an existing monolithic application into one that conforms to a microservices design using Spring Cloud Netflix, along with all of the usual infrastructure components, including Eureka discovery, the Netflix Zuul API gateway, Spring Cloud, the Zipkin distributed tracing server, Netflix Feign, and Netflix Ribbon. External services (outside of our control) are consumed using RestTemplate.
  • A percentage of my time goes to converting legacy SOAP/WSDL web services to RESTful services; some are microservices and some are traditional RESTful services using RestTemplate.
  • A percentage of my time goes to fixing Fortify and Sonatype code scan security findings and working on production problems.
  • Based on previous Elasticsearch experience, I have implemented services where necessary that build Elasticsearch indices from unstructured text content stored in a database and from ontology OWL files on the file system. I also utilize Spring Data Elasticsearch for creating index models, and I create Elasticsearch CRUD repositories where necessary. Indices are created at application startup in Spring configuration beans. Users can search using canned ‘match phrase’ queries that are accessed via REST calls (see the repository sketch after this list). Kibana is used only for testing index creation and search queries.
  • Creation of Docker-Compose scripts and executable-jar microservice scripts, plus testing and memory management of all application-related and infrastructure Linux processes.
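
A minimal sketch of the Spring Data Elasticsearch model/repository approach with a canned match_phrase query (the index and field names are hypothetical):

```java
// An index model plus a repository method that exposes a canned
// match_phrase query; ?0 is the positional method parameter.
import java.util.List;

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Query;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

@Document(indexName = "documents")
class TextDocument {
    @Id
    private String id;
    private String content;
    // getters/setters omitted for brevity
}

interface TextDocumentRepository extends ElasticsearchRepository<TextDocument, String> {

    @Query("{\"match_phrase\": {\"content\": \"?0\"}}")
    List<TextDocument> findByPhrase(String phrase);
}
```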

Confidential

Software Developer

Responsibilities:

  • I worked on internal projects that were used as templates for subsequent development. These projects are listed below.
  • I implemented a Java-based microservices project using Eclipse as the IDE, built with Spring Boot, Spring MVC, JPA, and Spring Cloud Netflix and Spring Cloud components including the Eureka discovery service, Netflix Zuul API gateway, Spring Cloud Config Server, Zipkin distributed tracing server, Netflix Feign, Netflix Ribbon, and Swagger.
  • The above microservices project has all services and components deployed within their own Docker containers. Most services and components have Dockerfiles within their specific folders. I implemented the containerization in two ways: Docker-Compose and Kubernetes.
  • The Docker-Compose deployment was created first, using a docker-compose.yaml file that defines 10 containers, including the previously mentioned microservices-related containers along with Postgres, Elasticsearch, Kibana, and Logstash containers.
  • Next, I did a Kubernetes implementation of the above microservices project. The Kubernetes deployment and service YAML files were generated from the Docker-Compose scripts using Kompose. The specific Kubernetes setup uses Minikube and kubectl (from within a VM).
  • All of the above development was done within an Oracle VirtualBox VM running Ubuntu as its OS and hosting most of the Docker images. I did all development within a Linux Ubuntu VM (and more recently a VM with CentOS 7).
  • The front-end for the microservices project was implemented using Angular and Angular Material. I use Visual Studio Code as the IDE for Angular development. I exploit the Observable/subscribe pattern in Angular along with service classes.
  • Elasticsearch development included using the Elasticsearch Java API and the Spring Data Elasticsearch repository packages.
  • I designed and implemented a Java-based Elasticsearch dynamic query builder that takes user-provided parameters and builds an Elasticsearch query using the Elasticsearch Java API. The types of queries the builder constructs dynamically are basic match queries using either term or range, match-all queries, range queries, and more complex Boolean queries allowing multiple must, must-not, and should clauses. All query types allow filters and aggregations, including sub-aggregations. The builder is designed to accept any Elasticsearch model class through Java generics: the main builder class works with a generic type supplied by the client consuming the class (see the sketch after this list).
  • The Elasticsearch dynamic query builder described above exposes the constructed query to the front-end, where it can be copied and pasted into Kibana and tested as-is if necessary.
  • I utilized Elasticsearch X-Pack to secure Elasticsearch, Kibana, and Logstash. Roles and users were created via curl commands to grant various privileges such as creating new indices, reading indices, adding documents to indices, and other necessary privileges.
  • I used Kibana mainly to test queries within the Dev console and to manage indices, users and roles.
  • I utilized the Logstash JDBC plugin for data ingestion via SQL queries for structured data. However, not all data ingestion was done using Logstash. For some unstructured data such as text files containing email content, I used the Elasticsearch Java API within Java service objects to ingest data in order to have specific control over parsing the unstructured data. Moreover, some data was ingested via Kibana using bulk POST commands.
  • I made moderate use of OpenShift.
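
A minimal sketch of the generics-based dynamic query builder described above, using the Elasticsearch Java API's QueryBuilders (the parameter shape is hypothetical; the real builder also handled range, match-all, filters, and aggregations):

```java
// Builds a bool query from user-provided must / must_not term parameters;
// the generic type parameter lets one builder serve any index model class.
import java.util.Map;

import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class DynamicQueryBuilder<T> {

    private final Class<T> modelType; // Elasticsearch model class supplied by the client

    public DynamicQueryBuilder(Class<T> modelType) {
        this.modelType = modelType;
    }

    public BoolQueryBuilder build(Map<String, Object> must, Map<String, Object> mustNot) {
        BoolQueryBuilder query = QueryBuilders.boolQuery();
        must.forEach((field, value) -> query.must(QueryBuilders.termQuery(field, value)));
        mustNot.forEach((field, value) -> query.mustNot(QueryBuilders.termQuery(field, value)));
        return query;
    }
}
```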

Confidential

Lead Software Developer

Responsibilities:

  • I worked on new requirements for a Node.js/Express application that generates XML from both structured relational data and unstructured file-based document data. The project uses AWS S3 storage for its transformation cache and XML document storage.
  • I worked on new requirements for a Node.js/Express application that transforms XML documents into JSON documents and JSON documents into PDF documents using the pdfmake third-party Node.js module. Because multi-page PDF documents are heap-space hogs, and slow, the Node.js application spawns child processes that handle PDF document creation using pdfmake. Child processes are spawned using the child_process Node.js module.
  • I made performance enhancements to the worker child processes that are spawned for PDF document creation. My enhancements allowed PDF documents of over 800 pages each to be created successfully.
  • I implemented requirements for new sections of data within HTML transcripts. The data originates in the back-end, is transformed into XML and then JSON, and the mustache templating Node.js module then transforms the JSON into HTML. The entire workflow is Node.js based.
  • Implemented LDAP connect/search/bind using the ldap Node.js module, used during authentication into the new FinCEN intranet website.
  • Worked on new requirements for the FinCEN Query Java-based application, which uses Oracle, JPA, JSF/ADF, Oracle WebCenter, and Oracle Identity Manager (OIM). I worked the entire stack, including the front-end (JSF/ADF), web tier, service tier, DAO tier (JPA), and Oracle back-end.
  • Worked on new requirements for the FinCEN Training Java-based application, which included working with the Oracle database, Oracle Identity Manager (OIM), LDAP, Java batch jobs, and Oracle WebCenter.
  • Fixed security scan findings from the HP WebInspect and Fortify security products. Fixes were Apache HTTP Server based, Drupal based, Linux file system based, Node.js based, or Java based.

Confidential

Senior Software Developer

Responsibilities:

  • The purpose of the contract was to provide an integrated, distributed OSGi environment used by P-8A patrol and reconnaissance aircraft for processing sensor data against user-created subscriptions.
  • I was responsible for creating the RESTful, lightweight, loosely coupled adapter objects that handle integration between the distributed service tier and the web tier, using Apache CXF REST on the service-tier side and Apache HttpClient on the web-tier side. I implemented all of the adapter classes, across all services in the service tier, that handle the REST calls made back and forth between the Spring MVC controllers and the RESTful adapter objects in the various services.
  • I was responsible for designing and implementing the Spring MVC REST-based web tier and the AngularJS-based UI that handle the creation and configuration of subscriptions, including the match-field ruleset, and the creation and configuration of match target types (MTTs).
  • The UI technologies and frameworks that I utilized were AngularJS, jQuery, Thymeleaf, Bootstrap, a WrapBootstrap template, and numerous UI widgets, including a list widget (DataTables) and a tree widget (jqTree).
  • The server-side technologies and frameworks that I utilized at the beginning of the project to build the server-side messaging infrastructure were Java; AMQP queues and topics for processing incoming MTT-based messages from the plane and sending outgoing messages to dynamic MTT-based routes; OSGi for handling coordination between service bundles; Apache Camel for AMQP-based routes using AMQP endpoints; Apache Karaf as the OSGi container, including blueprint Spring-like XML config files containing Camel routes; JAXB for marshalling and unmarshalling message data; Apache HttpClient; Spring Boot for creating starter POM files; Spring MVC as the MVC framework; and the Jackson JSON parser for converting JSON to Java beans and vice versa (a route sketch follows this list).
  • The back-end used MongoDB with a JPA-based DAO tier in the application codebase. My involvement with MongoDB and the DAO tier included writing and debugging queries using JPA.
  • The final web-tier artifact was a Maven-produced .war file that is pushed to Artifactory and deployed in the Tomcat web server. I was responsible for creating and maintaining the Maven build script that builds the war file and pushes it to Artifactory.
  • The build process created container images using Docker. The automated deployment process involved deploying each container image in its associated container (i.e., Tomcat, Karaf). Kubernetes was used to manage groups of applications deployed on the same server within atomic nodes. I utilized Docker and Kubernetes daily as a developer, not as a DevOps or CM engineer.
  • As the only developer working within the web tier and service tier, I led the effort to convert portions of the existing Spring MVC/REST app to Spring-based microservices using Spring Cloud and Spring Cloud Netflix components, packages, and annotations. First, I refactored the code to use a more fine-grained, domain-based API design for controllers and services. I created proxies that used Feign to make REST calls to services, similar to what Apache CXF was already doing. Services are registered with the Eureka naming server and utilize Ribbon for automatic load balancing.
  • The microservices effort ended when the customer halted work orders for new requirements. I eventually left the project after doing about three months of bug fixes, code cleanup, and a lot of automated-deployment CM work with Docker and Kubernetes.
  • I was responsible for building, from the ground up, the web application (both the server-side web tier and the UI) that allows users to monitor the performance of any device with the aid of the Flot charting library. The UI allows a user to select a particular view (host, network, partition, broker, destination) and select devices for that view. Time-series data is rendered on a line chart (the Flot JavaScript chart framework).
  • The web tier was built using Spring Boot, Spring Core, and Spring MVC, including servlets and Spring MVC-based JAX-RS annotations.
  • The UI uses AngularJS, Bootstrap, a WrapBootstrap template, CSS, the Flot charting library, and other JavaScript widgets and frameworks.
  • This application monitors performance of hardware devices including hosts, networks, partitions, AMQP brokers, and AMQP destinations.
  • The collectd framework constantly polls and collects performance data for hardware devices.
  • The user is able to select from a set of timeframe options or enter custom ‘to’ and ‘from’ dates that determine the chart's X-axis range. The user can also choose whether the chart is in polling mode (where the chart moves over time) or non-polling mode when looking at a historical view.
  • The application is built using Gradle. The final .war artifact is uploaded to Artifactory. The build process creates a container image using Docker. The automated deployment process involves deploying each container image in its associated container (i.e., Tomcat, Karaf). Kubernetes is used to manage groups of applications deployed on the same server within atomic nodes. An atomic node can contain multiple containers.
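
A minimal sketch of the kind of Camel route used for the AMQP-based messaging infrastructure described above (the endpoint names, JAXB context package, and mttType header are hypothetical):

```java
// Consumes incoming MTT-based messages from an AMQP queue, unmarshals the
// XML payload with JAXB, and forwards to a dynamic MTT-based route.
import org.apache.camel.builder.RouteBuilder;

public class MttMessageRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("amqp:queue:incoming.mtt")
            .unmarshal().jaxb("com.example.mtt")
            .log("Received MTT message: ${body}")
            .toD("amqp:topic:outgoing.${header.mttType}");
    }
}
```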

Confidential

Senior Software Developer

Responsibilities:

  • My duties included maintenance (O&M) support, including fixing bugs, making enhancements, and designing and implementing new requirements.
  • Also, as a senior developer, I conducted analysis and design for certain larger tasks in order to determine their projected scope.
  • Domain areas of focus that I enhanced and maintained included trip itinerary creation and editing, voucher (reimbursement) creation and editing, traveler profile editing, accounting, air, rail, and car reservations, hotel lodging search and booking, and mileage and non-mileage expense creation and editing.
  • Primary technology tasks involved design and coding with Java, Spring Core (for the service and DAO tiers), Hibernate (including Hibernate-managed transactions using Spring), JavaScript, jQuery, HTML, and SQL. The MVC framework was Tapestry, and the UI consisted of JSPs and JavaScript (including some jQuery). I was involved in all aspects of the above (full stack).
  • I worked with Linux scripts and made moderate use of Linux commands for everyday tasks, including deployment, using tools such as PuTTY and WinSCP.
  • I worked extensively with SQL in an Oracle context, both enhancing Hibernate-managed queries and writing ad-hoc performance-assessment queries that included aggregations and correlated subqueries.
  • In addition to coding, I also prepared documentation that included functional documents and design documents.
  • I often served as reviewer for code reviews using Crucible.

Confidential, Patuxent River, MD

Senior Software Developer

Responsibilities:

  • I designed and created an acquisition purchase request application in conformance with a complex and strict business workflow process involving multiple user roles (swim lanes).
  • I created and implemented the entire web-based process that handles the creation and maintenance of several accounting documents needed by the project management office (PMO) and contract office such as the Statement of Work (SOW), Independent Government Cost Estimate (IGCE), Justification and Analysis (J&A), Market Research, and DD254 documents.
  • I created the web-based document upload and download feature.
  • I set up, configured, customized, and integrated a business process workflow engine in the middle tier that managed an acquisition through its workflow lifecycle step by step. The workflow engine I used was Activiti (see the sketch after this list).
  • The application (still in existence) uses Java 1.7 and Tomcat 7.0.57 as the web server. I used Apache Geronimo J2EE runtime jars to turn Tomcat into a J2EE-compliant web server so that Apache ActiveMQ would work within Tomcat enabling JMS capability.
  • Confidential uses Oracle 11g as its database. It formerly used MySQL; I had to convert all DDL scripts and JPA entities to work with Oracle instead of MySQL.
  • I used the following Java technologies and frameworks, all of which I directly worked with the entire time: Core Spring, Spring MVC, Spring Security, the Spring DBUnit integrated test framework, the Jackson JSON-bean mapper, JPA/Hibernate, Java Message Service (JMS), Apache ActiveMQ, Ehcache, Apache Geronimo, Apache Commons, Apache Commons Lang, Apache Commons BeanUtils, the Maven build manager, and SVN.
  • I used the following UX technologies and frameworks: HTML, HTML5, native JavaScript, jQuery, jQuery DataTables, Bootstrap, and CSS.
  • I created an automated web-based process in which the ‘Aspose for Word’ product converts HTML text to a Word document.
  • Initial architect tasks I completed were setting up the Maven-centric web project structure, setting up and configuring the Spring MVC-based web tier, configuring Hibernate with Spring, the JDBC data source, and the persistence unit, and configuring Spring Security authentication.
  • Day-to-day tasks included analysis, design, and implementation (coding) of business requirements, including RESTful web-tier controllers and transaction-managed mid-tier service classes, and maintaining the Maven build process and frameworks, including upgrading frameworks (jar files) when necessary. Other daily tasks included schema design, adding new database tables and constraints, creating new JPA-annotated Hibernate domain beans (and possibly named queries), creating and maintaining DAO methods and queries in the DAO layer, creating initial DDL and DML database scripts, and writing periodic database update scripts.
  • I also configured the Confidential build process to use the Sonar code quality plugin and the FindBugs plugin.
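
A minimal sketch of driving an acquisition through a workflow with the Activiti API (assumes a BPMN process definition deployed under the hypothetical key "acquisitionRequest" and a configured default process engine):

```java
// Starts a process instance and completes its first user task, which is how
// an acquisition advances step by step through its swim lanes.
import org.activiti.engine.ProcessEngine;
import org.activiti.engine.ProcessEngines;
import org.activiti.engine.RuntimeService;
import org.activiti.engine.TaskService;
import org.activiti.engine.runtime.ProcessInstance;
import org.activiti.engine.task.Task;

public class AcquisitionWorkflow {

    public static void main(String[] args) {
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
        RuntimeService runtime = engine.getRuntimeService();
        TaskService tasks = engine.getTaskService();

        ProcessInstance instance =
                runtime.startProcessInstanceByKey("acquisitionRequest");

        Task first = tasks.createTaskQuery()
                .processInstanceId(instance.getId())
                .singleResult();
        tasks.claim(first.getId(), "pmoUser"); // e.g., the PMO swim lane
        tasks.complete(first.getId());
    }
}
```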

Confidential

Senior Software Developer and Company Owner

Responsibilities:

  • I created a Knowledge base / ontology tool / search engine website product in collaboration with a small user community consisting of historians, academic instructors, and general subscribers.
  • Content is housed in an OWL ontology file. The OWL-based content is created and managed with the Protégé ontology API, framework, and editor tool (see the sketch after this list).
  • Data is formatted dynamically for display within the UI using AngularJS templates.
  • I designed and implemented the website so that its OWL-based content is organized in a navigable hierarchical structure composed of categories (classifications) at the higher levels and topics (nodes) at the lowest level.
  • Integrated with Authorize.Net payment gateway Java API for processing automatic recurring subscriptions and subscription cancellations.
  • I used the following Java technologies and frameworks to create the website: Core Spring, Spring MVC, Spring Security, Jackson JSON-bean mapper, Spring-managed services, JPA/Hibernate, EH Cache, Apache Commons, Apache Lang, Apache BeanUtils, Maven build manager, SVN.
  • The UI is AngularJS with some jQuery.
  • The website uses an Extended Validation (EV) SSL X.509 public-key certificate, obtained from an authentic CA, that I integrated into the application.
  • I developed and maintain the website product in a local environment and deploy it on a Linux cloud hosting production environment.
  • The product contains a Java admin component, which I built with an AngularJS front-end, for managing customers' security attributes.
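
A minimal sketch of loading and walking OWL content with the OWL API that Protégé builds on (the file name is hypothetical, and the method names follow OWL API 4.x):

```java
// Loads an OWL ontology file and lists the classes in its signature,
// i.e., the categories and topics that back the site's navigation.
import java.io.File;

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLClass;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;

public class OntologyLoader {

    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology ontology =
                manager.loadOntologyFromOntologyDocument(new File("content.owl"));

        for (OWLClass owlClass : ontology.getClassesInSignature()) {
            System.out.println(owlClass.getIRI());
        }
    }
}
```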

Confidential, Reston, Virginia

Principal Software Engineer

Responsibilities:

  • The application allowed a user to configure where a device's log messages are distributed. Log messages could be distributed to different event source types, and per event source type, log messages could be channeled to multiple hosts (IPs).
  • I worked on the front-end and mid-tier (service) layers that let a user configure log collection. This configuration metadata was delivered to a RESTful service that processed the data for that device. The front-end process also allowed the user to attach a security certificate to configured hosts; the user selected certificates from a repository.
  • I implemented a tree map visualization. This consisted of an event-count aggregation by a selected metadata value, along with an average-score aggregation, rendered as a colorized tree map.
  • I implemented a pie chart visualization. This consisted of an event-count aggregation by metadata value rendered as a pie chart. The query is also a top-N query.
  • I implemented a timeline visualization. This consisted of an event-count aggregation by metadata value per date-unit interval, i.e., timeline data with event counts per hour or per day for particular metadata values.
  • I implemented a circular sunburst visualization. The query returned nested data, hierarchically arranged with score types as depth. Each score type consisted of risk levels (low, medium, high), and each risk level contained nested data for the next score type. The data was arranged in a colored circular sunburst visualization in which color denoted risk level.
  • I wrote a Google Earth application that enabled a user to select a time range of request/response packet sessions and create a Google Earth view of the session communications. The relevant session metadata for each request/response pair was the lat/long coordinates of the IP addresses. This feature utilized the Google Earth API integrated with the Confidential Security Analytics (SA) web application.
  • I used the Flexera application-licensing framework to implement new features and enhancements within the Confidential Security Analytics (SA) web application. Various features allowed the user to manage licenses allocated to the devices that monitor events. Flexera has a license server (fneserver) that is deployed in a Linux-based CentOS environment.

Confidential

Senior Software Developer

Responsibilities:

  • I worked as a full stack developer in all tiers, implementing requirements for the Navy SPAWAR One Nalcomis application.
  • One Nalcomis was a web-based application that supported aircraft maintenance with the basic unit of work being a work order.
  • The application was written in Java and used Spring MVC, EJB 3.0 (specifically JPA), Hibernate, JBoss ESB, Spring Core, jBPM, and Drools. I was involved in all aspects of the above.
  • My specific duties included constructing JPA entity objects that map to tables, including any relationships (one-to-one or one-to-many joins) to other tables in order to traverse the object graph; coding SQL or HQL queries or named queries within the DAOs; coding the web tier; coding the UI; creating new or maintaining existing ESB services; creating new or maintaining EJB stateless session beans that serve as a DAO layer and demarcate transaction management; creating and maintaining rules within Drools; and making additions and revisions to jBPM workflows.
  • I designed the middle-tier (service layer) business rule component, which used the Chain of Responsibility pattern for applicable functionality (see the sketch after this list). Other duties included implementing web-tier controller classes and middle-tier service classes. Controller objects were kept as thin as possible, allowing the service objects to handle most of the business functionality, because the service layer had to be accessible via web service calls; the goal was for the same service object to serve both the UI client and the web service client.
  • I was assigned stories within a sprint and worked each story end to end, implementing functionality in all tiers.
  • The UI used Dojo, AJAX (using Dojo XHR), Spring MVC, FreeMarker, HTML, and heavy JavaScript, since many requirements demanded a lot of user interaction on the front-end.
  • I implemented proper Hibernate transaction management within a web service that used eager loading instead of lazy loading, since the web tier was not coupled to the DAO tier.
  • I collaborated with other developers in creating the abstraction layer and object mappings between the web tier, the ESB/business delegate layer, and the jBPM/Drools layer, with techniques conforming to SOA standards and best practices.
  • I utilized JAXB to convert queried result-set data to XML and used XSLT to transform the XML into HTML.
  • I created User and User-related tables, along with the associated JPA entities and infrastructure, for a timekeeping system in which multiple workers could be clocked in and out at the same time in one transaction. I also created the UI for the timekeeping system.
  • Wrote business rules for JBoss Drools using MVEL and Java, covering both regular stateless-session rules and the stateful session used by Complex Event Processing (CEP). The decision was eventually made to stop using CEP.
  • I had to have moderate knowledge of, and integration with, JBoss stateful-session Complex Event Processing (CEP). The portion of the application that used CEP allowed putting one or more aircraft maintenance workers in and out of work, with aggregations maintaining various worker-hour SUM and AVG calculations. As mentioned above, the decision was eventually made to stop using CEP.
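
A minimal sketch of the Chain of Responsibility pattern as applied to a service-layer business rule component (the rule names and WorkOrder fields are hypothetical):

```java
// Each rule handles the work order and decides whether processing
// continues down the chain.
import java.util.List;

interface BusinessRule {
    boolean apply(WorkOrder order); // true = pass to the next rule
}

class StatusRule implements BusinessRule {
    public boolean apply(WorkOrder order) {
        return !"CLOSED".equals(order.status); // stop on closed orders
    }
}

class AssignmentRule implements BusinessRule {
    public boolean apply(WorkOrder order) {
        if (order.assignee == null) {
            order.assignee = "UNASSIGNED";
        }
        return true;
    }
}

class WorkOrder {
    String status;
    String assignee;
}

class RuleChain {
    private final List<BusinessRule> rules =
            List.of(new StatusRule(), new AssignmentRule());

    public void process(WorkOrder order) {
        for (BusinessRule rule : rules) {
            if (!rule.apply(order)) {
                break;
            }
        }
    }
}
```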
