
Chief Architect & Senior Requirements Engineer Resume


Scottsdale, AZ

PROFESSIONAL EXPERIENCE:

Confidential

Chief Architect & Senior Requirements Engineer

Responsibilities:
  • Name confidential per NDA. Class 3 Medical Devices that generate both clinical and engineering data assisting patient diagnostics and quality engineering. The candidate provisioned analytic systems to gain predictive insights and warn of impending device failures. All Architecture implemented entirely atop Amazon Web Services (AWS).
  • Apache Storm is utilized for Real Time Clinical Medical Alerts and Warnings.
  • Apache Spark MLlib over Mesosphere DC/OS (orchestrating Mesos & YARN) and Hadoop over YARN utilized for long-term Clinical Learning, searching for telltale patterns in Medical Data & Events and for Engineering patterns indicating equipment failure modalities (see the Spark MLlib sketch following this list)
  • QA Testing Suites utilizing JUnit.
  • Name confidential per NDA. Building Management Systems (BMS) provisioning real-time & historical commercial & industrial building HVAC performance & predictive analysis from both financial and maintenance points of view. All Architecture implemented entirely atop Amazon Web Services (AWS).
  • Maintenance of existing Microservices utilizing the Spring Framework (Java using the principles of DI/IoC & AOP)
  • Porting of existing and creation of new Microservices utilizing Akka, Play & Scala
  • All code written in Scala utilizing a combined Functional & Object-Oriented Programming paradigm (FP/OOP); see the Scala sketch following this list. Principles emphasized were:
  • Strong usage of Case Classes and Pattern Matching throughout the design.
  • Strong usage of Abstract Types & Higher-Order Functions to better facilitate reusability of generic algorithms
  • XML Processing supported by the data-binding tool schema2src
  • Back end written in the Akka framework (see the Akka sketch following this list), emphasizing:
  • Actor-based architecture for lightweight, concurrent, asynchronous event handling
  • Fault Tolerance implemented as a 'let it crash' supervision paradigm
  • Location Transparency
  • State Persistence in which journaled messages are replayed
  • Front end GUI written in Play
  • Utilized Vagrant for the provisioning of multi-developer development environments
  • DevOps efforts governed by the Twelve-Factor App methodology & Scrum.
  • Systems developed as Software-as-a-Service (SaaS), with the code base deployed principally on AWS EC2 instances utilizing Akka & Scala in a Microservices architecture.
  • AWS Redshift (based on PostgreSQL 8.0.2) & Greenplum MPP DB utilized to perform both Data Warehousing and analytics of the client's BMS systems.
  • Sqoop utilized for ETL between Hadoop & RDBMS: MS SQL Server; Greenplum MPP DB & PostgreSQL.
  • QA Testing Suites utilizing: Cucumber, Gauge, Gherkin
  • Name confidential per NDA. Engineered Industrial Products utilizing Internet of Things (IoT) principles & technologies to develop event-driven, real-time & historical analytical and predictive applications improving maintenance & quality engineering. All Architecture implemented entirely atop Amazon Web Services (AWS). The candidate architected, designed & maintained systems with:
  • Apache Kafka utilized for Real Time Device Ingestion (see the Kafka sketch following this list); followed by Real Time Processing via Apache Storm; and data summarization, query, and analysis via Apache Hive; atop Apache YARN, in turn atop Apache Hadoop.
  • Deployed, Maintained & Tuned, in a Virtualized Environment, open-source software frameworks for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware.
  • Deployments from 10 to 125 nodes of CentOS & Ubuntu upon AWS EC2 instances utilizing Apache Ambari
  • Scheduling & Managing Hadoop jobs through Hue & Apache Oozie
  • Tuning Hadoop Clusters for better performance (a configuration sketch with example values follows this list) by:
  • Analyzing job history with Rumen
  • Benchmarking a Hadoop cluster with GridMix
  • Using Hadoop Vaidya to identify performance problems
  • Balancing data blocks for a Hadoop cluster
  • Using compression for input and output
  • Configuring speculative execution
  • Setting proper number of map and reduce slots for TaskTracker
  • Optimizing the JobTracker configuration
  • Optimizing the TaskTracker configuration
  • Optimizing shuffle, merge, and sort parameters
  • Configuring memory for a Hadoop cluster
  • Setting proper number of parallel copies
  • Tuning JVM parameters
  • Configuring JVM Reuse
  • Configuring the reducer initialization time
  • Provisioned Big Data services to small through mid-sized enterprises, principally utilizing Amazon Web Services (AWS).
  • Other Clients' Use Cases addressed & Technologies employed:
  • DevOps performing Continuous Integration (CI) & Continuous Deployment (CD) utilizing Jenkins, Docker, Ansible, Git
  • Microservices Development utilizing Red Hat JBoss EAP 6 (Java using JEE & the Drools Rules Engine)
  • Utilization of Spark GraphX to mine customer preferences from web-page visits associated with IoT Device Consumers
  • Maintaining Servers running Ubuntu, RHEL, SUSE & CentOS utilizing Bash Shell Scripts
  • Utilization of behavior-driven development (BDD) Methodologies with Cucumber
  • Utilization of test-driven development (TDD) Methodologies with TestNG
  • Pivotal Spring & JBoss JEE Applications prototyped in Spring Roo, JBoss Forge & WildFly, and Scala Play, then deployed to EC2
  • Creation and extension of Enterprise & Application Dashboards and general reporting utilizing Eclipse BIRT
  • Architecting several Enterprise Data Warehouses (EDW) with both Bottom-Up Data Mart-driven designs & Top-Down designs using a Normalized Enterprise Data Model, both tied together with a Data Vault Modeling methodology utilizing Hubs, Links & Satellites to better facilitate ETL Processing. Upstream Data Warehouse & downstream Business Intelligence (BI) Technologies utilized are:
  • Amazon Redshift;
  • Pivotal Greenplum;
  • Microsoft SQL Server Integration Services (SSIS).
  • Integration of Greenplum with Hadoop HDFS data stores
  • Big Data Architecting utilizing Apache Hadoop, Apache YARN & Ambari, Apache Avro, Apache Spark (Scala), PySpark & Python (UDFs), Apache Mesos, ZooKeeper, Cloudera Manager, Ganglia, Sqoop2, Pig, Hive (HiveQL), Presto (SQL query engine), Impala, Amazon S3, Amazon Glacier
  • Real Time Data Ingestion utilizing Kafka & Storm
  • Columnar Databases: Apache Cassandra, Apache HBase
  • Big Data Use Cases implemented: Bloom Filters for approximate set-membership and cardinality estimation; creation of Inverted Indexes for online web stores; Correlative Studies; principally in Python & PySpark
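
Illustrative Scala sketch of the combined FP/OOP style noted above (case classes, pattern matching, abstract types, higher-order functions). The domain names are hypothetical; the NDA-covered code base is not reproduced here.

```scala
// Illustrative only: hypothetical domain types, not the NDA-covered code base.
sealed trait BmsEvent
final case class SensorReading(unitId: String, metric: String, value: Double) extends BmsEvent
final case class FaultReport(unitId: String, code: Int) extends BmsEvent

// Abstract type member plus a higher-order function keep the algorithm generic.
trait Analyzer {
  type In <: BmsEvent
  def select: BmsEvent => Option[In]            // pattern-matching extractor
  def score(events: Seq[BmsEvent])(f: In => Double): Double =
    events.flatMap(select(_)).map(f).sum
}

object TemperatureAnalyzer extends Analyzer {
  type In = SensorReading
  val select: BmsEvent => Option[SensorReading] = {
    case r @ SensorReading(_, "supplyAirTempC", _) => Some(r)
    case _                                         => None
  }
}

object Demo extends App {
  val events = Seq(
    SensorReading("AHU-1", "supplyAirTempC", 18.5),
    FaultReport("AHU-2", 503),
    SensorReading("AHU-1", "supplyAirTempC", 21.0))
  // Higher-order usage: the scoring function is passed in by the caller.
  println(TemperatureAnalyzer.score(events)(_.value))   // 39.5
}
```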
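Illustrative sketch of the Akka back end described above, assuming the classic Akka Persistence API (PersistentActor) with a journal plugin configured; the actor, command, and event names are hypothetical.

```scala
// Illustrative sketch assuming classic Akka Persistence; names are hypothetical.
import akka.actor.{ActorSystem, Props}
import akka.persistence.PersistentActor

final case class RecordReading(unitId: String, value: Double)   // command
final case class ReadingRecorded(unitId: String, value: Double) // journaled event

class HvacUnitActor(unitId: String) extends PersistentActor {
  override val persistenceId: String = s"hvac-unit-$unitId"

  private var readings: List[Double] = Nil  // in-memory state rebuilt from the journal

  // Replayed on restart: state persistence via journaled messages.
  override def receiveRecover: Receive = {
    case ReadingRecorded(_, v) => readings = v :: readings
  }

  // 'Let it crash': no defensive try/catch here; a supervisor restarts the
  // actor and receiveRecover replays the journal to rebuild its state.
  override def receiveCommand: Receive = {
    case RecordReading(id, v) =>
      persist(ReadingRecorded(id, v)) { evt =>
        readings = evt.value :: readings
        sender() ! s"recorded ${readings.size} readings for $id"
      }
  }
}

object Main extends App {
  val system = ActorSystem("bms")
  val unit = system.actorOf(Props(new HvacUnitActor("AHU-1")), "ahu-1")
  unit ! RecordReading("AHU-1", 21.3)
}
```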
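Illustrative sketch of the Kafka real-time device-ingestion edge, using the standard Kafka producer client; the broker address, topic name, and payload are hypothetical.

```scala
// Illustrative Kafka producer for device telemetry; topic and fields are hypothetical.
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object DeviceIngest extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "broker-1:9092")
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

  val producer = new KafkaProducer[String, String](props)
  try {
    // Key by device id so readings from one device stay in partition order
    // for the downstream Storm topology.
    val record = new ProducerRecord[String, String](
      "device-telemetry", "device-42", """{"metric":"vibration","value":0.83}""")
    producer.send(record).get()  // block for the broker ack in this simple sketch
  } finally {
    producer.close()
  }
}
```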
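Illustrative sketch of a Spark MLlib learning pipeline of the kind described above for equipment-failure modalities; the column names, label, and storage paths are hypothetical.

```scala
// Illustrative Spark MLlib pipeline; columns and paths are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.classification.LogisticRegression

object FailureModel extends App {
  val spark = SparkSession.builder().appName("device-failure-learning").getOrCreate()

  // Historical engineering telemetry joined with a known failed/not-failed label.
  val history = spark.read.parquet("s3://example-bucket/device-history/")

  val features = new VectorAssembler()
    .setInputCols(Array("runtimeHours", "vibration", "tempC", "errorCount"))
    .setOutputCol("features")

  val lr = new LogisticRegression()
    .setLabelCol("failedWithin30d")
    .setFeaturesCol("features")

  val model = new Pipeline().setStages(Array(features, lr)).fit(history)
  model.write.overwrite().save("s3://example-bucket/models/failure-lr")
  spark.stop()
}
```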
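Illustrative sketch of the TaskTracker/JobTracker (MRv1-era) tuning knobs listed above, set through the Hadoop Configuration API; the values shown are examples, not the production settings.

```scala
// Illustrative MRv1-era tuning knobs; values are examples only.
import org.apache.hadoop.conf.Configuration

object TunedJobConf {
  def apply(): Configuration = {
    val conf = new Configuration()
    conf.setInt("mapred.tasktracker.map.tasks.maximum", 8)      // map slots per TaskTracker
    conf.setInt("mapred.tasktracker.reduce.tasks.maximum", 4)   // reduce slots per TaskTracker
    conf.setInt("mapred.reduce.parallel.copies", 10)            // parallel copies in the shuffle
    conf.setInt("io.sort.mb", 256)                              // map-output sort buffer
    conf.setInt("io.sort.factor", 64)                           // streams merged at once
    conf.setBoolean("mapred.compress.map.output", true)         // compress intermediate output
    conf.setBoolean("mapred.map.tasks.speculative.execution", true)
    conf.setInt("mapred.job.reuse.jvm.num.tasks", 10)           // JVM reuse across tasks
    conf.set("mapred.child.java.opts", "-Xmx1024m")             // per-task JVM heap
    conf
  }
}
```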

Confidential, Scottsdale, AZ

Information Systems Architect & Chief Requirements Engineer

Responsibilities:
  • Utilization of Agile & Kanban Methodologies
  • Servers maintained with Bash Shell Scripts
  • Analysis performed to optimize Fleet Schedule & Loading utilizing Spark over Mesos, written in Python, Scala & PySpark (see the Spark sketch following this list)
  • Real Time Data Ingestion utilizing Kafka & Storm
  • Performing ETL with PIG & Sqoop
  • Development of a customizable Load Based Fleet Management Systems utilizing:
  • Spring Framework
  • Spring Tool Suite (STS)
  • Spring Roo
  • Spring MVC
  • Design of a Hibernate ORM data bridge to the application's MySQL RDBMS
  • Design and maintenance of MySQL RDBMS
  • DevOps utilizing Jenkins for CI/CD, Maven for builds, and Git for CCM
  • Creation of live Transport Fleet Status Reporting applications utilizing Eclipse Business Intelligence and Reporting Tools (BIRT) from MySQL RDBMS
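
Illustrative sketch of a Spark fleet-loading analysis of the kind described above, assuming the Spark 2.x DataFrame API; the schema and storage path are hypothetical.

```scala
// Illustrative Spark job for fleet load analysis; schema and path are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object FleetLoading extends App {
  val spark = SparkSession.builder().appName("fleet-loading").getOrCreate()
  import spark.implicits._

  // Trip records: vehicleId, routeId, payloadKg, capacityKg, departs (timestamp).
  val trips = spark.read.parquet("s3://example-bucket/fleet/trips/")

  // Average utilization per vehicle; low utilization flags candidates for re-scheduling.
  val utilization = trips
    .withColumn("utilization", $"payloadKg" / $"capacityKg")
    .groupBy($"vehicleId")
    .agg(avg($"utilization").as("avgUtilization"), count("*").as("trips"))
    .orderBy(asc("avgUtilization"))

  utilization.show(20)
  spark.stop()
}
```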

Confidential

Software Engineer

Responsibilities:
  • Utilization of Test-Driven Development (TDD) Methodology with JUnit
  • Software development in the Microsoft .NET Framework, C#, SQL Server utilizing Visual Studio
  • Utilization of Agile & Kanban Methodologies
  • Creation of BI Dashboard & Ad Hoc Reporting applications utilizing Eclipse Business Intelligence and Reporting Tools (BIRT) atop MS SQL Server
  • HVAC Performance & Maintenance Analytics utilizing Hadoop
  • Performing ETL with PIG
  • HVAC, Plumbing, & Electrical Systems Monitoring, Controlling and Optimization utilizing Apache Spark (see the Spark sketch following this list)
  • Development of BMS software in Groovy/Grails, Python & Jython
  • DevOps of BMS software utilizing Jenkins, Git, Spock
  • Database Administration (DBA) in IBM DB2 and MS SQL
  • Database Design and Migration from IBM DB2 to MS SQL Server using ERWin
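
Illustrative sketch of HVAC drift monitoring with Apache Spark as referenced above; the BMS software there was largely Groovy/Python, so Scala is used here only for consistency with the rest of this page, and the schema, path, and threshold are hypothetical.

```scala
// Illustrative Spark job flagging HVAC units drifting from setpoint; schema is hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object HvacDrift extends App {
  val spark = SparkSession.builder().appName("hvac-drift").getOrCreate()
  import spark.implicits._

  // Readings: unitId, setpointC, supplyAirTempC, ts
  val readings = spark.read.json("s3://example-bucket/bms/hvac-readings/")

  val drift = readings
    .withColumn("absError", abs($"supplyAirTempC" - $"setpointC"))
    .groupBy($"unitId")
    .agg(avg($"absError").as("meanAbsError"))
    .filter($"meanAbsError" > 1.5)        // threshold chosen for illustration only
    .orderBy(desc("meanAbsError"))

  drift.show()
  spark.stop()
}
```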

Confidential, St. Petersburg, FL

Quality Engineer

Responsibilities:
  • Medical Device Analytics for Clinical & Maintenance Use Cases for Peritoneal Dialysis Machines (PDM):
  • Utilization of USDP & CMMI Methodologies
  • RDBMS Design and DBA to provision data for Medical Verification & Validation (V&V) QMS under ISO 13485 upon: Oracle; MS SQL Server; PostgreSQL
  • Application Development upon Apache Tomcat Web Server
  • Utilizing log4j & Chukwa to ingest real-time log data from home-based PDMs
  • Performing ETL with PIG
  • Utilizing Hadoop to perform predictive failure analytics of PDMs (see the MapReduce sketch following this list)
  • Utilizing Hadoop to perform non-clinical predictive analysis of patients undergoing peritoneal dialysis as an adjunct to physicians' patient care
  • Quality Management Systems (QMS) performing Verification & Validation (V&V) utilizing Relex, DOORS & ProEngineer
  • Mechanical Engineering Design utilizing ProEngineer & SolidWorks
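
Illustrative sketch of a Hadoop MapReduce job aggregating PDM fault codes as a precursor to the predictive failure analytics noted above; the log-line format ("<deviceId> <faultCode> ...") and paths are hypothetical.

```scala
// Illustrative MapReduce job counting fault codes per PDM; log format is hypothetical.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
import org.apache.hadoop.mapreduce.{Job, Mapper, Reducer}
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
import scala.jdk.CollectionConverters._

class FaultMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
  private val one = new IntWritable(1)
  override def map(key: LongWritable, value: Text,
                   ctx: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit = {
    val fields = value.toString.split("\\s+")
    // Emit "<deviceId>:<faultCode>" -> 1 for each ingested log line.
    if (fields.length >= 2) ctx.write(new Text(s"${fields(0)}:${fields(1)}"), one)
  }
}

class FaultReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
  override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                      ctx: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit =
    ctx.write(key, new IntWritable(values.asScala.map(_.get).sum))
}

object FaultCounts extends App {
  val job = Job.getInstance(new Configuration(), "pdm-fault-counts")
  job.setJarByClass(getClass)
  job.setMapperClass(classOf[FaultMapper])
  job.setReducerClass(classOf[FaultReducer])
  job.setOutputKeyClass(classOf[Text])
  job.setOutputValueClass(classOf[IntWritable])
  FileInputFormat.addInputPath(job, new Path(args(0)))
  FileOutputFormat.setOutputPath(job, new Path(args(1)))
  job.waitForCompletion(true)
}
```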

Confidential, San Jose, CA

Systems Analyst

Responsibilities:
  • The company designed, developed & marketed Genomic & Proteomic Assay Systems. Responsibilities while there included:
  • Utilization of the RUP & CMMI to develop LIMS
  • Performed Systems Analysis for Laboratory Information Management Systems (LIMS) utilizing Use Case Modeling & Rule-Based Modeling (for Alternative Flows & Exception State determinations). Models maintained in Rational Rose
  • Prototyping LIMS systems in MS Visual Studio & MS SQL Server
  • Database Design for the persistence of LIMS data in Oracle, MS SQL utilizing ERWin
  • Business Intelligence (BI) utilizing Crystal Reports

Confidential, Princeton, NJ

Senior Methodologist

Responsibilities:
  • Rolled out enterprise-wide a Development Case of the Rational Unified Process (RUP) and the Capability Maturity Model Integration (CMMI). Processes architected adhered to the Sarbanes-Oxley Act.
  • Determined enterprise-wide CMMI Maturity Level at the various Confidential campuses as a prequel to customizing the Development Case and establishing the Enterprise Training Syllabus
  • Architected the Confidential Unified Development Process (CUDP) by specializing the basal RUP Development Case for the development & maintenance of Laboratory Information Management Systems
  • Instructed Trainers in Business & System Requirements Management (RM)
  • Instructed Trainers in Object Oriented Analysis & Design (OOAD) in Rational Rose
  • Instructed Trainers in Configuration & Control Management (CCM) utilizing Rational ClearCase & ClearQuest
