RPA Architect Resume
West Windsor, NJ
EXPERIENCE SUMMARY
- Principal AI architect and engineer with over twenty years of experience.
- Business acumen in a host of diverse industries, including high technology, banking, securities, insurance, retail, transportation, media, outsourced services, and healthcare.
- Expert in core AI-related software architecture and engineering disciplines, including NLP, deep learning, and the simulation of human-like reasoning.
- Track record of creating highly successful software products and services.
- Extremely broad range of skills that have consistently created new revenue streams, reduced costs, increased profitability, and provided clients with a competitive edge in the markets they serve.
- Future State Enterprise Architecture
- Artificial Intelligence
- Decision Support Systems
- Deep Learning
- Human Reasoning
- Intelligent Assistants
- Offer and Recommendation Engines
- Cognitive Computing
- Business Rules Engines
SKILLS INVENTORY:
Cognitive Computing, Artificial Intelligence, SOA, SOAP, XML, Spark, Scala, Ignite, data grid, Storm, NoSQL, Hadoop, Flume, Hive, HDFS, HBase, MongoDB, Cassandra, Big Data, Kafka, data pipeline, EAI, Natural Language Processing, ESB, BRMS, business rule engines, Prolog, speech recognition, speech synthesis, Robotic Process Automation, CEP, BPM, Current and Future State Architecture, Physical and Logical Architectures, Linux, platform services, Intelligent Virtual Assistants, enterprise architecture, Java, Servlets, EJB, JMS, Spring, Python, imbalanced-learn, scikit-learn, Pandas, NumPy, Deep Learning, H2O, TensorFlow, Keras, virtual assistants, data mining, predictive analytics, component-based design, OOAD, design patterns.
PROFESSIONAL EXPERIENCE:
Confidential, West Windsor, NJ
RPA Architect
Responsibilities:
- Researched and assessed leading approaches to building deep learning models that analyze medical images in order to make a diagnosis. Using sample x-ray and blood smear datasets from the National Library of Medicine and the National Institutes of Health, adapted open source implementations to maximize the recall rate for the classification of pneumonia and malaria images. Recall for the classification of pneumonia x-rays approached 96%.
- Developed a Natural Language Processing/Understanding (NLP/NLU) front end for a Public Key Infrastructure (PKI) proof of concept. Utilized Prolog’s Definite Clause Grammar (DCG) representation to create a parser which ensured that the language constructs conformed to well-defined sentence structure and form. Incrementally built a knowledge-based representation of the spoken sentences during parsing, through the use of lambda calculus expressions, in order to determine the semantic and contextual meaning of the statements.
- Used scikit-learn, imbalanced-learn, and pandas to build a highly accurate machine learning model that correctly identified 5 out of 6 plan members who were likely to experience a stroke at some time in the future (an illustrative sketch of this approach follows this responsibilities list).
- Constructed the data pipeline used to manipulate raw training and test data: dropped rows having large numbers of missing values, imputed mean values, and performed one-hot encoding of nominal attributes.
- As the original dataset was imbalanced (98%/2%), used random over- and under-sampling, as well as the SMOTE algorithm, to ensure that there were an equal number of positive and negative target values within the training dataset.
- Built predictive models using logistic regression, random forest, and support vector machine algorithms. Compared model performance using a confusion matrix and the calculation of precision, recall, and F1 scores.
- Developed a detailed business case and reference architecture for the use of machine learning, natural language understanding, and human-like heuristics to delay the onset, and reduce the mortality and morbidity rates, normally associated with strokes within the member population. This would be accomplished through the identification of high-risk groups and the subsequent creation of intervention strategies targeted towards delaying or improving the medical outcome.
- Reviewed potential AI use cases across different lines of business and operating groups within a health insurance environment, including compliance, claims processing, and member health. The goal was to identify high-value opportunities for cognitive automation that would readily lend themselves to a proof of concept with a high likelihood of success.
- Built a “No Cost” Robotic Process Automation (RPA) environment from scratch, using common off-the-shelf infrastructure components. This included the JADE mobile agent platform, which provided a highly scalable, container-based system for executing, managing, and monitoring the entire lifecycle of robotic software processes. Also incorporated the jBPM business process manager to externalize both business logic and robotic workflows.
- Developed a lightweight RPA framework (a set of APIs) that “wrapped” the above-mentioned open source infrastructure components. This simplified the coding effort and reduced the complexity of writing any type of RPA application.
- Using the “homegrown” RPA environment, built a proof of concept to automate some of the laborious manual processes that auditors perform within the corporate compliance group.
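A minimal sketch of the imbalanced-classification approach described above, using pandas, imbalanced-learn (SMOTE), and scikit-learn. The file name, column names, and parameter values are hypothetical stand-ins, not the actual project data.

```python
# Illustrative sketch only: file name, column names, and parameters are hypothetical.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix

# Load raw member data and drop rows with many missing values.
df = pd.read_csv("members.csv")
df = df.dropna(thresh=int(0.7 * df.shape[1]))

# Impute remaining numeric gaps with the mean, then one-hot encode nominal attributes.
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())
df = pd.get_dummies(df, drop_first=True)

X = df.drop(columns=["stroke"])
y = df["stroke"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Rebalance the training set only (roughly 98/2 in the original data) using SMOTE.
X_train_bal, y_train_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

# Fit a logistic regression baseline and compare precision, recall, and F1 on held-out data.
model = LogisticRegression(max_iter=1000)
model.fit(X_train_bal, y_train_bal)
print(confusion_matrix(y_test, model.predict(X_test)))
print(classification_report(y_test, model.predict(X_test)))
```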
Artificial Intelligence/Digital Architect
Confidential
Responsibilities:
- Reviewed potential AI use cases across business lines, including drug discovery, translational medicine, medical, clinical trials, legal, regulatory, manufacturing, and commercial, in order to identify those with the greatest chance of being successfully implemented using existing technologies.
- Developed an AI Platform Reference Architecture that effectively combined learning, reasoning, natural language understanding, machine vision, and speech recognition. The architecture was created to support an extremely wide range of Robotic Process Automation (RPA), operational decision management (ODM), machine learning (ML), and natural language processing and understanding (NLP/NLU) use cases on a single integrated platform.
- Authored a strategic roadmap for the use of artificial intelligence across the enterprise, looking out three years. The roadmap covers the phased adoption of the functional capabilities described within the AI Platform Reference Architecture.
- Began work on the first phase of the physical implementation of the AI Platform Reference Architecture, using Amazon EC2 instances and S3 storage.
- Built deep learning models that automated the previously manual process of identifying adverse drug events. Cleansed, enriched, and joined text extracted from live chat conversations, as well as incident reports taken directly from health care providers (HCPs). Using TensorFlow and the Keras framework, built and tested binary classification models that predicted the likelihood of an adverse event occurring solely from the text of a chatbot conversation. The deep learning models effectively combined Long Short-Term Memory (LSTM) networks with one-dimensional convolutional neural networks (CNNs) to achieve a level of accuracy comparable to humans (an illustrative sketch of this model architecture follows this responsibilities list).
- Utilized Amazon EC2 instances and S3 storage to build a working prototype of the Adverse Event Detection System (AEDS).
- Assessed the potential of using Prolog-based definite clause grammars (DCGs) for natural language understanding (NLU), in order to supplement, or possibly replace, existing cloud-based AI service offerings currently being used for chatbot development.
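A minimal sketch of the adverse-event text classifier described above, combining a one-dimensional CNN with an LSTM in Keras. The vocabulary size, sequence length, and layer sizes are illustrative assumptions rather than the production settings.

```python
# Illustrative sketch only: vocabulary size, sequence length, and layer sizes are hypothetical.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # tokens kept after fitting a tokenizer on the chat transcripts
MAX_LEN = 200        # padded length of each tokenized conversation

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Conv1D(64, 5, activation="relu"),   # local n-gram features
    layers.MaxPooling1D(4),
    layers.LSTM(64),                           # longer-range conversational context
    layers.Dense(1, activation="sigmoid"),     # probability of an adverse event
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x_train: padded integer sequences of length MAX_LEN; y_train: 0/1 adverse-event labels (not shown).
# model.fit(x_train, y_train, validation_split=0.2, epochs=5, batch_size=32)
```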
Chief Scientist
Confidential
Responsibilities:
- Developed a framework to build realistic 3D intelligent virtual assistants (IVA). The framework significantly reduces the amount of development time, as well as the technical skills, that are needed to build IVAs.
- Built a cognitive computing engine that combines mathematics, including deep learning algorithms, with human-like reasoning, in order to create a new form of ‘hybrid intelligence’.
- Designed and developed the interfaces, and reference implementations, for ‘plug-n-play’ decision strategies that can be loaded by the cognitive engine at runtime.
- Identified, tested, and integrated over fifteen core AI and infrastructure components, including deep learning (and other ML techniques), natural language processing, speech recognition, speech synthesis, a compute grid, distributed in-memory cache, forward chaining rules engine, predicate logic based reasoner (based on Prolog), text analytics tools, as well as a cluster based task scheduler and computing resource manager.
- Designed a set of interoperable services, and implemented their physical application programming interfaces (APIs), to encapsulate the full functionality of the underlying Confidential infrastructure.
- Built supervised classification models, including those used for sentiment analysis, using scikit-learn’s support vector machine (SVM), logistic regression, SGD, and decision tree algorithms.
- Built text classification models using TensorFlow and the Keras deep learning framework, including some which utilized GloVe-based word embeddings.
- Performed data pre-processing, including imputation of missing values, one-hot encoding, and converting numerical attributes to a standard scale.
- Performed feature selection and extraction using both supervised and unsupervised means.
- Utilized scikit-learn’s pipeline and grid search capabilities to automate the model-building process (an illustrative sketch follows this responsibilities list).
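A minimal sketch of the scikit-learn pipeline and grid search pattern referenced above; the preprocessing steps, SVM parameter grid, and scoring metric are illustrative assumptions.

```python
# Illustrative sketch only: the grid values and scoring choice are hypothetical.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Chain preprocessing (imputation, scaling) with an SVM classifier in one pipeline.
pipe = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),
    ("scale", StandardScaler()),
    ("clf", SVC()),
])

# Search hyperparameters with cross-validation; step names prefix the parameter names.
grid = GridSearchCV(
    pipe,
    param_grid={"clf__C": [0.1, 1, 10], "clf__kernel": ["linear", "rbf"]},
    scoring="f1",
    cv=5,
)

# X, y: preprocessed feature matrix and labels (not shown here).
# grid.fit(X, y)
# print(grid.best_params_, grid.best_score_)
```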
Intelligent Virtual Assistant
Confidential
Responsibilities:
- Assessed the functional capabilities and technical design of the existing marketing offer engine, which was a batch oriented system built using SAS and a homegrown rules engine.
- Designed a new intelligent offer engine, using the Confidential cognitive engine, that would provide personalized product recommendations across divisional lines, making the best possible offer based upon events that were taking place in real time.
- Developed an intelligent virtual assistant (IVA) that would enable the marketing team to author product eligibility rules via plain English conversation. In addition, the IVA would also provide advice to data scientists on how to improve their deep learning models.
- Built deep learning and logistic regression models, using H2O and TensorFlow/Keras, to predict a customer’s ability to spend and propensity to respond to core product offers (an illustrative sketch follows this responsibilities list).
- The prototype of the new AI-based system processed cardholder ‘life’ events in real time, allowing offer eligibility, and hence the recommendations being made, to change dynamically.
- Authored eligibility and offer logic in Prolog, which was leveraged by the cognitive engine at application runtime.
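A minimal sketch of the H2O side of the propensity modeling described above, using a binomial GLM (H2O's logistic regression). The file name, target column, and settings are hypothetical.

```python
# Illustrative sketch only: the file, column names, and settings are hypothetical.
import h2o
from h2o.estimators.glm import H2OGeneralizedLinearEstimator

h2o.init()

# Cardholder features with a binary "responded" target for a core product offer.
frame = h2o.import_file("cardholders.csv")
frame["responded"] = frame["responded"].asfactor()

train, valid = frame.split_frame(ratios=[0.8], seed=42)

# A binomial GLM is H2O's logistic regression; lambda_search tunes regularization.
glm = H2OGeneralizedLinearEstimator(family="binomial", lambda_search=True)
glm.train(
    x=[c for c in frame.columns if c != "responded"],
    y="responded",
    training_frame=train,
    validation_frame=valid,
)
print(glm.auc(valid=True))
```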
Intelligent Virtual Assistant (Technical Support Representative)
Confidential
Responsibilities:
- Worked with corporate innovation team to identify and assess business use cases in which cognitive computing technology could be successfully applied
- Reviewed call transcripts relating to the resolution of networking issues that were handled by a cross section of level two help desk support agents.
- Interviewed subject matter experts to understand the thought processes, and decision making criteria, used by technical support agents to resolve networking issues
- Designed and built a fully conversant digital help desk agent using the Confidential cognitive engine.
- Devised a means by which to comprehend both the content and context of written notes taken by level one support agents during their initial conversation with the customer.
- Utilized natural language processing to perform part-of-speech tagging and named entity recognition (an illustrative sketch follows this responsibilities list).
- Created domain-specific vocabulary and language models to be used by the speech recognizer.
- Developed the code and configuration settings needed to achieve continuous speech recognition.
- Authored production rules which analyzed sentence grammar and structure in real time as sentences were spoken, with the purpose of enhancing language understanding.
- Developed Prolog code to facilitate the question-and-answer process, as a tool by which to prune the potential solution space and arrive at the correct resolution to the networking issue.
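A minimal sketch of the part-of-speech tagging and named entity recognition step described above. The source does not name the NLP toolkit, so spaCy is assumed here, and the sample agent note is invented.

```python
# Illustrative sketch only: the NLP toolkit (spaCy) and sample note are assumptions.
# Requires the small English model: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

note = "Customer reports the Cisco router at the Newark office drops its VPN tunnel every morning."
doc = nlp(note)

# Part-of-speech tags for each token in the level one agent's note.
for token in doc:
    print(token.text, token.pos_, token.tag_)

# Named entities (organizations, locations, etc.) recognized in the note.
for ent in doc.ents:
    print(ent.text, ent.label_)
```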
Intelligent Virtual Assistant (Customer Retention At Point of Disconnect)
Confidential
Responsibilities:
- Worked with business stakeholders and subject matter experts to define use case and business requirements.
- Gathered functional requirements by reviewing retention center guidelines and recorded audio transcripts, and by conducting one-on-one interviews with key customer service reps.
- Created application level architecture, and detailed design, for an intelligent virtual assistant that would serve in a customer service role at the point of retention.
- Built a fully conversant system using the Confidential cognitive engine.
- Created and implemented paradigm for dynamically asking questions and developing rapport with customers using Prolog.
- Combined cognitive and mathematical models of human behavior to create a hybrid recommendation engine that made suggestions on how to repackage customers into a different service tier, or to offer discounts for lowering their monthly bill.
- Developed a domain-specific vocabulary and language model, and trained the speech recognizer.
- Utilized natural language processing (NLP) tools to break down sentences into their elemental parts of speech (POS)
- Developed a dialog based subsystem which maintained conversational state, and served to facilitate a question and answer based investigational paradigm
- Developed production rule code to interpret and derive meaning from spoken words
- Built and incorporated predictive models that identified a customer’s likelihood to churn.
- Ingested file based Damballa hourly data into HDFS using Flume
- Developed Hive scripts using the Hue web interface included with Cloudera CDH 5.3 distribution which cleansed, transformed, and aggregated subscriber threat intelligence data. Wrote inner and outer joins in Hive
- Inserted data into Damballa reporting tables that utilized Hive dynamic partitions.
- Developed Spark driver application in Scala to process threat intelligence feeds.
- Built a real-time data ingestion pipeline using Spark Streaming (an illustrative sketch follows this responsibilities list).
- Performed filtering and aggregation of raw data, creating logical tables in memory, using Spark SQL. Used a case class to represent the record structure and then overlaid it on top of the feed. Issued real-time queries against the in-memory database, as a precursor to notifying the OSM system of subscriber-level threats through a web service based client.
- Created data frames from raw events that were read from partitioned Kafka topics in parallel.
- Utilized time windows to aggregate information and merge it with data stored within HDFS.
- Used clustering algorithms to categorize similar types of infections and malware using RapidMiner desktop data mining toolkit
- Reviewed and assessed Spark’s machine learning libraries for possible use on the project.
- Created the platform architecture for an operational data store (ODS), built upon Hadoop, that would house core customer product usage information across multiple product lines.
- Defined the logical as well as physical architectures for the ODS.
- Defined the application programming interfaces (APIs) that would be exposed to both producers and consumers of data. These “platform services” provide support for data ingestion, schema registration and table definition, data validation, data transformation, searching the metadata store, and querying the repository itself.
- Incorporated rules engine technology into the Hadoop environment, in order to externalize billing logic and make it available to multiple applications.
- Worked with subject matter experts to define per-security and cross-account billing rules. Re-engineered existing billing logic, which had previously been implemented using stored procedures and Java code, into a structured rules language (SRL). The rules could then be version controlled and kept externally from the rest of the Java application.
- Installed and configured a Zookeeper server, and Kafka message broker. Designed a high volume, low latency Hadoop ingestion framework. Developed a custom publisher, and corresponding consumer, using the Kafka backbone. The consumer was capable of reading from multiple partitions within a given topic in parallel.
- Developed a custom Hive client that would read database schemas defined within JSON documents, and then automatically create the corresponding table structures within the Hive metadata store.
- Created the software architecture for a platform that would be used to ingest, manage, process, monitor, and derive business knowledge from massive amounts of transactional data.
- Defined the logical, physical, and deployment architecture for all components.
- Developed a detailed design for the framework including UML class and sequence diagrams
- Evaluated and recommended a job scheduler, business process management engine, data grid, complex event processor, and NoSQL data store.
- Developed initial implementations of each of the core components within the framework. This included subsystems for polling, scheduling, data transport, data ingestion, data transformation, lifecycle management, events generation and processing, persistence, in-memory caching, process orchestration, configuration and job execution.
- Developed and maintained both framework and application level MapReduce jobs.
- Built a file sequencer that made more effective use of HDFS block storage.
- Wrote a “universal” job runner that could submit any type of workload to the cluster
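A minimal sketch of the streaming ingest and aggregation pattern described above. The production driver was written in Scala with Spark Streaming; this PySpark Structured Streaming version, with a hypothetical topic name, event schema, and broker address, only illustrates the same Kafka-to-in-memory-table flow.

```python
# Illustrative sketch only: the production driver was Scala; topic, schema, and broker are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("ThreatFeedIngest").getOrCreate()

# Schema of a single threat-intelligence event (stands in for the Scala case class).
event_schema = StructType([
    StructField("subscriber_id", StringType()),
    StructField("threat_type", StringType()),
    StructField("event_time", TimestampType()),
])

# Read the partitioned Kafka topic in parallel as a streaming DataFrame.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "threat-intel")
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Aggregate threats per subscriber over 10-minute windows.
threat_counts = events.groupBy(
    window(col("event_time"), "10 minutes"), col("subscriber_id"), col("threat_type")
).count()

# Expose the running aggregation as an in-memory table that downstream logic
# (e.g. the OSM notification client) can query in real time.
query = (
    threat_counts.writeStream.outputMode("complete")
    .format("memory")
    .queryName("subscriber_threats")
    .start()
)
# spark.sql("SELECT * FROM subscriber_threats").show()
```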
Confidential
Responsibilities:
- Gathered target state requirements and developed a big data reference architecture.
- Architected and designed a system which would aggregate, distribute, process, and make intelligent decisions from five billion network events that occur each and every day.
- Installed and configured a development environment which included Ubuntu, a multi-node Hadoop and HDFS cluster, HBase and Storm.
- Wrote Hadoop MapReduce jobs to ingest log files and persist events on a temporal basis within an HBase repository.
- Authored a spout within Storm which used the static events persisted within the HBase repository to simulate real-time streaming data. Authored multiple Storm bolts used to aggregate, filter, and correlate network events. Integrated a complex events processor, rules engine, as well as predictive modeling tools into the Storm environment.
- Built decision trees that predicted the likelihood of CMTS device failures with extremely high reliability, given network usage patterns, time-based frequency of certain events, battery state, and age of equipment (an illustrative sketch follows this responsibilities list).
- Evaluated in-house developed tools and software components that are used to make recommendations or suggestions, including the cross-sell and up-sell of products, as well as suggestions for movies and TV shows that customers would enjoy watching.
- Performed detailed analysis, as part of the due diligence process, of third-party products including Oracle Real-Time Decisions (RTD), IBM Coremetrics Intelligent Offer, and Veveo.
- Evaluated open source tools that could be used to build a finely tailored, in-house recommendation facility, including a rules engine, collaborative filtering algorithms, and feature rich data mining toolkits.
- Created matrix which correlated business requirements to core algorithms and rules based paradigms that could be used to implement five different types of recommendations.
- Developed the technical design to support the business requirements for concentrated positions, and acceptable assets.
- Created an XML-based XOM, and then generated the Business Object Model (BOM) from it. Wrote custom code to enhance the BOM functionality.
- Authored complex rules using both the ILOG Rule Language and the IntelliRule editor.
- Used rule flows in conjunction with the Rete algorithm to manage and control program execution.
- Created the project level artifacts that were needed in order to deploy rule applications as Hosted Transparent Decision Services (HTDS). The rule applications were deployed within a Rule Execution Server (RES) environment that was running on top of WAS 7.0.
- Generated Java based web service clients, using the Axis2 toolkit, in order to test the HTDS services.
- Provided hands-on training to client personnel. Topics included the ILOG Rule Language (IRL), the higher level IntelliRule editor, how to create a BOM from an XML-based XOM, and how to build and deploy rule-based web services.
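A minimal sketch of the CMTS failure-prediction decision tree described above. The source does not specify the modeling tool, so scikit-learn is assumed, and the file name and feature columns are hypothetical.

```python
# Illustrative sketch only: modeling tool (scikit-learn), file name, and feature names are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Hypothetical per-device records: usage, event frequencies, battery state, equipment age, failure flag.
df = pd.read_csv("cmts_devices.csv")
features = ["avg_utilization", "error_events_per_hour", "battery_state", "equipment_age_months"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["failed"], test_size=0.2, stratify=df["failed"], random_state=42
)

# A shallow tree keeps the learned failure rules readable for network operations staff.
tree = DecisionTreeClassifier(max_depth=5, class_weight="balanced", random_state=42)
tree.fit(X_train, y_train)
print(classification_report(y_test, tree.predict(X_test)))
```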
Confidential, Manalapan, NJ
Enterprise Architect
Responsibilities:
- Evaluated and compared industry leading rules engine products.
- Created a rules engine architecture, as well as a strategy and roadmap for using a BRMS within a health care portal environment.
- Worked with business unit leaders to develop the requirements, and build a proof of concept that would add human-like intelligence to the clinical components within the member portal. The goal was to improve the health and well-being of plan members, in order to reduce the dollar volume of claims being submitted.
- Developed business entity, domain, and business object models.
- Designed and implemented a lightweight, extensible rule engine service (an illustrative sketch follows this responsibilities list). The framework encapsulated the core functionality normally found within an inference engine, and made it available to various portal applications within a distributed computing environment.
- The architecture allowed products from multiple vendors to be “plugged into” the service layer, including open source products, without any impact to the consuming client application. The service could also be scaled out onto a grid or cloud computing environment, if needed.
- Created an implementation and deployment architecture for the rule engine framework, rule repository, and application specific rule sets.
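A minimal sketch of the pluggable rule engine service described above. The resume does not specify the implementation language, so this Python outline only illustrates the vendor-neutral facade idea; all class and method names are hypothetical.

```python
# Illustrative sketch only: class and method names are hypothetical, not the production API.
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class RuleEngineProvider(ABC):
    """Adapter implemented once per vendor (or open source) rules engine."""

    @abstractmethod
    def load_ruleset(self, ruleset_name: str) -> None: ...

    @abstractmethod
    def execute(self, facts: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Run the loaded ruleset against the supplied facts and return derived results."""


class RuleEngineService:
    """Service layer exposed to portal applications; unaware of the underlying engine."""

    def __init__(self, provider: RuleEngineProvider) -> None:
        self._provider = provider

    def evaluate(self, ruleset_name: str, facts: Dict[str, Any]) -> List[Dict[str, Any]]:
        self._provider.load_ruleset(ruleset_name)
        return self._provider.execute(facts)

# A concrete provider for a given product is "plugged in" at deployment time, e.g.:
#   service = RuleEngineService(SomeVendorProvider(...))
#   results = service.evaluate("clinical-guidelines", {"member_id": "123", "a1c": 7.9})
```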