
DevOps Cloud Automation Engineer Resume

SUMMARY

  • TS/SCI cleared with Full Scope Polygraph; AWS Certified Solutions Architect - Professional with 17+ years of experience.
  • Background skills include AWS, Big Data, DevOps, virtualization, storage, cloud technology, and networking.

TECHNICAL SKILLS

OPERATING SYSTEMS: Windows Server 2008/2012/2016, Windows 7, AIX, Linux (CentOS, Red Hat, Ubuntu, Fedora), OS X, and Solaris

IBM ECM/BPM/ICM: ECM 4.x/4.5.x/5.1.x/5.2.x, BPM 4.x/4.5.x/5.1.x/5.2.x, IBM Case Manager (ICM), IBM Content Navigator (ICN), Business Process Framework (BPF), InfoSphere Content Collector (ICC), Records Manager

RDBMS: Oracle 10g/11g/12c, MS SQL Server 2005/2008/2012/2016, DB2 9.7/10.1

VMWARE: ESX 3.5, vSphere 4.0, vCenter 5.0/6.0, Workstation 6.0/6.5/7.0

CLOUD: AWS, Azure, OpenShift, OpenStack, PCF

ADDITIONAL SKILLS: UNIX (Solaris, SGI, HP-UX, AIX, CentOS, Ubuntu), Linux Red Hat, AWS, Cloudera, Windows Server 2008 R2/2012/2016, OpenStack, Puppet, Hadoop, EMC Isilon, VNX, NetApp, HTML, Internet Explorer, Netscape Navigator, McAfee, HEAT management tools, Lotus Notes, Pine, Hummingbird Exceed, NetBackup, VMware, Citrix XenApp/XenDesktop, Cisco Nexus, SQL Server 2012, Active Directory, Certification and Accreditation (C&A) process, security scanning tools, Varonis, Linux server and workstation administration, HPC administration, NAS and SAN administration and engineering, system security administration and engineering, system integration, data migration, computer systems certification and accreditation, data center management, data center migration, big data management, Tanium Foundation, Operations, & IR Deep Dive

PROFESSIONAL EXPERIENCE

Confidential

DevOps Cloud Automation Engineer

Responsibilities:

  • Focus on optimizing existing systems, building and securing cloud infrastructure, and eliminating toil through automation.
  • Ensure that integration architecture is consistent with application reference architectures and with infrastructure architecture and standards.
  • Help automate, and take responsibility for, how SaaS and PaaS products relate to each other.
  • Improve the complete lifecycle of services, from inception and design through deployment, operation, and refinement.
  • Support services before go-live by developing automation, developing security frameworks, and planning continuous delivery cycles.
  • Maintain live services by helping to measure and monitor availability, security, and overall system health.
  • Assisted in the development of the framework to manage the integration architecture for UBS.
  • Scale systems sustainably through mechanisms like automation, and evolve systems by pushing for changes in reliability, security, and velocity. Implemented a serverless architecture using API Gateway, Lambda, and DynamoDB; deployed AWS Lambda code from Amazon S3 buckets and created a Lambda deployment function configured to receive events from an S3 bucket.
  • Design the data models used in data-intensive AWS Lambda applications that perform complex analysis and produce end-to-end analytical reports.
  • Create and maintain highly scalable, fault-tolerant multi-tier AWS and Azure environments spanning multiple availability zones using Terraform and CloudFormation.
  • Manage existing applications and create new applications (visual and non-visual).
  • Manage Splunk user accounts (create, delete, modify, etc.).
  • Create data retention policies and perform index administration.
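
The S3-triggered Lambda deployment function described above can be sketched as a minimal handler. The bucket name, object key, and deployment step below are hypothetical illustrations, not details from this resume.

```python
import json
import urllib.parse

def lambda_handler(event, context):
    """Minimal sketch of a Lambda function receiving S3 events.

    In AWS this would be wired to the bucket via an S3 event
    notification; here it only parses the event payload.
    """
    deployed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 delivers object keys URL-encoded (spaces arrive as '+').
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # A real deployment function might call
        # lambda_client.update_function_code(...) here (assumption).
        deployed.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(deployed)}

# Usage with a trimmed-down sample S3 PUT event (hypothetical names):
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "deploy-artifacts"},
                "object": {"key": "builds/app+v2.zip"}}}
    ]
}
result = lambda_handler(sample_event, None)
```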

Confidential

Architect

Responsibilities:

  • Deploy cloud infrastructure (security groups and load balancers needed to support the EBS environment).
  • Perform integration architecture assessments during Solution Gate Reviews.
  • Used CloudWatch to monitor various application issues in AWS.
  • Documented the customer's entire integration architecture design and analysis work.
  • Implemented a serverless architecture using API Gateway, Lambda, and DynamoDB; deployed AWS Lambda code from Amazon S3 buckets and created a Lambda deployment function configured to receive events from an S3 bucket.
  • Design the data models used in data-intensive AWS Lambda applications that perform complex analysis and produce end-to-end analytical reports.
  • Create and maintain highly scalable, fault-tolerant multi-tier AWS and Azure environments spanning multiple availability zones using Terraform and CloudFormation.
  • Manage existing applications and create new applications (visual and non-visual).
  • Manage Splunk user accounts (create, delete, modify, etc.).
  • Provided technical support in planning, designing, and developing enterprise integration architecture.
  • Create data retention policies and perform index administration, maintenance, and optimization.
  • Work with third-party application, hosting, and CDN providers to integrate data feeds into a centralized Splunk platform.
  • Provide overall management of the Splunk platform.
  • Assist with the design of core scripts to automate Splunk maintenance and alerting tasks; support Splunk on UNIX.
  • Write Terraform scripts from scratch for building Dev, Staging, Prod, and DR environments.
  • Installed and configured the BigQuery application in the cloud.
  • Supported issues with BigQuery as needed.
  • Supported large big data applications hosted on BigQuery.
  • Ensured that large applications in the customer's environment use BigQuery for handling and analyzing big data.
  • Work with customer applications on BigQuery to manage data using fast SQL-like queries for real-time analysis.
  • Designed and developed various web forms using HTML, CSS, Bootstrap, JavaScript, and React.js.
  • Created documentation for all components included in the React-Bootstrap page.
  • Provided tailored designs to multiple teams using the service mesh.
  • Designed and implemented solutions for scaling DNS, auditing, and authentication on the service mesh.
  • Work with API management, RESTful APIs, and managed API gateways.
  • Monitor ESB and API usage and effectiveness.
  • Ensure security and compliance of APIs and assist with discovery efforts.
  • Provided Tier 3 troubleshooting support for API capabilities for enterprise services (e.g., ESB, API Manager, performance and utilization reporting systems).
  • Maintain and support PaaS and API gateway infrastructure and associated tools
  • Lead API lifecycle development; responsible for software development and code quality
  • Set up an API Gateway installation.
  • Design, implement, test and deploy APIs using the latest technologies and best practices.
  • Implement API management using API management software (such as Apigee), including API proxies, mashups, rate limiting, security, analytics, monetization, and developer portals.
  • Set up processes for policy generation using various techniques.
  • Set up error management and log management processes.
  • Set up onboarding, production rollout, and support processes using the API gateway tool.
  • Work with network and security engineers to maintain CA Layer7 security: set up infrastructure; install Layer7 products and patches; and design, develop, modify, configure, debug, and evaluate application programs for functional business areas. Strict adherence to change control and process documentation is required.
  • Enable users to understand how to use the API gateway platform.
  • Gather requirements and build, test, and roll out projects using the API Gateway tool.
  • Support the current installation of API Gateway projects.
  • Developed API services in an Agile environment.
  • Manage the OpenShift cluster, including scaling the AWS app nodes up and down.
  • Develop coursework designed for customers looking to learn DevOps.
  • Teach clients the ins and outs of Cloud Foundry.
  • Wrote Ansible scripts to ensure the OpenShift Container Platform works in sync with the bank's systems of record.
  • Implemented microservices on Red Hat OpenShift, based on Kubernetes, etcd, and Docker, to achieve continuous delivery.
  • Provide day-to-day training on Cloud Foundry architecture and the various components of Cloud Foundry.
  • Worked on OpenShift for container management and to enhance container platform multi-tenancy.
  • Designed a patch process and wrote Ansible playbooks for patching OpenShift (RHEL and Atomic OS) and for OpenShift bug fixes.
  • Provide training on how to tailor apps so they run correctly on cf push, and how to write a manifest that makes the deployment process repeatable and predictable.
  • Teach clients how to perform blue-green deployments of apps already running in Cloud Foundry; the course also prepares students for the Cloud Foundry Certified Developer exam.
  • Create and manage TFS continuous integration builds on VSTS.
  • Responsible for installation and configuration of Jenkins to support various Java builds, and of Jenkins plugins to automate continuous builds and publish Docker images to the Nexus repository.
  • Manage Docker orchestration and Docker containerization using Kubernetes.
  • Used Kubernetes to orchestrate the deployment, scaling, and management of Docker containers.
  • Responsible for maintaining AWS instances as part of the EBS deployment.
  • Developed business logic using Python.
  • Build and maintain Docker container clusters managed by Kubernetes (Linux, Bash, Git, Docker) on AWS and Azure.
  • Utilized Kubernetes for the runtime environment of the CI/CD system to build, test, and deploy.
  • Provided support on AWS services and DevOps, deploying applications in AWS to help take full advantage of the AWS platform.
  • Develop serverless applications on AWS (Lambda, ECS, SNS/SQS/Kinesis, RDS, DynamoDB, etc.).
  • Develop microservice applications using Java.
  • Supported and worked with both relational and NoSQL databases.
  • Configured, tested, deployed, and upgraded software for production EC2 servers in AWS.
  • Lead initiatives for automating and scaling our systems.
  • Participated in technical architecture design.
  • Improve security, reliability, and performance.
  • Administer, monitor, and deploy cloud-based systems.
  • Collaborate with application engineers to design robust systems.
  • Take ownership of infrastructure projects and internal tools.
  • Exercise automated test approaches through CI/CD.
  • Communicate and collaborate with product managers, engineers, stakeholders, etc.
  • Deploy, automate, maintain, and manage the AWS cloud-based production system to ensure the availability, performance, scalability, and security of production systems.
  • Establish, maintain, and evolve continuous integration and deployment (CI/CD) pipelines for existing and new services.
  • Ensured security compliance with appropriate NIST and ICD requirements.
  • Assisted in the architecture, design, and implementation of, and led, the AWS public cloud build (connectivity, network, security, containerization, monitoring).
  • Provided guidance on security configurations and risk and compliance procedures (identity management, network configuration, data protection, segregation of duties).
  • Work with in-house cloud security experts to implement a security framework that satisfies ISO standards for implementing cloud solutions in public clouds.
  • Work closely with product and platform teams to engineer and implement cloud security controls.
  • Design and implement Azure/cloud-based DevSecOps processes and tools.
  • Manage patch automation and security hardening for Azure infrastructure.
  • Deploy security automation services such as Puppet, Chef, and/or Terraform.
  • Secure microservices and harden containers.
  • Build automation and infrastructure as code to enforce cloud infrastructure security.
  • Operationalize tools to strengthen cloud security posture, e.g., cloud infrastructure scanning tools, firewall scans, network scans, host scan tools, and vulnerability management tools.
  • Roll out security infrastructure such as central logging, IAM roles, SIEM tools, etc.
  • Manage and create cloud accounts for both AWS commercial and GovCloud as defined by the Government customer, and keep them in compliance.
  • Manage day-to-day security operational tasks such as security event monitoring, log monitoring, security incident management, compliance monitoring, data loss prevention, and monitoring and responding to emerging threats, from endpoint to server to public cloud systems.
  • Perform ongoing vulnerability assessments, including vulnerability scanning and vulnerability exploit testing (penetration testing), with clear reporting, threat identification, and prioritized action plans for remediation. This also includes assessments for changes that the security team has identified as requiring a vulnerability assessment prior to release.
  • Build CJIS/NIST-compliant cloud infrastructure, policies, and procedures for both AWS public cloud and GovCloud.
  • Assist with the development, implementation, and administration of cloud security awareness training for the enterprise.
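
API management features such as the rate limiting mentioned above are commonly built on a token bucket. The sketch below is a generic illustration; the class and parameter names are my own, not the configuration model of any specific gateway.

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter of the kind API gateways use
    to throttle callers (a sketch, not a product's actual API)."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if the request may proceed, consuming one token."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Usage: a bucket allowing a burst of 3, refilling 1 token/second.
bucket = TokenBucket(rate_per_sec=1.0, capacity=3)
decisions = [bucket.allow() for _ in range(5)]  # 5 back-to-back calls
```

Back-to-back calls exhaust the burst, so the first three requests pass and the remaining two are throttled.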
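
The blue-green deployments taught in the Cloud Foundry training above reduce to keeping two identical environments and switching traffic atomically between them. The names below are illustrative only, not Cloud Foundry's actual API.

```python
class BlueGreenRouter:
    """Sketch of the blue-green deployment idea: deploy to the idle
    environment, then cut traffic over; rollback is switching back."""

    def __init__(self):
        self.envs = {"blue": None, "green": None}
        self.live = "blue"

    def deploy(self, version: str) -> str:
        """Deploy a new version to whichever environment is idle."""
        idle = "green" if self.live == "blue" else "blue"
        self.envs[idle] = version
        return idle

    def cut_over(self) -> None:
        """Atomically switch live traffic to the other environment."""
        self.live = "green" if self.live == "blue" else "blue"

    def serving(self) -> str:
        """Version currently receiving live traffic."""
        return self.envs[self.live]

# Usage with hypothetical versions:
router = BlueGreenRouter()
router.envs["blue"] = "v1"   # v1 is live on blue
router.deploy("v2")          # v2 staged on the idle green environment
before = router.serving()    # users still see v1
router.cut_over()            # traffic now hits green
after = router.serving()     # users now see v2
```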

Confidential

Architect

Responsibilities:

  • Deploy, monitor, and maintain Amazon AWS GovCloud infrastructure consisting of multiple EC2 nodes in a rapidly changing R&D environment.
  • Work with CloudWatch to collect data to be pushed into the customer's SIEM and Splunk environments.
  • Work with Kinesis within the cloud for the various projects I support.
  • Maintain an AWS Lambda@Edge interceptor to simplify the deployment of Amazon-authenticated websites using serverless technologies.
  • Automate Datadog dashboards for the stack through Terraform scripts.
  • Work with the software engineers to implement an API management platform, focusing on enabling the platform for the enterprise.
  • Work within AWS GovCloud to improve system interfaces while monitoring ESB and API usage and effectiveness; ensure security and compliance of APIs and assist with discovery.
  • Write Terraform scripts for CloudWatch alerts.
  • Assisted in the development of technology roadmaps to evolve the API estate in conjunction with internal and external solution providers.
  • Created Windows and Linux desktops using Amazon WorkSpaces.
  • Set up Amazon WorkSpaces available in different Regions.
  • Provided access to high-performance cloud desktops wherever the teams needed work done.
  • Operated several prototype OpenShift projects involving clustered container orchestration and management.
  • Implemented cloud services (IaaS, PaaS, and SaaS), including OpenStack, Docker, and OpenShift.
  • Worked on container-based technologies such as Docker, OpenShift, and Kubernetes.
  • Configured and maintained the Red Hat OpenShift PaaS environment.
  • Manage global deployments of customer WorkSpaces from the AWS console.
  • Worked with Jenkins to automate orchestration and incident response.
  • Provision and de-provision desktops as needed as the customer's workforce changes.
  • Launch AWS EC2 cloud instances using Amazon Machine Images (Linux/Ubuntu) and configure launched instances for specific custom applications.
  • Designed Splunk Enterprise 6.5 infrastructure to provide high availability by configuring clusters across two different data centers.
  • Assist internal users of Splunk in designing and maintaining production quality dashboards
  • Arrange necessary training for internal Splunk customers
  • Design core scripts to automate Splunk maintenance and alerting tasks
  • Validate and stress-test multiple servers hosting custom software applications.
  • Created proper documentation for new server setups and existing servers.
  • Automate the build and release management process; monitor all changes between releases.
  • Maintained Git and Bitbucket repositories, handling branching, merging, tagging, and release activities.
  • Manage multiple AWS instances, security groups, Elastic Load Balancers, and AMIs.
  • Provided authenticated access to AWS resources using Multi-Factor Authentication (MFA).
  • Created and managed users, accounts, roles, groups, and policies using Identity and Access Management (IAM).
  • Design and development of Continuous Integration Process and deployment of Internet, Intranet and Client/Server business applications.
  • Installed, configured, maintained, tuned, and supported Splunk Enterprise server 6.x/5.x.
  • Architected and implemented Splunk deployments in highly available, redundant, distributed computing environments.
  • Performed field extractions and transformations using RegEx in Splunk.
  • Responsible for installing, configuring, and administering Splunk Enterprise on Linux and Windows servers.
  • Supported the upgrade of the Splunk Enterprise server and Splunk Universal Forwarder from 6.5 to 6.6.
  • Installed and implemented the Splunk App for Enterprise Security, documented best practices for the installation, and performed knowledge transfer on the process.
  • Worked on installing Universal Forwarders and Heavy Forwarders to bring many kinds of data into Splunk.
  • Write Splunk queries; expertise in searching, monitoring, analyzing, and visualizing Splunk logs.
  • Design, optimize, and execute Splunk-based enterprise solutions.
  • Installed and configured Splunk Universal Forwarders on both UNIX (Linux) and Windows Servers.
  • Worked on customizing Splunk dashboards, visualizations, configurations using customized Splunk queries.
  • Monitored the Splunk infrastructure for capacity planning, scalability, and optimization.
  • Supported and configured Splunk DB Connect for real-time data integration between Splunk Enterprise and other databases.
  • Responsible for Splunk Searching and Reporting modules, knowledge objects, administration, add-ons, dashboards, clustering, and forwarder management.
  • Monitored license usage, indexing metrics, index performance, forwarder performance, and death testing.
  • Splunk architecture/engineering and administration for SOX monitoring and control compliance.
  • Design and implement Splunk Architecture (Indexer, Deployment server, Search heads, and Forwarder management), create/migrate existing Dashboards, Reports, Alerts, on daily/weekly schedule to provide the best productivity and service to the business units and other stakeholders.
  • Involved in standardizing Splunk forwarder deployment, configuration and maintenance across UNIX and Windows platforms.
  • Worked with and provided needed information to the Security Operations Center, Global Security Operations Manager, Global Security Operations Specialists, and the Global Security Investigations and Intelligence Team to anticipate, identify, and evaluate global risks that carry significant risk to the enterprise.
  • Work with various version control systems such as Subversion and Git, and use source code management client tools such as Stash, SourceTree, Git Bash, GitHub, Git GUI, and other command-line applications.
  • Work on cloud automation using AWS CloudFormation templates.
  • Design the build and release automation framework: continuous integration and continuous delivery; build and release planning, procedures, scripting, and automation. Good at documenting and implementing procedures related to build, deployment, and release.
  • Monitor and track Security Information and Event Management (SIEM) within the customer datacenter using various software tools and applications.
  • Work with Jenkins for automation, orchestration, and incident response with the security operations center's cloud monitoring team.
  • Stand up and administer Kubernetes clusters both on-premises and on Amazon's cloud.
  • Ensure optimum performance, high availability, and stability of solutions, and ensure the container orchestration platform (Docker/Kubernetes) is regularly maintained and released to production without downtime.
  • Increase the effectiveness, reliability and performance of container orchestration platform (Docker/Kubernetes) by identifying and measuring key indicators, making changes to the production systems in an automated way and evaluating the results
  • Ensure that the container orchestration platform (Docker/Kubernetes) is maintained properly by measuring and monitoring availability, latency, performance and system health.
  • Assist development teams to migrate applications to Docker based PaaS platform
  • Build Chef Server (set up, run, and maintain); cookbook creation, Chef environment maintenance, and version pinning.
  • Utilize Jenkins for release management and assist with CI/CD processes.
  • Responsible for automation, virtual networking/security, and access in AWS cloud services. Provide DevOps and systems engineering work with all AWS services (EC2, RDS, Redshift, etc.) and frameworks such as Chef.
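
The RegEx field extractions described in the Splunk bullets above boil down to named capture groups applied to raw events. The log format and field names below are made up for illustration, not taken from any real deployment.

```python
import re

# Hypothetical access-log line, similar to what a forwarder might ship.
LOG_PATTERN = re.compile(
    r"(?P<client_ip>\d{1,3}(?:\.\d{1,3}){3})\s+"      # source address
    r"\[(?P<timestamp>[^\]]+)\]\s+"                    # bracketed timestamp
    r"\"(?P<method>[A-Z]+) (?P<path>\S+)[^\"]*\"\s+"   # request line
    r"(?P<status>\d{3})"                               # HTTP status code
)

def extract_fields(line: str) -> dict:
    """Mimic a Splunk-style field extraction: named fields or {}."""
    m = LOG_PATTERN.search(line)
    return m.groupdict() if m else {}

# Usage on a sample event:
line = '10.1.2.3 [20/Mar/2019:10:00:00 +0000] "GET /api/health HTTP/1.1" 200'
fields = extract_fields(line)
```

In Splunk itself the equivalent extraction would live in props.conf/transforms.conf rather than application code; the mechanism (named groups) is the same.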

Confidential

Sr. Engineer / Architect

Responsibilities:

  • Automate and manage our AWS infrastructure and deployment processes, including production, test and development environments.
  • Monitor all cloud issues using CloudWatch and CloudTrail.
  • Deploy, configure, and implement Kinesis in AWS.
  • Design and implement Splunk infrastructure, deployment, products, apps, reports, alerts, and dashboards
  • Deployed static websites and several supporting APIs (Node.js) following a serverless architecture in AWS (API Gateway, AWS Lambda & Lambda@Edge, CloudFront, DynamoDB, S3, and more).
  • Manage Splunk knowledge objects (Apps, Dashboards, Saved Searches, Scheduled Searches, Alerts)
  • Decode and debug complex Splunk queries.
  • Work with the team lead to implement API features and perform code maintenance and bug fixes.
  • Build highly available, reliable, and secure API solutions.
  • Deploy and upgrade the CA/Layer7 API Gateway for scalability.
  • Tune and modify the CA/Layer7 API Gateway for optimal performance.
  • Write and refactor CA/Layer7 API Gateway policies for optimal performance and logging.
  • Research new and emerging technologies and methods in API management and integration, and provide architectural and design input for incorporating them into the metadata platform.
  • Design, modify, optimize, and manage the CA/Layer7 API Gateway, which includes deploying for scalability, refactoring policies for performance and seamless migration, and integrating with datastores and backend systems.
  • Installed and deployed Windows and Linux desktops using Amazon WorkSpaces.
  • Set up various Amazon desktop OS instances with Amazon WorkSpaces, available in different Regions.
  • Provided access permissions to high-performance cloud desktops wherever the teams needed work done.
  • Manage large global enterprise deployments of customer WorkSpaces from the AWS console.
  • Provision and de-provision desktops as needed as the customer's workforce changes.
  • Launch AWS EC2 cloud instances using Amazon Machine Images (Linux/Ubuntu) and configure launched instances for specific custom applications.
  • Performed and presented log analysis using IDS, IPS, and other signature technologies; led teams that manage and maintain the log management and threat analysis solution.
  • Automate infrastructure using Terraform and Ansible.
  • Work with Jenkins for automation, orchestration, and incident response with the security operations center's cloud monitoring team.
  • Develop, maintain, and support a continuous integration framework based on Jenkins.
  • Work with Jenkins Pipeline: develop pipelines and configure Jenkins features, installing plugins to enable continuous delivery pipelines that automate the customer's process for getting software from source control through deployment to end users.
  • Lead the development of innovative service solutions for Azure cloud service offerings
  • Used Ansible and Ansible Tower as configuration management tools to automate repetitive tasks, quickly deploy critical applications, and proactively manage change.
  • Wrote Python code using the Ansible Python API to automate the cloud deployment process.
  • Set up complete CI/CD pipelines.
  • Automate instance schedules using Lambda, CloudWatch, S3, and RDS services in AWS.
  • Edit and repurpose WordPress plugins to customers' needs in AWS.
  • Write and extend WordPress plugins in AWS.
  • Developed procedures to unify, streamline, and automate application development and deployment with Linux container technology using Docker Swarm.
  • Worked in all areas of Jenkins: setting up CI for new branches, build automation, plugin management, securing Jenkins, and setting up master/slave configurations.
  • Involved in deploying systems on Amazon Web Services infrastructure services: EC2, S3, RDS, SQS, and CloudFormation.
  • Manage the Azure environment's network design and infrastructure setup using Azure services for both development and production systems.
  • Build AWS-based services supporting a production SaaS platform, including web applications and data analytics services.
  • Provided leadership in developing innovative service capabilities for Azure Cloud and in managing Azure capability development projects; plan, configure, optimize, and deploy Microsoft Azure solutions (IaaS, PaaS, VMs, AD, Automation, Monitor, etc.).
  • Migrate existing on-premises services to an AWS cloud infrastructure.
  • Build and maintain Docker container clusters managed by Kubernetes (Linux, Bash, Git, Docker) on GCP; utilized Kubernetes and Docker for the runtime environment of the CI/CD system to build, test, and deploy.
  • Responsible for the design and implementation of the Codexis network and server infrastructure.
  • As Sr. Engineer, perform firewall, switch, and router configuration and maintenance.
  • Secured, configured, and locked down multi-tenant Hadoop data sets, granting users access to resources based on each user's unique needs.
  • Work with OS and application teams to ensure client service success.
  • Worked in a large enterprise supporting HIPAA, FISMA, DoD, and DCI requirements, which required data to be encrypted in flight while transferred over the network and supported at rest while stored durably on disk.
  • Performed Vulnerability Assessment & Penetration Testing on the infrastructure on AWS for security.
  • Installed, configured, and maintained Key Trustee Server with Apache Sentry on the current AWS cloud.
  • Responsible for auditing and tracking usage across multiple tenants and multiple clusters.
  • Build a technical and security architecture in Azure for the selected apps/workloads
  • Lead compliance assessments and application portfolio assessment with the customer on designed Azure architecture
  • Select a migration approach: lift and shift the workloads to Azure, or architect a greenfield development and/or production platform for new applications.
  • Configured, supported, and monitored Key Trustee Server with Apache Sentry within the customer's offsite datacenter environments.
  • Configured transparent HDFS encryption so that data read from and written to HDFS directories is encrypted and decrypted without requiring any changes to user application code.
  • Supported end-to-end encryption of data protected both in flight and at rest, which can be encrypted and decrypted only by the customers and clients within DoD, DHS, and the commercial sector.
  • Configured encryption layers in a traditional data management software/hardware stack.
  • Supported and deployed encryption at given layers of a traditional data management software/hardware stack, each with different advantages and disadvantages: application-level, database-level, filesystem-level, and disk-level encryption.
  • Integrated various version control tools, build tools, Nexus, and deployment methodologies (scripting) into Jenkins to create end-to-end orchestrated build cycles.
  • Troubleshoot build issues and performance in Jenkins, generating metrics on the master's performance along with job usage.
  • Implemented enterprise-grade authorization mechanisms based on user directories and authentication technologies such as Kerberos.
  • Installed and configured Kerberos to allow a master/slave replication cluster consisting of any number of hosts, which stores all information, both account and policy data, in application databases.
  • Ensure plan execution and Azure consumption targets are met
  • Implemented Kerberos software distribution, which includes software replication such as copying data to other servers.
  • Installed, configured, and designed Kerberos, which gives client applications the ability to attempt authentication against secondary servers if the primary master is down.
  • Create data-level security rules for IDH Hive users leveraging Apache Sentry.
  • Create new infrastructure designs for load balancing, packet routing, and the SSH protocol to maximize network routing efficiency; perform daily network monitoring and troubleshooting of network operation deficiencies.
  • Administer and design LANs, WANs, internet/intranet, and voice networks.
  • Work with Tanium Foundation, Operations, and IR Deep Dive tools in the customer's enterprise AWS space.
  • Standardize Splunk forwarder deployment, configuration, and maintenance across a variety of platforms.
  • Deploy and use enterprise EDR products such as Tanium.
  • Define, manage, and promote various development activities for DevOps practices, including continuous integration, continuous delivery, continuous testing, and continuous monitoring
  • Support AWS Cloud infrastructure automation with multiple tools including Gradle, Chef, Nexus, Knife, Docker and monitoring tools such as Splunk, New Relic and Cloudwatch
  • Responsible for designing, scaling and deploying various cloud services, modernizing processes and workflows along with building a consolidated and collaborative integration of IaaS, SaaS, and PaaS cloud services
  • Manage all components of the DevOps Configuration Management platform (Jenkins, Nexus, GitLab, Sonar, etc.)
  • Perform security log analysis during Information Security related events, identifying and reporting possible security breaches, incidents, and violations of security policies.
  • Responsible for designing, developing, testing, troubleshooting, deploying and maintaining Splunk solutions, reporting, alerting and dashboards
  • Implemented and supported cloud networks. Collaborate with security and network teams to ensure all cloud platforms adhere to security models and compliance requirements, whether on-premises or in the cloud. Assist the network support team in troubleshooting cloud network infrastructure and resolving complex operational issues.
  • Manage, configure and install VMware vSphere environment: vCenter, hypervisor on new hosts, virtual machines, datastore creation and maintenance
  • Perform daily system monitoring of Virtual Infrastructure which includes VMware and Amazon Cloud Service
  • Work with various teams to design, implement, integrate and operate AWS cloud solutions for high availability and scalable service delivery.
  • Conduct and remediate Windows Security Content Automation Protocol (SCAP) and NESSUS system scans
  • Configure ACAS (Security Center), WebInspect, AppDetective, and NESSUS to manage Windows server patches.
  • Automate configuration management, infrastructure, and application deployments with a toolset such as Puppet, Chef, Ansible, or Salt.
  • Implemented distributed data storage system using Accumulo and Hadoop Distributed File System (HDFS) for storing and running analytics on large volumes of data.
  • Responsible for system administration, engineering, provisioning, operation, and maintenance of vCenter, vRealize Operations, and VMware Configuration Manager, and for related support.
  • Assist in the proper operation and performance of Splunk, loggers and connectors
  • Installed and configured Hadoop, YARN, Cloudera Manager, Cloudera BDR, Hive, Hue, and MySQL applications
  • Reviewed performance stats and query execution/explain plans, and recommended changes for tuning Hive/Impala queries
  • Enforce best practices while maintaining customer environments, including service request, change request, and incident management using the standard tools of preference
  • Review security management best practices, including ongoing promotion of awareness of current threats, auditing of server logs, and other security management processes, as well as adherence to established security standards.
  • Work with Cloudera maintenance, monitoring, and configuration tools to accomplish task goals and build reports for the management review.
  • Responsible to build and maintain the Cloudera distribution of Hadoop.
  • Perform cluster maintenance as well as creation and removal of nodes using tools like Ganglia, Nagios, Cloudera Manager Enterprise, Dell Open Manage and other tools
  • Integrate data feeds (logs) into Splunk administering Splunk and Splunk App for Enterprise Security (ES) log management
  • Standardize Splunk agent deployment, configuration, and maintenance across a variety of UNIX and Windows platforms
  • Work on System Center and Tanium design and deployment initiatives
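The security log analysis described above can be sketched in a few lines of Python. This is a minimal illustration, not a production detector; the sshd-style log format and the alert threshold are illustrative assumptions.

```python
import re
from collections import Counter

# Pattern for sshd-style failed-login lines (format is an assumption).
FAILED_RE = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def flag_brute_force(lines, threshold=3):
    """Count failed logins per source IP and flag IPs at or above threshold."""
    hits = Counter()
    for line in lines:
        m = FAILED_RE.search(line)
        if m:
            hits[m.group(2)] += 1
    return {ip: n for ip, n in hits.items() if n >= threshold}

logs = [
    "sshd[101]: Failed password for root from 10.0.0.5 port 22 ssh2",
    "sshd[102]: Failed password for invalid user admin from 10.0.0.5 port 22 ssh2",
    "sshd[103]: Failed password for root from 10.0.0.5 port 22 ssh2",
    "sshd[104]: Failed password for root from 10.0.0.9 port 22 ssh2",
]
print(flag_brute_force(logs))  # only 10.0.0.5 reaches the threshold
```

In practice the same pattern is usually delegated to Splunk searches and alerts rather than standalone scripts; the sketch just shows the underlying count-and-threshold logic.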

Confidential

Sr. Engineer / Architect

Responsibilities:

  • Participate in the upgrading of operating systems and design of systems enhancements.
  • Provided a consistent environment from development through production using Kubernetes for deployment scaling and load balancing, easing the code development and deployment pipeline by implementing Docker containerization
  • Work with Jenkins and Docker to automate orchestration and incident response
  • Supported and provided real-time analysis of security alerts generated by applications and network hardware
  • Developed Docker images to support development and testing teams and their pipelines, distributing Jenkins, Selenium, and JMeter images as well as Elasticsearch, Kibana, and Logstash (ELK/EFK) images
  • Infrastructure buildout, maintenance, and automation: collaborated with infrastructure and product engineers to maintain ~1300 servers using Terraform for provisioning, Puppet for platform configuration, and Ansible for deployment. Servers were spread across 14 datacenters/regions from three cloud providers and one non-cloud provider
  • Developed Python Modules for Ansible Customizations.
  • Used Ansible Playbooks to setup Continuous Delivery Pipeline. Deployed micro services, including provisioning AWS environments using Ansible Playbooks.
  • Used Ansible to document all infrastructures into version control.
  • Work with the partner to identify, architect and design new cloud based solutions based on Azure technologies that the partner will sell to their customers.
  • Identify, build, and drive programs that establish new technical practices within the partner organization, equipping partner architects and consultants to deliver consulting services to their customers using Azure cloud services
  • Used Kubernetes to control and automate application deployments, updates, and orchestration.
  • Performed Vulnerability Assessment & Penetration Testing on the infrastructure on AWS for security.
  • Setup AWS VPC's for dev, staging and Prod environments.
  • Used Amazon S3 to store and retrieve media files such as images, and Amazon CloudWatch to monitor the application and store logging information.
  • Involved in writing a Java API for AWS Lambda to manage some of the AWS services.
  • Configured and managed site counter-intelligence systems using Tripwire and Cisco Firewalls to protect servers and collect audit logs for the network packet filtering.
  • Design and implement container orchestration systems with Docker
  • Implement and manage private registries and container orchestration with tools such as Artifactory, Nexus, Docker, and Docker Registry
  • Support the implementation of VMware hardware and operating systems solutions to provide hosting services to multiple data centers.
  • Provision Virtual Machines and patches to the software and hardware hosting infrastructure
  • Research, design, and develop an end-to-end technology stack (front end/back end) in support of APIs that handle high-volume web transactions
  • Develop technical roadmaps for future AWS cloud implementations.
  • Automate configuration management using Docker, Puppet, and Chef
  • Design and develop web applications, RESTful APIs, prototypes, and proofs of concept (POCs)
  • Architect and deploy Splunk Enterprise implementations in small to medium sized customers.
  • Administer Splunk and Splunk App for Enterprise Security (ES) log management.
  • Integrate Splunk with a wide variety of legacy data sources that use various protocols.
  • Consult with customers to customize and configure Splunk to meet their requirements.
  • Troubleshoot Splunk server and forwarder problems and issues
  • Assist internal users of Splunk in designing and maintaining production-quality dashboards
  • Mentor and train Splunk users and administrators
  • Monitor the Splunk infrastructure for capacity planning, system health, availability, and optimization
  • Assist with the design of core scripts to automate Splunk maintenance and alerting tasks. Support Splunk on UNIX, Linux, and Windows-based platforms. Assist with automation of processes and procedures
  • During onboarding and as needed, create and manage rules for compliance and audit requirements
  • Routinely review and apply newly available and applicable Splunk software and policy updates
  • Perform implementation of security and compliance-based use cases based on the NIST 800-53 Rev4 security controls.
  • Technical writing/creation of formal documentation such as reports, training material, slide decks, and architecture diagrams.
  • Work closely with middleware (e.g., WebLogic, Tomcat), database, UNIX, network and storage administrators for routine operations such as performance tuning, upgrades and backup.
  • Deploy applications on multiple WebLogic servers and maintain load balancing, high availability, and failover functionality
  • Deploying, managing, and operating scalable, highly available, and fault tolerant systems on AWS
  • Migrating an existing on-premises application to AWS
  • Design and help to lead the implementations of cloud security solutions such as Web Application Firewalls, SIEM integrations, monitoring and auditing tools, and more.
  • Implementing and controlling the flow of data to and from AWS
  • Assist AWS Security Assurance team in determining the strategic direction of the various AWS compliance programs based on customer interaction and demonstrative metrics.
  • Selecting the appropriate AWS service based on compute, data, or security requirements
  • Proven ability to consultatively engage with Enterprise Clients to evaluate and translate functional requirements to a technology solution on Azure / AWS. Help design and implement hosting stack using AWS and Docker.
  • Work with developers on understanding identified vulnerabilities and their underlying causes to develop plans of mitigating actions and comprehensive corrections.
  • Install and configure AppDetective, WebInspect, and Nessus out of the box
  • Use a variety of tools (Nessus, HP WebInspect, AppDetective, Fluke network testers) to provide a full range of system security testing.
  • Configure applications in the C2S AWS environment on Chef configuration management tool
  • Work with engineers on Docker and debug bad builds using docker-machine, docker-compose, etc.
  • Conduct formal tests on web-based applications, networks, and other types of computer systems.
  • Work on physical security assessments of servers, computer systems, and networks.
  • Work with a team in charge of the management, maintenance, and operation of the customer's HPC systems.
  • Plan, design, engineer, and provide project support for HPC hardware and software
  • Design and manage petabyte-scale data storage, with uses ranging from collaborative software development environments to multi-terabyte scientific datasets
  • Establish strategic relationships with vendors and collaborate with peers across the DoD.
  • Collaborate with customers to address security and compliance challenges, and implement and migrate customer solutions and workloads onto AWS
  • Design, build, and test cloud apps on AWS / Azure infrastructure and on PaaS platforms such as OpenShift or PCF, with a working understanding of cloud pricing models.
  • Conduct regular security audits from both a logical/theoretical standpoint and a technical/hands-on standpoint.
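The VPC setup for dev, staging, and prod environments mentioned above amounts to carving one address space into per-environment subnets. A minimal sketch using Python's standard ipaddress module follows; the CIDR ranges and prefix length are illustrative assumptions, not the actual production addressing plan.

```python
import ipaddress

def carve_environments(vpc_cidr, envs, new_prefix=18):
    """Split a VPC CIDR into equal-sized subnets, one per environment."""
    subnets = ipaddress.ip_network(vpc_cidr).subnets(new_prefix=new_prefix)
    return {env: str(net) for env, net in zip(envs, subnets)}

# A /16 VPC split into /18 blocks for three environments.
plan = carve_environments("10.0.0.0/16", ["dev", "staging", "prod"])
print(plan)
# {'dev': '10.0.0.0/18', 'staging': '10.0.64.0/18', 'prod': '10.0.128.0/18'}
```

In practice the resulting CIDRs would feed a Terraform or CloudFormation definition rather than being computed at runtime; the sketch shows only the addressing arithmetic.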

Confidential, Chantilly, VA

Sr Principal Engineer/ Architect

Responsibilities:

  • Lead and contribute to the development, maintenance, and usage of deployment and task automation (OS, database services, virtual networks, or other platform services)
  • Design enterprise collaborative cloud computing and hybrid cloud solutions with a focus on Microsoft Office 365 and Azure
  • Implemented the UltraDNS & NS1 providers for Terraform, and a Nagios-compatible monitoring plugin to check for divergence.
  • Provisioned load balancer, auto-scaling group and launch configuration for micro services using Ansible.
  • Implemented Ansible to manage all existing servers and automate the build/configuration of new servers.
  • Provided consistent environment using Kubernetes for deployment scaling and load balancing to the application from development through production, easing the code development and deployment pipeline by implementing Docker containerization.
  • Created Docker images and uploaded them to Docker Hub.
  • Created Linux containers in CoreOS and Docker and Automated system using Chef.
  • Write automation code using Chef, Puppet, or Ansible
  • Integrated Kubernetes with network, storage, and security to provide comprehensive infrastructure and orchestrated container across multiple hosts.
  • Provision new services, mostly on the new Juniper network and occasionally on the former network prior to migration.
  • Reverse-engineer and troubleshoot a complex system and its hosting infrastructure.
  • Responsible for the analysis, design and planning of infrastructure and architecture of solutions in Azure, and other related technologies
  • Plan, create and manage cloud infrastructure in a Microsoft Azure environment
  • Install and tune Hadoop clusters which includes benchmarking Hadoop cluster, supporting HA Namenodes, balancing HDFS block data and Datanode adding and decommissioning in secure enterprise environment.
  • Monitor and troubleshoot ZooKeeper and YARN (configure different scheduling options, manage and monitor workloads, maintain a multi-tenant environment, implement security controls, and manage high-availability features of Hadoop).
  • Troubleshoot Apache Sentry, Kerberos (both Kerberos RPC and HTTP SPNEGO), SSSD and HDFS interdependencies.
  • Manage Hadoop I/O (including Data Integrity, Data compression and serialization), encrypting HDFS data at rest including encryption zones, manage HDFS snapshots and HDFS Backup DR.
  • Manage Hadoop and Spark cluster environments, on bare-metal and container infrastructure, including service allocation and configuration for the cluster, capacity planning, performance tuning, and ongoing monitoring.
  • Work with data engineering groups in the support of deployment of Hadoop and Spark jobs.
  • Responsible for monitoring Linux, Hadoop, and Spark communities and vendors and report on important defects, feature changes, and or enhancements to the team.
  • Create design and implement hosting stack using AWS and Docker.
  • Secure and mobility-ready management of the whole network and system infrastructure (IPv4 and IPv6);
  • Isolation of private addresses (RFC 1918) from the global routing table;
  • Perform as a data scientist leveraging expertise with distributed scalable Big Data store, including Apache Accumulo, Apache Hadoop, MapReduce programming and technologies, and Real-time data processing with Apache Spark.
  • Work with Jenkins and Docker to Automate the Orchestration, and Incident Response
  • Supported and provided real-time analysis of security alerts generated by applications and network hardware, and maintained day-to-day application development and implementation with tools such as Cloud Foundry, Chef, Puppet, Kubernetes, Docker, Heroku buildpacks, and BOSH.
  • Perform administration of VMware environment by managing the following VM components to include but not limited to the VMware Virtual Center, Site Recovery Manager, Operations Manager, Cloud Director, and other VMware products.
  • Administer VMware 5.1/5.5 environment of approximately 300 hosts and 1500 virtual servers
  • Maintain and manage VM resources to include (CPU, Memory and Disk) usage.
  • Build new VM hosts and instances to support customer requirements.
  • Transfer all production data to the newest VM infrastructure/platform.
  • Resolve VM related incidents in compliance with organizational incident management process.
  • Work with the Storage team to attach and manage Fibre Channel storage on VMware clusters.
  • Work with Network Transport team to configure or acquire network resources and configure either standard switches on ESXi hosts or dynamic switches on VMware clusters
  • Design, build, support and maintain Splunk infrastructure in a highly available configuration
  • Standardize Splunk forwarder deployment, configuration and maintenance in Linux and Windows platforms
  • Collaborate with internal teams to integrate data feeds to a centralized Splunk platform
  • Develop and maintain production quality dashboards, custom views, saved searches and alerts for Splunk Operations and for other clients as per their requirements
  • Assist internal users of Splunk in designing and maintaining production quality dashboards
  • Monitor Splunk infrastructure for capacity planning and optimization
  • Supported and was responsible for installation and configuration of Hadoop, YARN, Cloudera Manager, Cloudera BDR, Hive, Hue, and MySQL applications
  • Reviewed performance stats and query execution/explain plans, and recommended changes for tuning Hive/Impala queries
  • Enforce best practices while maintaining customer environments, including service request, change request, and incident management using the standard tools of preference
  • Reviewed security management best practices, including ongoing promotion of awareness of current threats, auditing of server logs, and other security management processes, as well as adherence to established security standards.
  • Work with Cloudera maintenance, monitoring, and configuration tools to accomplish task goals and build reports for the management review.
  • Responsible to build and maintain the Cloudera distribution of Hadoop.
  • Perform cluster maintenance as well as creation and removal of nodes using tools like Ganglia, Nagios, Cloudera Manager Enterprise, Dell Open Manage and other tools
  • Install, modify, support, and maintain scripts, policies, procedures, and documentation for automation and configuration management.
  • Install and configure application servers, including WebLogic, Tomcat, IIS, and JBoss.
  • Set up high availability for application servers, e.g., Tomcat, WebLogic, ESB, and JBoss
  • Build, configure, install, maintain, diagnose, troubleshoot, repair, and debug EMC and NetApp products.
  • Manage & customize Cloud Foundry Buildpacks & Services
  • Build, manage, and operate highly available systems utilizing Docker, Linux, Ubuntu, CoreOS, HAProxy, nginx, uWSGI, Couchbase, ZooKeeper, Mesos, Marathon, RabbitMQ, and Percona clusters
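Managing VM resources (CPU, memory, disk) across a cluster of hosts, as described above, centers on tracking overcommit ratios. The following is a minimal sketch of that arithmetic; the host and VM sizes are made-up example numbers, not figures from the actual environment.

```python
def overcommit_ratios(hosts, vms):
    """Compute vCPU:pCPU and vRAM:pRAM overcommit ratios for a cluster."""
    pcpu = sum(h["cores"] for h in hosts)    # physical cores available
    pram = sum(h["ram_gb"] for h in hosts)   # physical RAM available
    vcpu = sum(v["vcpus"] for v in vms)      # vCPUs allocated to guests
    vram = sum(v["ram_gb"] for v in vms)     # vRAM allocated to guests
    return {"cpu": vcpu / pcpu, "ram": vram / pram}

hosts = [{"cores": 32, "ram_gb": 512}] * 4   # 4 hosts: 128 cores, 2048 GB
vms = [{"vcpus": 4, "ram_gb": 16}] * 96      # 96 VMs: 384 vCPUs, 1536 GB
print(overcommit_ratios(hosts, vms))         # {'cpu': 3.0, 'ram': 0.75}
```

A CPU ratio around 3:1 is commonly considered acceptable for general workloads, while RAM is typically kept at or below 1:1 to avoid ballooning and swapping; the exact targets depend on the workload profile.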

Confidential, Herndon, VA

Sr Integration Architect

Responsibilities:

  • Work with key data center systems and concepts: data center software, communications infrastructure/networking, storage, virtualization, Hadoop, Cloudera, and VMware
  • Supported and Provided real-time analysis of security alerts generated by applications and network hardware
  • Design, develop, install, configure, administer, and maintain highly scalable elastic solutions that implement industry best practices using Azure cloud.
  • Monitored applications within the customer's Security Information and Event Management (SIEM) tools within security operations
  • Work with OpenStack, public/private cloud, technical computing/HPC, network function virtualization, and big data analytics.
  • Work with Splunk to design solutions and concepts for data aggregation and visualization.
  • Wrote templates for AWS infrastructure as code using Terraform to build staging and production environments.
  • Gather and collate network requirements by working with clients and colleagues.
  • Design secure and scalable networks.
  • Hosted microservices on PCF and AWS platforms.
  • Assist in sizing effort for the network part of projects.
  • Assist in pricing for the network part of projects.
  • Lead the client through the process of re-architecting a legacy platform and suite of tools using Azure services
  • Configure, administer, and operate highly scalable, available, and elastic solutions that implement industry best practices using Microsoft Azure and other Cloud service providers
  • Design, configure, and deploy the current and future data center network environment.
  • Manage Azure based SaaS environment
  • Designing and implementing customers’ network infrastructure
  • Securing switch access
  • Designing and implementing wireless architectures (Cisco Aironet APs and Cisco 4400 series controllers)
  • Configuring Voice VLANS and QoS VLANs
  • Configuring routing using OSPF and policy routing using route maps
  • Hardening Cisco devices - Implementing IDS and IPS, Cisco IOS Firewall, AAA
  • Implementing traffic filters using standard and extended access-lists, distribute-lists, and route maps
  • Install and tune Hadoop clusters which includes benchmarking Hadoop cluster, supporting HA Namenodes, balancing HDFS block data and Datanode adding and decommissioning in secure enterprise environment.
  • Monitor and troubleshoot ZooKeeper and YARN (configure different scheduling options, manage and monitor workloads, maintain a multi-tenant environment, implement security controls, and manage high-availability features of Hadoop).
  • Troubleshoot Apache Sentry, Kerberos (both Kerberos RPC and HTTP SPNEGO), SSSD and HDFS interdependencies.
  • Manage Hadoop I/O (including Data Integrity, Data compression and serialization), encrypting HDFS data at rest including encryption zones, manage HDFS snapshots and HDFS Backup DR.
  • Manage Hadoop and Spark cluster environments, on bare-metal and container infrastructure, including service allocation and configuration for the cluster, capacity planning, performance tuning, and ongoing monitoring.
  • Work with data engineering groups in the support of deployment of Hadoop and Spark jobs.
  • Responsible for monitoring Linux, Hadoop, and Spark communities and vendors and report on important defects, feature changes, and or enhancements to the team.
  • Building and maintaining Visio documentation for clients
  • Build out and improve the reliability and performance of cloud applications and cloud infrastructure deployed on Amazon Web Services, building the next generation of web applications and systems infrastructure with a focus on automation, availability, and performance.
  • Design, test, and implement solutions in AWS, Microsoft O365, and other cloud environments using security offerings from vendors that focus on securing data in the cloud.
  • Monitor and manage the security of premise and cloud solutions and take remediation actions to address security events and incidents.
  • Perform Hadoop installation, configuration, and administration; manage Hive and Impala on the Hadoop platform; and manage configuration of Linux systems.
  • Provide day-to-day support for SAN requests/requirements including SAN requests, modifications, provisioning, and administration.
  • Analyze, design, test, document, and implement storage solutions for continuous data availability, integrity, storage and protection. Allocate storage to Windows servers using EMC tools.
  • Build and maintain code to populate HDFS, Hadoop with log events from Kafka or data loaded from SQL production systems.
  • Design, build and support pipelines of data transformation, conversion, validation
  • Design and support effective storage and retrieval of big data (>500 TB)
  • Assess the impact of external production system changes to Big Data systems on Hadoop or Spark and implement changes to the ETL to ensure consistent and accurate data flows.
  • Design and implement best practices for cloud based cluster deployments of Hadoop, Spark, and other BigData eco-system tools.
  • Assist Engility Pivotal’s customers in migrating existing apps to Pivotal Cloud Foundry
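Sizing the >500 TB big data storage mentioned above is largely a matter of accounting for HDFS replication and reserve space. The following sketch shows that capacity-planning arithmetic; the default replication factor of 3 matches HDFS convention, while the 25% overhead reserve for temp/intermediate data is an illustrative assumption.

```python
def raw_capacity_tb(usable_tb, replication=3, overhead=0.25):
    """Raw disk capacity required to hold a usable dataset on HDFS,
    given block replication and a fractional reserve for temp data
    and operational headroom (the reserve fraction is an assumption).
    """
    return usable_tb * replication / (1 - overhead)

# 500 TB of usable data at 3x replication with 25% headroom reserved.
print(raw_capacity_tb(500))  # 2000.0 TB of raw disk across the cluster
```

Divided by the per-node raw disk, this figure gives a first-cut datanode count; real plans also account for compression ratios and growth projections.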

Confidential, Hanover MD

Senior Integration Architect

Responsibilities:

  • Configure vSphere ESXi, host, cluster, datacenter in the customers current enterprise environment.
  • Migrate old infrastructure from dated configuration management tools to Terraform and Chef.
  • Configure network and VLANs to maintain Dev and Production environments.
  • Follow established cloud processes and technology standards on the Azure, .NET and MS Dynamics CRM stacks to ensure the environment is completely secured, properly monitored, and optimally sized and performing.
  • Lead technical discussions and provides technical guidance and expertise for .NET and Azure.
  • Install and configure VMware vCenter 5.5 to manage the current and future virtual environment
  • Edit and create Linux Red Hat, CentOS, and Ubuntu templates for the current VMware vSphere environment.
  • Analyze logs, build searches, and visualize them using the dashboarding capabilities of Splunk per business requirements.
  • Perform Splunk admin activities such as forwarder management, data ingestion, indexing, and field extraction
  • Develop, Architect, Deploy Splunk in a large Linux environment
  • Automate threat feeds and integration with Splunk Enterprise Security
  • Develop Splunk modules to support implementation and deployment activities
  • Develop Splunk interfaces and automated feeds and support integration of Splunk with other enterprise security platforms, databases, etc.
  • Support Splunk performance optimization efforts
  • Contribute design and architectures to support evolution of security monitoring
  • Design and propose efficient and cost effective Azure infrastructure necessary to support implementations.
  • Support the gathering of business requirements and capabilities
  • Configured Splunk monitoring alerts based on error conditions.
  • Integrate Splunk with a wide variety of legacy data sources
  • Ability to design high availability applications on AWS across availability zones and availability regions
  • Implement systems that are highly available, scalable, and self-healing on the AWS platform
  • Deploying, managing, and operating scalable, highly available, and fault tolerant systems on AWS
  • Ensure the security of Splunk resources, systems, access, etc.
  • Provide day-to-day support for SAN requests/requirements including SAN requests, modifications, provisioning, and administration.
  • Installed and configured storage hardware and software, including firmware and driver updates for all hardware components such as HBAs, switches, SAN/NAS units, multipathing software, and other storage-specific point products.
  • Created and managed a Docker deployment pipeline for custom application images in the cloud using Jenkins.
  • Automate the cloud deployments using chef, python (boto & fabric) and AWS Cloud Formation Templates.
  • Advise customers on AWS security tools and services: AWS Security Model, IAM (Identity Access Management), ACM (Amazon Certificate Manager), Security Groups, Network ACLs, Encryption, MFA (Multifactor Authentication)
  • Monitor Analyze all security systems log files, review, and keep track of triggered events ArcSight. Research current and future cyber threats; reconcile correlated cyber security events.
  • Provide network forensic and analytic support of large scale and complex security incidents such as targeted attacks and network/system infiltration.
  • Perform system design, integration, deployment, and administration tasks to migrate a large enterprise-scale system housed in a data center to a cloud environment leveraging cloud technology solutions, predominantly AWS and OpenStack.
  • Plan, deploy, monitor, and maintain Amazon AWS cloud infrastructure consisting of multiple EC2 nodes and VMWare Vm's as required in the environment.
  • Responsible for installation and configuration of Hadoop, YARN, Cloudera Manager, Cloudera BDR, Hive, Hue, and MySQL applications
  • Review performance stats and query execution/explain plans, and recommend changes for tuning Hive/Impala queries
  • Enforce best practices while maintaining customer environments, including service request, change request, and incident management using the standard tools of preference
  • Mentor colleagues looking to advance, as well as new hires, providing instruction and training on Amazon S3, SQS/SNS, and CloudFront
  • Review security management best practices, including ongoing promotion of awareness of current threats, auditing of server logs, and other security management processes, as well as adherence to established security standards.
  • Work with Cloudera maintenance, monitoring, and configuration tools to accomplish task goals and build reports for the management review.
  • Responsible to build and maintain the Cloudera distribution of Hadoop.
  • Perform cluster maintenance as well as creation and removal of nodes using tools like Ganglia, Nagios, Cloudera Manager Enterprise, Dell Open Manage and other tools
  • Deploy security services across enterprise solutions supporting AWS cloud application deployments.
  • Assist in Design/Architecture of AWS and hybrid cloud solutions.
  • Design and Build world class high-volume real-time data ingestion and processing frameworks and advanced analytics on big data platforms.
  • Research, develop, optimize, and innovate frameworks and patterns for enterprise scale data analysis and computations as part of our big data and Internet of Things initiative
  • Lead the implementation of Hadoop platform strategy development by creating architecture blueprints, validating designs and providing recommendations on the enterprise platform strategic roadmap
  • Architect solutions for AWS-based cloud deployments of big data platforms.
  • Develop a big data platform which enables collection, storage, modeling, and analysis of massive data sets from numerous sources
  • Install, upgrade, patch, configure, and support (hardware and software) the various Hadoop components, including but not limited to: Hadoop, Flume, HBase, HCatalog, Hive, Hue, Impala, Mahout, Oozie, Pig, Cloudera Search, Sentry, Spark, Sqoop, Whirr, ZooKeeper, and Cloudera Manager
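The CloudFormation template automation mentioned above (chef, python boto/fabric, and CloudFormation) often starts from templates built programmatically as plain dictionaries. The sketch below builds a minimal template for an S3 bucket; the logical ID and properties are illustrative, not from an actual deployment.

```python
import json

def s3_bucket_template(logical_id, versioned=True):
    """Build a minimal CloudFormation template dict defining one S3 bucket.

    logical_id and the versioning choice are hypothetical example inputs.
    """
    props = {}
    if versioned:
        props["VersioningConfiguration"] = {"Status": "Enabled"}
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            logical_id: {"Type": "AWS::S3::Bucket", "Properties": props},
        },
    }

template = s3_bucket_template("LogsBucket")
print(json.dumps(template, indent=2))  # JSON ready to hand to CloudFormation
```

The resulting JSON would then be passed to a stack-creation call (e.g., via boto) or checked into version control alongside the rest of the infrastructure code.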
