Sr. Cloud/DevOps Engineer Resume
Folsom, CA
SUMMARY
- Responsible for designing and implementing the Azure cloud environment; configured shared access signature (SAS) tokens and storage access policies in the cloud infrastructure.
- Expertise in Azure development services including Azure Web Apps, App Services, Azure Storage, Azure SQL Database, Azure Virtual Machines, Azure AD, Azure Search, Azure DNS, Azure VPN Gateway, and Notification Hubs.
- Experience in Azure platform development and deployment concepts, hosted cloud services, platform services, and Windows Azure Multi-Factor Authentication, along with ongoing architectural changes to move software offerings to a distributed, service-based architecture using Docker/Kubernetes.
- Extensive working experience and knowledge with a broad range of AWS cloud services such as EC2, ELB, Auto Scaling, VPC, Route 53, RDS, S3, IAM, SNS, SQS, DynamoDB, Elasticsearch, Elastic File System (EFS), Cloud Foundry, CloudWatch, CloudTrail, Lambda, Service Catalog, Kinesis, Redshift clusters, and cloud security (OAuth2 and SAML).
- Experience in automated deploying of applications with Amazon Lambda and Elastic Beanstalk.
- Experience in designing and developing applications that use MongoDB and DynamoDB in AWS.
- Set up AWS virtual private clouds automatically using Terraform, an infrastructure-as-code tool. The main challenge was modifying settings by interfacing with the control layers and creating the components necessary for running the application.
- Experience in managing IAM policies with active directory integration to manage security in GCP.
- Experience in Setting up GCP instances with terraform.
- Installed and configured Terraform Enterprise in Google Cloud Platform (GCP) environment using Cloud SQL for PostgreSQL.
- Experience with air-gapped installation of Terraform, along with configuration of Forseti and Cloud Armor.
- Automated infrastructure provisioning using Ansible and Terraform.
- Managed Pivotal Cloud Foundry (PCF), applying patches and upgrading PCF 1.6 to the next version.
- Experience in infrastructure maintenance (servers for different data centers) using OpenStack. Familiar with OpenStack concepts of user-facing availability zones and administrator facing host aggregates.
- Production experience in large environments using configuration management tools like Chef, Ansible and Puppet.
- Experience with Ansible, Ansible Vault, and Ansible Tower as configuration management tools to automate repetitive tasks, deploy applications, manage changes, automate software updates, and verify functionality.
- Experience writing custom Ansible playbooks in YAML, encrypting sensitive data with Ansible Vault, and maintaining role-based access control with Ansible Tower to manage web applications, environment configuration files, users, mount points, and packages.
- Experience installing Chef Server Enterprise on premises, bootstrapping nodes from the workstation using knife, and testing Chef recipes/cookbooks with Test Kitchen.
- Extensively worked with the Puppet configuration management tool, including installing the Puppet master and agents, writing manifests from scratch, and pushing them to agents for CI/CD.
- Managed and designed integrated continuous integration pipelines built with tools such as Jira, Git, Stash, Bamboo, Jenkins, Docker, Kubernetes, Terraform, and ELK (Elasticsearch, Logstash, Kibana).
- Experience integrating Jenkins with tools such as Maven (build), Git (repository), SonarQube (code verification), and Nexus (artifact repository); implemented CI/CD automation by creating Jenkins pipelines programmatically, architecting Jenkins clusters, and troubleshooting issues during the build process.
- Responsible for installing Jenkins master and slave nodes, configuring the Git plugin, scheduling jobs using the Poll SCM option, and creating Maven build scripts for Java projects.
- Experience working with Docker components such as Dockerfiles, Docker images, Docker Hub, and Docker registries, as well as Kubernetes.
- Experienced in installing, configuring, and managing Docker containers and images for web and application servers such as Apache and Tomcat, integrated with Amazon RDS databases.
- Experience with system hardening and implementing security controls.
- Experienced in scheduling, deploying, and managing container replicas on a node cluster using Kubernetes, and worked on building the Kubernetes (K8s) runtime environment of the CI/CD system to build, test, and deploy on an open-source platform.
- Expertise in build tools such as MAVEN, ANT, Octopus and Gradle.
- Extensive knowledge of Splunk architecture and various components including Search Heads, Indexers, Deployment server, Deployer, License Master, Heavy/Universal Forwarders, and expertise in Splunk 5.x and Splunk 6.x products, Distributed Splunk architecture.
- Proficient in creating Splunk dashboards & have a strong Splunk UI experience, able to debug search queries.
- Good working experience with Java 8 features like Streams API, Default and Static methods in Interfaces, Lambda Expressions, Optional Class, and Parallel Sort in Arrays.
- Experience in developing microservices architectures using REST APIs and Spring Boot.
- Enhanced and deployed Microservices based applications using Spring Boot and Spring Cloud and created dynamic documentation for RESTFUL web service using Swagger.
- Comfortable building dynamic single-page applications (SPAs) using the MEAN (MongoDB, Express, Angular, Node) full-stack JavaScript framework.
- Experience with security and scanning tools such as HP fortify, SonarQube and BlackDuck.
- Proficient in managing source code control for multiple development efforts using Subversion (SVN), TFS (for Windows environments), CVS, GitLab, Bitbucket, and Perforce version control tools.
- Experienced in Branching, Merging, Tagging & Maintaining the version across the environments using SCM tools like Git and Subversion (SVN) on Linux platforms.
- Proficient with Shell, Python, Ruby, PowerShell, JSON, YAML, Groovy scripting languages.
- Hands on experience in Configuration Management (CM) policies and approaches with regards to software development life cycle (SDLC) along with automated scripting using BASH/PowerShell, Perl, Python scripting.
- Exposure to all aspects of Software Development Life Cycle (SDLC) such as Analysis, Planning, Development, Testing, Implementation, Post-production analysis of the projects.
- Knowledge of protocols such as FTP, SSH, HTTP, HTTPS, and Direct Connect, and experience with Kickstart installations and the support, configuration, and maintenance of Red Hat Enterprise Linux, CentOS, and Ubuntu.
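The SAS-token work mentioned above comes down to HMAC-SHA256 signing of a string-to-sign. Below is a minimal Python sketch under simplified assumptions: real Azure service SAS strings-to-sign carry additional fields (stored access policy identifier, IP range, protocol, api-version, response headers), and the function and parameter names here are illustrative, not the actual SDK API.

```python
import base64
import hashlib
import hmac
import urllib.parse


def make_sas_token(account_key_b64, canonical_resource, permissions="r",
                   start="2024-01-01T00:00:00Z", expiry="2024-01-02T00:00:00Z"):
    """Build a simplified SAS-style query string (illustrative field set only)."""
    # Simplified string-to-sign; a real Azure service SAS includes more fields.
    string_to_sign = "\n".join([permissions, start, expiry, canonical_resource])
    key = base64.b64decode(account_key_b64)  # storage account key is base64
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    # Token is just URL-encoded query parameters carrying the signature.
    return urllib.parse.urlencode(
        {"sp": permissions, "st": start, "se": expiry, "sig": sig}
    )
```

In practice the Azure SDK generates these tokens; the value of understanding the signing scheme is mainly for debugging 403s caused by clock skew or a mismatched string-to-sign.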
TECHNICAL SKILLS
Operating systems: Ubuntu 12/13/14, IBM AIX (4.3/5.x/6.x/7.x), Windows NT/2000/2003, Linux (Red Hat 4/5/6/7, CentOS & SUSE), Solaris 11/10/9/8, Debian, macOS
Application Servers: Apache Tomcat 5.x/7.x, WebLogic Application Server 9.x/10.x, Red Hat JBoss 4.2.2 GA, WebSphere 6.x/7.x/8.x
Automation Tools: Chef, Puppet, Ansible, Docker, Vagrant, Terraform, Kickstart, Hudson, Pivotal Cloud Foundry (PCF), Kubernetes
CI/CD & Build Tools: Jenkins, Bamboo, Maven, ANT, Gradle, Anthill Pro, U-Deploy
Version control tools: Git, Gitlab, Bitbucket, Subversion SVN, TFS, Clear Case, CVS
Web servers: Tomcat, Nginx, Apache 2.x/3.x, WebLogic (8/9/10), JBoss 4.x/5.x, WebSphere 4/5
Networking/protocol: TCP/IP, NIS, NFS, DNS, DHCP, FTP/SFTP, HTTP/HTTPS, NDS, Cisco Routers/Switches, WAN, LAN
Scripting: Perl, Python, Ruby, Bash, PowerShell, PHP, JSON
Virtualization Technologies: VMWare, ESX/ESXi, Windows Hyper-V, Power VM, Virtual box, Citrix Xen
Cloud Environments: AWS, AZURE, Cloud Formation, Rackspace, OpenStack.
Databases: Cassandra, Redis, Aerospike, Oracle, MySQL, MongoDB, AWS RDS, DynamoDB
Monitoring Tools: Nagios, Dynatrace, Splunk, CloudWatch, ELK (Elasticsearch, Logstash, Kibana), JIRA
Programming/Web Technologies: .Net, Java, C++, XML, HTML, CSS
Backup Management: SolarWinds, Veritas NetBackup, EMC Avamar, Solstice DiskSuite
PROFESSIONAL EXPERIENCE
Confidential, Folsom, CA
Sr. Cloud/DevOps Engineer
Responsibilities:
- Designed, configured, and deployed Microsoft Azure for a multitude of applications utilizing the Azure stack (including Compute, Web & Mobile, Blobs, ADF, Resource Groups, Azure SQL DW, Cloud Services, and ARM), focusing on high availability, disaster recovery, fault tolerance, and auto-scaling.
- Experience configuring Azure Web Apps, Azure App Services, Azure Application Insights, Azure Application Gateway, Azure DNS, Azure Traffic Manager, and Azure Network Watcher, and implementing Azure Site Recovery.
- Designed, planned, and created Azure virtual machines, implemented and managed virtual networking within Azure, and connected to on-premises environments.
- Creating, validating, and reviewing solutions for data center migration to Azure cloud environment.
- Created various ARM templates on the Azure platform.
- Set up Azure virtual appliances (VMs) to meet security requirements for software-based appliance functions (firewall, WAN optimization, and intrusion detection).
- Experience in migrating infrastructure and application from on premise to Microsoft Azure.
- Researched and created preliminary PowerShell code for moving Azure Classic workloads to the Azure Resource Manager model.
- Wrote PowerShell scripts automating time-consuming specialty tasks for external clients.
- Experience in migrating Azure Classic Instances to Azure ARM Subscription with Azure Site Recovery.
- Experienced in designing, deploying, and maintaining a multitude of applications utilizing nearly the full AWS service stack, including Elastic Compute Cloud (EC2), S3, EBS, EFS, Elastic Beanstalk, Route 53, VPC, CloudFront, DynamoDB, Redshift, RDS, Key Management Service (KMS), Identity & Access Management (IAM), Elastic Container Service (ECS), Elastic Load Balancing, CloudFormation, ElastiCache, SNS, and SQS, focusing on high availability, fault tolerance, and auto-scaling.
- Responsible for managing Amazon instances by creating AMIs, and performed administration and monitoring of Amazon instances using Amazon CloudWatch.
- Expertise in Auto Scaling using AWS command line tools and AWS cloud environment for Dev/QA environments.
- Managed AWS EC2 instances utilizing Auto Scaling, Elastic Load Balancing, and Glacier for our QA and UAT environments as well as infrastructure servers for Git and Chef, and experienced in automating CI/CD pipelines with AWS CodePipeline, Jenkins, and AWS CodeDeploy.
- Extensively worked with Amazon Macie to manage data security and data privacy of our clients.
- Created development and test environments for different micro services by provisioning Kubernetes clusters on AWS using Docker, Ansible, and Terraform.
- Generated Terraform scripts and templates required for the automatic provisioning of resources.
- Worked on Deploying Azure Infrastructure as a Code using Terraform modules and ARM Templates.
- Integrated dotCover for test analysis of applications developed in .NET Core 3.1 and 2.2.
- Worked closely with the development team to create automated scripts to build MSI packages in .NET Core 3.1 and 2.2.
- Extensively worked with core Java concepts: collections, threading, exceptions, StringBuilder, and interfaces.
- Used Streams and Lambda expressions available as part of Java 8 to store and process the data.
- Worked with the development team to deploy new libraries with Micro Services Architecture using REST APIs & Spring Boot.
- Did extensive JavaScript and jQuery programming to give AJAX functionality for the website.
- Used jQuery to make the HTML, CSS, and JBoss code interact with the JavaScript functions to add dynamism to the web pages at the client-side.
- Contributed Full stack development in native Golang backend, native JavaScript, and Bootstrap Framework for web application between advisers and their clients.
- Devised solutions to expedite the procurement of required data for unique website architectures comprised of JavaScript.
- Implemented an Apache Kafka cluster as the messaging system between the APIs and microservices.
- Built an Apache Kafka multi-node cluster and used Kafka Manager to monitor multiple clusters.
- Developed new listeners for producers and consumers for both RabbitMQ and Kafka.
- Automated the process of transforming and ingesting terabytes of monthly data in Parquet format using Kafka, S3, Lambda and Airflow.
- Created workflows using Airflow to automate the process of extracting weblogs into S3 Datalake.
- Wrote Ansible playbooks from scratch in YAML. Installing, setting up & Troubleshooting Ansible.
- Updated the existing scripts to Ansible playbooks to install configurations on multiple servers in AWS.
- Proficient in using Ansible Tower, which provides an easy-to-use dashboard and role-based access control, so that it is easier to allow individual teams access to use Ansible for their deployments.
- Developed Ansible scripts for an automated server provisioning and Docker images for isolation, reducing the time between provisioning and deployment from over 2 hours to less than 10 minutes.
- Used Ansible to set up and tear down the ELK stack (Elasticsearch, Logstash, Kibana).
- Troubleshot build issues with ELK and worked toward solutions.
- Integrated Jenkins with various DevOps tools such as Nexus, SonarQube, Ansible and used CI/CD system of Jenkins on Kubernetes container environment, utilizing Kubernetes and Docker for the runtime environment for the CI/CD system to build and test and deploy.
- Created private cloud using Kubernetes that supports development, test and production environments.
- Used Kubernetes to deploy, scale, load-balance, and manage Docker containers with multiple namespaced versions.
- Experience with Docker and Kubernetes as a container security engineer, implementing monitoring/auditing of security events on containers and container network security detection.
- Prototyped a CI/CD system with GitLab, utilizing Kubernetes and Docker as the runtime environment for the CI/CD system to build, test, and deploy.
- Built Docker images using Jenkins pipelines, pushed them to Artifactory, and deployed them to OpenShift containers using Kubernetes for the microservices.
- Experienced in testing and implementing network Security products in a complex multi-Security Zone and Multi-Domain complex network environment using Aquasec security tool.
- Created various build and deployment scenarios such as jobs to build from various branches, deploy tasks to development server or QA server or Staging/Production server using Jenkins.
- Enabled Continuous Delivery through Deployment into several environments of Development, Test and Production using Maven and SonarQube.
- Used Black Duck to manage quality, security, and license-based risks arising from open-source and third-party code in our applications and containers.
- Good knowledge of various databases such as Oracle, MySQL, and MS Access.
- Enabling users to manage software development, deployments and infrastructure with tools such as Jenkins and GitHub.
- Developing scripts for build, deployment, maintenance and related tasks using Jenkins, Docker, Maven, Perl.
- Coordinate and assist developers with establishing and applying appropriate branching, labeling/naming conventions using GitLab and analyzed and resolved conflicts related to merging of source code for Git.
- Used JIRA for change control & ticketing.
- Installed, configured, and managed monitoring tools such as Splunk and Nagios for resource, network, and log-trace monitoring, and used CloudWatch and ELK to monitor OS metrics, server health checks, file system usage, etc.
- Used Elasticsearch for powering not only Search but using ELK stack for logging and monitoring our systems end to end Using Beats.
- Responsible for designing and deploying new ELK clusters (Elasticsearch, Logstash, Kibana, Beats, Kafka, ZooKeeper, etc.).
- Good knowledge of creating custom Splunk apps and installing Splunkbase apps based on customer application requirements.
- Worked on deploying, configuring and maintaining the Splunk Universal forwarder on different platforms.
- Monitored servers, switches, and ports using Nagios Monitoring tool and assisted internal users of Splunk in designing and maintaining production quality dashboards.
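The Ansible playbook work described above can be sketched as a small hypothetical example: a vaulted variables file plus a template-driven configuration deploy with a restart handler. The host group, package, paths, and file names are illustrative, not the actual project values.

```yaml
# site.yml -- illustrative playbook sketch, not the production playbook
- name: Configure web application servers
  hosts: webservers
  become: true
  vars_files:
    - vault.yml            # encrypted with `ansible-vault encrypt vault.yml`
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Deploy environment configuration from template
      ansible.builtin.template:
        src: app.env.j2
        dest: /etc/myapp/app.env
        mode: "0640"
      notify: Restart nginx

  handlers:
    - name: Restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
```

Running it through Ansible Tower rather than ad hoc `ansible-playbook` is what provides the role-based access control mentioned above.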
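The Jenkins-to-Artifactory-to-Kubernetes flow described above might look like the following declarative pipeline sketch (Docker Pipeline plugin); the registry URL, credential ID, image name, and manifest path are placeholders.

```groovy
// Jenkinsfile -- illustrative sketch; names and URLs are placeholders
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'   // compile and unit-test the service
            }
        }
        stage('Docker image') {
            steps {
                script {
                    // Push the built image to an Artifactory Docker registry
                    docker.withRegistry('https://artifactory.example.com', 'artifactory-creds') {
                        docker.build("myapp:${env.BUILD_NUMBER}").push()
                    }
                }
            }
        }
        stage('Deploy') {
            steps {
                sh 'kubectl apply -f k8s/deployment.yaml'  // roll out to the cluster
            }
        }
    }
}
```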
Environment & Tools: AWS (EC2, AMIs, VPC, S3, IAM, EBS, CloudTrail, CloudWatch, EMR, CloudFormation, SQS, SNS, Snowball, Lambda, Kinesis, Redshift, Route 53, RDS, MySQL), AWS Direct Connect, VM Export/Import, Azure, Pivotal Cloud Foundry (PCF), OpenStack, Node.js, Ansible, ELK (Elasticsearch, Logstash, Kibana), Nginx, Terraform, Docker, Docker Swarm, Kubernetes, Linux, Jenkins, SonarQube, Git, GitLab, HashiCorp tools, Python, Nagios, Splunk, TFS, JIRA, PowerShell, ServiceNow, OpenShift, SCOM.
Confidential, Kansas City, MO
Site Reliability Engineer
Responsibilities:
- Worked on automating the process of managing capacity, safe software deployment and system failure handling.
- Maintain services during deployment and in production by monitoring and measuring key performance and service level indicators including latency, availability, and overall system health.
- Worked with the development team to create shared responsibility.
- Worked with the software engineering team on support issues to improve the tools, software and processes.
- Planning, deploying, monitoring, and maintaining AWS cloud infrastructure consisting of multiple EC2 nodes and VMWare VM's as required in the environment.
- Used Security Groups, Network ACLs, Internet Gateways, NAT instances and Route tables to ensure a secure zone for organizations in AWS public cloud.
- Development of Amazon Virtual Private Cloud in the scalable environment which provides advanced security features such as security groups and network access control lists to enable inbound & outbound filtering at the instance level and subnet level.
- Deployed AWS Lambda code from Amazon S3 buckets. Created a Lambda Deployment function and configured it to receive events from S3 bucket.
- Used Python modules including Boto3 to supplement automation provided by Ansible and Terraform for tasks such as encrypting EBS volumes, backing up AMIs, and scheduling Lambda functions for routine AWS tasks.
- Expert hands-on knowledge of Google Cloud Platform services, GCP infrastructure, cloud native application development.
- Rapidly architected & migrated client's infrastructure to GCP, helping shut down client's legacy data centers.
- Extensive use of the Cloud Shell SDK in GCP to configure and deploy services such as Dataproc, Storage, and BigQuery.
- Built data pipelines in Airflow on GCP for ETL jobs using a range of Airflow operators, both legacy and newer ones.
- Involved in creating the company's DevOps strategy in a mixed environment of Linux (Ubuntu, CentOS, RHEL) servers.
- Involved in implementing various software release strategies for various applications according to Agile process.
- Worked on Load balancing over the network and servers using F5 pools.
- Worked on Jenkins by installing, configuring and maintaining for purpose of Continuous Integration (CI) and for the end to end automation for all build and deployments and creating Jenkins CI pipelines.
- Worked in designing and implementing continuous integration system using Jenkins by creating Python and Shell scripts.
- Installed Chef Server and Chef Clients to pick up the build from Repository and Deploy in target environments and created Chef Cookbooks using recipes to automate Build with Development Pipeline.
- Created and maintained cookbooks and recipes written in Ruby on the Chef workstation and deployed them to various nodes.
- Wrote Chef cookbooks for various DB configurations to modularize and optimize product configuration, converted production support scripts to Chef recipes, and provisioned AWS servers using Chef recipes.
- Created Jenkins pipeline for Puppet release process to deploy modules using Kanban Agile methodology for development.
- Worked on configuring and monitoring distributed and multiple platform servers using Puppet.
- Developed automation scripting in Python using Puppet to deploy and manage Java applications across Linux servers.
- Created artifact files by using jar files and POM.xml files & used Apache Tomcat application server for deploying the artifacts.
- Designed and implemented scalable, secure Virtual Network.
- Extensively used core Java concepts such as the Stream API, multithreading, synchronization, exception handling, and collections for business logic development.
- Worked on Java EE components using Spring transactions, and Spring boot modules.
- Worked with the development team on Node.js for the front end, implemented data binding for front-end development on the web application, and wrote structured JavaScript code to build endpoints.
- Used Node.js to run webpack tasks for our project and for server-side web applications, along with JavaScript code to build real-time web APIs.
- Involved in file manipulation and file uploads using Node.js.
- Used Jenkins as Continuous Integration (CI/CD) tool and Deployed application using JBOSS.
- Worked in authoring pom.xml, build.xml files performing releases with Maven, ANT release plugin, and managing artifacts in SonarQube, NEXUS, JFrog Artifactory.
- Install, configure, and maintain ELK stack systems.
- Participated in problem resolving, change, release, and event management for ELK stack.
- Used Splunk DB Connect for real-time data integration between Splunk Enterprise and databases.
- Created Splunk app for enterprise security to identify and address emerging security threats using continuous monitoring, alerting and analytics.
- Created users and groups, and monitored and maintained logs for system status/health using Linux commands and Nagios.
- Established processes & tools to maintain code base integrity, including check-in validation rules and branch/merge processes.
- Created deployment request tickets in Remedy for deploying the code in Production Environment.
- Worked with Red Hat OpenShift Container Platform for Docker and Kubernetes.
- Created Docker images and linking of Docker containers for secured way of data transfer and handling images primarily for middleware installations and domain configurations.
- Used Kubernetes to deploy, scale, load-balance, and manage Docker containers with multiple namespaced versions.
- Used Hashicorp Vault for secrets management (storing keys, tokens, API keys, etc).
- Installed, configured, and maintained servers such as Apache HTTP Server and WebSphere on Red Hat Linux.
- Responsible for Installation, Configuration Management, Maintenance & Systems Development of Linux/UNIX Systems.
- Managed all the bugs and changes into a production environment using Jira tracking tool.
- Worked on Confluence to create, share and discuss content and projects.
- Involved in setting up JIRA as defect tracking system & configured various workflows, customizations and plugins for the JIRA.
- Integrated JaCoCo with Jenkins to check code coverage and generated the reports based on coverage percentage of the Line, Branch, Method etc.
- Implemented monitoring on all enterprise applications and infrastructure systems using Dynatrace, which reduced overall critical incidents and enabled self-healing features to be architected and implemented.
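The Boto3-driven housekeeping described in this role (encrypting volumes, backing up AMIs, scheduled Lambda cleanup) typically hinges on simple retention logic. Here is a minimal, testable sketch; the function name and the 30-day policy are illustrative, and in production the returned IDs would feed `ec2.delete_snapshot()` calls from a scheduled Lambda.

```python
from datetime import datetime, timedelta, timezone


def snapshots_to_prune(snapshots, retention_days=30, now=None):
    """Return IDs of snapshots older than the retention window.

    `snapshots` is a list of dicts shaped like entries from the boto3
    EC2 describe_snapshots() response: {"SnapshotId": str, "StartTime": datetime}.
    The retention policy here is illustrative, not an actual production rule.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    # Keep anything newer than the cutoff; flag everything else for deletion.
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]
```

Keeping the pruning decision pure like this (no AWS calls inside) makes the policy unit-testable without credentials; only the thin wrapper that lists and deletes snapshots touches the API.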
Environment & Tools: Agile, Git, Maven, Jenkins, Docker, Kubernetes, Nexus, Puppet, Jira, Nagios, Confluence, Tomcat, WebSphere, Python scripts, Ruby scripts, shell scripts, RedHat, CentOS, Ubuntu, JaCoCo, JFrog, SonarQube, AWS, EC2, NACL, Security Groups, Route Tables, NAT, VMware.