Cloud/DevOps Engineer Resume
Washington, DC
SUMMARY
- Over 7 years of professional experience as a DevOps / Build and Release Engineer, automating, building, deploying, managing, and releasing code from one environment to another, and maintaining Continuous Integration, Continuous Delivery, and Continuous Deployment across multiple environments: Development, Testing, Staging, and Production.
- As a DevOps Engineer, worked on automating, configuring, and deploying instances on AWS, Azure, GCP, and data centers.
- Experience in AWS services such as EC2, ELB, Auto Scaling, EC2 Container Service, S3, IAM, VPC, RDS, DynamoDB, Certificate Manager, CloudTrail, CloudWatch, Lambda, ElastiCache, Glacier, SNS, SQS, CloudFormation, CloudFront, EMR, Elastic File System, and Storage Gateway.
- Expertise in setting up scalability for application servers using the command line interface, setting up and administering DNS in AWS using Route53, and managing users and groups with AWS Identity and Access Management (IAM).
- Experience in migrating On-premises applications to the cloud and orchestrated cloud infrastructure using Terraform, CloudFormation and Azure Resource Manager (ARM) templates.
- Experience in using Microsoft Azure, including Azure CLI, Azure Management, Azure Portal, Azure PowerShell, CloudMonix, Azure Management PowerShell cmdlets, and Red Gate Cloud Services.
- Experience in GCP services such as App Engine, Compute Engine, Kubernetes Engine, VM Instance, Firewall rules, Snapshots, Instance Templates, Healthchecks, Clusters, Storage, Workloads, Monitoring, VPC Network.
- Experience with Terraform key features such as Infrastructure as Code, execution plans, resource graphs, and change automation; extensively used Auto Scaling for launching cloud instances while deploying microservices, and used Terraform to map more complex dependencies and identify network issues.
- Expertise in creating clusters using Kubernetes; creating Pods, Replication Controllers, Deployments, Labels, health checks, and Ingress by writing YAML manifests; hands-on experience building and deploying application code using the Kubernetes CLI, kubectl.
- Experience working with Kubernetes to automate the deployment, scaling, and management of containerized web applications, and integrated Kubernetes with registry, networking, storage, security, and telemetry to provide comprehensive infrastructure and orchestrate containers.
- Experience in writing Dockerfiles with different artifacts to build images, and in using Ansible playbooks to deploy those Docker images to different servers for running applications in containers.
- Experience in setting up Jenkins as a service inside a Docker Swarm cluster to reduce failover downtime to minutes and to automate Docker container deployments without using a configuration management tool.
- Expertise in writing playbooks to deploy services and applications on the cloud and in writing Ansible modules to implement automated continuous deployment; automated various infrastructure activities such as continuous deployment, application server setup, and stack monitoring using Ansible playbooks.
- Wrote Ansible playbooks, with a Python SSH wrapper, to manage configurations of AWS nodes; tested playbooks on AWS instances using Python and ran Ansible scripts to provision Dev servers.
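A minimal sketch of the kind of playbook described above (the host group, package names, and file paths are illustrative assumptions, not taken from any actual project):

```yaml
# Illustrative playbook: configure AWS dev nodes
# (host group "dev_servers", package and service names are assumptions)
- hosts: dev_servers
  become: true
  tasks:
    - name: Install base packages
      yum:
        name: [git, htop]
        state: present
    - name: Deploy application config from template
      template:
        src: app.conf.j2
        dest: /etc/app/app.conf
      notify: restart app
  handlers:
    - name: restart app
      service:
        name: app
        state: restarted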
- Experience in configuring the Chef server and Chef workstation, bootstrapping various enterprise nodes, and automating cloud deployments using Chef, Ruby, and AWS CloudFormation templates.
- Experience in creating and updating Chef recipes, cookbooks, profiles, and roles using Ruby and JSON scripting; migrated all nodes from Ansible-based configuration management to Chef.
- Experience working on Jenkins, configuring and maintaining Continuous Integration (CI) and end-to-end automation for all builds and deployments; involved in writing Groovy scripts for building CI/CD pipelines with Jenkinsfiles, Terraform scripts, and CloudFormation templates.
- Experience with Maven to create artifacts from source code and deploy them to a central Nexus repository for internal deployments.
- Managed the version control system Git to record code changes via branching, merging, staging, etc.; integrated Git into the Continuous Integration environment using Jenkins/Hudson.
- Experience in installing and configuring application and web servers such as Apache, Nginx, Tomcat, JBoss, WebSphere, and WebLogic, and deployed several applications on these servers.
- Expertise in Shell, Perl, Ruby, and Python for environment builds and for automating deployments on WebSphere and WebLogic application servers.
- Extensively worked with Splunk, ELK, Nagios, AppDynamics, and Dynatrace for resource monitoring, network monitoring, and log/trace monitoring.
- Experience installing and configuring a variety of SQL and NoSQL databases such as MySQL, PostgreSQL, MongoDB, and Cassandra.
- Experience in maintaining servers, workstations, and computer labs, including software and hardware, with VMware, Oracle VirtualBox, and PuTTY.
- Experienced in Configuring Servers to provide Networking Services, including HTTP/HTTPS, FTP, NFS, SMTP, SSH and LDAP.
- Experience in managing and tracking defect status using the JIRA tool, and in planning and resolving issues per SLA.
TECHNICAL SKILLS
Version control tools: Git, SVN.
Cloud Technologies: AWS, Microsoft Azure, GCP
Build Tools: Ant, Maven, Gradle, MS Build
Configuration Tools: Ansible, Chef.
CI and CD Tools: Jenkins, Bamboo, VSTS.
Languages and Scripting: Python, Ruby, Shell.
Networking: HTTP/HTTPS, FTP, NFS, SMB, SMTP, SSH, NTP, TCP/IP, NIS, DNS, DHCP, LDAP, LAN, WAN.
Web/Application Servers: WebLogic, WebSphere, JBoss and Apache Tomcat
Operating Systems: Linux (SUSE, CentOS, RHEL, Ubuntu), UNIX, Windows
Containerization & Orchestration: Docker, Kubernetes
Monitoring Tools: Nagios, Splunk, ELK
Database: Oracle 11g, SQL, MySQL, DynamoDB, NoSQL
Virtualization: VMware ESXi, VirtualBox, Vagrant.
PROFESSIONAL EXPERIENCE
Confidential, Washington | DC
Cloud/DevOps Engineer
Responsibilities:
- Working as a DevOps Engineer for a team that involves different development teams and multiple simultaneous software releases.
- Worked on migrating and maintaining build and test environments in the cloud infrastructure; provided support and developed pipelines and release processes using Azure DevOps.
- Involved in creating, modifying, and maintaining cloud infrastructure build templates (JSON ARM templates) and code repositories in Azure DevOps.
- Used Azure Container Registry to store the Docker images and used those images to do the deployments for the Kubernetes pods.
- Wrote multiple Dockerfiles to build the microservice images, managed secrets from Azure Key Vault for use in CI/CD pipelines, and wrote YAML files to automate build and release steps in Azure DevOps.
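A minimal sketch of an Azure DevOps pipeline of the kind described above (the service connection name, repository name, and branch are assumptions):

```yaml
# Illustrative azure-pipelines.yml: build a Docker image and push it to ACR
# (containerRegistry and repository values are assumptions)
trigger:
  - main
pool:
  vmImage: ubuntu-latest
steps:
  - task: Docker@2
    displayName: Build and push image to container registry
    inputs:
      containerRegistry: myAcrServiceConnection
      repository: myapp
      command: buildAndPush
      dockerfile: Dockerfile
      tags: |
        $(Build.BuildId)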
- Worked on application registrations in Azure Active Directory and created service principals for each application.
- Responsible for writing Azure runbooks to automate the infrastructure, including creating Resource Groups, Key Vaults, and service principals.
- Deployed Azure IaaS virtual machines (VMs) and Cloud services (PaaS role instances) into secure VNets and subnets.
- Deployed multiple containerized applications via Docker in both Azure and AWS cloud environments, into standalone, Swarm, and AWS Elastic Container Service configurations.
- Configured Azure Automation DSC configuration management to assign permissions through RBAC, assign nodes to the proper Automation accounts and DSC configurations, and alert on any changes made to nodes and their configuration.
- Responsible for implementing AWS solutions and setting up cloud infrastructure with services such as EC2, EMR, S3, VPC, ELB, AMI, EBS, RDS, DynamoDB, Lambda, Auto Scaling, Route53, Subnets, NACLs, CloudFront, CloudFormation, CloudWatch, CloudTrail, SQS, and SNS.
- Implemented roles and groups for users and resources using Identity and Access Management (IAM) and managed network security using Security Groups and IAM.
- Worked with IAM service creating new IAM users & groups, defining roles and policies and Identity providers.
- Defined AWS Security Groups which acted as virtual firewalls that controlled the traffic allowed to reach one or more AWS EC2 instances.
- Created alarms in the CloudWatch service for monitoring server performance, CPU utilization, disk usage, etc.
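A sketch of how such a CPU-utilization alarm might be defined (the instance ID, threshold, and alarm name are illustrative assumptions; in practice the resulting dict would be passed to Boto3's `cloudwatch.put_metric_alarm(**params)`):

```python
# Build the parameter dict for a CloudWatch CPU-utilization alarm.
# Instance id and thresholds below are illustrative assumptions.
def cpu_alarm_params(instance_id, threshold=80.0, periods=3):
    """Alarm when average CPU over 5-minute periods exceeds `threshold`."""
    return {
        "AlarmName": f"{instance_id}-high-cpu",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                 # seconds per evaluation window
        "EvaluationPeriods": periods,  # consecutive breaching windows
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }

params = cpu_alarm_params("i-0123456789abcdef0")
```

Keeping the parameter-building separate from the API call makes the alarm definitions easy to unit-test without touching AWS.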
- Automated and implemented CloudFormation stacks (JSON templates) for creating and administering AWS resources such as VPCs, Subnets, Gateways, Auto Scaling groups, Elastic Load Balancers (ELB), and DB instances, among others, across different Availability Zones.
- Managed S3 policies for all buckets in dev and prod; defined lifecycle policies to transition current and noncurrent object versions to Intelligent-Tiering storage.
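A sketch of the lifecycle configuration described above (the rule ID and day counts are assumptions; the resulting dict would be passed to Boto3's `put_bucket_lifecycle_configuration`):

```python
import json

# Lifecycle config: move current and noncurrent versions to Intelligent-Tiering.
# Day counts are illustrative assumptions.
def intelligent_tiering_lifecycle(days_current=30, days_noncurrent=30):
    return {
        "Rules": [{
            "ID": "to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = whole bucket
            "Transitions": [
                {"Days": days_current,
                 "StorageClass": "INTELLIGENT_TIERING"}
            ],
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": days_noncurrent,
                 "StorageClass": "INTELLIGENT_TIERING"}
            ],
        }]
    }

config = intelligent_tiering_lifecycle()
print(json.dumps(config, indent=2))
```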
- Wrote Ansible playbooks to manage configurations of AWS nodes and tested playbooks on AWS instances using Python; ran Ansible scripts to provision Dev and Prod servers.
- Used the Boto3 SDK to automate processes involving resources in Amazon Web Services.
- Designed and Provisioned E2E AWS infrastructure for production Environment using availability zones, VPC, Route53, Record sets, ALB, Target Groups and m5.2xlarge EC2 instances.
- Developed Jenkins pipelines using Groovy scripts, triggered on GitHub pull requests, for application build and deploy using tools such as Maven 3.0, Docker 19.03.4, and Ansible.
- Experience in setting up CI/CD pipeline integrating various tools with Jenkins to build and run Terraform jobs to create infrastructure in AWS.
- Built Jenkins jobs to create AWS infrastructure from GitHub repos containing Terraform code.
- Provisioned AWS ECS tasks with Terraform; spun up multi-region infrastructure using locals, variables, local-exec and remote-exec provisioners, and nested modules; secured Terraform state with an S3 backend and a DynamoDB lock table for tfstate files.
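A minimal sketch of the S3 backend with DynamoDB state locking described above (bucket, key, region, and table names are assumptions):

```hcl
# Illustrative backend block: tfstate in S3 with DynamoDB locking
# (bucket, key, and table names are assumptions)
terraform {
  backend "s3" {
    bucket         = "example-tfstate-bucket"
    key            = "prod/ecs/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"  # lock table must have LockID hash key
    encrypt        = true
  }
}
```

The DynamoDB table prevents two concurrent `terraform apply` runs from corrupting the shared state file.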
- Worked with Terraform key features such as Infrastructure as code, Execution plans, Resource Graphs, Change Automation.
- Worked with Terraform to create AWS components like EC2, IAM, VPC, ELB, Security groups.
- Integrated Ansible dynamic inventory with reusable infrastructure set up by Terraform, sourcing hosts from the Terraform state file or from the AWS cloud based on tag names.
- Extensively used Docker to ship, run, and deploy applications securely, speeding up build/release engineering.
- Virtualized servers using Docker for test and dev environment needs and used Docker containers for configuration automation.
- Generated keys for CA-signed SSL certificates, converted the CRT file to PKCS12 and JKS, modified the Jenkins config file to enable the HTTPS port, and configured port forwarding with iptables from 443 to 8080.
- Configured SonarQube Scanner 7.7 for Java projects using the JaCoCo plugin and for Python projects using the coverage.py tool; modified Python test files to export coverage.xml from unit tests.
- Automated CI/CD with a Jenkins build pipeline by integrating GitHub, Maven, and the Nexus repository, and set up a Jenkins master/slave configuration to distribute builds across slave nodes.
- Administered Jenkins-Jira-GitHub Automation by using Python scripting, Jira API and Webhooks.
- Configured metrics collection locally using Prometheus and forwarded logs to Splunk using Fluentd.
- Hands on development experience in customizing Splunk dashboards, visualizations, configurations, reports and search capabilities using customized Splunk queries.
- Used the Python SDK Boto3 for AWS functions and integrated them to automate most complex tasks through Python scripting.
Environment: Azure, Amazon Web Services (AWS), Jenkins, Ansible, Terraform, Docker, GitHub, SonarQube, Maven, Jira, RHEL, YAML, shell, Python.
Confidential, San Jose | CA
Cloud/DevOps Engineer
Responsibilities:
- Set up CI/CD pipelines using continuous integration tools such as CloudBees Jenkins; automated the entire AWS EC2, VPC, S3, SNS, Redshift, and EMR based infrastructure using Terraform, Chef, Python, Shell, and Bash scripts; managed security groups on AWS and custom monitoring using CloudWatch.
- Launched AWS EC2 instances on Amazon Linux, Ubuntu, and RHEL for Development, Test, and Production environments, and set up AWS Security Groups, which behave as virtual firewalls controlling the traffic allowed to reach one or more EC2 instances.
- Designed AWS Cloud Formation templates to create custom sized VPC, Subnets, NAT to ensure successful deployment of Web applications and database templates.
- Provided highly durable data using the S3 data store, versioning, and lifecycle policies; created AMIs of mission-critical production servers for backup; took end-to-end deployment ownership for projects on AWS, including Python scripting for automation, scalability, and build promotions from staging to production.
- Worked on Azure Site Recovery and Azure Backup: installed and configured the Azure Backup agent and virtual machine backup, enabled Azure virtual machine backup from the Vault, configured ASR, and created multiple virtual machines using PowerShell scripting for testing purposes.
- Created multiple resources in Microsoft Azure, including Resource Groups, configured Azure Virtual Networks (VNets), subnets, Site to Site connectivity, Storage Accounts, and other resources. Performed health check of existing SCCM structure managing servers in both subdev and prod subscriptions.
- Involved in migration from on-premises to the Azure cloud; created custom-sized images for VMs and configured the Azure Backup Service to back up Azure VMs and on-premises data to Azure.
- Involved in migrating SQL Server databases to SQL Azure Database using the SQL Azure Migration Wizard, and used a Python API to upload agent logs into Azure Blob Storage.
- Implemented Kubernetes to deploy, scale, load-balance, and manage Docker containers with multiple namespaced versions, and implemented a production-ready, load-balanced, highly available, fault-tolerant Kubernetes infrastructure.
- Configured Kubernetes Services of type LoadBalancer and ClusterIP to expose UI-based applications, and migrated the cluster CNI from Flannel to Kube-router to support Kubernetes network policies.
- Deployed and configured Prometheus to monitor Kubernetes nodes with node-exporter and to monitor the Kubernetes API.
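A minimal sketch of a Prometheus scrape configuration for node-exporter of the kind described above (the job name and relabeling are illustrative; it assumes node-exporter listens on its default port 9100):

```yaml
# Illustrative prometheus.yml fragment: scrape node-exporter on every node
scrape_configs:
  - job_name: kubernetes-nodes
    kubernetes_sd_configs:
      - role: node          # discover every node in the cluster
    relabel_configs:
      # Rewrite the discovered kubelet address (:10250) to node-exporter (:9100)
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__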
- Worked on a Docker service for our Docker images, handled Docker container network communication using Weave, used rolling updates to implement zero-downtime PROD deployments, and used Docker Trusted Registry as the repository.
- Worked with Docker Compose and Docker Machine to create Docker containers for testing applications in the QA environment, and automated the deployment, scaling, and management of containerized applications across clusters of hosts using Kubernetes.
- Automated configuration management and deployments using Ansible playbooks and YAML for resource declaration. Created roles and updated Playbooks to provision servers by using Ansible.
- Installed, configured, and managed a centralized Ansible server, created playbooks to support various middleware application servers, and configured Ansible Tower as a configuration management tool to automate repetitive tasks.
- Created the Jenkins environment and configured the end-to-end build pipeline; involved in all areas of Jenkins, including plugin management, securing Jenkins, performance issues, analytics, scaling Jenkins, and integrating code analysis and test phases to complete the CD pipeline.
- Automated continuous build using Maven and deploy scripts for Continuous Integration tool Jenkins to enhance the overall operational environment.
- Managed the version control system Git to record code changes via branching, merging, staging, etc.; integrated Git into the Continuous Integration environment using Jenkins/Hudson.
- Designed and developed a configuration management database (CMDB) using Python and MySQL to maintain and audit the everyday configuration changes.
- Created Python scripts to fully automate AWS services, including web servers, ELB, CloudFront distributions, databases, EC2 and database security groups, S3 buckets, and application configuration; these scripts create stacks, provision single servers, or join web servers to existing stacks.
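A sketch of the kind of helper such scripts use: building the ingress rules for a web-server security group (ports, CIDRs, and the function name are assumptions; the list would be passed to Boto3's `authorize_security_group_ingress`):

```python
# Compose security-group ingress rules for a web server.
# Port numbers and CIDR ranges below are illustrative assumptions.
def web_ingress_rules(ssh_cidr="10.0.0.0/16"):
    def rule(port, cidr, desc):
        return {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": cidr, "Description": desc}],
        }
    return [
        rule(80, "0.0.0.0/0", "HTTP from anywhere"),
        rule(443, "0.0.0.0/0", "HTTPS from anywhere"),
        rule(22, ssh_cidr, "SSH from internal network only"),
    ]

rules = web_ingress_rules()
```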
- Experienced in creating reports, alerts, and dashboards with Splunk Search Processing Language (SPL), and in creating and running cron jobs for scheduled tasks.
- Created field aliases across application events and used time-modifier conversion commands.
- Wrote new plugins in Nagios to monitor resources and worked on the implementation team to build and engineer servers on RHEL, provisioning virtual servers on VMware ESX hosts in the cloud.
- Involved in setting up application servers such as Tomcat and WebLogic across Linux platforms, and wrote Shell, Bash, Perl, Python, and Ruby scripts on Linux.
- Used JIRA for creating bug tickets, storyboarding, pulling reports from the dashboard, and creating and planning sprints.
Environment: AWS, EC2, VPC, S3, SNS, Azure, Kubernetes, Docker, Ansible, Jenkins, Git, Maven, Terraform, Python, Shell Script, Bash script, Nagios, YAML, Prometheus, CloudWatch, Jira.
Confidential, New York | NY
Cloud/DevOps Engineer
Responsibilities:
- Created GCP firewall rules through the GCP console and REST API, defined at the VPC network level alongside the default and implied rules, to restrict access to sensitive data stored on virtual machines in the VPC.
- Used Google Stackdriver to monitor logs of both GKE and GCP instances and configured alerts from Stackdriver. Configured and automated Google Cloud services and was involved in deploying the content cloud platform using Google Compute Engine and Google Storage buckets. Wrote a Python script to send Stackdriver logs through a Cloud Function integrated with Pub/Sub and BigQuery, and automated all infrastructure workflows using Terraform.
- Exported Stackdriver logs to Pub/Sub and, from Pub/Sub, sent the data to a GCS bucket and BigQuery.
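A sketch of a Pub/Sub-triggered Cloud Function for a log export like the one described above: it decodes the base64 payload and extracts a couple of LogEntry fields (the function name and downstream handling are assumptions):

```python
import base64
import json

def handle_log_export(event, context=None):
    """Entry point for a Pub/Sub-triggered Cloud Function.

    `event["data"]` carries the base64-encoded JSON LogEntry published
    by the Stackdriver log sink.
    """
    entry = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    # What happens next (write to GCS/BigQuery, alerting, ...) is assumed.
    return {
        "severity": entry.get("severity", "DEFAULT"),
        "log_name": entry.get("logName", ""),
    }

# Simulated Pub/Sub event for local testing
payload = {"severity": "ERROR", "logName": "projects/p/logs/syslog"}
event = {"data": base64.b64encode(json.dumps(payload).encode()).decode()}
result = handle_log_export(event)
```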
- Configured Hybrid Cloud setup on GCP using VPN with two different regions and used Google Cloud console to create and manage GCP and GKE workloads.
- Used GCP App Engine for deploying and scaling web applications; configured and deployed instances in GCP environments and data centers; familiar with Compute Engine, Kubernetes Engine, Stackdriver Monitoring, and Elasticsearch, and managed security groups in both environments.
- Implemented AWS Security Groups, which act as virtual firewalls controlling incoming traffic, and configured the traffic allowed to reach one or more EC2 instances, along with Virtual Private Cloud (VPC), subnets, Internet Gateways, S3 buckets, and Route53 in the Amazon cloud environment.
- Set up scalability for application servers using the command line interface; set up and administered DNS in AWS using Route53; managed users and groups using AWS Identity and Access Management (IAM).
- Created Python scripts to fully automate AWS services, including web servers, ELB, CloudFront distributions, databases, EC2 and database security groups, S3 buckets, and application configuration; these scripts create stacks, provision single servers, or join web servers to existing stacks.
- Provided highly durable and available data using the S3 data store, versioning, and lifecycle policies, and created AMIs of mission-critical production servers for backup.
- Created clusters using Kubernetes kubectl; created many Pods, Replication Controllers, Services, Deployments, Labels, health checks, and Ingress by writing YAML files, and managed containerized applications across the cluster's nodes.
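A minimal sketch of a Deployment manifest of the kind described above (the image, labels, and probe path are assumptions); it would be applied with `kubectl apply -f deployment.yaml`:

```yaml
# Illustrative Deployment manifest (image and label names are assumptions)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0
          ports:
            - containerPort: 8080
          readinessProbe:       # health check before the Pod receives traffic
            httpGet:
              path: /healthz
              port: 8080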
- Ran Kubernetes locally with Minikube: created local clusters and deployed application containers as Pods along with ConfigMaps and Services.
- Initiated a microservices application through Docker and Kubernetes cluster formation for application scalability, creating Docker images to push to and pull from Docker Hub.
- Worked on the end-to-end setup of Artifactory Pro as a Docker container with a secure private Docker registry and local Docker repositories for storing built Docker images.
- Installed a Docker registry for local upload and download of Docker images to and from Docker Hub, and created Dockerfiles to automate the process of capturing and using the images.
- Achieved the Continuous Delivery goal in a highly scalable environment using Docker coupled with the load-balancing tool Nginx; virtualized servers using Docker for test and dev environment needs and automated configuration using Docker containers.
- Installed and configured the Chef server, workstation, client servers, and nodes; wrote several recipes and cookbooks in Chef to automate environment provisioning and middleware infrastructure installations.
- Worked with and configured the Chef server and Chef Solo, created Chef cookbooks, implemented the latest releases of Chef Solo, Compliance, and Habitat, and wrote Chef recipes to install and configure Nagios for infrastructure monitoring.
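A minimal sketch of a Chef recipe of the kind described above (the package, template, and service names are assumptions):

```ruby
# Illustrative Chef recipe: install and manage Nginx
# (package and service names are assumptions)
package 'nginx'

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'          # rendered from the cookbook's templates/
  notifies :reload, 'service[nginx]'
end

service 'nginx' do
  action [:enable, :start]
end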
- Implemented Jenkins and built pipelines to drive all microservice builds out to the Docker Registry and then deployed to Kubernetes.
- Worked on Jenkins, configuring and maintaining Continuous Integration (CI) and end-to-end automation for all builds and deployments; involved in writing Groovy scripts for building CI/CD pipelines with Jenkinsfiles, Terraform scripts, and CloudFormation templates.
- Involved in writing Maven scripts for configuring Java applications, and executed Maven builds locally to troubleshoot Java code issues and merge-related issues.
- Provided end-user training for all GitHub users to effectively use the tool and coordinate/assist developers with establishing and applying appropriate branching, labeling/naming conventions using GIT source control.
- Created New Relic dashboards and queries for all services, and developed monitoring solutions in New Relic, Datadog, and AWS Config, along with runbooks to guide identification and resolution of issues.
- Used Python scripting for automation, high scalability, and build promotions from staging to production.
Environment: GCP, Stackdriver, Pub/Sub, BigQuery, Cloud Function, GKE, App Engine, Dataflow, Cloud Shell, AWS EC2, Python, Terraform, S3, Route53, Kubernetes, Docker, Nginx, Chef, Jenkins, Maven, GitHub, New Relic
Confidential, Philadelphia | PA
DevOps/AWS Engineer
Responsibilities:
- Worked on AWS Lambda functions in Python that invoke scripts to perform various transformations and analytics on large data sets in EMR clusters.
- Installed and maintained AWS compute services such as EC2, EC2 Container Service, Elastic Beanstalk, AWS Lambda, Auto Scaling, and Elastic Load Balancer; created S3 buckets, managed bucket policies, and utilized S3 buckets for storing code securely.
- Launched EC2 instances with various AMI's and Integrated EC2 instances with various AWS tools by using IAM roles. Created Images of critical EC2 instances and used those images to spin up a new instance in different AZ's.
- Implemented Chef recipes for deployment of builds on internal data-center servers, and reused and modified the same Chef recipes to deploy directly onto Amazon EC2 instances.
- Installed and configured the Chef server, workstation, client servers, and nodes; wrote several recipes and cookbooks in Chef to automate environment provisioning and middleware infrastructure installations.
- Wrote Chef cookbooks and recipes in Ruby to install and configure infrastructure across environments and automated the process using Python scripts; set up Chef Infra, bootstrapped nodes, created and uploaded recipes, and handled node convergence in Chef SCM.
- Managed web application configuration and deployment to AWS cloud servers through Chef, and deployed a centralized log management system integrated into Chef for use by developers.
- Implemented CI/CD process using TeamCity for development team, allowing for dozens of code updates per hour with zero downtime.
- Contributed to CI automation by improving a Python framework to grab test-reporting data from Jenkins builds and summarize that information as a comment on Bitbucket pull requests.
- Documented release metrics and the software configuration process. Used Maven scripts to build the source code. Supported and helped create dynamic views and snapshot views for end users.
- Integrated Maven with shell scripts created in bash to automate the deployments for the Java based applications.
- Monitored the health of production servers using a Nagios setup and built Nagios monitors for newly deployed services.
- Developed Nagios plug-in scripts, various reports, and project plans in support of initiatives to maintain Nagios distributed system monitoring and management via several data-extrapolating applications.
- Developed Shell/Python Scripts for automation purposes like backing up disk groups, archive logging, Rest API calls, OS Patching, DB Patching etc.
- Designed and developed a configuration management database (CMDB) using Python and MySQL to maintain and audit the everyday configuration changes.
- Built and released cloud-based products containing Linux and Windows environments, using PowerShell and Python scripting.
Environment: AWS Lambda, AWS EC2, S3, EMR, Chef, Ruby Script, Python Script, TeamCity, Bitbucket, Maven, Nagios, Linux, Windows, MySQL, PowerShell.