Cloud DevOps Engineer Resume
Morristown, NJ
PROFESSIONAL SUMMARY:
- During my 6 years of experience in roles such as DevOps Systems Engineer and AWS DevOps Engineer, gained practical skills building and supporting web SaaS solutions on Linux/Unix platforms in the cloud (AWS) and on-premises.
- Took part in automating, supporting, and ensuring CI/CD in product development. Managed cloud-based infrastructure, especially on AWS, while simultaneously troubleshooting issues and initiating corrective actions.
- Extensively worked on AWS cloud services like EC2, VPC, IAM, RDS, ELB, EMR, ECS, Auto Scaling, S3, CloudFront, Glacier, Route 53, OpsWorks, CloudWatch, DynamoDB, and Lambda.
- Knowledge of writing Terraform templates to create customized VPCs, subnets, and NAT gateways to ensure successful deployments.
- Experience in branching, tagging, and developing, managing, and maintaining versions across Source Code Management (SCM) tools like Git and Subversion (SVN) on Linux, and SCM hosting services like GitHub and Bitbucket.
- Enthusiastic Cloud/DevOps Systems Engineer eager to contribute to team success through hard work, attention to detail, and excellent organizational skills. A clear understanding of roles and responsibilities. Motivated to learn, grow, and excel in the IT field.
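The Terraform VPC work above hinges on carving a VPC CIDR into per-AZ public and private subnets. A minimal Python sketch of that layout logic, using only the standard `ipaddress` module (the function name, AZ list, and /24 sizing are illustrative assumptions, not the actual templates):

```python
import ipaddress

def plan_subnets(vpc_cidr, azs, new_prefix=24):
    """Split a VPC CIDR into one public and one private subnet per AZ.

    Returns {az: {"public": cidr, "private": cidr}}. Hypothetical helper
    illustrating the layout a Terraform VPC module would encode.
    """
    blocks = list(ipaddress.ip_network(vpc_cidr).subnets(new_prefix=new_prefix))
    if len(blocks) < 2 * len(azs):
        raise ValueError("VPC CIDR too small for requested subnet plan")
    plan = {}
    for i, az in enumerate(azs):
        plan[az] = {
            "public": str(blocks[2 * i]),       # routed via internet gateway
            "private": str(blocks[2 * i + 1]),  # routed via NAT gateway
        }
    return plan

layout = plan_subnets("10.0.0.0/16", ["us-east-1a", "us-east-1b"])
```

Terraform's `cidrsubnet()` function performs the same arithmetic inside a template; this sketch just makes the subnet plan visible and testable.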
TECHNICAL SKILLS:
- Shell scripts, automation, ITIL, SNMP & NTP, AWS CLI, Ansible, VPC, NAT, S3, EC2, CloudWatch, LogicMonitor, Jira, RedHat Linux 5.x/6.x, CentOS, Terraform, ELB, AppDynamics, Splunk, ServiceNow, Docker, Jenkins, NGINX, Git, Maven, Bitbucket, JSON, YAML, Python, Kubernetes, Docker Swarm, ECS, Java.
- AMI, Puppet, Ruby, Tomcat, Auto Scaling, Route 53, DNS, Nagios, Groovy, change management, web applications, databases, documentation processes, troubleshooting.
PROFESSIONAL EXPERIENCE:
Cloud DevOps Engineer
Confidential - Morristown, NJ
Responsibilities:
- Working as a DevOps Engineer on a team supporting three different development teams and multiple simultaneous software releases.
- Provisioned infrastructure support for Dev, UAT and Prod environments using Terraform and Ansible in AWS environment, including improvements to existing infrastructure.
- Involved in migration from on-premises to AWS Cloud and created custom images for VMs.
- Wrote templates for AWS infrastructure as code using Terraform to build staging and production environments. Delivered and managed infrastructure as code using Ansible and Terraform, adopting a blue-green deployment approach using the Auto Scaling, ELB, and Route 53 services.
- Automated continuous-integration builds, nightly builds, deployments, and unit tests across multiple environments (DevTest, UAT, and Production), each comprising different types of servers (DB, app, web, reports) and a different number of servers of each type (for load balancing and the like), using Jenkins pipelines for AWS.
- Used AWS Lambda to Automate log handling process for blacklisting bad IPs in AWS Web Application Firewall (WAF).
- Deployed multiple containerized applications via Docker in AWS cloud environments, into standalone, Swarm, and AWS Elastic Container Service configurations. Wrote Jenkinsfiles for scaling an existing service's task count down to 0, shutting down and restarting RDS and EC2 instances, and scaling the desired task count back to 1 at a fixed time.
- Provided strategies and requirements for the seamless migration of applications, web services, and data from local and server-based systems to the AWS cloud.
- Wrote YAML config files for Kubernetes deployments, built the pods, and deployed the microservices in those pods, scaling the Kubernetes pods as and when required. Installed and administered Jenkins CI for Ant and Maven builds; installed, configured, and managed RDBMS and NoSQL tools such as DynamoDB.
- Developed Ansible roles to maintain the large playbooks easily and configured Ansible modules for AWS cloud deployment.
- Used Ansible to manage web applications, config files, databases, commands, users, mount points, and packages, and to assist in building automation policies. Wrote Ansible playbooks with Python SSH as the wrapper to manage configurations of AWS nodes, and tested playbooks on AWS instances using Python.
- Configured Identity and Access Management (IAM) groups and users for improved login authentication, and managed those IAM accounts (with MFA) and IAM policies to meet security audit and compliance requirements. Efficiently handled periodic exporting of SQL data into Elasticsearch.
- Used Bug tracking tools like JIRA for issue tracking, workflow collaboration, and tool-chain automation.
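The Lambda-based WAF blacklisting mentioned above needs a log-scanning step that decides which IPs to block. A hedged sketch of that parsing logic in plain Python (the JSON log shape, the failure threshold, and the function name are assumptions; the actual WAF IP-set update via boto3 is omitted so the logic stays self-contained):

```python
import json
from collections import Counter

THRESHOLD = 5  # assumed number of 4xx hits before an IP is considered bad

def bad_ips_from_logs(log_lines, threshold=THRESHOLD):
    """Scan JSON-formatted web log lines and return IPs with too many
    4xx responses, sorted for deterministic output.

    Illustrative stand-in for the log-handling step of a Lambda that
    feeds an AWS WAF IP set.
    """
    hits = Counter()
    for line in log_lines:
        entry = json.loads(line)
        if 400 <= entry["status"] < 500:
            hits[entry["ip"]] += 1
    return sorted(ip for ip, n in hits.items() if n >= threshold)
```

In the real pipeline the returned list would be pushed to WAF (for example via the `wafv2` `update_ip_set` API) rather than just returned.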
Cloud Systems Engineer
Confidential - Reading, PA
Responsibilities:
- Managing the company's Linux-based cloud environment in AWS: configuring, deploying, and administering several components to support the company's cloud applications, and maintaining and supporting cloud infrastructure to make sure cloud-based systems meet the company's requirements. Responsible for maintaining shell scripts, setting up cron jobs to run them regularly, and automating the required configurations using Ansible.
- Experience in AWS services like VPC, EC2, S3, ELB, Auto Scaling Groups (ASG), EBS, RDS, IAM, Route 53, CloudWatch, and CloudTrail. Created multiple VPCs with public and private subnets as per requirements, and created S3 buckets in the AWS environment to store files.
- Implemented Domain Name Service (DNS) through Route 53 for highly available and scalable applications. Used security groups, network ACLs, internet gateways, and route tables to ensure a secure zone for the organization in the AWS public cloud.
- Worked on AWS Lambda functions to check the age of users' secret access keys and rotate them periodically for security, using Python scripting with the Boto3 library and the datetime module.
- Developed data-transition programs from S3 to DynamoDB using AWS Lambda, creating functions in Python for certain events based on use cases.
- Worked on Transit Gateway to connect VPCs to each other and, via Transit Gateway Connect, to the on-premises datacenter through the customer gateway. Created an attachment for each VPC and added routes between the transit gateway and the VPCs to determine the next hops associated with each VPC.
- Worked on RedHat Linux systems, taking AMIs as backups; created a gold AMI by updating the required configuration. Implemented a Lifecycle Manager policy to take tag-based AMI backups, replacing an earlier tag-based shell script.
- Wrote an Ansible script to grow a system's EBS volume; previously done manually, this automates resizing EBS volumes with zero downtime.
- Worked on LogicMonitor, adding RedHat systems to it so DataSources could monitor CPU, SNMP, NTP, and hostname issues, and adding Postgres service DataSources, such as which port is in use, and monitoring replicated Postgres services.
- Automated the environment with Ansible playbooks, installing the package manager, configuring the firewall, disabling the root password, mounting volumes by UUID, and so on, across multiple servers at a time using host lists.
- Used Jira as a ticketing tool to receive requests from respective teams, closing assigned tickets within the two-week sprint.
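The access-key age check described above boils down to comparing key creation dates against a rotation cutoff. A testable sketch of that logic (the 90-day policy and function name are assumptions; in the real Lambda the creation dates would come from IAM via boto3's `list_access_keys`):

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE_DAYS = 90  # assumed rotation policy, not the actual one

def keys_to_rotate(key_create_dates, now=None, max_age_days=MAX_KEY_AGE_DAYS):
    """Return user names whose access keys exceed the rotation age.

    key_create_dates: {user: aware datetime of key creation}. Dates are
    passed in (rather than fetched from IAM) so the age logic is
    testable on its own.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return sorted(user for user, created in key_create_dates.items()
                  if created < cutoff)
```

The Lambda wrapper would iterate IAM users, collect each key's `CreateDate`, and notify or deactivate the users this function returns.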
Systems Engineer
Confidential - Orlando, FL
Responsibilities:
- Provisioned infrastructure in the cloud platform with a high-availability model by implementing Terraform, including monitoring through AppDynamics and CloudWatch in different work environments in ECS and ALB. Analyzed infrastructure autoscaling requirements based on application behavior and break-test analysis. Created a fully automated build-and-deployment platform, coordinated with operations, and orchestrated deployments using Terraform, Rundeck, Git, and other DevOps tools.
- Wrote Terraform scripts and Rundeck runbooks to automate jobs, spin up clusters in AWS, and configure serverless Lambda jobs. Deployed infrastructure and applications through Terraform and Docker orchestration tools built into the CI/CD pipeline.
- Experience setting up Chef Infra and the chef-repo, bootstrapping nodes, creating and uploading recipes, and node convergence in Chef SCM.
- Experience in ITIL Change and Incident management processes and procedures. Respond to and manage infrastructure requests via ServiceNow and other ticketing tools. Execute DevOps functions to automate infrastructure provisioning and change management.
- Worked on POC for automating AWS services using Lambda, Python, and Boto3.
- Experienced in search and analytic tools like Splunk, including developing Splunk queries and dashboards targeted at understanding application performance and capacity analysis.
- Handled production issues and non-production issues and worked with application teams, database teams, and networking teams to resolve the issues. Involved in Root cause analysis for the issues encountered. Provided on-call support for all the production applications.
- Worked on RabbitMQ message queues: clearing messages, splitting messages, and checking the flow of messages.
- Worked on AWS CloudWatch for monitoring the application infrastructure, used AWS email services for notifications, and configured S3 versioning and lifecycle policies to back up files and archive them in Glacier.
- Worked on Chef issues in prod and pre-prod applications: finding the issues through Rundeck, fixing them, and finally running chef-client. Worked on Chef version pinning to update applications to the latest version.
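The S3 versioning and Glacier archival work described above is driven by a lifecycle configuration document. A small Python helper that builds the dict shape boto3's `put_bucket_lifecycle_configuration` expects (the rule ID, prefix, and day counts are illustrative assumptions):

```python
def glacier_lifecycle(prefix, archive_after_days=30, expire_after_days=365):
    """Build an S3 lifecycle configuration that transitions objects under
    `prefix` to Glacier after `archive_after_days` and expires them after
    `expire_after_days`.

    The returned dict is the LifecycleConfiguration argument for boto3's
    s3.put_bucket_lifecycle_configuration; day counts here are sketch
    defaults, not a recommended policy.
    """
    return {
        "Rules": [
            {
                "ID": f"archive-{prefix.strip('/')}",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": archive_after_days, "StorageClass": "GLACIER"}
                ],
                "Expiration": {"Days": expire_after_days},
            }
        ]
    }
```

Keeping the policy as a generated dict makes it easy to review and diff before applying it to a bucket.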
DevOps/AWS Engineer
Confidential - Owings Mills, MD
Responsibilities:
- Responsible for architecting multi-Availability Zone components in AWS like EC2, IAM, VPC, RDS with replication, S3 for object storage and static web pages, and auto scaling of services like ECS and ELB with SSL certs. Worked on AWS Route 53 to register domain names, route internet traffic for domains, and monitor health checks of the resources.
- Experience working on rolling updates using the Deployments feature in Kubernetes, and implemented blue-green deployment to maintain zero downtime. Maintained several pods and services using Kubernetes' master/minion architecture and worked on manifest files to set properties for the Kubernetes cluster.
- Used Docker to virtualize deployment containers, built additional Docker slave nodes for Jenkins using custom-built Docker images and instances, created Docker images using a Dockerfile, and worked on Docker container snapshots, removing images, and managing Docker volumes.
- Worked on Ansible to manage all existing servers and automate the build/configuration of new servers and writing various custom Ansible Playbooks for deployment orchestration.
- Coordinate/assist developers with establishing and applying appropriate branching, labeling/naming conventions using GIT, Bitbucket source control.
- Integrated SonarQube with Jenkins for continuous inspection of code quality and analysis with SonarQube Scanner for Maven.
- Used bug-tracking tools like JIRA for ticketing, integrating with Splunk Enterprise, and bug reporting for products through JIRA.
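The zero-downtime rolling updates mentioned above come down to the Deployment's update strategy settings. A sketch that builds a minimal Kubernetes Deployment manifest as a Python dict with `maxUnavailable: 0`, so a rollout never drops below the desired replica count (the names, image, and replica count are illustrative):

```python
def rolling_update_deployment(name, image, replicas=3):
    """Minimal Kubernetes Deployment manifest (as a dict, ready to dump
    to YAML) using the RollingUpdate strategy.

    maxUnavailable: 0 keeps every existing pod serving until its
    replacement is ready; maxSurge: 1 allows one extra pod during the
    rollout. Values here are a sketch, not a tuned configuration.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "strategy": {
                "type": "RollingUpdate",
                "rollingUpdate": {"maxUnavailable": 0, "maxSurge": 1},
            },
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
```

Serializing this dict with a YAML library yields a manifest that `kubectl apply -f -` can consume directly.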
Java and Python Developer
Confidential
Responsibilities:
- Associated with various phases of the Software Development Life Cycle (SDLC) of the application, like requirement gathering, design, analysis, and code development.
- Used OOP concepts in the overall design and development of web/system applications.
- Experienced working with a team of developers on Python applications for prioritizing tasks and for risk management.
- Developed entire frontend and backend modules using Python on the Django web framework to perform scan software unit monitoring.
- Used GIT as Version Control and Implemented business logic using Python/Django.
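Task prioritization for risk management, as mentioned above, can be reduced to a risk-weighted ordering. A purely illustrative sketch (the impact × likelihood scoring model, field names, and function name are hypothetical assumptions, not the project's actual business logic):

```python
def prioritize(tasks):
    """Order tasks by a simple risk score (impact x likelihood),
    highest-risk first.

    Each task is a dict with numeric "impact" and "likelihood" fields;
    the scoring model is a hypothetical illustration of the kind of
    prioritization logic a Django backend might implement.
    """
    return sorted(tasks,
                  key=lambda t: t["impact"] * t["likelihood"],
                  reverse=True)
```

In a Django application this would typically live in a model manager or service layer, operating on queryset values rather than plain dicts.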