DevOps Engineer Resume
Dallas, TX
SUMMARY
- DevOps Engineer with 8+ years of experience in AWS, microservice deployment and scaling on container orchestration platforms (EKS, Kubernetes), cloud architecture design, infrastructure automation, and network operations. AWS Certified Solutions Architect and Certified Kubernetes Administrator.
TECHNICAL SKILLS
Cloud: AWS
OS and Platform: Linux (RHEL)
Container & Tools: Docker, EKS (Kubernetes), Istio, Helm, HashiCorp Vault, JFrog Artifactory
Logging & Monitoring: EFK, Dynatrace, Metrics Server, Prometheus and Grafana (POC)
IaC: AWS CloudFormation, Terraform
CI/CD: Jenkins
Databases: AWS DocumentDB, Elasticsearch Service, Aurora, RDS, ElastiCache (Redis)
Security Tools: Checkmarx, Sonar, Twistlock
Cloud Services: AWS Load Balancers, CloudFront, S3, EC2, VPC, ASG, IAM, KMS, etc.
Programming Languages: Python (Beginner)
PROFESSIONAL EXPERIENCE
Confidential, Dallas, TX
DevOps Engineer
Responsibilities:
- Responsible for infrastructure automation (VPCs, EKS, EC2, etc.) with AWS CloudFormation and Terraform, adhering to bank security guidelines.
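As an illustration of this kind of automation, a minimal Terraform sketch; resource names, the CIDR block, and the private-endpoint settings are hypothetical, not the actual bank configuration, and the IAM role and subnets are assumed to be defined elsewhere:

```terraform
# Illustrative VPC + EKS skeleton; names and CIDRs are placeholders.
resource "aws_vpc" "app" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = { Name = "app-vpc" }
}

resource "aws_eks_cluster" "app" {
  name     = "app-eks"
  role_arn = aws_iam_role.eks.arn              # IAM role defined elsewhere
  vpc_config {
    subnet_ids              = aws_subnet.private[*].id  # private subnets defined elsewhere
    endpoint_private_access = true   # keep the API endpoint off the public internet
    endpoint_public_access  = false
  }
}
```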
- Designed and built a cloud-native Kubernetes (EKS) architecture for deploying and scaling major e-commerce products.
- Enforced authentication policies in Kubernetes with the Istio service mesh to secure pod-to-pod communication at the network and application layers.
- Deployed and integrated HashiCorp Vault into a complex Kubernetes workflow to securely store application environment configuration and secrets.
- Created new Jenkins Groovy pipeline scripts integrating security tools such as Checkmarx, Sonar, and Twistlock to improve the complete CI/CD flow through to production environments.
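A minimal declarative Jenkinsfile sketch of such a pipeline; the stage layout and the wrapper scripts are hypothetical, not the actual bank pipeline:

```groovy
// Illustrative pipeline: build, security scans, then deploy via Helm.
pipeline {
  agent any
  stages {
    stage('Build')           { steps { sh 'mvn -B package' } }
    stage('SAST: Checkmarx') { steps { sh './scripts/run-checkmarx.sh' } }   // hypothetical wrapper
    stage('Quality: Sonar')  { steps { sh 'mvn sonar:sonar' } }
    stage('Image: Twistlock'){ steps { sh './scripts/twistlock-scan.sh' } }  // hypothetical wrapper
    stage('Deploy')          { steps { sh 'helm upgrade --install app ./chart' } }
  }
}
```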
- Deployed AWS Load Balancers (Application and Network) with security configurations in place.
- Deployed serverless websites at the edge using CloudFront CDN, Lambda, and S3.
- Integrated the enterprise monitoring solution Dynatrace into the Kubernetes environment.
- Worked on open-source logging and alerting solutions such as EFK (Elasticsearch, Fluentd, Kibana).
- Good knowledge and hands-on experience in setting up, securing, and managing AWS databases such as DocumentDB (MongoDB-compatible), Elasticsearch Service, Redis, and RDS.
- Worked with organizational workflow tools such as JIRA and Confluence.
- Wrote custom shell scripts for backup, log rotation, and automation of various tasks for services such as HashiCorp Vault.
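A minimal sketch of the kind of log-rotation helper described above; the directory layout and retention windows are illustrative, not the production values:

```shell
#!/bin/sh
# Illustrative log rotation: compress logs older than one day,
# drop compressed archives older than seven days.
rotate_logs() {
  dir="$1"
  find "$dir" -name '*.log' -mtime +0 -exec gzip -f {} \;
  find "$dir" -name '*.log.gz' -mtime +7 -delete
}
```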
- Worked on cost optimization and right-sizing of pods.
- Hands-on experience with DevOps tools such as Helm and JFrog Artifactory, and AWS services such as SQS, IAM, S3, and Certificate Manager.
Confidential, Phoenix, AZ
AWS Cloud Engineer
Responsibilities:
- Automated an incident response system on AWS to enable traceability and enforce security best practices: whenever GuardDuty reports a severe finding, a Lambda function isolates the affected EC2 instance from all networks by swapping its security groups for a quarantine group and launches new instances for the forensics team.
- Technologies: Python, EC2, CloudWatch, Lambda, SNS, Step Functions, CloudFormation, AWS GuardDuty
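The isolation step can be sketched as pure selection logic; the dict shape mirrors boto3's describe_instances output, and the quarantine group ID is hypothetical. (EC2 requires at least one security group per network interface, so isolation in practice means swapping in an empty quarantine group rather than detaching everything.)

```python
# Illustrative sketch only; the real handler executed these calls with boto3.
def isolation_plan(instance, quarantine_sg):
    """Given a describe-instances style dict, return the modify-network-interface
    calls needed to cut the instance off from its current security groups."""
    calls = []
    for eni in instance.get("NetworkInterfaces", []):
        calls.append({
            "NetworkInterfaceId": eni["NetworkInterfaceId"],
            # Swap every attached group for the quarantine group.
            "Groups": [quarantine_sg],
        })
    return calls
```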
- Designed and implemented a 3-tier microservice architecture for web applications and ensured its security, resilience, and scalability in the production environment.
- Migrated lower-environment applications to spot instances on the spot.io platform for cost optimization, cutting instance cost by 70%.
- Technologies: EC2, Lambda, CloudWatch, Load Balancer, spot.io
- Created a scheduling system for AWS services in lower environments that stops instances and databases during non-working hours for cost optimization.
- Technologies: EC2, CloudWatch, Lambda, RDS
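The stop/start decision behind such a scheduler can be sketched as follows; the working-hours window (Mon-Fri, 08:00-20:00) is an assumption for illustration, and the real system invoked the stop/start APIs from scheduled Lambdas:

```python
from datetime import datetime

# Illustrative policy: instances run only on weekdays between 08:00 and 20:00.
def desired_state(now: datetime) -> str:
    """Return 'running' during working hours, 'stopped' otherwise."""
    is_weekday = now.weekday() < 5      # Mon=0 .. Fri=4
    in_hours = 8 <= now.hour < 20
    return "running" if (is_weekday and in_hours) else "stopped"
```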
Confidential, New York, NY
AWS Engineer
Responsibilities:
- Responsible for infrastructure automation (VPCs, EKS, EC2, etc.) with AWS CloudFormation and Terraform, adhering to security guidelines.
- Primary responsibilities included troubleshooting, diagnosing, and fixing production software issues; developing monitoring solutions; performing software maintenance and configuration; implementing fixes for internally developed code (Java/Perl/Python/shell scripts); running SQL queries; and updating, tracking, and resolving technical challenges.
- Sound knowledge of XML/SOAP, web services, workflow modeling, web application development, and industry-standard commerce systems, as well as UNIX/Linux operating systems.
- Responsibilities also included working alongside development on Amazon corporate and divisional software projects, updating and enhancing current software, automating support processes, and documenting our systems.
- The role required being detail-oriented, with superior verbal and written communication skills and strong organizational skills, and being able to juggle multiple tasks at once, work independently, and maintain professionalism under pressure.
- Identified problems before they happened and implemented solutions that detect and prevent outages.
- Monitored and troubleshot alerts and application requests during peak load-testing times.
- Implemented change requests for various configuration changes and application recycles or deprecations.
Confidential, Penfield, NY
DevOps/ Build and Release Engineer
Responsibilities:
- Created alarms and trigger points in CloudWatch based on thresholds; monitored server performance, CPU utilization, and disk usage; and used CloudWatch to track operational and performance metrics during load testing.
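A CloudFormation-style sketch of one such alarm; the metric, threshold, and referenced resources are placeholders, not the original configuration:

```yaml
# Illustrative CloudWatch alarm: CPU above 80% for two 5-minute periods.
CpuHighAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: CPU above 80% for 10 minutes during load testing
    Namespace: AWS/EC2
    MetricName: CPUUtilization
    Dimensions:
      - Name: InstanceId
        Value: !Ref AppInstance      # hypothetical EC2 resource
    Statistic: Average
    Period: 300
    EvaluationPeriods: 2
    Threshold: 80
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref AlertTopic              # hypothetical SNS topic
```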
- Implemented, maintained, and enhanced build processes using Maven, Ant, Apache Ivy, Gradle, Groovy, MSBuild, NAnt, and Nexus.
- Used Terraform scripts to automate instances that had previously been launched manually.
- Implemented and maintained branching and build/release strategies using Subversion/Git. Managed web app configuration and deployment to AWS cloud servers through Chef.
- Used Chef to manage web applications, configuration files, databases, users, and packages. Developed Chef recipes in Ruby to configure, deploy, and maintain software components of the existing infrastructure.
- Installed and configured Bamboo, Jira, and Confluence.
- Managed local deployments in Kubernetes, creating a local cluster and deploying application containers.
- Administered cookbook source code repositories for deployment cookbooks and implemented ChefSpec to catch cookbook issues at the initial stages of recipe authoring.
- Created Ansible playbooks to automatically install packages from a repository, change the configuration of remotely configured machines, and deploy new builds.
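A minimal playbook sketch of that pattern; the host group, packages, paths, and service name are illustrative:

```yaml
# Illustrative Ansible playbook: install packages, push config, restart on change.
- hosts: app_servers
  become: true
  tasks:
    - name: Install packages from the repository
      yum:
        name: [nginx, git]
        state: present
    - name: Push updated configuration
      template:
        src: app.conf.j2
        dest: /etc/app/app.conf
      notify: restart app
  handlers:
    - name: restart app
      service:
        name: app
        state: restarted
```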
- Docker has been core to this experience, along with Kubernetes.
- Set up Splunk monitoring on Linux and Windows systems.
- Worked on AWS services such as EBS, RDS, ELB, Route 53, S3, EC2, AMI, and IAM through the AWS console.
- Expertise in developing templates for AWS infrastructure as code using Terraform to build staging and production environments.
- Experienced in building AWS S3 buckets, managing policies, and using S3 and Glacier for storage and backup.
- Configured Veracode scans in the VSTS pipeline for vulnerability scanning to check the health of the code and, especially, to find any security issues.
- Integrated GitLab into Jenkins to automate the code checkout process.
- Used Kubernetes to deploy, scale, and load-balance; worked with Docker Engine, Docker Hub, Docker images, and Docker Compose for handling images for installations and domain configurations.
- Used Docker to virtualize deployment containers and pushed code to the EC2 cloud using PCF. Built additional Docker slave nodes for Jenkins from custom-built Docker images and instances.
- Managed AWS infrastructure as code using Terraform.
- Used Jenkins as the CI/automation tool for continuous integration; configured master and slaves to run various builds on different machines, and used Git as the source code manager with Maven and Gradle as build tools.
- Extensively used the Java Collections framework and exception handling.
- Created quality gates in SonarQube dashboard and enforced in the pipelines to fail the builds when conditions are not met.
- Used Bash and Python (including Boto3) to supplement the automation provided by Ansible and Terraform for tasks such as encrypting EBS volumes backing AMIs.
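The AMI-encryption task can be sketched as selection logic over the shape of boto3's describe_images output; this helper is illustrative, and the actual re-encryption (copying snapshots with Encrypted=True) was done with Boto3 in the real scripts:

```python
# Illustrative sketch; input mirrors one image dict from describe_images().
def unencrypted_snapshots(image):
    """Return snapshot IDs of unencrypted EBS volumes backing an AMI."""
    snaps = []
    for mapping in image.get("BlockDeviceMappings", []):
        ebs = mapping.get("Ebs")
        # Skip ephemeral (instance-store) mappings, which have no Ebs key.
        if ebs and not ebs.get("Encrypted", False):
            snaps.append(ebs["SnapshotId"])
    return snaps
```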
- Used Splunk to centralize and analyze logs; used Nagios for infrastructure and service monitoring and alerting.