Senior AWS DevOps Engineer Resume
Irving, Texas
SUMMARY
- Lead DevOps Engineer with over 12 years of experience in the IT and telecom industries, working with telecom and airline clients.
- Extensive knowledge of the VAS and application production domain.
- Worked as a Lead AWS DevOps Engineer, Azure/GCP cloud Linux systems/middleware engineer, and telecom engineer.
- Worked with the development team to create appropriate cloud solutions for client needs.
- Managed AWS services including EC2, ELB, VPC, EBS, S3, IAM, WorkSpaces, CloudTrail, and CloudWatch.
- Created VPCs, security groups, and subnets; procured Elastic IPs and attached additional EBS volumes to instances; planned clusters using the best-suited AMIs.
- Deployed entire infrastructure using CloudFormation and several AWS APIs and SDKs. Experienced with the cloud-supported application ServiceNow (SNOW) for incidents and service requests per defined work instructions, resolving tickets within the defined Service Level Agreement (SLA).
- Worked with configuration management tools such as Puppet, Chef, and Ansible integrated with CI/CD and version control systems. Hands-on experience with the Crucible code review system.
- Configured Jenkins to integrate tools (Maven, Git, Selenium, Docker, Ansible, Puppet, Chef, and Kubernetes) and to notify the team of build status by generating and emailing reports.
- Good experience with Docker, from installation through deployment of applications in containers.
- Experienced in Bash and Python scripting with a focus on DevOps tooling.
- Implemented detailed systems and services monitoring of AWS cloud resources using Nagios and Zabbix.
- Wrote intelligent custom health checks to reduce notification noise and automate service restarts (see the sketch after this list).
- Experienced in configuring and deploying applications to Tomcat application servers and static content to Apache.
- Strong working knowledge of Linux, MySQL, Oracle, SQL Server, PostgreSQL, Perl, shell scripting, XML, Python, database procedures, awk, sed, JBoss, and Tomcat.
- Responsible for maintaining Linux/Windows servers, switches, and storage.
- Responsible for installing servers and products and for debugging network, hardware, and stack-level issues.
- Experienced in ITIL incident management procedures; responsible for 24/7 availability of USSD production systems.
- Worked as lead on the RJIO SMSC project and contributed to efficient coordination with clients.
- Ensured the effective integration of ITIL (Information Technology Infrastructure Library) principles and service management strategies within the IT department.
- Created and administered multiple AKS (Kubernetes) clusters on the Microsoft Azure platform.
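The following is a minimal, illustrative sketch of the kind of custom health check described above: it probes an application endpoint and restarts the service only after several consecutive failures. The URL, systemd unit name, and thresholds are hypothetical placeholders, not details from any specific production system.

```python
#!/usr/bin/env python3
"""Illustrative health check: probe an HTTP endpoint and restart the service
only after several consecutive failures, to cut notification noise.
The URL, service name, and thresholds are hypothetical placeholders."""
import subprocess
import time
import urllib.request

URL = "http://localhost:8080/health"   # hypothetical application health endpoint
SERVICE = "tomcat"                      # hypothetical systemd unit to restart
MAX_FAILURES = 3                        # consecutive failures before acting
INTERVAL_SECONDS = 30

def is_healthy() -> bool:
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

def main() -> None:
    failures = 0
    while True:
        if is_healthy():
            failures = 0
        else:
            failures += 1
            if failures >= MAX_FAILURES:
                # Restart the service instead of paging on every transient blip.
                subprocess.run(["systemctl", "restart", SERVICE], check=False)
                failures = 0
        time.sleep(INTERVAL_SECONDS)

if __name__ == "__main__":
    main()
```

In practice a script like this would typically run under cron or as a systemd timer and emit its own metrics before restarting anything.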
TECHNICAL SKILLS
AWS Services: EC2, ELB, VPC, RDS, AMI, IAM, CloudFormation, S3, CloudWatch, CloudTrail, SNS, SQS, EBS, Route 53
Operating Systems: Linux (Red Hat, Ubuntu, CentOS, Debian) and Windows
CI & Monitoring Tools: Jenkins, CloudWatch
Cloud Environments: AWS, GCP (Google Cloud Platform); IaaS, PaaS, SaaS
Containerization Tools: Docker, Docker Swarm, Kubernetes
Bug Tracking Tools: JIRA, Bugzilla, HP Quality Center, Remedy, IBM ClearQuest
Configuration Management Tools: Chef, Puppet, Ansible, SaltStack
Databases: Oracle, MySQL, MongoDB, SQL Server (MS SQL), NoSQL, Cassandra
Build Tools: Ant, Maven, Hudson, Jenkins, XL Release, XL Deploy
Version Control Tools: Subversion (SVN), Git, GitHub, Perforce
Web & Application Servers: Apache, Tomcat, WebSphere, JBoss, WebLogic, Nginx, IIS, TFS, Azure
Languages/Scripts: HTML, Shell, Bash, PHP, Python, Chef, Ruby, Perl
SDLC: Agile, Scrum, Waterfall
PROFESSIONAL EXPERIENCE
Confidential, Irving, Texas
Senior AWS DevOps Engineer
Responsibilities:
- Installed applications on AWS EC2 instances and configured storage on S3 buckets.
- Utilized AWS services such as EC2, EBS, SNS, ELB, and Auto Scaling to build highly dependable, highly scalable, cost-effective applications without having to manage the underlying infrastructure.
- Migrated on-premises lower environments to Cloud SQL and GCE on GCP to streamline OLTP workloads.
- Implemented Spark/Kafka streaming to pick up data from Kafka and send it to the Spark pipeline.
- Experienced with open-source Kafka, ZooKeeper, and Kafka Connect.
- Managed multi-tier and multi-region architecture using AWS CloudFormation.
- Automated Compute Engine and Docker Image Builds with Jenkins and Kubernetes.
- Controlled and automated application deployments and updates using Kubernetes
- Integrated Kubernetes with network, storage, and security to provide comprehensive infrastructure and orchestrated containers across multiple hosts.
- Architected and automated the Linux server patching process using Ansible, Jenkins, Nexus, and GitLab.
- Hands-on experience with high-availability methodologies for Azure Cloud and SQL Server 2014 AlwaysOn Availability Groups (AOAG).
- Used Git along with Puppet Enterprise to deploy software to production and troubleshoot issues during deployment.
- Configured Azure Active Directory and managed users and groups
- Used Bash and Python with Boto3 to supplement the automation provided by Ansible and Terraform for tasks such as encrypting the EBS volumes backing AMIs (see the sketch after this list).
- Performed branching, tagging, and release activities with the version control tool Git (GitHub).
- Managed server instances on the Amazon Web Services (AWS) platform by installing configuration management tools such as Ansible.
- Deployed and maintained business-critical, high-uptime product environments.
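A minimal Boto3 sketch of the kind of automation referenced above follows: it copies an existing AMI with encryption enabled so the EBS snapshots backing it end up encrypted. The region, AMI ID, and KMS key alias are hypothetical placeholders, not values from any actual environment.

```python
"""Illustrative Boto3 sketch: produce an encrypted copy of an AMI so that the
EBS snapshots backing it are encrypted. IDs, region, and KMS key are placeholders."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

def encrypt_ami(source_image_id: str, kms_key_id: str) -> str:
    """Copy an AMI with encryption enabled and wait until it is available."""
    resp = ec2.copy_image(
        SourceImageId=source_image_id,
        SourceRegion="us-east-1",
        Name=f"{source_image_id}-encrypted",
        Encrypted=True,          # forces the backing EBS snapshots to be encrypted
        KmsKeyId=kms_key_id,
    )
    new_image_id = resp["ImageId"]
    ec2.get_waiter("image_available").wait(ImageIds=[new_image_id])
    return new_image_id

if __name__ == "__main__":
    # Hypothetical AMI ID and KMS key alias, for illustration only.
    print(encrypt_ami("ami-0123456789abcdef0", "alias/ebs-encryption-key"))
```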
Confidential
Lead DevOps Engineer
Responsibilities:
- Worked extensively with the Apigee Edge API management product, consuming APIs to implement customer use cases.
- DevOps experience with continuous integration/continuous delivery (CI/CD) and automation tools such as Git, Jenkins, Ansible, and Puppet.
- Automated the creation of Azure Container Instances and a custom VM role, including validation of OpenSCAP compliance scans, using shell, Python, Groovy, and Packer.
- Implemented Unix security, performing audit, SOX, and cyber compliance activities.
- Installed and deployed VMware ESX hosts for the VM cloud and vSphere.
- Administered the DevOps tool suite: Jenkins, GitHub, JIRA, Confluence, Puppet, and the ELK stack; used Rundeck for deployment and orchestration.
- Provisioned and supported bare-metal servers with Red Hat Storage Solutions baseline for several Big Data applications including Cassandra, Hadoop, and GlusterFS.
- Kernel and U-Boot customization and porting for the target board.
- Set up alerting and monitoring using Stackdriver in GCP, evaluating GCP instance performance (CPU and memory usage) and setting up health checks, GCS, and VPC.
- Used Kubernetes to cluster Docker containers in the runtime environment throughout the CI/CD pipeline.
- Created a Kafka producer API to send live-stream data into various Kafka topics (see the sketch after this list).
- Experienced in using Terraform to create scripts that launch and manage Azure and AWS infrastructure; implemented all infrastructure deployments while maintaining clean Terraform code using workspaces and modules.
- Proficient in deploying and managing AWS services, including but not limited to VPC, Route 53, ELB, EBS, EC2, and S3.
- Experience working on Informatica Cloud to extract and load data to Redshift and/or S3.
- Used the version control systems IBM ClearCase and SVN to configure and maintain Tomcat configurations and source code.
- Worked with Node.js, JavaScript, TypeScript, Python (Boto3), Terraform, CloudFormation, AWS CDK/SAM/SDK, Serverless Framework, Packer, Ansible, PowerShell, and Bash.
- Expertise in all areas of Drupal, including Views, CCK, Drush, cron, custom modules, and templates (tpl); worked closely on securing additional resources for the team.
- Implemented an HTTPS ingress controller with TLS certificates on AKS to provide reverse proxying and configurable traffic routing for individual Kubernetes services.
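Below is an illustrative Python sketch of a Kafka producer, corresponding to the producer work noted above; it assumes the kafka-python library, and the broker address, topic name, and payload are hypothetical placeholders (the original producer API may well have been written in another language).

```python
"""Illustrative Kafka producer sketch using the kafka-python library.
Broker address, topic, and payload are hypothetical placeholders."""
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],                      # assumed broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),  # JSON-encode payloads
)

def publish_event(topic: str, event: dict) -> None:
    """Send one event to the given topic and block until it is acknowledged."""
    future = producer.send(topic, value=event)
    future.get(timeout=10)  # raises if the broker did not acknowledge in time

if __name__ == "__main__":
    # Hypothetical live-stream event for illustration.
    publish_event("live-stream-events", {"source": "demo", "ts": time.time()})
    producer.flush()
```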
Confidential
Senior DevOps Engineer
Responsibilities:
- Used AWS CloudFormation to build infrastructure, deploying components such as VPCs, Transit Gateways, subnets, security groups, EC2 instances, ELBs, S3 buckets, and backup vaults/plans.
- Created CloudWatch dashboards and added widgets for the metrics required by the performance, security, and DB teams.
- Using AWS Backup, created backup vaults and plans; after discussion and approval from senior architects, created the respective backup rules and assigned resources using backup tags (see the sketch after this list).
- Configured a Jenkins cluster and monitored/maintained master and slave nodes. Using a Jenkinsfile, created declarative pipelines for baking AMIs, infrastructure deployments, and several automation tasks.
- Dockerized the Jenkins worker node; built images on Ubuntu and installed the required tools and packages.
- Leveraged the AWS ECS Fargate service to deploy runtime containers and used the ECR service to store images.
- Created multiple Terraform modules to manage configurations and applications and to automate the installation process.
- Worked on Google Cloud Platform (GCP) services such as Compute Engine, Cloud Load Balancing, Cloud Storage, Cloud SQL, Stackdriver Monitoring, and Cloud Deployment Manager.
- Created several Jenkins jobs as required by the application, implemented CI using Jenkins, and automated the build and deployment process, eliminating 80% of the manual work.
- Wrote Ansible playbooks for installing Apache Tomcat and all of its dependencies and for deploying application artifacts onto it.
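A minimal Boto3 sketch of the AWS Backup setup described above is shown below; the vault name, schedule, retention period, IAM role ARN, and backup tag are assumed placeholders chosen only for illustration.

```python
"""Illustrative Boto3 sketch: create an AWS Backup vault and plan, then assign
resources by tag. Names, schedule, retention, role ARN, and tag are placeholders."""
import boto3

backup = boto3.client("backup", region_name="us-east-1")  # assumed region

def create_daily_backup_plan() -> str:
    backup.create_backup_vault(BackupVaultName="daily-vault")  # fails if it already exists
    plan = backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "daily-backup-plan",
            "Rules": [
                {
                    "RuleName": "daily-at-5am-utc",
                    "TargetBackupVaultName": "daily-vault",
                    "ScheduleExpression": "cron(0 5 * * ? *)",
                    "Lifecycle": {"DeleteAfterDays": 30},  # assumed retention
                }
            ],
        }
    )
    plan_id = plan["BackupPlanId"]
    # Assign every resource tagged Backup=daily to this plan.
    backup.create_backup_selection(
        BackupPlanId=plan_id,
        BackupSelection={
            "SelectionName": "tagged-daily",
            "IamRoleArn": "arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole",
            "ListOfTags": [
                {
                    "ConditionType": "STRINGEQUALS",
                    "ConditionKey": "Backup",
                    "ConditionValue": "daily",
                }
            ],
        },
    )
    return plan_id

if __name__ == "__main__":
    print(create_daily_backup_plan())
```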
Confidential
Senior DevOps Engineer
Responsibilities:
- Worked on building and supporting development, testing, and production environments across the software development life cycle.
- Created Dockerfiles and containers to package the application and deployed them on the target operating systems.
- Comprehensive experience with AWS services such as S3, RDS, EC2, CloudFormation, Lambda, VPC, ELB, Glacier, EBS, DynamoDB, CodeDeploy, CloudWatch, IAM, SES, SQS, security groups, and Route 53 in an Agile environment.
- Performed performance tuning and capacity planning for AWS environments; set up monitoring, resource utilization tracking, and alerts using CloudWatch; implemented hybrid architectures with private connection routing, peering, and infrastructure zoning via VPC; used CloudFormation templates to deploy infrastructure for environment creation; used CodePipeline to design and implement a CI/CD pipeline.
- Worked on the DevOps platform team responsible for specialization areas related to Chef for cloud automation.
- Administered RHEL 5/6/7, including installation, testing, tuning, upgrading, patching, and troubleshooting on both pSeries and VMware virtualization systems.
- Developed Perl, Bash, and batch scripts to automate activities and builds.
- Authored applications using Spring Cloud services (the Spring version of Netflix OSS: Eureka, Circuit Breaker, and Ribbon).
- Deployed application using Pivotal Cloud Foundry (PCF) CLI.
- Used Docker to manage microservices for development and testing.
- Wrote and developed Chef cookbooks from scratch for custom application installations.
- Used Chef for continuous delivery; managed the CI/CD process and delivered all applications as RPMs.
- Responsible for CI/CD using Jenkins, Maven, and Chef.
- Worked extensively with bug tracking tools such as JIRA and Remedy.
- Actively involved in architecture of DevOps platform and cloud solutions.
- Integrated automated builds with the deployment pipeline; installed Chef server and clients to pick up builds from the Jenkins repository and deploy them to target environments (integration, QA, and production).
- Used Chef to aid the deployment process and to migrate in-house systems to AWS.
- Used Chef and AWS to reduce departmental costs and eliminate unwarranted resources; automated provisioning of cloud infrastructure with Chef.
- Replaced existing manual deployment and management processes with Chef and AWS OpsWorks stacks across four product platforms.
- Installed and supported multiple databases and applications, including Oracle and MySQL with WebLogic, JBoss, and Apache Tomcat.
- Extensively involved in infrastructure as code, execution plans, resource graphs, and change automation using Terraform.
- Supported the continuous delivery strategy by monitoring applications in pre-production and production environments using AppDynamics.
- Developed environments for different applications on AWS by provisioning EC2 instances using Docker, Bash, and Terraform (see the sketch after this list).
- Experienced in writing and organizing shell and Perl scripts for building complex software systems.
- Created branches, labels and performed merges in SVN and GIT.
- Used Jenkins to automate most build-related tasks and improved the throughput and efficiency of the build system by enabling managers to trigger the builds they needed.
- Hands-on experience with AWS services such as EC2, S3, ELB, RDS, SQS, EBS, VPC, AMI, SNS, CloudWatch, CloudTrail, CloudFormation, AWS Config, Auto Scaling, CloudFront, IAM, and Route 53.
- Worked closely with system engineers to resolve issues and handled the release process for over twenty-five applications in lower and production phases using deployment tools such as Jenkins and Bamboo.
- Monitored, analyzed, and responded to security events using security event management and reporting tools.
- System performance monitoring, tuning and log management.
- TCP/IP network troubleshooting and Linux/network administration to identify and resolve issues.
- Worked closely with network/incident analysts and IC analysts to monitor and identify current attack and threat information.
- Maintained, installed, and configured Apache web servers; managed web hosting including name-based, secure, and private sites; monitored web server performance; generated certificates; performed security checks and periodic upgrades; managed user accounts and backups. Implemented Jira with the Maven release plugin for tracking bugs and defects.
- Implemented and maintained server virtualization using VMware ESXi and Oracle Virtual Manager.
- Coordinated with the application team on installation, configuration, and troubleshooting of JBoss servers.
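As a rough illustration of the EC2 provisioning mentioned above, the following Boto3 sketch launches an instance whose user data installs Docker at boot; the AMI ID, key pair, subnet, security group, and Amazon Linux 2 assumption are all hypothetical placeholders.

```python
"""Illustrative Boto3 sketch: provision an EC2 instance whose user data installs
Docker at boot. AMI, key pair, subnet, security group, and tags are placeholders."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

USER_DATA = """#!/bin/bash
# Install and start Docker on first boot (Amazon Linux 2 assumed).
yum update -y
amazon-linux-extras install docker -y
systemctl enable --now docker
"""

def provision_docker_host() -> str:
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",          # placeholder AMI
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        KeyName="devops-key",                      # placeholder key pair
        SubnetId="subnet-0123456789abcdef0",       # placeholder subnet
        SecurityGroupIds=["sg-0123456789abcdef0"], # placeholder security group
        UserData=USER_DATA,
        TagSpecifications=[
            {
                "ResourceType": "instance",
                "Tags": [{"Key": "Name", "Value": "docker-app-host"}],
            }
        ],
    )
    instance_id = resp["Instances"][0]["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    return instance_id

if __name__ == "__main__":
    print(provision_docker_host())
```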
Confidential
DevOps Engineer
Responsibilities:
- Designed, deployed, and maintained application servers on AWS infrastructure using services such as EC2, S3, Glacier, VPC, Lambda, Route 53, SQS, IAM, CodeDeploy, CloudFront, RDS, and CloudFormation.
- Implemented various AWS services including VPC, Auto Scaling, S3, CloudWatch, and EC2.
- Worked with different AWS EC2 instance types, created AMIs, managed volumes, and configured security groups.
- Designed the data models used in data-intensive AWS Lambda applications that perform complex analysis, creating analytical reports for end-to-end traceability, lineage, and definitions of key business elements from Aurora.
- Worked with AWS S3, creating buckets and configuring them with logging, tagging, and versioning.
- Used the AWS CLI to suspend AWS Lambda functions and to automate backups of ephemeral data stores to S3 buckets and EBS.
- Led POC involving Confluence API call to populate Wiki with log data in AWS Glue.
- Worked with CloudWatch to monitor performance-environment instances for operational and performance metrics during load testing.
- Created trigger points and alarms in CloudWatch based on thresholds and monitored logs via metric filters (see the sketch after this list).
- Worked on AWS Auto Scaling launch configurations and created groups with reusable instance templates for automated, on-demand provisioning based on capacity requirements.
- Worked with AWS IAM, creating users and groups and defining policies, roles, and identity providers.
- Worked with Terraform key features such as Infrastructure as code, Execution plans, Resource Graphs, Change Automation.
- Maintained cloud infrastructure using a combination of Jenkins, Chef, and Terraform to automate the CI/CD pipeline in AWS.
- Experienced with event-driven and scheduled AWS Lambda functions to trigger various AWS resources.
- Involved in writing a Java API for AWS Lambda to manage some of the AWS services.
- Automated the web application testing with Jenkins and Selenium.
- Installed and configured Jenkins master and slave nodes; built CI/CD pipelines and managed infrastructure as code using Ansible.
- Worked on branching, tagging, and maintaining version control and build pipelines with TFS and GitHub.
- Automated continuous integration and deployments (CI/CD) using Jenkins, Docker, Ansible, and AWS CloudFormation templates.
- Implemented and maintained Ansible configuration management spanning several environments in vRealize and the AWS cloud.
- Responsible for managing the AWS cloud environment and the code in the ALM tool, i.e., Git (version control).
- Developed Ansible playbooks in a variety of areas, including Docker base deployment, Docker Swarm configuration, Oracle deployment, Linux system provisioning, Jenkins management (deploying seed jobs from Ansible), vSphere (VMware guest) management, and module development.
- Experience with Docker automation tools and builds, and with overall improvement of manual processes.
- Using a Dockerfile, ran a container for MongoDB and linked it to a new client container that accesses the data.
- Worked on Docker networking, setting up a private network and attaching it to containers when they are spun up.
- Mirrored the Docker images required for Spinnaker from external registry to private Docker Registry.
- Used the Kubernetes dashboard to access the cluster via its web-based user interface and implemented microservices on the Kubernetes cluster.
- Experienced in maintaining containers running on cluster nodes managed by OpenShift (Kubernetes).
- Maintained single- and multi-container pod storage inside nodes of an OpenShift (Kubernetes) cluster.
- Used OpenShift with Dockerfiles to build images and then uploaded the created images to the Docker registry.
- Automated the deployment, replication, and on-the-fly scaling of containers and worked with Docker Swarm for built-in orchestration.
- Configured Operators for Kubernetes applications and all of their components, such as Deployments, ConfigMaps, Secrets, and Services.
- Experienced in containerizing applications and migrating them to Kubernetes.
- Deployed Jenkins on top of Kubernetes in the team environment to remove dependencies on other teams.
- Worked with open-source development tools such as Docker containers, Mesos, and Kubernetes.
- Monitoring, traffic tracking, and trend analysis using network management tools such as Splunk, SiteScope, and Insight Manager.
- Used HashiCorp Packer to create and manage AWS AMIs and HashiCorp Vault to manage AWS secret keys.
- Implemented effective data sizing of the ELK cluster based on data flow and use cases.
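The sketch below illustrates the CloudWatch metric-filter and alarm pattern mentioned above, using Boto3; the log group, metric namespace, threshold, and SNS topic ARN are hypothetical placeholders.

```python
"""Illustrative Boto3 sketch: create a CloudWatch Logs metric filter that counts
ERROR lines and an alarm on that metric. Log group, namespace, threshold, and
SNS topic ARN are hypothetical placeholders."""
import boto3

logs = boto3.client("logs", region_name="us-east-1")             # assumed region
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

LOG_GROUP = "/app/tomcat"                                         # placeholder log group
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ops-alerts"   # placeholder topic

def create_error_alarm() -> None:
    # Count every log line containing the word ERROR as a custom metric.
    logs.put_metric_filter(
        logGroupName=LOG_GROUP,
        filterName="error-count",
        filterPattern="ERROR",
        metricTransformations=[
            {
                "metricName": "AppErrorCount",
                "metricNamespace": "Custom/App",
                "metricValue": "1",
            }
        ],
    )
    # Alarm when more than 5 errors occur in a 5-minute window.
    cloudwatch.put_metric_alarm(
        AlarmName="app-error-count-high",
        Namespace="Custom/App",
        MetricName="AppErrorCount",
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=5,
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=[SNS_TOPIC_ARN],
    )

if __name__ == "__main__":
    create_error_alarm()
```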
Confidential
DevOps Engineer
Responsibilities:
- Performed major revision upgrades from SuSE 10 to SuSE 11 systems.
- Created multiple CI/CD pipelines with Azure DevOps
- Monitored and analyzed systems and processes throughout the enterprise for security vulnerabilities.
- Installed, patched, managed, and troubleshot RHEL/CentOS servers.
- Configured and installed VMware Tools on the built VMs.
- Worked on a data center migration project: the client had 5000+ servers, which were migrated using various tools/methods, e.g., SRDF, TSM, and mksysb. Built and tested the disaster recovery servers using Recovery Point.
- Experience in UNIX/Linux systems and network administration for Red Hat Linux (RHEL 4, 5, 6), CentOS, SUSE, Debian, and AIX.
- Provisioned AWS EC2 instances with Auto Scaling groups and load balancers in a newly defined VPC, and used Lambda functions to trigger events in response to DynamoDB requests (see the sketch after this list).
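A minimal Python Lambda handler illustrating the DynamoDB-triggered events noted above is shown below; it assumes DynamoDB Streams as the trigger, and the processing logic is a placeholder rather than the original implementation.

```python
"""Illustrative AWS Lambda handler: react to DynamoDB Stream records.
The trigger configuration, table, and downstream action are hypothetical."""
import json

def lambda_handler(event, context):
    """Process DynamoDB Stream records delivered to the Lambda function."""
    processed = 0
    for record in event.get("Records", []):
        event_name = record.get("eventName")        # INSERT, MODIFY, or REMOVE
        keys = record.get("dynamodb", {}).get("Keys", {})
        # A real function would trigger downstream resources here;
        # this sketch simply logs the record for illustration.
        print(json.dumps({"eventName": event_name, "keys": keys}))
        processed += 1
    return {"processedRecords": processed}
```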
Confidential
Linux Systems Administrator
Responsibilities:
- Performed major revision upgrades from SuSE 10 to SuSE 11 systems.
- Created multiple CI/CD pipelines with Azure DevOps
- Monitored and analyzed systems and processes throughout the enterprise for security vulnerabilities.
- Installed, patched, managed, and troubleshot RHEL/CentOS servers.
- Configured and installed VMware Tools on the built VMs.
- Worked on a data center migration project: the client had 5000+ servers, which were migrated using various tools/methods, e.g., SRDF, TSM, and mksysb. Built and tested the disaster recovery servers using Recovery Point.
- Experience in UNIX/Linux systems and network administration for Red Hat Linux (RHEL 4, 5, 6), CentOS, SUSE, Debian, and AIX.
- Extensive experience in IT data analytics projects; hands-on experience migrating on-premises ETLs to Google Cloud Platform (GCP) using cloud-native tools such as BigQuery, Cloud Dataproc, Google Cloud Storage, and Composer (see the sketch after this list).
- Provisioned AWS EC2 instances with Auto Scaling groups and load balancers in a newly defined VPC, and used Lambda functions to trigger events in response to DynamoDB requests.
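The following sketch illustrates one way the GCS-to-BigQuery loading mentioned above could look using the google-cloud-bigquery client library; the project, dataset, table, and GCS URI are hypothetical placeholders.

```python
"""Illustrative sketch: load a CSV export from Google Cloud Storage into BigQuery
using the google-cloud-bigquery client library. Project, dataset, table, and
GCS URI are hypothetical placeholders."""
from google.cloud import bigquery

def load_csv_to_bigquery() -> None:
    client = bigquery.Client(project="my-analytics-project")  # placeholder project
    table_id = "my-analytics-project.staging.orders"          # placeholder table
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,          # assume a header row in the export
        autodetect=True,              # infer the schema from the file
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )
    load_job = client.load_table_from_uri(
        "gs://my-etl-exports/orders/*.csv",                    # placeholder URI
        table_id,
        job_config=job_config,
    )
    load_job.result()  # wait for the load job to finish
    table = client.get_table(table_id)
    print(f"Loaded {table.num_rows} rows into {table_id}")

if __name__ == "__main__":
    load_csv_to_bigquery()
```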