Sr. AWS Cloud DevOps Engineer Resume
Foster City, CA
SUMMARY
- Around 9 years of professional IT experience in Software Configuration Management (SCM), DevOps, build and release management, and AWS/GCP/Azure, working in technical roles as both a Linux and a DevOps engineer.
- Proficient in automating build and configuration processes using tools such as Maven, Jenkins, and Chef/Ansible.
- Extensive work on code compilation, packaging, building, debugging, automation, management, tuning, and deployment of code across multiple environments, along with Linux administration.
- Experience with various Azure services across IaaS, SaaS, and PaaS: Azure Websites, Caching, SQL Azure, NoSQL, Storage, Network services, Azure Active Directory, API Management, Scheduling, Auto Scaling, and PowerShell automation.
- Hands-on experience with Amazon Web Services (AWS) and Azure features (EC2, VPC, EBS, AMI, snapshots, Auto Scaling, SES, SQS, SNS, RDS, ELB, CloudWatch, S3, etc.).
- Expertise in managing multi-cloud environments using CloudFormation and Terraform.
- Experience with source control tools such as Git and Bitbucket, creating branches per application requirements and working with git pull, git push, git merge, git rebase, and other core commands.
- Built pipelines in Jenkins 2 using Groovy to automate builds.
- Used Groovy for maintenance and reporting.
- Built uDeploy workflows for deployment automation of Java and .NET applications.
- Performed uDeploy agent installation and configuration; supported uDeploy security roles and application access.
- Expertise in the LAMP (Linux, Apache, MySQL, Python) architecture.
- Hands-on experience configuring DSC configurations to deploy web servers to Azure VMs. Configured Azure Automation DSC for configuration management: assigned permissions through RBAC, assigned nodes to the proper automation accounts and DSC configurations, and set up alerts on any changes made to nodes and their configurations.
- Hands-on experience with DevOps tools such as Chef, Puppet, Ansible, Docker, Jenkins, Prometheus, Grafana, and Dynatrace.
- Integrated Ansible with Terraform to deploy applications after the infrastructure build, using playbooks, per-application roles, error handling, and Ansible Lint. Worked with Ansible Tower to deploy resources from a centralized location while granting the application team limited access to Tower.
- Experience managing containerized environments using Docker and Docker Compose: creating and managing images with Dockerfiles, managing Docker volumes and backing them up regularly, and maintaining separate Docker networks for environments such as Dev, QA, and Prod.
- Experience orchestrating containerized environments using Kubernetes: creating pods (static and init), Deployments, ReplicaSets, DaemonSets, Services (NodePort and LoadBalancer), StatefulSets, Persistent Volumes, and Ingress controllers.
- Experience in log management using the ELK stack: Logstash and Beats (Filebeat, Auditbeat, Metricbeat) for log collection, Elasticsearch for indexing the logs, and Kibana dashboards for visualizing the data.
- Experience working with various Azure services, such as Data Lake, to store and analyze data.
- Used monitoring tools such as Prometheus and Grafana to monitor pods deployed in Kubernetes.
TECHNICAL SKILLS
Cloud Services: Amazon Web Services: EC2, S3, ELB, Kinesis, Auto Scaling, Elastic Beanstalk, Elastic File System, Elastic Block Store, Glacier, FSx for Lustre, AMI, RDS, DMS, VPC, Amazon CloudFront, Direct Connect, Route 53, CloudWatch, CloudTrail, IAM, SNS; Azure; GCP
Virtualization: VMware Client, vSphere Client
Operating Systems: Red Hat Linux 4/5/6/7, Windows Server 2003/2008/2008 R2/2012/2012 R2, Windows 2000/XP/7
Automation Tools: Chef, Puppet, Ansible
Web Servers: Apache Tomcat
Database Technologies: Oracle, DB2, SQL Server, MySQL
Scripting/Programming Languages: C, C#, Python, Java, Shell, Ruby
Network Protocols: NIS, DNS, DHCP, NFS, SAMBA, FTP, Carbon
CI Tools: Bamboo, Jenkins, and TeamCity
Build Tools: Ant, Maven
Containers/Clusters: Docker, Kubernetes, OpenShift
PROFESSIONAL EXPERIENCE
Confidential, Foster City - CA
Sr. AWS Cloud DevOps Engineer
Responsibilities:
- Working in the AWS cloud environment, helping application users deploy their applications using resources such as EC2, ALB, RDS, and EKS by creating CloudFormation templates.
- Oversee full ecommerce implementation projects as well as support projects.
- Manage design agencies' and third-party companies' technical integrations as part of the ecommerce project delivery process.
- Managing multiple AWS accounts, integrating them using cross-account roles, deploying resources from a central account, and restricting access per user.
- Recently working on small freelance projects and personal training with AWS (AppSync, Cognito, Amplify, DynamoDB, API Gateway, EC2, S3, Lambda, CloudFront, CloudFormation, Route 53, CloudWatch, SNS, IAM, Step Functions, and related services).
- Around 9 years of experience with strong expertise in DevOps, AWS, build and release engineering, and Linux administration, using various automation tools to oversee the end-to-end deployment process.
- Extensively worked on Hudson and Jenkins for continuous integration and end-to-end automation of all builds and deployments.
- Experienced with the Amazon Web Services (AWS) cloud platform and services such as Lambda, DynamoDB, EBS, ELB, AMI, Elastic Beanstalk, CloudFront, CloudWatch, OpsWorks, SNS, Glacier, Auto Scaling, IAM, Route 53, EC2, S3, RDS, VPC, VPN, and Security Groups through the AWS Management Console.
- Strong knowledge of GraphQL schemas, queries, and mutations for interacting with MongoDB and several other data layers.
- Involved in the design and development of GraphQL services that interact with the data storage layer.
- Worked on Lambda functions that aggregate data from incoming events and store the results in Amazon DynamoDB. Wrote Terraform templates as AWS infrastructure as code to build the staging and production environments, and set up builds and automation in Jenkins.
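A minimal sketch of such a Lambda aggregator (the event record shape and the DynamoDB table name are illustrative assumptions, not the actual production schema):

```python
from collections import defaultdict

def aggregate(records):
    """Sum the 'count' field of incoming event records per 'key'.

    The record shape ({'key': ..., 'count': ...}) is an assumed example.
    """
    totals = defaultdict(int)
    for rec in records:
        totals[rec["key"]] += rec.get("count", 1)
    return dict(totals)

def handler(event, context):
    """Lambda entry point: aggregate the batch, then persist the totals.

    The DynamoDB write is sketched in comments with boto3; the table
    name is hypothetical.
    """
    totals = aggregate(event.get("Records", []))
    # import boto3
    # table = boto3.resource("dynamodb").Table("event-totals")  # hypothetical table
    # for key, total in totals.items():
    #     table.put_item(Item={"pk": key, "total": total})
    return totals
```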
- Started the front-end application with React, using React class component state for general state management.
- Extensively worked on Spark with Scala on a cluster for analytics; installed it on top of Hadoop and performed advanced analytics using Spark with Hive and SQL/Oracle.
- Worked on migrating MapReduce programs into Spark transformations using Spark and Scala; the initial version was written in Python (PySpark).
- Developed Spark jobs using Scala on top of YARN/MRv2 for interactive and batch analysis.
- Ran Apache Hadoop (CDH and MapR distributions) as Elastic MapReduce (EMR) on EC2.
- Migrated an existing on-premises application to AWS, using services such as EC2 and S3 for small-data-set processing and storage; experienced in maintaining the Hadoop cluster on AWS EMR.
- Orchestrated multiple ETL jobs using AWS Step Functions and Lambda; also used AWS Glue for loading and preparing data analytics for customers.
- Wrote AWS Glue scripts to extract, transform, and load the data.
- Developed a web API using Node.js and hosted it on multiple load-balanced API instances.
- Worked on migrating C# services to Java/Spring Boot, using PCF as the cloud environment.
- Worked on the Syndicate project as a full-stack developer, using Angular on the front end and C# and Java as the back-end technologies.
- Used assertions in test scripts in the C# Visual Studio automation suite to validate requirements.
- Used Node.js on the server side and to install the necessary packages into the application.
- End-to-end automation using PowerShell for user account, mailbox, distribution group, and security group provisioning and management.
- Responsible for Solaris, UNIX, and Windows environments with regard to SCM, DevOps, end-to-end production user support, and version control tools.
- Experience configuring and managing TFS, branching and merging, and end-to-end user support in TFS.
- DevOps/SCM end-to-end user support; managed version control tools.
- Worked in an agile development team to deliver an end-to-end continuous integration/continuous delivery product in an open-source environment using tools like Chef & Jenkins.
- Built CI/CD pipelines for application and service delivery into Cloud Foundry via Jenkins, with builds and releases using Git and JFrog Artifactory.
- Worked on the setup of Jenkins master/slave to distribute builds across slave nodes, and used several Jenkins plugins such as the Artifactory, Ant, Maven, and Git plugins.
- Creating Jenkins pipelines in Groovy, integrating tools such as Git, Maven, SonarQube, and Kubernetes for end-to-end deployment of applications in the environment.
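A declarative Jenkinsfile along those lines might look like the following sketch; the repository URL, stage names, and manifest path are illustrative, not the actual pipeline:

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { git url: 'https://example.com/app.git', branch: 'main' }  // hypothetical repo
        }
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Quality Gate') {
            steps { sh 'mvn sonar:sonar' }  // assumes a SonarQube server is configured in Jenkins
        }
        stage('Deploy') {
            steps { sh 'kubectl apply -f k8s/deployment.yaml' }  // assumes cluster credentials on the agent
        }
    }
}
```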
- Managing Ansible Tower for deploying applications to the created resources, restricting application users to running only their respective jobs.
- Deployed Ansible playbooks in AWS environment using Terraform as well as creating Ansible roles using YAML Scripting.
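A playbook-plus-roles layout of the kind described can be sketched as follows (the host group, role names, and port are assumed examples):

```yaml
# site.yml - hypothetical top-level playbook; host group and role names are examples
- name: Deploy application after Terraform provisions the hosts
  hosts: app_servers
  become: true
  roles:
    - common        # base packages, users, hardening
    - app_deploy    # pulls the artifact and restarts the service
  tasks:
    - name: Verify the service is listening
      wait_for:
        port: 8080
        timeout: 60
```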
- Designed Node and Spring Boot applications for microservices to support DevOps processes.
- Experience with containerization technologies, including Docker, Kubernetes, or Rancher. Well versed with OpenStack based cloud infrastructure.
- Installed and Configured Enterprise JFrog Artifactory.
- Created Ansible playbooks to install and setup Artifactory.
- Worked on Docker and Ansible in the build automation pipeline and on continuous deployment of code using Jenkins; wrote playbooks to automate Ansible servers using YAML scripting, and developed an Ansible role for the Zabbix agent, which was integrated into the CI/CD pipeline.
- Regularly reviewing the AWS account and applying security policies according to company policy.
- Using Docker swarm to orchestrate the containerized environment.
- Creating blue/green deployments and helping the application team test their code.
- Deployed Puppet, Puppet dashboard for configuration management to existing infrastructure.
- Using the ScienceLogic monitoring tool for monitoring AWS infrastructure.
- Using AWS CloudWatch for application log monitoring.
- Storing Docker images in the Nexus repository and providing limited access to pull the images.
- Creating Lambda functions to automate tasks such as deleting unused resources and rotating keys.
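A minimal sketch of the key-rotation logic, assuming a 90-day policy (the policy window and the boto3 calls shown in comments are illustrative assumptions):

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE_DAYS = 90  # assumed rotation policy, not the actual company setting

def keys_to_rotate(keys, now=None):
    """Return the IDs of access keys older than the rotation window.

    `keys` is a list of {'AccessKeyId': ..., 'CreateDate': datetime} dicts,
    mirroring the shape returned by IAM's list_access_keys.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=MAX_KEY_AGE_DAYS)
    return [k["AccessKeyId"] for k in keys if k["CreateDate"] < cutoff]

def handler(event, context):
    """Lambda entry point: a real deployment would page through IAM users
    and deactivate the stale keys, as sketched in the comments."""
    # import boto3
    # iam = boto3.client("iam")
    # meta = iam.list_access_keys()["AccessKeyMetadata"]
    # for key_id in keys_to_rotate(meta):
    #     iam.update_access_key(AccessKeyId=key_id, Status="Inactive")
    return {"rotated": []}
```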
- Creating Jenkins pipelines in Groovy to automate builds, with promotion on successful builds.
- Used Dynatrace to monitor business transactions with back ends such as databases, web services, and JMS, and identified issues, bottlenecks, and errors.
- Extensively used LoadRunner and Dynatrace for test analysis as part of the production support team.
- Managed Puppet classes, resources, packages, nodes, and other common tasks using Puppet console dashboard and live management.
- Developed custom functionality using Excel Services and the SharePoint REST API.
- Set up monitoring tools such as Cloudem (an Oracle tool), ThousandEyes, and Cloud Watchdog for monitoring all the internal services in Oracle Cloud, network packet utilization, and load balancers.
- Created network monitoring visualizations using ThousandEyes, and performed log analysis and dashboarding using Elasticsearch, Kibana, and Grafana on the AWS platform.
- Writing Terraform code to create and manage hybrid cloud infrastructure, and creating Ansible playbooks and roles integrated with Terraform to deploy applications.
- Helping the application team containerize applications by creating Dockerfiles, building container environments using Docker Compose, segregating Docker networks for Dev and QA, and backing up Docker volumes at regular intervals.
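The Dev/QA network segregation and backed-up volumes can be sketched in a Compose file; the service, image, and network names are illustrative:

```yaml
# docker-compose.yml - illustrative; image names and networks are examples
services:
  app-dev:
    build: .
    networks: [dev_net]
    volumes:
      - app_data_dev:/var/lib/app   # named volume so it can be backed up on a schedule
  app-qa:
    image: registry.example.com/app:qa   # hypothetical registry
    networks: [qa_net]
networks:
  dev_net: {}
  qa_net: {}
volumes:
  app_data_dev: {}
```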
- Orchestrating production container applications using Kubernetes: auto scaling applications with Deployments, creating ReplicaSets, using Persistent Volumes to store application data, using readiness and liveness probes for container health checks, creating RBAC rules to restrict access, and using resource quotas to limit resource consumption.
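A Deployment manifest of the kind described, with readiness/liveness probes and a persistent volume claim (the names, image, and thresholds are all illustrative):

```yaml
# deployment.yaml - illustrative manifest; names, image, and thresholds are examples
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels: { app: web-app }
  template:
    metadata:
      labels: { app: web-app }
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0   # hypothetical image
          ports: [{ containerPort: 8080 }]
          readinessProbe:
            httpGet: { path: /healthz, port: 8080 }
            initialDelaySeconds: 5
          livenessProbe:
            httpGet: { path: /healthz, port: 8080 }
            periodSeconds: 10
          volumeMounts:
            - { name: data, mountPath: /var/lib/app }
      volumes:
        - name: data
          persistentVolumeClaim: { claimName: web-app-pvc }
```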
- Implemented a load-balanced, highly scalable, available, and fault-tolerant Kubernetes infrastructure.
- Used Kubernetes to deploy, scale, load balance, and manage Docker containers.
- Managed Kubernetes charts using Helm and created reproducible builds of the Kubernetes applications, managed Kubernetes manifest files & releases of Helm packages.
- Wrote Python code to issue HTTP GET requests and parse HTML data from websites.
- Created Python scripts for various application-level automation tasks.
- Wrote playbooks to manage configurations of AWS nodes and tested the playbooks on AWS instances using Python scripts.
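A minimal sketch of such an HTTP GET / HTML-parsing script using only the standard library (the network-fetching helper is shown for completeness; no specific target site is assumed):

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect href values from anchor tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    """Parse an HTML string and return the anchor hrefs in document order."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

def fetch_links(url):
    """Issue the HTTP GET and parse the response body (network call)."""
    with urlopen(url) as resp:
        return extract_links(resp.read().decode("utf-8", errors="replace"))
```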
- Experienced in creating pods and clusters from templates in Kubernetes and deploying them using Helm charts.
- Successfully created a Kubernetes pipeline of deployment and operation activities in which all code, written in Java and Python, is stored in Bitbucket for staging and testing purposes.
- Applying network policies to the Kubernetes cluster to restrict pod-level access.
- Managed local deployments in Kubernetes, creating a local cluster and deploying application containers.
- Implemented Kubernetes manifests, helm charts for deployment of microservices into k8s clusters.
- Designed and maintained NoSQL databases using Python and developed Python based API.
- Implemented Istio on Kubernetes clusters as a service mesh, working with networking systems such as Envoy, Istio, and Flannel.
- Daily monitoring of production servers and exceptions using Grafana and Prometheus integrated with Kubernetes, reporting issues to the team during standups.
- Worked on Google Cloud Platform (GCP) services such as Compute Engine, Cloud Load Balancing, Cloud Storage, Cloud SQL, Stackdriver Monitoring, and Cloud Deployment Manager.
- Set up GCP firewall rules to allow or deny traffic to and from VM instances based on specified configurations, and used GCP Cloud CDN (content delivery network) to deliver content from GCP cache locations, drastically improving user experience and latency.
- Worked extensively on building and maintaining clusters managed by Kubernetes, using Linux, Bash, Git, and Docker on GCP (Google Cloud Platform).
- Experience with Google Cloud components, Google Container Builder, GCP client libraries, and the Cloud SDK.
- Built reports for monitoring data loads into GCP and drive reliability at the site level.
- Worked with business process managers as a subject-matter expert for transforming vast amounts of data and creating business intelligence reports using state-of-the-art big data technologies (Hive, Spark, Sqoop, and NiFi for big data ingestion; Python/Bash scripting and Apache Airflow for scheduling jobs in GCP/Google cloud-based environments).
- Using SonarQube for code verification.
ENVIRONMENT: AWS, GCP, Git, Jenkins, Ansible/Ansible Tower, Terraform, Docker, Kubernetes, ELK, Grafana, Prometheus, Nexus artifacts, SonarQube, Node.js, React, DynamoDB, AppSync, GraphQL
Confidential, Connecticut
Sr. AWS Cloud DevOps Engineer
Responsibilities:
- Implemented large-scale cloud infrastructure in AWS using AWS resources: IAM, Elastic IP, Elastic Storage, Auto Scaling, VPC, EC2, EBS, ELB, Route 53, RDS, SNS, KMS, S3, Lambda (serverless), and ECS.
- Creating Lambda functions for automation tasks such as AMI builds, automatically starting/stopping instances, deleting expired access keys, taking RDS backups, and timely volume backups.
- Migrating servers from on-premises to the AWS cloud using CloudEndure.
- Deploying cloud resources using Service Catalog integrated with CloudFormation templates, and using Git for template versioning.
- Designed and developed test automation scripts in .NET/C#.
- Designed and implemented a Page Object Model pattern for web application UI elements in C#.
- Utilized assertions in test scripts in the C# Visual Studio automation suite to validate requirements.
- Implemented REST APIs using Node.js and Express.js.
- Used Node.js as a proxy to interact with RESTful services and with the PostgreSQL database.
- Using Git for version control and following a branching strategy for application deployment.
- Using Ansible Tower to deploy applications to the built infrastructure by creating playbooks and automating them with Jenkins jobs.
- Used the Node Package Manager (NPM) to manage modules and to install tools such as Grunt and Express; implemented AJAX calls from ReactJS on the client to the server.
- Used Angular to build a single-page application in TypeScript.
- Enhanced a legacy application by building new components in Angular 2 and TypeScript.
- Worked on an application developed entirely on the MEAN stack, deployed with Node.js, MongoDB, Express, and React.js, based on the MVVM design pattern.
- Created a Node.js Express server combined with Socket.io to build an MVC framework from the front end (AngularJS) to the back end (MongoDB), providing broadcast as well as chat services.
- Creating Jenkins pipelines in Groovy to automate builds, with promotion on successful builds.
- Installed and configured Dynatrace tool and created email alerts and threshold values.
- Used Dynatrace for monitoring the online website.
- Used Dynatrace to monitor and trace the logs.
- Writing Terraform code to create and manage hybrid cloud infrastructure, and creating Ansible playbooks and roles integrated with Terraform to deploy applications.
- Deployed Puppet, Puppet dashboard for configuration management to existing infrastructure.
- Helping the application team containerize applications by creating Dockerfiles, building container environments using Docker Compose, segregating Docker networks for Dev and QA, and backing up Docker volumes at regular intervals.
- Orchestrating production container applications using Kubernetes: auto scaling applications with Deployments, creating ReplicaSets, using Persistent Volumes to store application data, using readiness and liveness probes for container health checks, creating RBAC rules to restrict access, and using resource quotas to limit resource consumption.
- Orchestrated Docker images and containers using Kubernetes, creating the entire master and node setup.
- Used Kubernetes to manage containerized applications using nodes, ConfigMaps, selectors, and services, and deployed application containers as pods.
- Worked on writing Jenkins build pipelines with Gradle scripts and the Groovy DSL (domain-specific language), integrating Ant/Maven build scripts with Gradle for continuous builds.
- Developed scripts for Build, Deployment, Maintenance, and related tasks using Jenkins, Docker, Maven, Python, and Bash.
- Set up various Jenkins jobs for build and test automation; created branches and labels and performed merges in GitLab; deployed microservices in a Mesos cluster on AWS using Jenkins.
- Involved in setting up JIRA as the defect-tracking system and configured various workflows, customizations, and plugins for the JIRA bug/issue tracker.
- Experienced with GitHub for storing code and integrating it with Ansible to deploy the playbooks.
- Migrated SVN repositories to Git and administered GitLab to manage Git repositories.
- Scripted, debugged, and automated PowerShell scripts to reduce manual administration tasks and cloud deployments.
- Using Grafana and Prometheus for pod metrics monitoring, creating DaemonSets to deploy agents on each node to collect metrics.
- Using the ELK stack on AWS for log collection and visualizing the logs through its endpoint.
- Managing Nexus for storing artifacts and Docker images.
ENVIRONMENT: AWS, GCP, Git, GitHub, Jenkins, Ansible/Ansible Tower, MongoDB, Terraform, Docker, Kubernetes, ELK, Grafana, Prometheus, Nexus artifacts.
Confidential - Columbus, OH
Azure DevOps Engineer
Responsibilities:
- Configured and deployed Azure automation scripts for a multitude of applications utilizing the Azure stack (Compute, Web and Mobile, Blobs, Resource Groups, Azure Data Lake, HDInsight clusters, Azure Data Factory, Azure SQL, Cloud Services, and ARM services and utilities), focusing on automation.
- Performed provisioning of IaaS and PaaS virtual machines and web/worker roles on Microsoft Azure Classic and Azure Resource Manager. Deployed web applications on Azure using PowerShell workflows.
- Expertise in migrating the existing Azure infrastructure to v2 (ARM), scripting and templating the end-to-end process where possible. Configured the Azure Load Balancer to balance incoming traffic to VMs.
- Hands-on experience in designing and implementing a service-oriented architecture with ingress and egress using Azure Data Lake Store and Azure Data Factory, adding storage blobs to lakes for analytics results and pulling data from Azure Data Lake into the storage blobs.
- Used GIT as source code management tool, created local repo, cloned the repo, and performed add, commit, push, stash, branching, created tags operations. Defined branching, labeling and merge strategies for all applications in GIT.
- Managed build and deployment scripts using Ansible, triggered the jobs using Jenkins to move from one environment to across all the environments.
- Created Single page applications with the use of JavaScript library React.js. In-depth experience in React.js and techniques such as Redux, Axios, JSX, Form Validation, HOC and react-router.
- Extensively worked on Jenkins by installing, configuring, and maintaining the purpose of CI and End-to-End automation for all build and deployments implementing CI/CD for the database using Jenkins. Used Jenkins API to query the Jenkins server state and change configuration and automate jobs on nodes.
- Used Terraform to map more complex dependencies and identify network issues and worked with Terraform key features such as infrastructure as code, execution plans, resource graphs and change automation.
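Terraform's implicit resource graph is what resolves dependencies like the one sketched below; the provider, region, and CIDR values are illustrative examples:

```hcl
# illustrative Terraform; provider, CIDRs, and tags are example values
provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags       = { Environment = "staging" }
}

resource "aws_subnet" "app" {
  vpc_id     = aws_vpc.main.id   # implicit dependency: Terraform creates the VPC first
  cidr_block = "10.0.1.0/24"
}
```

Running `terraform plan` shows the execution plan, and `terraform graph` renders the resource graph referenced above.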
- Created, Configured, and managed a cluster of VMs that are preconfigured to run containerized applications using Azure Container services and designed and integrated pivotal cloud foundry on Microsoft Azure.
- Hands-on experience in Configuration management tool Ansible and developing modules in Ansible to automate infrastructure provisioning.
- Automated the process of authentication and authorization through identity broker Keycloak by integrating it into Azure Active Directory.
- Worked with the Node Package Manager (NPM) along with Karma, Jasmine, Grunt, and Bower for testing and builds; GitHub for version control; and JIRA for bug and issue tracking.
- Configured Apache tomcat server using Ansible and Performed Deployment of War files in Tomcat application servers using Shell script and Ansible.
- Experience in writing scripts in Python to automate the log rotation from multiple logs.
- Created Ansible playbooks to automatically install packages from a repository, to change the configuration of remotely configured machines and to deploy new builds.
- Created Docker images using a Docker file worked on Docker container snapshots, managed Docker volume, and implemented Docker automation solution for Continuous Integration / Continuous Delivery model.
- Automated MySQL container deployment in Docker using Python. Involved in creating and working through Docker images, containers, and Docker Consoles for managing Application Life cycle.
- Hands-on experience in using OpenShift for container orchestration with Kubernetes, container storage, automation, to enhance container platform multi-tenancy. Experience with Kubernetes architecture and design, troubleshooting issues and multi-regional deployment models and patterns for large-scale applications.
- Managed and deployed GitLab and Sentry services into Kubernetes. Configured their Kubernetes cluster and supported it running on the top of the Core OS.
- Responsible for design and maintenance of the Git Stash Repositories, views, and the access. Used automated code check-outs in Git and created branches.
- Built scripts using ANT and MAVEN build tools in Jenkins to move from one environment to other environments. Configured GIT with Jenkins and schedule jobs using POLL SCM option.
- Maintained Bitbucket Repositories which includes Jenkins for Integration, creating new repositories, enabling GIT to ignore, branching, merging, and creating pull requests and the access control strategies from Bitbucket and JIRA for the collaboration.
- Conducted detailed research on Mesos-Marathon and Kubernetes for container orchestration.
- Maintenance of JIRA Cloud running with AWS. Integrating Jira and Service Now using Task top plug-in.
- Created Splunk Search Processing Language (SPL) queries, Reports, Alerts, and Dashboards. Configured Splunk for all the mission critical applications and using Splunk effectively for Application troubleshooting and monitoring post go lives.
- Developed Python scripts to automate the Build and deployment process for deploying the web services and created Bash, shell, and python scripts for various Systems Administration tasks to automate repeated processes.
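A simplified sketch of such a deployment script (the hosts, paths, and service name are illustrative stand-ins for the real release layout):

```python
import shlex
import subprocess

def deploy_steps(artifact, host, service, dest="/opt/app/releases"):
    """Return the shell steps for a simple push-and-restart deployment.

    The destination path and systemd service name are assumed examples.
    """
    return [
        ["scp", artifact, f"{host}:{dest}/"],
        ["ssh", host, f"sudo systemctl restart {shlex.quote(service)}"],
    ]

def deploy(artifact, host, service, runner=subprocess.run):
    """Execute each step in order, stopping on the first failure."""
    for cmd in deploy_steps(artifact, host, service):
        runner(cmd, check=True)
```

Injecting `runner` keeps the command assembly testable without touching a real host.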
Environment: Azure, Azure Data Factory, Azure SQL, ARM services, IaaS, PaaS, SaaS, Docker, Kubernetes, Git, Ansible, Keycloak, Terraform, JIRA, Java, Python, shell scripting, Windows Server 2008/2012/R2, troubleshooting.
Confidential, St. Paul, MN
DevOps Cloud Automation Engineer
Responsibilities:
- Involved in DevOps migration/automation processes for build and deploy systems.
- Implemented the build automation process for all assigned projects in the Vertical Apps domain.
- Implemented and maintained branching and build/release strategies using Subversion/Git; managed configuration of the web app and deployed it to the AWS cloud server through Chef.
- Created hybrid cloud by combining private cloud and public cloud (using Amazon web services) and used it for public scaling.
- Involved in periodic archiving and storage of the source code for disaster recovery.
- Designed AWS Cloud Formation templates to create custom sized VPC, subnets, NAT to ensure successful deployment of Web applications and database templates.
- Used MAVEN as the build tools on Java projects for the development of build artifacts in the source code.
- Automated the build and release management process, including monitoring changes between releases.
- Responsible for Amazon Web Services (AWS): creating and managing EC2, Elastic Load Balancers, Elastic Container Service (Docker containers), S3, Elastic Beanstalk, CloudFront, Elastic File System, RDS, CloudWatch, CloudTrail, CloudFormation, IAM, and Elasticsearch.
- AWS server provisioning using Chef recipes.
- Creating cookbooks and recipes, testing the code with knife and Test Kitchen, and pushing validated cookbooks to the Chef server.
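A recipe of the sort described might look like this sketch (the package and service names assume a RHEL-family node; the template is assumed to ship in the cookbook):

```ruby
# cookbooks/webserver/recipes/default.rb - illustrative Chef recipe
package 'httpd'

template '/etc/httpd/conf/httpd.conf' do
  source 'httpd.conf.erb'     # template file assumed to exist in the cookbook
  owner  'root'
  mode   '0644'
  notifies :restart, 'service[httpd]'
end

service 'httpd' do
  action [:enable, :start]
end
```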
- Documented project's software release management procedures with input decisions.
- Automated the process of Apache Webserver installation, configuration using Chef.
- Developed, maintained, and distributed release notes for each scheduled release.
- Worked closely with developers to pinpoint and provide early warnings of common build failures.
- Created views and appropriate meta-data, performed merges, and executed builds on a pool of dedicated build machines.
Environment: Git, AWS, Chef, Maven, Linux, Python scripts, shell scripts.
Confidential
AWS DevOps Engineer
Responsibilities:
- Designed, configured, and deployed Amazon Web Services (AWS) for applications utilizing the AWS stack (including EC2, Route 53, S3, RDS, Direct Connect, CloudFormation, CloudWatch, SQS, and IAM), focusing on high availability, fault tolerance, auto scaling, load balancing, capacity monitoring, and alerting.
- Configured AWS Identity and Access Management (IAM) Groups and Users for improved login authentication.
- Used Chef to automate deployment of applications on Amazon EC2.
- Handled configuration management of servers using Chef.
- Expertise in running applications using Elastic Beanstalk.
- Created an AWS RDS Aurora DB cluster and connected to the database through an Amazon RDS Aurora DB Instance using the Amazon RDS Console.
- Configured an AWS Virtual Private Cloud (VPC) and connected it to the on-premises data center using AWS VPN Gateway for CloudFront distribution.
- Performed database SQL queries to address connectivity and integration activities.
- Configured AWS RDS Aurora database users to allow each individual user privileges to perform their related tasks.
- Migrated the production MySQL schema to the new AWS RDS Aurora instance.
- Implemented AWS High-Availability using AWS Elastic Load Balancing (ELB), which performed a balance across instances in multiple Availability Zones.
- Assigned AWS Elastic IP Addresses used to work around host or availability zone failures by quickly remapping the address to another running instance or a replacement instance that was just started.
- Defined AWS Security Groups which acted as virtual firewalls that controlled the traffic allowed to reach one or more AWS EC2 instances.
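In CloudFormation, such a security group can be sketched as follows (the VPC reference, ports, and CIDRs are illustrative examples):

```yaml
# illustrative CloudFormation fragment; CIDR and port values are examples
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP/HTTPS in to the web tier
      VpcId: !Ref AppVpc            # assumes an AppVpc resource defined elsewhere in the stack
      SecurityGroupIngress:
        - { IpProtocol: tcp, FromPort: 80,  ToPort: 80,  CidrIp: 0.0.0.0/0 }
        - { IpProtocol: tcp, FromPort: 443, ToPort: 443, CidrIp: 0.0.0.0/0 }
```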
- Managed application and patch management of applications running on AWS.