
AWS Cloud Engineer Resume


Seattle, WA

SUMMARY:

  • 8+ years of experience in the IT industry as a System Administrator, Linux Administrator, and DevOps Engineer.
  • Fast-paced software professional with 8+ years of extensive experience in automating, configuring, and deploying instances in cloud environments and data centers.
  • Experience in DevOps with AWS, CI/CD pipelines, and build and release management, and experience in infrastructure development and operations involving AWS Cloud platforms: EC2, EBS, S3, VPC, RDS, SES, SNS, SQS, ELB, Auto Scaling, CloudFront, CloudFormation, ElastiCache, and CloudWatch.
  • Self-starter with an in-depth understanding of the strategy and practical implementation of AWS cloud-specific and OpenStack technologies.
  • Hands-on experience in AWS provisioning and good knowledge of AWS services such as EC2, S3, Route 53, CloudFormation, Elastic Beanstalk, VPC, and EBS; knowledge of application deployment and data migration on AWS.
  • Implemented multiple CI/CD pipelines as part of a DevOps role for on-premises and cloud-based software using Jenkins, Chef, Docker, and AWS.
  • Good experience writing ad-hoc scripts in languages such as Shell and Ruby.
  • Worked with bug tracking tools like JIRA.
  • Experience working with hosted Chef Enterprise as well as on-premises Chef, installing workstations and bootstrapping nodes.
  • Wrote recipes and cookbooks and uploaded them to the Chef server; managed on-site OS, applications, services, and packages using Chef, and managed AWS EC2, S3, Route 53, and ELB with Chef cookbooks.
  • Worked on build and release automation framework design, Continuous Integration and Continuous Delivery, build and release planning, procedures, scripting, and automation.
  • Executed a Continuous Delivery pipeline with Docker, Jenkins, GitHub, and AWS AMIs that generates a new Docker container whenever a new GitHub branch is started.
  • Extensive experience using Maven as the build tool for producing deployable artifacts (JAR, WAR, and EAR) from source code.
  • Experience in branching, merging, tagging, and maintaining versions across environments using source code management tools such as SVN, Git, and CVS.
  • Experience with monitoring tools such as Splunk and Nagios; used CloudWatch to monitor AWS infrastructure and to analyze and monitor the data.
  • Extensively worked with change tracking tools like Remedy for tracking, reporting, and managing bugs.
  • Experience in installation and management of network-related services such as TCP/IP, FTP, SSH, DNS, HTTP, HTTPS, load balancing, VPN, firewalls, subnets, and SMTP.
  • Linux system administrator working on server operating system and kernel configuration on Red Hat, CentOS 7, and Ubuntu, including kernel parameter tuning and troubleshooting system and performance issues.
  • Good analytical, problem-solving, and communication skills, with the ability to work independently with little supervision or as a member of a team.
  • Handled Level 2 Unix incidents and changes via incident/change management tools (BMC Remedy).
  • Configured OS backups and processed restoration requests.
  • Troubleshot NFS issues such as exports, restarting service daemons, and hung mounts.
  • Performed administration tasks such as Logical Volume Management, file system creation, and housekeeping; knowledge and experience with Symantec NetBackup/Commvault backup tools and administration.
  • Good experience with backup and restore functionality and the ability to manage large backup infrastructures.
  • Good knowledge of taking application-aware backups of Exchange, SQL, Oracle, etc.
  • Administered local and remote servers using the SSH service on a daily basis.
  • Provided day-to-day user administration such as adding and deleting users.
  • Set password aging and account expiration for users (a short sketch follows this summary).
  • User account management and administration; troubleshot user login and home directory issues.
  • Created and administered user file systems and user accounts on Sun Solaris servers.
  • Monitored and administered Sun Solaris and Red Hat Linux servers for day-to-day problems in the production environment and resolved tickets on a shift basis.
  • Monitored Linux servers for CPU, memory, and disk utilization as part of performance monitoring.
  • Handled the NOC (Network Operations Center) using the BPPM monitoring tool.
  • Good understanding of IT infrastructure and enterprise architectures.
  • Experience with system troubleshooting and resolution of complex problems.
  • Proactively ensure the highest levels of systems and infrastructure availability.
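
The user administration bullets above describe routine account management; the following is a minimal shell sketch of that kind of task, assuming a RHEL/CentOS host with the standard useradd and chage tools. The user name and dates are placeholders, not values from any actual engagement.

    #!/usr/bin/env bash
    # Minimal sketch: create a user, then set password aging and account expiration.
    # "jdoe" and the dates below are hypothetical placeholders.
    set -euo pipefail

    USER="jdoe"

    useradd -m -s /bin/bash "$USER"    # create the account with a home directory
    chage -M 90 -m 7 -W 14 "$USER"     # password: max 90 days, min 7 days, warn 14 days before expiry
    chage -E 2025-12-31 "$USER"        # account expires on this date
    chage -l "$USER"                   # print the aging settings to verify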

TECHNICAL SKILLS:

Platforms: Red Hat Linux, Ubuntu, Windows, Windows Server, CentOS, VMware.

Cloud: AWS EC2, S3, RDS, ELB, VPC, EC2 Auto Scaling, CodePipeline, Lambda, ECR, Office 365 Cloud Exchange

Database: Oracle, MySQL, SQL Server 2012

IDE: Eclipse, NetBeans, Xcode

Versioning Tools: Git and Bitbucket

Containerization Tools: Docker

Design & Control: UML, CVS

Tools: Maven, Jenkins, SVN, Chef, Splunk, BMC Remedy, BPPM Monitoring, Symantec NetBackup, Veeam, and Tableau.

Virtualization Platforms: Hyper-V, VMware

Orchestration tool: Kubernetes

Scripting: Shell Scripting

Query Languages: SQL, PL/SQL

GUI: HTML, Java

Web Servers: Tomcat, Apache HTTP Server

PROJECT EXPERIENCE:

Confidential, SEATTLE, WA

AWS Cloud Engineer

Responsibilities:

  • Implemented AWS solutions; configured and troubleshot various AWS cloud services (EC2, S3, RDS, ELB, EBS, Auto Scaling groups, CloudWatch, CloudFront) and managed IAM accounts (with MFA) and IAM policies to meet security audit and compliance requirements.
  • Designed architectural diagrams for different applications before migrating them to the Amazon cloud for flexibility, cost-effectiveness, reliability, scalability, high performance, and security; migrated applications to AWS and managed them in the cloud.
  • Configured Elastic Load Balancers (ELB), including ELB high availability using subnets in multiple Availability Zones, and used Amazon Route 53 to manage DNS zones and assign public DNS names to Elastic Load Balancer IPs.
  • Created shell scripts for completely automating AWS services, including build servers, deploying EC2 instances in AWS environments and data centers, CloudFront distributions, Elasticsearch, and managing database security groups on AWS.
  • Experience working with VPCs, ELBs, security groups, SQS queues, and S3 buckets; integrated Terraform with Jenkins and Git to achieve continuous integration and a test automation framework.
  • Experience working with Docker and Docker Hub: pulling images from Docker Hub, running containers from an image, automating configuration using containers, and implementing several Tomcat/WebSphere instances using the Docker engine to run many containerized application servers.
  • Experience in installation and configuration of Docker environments, including a Docker registry hub built from a Dockerfile. Worked on Docker container images and snapshots, removing and pushing images, and managing Docker volumes. Worked on building and maintaining Docker and Vagrant infrastructure in an agile environment.
  • Created and managed a Docker deployment pipeline for Continuous Integration and Continuous Deployment to the development environment, and paired Nginx with Docker for load balancing in a highly scalable environment to maintain Continuous Delivery.
  • Worked on migrating the current application and performing CI/CD on public and private clouds.
  • Experience in building secure, highly scalable, and flexible systems that can handle expected and unexpected load bursts and quickly evolve during development iterations; implemented and tested various EC2 instance types to find the best IOPS-boosting instance for databases such as MongoDB and Cassandra.
  • Designed and wrote code to develop and configure the systems that power the organization's Splunk multi-tenant architecture; created applications on Splunk to analyze big data; strong knowledge of Splunk components such as indexers, search heads, forwarders, index replication, indexer clusters, and the deployment server.
  • Created and wrote Bash shell scripts for setting up baselines, branching, merging, and automation processes across environments using SCM tools such as Git on Linux and Windows platforms, and wrote Python troubleshooting code for the Lambda service.
  • Implemented AWS solutions using EC2, S3, RDS, EBS, ELB, and Auto Scaling groups; created Python scripts to automate backups of EC2 EBS volumes and configured cron jobs to create volume snapshots through the AWS API for EC2 instance storage (a shell-based sketch of this snapshot job follows this list).
  • Deployed AWS Elastic MapReduce using CloudFormation templates, configuring EC2 instance types and creating custom-sized VPCs, subnets, and NAT to ensure successful deployment of web application and database templates and to perform data-intensive tasks.
  • Worked on migrating the Jenkins server to the Amazon Web Services cloud and moving jobs from Git; analyzed and resolved conflicts related to merging source code in Git, followed by code quality analysis using SonarQube and bug fixes.
  • Managed the Auto Scaling group, including scaling AWS application nodes up and down.
  • Experience working with AWS deployment services such as AWS CloudFormation, AWS Elastic Beanstalk, and Terraform for efficient deployment of application infrastructure and for automating the creation of services such as VPCs, ELBs, security groups, subnets, EC2 instances, RDS, SQS queues, and S3 buckets, while continuing to migrate the rest of the infrastructure.
  • Installed and configured RHEL 6.x/7.x and CentOS, and installed packages and patches on Red Hat Linux servers.
  • Handled Level 2 Unix incidents and changes via incident/change management tools (BMC Remedy).
  • User account management and administration; troubleshot user login and home directory issues.
  • Created and administered user file systems and user accounts on Sun Solaris servers.
  • Monitored and administered Sun Solaris and Red Hat Linux servers for day-to-day problems in the production environment and resolved tickets on a shift basis.
  • Monitored Linux servers for CPU, memory, and disk utilization as part of performance monitoring.
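
As referenced above, the following is a minimal sketch of the kind of automated EBS snapshot job described in this role, written here as a cron-driven shell script using the AWS CLI rather than the Python/boto3 scripts mentioned; the region, tag filter, and schedule are assumptions for illustration only.

    #!/usr/bin/env bash
    # Sketch of a nightly EBS snapshot job driven by cron.
    # Assumes the AWS CLI is installed and configured with credentials allowed to
    # call ec2:DescribeVolumes and ec2:CreateSnapshot; the region and tag are hypothetical.
    set -euo pipefail

    REGION="us-west-2"
    TAG_FILTER="Name=tag:Backup,Values=yes"   # only volumes tagged Backup=yes are snapshotted

    for vol in $(aws ec2 describe-volumes --region "$REGION" \
                   --filters "$TAG_FILTER" \
                   --query 'Volumes[].VolumeId' --output text); do
        aws ec2 create-snapshot --region "$REGION" \
            --volume-id "$vol" \
            --description "automated-backup-${vol}-$(date +%F)"
    done

    # Example crontab entry to run the job nightly at 02:00:
    # 0 2 * * * /usr/local/bin/ebs_snapshot.sh >> /var/log/ebs_snapshot.log 2>&1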

Environment: AWS, EC2, Route53, S3, RDS, SNS, IAM, VPC, EBS, VMWare, Auto scaling, GIT, Jenkins, Docker, Terraform, Microservices, Apache, OpenShift, JBoss, Bash, Shell, MongoDB, Cassandra, CloudWatch, SonarQube, JUnit, Python, NAT, Splunk.

Confidential

AWS/DevOps Engineer

Responsibilities:

  • Designed, built, and deployed a multitude of applications utilizing the AWS stack (including EC2, Route 53, S3, RDS, SNS, and IAM), focusing on high availability, fault tolerance, and auto scaling.
  • Experience configuring S3 versioning and lifecycle policies to back up files and archive them in Glacier (a sketch follows this list).
  • Defined and managed a well-defined project management process, scheduling, and ongoing process improvement initiatives to implement best practices for Agile project management.
  • Responsible for building AWS infrastructure (VPC, EC2, S3, IAM, EBS, Auto Scaling, and RDS) in CloudFormation using JSON templates.
  • Integrated the continuous integration system with the Git version control repository so that builds run continually as check-ins come from developers.
  • Installed, configured, and administered VMware ESXi 5.1/5.5 and 6.0 and migrated existing servers into the VMware infrastructure.
  • Configured CI/CD tools using the blue-green deployment methodology.
  • Experience working with several Docker components, including Docker Engine, Docker Hub, Docker Machine, Docker Swarm, and Docker Registry.
  • Designed and developed Power BI graphical and visualization solutions from business requirement documents and plans for creating interactive dashboards.
  • Utilized Power BI (Power View) to create various analytical dashboards that depict critical KPIs such as legal case matters, billing hours, and case proceedings, with slicers and dicers enabling end users to apply filters.
  • Experience working with the AWS command line client and Management Console to interact with AWS resources and APIs.
  • Involved in configuring and integrating servers with different environments to automatically provision and create new machines using Chef.
  • Developed Chef Recipes to configure, deploy and maintain software components of the existing infrastructure.
  • Implemented Chef best practices and introduced Test Kitchen to facilitate a more natural cookbook development workflow.
  • Configured Jenkins pipeline jobs and templatized workflows to improve reusability when building pipelines.
  • Implemented multi-tier application provisioning in OpenStack cloud, integrating it with Chef.
  • Automated deployments of various Java/J2EE web applications to QA and PROD environments.
  • Responsible for supporting and troubleshooting AWS pipeline deployments.
  • Maintained build related scripts developed in Shell.
  • Installed applications on AWS EC2 instances and configured storage in S3 buckets using bootstrap scripts.
  • Implemented and maintained monitoring and alerting of server performance, CPU utilization, and storage for production and corporate servers using AWS CloudWatch.
  • Experience working with SonarQube as a code quality control tool.
  • Configured CloudBees Jenkins plugins such as artifact publishing, log parser, and build timeout, and implemented Groovy-based templates for Jenkins jobs.
  • Worked with a virtualized PaaS provider, useful in automating the provisioning of commodity computing resources for cost and performance efficiency.
  • Supported a weekly on-call rotation for troubleshooting after-hours application issues.
  • Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to ensure successful deployment of web application and database templates.
  • Developed Splunk queries and dashboards targeted at understanding application performance and capacity analysis.
  • Installed, tested, and deployed monitoring solutions with Splunk for log analysis and improving server performance.
  • Handled Level 2 Unix incidents and changes via incident/change management tools (BMC Remedy).
  • Configured OS backups and processed restoration requests.
  • Troubleshot NFS issues such as exports, restarting service daemons, and hung mounts.
  • Performed administration tasks such as Logical Volume Management, file system creation, and housekeeping; knowledge and experience with Symantec NetBackup/Commvault backup tools and administration.
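
As noted above for the S3 versioning and lifecycle work, below is a minimal shell sketch using the AWS CLI; the bucket name and day counts are assumed values for illustration, not settings from the project.

    #!/usr/bin/env bash
    # Sketch: enable versioning on a bucket and add a lifecycle rule that moves
    # objects to Glacier after 30 days and expires noncurrent versions after 365.
    # The bucket name and day counts are hypothetical.
    set -euo pipefail

    BUCKET="example-backup-bucket"

    aws s3api put-bucket-versioning --bucket "$BUCKET" \
        --versioning-configuration Status=Enabled

    aws s3api put-bucket-lifecycle-configuration --bucket "$BUCKET" \
        --lifecycle-configuration '{
          "Rules": [{
            "ID": "archive-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 365}
          }]
        }'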

Environment: AWS, EC2, Route53, S3, RDS, SNS, IAM, VPC, EBS, VMWare, Auto scaling, GIT, Jenkins, Docker, Chef, Apache, OpenShift, JBoss, ANT, Shell, Json, CloudWatch, SonarQube, JUnit, Python, NAT, Splunk.

Confidential

Unix and Backup Administrator

Responsibilities:

  • Created and monitored new base instances through AWS for servers to support the latest configurations.
  • Monitored all cloud instances to ensure overall system availability and performance.
  • Reviewed and approved end-user training materials for an application or process.
  • Worked with integration tools like Jenkins and version control systems like Git.
  • Experience with dashboarding and monitoring.
  • Basic knowledge of Chef and Puppet.
  • Experience with databases and SQL.
  • Handled Level 2 Unix incidents and changes via incident/change management tools (BMC Remedy).
  • Configured OS backups and processed restoration requests.
  • Troubleshot NFS issues such as exports, restarting service daemons, and hung mounts.
  • Performed administration tasks such as Logical Volume Management, file system creation, and housekeeping (a sketch follows this list); knowledge and experience with Symantec NetBackup/CommVault backup tools and administration.
  • Good experience with backup and restore functionality and the ability to manage large backup infrastructures.
  • Good knowledge of taking application-aware backups of Exchange, SQL, Oracle, etc.
  • Knowledge of snapshot backups.
  • Trained in Java, databases, and business skills, and handled IAM user request tasks.
  • Handled the NOC (Network Operations Center) using the BPPM monitoring tool.
  • Good understanding of IT infrastructure and enterprise architectures.
  • Experience with system troubleshooting and resolution of complex problems.
  • Proactively ensured the highest levels of systems and infrastructure availability.
  • Experience in configuring, monitoring and troubleshooting systems.
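
As referenced in the Logical Volume Management bullet above, here is a rough shell sketch of a typical volume and file system creation task; the disk device, volume group and logical volume names, size, and mount point are placeholders, not values from this engagement.

    #!/usr/bin/env bash
    # Sketch: carve a logical volume out of a new disk, build a file system, and mount it.
    # /dev/sdb, datavg, datalv, 20G, and /data are hypothetical.
    set -euo pipefail

    DISK="/dev/sdb"

    pvcreate "$DISK"                      # initialize the disk as an LVM physical volume
    vgcreate datavg "$DISK"               # create a volume group on it
    lvcreate -L 20G -n datalv datavg      # create a 20 GB logical volume
    mkfs.ext4 /dev/datavg/datalv          # build an ext4 file system
    mkdir -p /data
    mount /dev/datavg/datalv /data        # mount it (add to /etc/fstab for persistence)
    df -h /data                           # confirm the new file system is available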

Confidential

Assistant System Engineer - Intern

Responsibilities:

  • Trained in Java, databases, and business skills, and handled IAM user request tasks.
  • Handled the NOC (Network Operations Center) using the BPPM monitoring tool.
  • Good understanding of IT infrastructure and enterprise architectures.
  • Experience with system troubleshooting and resolution of complex problems.
  • Proactively ensured the highest levels of systems and infrastructure availability.
  • Experience in configuring, monitoring and troubleshooting systems.
  • Validated business requirements for an application or process.
  • Validated solution design, including prototyping and configuration for an application or process.
  • Reviewed and approved end user training materials for an application or process.
  • Implemented monitoring tools to identify application deployment problems, and resolved and/or escalated them to development teams.
  • Analyzed problem incidents, debugged SQL, and implemented fixes.
  • Solid experience in story grooming and organizing JAD sessions, walkthroughs, and workshop sessions with end users, clients, stakeholders, and the IT group.
  • Established incident reporting (ServiceNow) and change control procedures using ClearQuest.
  • Performed Security Testing and Regression Testing.
  • Involved in User Acceptance Testing (UAT) with the end users (SMEs).
  • Reviewed the functional specifications ahead of requirement gathering to perform analysis and find gaps between the business requirements and the base functionality of the application.
  • Participated in bug review meetings and updated the requirements document per business user feedback and changes in application functionality.
