
SRE/Cloud Solution Architect Resume


Tampa, FL

SUMMARY

  • 27+ years’ combined Systems Administration/IT and AWS/Cloud experience.
  • 8+ consecutive years in the AWS/Cloud space. Roles include SRE/Cloud Solution Architect, Cloud Solutions Architect, DevOps Engineer, Platform Engineer/Site Reliability Engineer, and AWS Engineer.
  • Additional roles include Build and Release Engineer and Systems Administrator.
  • Professional experience configuring and deploying instances in AWS, Azure, and other cloud environments.
  • Build and deploy applications by adopting DevOps practices such as Continuous Integration (CI) and Continuous Deployment/Delivery (CD) with tools such as Jenkins, Ansible, and VSTS.
  • Deploy web applications on AWS S3, served through CloudFront and Route 53, using AWS CloudFormation.
  • Deploy applications using AWS Serverless Application repository and Lambda functions.
  • Perform infrastructure provisioning on AWS using Terraform and CloudFormation.
  • Skilled in applying technical development to Amazon AWS cloud services (EC2, S3, EBS, ELB, CloudFormation, CloudWatch, Elastic IP, RDS, SNS, SQS, Glacier, IAM, VPC, Route53) and managing security groups on AWS.
  • Migrate applications to the AWS Cloud.
  • Program Python scripts to automate AWS services (e.g., web servers, ELB, CloudFront distributions, databases, EC2, database security groups, S3 buckets, and application configuration); see the sketch after this list.
  • Write scripts to create stacks, single servers, or join web servers to stacks.
  • Apply AWS service and certificate management (e.g., AWS Certificate Manager).
  • Develop web applications and RESTful API services in Python with Flask and deploy to AWS.
  • Use Application Load Balancers with Auto Scaling groups of EC2 instances, RDS, and the AWS CloudFormation service.
  • Hands-on with configuration management tools such as Puppet and Ansible.
  • Hands-on with Containerization tools such as Docker and Kubernetes.
  • Skilled with Monitoring tools such as CloudWatch.
  • Create Kubernetes YAML files to deploy SCM dashboard CI/CD applications automatically and reduce time costs.
  • Proven skill applying Core AWS services.
  • Set up databases in AWS using RDS, storage using S3 buckets, and configure instance backups to S3 bucket.
  • Hands-on with S3, EC2, ELB, EBS, Route53, VPC and Auto Scaling.
  • Hands-on with deployment services Elastic Beanstalk, OpsWorks, and CloudFormation.
  • Hands-on with security practices IAM, CloudWatch, and CloudTrail.
  • Adept with various distributions of Virtualization, Cloud, and Linux.
  • Proven in Agile QA testing with extensive knowledge of Agile software testing practices.
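
A minimal sketch of the kind of Python/boto3 automation referenced above, assuming a hypothetical bucket name and region; it creates an S3 backup bucket and attaches a lifecycle rule that expires old backups:

    import boto3  # AWS SDK for Python; assumes credentials are already configured (CLI profile or environment)

    # Hypothetical names for illustration only.
    BUCKET = "example-backup-bucket"
    REGION = "us-east-1"

    s3 = boto3.client("s3", region_name=REGION)

    # Create the backup bucket (us-east-1 needs no LocationConstraint).
    s3.create_bucket(Bucket=BUCKET)

    # Expire objects under backups/ after 30 days to control storage costs.
    s3.put_bucket_lifecycle_configuration(
        Bucket=BUCKET,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-old-backups",
                    "Filter": {"Prefix": "backups/"},
                    "Status": "Enabled",
                    "Expiration": {"Days": 30},
                }
            ]
        },
    )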

TECHNICAL SKILLS

DevOps: Ansible, Kubernetes, Docker, Chef, Puppet, Jenkins, Vagrant, Maven, Subversion, Terraform, Prometheus.

Cloud: AWS, Azure, GCP

Web/Application Servers and Build Tools: Apache Tomcat, Maven, Gradle, Apache 2, Nginx

Programming Languages: Python, Bash, Perl, JavaScript, Shell scripting, PowerShell

Databases: Apache Cassandra, Apache Hive, Microsoft SQL Server, PostgreSQL, MySQL, NoSQL, MongoDB.

CI/CD: Git, Docker, Jenkins, Kubernetes, Ansible, AWS CodeCommit, CodeBuild, CodeDeploy, Terraform.

Software: Microsoft Project, Microsoft Visual Studio, VMWare, Microsoft Word, Excel, Outlook, PowerPoint, Visio, Git/Trello, Slack.

Operating Systems: Windows, Windows Server, Unix/Linux, Ubuntu, Red Hat Linux, CentOS

Scripting: MapReduce, SQL, Python, Flask, REST APIs, UNIX/Linux shell scripting, Node.js.

PROFESSIONAL EXPERIENCE

SRE/Cloud Solution Architect

Confidential, Tampa, FL

Responsibilities:

  • Identified and implemented automation solutions for manual processes to improve accuracy and efficiency of the DevOps pipelines.
  • Mentored offshore teams in DevOps best practices and ensured end-to-end implementation; was involved in new technology assimilation and knowledge transfer to the DevOps team.
  • Coordinated operations with Network and Platform engineering teams.
  • Implemented special projects for the office of Director Datacenter Operations.
  • Introduced Modularization and implemented the architectural roadmap for segregating large DevOps repositories into Terraform module-based implementations.
  • Led a core DevOps team to implement modularization best practices after a POC converting existing monolithic Terraform configurations into modules, and restructured the GitLab repositories to minimize cloning and branching during new releases and the spin-up of Dev/Test/Prod environments.
  • Applied a lean process that minimized modifications of configuration data through reuse of code modules, significantly reducing time to delivery.
  • Implemented complete end-to-end automation of a startup/shutdown application for selected Compute Engine (VM) instances per environment/project. (The requirement arose from an excessive GCP budget caused by unnecessary spin-up and idling of cloud resources; the mandate was to reduce unused resources at any given time by identifying user patterns and starting up/shutting down on the required schedule, thus cutting the GCP budget.)
  • Applied the Python-based GCP client library to start up, shut down, or remove a given set of GCP resources (Compute Engine instances, persistent disks, IP addresses) based on the environment/project or a selected resource set; a minimal sketch follows this list.
  • Configured and ran a cron job to invoke the Python application and execute the required automation.
  • Led the effort to migrate on-prem F5 load balancer servers to GCP-based F5 servers. (The project scope was to switch live traffic from the on-prem to the GCP load balancer on a particular date according to the cutover plan; the strategy was to implement new provisioning using Terraform, with subsequent configuration changes executed via Ansible playbooks.)
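
A minimal sketch of the startup/shutdown automation described above, assuming hypothetical project, zone, and instance names and the google-cloud-compute client library; cron would invoke it outside business hours:

    from google.cloud import compute_v1  # pip install google-cloud-compute

    # Hypothetical names for illustration only.
    PROJECT = "example-project"
    ZONE = "us-central1-a"
    INSTANCES = ["dev-app-01", "dev-app-02"]

    def shutdown_instances():
        client = compute_v1.InstancesClient()
        for name in INSTANCES:
            # Stop each idle dev VM; starting them back up uses client.start() the same way.
            operation = client.stop(project=PROJECT, zone=ZONE, instance=name)
            operation.result()  # block until the stop operation completes

    if __name__ == "__main__":
        # Example crontab entry (assumption): 0 19 * * 1-5 python3 /opt/scripts/shutdown_instances.py
        shutdown_instances()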

Cloud Solutions Architect

Confidential, Chicago, IL

Responsibilities:

  • Served as technical authority and escalation point for AWS technical issues and decisions.
  • Translated business requirements into solutions, ensuring compliance with company strategy, policies and standards.
  • Designed and managed public and private cloud infrastructures using Amazon Web Services (AWS), which included VPC, EC2, S3, CloudFront, Elastic File System, RDS, Direct Connect, Route53, CloudWatch, CloudTrail, CloudFormation, and IAM.
  • Supported DevOps/Platform Engineers and helped them take advantage of continuous integration and delivery wherever possible.
  • Designed, developed, and implemented solutions from the ground up.
  • Built solutions for high availability to ensure redundancy and uninterrupted operations.
  • Built elastic scaling, disaster recovery, centralized logging, monitoring, alerting, and change control into solutions.
  • Deployed AWS infrastructure as code (IaC) using Terraform; used CloudFormation for some legacy applications.
  • Conducted research on AWS best practices, market standards, and applicable regulatory protocols; assessed current tools and system operations; identified and recommended areas for improvement; and led upgrade and optimization initiatives.
  • Performed deep dives into technical areas to solve specific solution or design challenges.
  • Used trials or POCs to prove or discount an approach.
  • Applied programming using languages such as Java, Python, and Node.js.
  • Applied AWS network services AWS VPC, Subnetting, Security Groups, and Routing.
  • Used AWS core services EC2, RDS, S3, and ElastiCache.
  • Applied AWS serverless services: Lambda, API Gateway, SNS, SQS, and DynamoDB (see the sketch after this list).
  • Created CloudFormation and Terraform scripts.
  • Monitored and managed a UNIX system and wrote scripts using Bash.
  • Developed solutions using AWS SAM and the Serverless Framework and defined APIs in Swagger.
  • Handled continuous integration and delivery using Jenkins and the AWS Code services.
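
A minimal sketch of the serverless pattern referenced above, assuming a hypothetical DynamoDB table behind an API Gateway proxy integration:

    import json

    import boto3

    # Hypothetical table name; in practice this would come from an environment variable.
    table = boto3.resource("dynamodb").Table("example-orders")

    def handler(event, context):
        # Minimal Lambda handler for an API Gateway proxy integration:
        # persist the JSON request body and return 201.
        item = json.loads(event.get("body") or "{}")
        table.put_item(Item=item)
        return {"statusCode": 201, "body": json.dumps({"stored": True})}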

DevOps Engineer

Confidential, Pontiac, MI

Responsibilities:

  • Designed, configured, and deployed Amazon Web Services (AWS) for multiple applications using the AWS stack (EC2, Route53, VPC, S3, RDS, CloudFormation, CloudWatch, SQS, IAM) with focus on high availability, fault tolerance, and auto-scaling.
  • Built Jenkins jobs to create AWS infrastructure from GitHub repos containing Terraform code and administered/engineered Jenkins for managing weekly builds.
  • Created automation and deployment templates for relational and NoSQL databases, including MSSQL, MySQL, Cassandra, and MongoDB in AWS.
  • Created Python scripts to automate AWS services, including web servers, ELB, CloudFront distributions, databases, EC2 and database security groups, S3 buckets, and application configuration; the scripts create stacks or single servers, or join web servers to stacks (see the sketch after this list).
  • Created Kubernetes Deployments, StatefulSets, NetworkPolicies, etc.
  • Created the Kubernetes dashboard and network policies.
  • Created metrics and monitoring reports using Prometheus and Grafana dashboards.
  • Created Puppet scripts to install stacks such as LXC containers, Docker, Apache, PostgreSQL, PHP, Python virtual environments, SonarQube, Nexus 2/3, WildFly/JBoss applications, and Django applications.
  • Set up Chef Infra, bootstrapped nodes, created and uploaded recipes, and configured node convergence in Chef SCM.
  • Implemented AWS CodePipeline and created CloudFormation JSON templates and Terraform configurations for infrastructure-as-code.
  • Developed procedures to unify, streamline, and automate applications development and deployment procedures with Linux container technology using Docker Swarm and Docker Compose.
  • Wrote CloudFormation templates (CFTs) in JSON and YAML to build AWS services with the infrastructure-as-code paradigm.
  • Wrote Terraform templates for AWS infrastructure-as-code to build staging and production environments.
  • Configured the ELK stack in conjunction with AWS and used Logstash to output data to AWS S3.
  • Handled migration of on-premises applications to the cloud and created the cloud resources to enable it, using critical AWS tools, ELBs, and Auto Scaling policies for scalability, elasticity, and availability.
  • Used Helm charts to create, define, and update Kubernetes clusters.
  • Used Helm charts for load balancing with Kubernetes clusters.
  • Clustered Docker Containers with the help of Kubernetes on the OpenShift platform.
  • Automated provisioning and repetitive tasks using Terraform, Python, Docker containers, and service orchestration.
  • Managed the OpenShift cluster that includes scaling up and down the AWS app nodes.
  • Used Ansible automation to replace different OpenShift components such as etcd, master, app, and infra nodes, and GlusterFS.
  • Created Chef cookbooks and wrote recipes in Ruby to install and configure infrastructure across environments, and automated the process using Python scripts.
  • Automated the cloud deployment using Chef, Python, and AWS CloudFormation Templates. Used Chef for unattended bootstrapping in AWS.
  • Maintained high-availability clustered and standalone server environments, refined automation components with scripting and configuration management (Ansible), and wrote Ansible playbooks.
  • Developed OpenShift/Kubernetes templates for various applications like Jenkins, Kafka, Cassandra, and Grafana.
  • Installed, set up, and configured Apache Kafka.
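
A minimal sketch of the stack-creation scripts referenced above, assuming a hypothetical stack name and template URL; it launches a CloudFormation stack and waits for completion:

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    # Hypothetical names for illustration only.
    STACK_NAME = "example-web-stack"
    TEMPLATE_URL = "https://s3.amazonaws.com/example-bucket/web-stack.yaml"

    def create_stack():
        # Launch the stack and block until it finishes so the pipeline fails fast on errors.
        cfn.create_stack(
            StackName=STACK_NAME,
            TemplateURL=TEMPLATE_URL,
            Parameters=[{"ParameterKey": "InstanceType", "ParameterValue": "t3.micro"}],
            Capabilities=["CAPABILITY_NAMED_IAM"],
        )
        cfn.get_waiter("stack_create_complete").wait(StackName=STACK_NAME)

    if __name__ == "__main__":
        create_stack()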

Platform Engineer/Site Reliability Engineer

Confidential, St Louis, MO

Responsibilities:

  • Reviewed existing infrastructure; identified the various services running in it and how they function; identified bottlenecks; and made recommendations on a go-forward/change plan of action that minimized risk for implementing and monitoring new infrastructure.
  • Set up the AWS Notify Slack Terraform module and integrated it with AWS CloudWatch and Azure.
  • Set up alerting and monitoring system and configured it to continuously monitor containers and run application health checks.
  • Created Dockerfiles and automated Docker image creation using Jenkins and Docker.
  • Automated various infrastructure activities such as continuous deployment and application server setup.
  • Configured Terraform with AWS, VS Code, and Git.
  • Configured EC2 servers, installed the CloudWatch agent, and created its configuration file to ship infrastructure logs into CloudWatch.
  • Configured the alerting and monitoring system to automatically raise alerts for security and service issues (metrics outside set parameters, application compromise or failure, service outages, disk/CPU/memory thresholds exceeded, etc.), and automated stack monitoring with Ansible playbooks driven by CI tools such as Jenkins (see the sketch after this list).
  • Used Git to handle configuration and source code repository management.
  • Tested infrastructure with Terraform modules and versioned modules for Staging, Testing, and Production environments in Azure.
  • Documented system configurations; instance, OS, and AMI build practices; backup procedures; and troubleshooting guides, and ensured infrastructure and architecture drawings stayed current with changes.
  • Tracked project milestones and prepared reports using Jira.
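
A minimal sketch of the threshold alerting referenced above, assuming a hypothetical instance ID and SNS topic; it creates a CloudWatch alarm that fires when average CPU stays above 80%:

    import boto3

    cw = boto3.client("cloudwatch", region_name="us-east-1")

    # Hypothetical identifiers for illustration only.
    INSTANCE_ID = "i-0123456789abcdef0"
    SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ops-alerts"

    # Alert when average CPU exceeds 80% for two consecutive 5-minute periods.
    cw.put_metric_alarm(
        AlarmName=f"high-cpu-{INSTANCE_ID}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[SNS_TOPIC_ARN],
    )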

AWS Engineer

Confidential, Denver, CO

Responsibilities:

  • Designed and developed AWS infrastructure using EC2, S3, VPC, IAM, EBS, Security Groups, Auto Scaling, Transfer for SFTP, Elastic Beanstalk, CloudFront, CloudWatch, Lambda, Trusted Advisor, RDS, Cost Explorer, and the AWS CLI.
  • Implemented Ansible to manage existing servers and automate the build/configuration of new servers.
  • Implemented rapid provisioning and management for Linux using Amazon EC2, Ansible, and custom Bash scripts.
  • Created alarms and notifications for EC2 instances using CloudWatch.
  • Triggered AWS Lambda functions using CloudWatch scheduled events (see the sketch after this list).
  • Implemented a lifecycle policy for snapshots.
  • Defined Terraform modules such as Compute and Users to reuse in different environments.
  • Established and applied appropriate branching and labeling/naming conventions using Git source control.
  • Created S3 buckets and maintained and utilized the policy management of S3 buckets and Glacier for storage and backup on AWS.
  • Managed the IAM service in AWS to assign roles and policies to users and used the IAM console to create custom users and groups.
  • Created shell scripts for scheduling automated backups from a file system (mounted as a local mount point) to a local disk using Rsync and sent email upon completion.
  • Deployed AWS resources using AWS CloudFormation.
  • Maintained tagging compliance for all the AWS resources and updated all tags using AWS CLI.
  • Retrieved resource metrics using AWS CLI, such as maximum/average CPU utilization and memory usage.
  • Ensured regular tag compliance and patch compliance to the servers.
  • Worked with Jenkins pipeline suite for supporting the implementation and integration of continuous delivery (CD) pipelines.
  • Installed, configured, and administered the Jenkins Continuous Integration (CI) tool on Linux machines, along with adding/updating plugins such as Maven, Ansible, and Git.
  • Used Ansible/Ansible Tower as a configuration management tool to automate daily tasks, rapidly deploy critical applications, and proactively manage change.
  • Provisioned, operated, and maintained systems running on AWS and configuration management using Ansible, and deployed microservices using Ansible.
  • Enabled SSH access to servers from the jump server without key or password using Ansible and the shell.
  • Worked with Ansible for deployment of security tools, Nagios agents, and Nagios Servers in different environments.
  • Configured the Git plugin to provide integration between Git and Jenkins.
  • Deployed build artifacts to the application server using Maven and wrote Maven pom.xml files to automate integrated build activities on Jenkins.
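
A minimal sketch of a scheduled Lambda of the kind referenced above, assuming a hypothetical Backup=true volume tag; a CloudWatch (EventBridge) scheduled rule invokes it nightly to create EBS snapshots:

    import boto3

    ec2 = boto3.client("ec2")

    def handler(event, context):
        # Snapshot every EBS volume carrying the hypothetical Backup=true tag.
        volumes = ec2.describe_volumes(
            Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
        )["Volumes"]
        for volume in volumes:
            ec2.create_snapshot(
                VolumeId=volume["VolumeId"],
                Description="Automated nightly snapshot",
            )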

Build and Release Engineer

Confidential, Pittsburgh, PA

Responsibilities:

  • Planned, developed, built, and deployed processes for pre-production environments.
  • Wrote Python scripts to deploy applications using Puppet across Linux servers (see the sketch after this list).
  • Developed scripts to push patches and files and to manage configuration drift through Puppet.
  • Used the Ant build tool for script deployment and used Jenkins to move deployments from one environment to another.
  • Configured Linux servers for Oracle Real Application Clusters (RAC) and configured SAN-based mount points.
  • Configured and maintained common Linux applications such as Apache, Active, NFS, DHCP, BIND, SSH, and SNMP.
  • Configured Jenkins to build Java code using Meta Case software and completed the CI process on the generated Java code.
  • Used Shell/Perl scripts for automation purposes.
  • Configured the Nexus repository manager to share build artifacts.
  • Applied continuous integration tool Jenkins for end-to-end automation for all build and deployments.
  • Used Puppet and UrbanCode Deploy for application delivery automation.
  • Managed deployment automation with Puppet in Ruby.
  • Automated testing builds and deployment by developing and maintaining the processes and associated scripts/tools.
  • Implemented and configured Nagios for continuous monitoring of applications and enabled notifications via emails and text messages.
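
A minimal sketch of the Python-driven Puppet deployment referenced above, assuming a hypothetical host list and passwordless SSH/sudo; it triggers a one-shot Puppet agent run on each server:

    import subprocess

    # Hypothetical host list; in practice this comes from the site inventory.
    HOSTS = ["app01.example.com", "app02.example.com"]

    def trigger_puppet_run(host):
        # Kick off a one-shot Puppet agent run over SSH and report the exit code
        # (puppet agent --test exits 2 when changes were applied).
        result = subprocess.run(
            ["ssh", host, "sudo", "puppet", "agent", "--test"],
            capture_output=True,
            text=True,
        )
        print(f"{host}: exit={result.returncode}")
        return result.returncode

    if __name__ == "__main__":
        for host in HOSTS:
            trigger_puppet_run(host)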

Systems Engineer

Confidential, Boise, ID

Responsibilities:

  • Built, installed, and configured servers from scratch with Red Hat Linux.
  • Installed, configured, and performed troubleshooting activities specific to Solaris, Linux RHEL, HP-UX, and AIX operating systems.
  • Configured DNS and DHCP on client networks.
  • Set up a VPC environment and designed an effective networking strategy based on client requirements.
  • Developed complex scripts to automate the provisioning/maintenance of IaaS and PaaS environment configurations.
  • Applied OS patches and upgrades on a regular basis, upgraded administrative tools and utilities, and configured/added new services.
  • Installed and configured Apache, WebLogic, and WebSphere applications.
  • Provided backup and recovery services, managed file systems and disk space, and managed virus protection on a routine basis (see the sketch after this list).
  • Complied with the established software development life cycle methodology to deliver effective solutions.
  • Provided technical support via telephone and email to thousands of users.
  • Created database tables with various constraints for clients accessing FTP.
  • Recommended and implemented system enhancements that improved the performance and reliability of the system, including installing, upgrading/patching, monitoring, problem resolution, and configuration management.
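
A minimal sketch of the kind of routine disk-space monitoring referenced above, assuming a hypothetical mail host and threshold; it emails an alert when filesystem usage crosses the threshold:

    import shutil
    import smtplib
    from email.message import EmailMessage

    # Hypothetical threshold and mail settings for illustration only.
    THRESHOLD_PCT = 90
    MAILHOST = "mail.example.com"

    def check_disk(path="/"):
        usage = shutil.disk_usage(path)
        used_pct = usage.used / usage.total * 100
        if used_pct >= THRESHOLD_PCT:
            # Notify the admin mailbox when the filesystem is nearly full.
            msg = EmailMessage()
            msg["Subject"] = f"Disk usage on {path} at {used_pct:.0f}%"
            msg["From"] = "monitor@example.com"
            msg["To"] = "sysadmin@example.com"
            msg.set_content("Disk space threshold exceeded; investigate and clean up.")
            with smtplib.SMTP(MAILHOST) as smtp:
                smtp.send_message(msg)

    if __name__ == "__main__":
        check_disk("/")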
