
DevOps/Cloud Engineer Resume

New Orleans, LA

SUMMARY

  • 8+ years of experience as a Cloud/DevOps Engineer, Linux Administrator, and Middleware Engineer with technologies such as Ubuntu/Red Hat/CentOS servers, VMware, Amazon AWS, Azure DevOps, and shell scripting, providing solutions to complex issues using virtualization, integration, and configuration management tools along with configuration and monitoring. Adopted a DevOps culture to automate the entire software development life cycle and implemented continuous integration and continuous delivery/deployment pipelines.
  • As an AWS DevOps Engineer, used AWS compute services such as EC2, Auto Scaling, and Kubernetes alongside VMware; storage services such as S3, EBS, and Elastic File System; database services such as DynamoDB (NoSQL) and RDS (MySQL, Oracle, SQL Server); and networking and content-delivery services such as VPC, Route 53, Elastic Load Balancing, and CloudFront.
  • Expertise in creating complex custom IAM policies, roles, and user management for delegated users within AWS, and experience developing AWS CloudFormation templates to create custom-sized VPCs, subnets, EC2 instances, ELBs, and security groups.
  • Experience working with Linux distributions including Red Hat, CentOS, Ubuntu, and Debian; configuration and administration of Red Hat virtual machines in a VMware environment; and support for client technologies.
  • Expertise in implementing the Configuration Management Tools like Chef, Puppet and Ansible.
  • Created various plays and playbooks to automate repetitive tasks, quickly deploy critical applications, and manage environment configuration files using various Ansible modules. Wrote Ansible playbooks in YAML, maintaining roles, inventory files, and group variables.
  • Experience in Azure DevOps; created both build and release pipelines.
  • Deployed servers using Puppet and PuppetDB for configuration management of existing infrastructure, and implemented Puppet 3.8 manifests and modules to deploy builds for Dev, QA, and production.
  • Experience installing Chef Server Enterprise on-premises, setting up workstations, bootstrapping nodes using Knife, and developing Chef cookbooks, recipes, roles, and data bags to automate deployment-related services.
  • Utilized Kubernetes to automate the deployment, scaling, and operation of application containers across a cluster of hosts. Worked closely with development teams and test engineers on EC2 size optimization and Docker build containers, and used Kubernetes during debugging in the lead-up to production, when multiple application builds must be tested for stability.
  • Set up CI/CD pipelines using the continuous integration tool Jenkins and automated the entire AWS EC2, VPC, S3, SNS, Redshift, and EMR based infrastructure using Terraform, Chef, Python, and shell/Bash scripts; managed security groups on AWS and built custom monitoring with CloudWatch.
  • Worked with Terraform to create stacks in AWS and updated the Terraform scripts regularly as requirements changed.
  • Expertise with source control tools such as SVN and Bitbucket (Git), with good knowledge of branching and merging code lines in Git. Used webhooks to integrate with the continuous integration tools Jenkins and Bamboo, and Ant and Maven for generating builds. Designed quality profiles and enforced standards by installing quality gates in SonarQube.
  • Expertise in using build tools such as Maven to build deployable artifacts (WAR and EAR files) from source code.
  • Maintained all servers (DEV, QA) with the required code in a timely manner for regression testing, using uDeploy to automate the application deployment process with simple code.
  • As a build and release engineer, used a microservices architecture to build a large application.
  • Understanding of the standards and best practices of Software Configuration Management (SCM) in Agile-Scrum and Waterfall methodologies; implemented a change management process for tracking different clients and set up tracking using JIRA/ServiceNow.
  • Extensive knowledge in migrating applications from internal data center to AWS.
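The Ansible work summarized above can be illustrated with a minimal playbook sketch; the inventory group, package, and template path below are hypothetical placeholders, not taken from any actual project:

```yaml
---
# Hypothetical playbook sketch: install and configure a web service.
- name: Configure web tier
  hosts: webservers                     # hypothetical inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Render site config from a template
      ansible.builtin.template:
        src: templates/site.conf.j2     # hypothetical template file
        dest: /etc/nginx/conf.d/site.conf
      notify: Restart nginx

  handlers:
    - name: Restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
```

The handler pattern shown here restarts the service only when the rendered configuration actually changes, which keeps repeated playbook runs idempotent.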

TECHNICAL SKILLS

Operating Systems: RHEL/CentOS 5.x/6.x/7, Linux-Ubuntu, Windows 7/8/XP/10, AWS-Linux CLI

Networking: VPC, Route-53, LDAP, DNS, SSH

Databases: MySQL, Cassandra, PostgreSQL, SQL Server

Backup/Monitoring Tools: S3 (Simple Storage Service), CloudWatch, Splunk, Nagios

Source Control: GIT, SVN

Configuration Management: Ansible, Puppet, Chef

Virtualization/Containerization Technologies: VMware, AWS ECS, Docker container services, Pivotal Cloud Foundry, Vagrant

Cluster Management: Kubernetes (K8s), Docker, Docker Swarm

Languages: Shell scripting, Bash

Web/Application Server: IIS, Tomcat, Apache

Build and Deploy Tools: ANT, Maven, Jenkins, Bamboo, TeamCity, TFS, MS Build

Infrastructure: AWS, Azure

Web Technologies/Programming Languages: Servlets, JDBC, JSP, XML, HTML, .NET, JavaScript, C, C++, Ruby, Perl, Python, Shell scripting

Software Methodologies: Agile-SCRUM, Waterfall

Project Management/ Bug tracking Tools: JIRA, Confluence

PROFESSIONAL EXPERIENCE

Confidential, New Orleans, LA

DevOps/Cloud Engineer

Responsibilities:

  • Hands-on experience designing, planning, and implementing the move of existing on-premises applications to Azure Cloud (ARM); configured and deployed Azure automation scripts, services, and utilities (Compute, Web and Mobile, Blobs, Resource Groups, Azure Data Factory, Azure SQL, Cloud Services, and ARM) with a focus on automation.
  • Wrote templates to build the infrastructure for a JavaScript (Node.js, Vue.js) application; NPM (Node Package Manager) dependencies were installed and used to run the application.
  • Created a resource group that includes storage accounts, containers, functions, and Azure SQL databases.
  • Created CI/CD pipelines in Azure DevOps environments, providing their dependencies and tasks.
  • Created an Azure Key Vault to store all the credentials for SQL databases and API keys.
  • REST APIs were used in the backend for server interaction.
  • Built packages using automation tools pre-installed in the Azure DevOps portal, such as NPM and Maven, driven by build files (build.xml, pom.xml) depending on whether the package was Java or JavaScript.
  • Automated configuration management and deployments using Ansible playbooks, with YAML for resource declaration, and created roles and updated playbooks to provision servers with Ansible.
  • Monitored servers using Nagios and Splunk, and used logging tools such as the ELK stack.
  • Used JIRA as a ticketing tool to track issues across environments.
  • Created a cloud that supports DEV, TEST, and PROD environments.
  • Coordinate/assist developers with establishing and applying appropriate branching, labeling/naming conventions.
  • Created resource groups, Azure App Services, and storage accounts manually in the Azure portal and later converted them to code for automation and easier migration down the road.
  • Created different stages, published the artifacts, and copied and archived the files.
  • Created different stages in the release pipeline (development, staging, and production), set up so that updating the Dev branch triggers only the development pipeline.
  • Similarly, when the master branch on GitHub is updated, it directly triggers staging, where all development work can be verified before it moves to production.
  • Created different pipelines: one for cloud deployment, one for backend data, and one for data flow.
  • Initially worked on VSTS, but later moved to Azure DevOps keeping the client goals in mind.
  • Worked closely with Data teams and their tools, Data Factory, Data Bricks and Machine Learning on Azure platform.
  • Wrote ARM templates and PowerShell scripts to automate the pipelines as code.
  • Worked on cloud security services like Azure Key Vault and Monitoring tool Azure Monitor.
  • Performed integrated delivery using Jenkins, and handled branching, tagging, and release activities on the version control tool GitHub.
  • Worked in an Agile methodology model, using Jira as the project management tool.
  • Wrote Bash scripts for cloning the GitHub repository and setting up the project to work in Visual Studio 2019.
  • Created and maintained Windows Server based hosts and installed operating systems on guest servers; Active Directory was used to manage and access the networked resources.
  • Worked in Visual Studio 2019 and built the project using MSBuild for continuous integration.
  • Worked on code quality using SonarQube: installed the SonarQube plugin in Jenkins, integrated it with the project, and added SonarQube to the CI pipeline for code coverage reports.
  • Created, utilized, and managed policies in S3 buckets on AWS.
  • Transferred generated SQLite data to an S3 bucket using Python scripts.
  • Working on Data Lakes in AWS using the SQLite data transferred to the S3 bucket.
  • Worked on big-data-related data lakes on Amazon Web Services using AWS Glue.
  • Created IAM roles to work with the data lakes and added databases so they could run on demand.
  • Worked on the schema generated when the data lakes run.
  • Transformed the data to the required format using ETL tools and created tables in the data targets.
  • Used Amazon Athena to analyze the data from the created data lakes.
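The SQLite-to-S3 transfer described in this section can be sketched in Python. The serialization below is runnable as-is; the bucket name, object key, and the boto3 upload call are hypothetical and shown commented out, since they require real AWS credentials:

```python
import json
import os
import sqlite3
import tempfile

def rows_to_jsonl(db_path, table):
    """Serialize every row of `table` into JSON Lines, ready to be an S3 object body."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # rows become mapping-like, keyed by column name
    try:
        rows = conn.execute(f"SELECT * FROM {table}").fetchall()  # table name trusted here
        return "\n".join(json.dumps(dict(r)) for r in rows)
    finally:
        conn.close()

# Demo with a throwaway database; the table and values are illustrative only.
db = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(db)
conn.execute("CREATE TABLE events (id INTEGER, msg TEXT)")
conn.execute("INSERT INTO events VALUES (1, 'ok')")
conn.commit()
conn.close()

body = rows_to_jsonl(db, "events")
print(body)  # one JSON object per line

# Actual upload (hypothetical bucket/key; requires AWS credentials):
# import boto3
# boto3.client("s3").put_object(Bucket="my-data-lake", Key="events.jsonl", Body=body)
```

JSON Lines is a convenient body format here because Glue crawlers and Athena can read it directly from S3 without further unpacking.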

Confidential, Irving, Texas

Cloud/Release Engineer, DevOps Engineer

Responsibilities:

  • Hands-on experience with Amazon Web Services (AWS); implemented solutions using EC2, S3, and RDS in CloudFormation JSON templates, along with EBS, Elastic Load Balancer, Auto Scaling groups, Auto Scaling launch configurations, and Auto Scaling lifecycle hooks.
  • Created, utilized, and managed policies and Glacier storage in S3 buckets on AWS.
  • Developed and managed cloud VMs with the AWS EC2 command line clients and management console. Created AWS Route 53 records to route traffic between different regions, and set up alarms and notifications for EC2 instances using CloudWatch.
  • Wrote Terraform modules to automate the creation of VPCs and the launching of AWS EC2 instances. Modules were written for VPC creation, VPN connections from the data center to the production environment, and cross-account VPC peering.
  • Worked on launch configuration templates for launching EC2 instances using auto scaling groups.
  • Worked on AWS CloudWatch, CloudFormation, and CloudTrail services, and on CloudFront to set up and manage cached content delivery.
  • Created CloudFormation templates and deployed AWS resources with them; networking services such as Route 53 and VPN were put to use.
  • Worked with GitHub to store code and integrated it with Ansible to deploy the playbooks; deployed microservices, including provisioning AWS environments, using Ansible playbooks.
  • Used Ansible Tower, which provides an easy-to-use visual dashboard, role-based access control, job scheduling, integrated notifications, and graphical inventory management for Ansible deployments.
  • Created from scratch a new continuous integration stack based on Docker and Jenkins, allowing an easy and seamless transition from dev workstations to test servers.
  • Created Docker images using a Dockerfile and worked on Docker container snapshots, removing images and managing Docker volumes. Created a Docker Compose file in YAML for deploying MEAN stack (MongoDB, Express, AngularJS, Node.js) applications onto Docker containers before deploying them into the production environment, and used Docker container clusters to clone the production servers with Kubernetes orchestration for the cloned servers.
  • Managed microservices using Docker to quickly spin them up in the production environment and auto-scale them, with orchestration via Amazon EC2 Container Service (ECS), deploying to Amazon EC2 instances using launch configuration templates.
  • Experienced in deploying applications on Apache web server and Nginx, and on application servers such as Tomcat and JBoss.
  • Deployed Kubernetes clusters on top of Amazon EC2 instances using kops; managed local deployments in Kubernetes, creating local clusters and deploying application containers; built and maintained Docker container clusters managed by Kubernetes; and deployed Kubernetes using Helm charts.
  • Managed Kubernetes charts using Helm: created reproducible builds of the Kubernetes applications, managed Kubernetes deployment and service files, and managed releases of Helm packages.
  • Worked with container-based technologies such as Docker, Kubernetes, and OpenShift.
  • Point team player on OpenShift: created new projects and services for load balancing, added them to routes to make them accessible from outside, troubleshot pods through SSH and logs, and modified BuildConfigs, templates, ImageStreams, etc.
  • Used Elasticsearch, Logstash, and Kibana (the ELK stack) for centralized logging and analytics in the continuous delivery pipeline, storing logs and metrics in an S3 bucket using a Lambda function.
  • Built a new CI pipeline with testing and deployment automation using Docker, Jenkins, and Ansible. Integrated SonarQube into the CI pipeline to analyze code quality and obtain combined code coverage reports and Sonar metrics after static and dynamic analysis.
  • Prototyped a CI/CD system with GitLab on GKE, utilizing Kubernetes (K8s) and Docker as the runtime environment for the CI/CD systems to build, test, and deploy.
  • Used Jenkins to integrate Maven for generating builds, conducted unit tests with the JUnit plugin and regression tests with Selenium, and used AppDynamics and the ELK stack for monitoring.
  • Worked with Application Performance Monitoring (APM) tools such as AppDynamics for monitoring Java, .NET, and PHP applications.
  • Installed and configured the LAMP stack (Apache, MySQL, PHP) and Tomcat, using Nginx as a reverse-proxy server.
  • Team collaboration tools such as Rational Team Concert (RTC) were used to bring different teams' work together.
  • Deployed Java applications to web application servers such as Apache Tomcat, was involved in configuring Nginx as a web server, and also worked with UrbanCode Deploy to automate deployments.
  • Created and maintained various DevOps related tools for the team such as provisioning scripts, deployment tools and staged virtual environments using Docker and Vagrant.
  • Defined AWS security groups, which acted as firewalls controlling the traffic allowed to reach one or more AWS EC2 instances, and worked with Palo Alto firewall systems.
  • Schedule and perform process audits and recommend process improvements. Continuously work towards optimization of process and workflow to improve turnaround and resolution time.
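The Terraform modules for VPCs and EC2 instances described in this section might look, in minimal form, like the following sketch; the region, CIDR blocks, AMI ID, and names are hypothetical placeholders:

```hcl
# Minimal sketch of the VPC/EC2 provisioning described above.
provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "prod-vpc" # hypothetical tag
  }
}

resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "web" {
  vpc_id = aws_vpc.main.id
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "app" {
  ami                    = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type          = "t3.micro"
  subnet_id              = aws_subnet.public.id
  vpc_security_group_ids = [aws_security_group.web.id]
}
```

Wrapping resources like these in a reusable module is what lets the same definitions stamp out data-center VPN and cross-account peering variants with different inputs.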

Confidential, Draper, UT

DevOps/AWS Engineer

Responsibilities:

  • Involved in designing and deploying a large number of applications utilizing much of the AWS stack (including but not limited to EC2, Route 53, S3, RDS, DynamoDB, SNS, SQS, and IAM), focusing on high availability, fault tolerance, and auto scaling via AWS CloudFormation.
  • Developed Templates for AWS infrastructure as a code using Terraform to build staging and production environments.
  • Worked on AWS Cloud Formation templates to create custom-sized VPC, subnets, EC2 instances, ELB, security groups.
  • Worked on a tagging standard for proper identification and ownership of EC2 instances and other AWS services such as CloudFront, CloudWatch, OpsWorks, RDS, ELB, EBS, S3, Glacier, Route 53, SNS, SQS, KMS, CloudTrail, and IAM.
  • Worked on migrating and managing multiple applications from on premise to cloud using AWS services like S3, Glacier, EC2, RDS, CloudFormation and VPC.
  • Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to ensure successful deployment of web applications and database templates.
  • Built out server automation with continuous integration/continuous deployment tools such as Jenkins and Maven for deployment and build management.
  • Provided highly durable and available data using the S3 data store with versioning and lifecycle policies, and created AMIs of mission-critical production servers for backup.
  • Installed and configured various web application servers like Apache Tomcat web server, JBoss for deploying the artifacts and deployed applications on AWS by using Elastic Beanstalk.
  • Managed AWS EC2 instances utilizing Auto Scaling, and Elastic Load Balancing for our QA and UAT environments as well as infrastructure servers for GIT and Chef.
  • Wrote Chef cookbooks for various DB configurations to modularize and optimize end-product configuration, converted production support scripts to Chef recipes, and provisioned AWS servers using Chef recipes.
  • Developed Chef recipes and cookbooks using Ruby syntax and uploaded them from the chef-repo to the Chef server using client tools such as Knife.
  • Installed the SonarQube plugin in Jenkins and integrated it with the project's Maven script. Experience with the build management tools Ant and Maven, writing build.xml and pom.xml files.
  • Experience automating Java application tests using the Serenity tool, with working knowledge of Cucumber.
  • Experienced in build and deployment of Java applications on to different environments such as QA, UAT and Production.
  • Utilized Kubernetes to automate the deployment, scaling, and operation of application containers across a cluster of hosts.
  • Performed integrated delivery (CI and CD process) using Jenkins, Nexus, and Yum; handled branching, tagging, and release activities on the version control tools SVN and GitHub; performed application deployments and environment configuration using Chef and Ansible; and wrote Ansible playbooks to configure applications and install software and other packages.
  • Maintained Chef servers and a management application that can use ServiceNow (CI) data to bring computers into a desired state by managing files, services, or packages installed on physical or virtual machines.
  • Deployed Java applications to web application servers such as Apache Tomcat; involved in the installation and configuration of Nginx as a web server.
  • Experienced in deploying applications on Apache web server, Nginx, and application servers such as Tomcat and JBoss.
  • Virtualized servers using Docker for test- and dev-environment needs, automated configuration with Docker containers, and also used Docker Swarm.
  • Implemented a comprehensive cloud monitoring and incident management solution using Cloudkick and Datadog.
  • Hands-on experience using Docker tooling for orchestrating, linking, and deploying the services related to the containers.
  • Automated the cloud Deployments using Ansible and AWS Cloud Formation Templates from scratch as effort of migration.
  • Good experience with SDLC methodologies such as Agile and Scrum, and with Python-based environments.
  • Worked with various scripting languages such as Go, Bash, and Python; wrote Python scripts to push data from MongoDB to a MySQL database.
  • End-to-end project knowledge: a Unix process starts the flow and performs business logic, then calls a shell script in NOHUP mode, which in turn calls a PL/SQL program to complete the business requirements.
  • Coordinated with the Offshore and Onshore teams for Production Releases. Coordinated Release effort amongst various teams (Integration, QA, Testing, and Business Analysis) in geographically separated environment.
  • Coordinated with developers, Business Analyst, and Managers to make sure that code is deployed in the Production environment.
  • Understanding & usage of tools like Bamboo, JIRA, Nexus. Working with JIRA tool to track all the defects and changes released to all environments.
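A Chef recipe of the kind described in this section might be sketched as follows; the package, template, and node attributes are hypothetical, and the recipe runs under chef-client rather than as standalone Ruby:

```ruby
# cookbooks/myapp_db/recipes/default.rb -- hypothetical cookbook sketch.
# Installs a DB client and renders its config from an ERB template,
# restarting the app service when the rendered config changes.
package 'mysql-client'

template '/etc/myapp/db.conf' do
  source 'db.conf.erb'                         # template shipped in the cookbook
  mode   '0640'
  variables(db_host: node['myapp']['db_host']) # attribute set per environment
end

service 'myapp' do
  action [:enable, :start]
  subscribes :restart, 'template[/etc/myapp/db.conf]', :delayed
end
```

Keeping per-environment values in node attributes (or data bags) is what lets one cookbook modularize the DB configuration across Dev, QA, and production, as the bullets above describe.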

Confidential, Irving, Texas

Build and Release Engineer

Responsibilities:

  • Creating the automated build and deployment process for application, re-engineering setup for better user experience, and leading up to building a continuous integration system for all our products.
  • Maintained and administered the Git source code tool; created branches and labels and performed merges in Stash and Git.
  • Initiated regular build jobs using continuous integration tools such as Jenkins, and developed processes, tools, and automation for the Jenkins-based build system and for delivering software builds.
  • Configured Jenkins for doing the build in all the non-production and production environments.
  • Deployed Puppet, Puppet Dashboard, and PuppetDB for configuration management of existing infrastructure. Wrote Puppet manifests for deploying, configuring, and managing collectd for metric collection and monitoring, refactored Puppet manifests to reflect best practices, and wrote Puppet modules.
  • Implemented Puppet modules to automate the configuration of a broad range of services, and built an automatic provisioning system with Kickstart and Puppet.
  • Responsible for OpenStack project core infrastructure, including code review, continuous integration systems, and developer tools; scaled developer infrastructure as the project grew and transitioned to the OpenStack Foundation.
  • Optimized OpenStack database schemas and provided consultation to various service teams on query performance improvements.
  • Experience in managing Source control systems GIT and SVN.
  • Managed build results in Jenkins and deployed using workflows; maintained and tracked inventory using Jenkins and set alerts for when servers are full and need attention.
  • Modeled the structure for multi-tiered applications and orchestrated the processes to deploy each tier.
  • Developed build and deployment scripts using Ant and Maven as build tools in Jenkins to move artifacts from one environment to another. Experience administering and maintaining Atlassian products such as JIRA and Confluence, and deployed specific versions of various application modules into target environments using uDeploy.
  • Monitoring of applications, servers, doing capacity planning with the help of Nagios and Splunk for managing logs to notify the incident management system upon reaching or exceeding the threshold limits.
  • Configured, maintained, and administered Linux systems that host build and release engineering apps, constantly monitoring system load and memory consumption.
  • Familiar and experienced with Agile Scrum development.
  • Presented on the View Object pattern in web application automation (C#, Ruby, S, TeamCity). Migrated a continuous build server from CruiseControl to the more GUI-friendly TeamCity and managed the continuous integration environment using TeamCity. Used Bamboo for official nightly builds, testing, and change-list management, and installed multiple plugins for smooth build and release pipelines.
  • Involved in setting up the Chef workstation, bootstrapping various enterprise nodes, and setting up keys.
  • Integrated Git into Jenkins to automate the code checkout process, and used Jenkins for automating builds and deployments.
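The Puppet manifests for deploying and managing collectd described in this section could be sketched like this; the class name, module name, and file paths are hypothetical placeholders:

```puppet
# Hypothetical sketch of a manifest managing collectd on build hosts.
class buildhost::collectd {
  package { 'collectd':
    ensure => installed,
  }

  file { '/etc/collectd/collectd.conf':
    ensure  => file,
    source  => 'puppet:///modules/buildhost/collectd.conf', # file shipped in the module
    require => Package['collectd'],
    notify  => Service['collectd'],
  }

  service { 'collectd':
    ensure => running,
    enable => true,
  }
}
```

The package-file-service chain with `require`/`notify` is the standard Puppet pattern for keeping a monitored service converged to its desired state on every agent run.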

Confidential

Linux System Engineer

Responsibilities:

  • Installation and configuration of Linux for new build environment.
  • Created and maintained virtual servers on VMware ESX/ESXi based hosts and installed operating systems on guest servers; configured NFS and DNS.
  • Updating YUM Repository and Red Hat Package Manager (RPM).
  • Created RPM packages using RPMBUILD, verifying the new build packages and distributing the package.
  • Configuring distributed file systems and administering NFS server and NFS clients and editing auto-mounting mapping as per system / user requirements.
  • Experience installing, upgrading, and configuring Red Hat Linux 3.x, 4.x, 5.x, 6.x, and 7 using Kickstart servers and interactive installation.
  • Installed, configured, and maintained FTP servers, NFS, RPM, and Samba.
  • Configured SAMBA to get access of Linux shared resources from Windows.
  • Performed various software changes for Samba and Apache web services in the VMware environment on customers' servers, and followed up with data center personnel for hardware-related changes.
  • Created volume groups, logical volumes, and partitions on the Linux servers and mounted file systems on the created partitions.
  • Deep understanding of monitoring and troubleshooting mission critical Linux machines.
  • Experience with Linux internals, virtual machines, and open source tools/platforms.
  • Improve system performance by working with the development team to analyze, identify and resolve issues quickly.
  • Ensured data recoverability by implementing system and application level backups.
  • Performed various configurations
  • Support pre-production and production support teams in the analysis of critical services and assists with maintenance operations.
  • Automate administration tasks through use of Scripting and Job Scheduling.
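An administration task automated through scripting and job scheduling, as described above, might be a nightly backup along these lines. The source and destination default to throwaway temp directories so the sketch can run anywhere; in practice they would be real paths, and the script would be invoked from cron:

```shell
#!/bin/sh
# Nightly backup sketch: archive a source directory, keep the 7 newest archives.
set -eu

SRC="${BACKUP_SRC:-$(mktemp -d)}"    # directory to back up (demo: temp dir)
DEST="${BACKUP_DEST:-$(mktemp -d)}"  # where archives are stored
echo "hello" > "$SRC/demo.txt"       # demo payload so the archive is non-empty

STAMP=$(date +%Y%m%d-%H%M%S)
ARCHIVE="$DEST/backup-$STAMP.tar.gz"

tar -czf "$ARCHIVE" -C "$SRC" .      # compress the whole source tree

# Retention: delete everything but the 7 most recent archives.
ls -1t "$DEST"/backup-*.tar.gz | tail -n +8 | xargs rm -f

echo "wrote $ARCHIVE"
```

Scheduled from cron (hypothetical path), e.g. `0 2 * * * /usr/local/bin/nightly-backup.sh`, this gives the system- and application-level backups and recoverability the bullets above describe.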
