
Sr. AWS/DevOps Engineer Resume


NJ

PROFESSIONAL SUMMARY:

  • 8+ years of experience in the IT industry spanning build/release management, CI/CD, configuration management, containerization, orchestration, and AWS/Azure production operations in cross-platform environments.
  • Management and administration of AWS services: CLI, EC2, VPC, S3, ELB, Glacier, Route 53, CloudTrail, IAM, and Trusted Advisor.
  • Worked on multiple AWS instances, setting up security groups, Elastic Load Balancers, AMIs, and Auto Scaling to design cost-effective, fault-tolerant, and highly available systems.
  • Experience with container-based deployments using Docker, working with Docker images, Docker Hub and Docker registries, and Kubernetes.
  • Performed continuous delivery in a microservice infrastructure with Amazon cloud, Docker, and Kubernetes.
  • Worked on GitLab and created GitLab Runners for different server environments.
  • Set up CI/CD process in GitLab to deploy microservices in Kubernetes in different environments.
  • Experience in automated pipeline methodologies and CI/CD toolchains using shell scripting (Git, Maven, and Jenkins).
  • Experience with Container orchestration using Kubernetes and Docker.
  • Experience working with AWS Glue for faster data integration and AWS KMS for key encryption.
  • Experience with the public cloud platforms AWS and Azure, and monitoring tools such as Prometheus.
  • Experience creating, publishing, maintaining, monitoring, and securing APIs with Amazon API Gateway.
  • Set up databases in AWS using RDS and storage using S3 buckets, and configured instance backups to S3.
  • Built fast and accurate machine learning and NLP models in Python, analyzing text and interpreting large datasets pulled from databases with SQL joins.
  • Integrated Amazon DynamoDB with AWS Lambda to store item values and archive DynamoDB Streams data for querying with Athena (a minimal sketch of this pattern follows this list).
  • Experience in ticketing systems like ServiceNow.
  • Implemented network and security policies so that only secure connections are allowed into the cloud environments.
  • Supported and created documentation to transition development teams to ensure a smooth migration.
  • Updated AWS infrastructure as code using Terraform alongside CloudFormation.
  • Monitored applications and infrastructure and took corrective actions accordingly.
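
As an illustration of the DynamoDB Streams/Lambda backup pattern mentioned above, here is a minimal sketch in Python; the bucket name, environment variable, and handler name are assumptions, not taken from the actual project.

```python
import json
import os

import boto3

s3 = boto3.client("s3")
# Hypothetical bucket used to archive stream records; configured via Lambda environment variables.
BACKUP_BUCKET = os.environ.get("BACKUP_BUCKET", "example-ddb-stream-archive")


def handler(event, context):
    """Triggered by a DynamoDB stream: archive each record to S3 as JSON for later Athena queries."""
    records = event.get("Records", [])
    for record in records:
        key = f"streams/{record['eventID']}.json"
        s3.put_object(
            Bucket=BACKUP_BUCKET,
            Key=key,
            Body=json.dumps(record["dynamodb"]).encode("utf-8"),
        )
    return {"archived": len(records)}
```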

TECHNICAL SKILLS:

Cloud: AWS, Azure

Infrastructure as Code: Terraform, CloudFormation

Configuration Management: Chef, Ansible

CI/CD: Jenkins, Bamboo

Build Tools: Maven, ANT, Artifactory

Scripting: Bash, Python, Ruby, YAML, PowerShell

Operating Systems: Linux (Red Hat, CentOS, Ubuntu)

Containerization Tools: Docker, Kubernetes, AWS ECS

Source Control Management: GIT, SVN, GitHub, Bitbucket

Monitoring: CloudWatch, New Relic

Bug Tracking: JIRA

Languages: C, C++ and Java/J2EE

PROFESSIONAL EXPERIENCE:

Confidential, NJ

Sr. AWS/ DevOps Engineer

Responsibilities:

  • Migrated over 70 web applications from the vendor's infrastructure into AWS environments.
  • Created and Configured EC2 Auto Scaling groups to host multiple static websites.
  • Developed the necessary disaster recovery options to avoid catastrophic failures of systems and applications.
  • Designed AWS IAM groups and security groups for access control.
  • Worked as Acting Release Manager to ensure timely deployment of applications into Production environments.
  • Work closely with the team’s Data Scientists to develop and test models using the latest Machine Learning and Optimization technologies and methodologies.
  • Automated Tableau report and data source deployments using Python REST API libraries (see the sketch after this list).
  • Used the Azure Databricks and Data Factory ETL tools to ingest data from various on-prem and cloud sources, transform it with notebooks, and store it in Azure Storage and Data Lake accounts.
  • Used AWS Glue, S3, Lambda functions, PostgreSQL, GraphQL, and IAM roles in AWS.
  • Experience with distributed data processing systems (Akka, Spark, Kafka, Storm, Hadoop, and others).
  • Involved in designing and deploying many applications utilizing almost all of the AWS stack (including EMR, security groups, EC2, S3, DynamoDB, SNS, SQS, and IAM), focusing on high availability, fault tolerance, and auto-scaling.
  • Wrote Terraform templates and created modules to automate deployments to AWS.
  • Created custom Terraform modules in centralized repositories and reused them across applications in different organizations.
  • Implemented GitHub Actions workflows to deploy the Terraform templates into AWS and Azure.
  • Supported and created documentation to transition development teams to ensure a smooth migration.
  • Management and administration of AWS services: CLI, EC2, VPC, S3, Route 53, CloudWatch (triggers and event rules), and IAM.
  • Migrated repositories from Azure DevOps to GitHub and created GitHub pipelines from scratch using GitHub Actions.
  • Migrated major applications to Azure serverless infrastructure such as Azure App Service, Azure SQL, and Azure Logic Apps to improve performance and reliability.
  • Troubleshot issues related to network and security policies.
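
A minimal sketch of the Tableau deployment automation mentioned above, assuming the tableauserverclient library; the server URL, token, site, project name, and workbook file are hypothetical.

```python
import tableauserverclient as TSC

# Hypothetical server URL, site, and personal access token.
auth = TSC.PersonalAccessTokenAuth("deploy-token", "REDACTED", site_id="analytics")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    # Find the target project by name (hypothetical project "Marketing Reports").
    projects, _ = server.projects.get()
    project = next(p for p in projects if p.name == "Marketing Reports")

    # Publish (or overwrite) a packaged workbook into that project.
    workbook = TSC.WorkbookItem(project_id=project.id)
    server.workbooks.publish(workbook, "sales_report.twbx", TSC.Server.PublishMode.Overwrite)
```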

Confidential, Atlanta, GA

Sr. AWS/ DevOps Engineer

Responsibilities:

  • Maintain the health, stability, and security of the growing production cluster. Support and troubleshoot production problems.
  • Develop and enforce best practices in configuration, management, and application deployment.
  • Maintain and improve GIT source control and continuous integration systems.
  • Amazon EC2 virtual server automation and administration. Selected, set up, and installed machine and application monitoring and alerting systems.
  • Ensured high reliability and adequate capacity for distributed “Big Data” and auto-scaled systems.
  • Design, implement, and maintain all AWS infrastructure and services within a managed service environment.
  • Utilized CloudWatch and LogicMonitor to monitor resources such as EC2, EBS, ELB, RDS, and S3 (a sample alarm sketch follows this list).
  • Created cluster snapshots for backup in Amazon S3 and restored them using Amazon Redshift.
  • Involved in setting up the CI/CD pipeline using Jenkins, Maven, Nexus, GitHub, CHEF, Terraform, and AWS.
  • Involved in designing and deploying various applications utilizing almost all of the AWS stack (including EC2, Route 53, S3, RDS, DynamoDB, SNS, SQS, and IAM), focusing on high availability, fault tolerance, and auto-scaling with AWS CloudFormation.
  • Configured AWS IAM and security groups in public and private subnets in the VPC.
  • Created AWS Route 53 records to route traffic between different regions.
  • Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT gateways to ensure successful deployments.
  • Worked on container-based technologies like Docker and Kubernetes.
  • Used Kubernetes to orchestrate the deployment, scaling, and management of Docker containers.
  • Configured and integrated Bamboo with Bitbucket to pull code, Ant to generate builds, and pushed artifacts to AWS S3.
  • Built and maintained Docker container clusters managed by Kubernetes on Linux using Bash, Git, and Docker. Utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build and test deployments.
  • Used Kubernetes to roll back to an earlier deployment when instability occurred.
  • Configured and managed EC2, RDS, CloudWatch, CloudFormation, S3 buckets, VPC, VPN, security groups, CloudTrail, Elastic Load Balancer, Auto Scaling, and ElastiCache.
  • Worked on a POC for deploying the AWS infrastructure using Terraform and CloudFormation.
  • Built and configured Jenkins slaves for parallel job execution. Installed and configured Jenkins for continuous integration and performed continuous deployments.
  • Created and wrote shell scripts (Bash) for automating tasks.
  • Integrated Docker container orchestration framework using Kubernetes by creating pods, ConfigMaps, and deployments.
  • Worked on the continuous integration tool TeamCity. Used Bamboo for the official nightly build, test, and change-list management, and installed multiple plugins for smooth build and release pipelines.
  • Used Jenkins pipelines to drive all microservice builds out to the Docker registry and then deploy them to Kubernetes, creating and managing pods with Kubernetes.
  • Deployed web applications and database templates.
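
To illustrate the CloudWatch monitoring mentioned above, here is a minimal boto3 sketch that creates a CPU alarm on an EC2 instance; the region, instance ID, and SNS topic ARN are hypothetical.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Hypothetical identifiers: replace with a real instance ID and SNS topic ARN.
INSTANCE_ID = "i-0123456789abcdef0"
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ops-alerts"

# Alarm when average CPU utilization stays above 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName=f"high-cpu-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[ALERT_TOPIC_ARN],
)
```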

Confidential

Release Engineer

Responsibilities:

  • Implemented automation with Chef on AWS for application testing, deployment, and development.
  • Installed the Chef workstation, bootstrapped nodes, wrote recipes and cookbooks and uploaded them to the Chef server, and managed on-site OS, applications, services, and packages using Chef.
  • Designed and deployed AWS solutions using EC2, S3, RDS, EBS, SQS, SNS, ELB, and Auto Scaling groups.
  • Used AWS CloudFormation and AWS OpsWorks to deploy the infrastructure necessary to create development, test, and production environments for a software development project.
  • Tested Chef cookbook modifications on cloud instances in AWS using Test Kitchen and ChefSpec, and used Ohai to collect node attributes.
  • Worked on the DevOps platform team responsible for specialization areas related to Chef for cloud automation.
  • Well-versed in Chef concepts such as roles, environments, data bags, Knife, and Chef server administration/organizations.
  • Created job chains with Jenkins Job Builder, parameterized triggers, and target host deployments. Utilized many Jenkins plugins and the Jenkins API.
  • Built end-to-end CI/CD pipelines in Jenkins to retrieve code, compile applications, run tests, and push build artifacts to the Nexus artifact repository.
  • Integrated Maven/Nexus, Jenkins, and UrbanCode Deploy with Patterns/Release, Git, Confluence, JIRA, and Cloud Foundry.
  • Created Jenkins workflows for advanced deployment processes (DB script execution, environment configuration changes, etc.) in both QA and pre-production environments.
  • Administered, installed, and configured SonarQube and ran code analysis scans.
  • Developed build and deployment scripts using Maven as the build tool in Jenkins to move artifacts from one environment to another.
  • Worked on the Nagios dashboard, creating custom alerts and plugins for Nagios (a sample plugin sketch follows this list).
  • Configured alerting mechanisms, error logging, and performance monitoring with Nagios.
  • Responsible for writing release notes documenting useful information about each release, the software versions, and the changes implemented.
  • Experience installing, upgrading, and configuring Red Hat Linux 6.x and 7 using Kickstart servers and interactive installation.
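
A minimal sketch of a custom Nagios plugin of the kind mentioned above, written in Python; the check, mount point, and thresholds are hypothetical. Nagios reads the first line of stdout and maps the process exit code to OK/WARNING/CRITICAL.

```python
#!/usr/bin/env python3
"""Hypothetical Nagios check: alert when disk usage on a mount point crosses thresholds."""
import shutil
import sys

OK, WARNING, CRITICAL = 0, 1, 2


def check_disk(path="/", warn=80.0, crit=90.0):
    usage = shutil.disk_usage(path)
    pct = usage.used / usage.total * 100

    status, label = OK, "OK"
    if pct >= crit:
        status, label = CRITICAL, "CRITICAL"
    elif pct >= warn:
        status, label = WARNING, "WARNING"

    # Status line plus perfdata after the pipe, in standard Nagios plugin format.
    print(f"DISK {label} - {pct:.1f}% used on {path} | used_pct={pct:.1f}%;{warn};{crit}")
    return status


if __name__ == "__main__":
    sys.exit(check_disk(*sys.argv[1:2]))
```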

Confidential

AWS/ DevOps Engineer

Responsibilities:

  • Maintain the health, stability, and security of the growing production cluster. Support and troubleshoot production problems.
  • Designing solutions for data storage, monitoring, and deployment automation while continually enhancing DevOps tools, processes, and procedures.
  • Setting up and maintaining Infrastructure for Developers/QA/Data Team and providing support for product development and releases.
  • Maintaining uptime and quick deployment turnaround through development and optimization of infrastructure, server, and deployment strategies.
  • Improving customer experience by building, deploying, and scaling web services on virtual infrastructure, swiftly investigating and resolving technical issues.
  • Experienced in setting up AWS infrastructure using services such as EC2, VPC, IAM, RDS, and ElastiCache; automated routine tasks using Python and shell scripting.
  • Worked on implementing security best practices in the AWS cloud.
  • Deployed and scaled containerized applications to meet application-specific requirements.
  • Helped the marketing team by producing custom reports and supporting internal tools (Python, Perl, MySQL).
  • Developed an internal API and distributed queue infrastructure to maintain the integrity of the provisioning database and automate server provisioning (Ansible, Django/Python, Celery); a minimal sketch follows this list.
  • Orchestrated configuration of customer-accessible tools on hardware and VPS platforms.
  • Ensured these features were easy to use via a customer-facing website (Django, unit tests, React, Jasmine, Perl).
  • Contributed to backend tasks, installing and managing Linux servers and the monitoring platform.
  • Ensured code quality through code reviews and unit/integration tests run via continuous integration.
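
A minimal sketch of the provisioning-queue idea above, assuming Celery with a Redis broker and an Ansible playbook named provision.yml; the broker URL, playbook name, and task are hypothetical.

```python
import subprocess

from celery import Celery

# Hypothetical broker URL; in practice this would point at the queue backing the provisioning service.
app = Celery("provisioning", broker="redis://localhost:6379/0")


@app.task(bind=True, max_retries=3)
def provision_server(self, hostname):
    """Run the provisioning playbook against a single host, retrying with backoff on failure."""
    cmd = ["ansible-playbook", "provision.yml", "--limit", hostname]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Re-queue the job with a growing delay instead of failing outright.
        raise self.retry(countdown=60 * (self.request.retries + 1),
                         exc=RuntimeError(result.stderr))
    return {"host": hostname, "status": "provisioned"}
```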
