AWS DevOps Engineer Resume
Dallas, TX
PROFESSIONAL SUMMARY:
- Around 5+ years of professional experience in the IT industry spanning Linux administration, build and release, DevOps, and AWS cloud services, including maintaining Continuous Integration, Continuous Delivery and Continuous Deployment across environments (DEV/TEST/STAGE and PROD).
- Experience with the AWS IaaS platform and its components: VPC, ELB, Auto Scaling, EBS, AMI, EMR, Kinesis, Lambda, CloudFormation templates, CloudFront, CloudTrail, ELK Stack, Elastic Beanstalk, CloudWatch and DynamoDB.
- Experience in maintaining user accounts (IAM) and the RDS, Route53, VPC, DynamoDB and SNS services in AWS.
- 2+ years of experience with Splunk: developing dashboards, forms, SPL searches, reports and views, plus administration, upgrades, alert scheduling, KPIs, visualization add-ons and Splunk infrastructure.
- Installed and configured MapReduce, Hive and HDFS; implemented a CDH3 Hadoop cluster on CentOS and assisted with performance tuning and monitoring.
- Developed alerts and scheduled reports; developed and managed Splunk applications.
- Assisted with the design of core scripts to automate Splunk maintenance and alerting tasks; supported Splunk on UNIX.
- Experience in setting up monitoring infrastructure for the Hadoop cluster using Nagios and Ganglia.
- Experience on Hadoop clusters using major Hadoop Distributions - Cloudera (CDH3, CDH4), Hortonworks (HDP) and MapR (M3 v3.0).
- Experienced in using Integrated Development environments like Eclipse, NetBeans, Kate and gEdit.
- Migrated data from different databases (e.g. Oracle, DB2, MySQL, MongoDB) to Hadoop.
- Involved in designing and deploying a multitude of applications utilizing much of the AWS stack (including EC2, Route53, S3, RDS, DynamoDB, MariaDB, SNS, SQS and IAM), focusing on high availability, fault tolerance and auto scaling with AWS CloudFormation.
- Designed AWS CloudFormation templates to create custom-sized VPCs, subnets and NAT gateways to ensure successful deployment of web applications and database templates.
- Hands-on experience with S3 buckets and managed policies; utilized S3 and Glacier for storage, backup and archival in AWS, and set up and maintained Auto Scaling AWS stacks.
- Expertise in creating functions and assigning roles in AWS Lambda to run Python scripts, and in testing and automation tools like Selenium, JUnit and Cucumber.
- Designed high availability environment for Application servers and database servers on EC2 by using ELB and Auto-scaling. Installed application on AWS EC2 instances and configured the storage on S3 buckets
- Experience in branching, merging, tagging and maintaining versions across environments using SCM tools like Git, Bitbucket and Subversion (SVN) on Windows and Linux platforms.
- Extensively worked on Jenkins for continuous integration and for End to End automation for all build and deployments
- Worked in a DevOps group running Jenkins in a Docker container with EC2 slaves in the AWS cloud, and gained familiarity with surrounding technologies such as Mesos (Mesosphere) and Kubernetes.
- Hands-on experience using Bamboo modules such as Build Complete Action, Build Plan Definition and Administration Configuration; involved in provisioning AWS infrastructure using Terraform scripts from Jenkins.
- Extensive hands-on experience with Chef recipes and cookbooks applied across multiple nodes, including templates, roles, knife plugins and the Chef Supermarket, and with deploying nodes in production environments.
- Experience writing Chef cookbooks and recipes to automate the build/deployment process and improve on manual processes.
- Experience with Terraform as an infrastructure automation tool; implemented Infrastructure as Code with Terraform for AWS resource provisioning and management.
- Proficient in writing Terraform templates and Chef cookbooks and recipes, pushing them to the Chef server to configure EC2 instances.
- Engineered and implemented fully automated end-user, data migration solution to eliminate business downtime during acquisition/merger using Windows PowerShell.
- Experience automating manual deployment processes using PowerShell.
- Worked with Ansible, writing many playbooks to manage web applications, environment configuration files, users, mount points and packages.
- Experience in writing Ansible playbooks and Puppet manifests to provision Apache web servers, Tomcat servers, Nginx, Apache Spark and other applications.
- Expert in Chef/Puppet as configuration management tools to automate repetitive tasks, quickly deploy critical applications and manage changes.
- Experienced in maintaining and analyzing log archives using monitoring tools like Nagios, Splunk, CloudWatch, ELK Stack, New Relic and AppDynamics.
- Experience working with the cluster management and orchestration features embedded in Docker Engine (built using SwarmKit); created custom Docker container images, tagged and pushed images, and used Docker consoles to manage the application life cycle.
- Extensively used Docker/Kubernetes for containerization and virtualization to ship, run and deploy applications securely, accelerating build/release engineering.
- Excellent knowledge of Linux including CentOS, Red Hat, Ubuntu and Debian; configured and administered Red Hat virtual machines in a VMware environment.
- Working knowledge of databases like MySQL, RDS, DynamoDB and MongoDB; experienced in administration, support, performance tuning, migration and maintenance of servers.
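The AWS Lambda work with Python scripts mentioned above can be illustrated with a minimal handler sketch. This is not code from the resume's projects; the event shape follows the standard S3 event notification format, and the bucket/key names are hypothetical.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler sketch: reads an S3 put-event payload
    and returns the bucket/key pairs it would process."""
    processed = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        processed.append({
            "bucket": s3.get("bucket", {}).get("name"),
            "key": s3.get("object", {}).get("key"),
        })
    return {"statusCode": 200, "body": json.dumps(processed)}

# Local invocation with a sample event (the context argument is unused here)
sample_event = {"Records": [{"s3": {"bucket": {"name": "demo-bucket"},
                                    "object": {"key": "logs/app.log"}}}]}
print(lambda_handler(sample_event, None))
```

Invoking the function locally with a sample event like this is a common way to test handler logic before wiring up an IAM execution role and deploying.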
TECHNICAL SKILLS:
Operating Systems: Linux (RedHat, Ubuntu, CentOS), Windows, MAC
Build/Automation Tools: Jenkins, Maven, Ant, Bamboo, Team city, Build Forge, Gradle, TFS
Configuration Management Tools: Ansible, Chef, Puppet, Salt Stack
Cloud Technologies: AWS, Open stack, Azure
Scripting Languages: Shell, Bash, Perl, Python, Groovy, .Net, PowerShell, Terraform
Database Systems: MySQL, IBM DB2, DynamoDB, MongoDB, Cassandra, Hadoop
Web/App Server: Apache, IIS, HIS, Tomcat, WebSphere Application Server, WebLogic, Jboss
Version Control Tools: Git, Subversion, Bitbucket, CVS
Web Technologies: Servlets, JDBC, JSP, XML, HTML, YAML, Swagger Tool.
Virtualization Tools: VMware, PowerVM, VirtualBox, vCenter, vSphere
Monitoring Tools: Nagios, Cloud Watch, Splunk, ELK, App Dynamics, Datadog
PROFESSIONAL EXPERIENCE:
Confidential, Dallas, TX
AWS Devops engineer
Responsibilities:
- Part of the Streaming Data Platform team; worked on the SDP-V4 project and contributed to obtaining the PAR approval.
- Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation using Jenkins and Bogie, along with Python and shell scripts to automate routine jobs. Worked with the architects on the SDLC process as part of the post-development environments.
- Built a CI/CD pipeline and maintained the working environment from scratch.
- Experience automating Hadoop installation and configuration and maintaining the cluster using tools like Puppet.
- Assisted with the design of core scripts to automate Splunk maintenance and alerting tasks; supported Splunk on UNIX.
- Authored several scripts leveraging VMware’s PowerCLI and Windows PowerShell to aid with capacity planning and monitoring of the virtualization infrastructure.
- Worked on the Bogie pipeline, the internal CI/CD tool; onboarded various microservices using Bogie, performed microservice deployments and troubleshot any errors.
- Used OpenShift, a virtualized PaaS provider, to automate the provisioning of commodity computing resources for cost and performance efficiency.
- Managed the OpenShift cluster, including scaling the AWS app nodes up and down.
- Gained strong exposure to Ansible automation for replacing different OpenShift components such as etcd, master, app, infra and Gluster nodes.
- Automated build and deployment process for Microservices, re-engineering setup for better user experience, and leading up to building a continuous integration system for all our products.
- Coordinated the resources by working closely with Project Managers for the release and carried deployments and builds on various environments using continuous integration tool.
- Used AWS services including EC2, VPC, IAM, S3, AWS Resource Access Manager, ELB, ASG, CloudWatch, CloudTrail, SNS, Elasticsearch and ECS.
- Managed AWS EC2 instances utilizing Auto Scaling, Elastic Load Balancing and Glacier for our environments; managed, maintained and deployed to the Dev, QA and Prod environments.
- Deployed Elasticsearch to support environment logging requirements; created different variations of Kibana dashboards running different instances of Elasticsearch, Logstash and Kibana; responsible for planning index, shard and TTL strategies in Elasticsearch and troubleshooting Elasticsearch errors.
- Created Elasticsearch queries and Python scripts to analyze data from different microservices, ran the data through Logstash, passed it through Elasticsearch and visualized it in Kibana depending on the kind of logs.
- Managed Kubernetes charts using Helm. Created reproducible builds of the Kubernetes applications, managed Kubernetes manifest files and Managed releases of Helm packages.
- Created a SQL Server database using RDS and generated the schema for the existing tables in S3 using AWS Glue; performed data extraction, aggregation and consolidation of Adobe data within AWS Glue using PySpark.
- Configured CloudWatch and Datadog to monitor real-time granular metrics of all AWS services and configured individual dashboards for each resource.
- Increased pre-production server visibility by producing Datadog metrics; enabled Datadog APM and JVM metrics in different microservices and created Datadog dashboards to visualize microservice metrics.
- Created an index in the ELK stack to receive application logs via Fluentd; restricted access to ELK using security groups and IAM policies; created monitors for Datadog and CloudWatch using Terraform; integrated Datadog with Slack and PagerDuty.
- Integrated Maven with GIT to manage and deploy project related tags. Worked on Maven to create artifacts from source code and deploy them in Nexus central repository for internal deployments. Branching and merging code lines in the GIT and resolved all the conflicts raised during the merges.
- Created queries for Kibana, Grafana and New Relic; created InfluxDB data sources; created ECS clusters in AWS.
- Performing Load test and writing various scripts for performing failure testing, resiliency testing, load testing, etc.
- Defined and Implemented Software Configuration Management Standards based on Agile/Scrum methodologies, in line with the organization.
- Worked with a complex environment on RedHat Linux while ensuring that the systems adhere to organizational standards and policies.
- Researched and recommended open source tools, practices, and methodologies that enhance our day to day productivity
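The kind of Elasticsearch query scripting described in this role can be sketched as a query-body builder. This is an illustrative example, not project code: the index field names (`service`, `level`, `@timestamp`) are assumptions about a typical Logstash mapping.

```python
import json

def error_log_query(service, minutes=15):
    """Build an Elasticsearch query DSL body that filters one
    microservice's ERROR-level log entries from the last N minutes,
    newest first."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"service.keyword": service}},
                    {"term": {"level.keyword": "ERROR"}},
                    {"range": {"@timestamp": {"gte": f"now-{minutes}m"}}},
                ]
            }
        },
        "sort": [{"@timestamp": {"order": "desc"}}],
    }

# The dict serializes to the JSON an Elasticsearch _search request expects
print(json.dumps(error_log_query("payments", 30), indent=2))
```

Building the body as a plain dict keeps the query testable without a live cluster; the same dict can be posted to `_search` via any HTTP client.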
Environment: AWS, Jenkins, Bogie, Datadog, CloudWatch, Terraform, Kafka, ELK, EKS, EMR, EC2, S3, IAM, VPC, Security Groups, Snowflake, Python, Maven, Linux, Kubernetes, JIRA, Kanban, Elasticsearch, Logstash, Splunk, AWS Redshift, ECS, InfluxDB.
Confidential, Overland park, Kansas
Platform Engineer
Responsibilities:
- Set up and built AWS infrastructure resources (VPC, EC2, S3, IAM, EBS, DynamoDB, security groups, Auto Scaling, EMR and RDS) in CloudFormation templates, plus Amazon ECR.
- Focused on security using AWS GuardDuty, the CIS Benchmarks on AWS, and Dome9, a cloud infrastructure security tool.
- Set up Active Directory in the AWS cloud to manage users, groups and computers and to easily join Amazon EC2 instances to our domain.
- Created S3 bucket policies based on requirements using Terraform, restricting access to the buckets.
- Created and maintained EMR clusters for developers using Terraform; installed applications like Hive, Spark, Tez, Hadoop, YARN, Ganglia and Hue, and troubleshot the cluster whenever developers faced issues.
- Set up the ELK stack to aggregate logs from all systems and applications, analyze those logs, and create visualizations for application and infrastructure monitoring, faster troubleshooting and security analytics.
- Built and Implemented collaborative development environment using Bitbucket and Integrated it with Jenkins. Set up Jenkins master and added the necessary plugins and adding more slaves to support scalability and Agility.
- Automated configuration management and deployments using Ansible playbooks with YAML for resource declaration; created roles and updated playbooks to provision servers with Ansible.
- Installed, configured and managed a centralized Ansible server; created playbooks to support various middleware application servers and configured Ansible Tower as a configuration management tool to automate repetitive tasks.
- Created and maintained various DevOps related tools for the team such as provisioning scripts, deployment Tools and staged virtual environments using Terraform.
- Responsible for ensuring Continuous Delivery/Continuous Integration across all environments from POC to Post Production and Production using Jenkins.
- Implemented several Continuous Delivery pipelines for different products using Jenkins and Bamboo; set up build pipelines in Jenkins using plugins such as the Maven plugin, EC2 plugin, Terraform, JDK and Twistlock.
- Wrote Python scripts implementing Lambda functions; created APIs as a front door to access data or functionality from backend services running on EC2 and code running on Lambda or any web application.
- Define and deploy monitoring, metrics, and logging systems on AWS
- Implement systems that are highly available, scalable, and self-healing on the AWS platform
- Design, manage, and maintain tools to automate operational processes
- Managed AWS EC2 instances utilizing Auto Scaling, Elastic Load Balancing and Glacier for our environments.
- Wrote scripts and indexing strategy for a migration to Amazon Redshift from SQL Server and MySQL databases and migrated on premise database structure to Amazon Redshift data warehouse.
- Defined schemas for table and column mapping from S3 data to Redshift and worked on indexing and data distribution strategies optimized for sub-second query response.
- Hands-on experience with cloud automation, containers and PaaS (Cloud Foundry) using Terraform.
- Played a role in migrating existing AWS infrastructure to serverless architecture (AWS Lambda, Kinesis) deployed via Terraform or AWS CloudFormation.
- Converted existing Terraform modules that had version conflicts to utilize CloudFormation during Terraform deployments, enabling more control and missing capabilities.
- Implemented and maintained monitoring and alerting of production and corporate servers/storage using AWS CloudWatch.
- Worked on the migration from VMware to AWS and used Terraform to automate the infrastructure in AWS by creating EC2, S3, RDS, VPC and Route 53.
- Involved in designing and deploying multiple applications using AWS cloud infrastructure focusing on high availability, fault tolerance and auto-scaling of the instances.
- Designed DevOps workflows for multiple applications by orchestrating the Test, Build, Release and Deploy phases through various CI/CD pipelines using Git, Jenkins, Docker, Ansible and CloudFormation tools.
- Installed and configured Jenkins and created parameterized jobs to kick off builds for different environments. Managed the team's source repository through Bitbucket and continuous integration system using Jenkins.
- Branching and merging code lines in the GIT and resolved all the conflicts raised during the merges.
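The S3 bucket-policy restriction work in this role can be sketched as a small policy-document builder, the kind of JSON one might render through Terraform. This is an illustrative sketch: the bucket name and role ARN are hypothetical, and the deny-all-except-role pattern is one common way to lock down a bucket.

```python
import json

def restrict_bucket_policy(bucket, allowed_role_arn):
    """Build an S3 bucket policy document that denies all access except
    for one IAM role, using the aws:PrincipalArn condition key."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyAllExceptRole",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}",
                         f"arn:aws:s3:::{bucket}/*"],
            "Condition": {
                "StringNotEquals": {"aws:PrincipalArn": allowed_role_arn}
            },
        }],
    }

policy = restrict_bucket_policy("team-data", "arn:aws:iam::123456789012:role/etl")
print(json.dumps(policy, indent=2))
```

Generating the document in code (or via Terraform's policy-document constructs) keeps the restriction reviewable and version-controlled rather than hand-edited in the console.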
Environment: AWS, Jenkins, Terraform, ELK, EKS, EMR, Anaconda, EC2, S3, IAM, VPC, WSO2, Snowflake, Apache, Python, Maven, Linux, Kubernetes, JIRA, Kanban, Elasticsearch, Logstash, Splunk, AWS Redshift.
Confidential
DevOps Engineer
Responsibilities:
- Involved in defining, documenting, negotiating and maintaining Product/Application Release Roadmap.
- Developed builds using Maven as the build tool and used CI tools to kick off builds and promote them from one environment to the next.
- Participated in the release cycle of the product, which involved development, QA and production environments.
- Hands-on experience compiling builds using pom.xml (Maven) and build.xml (Ant).
- Integrated with Jenkins for continuous integration, delivery and build management.
- Set up Jenkins to integrate the Java project and maintained Jenkins for continuous integration and deployment.
- Implemented monitoring solutions with Elasticsearch and Logstash.
- Worked on Tomcat Web server for hosting web apps.
- Worked closely with developers to pinpoint and provide early warnings of common build failures.
- Used GIT as Version Control System for two applications. Managed development streams and Integration streams.
- Setup various Jenkins jobs for build and test automation and created Branches, Labels and performed Merges in GIT to access the repositories and used in coordinating with CI tools.
- Integrated MAVEN with GIT to manage and deploy project related tags.
- Performed software upgrades to customer instances running JBoss and Tomcat using aforementioned deployment process.
- Worked on RPM and YUM package installations, patch and other server management
- Created new file systems; monitored disk usage; checked and backed up log files; administered and monitored disk-based file systems.
- Implemented a Continuous Delivery framework using Jenkins, Chef, Maven and Nexus.
- Automated middleware management of different environments using Chef in the AWS cloud.
- Configured the chef-repo and set up multiple Chef workstations.
- Configured multipath, adding SAN and creating physical volumes, volume groups and logical volumes.
- Experience configuring and deploying Java and J2EE applications into application servers (IBM WebSphere, JBoss and Apache Tomcat).
- Used Fisheye to extract information from the repository and Crucible for code review.
- Worked with Docker for convenient environment setup for development and testing.
- Installed a private registry for local upload and download of Docker images, in addition to pulling from Docker Hub.
- Gained experience with Kubernetes by managing various mid-scale containers and load balancers and facilitating tasks assigned to the Kubernetes division.
- Worked on Perl/Python/Bash for updating the application tools with automation Scripts.
- Assisted end-to-end release process from the planning of release content through to actual release deployment to production.
- Install and configure Splunk to monitor application and server logs.
- Deployed Java/J2EE applications on to the Apache Tomcat server and configured it to host the websites.
- Experienced with the AWS Cloud platform and its features, including EC2, VPC, EBS, AMI, SNS, RDS, CloudWatch, CloudTrail, CloudFormation, AWS Config, Auto Scaling, CloudFront, IAM and S3, gaining knowledge of the cloud-side environment on this project.
Environment: Git, GitHub, Bitbucket, AWS, SVN, Jira, Puppet, Ansible, Maven, Ant, Docker, Confluence, UNIX/Linux, Shell/Bash, Perl, Jenkins, Kubernetes.
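The build-failure early-warning work described in this role can be sketched as a log-scanning script. The failure signatures below are illustrative examples of common Maven/JUnit patterns, not the actual checks used on the project.

```python
import re

# Hypothetical signatures for common Maven build failures
FAILURE_PATTERNS = {
    "compile_error": re.compile(r"\[ERROR\].*COMPILATION ERROR"),
    "test_failure": re.compile(r"Tests run: \d+, Failures: [1-9]"),
    "oom": re.compile(r"java\.lang\.OutOfMemoryError"),
}

def classify_build_log(lines):
    """Scan build-log lines for known failure signatures and return the
    sorted list of matching categories (empty list for a clean build)."""
    hits = set()
    for line in lines:
        for name, pattern in FAILURE_PATTERNS.items():
            if pattern.search(line):
                hits.add(name)
    return sorted(hits)

log = [
    "[INFO] Building demo-app 1.0",
    "[ERROR] Something went wrong: COMPILATION ERROR",
    "Tests run: 42, Failures: 3, Errors: 0",
]
print(classify_build_log(log))  # ['compile_error', 'test_failure']
```

A script like this can run as a Jenkins post-build step so recurring failure classes are flagged to developers before they dig through the full console output.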
Confidential
Linux System Administrator
Responsibilities:
- Patching of RHEL5 and Solaris 8, 9, 10 servers for EMC Powerpath Upgrade for VMAX migration.
- Configuration of LVM (Logical Volume Manager) to manage volume group, logical and physical partitions and importing new physical volumes
- Documented the standard procedure for installation and deployment of VMAX Migration and logical volume manager.
- Installation, configuration, support and security implementation for the following services: DHCP, SSH, SCP.
- Configuration and administration of NFS and Samba in Linux and Solaris.
- Maintained and monitored all of the company's servers' operating system and application patch levels, disk space, memory usage and user activities on a day-to-day basis.
- User administration on Sun Solaris and RHEL systems, HP-UX machines, management & archiving.
- Installed HP OpenView, a monitoring tool, on more than 300 servers.
- Attended calls related to customer queries and complaints, offered solutions to them.
- Worked with monitoring tools such as Nagios and HP Openview.
- Creation of VMs, cloning and migrations of the VMs on VMware vSphere 4.0.
- Worked with DBA team for database performance issues, network related issue on Linux / Unix Servers and with vendors for hardware related issues.
- Expanded file systems on the fly using Solaris Volume Manager (SVM) on Solaris boxes.
- Managed and upgraded UNIX server services such as BIND DNS.
- Configuration and administration of Web (Apache), DHCP and FTP Servers in Linux and Solaris servers.
- Supported the backup environments running VERITAS Net Backup 6.5.
- Responsible for setting cron jobs on the servers.
- Decommissioned old servers and kept track of decommissioned and new servers using an inventory list.
- Handling problems or requirements as per the ticket (Request Tracker) created.
- Participated in an on-call rotation to provide 24x7 technical support.
- Configuration and troubleshooting - LAN and TCP/IP issues.
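The day-to-day disk-space monitoring described above can be sketched as a small script of the kind typically scheduled from cron. This is an illustrative sketch; the mount points and the 80% threshold are assumptions.

```python
import shutil

def disk_usage_report(paths, threshold_pct=80):
    """Report filesystem usage for the given mount points and flag any
    filesystem whose usage meets or exceeds the threshold percentage."""
    report = []
    for path in paths:
        total, used, _free = shutil.disk_usage(path)
        pct = round(used * 100 / total, 1)
        report.append({"path": path, "used_pct": pct,
                       "alert": pct >= threshold_pct})
    return report

# Check the root filesystem; a cron job would pass the real mount list
for entry in disk_usage_report(["/"]):
    print(entry)
```

In a cron setup, entries with `"alert": True` would be mailed to the admin group or pushed to the monitoring tool instead of printed.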
Environment: Red Hat Enterprise Linux 4.x/5.x, Sun Solaris 8/9/10, VERITAS Cluster Server, VERITAS Volume Manager, Oracle 11g, HP-UX, IBM AIX, HP ProLiant DL385/DL585, WebLogic, Oracle RAC/ASM, MS Windows 2003 Server.