Sr DevOps Engineer Resume
Houston, Texas
SUMMARY
- Around 10 years of hands-on experience establishing standard DevOps practices, with in-depth knowledge of DevOps management methodologies and production deployment configurations.
- Expertise in setting up distributed Elasticsearch clusters for real-time search and analysis of data.
- Experience in implementing CI/CD pipelines using GitLab and Jenkins.
- Experience configuring and maintaining Kafka clusters for various use cases, including implementing authentication and authorization on the clusters.
- Extensively worked on Jenkins, Ansible, and Docker, configuring and maintaining them for continuous integration (CI) and end-to-end automation of all builds and deployments.
- Led the cloud infrastructure maintenance effort using a combination of GitLab and Terraform to automate CI/CD pipelines in AWS.
- Implemented the Istio service mesh on Kubernetes clusters.
- Experience building Kubernetes platforms, deploying and maintaining applications on Kubernetes, and setting up access policies (RBAC) and external services on Kubernetes.
- Experience installing and configuring components such as MapReduce, Hive, Pig, HBase, Sqoop, Hue, Oozie, Spark, Kafka, YARN, ZooKeeper, and NiFi in the Apache Hadoop ecosystem using the Hortonworks distributions (HDP & HDF).
- Hands-on experience performing administration, configuration management, monitoring, debugging, NameNode recovery, HDFS high availability, Hadoop shell commands, and performance tuning on Hadoop clusters.
- Built industry-standard data lakes on on-premises and cloud platforms.
- Worked with a strong team of architects and backend developers to gather functional and non-functional requirements.
- Involved in source control management with GitHub and GitLab Enterprise repositories; regular activities included configuring user access levels, monitoring logs, resolving merge conflicts, and managing the master repository.
- Deployed and monitored scalable infrastructure on Amazon Web Services (AWS) and handled configuration management.
- Configured and deployed Java applications on Amazon Web Services (AWS) for a multitude of applications utilizing the AWS stack, including CloudFormation.
- Well versed in working under deadline pressure, with superior analytical, time-management, collaboration, communication, and problem-solving skills.
- Experience in Linux system administration and the build engineering & release management process, including end-to-end code configuration, building binaries and deployments, and the entire life-cycle model for enterprise applications.
- Skilled at Software Development Life Cycles and Agile Programming Methodologies.
- Experience with SaaS (Software as a Service), PaaS (Platform as a Service), and IaaS (Infrastructure as a Service) solutions.
- Extensively worked on Logstash to collect, parse and ship logs from various sources to destinations and Kibana to perform advanced data analysis and visualize data in a variety of charts, tables, and maps (ELK Stack).
- Extensive experience with AWS (storage, application services, deployment, and management); managed servers on AWS instances using Ansible and deployed to production instances using Change Sets, the Force.com Migration Tool, and Eclipse.
- Hands-on experience in AWS provisioning and good experience with AWS services such as EC2, S3, Glacier, ELB, RDS, Redshift, IAM, Route 53, VPC, Auto Scaling, CloudFront, CloudWatch, CloudTrail, CloudFormation, and Security Groups.
- Skilled in monitoring servers using Nagios, Datadog, CloudWatch, and the ELK Stack (Elasticsearch).
- Created tagging standards for proper identification and ownership of EC2 instances and other AWS resources (a brief example appears at the end of this summary).
- Experience in managing infrastructure resources in cloud architecture with close coordination with various functional teams.
- Monitor, build and deploy software releases and provide support for production deployments.
- Automated application deployment in the cloud with Docker using the Elastic Container Service (ECS) scheduler.
- Created and managed a Docker deployment pipeline for custom application images in the cloud using Jenkins.
- Expertise in querying RDBMSs such as Oracle, MySQL, and SQL Server using PL/SQL for data integrity.
- Involved in the functional usage and deployment of applications on Apache Tomcat and WebLogic Server.
- Configured and Deployed application packages on to the Apache Tomcat server. Coordinated with software development teams and QA teams.
- Worked with the IAM service, creating new IAM users and groups and defining roles, policies, and identity providers.
- Experience working with the release and deployment of large-scale Java/J2EE web applications.
- Experience in building & deploying Java/SOA applications and troubleshooting any build & deploy failures.
- Able to develop and execute XML, Ruby, shell, Perl, PowerShell, batch, and Bash scripts.
- Worked on the implementation team to build and engineer servers on Ubuntu and RHEL, provisioning virtual servers on VMware ESX hosts using vCloud.
- Good knowledge of developing advanced web-based applications using JavaScript, web services, and databases such as Oracle and SQL Server.
- Installed, configured, managed Logging and Monitoring tools such as Splunk, ELK, AppDynamics, Syslog-NG.
- Experienced in using the JFrog Artifactory repository manager for builds.
- Experience with various Web servers like Nginx and Apache Tomcat.
- Experience with networking protocols (IP, TCP, UDP).
- Ability to quickly understand, learn and implement the new system design, data models in a professional work environment.
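Illustrative only: a minimal boto3 sketch of how an EC2 tagging standard like the one mentioned above can be enforced. The region, tag keys, and values are assumptions, not actual project settings.

```python
# Minimal sketch: enforce a tagging standard on EC2 instances with boto3.
# Region, tag keys, and values below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Example standard: every instance carries Owner, Team, and Environment tags.
standard_tags = [
    {"Key": "Owner", "Value": "devops-team"},
    {"Key": "Team", "Value": "platform"},
    {"Key": "Environment", "Value": "prod"},
]

# Find running instances and tag any that are missing the Owner tag.
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        existing_keys = {t["Key"] for t in instance.get("Tags", [])}
        if "Owner" not in existing_keys:
            ec2.create_tags(Resources=[instance["InstanceId"]], Tags=standard_tags)
```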
TECHNICAL SKILLS
Build Tools: Maven, Ant, Eclipse
Bug Tracking: Jira
Web/Application Servers: Apache, Apache Tomcat, Nginx, WebLogic
SCM/Version Control Tools: Git, Bitbucket
Containers & Orchestration: Docker, Kubernetes, Istio
Continuous Integration Tools: Jenkins, GitLab
Continuous Deployment Tools: Puppet, Ansible, Helm, Chef, Terraform
Cloud services: Amazon Web Services (AWS)
Scripting Languages: Shell scripting, Python
Operating Systems: Unix, Linux (Ubuntu, Debian, Red Hat (RHEL), CentOS), and Windows
Programming Languages: Python, C, Java
Databases: RDBMS, MySQL, MS SQL, Oracle, Amazon DynamoDB & MongoDB, Postgres, AWS RDS
Web Services: SOAP, REST
Big Data Technologies: HDFS, Hive, YARN, MapReduce, EMR, Glue, Pig, Hue, Oozie, Elasticsearch, Spark, Kafka, Ambari
Firewalls: Checkpoint, ISA 2004/2006, Palo Alto 3000/5000
Network Protocols: TCP/IP, UDP, DNS, DHCP, ARP, Telnet, SSH, IPSec, SSL.
PROFESSIONAL EXPERIENCE
Confidential, Houston, Texas
Sr DevOps Engineer
Responsibilities:
- Working on building a next-generation platform using Kubernetes.
- Created build jobs and deployments in Jenkins and implemented a CD pipeline with Docker, Jenkins, GitHub, Ansible, and AWS AMIs.
- Managed EC2 instances using launch configurations, Auto Scaling, and Elastic Load Balancing; automated infrastructure provisioning using CloudFormation and Ansible templates; and created CloudWatch alarms for monitoring.
- Working with developers to troubleshoot AWS development environment issues, Kubernetes cluster or access issues, and production application releases.
- Working on creating a centralized logging system using Logstash, Beats, Elasticsearch, and Kibana.
- Configured Prometheus, various Prometheus exporters, and Thanos for monitoring.
- Implemented Prometheus alerts using Alertmanager.
- Writing Terraform to provision AWS infrastructure components and services such as IAM roles, S3, EC2 instances, ELBs, Auto Scaling groups, ECS Docker containers and/or EKS Kubernetes pods, Kubernetes resources, API Gateway, AWS Lambda, RDS, etc.
- Automated AWS infrastructure and CI build/deployment pipelines for Java-based applications using Python/Bash scripting and GitLab to deploy services on Kubernetes (a brief sketch follows this list).
- Working on writing Helm charts for various deployments.
- Working on setting up and configuring Kafka cluster for different use cases across the organization.
- Configured the Istio mesh on the Kubernetes cluster and implemented security protocols such as mutual TLS for internal service-to-service communication.
- Developed ServiceEntry, Gateway, VirtualService, and DestinationRule resources on the Istio mesh so that external services can connect to the Kubernetes cluster via the mesh.
- Working on configuring GraphQL and the Envoy proxy.
- Used Kubernetes to manage containerized applications via its nodes, ConfigMaps, and selector-based Services, and deployed application containers as Pods.
- Prototyped a CI/CD system with GitLab, utilizing Kubernetes and Docker to build, test, and deploy.
- Automated various infrastructure activities, such as continuous deployment and application server setup, using Ansible playbooks.
- Design, build and operational management of Elasticsearch cluster infrastructure for logging.
- Migrating Splunk logging to the ELK (Elasticsearch, Logstash, and Kibana) stack and setting up monitoring and alerting dashboards to capture metrics and detect issues or anomalies in real time.
- Writing Ansible playbooks to install different resources on Linux instances.
- Working on setting up infrastructure for the data lake.
- Providing operational support for various databases such as Redshift, RDS, etc.
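Below is a minimal sketch of the Python-driven Kubernetes deployment step referenced above. The deployment name, namespace, container name, and image registry are hypothetical, not the project's actual values.

```python
# Minimal sketch: a CD step that bumps a Kubernetes Deployment's image tag,
# e.g. invoked from a GitLab job. Names and registry are placeholders.
from kubernetes import client, config

def deploy(image: str, name: str = "orders-api", namespace: str = "apps") -> None:
    """Patch the Deployment so Kubernetes rolls out the new image."""
    config.load_kube_config()  # or config.load_incluster_config() when run in a pod
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                # Assumes the container is named the same as the Deployment.
                "spec": {"containers": [{"name": name, "image": image}]}
            }
        }
    }
    apps.patch_namespaced_deployment(name=name, namespace=namespace, body=patch)

if __name__ == "__main__":
    # In GitLab CI the tag would typically come from CI_COMMIT_SHORT_SHA.
    deploy("registry.example.com/orders-api:abc1234")
```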
Confidential
Sr DevOps Engineer
Responsibilities:
- Worked on building a next-generation platform using Kubernetes.
- Built various Helm charts and packaged them for microservices deployment.
- Wrote Dockerfiles to build JBoss Wildfly base image.
- Involved in building pipelines using Jenkins and Ansible all the way to production.
- Fixed various Jenkins build failures to support faster releases.
- Troubleshot both system-level and application-level issues using New Relic and Datadog.
- Upgraded Kubernetes clusters to prevent security vulnerabilities.
- Worked on writing Helm charts for various deployments.
- Involved in building highly available, scalable and resilient infrastructure in AWS. Migration of multi-tier web applications from on-premises datacenter to AWS.
- Worked with build servers on AWS: importing volumes, launching EC2 instances, and creating security groups, load balancers, network interfaces, transit gateways, and route tables.
- Built AWS infrastructure such as VPC, EC2, S3, Route 53, EBS, security groups, Auto Scaling, and RDS using Terraform.
- Worked on replicating real-time data between multiple data centers using NiFi.
- Administered source code repository using Git and GitHub Enterprise, supported Git branching, tagging, and merging.
- Worked on JFrog Artifactory for deploying artifacts and used JIRA as the ticketing tool.
- Developed Ansible playbooks for automating the infrastructure and deployment process.
- Monitored logs with Splunk and user requests using Grafana. Experienced in designing, developing, and engineering automated application builds and deployments.
- Used Docker Hub for creating Docker images and handling multiple images, primarily for middleware installations and domain configuration.
- Developed multi-stage Docker builds to install different kinds of applications with their dependencies.
- Created snapshots, AMIs, and Elastic IPs and managed EBS volumes (a brief automation sketch follows this list).
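The snapshot/AMI automation mentioned above is sketched below with boto3; the volume and instance IDs, region, and naming scheme are placeholders rather than real resources.

```python
# Minimal sketch: automate an EBS snapshot and an AMI bake with boto3.
# All resource IDs below are hypothetical.
import datetime
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
stamp = datetime.datetime.utcnow().strftime("%Y-%m-%d-%H%M")

# Snapshot an EBS volume with a descriptive label.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",   # hypothetical volume ID
    Description=f"nightly-backup-{stamp}",
)

# Bake an AMI from a running instance without rebooting it.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
    Name=f"app-server-{stamp}",
    NoReboot=True,
)

print(snapshot["SnapshotId"], image["ImageId"])
```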
Confidential, Baltimore, Maryland
DevOps Engineer
Responsibilities:
- Designed and implemented Elasticsearch Clusters across data centers for real-time distributed search and analysis of data.
- Implemented multiple Logstash collectors to collect, parse and ship logs from various App servers to Elasticsearch.
- Implemented Kibana dashboards to perform advanced data analysis and visualize data in a variety of charts, tables, and maps.
- Installed and configured Kafka clusters and developed a Kafka producer and consumer project in Java to consume logs from different applications and send them to a specific Elasticsearch index according to the subscribed topics (the pattern is sketched after this list).
- Created various Jenkins pipelines to fully automate CI build and deployment infrastructure and processes for multiple projects.
- Responsible for app deployment, configuration management and orchestration using Ansible.
- Developed Ansible Playbooks to install ELK stack on multiple nodes.
- Developed Bash scripts to automate the process of ELK stack installation on RHEL/CentOS-based systems.
- Experience with AWS S3: creating buckets and configuring buckets with permissions, logging, versioning, and tagging.
- Involved in building highly available, scalable and resilient infrastructure in AWS. Migration of multi-tier web applications from on-premises datacenter to AWS.
- Setting up IAM Users/Roles/Groups/Policies and automated DB backups to S3 using AWS CLI and SQL dumps.
- Implemented Elastic Load Balancers (ELB's) and Auto Scaling groups in AWS on Production EC2 Instances to build Fault-Tolerant and Highly Available applications.
- Involved in configuration, cluster capacity planning, hardware planning, installation, troubleshooting, and performance tuning of the Hadoop cluster.
- Managed containers using Docker by writing Dockerfiles and set up automated builds on Docker Hub.
- Implemented AppDynamics monitoring to discover application topology and interdependencies and trace key business transactions based on production application behavior.
- Used Splunk Enterprise to collect and analyze high volumes of machine-generated data and developed Splunk dashboards.
- Used Nginx as a reverse proxy to access the Kibana web interface.
- Worked on resolving problems; troubleshot and resolved software/hardware issues in production.
- Configured MongoDB for various use cases.
- Worked on building cloud-based disaster recovery using various AWS services such as EC2, IAM, S3, VPC, Auto Scaling, CloudFormation, API Gateway, Lambda, CloudWatch, RDS, EMR, and Redshift.
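The log-consumer pattern described above (Kafka topics routed to specific Elasticsearch indices) was implemented in Java; the sketch below renders the same idea in Python with kafka-python and a recent elasticsearch client. Broker addresses, topic, group, and index names are assumptions.

```python
# Minimal sketch of the Kafka -> Elasticsearch log-routing pattern.
# Hosts, topic, consumer group, and index naming are placeholders.
import json
from kafka import KafkaConsumer
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

consumer = KafkaConsumer(
    "app-logs",                                   # subscribed topic (assumed)
    bootstrap_servers=["kafka1:9092", "kafka2:9092"],
    group_id="log-indexer",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Route each log event to an index derived from its topic.
for message in consumer:
    es.index(index=f"logs-{message.topic}", document=message.value)
```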