
Sr. Elk Stack/ Devops Engineer Resume


Phoenix, AZ

SUMMARY

  • Overall 10+ years of experience as a Sr. DevOps/ELK Engineer in Build/Release Management, SCM, Environment Management, and Build/Release Engineering, automating, building, releasing, and configuring changes from one environment to another.
  • Good experience building Elasticsearch high-availability clusters and building Logstash environments.
  • Worked on DevOps/Agile operations processes and tools (environment, service, and unit-test automation, build and release automation, code review, incident and change management).
  • Extensively worked on Hudson, Jenkins, TeamCity, and TeamForge for continuous integration and end-to-end automation of all builds and deployments.
  • Proficient in SQLite, MySQL, and PostgreSQL databases with Python. Experienced in developing web services with the Python programming language.
  • Development and configuration experience with software provisioning tools like Chef, Puppet, Docker, and Ansible.
  • Integrated delivery (CI and CD processes) using Jenkins, Bamboo, Nexus, Yum, and Puppet. Experienced with version control systems, administering Subversion and Perforce.
  • Experienced in Linux/Unix system administration, system builds, server builds, installations, migrations, upgrades, patches, and troubleshooting on RHEL 4.x/5.x, plus Subversion (SVN), ClearCase, Git, Perforce, and TFS.
  • Integrated Jenkins with various DevOps tools such as Nexus, SonarQube, Puppet, CA Nolio, HP CDA, HP ALM, and HP QTP.
  • Experience building Kibana and Grafana dashboards (used for real-time performance and network traffic patterns).
  • Experience in server infrastructure development on AWS Cloud with Chef, with extensive usage of Virtual Private Cloud (VPC), CloudFormation, CloudFront, EC2, RDS, S3, Route 53, SNS, SQS, and CloudTrail.
  • Good experience with code quality analysis tools like SonarQube for testing the quality of developed code.
  • Hands-on knowledge of software containerization platforms like Docker and container orchestration tools like Docker Swarm, plus knowledge of Kubernetes.
  • Worked on configuration of New Relic for application performance monitoring and infrastructure monitoring.
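The Elasticsearch cluster monitoring described above can be sketched in a few lines of Python. This is an illustrative example only, not code from any specific engagement; the endpoint URL is a hypothetical placeholder, and the action strings are assumptions about a typical runbook.

```python
import json
import urllib.request

ES_URL = "http://localhost:9200"  # hypothetical endpoint; adjust per environment


def classify_health(health: dict) -> str:
    """Map a _cluster/health response to a suggested operator action."""
    status = health.get("status", "unknown")
    if status == "green":
        return "ok"
    if status == "yellow":
        return "check unassigned replica shards"
    if status == "red":
        return "page on-call: primary shards unassigned"
    return "unknown status"


def fetch_health(url: str = ES_URL) -> dict:
    """Call the standard _cluster/health endpoint and parse the JSON body."""
    with urllib.request.urlopen(f"{url}/_cluster/health") as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(classify_health(fetch_health()))
```

In practice a check like this would run from a scheduler or an alerting tool rather than by hand.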

TECHNICAL SKILLS

Operating Systems: Linux CentOS, Ubuntu, UNIX, Windows, AIX

Version Control Tools: SVN, GIT, TFS, CVS and IBM Rational Clear Case

Web/Application Servers: WebLogic, Apache Tomcat, WebSphere and JBoss

Automation Tools: Jenkins/Hudson, BuildForge and Bamboo

Build Tools: Maven, Ant, MSBuild and Docker

Configuration Tools: Chef, Puppet, Ansible, Docker, Kubernetes, Openshift

Databases: Oracle, MySQL, PostgreSQL

Bug Tracking Tools: JIRA, Remedy, ServiceNow and IBM Clear Quest

Scripting: Shell, Ruby, Python and JavaScript

Virtualization Tools: Docker, Oracle VM VirtualBox and VMware

Monitoring Tools: Nagios, CloudWatch, Splunk

Cloud Platform: AWS EC2, VPC, EBS, CloudFormation, AWS Config, S3, Terraform

Languages: C/C++, Java, Python and PL/SQL

PROFESSIONAL EXPERIENCE

Sr. ELK Stack/ DevOps Engineer

Confidential

Responsibilities:

  • Work closely with the product management and development teams to rapidly translate the understanding of customer data and requirements into products and solutions.
  • Analyze structured and unstructured data points to design data architecture solutions for scalability, high availability, fault tolerance, and elasticity.
  • Design, develop, and implement web-based Java applications that are often high-volume and low-latency, required for mission-critical systems. Follow State Street standard life-cycle methodologies, create design documents, and perform program coding and integration testing.
  • Improved the performance of the Kafka cluster by fine-tuning the Kafka configurations at the producer, consumer, and broker levels.
  • Using Jenkins, built different kinds of projects such as Freestyle, Maven-based, Pipeline, and Multibranch Pipeline.
  • Implemented disaster recovery by creating Elasticsearch clusters in two data centers and configuring Logstash to send the same data from Kafka to both clusters.
  • Installed and deployed Kafka, Zookeeper, ELK, and Grafana using Ansible playbooks.
  • Wrote and maintained wiki documents for the planning, installation, and deployment of the ELK Stack and Kafka.
  • Wrote custom plugins to enhance/customize open-source code as needed.
  • Wrote and executed various MySQL database queries from Python using the Python-MySQL connector and the MySQLdb package.
  • Wrote automation Salt scripts for managing, expanding, and replacing nodes in large clusters.
  • Synced Elasticsearch data between the data centers using Kafka and Logstash.
  • Managed the Kafka cluster and integrated Kafka with Elasticsearch.
  • Proficient with container systems like Docker and container orchestration platforms like EC2 Container Service and Kubernetes; worked with Terraform.
  • Separated Java URL data from the Elasticsearch cluster and transferred it to another cluster using Logstash.
  • Snapshotted Elasticsearch index data and archived it in the repository every 12 hours.
  • Strong expertise in implementation of Kinesis, Elasticsearch, Logstash, and Kibana plugins.
  • Experience creating Kubernetes nodes and pods and spreading them across all availability zones for high availability.
  • Used Kubernetes pod anti-affinity to spread data nodes across AZs.
  • Collected Kubernetes metrics using Prometheus and sent them to Elasticsearch.
  • Analyzed log data, filtered the required columns via Logstash configuration, and sent the results to Elasticsearch.
  • Involved in temporarily enabling cluster logs and search slow logs via REST API calls, collecting those logs, and analyzing them to troubleshoot Elasticsearch functional and performance issues.
  • Involved in updating cluster settings using both API calls and configuration-file changes.
  • Prepared an Elasticsearch operations guide and trained the operations team to perform day-to-day operations such as backup, restore, re-indexing, and troubleshooting frequently occurring problems.
  • Worked on cluster maintenance, data migration from one server to another, and ELK stack upgrades.
  • Merged data into shards to avoid data loss and support load balancing.
  • Using Kibana, illustrated the data with various dashboard visualizations such as metrics, graphs, pie charts, and aggregation tables.
  • Used X-Pack (security/monitoring) tools that provide system metrics, service state, process state, and file-system usage.
  • Strong expertise in object-oriented design and analysis, programming styles, and design patterns.
  • Developed distributed complex-event-processing pipelines with simplicity.
  • Experience with code repositories and continuous integration.
  • Experience with Bitbucket, Confluence, and Jira for a modern delivery system.
  • Experienced in developing models with contextual data and proficient in machine learning algorithms.
  • Developed automation for the setup and maintenance of the AMA platform.
  • Built, maintained, and scaled infrastructure for Production, QA, and Dev environments.
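The pod anti-affinity approach mentioned above can be illustrated with a manifest fragment. This is a sketch, not a manifest from the role: the StatefulSet name, labels, replica count, and image tag are assumed placeholders; the standard `topology.kubernetes.io/zone` key is what forces the scheduler to place each data node in a different AZ.

```yaml
# Illustrative sketch: spread Elasticsearch data nodes across availability zones.
# Names, labels, and the image tag are placeholders, not taken from the resume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-data
spec:
  serviceName: es-data
  replicas: 3
  selector:
    matchLabels:
      app: es-data
  template:
    metadata:
      labels:
        app: es-data
    spec:
      affinity:
        podAntiAffinity:
          # Hard requirement: no two es-data pods may share a zone.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: es-data
              topologyKey: topology.kubernetes.io/zone
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
```

With `required...` anti-affinity, a fourth replica in a three-zone cluster would stay Pending; `preferredDuringSchedulingIgnoredDuringExecution` is the softer alternative.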

Environment: Kafka, Ansible 2.7, Jenkins, Elasticsearch ECE, Logstash, Filebeat, Metricbeat, JavaScript.

Devops / ELK Stack Engineer

Confidential, Phoenix, AZ

Responsibilities:

  • Provided design recommendations and thought leadership to improve review processes and resolve technical problems.
  • Worked with product managers to architect the next generation of Workday search.
  • Benchmarked Elasticsearch 5.6.4 for the required scenarios.
  • Worked on configuring the EFK stack and used it for analyzing the logs from different applications.
  • Involved in creating the cluster and implemented cluster backups with the help of Curator by taking snapshots.
  • Spun up the environment with the help of Chef cookbooks and modified them per our requirements.
  • Created users for application teams to view their logs using curl statements, granting them read-only access.
  • Used the Curator API on Elasticsearch for data backup and restore.
  • Configured X-Pack for the security and monitoring of our cluster and created watches to check the health and availability of the nodes.
  • Used AWS Elastic Beanstalk for deploying and scaling web applications and services developed with Java, PHP, Node.js, Python, Ruby, and Docker on familiar servers such as Apache and IIS.
  • Worked on cloud automation using AWS CloudFormation templates.
  • Hands-on experience with EC2, VPC, subnets, routing tables, internet gateways, IAM, Route 53, VPC peering, S3, ELB, RDS, security groups, CloudWatch, and SNS on AWS.
  • Implemented a Continuous Delivery framework using Jenkins, Chef, and Maven in a Linux environment.
  • Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation using Jenkins along with shell scripts to automate routine jobs.
  • Managed AWS EC2 instances utilizing Auto Scaling, Elastic Load Balancing, and Glacier for our QA and UAT environments, as well as infrastructure servers for Git and Chef.
  • Installed and deployed Kafka, Zookeeper, ELK, and Grafana using Ansible playbooks.
  • Skilled in monitoring servers using Nagios, Datadog, CloudWatch, and the EFK stack (Elasticsearch, Fluentd, Kibana).
  • Managed servers on OpenStack / cloud / Amazon Web Services (AWS) platform instances using Chef configuration management.
  • Worked with Chef attributes, templates, recipes, and files for managing the configurations across various nodes using Ruby.
  • Experience with container-based deployments using Docker, working with Docker images, Docker Hub, and Docker registries.
  • Managed the configurations of all the servers using Chef, configured Jenkins builds for continuous integration and delivery, and automated web server content deployments via shell scripts.
  • Managed containers using Docker by writing Dockerfiles, set up automated builds on Docker Hub, and installed and configured Kubernetes.
  • Implemented a production-ready, load-balanced, highly available, and fault-tolerant Kubernetes infrastructure.
  • Worked with Jira for creating projects and created mail handlers and notification schemes for Jira.
  • Worked with ServiceNow for creating/reporting tickets, changing dashboards, and using the service catalog.
  • Using Jira/Confluence, maintained our product release wikis on Confluence, administered Jira, and managed raised tickets.
  • Involved in an Agile/Scrum environment and daily standup meetings.
  • Managed regular changes in priority due to customer priority changes.
  • Configured Logstash input, filter, and output plugins (database, JMS, and log-file sources, with Elasticsearch as output), converting search indexes to Elastic with a large amount of data.
  • Elasticsearch experience, including capacity planning and cluster maintenance. Continuously looked for ways to improve and set a very high bar in terms of quality.
  • Wrote custom plugins to enhance/customize open-source code as needed, and wrote automation Salt scripts for managing, expanding, and replacing nodes in large clusters.
  • Synced Elasticsearch data between the data centers using Kafka and Logstash; managed the Kafka cluster and integrated Kafka with Elasticsearch.
  • Installed and configured Curator to delete indices older than 90 days.
  • Responsible for designing and deploying new ELK clusters (Elasticsearch, Logstash, Kibana, Beats, Kafka, Zookeeper, etc.).
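The 90-day retention job described above (typically handled by Curator) boils down to selecting date-suffixed indices older than a cutoff. A minimal sketch of that selection logic, assuming the common `logstash-YYYY.MM.dd` daily naming convention (the prefix and retention window are illustrative parameters, not values from the role):

```python
from datetime import date, timedelta


def indices_to_delete(index_names, today, retention_days=90, prefix="logstash-"):
    """Return daily indices (prefix + YYYY.MM.dd) older than the retention window."""
    cutoff = today - timedelta(days=retention_days)
    stale = []
    for name in index_names:
        if not name.startswith(prefix):
            continue
        try:
            # Parse the YYYY.MM.dd suffix into a date.
            idx_date = date(*map(int, name[len(prefix):].split(".")))
        except (TypeError, ValueError):
            continue  # skip names without a well-formed date suffix
        if idx_date < cutoff:
            stale.append(name)
    return stale
```

Curator itself drives the same decision from its YAML action files; a helper like this is useful for dry-run auditing before letting any tool issue deletes.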

Environment: ELK stack, ServiceNow, Kafka, Beats, Python, Java, Git, SVN, Maven, Ansible, Puppet, Docker, Jenkins, Apache Web Server, Jira, Windows, PowerShell, AWS, Chef, MySQL, Kubernetes, VMware / OpenStack servers.

ELK/DevOps Admin

Confidential

Responsibilities:

  • Responsible for Elasticsearch mapping creation and document indexing, including deploying, managing, and tuning/optimizing large-scale Elasticsearch clusters.
  • Managed individual project priorities and deliverables and communicated progress to internal teams and executives.
  • Worked with partner divisions of NBCUniversal to drive new ELK platform capabilities and roadmaps.
  • Worked with Ansible to automate the process of deploying/testing new builds in each environment, setting up new nodes, and configuring machines/servers.
  • Developed complex applications that scale to high-volume production quality.
  • Provided design recommendations and thought leadership to improve review processes and resolve technical problems.
  • Worked on development of configuration scripts for Dev and Production servers.
  • Designed, built, deployed, maintained, and enhanced the ELK platform.
  • Worked with product managers to architect the next generation of Workday searches.
  • Elasticsearch cluster and capacity planning and cluster maintenance. Continuously looked for ways to improve and set a very high bar in terms of quality.
  • Configured Logstash input, filter, and output plugins: database, JMS, and log-file sources, with Elasticsearch as output.
  • Built and maintained Docker container clusters managed by Kubernetes on GCP (Google Cloud Platform) with Linux, Bash, Git, and Docker. Utilized Kubernetes and Docker for the runtime environment of the CI/CD system to build, test, and deploy.
  • Proficient with container systems like Docker and container orchestration platforms like EC2 Container Service and Kubernetes; worked with Terraform.
  • Separated Java URL data from the Elasticsearch 6.5.0 cluster and transferred it to an Elasticsearch 7.9 cluster using Logstash.
  • Maintained the change-control and testing process for all modifications and deployments.
  • Used Kubernetes to orchestrate the deployment, scaling, and management of Docker containers.
  • Experience with container-based deployments using Docker, working with Docker images, Docker Hub, Docker registries, and Kubernetes.
  • Used Jenkins pipelines to drive all microservice builds out to the Docker registry, then deployed them to Kubernetes; created pods and managed them using Kubernetes.
  • Used Elasticsearch for powering not only search but also, via the ELK stack with Beats, end-to-end logging and monitoring of our systems.
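A cluster-to-cluster transfer like the 6.5-to-7.9 migration above is commonly expressed as a Logstash pipeline with an `elasticsearch` input and output. The fragment below is a sketch only: the host names, index patterns, and the `java_url` field used to select documents are illustrative assumptions, not details from the role.

```
# Sketch: copy documents carrying a "java_url" field from an ES 6.5 cluster
# to an ES 7.9 cluster. All hosts, indices, and field names are placeholders.
input {
  elasticsearch {
    hosts => ["http://es65-cluster:9200"]
    index => "app-logs-*"
    query => '{ "query": { "exists": { "field": "java_url" } } }'
  }
}
filter {
  mutate { remove_field => ["@version"] }
}
output {
  elasticsearch {
    hosts => ["http://es79-cluster:9200"]
    index => "java-urls-%{+YYYY.MM.dd}"
  }
}
```

For large indices, Elasticsearch's remote reindex API is the usual alternative; Logstash earns its keep when documents need filtering or enrichment in flight, as here.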

Environment: ELK Stack, GCP, Logstash, Docker, Jenkins, Terraform, Ansible.

ELK Admin

Confidential

Responsibilities:

  • Responsible for Elasticsearch mapping creation and document indexing, including deploying, managing, and tuning/optimizing large-scale Elasticsearch clusters.
  • Developed Logstash pipeline configurations.
  • Wrote custom filter plugins in Ruby to enrich the pipeline data before ingesting it into ES.
  • Wrote infrastructure code using Terraform 0.12 and Ansible to build all three environments; wrote it for Sandbox and then promoted the same code to the other environments.
  • Wrote a Jinja2 template to generate the logstash.yml file dynamically with the Ansible template module.
  • Wrote Watcher functionality to implement monitoring alerts.
  • Implemented AWS solutions using EC2, S3, RDS, EBS, Elastic Load Balancer, and Auto Scaling groups; optimized volumes and EC2 instances.
  • Elasticsearch cluster and capacity planning and cluster maintenance. Continuously looked for ways to improve and set a very high bar in terms of quality.
  • Configured Logstash input, filter, and output plugins: database, JMS, and log-file sources, with Elasticsearch as output.
  • Worked with internal clients to move them from Splunk to this new observability platform.
  • Helped build the Logstash environment on AWS with Terraform and Ansible.
  • Wrote a pipeline configuration file that pulls and enriches data coming from Kafka before it gets into Elasticsearch.
  • Came up with ILM strategies.
  • Sank data into Solr through Morphlines and created Solr collections.
  • Came up with a Disaster Recovery (hence High Availability) solution and architecture, presented it to management, and implemented it in lower environments.
  • Wrote Python scripts to automate deployments such as ILM policies and RBAC policies.
  • Used Elasticsearch for powering not only search but also, via the ELK stack with Beats, end-to-end logging and monitoring of our systems.
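Automating ILM policy deployment as mentioned above usually amounts to building a policy body and PUTting it to the `_ilm/policy` endpoint. A minimal Python sketch, assuming an unauthenticated cluster at a placeholder URL (the rollover size and retention window are illustrative defaults, not values from the role):

```python
import json
import urllib.request


def build_ilm_policy(hot_rollover_gb=50, delete_after_days=90):
    """Build an ILM policy body: roll over hot indices by size, delete old ones."""
    return {
        "policy": {
            "phases": {
                "hot": {
                    "actions": {
                        "rollover": {"max_size": f"{hot_rollover_gb}gb"}
                    }
                },
                "delete": {
                    "min_age": f"{delete_after_days}d",
                    "actions": {"delete": {}},
                },
            }
        }
    }


def put_policy(name, policy, es_url="http://localhost:9200"):
    """PUT the policy to _ilm/policy/<name>. es_url is a placeholder; a real
    cluster would also need auth headers (e.g. an API key)."""
    req = urllib.request.Request(
        f"{es_url}/_ilm/policy/{name}",
        data=json.dumps(policy).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    return urllib.request.urlopen(req)
```

Keeping the policy builder pure makes the generated JSON easy to diff and unit-test before anything touches the cluster.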

Environment: Terraform 0.12, Ansible 2.7, Jenkins, Elasticsearch ECE, Logstash, Filebeat, Metricbeat, JavaScript, Ruby.

Linux & UNIX Admin

Confidential

Responsibilities:

  • Responsible for handling tickets raised by end users, including package installation, login issues, and access issues.
  • User management: adding, modifying, deleting, and grouping users.
  • Responsible for preventive maintenance of the servers on a monthly basis and configuration of RAID for the servers.
  • Resource management using disk quotas.
  • Documented issues daily in the resolution portal.
  • Responsible for change-management releases scheduled by service providers.
  • Generated weekly and monthly reports for the tickets worked on and sent the reports to management.
  • Managed systems operations with final accountability for smooth installation, networking, operation, and troubleshooting of hardware and software in a Linux environment.
  • Identified operational needs of various departments and developed customized software to enhance the system's productivity.
  • Ran a Linux Squid proxy server with access restrictions via ACLs and passwords.
  • Established/implemented firewall rules and validated the rules with vulnerability scanning tools.
  • Proactively detected computer security violations, collected evidence, and presented results to management.
  • Accomplished system/e-mail authentication using an LDAP enterprise database.
  • Implemented a database-enabled intranet website using Linux, Apache, and a MySQL database backend.
  • Installed CentOS using Preboot Execution Environment (PXE) boot and the Kickstart method on multiple servers.
  • Monitored system metrics and logs for any problems.
  • Ran cron jobs to back up data.
  • Applied operating system updates, patches, and configuration changes.
  • Maintained the MySQL server and authentication for required users of the databases.
  • Appropriately documented various administrative and technical issues.
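The cron-driven backups mentioned above can be sketched as a crontab fragment. The paths, schedule, and commands are hypothetical illustrations, not entries from the actual environment.

```shell
# Illustrative crontab entries (paths and schedule are hypothetical).
# Nightly MySQL dump at 01:30; weekly /etc archive on Sundays at 03:00.
# Note: % must be escaped as \% inside crontab command fields.
30 1 * * * mysqldump --all-databases | gzip > /backup/mysql-$(date +\%F).sql.gz
0 3 * * 0 tar czf /backup/etc-$(date +\%F).tar.gz /etc
```

A real setup would add log redirection and old-archive pruning so the backup volume does not fill.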

Environment: Linux/CentOS 4, 5, 6, Logical Volume Manager, VMware ESX 5.1/5.5, Apache and Tomcat web servers, Oracle 11g/12c, Oracle RAC 12c, HPSM, HPSA.
