
Lead DevOps Engineer Resume

SUMMARY

  • 8+ years of experience as a Systems Engineer and DevOps Engineer.
  • Exposed to all aspects of the software development life cycle (SDLC), including analysis, planning, development, testing, implementation, and post-production analysis of projects.
  • Expertise in maintaining applications across environments such as DEV, SIT, UAT, and PROD.
  • Involved in performance testing of applications in lower environments.
  • Experience as a Build and Release Engineer automating, building, deploying, managing, and releasing code from one environment to another.
  • Expertise with UNIX and Windows environments, including shell and Python scripting.
  • Strong knowledge of Subversion (SVN), experience with source control tools such as Perforce and Git, and working knowledge of ClearCase.
  • Experience in and demonstrated understanding of source control management concepts such as branching, merging, labeling/tagging, and integration.
  • Experience in administration of web servers such as Apache HTTP Server and Apache Tomcat.
  • Involved in the functional usage and deployment of applications on WebLogic, WebSphere, and Apache Tomcat servers.
  • Worked on DevOps, continuous integration, continuous delivery, and continuous deployment.
  • Configured and monitored distributed, multi-platform servers using Chef. Excellent at defining the Chef server and workstation to manage and configure nodes. Developed Chef cookbooks to manage system configuration.
  • Integrated Docker container-based test infrastructure into the Jenkins CI test flow and set up a build environment integrated with Git and Jira to trigger builds using slave machines.
  • Extensive experience with the Amazon Web Services (AWS) cloud platform and its features: EC2, VPC, SNS, EBS, CloudWatch, CloudTrail, CloudFormation, AWS Config, load balancing, Lambda, S3, IAM, and Security Groups.
  • Ability to identify and gather requirements to define a solution to be built and operated on AWS.
  • Designed highly available, cost-effective, and fault-tolerant systems using multiple EC2 instances, Auto Scaling, Elastic Load Balancer (ELB), AMIs, and Glacier for QA and UAT environments, as well as infrastructure servers for Git and Chef.
  • Installed and set up web servers (Apache and Tomcat) and a DB server (MySQL).
  • Experience in managing multiple CI tools such as Bamboo and Hudson/Jenkins for automated builds and end-to-end deployments.
  • Cloud platforms: Azure (API Management Services, Data Factories, App Services, Data Lake Store, SQL Databases, and Virtual Machines).
  • Experienced with deployment, maintenance, and troubleshooting of applications on Microsoft Azure cloud infrastructure.
  • Experience working in an Agile/Scrum environment with daily stand-up meetings.
  • Experience in handling highly scalable JAVA / J2EE applications and their performance monitoring.
  • Extensive experience in running web scale services on Amazon Web Services (AWS).
  • Experience in branching, merging, tagging, and maintaining versions across environments using SCM tools such as Subversion (SVN) and Git (GitHub, GitLab).
  • Installed, configured, and maintained Jenkins for continuous integration (CI) and end-to-end automation of all builds and deployments.
  • Set up continuous integration for major releases in Jenkins. Created pipelines in Jenkins by integrating the Git and Maven plugins. Created new build jobs in the Jenkins admin console and configured global environment variables.
  • Extensive experience in developing and maintaining build and deployment scripts for test, staging, and production environments using ANT, Maven, and shell scripts.
  • Expertise in Repository Management tools Artifactory, Nexus.
  • Expertise in monitoring and management tools like Splunk, Nagios and AppDynamics.
  • Experience in managing and tracking defect status using JIRA, and in planning and resolving issues per SLA.
  • Excellent automation experience in working with configuration management tools like Chef, Puppet.
  • Expertise in scripting for automation, and monitoring using Shell, Python scripts.
  • Extensively worked on Jenkins, Hudson for continuous integration and for End to End automation for all build and deployments.
  • Experienced in debugging, optimizing, and performance tuning of Oracle BI (Siebel Analytics) dashboards/reports to improve performance at the database level.
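The branching, merging, and tagging workflow mentioned above can be sketched against a scratch repository; the feature branch and tag names below are hypothetical:

```shell
#!/usr/bin/env bash
# Illustrative branch/merge/tag workflow in a throwaway repository.
set -e

repo=$(mktemp -d)                          # scratch repository
cd "$repo"
git init -q
git config user.email ci@example.com       # placeholder identity for commits
git config user.name  ci
default=$(git symbolic-ref --short HEAD)   # 'master' or 'main' depending on Git version

git commit -q --allow-empty -m "initial commit"
git checkout -q -b feature/login                               # branch
git commit -q --allow-empty -m "add login"
git checkout -q "$default"
git merge -q --no-ff -m "merge feature/login" feature/login    # merge
git tag -a v1.0 -m "release 1.0"                               # tag
```

The `--no-ff` flag forces a merge commit so the feature branch remains visible in history, which matches the branching/merging/tagging concepts listed above.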

TECHNICAL SKILLS

Operating Systems: Linux (Red Hat 4.x, 5.x, 6.x), UNIX, MS WINDOWS, AIX, Ubuntu, Macintosh

Tools: ANT, MAVEN, JENKINS, DOCKER, CHEF

Languages: Java, Python, XML

Scripting Languages: SHELL, PYTHON

Database: SQL Server 2000/2005/2008/2012, MS Access, Oracle 8i/9i/10g, MySQL, T-SQL, PL/SQL

Networking and Configurations: Active Directory, Group policy configurations, DNS, WINS, DHCP, WSUS, IIS (SharePoint, Streaming Media), VLANs, NAT, Access lists

Version Control Tools: BitBucket, GIT

VMware: VMware …, VMware Workstation Pro v12, VMware Fusion, VMware Player, VMware ESXi 5.0/5.1/5.5/6.0, vCenter Server, vCloud Suite

PROFESSIONAL EXPERIENCE

Confidential

Lead DevOps engineer

Responsibilities:

  • Working as a DevOps engineer responsible for everything related to clusters totaling 90 nodes, ranging from POC (proof-of-concept) to production clusters.
  • Provided regular user and application support for highly complex issues involving multiple components.
  • Responsible for cluster maintenance, monitoring, commissioning and decommissioning data nodes, troubleshooting, managing and reviewing data backups, and managing and reviewing log files.
  • Leveraged appropriate AWS services.
  • Designed, deployed, and maintained application servers on AWS infrastructure using services such as EC2, S3, Glacier, VPC, Lambda, Route 53, SQS, IAM, CodeDeploy, CloudFront, RDS, and CloudFormation.
  • Implemented various services in AWS, including VPC, Auto Scaling, S3, CloudWatch, and EC2.
  • Worked with different AWS EC2 instance types, created AWS AMIs, managed volumes, and configured security groups.
  • Worked with AWS S3, creating buckets and configuring them with logging, tagging, and versioning.
  • Used the AWS CLI to suspend an AWS Lambda function and to automate backups of ephemeral data stores to S3 buckets and EBS.
  • Led POC involving Confluence API call to populate Wiki with log data in AWS Glue.
  • Worked with CloudWatch to monitor environment instances for operational and performance metrics during load testing.
  • Created trigger points and alarms in CloudWatch based on thresholds and monitored logs via metric filters.
  • Worked on AWS Auto Scaling launch configurations, creating groups with reusable instance templates for automated provisioning on demand based on capacity requirements.
  • Worked on the AWS IAM service, creating users and groups and defining policies, roles, and identity providers.
  • Worked with Terraform key features such as infrastructure as code, execution plans, resource graphs, and change automation.
  • Performed all necessary day-to-day Subversion/GIT support for different projects.
  • Connected the continuous integration system to the Git version control repository and continually built check-ins from developers.
  • Built out server automation with continuous integration/continuous deployment tools such as Jenkins and Maven for deployment and build management.
  • Integrated Docker container-based test infrastructure into the Jenkins CI test flow and set up a build environment integrated with Git and Jira to trigger builds using slave machines.
  • Administered and executed Jenkins jobs to generate artifacts and deploy them to specific environments as required.
  • Developed build and deployment scripts and used ANT/Maven tools in Jenkins to promote builds from one environment to another.
  • Used shell scripting in Jenkins to automate deployment of artifacts into WebSphere.
  • Implemented new project build frameworks using Jenkins, CruiseControl, and Maven as build framework tools.
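A backup routine like the one described (automating backups of ephemeral data stores to S3 via the AWS CLI) might be sketched as follows; the bucket name and paths are hypothetical, and actually running `backup` requires the AWS CLI with valid credentials:

```shell
#!/usr/bin/env bash
# Sketch of backing up an ephemeral data store to S3.
# The bucket name and example path are assumptions, not real infrastructure.
set -euo pipefail

BUCKET="example-backup-bucket"    # hypothetical bucket name

# Build a timestamped S3 key: s3://<bucket>/<host>/<UTC date>/<file name>
s3_key() {
  local host="$1" file="$2"
  printf 's3://%s/%s/%s/%s' "$BUCKET" "$host" "$(date -u +%Y-%m-%d)" "$(basename "$file")"
}

# Copy one file to its dated location in the bucket.
backup() {
  local file="$1"
  aws s3 cp "$file" "$(s3_key "$(hostname)" "$file")" --only-show-errors
}

# Example (not executed here): backup /var/tmp/cache.db
```

Keying objects by host and UTC date keeps successive runs from overwriting each other, so a cron entry calling `backup` gives a simple daily history.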

Confidential

DevOps Engineer

Responsibilities:

  • Worked with RAML and JSON to design and follow layered API procedures to call and store data from various endpoints.
  • Developed technical design documents and implemented process flows in Business Works for the Web Services.
  • Guided the Enterprise Architect team on the approach of implementing retry process in BW.
  • Developed technical design documents for orchestration process flows and implemented process flows in Business works.
  • Extensive experience in upgrading and installing BW and EMS, and in configuring queues and topics.
  • Configured failover and load balancing for TIBCO applications.
  • Upgraded BW processes from BW 3.3 to 4.1 and EMS services from 5.x to 8.2.2, per CATE recommendations.
  • Participated in capacity planning of EAI Servers.
  • Participated in Infrastructure setup of TIBCO Servers.
  • Performed performance tuning on EMS servers. Involved in POC implementation of migration projects from on-premises to AWS.
  • Rebuilt servers by reverting to initial snapshots; responsible for taking server snapshots periodically and for configuring the ansible.cfg file to adjust the repo source.
  • Maintained Ansible playbooks using Ansible roles and Ansible Galaxy, utilizing a combination of different modules in Ansible playbooks with YAML scripting to configure files on remote servers.
  • Used Ansible to manage system configuration to facilitate interoperability between existing infrastructure and new infrastructure in alternate physical data centers.
  • Experienced in using Ansible to manage web applications, config files, databases, commands, users, mount points, and packages. Used ansible-galaxy to create roles that can be reused multiple times across the organization, calling these reusable roles via the requirements.yml file.
  • All deployments are carried out using Ansible Tower (AWX) by creating job templates, pushing code onto the target inventory, and executing workflow templates to patch the inventory.
  • Experienced in using the Artifactory repository manager for Maven builds.
  • Stood up and administered an on-premises Kafka platform.
  • Provided expertise in Kafka brokers, ZooKeeper, Kafka Connect, Schema Registry, KSQL, REST Proxy, and Kafka Control Center.
  • Ensured optimum performance, high availability, and stability of solutions.
  • Created topics, set up redundancy clusters, and deployed monitoring tools and alerts, with good knowledge of best practices.
  • Provided administration and operations of the Kafka platform, including provisioning, access lists, Kerberos, and SSL configurations.
  • Used automation tools for provisioning, including OpenShift, Docker, Chef, Ansible, Jenkins, BB, and RLM.
  • Involved in topics creation based on application team requirements.
  • Used Ansible and Ansible Tower (AWX) as configuration management tools to deploy applications to multiple servers at once.
  • Involved in writing various custom Ansible playbooks for deployment, orchestration and developed Ansible Playbooks to simplify and automate day-to-day server administration tasks.
  • Responsible for rebuilding servers using the VMware vSphere web client for fresh installation of layered products in different environments.
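Topic creation of the kind described above is typically done with the `kafka-topics.sh` tool that ships with Apache Kafka; the broker address, topic name, and sizing below are illustrative assumptions, not values from this engagement:

```shell
#!/usr/bin/env bash
# Illustrative Kafka topic creation (broker, topic name, and sizing are hypothetical).
set -euo pipefail

BOOTSTRAP="broker1.example.com:9092"
TOPIC="orders.events"

# Convert a retention period in days to milliseconds for Kafka's retention.ms setting.
retention_ms() { echo $(( $1 * 24 * 60 * 60 * 1000 )); }

create_topic() {
  kafka-topics.sh --bootstrap-server "$BOOTSTRAP" \
    --create --topic "$TOPIC" \
    --partitions 6 \
    --replication-factor 3 \
    --config "retention.ms=$(retention_ms 7)"
}

# Example (needs a reachable broker): create_topic
```

A replication factor of 3 is a common redundancy choice for production clusters, matching the redundancy-cluster setup mentioned in the bullets above.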

System Engineer

Confidential

Responsibilities:

  • Experience in setting up the Chef repo, Chef workstations, and Chef nodes. Involved in Chef infrastructure maintenance, including backups and security fixes on the Chef server.
  • Extensively involved in infrastructure as code, execution plans, resource graphs, and change automation using Terraform. Managed AWS infrastructure as code using Terraform.
  • Expertise in writing new plugins to support new functionality in Terraform.
  • Deployed application updates using Jenkins. Installed, configured, and managed Jenkins.
  • Remotely triggered the client's SIT environment builds through Jenkins.
  • Deployed and configured Git repositories with branching, forks, tagging, and notifications.
  • Experienced and proficient in deploying and administering GitHub.
  • Deploy builds to production and work with the teams to identify and troubleshoot any issues.
  • Worked on MongoDB database concepts such as locking, transactions, indexes, Sharding, replication, schema design.
  • Consulted with the operations team on deploying, migrating data, monitoring, analyzing, and tuning MongoDB applications.
  • Viewed selected issues through the SonarQube web interface.
  • Developed a fully functional login page for the company's user-facing website, with complete UI and validations.
  • Installed, configured, and utilized AppDynamics (an application performance management tool) across the whole JBoss environment (prod and non-prod).
  • Worked on various Unix/Linux clusters such as Oracle RAC, SAP Serviceguard, S2S and local Serviceguard CFS, and Linux VCS HA cluster S2S.
  • Worked on tickets raised during the build process.
  • As a team member, involved in quarterly Unix/IT infrastructure maintenance.
  • Performed on-call rotation duties as scheduled.
  • Created users/groups and managed permissions for the security environment.
  • Worked on DNS and LDAP groups and troubleshot related issues.
  • Worked on logical volume management and mounting file systems.
  • Able to work with different tools such as ILM, APS, SRPA, Virtual SM, and others to complete server builds successfully.
  • Worked closely with SAN team and allocated storage to the server and shared with its cluster nodes.
  • Automated routine jobs by using existing Bash and Korn shell Scripts.
  • Performed system Firmware and ILO updates.
  • Maintaining the inventory of virtual and physical hosts.
  • Aggregated servers by application and maintained server specs in Aperture for CMDB activities.
  • Experience in using OPAM for root access logins.
  • Updated change templates, request templates, and SLAs for DC activities.
  • Maintained P2V and V2V migrations at the DC level.
  • Generated dashboard reports for capacity monitoring.
  • Monitored decommission activities and the list of dependencies in the decommission process, ensuring deliverables were met for the upcoming PO.
  • Managed network auditing and provided new IPs for further assignments.
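Remote build triggering of the kind described above (kicking off SIT builds through Jenkins) is commonly done against Jenkins' HTTP API with `curl`; the host, job name, parameter, and credentials here are placeholders:

```shell
#!/usr/bin/env bash
# Sketch of remotely triggering a parameterized Jenkins job.
# URL, job name, TARGET_ENV parameter, and credentials are all assumptions.
set -euo pipefail

JENKINS_URL="https://jenkins.example.com"

# Build the buildWithParameters endpoint for a given job name.
build_url() {
  printf '%s/job/%s/buildWithParameters' "$JENKINS_URL" "$1"
}

trigger_build() {
  local job="$1" env="$2"
  curl -fsS -X POST "$(build_url "$job")" \
    --user "svc-user:api-token" \
    --data "TARGET_ENV=${env}"
}

# Example (needs a reachable Jenkins server): trigger_build client-app SIT
```

Using an API token with `--user` rather than a password is the usual practice for scripted triggers, since tokens can be revoked per service account.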

Linux/Unix/VMware Systems Administrator

Confidential

Responsibilities:

  • Built and maintained over 500 physical and virtual servers.
  • Built Red Hat Enterprise Linux 6.3 and 5.8 and ESXi 5.0 servers through an automated build process.
  • Implemented post-installation procedures for UNIX operating systems through internal Shell/Perl scripting.
  • Worked on various Unix/Linux clusters such as Oracle RAC, SAP Serviceguard, S2S and local Serviceguard CFS, and Linux VCS HA cluster S2S.
  • Worked on tickets raised during the build process.
  • As a team member, involved in quarterly Unix/IT infrastructure maintenance.
  • Performed on-call rotation duties as scheduled.
  • Created users/groups and managed permissions for the security environment.
  • Worked on DNS and LDAP groups and troubleshot related issues.
  • Worked on logical volume management and mounting file systems.
  • Able to work with different tools such as ILM, APS, SRPA, VirtualSM, and others to complete server builds successfully.
  • Worked closely with SAN team and allocated storage to the server and shared with its cluster nodes.
  • Automated routine jobs by using existing Bash and Korn shell Scripts.
