Automation DevOps Engineer Resume
Greenwood Village, CO
SUMMARY
- 8+ years of experience working in multi-functional environments across various phases of the SDLC, focusing on Systems Administration, Software Configuration Management (SCM), Amazon Web Services (AWS), Google Cloud Platform (GCP), and other cloud platforms in a DevOps culture of Continuous Integration and Continuous Deployment.
- Experience supporting dozens of AWS implementations, including Amazon EC2 (IaaS) and all Amazon RDS (DBaaS) offerings: provisioning, implementation, migration, heterogeneous conversions, and ongoing administration and monitoring support for Microsoft SQL Server, Oracle, MySQL, MariaDB, PostgreSQL, and Amazon Aurora.
- Strong in AWS costing, provisioning, administration, monitoring, and troubleshooting. Experience in data migrations, ongoing data synchronization, backup and recovery, geographic data replication (Regions and Availability Zones), CloudWatch performance monitoring, system tuning, and disaster recovery configuration and execution.
- Experience designing cloud architectures for customers looking to migrate or develop new PaaS, IaaS, or hybrid solutions utilizing Amazon Web Services (AWS), Google Cloud, and Microsoft Azure.
- Designed, built, and deployed a multitude of applications utilizing a broad range of AWS services (including EC2, S3, Elastic Beanstalk, Elastic Load Balancing (Classic/Application), Auto Scaling, RDS, VPC, Route53, CloudWatch, and IAM), focusing on high availability, fault tolerance, and auto-scaling with CloudWatch monitoring.
- Worked with Amazon Kinesis to handle streaming data, set up DynamoDB to store the processed stream data, and configured Lambda to run the data-transformation code.
- Configured network and server monitoring using the ELK (Elasticsearch, Logstash, Kibana) stack, with Nagios for notifications; experienced in log monitoring and evaluating system logs with the ELK stack.
- Experienced in writing Terraform scripts from scratch for building Development, Staging, Production, and Disaster Recovery for several cloud infrastructures.
- Created functions and assigned roles in AWS Lambda to run Python scripts, and wrote AWS Lambda functions in Java to perform event-driven processing.
- Experience implementing Azure Application Insights and OMS for monitoring applications and servers, including solution development for OMS alerting and remediation.
- Improved application performance using Azure Search and Internet of Things (IoT) optimization, and implemented Azure Application Insights to store user activities and error logging.
- Experienced in working on big data problems based on open source technologies Kafka, Hadoop, HBase, OpenTSDB, Parquet, PostgreSQL.
- Experience using Tomcat, JBoss, WebLogic, and WebSphere application servers for deployment, and worked with multiple databases including MongoDB, Cassandra, MySQL, PostgreSQL, and Oracle.
- Knowledge in building and maintaining Docker container clusters managed by Kubernetes, using Linux, Bash, Git, and Docker on Google Cloud Platform (GCP).
- Deployed Zabbix to monitor and alert the health of Nova, Neutron, Keystone and other OpenStack services.
- Experience in Orchestrating Docker container clusters using Kubernetes.
- Worked on rolling updates using the Deployments feature in Kubernetes and implemented blue-green deployments to achieve zero downtime when deploying to Tomcat and Nginx; used Python and shell scripts to automate log rotation on web servers and other administration tasks.
- Implemented Bash-based automation to create Kubernetes clusters with kops and kubectl, including node autoscaling and pod scaling with ReplicaSets.
- Installed and configured Chef Server, workstations, and nodes via CLI tools, and wrote Dockerfiles to create new images based on working environments for testing purposes before deployment.
- Extensively worked on Vagrant- and Docker-based container deployments to create environments for dev teams and to containerize environment delivery for releases.
- Used Ansible and Ansible Tower as configuration management tools to automate repetitive tasks, quickly deploy critical applications, and proactively manage change; wrote Python code against the Ansible Python API to automate the cloud deployment process.
- Worked with Chef Enterprise Hosted as well as On-Premise, Installed Workstation, Bootstrapped Nodes using Knife, Wrote Recipes and Cookbooks and uploaded them to Chef-server.
- Managed Jenkins jobs written in Groovy using Jenkinsfiles and Maven scripts, used plugins for test builds, promoted artifacts to S3 and JFrog, and set up multi-pipeline jobs that build based on dependencies.
- Installed, configured, and administered the Jenkins CI tool on Linux machines, built Continuous Integration and Continuous Delivery environments, and used Nginx as a reverse proxy to secure Jenkins with OpenSSL.
- Deployed applications on PCF using cf push and UrbanCode Deploy; performed PCF backups for all environments and set up Jenkins Maven build automation with uploads to Pivotal Cloud Foundry (PCF).
- Ensured successful architecture and deployment of enterprise-grade PaaS solutions using Pivotal Cloud Foundry (PCF), as well as proper operation during initial application migration and new development setup.
- Wrote Maven, Ant, and Gradle scripts to automate the build process. Managed the Maven repository using Nexus and used it to share snapshots and releases of internal projects.
- Hands-on experience using monitoring tools such as Nagios, Splunk, CloudTrail, Stackdriver, Sumo Logic, Prometheus, and New Relic. Created alarms in CloudWatch to monitor server performance, CPU utilization, log files, disk usage, etc., and developed shell scripts (Bash) to automate day-to-day maintenance tasks.
- Experience redesigning the architecture of GitHub Enterprise on the cloud for disaster recovery using different snapshot and restore configurations.
- Experienced in installing, configuring, supporting and troubleshooting Unix/Linux Networking services and protocols like NIS, NIS+, LDAP, DNS, TCP/IP, NFS, DHCP, NAS, FTP, SSH and SAMBA.
- Installed, configured, and managed monitoring tools such as Splunk, Nagios, and Icinga for resource, network, and log-trace monitoring.
- Experienced in all phases of the Software Development Life Cycle (SDLC) with specific focus on the build and release of quality software in Waterfall, Agile and Scrum.
- Wrote shell scripts (Bash), Ruby, and PowerShell for setting up branching, merging, and automation processes across environments using SCM tools such as Git, Subversion, and TFS on Linux and Windows platforms.
- Experience in installation, configuration, administration, and support of RHEL 4, 5.x, 6.x; SUSE Linux Enterprise Server (SLES) 10.x, 11.x; Solaris 8, 9, 10, 11; and Windows NT, Server 2003, and 2008.
- Experienced installing, upgrading and configuring Red Hat Servers using Kickstart and Solaris Server using Jumpstart and customizing the Kickstart profiles and Jumpstart scripts to automate the installation of various servers.
- Day-to-day application support in production, technical documentation for critical production issues, and on-call pager support in a 24/7 environment.
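The Terraform work noted above can be illustrated with a minimal sketch. Terraform accepts JSON-syntax configuration (`*.tf.json`) as an equivalent to HCL, so a small Python helper can emit an `aws_instance` resource; the resource name and AMI ID here are hypothetical, not taken from any project above.

```python
import json

def ec2_instance_tf(name, ami, instance_type="t3.micro", tags=None):
    """Build a Terraform JSON-syntax document declaring one aws_instance."""
    return {
        "resource": {
            "aws_instance": {
                name: {
                    "ami": ami,
                    "instance_type": instance_type,
                    "tags": tags or {"Name": name},
                }
            }
        }
    }

if __name__ == "__main__":
    # Write the document to main.tf.json and `terraform plan` will pick it up.
    doc = ec2_instance_tf("web", ami="ami-0abcdef1234567890")
    print(json.dumps(doc, indent=2))
```

Generating JSON this way keeps environment definitions (dev, staging, production, DR) in one templated place instead of copy-pasted HCL.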
TECHNICAL SKILLS
Cloud Technologies: AWS, Google Cloud, Azure
Source control tools: SVN, Git, GitHub, Gitlab, Bitbucket, GitHub Enterprise
Configuration Management: Chef, Puppet, Ansible, Salt
Build Tools: Ant, Maven, Gradle
Continuous Integration tools: Jenkins, Bamboo, Team City, Bitbucket Pipelines, Gitlab CI
Monitoring/logging tools: Nagios, Splunk, ELK, Prometheus
Bug reporting tools: Jira
Operating Systems: Linux (Red Hat 4.x, 5.x, 6.x, 7.x), Windows, CentOS, UNIX, Sun Solaris, Ubuntu
Databases: PostgreSQL, MySQL, Oracle, Cassandra, Redis, MongoDB
Change Management: ServiceNow, JIRA
Virtualization: VMware ESX, ESXi, vSphere 4 and vSphere 5, Citrix
Scripting: Shell, Ruby, Perl, PowerShell and Python
Containerization: Docker, Kubernetes
PROFESSIONAL EXPERIENCE
Confidential, Greenwood Village, CO
Automation DevOps Engineer
Responsibilities:
- Created and managed the AWS environment using EC2, VPC, IAM, ELB, EBS, SNS, CloudWatch, S3, RDS, and Storage Gateway, including creating AMIs, snapshots, security groups, and subnets.
- Worked on a new web application release focused on AWS-based personalization and customer tracking, delivering a better experience than users had previously.
- Worked as a DevOps Automation Engineer on a team that worked closely with many proprietary and open-source tools such as AWS, JIRA, Tomcat, and Apache.
- Worked on database setup in the AWS environment after migration from the datacenter to the AWS cloud. Created Lucidchart diagrams of the data-flow architecture from users to the sources they access.
- Wrote Terraform and Ansible scripts to create and setup infrastructure for DB environment in cloud.
- Deployed builds from the data center servers to the AWS cloud servers by implementing Ansible playbooks.
- Used Terraform templates and Ansible Playbooks for direct deployments into the EC2 instances.
- Wrote Ansible Playbooks to deploy nearly sixty-five applications using Tomcat and Apache.
- Built infrastructure for Tomcat and Apache servers with the help of Terraform templates.
- Used WebLogic to connect Databases and Applications for the web applications to run successfully.
- Helped in migrating Oracle Databases to the AWS cloud by creating EC2 instances which acted as physical databases.
- Wrote shell and C-shell scripts to automate the process of migrating and creating environments in AWS; also used those scripts to run Terraform and provision the instances with Ansible playbooks.
- Used NFS as the migration tool to move the database from on-premises to AWS.
- Wrote the scripts to create additional Databases in the EC2 instances. Used SQL to write the spooling commands.
- Wrote a Puppet module to automate Oracle Database 12c Release 2 installation along with MySQL.
- Performed DBMS implementation, reduced the cost of database migration using RDS and EC2 provisioning, and converted database schemas for secured servers using the Schema Conversion Tool (SCT).
- Used Load Balancers to balance and reroute the traffic and CloudWatch to keep check on the health of the instances.
- To implement the migration process, copied the physical databases to NFS and extracted them to EC2 using Ansible, shell, and C-shell scripts.
- Documented POC for Terraform to build the infrastructure in the cloud.
- Used Amazon RDS Multi-AZ for automatic failover and high availability at the database tier for MySQL workloads. Created an Amazon RDS MySQL DB cluster and connected to the database through an RDS MySQL DB instance using the Amazon RDS Console.
- Used Amazon RDS Multi-AZ for automatic failover and high availability at the database tier for PostgreSQL workloads. Created an Amazon RDS PostgreSQL DB cluster and connected to the database through an RDS PostgreSQL DB instance using the Amazon RDS Console.
- Resolved issues with database migration to the AWS cloud by assessing databases for compatibility with the latest Data Migration Assistant (DMA), preparing fixes as Transact-SQL scripts, and optimizing data-transfer performance during migration.
- Used GitLab as the SCM tool to create and manage repositories shared among users; stored all project work in these repositories.
- Used Linux as the OS for all operations, applying a wide range of Linux commands across the applications involved in the migration process.
Tools: AWS, Terraform, Ansible, Tomcat, Apache, WebLogic, Oracle, NFS, GitLab, Jira, Linux, Bash, C-shell
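The Ansible provisioning described above relies on an inventory of the migrated hosts. A minimal sketch of Ansible's dynamic-inventory contract (an executable that prints JSON groups plus `_meta.hostvars` when called with `--list`) is shown below; all hostnames and variables are hypothetical.

```python
#!/usr/bin/env python3
"""Minimal Ansible dynamic inventory sketch (hostnames are hypothetical)."""
import json
import sys

def build_inventory():
    # Group the migrated DB and web hosts the way a playbook would target them.
    return {
        "db_servers": {
            "hosts": ["db01.example.com", "db02.example.com"],
            "vars": {"oracle_home": "/u01/app/oracle"},
        },
        "web_servers": {"hosts": ["web01.example.com"]},
        "_meta": {"hostvars": {"db01.example.com": {"db_role": "primary"}}},
    }

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory()))
    else:
        # Per-host variables are already served via _meta above.
        print(json.dumps({}))
```

Invoked as `ansible-playbook -i inventory.py site.yml`, Ansible calls the script with `--list` and targets the groups it returns.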
Confidential, Katy, TX
Sr. DevOps Engineer
Responsibilities:
- Built CloudFormation and Terraform templates using JSON and Python scripting for cloud infrastructure.
- Created an Amazon RDS MySQL DB cluster and connected to the database through an RDS MySQL DB instance using the Amazon RDS Console.
- Deployed builds in Data Center servers by implementing Chef recipes. Modified and re-used the Chef recipes for direct deployment into the EC2 instances.
- Implemented Micro-services using AWS platform build upon Spring Boot Services and created workflows on Jenkins for setting up automated pipelines for CI/CD with AWS.
- Handled the migration of on-premises applications to AWS and provisioned the resources to enable them, using a broad range of AWS tools and auto-scaling policies for scalability, elasticity, and availability.
- Built an AWS disaster recovery environment and AWS backups from scratch using Terraform; also designed, deployed, and maintained a full-stack Kubernetes environment running on the AWS cloud.
- Created cache memory on Azure to improve the performance of data transfer between AWS NoSQL stores and WCF services.
- Building and maintaining Docker container clusters managed by Kubernetes, Linux, Bash, GIT, Docker, on Azure. Utilized Kubernetes and Docker for the runtime environment of the CI/CD system to build, test, deploy.
- Worked on Docker registry, Machine, Hub and creating, attaching, networking of Docker containers, container orchestration using Kubernetes for clustering, load balancing, scaling and service discovery using selectors, nodes and pods.
- Worked on upgrades of all the CI/CD applications, including SonarQube, Nexus, Nexus Pro, Jenkins, CloudBees, SCM-Manager, and GitLab.
- Building Docker images including setting the entry point and volumes. Also ran Docker containers.
- Containerization of Web application using Docker and Kubernetes and Database maintenance.
- Installed, configured, and managed OpenShift HA clusters and deployed applications on OpenShift.
- Served as the team's point person on OpenShift: creating new projects and services for load balancing, adding them to routes for external access, troubleshooting pods through SSH and logs, and modifying BuildConfigs, templates, ImageStreams, etc.
- Worked as a member of Cloud Enablement team to onboard enterprise applications and services to OpenShift.
- Performed maintenance and troubleshooting of enterprise Red Hat OpenShift systems.
- Automated installation of the ELK agent (Filebeat) with Ansible playbooks. Developed a Kafka queueing system to collect log data without data loss and publish it to various sources.
- Designed, installed, and implemented VMware ESXi servers and vCenter Server 5.0, and set up VMware features such as vMotion, HA, and DRS.
- Documented POC for Terraform to spin up the VMware infrastructure.
- Created and deployed virtual machines from templates, and created snapshots and clones of virtual machines for future deployments using Terraform.
- Setup and Configuration of Puppet Configuration Management.
- Installed and configured an automated tool Puppet that included the installation and configuration of the Puppet master, agent nodes and an admin control workstation.
- Managed deployment automation using Puppet Roles, Profiles, Hiera and Custom Puppet modules.
- Deployed Puppet, Puppet Dashboard, Puppet DB for configuration management to existing infrastructure.
- Created and updated Puppet manifests and modules, files, and packages stored in the GIT repository.
- Involved in setting up Jira as defect tracking system and configured various workflows, customizations and plug-ins for the Jira bug/issue tracker.
- Automation of daily tasks using Shell and Ruby scripts.
- Supporting and working alongside Agile development teams to ensure they have all the facilities to get the job done.
- Resolved issues with database migration to the AWS cloud by assessing databases for compatibility with the latest Data Migration Assistant (DMA), preparing fixes as Transact-SQL scripts, and optimizing data-transfer performance during migration.
Environment: AWS, Terraform, OpenShift, Puppet, Kubernetes, Docker, Git, Oracle, Shell, Ruby, VSTS, TFS.
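Pod troubleshooting of the kind described in this section often starts from `kubectl get pods -o json`. A minimal sketch of filtering that output for unhealthy pods follows; the embedded sample mimics (in truncated form) the real `kubectl` JSON shape, and the pod names are hypothetical.

```python
import json

def unhealthy_pods(pods_json):
    """Return names of pods whose phase is not Running or Succeeded."""
    doc = json.loads(pods_json)
    return [
        item["metadata"]["name"]
        for item in doc.get("items", [])
        if item["status"]["phase"] not in ("Running", "Succeeded")
    ]

# Sample shaped like `kubectl get pods -o json` output (fields truncated).
SAMPLE = json.dumps({
    "items": [
        {"metadata": {"name": "app-1"}, "status": {"phase": "Running"}},
        {"metadata": {"name": "app-2"}, "status": {"phase": "Failed"}},
    ]
})

if __name__ == "__main__":
    # In practice the JSON would come from:
    #   kubectl get pods -o json
    print(unhealthy_pods(SAMPLE))
```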
Confidential, Chicago, IL
Cloud Engineer
Responsibilities:
- Designed a disaster recovery plan (DRP) for WK in which all applications and servers are migrated from region to region, protecting the organization in case of disaster.
- Used Amazon services like EC2, Route53, RDS, AMIs, Snapshots, Internet gateways, CloudFormation, Lambda, VPC-VPC peering, Jump servers, ELBs and Elastic Block Storage.
- Worked in Amazon AWS costing, provisioning, administration, monitoring and troubleshooting.
- Designed automated CloudFormation scripts for data migrations, ongoing data synchronization, backup and recovery, geographic data replication (Regions and Availability Zones), CloudWatch performance monitoring, system tuning, and disaster recovery configuration and execution.
- Used Lambda functions and CloudFormation templates to architect the disaster recovery plan for a multi-region environment, and used Route53 to divert internet traffic to another region in case of disaster.
- Supported dozens of AWS implementations, including Amazon EC2 (IaaS) and all Amazon RDS (DBaaS) offerings: provisioning, implementation, migration, heterogeneous conversions, and ongoing administration and monitoring support for Microsoft SQL Server, Oracle, MySQL, MariaDB, PostgreSQL, and Amazon Aurora.
- Design, implement and maintain all AWS infrastructure, enterprise class security, network and systems management applications within an AWS environment.
- Using DBMS implementation skills, reduced the cost of database migration with RDS and EC2 provisioning and converted database schemas for secured servers using the Schema Conversion Tool (SCT).
- Applied DBMS migration skills to automate DB product conversions, assist with test plans, and maintain ongoing data synchronization, which helped in DB Migration Service configuration and execution.
- Used Amazon RDS for DB instance class configuration/scaling, DB instance management, creating DB parameter groups, DB option groups, DB subnet groups and maintained RDS maintenance windows.
- Designed DRP with availability of the region/availability zone configurations, multi-AZ deployments and creating read replicas to MySQL, MariaDB, PostgreSQL.
- Created Access control/IAM policies, DB security groups, Amazon security groups, VPC security groups, RDS encryption configuration, DB auditing and SSL support.
- Wrote CloudFormation templates to automate server snapshots, DB backups, backup windows, and backup retention.
- Used Amazon services such as CloudWatch alarm administration, logs, and events to verify the DR plan was working correctly; used Amazon DB log files and RDS event management to keep track of the databases.
- Performed performance tuning and problem analysis/resolution to stay ahead of performance issues.
Tools: AWS Management Console, AWS Command Line Interface, Amazon CloudWatch, AWS Key Management Service, Amazon RDS Console, Amazon RDS API, Amazon Event Notification, Enhanced Monitoring Option, AWS Lambda
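The Route53-based failover described above hinges on repointing a DNS record at the standby region. A minimal sketch of the Lambda side is below: the testable part is the pure construction of the Route53 `ChangeBatch`, with the actual boto3 call indicated in a comment. The record name, load balancer DNS name, and hosted zone are hypothetical.

```python
def failover_change_batch(record_name, standby_lb_dns, ttl=60):
    """Build the Route53 ChangeBatch that repoints a record at the standby region."""
    return {
        "Comment": "DR failover: repoint traffic to standby region",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "CNAME",
                "TTL": ttl,
                "ResourceRecords": [{"Value": standby_lb_dns}],
            },
        }],
    }

def handler(event, context):
    batch = failover_change_batch(
        "app.example.com.",
        "standby-elb.us-west-2.elb.amazonaws.com",
    )
    # In a real Lambda, a boto3 Route53 client would apply the batch:
    #   boto3.client("route53").change_resource_record_sets(
    #       HostedZoneId="Z...", ChangeBatch=batch)
    return batch
```

In production this is usually paired with Route53 health checks and failover routing policies rather than a raw UPSERT, but the change-batch shape is the same.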
Confidential, Eden Prairie, MN
Sr. Cloud Automation DevOps Engineer
Responsibilities:
- Involved in designing and deploying a multitude of applications utilizing AWS services such as EC2, Route53, S3, RDS, SNS, SQS, DynamoDB, and ELK, focusing on auto-scaling, CloudFormation, high availability, and fault tolerance.
- Created, maintained and handled different operations like maintaining and troubleshooting EC2 instances, S3 buckets, VPC (Virtual Private Clouds) and ELB (Elastic Load Balancers) on AWS Cloud Resources.
- Designed and deployed several applications using the AWS stack (EC2, Route53, S3, RDS), focusing on high availability, auto-scaling, and CloudFormation.
- Managed the AWS VPC network for launched instances and configured security groups and Elastic IPs accordingly. Worked with CloudTrail, CloudPassage, Checkmarx, and Qualys scan tools for AWS security and scanning.
- Used AWS Elastic Beanstalk to deploy and scale web applications and services developed with Java, Node.js, and Python on familiar servers such as Apache, Nginx, and Tomcat.
- Triggered Lambda from DynamoDB, where the Lambda runs data-transformation code and loads the results into an Amazon Redshift data warehouse.
- Utilized CloudWatch to monitor resources such as EC2 CPU and memory, Amazon RDS DB services, DynamoDB tables, and EBS volumes; set alarms for notification or automated actions, and monitored logs for a better understanding and operation of the system.
- Installed Pivotal Cloud Foundry (PCF) on EC2 to manage the containers created by PCF. Used Docker to virtualize deployment containers and push code to the EC2 cloud using PCF.
- Successfully migrated on-premises applications to GCP using a tunneling method, employing IPsec tunnels for the cloud-to-cloud migration.
- Worked on GCP services such as Compute Engine, Cloud Load Balancing, Cloud Storage, Cloud SQL, Stackdriver Monitoring, and Cloud Deployment Manager.
- Established failover and auto-scaling for all critical applications using HAProxy/Nginx for load balancing in GCP, and configured uptime and performance monitoring for all production systems with GCP Stackdriver.
- Integrated technologies such as Docker and Kubernetes, a powerful cluster manager and orchestration system for running Docker containers, using OpenShift on Google Cloud Platform.
- Extensively used Google Stackdriver to monitor logs for both GKE and GCP instances, and configured Stackdriver alerts for key scenarios.
- Maintained and developed Docker images for a tech stack including Cassandra, Kafka, Apache, and several in-house Java services running on Kubernetes in Google Cloud Platform (GCP).
- Integrated the Docker container orchestration framework with Kubernetes by creating pods, ConfigMaps, Deployments, ReplicaSets, nodes, etc.
- Experience developing a CI/CD system with Jenkins on a Kubernetes container environment, utilizing Kubernetes and Docker as the runtime environment to build, test, and deploy; troubleshooting pods through SSH and logs; and writing and modifying BuildConfigs, templates, ImageStreams, etc.
- Worked on setting up Splunk to capture and analyze data from various layers Load Balancers, Webservers and application servers.
- Provided regular support and guidance to Splunk project teams on complex solutions and issue resolution; checked traffic and errors on JBoss web app APIs via Splunk and the command line.
- Created, managed and performed container-based deployments using Docker images containing middleware (Apache Tomcat) and Applications together and evaluated Kubernetes for Docker container orchestration.
- Expertise in virtualization of servers using Docker; worked with Docker Engine and Docker Machine to deploy microservices-oriented environments, with configuration automation using Docker containers.
- Wrote various custom Ansible playbooks for deployment orchestration, developed playbooks to simplify and automate tasks, and protected encrypted data needed for tasks with Ansible Vault.
- Responsible for Creating Ansible Inventory files, hosts, handlers, tasks, templates, roles and group vars to build and Automate AWS Environment/infrastructure.
- Built complete Configuration Management for the Microservices using Kubernetes, Docker and Ansible .
- Implemented continuous integration using Jenkins. Configured security for Jenkins and added multiple slaves for continuous deployments.
- Implemented Ansible to manage all existing servers and automate the build/configuration of new servers.
- Built Jenkins pipelines to push all microservice builds to the Docker registry and then deploy to Kubernetes; created and managed pods using Kubernetes.
- Utilized Kubernetes and Docker as the runtime environment for the CI/CD system to build, test, and deploy. Created Jenkins jobs to deploy applications to the Kubernetes cluster.
- Hands-on experience using ELK (Elasticsearch, Kibana, Logstash), Splunk, and Nagios to gather usage data for each application.
- Automated the deployment process by writing Perl, Python scripts in Jenkins.
- Extensive experience in CentOS/RHEL/Unix system administration: system builds, server builds, installations, upgrades, patches, migration, and troubleshooting server issues on RHEL 4.x/5.x and CentOS.
- Working on User requests via ticketing system (JIRA) related to system access, logon issues, home directory quota, file system repairs, directory permissions, disk failures, hardware and software related issues.
- Setting up network environments using TCP/IP, NFS, DNS, DHCP, FTP, SFTP, SSHD and proxy.
Environment: AWS, CloudWatch, AWS Lambda, Google Cloud, Google Stackdriver, Git, GitHub, Jenkins, Docker, Kubernetes, Ansible, ELK, Splunk 5.0/6.0, Python, Nexus, Artifactory, RHEL
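The Kubernetes deployment pipelines in this section ultimately apply a Deployment manifest; kubectl accepts JSON manifests as readily as YAML, so a small Python generator can keep the zero-downtime rolling-update settings in one place. The app name, image, and port below are hypothetical.

```python
import json

def deployment_manifest(name, image, replicas=3):
    """Deployment with a rolling-update strategy tuned for zero downtime."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "strategy": {
                "type": "RollingUpdate",
                # Never drop below the desired replica count mid-rollout.
                "rollingUpdate": {"maxUnavailable": 0, "maxSurge": 1},
            },
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{
                    "name": name,
                    "image": image,
                    "ports": [{"containerPort": 8080}],
                }]},
            },
        },
    }

if __name__ == "__main__":
    # Pipe the output to `kubectl apply -f -` to roll the Deployment.
    print(json.dumps(deployment_manifest("web", "registry.example.com/web:1.0")))
```

`maxUnavailable: 0` with `maxSurge: 1` is one common way to get the zero-downtime behavior described in the bullets; blue-green setups achieve the same goal with two parallel Deployments and a Service switch instead.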
Confidential
DevOps/Build and Release Engineer
Responsibilities:
- Designed Puppet modules in Ruby to provision several pre-production environments, consisting of Postgres database installations, WebLogic domain creations, and several proprietary middleware installations.
- Installed and configured the RabbitMQ environment for Analytics and set up the supervisor for managing process availability.
- Responsible for the maintenance and development of processes and supported tools/scripts for the automatic building, testing and deployment of the products to various developments.
- Developed Puppet modules for installation and auto-healing of various tools such as Bamboo, Nolio agents, MSSQL, and Nexus; these modules were designed to work on both Windows and Linux platforms.
- Involved in the migration of the Bamboo server, Artifactory, and the Git server. Responsible for writing hooks and triggers using Perl. Built Java applications using Ant.
- Configured various jobs in Bamboo and Hudson for deploying Java-based applications and running test suites. Set up Ant script-based jobs and worked with Hudson pipelines.
- Experience in managing and setting up Continuous Integration using tools like Bamboo, Hudson and Electric Commander, etc.
- Involved in troubleshooting the automation of installing and configuring JAVA applications in the test and pre-production environments.
- Integrated Ant, Bamboo, Puppet and Nexus to implement the continuous delivery framework in Linux environment.
- Installed, configured, updated, and troubleshot database servers such as MSSQL Cluster, MySQL, Oracle 9i/10g/11g, and MongoDB 2.x and 3.0.
- For the deployment of the artifacts used Nexus Artifact Repository Manager.
- Successfully implemented the Master-Slave architecture setup to improvise the performance of Jenkins.
- Configured and administrated CI-CD pipeline using Bamboo as integration, Ant as build and GIT as Source code management tools.
- Verified and rectified errors arising in the CI/CD pipeline setup, and set up continuous security checks on the pipeline to catch and resolve failures quickly.
- Used Nexus as the artifact repository and deployed archives such as WAR files to the Tomcat application servers.
- Used SonarQube in build system for continuously inspecting the code quality, Nagios for monitoring and performed log analysis using ELK stack and created monitoring charts.
- Designed Ant scripts to automate the build process for Java projects, building artifacts from the source code.
- Experience using XML to write Ant build scripts.
- Automated the build and release management process including monitoring changes between release.
- Used shell, Python, and Perl scripting to design the automation in the build process.
- Worked on SDLC model with architects and later started using SVN tool as Source Code Management tool.
- Experienced in migrating data from SVN to Git. Designed the branching and build/release strategies used for Git.
- Linux System Administration on RHEL 5.x/6.x. Experienced in bash and python scripting.
Environment: Java, J2EE, SVN (Subversion), Ant, Bamboo, JIRA, Shell/Perl Scripting, Nagios, WebSphere, UNIX.
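A recurring step in the artifact-deployment work above is verifying an upload against the checksums a repository manager such as Nexus publishes alongside each artifact (SHA-1 and MD5). A minimal, self-contained sketch:

```python
import hashlib

def artifact_checksums(data: bytes):
    """Compute the SHA-1 and MD5 sums typically published next to an artifact."""
    return {
        "sha1": hashlib.sha1(data).hexdigest(),
        "md5": hashlib.md5(data).hexdigest(),
    }

def verify_upload(local_bytes: bytes, remote_sha1: str) -> bool:
    """Compare a local artifact against the repository's published SHA-1."""
    return artifact_checksums(local_bytes)["sha1"] == remote_sha1
```

Wired into a build script, this catches corrupted or partial WAR uploads before a deployment job ever pulls the artifact.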
Confidential
Systems Engineer
Responsibilities:
- Implemented the open-source monitoring tool Nagios 3.0 for servers, SAN switches, EMC SAN storage, and VMware ESX and ESXi.
- Monitored CPUs, IDE/SCSI disks, RAID controllers, and network parameters using the Nagios monitoring tool and performance tools.
- Integrated more than 500 Linux servers to authenticate Windows Active Directory using Winbind and Samba.
- Installed and configured Oracle 8/9i databases on Sun Solaris servers, and integrated Linux/Solaris with Active Directory.
- Implemented Spacewalk Open Source (Red Hat Satellite Server) System management application for auto provisioning, software grouping, custom package channel, system inventory, auto deploying patches and monitoring of Red Hat Servers.
- Performed patching and release upgrades on stand-alone servers (using single-user mode) and live upgrades of production servers using YUM/RPM from a repository or the Red Hat Subscription Management service.
- Activities included user administration, startup and shutdown scripts, crontabs, file-system maintenance, and backup scripting and automation using Perl and shell scripting (Bash, KSH) for Red Hat Linux systems.
Environment: RHEL, VMware ESXi, VSphere, DNS, DHCP, NFS, Linux, Unix, Shell, Perl, Active Directory.