DevOps Engineer Resume
Philadelphia, PA
SUMMARY
- Around 12 years of experience in IT infrastructure and build, release, and deployment management, including automating, building, deploying, and releasing code across environments.
- Expertise in the Oracle Confidential application server and web server, Pivotal Cloud Foundry, performance/load testing, GoCD, Big Data, and UNIX platforms, covering monitoring, management, support, and troubleshooting.
- Experience in Pivotal Cloud Foundry setup, POCs, configuration, and maintenance.
- Excellent technical skills, with strong knowledge of current and emerging technologies.
- Expertise in automating various builds and deployments using Ant, Maven, and shell and YAML scripts.
TECHNICAL SKILLS
Software Products: Oracle Confidential server 8.x, 9.x, 10.x, 11.x, 12c; Apache Tomcat 6.0.33.
Operating Systems: Sun Solaris, RH-Linux, IBM AIX, HP UNIX, Windows XP/NT/2000.
Database: Oracle 12c, MySQL.
Tools: HP-BAC, Wily, Splunk, Nagios, RAD, ALBUM, AppDynamics, RBM, PCM, Toad, SQL Developer, SQLyog, Control-M, HP Performance Center, GoCD, JMeter, Pivotal Cloud Foundry, Grafana, Kibana, Logstash, Tableau, Rally, Git, Kafka, Ambari, Big Data Hadoop, HDFS, Jenkins, Ansible, Skyview, Python, Shell, MongoDB, ZooKeeper, Selenium, VuGen, Docker, Kubernetes, ALM.
Ticketing System: HP-Service Manager, JIRA, Service Catalog and Remedy.
PROFESSIONAL EXPERIENCE
DevOps Engineer
Confidential, Philadelphia, PA
Responsibilities:
- Worked closely with the Kafka admin team to set up Kafka clusters in the QA and production environments.
- Used Kibana and Elasticsearch to identify Kafka message failure scenarios.
- Implemented reprocessing of failed Kafka messages using offset IDs.
- Implemented Kafka producer and consumer applications on the Kafka cluster with the help of ZooKeeper.
- Knowledgeable in Kafka message partitioning and configuring replication factors in the Kafka cluster.
- Combined views and reports into interactive dashboards in Tableau Desktop that were presented to Business Users, Program Managers, and End Users.
- Worked on Big Data integration and analytics based on Hadoop, SOLR, Spark, Kafka, Storm, and webMethods.
- Handled importing data from various sources, performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from SQL databases into HDFS using Sqoop.
- Configured Elastic Load Balancing (ELB) to route traffic between zones, and used ALBUM with failover and latency options for high availability and fault tolerance.
- Responsible for source system analysis, data transformation, data loading, and data validation from source systems to the transactional data system.
- Responsible for providing environments (lower and production servers) to testers and developers.
- Provided end-to-end migration support from the Arterra billing system (current generation) to the Amdocs billing system (next generation).
- Supported all front-end applications (Xfinity Mobile) and back-end applications (Amdocs CES billing system).
- Created IAM roles, users, and groups and attached policies to grant least-privilege access to resources.
- Created topics in SNS to send notifications to subscribers as per the requirement.
- Worked on Amazon RDS databases and created instances as required.
- Created new Cloud Foundry environments, GSLB entries, network/firewall connectivity, AppDynamics configurations, Linux servers, and virtual machines.
- Created and maintained fully automated CI/CD pipelines for code deployment using GitHub.
- Built Slack bot automation to check the health of databases, application servers, config files, logs, and properties.
- Renewed SSL certificates for all production servers and GSLBs.
- Managed and created firewall requests for both the client (Confidential) and the vendor (Amdocs).
- Working knowledge of AWS and Azure ML.
Environment: Pivotal Cloud Foundry, GoCD pipelines, Jenkins, AppDynamics, Grafana, SOAP, REST, GitHub, Kafka, Kibana, Splunk, Wily, ZooKeeper, Hadoop, SOLR, Spark, Storm, Tableau, ALBUM, netmanager, skytool.
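The offset-based reprocessing of failed Kafka messages mentioned above can be sketched as follows. This is a minimal illustration, not the original implementation: the pure helper picks the earliest failed offset per partition, and the commented section shows how that would drive a seek with the kafka-python client (broker, topic, and group names are placeholders):

```python
def earliest_offsets(failed):
    """Given (partition, offset) pairs for failed messages, return the
    earliest failed offset per partition -- the position to seek back
    to so that every failed message gets replayed."""
    seek_to = {}
    for partition, offset in failed:
        if partition not in seek_to or offset < seek_to[partition]:
            seek_to[partition] = offset
    return seek_to

# Against a live cluster this would drive KafkaConsumer.seek()
# (kafka-python); all names below are hypothetical placeholders:
#
# from kafka import KafkaConsumer, TopicPartition
# consumer = KafkaConsumer(bootstrap_servers="broker:9092",
#                          group_id="replay",
#                          enable_auto_commit=False)
# for partition, offset in earliest_offsets(failed).items():
#     tp = TopicPartition("orders", partition)
#     consumer.assign([tp])
#     consumer.seek(tp, offset)   # replay from the earliest failure
```

Disabling auto-commit matters here: committing offsets only after successful processing is what makes the seek-and-replay approach safe.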
DevOps Engineer
Confidential
Responsibilities:
- Expertise in the GoCD deployment tool: server configuration, pipeline creation, agents, and troubleshooting.
- Strong knowledge of performance/load testing using the HP Performance Center tool, VuGen, and ALM.
- Performed various deployment and administration tasks in Pivotal Cloud Foundry.
- Worked on installation, configuration, tuning, troubleshooting, DR strategies, high availability, and upgrades for big data technologies such as Ambari, Apache Spark, Apache Kafka, HDFS, and ZooKeeper.
- Expertise in Confidential deployments, upgrades, and troubleshooting.
- Continuous deployment on Jenkins: creating jobs, configuring repositories, and troubleshooting.
- Strong knowledge of Docker, Kubernetes, ZooKeeper, and Kafka configuration: creating queues and topics and maintaining servers and messaging flow.
- Configured new application alerts and health rules in AppDynamics and Nagios.
- Configured OpenStack servers and performance-tested new OpenStack servers.
- Experience in alert configuration, application monitoring, and server monitoring.
- Worked on production issues and handled all application- and server-related tasks.
- Perform deployments, upgrades, configurations in a controlled, pre-production and production environment with tight operating perimeters.
- Ensured management and monitoring tools were integrated with Pivotal Cloud Foundry, with rules/alerts for routine and exceptional operating conditions.
- Automated several processes thereby avoiding manual intervention.
- Developed a script to check the health of all applications.
- Performed data analysis and created graphs in Tableau.
- Experience in branching, tagging, and maintaining versions across environments using SCM tools such as Git and Subversion (SVN) on Linux and Windows platforms.
- Experience with bug-tracking tools such as JIRA, Service Catalog, and Remedy.
- Managed environments DEV, SIT, QA, UAT, PERF and PROD in SDLC for various releases and designed instance strategies.
- Setup and maintained different environments such as production, QA, performance, development for all applications.
- Worked on Maintenance and Deployment activities for all applications.
- Strong knowledge of deploying J2EE applications on Oracle/BEA Confidential servers/clusters using automated WLST, Ant, and UNIX shell scripts; comfortable configuring Confidential servers in a clustered environment.
- Excellent at troubleshooting and analyzing Confidential issues and finding solutions where necessary.
- Expertise in administration, installation, configuration, and troubleshooting of Oracle/BEA Confidential and Apache servers in Red Hat Linux 6.x/5.x/4.x and Windows environments.
- Managed and administered Domains, Clusters, JDBC Connection Pools, JDBC Data sources, Security and other resources on Confidential Server Platforms.
- Experience deploying to SOA Suite using stage, no-stage, and external-stage modes.
Environment: Confidential, Pivotal Cloud Foundry, GoCD pipelines, Jenkins, AppDynamics, Grafana, SOAP, REST, GitHub, Kafka, Kibana, Splunk, Wily, ZooKeeper, Hadoop, SOLR, Spark, Storm, Tableau, ALBUM, netmanager, skytool.
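The application health-check script mentioned above can be sketched along these lines. This is a hedged illustration, not the original script: the pure helpers classify apps from HTTP status codes, and the commented probe shows how live endpoints would be polled with the standard library (URLs and app names are hypothetical):

```python
def classify(status_code):
    """Map an HTTP status code to a coarse health state."""
    if 200 <= status_code < 300:
        return "UP"
    if status_code in (502, 503, 504):
        return "DOWN"
    return "DEGRADED"

def health_report(statuses):
    """statuses: {app_name: http_status} -> {app_name: health_state}."""
    return {app: classify(code) for app, code in statuses.items()}

# Against live endpoints (URLs below are placeholders):
#
# import urllib.request
# def probe(url):
#     try:
#         return urllib.request.urlopen(url, timeout=5).status
#     except Exception:
#         return 503   # treat unreachable endpoints as DOWN
#
# report = health_report({"billing": probe("http://billing.internal/health")})
```

Keeping the classification logic separate from the network probe makes the thresholds easy to test without any servers running.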
System Administrator
Confidential
Responsibilities:
- Performed WebLogic Server administration tasks such as installation, configuration, monitoring, and performance tuning.
- Performed J2EE application deployment and administration including JAR, WAR, and EAR files on different environments (QA, Stage, and Production).
- Created domains and setups in raw environments.
- Performed product upgrades and applied patches, including core patches.
- Performed various types of deployments in Confidential and Apache.
- Performed unit testing and sanity checks.
- Worked with the database team to resolve permission issues and connection pool issues in the Stage environment.
- Involved in performance tuning of WebLogic Server with respect to heap, threads, and connection pools.
- Troubleshot WebLogic Server connection pooling and the connection manager with Oracle.
- Used RBM and PCM tools for deployments.
- Involved in troubleshooting and fixing day-to-day problems of applications in production.
- Working knowledge of Nagios, Puppet, and AppDynamics.
- Monitored customer interfaces and web pages for availability.
- Responded to incidents, then analyzed, resolved, and closed them within SLAs.
- Worked on UNIX troubleshooting, including disk checks, CPU performance monitoring, and permission issues for app servers.
Environment: Confidential, JBoss, Apache, AppDynamics, Grafana, SOAP, REST, Kibana, Splunk, Wily, ALBUM, RBM, PCM, Nagios
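The UNIX disk checks mentioned above can be sketched as follows. This is a minimal illustration under stated assumptions, not the original tooling: the helpers compute usage from byte counts and flag mounts above a threshold, and the comment shows where `shutil.disk_usage` would supply real numbers on a live host (the 85% threshold and mount paths are hypothetical):

```python
def usage_percent(total, free):
    """Percentage of a filesystem in use, computed from byte counts."""
    used = total - free
    return round(100.0 * used / total, 1)

def check_mounts(mounts, threshold=85.0):
    """Return the mounts whose usage exceeds the threshold.
    mounts: {path: (total_bytes, free_bytes)}"""
    return {path: pct
            for path, (total, free) in mounts.items()
            if (pct := usage_percent(total, free)) > threshold}

# On a live host the (total, free) tuples would come from the
# standard library, e.g.:
#
# import shutil
# stats = shutil.disk_usage("/")
# print(usage_percent(stats.total, stats.free))
```

Separating the arithmetic from the filesystem call keeps the alert threshold testable without touching real disks.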