Lead DevOps Admin/Engineer Resume
Fort Worth, TX
SUMMARY
- Highly motivated and skilled Sr. DevOps Engineer with 10+ years of experience in DevOps, CI/CD, infrastructure automation, quality engineering, and build & release management, with a strong background in web applications, middleware, and databases.
- Hands-on experience architecting and automating mission-critical deployments across large infrastructures, implementing continuous deployment pipelines that integrate Bamboo, Vagrant, Git, Jenkins, and Ansible.
- A passionate professional seeking to leverage my experience and knowledge in transforming IT infrastructure, operations, and applications into innovative, scalable, highly available, secure, fault-tolerant, and cost-efficient systems on public and private cloud offerings.
- Possess 10+ years of IT experience in DevOps, CI/CD, Solaris, Ubuntu, Windows, cloud, Splunk, and JavaScript.
- Proficient in automating build and configuration processes using tools like Maven, Jenkins, and Chef/Ansible.
- Performed extensive work on code compilation, packaging, building, debugging, automation, tuning, and deployment of code across multiple environments, along with Linux administration.
- Expertise in Installation and Configuration of Linux distributions such as Red Hat Enterprise Linux (RHEL) 5.x/6.x/7.x, SUSE Enterprise Linux Server 10/11.
- Worked in container-based technologies like Docker, Kubernetes, and OpenShift.
- Hands-on experience in Amazon Web Services (AWS) and its features (EC2, VPC, EBS, AMI, snapshots, Auto Scaling, SES, SQS, SNS, RDS, ELB, CloudWatch, S3, etc.).
- Experience in managing cloud environments using CloudFormation and Terraform for multicloud.
- Experience in configuration management tools like Chef and Ansible.
- Experience with source control tools like Git and Bitbucket: created branches per application requirements and worked with git pull, git push, git merge, git rebase, and other core commands.
- Responsible for designing and deploying new ELK clusters (Elasticsearch, Logstash, Kibana, Beats, Kafka, ZooKeeper, etc.).
- Designed, built, and managed the ELK (Elasticsearch, Logstash, and Kibana) cluster for centralized logging and search functionality for the app.
- Wrote and maintained automated Salt scripts for Elasticsearch, Logstash, Kibana, and Beats. Expertise in repository management tools such as JFrog Artifactory and Nexus.
- Proficient in scripting languages like Shell, Python, Ruby, Perl, and JavaScript, as well as XML.
- Built pipelines in Jenkins 2 using Groovy to automate builds.
- Working knowledge of ticketing tools like JIRA, ClearQuest, and Remedy.
- Experience in LAMP (Linux, Apache, MySQL, and Python) architecture.
- Experience with monitoring and logging tools like Dynatrace, Splunk, and New Relic for monitoring network services and host resources.
- Installed and configured Chef servers and bootstrapped chef-client nodes: created cookbooks and recipes, used attributes for dynamic configuration and templates, and used Test Kitchen to test recipes in a virtual environment.
- Worked with Ansible integrated with Terraform to deploy applications after the infrastructure build, using playbooks, creating roles for each application, handling errors, and using ansible-lint. Worked on Ansible Tower to deploy resources from a centralized location and provided the application team with limited access to the Tower.
- Experience in managing containerized environments using Docker and Docker Compose: creating and managing images with Dockerfiles, managing Docker volumes and backing them up on a regular basis, and managing Docker networks for different environments such as Dev, QA, and Prod.
- Experienced in using Atlassian products (JIRA, Confluence, Bamboo, Fisheye)
- Experienced in customizing Atlassian products by developing various plugins, event listeners, and scripts in Java, JavaScript, Groovy, Jelly script, and JQL.
- Used SQL Server Integration Services (SSIS) and Extract, Transform, Load (ETL) tools to populate data from various data sources, creating packages for different data-loading operations for the application.
- Worked with deployment tools like Change Sets, Copado, the ANT migration tool, and Aside.io.
- Expertise in planning, executing & spearheading IT Infrastructure Projects, and developing & streamlining systems with skills to enhance IT infrastructure operational effectiveness
- Created the Kubernetes (k8s) cluster using Google Cloud.
- Expert in Machine Learning, Deep Learning, and Data Science, utilizing advanced analytics, artificial intelligence, and cognitive services using the Spark MLlib Pipelines API, TensorFlow, Keras, Databricks, DSVM, DLVM, R Server, Python, and Jupyter notebooks.
- Experience with Azure, React JS, TCP, big data, Hadoop, Snowflake, MongoDB, Kubernetes, and Docker with Mesos and Marathon.
- Experience in migrating on-premises applications to Azure; configured VNets and subnets per project requirements and performed PowerShell scripting for patching, imaging, and deployments in Azure.
- Experience in orchestrating containerized environments using Kubernetes: creating pods (static, init), Deployments, ReplicaSets, DaemonSets, Services (NodePort and LoadBalancer), stateful services, Persistent Volumes, and Ingress controllers.
- Implemented a CI/CD pipeline using Azure DevOps (VSTS, TFS) in both cloud and on-premises environments with Git, MSBuild, Docker, and Maven, along with Jenkins plugins.
- Good understanding of Data Mining and Machine Learning techniques.
- Experience in cloud, big data, DevOps, analytics, business intelligence, data mining, machine learning, algorithm development, distributed computing, and programming and scripting languages.
- Developed a framework for Kubernetes (K8s) cluster platform resilience and tested it in production.
- Experience in log management using the ELK stack: Logstash and Beats (Filebeat, Auditbeat, Metricbeat) for log collection, Elasticsearch for indexing the logs, and Kibana for creating dashboards to visualize the data.
- Used monitoring tools like Prometheus and Grafana for monitoring pods deployed in Kubernetes.
- Efficient in working with system performance configuration and monitoring tools like Nagios, ScienceLogic, Prometheus, Grafana, and ELK.
- Experience in DevOps tools (Bitbucket, Bamboo) and test management via JIRA integration.
- Working knowledge of network administration, deploying, and troubleshooting of DNS, LDAP, NIS, NFS, DHCP, Samba, and TCP/IP.
- Developed build and deployment scripts using Maven as a build tool in Jenkins.
- Working experience with cloud infrastructure of AWS (Amazon Web Services), IAM, and computing AMI virtual machines on Elastic Compute Cloud (EC2).
- Highly experienced and certified lead Azure Cloud professional with strong experience in migrating SAP workloads to the public cloud, Azure, DevOps, AWS, Azure administration, Azure infrastructure operations, Linux, and end-to-end project management.
- Experience in implementing test management with an automation framework in JIRA (Bamboo-JIRA integration; X-ray for updating automation results in JIRA).
- Automated machine learning training and optimized it on a GPU with CUDA.
- Exposed to all aspects of software development life cycle (SDLC) such as Analysis, Planning, and Developing.
- Extensive experience using the Maven build tool to build deployable artifacts (jar, war & ear) from source code and to write the corresponding pom.xml files.
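Several bullets above describe shipping logs through Logstash and Beats into Elasticsearch. As a hedged sketch only, the kind of field extraction a Logstash grok filter performs can be mimicked in Python; the log format and field names below are assumptions for illustration, not taken from the resume:

```python
import re

# A simplified combined-access-log pattern, similar in spirit to what a
# Logstash grok filter such as %{COMBINEDAPACHELOG} extracts.
LOG_PATTERN = re.compile(
    r'(?P<client>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse_log_line(line: str) -> dict:
    """Extract structured fields from one access-log line, or return {}."""
    match = LOG_PATTERN.match(line)
    if not match:
        return {}
    fields = match.groupdict()
    fields["status"] = int(fields["status"])  # numeric fields for aggregation
    fields["bytes"] = int(fields["bytes"])
    return fields

line = '10.0.0.5 - - [12/Mar/2020:10:15:32 +0000] "GET /health HTTP/1.1" 200 512'
print(parse_log_line(line)["status"])  # 200
```

In a real ELK pipeline this parsing happens inside Logstash before the documents are indexed into Elasticsearch; the point here is only to show the structured-field output such a filter produces.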
TECHNICAL SKILLS
Platforms: Linux (Red Hat 5.x, 6.x, 7.x)
Programming Languages: Python, Bash, Groovy, Java
Version Control Tools: GIT, Bitbucket
Configuration Management Tool: Chef, Ansible, Ansible Tower, Ansible Molecule
Build tools: Maven
CI tools: Jenkins, Bitbucket
Infrastructure Management tool: CloudFormation, Terraform, Packer
Cloud Technologies: AWS/GCP, Azure
Web/Application Servers: JBoss, Apache Tomcat
Containerization: Docker, Docker Swarm, Kubernetes, Kubespray, Vagrant, OpenShift
Monitoring Tools: ScienceLogic, Grafana, Prometheus, ELK, New Relic
Database: MySQL
PROFESSIONAL EXPERIENCE
Confidential, Fort Worth, TX
Lead DevOps Admin/Engineer
Responsibilities:
- Working in an AWS cloud environment and helping application users deploy their applications in the cloud using resources like EC2, ALB, RDS, and EKS by creating CloudFormation templates.
- Implemented Performance testing using Apache JMeter and created a Dashboard using Grafana to view the Results.
- Managing multiple AWS accounts and integrating them using cross-account roles, deploying resources from a central account, and restricting each user's access.
- Creating Jenkins pipelines in Groovy and integrating tools like Git, Maven, SonarQube, and Kubernetes for end-to-end deployment of the application in the environment.
- Managing Ansible Tower for deploying the application in the created resources and restricting access to application users to run their respective jobs.
- Working on implementing a new OCR solution with Spring Boot, OpenShift, and microservices. Member of the group developing containerized applications with Docker, Spring Boot, Kubernetes, and OpenShift. Deployed microservices to IBM Bluemix Cloud Foundry and later migrated them to OpenShift.
- The deployment model uses Atlassian development repository tools with Jenkins as the build engine, while execution deployments to container orchestration tools ranged over time from OpenShift on EC2 to AWS. Created functional test cases (data-driven when possible) using ReadyAPI (SoapUI) in the CI/CD pipeline to test the RESTful "Accounts" and "Apple Mobile Wallet" APIs for the Mobile App Redesign Journey.
- Improved speed, efficiency, and scalability of the continuous integration environment, automating wherever possible using Python, Ruby, Shell, and PowerShell scripts.
- Lead DevOps/automation engineer for a large-scale hybrid cloud implementation (Active Directory, VMware, Azure, AWS) within a geographically dispersed organization with over 200 remote sites and 20,000+ users.
- Create assertions in ReadyAPI (SoapUI) for Test Cases based upon the Swagger 2.0 spec
- Collaborated with the OCM enterprise lead (Toronto corporate offices) to build out the corporate change management framework.
- Spearheaded framework development and team knowledge advancement to support an ERP program.
- Used Flume, Kafka to aggregate log data into HDFS.
- Involved in design, configuration, installation, implementation, management, maintenance, and support for corporate Linux servers: RHEL 4.x/5.x, SLES 9, CentOS 5.x.
- Ran container workloads on Amazon Enterprise Container Services, and today on AWS Fargate. Implemented a microservices framework with Spring Boot, Node.js, and the OpenShift Container Platform (OCP).
- Integrated Test Management with Automation framework - JIRA (Bamboo JIRA integration, X-ray - Updating Automation Results in JIRA).
- Used Azure Kubernetes Service to deploy a managed Kubernetes cluster in Azure and created an AKS cluster in the Azure portal, with the Azure CLI, also used template-driven deployment options such as Resource Manager Templates and Terraform.
- Developed and presented proto-type demos fully configured for industrial IoT to internal and external audiences.
- Worked collaboratively with IT and Business to develop a comprehensive project plan and manage scope changes, risks, and schedule. Managed web application development projects in OSS/BSS on the Mobility platform within an Agile development cycle; used Rally for Agile project management.
- Developed a stream filtering system using Spark streaming on top of Apache Kafka.
- Designed a system using Kafka to auto - scale the backend servers based on the events throughput.
- Worked on Google Cloud Platform (GCP) services like Compute Engine, Cloud Load Balancing, Cloud Storage, Cloud SQL, Stackdriver Monitoring, and Cloud Deployment Manager.
- Set up GCP firewall rules to allow or deny traffic to and from VM instances based on specified configurations, and used GCP Cloud CDN (content delivery network) to deliver content from GCP cache locations, drastically improving user experience and latency.
- Spun up HDInsight clusters and used Hadoop ecosystem tools like Kafka, Spark, and Databricks for real-time streaming analytics, and Sqoop, Pig, Hive, and Cosmos DB for batch jobs.
- Facilitated long-term vision implementation for leadership and drove enterprise-wide Agile Testing initiative establishing new ways of testing, vendor engagement & management, contract negotiations, implementing testing automation strategies, and driving adoption of Agile testing culture.
- Used the in-house automation tool ARA to deploy artifacts.
- Experience in performance-tuning a Cassandra cluster to optimize reads and writes.
- Involved in the process of data modelling the Cassandra schema.
- Installed and configured DataStax OpsCenter for Cassandra cluster maintenance and alerts.
- Checked data quality of the E2E system, including the ETL tool, the predictive analytics layer, and the different integrated systems.
- Administered the Atlassian and JFrog suite of tools, including Artifactory, Bamboo, Confluence, Crowd, JIRA, and SonarQube, as part of the DevOps engineering team. Also administered the Serena ChangeMan (PVCS) suite of tools, including Version Manager and Tracker. Created and configured various PVCS projects to lock down developed code in adherence with the SDLC promotion model and the project life cycle of software development.
- Deployed a Windows Kubernetes (K8s) cluster with Azure Container Service (ACS) from the Azure CLI, and utilized Kubernetes and Docker as the runtime environment of the CI/CD system for build, test, and Octopus Deploy.
- Benchmarked Elasticsearch 5.6.4 for the required scenarios.
- Used X-Pack for monitoring and security on the Elasticsearch 5.6.4 cluster.
- Provided global search with Elasticsearch.
- Automated the creation of S3 buckets and policies and IAM role-based policies through the Python Boto3 SDK. Configured S3 versioning and lifecycle policies, archived files in Glacier, and, using AWS Database Migration Service (DMS), performed homogeneous migrations such as Oracle to Oracle, as well as Oracle to Amazon Aurora.
- Designed and configured Azure Virtual Networks (VNets), subnets, Azure network settings, DHCP address blocks, DNS settings, security policies, and routing.
- Deployed Azure IaaS virtual machines (VMs) and Cloud services (PaaS role instances) into secure VNets and subnets.
- Regularly reviewing the AWS account and applying security policies according to company policy.
- Using Docker Swarm to orchestrate the containerized environment.
- Creating blue/green deployment and helping the application team to test their code.
- Using the ScienceLogic monitoring tool for monitoring the AWS infrastructure.
- Using AWS CloudWatch for application log monitoring.
- Storing Docker images in the Nexus artifact repository and providing limited access to pull the images.
- Creating Lambda functions to automate tasks such as deleting unused resources, key rotation, etc.
- Creating Jenkins pipelines in Groovy to automate builds, promoting builds when the previous builds succeed.
- Migrating databases using AWS Database Migration Service (DMS): homogeneous migrations such as Oracle to Oracle, and heterogeneous migrations between different database platforms such as Oracle to Amazon Aurora and Microsoft SQL Server to MySQL.
- Creating Terraform codes to create and manage hybrid cloud infrastructure and create Ansible playbooks and roles and integrate with Terraform to deploy applications.
- Helping the application team containerize applications by creating Dockerfiles, building container environments with Docker Compose, segregating Docker volumes and networks for Dev and QA, and backing up Docker volumes at regular intervals.
- Orchestrating production container applications using Kubernetes: auto-scaling applications with Deployments, creating ReplicaSets, using Persistent Volumes to store application data, and using readiness and liveness probes for container health checks. Creating RBAC rules to restrict access and using resource quotas to limit resource usage.
- Developed Terraform plugins using Golang to manage infrastructure which improved the usability of our storefront service.
- Developed website both frontend and backend modules using Python Django Web Framework.
- Applying network policies over the Kubernetes cluster to restrict pod-level access.
- Led pre-sale meetings and discussions to understand end users’ needs and requirements for industrial IoT and edge-communication solutions.
- Migrated on-premises clusters to Microsoft Azure Cloud and enabled data scientists to perform machine learning and advanced analytics by utilizing Azure Data Factory Pipelines, Data Lakes, Blobs, Catalogs, Keyvaults, HDInsight, Databricks, Azure ML Studio, PowerShell, Automations, Runbooks, CI/CD tools (Chef, Jenkins, Ansible, and Kubernetes, Docker, container orchestration), DevOps
- Designed and created Application release automation (ARA) solutions
- Handled browser compatibility issues for different browsers related to CSS, HTML, and JavaScript for IE, Firefox, and Chrome browsers.
- Experience in administering and maintaining Atlassian products like JIRA,bamboo, Confluence, Fisheye
- Engaged with the client, conducted weekly meetings, and providedHigh-level documentationfor the entire Containerization effort across multiple video software development projects.
- Knowledge of Node.js and frameworks available for it
- Responsible for designing and deploying new ELK clusters (Elasticsearch, Logstash, Kibana, Beats, Kafka, ZooKeeper, etc.).
- Design, build, and manage the ELK (Elasticsearch, Logstash, and Kibana) cluster for centralized logging and search functionality for the app.
- Wrote and maintained automated Salt scripts for Elasticsearch, Logstash, Kibana, and Beats. Expertise in repository management tools such as JFrog Artifactory and Nexus.
- Used Dynatrace and New Relic application monitoring for real-time performance metrics to detect and diagnose application problems automatically.
- Installed various plugins in Jenkins such as Build Pipeline, Artifactory, LDAP, Authorize Project, Green Balls, Copy Artifact, SVN, Git (Bitbucket), NAnt, and HP ALM.
- Infrastructure design for the ELK Clusters
- Using SonarQube for code verification.
- Extracted, transformed, and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and U-SQL (Azure Data Lake Analytics). Ingested data to one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed the data in Azure Databricks.
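The bullets above mention Kubernetes Deployments with readiness and liveness probes for container health checks. As a minimal sketch only, the manifest such a setup produces can be built programmatically in Python; the app name, image, port, and `/health` path are hypothetical placeholders, not details from the resume:

```python
def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal Kubernetes Deployment spec with health probes."""
    probe = {
        "httpGet": {"path": "/health", "port": 8080},  # assumed health endpoint
        "initialDelaySeconds": 5,
        "periodSeconds": 10,
    }
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{
                    "name": name,
                    "image": image,
                    "readinessProbe": probe,  # gates traffic until the pod is ready
                    "livenessProbe": probe,   # restarts the container on repeated failure
                }]},
            },
        },
    }

manifest = deployment_manifest("web", "registry.example.com/web:1.0", replicas=3)
print(manifest["spec"]["replicas"])  # 3
```

In practice this dict would be serialized to YAML and applied with kubectl or a CI/CD step; the structure shown matches the standard `apps/v1` Deployment schema.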
Confidential, Fort Worth, TX
DevOps Engineer/SRE
Responsibilities:
- Managing the AWS environment using CloudFormation and working closely with the application team to help deploy infrastructure. Developed various helper classes as needed, following Core Java multi-threaded programming and using Collection classes.
- Provisioned, Configured, and administered required infrastructure for the CI/CD process in theAWS Cloudproviding best practices for Containerization.
- Support a client with pipeline deployment and automation within Copado for Salesforce.
- Working with application team to create CI/CD pipeline using GIT, Jenkins, SonarQube, and Ansible to deploy the application in infrastructure
- Developed and implemented software release management strategies for various applications according to the Agile process.
- Integrated Kafka with Flume in the sandbox environment using a Kafka source and Kafka sink.
- Focused on containerization and immutable infrastructure; Docker has been core to this experience, along with Marathon and Kubernetes.
- Scheduled, deployed, and managed container replicas onto a node cluster using Kubernetes, and converted a VM-based application to microservices deployed as containers managed by Kubernetes.
- Used Zabbix as a monitoring tool and used the Zabbix plugin for Grafana for analysis and visualization.
- Involved in XenServer, VMware ESXi, and Windows 2012 installation on HPE Synergy servers.
- Managed data center configuration, conducted requirements gathering (RGS) for enhancements, generated the BRD, facilitated Joint Application Development (JAD), generated use cases, and provided UAT support for OSS/BSS projects for the Retail Markets Voice Portal.
- Generated Business Requirements Document and provided support for the technical Development Team.
- Migrated data from the Elasticsearch 1.4.3 cluster to Elasticsearch 5.6.4 using Logstash and Kafka for all environments.
- Separated Java URL data from the Elasticsearch 1.4.3 cluster and transferred it to the Elasticsearch 5.6.4 cluster using Logstash and Kafka.
- Worked extensively on building and maintaining clusters managed by Kubernetes, using Linux, Bash, Git, and Docker on GCP (Google Cloud Platform).
- Developed systems to enable baselining and tracking of different types of reference data; automated their creation, labelling, and addition to version control, then set up push-button deployments from Subversion to Oracle databases via Jenkins and JNLP nodes, with full auditing and user authentication and authorisation provided by the corporate Active Directory.
- Skilled in monitoring servers using Nagios, Datadog, and CloudWatch, and using the EFK stack (Elasticsearch, Fluentd, Kibana).
- Designed the process to fold into enterprise waterfall bi-yearly testing
- Developed Python and shell scripts for automation of the build and release process.
- Elasticsearch and Logstash performance and configuration tuning.
- Enterprise system integration of two large Magento eCommerce websites into SAP: integration and testing for two stores, B2B and B2C, namely oribe.com and ohcpro.com, covering third-party APIs, module extensions, payment gateways, hosting and cloud solutions, enterprise ERP integration, and CMS systems in the retail domain for KAO Corporation (KAO Salon Division), Limited Brand Partners, Molton Brown, Absolute Web Services, and Oribe.com.
- Extensive knowledge of the HPE hardware, services, and software and Micro Focus software portfolio, including cloud, ITOM, and app dev (including cloud-native development practices, 12-Factor, and microservices-based application transformation), and containerization solutions including Docker, Swarm, and Kubernetes, as well as the Cloud Foundry and OpenShift app dev platform (CI/CD) PaaS solutions. Also well versed in HPE Aruba, SimpliVity, Synergy, and Universal IoT Platform solutions. I carry most 1st-, 2nd-, and many 3rd-level conversations with client IT leadership and lead technologists.
- Transformed data by running a Python activity in Azure Databricks.
- Trained developers on the use of the Atlassian, JFrog, and Serena (PVCS) suites of tools and provided supporting documentation on configuration management methodology. Aided developers with locking code into the PVCS VM repository and migrating that code into production.
- Developed and implemented the MVC Architectural Pattern using WAF Framework including JavaScript, EJB, and Action classes.
- Automated machine learning training and optimized it on a GPU with CUDA.
- SAT (SIM Application Toolkit) GSM 11.11 & 11.14, Smart trust WIB, S@T,RFM,OTA.
- Implemented highly interactive features and redesigned some parts of products by writing plain JavaScript, due to compatibility issues with jQuery.
- OpenShift virtualized PaaS provider: useful in automating the provisioning of commodity computing resources for cost and performance efficiency.
- Used Bitbucket for version control, with a branching strategy to manage deployments.
- Created Jenkins multibranch pipelines using Groovy to automate builds and to promote builds on success.
- Implemented Continuous Integration/Continuous Delivery (CI/CD) for end-to-end automation of the release pipeline using DevOps tools like Jenkins, Puppet, Automic ARA, etc.
- Tested the ETL Informatica mappings and other ETL Processes (Data Warehouse Testing).
- Installed and administered Jenkins and Atlassian tools like JIRA and Confluence.
- Used JIRA as a change management/work management/Scrum Agile tool. Created analytical matrix reports and dashboards for release services based on JIRA tickets.
- Developed metrics dashboards and advanced filters in JIRA to provide end users and business leadership with performance metrics and status reports.
- Created, customized, and managed new and existing projects in JIRA 7 (Server), including JIRA Agile, and spaces in Confluence.
- Experience in implementing test management with an automation framework in JIRA (Bamboo-JIRA integration; X-ray for updating automation results in JIRA).
- Point team player on OpenShift for creating new projects and services for load balancing and adding them to routes to be accessible from outside; troubleshooting pods through SSH and logs; and modifying BuildConfigs, templates, ImageStreams, etc.
- Set up full CI/CD pipelines so that each commit a developer makes goes through a standard software life cycle process and gets tested well enough before it can make it to production.
- Used RTC for version control and Agile project management.
- Educated developers on how to commit their work and how to make use of the CI/CD pipelines that are in place.
- Familiar with wireless network performance testing, network selection/reselection, RF performance, OTA, data services, GPS navigation, Location-based service.
- Using Maven for builds, creating artifacts, and storing them in the Nexus artifact repository.
- Creating an Ansible playbook for deploying applications in the environment.
- Migration of on-prem servers to the AWS Cloud.
- Experience in DevOps tools (Bitbucket, Bamboo) and test management with JIRA.
- Containerizing the application using Docker by creating Dockerfiles.
- Deployed infrastructure using ARM templates for Azure PaaS Databricks services with deployment tools such as Octopus and VSTS.
- Used change sets for migration and back-promotion of functionality.
- Designed and developed advanced analytical solutions that utilize machine learning and deep learning frameworks for Ontario's driver examination services. The solutions utilize tools such as Machine Learning Studio, Azure Data Lake, Azure SQL database, Azure SQL Data Warehouse, PolyBase, the Spark MLlib Pipelines API, TensorFlow, Keras, Databricks, DSVM, DLVM, R Server, Python, Jupyter notebooks, and many other data science and ML tools.
- Configuring and managing the ELK stack for centralized log monitoring, creating parsers in Logstash to filter the logs, and using Beats for log shipping.
- Created hybrid cloud by combining private cloud and public cloud (using Amazon web services) and used it for public scaling.
- Researched and evaluated various cloud technologies to build industrial IoT edge-computing micro data centers supporting various verticals.
- Involved in periodic archiving and storage of the source code for disaster recovery.
- Designed AWS Cloud Formation templates to create custom-sized VPC, subnets, NAT to ensure successful deployment of Web applications and database templates.
- Monitoring IoT (Internet of Things) specified infrastructure design and implementation process.
- Using Chef to deploy the application to the infrastructure using the client-server model.
- Implemented Copado to improve the efficiency of salesforce release management and version control.
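Blue/green deployments, mentioned in the bullets above, keep a live stack and an idle stack and flip traffic between them after the new release passes health checks. A minimal Python sketch of the cut-over logic, with the environment names and state shape as illustrative assumptions:

```python
def next_color(active: str) -> str:
    """Return the idle environment to deploy the new release into."""
    if active not in ("blue", "green"):
        raise ValueError(f"unknown environment: {active}")
    return "green" if active == "blue" else "blue"

def cut_over(state: dict) -> dict:
    """Deploy to the idle color, then swap live traffic to it.

    In a real pipeline this step would run the deployment job and health
    checks against the idle stack before flipping the load balancer or
    router; here only the bookkeeping is shown.
    """
    idle = next_color(state["live"])
    return {"live": idle, "standby": state["live"]}

print(cut_over({"live": "blue", "standby": "green"}))  # {'live': 'green', 'standby': 'blue'}
```

The appeal of the pattern is that rollback is the same operation as release: if the new stack misbehaves, traffic is flipped back to the previous color, which is still running untouched.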
Confidential, Pittsburgh, PA
Build & Release Engineer
Responsibilities:
- Managing AWS resources and helping the application team deploy a 3-tier architecture using Terraform.
- Creating Jenkins pipelines using Groovy to automate builds and promote builds on success.
- Administered Jenkins; proposed and implemented a branching strategy suitable for Agile/Scrum development in a fast-paced engineering environment.
- Using Terraform code to create and manage hybrid cloud infrastructure, and writing Ansible playbooks and roles integrated with Terraform to deploy applications.
- Implemented a CI/CD pipeline with Docker, Jenkins (TFS plugin installed), Team Foundation Server (TFS), GitHub, and Azure Container Service: whenever a new TFS/GitHub branch gets started, Jenkins, our Continuous Integration (CI) server, automatically attempts to build a new Docker container from it.
- Managing the OpenShift cluster, which includes scaling the AWS app nodes up and down.
- Worked on migration services like AWS Server Migration Service (SMS) to migrate on-premises workloads to AWS more easily and quickly using the rehost ("lift and shift") methodology, along with AWS Database Migration Service (DMS), AWS Snowball to transfer large amounts of data, and Amazon S3 Transfer Acceleration.
- Implemented configuration management and change management policies and procedures, and used the ticketing tool JIRA.
- Worked on JIRA Agile projects: creating scrum boards and configuring columns, filters, and reports for sprints.
- Created storage pools and striping of disks for Azure virtual machines. Backed up, configured, and restored Azure virtual machines using Azure Backup. Configured a Windows failover cluster by creating a quorum for file sharing in the Azure cloud.
- Designed User Defined Routes with custom route tables for specific cases to force tunneling to the Internet via On-premise network and control use of virtual appliances in the customer's Azure environment.
- Very strong exposure using Ansible automation to replace the different components of OpenShift, such as etcd, master, app, infra, and Gluster nodes.
- Using Kubespray to create a production-ready Kubernetes cluster for deploying the application.
- Managing Vault to store secrets and other sensitive information.
- Using Helm 2 as a package manager to deploy applications into the Kubernetes environment.
- Using Rancher as a dashboard to monitor the Kubernetes environment.
- Using Ansible Molecule to test playbooks before deploying them into the production environment.
- Helping the application team integrate tools like Maven, SonarQube, JUnit, and Selenium with the Jenkins pipeline.
- Automating the AMI build using Packer.
- Using Vagrant for creating a test environment for the development team to test their codes.
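The "Automating the AMI build using Packer" bullet above refers to baking machine images from a template of builders and provisioners. A hedged sketch of generating such a template from Python follows; the region, base AMI ID, instance type, and provisioning command are hypothetical placeholders, not values from the resume:

```python
import json

def packer_template(ami_name: str, base_ami: str, region: str) -> str:
    """Render a minimal Packer JSON template for baking an AMI."""
    template = {
        "builders": [{
            "type": "amazon-ebs",
            "region": region,
            "source_ami": base_ami,        # base image to start from (placeholder ID)
            "instance_type": "t3.micro",
            "ssh_username": "ec2-user",
            "ami_name": ami_name + " {{timestamp}}",  # unique name per build
        }],
        "provisioners": [{
            "type": "shell",
            "inline": ["sudo yum -y update"],  # example baseline patching step
        }],
    }
    return json.dumps(template, indent=2)

rendered = packer_template("web-base", "ami-0123456789abcdef0", "us-east-1")
```

A Jenkins job would typically write this out and run `packer build` on it, so every AMI is produced the same way instead of being hand-patched.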
Confidential
Systems Engineer
Responsibilities:
- Installing, upgrading, and managing packages via the rpm and yum package managers; created virtual machines and configured them to use resources efficiently.
- Scheduled jobs like disabling and enabling cron jobs, enabling system logging, network logging of servers for maintenance, performance tuning, testing, and managed systems routine backup
- Administered Linux servers for several functions including managing Apache/Tomcat server, mail server, and MySQL databases in both development and production environment.
- Monitored and controlled disk space usage and performed capacity analysis on systems.
- Daily monitoring and troubleshooting of enterprise server, network, and security concerns.
- Administered and maintained the network file system (NFS) to share data with Linux and across Windows environments.
- Performed memory and swap management to improve optimization and performance of the servers.
- Installed, configured, and administered services such as DNS servers, mail servers, FTP servers, Samba servers, NIS, NIS+, DHCP, LDAP, and rpm.
- Helped in resolving various build, compile, and test problems.
- Customized layer 2 and layer 3 networking between VMware, networking components, and storage for high availability and maximum performance.
- Implemented fault-tolerance on virtual platforms with high availability and distributed resource scheduling.
- Planning & implementation of scheduled changes per change management policies.
- Development of custom scripts for virtual infrastructure.
- Utilization and capacity reporting.
- Configure SNMP-based monitoring alerts on all converged infrastructure elements.
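The disk-space monitoring and capacity-analysis duties above usually come down to parsing `df` output and alerting past a threshold. A minimal Python sketch of that check, assuming POSIX `df -P` column layout (filesystem, blocks, used, available, capacity, mount point); the sample data is fabricated for illustration:

```python
def check_disk_usage(df_output: str, threshold: int = 80) -> list:
    """Return (mount point, use%) pairs whose use% exceeds the threshold.

    Expects POSIX `df -P` output, whose fifth column is the capacity
    percentage and sixth column is the mount point.
    """
    alerts = []
    for line in df_output.strip().splitlines()[1:]:  # skip the header row
        parts = line.split()
        if len(parts) < 6:
            continue  # ignore wrapped or malformed lines
        used_pct = int(parts[4].rstrip("%"))
        if used_pct > threshold:
            alerts.append((parts[5], used_pct))
    return alerts

sample = """Filesystem 1024-blocks Used Available Capacity Mounted-on
/dev/sda1 10000 9100 900 91% /
/dev/sdb1 50000 20000 30000 40% /data"""
print(check_disk_usage(sample))  # [('/', 91)]
```

In practice the same logic would run from cron against live `df -P` output and feed an alerting channel, which is what tools like Nagios automate at scale.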