
Sr. DevOps/Cloud Engineer Resume

PA

SUMMARY

  • Sr. Cloud & DevOps professional with about 7 years of IT experience as a Cloud/DevOps Engineer, comprising Linux and system administration with a major focus on AWS, Azure, Continuous Integration, Continuous Deployment, Configuration Management, build/release management, and virtualization technologies, including troubleshooting and performance issues.
  • Results-oriented engineer with strong problem-solving skills, able to work efficiently while balancing resource and time constraints.
  • Hands on experience in building AWS solutions using CloudFormation Templates and Terraform to automate repeatable provisioning of AWS resources for applications.
  • Used Amazon ECS as a container management service to run microservices on a managed cluster of EC2 instances. Implemented Amazon API Gateway as a single entry point for all APIs.
  • Knowledge on SaaS, PaaS and IaaS concepts of cloud computing architecture and Implementation using AWS, OpenStack, OpenShift, Pivotal Cloud Foundry (PCF) and Azure.
  • Experience in converting existing AWS infrastructure to serverless architecture (AWS Lambda, Kinesis), deploying via Terraform and AWS CloudFormation templates.
  • Good experience in working with AWS, GCP and Microsoft Azure cloud environments.
  • Hands-on experience in GCP, BigQuery, and GCS buckets. Analyzed data in Google Cloud Storage using BigQuery.
  • Expertise using the BigQuery browser tool and the BigQuery command line.
  • Good experience in Unix/Linux system administration, server builds, system builds, installations, upgrades, migrations, and troubleshooting on Red Hat Linux, CentOS, Ubuntu, Fedora, SUSE, Solaris, and Windows.
  • Expertise in the areas of Software Development Life Cycle (SDLC) methodologies, Change Management, Disaster Recovery, Failure Management, Incident and Issue Tracking, Cost Optimization, Log Monitoring, and Cloud Implementation.
  • Experience with Kubernetes in managing the containerized applications, creating and deploying application containers.
  • Experience setting up Chef infrastructure, bootstrapping nodes, creating and uploading recipes, and node convergence in Chef SCM. Used Chef for server provisioning and infrastructure automation, release and deployment automation, and configuration of files, commands, and packages.
  • Experience in using MAVEN and ANT as build tools for building of Deployable Artifacts (jar, war & ear) from source code.
  • Experience using the Ansible Tower dashboard and role-based access control to manage access to Ansible for deployments.
  • Experience in working with Ansible Playbooks to automate various deployment tasks and working knowledge on Ansible Roles, Ansible inventory files and Ansible Galaxy.
  • Extensively worked on Jenkins/Hudson and Bamboo, installing, configuring, troubleshooting, and maintaining them for Continuous Integration (CI) and end-to-end automation of all builds and deployments.
  • Excellent knowledge in writing Bash, Ruby, Python and PowerShell scripts to automate the deployments.
  • Expertise in deploying builds through web/application servers such as Tomcat, JBoss, WebSphere, and WebLogic.
  • Expertise in troubleshooting system-level, application-level, and network-level issues generated while building, deploying, and in production support.
  • Good knowledge of the DataStage tool (Designer, Director, and Information Server Manager); used DataStage as an ETL tool to extract data from source systems.
  • Knowledge of the different stages of DataStage Designer, such as Lookup, Join, Merge, Funnel, Filter, Copy, Aggregator, and Sort.
  • Experience in working with large data sets of Structured and Unstructured data using Big Data/Hadoop and Spark, Data Acquisition, Data Validation, Predictive modeling, Statistical modeling, Data modeling, Data Visualization.
  • Experience in installing, configuring and using Apache Hadoop ecosystem components like Hadoop Distributed File System (HDFS), MapReduce, Yarn, Spark, Pig, Hive, Hbase, Oozie, Zookeeper and Sqoop.
  • Hands-on experience in working with Spark SQL queries, Data frames, import data from Data sources, perform transformations, perform read/write operations, save the results to output directory into HDFS.
  • Experience in using distributed computing architectures such as AWS products (e.g., EC2, Redshift, EMR, Elasticsearch), migrating raw data to Amazon S3, and performing refined data processing.
  • Experienced in setting up databases in AWS, including RDS (MSSQL, MySQL), MongoDB, and DynamoDB; storage using S3 buckets and configuring instance backups to an S3 bucket.
  • Used CloudWatch Logs to move application logs to S3 and created alarms based on exceptions raised by applications.
  • Understanding of the Azure Portal, Azure Data Lake, Azure Data Factory, Azure Databricks, Azure CLI, Monitor, MASE, SQL Data Warehouse, Azure Blob, and Azure Storage Explorer.
  • Experience in Azure platform development, deployment concepts, hosted cloud services, platform as a service (PaaS), and close interfacing with Windows Azure Multi-Factor Authentication.
  • Worked on creating an end-to-end CI/CD pipeline using VSTS and deployed it in the Azure cloud.
  • Experience with Azure transformation projects and Azure architecture decision making; architected and implemented ETL and data movement solutions using Azure Data Factory (ADF) and SSIS.
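The repeatable CloudFormation provisioning described above can be sketched in Python as a template generator. This is a minimal, illustrative sketch, not a template from the actual projects; the resource name, AMI ID, and tags are assumptions.

```python
import json

def make_template(instance_type="t3.micro", ami_id="ami-12345678"):
    """Build a minimal CloudFormation template as a Python dict.
    Resource names and property values here are illustrative only."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Repeatable EC2 provisioning (illustrative)",
        "Resources": {
            "AppServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": instance_type,
                    "ImageId": ami_id,
                    "Tags": [{"Key": "Environment", "Value": "dev"}],
                },
            }
        },
    }

# Serialize to JSON, ready to pass to `aws cloudformation create-stack`.
template_json = json.dumps(make_template(), indent=2)
```

Generating templates programmatically like this keeps every environment's stack reproducible from code rather than hand-edited JSON.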

TECHNICAL SKILLS

Configuration Management: Chef, Ansible

Continuous Integration: Jenkins, Bamboo

Version Control: Git, SVN

Build Tools: ANT, MAVEN and MS Build

Cloud Platforms: AWS, Azure, GCP, Openstack

Package Management: Artifactory, Nexus

Issue Tracking: JIRA, Bugzilla

Virtualization: Docker, Vagrant, and Kubernetes

Programming Languages: Java, .NET, XML, Shell script, Ruby, and Python

Web & Application servers: WebLogic, WebSphere, Tomcat, Nginx, and Apache.

Scripting: Bash, Python, Ruby, Perl, Node.js, JavaScript, HTML

Monitoring Tools: Datadog, Splunk, ELK

Big Data Eco-system: HDFS, MapReduce, Spark, Yarn, Hive, HBase, Sqoop, Kafka, Zookeeper

Operating Systems: Linux (Red Hat 5/6), Ubuntu, CentOS, Windows, and Unix

PROFESSIONAL EXPERIENCE

Confidential, PA

Sr. DevOps/Cloud engineer

Responsibilities:

  • Configured the AWS stack for AMI management, Elastic Load Balancing, Auto Scaling, CloudWatch, EC2, EBS, IAM, Route53, S3, RDS, and CloudFormation.
  • Managed EC2 instances utilizing Launch Configurations, Auto Scaling, and Elastic Load Balancing; automated infrastructure provisioning using CloudFormation JSON templates and Ansible modules; used the ELK (Elasticsearch, Logstash, Kibana) stack for log aggregation, indexing, and environment monitoring.
  • Migrated the existing Linux environment to AWS by creating and executing a migration plan: deployed EC2 instances in a VPC, configured security groups and NACLs, and attached profiles and roles using AWS CloudFormation templates and Ansible modules.
  • Wrote AWS Lambda Python scripts for internal server health monitoring, including making API calls when integrated with third-party APIs.
  • Scripting infrastructure and (Linux) machine provisioning using bash and the Python AWS-SDK.
  • Created Master-Slave configuration using existing Linux machines and EC2 instances to implement multiple parallel builds through a build farm, expertise in troubleshooting build and release job failures.
  • Used Amazon Route53 to manage DNS zones globally and to give public DNS names to ELBs and CloudFront for content delivery; administered the AWS environment using IAM to assign roles and to create and manage AWS users, groups, and permissions.
  • Designed and worked on a CI/CD pipeline supporting workflows for application deployments using Jenkins, Artifactory, Chef, Terraform and AWS CloudFormation.
  • Installed Docker using Docker Toolbox; installed and configured Kubernetes.
  • Container management using Docker by writing Docker files and set up the automated build on Docker HUB.
  • Used Jenkins and pipelines to drive all microservices builds out to the Docker registry and then deployed to Kubernetes, Created Pods and managed using Kubernetes.
  • Worked on Installation and configuration of Jenkins for Automating Builds and Deployments through integration of GIT into Jenkins to automate the code check-out thus providing an automation solution.
  • Involved in Designing and setup of CI tool Bamboo to integrate SCM tool Git and automated the build process.
  • Worked with Terraform key features such as Infrastructure as Code, execution plans, resource graphs, and change automation, and created infrastructure in a coded manner using Terraform.
  • Wrote Terraform scripts to improve the infrastructure in AWS. Expertise in configuring Jenkins job to spin up infrastructure using Terraform scripts and modules.
  • Involved in setting up Kubernetes (k8s) for clustering & orchestrating Docker containers for running microservices by creating Pods.
  • Worked with Docker and Kubernetes on multiple cloud providers, from helping developers build and containerize their application (CI/CD) to deploying either on the public or private cloud.
  • Built Docker images and pushed them to AWS ECR for Kubernetes deployments.
  • Managed Ansible Playbooks with Ansible modules, implemented CD automation using Ansible, managing existing servers and automation of build/configuration of new servers.
  • Used Ansible and Ansible Tower as configuration management tools to automate repetitive tasks, quickly deploy critical applications, and proactively manage change.
  • Worked with Nagios for monitoring and Splunk for Log Management.
  • Installed and configured Splunk to monitor applications deployed on the application server, by analyzing the application and server log files. Worked on the setup of various dashboards, reports, and alerts in Splunk.
  • Proficient with Splunk architecture and various components (indexer, forwarder, search head, deployment server), Heavy and Universal forwarder, License model.
  • Involved in setting up JIRA as defect tracking system and configured various workflows, customizations and plugins for the JIRA bug/issue tracker.
  • Extensively used JIRA as a ticketing tool for creating sprints, bug tracking, issue tracking, project management functions, and releases.
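The Lambda-based server health monitoring mentioned above can be sketched as a Python handler. This is an illustrative sketch only: here the handler receives probe results in its event payload, whereas a real version would call the endpoints itself (e.g., via urllib); the endpoint URLs are assumptions.

```python
def classify(status_code):
    """Map an HTTP status code to a health verdict."""
    return "healthy" if 200 <= status_code < 400 else "unhealthy"

def lambda_handler(event, context):
    # event["checks"] carries endpoint -> status code pairs gathered by an
    # upstream probe (assumption; a real handler would probe the URLs itself).
    statuses = {url: classify(code) for url, code in event["checks"].items()}
    unhealthy = sorted(u for u, s in statuses.items() if s == "unhealthy")
    # "alarm" signals downstream alerting (e.g., an SNS notification).
    return {"statuses": statuses, "alarm": bool(unhealthy), "unhealthy": unhealthy}
```

Keeping the classification logic separate from I/O makes the health check easy to unit-test without touching the network.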

Confidential, Dallas, TX

DevOps/AWS Engineer

Responsibilities:

  • Installation, Administration, Maintenance, and troubleshooting of Linux and Windows servers.
  • Leveraged Amazon Web Services like EC2, RDS, EBS, ELB, Auto Scaling, AMI, IAM through AWS console and API Integration.
  • Deployed LAMP-based applications in the AWS environment, including provisioning MySQL RDS and establishing connectivity between EC2 instances and MySQL RDS via security groups.
  • Configured Amazon Elastic Container Service to scale cluster size and adjust its desired count up or down in response to CloudWatch alarms.
  • Deployed JSON templates to create stacks in CloudFormation that include services such as Amazon EC2, Amazon S3, Amazon RDS, Elastic Load Balancing, Amazon VPC, SQS, and other AWS infrastructure services.
  • Automated configuration management and deployments using Ansible playbooks and YAML. Created Ansible Playbooks to provision Apache Web servers, Tomcat servers, Nginx, Apache Spark and other applications.
  • Extensively worked on creating Docker file, build the images, running Docker containers and manage Dockerized application by using Docker Cloud. Used Docker Swarm for clustering and scheduling Docker container.
  • Designed and Implemented scalable, secure cloud architecture based on MS Azure Cloud platform.
  • Developed a Continuous Delivery (CD) pipeline with Docker, Jenkins, GitHub, and Azure pre-built images.
  • Configured Azure Virtual Networks, subnets, DHCP address blocks, Azure network settings, DNS settings, security policies and routing. Also, deployed Azure IaaS virtual machines and Cloud services (PaaS role instances) into secure Virtual Networks and Subnets.
  • Designed and implemented Azure Site Recovery both for disaster recovery scenarios and for migrating workloads from on-premise to Azure; built a Disaster Recovery (DR) plan using Traffic Manager configuration.
  • Used Terraform on Azure to deploy the infrastructure necessary to create development, test, and production environments.
  • Written Ansible Playbooks with Python SSH as the Wrapper to Manage Configurations of Azure Nodes and Test Playbooks on Azure instances using Python SDK and automated various infrastructure activities like continuous deployment, application server setup, stack monitoring using Ansible playbooks.
  • Involved in various aspects and phases of architecting, designing, and implementing solutions in IT infrastructure with an emphasis on Azure cloud and hybrid solutions.
  • Used Kubernetes to deploy, load balance, scale, and manage Docker containers with multiple namespaced versions.
  • Developed CI/CD system with Jenkins on Kubernetes container environment, utilizing Kubernetes and Docker for the CI/CD system to build, test and deploy.
  • Managed Kubernetes charts using Helm. Created reproducible builds of the Kubernetes applications, templatize Kubernetes manifests, and provide a set of configuration parameters to customize the deployment and Managed releases of Helm packages.
  • Configured Kubernetes to automatically adjust all replica sets according to the deployment strategy, making it possible to perform updates without affecting application availability.
  • Configured and deployed several hypervisors and VMs running OpenStack for DevOps, testing and production environments.
  • Troubleshooting any part of the lifecycle services within the OpenStack including log files, message queues, database, computer hardware, and network connectivity.
  • Implemented new project builds using Jenkins and Maven as build framework tools; inspected builds in a staging environment before rolling out to production.
  • Setup the Jenkins jobs for Continuous integration process and to execute test cases.
  • Written PowerShell script to automate Active Directory and server tasks and generate reports for administrators and management.
  • Wrote Lambda functions in Python for AWS Lambda and invoked Python scripts for data transformations and analytics on large data sets in EMR clusters and AWS Kinesis data streams, alongside streaming tools such as Kafka.
  • Configured and implemented OpenStack Nova to provision virtual machines on KVM for compute capacity.
  • Designed and implemented OpenStack Keystone to provide unified authentication between OpenStack Nova, Swift & Glance APIs using IDM solution, LDAP & hybrid drivers.
  • Worked on the creation of Jenkins Pipeline using Groovy scripts to automate ANT/MAVEN application build by pulling code from GIT and GitHub repositories.
  • Built scripts using ANT and MAVEN build tools in Jenkins to move from one environment to other environments. Configured GIT with Jenkins and schedule jobs using POLL SCM option.
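The ECS desired-count adjustment in response to CloudWatch alarms described above comes down to a scaling decision rule. Below is a minimal sketch of that rule in Python; the thresholds, step size, and count bounds are assumptions, not the values used in the actual project.

```python
def desired_count(current, cpu_pct, min_count=2, max_count=10,
                  high=75.0, low=25.0):
    """Return the next desired ECS task count given average CPU utilization.
    Thresholds and bounds are illustrative defaults."""
    if cpu_pct > high:
        target = current + 1      # scale out one task at a time
    elif cpu_pct < low:
        target = current - 1      # scale in when the cluster is idle
    else:
        target = current          # within band: leave the service alone
    # Clamp to the service's configured capacity range.
    return max(min_count, min(max_count, target))
```

In practice this logic lives in CloudWatch alarm actions wired to ECS service auto scaling policies; the sketch just makes the decision rule explicit and testable.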

Confidential

Cloud/DevOps Engineer

Responsibilities:

  • Developed and supported key pieces of the company's AWS cloud infrastructure. Built and managed a large deployment of Ubuntu Linux instances with Opscode Chef.
  • Provisioned infrastructure for our environments by building AWS CloudFormation stacks from resources including VPC, EC2, S3, RDS, DynamoDB, IAM, EBS, Route53, SNS, SES, SQS, CloudWatch, Security Groups, and Auto Scaling.
  • Created NAT and proxy instances in AWS and managed route tables, EIPs, and NACLs. Configured a Virtual Private Cloud (VPC) with networked subnets containing servers.
  • Wrote Chef Cookbooks for various DB configurations to modularize and optimize end product configuration, converting production support scripts to Chef Recipes and AWS server provisioning using Chef Recipes.
  • Performed development and version control of Chef Cookbooks, testing of Cookbooks using Test Kitchen and running recipes on nodes managed by on-premise Chef Server.
  • Worked with the Chef Ohai plugin, Chef handlers, and push jobs, with exposure to the Chef Supermarket to leverage existing cookbooks for quick automation of general deployment and infrastructure tasks.
  • Integrated SonarQube with Jenkins using Maven to get the Quality Analysis for all the Projects pre-deployment. Discussed the report with Developers to explain the SonarQube Report and to help improve code Quality.
  • Implemented a Continuous Delivery pipeline with Docker, Jenkins and GitHub. Responsible for installation & configuration of Jenkins to support various Java builds and Jenkins plugins to automate continuous builds and publishing Docker Images to the Nexus Repository.
  • Worked on Docker Engine and Docker Machine environments to deploy microservices-oriented environments for scalable applications, Docker Swarm for cluster hosting and container scheduling, and Docker Compose to define multi-container applications.
  • Virtualized servers in Docker as per test environments and Dev-environments requirements and configured automation using Docker containers.
  • Worked on the VMware application in Splunk, scheduling components that manage data collection tasks for the API data; the Collection Configuration dashboard coordinates the flow of data with the data collection nodes.
  • Automated Java Builds with Maven and implemented multiple plugins for Code analysis, Junit, Code coverage, PMD, SonarQube, etc. Installed and administered Artifactory repository to deploy the artifacts generated by Maven.
  • Written shell scripts with Bash, Python to automate tasks like provisioning servers, installing, configuring packages and deploying applications on multiple servers in the Prod & Non-prod environments.
  • Worked with application/database team to resolve issues for performance Tuning and Management of Linux servers
  • Worked on analyzing Hadoop cluster using different big data analytic tools including Pig, Hive and MapReduce.
  • Integrated Oozie with Pig, Hive, Sqoop and developed Oozie workflow for scheduling and orchestrating the Extract, Transform, and Load (ETL) process within the Cloudera Hadoop.
  • Implemented several scheduled Spark, Hive, and MapReduce jobs in the Hadoop MapReduce distribution.
  • Implemented optimized map joins to get data from different sources to perform cleaning operations before applying algorithms.
  • Analyzed data that needed to be loaded into Hadoop and worked with the respective source teams to get table information and connection details.
  • Explored Spark, improving performance and optimizing the existing algorithms in Hadoop using Spark Context, Spark SQL, DataFrames, and pair RDDs.
  • Created Lambda scripts in Python for executing the EMR jobs.
  • Scheduled clusters with CloudWatch and created Lambda functions to generate operational alerts for various workflows.
  • Designed and developed ETL processes in AWS Glue to migrate campaign data from external sources such as S3 (ORC/Parquet/text files) into AWS Redshift.

Environment: AWS, GIT, GITHUB, SonarQube, Jenkins, Maven, Nexus, Ansible, Chef, LVM, Splunk, Nagios, DynamoDB, Python, shell scripting, Linux, Hadoop, Spark, Pig.
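The Glue ETL work above centers on cleaning and typing raw campaign records before loading them into Redshift. The sketch below illustrates that kind of transform in plain Python on CSV input; the column names and sample data are assumptions, and a real Glue job would operate on DynamicFrames rather than local strings.

```python
import csv
import io

# Illustrative raw extract; field names are assumptions.
RAW = """campaign_id,clicks,spend
CMP-001,120,45.50
CMP-002,,12.00
CMP-003,80,
"""

def clean_rows(text):
    """Drop rows with missing metrics and cast types, as a Glue transform
    might do before loading into Redshift."""
    rows = []
    for rec in csv.DictReader(io.StringIO(text)):
        if not rec["clicks"] or not rec["spend"]:
            continue  # reject incomplete records rather than load nulls
        rows.append({"campaign_id": rec["campaign_id"],
                     "clicks": int(rec["clicks"]),
                     "spend": float(rec["spend"])})
    return rows
```

Rejecting malformed rows at the transform stage keeps type errors from surfacing later as Redshift COPY failures.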

Confidential

Linux Administrator

Responsibilities:

  • Day-to-day duties involved Linux server maintenance and supporting the developer team with application issues, tuning, troubleshooting, and running software.
  • Administered and Maintained RHEL 4.x, 5.x, 6.x and Solaris 8/9.
  • Installed the latest patches for Oracle on Red Hat Linux servers; configured and administered Sendmail, Samba, and Squid servers in the Linux environment.
  • Set up the Linux Cron jobs for automating various build related jobs and application data synchronization jobs.
  • Responsible for building Linux OS servers using the Kickstart automation tool.
  • Configured Kickstart and Jumpstart servers to initiate installation of RedHat Linux and Solaris on several machines at once.
  • Updated previous LDAP tools to work with a newer version of Ruby on Rails.
  • Involved in Installing, Configuring and Upgrading of RedHat Linux AS 4/5, Solaris 9/10 operating systems.
  • Performed automated installations of the operating system using Kickstart for Red Hat Enterprise Linux 5/6 and Jumpstart for Solaris 9/10.
  • Administered and supported distributions of Linux, including Linux Enterprise Desktop, SUSE Linux Enterprise Server, RedHat and Solaris.
  • Installed, maintained, and upgraded Drupal and WordPress on the LAMP stack, and configured the LAMP stack on Unix/Linux servers.
  • Configured the NIS, NIS+ and DNS on RedHat Linux 5.1 and update NIS maps and organize the RHN Satellite Servers in combination with RHN Proxy Server.
  • Worked on Linux Package installation using RPM and YUM, provisioned system with LVM.
  • Developed, customized and build packages on Solaris and rpms on Linux for deployment on various servers through Software Development Life Cycle.

Environment: Oracle on Red Hat Linux, Samba, Squid, RedHat Linux AS 4/5, Solaris 9/10, Linux Enterprise Desktop, SUSE Linux Enterprise Server, RedHat, Solaris, LDAP.
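The cron-based automation of build and data-synchronization jobs mentioned above boils down to composing and validating crontab entries. A minimal sketch in Python (the script path is an assumption):

```python
def cron_entry(minute, hour, command):
    """Compose a daily crontab line after validating the time fields."""
    if not (0 <= minute <= 59 and 0 <= hour <= 23):
        raise ValueError("minute must be 0-59 and hour 0-23")
    # Day-of-month, month, and day-of-week are wildcarded: run every day.
    return f"{minute} {hour} * * * {command}"

# e.g. a nightly data-sync job at 02:30 (illustrative path).
nightly_sync = cron_entry(30, 2, "/opt/scripts/sync_app_data.sh")
```

Validating the fields before writing them avoids silently broken schedules when entries are generated by other scripts.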
