Sr. DevOps/AWS Engineer Resume
Parsippany, NJ
SUMMARY
- 8+ years of experience in various Linux/Windows environments, administering, implementing, integrating, troubleshooting and maintaining both on-premises and cloud infrastructures.
- Automated the continuous integration and continuous delivery (CI/CD) process for build and release management using agile methodologies, including monitoring, configuration, troubleshooting and maintenance of cloud and DevOps environments with the required toolset.
- Experience in message streaming using Apache Kafka.
- Extensive hands-on DevOps experience with technologies such as Puppet, Chef, Git, GitLab CI, SVN, Jenkins, Docker, Ansible, Kubernetes, Docker Swarm, Ant and Maven.
- In-depth understanding of Amazon Web Services, including EC2, S3, VPC, IAM, CloudWatch, RDS, DynamoDB, SNS, STS, ELB, Auto Scaling, NAT Gateway, CloudFormation, CloudFront, Route53, Lambda, etc.
- Experience with Windows Azure IaaS: Virtual Networks, Virtual Machines, Cloud Services, Resource Groups, ExpressRoute, Traffic Manager, VPN, Load Balancing, Application Gateways and Auto-Scaling.
- Involved in installation, configuration management, maintenance, upgrades and backups of Linux operating systems such as RHEL, CentOS and Ubuntu, as well as Windows.
- Experience working with web servers such as Apache and application servers such as WebLogic, Tomcat, WebSphere and JBoss to deploy code.
- Experience in Blue/Green deployment strategy by creating new applications which are identical to the existing production environment using automation frameworks such as OpenStack, Cloud Formation and Terraform.
- Expertise in app containerization with Docker, including creating Docker images and containers, using a Docker Registry and the cloud-based registry Docker Hub to store images, and Docker Swarm to manage containers.
- Experience in creating clusters using Kubernetes, creating pods, replication controllers, deployments, labels, health checks and ingress by writing YAML files and managing Kubernetes charts using Helm.
- Unique experience with Pivotal Cloud Foundry (PCF) architecture and design, troubleshooting issues with platform components, and developing global/multi-regional deployment models and patterns for large scale developments/deployments on Cloud Foundry and AWS.
- Experience in migration of all servers from on-premises to Kubernetes containers & wrote the scripts in Python, Perl and Shell Scripts to monitor installed enterprise applications.
- Converted existing Terraform modules that had version conflicts to use CloudFormation during Terraform deployments, gaining more control and filling in missing capabilities.
- Used Terraform to map more complex dependencies and identify network issues and worked with Terraform key features such as infrastructure as code, execution plans, resource graphs and change automation.
- Hands on experience in Architecting Legacy Data Migration projects such as Teradata to AWS Redshift migration and from on-premises to AWS Cloud.
- Experience in migrating production infrastructure into Amazon Web Services using AWS CloudFormation, CodeDeploy, CodeCommit, CodeBuild, CodePipeline, Chef, EBS and OpsWorks.
- Installed, configured and tested an open-source Cassandra multi-node cluster distributed across multiple data centers (3 DCs with 8 nodes each, 24 nodes total).
- Streamlined installation of OpenShift on partner cloud infrastructure such as AWS. Edited and managed the creation of relevant content for the OpenShift online channel.
- Integrated the defect tracking tool JIRA with Confluence and the Jenkins CI server to identify, log, track and document issues in real time. Configured Nagios to monitor EC2 Linux instances with Ansible automation.
- Developed the OpenShift Test-Drive for Admins (Installation of OCP 3.5, Cluster management and Project Template to explore pods, services, etc) - Qwiklabs & AWS EC2 used to provision the lab guide.
- Created roles, users and groups, added users to groups, attached customized policies to provide least-privilege access to resources, and customized JSON templates using AWS Identity and Access Management (IAM).
- Experience in provisioning and administering EC2 instances and configuring EBS, S3- cross region replication, Elastic Load Balancer, configure Auto scaling, setting up CloudWatch alarms, Virtual Private Cloud (VPC), mapping with multi AZ VPC instances and RDS, based on architecture.
- Involved in AWS S3 services like creating buckets, configuring buckets with permissions, logging, versioning, and tagging & lifecycle policies to back the data from AWS S3 to AWS Glacier.
- Designed end to end automation of infrastructure and continuous delivery of the applications by integrating cloud formation scripts, Jenkins, AWS & CHEF cookbooks and recipe.
- Administration experience in branching, tagging, developing and managing pre-commit and post-commit hook scripts, and maintaining versions across Source Code Management (SCM) tools such as Git and Subversion (SVN) on Linux (Red Hat, CentOS and Ubuntu) and Windows platforms.
- Proficient in building deployable artifacts (WAR, JAR, EAR, Zip and Tar) from source code using Maven (pom.xml), Ant (build.xml) and Gradle (build.gradle), and worked with Groovy scripts to automate configuration in Jenkins.
- Worked on Hudson, Jenkins for continuous integration and for end to end automation for all build and deployments including setting up pipeline jobs and upstream/downstream job configurations in Jenkins.
- Effective in creating functions and assigning roles in AWS Lambda to run Python scripts, and used Java to perform event-driven processing.
- Extensive experience in using Groovy, Maven and ANT as a Build Tool for the building of deployable artifacts (war & ear) from Source Code Management (SCM) tools like GIT, Subversion (SVN).
- Managed Maven environment by setting up local, remote and central repositories with required configuration in maven configuration files and defined dependencies and plugins in Maven pom.xml for various activities and integrated Maven with GIT to manage and deploy project related tags.
- Extensively worked on Chef roles, cookbooks, recipes, templates, resources, attributes and data bags. Proficient in setting up Chef servers and workstations and bootstrapping infrastructure Chef nodes for configuration management.
- Experience in writing Ansible playbooks for installing WebLogic/tomcat application, deployment of WAR, JAR, and EAR files across all the environments.
- Experience in writing Ansible playbooks by using YAML script to launch AWS instances and used to manage web applications, configuration files, used mount points and packages.
- Responsible for creating puppet modules and manifest files from scratch and experience in editing existing puppet manifests and automated manual installation processes using puppet.
- Experience using Azure Stack as hybrid cloud computing software, combining IaaS and PaaS services in a software stack. Enabled enterprises to keep workloads on-premises or seamlessly move them to the Azure public cloud as needed.
- Experience with optimization and performance tuning of SQL Server databases and stored procedures using SQL Server Profiler and query execution plans.
- Experience monitoring infrastructure to identify and troubleshoot issues using tools such as Nagios, ServiceNow, Splunk and JIRA. Architected a lightweight, custom Kafka broker design that reduced message retention from the default 7 days to 30 minutes.
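As a minimal sketch of the S3-to-Glacier lifecycle work mentioned above: the rule below builds the lifecycle configuration in the shape boto3's `put_bucket_lifecycle_configuration` expects. The prefix, transition and expiration windows are hypothetical values chosen for illustration.

```python
def glacier_lifecycle_rule(prefix: str, transition_days: int = 30,
                           expire_days: int = 365) -> dict:
    """Build one lifecycle rule: move objects under `prefix` to Glacier
    after `transition_days`, delete them after `expire_days`."""
    return {
        "ID": f"archive-{prefix.strip('/') or 'all'}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": transition_days, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": expire_days},
    }

# Example configuration for a hypothetical "logs/" prefix.
lifecycle_config = {"Rules": [glacier_lifecycle_rule("logs/", 30, 365)]}

# With real credentials this would be applied via:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration=lifecycle_config)
```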
TECHNICAL SKILLS
Cloud services: AWS EC2, ELB, VPC, RDS, IAM, Route53, CloudFormation, S3, CloudWatch, CloudFront, AWS Snowball, CloudTrail, EBS, SNS, STS, SQS, SWF, DynamoDB, NAT Gateways, Subnets, Security Groups, NACLs, CodeDeploy, CodePipeline, CodeBuild and CodeCommit / GCP
DevOps Tools (CI/CD Tools): Chef, Puppet, OpenStack, SVN, Nagios, Jenkins, Docker, Docker Swarm, Maven, Ant, Git, Kubernetes, Ansible
Scripting Languages: Shell Scripting, Python, Java, AngularJS, Ruby, Perl, YAML, Node.js and Groovy
Networking Protocols: TCP/IP, DNS, DHCP, Cisco Routers/Switches, WAN, LAN, FTP/TFTP and SMTP
Bug tracking and monitoring Tools: JIRA, Bugzilla, Nagios, Cloud Watch, Splunk, ELK and SonarQube
Web Development: HTML, CSS, HTTP, XHTML, XML and JavaScript
Version Control Tools: Subversion (SVN), Git, Bitbucket, GitHub, CVS
Database: MySQL, Oracle, MS Access, MS SQL Server, Azure, NoSQL, Cassandra, Kafka
Web Servers: Tomcat, WebLogic, Apache, WebSphere, JBoss, VMware and Nginx
Operating Systems: Red Hat Linux 4/5/6/7, Windows Server 2003/2008/2008 R2/2012, Windows 2000/XP/7, Ubuntu, CentOS, Solaris and Debian
PROFESSIONAL EXPERIENCE
Sr. DevOps/ AWS Engineer
Confidential - Parsippany, NJ
Responsibilities:
- Implemented new processes and policies for the build process and was involved in auditing.
- Built the continuous integration environment (Jenkins) and continuous delivery environment (Puppet).
- Implemented Automated Application Deployment and written Deployment scripts and automation scripts.
- Led the automation deployment team, worked with Puppet, and wrote Puppet modules for application deployment.
- Worked with broad range of AWS Cloud Services like EC2, S3, ELB, Glacier, and Cloud Front, Code Deploy, code commit, code build, code pipeline, Elastic Beanstalk, AWS snowball, Auto Scaling, Route53, AMI, SNS, SQS, DynamoDB, Elastic search and CloudWatch.
- Implemented a production ready, load balanced, highly available, fault tolerant Kubernetes infrastructure and created Jenkins jobs to deploy applications to Kubernetes Cluster.
- Worked with Kubernetes to provide a platform for automating deployment, scaling, and operations of application containers across clusters of hosts and managed containerized applications using its nodes, config maps, selectors, and services.
- Implemented serverless applications using CloudFront, API Gateway, SNS and AWS Lambda.
- Created Clusters using Kubernetes, managed them on OpenShift platform and worked on creating many pods, replication controllers, services, deployments, labels, health checks and ingress by writing YAML files.
- Used Docker to containerize custom web applications and deployed on application servers with Ubuntu instances through SWARM Cluster and automated application deployment in cloud using Docker HUB, Docker Swarm, and Vagrant.
- Migrated production infrastructure into Amazon Web Services using AWS CloudFormation, CodeDeploy, CodeCommit, CodeBuild, CodePipeline, Chef, EBS and OpsWorks.
- Configured and deployed instances in AWS environments and data centers; also familiar with Compute Engine, Kubernetes Engine, Stackdriver Monitoring and Elasticsearch, and managed security groups on both.
- Used Docker to virtualize deployment containers and push the code to EC2 cloud. Built additional Docker Slave nodes for Jenkins using custom built Docker images and instances.
- Orchestrated and migrated CI/CD processes using CloudFormation and Terraform templates, and containerized the infrastructure using Docker set up in Vagrant, AWS and Amazon VPCs.
- Created Terraform modules to create instances in AWS and automated the process of resource creation in AWS.
- Worked closely with application team and support on various performance and configuration issues on daily basis and Planned release schedules with agile methodology.
- Deployed and managed sandbox and production environments using the LAMP stack in cloud IaaS.
- Developed a web application (LAMP, Zend Framework and AngularJS); excellent LAMP development experience, including Apache.
- Involved in creating IAM user accounts, groups, adding user to the groups, generating custom policies, assigning to groups and users, customizing the JSON template, Created snapshots and Amazon Machine Images (AMI) of the instance for backup.
- Implemented Kafka security features using SSL, initially without Kerberos; later set up Kerberos with users and groups to enable more fine-grained, advanced security features.
- Used Spark-Streaming APIs to perform necessary transformations and actions on the fly for building the common learner data model which gets the data from Kafka in near real time and Persists into Cassandra.
- Configured, deployed and maintained multi-node Dev and Test Kafka clusters.
- Performed all necessary GIT configuration support for different projects and Worked on branching, versioning, labeling, and merging strategies to maintain GIT repository, GIT Hub.
- Worked on Jenkins for continuous integration and for end to end automation for all build and deployments including setting up pipeline jobs and upstream/downstream job configurations in Jenkins.
- Integrated Jenkins with various DevOps tools such as Nexus, Puppet, configuring Kubernetes container environments, utilizing Kubernetes and Docker for the runtime environment.
- Involved in editing the existing Ant (build.xml) files in case of errors or changes in the project requirements and defined dependencies and plugins in Maven (pom.xml) for various activities and integrated Maven with GIT to manage and deploy project related tags.
- Built scripts using the Ant (build.xml) and Maven (pom.xml) build tools with Jenkins, scheduled jobs using the Poll SCM option to automate the code checkout process, and built deployable artifacts such as WAR and EAR files that were pushed to Nexus after code-quality checks.
- Written/ Developed Ansible Playbooks to automate the entire deployment process as well as infrastructure admin tasks.
- Built a fully automated Jenkins pipeline as IaC (Infrastructure as Code) with a SonarQube, Maven and Docker build process.
- Installed and configured SonarQube code rules and keys for code analysis, and created SonarQube dashboards for team members based on their roles to monitor the progress of the project source code.
- Generated nightly builds with integration to code-quality tools such as SonarQube and Veracode.
- Worked on data processing using tools like Hadoop, Data Lake scalable solutions to manage, analyze and extract all relevant and available data.
- Developed AWS S3-based application with Data Lake solution using S3 and provide the primary storage platform to provide the accurate designing capability to the application.
- Deployed and configured Elasticsearch, Logstash and Kibana (ELK) for log analytics, full-text search and application monitoring, in integration with AWS Lambda and CloudWatch.
- Designed logical and physical data models for various data sources on Redshift and developed ETL workflows to extract data from Oracle and load it into the data mart.
- Hands-on experience on Ansible and Ansible Tower as Configuration management tool, to automate repetitive tasks, quickly deploys critical applications, and proactively manages change.
- Used Ansible server to manage and configure nodes, Managed Ansible Playbooks with Ansible roles, automate repetitive tasks, quickly deploys critical applications, and proactively manages change. Used file module in Ansible playbook to copy and remove files on remote systems.
- Utilized Agile Methodologies - Scrum meetings to manage full life-cycle development of the project.
- Tested, evaluated and involved in troubleshooting of different NoSQL database systems such as MongoDB, Cassandra and their cluster configurations to ensure high availability in various crash scenarios.
- Used Splunk as a monitoring tool to identify and resolve infrastructure problems before they affect critical processes, and used event handlers to automatically restart failed applications and services.
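The Kubernetes deployment work described above (pods, replicas, health checks defined in YAML) can be sketched as a Python helper that builds a minimal Deployment manifest dict; the app name, image and probe path are hypothetical placeholders, not values from the original projects.

```python
def deployment_manifest(name: str, image: str, replicas: int = 3,
                        port: int = 8080) -> dict:
    """Build a minimal Kubernetes apps/v1 Deployment manifest as a dict
    (the same structure normally written in YAML)."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                        # Liveness probe corresponds to the "health checks"
                        # mentioned above; /healthz is an assumed endpoint.
                        "livenessProbe": {
                            "httpGet": {"path": "/healthz", "port": port},
                            "initialDelaySeconds": 10,
                        },
                    }],
                },
            },
        },
    }

manifest = deployment_manifest("webapp", "registry.example.com/webapp:1.0")
```

Serialized to YAML, this is the kind of file a Jenkins job would apply with `kubectl apply -f`.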
Environment: AWS, Virtual Private Cloud, AWS Lambda, CloudFormation templates, Snapshots, AWS AMIs, Tomcat, Nginx, Terraform, Docker Engine, Docker, Grafana, Kubernetes, OpenShift, WebSphere, WebLogic, Apache Tomcat, Amazon cloud server, JBoss, Packer, Kafka, Git, GitHub, Maven, Nexus, JFrog, Jenkins, Ansible, Vagrant, JIRA, Nagios, SonarQube, YAML, Bash, Python, PowerShell, Ruby, Groovy and Perl.
Sr. DevOps/AWS Engineer
Confidential - Cambridge, MA
Responsibilities:
- Worked on designing and deploying a multi-tier application utilizing almost all of the main AWS services (EC2, S3, RDS, VPC, IAM, ELB, CloudWatch, Route 53, Lambda and CloudFormation), focused on a high-availability, fault-tolerant environment.
- Hands on experience in setting up databases in AWS using RDS, storage using S3 bucket and configuring instance backups to S3 bucket to ensure fault tolerance and high availability.
- Used Jenkins pipelines, which helped us, drive all Micro services builds out to the Docker registry and then deployed to Kubernetes.
- Used OpenShift for efficient container orchestration, allowing rapid container provisioning, deployment, scaling and management, and used OpenShift to build, launch and host applications in the cloud.
- Set up and maintained Jenkins/Drone pipelines for application continuous integration/continuous deployment.
- Created NAT gateways and instances to allow communication from the private instances to the internet through bastion hosts and used Kinesis to stream the data over thousands of data sources.
- Converting existing AWS infrastructure to serverless architecture (AWS Lambda, Kinesis) deployed via Terraform and AWS Cloud formation. Designed and implemented an entire infrastructure to power micro services architecture on AWS using Terraform.
- Used Terraform in AWS Virtual Private Cloud (VPC) to automatically setup and modify settings by interfacing with control layer and automated data log dashboards with the stack through Terraform scripts.
- Used Docker continuous deployment tool to run, ship and deploy the application. Worked with container base deployments using Docker, working with Docker images, Docker Hub and Docker-registries
- Deployed various databases and applications using Kubernetes cluster management; among the services were Redis, Node.js apps, Nginx, etc.
- Worked with Groovy scripts in Jenkins to execute jobs for a continuous integration pipeline where Groovy Jenkins Plugin and Groovy Post Build Action Plugin is used as a build step and post build actions.
- Expert in using build tools like Maven and Ant for the building of deployable artifacts such as .war & .jar from source code where Maven tool is used to do the builds, integrated ANT to Eclipse and did local builds.
- Created self-service, auto-provisioning, and auto-scaling environments using VMware Orchestrator and RedHat Enterprise, OpenStack, Cloud Formation with OpenShift, and Ansible open source software.
- Worked with Ansible playbooks and inventory which are the entry point for Ansible provisioning and management where the automation is defined through tasks and run Ansible scripts to provision servers.
- Wrote ANSIBLE Playbooks with Python, SSH as the Wrapper to Manage Configurations of AWS Nodes and Test Playbooks on AWS instances using Python. Run Ansible Scripts to provision Dev servers.
- Integrated the static code analysis and code coverage tool SonarQube and implemented quality gates to integrate with further workflows.
- Implemented promotion workflows for each of the services, integrated with SonarQube quality gates to drive promotion decisions.
- Defined dependencies and plugins in Maven pom.xml for various activities and integrated Maven with GIT to manage and deploy project related tags and servers.
- Worked on AWS OpsWork, AWS Lambda, AWS Code Deploy, AWS cloud formation and cloud foundry.
- Served application data using Lambda functions to store data in the NoSQL database DynamoDB. Configured REST APIs using API Gateway that invoke Lambda functions to perform the necessary operations.
- Developed Nagios plug-in scripts, various reports and project plans in support of initiatives to maintain Nagios distributed system monitoring and management via several data-extrapolating applications.
- The core service uses the main database while the other microservices use their own databases to access and store data. Extensively worked on the NoSQL databases Cassandra and MongoDB.
- Used SonarQube in the build system for continuously inspecting code quality, used Nagios for monitoring, and performed log analysis using the ELK stack and created monitoring charts.
- Used Elasticsearch to power not only search but also end-to-end logging and monitoring of our systems via the ELK stack and Beats. Responsible for designing and deploying new ELK clusters.
- Designed and built multi-terabyte, full end-to-end data warehouse infrastructure from the ground up on Redshift for large-scale data, and migrated the on-premises database structure to the Redshift data warehouse with AWS tools.
- Implemented and maintained the monitoring and alerting of production, corporate servers and storage using AWS CloudWatch for efficiency.
- Used Jira, Crucible bug tracking tool for both hosted and local instances for issue tracking, workflow collaboration and tool-chain automation.
- Developed and designed a system to collect data from multiple portals using Kafka and then process it using Spark.
- Integrated Apache Kafka for data ingestion and ran log aggregation, website activity tracking and the commit log for distributed systems using Apache Kafka.
- Used Nagios to monitor and manage server logs for different environments in the organization.
- Supported 24x7 production computing environments, providing on-call and weekend support.
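The API Gateway -> Lambda -> DynamoDB pattern described above can be sketched as a handler that routes on the HTTP method from the API Gateway event. This is a hypothetical illustration: the table is injected so the logic can be exercised without AWS, and the `id` key schema is an assumption; in a real Lambda the table would come from `boto3.resource("dynamodb").Table(...)`.

```python
import json

def handler(event: dict, table) -> dict:
    """Route an API Gateway proxy event: POST stores an item, GET fetches one."""
    method = event.get("httpMethod", "GET")
    if method == "POST":
        item = json.loads(event.get("body") or "{}")
        table.put_item(Item=item)
        return {"statusCode": 201, "body": json.dumps(item)}
    key = event.get("queryStringParameters") or {}
    found = table.get_item(Key=key).get("Item")
    return {"statusCode": 200 if found else 404, "body": json.dumps(found)}

class FakeTable:
    """In-memory stand-in for a DynamoDB table keyed by 'id' (test only)."""
    def __init__(self):
        self.items = {}
    def put_item(self, Item):
        self.items[Item["id"]] = Item
    def get_item(self, Key):
        item = self.items.get(Key.get("id"))
        return {"Item": item} if item else {}
```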
Project Environment: AWS EC2, S3, VPC, RDS, IAM, Route 53, Cloud Formation, CloudFront, CloudWatch, Docker Hub, Docker Swarm, Kubernetes, NAT instances, Terraform, Jenkins, Chef, Puppet, Nagios, Maven, Ant, GIT, Kafka, HTML, CSS, Node JS, Ubuntu, HTTP, WebSphere.
DevOps/AWS Engineer
Confidential - New York, NY
Responsibilities:
- Worked with software engineers to develop tools that support rapid creation, deployment, iteration and ongoing support of web applications. Used a Git branching strategy that included develop branches, feature branches, staging branches and master; pull requests and code reviews were performed.
- Built deployable artifacts using build management tools with Chef; initially used Ant, writing build.xml for building Java/J2EE applications, and later migrated to Maven (pom.xml).
- Written multiple cookbooks in Chef using Ruby scripting language for various DB configurations to modularize and optimize end-product configuration, converting production support scripts to Chef Recipes.
- Developed version control of Chef Cookbooks, testing of cookbooks using Food Critic and Test Kitchen and running recipes on nodes managed by on-premise Chef Server.
- Set-up a continuous build process in Visual Studio Team Services to automatically build on new check-in of code then deploy that new build to the Azure Web application.
- Selected the appropriate Azure service based on compute, data or security requirements and leveraged Azure SDKs to interact with Azure services from the application.
- Worked on Docker Compose and Docker Machine to create Docker containers for testing applications in the QA environment, and automated the deployment, scaling and management of containerized applications across clusters of hosts using Kubernetes.
- Repaired broken Chef recipes and corrected configuration problems with other Chef objects according to application requirements.
- Exposed to all aspects of software development life cycle (SDLC) such as Analysis, Planning, Developing, Testing, and Implementing and Post-production analysis of the projects.
- Created clusters using Kubernetes and worked on creating many pods, replication controllers, deployments, labels, health checks and ingress by writing YAML files
- Proficient in using Docker in swarm mode and Kubernetes for container orchestration, by writing Docker files and setting up the automated build on Docker HUB.
- Implemented continuous deployment to deploy Docker images to the OpenShift platform for the DEV and TEST environments.
- Created the pods in OpenShift Environment by using Docker file and Created clusters from the docker containers with the help of Kubernetes on the OpenShift platform.
- Installed Sonar on the Jenkins server and configured it with the build process for code analysis, better code quality and code metrics, and rapid feedback for development teams and managers.
- Experience on Configuring the Chef-Repo, Setup multiple Chef Workstations and Developing Cookbooks for automating deployments via Chef.
- Worked on setting up VirtualBox and Vagrant boxes for testing Chef cookbooks using Test Kitchen, and was involved in fixing RuboCop and linting issues for several cookbooks.
- Involved in NoSQL database design, integration and implementation.
- Experience in automating day-to-day activities by using Windows PowerShell for Creating VM's, Virtual Networking, VPN, Key Vault, Load balancer and Disk Encryption.
- Implemented Splunk infrastructure and used Splunk to capture and analyze data from various layers load balancers, web servers and application servers.
- Built & Deployed Java/J2EE to web application server in agile continuous integration environment and automated Labelling activities in TFS once deployment is done.
- Developed stored procedures and triggers in MySQL to lower traffic between servers and clients.
- Configured back-ups twice a week. Streamlined applications delivery to get applications out to customers faster.
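A small sketch of the Splunk log collection described above, assuming Splunk's HTTP Event Collector (HEC): the function builds the JSON payload for forwarding one application log line. The index and sourcetype names are hypothetical.

```python
import json
import time

def hec_event(message: str, host: str, sourcetype: str = "app:log",
              index: str = "main") -> str:
    """Build a Splunk HEC event payload as a JSON string."""
    payload = {
        "time": int(time.time()),
        "host": host,
        "sourcetype": sourcetype,
        "index": index,
        "event": {"message": message},
    }
    return json.dumps(payload)

# In production this JSON would be POSTed to
#   https://<splunk-host>:8088/services/collector/event
# with an "Authorization: Splunk <HEC token>" header.
```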
Project Environment: Git, Subversion, Bitbucket, IaaS, PaaS, DynamoDB, RDBMS, continuous integration, continuous delivery, continuous deployment pipelines, Glacier, ELB (Load Balancers), RDS, SNS, SWF, IAM, Chef, Jenkins, Ubuntu, CentOS, CloudWatch, CloudFormation, S3 buckets, VPN and Docker.
Sr Full Stack Developer/ DevOps
Confidential - Denver, CO
Responsibilities:
- Used IAM for creating roles, users, groups and implemented MFA (Multi Factor Authentication) to provide additional security to AWS account and its resources.
- Coordinated with developers to establish and apply appropriate branching, labeling and naming conventions using Git source control, and analyzed and resolved conflicts related to merging of source code in Git.
- Built S3 buckets and managed policies for S3 buckets and used S3 bucket and Glacier for storage and backup on AWS
- Created a fully Automated Build and Deployment Platform on docker and Kubernetes cluster and coordinating code build promotions and orchestrated deployments using Jenkins, Ansible, and Git.
- Strong experience with CI (Continuous Integration) and CD (Continuous Deployment) methodologies using Ansible, Jenkins.
- Used Ansible to run ad-hoc commands and playbooks to automate tasks.
- Wrote Ansible playbooks, the entry point for Ansible provisioning, where the automation is defined through tasks in YAML format; ran Ansible scripts to provision dev servers.
- Managed the security and compliance of all Ansible users and handled application deployment.
- Configured Ansible playbooks with Ansible Tower and wrote playbooks using YAML.
- Designing and implementing CI (Continuous Integration) system: configuring Jenkins servers, Jenkins nodes, creating required scripts (Perl & Python), and creating/configuring VMs (Windows/Linux).
- Wrote Python scripts using Boto3 to automatically spin up instances in AWS EC2 and OpsWorks stacks, integrated with Auto Scaling to automatically spin up servers with configured AMIs.
- Used Jenkins and pipelines to drive all micro services builds out to the Docker registry and then deployed to Kubernetes.
- Created Puppet Manifests to provision Apache Web servers, Tomcat servers, Nginx, Apache Spark and other applications.
- Implemented monitoring solutions such as Zabbix, Nagios and AWS CloudWatch.
- Designed and implemented the backup strategy for all critical systems such as build machines, bug tracking tools and central repositories, and collected all release documents for the scheduled release in the second week of every month as well as off-cycle releases.
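The Boto3 automation described above can be sketched as follows, as a hedged illustration rather than the original scripts: spin up EC2 instances from a configured AMI. The client is passed in so the function can be tested with a stub; with real credentials you would pass `boto3.client("ec2")`. The AMI ID and instance type are placeholders.

```python
def launch_instances(ec2, ami_id: str, count: int = 1,
                     instance_type: str = "t2.micro") -> list:
    """Launch `count` EC2 instances and return their instance IDs."""
    resp = ec2.run_instances(
        ImageId=ami_id,
        MinCount=count,
        MaxCount=count,
        InstanceType=instance_type,
    )
    return [inst["InstanceId"] for inst in resp["Instances"]]

class StubEC2:
    """Test stub: records run_instances calls and fabricates instance IDs."""
    def __init__(self):
        self.calls = []
    def run_instances(self, **kwargs):
        self.calls.append(kwargs)
        n = kwargs["MaxCount"]
        return {"Instances": [{"InstanceId": f"i-{k:08d}"} for k in range(n)]}
```

Injecting the client this way also makes the same function usable against LocalStack or moto in CI.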
Environment: Subversion 1.6, Git, GitHub, Stash, Apache, Maven, Jenkins, Apache Tomcat, Shell Script, Ansible, Linux, Windows, Cloud Foundry, CloudWatch, Python, Kubernetes, Perl, AWS, DNS, Docker, PuTTY, Puppet, CentOS and RHEL
System Administrator
Confidential
Responsibilities:
- Set up and configured Linux (Red Hat) and Solaris servers/workstations for clients.
- Provided system support for Linux servers including patching, system backups and upgrading software.
- Used TFS for Software Configuration management and maintaining the versions of code.
- Team participated in Agile Process and used TFS for backlog grooming, task and dependency management.
- Directed the implementation and performance tuning of Windows 2003 Server environment for client’s global operations. Delivered a major improvement over old VPN system that catapulted productivity of remote sales force.
- Led in-house and consultant team in large-scale Linux server upgrade for multinational consulting firm, which significantly enhanced system performance.
- Replaced major manufacturer’s vulnerable network with robust security through joint architecture of firewall and DHCP.
- Stabilized, expanded and protected client network and PC environment. Built new file servers to maximize Web hosting, terminal server, file/print sharing and domain control performance.
- Coordinated critical build, deployment and application issues with various teams, including developers, network engineers, system engineers and release engineers.
- Facilitated monthly meetings with the client to document requirements and explore potential solutions.
- Logged all events to log files.
- Responsible for merging source code across multiple branches in TFS and making the code available for deployments. Created build definitions in TFS for Continuous Integration of source code along running unit and Integration tests.
- Experience in development with Perl, Python, PowerShell or other scripting languages.
- On-call support for 24/7 for troubleshooting production issues.
- Project Management for various UNIX/Linux/Windows system integration projects.
Environment: RHEL, Solaris, AIX and Windows, Shell, iPlanet 4.1, Python, BMC Remedy, Sun ONE 6.1, IIS 6.0, Windows 2008, Linux, Shell Scripting, Oracle 9i.