DevOps Engineer Resume
SUMMARY
- Over 11 years of practical IT experience deploying enterprise business continuity solutions.
- Skilled in AWS cloud computing: EC2, AMI, S3, EBS, Glacier, Snowball, CloudFront, CloudFormation, VPC, Route 53, Kinesis, EMR, Redshift, Elasticsearch, Lambda, SNS, SQS, and IoT
- Worked with the Apache ecosystem: Hadoop Distributed File System (HDFS) and Hive for aggregation and analytics of big data; MongoDB, Cassandra, CouchDB, SimpleDB, HBase, Apache Spark, Storm, Flume, Kafka, and Pig
- Knowledge of CI/CD technologies for both cloud-native and non-cloud applications; experience with build automation tools such as Chef, Jenkins, and Puppet
- Worked with Hadoop clusters in Amazon Elastic MapReduce (EMR) to migrate files to Redshift with AWS Data Pipeline
- Strong understanding of DW/BI technologies: data warehouse modeling, data integration, and visualization/reporting tools such as Talend, SnapLogic, TIBCO, and Tableau, plus a host of others available in the AWS Marketplace
- Generated weekly, monthly, and quarterly reports using standard SQL Server reports.
- Knowledge of disk and server configuration through RAID for SQL Server
- Knowledge of load balancing with Citrix NetScaler, Nginx, and HAProxy
- Experience in containerization and cluster management: Docker containers and repositories; scripting languages such as Bash and Perl
- Experience writing Ansible playbooks in YAML for automated provisioning, configuration management, and deployment (see the sketch after this list)
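Below is a minimal sketch of the kind of playbook described above, wrapped in a Bash driver. The "webservers" inventory group, the nginx package, and the inventory file name are illustrative assumptions, not details from an actual engagement.

```bash
#!/usr/bin/env bash
# Write a small Ansible playbook in YAML, then run it.
set -euo pipefail

cat > site.yml <<'EOF'
---
- name: Provision and configure web servers
  hosts: webservers          # hypothetical inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
EOF

# Run the playbook against a hypothetical inventory file.
ansible-playbook -i inventory.ini site.yml
```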
PROFESSIONAL EXPERIENCE
DevOps Engineer
Confidential, Philadelphia, PA
Responsibilities:
- Working with collaboration tools such as Jira and following Scrum as the agile methodology
- Experience with Splunk and Zenoss as monitoring tools, helping ensure 97% uptime
- Experience with Nexus as an artifact repository for storing build artifacts before testing and deployment
- Experience with SonarQube as a code quality and validation tool
- Install and configure Red Hat products including OpenShift.
- Experience with Jenkins as the CI/CD tool to build and release artifacts whenever changes are pushed to Git repositories
- Experience with infrastructure-level virtualization technologies such as VMware and VirtualBox
- Installation of Docker Engine on Linux, pulling Docker base images, running Docker commands, and persisting data and images by attaching volumes to containers (see the Docker sketch after this list)
- Experience with the OpenShift platform for container-based build, test, and deployment
- Experience writing Ansible playbooks for IT automation and multi-tier application deployment using simple task descriptions
- Experience writing recipes and uploading cookbooks to the Chef server for provisioning and configuring nodes
- Experience using AWS CloudFormation for automated deployment of AWS stack resources (see the CloudFormation sketch after this list)
- Experience creating modules and writing Puppet manifests on the server to manage node configuration
- Integrating AWS OpsWorks with Chef as the configuration management tool
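A minimal sketch of the Docker workflow referenced above: pull a base image, create a named volume, and run a container that persists data to that volume. The image, volume, and container names are illustrative assumptions.

```bash
#!/usr/bin/env bash
set -euo pipefail

docker pull nginx:latest           # pull the base image
docker volume create web_content   # named volume for persistence

# Run the container, mounting the volume so data survives restarts.
docker run -d --name web \
  -p 8080:80 \
  -v web_content:/usr/share/nginx/html \
  nginx:latest

docker ps --filter name=web        # verify the container is running
```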
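And a minimal CloudFormation sketch: a one-resource template deployed with the AWS CLI. The stack name and bucket name are hypothetical placeholders.

```bash
#!/usr/bin/env bash
set -euo pipefail

cat > template.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ArtifactBucket:                  # hypothetical S3 bucket resource
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-devops-artifacts-12345
EOF

# "deploy" creates the stack if absent, otherwise updates it via a change set.
aws cloudformation deploy \
  --template-file template.yaml \
  --stack-name example-artifact-stack
```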
AWS Architect/DevOps
Confidential
Responsibilities:
- Used AWS S3 buckets as a durable, highly available, and scalable storage resource and for static website hosting
- Used AWS Reserved and Spot Instances to manage costs and ensure budget efficiency
- Used AWS ElastiCache to serve frequently accessed data from memory rather than reading it from disk every time, optimizing performance
- Experienced in using AWS CloudFront as a content delivery resource
- Experience installing CI/CD tooling (Jenkins) on AWS EC2 instances
- Installation and configuration of source control management tools: Git, SVN, CVS
- Installation and configuration of build automation tools (Ant, Maven) and configuration of their repositories for artifact storage
- Implemented agile methodology and participated in Scrum meetings with DevOps and developers for daily operations
- Installation and configuration of continuous testing and monitoring tools: Selenium, JUnit, Nagios, Splunk
- Configured and managed a high-availability SQL Server on AWS using RDS
- Working understanding of languages and scripts such as Bash, PHP, Python, and Ruby
- Developed APIs from scratch using scripting languages such as Python
- Implemented partitioning, dynamic partitioning, and bucketing in Hive to improve performance when querying historical tables (see the Hive sketch after this list)
- Implemented proofs of concept on the Hadoop stack and various big data analytics tools, including migration from different databases (MSSQL, Oracle, and MySQL) to Hadoop
- Developed Pig scripts and Pig Load and Store functions
- Extracted and aggregated substantial amounts of log data using Apache Flume, staging the data in HDFS for further analysis
- Utilized Spark to improve the performance of existing Hadoop algorithms using SparkContext, Spark SQL, DataFrames, pair RDDs, and Spark on YARN
- Created an Amazon RDS Microsoft SQL Server instance and connected to it using Microsoft SQL Server Management Studio (see the RDS sketch after this list)
- Created S3 storage on AWS and implemented lifecycle policies for managing objects (see the lifecycle sketch after this list)
- Experienced in using Scala's immutability features in data processing
- Designed, deployed, and optimized MS SQL Server on AWS on EC2 and RDS
- Provisioned Redshift clusters for descriptive, predictive, and prescriptive analytics of big data with traditional business intelligence tools
- Mitigated an application spike that occurred when customers migrated to a new data source system by investigating with my team and applying a resolution that stabilized the application in a timely manner
- Worked with group management to understand their short-term goals and long-term vision and provided leadership for both internal and external continuous integration and continuous delivery initiatives
- Extensive knowledge of using Kafka to extract and ingest large volumes of streaming data into HDFS and distribute it quickly across cluster nodes for fast data processing
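The Hive sketch referenced above: dynamic partitioning plus bucketing on a historical table. The table and column names (sales_hist, sales_raw, order_date) are hypothetical.

```bash
#!/usr/bin/env bash
set -euo pipefail

hive -e "
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

CREATE TABLE IF NOT EXISTS sales_hist (
  order_id BIGINT,
  amount   DOUBLE
)
PARTITIONED BY (order_date STRING)
CLUSTERED BY (order_id) INTO 8 BUCKETS
STORED AS ORC;

-- The dynamic partition key is taken from the last SELECT column.
INSERT OVERWRITE TABLE sales_hist PARTITION (order_date)
SELECT order_id, amount, order_date FROM sales_raw;
"
```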
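The RDS sketch: creating a SQL Server instance with the AWS CLI and printing its endpoint for use in SQL Server Management Studio. The identifier, instance class, and credentials are placeholders.

```bash
#!/usr/bin/env bash
set -euo pipefail

aws rds create-db-instance \
  --db-instance-identifier example-mssql \
  --engine sqlserver-se \
  --license-model license-included \
  --db-instance-class db.m5.large \
  --allocated-storage 100 \
  --master-username dbadmin \
  --master-user-password 'REPLACE_ME'

# Wait for the instance, then print the endpoint to connect from SSMS.
aws rds wait db-instance-available --db-instance-identifier example-mssql
aws rds describe-db-instances --db-instance-identifier example-mssql \
  --query 'DBInstances[0].Endpoint.Address' --output text
```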
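The lifecycle sketch: transition objects under a prefix to Glacier after 90 days and expire them after a year. The bucket name, prefix, and day counts are illustrative.

```bash
#!/usr/bin/env bash
set -euo pipefail

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-then-expire",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [ { "Days": 90, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket example-log-bucket \
  --lifecycle-configuration file://lifecycle.json
```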
MS SQL Server Database Administrator
Confidential
Responsibilities:
- Configured and set up Disaster Recovery (DR) and High Availability (HA) using Always On availability group solutions for customers
- Designed, implemented, and administered high availability solutions (clustering, log shipping, mirroring, and replication)
- Provided 24/7 remote support to SQL Servers on customer sites of various companies to prevent downtime
- Wrote, debugged, and tuned T-SQL and stored procedures for system performance
- Provided SQL Server database physical model creation and implementation (data type, indexing, table design)
- Set up and administered SQL Server database security environments using profiles, database privileges, and roles to prevent data theft and increase data integrity
- Performed database maintenance, replication, and tuning
- Migrated system data from different databases and platforms to MS SQL databases
- Wrote complex stored procedures and functions for better performance and flexibility
- Ensured database performance monitoring and tuning is regularly accomplished
- Reviewed logs, updated database statistics, and managed indexes (see the maintenance sketch after this list)
- Participated in monthly DBA on-call and documented environments and processes where applicable
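A minimal sketch of the routine statistics and index maintenance described above, run through sqlcmd. The server, database, and table names are placeholders.

```bash
#!/usr/bin/env bash
set -euo pipefail

sqlcmd -S db-server -d SalesDB -Q "
  -- Rebuild all indexes on a hypothetical table.
  ALTER INDEX ALL ON dbo.Orders REBUILD;

  -- Refresh optimizer statistics across the database.
  EXEC sp_updatestats;
"
```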