
Senior Cloud Engineer Resume


New York

EXPERIENCE SUMMARY:

  • AWS Professional Certified Solutions Architect / DevOps Engineer with more than 9 years of experience in the IT industry spanning real-time analysis, functional design, development, DevOps, infrastructure, cloud solutions, automation, and deployment.
  • Experience in Amazon AWS cloud administration, actively involved in designing highly available, scalable, cost-effective, and fault-tolerant systems using multiple AWS services.
  • Solid experience in creating infrastructure provisioning and software configuration management frameworks using Ansible, Terraform, and Jenkins.
  • Experience in migrating on-premises data centers to AWS cloud infrastructure.
  • Experience in setting up Hadoop clusters on AWS infrastructure.
  • Proficient in core Amazon Web Services such as EMR, EC2, EBS, IAM, S3, ELB, Redshift, RDS, VPC, Route 53, CloudWatch, and CloudFormation.
  • Experience in creating S3 buckets, managing bucket policies, and using S3 and Glacier with lifecycle policies for storage and backup on AWS (a minimal lifecycle sketch follows this list).
  • Experience in real-time monitoring and alerting of applications deployed in AWS using CloudWatch, CloudTrail, Datadog, PagerDuty, Splunk, and Simple Notification Service (SNS).
  • Experience in creating CloudFormation stacks and automating them through Jenkins for provisioning Dev/QA/Pre-Prod/Production environments.
  • Experience in developing Puppet modules for installing, configuring, and deploying Git, Jenkins, and Tomcat.
  • Experience in Docker containerization and Docker Compose, including configuring Dockerfiles and creating Docker images.
  • Experience in maintaining and executing build scripts to automate development and production builds.
  • Experience with Kubernetes architecture: nodes and the kubelet, which manages pods and their containers, images, volumes, and networking.
  • Generated microservices and deployed them to Kubernetes using Jenkins and Ansible.
  • Created Git repositories and deployment pipelines, and managed Kubernetes resources across namespaces (persistent volumes, CPU, memory).
  • Scaled pods in Kubernetes using ReplicationControllers to match environment requirements.
  • Extensive experience using build automation tools such as Ant, Maven, Artifactory, and Jenkins.
  • Excellent communication and interpersonal skills; a detail-oriented, analytical, and responsible team player with the ability to coordinate in a team environment, a high degree of self-motivation, and the capacity to learn quickly.
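A minimal sketch of the kind of S3 lifecycle configuration referenced above, using boto3. The bucket name, prefix, and retention periods are illustrative assumptions, not values from an actual engagement.

    import boto3

    s3 = boto3.client("s3")

    # Transition backups to Glacier after 30 days and expire them after a year.
    # Bucket name, prefix, and day counts are placeholders.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-backup-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-then-expire",
                    "Filter": {"Prefix": "backups/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )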

TECHNICAL SKILLS:

AWS Services: EMR, EC2, RDS, VPC, Redshift, CloudFormation, CloudTrail, CloudWatch, SNS, Kinesis, Elastic Beanstalk, SQS, S3, DynamoDB, CodeCommit, CodeDeploy

Big Data Technologies: Hadoop Architecture, HDFS, MapReduce, Hive, Pig, Cassandra, HBase, Sqoop, ZooKeeper, Oozie, Flume, Apache Spark, Spark Streaming, PySpark, Presto, Azkaban, Storm, Airflow, Redis

Operational Intelligence: Splunk, ELK Stack

Databases: MySQL, SQL Server, Oracle, Postgres, DB2

Languages: Java, Scala

Application Servers: WebSphere, JBoss, Tomcat

BI/Reporting Tools: Splunk, Tableau, Cognos, QlikView

Version Control Systems: Subversion, Git (Bitbucket/Stash)

Build Tools: Maven, Jenkins, Ant, Gradle, Git, TeamCity

Monitoring: Datadog, PagerDuty, CloudWatch, Nagios, Ganglia, Ambari

Automation/Performance Testing: Selenium, QTP, HP Load Runner, JMeter

Scripting: Ant, Shell scripting, Ruby, Python

Container Services: Docker, Kubernetes

Protocols/Other: TCP/IP, REST, SOAP, JSON, YAML, XML

Operating systems: Windows, Linux, Mac

PROFESSIONAL EXPERIENCE:

Confidential, New York

Senior Cloud Engineer

Responsibilities:

  • Involved in the design and development of an AWS-hosted distributed compute analytics platform known as GDP (Confidential Data Platform). GDP is based on decoupled persistent storage (S3) and ephemeral distributed compute (EMR). Set up open-source components including Hive and Presto as the distributed SQL engines, Redash as the SQL UI, Azkaban as the job orchestration application, and Jenkins for automated deployment of Spark code and Azkaban workflows. For data ingestion, the platform relies on Spark and MapReduce (Aegisthus to ingest Cassandra backups).
  • Responsible for the CaaS architecture in AWS (VPC, EC2, S3, EBS, Route 53, Auto Scaling, CloudWatch). The entire stack is managed through Terraform modules and Ansible playbooks orchestrated by a Jenkins pipeline.
  • Migrated data from different sources (Postgres, Salesforce, MySQL, Cassandra) to AWS S3.
  • Developed cluster management tooling in AWS for automatically managing EMR and EC2 services (a provisioning sketch follows this list).
  • Designed data redaction for processing data from various sources such as Cassandra, Kinesis, Postgres, and MySQL.
  • Implemented a CI process and release model for Ansible code, which involved splitting the inventory into separate repositories and establishing ‘config specifications’ between the Ansible release and the environment config. The CI process dynamically provisions VMs/EC2 instances using Terraform through a custom pool app (reducing provisioning wait time for CI) and validates playbooks against user-defined policies.
  • Responsible for selecting appropriate EC2 instances for applications through research and benchmarking.
  • Designed and developed a data lake in AWS S3.
  • Researched and chose appropriate instances for EMR, RDS, Redshift, and EC2 by benchmarking different data sources, load, query performance, and memory consumption.
  • Designed disaster recovery for applications in AWS using ELB and Route 53, eliminating single points of failure.
  • Designed and developed a horizontally scalable ETL platform using Celery that moves data from MySQL, PostgreSQL, MS SQL Server, FTP/SFTP, S3, and Apache Cassandra into our data warehouse on Amazon Redshift using MapReduce, Hive, Python, and command-line tools. The scripts also regularly communicate with APIs, pull data from FTP and SFTP servers as well as network drives, and can alert recipients through email or instant messaging.
  • Responsible for integrating Docker into the CI/CD pipeline, replacing conventional test/dev environments with Docker containers and automating deployment and resource configuration via Dockerfiles.
  • Designed the Presto environment and performed performance tuning and optimization.
  • Responsible for onsite coordination, production deployment, and the production support team.
  • Designed, created, optimized, and maintained multiple databases and data warehouses, including one on Redshift as well as several smaller RDS instances created for specific purposes.
  • Designed and developed a Kubernetes cluster in AWS for the development environment.
  • Managed local deployments in Kubernetes, creating a local cluster and deploying application containers.
  • Built and maintained Docker container clusters managed by Kubernetes on AWS using Linux, Bash, Git, and Docker. Utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy.
  • Set up the Splunk agent on all AWS services and pushed the logs to the Splunk server.
  • Installed the Datadog agent on all services to monitor resources.
  • Developed Ansible playbooks and deployed the services into the infrastructure using Jenkins.
  • Deployed the Presto server in Kubernetes using Helm charts.
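As a rough illustration of the ephemeral-EMR-over-persistent-S3 provisioning described above, the following boto3 sketch launches a transient cluster. The cluster name, EMR release label, instance types and counts, and bucket paths are assumptions for illustration only.

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")  # region is an assumption

    # Launch an ephemeral cluster that terminates once its steps finish.
    response = emr.run_job_flow(
        Name="adhoc-analytics",                          # placeholder name
        ReleaseLabel="emr-5.29.0",                       # assumed EMR release
        LogUri="s3://example-logs/emr/",                 # placeholder bucket
        Applications=[{"Name": "Hive"}, {"Name": "Spark"}, {"Name": "Presto"}],
        Instances={
            "InstanceGroups": [
                {"Name": "master", "InstanceRole": "MASTER",
                 "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"Name": "core", "InstanceRole": "CORE",
                 "InstanceType": "m5.xlarge", "InstanceCount": 4},
            ],
            "KeepJobFlowAliveWhenNoSteps": False,
            "TerminationProtected": False,
        },
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
        VisibleToAllUsers=True,
    )
    print(response["JobFlowId"])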

Environment: EMR, EC2, ELB, Redshift, Terraform, Ansible, Route 53, S3, RDS, Jenkins, Spark, Hive, Presto, Cassandra, Python, Shell, Azkaban, JIRA, Java, Celery, Splunk, Datadog, Docker, Kubernetes, YAML, Redash, Tableau, Redis

Confidential, New York

Cloud DevOps Engineer

Responsibilities:

  • Involved in setting up Hadoop cluster in Amazon Web Services.
  • Migrated On-Premise Data Center to AWS Cloud Infrastructure.
  • Provisioned AWS resources using Terraform scripts and automated the provisioning using Jenkins.
  • Responsible for the CaaS architecture in AWS (VPC, EC2, S3, EBS, Route 53, Auto Scaling, CloudWatch). The entire stack is managed through Terraform modules and Ansible playbooks orchestrated by a Jenkins pipeline.
  • Created S3 buckets, managed bucket policies, and used S3 and Glacier for storing historical data.
  • Developed AWS security groups and custom policies for all the services.
  • Set up Redshift on AWS and imported legacy data from Oracle into Redshift.
  • Involved in designing Amazon Redshift clusters, schemas, and tables, including performance tuning and compression.
  • Created separate VPCs, subnets, NAT, and bastion hosts for different environments.
  • Maintained backups and snapshots of EBS and RDS using automation scripts (a snapshot sketch follows this list).
  • Developed shell scripts and Control-M jobs for scheduling jobs.
  • Involved in the administration team for Jenkins/Artifactory/Sonar.
  • Analyzed and resolved compilation and deployment errors related to code development, branching, merging, and building of source code.
  • Created and Maintained large EC2 instances in AWS (Master, Core and Task Nodes).
  • Provisioned AWS EC2 instances based on AMIs. Defined and tracked security groups and ACLs as part of virtual firewall configuration. In CloudFormation, provisioned launch configurations and Auto Scaling groups to support scaling instances across AZs.
  • Experience in setting up AWS VPCs, subnets, peering between VPCs, and hardware VPN configuration.
  • Created test environments using Docker for building and testing applications.
  • Knowledge of container management using Docker, including creating images.
  • Supported an AWS cloud environment with 200+ instances, configured Elastic IPs and Elastic Block Storage, and worked on implementing security groups.
  • Implemented AWS high availability using Elastic Load Balancing (ELB), which balances traffic across instances in multiple Availability Zones.
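A minimal sketch of the kind of automated snapshot script mentioned above, using boto3. The Backup tag, the volume filter, and the database identifier are hypothetical and would differ in a real environment.

    import datetime

    import boto3

    ec2 = boto3.client("ec2")
    rds = boto3.client("rds")
    stamp = datetime.date.today().isoformat()

    # Snapshot every EBS volume carrying an assumed Backup=true tag.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
    )["Volumes"]
    for volume in volumes:
        ec2.create_snapshot(
            VolumeId=volume["VolumeId"],
            Description=f"automated-backup-{stamp}",
        )

    # Take a manual snapshot of a hypothetical RDS instance.
    rds.create_db_snapshot(
        DBInstanceIdentifier="example-db",
        DBSnapshotIdentifier=f"example-db-{stamp}",
    )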

Environment: AWS, Python, MapReduce, Oracle, RDS, Tomcat, Jenkins, Nexus, Spark, Terraform, Hive, Java, Informatica, Cognos, QlikView, Docker, JIRA

Confidential, Montvale, New Jersey

Senior Cloud Engineer

Responsibilities:

  • Migrated Oracle databases to AWS RDS with Multi-AZ deployments.
  • Built and maintained the Hadoop EMR cluster in the AWS infrastructure.
  • Designed and developed tables in HBase and stored aggregated data from Hive.
  • Developed CloudFormation scripts for provisioning AWS services (IAM, EC2, S3, SNS, RDS, ELB, and Auto Scaling).
  • Developed CloudFormation stacks and automated them with Jenkins to deploy Development, SIT, UAT, and REH environments for each sprint.
  • Configured and managed AWS Glacier to move old data to archives, based on retention policy of database/applications.
  • Involved in setting up Hadoop cluster on AWS EMR.
  • Created and maintained Puppet modules to manage configurations and automate the installation process. Deployed Puppet and PuppetDB for configuration management across the existing infrastructure.
  • Developed Spark jobs for transforming data from various sources and storing it in S3 (a PySpark sketch follows this list).
  • Designed, deployed, monitored, and maintained AWS cloud infrastructure consisting of multiple EC2 nodes as required in the environment.
  • Developed Spark Streaming programs in Scala to import data from Kafka topics into HBase tables.
  • Imported data from Hive tables and ran SQL queries over the imported data and existing RDDs using Spark SQL.
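A compact PySpark sketch of the transform-and-store pattern referenced above: query a Hive table and persist the result to S3 as Parquet. The database, table, column names, and bucket path are placeholders, not details from the actual engagement.

    from pyspark.sql import SparkSession

    # Database, table, column, and bucket names below are placeholders.
    spark = (
        SparkSession.builder
        .appName("hive-to-s3-transform")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Aggregate one day of events from an existing Hive table.
    daily = spark.sql("""
        SELECT customer_id, event_type, COUNT(*) AS events
        FROM analytics.click_stream
        WHERE ds = '2019-01-01'
        GROUP BY customer_id, event_type
    """)

    # Persist the result to S3 as Parquet, partitioned by event type.
    daily.write.mode("overwrite").partitionBy("event_type").parquet(
        "s3://example-datalake/aggregates/daily_events/"
    )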

Environment: AWS EMR, Puppet, MapReduce, HBase, Sqoop, Java, Hive, Oozie, DB, Spark, Apache Kafka, Tableau, AWS, Jenkins

Confidential

Senior Hadoop Developer

Responsibilities:

  • Involved in the design and development of Manhattan's custom deployment solution called Deployment Director (involving the IBM WAS wsadmin framework).
  • Developed efficient MapReduce programs in Java for filtering out unstructured data (a streaming-style sketch follows this list).
  • Responsible for loading unstructured and semi-structured data coming from different sources into the Hadoop cluster using Flume.
  • Played a major role in securing 17 different file transfer systems with a single MapReduce-based anomaly detection setup in Java, thereby greatly reducing maintenance.
  • Involved in setting up the Cloudera infrastructure.
  • Developed a MapReduce InputFormat to read a specific data format.
  • Developed and maintained workflow scheduling jobs in Oozie.
  • Involved in extracting customers' big data from various transfer protocols into Hadoop HDFS using Flume.
  • Developed Pig Latin scripts for the log data and stored the cleansed data in Apache Hadoop.
  • Involved with the client and gathered the business requirements.
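The filtering jobs above were written in Java; purely to illustrate the same map-side filtering idea, here is a Hadoop Streaming style mapper in Python. The field layout, field count, and HDFS paths are assumptions.

    #!/usr/bin/env python
    # mapper.py - drop malformed records before they reach downstream jobs.
    # Assumes tab-delimited input with five fields and a numeric record ID.
    import sys

    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) == 5 and fields[0].isdigit():
            print("\t".join(fields))

    # Example invocation (paths and jar location are placeholders):
    #   hadoop jar hadoop-streaming.jar \
    #       -D mapreduce.job.reduces=0 \
    #       -input /raw/logs -output /clean/logs \
    #       -mapper mapper.py -file mapper.py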

Environment: Hadoop, HDFS, MapReduce, Core Java, Oozie, Flume, CDH 4.x.x

Confidential

Senior Cloud Engineer

Responsibilities:

  • Development and ETL Design in Hadoop
  • Developed MapReduce Input format to read specific data format.
  • Developed Hive queries and UDFs as per requirements.
  • Processed data from different sources and stored it in AWS S3.
  • Involved in extracting customers' big data from various data sources into Hadoop HDFS. This included data from mainframes and databases as well as log data from servers.
  • Used Sqoop to efficiently transfer data between databases and HDFS and used Flume to stream log data from servers.
  • Developed MapReduce programs to cleanse the data in HDFS obtained from heterogeneous data sources and make it suitable for ingestion into the Hive schema for analysis.
  • Hive tables were created as managed or external tables as required, defined with appropriate static and dynamic partitions for efficiency (a partitioning sketch follows this list).
  • Implemented partitioning and bucketing in Hive for better organization of the data.
  • Implemented the Fair Scheduler on the JobTracker to allocate a fair share of resources to small jobs.
  • Implemented automatic failover using ZooKeeper and the ZooKeeper Failover Controller.
  • Used Sqoop to transfer data from external sources to HDFS.
  • Designed ETL flows for several Hadoop applications.
  • Designed and developed Oozie workflows with Pig integration.
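A sketch of the partitioned-table pattern described above, expressed as HiveQL issued through a Hive-enabled Spark session (the original queries may equally have been run directly in Hive). The database, table, columns, and storage location are hypothetical.

    from pyspark.sql import SparkSession

    # All database, table, column, and path names below are hypothetical.
    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # External table partitioned by ingest date.
    spark.sql("""
        CREATE EXTERNAL TABLE IF NOT EXISTS staging.web_logs (
            user_id  STRING,
            url      STRING,
            response INT
        )
        PARTITIONED BY (ds STRING)
        STORED AS PARQUET
        LOCATION '/data/staging/web_logs'
    """)

    # Dynamic-partition insert from a raw landing table.
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
    spark.sql("""
        INSERT OVERWRITE TABLE staging.web_logs PARTITION (ds)
        SELECT user_id, url, response, ds
        FROM staging.web_logs_raw
    """)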

Environment: Hadoop, AWS, HDFS, MapReduce, Hive, Sqoop, Pig, DB2, Oracle, XML, CDH4.x

Confidential

Senior Cloud Engineer

Responsibilities:

  • Led a team of 5 members.
  • Implemented automation using Selenium WebDriver JARs, Java, and TestNG.
  • Developed a Selenium automation framework: a hybrid (keyword- and data-driven) framework on Selenium using TestNG (a minimal data-driven sketch follows this list).
  • Designed and executed the automation test scripts using Selenium WebDriver and TestNG.
  • Performed data-driven testing using Selenium WebDriver, TestNG functions, and JDBC connections, reading data through property and XML files.
  • Worked on distributed test automation execution in different environments as part of the continuous integration process using Selenium Grid and Jenkins. Integrated automation scripts (Selenium WebDriver API) into the CI tool (Jenkins) for nightly batch runs of the scripts.
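The framework described above was built in Java with TestNG; the following is only a minimal Python/pytest equivalent of a data-driven Selenium check, shown to illustrate the pattern. The URL, element locators, and CSV file are assumptions.

    import csv

    import pytest
    from selenium import webdriver
    from selenium.webdriver.common.by import By


    def load_rows(path="login_data.csv"):
        # Each row supplies one set of test data (user, password, expected).
        with open(path, newline="") as handle:
            return list(csv.DictReader(handle))


    @pytest.mark.parametrize("row", load_rows())
    def test_login(row):
        driver = webdriver.Chrome()
        try:
            driver.get("https://example.com/login")  # placeholder URL
            driver.find_element(By.ID, "username").send_keys(row["user"])
            driver.find_element(By.ID, "password").send_keys(row["password"])
            driver.find_element(By.ID, "submit").click()
            assert row["expected"] in driver.title
        finally:
            driver.quit()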

Environment: Selenium Webdriver, TestNG, Maven, Jenkins, Excel, Jira, Eclipse IDE, Windows, Java

Confidential

Senior Cloud Engineer

Responsibilities:

  • Analyzed business requirements and functional documents and created the test strategy document defining the test environment, the phases of testing, the entrance and exit criteria for each phase, and the resources required to conduct the effort.
  • Implemented automation using Selenium WebDriver JARs, Java, and TestNG.
  • Extensively automated regression and functional test suites, developing over 50 test cases and 6 test suites using Selenium WebDriver and Java.
  • Developed test cases, test reports, and test scenarios, and made sure every defect was logged.
  • Executed test scripts using Selenium WebDriver with the JUnit framework.
  • Responsible for project scheduling and tracking, preparation of test plans, defect reporting, defect resolution, and test design execution.
  • Conducted end-user demos and created a detailed report based on business users' feedback.

Environment: Selenium Webdriver, TestNG, Excel, Bugzilla, Eclipse IDE, Core Java, Maven
