
Sr. Cloud Architect Resume


Leesburg, VA

SUMMARY

  • Highly motivated, creative and versatile IT professional with 18 years of experience designing and supporting enterprise-wide applications, infrastructure and systems; proven ability to motivate teams to work effectively.
  • Experience leading cross-functional teams and nurturing customer and executive relationships across organizations; experience defining, tracking and driving large projects to successful completion; strong background in both the technical and business aspects of IT.

PROFESSIONAL EXPERIENCE:

Sr. Cloud Architect

Confidential, Leesburg, VA

Responsibilities:

  • Automated the Continuous Integration / Continuous Deployment (CI/CD) pipeline using GitLab CI, including retrieving the latest builds, creating infrastructure with Terraform, pushing Docker images to the AWS Elastic Container Registry (ECR) and running several automated tests. This periodic job eliminated last-minute integration issues and gave us confidence in our deployments.
  • Wrote tools to update software versions of a SaaS platform using Terraform and bash scripts, following the Blue/Green deployment strategy to safely and reliably update software for hundreds of customers.
  • Tested hundreds of Chef cookbooks for Chef 12 compliance by running them on Docker and Red Hat VMs, as part of the Chef 11 to Chef 12 migration for Raytheon's GCX hosted solution.
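The Blue/Green cutover behind the update tooling above can be sketched as a small decision function. This Python sketch is a hypothetical simplification (the actual tooling used Terraform and bash); the function and stack names are illustrative:

```python
# Sketch of a Blue/Green cutover decision: the newly deployed ("green")
# stack only takes traffic after its health checks pass; otherwise traffic
# stays on the known-good ("blue") stack. Illustrative only -- the real
# tooling drove this with Terraform and bash.

def choose_active_stack(blue_healthy: bool, green_healthy: bool,
                        green_is_new: bool) -> str:
    """Return which stack should serve traffic after a deployment."""
    if green_is_new and green_healthy:
        return "green"  # cut over to the freshly deployed stack
    if blue_healthy:
        return "blue"   # keep (or roll back to) the known-good stack
    raise RuntimeError("no healthy stack available")

print(choose_active_stack(True, True, True))   # green takes traffic
print(choose_active_stack(True, False, True))  # failed health check: stay on blue
```

Because the old stack stays up until the new one proves healthy, a failed deployment is a no-op rather than an outage, which is what makes the strategy safe across hundreds of customer environments.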

Sr. Cloud Architect

Confidential, Bethesda, MD

Responsibilities:

  • Completely automated a large (250+ node) application cluster involving MongoDB, SOLR (SolrCloud) and the application stack on AWS for a data analytics startup. The client wanted to reduce their batch processing time from over 30 days to 3 days and needed an automated solution to bring up the cluster on demand, tweak its parameters to improve throughput and shut it down after processing. I built the solution using Chef Provisioning and wrote several Chef cookbooks to define the characteristics of the cloud and build it with one simple command. The client has asked us to repackage the automation so they can offer it as a SaaS solution.
  • Helped a large SaaS vendor host an HRM application in AWS GovCloud. Wrote several Chef cookbooks to automate the installation and deployment of Oracle and ancillary COTS packages.
  • Migrated a large, multi-tiered Drupal 7-based DoD website to AWS GovCloud using Ansible configuration management. The site uses several Nginx reverse proxy servers to communicate with multiple Drupal instances. I integrated the entire workflow using Git/GitLab and used an S3-backed shared file system. The site was designed to follow strict security requirements, such as the Security Technical Implementation Guide (STIG) standards and Internet connectivity through the vArmour suite. Future integration work will involve connecting to NIPRNet (Non-Secure Internet Protocol Router Network) and PIV/CAC cards (Personal Identity Verification / Common Access Card).
  • Used Ansible to create a 50-node MongoDB 3.2 cluster, including sharding, authentication, loading data from AWS S3 buckets and integration with Tableau Business Intelligence software. The MongoDB BI Connector bridged the NoSQL databases used by MongoDB and the PostgreSQL drivers used by Tableau. The data management work included creating monthly AWS Snowball jobs to migrate 10TB of on-premises data to AWS.
  • Architected a Continuous Integration / Continuous Deployment (CI/CD) solution using Jenkins, Docker, Docker Compose and Consul for a large Java-based application in AWS. Enabled the development and QA teams to efficiently release quality versions for all branches of code. Created multiple pipelines to build, test and integrate Java-based code using a Jenkinsfile, Docker containers and AWS EC2 instances, with Jenkins as the CI/CD platform.
  • Architected and designed a large SOLR implementation consisting of 400+ shards in Amazon Web Services. The solution involved migrating 2B documents from an on-premises data center and syncing them with the cloud. I used hosted Chef (Opscode) to create cookbooks supporting complex systems and infrastructure deployment. I provided the customer with several cost estimates, including On-Demand versus Reserved Instances, different storage options such as Zadara Storage, and the pros and cons of alternative architectures. This turnkey solution will be delivered to the customer within 3 months.
  • Currently involved in designing a private cloud architecture for DHS FEMA using OpenStack. The architecture preserves the existing infrastructure and investment in VMware while adding OpenStack and the CloudForms cloud management platform, achieving better visibility across several FEMA data centers, policy enforcement, standardization, capacity planning and simplified administration.
  • Work with the FEMA CIO office to develop the public cloud strategy and guidelines.
  • Conduct OpenStack and AWS courses for partners and customers such as DHS FEMA, USCIS and ProQuest.
  • Meet with business leaders and stakeholders to discuss cloud migration strategies and best practices and to provide solutions in AWS.
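Spreading 400+ SOLR shards evenly over a node pool, as in the implementation described above, reduces to a round-robin placement. The sketch below is a hypothetical illustration of that balancing (node names and counts are examples; the real deployment was driven by Chef cookbooks):

```python
from collections import defaultdict

def assign_shards(num_shards: int, nodes: list) -> dict:
    """Round-robin shard IDs across nodes so per-node counts differ by at most one."""
    placement = defaultdict(list)
    for shard in range(num_shards):
        # shard i lands on node i mod N, giving an even spread
        placement[nodes[shard % len(nodes)]].append(shard)
    return dict(placement)

# Hypothetical example: 400 shards over 25 nodes -> 16 shards on each node
layout = assign_shards(400, [f"solr-{i}" for i in range(25)])
print(len(layout["solr-0"]))  # 16
```

An even shard spread keeps per-node heap and disk usage uniform, which matters when the cluster is sized from cost estimates like the On-Demand versus Reserved Instance comparisons above.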

Adjunct Professor

Confidential, Laurel, MD

Responsibilities:

  • Routinely conduct an advanced cloud computing course for Johns Hopkins University at the CIA ITU and online (principles-of-cloud-computing-shyamsunder-joshi).
  • Taught a free cloud computing course to high school and college students, introducing computing with an emphasis on cloud computing. For most of the students, this was their first foray into the world of infrastructure, systems management, databases, security, cloud services and how companies use technology to further their business. The course website hosted the associated labs. The senior management of Amazon Web Services recognized the effort and encouraged me to continue offering such courses in the future.
  • Routinely conduct beginner and advanced level Amazon Web Services (AWS), OpenStack and Azure cloud computing classes to companies, federal agencies and individuals. The courses are conducted at customer sites or at neutral venues and include several hands-on labs.
  • Conduct UNIX / Linux administration classes including setting up web services, DNS, file services, storage systems, networking, firewalls and security systems.

Lead Cloud Architect

Confidential, Reston, VA

Responsibilities:

  • Architected and designed a high-traffic website in AWS. The site was designed to handle 200M page views per month and has services in the US-East, US-West and EU regions. The site was architected without any single point of failure and follows Amazon's best practices for deploying all components. I used Opscode Chef to deploy the application using AWS EC2, S3, ELB, CloudFront, ElastiCache, RDS, SES, Route 53 and EBS. I also set up HAProxy between regions to help with MySQL replication and save on data transfer costs. The application uses Tomcat, Apache and MySQL databases. The site is managed and monitored with Chef, New Relic, Pingdom, CloudWatch and Nagios.
  • Architected and designed several projects for the Coca-Cola Company, such as the Coke Japan and Coke Germany sites.
  • Set up AWS Storage Gateway to back up on-premises data to AWS S3 buckets. This solution negated the need for expensive and often unreliable tape backups and provided a DR solution as well.
  • Developed a standard backup and retention policy to be used for all customer sites. This document is now part of our standard RFP response and defines the RPO and RTO for the sites we manage. As part of this initiative, I developed a Gluster-based file system to store short-term backups and used the s3cmd toolset to copy weekly and monthly backups to AWS S3, with S3 lifecycle policies migrating older backups to AWS Glacier for long-term retention. I also replaced mysqldump with Percona XtraBackup, significantly improving recovery time. The planned next steps were to store the results in the AWS DynamoDB NoSQL database and generate reports for customers and management.
  • Architected and designed one of the largest websites developed by the company, the People magazine website, serving 1B page views a month. The site uses 150+ front-end servers and 30 databases replicated across 5 geographic locations, and it adapts to traffic patterns by expanding and shrinking server counts using AWS Auto Scaling. I was also responsible for providing input to the RFP, helping the business development team win the project and presenting the solution to customers.
  • Led the migration of the KickApps social media platform from a traditional data center to AWS. The production environment has 15 databases and 25 application services, ranging from single sign-on to feeds and API services, and the migration was completed within a short period. Several AWS services were evaluated for suitability and performance before finalizing the architecture. The service also uses NetScaler load balancers, AWS ElastiCache and Gluster file systems.
  • Presented the architecture and design to customers, led launch plans with the customers, led post-launch hyper-care activities, and reviewed performance and usage metrics with senior management and customers to optimize the environment.
  • Developed an employee portal for Walmart on the Rackspace cloud (Gen 2) as a beta/POC for customer presentation.
  • Designed a solution to integrate Kaltura platform with client websites to offer video portals. Monumental Sports will be the first site to use this service.
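The backup and retention policy described above (short-term copies on Gluster, weekly/monthly copies on S3, long-term retention in Glacier) boils down to an age-based tiering rule. This Python sketch uses hypothetical thresholds, not the actual policy values:

```python
def storage_tier(age_days: int) -> str:
    """Map a backup's age to a storage tier (illustrative thresholds)."""
    if age_days <= 7:
        return "gluster"  # short-term backups, fast local restores
    if age_days <= 90:
        return "s3"       # weekly/monthly copies, pushed via s3cmd
    return "glacier"      # long-term retention via S3 lifecycle policies

print(storage_tier(3), storage_tier(30), storage_tier(400))
# gluster s3 glacier
```

Encoding the tiers as one rule keeps the RPO/RTO targets in the policy document and the actual data movement (s3cmd jobs, S3 lifecycle transitions) consistent with each other.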

Infrastructure Architect

Confidential, Reston, VA

Responsibilities:

  • The role involved working with business owners and stakeholders across the company to understand their requirements and translate them into IT projects and roadmaps, develop best practices, procedures and policies, and provide guidance to the engineering and operations teams.
  • The team worked on developing new capabilities such as cloud adoption, on-demand environments, and tools and technologies that will have a huge impact on the organization in the coming years.
  • I presented options and recommendations to senior management on strategic direction and performed due diligence on vendors, products and services, including the integration effort, the impact those external products and services would have on our IT services, and lifecycle management of those assets.
  • Involved in a large-scale deployment of Amazon EC2 infrastructure for several projects. The projects used almost all AWS services, including EC2 for servers; EBS, snapshots and S3 for storage and backup; Auto Scaling groups for elasticity and an automated way of scaling cloud capacity in and out based on need; ELB for load balancing; CloudWatch for monitoring; RDS for database instances; IAM for managing user accounts; VPC to create a dedicated and secure virtual private network; CloudFront to distribute large media files; Elastic IP for public IP addresses; and SNS for notifications. The applications were developed in Java and run on a Tomcat / Apache setup.
  • In addition to managing all these assets using the AWS console, APIs and custom scripts, I used the RightScale console to meet specific requirements for a few projects. The ServerTemplates and RightScripts offered by RightScale were used to create custom deployments.
  • The subsequent phases of the project added capabilities such as on-demand environments; a vending-machine model so users can pick and choose apps based on project needs; burst capacity to meet peak demands; integration with internal VMware clusters; continuous integration/build services; and access to in-house services.
  • Designed and developed solution to migrate 60+ Drupal websites running in Amazon AWS services to Acquia managed services. The solution included developing architecture, Concept of Operations (CONOPS), operational testing and user acceptance stages.
  • Participated in a large-scale project converting 600+ physical servers to VMware ESXi clusters. The team evaluated RFP responses from several vendors, and the project is currently in the design/architecture phase.
  • Evaluated Wowza streaming servers, Adobe Flash Media Servers, and hardware and software encoders to set up Video on Demand (VOD) services. These services are part of the offering to teachers involved in the Advanced Placement (AP) program.
  • Evaluated hardware and software to encrypt data to meet the Payment Card Industry (PCI) Data Security Standard; Hitachi Dynamic Provisioning (thin provisioning) technology and its impact on Oracle disks; and Storage Resource Manager (SRM) tools to help with storage provisioning and monitoring.
  • Worked with vendors to ensure our products and environment aligned with their roadmaps and that our concerns and issues were addressed appropriately; attended EBCs (Executive Briefing Centers) and user groups to influence the changes and direction College Board seeks; educated end users and management about industry changes.
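The elasticity behavior of the Auto Scaling groups mentioned above follows a simple threshold rule; in practice this was configured through CloudWatch alarms, so the Python below is only a hypothetical sketch of the decision (thresholds and bounds are illustrative):

```python
def desired_capacity(current: int, cpu_pct: float,
                     minimum: int = 2, maximum: int = 20) -> int:
    """Step-scaling rule: add an instance above 70% CPU, remove one below 30%."""
    if cpu_pct > 70.0:
        return min(current + 1, maximum)  # scale out, capped at the group max
    if cpu_pct < 30.0:
        return max(current - 1, minimum)  # scale in, floored at the group min
    return current                        # within the band: hold steady

print(desired_capacity(5, 85.0))  # 6
print(desired_capacity(5, 20.0))  # 4
```

The dead band between the two thresholds prevents the group from oscillating when load hovers near a single cutoff, which is the same reason real Auto Scaling policies pair alarms with cooldown periods.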

Solutions Architect

Confidential, McLean, VA

Responsibilities:

  • This was an onsite residency program supporting several DHS/TSA (Transportation Security Administration) projects. The role involved gathering customer requirements, presenting products, configuring storage assets and providing production support for several SANs.
  • Responsible for planning, coordinating and implementing activities related to a consolidated data center migration to DC2. The migration involved moving assets from the IBM St. Louis data center to the DC2 center in Virginia and covered about 20 mission-critical applications, databases and the SAN infrastructure.
  • Assisted DBAs, Systems administrators and project managers with storage troubleshooting issues
  • Educated management and project managers on latest EMC products, their features and suitability of the products in TSA environment
  • Migrated and consolidated storage assets including DMX, Clariion CX3-40, NS-20 unified storage and Brocade switches

Technical Manager Lead/Sr. Storage Administrator

Confidential

Responsibilities:

  • Reduced the acquisition cost of storage assets by 45% and saved the company $12M per year by working with a cross-functional team to send out several RFPs (Request for Proposal). The RFP invited storage vendors to bid to be the exclusive storage provider as long as they agreed to our price structure and SLA (Service Level Agreement). We then evaluated and performed a “scorecard” analysis of the responses
  • Initiated and launched a test lab to evaluate new technologies like global clustered file systems, storage virtualization, data de-duplication, block-level compression, encryption, NAS (Network Attached Storage) gateways and Wide Area File Systems (WAFS) appliances. Convinced vendors to loan hardware, software and subject matter experts to help with these evaluations. This project resulted in increased efficiency, knowledge and familiarity of the proposed technologies and its features, reduced time to install and helped develop vendor relationships prior to any production impact. These technologies reduced the storage acquisition costs by $5M per year and improved utilization by at least 50%
  • Set up the complete storage infrastructure for the Live 8 concert of 2005 within 4 business days. The project involved setting up 40 web servers at two data centers and EMC Celerra NS702G NAS gateways, and replicating data between the sites. The project was one of AOL's major successes of 2005 according to the CEO and generated positive reviews in the media
  • Deployed Brocade WAFS appliances between the data center and remote locations such as Bangalore and NYC. These appliances optimize NFS/CIFS traffic between the sites and provide “LAN-like” access to file systems for remote users. This deployment saved AOL $3M since we didn’t have to set up expensive backup infrastructure at remote sites, encrypt backup tapes and send them to an offsite location. It also allowed remote users to access centrally located file servers and saved them from making multiple copies of the data
  • Deployed YottaYotta storage virtualization appliances across all SAN islands to improve efficiency, save time and avoid late payments on leased storage assets. These appliances present virtual devices to the hosts, and data can be migrated from old arrays to new arrays without user impact, enabling us to return the arrays on time
  • Built several large applications such as AOL Pictures (1PB of data replicated across 3 data centers), AOL File Backup (240TB of Clariion storage replicated across 2 data centers) and AOL Music. These projects involved moving the applications off legacy hardware, setting up new storage infrastructure, replicating data between sites and writing tools to monitor performance
  • Designed, implemented and maintained several large-scale SAN islands for AOL’s internal and external customers. Each of these dual-fabric SAN islands included several EMC DMX 1000/800, Symmetrix 8530, HP XP128, Clariion CX600 and CX700 arrays, designed to provide high-tier and low-tier storage for hundreds of servers. Brocade 12K, 24K and 48K fabric switches were used in a tiered architecture
  • Conducted several formal and informal training sessions for end users and systems administrators on the services we offer, Veritas Volume Manager, PowerPath basics, general storage knowledge and best practices for configuring devices
  • Set up replication between hosts using EMC SRDF (Symmetrix Remote Data Facility) and Clariion MirrorView over DWDM (Dense Wave Division Multiplexing) links between multiple AOL data centers. The replication met business requirements for disaster recovery, business continuity and data assurance for several applications
  • Revamped the AOL client and web registration infrastructure by adding a hot standby site and moving to a grid architecture. New code changes were applied to the standby site first and tested before draining in-flight traffic from the primary site. This improved customer experience, increased registrations, minimized the risk of code changes to the environment and reduced troubleshooting time

Team Lead/Sr. Systems Administrator

Confidential, Chantilly, VA

Responsibilities:

  • Evaluated and deployed a freeware firewall solution using the TIS Firewall Toolkit to protect the services we offered to customers and partners. Used the alerts and logs generated by this tool to convince management of its effectiveness and of the need for a commercial-grade firewall system. Management approved the purchase of a SunScreen SPF-100 and later Check Point FireWall-1 systems. These efforts improved security, reduced threats to the environment and helped protect company assets
  • Secured the environment from other threats by installing and managing intrusion detection systems and performing penetration testing of the corporate network using tools such as Snort, COPS, Crack, SATAN, nmap, OpenSSH, Tripwire and TCP Wrappers. The CIO commended these efforts and promoted me to a senior role
  • Developed a robust systems management framework in Perl to automate maintenance of hundreds of Solaris and SunOS clients, including OS upgrades, patch installs, periodic status updates, inventory information, security checks, process monitoring and restarts, file system checks, system usage and checks for corrupt system files. This customized application negated the need for expensive solutions such as Tivoli and CA Unicenter
  • Consolidated all UNIX/NT servers and storage to streamline data center operations, saving the company close to $500K per year in maintenance and associated costs
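The process-monitoring-and-restart piece of the Perl framework above reduces to a reconcile loop: compare the set of required daemons against what is running and restart the difference. This Python sketch is a hypothetical stand-in for the original Perl; the daemon names and restart command are illustrative:

```python
def reconcile(required: set, running: set) -> list:
    """Return restart commands for required daemons that are not running."""
    # set difference gives the daemons that should be up but are not;
    # sorting makes the output deterministic for logging
    return [f"service {name} restart" for name in sorted(required - running)]

print(reconcile({"sshd", "crond", "httpd"}, {"sshd"}))
# ['service crond restart', 'service httpd restart']
```

Run periodically (e.g. from cron), a reconcile loop like this self-heals transient daemon failures, which is how a homegrown framework can substitute for suites like Tivoli or CA Unicenter for basic process supervision.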
