
AWS and Big Data Architect Resume


Boston, MA

SUMMARY

  • 20+ years of total IT experience, including 12+ years as an Enterprise Architect architecting, designing, developing, testing, and deploying enterprise projects and products, and in IT technical solution consulting
  • Extensive Enterprise Cloud Architect experience, with AWS Certified Solutions Architect - Associate certification
  • Worked as an Enterprise Architect at IBM for 12 years, leading crucial enterprise applications across diversified technologies
  • Experience leading IT solutions teams of technical leads and architects
  • Expertise and experience in digital technologies - AWS Cloud, Big Data, Analytics, Machine Learning, Digital Experience, Mobility, Java, and JEE
  • Sr. Application Architect for Kohl’s USA SMAC project (Social, Mobile, Analytics, Cloud)
  • Excellent experience in SOA and API architecture and design for large enterprise systems
  • Hands-on experience in identifying architectural pitfalls and anti-patterns and designing solutions for them
  • Expertise in protecting key architectural characteristics - performance, scalability, availability, and security - with fitness functions (see the sketch after this list)
  • Proficiency in migrating architectures from monolithic to microservices, defining current and target architectures with a well-defined, phase-wise transformation roadmap
  • Proficiency with TOGAF-certified EA tools - the Troux platform, IBM Rational System Architect, ABACUS, and MEGA HOPEX
  • 8+ years of onsite, client-facing experience performing product demos and customer presentations
  • Hands-on experience in Agile Scrum project automation as onsite product owner and scrum master
  • Domain expertise in the Banking, Telecom, Retail, E-commerce, and Healthcare domains
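
As referenced in the fitness-functions bullet above, the sketch below illustrates what an automated fitness function for the performance characteristic can look like. The endpoint URL, latency budget, and test harness are hypothetical placeholders, not any client's actual implementation.

```python
# Minimal architectural fitness function sketch: verify a performance
# characteristic (p95 latency) stays within budget. Endpoint and threshold
# are hypothetical placeholders.
import time

import requests

ENDPOINT = "https://example.com/health"   # hypothetical endpoint
P95_BUDGET_MS = 300                       # hypothetical latency budget


def p95_latency_ms(samples: int = 20) -> float:
    """Measure request latency over several samples and return the 95th percentile."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(ENDPOINT, timeout=5)
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return timings[int(0.95 * (len(timings) - 1))]


def test_performance_fitness_function():
    """Fail the build if the p95 latency exceeds the agreed budget."""
    assert p95_latency_ms() <= P95_BUDGET_MS
```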

TECHNICAL SKILLS

AWS: EC2, S3, EBS, Lambda, ELB, ECS, EKS, VPC, VPC Peering, Route Tables, NAT Gateway, Internet Gateway, Network ACLs, Security Groups, CDN, RDS, API Gateway, CloudTrail, SNS, SQS, CodePipeline (CI/CD), Elastic Beanstalk.

Cloud Native: AWS Cloud, microservices SOA, containerization with Docker, and container orchestration with Kubernetes

Big Data & Analytics: Hadoop 2.0 ecosystem, Spark 1.2, and R ML algorithms

DevOps: multiple CI/CD platforms - Jenkins 2.0, Wercker, and OpenShift

Data Warehousing & ETL: Talend 6.4 ETL and Big Data testing

MDM Tools: Talend 6.4 - for a 360-degree view of data

Java, JEE: Java from version 1.2, JSP and Servlets, JDBC and JTA (Java Transaction API), Struts Framework 2.x, Spring Framework 3.x

PROFESSIONAL EXPERIENCE

Confidential, Boston, MA

AWS and Big Data Architect

Responsibilities:

  • Leading the Data Engineering team for Confidential
  • Architect and build enterprise data warehouses in Redshift
  • Defined the target architecture and its implementation - DMS migration from Oracle to RDS PostgreSQL
  • Working with Kinesis Data Streams for real-time sensor data migration to AWS (a producer sketch follows this list)
  • Matillion ELT on AWS replacing Spark transformations
  • Architect, build, and lead enterprise data warehouse and ETL (Extract, Transform, and Load) data engineering on AWS (Amazon Web Services) cloud servers with the specialized Matillion tool
  • Architected, designed, and developed sensor data (Cassandra) migration to Redshift for a near-real-time use case
  • Architect, design, and configure AWS cloud services to migrate the Redshift data warehouse from the on-premise environment to the AWS cloud environment using the Database Migration Service (DMS)
  • Working with, and technically leading on, RDS (Relational Database Service), DMS (Database Migration Service), and AWS SCT (Schema Conversion Tool) for on-premise Oracle migration to AWS RDS PostgreSQL
  • Involved in AWS cloud-based data engineering and large-scale big data analytics using tools such as Looker and Spotfire
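
A minimal boto3 sketch of the Kinesis producer pattern referenced above: pushing sensor readings into a Kinesis Data Stream. The stream name, region, and record shape are illustrative assumptions, not details of the actual project.

```python
# Minimal producer sketch: push sensor readings into a Kinesis Data Stream
# with boto3. Stream name, region, and record fields are hypothetical.
import json
import time

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")


def publish_reading(sensor_id: str, value: float) -> None:
    """Send one sensor reading; the sensor id is used as the partition key."""
    record = {"sensor_id": sensor_id, "value": value, "ts": int(time.time())}
    kinesis.put_record(
        StreamName="sensor-readings",              # hypothetical stream name
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=sensor_id,
    )


if __name__ == "__main__":
    publish_reading("sensor-001", 72.4)
```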

Confidential, Raleigh, NC

Enterprise Cloud Architect

Responsibilities:

  • Architect and build enterprise data warehouses and ETL (Extract, Transform, and Load) data engineering pipelines, both on on-premise servers and on AWS cloud servers
  • Data engineering development against the Snowflake data warehouse on the AWS cloud platform
  • Develop data transformations and business validations in the Python and PySpark programming languages
  • Hands-on development engineering work with various big data components of the Hadoop ecosystem (Hive, Pig, Spark, Kafka, Sqoop, etc.)
  • Architect, design, and configure AWS services for data warehouse migration from on-premise to the various AWS cloud environments: Sandbox, Dev, QA, Stage (Pre-Prod), and Production
  • Gather requirements and prepare a traceability matrix to implement the technical design
  • Architect and engineer automated data pipelines
  • DevOps CI/CD mechanisms with AWS CodeCommit and the Jenkins CI/CD integration platform
  • Experience with AWS IaaS and PaaS services, including RDS, EC2, S3, and Step Functions
  • Involved in data engineering and performing large-scale big data analytics
  • Code AWS Glue jobs, written in PySpark and dynamically invoked from AWS Lambda functions (see the sketch after this list)
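
A minimal sketch of the Lambda-to-Glue pattern in the last bullet: a Python Lambda handler that starts a PySpark Glue job through boto3. The job name and job arguments are hypothetical placeholders.

```python
# Minimal Lambda handler sketch: dynamically start a PySpark Glue job via
# boto3. The Glue job name and arguments are hypothetical placeholders.
import boto3

glue = boto3.client("glue")


def lambda_handler(event, context):
    """Kick off a Glue job, passing the triggering S3 path as a job argument."""
    # Assume the event carries the S3 path of newly arrived data (illustrative).
    source_path = event.get("source_path", "s3://example-bucket/incoming/")
    response = glue.start_job_run(
        JobName="transform-sales-data",              # hypothetical Glue job
        Arguments={"--SOURCE_PATH": source_path},    # read inside the PySpark script
    )
    return {"job_run_id": response["JobRunId"]}
```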

Confidential, GA

Sr. Enterprise Architect

Responsibilities:

  • Responsible for gaining architecture approval from project stakeholders
  • Documentation and presentation of designs to business, internal stakeholders, and vendor teams
  • Review project-level architectures to ensure alignment with program goals
  • Verifying the security, scalability, and performance architectural characteristics with every major change to the architecture
  • Developed low-level designs (LLD), code designs, and prototypes/builds of proof-of-concepts (POC)
  • AWS cloud infrastructure design and deployment using virtual and on-premise physical hardware
  • Portal integration project providing IoT-enabled solutions for analyzing the big data captured from RFID tags
  • Involved in ingesting and analyzing the big data captured from RFID tags as received from the IoT module
  • Involved in data visualization using Tableau 9.x
  • Responsible for solution development throughout the project lifecycle, supporting the scrum work of agile teams.
  • Cloud-native technologies used: 1. AWS IaaS (Infrastructure as a Service): EC2 instances, Amazon S3 buckets, load balancers, Elastic MapReduce, DynamoDB, Redshift for cloud analytics, and a High Performance Computing (HPC) network; 2. Microservices SOA: Spring Boot, supporting appropriate efferent and afferent coupling; 3. DevOps automation with Jenkins 2.0, Wercker, and OpenShift; 4. Containerization with Docker 17.x; 5. Container orchestration with Kubernetes 1.7 (a boto3 sketch of the S3/DynamoDB pieces follows this list)
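
A minimal boto3 sketch of the S3 and DynamoDB pieces named in item 1 above: uploading an analytics result to S3 and registering its metadata in DynamoDB. The bucket, table, and attribute names are hypothetical, not those of the actual engagement.

```python
# Minimal sketch of the S3 / DynamoDB pieces of the IaaS stack: store an
# analytics output file in S3 and track it in a DynamoDB table. Bucket name,
# table name, and attributes are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")


def store_result(local_file: str, report_id: str) -> None:
    """Upload a result file and register it for downstream dashboards."""
    bucket = "example-analytics-results"          # hypothetical bucket
    key = f"reports/{report_id}.csv"
    s3.upload_file(local_file, bucket, key)

    table = dynamodb.Table("report-index")        # hypothetical table
    table.put_item(Item={"report_id": report_id, "s3_key": key})


if __name__ == "__main__":
    store_result("daily_summary.csv", "2017-10-01-daily")
```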

Confidential, Michigan

Big Data Architect

Responsibilities:

  • Entrusted with the task of developing the Big Data Analytics product for healthcare clients such as BCBSM US and OPTUM US
  • Product description: uses the Lambda architecture; captures data from various big data sources - external databases, wearables, and medical devices; ingests data into the big data platform via Kafka; stores the ingested data in the Cassandra NoSQL database; analyzes this data further with Spark; runs all the tools inside Docker containers; and presents the results to the end user as reports, dashboards, graphs, and charts
  • Big data testing using the Talend 6.4 platform - involved in test strategy, test plans, and test execution reporting
  • Used Spark 1.2 for live big data processing: connecting Java applications to their respective Spark contexts; using Spark Streaming 1.2 and DStreams in Java, connecting input sources via Kafka (the publisher-subscriber messaging system) and Flume, with ZooKeeper as the coordinator; Spark SQL connecting to Hive and Cassandra using the Spark Hive context and the Cassandra connector respectively; using the Spark checkpointing mechanism for 24/7 live analytics; and using the Spark default scheduler, with YARN as the cluster manager in a few places (a streaming sketch follows this list)
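
A sketch of the Kafka-to-Spark-Streaming (DStreams) pattern described above, written in PySpark for consistency with the other sketches even though the resume describes a Java implementation. It uses the Spark 1.x/2.x spark-streaming-kafka API; the broker, topic, checkpoint path, and record layout are hypothetical.

```python
# Kafka -> Spark Streaming (DStreams) sketch with checkpointing, using the
# Spark 1.x/2.x spark-streaming-kafka API. All names and paths are
# hypothetical placeholders; the original implementation was in Java.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="sensor-stream-sketch")
ssc = StreamingContext(sc, batchDuration=10)           # 10-second micro-batches
ssc.checkpoint("hdfs:///checkpoints/sensor-stream")    # checkpointing for 24/7 runs

# Direct stream from a Kafka topic (hypothetical broker and topic names).
stream = KafkaUtils.createDirectStream(
    ssc,
    topics=["sensor-events"],
    kafkaParams={"metadata.broker.list": "broker1:9092"},
)

# Each record arrives as a (key, value) pair; count events per device id,
# assuming the device id is the first comma-separated field of the value.
counts = stream.map(lambda kv: kv[1].split(",")[0]).countByValue()
counts.pprint()

ssc.start()
ssc.awaitTermination()
```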

Tools & Technologies Used: Cloudera CDH 5.x, Hadoop Ecosystem 2.0 - MapReduce programming, YARN programming, HDFS, Kafka, Storm, Spark 1.0, 1.1, and 2.x, Spark MLlib, ZooKeeper, Solr, Flume, Mahout ML, Hive and HiveQL, Pig and Pig Latin, Oozie, Cassandra NoSQL, and the Sqoop tool

Confidential

Big Data Architect

Responsibilities:

  • Developed IT and Business Capability Roadmaps for strategic and operational goals
  • Recommend improvements to optimize performance and eliminate single points of failure
  • Entrusted with the task of building the Big Data Analytics COE with machine learning analytics capability
  • Ramped up a 5-member core team in a short span and built the required capabilities with the tools and technologies needed

Tools & Technologies Used: Big data vendor implementations: Cloudera, IBM Hortonworks, MapR, and Databricks; NoSQL databases: HBase, CouchDB, MongoDB, the graph DB Neo4j (ZEPHIR), Cassandra, and the Sqoop tool; MDM tools: Talend 6.4 - for a 360-degree view of data; Data visualization tools: QlikView, Pentaho Reporting, and Tableau 9.x; Social media analytics: Twitter analytics using the R searchTwitter function for an internal project; Predictive analytics with R: segmentation, clustering analysis, recommendation, decision trees, linear regression, and time series; Used an outlier detection algorithm in R with supervised ML for medical abnormality diagnosis for BCBSM (an illustrative sketch follows)
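
The last item above describes supervised outlier detection for medical abnormality diagnosis, done in R. The sketch below shows the general shape of such a supervised approach, written in Python/scikit-learn for consistency with the other sketches; the synthetic data, features, and classifier choice are illustrative assumptions, not the actual model or data used for BCBSM.

```python
# Illustrative supervised abnormality-detection sketch, in the spirit of the
# R work described above but using Python/scikit-learn. Data, features, and
# classifier are synthetic assumptions, not the actual BCBSM model or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic vitals (temperature, heart rate, systolic BP): mostly "normal"
# readings plus a small abnormal cluster.
normal = rng.normal(loc=[98.6, 72, 120], scale=[0.5, 8, 10], size=(950, 3))
abnormal = rng.normal(loc=[101.5, 110, 160], scale=[0.8, 12, 15], size=(50, 3))
X = np.vstack([normal, abnormal])
y = np.array([0] * len(normal) + [1] * len(abnormal))   # 1 = abnormal

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Class weighting helps with the heavily imbalanced normal/abnormal labels.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test),
                            target_names=["normal", "abnormal"]))
```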
