
Aws Architect / Technical Lead Resume

Raleigh, NC


  • Over 19 years of industry experience in all aspects of the Software Life Cycle, including System Analysis, Design, Development, Testing and Implementation, spanning Amazon Web Services (Glue, Athena, EMR, Redshift, Serverless, Lambda, Kinesis, EC2, DynamoDB, CloudFormation, CloudWatch, etc.) and Big Data technologies (Hadoop, Attivio, Cassandra, Storm, Kafka, ZooKeeper).
  • Successfully completed an Internet of Things (IoT) certification from MIT Professional Education.
  • Comprehensive understanding of IoT architectures and technologies.
  • Strong familiarity with Data Architecture including Data Ingestion Pipeline Design, Data Modeling and Data Mining, Machine Learning and Advanced Data Processing.
  • Highly capable of processing large sets of structured, semi-structured and unstructured data and supporting systems application architecture.
  • Proficient with Hadoop 1.0 and 2.0 architectures.
  • Architected and coded MapReduce programs and was responsible for storing data in the HDFS file system.
  • Architected and designed product services using REST Web Services, Spring Dynamic modules with OSGI (Open Service Gateway Initiative).
  • Proven skills in project design and implementation of Object Oriented principles and having good implementation knowledge of Java/J2EE design patterns.
  • Extensively worked on designing applications using OSGI, Spring Dynamic Modules, and J2EE technologies - Servlets, Swing, Applets, JSP 1.x, JDBC, JNDI, EJB, XML 1.0, and Struts.
  • Architected and designed application workflows using JBPM.
  • Customized and developed the Oracle 360Commerce retail application (POS, Back Office and Central Office) using J2EE design patterns.
  • Hands on experience with database designing using Erwin and programming skills including PL/SQL, JDBC and SQL with DB2, ORACLE and SQL Server.
  • Extensive domain knowledge in Finance, Automotive, Retail, HR, Content Management and Education.
  • Excellent knowledge of Shell commands.
  • Solid experience working with methodologies like Agile SCRUM, RUP and SDLC Waterfall.
  • Strong analytical, problem solving, organizational and planning skills.


Amazon Web Services: Step Function, SQS Queues Service, SNS Notification Service, Serverless Framework, Lambda Functions, Kinesis, Flink, EC2, DynamoDB, CloudFormation, CloudWatch, Cloud Trail, S3, Managed Kafka, Glue ETL, Glue Crawler, Athena, API Gateway, Redshift, EMR, Snowflake

Big Data Ecosystems: Hadoop, Sqoop, Oozie, YARN, Cassandra (DataStax), MapReduce, HDFS, Hive, Attivio, Vivo, Apache Tika.

Java Caching: Memcached 2.7.1.

Messaging: ZooKeeper 3.3.0, Kafka 0.8, JMS.

Design Skills: DBeaver, UML (Rational Rose 2003), Microsoft Visio, Object Oriented Analysis and Design (OOAD), GoF and J2EE Patterns.

Java & Technologies: Java 1.8/1.7, JAX-WS, JAX-RS, Apache, J2EE- JAXB, JSP, Servlet, EJB, JDBC, Swing.

Frameworks: Spring 3.0, Hibernate 3.0, Struts 2.0, JTA, JPA, JAF, JNDI/LDAP, Metrics-Graphite 2.2.0.

Database: HDFS, Oracle 10g, DB2, MS SQL Server 6.5/7.0/2000, MySQL, MS Access.

Servers: GlassFish, JBoss 6.0, IBM WebSphere 5.0, WebLogic 6.1, Tomcat 7.2.24, iPlanet Web Server 6.1.


Scripting Language: jQuery, Shell Scripting, JavaScript, and CSS.

Version Control &amp; Testing: GitHub, Maven, ClearCase 2002, Visual SourceSafe 6.0, CVS, SVN, Merant Dimensions 8.0.5, JUnit, Jtest, and LoadRunner.

Workflow: JBoss JBPM.

Methodologies: Agile SCRUM, Unified process, RUP (Rational Unified Process) and SDLC Waterfall.

PM Tools: Rally, Slack, Microsoft Project 2007, Redmine, VersionOne, Confluence, and Jira.

Other Tools: Eclipse Indigo, RAD, IBM WSAD 5.0, WinSQL, TOAD 6.1.11, FrontPage 98, Apache POI, iReport, TextPad, Dreamweaver 3.0, XMLSpy, Jasper Reports.


AWS Architect / Technical Lead

Confidential, Raleigh, NC


  • Identify, design and document ETL processes from source systems to the AWS Data Lake.
  • Define the various stages of the data pipeline using AWS Glue jobs, CloudWatch, Terraform, Step Functions and Lambdas.
  • Identify and document transformation logic based on business requirements, collaborating with business teams.
  • Create technical and functional specifications, collaborating with the business team for development and testing.
  • Design and work closely with technical teams on data transformation from source to the data lake in the required format and storage (S3, Excel and Snowflake).

Environment: Amazon IAM, S3, CloudWatch, Glue ETL Jobs, Lambda, Step Functions, PySpark, Data Frames, Snowflake, Terraform, Python, Java, SQL Server, Teradata, Cloud9, PyCharm, and Eclipse.
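A minimal, library-free sketch of the kind of per-row mapping-and-cast step such a Glue ETL job applies when moving source data into the lake; the column names and casts below are illustrative placeholders, not the project's actual transformation logic:

```python
import csv
import io

# Hypothetical column mapping: source field -> (target name, cast).
# This stands in for the real, business-defined transformation rules.
MAPPING = {
    "order_id": ("order_id", int),
    "amount": ("amount_usd", float),
    "region": ("region", str),
}

def transform(csv_text):
    """Apply the column mapping and type casts to each CSV row, the way a
    Glue job's ApplyMapping step would, returning a list of typed records."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {dst: cast(row[src]) for src, (dst, cast) in MAPPING.items()}
        for row in reader
    ]
```

In a real Glue job this logic would run over PySpark DynamicFrames rather than an in-memory list; the sketch only shows the shape of the mapping step.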

AWS Architect

Confidential, Irvine, CA


  • Define and document various IAM Roles policies for federated users to access various components of Data Lake architecture.
  • Evaluate, prototype, design, configure and document various features of Glue managed service to demonstrate its capabilities.
  • Evaluate, prototype, design, configure and document Athena services to demonstrate their capabilities.
  • Identify suitable S3 storage services to store data objects.
  • Evaluate, configure and document the capabilities of EMR to perform analytics on stored data.
  • Prototype, configure and document the capabilities of Redshift to port data from S3.

Environment: Amazon IAM, S3, CloudWatch, PySpark, Glue ETL Jobs, Crawlers, Databases, Tables, Athena, Redshift, EMR, PyCharm, and Eclipse.
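Athena queries and Glue crawlers prune work most effectively when S3 objects follow a Hive-style partition layout. A small illustrative helper for building such prefixes; the bucket and table names are hypothetical, not from the actual engagement:

```python
from datetime import date

def partition_prefix(bucket, table, day):
    """Build a Hive-style S3 partition prefix (year=/month=/day=) so that
    Athena and Glue crawlers can discover and prune partitions by date.
    Bucket and table names are illustrative placeholders."""
    return (
        f"s3://{bucket}/{table}/"
        f"year={day.year}/month={day.month:02d}/day={day.day:02d}/"
    )
```

Writing objects under prefixes like this lets an Athena `WHERE year = ... AND month = ...` clause skip unrelated data entirely.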

Java/ AWS Architect/Developer

Confidential, Dover, NH


  • Built working prototypes to demonstrate the features and functionality of various AWS technologies for the data ingestion pipeline.
  • Evaluated various AWS services like Step Functions, Lambda, Managed Kafka, S3, CloudWatch, Glue (ETL, Crawler), etc. for the data lake ingestion pipeline.
  • Evaluated and developed ETL processes for various input file types (CSV, JSON) into Parquet.
  • Coded the ingestion pipeline using Step Functions, Lambdas, SQS queues, SNS notifications, Glue ETL, Crawlers and Athena.
  • Assisted the team in developing Step Functions, Lambdas, CFTs, SNS topics, SQS queues and Glue crawlers for various data flows, including streaming, non-streaming, CSV and JSON formats.

Environment: Amazon IAM, CloudFormation, Step Functions service, Lambda Functions, VPC, EC2, S3, CloudWatch, Glue ETL, Glue Crawlers, Managed Kafka, Git, Maven, Java 1.8, Eclipse, Jenkins, CloudForge, and Bamboo.
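An ingestion flow of this shape (Glue ETL run, then an SNS notification) can be expressed as a Step Functions state machine in Amazon States Language. The fragment below is illustrative only; the job name, topic ARN and account ID are placeholders, not the project's real resources:

```json
{
  "Comment": "Illustrative ingestion flow; names and ARNs are placeholders",
  "StartAt": "RunGlueEtl",
  "States": {
    "RunGlueEtl": {
      "Type": "Task",
      "Resource": "arn:aws:states:::glue:startJobRun.sync",
      "Parameters": { "JobName": "csv-to-parquet" },
      "Next": "NotifySuccess"
    },
    "NotifySuccess": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:ingest-status",
        "Message": "Ingestion complete"
      },
      "End": true
    }
  }
}
```

The `.sync` integration pattern makes the state machine wait for the Glue job to finish before moving on, which is what lets downstream crawler and notification steps sequence correctly.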

Java/ BigData /AWS Architect/Developer

Confidential, Irving, TX


  • Built working prototypes to demonstrate the features and functionality of various AWS technologies.
  • Evaluated various AWS services like Managed Kafka, Elasticsearch, Kinesis, Flink, S3, Glacier, DynamoDB, CloudWatch, Glue, etc. for the Data Lake.
  • Evaluated various data formats like JSON, Parquet and Avro for data storage and transfer.
  • Coded microservices for the data catalog and data indexing service using the Serverless Framework, Lambda, Node.js and DynamoDB.
  • Established an Elasticsearch cluster using CloudFormation and the Serverless Framework.
  • Documented the findings for the above-mentioned services, including:
  • Architecture and guidelines
  • Projected cost for the project if adopted
  • Cost comparison

Environment: Amazon IAM, CloudFormation, Serverless Framework, Lambda Functions, Kinesis, Flink, VPC, EC2, DynamoDB, S3, Glacier, CloudWatch, Glue, Managed Kafka, Node.js, Git, Gradle 4.2, Java 1.8, Eclipse, Jenkins, and SAFe Agile.
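The data catalog microservice above was built in Node.js on Lambda and DynamoDB; as a language-neutral illustration, a Lambda-style handler for a catalog lookup can be sketched as follows. The in-memory dictionary stands in for the DynamoDB table, and all dataset names and fields are hypothetical:

```python
import json

# In-memory stand-in for the DynamoDB catalog table (illustrative data only).
CATALOG = {
    "orders": {"format": "parquet", "location": "s3://data-lake/orders/"},
}

def handler(event, context=None):
    """API Gateway -> Lambda style handler: return a dataset's catalog entry,
    or a 404 response if the dataset is not registered."""
    name = event.get("pathParameters", {}).get("dataset")
    entry = CATALOG.get(name)
    if entry is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(entry)}
```

In the real service the dictionary lookup would be a DynamoDB `GetItem` keyed on the dataset name; the response envelope is the standard Lambda proxy-integration shape.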


Confidential, Irving, TX


  • Worked on POCs to demonstrate various features and functionality of the ZooKeeper and Kafka clusters.
  • Installed Confluent Kafka and ZooKeeper clusters for the development, testing, QA and production environments.
  • Documentation for Kafka and Zookeeper
  • Architecture Diagrams
  • Troubleshooting Guide
  • Testing procedure for non-functional tests
  • Host and port details
  • Startup and Shutdown, Production rolling restart procedure
  • Venafi certificate instructions
  • Log file settings
  • Topic create, alter, describe, delete, partitions, replicas
  • Scripts, paths, binaries, log locations and details
  • Zookeeper CLI commands
  • Zookeeper ensemble and Kafka broker configuration
  • Configure Kafka brokers and Zookeeper ensemble for project needs.
  • Setting up security using SASL (Simple Authentication and Security Layer) and TLS1.2.
  • Configured and coded metrics collection using Java and the Yammer Metrics API for ZooKeeper, Kafka, producers and consumers, storing metrics in Graphite and displaying them in Grafana. Created dashboards in Grafana.
  • Coded wrapper scripts for various functions like topic creation, start, stop etc. for Zookeeper and Kafka brokers.
  • Host validation check to ensure disk mounts, permissions, ssh keys are present
  • Start/Stop Kafka Brokers/Zookeeper with appropriate Java configurations and Runtime password lookup in Cyberark vault
  • Topic Creation across multiple brokers in a single cluster setting ACL for each
  • Test Cyberark connection and Cyberark password retrieval
  • Assisted teams with regression testing, configuration and guidance on setting up Kafka Producer and Consumers.

Environment: Confluent Kafka 3.3.0, Yammer Metrics API 2.2.0, Kafka 11.4, ZooKeeper 3.4.9, Git, Gradle 4.2, Java 1.8, Eclipse.
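The SASL + TLS 1.2 broker security setup described above corresponds to settings of roughly this shape in a Kafka broker's `server.properties`; the hostname and keystore paths below are placeholders, and the actual deployment may have used a different SASL mechanism:

```properties
# Illustrative broker security settings (host and file paths are placeholders).
listeners=SASL_SSL://broker1.example.com:9093
security.inter.broker.protocol=SASL_SSL
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
ssl.keystore.location=/etc/kafka/ssl/broker1.keystore.jks
ssl.truststore.location=/etc/kafka/ssl/truststore.jks
ssl.enabled.protocols=TLSv1.2
```

Restricting `ssl.enabled.protocols` to TLSv1.2 and using a `SASL_SSL` listener gives both the authentication (SASL) and the transport encryption (TLS 1.2) mentioned above.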

Product Developer/ Architect

Confidential, Tampa, FL


  • Configured and queried the Cassandra database using Java DAOs.
  • Very good understanding of the Cassandra cluster mechanism, including replication strategies, snitches, gossip, consistent hashing and consistency levels.
  • Good experience with Cassandra configuration, creating multi-node clusters, and reading from and writing to Cassandra.
  • Created REST/SOAP services to fetch various job details using the Spring framework.
  • Configured Zookeeper for various Kafka queues.
  • Responsible for configuring and writing Kafka Streams, producer, consumer and connector API's.
  • Created and worked on CRON scheduler to schedule Jobs to store static data into Oracle DB.

Technologies used: Cassandra 2.1, Kafka, ZooKeeper, Spring framework, Jersey REST framework, SOAP UI, JSON, DBeaver, Oracle, Java, Git, Onestash.
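The consistent hashing mentioned above (how Cassandra maps a partition key to a node on the token ring) can be illustrated with a toy ring. This sketch uses MD5 as a stand-in for Cassandra's Murmur3 partitioner, and the node names are illustrative:

```python
import bisect
import hashlib

def _token(key):
    # Stable 64-bit token from MD5; a stand-in for Murmur3 partitioning.
    return int(hashlib.md5(key.encode()).hexdigest()[:16], 16)

class Ring:
    """Toy consistent-hash ring: each node owns the arc up to its token,
    and a key belongs to the first node clockwise from the key's token."""
    def __init__(self, nodes):
        self.tokens = sorted((_token(n), n) for n in nodes)

    def node_for(self, key):
        keys = [t for t, _ in self.tokens]
        i = bisect.bisect_right(keys, _token(key)) % len(self.tokens)
        return self.tokens[i][1]
```

Because placement depends only on token order, adding or removing a node moves only the keys on the adjacent arc, which is the property that makes Cassandra's rebalancing cheap.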

Product Developer/Architect

Confidential, Philadelphia, PA


  • Architected and implemented highly scalable RESTful/SOAP APIs using the Spring framework that connect to downstream systems.
  • Senior team member responsible for delivering core architecture solution and complex components.
  • Designed and configured Kafka with long poll to retrieve events generated by various devices.
  • Coding new features, enhancements, based on Product team requirements and priorities.
  • Led design and development of operational resource APIs for admins and technicians, drastically improving the user experience by decreasing installation time.
  • Configured and implemented Cluster Location services, Account, Subscriber, and COPS APIs for various client teams.
  • Ported data from Oracle DB to Cassandra using DataStax API’s.
  • Developed unit test cases for the APIs using the JUnit and EasyMock frameworks.
  • Document API’s, design documents and user guides.
  • Collaborated as the point of contact for Home Security products and integration services for other Confidential lines of business, customer support applications and multiple customer-facing applications/tools.

Environment: Java 1.8, REST SOA architecture, Cassandra 1.2, Kafka queue, Memcached 2.7.1, WebSocket, ZooKeeper, Splunk, Maven, JSON, Oracle 11g, Curl, JAX web services, graph database (Titan), SAML, OAuth, Real-time Service to Service Integration, Docker, API management and publishing, document management WIKI applications, JIRA, Python scripts, EasyMock, Eclipse, AnthillPro, Git - GitHub and Gerrit.
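The long-poll retrieval of device events described above follows a simple pattern: block briefly for the first event, then drain whatever else is immediately available. A minimal sketch using a local queue as a stand-in for the Kafka-backed event feed; the timeout and batch-size parameters are illustrative:

```python
import queue

def long_poll(q, timeout=0.1, batch=10):
    """Return up to `batch` events: block up to `timeout` seconds for the
    first one, then drain the rest without blocking. Returns [] if no
    event arrives within the timeout."""
    events = []
    try:
        events.append(q.get(timeout=timeout))   # block for the first event
        while len(events) < batch:
            events.append(q.get_nowait())       # then drain non-blocking
    except queue.Empty:
        pass
    return events
```

Against a real consumer the blocking `get` would be the long-poll request to the broker; the pattern keeps latency low for sparse traffic while still batching bursts.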
