
AWS + Big Data Architect Resume


Milpitas, CA

PROFILE:

  • IT veteran with 10+ years of experience, keen on learning, embracing, and implementing new technologies in the most efficient way
  • Self-driven and highly motivated
  • Relish tough, complex problem-solving scenarios and challenging situations; always ready to deep-dive on any demanding front (conceptual ideas -> analysis -> coding -> all the way to architecture & maintenance) and willing to add value with hands-on contribution.
  • Proven communicator who can converse effortlessly with business and technical people alike.
  • Well experienced in hands-on, Kickstart (PXE), and Jumpstart installation of fully and mostly POSIX-compliant systems such as Red Hat 4/5/6, CentOS 5/6, and openSUSE 11/12.
  • Experience with different versions of OpenStack, VMware, and other private cloud platforms.
  • Experience in building and running Docker containers and images.
  • Experience in manipulating raw data into required formats using scripting tools such as sed, awk, cut, and various others.
  • Hands-on experience in SaltStack deployment and dashboard configuration.
  • Experience working with DHCP/LDAP/NIS/NFS services.
  • Supported enterprise Storage Area Networks (SAN) and tape backup/restore technology in a mission-critical environment.
  • In-depth knowledge of Amazon AWS cloud administration, covering services such as EC2, S3, EBS, VPC, ELB, Route 53, Auto Scaling, and Security Groups.
  • Good working knowledge of the Amazon AWS IAM service: IAM policies, roles, users, groups, AWS access keys, and MFA.
  • Experience in using various network protocols such as HTTP, UDP, POP, FTP, TCP/IP, and SMTP.
  • Experience in storage technologies such as NetBackup, Hitachi, EMC storage, Confidential, and SAN.
  • Software package deployment and disaster recovery/business contingency management.
  • Expertise in designing and implementing disaster backup and recovery plans.
  • Very good understanding of the concepts and implementation of high availability, fault tolerance, failover, replication, backup, recovery, Service-Oriented Architecture (SOA), and various Software Development Life Cycle (SDLC) methods.
  • Uniquely skilled, with the perfect blend of time-tested RDBMS expertise and the latest IT disruptions, viz. Big Data and Cloud technologies.

TECHNICAL SKILLS:

Big Data: Cloudera 5.x, Apache Hadoop 2.x, Apache Spark 2.x, Spark Streaming, Kafka, Flume

Cloud: AWS, Oracle Cloud

NoSQL: HBase, DynamoDB, Apache Cassandra, InfluxDB, MongoDB

Data Flow: Apache NiFi 1.7

Oracle RDBMS: 7.x to 12c (Development to Administration)

Data Warehouse: Oracle RDBMS & AWS RedShift, Oracle Database Appliance (ODA)

SQL on Hadoop/S3: Hive, Impala, Athena, Apache Drill

Data Integration and ETL: SQL*Loader, External Tables, Hive+HBase on HDFS, GoldenGate, AWS Change Streams

Oracle Performance Tuning: OEM, Diagnostics + Tuning packs, Statspack, etc.

Oracle Administration: DB installation, upgrades & migrations, proactive & real-time monitoring for issues.

EMPLOYMENT HISTORY:

Confidential, Milpitas, CA

AWS + Big Data Architect

Responsibilities:

  • Evaluated a variety of time-series databases (InfluxDB, Cassandra, MemSQL) for time-sensitive analysis of performance counters automatically captured from storage devices sitting at customers' datacenters.
  • Reviewed and re-architected microservices that feed off the data pipeline so that they avoid tight dependencies and can independently scale up/down (Apache Kafka, Spark Streaming & Docker); a minimal streaming sketch follows this list.
  • Key contributor in designing Apache NiFi data flows for critical workflows & scheduler jobs that previously ran as multi-step ETL workflows in Pentaho (Apache NiFi, Oracle (DSS) database & JSON).
  • Ongoing work accommodating new requirements into the existing data-processing pipeline (Toolkit: Cloudera 5.14.x, Scala 2.12, DSE stack for Solr + Cassandra, MongoDB, Kafka streaming, Flume, Databricks' Spark stack).
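For illustration only, a minimal Scala sketch of the streaming leg of such a pipeline, using Spark Structured Streaming over Kafka. The topic name device-perf-counters, the broker address, the CSV field layout, and the console sink are all assumptions for the sketch, not details from the actual engagement:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object MetricsPipeline {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("device-metrics-stream").getOrCreate()
        import spark.implicits._

        // Raw performance counters arriving on a Kafka topic (topic/broker are hypothetical)
        val raw = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "device-perf-counters")
          .load()

        // Assume each record value is a CSV line: deviceId,counter,value,timestamp
        val parsed = raw.selectExpr("CAST(value AS STRING) AS line")
          .select(split($"line", ",").as("f"))
          .select(
            $"f".getItem(0).as("deviceId"),
            $"f".getItem(1).as("counter"),
            $"f".getItem(2).cast("double").as("value"),
            $"f".getItem(3).cast("timestamp").as("ts"))

        // 5-minute tumbling-window averages per device and counter
        val agg = parsed
          .withWatermark("ts", "10 minutes")
          .groupBy(window($"ts", "5 minutes"), $"deviceId", $"counter")
          .agg(avg($"value").as("avgValue"))

        // Console sink stands in for the time-series store (e.g. InfluxDB/Cassandra) evaluated above
        agg.writeStream
          .outputMode("update")
          .format("console")
          .start()
          .awaitTermination()
      }
    }

The decoupling mentioned above comes from Kafka itself: each microservice reads the topic in its own consumer group, so it can scale or fail independently of the producers and of the other consumers.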

Confidential, DC

Database Cloud Architect ( AWS), Data Engineer ( Big Data Analytics)

Responsibilities:

  • Played a key role in transitioning many clients' database infrastructure from on-prem to the cloud (AWS RDS).
  • This role typically involved discussions with clients to understand their infrastructure layout, application dependencies, and cost footprint, and then recommending a migration strategy to the cloud.
  • Also took on a Data Engineer role for a few specific client projects, where the assignment was to build PoCs that offloaded data from the high-license database tier onto HDFS and created equivalent Hive & MapReduce workflows for data processing and reporting (Apache Hadoop v1, Hive, Pig, MR, S3 & AWS QuickSight for reporting); a sketch of the offload pattern follows this list.
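A rough sketch of the offload pattern, shown here in Scala over Hive's JDBC interface; the HiveServer2 endpoint, schema, table, and column names are hypothetical stand-ins, and the real PoCs also used Pig and raw MapReduce:

    import java.sql.DriverManager

    object HiveOffloadReport {
      def main(args: Array[String]): Unit = {
        // HiveServer2 endpoint; host, schema, and table below are hypothetical
        Class.forName("org.apache.hive.jdbc.HiveDriver")
        val conn = DriverManager.getConnection("jdbc:hive2://hive-host:10000/default", "", "")
        val stmt = conn.createStatement()

        // External table laid over files offloaded from the licensed RDBMS onto HDFS
        stmt.execute(
          """CREATE EXTERNAL TABLE IF NOT EXISTS orders_offload (
            |  order_id BIGINT, customer_id BIGINT, amount DOUBLE, order_date STRING)
            |ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
            |LOCATION '/data/offload/orders'""".stripMargin)

        // Equivalent of a reporting query that previously ran on the licensed database tier
        val rs = stmt.executeQuery(
          "SELECT order_date, SUM(amount) FROM orders_offload GROUP BY order_date")
        while (rs.next()) println(s"${rs.getString(1)} -> ${rs.getDouble(2)}")
        conn.close()
      }
    }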

Confidential

Performance Architect/Oracle SME

Oracle Stack: Oracle Database 11gR2, Oracle Streams, Oracle GoldenGate

Responsibilities:

  • Represented & positioned as the Voice of Oracle ACS (Advanced Customer Support) at the client site for 2+ years.
  • Responsibilities included ensuring consistently smooth performance of databases serving a mission-critical application with nationwide clientele and visibility.
  • Scope of work included everything from analyzing requirements, discussing implementations with vendors, reviewing architectural changes to the database (physical and logical), and shaping replication strategies for performance and for DR, to benchmarking performance in labs, monitoring and analyzing production workloads, and submitting predictive performance reports.

Confidential, CA

Sr. Oracle Consultant

Tools: Oracle Server 9.x, 10.x, 11gR2

Responsibilities:

  • As an integral part of the Database Development team, I was involved with many new initiatives at Confidential and supported all of them from a database standpoint, providing guidelines and recommendations on database activities.
  • Developed PL/SQL routines, evolved the database model along with the company's direction, ensured the two databases stayed in sync, and wrote custom PL/SQL code to manually pull data from the source and sync it up whenever the replication product ran into conflicts; a minimal sketch of such a manual sync follows this list.
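A minimal sketch of the manual-sync idea, rendered in Scala over JDBC for consistency with the other sketches (the original work was pure PL/SQL); the connection strings, the customers table, and the one-day change window are hypothetical:

    import java.sql.DriverManager

    object ManualSyncJob {
      def main(args: Array[String]): Unit = {
        // Source and target connection details are hypothetical placeholders
        val src = DriverManager.getConnection("jdbc:oracle:thin:@src-host:1521:SRC", "app", "secret")
        val tgt = DriverManager.getConnection("jdbc:oracle:thin:@tgt-host:1521:TGT", "app", "secret")
        tgt.setAutoCommit(false)

        // Pull rows changed on the source since the last sync (window is an assumption)
        val rs = src.createStatement().executeQuery(
          "SELECT id, name, updated_at FROM customers WHERE updated_at > SYSDATE - 1")

        // Upsert each row on the target; MERGE resolves the conflicted rows
        val merge = tgt.prepareStatement(
          """MERGE INTO customers t
            |USING (SELECT ? AS id, ? AS name, ? AS updated_at FROM dual) s
            |ON (t.id = s.id)
            |WHEN MATCHED THEN UPDATE SET t.name = s.name, t.updated_at = s.updated_at
            |WHEN NOT MATCHED THEN INSERT (id, name, updated_at)
            |  VALUES (s.id, s.name, s.updated_at)""".stripMargin)

        while (rs.next()) {
          merge.setLong(1, rs.getLong("id"))
          merge.setString(2, rs.getString("name"))
          merge.setTimestamp(3, rs.getTimestamp("updated_at"))
          merge.executeUpdate()
        }
        tgt.commit()
        src.close(); tgt.close()
      }
    }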

Confidential

Oracle Consultant

Tools: Oracle Server 10gR2 (4-node RAC), Toad, OEM, Informatica

Responsibilities:

  • I was involved with performing day-to-day health checks of the production database and writing code for data patches (typically SQL & PL/SQL) to mitigate application-side issues.
  • Supported the ETL team (Informatica) and worked closely with them to ensure all enterprise data loads were optimized to complete before 6 AM; when issues were reported, troubleshot the database side of things to make ETL jobs complete faster, and found and tuned queries that took away resources.
  • Optimized table and index structures for RAC, developed SQL & PL/SQL APIs, and ran PoCs to demonstrate better/more efficient alternative approaches to Dev teams; a small sketch of the resource-hog query hunt follows this list.
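One way such resource-hungry queries can be surfaced is by ranking V$SQL by cumulative elapsed time, sketched below in Scala over JDBC; the connection string is a placeholder, and querying V$SQL requires appropriate privileges:

    import java.sql.DriverManager

    object TopSqlFinder {
      def main(args: Array[String]): Unit = {
        // Hypothetical connection details; needs SELECT privileges on V$SQL
        val conn = DriverManager.getConnection("jdbc:oracle:thin:@db-host:1521:PROD", "perf", "secret")

        // Top 10 statements by total elapsed time (ROWNUM form suits 10g-era databases)
        val rs = conn.createStatement().executeQuery(
          """SELECT * FROM (
            |  SELECT sql_id,
            |         elapsed_time / GREATEST(executions, 1) AS avg_elapsed_us,
            |         buffer_gets,
            |         sql_text
            |  FROM   v$sql
            |  ORDER  BY elapsed_time DESC)
            |WHERE ROWNUM <= 10""".stripMargin)

        while (rs.next()) {
          val sqlId = rs.getString("sql_id")
          val avgUs = rs.getLong("avg_elapsed_us")
          val gets  = rs.getLong("buffer_gets")
          val text  = rs.getString("sql_text").take(60)
          println(f"$sqlId  avg=$avgUs%,d us  gets=$gets%,d  $text")
        }
        conn.close()
      }
    }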

Confidential, Pasadena, CA

Oracle Consultant

Tools: Oracle DB server 9.x, Toad

Responsibilities:

  • Role included code review of the RISK system code and tuning of various code paths that lagged behind while syncing up with an external source (ingestion or mapping). The external source system was Sybase; data was transferred to the Oracle side at the end of each day (after market hours), and multiple nested processing steps were all handled via PL/SQL code.
