
Sr. Cassandra DBA/Hadoop Admin Resume


Atlanta, GA

SUMMARY

  • I have been working as a Cassandra/NoSQL/Hadoop/Big Data/Database expert with over 17 years of experience in information technology
  • 5+ years of experience designing, engineering, and administering Cassandra/Hadoop/Big Data/NoSQL environments
  • Designed, Installed and Configured Large Cassandra Clusters
  • Migrated Data from Oracle to Hadoop (HDFS) for data analysis
  • Performed data loads from large Oracle 11g/12c databases into large Hadoop and Cassandra clusters across multiple data centers
  • Automated data analysis tasks through PL/SQL, HQL, and Bash shell scripts; wrote various complex Hive (HQL) queries
  • Tuned various NoSQL databases on Cassandra 1.x/2.x/3.x; extensive experience designing databases for OLTP and data warehousing, data conversion, data and volumetric analysis, database security administration, and storage, application infrastructure, and enterprise model management
  • Proven analytical and problem-solving skills combined with strong teamwork; excellent verbal and written communication skills; experienced in developing and administering best practices

PROFESSIONAL EXPERIENCE

Confidential - Atlanta, GA

Sr. Cassandra DBA/Hadoop Admin

Responsibilities:

  • Architected and designed data models for Cassandra/NoSQL databases in the OMS/Pricing environment for many use cases in partnership with application teams and DevOps; designed and implemented NoSQL database systems on Cassandra clusters for THD pricing/sales analytical systems;
  • Designed data models for the Cassandra environment, including keyspace/column family design, database coding, performance tuning, and schematics for the Cassandra database architecture; tuned Cassandra systems with techniques such as manual compaction and diagnosed freezing/unresponsive Cassandra peers;
  • Performed capacity planning for Cassandra database setups and designed/implemented rollout plans for Cassandra clusters across data centers using NetworkTopologyStrategy and GossipingPropertyFileSnitch (see the keyspace sketch after this list); administered Cassandra peers/clusters in Dev/QA/Prod environments;
  • Architected multi-data center Cassandra clusters for the THD environment with various snitches on the THD private cloud, AWS S3, and Google Cloud; achieved data ingestion from Oracle into Cassandra 2.x/3.x; created Cassandra column families and tuned queries in CQL; managed and tuned batch/scheduled jobs on Cassandra databases and re-engineered the data model to avoid ad hoc and batch jobs on Cassandra;
  • Architected and tuned Cassandra clusters/nodes through garbage collection tuning, appropriate compaction strategy setup, reducing large numbers of tombstones, and tuning large SSTables and partitions; studied the performance model of Cassandra clusters and analyzed queries for tunable consistency at various consistency levels;
  • Designed, installed, and configured a 30-node Hadoop 2.x cluster; migrated data from Oracle and MySQL into HDFS for data analysis; performed data loads from large Oracle 11g/12c databases into Hadoop/HDFS using Sqoop (see the Sqoop/Hive sketch after this list);
  • Led efforts to tune Hadoop DataNodes to scale test results in line with application team requirements; minimized maintenance downtime by coordinating with the different application teams involved in maintenance activities;
  • Participated in designing and developing the big data infrastructure environment for the THD Clearance Optimization and Price Engine project; installed and configured Hadoop 1.x/2.x; configured HDFS for storing data content for analysis; performed data loads from Oracle RDBMS into Hadoop;
  • Participated in the design and development of Hadoop MapReduce programs to implement the pricing and clearance pricing algorithms, using tools such as Hive and HBase;
  • Automated data analysis tasks through PL/SQL, HQL, and Bash shell scripts; worked on various Hive performance techniques such as partitions, buckets, and the CBO; tuned Hadoop systems (HDFS) for better performance; performed unit, integration, and system testing of the applications.
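The bullets above mention NetworkTopologyStrategy, GossipingPropertyFileSnitch, CQL tuning, and compaction strategy choices. The following is a minimal, illustrative sketch of that kind of setup, not the actual THD schema; the keyspace, table, and data center names are hypothetical.

```bash
#!/usr/bin/env bash
# Minimal sketch: a keyspace replicated across two data centers with
# NetworkTopologyStrategy, a pricing table with an explicit compaction
# strategy, and a read at LOCAL_QUORUM. Names are hypothetical.

CASSANDRA_HOST="${1:-localhost}"

cqlsh "$CASSANDRA_HOST" <<'CQL'
-- Data center names must match cassandra-rackdc.properties when the
-- cluster runs GossipingPropertyFileSnitch.
CREATE KEYSPACE IF NOT EXISTS pricing
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC_ATL': 3,
    'DC_DAL': 3
  };

-- One partition per SKU, clustered by effective date (newest first).
CREATE TABLE IF NOT EXISTS pricing.item_price (
    sku          text,
    effective_dt timestamp,
    price        decimal,
    currency     text,
    PRIMARY KEY ((sku), effective_dt)
) WITH CLUSTERING ORDER BY (effective_dt DESC)
  AND compaction = { 'class': 'LeveledCompactionStrategy' };

-- Tunable consistency: this read only needs a quorum in the local DC.
CONSISTENCY LOCAL_QUORUM;
SELECT price, currency FROM pricing.item_price
 WHERE sku = '1001234' LIMIT 1;
CQL
```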
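The Sqoop loads and Hive partition/bucket/CBO tuning above follow a pattern like the sketch below; the JDBC URL, schema, column names, and paths are hypothetical placeholders rather than the production values.

```bash
#!/usr/bin/env bash
# Minimal sketch: import an Oracle table into HDFS with Sqoop, stage it in
# Hive, then load a partitioned and bucketed table for analysis. Connection
# details, schemas, and paths are hypothetical.

ORACLE_JDBC="jdbc:oracle:thin:@//oradb.example.com:1521/ORCL"
TARGET_DIR="/data/raw/pricing/item_price"

# Parallel import (4 mappers) split on the primary key column.
sqoop import \
  --connect "$ORACLE_JDBC" \
  --username pricing_ro --password-file /user/etl/.ora_pass \
  --table PRICING.ITEM_PRICE \
  --split-by ITEM_ID \
  --num-mappers 4 \
  --target-dir "$TARGET_DIR" \
  --fields-terminated-by '\t'

hive <<'HQL'
-- External staging table directly over the Sqoop output.
CREATE EXTERNAL TABLE IF NOT EXISTS pricing.item_price_stg (
    item_id  BIGINT,
    price    DECIMAL(10,2),
    currency STRING,
    load_dt  STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/raw/pricing/item_price';

-- Partitioned + bucketed ORC table for analytical queries.
CREATE TABLE IF NOT EXISTS pricing.item_price (
    item_id  BIGINT,
    price    DECIMAL(10,2),
    currency STRING
)
PARTITIONED BY (load_dt STRING)
CLUSTERED BY (item_id) INTO 32 BUCKETS
STORED AS ORC;

SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.enforce.bucketing=true;  -- pre-2.x Hive; 2.x enforces bucketing by default
SET hive.cbo.enable=true;         -- let the cost-based optimizer use statistics

INSERT OVERWRITE TABLE pricing.item_price PARTITION (load_dt)
SELECT item_id, price, currency, load_dt FROM pricing.item_price_stg;
HQL
```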

ENVIRONMENT: Cassandra 1.x/2.x, HBase, Hadoop 1.x/2.x, Hive 0.13/0.14/2.x, Sqoop, MapReduce, GoldenGate 10/11, KSH/Bash shell, NoSQL

Confidential, San Francisco, CA

Sr. DBA Oracle/Cassandra/Hadoop/NoSQL

Responsibilities:

  • Performed data migrations from RDBMS into Cassandra databases; built Hadoop clusters in RHEL 6/7 environments; analyzed data on the Oracle database to move it from Oracle 11g into HDFS filesystems;
  • Performed database upgrades using 11g Data Guard and in-place Oracle upgrades; accomplished the PCI compliance data migration on the Confidential credit card database; upgraded the credit card database to 11gR2; designed and implemented a process for tokenizing user and profile credit card data in the credit card database; tuned the system after data migration and tokenization; designed data encryption and validation procedures for tokenized credit cards;
  • Partnered with application and infrastructure architects and teams to engineer complex technical products that deliver tangible business solutions to various business divisions as part of DBA engineering project activities; architected Oracle Fail Safe environments for site-critical databases, including configuration, failover testing, tuning, and creating standards;
  • Accomplished OMS (Sterling) upgrades in Phase I and Phase II; tuned long-running queries on the database servers for OMS (Sterling) calls; tuned Oracle systems for Sterling to stabilize query performance with stress tests and Keynote tests;
  • Designed disaster recovery strategies and performed production switchovers on 5-node RAC clusters with 4 standby databases for the ATG and OMS databases; designed and implemented a DR drill before the switchover between sites using Flashback technology;
  • Designed archiving techniques for moving large table data quarterly from production SEPHPR into SEPHARC; automated the data archiving process with shell scripts and PL/SQL (see the archiving sketch after this list); improved production database performance after the data archive; designed compressed tables in the archiving database to reduce SAN storage; led the DBA team in architecture council meetings on production issues, changes, design decisions, and application database, system, and storage infrastructure upgrades;
  • Analyzed performance issues with AWR and ADDM reports covering deadlocks/graphs/traces, tuning parallel queries, Oracle and system waits, latch contention, CPU/memory high-watermark analysis during peak hours, table and index fragmentation analysis, collecting optimizer statistics for tables and indexes, and fixing stale materialized views and refresh jobs; troubleshot performance and tuning problems in Oracle databases using tools such as AWR, ASH, ADDM, STATSPACK, OEM Grid Control, and manual scripts.
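The quarterly SEPHPR-to-SEPHARC archiving above combined shell scripting with PL/SQL; below is a minimal sketch of that pattern only. The table name, database link, credentials variable, and 15-month retention cutoff are hypothetical, not the production values.

```bash
#!/usr/bin/env bash
# Minimal sketch: copy aged rows from a production table into an archive
# database over a DB link, then purge them from production. Table name,
# DB link, and retention window are hypothetical.

set -euo pipefail

QUARTER_END=$(date +%Y-%m-%d)

sqlplus -s "archiver/${ARCHIVER_PWD}@SEPHPR" <<SQL
WHENEVER SQLERROR EXIT FAILURE
DECLARE
  v_cutoff DATE := ADD_MONTHS(TO_DATE('${QUARTER_END}', 'YYYY-MM-DD'), -15);
BEGIN
  -- 1. Copy aged rows into the archive database (the target table there
  --    is created with compression to reduce SAN storage).
  INSERT INTO orders_arch@SEPHARC_LINK
  SELECT * FROM orders WHERE order_date < v_cutoff;

  -- 2. Remove the archived rows from production.
  DELETE FROM orders WHERE order_date < v_cutoff;

  COMMIT;
END;
/
EXIT
SQL
```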

ENVIRONMENT: Cassandra 1.x, Hadoop 1.x/2.x, Hive 0.13/0.14/2.x, Sqoop, Oracle 10g/11g/12c RAC, Data Guard, Solaris 8/9/10, RHEL 6.x/7.x, RMAN, RAC, DG, Sterling and ATG eCommerce products, NoSQL

Confidential, Santa Clara, CA

Sr. Database Architect

Responsibilities:

  • Responsibilities included data center migration of all infrastructure components, including hardware/network/storage/database; designed/developed/implemented complete upgrade/migration plans for the PSVS, PBRM, and PDWH (transaction and reporting) databases, moving from Oracle 10.2.0.4 to 11.2.0.3 Real Application Clusters
  • Worked with infrastructure management, release management, application, and business management to implement hardware/server/storage/network/database changes during the Data Center Migration (DCM); also involved in network configuration and performance tuning of TCP/UDP buffers for the database interconnect, disk/ASM/AIO filesystem tuning, and the relevant OS kernel parameter changes (see the kernel tuning sketch after this list)
  • Upgraded more than 30 production SharePlex channels running on source and target databases as part of the production 11g database upgrades; implemented monitoring capabilities for SharePlex channels; fixed bugs/issues encountered with SharePlex during and after the upgrade; upgraded/deployed large installations of OEM Grid Control after the 11gR2 database upgrades; redesigned configuration changes/scripts/functionality for the 11gR2 databases, upgraded VCS resources, SharePlex channels, NetBackup configurations, and standby and Data Guard database configurations;
  • Designed and implemented GoldenGate for database upgrades from Oracle 10.2.0.4 to 11.2.0.4; managed large initial loads/data pumps and lags during data loading; provided extensive support after the upgrade for issues related to performance with GoldenGate 11 and the Oracle 11.2.0.4 grid infrastructure.
  • Performed benchmarks and stress tests on databases upgraded using GoldenGate, Data Guard, and in-place Oracle upgrades; documented the benefits of Oracle GG 11 over the other upgrade approaches; handled day-to-day GG issues such as lags, performance problems, and bugs, and worked with GoldenGate support to fix issues in the production environment;
  • Arranged team meetings to discuss critical war-room issues, business continuity, infrastructure upgrade and new build plans, and Oracle and AIX product support team meetings and escalations; conducted technical exchange training; attended a project management program and extensive Oracle GoldenGate 11 training

ENVIRONMENT: Oracle 10g/11g, Solaris 8/9/10, Red Hat Linux 4/5/6, GoldenGate 10/11, SharePlex 7.4/7.5, RMAN, RAC, DG, Hitachi/NetApp storage, Veritas NetBackup 5.x, VCS, EMC tools, SFRAC

Confidential

DBA Architect

Responsibilities:

  • Responsibilities included design, development, and support of production 24x7 Oracle 9i, 10g, and 11g databases for Confidential .com, Samsclub.com, and CANADA GM; provided database build/support project plans and LOEs for Confidential .com US GM e-commerce covering database scaling, changes, migrations, and upgrades;
  • Implemented Oracle Real Application Clusters (RAC) technology for Confidential .com and MTEP tenants to scale database hardware resources and increase the availability (HA solutions) of the database systems; participated in resolving all critical issues encountered during the various phases of rolling out RAC for applications such as the Order/Inventory/Catalog applications;
  • Designed and documented blueprints for HA (RAC) solutions, disaster recovery (between the East and North data centers), and replication technologies for Confidential .com, US GM, and CANADA GM; created process standards for database maintenance scenarios such as entire site down and application down
  • Designed the holiday performance/capacity planning process for all Confidential .com e-commerce tenants; arranged multiple levels of stress/load tests on the database and application with Keynote and other application testing vendors on the production site; identified the key bottlenecks during the site stress tests and generated postmortem results with detailed analysis based on orders processed per minute and other business requirements
  • Designed the site monitoring strategy on Nagios/ProactiveNet for the Order Management System, Catalog Inventory, Payment Gateway, credit card authorization, and Customer Relationship Management databases so that SiteOps/database admins could support the databases 24x7 with pagers and alerts; wrote SQL/PL/SQL scripts and Bash/KSH/SH shell programs for monitoring RAC, ASM, and Oracle databases (see the monitoring sketch after this list);
  • Provided valuable suggestions/solutions in daily war rooms for every critical production issue faced during the holiday season and attended daily lessons-learned sessions covering issues in the database, storage, network, operating system, web services, application, and monitoring layers
  • Analyzed and resolved issues such as DDL/DML locks, instance-level locks and latch contention, enqueues and wait events, ITL issues, invalid objects, hourly and daily AWR (Automatic Workload Repository) reports, tablespace high-water marks, max datafiles limits, top SQLs with 10046/10053 trace analysis, top active sessions, blocking sessions, highly utilized datafile volumes, dump file growth, ORA errors, database parameter changes, database memory hit ratios, partition growth, index utilization/tuning, soft/hard parsing, undo and temp segment growth and shrinks, ORA-600/7445/4031/60/3113 errors, standby read-only status checks, and log shipping and log apply for disaster recovery/archived log shipping;
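The Nagios/ProactiveNet checks and shell monitoring scripts described above follow the pattern sketched here: a small Bash wrapper around a SQL probe that maps a measurement onto Nagios exit codes. The thresholds and the blocking-session query are illustrative, not the production checks.

```bash
#!/usr/bin/env bash
# Minimal sketch of a Nagios-style check: page when blocked sessions pile up.
# Thresholds and the local "/ as sysdba" connection are hypothetical.

ORACLE_SID="${1:?usage: $0 <ORACLE_SID>}"
export ORACLE_SID
WARN=5
CRIT=20

BLOCKED=$(sqlplus -s "/ as sysdba" <<'SQL'
SET HEADING OFF FEEDBACK OFF PAGESIZE 0
SELECT COUNT(*) FROM v$session WHERE blocking_session IS NOT NULL;
EXIT
SQL
)
BLOCKED=$(echo "$BLOCKED" | tr -d '[:space:]')

# Standard Nagios exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL.
if [ "$BLOCKED" -ge "$CRIT" ]; then
    echo "CRITICAL - ${BLOCKED} blocked sessions on ${ORACLE_SID}"
    exit 2
elif [ "$BLOCKED" -ge "$WARN" ]; then
    echo "WARNING - ${BLOCKED} blocked sessions on ${ORACLE_SID}"
    exit 1
fi
echo "OK - ${BLOCKED} blocked sessions on ${ORACLE_SID}"
exit 0
```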

Confidential

Principal DBA

Responsibilities:

  • Designed, developed, and supported production 24x7 Oracle 9i, 10g, and 11g databases in various configurations such as RAC, Data Guard, and GG on Sun Solaris and AIX 5L, and provided L2/L3 production support covering troubleshooting of database issues, performance monitoring/tuning, and capacity planning for RAC, Data Guard, ASM, and performance and scalability issues; provided timely solutions to everyday critical issues encountered across production databases for the L2 support NOC at Confidential and worked with Oracle Support on TAR resolutions; guided the NOC through maintenance procedures during off-peak hours such as failovers, patching, upgrades, data loads, and ETL;
  • Implemented Oracle GoldenGate for replicating large tables to a remote site for reporting; handled all kinds of problems encountered with GG during daily database operations and tuned GG to replicate faster to the target systems; prepared GoldenGate parameter files for extraction and replication (see the parameter file sketch after this list); investigated replication failures using the Logdump utility; attended advanced GoldenGate 11 training and cross-trained other DBAs on the Confidential database environment and the GoldenGate replication setup.
  • Planned and implemented the migration from 9i manual standby to 10g/11g Active (physical) Data Guard; resolved the critical issues involved during the migration phase, such as minimizing DG recovery time, tuned the network parameters for DG, and maintained logs on DR, an 8-hour lag standby, and ETL standbys (4 per production box across various locations)
  • Provided solutions for optimal network configuration on Confidential databases with high connection volumes, including network buffer configuration, OLTP listener maintenance, tuning the Oracle connection pool for OLTP on shared servers, interconnect failover techniques for RAC, tuning the interconnect for cache coherency waits caused by interconnect bottlenecks, and setting up optimal SDU, TDU, and buffer sizing for Data Guard log shipping
  • Maintained large tables and indexes using online redefinition and online reorganization; converted large non-partitioned tables to range- and list-partitioned tables using online reorganization;
  • Tuned complex queries and ETL operations on large Confidential production databases with very large tables and indexes; analyzed and identified database performance bottlenecks and adopted the optimal solutions for the slowness;
  • Designed automated scripts for database monitoring, alerting, and alarming on spikes and top SQLs; monitored production and standby databases using SQL and Unix scripts and identified performance bottlenecks during peak hours
  • Provided timely solutions for production on-call issues such as block corruption, performance issues, locks and latches in memory, tuning long-running SQLs, and point-in-time restore and recovery
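The GoldenGate parameter files for extraction and replication mentioned above look broadly like the sketch below, here written out by a shell script. The process names, schemas, trail paths, and credentials are hypothetical, and the GGSCI commands only cover registering the source-side capture process.

```bash
#!/usr/bin/env bash
# Minimal sketch: classic GoldenGate Extract/Replicat parameter files for
# replicating one large table to a reporting schema. Process names, schemas,
# trail paths, and credentials are hypothetical.

GG_HOME=/u01/app/ogg

# Extract on the source: capture ORDERS changes into a local trail.
cat > "$GG_HOME/dirprm/extord.prm" <<'EOF'
EXTRACT extord
USERID ggadmin, PASSWORD ggadmin_pw
EXTTRAIL ./dirdat/et
TABLE app.orders;
EOF

# Replicat on the target: apply the trail to the reporting schema.
cat > "$GG_HOME/dirprm/repord.prm" <<'EOF'
REPLICAT repord
USERID ggadmin, PASSWORD ggadmin_pw
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/repord.dsc, APPEND, MEGABYTES 100
MAP app.orders, TARGET rpt.orders;
EOF

# Register and start the capture process from GGSCI on the source host.
"$GG_HOME/ggsci" <<'EOF'
ADD EXTRACT extord, TRANLOG, BEGIN NOW
ADD EXTTRAIL ./dirdat/et, EXTRACT extord
START EXTRACT extord
EOF
```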
