Cassandra DBA / Data Analyst Resume

Dallas, TX

SUMMARY

  • 5+ years of strong experience in database administration across Oracle versions and Postgres, plus around 4 years of experience with NoSQL databases such as MongoDB, Redis, and Apache Cassandra / DataStax Enterprise, covering installation, configuration, backup configuration, upgrades, nodetool commands, bootstrapping, performance tuning, decommissioning, and migration.
  • Experience across production, staging, and development environments with database planning, installation, upgrades and migrations, performance tuning, backup and recovery, security provisioning, Data Guard, GoldenGate, etc., with an understanding of ITIL guidelines.
  • Installed Cassandra 2.x in production and pre-prod environments per best practices and upgraded Cassandra databases from 1.x to 2.x.
  • Tested the application and the cluster with different consistency levels to check write and read performance at each consistency level.
  • Excellent knowledge of CQL (Cassandra Query Language) for retrieving data from a Cassandra cluster by running queries.
  • Good experience in designing, planning, administering, installing, configuring, troubleshooting, and performance monitoring of Cassandra clusters.
  • Involved in designing various stages of migrating data from RDBMS to Cassandra.
  • Used DataStax OpsCenter and nodetool utilities to monitor the cluster.
  • Knowledge of Apache Spark with Cassandra.
  • Experience performing Cassandra upgrades to major and latest versions.
  • Expertise in AWS, with experience implementing new AWS EC2 instances and working with EBS and S3 storage.
  • Good experience in managing very large OLTP and data warehousing databases.
  • Involved in data modeling and logical and physical database design, and worked as an on-call support team member administering 24x7 OLTP production databases.
  • Good experience with Oracle Grid Control, Oracle Enterprise Manager (OEM), ASM, AWR, ADDM, Flashback Technology, SQL Tuning Advisor, SQL Access Advisor, and Undo Advisor.
  • Expert in planning database migrations as part of datacenter migrations.
  • Good understanding of Microsoft Azure with Databricks: Databricks Workspace for business analytics, managing clusters in Databricks, and managing the machine learning lifecycle.
  • Performed database user and security administration activities.
  • Experience in benchmarking Cassandra clusters using the cassandra-stress tool (a brief command sketch follows this summary).
  • Excellent team player who also works independently, actively participates in team meetings, and maintains good relations with everyone on the team and with other teams.
  • Experience in developing complex queries, Stored Procedures, Functions, Views, and Triggers using SQL Server.
  • Background in a disciplined software development life cycle (SDLC) process and excellent analytical, programming, and problem-solving skills.
  • Ability to take and give directions, instructions, and assistance through the completion of tasks.
  • Provided 24x7 production support for multiple database environments across different clients.
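A minimal command sketch of the benchmarking and health-check workflow referenced above, assuming a reachable test node; the node address, operation counts, consistency level, and thread count are illustrative placeholders rather than values from an actual engagement.

    # Write and read benchmark at QUORUM consistency with cassandra-stress
    cassandra-stress write n=100000 cl=QUORUM -node 10.0.0.11 -rate threads=50
    cassandra-stress read  n=100000 cl=QUORUM -node 10.0.0.11 -rate threads=50

    # Quick cluster health check with nodetool
    nodetool -h 10.0.0.11 status    # node state, load, token ownership
    nodetool -h 10.0.0.11 tpstats   # pending/blocked tasks per thread pool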

TECHNICAL SKILLS

Programming Languages: PL/SQL, Python, Shell Scripting, Java

RDBMS: Oracle 11g, 12c, 19c, Postgres

NoSQL Technologies: Cassandra, MongoDB

Cassandra: Cassandra, DataStax OpsCenter and DevCenter, nodetool, Spark on Cassandra, and OpenStack.

Tools: OpsCenter, DevCenter, Jira, PuTTY, WinSCP, Toad, SQL Developer, VMware, VNC Viewer, DBCA, ADDM, RMAN, AWR, FileZilla, OEM, Ops Manager

Database concepts: Transactions, Data Integrity, Security Management, Authentication, Performance Tuning, Locking, Query Tuning, DDL and DML Definitions, Backups, Users, Passwords, Log Shipping, Schemas, Restores and Audits, DB Recovery, Database Mirroring, Database Health Checks, Normalization, Denormalization, Replication, Locks, Logins, Automating DBA Functions.

Version Control Systems: GitHub, Bitbucket

Cloud Technologies: Azure, Azure Databricks

Operating Systems: RHEL, OEL, CentOS

PROFESSIONAL EXPERIENCE

Confidential, Dallas, TX

Cassandra DBA / Data Analyst

Responsibilities:

  • Involved in Cassandra architecture and Cassandra data modeling; experienced in installing, configuring, and monitoring multiple DataStax Enterprise Cassandra clusters (B2B & B2C). Worked with the DataStax support team on implementing patches and upgrades.
  • Involved in end-to-end production monitoring and notification setup for Cassandra clusters using OpsCenter, nodetool utilities, shell scripting, and Python. Actively worked on troubleshooting hardware failures, analyzing disk failures, memory leaks, etc.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data in various formats like text, zip, XML, CSV, YAML, and JSON.
  • Ran many performance tests using the Cassandra-stress tool to measure and improve the read and write performance of the cluster.
  • Modified cassandra.yaml files to set configuration properties such as cluster name, node addresses, seed provider, memtable sizes, and flush times.
  • Tuned and recorded performance of Cassandra clusters by altering the JVM parameters. Changed garbage collection cycles to align them with backups/compactions to mitigate disk contention.
  • Worked on tuning Bloom filters and configured compaction strategy based on the use case.
  • Worked on Cassandra clusters for start/stop/upgrade/repair/compaction/monitoring, and used user-defined compaction to purge tombstones in Cassandra.
  • Involved in troubleshooting Cassandra performance issues by analyzing various metrics available from OpsCenter and custom shell scripts (read request latency, write request latency, cfstats, tpstats, netstats, dstat); a brief command sketch follows this list.
  • Actively involved in expanding clusters: adding new datacenters to existing clusters, adding nodes, and removing dead nodes.
  • Involved in upgrading a 180-node cluster from 4.8.9 to 4.8.14 to resolve significant SSTable corruption issues, and automated corruption cleanup using offline and online scrub.
  • Actively worked with the development and operations teams during crisis periods to minimize customer impact, verifying data consistency across all datacenters using CQL (Cassandra Query Language).
  • Worked on managing the Hadoop dev and prod clusters.
  • Wrote CQL queries to run against data in a multi-DC Cassandra cluster with 8 nodes per datacenter.
  • Solely worked on a POC to create an Azure Storage account and migrate data from Hadoop to Azure.
  • Solely worked on a POC to create a Databricks workspace and clusters and establish a connection to the Cassandra database and the Azure Storage account.
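A minimal shell sketch of the kind of health check referenced in the metrics bullet above; the host address, keyspace name, and log path are illustrative placeholders.

    #!/bin/bash
    # Capture the nodetool metrics used for troubleshooting into a daily log.
    HOST=10.0.0.11                                    # placeholder node address
    LOG=/var/log/cassandra/healthcheck_$(date +%F).log
    {
      echo "== $(date) =="
      nodetool -h "$HOST" status              # up/down state, load, ownership
      nodetool -h "$HOST" tpstats             # pending/blocked/dropped per thread pool
      nodetool -h "$HOST" cfstats my_keyspace # per-table read/write latency (C* 2.x)
      nodetool -h "$HOST" netstats            # streaming and hinted handoff activity
      nodetool -h "$HOST" compactionstats     # outstanding compactions
    } >> "$LOG" 2>&1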

Environment: Cassandra, DataStax, DevCenter, OpsCenter, Shell Scripting, Python, Hadoop, Azure, Azure Databricks, Databricks Notebooks, Performance Tuning, CQL.

Confidential, Atlanta, GA

DBA (Mongo & Cassandra DB)

Responsibilities:

  • Involved in requirements gathering and capacity planning for a multi-datacenter Apache Cassandra cluster.
  • Installed, configured, and monitored Apache Cassandra prod, dev, and test clusters.
  • Implemented and maintained a multi-datacenter cluster.
  • Created required keyspaces for applications in prod, dev, and test clusters.
  • Determined and set up the required replication factors for keyspaces in prod, dev, and other environments in consultation with application teams (a brief sketch follows this list).
  • Created required tables and secondary indexes and granted appropriate privileges to users.
  • Set up Cassandra backups using snapshots.
  • Performed MongoDB DBA operational routines: performance optimization, job scheduling, sharded cluster and replica set setup, and instance configuration.
  • Set up the Cassandra database on multiple servers and managed data on the Cassandra cluster.
  • Managed MongoDB and Cassandra connectivity and security in the AWS and Microsoft Azure clouds.
  • Designed and implemented internal process improvements: automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability; implemented successful enterprise BI solutions.
  • Migrated data from an on-premises datacenter into the Amazon Web Services cloud and the Microsoft Azure database cloud.
  • MMS configuration experience; configured and monitored replica sets, optimized database/query performance, configured sharding and monitoring, and identified the proper shard keys.
  • Performed security measures, backups, and backup restores.
  • User management: created users, assigned roles, and managed permissions.
  • Responsible for administration, maintenance, performance analysis, and capacity planning for MongoDB/Cassandra clusters.
  • Coordinated and planned with application teams on MongoDB capacity planning for new applications.
  • Created aggregation queries for reporting and analysis.
  • Collaborated with development teams to define and apply best practices for using MongoDB.
  • Consulted with the operations team on deploying, migrating data, monitoring, analyzing, and tuning MongoDB applications.
  • Ensured the continuous availability of mission-critical MongoDB clusters using replication across data centers.
  • Implemented TTL and indexing based on collection data time duration.
  • Performance tuning and indexing strategies using MongoDB utilities such as mongostat and mongotop.
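A minimal sketch of the keyspace and backup setup described above, plus the MongoDB monitoring utilities from the last bullet; the keyspace name, datacenter names, replication factors, hosts, and snapshot tag are illustrative placeholders, not values from an actual cluster.

    # Multi-DC keyspace with per-datacenter replication factors (example values)
    cqlsh 10.0.0.11 -e "
    CREATE KEYSPACE IF NOT EXISTS app_ks
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};"

    # Snapshot-based backup of that keyspace (hard links under each table's snapshots/ dir)
    nodetool -h 10.0.0.11 snapshot -t app_ks_$(date +%F) app_ks

    # MongoDB runtime metrics with the standard utilities
    mongostat --host mongo01:27017 5    # 5-second samples of ops, memory, queues
    mongotop  --host mongo01:27017 5    # per-collection read/write time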

Environment: Cassandra DB, MongoDB, Azure, Role Management, Migration

Confidential, MO

Database Administrator (Oracle & Cassandra)

Responsibilities:

  • Involved in conceptual and physical data modeling.
  • Created data models in CQL for customer data.
  • Involved in Hardware installation and capacity planning for cluster setup.
  • Involved in the hardware decisions like CPU, RAM and disk types and quantities.
  • Used the Spark - Cassandra Connector to load data to and from Cassandra.
  • Worked on many performance tests using the Cassandra-stress tool to measure and improve the read and write performance of the cluster.
  • Modified cassandra.yaml files to set configuration properties such as cluster name, node addresses, seed provider, memtable sizes, and flush times.
  • Used DataStax OpsCenter for maintenance operations and keyspace and table management.
  • Created data models for customer data using the Cassandra Query Language.
  • Ran weekly repairs on the keyspaces using customized scripts, and a cluster-wide repair bi-weekly (a brief script sketch follows this list).
  • Experienced in setting up alerts and scheduling backups through OpsCenter
  • Hosted multiple applications on Cassandra with different keyspace strategies, replication factors, and consistency levels based on the business requirements to meet the SLA.
  • Worked on Linux shell scripts for business processes and for loading data from Oracle to Cassandra.
  • Evaluated business requirements and prepared detailed specifications that follow the project guidelines required to develop the application.
  • Analyzed the log files to determine the flow of the application and troubleshoot the issues.
  • Involved in moving the SSTables data onto the live cluster.
  • Implemented advanced procedures like text analytics and processing using the in-memory computing capabilities like Spark.
  • Tuned and recorded performance of Cassandra clusters by altering the JVM parameters. Changed garbage collection cycles to place them in tune with backups/compactions to mitigate disk contention.
  • Performed Migration from on premise to AWS Cloud.
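A minimal sketch of a scheduled repair script of the kind mentioned above; the host address and keyspace names are illustrative placeholders.

    #!/bin/bash
    # Weekly primary-range repair per keyspace (run from cron).
    HOST=10.0.0.21                       # placeholder node address
    for KS in ks_customer ks_orders; do  # placeholder keyspaces
      echo "$(date) starting repair of $KS"
      nodetool -h "$HOST" repair -pr "$KS"
    done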

Environment: DataStax 4.8, Cassandra 2.2, Cqlsh, OpsCenter, Shell Scripting, Solr, Apache Kafka, Spark.

Confidential

Database Administrator (Oracle/Postgres)

Responsibilities:

  • Created and maintained documentation specific to the DBA activities of the project.
  • Monitored alert mails and tickets and fixed the underlying issues.
  • Responsible for database upgrades and patching per scheduled CRs for RAC and non-RAC databases.
  • Performed daily and weekly backups: logical backups with the Export and Import utilities and physical backups through RMAN (a brief RMAN sketch follows this list).
  • Performed regular cloning of the production database to the staging environments.
  • Performed database cloning on Amazon EC2 instances using the CPM tool.
  • Performed database backups, restores, and recovery.
  • Supported various projects with analyses such as AWR, ASH, and ADDM reports and tuned the database accordingly.
  • Managed user profiles and privileges.
  • Performed index rebuilds and gathered statistics to improve performance.
  • Applied database patches such as CPU, PSU and Patch sets.
  • Performed upgrades from 11gR2 to 12cR2.
  • Performed migration of on-premises databases to AWS (EC2/RDS).
  • Monitored and checked performance using OEM.
  • Implemented and supported Oracle Data Guard for disaster recovery.
  • Monitored long-running sessions, blocking sessions, wait events, undo usage, etc., and took necessary actions.
  • Performing SQL Tuning using SQLT.
  • Performed tablespace reorganization as required.
  • Performed data archival of large database tables as required.
  • Supported, monitored, and troubleshot RAC databases.
  • Performed conversion of non-RAC databases to RAC.
  • Monitored and managed database indexes for optimal performance.
  • Performed ASM configuration and administration and converted non-ASM databases to ASM.
  • Performed Postgres installations in Linux and Windows environments.
  • Knowledge of disaster recovery principles and practices, including planning, testing, backup/restore
  • Hands-on experience on database administration, backup recovery and troubleshooting in co-location environments.
  • Responsible for configuring, integrating, and maintaining all Development, QA, Staging and Production PostgreSQL databases within the organization.
  • Responsible for all backup, recovery, and upgrading of all the PostgreSQL databases.
  • Monitoring databases to optimize database performance and diagnosing any issues.
  • End-to-end setup of filesystem monitoring, backups, and process monitoring on Postgres databases.
  • Implemented partitioning in Postgres with the pg_partman extension.
  • Implemented quotas using the PgQuota extension.
  • Implemented an HA solution with repmgr and PgBouncer.
  • Provided responses to all system/database issues on a 24x7 schedule, responding to critical events and situations outside normal work hours.
  • Worked with development and operations teams to tune production queries for optimal performance.
  • Implemented and monitored replication for high availability and disaster recovery scenarios.
  • Performed quarterly database reviews, checked for any issues in the DB, and implemented changes wherever required after coordinating with the application team.
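A minimal sketch of the nightly RMAN physical backup described above, assuming OS authentication on the database host; the instance name, log path, and retention handling are illustrative placeholders.

    #!/bin/bash
    # Nightly physical backup: full database plus archived logs, then purge
    # backups that fall outside the configured retention policy.
    export ORACLE_SID=ORCL                      # placeholder instance name
    printf '%s\n' \
      'BACKUP DATABASE PLUS ARCHIVELOG;' \
      'DELETE NOPROMPT OBSOLETE;' | rman target / log=/tmp/rman_daily.log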

Environment: Oracle 11g, 12c, Postgres, RAC and ASM, Data Guard, repmgr and PgBouncer, pg_partman, PgQuota.

Confidential

Oracle Administrator

Responsibilities:

  • Relocated data files and monitored archive and datafile mount points.
  • Performed daily and weekly backups: logical backups with the Export and Import utilities and physical backups through RMAN.
  • Applied database patches such as CPUs, PSUs, and patch sets.
  • Configured the listener on the server side and TNS aliases on the client side.
  • Rebuilt indexes to improve query performance and analyzed index structures.
  • Managed user profiles and privileges.
  • Monitored and checked performance using OEM.
  • Created users with quota management.
  • Created user profiles and granted specific privileges and roles to users.
  • Cross-checked backups and deleted expired backups using RMAN.
  • Cloned Oracle Database 11g using RMAN.
  • Managed Oracle Database 11g storage through ASM.
  • Found corruption in the database and recovered it using RMAN.
  • Cloned the production database to the test environment using RMAN and handled quarterly database refresh activities.
  • Moved data between databases using the Data Pump Export and Import utilities and replicated the production schema to another instance for the month-end closing process (a brief Data Pump sketch follows this list).
  • Configured a Data Guard physical standby for the production database to ensure maximum availability, performance, and protection in the environment.
  • Implemented Real Application Clusters 11g and converted single-instance databases to RAC on ASM storage; supported, monitored, and troubleshot RAC databases.
  • Added nodes to an existing RAC instance.
  • Increased database performance by gathering statistics at the schema and table level, managing tables and indexes, and compiling invalid objects.
  • Upgraded Oracle Database from 10g to 11gR2.
  • Analyzed AWR and ASH reports and implemented ADDM recommendations.
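A minimal sketch of the Data Pump schema copy described above; the connect strings, directory object, schema, and dump file names are illustrative placeholders, and the dump files are assumed to be made available under the target instance's directory object before the import. Both commands prompt for the password.

    # Export the schema from the production instance
    expdp system@PRODDB directory=DATA_PUMP_DIR schemas=APP_OWNER \
          dumpfile=app_owner_%U.dmp logfile=app_owner_exp.log parallel=4

    # Import it into the other instance for month-end processing
    impdp system@REPORTDB directory=DATA_PUMP_DIR schemas=APP_OWNER \
          dumpfile=app_owner_%U.dmp logfile=app_owner_imp.log parallel=4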

Environment: Oracle 10g and 11g, Data Guard, 11g RAC.
