- Progressive and versatile information technology leader, focused on technical and strategic database leadership for MySQL, SQL Server, Oracle, and MongoDB, with experience leading onshore and offshore teams and expertise in the roles of Database Administrator, Database Engineer, Data Architect, Project Manager, and Developer across diverse industries including investment, insurance, software products, e-commerce, banking, healthcare, social networking, and high-tech environments. Proven ability to lead and motivate staff in a growing, flexible environment using relevant approaches and methodologies.
- Success in developing company strategy for data architecture and database standards, templates, processes, procedures, and policies that have led to better performance, cost reductions, efficiency, and effectiveness.
- Superior communication, presentation, analytical, and problem-solving skills, along with a demonstrated ability to work well with all levels of the business.
- Expert in information systems technology, project planning, strategic planning, systems analysis and troubleshooting, quality control, forecasting, scheduling and planning, and tracking of results.
- Excel at creating and implementing technical and operational plans and strategies through strong interpersonal and problem-solving skills.
- Expert at handling and scaling large data sets, resolving performance issues, database and SQL tuning, long-term database strategy, and migrating databases to AWS.
TECHNICAL SKILLS:
Databases: MS SQL Server 2000/2005/2008, Oracle 6/7/8/9/10/11, MySQL 4/5/5.x, Informix, VAX RDB 4.2, MongoDB 2.4/2.6/3.0/3.2, Toku MySQL, TokuMX 2.0, PostgreSQL 8.x/9.x
Operating Systems: Windows 2000/2003/NT/XP, UNIX, Linux, Solaris, HP, VAX
Applications: Visio, MS Office, Crystal Reports, Visual Studio, Erwin, Informatica, SQL*Loader, SQL Navigator, Developer 2000, Designer 2000, MS Project, SQL Server Management Studio, Query Editor, SQL Server Configuration Manager, SQL Server Profiler, Database Engine Tuning Advisor, Toad, Precise i3, Redgate, ApexSQL, LiteSpeed, SharePlex, RMAN, Percona
Languages: T-SQL, C, Pro*C, SQL, PL/SQL, DCL, shell script, Perl, CGI, HTML, PHP, JavaScript, Python
Confidential, New York, NY
Lead Database Engineer and Architect
Responsibilities:
- Architected and migrated databases from the data center to the AWS cloud for scalability, high growth, and high availability, using managed RDS databases
- Installed and upgraded Ops Manager 3.4 for MongoDB monitoring and reduced outages through proactive monitoring
- Used MongoDB Enterprise 3.4 encryption with a local key for PII (personally identifiable information) data and helped pass the audit for the Identity application
- Upgraded from MongoDB Community 3.0 & 3.2 to MongoDB Enterprise 3.4, which provided new functionality, improved performance, and helped with audit and backup
- Improved MongoDB performance by removing excess indexes, rewriting queries, and improving the database model
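The index-reduction work above can be sketched as a small script: given per-index usage counters of the shape MongoDB's $indexStats aggregation stage returns, flag indexes that are never read. The helper name `find_unused_indexes` and the thresholds are hypothetical, not part of any MongoDB API.

```python
# Sketch: flag rarely-used indexes from $indexStats-style documents.
# find_unused_indexes is a hypothetical helper, not a MongoDB API.

def find_unused_indexes(index_stats, min_ops=1):
    """Return names of indexes (excluding _id_) whose access count
    since server start is below min_ops."""
    unused = []
    for stat in index_stats:
        if stat["name"] == "_id_":
            continue  # _id_ is mandatory and cannot be dropped
        if stat["accesses"]["ops"] < min_ops:
            unused.append(stat["name"])
    return unused

# Example documents, shaped like db.collection.aggregate([{"$indexStats": {}}])
stats = [
    {"name": "_id_", "accesses": {"ops": 5000}},
    {"name": "email_1", "accesses": {"ops": 1200}},
    {"name": "legacy_flag_1", "accesses": {"ops": 0}},
]
print(find_unused_indexes(stats))  # → ['legacy_flag_1']
```

Every extra index slows writes and consumes RAM, so dropping never-read indexes is a cheap performance win.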
- Implemented a new MySQL backup process using Python, which cut backup time from hours to minutes and reduced CPU usage
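A minimal sketch of the kind of Python wrapper such a backup job uses: it assembles a `mysqldump` command line per database. The helper `build_backup_cmd` and the `/backups` path are illustrative assumptions; the flags shown are standard `mysqldump` options.

```python
# Sketch: assemble a per-database mysqldump command with a dated
# output file. build_backup_cmd is a hypothetical helper.
import datetime

def build_backup_cmd(db, user, out_dir="/backups"):
    stamp = datetime.date.today().isoformat()
    dump_file = f"{out_dir}/{db}-{stamp}.sql.gz"
    cmd = [
        "mysqldump",
        f"--user={user}",
        "--single-transaction",  # consistent InnoDB snapshot, no table locks
        "--quick",               # stream rows instead of buffering in memory
        db,
    ]
    return cmd, dump_file

cmd, dump_file = build_backup_cmd("identity", "backup_user")
print(" ".join(cmd), "| gzip >", dump_file)
```

In a real job the command would be run via `subprocess` and piped through compression; `--single-transaction` is what keeps the backup from blocking OLTP traffic.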
- Redesigned the Identity database schema to handle large data sets, improved performance, and set up DR
- Resolved MySQL deadlocks and common MySQL performance issues, improving overall database performance
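The standard application-side fix for MySQL deadlocks (InnoDB rolls back one victim transaction, MySQL error 1213) is to retry the transaction with backoff. A minimal sketch, using a stand-in exception class rather than a specific driver's error type:

```python
# Sketch: retry a transaction on MySQL deadlock (error 1213).
# InnoDB resolves a deadlock by rolling back one victim transaction;
# the application is expected to simply retry it.
import time

class DeadlockError(Exception):
    """Stand-in for a driver error carrying MySQL errno 1213."""

def run_with_retry(txn, retries=3, backoff=0.01):
    for attempt in range(retries):
        try:
            return txn()
        except DeadlockError:
            if attempt == retries - 1:
                raise  # out of retries, surface the error
            time.sleep(backoff * (2 ** attempt))  # exponential backoff

# Simulated transaction that deadlocks twice, then commits.
calls = {"n": 0}
def flaky_txn():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError()
    return "committed"

print(run_with_retry(flaky_txn))  # → committed
```

Keeping transactions short and touching rows in a consistent order reduces how often the retry path is needed at all.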
- Reduced AWS EC2 and RDS MySQL hardware cost by improving database performance and downsizing servers in AWS
- Building one-click provisioning of MySQL and MongoDB servers, pre-loaded with data, using Python and Docker containers
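One-click provisioning of a throwaway MySQL server can be sketched as a Python function that composes a `docker run` command. The helper name is hypothetical; the image tag and `MYSQL_ROOT_PASSWORD` environment variable match the official `mysql` Docker image.

```python
# Sketch: compose a `docker run` command for a disposable MySQL server.
# make_mysql_container_cmd is a hypothetical helper; in the real tool
# the command would be executed via subprocess and data loaded afterwards.

def make_mysql_container_cmd(name, root_pw, port=3306, version="5.7"):
    return [
        "docker", "run", "-d",            # detached container
        "--name", name,
        "-e", f"MYSQL_ROOT_PASSWORD={root_pw}",
        "-p", f"{port}:3306",             # host:container port mapping
        f"mysql:{version}",
    ]

cmd = make_mysql_container_cmd("dev-db", "secret", port=3307)
print(" ".join(cmd))
```

Mapping a non-default host port lets several database containers run side by side on one build box.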
- Completed a data warehouse POC using Cloudera Hadoop and MongoDB; reports that took a couple of minutes now run in milliseconds in MongoDB
Confidential, New York, NY
Lead Database Engineer, ETL and Architect
Responsibilities:
- Architected all of the company's databases, relational and non-relational, choosing the best database solution for each application
- Moved databases to the Amazon cloud (AWS), which gave the company high availability, disaster recovery, and high growth (scalability)
- Used the AWS Eastern region as primary and the Western region for disaster recovery
- Used Chef for automation, which helped with standardization
- Used mainly EC2 instances (M3, M4, C3, C4, I2) and S3 storage on an hourly basis, plus some reserved instances
- Did a POC on RDS database services and on Amazon Redshift
- Used encrypted SSD volumes for PHI (Protected Health Information) data and helped pass the audit
- Implemented Percona MySQL (XtraDB) cluster and replication on Linux, which provided better performance and a high-availability cluster (virtually synchronous master-to-master replication)
- Introduced the NoSQL database MongoDB, with replication and sharding, for unstructured data, and introduced TokuMX in place of MongoDB for better performance
- Implemented database backup/recovery policy, auditing, data modeling, capacity planning, data encryption, and database monitoring for MySQL, MongoDB, and TokuMX using third-party tools (MonYOG, MMS, New Relic, Nagios) and home-grown shell and Python scripts
- Improved database performance by tuning database parameters, tuning long-running queries, and optimizing database design for MySQL and MongoDB
- Implemented a data warehouse using Cloudera Hadoop and Amazon EMR, and a data messaging system using Camel/Kafka ESB
- Migrated data from relational databases into MongoDB and vice versa
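The relational-to-MongoDB migration typically means flattening a 1:N parent/child join into embedded documents. A minimal sketch with hypothetical `users`/`orders` tables represented as lists of dicts:

```python
# Sketch: fold a relational parent/child join into MongoDB-style
# embedded documents, the common pattern when migrating 1:N data.

def rows_to_documents(users, orders):
    """users: dicts with 'id' and 'name'; orders: dicts with 'user_id'.
    Returns one document per user with that user's orders embedded."""
    docs = {}
    for u in users:
        docs[u["id"]] = {"_id": u["id"], "name": u["name"], "orders": []}
    for o in orders:
        docs[o["user_id"]]["orders"].append(
            {"sku": o["sku"], "qty": o["qty"]}
        )
    return list(docs.values())

users = [{"id": 1, "name": "Ann"}, {"id": 2, "name": "Bo"}]
orders = [{"user_id": 1, "sku": "A-100", "qty": 2},
          {"user_id": 1, "sku": "B-200", "qty": 1}]
print(rows_to_documents(users, orders))
```

Embedding trades join cost at read time for document size, which is why the reverse migration (documents back into normalized rows) is just this transformation inverted.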
- Automated backup, restore, monitoring, user creation/removal, etc. using shell and Python scripts and Chef, reducing manual effort and improving efficiency
Confidential, New York, NY
Lead Database Engineer and Architect
Responsibilities:
- Used AWS for on-demand servers, providing high availability, disaster recovery, and high growth (scalability)
- Used AWS servers for performance testing, where servers can be upgraded easily to find the optimal server configuration
- Compared AWS and Microsoft Azure for our application; AWS proved the better fit
- Hired, built, and coached a database engineering team to support OLTP and data warehouse systems 24x7
- Helped DevOps implement Chef for automation and standardized database server builds
- Implemented a six-node Percona XtraDB MySQL 5.5 cluster (virtually synchronous master-to-master replication) on Linux with multiple slaves to support an online entertainment loyalty program, improving high availability and scalability to handle a 30TB+ database, billions of records per table, and 100K+ queries per second
- Introduced XtraBackup on a slave to reduce locking and load on the cluster and to simplify restores
- Implemented Hadoop with Impala to support a large 100TB data warehouse using MicroStrategy
- Developed and implemented automated jobs for proactive database monitoring, data load, data transfer, and database backup & restore using shell scripts
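The original monitoring jobs were shell scripts; the core check is easy to render in Python. A typical one classifies replication lag (from `Seconds_Behind_Master` in `SHOW SLAVE STATUS`, which is NULL when replication is stopped). Thresholds here are illustrative assumptions:

```python
# Sketch: a proactive-monitoring check that classifies replication lag.
# seconds_behind mirrors Seconds_Behind_Master from SHOW SLAVE STATUS;
# None corresponds to NULL, i.e. replication is not running.

def lag_status(seconds_behind, warn=60, crit=300):
    if seconds_behind is None:
        return "CRITICAL: replication stopped"
    if seconds_behind >= crit:
        return f"CRITICAL: {seconds_behind}s behind"
    if seconds_behind >= warn:
        return f"WARNING: {seconds_behind}s behind"
    return "OK"

for lag in (5, 120, None):
    print(lag_status(lag))
```

A cron job would run the real query, feed the value through a check like this, and page only on WARNING/CRITICAL, which is what cuts outages versus waiting for users to report problems.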
- Improved database performance by 60% by redesigning the database server, tuning the database design, removing unnecessary indexes, tuning SQL, tuning database parameters, and partitioning large tables
- Introduced database monitoring tools such as MonYOG, Zabbix, and New Relic, which helped identify issues quickly and reduced outages
- Moved PostgreSQL data into MySQL for better performance, reliability, and high availability at reduced cost
- Improved the physical database design by choosing correct data types and redefining relationships, which improved database performance and transaction capacity and reduced hardware cost
- Introduced MongoDB (with replicas and sharding) and Riak for non-relational data, which improved the speed of large data access
Confidential, Purchase, NY
Lead Database and Data Model Engineer
Responsibilities:
- Did a POC on using AWS for the SaaS environment
- Implemented active-active clustering in SQL Server and Oracle 11g RAC for SaaS high availability
- Improved database performance by 60% by removing unnecessary indexes, tuning SQL, separating data & indexes onto different physical disks, tuning database parameters, partitioning large tables, updating statistics, and applying patches
- Improved database design by introducing data architecture and SQL standards, using proper data types, removing redundant data, applying normalization rules, using correct indexes, and introducing code review
- Developed custom shell scripts for proactive database monitoring and automated database jobs
- Implemented MySQL 5.5 (InnoDB) master-slave replication and a four-node Percona XtraDB MySQL cluster on Linux in the SaaS environment, which reduced cost by 200K yearly, delivered better performance, and handled a large 10TB, high-transaction database
- Implemented monitoring tools (MonYOG for MySQL; 12c OEM and Precise i3 for Oracle and SQL Server; Foglight for Oracle and SQL Server), which reduced outages by 40% and performance issues by 30%
- Implemented HA and DR solutions and redesigned the existing mission-critical SaaS system for high volume, better performance, and reliability
- Implemented MongoDB for non-relational data, which allows customizing data without application changes
Confidential, Hartford, CT
Lead Database Administrator
Responsibilities:
- Hired and developed a DBA team to support 24x7 OLTP and data warehouse systems (>30TB, single tables >4TB)
- Introduced an offshore team, reducing manpower cost by 60% while improving team performance through coaching, mentoring, and goal setting
- Implemented a large MySQL database with master-slave replication using multiple slaves to reduce cost, and implemented MySQL-based monitoring for a home-grown application and financial simulation (billions of records, 100GB+ tables, 20TB+ database)
- Developed and implemented shell scripts for job automation, database monitoring, and backup and restore, which reduced manual effort and improved efficiency
- Implemented a new web-based DBA request system replacing the old e-mail-based request system, which reduced manpower and sped up processing
- Implemented active-active clustering in SQL Server for high availability and Oracle 11g RAC on Linux for large volumes
- Implemented proactive database monitoring (locking, deadlock analysis, long-running queries, cluster failover) using DMVs and stored procedures, which reduced database outages and improved performance
- Improved database performance by partitioning large objects, dropping excessive and unused indexes, tuning long-running queries, defragmenting indexes, and using differential backups to reduce load
- Tested the application on MySQL with master-slave replication to reduce cost and implemented MySQL-based monitoring for a home-grown application, but could not implement it fully due to security and audit rules
- Introduced performance monitoring tools Precise i3 & Foglight for Oracle & SQL Server, which helped resolve performance issues quickly, and BMC Patrol for database monitoring, helping reduce outages by 30%
- Passed internal and Sarbanes-Oxley audits for the databases
Confidential, Norwalk, CT
Senior Database Administrator
Responsibilities:
- Achieved vast improvements in tracking chronic issues by decreasing false alarms and improving communication with the network operations team through a timely, successful rollout of BMC Patrol monitoring software
- Instrumental in upgrading 60 production, 40 QA, and 30 development databases from Oracle 8.1.7 to Oracle 9.2.0.7 and from Oracle 9.2.0.7 to Oracle 10.2.0.4, leveraging an upgraded SharePlex replication strategy and query monitoring tools, with zero downtime for the website and no negative application performance impact
- Delivered a 100% increase in database transaction volume and reduced source database sync time by 50% by implementing a crucial SharePlex upgrade enabling the multi-threaded poster, which doubled posting speed compared to the single-threaded poster
- Led the DBA team in reorganizing production, QA, and development databases, reducing storage requirements by 30-40% while increasing database performance by 40-50%
- Implemented MySQL on Linux to reduce cost and improve scalability for a highly transactional (100K+ per second) read-heavy system (master and multi-slave replication); it was used for temporary (1-30 day) large data sets of up to 30TB
- Upgraded MySQL from 4.0 to 5.0 for better performance and new features; introduced partitioning, custom MySQL monitoring, InnoDB, master-slave replication (20+ slaves), and online backup for InnoDB; optimized SQL code
- Developed custom tools for data transfer, password changes, and database monitoring using Perl, CGI, HTML, PHP, and shell scripts, which helped reduce dependency on DBAs and automate jobs