
Principal Enterprise Architect Resume


Los Angeles, CA

SUMMARY:

  • Results-oriented, hands-on database expert with 20 years of experience and a proven track record of architecting large (50 TB OLTP, 200 TB DW, and 750 TB HDFS) enterprise and mission-critical systems involving relational RDBMS (PostgreSQL, SQL Server, MySQL, clusters, AWS RDS), NoSQL databases (MongoDB, Cassandra, Couchbase), and Big Data platforms such as the Apache Hadoop (Hive, HBase), Apache Spark, and Apache Kafka ecosystems
  • Expertise in designing and developing real-time data processing and integration with real-time data pipelines using Spark Streaming, Structured Streaming, Spark SQL, Kafka Streams, and KTables (see the sketch following this list)
  • In-depth knowledge of relational, NoSQL, and Big Data technologies and of employing each appropriately to get the best out of them and to address scalability and availability issues with a high return on investment
  • Experienced in very large enterprise database design, data partitioning, data encryption, data protection, data governance, and security
  • Excellent in Scala, Python, Java scripting, SQL, PL/SQL, T-SQL, Bash shell, Perl, and complex ETL processes
  • Able to understand complex data flow architectures in enterprise-level applications and to design and develop automation processes and procedures for efficient data flow in very large, heterogeneous database environments
  • Excellent in strategic planning, project management, and effective employee relationship management
  • Use of Agile/Scrum/SDLC methodologies and JIRA/Confluence tools for project management
  • Passionate about learning and implementing cutting-edge data management technologies
  • Exposure to Spark MLlib and building predictive and supervised learning models
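
A minimal PySpark Structured Streaming sketch of the kind of Kafka-based real-time pipeline referenced above; the broker address, topic name, and event schema are illustrative assumptions rather than details from any specific engagement.

# Minimal sketch: consume JSON events from Kafka with Spark Structured Streaming.
# Requires the spark-sql-kafka package on the classpath; broker, topic, and
# schema below are assumed for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")  # assumed broker
       .option("subscribe", "orders")                      # assumed topic
       .load())

events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), event_schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .outputMode("append")
         .format("console")  # stand-in sink; production sinks (Cassandra, S3, etc.) vary
         .option("truncate", "false")
         .start())
query.awaitTermination()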

PROFESSIONAL EXPERIENCE:

Principal Enterprise Architect

Confidential

Environment: Hadoop/Hive, MongoDB, Cassandra, Spark/Kafka Streaming, Kafka Connect, Amazon AWS, Azure Data Lake, S3/EBS/EC2, Java/Scala/Python, Avro schema, PostgreSQL, Aurora MySQL

Responsibilities:

  • Designing and developing cloud solutions using open-source technologies for large enterprises
  • Designing low-latency data pipelines using Kafka, Spark, NiFi, Cassandra, and Redis/Geode in-memory data stores
  • Converting legacy Informatica ETL workflows to Apache NiFi/Spark/Kafka data streams
  • Migrating data from on-premises systems to the cloud (AWS/Azure) using parallel data pipelines
  • Building containerized Docker/Kubernetes/Portworx pods in AWS
  • Designing and developing Big Data solutions for large enterprises to handle and manage petabytes of data, and providing real-time data processing solutions using suitable technologies
  • Ingesting and integrating real-time data with enterprise analytical and reporting systems
  • Building machine learning models using Spark MLlib (see the sketch following this list)
  • Migrating relational databases to columnar data stores
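
A minimal sketch of a supervised-learning pipeline with Spark MLlib, of the kind referenced in the machine learning bullet above; the input path, feature columns, and label column are illustrative assumptions.

# Minimal sketch: train and evaluate a logistic regression model with Spark MLlib.
# Input location, column names, and label are assumed for illustration only.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StringIndexer
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

df = spark.read.parquet("/data/customer_features.parquet")  # assumed feature set

label = StringIndexer(inputCol="churned", outputCol="label")        # assumed label column
features = VectorAssembler(inputCols=["tenure", "usage", "spend"],  # assumed feature columns
                           outputCol="features")
lr = LogisticRegression(maxIter=20)

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = Pipeline(stages=[label, features, lr]).fit(train)

# Default metric is area under the ROC curve.
auc = BinaryClassificationEvaluator().evaluate(model.transform(test))
print(f"Test AUC: {auc:.3f}")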

Principal Database Engineer/Architect

Confidential

Environment: SQL Server 2012/2008 R2, Linux 6.8, PostgreSQL 9.6, Hadoop/Hive, SOA, Amazon AWS (S3/EBS/EC2), Pentaho ETL, Cassandra, MongoDB

Responsibilities:

  • Responsible for SOX/PCI database design, development, and implementation of complex SQL Server/PostgreSQL RDBMS and NoSQL (MongoDB, Cassandra, Hadoop/Hive, Pig, HBase, ZooKeeper, Kafka) systems that support a wide range of business-critical applications
  • Spearhead open-source NoSQL (MongoDB, Apache Cassandra, Hadoop/Hive) adoption to meet scalability and high-availability requirements
  • Gather and analyze business requirements, evaluate suitable technology stacks, and develop use cases
  • Design and develop real-time ETL from OLTP to DW systems using Kafka/Spark streaming for near-real-time reporting; de-normalize the legacy C-ISAM schema structure and port data to Cassandra/MongoDB databases
  • Design document data model schemas for porting SQL Server data to MongoDB
  • Design MongoDB sharding and replication strategies to meet data protection and high-availability requirements (see the sketch following this list)
  • Design and develop MapReduce jobs in MongoDB and Hadoop/Hive/Pig systems
  • Set up a Hadoop cluster for processing unstructured and semi-structured (weblog, text file, CSV) data
  • Migrate SQL Server/DB2/C-ISAM and MySQL databases to MongoDB/Cassandra/PostgreSQL (Amazon AWS with EBS/S3 storage and EC2 instances)
  • Develop PostgreSQL stored procedures and functions, and design partitioned tables and materialized views
  • Design and implement PostgreSQL partitioning strategies, full-text search, and the JSON datastore
  • Responsible for the overall data management strategy and for the performance, security, reliability, and availability of enterprise data
  • Develop and maintain master data models (for Data as a Service) and security standards aligned with the company’s overall system architecture
  • Provide expert guidance to the leadership team on enterprise data design, gathering, and management across all business areas
  • Pioneered a major effort, with the help of a team of DBAs and developers, to optimize more than 60 business-critical reports, improving their performance by 50% to 90%
  • Improved the product build and deployment process, cutting production deployment time by more than half
  • Perform root cause analysis (RCA) for unplanned system outages and performance issues
  • Lead the database architecture team to support data integration and globalization initiatives
  • Document technical requirements, develop project plans, and implement change control procedures
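
A minimal sketch of enabling sharding for a collection, as referenced in the MongoDB sharding bullet above; the mongos endpoint, database, collection, and shard key are illustrative assumptions, and the commands are issued against an already-provisioned sharded cluster.

# Minimal sketch: shard a collection on a hashed key via pymongo.
# Hostname, database, collection, and key are assumed for illustration only.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos-host:27017")  # assumed mongos router endpoint

# Enable sharding on the database, then shard the collection on a hashed key
# so writes spread evenly across shards.
client.admin.command("enableSharding", "salesdb")
client.admin.command(
    "shardCollection",
    "salesdb.orders",
    key={"customer_id": "hashed"},
)

# Informational: confirm the registered shard key in the config database.
for coll in client.config.collections.find({"_id": "salesdb.orders"}):
    print(coll["_id"], coll.get("key"))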

Senior DBA Lead

Confidential, Los Angeles, CA

Environment: Oracle 12c/11g RAC, Sybase ASE/IQ 12.5.4, SQL Server 2005/2008, MySQL clusters, SSIS, SharePlex replication, Hadoop/Hive with a 128-node HDFS cluster, HBase, Pig, YARN, Informatica

Responsibilities:

  • Led a team of three DBAs in supporting 24/7 production, development, and testing database environments
  • Guided the development team in data modeling, database logical and physical design, user management, security policies, and backup and recovery processes
  • Guided team members in resolving various production and development issues, performing root cause analysis, and implementing the resulting best practices
  • Established database standards and implemented best practices and standard operating procedures (SOPs)
  • Implemented a 124-node Big Data platform using Hadoop/Hive/HBase open-source systems with 7,500 TB of HDFS storage
  • Installed, configured, and managed scalable MongoDB/HBase and Hadoop core components such as TaskTracker, JobTracker, and data nodes
  • Designed data partitioning in Hadoop and data extract and load processes into and out of Hadoop systems (see the sketch following this list)
  • Optimized and automated large data loads (ETL) using Oracle Warehouse Builder, SQL*Loader, and PL/SQL packages
  • Established SOX- and PCI-compliant databases and database environments
  • Practiced Agile/Scrum methodology for project management; identified, prioritized, and resolved issues in a timely manner; and practiced effective resource utilization and time management
  • Worked closely with development, testing, network, and product support teams to deliver projects on schedule
  • Managed a very large (100 TB) Oracle data warehouse environment with Oracle ASM for storage management
  • Planned and implemented data partitioning, large table partitioning, and space management
  • Optimized OLTP and data warehouse queries
  • Capacity planning, disk expansion, and cluster node maintenance and monitoring
  • Designed business-compliant enterprise backup and recovery processes and procedures
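
A minimal sketch of loading extracted data into a date-partitioned Hive table from Spark, as referenced in the Hadoop partitioning bullet above; the source path, partition column, and table name are illustrative assumptions.

# Minimal sketch: append a daily extract into a date-partitioned Hive table.
# Paths, column names, and the Hive database/table are assumed for illustration only.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-partition-load-sketch")
         .enableHiveSupport()
         .getOrCreate())

# Read a raw daily extract (assumed CSV landing directory on HDFS).
df = spark.read.option("header", "true").csv("hdfs:///landing/weblogs/current/")

# Write to a Hive-managed Parquet table partitioned by the log_date column,
# which is assumed to be present in the extract.
(df.write
   .mode("append")
   .partitionBy("log_date")
   .format("parquet")
   .saveAsTable("analytics.weblogs"))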

Principal Database Architect

Confidential, Westlake Village, CA

Environment: Oracle 11g/10g clusters, MySQL, Oracle 11gR2/10gR2 cluster with ASM, Oracle E-Business Suite R12.1.1-3, Fusion Middleware, PostgreSQL, Informatica 8.6/9.1, Hadoop/Hive on Linux

Responsibilities:

  • Designed and implemented 24/7 processes and procedures to support maximum availability using Microsoft standby and replication technologies
  • Worked with the development team to put in place a set of database standards for database changes and the release process
  • Led a team of four in migrating the non-production database and application environment from physical servers to a virtualized (VMware) environment, reducing database and application licensing costs; trained, developed, and mentored entry-level technologists in database design, development, project management, and software development processes
  • Led a major database and database server consolidation project, saving $800K in database and OS license costs
  • Led a three-member team in developing and deploying a failover solution for the SSIS/Informatica ETL application, saving $250K in Informatica high-availability costs and ensuring the ETL processes are fail-safe
  • Installed and configured SOX/PCI-compliant SQL Server and Oracle 11gR2/10gR2 RAC databases with ASM/OCFS2 file systems for high availability
  • Designed MySQL and PostgreSQL databases with replication for high availability (see the sketch following this list)
  • Designed and developed data warehouse databases and data load processes
  • Planned and executed Oracle/SQL Server/MongoDB/PostgreSQL/MySQL database upgrades with minimal downtime
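
A minimal sketch of checking streaming-replication standby status from a PostgreSQL primary, as referenced in the replication bullet above; the connection settings are illustrative assumptions, and pg_stat_replication is the standard catalog view listing connected standbys.

# Minimal sketch: report connected standbys and their sync state from the primary.
# Host, database, and credentials are assumed; run as a role with monitoring privileges.
import psycopg2

conn = psycopg2.connect(host="primary-db", dbname="postgres",
                        user="monitor", password="secret")  # assumed credentials
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT application_name, client_addr, state, sync_state
        FROM pg_stat_replication;
    """)
    for name, addr, state, sync_state in cur.fetchall():
        print(f"{name} {addr} state={state} sync={sync_state}")
conn.close()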
