
Hadoop Admin Resume


Los Angeles, CA

SUMMARY

  • Around 11 years of experience in Information Technology.
  • Over 2 years of experience as a Hadoop/HBase Administrator/Architect working with Big Data technologies.
  • 8+ years of extensive experience in Oracle Database Administration and 3+ years in SQL Server/DB2 administration, supporting mission-critical distributed and OLTP/OLAP/DWH databases 24x7.
  • Expertise in Hadoop/HBase cluster administration, including setup, installation, monitoring, maintenance, and operational support for the Cloudera CDH3, CDH4, and CDH5 distributions.
  • Good experience installing, configuring, and managing Hadoop clusters on Amazon EC2.
  • Involved in setting up high-availability solutions for Hadoop clusters and HBase.
  • Expertise in installation, configuration, upgrades, patching, migration, troubleshooting, and performance tuning of Oracle 8i/9i/10g/11g standalone and RAC databases/clients in UNIX, Linux, and Windows environments.
  • Hadoop cluster capacity planning, performance tuning, monitoring, and troubleshooting.
  • Involved in setting up Hadoop security with Kerberos.
  • Designed Big Data solutions for traditional enterprise businesses.
  • Worked on installing and configuring Cloudera Manager and Hue for the Hadoop stack.
  • Used network monitoring daemons like Ganglia and service monitoring tools like Nagios.
  • Adding/removing nodes to/from an existing Hadoop cluster.
  • Backing up configuration and recovering from a NameNode failure.
  • Decommissioning and commissioning nodes on a running Hadoop cluster (see the sketch after this list).
  • Installation of various Hadoop ecosystem components and Hadoop daemons.
  • Installation and configuration of Sqoop, Flume, and Scala.
  • Excellent command of backup, recovery, and disaster recovery procedures; implemented backup and recovery strategies for offline and online backups using RMAN, Recovery Catalog, NetApp Snap backups, and EMC TimeFinder backup/restore tools.
  • Involved in benchmarking Hadoop/HBase cluster file systems with various batch jobs and workloads.
  • Good at writing scripts and monitoring routines in Bash, Perl, Java, etc. for standalone and distributed environments.
  • Expert in PL/SQL performance tuning, database server configuration, and memory, instance, and database-level tuning.
  • POC experience with a Cassandra Big Data implementation.
  • Worked on many cluster builds on Linux servers for distributed environments.
  • Good exposure to and experience with Linux internals and administration.
  • Worked on year-over-year capacity planning for distributed environments and coordinated with vendors in the procurement process for new hardware.
  • Worked as the subject matter expert on high-availability solutions such as Hadoop and RAC for various internal/business users and end clients.
  • Excellent command of change management and coordinating deployments in standalone and distributed environments.
  • Performed all aspects of database administration including installs, builds, upgrades, object sizing, object creation, backup/recovery, patches, monitoring, exports/imports, and SQL*Loader.
  • Provided regular training for newly onboarded consultants and trainees on database standards, security compliance, and an initial overview of the environments.
  • Worked on capacity and resource planning; estimated the size of data files and redo log files considering transaction rate, transaction record size, transaction type, data volatility, allowed recovery time, etc.
  • Good internals knowledge of distributed databases, relational Oracle 9i/10g/11g tools, and other OS-level tools.
  • Excellent command of PL/SQL and UNIX shell scripting.
  • 24x7 production on-call support and administration for distributed and relational databases.
  • Excellent command of Linux and UNIX platforms.
  • Strong experience with OEM (Oracle Enterprise Manager), 12c Cloud Control, Oracle Data Guard, SQL*Net, SQL*Loader, SQL*Plus, STATSPACK, Explain Plan, and TOAD.
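
A minimal sketch of the node decommissioning flow referenced above, assuming hdfs-site.xml already points dfs.hosts.exclude at /etc/hadoop/conf/dfs.exclude and that the commands run as the HDFS superuser; the hostname and paths are illustrative, and on older CDH3 clusters the equivalent command is hadoop dfsadmin.

# Add the host to the exclude file referenced by dfs.hosts.exclude
echo "datanode05.example.com" >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode to re-read its include/exclude lists and begin decommissioning
hdfs dfsadmin -refreshNodes

# Watch progress; the node reports "Decommission in progress" until its blocks are re-replicated
hdfs dfsadmin -report | grep -A 4 "datanode05.example.com"

# Once the node shows "Decommissioned": stop its DataNode, remove the host from the
# slaves and exclude files, then run "hdfs dfsadmin -refreshNodes" once more.

Commissioning a node is the reverse: add it to the slaves (and dfs.hosts, if used) file, run the same refresh, and start the DataNode so it registers with the NameNode.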

TECHNICAL SKILLS

Operating Systems: Solaris 8/9/10, HP-UX, SUSE Linux, Red Hat Enterprise Linux 4/5, Windows 9x/XP/2000, IBM AIX 5.1/5.2/5.3

Big Data: Hadoop, MapReduce, Pig, Hive, Sqoop, HBase, Cassandra, Flume, Impala, Hue, Solr, Whirr, Amazon EC2, CDH3, CDH4, CDH5

Database: Oracle 8i/9i/10g/11g Enterprise/Standard Edition (with RAC), SQL Server 2000/2005, MySQL 5, Cassandra

Languages: SQL, PL/SQL, Perl, Linux/UNIX Shell Scripting, Java, C & C++, VB, VB.NET, C#

Database Tools: SQL*Plus, RMAN, OEM, SQL*Loader, exp, imp, expdp, impdp, TOAD, SQL Developer, Erwin, dbverify, Spotlight, Foglight, DAM, etc.

Tuning Tools: SQL autotrace, TKPROF, EXPLAIN PLAN, STATSPACK, AWR, tuning advisors, ADDM, oradebug, OS Watcher, SQL plan stability management, etc.

Other: Veritas NetBackup, Legato, Tivoli, EMC, Citrix Server & VMware

PROFESSIONAL EXPERIENCE

Confidential, Los Angeles, CA

Hadoop Admin

Environment: Hadoop, HDFS, Hive, Sqoop, Flume, ZooKeeper, HBase, Amazon EC2, Python, Scala, shell/bash scripting, Puppet, Oracle 9i/10g/11g RAC on Solaris/Red Hat, Exadata machines X2/X3, Big Data Cloudera CDH4/5, Solr, TOAD, SQL*Plus, Oracle Enterprise Manager (OEM), RMAN, Solaris 9/10, AIX, GoldenGate, Data Guard, NetBackup, TimeFinder backups, Red Hat/SUSE Linux, EM Cloud Control 12c, Foglight, SPARC M3000/5000 and T3/T4 systems.

Responsibilities:

  • Worked as Big Data Hadoop Architect/Administrator, evaluating, designing, installing, configuring, monitoring, maintaining, and scaling up Confidential global distributed environments.
  • Involved in architecting a Hadoop cluster for ad campaigns and Big Data analytics for the Confidential global division.
  • Involved in developing and administering a scalable, cost-effective, and fault-tolerant distributed environment for Confidential global campaigns, social network/email campaigning, and data analytics using Cloudera CDH3 and CDH4, deployed on Amazon EC2.
  • Involved in benchmarking Hadoop/HBase cluster file systems with various batch jobs and workloads.
  • Worked with Hadoop developers and infrastructure teams on Hadoop cluster and MapReduce job troubleshooting.
  • Configured a fully distributed Hadoop cluster.
  • Used network monitoring daemons like Ganglia and service monitoring tools like Nagios.
  • Recovering from a NameNode failure.
  • Involved in installing the Hadoop User Experience interface (Hue).
  • Decommissioning and commissioning nodes on a running cluster.
  • Installation of various Hadoop ecosystem components and Hadoop daemons.
  • Installation and configuration of Sqoop and Flume.
  • Involved in the configuration and maintenance of Solr.
  • Configured various property files such as core-site.xml, hdfs-site.xml, and mapred-site.xml based on job requirements.
  • Cluster monitoring, maintenance, and troubleshooting.
  • Importing and exporting data between Teradata and HDFS using Sqoop (see the Sqoop sketch after this list).
  • Experienced in defining job flows with Oozie.
  • Loading log data directly into HDFS using Flume (see the Flume sketch after this list).
  • Experienced in managing and reviewing Hadoop log files.
  • Installation and configuration of HBase.
  • Involved in installing and configuring Puppet modules.
  • Involved in Hadoop cluster environment administration, which includes adding and removing cluster nodes.
  • Developed many scripts and monitoring routines in Bash, Perl, Java, etc. for distributed and relational DB environments (see the monitoring sketch after this list).
  • Managed installs, configuration, and deployments for large-scale distributed production environments.
  • Evaluated, designed, and installed 8i/9i/10g/11g standalone and RAC database servers and software, upgraded to new versions, coordinated hardware changes, and applied database patches.
  • Worked on optimizing I/O for Hadoop clusters and analytics workloads.
  • Planned and acquired new technologies for evaluation and implementation, suitable for the applications with regard to the specific future requirements of the company's business projections and growth.
  • Worked as lead database architect evaluating, designing, and installing a 3-node 11gR2 RAC cluster on Red Hat Linux for telematics systems that manage online tracking, 911 support, and crash reporting to centralized systems, along with online payment options.
  • Cloning Oracle development and QA environments to support multiple clients using VMware templates.
  • Designed, configured, and supported Data Guard configurations, including cascading standby databases, for high availability and disaster recovery.
  • Provided 24/7 on-call support for mission-critical financial databases.
  • Involved in designing new structures/procedures/views/functions for development and data migration from production to reporting systems.
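
A hedged example of the Teradata-to-HDFS Sqoop import mentioned above; the JDBC URL, credentials, table, and target directory are placeholders, and the Teradata JDBC driver class is an assumption (a dedicated Teradata connector would be configured differently).

# Pull one Teradata table into HDFS as tab-delimited text (illustrative values only)
sqoop import \
  --connect jdbc:teradata://td-host.example.com/DATABASE=sales \
  --driver com.teradata.jdbc.TeraDriver \
  --username etl_user -P \
  --table CAMPAIGN_EVENTS \
  --target-dir /data/raw/campaign_events \
  --fields-terminated-by '\t' \
  --num-mappers 4

The same connection arguments with sqoop export and --export-dir cover the reverse direction, exporting HDFS data back to Teradata.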
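
A minimal sketch of the Flume log-ingestion setup referenced above, written as a shell step that drops a single-agent configuration and starts it; the log path, HDFS URL, and agent name are assumptions.

# Write a one-agent Flume config: tail an application log into an HDFS sink
cat > /etc/flume-ng/conf/logs-to-hdfs.conf <<'EOF'
agent1.sources  = tail1
agent1.channels = mem1
agent1.sinks    = hdfs1

agent1.sources.tail1.type     = exec
agent1.sources.tail1.command  = tail -F /var/log/app/campaign.log
agent1.sources.tail1.channels = mem1

agent1.channels.mem1.type     = memory
agent1.channels.mem1.capacity = 10000

agent1.sinks.hdfs1.type          = hdfs
agent1.sinks.hdfs1.channel       = mem1
agent1.sinks.hdfs1.hdfs.path     = hdfs://namenode.example.com:8020/data/logs/campaign
agent1.sinks.hdfs1.hdfs.fileType = DataStream
EOF

# Start the agent (run under an init script or supervisor in practice)
flume-ng agent --name agent1 --conf /etc/flume-ng/conf \
  --conf-file /etc/flume-ng/conf/logs-to-hdfs.conf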
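
An illustrative shape for the Bash monitoring routines mentioned above: a small check that mails an alert when HDFS usage crosses a threshold. The threshold, mail address, and report parsing are assumptions (the exact dfsadmin report format varies by Hadoop version), not the original scripts.

#!/bin/bash
# Alert when overall HDFS usage exceeds a threshold (assumes a local mail command).
THRESHOLD=80
ALERT_TO="hadoop-admins@example.com"

report=$(hdfs dfsadmin -report 2>/dev/null)

# The first "DFS Used%" line in the report is the cluster-wide summary
used_pct=$(echo "$report" | awk -F'[: %]+' '/DFS Used%/ {print int($3); exit}')

if [ "${used_pct:-0}" -ge "$THRESHOLD" ]; then
  echo "$report" | mail -s "HDFS usage alert: ${used_pct}% used" "$ALERT_TO"
fi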

Confidential, Brea, CA

Sr. Oracle Database Architect/Administrator

Environment: Oracle 8i/9i/10g, TOAD, OEM, Oracle RAC, RMAN, MS SQL Server 2000/2005, Erwin, Informatica, B.O., IBM AIX 5.1/5.2/5.3, bash scripting, VB, C, C++, C#, VB.NET, Red Hat Linux, SUSE Linux Advanced Server, Solaris 8/9, HP-UX, Perl, .NET Framework, Python

Responsibilities:

  • Evaluated, planned, and installed database servers and database software, upgraded to new versions, coordinated hardware changes, and applied database patches.
  • Worked as lead security DBA to harden databases using Oracle TDE, Vormetric encryption, Imperva database activity monitoring, audit support, and regular database security checks.
  • Worked on setting up a centralized reporting environment using GoldenGate.
  • Designed and implemented backup plans using cold/hot backups, Flash Copy, Volume Copy, and RMAN backup technology (see the RMAN sketch after this list).
  • Reorganized the database architecture for efficiency and better response time.
  • Configured Data Guard to ensure high availability, data protection, and disaster recovery for enterprise data.
  • Intensively used the 10g Automatic Workload Repository (AWR) and Automatic Database Diagnostic Monitor (ADDM) reports for database health checks, and used the notification tool to send alerts in OEM Grid Control.
  • Intensively used the OEM Grid Control Diagnostics and Tuning packs to tune applications and SQL queries.
  • As part of a pilot project to transfer database administration activity overseas, documented procedures and coordinated with the overseas team.
  • Performed application tuning with the RBO and CBO using trace, TKPROF, Explain Plan, Statspack, AWR, and ADDM.
  • Experience with testing and pre-production of Oracle 10g with ASM.
  • Successfully provided 24/7 support for production, development, and test databases.
  • Database and application performance monitoring; tuned databases and applications with efficient database parameters.
  • Planned and acquired new technologies for evaluation and implementation, suitable for the applications with regard to the specific future requirements of the company's business projections and growth.
  • Excellent command of backup, recovery, and disaster recovery procedures; implemented backup and recovery strategies for offline and online backups with RMAN.
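
A hedged sketch of the kind of nightly RMAN full backup behind the strategy described above, wrapped in a shell here-document; the ORACLE_SID, paths, and retention handling are illustrative, not the actual production scripts.

#!/bin/bash
# Nightly full database backup plus archived logs via RMAN (illustrative settings).
export ORACLE_SID=PRODDB
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH

rman target / <<'EOF'
RUN {
  ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT '/backup/PRODDB/%U';
  BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
  RELEASE CHANNEL d1;
}
DELETE NOPROMPT OBSOLETE;
EXIT;
EOF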

Confidential, San Diego

Oracle DBA

Environment: Oracle 8i/9i/10g, SQL*Plus, Oracle OEM, RMAN, TOAD, Erwin, Red Hat Linux, IBM AIX, Solaris, HP-UX, and Windows.

Responsibilities:

  • Designed and implemented backup plans using cold/hot backups, RMAN backup technology, and NetBackup.
  • Reorganized the database architecture for efficiency and better response time.
  • Successfully provided 24/7 support for production databases and regular support for development and test databases.
  • Created users and roles and granted privileges according to business requirements.
  • Database and application performance monitoring; tuned databases and applications with efficient database parameters.
  • Disaster Recovery implementation using standby databases.
  • Excellent command of backup, recovery, and disaster recovery procedures; implemented backup and recovery strategies for offline and online backups with RMAN.
  • Tuned SQL queries using trace, TKPROF, Explain Plan, and Statspack.
  • Migrated flat files from different sources into Oracle databases using SQL*Loader and import/export (see the SQL*Loader sketch after this list).
  • Applied the latest patches as required.
  • Actively involved in ongoing site enhancements along with developers: pushing code and data between servers, testing, and moving changes to production.
  • Provided 24/7 on-call support for mission-critical financial databases.
  • Involved in designing new structures/procedures/views/functions for development and data migration from production to reporting systems.
  • Provided database-related services for development and QA environments.
  • Implemented database-related security measures.
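
A hedged sketch of the SQL*Loader flat-file load mentioned above; the control file, connect string, table, and column names are placeholders rather than the actual migration objects.

#!/bin/bash
# Load a comma-delimited flat file into a staging table with SQL*Loader (illustrative names).
cat > employees.ctl <<'EOF'
LOAD DATA
INFILE 'employees.csv'
APPEND INTO TABLE stg_employees
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(emp_id, emp_name, dept_id, salary)
EOF

sqlldr userid=scott/tiger@DEVDB control=employees.ctl \
       log=employees.log bad=employees.bad errors=50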

Confidential

Database Administrator

Environment: Oracle 9i/8i, SQL*Plus, Oracle OEM, RMAN, Red Hat Linux & Windows, shell scripting, VB, C, VB.NET, C#, and PL/SQL programming.

Responsibilities:

  • Helped the development and data warehousing teams.
  • Wrote complex queries and procedures using SQL*Plus and TOAD.
  • Developed shell scripts and automated nightly jobs using cron to schedule backups and exports, refresh test databases from production, and build development, testing, and staging databases (see the cron sketch after this list).
  • Report generation using Crystal Reports 8.0.
  • Created users and allocated appropriate tablespace quotas with the necessary privileges and roles for all databases.
  • Configured TNSNAMES.ORA and LISTENER.ORA for SQL*Net connectivity (see the SQL*Net sketch after this list).
  • Performed application tuning with the RBO and CBO using trace, TKPROF, Explain Plan, Statspack, AWR, and ADDM.
  • Monitored tablespace sizes, resized tablespaces, and relocated data files for better disk I/O.
  • Managed archive logs, DB links, and partitions.
  • Oracle database tuning using SQL autotrace, TKPROF, and EXPLAIN PLAN for better performance.
  • Monitored table growth and performed necessary database reorganization as and when required.
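
A minimal sketch of the cron-driven nightly jobs described above, assuming an 8i/9i-era exp export; the schedule, SID, credentials, and directories are placeholders.

#!/bin/bash
# nightly_export.sh: full database export with a dated dump file (placeholder credentials).
# Scheduled from cron, e.g.:
#   30 1 * * * /home/oracle/scripts/nightly_export.sh >> /home/oracle/logs/nightly_export.log 2>&1
export ORACLE_SID=DEV01
export ORACLE_HOME=/u01/app/oracle/product/9.2.0
export PATH=$ORACLE_HOME/bin:$PATH

DUMPDIR=/backup/exports
STAMP=$(date +%Y%m%d)

exp system/manager full=y \
    file=$DUMPDIR/full_${ORACLE_SID}_${STAMP}.dmp \
    log=$DUMPDIR/full_${ORACLE_SID}_${STAMP}.log

# Keep two weeks of dump files
find $DUMPDIR -name 'full_*.dmp' -mtime +14 -exec rm -f {} \;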
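
A hedged sketch of the SQL*Net configuration mentioned above: appending a TNS alias and verifying it with tnsping. The host, port, SID, and paths are placeholders.

#!/bin/bash
# Add a connect alias for the DEV01 database to tnsnames.ora (placeholder values).
cat >> $ORACLE_HOME/network/admin/tnsnames.ora <<'EOF'
DEV01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SID = DEV01))
  )
EOF

# The server-side listener.ora carries a matching SID_LIST entry for DEV01,
# picked up with: lsnrctl reload

# Verify name resolution and listener reachability
tnsping DEV01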
