
Hadoop Administrator Resume


Phoenix, AZ

PROFESSIONAL SUMMARY

  • 6+ years of overall system administration experience in IT, including 2+ years with Hadoop technologies. Strong experience as a Hadoop Administrator responsible for the smooth day-to-day operation of a mission-critical Hadoop cluster: performing upgrades, managing configuration changes, maintaining system integrity, and monitoring cluster performance in a multi-tenancy environment.
  • Strong communication skills and a professional attitude; able to work under pressure and support client Hadoop cluster environments with enthusiasm.
  • Worked in multi-cluster environments and set up the Cloudera Hadoop ecosystem.
  • Experience writing MapReduce jobs and HiveQL queries.
  • Experience with HBase, Pig, and NoSQL databases such as MongoDB.
  • Set up and configured Hadoop systems on Confidential AWS for processing massive volumes of data.
  • Worked with the Enterprise Analytics team to translate analytics requirements into Hadoop-centric technologies.
  • Monitored MapReduce jobs and analyzed cluster performance.
  • Performance-tuned the Hadoop cluster after assessing the existing infrastructure.
  • Automated Hadoop installation and configuration, and maintained the cluster, using tools such as Puppet.
  • Set up monitoring infrastructure for the Hadoop cluster using Nagios and Ganglia.
  • Worked with Flume to load log data from multiple sources directly into HDFS.
  • Configured ZooKeeper to coordinate the servers in the cluster and maintain data consistency.
  • Migrated data from existing data stores into Hadoop using Sqoop.
  • Upgraded and maintained the Apache product stack.
  • Designed both time-driven and data-driven automated workflows using Oozie.
  • Supported analysts by administering and configuring Hive.
  • Provided support to data analysts in running Pig and Hive queries.
  • Wrote shell scripts to dump shared data from MySQL servers into HDFS.
  • Familiar with the Java Virtual Machine (JVM) and multi-threaded processing.
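As an illustration of the time-driven Oozie workflows mentioned above, a minimal coordinator definition might look like the following sketch. The app name, workflow path, and dates are placeholders, not taken from any actual deployment:

```xml
<coordinator-app name="daily-etl" frequency="${coord:days(1)}"
                 start="2013-01-01T00:00Z" end="2013-12-31T00:00Z"
                 timezone="UTC" xmlns="uri:oozie:coordinator:0.2">
  <action>
    <workflow>
      <!-- hypothetical HDFS path to the workflow definition -->
      <app-path>${nameNode}/user/etl/workflows/daily-etl</app-path>
    </workflow>
  </action>
</coordinator-app>
```

A data-driven variant would add `<datasets>` and `<input-events>` elements so the workflow fires when new input partitions land, rather than purely on a schedule.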

TECHNICAL SKILLS

Hadoop Ecosystem: Hadoop HDFS, Sun Grid Engine administration, Hive, Pig, Flume, Oozie, ZooKeeper, HBase, and Sqoop.

Operating Systems: Linux (Red Hat Enterprise Linux 3/4/5, CentOS, Oracle Linux, Fedora, SUSE, Ubuntu), UNIX (Solaris, AIX, HP-UX), Windows

Languages: C, Java, Python, SQL, Pig Latin, UNIX shell scripting, Perl, HTML.

Databases: MySQL, Oracle, MS SQL Server

Tools: Puppet, BMC Patrol, Nagios, Ganglia, Cloudera Manager

Storage Arrays: Expertise in storage management on EMC infrastructure, ZFS, and Linux RAID

SAN: Brocade 48k/24k/12k/4900/4100 and Cisco MDS 9513, 9509, 9506, 9140, and 9120

Storage Tools: ECC, Navisphere Manager, NaviCLI, SANCopy, Brocade CLI/WebTools, Cisco CLI/Fabric & Device Manager, and PowerPath

PROFESSIONAL EXPERIENCE

Confidential, Phoenix, AZ

Hadoop Administrator

Roles & Responsibilities:

  • Specified the cluster size, allocated resource pools, and laid out the Hadoop distribution by writing the specifications in JSON file format.
  • Upgraded the Hadoop cluster from CDH3 to CDH4, set up a high-availability cluster, and integrated Hive with existing applications.
  • Configured Ethernet bonding for all nodes to double the network bandwidth.
  • Exported result sets from Hive to MySQL using shell scripts.
  • Developed Hive queries for the analysts.
  • Helped the team grow the cluster from 25 nodes to 40 nodes; the configuration of the additional DataNodes was managed through Serengeti.
  • Maintained system integrity of all sub-components (primarily HDFS, MapReduce, HBase, and Flume).
  • Monitored system health and logs and responded accordingly to any warning or failure conditions.
  • Performed upgrades and configuration changes.
  • Commissioned/decommissioned nodes as needed.
  • Managed resources in a multi-tenancy environment.
  • Configured ZooKeeper as part of setting up the HA cluster.
  • Implemented the Fair Scheduler on the JobTracker to share cluster resources among users' MapReduce jobs.
  • Configured compression codecs to compress data in the Hadoop cluster.
  • Developed MapReduce programs to perform analysis.
  • Researched, identified, and recommended technical and operational improvements, resulting in improved reliability and efficiency in developing the cluster.
  • Evaluated technical aspects of change requests pertaining to the cluster.
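The Fair Scheduler work described above is driven by an allocations file, referenced from mapred-site.xml through the `mapred.fairscheduler.allocation.file` property in CDH4 MR1. A minimal sketch, with hypothetical pool names and limits:

```xml
<?xml version="1.0"?>
<allocations>
  <!-- hypothetical pool for an analytics team's MapReduce jobs -->
  <pool name="analytics">
    <minMaps>10</minMaps>
    <minReduces>5</minReduces>
    <weight>2.0</weight>
  </pool>
  <!-- cap concurrent jobs per user across the cluster -->
  <userMaxJobsDefault>5</userMaxJobsDefault>
</allocations>
```

Guaranteed minimum shares keep one tenant's long-running jobs from starving the others, which is the point of fair scheduling in a multi-tenancy cluster.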

Environment: Hadoop CDH4, Hive 0.9.0/0.10.0, Sqoop 1.4.2, Pig 0.10.0, Flume 1.2.0/1.3.0, HBase 0.94.3, Solaris 9/10/11, MySQL 5.1.65, Red Hat Linux 4/5/6, HP-UX 11i, AIX, Sun Enterprise servers, Sun Fire V1280/480/440, Sun SPARC 1000, HP 9000 K, L, N class servers, HP & Dell blade servers, IBM RS/6000 and pSeries servers, VMware ESX Server, Oracle.

Confidential, Seattle, WA

Hadoop Administrator

Roles & Responsibilities:

  • Built a 20-node Hadoop cluster from bare-metal hardware.
  • Configured iptables to allow required services and block unwanted ports.
  • Upgraded the cluster from CDH3u0 to CDH3u1; the tasks were first performed on the staging platform before being applied to the production cluster.
  • Automated all jobs, from pulling data out of MySQL to pushing result sets into the Hadoop Distributed File System.
  • Implemented NameNode metadata backup over NFS for high availability.
  • Used Ganglia to monitor the cluster around the clock.
  • Wrote and automated shell scripts for day-to-day log-rolling processes.
  • Implemented the Capacity Scheduler on the JobTracker to share cluster resources among users' MapReduce jobs.
  • Supported data analysts in running MapReduce programs.
  • Imported and exported data between HDFS and Hive using Sqoop.
  • Analyzed data with Hive and Pig.
  • Implemented rack-topology (rack-awareness) scripts for the Hadoop cluster.
  • Managed the day-to-day operations of the cluster for backup and support.
  • Upgraded the cluster to Cloudera Manager 3.7.
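The rack-topology scripts mentioned above can be sketched as a small shell script: Hadoop invokes it (configured via `topology.script.file.name` in core-site.xml) with one or more DataNode addresses and expects one rack path per argument. The subnets and rack names below are hypothetical:

```shell
#!/bin/bash
# Hypothetical rack-awareness script: map each DataNode IP passed by
# Hadoop to a rack path, one line of output per argument.
resolve_rack() {
  for node in "$@"; do
    case "$node" in
      10.1.1.*) echo "/dc1/rack1" ;;
      10.1.2.*) echo "/dc1/rack2" ;;
      *)        echo "/default-rack" ;;  # fallback for unknown hosts
    esac
  done
}

# Pass through whatever arguments Hadoop supplies.
resolve_rack "$@"
```

With this in place HDFS can spread block replicas across racks, so the loss of a single rack does not lose all copies of a block.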

Environment: Hadoop CDH3u0/CDH3u1, MySQL 5.1.49/50/51, Pig 0.3.0, Linux RHEL 3/4/5/6, Solaris 8/9/10, HP-UX 11/11i, AIX 5.3L, Sun Enterprise Server 6800/E6500/E4500/E3500, Sun V440/V490, T-Series, M-Series, HP-9000 rx8640, K, L, N class servers, IBM RS/6000 and pSeries servers

Confidential

SAN Engineer

Roles & Responsibilities:

  • Administered and managed EMC CLARiiON storage arrays; resolved client Level II tickets within SLA.
  • Monitored all production servers through Nagios.
  • Installed and configured EMC management software (NaviCLI & Navisphere Manager).
  • Managed break-fix activities, including creating proper change controls.
  • Worked with EMC CLARiiON CX300, CX500, CX700, and CX3-series storage arrays.
  • Monitored and reviewed storage processor logs and host system logs to troubleshoot and perform health checks of the storage system.
  • Performed troubleshooting and maintenance (SP Collects, SPLAT, CAP2).
  • Created RAID groups, bound and unbound LUNs, and assigned LUNs to hosts using Navisphere Manager.
  • Provisioned storage on CLARiiON arrays: allocated and reclaimed LUNs, and created and dissolved MetaLUNs.
  • Allocated CLARiiON LUNs for various operating systems, including Linux and Solaris servers.
  • Reclaimed and deleted LUNs from CLARiiON arrays.
  • Masked LUNs and implemented host groups and security policies.
  • Configured PowerPath and set up multipathing on Linux hosts.
  • Performed virtual provisioning; administered Brocade SAN fabric switches and configured zoning policies (hardware and software zoning) in the SAN environment.
  • Monitored storage array hardware failures through Navisphere Manager.
  • Monitored alerts in ECC and Navisphere Manager.
  • Enabled and disabled ports on SAN switches according to requirements.
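The zoning work described above can be sketched in Brocade CLI terms. The aliases, zone and config names, and WWPNs below are placeholders, assuming WWPN-based (soft) zoning:

```
alicreate "host1_hba0", "10:00:00:05:1e:00:00:01"
alicreate "cx700_spa0", "50:06:01:60:00:00:00:01"
zonecreate "z_host1_cx700", "host1_hba0; cx700_spa0"
cfgadd "prod_cfg", "z_host1_cx700"
cfgenable "prod_cfg"
cfgsave
```

Single-initiator zoning of this kind (one host HBA per zone, paired with the array's SP port) keeps fabric state changes on one host from disturbing the others.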

Environment: EMC DMX 2000/1000/800 storage arrays, EMC Symmetrix 8830 storage array, EMC CLARiiON CX300/500/700 and CX3 storage arrays, Brocade 48k/24k, Cisco MDS 9509/9513 switches, EMC Connectrix MDS 9506/9216, McData ED140M2

Confidential

LINUX System Administrator

Roles & Responsibilities:

  • Managed systems operations with final accountability for smooth installation, networking, operation, and troubleshooting of hardware and software in a Linux environment.
  • Identified the operational needs of various departments and developed customized software to enhance system productivity.
  • Ran a Linux Squid proxy server with access restrictions enforced through ACLs and passwords.
  • Established and implemented firewall rules; validated the rules with vulnerability-scanning tools.
  • Set up and maintained a Linux-hosted Oracle database server; configured Net8, Oracle Listener services, and ODBC services.
  • Developed routing solutions for connecting network segments with Linux.
  • Effectively applied operating system patches.
  • Proactively detected computer security violations, collected evidence, and presented results to management.
  • Accomplished system and e-mail authentication using an LDAP enterprise database.
  • Implemented a database-enabled intranet website using Linux, Apache, a MySQL database backend, and HTML and PHP scripting for database access via web browsers.
  • Installed CentOS using Preboot Execution Environment (PXE) boot and the Kickstart method on multiple servers.
  • Monitored system metrics and logs for any problems.
  • Ran crontab jobs to back up data.
  • Applied operating system updates, patches, and configuration changes.
  • Added, removed, and updated user account information; reset passwords as needed.
  • Maintained the MySQL server and managed authentication for required database users.
  • Appropriately documented various administrative and technical issues.
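The cron-driven data backups mentioned above can be sketched as a small shell function; the paths, the dated-tarball naming, and the 7-day retention are illustrative, not taken from an actual setup:

```shell
#!/bin/bash
# backup_dir SRC DEST - archive SRC into a dated tarball under DEST and
# prune archives older than 7 days. Intended to be scheduled via cron, e.g.
#   30 2 * * * /usr/local/sbin/backup.sh /etc /var/backups
backup_dir() {
  local src="$1" dest="$2"
  mkdir -p "$dest"
  local archive="$dest/$(basename "$src")-$(date +%F).tar.gz"
  # -C keeps the archive paths relative to SRC's parent directory
  tar -czf "$archive" -C "$(dirname "$src")" "$(basename "$src")"
  # retention: drop tarballs older than 7 days
  find "$dest" -name '*.tar.gz' -mtime +7 -delete
  echo "$archive"
}
```

Dated filenames make restores and retention pruning straightforward, at the cost of full (rather than incremental) copies each night.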
