Hadoop Administrator Resume

Birmingham, AL

SUMMARY

  • 7+ years of IT experience, including 2.5 years with the Hadoop ecosystem, covering installation and configuration of Hadoop ecosystem components in existing clusters.
  • Experience in Hadoop administration (HDFS, MapReduce, Hive, Pig, Sqoop, Flume, Oozie, and HBase) and NoSQL administration.
  • Setting up automated 24x7 monitoring and escalation infrastructure for Hadoop cluster using Nagios and Ganglia.
  • Experience in installing Hadoop cluster using different distributions of Apache Hadoop: Cloudera and Hortonworks.
  • Good experience in understanding clients' Big Data business requirements and transforming them into Hadoop-centric technologies.
  • Experience in analyzing clients' existing Hadoop infrastructure, identifying performance bottlenecks, and tuning performance accordingly.
  • Installed, configured and maintained HBASE.
  • Worked with Sqoop to import and export data between HDFS/Hive and relational databases such as MySQL and Oracle (see the Sqoop sketch after this list).
  • Defining job flows in the Hadoop environment using tools like Oozie for data scrubbing and processing.
  • Experience in configuring Zookeeper to provide Cluster coordination services.
  • Loading logs from multiple sources directly into HDFS using Flume.
  • Experience in benchmarking, performing backup and recovery of Namenode metadata, and data residing in the cluster.
  • Familiar with commissioning and decommissioning of nodes on a Hadoop cluster.
  • Adept at configuring NameNode High Availability.
  • Worked on Disaster Management with Hadoop Cluster.
  • Strong knowledge of Hadoop HDFS architecture and the MapReduce framework.
  • Experience in deploying and managing multi-node development, testing, and production clusters.
  • Experience in understanding the security requirements for Hadoop and integrating with Kerberos authentication infrastructure: KDC server setup, and creating and managing the realm domain.
  • Worked on setting up Name Node high availability for major production cluster and designed Automatic failover control using Zookeeper and Quorum Journal Nodes.
  • Well experienced in building DHCP, PXE with Kickstart, DNS, and NFS servers, and in using them to build out infrastructure in a Linux environment.
  • Experienced in Linux Administration tasks like IP Management (IP Addressing, Subnetting, Ethernet Bonding and Static IP).
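
A minimal sketch of the kind of Sqoop import/export referenced above; the connection string, table names, and credentials are hypothetical placeholders, not details from any engagement:

    # Import a MySQL table into Hive (placeholder host/database/credentials)
    sqoop import \
      --connect jdbc:mysql://db-host:3306/sales \
      --username etl_user -P \
      --table orders \
      --hive-import --hive-table default.orders \
      --num-mappers 4

    # Export processed results from HDFS back into MySQL
    sqoop export \
      --connect jdbc:mysql://db-host:3306/sales \
      --username etl_user -P \
      --table order_summary \
      --export-dir /user/hive/warehouse/order_summary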

TECHNICAL SKILLS

Operating Systems: RedHat, CentOS, Ubuntu, Solaris, Windows Server 2008/2008 R2

Hardware: Sun Ultra Enterprise Servers (E3500, E4500), SPARC server 1000, SPARC server 20 Enterprise Servers

Languages: C++, Core Java and JDK 7/8

Web Languages: HTML, CSS, and XML

Hadoop Distributions: Cloudera and Hortonworks

Hadoop Ecosystem: MapReduce, YARN, HDFS, Sqoop, Hive, Pig, HBase, Flume, and Oozie.

Tools: JIRA, PuTTY, WinSCP, FileZilla.

Databases: HBase, Sybase, Oracle 7.x/8.0/9i, MySQL, SQL.

Protocols: TCP/IP, FTP, SSH, SFTP, SCP, SSL, ARP, DHCP, TFTP, RARP, PPP and POP3

Shell Scripting: Bash

Cloud Technologies: AWS

PROFESSIONAL EXPERIENCE

Confidential, Birmingham, AL

Hadoop Administrator

Responsibilities:

  • Responsible for architecting Hadoop clusters; translated functional and technical requirements into detailed architecture and design.
  • Worked exclusively on Cloudera distribution of Hadoop.
  • Installed and configured a fully distributed, multi-node Hadoop cluster spanning a large number of nodes.
  • Provided Hadoop, OS, and Hardware optimizations.
  • Set up the machines with network controls and static IPs, disabled firewalls, and configured swap memory.
  • Installed and configured Cloudera Manager for easy management of existing Hadoop cluster.
  • Worked on setting up high availability for the major production cluster and designed automatic failover control using Zookeeper and Quorum Journal Nodes (see the failover sketch after this list).
  • Implemented the Fair Scheduler on the JobTracker to allocate a fair share of resources to small jobs.
  • Performed operating system installation and Hadoop version updates using automation tools.
  • Configured Oozie for workflow automation and coordination.
  • Implemented rack-aware topology on the Hadoop cluster (see the topology script sketch after this list).
  • Importing and exporting structured data from different relational databases into HDFS and Hive using Sqoop.
  • Configured ZooKeeper to implement node coordination in clustering support.
  • Configured Flume for efficient collection, aggregation and transformation of huge log data from various sources to HDFS.
  • Involved in collecting and aggregating large amounts of streaming data into HDFS using Flume and defined channel selectors to multiplex data into different sinks.
  • Worked on developing scripts for benchmarking with TeraGen/TeraSort (see the benchmarking sketch after this list).
  • Implemented Kerberos Security Authentication protocol for existing cluster.
  • Backed up data on a regular basis to a remote cluster using distcp (see the distcp sketch after this list).
  • Troubleshot production-level issues in the cluster and its functionality.
  • Regularly commissioned and decommissioned nodes depending on data volume (see the decommissioning sketch after this list).
  • Monitored and configured a test cluster on AWS for further testing process and gradual migration.
  • Experience in deploying and managing multi-node development, testing, and production environments.
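
A sketch of how automatic NameNode failover is initialized and verified once Zookeeper and the Quorum Journal Nodes are configured; nn1/nn2 are placeholder service IDs:

    # One-time: create the failover znode in Zookeeper
    hdfs zkfc -formatZK

    # Verify which NameNode is active and which is standby
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2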
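
A sketch of a rack-awareness topology script like the one referenced above; the subnet-to-rack mapping is hypothetical. Hadoop invokes the script (wired in via net.topology.script.file.name in core-site.xml) with node addresses as arguments and expects one rack path per argument:

    #!/usr/bin/env bash
    # Print a rack path for each host/IP Hadoop passes in
    for node in "$@"; do
      case "$node" in
        10.0.1.*) echo "/rack1" ;;
        10.0.2.*) echo "/rack2" ;;
        *)        echo "/default-rack" ;;
      esac
    done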
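
The TeraGen/TeraSort benchmarking scripts were built around commands along these lines; row counts and paths are illustrative:

    # Generate ~1 TB of input (10 billion 100-byte rows), sort it, then validate
    EXAMPLES=$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar
    hadoop jar $EXAMPLES teragen 10000000000 /benchmarks/tera-in
    hadoop jar $EXAMPLES terasort /benchmarks/tera-in /benchmarks/tera-out
    hadoop jar $EXAMPLES teravalidate /benchmarks/tera-out /benchmarks/tera-val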
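
Remote-cluster backups with distcp followed this pattern; the cluster address and paths are placeholders:

    # Incrementally copy /data to the backup cluster, preserving permissions
    # (-update skips files that are already present and unchanged)
    hadoop distcp -update -p /data hdfs://backup-nn:8020/backups/data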
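
Decommissioning a DataNode, as in the bullet above, comes down to listing it in the excludes file and refreshing the NameNode; the hostname and file path are placeholders, and dfs.hosts.exclude must already point at the file:

    # List the node in the excludes file, then tell the NameNode to re-read it
    echo "dn-old.example.com" >> /etc/hadoop/conf/dfs.exclude
    hdfs dfsadmin -refreshNodes

    # Optionally rebalance the remaining nodes afterward
    hdfs balancer -threshold 10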

Confidential, Alpharetta GA

Hadoop Administrator

Responsibilities:

  • The client used the Hortonworks distribution of Hadoop to store and process large volumes of data generated across different enterprises.
  • Experience in installing, configuring, monitoring HDP stacks 2.1, 2.2, and 2.3.
  • Installed and configured Hadoop ecosystem components like Hive, Pig, Sqoop, Flume, Oozie, and HBase.
  • Experience in cluster planning, performance tuning, Monitoring, and troubleshooting the Hadoop cluster.
  • Experience in cluster planning phases: planning the cluster, preparing the nodes, pre-installation, and testing.
  • Responsible for cluster HDFS maintenance tasks: commissioning and decommissioning nodes, balancing the cluster, and rectifying failed disks.
  • Responsible for cluster MapReduce maintenance tasks: commissioning and decommissioning TaskTrackers and managing MapReduce jobs.
  • Experience in using Sqoop to import and export data between external databases and the Hadoop cluster.
  • Experience in using Flume to ingest log files into the Hadoop cluster (see the Flume sketch after this list).
  • Experience in configuring MySQL to store the Hive metastore (see the metastore sketch after this list).
  • Experience in administration of NoSQL databases, including HBase and MongoDB.
  • Communicating with the development teams and attending daily meetings.
  • Addressing and Troubleshooting issues on a daily basis.
  • Experience in setting up Kerberos in the Hortonworks cluster.
  • Worked with data delivery teams to set up new Hadoop users, including creating Linux users, setting up Kerberos principals, and testing HDFS and Hive access (see the Kerberos sketch after this list).
  • Cluster maintenance as well as creation and removal of nodes.
  • Monitor Hadoop cluster connectivity and security.
  • Manage and review Hadoop log files.
  • File system management and monitoring.
  • Experience in managing backups and version upgrades.
  • Diligently teaming with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality and availability.
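
A minimal Flume agent of the kind used for the log ingestion above; the agent name, source command, and HDFS path are hypothetical:

    # Write a one-source/one-sink agent config, then start the agent
    cat > flume.conf <<'EOF'
    agent1.sources  = src1
    agent1.channels = ch1
    agent1.sinks    = sink1

    agent1.sources.src1.type = exec
    agent1.sources.src1.command = tail -F /var/log/app/app.log
    agent1.sources.src1.channels = ch1

    agent1.channels.ch1.type = memory
    agent1.channels.ch1.capacity = 10000

    agent1.sinks.sink1.type = hdfs
    agent1.sinks.sink1.hdfs.path = /data/logs/%Y-%m-%d
    agent1.sinks.sink1.hdfs.fileType = DataStream
    agent1.sinks.sink1.hdfs.useLocalTimeStamp = true
    agent1.sinks.sink1.channel = ch1
    EOF

    flume-ng agent --conf ./conf --conf-file flume.conf --name agent1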
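
Configuring MySQL as the Hive metastore, as noted above, is mostly a matter of creating the database and pointing hive-site.xml at it; the user name and password below are placeholders:

    # Create the metastore database and a Hive user in MySQL
    mysql -u root -p <<'SQL'
    CREATE DATABASE metastore;
    CREATE USER 'hive'@'%' IDENTIFIED BY 'change-me';
    GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%';
    FLUSH PRIVILEGES;
    SQL

    # Matching hive-site.xml properties (values are placeholders):
    #   javax.jdo.option.ConnectionURL        = jdbc:mysql://db-host:3306/metastore
    #   javax.jdo.option.ConnectionDriverName = com.mysql.jdbc.Driver
    #   javax.jdo.option.ConnectionUserName   = hive
    #   javax.jdo.option.ConnectionPassword   = change-me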
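
A sketch of the new-user setup described above on a Kerberized cluster; the account name, realm, and HiveServer2 host are hypothetical:

    # Create the Linux account and a matching Kerberos principal
    useradd -m analyst1
    kadmin -q "addprinc analyst1@EXAMPLE.COM"

    # Verify the user can authenticate and reach HDFS and Hive
    kinit analyst1@EXAMPLE.COM
    hdfs dfs -ls /user/analyst1
    beeline -u "jdbc:hive2://hive-host:10000/default;principal=hive/_HOST@EXAMPLE.COM"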

Confidential, NJ

Linux Administrator

Responsibilities:

  • Installation, configuration, upgrade, and administration of Windows, Sun Solaris, and Red Hat Linux.
  • Linux and Solaris installation, administration and maintenance.
  • User account management, including managing passwords, setting up quotas, and providing support.
  • Worked on Linux Kick-start OS integration, DDNS, DHCP, SMTP, Samba, NFS, FTP, SSH, and LDAP integration.
  • Network traffic control, IPSec, QoS, VLAN, proxy, and RADIUS integration on Cisco hardware via Red Hat Linux software.
  • Installation and configuration of MySQL on Windows Server nodes.
  • Responsible for configuring and managing Squid server in Linux and Windows.
  • Configuration and Administration of NIS environment.
  • Involved in Installation and configuration of NFS.
  • Package and Patch management on Linux servers.
  • Worked on Logical Volume Manager to create file systems as per user and database requirements (see the LVM sketch after this list).
  • Data migration at Host level using Red Hat LVM, Solaris LVM, and Veritas Volume Manager.
  • Expertise in establishing and documenting the procedures to ensure data integrity including system fail-over and backup/recovery in AIX operating system.
  • Managed 100+ UNIX servers running RHEL and HP-UX on Oracle, HP, and Dell servers, including blade centers.
  • Solaris disk mirroring (SVM), zone installation and configuration.
  • Escalating issues accordingly, managing team efficiently to achieve desired goals.
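
Creating a file system with LVM, as in the bullet above, typically follows this sequence; the device, volume group, and mount point names are illustrative:

    # Initialize the disk for LVM, then build a volume group and logical volume
    pvcreate /dev/sdb
    vgcreate datavg /dev/sdb
    lvcreate -n applv -L 50G datavg

    # Create the file system and mount it (add an /etc/fstab entry to persist)
    mkfs.ext4 /dev/datavg/applv
    mkdir -p /app
    mount /dev/datavg/applv /app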

Confidential

System Administrator

Responsibilities:

  • Provided onsite and remote support for Red Hat Linux and AIX servers.
  • Provided 24x7 on call server support for UNIX environment including AIX, Linux, HP-UX, and Sun Solaris.
  • Configured HP ProLiant, Dell PowerEdge R-series, Cisco UCS, and Confidential p-series machines for production, staging, and test environments.
  • Administration, installation, upgrade, and maintenance of HP ProLiant DL585 G7 servers running Red Hat Enterprise Linux, using Kickstart, flash archives, and the upgrade method.
  • Responsible for setting up Oracle RAC for a three node RHEL5 cluster.
  • Experience with provisioning Linux using Kickstart and Red Hat Satellite server.
  • Extensively used NFS, NIS, DHCP, FTP, Sendmail, and Telnet on Linux.
  • Coordinated with database administrators while setting up Oracle 10g/11g on Linux.
  • Monitored and troubleshot performance-related issues.
  • Configured the Linux native device mapper (MPIO) and EMC PowerPath for RHEL 5.4, 5.5, 5.6, and 5.7.
  • Used performance monitoring utilities such as iostat, vmstat, top, netstat, and sar.
  • Worked on support for AIX subsystem device drivers.
  • Worked on Virtualization of different machines.
  • Worked with both physical and virtual computing, from the desktop to the data center, using SUSE Linux.
  • Expertise in building, installing, loading, and configuring boxes.
  • Worked with the team members to create, execute and implement the plans.
  • Experience in Installation, Configuration and Troubleshooting of Tivoli Storage Manager (TSM).
  • Remediated failed backups and took manual incremental backups of failing servers.
  • Upgraded TSM from 5.1.x to 5.3.x.
  • Worked on HMC configuration and management of the HMC console, including upgrades and micro-partitioning.
  • Installed adapter cards and cables and configured them.
  • Worked on Integrated Virtual Ethernet and building up of VIO servers.
  • Installed SSH keys to allow the srmdata account to log in without a password prompt for daily backups of vital data such as processor and disk utilization (see the SSH key sketch after this list).
  • Hardware troubleshooting, Maintenance and replacement of failed hardware.
  • Provide redundancy with HBA card, Ether channel configuration and network devices.
  • Coordinating with application and database team for troubleshooting the application or Database outages.
  • Coordinated with the SAN team on allocation of LUNs to increase file system space.
  • Configuration and administration of Fibre Channel adapters and handling the AIX side of the SAN.
  • Strong LVM skills: used LVM to create VGs and LVs and to set up disk mirroring.
  • Implemented PLM (Partition Load Manager) on AIX 5.3 and 6.1.
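
Passwordless login for a service account such as srmdata, as described above, is normally set up like this; the host name is a placeholder:

    # Generate a key pair with no passphrase (suitable for cron-driven backups)
    ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa

    # Install the public key on the target server, then verify
    ssh-copy-id srmdata@backup-host
    ssh srmdata@backup-host "vmstat 1 2"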
