
Hadoop Administrator Resume


Melbourne, FL


  • 8+ years of experience as a DBA across all phases of project implementation, including system integration, administration, testing, change control, technical support, user training and documentation, on technologies primarily spanning Linux/Unix & Big Data systems in diverse industries.
  • 3+ years of proven expertise in Hadoop project implementation and system configuration. Well versed in installation, configuration, support and management of Big Data platforms and their underlying infrastructure in multi-node cluster environments.
  • Expertise in setting up, configuring & monitoring Hadoop clusters using Cloudera CDH3, CDH4, Apache tarballs & Hortonworks Ambari on Ubuntu, Red Hat, CentOS & Windows.
  • Hands-on experience with major components of the Hadoop ecosystem, including HBase-Hive integration, Pig, Sqoop, Flume, ZooKeeper and Oozie, plus knowledge of the MapReduce/HDFS framework.
  • Worked on NoSQL databases including HBase and MongoDB, plugging them into the Hadoop ecosystem. Hands-on experience productionalizing Hadoop applications (i.e. administration, configuration, management, monitoring, debugging and performance tuning).
  • Experience in big data domains such as Shared Services (Hadoop clusters, operational model, inter-company chargeback, lifecycle management).
  • Experience setting up clusters on Amazon EC2 & S3, including automating the setup and extension of clusters in the AWS cloud.
  • Experience in Cloudera Hadoop upgrades and patches, and installation of ecosystem products through Cloudera Manager, along with Cloudera Manager upgrades.
  • Expertise in exposing Hive views to Excel PowerPivot for analyzing statistical data.
  • Experience in understanding and managing Hadoop log files and in managing Hadoop infrastructure with Cloudera Manager. Involved in building a Big Data cluster and successfully performed installation of CDH using Cloudera Manager.
  • Extensive experience architecting Hadoop clusters and in the setup, configuration and management of security for Hadoop clusters.
  • Extensive experience in capacity planning, cluster setup, benchmarking & performance tuning of large clusters.
  • Experienced in automating the configuration management using Puppet.
  • Extensive shell scripting experience for data loading, data movement activities, database refreshes, replication and backup and restores.
  • Experienced in performing typical cluster-related activities such as storage capacity management and performance tuning.
  • Experienced in setting up security for Hadoop clusters using Kerberos, integrated with LDAP at the enterprise level.
  • Imported/exported data from RDBMS to HDFS using Data Ingestion tools like Sqoop.
  • Knowledge of workflow scheduling and monitoring tools like Oozie, and of enabling HA for HBase & Hadoop clusters using ZooKeeper.
  • Exposure to implementing and maintaining Hadoop and Hive security.
  • Configured property files such as core-site.xml, hdfs-site.xml and mapred-site.xml based on requirements.
  • Experience in installation, configuration, backup, recovery, maintenance and support of Red Hat Enterprise Linux.
  • Experience in Linux-based virtualization implementations such as VMware and Xen.
  • Experience in Administration, implementation and support on Solaris 7, 8, 9, and 10.
  • Strong knowledge of system performance and monitoring tools; ran security scans and remediated servers to meet enterprise standards.
  • Managed patches, upgrades and licensed products for system software on all flavors of UNIX and Linux servers; supported disk volume management, creating volumes with Logical Volume Manager.
  • Created new file systems, managed and checked data consistency of file systems, managed shared NFS file systems, mounted and unmounted NFS server/client shares on remote machines, shared remote folders, and started and stopped NFS services.
  • Configuration and maintenance of web server, application server and batch servers.
  • Excellent interpersonal and communication skills; a strong team player willing to take on new and varied projects, with the ability to handle changing priorities and deadlines.
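The property files mentioned above (core-site.xml, hdfs-site.xml, mapred-site.xml) all follow the same key/value pattern; a minimal hdfs-site.xml sketch, with placeholder paths and an assumed replication factor (not settings from any cluster described here), looks like:

```
<?xml version="1.0"?>
<!-- Minimal hdfs-site.xml sketch (CDH3/CDH4-era property names);
     paths and values below are placeholders, not production settings. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/data/1/dfs/nn</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/1/dfs/dn,/data/2/dfs/dn</value>
  </property>
</configuration>
```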


Programming Languages: Java, Pig, XML, HTML, SQL

Hadoop/Big Data: Hadoop, MapReduce, HDFS, HBase, Hive, Pig, Sqoop, Spark, Kafka, Flume, Oozie, ZooKeeper.

Scripting Languages: Shell

NoSQL Databases: HBase, MongoDB

Automation/Monitoring Tools: Puppet, Ambari, Ganglia, Nagios, Cloudera Manager (CDH4)

Methodologies: Agile methodology (Scrum), UML, Design Patterns

Operating Systems/Platforms: Linux, Unix, Windows, Ubuntu, CentOS

Databases: Oracle 12c/11g/10g/9i/8i, MS SQL server 2008



Protocols: TCP/IP, HTTP, DNS


Confidential, Melbourne, FL

Hadoop Administrator


  • Installed and configured Apache Hadoop, Hive and Pig environment on Amazon EC2.
  • Installed and configured various components of the Hadoop ecosystem and maintained their integrity; planned production cluster hardware and software installation and communicated with multiple teams to get it done.
  • Configured MySQL Database to store Hive metadata.
  • Designed, configured and managed backup and disaster recovery for HDFS data; commissioned data nodes as data grew and decommissioned them as hardware degraded.
  • Worked with Linux systems and MySQL database on a regular basis.
  • Troubleshot many cloud-related issues such as DataNode down, network failures and missing data blocks.
  • Managing and reviewing Hadoop and HBase log files.
  • Worked with the systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
  • Worked with application teams to install Hadoop updates, patches and version upgrades as required.
  • Installed and configured Hive, Pig, Sqoop and Oozie on the HDP 2.0 cluster.
  • Created shell scripts to clean log files and check disk space after cleaning, to restore data nodes, and to copy data across clusters.
  • Involved in implementing High Availability and automatic failover infrastructure to overcome the NameNode single point of failure, utilizing ZooKeeper services.
  • Successfully performed installation of CDH4 - Cloudera’s Distribution including Apache Hadoop through Cloudera manager.
  • Successfully performed installation of CDH3 - Cloudera’s Distribution including Apache Hadoop through Cloudera manager.
  • Worked with big data developers, designers and scientists in troubleshooting MapReduce job failures and issues with Hive, Pig and Flume.
  • Worked with the Hadoop production support team to implement new business initiatives as they relate to Hadoop; performed other work-related duties as assigned and provided 24x7 on-call support.
  • Worked on establishing an Operational/Governance model and Change Control Board for various lines of business running on Big Data clusters.

Environment: Hadoop, HDFS, Hive, Flume, HBase, Sqoop, Hue, Pig, Java (JDK 1.6), Eclipse, MySQL, Linux, Ubuntu, ZooKeeper, Cloudera CDH4 with HA.
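A minimal sketch of the log-cleanup script described above (the retention window, directory layout and file names are assumptions, not the actual production script):

```shell
#!/usr/bin/env sh
# Hypothetical sketch of the log-cleanup step: delete *.log files older
# than RETENTION_DAYS from LOG_DIR, then report disk usage. LOG_DIR defaults
# to a scratch directory here so the sketch is safe to run anywhere.
LOG_DIR="${LOG_DIR:-$(mktemp -d)}"
RETENTION_DAYS="${RETENTION_DAYS:-7}"

# Seed one stale and one fresh file so the cleanup has something to act on.
touch -d '10 days ago' "$LOG_DIR/datanode-old.log"
touch "$LOG_DIR/datanode-current.log"

# Remove anything older than the retention window, then show what is left.
find "$LOG_DIR" -type f -name '*.log' -mtime +"$RETENTION_DAYS" -print -delete
df -h "$LOG_DIR"
ls "$LOG_DIR"
```

In production the same `find ... -mtime +N -delete` pattern would run from cron against the real daemon log directories.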

Confidential, Milwaukee, WI

Hadoop Administrator


  • Involved in installation, configuration, support and management of Hadoop clusters; cluster administration included commissioning & decommissioning of DataNodes, capacity planning, slots configuration, performance tuning, cluster monitoring and troubleshooting.
  • Developed shell scripts to automate the cluster installation.
  • Involved in installing Hadoop ecosystem components.
  • Built automated set up for cluster monitoring and issue escalation process.
  • Administered, installed, upgraded and managed distributions of Hadoop (CDH3, CDH4, Cloudera Manager), Hive and HBase.
  • Plan and execute on system upgrades for existing Hadoop clusters.
  • Imported/exported data from RDBMS to HDFS using Data Ingestion tools like Sqoop.
  • Commissioned and decommissioned nodes in the Hadoop cluster.
  • Used the Fair Scheduler to manage MapReduce jobs so that each job gets roughly the same share of CPU time.
  • Recovering from node failures and troubleshooting common Hadoop cluster issues.
  • Scripting Hadoop package installation and configuration to support fully-automated deployments.
  • Supported Hadoop developers and assisted in optimization of MapReduce jobs, Pig Latin scripts, Hive scripts, and HBase ingest as required.
  • Involved in creating Hive internal/external tables, loading them with data and troubleshooting Hive jobs.
  • Familiarized with automated monitoring tools like Nagios and Ganglia.
  • Worked on configuring security for the Hadoop cluster, and on managing and scheduling jobs on the Hadoop cluster.
  • Tuning of MapReduce configurations to optimize the run time of jobs.
  • Experienced in managing and reviewing Hadoop log files.
  • Installed Oozie workflow engine to run multiple Hive and Pig jobs.
  • Managed node connectivity and security on the Hadoop cluster.
  • Supported in setting up QA environment and updating configurations for implementing scripts with Pig, Hive and Sqoop.
  • Involved in HDFS File system management and monitoring.
  • Good knowledge of creating ETL jobs to load Twitter JSON data into MongoDB and jobs to load data from MongoDB into the data warehouse.
  • Constantly learning various Big Data tools and providing strategic direction per development requirements.

Environment: Java, Eclipse Juno 4.2, MapReduce, HDFS, Pig, Hive, HCatalog, HBase, Flume, Sqoop, Oozie, ZooKeeper, Nagios, Ganglia, Fair Scheduler.
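The Fair Scheduler mentioned above is driven by an allocations file; a hypothetical sketch for the Hadoop 1 / MRv1 era (pool names, minimums and limits are placeholders, not this cluster's actual policy):

```
<?xml version="1.0"?>
<!-- Hypothetical fair-scheduler allocations sketch (MRv1-era elements);
     pool names and limits are placeholders. -->
<allocations>
  <pool name="etl">
    <minMaps>10</minMaps>
    <minReduces>5</minReduces>
    <maxRunningJobs>20</maxRunningJobs>
    <weight>2.0</weight>
  </pool>
  <pool name="adhoc">
    <weight>1.0</weight>
  </pool>
  <userMaxJobsDefault>5</userMaxJobsDefault>
</allocations>
```

Giving the ad-hoc pool half the weight of the ETL pool is what yields the "roughly equal share of CPU time per job" behavior within each pool while protecting production jobs.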

Confidential, Boston, MA

Linux/ Unix Systems Administrator


  • Installed, configured and supported RHEL 4/5.
  • Configuration and administration of logical volume manager, Patch and Package administration.
  • Maintaining virtual server under VMware, accessing virtual server for installing and troubleshooting purpose.
  • Administered Solaris & Linux; experience in installation, configuration, backup, recovery, maintenance and support of Sun Solaris & Linux.
  • Installed, upgraded and configured Red Hat Linux 3.x, 4.x, 5.x using Kickstart servers and interactive installation.
  • Worked with database administration to tune the kernel for Oracle installations.
  • Installing, upgrading and configuring SUN Solaris 2.6, 7, 8, 9 and 10 on Sun Servers using Jumpstart Servers, Flash Archives and Interactive Installation.
  • Experience in installing, configuring and implementing the RAID technologies using various tools like VxVM and Solaris volume manager.
  • Created and managed user accounts, security, rights, disk space and process monitoring on Solaris and Red Hat Linux.
  • Administered and deployed Red Hat servers (standalone, via VMware and TPM).
  • Installed and upgraded packages and patches; configuration management, version control and service packs; reviewed connectivity issues related to security problems.
  • Drafted shell scripts for automated installations, to extract logs.
  • Installing and configuring various network services such as DNS, DHCP, NFS, SMTP, Apache Server, NIS, Samba, SSH, Telnet, sendmail and management of TCP/IP protocols.
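NFS configuration of the kind described above comes down to an exports entry on the server; a minimal /etc/exports sketch (the path and subnet are placeholders):

```
# Hypothetical /etc/exports entry: share /export/data with one subnet,
# read-write, with synchronous writes.
/export/data  192.168.10.0/24(rw,sync,no_root_squash)
```

The server reloads its export list with `exportfs -ra`, and a client mounts the share with something like `mount -t nfs server:/export/data /mnt/data`.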


Unix Administrator


  • Day-to-day administration on Sun Solaris and RHEL 4/5, which includes installation, upgrades & loading of patches & packages.
  • Assist with overall technology strategy and operational standards for the Unix domains.
  • Manage problem tickets and service request queues, responding to monitoring alerts, execution of change controls, routine & preventative maintenance, performance tuning and emergency troubleshooting & incident support.
  • Provides accurate root cause analysis and comprehensive action plans. Works closely with HW/SW vendor support teams to resolve problems quickly.
  • Managed daily system administration cases using BMC Remedy Help Desk; investigated, installed and configured a software failover system for production Linux servers. Configured NIS, NIS+, DNS, DHCP, NFS, LDAP, Samba, Squid, Postfix, Sendmail, FTP and remote access, along with security management and security troubleshooting.
  • Managing systems routine backup, scheduling jobs like disabling and enabling cron jobs, enabling system logging, network logging of servers for maintenance, performance tuning, testing.
  • Worked on volume management and disk management with software RAID solutions using VERITAS Volume Manager & Solaris Volume Manager; file system tuning and growing using VERITAS File System (VxFS); coordinated with the SAN team for storage allocation and Dynamic Multipathing.
  • Worked on KVM, installing and configuring Linux VMs.
  • Responsible for setting up the disaster recovery environment for Red Hat Linux servers and implemented all DR procedures per the guidelines.
  • Ensured system security by hardening and auditing systems per Red Hat Linux guidelines; also experienced in system analysis, troubleshooting and performance tuning of operating systems.
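Routine backup scheduling of the kind described above is typically a crontab entry; a hypothetical sketch (script paths and times are placeholders, not the actual job schedule):

```
# Hypothetical crontab entries: nightly backup at 01:30, weekly log
# archive on Sundays at 03:00. Script paths are placeholders.
30 1 * * * /usr/local/bin/nightly_backup.sh >> /var/log/nightly_backup.log 2>&1
0 3 * * 0 /usr/local/bin/archive_logs.sh >> /var/log/archive_logs.log 2>&1
```

Disabling and re-enabling a job during maintenance is then just commenting the line out and back in with `crontab -e`.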


Unix Administrator


  • Built new Solaris and Linux servers; upgraded and patched existing servers; compiled, built and upgraded the Linux kernel.
  • Worked with Telnet, FTP, TCP/IP and rlogin to interoperate hosts; UNIX system administrator supporting the infrastructure environment.
  • Performed regular system administrative tasks including User Management, Backup, Network Management, and Software Management including Documentation etc.
  • Supported Disk Volume Management- creating volumes with Solaris Volume Manager,VERITAS Volume Manager, and VERITAS File System.
  • Worked on the OS upgrade plan for Solaris on development and production servers and recommended system configurations for clients based on estimated requirements.
  • Performed reorganization of disk partitions, file systems, hard disk additions and memory upgrades; monitored system activities, log maintenance and disk space management.
  • Fixed system problems based on system email alerts and user complaints; upgraded software, added patches and added new hardware on UNIX machines.
  • Performed Network stack tuning for apache web servers.
  • Was involved in preparation of technical design specifications, installation and configuration of Servers.
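Disk-space monitoring like that described above can be sketched as a small shell check (the 90% threshold is an assumption; in practice the warning line would be mailed to the administrator):

```shell
#!/usr/bin/env sh
# Hypothetical disk-usage check: print a warning for every mounted
# filesystem above THRESHOLD percent used. 90% is an assumed default.
THRESHOLD="${THRESHOLD:-90}"
df -P | awk -v t="$THRESHOLD" 'NR > 1 {
  use = $5; sub(/%/, "", use)           # strip the % sign from "Use%"
  if (use + 0 > t) print "WARN:", $6, "at", use "%"
}'
```

`df -P` forces POSIX single-line output so the awk field positions are stable across systems.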


Linux Administrator


  • Installation, maintenance, administration and troubleshooting of OEL Linux, HP-UX and Sun Solaris systems.
  • Managed routine system backups, scheduled jobs (disabling and enabling), network logging of servers for maintenance, performance tuning, system logging and testing.
  • Administration of RHEL, which includes installation, testing, tuning, upgrading and loading patches, troubleshooting server issues.
  • Configured RAID controllers and disk storage shelves.
  • Configured, implemented and maintained hardware RAID storage technology.
  • Experience with TCP/IP, FTP, DNS, NIS, BIND, Sendmail, syslog, UNIX account management and file permissions.
  • Troubleshooting system, network and user issues.
  • Tested and upgraded production, development and test Linux and Solaris servers.
  • Additional responsibilities include schedule and administration of server backups, setup and administration of network and ms mail accounts.
  • Fine-tuned systems using performance monitoring tools, e.g. identified servers needing additional memory due to constantly non-zero page-in/page-out values.
  • Configured Kickstart files, installed RPMs & packages, and wrote scripts for Opsware and patch installation.
  • Responsible for Monitoring Tools deployment and Script Development for any ongoing projects.
  • Created and maintained users, roles, permissions and enabled quota for the users.
  • Installed and configured anti-virus software with its updates and industry specific software.
  • Provided enterprise support to Red Hat and Windows servers hosting Oracle databases.
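Kickstart installs like those described above are driven by a ks.cfg file; a hypothetical minimal sketch for the RHEL 5 era (the mirror URL, password hash, partitioning and package set are all placeholders):

```
# Hypothetical minimal kickstart sketch (RHEL 5 era); the URL, password
# hash, partitioning and package set below are placeholders.
install
url --url http://mirror.example.com/rhel5/os/x86_64
lang en_US.UTF-8
keyboard us
network --bootproto dhcp
rootpw --iscrypted $1$placeholder$hash
timezone America/New_York
clearpart --all --initlabel
autopart
reboot

%packages
@base
ntp
```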
