
Big Data / Hadoop Administration Lead Resume


Southlake, TX

PROFESSIONAL SUMMARY:

Experienced Hadoop Administrator with a strong background in Linux, Windows and AWS cloud computing. 8+ years of IT experience, including 4 years in Hadoop administration. Highly resourceful, goal-oriented professional with a broad range of applied IT skills. Experienced in the deployment, validation, performance tuning, capacity planning and maintenance of Hadoop clusters and ecosystem services.

TECHNICAL SKILLS:

Cluster Management Tools: Cloudera Manager, Apache Ambari (Hortonworks Data Platform, HDP), Nagios

Big Data / Hadoop Ecosystem Services: HDFS, YARN (MRv2), Hive, Flume, Impala, Kafka, Pig, Spark, Sqoop and ZooKeeper

NoSQL Databases: Cassandra, MongoDB, HBase

Visualization: Tableau

Security: Kerberos, Active Directory, Sentry

Monitoring: Cloudera Manager, Ambari, Nagios

Cloud: Amazon Web Services (AWS) EC2, Microsoft Azure, Google Cloud

Operating System Platforms: Linux, UNIX, Windows and macOS

Databases: Oracle, MySQL, PostgreSQL

SCM Tools: Git, GitLab, GitHub

PROFESSIONAL EXPERIENCE:

Big Data / Hadoop Administration Lead

Confidential, Southlake, TX

Responsibilities:

  • Installed, configured and managed Hadoop clusters hands-on; monitored and troubleshot issues with Linux memory, CPU, OS, storage and network.
  • Deployed and configured Hadoop clusters and services using Linux CLI and Cloudera’s integrated parcels and workflows
  • Monitored the health, QoS and SLA notifications of the Hadoop ecosystem using integrated charts and SNMP alerts
  • Troubleshot, diagnosed, tuned and resolved network connectivity, firewall, port management, node availability, service failure, job failure, database connectivity and other Hadoop issues. Analyzed Hadoop log files, debugged failed jobs and raised service tickets with Cloudera Support as needed.
  • Documented root-cause findings, systems analysis, log-aggregation and faceted-search procedures, and online solution references so that fixes could be reproduced easily
  • Implemented Kerberos (authentication) and Apache Sentry (authorization) on Hadoop clusters.
  • Determined the appropriate hardware and infrastructure for Hadoop clusters
  • Configured, installed and maintained High Availability NameNode clusters following the recommended best practices of Cloudera and Hortonworks
  • Implemented the Fair Scheduler and Capacity Scheduler to meet SLAs for multiple users of the Hadoop cluster and optimize job processing
  • Established read/write, CPU and network performance benchmarks using TestDFSIO and the TeraSort benchmark suite (sample commands after this list)
  • Collaborated with data delivery teams to set up new Hadoop users, create Kerberos principals and test Hadoop ecosystem access for new users (onboarding sketch after this list). Provided technical support to data analysts and Pig and Hive developers.
  • Automated installation, configuration and management of tasks using shell scripts.
  • Installed and configured a CDH 5.14.0 cluster using Cloudera Manager.
  • Implemented automatic NameNode failover using ZooKeeper and the ZooKeeper Failover Controller (ZKFC); see the HA commands after this list.
  • Monitored multiple Hadoop cluster environments using Ganglia and Nagios.
  • Monitored workload and job performance and performed capacity planning.
  • Supported cluster maintenance, firewall and LVM configuration, and Backup and Disaster Recovery (BDR) for the production cluster. Backed up data on a regular basis to a remote cluster using DistCp (example after this list)
  • Fine-tuned Hive jobs for better performance.
  • Performed rolling and express upgrades of Hadoop clusters using Cloudera Manager
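
The benchmarks referenced above can be reproduced with the stock MapReduce test and example JARs that ship with CDH; a minimal sketch, noting that parcel paths, file counts and row counts below are illustrative and vary by CDH version:

    # Write/read throughput with TestDFSIO: 10 files of 1,000 MB each
    hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-mapreduce-client-jobclient-*-tests.jar \
        TestDFSIO -write -nrFiles 10 -fileSize 1000
    hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-mapreduce-client-jobclient-*-tests.jar \
        TestDFSIO -read -nrFiles 10 -fileSize 1000

    # TeraSort suite: generate, sort, then validate
    EXAMPLES=/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar
    hadoop jar "$EXAMPLES" teragen 100000000 /benchmarks/terainput
    hadoop jar "$EXAMPLES" terasort /benchmarks/terainput /benchmarks/teraoutput
    hadoop jar "$EXAMPLES" teravalidate /benchmarks/teraoutput /benchmarks/terareport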
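
New-user onboarding on the Kerberized clusters typically followed steps like the sketch below, assuming an MIT KDC; the realm, principal and keytab names are hypothetical:

    # Create a principal for the new user in the KDC
    kadmin.local -q "addprinc -randkey alice@EXAMPLE.COM"
    # Export a keytab so the user can authenticate non-interactively
    kadmin.local -q "xst -k /etc/security/keytabs/alice.keytab alice@EXAMPLE.COM"
    # Verify access: obtain a ticket and list the user's HDFS home directory
    kinit -kt /etc/security/keytabs/alice.keytab alice@EXAMPLE.COM
    hdfs dfs -ls /user/alice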
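
For the automatic NameNode failover noted above, the ZKFC bootstrap and verification commands look roughly like this; the NameNode IDs nn1 and nn2 are placeholders for the values defined in hdfs-site.xml:

    # One-time: initialize the HA state znode in ZooKeeper
    hdfs zkfc -formatZK
    # Check which NameNode is active and which is standby
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2
    # Exercise a controlled failover from nn1 to nn2
    hdfs haadmin -failover nn1 nn2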
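
The remote-cluster backups used DistCp; a minimal sketch with hypothetical NameNode hosts and paths:

    # Mirror /data to the DR cluster, copying only changed files (-update)
    # and preserving file attributes (-p)
    hadoop distcp -update -p hdfs://prod-nn:8020/data hdfs://dr-nn:8020/backups/data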

Hadoop Administrator

Confidential, Houston, TX

Responsibilities:

  • Provided management support for large-scale Big Data platforms on the Hadoop ecosystem
  • Planned, deployed and managed large multi-node Hadoop Clusters for data platform operations
  • Created EC2 instances and implemented large multi-node Hadoop clusters in the AWS cloud using automated Terraform scripts
  • Configured AWS IAM and Security Groups. Developed a Terraform template to deploy Cloudera Manager on AWS (CLI workflow sketch after this list).
  • Managed and monitored the Hadoop cluster hands-on using Cloudera Manager.
  • Decommissioned failed nodes and commissioned new nodes to accommodate more data on HDFS as the cluster grew (commands after this list)
  • Enabled High Availability to avoid data loss and limit cluster downtime.
  • Configured backups and performed recovery from NameNode failures.
  • Upgraded and patched clusters without any data loss
  • Managed HDFS data storage and supported running MapReduce jobs.
  • Troubleshot system failures, identified root causes and recommended remedial courses of action. Managed, analyzed and reviewed Hadoop log files as part of Hadoop administrative troubleshooting. Documented root causes, solution steps and diagnostic procedures for process improvement and repeatability.
  • Applied extensive knowledge of HDFS and YARN to tune the cluster for better performance.
  • Automated the installation and configuration of the Hadoop cluster using Puppet and shell scripting.
  • Installed and configured Sqoop to establish import/export pipelines from RDBMSs such as MySQL (sample import after this list)
  • Installed and configured Flume to establish data pipelines from various unstructured and semi-structured data sources (agent sketch after this list).
  • Installed and configured HBase for random access and modification of data in HDFS.
  • Installed and configured Hive for faster data processing with SQL-like interface.
  • Installed and configured Impala for real-time data processing.
  • Collaborated with systems engineering team to plan, deploy and expand Hadoop clusters
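
The Terraform-driven cluster builds followed the standard CLI workflow; a sketch assuming a module directory checked out locally, with a hypothetical cluster_size variable:

    # Initialize providers and modules, preview the change set, then apply it
    terraform init
    terraform plan -var "cluster_size=10" -out cdh.tfplan
    terraform apply cdh.tfplan
    # Tear down a test environment when finished
    terraform destroy -var "cluster_size=10"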
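
Decommissioning followed the standard HDFS exclude-file flow; the hostname and exclude-file path below are illustrative:

    # Add the failing DataNode to the file referenced by dfs.hosts.exclude
    echo "datanode07.example.com" >> /etc/hadoop/conf/dfs.exclude
    # Tell the NameNode to re-read its include/exclude lists
    hdfs dfsadmin -refreshNodes
    # Watch until the node reports Decommissioned before removing it
    hdfs dfsadmin -report | grep -A 3 datanode07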
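
A minimal sketch of the MySQL-to-HDFS import pipeline; the connection string, table and target directory are hypothetical:

    # Import one table from MySQL into HDFS with four parallel mappers;
    # -P prompts for the database password instead of embedding it
    sqoop import \
      --connect jdbc:mysql://dbhost.example.com:3306/sales \
      --username etl_user -P \
      --table orders \
      --target-dir /data/raw/orders \
      --num-mappers 4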
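
The Flume agents were defined in properties files and launched from the shell; a minimal sketch that tails a web log into HDFS, where the agent, source, channel, sink and path names are hypothetical:

    # /etc/flume-ng/conf/weblog.conf -- exec source -> memory channel -> HDFS sink
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /var/log/httpd/access_log
    a1.sources.r1.channels = c1
    a1.channels.c1.type = memory
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.channel = c1
    a1.sinks.k1.hdfs.path = /data/raw/weblogs

    # Start the agent against that configuration:
    flume-ng agent --conf /etc/flume-ng/conf \
        --conf-file /etc/flume-ng/conf/weblog.conf --name a1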

System Administrator

Confidential, Houston, TX

Responsibilities:

  • Created and managed users and groups, including their permissions, ownership, group privileges and related ACLs for files, folders and system services (examples after this list).
  • Managed database workload batches with automated shell scripts and cron schedules (sample crontab entry after this list).
  • Managed disk space using Logical Volume Manager (LVM); see the resize sketch after this list.
  • Monitored virtual memory performance, swap space, and disk and CPU utilization.
  • Monitored system performance using Nagios. Prepared monthly system performance reports, procedures and process documents.
  • Performed system and server builds, installs, upgrades, patches, migrations, troubleshooting, security hardening, backups, disaster recovery, performance monitoring and fine-tuning.
  • Deployed virtual machines using templates and cloning and backed them up with snapshots. Redeployed VMs and datastores using vMotion and Storage vMotion in a VMware environment.
  • Set up, configured and maintained UNIX and Linux servers, RAID subsystems and desktop/laptop machines, including installation and maintenance of all operating system and application software.
  • Installed, configured and maintained Ethernet hubs, switches and cabling, as well as new machines, hard drives, memory and network interface cards.
  • Managed software licenses, monitored network performance and application usage, and made software purchases
  • Provided user support, including troubleshooting, repairs and documentation, and developed a support website.
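
Typical account and permission management looked like the sketch below; the user, group and path names are hypothetical:

    # Create a group and a user that belongs to it
    groupadd dbadmins
    useradd -m -G dbadmins jsmith
    # Grant the group read/execute on a shared directory via an ACL
    setfacl -m g:dbadmins:rx /opt/app/shared
    # Confirm the effective ACL
    getfacl /opt/app/shared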
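
The batch schedules were ordinary crontab entries wrapping shell scripts; a sketch with a hypothetical script path, installed via crontab -e for the service account:

    # Run the nightly database batch at 01:30 and append all output to a log
    30 1 * * * /opt/scripts/db_batch.sh >> /var/log/db_batch.log 2>&1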
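
A common LVM task was growing a filesystem online; a minimal sketch with hypothetical volume group and logical volume names:

    # Extend the logical volume by 10 GiB from free space in the volume group
    lvextend -L +10G /dev/vg_data/lv_app
    # Grow the ext4 filesystem to fill the enlarged volume (online resize)
    resize2fs /dev/vg_data/lv_app
    # Verify the new size
    df -h /app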

System Administrator

Confidential

Responsibilities:

  • Performed automated Linux operating system installations using Kickstart; remotely monitored and managed server hardware.
  • Managed packages in Red Hat Linux using RPM, YUM and up2date (representative commands after this list)
  • Worked with various network protocols, including HTTP, UDP, FTP and TCP/IP.
  • Installed and configured Linux from scratch and performed regular monitoring
  • Configured and administered NFS, NIS and DNS in Linux environments (NFS export sketch after this list).
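
Day-to-day package administration used RPM and YUM; a few representative commands, where the package name is illustrative:

    # Install, update and remove packages through YUM
    yum install -y httpd
    yum update -y
    yum remove -y httpd
    # Query the RPM database for an installed package and list its files
    rpm -q httpd
    rpm -ql httpd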
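
NFS administration typically meant exporting a directory on the server and mounting it from clients; a sketch with hypothetical hosts, paths and subnet:

    # On the server: export /srv/share read-write to the local subnet
    echo "/srv/share 192.168.1.0/24(rw,sync)" >> /etc/exports
    exportfs -ra    # re-export everything listed in /etc/exports
    # On a client: mount the share
    mount -t nfs nfsserver.example.com:/srv/share /mnt/share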
