
Hadoop Administrator Resume


CAREER OBJECTIVE:

A position as a Hadoop/Linux Administrator that utilizes my technical skills and offers ample opportunities for learning and growth.

PROFESSIONAL SUMMARY:

  • 7 years of professional experience, including more than 3 years in Big Data analytics as a Hadoop Administrator, 7 years as a Linux Administrator, and 1 year as a Service Manager.
  • Experience in architecting, designing, installing, configuring, and managing Apache Hadoop clusters and the Hortonworks Hadoop Distribution (HDP).
  • Expertise and sound knowledge of Red Hat Linux (5 & 6) operating systems and administration.
  • Experience with the complete software development lifecycle, including design, development, testing, and implementation of moderately to highly complex systems.
  • Expertise in Hadoop cluster capacity planning, performance tuning, cluster monitoring, balancing, and troubleshooting.
  • Implementation knowledge of the NiFi data ingestion tool in HDP clusters.
  • Expertise in preparing Hadoop clusters for development teams working on POCs.
  • Experience installing monitoring systems such as Ambari on AWS for cluster health monitoring and cluster management.
  • Experience in minor and major upgrades of Hadoop and Hadoop eco system.
  • Hands-on experience analyzing log files for Hadoop and ecosystem services and identifying root causes.
  • Experience in commissioning, decommissioning, balancing, and managing nodes, and in tuning servers for optimal cluster performance.
  • Performed cluster maintenance, troubleshooting, and monitoring, and followed proper backup and recovery strategies.
  • Created system security supporting a multi-tier software delivery system using Kerberos, with centralized authorization through Ranger.
  • Strong understanding of the Hadoop ecosystem, including HDFS, MapReduce, Zookeeper, Hadoop Streaming, Sqoop, Oozie, and Hive.
  • Experience designing real-time big-data ingestion architectures using Storm, Kafka, and Spark Streaming.
  • Ability to analyze and performance tune a Hadoop cluster.
  • Resolved incident and problem tickets within SLA for services across the global Hadoop estate.
  • Continuously evaluated Hadoop infrastructure requirements and designed/deployed solutions (high availability, replication considerations, space issues, etc.).
  • Hands-on experience with complex RDBMS SQL and Hive HQL.
  • Hands-on experience with the Remedy tool as a Service Manager.
  • Extensive knowledge on Core Java.
  • Familiarity with the fundamentals of Linux shell scripting.
  • Proficient in storage management products such as Linux Volume Manager (LVM).
  • Monitored infrastructure and provided support through Nagios.
  • Specialized in high-availability products, including Red Hat Cluster Suite.
  • Experience in Incident, Change and Problem management in a large UNIX infrastructure consisting of Linux Servers.
  • Provided environmental oversight and root cause analysis for stability and reliability issues, with timely resolution or escalation.
  • Team player and contributor with a passion to excel and to learn new technologies.
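The log-file analysis and root-cause work described above can be sketched as a small shell helper that summarizes the most frequent ERROR/FATAL messages in a Hadoop service log. This is a minimal sketch; the log path in the usage comment is a hypothetical example, not from any specific cluster.

```shell
#!/bin/sh
# Sketch: summarize the most frequent ERROR/FATAL messages in a Hadoop
# service log -- a quick first pass at root-cause analysis.
# Usage: top_errors /var/log/hadoop/hdfs/hadoop-hdfs-namenode.log
#        (the path above is a hypothetical example)
top_errors() {
  grep -E 'ERROR|FATAL' "$1" \
    | sed -E 's/^[0-9-]+ [0-9:,]+ //' \
    | sort | uniq -c | sort -rn | head -5
  # grep keeps only error lines; sed strips the timestamp prefix so
  # identical messages group together; uniq -c counts them.
}
```

Grouping by message rather than reading the raw log surfaces a recurring failure (e.g. a repeated block-missing exception) immediately.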

TECHNICAL SKILLS:

Hadoop/Big Data Technologies: HDFS, MapReduce, HBase, Hive, Sqoop, Zookeeper, YARN, NiFi and Ranger

Operating Systems: Linux (5/6), Unix

Databases: Oracle 9i/10g/11g, SQL Server

Programming Languages: Core Java, Shell Scripting, SQL

Hardware: DL380 G7-G9, DL385 G6-G7

PROFESSIONAL EXPERIENCE:

Confidential

Hadoop Administrator

Responsibilities:

  • Provided the POC for big data by setting up a 17-node HDFS cluster.
  • Built and implemented capacity queues and assigned resources.
  • Designed jobs that loaded data from source systems into the HDFS cluster using the NiFi tool.
  • Installed and integrated Ranger and created policies for HDFS, Hive, etc.
  • Deployed Kerberos and created keytab files, principals, and realms.
  • Provided day-to-day support for cluster issues and job failures.
  • Changed cluster configuration properties based on the volume of data being processed and the performance of the cluster.
  • Maintained the cluster in a healthy and optimal working condition.
  • Scheduling and maintaining the Oozie jobs.
  • Created HDFS filesystems and assigned permissions as per developers' requests.
  • Integrated Kafka (Messaging System) to transfer data from one application to another.
  • Monitored the health of all processes related to NameNode, DataNode, HDFS, YARN, Hive, HBase, Ranger, and Storm using Hortonworks Ambari.
  • Backed up configuration and performed recovery from NameNode failures.
  • Provided proactive and reactive support, ensuring maximum availability, minimizing downtime, and meeting service SLAs for incident, change, and problem management using the Remedy tool.
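Kerberos enablement like that described above typically means creating service principals and exporting keytabs. The following is a dry-run sketch that only prints the `kadmin.local` commands it would issue; the service name, host, and realm in the example call are hypothetical.

```shell
#!/bin/sh
# Dry-run sketch: print the kadmin.local commands used to create a
# Kerberos service principal and export its keytab. Nothing is run;
# the printed commands would be executed on the KDC as root.
# make_keytab <service> <host> <realm> <keytab-path>
make_keytab() {
  svc="$1"; host="$2"; realm="$3"; keytab="$4"
  # Create the principal with a random key, then export it to a keytab.
  echo "kadmin.local -q \"addprinc -randkey ${svc}/${host}@${realm}\""
  echo "kadmin.local -q \"xst -k ${keytab} ${svc}/${host}@${realm}\""
}

# Hypothetical example: NameNode principal for host nn1.example.com.
make_keytab nn nn1.example.com EXAMPLE.COM /etc/security/keytabs/nn.service.keytab
```

Keeping the commands as printable strings makes the procedure reviewable before it touches the KDC.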

Confidential, Bellevue, Washington

Hadoop/Linux Administrator

Responsibilities:

  • Provided the POC for big data by setting up a 200-node production cluster that later grew to more than 600 nodes.
  • Designed and installed a 14-node DEV cluster to help developers with POCs.
  • Upgraded Apache Ambari from Version 1.7.0 to 2.0.0.
  • Involved in upgrading Hadoop Cluster from HDP 2.2.0.0 to HDP 2.2.4.2.
  • Designed and implemented High Availability and automatic failover infrastructure to overcome single points of failure for NameNode, ResourceManager, Hive Metastore, HiveServer2, HBase Master, and Storm Nimbus, utilizing ZooKeeper services.
  • Migrated Hadoop services from one node to another with the help of Ambari.
  • Balanced HDFS manually on a regular basis to decrease network utilization and increase job performance.
  • Involved in Installing Kerberos and Configuring Server and Client Systems to enable Hadoop Security.
  • Coordinated with developers and troubleshot issues while moving data from Netezza to the HDFS cluster.
  • Worked on commissioning & decommissioning of data nodes.
  • Adding and removing the nodes in Hadoop Cluster as per requirement.
  • Closely monitored and analyzed MapReduce job executions on the cluster at the task level.
  • Involved in loading data from UNIX file system to HDFS.
  • Continuously worked with Vendors Technical support, Professional services in resolving high severity issues.
  • Added staging nodes to production after cleanup activities.
  • Performed user and group management through Hortonworks Ambari.
  • Worked on HBase and performed performance tuning.
  • Configured and integrated Storm to manage data flow from HDFS to HBase tables.
  • Handled data exchange between HDFS and databases using Sqoop.
  • Monitored disk, memory, heap, and CPU utilization on all master and slave machines using Ambari and took necessary measures to keep the cluster up and running 24/7.
  • Provided proactive and reactive support, ensuring maximum availability, minimizing downtime, and meeting service SLAs for incident, change, and problem management using JIRA.
  • Prepared and reviewed technical changes on the Unix estate; attended CAB meetings to provide insight into the technical plan, the proposed solution, and the risks involved.
  • Provided Unix support, including day-to-day maintenance, troubleshooting, root cause analysis, generating sosreports and kdump reports, and escalation and coordination with vendors.
  • End to end server builds using kickstart, Storage configuration using LVM and customized configurations of Linux servers for Applications and Databases like Oracle.
  • Liaise with Application, Database, Storage and network teams for setting up new Linux environments and providing support to regular BAU activities in Unix environment.
  • Planning and implementation of Quarterly Infraweekends to resolve existing software and hardware vulnerabilities in the environment and standardize the infrastructure.
  • Worked on projects including BIOS, RAID, and iLO firmware upgrades on HP hardware, and operating system and kernel upgrades for Red Hat using a local repository.
  • Worked on Hardware failures. Creating tickets with HP Support and scheduling changes for hardware replacements.
  • Tuned and maintained the kernel through the /proc filesystem.
  • Implemented Network bonding services for Network resilience.
  • Configured RAID controllers through the hpacucli utility.
  • Built servers from scratch on local disk.
  • Basic shell programming skills for system administration.
  • Scheduling/Controlling of Jobs using Cron.
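Commissioning and decommissioning DataNodes, as described above, is driven by an exclude file plus a NameNode refresh. Below is a minimal sketch: the exclude-file handling is real shell, while the `hdfs dfsadmin` refresh is only echoed, since it must run on an actual NameNode. The hostname and file path in the test of this sketch are hypothetical examples.

```shell
#!/bin/sh
# Sketch: add a host to the HDFS exclude file (without duplicates) so
# the NameNode drains its blocks, then refresh the node list.
# decommission_node <hostname> <exclude-file>
decommission_node() {
  host="$1"; exclude="$2"
  touch "$exclude"
  # Append the host only if it is not already listed (-x: whole line,
  # -F: literal match), so repeated runs are idempotent.
  grep -qxF "$host" "$exclude" || echo "$host" >> "$exclude"
  # Echoed only -- on a live cluster this runs as the hdfs user:
  echo "hdfs dfsadmin -refreshNodes"
}
```

The idempotent append matters because decommissioning is often retried; duplicate entries in the exclude file are harmless to HDFS but make audits noisy.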

Confidential, San Francisco, California

Linux Administrator

Responsibilities:

  • Performed disk management with partitioning tools such as fdisk; managed filesystems with LVM (physical volumes, logical volumes, and volume groups); extended filesystems on LVM; mounted and unmounted filesystems and maintained /etc/fstab mount options.
  • Migrated storage from VMAX to EMC CLARiiON using LVM volume managers.
  • Monitored server performance, including CPU, memory, swap space, and filesystem utilization.
  • Identified and analyzed issues hampering system performance, working in close coordination with the product development team to recommend solutions.
  • Performed the remote connections and file transfer, creating archives, managing local disk devices and performing system security.
  • Security implemented through TCP Wrappers, IPTABLES and SELinux.
  • Responsible for adding/removing of packages.
  • Network Administration and Troubleshooting.
  • Worked on user requests from different teams using the ServiceNow tool.
  • Implemented RAID levels (RAID 0, RAID 1, RAID 5, and RAID 6).
  • Installed and configured Nagios and managed services through it.
  • Updated systems as soon as new versions of the OS and application software were rolled out.
  • Responsible for enabling and revoking privileges from the users.
  • Scanning and Configuration of Storage LUN.
  • Installed and updated kernel and client patches on Red Hat servers as required by the client.
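The filesystem-utilization monitoring mentioned above can be approximated with a small `df`-based check, useful as a lightweight complement to Nagios. This is a sketch; the 80% threshold in the usage comment is an arbitrary example.

```shell
#!/bin/sh
# Sketch: list mounted filesystems whose usage exceeds a threshold.
# Usage: fs_over 80   (the threshold is an arbitrary example)
fs_over() {
  limit="$1"
  # df -P guarantees stable POSIX columns; $5 is capacity ("45%"),
  # $6 is the mount point. Skip the header line (NR > 1).
  df -P | awk -v lim="$limit" 'NR > 1 {
    use = $5; sub(/%/, "", use)
    if (use + 0 > lim + 0) printf "%s %s%%\n", $6, use
  }'
}
```

Run from cron, the one-line-per-offender output is easy to mail or feed into a ticketing workflow.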

Confidential

Linux Administrator

Responsibilities:

  • Responsible for monitoring system performance and supporting servers to improve the performance.
  • Providing support by handling Application, installation, troubleshooting & required configuration of the Linux servers.
  • Provided Support on operating system issues and hardware related issues.
  • Configured and implemented NTP, DNS, and SMTP (Postfix) servers.
  • Supported the NFS, Samba, Apache services in our estate.
  • Managed simple partitions and filesystems.
  • Established and changed swap space under Red Hat Linux.
  • Implemented Disk Quotas on File system.
  • Managed user accounts attributes and files systems.
  • User and Group Administration.
  • Scheduled jobs through cron and at.
  • Watched system logs for problems on a regular basis.
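Establishing swap space, as in the bullets above, follows a standard sequence on Red Hat Linux. This dry-run sketch only prints the commands rather than running them (they require root); the file path and size in the example are hypothetical.

```shell
#!/bin/sh
# Dry-run sketch: print the standard steps for adding a swap file on
# Red Hat Linux. Run the printed commands as root to apply them.
# add_swap_cmds <path> <size-in-MiB>
add_swap_cmds() {
  path="$1"; mib="$2"
  echo "dd if=/dev/zero of=${path} bs=1M count=${mib}"  # allocate the file
  echo "chmod 600 ${path}"                              # swap must not be world-readable
  echo "mkswap ${path}"                                 # write the swap signature
  echo "swapon ${path}"                                 # enable it now
  echo "echo '${path} swap swap defaults 0 0' >> /etc/fstab"  # persist across reboots
}

# Hypothetical example: a 1 GiB swap file.
add_swap_cmds /swapfile 1024
```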

Confidential

Linux Administrator

Responsibilities:

  • Provided support by handling Application, installation, troubleshooting & required configuration of the Linux servers.
  • Provided Support on operating system issues and hardware related issues.
  • Handled Kernel and OS upgrades through YUM repository.
  • Troubleshot OS failures: booted into single-user mode, performed filesystem checks with fsck, etc.
  • Backup and Recovery with tar and scp.
  • Installed kernel and additional Packages using YUM repository.
  • Provided passwordless (key-based) connectivity between servers.
  • Configured and managed SSH and FTP Servers.
  • Checked disk space on a daily basis.
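The tar/scp backup routine described above can be wrapped in a short script that produces a timestamped archive. This is a sketch; the remote host in the commented scp step is a hypothetical example, which is why that line is left commented out.

```shell
#!/bin/sh
# Sketch: create a timestamped tar.gz backup of a directory. The scp
# step to an off-host location is commented out because the remote
# host named there is a hypothetical example.
# backup_dir <source-dir> <dest-dir>; prints the archive path.
backup_dir() {
  src="$1"; dest="$2"
  stamp=$(date +%Y%m%d-%H%M%S)
  name="$(basename "$src")-${stamp}.tar.gz"
  # -C: archive relative paths so the tarball restores cleanly anywhere.
  tar -czf "${dest}/${name}" -C "$(dirname "$src")" "$(basename "$src")" || return 1
  # scp "${dest}/${name}" backup@backuphost.example.com:/backups/  # hypothetical remote
  echo "${dest}/${name}"
}
```

The timestamp in the archive name keeps daily runs from overwriting each other, which also makes point-in-time restores straightforward.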
