Senior Hadoop Administrator Resume

Memphis, TN

SUMMARY

  • 9+ years of extensive experience in Linux/Unix, including Hadoop administration.
  • Experienced in managing Linux platform servers.
  • Excellent understanding of Hadoop architecture and its components, such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, and the MapReduce programming paradigm.
  • Hands-on development and implementation experience on a Big Data Management Platform (BMP) using HDFS, MapReduce, Hive, Pig, Oozie, Apache Kite, and other Hadoop ecosystem components as data storage and retrieval systems.
  • Experience in installing, configuring, supporting, and managing Hadoop clusters using Apache and Cloudera (CDH3, CDH4) distributions.
  • Hands-on experience with major Hadoop ecosystem components, including Hive, Sqoop, and Flume, and knowledge of the MapReduce/HDFS framework.
  • Set up standards and processes for Hadoop based application design and implementation.
  • Experienced in developing MapReduce programs using Apache Hadoop for working with Big Data.
  • Experience in performing major and minor upgrades of Hadoop clusters on Apache, Cloudera, and Hortonworks distributions.
  • Set up automated 24x7x365 monitoring and escalation infrastructure for Hadoop clusters using Nagios and Ganglia.
  • Experience in collecting metrics for Hadoop clusters using Ambari and Cloudera Manager; proven expertise in implementing Hadoop projects and configuring systems.
  • Involved in log file management: logs older than 7 days were removed from the local log folder, loaded into HDFS, and retained there for 3 months (see the sketch after this list).
  • Experience in production support and application support by fixing bugs.
  • Supported WebSphere Application Server (WPS) and Confidential HTTP/Apache web servers in Linux environments for various projects.
  • Supported geographically diverse customers and teams in a 24/7 environment.
  • Improved system availability using the BPM and SiteScope monitoring tools.
  • Ability to analyze complex problems, interpret operational needs, and develop integrated, creative solutions.
  • Strong interpersonal and communication skills and the ability to work effectively with a wide range of constituencies in a diverse community.
  • Excellent Communication and Customer Support skills.
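
For illustration, the log-retention flow described above could be scripted roughly as follows; the paths and the exact 7-day/90-day windows are assumptions, not taken from any specific cluster:

    #!/usr/bin/env bash
    # Sketch of the log-retention flow: archive local logs older than 7 days
    # to HDFS, then expire HDFS copies older than ~3 months.
    LOG_DIR=/var/log/hadoop        # hypothetical local log folder
    ARCHIVE=/archive/logs          # hypothetical HDFS archive directory

    # Move local logs older than 7 days into HDFS, then delete them locally.
    find "$LOG_DIR" -type f -mtime +7 | while read -r f; do
      hadoop fs -put "$f" "$ARCHIVE/" && rm -f "$f"
    done

    # Expire HDFS copies older than 90 days (~3 months); field 6 of
    # `hadoop fs -ls` is the modification date, field 8 the path.
    cutoff=$(date -d '90 days ago' +%Y-%m-%d)
    hadoop fs -ls "$ARCHIVE" | awk -v c="$cutoff" 'NR>1 && $6 < c {print $8}' \
      | xargs -r -n1 hadoop fs -rm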

PROFESSIONAL EXPERIENCE

Confidential, Memphis, TN

Senior Hadoop Administrator

Responsibilities:

  • Worked on installing, configuring, and administering a Hadoop cluster.
  • Benchmarked clusters using TeraGen, TeraSort, and TeraValidate and identified performance bottlenecks (see the benchmark sketch after this list).
  • Built a Hadoop cluster ensuring high availability for the NameNode, mixed-workload management, performance optimization, health monitoring, and backup and recovery across one or more nodes.
  • Validated YARN and Hive parameters so that MapReduce jobs ran successfully.
  • Installed Azkaban Server to solve the problem of Hadoop job dependencies.
  • Installed and configured Apache Phoenix.
  • Tuned the cluster and reported statistics.
  • Good understanding of the Fair Scheduler, which assigns resources so that all jobs receive, on average, an equal share over time, plus working knowledge of the Capacity Scheduler (see the scheduler sketch after this list).
  • Continuously monitored and managed the Hadoop cluster.
  • Implemented and configured a quorum-based High Availability Hadoop cluster for HDFS, Impala, and Solr.
  • Hands-on experience with ecosystem tools such as HDFS, MapReduce, Hive, Pig, Oozie, and ZooKeeper.
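
A typical run of the TeraGen/TeraSort/TeraValidate benchmark mentioned above looks like the following; the examples-jar path is the usual CDH parcel location, and the 10 GB data size (100 million 100-byte rows) is an assumption:

    # Benchmark sketch: generate data, sort it, then validate the sort.
    JAR=/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar

    hadoop jar "$JAR" teragen      100000000 /benchmarks/terasort-input
    hadoop jar "$JAR" terasort     /benchmarks/terasort-input  /benchmarks/terasort-output
    hadoop jar "$JAR" teravalidate /benchmarks/terasort-output /benchmarks/terasort-validate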
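
And a minimal Fair Scheduler allocation file of the kind the scheduler bullet refers to; the queue names and weights below are illustrative assumptions:

    # Sketch of a fair-scheduler.xml; with equal weights, running jobs
    # converge toward an equal share of cluster resources over time.
    cat > /etc/hadoop/conf/fair-scheduler.xml <<'EOF'
    <?xml version="1.0"?>
    <allocations>
      <queue name="etl">
        <weight>1.0</weight>
        <schedulingPolicy>fair</schedulingPolicy>
      </queue>
      <queue name="adhoc">
        <weight>1.0</weight>
        <schedulingPolicy>fair</schedulingPolicy>
      </queue>
    </allocations>
    EOF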

Environment: Hadoop, Cloudera CDH 5.3.3, HDFS, MapReduce, Hive, Sqoop, Solr, HBase, Oozie, Pig.

Confidential, Bloomington, IL

Senior Hadoop Administrator

Responsibilities:

  • Developed Pig scripts to report data for analysis purposes.
  • Exported and imported data between HDFS and RDBMSs using the Sqoop tool (see the Sqoop sketch after this list).
  • Wrote UDFs in the Python scripting language.
  • Analyzed MapReduce jobs for data coordination.
  • Created Hive queries that helped market analysts spot emerging trends by comparing fresh data with archived data stored on NFS tapes.
  • Enabled speedy reviews and first mover advantages by using Oozie to automate data loading into the Hadoop Distributed File System and PIG to pre-process the data.
  • Provided design recommendations and thought leadership to sponsors/stakeholders that improved review processes and resolved technical problems.
  • Managed and reviewed Hadoop log files.
  • Tested raw data and executed performance scripts.
  • Shared responsibility for administration of Hadoop, Hive and Pig.
  • Managed data through the HBase database to analyze web logs.
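
By way of example, the Sqoop transfers described above typically take this shape; the JDBC URL, credentials, tables, and HDFS paths below are placeholder assumptions:

    # Import an RDBMS table into HDFS with 4 parallel mappers.
    sqoop import \
      --connect jdbc:mysql://dbhost/sales --username etl -P \
      --table transactions --target-dir /data/raw/transactions \
      --num-mappers 4

    # Export processed results from HDFS back to an RDBMS table.
    sqoop export \
      --connect jdbc:mysql://dbhost/reports --username etl -P \
      --table daily_summary --export-dir /data/out/daily_summary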

Environment: Hadoop, Cloudera CDH 5.3.3, Linux, MapReduce, HDFS, Hive, Pig, Shell Scripting.

Confidential, San Francisco, CA

Senior Hadoop Admin

Responsibilities:

  • Captured data from existing databases that provide SQL interfaces using Sqoop.
  • Implemented the Hadoop stack and various big data analytics tools; migrated data from different databases to Hadoop.
  • Processed information from HDFS to derive insights useful in decision making; these insights were presented to users in the form of charts.
  • Worked with different Big Data technologies, with good knowledge of Hadoop, MapReduce, and Hive.
  • Worked on deployment and automation tasks.
  • Installed and configured Hadoop clusters in pseudo-distributed and fully distributed modes.
  • Involved in developing the data loading and extraction processes for big data analysis
  • Worked on professional services engagements to help customers design and build clusters and applications and troubleshoot network, disk, and operating system issues.
  • Administered Linux servers and other Unix variants and managed Hadoop clusters.
  • Worked with HBase and Hive scripts to extract, transform, and load data into HBase and Hive.
  • Continuously monitored and managed the Hadoop cluster.
  • Analyzed the data by performing Hive queries and running Pig scripts to know user behavior
  • Installed Oozie workflow engine to run multiple Hive and Pig jobs
  • Developed scripts and batch jobs to schedule a bundle (a group of coordinators) consisting of various Hadoop programs using Oozie (see the sketch after this list).
  • Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports
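
The Oozie bundle scheduling mentioned above can be sketched as follows; the Oozie host, NameNode/JobTracker addresses, and HDFS application path are assumptions:

    # Properties for a bundle job (a group of coordinators); all values
    # here are placeholder assumptions.
    cat > bundle.properties <<'EOF'
    oozie.bundle.application.path=hdfs:///apps/oozie/daily-bundle
    nameNode=hdfs://namenode:8020
    jobTracker=jobtracker:8021
    EOF

    # Submit and run the bundle, then check its status by job id.
    oozie job -oozie http://oozie-host:11000/oozie -config bundle.properties -run
    oozie job -oozie http://oozie-host:11000/oozie -info <bundle-job-id>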

Environment: Hadoop, HDFS, MapReduce, Hive, Flume, Sqoop, Cloudera CDH4, HBase, Oozie, Pig, AWS EC2.

Confidential, MN

Hadoop Admin

Responsibilities:

  • Proactively monitored systems and services; worked on architecture design and implementation of Hadoop deployments, configuration management, and backup and DR systems.
  • Involved in analyzing system failures, identifying root-cause and recommendation of remediation actions. Documented issue log with solutions for future references.
  • Worked with systems engineering team for planning new Hadoop environment deployments, expansion of existing Hadoop clusters.
  • Monitored multiple Hadoop cluster environments using Ganglia and Nagios; monitored workload, job performance, and capacity planning using Cloudera Manager.
  • Worked with application teams to install OS level updates, patches and version upgrades required for Hadoop cluster environments.
  • Installed and configured Hive, Pig, Sqoop and Oozie on the Hadoop cluster.
  • Installation and configuration of Name Node High Availability (NNHA) using Zookeeper.
  • Analyzed web log data using HiveQL to extract the number of unique visitors per day, page views, visit duration, and the most purchased product on the website (see the Hive sketch after this list).
  • Exported the analyzed data to relational databases using Sqoop for visualization and for report generation by our BI team.
  • Experienced in Linux Administration tasks like IP Management (IP Addressing, Ethernet Bonding, Static IP and Subnetting).
  • Responsible for daily system administration of Linux and Windows servers. Also implemented HTTPD, NFS, SAN and NAS on Linux Servers.
  • Created UNIX shell scripts to watch for 'null' files and trigger jobs accordingly (see the watcher sketch after this list); also had good knowledge of the Python scripting language.
  • Integrated Oozie with the rest of the Hadoop stack supporting several types of Hadoop jobs out of the box (such as Java map-reduce, Streaming map-reduce, Pig, Hive, Sqoop and Distcp) as well as system specific jobs (such as Java programs and shell scripts).
  • Worked on disaster recovery for the Hadoop cluster.
  • Involved in Installing and configuring Kerberos for the authentication of users and Hadoop daemons.
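
The 'null file' watcher mentioned above could look roughly like this; the landing directory and the downstream trigger script are hypothetical:

    #!/usr/bin/env bash
    # Watch a landing directory: log zero-byte ('null') files and trigger
    # the load job for non-empty ones. Paths are hypothetical.
    WATCH_DIR=/data/incoming

    for f in "$WATCH_DIR"/*; do
      [ -e "$f" ] || continue
      if [ ! -s "$f" ]; then               # -s is false for zero-byte files
        echo "empty file detected: $f" >> /var/log/null_watch.log
      else
        /opt/scripts/trigger_load.sh "$f"  # hypothetical downstream job
      fi
    done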
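
And one way the HiveQL web-log analysis above might be expressed; the weblogs table and its columns are assumptions:

    # Unique visitors and page views per day from a hypothetical weblogs table.
    hive -e "
      SELECT to_date(event_time)        AS day,
             COUNT(DISTINCT visitor_id) AS unique_visitors,
             COUNT(*)                   AS page_views
      FROM   weblogs
      GROUP  BY to_date(event_time);
    "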

Environment: Hadoop, MapReduce, HDFS, Hive, Pig, Java (JDK 1.6), SQL, Cloudera Manager, Sqoop, Flume, Oozie, Zookeeper, Ganglia, Linux (CentOS/Red Hat).

Confidential, Pittsburgh, PA

Unix/Linux Systems Administrator

Responsibilities:

  • Managed servers on VMware and provided test environments on virtual machines.
  • Provided IT support to internal Presbyterian Home Health Services staff members.
  • Provided application support to large user groups.
  • Installed hardware, installed Linux OS, and configured required network on 25 Node HPC cluster.
  • Configured and managed Apache web servers.
  • Managed software and hardware RAID systems.
  • Managed user accounts and the authentication process via the NIS service.
  • Managed the system firewall using ipchains and iptables; implemented SSH and SSL (see the sketch after this list).
  • Managed user disk usage by setting up quotas.
  • Administered system logs and security logs.
  • Updated software packages and applied security patches.
  • Performed hardware maintenance, upgrades and troubleshooting on workstations and servers.
  • Communicated with hardware and software vendors and equipment dealers and maintained good relations.
  • Wrote documentation for internal use about system changes, systems administration, etc.
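
For illustration, the quota and firewall bullets above map to commands like these; the user name, limits, and open ports are assumptions:

    # Set soft/hard block quotas (in KB) for a user on /home, then report usage.
    setquota -u alice 500000 550000 0 0 /home
    repquota -a

    # Minimal iptables policy: allow SSH and Apache HTTP, drop the rest.
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -j DROP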

Environment: Oracle 10g, WebLogic 8.1, Windows 2000/NT/2003 Server, Linux, UNIX, SQL Server 2005, PL/SQL, XML.

Confidential

UNIX Administrator/Support

Responsibilities:

  • Designed and implemented all network and server resources.
  • Performed user management: created user and group accounts and assigned permissions to file systems.
  • Monitored and analyzed batch jobs through the Control-M tool.
  • Installed patch clusters and firmware updates on a periodic basis.
  • Worked with other teams on planning changes to servers.
  • Managed CRONTAB jobs, batch processing, and job scheduling (see the example after this list).
  • Software installation and maintenance.
  • Security, users and groups administration.
  • Networking service, performance, and resource monitoring.
  • Worked on a 24x7 on-call rotation handling Remedy trouble tickets, break-fix work, and incident/change management.
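
Typical CRONTAB entries of the kind described above; the scripts and schedules below are placeholder assumptions:

    # m    h  dom mon dow  command   (installed via `crontab -e`)
    0      2  *   *   *    /opt/scripts/nightly_backup.sh    # nightly batch at 02:00
    */15   *  *   *   *    /opt/scripts/check_services.sh    # resource check every 15 min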

Confidential

Linux Administrator

Responsibilities:

  • Involved in supporting and monitoring production Linux systems.
  • Responsible for system administration tasks such as installing the operating system, OS patching, software installation, OS upgrades, hardware upgrades, troubleshooting, and problem resolution.
  • Responsible for installing and supporting Linux servers.
  • Worked with networking protocols such as TCP/IP, Telnet, FTP, SSH, and rlogin.
  • Worked with application teams to deploy their applications to servers; created test environments for the Oracle DBA team.
  • Supported and followed the change management process for production systems.
  • Patched Red Hat Linux servers.
  • Created status reports and project plans and attended team meetings to coordinate activities.
  • Installed and configured Veritas Cluster Services on Linux servers for highly redundant applications.
  • Used the Remedy problem and change management application to manage problem tickets and production change requests.
  • Expertise in OS-level and DB-level security.

Environment: Linux, TCP/IP, Telnet, Ubuntu, Red Hat Linux.
