Senior Hadoop Administrator Resume

Orlando, FL

PROFESSIONAL SUMMARY:

  • Overall 8 years of professional Information Technology experience in Hadoop, Linux, and Database Administration activities such as installation, configuration, and maintenance of systems/clusters.
  • Extensive experience in Linux Administration and Big Data technologies as a Hadoop Administrator.
  • Hands-on experience with Hadoop clusters using Hortonworks (HDP), Cloudera (CDH3, CDH4), Oracle Big Data, and YARN distribution platforms.
  • Skilled in Apache Hadoop, MapReduce, Pig, Impala, Hive, HBase, ZooKeeper, Sqoop, Flume, Oozie, Kafka, Storm, Spark, JavaScript, and J2EE.
  • Experience in deploying and managing multi-node development and production Hadoop clusters with different Hadoop components (Hive, Pig, Sqoop, Oozie, Flume, HCatalog, HBase, ZooKeeper) using Hortonworks Ambari.
  • Good experience in creating various database objects like tables, stored procedures, functions, and triggers using SQL, PL/SQL and DB2.
  • Used Apache Falcon to support data retention policies for Hive/HDFS.
  • Experience in configuring NameNode High Availability and NameNode Federation, with in-depth knowledge of ZooKeeper for cluster coordination services.
  • Experience in designing, configuring, and managing backup and disaster recovery for Hadoop data.
  • Experience in administering Tableau and Greenplum database instances in various environments.
  • Experience in administration of Kafka and Flume streaming using the Cloudera distribution.
  • Hands-on experience in analyzing log files for Hadoop and ecosystem services and finding root causes.
  • Extensive knowledge of Tableau in enterprise environments and Tableau administration experience, including technical support, troubleshooting, reporting, and monitoring of system usage.
  • Experience in commissioning, decommissioning, balancing, and managing nodes and tuning servers for optimal cluster performance.
  • Experience in importing and exporting data using Sqoop between HDFS and relational database systems/mainframes (see the Sqoop sketch after this list).
  • Worked on NoSQL databases including HBase, Cassandra and MongoDB.
  • Designed and implemented security for Hadoop clusters with Kerberos secure authentication.
  • Hands-on experience with Nagios and Ganglia tools for cluster monitoring.
  • Experience in scheduling all Hadoop/Hive/Sqoop/HBase jobs using Oozie.
  • Knowledge of Data Warehousing concepts, the Cognos 8 BI Suite, and Business Objects.
  • Experience in HDFS data storage and support for running map-reduce jobs.
  • Experience in installing firmware upgrades, kernel patches, system configuration, and performance tuning on Unix/Linux systems.
  • Expert in Linux performance monitoring, kernel tuning, load balancing, health checks, and maintaining compliance with specifications.
  • Hands-on experience with ZooKeeper and ZKFC for managing and configuring NameNode failover scenarios.
  • Team player with good communication and interpersonal skills and a goal-oriented approach to problem solving.
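
A minimal sketch of the kind of Sqoop import and export commands referenced in the list above; the JDBC URL, credentials, table names, and HDFS paths are placeholders rather than actual project details.

    # Import a relational table into HDFS (connection details and paths are illustrative).
    sqoop import \
      --connect jdbc:mysql://dbhost:3306/salesdb \
      --username etl_user -P \
      --table orders \
      --target-dir /data/raw/orders \
      --num-mappers 4

    # Export curated results from HDFS back to the relational database.
    sqoop export \
      --connect jdbc:mysql://dbhost:3306/salesdb \
      --username etl_user -P \
      --table orders_summary \
      --export-dir /data/curated/orders_summary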

TECHNICAL SKILLS:

Big Data Technologies: Hadoop, HDFS, MapReduce, YARN, Pig, Hive, HBase, ZooKeeper, Oozie, Ambari, Kerberos, Knox, Ranger, Sentry, Spark, Tez, Accumulo, Impala, Hue, Storm, Kafka, Flume, Sqoop, Solr.

Tools & Utilities: HP Service Manager, Remedy, Maximo, Nagios, Ambari, Chipre, Ganglia & SharePoint

Distributions: Cloudera, Hortonworks (HDP).

Operating Systems: Linux, AIX, CentOS, Solaris & Windows.

Databases: Oracle 10g/11g, 12c, DB2, MySQL, HBase, Cassandra, MongoDB.

Backups: Veritas NetBackup & TSM Backup.

Virtualization: VMware, vSphere, VIO.

Scripting Languages: Shell, Perl & Python.

PROFESSIONAL EXPERIENCE:

Confidential, Orlando, FL

Senior Hadoop Administrator

Responsibilities:

  • Working on the Hortonworks Hadoop distribution, managing services such as HDFS, MapReduce2, Hive, Pig, HBase, Sqoop, Flume, Spark, Ambari Metrics, ZooKeeper, Falcon, and Oozie across four clusters spanning LAB, DEV, QA, and PROD.
  • Monitor Hadoop cluster connectivity and security with the Ambari monitoring system.
  • Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades.
  • Responsible for cluster maintenance, monitoring, commissioning and decommissioning DataNodes, troubleshooting, reviewing data backups, and reviewing log files.
  • Installed, tested, and deployed monitoring solutions with Splunk services and utilized Splunk apps.
  • Day-to-day responsibilities include resolving developer issues, deploying code from one environment to another, provisioning access for new users, providing quick solutions to reduce impact, documenting them, and preventing future issues.
  • Involved in analyzing system failures, identifying root causes, and recommending courses of action.
  • Interacting with HDP support, logging issues in the support portal, and fixing them per the recommendations.
  • Imported logs from web servers with Flume to ingest the data into HDFS.
  • Used Flume to load data from the local file system into HDFS.
  • Exported data from HDFS to relational databases with Sqoop.
  • Experience in developing Splunk queries and dashboards by evaluating log sources.
  • Fine-tuned Hive jobs for optimized performance.
  • Partitioned and queried the data in Hive for further analysis by the BI team.
  • Wrote scripts to configure alerts for capacity scheduling and cluster monitoring.
  • Involved in installing and configuring Kerberos for the authentication of users and Hadoop daemons (see the sketch after this list).
  • Expertise in setting up policies and ACLs for Hadoop services using Apache Ranger.
  • Performed auditing of user access logs using Apache Ranger.
  • Monitored clusters with Ganglia and Nagios.
  • Supported setting up the QA environment and updating configurations for implementing scripts with Pig and Sqoop.
  • Work with a global team to provide 24x7 support and 99.9% system uptime.
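
A minimal sketch of the Kerberos onboarding steps referenced in the list above; the realm, user name, and keytab path are assumptions for illustration only.

    # Create a principal and keytab for a new user (realm and names are placeholders).
    kadmin.local -q "addprinc -randkey jdoe@EXAMPLE.COM"
    kadmin.local -q "xst -k /etc/security/keytabs/jdoe.keytab jdoe@EXAMPLE.COM"

    # Authenticate with the keytab and confirm the ticket.
    kinit -kt /etc/security/keytabs/jdoe.keytab jdoe@EXAMPLE.COM
    klist

    # Smoke-test HDFS access with the new credentials.
    hdfs dfs -ls /user/jdoe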

Environment: Hue, Oozie, Eclipse, HBase, HDFS, MapReduce, Hive, Pig, Flume, Sqoop, Ranger, Splunk.

Confidential, San Francisco, CA

Hadoop Administrator

Responsibilities:

  • Responsible for cluster maintenance, monitoring, managing, commissioning and decommissioning DataNodes, troubleshooting, reviewing data backups, and managing and reviewing log files for Hortonworks.
  • Added/installed new components and removed them through Cloudera.
  • Monitored workload, job performance, and capacity planning using Cloudera.
  • Performed major and minor upgrades and patch updates.
  • Created and managed cron jobs.
  • Installed Hadoop ecosystem components like Pig, Hive, HBase, and Sqoop in a cluster.
  • Experience in setting up tools like Ganglia for monitoring Hadoop cluster.
  • Handling the data movement between HDFS and different web sources using Flume and Sqoop.
  • Extracted data from NoSQL databases like HBase through Sqoop and placed it in HDFS for processing.
  • Installed Oozie workflow engine to run multiple Hive and Pig jobs.
  • Building and maintaining scalable data pipelines using the Hadoop ecosystem and other open source components like Hive and HBase.
  • Installed and configured HA for Hue pointing to the Hadoop cluster in Cloudera Manager.
  • Deep and thorough understanding of ETL tools and how they can be applied in a Big Data environment, supporting and managing Hadoop clusters.
  • Installed and configured MapReduce and HDFS, and developed multiple MapReduce jobs in Java for data cleaning and pre-processing.
  • Worked with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.
  • Extensively worked with Informatica to extract data from flat files, Oracle, and Teradata and to load the data into the target database.
  • Responsible for developing data pipelines using HDInsight, Flume, Sqoop, and Pig to extract data from weblogs and store it in HDFS (see the Flume sketch after this list).
  • Performed transformations, cleaning, and filtering on imported data using Hive and MapReduce, and loaded the final data into HDFS.
  • Commissioned DataNodes as data grew and decommissioned DataNodes from the cluster when hardware degraded.
  • Set up and managed NameNode HA to avoid single points of failure in large clusters.
  • Worked with data delivery teams to set up new Hadoop and Linux users, set up Kerberos principals, and test HDFS and Hive access.
  • Held regular discussions with other technical teams regarding upgrades, process changes, special processing, and feedback.
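
A minimal sketch of a Flume agent of the kind used in the weblog pipeline above; the agent, source, channel, and sink names, the log path, and the HDFS directory are assumptions.

    # Contents of /etc/flume/conf/weblog-agent.conf (names and paths are placeholders).
    a1.sources  = r1
    a1.channels = c1
    a1.sinks    = k1

    # Tail the web server access log.
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /var/log/httpd/access_log
    a1.sources.r1.channels = c1

    # Buffer events in memory.
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 10000

    # Write events to date-partitioned HDFS directories.
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.channel = c1
    a1.sinks.k1.hdfs.path = /data/raw/weblogs/%Y-%m-%d
    a1.sinks.k1.hdfs.fileType = DataStream
    a1.sinks.k1.hdfs.useLocalTimeStamp = true

    # Start the agent:
    flume-ng agent --name a1 --conf /etc/flume/conf -f /etc/flume/conf/weblog-agent.conf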

Environment: Linux, Shell Scripting, Tableau, MapReduce, Teradata, SQL Server, NoSQL, Cloudera, Flume, Sqoop, Chef, Puppet, Pig, Hive, ZooKeeper and HBase.

Confidential, El Segundo, CA

Hadoop Administrator

Responsibilities:

  • Installed and configured Hortonworks Hadoop from scratch for development, along with Hadoop tools like Hive, HBase, Sqoop, ZooKeeper, and Flume.
  • Administered cluster maintenance, commissioning and decommissioning of DataNodes, cluster monitoring, and troubleshooting.
  • Added and removed nodes in an existing Hadoop cluster.
  • Implemented backup configurations and recovery from NameNode failures.
  • Monitored systems and services, architecture design and implementation of Hadoop deployment, configuration management, backup, and disaster recovery systems and procedures.
  • Configured property files like core-site.xml, hdfs-site.xml, and mapred-site.xml based on job requirements (see the sketch after this list).
  • Installed and configured HDFS, ZooKeeper, MapReduce, YARN, HBase, Hive, Sqoop, and Oozie.
  • Integrated Hive and HBase to perform data analysis.
  • Applied standard backup policies to ensure high availability of the cluster.
  • Involved in analyzing system failures, identifying root causes, and recommending courses of action. Documented system processes and procedures for future reference.
  • Involved in installing and configuring Apache Ranger using the Ambari Web UI.
  • Worked with systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
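
A minimal sketch of verifying the kind of core-site.xml and hdfs-site.xml properties mentioned above from the command line; the example values in the comments are illustrative, not actual cluster settings.

    # Confirm the effective values picked up after configuration changes.
    hdfs getconf -confKey fs.defaultFS      # from core-site.xml, e.g. hdfs://namenode:8020
    hdfs getconf -confKey dfs.replication   # from hdfs-site.xml, e.g. 3
    hdfs getconf -confKey dfs.blocksize     # from hdfs-site.xml, e.g. 134217728 (128 MB)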

Environment: Hadoop, HDFS, ZooKeeper, MapReduce, YARN, HBase, Apache Ranger, Hive, Sqoop, Oozie, Linux (CentOS, Ubuntu, Red Hat).

Confidential

Linux Administrator

Responsibilities:

  • Hadoop installation and configuration of multiple nodes using the Cloudera platform.
  • Major and minor upgrades and patch updates.
  • Handling the installation and configuration of a Hadoop cluster.
  • Building and maintaining scalable data pipelines using the Hadoop ecosystem and other open source components like Hive and HBase.
  • Handling the data exchange between HDFS and different web sources using Flume and Sqoop.
  • Monitoring the data streaming between web sources and HDFS.
  • Monitoring the Hadoop cluster functioning through monitoring tools.
  • Close monitoring and analysis of MapReduce job executions on the cluster at the task level.
  • Provided inputs to development regarding efficient utilization of resources like memory and CPU based on running statistics of Map and Reduce tasks.
  • Changed cluster configuration properties based on the volume of data being processed by the cluster.
  • Set up automated processes to analyze system and Hadoop log files for predefined errors and send alerts to appropriate groups (see the sketch after this list).
  • Excellent working knowledge of SQL and databases.
  • Commissioned and decommissioned DataNodes from the cluster in case of problems.
  • Set up automated processes to archive/clean unwanted data on the cluster, in particular on the NameNode and Secondary NameNode.
  • Set up and managed NameNode HA to avoid single points of failure in large clusters.
  • Held regular discussions with other technical teams regarding upgrades, process changes, special processing, and feedback.
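
A minimal sketch of the automated log-scan alerting mentioned above; the log directory, error patterns, script path, and recipient address are assumptions.

    #!/bin/bash
    # Scan Hadoop daemon logs for predefined error patterns and alert the admin group.
    LOG_DIR=/var/log/hadoop/hdfs
    ALERT_TO=hadoop-admins@example.com

    MATCHES=$(grep -hE "FATAL|ERROR" "$LOG_DIR"/*.log 2>/dev/null | tail -n 50)
    if [ -n "$MATCHES" ]; then
      echo "$MATCHES" | mail -s "Hadoop log errors on $(hostname)" "$ALERT_TO"
    fi

Such a script would typically be driven from cron, e.g. every 15 minutes: */15 * * * * /usr/local/bin/check_hadoop_logs.sh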

Environment: Java, Linux, Shell Scripting, Teradata, SQL server, Cloudera Hadoop, Flume, Sqoop, Pig, Hive, Zookeeper and HBase.

Confidential

Database Administrator

Responsibilities:

  • Ensured smooth functioning and 24x7 availability of databases
  • Oracle database installation, upgrades, instance configuration, tuning, schema maintenance
  • Installed Oracle Grid Infrastructure 11g/12c for New projects
  • Responsible for configuration and implementation of backup strategies using RMAN (see the sketch after this list)
  • Maintained a stringent backup strategy
  • Prepared and maintained recovery plans in case of failures
  • Troubleshot other technical errors as they arose
  • Upgraded Oracle database from release 10g to 11g
  • Tuning of the database
  • Wrote UNIX shell scripts to automate DBA daily activity.
  • Purging of Database
  • Refreshed Staging and Development environments using database cloning
  • Maintained user profiles and security administration
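
A minimal sketch of an RMAN backup run of the kind scripted above; the ORACLE_SID, script path, backup level, and retention handling are illustrative rather than the actual policy.

    #!/bin/bash
    # Nightly RMAN backup: level-0 backup of the database plus archived logs, then prune obsolete backups.
    export ORACLE_SID=ORCL
    rman target / @/home/oracle/scripts/nightly_backup.rman

    # Contents of nightly_backup.rman:
    # RUN {
    #   BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
    #   DELETE NOPROMPT OBSOLETE;
    # }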

Environment: Oracle 11g/12c, PL/SQL, Sun Solaris, Linux, RAC.
