Cloudera Hadoop Administrator Resume

Houston, TX

SUMMARY:

  • Over 7 years of IT experience, including 3+ years with the Hadoop ecosystem, covering installation and administration of UNIX/Linux servers and configuration of Hadoop ecosystem components in existing clusters.
  • Experience installing, configuring, and managing MapR, Hortonworks, and Cloudera distributions.
  • Hands-on experience installing, configuring, monitoring, and using Hadoop components such as MapReduce, HDFS, HBase, Hive, Sqoop, Pig, ZooKeeper, Oozie, Apache Spark, and Impala.
  • Working experience building and supporting large-scale Hadoop environments, including design, configuration, installation, performance tuning, and monitoring.
  • Working knowledge of monitoring tools and frameworks such as Splunk, InfluxDB, Prometheus, Sysdig, Datadog, AppDynamics, New Relic, and Nagios.
  • Experience setting up automated monitoring and escalation infrastructure for Hadoop clusters using Ganglia and Nagios (a health-check sketch follows this list).
  • Standardized Splunk forwarder deployment, configuration, and maintenance across a variety of Linux platforms (see the forwarder setup sketch below); also worked with DevOps tools such as Puppet and Git.
  • Hands-on experience configuring Hadoop clusters both in professional on-premises environments and on Amazon Web Services (AWS) EC2 instances.
  • Experience with the complete software development life cycle, including design, development, testing, and implementation of moderately to highly complex systems.
  • Hands-on experience installing, configuring, supporting, and managing Hadoop clusters using the Apache, Hortonworks, and Cloudera distributions and MapReduce.
  • Extensive experience installing, configuring, and administering Hadoop clusters for major Hadoop distributions such as CDH5 and HDP.
  • Experience configuring Ranger and Knox to secure Hadoop services (Hive, HBase, HDFS, etc.), and administering Kafka and Flume streaming on the Cloudera distribution (see the topic-administration sketch below).
  • Developed automated Unix shell scripts for RUNSTATS, REORG, REBIND, COPY, LOAD, BACKUP, IMPORT, EXPORT, and other database-related activities (a maintenance-script sketch follows this list).
  • Installed, configured, and maintained several Hadoop clusters, including HDFS, YARN, Hive, HBase, Knox, Kafka, Oozie, Ranger, Atlas, Infra Solr, ZooKeeper, and NiFi, in Kerberized environments.
  • Experienced in deploying, maintaining, and troubleshooting applications on Microsoft Azure cloud infrastructure; excellent knowledge of NoSQL databases such as HBase and Cassandra.
  • Experience with large-scale Hadoop clusters, handling all environment builds, including design, cluster setup, and performance tuning.
  • Implemented release processes, applying DevOps and continuous delivery methodologies to existing builds and deployments; also experienced with scripting languages such as Python, Perl, and shell.
  • Involved in architecting our storage service to meet changing requirements for scaling, reliability, performance, and manageability.
  • Modified reports and Talend ETL jobs based on feedback from QA testers and users in development and staging environments.
  • Worked with the systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
  • Deployed Grafana dashboards for monitoring cluster nodes, using Graphite as the data source and collectd as the metric sender (a metric-push sketch follows this list).
  • Experienced with the workflow scheduling and monitoring tools Rundeck and Control-M.
  • Proficient with application servers such as WebSphere, WebLogic, JBoss, and Tomcat.
  • Working experience designing and implementing complete end-to-end Hadoop infrastructure.
  • Good experience designing, configuring, and managing backup and disaster recovery for Hadoop data (see the DistCp sketch below).
  • Experienced in developing MapReduce programs with Apache Hadoop for working with big data.
  • Responsible for designing highly scalable big data clusters to support varied data storage and computation needs across Hadoop, Cassandra, MongoDB, and Elasticsearch.
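
The following is a minimal sketch of the kind of Nagios-style health check used for the automated Hadoop monitoring described above. The thresholds and the parsing of the dfsadmin report are assumptions; the report format varies between Hadoop releases.

    #!/usr/bin/env bash
    # Hypothetical Nagios plugin: alert on dead DataNodes.
    # Assumes the hdfs CLI is on PATH and the user may run dfsadmin.
    WARN=${1:-1}    # dead-node count that raises WARNING
    CRIT=${2:-3}    # dead-node count that raises CRITICAL

    # Newer Hadoop releases print a "Dead datanodes (N):" line; adjust
    # the pattern for older report formats.
    dead=$(hdfs dfsadmin -report 2>/dev/null \
            | sed -n 's/^Dead datanodes (\([0-9]*\)).*/\1/p')
    dead=${dead:-0}

    if [ "$dead" -ge "$CRIT" ]; then
        echo "CRITICAL - $dead dead DataNodes"; exit 2
    elif [ "$dead" -ge "$WARN" ]; then
        echo "WARNING - $dead dead DataNodes"; exit 1
    fi
    echo "OK - $dead dead DataNodes"; exit 0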
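
A standardized forwarder rollout like the one mentioned above typically reduces to a small script pushed by Puppet. This sketch assumes the default universal forwarder install path; the deployment server and credentials are placeholders.

    #!/usr/bin/env bash
    set -euo pipefail
    SPLUNK_HOME=/opt/splunkforwarder
    DEPLOY_SERVER="deploy.example.com:8089"   # hypothetical deployment server

    # First start accepts the license non-interactively.
    "$SPLUNK_HOME/bin/splunk" start --accept-license --answer-yes --no-prompt
    # Point the forwarder at the deployment server for centralized config;
    # admin:changeme is a placeholder credential.
    "$SPLUNK_HOME/bin/splunk" set deploy-poll "$DEPLOY_SERVER" -auth admin:changeme
    "$SPLUNK_HOME/bin/splunk" enable boot-start
    "$SPLUNK_HOME/bin/splunk" restart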
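
Routine Kafka topic administration on a Cloudera cluster usually goes through the kafka-topics tool; this sketch uses placeholder broker, topic, and sizing values (older brokers take --zookeeper instead of --bootstrap-server).

    # Create a topic; partition and replication counts are illustrative.
    kafka-topics --bootstrap-server broker1.example.com:9092 \
        --create --topic app-events --partitions 6 --replication-factor 3

    # Verify the new topic's layout.
    kafka-topics --bootstrap-server broker1.example.com:9092 \
        --describe --topic app-events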
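
The database-maintenance bullet above refers to DB2 CLP commands; here is a minimal shell sketch, with the database name and table list as placeholders.

    #!/usr/bin/env bash
    set -euo pipefail
    DBNAME="SAMPLE"                       # placeholder database
    TABLES="APP.ORDERS APP.CUSTOMERS"     # placeholder tables

    db2 connect to "$DBNAME"
    for t in $TABLES; do
        db2 "REORG TABLE $t"              # reclaim space, re-cluster rows
        db2 "RUNSTATS ON TABLE $t WITH DISTRIBUTION AND DETAILED INDEXES ALL"
    done
    db2 terminate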
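
In the Grafana/Graphite pipeline above, metrics normally arrive via collectd's write_graphite plugin, but the same plaintext protocol (port 2003) can be exercised by hand; the host and metric path here are hypothetical.

    # Push one datapoint: <metric.path> <value> <unix-timestamp>
    # (-q0 closes after EOF on Debian-style netcat; use -w1 elsewhere)
    echo "clusters.prod.node01.hdfs.dead_datanodes 0 $(date +%s)" \
        | nc -q0 graphite.example.com 2003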
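
Backup and disaster recovery for HDFS data is commonly scripted around DistCp; this sketch copies a warehouse path to a DR cluster, with both NameNode URIs as placeholders.

    # -update copies only new/changed files; -p preserves file attributes.
    hadoop distcp -update -p \
        hdfs://prod-nn.example.com:8020/data/warehouse \
        hdfs://dr-nn.example.com:8020/backups/warehouse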

WORK EXPERIENCE:

Cloudera Hadoop Administrator

Confidential, Houston, TX

Hadoop Administrator

Confidential, Johns Creek, GA

Hadoop Admin

Confidential, Englewood, CO

Hadoop Admin

Confidential, Boston, MA

Hadoop Admin

Confidential, Minneapolis, MN

Linux/Systems Engineer

Confidential

