
Senior Hadoop Administrator Resume


Santa Clara, CA

PROFESSIONAL SUMMARY:

  • 10+ years of IT experience, including 5+ years in Big Data Hadoop administration and 7 years in Linux/Unix/Windows administration. Experience in designing, planning, building, configuring, administering, troubleshooting, maintaining, performance monitoring, and fine-tuning large-scale Hadoop production clusters using Apache, Cloudera, and Hortonworks, on physical servers as well as in the cloud (AWS and CloudStack).
  • Experience in capacity planning, installation, configuration, and support of Hortonworks HDP and Cloudera CDH clusters
  • Experience in installation and configuration of Hadoop ecosystem components - HDFS, YARN, MapReduce, HBase, Storm, Kafka, Ranger, Spark, Pig, Hive, Sqoop, Flume, Tez, ZooKeeper, Oozie
  • Experience in setting up Hadoop High Availability (NameNode, Resource Manager, Hive, HBase) and Disaster Recovery.
  • Strong experience with Hadoop security and governance using Ranger, Kerberos, and Falcon, following security best practices.
  • Good experience in BI, data warehousing, analytics, and databases
  • Good experience with data analytics tools such as Splunk, Cognos, Tableau
  • Strong experience with cluster security tools such as Kerberos, Ranger and Knox.
  • Good knowledge of Apache Spark, Spark Streaming, and Apache Kafka
  • Experience with product upgrades, rollbacks, and applying patch fixes across product versions.
  • Experience in Commissioning and Decommissioning of nodes within a cluster.
  • Experience in job automation using Oozie, cluster coordination through ZooKeeper, and MapReduce job scheduling using the Capacity Scheduler.
  • Experience in tool integration, automation, and configuration management using Git, SVN, and Jira.
  • Experience setting up monitoring and alerting for Hadoop clusters, including dashboards, alerts, and weekly status reports covering uptime, usage, issues, etc.
  • Experience in setting up Sqoop to ingest RDBMS data into Hive and vice versa (see the sketch after this list).
  • Experience in setting up Flume and Kafka to ingest log data into HDFS.
  • Expertise in Hadoop cluster performance tuning and troubleshooting.
  • Experience setting up Hadoop Clusters on EC2 instances for Product POCs.
  • Experience in Red Hat Enterprise Linux administration and DevOps tools such as Puppet, Chef, Jenkins, and Ansible.
  • Monitor cluster jobs and performance, and fine-tune when necessary using Ambari.
  • Design, implement, test, and document a performance benchmarking strategy for the platform as well as for each use case.
  • Experience with the Continuous Integration and Continuous Deployment pipeline ecosystem including tools such as Maven, Gradle, Jenkins and Puppet configuration tools.
  • Hands-on experience in AWS provisioning and good knowledge of AWS services such as EC2, S3, VPC, IAM, and ELB
  • Maintain hardware-level stability and availability, including all break/fix issues, hardware replacement, hardware modifications, and hardware/server configurations
  • Participate in a 24x7 on-call support rotation
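
Sqoop sketch (a minimal, illustrative example of the RDBMS-to-Hive ingest described above; the MySQL host, database, table, and user names are assumptions, not from an actual engagement):

    # Ingest an RDBMS table into Hive
    sqoop import \
      --connect jdbc:mysql://dbhost01:3306/sales \
      --username etl_user -P \
      --table payments \
      --hive-import --hive-database staging --hive-table payments \
      --num-mappers 4

    # Export a Hive-backed HDFS directory back to the RDBMS ("vice versa")
    sqoop export \
      --connect jdbc:mysql://dbhost01:3306/sales \
      --username etl_user -P \
      --table payments_summary \
      --export-dir /apps/hive/warehouse/staging.db/payments_summary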

TECHNICAL SKILLS:

Hadoop/Big Data: HDFS, MapReduce, HBase, Kafka, Storm, Spark, Ranger, NiFi, Pig, Hive, Sqoop, Flume, Hue, Oozie, ZooKeeper, Apache Phoenix

Operating Systems: RHEL/SUSE/CentOS/Ubuntu/Solaris/AIX/Windows

Cloud & Virtualization: EC2, S3, SQS, Lambda, Auto Scaling, EBS volumes, RDS, Redshift, ELB, VPC, Security Groups, IAM roles and policies, EMR, and DynamoDB

Hardware: HP ProLiant SL4540 Gen8, DL560, DL580, BL660c; Dell PowerEdge R720XD rack server, M620 blade server; IBM System x3650 M4 BD; Oracle Big Data Appliance X4-2; Sun Oracle X4-2L servers

DevOps Tools: Jenkins, Gradle, Chef, Ansible, Git, GitHub, SVN, Docker, Maven, Agile, Scrum

Programming Languages: C, C++, Python, Bash, ksh, basic Java, Perl, Ruby

Web Servers: WebLogic, WebSphere, Apache Tomcat

Network Protocols: TCP/IP, HTTP, DNS, DHCP, NTP, SFTP, LDAP, SMTP, FTP, Kerberos

Databases: Oracle, MySQL, HBase, Cassandra

Scheduling: Oozie Coordinator, Autosys

PROFESSIONAL EXPERIENCE:

Confidential, Santa Clara, CA

Senior Hadoop Administrator

Responsibilities:

  • Responsible for managing a large-scale Hadoop cluster environment and handling all Hadoop environment builds, including design, capacity planning, cluster setup, performance tuning, and ongoing monitoring.
  • Implemented a large-scale Hadoop (Hortonworks HDP 2.4 stack and CDH 5) enterprise data lake of around 300 nodes for the Prod, Dev, and UAT environments
  • Upgraded Hortonworks Ambari and the HDP stack from 2.3 to 2.4 in the Dev, DR, and Prod environments.
  • Data node commissioning and decommissioning; HDFS disk rebalancing.
  • Changed Hadoop, YARN, and HBase configurations based on issues and performance; configured YARN queues based on the Capacity Scheduler for resource management.
  • Hadoop security with Kerberos: set up generic, headless, and service keytabs and Kerberos principals; created user access and user directories, allocated space quotas, and resolved user permission issues (see the sketch after this list). Created a strategy for and maintained Ranger policies for HDFS, Hive, and HBase.
  • Setting up High Availability for NameNode, Hive, and YARN (Resource Manager)
  • Monitor job performances, file system/disk-space management, cluster and database connectivity, log files, management of backup/security and troubleshooting various user issues.
  • Configured HBase replication between the Production and Disaster Recovery clusters; tested HBase performance, High Availability, and peer replication.
  • Imported data using Apache Phoenix with sqlline.py and psql.py
  • Performed full-shutdown backups using the DistCp tool and restored data from backup; set up and created snapshots.
  • Monitored cluster status through Ambari cluster management and the HBase Master web UI.
  • Imported and exported data between HDFS and Hive using Sqoop; transferred and loaded Memo and Payment datasets
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Analyzing various Hadoop log files for troubleshooting.
  • Designed, implemented, tested, and documented a performance benchmarking strategy for the platform as well as for each use case.
  • Prepared architecture documents and detailed configuration documents; maintained the HDFS directory structure and access per the standard.
  • Supported application teams through an incident management tool (ServiceNow) and fixed various issues related to the Hadoop platform.
  • Hands-on experience in diagnosing, troubleshooting various networking, hardware & Linux server's services issues and performing preventive maintenance.
  • Participate in a 24x7 on-call support rotation and off-hours maintenance windows.
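
Kerberos/HDFS onboarding sketch (a minimal example of the keytab and quota steps above, assuming an MIT KDC; the realm, principal, and path names are illustrative):

    # Create a headless principal and extract its keytab
    kadmin -p admin/admin@EXAMPLE.COM -q "addprinc -randkey etl@EXAMPLE.COM"
    kadmin -p admin/admin@EXAMPLE.COM -q "xst -k /etc/security/keytabs/etl.headless.keytab etl@EXAMPLE.COM"

    # Verify the keytab and obtain a ticket with it
    klist -kt /etc/security/keytabs/etl.headless.keytab
    kinit -kt /etc/security/keytabs/etl.headless.keytab etl@EXAMPLE.COM

    # Create the user directory and allocate a space quota
    hdfs dfs -mkdir -p /user/etl
    hdfs dfs -chown etl:hadoop /user/etl
    hdfs dfsadmin -setSpaceQuota 5t /user/etl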

Environment: Hortonworks HDP 2.4, Cloudera CDH 5, Ambari, YARN, MapReduce2, HBase, Hive, Tez, Pig, MySQL, DB2, Sqoop, Oozie, ZooKeeper, Kerberos, Ambari Metrics, Ranger, Phoenix, and Spark

Confidential, Sunnyvale, CA

Hadoop Administrator

Responsibilities:

  • Responsible for installation, configuration, supporting and managing Hadoop Clusters (200 nodes).
  • Responsible for Cluster maintenance, Monitoring, commissioning and decommissioning Data nodes, Troubleshooting, Manage and review data backups, Manage & review log files.
  • Collaborating with application teams to install operating system and Hadoop updates, patches, version upgrades.
  • Configured Flume for efficiently collecting, aggregating, and moving large amounts of log data from many different sources to HDFS (see the sketch after this list).
  • Importing and exporting structured data from different relational databases into HDFS and Hive using Sqoop.
  • Setting up High-Availability for Name node and Resource Manager
  • Secured production environments by setting up Linux users, setting up Kerberos principals.
  • Configured YARN queues - based on Capacity Scheduler for resource management.
  • Installed the Oozie workflow engine to schedule Hive and Pig scripts.
  • Hands-on experience with ZooKeeper and ZKFC in managing and configuring NameNode failover scenarios.
  • Used Spark to build fast analytics for the ETL process and constructed an ingest pipeline using Spark Streaming.
  • Hands-on experience in diagnosing, troubleshooting various networking, hardware & Linux server's services issues and performing preventive maintenance.
  • Participate in a 24x7 on-call support rotation and off-hours maintenance windows.
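
Flume sketch (a minimal, illustrative agent of the kind described above, tailing an application log into HDFS; the agent, channel, and path names are assumptions):

    # flume.conf
    agent1.sources  = src1
    agent1.channels = ch1
    agent1.sinks    = sink1

    agent1.sources.src1.type = exec
    agent1.sources.src1.command = tail -F /var/log/app/app.log
    agent1.sources.src1.channels = ch1

    agent1.channels.ch1.type = memory
    agent1.channels.ch1.capacity = 10000

    agent1.sinks.sink1.type = hdfs
    agent1.sinks.sink1.hdfs.path = /data/logs/%Y-%m-%d
    agent1.sinks.sink1.hdfs.fileType = DataStream
    agent1.sinks.sink1.hdfs.useLocalTimeStamp = true
    agent1.sinks.sink1.channel = ch1

    # Start the agent
    flume-ng agent --conf conf --conf-file flume.conf --name agent1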

Environment: Java MapReduce, Spark (Scala), HDFS, Hive, Pig, MySQL, DB2, Sqoop, Flume, Oozie, Eclipse, SVN, Maven, Jenkins.

Confidential

Linux/Hadoop Administrator

Responsibilities:

  • Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and troubleshooting, manage and review data backups, manage and review Hadoop log files.
  • Continuous monitoring and managing the Hadoop cluster through Cloudera Manager.
  • Installed Oozie workflow engine to run multiple Hive and Pig jobs.
  • Configuring Flume for efficiently collecting, aggregating and moving large amounts of log data from many different sources to HDFS.
  • Importing and exporting structured data from different relational databases into HDFS and Hive using Sqoop.
  • Managed disks and file systems using LVM on Linux and monitored them (see the sketch after this list).
  • Solved production problems when needed (24x7); developed and documented best practices
  • Planning, installation, configuration, management, and troubleshooting of the Red Hat Enterprise Linux platform for test, development, and production servers
  • Monitored Linux servers for CPU, memory, and disk utilization as part of performance monitoring.
  • Maintain hardware-level stability and availability, including all break/fix issues, hardware replacement, hardware modifications, and hardware/server configurations
  • Worked with Linux, Oracle Database, and Network teams to ensure the smooth relocation of the servers.
  • Perform physical hardware installation and configuration according to project requirements
  • Provision and manage Amazon Web Services resources for Production, QA, and Development
  • Manage off-site team for monitoring and managed hosting services (verify OS patches and backups)
  • Responsible for 24x7 Global on call support for production Issues.
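
LVM sketch (a minimal example of the disk and filesystem management above, assuming a new disk /dev/sdb and an existing ext4 logical volume; device and volume names are illustrative):

    # Add the new disk to the volume group and grow the logical volume
    pvcreate /dev/sdb
    vgextend vg_data /dev/sdb
    lvextend -L +200G /dev/vg_data/lv_data

    # Grow the filesystem to match (ext4 shown; use xfs_growfs for XFS)
    resize2fs /dev/vg_data/lv_data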

Confidential, Tempe, AZ

Application support Engineer

Responsibilities:

  • Production support and on-call support role for Java/J2EE applications
  • LAMP developer for internal applications; used Perl, Python, and PHP
  • MySQL server administration; Oracle 11g/12c administration
  • Live CD provisioning for CentOS servers; network configuration; DHCP and PXE server setup and provisioning using Cobbler
  • Tomcat and WebLogic application server administration, tuning, and deployment
  • Windows Server 2008 administration and AD integration using LDAP
  • Vulnerability remediation using custom scripts
  • Administered Nagios for infrastructure monitoring (see the sketch below)
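
Nagios sketch (a minimal, illustrative host/service definition of the kind used for this monitoring; the host name and address are assumptions):

    define host {
        use        linux-server
        host_name  web01
        address    192.0.2.10
    }

    define service {
        use                  generic-service
        host_name            web01
        service_description  HTTP
        check_command        check_http
    }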

Confidential, AL

UNIX Administrator

Responsibilities:

  • Sun Solaris server administration
  • Upgraded Sun servers from Solaris 10 to Solaris 11
  • Configured WebSphere servers, including tuning and administration
  • Virtualization using LDOMs on Solaris (see the sketch after this list)
  • Migrated servers from AIX to RHEL 6
  • Client-side administration of EMC SAN storage
  • DR setup and testing, cutover, and business continuity plan implementations
  • POCs for application migration to the cloud/AWS
  • Responsible for the Installation, Configuration and Maintenance of Sun Enterprise Servers
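
LDOM sketch (a minimal example of creating a guest logical domain with the ldm command, as in the virtualization work above; the domain name and resource sizes are illustrative):

    # Define the guest domain and assign CPU and memory
    ldm add-domain ldom1
    ldm add-vcpu 8 ldom1
    ldm add-memory 16g ldom1

    # Attach a virtual network device and a virtual disk
    ldm add-vnet vnet1 primary-vsw0 ldom1
    ldm add-vdsdev /dev/zvol/dsk/rpool/ldom1 vol1@primary-vds0
    ldm add-vdisk vdisk1 vol1@primary-vds0 ldom1

    # Bind resources and start the domain
    ldm bind-domain ldom1
    ldm start-domain ldom1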

Confidential, CA

LINUX/UNIX Administrator

Responsibilities:

  • Handling on-call duties and resolving critical tickets after business hours.
  • Preparing SLA justifications for missed SLAs (Sev 1 & 2 only)
  • Handling the restoration of files from TSM and NetWorker backups.
  • Attending bridge calls for Sev 1 & Sev 2 issues and working until the issues are resolved.
  • Working on the ticketing process based on ITIL (IT Infrastructure Library); working on automated and manual tickets
  • Replacement of hardware (motherboards, memory, media drives, NIC cards, HBAs) by coordinating with the onsite team/vendor; installing patches on all servers every quarter per customer OLA
  • Performance tuning and monitoring using netstat, iostat, vmstat, and sar (see the sketch after this list).
  • Supported and administered Veritas Volume Manager and Veritas Cluster products.
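
Performance triage sketch (the commands above in a typical 5-second/3-iteration sampling pattern):

    vmstat 5 3       # run queue, memory, swap, CPU
    iostat -x 5 3    # per-device I/O latency and utilization
    sar -u 5 3       # CPU usage over the sampling interval
    netstat -i       # interface packet counts and errors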
