
Big Data Admin (Linux and Hadoop Administration) Resume


CAREER OBJECTIVE:

To obtain a position as a System/Hadoop/Cloud Administrator where I can apply my strong enterprise-level system administration, storage administration, big data/Hadoop administration and application support skills.

SUMMARY:

  • Certified Linux, Networking, Cloud and Big Data/Hadoop Professional
  • Over ten years of experience in Red Hat Enterprise Linux administration
  • Experience supporting mission critical infrastructure in an environment of 40,000+ servers deployed globally
  • Experience working with multi-node Cloudera and Hortonworks Hadoop clusters in multiple data centers
  • Experience in supporting AWS Platform for real-time data ingestion to Hadoop cluster
  • Experience administering large-scale, distributed PHP (LAMP/LEMP stack) and Ruby web applications
  • Over five years of experience in virtualization with VMware vSphere/ESX 4 and 5
  • Knowledge of the Symantec NetBackup backup suite and Veritas Cluster Server for high availability
  • Experience working in and supporting a 24/7 mission-critical production environment as a member of a global system administration team

TECHNICAL SKILLS:

Operating Systems: Linux (Red Hat, Fedora, Ubuntu), Windows, Unix (Solaris), Mac OS X

Virtualization: VMware ESXi (4, 4.1, 5), VMware vCenter Server, VMware vSphere Client

Software/Applications/Tools: Nagios, Cacti, Puppet, Git, Sphinx, NetBackup, VCS, AutoSys, IBM Netcool, Jira, Capistrano, Cobbler, Logstash, New Relic, Redis, Monit, Zabbix, SaltStack, Control-M, Red Hat Identity Management, Jenkins, Ansible

Storage: NAS (NetApp Filers), SAN (Hitachi Arrays), IBM Tape Library, Symantec VxVM

Distributed File Systems: NFS, AFS, CIFS, HDFS

Networking: TCP/IP, DNS/BIND, DHCP, NIS, SSH, Telnet, FTP, VLAN, Software Load Balancer (Citrix NetScaler VPX)

Programming: Shell Scripting, Perl, Java, C, C++, SQL

Web/FTP Servers: Apache, Nginx, ProFTPD

Databases: MySQL, Oracle Exadata, PostgreSQL

Big Data/Cloud: CDH, HDP, HDF, HDFS, YARN/MapReduce2, Oozie, Hue, HBase, Hive, Ranger, Spark, NiFi, AWS S3, Kinesis, Lambda

WORK EXPERIENCE:

Big Data Admin (Linux and Hadoop Administration)

Confidential

Responsibilities:

  • Maintain Hortonworks Hadoop Production, Development and QA environments by monitoring their performance to ensure functionality, integrity and security, and compliance with information security and change management processes
  • Build and deploy new RHEL servers to expand the capacity of the existing Hadoop cluster
  • Lead admin for all aspects of the HDFS data-at-rest encryption project, from POC to Production, implementing it successfully to meet Rogers security standards (a sketch of the commands involved appears after this list)
  • Monitor Red Hat Linux server health (Master, Edge, Worker nodes) in Zabbix and troubleshoot/resolve reported issues
  • Set up Production, Development and QA environments for new project intakes on RHEL Edge nodes, HDFS and NetApp volumes with Ansible playbooks
  • Primary point of contact to provision and manage user access to the Hadoop cluster with Red Hat Identity Management, Ranger, Kerberos and extended ACLs
  • Support AWS platform (S3, Kinesis and Lambda) and HDF/NiFi cluster for real-time data ingestion to Hive
  • Provide support for Hadoop's daily operations to ensure the delivery of timely, accurate and reliable data to business users. This includes monitoring processes, updating/maintaining the appropriate system documentation, identifying potential problems and implementing solutions to ensure the efficiency and reliability of the operations
  • Perform code deployment to production for new projects using Jenkins pipeline
  • Identify and troubleshoot problems as they arise, which may involve working with other areas of the team for resolution
  • Monitor and administer automated Control-M and manual data integration jobs to verify execution and measure performance
  • Perform application optimization by working with engineers, developers and customers
  • Provide technical assistance, advice and guidance to other team members and business users in response to enquiries about the functionality of applications and data in the Enterprise Big Data environment, to ensure efficient and uninterrupted service to business users
  • Restore system continuity by contributing to incident and disaster recovery
  • Provide on-call assistance for the Hadoop/Big Data cluster, jobs and related problems
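
A minimal sketch of the HDFS data-at-rest encryption steps referenced above, assuming a Hadoop/Ranger KMS is already configured; the key name and path are illustrative only:

    # Create an encryption key in the KMS (run as an authorized key admin)
    hadoop key create projectx_key -size 256

    # Create an empty target directory and turn it into an encryption zone
    hdfs dfs -mkdir -p /data/projectx/secure
    hdfs crypto -createZone -keyName projectx_key -path /data/projectx/secure

    # Verify the zone; files written under it are now encrypted at rest
    hdfs crypto -listZones

Encryption zones can only be created on empty directories by an HDFS administrator, so existing data has to be copied into the zone (for example with distcp) to be encrypted.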

IT Operations Analyst

Confidential

Responsibilities:

  • As a member of the Production and Systems Support team, responsible for administering the Bank's Big Data/Cloudera Hadoop platform in physical environments
  • Administer multi-node Hadoop clusters in Production, PAT and SIT/DEV environments
  • Perform Hadoop configuration changes and restart affected components to apply changes
  • Monitor Red Hat Linux server health (Master, Edge, Worker nodes) in Zabbix and troubleshoot/resolve reported issues
  • Work with Cisco UCS Manager to manage physical servers in multiple data centers
  • Work with hardware vendor to schedule and replace failed server hardware
  • Perform, troubleshoot and monitor daily ingestion/extraction with custom tools/scripts and Oozie workflow on Hue
  • Troubleshoot and resolve TIBCO MFT file transfer issues and work with the Business, ETL and File Transfer teams to re-deliver failed/missed files to the Hadoop clusters
  • Performance-test new projects in the PAT environment before PROD deployment
  • Responsible for migrating multiple projects from the existing PAT cluster to the new PAT cluster
  • Write shell scripts to report on and automate repetitive tasks
  • Set up DEV, SIT and PAT environments on the Edge/Landing server and HDFS for new projects
  • Plan, schedule and apply patches on Linux servers to mitigate security vulnerabilities
  • Set up directory-level encryption on the Edge/Landing node and in HDFS to meet PCI compliance
  • Manage and apply extended ACLs on all local and HDFS filesystems to restrict access to data (see the sketch after this list)
  • Work with developers to find the root cause of failed ingestions by the FileMonitor application
  • Work closely with the Hadoop Architect/Engineering team to test new tools and features
  • Ensure daily server backups complete successfully with the Tivoli Storage Manager server, resolve any agent issues and restore files when required
  • Monitor local and HDFS disk space usage and perform regular cleanup
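
A minimal sketch of the extended ACL usage referenced above; the paths and group name are illustrative only:

    # Local filesystem: grant a project group read/execute access beyond the base permissions
    setfacl -m g:projectx_ro:r-x /data/landing/projectx
    getfacl /data/landing/projectx

    # HDFS: apply the equivalent ACL plus a default ACL so new files inherit it
    hdfs dfs -setfacl -m group:projectx_ro:r-x /data/projectx
    hdfs dfs -setfacl -m default:group:projectx_ro:r-x /data/projectx
    hdfs dfs -getfacl /data/projectx

HDFS ACLs require dfs.namenode.acls.enabled to be set to true on the NameNode.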

Linux Systems Administrator

Confidential

Responsibilities:

  • Install, configure, upgrade and support RedHat Enterprise Linux 5/6 servers and packages in VMware vSphere/ESX environment
  • Create, manage and migrate clustered NetApp NAS volumes/shares and allocate/deallocate storage to servers
  • Identify and troubleshoot network, server, storage configuration and application performance related issues/errors
  • Implement and administer the configuration automation tool Puppet to manage system configuration files from a centralized Puppet server
  • Configure and maintain the monitoring tools Nagios and Cacti
  • Plan, schedule and apply patches on servers and perform other scheduled changes
  • Build and support development and QA environments for the development and QA teams
  • Perform regular production deployment for PHP and Ruby web applications
  • Install and configure WordPress, Apache and MySQL for WordPress blogs
  • Manage internal DNS on a Windows Server 2008 Domain Controller for the QA/corporate environment and external DNS (Safenames, UltraDNS) for the production environment
  • Carried out key tasks during two major data center migrations to accommodate business growth
  • Write and maintain shell scripts for production deployments, backups and task automation (a minimal example appears after this list)
  • Manage all production and corporate backup policies and ensure all production and corporate data are backed up to both the NetApp NAS backup volume and offsite storage
  • Configure Apache, Nginx web servers and ProFTPD FTP servers
  • Manage Cobbler installation server for PXE/network installation of Operating Systems
  • Designed, implemented and configured a centralized system and application log server using Logstash, Elasticsearch and Redis
  • Provide support as part of a 24x7 on-call rotation for production issues
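
A minimal sketch of the kind of backup script referenced above, assuming an NFS-mounted NetApp backup volume; the paths and retention period are hypothetical:

    #!/bin/bash
    # Nightly backup of application data to the NetApp NAS backup volume (illustrative paths)
    set -euo pipefail

    SRC="/var/www/app"
    DEST="/mnt/netapp_backup/app/$(date +%F)"
    RETENTION_DAYS=14

    mkdir -p "$DEST"
    rsync -a --delete "$SRC/" "$DEST/"

    # Prune dated backup directories older than the retention period
    find /mnt/netapp_backup/app -mindepth 1 -maxdepth 1 -type d -mtime +"$RETENTION_DAYS" -exec rm -rf {} +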

Senior IT Consultant

Confidential

Responsibilities:

  • Perform assorted Unix/Linux Administration tasks, including daily audits (using in-house tool) on Linux, Solaris, VMware ESXi hosts, SAN and NAS devices, and resolve reported errors/issues
  • Handle immediate escalations from the Level 1 monitoring team for critical alerts
  • Manage storage using the volume management tool Veritas Volume Manager (VxVM); typical commands are sketched after this list
  • Work with Veritas Cluster Server (VCS) to create and administer clustering configurations and resolve clustering problems globally
  • Deploy VMware ESXi 4 hosts and virtual machines and manage them with vCenter Server
  • Install, configure and upgrade operating systems and application packages, and decommission end-of-life servers
  • Aid in preparing controlled changes introduced each weekend into the Linux, Solaris and storage environments
  • Handle storage allocation/deallocation and migration requests from business departments on both Hitachi SAN arrays and NetApp NAS filers
  • Administer both disk backup (Hitachi arrays) and tape backup (IBM tape library) configurations and problems, and troubleshoot backup job failures (including initial troubleshooting for backup failures for Windows servers) with Symantec NetBackup
  • Perform immediate and critical production data restore from backup media on user request
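
A minimal sketch of routine VxVM tasks of the kind referenced above; the disk, disk group and volume names are illustrative only:

    # List disks visible to VxVM and initialize a new disk group on a free disk
    vxdisk list
    vxdg init appdg appdg01=hitachi0_1234

    # Create a 50 GB volume, put a VxFS filesystem on it and mount it
    vxassist -g appdg make appvol 50g
    mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
    mount -t vxfs /dev/vx/dsk/appdg/appvol /app

    # Grow the volume and its filesystem later if more space is requested
    vxresize -g appdg appvol +10g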

IT Consultant

Confidential

Responsibilities:

  • Monitor and support critical enterprise infrastructure services and applications to ensure optimal durability and reliability with IBM Netcool in Unix/Linux and Windows environments
  • Assist in the backup of the client’s business data with Symantec NetBackup and troubleshoot any backup job failures
  • Monitor client’s Backup servers (Linux-based) and IBM tape libraries globally; troubleshoot any problems and coordinate with IBM engineers to resolve critical issues
  • Assist with user questions and problems on infrastructure application usage
  • Perform “Ready For Business” check on infrastructure services and resolve any issues
  • Troubleshoot the client’s AutoSys job failures (see the sketch after this list)
  • Maintain the team’s internal web pages of procedures and forms using HTML, CGI, JSP and JavaScript
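
A minimal sketch of typical AutoSys troubleshooting commands for the job failures mentioned above; the job name is hypothetical:

    # Check the current status and the previous run of a failing job
    autorep -J nightly_feed_load
    autorep -J nightly_feed_load -r -1

    # Review the job definition (machine, command, log locations)
    autorep -J nightly_feed_load -q

    # After the underlying issue is fixed, force-start the job
    sendevent -E FORCE_STARTJOB -J nightly_feed_load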
