Sr. Hadoop Administrator Resume
Raritan, NJ
PROFESSIONAL SUMMARY:
- Over 10 years of overall IT experience as a Systems Administrator on multiple operating systems (Linux/AIX), with cross-platform integration experience using Hadoop
- Over 4 years of experience in Hadoop administration using the Hortonworks (HDP) and Cloudera (CDH) distributions
- Hands-on experience in installing, configuring, and using Hadoop ecosystem components such as HDFS, MapReduce, Hive, HBase, Sqoop, Pig, Oozie, ZooKeeper, Solr, Hue, Flume, Spark, and YARN.
- Experience in performing backup and disaster recovery of NameNode metadata and important sensitive data residing on the cluster.
- Experience in Hadoop cluster capacity planning, performance tuning, cluster monitoring, and troubleshooting.
- Experience in minor and major upgrades of Hadoop and the Hadoop ecosystem
- Experience in Hadoop administration activities such as installation, configuration and management of clusters using Cloudera Manager & Ambari.
- Good Understanding of the Hadoop Distributed File System and Eco System (MapReduce, Pig, Hive, Sqoop and HBase)
- Good knowledge on Hadoop HDFS architecture and MapReduce framework.
- Experience in monitoring the health of the Hadoop cluster and performing administrative cluster maintenance such as commissioning/decommissioning data nodes
- Experience in importing and exporting data between HDFS and Relational Database Management systems using Sqoop and troubleshooting for any issues.
- Excellent understanding and knowledge of NOSQL databases like MongoDB, HBase, and Cassandra.
- Solid understanding of all phases of development using multiple methodologies, including Agile with JIRA and Kanban boards, along with ticketing tools.
- Expertise in Red Hat Linux administration tasks, including upgrading RPMs using YUM, kernel upgrades, and LVM file system management.
- Creating and maintaining user accounts, profiles, security, rights, disk space and process monitoring. Handling and generating tickets via the Remedy and JIRA ticketing tools.
- Analyze business requirements and work with the technology team to recommend and implement the best approach to delivering solutions on the Hadoop platform.
- Skilled in planning big data strategy and designing big data solution architecture, including data acquisition, storage, transformation, analysis, business intelligence, and integration with other frameworks for tailor-made solutions that meet specific business needs.
- Experience in monitoring and reviewing Log4j logs
- Experience in ingestion, storage, querying, processing, and analysis of big data using different tools, including MapReduce, YARN, HDFS, Hive, Pig, HBase, Sqoop, Flume, and Oozie.
- Providing Change, Problem & Incident Management using Tools like Remedy, Maximo, Chipre, SharePoint and Manage Now.
- Maintain and use Red Hat Kickstart servers, Jumpstart, Ignite Installation Methods and NIM Master for installs and upgrades.
- Expertise in installation, administration, upgrades, configuration, patching, performance tuning, and troubleshooting of Red Hat Linux 4/5/6, Sun Solaris 8/9/10, and AIX 5.3/6.1/7.1
- Quick to adapt to new software applications and products; a self-starter with excellent communication skills and a good understanding of business workflow.
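The Sqoop import/export work described above generally takes the shape below; this is an illustrative sketch only, and every identifier (host, database, table, target path, user) is a placeholder, not a real system. The command is printed rather than executed so the sketch can be reviewed without a Sqoop installation.

```shell
# Illustrative Sqoop import: pull an RDBMS table into HDFS.
# All names below (dbhost.example.com, sales, orders, etl_user) are placeholders.
sqoop_import_sketch() {
  # Printed instead of run, so no Hadoop/Sqoop client is required here.
  echo "would run:" sqoop import \
    --connect jdbc:mysql://dbhost.example.com/sales \
    --username etl_user -P \
    --table orders \
    --target-dir /user/etl/orders \
    --num-mappers 4
}

sqoop_import_sketch
```

The matching `sqoop export` uses the same connection flags with `--export-dir` in place of `--target-dir`.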
TECHNICAL SKILLS:
Big Data Technologies: Hadoop, HDFS, MapReduce, YARN, Pig, Hive, HBase, SmartSense, ZooKeeper, Oozie, Ambari, Kerberos, Knox, Ranger, Sentry, Spark, Tez, Accumulo, Impala, Hue, Storm, Kafka, Flume, Sqoop, Solr, NiFi, Airflow
Hardware: IBM pSeries, PureFlex, RS/6000, IBM Blade servers, HP ProLiant DL360/380, HP Blade servers C6000/C7000
SAN: EMC CLARiiON, EMC DMX, IBM XIV, Dell
Operating Systems: Linux, AIX, CentOS, Solaris & Windows
Networking: DNS, DHCP, NFS, FTP, NIS, Samba, LDAP, OpenLDAP, SSH, Apache, NIM
Tools: ManageNow, Remedy, Maximo, Nagios, Chipre & SharePoint
Databases: Oracle 10g/11g, 12c, DB2 & MySQL
Backups: Veritas Netbackup & TSM Backup
Virtualization: VMware vSphere, VIO
Cluster Technologies: HACMP 5.3, 5.4, Power HA 7.1, VERITAS Cluster Servers 4.1
Web/Application Servers: Tomcat, WebSphere Application Server 5.0/6.0/7.0, Message Broker, MQ Series, Web Logic Server, IBM HTTP Server.
Cloud Knowledge: Openstack, AWS
Scripting & Programming Languages: Shell, Perl, Python
PROFESSIONAL EXPERIENCE:
Confidential, Raritan, NJ
Sr. Hadoop Administrator
Responsibilities:
- Worked in a multi-tenant cluster environment in which the largest cluster hosted 400+ nodes.
- Commissioning & decommissioning of datanodes when required.
- Supporting the existing and new data load jobs.
- Upgraded Ambari from 2.2.1 to 2.4.0.
- Upgraded HDP from 2.3 to 2.4 for production cluster and upgraded to 2.5.3.0 for POC Lab
- Set up cross-realm Kerberos trust for DistCp inter-cluster data transfer.
- As part of the DTV ( Confidential ) migration, used Falcon to migrate data from CDH to HDP.
- Took care of the day-to-day running of clusters.
- Installed and configured HDP cluster and other Hadoop ecosystem components.
- Configured SmartSense bundle creation and uploaded bundles to the HWX portal for cluster tuning and performance recommendations.
- Scheduled TWS jobs and troubleshot any job failures.
- Provided support for all the maintenance activities like OS patching, hadoop upgrades, configuration changes.
- Installed and configured NiFi, Falcon, SmartSense, and Airflow on a few clusters.
- Developed automated Unix shell scripts for running the Balancer, performing file system health checks, and creating users/groups on HDFS
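A maintenance script of the kind described in the bullets above might look like the following minimal sketch. The thresholds, paths, and user names are illustrative, and a `DRY_RUN` guard prints each command instead of executing it, so the sketch can be reviewed on a machine without a Hadoop client on the PATH.

```shell
#!/usr/bin/env bash
# Sketch of an HDFS maintenance script (all values illustrative).
# DRY_RUN=1 (the default) prints commands instead of running them.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "DRY-RUN: $*"
  else
    "$@"
  fi
}

# Rebalance block distribution; 10 = allowed disk-usage spread in percent.
run hdfs balancer -threshold 10

# File system health check; -files/-blocks adds per-file block detail.
run hdfs fsck / -files -blocks

# Create an HDFS home directory for a new user (user/group names are placeholders).
run hdfs dfs -mkdir -p /user/newuser
run hdfs dfs -chown newuser:hadoop /user/newuser
```

Setting `DRY_RUN=0` on a cluster gateway node would execute the commands for real; cron could then schedule the script during off-peak hours.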
Environment: Hortonworks (HDP 2.4), Cloudera Distribution (CDH 5), Ambari, Cloudera Manager, MapReduce 2.0 (YARN), HDFS, Hive, HBase, Pig, Oozie, Sqoop, Spark, Impala, Kerberos, Ranger, Sentry, ZooKeeper, SmartSense, Airflow, Falcon, DB2, SQL Server 2014, RHEL 6.x, Python, SAS, Storm, Kafka.
Confidential, Dallas, TX
Hadoop Administrator
Responsibilities:
- Troubleshooting, diagnosing, tuning, and solving Hadoop issues.
- Maintained the overall health of the cluster.
- Commissioning and decommissioning the nodes across the cluster.
- Analyzed clients' communications, information, database, and programming requirements; designed, developed, tested, and implemented appropriate information systems.
- Involved in installation and configuration of Kerberos on HDP 2.1.
- Involved in installation and configuration of an LDAP server, integrated with Kerberos on the cluster.
- Maintained data safety and high availability (HA) of the NameNode.
- Responsible for building scalable distributed data solutions using Hadoop.
- Assisted in installation and configuration of Hive, Pig, Sqoop, Flume, and Oozie on the Hadoop cluster with the latest patches.
- Worked with application teams to install operating system, Hadoop updates, patches, version upgrades as required.
- Worked with Ranger, Knox configuration to provide centralized security to Hadoop services.
- Continuous monitoring and managing of the Hadoop cluster using the Hortonworks Ambari tool.
- Provided guidance on simple to complex MapReduce jobs using Hive and Pig
- Optimized MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
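Map output compression of the kind mentioned above is typically switched on in mapred-site.xml. A representative fragment is shown below; the property names are the standard MRv2 ones, while the choice of Snappy as the codec is illustrative (any installed codec class can be substituted).

```xml
<configuration>
  <!-- Compress intermediate map output to reduce shuffle I/O -->
  <property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>
  <!-- Optionally compress final job output as well -->
  <property>
    <name>mapreduce.output.fileoutputformat.compress</name>
    <value>true</value>
  </property>
</configuration>
```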
Environment: Hadoop, MapReduce, HDFS, Hive, Pig, Hue, Oozie, Core Java, Eclipse, HBase, Flume, Linux, Git, Ansible.
Confidential, Bowie, MD
Hadoop Administrator
Responsibilities:
- Involved in installation of CDH 5.5 with CM 5.6 on a CentOS Linux environment.
- Involved in installation and configuration of the Kerberos security setup on the CDH 5.5 cluster.
- Involved in installation and configuration of an LDAP server, integrated with Kerberos on the cluster.
- Worked with Sentry configuration to provide centralized security to Hadoop services.
- Monitor critical services and provide on call support to the production team on various issues.
- Assist in Install and configuration of Hive, Pig, Sqoop, Flume, Oozie and HBase on the Hadoop cluster with latest patches.
- Involved in performance tuning of various Hadoop ecosystem components such as YARN and MRv2.
- Implemented the Kerberos security software on the CDH cluster at both the user level and the service level to provide strong security to the cluster.
- Troubleshooting, diagnosing, tuning, and solving Hadoop issues.
- Maintained the overall health of the cluster.
- Continuous monitoring and managing the Hadoop cluster using Cloudera Manager.
- Commissioning and decommissioning the nodes across cluster.
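Decommissioning a DataNode, as described above, usually follows the HDFS exclude-file pattern; Cloudera Manager automates these same steps through its UI. The sketch below is illustrative: the hostname and file path are placeholders, and the two `hdfs dfsadmin` commands are printed rather than executed so the sketch runs without a Hadoop client.

```shell
# Sketch of the HDFS exclude-file decommissioning flow (names illustrative).
EXCLUDES=./dfs.exclude   # the file referenced by the dfs.hosts.exclude property

# 1. Add the node being retired to the excludes file.
echo "datanode07.example.com" >> "$EXCLUDES"

# 2. Tell the NameNode to re-read its include/exclude lists
#    (echoed here so no Hadoop client is needed to review the sketch).
echo "would run: hdfs dfsadmin -refreshNodes"

# 3. Wait until the node reports 'Decommissioned' before taking it offline.
echo "would run: hdfs dfsadmin -report"
```

Commissioning a node is the reverse: remove it from the excludes file (and add it to the includes file, if one is configured), then refresh nodes again.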
Environment: Hortonworks (HDP 2.2), Ambari, Map Reduce 2.0(Yarn), HDFS, Hive, Hbase, Pig, Oozie, Sqoop, Spark, Flume, Kerberos, Zookeeper, DB2, SQL Server 2014, CentOS, RHEL 6.x
Confidential, Reston, VA
Linux/UNIX System Administrator
Responsibilities:
- Performed administrative tasks such as System Startup/shutdown, Backup strategy, Documentation, User Management, Security and Network management.
- Provided UNIX infrastructure, operations and support.
- Performed installation, periodic maintenance updates and apply APAR fixes.
- Configured IBM RS/6000 550, 520, and 610 machines for production, staging, and test environments.
- Installed OpenERP client/server based on the client requirements.
- Performed automation, troubleshooting, and monitoring of the client/server ERP solution.
- Focused on localization of software.
- Installed and managed the system with HMC.
- Configured and installed the web based system manager using the HMC.
- Installed VMware server 2 on the systems.
- Worked on VMware bench marking tools and troubleshooting devices.
- Migrated virtual machines, including running ones, between servers.
- File system management on various platforms.
- Created Copies of Logical volumes on different physical volumes in root vg or as requested by the application team.
- Good knowledge of shell scripting (Bash, Perl, Python) and tools such as Puppet, Chef, and Tomcat.
- Created active and passive clusters in HACMP.
- Extended failover to backup resources at remote sites using optional PowerHA features.
- Preparation and execution of server patching and upgrade on more than 200 servers including HP-UX and Red Hat Linux servers. Servers included HP-UX MC/Service Guard clusters.
- Installed and configured LAN and FlashCopy.
- Performed server buildup, cloning of LINUX servers.
- Setup and configured VMware guests.
- Setup and configured network TCP/IP on AIX including RPC connectivity for NFS.
- Created mount points for server directories and mounted these directories on AIX servers.
- Configured and administered LPARs.
- Configured and administered the VIO server and VIO client LPARs.
- Administration tasks included user administration, performance, Sendmail, and weekly mksysb backups.
- Daily roles included dealing with LVM extensively; strong hold on LVM.
- Daily tasks included dealing with the AIX side of SAN, such as troubleshooting and replacing Fibre Channel adapters and configuring LUNs back.
- Set up full networking services and protocols on AIX, including NIS/NFS, DNS, SSH, DHCP, NIDS, TCP/IP, ARP, applications, and print servers to ensure optimal networking, application, and printing functionality.
- Clustered multiple RS/6000 machines on AIX 5.1/5.2 using HACMP, configured HACMP cluster to keep applications running, restarting it on a backup server.
- Configured HACMP 4.5 to monitor, detect and react to failure events, allowing the system to stay available during random, unexpected problems.
- Performed User Account management, data backups, and users' logon support.
- Collected data and implemented operating system upgrades to the latest available maintenance levels, installing fixes and APARs.
- Experience with AIX system dumps, resizing dump devices, understanding error reports, and coordinating on different issues with the AIX System Support Center.
- Monitored trouble ticket queue to attend user and system calls.
Environment: NetApp, NFS, AIX, Solaris, Red Hat Linux, Windows 2000 Server, IBM WebSphere, Veritas NetBackup, Veritas Volume Manager, Tomcat, WebSphere Application Server
Confidential, Santa Clara, CA
Linux Administrator
Responsibilities:
- Installation, configuration, and operating system upgrades on Red Hat Linux 6.x and 5.x.
- Administered the server consolidation program through use of VMware ESX Server and VMware Virtual Center
- Used and worked on Kickstart configuration to handle installations and kernel configurations.
- SWAP box configuration; implementation of disaster backup and recovery.
- Performance Management & Tuning of RHEL.
- Created and installed a VIO Server and configured several VIO client LPARs on POWER6 (p550 and p570) models.
- Installed and configured a Samba server for Windows and Linux connectivity.
- Implemented NIS and NFS for administrative and project requirements.
- Migrated UNIX applications to the Linux platform (Informatica ETL, Oracle, DB2, AD/LDAP, etc.)
- Worked with several teams and guided 1st-level and 2nd-level teams to handle incidents related to OS, applications, and hardware.
- Discussed and documented internal wiki pages about the SLAs, OLAs, and other ITIL best practices used to address tickets in a timely manner.
- Excellent understanding of HP Service Manager and ability to create/modify/edit Interaction (SD-Service/Request), RFCs, Change Records, Incidents, Problem Record, CAB/eCAB, Test and Production Implementation Readiness.
- Helped the team in migration activities and actively involved in Pre-Assessment Phase, Assessment Phase, Build Phase, Validation Phase, UAT-1 (by Application Areas), Implementation Readiness (Change Records, Change Windows, Impact analysis), Migration, Cut-Over/Night-of-Implementation, UAT-2 (Final Validation by Application Areas) followed by Final Transition from Project to Service.
- Installed and configured web servers and investigated configuration changes in the production environment.
- Responsible for multicasting of various components as a system administrator.
- Worked closely with DBA team to ensure optimal performance of databases, and maintain development applications and databases
- Tuning the kernel parameters based on the application/database requirement.
- Monitoring system resources, logs, and disk usage; scheduling backups and restores
- Set up quotas for user accounts, limiting disk space usage.
- Configured sudo, granting root permission to users for performing certain activities.
- Configured crontab entries and updated automation scripts.
- Knowledge in Adobe, Hyperion, development servers.
- Upgraded RHEL 5.0 to RHEL 5.9/6.4 using both live upgrade and manual upgrade.
- Created BASH shell scripts to automate cron jobs and system maintenance. Scheduled cron jobs for job automation.
- Worked with the security team to modify application users' password policies, group policies, and UID and GID assignment policies.
- Monitored the performance of the system using top, sar, ps, prstat, vmstat, netstat, iostat, and /proc/cpuinfo to check CPU utilization, I/O device bottlenecks, memory usage, and network traffic.
- Troubleshot network connectivity using the ping, netstat, ifconfig, and traceroute commands. Logged in to remote systems using SSH, PuTTY, and telnet. Transferred files across systems on the network using the ftp, scp, and rsync commands.
- Experience managing various file systems using LVM; also configured file systems over the network using NFS, NAS, and SAN methodologies and installed RAID devices.
- Linux technical support and prepared technical documentation for check in verification.
- Regularly backed up critical data and restored backed-up data. Worked on resolving tickets issued on day-to-day activities and problems related to development and test servers.
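The disk-usage monitoring and cron automation described in this role can be sketched as a small Bash script. The threshold and log path are illustrative defaults, not values from any real system; a cron entry would run it hourly.

```shell
#!/usr/bin/env bash
# Alert when any mounted filesystem exceeds a usage threshold.
# Intended for cron, e.g.:  0 * * * * /usr/local/bin/disk_check.sh
# THRESHOLD and LOGFILE defaults below are illustrative.
THRESHOLD="${THRESHOLD:-90}"          # percent used that triggers an alert
LOGFILE="${LOGFILE:-/tmp/disk_check.log}"

check_disks() {
  # df -P gives stable POSIX output; NR > 1 skips the header line.
  df -P | awk -v t="$THRESHOLD" 'NR > 1 {
    use = $5; sub(/%/, "", use)
    if (use + 0 >= t) printf "ALERT: %s at %s%%\n", $6, use
  }'
}

check_disks | tee -a "$LOGFILE"
```

In practice the `ALERT` lines would be mailed to the on-call address or fed into the ticketing tool rather than just logged.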
Environment: Red Hat Linux 5/6, VMware ESX 5.x, WebLogic 12/11, Oracle 10g, VMware, NFS, SAN, NAS, ITIL.
Confidential, Charlotte, NC
Unix Administrator
Responsibilities:
- Providing level 3 technical support in Solaris/Linux environment
- Operating systems support for Solaris 10, 9, and SUSE Linux.
- Quarterly patching of Solaris as well as SUSE Linux
- Experience in supporting HP Proliant hardware running SUSE Linux
- Break-fix support of all systems in the local data center, as well as remote support of other data centers
- Working with SAN administrators for LUN allocation
- Adding resources and file systems in clustered Servers
- Experience in supporting Fujitsu Prime Power Servers
- Create disk groups and volumes and grow volumes
- Monitored system alerts and provide immediate solutions to Production/Development hardware/OS issues
- Involved in planning and executing Disaster Recovery Drill.
- Experienced troubleshooter with a good grasp of low-level tools to diagnose all types of system and application related issues.
- Provided immediate and correct technical support in 24/7 production operations
- Upgraded Solaris from version 8 to 10 using JumpStart and flash archive in a clustered environment (VCS)
Environment: H/w: Sun T2000, 1280, 490, 240, Fujitsu, HP ProLiant DL585, DL385, SUSE Linux servers; O/s: Solaris 10, 9 and 8, VCS 5.0, 4.x, Storage Foundation 4.1, EMC array, PowerPath and SRDF, HP-UX 11.x, Oracle, SUSE Linux, Peregrine Service Center, large environment with over 1400 servers, Lotus Sametime, DNS, NIS, Keon, NFS, Automounter, TSM