Sr Hadoop Administrator Resume
Raleigh, NC
PROFESSIONAL SUMMARY:
- Around 9 years of IT industry experience in Big Data/Hadoop administration.
- Hadoop cluster setup, installation, configuration, and administration experience on multi-node clusters using the Cloudera (CDH) and Hortonworks (HDP) distributions of Hadoop.
- Ensuring system and service performance, handling emergencies, and providing solutions for quick recovery from emergency situations.
- Hands-on experience installing Cloudera Manager and resolving issues during CDH4/5 installations.
- Hadoop cluster capacity planning, performance tuning, monitoring, and troubleshooting.
- Strong experience in diagnosing, troubleshooting, and resolving Hadoop cluster issues.
- Extensive experience monitoring Hadoop clusters and nodes using Cloudera Manager, Ambari, and Nagios.
- Strong experience using Cloudera Manager and Ambari to automate installation, upgrades, and management of multi-node Hadoop clusters.
- Experienced in designing, implementing, and managing secure authentication for Hadoop clusters with Kerberos.
- Experienced with Apache Ranger and Apache Sentry for access control and authorization.
- Experienced installing Tableau Desktop and connecting it to Hive, Spark, and Impala.
- Create, manage, back up, and relaunch Amazon Machine Images (AMIs).
- Hands-on experience with DistCp, the AWS CLI, and S3 multipart uploads (see the sketch after this list).
- Deep knowledge of and hands-on experience with Hadoop and its ecosystem components, e.g., HDFS, YARN, Hive, MapReduce, Pig, Sqoop, Oozie, Kafka, and Spark.
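A minimal sketch of the DistCp and AWS CLI usage referenced above; the bucket and path names are hypothetical placeholders, not values from an actual engagement.

```bash
# Copy an HDFS directory into S3 with DistCp (bucket/paths are placeholders)
hadoop distcp -update -skipcrccheck \
  hdfs:///data/warehouse/orders \
  s3a://example-backup-bucket/warehouse/orders

# Upload a large archive to S3; the AWS CLI switches to multipart upload
# automatically for files above its multipart threshold
aws s3 cp /backups/orders_2018.tar.gz s3://example-backup-bucket/archives/
```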
TECHNOLOGY EXPERIENCE:
Hadoop, HDFS, Yarn, Kerberos, Sentry, Ranger, MapReduce, Spark, Shark, Hive, Pig, Sqoop, Flume, Kafka, NiFi, Storm, Oozie, ZooKeeper, Informatica BDM.
AWS, S3, EC2, EMR, Rackspace.
HBase, Cassandra, MongoDB
Cloudera Manager, Ambari, Nagios, Ganglia, Zabbix, Chef, Puppet, Maven, Jenkins.
Java, J2EE, WebLogic.
Confidential, Netezza, Oracle 8i/9i/10g/11g, MS SQL Server, Sybase, SAS, Informatica, DataStage.
NetBackup, Oracle Grid, RMAN, BMC Patrol, Linux, IBM AIX 5.3, Solaris 10, Windows.
WORK EXPERIENCE:
Confidential, Raleigh, NC
Sr Hadoop Administrator
Responsibilities:
- Installed and configured a multi-node Hadoop cluster and maintained it with Nagios monitoring.
- Install and configure Hadoop ecosystem components such as Spark, HBase, Hive, and Pig as required.
- Configure high availability (HA) for services such as NameNode, ResourceManager, and Hue as required to meet the organization's SLAs.
- Run benchmark tools to test cluster performance.
- Performed POCs to assess workloads, evaluate resource utilization, and configure Hadoop properties based on benchmark results.
- Tuned the cluster based on the POC and benchmark results.
- Commission and decommission cluster nodes (see the decommissioning sketch after this list).
- Experience with AWS Cloud (EC2, S3 & EMR).
- Experienced in installation, configuration, troubleshooting and maintenance of Kafka & Spark clusters.
- Experience in setting up Kafka clusters on AWS EC2 instances (see the Kafka sketch after this list).
- Set up Apache NiFi and performed a POC using NiFi to orchestrate a data pipeline.
- Monitor system metrics and logs for problems using the Check_MK monitoring tool.
- User provisioning (creation and deletion) on Prod and Non-Prod clusters per client requests.
- Ensure that critical customer issues are addressed quickly and effectively.
- Apply troubleshooting techniques to provide solutions to our customers' individual needs.
- Troubleshoot, diagnose and potentially escalate customer inquiries during their engineering and operations efforts.
- Investigate product related issues both for individual customers and for common trends that may arise.
- Resolve customer problems via telephone, email or remote access.
- Maintain customer loyalty through integrity and accountability.
- Research customer issues in a timely manner and follow up directly with the customer with recommendations and action plans.
- Escalate cases to management when customer satisfaction comes into question.
- Escalate cases to the engineering team when the problem is beyond the scope of technical support or falls out of the support team’s expertise.
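A minimal sketch of the node decommissioning workflow referenced in the responsibilities above, as it looks on a manually managed HDFS cluster; the hostname and exclude-file path are assumptions (Ambari and Cloudera Manager drive the same mechanism from their UIs).

```bash
# Add the host to the HDFS exclude file (path is an assumption; it must
# match the dfs.hosts.exclude property in hdfs-site.xml)
echo "worker-node-07.example.com" >> /etc/hadoop/conf/dfs.exclude

# Ask the NameNode to re-read its include/exclude lists; the node enters
# "Decommission In Progress" while its blocks re-replicate elsewhere
hdfs dfsadmin -refreshNodes

# Watch until the node reports "Decommissioned"; then it can be removed
hdfs dfsadmin -report | grep -A 3 worker-node-07
```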
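A minimal sketch of standing up a Kafka broker on an EC2 instance, as referenced above; the Kafka version, ZooKeeper address, and hostnames are assumptions.

```bash
# Download and unpack a broker (version is an assumption)
wget https://archive.apache.org/dist/kafka/0.10.2.1/kafka_2.11-0.10.2.1.tgz
tar -xzf kafka_2.11-0.10.2.1.tgz && cd kafka_2.11-0.10.2.1

# Point the broker at the ZooKeeper ensemble and advertise the instance's
# DNS name so brokers and clients can reach it (addresses assumed)
sed -i 's|zookeeper.connect=.*|zookeeper.connect=zk1.example.internal:2181|' \
  config/server.properties
echo "advertised.listeners=PLAINTEXT://$(hostname -f):9092" >> config/server.properties

bin/kafka-server-start.sh -daemon config/server.properties

# Smoke test: create a topic
bin/kafka-topics.sh --create --zookeeper zk1.example.internal:2181 \
  --replication-factor 1 --partitions 3 --topic healthcheck
```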
Technology: Ambari, Hadoop, Yarn, Spark, Kafka, Hive, Pig, Sqoop, Kerberos, Ranger, NiFi, Oracle, Netezza, Tableau, Python, Java 8.0, Log4J, GIT, AWS, S3, EC2, EMR, JIRA.
Confidential, Dallas, TX
Sr Hadoop Administrator
Responsibilities:
- Perform Hadoop and ecosystem service evaluation and make necessary configuration changes to meet the business and operational needs of the Confidential client.
- Monitor the various services of the Hadoop cluster, fix user issues, and make necessary corrections to the cluster to meet the cluster uptime required by Confidential.
- Create necessary Hive databases and database objects per BI team requirements.
- Work with the Remedy ticketing system: create project-specific folders, work tickets, and follow the Remedy ticket lifecycle.
- Work on tickets related to various Hadoop/Big Data services including HDFS, MapReduce, Yarn, Hive, Sqoop, Storm, Spark, Kafka, HBase, Kerberos, Ranger, and Knox.
- Closely monitor space in the Hadoop environment by producing Hadoop space usage reports.
- Perform upgrades of the Hadoop HDP cluster and Hadoop ecosystem components on Development, Test, and Production environments.
- Wrote stop/start/status-check scripts for Spark streaming jobs and made necessary configuration parameter changes.
- Work on the FRO upgrade in Production.
- Promote code from Dev to Test to Production as necessary.
- Handle security of the cluster by administering Kerberos and Ranger services.
- Evaluate, recommend, and make necessary policy changes in Ranger to meet the security requirements of user teams.
- Proactively monitor for alerts and notifications and fix the corresponding issues to maintain server uptime.
- Write scripts to load data into Hive tables and schedule them via cron to run daily (see the sketch after this list).
- Work on resolving Ranger and Kerberos issues.
- Move data from one cluster to another.
- Use Jenkins for deployments and for promoting code from one environment to another.
- Apply patches to the systems where Hadoop services are running.
- Work on performance tuning and optimization of various Hadoop services, supporting development teams in identifying issues and suggesting necessary changes to code or parameters.
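A minimal sketch of the scripted Hive loads scheduled via cron mentioned above; the JDBC URL, database, table, and paths are hypothetical.

```bash
#!/bin/bash
# load_daily_sales.sh - hypothetical nightly Hive load (all names assumed)
set -euo pipefail
DT=$(date -d yesterday +%Y-%m-%d)

# Move yesterday's landed files into the partitioned Hive table
beeline -u "jdbc:hive2://hiveserver2.example.com:10000/sales" -e "
  LOAD DATA INPATH '/landing/sales/${DT}'
  INTO TABLE daily_sales PARTITION (ds='${DT}');"
```

Scheduled in cron, e.g. `0 2 * * * /opt/scripts/load_daily_sales.sh >> /var/log/load_daily_sales.log 2>&1`.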
Technology: Ambari, Hadoop, Yarn, Spark, Kafka, Hive, Pig, Sqoop, Kerberos, Ranger, NiFi, SAP Hana, SAP Data Services, SAP BO, Oracle, Netezza, Tableau, Python, Java 8.0, Log4J, GIT, AWS, S3, EC2, JIRA.
Confidential, CA
Hadoop Administrator
Responsibilities:
- Installation, configuration, and administration of a 10-node Hortonworks HDP 2.1 Hadoop cluster.
- User creation and deletion are handled through the ServiceNow portal by an automated script that runs every 20 minutes (see the sketch after this list).
- Monitor the alerts set on Prod and Non-Prod clusters for issues related to capacity usage and the services running on them.
- Maintain control and management of the overall resolution for any escalated case, even when cross-functional groups are involved.
- Leverage internal technical expertise, including development engineers, knowledge base, and other internal tools to provide the most effective solutions to customer issues.
- Create knowledge base content to capture new learning for re-use throughout the company and user base.
- Participate in technical communications within the team to share best practices and learn about new technologies and other ecosystem applications.
- Actively participate in the Hadoop community to assist with generic support issues.
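A minimal sketch of the kind of automated user provisioning described above; the group name and directory layout are assumptions (the actual script was driven by ServiceNow requests).

```bash
#!/bin/bash
# provision_hdfs_user.sh <username> - hypothetical provisioning helper
set -euo pipefail
USER_NAME="$1"

# OS account on the gateway node (group name assumed)
useradd -m -G hadoop "${USER_NAME}"

# HDFS home directory, created as the hdfs superuser and handed to the user
sudo -u hdfs hdfs dfs -mkdir -p "/user/${USER_NAME}"
sudo -u hdfs hdfs dfs -chown "${USER_NAME}:hadoop" "/user/${USER_NAME}"
sudo -u hdfs hdfs dfs -chmod 750 "/user/${USER_NAME}"
```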
Technology: Ambari, Hadoop, Nagios, Zabbix, Spark, Kafka, Storm, Shark, Hive, Pig, Sqoop, MapReduce, Kerberos, Ranger, Salt, Kibana, Talend, Oracle, Confidential, SAS, Tableau, Java 7.0, Log4J, Junit, MRUnit, SVN, JIRA.
Confidential
Unix/Linux Admin
Responsibilities:
- Creating, installing, and maintaining physical and virtual SuSE and Red Hat Linux servers based on project requirements.
- Creating, modifying, and maintaining filesystems on SuSE, Red Hat, and Solaris servers.
- Creating root disk mirrors using the ZFS filesystem on Solaris servers (see the sketch after this list).
- Creating root filesystem mirrors on Solaris servers.
- Configuring autofs on Solaris and Linux (Red Hat and SuSE) servers.
- Applying patches on Solaris, Linux, and AIX servers.
- Designing and managing disk space using AIX Logical Volume Manager (LVM): creating and extending volume groups and logical volumes.
- Working with the storage team to configure filesystems on SAN storage.
- Installing and configuring NetBackup tools on Solaris, SuSE Linux, and Red Hat Linux servers.
- Creating change requests (CRs) and following ITIL procedures for new server installations.
- Installing kernel patches and updating hardware firmware to stabilize the hardware.
- Working with vendors on root cause analysis (RCA) for server issues.
- Working with the VMware team to install the latest VMware Tools on Linux servers.
- Submitting changes in the Remedy tool.
- Adding or removing LUNs from servers.
- Configuring LDoms and control domains on SPARC Confidential-series servers.
- Patching AIX LPAR and VIO servers.
- Performing weekly failovers/reboots for standalone and clustered servers and resolving issues that arise during cluster failovers.
- Working with and supporting database and application teams during their deployments and upgrades.
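A minimal sketch of the ZFS root-pool mirroring referenced above, for a typical Solaris 10 SPARC system; the device names are assumptions.

```bash
# Attach a second disk to the root pool to form a mirror; the slice
# layout must match the existing root disk (device names assumed)
zpool attach -f rpool c0t0d0s0 c0t1d0s0

# Check resilvering progress; wait for both sides to show ONLINE
zpool status rpool

# On SPARC, install the boot block so the new disk is bootable
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
  /dev/rdsk/c0t1d0s0
```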
Technology: WebLogic, Solaris, Solaris Volume Manager (SVM), Veritas Volume Manager (VxVM), Veritas Cluster, Linux (Red Hat/SuSE), AIX, VMware, Zones, Logical Volume Manager, ZFS, Sun Cluster, LDoms; worked in a data center environment.
Confidential
System Admin
Responsibilities:
- Administration of RHEL, including installation, testing, tuning, upgrading, loading patches, and troubleshooting both physical and virtual server issues.
- Creating and cloning Linux virtual machines.
- Installing Red Hat Linux using Kickstart and applying security policies to harden servers per company policy.
- RPM and YUM package installations, patching, and other server management.
- Managing routine system backups, scheduling jobs (such as disabling and enabling cron jobs), and enabling system and network logging on servers for maintenance, performance tuning, and testing.
- Tech and non-tech refreshes of Linux servers, including new hardware, OS upgrades, application installation, and testing.
- Setting up user and group login IDs, printing parameters, network configuration, and passwords; resolving permission issues; and managing user and group quotas.
- Installing MySQL on Linux and customizing MySQL database parameters.
- Working with the ServiceNow incident tool.
- Creating physical volumes, volume groups, and logical volumes (see the sketch after this list).
- Samba server configuration with Samba clients.
- Knowledge of iptables and SELinux.
- Modified existing Linux filesystems to standard ext3.
- Configuration and administration of NFS, FTP, Samba, and NIS.
- Maintenance of DNS, DHCP, and Apache services on Linux machines.
- Installing and configuring Apache and supporting it on Linux production servers.
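A minimal sketch of the LVM workflow referenced above; the device, volume group, and mount point names are hypothetical.

```bash
# Initialize a disk for LVM, build a volume group, carve out a logical
# volume, and lay down an ext3 filesystem (names are placeholders)
pvcreate /dev/sdb
vgcreate appvg /dev/sdb
lvcreate -n applv -L 20G appvg
mkfs.ext3 /dev/appvg/applv

# Mount it and persist the mount across reboots
mkdir -p /app
mount /dev/appvg/applv /app
echo "/dev/appvg/applv /app ext3 defaults 0 2" >> /etc/fstab
```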
Technology: Red Hat Enterprise Linux servers (HP ProLiant DL585, BL/ML series), SAN (NetApp), VERITAS Cluster Server 5.0, Windows 2003 Server, shell programming.