Hadoop Administrator Resume
PROFESSIONAL SUMMARY:
- 7 years of professional experience, including more than 3 years in Big Data analytics as a Hadoop Administrator, 7 years as a Linux Administrator, and 1 year as a Service Manager.
- Experience in architecting, designing, installing, configuring, and managing Apache Hadoop clusters and the Hortonworks Hadoop distribution.
- Expertise and sound knowledge of Red Hat Linux (5 and 6) operating systems and their administration.
- Experience with the complete software development lifecycle, including design, development, testing, and implementation of systems of moderate to advanced complexity.
- Expertise in Hadoop cluster capacity planning, performance tuning, cluster monitoring, balancing, and troubleshooting.
- Broad implementation knowledge of the NiFi data ingestion tool on HDP clusters.
- Expertise in getting Hadoop clusters ready for development teams working on POCs.
- Experience installing monitoring systems such as Ambari on AWS for cluster health monitoring and cluster management.
- Experience in minor and major upgrades of Hadoop and the Hadoop ecosystem.
- Hands-on experience in analyzing log files for Hadoop and ecosystem services and finding root causes.
- Experience in commissioning, decommissioning, balancing, and managing nodes, and in tuning servers for optimal cluster performance.
- Involved in cluster maintenance, troubleshooting, and monitoring, and followed proper backup and recovery strategies.
- Secured a multi-tier software delivery system by utilizing Kerberos and centralizing authorization through Ranger.
- Strong understanding of the Hadoop ecosystem, including HDFS, MapReduce, ZooKeeper, Hadoop Streaming, Sqoop, Oozie, and Hive.
- Experience in designing real-time-ingestion big data architectures using Storm, Kafka, and Spark Streaming.
- Ability to analyze and performance tune a Hadoop cluster.
- Resolved incident and problem tickets to SLA for services across the global Hadoop estate.
- Continuously evaluated Hadoop infrastructure requirements and designed/deployed solutions (high availability, replication considerations, space issues, etc.).
- Hands-on with complex RDBMS SQL and Hive HQL.
- Hands-on with the Remedy tool as a Service Manager.
- Extensive knowledge of Core Java.
- Familiarity with the fundamentals of Linux shell scripting.
- Proficient in storage management products such as Linux Volume Manager (LVM).
- Monitored the infrastructure and provided support through Nagios.
- Specialized in high-availability products such as Red Hat Cluster Suite.
- Experience in incident, change, and problem management in a large UNIX infrastructure consisting of Linux servers.
- Environment oversight and root-cause analysis for stability and reliability issues, with timely resolution or escalation.
- Great team player and contributor with a passion to excel and learn new technologies.
TECHNICAL SKILLS:
Hadoop/Big Data Technologies: HDFS, MapReduce, HBase, Hive, Sqoop, ZooKeeper, YARN, NiFi and Ranger
Operating Systems: Linux (5/6), UNIX
Databases: Oracle 9i/10g/11g, SQL Server
Programming Languages: Core Java, Shell Scripting, SQL
Hardware: HP ProLiant DL380 G7-G9, DL385 G6-G7
Visa Details: H1B visa for the US, valid through March 2018.
PROFESSIONAL EXPERIENCE:
Confidential
Hadoop Administrator
Responsibilities:
- Provided the POC for big data by setting up a 17-node HDFS cluster.
- Built and implemented capacity scheduler queues and assigned resources to them.
- Designed the job that loaded data from source systems into the HDFS cluster using the NiFi tool.
- Installed and integrated Ranger and created policies for HDFS, Hive, etc.
- Deployed Kerberos and created keytab files, principals, and realms (see the keytab sketch after this list).
- Provided day-to-day support for cluster issues and job failures.
- Changed cluster configuration properties based on the volume of data being processed and the performance of the cluster.
- Maintained the cluster in a healthy and optimal working condition.
- Scheduled and maintained Oozie jobs.
- Created HDFS directories and assigned permissions on them per developers' requests (see the permissions sketch after this list).
- Integrated Kafka (a messaging system) to transfer data from one application to another.
- Monitored the health of all processes related to the NameNode, DataNodes, HDFS, YARN, Hive, HBase, Ranger, and Storm using Hortonworks Ambari.
- Backed up configuration and recovered from NameNode failures.
- Provided proactive and reactive support, ensuring maximum availability, minimizing downtime, and meeting our service SLAs in incident, change, and problem management using the Remedy tool.
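A minimal sketch of the keytab workflow referenced above, assuming MIT Kerberos with a hypothetical realm EXAMPLE.COM and host master01.example.com; actual principals and paths vary by cluster:

```bash
# Create a service principal with a random key (realm and host are hypothetical)
kadmin.local -q "addprinc -randkey nn/master01.example.com@EXAMPLE.COM"

# Export the principal's key to a keytab file for the NameNode service
kadmin.local -q "xst -k /etc/security/keytabs/nn.service.keytab nn/master01.example.com@EXAMPLE.COM"

# Lock the keytab down to the service account and verify its contents
chown hdfs:hadoop /etc/security/keytabs/nn.service.keytab
chmod 400 /etc/security/keytabs/nn.service.keytab
klist -kt /etc/security/keytabs/nn.service.keytab
```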
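And a sketch of the per-request HDFS directory setup, run as the hdfs superuser; the project path, user, and group names are placeholders:

```bash
# Create a project directory in HDFS and hand it to the requesting team
sudo -u hdfs hdfs dfs -mkdir -p /projects/analytics
sudo -u hdfs hdfs dfs -chown devuser:devgroup /projects/analytics
sudo -u hdfs hdfs dfs -chmod 750 /projects/analytics

# Verify ownership and permissions
hdfs dfs -ls /projects
```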
Confidential, Bellevue, Washington
Hadoop/Linux Administrator
Responsibilities:
- Provided the POC for big data by setting up a 200-node production cluster, which later grew to more than 600 nodes.
- Designed and installed a 14-node DEV cluster to help developers with the POC.
- Upgraded Apache Ambari from version 1.7.0 to 2.0.0.
- Involved in upgrading the Hadoop cluster from HDP 2.2.0.0 to HDP 2.2.4.2.
- Designed and implemented high availability and automatic failover infrastructure to overcome single points of failure for the NameNode, ResourceManager, Hive Metastore, HiveServer2, HBase Master, and Storm Nimbus, utilizing ZooKeeper services.
- Migrated Hadoop services from one node to another with the help of Ambari.
- Manually balanced HDFS on a regular basis to decrease network utilization and increase job performance (rebalancing is shown in the sketch after this list).
- Involved in installing Kerberos and configuring server and client systems to enable Hadoop security.
- Coordinated with developers and troubleshot the movement of data from Netezza to the HDFS cluster.
- Worked on commissioning and decommissioning of DataNodes (see the sketch after this list).
- Added and removed nodes in the Hadoop cluster as required.
- Closely monitored and analyzed MapReduce job executions on the cluster at the task level.
- Involved in loading data from the UNIX file system to HDFS.
- Continuously worked with vendors' technical support and professional services to resolve high-severity issues.
- Added staging nodes to the production cluster after a cleanup activity.
- Performed user and group management through Hortonworks Ambari.
- Worked on HBase and carried out performance tuning.
- Configured and integrated Storm to manage the data flow from HDFS to HBase tables.
- Handled data exchange between HDFS and databases using Sqoop (see the Sqoop sketch after this list).
- Monitored disk, memory, heap, and CPU utilization on all master and slave machines using Ambari and took the necessary measures to keep the cluster up and running 24/7.
- Provided proactive and reactive support, ensuring maximum availability, minimizing downtime, and meeting our service SLAs in incident, change, and problem management using JIRA.
- Prepared and reviewed technical changes on the UNIX estate; attended CAB meetings to present the technical plan, the proposed solution, and the risks involved.
- Provided UNIX support, including day-to-day maintenance tasks, troubleshooting, root-cause analysis, generating sosreports and kdump reports, and escalation and coordination with vendors.
- Performed end-to-end server builds using Kickstart, storage configuration using LVM, and customized configuration of Linux servers for applications and databases such as Oracle.
- Liaised with the application, database, storage, and network teams to set up new Linux environments and supported regular BAU activities in the UNIX environment.
- Planned and implemented quarterly infrastructure weekends to resolve existing software and hardware vulnerabilities in the environment and standardize the infrastructure.
- Projects included BIOS, RAID, and iLO firmware upgrades on HP hardware, and operating system and kernel upgrades for Red Hat using a local repository.
- Worked on hardware failures, creating tickets with HP support and scheduling changes for hardware replacements.
- Tuned and maintained the kernel (through the /proc filesystem).
- Implemented network bonding for network resilience.
- Configured RAID controllers through the hpacucli utility.
- Built servers from scratch on local disk.
- Applied basic shell programming skills to system administration.
- Scheduled and controlled jobs using cron.
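A sketch of the decommissioning and rebalancing routine mentioned above, assuming dfs.hosts.exclude in hdfs-site.xml points at /etc/hadoop/conf/dfs.exclude and a hypothetical host datanode042; exact paths depend on the HDP layout:

```bash
# Add the node to the exclude file referenced by dfs.hosts.exclude
echo "datanode042.example.com" >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode to re-read the include/exclude lists and start decommissioning
sudo -u hdfs hdfs dfsadmin -refreshNodes

# Watch progress until the node reports "Decommissioned"
sudo -u hdfs hdfs dfsadmin -report | grep -A 3 "datanode042"

# Afterwards, rebalance: blocks move until every DataNode is within
# 10 percentage points of the cluster's average utilization
sudo -u hdfs hdfs balancer -threshold 10
```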
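And a sketch of the Sqoop-based exchange, shown here as an import from a hypothetical Netezza database (assumes the Netezza JDBC driver is on Sqoop's classpath); the host, table, and credentials are placeholders:

```bash
# Pull a table from Netezza into HDFS with four parallel mappers
sqoop import \
  --connect jdbc:netezza://nzhost.example.com:5480/salesdb \
  --username etl_user -P \
  --table ORDERS \
  --target-dir /data/raw/orders \
  --num-mappers 4
```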
Confidential
Linux Administrator
Responsibilities:
- Disk management: partitioning with tools like fdisk; LVM file system management (physical volumes, logical volumes, and volume groups); extending file systems on LVM; mounting and unmounting file systems and /etc/fstab mount options (see the LVM sketch after this list).
- Migrated storage from EMC VMAX to EMC CLARiiON using the LVM volume manager.
- Monitored server performance, including CPU, memory, swap space, and file system utilization.
- Identified and analyzed issues that hampered system performance, worked in close coordination with the product development team, and recommended solutions.
- Performed remote connections and file transfers, created archives, managed local disk devices, and performed system security tasks.
- Implemented security through TCP Wrappers, iptables, and SELinux.
- Responsible for adding and removing packages.
- Network Administration and Troubleshooting.
- Worked on user requests from different teams using the ServiceNow tool.
- Knowledge of and implemented RAID levels (RAID 0, RAID 1, RAID 5, and RAID 6).
- Installed and configured Nagios and managed services through it.
- Updated systems as soon as new versions of the OS and application software rolled out.
- Responsible for granting and revoking user privileges.
- Scanned and configured storage LUNs.
- Installed and updated kernels and applied patches on Red Hat servers as clients required.
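A sketch of the LVM extension workflow referenced in the first bullet, assuming a hypothetical volume group datavg, logical volume applv mounted at /app, an ext4 file system, and a new disk partition /dev/sdc1:

```bash
# Initialize the new partition as a physical volume and add it to the volume group
pvcreate /dev/sdc1
vgextend datavg /dev/sdc1

# Grow the logical volume by 20 GB, then grow the file system to match
lvextend -L +20G /dev/datavg/applv
resize2fs /dev/datavg/applv

# Confirm the extra space is visible
df -h /app
```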
Confidential
Linux Administrator
Responsibilities:
- Responsible for monitoring system performance and supporting servers to improve performance.
- Provided support by handling application installation, troubleshooting, and the required configuration of Linux servers.
- Provided support for operating system and hardware-related issues.
- Configured and implemented NTP, DNS, and SMTP (Postfix) servers.
- Supported NFS, Samba, and Apache services in our estate.
- Managed simple partitions and file systems.
- Established and changed swap space under Red Hat Linux (see the swap sketch after this list).
- Implemented disk quotas on file systems.
- Managed user account attributes and file systems.
- User and Group Administration.
- Scheduled jobs through cron and at.
- Watched system logs for problems on a regular basis.
- Handled kernel and OS upgrades through the YUM repository.
- Troubleshot OS boot failures: logging in to single-user mode, file system checking with fsck, etc.
- Performed backup and recovery with tar and scp.
- Installed kernels and additional packages using the YUM repository.
- Set up passwordless, box-to-box SSH connectivity (see the SSH key sketch after this list).
- Configured and managed SSH and FTP servers.
- Implemented file access control lists (ACLs).
- Checked disk space on a daily basis.
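A sketch of adding swap space as mentioned above; the 2 GB size and /swapfile path are arbitrary choices:

```bash
# Create and secure a 2 GB swap file
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile

# Format it as swap, enable it, and verify
mkswap /swapfile
swapon /swapfile
swapon -s

# Persist the swap file across reboots
echo '/swapfile swap swap defaults 0 0' >> /etc/fstab
```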
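And a sketch of the passwordless box-to-box connectivity, assuming a hypothetical remote host box02.example.com and user admin:

```bash
# Generate a key pair (no passphrase) and push the public key to the remote box
ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa -N ''
ssh-copy-id admin@box02.example.com

# Should now run without a password prompt
ssh admin@box02.example.com hostname
```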