
Hadoop Admin Resume

WA

SUMMARY

  • Over 9 years of IT experience with multinational clients, including 3+ years as a Hadoop administrator and around 2 years in Linux administration.
  • Expertise in Hadoop cluster administration, including setup, installation, monitoring, maintenance, and operational support for the Cloudera CDH4/5 and Hortonworks HDP 1.3/2.1/2.2 distributions.
  • Good experience installing, configuring, and managing Hadoop clusters on Amazon EC2.
  • Involved in setting up high-availability solutions for Hadoop clusters and HBase.
  • Hadoop cluster capacity planning, performance tuning, monitoring, and troubleshooting.
  • Worked on both MapReduce 1 and MapReduce 2 (YARN) architectures.
  • Worked on installing and configuring Cloudera Manager and Ambari for the Hadoop stack.
  • Used network monitoring daemons like Ganglia and service monitoring tools like Nagios.
  • Added and removed nodes in existing Hadoop clusters.
  • Backed up configuration and performed recovery from NameNode failures.
  • Decommissioned and recommissioned nodes on running Hadoop clusters (see the sketch after this list).
  • Installed various Hadoop ecosystem components and Hadoop daemons.
  • Installed and configured Sqoop and Flume.
  • Loaded data from different data sources (Teradata and DB2) into HDFS using Sqoop and loaded it into partitioned Hive tables.
  • Analyzed large data sets by writing Pig scripts and Hive queries.
  • Involved in benchmarking Hadoop cluster file systems with various batch jobs and workloads.
  • Experienced in performing real-time analytics on NoSQL databases like HBase.
  • Experience in performing backup and disaster recovery of NameNode metadata and important sensitive data residing on the cluster.
  • Good exposure to and experience with Linux internals and administration.
  • Worked on year-over-year capacity planning for distributed environments and coordinated with vendors in the procurement process for new hardware.
  • Excellent command of change management and coordinating deployments in standalone and distributed environments.
  • Hands-on experience with RAID using volume management software such as Logical Volume Manager, Veritas Volume Manager, and Solaris Volume Manager.
  • Experienced in installing operating systems, packages, and patches, adding peripherals, maintaining user accounts, maintaining system security, performance tuning, and troubleshooting at various levels.
  • Installed and configured Kickstart server environments.
  • Experience with networking concepts such as DHCP, TCP/IP, IP addressing, networking technologies, and WLAN.
  • Hands-on experience with configuration management systems like Puppet, Cobbler, and Chef.
  • Security management, including performing security health checks per policies and procedures and patching servers based on advisories for applications and operating systems.
  • Extensive experience with Veritas Cluster Server and Veritas NetBackup.
  • 24x7 production on-call support and administration for distributed and relational databases.
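
A minimal sketch of the decommission/recommission flow referenced above, assuming the exclude-file path and hostname are illustrative and that dfs.hosts.exclude in hdfs-site.xml already points at the exclude file:

```sh
# Add the DataNode to the exclude file (path and hostname are hypothetical)
echo "datanode07.example.com" >> /etc/hadoop/conf/dfs.exclude

# Ask the NameNode to re-read its include/exclude lists
hdfs dfsadmin -refreshNodes

# Watch progress until the node reports "Decommissioned"
hdfs dfsadmin -report | grep -A 3 "datanode07.example.com"

# Recommission: remove the host from the exclude file and refresh again
sed -i '/datanode07.example.com/d' /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -refreshNodes
```

On MRv1 clusters the equivalent refresh is `hadoop dfsadmin -refreshNodes`, with `hadoop mradmin -refreshNodes` for the matching JobTracker exclude list.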

TECHNICAL SKILLS

Operating Systems: CentOS, Red Hat Enterprise Linux 4/5/6, Windows 9x/XP/2000, z/OS, OS/390

Big Data: Hadoop, MapReduce, Pig, Hive, Sqoop, HBase, Oozie, Flume, Amazon EC2, CDH3/4/5, HDP 1.3/2.1/2.2

Database: Oracle 9i/10g, SQL Server 2000/2005, MySQL 5

Languages: SQL, Core Java, C & C++, JCL, COBOL

Monitoring tools: Ganglia, Nagios, Cloudera Manager, Ambari, Omegamon

Deployment tools: Puppet, Chef, Ansible

Other: Veritas NetBackup, Legato, Tivoli, Citrix Server & VMware

PROFESSIONAL EXPERIENCE

Confidential, WA

Hadoop Admin

Environment: HDP, Ambari, Sqoop, Flume, Oozie, ZooKeeper, Pig, Hive, MapReduce, YARN, HA, HBase, Scala, Red Hat/CentOS Linux.

Responsibilities:

  • Responsible for cluster maintenance, monitoring, commissioning and decommissioning of nodes, troubleshooting, and managing and reviewing data backups and log files on the Hortonworks Data Platform (HDP).
  • Day-to-day responsibilities included solving developer issues, handling deployments that move code from one environment to another, providing access to new users, providing prompt solutions to reduce impact, documenting them, and preventing future issues.
  • Experienced in adding and installing new components, and removing them, through Ambari.
  • Implemented and configured a high-availability Hadoop cluster.
  • Performed a major upgrade in the production environment.
  • Installed and configured the Hadoop monitoring and administration tools Nagios and Ganglia.
  • Backed up data from the active cluster to a backup cluster using DistCp (see the sketch after this list).
  • Periodically reviewed Hadoop-related logs, fixed errors, and prevented errors by analyzing warnings.
  • Hands-on experience working with Hadoop ecosystem components such as Hadoop MapReduce, HDFS, ZooKeeper, Oozie, Hive, Sqoop, Pig, and Flume.
  • Experience configuring ZooKeeper to coordinate the servers in the cluster and maintain data consistency.
  • Experience using Flume to stream data into HDFS from various sources. Used the Oozie workflow engine to manage interdependent Hadoop jobs and to automate several types of Hadoop jobs, such as Java MapReduce, Hive, and Sqoop, as well as system-specific jobs.
  • Helped set up rack topology in the cluster.
  • Capacity planning and performance tuning.
  • Implemented the Fair Scheduler to allocate a fair share of resources to small jobs.
  • Implemented automatic NameNode failover using ZooKeeper and the ZooKeeper Failover Controller (ZKFC).
  • Deployed a network file system mount for NameNode metadata backup.
  • Performed both major and minor upgrades to the existing cluster, as well as rollbacks to the previous version.
  • Designed the cluster so that only one standby NameNode daemon could run at any given time.
  • Moved data from HDFS to a MySQL database and vice versa using Sqoop (see the sketches after this list).
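
A minimal sketch of the DistCp backup mentioned above; the NameNode hostnames, port, and paths are illustrative:

```sh
# Incrementally copy a warehouse directory from the active cluster to the
# backup cluster, preserving file status (ownership, permissions, etc.)
hadoop distcp -update -p \
  hdfs://active-nn.example.com:8020/data/warehouse \
  hdfs://backup-nn.example.com:8020/data/warehouse
```

`-update` copies only files that differ from the target, which keeps periodic backup runs cheap.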
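
And a minimal sketch of the Sqoop transfers described in the final bullet; the JDBC URL, credentials, table names, and HDFS paths are assumptions:

```sh
# Import a MySQL table into HDFS
sqoop import \
  --connect jdbc:mysql://dbhost.example.com:3306/sales \
  --username etl_user -P \
  --table transactions \
  --target-dir /data/sales/transactions \
  --num-mappers 4

# Export analyzed results from HDFS back into a MySQL table
sqoop export \
  --connect jdbc:mysql://dbhost.example.com:3306/sales \
  --username etl_user -P \
  --table daily_summary \
  --export-dir /data/sales/daily_summary
```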

Confidential, Dearborn, MI

Hadoop Admin

Environment: CentOS, HDFS, MapReduce, Hive, Pig, HBase, Oozie, CDH 4/5, Ganglia, Nagios.

Responsibilities:

  • Installed, configured, and maintained Apache Hadoop clusters for application development and Hadoop ecosystem components like Hive, HBase, ZooKeeper, and Sqoop.
  • Extensively worked on installation and configuration of the Cloudera Distribution for Hadoop (CDH).
  • Worked on installing the cluster, commissioning and decommissioning of DataNodes, NameNode recovery, capacity planning, and slots configuration.
  • Installed the Oozie workflow engine to schedule Hive and Pig scripts.
  • Developed data pipeline using Flume, Sqoop to ingest customer behavioral data and financial histories into HDFS for analysis.
  • Implemented High Availability on a large cluster.
  • Experienced in capacity planning for large clusters.
  • Installed and configured Hive and HBase.
  • Implemented the Kerberos security mechanism.
  • Involved in migrating ETL processes from Oracle to Hive to test ease of data manipulation.
  • Exported the analyzed data to relational databases using Sqoop for visualization and report generation.
  • Installed cluster monitoring tools like Ganglia and Cloudera Manager, and service monitoring tools like Nagios.
  • Configured ZooKeeper to implement node coordination in support of clustering.
  • Involved in running a Hadoop cluster across a network of 70 nodes.
  • Communicated with development teams and attended daily status meetings.
  • Addressed and troubleshot issues on a daily basis.
  • Worked with data delivery teams to set up new Hadoop users; this included setting up Linux users, setting up Kerberos principals, and testing HDFS and Hive access.
  • Benchmarked the Hadoop cluster with TestDFSIO, TeraGen, TeraSort, and TeraValidate (see the sketch after this list).
  • Performed cluster maintenance as well as creation and removal of nodes.
  • Monitored Hadoop cluster connectivity and security.
  • Managed and analyzed Hadoop log files.
  • Performed file system management and monitoring.
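
A minimal sketch of the benchmark runs referenced above; the jar paths below are typical of CDH packaging but vary by distribution and version:

```sh
# TestDFSIO: measure HDFS write, then read, throughput (10 files x 1000 MB)
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar \
  TestDFSIO -write -nrFiles 10 -fileSize 1000
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar \
  TestDFSIO -read -nrFiles 10 -fileSize 1000

# TeraGen / TeraSort / TeraValidate: generate ~100 GB (1e9 rows x 100 bytes),
# sort it, then verify the sorted output
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 1000000000 /bench/teragen
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar terasort /bench/teragen /bench/terasort
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teravalidate /bench/terasort /bench/teravalidate
```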

Confidential

Linux Administrator

Environment: RHEL, Puppet, Kickstart, Web Server, Nagios, HPSM, Veritas NetBackup, shell scripting, VMware, LDAP.

Responsibilities:

  • Administration of RHEL 4.x and 5.x, including installation, testing, tuning, upgrading, and loading patches, and troubleshooting both physical and virtual server issues.
  • Created and cloned Linux virtual machines and templates using VMware Virtual Client 3.5 and migrated servers between ESX hosts and Xen servers.
  • Installed Red Hat Linux using Kickstart and applied security policies to harden the servers based on company policies.
  • Installed CentOS on multiple servers using Preboot Execution Environment (PXE) boot and the Kickstart method, including remote installation of Linux over PXE boot.
  • Configured Ganglia, which included installing the gmond and gmetad daemons that collect the metrics running on the distributed cluster and present them in real-time dynamic web pages, further helping with debugging and maintenance.
  • Installed and verified that all AIX/Linux patches and updates were applied to the servers.
  • Installed and administered Red Hat Linux using Xen- and KVM-based hypervisors.
  • Performed RPM and YUM package installations, patching, and other server management.
  • Managed routine system backups; scheduled jobs, including disabling and enabling cron jobs; and enabled system logging and network logging of servers for maintenance, performance tuning, and testing.
  • Performed data-center operations, including rack mounting and cabling.
  • Installed, configured, and maintained WebLogic 10.x and Oracle 10g on Solaris and Red Hat Linux.
  • Set up user and group login IDs, printing parameters, network configuration, and passwords; resolved permissions issues; and managed user and group quotas.
  • Configured multipathing, added SAN storage, and created physical volumes, volume groups, and logical volumes (see the LVM sketch after this list).
  • Installed, configured, and supported Apache on Linux production servers.
  • Troubleshot Linux network and security-related issues, capturing packets using tools such as iptables, firewalls, TCP wrappers, and Nmap.
  • Worked on installation of home based agents on all target servers.
  • File system Administration, Setting up Disk Quota, configuring backup solutions.
  • Managed Disks and File systems using LVM on Linux.
  • Scheduled jobs and automated processes using cron and at.
  • Provided 24x7 on-call production support.
  • Troubleshot day-to-day issues with various servers on different platforms.
  • Troubleshot logon problems, the boot process, and printing.
  • Created and troubleshot basic shell and Perl scripts.
  • Worked with the Linux, Oracle Database, and network teams to ensure the smooth relocation of servers.
  • Experience with configuring and managing virtual disks, disk mirrors, and RAID levels 0, 1, and 5.
  • Configured and administered Sendmail and tested the mail server.
  • Monitored Linux servers for CPU, memory, and disk utilization as part of performance monitoring.
  • Set up NFS file systems and shared them with clients (see the sketch after this list).
  • Responsible for desktop system administration, including monitoring and performing system startups and shutdowns.
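
A minimal sketch of the multipath/LVM provisioning flow referenced above; device names, sizes, and mount points are illustrative:

```sh
# Initialize the new SAN LUN as a physical volume (multipath device name is hypothetical)
pvcreate /dev/mapper/mpath0

# Create a volume group and carve out a logical volume
vgcreate appvg /dev/mapper/mpath0
lvcreate -L 50G -n applv appvg

# Make a filesystem (ext3 on RHEL 4/5-era systems), mount it, and verify
mkfs.ext3 /dev/appvg/applv
mkdir -p /app/data
mount /dev/appvg/applv /app/data
df -h /app/data

# Grow later as needed: extend the LV, then resize the filesystem
lvextend -L +20G /dev/appvg/applv
resize2fs /dev/appvg/applv
```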
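
And a minimal sketch of the NFS export setup; the export path, subnet, and RHEL 4/5-era service name are assumptions:

```sh
# On the server: export a directory to clients on the local subnet
echo "/export/shared 192.168.10.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra            # re-export everything listed in /etc/exports
service nfs restart     # RHEL 4/5-style service management

# On a client: mount the share
mount -t nfs nfsserver.example.com:/export/shared /mnt/shared
```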

Confidential

Technical Services Associate

Environment: JES2, JES3, OPC/TWS and AJS, I/O Concepts, IMS, CICS, HMC, Omegaview, Omegamon, Tivoli Enterprise Console

Responsibilities:

  • Provided first-level problem determination (hardware, software, or job-related); created problem tickets; corrected problems or escalated to appropriate personnel.
  • Performed restarts and overrides; corrected JCL syntax errors and space problems.
  • Maintained and modified job streams in existing schedules.
  • Handled mainframe batch job abends, identified delays in critical batch, and escalated on priority.
  • Ensured CICS, IMS, and DB2 startup and availability; performed file manipulation; and maintained shutdown deadlines.
  • Escalated IMS transaction and program abends and restarted them to keep the business running without interruption.
  • Performed analysis and investigation of logs, trends, alerts, notifications, and other event or operational indicators.
  • Well experienced in performing HMC activities such as activating/deactivating LPARs, time changes, stand-alone dumps, and checking for hardware messages.
  • Performed scheduled/unscheduled IPLs using the HMC (Hardware Management Console).
  • Performed post-IPL checks to ensure the system was healthy.
  • Monitored systems for job allocations, hardware errors, and problems affecting system performance.
  • Established the status of all devices and applications (waits, contentions, etc.).
  • Monitored JES2 spool, JES3 spool, and other resource types and took necessary action when spool utilization increased.
  • Monitored mainframe computing performance, performed first-level problem determination, and escalated to the performance team accordingly.
  • Updated the daily batch abends/lates report and distributed it across the team.
  • Gathered and prepared service- and system-related data and information required for operational effectiveness and improvements.
  • Analyzed and ensured that events, incidents, and requests were handled according to agreed procedures and service levels, including supporting documentation.
  • Utilized IT Service Management tools and techniques for tracking, documenting, and reporting operational performance.
  • Engaged the incident manager, ensured a bridge call was opened, and had all affected service lines join the call.
  • Was part of test DR activities.
  • Attended daily/weekly status calls and represented the team in all CAB meetings.
  • Performed checklists to determine the status of couple data sets and coupling facilities.
  • Monitored HSM active functions and awaiting tasks on a timely basis and ensured availability to switch tape drives.
  • Performed all supervisory duties of operations, production control, and network when needed.
  • Cross-trained in all areas of operations to provide necessary coverage.
