
Hadoop Administrator Resume

NY

SUMMARY:

  • Involved in benchmarking Hadoop/HBase cluster file systems with various batch jobs and workloads.
  • Experience monitoring and troubleshooting issues with Linux memory, CPU, OS, storage, and network.
  • Extensive experience designing, configuring, and managing backup and disaster recovery for Hadoop data.
  • Experience balancing and managing nodes and tuning servers for optimal cluster performance.
  • As an administrator, involved in Hadoop cluster maintenance and troubleshooting as well as monitoring, backup, and recovery.
  • Experience in HDFS data storage and support for running MapReduce jobs.
  • Strong knowledge of YARN concepts and high-availability Hadoop clusters.
  • Hands-on experience analyzing log files for Hadoop and ecosystem services and finding root causes.
  • Expertise in installing, configuring, and managing Red Hat Linux 5 and 6; good experience scheduling cron jobs in Linux (see the sample crontab entry after this list).
  • Maintain best practices for managing systems and services across all environments.
  • Manage, coordinate, and implement software upgrades, patches, and hot fixes on servers, workstations, and network hardware.
  • Provide input on ways to improve the stability, security, efficiency, and scalability of the environment.
  • Install and maintain all server hardware and software systems, administer server performance, and ensure availability.
  • Stabilized systems through disk replacement, firmware upgrades on SAN storage, Solaris Volume Manager work, and clustering-environment changes during scheduled maintenance hours.
  • Assisted in configuring and deploying virtual machines and backed up all configuration procedures.
  • Responsible for scheduling and upgrading servers throughout the year to the latest software versions.
  • Communicated and worked with individual application development groups, DBAs, and operations teams.
  • Created custom monitoring plugins for Nagios using UNIX shell scripting and Perl (a minimal sketch follows this list).
  • Collaborate with other teams and team members to develop automation strategies and deployment processes.
  • Provided root-cause analysis of incident reports during downtime issues.
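
A minimal sketch of the kind of Nagios plugin mentioned above, written in plain shell; the script name, thresholds, and paths are illustrative assumptions, not the originals. Nagios interprets a plugin's exit code (0 = OK, 1 = WARNING, 2 = CRITICAL):

  #!/bin/bash
  # check_root_disk.sh -- hypothetical Nagios plugin: alert when / fills up.
  WARN=80
  CRIT=90
  USED=$(df -P / | awk 'NR==2 {gsub("%",""); print $5}')   # used% of /
  if [ "$USED" -ge "$CRIT" ]; then
      echo "CRITICAL - / is ${USED}% full"; exit 2
  elif [ "$USED" -ge "$WARN" ]; then
      echo "WARNING - / is ${USED}% full"; exit 1
  else
      echo "OK - / is ${USED}% full"; exit 0
  fi

And a sample crontab entry of the kind referenced above; the schedule and script path are assumed for illustration (run nightly at 02:00, appending output to a log):

  0 2 * * * /usr/local/bin/cleanup_logs.sh >> /var/log/cleanup.log 2>&1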

TECHNICAL SKILLS:

Hadoop: HDFS, MapReduce, ZooKeeper, Oozie, Kafka, Pig, Hive, Sqoop, Tez, Impala, Flume, Python, HBase, Spark, Storm

Operating System: Red Hat Linux, UNIX, Windows 2000/NT/XP, Sun Solaris

Languages: C, C++, VB, PL/SQL, Java

Scripting Languages: VBScript, UNIX shell scripting

Database: Oracle, SQL Server, Teradata, Tandem

Database Tools: SQL*Plus, Oracle SQL Developer

Version Control: SVN, CVS, Git

PROFESSIONAL EXPERIENCE:

Hadoop Administrator

Confidential, NY

Responsibilities:

  • Loaded data from different sources (Teradata and DB2) into HDFS using Sqoop and loaded it into partitioned Hive tables (see the import sketch after this list).
  • Installed various Hadoop ecosystem components and Hadoop daemons.
  • Responsible for managing data coming from different sources.
  • Supported MapReduce programs running on the cluster.
  • Installed and configured Pig and wrote Pig Latin scripts.
  • Imported data from MySQL to HDFS using Sqoop on a regular basis.
  • Developed scripts and batch jobs to schedule various Hadoop programs.
  • Involved in the end-to-end process of Hadoop cluster setup: installation, configuration, and monitoring of the cluster.
  • Built an automated setup for cluster monitoring and the issue-escalation process.
  • Administered, installed, upgraded, and managed Hadoop distributions (CDH5 with Cloudera Manager), Hive, and HBase.
  • Responsible for cluster maintenance, commissioning and decommissioning DataNodes, cluster monitoring and troubleshooting, and managing and reviewing data backups and Hadoop log files.
  • Monitored systems and services; handled architecture design and implementation of the Hadoop deployment, configuration management, backup, and disaster recovery systems and procedures.
  • Configured property files such as core-site.xml, hdfs-site.xml, and mapred-site.xml based on job requirements.
  • Monitored multiple Hadoop cluster environments using Ganglia and Nagios; monitored workload, job performance, and capacity planning.
  • Recommended hardware configurations for Hadoop clusters.
  • Installed, upgraded, and managed Hadoop clusters on the Cloudera distribution.
  • Managed and reviewed Hadoop and HBase log files.
  • Experience with UNIX and Linux, including shell scripting.
  • Developed Bash scripts to pull log files from an FTP server and process them for loading into Hive tables (a sketch follows this list).
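
A minimal sketch of a Sqoop import into a partitioned Hive table, as described above; the JDBC URL, credentials, table, and partition values are hypothetical placeholders, and the Teradata JDBC driver is assumed to be on the Sqoop classpath:

  sqoop import \
    --connect jdbc:teradata://tdhost/DATABASE=sales \
    --driver com.teradata.jdbc.TeraDriver \
    --username etl_user -P \
    --table ORDERS \
    --target-dir /user/etl/staging/orders \
    --num-mappers 4 \
    --hive-import \
    --hive-table sales.orders \
    --hive-partition-key load_date \
    --hive-partition-value 2016-01-15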
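
A sketch of the FTP-to-Hive log load, assuming an external Hive table partitioned by load date; host names, paths, and the table name are illustrative:

  #!/bin/bash
  set -e
  FTP_HOST=ftp.example.com          # hypothetical log source
  LOCAL_DIR=/data/staging/logs
  HDFS_DIR=/user/etl/raw/logs
  LOAD_DATE=$(date +%Y-%m-%d)

  mkdir -p "$LOCAL_DIR"
  # Mirror the remote log directory non-interactively.
  wget -q -r -nd -P "$LOCAL_DIR" "ftp://$FTP_HOST/logs/"

  # Stage the files into a dated HDFS partition directory.
  hdfs dfs -mkdir -p "$HDFS_DIR/load_date=$LOAD_DATE"
  hdfs dfs -put -f "$LOCAL_DIR"/*.log "$HDFS_DIR/load_date=$LOAD_DATE/"

  # Register the new partition with the external Hive table.
  hive -e "ALTER TABLE raw_logs ADD IF NOT EXISTS PARTITION (load_date='$LOAD_DATE')
           LOCATION '$HDFS_DIR/load_date=$LOAD_DATE';"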

Environment: Hadoop, HDFS, MapReduce, Shell Scripting, Spark, Solr, Pig, Hive, HBase, Sqoop, Flume, Oozie, ZooKeeper, cluster health monitoring, security, Red Hat Linux, Impala, Cloudera Manager

Hadoop Administrator

Confidential, Newark, NJ

Responsibilities:

  • Responsible for loading customer data and event logs from Oracle and Teradata databases into HDFS using Sqoop.
  • End-to-end performance tuning of Hadoop clusters and MapReduce routines against very large data sets.
  • Screened Hadoop cluster job performance and handled capacity planning.
  • Monitored Hadoop cluster connectivity and security.
  • Managed and reviewed Hadoop log files.
  • Provided HDFS support and maintenance.
  • Teamed diligently with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
  • Installed and configured various components of the Hadoop ecosystem and maintained their integrity.
  • Designed, configured, and managed backup and disaster recovery for HDFS data.
  • Experience with UNIX and Linux, including shell scripting.
  • Installed, upgraded, and managed Hadoop clusters on the Cloudera distribution.
  • Commissioned DataNodes as data grew and decommissioned them when hardware degraded (see the decommissioning sketch after this list).
  • Configured property files such as core-site.xml, hdfs-site.xml, and mapred-site.xml based on job requirements.
  • Monitored multiple Hadoop cluster environments using Ganglia and Nagios; monitored workload, job performance, and capacity planning.
  • Recommended hardware configurations for Hadoop clusters.
  • Installed and configured Hadoop HDFS, MapReduce, Pig, Hive, and Sqoop.
  • Wrote Pig scripts to generate MapReduce jobs and performed ETL procedures on the data in HDFS.
  • Exported analyzed data from HDFS to relational databases using Sqoop for report generation.
  • Managed and reviewed Hadoop and HBase log files.
  • Scheduled all Hadoop/Hive/Sqoop/HBase jobs using Oozie (see the CLI sketch after this list).
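
A sketch of the graceful DataNode decommissioning flow mentioned above; the hostname and exclude-file path are assumptions, and the path must match the dfs.hosts.exclude setting in hdfs-site.xml:

  # 1. Add the node to the exclude file read by the NameNode.
  echo "datanode07.example.com" >> /etc/hadoop/conf/dfs.exclude

  # 2. Make the NameNode re-read its host lists; HDFS then re-replicates
  #    the node's blocks before marking it Decommissioned.
  hdfs dfsadmin -refreshNodes

  # 3. Watch progress until the node reports "Decommissioned".
  hdfs dfsadmin -report | grep -A 2 datanode07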
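
A sketch of driving Oozie from the command line for the scheduling described above; the Oozie URL, properties-file path, and job ID are hypothetical:

  # Submit and start a workflow/coordinator defined by job.properties.
  oozie job -oozie http://oozieserver:11000/oozie \
            -config /home/etl/jobs/daily-load/job.properties -run

  # Check the status of a submitted job by its ID.
  oozie job -oozie http://oozieserver:11000/oozie -info 0000012-160101120000001-oozie-oozi-C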

Environment: Hadoop, MapReduce, Shell Scripting, Spark, Pig, Hive, Cloudera Manager, CDH 5.4.3, HDFS, YARN, Hue, Sentry, Oozie, ZooKeeper, Impala, Solr, Kerberos, cluster health monitoring, Puppet, Ganglia, Nagios, Flume, Sqoop, Storm, Kafka, KMS

Hadoop Administrator

Confidential, NC

Responsibilities:

  • Loaded data from different sources (Teradata and DB2) into HDFS using Sqoop and loaded it into partitioned Hive tables.
  • Installed various Hadoop ecosystem components and Hadoop daemons.
  • Responsible for managing data coming from different sources.
  • Supported MapReduce programs running on the cluster.
  • Installed and configured Pig and wrote Pig Latin scripts.
  • Imported data from MySQL to HDFS using Sqoop on a regular basis.
  • Developed scripts and batch jobs to schedule various Hadoop programs.
  • Involved in the end-to-end process of Hadoop cluster setup: installation, configuration, and monitoring of the cluster.
  • Built an automated setup for cluster monitoring and the issue-escalation process.
  • Administered, installed, upgraded, and managed Hadoop distributions (CDH5 with Cloudera Manager), Hive, and HBase.
  • Responsible for cluster maintenance, commissioning and decommissioning DataNodes, cluster monitoring and troubleshooting, and managing and reviewing data backups and Hadoop log files.
  • Monitored systems and services; handled architecture design and implementation of the Hadoop deployment, configuration management, backup, and disaster recovery systems and procedures.
  • Configured property files such as core-site.xml, hdfs-site.xml, and mapred-site.xml based on job requirements (illustrative properties follow this list).
  • Monitored multiple Hadoop cluster environments using Ganglia and Nagios; monitored workload, job performance, and capacity planning.
  • Recommended hardware configurations for Hadoop clusters.
  • Installed, upgraded, and managed Hadoop clusters on the Cloudera distribution.
  • Managed and reviewed Hadoop and HBase log files.
  • Experience with UNIX and Linux, including shell scripting.
  • Developed Bash scripts to pull the TLog files from the FTP server and process them for loading into Hive tables.
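
Illustrative examples of the kinds of properties tuned in those files; the values shown are assumptions, not the actual cluster settings, and effective values can be checked from the shell:

  # core-site.xml   -- default filesystem:
  #   <property><name>fs.defaultFS</name><value>hdfs://namenode:8020</value></property>
  # hdfs-site.xml   -- replication factor and NameNode metadata directory:
  #   <property><name>dfs.replication</name><value>3</value></property>
  #   <property><name>dfs.namenode.name.dir</name><value>/data/1/dfs/nn</value></property>
  # mapred-site.xml -- run MapReduce on YARN:
  #   <property><name>mapreduce.framework.name</name><value>yarn</value></property>

  # Verify an effective value on the cluster:
  hdfs getconf -confKey dfs.replication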

Environment: Hadoop, HDFS, MapReduce, Shell Scripting, Spark, Solr, Pig, Hive, HBase, Sqoop, Flume, Oozie, ZooKeeper, cluster health monitoring, security, Red Hat Linux, Impala, Cloudera Manager

UNIX Administrator

Confidential

Responsibilities:

  • Designed and maintained system tools for all scripts and automation processes and monitored capacity planning.
  • Integrated required software, resolved issues across various technologies, designed the required enterprise servers, and provided backup support.
  • Evaluated documents against system requirements, evaluated designs, performed tests on development activities, and administered complex methodologies.
  • Developed infrastructure to support all business requirements and performed regular troubleshooting on systems to resolve issues.
  • Monitored day-to-day systems, evaluated the availability of server resources, and performed all activities for Linux servers (a health-check sketch follows this list).
  • Assisted in configuring and deploying virtual machines and backed up all configuration procedures.
  • Implemented and set up virtualization environments for AIX LPARs, HP Integrity VMs, Solaris Zones, and Logical Domains.
  • Updated and created provisioning scripts to set up new operating systems and software for supported platforms.
  • Consolidated servers from numerous smaller remote data centers into three central data centers.
  • Stabilized systems through disk replacement, firmware upgrades on SAN storage, Solaris Volume Manager work, and clustering-environment changes during scheduled maintenance hours.
  • Enhanced business continuity procedures by adding a critical middleware server identified through power-down test activity.
  • Resolved issues and planned requests as the point of contact for vendors, oversaw developers and business users while following change-control procedures, and reported results.
  • Maintained and monitored all patch releases, designed patch installation strategies, and maintained all systems according to Confidential standards.
  • Administered performance for various resources, ensured their optimization, supported all applications, and ensured an optimal level of customer service.
  • Maintained and monitored all system frameworks, provided after-call support for all systems, and maintained strong Linux knowledge.
  • Troubleshot all tools, maintained multiple servers, and provided backups for all file and script management servers.
  • Wrote and maintained shell scripts using Perl and Bash.
  • Monitored, troubleshot, and resolved issues involving operating systems.
  • Applied the ITIL approach to incident and problem management.
  • Developed and maintained a troubleshooting journal for the incident management team.
  • Participated in an on-call rotation to provide 24×7 technical support.
  • Tested numerous software and hardware configurations during development to recreate the operating environments used by customers, in an effort to avoid distributing releases with bugs or erroneous documentation.
  • Wrote utility scripts in Bash and Korn shell.
  • Configured UNIX systems to use Active Directory, Kerberos, NTPD, XDMCP, LDAP, SSH, FTP, TFTP, and DNS.
  • Performed problem diagnosis, corrected discrepancies, developed user and maintenance documentation, provided user assistance, and evaluated system performance.
  • Installed and configured third-party applications and hardened new and existing servers and desktops.
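
A minimal sketch of the kind of availability check run against the Linux servers mentioned above; the thresholds, alert address, and the script itself are illustrative assumptions (the /proc interfaces used are Linux-specific):

  #!/bin/bash
  ALERT="unix-admins@example.com"                        # hypothetical alias
  LOAD=$(awk '{print $1}' /proc/loadavg)                 # 1-minute load average
  MEM_FREE=$(awk '/MemFree/ {print $2}' /proc/meminfo)   # free memory in kB

  # Mail an alert if load is high (awk handles the float comparison).
  if awk "BEGIN {exit !($LOAD > 8.0)}"; then
      echo "High load: $LOAD on $(hostname)" | mail -s "Load alert" "$ALERT"
  fi

  # Mail an alert if free memory drops below ~100 MB.
  if [ "$MEM_FREE" -lt 102400 ]; then
      echo "Low free memory: ${MEM_FREE} kB on $(hostname)" | mail -s "Memory alert" "$ALERT"
  fi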

Environment: Kerberos, HP OpenView ITO (OVO) monitoring, Red Hat Linux, Windows, FTP, Solaris, HP-UX with Oracle, Sybase

UNIX Administrator

Confidential, Silver Spring, MD

Responsibilities:

  • Maintained and monitored all patch releases, designed patch installation strategies, and maintained all systems according to Confidential standards.
  • Administered performance for various resources, ensured their optimization, supported all applications, and ensured an optimal level of customer service.
  • Maintained and monitored all system frameworks, provided after-call support for all systems, and maintained strong Linux knowledge.
  • Assisted developers with troubleshooting custom software and services such as ActiveSync, CalDAV, CardDAV, and PHP.
  • Provided top-level customer service and implementation for DKIM, SPF, and custom SSL/TLS security (a DNS verification sketch follows this list).
  • Implemented and performed initial configuration of a Confidential Storage CS460G-X2 array and migrated data from a legacy Confidential storage array; converted access from NFS to iSCSI.
  • Assigned to selected projects and successfully defined the hardware and software needed to complete them.
  • Recommended that a new Sales Tax project use repurposed servers, thus saving the project.
  • Maintained Solaris server hardware, performed basic troubleshooting of database problems, and initiated the steps needed to fix any errors found, using shell scripts.
  • Served as project lead on updating hardware and software for the backup schema on both Windows and UNIX/Linux development networks.
  • Troubleshot errors found in code using simple Perl scripts.
  • Planned and coordinated the move of server equipment from the older server area to the newer location, then conducted setup.
  • Documented a troubleshooting guide for administrators on on-call pager duty.
  • Attended team meetings and handled light managerial duties in the absence of the team lead.
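
A quick sketch of verifying SPF and DKIM records after an implementation like the one above; the domain and DKIM selector are hypothetical:

  dig +short TXT example.com                    # SPF is published as a TXT record
  dig +short TXT mail._domainkey.example.com    # DKIM public key at <selector>._domainkey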

Environment: UNIX, Solaris, HP-UX, Red Hat Linux, Windows, FTP, SFTP
