
Hadoop Administrator Resume


Houston, TX

SUMMARY:

  • Around 6 years of professional Information Technology experience in Hadoop and Linux administration, including installation, configuration and maintenance of systems and clusters. Hands-on day-to-day operation of the environment, with knowledge and deployment experience across the Hadoop ecosystem.
  • Installed, configured and maintained Apache Hadoop clusters for application development, along with Hadoop tools like Hive, Pig, HBase, Zookeeper and Sqoop.
  • Hands-on experience installing and configuring Cloudera, MapR and Hortonworks clusters and installing Hadoop ecosystem components like Pig, Hive, HBase, Sqoop, Kafka, Oozie, Flume and Zookeeper.
  • Experience in installing, configuring and optimizing Cloudera Hadoop versions CDH3, CDH 4.x and CDH 5.x in multi-cluster environments.
  • Excellent knowledge of Cassandra architecture, Cassandra data modelling and monitoring Cassandra using OpsCenter.
  • Hands-on experience with the Hadoop stack (HDFS, MapReduce, YARN, Sqoop, Flume, Hive-Beeline, Impala, Tez, Pig, Zookeeper, Oozie, Solr, Sentry, Kerberos, Centrify DC, Falcon, Hue, Kafka, Storm).
  • Experienced with Hortonworks Hadoop clusters.
  • Experience with Cloudera Hadoop clusters.
  • Used Oozie workflows to automate jobs on Amazon EMR.
  • Wrote shell scripts and successfully migrated data from on-premises storage to AWS EMR (S3).
  • Installing, configuring and monitoring Apache Cassandra Prod, Dev and Test clusters.
  • Implementing and maintaining a multi-datacenter cluster.
  • Involved in the process of adding a new datacenter to the existing Cassandra cluster.
  • Involved in upgrading the Cassandra test clusters.
  • Creating required keyspaces for applications in prod, dev, test, and fst clusters.
  • Determining and setting up the required replication factors for keyspaces in prod, dev, etc. environments in consultation with application teams (see the CQL sketch after this list).
  • Creating required tables with appropriate user privileges and secondary indexes.
  • Set up Cassandra backups using snapshot backups (see the nodetool sketch after this list).
  • Used OpsCenter to monitor prod, dev, test, and fst Cassandra clusters.
  • Implemented a Spark solution to enable real-time reports from Cassandra data.
  • Generated user-specific reports based on indexed columns using Spark.
  • Performance tuned the Cassandra cluster to optimize writes and reads.
  • Involved in the process of bootstrapping, decommissioning, replacing, repairing and removing nodes.
  • Benchmarked the Cassandra cluster based on expected traffic for the use case and optimized for low latency.
  • Troubleshot read/write latency and timeout issues in Cassandra.
  • Excellent knowledge of CQL (Cassandra Query Language) for retrieving data from the Cassandra cluster by running queries in CQL.
  • Involved in designing various stages of migrating data from RDBMS to Cassandra.
  • Can handle commissioning and decommissioning nodes along with monitoring of the Cassandra cluster.
  • Have knowledge of Apache Spark with Cassandra.
  • Gained hands-on experience analyzing Cassandra data from flat files using Spark.
  • Responsible for continuous monitoring and managing Elastic MapReduce (EMR) cluster through AWS console.
  • Implemented and maintained security (LDAP, Kerberos) as designed for the cluster.
  • Expert in setting up Hortonworks clusters with and without Ambari.
  • Experienced in setting up Cloudera clusters using packages as well as parcels with Cloudera Manager.
  • In-depth understanding of Hadoop architecture and its components, such as HDFS, YARN, Job Tracker, Task Tracker, Name Node, Data Node and MapReduce.
  • Experience in configuring, installing and managing MapR, Hortonworks and Cloudera distributions. Extensive experience understanding clients' Big Data business requirements and transforming them into Hadoop-centric technologies.
  • Responsible for infrastructure maintenance for Cassandra, Spark and MongoDB.
  • Wrote Python and shell scripts to check the health of Cassandra and the status of Spark jobs.
  • Involved in Cassandra data modelling through the phases of conceptual model, application flow, logical model, physical optimization and final physical model.
  • Solid understanding of all phases of development using multiple methodologies, i.e. Agile with JIRA and Kanban boards, along with the ticketing tools Remedy and ServiceNow.
  • Expertise in Red Hat Linux tasks, including upgrading RPMs using YUM, kernel upgrades, and configuring SAN disks, multipath and LVM file systems.
  • Creating and maintaining user accounts, profiles, security, rights, disk space and process monitoring. Handling and generating tickets via the BMC Remedy ticketing tool.
  • Configured UDP, TLS, SSL, HTTPD, HTTPS, FTP, SFTP, SMTP, SSH, Kickstart, Chef, Puppet and PDSH.
  • Experience in deploying and managing multi-node development and production Hadoop clusters with different Hadoop components (HIVE, PIG, SQOOP, OOZIE, FLUME, HCATALOG, HBASE, ZOOKEEPER) using Hortonworks Ambari.
  • Worked on creating Ansible playbooks to install Tomcat instances and to manage configuration files for multiple applications.
  • Provide hands-on engineering expertise to assist with support and operation of the cloud infrastructure. Responsibilities include the design, creation, configuration, and delivery of cloud infrastructure environments.
  • Hands on experience on Unix/Linux environments, which included software installations/upgrades, shell scripting for job automation and other maintenance activities.
  • Sound knowledge of Oracle 9i, Core Java, JSP and Servlets, and experience with SQL and PL/SQL concepts: database stored procedures, functions and triggers.
  • Expertise in Ansible playbooks and AWX deployments.
  • Expert in using ANT scripts, Make and Maven for the build process. Experience implementing Continuous Integration through Jenkins and deployments using tools like Chef and Ansible.
  • Well versed in writing Hive Queries and Hive query optimization by setting different queues.
  • Maintaining and executing existing Ansible Playbooks.
  • Troubleshooting, Security, Backup, Disaster Recovery, Performance Monitoring on Linux systems. Experience in Jumpstart, Kickstart, Infrastructure setup and Installation Methods for Linux.
  • Developed and Coordinated deployment methodologies (Bash, Puppet & Ansible).
  • Ran Ansible playbooks and created various roles for applications, then deployed the Applications/Services on hosts.
  • Experience in implementation and troubleshooting of clusters, JMS and JDBC.
  • Experience importing real-time data to Hadoop using Kafka and implementing Oozie jobs. Experience scheduling recurring Hadoop jobs with Apache Oozie.
  • Experience setting up Encryption Zones in Hadoop and working with data retention (see the sketch after this list).
  • Knowledge of NoSQL databases such as HBase, Cassandra and MongoDB.
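
Below are a few brief sketches of the administration tasks referenced above. First, keyspace and table setup: a minimal CQL example run through cqlsh, assuming authentication is enabled; the keyspace, datacenter, table and role names are hypothetical.

    # Create a keyspace with a per-datacenter replication factor, a table,
    # a secondary index and a read-only grant (all names are illustrative)
    cqlsh -u admin -e "
      CREATE KEYSPACE IF NOT EXISTS app_ks
        WITH replication = {'class': 'NetworkTopologyStrategy', 'dc_prod': 3};
      CREATE TABLE IF NOT EXISTS app_ks.events (
        user_id    uuid,
        event_time timestamp,
        payload    text,
        PRIMARY KEY (user_id, event_time));
      CREATE INDEX IF NOT EXISTS payload_idx ON app_ks.events (payload);
      GRANT SELECT ON KEYSPACE app_ks TO app_reader;
    "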
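
Second, snapshot backups: nodetool snapshots hard-link SSTables under each table's data directory, so they still have to be archived off-node for a real backup; the tag and keyspace names are hypothetical.

    # Take a named snapshot of one keyspace on the local node
    nodetool snapshot -t nightly_20170101 app_ks
    # List snapshots; files live under .../data/<keyspace>/<table>/snapshots/<tag>/
    nodetool listsnapshots
    # Clear the snapshot once it has been archived
    nodetool clearsnapshot -t nightly_20170101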
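
Third, HDFS encryption zones: a minimal sketch assuming the Hadoop KMS is already configured; the key name and path are hypothetical.

    # Create a key in the KMS, then turn an empty directory into an encryption zone
    hadoop key create projectKey
    hdfs dfs -mkdir -p /secure/project
    hdfs crypto -createZone -keyName projectKey -path /secure/project
    # Verify the zone (run as the HDFS superuser)
    hdfs crypto -listZones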

TECHNICAL SKILLS:

Hadoop/Big Data Technologies: HDFS, MapReduce, Hive, Pig, Sqoop, Flume, Oozie, Storm, Zookeeper, Kafka, Impala, MapR, HCatalog, Apache Spark, Spark Streaming, Spark SQL, HBase, NiFi, Cassandra, Hue, Solr, Ansible, AWS (EMR, EC2), Hortonworks, Cloudera.

Hadoop Ecosystem and Automation Tools: MapReduce, HDFS, Pig, Hive, HBase, Sqoop, Zookeeper, Oozie, Hue, Storm, Kafka, Solr, Spark, Flume.

Platforms: Linux (RHEL, Ubuntu), OpenSolaris, AIX.

Databases: MySQL, Oracle … Oracle Server X6-2, HBase, NoSQL.

Scripting languages: Shell Scripting, Bash Scripting, HTML scripting, Python.

Web Servers: Apache Tomcat, JBoss, Windows Server 2003/2008/2012.

Security Tools: LDAP, Sentry, Ranger and Kerberos.

Cluster Management Tools: Cloudera Manager, HDP Ambari, Hue

Operating Systems: Sun Solaris 8/9/10, Red Hat Linux 4.0, RHEL 5.4, RHEL 6.4, IBM AIX, HP-UX 11.0/11i, UNIX, VMware ESX 2.x/3.x, Windows XP, Windows Server …, Ubuntu.

Scripting & Programming Languages: Shell & Perl programming

Programming Languages: Core Java, HTML, C, C++.

PROFESSIONAL EXPERIENCE

Hadoop Administrator

Confidential, Houston, TX

Responsibilities:

  • Research and recommend innovative, and where possible, automated approaches for system administration tasks.
  • Ability to collaborate closely with product managers and lead engineers.
  • Provide guidance in the creation and modification of standards and procedures.
  • Proactively monitor and set up alerting mechanisms for the Kafka cluster and supporting hardware to ensure system health and maximum availability.
  • Wrote AWS Lambda functions in Python that invoke scripts to perform various transformations and analytics on large data sets in EMR clusters.
  • As lead of the Data Services team, built a Hadoop cluster on the Azure HDInsight platform and deployed data analytics solutions using tools like Spark and BI reporting tools.
  • Experience with Azure components and APIs.
  • Experienced in storing the analyzed results into the Cassandra cluster.
  • Configured, documented and demonstrated inter-node communication between Cassandra nodes and clients using SSL encryption.
  • Administered the Cassandra cluster using DataStax OpsCenter and monitored CPU usage, memory usage and health of nodes in the cluster.
  • Configured performance tuning and monitoring for Cassandra read and write processes for fast I/O operations and low latency.
  • Optimized the Cassandra cluster by making changes in Cassandra properties and Linux (Red Hat) OS configurations.
  • Experienced in provisioning and managing multi-datacenter Cassandra cluster on public cloud environment.
  • Experienced in upgrading the existing Cassandra cluster to latest releases.
  • Knowledge of setting up cluster-wide Cassandra monitoring scripts and alerting systems.
  • Knowledge of bootstrapping, removing and replicating nodes in Cassandra and Solr clusters.
  • Installed and configured the Cassandra cluster and CQL on it.
  • Installed, configured and tested a DataStax Enterprise Cassandra multi-node cluster with 4 datacenters and 5 nodes each.
  • Involved in capacity planning and requirements gathering for the multi-datacenter Cassandra cluster.
  • Involved in the process of designing the Cassandra architecture.
  • Thorough knowledge of the Azure IaaS and PaaS platforms.
  • Managed an Azure-based SaaS environment.
  • Worked with Azure Data Lake and Azure Data Factory.
  • Responsible for daily monitoring activities of 6 clusters on cloud (GCP) with 3 different environments (Dev, Stg and Prod) making a total of 18 clusters.
  • Supported the developer team with issues such as job failures related to Hive queries, Zeppelin issues, etc.
  • Responsible for setting rack awareness on all clusters.
  • Responsible for DDL deployments as per requirements and validated DDLs across different environments.
  • Responsible for usual admin activities like giving users access to edge nodes and raising tickets/requests for CyberArk account creation, RSA and AD user creation for different services.
  • Installed the Druid service on the HDP cluster, loaded batch jobs into Druid, and loaded streaming data from Kafka into Druid.
  • Connected Druid to Hive and Hive to Tableau.
  • Upgraded the cluster from HDP 2.6.0 to HDP 2.6.5.
  • Worked on increasing HDFS I/O efficiency by adding new disks and directories to HDFS DataNodes. Tested HDFS performance with TestDFSIO before and after adding data directories (see the sketch after this list).
  • Worked on HBase performance tuning by following Apache HBase recommendations and changed row key accordingly.
  • Creation of key performance metrics, measuring the utilization, performance and overall health of the cluster.
  • Capacity planning and implementation of new/upgraded hardware and software releases as well as for storage infrastructure.
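
A sketch of the TestDFSIO runs mentioned above, using the jobclient tests jar shipped with Hadoop 2.x; the file count and size are illustrative.

    JAR=$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar
    # Write then read 10 x 1 GB files; results append to TestDFSIO_results.log
    hadoop jar $JAR TestDFSIO -write -nrFiles 10 -size 1GB
    hadoop jar $JAR TestDFSIO -read -nrFiles 10 -size 1GB
    # Remove the benchmark data afterwards
    hadoop jar $JAR TestDFSIO -clean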

Environment: Big Data, HDFS, YARN, Hive, Sqoop, Cassandra-stress, Zookeeper, HBase, Oozie, Kerberos, Ranger, Knox, Spark, Red Hat Linux, Cassandra 2.1.

Hadoop Administrator

Confidential, Chicago, IL

Responsibilities:

  • Administration and management of Atlassian tool suites (installation, deployment, configuration, migration, upgrade, patching, provisioning, server management etc.).
  • Integrated JIRA with Apteligent to create two-way linking between crash reports and JIRA issues.
  • Audited existing plug-ins and uninstalled a few unused plugins to save costs and manage the tool efficiently. Automated the transition of issues based on status when work is logged on them.
  • Automated issue creation from Office 365 email through a mail handler. Configured logging to reduce unnecessary warnings and info messages.
  • Installed and configured Hadoop MapReduce and HDFS; developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
  • Worked on Installing and configuring the HDP Hortonworks 2.x Clusters in Dev and Production Environments.
  • Worked on Capacity planning for the Production Cluster.
  • Installed HUE Browser.
  • Involved in loading data from UNIX file systems to HDFS, creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Experience in MapR, Cloudera, & EMR Hadoop distributions.
  • Worked on installing Hortonworks 2.1 on AWS Linux servers and configuring Oozie jobs.
  • Created a complete processing engine based on the Hortonworks distribution, tuned for performance.
  • Performed cluster upgrades from HDP 2.1 to HDP 2.3.
  • Configured queues in the Capacity Scheduler and took snapshot backups of HBase tables.
  • Worked on fixing the cluster issues and Configuring High Availability for Name Node in HDP 2.1.
  • Involved in Cluster Monitoring backup, restore and troubleshooting activities.
  • Involved in MapR to Hortonworks migration.
  • Currently working as Hadoop administrator on the MapR Hadoop distribution for 5 clusters, ranging from POC to PROD, containing more than 1000 nodes.
  • Implemented manifest files in puppet for automated orchestration of Hadoop and Cassandra clusters.
  • Worked on installing cluster, commissioning & decommissioning of Data Nodes, Name Node recovery, capacity planning, Cassandra and slots configuration.
  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Managed and reviewed Hadoop log files.
  • Administration of HBase, Hive, Sqoop, HDFS and MapR.
  • Imported and exported data between databases like MySQL and other RDBMS and HDFS/HBase using Sqoop (see the sketch after this list).
  • Worked on Configuring Kerberos Authentication in the cluster
  • Experience using the MapR File System, Ambari and Cloudera Manager for installation and management of Hadoop clusters.
  • Very good experience with the Hadoop ecosystem in UNIX environments.
  • Experience with UNIX administration.
  • Worked on installing and configuring Solr 5.2.1 in Hadoop cluster.
  • Hands on experience in installation, configuration, management and development of big data solutions using Hortonworks distributions.
  • Worked on indexing HBase tables, including indexing JSON and nested data.
  • Hands-on experience installing and configuring Spark and Impala.
  • Successfully installed and configured queues in the Capacity Scheduler and the Oozie scheduler.
  • Worked on configuring queues and on performance optimization of Hive queries, performing tuning at the cluster level and adding users to the clusters.
  • Day-to-day responsibilities included solving developer issues, deployments (moving code from one environment to another), providing access to new users, and providing instant solutions to reduce impact, documenting them to prevent future issues.
  • Adding/installation of new components and removal of them through Ambari.
  • Collaborating with application teams to install operating system and Hadoop updates, patches, version upgrades.
  • Monitored workload, job performance and capacity planning
  • Involved in analyzing system failures, identifying root causes and recommending courses of action.
  • Creating and deploying a corresponding SolrCloud collection.
  • Creating collections and configurations, and registering a Lily HBase Indexer configuration with the Lily HBase Indexer Service.
  • Creating and managing the Cron jobs.
  • Responsible for cluster maintenance, monitoring, commissioning and decommissioning DataNodes (see the sketch after this list), troubleshooting, and managing and reviewing data backups and log files.
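
A minimal sketch of gracefully decommissioning a DataNode, assuming hdfs-site.xml already points dfs.hosts.exclude at the excludes file; the hostname and path are hypothetical.

    # Add the host to the excludes file read by the NameNode
    echo "dn07.example.com" >> /etc/hadoop/conf/dfs.exclude
    # Ask the NameNode to re-read its include/exclude lists
    hdfs dfsadmin -refreshNodes
    # Wait for the node to show "Decommissioned" before stopping it
    hdfs dfsadmin -report | grep -A 2 dn07.example.com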
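
And a sketch of the kind of Sqoop import used for the RDBMS-to-HDFS/HBase loads above; the JDBC URL, credentials and table names are hypothetical.

    # Import one table into Hive with 4 parallel mappers; -P prompts for the password
    sqoop import \
      --connect jdbc:mysql://db01.example.com:3306/sales \
      --username etl_user -P \
      --table orders \
      --hive-import --hive-table sales.orders \
      --num-mappers 4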

Environment: Hadoop, Map Reduce, Yarn, Hive, HDFS, PIG, Sqoop, Solr, Oozie, Impala, Spark, Hortonworks, Flume, HBase, Zookeeper and Unix/Linux, Hue (Beeswax), AWS.

Hadoop Kafka Admin

Confidential, Chicago, IL

Responsibilities:

  • Used Sqoop to connect to Oracle, MySQL and Teradata and move data into Hive/HBase tables.
  • Worked on Hadoop operations on the ETL infrastructure with other BI teams like TD and Tableau.
  • Involved in installing and configuring Confluent Kafka in the R&D line; validated the installation with the HDFS and Hive connectors.
  • Implemented High Availability and automatic failover infrastructure to overcome single point of failure for Name node utilizing Zookeeper services.
  • Effectively worked in Agile methodology and provided production on-call support.
  • Regular Ad-Hoc execution of Hive and Pig queries depending upon the use cases.
  • Regular Commissioning and Decommissioning of nodes depending upon the amount of data.
  • Monitor Hadoop cluster connectivity and security.
  • Manage and review Hadoop log files.
  • File system management and monitoring.
  • Monitored Hadoop Jobs and Reviewed Logs of the failed jobs to debug the issues based on the errors.
  • Diagnose and resolve performance issues and scheduling of jobs using Cron & Control-M.
  • Used Avro SerDe for serialization and de-serialization packaged with Hive to parse the contents of streamed log data.
  • Performed Disk Space management to the users and groups in the cluster.
  • Used Storm and Kafka Services to push data to HBase and Hive tables.
  • Documented slides & Presentations on Confluence Page.
  • Added Nodes to the cluster and Decommissioned nodes from the cluster whenever required.
  • Used Sqoop and DistCp utilities for data copying and data migration.
  • Worked on end-to-end data flow management from sources to a NoSQL database (MongoDB) using Oozie.
  • Installed a Kafka cluster with separate nodes for brokers (see the sketch after this list).
  • Involved with Continuous Integration team to setup tool GitHub for scheduling automatic deployments of new/existing code in Production.
  • Deployed Hadoop cluster of Cloudera Distribution and installed ecosystem components: HDFS, Yarn, Zookeeper, HBase, Hive, MapReduce, Pig, Kafka, Confluent Kafka, Storm and Spark in Linux servers.
  • Responsible for maintaining 24x7 production CDH Hadoop clusters running Spark, HBase, Hive and MapReduce with multiple petabytes of data storage on a daily basis.
  • Configured Capacity Scheduler on the Resource Manager to provide a way to share large cluster resources.
  • Deployed Name Node high availability for major production cluster.
  • Experienced in writing automatic scripts for monitoring file systems and key MapR services.
  • Configured Oozie for workflow automation and coordination.
  • Troubleshot production-level issues in the cluster and its functionality.
  • Backed up data on a regular basis to a remote cluster using DistCp (see the sketch after this list).
  • Setting up cluster and installing all the ecosystem components through MapR and manually through command line in Lab Cluster.
  • Monitored multiple Hadoop cluster environments using Nagios. Monitored workload, job performance and capacity planning using MapR Control System.
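
A sketch of topic administration with the ZooKeeper-based tooling of that Kafka generation; the host, topic and sizing values are hypothetical.

    # Create a topic replicated across 3 brokers
    kafka-topics.sh --create --zookeeper zk01:2181 \
      --replication-factor 3 --partitions 12 --topic clickstream
    # Check partition leaders and in-sync replicas
    kafka-topics.sh --describe --zookeeper zk01:2181 --topic clickstream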
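
And a minimal sketch of the DistCp backup to a remote cluster; the source path and DR NameNode address are hypothetical.

    # -update copies only files that differ; -p preserves permissions/ownership
    hadoop distcp -update -p \
      /data/warehouse \
      hdfs://dr-nn01.example.com:8020/backups/warehouse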

Environment: CDH 5.8.3, HBase, Hive, Pig, Sqoop, Yarn, Apache Oozie workflow scheduler, Kafka, Flume, Zookeeper.

Hadoop Administrator

Confidential, Detroit, MI

Responsibilities:

  • Converted Hive/SQL queries into Spark transformations using Spark RDDs, Python and Scala.
  • Implemented Flume, Spark and Spark Streaming frameworks for real-time data processing.
  • Involved in implementing security on the Hortonworks Hadoop cluster with Kerberos, working with the operations team to move the non-secured cluster to a secured cluster.
  • Responsible for upgrading Hortonworks Hadoop HDP 2.2.0 and MapReduce 2.0 with YARN in a multi-clustered node environment.
  • Configured the YARN Capacity Scheduler with Apache Ambari.
  • Configuring predefined alerts and automating cluster operations using Apache Ambari.
  • Managing files on HDFS via the CLI/Ambari Files View. Ensured the cluster is healthy and available with monitoring tools.
  • Developed Hive User Defined Functions in Python and wrote Hadoop MapReduce programs in Python (see the streaming sketch after this list).
  • Improved mapper and reducer code using Python iterators and generators.
  • Built high availability for major production cluster and designed automatic failover control using Zookeeper Failover Controller (ZKFC) and Quorum Journal nodes.
  • Worked on Distributed/Cloud Computing (MapReduce/ Hadoop, Hive, Pig, HBase, Sqoop, Flume, Spark, Zookeeper, etc.), Hortonworks (HDP 2.4.0)
  • Deploying, managing, and configuring HDP using Apache Ambari 2.4.2.
  • Installed and worked on Hadoop clusters for different teams; supported 50+ users of the Hadoop platform, resolved their tickets and issues, and provided training on best practices to make Hadoop usage simple.
  • Installed/configured/maintained Apache Hadoop clusters for application development and Hadoop tools like Hive, Pig, HBase, Zookeeper and Sqoop.
  • Responsible for services and component failures and solving issues through analyzing and troubleshooting the Hadoop cluster.
  • Integrated Kafka with Flume in sand box Environment using Kafka source and Kafka sink.
  • Worked with Puppet, Kibana, Elasticsearch, Tableau and Red Hat infrastructure for data ingestion, processing and storage.
  • Monitored multiple Hadoop clusters environments using Ganglia and Nagios. Monitored workload, job performance and capacity planning using Ambari.
  • Implemented Spark solution to enable real time reports from Hadoop data. Was also actively involved in designing column families for various Hadoop Clusters.
  • Manage and review Hadoop log files. Monitor the data streaming between web sources and HDFS.
  • Working with Oracle XQuery for Hadoop on Oracle Java HotSpot virtual machines.
  • Managing Ambari administration, and setting up user alerts.
  • Handled importing of data from various data sources, performed transformations using Hive, MapReduce, Spark and loaded data into HDFS.
  • Solved Hive Thrift issues and HBase problems after upgrading to HDP 2.4.0.
  • Involved extensively in projects on Hive, Spark, Pig and Sqoop throughout the development lifecycle until the projects went into production.
  • Managed cluster resources by implementing the Capacity Scheduler and creating queues.
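
A sketch of running a Python MapReduce job through Hadoop Streaming, as used for the Python mapper/reducer work above; the paths and script names are hypothetical.

    # mapper.py and reducer.py read stdin and emit tab-separated key/value pairs
    hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
      -input /data/raw/logs \
      -output /data/out/report \
      -mapper mapper.py \
      -reducer reducer.py \
      -file mapper.py -file reducer.py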

Environment: HDP 2.4.0, Ambari 2.4.2, Oracle 11g/10g, Oracle Big Data Appliance, MySQL, Sqoop, Hive, Oozie, Spark, Zookeeper, Oracle Big Data SQL, MapReduce, Pig, Kerberos, Red Hat 6.5.

Hadoop Administrator

Confidential, Bowie, MD

Responsibilities:

  • Involved in performance tuning of various Hadoop ecosystem components like YARN and MRv2.
  • Troubleshooting, diagnosing, tuning, and solving Hadoop issues.
  • Maintained good health of the cluster.
  • Continuous monitoring and managing the Hadoop cluster using Cloudera Manager.
  • Commissioning and decommissioning the nodes across cluster.
  • Involved in installation and configuration of an LDAP server integrated with Kerberos on the cluster.
  • Worked with Sentry configuration to provide centralized security for Hadoop services.
  • Involved in installation of CDH 5.5 with CM 5.6 in a CentOS Linux environment.
  • Involved in installation and configuration of the Kerberos security setup on the CDH 5.5 cluster.
  • Implemented Kerberos security in the CDH cluster at both the user and service level to provide strong security (see the sketch after this list).
  • Monitor critical services and provide on call support to the production team on various issues.
  • Assisted in installation and configuration of Hive, Pig, Sqoop, Flume, Oozie and HBase on the Hadoop cluster with the latest patches.
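
A minimal sketch of the per-service Kerberos setup on an MIT KDC; the realm, hostname and keytab path are hypothetical.

    # Create a service principal with a random key and export its keytab
    kadmin.local -q "addprinc -randkey hdfs/nn01.example.com@EXAMPLE.COM"
    kadmin.local -q "ktadd -k /etc/security/keytabs/hdfs.service.keytab hdfs/nn01.example.com@EXAMPLE.COM"
    # Keytabs should be readable only by the service account
    chown hdfs:hadoop /etc/security/keytabs/hdfs.service.keytab
    chmod 400 /etc/security/keytabs/hdfs.service.keytab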

Environment: Hortonworks (HDP 2.2), Ambari, MapReduce 2.0 (YARN), HDFS, Hive, HBase, Pig, Oozie, Sqoop, Spark, Flume, Kerberos, Zookeeper, DB2, SQL Server 2014, CentOS, RHEL 6.x.

Linux Administrator

Confidential, Dallas, TX

Responsibilities:

  • Monitored overall system performance, performed user management, system updates and disk & storage management.
  • Performed Patching in the Red Hat Linux servers and worked on installing, upgrading the packages in the Linux systems.
  • Analyzing Business requirements/user problems to determine feasibility of application or design within time and cost constraints. Formulated scope and objectives through fact-finding to develop or modify complex software programming applications or information systems.
  • Designed and wrote shell and Bash scripts; performed installations and configurations of different versions and editions of Linux servers.
  • Developed automation of Kubernetes clusters with Ansible, writing playbooks.
  • Troubleshooting performance and network issues and monitoring the RHEL Linux servers on a day-to-day basis.
  • Experience working with LVMs, including adding/expanding/configuring disks and disk partitioning with fdisk/parted (see the sketch after this list).
  • Implemented Virtualization using VMware in Linux on HP-DL585.
  • Performed Red Hat Linux kernel and memory upgrades and managed swap areas; performed Red Hat Linux Kickstart and Sun Solaris JumpStart installations.
  • Configured DNS, DHCP, NIS, NFS and other network services on Sun Solaris 8/9.
  • Created users, manage user permissions, maintain User & File System quota on Red Hat Linux.
  • Performed troubleshooting, tuning, security, backup, recovery and upgrades of Red Hat Linux based systems.
  • Set up full networking services and protocols on UNIX, including NIS/NFS, DNS, SSH, DHCP, NIDS, TCP/IP, ARP, applications and print servers, to ensure optimal networking, application and printing functionality.
  • Installed and configured Sudo for users to access the root privileges.
  • Performed installation, configuration and maintenance of Red Hat Linux 4.x, 5.x, 6.x.
  • Provided 24x7 System Administration support for Red Hat Linux 4.x, 5.x, 6.x servers and resolved trouble tickets on shift rotation basis.
  • Installation of Red Hat Linux 4.x, 5.x, 6.x using Kickstart installation.
  • Wrote bash script for getting information about various Red Hat Linux servers.
  • Worked on DM-Multipath to map SCSI disks to corresponding LUNs.
  • Created LVMs on SAN using Red Hat Linux utilities.
  • Experienced in server consolidation and virtualization using VMware vSphere Client and Citrix Xen.
  • Worked on migrating servers from one host to another in VMware and Xen.
  • Worked on migrating servers from one datacenter to another in VMware vSphere Client.
  • Working knowledge of Hyper-V virtualization on Microsoft Windows 2008 platform.
  • Experience in NFS sharing files/directories, with security considerations.
  • Designing and developing new Ansible Playbooks.
  • Performed installations, updates of system packages using RPM, YUM.
  • Performed Patching activity of RHEL servers using Red Hat Satellite server.
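
A sketch of the LVM disk-expansion workflow described above; the device, volume group and logical volume names are hypothetical.

    # Initialize a newly presented SAN LUN and add it to an existing volume group
    pvcreate /dev/mapper/mpathb
    vgextend vg_data /dev/mapper/mpathb
    # Grow the logical volume by 100 GB and resize the filesystem online
    lvextend -L +100G /dev/vg_data/lv_app
    resize2fs /dev/vg_data/lv_app   # ext3/ext4; use xfs_growfs for XFS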

Environment: Red Hat Enterprise Linux 4.x/5.x, Oracle 9i, Logical Volume Manager for Linux, VMware ESX Server 2.x, Apache 2.0, ILO, RAID, VMware vSphere Client, Citrix Xen, Microsoft Windows 2008/2012.
