Big Data Administrator, SRE/DevOps Engineer Resume
Cupertino, CA
PROFESSIONAL SUMMARY:
- 8+ years of professional IT experience, including 5+ years of Hadoop administration on Cloudera (CDH), Hortonworks (HDP), vanilla Apache Hadoop, and MapR distributions plus Kafka, with strong experience in AWS, Elasticsearch, DevOps, and Linux administration. Hands-on experience installing, configuring, supporting, and managing Hadoop clusters.
- Extensive experience as a Hadoop and Spark engineer and Big Data analyst; built, deployed, and managed large-scale Hadoop-based data infrastructure.
- Experience spanning development, systems administration, and software configuration management (SCM), including DevOps build/release management.
- Experience across the complete Software Development Life Cycle (SDLC), including design, development, testing, and implementation of moderately to highly complex systems.
- Excellent understanding of Hadoop architecture and underlying framework including storage management.
- Expertise in Hadoop ecosystem components such as HDFS, YARN, MapReduce, Hive, ZooKeeper, HBase, Sqoop, Oozie, Flume, and Spark for data storage and analysis.
- Experience managing Linux servers (especially Ubuntu) and hands-on experience with Red Hat Linux.
- Expertise in relational database design and in extracting and transforming data from MySQL and Oracle sources.
- Hands-on experience installing, configuring, supporting, and managing Hadoop clusters on Apache, Hortonworks, and Cloudera (CDH5, CDH6) distributions with YARN.
- Gained experience in IT systems design, systems analysis, development, and management
- Hadoop cluster capacity planning, performance tuning, monitoring, and troubleshooting.
- Used network monitoring daemons like Ganglia and service monitoring tools like Nagios.
- Strong exposure to automating maintenance tasks in Big Data environments through the Cloudera Manager API.
- Expertise in high-volume data streaming with Kafka architecture.
- Implementation experience configuring and tuning components such as Impala, Spark, Airflow, Kafka, and NiFi.
- Experience in data integrity/recovery/high availability; service and data migration; disaster recovery planning; contingency planning; capacity planning; research and development; risk assessment and planning; and cost-benefit analysis.
- Experienced in developing MapReduce programs with Apache Hadoop for Big Data, and in the Hadoop architecture and its components such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, and the MapReduce programming paradigm.
- Experience with HDFS data storage and running MapReduce jobs; supported performance optimization of HBase/Hive/Pig jobs.
- Gained experience in installing, administering, and supporting operating systems and hardware in an enterprise environment (CentOS/RHEL)
- Good knowledge of NoSQL databases such as Apache Cassandra (1.2, 2.0, 2.1), MongoDB (2.4, 2.6), and OrientDB.
- Experience managing source control repositories such as Git, including creating new repositories and user-level credentials.
- Wrote shell (Bash) and Python scripts to automate tasks.
- Hands-on experience with Spark SQL queries and DataFrames: importing data from sources, performing transformations and read/write operations, and saving results to output directories in HDFS.
- Monitored data streaming between web sources and HDFS using monitoring tools.
- Demonstrated ability to design Big Data solutions for traditional enterprise businesses
- Experience installing and configuring Hadoop ecosystem components such as Sqoop, Pig, and Hive, as well as Ansible.
- Experience importing and exporting data with Sqoop between HDFS and relational database systems/mainframes.
- Experience on Hadoop cluster maintenance including data and metadata backups, file system checks, commissioning and decommissioning nodes and upgrades.
- Closely worked with Developers and Analysts to address project requirements. Ability to effectively manage time and prioritize multiple projects.
- Experience with configuration management using Ansible, Chef and container management with Docker.
- Expertise with AWS services such as EC2 and ELB, including creating EC2 instances and adding EBS volumes; familiar with VPC, Route 53, RDS, S3, IAM, SNS, SQS, SWF, SES, Auto Scaling, Storage Gateway, Elastic Beanstalk, CloudFormation, and CloudWatch.
- Experience setting up Docker Swarm and Kubernetes clusters for container management and working with Docker and microservices.
- Strong ability to troubleshoot issues that arise during build, deployment, and production support.
- Well versed in programming languages such as C, C++, Java, .NET, and Python.
- Experience in AWS cloud administration, actively building highly available, scalable, cost-effective, and fault-tolerant systems using multiple AWS services.
- Good working experience with AWS provisioning and services, in-depth knowledge of application deployment and data migration on AWS, and expertise in monitoring, logging, and cost-management tools that integrate with AWS. Developed CloudFormation scripts for AWS orchestration alongside Chef and Puppet.
- Experience writing shell scripts for file validation, automation, and job scheduling with crontab (see the sketch after this summary).
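A minimal sketch of the kind of crontab-driven file-validation automation described above; the paths, schedule, and HDFS target are hypothetical.

```bash
#!/usr/bin/env bash
# validate_feed.sh - reject empty landing files and push the rest into HDFS.
set -euo pipefail

FEED_DIR=/data/incoming          # hypothetical landing directory
HDFS_TARGET=/user/etl/incoming   # hypothetical HDFS target directory

for f in "$FEED_DIR"/*.csv; do
    [ -e "$f" ] || continue                          # nothing matched the glob
    if [ ! -s "$f" ]; then                           # zero-byte file: flag and skip
        echo "$(date '+%F %T') EMPTY FILE: $f" >&2
        continue
    fi
    hdfs dfs -put -f "$f" "$HDFS_TARGET/" && rm -f "$f"
done

# Example crontab entry scheduling the script every 15 minutes:
# */15 * * * * /opt/scripts/validate_feed.sh >> /var/log/validate_feed.log 2>&1
```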
TECHNICAL SKILLS:
Big Data Technologies: Hortonworks, Cloudera, HDFS, MapReduce, Hive, Pig, Cassandra, Falcon, Apache NiFi, Sqoop, ZooKeeper, Kafka, Spark, Airflow, Flume, Oozie, Avro, HBase, Storm.
Scripting Languages: Shell Scripting, Korn Shell, Python, YAML, Bash, CSH, Ruby, PHP
Databases: Oracle 11g, MySQL, NoSQL, MS SQL Server, HBase, Cassandra, MongoDB
Networks: HTTP, HTTPS, FTP, UDP, TCP/IP, SNMP, SMTP
Monitoring Tools: Cloudera Manager, Solr, Ambari, Nagios, Hubble
Application Servers: Apache Tomcat, WebLogic Server, WebSphere
Security: Kerberos, Knox.
Web Technologies: HTML5, CSS3, Bootstrap, JSON, jQuery, JavaScript, XML
Hadoop Distributions: Hortonworks, Cloudera
Virtualization: VMware ESXi 6
Reporting Tools: Cognos, Hyperion Analyzer, OBIEE & BI+
Analytic Tools: Elastic search-Logstash-Kibana
Operating Systems: RHEL, CentOS, Ubuntu, Fedora, Debian, Solaris, Windows, MacOS.
Configuration Management: Chef, Ansible, Puppet, Terraform
Container Management tool: Docker Swarm, Kubernetes and AWS ECS
Cloud Technologies: Amazon AWS (EC2), GCP
CI/CD Tools: Jenkins, Maven, GitLab
Version Control Tools: Git, CVS, SVN, Bitbucket
WORK EXPERIENCE:
Big Data Administrator, SRE/DevOps Engineer
Confidential, Cupertino, CA
Responsibilities:
- As part of the Data Analytics DevOps/SRE team, collected, processed, and analyzed diagnostics and usage data from Confidential devices across the world.
- Developed large-scale distributed computing systems in a large organization.
- Installed and configured Cloudera clusters with Hadoop ecosystem components such as HBase, Kafka, Flume, Oozie, Spark 2, Airflow, and ZooKeeper.
- Installed and configured Confidential OpenJDK across all clusters; upgraded all clusters from CDH 5.10 to CDH 5.16.2.
- Monitored systems and services; handled architecture design and implementation of Hadoop deployments, configuration management, backups, and disaster recovery systems and procedures.
- Built an automated setup for cluster monitoring and the issue-escalation process, and installed various Hadoop ecosystem components and daemons.
- Used Hadoop ecosystem components such as MapReduce, Hive, ZooKeeper, Oozie, Flume, Airflow, and Spark for data storage and analysis.
- Experienced in troubleshooting errors in the HBase shell/API, Pig, Hive, MapReduce, and YARN.
- Deployed a Kafka cluster with a separate ZooKeeper ensemble to enable real-time processing with Spark Streaming and storage in HBase (see the sketch at the end of this section); managed and reviewed data backups and Hadoop log files.
- Monitored cluster health and set up alert scripts for memory usage on the edge nodes; ran queries using Impala and used BI tools to run ad-hoc queries directly on Hadoop.
- Developed batch and streaming analytics solutions using Kafka, Flume, Hadoop, Spark, Jenkins, and other state of the art technologies.
- Worked with systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters. Involved in Installing and configuring Kerberos for the authentication of users and Hadoop daemons.
- Administered distributed databases such as Impala, Vertica, and Cassandra; knowledgeable in creating and performance-tuning Vertica and Hive scripts.
- Developed a stream-filtering system using Spark Streaming on top of Apache Kafka.
- Designed a system using Kafka to auto-scale the backend servers based on event throughput.
- Responsible for database design, writing complex SQL Queries and Stored Procedures.
- Maintained and administered HDFS, and worked with Oracle Database, SQL, PL/SQL, Python, and shell scripts.
- Worked with monitoring systems such as Hubble, Nagios, and Splunk, and with Artifactory repositories.
- Created alerts, monitoring profiles, and dashboards for new and existing services.
- Good understanding of systems and application performance monitoring (key KPIs, tools, and implementation).
- Good understanding of architecting, designing, and maintaining automated build and deployment systems using Jenkins, Subversion, Maven, and Nexus.
- Knowledge of version control/source code management tools (Git, SVN).
- Hands-on working experience with DevOps tools: Chef, Puppet, Jenkins, Git, Maven, and Ansible.
- Experience designing, installing, and implementing Ansible-based configuration management, writing Ansible playbooks, and deploying applications.
- Knowledge of CI/CD pipeline development, DevOps methodology, and data-pipeline troubleshooting.
- Validated software engineering experience and discipline in design, test, source code management and CI/CD practices
- Implemented test scripts to support test-driven development and continuous integration.
- Performed troubleshooting of customer facing issues using Splunk, log file analysis, monitoring tools and then coordinated remediation efforts with other teams.
- Worked with Enterprise data support teams to install Hadoop updates, patches, version upgrades as required and fixed problems, which raised after the upgrades.
- Monitored the UAT/staging/production environments for downtime issues.
- Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring, and troubleshooting.
- Proficient in working with Linux operating systems, shell scripting, and networking technologies
- Strong software development, problem-solving, and debugging skills.
- Solid working experience troubleshooting Big Data applications (Spark, MapReduce, YARN).
Environment: HDFS, Apache NiFi, PL/SQL, Hive, Java, Unix shell scripting, Sqoop, ETL, Python, Docker, Jenkins, API platforms, Git, Ansible, HBase, MongoDB, Cassandra, Ganglia, Kafka, YARN, Spark, Airflow, Oozie, Impala, Flume, Pig, scripting, MySQL, Red Hat Linux, and Cloudera Manager
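A minimal sketch of the Kafka topic provisioning and Spark Streaming job submission behind the pipeline described above; the ZooKeeper hostname, topic name, and job class/jar are hypothetical.

```bash
# Create a topic for the streaming pipeline (CDH wrapper; kafka-topics.sh on plain Apache Kafka).
kafka-topics --create \
  --zookeeper zk01.example.com:2181 \
  --replication-factor 3 \
  --partitions 12 \
  --topic device-diagnostics

# Confirm the partition/replica layout.
kafka-topics --describe --zookeeper zk01.example.com:2181 --topic device-diagnostics

# Submit the Spark Streaming consumer that writes into HBase (class and jar are hypothetical).
spark-submit \
  --master yarn --deploy-mode cluster \
  --class com.example.streaming.DiagnosticsToHBase \
  /opt/jobs/diagnostics-streaming.jar
```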
Sr. Hadoop Administrator
Confidential, Saint Louis, MO
Responsibilities:
- Involved in the end-to-end Hadoop cluster setup process, including installation, configuration, and monitoring of the Hadoop cluster.
- Responsible for cluster maintenance, monitoring, commissioning and decommissioning DataNodes, troubleshooting, cluster planning, and managing and reviewing data backups and log files.
- Worked with the Data Science team to gather requirements for various data mining projects.
- Monitored systems and services, architecture design and implementation of Hadoop deployment, configuration management, backup, and disaster recovery systems and procedures
- Hands-on experience standing up and administering an on-premises Kafka platform and high-volume data streaming with Kafka architecture.
- Designed and implemented topic configurations on the new Kafka cluster across all environments.
- Imported and exported data to and from HDFS using Sqoop and installed various Hadoop ecosystem components and daemons.
- Installed five Hadoop clusters for different teams and developed a data lake that serves as a base layer for storage and analytics for developers. Provided services to developers: installed their custom software, upgraded Hadoop components, resolved issues, and helped them troubleshoot long-running jobs. Served as L3/L4 support for the data lake and also managed clusters for other teams.
- Installed and configured HDFS, ZooKeeper, MapReduce, YARN, HBase, Hive, Sqoop, Ansible, and Oozie.
- Loaded data from the UNIX file system to HDFS, imported and exported data using Sqoop, and managed and reviewed Hadoop log files.
- Responsible for data extraction and ingestion from different data sources into the Hadoop data lake by creating ETL pipelines using Pig and Hive.
- Hands-on experience with ecosystem components such as Hive, Pig scripts, Sqoop, MapReduce, YARN, and ZooKeeper; strong knowledge of Hive analytical functions.
- Deployed, upgraded, and configured Apache Storm and Spark clusters using Ansible playbooks.
- Monitored multiple Hadoop cluster environments using Ganglia and Nagios; monitored workload, job performance, and capacity planning using Cloudera Manager; installed and configured Hortonworks and Cloudera distributions on single-node clusters for POCs.
- Installed and configured Apache Spark 2.0 on the cluster using Cloudera Manager, and incorporated individual service patches into the cluster.
- Enabled NameNode High Availability on the AWS clusters, which currently run four Hive server instances.
- Ran Hadoop jobs to process millions of text records; troubleshot build issues during the Jenkins build process; implemented Docker to create containers for Tomcat servers and Jenkins.
- Worked with application teams to install operating system, Hadoop updates, patches, version upgrades as required.
- The cluster migration process, loading data from the on-premises cluster to the AWS cloud cluster, is currently in progress.
- Good working knowledge of importing and exporting data between databases such as MySQL and HDFS/Hive using Sqoop (see the sketch at the end of this section).
- Worked within a continuous delivery framework using Jenkins, Puppet, Maven, and Nexus in a Linux environment; integrated Maven/Nexus, Jenkins, UrbanCode Deploy with Patterns/Release, Git, Confluence, Jira, and Cloud Foundry.
- Imported data from various sources, performed transformations using Hive, MapReduce, and Spark, and loaded data into HDFS. Set up Hadoop security using MIT Kerberos, AD integration (LDAP), and Sentry authorization.
- Worked on MySQL and successfully launched queries to provide required data to the department
- Installed and configured Check Point and ASA firewalls and VPN networks, and redesigned customer security.
- Good experience with administrative tasks such as Hadoop installation in pseudo-distributed mode and on multi-node clusters.
- Worked with systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters. Involved in Installing and configuring Kerberos for the authentication of users and Hadoop daemons.
Environment: CDH 4.7, Hadoop 2.0.0, HDFS, MapReduce, MongoDB 2.6, Hive 0.10, Sqoop 1.4.3, Oozie 3.3.4, ZooKeeper 3.4.5, Hue 2.5.0, Jira, WebLogic 8.1, Kafka, YARN, Impala, Pig, scripting, MySQL, Red Hat Linux, CentOS and other UNIX utilities, Cloudera Manager.
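A minimal sketch of the kind of Sqoop import/export between MySQL and HDFS/Hive referenced above; the JDBC URL, credentials, and table names are hypothetical.

```bash
# Import a MySQL table into HDFS (connection details and table are hypothetical).
sqoop import \
  --connect jdbc:mysql://db01.example.com:3306/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /user/etl/orders \
  --num-mappers 4

# Import the same table directly into a Hive table instead.
sqoop import \
  --connect jdbc:mysql://db01.example.com:3306/sales \
  --username etl_user -P \
  --table orders \
  --hive-import --hive-table sales.orders \
  --num-mappers 4

# Export processed results from HDFS back to MySQL.
sqoop export \
  --connect jdbc:mysql://db01.example.com:3306/sales \
  --username etl_user -P \
  --table order_summary \
  --export-dir /user/etl/order_summary
```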
Hadoop Administrator
Confidential, Charlotte, NC
Responsibilities:
- Installed and Configured Hortonworks Data Platform (HDP) and Apache Ambari.
- To analyze data migrated to HDFS, used Hive data warehouse tool and developed Hive queries.
- Experience in all the phases of Data warehouse life cycle involving Requirement Analysis, Design, Testing, and Deployment.
- Involved in Data modeling and Create logical and physical ERD diagrams and Data Analysis/Modeling for Data warehouse.
- Used MongoDB to build a large data warehouse; implemented sharding and replication to provide high performance and high availability.
- Performed data validation between source system and data loaded in the data warehouse for new requirements
- Handled cluster administration, releases, and upgrades; managed multiple Hadoop clusters, the largest at 7 PB (400+ nodes) with PAM enabled, on the Hortonworks distribution.
- Responsible for implementation and ongoing administration of Hadoop infrastructure.
- Maintained, audited, and built new clusters for testing purposes using Ambari on Hortonworks.
- Created POCs on Hortonworks and suggested best practices for the HDP and HDF platforms and NiFi.
- Set up Hortonworks infrastructure, from configuring clusters down to individual nodes.
- Installed and configured Hadoop ecosystem components (MapReduce, Pig, Sqoop, Hive, Kafka) both manually and using Ambari Server.
- Supported setting up the QA environment and updating configurations for implementing scripts with Pig and Sqoop; worked on tuning the performance of Pig queries.
- Converted ETL operations to the Hadoop system using Pig Latin operations, transformations, and functions.
- Implemented best income logic using Pig scripts and UDFs
- Captured data from existing databases that provide SQL interfaces using Sqoop.
- Worked on the YARN Capacity Scheduler, creating queues to allocate resource guarantees to specific groups (see the sketch at the end of this section).
- Implemented the Hadoop stack and various Big Data analytics tools, and migrated data from different databases to Hadoop (HDFS).
- Responsible for adding new ecosystem components such as Spark, Storm, Flume, and Knox with the required custom configurations.
- Installed and configured Kafka Cluster.
- Integrated Apache Storm with Kafka to perform web analytics; uploaded clickstream data from Kafka to HDFS, HBase, and Hive via the Storm integration for high-volume data streaming.
- Helped the team to increase cluster size. The configuration for additional data nodes was managed using Puppet manifests.
- Strong knowledge of open source system monitoring and event handling tools like Nagios and Ganglia.
- Integrated BI and Analytical tools like Tableau, Business Objects, and SAS etc. with Hadoop Cluster.
- Planned and implemented data migration from the existing staging cluster to the production cluster; also migrated data from existing databases to the cloud (S3 and AWS RDS).
- Performed component unit testing using the Azure emulator; wrote complex Hive and SQL queries for data analysis to meet business requirements.
- Exported analyzed data to downstream systems using Sqoop for generating end-user reports, Business Analysis reports and payment reports.
- Worked on the Databases like Cassandra, MongoDB
- Performed development operations using Git and Puppet: configured Puppet modules, uploaded them to the master server, and applied them on client servers.
Environment: HDFS, MapReduce, Hortonworks, Hive, Pig, Flume, Oozie, Sqoop, MongoDB, Ambari, and Linux.
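A minimal sketch of the kind of Capacity Scheduler queue definition mentioned above; the queue names and percentages are hypothetical, and on an Ambari-managed HDP cluster these properties would normally be edited through Ambari rather than by hand.

```bash
# Hypothetical fragment to merge into capacity-scheduler.xml: two leaf queues under root,
# 'analytics' guaranteed 60% and 'etl' guaranteed 40% of cluster capacity.
cat > /tmp/capacity-scheduler-queues.xml <<'EOF'
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>analytics,etl</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.analytics.capacity</name>
  <value>60</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.etl.capacity</name>
  <value>40</value>
</property>
EOF

# Once the merged capacity-scheduler.xml is in place, reload the queue definitions
# without restarting the ResourceManager.
yarn rmadmin -refreshQueues
```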
Hadoop Administrator
Confidential, Boston, MA
Responsibilities:
- Installed and configured Hadoop and Ecosystem components in Cloudera and Hortonworks environments.
- Installed and configured Hadoop, Hive and Pig on Amazon EC2 servers
- Upgraded the cluster from CDH4 to CDH5; the tasks were first performed on the staging platform before being applied to the production cluster.
- Enabled Kerberos and AD security on the Cloudera cluster running CDH 5.4.4.
- Implemented Sentry for the Dev Cluster
- Configured a MySQL database to store Hive metadata (see the sketch at the end of this section).
- Involved in managing and reviewing Hadoop log files.
- Involved in running Hadoop streaming jobs to process terabytes of text data.
- Worked with Linux systems and MySQL database on a regular basis.
- Supported MapReduce programs that ran on the cluster.
- Involved in loading data from UNIX file system to HDFS.
- Created Hive tables, loaded them with data, and wrote Hive queries that run internally as MapReduce jobs.
- As an admin, followed standard backup policies to ensure high availability of the cluster.
- Analyzed system failures, identified root causes, and recommended courses of action; documented system processes and procedures for future reference.
- Worked with systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
- Installed and configured Hive, Pig, Sqoop and Oozie on the HDP 2.2 cluster.
- Managed backups for key data stores
- Supported configuring, sizing, tuning and monitoring analytic clusters
- Implemented security and regulatory compliance measures
- Streamlined cluster scaling and configuration
- Monitored cluster job performance and was involved in capacity planning.
- Worked with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.
- Documented technical designs and procedures
Environment: HDFS, Hive, Pig, sentry, Kerberos, LDAP, YARN, Cloudera Manager, and Ambari.
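A minimal sketch of pointing the Hive metastore at a MySQL database, as mentioned above; the database name, user, password, and hostname are hypothetical, and on CDH these settings are usually managed through Cloudera Manager.

```bash
# Create the metastore database and Hive user in MySQL (names and password are hypothetical).
mysql -u root -p <<'SQL'
CREATE DATABASE metastore;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hivepassword';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%';
FLUSH PRIVILEGES;
SQL

# Key hive-site.xml properties that point Hive at the MySQL metastore:
#   javax.jdo.option.ConnectionURL        = jdbc:mysql://db01.example.com:3306/metastore
#   javax.jdo.option.ConnectionDriverName = com.mysql.jdbc.Driver
#   javax.jdo.option.ConnectionUserName   = hive
#   javax.jdo.option.ConnectionPassword   = hivepassword
```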
Hadoop Operations Administrator
Confidential, Sacramento, CA
Responsibilities:
- Responsible for Cluster maintenance, Adding and removing cluster nodes, Cluster Monitoring and Troubleshooting, Manage and review data backups, Manage and review Hadoop log files on Hortonworks, MapR and Cloudera clusters.
- Responsible for architecting Hadoop clusters with Hortonworks distribution platform HDP 1.3.2 and Cloudera CDH4.
- Experience setting up data sources, configuring servlet engines and session managers, and planning the installation and configuration of WebLogic application servers.
- Used the Configuration Wizard and WLST scripts to create and manage WebLogic domains.
- Involved in setting up a cluster environment for WebLogic Server integrated with multiple workflows.
- Handled mainframe batch job abends and critical batch runs through OPC/TWS on a priority basis, ensuring production cycles were not delayed.
- Responsible for onboarding new users to the Hadoop cluster (adding a user home directory and providing access to the datasets).
- Wrote Pig scripts to load and aggregate the data.
- Worked on analyzing Hadoop cluster using different big data analytic tools including Pig, Hbase database and Sqoop.
- Experience commissioning, decommissioning, balancing, and managing nodes and tuning servers for optimal cluster performance (see the sketch at the end of this section).
- Load and transform large sets of structured, semi structured and unstructured data.
- Installed and configured Hive.
- Extensively involved working in Unix Environment and Shell Scripting
- Helped the users in production deployments throughout the process.
- Managed and reviewed Hadoop Log files as a part of administration for troubleshooting purposes. Communicate and escalate issues appropriately.
- Added new Data Nodes when needed and ran balancer. Responsible for building scalable distributed data solutions using Hadoop.
- Worked on the Cassandra database to analyze how data gets stored; continuously monitored and managed the Hadoop cluster through Ganglia and Nagios.
- Wrote complex Hive queries and UDFs in Java and Python.
- Installed the Oozie workflow engine to run multiple Hive and Pig jobs that run independently based on time and data availability; also performed major and minor upgrades to the Hadoop cluster.
- Upgraded the Cloudera Hadoop ecosystem in the cluster using Cloudera distribution packages; performed stress, performance, and benchmark testing of the cluster.
- Commissioned and decommissioned DataNodes in the cluster when problems arose.
- Debugged and resolved major issues with Cloudera Manager by working with the Cloudera support team.
Environment: Flume, Oozie, Cassandra, WebLogic, Pig, Sqoop, MongoDB, HBase, Hive, MapReduce, YARN, Hortonworks and Cloudera Manager.
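A minimal sketch of the node decommissioning and rebalancing workflow referenced above; the hostname and exclude-file path are hypothetical and distribution-dependent.

```bash
# Add the node being retired to the HDFS exclude file referenced by dfs.hosts.exclude
# (path is hypothetical / distribution-dependent).
echo "datanode07.example.com" >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode to re-read the include/exclude lists; the node shows as
# "Decommission In Progress" until its blocks are re-replicated elsewhere.
hdfs dfsadmin -refreshNodes
hdfs dfsadmin -report | grep -A 3 "datanode07.example.com"

# After adding new DataNodes, run the balancer so no node's utilization deviates
# from the cluster average by more than 10%.
hdfs balancer -threshold 10
```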
Hadoop Administrator
Confidential
Responsibilities:
- Installed, Configured and Maintained the Hadoop cluster for application development and Hadoop ecosystem components like Hive, Pig, HBase, Zookeeper and Sqoop.
- In depth understanding of Hadoop Architecture and various components such as HDFS, Name Node, Data Node, Resource Manager, Node Manager and YARN / Map Reduce programming paradigm.
- Monitoring Hadoop Cluster through Cloudera Manager and Implementing alerts based on Error messages. Providing reports to management on Cluster Usage Metrics and Charge Back customers on their Usage.
- Extensively worked on commissioning and decommissioning of cluster nodes, replacing failed disks, file system integrity checks and maintaining cluster data replication.
- Very good understanding of assigning the number of mappers and reducers for jobs on the MapReduce cluster.
- Set up HDFS quotas to enforce a fair share of resources (see the sketch at the end of this section).
- Strong knowledge of configuring and maintaining YARN schedulers (Fair and Capacity).
- Experience in setting up HBase cluster which includes master and region server configuration, High availability configuration, performance tuning and administration.
- Created user accounts and gave users access to the Hadoop cluster.
- Involved in loading data from UNIX file system to HDFS.
- Worked on ETL process and handled importing data from various data sources, performed transformations.
- Coordinated with the QA team during the testing phase.
- Provided application support to the production support team.
Environment: Cloudera, HDFS, MapReduce, Hive, Pig, HBase, Flume, Sqoop, ZooKeeper, UNIX/Linux, Java, Shell Scripting.
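A minimal sketch of the HDFS quota administration mentioned above; the directory and limits are hypothetical.

```bash
# Cap a team directory at 1,000,000 namespace objects (files + directories).
hdfs dfsadmin -setQuota 1000000 /user/analytics

# Cap the same directory at 10 TB of raw space (replication counts against the quota).
hdfs dfsadmin -setSpaceQuota 10t /user/analytics

# Check current usage against both quotas.
hdfs dfs -count -q -h /user/analytics

# Remove the limits when they are no longer needed.
hdfs dfsadmin -clrQuota /user/analytics
hdfs dfsadmin -clrSpaceQuota /user/analytics
```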
Linux System Administrator
Confidential
Responsibilities:
- Installed RedHat Enterprise Linux (RHEL 6) on production servers.
- Provided Support to Production Servers.
- Updated firmware on Servers, Installed patches and packages for security vulnerabilities for Linux.
- Monitored system resources, like network, logs, disk usage etc.
- User account creation and account maintenance both local and centralized (LDAP - Sun Identity Manager).
- Performed all duties related to system administration like troubleshooting, providing sudo access, modifying DNS entries, NFS, backup recovery (scripts).
- Set up passwordless login using SSH public/private keys (see the sketch at the end of this section).
- Set up cron jobs for application owners to deploy scripts on production servers.
- Performed check out for the sanity of the file systems and volume groups.
- Developed scripts for internal use for automation of some regular jobs using shell scripting.
- Completed work requests raised by customers/teams and followed up with them.
- Worked on change requests raised by customers/teams and followed up.
- Did Root Cause Analysis on Problem Tickets and frequently occurring incidents.
- Raised cases with vendors when software or hardware needed to be updated, replaced, or repaired.
- Raised cases with Red Hat and followed up with them as and when required.
- Engaged members of other teams when a ticket required multi-team support.
- Effectively and efficiently monitored SDM/Remedy queues so that no SLA breaches occurred.
- Worked in a 24X7 on call rotation to support critical production environments.
Environment: RedHat Linux Release 5.x, 6.x, SUSE Linux 10.1, 11, OpenBSD, TCP/IP Wrapper, SSH, SCP, RSYNC, Service Desk Manager, BMC Remedy, Hostinfo, Apache Web Server, Samba Server, Iptables, FTP, DHCP, DNS, NFS, RPM, YUM, LDAP, Auto FS, LAN, WAN, KVM, RedHat Ent Virtualization, Xen, VMware.
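A minimal sketch of the passwordless SSH setup mentioned above; the username and hostname are hypothetical.

```bash
# Generate an RSA key pair (no passphrase here for unattended jobs; a passphrase
# plus ssh-agent is safer where policy allows).
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""

# Append the public key to the target server's authorized_keys.
ssh-copy-id appadmin@prodserver01.example.com

# Verify that login now works without a password prompt.
ssh appadmin@prodserver01.example.com 'hostname && uptime'
```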