Sr DevOps Engineer/SRE Engineer Resume
Southbury, CT
SUMMARY
- Around 11 years of experience in the IT industry with expertise in IaaS, PaaS, AWS, Software Integration, DevOps, Build & Release management, Linux/Windows administration, and configuration management.
- Site Reliability, Infrastructure-as-Code, CI/CD, Bamboo, AWS DevOps, Jenkins, CircleCI, New Relic, OpsGenie, Snowflake, RDS, Python, Bash, Groovy, Go, IaC, Terraform, SRE.
- Experience with essential DevOps tools like Git, Jenkins, Maven, Ant, JFrog, Docker, Kubernetes, Chef, and Ansible, and Linux/Unix system administration with RHEL, CentOS, Ubuntu, and Debian.
- Experience in virtualization platforms like EC2, VirtualBox, KVM, Oracle Virtualization, and VMware vSphere ESX/ESXi environments.
- Strong experience in DevOps environments automating the configuration and deployment of instances on AWS.
- Experienced in building cloud native Big Data architecture and analytics systems.
- Expert in resolving BigQuery performance issues by recommending best practices; highly knowledgeable in the US Health Care, Skin Care, Tax, Financial, and Retail domains.
- Expert in writing complex SQL queries, including scenarios involving arrays, structs, and windowing functions.
- Extensively worked on GCP cloud functions, Cloud Data Fusion, Logs Explorer, Event triggers.
- Worked on data integration projects using Apache Spark & Scala; administered Cloud Data Fusion and its instances, including creation of persistent Dataproc clusters and dynamic clusters.
- Developing templates or scripts to automate everyday developer or operations functions; deploying, configuring, and maintaining MongoDB databases and replica sets across multiple environments.
- Planning and performing MongoDB database upgrades and migrations; monitoring using Ops Manager and MongoDB Cloud Manager and tuning performance of MongoDB databases.
- Good knowledge of performance tuning of MongoDB instances, configuration parameters, schema design, and indexing, on premises and in the cloud (AWS RDS, EC2, S3).
- Work with application teams to understand the database needs of our applications and optimize them using MongoDB.
- Spark, PySpark, EKS and AWS environments, Dremio, Databricks.
- Experienced in installation, configuration, administration, troubleshooting, tuning, security, backup, recovery and upgrades of Linux (Red Hat based 6/7 and Debian based) and Windows 2008/2012 Servers in a large environment.
- Experience in migrating P2V, V2V servers and Experience in data migration from On-Premises to cloud using AWS Import/Export service.
- Implemented DevOps tool suites like Git, Ant, Maven, Jenkins, JFrog Artifactory, CircleCI, Docker, Docker Swarm, Kubernetes, Nexus repository, Chef, Ansible, CloudWatch, and Nagios in traditional and cloud environments.
- DevOps Practice for Micro Services Architecture using Docker Swarm, Kubernetes to orchestrate the deployment, scaling and management of Docker Containers.
- Working extensively on Kubernetes for configuration of security (TLS), Labels & Selectors and deploying Load-balancing, Replication (auto-scaling), services, etc.
- Strong hands on experience in Amazon web services such as EC2, S3, Elastic Beanstalk, Elastic Load Balancing (Classic/Application), Auto Scaling, RDS, EBS, VPC, Route53, Cloud Watch and IAM.
- Built customized Amazon Machine Images (AMI), deploy AMIs to multiple regions and launch EC2 instances using these custom images.
- Strong experience with UNIX/Linux in installing and configuring LVM, RAID, NGINX, Apache HTTPD, Tomcat, WebLogic, MySQL, and Oracle, plus patching and system & application log metrics; configured custom CloudWatch metrics.
- Experience in building multi-Tier, highly available, fault tolerant and scalable applications using AWS Elastic Beanstalk, AWS RDS, DynamoDB, Elastic Load Balancing and Auto Scaling.
- Hands-on experience in using Terraform along with Ansible to create custom machine images, and automation tools like Chef/Ansible to install software after the infrastructure is provisioned.
- Hands-on experience in using Nagios and CloudWatch for log metrics, monitoring resource utilization for each application, and tracking application/server health.
- Experience with Version Control Systems like Git, SVN, Bitbucket, and Perforce.
- Strong experience in using Build Automation tools like ANT and Maven.
- Experienced with Round Robin, Least Connections, and IP Hash load-balancing methods.
- Experience in building policies for access control and user profiles using AWS IAM, and S3 controls with bucket policies.
- Excellent knowledge of S3 storage strategies such as versioning, lifecycle policies, cross-region replication, and Glacier.
- Good experience in AWS CLI to manage AWS infrastructure.
- Created RDS DB instances using Multi-AZ deployments. Tested DB instance failover using reboot with failover.
- Expertise in AWS Identity and Access Management (IAM) such as creating users, groups, organizing IAM users to groups, assigning roles to groups.
- Built and configured a virtual data center in the Amazon cloud to support enterprise hosting, including VPC, public/private subnets, security groups, and route tables.
- Utilized Amazon CloudWatch to monitor AWS resources such as EC2 instances, Amazon EBS volumes, Amazon RDS DB instances and Elastic Load Balancers.
- Designed and Created Terraform templates and used existing templates to create stacks and provisioned Infrastructure.
- Infrastructure automation coding in languages such as Python and shell scripts, using the AWS Command Line Interface and AWS Boto3.
- Perform all programming and development assignments without close supervision.
- Works independently on complex systems or infrastructure components that may be used by one or more applications or systems.
- Drives application development focused on delivering features with business value.
- Maintains high standards of software quality within the team by establishing good practices and habits.
- Performs integrated testing of components that requires careful planning and execution to ensure timely and quality results.
- Implements CI/CD Azure Pipelines; designs, develops, maintains, and monitors Web APIs.
- Manages Azure SQL Database using Entity Framework.
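To make the S3 lifecycle and Boto3 automation bullets above concrete, here is a minimal sketch of building an S3 lifecycle configuration of the shape Boto3 accepts; the prefix, rule name, and day thresholds are illustrative assumptions, not values from any actual project:

```python
# Hypothetical helper: builds an S3 lifecycle configuration in the shape
# accepted by boto3's put_bucket_lifecycle_configuration. Rule name,
# prefix, and day thresholds below are illustrative assumptions.
def build_lifecycle_rules(prefix, ia_days=30, glacier_days=90, expire_days=365):
    """Transition objects under `prefix` to STANDARD_IA, then GLACIER,
    then expire them after `expire_days` days."""
    return {
        "Rules": [
            {
                "ID": f"tier-and-expire-{prefix.strip('/') or 'all'}",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": ia_days, "StorageClass": "STANDARD_IA"},
                    {"Days": glacier_days, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": expire_days},
            }
        ]
    }

# With boto3 this would be applied as (not executed here):
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-bucket",
#       LifecycleConfiguration=build_lifecycle_rules("logs/"))
```

Keeping the configuration in a pure function like this makes the policy easy to unit-test before it ever touches a live bucket.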
TECHNICAL SKILLS
Cloud Services: OpenStack, Amazon Web Services: EC2, Elastic Load Balancer, Auto scaling Services, Glacier, Elastic Beanstalk, Cloud Front, Relational Database, Virtual Private Cloud, Route 53, Lambda, API Gateway, Code Commit, Code Build, Code Deploy, Code Pipeline, Cloud Watch, Cloud Formation, Identity and Access Management and Ops works.
Operating Systems: Red Hat Linux 6/7, Oracle Linux, Amazon Linux AMI, CentOS, Ubuntu, and Microsoft Windows Server.
Programming Languages: C, C++, JAVA/J2EE, SQL, PL-SQL
Application / Web Server: Apache 2.2.x, Nginx 1.10 and Tomcat-Apache 7, 8, 9, WebLogic.
J2EE Technologies: Servlets, Spring, Hibernate, JSP, JDBC, JMS, and EJB.
Tools: Cloud Watch, Vitalize, Maven, Ant, Jenkins, Nexus, Chef, Ansible.
Web Technologies: JDK 1.5/1.6/1.7, HTML, XML, DHTML.
Database: Oracle 9i/10g/11g/12c, MariaDB, Aurora, MySQL, PostgreSQL, DynamoDB, MongoDB, and MS Access.
Protocols: TCP/IP, FTP, SSH, SMTP, SOAP, HTTP and HTTPS.
Version Control: SVN, CVS, Git.
Scripting Languages: C, C++, HTML, CSS, JavaScript, jQuery, AngularJS, JSON, XML, Bash, Ruby, Shell Scripting, and Python
PROFESSIONAL EXPERIENCE
Confidential - Southbury, CT
Sr DevOps Engineer/SRE Engineer
Responsibilities:
- Designing and implementing highly available Kubernetes clusters in Dev, QA, and Prod, and owning all Kubernetes clusters.
- Installed multi-master, highly available Kubernetes clusters on OpenStack.
- Created and configured infrastructure using Heat templates in OpenStack to create instances, security groups, networks, VIPs, etc.
- Developing templates or scripts to automate everyday developer or operations functions
- Deploy, configure, and maintain MongoDB databases and replica sets across multiple environments; plan and perform MongoDB database upgrades and migrations.
- Monitor using Ops Manager and MongoDB Cloud Manager and tune performance of MongoDB databases.
- Good knowledge of performance tuning of MongoDB instances, configuration parameters, schema design, and indexing, on premises and in the cloud (AWS RDS, EC2, S3).
- Work with application teams to understand the database needs of our applications and optimize them using MongoDB.
- Creating Docker images for micro-services applications and automating the entire flow using Jenkins pipeline.
- Writing Jenkins Pipelines/Libraries to automate end-to-end pipelines
- Automated CI/CD for our microservice applications using Stash/GitLab, Maven, JUnit, SonarQube with quality gates, Docker, Kubernetes, Selenium, and JFrog Artifactory.
- Wrote Dockerfiles, automated Docker image builds using Jenkins, and deployed to Kubernetes.
- Migrated data from Google Storage buckets into Google BigQuery; ingested XMLs from source GCS buckets to intermediate GCS buckets in Parquet format using Spark and Scala.
- Migrated data from AWS S3 to GCS using Airflow/Cloud Composer, enabling schema-on-read for AWS S3 JSON files, and loaded the data into BigQuery.
- Ingested JSON files from source AWS S3 buckets to intermediate Google Storage buckets in JSON format using Airflow.
- Automated application deployments to Kubernetes using YAML manifests, later migrated to Helm charts, and maintain all Helm charts in the relevant repositories.
- Audited and analyzed resource usage on all microservices and fine-tuned them for optimal performance.
- Worked with New Relic for monitoring and auditing.
- Designed and deployed canary-based deployments on AWS with custom CRDs to support custom tweaks.
- Designed an approach to deploy and maintain global-properties for Java microservices to replace spring-config server.
- Deployed Jenkins with dynamic slaves on Kubernetes and configured it with external Jenkins servers for triggering jobs.
- Designed and implemented a Git strategy for maintaining feature and release branches.
- Automated Jira issue creation when errors occur during builds on Jenkins.
- Installed, configured, and deployed Neo4j, Redis, Bonita BPM
- Installed and configured proxies and reverse proxies using Apache and Nginx.
- Installed, configured, and deployed SiteMinder SSO on Kubernetes and performance-tuned its engine and Apache.
- Installed, configured, and deployed GlusterFS with Heketi for storage on Kubernetes.
- Wrote Vagrantfiles and shell scripts to automate local servers for developers.
- Used Rundeck to schedule jobs and for regular operations
- Implemented a logging cluster using the EFK stack (Elasticsearch, Fluentd, and Kibana) on Kubernetes to collect logs and create alerts on top of them.
- Implemented a monitoring cluster with various exporters to monitor every aspect of the cluster via Prometheus, Grafana, and Alertmanager.
- Implemented remote node_exporter and SNMP exporter deployments, and integrated a Java service to collect SNMP traps and push them to Prometheus.
- Designed and automated Jenkins pipelines to generate various configurations of Prometheus exporters and deploy to Kubernetes.
- Created dashboards for Grafana for custom metrics of the cluster.
Environment: Linux, AWS, OpenStack, Docker, Kubernetes, Glusterfs, Jenkins, SonarQube, Quality Gates, Vagrant, Shell, Ansible, Prometheus, Grafana, Alert-manager, New Relic, Rundeck, Fluentd, Elastic-Search, Kibana, Artifactory, Redis, Jira, Confluence, Stash, GitLab.
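As an illustration of the Kubernetes manifest automation described in this role, the sketch below generates a Deployment with resource requests/limits in Python; the service name, image, and sizing are hypothetical placeholders, not actual production values:

```python
# Illustrative generator for a Kubernetes apps/v1 Deployment manifest with
# resource requests/limits, of the kind the YAML/Helm automation above
# would emit. All names and sizes here are hypothetical examples.
def deployment_manifest(name, image, replicas=2, cpu="250m", mem="256Mi"):
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "resources": {
                            "requests": {"cpu": cpu, "memory": mem},
                            "limits": {"cpu": cpu, "memory": mem},
                        },
                    }]
                },
            },
        },
    }

# Example (hypothetical service and registry):
manifest = deployment_manifest(
    "orders-svc", "registry.example.com/orders:1.4.2", replicas=3)
```

Serializing such a dict to YAML (or templating the same fields in a Helm chart) is what lets the resource-usage audit feed tuned requests/limits back into every microservice.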
Confidential - Columbus, OH
DevOps Engineer
Responsibilities:
- Configuring, supporting, and maintaining all network firewalls, storage, load balancers, operating systems, and software on Google Compute Engine instances.
- Configuring Cloud Storage versioning and lifecycle policies to back up and archive files in Cloud Filestore.
- Communicating technical direction, issues, and trade-offs to development staff, peers, business partners, project stakeholders, and senior management; migrated data from Azure SQL Server into the Google BigQuery data lake using Cloud Composer/Airflow; processed inbound files through an SFTP location using Apache Airflow.
- Transforming data from the BigQuery data lake to the BigQuery semantic layer using Cloud Data Fusion while meeting SLAs; responsible for BigQuery cost optimization.
- Designing Google Deployment Manager templates to create custom-sized VPCs and setting up firewall rules to ensure successful deployment of web application and database templates.
- Managing Google infrastructure and automation with the CLI and API. Working on inbound and outbound services with Chef automation. Deploying multiple resources simultaneously using Deployment Manager templates in Google.
- Managing Compute Engine instances utilizing Google Cloud Interconnect, Cloud Load Balancing, and Cloud Storage for our QA environments, as well as infrastructure servers for Git and Chef.
- Managing Chef cookbooks with Chef recipes; creating inventory in Chef for automating continuous deployment.
- MongoDB version upgrades, and MongoDB Enterprise to Community Edition downgrades.
- Configured mongod and mongos instances to run as services with start, stop, and restart commands in place (binary installation); good exposure to CRUD operations and JSON.
- MongoDB cluster administration and management.
- Utilizing Stackdriver to monitor resources such as Compute Engine instances (CPU, memory), Google Cloud SQL services, Cloud Datastore tables, and volumes; setting alarms for notifications or automated actions; and monitoring logs for a better understanding and operation of the system.
- Launching and configuring Google Compute Engine cloud servers and setting firewall rules.
- Handling migration of on-premises applications to the cloud, creating cloud resources to enable this using all critical Google tools; used Cloud Load Balancing and Google App Engine policies for scalability, elasticity, and availability.
- Building Automation and Build Pipe Line Development using Jenkins, GitHub and Maven. Set up build pipelines in Jenkins by using various plugins like Maven plugin.
- Created customized Docker images and pushed them to Google Compute Engine; worked on Docker, deploying and maintaining microservices in Dev and QA; implemented Jenkins slaves as Docker containers for auto-scalability.
- Used Kubernetes for automating deployment, scaling, and management of containerized applications.
- Set up kubelets and Kubernetes master/worker nodes, as well as the API server and scheduler, to orchestrate the deployment of instances in real time.
- Maintaining JIRA for tracking and updating project defects and tasks.
- Responsible for Plugin Management, User Management, regular incremental backups and regular maintenance for recovery.
- Used Nagios as a monitoring tool to identify and resolve infrastructure problems before they affect critical processes, and worked on Nagios event handlers for automatic restarts of failed applications and services.
Environment: GCP, Open shift, GIT, GitHub, Bash, Python, Ant, Maven, Jenkins, Chef, Linux, Unix, Apache Tomcat, Docker, Kubernetes, Kafka, Jira, Nagios.
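The S3-to-GCS migration bullets above can be sketched with a small key-mapping helper of the kind an Airflow task might call when writing intermediate Parquet objects; the bucket layout and prefixes here are assumptions for illustration only:

```python
import posixpath

# Hedged sketch of the S3-to-GCS key mapping implied by the migration
# bullets: map a source JSON key to an intermediate Parquet key, keeping
# the date partitioning. Prefix names are hypothetical.
def to_gcs_parquet_key(s3_key, target_prefix="intermediate"):
    """Map e.g. 'raw/2021/05/orders.json' to
    'intermediate/2021/05/orders.parquet', dropping the source
    top-level prefix but preserving the partition path."""
    head, tail = posixpath.split(s3_key)
    stem = tail.rsplit(".", 1)[0]
    parts = head.split("/")[1:]  # drop the source top-level prefix
    return posixpath.join(target_prefix, *parts, stem + ".parquet")
```

In an Airflow DAG this helper would feed the destination argument of a transfer or Spark-conversion task, keeping the naming convention in one tested place.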
Confidential - Nashville, TN
DevOps Engineer
Responsibilities:
- Launching Amazon EC2 cloud instances using Amazon Web Services (Linux/CentOS/Ubuntu/RHEL) and configuring launched instances with respect to specific applications.
- Migrated servers from on-prem to the AWS cloud with Packer, and migrated by converting to OVF.
- Worked on DEV, TEST, Staging, PROD and Training environments.
- Designed and architected a multi-tier architecture on AWS and provisioned end-to-end architecture for development teams in all environments.
- Designed and implemented AWS cloud-based instances for use on current and upcoming projects.
- Worked in an agile development team to deliver an end-to-end continuous integration / continuous delivery product in an open-source environment.
- Developed automation and deployment utilities using Python, Ruby, Bash, and CodeDeploy.
- Used CloudFormation & Terraform as infrastructure as code to manage the AWS infrastructure.
- Managed servers using configuration management products like Chef and Ansible.
- Maintained security groups in EC2 and EC2-VPC, controlling the inbound and outbound traffic that is allowed to reach the instances.
- Automated deployment of WAR/EAR files with Git, Jenkins, Tomcat & WebLogic.
- Extensively worked on the AWS Cloud platform and its features, including EC2, VPC, EBS, AMI, SNS, RDS, S3, DynamoDB, ElastiCache, IAM, CloudFormation, and CloudWatch.
- Good experience with on-prem virtualization using VMware, KVM, and Oracle Virtualization.
- Configured iDRAC and iLO for remote access and managed servers.
- Worked on Database like MongoDB, Redis, PostgreSQL, MySQL.
- Expert in administrating the LAMP and LEMP stacks.
- Built and Deployed Java/J2EE to Tomcat Application servers in an Agile continuous integration process and automated the entire process.
- Branching, Tagging, Release Activities on Version Control Tools SVN, GitHub.
- Maintaining security groups and setting up rules for the instances that are associated with the security groups.
- Worked on RPM and YUM package installations, patching and management.
- Application Deployments & Environment configuration using Chef and Ansible.
- Wrote Chef cookbooks for various DB configurations and to optimize product configuration, converted production support scripts to Chef recipes, and provisioned AWS servers using Chef recipes.
- Established Chef best-practice approaches to systems deployment, used Test Kitchen for testing, version-controlled each Chef cookbook, and maintained cookbooks in a Git repository.
- Worked with Ansible to manage multiple nodes and manage inventory for different environments, and automated various infrastructure activities like application server setup and stack monitoring using Ansible playbooks.
- Deployed the java application into application servers like Apache Tomcat and Weblogic.
- Created file systems like Ext4, XFS and configured LVM and RAID.
- Wrote and implemented Python and Bash cron scripts for backups of directories and databases.
- Installed and configured Nagios for detailed monitoring on Linux instances.
- Experience in using Nagios and CloudWatch for monitoring AWS infrastructure, applications and system resources.
- Wrote scripts with the AWS Boto3 SDK to manage AWS infrastructure.
- Experienced in deployment of applications on Apache Webserver, NGINX and Application Servers such as Tomcat & Weblogic.
- Implemented a unit testing framework using JUnit and Cucumber; set up Jenkins master/slave to distribute builds on slave nodes.
- Set up a Jenkins master, added the necessary plugins, and added more slaves to support scalability and agility; integrated Git, Maven, Nexus, and Tomcat with Jenkins for builds per the CI/CD requirements, along with Python and shell scripts to automate routine jobs.
- Worked on Ant and Maven as a build tool for building Java applications.
- Dealt with errors in Maven's pom.xml and Ant's build.xml files to obtain appropriate builds.
- Wrote Vagrantfiles to create infrastructure on developers' workstations with VirtualBox.
- Containerized servers using Docker for test- and dev-environment needs.
- Worked on Docker with both bridge and overlay network drivers, and on Swarm for container clustering.
Environment: EC2, VPC, EBS, AMI, SNS, RDS, EBS, S3, DynamoDB, ElastiCache, ECS, IAM, Cloud Formation, Cloud Watch, auto scaling, SQS, Vagrant, VirtualBox, VMWare, KVM, Oracle Virtualization, iDRAC, iLO, Centos, NGINX, Apache, ANT, Maven, Tortoise SVN, GitHub, RPM, YUM, Chef, Ansible, Docker, MongoDB, Oracle, PostgreSQL, MySQL, LAMP & LEMP stacks, Jenkins.
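A minimal sketch of the Boto3-driven housekeeping mentioned in this role: scanning EC2 describe_instances output for instances missing a required tag. The sample data below is fabricated for illustration; in practice the reservations would come from boto3.client("ec2").describe_instances():

```python
# Sketch of a tag-compliance check over the reservation structure that
# EC2 describe_instances returns. The instance IDs and tags below are
# fabricated sample data, not output from a real account.
def find_untagged(reservations, required_tag="Environment"):
    """Return IDs of instances missing `required_tag`."""
    missing = []
    for res in reservations:
        for inst in res.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if required_tag not in tags:
                missing.append(inst["InstanceId"])
    return missing

# Fabricated sample shaped like describe_instances()["Reservations"]:
sample = [{"Instances": [
    {"InstanceId": "i-0aaa", "Tags": [{"Key": "Environment", "Value": "prod"}]},
    {"InstanceId": "i-0bbb", "Tags": [{"Key": "Name", "Value": "web-1"}]},
    {"InstanceId": "i-0ccc"},
]}]
```

Separating the pure logic from the API call keeps the script unit-testable and lets the same check run against paginated results from any region.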
Confidential - Lexington, KY
AIX System Administrator
Responsibilities:
- Worked on HMC, installed and configured VIO Server, LPARs/DLPARs and subsequent maintenance and administration on Confidential Power 7 machines.
- Configured the fresh HMC and added new Confidential Power770, 780, and Power710 machines.
- Upgraded to the latest HMC version and installed fix packs on all HMCs.
- Created the Dual VIO profile and installed on local disk of Power770 and Power780 machine according to the architecture diagram and documentation.
- Successfully performed LPM (Live Partition Mobility) and migrated partitions to another frame on the same HMC.
- Mirrored the VIO and added it to the bosboot list.
- Created virtual Ethernet with the appropriate VLAN for client LPARs and sent a request to the network team to set up the same VLAN according to the documentation.
- Configuration and administration of Fibre adapters and handling the AIX part of the SAN.
- Created pre-production LPAR profiles and submitted WWPNs to the SAN team to provision the OS disks.
- Installation and Configuration of Network Installation Manager (NIM), configuring the NIM Master and assigning the clients and taking the mksysb of the NIM client on the NIM Master.
- Created virtual Fibre adapters on the VIO and mapped them to the client LPARs.
- Pushed the OS image onto SAN disks for pre-production LPARs with the help of NIM.
- Installed and configured EMC PowerPath, installed the license key on all LPARs, and checked that all paths were active.
- Created Volume Groups, Logical Volume and Filesystem according to the build documentation.
- Created users and group, installed and configured ssh and sudo files according to the instruction.
- Worked on Veritas Volume Manager on RHEL and AIX servers and created the Vxdg, Vxdisks and created the Vxfs.
- Created NFS mount point and mounted on all Pre-Production lpars.
- Daily troubleshooting and work on Confidential SAP application requests.
- Installed lsof, java and vnc filesets on the request of SAP application team.
- Created dual VIO profile for Production and DR/FO environment.
- Created the Production and DR/FO lpars profile and push the image using NIM
- Took the mksysb for all Pre-Production, Production and DR/FO lpars using NIM.
- Performed testing to verify failover and fallback of Dual VIO server on p770.
- Installed the EMC NetWorker client and SAP database module for backup on all LPARs, and worked with the application and backup teams to configure filesystem-level backups.
- Took monthly backups of the HMC and its LPAR profiles on the NIM server.
- Provided sulogs and users details information to the audit team.
- Handled day to day user management tickets.
- Planning the Power HA/XD implementation on Production between two sites.
- Installed the Power HA software on all Production and DR/FO lpars.
- Implemented the Power HA/XD 6.1 (HACMP) and configured the cluster between two sites, Production and DR/FO.
- Tested the Power HA/XD implementation with the SAP application team and the EMC SRDF team.
- Successfully failed over the Production environment to the DR/FO side in a disaster recovery test.
- Upgraded the Power770 and HBA firmware.
- Configured the NAS storage and mounted on Pre-Production environment and HP team migrated the SAP application data from Canada to Milford SAP environment.
- Worked on Gold Build AIX 6.1 and submitted to GM for approval.
- Administered and troubleshot HACMP/XD 6.1.
- Experience with installation, configuration, upgrade, and administration of Confidential pSeries and Power5/Power6 servers on various levels of AIX operating system environments.
- Troubleshot PowerVM and PowerHA issues on pSeries servers, maintaining the virtual server environment and creating and deleting virtual servers.
- System policies and hardware profiles, hard disk configuration for fault tolerance, disk mirroring and backup the data using BACKUP devices.
- Troubleshot and helped the application team migrate applications from HP-UX to AIX 6.1.
- Provided support to the EMC backup team to configure and troubleshoot backups of the SAP instances' filesystems.
- Maintained availability, increased capacity & performance of production machines by upgrading their hardware (disks, CPU, memory, IO board, power cooling unit, mother board etc.) & firmware.
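The monthly mksysb/HMC backup routine above implies a retention policy; a hedged sketch of such pruning logic, assuming ISO-dated file names (the naming convention is an assumption, not taken from the environment described), might look like:

```python
# Illustrative retention helper for monthly mksysb/HMC backups: keep the
# newest `keep` files and return the rest as candidates for deletion.
# Assumes a hypothetical 'name-YYYY-MM-DD.img' naming scheme so ISO dates
# sort correctly as plain strings within one host's files.
def backups_to_prune(filenames, keep=3):
    ordered = sorted(filenames, reverse=True)  # newest first, lexically
    return sorted(ordered[keep:])              # oldest files to remove
```

On the NIM server this would typically be driven from cron, with the returned list passed to an rm step only after a dry-run review.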
Confidential - Columbus, OH
Build and Release Engineer
Responsibilities:
- Installation, configuration, and upgrade of Red Hat Linux 4/5, Solaris 8/9/10, HP-UX 11.11/11.23/11.31, and Windows operating systems.
- Performed package and patch management and debugging on all flavors of Unix & Linux.
- Expertise in capacity planning and Migrations in Linux, Solaris.
- Installed, configured and managed Oracle Real Application Cluster (RAC) in Linux, Solaris and HP-UX servers.
- Worked on different heterogeneous host environments like Linux, Solaris, Windows 2003/2008 environments for performing the Migrations.
- Exclusively involved in the Migration of data from 3Par to V-Max storage arrays using Open Migrator for Hot Migration.
- Exclusively involved in Installation, Initialization and activation of the sync process on Linux, HP-UX hosts using Open Migrator.
- Installed, configured and managed Redhat Cluster Suite in Linux servers.
- Configured and maintained Network Multi pathing in Linux Involved in installing and licensing of Power Path on different hosts like Linux.
- Installed Solutions Enabler and Replication Manager as part of the BACKUP purpose.
- Exclusively involved in preparation of Migration documents for Linux.
- Exclusively involved in managing file systems and disk management using Solaris Disk suite.
- Administered the software Packages and Patches on servers as well as on workstation.
- Installed and maintained machines with updated patches and necessary client software packages using pkgadd, pkginfo, pkgrm, patchadd, showrev -p, and patchadd -p.
- Configured systems wif Veritas Volume Manager, Veritas file system using vxdiskadm and vxassist utility.
- Worked on Redhat Cluster Server and Veritas Cluster Server for high availability and redundancy.
- Creating and managing shared disk groups in clustering environment.
- Managed UNIX Infrastructure, EMC Storage involving maintenance of servers and troubleshooting problems in environment.
- Worked on GFS (Global File System) wif Redhat Cluster Server.
- Installed and configured Guest Servers on VMware host and installed OS on VMware Guests.
Environment: RedHat Linux 4/5, Solaris 8/9/10, HP-UX, AIX, Sun Enterprise server 6500/5500/4500/ R/250, SPARC Server 1000, Open Migrator Linux 3.11/3.12, Solaris 3.10/3.11, HP-UX 3.10/3.11, Rsync Power path 5.6, Solution Enabler 7210, Replication manager 5.3, Oracle cluster nodes, Veritas volume manager.
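The package- and patch-management work above boils down to deciding whether an installed version is behind an available one; a simplified sketch follows (plain dotted numeric versions only, an assumption — real rpm/pkg semantics with epochs, releases, and letters are more involved):

```python
# Minimal patch-level check: compare dotted numeric version strings to
# decide whether an update is needed. Deliberately ignores rpm epoch/
# release fields and alphabetic suffixes; numeric-only is an assumption.
def needs_update(installed, available):
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(available) > to_tuple(installed)
```

Comparing tuples rather than raw strings avoids the classic "5.9" > "5.10" lexical-ordering bug.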