rXg Knowledge Base

ZFS Mirror (RAID 1) and IDV Setup Guide

April 08, 2026

Overview

rXg uses ZFS exclusively for storage. The installer natively supports mirrored vdevs (RAID 1 equivalent) when two equal-size disks are present during installation. Since adding a mirror requires a reinstall, this is also an ideal time to evaluate Internal Dataplane Virtualization (IDV) if the hardware has sufficient resources.

ZFS Mirror Setup

Prerequisites

  • Two SSDs of the same capacity
  • Both SSDs must be installed before running the rXg installer (the mirror is configured at install time only; it cannot be added non-destructively to an existing single-disk installation)
  • If the server has a hardware RAID controller, it must be set to HBA/JBOD/IT mode so ZFS can see raw disks (ZFS checksumming and self-healing require direct disk access)
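Before reinstalling, it can help to confirm that the OS actually sees both SSDs as raw devices of equal capacity. A minimal check from a FreeBSD shell might look like the following sketch (the ada0/ada1 device names are examples and will vary by system):

```shell
# List disks as the OS sees them; with the RAID controller in
# HBA/JBOD/IT mode both SSDs should appear as individual devices.
camcontrol devlist

# Compare the reported capacities; they should match for a mirror
# (device names ada0/ada1 are illustrative).
diskinfo -v ada0 | grep 'mediasize in bytes'
diskinfo -v ada1 | grep 'mediasize in bytes'
```

If the controller is still presenting a virtual volume instead of the individual disks, only a single device will appear here, and the controller mode should be fixed before installation.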

Process

  1. Take a full backup from System > Backups in the admin console
  2. Install the second SSD
  3. Reinstall rXg -- select Mirror when prompted
  4. Restore your backup

Verification

zpool status zroot

Both disks should appear under the mirror-0 vdev with a state of ONLINE.
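A healthy mirror reports ONLINE for the pool, the mirror-0 vdev, and both member disks. The sketch below embeds a hypothetical sample of `zpool status zroot` output (device names ada0p4/ada1p4 are illustrative) and shows how a scripted health check could grep it:

```shell
# Hypothetical healthy output of `zpool status zroot` on a mirrored
# install; real device names and partition suffixes will differ.
sample='  pool: zroot
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p4  ONLINE       0     0     0
            ada1p4  ONLINE       0     0     0'

# A trivial check: the pool state line must read ONLINE.
if echo "$sample" | grep -q '^ state: ONLINE'; then
  echo "pool healthy"
else
  echo "pool DEGRADED or FAULTED -- investigate" >&2
fi
```

In a real script the `sample` variable would be replaced with `$(zpool status zroot)`; any state other than ONLINE (DEGRADED, FAULTED) warrants immediate attention, since a degraded mirror has no remaining redundancy.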

Post-Install Monitoring

  • System > Filesystems -- ZFS dataset health and capacity
  • System > SMART Entries -- SSD wear metrics (power-on hours, spare capacity, erase counts)
  • Health Notices -- automatic CRITICAL alerts on disk degradation (via Rxg::Dmesg and Rxg::Raid modules)
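Beyond passive monitoring, ZFS can proactively verify every block on both sides of the mirror with a scrub; any block that fails its checksum is repaired from the healthy copy. If the platform does not already schedule scrubs, a periodic manual pass is cheap insurance:

```shell
# Read and checksum-verify all data in the pool; ZFS self-heals
# bad blocks from the other mirror member where possible.
zpool scrub zroot

# Check scrub progress and any repaired/error counts.
zpool status zroot
```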

IDV (Internal Dataplane Virtualization)

What It Does

Runs virtual data plane nodes (vDPs) on the same bare-metal host using bhyve, FreeBSD's native hypervisor. The controller instance handles the database, UI, and RADIUS; the vDPs handle traffic forwarding.

Benefits

  • Higher device/session capacity (~1,000 DPL per vDP)
  • Resource isolation between control plane and data plane
  • Scales capacity without additional physical hardware

Hardware Requirements

  • 32+ cores, 64+ GB RAM recommended
  • Each vDP typically gets 8 cores / 16 GB RAM
  • Licensing must support additional DPL capacity

Setup Process

IDV is configured post-install, not at install time:

  1. Install rXg on bare metal normally (with ZFS mirror). That instance becomes the Cluster Controller (CC)
  2. Post-install, go to Services > Virtualization to create VMs using bhyve
  3. Upload the rXg ISO as a Disk Image
  4. Create Virtual Machines (allocate cores, RAM, virtual disks)
  5. Install rXg inside each VM from the ISO. Each VM auto-provisions into the cluster as a Data Node

Each vDP is a separate rXg OS instance, but all are created and managed from the admin console after the bare metal deployment.

Typical Resource Split

Example for a 64-core / 256 GB server:

Instance         Role              Cores     RAM
Bare metal rXg   Controller (CC)   16        64 GB
VMs 1-6          vDP Data Nodes    8 each    16 GB each
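The split above follows directly from the sizing guidelines (16 cores / 64 GB reserved for the controller, 8 cores / 16 GB per vDP). A back-of-the-envelope calculation, with all numbers illustrative, is:

```shell
# Illustrative capacity planning for vDPs on a 64-core / 256 GB host.
total_cores=64; total_ram_gb=256
cc_cores=16;  cc_ram_gb=64        # reserved for the bare-metal controller
vdp_cores=8;  vdp_ram_gb=16       # per data node (typical allocation)

by_cores=$(( (total_cores - cc_cores) / vdp_cores ))   # 48 / 8 = 6
by_ram=$(( (total_ram_gb - cc_ram_gb) / vdp_ram_gb ))  # 192 / 16 = 12
max_vdps=$(( by_cores < by_ram ? by_cores : by_ram ))  # the smaller bound wins

echo "max vDPs: $max_vdps"
```

In this example cores, not RAM, bound the vDP count at six, which leaves RAM headroom for bhyve overhead and future growth.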

Configuration Location

  • Admin console: Services > Virtualization
  • Key models: VirtualizationHost, VirtualMachine, VirtualSwitch, VirtualInterface, VirtualDisk
