Quick Setup

This quick setup guide creates a RAIDZ array. RAIDZ is a redundant array of three or more disks that tolerates a single disk failure. This works well for home setups, as a minimal number of disks can be used while still providing redundancy, thus saving money. The performance of this setup is quite good and can easily saturate multiple gigabit connections.

Much like mounting disks by UUID in fstab, using the disk ID is a much more reliable way to keep track of the disks than device names such as /dev/sda, which can change between boots. There are other naming schemes, but they are mainly useful for enterprise setups with many (>8) drives in one server.

$ ls -lh /dev/disk/by-id
total 0
lrwxrwxrwx 1 root root  9 Oct 26 09:04 ata-HGST_HUS726060ALE610_######## -> ../../sdc
lrwxrwxrwx 1 root root  9 Oct 26 09:04 ata-HGST_HUS726060ALE610_######## -> ../../sdb
lrwxrwxrwx 1 root root  9 Oct 26 09:04 ata-HGST_HUS726060ALE614_######## -> ../../sde
lrwxrwxrwx 1 root root  9 Oct 26 09:04 ata-HGST_HUS726060ALE614_######## -> ../../sda

Creating the pool

-f forces the pool to be created even if there are existing filesystems on the devices
-m specifies the mount point of the pool
raidz specifies the vdev type; this can also be mirror, raidz2, or raidz3

At pool creation, ashift=12 should always be used, except with SSDs that have 8k sectors, where ashift=13 is correct. A vdev of 512 byte disks using 4k sectors (ashift=12) will not experience performance issues, but a 4k disk forced to use 512 byte sectors (ashift=9) will. Since ashift cannot be changed after pool creation, even a pool containing only 512 byte disks should use ashift=12: those disks may later need to be replaced with 4k disks, or the pool may be expanded by adding a vdev composed of 4k disks. Because correct detection of 4k disks is not reliable, -o ashift=12 should always be specified during pool creation.

$ sudo zpool create -f -o ashift=12 -m /mnt/bastion bastion raidz ata-HGST_HUS726060ALE610_######## ata-HGST_HUS726060ALE610_######## ata-HGST_HUS726060ALE614_######## ata-HGST_HUS726060ALE614_########
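
To confirm the pool actually picked up 4k sectors, recent OpenZFS releases expose ashift as a readable pool property (assumed available here; on older releases it can be checked with zdb instead):

$ sudo zpool get ashift bastion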

Check the status of the pool

# zpool status

Output:

lucid@shiro:~$ sudo zpool status
  pool: bastion
 state: ONLINE
  scan: none requested
config:

	NAME                                   STATE     READ WRITE CKSUM
	bastion                                ONLINE       0     0     0
	  raidz1-0                             ONLINE       0     0     0
	    ata-HGST_HUS726060ALE610_########  ONLINE       0     0     0
	    ata-HGST_HUS726060ALE610_########  ONLINE       0     0     0
	    ata-HGST_HUS726060ALE614_########  ONLINE       0     0     0
	    ata-HGST_HUS726060ALE614_########  ONLINE       0     0     0

errors: No known data errors

Check the configuration of the pool; this also shows the total available size of the pool.

# zpool get all <pool name>
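
For a shorter summary of the pool's size, allocated space, and free space, zpool list can be used as well:

# zpool list <pool name>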

Create filesystems

Filesystems (also called datasets) appear as individual folders under the root of the pool; more information on pools, filesystems, and vdevs can be found in the source links at the top of the page.

Example:

# zfs create bastion/documents

The created filesystem will be owned by root, so its ownership will need to be changed to the user of choice.

# chown lucid:lucid /mnt/bastion/documents
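
The new filesystem and its mountpoint can be confirmed with:

# zfs list bastion/documents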

Automatic scrubbing

Using a systemd timer/service it is possible to automatically scrub pools monthly:

/etc/systemd/system/zfs-scrub@.timer
-----------------------------------------------
[Unit]
Description=Monthly zpool scrub on %i

[Timer]
OnCalendar=monthly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target

/etc/systemd/system/zfs-scrub@.service
-----------------------------------------------
[Unit]
Description=zpool scrub on %i

[Service]
Nice=19
IOSchedulingClass=idle
KillSignal=SIGINT
ExecStart=/usr/bin/zpool scrub %i

Enable and start the zfs-scrub@pool-to-scrub.timer unit to scrub the specified zpool monthly.
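
For the pool created in this guide, that would be:

# systemctl enable --now zfs-scrub@bastion.timer

The next scheduled run can be verified with systemctl list-timers.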

Unmounting a pool is weird

Rather than using umount, the whole pool is exported, which unmounts all of its filesystems and releases the disks:

# zpool export <pool name>
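
An exported pool can be brought back later with zpool import; run without arguments, it lists the pools available for import:

# zpool import <pool name>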