Pop! OS on OpenZFS Root Guide (UEFI)

About this guide

Booting on OpenZFS via ZFSBootMenu

OpenZFS (or ZFS for short) is the ultimate file system. With support for transparent block compression, virtual block devices, and data checksumming, this filesystem protects your data from almost anything (software RAID is still not a backup!). However, it seems like the only thing we still can't do with it is boot Linux from a ZFS filesystem. So why is that?

Enter Linus Torvalds, Creator of Linux.


ZFS was open sourced under the CDDL, while Linux is licensed under the GPL 2.0, and the two licenses are not compatible. Which is a major bummer.

However, that doesn't stop people like me from trying to install Linux distros on top of it!

And thanks to the efforts of the ZFSBootMenu project, we can do just that!

This guide assumes you have experience with the command line, and that you have a UEFI-capable computer. As far as I can tell, ZFSBootMenu will not work with legacy BIOS. If you have legacy BIOS, go to this guide instead: https://zfs.scott.lol/pop-bios

This guide also needs a temporary drive that will not be part of the ZFS pool. We will initially install Pop! OS to this drive, and once Pop! is installed, we will migrate it to the ZFS pool. This drive can be very small, like a 32GB SSD if you have one.

Step 1

Download Pop! OS

Download the Pop! OS ISO and create a bootable USB with it. You can download the ISO at this link.

https://pop.system76.com/

Once you have that done, install all the storage drives you want to be your ZFS pool and the temporary drive into your computer and boot off the Pop! OS medium.

Step 2

Setup Environment

My environment is going to be a virtual machine that has two 1TB drives, along with the 128GB temp drive. I will be using the two 1TB drives in a mirror, but you can set up any array you want. RAIDZ2, striped mirrors, anything can work. You can also use the new special metadata and dedup vdevs on the root pool!

Once you are in Pop! Live, you can now open a terminal and start configuring your Live system in order to install Pop! to ZFS.
First we need to elevate rights to root.

sudo -i

Confirm EFI support.

dmesg | grep -i efivars
$ [ 0.643127] Registered efivars operations

We need to define a variable that describes the distro install. You can use the /etc/os-release file to do this, or make your own.

source /etc/os-release
export ID="$ID"

Echo out $ID to check that you have a value. For us it will be 'pop'.

echo $ID
$ pop

Install binaries we need to setup ZFS on the Live environment.

apt install zfsutils-linux zfs-dkms gdisk

You are now ready to create the root ZFS pool!

Step 3

Create Root Pool

First, we need to take note of the block devices we want to create the root ZFS pool from. To make this easy, remove any storage devices that will not be a part of the pool we are creating, but leave the Pop! OS Live boot medium plugged in!

lsblk | grep disk

With this command, you will see a list of storage devices installed in your system. Look at the size of each to determine which names you need to take down. Here is an example of my VM.

NAME MAJ:MIN RM SIZE RO TYPE
sda 8:0 0 1T 0 disk
sdb 8:16 0 1T 0 disk
sdc 8:32 0 128G 0 disk

Take note of the NAME values for each device you want. You will need it when partitioning the drives.

We start by completely wiping the drives we selected. I am going to show how to create a mirrored setup, but you can do raidz/2/3, mirrors, or even just one disk!

#Drive 1
wipefs -a /dev/sda
sgdisk --zap-all /dev/sda

#Drive 2
wipefs -a /dev/sdb
sgdisk --zap-all /dev/sdb

Now we need to partition the drives. I recommend that you partition each drive the same way to avoid size and partition number mismatches. You can also create a swap partition here if you want, but later in this guide I will show how to create a swap device on ZFS itself.

Create EFI boot partition

#Do this to all your drives
sgdisk -n 1:1m:+2g -t 1:ef00 /dev/sda
sgdisk -n 1:1m:+2g -t 1:ef00 /dev/sdb

This will create a 2GB boot partition. Most distros will only use about 512MB of space, so why did I size mine to 2GB? I prefer to have a slightly larger EFI partition so I can add rEFInd themes and other EFI boot images such as netboot.xyz for recovery. Plus, it is always nice to have spare space there in case we have to put something big there to boot off of.

Create the ZPool partitions

#Do this to all your drives
sgdisk -n 2:0:-10m -t 2:bf00 /dev/sda
sgdisk -n 2:0:-10m -t 2:bf00 /dev/sdb

Now we need to grab the partition names from lsblk for partition 2. Here we can see they are sda2 and sdb2. The partition naming can be different based on what kind of storage medium you have.

lsblk
NAME MAJ:MIN RM SIZE RO TYPE
sda 8:0 0 1T 0 disk
├─sda1 8:1 0 2G 0 part
└─sda2 8:2 0 1022G 0 part <---------- THIS
sdb 8:16 0 1T 0 disk
├─sdb1 8:17 0 2G 0 part
└─sdb2 8:18 0 1022G 0 part <---------- AND THIS
sdc 8:32 0 128G 0 disk
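
For reference, NVMe drives name their partitions with a "p" separator, so a hypothetical /dev/nvme0n1 partitioned the same way would show up like this. Substitute names like nvme0n1p2 in the following commands if that matches your hardware.

#Hypothetical NVMe drive, partitioned the same way
lsblk /dev/nvme0n1
NAME MAJ:MIN RM SIZE RO TYPE
nvme0n1 259:0 0 1T 0 disk
├─nvme0n1p1 259:1 0 2G 0 part
└─nvme0n1p2 259:2 0 1022G 0 part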

Create the pool!

zpool create -f \
  -O compression=zstd \
  -O acltype=posixacl \
  -O xattr=sa \
  -O relatime=on \
  -O recordsize=1M \
  -o autotrim=on \
  -m none zroot mirror /dev/sda2 /dev/sdb2

Also take note of the compression and recordsize settings I used. If you have a weak CPU you might want to consider using lz4 instead of zstd for compression, and I use recordsize=1M to get better compression ratios and save space wherever I can.
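
If you go with a different layout, only the vdev specification at the end of the zpool create command changes. Here is a hypothetical RAIDZ2 sketch using four placeholder partitions; substitute your own device names, partitioned the same way as above.

#RAIDZ2 sketch with placeholder devices
zpool create -f \
  -O compression=zstd \
  -O acltype=posixacl \
  -O xattr=sa \
  -O relatime=on \
  -O recordsize=1M \
  -o autotrim=on \
  -m none zroot raidz2 /dev/sdw2 /dev/sdx2 /dev/sdy2 /dev/sdz2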

Now let's create some datasets.

zfs create -o mountpoint=none zroot/root
zfs create -o mountpoint=/ -o canmount=noauto zroot/root/${ID}
zfs create -o mountpoint=/home zroot/home
zpool set bootfs=zroot/root/${ID} zroot

From here you can create any other datasets you might want with zfs create. Since zroot has mountpoint=none set, you will need to set a mountpoint value when you create the dataset, otherwise you won't be able to mount it. I would also only create other datasets under the zroot dataset, and not under zroot/root, as that will contain all your Linux root installs if you wish to install more than one distro.
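
For example, a hypothetical dataset for a /data directory could be created like this (the name and mountpoint here are just illustrations):

zfs create -o mountpoint=/data zroot/data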

Also take note of the canmount=noauto value; this is used to prevent multiple OS installs from mounting at root (/). Root will be mounted at boot based on the OS root dataset we boot from.

Export the pool, and then re-import it with a temporary mountpoint of /mnt. This is done to mount our datasets correctly, and to place the ZFS root where we can move Pop! to later.

zpool export zroot
zpool import -N -R /mnt zroot
zfs mount zroot/root/${ID}
zfs mount zroot/home

Check that zroot is mounted.

mount | grep mnt

zroot/root/pop on /mnt type zfs (rw,relatime,xattr,posixacl)
zroot/home on /mnt/home type zfs (rw,relatime,xattr,posixacl)

Update device symlinks.

udevadm trigger

We are now ready to install Pop!


Step 4

Install Pop! OS

Now that the ZFS pool is created and mounted correctly, we can go back to the Pop! installer window that we have been ignoring.

Go through the steps until we get the Clean Install selection. Click on Clean Install, and then click the Clean Install button that appears in the bottom right of the window.

This is where we need the spare drive to install Pop! OS to temporarily. Once the install is done, we will move all the OS files to ZFS.

Select the spare drive. For me it is the 128G drive. Then click Erase and Install.

Input your name and the username you wish to have. Click Next.

Input a password. Click Next.

You will now be looking at a prompt asking if you wish to encrypt your partitions. Since we are moving to ZFS anyway, click on Don't Encrypt.

Pop! OS should now be installing itself to the temp drive. Wait for it to finish.

Once Pop! is done installing to the temp drive, you will see the installer's completion screen.

You now have a working Pop! OS install on the temp drive. Now we have to move it to the ZFS pool.

Step 5

Moving Pop! OS to ZFS

We can now move the Pop! OS install to ZFS! Instead of clicking any of the options presented, go back to the console.

Create a new mountpoint at /mnt2.

mkdir /mnt2

Use lsblk to find the partition Pop! installed itself to on the temp drive. Mount it to /mnt2. It will most likely be the largest partition on the temp drive.

mount /dev/sdc3 /mnt2

Now copy over the Pop! OS system files! It is important to keep the trailing slash on /mnt2/, as it tells rsync to copy the files inside the directory without copying the top-level directory itself.

rsync -av --info=progress2 --no-inc-recursive --human-readable /mnt2/ /mnt

Pop! OS is now on ZFS! But wait, there is still more we have to do to get Pop! working with ZFS. We now need to change root into the Pop! ZFS root and make some changes.

mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount -B /dev /mnt/dev
mount -t devpts pts /mnt/dev/pts
mount -o bind /run /mnt/run
chroot /mnt /bin/bash

Once you are in the Pop! ZFS root, update the package lists.

apt update

Create the vfat filesystem for EFI. Run this command on every EFI partition you created. You only really need one, but what I like to do is use them all to create redundant EFI devices.

mkfs.vfat -F32 /dev/sda1
mkfs.vfat -F32 /dev/sdb1

Make the /boot/efi directories.

mkdir /boot/efi
mkdir /boot/efi2

Delete the old fstab.

rm /etc/fstab

Create fstab entries and mount the ESPs. We need to do this before installing ZFS and the initramfs tools, because initramfs generation will look for the ESP mount.

cat << EOF >> /etc/fstab
$( blkid | grep /dev/sda1 | cut -d ' ' -f 2 ) /boot/efi vfat defaults 0 0
$( blkid | grep /dev/sdb1 | cut -d ' ' -f 2 ) /boot/efi2 vfat defaults 0 0
EOF

mount /boot/efi
mount /boot/efi2

Install ZFS.

apt install initramfs-tools zfs-initramfs zfsutils-linux zfs-dkms -y

Enable systemd ZFS services.

systemctl enable zfs.target
systemctl enable zfs-import-cache
systemctl enable zfs-mount
systemctl enable zfs-import.target

We can now set the kernel parameters for booting like so:

zfs set org.zfsbootmenu:commandline="quiet" zroot/root

If you need to set up IOMMU passthrough, you can add those parameters via org.zfsbootmenu:commandline, just like you would with GRUB's GRUB_CMDLINE_LINUX_DEFAULT line.
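
For example, enabling IOMMU passthrough on an Intel system might look like the following; the exact parameters depend on your hardware, so treat these as placeholders.

zfs set org.zfsbootmenu:commandline="quiet intel_iommu=on iommu=pt" zroot/root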

Make sure curl is installed. You can use wget to get the ZFSBootMenu EFI file, but I like curl.

apt install curl -y

Set the $ID value again, since the chroot doesn't carry it over.

source /etc/os-release
export ID="$ID"

Install ZFSBootMenu!

mkdir -p /boot/efi/EFI/${ID}
curl -o /boot/efi/EFI/${ID}/VMLINUZ.EFI -L https://get.zfsbootmenu.org/efi
cp /boot/efi/EFI/${ID}/VMLINUZ.EFI /boot/efi/EFI/${ID}/VMLINUZ-BACKUP.EFI

Configure EFI boot entries

mount -t efivarfs efivarfs /sys/firmware/efi/efivars

I will be using rEFInd since Pop! OS will most likely live with other operating systems such as Windows 10/11. Install and configure.

apt install refind -y
refind-install
rm /boot/refind_linux.conf

cat << EOF > /boot/efi/EFI/${ID}/refind_linux.conf
"Boot default" "quiet loglevel=0 zbm.skip"
"Boot to menu" "quiet loglevel=0 zbm.show"
EOF

Now, to be safe, let's rebuild the initramfs.

update-initramfs -c -k all

So now if we list the files of /boot/efi/EFI, we see some extra stuff that we can get rid of.

ls -lash /boot/efi/EFI
$ Pop_OS- pop refind tools

We can remove Pop_OS- and tools; they will either be empty or contain files that will not work when booted from, because of the ZFS root file system.

rm -rf /boot/efi/EFI/Pop_OS-
rm -rf /boot/efi/EFI/tools

HOWEVER! Pop_OS- will show up again every time there is an update, so we can tell rEFInd to ignore this directory so that we never boot from it (if we boot from it, the OS will never load). To do this, edit /boot/efi/EFI/refind/refind.conf. There will be a line "dont_scan_dirs ESP:/EFI/boot,EFI/Dell,EFI/memtest86". Uncomment it and add ",EFI/Pop_OS-" at the end like so:

dont_scan_dirs ESP:/EFI/boot,EFI/Dell,EFI/memtest86,EFI/Pop_OS-
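
If you prefer to make that edit non-interactively, something like this sed one-liner should work, assuming the commented-out stock line matches the one quoted above:

sed -i 's|^#dont_scan_dirs ESP:/EFI/boot,EFI/Dell,EFI/memtest86|dont_scan_dirs ESP:/EFI/boot,EFI/Dell,EFI/memtest86,EFI/Pop_OS-|' /boot/efi/EFI/refind/refind.conf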

Once you have modified refind.conf, rEFInd should no longer list the Pop_OS- files as entries on boot.

So now that the /boot/efi mount is set up and good to go, we need to copy it to our other vfat EFI filesystems. Copy the structure of /boot/efi/ into each vfat filesystem you created. I usually create a service that runs before shutdown / reboot and updates the other vfat filesystems from the main /boot/efi; a sketch of such a service follows the rsync command below.

rsync -a /boot/efi/ /boot/efi2
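
Here is a minimal sketch of such a unit, assuming the hypothetical name sync-esp.service and the /boot/efi2 mountpoint used in this guide. Because it is a oneshot unit with RemainAfterExit, the ExecStop command runs when the unit is stopped during shutdown or reboot, before the ESPs are unmounted.

cat << 'EOF' > /etc/systemd/system/sync-esp.service
[Unit]
Description=Sync backup EFI system partition on shutdown
RequiresMountsFor=/boot/efi /boot/efi2

[Service]
Type=oneshot
RemainAfterExit=yes
# Nothing to do at startup; the sync happens when the unit is stopped at shutdown/reboot.
ExecStart=/bin/true
ExecStop=/usr/bin/rsync -a /boot/efi/ /boot/efi2

[Install]
WantedBy=multi-user.target
EOF
systemctl enable sync-esp.service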

At this point, I highly recommend adding a ZFSBootMenu config option to the zroot/root dataset to select a default kernel to boot. As all ZFS users know, a simple distro kernel update can render your ZFS pool inaccessible. Setting a default kernel lets us control when the system boots into newer kernels; however, you will still need to make sure a supported kernel stays installed.

To set the default kernel to boot to, we need to set org.zfsbootmenu:kernel="#" on zroot/root.

zfs set org.zfsbootmenu:kernel="6.6.10" zroot/root

The 6.6.10 kernel is the latest Pop! OS kernel that has ZFS support. 6.8 has just been released (2024-03-26) and caused my system to not boot. So please, set a default kernel! If ZBM can't find an installed kernel matching that version, it will fall back to searching for anything bootable.
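
To see which kernel versions are actually installed before picking a value, you can list them from inside the chroot; either of these works:

ls /boot/vmlinuz-*
dpkg -l 'linux-image-*' | grep ^ii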

We can now exit the chroot and umount everything.

exit
umount -n -R /mnt

If you intend to use the temporary drive for something else in this system, go ahead and unmount it, then wipe / format the drive so we don't boot from it.

umount /mnt2
wipefs -a /dev/sdc
sgdisk --zap-all /dev/sdc

Export the zroot pool and shut down! Be sure you do this, as not exporting the pool can cause issues when the system tries to import it on boot.

zpool export zroot
shutdown now

Once your machine is shut down, remove the temporary storage drive we used to initially install Pop! OS, along with the Live USB. We no longer need them. Turn on your computer and enjoy Pop! OS on ZFS root!

HOWEVER! Sometimes you might be greeted by the UEFI Shell screen instead of the Pop! OS logo.

This can sometimes happen based on your system. Type exit and press enter to enter the UEFI setup screen.

From here, you will need to add a boot entry under your UEFI boot manager screen. This can vary from vendor to vendor, but here is how I did mine.

Find your boot manager screen.

From here it might be under boot options like mine, or add boot entry. Once you find the add boot option item, proceed to add it.

You will be presented with a list of your storage devices in a scary-looking format. Press enter on one until you find the familiar EFI directory.

You then need to navigate to the refind directory and select refind_x64.efi as your boot entry.

Once you select refind_x64.efi, it should ask you to name the boot option. Name it pop or whatever you want and save the new boot entry. Now reset.

You should now be greeted by the rEFInd boot manager. Notice how our first entry is actually 'Boot EFI\Pop_OS-\vmlinuz-previous.efi'. We see this if we don't remove the Pop_OS- directory from our EFI directory as described earlier, OR edit the refind.conf file to ignore that directory. Make sure to select EFI\pop\VMLINUZ.EFI or EFI\pop\VMLINUZ-BACKUP.EFI.

We can now boot Pop!

To confirm it's on ZFS root...
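
One quick way to check from a terminal is findmnt; with the layout from this guide, the root source should be the zroot/root/pop dataset.

findmnt /
$ TARGET SOURCE FSTYPE OPTIONS
$ / zroot/root/pop zfs rw,relatime,xattr,posixacl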

And look at the compression ratios we are getting from the base OS!
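
You can check the ratios yourself with the compressratio property, for example:

zfs get compressratio zroot
zfs list -o name,used,compressratio -r zroot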

Extras!

pop-upgrade Command

After having used this setup for some time, I decided to make space for a recovery partition, and came across an issue with using the pop-upgrade command to install the /recovery partition, since this isn't a fresh re-install of the OS.

I will be skipping the partition resizing and data shuffling here, but you would need to zfs send all your datasets to a different disk; re-partition your drives for the ESP, swap, recovery, ZFS pool, and whatever else you want; then zfs recv your datasets back to the recreated ZFS pool; rebuild your EFI setup; and so on. A rough sketch of the send/recv round trip follows.
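
This is just a sketch, with a hypothetical scratch pool named backup and a snapshot named migrate; double-check mountpoints and canmount settings before receiving back.

zfs snapshot -r zroot@migrate
zfs send -R zroot@migrate | zfs recv -u backup/zroot
#...repartition the original drives and recreate zroot...
zfs send -R backup/zroot@migrate | zfs recv -u -F zroot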

Once I had the partition for the recovery image mounted to /recovery, I ran the following command to populate the recovery partition and received an error:

sudo pop-upgrade recovery upgrade from-release
$ checking if pop-upgrade requires an update
$ Recovery upgrade status: recovery upgrade aborted: failed to apply system repair before recovery upgrade

This is because there is no /etc/fstab entry for /, which this setup does not need. So how do we fix this?

Short answer: you don't, long term. Long answer: I haven't tested the temporary fix across a reboot to know if it will break anything.

The short-term fix to get the command to work is to add an entry for / to /etc/fstab. Add the following line:

/ / bind defaults 0 0

Yes, this is disgusting, but it gets pop-upgrade to work and install the recovery partition files. Comment out that line right after you are done with pop-upgrade. As I said earlier, I have not tested rebooting with that line in the fstab.

Swap device!

Now that Pop! OS is installed and booted, let's add a swap device. Since Pop! recently added zram as a default swap device, you may or may not want to disable that.

To remove it, run these commands.

swapon
$ NAME
$ /dev/zram0 <--- Take this name for swapoff.

sudo swapoff /dev/zram0
sudo apt purge pop-default-settings-zram -y

Add a block device on the ZFS pool.

sudo zfs create -V 16G -s -o logbias=throughput -o primarycache=metadata -o secondarycache=none zroot/swap

Find the ZFS block device name. The system will name it with zd*.

lsblk | grep disk
$ sda ... 1T ... disk
$ sdb ... 1T ... disk
$ zd0 ... 16G ... disk <--- This one.

Elevate command prompt to root, turn on swap on the ZFS block device, and add it to fstab.

sudo -i
mkswap /dev/zd0
swapon /dev/zd0

cat << EOF >> /etc/fstab
#Swap Device
$( blkid | grep /dev/zd0 | cut -d ' ' -f 2 ) none swap sw 0 0
EOF

Reboot to check that the swap device gets mounted on startup.

reboot
swapon
$ NAME TYPE SIZE USED PRIO
$ /dev/zd0 partition 16G 0B -2

ZFS list script!

So after using Pop! OS, Ubuntu, and other distros on a ZFS root, I got tired of trying to remember the zfs list / zfs get options needed to see basic information about the ZFS pool. So I created a script that shows all the info I think is needed at a glance.

To download a copy of the script, run these commands.

sudo -i
curl -o /usr/local/bin/listzfs.sh -L https://raw.githubusercontent.com/inthebrilliantblue/LinuxScripts/main/tools/ubuntu/listzfs.sh
chmod +x /usr/local/bin/listzfs.sh
exit

You don't have to download the script as root or put it in /usr/local/bin, but I like being able to just run 'listzfs.sh' from the terminal.

Using this script we can see at a glance how much space every dataset is taking up, its compression type / ratio, block device information, and whether the pool itself is degraded. I like putting this in the MOTD on servers where I run ZFS roots.
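
If you also want it in the MOTD, one simple approach on Pop! OS / Ubuntu is to drop a small wrapper into /etc/update-motd.d/ (the 99- prefix just controls ordering; this is one way of many):

sudo tee /etc/update-motd.d/99-listzfs << 'EOF' > /dev/null
#!/bin/sh
/usr/local/bin/listzfs.sh
EOF
sudo chmod +x /etc/update-motd.d/99-listzfs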