Expand ZFS Pool
The story thus far…
Early last year, I set up a NAS in my apartment. Since I had not written an update about it then, I am forced to describe it now. Let this be a lesson to all who think laxity will go unpunished!
The NAS

I bought a 5-bay enclosure for drives and hooked it up to my home server. At the time, I was only able to get a single 4 TB drive. So I stuck the drive into the enclosure, formatted it with BTRFS, and moved most of my media library onto it. Later, I was able to get another 4 TB drive, and by then I had decided that in the long term I want a ZFS pool. So I managed to fish out two old 1 TB drives and started thinking about how best to set things up. In the course of my research, I found this article by Jim Salter. He’s also a host on a podcast I subscribe to, so I consider this article a trusted source of information.
My basic idea, then, was as follows:
- Use the large disk as a single VDEV, to be made into a mirror later.
- Use the pair of small disks as a mirror VDEV, to be expanded later when funds allow for larger disks to be acquired.
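The plan above, in command form. This is a sketch, not what I ran at the time: the pool name tank matches the later sections, but the device links are made-up stand-ins, and the command is only printed, never executed.

```shell
# Made-up stable device links -- substitute your own from /dev/disk/by-id/.
big=/dev/disk/by-id/ata-big-4tb
small1=/dev/disk/by-id/ata-small-1tb-a
small2=/dev/disk/by-id/ata-small-1tb-b

# One single-disk vdev plus one two-disk mirror vdev, in a pool named tank.
create_cmd="zpool create tank $big mirror $small1 $small2"

# Printed rather than executed; run it yourself (as root) once the links are correct.
echo "$create_cmd"
```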
After the research phase came the implementation phase. Without any adventures, I ended up with a NAS setup that looked like this:
BTRFS
  1 disk of 4 TB
  Total capacity: 4 TB
ZFS pool
  vdev1
    1 disk of 4 TB
  vdev2 - mirror
    2 disks of 1 TB
  Total capacity: 5 TB
The BTRFS disk currently acts as a storage for my multimedia (home movies, family images, etc), and the ZFS pool acts as a target for backups of all my computers. (As a bonus: check out this video for how I run my backups.)
The Upgrade
In the Black Friday sales last week, I was able to snag a pair of 4 TB drives at a very reasonable price. I am finally the proud owner of some Seagate Ironwolfs. Or is it Ironwolves? How do plurals of brand names work?

All I needed to figure out now was how the theory of “mirrors are easy to expand” translates into practice: which commands one needs to execute, and in what order, to achieve this without accidental data loss. Turns out, in the case of ZFS, the cookie is exactly what is advertised on the tin, and the entire process was, once again, without adventure.
The guts and bones
So, it's time to finally get your hands dirty. The idea here is simple: your mirror VDEV consists of two drives; let's call them drive1 and drive2. You should have two new drives of better quality. In my case, better quality means larger capacity (and also, as I learnt in the course of this process, much faster). Let's call these drives drive3 and drive4. We first replace drive1 with drive3. This will take some time as ZFS manages the change (a process called resilvering). Since the drives are part of a mirror VDEV, resilvering in this case just means copying all data from drive2 over to drive3. Once this is complete, we replace drive2 with drive4. ZFS will then spend more time resilvering, i.e., copying over data from drive3 to drive4. So, at the end of this process, you will have replaced the two old drives in your zpool with two new drives, and at the end of the second replacement, the pool will gain the qualities of the new drives (in my case, a sudden bump in capacity from 1 TB to 4 TB).
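The process above, condensed into shell. The drive names are the placeholders from this section, and the commands are only printed, not run; the next section walks through each one with real output.

```shell
# The four operations, in order.  drive1..drive4 are the placeholder names
# from the text above; on a real system each would be a /dev/disk/by-id/ link.
step1="zpool offline tank drive1"
step2="zpool replace tank drive1 drive3"   # then wait for the resilver
step3="zpool offline tank drive2"
step4="zpool replace tank drive2 drive4"   # then wait for the second resilver

# Dry run: print each step instead of executing it.
for step in "$step1" "$step2" "$step3" "$step4"; do
  echo "sudo $step"
done
```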
Exact commands
Let's now look at the exact commands to make the changes. I have provided output, sanitized to remove irrelevant information. For this section, we focus only on the four drives described earlier, i.e., drive1, drive2, drive3, and drive4.
Step 1: Current Status
Run sudo zpool status to see your current pool. It should be something like
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            drive1  ONLINE       0     0     0
            drive2  ONLINE       0     0     0

errors: No known data errors
You can also run sudo zpool list to see information about your pool.
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG    CAP  DEDUP   HEALTH  ALTROOT
tank   928G   919G  9.28G        -         -    2%  99.0%  1.00x  HEALTHY  -
Here, we see that your pool is named tank, the drives in the mirror are drive1 and drive2, and
that the pool’s current capacity is 928G.
Step 2: Take drive1 offline
Run the following command:
sudo zpool offline tank drive1
This takes drive1 out of service in your pool. ZFS will also complain at this point about your pool being in a degraded state. If you run sudo zpool status again, you will get something like this as output:
  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            drive1  OFFLINE      0     0     0
            drive2  ONLINE       0     0     0

errors: No known data errors
Step 3: Replace drive1
Run the following command to replace drive1 with drive3:
sudo zpool replace tank drive1 drive3
WARNING!!
drive3 is just a shorthand in this article for the new drive you want to add. Realistically, the actual argument is going to be something like /dev/disk/by-path/path-link-to-drive3 or /dev/disk/by-id/uuid-to-drive3.
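To make the warning concrete, here is a sketch of what the replace command looks like with a stable link. The model and serial in the link are invented for illustration, and the command is only printed, not executed.

```shell
# Hypothetical link -- the model and serial here are made up.  Find the real
# links on your system with: ls -l /dev/disk/by-id/
new_drive=/dev/disk/by-id/ata-EXAMPLE-MODEL_EXAMPLE-SERIAL

replace_cmd="zpool replace tank drive1 $new_drive"
echo "sudo $replace_cmd"   # dry run; execute yourself once the link is verified
```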
This command will begin the process of replacing the first drive. This can take a long time depending on the size of your pool and the speed of your disks. You can check the progress of this process by running sudo zpool status again.
  pool: tank
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Nov 29 13:05:10 2025
        205G / 921G scanned at 462M/s, 17.3G / 921G issued at 39.1M/s
        8.02G resilvered, 0.56% done, 22:14:44 to go
config:

        NAME             STATE     READ WRITE CKSUM
        tank             DEGRADED     0     0     0
          mirror-0       DEGRADED     0     0     0
            replacing-1  DEGRADED     0     0     0
              drive1     OFFLINE      0     0     0
              drive3     ONLINE       0     0     0  (resilvering)
            drive2       ONLINE       0     0     0

errors: No known data errors
After this process is finished (check zpool status and drive1 should have disappeared), we can
move on. For me, the first resilver took around 17 hours.
  pool: tank
 state: ONLINE
  scan: resilvered 921G in 17:14:15 with 0 errors on Sun Nov 30 06:19:25 2025
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            drive3  ONLINE       0     0     0
            drive2  ONLINE       0     0     0

errors: No known data errors
Step 4: Take drive2 offline
Just as above, mark drive2 as offline.
sudo zpool offline tank drive2
  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 921G in 17:14:15 with 0 errors on Sun Nov 30 06:19:25 2025
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            drive2  OFFLINE      0     0     0
            drive3  ONLINE       0     0     0

errors: No known data errors
Step 5: Replace drive2
Same story as last time: replace drive2 with drive4, and then wait for the resilvering to complete.
sudo zpool replace tank drive2 drive4
  pool: tank
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Nov 30 12:32:33 2025
        216G / 921G scanned at 4.30G/s, 36.4G / 921G issued at 72.3M/s
        16.6G resilvered, 1.18% done, 11:57:13 to go
config:

        NAME             STATE     READ WRITE CKSUM
        tank             DEGRADED     0     0     0
          mirror-0       DEGRADED     0     0     0
            replacing-0  DEGRADED     0     0     0
              drive2     OFFLINE      0     0     0
              drive4     ONLINE       0     0     0  (resilvering)
            drive3       ONLINE       0     0     0

errors: No known data errors
When done, run zpool status -v to confirm. In my case, the second resilver took a little under 10 hours.
  pool: tank
 state: ONLINE
  scan: resilvered 921G in 09:19:47 with 0 errors on Sun Nov 30 21:52:20 2025
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            drive4  ONLINE       0     0     0
            drive3  ONLINE       0     0     0

errors: No known data errors
However, autoexpand was not enabled on my zpool. That is, the pool did not automatically expand to use the available capacity on the new, larger drives. This can be checked with sudo zpool list -v.
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank         928G   919G  9.28G        -     2.73T    4%  99.0%  1.00x  ONLINE  -
  mirror-0   928G   919G  9.28G        -     2.73T   13%  99.0%      -  ONLINE
    drive4  3.64T      -      -        -         -     -      -      -  ONLINE
    drive3  3.64T      -      -        -         -     -      -      -  ONLINE
So I also needed to expand the drives in my mirror with the command sudo zpool online -e tank drive3. Surprisingly, I did not need to expand both drives in the mirror to grow the pool; doing one was enough. After expanding, we have more space!
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank        3.63T   919G  2.74T        -         -    0%  24.7%  1.00x  ONLINE  -
  mirror-0  3.63T   919G  2.74T        -         -    0%  24.7%      -  ONLINE
    drive4  3.64T      -      -        -         -     -      -      -  ONLINE
    drive3  3.64T      -      -        -         -     -      -      -  ONLINE
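If you would rather skip the manual expansion step on a future upgrade, ZFS has an autoexpand pool property that can be switched on before replacing drives, so the pool grows on its own once the last small drive is gone. A dry-run sketch (the commands are only printed here):

```shell
# Inspect, then enable, automatic expansion for the pool named tank.
check_cmd="zpool get autoexpand tank"
enable_cmd="zpool set autoexpand=on tank"

# Printed as a dry run; run them with sudo against the real pool.
echo "$check_cmd"
echo "$enable_cmd"
```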
Step 6: Remove old hardware
Technically, you can physically disconnect a drive as soon as you mark it offline, but, if you have
enough bays in your storage device, and not enough confidence in your mind to know which physical
drive was drive1 and definitely not drive2, you can leave the offline drives in until both have
been replaced.
Power down your device, and pull out your old drives. Your pool should now only depend on your shiny new drives, and you can submit your old drives for shredding (or software shred them, and donate them to charity).
