Discussion:
zpool list -p's FREE vs. zfs list -p's AVAIL? FREE-AVAIL == 6_675_374_080 (199G zroot pool)
Mark Millard via freebsd-current
2021-05-05 23:40:01 UTC
Context:

# gpart show -pl da0
=>        40  468862048    da0  GPT       (224G)
          40     532480  da0p1  efiboot0  (260M)
      532520       2008         - free -  (1.0M)
      534528   25165824  da0p2  swp12a    (12G)
    25700352   25165824  da0p4  swp12b    (12G)
    50866176  417994752  da0p3  zfs0      (199G)
   468860928       1160         - free -  (580K)

There is just one pool: zroot and it is on zfs0 above.
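
As a quick sanity check on the sizes involved, the zfs0 partition's sector
count from the gpart output above converts to bytes as follows (a sketch,
assuming the usual 512-byte sectors that gpart reports in here):

#!/bin/sh
# zfs0 spans 417994752 sectors; at 512 bytes per sector that is the
# roughly 199 GiB gpart shows for the partition.
sectors=417994752
echo "$(( sectors * 512 )) bytes"                       # 214013313024
echo "$(( sectors * 512 / (1024 * 1024 * 1024) )) GiB"  # 199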

# zpool list -p
NAME           SIZE        ALLOC          FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
zroot  213674622976  71075655680  142598967296        -         -    28   33   1.00  ONLINE  -

So FREE: 142_598_967_296
(using _ to make it more readable)

# zfs list -p zroot
NAME          USED         AVAIL  REFER  MOUNTPOINT
zroot  71073697792  135923593216  98304  /zroot

So AVAIL: 135_923_593_216

FREE-AVAIL == 6_675_374_080
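
For anyone wanting to reproduce the comparison directly, a minimal sketch
(assuming the OpenZFS userland tools and the pool name zroot used
throughout this thread):

#!/bin/sh
# Fetch both figures in parsable (exact byte) form and show the gap.
pool=zroot
free=$(zpool get -Hp -o value free "$pool")
avail=$(zfs get -Hp -o value available "$pool")
printf 'FREE  = %s\nAVAIL = %s\nFREE - AVAIL = %s\n' \
    "$free" "$avail" "$(( free - avail ))"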



The questions:

Is this sort of unavailable pool-free-space normal?
Is this some sort of expected overhead that just is
not explicitly reported? Possibly a "FRAG"
consequence?


For reference:

# zpool status
pool: zroot
state: ONLINE
scan: scrub repaired 0B in 00:31:48 with 0 errors on Sun May 2 19:52:14 2021
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          da0p3     ONLINE       0     0     0

errors: No known data errors


===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
Yuri Pankov
2021-05-06 00:01:07 UTC
Post by Mark Millard via freebsd-current
# gpart show -pl da0
=>        40  468862048    da0  GPT       (224G)
          40     532480  da0p1  efiboot0  (260M)
      532520       2008         - free -  (1.0M)
      534528   25165824  da0p2  swp12a    (12G)
    25700352   25165824  da0p4  swp12b    (12G)
    50866176  417994752  da0p3  zfs0      (199G)
   468860928       1160         - free -  (580K)
There is just one pool: zroot and it is on zfs0 above.
# zpool list -p
NAME           SIZE        ALLOC          FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
zroot  213674622976  71075655680  142598967296        -         -    28   33   1.00  ONLINE  -
So FREE: 142_598_967_296
(using _ to make it more readable)
# zfs list -p zroot
NAME          USED         AVAIL  REFER  MOUNTPOINT
zroot  71073697792  135923593216  98304  /zroot
So AVAIL: 135_923_593_216
FREE-AVAIL == 6_675_374_080
Is this sort of unavailable pool-free-space normal?
Is this some sort of expected overhead that just is
not explicitly reported? Possibly a "FRAG"
consequence?
From zpoolprops(8):

     free    The amount of free space available in the pool. By contrast,
             the zfs(8) available property describes how much new data can
             be written to ZFS filesystems/volumes. The zpool free property
             is not generally useful for this purpose, and can be
             substantially more than the zfs available space. This
             discrepancy is due to several factors, including raidz parity;
             zfs reservation, quota, refreservation, and refquota
             properties; and space set aside by spa_slop_shift (see
             zfs-module-parameters(5) for more information).
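
To check which of those factors apply on a given pool, something along
these lines can be used (a sketch: the zfs property names are the ones
listed above, while the sysctl name for spa_slop_shift on FreeBSD is an
assumption and may differ by version):

#!/bin/sh
# List the per-dataset space settings mentioned above for every dataset
# in the pool, then show the slop-space tunable.
pool=zroot
zfs get -r -t filesystem,volume \
    quota,refquota,reservation,refreservation "$pool"
# Assumption: the OpenZFS spa_slop_shift module parameter is exposed as
# a vfs.zfs.* sysctl on FreeBSD.
sysctl vfs.zfs.spa_slop_shift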
Mark Millard via freebsd-current
2021-05-06 00:18:34 UTC
Post by Yuri Pankov
Post by Mark Millard via freebsd-current
# gpart show -pl da0
=>        40  468862048    da0  GPT       (224G)
          40     532480  da0p1  efiboot0  (260M)
      532520       2008         - free -  (1.0M)
      534528   25165824  da0p2  swp12a    (12G)
    25700352   25165824  da0p4  swp12b    (12G)
    50866176  417994752  da0p3  zfs0      (199G)
   468860928       1160         - free -  (580K)
There is just one pool: zroot and it is on zfs0 above.
# zpool list -p
NAME           SIZE        ALLOC          FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
zroot  213674622976  71075655680  142598967296        -         -    28   33   1.00  ONLINE  -
So FREE: 142_598_967_296
(using _ to make it more readable)
# zfs list -p zroot
NAME          USED         AVAIL  REFER  MOUNTPOINT
zroot  71073697792  135923593216  98304  /zroot
So AVAIL: 135_923_593_216
FREE-AVAIL == 6_675_374_080
Is this sort of unavailable pool-free-space normal?
Is this some sort of expected overhead that just is
not explicitly reported? Possibly a "FRAG"
consequence?
     free    The amount of free space available in the pool. By contrast,
             the zfs(8) available property describes how much new data can
             be written to ZFS filesystems/volumes. The zpool free property
             is not generally useful for this purpose, and can be
             substantially more than the zfs available space. This
             discrepancy is due to several factors, including raidz parity;
             zfs reservation, quota, refreservation, and refquota
             properties; and space set aside by spa_slop_shift (see
             zfs-module-parameters(5) for more information).
Thanks for pointing to the reference material.

6_675_374_080/213_674_622_976 =approx= 0.03124 =approx= 1.0/32.0

and spa_slop_shift's description reports:

QUOTE
spa_slop_shift (int)
Normally, we don't allow the last 3.2%
(1/(2^spa_slop_shift)) of space in the pool to be consumed.
This ensures that we don't run the pool completely out of
space, due to unaccounted changes (e.g. to the MOS). It
also limits the worst-case time to allocate space. If we
have less than this amount of free space, most ZPL
operations (e.g. write, create) will return ENOSPC.

Default value: 5.
END QUOTE

So in my simple context, apparently not much else
contributes and the figures are basically as
expected.
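
To make that explicit, here is a worked version of the arithmetic (a
sketch; the figures are the ones reported above, with spa_slop_shift at
its default of 5, i.e. 1/32 of the pool):

#!/bin/sh
# Estimate the slop-space reservation from the pool SIZE and compare it
# to the observed FREE-AVAIL gap.
size=213674622976        # zpool list -p SIZE
gap=6675374080           # FREE - AVAIL from above
slop=$(( size / 32 ))    # SIZE / 2^spa_slop_shift, shift = 5
echo "estimated slop: $slop"              # 6677331968
echo "observed gap:   $gap"               # 6675374080
echo "difference:     $(( slop - gap ))"  # 1957888 bytes, under 2 MB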

Thanks again.

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
