Discussion:
ZFS i/o error in recent 12.0
KIRIYAMA Kazuhiko
2018-03-19 23:00:04 UTC
Hi,

I've encountered a sudden death on a ZFS full-volume
machine (r330434) about 10 days after installation [1]:

ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool zroot
gptzfsboot: failed to mount default pool zroot

FreeBSD/x86 boot
ZFS: i/o error - all block copies unavailable
ZFS: can't find dataset u
Default: zroot/<0x0>:
boot:

The partitioning is as below:

# gpart show /dev/mfid0
=>          40  31247564720  mfid0  GPT  (15T)
            40       409600      1  efi  (200M)
        409640         1024      2  freebsd-boot  (512K)
        410664          984         - free -  (492K)
        411648    268435456      3  freebsd-swap  (128G)
     268847104  30978715648      4  freebsd-zfs  (14T)
   31247562752         2008         - free -  (1.0M)

#

But nothing of the sort had happened on an older -CURRENT ZFS full-volume
machine (r327038M). According to [2], the cause is an inconsistent
boot/zfs/zpool.cache. I've tried to cope with this by repairing
/boot [3] from a rescue bootable USB stick as follows:

# kldload zfs
# zpool import
   pool: zroot
     id: 17762298124265859537
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        zroot        ONLINE
          mfid0p4    ONLINE
# zpool import -fR /mnt zroot
# df -h
Filesystem                  Size    Used   Avail Capacity  Mounted on
/dev/da0p2                   14G    1.6G     11G    13%    /
devfs                       1.0K    1.0K      0B   100%    /dev
zroot/.dake                  14T     18M     14T     0%    /mnt/.dake
zroot/ds                     14T     96K     14T     0%    /mnt/ds
zroot/ds/backup              14T     88K     14T     0%    /mnt/ds/backup
zroot/ds/backup/kazu.pis     14T     31G     14T     0%    /mnt/ds/backup/kazu.pis
zroot/ds/distfiles           14T    7.9M     14T     0%    /mnt/ds/distfiles
zroot/ds/obj                 14T     10G     14T     0%    /mnt/ds/obj
zroot/ds/packages            14T    4.0M     14T     0%    /mnt/ds/packages
zroot/ds/ports               14T    1.3G     14T     0%    /mnt/ds/ports
zroot/ds/src                 14T    2.6G     14T     0%    /mnt/ds/src
zroot/tmp                    14T     88K     14T     0%    /mnt/tmp
zroot/usr/home               14T    136K     14T     0%    /mnt/usr/home
zroot/usr/local              14T     10M     14T     0%    /mnt/usr/local
zroot/var/audit              14T     88K     14T     0%    /mnt/var/audit
zroot/var/crash              14T     88K     14T     0%    /mnt/var/crash
zroot/var/log                14T    388K     14T     0%    /mnt/var/log
zroot/var/mail               14T     92K     14T     0%    /mnt/var/mail
zroot/var/ports              14T     11M     14T     0%    /mnt/var/ports
zroot/var/tmp                14T    6.0M     14T     0%    /mnt/var/tmp
zroot/vm                     14T    2.8G     14T     0%    /mnt/vm
zroot/vm/tbedfc              14T    1.6G     14T     0%    /mnt/vm/tbedfc
zroot                        14T     88K     14T     0%    /mnt/zroot
# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
zroot                     51.1G  13.9T    88K  /mnt/zroot
zroot/.dake               18.3M  13.9T  18.3M  /mnt/.dake
zroot/ROOT                1.71G  13.9T    88K  none
zroot/ROOT/default        1.71G  13.9T  1.71G  /mnt/mnt
zroot/ds                  45.0G  13.9T    96K  /mnt/ds
zroot/ds/backup           30.8G  13.9T    88K  /mnt/ds/backup
zroot/ds/backup/kazu.pis  30.8G  13.9T  30.8G  /mnt/ds/backup/kazu.pis
zroot/ds/distfiles        7.88M  13.9T  7.88M  /mnt/ds/distfiles
zroot/ds/obj              10.4G  13.9T  10.4G  /mnt/ds/obj
zroot/ds/packages         4.02M  13.9T  4.02M  /mnt/ds/packages
zroot/ds/ports            1.26G  13.9T  1.26G  /mnt/ds/ports
zroot/ds/src              2.56G  13.9T  2.56G  /mnt/ds/src
zroot/tmp                   88K  13.9T    88K  /mnt/tmp
zroot/usr                 10.4M  13.9T    88K  /mnt/usr
zroot/usr/home             136K  13.9T   136K  /mnt/usr/home
zroot/usr/local           10.2M  13.9T  10.2M  /mnt/usr/local
zroot/var                 17.4M  13.9T    88K  /mnt/var
zroot/var/audit             88K  13.9T    88K  /mnt/var/audit
zroot/var/crash             88K  13.9T    88K  /mnt/var/crash
zroot/var/log              388K  13.9T   388K  /mnt/var/log
zroot/var/mail              92K  13.9T    92K  /mnt/var/mail
zroot/var/ports           10.7M  13.9T  10.7M  /mnt/var/ports
zroot/var/tmp             5.98M  13.9T  5.98M  /mnt/var/tmp
zroot/vm                  4.33G  13.9T  2.75G  /mnt/vm
zroot/vm/tbedfc           1.58G  13.9T  1.58G  /mnt/vm/tbedfc
# zfs mount zroot/ROOT/default
# cd /mnt/mnt/
# mv boot boot.bak
# cp -RPp boot.bak boot
# gpart show /dev/mfid0
=>          40  31247564720  mfid0  GPT  (15T)
            40       409600      1  efi  (200M)
        409640         1024      2  freebsd-boot  (512K)
        410664          984         - free -  (492K)
        411648    268435456      3  freebsd-swap  (128G)
     268847104  30978715648      4  freebsd-zfs  (14T)
   31247562752         2008         - free -  (1.0M)

# gpart bootcode -b /mnt/mnt/boot/pmbr -p /boot/gptzfsboot -i 2 mfid0
partcode written to mfid0p2
bootcode written to mfid0
# cd
# zpool export zroot
#

But it still cannot boot:

ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool zroot
gptzfsboot: failed to mount default pool zroot

FreeBSD/x86 boot

Any idea?

Best regards

[1] http://ds.truefc.org/~kiri/freebsd/current/zfs/messages
[2] https://lists.freebsd.org/pipermail/freebsd-questions/2016-February/270505.html
[3] https://forums.freebsd.org/threads/zfs-i-o-error-all-block-copies-unavailable-invalid-format.55227/#post-312830

---
KIRIYAMA Kazuhiko
Allan Jude
2018-03-19 23:09:31 UTC
Post by KIRIYAMA Kazuhiko
Hi,
I've encountered a sudden death on a ZFS full-volume machine
ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool zroot
gptzfsboot: failed to mount default pool zroot
[...]
According to [2], the cause is an inconsistent boot/zfs/zpool.cache.
I've tried to cope with this by repairing
# kldload zfs
# zpool import
[...]
# gpart bootcode -b /mnt/mnt/boot/pmbr -p /boot/gptzfsboot -i 2 mfid0
partcode written to mfid0p2
bootcode written to mfid0
# cd
# zpool export zroot
[...]
Any idea?
Since you were able to 'zpool import' the pool after booting from the USB
stick, this suggests the problem is with the boot blocks, not ZFS.

The early boot phase (gptzfsboot) does not read zpool.cache, since that
only lives ON the pool, and the pool has not been imported yet.

Maybe kevans@ or imp@ who have been working on the boot code have some
insight.
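
For completeness, rewriting both boot paths from the rescue environment
would look roughly like this (a sketch, assuming the pool is imported at
/mnt as in the transcript, and that the ESP uses the conventional
/efi/boot/bootx64.efi layout):

# gpart bootcode -b /mnt/mnt/boot/pmbr -p /mnt/mnt/boot/gptzfsboot -i 2 mfid0
# mount_msdosfs /dev/mfid0p1 /media
# cp /mnt/mnt/boot/loader.efi /media/efi/boot/bootx64.efi
# umount /media

The first line rewrites the BIOS chain (pMBR plus gptzfsboot in the
freebsd-boot partition); the rest refresh the UEFI loader on the efi
partition. Whichever gptzfsboot gets installed should be at least as new
as the pool's feature flags; note the transcript above took it from the
rescue stick's /boot rather than from the repaired pool.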
--
Allan Jude
Trond Endrestøl
2018-03-20 07:09:29 UTC
Post by KIRIYAMA Kazuhiko
Hi,
I've encountered a sudden death on a ZFS full-volume machine
ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool zroot
gptzfsboot: failed to mount default pool zroot
[...]
# kldload zfs
# zpool import -fR /mnt zroot
[...]
# zfs mount zroot/ROOT/default
# cd /mnt/mnt/
# mv boot boot.bak
# cp -RPp boot.bak boot
# gpart bootcode -b /mnt/mnt/boot/pmbr -p /boot/gptzfsboot -i 2 mfid0
partcode written to mfid0p2
bootcode written to mfid0
# cd
# zpool export zroot
This step has been a big no-no in the past. Never leave your
bootpool/rootpool in an exported state if you intend to boot from it.
For all I know, this advice might be superstition for the present
versions of FreeBSD.

From what I can tell from the above, you never created a new
zpool.cache and copied it to its rightful place.

If you suspect your zpool.cache is out of date, then this should do
the trick:

zpool import -o cachefile=/tmp/zpool.cache -fR /mnt zroot

If you have additional pools, you may want to treat them the same way.

cp -p /tmp/zpool.cache /mnt/mnt/boot/zfs/zpool.cache
reboot
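
Put together, the whole cache refresh from the rescue USB would be
roughly this (a sketch following the pool name and mount points from
your transcript):

# kldload zfs
# zpool import -o cachefile=/tmp/zpool.cache -fR /mnt zroot
# zfs mount zroot/ROOT/default
# cp -p /tmp/zpool.cache /mnt/mnt/boot/zfs/zpool.cache
# reboot

The import writes a fresh cache file to /tmp, and the copy puts it where
the booted system expects it, under the boot environment's /boot/zfs/.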
Post by KIRIYAMA Kazuhiko
#
ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool zroot
gptzfsboot: failed to mount default pool zroot
FreeBSD/x86 boot
Any idea?
[...]
--
Trond.
Andriy Gapon
2018-03-20 14:29:43 UTC
Post by Trond Endrestøl
This step has been a big no-no in the past. Never leave your
bootpool/rootpool in an exported state if you intend to boot from it.
For all I know, this advice might be superstition for the present
versions of FreeBSD.
Yes, it is. That does not matter at all now.
Post by Trond Endrestøl
From what I can tell from the above, you never created a new
zpool.cache and copied it to its rightful place.
For the _root_ pool, zpool.cache does not matter either.
It matters only for the auto-import of additional pools, if any.
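
If in doubt about what the cache currently holds, zdb can dump it (a
quick inspection, not a fix; output abridged and illustrative):

# zdb -U /boot/zfs/zpool.cache
zroot:
    version: 5000
    name: 'zroot'
    state: 0
    ...

Only pools listed there get auto-imported at boot; the root pool itself
is found by the loader and kernel without it.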
--
Andriy Gapon
Allan Jude
2018-03-20 17:10:50 UTC
Post by Andriy Gapon
Post by Trond Endrestøl
This step has been a big no-no in the past. Never leave your
bootpool/rootpool in an exported state if you intend to boot from it.
For all I know, this advice might be superstition for the present
versions of FreeBSD.
Yes, it is. That does not matter at all now.
Post by Trond Endrestøl
From what I can tell from the above, you never created a new
zpool.cache and copied it to its rightful place.
For the _root_ pool, zpool.cache does not matter either.
It matters only for the auto-import of additional pools, if any.
As I mentioned previously, the error reported by the user occurs before it
is even possible to read zpool.cache, so it is definitely not the source
of the problem.
--
Allan Jude
Markus Wild
2018-03-20 07:50:28 UTC
Hi there,
Post by KIRIYAMA Kazuhiko
I've encountered a sudden death on a ZFS full-volume machine
ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool zroot
gptzfsboot: failed to mount default pool zroot
268847104 30978715648 4 freebsd-zfs (14T)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I had faced the exact same issue on an HP Microserver G8 with 8TB disks and a 16TB zpool on FreeBSD 11 about a year ago.
My conclusion was that over time (and with kernel updates), the blocks for the kernel file were reallocated to a
later spot on the disks, and that however the loader fetches those blocks, it now failed doing so (perhaps a 2/4TB
limit/bug in the BIOS of that server? Unfortunately, there was no UEFI support for it; I don't know whether that
has changed in the meantime). The pool could always be imported fine from the USB stick; the problem was only with the
boot loader. I worked around the problem by stealing space from the swap partitions on two disks to build a "zboot"
pool just containing the /boot directory, having the boot loader load the kernel from there, and then still mounting
the real root pool to run the system off, using loader variables in loader.conf of the boot pool. It's a hack, but
it's working:

# zfs boot kludge due to buggy bios
vfs.root.mountfrom="zfs:zroot/ROOT/fbsd11"

If you're facing the same problem, you might give this a shot? You seem to have plenty of swap to cannibalize as well ;)
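
Roughly, the workaround looks like this (a sketch with hypothetical
device names ada0/ada1 and a 4G size; the actual partition indices and
labels depend on your layout):

# gpart add -t freebsd-zfs -l zfs-boot0 -s 4G ada0
# gpart add -t freebsd-zfs -l zfs-boot1 -s 4G ada1
# zpool create zboot mirror gpt/zfs-boot0 gpt/zfs-boot1
# zpool set bootfs=zboot zboot
# cp -RPp /boot /zboot/boot

Then vfs.root.mountfrom in the zboot copy of loader.conf points the
kernel back at the real root pool, as in the snippet above.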
Toomas Soome
2018-03-20 08:21:02 UTC
Post by Markus Wild
I had faced the exact same issue on an HP Microserver G8 with 8TB disks and a 16TB zpool on FreeBSD 11 about a year ago.
[...]
If you're facing the same problem, you might give this a shot? You seem to have plenty of swap to cannibalize as well ;)
Please check with lsdev -v from the loader OK prompt that the reported disk/partition sizes make sense. Another thing: even if you update your current build, you want to make sure the installed boot blocks are updated as well - otherwise you will have a new binary in the /boot directory, but it is not installed in the boot block area…
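
At the loader prompt that check would look something like this
(illustrative output only; the exact formatting varies by loader version):

OK lsdev -v
disk devices:
  disk0:   BIOS drive C (31247564760 X 512):
    disk0p1: EFI                  200MB
    disk0p2: FreeBSD boot         512KB
    disk0p3: FreeBSD swap         128GB
    disk0p4: FreeBSD ZFS           14TB
zfs devices:
  zfs:zroot

If the disk or the freebsd-zfs partition shows up with a truncated size
(say, wrapped at 2TB), the BIOS disk interface is the likely culprit.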

rgds,
toomas
Thomas Steen Rasmussen
2018-03-21 07:20:42 UTC
Post by Markus Wild
I had faced the exact same issue on an HP Microserver G8 with 8TB disks and a 16TB zpool on FreeBSD 11 about a year ago.
Hello,

I will ask you the same question as I asked the OP:

Has this pool had new vdevs added to it since the server was installed?
What does a "zpool status" look like when the pool is imported?

Explanation: Some controllers only make a small fixed number of devices
visible to the BIOS during boot. Imagine a zpool was booted with, say, 4
disks in a pool, and 4 more were added. If the HBA only shows 4 drives to
the BIOS during boot, you see this error.
If you think this might be relevant, you need to chase down a setting
called "maximum int13 devices for this adapter" or something like that.
See page 3-4 in this documentation:
https://supermicro.com/manuals/other/LSI_HostRAID_2308.pdf

The setting has been set to 4 on a bunch of servers I've bought over the
last few years. You install the server with 4 disks, later add new
disks, reboot one day, and nothing works until you raise the setting
enough that the bootloader can see the whole pool; then you're good again.

/Thomas
Markus Wild
2018-03-21 09:28:48 UTC
Hello Thomas,
Post by Thomas Steen Rasmussen
Post by Markus Wild
I had faced the exact same issue on an HP Microserver G8 with 8TB disks and a 16TB zpool on FreeBSD 11 about a year ago.
Has this pool had new vdevs added to it since the server was installed?
No. This is a Microserver with only 4 (not even hot-plug) trays. It was set up using the FreeBSD installer
originally. I had to apply the btx loader fix (then a patch; I don't know whether it's included as standard now) that
retries a failed read, to get around BIOS bugs with that server, but after that the server booted fine. It's only after
a bit of use and a kernel update that things went south. I tried many different things at that time, but the only
approach that worked for me was to steal 2 of the 4 swap partitions (which I had placed on every disk initially) and
build a mirrored boot zpool from those. The loader had no problem loading the kernel from that, and when the kernel
took over, it had no problem using the original root pool (which the boot loader wasn't able to find/load). Hence my
conclusion that the 2nd stage boot loader has a problem (probably due to yet another BIOS bug on that server) loading
blocks beyond a certain limit, which could be 2TB or 4TB.
Post by Thomas Steen Rasmussen
What does a "zpool status" look like when the pool is imported?
$ zpool status
  pool: zboot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Mar 21 03:58:36 2018
config:

        NAME               STATE     READ WRITE CKSUM
        zboot              ONLINE       0     0     0
          mirror-0         ONLINE       0     0     0
            gpt/zfs-boot0  ONLINE       0     0     0
            gpt/zfs-boot1  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 6h49m with 0 errors on Sat Mar 10 10:17:49 2018
config:

        NAME          STATE     READ WRITE CKSUM
        zroot         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            gpt/zfs0  ONLINE       0     0     0
            gpt/zfs1  ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            gpt/zfs2  ONLINE       0     0     0
            gpt/zfs3  ONLINE       0     0     0

errors: No known data errors

Please note: this server is now in use at a customer's site, and it's working fine with this workaround. I just brought
it up to give a possible explanation for the problem observed by the original poster: it _might_ have nothing to do
with a newer version of the current kernel, but rather be due to the updated kernel being written to a new location
on disk which can't be read properly by the boot loader.

Cheers,
Markus
Thomas Steen Rasmussen
2018-03-21 07:01:24 UTC
Post by KIRIYAMA Kazuhiko
Hi,
I've encountered a sudden death on a ZFS full-volume machine
ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool zroot
gptzfsboot: failed to mount default pool zroot
FreeBSD/x86 boot
ZFS: i/o error - all block copies unavailable
ZFS: can't find dataset u
Has this pool had new vdevs added to it since the server was installed?
What does a "zpool status" look like when the pool is imported?

/Thomas
KIRIYAMA Kazuhiko
2018-03-22 06:26:09 UTC
At Wed, 21 Mar 2018 08:01:24 +0100,
Post by Thomas Steen Rasmussen
Post by KIRIYAMA Kazuhiko
Hi,
I've encountered a sudden death on a ZFS full-volume machine
ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool zroot
gptzfsboot: failed to mount default pool zroot
FreeBSD/x86 boot
ZFS: i/o error - all block copies unavailable
ZFS: can't find dataset u
Has this pool had new vdevs added to it since the server was installed?
No. /dev/mfid0p4 is on a RAID60 volume of the AVAGO MegaRAID controller [1].
Post by Thomas Steen Rasmussen
What does a "zpool status" look like when the pool is imported?
It looks like this:

root@t1:~ # zpool import -fR /mnt zroot
root@t1:~ # zpool status
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME       STATE     READ WRITE CKSUM
        zroot      ONLINE       0     0     0
          mfid0p4  ONLINE       0     0     0

errors: No known data errors
root@t1:~ #

[1] http://ds.truefc.org/~kiri/freebsd/current/zfs/dmesg.boot
Post by Thomas Steen Rasmussen
/Thomas
---
KIRIYAMA Kazuhiko
