Hi all,
UPDATE: I closed the post (the timebox I gave myself to understand the issue is now over). Thank you all for the help ^^
DISCLAIMER: The objective of this post is to understand how people would debug issues like these when real data is involved and get to the bottom of the problem. The objective is NOT to "restore service" but to understand what failed. The tone of the post is deliberately not serious, to keep things light.
I am playing a little with TrueNAS SCALE and ZFS. I was trying to use a second NVMe disk over USB as the target for a once-a-day replication of the main pool, but I had issues with this secondary pool being SUSPENDED for "too many errors". This pool is not directly read or written by users/apps; it is just there to be replicated onto once a day.
Now, please, I know that using disks over USB is not advised. Also, I am not interested in recovering the data, since there is nothing real on it. What I am doing is testing whether the system is brittle and, if it is, learning how to debug a real issue.
Now to the point. The pool is SUSPENDED. Good. Why? I mean, the real reason why. To see if the system can be used in real life, it needs to be debuggable.
Let's start. The pool is SUSPENDED:
pool: tank-02
state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-JQ
config:
NAME STATE READ WRITE CKSUM
tank-02 UNAVAIL 0 0 0 insufficient replicas
xxx-xxx-xxx-xxx-xxx FAULTED 3 0 0 too many errors
errors: 4 data errors, use '-v' for a list
To which you may ask: why? "Too many errors" (and -v says nothing more). Well, that doesn't help, does it? When you run zpool clear:
# zpool clear tank-02
cannot clear errors for tank-02: I/O error
Incredibly useful as you can see. dmesg to the rescue?
WARNING: Pool 'tank-02' has encountered an uncorrectable I/O failure and has been suspended.
Thanks? I guess. I know it is trying to safeguard the data, but again... why?
Before you ask:
usb 2-4: USB disconnect, device number 12
whatever the reason may be. I mean, kick me if I know why TrueNAS SCALE decided that setting /sys/module/usbcore/parameters/autosuspend to 2 is a good idea, but again, that is not the point. I need ZFS to tell me what the issue is from its point of view. I have read a lot online. Maybe it is the temperature (the USB enclosure heating up), maybe it is the cable, the power, "it is the USB controller", or the chipset doing the USB -> NVMe bridging... However, nobody is saying what to check. People are guessing. I have seen more rigor behind reading tea leaves.
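(For anyone curious, checking and temporarily disabling that autosuspend looks roughly like this; it all needs root, and 2-4 is just the bus/port my enclosure happened to land on:)

cat /sys/module/usbcore/parameters/autosuspend        # global default in seconds (2 here)
echo -1 > /sys/module/usbcore/parameters/autosuspend  # -1 disables the global default until reboot
cat /sys/bus/usb/devices/2-4/power/control            # per-device policy: "auto" or "on"
echo on > /sys/bus/usb/devices/2-4/power/control      # keep this one device fully powered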
My question for you all is this: ZFS SUSPENDED one of my pools. It seems to me it is refusing to fix it, refusing to do anything with it, and refusing to tell me why. So, in a real-world case, how do you debug it? If I have to trust my data to it, I don't want the only option to be "use many disks and just replace a disk and the cable when ZFS goes poo-poo".
How do I find the actual cause?
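To make it concrete, this is the kind of information gathering I have in mind (sdX is a placeholder for whatever the USB-to-NVMe bridge registers as, and the SMART query only works if the bridge supports SAT passthrough):

zpool events -v tank-02                       # ZFS's own event log: IO errors, vdev state changes, delays
zpool status -v tank-02                       # which datasets/files the 4 data errors actually touched
journalctl -k -b | grep -iE 'usb|i/o error'   # kernel side: disconnects, resets, timeouts
smartctl -a -d sat /dev/sdX                   # SMART data through the USB bridge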
Thank you for the help.
PS: I am sure I am missing some very basic ZFS knowledge on the topic, so please let me know what else I can do to make ZFS talk to me.
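PPS: for completeness, my understanding of the plain "restore service" path (which, again, is not what this post is about) is: get the device back, re-import, clear, and scrub to see what survived. Roughly:

zpool export tank-02     # may refuse or hang while the pool is still suspended
# reconnect / power-cycle the USB enclosure, then:
zpool import tank-02
zpool clear tank-02      # reset the error counters now that the vdev is back
zpool scrub tank-02      # re-read everything to verify the data actually survived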
===
Hi all,
I am having a strange new issue with one of the Raspberry Pi 4Bs I have running at home. One of them failed/restarted for some reason and is now stuck at boot with the line:
Waiting for root device LABEL=writable...
I am booting this Pi from USB. From what I can see the disk is fine: I can mount it on my laptop and access it correctly, and the partition is labelled correctly. I tried moving it to another Pi I have and got the same error (I did this to rule out the Pi or the USB port). I am pretty sure power is not the issue either, since I am giving it more than enough.
All of this was working correctly until now (for months). Ubuntu may have updated something (my fault, I may not have disabled the auto-update) or something else could have broken.
I can try pointing to the partition via UUID instead of the label, but something tells me that is not the issue. Did anybody encounter such an issue in the past, or have any advice on how to debug it?
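(For reference, if I do go the UUID route, the change would be in cmdline.txt on the small FAT "system-boot" partition; /dev/sda1 and /dev/sda2 below are just what the disk shows up as on my laptop:)

sudo blkid /dev/sda2        # UUID of the "writable" root partition
sudo mount /dev/sda1 /mnt   # the FAT "system-boot" partition holds cmdline.txt
sudo sed -i 's/root=LABEL=writable/root=UUID=<uuid-from-blkid>/' /mnt/cmdline.txt
sudo umount /mnt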
Thank you for your help and time.
===
Solution: https://sh.itjust.works/comment/1646333