CentOS 7: pool is not mounted/imported after reboot #2575
Comments
Can you check what hostid is being used on your system?
dmesg | grep "using hostid"
sudo zdb
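For reference, a minimal way to compare the two values by hand (assuming the pool is in /etc/zfs/zpool.cache so a bare zdb can read its configuration; the grep pattern is only illustrative):

    # hostid the SPL/ZFS modules picked up at boot
    dmesg | grep "using hostid"
    # hostid recorded in the cached pool configuration
    sudo zdb | grep -i hostid
    # if the two values differ, the import will refuse to proceed without -f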
Does everything work properly for a clean shutdown?
It does not work after any reboot.
Saw exactly the same problem: no /etc/hostid file exists after a fresh zfs installation.
Everything is fine on a subsequent reboot.
dd if=/dev/urandom of=/etc/hostid bs=4 count=1 did not help.
But something has changed compared with what you had before. Does zdb still show any pool?
I ran zpool import STORE (without -f) manually (not via the startup service) and the pool STORE came back. For now everything is working fine, until the next reboot ))
Same problem here. I set /etc/hostid, and now that is okay, but still no pool. Looking at /var/log/messages, I see that the mpt2sas driver is enumerating the SATA disks for that pool, but only after this:
Sep 21 19:16:03 centos7-ha3 kernel: SPL: using hostid 0x1bef8b26
Is something being run in the wrong order? Is the LSI HBA presenting the drives too slowly? Any ideas on how to address this? Thanks!
Pinging this to try to get a helpful workaround. In CentOS 7, due to parallelism in the systemd implementation, I supposedly can't be guaranteed that importing the pool in /etc/rc.local will work. At the moment, I have to import the pool manually, which is obviously not good.
As a simple workaround you could create, for example, the following crontab entry for root:
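The entry itself was lost above; a sketch of what it presumably looked like, using the pool name STORE from this report (the /sbin path and the sleep come from follow-up comments, so treat the details as assumptions):

    # root's crontab (crontab -e as root): import the pool once at boot
    @reboot sleep 30 && /sbin/zpool import STORE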
Thanks. It doesn't work quite right (at least on CentOS 7). I need to use /sbin/zpool because the crontab search path only references /bin and /usr/bin. Also, even though I set up vdev_id.conf to give me the WWNs for the disks, when I ran the cron job I got: pool: tank-copy
Also, for some reason, the NFS share is not being exported. I seem to need to run zfs set sharenfs=on tank-copy/vsphere even though it was already on. Not sure how much of this is specific to CentOS 7...
Here's what I seem to need to do (it's worked 5 times in a row lol): sleep 30
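The rest of that snippet was cut off; judging from the surrounding comments (the second sleep and the sharenfs workaround), it presumably looked roughly like the following, with the pool and dataset names taken from earlier in the thread:

    sleep 30
    /sbin/zpool import tank-copy
    sleep 30
    /sbin/zfs set sharenfs=on tank-copy/vsphere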
The second sleep should not be necessary. How to change device names is explained here:
I think I was not clear. I created the vdev_id.conf file to define the WWN names as I wanted, and that works just fine (except for this one time when I rebooted and ran 'zpool import tank-copy', which should have used the WWN names but didn't, for reasons I can't explain).
"Second sleep should not be necessary:" Except it is :( Note also that 'zfs share -a' is not adequate. I need to set the sharenfs property to trigger something. I'm not just speculating here, I've tried this a dozen different ways to get it to work... Like I said, may be some centos7/zed/systemd thing... |
It seems I am not the only one. Is there any information I can provide to help solve this?
Possibly fixed by #2766 if Dracut is used.
Make use of Dracut's ability to restore the initramfs on shutdown and pivot to it, allowing for a clean unmount and export of the ZFS root. No need to force-import on every reboot anymore.
Signed-off-by: Lukas Wunner <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue openzfs#2195 Issue openzfs#2476 Issue openzfs#2498 Issue openzfs#2556 Issue openzfs#2563 Issue openzfs#2575 Issue openzfs#2600 Issue openzfs#2755 Issue openzfs#2766
I have a similar issue, at least going by the title. I am also running CentOS 7, and the ZoL version is 0.6.3. My system successfully imports the pool; both zfs-import-cache.service and zfs-mount.service run and report success/status 0. However, I still have to either restart zfs-mount.service or manually run zfs mount -a. In short, automatic mounting of datasets does not work, but the system detects no issue.
I have the same problem as AsavarTzeth. The pool is visible after a reboot via "zpool list", but it is not mounted. A quick "zfs mount -a" mounts everything fine, but it should not be necessary. Tried today with and without SELinux enabled on CentOS 7. Package versions:
An update: the problem in my case was that I am using a raidz assembled from LUKS-encrypted drives, and they were not available when zfs-import-cache.service, zfs-import-scan.service and zfs-mount.service ran. I added/changed lines in each of those units (zfs-import-cache.service, zfs-import-scan.service and zfs-mount.service) to make sure that cryptsetup.target had started correctly before them, and the mount worked. After these changes, it works as expected. Not sure whether it is appropriate to change the scripts shipped for CentOS 7 to change this behavior, but that's what I did to solve it anyway.
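The exact lines were not preserved in the comment above; a sketch of that kind of ordering change, written as a systemd drop-in so package updates don't overwrite it (the unit and target names come from the comment; the Requires= line is an extra assumption):

    # /etc/systemd/system/zfs-import-cache.service.d/luks.conf
    # (repeat for zfs-import-scan.service and zfs-mount.service)
    [Unit]
    After=cryptsetup.target
    Requires=cryptsetup.target

Run systemctl daemon-reload afterwards so the drop-ins take effect.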
@MagnusMWW You've got the right approach here, and we're already carrying a patch 4f6a147 to address this for the next tag. |
Ah, that is great to hear, behlendorf! Thank you for the quick reply!
I just upgraded to CentOS 7 and ran into the same issue: the pool was not automatically imported after a reboot. dmesg showed a hostid of 0x00000000, but the hostid command returned a8c07001. Gijsbert |
I've hit this problem on a system where I had to delay zfs-import-cache for a few seconds because the zfs module hadn't finished loading yet. It is similar to the problem described at https://bbs.archlinux.org/viewtopic.php?id=183851, and the same solution (a waitforzfsdevice script) worked for me on a CentOS 7.1 system. Other systems work fine though, so it looks like oddball timing in systemd...
This should now be addressed in master, and the fix has been pulled in for the next point release. As always, if someone who's able to regularly trigger this has time to verify the fix, that would be welcome.
ba34dd9 Base init scripts for SYSV systems
I still have the same problem as yesbox. I use vdev aliases and the mount at boot doesn't work (zfs 0.6.4.2-1, which seems to have 544f718 in it). When I re-import the pool based on /dev/disk/by-path, it mounts fine at boot. My systemctl status logs are essentially identical to yesbox's when it fails.
Last time I had a problem like this, changing HBA BIOS settings to wait 2s for each HDD helped.
Removing as a 0.6.5 blocker. Using a stock CentOS 7 VM I'm unable to reproduce this issue. However, since others have reported this issue, I'm leaving it open.
I see your pool uses /dev/disk/by-id names. That's what wound up working for me. I had problems when I was using vdev aliases, though; did you try it that way?
@telsin, no, I didn't test with vdev aliases. That's a good thought, and I must have missed that in your comment. When I get a chance to retest this again, I'll use vdev aliases.
Group: Experiencing the same issue here, but I have a workaround that I figured I'd share.
System:
Workaround: In file /usr/lib/systemd/system/zfs-import-cache.service
I understand this may be an incorrect approach, but it was the only thing I could do that would reliably mount the second pool (attached to a PCI SATA controller). I'm guessing it has to do with a delay between the module loading (as in
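The actual edit was not preserved above; a hypothetical sketch of a delay of this kind, expressed as a drop-in rather than editing the packaged unit (the 10-second value is arbitrary):

    # /etc/systemd/system/zfs-import-cache.service.d/delay.conf
    [Service]
    ExecStartPre=/usr/bin/sleep 10

Run systemctl daemon-reload afterwards; the extra ExecStartPre= line is appended to the unit's existing one.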
I had this problem today with CentOS 7 (kernel 3.10.0-327.28.3), ZoL version 0.6.5.8-1. Eventually I ended up typing "systemctl enable zfs-import-cache", and now the pool is present and mounted on reboot.
I updated to ZFS 0.6.5.8-1 from 0.6.5.7-1 and my pools stopped being imported and mounted on startup. After "systemctl enable zfs-import-cache", the pool imports, but is not mounted.
CentOS Linux release 7.2.1511 (Core)
Messages with prior ZFS:
Versus messages with the latest ZFS RPMs:
Nov 1 12:28:41 localhost kernel: ZFS: Loaded module v0.6.5.8-1, ZFS pool version 5000, ZFS filesystem version 5
The mount and subsequent messages never appear. I have to run "zfs mount -a" to get the pool mounted.
[root@thing1 ~]# zpool status -v
errors: No known data errors
lrwxrwxrwx. 1 root root 7 Nov 1 12:29 /dev/mapper/raidzb -> ../dm-9
Did you run
I missed it when I did some updates; reading the change log for 0.6.5.8 gave me that, and I've had no problems since.
That worked. Thanks!
Nov 1 14:46:30 localhost kernel: ZFS: Loaded module v0.6.5.8-1, ZFS pool version 5000, ZFS filesystem version 5
I can recreate this error with the latest CentOS / ZoL (0.7.1 as of posting).
No /etc/hostid is created. Sorry if this has been covered, but this post helped me solve the issue.
This file is intentionally not created by default. It is only required for multihost configurations; you can use the
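If the truncated suggestion above is about generating a hostid by hand, two common ways of doing it (zgenhostid ships with ZoL 0.7 and later; treat the exact invocations as assumptions rather than the commenter's words):

    # write 4 random bytes, as mentioned earlier in the thread
    dd if=/dev/urandom of=/etc/hostid bs=4 count=1
    # or, on 0.7+, persist the current hostid with the bundled helper
    zgenhostid $(hostid)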
Are you sure you needed to run this again? The
I am sure.
I am sure too. CentOS 7.4 / ZoL 0.7.3 / DKMS: exactly the same problems (imported and newly created pools are not mounted after reboot) and the same “solution” (
I faced the same issue (I think after upgrading to the latest kernel). The solution from @Loafdude does not work.
I also tried yum reinstall libzfs2 kmod-zfs zfs, which didn't help.
After the system has finished booting, manually running systemctl restart zfs-mount does its job without issues. Please help!
See the release notes for 7.4 & 7.5; you probably need a "systemctl enable zfs-import.target".
I just had this problem after a 0.7.2 to 0.7.5 update (I think those were the versions). My /usr/lib/systemd/system-preset/50-zfs.preset looks like this
Running zfs mount -a as root mounted my pool again, but should I change zfs-import-scan.service to be enabled? Thanks.
Check 'systemctl status zfs-import.target' and make sure it's enabled there, and not just in the preset.
We'll see how it goes on next reboot. Thanks. |
Enabling zfs-import.target did the trick for me, which is weird because the first thing I tried was systemctl preset zfs-import-cache zfs-import-scan zfs-import.target zfs-mount zfs-share zfs-zed zfs.target. Am I to understand that zfs-import.target is disabled by default?
As I understand it, the defaults only take effect on a new install and not on an upgrade, so it required manual intervention to enable. According to the release notes, this is supposedly fixed in 7.5, but if you went through 7.4, you got it without it being enabled and still needed to manually activate it. Not sure what happened to you @jasker5183, but if you went through 7.4 you may also have gotten it in a disabled state and needed to do it again.
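To make the fix the thread converged on concrete (commands repeated from the comments above, with verification steps added as a suggestion):

    systemctl enable zfs-import.target
    systemctl status zfs-import.target                # should report enabled
    systemctl list-dependencies zfs-import.target     # the import service(s) should appear here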
@telsin , thank you very much! Enabling zfs-import.target did the trick! |
Running CentOS 7.4, ZoL 0.7.6 (updated from something much older), enabling zfs-import.target (it was disabled) didn't work for me. I ended up doing
Hi,
When the power is turned off, the ZFS pool is not correctly exported/unmounted.
After a reboot: "no datasets available".
zfs-import-cache.service says:
Aug 06 17:10:38 skynet zpool[2245]: cannot import 'STORE': pool may be in use from other system
Aug 06 17:10:38 skynet zpool[2245]: use '-f' to import anyway
Aug 06 17:10:38 skynet systemd[1]: zfs-import-cache.service: main process exited, code=exited, status=1/FAILURE
Aug 06 17:10:38 skynet systemd[1]: Failed to start Import ZFS pools by cache file.
'zpool import STORE' says 'pool may be in use from other system'.
zpool import -f STORE works, but only until the next reboot; then it all repeats.
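For readers landing here with the same symptom, a rough recovery sketch assembled from the rest of the thread (pool name STORE from this report; unit names vary between ZoL versions, so treat the last lines as assumptions):

    # one-time forced import after the unclean shutdown
    zpool import -f STORE
    # give the host a stable hostid so later imports don't complain (see comments above)
    [ -s /etc/hostid ] || dd if=/dev/urandom of=/etc/hostid bs=4 count=1
    # make sure the import/mount units are actually enabled
    systemctl enable zfs-import-cache.service zfs-mount.service
    systemctl enable zfs-import.target 2>/dev/null || true   # target only exists on 0.7+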