CentOS 7. Not mount/import pool after reboot #2575

Closed
x09 opened this issue Aug 7, 2014 · 52 comments

@x09

x09 commented Aug 7, 2014

Hi,

When the power is turned off, the ZFS pool is not correctly exported/unmounted.
After a reboot I get "no datasets available".

zfs-import-cache.service says:

Aug 06 17:10:38 skynet zpool[2245]: cannot import 'STORE': pool may be in use from other system
Aug 06 17:10:38 skynet zpool[2245]: use '-f' to import anyway
Aug 06 17:10:38 skynet systemd[1]: zfs-import-cache.service: main process exited, code=exited, status=1/FAILURE
Aug 06 17:10:38 skynet systemd[1]: Failed to start Import ZFS pools by cache file.

'zpool import STORE' says 'pool may be in use from other system'.

zpool import -f STORE works, but only until the next reboot; then the problem repeats.

@behlendorf
Contributor

Can you check what hostid is being used on your system?

$ dmesg | grep "using hostid"
[    7.066071] SPL: using hostid 0x00000000

# With the pool imported run zdb and look for `hostid`.
$ zdb

behlendorf added this to the 0.8.0 milestone Aug 7, 2014
behlendorf added the Bug label Aug 7, 2014
@x09
Author

x09 commented Aug 8, 2014

dmesg | grep "using hostid"
[ 8.642394] SPL: using hostid 0x00000000

sudo zdb
STORE:
    version: 5000
    name: 'STORE'
    state: 0
    txg: 101502
    pool_guid: 11910095750819889089
    errata: 0
    hostname: 'skynet'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 11910095750819889089
        children[0]:
            type: 'mirror'
            id: 0
            guid: 17803502091298809412
            whole_disk: 0
            metaslab_array: 33
            metaslab_shift: 31
            ashift: 9
            asize: 400074211328
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 1967309005541862445
                path: '/dev/disk/by-vdev/z4-part1'
                whole_disk: 1
                DTL: 108
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 18368084852077688544
                path: '/dev/disk/by-vdev/z500-part1'
                whole_disk: 1
                DTL: 46
                create_txg: 4
    features_for_read:

@behlendorf
Contributor

Does everything work properly for a clean shutdown?

@x09
Author

x09 commented Aug 8, 2014

It does not work after any reboot.

@ghost

ghost commented Aug 9, 2014

I saw exactly the same problem: no /etc/hostid file exists after a fresh zfs installation.
Workaround, following #703:

dd if=/dev/urandom of=/etc/hostid bs=4 count=1
zpool import -f STORE

Everything was fine on subsequent reboots.
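To check that the workaround took effect, the verification @behlendorf suggested above can be repeated after a reboot; this is only a sketch using the commands already shown in this thread, and both values should now be non-zero:

# The userspace hostid and the hostid the SPL module picked up should no longer be zero
hostid
dmesg | grep "using hostid"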

@x09
Author

x09 commented Aug 12, 2014

dd if=/dev/urandom of=/etc/hostid bs=4 count=1
zpool import -f STORE

This did not help.
...
Aug 11 10:41:22 skynet systemd[1]: zfs-import-cache.service: main process exited, code=exited, status=1/FAILURE
Aug 11 10:41:22 skynet systemd[1]: Failed to start Import ZFS pools by cache file.
Aug 11 10:41:22 skynet systemd[1]: Unit zfs-import-cache.service entered failed state.
Aug 11 10:41:22 skynet systemd[1]: Starting Mount ZFS filesystems...
Aug 11 10:41:22 skynet systemd[1]: Started Mount ZFS filesystems.
Aug 11 10:41:22 skynet systemd[1]: Starting Sound Card.
Aug 11 10:41:22 skynet systemd[1]: Reached target Sound Card.
Aug 11 10:41:22 skynet kernel: EXT4-fs (sda2): re-mounted. Opts: (null)
Aug 11 10:41:22 skynet zpool[459]: cannot import 'STORE': no such pool or dataset
Aug 11 10:41:22 skynet zpool[459]: Destroy and re-create the pool from
Aug 11 10:41:22 skynet zpool[459]: a backup source.
....

@ghost

ghost commented Aug 13, 2014

But something has changed. Before you had
cannot import 'STORE': pool may be in use from other system
Now the error is
cannot import 'STORE': no such pool or dataset
So it seems the hostid issue is fixed, but somehow you lost the 'STORE' pool along the way.

Does zdb still show any pool?
If not, have you tried to create a new pool?

@x09
Author

x09 commented Aug 13, 2014

I ran zpool import STORE (without -f) manually (not via the startup service), and the pool is mounted.

STORE:
    version: 5000
    name: 'STORE'
    state: 0
    txg: 182457
    pool_guid: 11910095750819889089
    errata: 0
    hostid: 1051535331
    hostname: 'skynet'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 11910095750819889089
        children[0]:
            type: 'mirror'
            id: 0
            guid: 17803502091298809412
            whole_disk: 0
            metaslab_array: 33
            metaslab_shift: 31
            ashift: 9
            asize: 400074211328
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 1967309005541862445
                path: '/dev/disk/by-vdev/z4-part1'
                whole_disk: 1
                DTL: 108
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 18368084852077688544
                path: '/dev/disk/by-vdev/z500-part1'
                whole_disk: 1
                DTL: 46
                create_txg: 4

Right now I have:

  1. CentOS 7 x64
  2. After the OS starts, I connect via ssh and run zpool import ..... (without -f); see the sketch below
  3. systemctl restart nfs (nfs is enabled via systemctl, but nfs doesn't work with zfs after a reboot)
  4. zfs set sharenfs=on STORE/.../...

Everything works fine. Until the next reboot ))
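For reference, the steps above collected into a one-off recovery script. This is only a sketch: the dataset path is elided in the comment, so the one used here is illustrative.

# Manual recovery after boot (sketch); replace STORE/some/dataset with the real dataset
zpool import STORE
systemctl restart nfs
zfs set sharenfs=on STORE/some/dataset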

@dswartz
Contributor

dswartz commented Sep 21, 2014

Same problem here. I set /etc/hostid, and now that is okay, but still no pool. Looking at /var/log/messages, I see that the mpt2sas driver enumerates the SATA disks for that pool, but only after this:

Sep 21 19:16:03 centos7-ha3 kernel: SPL: using hostid 0x1bef8b26
Sep 21 19:16:03 centos7-ha3 zpool: cannot import 'tank-copy': no such pool or dataset
Sep 21 19:16:03 centos7-ha3 zpool: Destroy and re-create the pool from
Sep 21 19:16:03 centos7-ha3 zpool: a backup source.
Sep 21 19:16:03 centos7-ha3 systemd: zfs-import-cache.service: main process exited, code=exited, status=1/FAILURE
Sep 21 19:16:03 centos7-ha3 systemd: Failed to start Import ZFS pools by cache file.
Sep 21 19:16:03 centos7-ha3 systemd: Unit zfs-import-cache.service entered failed state.

Is something being run in the wrong order? Is the LSI HBA presenting the drives too slowly? Any ideas on how to address this? Thanks!

@dswartz
Contributor

dswartz commented Sep 23, 2014

Pinging this to try to get a helpful workaround. In CentOS 7, due to parallelism in the systemd implementation, I supposedly can't be guaranteed that importing the pool in /etc/rc.local will work. At the moment, I have to import the pool manually, which is obviously not good.
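If the import is to stay in /etc/rc.local, one option would be to order rc-local.service after the ZFS units with a systemd drop-in. This is only a sketch (the drop-in path is the standard systemd location and zfs.target is the target shipped with the ZoL units), not a fix confirmed in this thread:

# /etc/systemd/system/rc-local.service.d/after-zfs.conf (hypothetical drop-in)
[Unit]
After=zfs.target

Follow it with systemctl daemon-reload; /etc/rc.d/rc.local must also be executable for rc-local.service to run at all.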

@ghost

ghost commented Sep 23, 2014

As a simple workaround you could create e.g. the following crontab entry for root:

@reboot sleep 30; zpool import -a

@dswartz
Contributor

dswartz commented Sep 23, 2014

Thanks. It doesn't work quite right (at least on CentOS 7). I need to use /sbin/zpool because the crontab search path only references /bin and /usr/bin. Also, even though I set up vdev_id.conf to give me the WWNs for the disks, when the cron job ran I got:

pool: tank-copy
state: ONLINE
scan: scrub repaired 0 in 0h53m with 0 errors on Sun Sep 21 00:53:05 2014
config:

    NAME                                               STATE     READ WRITE CKSUM
    tank-copy                                          ONLINE       0     0     0
      mirror-0                                         ONLINE       0     0     0
        ata-ST32000644NS_9WM4V7HS                      ONLINE       0     0     0
        pci-0000:1b:00.0-sas-0x4433221105000000-lun-0  ONLINE       0     0     0

Also, for some reason, the NFS share is not being exported. I seem to need to do this:

zfs set sharenfs=on tank-copy/vsphere

even though it was already on. Not sure how much of this is CentOS 7-specific...

@dswartz
Contributor

dswartz commented Sep 23, 2014

Here's what I seem to need to do (it's worked 5 times in a row lol):

sleep 30
zpool import tank-copy
sleep 30
zfs set sharenfs=on tank-copy/vsphere

@ghost

ghost commented Sep 23, 2014

Second sleep should not be necessary:
@reboot sleep 30; /sbin/zpool import -a; /sbin/zfs share -a

How to change dev names is explained here:
http://zfsonlinux.org/faq.html#HowDoIChangeNamesOnAnExistingPool
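For reference, the procedure behind that FAQ entry boils down to exporting the pool and re-importing it against the desired device directory. A sketch using the pool name from this thread and the usual udev by-id directory:

# Re-import so the vdev paths are recorded under persistent names (sketch)
zpool export tank-copy
zpool import -d /dev/disk/by-id tank-copy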

@dswartz
Contributor

dswartz commented Sep 23, 2014

I think I was not clear. I created the vdev_id.conf file to define the WWN names as I wanted. That works just fine, except for this one time when I rebooted and ran 'zpool import tank-copy', which should have used the WWN names but didn't, for reasons I can't explain.

@dswartz
Contributor

dswartz commented Sep 23, 2014

"Second sleep should not be necessary:"

Except it is :( Note also that 'zfs share -a' is not adequate. I need to set the sharenfs property to trigger something. I'm not just speculating here; I've tried this a dozen different ways to get it to work... Like I said, it may be some CentOS 7/zed/systemd thing...

@rhagu

rhagu commented Oct 6, 2014

It seems I am not the only one. Is there any information I can provide to help solve this?

@l1k
Contributor

l1k commented Oct 6, 2014

Possibly fixed by #2766 if Dracut is used.

behlendorf pushed a commit to behlendorf/zfs that referenced this issue Oct 7, 2014
Make use of Dracut's ability to restore the initramfs on shutdown and
pivot to it, allowing for a clean unmount and export of the ZFS root.
No need to force-import on every reboot anymore.

Signed-off-by: Lukas Wunner <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue openzfs#2195
Issue openzfs#2476
Issue openzfs#2498
Issue openzfs#2556
Issue openzfs#2563
Issue openzfs#2575
Issue openzfs#2600
Issue openzfs#2755
Issue openzfs#2766
@AsavarTzeth

I have a similar issue, at least going by the title. I am also running CentOS 7, and the ZoL version is 0.6.3.

My system successfully imports the pool. Both zfs-import-cache.service and zfs-mount.service run and report success (status 0). However, I still have to either restart zfs-mount.service or manually run zfs mount -a.

In short, automatic mounting of datasets does not work, but the system detects no issue.

@MagnusMWW

I have the same problem as AsavarTzeth. The pool is visible after a reboot via "zpool list", but it is not mounted. A quick "zfs mount -a" mounts everything fine, but it should not be necessary. I tried today with and without SELinux enabled on CentOS 7.

Package versions:
"zfs" - 0.6.3 release 1.1.el7.centos

@MagnusMWW

An update: the problem in my case was that I am using a raidz assembled from LUKS-encrypted drives, and they were not available when zfs-import-cache.service, zfs-import-scan.service and zfs-mount.service ran. I added/changed the lines below to make sure that cryptsetup.target had started correctly before these services, and the mount worked:

In zfs-import-cache.service:
Changed "Requires=systemd-udev-settle.service" to "Requires=systemd-udev-settle.service cryptsetup.target"

In zfs-import-scan.service:
Changed "Requires=systemd-udev-settle.service" to "Requires=systemd-udev-settle.service cryptsetup.target"

In zfs-mount.service
Added "After=cryptsetup.target" on line 10

After these changes, it works as expected. I am not sure whether it is appropriate to change the unit files on CentOS 7 to change this behavior, but that's what I did to solve it.
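For anyone who prefers not to edit the packaged unit files (a package update will overwrite them), the same ordering can be expressed as a systemd drop-in. A sketch, assuming the stock ZoL unit names used above:

# /etc/systemd/system/zfs-import-cache.service.d/cryptsetup.conf (hypothetical drop-in)
[Unit]
Requires=cryptsetup.target
After=cryptsetup.target

The same drop-in can be repeated for zfs-import-scan.service and zfs-mount.service, followed by systemctl daemon-reload.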

@behlendorf
Contributor

@MagnusMWW You've got the right approach here, and we're already carrying a patch 4f6a147 to address this for the next tag.

@MagnusMWW

Ah, that is great to hear behlendorf! Thank you for the quick reply!

ryao pushed a commit to ryao/zfs that referenced this issue Nov 29, 2014
Make use of Dracut's ability to restore the initramfs on shutdown and
pivot to it, allowing for a clean unmount and export of the ZFS root.
No need to force-import on every reboot anymore.

Signed-off-by: Lukas Wunner <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue openzfs#2195
Issue openzfs#2476
Issue openzfs#2498
Issue openzfs#2556
Issue openzfs#2563
Issue openzfs#2575
Issue openzfs#2600
Issue openzfs#2755
Issue openzfs#2766
@gwiesenekker

I just upgraded to CentOS 7 and ran into the same issue: the pool was not automatically imported after a reboot. dmesg showed a hostid of 0x00000000, but the hostid command returned a8c07001.
While working on another issue I noticed that I could not ping the short hostname. After fixing that by editing /etc/hosts, this issue is now gone. I did not have to create an /etc/hostid file.

Gijsbert

@telsin

telsin commented May 28, 2015

I've hit this problem on a system where I had to delay zfs-import-cache for a few seconds because the zfs module hadn't finished loading yet. The same solution described at https://bbs.archlinux.org/viewtopic.php?id=183851 (a waitforzfsdevice script) worked for me on a CentOS 7.1 system. Other systems work fine though, so it's oddball timing in systemd...
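For reference, a wait script in that spirit might look like the following. This is a sketch of the idea from the linked thread (not the exact script posted there): it simply polls for /dev/zfs, which appears once the zfs module has finished loading, before the import is attempted.

#!/bin/sh
# Sketch: wait up to 30 seconds for the zfs module to create /dev/zfs
timeout=30
while [ ! -e /dev/zfs ] && [ "$timeout" -gt 0 ]; do
    sleep 1
    timeout=$((timeout - 1))
done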

@behlendorf
Contributor

This should now be addressed in master, and the fix has been pulled in for the next point release. As always, if someone who's able to regularly trigger this has time to verify the fix, that would be welcome.

ba34dd9 Base init scripts for SYSV systems
544f718 Use ExecStartPre to load zfs modules
65037d9 Add libzfs_error_init() function

@telsin

telsin commented Jun 30, 2015

I still have the same problem as yesbox: I use vdev aliases and the mount at boot doesn't work (zfs 0.6.4.2-1, which seems to have 544f718 in it). When I re-import the pool based on /dev/disk/by-path, it mounts fine at boot. My systemctl status logs are essentially identical to yesbox's when it fails.

@Bronek

Bronek commented Sep 3, 2015

Last time I had a problem like this, changing HBA BIOS settings to wait 2s for each HDD helped.

@behlendorf
Contributor

Removing as a 0.6.5 blocker. Using a stock CentOS 7 VM I'm unable to reproduce this issue. However, since others have reported this issue, I'm leaving it open.

$ uname -a
Linux burn 3.10.0-229.11.1.el7.x86_64 #1 SMP Thu Aug 6 01:06:18 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
$ reboot

...

$ systemctl status zfs-import-cache
zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; static)
   Active: active (exited) since Fri 2015-09-04 18:58:52 EDT; 25s ago
  Process: 666 ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN (code=exited, status=0/SUCCESS)
  Process: 644 ExecStartPre=/sbin/modprobe zfs (code=exited, status=0/SUCCESS)
 Main PID: 666 (code=exited, status=0/SUCCESS)

$ sudo zpool status -v
  pool: tank
 state: ONLINE
  scan: none requested
config:

    NAME                             STATE     READ WRITE CKSUM
    tank                             ONLINE       0     0     0
      mirror-0                       ONLINE       0     0     0
        virtio-758bc1f5-6d33-4bcb-b  ONLINE       0     0     0
        virtio-3185b8f2-83ae-46f6-b  ONLINE       0     0     0
    logs
      virtio-bf581023-3bc7-4750-b    ONLINE       0     0     0
    cache
      virtio-7decbe42-ae05-4da9-9    ONLINE       0     0     0

errors: No known data errors

behlendorf removed this from the 0.6.5 milestone Sep 4, 2015
@telsin

telsin commented Sep 6, 2015

I see your pool uses /dev/disk/by-id names. That's what wound up working for me. I had problems when I was using vdev aliases, though. Did you try it that way?

@behlendorf
Contributor

@telsin no, I didn't test with vdev aliases. That's a good thought, and I must have missed that in your comment. When I get a chance to retest this I'll use vdev aliases.

@danwestman

Group:

Experiencing the same issue here, but I have a workaround that I figured I'd share.

System:
CentOS 7.1.1503 / 3.10.0-229.20.1.el7.x86_64
Pool 1 - Connected to motherboard SATA bus (no problems here)
Pool 2 - Connected to PCI SATA adapter (reliably will not mount at boot)

Workaround:

In file: /usr/lib/systemd/system/zfs-import-cache.service
Changed ExecStart to point to a custom script //foo.sh, where foo.sh looks like this:

#!/bin/sh

sleep 10
/sbin/zpool import -c /etc/zfs/zpool.cache -aN

I understand this may be an incorrect approach, but it was the only thing I could do that would reliably mount the second pool (attached to a PCI SATA controller). My guess is that it has to do with a delay between the module loading (as in ExecStartPre=/sbin/modprobe zfs) and the actual import.
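A lighter-weight variant of the same delay, keeping the stock ExecStart line, would be an extra ExecStartPre in a drop-in; again only a sketch of the idea, not a tested fix:

# /etc/systemd/system/zfs-import-cache.service.d/delay.conf (hypothetical drop-in)
[Service]
ExecStartPre=/bin/sleep 10

ExecStartPre= lines are additive, so the sleep runs after the packaged modprobe and before the import.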

@Ralithune

I had this problem today with CentOS 7 3.10.0-327.28.3, ZoL version 0.6.5.8-1.

Eventually I ended up typing "systemctl enable zfs-import-cache", and now the pool is present and mounted on reboot.

@shanghaiscott

I updated to ZFS 0.6.5.8-1 from 0.6.5.7-1 and my pools stopped being imported and mounted on startup. After "systemctl enable zfs-import-cache", the pool imports, but is not mounted.

CentOS Linux release 7.2.1511 (Core)
kernel-3.18.34-20.el7.x86_64
zfs-0.6.5.8-1.el7.centos.x86_64
zfs-release-1-3.el7.centos.noarch
libzfs2-0.6.5.8-1.el7.centos.x86_64
zfs-dkms-0.6.5.8-1.el7.centos.noarch
spl-dkms-0.6.5.8-1.el7.centos.noarch
spl-0.6.5.8-1.el7.centos.x86_64

Message with prior ZFS:
Oct 31 19:30:30 localhost kernel: ZFS: Loaded module v0.6.5.7-1, ZFS pool version 5000, ZFS filesystem version 5
Oct 31 19:30:30 localhost systemd-modules-load: Inserted module 'zfs'
Oct 31 19:31:58 localhost systemd: Started Import ZFS pools by device scanning.
Oct 31 19:31:58 localhost systemd: Starting Import ZFS pools by cache file...
Oct 31 19:32:00 localhost systemd: Started Import ZFS pools by cache file.
Oct 31 19:32:00 localhost systemd: Starting Mount ZFS filesystems...
Oct 31 19:32:01 localhost systemd: Started Mount ZFS filesystems.
Oct 31 19:32:01 localhost systemd: Starting ZFS file system shares...
Oct 31 19:32:01 localhost systemd: Started ZFS Event Daemon (zed).
Oct 31 19:32:01 localhost systemd: Starting ZFS Event Daemon (zed)...
Oct 31 19:32:01 localhost zed: ZFS Event Daemon 0.6.5.7-1 (PID 3310)
Oct 31 19:32:01 localhost systemd: Started ZFS file system shares.
Oct 31 19:32:01 localhost systemd: Reached target ZFS startup target.
Oct 31 19:32:01 localhost systemd: Starting ZFS startup target.

Vs messages with latest ZFS RPMS:

Nov 1 12:28:41 localhost kernel: ZFS: Loaded module v0.6.5.8-1, ZFS pool version 5000, ZFS filesystem version 5
Nov 1 12:28:41 localhost systemd-modules-load: Inserted module 'zfs'
Nov 1 12:29:12 localhost systemd: Starting Import ZFS pools by cache file...
Nov 1 12:29:16 localhost systemd: Started Import ZFS pools by cache file.

The mount and subsequent messages never appear.

I have to run "zfs mount -a" to get the pool mounted.

[root@thing1 ~]# zpool status -v
pool: export
state: ONLINE
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
export      ONLINE       0     0     0
  raidz2-0  ONLINE       0     0     0
    raidzb  ONLINE       0     0     0
    raidzc  ONLINE       0     0     0
    raidzd  ONLINE       0     0     0
    raidze  ONLINE       0     0     0
    raidzf  ONLINE       0     0     0
    raidzg  ONLINE       0     0     0
logs
  zil1      ONLINE       0     0     0
cache
  l2arc1    ONLINE       0     0     0

errors: No known data errors
I use device mapper names for the disks (which are whole disks, LUKS encrypted):

lrwxrwxrwx. 1 root root 7 Nov 1 12:29 /dev/mapper/raidzb -> ../dm-9
lrwxrwxrwx. 1 root root 7 Nov 1 12:29 /dev/mapper/raidzc -> ../dm-7
lrwxrwxrwx. 1 root root 8 Nov 1 12:29 /dev/mapper/raidzd -> ../dm-14
lrwxrwxrwx. 1 root root 8 Nov 1 12:29 /dev/mapper/raidze -> ../dm-10
lrwxrwxrwx. 1 root root 8 Nov 1 12:29 /dev/mapper/raidzf -> ../dm-12
lrwxrwxrwx. 1 root root 8 Nov 1 12:29 /dev/mapper/raidzg -> ../dm-13

@telsin

telsin commented Nov 1, 2016

Did you run

systemctl preset zfs-import-cache zfs-import-scan zfs-mount zfs-share zfs-zed zfs.target?

I missed it when I did some updates; reading the change log for 0.6.5.8 gave me that command, and I've had no problems since.

@shanghaiscott

That worked. Thanks!

Nov 1 14:46:30 localhost kernel: ZFS: Loaded module v0.6.5.8-1, ZFS pool version 5000, ZFS filesystem version 5
Nov 1 14:46:30 localhost systemd-modules-load: Inserted module 'zfs'
Nov 1 14:49:09 localhost systemd: Starting Import ZFS pools by cache file...
Nov 1 14:49:12 localhost systemd: Started Import ZFS pools by cache file.
Nov 1 14:49:12 localhost systemd: Starting Mount ZFS filesystems...
Nov 1 14:49:12 localhost systemd: Started Mount ZFS filesystems.
Nov 1 14:49:12 localhost systemd: Started ZFS Event Daemon (zed).
Nov 1 14:49:12 localhost systemd: Starting ZFS Event Daemon (zed)...
Nov 1 14:49:12 localhost zed: ZFS Event Daemon 0.6.5.8-1 (PID 4191)
Nov 1 14:49:12 localhost systemd: Starting ZFS file system shares...
Nov 1 14:49:12 localhost systemd: Started ZFS file system shares.
Nov 1 14:49:12 localhost systemd: Reached target ZFS startup target.
Nov 1 14:49:12 localhost systemd: Starting ZFS startup target.

@Loafdude

Loafdude commented Sep 20, 2017

I can recreate this error with the latest CentOS / ZoL (0.7.1 as of posting):

  1. Fresh CentOS 7.4 minimal install
  2. Follow the ZoL instructions and add the repository
  3. Disable the DKMS and enable the kABI repositories
  4. Run 'yum install zfs'

No /etc/hostid is created, and
systemctl preset zfs-import-cache zfs-import-scan zfs-mount zfs-share zfs-zed zfs.target
must be run.

Sorry if this has been covered, but this post helped me solve the issue.

@behlendorf
Contributor

No /etc/hostid is created

This file is intentionally not created by default. It is only required for multihost configurations; you can use the zgenhostid(8) utility provided with 0.7.x to generate it.
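For example (a sketch; zgenhostid writes /etc/hostid only if the file does not already exist):

# Generate /etc/hostid from the current hostid(1) value (zgenhostid ships with 0.7.x)
zgenhostid $(hostid)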

systemctl preset ...

Are you sure you needed to run this again? The /usr/lib/systemd/system-preset/50-zfs.preset is installed as part of the zfs package, and the systemd_post install hook should have done this. Running it again isn't harmful, but it shouldn't have been needed.

@Loafdude

I am sure.
My troubleshooting went like this:

  1. Pools created on another machine would import but would not remount across reboots.
  2. I created the /etc/hostid file.
  3. The pools would then not be found and could not be imported.
  4. I ran systemctl enable zfs-import-cache which allowed pools to be imported again. They still would not mount at boot.
  5. I then ran systemctl preset zfs-import-cache zfs-import-scan zfs-mount zfs-share zfs-zed zfs.target and pools are now mounting at boot.

@SEBiGEM

SEBiGEM commented Nov 18, 2017

I am sure too.

CentOS 7.4 / ZoL 0.7.3 / DKMS: exactly the same problems (imported and newly created pools are not mounted after reboot) and the same “solution” (systemctl preset …) as stated by @Loafdude.

@jazzl0ver

I faced the same issue (I think after upgrading to the latest kernel). The solution from @Loafdude does not work.

# uname -r
3.10.0-693.11.6.el7.x86_64

I also tried yum reinstall libzfs2 kmod-zfs zfs, which didn't help.
Adding strace to the ExecStart line in /usr/lib/systemd/system/zfs-mount.service displayed this after a reboot:

...
622   access("/sys/module/zfs", F_OK)   = 0
622   access("/sys/module/zfs", F_OK)   = 0
622   open("/dev/zfs", O_RDWR)          = 3
622   close(3)                          = 0
622   open("/dev/zfs", O_RDWR)          = 3
622   open("/proc/self/mounts", O_RDONLY) = 4
622   open("/etc/dfs/sharetab", O_RDONLY) = -1 ENOENT (No such file or directory)
622   open("/dev/zfs", O_RDWR)          = 5
622   ioctl(3, _IOC(0, 0x5a, 0x04, 0x00), 0x7ffc8a71da80) = 0
622   ioctl(3, _IOC(0, 0x5a, 0x3f, 0x00), 0x7ffc8a71db90) = -1 EPERM (Operation not permitted)
622   close(3)                          = 0
622   close(4)                          = 0
622   close(5)                          = 0
622   exit_group(0)                     = ?
622   +++ exited with 0 +++

After the system has finished booting, manually running systemctl restart zfs-mount does its job without issues.

Please help!

@telsin

telsin commented Jan 11, 2018

See the release notes for 7.4 & 7.5; you probably need a "systemctl enable zfs-import.target".

@lamixer

lamixer commented Jan 12, 2018

I just had this problem after a 0.7.2 to 0.7.5 update (I think those were the versions).

My /usr/lib/systemd/system-preset/50-zfs.preset looks like this:

# ZFS is enabled by default
enable zfs-import-cache.service
disable zfs-import-scan.service
enable zfs-import.target
enable zfs-mount.service
enable zfs-share.service
enable zfs-zed.service
enable zfs.target

Running zfs mount -a as root mounted my pool again, but should I change zfs-import-scan.service to be enabled? Thanks.

@telsin

telsin commented Jan 12, 2018

Check 'systemctl status zfs-import.target' and make sure it's enabled there, and not just in the preset.

@lamixer

lamixer commented Jan 12, 2018

systemctl status zfs-import.target
● zfs-import.target - ZFS pool import target
   Loaded: loaded (/usr/lib/systemd/system/zfs-import.target; disabled; vendor preset: enabled)
   Active: inactive (dead)

systemctl enable zfs-import.target
Created symlink from /etc/systemd/system/zfs-mount.service.wants/zfs-import.target to /usr/lib/systemd/system/zfs-import.target.
Created symlink from /etc/systemd/system/zfs.target.wants/zfs-import.target to /usr/lib/systemd/system/zfs-import.target.

systemctl status zfs-import.target
● zfs-import.target - ZFS pool import target
   Loaded: loaded (/usr/lib/systemd/system/zfs-import.target; enabled; vendor preset: enabled)
   Active: inactive (dead)

We'll see how it goes on next reboot. Thanks.

@jasker5183

Enabling zfs-import.target did the trick for me, which is weird because the first thing I tried was systemctl preset zfs-import-cache zfs-import-scan zfs-import.target zfs-mount zfs-share zfs-zed zfs.target. Am I to understand that zfs-import.target defaults to disabled?

@telsin

telsin commented Jan 13, 2018

As I understand it, the defaults only take effect on a new install and not on an upgrade, so it required manual intervention to enable. According to the release notes, this is supposedly fixed in 7.5, but if you went through 7.4, you got it without it being enabled and still needed to manually activate it. Not sure what happened to you @jasker5183, but if you went through 7.4 you may also have gotten it in a disabled state and needed to do it again.

@jazzl0ver

@telsin, thank you very much! Enabling zfs-import.target did the trick!

@Merlin83b

Running CentOS 7.4 and ZoL 0.7.6 (updated from something much older), enabling zfs-import.target (it was disabled) didn't work for me.

I ended up doing
cat /usr/lib/systemd/system-preset/50-zfs.preset|while read -r i; do systemctl $i; done
which is probably not the systemd way, but worked for me.
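A slightly more careful form of that one-liner (a sketch) skips the comment line in the preset file so systemctl only sees real enable/disable entries:

# Apply each enable/disable line from the ZFS preset file, ignoring comments and blank lines
grep -v '^#' /usr/lib/systemd/system-preset/50-zfs.preset | while read -r action unit; do
    [ -n "$action" ] && systemctl "$action" "$unit"
done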
