Deadlock due to filesystem re-entry during zfs_evict_inode() in 0.6.5 #3808
dmesg output: [153452.271834] INFO: task kswapd0:46 blocked for more than 120 seconds.
I have a very similar problem; in my case it appears to be related to reclaiming ARC metadata: arc_meta_max starts to drop, taking arc_meta_used with it, before arc_meta_max shoots back up to a value higher than it was before the drop. At that point, I've seen this issue (or at least one with the same symptoms) occur once arc_meta_used exceeds arc_meta_limit. I've actually just switched off the ZoL machines today in favour of BTRFS until this issue is resolved, so I sadly have no stack trace. I'm also running RHEL7, and even tried the 4.x kernels from elrepo.org. There are other bug reports that sound very similar; I've tried the workarounds from those without success, but perhaps these will help:
@rgmiller Which kernel version are you running?
@rgmiller Does the target system have lots of files with xattrs? If so, is xattr=sa set on the dataset? That said, this is clearly a deadlock situation which could occur in any case. I'm not sure why, other than the above, it would be more likely in 0.6.5 than in earlier releases, or, for that matter, why it might be more likely under kernels newer than 3.17.7. These stack traces are very useful. In summary, you've shown a deadlock which can occur during kswapd-invoked inode cache pruning, due to the IO required during management of the unlinked set (f.k.a. the "delete queue"). Actually, in this particular case, it's due to waiting for dependent IO operations (the parent zio).
If there's still no improvement, you could test https://github.com/Feh/nocache
@dweeezil Yes, I'm pretty sure BackupPC makes heavy use of xattrs, and I have set xattr=sa on the zpool.
@rgmiller I suspect 3af56fd, which was committed to master over the weekend and should appear in a point release shortly, will fix your problem by causing SA xattrs to actually be used. As I pointed out, however, your stack does show a very interesting case of potential deadlock which should be investigated further. It demonstrates a manner in which IO-causing filesystem code can be entered via a reclaim-like operation from the swapper. I'd suggest leaving this issue open and re-titling it to something like "Deadlock due to filesystem re-entry during zfs_evict_inode()", or something similar, even if reinstating SA xattrs fixes your problem.
Are you sure? I'm quite familiar with BackupPC, and I'm pretty sure it doesn't use xattrs itself. Tim Connors
@dweeezil I see that the 0.6.5.1 tarball is available for download (including the change for the SA xattrs). I'm inclined to install it fairly soon so that I can get my backups running again. Before I do, are there any tests you'd like me to run while I've still got the system in a state that can (probably) reproduce this problem?
@rgmiller If the problem goes away when you get the SA xattrs enabled again, then there's not much else to do. As I said, there's a deeper problem here which we'll have to address in some way. ZoL tries to prevent re-entry into ZFS (which would cause IO) from within ZFS operations themselves, but there's still a possibility of re-entry from the kernel itself, and I believe that's what your case demonstrates. @spacelama There's always a chance the filesystems simply have a lot of xattrs due to posixacl and/or selinux. I've got no experience with BackupPC and no knowledge of whether it uses xattrs for its own purposes.
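As a rough illustration of the re-entry guard being described, here is a minimal C sketch of the PF_FSTRANS pattern that the later commit messages in this thread refer to. The example_fstrans_mark()/example_fstrans_unmark() names are invented for this sketch (SPL/ZoL has its own wrappers), and PF_FSTRANS availability varies by kernel version; the point is only that a thread marks itself before doing work whose allocations must not recurse into filesystem reclaim, and restores the flag afterward.

/*
 * Minimal sketch, not the actual ZoL implementation: a thread sets
 * PF_FSTRANS before doing work whose memory allocations must not
 * re-enter filesystem reclaim (and thus zfs_evict_inode()), then
 * restores the previous state when done.  Helper names are invented
 * for this example.
 */
#include <linux/sched.h>

static inline unsigned long example_fstrans_mark(void)
{
	unsigned long saved = current->flags & PF_FSTRANS;

	/* Allocation paths that honor this flag avoid filesystem reclaim. */
	current->flags |= PF_FSTRANS;
	return saved;
}

static inline void example_fstrans_unmark(unsigned long saved)
{
	/* Restore the caller's previous PF_FSTRANS state, don't just clear it. */
	current->flags = (current->flags & ~PF_FSTRANS) | saved;
}

Note that this only protects allocations made by threads that marked themselves; as the comment above points out, reclaim initiated by the kernel itself (e.g. kswapd evicting inodes) can still enter ZFS, which is the case this issue demonstrates.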
I updated to 0.6.5.1 last night and let my usual BackupPC jobs run. Unfortunately, they still hung, though they at least didn't lock up the entire server. I've opened up a new issue - #3822 - because the stack traces looked different. It's possible the issues are related, though.
@rgmiller this problem may be made more likely in 0.6.5 due to the dynamic taskqs. Because we're creating and destroying threads more often, it's more likely to trigger this deadlock. My suggestion would be to disable this support for now by setting the module option
As described in the comment above arc_reclaim_thread() it's critical that the reclaim thread be careful about blocking. Just like it must never wait on a hash lock, it must never wait on a task which can in turn wait on the CV in arc_get_data_buf(). This will deadlock, see issue openzfs#3822 for full backtraces showing the problem. To resolve this issue arc_kmem_reap_now() has been updated to use the asynchronous arc prune function. This means that arc_prune_async() may now be called while there are still outstanding arc_prune_tasks. However, this isn't a problem because arc_prune_async() already keeps a reference count preventing multiple outstanding tasks per registered consumer. Functionally, this behavior is the same as the counterpart illumos function dnlc_reduce_cache(). Signed-off-by: Brian Behlendorf <[email protected]> Issue openzfs#3808 Issue openzfs#3822
@behlendorf Disabling dynamic taskqs seemed to help a little, but not much. I made the change Wednesday evening and was able to run a few backup jobs. However, the machine still locked up overnight. (The normal backup jobs are run by cron starting at 2:00 AM.) I couldn't even log in via the console and ended up having to hit the reset button on the machine.
As described in the comment above arc_reclaim_thread() it's critical that the reclaim thread be careful about blocking. Just like it must never wait on a hash lock, it must never wait on a task which can in turn wait on the CV in arc_get_data_buf(). This will deadlock, see issue #3822 for full backtraces showing the problem. To resolve this issue arc_kmem_reap_now() has been updated to use the asynchronous arc prune function. This means that arc_prune_async() may now be called while there are still outstanding arc_prune_tasks. However, this isn't a problem because arc_prune_async() already keeps a reference count preventing multiple outstanding tasks per registered consumer. Functionally, this behavior is the same as the counterpart illumos function dnlc_reduce_cache(). Signed-off-by: Brian Behlendorf <[email protected]> Signed-off-by: Tim Chase <[email protected]> Issue #3808 Issue #3834 Issue #3822
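To make the reference-counting scheme in that commit message concrete, here is a small, self-contained C sketch (userspace, with invented my_prune_* names and a synchronous stand-in for the taskq) of the "at most one outstanding async prune task per registered consumer" idea; it illustrates the pattern only, and is not the actual arc_prune_async() code.

/*
 * Userspace sketch of the "at most one outstanding prune task per
 * consumer" pattern; all names here are invented for illustration.
 */
#include <stdatomic.h>
#include <stdint.h>

typedef struct my_prune_ref {
	atomic_int	pr_refcount;	/* outstanding async tasks for this consumer */
	void		(*pr_func)(int64_t nr_to_scan);
} my_prune_ref_t;

/* Runs later on a worker thread, never on the reclaim thread itself. */
static void my_prune_task(my_prune_ref_t *pr, int64_t nr_to_scan)
{
	pr->pr_func(nr_to_scan);		/* e.g. prune dentry/inode caches */
	atomic_fetch_sub(&pr->pr_refcount, 1);	/* drop this task's reference */
}

/* Stand-in for a real task queue; a real taskq would run this asynchronously. */
static void dispatch_to_taskq(void (*fn)(my_prune_ref_t *, int64_t),
    my_prune_ref_t *pr, int64_t nr_to_scan)
{
	fn(pr, nr_to_scan);
}

/* Called from the reclaim path: fire-and-forget, must never block. */
static void my_prune_async(my_prune_ref_t *pr, int64_t nr_to_scan)
{
	/* If a task for this consumer is still outstanding, skip dispatching. */
	if (atomic_load(&pr->pr_refcount) > 0)
		return;

	atomic_fetch_add(&pr->pr_refcount, 1);
	dispatch_to_taskq(my_prune_task, pr, nr_to_scan);
}

Because the reclaim thread only dispatches work and never waits for it, it can no longer end up blocked behind a prune task that is itself waiting on the CV in arc_get_data_buf().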
Resolved by ef5b2e1, which will be cherry-picked into the 0.6.5.2 release.
ZFS/SPL 0.6.5.2 Bug Fixes
* Init script fixes openzfs/zfs#3816
* Fix uioskip crash when skip to end openzfs/zfs#3806 openzfs/zfs#3850
* Userspace can trigger an assertion openzfs/zfs#3792
* Fix quota userused underflow bug openzfs/zfs#3789
* Fix performance regression from unwanted synchronous I/O openzfs/zfs#3780
* Fix deadlock during ARC reclaim openzfs/zfs#3808 openzfs/zfs#3834
* Fix deadlock with zfs receive and clamscan openzfs/zfs#3719
* Allow NFS activity to defer snapshot unmounts openzfs/zfs#3794
* Linux 4.3 compatibility openzfs/zfs#3799
* Zed reload fixes openzfs/zfs#3773
* Fix PAX Patch/Grsec SLAB_USERCOPY panic openzfs/zfs#3796
* Always remove during dkms uninstall/update openzfs/spl#476

ZFS/SPL 0.6.5.1 Bug Fixes
* Fix zvol corruption with TRIM/discard openzfs/zfs#3798
* Fix NULL as mount(2) syscall data parameter openzfs/zfs#3804
* Fix xattr=sa dataset property not honored openzfs/zfs#3787

ZFS/SPL 0.6.5 Supported Kernels
* Compatible with 2.6.32 - 4.2 Linux kernels.

New Functionality
* Support for temporary mount options.
* Support for accessing the .zfs/snapshot over NFS.
* Support for estimating send stream size when source is a bookmark.
* Administrative commands are allowed to use reserved space, improving robustness.
* New notify ZEDLETs support email and pushbullet notifications.
* New keyword 'slot' for vdev_id.conf to control what is used for the slot number.
* New zpool export -a option unmounts and exports all imported pools.
* New zpool iostat -y omits the first report with statistics since boot.
* New zdb can now open the root dataset.
* New zdb can print the numbers of ganged blocks.
* New zdb -ddddd can print details of block pointer objects.
* New zdb -b performance improved.
* New zstreamdump -d prints contents of blocks.

New Feature Flags
* large_blocks - This feature allows the record size on a dataset to be set larger than 128KB. We currently support block sizes from 512 bytes to 16MB. The benefits of larger blocks, and thus larger IO, need to be weighed against the cost of COWing a giant block to modify one byte. Additionally, very large blocks can have an impact on I/O latency, and also potentially on the memory allocator. Therefore, we do not allow the record size to be set larger than zfs_max_recordsize (default 1MB). Larger blocks can be created by changing this tuning; pools with larger blocks can always be imported and used, regardless of this setting.
* filesystem_limits - This feature enables filesystem and snapshot limits. These limits can be used to control how many filesystems and/or snapshots can be created at the point in the tree on which the limits are set.

Performance
* Improved zvol performance on all kernels (>50% higher throughput, >20% lower latency)
* Improved zil performance on Linux 2.6.39 and earlier kernels (10x lower latency)
* Improved allocation behavior on mostly full SSD/file pools (5% to 10% improvement on 90% full pools)
* Improved performance when removing large files.
* Caching improvements (ARC):
** Better cached read performance due to reduced lock contention.
** Smarter heuristics for managing the total size of the cache and the distribution of data/metadata.
** Faster release of cached buffers due to unexpected memory pressure.

Changes in Behavior
* Default reserved space was increased from 1.6% to 3.3% of total pool capacity. This default percentage can be controlled through the new spa_slop_shift module option; setting it to 6 will restore the previous percentage. (A small worked example of this shift arithmetic follows these notes.)
* Loading of the ZFS module stack is now handled by systemd or the sysv init scripts. Invoking the zfs/zpool commands will not cause the modules to be automatically loaded. The previous behavior can be restored by setting the ZFS_MODULE_LOADING=yes environment variable, but this functionality will be removed in a future release.
* Unified SYSV and Gentoo OpenRC initialization scripts. The previous functionality has been split into zfs-import, zfs-mount, zfs-share, and zfs-zed scripts. This allows for independent control of the services and is consistent with the unit files provided for a systemd based system. Complete details of the functionality provided by the updated scripts can be found here.
* Task queues are now dynamic and worker threads will be created and destroyed as needed. This allows the system to automatically tune itself to ensure the optimal number of threads are used for the active workload, which can result in a performance improvement.
* Task queue thread priorities were correctly aligned with the default Linux file system thread priorities. This allows ZFS to compete fairly with other active Linux file systems when the system is under heavy load.
* When compression=on the default compression algorithm will be lz4 as long as the feature is enabled. Otherwise the default remains lzjb. Similarly, lz4 is now the preferred method for compressing metadata when available.
* The use of mkdir/rmdir/mv in the .zfs/snapshot directory has been disabled by default both locally and via NFS clients. The zfs_admin_snapshot module option can be used to re-enable this functionality.
* LBA weighting is automatically disabled on files and SSDs, ensuring the entire device is used fairly.
* iostat accounting on zvols running on kernels older than Linux 3.19 is no longer supported.
* The known issues preventing swap on zvols for Linux 3.9 and newer kernels have been resolved. However, deadlocks are still possible for older kernels.

Module Options
* Changed zfs_arc_c_min default from 4M to 32M to accommodate large blocks.
* Added metaslab_aliquot to control how many bytes are written to a top-level vdev before moving on to the next one. Increasing this may be helpful when using blocks larger than 1M.
* Added spa_slop_shift, see 'reserved space' comment in the 'Changes in Behavior' section.
* Added zfs_admin_snapshot, enable/disable the use of mkdir/rmdir/mv in the .zfs/snapshot directory.
* Added zfs_arc_lotsfree_percent, throttle I/O when free system memory drops below this percentage.
* Added zfs_arc_num_sublists_per_state, used to allow more fine-grained locking.
* Added zfs_arc_p_min_shift, used to set a floor on arc_p.
* Added zfs_arc_sys_free, the target number of bytes the ARC should leave as free.
* Added zfs_dbgmsg_enable, used to enable the 'dbgmsg' kstat.
* Added zfs_dbgmsg_maxsize, sets the maximum size of the dbgmsg buffer.
* Added zfs_max_recordsize, used to control the maximum allowed record size.
* Added zfs_arc_meta_strategy, used to select the preferred ARC reclaim strategy.
* Removed metaslab_min_alloc_size, it was unused internally due to prior changes.
* Removed zfs_arc_memory_throttle_disable, replaced by zfs_arc_lotsfree_percent.
* Removed zvol_threads, zvols no longer require a dedicated task queue.
* See zfs-module-parameters(5) for complete details on available module options.

Bug Fixes
* Improved documentation with many updates, corrections, and additions.
* Improved sysv, systemd, initramfs, and dracut support.
* Improved block pointer validation before issuing IO.
* Improved scrub pause heuristics.
* Improved test coverage.
* Improved heuristics for automatic repair when zfs_recover=1 module option is set.
* Improved debugging infrastructure via 'dbgmsg' kstat.
* Improved zpool import performance.
* Fixed deadlocks in direct memory reclaim.
* Fixed deadlock on db_mtx and dn_holds.
* Fixed deadlock in dmu_objset_find_dp().
* Fixed deadlock during zfs rollback.
* Fixed kernel panic due to tsd_exit() in ZFS_EXIT.
* Fixed kernel panic when adding a duplicate dbuf to dn_dbufs.
* Fixed kernel panic due to security / ACL creation failure.
* Fixed kernel panic on unmount due to iput taskq.
* Fixed panic due to corrupt nvlist when running utilities.
* Fixed panic on unmount due to not waiting for all znodes to be released.
* Fixed panic with zfs clone from different source and target pools.
* Fixed NULL pointer dereference in dsl_prop_get_ds().
* Fixed NULL pointer dereference in dsl_prop_notify_all_cb().
* Fixed NULL pointer dereference in zfsdev_getminor().
* Fixed I/Os are now aggregated across ZIO priority classes.
* Fixed .zfs/snapshot auto-mounting for all supported kernels.
* Fixed 3-digit octal escapes by changing to 4-digit, which disambiguates the output.
* Fixed hard lockup due to infinite loop in zfs_zget().
* Fixed misreported 'alloc' value for cache devices.
* Fixed spurious hung task watchdog stack traces.
* Fixed direct memory reclaim deadlocks.
* Fixed module loading in zfs import systemd service.
* Fixed intermittent libzfs_init() failure to open /dev/zfs.
* Fixed hot-disk sparing for disk vdevs.
* Fixed system spinning during ARC reclaim.
* Fixed formatting errors in zfs(8).
* Fixed zio pipeline stall by having callers invoke next stage.
* Fixed assertion failed in zrl_tryenter().
* Fixed memory leak in make_root_vdev().
* Fixed memory leak in zpool_in_use().
* Fixed memory leak in libzfs when doing rollback.
* Fixed hold leak in dmu_recv_end_check().
* Fixed refcount leak in bpobj_iterate_impl().
* Fixed misuse of input argument in traverse_visitbp().
* Fixed missing mutex_destroy() calls.
* Fixed integer overflows in dmu_read/dmu_write.
* Fixed verify() failure in zio_done().
* Fixed zio_checksum_error() to only include info for ECKSUM errors.
* Fixed -ESTALE to force lookup on missing NFS file handles.
* Fixed spurious failures from dsl_dataset_hold_obj().
* Fixed zfs compressratio when using with 4k sector size.
* Fixed spurious watchdog warnings in prefetch thread.
* Fixed unfair disk space allocation when vdevs are of unequal size.
* Fixed ashift accounting error writing to cache devices.
* Fixed zdb -d false positive warning when feature@large_blocks=disabled.
* Fixed zdb -h | -i seg fault.
* Fixed force-received full stream into a dataset if it has a snapshot.
* Fixed snapshot error handling.
* Fixed 'hangs' while deleting large files.
* Fixed lock contention (rrw_exit) while running a read only load.
* Fixed error message when creating a pool to include all problematic devices.
* Fixed Xen virtual block device detection, partitions are now created.
* Fixed missing E2BIG error handling in zfs_setprop_error().
* Fixed zpool import assertion in libzfs_import.c.
* Fixed zfs send -nv output to stderr.
* Fixed idle pool potentially running itself out of space.
* Fixed narrow race which allowed read(2) to access beyond fstat(2)'s reported end-of-file.
* Fixed support for VPATH builds.
* Fixed double counting of HDR_L2ONLY_SIZE in ARC.
* Fixed 'BUG: Bad page state' warning from kernel due to writeback flag.
* Fixed arc_available_memory() to check freemem.
* Fixed arc_memory_throttle() to check pageout.
* Fixed 'zpool create' warning when using zvols in debug builds.
* Fixed loop devices layered on ZFS with 4.1 kernels.
* Fixed zvol contribution to kernel entropy pool.
* Fixed handling of compression flags in arc header.
* Substantial changes to realign code base with illumos.
* Many additional bug fixes.

Signed-off-by: Nathaniel Clark <[email protected]>
Change-Id: I87c012aec9ec581b10a417d699dafc7d415abf63
Reviewed-on: http://review.whamcloud.com/16399
Tested-by: Jenkins
Reviewed-by: Alex Zhuravlev <[email protected]>
Tested-by: Maloo <[email protected]>
Reviewed-by: Andreas Dilger <[email protected]>
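As a quick, hedged illustration of the spa_slop_shift arithmetic mentioned under "Changes in Behavior" above, the C snippet below shows how a power-of-two shift maps to the quoted percentages. It is a back-of-envelope sketch, not the exact internal reservation formula (which also enforces a minimum slop size).

/*
 * Back-of-envelope sketch: reserved "slop" space scales roughly as
 * pool_size >> spa_slop_shift, so the default shift of 5 reserves about
 * 1/32 (~3%) of capacity and a shift of 6 reserves about 1/64 (~1.6%).
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t pool_size = 10ULL << 40;	/* example: a 10 TiB pool */

	for (int spa_slop_shift = 5; spa_slop_shift <= 6; spa_slop_shift++) {
		uint64_t slop = pool_size >> spa_slop_shift;

		printf("spa_slop_shift=%d reserves %llu bytes (%.2f%%)\n",
		    spa_slop_shift, (unsigned long long)slop,
		    100.0 * slop / pool_size);
	}
	return 0;
}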
This deadlock may manifest itself in slightly different ways, but at the core it is caused by a memory allocation blocking on filesystem reclaim in the zio pipeline. This is normally impossible because zio_execute() disables filesystem reclaim by setting PF_FSTRANS on the thread. However, kmem cache allocations may still indirectly block on filesystem reclaim while holding the critical vq->vq_lock, as shown below. To resolve this issue zio_buf_alloc_flags() is introduced, which allows allocation flags to be passed. This can then be used in vdev_queue_aggregate() with KM_NOSLEEP when allocating the aggregate IO buffer. Since aggregating the IO is purely a performance optimization we want this to either succeed or fail quickly. Trying too hard to allocate this memory under the vq->vq_lock can negatively impact performance and result in this deadlock.

* z_wr_iss
  zio_vdev_io_start
  vdev_queue_io -> Takes vq->vq_lock
  vdev_queue_io_to_issue
  vdev_queue_aggregate
  zio_buf_alloc -> Waiting on spl_kmem_cache process

* z_wr_int
  zio_vdev_io_done
  vdev_queue_io_done
  mutex_lock -> Waiting on vq->vq_lock held by z_wr_iss

* txg_sync
  spa_sync
  dsl_pool_sync
  zio_wait -> Waiting on zio being handled by z_wr_int

* spl_kmem_cache
  spl_cache_grow_work
  kv_alloc
  spl_vmalloc
  ...
  evict
  zpl_evict_inode
  zfs_inactive
  dmu_tx_wait
  txg_wait_open -> Waiting on txg_sync

Signed-off-by: Brian Behlendorf <[email protected]>
Signed-off-by: Chunwei Chen <[email protected]>
Signed-off-by: Tim Chase <[email protected]>
Closes #3808
Closes #3867
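The following C sketch illustrates the best-effort allocation pattern that commit describes. The EXAMPLE_KM_* flags and example_* functions are invented for this illustration (the real code lives in zio_buf_alloc_flags() and vdev_queue_aggregate()), and malloc() stands in for the kernel's kmem cache layer.

/*
 * Hedged sketch of the "best effort" allocation pattern: while holding a
 * hot lock, request the aggregate buffer with a no-sleep flag and simply
 * skip the optimization on failure instead of blocking in the allocator.
 */
#include <stdlib.h>
#include <stddef.h>

#define EXAMPLE_KM_SLEEP	0x00	/* may block waiting for memory */
#define EXAMPLE_KM_NOSLEEP	0x01	/* must fail immediately if memory is tight */

static void *example_buf_alloc_flags(size_t size, int flags)
{
	/*
	 * In the kernel this would pass the flag through to the kmem cache
	 * layer; in this userspace sketch malloc() stands in for both cases.
	 */
	(void) flags;
	return malloc(size);
}

/* Build an aggregate IO buffer, or return NULL to skip aggregation. */
static void *example_try_aggregate(size_t aggregate_size)
{
	void *buf = example_buf_alloc_flags(aggregate_size, EXAMPLE_KM_NOSLEEP);

	if (buf == NULL) {
		/*
		 * Aggregation is purely a performance optimization, so a
		 * failed no-sleep allocation just means the child IOs are
		 * issued individually; nothing blocks under the queue lock.
		 */
		return NULL;
	}
	return buf;
}

Failing fast here matters because the allocation happens while vq->vq_lock is held; blocking in the allocator at that point is exactly what lets the spl_kmem_cache -> eviction -> txg_wait_open chain in the stacks above close the loop.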
I upgraded to 0.6.5 when it became available for CentOS 7 recently and I've begun to notice problems with large rsync transfers. (I'm using ZFS as the main storage for a BackupPC server.) Small rsyncs (i.e., incremental backups) seem to be OK, but large ones (from the full backups) end up hanging and basically locking up the computer.
I managed to get a stack trace from dmesg this evening, though, and I'll paste it below. The traces all seem to be in a mutex locking function, so maybe this is some kind of deadlock scenario?
I think I can trigger this situation fairly reliably by kicking off a full backup, so if there's something specific you want me to try, just let me know.