zfs receive allows snapshot destruction during receive and fails #1059
Comments
@wrouesnel This bug is quite old (almost a year and a half). Is it still a problem, or can it be closed?
I haven't tried to do a zfs receive in a while. Unless something has changed, I'd assume it still is.
@wrouesnel There have been quite a lot of changes over the last couple of months, both in ZoL generally and in send/receive specifically. So if you could test this against the latest HEAD, that would be great, thanks!
I believe this is still an issue which does occasionally bite people.
I can confirm that this bug still exists with ZoL 0.6.3. On a Debian 7.7 (Wheezy) server, I attempted a zfs send/recv with the -R option on a 2.86 TiB dataset. The same option worked fine with smaller datasets, but it repeatedly failed on this particular dataset with the error "cannot receive incremental stream: invalid backup stream", at which point every following snapshot failed with a broken-pipe message. After discussing the issue on #zfsonlinux, dasjoe directed me to this bug report. Sure enough, when I disabled zfs-auto-snapshot on the receiving pool, the same send/receive operation completed successfully. I can also confirm the reporter's finding that this bug does not affect the sending side: I received messages from cron that it was unable to destroy old frequent and hourly snapshots because the dataset was busy, and I confirmed that no snapshots were deleted during the zfs send operation.
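For reference, the shape of the operation and the workaround described above looks roughly like the sketch below. Pool and dataset names are hypothetical; setting the `com.sun:auto-snapshot` property on the receiving pool is one common way to keep zfs-auto-snapshot away from it, while disabling the cron job entirely (as the reporter did) is the blunter alternative.

```sh
# Hypothetical pool/dataset names throughout.
# Keep zfs-auto-snapshot away from the receiving pool while the stream is in
# flight (disabling the cron job entirely also works, as noted above).
zfs set com.sun:auto-snapshot=false backup

# Recursive replication of a snapshotted dataset into the destination pool.
zfs snapshot -r tank/data@migrate
zfs send -R tank/data@migrate | zfs receive -F backup/data

# Re-enable automatic snapshots once the receive has completed.
zfs set com.sun:auto-snapshot=true backup
```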
With the addition of resumable send/recv, this code has seen considerable change. At the moment all the test cases are passing, so I believe this issue has been fixed. If not, we can reopen it.
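For anyone finding this later, the resumable send/recv mentioned here works roughly as sketched below (names are hypothetical): with `-s`, an interrupted receive leaves partially received state behind instead of discarding it, and the destination exposes a resume token.

```sh
# Resumable receive: -s keeps partially received state if the stream is cut off.
zfs send -R tank/data@migrate | zfs receive -s -F backup/data

# After an interruption, read the resume token from the destination and
# restart the send from where it stopped.
TOKEN=$(zfs get -H -o value receive_resume_token backup/data)
zfs send -t "$TOKEN" | zfs receive -s backup/data
```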
Just spent a few hours diagnosing what I believe to be a bug with the way zfs receive operates when receiving send streams containing multiple incremental snapshots.
Specifically, the system I was receiving on was running a timed zfs-auto-snapshot script, which creates new snapshots at 15-minute intervals and destroys old ones as it goes.
While doing some partition resizing (and thus needing to remove my old pool and restore from a file), I found that the receive operation would get about 70% of the way through and then fail with "invalid backup stream".
After disabling the cron job, however, the receive worked. The problem seemed to be that when the receive reached a set of newer snapshots that zfs-auto-snapshot might want to remove, it would destroy one that was needed as the base for the next incremental snapshot in the send stream, causing the whole receive operation to fail.
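A minimal sketch of the failure mode, with hypothetical names, could look like this:

```sh
# A replication stream sent up to @c also contains the intermediates @a and @b.
zfs snapshot tank/data@a
zfs snapshot tank/data@b
zfs snapshot tank/data@c
zfs send -R tank/data@c > /backup/stream.zfs

# The receive applies the snapshots one incremental at a time.
zfs receive -F backup/data < /backup/stream.zfs &

# If a concurrent job (e.g. zfs-auto-snapshot's pruning) destroys an
# already-received intermediate that is the base of the next incremental...
zfs destroy backup/data@b

# ...the receive aborts with:
#   cannot receive incremental stream: invalid backup stream
```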
A similar bug was previously fixed in the zfs send operation, which now marks dependent snapshots as held until the send completes. It seems to me that zfs receive should do the same thing: hold a just-received snapshot until either the operation completes or the next snapshot is received.
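For illustration, the existing user-level hold mechanism already behaves this way when applied by hand (names hypothetical); presumably the fix would be for zfs receive to take an equivalent hold internally:

```sh
# A hold on the snapshot that the next incremental depends on makes a
# concurrent destroy fail ("dataset is busy") instead of breaking the receive.
zfs hold recv_in_progress backup/data@b
zfs destroy backup/data@b                     # refused while the hold exists
zfs release recv_in_progress backup/data@b    # drop it once @c has been received
```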