
zfs receive allows snapshot destruction during receive and fails #1059

Closed

wrouesnel opened this issue Oct 20, 2012 · 6 comments

@wrouesnel (Contributor)

I just spent a few hours diagnosing what I believe to be a bug in the way zfs receive operates when receiving send streams containing multiple incremental snapshots.

Specifically, the system I was receiving on was running a timed zfs-auto-snapshot script, which destroys old snapshots when creating new ones at 15-minute intervals.

While doing some partition resizing (and thus needing to remove my old pool and restore from a file), I found that the receive operation would get about 70% of the way through and then fail with "invalid backup stream".

After disabling cron, though, the receive worked. The problem seemed to be that when the receive got to a set of newer snapshots that zfs-auto-snapshot might want to remove, it would remove one that was needed to receive the next incremental snapshot in the send stream, causing the whole receive operation to fail.

A similar bug was previously fixed in zfs send, which marks dependent datasets as held until the send completes. It seems to me that zfs receive should do the same thing: mark a just-received dataset as held until either the operation completes or the next dataset is received.
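Roughly what I have in mind, expressed as the equivalent userland discipline when receiving one incremental at a time (a sketch only; the tag recv_guard and all pool/snapshot/file names are illustrative):

```sh
# Receive each increment separately and keep a user hold on the
# snapshot the next increment depends on. "zfs hold" makes a
# concurrent "zfs destroy" of that snapshot fail with
# "dataset is busy" instead of silently breaking the chain.
zfs receive tank/dst < full_s1.stream
zfs hold recv_guard tank/dst@s1           # protect s1 until s2 lands

zfs receive tank/dst < incr_s1_s2.stream
zfs hold recv_guard tank/dst@s2
zfs release recv_guard tank/dst@s1        # s1 is no longer needed

zfs receive tank/dst < incr_s2_s3.stream
zfs hold recv_guard tank/dst@s3
zfs release recv_guard tank/dst@s2
```

Doing this internally in zfs receive would close the window without requiring users to split their replication streams by hand.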

@FransUrbo (Contributor)

@wrouesnel This bug is quite old (almost a year and a half), is it still a problem, or can it be closed?

@wrouesnel (Contributor, Author)

I haven't tried to do a zfs receive in a while. Unless something has been explicitly changed it is probably still an issue, but let me set up a test case and check.


@FransUrbo (Contributor)

@wrouesnel There have been quite a lot of changes over the last couple of months, both in ZoL generally and in send/receive specifically. So if you could test this against the latest HEAD, that would be great, thanx!

@behlendorf behlendorf added Bug - Minor and removed Bug labels Oct 6, 2014
@behlendorf behlendorf removed this from the 0.6.7 milestone Oct 6, 2014
@behlendorf (Contributor)

I believe this is still an issue which does occasionally bite people.

@jwittlincohen (Contributor)

I can confirm that this bug still exists with ZoL 0.6.3. On a Debian 7.7 (Wheezy) server, I attempted a zfs send/recv using the -R option on a 2.86 TiB dataset. The same options worked fine with smaller datasets, but this particular dataset kept failing with the error "cannot receive incremental stream: invalid backup stream", at which point every snapshot that followed failed with a broken-pipe message. After I discussed the issue on #zfsonlinux, dasjoe directed me to this bug report. Sure enough, when I disabled zfs-auto-snapshot on the receiving pool, the same send/receive operation completed successfully.

I can also confirm the reporter's finding that this bug does not impact the sending side. I received messages from cron that it was unable to destroy old frequent and hourly snapshots because the dataset was busy, and I confirmed that no snapshots were deleted during the zfs send operation.
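For what it's worth, instead of disabling the whole cron job, the workaround can be scoped to the receiving dataset, assuming the stock zfs-auto-snapshot script, which skips datasets where the com.sun:auto-snapshot property is false (the dataset name below is a placeholder):

```sh
# Exclude just the receiving dataset from zfs-auto-snapshot.
# com.sun:auto-snapshot is the property the stock script checks;
# tank/backup is an illustrative name.
zfs set com.sun:auto-snapshot=false tank/backup

# ... perform the zfs send | zfs receive ...

# Restore the inherited behaviour afterwards.
zfs inherit com.sun:auto-snapshot tank/backup
```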

@behlendorf (Contributor)

With the addition of resumable send/recv, this code has seen considerable change. At the moment all the test cases are passing, so I believe this issue has been fixed. If not, we can reopen it.
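For reference, the resumable path looks roughly like this (a sketch; dataset names and the token value are placeholders):

```sh
# Resumable receive: -s saves partial receive state if the stream
# is interrupted, instead of discarding everything received so far.
zfs send -R tank/src@s3 | zfs receive -s -F tank/dst

# After an interruption, the target dataset exposes a resume token:
zfs get -H -o value receive_resume_token tank/dst

# The sender resumes from that token (placeholder shown):
zfs send -t <token> | zfs receive -s tank/dst
```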
