
Incremental zfs send to file always produces fixed file size and an invalid stream #391

Closed
simonhollis opened this issue Sep 7, 2011 · 12 comments

@simonhollis

Running the 0.6.0-rc5 PPA, an incremental zfs send to a file always produces a fixed file size and an invalid stream.
The problem has been reproduced on Ubuntu 10.04 32-bit and 64-bit.

If I do an incremental zfs send and redirect it to a file, the file always has the same size.

zfs send -i always creates a file of length 128 kB, and zfs send -I always creates a file of length 129 kB, regardless of the actual difference between the snapshots or the ZFS file system used.

Here is the snapshot series:

NAME                                   USED  AVAIL  REFER  MOUNTPOINT
freepool/photos@30Aug11               39.5M      -  51.9G  -
freepool/photos@2Sep11-chmod          1.94M      -  51.9G  -
freepool/photos@2Sep-prededup         1.64M      -  50.6G  -
freepool/photos@organised2011          497K      -  49.8G  -
freepool/photos@3Sep-2011retime        146K      -  49.8G  -
freepool/photos@4Sep11-sorted          546K      -  49.8G  -
freepool/photos@5Sep11-Wedding_added   262K      -  52.5G  -
freepool/photos@7Sep11                    0      -  52.6G  -

i.e., the difference between freepool/photos@30Aug11 and freepool/photos@7Sep11 is in the gigabytes.

If I do:
sudo zfs send -i freepool/photos@30Aug11 freepool/photos@7Sep11 > /media/ntfs2/photos-deltai7Sep
sudo zfs send -I freepool/photos@30Aug11 freepool/photos@7Sep11 > /media/ntfs2/photos-deltaI7Sep

ls -l /media/ntfs2/photos-delta*

-rwxrwxrwx 1 131072 2011-09-07 15:42 /media/ntfs2/photos-deltai7Sep
-rwxrwxrwx 1 131696 2011-09-07 15:26 /media/ntfs2/photos-deltaI7Sep

The sends also take around 15 minutes to complete.

If I then do this to a cloned file system:
sudo zfs receive freepool2/photos < /media/ntfs2/photos-deltai7Sep

receive returns quickly and prints no error message, but the file system is not updated.

@behlendorf
Contributor

Thanks for opening the bug, we'll look into it.

@dajhorn
Contributor

dajhorn commented Sep 20, 2011

I can reproduce this behavior. I always get 131,072 bytes in a stream that contains a header but lacks any user data.

@mlsorensen

I also see this behavior: the file is always 128 kB when redirecting to a file. Piping through gzip works, however. Also, this seems to work:

[root]# mkfifo /pipe
[root]# zfs send backupz/1@now > /pipe & cat /pipe > /backup-of-backupz1.img
[root]# ls -lh /backup-of-backupz1.img
-rw-r--r-- 1 root root 1.6G Sep 30 10:35 /backup-of-backupz1.img

[root]# cat /backup-of-backupz1.img | zfs receive backupz/restored

[root]# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
backupz           8.84G  1.50T  7.40G  legacy
backupz/1          706M  1.50T   706M  legacy
backupz/2          122K  1.50T    78K  legacy
backupz/restored   706M  1.50T   706M  legacy

@dajhorn
Contributor

dajhorn commented Sep 30, 2011

@mlsorensen: Thanks for the kludge. It has me thinking that somebody fixed this earlier, but the patch was lost on the mailing list or in another ticket.

@gunnarbeutner
Contributor

Hello,

I think gunnarbeutner/spl@7aa721c should fix this. :)

Regards,
Gunnar

@gunnarbeutner
Contributor

Hm, this seems to break "zfs send -D" because sockets are non-seekable (libzfs uses socketpair() internally to de-dup the backup stream).
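For context, a minimal userspace sketch (plain C, not libzfs code, and nothing here is taken from the actual dedup implementation) showing why an offset-based write path cannot work on the descriptors socketpair() hands back: positional I/O on them fails with ESPIPE.

/* Minimal sketch: socketpair() descriptors accept plain write(), but
 * reject positional I/O (lseek/pwrite) with ESPIPE. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int sv[2];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) {
		perror("socketpair");
		return 1;
	}

	/* Plain write() works fine on a socket... */
	if (write(sv[0], "zfs", 3) != 3)
		perror("write");

	/* ...but positional I/O does not: both calls fail with ESPIPE. */
	if (lseek(sv[0], 0, SEEK_CUR) == (off_t)-1)
		printf("lseek:  %s\n", strerror(errno));
	if (pwrite(sv[0], "zfs", 3, 0) == -1)
		printf("pwrite: %s\n", strerror(errno));

	close(sv[0]);
	close(sv[1]);
	return 0;
}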

@gunnarbeutner
Contributor

New patch is at gunnarbeutner/spl@7c37099 - its behavior should be identical to the read/write syscalls (http://lxr.linux.no/linux+v3.0.4/fs/read_write.c#L402).
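For reference, the write(2) syscall path that the patch mirrors looks roughly like this in Linux 3.0.x (paraphrased from the fs/read_write.c link above, not quoted verbatim): f_pos is read into a local loff_t, vfs_write() is allowed to update it, and whatever value comes back is stored, independent of the return count.

/* Paraphrase of the sys_write() path in Linux ~3.0 (see link above). */
SYSCALL_DEFINE3(write, unsigned int, fd, const char __user *, buf,
		size_t, count)
{
	struct file *file;
	ssize_t ret = -EBADF;
	int fput_needed;

	file = fget_light(fd, &fput_needed);
	if (file) {
		loff_t pos = file_pos_read(file);   /* snapshot f_pos */
		ret = vfs_write(file, buf, count, &pos);
		file_pos_write(file, pos);          /* store whatever came back */
		fput_light(file, fput_needed);
	}

	return ret;
}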

@behlendorf
Contributor

So to be clear, the flaw in the previous patch was that we didn't increment the file pointer offset in the case where we returned an error code. This change now unconditionally sets the new offset to the vfs_write()/vfs_read() result.

@gunnarbeutner
Contributor

Actually, the problem with my first patch was that I incremented f_pos rather than using the "new" offset returned by vfs_write. For sockets, vfs_write returns the number of bytes written, but "offset" remains 0; the next vfs_write call would then fail because vfs_write expects "offset" to still be 0.
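To make the difference concrete, here is a rough sketch of the two offset-handling strategies for a kernel-side write helper (vn_write_one() is an invented name, not the actual SPL vn_rdwr() code; address-space handling for kernel buffers is omitted for brevity):

/* Hypothetical helper illustrating the offset handling discussed above. */
#include <linux/fs.h>

static ssize_t vn_write_one(struct file *fp, const void *buf, size_t len,
			    loff_t *offp)
{
	loff_t off = *offp;
	ssize_t rc;

	rc = vfs_write(fp, (const char __user *)buf, len, &off);
	if (rc < 0)
		return rc;

	/* First patch (broken for sockets): advance the offset by the byte
	 * count, so the next call sees a non-zero offset even though the
	 * socket never moved:
	 *
	 *	*offp += rc;
	 *
	 * Second patch: keep whatever vfs_write() left in 'off'. For a
	 * regular file that is the advanced position; for a socket it is
	 * still 0, which is exactly what the next call expects. */
	*offp = off;

	return rc;
}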

@behlendorf
Contributor

The fix looks right to me and passes my initial sanity testing. However, before I apply it to master I'd appreciate it if someone watching this bug could give it some additional testing.

@behlendorf
Contributor

Fix applied to the SPL; closing the issue.

@mlsorensen

I should say that I tried it and it worked for me. I did not do exhaustive testing or anything though.

dajhorn referenced this issue in zfsonlinux/pkg-zfs Dec 14, 2011
This would cause problems when using 'zfs send' with a file as the
target (rather than a pipe or a socket as is usually the case) as
for each write the destination offset in the file would be 0.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes ZFS issue #391