The cache/cache2.qcow2 file seems to consistently use about 4.5x the reported rootfs size when building buildextend-qemu (the default build).
Environment
What operating system is being used to run coreos-assembler?
Linux with official podman/docker container.
What operating system is being assembled?
Custom derivative of fedora-coreos-config (Fedora 41).
Is coreos-assembler running in Podman or Docker?
Yes, Podman.
If Podman, is coreos-assembler running privileged or unprivileged?
Privileged, rootless, daemonless.
Expected Behavior
RUNVM_CACHE_SIZE shouldn't need to be set to ~4.5x the size of the rootfs being assembled.
Actual Behavior
A reported rootfs of 16 GB requires a RUNVM_CACHE_SIZE of ~75 GB in order to do cosa build/cosa buildextend-qemu.
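For reference, the workaround right now is just to oversize the cache up front, something like the following (a sketch, assuming cosa picks RUNVM_CACHE_SIZE up from the environment, which is my understanding):

    # Oversize the runvm cache so the default qemu build doesn't run out of space.
    # 75G is roughly 4.5x a 16 GB rootfs, plus a little headroom.
    export RUNVM_CACHE_SIZE=75G
    cosa build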
Reproduction Steps
1. Remove the cache/cache2.qcow2 file.
2. Run cosa build (default qemu target).
3. Examine the reported rootfs size passed to the osbuild-mpp command in the output.
4. Examine the size of the cache/cache2.qcow2 file.
5. Notice that the cache2.qcow2 file is ~4.5x the size of the rootfs (rough shell commands for these steps are sketched below).
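Roughly the same steps as shell commands (a sketch; run from the cosa working directory, and the exact log wording may differ):

    rm -f cache/cache2.qcow2            # start from an empty cache
    cosa build                          # default target, i.e. the qemu image
    # the rootfs size is reported in the osbuild-mpp invocation logged during the build
    qemu-img info cache/cache2.qcow2    # or: du -h cache/cache2.qcow2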
Other Information
I'm guessing this has to do with the hardcoded list of checkpoints passed to the osbuild command in src/runvm-osbuild, and the list of matching stages in the mpp file. Even though only the qemu target is being exported as the default cosa build output, it appears a long list of checkpoints is being added to the osbuild store, and that is probably what bloats the cache file.
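For example, something of this shape (purely illustrative, with placeholder paths; the pipeline names here are just the ones discussed below, not the actual hardcoded list in src/runvm-osbuild):

    # Each --checkpoint pins that pipeline's result in the osbuild store,
    # so it stays on disk inside cache2.qcow2 even though only qemu is exported.
    osbuild \
        --store "$store_dir" \
        --checkpoint tree \
        --checkpoint deployed-tree \
        --checkpoint raw-image \
        --export qemu \
        manifest.json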
Presumably a subsequent buildextend-gcp, buildextend-applehv, etc. wouldn't increase the size of the cache2.qcow2 file any further, since those artifacts are already generated and included in the osbuild store, but it would be preferable if they were generated conditionally on request rather than unconditionally as part of the main build. Maybe the cosa build interface should have a single build command, with the complete list of outputs passed via image.yaml (or something) instead, so that only the desired outputs are created and written to the osbuild store?
That said, even ignoring some of the raw formats, it looks like the oci-archive, deployed-tree, and tree would each be a copy of the ostree that already lives in the tmp/repo folder, which alone would add up to ~3x the rootfs size. I do wonder whether it's strictly necessary to checkpoint the tree stage, since it's just an ostree pull of the primary ostree ref from the tmp/repo ostree, which is independently stored persistently (I think). Though maybe there's internal COW support that makes this near-duplication negligible and the extreme bloat is coming from something else.
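For what it's worth, this is roughly how I'm eyeballing the near-duplication (plain du numbers, so it wouldn't account for any internal COW/dedup):

    ostree refs --repo=tmp/repo         # the primary ref already lives here, persistently
    du -sh tmp/repo                     # roughly 1x the rootfs
    du -sh cache/cache2.qcow2           # grows to ~4.5x the rootfs after a default build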
When building the qemu platform, any pipelines that the qemu pipeline needs will be built; none that aren't needed will be. As for the list of checkpoints we pass in, we do that to optimize later invocations, because we know those particular pipelines are depended on by other platforms.
For qemu it would checkpoint deployed-tree, tree, and raw-image, and yes, osbuild-mpp would also store a copy of the ociarchive in the osbuild store too. So what you came up with sounds right.
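In other words (a rough sketch of the flow), it's the later buildextend invocations that benefit from those checkpoints:

    cosa build                  # checkpoints tree, deployed-tree, raw-image; exports qemu
    cosa buildextend-gcp        # should mostly reuse the checkpointed pipelines from the store
    cosa buildextend-applehv    # likewise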
If it's really only what's needed, then yeah, the issue can be closed. It's just a huge change from pre-osbuild, when the qemu image was generated from only the ostree and some config files, and all other images were generated from it. Now there are many copies in various formats being generated and preserved, increasing the disk usage of a build from starting at ~2x the rootfs to starting at ~6.5x the rootfs (adding 4.5x of the rootfs as new cache storage).