Nix build using flake #162
Codecov Report
@@           Coverage Diff           @@
##           master     #162   +/-   ##
=======================================
  Coverage   58.54%   58.54%
=======================================
  Files          67       67
  Lines        5727     5727
=======================================
  Hits         3353     3353
  Misses       2374     2374

Continue to review full report at Codecov.
|
With regard to auto-detecting the GPU arch, would something like building all available archs be a workable solution? It would take forever to compile, but it should be generic. |
I'd like to find a way for telemetry to quietly disable itself if it turns out that $HOME doesn't exist or is not writable. It prevents me from ever doing
I was toying with generating the documentation from nix (and maybe restoring the auto-update of gh-pages as a workflow action). To generate the Python API docs, it needs a python from which it can import bifrost... which can be arranged, but the telemetry bombs in that environment. I guess that whole preamble could be wrapped in a try/except and, on FileNotFoundError, set telemetry to disabled. |
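A possible nix-side workaround, as a hedged sketch rather than anything from this PR: give the doc-building derivation a writable scratch $HOME so the telemetry preamble doesn't throw inside the sandbox. Here `docEnv` (a python with sphinx and bifrost on its path) and the docs/ Makefile target are assumptions.

```nix
pkgs.stdenv.mkDerivation {
  name = "bifrost-docs";
  src = ./.;
  # docEnv is assumed to be a python environment that can import bifrost
  nativeBuildInputs = [ docEnv pkgs.gnumake ];
  dontConfigure = true;   # skip the autoconf configure script for the docs-only build
  buildPhase = ''
    export HOME="$(mktemp -d)"   # telemetry wants an existing, writable $HOME
    make -C docs html
  '';
  installPhase = ''
    mkdir -p $out
    cp -r docs/_build/html $out/
  '';
}
```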
Now that #157 has been merged we should probably close this PR and open a new one that targets master. |
Yep. I may just rebase/squash onto new master... the timeline is pretty confusing with occasional merge commits from autoconf... but ultimately it should just add 3 files. |
I think I will try out the nix thing once the new server gets set up.
I may merge the nix stuff soon, but I feel like it deserves a small section of the README or manual... maybe before the next release. I'll add a CHANGELOG entry as a placeholder. Next time you're on qblocks, you might try a quick nix setup like this:
Even before a git clone, you should be able to do things like the following. (If it needs to configure and build, it will; but if you hit exactly a configuration that's already in cachix, it will just download. The cache is usually populated by the CI.)
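(For reference, a hedged sketch of the kind of commands meant here, not the ones originally posted; it assumes the flake is reachable at github:ledatelescope/bifrost and exposes a default package and dev shell.)

```console
# one-time setup on a machine like qblocks: install nix and enable flakes
$ sh <(curl -L https://nixos.org/nix/install) --daemon
$ mkdir -p ~/.config/nix
$ echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf

# then, with no git clone, build (or just download from cachix) directly
# from the flake reference and poke around the result
$ nix build github:ledatelescope/bifrost
$ ls result/

# or drop into a shell with the build toolchain and dependencies
$ nix develop github:ledatelescope/bifrost
```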
|
It provides an overlay for nixpkgs that adds ctypesgen and a configurable bifrost package, which can be overridden to use various versions of python3 and cudatoolkit (or to build without them) and to enable debugging or not. The GitHub workflow uses nix to build a few variations, run the non-GPU tests, and update the documentation in the gh-pages branch. Builds are cached by cachix; once this is merged, that part may need some keys loaded onto the ledatelescope repo. Maybe it also needs a blurb in the README before the next release.
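Roughly how a downstream flake might consume that overlay, as a hedged sketch: the override arguments shown (python3, cudatoolkit) and the exact attribute names are assumptions, not the PR's actual interface.

```nix
{
  inputs.bifrost.url = "github:ledatelescope/bifrost";

  outputs = { self, bifrost, ... }:
    let
      system = "x86_64-linux";
      # reuse the nixpkgs pinned by the bifrost flake, with its overlay applied
      pkgs = import bifrost.inputs.nixpkgs {
        inherit system;
        overlays = [ bifrost.overlays.default ];
      };
    in {
      # e.g. a CPU-only build against python 3.10
      packages.${system}.default = pkgs.bifrost.override {
        python3 = pkgs.python310;
        cudatoolkit = null;   # build without CUDA support
      };
    };
}
```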
That technique pulled in cudatoolkit too early, even when not building with it (and so it failed on darwin).
Just going with ["70" "75"] in default CUDA builds for now. Always possible to override the `gpuArchs` argument.
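For example (continuing the hedged sketch above; whether `gpuArchs` is exposed through `.override` like this is an assumption):

```nix
# ask for specific GPU architectures instead of the ["70" "75"] default
pkgs.bifrost.override { gpuArchs = [ "70" "75" "80" "86" ]; }
```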
overlay → overlays.default
devShell.ARCH → devShells.ARCH.default
Better for consistency with non-nix build.
Following merge of PR lwa-project#169
This is a start on specifying a “flake” for building bifrost with Nix. A flake is a format that pins all dependencies to support reproducible builds. This initial version includes a GitHub workflow for building against different releases of the nixpkgs tree (containing compilers, python dependencies, etc.) and different versions of python, then running the tests on all of those configurations. (This builds on autoconf.) More background: nixos.org, Flakes on the Nix wiki, Flakes tutorial from tweag.io.
It doesn't yet support GPU builds, though I've gotten them mostly to work. It's a little tricky to integrate because `libcuda.so.1` must come from the host platform, so that it agrees with the kernel version and the GPU hardware. If the host is itself NixOS it's manageable, but when Nix is being used on top of another platform (e.g. Ubuntu) we can only build against stubs and then substitute the real libcuda later. For similar reasons, auto-detecting the right GPU architecture during the build seems to be problematic: as long as the GPU architecture is an input to the build it gets hashed into the package signature, but we can't ask what the GPU architecture is “from the inside.” Same story as for `builtins.currentSystem` and the overall architecture/OS tag. (Some hints about libcuda on Nix.) All of this should be solvable (and hopefully useful). I'd like to continue to work on it and tweak it here... so I'm marking this as a draft.
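Roughly the shape of the stub approach described above, as a hedged sketch: the attribute names, the stubs path, the library name, and the patchelf step are assumptions, not code from this PR.

```nix
final: prev: {
  bifrost-cuda = prev.bifrost.overrideAttrs (old: {
    # at build time, link against the stub libcuda shipped with the toolkit
    buildInputs = (old.buildInputs or [ ]) ++ [ final.cudatoolkit ];
    NIX_LDFLAGS = "-L${final.cudatoolkit}/lib/stubs " + (old.NIX_LDFLAGS or "");
    nativeBuildInputs = (old.nativeBuildInputs or [ ]) ++ [ final.patchelf ];
    # at run time the loader must find the host's real libcuda.so.1:
    # /run/opengl-driver/lib on NixOS, or an LD_LIBRARY_PATH entry (or
    # something like nixGL) when nix sits on top of Ubuntu etc.
    postFixup = (old.postFixup or "") + ''
      patchelf --add-rpath /run/opengl-driver/lib $out/lib/libbifrost.so
    '';
  });
}
```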