[Bug] Substantial slowdowns with scipy 1.15 #2668

Closed
Balandat opened this issue Jan 4, 2025 · 1 comment
Labels: bug (Something isn't working)


Balandat commented Jan 4, 2025

🐛 Bug

I have observed substantial slowdowns (at least 4x in some cases) in the tutorial runs (for both the smoke-test and regular execution) when using scipy 1.15.

It's not clear what causes this. I was unable to reproduce it on my M1 Mac, so it seems to be tied to something in the CI setup, possibly the underlying numerical linear algebra libraries.
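
One way to narrow this down (not part of the original report, just a diagnostic sketch) would be to print the resolved scipy/numpy versions and their BLAS/LAPACK build configuration in the CI job and compare against the local M1 environment:

import numpy as np
import scipy

# Confirm which versions the CI job actually resolved.
print("numpy:", np.__version__)
print("scipy:", scipy.__version__)

# Show the BLAS/LAPACK build configuration; a different backend between
# the CI runners and a local M1 Mac could explain the runtime gap.
np.show_config()
scipy.show_config()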

Runtimes in Nightly on 01/03/25 (using scipy 1.14.1):

Run python scripts/run_tutorials.py -p "$(pwd)"
Running tutorial(s) in standard mode.
This may take a long time...
Running tutorial GIBBON_for_efficient_batch_entropy_search.ipynb.
Running tutorial GIBBON_for_efficient_batch_entropy_search.ipynb took 19.36 seconds. Memory usage started at 46.421875 MB and the maximum was 1113.890625 MB.
Running tutorial Multi_objective_multi_fidelity_BO.ipynb.
Running tutorial Multi_objective_multi_fidelity_BO.ipynb took 149.58 seconds. Memory usage started at 46.421875 MB and the maximum was 777.238285 MB.
Running tutorial batch_mode_cross_validation.ipynb.
Running tutorial batch_mode_cross_validation.ipynb took 5.26 seconds. Memory usage started at 46.421875 MB and the maximum was 543.8984375 MB.
Running tutorial baxus.ipynb.
Running tutorial baxus.ipynb took 978.33 seconds. Memory usage started at 46.421875 MB and the maximum was 3127.25390625 MB.
Running tutorial bo_with_warped_gp.ipynb.
Running tutorial bo_with_warped_gp.ipynb took 237.89 seconds. Memory usage started at 46.421875 MB and the maximum was 556.265625 MB.
Running tutorial bope.ipynb.
Running tutorial bope.ipynb took 52 seconds. Memory usage started at 46.421875 MB and the maximum was 606.328125 MB.

Runtimes in Nightly on 01/04/25 (using scipy 1.15):

Run python scripts/run_tutorials.py -p "$(pwd)"
Running tutorial(s) in standard mode.
This may take a long time...
Running tutorial GIBBON_for_efficient_batch_entropy_search.ipynb.
Running tutorial GIBBON_for_efficient_batch_entropy_search.ipynb took 39.89 seconds. Memory usage started at 46.37109375 MB and the maximum was 1115.0 MB.
Running tutorial Multi_objective_multi_fidelity_BO.ipynb.
Running tutorial Multi_objective_multi_fidelity_BO.ipynb took 483.36 seconds. Memory usage started at 46.37109375 MB and the maximum was 772.97265625 MB.
Running tutorial batch_mode_cross_validation.ipynb.
Running tutorial batch_mode_cross_validation.ipynb took 6.00 seconds. Memory usage started at 46.37109375 MB and the maximum was 557.734375 MB.
Running tutorial baxus.ipynb.
Tutorial baxus.ipynb exceeded the maximum runtime of 30 minutes.
Running tutorial bo_with_warped_gp.ipynb.
Tutorial bo_with_warped_gp.ipynb exceeded the maximum runtime of 30 minutes.
Running tutorial bope.ipynb.
Running tutorial bope.ipynb took 219.32 seconds. Memory usage started at 46.37109375 MB and the maximum was 655.171875 MB.
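
For easier comparison, the per-tutorial runtimes from the two runs above (tutorials that timed out ran for at least the 30-minute limit, i.e. 1800 seconds):

Tutorial                                          scipy 1.14.1    scipy 1.15          Slowdown
GIBBON_for_efficient_batch_entropy_search.ipynb   19.36 s         39.89 s             ~2.1x
Multi_objective_multi_fidelity_BO.ipynb           149.58 s        483.36 s            ~3.2x
batch_mode_cross_validation.ipynb                 5.26 s          6.00 s              ~1.1x
baxus.ipynb                                       978.33 s        >1800 s (timeout)   >1.8x
bo_with_warped_gp.ipynb                           237.89 s        >1800 s (timeout)   >7.5x
bope.ipynb                                        52 s            219.32 s            ~4.2x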
Balandat added the bug label on Jan 4, 2025
Balandat added a commit to Balandat/botorch that referenced this issue Jan 4, 2025
There appear to be substantial slowdowns with the optimization (presumably both for fitting and candidate generation) with scipy 1.15: pytorch#2668
This pins the scipy version to <1.15 for now to avoid these slowdowns and terrible user experiences (at least on the `main` branch) until we've figured out the root cause of these slowdowns.
facebook-github-bot pushed a commit that referenced this issue Jan 4, 2025
Summary:
There appear to be substantial slowdowns with the optimization (presumably both for fitting and candidate generation) with scipy 1.15: #2668

This pins the scipy version to <1.15 for now to avoid these slowdowns and terrible user experiences (at least on the `main` branch) until we've figured out the root cause of these slowdowns.

Pull Request resolved: #2669

Reviewed By: saitcakmak

Differential Revision: D67826874

Pulled By: Balandat

fbshipit-source-id: 9562cb8ea8f25cc51165931fa4335191e6e4b34c
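
The pin itself amounts to an upper bound of the form scipy<1.15 in the package requirements. As an illustration only (this helper is hypothetical and not part of BoTorch), an environment could surface the same constraint at import time like so:

from packaging.version import Version
import scipy
import warnings

# Hypothetical guard mirroring the temporary pin: warn if the environment
# resolved a scipy version known to trigger the regression in #2668.
if Version(scipy.__version__) >= Version("1.15"):
    warnings.warn(
        f"scipy {scipy.__version__} may cause substantial BoTorch slowdowns "
        "(see https://github.com/pytorch/botorch/issues/2668); "
        "consider installing scipy<1.15 until this is resolved."
    )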
jduerholt added a commit to experimental-design/bofire that referenced this issue Jan 15, 2025
BoTorch is slowed down massively by scipy 1.15: pytorch/botorch#2668. We should fix it.
bertiqwerty pushed a commit to experimental-design/bofire that referenced this issue Jan 15, 2025
BoTorch is slowed down massively by scipy 1.15: pytorch/botorch#2668. We should fix it.
dlinzner-bcs pushed a commit to experimental-design/bofire that referenced this issue Jan 20, 2025
BoTorch is slowed down massively by scipy 1.15: pytorch/botorch#2668. We should fix it.
dlinzner-bcs added a commit to experimental-design/bofire that referenced this issue Jan 28, 2025
saitcakmak (Contributor) commented:

Resolved in #2712
