
Add LogIntToFloat transform #3091

Closed
wants to merge 3 commits

Conversation

saitcakmak
Contributor

Summary:
This is a simple subclass of `IntToFloat` that only transforms log-scale parameters.

Replacing `IntToFloat` with `LogIntToFloat` will avoid unnecessary use of continuous relaxation across the board, and allow us to utilize the various optimizers available in `Acquisition.optimize`.
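
For illustration, a minimal sketch of what such a subclass could look like; the `_get_transform_parameters` hook and the exact wiring are assumptions for this sketch rather than the actual Ax implementation:

```python
# Illustrative sketch only -- the real Ax transform may hook into IntToFloat differently.
from ax.core.parameter import ParameterType, RangeParameter
from ax.core.search_space import SearchSpace
from ax.modelbridge.transforms.int_to_float import IntToFloat


class LogIntToFloat(IntToFloat):
    """Relax only log-scale integer RangeParameters to floats."""

    # Hypothetical hook: assumes the parent class selects the parameters to
    # transform via an overridable method.
    def _get_transform_parameters(self, search_space: SearchSpace) -> set[str]:
        return {
            name
            for name, p in search_space.parameters.items()
            if isinstance(p, RangeParameter)
            and p.parameter_type == ParameterType.INT
            and p.log_scale
        }
```

Used in a transform list, such a class would simply take the place of `IntToFloat` wherever only log-scale parameters should be relaxed.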

Differential Revision: D66244582

@facebook-github-bot added the `CLA Signed` label on Nov 20, 2024
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D66244582

codecov-commenter commented Nov 20, 2024

Codecov Report

Attention: Patch coverage is 99.29078% with 1 line in your changes missing coverage. Please review.

Project coverage is 95.50%. Comparing base (a44cb2f) to head (d7ab572).

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| ax/models/torch/botorch_modular/surrogate.py | 96.55% | 1 Missing ⚠️ |
Additional details and impacted files
@@           Coverage Diff            @@
##             main    #3091    +/-   ##
========================================
  Coverage   95.49%   95.50%            
========================================
  Files         504      504            
  Lines       50404    50514   +110     
========================================
+ Hits        48131    48241   +110     
  Misses       2273     2273            


saitcakmak added a commit to saitcakmak/Ax that referenced this pull request Nov 20, 2024
saitcakmak added a commit to saitcakmak/Ax that referenced this pull request Nov 26, 2024
Summary:

This is a simple subclass of `IntToFloat` that only transforms log-scale parameters.

Replacing `IntToFloat` with `LogIntToFloat` will avoid unnecessary use of continuous relaxation across the board, and allow us to utilize the various optimizers available in `Acquisition.optimize`.


Additional context:
With log-scale parameters, we have two options: transform them in Ax or transform them in BoTorch. Transforming them in Ax means the parameter is both modeled and optimized in the log scale (good), while transforming them in BoTorch means modeling in the log scale but optimizing in the raw scale (not ideal), and it also introduces `TransformedPosterior` along with the incompatibilities it brings. So, we want to transform log-scale parameters in Ax.
Since the log of an integer parameter is no longer an integer, log-scale integer parameters have to be relaxed to floats. But we don't want to relax any other integer parameters, so we don't want to use `IntToFloat`. `LogIntToFloat` makes it possible to use continuous relaxation only for the log-scale parameters, which is a step in the right direction.
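
To make the relaxation concrete, a small numeric illustration (values are made up; the rounding step is a simplified stand-in for how relaxed values are mapped back to integers when untransforming):

```python
import math

# An integer parameter with bounds [1, 1000] and log_scale=True (illustrative values).
lower, upper = 1, 1000
log_lower, log_upper = math.log10(lower), math.log10(upper)  # 0.0, 3.0

# A candidate proposed in log space is generally not the log of an integer,
# so during modeling/optimization the parameter must be treated as a float ...
candidate_log = 1.7
raw_value = 10 ** candidate_log  # ~50.12 -- not an integer

# ... and mapped back to a valid integer when untransforming to the raw space.
final_value = round(raw_value)  # 50
```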

Differential Revision: D66244582
saitcakmak and others added 3 commits December 2, 2024 06:50
Summary:
Pull Request resolved: facebook#3129

From the introduction of `ModelConfig` up to the removal of deprecated args as attributes in D66553624, we have serialized both `model_configs` and the deprecated args (replaced with their default values) when serializing `surrogate_spec`. This is fine as long as the default values remain unchanged.

D65622788 changes the default value for `input_transform_classes`, which means the old defaults that were serialized now appear as modified values. Deserializing `SurrogateSpec` from the old jsons now errors out since it encounters both non-default deprecated args and `model_configs`.

This diff resolves this issue by discarding the deprecated args from the json if `model_configs` is also found.

Differential Revision: D66558559

Reviewed By: Balandat
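
A minimal sketch of the workaround described above, assuming the serialized `SurrogateSpec` is handled as a plain dict; the helper and the list of deprecated keys are illustrative, not Ax's actual decoding code:

```python
# Illustrative only: which args count as "deprecated" and how the JSON is keyed
# are assumptions for this sketch.
DEPRECATED_SURROGATE_SPEC_ARGS = (
    "botorch_model_class",
    "mll_class",
    "covar_module_class",
    "input_transform_classes",
)


def clean_surrogate_spec_json(spec_json: dict) -> dict:
    """Drop deprecated args that were serialized alongside model_configs,
    so stale defaults don't conflict with the configs on deserialization."""
    if spec_json.get("model_configs"):
        for key in DEPRECATED_SURROGATE_SPEC_ARGS:
            spec_json.pop(key, None)
    return spec_json
```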
Summary:
Pull Request resolved: facebook#3102

Input normalization is important to get the best performance out of the BoTorch models we use. The current setup relies on either using `UnitX` transform from Ax, or manually adding `Normalize` to `ModelConfig.input_transform_classes` to achieve input normalization.
- `UnitX` is not ideal since it only applies to float valued `RangeParameters`. If we make everything into floats to use `UnitX`, we're locked into using continuous relaxation for acquisition optimization, which is something we want to move away from.
- `Normalize` works well, particularly when `bounds` argument is provided (It's applied at each pass through the model, rather than once to the training data, but that's a separate discussion). However, requiring it as an explicit user input is a bit cumbersome.

This diff adds the machinery for constructing a default set of input transforms. This implementation retains the previous `InputPerturbation` transform for robust optimization, and adds the `Normalize` transform if the non-task features of the search space are not normalized.

With this change, we should be able to remove `UnitX` transform from an MBM model(spec) without losing input normalization.

Other considerations:
- This setup only adds the default transforms if the `input_transform_classes` argument is left as `DEFAULT`. If the user supplies `input_transform_classes` or sets it to `None`, no defaults will be used. Would we want to add defaults even when the user supplies some transforms? If so, how would we decide whether to append or prepend the defaults?
- As mentioned above, applying `Normalize` at each pass through the model is not super efficient. A vectorized application of an Ax transform should generally be more efficient. A longer term alternative would be to expand Ax-side `UnitX` to support more general parameter classes and types, without losing information in the process. This would require additional changes such as support for non-integer valued discrete `RangeParameters`, and support for non-integer discrete values in the mixed optimizer.

Differential Revision: D65622788

Reviewed By: Balandat
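
A rough sketch of the default-transform construction described above; the helper name, signature, and decision rule are assumptions for illustration rather than the exact logic in this diff:

```python
# Illustrative sketch: adds Normalize only when the search-space bounds are not
# already the unit cube; any existing transforms (e.g. InputPerturbation for
# robust optimization) are kept in front.
import torch
from botorch.models.transforms.input import InputTransform, Normalize


def default_input_transforms(
    bounds: torch.Tensor,  # 2 x d tensor of search-space bounds
    existing: list[InputTransform] | None = None,
) -> list[InputTransform]:
    transforms = list(existing or [])
    already_normalized = bool(
        torch.all(bounds[0] == 0.0) and torch.all(bounds[1] == 1.0)
    )
    if not already_normalized:
        transforms.append(Normalize(d=bounds.shape[-1], bounds=bounds))
    return transforms
```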
Summary:
Pull Request resolved: facebook#3091

This is a simple subclass of `IntToFloat` that only transforms log-scale parameters.

Replacing `IntToFloat` with `LogIntToFloat` will avoid unnecessary use of continuous relaxation across the board, and allow us to utilize the various optimizers available in `Acquisition.optimize`.

Additional context:
With log-scale parameters, we have two options: transform them in Ax or transform them in BoTorch. Transforming them in Ax means the parameter is both modeled and optimized in the log scale (good), while transforming them in BoTorch means modeling in the log scale but optimizing in the raw scale (not ideal), and it also introduces `TransformedPosterior` along with the incompatibilities it brings. So, we want to transform log-scale parameters in Ax.
Since the log of an integer parameter is no longer an integer, log-scale integer parameters have to be relaxed to floats. But we don't want to relax any other integer parameters, so we don't want to use `IntToFloat`. `LogIntToFloat` makes it possible to use continuous relaxation only for the log-scale parameters, which is a step in the right direction.

Differential Revision: D66244582
@facebook-github-bot
Contributor

This pull request has been merged in 258400c.

Labels: CLA Signed, fb-exported, Merged