Add LogIntToFloat transform #3091
Conversation
This pull request was exported from Phabricator. Differential Revision: D66244582
Codecov Report — Attention: Patch coverage is

```
@@           Coverage Diff           @@
##             main    #3091   +/-  ##
=======================================
  Coverage   95.49%   95.50%
=======================================
  Files         504      504
  Lines       50404    50514   +110
=======================================
+ Hits        48131    48241   +110
  Misses       2273     2273
```

☔ View full report in Codecov by Sentry.
Summary: This is a simple subclass of `IntToFloat` that only transforms log-scale parameters. Replacing `IntToFloat` with `LogIntToFloat` will avoid unnecessary use of continuous relaxation across the board, and allow us to utilize the various optimizers available in `Acquisition.optimize`.

Differential Revision: D66244582
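For illustration, here is a minimal sketch of the idea. It assumes the Ax `Transform` constructor signature and the `transform_parameters` attribute that `IntToFloat` computes; the class name is hypothetical and the actual implementation in Ax may differ:

```python
from ax.modelbridge.transforms.int_to_float import IntToFloat


class LogIntToFloatSketch(IntToFloat):
    """Hypothetical sketch: relax only integer parameters marked log-scale."""

    def __init__(self, search_space=None, observations=None, modelbridge=None, config=None):
        super().__init__(
            search_space=search_space,
            observations=observations,
            modelbridge=modelbridge,
            config=config,
        )
        # IntToFloat collects the integer RangeParameters it will relax in
        # `transform_parameters`; keep only those that use a log scale, so
        # all other integer parameters avoid continuous relaxation.
        self.transform_parameters = {
            p_name
            for p_name in self.transform_parameters
            if getattr(search_space.parameters[p_name], "log_scale", False)
        }
```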
Force-pushed from 8cfe4e6 to 6664c3b.
Force-pushed from 6664c3b to d10894f.
Force-pushed from d10894f to fe5a620.
Summary: Pull Request resolved: facebook#3129

From the introduction of `ModelConfig` up to the removal of deprecated args as attributes in D66553624, we serialized both `model_configs` and the deprecated args (replaced with their default values) when serializing `surrogate_spec`. This is fine as long as the default values remain unchanged. D65622788 changes the default value for `input_transform_classes`, which means the old defaults that were serialized now appear as modified values. Deserializing `SurrogateSpec` from the old JSONs now errors out, since it encounters both non-default deprecated args and `model_configs`.

This diff resolves the issue by discarding the deprecated args from the JSON if `model_configs` is also present.

Differential Revision: D66558559
Reviewed By: Balandat
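A hypothetical sketch of the decoding fix described above; the helper function and the list of deprecated fields are illustrative, not Ax's actual serialization code:

```python
def clean_surrogate_spec_json(spec_json: dict) -> dict:
    """Drop legacy per-argument fields when `model_configs` is present,
    so stale serialized defaults are not mistaken for user overrides."""
    # Illustrative subset of the deprecated SurrogateSpec arguments.
    deprecated_args = ("botorch_model_class", "input_transform_classes")
    if spec_json.get("model_configs"):
        for arg in deprecated_args:
            spec_json.pop(arg, None)
    return spec_json
```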
Summary: Pull Request resolved: facebook#3102

Input normalization is important for getting the best performance out of the BoTorch models we use. The current setup relies on either using the `UnitX` transform from Ax, or manually adding `Normalize` to `ModelConfig.input_transform_classes`, to achieve input normalization.

- `UnitX` is not ideal, since it only applies to float-valued `RangeParameters`. If we make everything into floats to use `UnitX`, we are locked into continuous relaxation for acquisition optimization, which is something we want to move away from.
- `Normalize` works well, particularly when the `bounds` argument is provided (it is applied at each pass through the model, rather than once to the training data, but that is a separate discussion). However, requiring it as explicit user input is cumbersome.

This diff adds the machinery for constructing a default set of input transforms, as sketched below. The implementation retains the previous `InputPerturbation` transform for robust optimization, and adds the `Normalize` transform if the non-task features of the search space are not normalized. With this change, we should be able to remove the `UnitX` transform from an MBM model(spec) without losing input normalization.

Other considerations:

- This setup only adds the default transforms if the `input_transform_classes` argument is left as `DEFAULT`. If the user supplies `input_transform_classes` or sets it to `None`, no defaults are used. Would we want to add defaults even when the user supplies some transforms? If so, how would we decide whether to append or prepend the defaults?
- As mentioned above, applying `Normalize` at each pass through the model is not especially efficient. A vectorized application of an Ax transform should generally be more efficient. A longer-term alternative would be to expand the Ax-side `UnitX` to support more general parameter classes and types without losing information in the process. This would require additional changes, such as support for non-integer-valued discrete `RangeParameters` and support for non-integer discrete values in the mixed optimizer.

Differential Revision: D65622788
Reviewed By: Balandat
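A rough sketch of how such defaults could be constructed. `Normalize` and `InputPerturbation` are real BoTorch input transforms; the helper name and the simplified unit-cube check are assumptions for illustration:

```python
import torch
from botorch.models.transforms.input import InputPerturbation, Normalize


def default_input_transforms(
    bounds: torch.Tensor, perturbation_set: torch.Tensor | None = None
):
    """bounds: `2 x d` tensor of lower/upper bounds for the non-task features.
    perturbation_set: optional `n_w x d` perturbations for robust optimization."""
    transforms = []
    # Retain InputPerturbation when doing robust optimization.
    if perturbation_set is not None:
        transforms.append(InputPerturbation(perturbation_set=perturbation_set))
    # Add Normalize only if the features are not already in the unit cube.
    if not (torch.all(bounds[0] == 0) and torch.all(bounds[1] == 1)):
        transforms.append(Normalize(d=bounds.shape[-1], bounds=bounds))
    return transforms
```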
Summary: Pull Request resolved: facebook#3091

This is a simple subclass of `IntToFloat` that only transforms log-scale parameters. Replacing `IntToFloat` with `LogIntToFloat` will avoid unnecessary use of continuous relaxation across the board, and allow us to utilize the various optimizers available in `Acquisition.optimize`.

Additional context: With log-scale parameters, we have two options: transform them in Ax or transform them in BoTorch. Transforming them in Ax means the parameter is both modeled and optimized in log scale (good), whereas transforming in BoTorch means modeling in log scale but optimizing in the raw scale (not ideal), and also introduces `TransformedPosterior` and the incompatibilities it brings. So we want to transform log-scale parameters in Ax. Since the log of an int parameter is no longer an int, we have to relax them. But we don't want to relax any other int parameters, so we don't want to use `IntToFloat`. `LogIntToFloat` makes it possible to use continuous relaxation only for the log-scale parameters, which is a step in the right direction.

Differential Revision: D66244582
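To see why relaxation is unavoidable here, a tiny standalone example (plain Python, independent of Ax) of the log round trip for an integer parameter:

```python
import math


def relax(value: int) -> float:
    # The log of an integer is generally not an integer, so once a parameter
    # is transformed to log scale it must be treated as continuous.
    return math.log(value)


def unrelax(log_value: float) -> int:
    # Going back to the raw scale requires exponentiating and rounding.
    return round(math.exp(log_value))


assert unrelax(relax(128)) == 128
```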
Force-pushed from fe5a620 to d7ab572.
This pull request has been merged in 258400c.