# migration of `quantize_` workflow configuration from callables to configs #1690
## summary
This issue tracks the migration of `quantize_` per-workflow configuration from callables to configs. We are migrating the way `quantize_` workflows are configured from callables (tensor subclass inserters) to direct configuration (config objects). Motivation: align with the rest of the ecosystem, enable inspection of configs after instantiation, and remove a common source of confusion.

What is changing:
Specifically, here is how the second argument of `quantize_` will change:

1. The argument name changed from `apply_tensor_subclass` to `config`. Since the vast majority of callsites today are passing in configuration with a positional argument, this change should not affect most people.
2. The argument type will change from `Callable[[torch.nn.Module], torch.nn.Module]` to `config: AOBaseConfig`, following a deprecation process detailed below.
3. Workflow config names will change from snake case (`int8_weight_only`) to camel case (`Int8WeightOnlyConfig`). All argument names for each config are kept as-is. We will keep the old snake case names (`int8_weight_only`) around and alias them to the new names (`int8_weight_only = Int8WeightOnlyConfig`) to avoid breaking callsites (see the sketch after the table). We plan to keep the old names forever. Here are all the workflow config name changes:

| old name | new name |
| --- | --- |
| `int4_weight_only` | `Int4WeightOnlyConfig` |
| `float8_dynamic_activation_float8_weight` | `Float8DynamicActivationFloat8WeightConfig` |
| `float8_static_activation_float8_weight` | `Float8StaticActivationFloat8WeightConfig` |
| `float8_weight_only` | `Float8WeightOnlyConfig` |
| `fpx_weight_only` | `FPXWeightOnlyConfig` |
| `gemlite_uintx_weight_only` | `GemliteUIntXWeightOnlyConfig` |
| `int4_dynamic_activation_int4_weight` | `Int4DynamicActivationInt4WeightConfig` |
| `int8_dynamic_activation_int4_weight` | `Int8DynamicActivationInt4WeightConfig` |
| `int8_dynamic_activation_int8_semi_sparse_weight` | |
| `int8_dynamic_activation_int8_weight` | `Int8DynamicActivationInt8WeightConfig` |
| `int8_weight_only` | `Int8WeightOnlyConfig` |
| `uintx_weight_only` | `UIntXWeightOnlyConfig` |
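A quick sketch of what the aliasing means in practice, assuming the alias is the plain assignment shown above:

```python
from torchao.quantization import Int8WeightOnlyConfig, int8_weight_only

# the old snake_case name and the new config class are the same object,
# so existing imports and calls keep working unchanged
assert int8_weight_only is Int8WeightOnlyConfig
```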
Configuration for prototype workflows using `quantize_` will be migrated at a later time. `sparsify_` will be migrated in a similar fashion at a later time.

How these changes can affect you:
- If you are calling `quantize_` API workflows and are passing in config by a positional argument (`quantize_(model, int8_weight_only(group_size=128))`), you are not affected. This syntax will keep working going forward. You have the option to migrate your callsite to the new config name (`quantize_(model, Int8WeightOnlyConfig(group_size=128))`) at your own pace.
- If you are calling `quantize_` API workflows and are passing in config by a keyword argument (`quantize_(model, tensor_subclass_inserter=int8_weight_only(group_size=128))`), your callsite will break. You will need to change it to `quantize_(model, config=int8_weight_only(group_size=128))`. We don't expect many people to be in this bucket.
- If you are a developer of a workflow using the `quantize_` API, you will need to use the new configuration system; see this issue (#1690) for details.
- If you are using `sparsify_`, you are not affected for now; a similar change will happen in a future version of torchao.

This migration will be a two-step process:
1. We will keep the old callable syntax supported by `quantize_` for one release cycle, and delete it afterwards.
2. We will keep the old names as aliases for the new names going forward (example: `int4_weight_only` as an alias of `Int4WeightOnlyConfig`) to keep existing callsites working without changes.

## impact on API users
If you are just using the torchao `quantize_` API as specified in the README, this is not BC-breaking. For example, the following syntax will keep working.
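A minimal sketch of such a callsite (the model here is an arbitrary example):

```python
import torch
from torchao.quantization import int8_weight_only, quantize_

model = torch.nn.Sequential(torch.nn.Linear(1024, 1024))

# old-style spelling: still accepted, and will keep working going forward
quantize_(model, int8_weight_only(group_size=128))
```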
Note that the type of the object created by `int8_weight_only()` will change from a Callable to a config. You have the option to migrate to the explicit config creation, as follows:
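(A sketch reusing the model from the previous example:)

```python
from torchao.quantization import Int8WeightOnlyConfig, quantize_

# explicit config object; argument names are unchanged from the old spelling
quantize_(model, Int8WeightOnlyConfig(group_size=128))
```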
## user facing API changes

### signature of `quantize_`
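A sketch of the before/after signatures, based on the description above (other parameters of `quantize_`, such as the module filter, are omitted, and the `AOBaseConfig` import path is an assumption):

```python
from typing import Callable

import torch
from torchao.core.config import AOBaseConfig  # assumed import path

# before: the second argument is a callable that inserts tensor subclasses
def quantize_(
    model: torch.nn.Module,
    apply_tensor_subclass: Callable[[torch.nn.Module], torch.nn.Module],
) -> None: ...

# after: the second argument is a config object describing the workflow
def quantize_(  # redefinition is intentional in this sketch
    model: torch.nn.Module,
    config: AOBaseConfig,
) -> None: ...
```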
### usage example

An example for `int4_weight_only`:
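(A sketch; the `group_size` value and the CUDA/bfloat16 setup are illustrative, since the int4 weight-only kernels generally expect bfloat16 weights on a CUDA device:)

```python
import torch
from torchao.quantization import Int4WeightOnlyConfig, quantize_

model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).to(torch.bfloat16).cuda()

# new camel-case config name; int4_weight_only remains available as an alias
quantize_(model, Int4WeightOnlyConfig(group_size=32))
```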
## developer facing changes

See the PR details for examples, but they can be summarized as:
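A hedged sketch of the registration pattern (the config class here is hypothetical, and `register_quantize_module_handler` with its import path reflects torchao's registration API at the time of writing; treat both paths as assumptions): each workflow defines a config that inherits from `AOBaseConfig`, then registers a handler that transforms a module according to that config.

```python
from dataclasses import dataclass

import torch
from torchao.core.config import AOBaseConfig  # assumed import path
from torchao.quantization.transform_module import (
    register_quantize_module_handler,  # assumed import path
)

# hypothetical workflow config: a plain dataclass holding the knobs
@dataclass
class MyWorkflowConfig(AOBaseConfig):
    group_size: int = 128

# the handler tells quantize_ how to transform a module for this config
@register_quantize_module_handler(MyWorkflowConfig)
def _my_workflow_transform(
    module: torch.nn.Module, config: MyWorkflowConfig
) -> torch.nn.Module:
    # swap the module's weights for quantized tensor subclasses here,
    # then return the transformed module
    return module
```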
## migration status

- `quantize_` non-prototype workflow configuration
- `quantize_` prototype workflow configuration
  - Grep for callsites: `grep -r "quantize_(" torchao/prototype`
  - the `quantize_` used here is a different function, so nothing to do
- experimental
- `sparsify_`: migrate `sparsify_` to configs #1856
- tutorials (replace with new registration API)
- replace docblocks and public facing descriptions with new names
- verify partner integrations still work: confirmed two out of three here: vkuzo/pytorch_scripts#28
- delete old path (one version after migration): see `config` argument #1861