
Commit 7763e1d

vkuzo authored and facebook-github-bot committed on Sep 26, 2020
quant docs: document how to customize qconfigs in eager mode (pytorch#45306)
Summary:
Pull Request resolved: pytorch#45306

Adds details to the main quantization doc on how users can skip or customize quantization of specific layers.

Test Plan: Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D23917034

Pulled By: vkuzo

fbshipit-source-id: ccf71ce4300c1946b2ab63d1f35a07691fd7a2af
1 parent eb39624 commit 7763e1d

File tree

1 file changed: +8 −2 lines changed


docs/source/quantization.rst (+8 −2)
@@ -402,9 +402,15 @@ prior to quantization. This is because currently quantization works on a module
 by module basis. Specifically, for all quantization techniques, the user needs to:
 
 1. Convert any operations that require output requantization (and thus have
-   additional parameters) from functionals to module form.
+   additional parameters) from functionals to module form (for example,
+   using ``torch.nn.ReLU`` instead of ``torch.nn.functional.relu``).
 2. Specify which parts of the model need to be quantized either by assigning
-   ``.qconfig`` attributes on submodules or by specifying ``qconfig_dict``
+   ``.qconfig`` attributes on submodules or by specifying ``qconfig_dict``.
+   For example, setting ``model.conv1.qconfig = None`` means that the
+   ``model.conv1`` layer will not be quantized, and setting
+   ``model.linear1.qconfig = custom_qconfig`` means that the quantization
+   settings for ``model.linear1`` will use ``custom_qconfig`` instead
+   of the global qconfig.
 
 For static quantization techniques which quantize activations, the user needs
 to do the following in addition:
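
A minimal sketch of the behavior the added doc text describes, assuming eager-mode quantization on a toy model; the model definition and submodule names (conv1, relu1, linear1) are illustrative assumptions, while torch.quantization.get_default_qconfig and torch.quantization.default_qconfig are existing PyTorch APIs:

import torch
import torch.nn as nn
import torch.quantization

# Toy model for illustration. Ops that require output requantization are
# in module form (nn.ReLU rather than torch.nn.functional.relu), per step 1.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 1, 1)
        self.relu1 = nn.ReLU()
        self.linear1 = nn.Linear(4, 4)

    def forward(self, x):
        x = self.relu1(self.conv1(x))
        return self.linear1(x.flatten(1))

model = M()

# Global qconfig for the whole model.
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')

# Per step 2: skip quantization of conv1 by clearing its qconfig ...
model.conv1.qconfig = None

# ... and quantize linear1 with a custom qconfig instead of the global one.
custom_qconfig = torch.quantization.default_qconfig
model.linear1.qconfig = custom_qconfig

torch.quantization.prepare and torch.quantization.convert would then honor these per-module settings when quantizing the model.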
