Align AvgPool ceil_mode on last value to torch #16752

Conversation

titaiwangms
Contributor

@titaiwangms titaiwangms commented Jul 18, 2023

Fix #16203

Prior to this PR, when `ceil_mode` is on, the last value along each dimension was computed by dividing by the full kernel size even when the remaining pixels were fewer than the kernel size, which caused this operator to differ between ORT and torch.

However, this fix only applies to the change in #15597, which supports AvgPool only since opset 19. Older opset versions remain the same, as they use the MLAS files.

Also, the PR fixes the shape mismatch caused by a sliding window starting in the padding. More detail: onnx/onnx#6650 (this PR is also validated with the tests added in onnx/onnx#6650).
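
For illustration, a minimal repro sketch of that last-value discrepancy (a toy 1-D case using PyTorch, not the actual ORT kernel code; the tensor values are made up):

```python
import torch

# Shape (N=1, C=1, L=5); kernel_size=2, stride=2, no padding.
x = torch.tensor([[[1.0, 2.0, 3.0, 4.0, 5.0]]])

# ceil_mode=True gives ceil((5 - 2) / 2) + 1 = 3 outputs; the last window
# covers only the final element (index 4).
pool = torch.nn.AvgPool1d(kernel_size=2, stride=2, ceil_mode=True)
print(pool(x))  # torch divides that last window by its 1 valid element -> 5.0

# An implementation that always divides by the kernel size would instead give
# 5.0 / 2 = 2.5 for that last value, which is the kind of mismatch in #16203.
```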

@titaiwangms titaiwangms added the converter (related to ONNX converters) and core runtime (issues related to core runtime) labels Jul 18, 2023
justinchuby
justinchuby previously approved these changes Jul 18, 2023
@titaiwangms titaiwangms changed the title align ceil_mode on last value to torch Align ceil_mode on last value to torch Jul 18, 2023
@titaiwangms titaiwangms changed the title Align ceil_mode on last value to torch Align AvgPool ceil_mode on last value to torch Jul 18, 2023
titaiwangms added a commit to microsoft/onnxscript that referenced this pull request Jul 19, 2023
1D and 3D

The [fix](microsoft/onnxruntime#16752) covers the op from opset 19 onward, but not
before it, so an xfail is created when ceil_mode=True

---------

Co-authored-by: Justin Chu <[email protected]>
@titaiwangms titaiwangms requested a review from justinchuby July 20, 2023 14:59
@justinchuby
Contributor

Was this discrepancy clarified by the ONNX spec? Was it due to an implementation oversight of the spec? Hopefully the spec itself was not different than torch?

titaiwangms added a commit to pytorch/pytorch that referenced this pull request Jul 20, 2023
… shapes"


In #87892, to pick up the corner cases found in #71549, the PR fell back the AvgPool implementation to the opset-9 approach. However, it introduced a regression on the dynamic shape cases found in #101397. This PR refactors the AvgPool op with the same implementation we have in onnxscript: microsoft/onnxscript#754.

However, the corner case with `ceil_mode` remains unsolved in onnxruntime: microsoft/onnxruntime#16203. The calculation of the last value along each dimension differs between ORT and PyTorch. The fix is available in microsoft/onnxruntime#16752, and it supports AvgPool from opset 19 onward.

[ghstack-poisoned]
pytorchmergebot pushed a commit to pytorch/pytorch that referenced this pull request Jul 21, 2023
In #87892, to pick up the corner cases found in #71549, the PR fell back the AvgPool implementation to the opset-9 approach. However, it introduced a regression on the dynamic shape cases found in #101397. This PR refactors the AvgPool op with the same implementation we have in onnxscript: microsoft/onnxscript#754.

However, the corner case with `count_include_pad` remains unsolved in onnxruntime: microsoft/onnxruntime#16203. The calculation of the last value along each dimension differs between ORT and PyTorch. The fix is available in microsoft/onnxruntime#16752, and it supports AvgPool from opset 19 onward.
Pull Request resolved: #105683
Approved by: https://github.com/thiagocrepaldi
justinchuby
justinchuby previously approved these changes Aug 8, 2023
@baijumeswani
Contributor

Is this PR still relevant?

@titaiwangms
Contributor Author

titaiwangms commented Nov 7, 2023

Hi @baijumeswani, yes. But this PR only works on the pooling op after opset 18. Could you help us add this support for older versions as well? Basically, ceil_mode is effectively turned off whenever there is right padding.
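
For context, a rough sketch of the output-length rule being referred to, assuming the torch/ONNX convention that a window produced by `ceil_mode` is dropped when it would start entirely inside the right padding (names and numbers are illustrative):

```python
import math

def pooled_len(input_len, kernel, stride, pad_begin, pad_end, ceil_mode):
    """Output length of one pooled dimension (sketch, not library code)."""
    if not ceil_mode:
        return (input_len + pad_begin + pad_end - kernel) // stride + 1
    out = math.ceil((input_len + pad_begin + pad_end - kernel) / stride) + 1
    # Drop the extra window if it would start past the last valid input
    # position, i.e. entirely inside the right padding.
    if (out - 1) * stride >= input_len + pad_begin:
        out -= 1
    return out

# With right padding, the extra ceil_mode window starts in the pads and is
# dropped, so ceil_mode ends up making no difference here:
print(pooled_len(4, kernel=2, stride=2, pad_begin=0, pad_end=1, ceil_mode=True))   # 2
print(pooled_len(4, kernel=2, stride=2, pad_begin=0, pad_end=1, ceil_mode=False))  # 2
```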

@titaiwangms titaiwangms merged commit 8b1d3b3 into microsoft:main Jan 24, 2025
100 checks passed
github-merge-queue bot pushed a commit to onnx/onnx that referenced this pull request Jan 24, 2025
### Description
Pooling in the reference implementation currently has two bugs:

(1) It has ["pads required"](https://github.com/onnx/onnx/blob/e292b4ae6d016c3231a801bfeb26f802ba95d82a/onnx/reference/ops/op_pool_common.py#L53)
to make sure the sliding window does not go out of bounds, but it does not
exclude these required pads from the pooling calculations. This causes
microsoft/onnxruntime#16203

(2) With `ceil_mode`, the reference pooling pads the input image but does
not check whether the sliding window starts on the pads. Once we reach the
end of the image, the window should stop. This causes
pytorch/pytorch#131272. Not directly, but there is a
related fix for MaxPooling: #5741.
Detail:
pytorch/pytorch#116420 (comment)
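
As a rough illustration of fix (1), a toy NumPy sketch (an assumed helper, not the
reference-implementation code): the divisor should count only positions inside the
original input, not the extra pads required to keep the window in bounds.

```python
import numpy as np

def avg_window(x, start, kernel):
    """Average one 1-D window, ignoring positions outside the original input."""
    idx = np.arange(start, start + kernel)
    valid = idx[(idx >= 0) & (idx < len(x))]
    # Divide by the number of valid positions, not by the kernel size.
    return x[valid].sum() / len(valid)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(avg_window(x, start=4, kernel=2))  # 5.0, not 5.0 / 2 = 2.5
```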

### Motivation and Context
This PR fixes the two bugs and updates their tests accordingly. I also
drafted a PR to onnxruntime to apply this update in the CPU provider:
microsoft/onnxruntime#16752

---------

Signed-off-by: titaiwangms <[email protected]>
Signed-off-by: Justin Chu <[email protected]>
Signed-off-by: Andreas Fehlner <[email protected]>
Co-authored-by: Justin Chu <[email protected]>
Co-authored-by: Andreas Fehlner <[email protected]>
andife added a commit to onnx/onnx that referenced this pull request Jan 25, 2025
ashrit-ms pushed a commit that referenced this pull request Feb 11, 2025
ashrit-ms added a commit that referenced this pull request Feb 11, 2025
### Description
This PR is to update the win-ort-main branch to the tip main branch as
of 2025-02-11.

### PR List
74c778e [WebNN EP] Automatically move input CPU tensors to ml-tensor
(#23073)
3775057 use correct total length to fix static kv_cache performance
(#23615)
3901e96 remove --use_vcpkg flag for Python-CUDA-Packaging-Pipeline
(#23631)
c610df5 Add python_requires to package metadata (#23604)
2d27d68 [QNN EP] Add QNN EP to ARM64X build targets (#23635)
e666503 [webgpu] no longer need pass-in gpu adapter for custom
context (#23593)
af679a0 Fix logic for selecting alternate name for blob (#23617)
e206950 [ARM CPU] Add fp16 mlas kernels for exp, tanh, softmax,
logsoftmax, softcap (#23597)
9ba5619 Update pybind and json to the latest (#23589)
c54736c Migrate iOS release pipeline to 1 ES (#23606)
3981326 Increase timeout for Windows TensorRT CI (#23625)
0274b7b fix on trtCudaVersion (#23616)
740e9ab update run CI script (#23621)
5ef1832 [WebGPU] Support PIX Capture for WebGPU EP (#23192)
0114551 Fix for C4267 warning (#23610)
002916a Validate the context_file_path before EP compile graphs
(#23611)
0887e36 [webgpu] Use pushErrorScope()/popErrorScope() once for an
inference run (#23438)
65008cb Auto-generated baselines by 1ES Pipeline Templates (#23603)
09e5724 [CUDA] Fix beam search of num_beams > 32 (#23599)
82840f6 Implement Flash Attention 2 for webgpu EP (#23576)
a6ea57b OpenVINO EP Weights Sharing Feature (#23553)
2c2ff4a [CUDA] Fix BeamSearchTest.DummyT5WithSequenceInputIds test
failure in Windows (#23596)
d981b15 [webgpu/js] Optimize resize webgpu op & fix precision issues
(#23591)
328a13c Enable VCPKG in more pipelines (#23590)
6728d60 [TensorRT EP] support TensorRT 10.8-GA (#23592)
d1fb58b Quantization tool: Allow user to override calibrator's
session EP (#23559)
649ced4 Enable user loading model with external data from memory
buffer (#23557)
544bdd6 Fix ConvTranspose for certain attribute combinations (#23488)
8f6ddf3 Delete extra cgmanifest entries and files (#23583)
5f6a315 Enable VCPKG in CI build (#23426)
e1e3f62 Bump lintrunner from 0.12.5 to 0.12.7 (#23326)
cd8775f Fix Node JS Samples (#23581)
6b4f9c4 [WebGPU EP] Batch Norm Implementation (#23525)
1fce51b Fix all instances of 4244 and 4267 warnings in OV EP code
(#23567)
c29ca1c Update QNN default version to 2.31 (#23573)
2fc75a4 [mobile] Add Android BrowserStack test project back (#23551)
9e18b6a [CUDA] Update nvcc flags (#23572)
b47e1e6 [QNN EP] Make offloading graph input/output quantization (to
CPU) the default (#23368)
75a9b40 [ROCm] Update CI to use rocm 6.3.2 (#23577)
26ff2b6 Bump ruff from 0.9.3 to 0.9.4 (#23563)
b2560a7 Update react-native to 0.72 (#23509)
faee912 [js] update JavaScript API to support QNN EP options (#23486)
816e8cb [EP Perf] Update env to ubuntu 22.04 (#23570)
cddc271 Use Eigen in Round implementation (#23571)
e8b0bdb Shape inference: ReduceMean dispatcher, quant_pre_process:
skip_symbolic_shape bugfix (#23558)
267b493 delete the supported domain version upper bounds (#23237)
bb7f961 remove log spam from cpuinfo (#23548)
169917b Use latest vcpkg commit in configuration, sync manifest with
deps.txt (#23554)
a9d4d08 Add of ReduceMax Gradient (#23501)
6bbf1bd [js/web] upgrade version of flatbuffers (#23545)
271c509 DP4AMatMul perf refinements (#23539)
cb69c59 Add fusions for SigLIP and Conformer-Encoder (#23528)
61fae9b Remove "--enable_pybind" from webgpu pipeline (#23550)
0bb4ea6 Update BiasGelu fusion and related ops (#23518)
4dde74a Add more details to BrowserStack script failure (#23520)
ead9d5c Set ANDROID_USE_LEGACY_TOOLCHAIN_FILE to false (#23544)
7e24088 Enable dlpack by default (#23110)
dc2f7a9 Add overload of `TryParseStringWithClassicLocale()` that uses
`std::from_chars()` (#23541)
5407c69 Fix the issue that the new generated EP context model not
able to find external data (#23537)
fbae88f [js/web] use the recommended workaround for Vite (#23531)
d5338da Fix tensor external data info length parsing issue. (#23526)
e3e4173 [ROCm EP] Fix transpose helper for gfx gridsize constraints
(#23527)
80bc1d2 Enable Ep context with external data for CPU nodes (#23498)
bf023ab [js/web] allow import .mjs/.wasm file (#23487)
655a23f [onnxruntime/build] Add new flag enable_generic_interface to
build primary EPs by default (#23342)
a770a8d Update RN to 0.71.19 (#23381)
1cf0ebd Delete Prefast workflow until the build failure is fixed
(#23510)
d2c5e24 Add of GlobalMaxPool Gradient (#23502)
ded8730 Remove thrust::unary_function (#23506)
8db97a6 [webgpu] Bump version of Dawn to b9b4a370 (#23494)
fdde2e2 Fix for gcc 13.3.1: Avoid creating a copy (#23500)
96ec1dd Bump ruff from 0.9.2 to 0.9.3 (#23496)
42f0c00 Adds the new System.Numerics.Tensors as an input/output type
when using dotnet 8.0 and up. (#23261)
97c2bbe Fix shape infer of onnx GroupNorm (#23477)
1fc9c48 Enable coremltools for Linux build (#23481)
13348c5 [ARM CPU] hgemm optimized for gqa (#23107)
c89a798 Enable opti on Microsoft.ML.OnnxRuntime with RelWithDebInfo
config (#23463)
d00ae32 Revert "[Mobile] Add BrowserStack Android MAUI Test (#23383)"
(#23474)
8b1d3b3 Align AvgPool ceil_mode on last value to torch (#16752)
06fc73b [TRT EP Perf Tool] Add annotations import to python script to
support annotations on Python 3.8 (#23466)

### Motivation and Context
This update includes the change to add QNN EP to ARM64X build targets.

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Adrian Lizarraga <[email protected]>
Co-authored-by: Ti-Tai Wang <[email protected]>
Co-authored-by: Caroline Zhu <[email protected]>
Co-authored-by: Grégoire <[email protected]>
Co-authored-by: Jing Fang <[email protected]>
Co-authored-by: Changming Sun <[email protected]>
Co-authored-by: Yateng Hong <[email protected]>
Co-authored-by: Michael Sharp <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Malik Shahzad Muzaffar <[email protected]>
Co-authored-by: Yulong Wang <[email protected]>
Co-authored-by: Dmitri Smirnov <[email protected]>
Co-authored-by: Corentin Maravat <[email protected]>
Co-authored-by: Jian Chen <[email protected]>
Co-authored-by: Karim Vadsariya <[email protected]>
Co-authored-by: Lei Cao <[email protected]>
Co-authored-by: Karim Vadsariya <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Hector Li <[email protected]>
Co-authored-by: Ted Themistokleous <[email protected]>
Co-authored-by: Ted Themistokleous <[email protected]>
Co-authored-by: Edward Chen <[email protected]>
Co-authored-by: Takeshi Watanabe <[email protected]>
Co-authored-by: Xavier Dupré <[email protected]>
Co-authored-by: Justin Chu <[email protected]>
Co-authored-by: Tianlei Wu <[email protected]>
Co-authored-by: kunal-vaishnavi <[email protected]>
Co-authored-by: Sushanth Rajasankar <[email protected]>
Co-authored-by: PARK DongHa <[email protected]>
Co-authored-by: George Wu <[email protected]>
Co-authored-by: Xinpeng Dou <[email protected]>
Co-authored-by: Jambay Kinley <[email protected]>
Co-authored-by: Yifan Li <[email protected]>
Co-authored-by: Gavin Kinsey <[email protected]>
Co-authored-by: Prathik Rao <[email protected]>
Co-authored-by: Jon Campbell <[email protected]>
Co-authored-by: Satya Kumar Jandhyala <[email protected]>
Co-authored-by: Joshua Lochner <[email protected]>
Co-authored-by: Ankit Maheshkar <[email protected]>
Co-authored-by: jatinwadhwa921 <[email protected]>
Co-authored-by: jatinwadhwa921 <[email protected]>
Co-authored-by: saurabh <[email protected]>
Co-authored-by: TejalKhade28 <[email protected]>
Co-authored-by: sfatimar <[email protected]>
Co-authored-by: Javier E. Martinez <[email protected]>
Co-authored-by: Preetha Veeramalai <[email protected]>
Co-authored-by: Eric Crawford <[email protected]>
Co-authored-by: microsoft-github-policy-service[bot] <77245923+microsoft-github-policy-service[bot]@users.noreply.github.com>
Co-authored-by: Jie Chen <[email protected]>
Co-authored-by: shaoboyan091 <[email protected]>
Co-authored-by: David Hotham <[email protected]>
Co-authored-by: Guenther Schmuelling <[email protected]>
Co-authored-by: Enrico Galli <[email protected]>
seungwoo-ji-03 pushed a commit to seungwoo-ji-03/onnx that referenced this pull request Feb 17, 2025
guschmue pushed a commit that referenced this pull request Mar 6, 2025