Wrong result 8bit blockwise quantization over float16 #1540
Labels: bug (something isn't working), high priority (first issues that will be worked on), Low Risk (risk of bugs in transformers and other libraries), x64 CPU
System Info
Ubuntu 24.04
Reproduction
The following simple script yields an all-zero dequantized result for an all-ones float16 input, then crashes:

Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)

```python
import torch
from bitsandbytes.functional import quantize_blockwise, dequantize_blockwise

if __name__ == "__main__":
    # Minimal reconstruction of the repro (the original script body was
    # truncated): quantize, then dequantize, an all-ones float16 tensor.
    A = torch.ones(1024, dtype=torch.float16)
    quantized, quant_state = quantize_blockwise(A)
    out = dequantize_blockwise(quantized, quant_state)
    print(out)  # expected all ones; observed all zeros, then SIGSEGV
```
Expected behavior
With torch.float32 input, the same script gives the correct all-ones result and exits cleanly:

Process finished with exit code 0
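For reference, 8-bit blockwise quantization is dtype-independent in exact arithmetic, so float16 input should round-trip the same as float32. Below is a minimal pure-NumPy sketch of blockwise absmax quantization illustrating the expected behavior; it uses plain linear absmax scaling, whereas bitsandbytes uses a non-linear dynamic quantization code, so this is an illustration of the technique, not the library's algorithm:

```python
import numpy as np

def quantize_blockwise_ref(x, blocksize=256):
    """Sketch of 8-bit blockwise absmax quantization (linear scaling)."""
    flat = x.astype(np.float32).ravel()
    pad = (-flat.size) % blocksize          # pad so the length divides evenly
    flat = np.pad(flat, (0, pad))
    blocks = flat.reshape(-1, blocksize)
    absmax = np.abs(blocks).max(axis=1, keepdims=True)  # one scale per block
    absmax[absmax == 0] = 1.0               # avoid division by zero
    q = np.round(blocks / absmax * 127).astype(np.int8)
    return q, absmax, x.shape, pad

def dequantize_blockwise_ref(q, absmax, shape, pad):
    """Invert the sketch above: rescale int8 codes by each block's absmax."""
    out = (q.astype(np.float32) / 127.0) * absmax
    out = out.ravel()
    if pad:
        out = out[:-pad]
    return out.reshape(shape)

# An all-ones float16 input round-trips to all ones regardless of input dtype.
x = np.ones((1, 512), dtype=np.float16)
q, absmax, shape, pad = quantize_blockwise_ref(x)
deq = dequantize_blockwise_ref(q, absmax, shape, pad)
```

Under this reference math, the dequantized output for an all-ones input is all ones for both float16 and float32, which is why the observed all-zero float16 result points at a bug in the CPU kernel rather than at the quantization scheme itself.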