Issues: bitsandbytes-foundation/bitsandbytes
FSDP2 integration: torch.chunks(Params4bit) not returning Par...
#1424, opened Nov 21, 2024 by mreso (open, 1 comment)
[RFC] PyTorch Custom Operators & Multi-Backend Support
#1545, opened Feb 27, 2025 by matthewdouglas (open, 2 comments)
Issues list
[chore] fix continuous release not working correctly
#1558, opened Mar 6, 2025 by Titus-von-Koeller · labels: chore · 3 tasks
[spike] evaluate + prototype interaction of unified memory abstraction with custom_ops
#1556, opened Mar 5, 2025 by Titus-von-Koeller · labels: Optimizers (issues or feature requests relating to optimizers), spike · 3 tasks
Improve stack trace when C library does not load
labels: contributions-welcome (we welcome contributions to fix this issue!), Low Risk (risk of bugs in transformers and other libraries), medium priority (will be worked on after all high priority issues)
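The improvement this issue asks for could be sketched roughly as follows; `load_native_lib` is a hypothetical helper for illustration, not bitsandbytes' actual loader. The idea is to catch the ctypes load failure and re-raise with a message that names the shared library, so the stack trace is actionable instead of surfacing much later as an unrelated error.

```python
import ctypes

def load_native_lib(path):
    # Hypothetical helper (not bitsandbytes' real loader): wrap the ctypes
    # load so a failure produces a message naming the library and the likely
    # remedies, rather than a bare OSError deep in import machinery.
    try:
        return ctypes.CDLL(path)
    except OSError as exc:
        raise RuntimeError(
            f"Could not load native library at {path!r}; check that it was "
            f"built for this platform and that CUDA/ROCm is installed"
        ) from exc
```

The `from exc` clause keeps the original OSError in the traceback, so no diagnostic information is lost by re-raising.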
[RFC] PyTorch Custom Operators & Multi-Backend Support
labels: cross-platform, high priority (first issues that will be worked on), RFC (request for comments on proposed library improvements)
non_sign_bits miscalculation in the function 'create_dynamic_map' in bitsandbytes/functional.py
#1541, opened Feb 26, 2025 by Talel-bm · labels: high priority (first issues that will be worked on), Low Risk (risk of bugs in transformers and other libraries)
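For context on the title: in a signed quantization code, one bit is reserved for the sign, so only the remaining bits carry magnitude (exponent and fraction) information. A minimal illustration of that relationship, with an illustrative function name rather than the actual `create_dynamic_map` code:

```python
def non_sign_bits(total_bits: int = 8) -> int:
    # Illustrative sketch, not the actual bitsandbytes implementation: a
    # signed code reserves one bit for the sign, so total_bits - 1 bits
    # remain for magnitude information.
    return total_bits - 1

# An 8-bit signed code therefore has 7 non-sign bits, giving 2**7 = 128
# distinct magnitudes per sign.
```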
Wrong result for 8-bit blockwise quantization over float16
labels: bug (something isn't working), high priority (first issues that will be worked on), Low Risk (risk of bugs in transformers and other libraries)
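For readers unfamiliar with the scheme named in the title, blockwise absmax quantization scales each block of values by its absolute maximum and rounds to a signed 8-bit range. The following is a hedged pure-Python sketch of that idea, not the library's CUDA implementation:

```python
def quantize_blockwise(values, block_size=4):
    # Sketch of 8-bit blockwise absmax quantization (illustrative, not the
    # bitsandbytes kernel): each block is scaled by its absolute maximum and
    # rounded into the signed range [-127, 127]. The per-block absmax is
    # stored so the values can be rescaled on dequantization.
    codes, absmaxes = [], []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        absmax = max(abs(v) for v in block) or 1.0  # avoid divide-by-zero
        absmaxes.append(absmax)
        codes.extend(round(v / absmax * 127) for v in block)
    return codes, absmaxes

def dequantize_blockwise(codes, absmaxes, block_size=4):
    # Inverse mapping: rescale each code by its block's stored absmax.
    return [c / 127 * absmaxes[i // block_size] for i, c in enumerate(codes)]
```

Because only one value per block is stored at full precision (the absmax), errors in computing or indexing the absmax corrupt every value in that block, which is one plausible shape for the bug the title describes.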
The main issue seems to be that the main CUDA runtime library was not detected
#1536, opened Feb 21, 2025 by wdh7022 · labels: x64 CPU, CUDA Setup (likely not a BNB issue), Linux, low priority (will be worked on after all high priority issues), Low Risk (risk of bugs in transformers and other libraries), question (further information is requested)
Ascend NPU installation
#1523, opened Feb 18, 2025 by LoSunny · labels: Ascend NPU (related to Ascend NPU backend), documentation (improvements or additions to documentation), Low Risk (risk of bugs in transformers and other libraries)
Multiple issues installing for AMD GPU (Radeon RX7600XT)
#1519, opened Feb 16, 2025 by mcondarelli · labels: documentation (improvements or additions to documentation), high priority (first issues that will be worked on), Low Risk (risk of bugs in transformers and other libraries), ROCm
torch.version.cuda.split error
labels: bug (something isn't working), high priority (first issues that will be worked on), ROCm
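A plausible cause of this error, given the ROCm label (hedged, since the issue body is not shown): `torch.version.cuda` is a string such as "12.1" on CUDA builds of PyTorch but `None` on CPU-only and ROCm builds, so calling `.split()` on it raises an AttributeError. A defensive parse might look like this (`parse_cuda_version` is an illustrative helper, not a bitsandbytes function):

```python
def parse_cuda_version(version_str):
    # torch.version.cuda is None on CPU-only and ROCm builds of PyTorch,
    # so guard before calling .split() on it.
    if version_str is None:
        return None
    major, minor = version_str.split(".")[:2]
    return int(major), int(minor)
```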
Default to building the C extension automatically when installing from source
#1511, opened Feb 11, 2025 by mgorny · labels: build, contributions-welcome (we welcome contributions to fix this issue!), high priority (first issues that will be worked on)
The output logits of Qwen-7B under 8-bit quantization contain NaN
#1504, opened Feb 9, 2025 by Kairong-Han · labels: proposing to close, waiting for info
Cannot load pre-quantized Janus Pro 7B
#1498, opened Jan 30, 2025 by neilmehta24 · labels: bug (something isn't working), medium priority (will be worked on after all high priority issues), Medium Risk (risk of bugs in transformers and other libraries)
Int8 pipeline parallelism
#1482, opened Jan 22, 2025 by psinger · labels: High Risk (risk of bugs in transformers and other libraries), medium priority (will be worked on after all high priority issues)
Cannot install multi-backend (ROCm): wheel version does not match filename
#1473, opened Jan 9, 2025 by RSWilli · labels: ROCm
AttributeError: 'NoneType' object has no attribute 'cquantize_blockwise_fp16_fp4'
#1467, opened Jan 3, 2025 by hessaAlawwad
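This traceback pattern usually means the compiled extension never loaded, leaving the native library handle as `None`, so every subsequent native-function lookup fails with a confusing AttributeError. A hypothetical guard (`require_native_lib` is illustrative, not a bitsandbytes API) that surfaces the root cause directly:

```python
def require_native_lib(lib):
    # Illustrative guard, not a bitsandbytes API: if the compiled extension
    # failed to load, the handle is None and attribute access such as
    # lib.cquantize_blockwise_fp16_fp4 raises AttributeError. Failing early
    # with an explicit message points users at the real problem.
    if lib is None:
        raise RuntimeError(
            "native bitsandbytes library is not loaded; "
            "verify the CUDA/ROCm installation and rebuild if needed"
        )
    return lib
```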
I am trying to install Forge UI but it's getting stuck at this
#1466, opened Jan 1, 2025 by MetaYousuf · labels: CUDA Setup
[Ascend][Bug] Failure to run inference with a 4-bit quantized model
#1465, opened Dec 28, 2024 by MengqingCao · labels: Ascend NPU (related to Ascend NPU backend)