Issues: abetlen/llama-cpp-python
- #1967 — GPU Support Missing in Version >=0.3.5 on Windows with CUDA 12.4 and RTX 3090 (opened Mar 9, 2025 by mcglynnfinn)
- #1965 — Issue with Installing llama-cpp-python 0.3.7: Dependency Problems with scikit-build-core (opened Mar 7, 2025 by lcnmzz00)
- #1964 — After offloading all layers onto the GPU, the RAM used for model loading is not released (opened Mar 7, 2025 by MATII13T)
- #1963 — Steps to Build and Install llama-cpp-python 0.3.7 w/CUDA on Windows 11 [06/03/2025] (opened Mar 6, 2025 by VigneshRajan-AMRC)
- #1956 — Could not install llama-cpp-python 0.3.7 on MacBook Air M1 - compilation issue (opened Mar 2, 2025 by vietanhdev)
- #1946 — The results generated differ from those produced by running the llama.cpp library directly (opened Feb 25, 2025 by HengruiZYP)
- #1944 — CUDA Memory Allocation Failure and mlock Memory Lock Issue in llama-cpp-python (opened Feb 24, 2025 by caiyuanhangDicp)
- #1943 — CMake build failed: "Building wheel for llama-cpp-python (pyproject.toml) ... error" (opened Feb 23, 2025 by dw5189)
- #1938 — Specifying additional_files for model files in a directory adds an extra copy of the directory to the download URL (opened Feb 17, 2025 by zhudotexe)