Name and Version
$ ./llama-cli --version
ggml_backend_load_best: search path does not exist
(the line above is repeated 13 times)
version: 4894 (8ae58b3)
built with cc (GCC) 15.0.1 20250223 (experimental) for x86_64-pc-cygwin
Operating systems
Windows
Which llama.cpp modules do you know to be affected?
llama-cli
Command line
./llama-cli -m /cygdrive/z/zhouwg/gemma-2b.Q8_0.gguf -p "pls tell me 1+1=?"
Problem description & steps to reproduce
Recently I tried a new approach to building llama.cpp with Cygwin on 64-bit Windows, without Microsoft's huge IDE; details can be found in draft PR #12215. The inference result is not correct as expected:
llama-cli-result-not-correct-on-cygwin-x86-64-windows.mp4
I don't understand the reason, and help from the community would be greatly appreciated. Thanks.
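For reference, the Cygwin build presumably follows llama.cpp's standard CMake flow, just using the Cygwin-packaged toolchain instead of MSVC. This is only a rough sketch under that assumption; the exact configuration used is in the draft PR and is not reproduced here:

```shell
# Configure and build llama.cpp from a Cygwin shell, using the
# Cygwin-provided gcc and cmake (installed via Cygwin's setup tool).
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j

# The resulting binary reports the Cygwin target triple seen above:
./build/bin/llama-cli --version
# e.g. "built with cc (GCC) ... for x86_64-pc-cygwin"
```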
First Bad Commit
No response
Relevant log output