Bug: MinGW build fails to load models with "error loading model: PrefetchVirtualMemory unavailable" #9311
Comments
I believe the issue was introduced when the source was reorganized into folders. I'm able to work around it by adding the following to the top-level build file, so I believe the equivalent code is missing from some of the per-folder build files.
I had a similar problem. I just replaced #define _WIN32_WINNT 0x0601 with #define _WIN32_WINNT 0x0A00 in the <_mingw.h> file and it worked.
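Both workarounds above point at the same mechanism: the Windows headers only declare PrefetchVirtualMemory when _WIN32_WINNT is at least 0x0602 (Windows 8), so a MinGW toolchain whose <_mingw.h> defaults to 0x0601 (Windows 7) builds a loader without the prefetch path. A minimal sketch of that guard pattern follows; it illustrates the idea and is not llama.cpp's exact loader code:

```cpp
// Sketch only: shows why the value of _WIN32_WINNT decides at compile time
// whether the prefetch path exists. Not the project's actual implementation.
#include <cstddef>
#include <stdexcept>
#include <windows.h>

static void prefetch_mapping(void * addr, std::size_t len) {
#if _WIN32_WINNT >= 0x0602
    // Windows 8+ target: the header declares PrefetchVirtualMemory, so the
    // memory-mapped model file can be prefetched as a performance hint.
    WIN32_MEMORY_RANGE_ENTRY range;
    range.VirtualAddress = addr;
    range.NumberOfBytes  = (SIZE_T) len;
    PrefetchVirtualMemory(GetCurrentProcess(), 1, &range, 0);
#else
    // Windows 7 target (0x0601), e.g. the old default in <_mingw.h>: the
    // symbol is not declared at all, and the loader has to bail out -- the
    // "PrefetchVirtualMemory unavailable" error reported in this issue.
    (void) addr; (void) len;
    throw std::runtime_error("PrefetchVirtualMemory unavailable");
#endif
}
```

Raising the target to 0x0602 or above (whether through the build system or by editing the MinGW default, as in the comment above) switches the build onto the first branch.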
This issue was closed because it has been inactive for 14 days since being marked as stale.
Why was this closed? I hit the same error on the current latest commit and fixed it using @homenkovo's solution.
Having the same issue with b4628; @homenkovo's solution fixes it.
No need to modify any files; a single command-line argument is enough.
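This presumably refers to disabling memory mapping: llama-cli accepts a --no-mmap flag, and the C API exposes the same switch as llama_model_params::use_mmap, which skips the mmap prefetch path entirely. Whether that is the argument the commenter meant is an assumption. A minimal sketch, assuming a recent llama.h:

```cpp
// Hedged sketch, assuming a recent llama.h (llama_model_default_params,
// llama_model_load_from_file, llama_model_free): with use_mmap disabled the
// loader never reaches the PrefetchVirtualMemory code path.
#include "llama.h"
#include <cstdio>

int main(int argc, char ** argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
        return 1;
    }

    llama_backend_init();

    llama_model_params params = llama_model_default_params();
    params.use_mmap = false;   // same effect as passing --no-mmap to llama-cli

    llama_model * model = llama_model_load_from_file(argv[1], params);
    if (model == nullptr) {
        std::fprintf(stderr, "failed to load model: %s\n", argv[1]);
        llama_backend_free();
        return 1;
    }

    llama_model_free(model);
    llama_backend_free();
    return 0;
}
```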
This did not work for me on Windows 11, using a Python script. There is more than enough memory; it happens when loading to either GPU or CPU.
Thanks, it worked for me.
What happened?
llama-cli and llama-bench at rev. 9379d3c built with MinGW fail to load models.

Name and Version
What operating system are you seeing the problem on?
Windows
Relevant log output
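For anyone reproducing this, a quick way to check whether a given MinGW toolchain targets a pre-Windows-8 API level (the condition identified in the comments above) is to print _WIN32_WINNT. This is an illustrative sketch, not part of the original report:

```cpp
// Sketch: report the Windows API level the toolchain targets by default.
// PrefetchVirtualMemory requires _WIN32_WINNT >= 0x0602 (Windows 8).
#include <cstdio>
#include <windows.h>

int main() {
#ifdef _WIN32_WINNT
    std::printf("_WIN32_WINNT = 0x%04X (%s)\n", (unsigned) _WIN32_WINNT,
                _WIN32_WINNT >= 0x0602 ? "PrefetchVirtualMemory can be used"
                                       : "too old for PrefetchVirtualMemory");
#else
    std::printf("_WIN32_WINNT is not defined by this toolchain\n");
#endif
    return 0;
}
```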