
[CUDA] Preload dependent DLLs #23674

Merged
merged 16 commits into main from tlwu/load_cuda_dlls on Feb 15, 2025
Conversation

Contributor

@tianleiwu tianleiwu commented Feb 13, 2025

Description

Changes:
(1) Pass --cuda_version from the packaging pipeline to the build-wheel command line so that cuda_version can be saved. Note that cuda_version is also required for generating extra_require for #23659.
(2) Update setup.py and onnxruntime_validation.py to save the CUDA version to capi/build_and_package_info.py.
(3) Add a helper function to preload dependent DLLs (MSVC, CUDA, cuDNN) in __init__.py. It first tries to load DLLs from the nvidia site packages, then tries to load the remaining DLLs with the default path settings.

import onnxruntime
onnxruntime.preload_dlls()

To show the loaded DLLs, set verbose=True. It is also possible to disable loading certain types of DLLs, for example:

onnxruntime.preload_dlls(cuda=False, cudnn=False, msvc=False, verbose=True)
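For illustration, a minimal sketch of what such a preload helper could look like on Windows is shown below. This is not the actual onnxruntime implementation; the helper name, the nvidia site-packages directory layout, and the DLL names are assumptions.

```
# Minimal sketch of a DLL preloader on Windows (illustration only).
import ctypes
import os
import site


def preload_from_nvidia_site_packages(dll_names):
    """Try each DLL from the nvidia site packages first, then fall back to the
    default Windows DLL search path."""
    search_dirs = []
    for root in site.getsitepackages():
        nvidia_root = os.path.join(root, "nvidia")
        if not os.path.isdir(nvidia_root):
            continue
        for pkg in os.listdir(nvidia_root):
            bin_dir = os.path.join(nvidia_root, pkg, "bin")
            if os.path.isdir(bin_dir):
                search_dirs.append(bin_dir)

    for name in dll_names:
        candidates = [os.path.join(d, name) for d in search_dirs
                      if os.path.exists(os.path.join(d, name))]
        try:
            # Loading by full path pins that copy; otherwise use the default search order.
            ctypes.WinDLL(candidates[0] if candidates else name)
        except OSError:
            pass  # let onnxruntime report a clearer error later if the DLL is truly missing


preload_from_nvidia_site_packages(
    ["cudart64_12.dll", "cublas64_12.dll", "cublasLt64_12.dll", "cudnn64_9.dll"]
)
```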

PyTorch and onnxruntime in Windows

When working with PyTorch, onnxruntime will reuse the CUDA and cuDNN DLLs already loaded by PyTorch as long as the CUDA and cuDNN major versions are compatible. Preloading DLLs might actually cause issues on Windows (see examples 2 and 3 below).
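As a quick sanity check before relying on the DLLs that torch has already loaded, the CUDA major versions can be compared. This is a small sketch, assuming the build info module exposes a cuda_version attribute as described in change (2) above:

```
import torch
from onnxruntime.capi import build_and_package_info

# Compare CUDA major versions of the torch build and the onnxruntime build.
torch_cuda_major = (torch.version.cuda or "0").split(".")[0]
ort_cuda_major = str(getattr(build_and_package_info, "cuda_version", "0")).split(".")[0]
print("compatible" if torch_cuda_major == ort_cuda_major else "CUDA major versions differ")
```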

Example 1: onnxruntime and torch can work together easily.

>>> import torch
>>> import onnxruntime
>>> session = onnxruntime.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
>>> onnxruntime.preload_dlls(cuda=False, cudnn=False, msvc=False, verbose=True)
----List of loaded DLLs----
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\curand64_10.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cufft64_11.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn_heuristic64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn_engines_precompiled64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn_ops64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn_adv64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cublasLt64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cublas64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\nvrtc64_120_0.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\nvrtc-builtins64_124.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn_engines_runtime_compiled64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn_cnn64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn_graph64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\numpy.libs\msvcp140-d64049c6e3865410a7dda6a7e9f0c575.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudart64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn64_9.dll
D:\anaconda3\envs\py310\msvcp140.dll
D:\anaconda3\envs\py310\msvcp140_1.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cufftw64_11.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\caffe2_nvrtc.dll
D:\anaconda3\envs\py310\vcruntime140_1.dll
D:\anaconda3\envs\py310\vcruntime140.dll
>>> session.get_providers()
['CUDAExecutionProvider', 'CPUExecutionProvider']

Example 2: Calling preload_dlls after `import torch` is not necessary. Unfortunately, it can result in multiple DLLs with the same filename being loaded. They can be used in parallel, but this is not ideal since more memory is used.

>>> import torch
>>> import onnxruntime
>>> onnxruntime.preload_dlls(verbose=True)
----List of loaded DLLs----
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cufft\bin\cufft64_11.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cublas\bin\cublas64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cublas\bin\cublasLt64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\curand64_10.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cufft64_11.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn_heuristic64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn_engines_precompiled64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn_ops64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn_adv64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cublasLt64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cublas64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\nvrtc64_120_0.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\nvrtc-builtins64_124.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn_engines_runtime_compiled64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn_cnn64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn_graph64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_graph64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cuda_runtime\bin\cudart64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\numpy.libs\msvcp140-d64049c6e3865410a7dda6a7e9f0c575.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudart64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cudnn64_9.dll
D:\anaconda3\envs\py310\msvcp140_1.dll
D:\anaconda3\envs\py310\msvcp140.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\cufftw64_11.dll
D:\anaconda3\envs\py310\Lib\site-packages\torch\lib\caffe2_nvrtc.dll
D:\anaconda3\envs\py310\vcruntime140_1.dll
D:\anaconda3\envs\py310\vcruntime140.dll

Example 3: Calling preload_dlls before `import torch` might cause a torch import error on Windows. Later we may provide an option to load DLLs from the torch directory to avoid this issue.

>>> import onnxruntime
>>> onnxruntime.preload_dlls(verbose=True)
----List of loaded DLLs----
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cufft\bin\cufft64_11.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cublas\bin\cublas64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cublas\bin\cublasLt64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_graph64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cuda_runtime\bin\cudart64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\numpy.libs\msvcp140-d64049c6e3865410a7dda6a7e9f0c575.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn64_9.dll
D:\anaconda3\envs\py310\msvcp140.dll
D:\anaconda3\envs\py310\vcruntime140_1.dll
D:\anaconda3\envs\py310\msvcp140_1.dll
D:\anaconda3\envs\py310\vcruntime140.dll
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\anaconda3\envs\py310\lib\site-packages\torch\__init__.py", line 137, in <module>
    raise err
OSError: [WinError 127] The specified procedure could not be found. Error loading "D:\anaconda3\envs\py310\lib\site-packages\torch\lib\cudnn_adv64_9.dll" or one of its dependencies.

PyTorch and onnxruntime in Linux

On Linux, PyTorch loads CUDA and cuDNN from the nvidia site packages, so preloading DLLs consistently loads the same set of libraries, which makes environments easier to maintain.

>>> import onnxruntime
>>> onnxruntime.preload_dlls(verbose=True)
----List of loaded DLLs----
/anaconda3/envs/py310/lib/python3.10/site-packages/nvidia/cudnn/lib/libcudnn.so.9
/anaconda3/envs/py310/lib/python3.10/site-packages/nvidia/cudnn/lib/libcudnn_graph.so.9
/anaconda3/envs/py310/lib/python3.10/site-packages/nvidia/cuda_runtime/lib/libcudart.so.12
/anaconda3/envs/py310/lib/python3.10/site-packages/nvidia/cufft/lib/libcufft.so.11
/anaconda3/envs/py310/lib/python3.10/site-packages/nvidia/curand/lib/libcurand.so.10
/anaconda3/envs/py310/lib/python3.10/site-packages/nvidia/cuda_nvrtc/lib/libnvrtc.so.12
/anaconda3/envs/py310/lib/python3.10/site-packages/nvidia/cublas/lib/libcublas.so.12
/anaconda3/envs/py310/lib/python3.10/site-packages/nvidia/cublas/lib/libcublasLt.so.12
>>> import torch
>>> torch.rand(3, 3).cuda()
tensor([[0.4619, 0.0279, 0.2092],
        [0.0416, 0.6782, 0.5889],
        [0.9988, 0.9092, 0.7982]], device='cuda:0')
>>> session = onnxruntime.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
>>> session.get_providers()
['CUDAExecutionProvider', 'CPUExecutionProvider']
>>> import torch
>>> import onnxruntime
>>> session = onnxruntime.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
>>> onnxruntime.preload_dlls(cuda=False, cudnn=False, msvc=False, verbose=True)
----List of loaded DLLs----
/cuda12.8/targets/x86_64-linux/lib/libnvrtc.so.12.8.61
/cudnn9.7/lib/libcudnn_graph.so.9.7.0
/anaconda3/envs/py310/lib/python3.10/site-packages/nvidia/cublas/lib/libcublasLt.so.12
/anaconda3/envs/py310/lib/python3.10/site-packages/nvidia/cublas/lib/libcublas.so.12
/anaconda3/envs/py310/lib/python3.10/site-packages/nvidia/curand/lib/libcurand.so.10
/anaconda3/envs/py310/lib/python3.10/site-packages/nvidia/cufft/lib/libcufft.so.11
/anaconda3/envs/py310/lib/python3.10/site-packages/nvidia/cudnn/lib/libcudnn.so.9
/anaconda3/envs/py310/lib/python3.10/site-packages/nvidia/cuda_runtime/lib/libcudart.so.12

Without preloading DLLs, onnxruntime will load CUDA and cuDNN DLLs based on LD_LIBRARY_PATH. Torch will reuse the same DLLs loaded by onnxruntime:

>>> import onnxruntime
>>> session = onnxruntime.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
>>> onnxruntime.preload_dlls(cuda=False, cudnn=False, msvc=False, verbose=True)
----List of loaded DLLs----
/cuda12.8/targets/x86_64-linux/lib/libnvrtc.so.12.8.61
/cuda12.8/targets/x86_64-linux/lib/libcufft.so.11.3.3.41
/cuda12.8/targets/x86_64-linux/lib/libcurand.so.10.3.9.55
/cuda12.8/targets/x86_64-linux/lib/libcublas.so.12.8.3.14
/cuda12.8/targets/x86_64-linux/lib/libcublasLt.so.12.8.3.14
/cudnn9.7/lib/libcudnn_graph.so.9.7.0
/cudnn9.7/lib/libcudnn.so.9.7.0
/cuda12.8/targets/x86_64-linux/lib/libcudart.so.12.8.57
>>> import torch
>>> onnxruntime.preload_dlls(cuda=False, cudnn=False, msvc=False, verbose=True)
----List of loaded DLLs----
/cuda12.8/targets/x86_64-linux/lib/libnvrtc.so.12.8.61
/cuda12.8/targets/x86_64-linux/lib/libcufft.so.11.3.3.41
/cuda12.8/targets/x86_64-linux/lib/libcurand.so.10.3.9.55
/cuda12.8/targets/x86_64-linux/lib/libcublas.so.12.8.3.14
/cuda12.8/targets/x86_64-linux/lib/libcublasLt.so.12.8.3.14
/cudnn9.7/lib/libcudnn_graph.so.9.7.0
/cudnn9.7/lib/libcudnn.so.9.7.0
/cuda12.8/targets/x86_64-linux/lib/libcudart.so.12.8.57
>>> torch.rand(3, 3).cuda()
tensor([[0.2233, 0.9194, 0.8078],
        [0.0906, 0.2884, 0.3655],
        [0.6249, 0.2904, 0.4568]], device='cuda:0')
>>> onnxruntime.preload_dlls(cuda=False, cudnn=False, msvc=False, verbose=True)
----List of loaded DLLs----
/cuda12.8/targets/x86_64-linux/lib/libnvrtc.so.12.8.61
/cuda12.8/targets/x86_64-linux/lib/libcufft.so.11.3.3.41
/cuda12.8/targets/x86_64-linux/lib/libcurand.so.10.3.9.55
/cuda12.8/targets/x86_64-linux/lib/libcublas.so.12.8.3.14
/cuda12.8/targets/x86_64-linux/lib/libcublasLt.so.12.8.3.14
/cudnn9.7/lib/libcudnn_graph.so.9.7.0
/cudnn9.7/lib/libcudnn.so.9.7.0
/cuda12.8/targets/x86_64-linux/lib/libcudart.so.12.8.57
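For reference, the loaded-library listings above can be reproduced on Linux by scanning /proc/self/maps for CUDA and cuDNN shared objects. This is only a sketch; the actual verbose implementation may differ.

```
# List CUDA/cuDNN shared objects currently mapped into this process (Linux only).
def list_loaded_cuda_libraries():
    keywords = ("libcudart", "libcublas", "libcudnn", "libcufft", "libcurand", "libnvrtc")
    paths = set()
    with open("/proc/self/maps") as maps:
        for line in maps:
            parts = line.split()
            if len(parts) >= 6 and parts[-1].startswith("/"):
                if any(key in parts[-1] for key in keywords):
                    paths.add(parts[-1])
    return sorted(paths)


import onnxruntime

session = onnxruntime.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
for path in list_loaded_cuda_libraries():
    print(path)
```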

Motivation and Context

In many reported issues where `import onnxruntime` fails, the root cause is that dependent DLLs are missing or not on the path. This change will make it easier to resolve such issues.

This is based on Jian's PR #22506, with an extra change to load the MSVC DLLs.

#23659 can be used to install the CUDA/cuDNN DLLs into site packages. Example command line after the next official release (1.21):

pip install onnxruntime-gpu[cuda,cudnn]

If the user has installed PyTorch on Linux, those DLLs are usually installed together with torch.
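For illustration, such extras could be declared in setup.py roughly as follows. The package list is assembled from the nvidia wheels mentioned elsewhere in this PR; the actual pins are defined by #23659 and may differ.

```
# Sketch of extras_require entries for the cuda/cudnn extras (illustrative only).
extras_require = {
    "cuda": [
        "nvidia-cuda-runtime-cu12",
        "nvidia-cublas-cu12",
        "nvidia-cufft-cu12",
        "nvidia-curand-cu12",
        "nvidia-cuda-nvrtc-cu12",
    ],
    "cudnn": [
        "nvidia-cudnn-cu12",
    ],
}
```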

@tianleiwu tianleiwu requested review from snnn and jchen351 February 13, 2025 00:15
@tianleiwu tianleiwu changed the title [CUDA] Try load dependent DLLs [CUDA] Preload dependent DLLs Feb 14, 2025
Member

@snnn snnn left a comment


I published a test package to the ORT-Nightly feed and tried it. It worked well.
The package version is 1.21.0.dev20250214005.

Installation instructions:

pip3 install flatbuffers numpy packaging protobuf sympy coloredlogs
pip3 install --user --pre --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-gpu[cuda,cudnn]

@tianleiwu tianleiwu merged commit c7aa9a7 into main Feb 15, 2025
237 of 242 checks passed
@tianleiwu tianleiwu deleted the tlwu/load_cuda_dlls branch February 15, 2025 17:45
tianleiwu added a commit that referenced this pull request Feb 19, 2025
### Description

Update preload_dlls:
(1) Add a parameter `directory` to specify the DLL location.
(2) On Windows, skip loading CUDA/cuDNN DLLs when torch for CUDA 12.x has already been imported.
(3) On Windows, the default search order for CUDA/cuDNN DLLs is: the lib directory of torch for CUDA 12.x; the nvidia site packages; the default DLL loading paths. The user can use the directory parameter to change the search order. An empty string changes the search order to `nvidia site packages; default DLL loading paths`. Use a path to load DLLs from a specific location.
(4) Do not load the cuDNN sub-DLLs on Linux.

The benefit of this change is that ORT can work seamlessly with PyTorch on both Linux and Windows. We also provide an option for advanced users to load CUDA/cuDNN from a location they specify.
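A rough sketch of the search-order rules described above is shown below; the helper names and the torch-detection check are assumptions, not the actual implementation.

```
import sys


def _torch_cuda12_imported():
    """True if a torch build for CUDA 12.x has already been imported (assumed check)."""
    torch = sys.modules.get("torch")
    return torch is not None and (getattr(torch.version, "cuda", None) or "").startswith("12.")


def _resolve_search_dirs(directory, torch_lib_dir, nvidia_dirs):
    """Return candidate directories following the search-order rules above."""
    if directory is None:
        # Default: torch's lib directory (when compatible), then nvidia site packages;
        # anything not found there falls back to the default DLL search path.
        dirs = [torch_lib_dir] if torch_lib_dir else []
        dirs.extend(nvidia_dirs)
        return dirs
    if directory == "":
        # Empty string: skip torch and prefer the nvidia site packages.
        return list(nvidia_dirs)
    # Any other value: load from the specific location the caller provided.
    return [directory]
```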

### Examples in Windows

By default, preload_dlls will load the CUDA and cuDNN DLLs from PyTorch if they are compatible:
```
>>> import onnxruntime
>>> onnxruntime.preload_dlls()
>>> onnxruntime.print_debug_info()
onnxruntime-gpu version: 1.21.0
CUDA version used in build: 12.6
platform: Windows-10-10.0.22631-SP0

Python package, version and location:
onnxruntime==1.20.1 at c:\users\abcd\.conda\envs\py310\lib\site-packages\onnxruntime
onnxruntime-gpu==1.21.0 at c:\users\abcd\.conda\envs\py310\lib\site-packages\onnxruntime
WARNING: multiple onnxruntime packages are installed to the same location. Please 'pip uninstall` all above packages, then `pip install` only one of them.
torch==2.6.0+cu126 at c:\users\abcd\.conda\envs\py310\lib\site-packages\torch
nvidia-cuda-runtime-cu12==12.8.57 at c:\users\abcd\.conda\envs\py310\lib\site-packages\nvidia
nvidia-cudnn-cu12==9.7.1.26 at c:\users\abcd\.conda\envs\py310\lib\site-packages\nvidia
nvidia-cublas-cu12==12.8.3.14 at c:\users\abcd\.conda\envs\py310\lib\site-packages\nvidia
nvidia-cufft-cu12==11.3.3.41 at c:\users\abcd\.conda\envs\py310\lib\site-packages\nvidia
nvidia-curand-cu12==10.3.7.77 at c:\users\abcd\.conda\envs\py310\lib\site-packages\nvidia
nvidia-cuda-nvrtc-cu12==12.6.85 at c:\users\abcd\.conda\envs\py310\lib\site-packages\nvidia
nvidia-nvjitlink-cu12==12.8.61 at c:\users\abcd\.conda\envs\py310\lib\site-packages\nvidia

Environment variable:
PATH=c:\users\abcd\.conda\envs\py310;c:\users\abcd\.conda\envs\py310\Library\usr\bin;c:\users\abcd\.conda\envs\py310\Library\bin;c:\users\abcd\.conda\envs\py310\Scripts;c:\users\abcd\.conda\envs\py310\bin;C:\ProgramData\anaconda3\condabin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\libnvvp;C:\windows\system32;

List of loaded DLLs:
c:\users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn_adv64_9.dll
c:\users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn_engines_precompiled64_9.dll
c:\users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cufft64_11.dll
c:\users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cublasLt64_12.dll
c:\users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn_ops64_9.dll
c:\users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cublas64_12.dll
c:\users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn_heuristic64_9.dll
c:\users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn_engines_runtime_compiled64_9.dll
c:\users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn_graph64_9.dll
c:\users\abcd\.conda\envs\py310\Lib\site-packages\numpy.libs\msvcp140-263139962577ecda4cd9469ca360a746.dll
c:\users\abcd\.conda\envs\py310\msvcp140.dll
c:\users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudart64_12.dll
c:\users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn64_9.dll
c:\users\abcd\.conda\envs\py310\vcruntime140_1.dll
c:\users\abcd\.conda\envs\py310\msvcp140_1.dll
c:\users\abcd\.conda\envs\py310\vcruntime140.dll

Device information:
{
  "gpu": {
    "driver_version": "571.96",
    "devices": [
      {
        "memory_total": 8589934592,
        "memory_available": 6032777216,
        "name": "NVIDIA GeForce GTX 1080"
      }
    ]
  },
  "cpu": {
    "brand": "Intel(R) Core(TM) i9-10900X CPU @ 3.70GHz",
    "cores": 10,
    "logical_cores": 20,
    "hz": "3696000000,0",
    "l2_cache": 10485760,
    "flags": "3dnow,3dnowprefetch,abm,acpi,adx,aes,apic,avx,avx2,avx512bw,avx512cd,avx512dq,avx512f,avx512vl,avx512vnni,bmi1,bmi2,clflush,clflushopt,clwb,cmov,cx16,cx8,de,dtes64,dts,erms,est,f16c,fma,fpu,fxsr,ht,hypervisor,ia64,invpcid,lahf_lm,mca,mce,mmx,monitor,movbe,mpx,msr,mtrr,osxsave,pae,pat,pbe,pcid,pclmulqdq,pdcm,pge,pni,popcnt,pqe,pqm,pse,pse36,rdrnd,rdseed,sep,serial,smap,smep,ss,sse,sse2,sse4_1,sse4_2,ssse3,tm,tm2,tsc,tscdeadline,vme,x2apic,xsave,xtpr",
    "processor": "Intel64 Family 6 Model 85 Stepping 7, GenuineIntel"
  },
  "memory": {
    "total": 68414291968,
    "available": 40240791552
  }
}
>>> import torch
```

In the example below, we set `directory=""`, which prefers the nvidia site packages:
```
>>> import onnxruntime
>>> onnxruntime.preload_dlls(directory="")
>>> onnxruntime.print_debug_info()
...
List of loaded DLLs:
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_adv64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_ops64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_engines_precompiled64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cufft\bin\cufft64_11.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cublas\bin\cublasLt64_12.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cublas\bin\cublas64_12.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_heuristic64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_engines_runtime_compiled64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_graph64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cuda_runtime\bin\cudart64_12.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\numpy.libs\msvcp140-263139962577ecda4cd9469ca360a746.dll
C:\Users\abcd\.conda\envs\py310\msvcp140.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn64_9.dll
C:\Users\abcd\.conda\envs\py310\msvcp140_1.dll
C:\Users\abcd\.conda\envs\py310\vcruntime140_1.dll
C:\Users\abcd\.conda\envs\py310\vcruntime140.dll
...
```

In the example below, we import torch before preload_dlls. In this case, ORT skips loading CUDA/cuDNN DLLs and uses the DLLs from torch:
```
>>> import onnxruntime
>>> import torch
>>> onnxruntime.preload_dlls()
Skip loading CUDA and cuDNN DLLs since torch is imported.
>>> onnxruntime.print_debug_info()
...
List of loaded DLLs:
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\nvrtc64_120_0.alt.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\curand64_10.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cufft64_11.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn_heuristic64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn_engines_precompiled64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn_adv64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cublasLt64_12.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn_ops64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cublas64_12.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn_engines_runtime_compiled64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\nvrtc64_120_0.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\nvrtc-builtins64_126.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn_cnn64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn_graph64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudart64_12.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\numpy.libs\msvcp140-263139962577ecda4cd9469ca360a746.dll
C:\Users\abcd\.conda\envs\py310\msvcp140.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cudnn64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\cufftw64_11.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\torch\lib\caffe2_nvrtc.dll
C:\Users\abcd\.conda\envs\py310\msvcp140_1.dll
C:\Users\abcd\.conda\envs\py310\vcruntime140_1.dll
C:\Users\abcd\.conda\envs\py310\vcruntime140.dll
...
```

The last example loads CUDA and cuDNN separately from different locations. The CUDA location is based on the CUDA_PATH environment variable, and the cuDNN path is a relative path pointing to cudnn in the nvidia site packages.
```
>>> import onnxruntime
>>> import os
>>> os.environ["CUDA_PATH"]
'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8'
>>> onnxruntime.preload_dlls(cuda=True, cudnn=False, directory=os.path.join(os.environ["CUDA_PATH"], "bin"))
>>> onnxruntime.preload_dlls(cuda=False, cudnn=True, directory="..\\nvidia\\cudnn\\bin")
>>> onnxruntime.print_debug_info()
...
List of loaded DLLs:
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_adv64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_ops64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_engines_precompiled64_9.dll
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\cufft64_11.dll
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\cublasLt64_12.dll
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\cublas64_12.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_heuristic64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_engines_runtime_compiled64_9.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_graph64_9.dll
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\cudart64_12.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\numpy.libs\msvcp140-263139962577ecda4cd9469ca360a746.dll
C:\Users\abcd\.conda\envs\py310\msvcp140.dll
C:\Users\abcd\.conda\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn64_9.dll
C:\Users\abcd\.conda\envs\py310\vcruntime140_1.dll
C:\Users\abcd\.conda\envs\py310\msvcp140_1.dll
C:\Users\abcd\.conda\envs\py310\vcruntime140.dll
...
```

### Motivation and Context

This addresses the issues mentioned in the description of #23674, where onnxruntime preloading might conflict with PyTorch.

Before this change, `import torch` after `onnxruntime.preload_dlls` would cause an error:
```
>>> import onnxruntime
>>> onnxruntime.preload_dlls(verbose=True)
----List of loaded DLLs----
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cufft\bin\cufft64_11.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cublas\bin\cublas64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cublas\bin\cublasLt64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn_graph64_9.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cuda_runtime\bin\cudart64_12.dll
D:\anaconda3\envs\py310\Lib\site-packages\numpy.libs\msvcp140-d64049c6e3865410a7dda6a7e9f0c575.dll
D:\anaconda3\envs\py310\Lib\site-packages\nvidia\cudnn\bin\cudnn64_9.dll
D:\anaconda3\envs\py310\msvcp140.dll
D:\anaconda3\envs\py310\vcruntime140_1.dll
D:\anaconda3\envs\py310\msvcp140_1.dll
D:\anaconda3\envs\py310\vcruntime140.dll
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\anaconda3\envs\py310\lib\site-packages\torch\__init__.py", line 137, in <module>
    raise err
OSError: [WinError 127] The specified procedure could not be found. Error loading "D:\anaconda3\envs\py310\lib\site-packages\torch\lib\cudnn_adv64_9.dll" or one of its dependencies.
```
tianleiwu added a commit that referenced this pull request Feb 20, 2025
### Description

Update CUDA and cuDNN installation guide and preload configuration.

### Motivation and Context
See related:
#23674
#23659
@ranjitshs
Contributor

ranjitshs commented Feb 27, 2025

@tianleiwu @snnn
I started seeing the warning below while importing onnxruntime on AIX.
Is it observed in the Linux CPU case?
Let me know if you want to track this as an issue.

bash-5.2$ python3
Python 3.11.9 (main, Apr 28 2024, 11:03:03) [GCC 10.3.0] on aix
Type "help", "copyright", "credits" or "license" for more information.
>>> import onnxruntime
/home/buildusr/jenkins/workspace/onnxruntime-gcc12/onnxruntime/build/Linux/Release/onnxruntime/capi/onnxruntime_validation.py:86: UserWarning: WARNING: failed to collect package name and version info
  warnings.warn("WARNING: failed to collect package name and version info")
No module named 'onnxruntime.capi.build_and_package_info'
>>> 

@tianleiwu
Contributor Author

tianleiwu commented Feb 27, 2025

@tianleiwu @snnn I started seeing below warning while importing onnxruntime in AIX . is it observed in LINUX CPU case ? Let me know if you want to track this as issue.
No module named 'onnxruntime.capi.build_and_package_info'

The file build_and_package_info.py is generated by setup.py:
https://github.com/microsoft/onnxruntime/blob/05642657161ddc320de0c18ae6c753e5e1c29d80/setup.py#L728C5-L759
setup.py is called by build.py when building the wheel:

args = [sys.executable, os.path.join(source_dir, "setup.py"), "bdist_wheel"]
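For context, the generated module is tiny; a simplified illustration of its contents follows. The values are made up, and any attribute beyond the package name, version, and cuda_version described above is an assumption.

```
# onnxruntime/capi/build_and_package_info.py is generated at wheel-build time.
# Illustrative contents only; the real generated file may differ.
package_name = "onnxruntime-gpu"
__version__ = "1.21.0"
cuda_version = "12.6"
```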

@ranjitshs, how do you build the wheel for AIX?

@ranjitshs
Contributor

@tianleiwu
Thanks for the info.
After I ran `python setup.py build`, build_and_package_info.py got generated and the warning is gone.

guschmue pushed a commit that referenced this pull request Mar 6, 2025
guschmue pushed a commit that referenced this pull request Mar 6, 2025