Nvidia 50 Series (Blackwell) support thread: How to get ComfyUI running on your new 50 series GPU. #6643
28 comments · 108 replies
-
There are also Windows prebuilt wheels of PyTorch for CUDA 12.8 that Nvidia gave w-e-w to publish: https://huggingface.co/w-e-w/torch-2.6.0-cu128.nv
-
The Windows version said:
-
The nf4 node does not work.
-
ImportError: tokenizers>=0.21,<0.22 is required for a normal functioning of this module, but found tokenizers==0.20.3. How can I fix this problem?
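The usual fix for this kind of error is upgrading the package into the requested window, e.g. python_embeded\python.exe -m pip install "tokenizers>=0.21,<0.22" (assuming the standalone package's embedded Python). The error just means the installed major.minor pair falls outside >=0.21,<0.22. A minimal sketch of that check (the tokenizers_ok helper is hypothetical, written only for this illustration):

```python
# Hypothetical helper illustrating the version window from the error message:
# the module needs tokenizers >= 0.21 and < 0.22, i.e. exactly the 0.21.x series.
def tokenizers_ok(installed: str) -> bool:
    major, minor = (int(x) for x in installed.split(".")[:2])
    return (major, minor) == (0, 21)

print(tokenizers_ok("0.20.3"))  # False: the version pip actually found
print(tokenizers_ok("0.21.0"))  # True: what the module requires
```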
-
Sorry, I have been banging my head against this for six hours, trying to run a Cosmos workflow on a 5090. Without SageAttention or TorchCompile, a 14-minute job on a 4090 is taking 25 minutes. To turn them on I installed Triton and SageAttention, but now I get a KSampler error whether I bypass the patch-and-compile nodes or not. I have run this through GPT and followed every instruction: Step 1, verify the CUDA toolkit installation (if it does not match 12.8, reinstall PyTorch with the correct CUDA version); Step 2, check the NVIDIA toolkit and drivers, and update the NVIDIA driver to the latest download; Step 3, fix the Microsoft Visual Studio Build Tools.
-
Does anyone know if other PyTorch CUDA versions, like 12.6, will work with Blackwell?
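The short answer is no: each PyTorch wheel ships GPU kernels only for a fixed list of SM architectures, and pre-12.8 builds stop before Blackwell (sm_120 for the consumer 50 series), which is why only cu128 builds work. A sketch of that logic (the architecture lists below are illustrative assumptions, not taken from the actual wheels; check torch.cuda.get_arch_list() on a real install for the authoritative list):

```python
# Illustrative sketch: a wheel can run on a GPU only if it ships kernels
# (or forward-compatible PTX) for that GPU's SM architecture.
def wheel_supports(gpu_sm, wheel_archs):
    return gpu_sm in wheel_archs

# Assumed arch lists, for illustration only.
cu126_archs = [50, 60, 70, 75, 80, 86, 90]   # tops out before Blackwell
cu128_archs = cu126_archs + [100, 120]       # adds Blackwell support

print(wheel_supports(120, cu126_archs))  # False -> "no kernel image" style errors
print(wheel_supports(120, cu128_archs))  # True
```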
-
When I use the ComfyUI package with a CUDA 12.8 torch build, many custom_nodes show "IMPORT FAILED", including Manager, InstantID, ReActor ...
-
Is it possible to build a PyTorch that works for the 5090? If so, how?
-
Having given up on the portable version for now (too many errors, lol), I am using WSL Ubuntu. Everything else is set up and working in ComfyUI, and I have the latest PyTorch nightly from pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128 — but when I use SageAttention I get this:
-
I'm using ComfyUI through Pinokio. Is there any way to update my ComfyUI to work with my new 5080? I deleted the old files and changed torch.js per the patch above, but it won't start.
-
Will using Docker allow for a working torchvision on Windows?
-
Can someone write a little guide for getting Docker running torchvision, etc. on Windows with the portable Blackwell ComfyUI release? Please write it for a normal user who has no programming knowledge. There are basic instructions in the OP, but what even is Docker? "docker run -p 8188:8188 --gpus all -it --rm nvcr.io/nvidia/pytorch:25.01-py3" — where is this command supposed to be run? Is Docker something to be installed into the system Python or into the standalone folder? How would this be installed on a fresh portable ComfyUI install? With more Blackwell cards trickling out, there will most likely be more users needing help setting this up. Thank you.
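A rough answer, hedged because the Docker Desktop UI changes between releases: Docker is a separate program (installed via Docker Desktop for Windows, not into system Python or the standalone folder) that runs a prepackaged Linux environment, so it replaces the portable install rather than patching it. Assuming Docker Desktop with the WSL 2 backend and current NVIDIA drivers are installed, the command from the OP is typed into an ordinary PowerShell or CMD window:

```shell
# Run from PowerShell or CMD, not from inside ComfyUI's folder.
# Docker itself comes from Docker Desktop (with the WSL 2 backend enabled).
docker run -p 8188:8188 --gpus all -it --rm nvcr.io/nvidia/pytorch:25.01-py3
#   -p 8188:8188  publish ComfyUI's default port so the host browser can reach it
#   --gpus all    pass the NVIDIA GPU through to the container
#   -it           keep an interactive terminal attached
#   --rm          delete the container's state when it exits
```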
-
To anyone having trouble with the portable version for Blackwell GPUs: do not update it! I noticed during an update that it uninstalled the cu128 build and installed another version, for example. torchvision works with a fresh install without updating!
-
Excellent
-
Are there still no Windows PyTorch options available for CUDA 12.8? Any help would be greatly appreciated.
-
For those who want to use SageAttention + Hunyuan + RTX 50xx with a simple tutorial: it is worth keeping the files inside WSL instead of using the files on Windows, because of read times (test it and you will understand), which is why I recommend copying your ComfyUI into WSL. https://github.com/alisson-anjos/ComfyUI_Tutoriais/blob/main/WSL/install.md Notes: if SageAttention is incompatible with your OS, you need to compile Triton first and then SageAttention. To compile SageAttention you must change setup.py with the code from this issue: [Sage Attention Issue] (thu-ml/SageAttention#107 (comment)). Thanks to Kijai for precompiling Triton and SageAttention.
-
Flash Attention now supports and recommends CUDA 12.8: Dao-AILab/flash-attention@454ce31
-
How do you install ComfyUI-Manager on the standalone ComfyUI package with a CUDA 12.8 torch build? The install script messes up the installation.
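One sequence that avoids the install script, sketched under the assumption that the standalone package keeps the usual layout with a python_embeded folder next to the ComfyUI folder (paths may differ in the cu128 build); run it in PowerShell from inside the extracted package:

```shell
# Clone Manager straight into custom_nodes instead of using its install script.
cd ComfyUI\custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager

# Install its requirements with the package's embedded Python, NOT the system
# Python, so the cu128 torch build is left untouched.
..\..\python_embeded\python.exe -m pip install -r ComfyUI-Manager\requirements.txt
```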
-
Run update/update_comfyui.bat to update after downloading, duh.
-
Thank you so much, it worked! I hope they fix it ASAP.
-
Awesome! That Windows package got me rolling with image generation (FLUX and all); it works like a charm. But Wan's video generation isn't working yet. Any clue whether that will be sorted by those upcoming PyTorch improvements? Hoping so!
-
Is there really no proper way to run ComfyUI at full speed on the RTX 50 series? The portable version doesn't seem to perform well, and since I'm not a developer, I end up endlessly following links from NVIDIA, GitHub, and other sites, installing unnecessary things in a messy loop.

I just want to run it on Windows (not WSL2, not Docker) while installing CUDA 12.8, the corresponding nightly versions of Torch and Torchvision, and TensorRT, to use the latest version of ComfyUI. But I can't find any clear instructions for this anywhere. It has been over a month since the 50 series was released. Can someone please provide a proper setup guide with step-by-step instructions?

Related links: every time I try to follow these links, I end up going in the wrong direction. I'm not familiar with Linux, so I'm really struggling. Please help me. I never expected getting ComfyUI to work on the 50 series would be this difficult.
-
I know it has been mentioned here and there, but if you want a Docker container that is already working on the 5090, I've put this together: https://github.com/HDANILO/comfyui-docker-blackwell I got a 5090 four days ago and have been struggling myself, so I figured that if I was going to spend the time making it work, I might as well make it public for everyone. I'm using it on Windows, under WSL 2 Ubuntu, with Docker inside Ubuntu; there is no performance impact this way. Download the models directly into the container and avoid volumes if you don't want slowdowns.
-
I updated the post with a link to a standalone package with PyTorch nightly 2.7 cu128.
-
Just got my 5070 Ti. I had to disable xformers using --disable-xformers; otherwise the KSampler wouldn't run. The error message says flash attention does not support the GPU yet. I'm not sure disabling xformers is the best solution, but since the error comes from xformers code I gave it a try and it worked. I don't know whether that has a performance impact.
-
Yes. Torch: 2.7.0.dev20250306+cu128 (file: torch-2.7.0.dev20250306+cu128-cp312-cp312-win_amd64.whl) Critical note: environment isolation is mandatory; always use ComfyUI's embedded Python (python_embeded) rather than a system-wide Python installation.
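The tags in that wheel filename encode exactly what the build requires, so decoding them is a quick sanity check before installing. A small sketch (the parse_wheel helper is written only for this example; wheel naming itself follows PEP 427):

```python
# Wheel filenames follow PEP 427:
#   {distribution}-{version}-{python tag}-{abi tag}-{platform tag}.whl
# The local version segment after "+" is where PyTorch records the CUDA build.
def parse_wheel(name):
    stem = name[: -len(".whl")]
    dist, version, py_tag, abi_tag, platform = stem.split("-")
    cuda = version.split("+")[1] if "+" in version else ""
    return {"dist": dist, "version": version, "cuda": cuda,
            "python": py_tag, "platform": platform}

info = parse_wheel("torch-2.7.0.dev20250306+cu128-cp312-cp312-win_amd64.whl")
print(info["cuda"])      # cu128     -> built against CUDA 12.8
print(info["python"])    # cp312     -> needs the embedded Python to be 3.12
print(info["platform"])  # win_amd64 -> 64-bit Windows build
```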
-
Any idea how to run FLUX in fp4 using ONNX from here? https://huggingface.co/black-forest-labs/FLUX.1-dev-onnx
-
Using @HDANILO's Dockerfile I've been able to get most of ComfyUI working with my 5080, though SageAttention is still broken, even after building from scratch. @kijai's Wan workflows also have a tendency to crash with this container, even with SageAttention turned off. Here's my Dockerfile so far, with comments:
This still results in the |
-
I will try to keep this post as up to date as possible with the latest developments.
To get your Nvidia 50 series GPU working with ComfyUI, you need a PyTorch build compiled against CUDA 12.8.
A lot of performance improvements for these GPUs will likely land in PyTorch over the next few months, so I recommend coming back to this page and updating frequently.
Windows
The recommended download is the standalone package with nightly pytorch 2.7 cu128 that you can download from here
Old package with torch 2.6
Windows users can download this standalone ComfyUI package with a cuda 12.8 torch build
Manual Install
At this moment, Blackwell is not yet supported by stable PyTorch.
PyTorch nightly cu128 builds are available for Windows and Linux:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
You can also use the Nvidia PyTorch Docker container as an alternative, which might give better performance.
Link: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch
Here's how to use it:
docker run -p 8188:8188 --gpus all -it --rm nvcr.io/nvidia/pytorch:25.01-py3
Inside the docker container: