Replies: 2 comments
-
If you are creating images with limited VRAM (≤ 4 GB), here is what you can do to reduce memory usage in Diffusers (a combined sketch follows the list):
1. Enable xformers memory-efficient attention (requires the xformers package):
pipe.enable_xformers_memory_efficient_attention()
2. Use FP16 precision with CUDA:
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.to("cuda")
3. Disable safety checker (optional to save memory):
pipe.safety_checker = lambda images, **kwargs: (images, [False] * len(images))
4. Use pipe.enable_model_cpu_offload() if you're working with low VRAM:
pipe.enable_model_cpu_offload()
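Putting these together, here is a minimal sketch of a low-VRAM setup. It assumes the xformers and accelerate packages are installed; the prompt and output filename are placeholders, not from this thread:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)

# Optional: replace the safety checker with a no-op so its pass is skipped.
pipe.safety_checker = lambda images, **kwargs: (images, [False] * len(images))

# Offload submodules to the CPU and move each one to the GPU only while it runs;
# use this instead of pipe.to("cuda") when VRAM is very limited.
pipe.enable_model_cpu_offload()

# Memory-efficient attention via xformers.
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a photo of an astronaut riding a horse").images[0]  # placeholder prompt
image.save("astronaut.png")

Note that enable_model_cpu_offload() takes the place of pipe.to("cuda"), so only one of the two should be used.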
-
Which model do you want to use and which GPU?