A few bits of feedback on the accompanying notebook(s): at present they seem to be set up for CUDA only, but the code should work just as well on an Apple Silicon Mac (or even on an Intel Mac, albeit extremely slowly) with one simple change.
If you add the following line in the second cell after the imports:
device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"
Then all you need to do is change any other cells that have .to("cuda")
to .to(device)
and the code will work on any supported GPU/CPU setup.
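To make the change concrete, here is a minimal sketch of the pattern with a plain tensor (the name latents is just illustrative, not necessarily what the notebook uses):

import torch

# pick the best available device: CUDA GPU, Apple Silicon (MPS), or CPU
device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"

# before: latents = torch.randn(1, 4, 64, 64).to("cuda")
latents = torch.randn(1, 4, 64, 64).to(device)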
Also, if you already have the Hugging Face Stable Diffusion model downloaded, you can simply set up a symlink (this works on any platform: macOS, Linux, or Windows) named "stable-diffusion-v1-4" in the folder where you have your notebook, pointing at the existing model directory. Of course, if you are on Colab, it's easier to download the model all over again; there is also a solution using a connected Google Drive, but I won't go into that here.
So if you have the Hugging Face Stable Diffusion model at /Users/myuser/stable-diffusion-v1-4/, you can simply switch to the folder where you have the Jupyter notebooks and run the following (on Linux/macOS; the Windows command is slightly different):
ln -s /Users/myuser/stable-diffusion-v1-4/ stable-diffusion-v1-4
That'll create a symbolic link pointing to the original location and save you from using up several gigabytes of space all over again.
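For reference, on Windows the rough equivalent (run from a Command Prompt with administrator rights or with Developer Mode enabled, and assuming the model lives at C:\Users\myuser\stable-diffusion-v1-4) would be:

mklink /D stable-diffusion-v1-4 C:\Users\myuser\stable-diffusion-v1-4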
Then, you'd have to change the following line (or similar ones) in the notebook to point to your local folder instead of the model on the Hugging Face Hub:
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", revision="fp16", torch_dtype=torch.float16).to(device)
The above should be changed to:
pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-4", revision="fp16", torch_dtype=torch.float16).to(device)
Notice that you are pointing to the directory (or the symlink to the directory) where the models are on your local drive.
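If you want to sanity-check that the notebook will resolve the symlink to the right place, a quick check from a notebook cell (purely illustrative) is:

import os

# should print the original model location, e.g. /Users/myuser/stable-diffusion-v1-4
print(os.path.realpath("stable-diffusion-v1-4"))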
And finally, if you are on macOS, you should also drop the float16 parts, since float16 isn't currently supported correctly on macOS. So drop the following from the above line:
, revision="fp16", torch_dtype=torch.float16
This gives you the following as your final line (but only on macOS):
pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-4").to(device)
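Putting all of the above together, a minimal sketch of a revised loading cell that works on CUDA, MPS, or CPU might look like this (the prompt is just an example, and the output attribute may be named differently in older diffusers versions):

import torch
from diffusers import StableDiffusionPipeline

# pick the best available device: CUDA GPU, Apple Silicon (MPS), or CPU
device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"

if device == "cuda":
    # half precision is fine on CUDA and saves memory
    pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-4", revision="fp16", torch_dtype=torch.float16)
else:
    # full precision on MPS/CPU, since float16 isn't handled correctly there yet
    pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-4")

pipe = pipe.to(device)

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")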