Paperspace 16GB GPUs keep getting OOM?

Hello,

I just signed up for Paperspace and am trying to load the Stable Diffusion models - VAE, UNet, and the CLIP text encoder. The models load fine, but I keep getting an OOM error when I run the diffusion loop:

Code

import torch
from diffusers import AutoencoderKL, LMSDiscreteScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

torch_device = "cuda"

vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae", use_auth_token=True)

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet", use_auth_token=True)

scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)

# Cast to half precision before moving everything to the 16GB GPU
vae = vae.to(torch.float16).to(torch_device)
unet = unet.to(torch.float16).to(torch_device)
text_encoder = text_encoder.to(torch.float16).to(torch_device)
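For what it's worth, the weights themselves shouldn't be the problem here. Using approximate parameter counts from the published model descriptions (assumptions, not measured from this run: ~860M for the SD v1-4 UNet, ~84M for the VAE, ~123M for the CLIP ViT-L/14 text model), the fp16 footprint is only about 2 GB:

```python
# Rough fp16 memory footprint of the three models. Parameter counts are
# approximate figures from the published model descriptions, not measured here.
param_counts = {
    "unet": 860_000_000,          # SD v1-4 UNet, ~860M params
    "vae": 84_000_000,            # AutoencoderKL, ~84M params
    "text_encoder": 123_000_000,  # CLIP ViT-L/14 text model, ~123M params
}

bytes_per_param = 2  # float16 = 2 bytes per parameter
total_gb = sum(param_counts.values()) * bytes_per_param / 1024**3
print(f"~{total_gb:.1f} GB of weights in fp16")  # ~2.0 GB
```

So the weights fit comfortably in 16GB; the OOM has to be coming from activations inside the diffusion loop.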

Output

RuntimeError: CUDA out of memory. Tried to allocate 15.50 GiB (GPU 0; 15.74 GiB total capacity; 6.35 GiB already allocated; 6.33 GiB free; 7.67 GiB reserved in total by PyTorch)
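One plausible source of a single 15.5 GiB allocation is a self-attention score tensor in the UNet's first block, which runs at the full 64x64 latent resolution (4096 tokens) and scales linearly with batch size. The numbers below (8 heads, fp16, a batch of 43) are illustrative assumptions, not values read from the actual run:

```python
def attn_scores_gib(batch, heads=8, tokens=64 * 64, bytes_per_el=2):
    """Memory for one (batch * heads, tokens, tokens) attention score tensor."""
    return batch * heads * tokens * tokens * bytes_per_el / 1024**3

print(f"batch 1:  {attn_scores_gib(1):.2f} GiB")   # 0.25 GiB
print(f"batch 43: {attn_scores_gib(43):.2f} GiB")  # 10.75 GiB
```

An intended batch of 1 is tiny, but a batch that silently grows to a few dozen puts a single allocation in the tens of GiB, which is exactly the shape of the error above.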

Has anyone else gotten past this to run stable diffusion on Paperspace?

Resolved: I was passing the prompt in as a single string instead of a list of strings. The string then got broken up, so the batch ended up with far more embeddings than I intended. #facepalm lol
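For anyone hitting the same thing: if your loop derives the batch size from len(prompt), as the common SD-from-scratch tutorials do, then passing a bare string makes len() count characters, and every downstream activation is multiplied by the prompt length. A minimal sketch of the mistake (the prompt text is just an example):

```python
prompt = "a photograph of an astronaut riding a horse"

# What the loop intended: one prompt, batch size 1
batch_size = len([prompt])  # 1

# What actually happened: len() of a string counts characters,
# so the "batch" silently became dozens of items
oops = len(prompt)  # 43 for this prompt

print(batch_size, oops)
```

The fix is just to wrap the prompt: prompt = ["a photograph of an astronaut riding a horse"].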