Share your work here ✅ (Part 2 2022)

I tried the interpolation example notebook shared by @puru for dinosaur to chicken evolution.

dino_chicken_evolution

dino-chicken intermediate by the model :slight_smile:

Original post

19 Likes

I used the lesson 9 notebook to generate a Banksy-style sketch of a robot. I needed a cool image for the first page of a PowerPoint deck I was creating for another course’s project presentation.

5 Likes

What was your daughter’s reaction?

1 Like

looks amazing! :grin:

1 Like

I think she prefers the dog. Our granddaughter, on the other hand, likes the unicorn, but then she is into My Little Pony 😀! Could have lots of fun with this.

4 Likes

I tried to generate images using the same prompt, but in different languages. (All translated from English using DeepL.) The first one was this prompt.

A picture of a town hall in a historical quarter of a city

Seems like the model aligned text written in a given language with the most common pictures from that region. Except for Greek, which collapsed. I guess that language isn’t well represented in the dataset (?). Also, an interesting interpretation of an Estonian town hall. Too many castle pictures associated with this language?

The second one I tried is this.

A crowded street in a big city on a winter morning

Again, the Greek one is rather odd. Is it a kind of “averaged” tourist selfie? Not sure if Estonian landscapes really have mountains like that… Also, English, German, and French are captured pretty accurately. The most frequent language/image pairs in the dataset?
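For anyone who wants to reproduce the setup, the core of it is just a loop over pre-translated prompts with a fixed seed, so that only the language changes between images. A rough sketch (the model id and translations below are illustrative, not exactly what I ran):

import torch
from diffusers import StableDiffusionPipeline

# Same prompt, pre-translated (e.g. with DeepL); only the language differs.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")

prompts = {
    "en": "A crowded street in a big city on a winter morning",
    "de": "Eine belebte Straße in einer Großstadt an einem Wintermorgen",
    "fr": "Une rue bondée dans une grande ville par un matin d'hiver",
}

for lang, prompt in prompts.items():
    # Re-seed before every generation so the initial noise is identical.
    generator = torch.Generator("cuda").manual_seed(42)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"street_{lang}.png")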

14 Likes

I wrote up some of my thoughts / learnings from lesson 9 in a blog (including a glossary of terms). It’s not fully updated with things from 9a or 9b yet, but I’ll probably write subsequent blogs alongside lesson 10 this week.

18 Likes

I don’t know anything about Kamon designs, but the results are not too shabby IMO.

Glad to see my notebook helped! :slight_smile:

1 Like

I created a video where I walk through using a devcontainer with fastai. I think this is a really nice way to create a stable environment. Fastai .devcontainer Environment Creation - YouTube

I also created a few images. This was my favorite (prompt was “m. c. escher cityscape van gogh”; special thanks to @hiromi for the prompt help!):

21 Likes

Fascinating. I too have been thinking about how well this would work for languages other than English.

The Greek images do suggest there may be biases in the training images!

Thanks for sharing!

1 Like

Great idea and thank you for sharing! I would also think that these images are the result of languages other than English being underrepresented in the datasets. The question here is, which dataset matters? The one for CLIP training (not public) or the one for Stable Diffusion (LAION)? Both? I haven’t wrapped my head around these models yet :sweat_smile:

According to the model card, Stable Diffusion was trained on LAION-2B-en or subsets thereof, which consist of primarily English descriptions. So any other languages should be represented pretty poorly - I’m surprised by the decent results in German and French.

The black image for Greek in the first prompt is probably not a model failure; I suspect it was blocked by the overly aggressive NSFW filter.

3 Likes

Yes, I agree, it’s interesting how it works! I don’t know exactly how the tokenizer was trained, but it definitely can extract at least some meaning from non-English languages as well. I also expected that one might encounter something like an “unknown token” error, but I guess that is not the case for such large models. Was it trained on the whole Unicode char set?
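A quick way to check, I think (this assumes Stable Diffusion v1 uses the openai/clip-vit-large-patch14 tokenizer, and that it is a byte-level BPE like GPT-2’s, so any UTF-8 text is split into known subword pieces rather than hitting an unknown token):

from transformers import CLIPTokenizer

# Tokenizer of the CLIP text encoder used by Stable Diffusion v1 (my assumption).
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

# Non-English text still tokenizes, just into many small pieces, so there is
# no "unknown token" error; the meaning may simply be spread very thin.
print(tok.tokenize("a picture of a town hall"))
print(tok.tokenize("μια εικόνα ενός δημαρχείου"))  # rough Greek version of the prompt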

That’s true! For some languages, it seems the model tries to generate the most common, “standard” image of that language/country. (Like showing cathedrals along the river for Russian.) Or falls back to some “generic”, even if not related, picture.

Thank you for the feedback! I expected that for other languages it would produce even worse results. These complex models require some time to figure out! I am still feeling a bit lost, even though I am familiar with the individual parts. But I definitely don’t know enough about the data.

That’s a good point. I was also expecting images that aren’t quite aligned with the prompt for non-English languages. But it seems that, for these two, the model indeed worked quite well! Especially compared to the others, they look very plausible.

Oh, you’re right, I’ve completely forgotten about the filter. I will try to disable it to get the results “as is.”
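As far as I know, one way to do that with diffusers is to just drop the safety checker from the pipeline (a sketch; the exact behavior is version-dependent):

from diffusers import StableDiffusionPipeline

# Load the pipeline as usual, then remove the safety checker so flagged
# images are returned instead of being replaced with a black image.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe.safety_checker = None  # the pipeline skips the NSFW check when this is None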

1 Like

Yeah, it is fun using interpolation between multiple prompts. I generated the CLIP embeddings for these 4 prompts:

a = embed_text(['Paris in Spring, digital art'])
b = embed_text(['Paris in Summer, digital art'])
c = embed_text(['Paris in Fall, digital art'])
d = embed_text(['Paris in Winter, digital art'])

and then just did linear interpolation from a to b to c to d, grabbing some CLIP embeddings along the way. Then I passed each one into the model, making sure to use the same seed (23532) before generating each image so they would be similar enough.
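Roughly, the interpolation part looked like this (a sketch; generate_image stands in for whatever sampling loop you already have):

import torch

def interpolate_embeddings(embeddings, steps_per_pair=8):
    # Walk a -> b -> c -> d, yielding linearly interpolated CLIP embeddings.
    results = []
    for start, end in zip(embeddings[:-1], embeddings[1:]):
        for t in torch.linspace(0, 1, steps_per_pair):
            results.append(torch.lerp(start, end, t.item()))
    return results

# for emb in interpolate_embeddings([a, b, c, d]):
#     torch.manual_seed(23532)     # same seed each time so images stay comparable
#     image = generate_image(emb)  # hypothetical helper wrapping the diffusion loop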

pairs_seasons7

UPDATE: I tried Jeremy’s suggestion below. I’m not sure I understood it correctly, but I think the idea was to start with the previous image in the latent space (Image2Image) before going to each next step. So I used the ideas in the deep dive notebook on writing your own function for Img2Img. I played around with it for a while, tweaking the parameters and start steps. But things sort of get less detailed as it progresses. It still looks sort of neat, but I’m not sure it’s what I was going for. Maybe there is a bug in the code :slight_smile:
paris_using_prev_step

UPDATE 2:
I’m not so sure you actually want to interpolate the latent space by starting with the previous image as the input. I could be wrong, but when I do this it leads to weird effects. I think you want to interpolate from a to b, and to make it smoother (more stable) just add more points in between and set a consistent seed/noise. For example, I went back to the interpolation as I first tried it, without the image2image suggestion, and simply added more interpolation points. See here. Do you think it’s better?

19 Likes

Would using the previous image as an “initial image” help too?

5 Likes

Hehe, yes, that’s a nice simple idea. I think I had thought about that, then got distracted and just wanted to finish something simple. I’ll give it a try tomorrow though :slight_smile:

2 Likes

It knows about emojis. You can check out the ALT tags to see the prompts.

2 Likes

Not actually based on the lectures (I just started and have only watched the first one), but I’m betting they will come in handy for revamping the image generation engine I use as I watch more …

Here’s my GUI for Stable Diffusion, which works on multiple platforms :slightly_smiling_face: I’ve been using PyTorch/diffusers to handle the image generation part, but it has become very slow on MPS/Apple Silicon lately. So I’m hoping that after the lectures I’ll know enough to find a workaround for the slowdowns …

4 Likes

I’ve been trying to train a conditional diffusion model using an image dataset of galaxies that belong to ten different classes (Galaxy10-SDSS). I’ve been using an approach similar to @tcapelle’s code here, which allows you to train a conditional DDPM by adding the label embeddings to the “timestep” embeddings. Notably, there’s no super-resolution autoencoder or anything like that, but my images have shape (3, 64, 64), so they’re not that detailed.
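To make the conditioning idea concrete, here is a minimal sketch of what I mean (my own paraphrase, not @tcapelle’s actual code): the class label gets its own learned embedding, which is simply added to the timestep embedding before it modulates the UNet blocks.

import torch
import torch.nn as nn

class ConditionalEmbedding(nn.Module):
    """Combine timestep and class-label embeddings for a conditional DDPM."""
    def __init__(self, n_classes: int, emb_dim: int):
        super().__init__()
        self.time_mlp = nn.Sequential(
            nn.Linear(emb_dim, emb_dim), nn.SiLU(), nn.Linear(emb_dim, emb_dim))
        self.label_emb = nn.Embedding(n_classes, emb_dim)

    def forward(self, t_emb: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # t_emb: sinusoidal timestep embedding, shape (batch, emb_dim)
        # labels: integer class ids in [0, n_classes), shape (batch,)
        return self.time_mlp(t_emb) + self.label_emb(labels)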

Here are the middling results: everything looks very blurry and wispy. I’ve trained for 3000 epochs on this dataset of ~20k images, but the loss seems to have plateaued for now. I’ve tested MSE, L1, and a combination of the two (Huber) as loss functions, but none of them seems to mitigate this issue. Just curious if anyone has thoughts or suggestions?

5 Likes

I tried putting my Mom’s artwork into CLIP Interrogator to see if the output could be useful to her as an artist. Awesomely enough, it identified an artist she wants to study due to similarities in her style! She is quite excited after seeing the results of the first painting! I think there are opportunities to create tools for artists, which I am going to play around with :slight_smile:

Thread: https://twitter.com/isaac_flath/status/1583471995341049856
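If anyone wants to try the same thing, this is roughly what I ran (a sketch assuming the clip-interrogator pip package; the file path is just a placeholder):

from PIL import Image
from clip_interrogator import Config, Interrogator

# Run CLIP Interrogator on a scanned painting to get a text description,
# including the artists it considers most similar.
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
image = Image.open("painting.jpg").convert("RGB")  # placeholder path
print(ci.interrogate(image))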

10 Likes

Jeremy asked us to try to implement tricks like negative prompts in code. This gave me the idea of looking into the source code of the diffusers library and writing about it. Usually I don’t do these types of acrobatics :slight_smile:. But this time I was confident enough to do so (many thanks to the lessons from Jeremy, Jono, Tanishq, Wasim, and such a helpful community).
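The gist of the negative prompt trick, as I understand it (a sketch, not the exact pipeline code; the model id and prompts are illustrative): with classifier-free guidance, the unconditional branch normally uses the embedding of an empty string, and a negative prompt just swaps in the embedding of whatever you want to steer away from.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

def encode(prompt: str) -> torch.Tensor:
    # Tokenize to CLIP's fixed context length and run the text encoder.
    tokens = pipe.tokenizer(prompt, padding="max_length",
                            max_length=pipe.tokenizer.model_max_length,
                            truncation=True, return_tensors="pt")
    with torch.no_grad():
        return pipe.text_encoder(tokens.input_ids)[0]

cond = encode("a watercolor painting of a castle")  # the actual prompt
uncond = encode("blurry, low quality")              # negative prompt instead of ""
guidance_scale = 7.5

# Inside the denoising loop, the guided noise prediction then becomes:
# noise_pred = noise_uncond + guidance_scale * (noise_cond - noise_uncond)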

Here is my attempt at explaining stable diffusion and the source code of the famous StableDiffusionPipeline in the diffusers library:

9 Likes