Lesson 9 official topic

Hi! At the end of each fastai book chapter, there is a questionnaire. I found it super helpful for testing my understanding, so I wrote these questions for Lesson 9. Please add more if you have them!

Part 1: How to get started with Stable Diffusion.

Questionnaire

  1. Why is this called lesson 9?
  2. What does the strmr.com service do?
  3. Mention the four fastai contributors.
  4. Mention four computing services.
  5. What’s fastai/diffusion-nbs?
  6. What’s the content of suggested_tools.md file? Mention two tools.
  7. What is the main library used in the stable_diffusion.ipynb notebook? What’s the organization behind it?
  8. What’s the main idea of a Hugging Face pipeline, and which fastai tool is the most similar to it?
  9. What’s the from_pretrained method for?
  10. What extra feature do Paperspace and Lambda Labs have that makes them handier to use with pipelines than Google’s Colab?
  11. Which method of the stable diffusion pipeline should you call to produce images from a prompt?
  12. Which torch method should you use to set the random seed?
  13. Why would you set the random seed manually?
  14. Why does the pipeline have many steps, and what does it do in each?
  15. Why do we use 50 steps and not 3? Are these values set in stone?
  16. What does the image_grid function do?
  17. What effect do you get when you change the value of the guidance_scale parameter?
  18. Roughly, how does the guidance_scale work?
  19. What’s the effect of a negative prompt?
  20. How does the image-to-image pipeline work?
  21. What’s the effect of the strength parameter?
  22. How can you use the image2image pipeline twice to produce an even better image?
  23. How was the text-to-pokemon model fine-tuned?
  24. What is “textual inversion”?
  25. What is “dreambooth”?
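For questions 17–18, the core of classifier-free guidance fits in one line. This is a minimal NumPy sketch with random arrays standing in for the real U-Net noise predictions; the function name and variables are mine, but the blending formula is the standard one:

```python
import numpy as np

def guided_noise(noise_uncond, noise_cond, guidance_scale):
    """Blend the unconditional and prompt-conditioned noise estimates.

    guidance_scale = 1 recovers the conditional prediction; larger values
    push the estimate further toward the prompt and away from the
    unconditional one, which is why high values follow the prompt more
    literally (at the cost of variety).
    """
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)

# Toy example: with uncond = 0 and cond = 1, a scale of 7.5 gives 7.5
# everywhere, i.e. 7.5x the conditional "direction".
uncond = np.zeros((4, 4))
cond = np.ones((4, 4))
print(guided_noise(uncond, cond, 7.5)[0, 0])  # → 7.5
```

A negative prompt (question 19) works the same way: its noise prediction takes the place of `noise_uncond`, so the update is pushed *away* from whatever the negative prompt describes.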

Part 2.

  1. How can you use a model/function that outputs the probability that an image is an image of a handwritten digit to generate handwritten digit images?
  2. How can you generate a dataset of images of handwritten digits and non-handwritten digits and labels that indicate how much each image resembles a handwritten digit?
  3. Describe the main components of a neural network (disregard specifics about the architecture)
  4. Describe a network that can predict the noise added to each image in the dataset discussed in question 2.
  5. How can you use the network described in question 4 to generate images of handwritten digits?
  6. In practice, what’s the architecture of such a network?
  7. What’s a reasonable size for representing images of handwritten digits? And for beautiful realistic images? What problem will we face if we want to use the former approach to produce beautiful high-definition images?
  8. Is it possible to compress images efficiently, if lossily? Which image format does this?
  9. How can we store high-definition images more efficiently using a neural network? What’s the name of these kinds of networks?
  10. What’s the name of the output of the encoder?
  11. How can you use the network from question 9 to speed up the training and inference of the network of question 4?
  12. How can you modify the network from question 4 to be guided by a particular digit?
  13. What’s the problem with this approach for a dataset of images with arbitrary descriptions?
  14. How can we build a dataset of images and descriptions?
  15. Suppose you have the dataset from question 14, a randomly initialized network that produces embeddings from the descriptions, and another network that produces embeddings from the images (both embedding types with the same shape). Which loss function could you use to train the networks so they output similar embeddings for (image, description) pairs that appear in the dataset and different ones for pairs that do not?
  16. What is the name of the pair of models described in 15?
  17. How can we use the model described in 15 to guide image generation?
  18. What is the name of the loss described in 15?
  19. What is the name of the gradients used in 1?
  20. What other Greek letter is used for the standard deviation sigma of the noise?
  21. What is a noise schedule, and what are the time steps?
  22. When we generate an image from random noise, we multiply the noise by a small number before the subtraction instead of subtracting the predicted noise. Why?
  23. What is the role of the diffusion sampler?
  24. What other deep learning object is similar to the diffusion sampler?
  25. Apart from the noisy latent input and the embedding for guidance, what other input is used for the diffusion models? Which area of math does this idea come from? Do you think this is a necessary input? Why?
  26. Instead of using MSE as the loss, what other loss could be used to better approximate if the resulting image looks real?
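For questions 15–18, here is a minimal NumPy sketch of the symmetric contrastive loss used to train CLIP-style image/text encoder pairs. The function name and the temperature value are illustrative; the structure (matched pairs on the diagonal of a similarity matrix, cross-entropy in both directions) is the standard recipe:

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    # Normalize so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    # (batch, batch) similarity matrix: entry [i, j] compares image i
    # with description j; matched pairs sit on the diagonal.
    logits = img @ txt.T / temperature
    labels = np.arange(len(logits))

    def cross_entropy(l):
        # Each row is a classification problem whose correct class is
        # its own index (the matching pair).
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Symmetric: classify descriptions given images, and vice versa.
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2
```

When every image embedding equals its description embedding the loss is close to zero; shuffling the pairings makes it large, which is exactly the pressure that pulls matching pairs together and pushes non-matching pairs apart.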