YouTube Chapter Markers & Transcriptions

Wasn’t sure if anyone was working on this or planning to, but I’ll be working on chapter markers for Lesson 9 today. @jeremy do you know of any current efforts?

Also, I imagine 9A and 9B will need markers as well. @johnowhitaker @ilovescience @seem do you need help with those?

Link to Part 1 discussion on chapter markers and instructions. If I remember correctly, we put too many chapter markers last time so I’ll make markers with that in mind.


Done

Chapter 9
0:00 - Introduction
6:38 - This course vs DALL-E 2
10:38 - How to take full advantage of this course
12:14 - Cloud computing options
14:58 - Getting started (Github, notebooks to play with, resources)
20:48 - Diffusion notebook from Hugging Face
26:59 - How stable diffusion works
30:06 - Diffusion notebook - guidance scale, negative prompts, starting diffusion with an image, textual inversion, Dreambooth
45:00 - Stable diffusion explained & how fast.ai will be teaching it
1:14:37 - Creating a neural network to predict noise in an image
1:27:46 - Working with images and compressing the data with autoencoders
1:40:12 - Explaining latents that will be input into the unet
1:43:54 - Adding text as one hot encoded input to the noise and drawing (aka guidance)
1:47:06 - How do you represent numbers vs text embeddings in our model with image and text encoders (CLIP)
1:53:13 - Encoder’s loss function
2:00:55 - Caveat regarding “time steps”. This term won’t be used in this course
2:07:04 - Why don’t we do this all in one step?


Thanks so much @Raymond-Wu this is terrific! Great initiative :smiley:


The section from 45 mins on could do with more detail if possible. I think around 8-10 mins between timestamps is a good goal.

Absolutely agree on the timestamp spacing. I went with a bit of a compromise: I combined timestamps I would normally split out, so you can still search for topics by text without overloading users with markers. Everything after 45:00 is still in progress; that was just my stopping point, since I wasn’t sure whether anyone else was working on this. It seems I have the green light, though, so I’ll work on it more this afternoon.


Hi, how about the transcriptions? I’m waiting for the Google Doc files to start transcribing. I can create the files, but I’m not sure whether it would be better for Jeremy to own them…


Yep we would very much appreciate help on the math lesson :slight_smile: Thank you so much!


Actually, I’m just re-rendering that bit now. It’ll add a few minutes to all the timestamps after 45 mins, since I’ve recorded a new section. Will be done in an hour or two.


This is a great initiative and adds a lot of value! As a fellow member of the community I say thank you!


Got it. It’d help me greatly if you could tell me at what point you inserted the new segment and how long it is.

The new section was inserted at 53:04 and is 14:53 long. I’ve uploaded the new video to YouTube, but it’s processing very slowly. I’ll post here when it’s done.


Done

Lesson 9B
0:00 - Introduction
2:19 - Data distribution
6:38 - Math behind lesson 9’s “Magic API”
18:50 - CLIP (Contrastive Language–Image Pre-training)
27:04 - Forward diffusion (markov process with gaussian transitions)
36:11 - Likelihood vs log likelihood
42:16 - Denoising diffusion probabilistic model (DDPM)
48:04 - Conclusion

Additional Links:
Deep Unsupervised Learning using Nonequilibrium Thermodynamics - https://arxiv.org/abs/1503.03585
Denoising Diffusion Probabilistic Models - https://arxiv.org/abs/2006.11239


The new video is now available


I’ve added 14:53 to the times after 53:04 now in your post.
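For anyone doing the same adjustment in future lessons, the shift described above (adding the new section’s length to every marker at or after the insertion point) is easy to script. A minimal sketch in Python; the `shift_chapters` helper and the assumption that markers start each line in `M:SS` or `H:MM:SS` form are mine, not anything from the course materials:

```python
import re

def to_seconds(ts: str) -> int:
    """Parse 'M:SS' or 'H:MM:SS' into total seconds."""
    total = 0
    for part in ts.split(":"):
        total = total * 60 + int(part)
    return total

def to_ts(seconds: int) -> str:
    """Format seconds back to 'M:SS' or 'H:MM:SS'."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h}:{m:02d}:{s:02d}" if h else f"{m}:{s:02d}"

def shift_chapters(text: str, insert_at: str, duration: str) -> str:
    """Add `duration` to every line-leading timestamp at or after `insert_at`."""
    cut = to_seconds(insert_at)
    delta = to_seconds(duration)

    def repl(m: re.Match) -> str:
        t = to_seconds(m.group(0))
        return to_ts(t + delta) if t >= cut else m.group(0)

    # Only touch timestamps at the start of a line, so times mentioned
    # mid-sentence in chapter titles are left alone.
    return re.sub(r"(?m)^\d+(?::\d{2}){1,2}", repl, text)
```

For example, `shift_chapters(markers, "53:04", "14:53")` leaves "45:00 - …" untouched and moves "53:04 - …" to "1:07:57 - …".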

Perfect! That saves me the effort. I believe that someone earlier in the thread also asked about transcriptions. Do you want to put them in a Google Doc again for us to edit?

Also, I forgot a timestamp for Ch9
2:07:04 - Why don’t we do this all in one step?


Done

Lesson 10
0:00 - Introduction
0:35 - Showing students’ work over the past week
6:04 - Recap Lesson 9
12:55 - Explaining the papers “Progressive Distillation for Fast Sampling of Diffusion Models” and “On Distillation of Guided Diffusion Models”
26:53 - Explaining the paper “Imagic: Text-Based Real Image Editing with Diffusion Models”
33:53 - Stable diffusion pipeline code walkthrough
41:19 - Scaling random noise to ensure variance
50:21 - Recommended homework for the week
53:42 - What are the foundations of stable diffusion? Notebook deep dive
1:06:30 - Numpy arrays and PyTorch Tensors from scratch
1:28:28 - History of tensor programming
1:37:00 - Random numbers from scratch
1:42:41 - Important tip on random numbers via process forking

Additional Links:
Progressive Distillation for Fast Sampling of Diffusion Models - https://arxiv.org/abs/2202.00512
On Distillation of Guided Diffusion Models - https://arxiv.org/abs/2210.03142
Imagic: Text-Based Real Image Editing with Diffusion Models - https://arxiv.org/abs/2210.09276


@jeremy the video is publicly listed, as is lesson 9. Is that correct? Or should they be available only to Part 2 attendees until the public release?

Nevermind, I just read the posts about the preview :slight_smile:

Lesson 9 2022 Transcription


Awesome work! Lesson 10 chapter markers are now done as well. The YouTube video can now be updated with all the timestamps, Jeremy.
