YouTube Chapter Markers & Transcriptions

Transcription and Chapters :white_check_mark: Done

Lesson 23 Chapters :white_check_mark: Done

0:00:00 - Admitting an interesting bug
0:05:50 - From Fashion-MNIST to Tiny ImageNet, creating a U-Net
0:07:25 - Tiny ImageNet Dataset
0:13:35 - Transform Class
0:18:35 - DataLoader
0:22:18 - Data augmentation, batch transforms
0:25:10 - Creating a model and training
0:28:15 - Getting better, Papers with Code
0:30:22 - Going deeper
0:33:35 - More augmentation, TrivialAugment
0:39:15 - Pre-activation ResNets
0:48:17 - Notebook 25: Super resolution
0:55:08 - Autoencoder
0:59:27 - U-Net
1:09:10 - Initializing and training the U-Net
1:14:30 - Perceptual loss
1:23:55 - Initializing and training with Perceptual loss
1:26:50 - Gradual unfreezing
1:33:58 - Cross-convs
1:36:40 - Possible exercises to try

Lesson 23: Deep Learning Foundations to Stable Diffusion, 2022 :white_check_mark: Done

Lesson 24 Chapters :construction:

0:00:00 - Welcome to Lesson 24
0:00:42 - Unconditional diffusion from scratch: 26_diffusion_unet.ipynb
0:06:58 - SavedResBlock and SavedConv - Python’s mixin pattern
0:14:40 - UNet2DModel
0:20:05 - Train using miniai / Time Embedding
0:32:20 - U-Net with Time Step Embedding
0:35:57 - Second approach - Python’s functions taking functions pattern
0:40:10 - EmbUNetModel - U-Net model with Time Embeddings
0:46:03 - Training the EmbUNetModel and sampling
0:47:53 - Attention and Transformer blocks
1:02:55 - Attention code


:construction: