Learning fastai part 2

I’m currently learning fastai part 2 and documenting everything I learn along the way in this post. If you have any questions on these topics, please feel free to ask.

GitHub: https://github.com/xrsrke/stable-diffusion-from-scratch (implementation of Stable Diffusion from scratch)

EDIT 1: My learning goal is to reimplement all the components of Stable Diffusion from scratch. I will post it here!

EDIT 2: Added the GitHub link


28/11/2022, Lesson 12 - 11a_transfer_learning, 12a_awd_lstm

Reimplemented transfer learning and an LSTM cell from scratch (a sketch of the LSTM cell follows below)


LSTM

Transfer learning
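
For reference, here is a minimal sketch of the LSTM-cell half of this, in plain PyTorch (layer sizes are arbitrary, and the fastai notebook's version differs in details):

```python
import torch
import torch.nn as nn

class LSTMCell(nn.Module):
    """Minimal LSTM cell: one fused linear layer produces all four gates."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.ih = nn.Linear(input_size, 4 * hidden_size)
        self.hh = nn.Linear(hidden_size, 4 * hidden_size)

    def forward(self, x, state):
        h, c = state
        gates = self.ih(x) + self.hh(h)
        i, f, g, o = gates.chunk(4, dim=-1)           # input, forget, cell, output gates
        c = f.sigmoid() * c + i.sigmoid() * g.tanh()  # update the cell state
        h = o.sigmoid() * c.tanh()                    # new hidden state
        return h, (h, c)

# usage: a batch of 3 inputs of size 10, hidden size 20
cell = LSTMCell(10, 20)
h = c = torch.zeros(3, 20)
out, (h, c) = cell(torch.randn(3, 10), (h, c))
```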


TIL: finally understood multi-head attention in the transformer (after going back and forth for almost a month)
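
A minimal sketch of what finally clicked for me, assuming the standard scaled dot-product formulation (names and shapes are my own, not from any particular library):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    """Project to Q/K/V, split into heads, attend per head, merge heads."""
    def __init__(self, d_model, n_heads):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, mask=None):
        B, T, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # (B, T, C) -> (B, n_heads, T, d_head)
        split = lambda t: t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float('-inf'))
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, T, C)  # merge heads back
        return self.out(out)
```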

TIL: implemented transformer’s encoder


TIL: implemented masked attention
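
The mask itself is just a lower-triangular matrix; a sketch of how it plugs into the attention above:

```python
import torch

T = 5
# position t may only attend to positions <= t; zeros become -inf before softmax
causal_mask = torch.tril(torch.ones(T, T))
print(causal_mask)
```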

TIL: how to calculate the similarity between two embeddings using open_clip and fastai’s Transform
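
Without the fastai Transform wrapper, the bare open_clip version looks roughly like this (the model and checkpoint names are just examples, and weights download on first run):

```python
import torch
import torch.nn.functional as F
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    'ViT-B-32', pretrained='laion2b_s34b_b79k')
tokenizer = open_clip.get_tokenizer('ViT-B-32')

with torch.no_grad():
    emb = model.encode_text(tokenizer(["a photo of a cat", "a photo of a dog"]))
    emb = F.normalize(emb, dim=-1)   # unit-length embeddings
    sim = emb[0] @ emb[1]            # cosine similarity = dot product of unit vectors
print(sim.item())
```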

P.S.: I implemented the transformer from scratch, but I do not fully understand the src_mask and trg_mask in the Transformer's forward pass. I will train it on a toy dataset using fastai to fully understand them.


To warm up my muscles for fastai's 2023 course, I'm currently implementing CLIP, DDPM, and VAE from scratch.

TIL: understood how CLIP works (will implement it from scratch very soon)
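
The core idea is a symmetric contrastive loss over a batch of matched image/text pairs; a sketch, assuming the embeddings are already computed (temperature value is illustrative):

```python
import torch
import torch.nn.functional as F

def clip_loss(image_emb, text_emb, temperature=0.07):
    """Pull matched image/text pairs together, push mismatched pairs apart."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(len(logits), device=logits.device)  # diagonal = matches
    # cross-entropy in both directions: image->text and text->image
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```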


Update: got the pipeline working (check out the GitHub repo). This week the goal is to reimplement CLIP from scratch. I'm finding the CLIP tokenizer a bit challenging.
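
The tricky part is that CLIP's tokenizer is byte-pair encoding with an end-of-word marker; a toy sketch of a single merge step (not CLIP's actual vocab or merge table):

```python
from collections import Counter

word = list("banana")
word[-1] += "</w>"                   # CLIP-style end-of-word marker on the last symbol
pairs = Counter(zip(word, word[1:]))
best = pairs.most_common(1)[0][0]    # most frequent adjacent pair: ('a', 'n')

# merge that pair everywhere it occurs
merged, i = [], 0
while i < len(word):
    if i < len(word) - 1 and (word[i], word[i + 1]) == best:
        merged.append(word[i] + word[i + 1])
        i += 2
    else:
        merged.append(word[i])
        i += 1
print(best, merged)   # ('a', 'n') ['b', 'an', 'an', 'a</w>']
```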



The last few days I learned: some basics of transformers and einops.
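A few einops one-liners that cover most of what I use (shapes are arbitrary examples):

```python
import torch
from einops import rearrange, reduce

x = torch.randn(2, 3, 4, 5)                      # (batch, channels, h, w)
flat = rearrange(x, 'b c h w -> b (c h w)')      # flatten everything but the batch
pooled = reduce(x, 'b c h w -> b c', 'mean')     # global average pooling
# split an embedding into 8 attention heads
heads = rearrange(torch.randn(2, 10, 64), 'b t (h d) -> b h t d', h=8)
print(flat.shape, pooled.shape, heads.shape)
```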

The last two days I learned: how text generation works in transformers, and decoding strategies.
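A sketch of two decoding strategies side by side, assuming `logits` is the model's output for the last position (the function and argument names are my own):

```python
import torch
import torch.nn.functional as F

def sample_next_token(logits, strategy='greedy', k=50, temperature=1.0):
    """Pick the next token id from the last position's logits."""
    if strategy == 'greedy':
        return logits.argmax(dim=-1)            # always take the top token
    logits = logits / temperature               # flatten or sharpen the distribution
    if strategy == 'top_k':
        topk = torch.topk(logits, k)            # keep only the k best candidates
        probs = F.softmax(topk.values, dim=-1)
        idx = torch.multinomial(probs, 1)       # sample among them
        return topk.indices.gather(-1, idx).squeeze(-1)
    raise ValueError(strategy)
```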

The last four days I learned: the pipeline for question answering in NLP.
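
For extractive QA, the Hugging Face pipeline makes the whole flow one call (the model name is just one common checkpoint):

```python
from transformers import pipeline

# extractive QA: the model points at an answer span inside the context
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
result = qa(question="What is fastai built on?",
            context="fastai is a deep learning library built on top of PyTorch.")
print(result["answer"], result["score"])
```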



The last five days I learned: how text summarization works, training with knowledge distillation, and creating performance benchmarks.
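
The distillation part boils down to blending hard-label cross-entropy with a temperature-softened KL term against the teacher; a sketch (hyperparameters are illustrative):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Mix the usual hard-label loss with a soft loss against the teacher."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction='batchmean') * T * T   # scale by T^2 (Hinton et al.)
    return alpha * hard + (1 - alpha) * soft
```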




The last three days I learned :cold_face:: implemented a custom head for a downstream task.
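
A sketch of the pattern, assuming a transformer encoder's hidden states as input (the pooling choice and sizes are mine):

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Example head: pool the encoder's token states, then classify."""
    def __init__(self, hidden_size, n_classes, dropout=0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden_size, n_classes)

    def forward(self, hidden_states):          # (B, T, hidden)
        pooled = hidden_states.mean(dim=1)     # mean-pool over tokens
        return self.fc(self.dropout(pooled))   # (B, n_classes)

# usage: plug onto a frozen or fine-tuned encoder with hidden size 768
head = ClassificationHead(768, 5)
logits = head(torch.randn(2, 16, 768))
```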

You’re making great progress!


Thanks Jeremy. I am also going to post my progress on learning particle physics and nanoscience by re-implementing AI-related papers. Persistence is all you need :wink:


The last four days I learned: implemented the language model agent in RLHF and a prompt dataset.

The last four days I learned: implemented and trained GPT-2 from scratch.
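
The heart of GPT-2 is a stack of identical pre-norm decoder blocks; a compact sketch of one block, leaning on PyTorch's built-in attention for brevity (sizes match GPT-2 small):

```python
import torch
import torch.nn as nn

class GPT2Block(nn.Module):
    """One GPT-2 decoder block: pre-norm causal attention + pre-norm MLP,
    each wrapped in a residual connection."""
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model))

    def forward(self, x):
        T = x.size(1)
        # True entries are masked out: strict upper triangle blocks future tokens
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), 1)
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=causal)
        x = x + a                           # residual around attention
        return x + self.mlp(self.ln2(x))    # residual around the MLP

block = GPT2Block()
y = block(torch.randn(2, 16, 768))
```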


The last two days I learned: figured out how to handle inputs that exceed the model's context length.
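
One common approach is to split long inputs into overlapping windows; a minimal sketch (window and stride sizes are arbitrary):

```python
def chunk_with_overlap(token_ids, max_len=1024, stride=128):
    """Split a long token sequence into overlapping windows so no chunk
    exceeds the context length and some context carries across chunks."""
    chunks, start = [], 0
    while start < len(token_ids):
        chunks.append(token_ids[start:start + max_len])
        if start + max_len >= len(token_ids):
            break
        start += max_len - stride   # step forward, keeping `stride` tokens of overlap
    return chunks

print([len(c) for c in chunk_with_overlap(list(range(3000)))])  # [1024, 1024, 1024, 312]
```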


TIL: created a language model with persistent conversation memory using LangChain.
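
A minimal sketch with LangChain's conversation memory, as the API looked around early 2023 (requires an OPENAI_API_KEY; newer LangChain versions have moved these imports):

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# the memory object stores the chat history and re-injects it into the
# prompt on every call, so the model "remembers" earlier turns
chain = ConversationChain(llm=OpenAI(), memory=ConversationBufferMemory())
chain.predict(input="Hi, my name is Alice.")
print(chain.predict(input="What is my name?"))  # the stored history supplies the answer
```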


The last two days I learned (lol, this month I spent a lot of time on non-AI subjects, now I'm back): some techniques for efficiently training deeper models, some AI alignment techniques (will go deeper soon), 3/4 of how Toolformer works (will share notes after finishing it), and implemented 1/10 of Toolformer.

