Lesson 9 official topic

This post is for topics related to lesson 9 of the course.

This is a wiki post - feel free to edit to add links from the lesson or other useful info.

<<< Lesson 8 | Lesson 10 >>>

Lesson resources

Links from the lesson

Useful background on fast.ai courses

Discussion

Please use this topic to ask any questions about the course (even if you think they're stupid, we really love stupid questions).

Questions with lots of likes will be answered live. If you see a question you’d really like to be answered, click on the heart under it to like it. Don’t post duplicate questions if possible. Duplicates will be removed by the moderators (don’t be mad if this happens to you, it’s nothing personal). If your question is not answered during the course, it will be answered on this topic after the lesson.

This topic will be very crowded, so please refrain from going too far off-topic (create another topic for that), and please like the post of someone who replied to you instead of replying with a thank-you.

Hi Jeremy, I just started to watch and take notes on the 2019 part 2 lectures. I wonder how the 2022 part 2 will differ from the 2019 part 2, apart from adding Stable Diffusion to the course. What other differences or new things should we expect this year?

It will follow a similar structure to the 2019 course but with lots of additions and increased depth in many sections. For details, you’ll have to watch the course! :wink:

I wonder what kind of compute is advised for this iteration of the course? Is it similar to the first part, i.e., a Colab instance, a single (low- to mid-range) GPU machine, or a Kaggle notebook? I’m mostly thinking about things like diffusion models, transformer-based NLP architectures, etc. If I am not mistaken, many of them have become less computationally demanding over the last few years.

I think here we might learn about Stable Diffusion and similar models. They require a fair amount of GPU compute, especially to train; VRAM size is the main constraint. Since Colab has a T4, it will work.

In the last few weeks, training for DreamBooth and similar techniques has improved a lot too. Now we can train them on Colab as well.

Just browse the Stable Diffusion subreddit for the latest improvements in this area.
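
For example, here's a minimal sketch of loading Stable Diffusion in half precision so it fits within a T4's ~16 GB of VRAM. This assumes the diffusers library and the CompVis/stable-diffusion-v1-4 checkpoint, and that you've accepted the model license on Hugging Face; it's just an illustration, not anything from the lesson:

```python
# Minimal sketch: load Stable Diffusion in fp16 so it fits on a Colab T4.
# Assumes `diffusers` is installed and the model license has been accepted
# on Hugging Face.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,  # half precision roughly halves VRAM use
).to("cuda")
pipe.enable_attention_slicing()  # trades a little speed for lower peak VRAM

image = pipe("a watercolor painting of a fox").images[0]
image.save("fox.png")
```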

Yes, true, that’s why my guess was that rather modest compute should be enough.

Yep, curious how best to prepare for the lesson, i.e., would chapter 11 of the book be a good place to read through?

How do we use real objects in the image generation process? For example, if I have a specific study table and want to create a living room with MY study table, how should I go about it? I know that DreamBooth exists, but it still doesn’t capture the object with 100% fidelity; it still somewhat distorts the product. How do I create a generated image with no distortion of the object?

You could use a technique referred to as outpainting, where you ask the image generator to apply its generation only to areas outside of the object. This assumes you don’t want ANY changes to the table.
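
As a rough sketch of that masked approach, using the diffusers inpainting pipeline (the model id and file names here are just placeholders):

```python
# Minimal sketch: regenerate everything EXCEPT the masked-out object.
# Assumes `diffusers` is installed; model id and file names are
# illustrative. Mask convention: white = repaint, black = keep as-is.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("table.png").convert("RGB").resize((512, 512))
# White everywhere except the table region, which is painted black
# so the pipeline leaves those pixels untouched.
mask = Image.open("table_mask.png").convert("RGB").resize((512, 512))

out = pipe(
    prompt="a cozy living room, interior design photo",
    image=init,
    mask_image=mask,
).images[0]
out.save("living_room.png")
```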

There are also parameters in the image generation process (e.g., using img2img) that you can tune to preserve the structure or content of the original object/image while only making minor changes to the style, for example.
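
For example, a minimal img2img sketch where a low strength keeps the output close to the input (model id and file names are illustrative):

```python
# Minimal sketch: img2img with a low `strength`, which limits how far
# the output can drift from the input image (0 = return the input,
# 1 = ignore it entirely). Assumes `diffusers` is installed.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("table.png").convert("RGB").resize((512, 512))
out = pipe(
    prompt="the same study table in a bright living room",
    image=init,      # older diffusers versions call this `init_image`
    strength=0.3,    # low strength = preserve structure, tweak style
    guidance_scale=7.5,
).images[0]
out.save("restyled.png")
```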

Watching the 2019 part 2 videos would be the best way to prep for the course.

For anyone looking for those videos, I think “Part 2: Deep Learning from the Foundations | fast.ai course v3” is probably the best starting point, and then you can get to the videos from there.

Or this YouTube playlist: https://www.youtube.com/playlist?list=PLfYUBJiXbdtTIdtE1U8qgyxo4Jy2Y91uj

Hi, I’m wondering if there is any info on environment setup that we should be doing before the lesson? I was intending to run notebooks locally. I didn’t do part 1 of this course so it’s possible I’ve missed the instructions on this. Is there a requirements.txt for the course?

Generally, folks have been running things on Kaggle, Colab, or Paperspace. For running locally, you’ll need PyTorch, fastai, and the Hugging Face libraries installed, all with CUDA working.
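
If you want to sanity-check a local install, here's a minimal sketch (assuming the standard PyPI package names; this isn't an official course requirements list):

```python
# Minimal environment check for a local setup. Assumes pip-installed
# torch, fastai, transformers, and diffusers (standard PyPI names).
import torch
import fastai
import transformers
import diffusers

print("torch:", torch.__version__)
print("fastai:", fastai.__version__)
print("transformers:", transformers.__version__)
print("diffusers:", diffusers.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```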

You might want to check out the Docker containers from seeme.ai or Paperspace. For Part 1 I used the Paperspace container, but I see that the seeme.ai container is newer (pushed a month ago vs. three months ago for Paperspace) and smaller.

seeme.ai container: Docker Hub
Paperspace container: Docker Hub

HTH,

Hi Jeremy,

I have registered for the course, but other than the payment confirmation email, I have not received any other information. Can I just view the course here?

Hi Amir,
I had similar confusion. I believe you are all set if you are here and can post messages in this section of the course forum. Check **About the Part 2 2022 course** for details on how to get to the live stream.

thanks

Thanks a lot, so the registration was not necessary for me?

Exactly. Browse this link: About the Part 2 2022 course for the exact date/time and links for each session.

It was necessary - it’s the reason you have access to this forum category and can watch and participate in the course!
