A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

We are live!

The words in the slides are blurred - can't read them :frowning:

The video stream seems to be very choppy, at least for me.

Yeah, I'm sorry for that guys, the internet is very poor in this room right now :frowning:

No worries! Enjoying the presentation so far. Thanks for doing this!

Obvs just build a fastai hi-res & audio youtube fixer and re-release in 4k. :wink:

Donut worry about it, this is an awesome thing you are doing. Tech will sort itself out with time.


Hi muellerzr great lecture!

How would I go about solving the following problem? Is it currently possible?

I have an image and I want to segment it by color.

So if I had a chess board image, it would produce two segments: black and white.

Cheers mrfabulous1 :smiley::smiley:

Thanks guys for tuning in. Apologies for the tech issues, hopefully they will be sorted out next week :slight_smile:


I'd imagine that'd be similar to CamVid and masks? :slight_smile: Or are we talking about coloring it in?

Thanks for the lecture today. You mentioned that you will describe lr_find in more detail in a future lecture. If possible, could you go into a bit more depth on the following topics regarding lr_find?

  1. What does it mean if we get wildly different plots when running lr_find repeatedly? Running lr_find repeatedly can and does give different plots due to the random way the data is selected for the algorithm. Typically these differences are small; what does it mean when the plots differ dramatically?
  2. Would it be possible to show and explain the meaning of a set of lr_find plots that are more degenerate and/or difficult to interpret than the ones typically shown in the classes?
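On the first question, a minimal plain-Python sketch (not fastai's implementation) of the idea behind lr_find, the learning-rate range test: train on mini-batches while exponentially increasing the learning rate and record the loss at each step. Because the mini-batches are drawn in random order, repeated runs trace slightly different curves, which is exactly why the plots vary run to run. The toy model and data here are made up for illustration.

```python
import random

def lr_range_test(data, start_lr=1e-4, end_lr=1.0, steps=50, seed=0):
    """Sketch of an LR range test: sweep the learning rate exponentially
    from start_lr to end_lr, doing one SGD step per setting, and record
    (lr, loss) pairs. Different seeds -> different mini-batch order ->
    slightly different plots, as seen with lr_find."""
    rng = random.Random(seed)
    w = 0.0                              # single weight, toy model y = w * x
    mult = (end_lr / start_lr) ** (1 / (steps - 1))
    lr, history = start_lr, []
    for _ in range(steps):
        x, y = rng.choice(data)          # random "mini-batch" of size 1
        loss = (w * x - y) ** 2          # squared error
        grad = 2 * (w * x - y) * x
        w -= lr * grad                   # SGD step at the current lr
        history.append((lr, loss))
        lr *= mult                       # exponential lr schedule
    return history

# Toy data for y = 3x: losses shrink at moderate lrs, blow up at large ones.
data = [(x, 3 * x) for x in [0.5, 1.0, 1.5, 2.0]]
history = lr_range_test(data)
```

Plotting `history` on a log-x axis gives the familiar lr_find curve; rerunning with a different seed shifts it a little, and a dramatically different curve usually points at noisy data or an unstable model rather than the sweep itself.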

You've got it! Everyone else, please let me know any topics that may have confused you and that you'd like me to go into in more detail. Since we had the tech issues, I will be recording a separate video discussing them.

Also, thank you all for watching :slight_smile: It was my first time with this large an audience, so thank you for sticking with me :slight_smile:

The PowerPoint is also available in the GitHub repository :slight_smile:


Coloring in.

I'd recommend perhaps a segmentation approach where you identify where the boxes are, or something like feature loss (we will go into this in lesson 6; think Jason's DeOldify model).
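For the literal "segment by color" reading, here's a minimal plain-Python sketch (not fastai; the 4x4 board below is made-up pixel data): assign every pixel a class label by thresholding its intensity, which for a chess board yields exactly two segments, black squares and white squares.

```python
def segment_by_intensity(image, threshold=128):
    """Return a per-pixel mask of class labels: 0 = dark, 1 = light.
    `image` is a 2D list of grayscale values in 0..255."""
    return [[0 if px < threshold else 1 for px in row] for row in image]

# Toy 4x4 chess-board pattern: 0 = black pixel, 255 = white pixel.
board = [
    [0, 255, 0, 255],
    [255, 0, 255, 0],
    [0, 255, 0, 255],
    [255, 0, 255, 0],
]

mask = segment_by_intensity(board)
# mask -> [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
```

On real photos the lighting varies, so a fixed threshold breaks down quickly; that's where a learned segmentation model (CamVid-style masks) earns its keep.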


Cool, I'll give it some more thought and see what I come up with.

Thanks mrfabulous1 :smiley::smiley:

Everyone: I also meant to add, for homework I'd like you to run those models on the data you collected so you get a baseline, and you can try again after we've learned some of the more advanced techniques and see how much it improves :slight_smile:

Lastly, as there were bits that I couldn't cover due to the issues, I will go live again tomorrow at 7pm Central time to finish it off. (These will be done in a separate location so we won't have issues again.)


Not 100% sure, but try turning off Adblock in Chrome and give it a shot.

Actually, since we have not gone over the time limit yet, I'm going to go ahead and go live again and walk back through some of the troubling bits. I'll post another link here in a moment :slight_smile:

Weā€™re live:

Edit: I will post the videos up at the top, hopefully this answers any confusion on the rest of the first notebook :slight_smile:


So why is the training of "stage_2", after freezing "stage_1" and continuing, (much) worse than the first stage? Is this because of over-fitting, or something else? Given the title in the notebook, "Fine tuning", one would expect this to be better if you train some more.

I tried this both on Google Colab and on a local GTX 1060 under Win10, and the worse results seem consistent. Do others see this too?
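One possible culprit besides over-fitting is the learning rate used for the second stage. As a toy illustration (plain Python, not fastai and not the notebook's actual model): continuing to train an already-converged model with too large a step size overshoots the minimum, and the loss after "stage 2" ends up worse than after "stage 1".

```python
def train(w, lr, steps, x=1.0, y=3.0):
    """Gradient descent on the squared error (w*x - y)**2,
    starting from weight w, with learning rate lr."""
    for _ in range(steps):
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def loss(w, x=1.0, y=3.0):
    return (w * x - y) ** 2

# "Stage 1": train from scratch with a sensible lr; converges near w = 3.
w1 = train(0.0, lr=0.1, steps=50)

# "Stage 2": keep training the converged model, but with lr too high.
# Each step now overshoots, so the loss grows instead of shrinking.
w2 = train(w1, lr=1.1, steps=10)
```

This is only one hypothesis for the notebook's behavior; picking a smaller learning rate for the second stage (e.g. via lr_find after unfreezing) is the usual first thing to try.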