We are live!
The words in the slides are blurred - can't read them
The video stream seems to be very choppy, at least for me
Yeah I'm sorry for that guys, internet is very poor in this room right now
No worries! Enjoying the presentation so far. Thanks for doing this!
Obvs just build a fastai hi-res & audio youtube fixer and re-release in 4k.
Donut worry about it, this is an awesome thing you are doing. Tech will sort itself out with time.
Hi muellerzr great lecture!
How would I go about solving the following problem? Is it currently possible?
I have an image and I want to segment it by color.
So if I had a chessboard image, it would produce two segments: black and white.
Cheers mrfabulous1
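If the goal really is a pure color split rather than learned segmentation, a plain thresholding pass may already do it. Here is a minimal pure-Python sketch (the function names and threshold are my own illustration; for a real image you would read pixels with something like PIL rather than hand-built lists):

```python
# Toy color segmentation by luminance threshold (pure Python sketch).
# For a real image you'd load pixels with PIL, e.g. Image.open(path).getdata().

def luminance(rgb):
    """Perceptual luminance of an (r, g, b) pixel, in the 0-255 range."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def segment_by_color(pixels, threshold=128):
    """Return a mask: 0 for dark pixels, 1 for light pixels."""
    return [[0 if luminance(p) < threshold else 1 for p in row]
            for row in pixels]

# A tiny 2x2 "chessboard": black, white / white, near-black
board = [[(0, 0, 0), (255, 255, 255)],
         [(255, 255, 255), (10, 10, 10)]]
mask = segment_by_color(board)
print(mask)  # [[0, 1], [1, 0]]
```

For more than two colors, the same idea generalizes to clustering pixel values (e.g. k-means on RGB) instead of a single threshold.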
Thanks guys for tuning in. Apologies for the tech issues, hopefully they will be sorted out next week
I'd imagine that'd be similar to CamVid and masks? Or are we talking coloring it in?
Thanks for the lecture today. You mentioned that you will describe lr_find in more detail in a future lecture. If possible, could you go into a bit more depth on the following topics regarding lr_find?
- What does it mean if we get wildly different plots when running lr_find repeatedly? Running lr_find repeatedly can and does give different plots due to the random way the data is sampled for the algorithm. Typically these differences are small; what does it mean when they differ dramatically?
- Would it be possible to show and explain the meaning of a set of lr_find plots that are more degenerate and/or difficult to interpret than the ones typically shown in the classes?
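For context on why repeated runs differ: lr_find implements an LR range test, training for a handful of mini-batches while exponentially increasing the learning rate and recording the loss at each step, so the randomness in which batches are drawn shows up directly in the plot. A toy sketch of the idea on a noisy 1-D quadratic loss (all names and numbers are my own illustration, not the fastai internals):

```python
import random

def lr_range_test(start_lr=1e-5, end_lr=10.0, steps=50, seed=0):
    """Toy LR range test: one SGD step per trial learning rate on a
    noisy quadratic loss f(w) = (w - 3)^2, with lr growing exponentially."""
    rng = random.Random(seed)
    growth = (end_lr / start_lr) ** (1 / (steps - 1))
    w, lr, history = 0.0, start_lr, []
    for _ in range(steps):
        noise = rng.gauss(0, 0.1)           # stands in for mini-batch sampling noise
        grad = 2 * (w - 3) + noise          # d/dw of (w - 3)^2, perturbed
        w -= lr * grad
        history.append((lr, (w - 3) ** 2))  # record (lr, loss) pairs
        lr *= growth
    return history

hist = lr_range_test()
# The loss typically falls, flattens, then blows up once lr is too large.
print(min(hist, key=lambda t: t[1])[0])  # lr at the lowest recorded loss
```

fastai's learn.lr_find() does this on the real model and smooths the recorded losses before plotting, which is why small run-to-run differences are expected; degenerate plots usually mean the noise term dominated the signal for that particular sample of batches.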
You've got it! Everyone else, please let me know any topics you'd like me to go into more detail on that may have confused you. Since we had the tech issues I will be recording a separate video discussing them
Also, thank you all for watching; it was my first time with this large an audience, so thank you for sticking with me
The PowerPoint is also available in the GitHub repository
Coloring in.
I'd recommend perhaps a segmentation approach where you identify where the boxes are, or something like feature loss (we will go into this in lesson 6; think Jason's DeOldify model)
Cool, I'll give it some more thought and see what I come up with.
Thanks mrfabulous1
Everyone: I also meant to add. For homework I'd like you to run those models on the data you collected so you get a baseline, and you can try again after we've learned some more of the advanced techniques and see how much it improves
Lastly, as there were bits that I couldn't cover due to the issues, I will go live again tomorrow at 7pm central time to finish it off. (These will be done in a separate location so we won't have issues again)
Not 100% sure, but try turning off Adblock on Chrome and give it a shot
Actually, since we have not gone over the time limit yet, I'm going to go ahead and go live again and re-walk through some of the troubling bits. I'll post another link here in a moment
We're live:
Edit: I will post the videos up at the top, hopefully this answers any confusion on the rest of the first notebook
So why is the training of "stage_2" after freezing "stage_1" and continuing (much) worse than the first stage? Is this because of over-fitting or ...? Given the title in the notebook, "Fine tuning", one would expect this to be better... train some more.
I tried this both on Google Colab and on a local GTX 1060 under Win10... and the worse results seem consistent. Do others see this too?
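One common cause of this (I can't confirm it is what happened here without the notebook) is continuing to train at too high a learning rate after the freeze/unfreeze switch, which can undo the first stage's progress rather than refine it. A toy single-weight illustration (all numbers are my own, not from the notebook):

```python
def sgd(w, lr, steps):
    """Plain gradient descent on loss(w) = (w - 3)^2, starting from w."""
    for _ in range(steps):
        w -= lr * 2 * (w - 3)  # gradient of (w - 3)^2
    return w

loss = lambda w: (w - 3) ** 2

w1      = sgd(0.0, lr=0.1,  steps=20)  # "stage 1": converges close to the optimum
w2_bad  = sgd(w1,  lr=1.1,  steps=20)  # "stage 2" with too high an lr: diverges
w2_good = sgd(w1,  lr=0.01, steps=20)  # "stage 2" with a lower lr: keeps improving

print(loss(w1), loss(w2_bad), loss(w2_good))
```

If something like this is the culprit, the usual fastai pattern after unfreezing is to refit with much lower, discriminative learning rates, e.g. learn.fit_one_cycle(2, slice(1e-6, 1e-4)), rather than reusing the stage-1 rate.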