A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

Hi @barnacl, I updated it if you want to take a look: https://colab.research.google.com/drive/1nTetOULwzZzOZ8849QM7ZQTLcCTH3V1V#scrollTo=jpQs3pDoh7y7


I'm puzzled by this line in the model head; what does it do?

I just watched last week's video! Very interesting content @muellerzr, I loved the EfficientNet explanation :heart_eyes:. For the theory of style transfer, I totally recommend this series for a brilliant and complementary explanation.

I tried to segment a food dataset (pretty cool!) following the tutorial but I am getting a weird error :pensive:. Does somebody know what the issue is? (Also, I am looking for people to join my team in this competition :grin:)

RuntimeError: CUDA error: device-side assert triggered

@mgloria you found one of the most common segmentation issues, and I'm also going to talk about it next lecture. I meant to put that in! It means your codes don't align with the number of classes present in your dataset.

i.e. I made 4 labels but in reality there are five. If there's a strange extra one even though they all "should" be there, add one more blank or "other" category. That should fix it.

@vijayabhaskar this comes from a flattening layer. Here is the source:
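For anyone reading along, a flattening layer of this kind looks roughly like the plain-PyTorch sketch below (illustrative only, not the verbatim fastai source):

```python
import torch.nn as nn

class Flatten(nn.Module):
    "Flatten a batch of feature maps to shape (batch_size, -1), e.g. before a linear layer."
    def forward(self, x):
        return x.view(x.size(0), -1)
```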

And in create_head:
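The head built by create_head follows the usual pooling / flatten / BatchNorm / Dropout / Linear pattern; a rough sketch (the pooling choice and layer sizes are illustrative, not the exact fastai implementation):

```python
import torch.nn as nn

def simple_head(nf, n_out, p=0.5):
    "Rough fastai-style head: pool, flatten, then two BatchNorm/Dropout/Linear groups."
    return nn.Sequential(
        nn.AdaptiveAvgPool2d(1),      # fastai concatenates average and max pooling here
        nn.Flatten(),
        nn.BatchNorm1d(nf),
        nn.Dropout(p / 2),
        nn.Linear(nf, 512),
        nn.ReLU(inplace=True),
        nn.BatchNorm1d(512),
        nn.Dropout(p),
        nn.Linear(512, n_out),
    )
```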


Lastly,

For the last lecture (in vision), since it'll cover some more different use cases, what are some vision DataBunches that people would like examples of?

The most common question I've seen is Siamese; what others? :slight_smile:

I see! But I added the "void" class manually at the beginning of the codes to account for the background… shouldn't this be enough? It is 0 in the mask.


Did you also try not including it? (I assume so.) It's a hard bug to figure out. There are a few discussions on debugging it on the forum; I'll see what I can dig up. But basically, another way of doing it is to find the maximum pixel value in your mask and add 1 to it; that tells you the number of classes as well.
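As a minimal sketch of that check (mask_paths is a hypothetical list of paths to your mask images):

```python
import numpy as np
from PIL import Image

max_val = 0
for p in mask_paths:               # mask_paths: paths to your mask images (assumed)
    m = np.array(Image.open(p))
    max_val = max(max_val, int(m.max()))

n_classes = max_val + 1            # pixel values run 0..max, so classes = max + 1
print(f"masks contain {n_classes} classes; len(codes) should match this")
```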

This is common when we don't know how the data was prepared beforehand, and it's an issue that you don't realize until you've started working with the data.

@mgloria try incorporating some of the advice here: CUDA error: device-side assert triggered

Siamese would be great! I don't think there is much left in vision in terms of DataBlocks; you covered it all. Only GANs are left. If I'm not asking too much, maybe image captioning? :sweat_smile:


If you haven't covered super resolution, maybe "super resolution". Thanks.


Tail end of the series after NLP :wink:


@mgloria just to point out, you need to update your vmin, vmax (won't solve the CUDA error though) :slight_smile:
Actually, I think you don't need to pass vmin and vmax at all; matplotlib will handle it.
You could try finding the unique values in your mask and counting them to double-check whether you are missing something.
The error I was making was that my mask was 3D instead of 2D.
Any reason why you are not using background in the accuracy calculation?
mask = targ != void_code says that when we calculate accuracy we don't care where the background is (I'm not sure if you want that?).
In CamVid, void was used because there were some pixels which didn't fall into any of the mask categories.
Could you point to the dataset or a small subset (maybe upload a few files to your GitHub)? I'd like to try it out and see if I can help :slight_smile:
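A minimal sketch of both checks, borrowing targ and void_code from the snippet above (everything else here is illustrative, not code from the lesson):

```python
import torch

def mask_values(targ):
    "List the distinct pixel values actually present in a mask tensor."
    return torch.unique(targ)

def acc_no_void(inp, targ, void_code=0):
    "Pixel accuracy that ignores the void/background class, CamVid-style."
    targ = targ.squeeze(1)            # drop the channel dim if masks come as (bs, 1, h, w)
    keep = targ != void_code          # only score pixels that are not void
    return (inp.argmax(dim=1)[keep] == targ[keep]).float().mean()
```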

@muellerzr could we do image plus regression, and image plus multiple regression columns? (Not sure if I missed that.)

Keypoints will cover this :slight_smile: but if you can provide a dataset where the target is an individual number itself (for true regression), I can include that in that lesson too (I tried looking for this but couldn't find much). If so, we'll do single regression.

Let me look; I had one for work that was predicting the brightness of images.

Model explainability using LIME, saliency maps.

@vijayabhaskar this was fixed in the latest fastcore (just got informed) :slight_smile:


@muellerzr, sorry that I could not share the data I have yet. But I get pretty good results for my problem compared to classical regression (with hand-made feature engineering). Busy writing a paper about it too.

From the blog where fastai v1 was used to estimate age:

1. IMDB-Wiki Face Dataset (https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/)
2. UTK Face Dataset (http://aicip.eecs.utk.edu/wiki/UTKFace)
3. Appa Real Face Dataset (http://chalearnlap.cvc.uab.es/dataset/26/description/)

Also, a leaf count example is done, but I don't know if the data is available…


This is one I've definitely heard of :slight_smile:

What I'll likely do is just provide a detailed notebook for those few other examples, since I'm not quite sure how they'd fit in time-wise, but they'll be there :slight_smile: (if the DataBlock/DataBunch is complex like Siamese is, then I'll talk about it in class)


It might be useful for future users. I just wonder if it's possible to get both gender and age as output with the current v2, with minor modifications? Do you have any idea?
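One way this could look in v2 is a DataBlock with one image input and two targets (a category for gender plus a regression for age). This is just a sketch under assumptions: the DataFrame df with columns fname, gender, age and the path variable are hypothetical, and you would still need a loss function that combines cross-entropy for gender with MSE for age:

```python
from fastai2.vision.all import *   # the package is named fastai (not fastai2) in later releases

# df: hypothetical DataFrame with columns 'fname', 'gender', 'age'
dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock, RegressionBlock),
    n_inp=1,                                     # first block is the input, the other two are targets
    get_x=ColReader('fname', pref=path/'images'),
    get_y=[ColReader('gender'), ColReader('age')],
    splitter=RandomSplitter(seed=42),
    item_tfms=Resize(224))

dls = dblock.dataloaders(df)
dls.show_batch()
```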
