Lesson 3 Advanced Discussion ✅

(hasif) #123

Did you successfully do lesson 3 on multiple GPUs? There seems to be an error when running multi-GPU. I don't know if it comes from the model or something else.


No, I also had issues there. Seems like some parts of the fast.ai library are not easily parallelizable.

(Sapir Gershov) #126

I'm having some difficulties locating the file names of my predictions.
I've worked with the CamVid notebook, performed a complete training run, and extracted the predictions as well.
Unfortunately, when I saved the segmentation predictions to my PC, they didn't match the original files.
Could it be that I shuffled my item list, and that's what caused the mismatch?
Any advice would be appreciated.


Can anyone elaborate on accuracy_thresh?
I don't understand why, but I just accept that accuracy_thresh is used when we need to get a prediction with multiple labels.
Does that mean we have a tensor of per-label probabilities? For instance, if we have 4 labels it would be something like [0.59, 0.67, 0.21, 0.89], and to get a binary tensor outcome we pass this tensor to a threshold function, which triggers a label if its probability is above a particular threshold? For instance, with a threshold of 0.5 we would get [1, 1, 0, 1]? If so, is there a way to see these probabilities? And what is the rule for choosing a proper threshold? Why do we use 0.2 with the planet dataset?

(Kieran) #128

Hey outfuture.

I think you are correct. accuracy_thresh is used when there is a possibility of one or more labels. We choose the threshold to determine at what level of prediction we consider a particular label to be present. Jeremy hasn't said much about how to decide on the number; I imagine that might come in part 2.

You can run a prediction against a single image and return the outputs. See below where the image is a satellite image.
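Since the printout itself isn't shown here, this is a hedged, dependency-free sketch of the kind of output you get from a single-image prediction on a multi-label model. The class names and probabilities are made up for illustration; in fastai v1, `learn.predict(img)` returns the predicted labels, the binary tensor, and the raw probabilities.

```python
# Hypothetical subset of planet labels and made-up model outputs.
classes = ['agriculture', 'clear', 'primary', 'water']
probs = [0.85, 0.97, 0.99, 0.12]
thresh = 0.2

# Threshold the probabilities into a binary tensor, then read off the labels.
binary = [1 if p > thresh else 0 for p in probs]
labels = [c for c, b in zip(classes, binary) if b]

print(binary)  # [1, 1, 1, 0]
print(labels)  # ['agriculture', 'clear', 'primary']
```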

(Kieran) #129

Hey jolyon, sorry no one responded for a while.
I responded to another similar comment recently, looking at that might help.

The main difference in this particular model is that the prediction can have multiple labels.

In the cat vs. dog model there is only one possible output, so we take the argmax (the larger of the two predictions) to categorise the image as cat or dog.

In this case, however, we have 17 possible labels and the output can be any subset of them. Because multiple labels can be present at once, argmax is not going to work here. What we do instead is provide a threshold that tells the model at what prediction level each label should be added.

So in this case anything where the model outputs above 0.2 is considered a positive label.
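The difference between the two decision rules can be sketched in plain Python (the probabilities below are made up):

```python
# Single-label (cat vs dog): argmax picks exactly one class.
single = [0.3, 0.7]  # hypothetical two-class output
winner = max(range(len(single)), key=single.__getitem__)
print(winner)  # 1, i.e. the second class wins

# Multi-label (planet): a threshold keeps every class above it.
multi = [0.9, 0.05, 0.4]  # hypothetical per-label outputs
kept = [i for i, p in enumerate(multi) if p > 0.2]
print(kept)  # [0, 2] -- two labels are present at once
```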

As you can see in this printout of a prediction on one image, accuracy_thresh has turned the second tensor in the list into the first tensor by mapping every prediction above 0.2 to 1 and every prediction below 0.2 to 0.

In the lectures Jeremy advises that the best way to work this out is to experiment: try different thresholds and use your accuracy measures to determine the best score.
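To make that concrete, here is a minimal, dependency-free sketch of how a metric like fastai's accuracy_thresh scores one prediction, using the probabilities from the question above and hypothetical ground-truth labels:

```python
def accuracy_thresh(probs, targets, thresh=0.5):
    """Fraction of labels where (prob > thresh) agrees with the target.
    A simplified, pure-Python sketch, not fastai's actual implementation."""
    preds = [1 if p > thresh else 0 for p in probs]
    hits = sum(1 for p, t in zip(preds, targets) if p == t)
    return hits / len(targets)

probs = [0.59, 0.67, 0.21, 0.89]  # the example probabilities from the question
targets = [1, 1, 0, 1]            # hypothetical ground truth
print(accuracy_thresh(probs, targets, thresh=0.5))  # 1.0 -- every label agrees
print(accuracy_thresh(probs, targets, thresh=0.7))  # 0.5 -- two labels flip to 0
```

Sweeping `thresh` over a few values and comparing scores like this is exactly the kind of experiment Jeremy suggests.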

Hope that helps.

(Nguyen Hoang Vu) #131

EDIT: Solution here


While working on the lesson3-planet notebook, when I try to create an ImageList with the code

src = (ImageList.from_csv(path, 'train_v2.csv', folder='train-jpg', suffix='.jpg')
       .split_by_rand_pct(0.2)
       .label_from_df(label_delim=' '))

There was an error saying ImageList has no split_by_rand_pct attribute, so I had to change it to .random_split_by_pct.
But in the docs I could only find split_by_rand_pct, so I'm not sure why this is the case. I have fastai version 1.0.46.
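One hedged way to cope with a rename like this (the notebook uses the newer name, while older fastai versions only have the old one) is a getattr fallback. The class below is a stand-in for illustration, not the real ImageList:

```python
class FakeItemList:
    """Stand-in for an older item list that only has the old method name."""
    def random_split_by_pct(self, valid_pct=0.2):
        return f"split with valid_pct={valid_pct}"

src = FakeItemList()
# Prefer the new name; fall back to the old one if this version lacks it.
split = getattr(src, 'split_by_rand_pct', src.random_split_by_pct)
print(split(0.2))  # split with valid_pct=0.2
```

Upgrading fastai to a version where the docs and the library agree is the cleaner fix, of course.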

(Tenzin) #132

This is an open-source project trying to create a custom line-segmentation model, since Google OCR isn't doing great with line segmentation on this kind of wooden-printed document.

Here I am thinking of using the UNet architecture in fastai to segment out the individual lines by predicting the line boundaries, given that the red lines are the mask, which would be the label for this image. So is the UNet architecture suitable for this problem, and if not, is there an alternative solution?

Any help would be highly appreciated :slight_smile: