Lesson 3 Advanced Discussion ✅

Thanks, will have to check that out!

Thanks @wdhorton that’s very interesting. Explains why larger networks are advantageous in getting close to the global minimum.

1 Like

Other than medical imaging, what are some practical use cases for image segmentation? Jeremy mentioned self-driving cars, but I can’t imagine the effort required to do pixel-wise labelling of millions of images from self-driving-car cameras. Is there a way to fast-track the labelling process?

Another practical use case of image segmentation: seismic imagery https://www.kaggle.com/c/tgs-salt-identification-challenge. It’s also commonly used for satellite imagery.

3 Likes

What about the loss function for multi-label classification? Does the same loss function (cross-entropy) work for multi-label classification as well?
In Keras, I use binary cross-entropy + sigmoid for multi-label; it’s not clear how fastai takes care of this.

1 Like

Sorry, I answered in the other chat before I saw you posted here:

4 Likes

Cross-entropy is a generalization of binary log loss to multiple (>2) classes.
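To make that concrete, here is a minimal pure-Python sketch (function names are my own, not from fastai or Keras) showing that multi-class cross-entropy reduces to binary log loss when there are two classes, and that the usual multi-label loss is just a sum of independent per-label binary losses:

```python
import math

def binary_log_loss(y, p):
    # y is the true label (0 or 1), p the predicted probability of class 1
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def cross_entropy(true_idx, probs):
    # probs is a predicted distribution over classes; score the true class
    return -math.log(probs[true_idx])

def multi_label_loss(labels, probs):
    # multi-label: one independent sigmoid/binary loss per label, summed
    return sum(binary_log_loss(y, p) for y, p in zip(labels, probs))

# With exactly two classes, the two losses agree:
p = 0.8
print(binary_log_loss(1, p))         # -log(0.8)
print(cross_entropy(1, [1 - p, p]))  # same value

# Multi-label: each of the three labels is scored independently
print(multi_label_loss([1, 0, 1], [0.9, 0.2, 0.7]))
```

In PyTorch terms this is the difference between `CrossEntropyLoss` (softmax over mutually exclusive classes) and `BCEWithLogitsLoss` (one sigmoid per label).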

Is leaky ReLU used more than ReLU?

1 Like

Once an image is segmented, is there a way to identify the coordinates of the segmented part?
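One simple approach (a pure-Python sketch of my own, not a fastai API): treat the predicted mask as a 2-D array and take the min/max of the nonzero pixel coordinates to get a bounding box. For multiple objects you would first separate them with connected-component labelling (e.g. `scipy.ndimage.label`):

```python
def mask_bbox(mask):
    """Bounding box (min_row, min_col, max_row, max_col) of nonzero pixels."""
    coords = [(r, c) for r, row in enumerate(mask)
                     for c, v in enumerate(row) if v]
    if not coords:
        return None  # empty mask: nothing was segmented
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(mask_bbox(mask))  # (1, 1, 2, 2)
```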

Mainly in LSTMs. I think it’s because of the vanishing gradient problem. I don’t have any reference to back up my argument; it’s just a practice I’ve observed.

In playing with the data block API, I’ve found it to be flexible, yes, but also much slower on big datasets, since it apparently always loads the entire dataset into memory before doing anything else. Or maybe I’m missing something: does anybody know if there’s a way to speed up DataBunch creation on larger datasets?

3 Likes

Recently read an article on this - it is basically factory scale at this point. You get entire floors of people who work on nothing but labeling pixels.

It’s still likely not on the order of millions of segmented images, but tens or hundreds of thousands.

In general, one can get very nice results with segmentation on much smaller datasets though!

2 Likes

We often see people getting a CUDA out of memory error. When we restart the kernel it runs fine. What could be the possible reasons for that? It looks like the memory isn’t getting properly managed?
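A common workaround (a sketch, not an official fastai fix): PyTorch caches GPU memory even after tensors are freed, and stale references — an old learner, stored activations, or the traceback of a previous exception — can keep tensors alive. Dropping the references and then clearing the cache often recovers the memory without restarting the kernel:

```python
import gc

def free_gpu_memory():
    # First drop your own references to large objects, e.g. `del learn`.
    gc.collect()                  # collect unreachable Python objects
    try:
        import torch
        torch.cuda.empty_cache()  # return cached blocks to the CUDA driver
    except ImportError:
        pass                      # no PyTorch available; nothing to clear

free_gpu_memory()
```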

4 Likes

This promises to be an interesting dataset for NLP & AI in law: https://case.law/

1 Like

I normally pick the steepest bit of section (1) in your list. But you should try a few and tell us what works best for you! :slight_smile:
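For illustration only (this is a toy sketch of my own, not fastai’s actual `lr_find` code): “pick the steepest bit” means choosing, among the (learning rate, loss) points from an LR-range test, the learning rate where the loss is falling fastest:

```python
def steepest_lr(lrs, losses):
    # Pair each lr with the loss change to the next point; the most
    # negative change marks the steepest descent of the loss curve.
    slopes = [(losses[i + 1] - losses[i], lrs[i]) for i in range(len(lrs) - 1)]
    return min(slopes)[1]

# Toy LR-range-test readings (made-up numbers):
lrs = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]
losses = [2.0, 1.9, 1.2, 0.9, 3.0]
print(steepest_lr(lrs, losses))  # 1e-4, where the loss drops 1.9 -> 1.2
```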

Yeah, it’s less of an issue now with optimized PIL and our faster augmentations, although you might still want to resize if your images are huge.
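If you do need to resize, here is a minimal Pillow sketch (the helper name is my own) that caps the longer side while keeping the aspect ratio:

```python
from PIL import Image

def resize_to_max(img, max_side=1024):
    """Downscale so the longer side is at most max_side, keeping aspect ratio."""
    w, h = img.size
    scale = max_side / max(w, h)
    if scale >= 1:
        return img  # already small enough, leave untouched
    return img.resize((int(w * scale), int(h * scale)), Image.BILINEAR)

big = Image.new("RGB", (4000, 3000))
print(resize_to_max(big).size)  # (1024, 768)
```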

The best way to install PIL is using the comment at the bottom here:

4 Likes

In high dimensions there are basically no local minima - at least for the kinds of functions that neural net losses create.

6 Likes

In lesson 3, there was an explanation of U-net and how to use it now in v1.

But I do remember that in course-v2 and fastai 0.7, @kcturgutlu implemented a way to create Dynamic Unets: U-net-ish models using any pretrained model as the encoder (resnet, resnext, vgg…).

Are these Dynamic Unets deprecated in v1?

Quite the opposite - that’s what we’re using all the time now! That’s why we were able to automatically create a unet with a given backbone architecture.

3 Likes

You can check https://github.com/fastai/course-v3/blob/master/nbs/dl1/lesson3-camvid.ipynb for how to use create_unet in v1. It’s much faster and much lighter in terms of GPU memory.

5 Likes