Lesson 2 In-Class Discussion ✅

As much as you can realistically collect. Fastai has been remarkably effective even with just a few data samples, as long as the images are of real-world objects that are not very different from the images used to train ImageNet. Hope this helps.

Fastai has been known to work well even with a few images. See the previous answer to @whatrocks’ similar question.

I was trying download_images for some pictures and got the following errors:

  1. “Error: [content-length]”
  2. The progress bar would hang at around 99%.

Upon checking the download_images method and its sub-methods, I found that content-length is used in download_url to determine the length, so the error might be there. I am investigating this further; meanwhile, can anybody explain the reason behind these errors?
Also, download_images works fine with a single worker, i.e. the progress bar completes to 100% (see the sketch below). It might be a bug, or I might be doing something wrong.
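
For reference, here is a minimal sketch of the single-worker call I used (the folder and URL-file names are placeholders from my own setup, not from the notebook):

```python
from fastai.vision import *   # fastai v1: brings in download_images, verify_images, Path, etc.

path = Path('data/bears')     # placeholder dataset folder
# With max_workers=1 the progress bar reaches 100% for me; with the default workers it hangs at ~99%.
download_images(path/'urls_teddy.txt', path/'teddy', max_pics=200, max_workers=1)
verify_images(path/'teddy', delete=True)
```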
3 Likes

indeed, that would be a lovely addition

@simonw asked the question before I got to that in the video FYI.

Having said that, please be sure that your answers are helpful to the person asking the question; i.e. instead of just saying “it’s in the video”, try providing a link to the relevant timestamp, or just answer the question yourself.

I’ve been having problems using custom models with create_cnn.

I am getting this error

I found that in this line: body = create_body(arch(pretrained), ifnone(cut,meta['cut'])), create_body receives arch(pretrained), i.e. the architecture is called with only a pretrained bool value and no Tensor is passed at this point.
While this works with resnet (models.resnet18(True)), because the ResNet defined in models takes only the pretrained bool as input ((pretrained=True, **kwargs)), it may not always be applicable to custom models, which usually take (input, **kwargs), where input is a Tensor.
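
To illustrate the mismatch concretely (a hypothetical minimal example, not my actual model):

```python
import torch.nn as nn
from torchvision import models

# torchvision-style constructor: a *function* taking a pretrained flag and returning an nn.Module
m = models.resnet18(True)      # resnet18(pretrained=True, **kwargs) -> nn.Module

# a typical custom model: __init__ knows nothing about `pretrained`, and forward() expects a Tensor
class MyNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(16, n_classes)
    def forward(self, x):
        return self.head(self.body(x).view(x.size(0), -1))

# create_cnn effectively does arch(pretrained), so MyNet(pretrained) either errors out
# or silently misinterprets the bool as a constructor argument.
```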

What can be the fix?

1 Like

FileDeleter is a very cool tool, but as others have commented here, for now it can only clean the validation set, not the training set.

I think that to address the label-noise issue, cleaning the training set is at least as important as cleaning the validation set. Otherwise we are only removing noise from the validation set, which is fine, but it means any subsequent validation improvement will be due purely to the cleaning. In other words, we are helping the model with a more reliable validation set, but not with a more reliable training set.

A hacky workaround to clean all the data is possible with the tool in its current state: before doing the “real” training, set a bigger validation ratio, say 0.5, and run the model + cleaning tool three or four times, with a different random seed for the validation split each time (see the sketch below). After that you can do the real training on an almost completely clean dataset.
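
A rough sketch of what I mean, following the lesson 2 notebook’s data-loading pattern (the loop and seed handling here are my own assumptions, not an official recipe):

```python
from fastai.vision import *   # fastai v1, as in the lesson 2 notebook

path = Path('data/bears')     # placeholder; use your own dataset folder

for seed in (1, 2, 3, 4):
    np.random.seed(seed)      # a different seed gives a different train/valid split
    data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.5,
                                      ds_tfms=get_transforms(), size=224).normalize(imagenet_stats)
    learn = create_cnn(data, models.resnet34, metrics=error_rate)
    learn.fit_one_cycle(4)
    # ...run the top-loss + FileDeleter cleaning on this 50% validation split, then repeat,
    # so that over a few rounds most of the images pass through the cleaning tool.
```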

4 Likes

We are working on this. For the moment you can use this function:

def get_toploss_paths(md, ds, dl, loss_func, n_imgs=None):
    if not n_imgs: n_imgs = len(dl)
    val_losses = get_preds(md, dl, loss_func=loss_func)[2]
    losses, idxs = torch.topk(val_losses, n_imgs)
    return ds.x[idxs]

where you can feed in either the training or validation Dataset and DataLoader. For the lesson 2 notebook you can call it like this:

train_toploss_fns = get_toploss_paths(learn.model, data.train_ds, data.train_dl, learn.loss_func)

You can then feed train_toploss_fns to FileDeleter.
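
For example (argument name as I remember it from the lesson 2 notebook, so treat it as an assumption):

```python
fd = FileDeleter(file_paths=train_toploss_fns)   # assumed widget signature from the lesson 2 notebook
```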

We are also working on showing and being able to change the labels in the widget.

14 Likes

I’ve extended the sgd notebook of lesson 2 to polynomial fitting:

That wasn’t easy, and I was stuck with an error in the “update” function…
But by solving it I got a better understanding of PyTorch mechanics: in particular, the difference between constant tensors (without gradients) and parameters (differentiable, i.e. with gradients).
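
For anyone curious, here is a minimal sketch of that distinction in plain PyTorch (my own toy example, not the notebook code): the data tensors carry no gradients, while the coefficient vector has requires_grad=True and is updated by hand-rolled SGD.

```python
import torch

# Synthetic data: y = 2x^2 - 3x + 1 plus noise. These are constant tensors (no gradients needed).
x = torch.linspace(-1, 1, 100)
y = 2*x**2 - 3*x + 1 + 0.1*torch.randn(100)
X = torch.stack([x**2, x, torch.ones_like(x)], dim=1)   # design matrix [x^2, x, 1]

# The parameters: these need gradients so SGD can update them.
a = torch.randn(3, requires_grad=True)

lr = 0.1
for t in range(1000):
    y_hat = X @ a                          # forward pass
    loss = ((y_hat - y)**2).mean()         # MSE loss
    loss.backward()                        # populates a.grad
    with torch.no_grad():                  # the update itself must not be tracked by autograd
        a -= lr * a.grad
        a.grad.zero_()

print(a)   # should end up close to [2., -3., 1.]
```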

8 Likes

Can someone suggest a tutorial for developing a web app, please? :smiley: I don’t clearly understand the instructions I find on the Starlette website. Thank you in advance.

2 Likes

I’m also having trouble installing starlette from their instructions:

$ pip3 install starlette
Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPConnection object at 0x10eeb0630>, 'Connection to 52.39.238.16 timed out. (connect timeout=20.0)')': /repository/pypi-all/simple/starlette/
 Could not find a version that satisfies the requirement starlette (from versions: )
No matching distribution found for starlette

I would suggest using a different web framework with better documentation, maybe Flask.

Here’s a tutorial.
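
If it helps, here is a minimal sketch of what a Flask prediction endpoint for a fastai model could look like (route names and the export/load step are my assumptions; load_learner requires a model exported with learn.export(), which may be newer than the notebook version you’re on):

```python
import io
from flask import Flask, request, jsonify
from fastai.vision import load_learner, open_image   # fastai v1 helpers

app = Flask(__name__)
learn = load_learner('.')   # assumes an export.pkl produced earlier with learn.export()

@app.route('/predict', methods=['POST'])
def predict():
    img_bytes = request.files['file'].read()            # image uploaded as multipart form data
    img = open_image(io.BytesIO(img_bytes))             # raw bytes -> fastai Image
    pred_class, pred_idx, probs = learn.predict(img)    # run inference
    return jsonify({'class': str(pred_class), 'confidence': float(probs[pred_idx])})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```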

2 Likes

I’m not sure which part of the web app you would like to know better, but there are more tutorials about Flask, which is fairly similar to Starlette.
You can try those.

M

2 Likes

Thank you @Michal_w, @astronomy88. So I’ll switch to Flask now. Maybe I will come back to Starlette later, but at the moment it’s hard to understand from their instructions.

Yes.

Thanks.

Are there any web app tutorials written by students? They might be easier to understand since they use fastai.

1 Like

I’m getting the error name ‘verify_images’ is not defined while running
verify_images(path/c, delete=True, max_workers=8)

That should be the ideal goal of a neural network: to act as the most efficient lookup table.

2 Likes

I’m not sure if anyone else has encountered this, but when I run FileDeleter it disconnects me from my remote server and sort of kills it in a bad way. The only option is to stop the instance => re-launch => re-run the notebook (and skip FileDeleter).

Tried it on AWS, Colab and GCP. Every time the same result.

Update [11/4/18]: @sgugger, bringing this to your attention as I couldn’t find any answer on the forum; I’m still trying to understand the issue better from the code.

Always try using models.$MODEL as your arch; fastai.vision.models has most of them.

I think you can also look at the part where Jeremy talks about how models.resnet34 is just a function that defines a resnet-type arch rather than an already-built resnet34 model; the pretrained weights are what pretrained=True is for.
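
A rough sketch of the distinction, plus one possible workaround for custom models (the my_arch adapter is hypothetical, not a fastai API; data is the DataBunch from the lesson 2 notebook):

```python
from fastai.vision import *   # create_cnn, models, error_rate (fastai v1)

# Pass the architecture *function*, not an instantiated model:
learn = create_cnn(data, models.resnet34, metrics=error_rate)   # fastai calls models.resnet34(pretrained) internally

# For a custom model, one workaround is a small adapter with the same calling convention
# (MyCustomNet stands in for your own nn.Module):
def my_arch(pretrained=False, **kwargs):
    model = MyCustomNet(**kwargs)
    # if pretrained: model.load_state_dict(torch.load('my_weights.pth'))   # load your own weights
    return model

# learn = create_cnn(data, my_arch, cut=-1)   # cut tells create_body where to split off the head
```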

Interesting. I too found a similar situation (in fact, a little more mind-boggling).

My guesses for this situation are:

  1. The error_rate is just the misclassification rate, and if it stays the same, then in every epoch your model is misclassifying the same number of images, if not the exact same images.
  2. Your val_loss bumps up a little as your train_loss goes down. This doesn’t look like a problem, since both are in a similar range and the loss_func only helps the model find better weights w.r.t. the given data. In every epoch the loss_func tries the same thing with minor weight adjustments, which makes the average loss fluctuate over the epoch, BUT the marginal changes don’t really affect the FC layers (in this example), eventually leading to the same/similar classification tags (this is to the best of my knowledge).
  3. You may also find situations where the error_rate drops, jumps back up, then drops again a couple of times over ~20 epochs. I found this happening today and am still trying to figure out how to improve it. But my takeaway stands: the loss function is what helps the model learn, while the metric is my specific way of checking how exactly the model learned. The two are related but not coupled (see the sketch below).
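
Here is a tiny sketch of that last point, showing how the metric can stay flat while the loss still moves (my own toy example):

```python
import torch
import torch.nn.functional as F

targets = torch.tensor([0, 1, 1, 0])

# Two sets of logits that predict the *same* classes (so the same error_rate)...
logits_a = torch.tensor([[2.0, 0.0], [0.0, 2.0], [0.0, 2.0], [0.5, 1.0]])
logits_b = torch.tensor([[4.0, 0.0], [0.0, 4.0], [0.0, 4.0], [0.1, 1.5]])

for logits in (logits_a, logits_b):
    preds = logits.argmax(dim=1)
    error_rate = (preds != targets).float().mean()   # the metric: misclassification rate
    loss = F.cross_entropy(logits, targets)          # the loss the optimizer actually minimizes
    print(f"error_rate={error_rate:.2f}  loss={loss:.4f}")

# Both print error_rate=0.25, but the losses differ: the loss keeps changing
# while the metric stays flat, because they measure related but different things.
```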