Lesson 2 - Official Topic

Yes, if you have a model pretrained on data more similar to your dataset, you should use that one.

E.g., a model pretrained on x-rays would be a better starting point than one pretrained on ImageNet if you are doing something with x-rays.
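
If you wanted to try this in fastai, a minimal sketch might look like the following. Note that `xray_resnet34.pth` is a hypothetical checkpoint (no such file ships with the library), and `dls` is assumed to be the `DataLoaders` you have already built for your task:

```python
from fastai.vision.all import *

# Hypothetical helper returning a resnet34 initialised from weights pretrained
# on x-ray images; 'xray_resnet34.pth' is a placeholder path, not a real checkpoint.
def xray_resnet34(pretrained=True, **kwargs):
    model = resnet34(pretrained=False, **kwargs)   # torchvision resnet34, random init
    if pretrained:
        model.load_state_dict(torch.load('xray_resnet34.pth'))
    return model

# `dls` is assumed to be DataLoaders already built for the x-ray task.
learn = cnn_learner(dls, xray_resnet34, metrics=error_rate)
learn.fine_tune(3)
```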

Best way to learn! Code experimentation! :slight_smile:

From Dinesh C.: During fine-tuning, should we focus solely on the metric, or should we compare training loss vs. validation loss to understand underfitting/overfitting?

this chapter

Are filters independent? By that I mean, if filters are pretrained, might they become less good at detecting features of the original images once fine-tuned?

Chapter 1 questionnaire solutions I compiled:

I always have trouble understanding the difference between parameters and hyperparameters. If I am feeding an image of a dog as input and changing a hyperparameter such as batch size in the model, what would be an example of a parameter in this scenario?

Yes, they won’t be as good on the general problem they were trained on, because you fine-tuned them to another task.

What are some of the salient characteristics of a loss function? E.g., do they have to be differentiable in the parameter space for SGD to work? Anything else? How do you choose a loss function for a task?

Yep, I’ve seen some of the differences, but on the same dataset the results look about the same most of the time; training on resnet34 was faster.

I’m curious about the pacing of this course. While I do appreciate the information that Jeremy shares about coronavirus, I’m concerned that all the material for the course may not be covered.

Parameters are the “weights” of your neural network: the numbers attached to the connections between its “neurons”. Their values are learned automatically during training.

Hyperparameters (such as batch size, learning rate, etc.) are things that we have to choose ourselves and that are typically not learned automatically.
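
To make the distinction concrete, here is a minimal sketch based on the lesson 1 pets example (the `bs=64` and `base_lr=1e-3` values are just illustrative choices, not recommendations):

```python
from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()   # cat images have capitalised filenames

# Hyperparameters: values we choose ourselves.
bs = 64          # batch size
base_lr = 1e-3   # learning rate

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224), bs=bs)

learn = cnn_learner(dls, resnet34, metrics=error_rate)

# Parameters: the weights inside the model, learned automatically during training.
n_params = sum(p.numel() for p in learn.model.parameters())
print(f'{n_params:,} learnable parameters')

learn.fine_tune(1, base_lr=base_lr)
```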

There are canonical loss functions that, on top of being differentiable, have some nice properties that play well with the last activation function (we will see what that means soon). Usually you have three general ones at your disposal, and depending on your problem you will pick one of them. We will teach you all about it in the following lessons :slight_smile:
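
The lesson doesn’t name them at this point, but the “three general ones” are presumably along the lines of cross-entropy (single-label classification), binary cross-entropy with logits (multi-label classification), and mean squared error (regression). A minimal PyTorch sketch of how each pairs with the model’s raw outputs:

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 10)    # raw outputs for a batch of 4 items, 10 classes

# Single-label classification: softmax + negative log likelihood,
# combined into one numerically stable loss.
targets = torch.tensor([3, 7, 0, 9])
loss_clas = nn.CrossEntropyLoss()(logits, targets)

# Multi-label classification: sigmoid per class + binary cross-entropy.
multi_targets = torch.randint(0, 2, (4, 10)).float()
loss_multi = nn.BCEWithLogitsLoss()(logits, multi_targets)

# Regression: raw (or range-scaled) outputs + mean squared error.
preds, cont_targets = torch.randn(4, 1), torch.randn(4, 1)
loss_regr = nn.MSELoss()(preds, cont_targets)
```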

I am wondering about trying to make a dogs/cats/other classifier. The specific problem is that users (imagine the input is e.g. photos from a mobile phone) frequently try to confuse the model, so they INTENTIONALLY take a picture of, for example, a car, and being able to filter that out really helps the perceived quality of the model. How would I tell the model to detect that, so as to be able to tell the user ‘stop that, this is not a cat or dog’? Obviously, I can’t just assume humans will not be humans and will behave. :smiley:

We will cover this problem when we look at a multi-label task in the next chapters. (This is a great question btw).

Is exploratory data analysis for image datasets still relevant or necessary in deep learning, especially when we are using transfer learning?

How would I tell the model to detect that, so as to be able to tell the user to ‘stop that,this is not a cat or dog’?

What you are describing is the issue of out-of-distribution predictions. In general, neural networks cannot be used on domains completely different from the ones they were trained on. There exist sophisticated ways to detect whether your example image is not from the same distribution as the training set, e.g., using self-supervised learning or other out-of-distribution detection methods.
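
As a rough first pass (not a proper out-of-distribution detector), you can threshold the predicted probability and reject low-confidence inputs. A minimal sketch, assuming `learn` is your trained cats-vs-dogs `Learner`; the 0.9 threshold is an arbitrary placeholder you would tune, and softmax probabilities can still be overconfident on images far from the training set:

```python
# Crude heuristic: if the model isn't confident about any known class,
# tell the user the image is probably neither a cat nor a dog.
THRESHOLD = 0.9   # arbitrary placeholder; tune on held-out "other" images

def classify_or_reject(img_path, learn):
    pred_class, pred_idx, probs = learn.predict(img_path)
    if probs[pred_idx] < THRESHOLD:
        return "Stop that, this is not a cat or dog"
    return str(pred_class)
```

The multi-label approach mentioned above handles this more gracefully, since no class is forced to fire for an image that matches none of them.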

Link to Zeiler’s paper: Visualizing and Understanding Convolutional Networks

Are there any pretrained weights available other than the ones from ImageNet that we can use? If yes, when should we use the others and when ImageNet? Thanks!
