ResNet model question

When I use the resnet34 model, what loss function does it minimize, and which optimizer is used?

learn.opt_func
functools.partial(<class 'torch.optim.adam.Adam'>, betas=(0.9, 0.99))

learn.loss_func
<fastai.layers.FlattenedLoss at 0x7f0110363748>
https://docs.fast.ai/layers.html#FlattenedLoss

It looks like the default optimizer is Adam and the default loss is FlattenedLoss.


Wait, what is FlattenedLoss? Is there a video where Jeremy explains it?

I don’t know of any video, but I put a link to the documentation above. In this case it will flatten the input and then use CrossEntropyLoss.

learn.loss_func.func
CrossEntropyLoss()
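
To make the "flatten, then cross entropy" idea concrete, here is a minimal plain-PyTorch sketch (not fastai's actual implementation); the tensor shapes are just illustrative:

import torch
import torch.nn as nn

# Illustrative shapes: a batch of 4 predictions over 3 classes, with an
# extra axis to show why flattening is needed before the loss.
preds = torch.randn(4, 1, 3)            # model output
targets = torch.randint(0, 3, (4, 1))   # integer class labels

loss_fn = nn.CrossEntropyLoss()

# Flatten predictions to (N, C) and targets to (N,) before applying the
# loss, which is conceptually what the flattened wrapper does.
loss = loss_fn(preds.view(-1, preds.shape[-1]), targets.view(-1))
print(loss)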

Cheers.


Ah, I see. I didn’t understand the docs :confused:, but you cleared it up for me: so flattened loss means the previous layer's output is flattened and then passed through a loss function. Thanks a lot!


One important thing to note here is that the loss function used does not depend on the base model (resnet34 in your case) but rather on the type of y/label given. If it is a classification problem, cross entropy is used; if it is a regression problem, some other function like MSE is used. If you do not specify a label type, fastai will guess the type of y desired and choose the correct function for it. (All of this happens in the .label_from_xxx part of the data block API.)
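
As a rough illustration of that point (plain PyTorch, not the fastai internals): the label type decides which underlying loss makes sense, e.g. cross entropy for integer class labels and MSE for continuous targets.

import torch
import torch.nn as nn

# Classification: integer class labels -> cross entropy.
class_preds = torch.randn(4, 3)            # 4 samples, 3 classes
class_targets = torch.randint(0, 3, (4,))  # one class index per sample
print(nn.CrossEntropyLoss()(class_preds, class_targets))

# Regression: continuous targets -> mean squared error.
reg_preds = torch.randn(4, 1)
reg_targets = torch.randn(4, 1)
print(nn.MSELoss()(reg_preds, reg_targets))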


Sorry, my question seems to have caused a misunderstanding. I am using this for classification, multi-class classification, so it is categorical cross entropy, right? Thanks for the heads up :grin:

For multi-class, categorical cross entropy is correct; for multi-label, BCEWithLogitsLoss should be the default loss function.
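
A small plain-PyTorch sketch of the difference (the shapes and class counts are just examples): multi-class uses integer class indices with cross entropy, while multi-label uses a 0/1 target per class with BCEWithLogitsLoss.

import torch
import torch.nn as nn

logits = torch.randn(4, 5)  # 4 samples, 5 classes

# Multi-class: exactly one label per sample -> cross entropy on class indices.
class_idx = torch.randint(0, 5, (4,))
print(nn.CrossEntropyLoss()(logits, class_idx))

# Multi-label: each sample can carry several labels -> BCE with logits on a
# multi-hot 0/1 target.
multi_hot = torch.randint(0, 2, (4, 5)).float()
print(nn.BCEWithLogitsLoss()(logits, multi_hot))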


Is there a way to see the training loss as well, instead of only the validation loss, for each epoch?