Is_multi/is_reg equivalent in fastaiv1

In the old version, dataset classes (e.g. FilesNhotArrayDataset) have is_multi / is_reg properties that let ConvLearner automatically choose the correct loss function (e.g., as in https://github.com/fastai/fastai/blob/master/courses/dl1/lesson2-image_models.ipynb). I can't find similar properties on the dataset classes, or similar behavior in ConvLearner, in v1. The current behavior seems to fall back on the Learner's default loss_fn=F.cross_entropy, and for a multilabel problem one needs to explicitly set loss_fn=F.binary_cross_entropy_with_logits.
Is this the right understanding and/or the right way to create ConvLearner?
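(For anyone else landing here: the reason the loss must change is that single-label classification uses a softmax, whose probabilities sum to 1 across classes, while multilabel needs an independent sigmoid per class. A minimal numeric illustration in plain Python, no fastai required:)

```python
import math

def softmax(logits):
    # Mutually exclusive classes: probabilities sum to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    # Independent per-class probability, as used for multilabel targets.
    return 1 / (1 + math.exp(-x))

logits = [2.0, 1.5, -0.5]
print(sum(softmax(logits)))          # 1.0 (up to floating point)
print([sigmoid(x) for x in logits])  # each in (0, 1); the sum can exceed 1
```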

Thanks.

Good point.
The data decides which loss function should be used, so what we'll do is this: the learner will by default pick data.loss_fn, and a DataBunch will look in the training dataset for the loss_fn to use. I'll implement this this afternoon.
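To picture the delegation chain described here (Learner asks the DataBunch, which asks its training dataset, falling back to a default), a rough sketch with stand-in classes and strings instead of real loss functions (names hypothetical, not the actual fastai source):

```python
class FakeDataset:
    # A multilabel dataset would set this attribute.
    loss_func = "binary_cross_entropy_with_logits"

class FakeDataBunch:
    def __init__(self, train_ds):
        self.train_ds = train_ds

    @property
    def loss_func(self):
        # The DataBunch looks in its training dataset for the loss to use.
        return getattr(self.train_ds, "loss_func", None)

def make_learner(data, default_loss="cross_entropy"):
    # The learner picks data.loss_func by default, falling back otherwise.
    return data.loss_func or default_loss

print(make_learner(FakeDataBunch(FakeDataset())))  # uses the dataset's loss
print(make_learner(FakeDataBunch(object())))       # falls back to the default
```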

Thanks for implementing this. It’s pretty neat to use.
Do we have any plan to add something like an ImageRegressionDataset? (or maybe I’m missing something existing?)
Currently the best way I can think of for doing image regression is to create and modify an ImageClassificationDataset, similar to how ImageClassifierData was used in https://github.com/fastai/fastai/blob/master/courses/dl2/pascal.ipynb. But that felt a bit awkward.

We do not have it yet, but it’s planned in the near future.

I am interested in working on this. Is it possible?

You certainly can. We may rewrite it entirely though when we go through with coding it :wink:
A few pointers: the dataset should probably subclass ImageClassificationDataset, it should have classes=[] and c = 0, and it should set the proper default loss_func (MSE loss, I guess).

Then it’s just a matter of adapting cnn_learner.

I think something like this should work for the use cases I had in mind:

import numpy as np
import torch.nn.functional as F
from fastai.vision import *  # for ImageClassificationBase, FilePathList, etc.

class ImageRegressionDataset(ImageClassificationBase):
    def __init__(self, fns:FilePathList, y:Collection[Number]):
        super().__init__(fns, classes=[])  # no classes for regression
        self.y = np.array(y, dtype=np.float32)[:, None]  # targets as a float column vector
        self.c = 1  # a single continuous output
        self.loss_func = F.mse_loss

Here’s a notebook with an example using it:

Would this work?

Looks pretty cool!
Thinking we should maybe add a range parameter to create_cnn that adds a sigmoid covering this range (if it's not None).
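(The idea is the same y_range trick fastai's tabular model uses: squash the final activation with a sigmoid rescaled to cover the target range. A plain-Python sketch of the math, with a hypothetical sigmoid_range name:)

```python
import math

def sigmoid_range(x, y_range):
    # Scale a sigmoid so the output covers (low, high) instead of (0, 1).
    low, high = y_range
    return 1 / (1 + math.exp(-x)) * (high - low) + low

print(sigmoid_range(0.0, (0, 10)))   # midpoint of the range: 5.0
print(sigmoid_range(50.0, (0, 10)))  # saturates near the top of the range
```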

Not sure I understand. Is it something similar to this? https://github.com/fastai/fastai/blob/master/fastai/tabular/models.py#L35

Exactly!

Here’s my attempt to do it. Is it anything close to what you had in mind?
(reposted, previously in a private notebook by mistake)