Bugging You (regression CNN)

From the create_cnn docs:

assert classification, 'Regression CNN not implemented yet, bug us on the forums if you want this!'

I want this. How would you implement it? Flat 1D conv?

Thanks!

6 Likes

For regression you need to predict a continuous number instead of a class, so the last layer of the model should be a Linear layer, and the loss should be something like mean squared error (nn.MSELoss) or L1 loss (nn.L1Loss) instead of cross entropy.

You can write a custom head (using create_head is probably the easiest way) and pass it to create_cnn with the custom_head argument. Finally, change the loss_func attribute of the learner to nn.L1Loss(), for example.
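Something like this, roughly (not tested; create_head, the custom_head argument, and the 1024 features for a resnet34 body with the default concat pooling may differ in your fastai version):

from fastai.vision import *
import torch.nn as nn

# Head ending in a single Linear output instead of num_classes.
# 1024 = 512 resnet34 features * 2, because the default head concat-pools avg and max.
head = create_head(1024, 1)

learn = create_cnn(data, models.resnet34, custom_head=head)
learn.loss_func = nn.L1Loss()   # or nn.MSELoss()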

I hope this helps!

6 Likes

Hello,

I also long for CNN regression support in fastai. I tried creating a custom head that outputs a length-one tensor, but that didn't work out: it resulted in an internal error in the fit method. I used the 'PointsDataset' to create my DataBunch, but that apparently expects length-2 tensors as output?

2 Likes

But it has been added already! The head pose example in lesson 3 is a CNN regression example: it predicts the red dot on the center of the face, i.e. its x, y position, by regression.

1 Like

I believe that one issue people may have with using the PointsDataset is that the values they are predicting may not be points in the image. Digging into the library, it seems that “Points” have special meaning.

In my case, I was trying to predict a single value (unrelated to any location in the image). After some experimenting, I got regression working with the following code:

class ImageRegressionDataset(ImageDatasetBase):
    def __init__(self, fns, targets, **kwargs):
        # Register the task as regression and use L1 loss instead of cross entropy
        super().__init__(x=fns, y=targets, task_type=TaskType.Regression, **kwargs)
        self.loss_func = nn.L1Loss()

    def _get_y(self, i, x):
        # Wrap the target in a float32 numpy array (float64 didn't work for me)
        return np.array([self.y[i]], dtype=np.float32)

data = (ImageFileList.from_folder(IMAGES)
            .label_from_df(df, suffix='.png')
            .split_by_folder()
            .datasets(ImageRegressionDataset)
            .databunch())

# create_cnn uses c to set the number of output neurons,
# so it needs to be set manually even though we aren't doing classification
data.c = 1
learn = create_cnn(data, models.resnet18)

This is a bit of a hack, but I got it to train on my data. In the _get_y method, things didn't work unless I returned a numpy array with the value (note: it must be float32 as opposed to the default float64 on some systems, and I assume float16 if you are using a half-precision model).

Also, MSELossFlat() can be used instead of nn.L1Loss(). You can also pass in custom_head to create_cnn if you have enough information about your base model. Hope this helps!
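For example, you can also swap the loss on the learner after creating it (not tested here, but it should work the same way as nn.L1Loss):

learn = create_cnn(data, models.resnet18)
learn.loss_func = MSELossFlat()   # flattens predictions and targets before applying MSE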

2 Likes

If your target y is a torch Tensor, I think you can just use:

class ImageRegressionDataset(ImageDatasetBase):
    def __init__(self, fns:Collection[Path], pts:Collection[Tensor]):
        # c (the number of outputs) is taken from the number of values in the first target tensor
        super().__init__(c=len(pts[0].view(-1)), x=fns, y=pts, task_type=TaskType.Regression)

It's also important to keep tfm_y=False in the transforms, since tfm_y=True transforms the target value the same way as the image, which only makes sense if your targets are points/regions on the image itself.
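If you apply transforms through the data block, that would look something like this (sketch only; the .transform step, get_transforms, and the size value are from the fastai API of the time and may differ in your version):

data = (ImageFileList.from_folder(IMAGES)
            .label_from_df(df, suffix='.png')
            .split_by_folder()
            .datasets(ImageRegressionDataset)
            .transform(get_transforms(), size=224, tfm_y=False)  # don't transform the regression target
            .databunch())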

johnhartquist wrote:
"I believe that one issue people may have with using the PointsDataset is that the values they are predicting may not be points in the image…"

I figured as much, but didn't know how to go about it. Your solution works great for me; I'd been scratching my head over this for the past few days and now I've got it learning. Thanks, much appreciated!

I made some progress by mostly copy/pasting johnhartquist's code above.

class ImageRegressionDataset(ImageDatasetBase):
    def __init__(self, fns, targets, **kwargs):
        super().__init__(x=fns, y=targets, task_type=TaskType.Regression, **kwargs)
        self.loss_func = MSELossFlat()

    def _get_y(self, i, x):
        return np.array([self.y[i]], dtype=np.float32)

data = (ImageFileList.from_folder(path)
            .label_from_df(df, 'filename', 'y_coordinate')
            .random_split_by_pct()
            .datasets(ImageRegressionDataset)
            .databunch())
data.c = 1

learn = create_cnn(data, models.resnet34)

I don't exactly understand the _get_y function yet, but I'll be looking into it soon.

You can get data.show_batch to work by creating a dummy classes object that formats the numbers the way you want them displayed.

class RegressionClassWrapper():
    def __getitem__(self, item):
        return str(item)
data.classes = RegressionClassWrapper()

You could probably replace str(item) with something like f'{item:.2f}'

1 Like

Hi - I had a similar issue and resolved it using these steps: https://medium.com/@btahir/a-quick-guide-to-using-regression-with-image-data-in-fastai-117304c0af90

Hope it helps!

5 Likes

What magnitude of values did you get for your loss calculations during training?

Quite a good article. Thanks for sharing it.

1 Like

Hi!

I looked at your GitHub notebook: https://github.com/btahir/age-detector/blob/master/age-predictor.ipynb

You stop training your model after the following output:

Total time: 50:56
epoch  train_loss  valid_loss
1      147.177490  139.448914  (17:02)
2      129.303528  125.776764  (16:56)
3      119.536751  117.241501  (16:57)

What's your loss? Is it mean squared error? In that case, you have around 10 years of error per picture in your predictions. Is that correct? An age predictor that is around 10 years off on each prediction doesn't seem very effective to me. Did you try to fine-tune your model a bit more?
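For reference, a quick check of that arithmetic (assuming the reported valid_loss really is MSE in years²):

import math

mse = 117.2415          # final valid_loss from the table above
rmse = math.sqrt(mse)   # ≈ 10.8, i.e. roughly 10.8 years of typical error per prediction
print(f'RMSE ≈ {rmse:.1f} years')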

1 Like

This was more of a proof-of-concept app. I stopped training because it was taking too long. There is an app by Microsoft in production called how-old, but I don't think it's that accurate either. Feel free to fine-tune the model and get better results!