Regression vs. Classification

After seeing lesson 3 I noticed Jeremy mentioned that you can train a regression problem using fastai as well. Looking through the code, the only change I saw was swapping the loss function from cross-entropy to MSE. Is that all you need to do? Does fastai understand the type of problem you are trying to solve based on the loss function you are using? If I had data rated from very negative to very positive on a 1-5 scale, and instead of a class via cross-entropy I wanted a predicted score on a continuous 1-5 range, would this small tweak be all that was needed?


Hi!

I tried to use regression to predict people's ages from their pictures. You also need to change the function you use to create your data object: ImageDataBunch won't work, because it will create a classifier.

You can take a look here:
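In sketch form, the data block version looks roughly like this (this is not the linked notebook, just an illustration; get_age is a hypothetical label function, and some names such as ImageItemList and create_cnn changed slightly across fastai versions):

from fastai.vision import *

# Hypothetical helper: parse the age out of the filename; adapt to how your labels are stored.
def get_age(fn): return float(fn.name.split('_')[0])

data = (ImageItemList.from_folder(path)
        .random_split_by_pct(0.2)
        .label_from_func(get_age, label_cls=FloatList)  # float labels -> regression targets
        .transform(get_transforms(), size=224)
        .databunch(bs=64)
        .normalize(imagenet_stats))

learn = create_cnn(data, models.resnet34)
learn.loss_func = MSELossFlat()  # FloatList should default to MSE already, but being explicit doesn't hurt

The key piece is label_cls=FloatList, which tells the data block API the labels are continuous values rather than classes.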

Yep! That’s right! The loss function needs to be changed, and the classifier needs to be swapped for a regressor, as @bdubreu pointed out.
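The loss swap itself is one line, assuming learn is a Learner you have already built (and if your labels came from FloatList, fastai should pick MSE by default anyway):

from fastai.layers import MSELossFlat

learn.loss_func = MSELossFlat()  # regression loss instead of cross-entropy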

This is extremely helpful, thank you! I will give it a try.

I looked into it a bit further. There appear to be more options on the image side. I don’t see anything like the .label_from_func() method in the TextDataBunch class. I see label_cls, but when I tried to pass it FloatList I got an error. Does anyone have ideas on how to use floats and regression to solve text problems in fastai?

I think I got it to work. I trained with the same process as the imdb notebook, tweaking it to fit a regression problem. I was surprised to see that bag-of-words models get much higher correlations on their predictions than a transfer-learned model like this. I’ll have to play around with it a bit more.
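Something along these lines seems to work (a sketch only; it assumes a DataFrame df with a 'text' column and a float 'score' column, reuses the vocab from data_lm, and 'fine_tuned_enc' is just the encoder name saved in the imdb notebook; newer fastai versions also want an architecture argument such as AWD_LSTM):

from fastai.text import *

data_reg = (TextList.from_df(df, path, cols='text', vocab=data_lm.vocab)
            .random_split_by_pct(0.1)
            .label_from_df(cols='score', label_cls=FloatList)  # float labels -> regression
            .databunch(bs=bs))

learn = text_classifier_learner(data_reg, drop_mult=0.5)
learn.load_encoder('fine_tuned_enc')   # reuse the fine-tuned language model encoder
learn.loss_func = MSELossFlat()        # FloatList should set this already, but be explicit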

I found out what was wrong. It wasn’t leveraging the learned language model because the tensors were off. In the imdb notebook I’d recommend everyone use np.random.seed()

at this part:

data_lm = (TextList.from_folder(path)
           # Inputs: all the text files in path
           .filter_by_folder(include=['train', 'test', 'unsup'])
           # We may have other temp folders with text files, so we only keep train, test and unsup
           .random_split_by_pct(0.1)
           # We randomly split and keep 10% (10,000 reviews) for validation
           .label_for_lm()
           # We want to train a language model, so we label accordingly
           .databunch(bs=bs))
data_lm.save('tmp_lm')
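Concretely, call it just before building data_lm (42 is an arbitrary value; any fixed seed works):

import numpy as np

np.random.seed(42)  # fixed seed so .random_split_by_pct(0.1) gives the same validation split every run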