Yes, sure. I trained a model for image regression in fastai v1. I used the same dataset and created the DataBunch as shown below:
data = ImageList.from_df(df=data_f, path=path).split_by_idxs(train_idx, valid_idx).label_from_df().databunch(bs=192).normalize()
After creating the DataBunch, I simply created a cnn_learner with a resnet34 backbone as shown below:
learn = cnn_learner(data, models.resnet34, metrics=[mse, mae, r2_score, rmse])
After that, I used DataParallel to make use of all four available GPUs:
learn.model = nn.DataParallel(learn.model)
After this, I simply followed the usual fastai training recipe: I first found a good learning rate with learn.lr_find() and then ran learn.fit_one_cycle(). I repeated this until the model converged, ending with a final RMSE of 0.68.
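For anyone less familiar with fit_one_cycle: it implements the 1cycle policy, where the learning rate warms up from a low value to the chosen maximum and then anneals back down (fastai uses cosine interpolation for both phases). A minimal pure-Python sketch of the schedule shape, with illustrative values for lr_max, pct_start, div, and div_final rather than my actual settings:

```python
import math

def one_cycle_lr(step, total_steps, lr_max, pct_start=0.25, div=25.0, div_final=1e5):
    """Approximate the 1cycle learning-rate schedule with two cosine phases.

    Warm up from lr_max/div to lr_max over the first pct_start of training,
    then anneal down to lr_max/div_final over the remainder.
    """
    warmup_steps = int(total_steps * pct_start)
    if step < warmup_steps:
        # Cosine warmup: lr_max/div -> lr_max
        t = step / max(1, warmup_steps)
        start, end = lr_max / div, lr_max
    else:
        # Cosine annealing: lr_max -> lr_max/div_final
        t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        start, end = lr_max, lr_max / div_final
    # Cosine interpolation from start (t=0) to end (t=1)
    return end + (start - end) * (1 + math.cos(math.pi * t)) / 2

# The schedule peaks at lr_max a quarter of the way through, then decays
schedule = [one_cycle_lr(s, 100, lr_max=1e-3) for s in range(101)]
```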
For fastai v2, I followed the exact same process with the same dataset, creating the DataLoaders via ImageDataLoaders as shown below:
dl = ImageDataLoaders.from_df(path='/data/', df=data_train, fn_col='Image_Path', label_col='HC', bs=128, y_block=RegressionBlock, valid_col='is_valid')
And similarly created the cnn_learner as shown:
learn = cnn_learner(dl, models.resnet34, metrics=[rmse, R2Score()])
Finally, I wrapped the model in DataParallel exactly as above and followed the same training steps, but this time the final RMSE didn't go below 1.
The loss function used in both experiments was MSELossFlat.
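For completeness: MSELossFlat is just mean squared error computed after flattening predictions and targets, and the rmse metric is its square root, so the two runs are directly comparable on that number. A minimal pure-Python sketch of both quantities (toy numbers, not my actual predictions):

```python
import math

def mse_flat(preds, targs):
    """Mean squared error over flattened values, as MSELossFlat computes it."""
    flat_p = [x for row in preds for x in row]
    flat_t = [x for row in targs for x in row]
    return sum((p - t) ** 2 for p, t in zip(flat_p, flat_t)) / len(flat_p)

def rmse(preds, targs):
    """Root mean squared error, the validation metric reported in both runs."""
    return math.sqrt(mse_flat(preds, targs))

# Toy example: two predictions of shape (2, 1) vs their targets
preds = [[2.0], [4.0]]
targs = [[1.0], [2.0]]
print(rmse(preds, targs))  # sqrt((1 + 4) / 2) ≈ 1.581
```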