Using PyTorch in the fastai framework - A Guide

Considering this is one of the most frequently asked questions on the forums, I wrote an article discussing how to bring raw PyTorch into fastai, including:

  • Optimizer
  • Model
  • Datasets
  • DataLoaders

Said article can be found here:

Normally I wouldn’t double-post, but considering this is such a frequently asked question here on the forums, I’m going to go ahead and walk everyone through how to do so. What follows is a minimal explanation and example based on the aforementioned article.

Learner and PyTorch Models vs cnn_learner, tabular_learner, etc.

The most important step when bringing raw PyTorch into fastai is understanding that Learner is fastai’s base class for training. So when we bring in custom models, we should use Learner(dls, mymodel, ...) rather than cnn_learner or tabular_learner, as all of those functions have specific magic for their applications and how they should be used.

Note: Learner expects a full PyTorch model, not a function. So you should pass in an instance of your model, such as Learner(dls, MyModel()).
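For example, a minimal sketch (MyModel here is a made-up two-layer network, and dls is the wrapped DataLoaders from the next section):

import torch.nn as nn
from fastai.learner import Learner

class MyModel(nn.Module):
    # Any nn.Module works here; this one is just a stand-in
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(10, 50), nn.ReLU(), nn.Linear(50, 2))
    def forward(self, x): return self.layers(x)

# Pass an instance of the model, not the class itself
learn = Learner(dls, MyModel(), loss_func=nn.CrossEntropyLoss())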

PyTorch DataLoaders

When working with PyTorch DataLoaders, the only thing you need to do to make them work with the fastai training loop is wrap them in fastai’s DataLoaders class, like so:
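A minimal sketch, where train_ds and valid_ds stand in for your own PyTorch Datasets:

from torch.utils.data import DataLoader
from fastai.data.core import DataLoaders

# Build ordinary PyTorch DataLoaders...
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True)
valid_dl = DataLoader(valid_ds, batch_size=64)

# ...and wrap them so the fastai training loop can use them
dls = DataLoaders(train_dl, valid_dl)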

Moving PyTorch DataLoaders to the GPU

fastai will now determine the device to use based on what device your model is on, so make sure to call cuda() on learn.model (or just your own model) before fitting.
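For example (a sketch; net, criterion, and opt_func stand in for your own model, loss function, and optimizer function):

# Move the model to the GPU first; fastai infers the device from it
net = net.cuda()  # or: learn.model.cuda() after creating the Learner
learn = Learner(dls, net, loss_func=criterion, opt_func=opt_func)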

PyTorch Optimizers

When dealing with PyTorch optimizers, fastai has an OptimWrapper class, so any time you want to use a PyTorch optimizer, simply define a small function (we’ll call it opt_func) that looks like so:
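A sketch following the article (note that a later post in this thread covers a change to OptimWrapper’s constructor in newer fastai versions):

from torch import optim
from fastai.optimizer import OptimWrapper

def opt_func(params, **kwargs):
    return OptimWrapper(optim.SGD(params, lr=1e-3))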

Here we’ve wrapped our PyTorch optimizer inside this class, and it will work for us during training.

Minimal Imports

If you choose to go this route, the only imports from fastai you truly need are:

from fastai.callback.progress import ProgressCallback
from fastai.data.core import DataLoaders
from fastai.learner import Learner
from fastai.optimizer import OptimWrapper

From there, if you want access to the learning rate finder, fit_one_cycle, etc., I would recommend from fastai.callback.schedule import *, or import the specific fit function you need from schedule.
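For example, a quick sketch (the values are arbitrary):

from fastai.callback.schedule import *

learn.lr_find()               # learning rate finder
learn.fit_one_cycle(2, 1e-3)  # one-cycle training for 2 epochs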

Limitations

Since you’re using PyTorch DataLoaders, you get none of the fastai data magic. This means you will not have access to test_dl or predict. get_preds will still work; however, you’ll need to build a proper DataLoader yourself.

I hope this post and the blog will help someone in the future who wants to understand just how simply fastai can be used with raw PyTorch.


Hi, I am new to fastai and have learned a lot from your blog and walkwithfastai. Great blog, thank you!

I only realized this post existed after I asked a question on this forum, so I think I will delete my post and ask it here:

So, when I run the code, I can’t use the GPU. I tried to move the model to the GPU using net.to(device), but it raises an error:

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

After trying and finding other errors, I think the problem is that the PyTorch DataLoader can’t be moved directly to the GPU, so the fastai DataLoaders can’t use the GPU.

Is that true? As I am new to fastai, I hope someone can confirm. Thank you!

Have you tried doing dls.to('cuda')?

Also, is it just the raw code from the article?

I have tried it; it raises an error:

AttributeError: ‘DataLoader’ object has no attribute ‘to’


My first error occurred locally using slightly different code, but I just tried your raw code in Colab and it gives the same error. The error is raised when I use net.to('cuda'). However, in Colab I’m not sure, because I can’t monitor GPU usage.


Nice call, thank you.

We need to include cbs=CudaCallback to have our data properly passed to the GPU. So redefine your Learner with:

from fastai.callback.data import CudaCallback
learn = Learner(dls, net, loss_func=criterion, opt_func=opt_func, cbs=[CudaCallback])

I’m updating the notebook now


Hey Zachary!

Thanks so much for making this guide. I’m pretty new to fast.ai and have only just started learning PyTorch. I prefer to get an overview of things so that I know what I’m getting myself into. Glad to see you took the time to write this up. It helps a noob like me a lot, especially when there are so many things online that I can’t even begin to think of where to start.

It works perfectly now! Thank you very much, and thanks again for your efforts on this article and walkwithfastai.com. It really helps those of us who are just starting the journey with fastai.


In the spirit of this blog, I’ve released a smaller sublibrary of fastai called fastai_minima, which contains only what’s needed to get Learner, the (absolute) basic Callbacks, and OptimWrapper going:


I’ve submitted a PR removing the need for the CudaCallback; fastai will now automatically determine the right device and move your data to it.

Edit: it is now merged; updating the tutorial.


I created a custom torch Dataset and specified the 2 required methods: __getitem__ and __len__. Then I created two torch DataLoaders:

train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True, num_workers=2)
val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=32, shuffle=True, num_workers=2)

and I wanted to use them with fastai, so I tried to run:

from fastai.vision.data import DataLoaders
dls = DataLoaders(train_loader, val_loader)
from fastai.vision.learner import cnn_learner, error_rate
learner_original = cnn_learner(dls, models.resnet34, metrics=error_rate, pretrained=True)

However, this gives me an error:

AttributeError: ‘DataLoader’ object has no attribute ‘after_batch’

What’s wrong with my setup? Should the torch Dataset have another attribute to make it compatible with fastai DataLoaders?

Can you post the full stack trace?

My best guess is that this is from cnn_learner, which has the following line of code:

if normalize: _add_norm(dls, meta, pretrained)

_add_norm is defined as follows:

def _add_norm(dls, meta, pretrained):
    if not pretrained: return
    stats = meta.get('stats')
    if stats is None: return
    dls.add_tfms([Normalize.from_stats(*stats)],'after_batch')

So as you can see, if normalize=True for cnn_learner (which is the default), then it will try to add a Normalize transform assuming that the DataLoader was created like a fastai DataBlock.

To solve this, just pass normalize=False to cnn_learner and make sure your PyTorch Dataset is normalizing the data.
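So the setup would look something like this (a sketch; the mean/std below are the usual ImageNet stats that fastai would otherwise apply for a pretrained model):

from torchvision import transforms

# Normalize inside the PyTorch pipeline instead of letting fastai do it
tfms = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
# ...build your Dataset / DataLoaders with these transforms...

learner_original = cnn_learner(dls, models.resnet34, metrics=error_rate,
                               pretrained=True, normalize=False)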


What do you mean by “build a proper DataLoader yourself” for get_preds to work?

You won’t have access to learn.dls.test_dl since you’re not using fastai DataLoaders. You’ll need to stick with raw PyTorch and how you build test DataLoaders in raw torch.
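For example, something like this should work (a sketch, assuming test_ds is a PyTorch Dataset built the same way as your training data):

import torch

# A plain PyTorch test DataLoader; no fastai data machinery involved
test_dl = torch.utils.data.DataLoader(test_ds, batch_size=64, shuffle=False)
preds, targs = learn.get_preds(dl=test_dl)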

Makes sense. Thank you!

I ran the notebook from your blog.

This notebook still contains the import statement:

from fastai.callback.data import CudaCallback

and also the Learner creation:

learn = Learner(dls, net, loss_func=criterion, opt_func=opt_func, cbs=[CudaCallback])

Maybe the changes in the library occurred after the release of this notebook.

Also, the function definition

def opt_func(params, **kwargs): return OptimWrapper(optim.SGD(params, lr=0.001))

raises an error in the learn.fit(2) cell:

----> 1 def opt_func(params, **kwargs): return OptimWrapper(optim.SGD(params, lr=0.001))
TypeError: __init__() missing 1 required positional argument: 'opt'

The constructor signature of OptimWrapper has changed.
https://github.com/fastai/fastai/blob/509af52c118e0f4f23edb03a255169c43d6dd28f/CHANGELOG.md#breaking-changes-1

    • OptimWrapper now has a different constructor signature, which makes it easier to wrap PyTorch optimizers.
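With the new signature, something like this should work instead (a sketch; check it against your installed fastai version):

from functools import partial
from torch import optim
from fastai.optimizer import OptimWrapper

# New style: pass the optimizer class via `opt`; fastai supplies the params
opt_func = partial(OptimWrapper, opt=optim.SGD)
learn = Learner(dls, net, loss_func=criterion, opt_func=opt_func, lr=1e-3)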

Thanks for making such a wonderful course, WWF. It was very lucid and comprehensible, and it covers many untouched aspects of the fastai API. It was only after going through this course that some hard-to-understand aspects of the API became crystal clear. I have not completed the course yet, and I hope to learn many more things. :+1: :+1:

I wrote a notebook very similar to the blog, but in the model evaluation process (applying the test set) and when saving the trained model, I always get an AttributeError:

AttributeError: ‘DataLoader’ object has no attribute ‘new’

My implementation for loading and preparing data is as follows:

train_dataset = torchvision.datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=transforms.ToTensor() # normalized data --> datapoints in range [0.,1.] instead of [0,255]
) 

test_dataset = torchvision.datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=transforms.ToTensor()
)

batch_size = 64
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
dls = fastai.data.core.DataLoaders(train_loader, test_loader)

and the function calls for testing and saving are:

learn.get_preds(is_test=True)
learn.export(fname='models/model_test.pkl')

What’s wrong with my implementation? Or are there any other implementation options?

I am having issues while loading the learner for my new DataLoader. So here’s the code story:

I have trained my RetinaNet model on the MIDOG dataset (here), which is an image dataset with 3 class labels: 0 - background, 1 - hard negative, 2 - hard positive. My model training is done, and now I want the same model to be trained on my own dataset, which has images and annotations in the same format (the image format changed, but that’s not an issue, as openslide is able to load it perfectly). The change here is that my own dataset has only 2 classes: 0 - background and 1 - hard positive. So when I try to load the model using the state_dict, it throws me an error.

I saved the model using torch.save as follows:

torch.save(learn.model.state_dict(), PATH)

When I try to load this model for the new data, it shows an error:

learn.model.load_state_dict(torch.load(PATH))

Error:

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
   1496         if len(error_msgs) > 0:
   1497             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
-> 1498                                self.__class__.__name__, "\n\t".join(error_msgs)))
   1499         return _IncompatibleKeys(missing_keys, unexpected_keys)
   1500 

RuntimeError: Error(s) in loading state_dict for RetinaNet:
	size mismatch for classifier.3.weight: copying a param with shape torch.Size([3, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([2, 128, 3, 3]).
	size mismatch for classifier.3.bias: copying a param with shape torch.Size([3]) from checkpoint, the shape in current model is torch.Size([2]).

Also, I tried a different approach: saving the whole model with torch.save(learn.model, PATH) and then loading it as follows:

learn.model = torch.load(PATH)  # map_location=torch.device('cpu') without GPU

And now it loads without error, but when I try learn.fit it throws me an IndexError.

The code:

max_learning_rate = 1e-3
cyc_len = 50
batch_size=16
learn.fit_one_cycle(cyc_len, max_learning_rate, callbacks=[SaveModelCallback(learn, monitor='train_loss', name='best_train')])

ERROR:

epoch	train_loss	valid_loss	pascal_voc_metric	BBloss	focal_loss	AP-Mitosis	time

 0.00% [0/3 00:00<?]
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-29-eb47d817d7bd> in <module>
      4 print("\n Starting Training with  n=",cyc_len,"epochs with batch_size=",batch_size,"\n")
      5 learn.fit_one_cycle(cyc_len, max_learning_rate,callbacks=[SaveModelCallback(learn, monitor='train_loss',
----> 6                                                                             name='best_train_loss_bs64_GC_1500')])

7 frames
/usr/local/lib/python3.7/dist-packages/fastai/train.py in fit_one_cycle(learn, cyc_len, max_lr, moms, div_factor, pct_start, final_div, wd, callbacks, tot_epochs, start_epoch)
     21     callbacks.append(OneCycleScheduler(learn, max_lr, moms=moms, div_factor=div_factor, pct_start=pct_start,
     22                                        final_div=final_div, tot_epochs=tot_epochs, start_epoch=start_epoch))
---> 23     learn.fit(cyc_len, max_lr, wd=wd, callbacks=callbacks)
     24 
     25 def fit_fc(learn:Learner, tot_epochs:int=1, lr:float=defaults.lr,  moms:Tuple[float,float]=(0.95,0.85), start_pct:float=0.72,

/usr/local/lib/python3.7/dist-packages/fastai/basic_train.py in fit(self, epochs, lr, wd, callbacks)
    198         else: self.opt.lr,self.opt.wd = lr,wd
    199         callbacks = [cb(self) for cb in self.callback_fns + listify(defaults.extra_callback_fns)] + listify(callbacks)
--> 200         fit(epochs, self, metrics=self.metrics, callbacks=self.callbacks+callbacks)
    201 
    202     def create_opt(self, lr:Floats, wd:Floats=0.)->None:

/usr/local/lib/python3.7/dist-packages/fastai/basic_train.py in fit(epochs, learn, callbacks, metrics)
    104             if not cb_handler.skip_validate and not learn.data.empty_val:
    105                 val_loss = validate(learn.model, learn.data.valid_dl, loss_func=learn.loss_func,
--> 106                                        cb_handler=cb_handler, pbar=pbar)
    107             else: val_loss=None
    108             if cb_handler.on_epoch_end(val_loss): break

/usr/local/lib/python3.7/dist-packages/fastai/basic_train.py in validate(model, dl, loss_func, cb_handler, pbar, average, n_batch)
     61             if not is_listy(yb): yb = [yb]
     62             nums.append(first_el(yb).shape[0])
---> 63             if cb_handler and cb_handler.on_batch_end(val_losses[-1]): break
     64             if n_batch and (len(nums)>=n_batch): break
     65         nums = np.array(nums, dtype=np.float32)

/usr/local/lib/python3.7/dist-packages/fastai/callback.py in on_batch_end(self, loss)
    306         "Handle end of processing one batch with `loss`."
    307         self.state_dict['last_loss'] = loss
--> 308         self('batch_end', call_mets = not self.state_dict['train'])
    309         if self.state_dict['train']:
    310             self.state_dict['iteration'] += 1

/usr/local/lib/python3.7/dist-packages/fastai/callback.py in __call__(self, cb_name, call_mets, **kwargs)
    248         "Call through to all of the `CallbakHandler` functions."
    249         if call_mets:
--> 250             for met in self.metrics: self._call_and_update(met, cb_name, **kwargs)
    251         for cb in self.callbacks: self._call_and_update(cb, cb_name, **kwargs)
    252 

/usr/local/lib/python3.7/dist-packages/fastai/callback.py in _call_and_update(self, cb, cb_name, **kwargs)
    239     def _call_and_update(self, cb, cb_name, **kwargs)->None:
    240         "Call `cb_name` on `cb` and update the inner state."
--> 241         new = ifnone(getattr(cb, f'on_{cb_name}')(**self.state_dict, **kwargs), dict())
    242         for k,v in new.items():
    243             if k not in self.state_dict:

/usr/local/lib/python3.7/dist-packages/object_detection_fastai/callbacks/callbacks.py in on_batch_end(self, last_output, last_target, **kwargs)
    153             num_boxes = len(bbox_gt) * 3
    154             for box, cla, scor in list(zip(bbox_pred, preds, scores))[:num_boxes]:
--> 155                 temp = BoundingBox(imageName=str(self.imageCounter), classid=self.metric_names_original[cla], x=box[0], y=box[1],
    156                                    w=box[2], h=box[3], typeCoordinates=CoordinatesType.Absolute, classConfidence=scor,
    157                                    bbType=BBType.Detected, format=BBFormat.XYWH, imgSize=(self.size, self.size))

IndexError: list index out of range

NOTE: The dataset is smaller compared to the previous dataset the model was trained on (nearly 100 images now vs. 3k images before), so because of that I have also kept the batch size small (16).
Please guide me on how I can train my already-trained model on my new dataset. Should I save the image data bunch and try reloading the dataset in the older notebook where the MIDOG dataset was trained, to see if it can be reloaded? Or is there any way to load the learner so that I can start my training?
INFERENCE WORKS FINE ON THE CURRENT MODEL WITH THE NEW DATASET
Any kind of resource (notebook or code snippet) would be beneficial.
Thanks for this wonderful forum.
Harshit