Multicategory Inference fastai2

I trained a multi-category image classifier on Colab and exported it. I am trying to do inference on a standalone Windows machine. Loading the learner with
inf_learner = load_learner('model.pkl') gives the following error:

cannot instantiate 'PosixPath' on your system

I used the same technique with fastai v1 and it used to work fine.
Please let me know what detail I am missing.
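For context, the error comes from pathlib: a Learner exported on Linux/Colab pickles pathlib.PosixPath objects, and Windows refuses to instantiate them. A commonly shared community workaround (hacky, not an official fastai API, and not suggested in this thread) is to alias PosixPath before loading:

```python
import pathlib

# A Learner exported on Linux/Colab pickles pathlib.PosixPath objects;
# Windows pathlib refuses to build them, hence the error above.
# Community workaround: alias PosixPath to the platform's concrete Path
# class before calling load_learner.
if isinstance(pathlib.Path(), pathlib.WindowsPath):   # running on Windows
    pathlib.PosixPath = pathlib.WindowsPath

# inf_learner = load_learner('model.pkl')  # should now unpickle the paths
```

On Linux/macOS the condition is False, so nothing is patched and the code is a no-op.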


Hi tapashettisr hope you are well!

I don’t use Windows but maybe the links below can give you some ideas.

Cheers mrfabulous1 :smiley: :smiley:

Hey Sunil,

Unfortunately I don’t believe fastai fully supports Windows - try running the same code on a Linux machine, it should work.

That seems to be the only option now. But I am not well versed in Linux. Can you please guide me on how to install fastai2 on a Linux machine? I just acquired a thin client with Ubuntu 18.04. I plan to use it only for inference.
Thanks in anticipation.

Hey Sunil,

No special knowledge required here - it’s the same instructions as on the fastai Github page, and so far for me deploying on Linux machines has gone flawlessly. Hope it works out :slight_smile:


I installed fastai on the Ubuntu machine. I have a multi-category model trained on Colab. I copied this model to the local machine and tried loading it, and I am getting the following error:

AttributeError: Can't get attribute 'FlattenedLoss' on <module 'fastai.layers' from '/home/lenovo/miniconda3/envs/fastai2/lib/python3.8/site-packages/fastai/'>
Is there an issue with my installation?

One possible source of this problem is mismatched versions: make sure the two environments are identical (for example, by exporting an environment/requirements file from one machine and using it to build the environment on the other). Another is saving the model with some function in the namespace that is not available when you try to load it. It looks like you did not use FlattenedLoss yourself, so most likely it is the former.
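A quick first check (a sketch, not from the thread; the version attributes are standard) is to print the relevant versions on both machines and compare them by eye:

```python
import sys

# Print the versions that matter when unpickling an exported Learner.
# Run this on both Colab and the local machine; fastai and torch in
# particular should match exactly.
print("python :", sys.version.split()[0])

try:
    import torch
    import fastai
    print("torch  :", torch.__version__)
    print("fastai :", fastai.__version__)
except ImportError as e:
    print("missing package:", e.name)
```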

I trained on Colab and exported the model.
For the local machine I installed 64-bit Miniconda with Python 3.8 and created a fastai virtual environment, following the installation instructions on the fastai GitHub page. How can I make sure that the environment on Colab is the same as that of the local machine?
Sorry for bothering you repeatedly.

Hi tapashettisr hope all is well and you are having a wonderful day/evening!

  1. Immediately after you finish training your model on Colab, run !pip freeze.
     This will list the libraries in use (you are only concerned with the ones your application uses, as Colab installs many).

  2. Run pip freeze on your local machine and compare the libraries.

  3. In requirements.txt on your local machine, pin any library versions that differ to match the ones from Colab.

  4. Sometimes there is a mismatch between the Anaconda build library versions, and conda won’t let you install the same library versions as Google Colab even though they are in requirements.txt. In that case, use
     pip install
     to override Anaconda if required.
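Steps 1 to 3 above can be sketched as a small script (the data here is inline for illustration; in practice you would save the two pip freeze outputs to files and read them in):

```python
# Compare two `pip freeze` outputs and report packages whose pinned
# versions differ between the Colab and local environments.
def parse_freeze(lines):
    pins = {}
    for line in lines:
        if "==" in line:
            name, version = line.strip().split("==", 1)
            pins[name.lower()] = version
    return pins

def diff_envs(colab_lines, local_lines):
    colab, local = parse_freeze(colab_lines), parse_freeze(local_lines)
    return {name: (colab[name], local[name])
            for name in colab.keys() & local.keys()
            if colab[name] != local[name]}

# Example with inline data (hypothetical version numbers):
colab = ["fastai==2.0.13", "torch==1.6.0"]
local = ["fastai==2.0.13", "torch==1.7.1"]
print(diff_envs(colab, local))  # {'torch': ('1.6.0', '1.7.1')}
```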

Hope this helps.

Cheers mrfabulous1 :smiley: :smiley:


Thanks for the useful hints. I think the issue is with export.
I tried loading the exported model on Colab and
got the error: Can't get attribute 'get_x' on <module '__main__'>
However, the same model was working for inference before exporting.
Is it an unpickling issue?

If I finish training on Colab, export the model, and then load it with load_learner(), inference runs fine. However, if I terminate the runtime and load the saved model in a new Colab session, it throws the error
Can't get attribute 'get_x' on <module '__main__'>

I could solve the issue by including the definitions of the get_x and get_y functions in the main script. But any idea why this is required? get_x can be radically different during inference from training. Doesn't the model store the get_x and get_y functions?

Hey Sunil,

What you’ve described is how pickling works and it’s not particular to fastai. You need to make sure that there isn’t anything missing in the namespace when you unpickle (in this case, by using load_learner).
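This is easy to demonstrate with pickle alone (a sketch, nothing fastai-specific): pickle stores a reference to a function by module and name, not its code, so the definition must exist in that module at load time:

```python
import pickle

def get_x(row):          # stand-in for a DataBlock getter
    return row["image"]

payload = pickle.dumps({"getter": get_x})

# Unpickling works while get_x is still defined in this module...
restored = pickle.loads(payload)
print(restored["getter"] is get_x)   # True

# ...but once the name disappears from the module, loading fails with
# the same kind of error seen above:
del get_x
try:
    pickle.loads(payload)
except AttributeError as e:
    print(e)   # Can't get attribute 'get_x' on <module ...>
```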

My workaround for problems like these in fastai is this chunk of code (in my case, for loading “fake_accuracy” into the namespace for example):

import pickle
import torch

class CustomUnpickler(pickle.Unpickler):
    "Unpickler that resolves names missing from the current namespace."
    def find_class(self, module, name):
        if name == 'fake_accuracy':
            from src.train_fastai import fake_accuracy
            return fake_accuracy
        return super().find_class(module, name)

def load_learner(self, fname, cpu=True):
    "Load a `Learner` object in `fname`, optionally putting it on the `cpu`"
    pickle.Unpickler = CustomUnpickler  # torch.load instantiates pickle.Unpickler
    res = torch.load(fname, map_location='cpu' if cpu else None)
    if hasattr(res, 'to_fp32'): res = res.to_fp32()
    if cpu: res.dls.cpu()
    return res
1 Like


I will try it out.

@orendar the load_learner method that you have provided has a self argument, suggesting that it belongs to some class. I am new here, so apologies if this is something trivial.
Can you please elaborate on how exactly we can use the snippet to solve the get_x serialization problem?
Also, @tapashettisr, were you able to resolve this error?
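For reference, the same pattern from orendar's snippet adapted to get_x/get_y might look like this (a hypothetical sketch: the dummy getters here must be replaced with your real training-time definitions):

```python
import pickle

def get_x(row): return row["image"]    # must match the training-time logic
def get_y(row): return row["labels"]

class GetterUnpickler(pickle.Unpickler):
    "Resolve get_x/get_y even if the pickle recorded them under __main__."
    def find_class(self, module, name):
        if name == 'get_x': return get_x
        if name == 'get_y': return get_y
        return super().find_class(module, name)

# Patch before loading, as in the earlier snippet:
# pickle.Unpickler = GetterUnpickler
# learn = load_learner('model.pkl')
```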