Deploying a model for inference

Hi, I'm trying to deploy an image classification model to production. I exported the model using learn.export('learn.pkl').

Since I'm running the server with Django on a Windows system, I used PureWindowsPath():
path = PureWindowsPath('./artifacts')
model = load_learner(path/'learn.pkl')

But this throws an error: "PureWindowsPath has no attribute 'seek'. You can only torch.load from a file that is seekable."
How do I fix this?
Is there any way to deploy for local inference on Windows, or is Linux the only option?

Thank you,

I have faced this issue on Windows too, so I avoided the Django route entirely: I built a web app with Starlette and Jinja2 to showcase my vision model, then deployed it as a Docker image on Heroku or DigitalOcean.
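Not the template itself, but a minimal sketch of that kind of app, assuming fastai v2 and a model exported with learn.export('learn.pkl'); the route and form field names here are just illustrative:

from pathlib import Path
from fastai.vision.all import load_learner, PILImage
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route

learn = load_learner(Path('artifacts')/'learn.pkl')

async def predict(request):
    # expects a multipart form upload with the image in a 'file' field
    form = await request.form()
    img_bytes = await form['file'].read()
    pred, _, probs = learn.predict(PILImage.create(img_bytes))
    return JSONResponse({'label': str(pred), 'confidence': float(probs.max())})

app = Starlette(routes=[Route('/predict', predict, methods=['POST'])])

You'd serve it with uvicorn (e.g. uvicorn main:app, assuming the file is main.py), and the Dockerfile just installs the dependencies and runs that command.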
You can find more details here => Deployment ready template for creating responsive web app for Fastai2 Vision models
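As for the PureWindowsPath error itself: pathlib's "pure" path classes only do path manipulation, not file I/O, and torch.load (which load_learner calls under the hood) only special-cases str and pathlib.Path, so it falls back to treating your PureWindowsPath as a file object and looks for .seek on it. On Windows a plain pathlib.Path already resolves to a concrete WindowsPath, so you shouldn't need PureWindowsPath at all. Something like this should work with the layout from your post:

from pathlib import Path
from fastai.vision.all import load_learner

# Path, not PureWindowsPath: a concrete path can actually be opened
# and seeked, which is what torch.load needs.
learn = load_learner(Path('./artifacts')/'learn.pkl')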

That's great! Thanks for sharing :smiley: