Deploy fastai v2 model on a Windows machine

I have a model trained to discriminate different infestations in fruits. I want to deploy this model on a Windows 10 machine. Can someone explain how to go about it?


fastai isn't really supported on Windows. If you cannot get a Linux machine to deploy on, you could try WSL on Windows 10 and deploy there!

You can also try Docker; details are here => Deployment ready template for creating responsive web app for Fastai2 Vision models

The issue is that my earlier model used to run on Windows with fastai v1. Is there no workaround?
Alternatively, how can we continue to use fastai v1 in Colab? And can we still install fastai v1 on a Windows machine using the earlier method?

I have exactly the same problem: my earlier model ran fine on Windows with fastai v1, but now I have issues using the latest fastai v2. For the fastai v1 installation I followed the instructions at https://medium.com/@pierre_guillou/how-to-install-fastai-v1-on-windows-10-ca1bc370dce4.
Can somebody please help?
Thanks

Hi,

I can get fastai working in a conda environment on Windows 10 using this environment.yml file:

name: fastai
channels:
- fastai
- pytorch
- defaults
dependencies:
- fastcore
- fastai
- fastprogress>=0.1.22
- jupyter
- matplotlib
- nbdev
- pandas
- pillow
- pip
- python>=3.6
- pytorch>=1.3.0
- pyyaml
- requests
- torchvision>=0.5

and running the following command in a Conda shell
conda env create -f environment.yml
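After creating and activating the environment, a quick stdlib-only sanity check can confirm the key packages from the environment.yml resolve before launching Jupyter (a sketch; the package names checked are just the ones listed above):

```python
import importlib.util

# Confirm the core packages resolve inside the activated env
# before starting Jupyter.
for pkg in ("fastai", "torch", "fastcore"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'ok' if found else 'MISSING'}")
```

If anything prints MISSING, re-run `conda env create` and watch for solver errors.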

Just curious, shouldn’t an earlier model be portable in some way?

Oh! That should solve the problem.
Can you suggest how to create a .yml file on Windows? I tried with Notepad++ but always got a .txt file on saving.
Alternatively, could you provide a link to the environment.yml file?

Regards

Try putting the name in quotes ("environment.yml") when saving,
or
in File Explorer, check View -> File Name Extensions, then rename the file,
or
open a Command Prompt and run:
rename a.txt a.yml
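If the editor keeps fighting over the extension, Python itself can write the file; a minimal sketch (the dependency list here is abbreviated, paste in the full one from the environment.yml above):

```python
from pathlib import Path

# Write environment.yml directly, sidestepping editors that tack a
# .txt extension onto the saved file. Abbreviated content; copy the
# full dependency list from the post above.
yml = "name: fastai\nchannels:\n- fastai\n- pytorch\n- defaults\n"
Path("environment.yml").write_text(yml)
print(Path("environment.yml").suffix)  # .yml
```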

Hi,
I was successful in installing fastai using environment.yml. However, there seems to be something wrong. When I tried running the intro notebook I got the error:
No module named 'fastai2'
Any hint as to what might be wrong?

Hi,
Thanks.
I trained my model using Colab. I exported the model to run it on Windows 10.
The following command runs fine: from fastai.vision.all import *
But the problem is when I use the following command
load_learner('C:/Python/Test/Model/') (export.pkl is inside the directory).
I got error:
File "C:\Users…\Anaconda3\envs\fastai\lib\site-packages\torch\serialization.py", line 210, in __init__
    super(_open_file, self).__init__(open(name, mode))
PermissionError: [Errno 13] Permission denied: 'C:/Python/Test/Model/'

Could it be an access problem with the directory? I also tried moving the model to a different directory and got the same error.

Thanks again
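For what it's worth, that traceback suggests load_learner was handed the directory rather than the export.pkl file itself; in fastai v2, load_learner expects the path of the .pkl file. Opening a directory with open() is exactly what produces errno 13 on Windows (on Linux the same mistake surfaces as IsADirectoryError), as this stdlib-only sketch shows:

```python
import tempfile

# Passing a directory where a file path is expected reproduces the
# error: Windows raises PermissionError (errno 13), Linux raises
# IsADirectoryError -- both are OSError subclasses.
d = tempfile.mkdtemp()
try:
    open(d, "rb")
    err = None
except OSError as e:
    err = type(e).__name__
print(err)
```

So the likely fix is to include the file name: load_learner('C:/Python/Test/Model/export.pkl').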

instead of fastai2, use fastai

fastai2 was a temporary name during development

Yeah, I just got this also when setting up a brand new Windows install - fixed it by downloading the latest fastai source from github again, deleting my existing folder (which I had called “fastai2”, btw) and extracting the github .zip to “fastai-master” and then renaming that new folder to be just “fastai”. Started my kernel and loaded the notebook again, and all fixed now.

I trained on Colab and exported the model.
For the local machine I installed 64-bit Miniconda with Python 3.8 and created a fastai virtual environment. I followed the installation instructions on the fastai GitHub page. I am getting:
AttributeError: Can't get attribute 'FlattenedLoss' on <module 'fastai.layers' from '/home/lenovo/miniconda3/envs/fastai2/lib/python3.8/site-packages/fastai/layers.py'>

I even tried updating the fastai installation, but the error persists.
What's wrong?
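That AttributeError usually means a version mismatch between the fastai that exported the model on Colab and the fastai installed locally: pickle stores bare "module.ClassName" references and resolves them at load time, so if the local version no longer defines that name, loading fails. A stdlib-only sketch of the mechanism (fakelib and the stand-in FlattenedLoss class are made up for illustration):

```python
import pickle
import sys
import types

# Build a fake module that defines the class at "export" time.
mod = types.ModuleType("fakelib")

class FlattenedLoss:  # stand-in for the real fastai class
    pass

FlattenedLoss.__module__ = "fakelib"
mod.FlattenedLoss = FlattenedLoss
sys.modules["fakelib"] = mod

blob = pickle.dumps(FlattenedLoss())

# Simulate upgrading to a version that no longer defines the class.
del mod.FlattenedLoss
try:
    pickle.loads(blob)
except AttributeError as e:
    print(type(e).__name__, "-", e)
```

The practical fix is to pin the local fastai to the exact version that Colab used at export time (check it there with `fastai.__version__`), or re-export the model under the locally installed version.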

Any progress? I am also trying to deploy my model into a Windows local machine.

Could not deploy to Windows machine. Shifted to Linux.

Hi - were you able to find a way to get around the “permission denied” message in this use case?

I was able to get a tabular model deployed on Windows 10 using Flask. The toughest problem was getting the model loaded. Here are the key things I learned getting this to work:

  1. The path definition was critical. I needed to include the following to get one that worked:

     import pathlib
     # Colab/Linux pickles PosixPath objects; redirect them to WindowsPath
     temp = pathlib.PosixPath
     pathlib.PosixPath = pathlib.WindowsPath
    
  2. The only variation of the load_learner statement that worked was one with a single argument, defined as follows. I had to use os.path.join to get a fully qualified path while avoiding Windows path quirks. The model file adult_sample_model_may16.pkl is in the same directory as the Python module:

         import os
         from pathlib import Path

         path = Path(os.getcwd())
         full_path = os.path.join(path, 'adult_sample_model_may16.pkl')
         learner = load_learner(full_path)
    

Once I got through those two hurdles, I was able to successfully run inference with the model on my Windows system.
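The pathlib patch in step 1 works because pickle stores class references by name and resolves them at load time, so rebinding PosixPath makes the paths inside export.pkl come back as WindowsPath objects. The same mechanism can be demonstrated cross-platform with the pure path classes (a sketch only; the real fix is exactly the two lines in step 1):

```python
import importlib
import pathlib
import pickle

# A Colab-exported learner pickles concrete PosixPath objects; a
# PurePosixPath stands in so this demo runs on any OS.
blob = pickle.dumps(pathlib.PurePosixPath("models/export.pkl"))

# Rebind the name pickle resolves at load time -- the same trick as
# pathlib.PosixPath = pathlib.WindowsPath in step 1.
mod = importlib.import_module(pathlib.PurePosixPath.__module__)
orig = mod.PurePosixPath
mod.PurePosixPath = pathlib.PureWindowsPath
try:
    p = pickle.loads(blob)
    print(type(p).__name__)  # PureWindowsPath
finally:
    mod.PurePosixPath = orig
```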