How to use the fastai library with a custom pretrained model?


(amine) #1

Hi everyone,
I have a model based on Xception that I trained with PyTorch, and I want to use it instead of the ImageNet models (resnet, resnext, …) as my pretrained model. Should I use xception as the architecture to create my learner:

    learn = ConvLearner.pretrained(xception, data)

then load the weights of my custom pretrained model with

    learn.load('custom_weights')

and then fit? Or are there other configurations to do?


(Brian Muhia) #2

If you define a function in torch_imports.py that points to the .pth file, it should work.


(amine) #3

I'm a bit of a newbie to deep learning; what should this function do, exactly?


(Brian Muhia) #4

Example in torch_imports.py:
The function load_pre takes a boolean pre, a model constructor f, and a file name fn.

    def load_pre(pre, f, fn):
        m = f()                           # build the bare model
        path = os.path.dirname(__file__)  # the fastai package directory
        if pre: load_model(m, f'{path}/weights/{fn}.pth')  # load pretrained weights
        return m

In the case of resnext50, the model constructor comes from this file (line 31), and if pre is True, load_pre will look for a file named resnext_50_32x4d.pth in the fastai/weights directory. This is the file containing pretrained weights that you presumably downloaded from somewhere.
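To make the lookup concrete, here is a small, framework-free sketch of how load_pre builds the weight-file path (weights_path is a hypothetical stand-in, not a fastai function):

```python
import os

# Hypothetical stand-in for the path logic inside load_pre:
# weights live in a 'weights' folder next to torch_imports.py.
def weights_path(module_file, fn):
    path = os.path.dirname(module_file)
    return f'{path}/weights/{fn}.pth'

print(weights_path('/home/me/fastai/torch_imports.py', 'resnext_50_32x4d'))
# /home/me/fastai/weights/resnext_50_32x4d.pth
```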

I would put the PyTorch code defining the xception model in the models directory, and then edit torch_imports.py to add an xception function. Note that the import is renamed so the wrapper doesn't shadow the constructor (otherwise the wrapper would call itself and recurse):

from .models.xception import xception as xception_model
.
.
.
def xception(pre): return load_pre(pre, xception_model, 'xception')

Then when you do the standard fastai notebook imports, it’ll be available for you to use as:

arch=xception
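As a sanity check of the pattern (pure Python, no fastai or torch; the dict-based "model" is a toy stand-in), this sketch shows why the wrapper and the imported constructor need different names:

```python
# Toy sketch of the torch_imports.py registration pattern.
# A real setup would import the constructor, e.g.
#   from .models.xception import xception as xception_model
def xception_model():
    return {'arch': 'xception', 'weights': None}  # stand-in for the real model

def load_pre(pre, f, fn):
    m = f()                         # build the bare model
    if pre:
        m['weights'] = f'{fn}.pth'  # stand-in for load_model(m, path)
    return m

# The wrapper keeps the public name; the constructor keeps its own.
def xception(pre): return load_pre(pre, xception_model, 'xception')

print(xception(True))  # {'arch': 'xception', 'weights': 'xception.pth'}
```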


(amine) #5

@poppingtonic thanks for the explanation :slight_smile: One more question: so I should create a fastai/weights folder and put the file containing the pretrained weights inside, right?


(Brian Muhia) #6

Yes. A better way would be to define a weights folder somewhere else (maybe where you have more space), and then symlink it to fastai/weights. Example:

ln -s /media/AI/pytorch/pretrained-weights/ ~/src/fastai/fastai/weights
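A quick way to sanity-check that kind of link (the /tmp paths here are purely illustrative):

```shell
# Create the real weights directory somewhere with space,
# then symlink it into the fastai tree (illustrative paths).
mkdir -p /tmp/pretrained-weights
ln -sfn /tmp/pretrained-weights /tmp/fastai-weights
# Confirm the link resolves where we expect.
readlink /tmp/fastai-weights
```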

#7

Hey, I'm also new to deep learning. So if I wanted to do this for a Mask R-CNN GitHub repo, I would add to torch_imports.py:

from .models.mask_rcnn import mask_rcnn as mask_rcnn_model
def mask_rcnn(pre): return load_pre(pre, mask_rcnn_model, 'mask_rcnn')

Then I'd add Mask R-CNN's model.py to fastai's models folder and rename it mask_rcnn.py.

And if I use the COCO weights, I can either put them in the weights folder with resnext50 etc., or symlink them.

And then I should be able to use arch=mask_rcnn with COCO's weights as a pretrained model?


(Brian Muhia) #8

Unfortunately, model.py is a TensorFlow model, so it wouldn't work in this case. If you find (or implement) a PyTorch model, then the path you've set out will work.


#9

OK, sweet. So for this repo, I could use its model.py and it should work?

Also, would I need to go through the installation from the repo, or could I just put model.py in fastai and not need the rest of the repo?


(Brian Muhia) #10

I suppose if you take these three imports and roll them into the file, it’ll work. Let me know how it goes.


#11

Alright, so I'm not totally sure how to explain/post where I'm at or how to fix the errors I'm getting. I'm on Windows 10.

When I go through the imports in a notebook I get this error:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
in <module>()
----> 1 from fastai.transforms import *
      2 from fastai.conv_learner import *
      3 from fastai.model import *
      4 from fastai.dataset import *
      5 from fastai.sgdr import *

~\Anaconda3\fastai\courses\dl1\fastai\transforms.py in <module>()
      1 from .imports import *
----> 2 from .layer_optimizer import *
      3 from enum import IntEnum
      4
      5 def scale_min(im, targ, interpolation=cv2.INTER_AREA):

~\Anaconda3\fastai\courses\dl1\fastai\layer_optimizer.py in <module>()
      1 from .imports import *
----> 2 from .torch_imports import *
      3 from .core import *
      4
      5 def opt_params(parm, lr, wd):

~\Anaconda3\fastai\courses\dl1\fastai\torch_imports.py in <module>()
     20 from .models.nasnet import nasnetalarge
     21 from .models.fa_resnet import *
---> 22 from .models.maskrcnn import *
     23
     24

~\Anaconda3\fastai\courses\dl1\fastai\models\maskrcnn.py in <module>()
     24 from pytorchmaskrcnn import utils
     25 from pytorchmaskrcnn import visualize
---> 26 from pytorchmaskrcnn.nms.nms_wrapper import nms
     27 from pytorchmaskrcnn.roialign.roi_align.crop_and_resize import CropAndResizeFunction
     28

~\Anaconda3\envs\fastai\lib\site-packages\pytorchmaskrcnn\nms\nms_wrapper.py in <module>()
      9 from __future__ import print_function
     10
---> 11 from pytorchmaskrcnn.nms.pth_nms import pth_nms
     12
     13

~\Anaconda3\envs\fastai\lib\site-packages\pytorchmaskrcnn\nms\pth_nms.py in <module>()
      1 import torch
----> 2 from ._ext import nms
      3 import numpy as np
      4
      5 def pth_nms(dets, thresh):

~\Anaconda3\envs\fastai\lib\site-packages\pytorchmaskrcnn\nms\_ext\nms\__init__.py in <module>()
      1
      2 from torch.utils.ffi import _wrap_function
----> 3 from ._nms import lib as _lib, ffi as _ffi
      4
      5 __all__ = []

ModuleNotFoundError: No module named 'pytorchmaskrcnn.nms._ext.nms._nms'

I don't have a _nms module, and I think it's because I get an error when running 'python build.py' in the pytorchmaskrcnn/nms folder.

When I run 'python build.py' through the command prompt I get:
Including CUDA code.
C:\Users\Jason\Anaconda3\pytorch-mask-rcnn\nms
generating C:\Users\Jason\AppData\Local\Temp\tmpvb_3tnks_nms.c
setting the current directory to ‘C:\Users\Jason\AppData\Local\Temp\tmpvb_3tnks’
running build_ext
building ‘_nms’ extension
creating Release
creating Release\Users
creating Release\Users\Jason
creating Release\Users\Jason\Anaconda3
creating Release\Users\Jason\Anaconda3\pytorch-mask-rcnn
creating Release\Users\Jason\Anaconda3\pytorch-mask-rcnn\nms
creating Release\Users\Jason\Anaconda3\pytorch-mask-rcnn\nms\src
C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.15.26726\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DWITH_CUDA -IC:\Users\Jason\Anaconda3\envs\fastai\lib\site-packages\torch\utils\ffi…\lib\include -IC:\Users\Jason\Anaconda3\envs\fastai\lib\site-packages\torch\utils\ffi…\lib\include\TH -IC:\Users\Jason\Anaconda3\envs\fastai\lib\site-packages\torch\utils\ffi…\lib\include\THC -IC:\Users\Jason\Anaconda3\envs\fastai\include -IC:\Users\Jason\Anaconda3\envs\fastai\include “-IC:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.15.26726\include” “-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um” “-ID:\Windows Kits\10\include\10.0.17134.0\ucrt” “-ID:\Windows Kits\10\include\10.0.17134.0\shared” “-ID:\Windows Kits\10\include\10.0.17134.0\um” “-ID:\Windows Kits\10\include\10.0.17134.0\winrt” “-ID:\Windows Kits\10\include\10.0.17134.0\cppwinrt” /Tc_nms.c /Fo.\Release_nms.obj
_nms.c
c:\users\jason\anaconda3\envs\fastai\lib\site-packages\torch\lib\include\thc\THCGeneral.h(9): fatal error C1083: Cannot open include file: 'cuda.h': No such file or directory
Traceback (most recent call last):
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\distutils\_msvccompiler.py", line 423, in compile
    self.spawn(args)
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\distutils\_msvccompiler.py", line 542, in spawn
    return super().spawn(cmd)
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\distutils\ccompiler.py", line 909, in spawn
    spawn(cmd, dry_run=self.dry_run)
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\distutils\spawn.py", line 38, in spawn
    _spawn_nt(cmd, search_path, dry_run=dry_run)
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\distutils\spawn.py", line 81, in _spawn_nt
    "command %r failed with exit status %d" % (cmd, rc))
distutils.errors.DistutilsExecError: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.15.26726\bin\HostX86\x64\cl.exe' failed with exit status 2

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\site-packages\cffi\ffiplatform.py", line 51, in _build
    dist.run_command('build_ext')
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\distutils\dist.py", line 974, in run_command
    cmd_obj.run()
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\site-packages\setuptools\command\build_ext.py", line 78, in run
    _build_ext.run(self)
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\distutils\command\build_ext.py", line 339, in run
    self.build_extensions()
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\distutils\command\build_ext.py", line 448, in build_extensions
    self._build_extensions_serial()
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\distutils\command\build_ext.py", line 473, in _build_extensions_serial
    self.build_extension(ext)
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\site-packages\setuptools\command\build_ext.py", line 199, in build_extension
    _build_ext.build_extension(self, ext)
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\distutils\command\build_ext.py", line 533, in build_extension
    depends=ext.depends)
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\distutils\_msvccompiler.py", line 425, in compile
    raise CompileError(msg)
distutils.errors.CompileError: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.15.26726\bin\HostX86\x64\cl.exe' failed with exit status 2

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "build.py", line 34, in <module>
    ffi.build()
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\site-packages\torch\utils\ffi\__init__.py", line 167, in build
    _build_extension(ffi, cffi_wrapper_name, target_dir, verbose)
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\site-packages\torch\utils\ffi\__init__.py", line 103, in _build_extension
    ffi.compile(tmpdir=tmpdir, verbose=verbose, target=libname)
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\site-packages\cffi\api.py", line 697, in compile
    compiler_verbose=verbose, debug=debug, **kwds)
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\site-packages\cffi\recompiler.py", line 1520, in recompile
    compiler_verbose, debug)
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\site-packages\cffi\ffiplatform.py", line 22, in compile
    outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
  File "C:\Users\Jason\Anaconda3\envs\fastai\lib\site-packages\cffi\ffiplatform.py", line 58, in _build
    raise VerificationError('%s: %s' % (e.__class__.__name__, e))
cffi.error.VerificationError: CompileError: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.15.26726\bin\HostX86\x64\cl.exe' failed with exit status 2

I've tried downloading a couple of different things, but they haven't worked.

And when I run 'python build.py' through bash in the command prompt I get:
/mnt/c/Users/Jason/Anaconda3/pytorch-mask-rcnn/nms
generating /tmp/tmpAUGgWK/_nms.c
setting the current directory to ‘/tmp/tmpAUGgWK’
running build_ext
building ‘_nms’ extension
creating mnt
creating mnt/c
creating mnt/c/Users
creating mnt/c/Users/Jason
creating mnt/c/Users/Jason/Anaconda3
creating mnt/c/Users/Jason/Anaconda3/pytorch-mask-rcnn
creating mnt/c/Users/Jason/Anaconda3/pytorch-mask-rcnn/nms
creating mnt/c/Users/Jason/Anaconda3/pytorch-mask-rcnn/nms/src
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/home/commandpost/.local/lib/python2.7/site-packages/torch/utils/ffi/…/…/lib/include -I/home/commandpost/.local/lib/python2.7/site-packages/torch/utils/ffi/…/…/lib/include/TH -I/usr/include/python2.7 -c _nms.c -o ./_nms.o -std=c99
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/home/commandpost/.local/lib/python2.7/site-packages/torch/utils/ffi/…/…/lib/include -I/home/commandpost/.local/lib/python2.7/site-packages/torch/utils/ffi/…/…/lib/include/TH -I/usr/include/python2.7 -c /mnt/c/Users/Jason/Anaconda3/pytorch-mask-rcnn/nms/src/nms.c -o ./mnt/c/Users/Jason/Anaconda3/pytorch-mask-rcnn/nms/src/nms.o -std=c99
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -Wl,-z,relro -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security ./_nms.o ./mnt/c/Users/Jason/Anaconda3/pytorch-mask-rcnn/nms/src/nms.o /mnt/c/Users/Jason/Anaconda3/pytorch-mask-rcnn/nms/src/cuda/nms_kernel.cu.o -o ./_nms.so
x86_64-linux-gnu-gcc: error: /mnt/c/Users/Jason/Anaconda3/pytorch-mask-rcnn/nms/src/cuda/nms_kernel.cu.o: No such file or directory
Traceback (most recent call last):
  File "build.py", line 34, in <module>
    ffi.build()
  File "/home/commandpost/.local/lib/python2.7/site-packages/torch/utils/ffi/__init__.py", line 189, in build
    _build_extension(ffi, cffi_wrapper_name, target_dir, verbose)
  File "/home/commandpost/.local/lib/python2.7/site-packages/torch/utils/ffi/__init__.py", line 111, in _build_extension
    outfile = ffi.compile(tmpdir=tmpdir, verbose=verbose, target=libname)
  File "/home/commandpost/.local/lib/python2.7/site-packages/cffi/api.py", line 697, in compile
    compiler_verbose=verbose, debug=debug, **kwds)
  File "/home/commandpost/.local/lib/python2.7/site-packages/cffi/recompiler.py", line 1520, in recompile
    compiler_verbose, debug)
  File "/home/commandpost/.local/lib/python2.7/site-packages/cffi/ffiplatform.py", line 22, in compile
    outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
  File "/home/commandpost/.local/lib/python2.7/site-packages/cffi/ffiplatform.py", line 58, in _build
    raise VerificationError('%s: %s' % (e.__class__.__name__, e))
cffi.error.VerificationError: LinkError: command 'x86_64-linux-gnu-gcc' failed with exit status 1

Any thoughts?


(Brian Muhia) #12

There's a lot going on here. First, one of the errors is hinted at by x86_64-linux-gnu-gcc: the build is trying to install the dependencies while assuming Linux tools. See how to install PyTorch on Windows 10, and once you can run import torch with no errors, you're good to go.
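A minimal sketch for checking the environment once PyTorch is (re)installed; torch_status is just a hypothetical helper, and it degrades gracefully if torch is missing:

```python
import importlib.util

# Report whether PyTorch is importable and, if so, whether CUDA is usable.
def torch_status():
    if importlib.util.find_spec('torch') is None:
        return 'torch is not installed'
    import torch
    return f'torch {torch.__version__}, cuda={torch.cuda.is_available()}'

print(torch_status())
```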


#13

I know, I wasn't sure how to show what was going on. But doesn't torch get imported normally with fastai? Because if I take out trying to import the Mask R-CNN, I don't get any errors with the imports. Could it be because I'm using Windows?