I’m unfamiliar with the audio library right now, and I know they’re working on adjusting a number of things for v2. You’d specifically want to look at the transforms. It also looks like you’re not including them in your DataBlock call; pass in your item_tfms and batch_tfms there.
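Something like this is what I mean (an image-flavoured sketch, since I don’t know the audio blocks offhand; your audio transforms would go in the same two slots):

```python
# A minimal sketch with the image blocks -- swap in the audio
# equivalents; the item_tfms/batch_tfms slots work the same way.
from fastai2.vision.all import *

dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(seed=42),
    get_y=parent_label,
    item_tfms=Resize(224),          # applied per item, on the CPU
    batch_tfms=aug_transforms())    # applied per batch, on the GPU
dls = dblock.dataloaders(path)      # `path` is wherever your data lives
```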
This week was great again! I loved the insights on how to debug the code, and also the imagenette leaderboard with working code examples!!
Please correct me if I am wrong: these leaderboard code examples take models available in vision.models and play with the parameters. So the architecture of resnet50 is not the same as xresnet50, is it? Is xresnet an improved version?
Is this the complete list of models? I believe we can also use any of these (coming from PyTorch) without doing anything special.
Now you’re getting the idea! XResNet brings together 5-6 different paper ideas (and still one or two more). Some of those work, some don’t; it’s experiments people (and Jeremy) have tried. That’s certainly not the limit (you can take any PyTorch model and use it), it’s just what fastai2 has natively.
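To make the “any PyTorch model” point concrete, here’s a rough sketch (it assumes you already have a `dls` built from a DataBlock; densenet121 is just an arbitrary torchvision pick):

```python
# Any plain PyTorch nn.Module drops straight into a fastai2 Learner.
from fastai2.vision.all import Learner, CrossEntropyLossFlat, accuracy
from torchvision.models import densenet121

model = densenet121(num_classes=dls.c)  # dls.c = number of classes
learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(),
                metrics=accuracy)
learn.fit_one_cycle(1)
```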
If you want more models to confuse yourself with, take a look at this wonderful repository (seriously, it’s great!)
We’ll go over how to do transfer learning directly from pretrained weights like the ones here, and how to use them with our models (instead of having the nice pretrained=True option).
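At the raw PyTorch level that boils down to something like this (a sketch, not the lesson’s code; the checkpoint file name is hypothetical):

```python
# Load an external checkpoint into a model by hand instead of
# relying on pretrained=True.
import torch
from torchvision.models import vgg19

model = vgg19()
state = torch.load('external_vgg19.pth', map_location='cpu')
model.load_state_dict(state)  # strict=False if the keys don't line up exactly
```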
We’ll use a VGG16/19 this week for style transfer, too.
If you want to figure out a way to implement it, take a look at RandomSplitter:
def RandomSplitter(valid_pct=0.2, seed=None, **kwargs):
    "Create function that splits `items` between train/val with `valid_pct` randomly."
    def _inner(o, **kwargs):
        if seed is not None: torch.manual_seed(seed)
        rand_idx = L(int(i) for i in torch.randperm(len(o)))
        cut = int(valid_pct * len(o))
        return rand_idx[cut:], rand_idx[:cut]
    return _inner
I might play with that this weekend if no one else does; I’ve been wanting something like that. The only thought I had: if there isn’t shuffling of items somewhere, or some kind of check to make sure you aren’t missing labels, you might end up missing labels in your train/valid.
@foobar8675 I was imagining something similar to this: once they’re split into train/valid, you then sample those lists. Or you do a wrapper around any split, so that any split function can be used, not just RandomSplitter. Something like the sketch below:
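Here’s a hypothetical sketch of that wrapper (`SubsampleSplitter` is my own name, not fastai2 API; it should work with any splitter that returns train/valid index lists):

```python
# Wrap any split function and sample from the train indices it
# hands back, leaving the valid set untouched.
import random
from fastai2.data.transforms import RandomSplitter

def SubsampleSplitter(splitter, train_pct=0.1):
    "Wrap `splitter`, keeping only `train_pct` of the training indices."
    def _inner(o):
        train_idx, valid_idx = splitter(o)
        keep = random.sample(list(train_idx),
                             int(len(train_idx) * train_pct))
        # NB: nothing here guarantees every label survives the
        # sampling, which is exactly the concern raised above
        return keep, list(valid_idx)
    return _inner

splitter = SubsampleSplitter(RandomSplitter(seed=42), train_pct=0.1)
```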
Hi muellerzr, hope all is well!
I built an image classifier model on Colab about five minutes ago, using the starter code for render.com as my baseline.
When trying to deploy the app locally I get the following error.
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "app/server.py", line 52, in <module>
    learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
  File "/opt/anaconda3/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
  File "app/server.py", line 45, in setup_learner
    raise RuntimeError(message)
RuntimeError:
This model was trained with an old version of fastai and will not work in a CPU environment.
Please update the fastai library in your training environment and export your model again.
I thought I saw a link where you had created a starter repo for fastai2 but haven’t been able to find it again.
My local machine does not have a GPU, but I have at least 70 different classifiers which run with no problem using fastai1-v3 on my local machine.
What do I need to change or configure to resolve this error?
This means that the model is being loaded in as CUDA when in fact you don’t have a GPU. How are you getting the model in? Via load_learner? If so, try doing as the suggestion says, and instead load in your model as:
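Presumably something along these lines (mirroring what the error message itself suggests; this assumes you’re on fastai2 and `path` points at your exported model):

```python
# Force the checkpoint onto the CPU when loading.
from fastai2.vision.all import load_learner

learn = load_learner(path, cpu=True)  # maps the weights onto the CPU
# or at the raw torch level:
# torch.load(path, map_location=torch.device('cpu'))
```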
Thanks for doing this. I’m mostly interested in NLP and tabular data these days, but I’m starting to watch the videos and catch up now, as all my work so far has been with version 1. Eagerly awaiting Block 2+3!