Fastai v2 chat

I might give that a go too

@muellerzr can you find what’s wrong here? Am I doing anything dumb to get this behaviour?

You haven’t given read access to your notebook, so no one can really help you until you do. Also, I wouldn’t @-mention the admins that way if I were you :wink:, it just makes a reply from them less likely.

Cheers,
Tendo

I don’t mind the @ at myself, but yes, we would like read/write access :slight_smile: (I’ll try to look this weekend @vijayabhaskar)

Sorry, I thought just clicking the share button allowed people to view the content. Here is the updated link.

I don’t usually @-mention admins, but I believe this might be a bug in fastai2 (if I’m not doing anything dumb), so I thought Sylvain should look into it.

Thanks!

Re-asking this question, which was previously posted in this topic:

How can I customize the batch sampling method? In metric learning approaches, we need some control over the number of positive/negative examples in a batch. Where can I define this logic?

Did you have a look here: https://dev.fast.ai/callback.data?

There are no docs yet, but maybe something like WeightedDL is what you’re looking for?

So we provide wgts for each class, and it’ll draw examples from that class with the given probability (wgts)?
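If I’m reading the code right, the weights are per item rather than per class, so class-level control means giving every item of a class the same weight. Here’s a minimal sketch with toy data (the Datasets and the weights are made up for illustration):

from fastai2.data.all import *
from fastai2.callback.data import *

# Toy Datasets over integers, just to show the mechanics.
dsets = Datasets(torch.arange(10).float())

# One weight per item: items with weight 0 are never drawn,
# heavier items are sampled more often within each batch.
wgts = [0.] * 5 + [1.] * 5
dls = dsets.weighted_dataloaders(wgts=wgts, bs=4)
xb = first(dls.train)  # batches should contain only items 5..9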

I’m not sure if this is a bug or just unintuitive behavior.
When I try to set up a learner with no weight decay, it doesn’t appear to take effect.

learn = cnn_learner(dls, resnet18, wd=0.0)

When I enter learn.opt_func in a notebook and run the cell, it gives me the following output:

<function fastai2.optimizer.Adam(params, lr, mom=0.9, sqr_mom=0.99, eps=1e-05, wd=0.01, decouple_wd=True)>

which suggests that the opt_func is still set up with wd=0.01.

Do I need to pass in learn.fit(wd=0.0) to truly fit without weight decay?

learn.wd is not used when the optimizer is created, but it is passed to the optimizer each time you call a fit method without an explicit wd. So don’t worry, it will fit without weight decay.

Good to know, thank you!

What’s the difference between passing wd when creating cnn_learner(...) vs. calling learn.fit(wd=...)? If they refer to the same thing, does the wd passed to learn.fit() override the other?

You might want to call fit with a different wd each time, so that’s why we have that option.

and as Sylvain pointed out:

So yes, if you pass wd to fit, it overrides.
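As a quick sketch of how this plays out in practice (assuming the dls and resnet18 from the snippet above):

from fastai2.vision.all import *

# learn.wd is forwarded to the optimizer at fit time, so creating the
# learner with wd=0.0 really does train without weight decay by default.
learn = cnn_learner(dls, resnet18, wd=0.0)
learn.fit_one_cycle(1)           # uses learn.wd -> no weight decay
learn.fit_one_cycle(1, wd=0.1)   # wd passed to fit overrides learn.wd for this call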

That makes it crystal clear. Thank you.

What is the correct way of exporting just the model from unet_learner?

I tried:

import dill
torch.save(learn.model, "unet-best-2.pth", pickle_module=dill)

I also tried without dill, and it throws an error when I run model = torch.load("unet-best-2.pth") in another project.

SOLUTION: using TorchScript to save the model did the trick!

Do you know any better approach?
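In case it helps others, here’s roughly what the TorchScript route can look like (a sketch; the dummy input shape is an assumption you’d adjust to your own data, and tracing can misbehave on models with data-dependent control flow):

import torch

# Trace the bare PyTorch model with a dummy input; the saved file then
# loads with plain torch.jit.load, no fastai2 install needed elsewhere.
learn.model.cpu().eval()
dummy = torch.randn(1, 3, 224, 224)  # adjust to your model's input size
scripted = torch.jit.trace(learn.model, dummy)
scripted.save("unet-best-2.pt")

# In the other project:
model = torch.jit.load("unet-best-2.pt")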

get_grid is not passing the add_vert parameter through to subplots.

After installing pytorch-nightly I am getting:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-1-1285ae11e650> in <module>
----> 1 from fastai2.basics import *
      2 from fastai2.vision import models
      3 from fastai2.vision.all import *
      4 from fastai2.metrics import *
      5 from fastai2.data.all import *

~/anaconda3/envs/seg1/lib/python3.7/site-packages/fastai2/basics.py in <module>
----> 1 from .data.all import *
      2 from .optimizer import *
      3 from .callback.core import *
      4 from .learner import *
      5 from .metrics import *

~/anaconda3/envs/seg1/lib/python3.7/site-packages/fastai2/data/all.py in <module>
----> 1 from ..torch_basics import *
      2 from .core import *
      3 from .load import *
      4 from .external import *
      5 from .transforms import *

~/anaconda3/envs/seg1/lib/python3.7/site-packages/fastai2/torch_basics.py in <module>
      2 from .imports import *
      3 from .torch_imports import *
----> 4 from .torch_core import *
      5 from .layers import *

~/anaconda3/envs/seg1/lib/python3.7/site-packages/fastai2/torch_core.py in <module>
    283             setattr(TensorBase, fn, get_f(fn))
    284 
--> 285 _patch_tb()
    286 
    287 # Cell

~/anaconda3/envs/seg1/lib/python3.7/site-packages/fastai2/torch_core.py in _patch_tb()
    279     for fn in dir(t):
    280         if fn in skips: continue
--> 281         f = getattr(t, fn)
    282         if isinstance(f, (MethodWrapperType, BuiltinFunctionType, BuiltinMethodType, MethodType, FunctionType)):
    283             setattr(TensorBase, fn, get_f(fn))

RuntimeError: imag is not implemented for tensors with non-complex dtypes.

I don’t believe fastai2 has been made to work with the nightly builds (I don’t even think it worked with nightly in v1?). @sgugger can comment on that more.

Sad. I have my learners exported, but I need to load them in pytorch-nightly so I can export the models again to TorchScript.
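One workaround that might sidestep the nightly import error (a sketch, assuming you can rebuild the same architecture in both environments): save only the weights in the working fastai2 env, then load them into a plain PyTorch module under nightly and script it there.

# In the working fastai2 environment: save just the weights.
import torch
torch.save(learn.model.state_dict(), "unet-weights.pth")

# Under pytorch-nightly: rebuild the same nn.Module (build_model here is
# hypothetical; it must produce the identical architecture), then load
# the weights and export to TorchScript as usual.
model = build_model()
model.load_state_dict(torch.load("unet-weights.pth"))
scripted = torch.jit.script(model)  # or torch.jit.trace with a dummy input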

Hmmm… I may try looking into this myself then for my fastinference library :slight_smile: (since if you need it, many other people probably do too). I won’t promise I’ll get it working, but I’ll try!