Fastai v2 chat

You need to look at the torchvision docs:

torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False, progress=True, num_classes=91, pretrained_backbone=True, **kwargs)
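
For reference, a minimal sketch of instantiating it (COCO-pretrained weights, plain torchvision, nothing fastai-specific yet):

import torchvision

# 91 classes is the COCO default; to train on your own data you would pass a
# different num_classes (and typically pretrained=False, pretrained_backbone=True).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True, num_classes=91)
model.eval()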

I am going to try torchvision models with fastai2 in a few days. I can let you know then!

Thank you for pointing that out; my ability to miss the obvious amazes me. BTW, the link didn’t work but this should: Torchvision models. I would love to hear about your experience.


Sorry, I copied a line of code as the URL :joy:

I’ll let you know when I try it! I think I will look at it on Monday!

@jeremy apologies for bothering you but I noticed that the code for the front-page fastai2 article (on fast.ai) includes databunch several times and I tried one code snippet that didn’t work as a result.

In the COCO dataset section I think the line should be changed from dls = coco.databunch(...) to dls = coco.dataloaders(...) and I’m guessing similar refactoring is needed for the other snippets. I mention it because the article says it is specifically about v2.
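
I don’t have the article open right now, so this is only a sketch of the change I mean (the coco DataBlock and its getters here are a stand-in for whatever the article actually defines):

from fastai2.vision.all import *

coco_source = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco_source/'train.json')
lbl = dict(zip(images, lbl_bbox))

coco = DataBlock(blocks=(ImageBlock, BBoxBlock, BBoxLblBlock),
                 get_items=get_image_files,
                 splitter=RandomSplitter(),
                 getters=[noop, lambda o: lbl[o.name]],
                 n_inp=1)

dls = coco.dataloaders(coco_source)   # was: coco.databunch(coco_source)
dls.show_batch(max_n=9)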

John

What is the easiest way of getting an item from a DataLoader with the transforms applied?

Sorry if I misunderstood the question, but dls.show_batch() should show the images with the transforms applied. I guess you could specify p=1 if you have a RandTransform that you want to see in action.
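
For example (a minimal sketch; dblock and path stand in for your own DataBlock and data source):

from fastai2.vision.all import *

# Setting the probabilities to 1 makes the affine/lighting RandTransforms fire
# on every image, so show_batch shows exactly how they behave.
dls = dblock.dataloaders(path, batch_tfms=aug_transforms(p_affine=1., p_lighting=1.))
dls.show_batch(max_n=9)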

Has anybody tried to do model ensembling in fastai2?
I found @muellerzr's notebook with tabular data that averages the predictions. I was thinking more of merging the models to give a single prediction rather than averaging them.
I tried some PyTorch approaches following this thread but have been unsuccessful so far.

In practice, I am able to replace the last linear layer of a resnet34 using list(modelA.model.children())[1][8] = nn.Identity(), but I am not able to build the ensemble model for prediction after this.
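
For context, this is roughly the module I am trying to end up with, following that thread (modelA/modelB stand for my two models with their last linear layer swapped for nn.Identity(), and the feature sizes are placeholders):

import torch
import torch.nn as nn

class Ensemble(nn.Module):
    "Concatenate the features of two headless models and learn a single new head."
    def __init__(self, modelA, modelB, feats_a, feats_b, n_out):
        super().__init__()
        self.modelA, self.modelB = modelA, modelB
        self.head = nn.Linear(feats_a + feats_b, n_out)

    def forward(self, x):
        # Both models see the same input; their headless outputs are
        # concatenated and passed through one shared linear head.
        fa = self.modelA(x)
        fb = self.modelB(x)
        return self.head(torch.cat([fa, fb], dim=1))

# e.g. ensemble = Ensemble(learnA.model, learnB.model, 512, 512, dls.c)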

Any ideas?

I need to assign the image, with the transforms applied, to a variable.
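
i.e. something like this (just a sketch, assuming dls is the DataLoaders I already built):

xb, yb = dls.one_batch()   # one batch with the item and batch transforms applied
img = xb[0]                # first transformed image tensor in the batch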

This is probably a dumb question, but how do I save my model as fp32 if I trained it with fp16?

EDIT: I solved my question by doing

learn_inf = load_learner('drive/My Drive/pkls/export.pkl')
learn_inf = learn_inf.to_fp32()  # convert the weights back to full precision
learn_inf.export()

You can do learn.to_fp32()


Thank you! I saw this just after I edited my post. Thank you for “a walk with fastai2” as well.

On training with a custom dataset: I am not able to train fastai2 on a computer vision task with my custom dataset, following this tutorial.

I get an error when I run the following:

learn = cnn_learner(dls, resnet50, metrics=partial(accuracy_multi, thresh=0.5))
learn.lr_find()

Hello!

I am having a hard time trying to make weighted_dataloaders work. @sgugger, @muellerzr any ideas? Here is the code:

tfms = [[attrgetter("filename"), PILImage.create],
        [attrgetter("target"),Categorize()]]

splits=ColSplitter('is_valid')(df)

dsets = Datasets(df, tfms, splits=splits)

dls = dsets.weighted_dataloaders(wgts=df['wgt'].tolist(),bs=8, source=df,
                        after_item = [Resize(528, method='squish'), ToTensor()],
                        after_batch= [IntToFloatTensor(), 
                                      *aug_transforms(size=528, 
                                                      do_flip=True,
                                                      max_rotate=15,
                                                      max_zoom=1.1,
                                                      max_lighting=0.3,
                                                      max_warp=0.0,
                                                      p_affine=1.0,
                                                      p_lighting=1.0
                                                      ), 
                                      Normalize.from_stats(*imagenet_stats)]
                       )

The code works with a normal dataloader, without weights, but with weighted_dataloaders I always get this error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-31-90634fcc3c9e> in <module>
----> 1 dls.show_batch()

/srv/conda/envs/saturn/lib/python3.7/site-packages/fastai2/data/core.py in show_batch(self, b, max_n, ctxs, show, unique, **kwargs)
     91             old_get_idxs = self.get_idxs
     92             self.get_idxs = lambda: Inf.zeros
---> 93         if b is None: b = self.one_batch()
     94         if not show: return self._pre_show_batch(b, max_n=max_n)
     95         show_batch(*self._pre_show_batch(b, max_n=max_n), ctxs=ctxs, max_n=max_n, **kwargs)

/srv/conda/envs/saturn/lib/python3.7/site-packages/fastai2/data/load.py in one_batch(self)
    129     def one_batch(self):
    130         if self.n is not None and len(self)==0: raise ValueError(f'This DataLoader does not contain any batches')
--> 131         with self.fake_l.no_multiproc(): res = first(self)
    132         if hasattr(self, 'it'): delattr(self, 'it')
    133         return res

/srv/conda/envs/saturn/lib/python3.7/site-packages/fastcore/utils.py in first(x)
    174 def first(x):
    175     "First element of `x`, or None if missing"
--> 176     try: return next(iter(x))
    177     except StopIteration: return None
    178 

/srv/conda/envs/saturn/lib/python3.7/site-packages/fastai2/data/load.py in __iter__(self)
     95         self.randomize()
     96         self.before_iter()
---> 97         for b in _loaders[self.fake_l.num_workers==0](self.fake_l):
     98             if self.device is not None: b = to_device(b, self.device)
     99             yield self.after_batch(b)

/srv/conda/envs/saturn/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __init__(self, loader)
    379 
    380         self._dataset_fetcher = _DatasetKind.create_fetcher(
--> 381             self._dataset_kind, self._dataset, self._auto_collation, self._collate_fn, self._drop_last)
    382 
    383     def _next_data(self):

/srv/conda/envs/saturn/lib/python3.7/site-packages/torch/utils/data/dataloader.py in create_fetcher(kind, dataset, auto_collation, collate_fn, drop_last)
     39             return _utils.fetch._MapDatasetFetcher(dataset, auto_collation, collate_fn, drop_last)
     40         else:
---> 41             return _utils.fetch._IterableDatasetFetcher(dataset, auto_collation, collate_fn, drop_last)
     42 
     43 

/srv/conda/envs/saturn/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in __init__(self, dataset, auto_collation, collate_fn, drop_last)
     19     def __init__(self, dataset, auto_collation, collate_fn, drop_last):
     20         super(_IterableDatasetFetcher, self).__init__(dataset, auto_collation, collate_fn, drop_last)
---> 21         self.dataset_iter = iter(dataset)
     22 
     23     def fetch(self, possibly_batched_index):

/srv/conda/envs/saturn/lib/python3.7/site-packages/fastai2/data/load.py in __iter__(self)
     25         store_attr(self, 'd,pin_memory,num_workers,timeout')
     26 
---> 27     def __iter__(self): return iter(self.d.create_batches(self.d.sample()))
     28 
     29     @property

/srv/conda/envs/saturn/lib/python3.7/site-packages/fastai2/data/load.py in sample(self)
     89 
     90     def sample(self):
---> 91         idxs = self.get_idxs()
     92         return (b for i,b in enumerate(idxs) if i//(self.bs or 1)%self.nw==self.offs)
     93 

/srv/conda/envs/saturn/lib/python3.7/site-packages/fastai2/callback/data.py in get_idxs(self)
     23         if self.n==0: return []
     24         if not self.shuffle: return super().get_idxs()
---> 25         return list(np.random.choice(self.n, self.n, p=self.wgts))
     26 
     27 # Cell

mtrand.pyx in numpy.random.mtrand.RandomState.choice()

ValueError: 'a' and 'p' must have same size

@sgugger at the beginning of the fastai2.vision.learner code on GitHub, line 12 is duplicated:

from . import models

fastbook, 04_mnist_basics. When I run:

#id gradient_descent
#caption The gradient descent process
#alt Graph showing the steps for Gradient Descent
gv('''
init->predict->loss->gradient->step->stop
step->predict[label=repeat]
''')

I get the following error:

NameError: name 'gv' is not defined

Any suggestions? Thanks.

Did you import the utils file?

Thanks, it’s fixed now.


I managed to make the segmentation one work as follows:

However, when I tried to change the backbone to resnet34 I got: NotImplementedError: Dilation > 1 not supported in BasicBlock :joy: (presumably because resnet18/34 use BasicBlock, which doesn't support the dilation the segmentation model requests).

I think so.

#hide
from fastai2.vision.all import *
from utils import *
matplotlib.rc('image', cmap='Greys')

Something else missing?

Hi!

I’m trying to use the WandbCallback in a TabularLearner. However, I’m having the following errors when calling fit_one_cycle:

Could not set wandb config input dimensions
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
    187             try:
--> 188                 self._do_begin_fit(n_epoch)
    189                 for epoch in range(n_epoch):

29 frames
AttributeError: tfms

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/fastai2/callback/wandb.py in after_fit(self)
     83     def after_fit(self):
     84         self.run = True
---> 85         if self.log_preds: self.remove_cb(self.learn.fetch_preds)
     86 
     87     def _log_config(self):

AttributeError: 'TabularLearner' object has no attribute 'fetch_preds'

Any ideas? Thank you for your awesome help!