A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

Yes! I’ll give this a try in the morning!
Cheers mrfabulous1 :smiley::smiley: :smiley:


Also, for those curious about RAM usage: I have three ResNet models loaded in; memory spiked to ~114 MB during download etc., but it’s staying at a steady 57 MB once everything is set up :slight_smile:

Hi @muellerzr.

Just curious, since you seem to have waded deep into fastai v2: did you explore detectron2 + fastai2 models by any chance?
They look awesome and I’d like to use them in fastai2, but I’m not sure how best to leverage them. The topic has popped up several times in the forums, but I don’t know if someone is already experimenting with it.

The PointRend model looks really nice, for instance.

Detectron is extremely difficult to get set up… I tried to, but getting the environment right is a nightmare, so honestly I gave up.

I feel ya. Had similar issues :wink:

Might give it a second try though…


I’ve seen a few repositories on GitHub that got Mask R-CNN working with fastai (do a search and explore), but that’s the best I’ve got. If you get it working, please let us know! :slight_smile:


Not sure where to ask this, but when running 07_Super_Resolution on an AWS Deep Learning Ubuntu instance, I get an exception running this line

learn_gen = create_gen_learner()

and I narrowed it down to module.py line 576:

    def __getattr__(self, name):
        if '_parameters' in self.__dict__:
            _parameters = self.__dict__['_parameters']
            if name in _parameters:
                return _parameters[name]
        if '_buffers' in self.__dict__:
            _buffers = self.__dict__['_buffers']
            if name in _buffers:
                return _buffers[name]
        if '_modules' in self.__dict__:
            modules = self.__dict__['_modules']
            if name in modules:
                return modules[name]
        # this line
        raise AttributeError("'{}' object has no attribute '{}'".format(
            type(self).__name__, name))

# error message:
'Conv2d' object has no attribute 'weight'

but I don’t know enough to figure out what is going on. It works fine in Colab. Suggestions?

What are the exact versions in your setup? (i.e. PyTorch, fastai2, fastcore, etc.)

Here is what I got from a conda env export > environment.yml: https://gist.github.com/foobar8675/f6f352f9ddada60a45da6257b4ba949a

and here are what I think are the relevant libraries:

    - fastai2==0.0.11
    - fastcore==0.1.14
    - fastprogress==0.2.2
    - torch==1.4.0
    - torchvision==0.5.0

Oh, is torch 1.4 not supported?

No, that setup looks fine to me. I also can’t recreate the error on my machine, so I’m not sure :confused:

Thank you for looking. Off topic, but I can’t find a tutorial on how you got your imports into Colab, for example https://github.com/muellerzr/Practical-Deep-Learning-for-Coders-2.0/tree/master/Computer%20Vision/imports, such that you can access them from a notebook. If you know of one, can you point me to it?

Hi @muellerzr !

I have run the code from your lecture on my own dataset for object detection. However, I am unable to get predictions from the model. It gives me the following error:

learn.get_preds()


---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
~/anaconda3/lib/python3.6/site-packages/fastai2/torch_core.py in to_concat(xs, dim)
    216     #   in this case we return a big list
--> 217     try:    return retain_type(torch.cat(xs, dim=dim), xs[0])
    218     except: return sum([L(retain_type(o_.index_select(dim, tensor(i)).squeeze(dim), xs[0])

TypeError: expected Tensor as element 0 in argument 0, but got int

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input-54-d0be1d9a8327> in <module>()
----> 1 learn.get_preds()

~/anaconda3/lib/python3.6/site-packages/fastai2/learner.py in get_preds(self, ds_idx, dl, with_input, with_decoded, with_loss, act, inner, **kwargs)
    202             self(event.begin_epoch if inner else _before_epoch)
    203             self._do_epoch_validate(dl=dl)
--> 204             self(event.after_epoch if inner else _after_epoch)
    205             if act is None: act = getattr(self.loss_func, 'activation', noop)
    206             res = cb.all_tensors()

~/anaconda3/lib/python3.6/site-packages/fastai2/learner.py in __call__(self, event_name)
    106     def ordered_cbs(self, cb_func): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
    107 
--> 108     def __call__(self, event_name): L(event_name).map(self._call_one)
    109     def _call_one(self, event_name):
    110         assert hasattr(event, event_name)

~/anaconda3/lib/python3.6/site-packages/fastcore/foundation.py in map(self, f, *args, **kwargs)
    360              else f.format if isinstance(f,str)
    361              else f.__getitem__)
--> 362         return self._new(map(g, self))
    363 
    364     def filter(self, f, negate=False, **kwargs):

~/anaconda3/lib/python3.6/site-packages/fastcore/foundation.py in _new(self, items, *args, **kwargs)
    313     @property
    314     def _xtra(self): return None
--> 315     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
    316     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)
    317     def copy(self): return self._new(self.items.copy())

~/anaconda3/lib/python3.6/site-packages/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
     39             return x
     40 
---> 41         res = super().__call__(*((x,) + args), **kwargs)
     42         res._newchk = 0
     43         return res

~/anaconda3/lib/python3.6/site-packages/fastcore/foundation.py in __init__(self, items, use_list, match, *rest)
    304         if items is None: items = []
    305         if (use_list is not None) or not _is_array(items):
--> 306             items = list(items) if use_list else _listify(items)
    307         if match is not None:
    308             if is_coll(match): match = len(match)

~/anaconda3/lib/python3.6/site-packages/fastcore/foundation.py in _listify(o)
    240     if isinstance(o, list): return o
    241     if isinstance(o, str) or _is_array(o): return [o]
--> 242     if is_iter(o): return list(o)
    243     return [o]
    244 

~/anaconda3/lib/python3.6/site-packages/fastcore/foundation.py in __call__(self, *args, **kwargs)
    206             if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
    207         fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 208         return self.fn(*fargs, **kwargs)
    209 
    210 # Cell

~/anaconda3/lib/python3.6/site-packages/fastai2/learner.py in _call_one(self, event_name)
    109     def _call_one(self, event_name):
    110         assert hasattr(event, event_name)
--> 111         [cb(event_name) for cb in sort_by_run(self.cbs)]
    112 
    113     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

~/anaconda3/lib/python3.6/site-packages/fastai2/learner.py in <listcomp>(.0)
    109     def _call_one(self, event_name):
    110         assert hasattr(event, event_name)
--> 111         [cb(event_name) for cb in sort_by_run(self.cbs)]
    112 
    113     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

~/anaconda3/lib/python3.6/site-packages/fastai2/callback/core.py in __call__(self, event_name)
     21         _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or
     22                (self.run_valid and not getattr(self, 'training', False)))
---> 23         if self.run and _run: getattr(self, event_name, noop)()
     24         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit
     25 

~/anaconda3/lib/python3.6/site-packages/fastai2/callback/core.py in after_fit(self)
     93         "Concatenate all recorded tensors"
     94         if self.with_input:     self.inputs  = detuplify(to_concat(self.inputs, dim=self.concat_dim))
---> 95         if not self.save_preds: self.preds   = detuplify(to_concat(self.preds, dim=self.concat_dim))
     96         if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim))
     97         if self.with_loss:      self.losses  = to_concat(self.losses)

~/anaconda3/lib/python3.6/site-packages/fastai2/torch_core.py in to_concat(xs, dim)
    211 def to_concat(xs, dim=0):
    212     "Concat the element in `xs` (recursively if they are tuples/lists of tensors)"
--> 213     if is_listy(xs[0]): return type(xs[0])([to_concat([x[i] for x in xs], dim=dim) for i in range_of(xs[0])])
    214     if isinstance(xs[0],dict):  return {k: to_concat([x[k] for x in xs], dim=dim) for k in xs.keys()}
    215     #We may receives xs that are not concatenatable (inputs of a text classifier for instance),

~/anaconda3/lib/python3.6/site-packages/fastai2/torch_core.py in <listcomp>(.0)
    211 def to_concat(xs, dim=0):
    212     "Concat the element in `xs` (recursively if they are tuples/lists of tensors)"
--> 213     if is_listy(xs[0]): return type(xs[0])([to_concat([x[i] for x in xs], dim=dim) for i in range_of(xs[0])])
    214     if isinstance(xs[0],dict):  return {k: to_concat([x[k] for x in xs], dim=dim) for k in xs.keys()}
    215     #We may receives xs that are not concatenatable (inputs of a text classifier for instance),

~/anaconda3/lib/python3.6/site-packages/fastai2/torch_core.py in to_concat(xs, dim)
    211 def to_concat(xs, dim=0):
    212     "Concat the element in `xs` (recursively if they are tuples/lists of tensors)"
--> 213     if is_listy(xs[0]): return type(xs[0])([to_concat([x[i] for x in xs], dim=dim) for i in range_of(xs[0])])
    214     if isinstance(xs[0],dict):  return {k: to_concat([x[k] for x in xs], dim=dim) for k in xs.keys()}
    215     #We may receives xs that are not concatenatable (inputs of a text classifier for instance),

~/anaconda3/lib/python3.6/site-packages/fastai2/torch_core.py in <listcomp>(.0)
    211 def to_concat(xs, dim=0):
    212     "Concat the element in `xs` (recursively if they are tuples/lists of tensors)"
--> 213     if is_listy(xs[0]): return type(xs[0])([to_concat([x[i] for x in xs], dim=dim) for i in range_of(xs[0])])
    214     if isinstance(xs[0],dict):  return {k: to_concat([x[k] for x in xs], dim=dim) for k in xs.keys()}
    215     #We may receives xs that are not concatenatable (inputs of a text classifier for instance),

~/anaconda3/lib/python3.6/site-packages/fastai2/torch_core.py in to_concat(xs, dim)
    211 def to_concat(xs, dim=0):
    212     "Concat the element in `xs` (recursively if they are tuples/lists of tensors)"
--> 213     if is_listy(xs[0]): return type(xs[0])([to_concat([x[i] for x in xs], dim=dim) for i in range_of(xs[0])])
    214     if isinstance(xs[0],dict):  return {k: to_concat([x[k] for x in xs], dim=dim) for k in xs.keys()}
    215     #We may receives xs that are not concatenatable (inputs of a text classifier for instance),

~/anaconda3/lib/python3.6/site-packages/fastai2/torch_core.py in <listcomp>(.0)
    211 def to_concat(xs, dim=0):
    212     "Concat the element in `xs` (recursively if they are tuples/lists of tensors)"
--> 213     if is_listy(xs[0]): return type(xs[0])([to_concat([x[i] for x in xs], dim=dim) for i in range_of(xs[0])])
    214     if isinstance(xs[0],dict):  return {k: to_concat([x[k] for x in xs], dim=dim) for k in xs.keys()}
    215     #We may receives xs that are not concatenatable (inputs of a text classifier for instance),

~/anaconda3/lib/python3.6/site-packages/fastai2/torch_core.py in to_concat(xs, dim)
    217     try:    return retain_type(torch.cat(xs, dim=dim), xs[0])
    218     except: return sum([L(retain_type(o_.index_select(dim, tensor(i)).squeeze(dim), xs[0])
--> 219                           for i in range_of(o_)) for o_ in xs], L())
    220 
    221 # Cell

~/anaconda3/lib/python3.6/site-packages/fastai2/torch_core.py in <listcomp>(.0)
    217     try:    return retain_type(torch.cat(xs, dim=dim), xs[0])
    218     except: return sum([L(retain_type(o_.index_select(dim, tensor(i)).squeeze(dim), xs[0])
--> 219                           for i in range_of(o_)) for o_ in xs], L())
    220 
    221 # Cell

~/anaconda3/lib/python3.6/site-packages/fastcore/utils.py in range_of(x)
    162 def range_of(x):
    163     "All indices of collection `x` (i.e. `list(range(len(x)))`)"
--> 164     return list(range(len(x)))
    165 
    166 # Cell

TypeError: object of type 'int' has no len()

Can you tell me what I did wrong?

You can do a !pip list, @foobar8675.

Oh, sorry, I wasn’t clear. I wasn’t referring to pip modules, but to the files you have in your GitHub repo under imports/, such as:

inference.py
metrics.py
model.py
utils.py

Thanks for posting these. Watching them in preparation for the coming weeks.

@foobar8675 git clone the repository, then just cd until you’re just outside the imports directory (IIRC), and then you can do from imports import * (or go one file lower and import directly).
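Roughly, in a Colab cell it looks something like this (just a sketch; the repo URL is from the earlier post, and the exact folder names/import style may differ):

# Sketch: clone the course repo and import the helper files from imports/
!git clone https://github.com/muellerzr/Practical-Deep-Learning-for-Coders-2.0.git
%cd "Practical-Deep-Learning-for-Coders-2.0/Computer Vision"

# either import everything from the imports folder...
from imports import *
# ...or go one file lower and import a module directly, e.g.:
# from imports.utils import *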


I’ll attempt to look at this later this weekend, but that’s not really much information to go off of. See here for a good way to report issues:

https://forums.fast.ai/t/how-to-debug-your-code-and-ask-for-help-with-fastai-v2/64196/3

In 07_Super_Resolution, in the get_dls function, there is this line

dls.c = 3 # For 3 channel image

which I don’t follow. Can you elaborate on why this is needed?

It’s for cnn_learner (it relies on dls.c). Try running without it and report back what happens :slight_smile: (specifically, look at the last few layers of the model).
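To see the idea in isolation, here’s a rough sketch with a plain classification setup (not the super-resolution notebook; PETS and resnet18 are just placeholders): the learner factory reads dls.c to size the final layer, and in the super-resolution DataBlock there is no CategoryBlock to infer it from, so it has to be set by hand.

from fastai2.vision.all import *

# Sketch: cnn_learner sizes the final layer of its head from dls.c
path = untar_data(URLs.PETS)/'images'
dls = ImageDataLoaders.from_name_re(
    path, get_image_files(path), pat=r'(.+)_\d+.jpg$',
    item_tfms=Resize(224), bs=8)
print(dls.c)                # number of classes the head will predict
learn = cnn_learner(dls, resnet18)
print(learn.model[-1][-1])  # last layer of the head: out_features == dls.c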

I have been trying to load this dataset, https://www.kaggle.com/msheriey/104-flowers-garden-of-eden, which is in this format:

- train
  - rose
  - daffodil
  - (other class names)
- val
  - (class names)

In order to use a fastai2 data loader, is this code correct?

DATASET_DIR = '/kaggle/input/104-flowers-garden-of-eden/jpeg-512x512'
TRAIN_DIR  = DATASET_DIR + '/train'
VAL_DIR  = DATASET_DIR + '/val'
TEST_DIR  = DATASET_DIR + '/test'

flower_path = Path(DATASET_DIR)
items = get_image_files(TRAIN_DIR)

flowers = DataBlock(blocks=(ImageBlock, CategoryBlock),
                   get_items=get_image_files,
                   splitter=RandomSplitter(),
                   get_y=items,
                   item_tfms=Resize(460),
                   batch_tfms=aug_transforms(size=224, max_rotate=30, min_scale=0.75))

dls = flowers.dataloaders(TRAIN_DIR,  bs=32)
dls.show_batch(max_n=9, figsize=(6,7))
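
For comparison, this is the other variant I sketched out, using parent_label since each class name is the parent folder of its images (not sure if this one is right either):

flowers = DataBlock(blocks=(ImageBlock, CategoryBlock),
                    get_items=get_image_files,
                    splitter=RandomSplitter(),
                    get_y=parent_label,   # label = parent folder name (rose, daffodil, ...)
                    item_tfms=Resize(460),
                    batch_tfms=aug_transforms(size=224, max_rotate=30, min_scale=0.75))

dls = flowers.dataloaders(TRAIN_DIR, bs=32)
dls.show_batch(max_n=9, figsize=(6,7))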