Fastai v2 chat

Can I ask for more custom types? :sweat_smile: :sweat_smile:

For example, it would be great if Tokenizer returned Tokens instead of L (I might have some transforms that I want to apply to Tokens).
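Something like this is what I have in mind (a rough sketch; the Tokens type and the transform are made up):

from fastcore.foundation import L
from fastcore.transform import Transform

class Tokens(L): pass   # hypothetical type for Tokenizer to return

class DropRare(Transform):
    # the annotation means this encodes only fires when the input is Tokens
    def encodes(self, toks:Tokens): return Tokens(t for t in toks if t != 'xxrare')

tok_tfm = DropRare()
# tok_tfm(Tokens(['hello','xxrare'])) -> Tokens(['hello'])
# a plain list passes through unchanged, since no encodes matches it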

I know it's super duper easy to add custom types in my own applications, but I feel this might be a good default?

Just a suggestion :smile:

[SOLVED] Solution at the bottom

Can I patch a metaclass method somehow?

I want to add a new attribute to all Transforms: listeners = L(). Other functions will modify this list, so I cannot use @patch_property.

I tried doing:

@patch
def __new__(cls:Transform, *args, **kwargs):
    res = super(Transform, cls).__new__(cls, *args, **kwargs)
    res.listeners = L()
    return res

This works for the simple case of instantiating a Transform directly, but fails for:

@Transform
def neg(x): return -x

with:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-8-34a6339c4188> in <module>
----> 1 @Transform
      2 def neg(x): return -x

~/git/fastcore/fastcore/transform.py in __call__(cls, *args, **kwargs)
     36             getattr(cls,n).add(f)
     37             return f
---> 38         return super().__call__(*args, **kwargs)
     39 
     40     @classmethod

<ipython-input-2-eece891b5c54> in __new__(cls, *args, **kwargs)
      1 @patch
      2 def __new__(cls:Transform, *args, **kwargs):
----> 3     res = super(Transform, cls).__new__(cls, *args, **kwargs)
      4     res.listeners = L()
      5     return res

TypeError: object.__new__() takes exactly one argument (the type to instantiate)

I also tried patching the metaclass by doing:

class _TfmMeta2(Transform.__class__):
    def __new__(*args, **kwargs):
        res = super().__new__(*args, **kwargs)
        res.listeners = L()
        return res
fastcore.transform.Transform = _TfmMeta2

But this failed:

Categorize.__class__
>>> fastcore.transform._TfmMeta

I guess this happens because the metaclass was already applied to Categorize when the library was imported, so rebinding the name afterwards doesn't change existing classes?


Note: I cannot do all of this in __init__ because it's not guaranteed that all subclasses of Transform will call super().__init__().


Solution:
So I was misunderstanding how __new__ works: object.__new__ only accepts the class to instantiate (the extra arguments are consumed by __init__ instead), so it should be like this:

@patch
def __new__(cls:Transform, *args, **kwargs):
    res = super(Transform, cls).__new__(cls)
    res.listeners = L()
    return res
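A quick check that this also covers the decorator case that failed before:

@Transform
def neg(x): return -x

assert isinstance(neg.listeners, L)  # the decorated function's Transform gets the attribute too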

This is super duper cool


I am trying to run my model on multiple GPUs, following the imagenette example from the fastai2 repository.

When trying to create the ctx object like below, it fails.

ctx = learn.parallel_ctx if gpu is None and n_gpu else learn.distrib_ctx

The error I see is:

AttributeError                            Traceback (most recent call last)
<ipython-input-25-a6bd7af1d238> in <module>
----> 1 ctx = learn.parallel_ctx if gpu is None and n_gpu else learn.distrib_ctx

AttributeError: 'Learner' object has no attribute 'parallel_ctx'

I am using fastai2 version 0.0.16. Can you help me figure out what I am missing here?
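For reference, the surrounding pattern in the imagenette example looks roughly like this (a sketch from memory; gpu and n_gpu come from the script's arguments, and the fit call is just illustrative):

from functools import partial
from fastai2.distributed import *   # patches parallel_ctx/distrib_ctx onto Learner

ctx = learn.parallel_ctx if gpu is None and n_gpu else learn.distrib_ctx
with partial(ctx, gpu)():           # enter the chosen context, then train as usual
    learn.fit_one_cycle(4)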

Have you tried it using the dev versions of fastcore and fastai2?


Nope, will try and update.

It's working. Thanks.

In case you missed it: a general question about architecture where I'd welcome community input here.


fastcore 0.1.17 with Ubuntu 19 VBox in Win10

nbdev_test_nbs
make: nbdev_test_nbs: Command not found
make: *** [Makefile:17: test] Error 127

I get this error when starting the test ("Command not found" presumably just means nbdev isn't installed in this environment). I started the notebook anyway, and it seemed to work fine until the fit_one_cycle cell:
AssertionError                            Traceback (most recent call last)
~/fastai2/fastai2/learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
    195             try:
--> 196                 self._do_begin_fit(n_epoch)
    197                 for epoch in range(n_epoch):

~/fastai2/fastai2/learner.py in _do_begin_fit(self, n_epoch)
    169     def _do_begin_fit(self, n_epoch):
--> 170         self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
    171 

~/fastai2/fastai2/learner.py in __call__(self, event_name)
    133 
--> 134     def __call__(self, event_name): L(event_name).map(self._call_one)
    135     def _call_one(self, event_name):

~/fastcore/fastcore/foundation.py in map(self, f, *args, **kwargs)
    374               else f.__getitem__)
--> 375         return self._new(map(g, self))
    376 

~/fastcore/fastcore/foundation.py in _new(self, items, *args, **kwargs)
    325     def _xtra(self): return None
--> 326     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
    327     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)

~/fastcore/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
     46 
---> 47         res = super().__call__(*((x,) + args), **kwargs)
     48         res._newchk = 0

~/fastcore/fastcore/foundation.py in __init__(self, items, use_list, match, *rest)
    316         if (use_list is not None) or not _is_array(items):
--> 317             items = list(items) if use_list else _listify(items)
    318         if match is not None:

~/fastcore/fastcore/foundation.py in _listify(o)
    252     if isinstance(o, str) or _is_array(o): return [o]
--> 253     if is_iter(o): return list(o)
    254     return [o]

~/fastcore/fastcore/foundation.py in __call__(self, *args, **kwargs)
    218         fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 219         return self.fn(*fargs, **kwargs)
    220 

~/fastai2/fastai2/learner.py in _call_one(self, event_name)
    136         assert hasattr(event, event_name)
--> 137         [cb(event_name) for cb in sort_by_run(self.cbs)]
    138 

~/fastai2/fastai2/learner.py in <listcomp>(.0)
    136         assert hasattr(event, event_name)
--> 137         [cb(event_name) for cb in sort_by_run(self.cbs)]
    138 

~/fastai2/fastai2/callback/core.py in __call__(self, event_name)
     23                 (self.run_valid and not getattr(self, 'training', False)))
---> 24         if self.run and _run: getattr(self, event_name, noop)()
     25         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit

~/fastai2/fastai2/callback/fp16.py in begin_fit(self)
     84     def begin_fit(self):
---> 85         assert self.dls.device.type == 'cuda', "Mixed-precision training requires a GPU, remove the call to_fp16"
     86         if self.learn.opt is None: self.learn.create_opt()

AssertionError: Mixed-precision training requires a GPU, remove the call to_fp16

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
<ipython-input-...> in <module>
----> 1 learn.fit_one_cycle(4)

~/fastcore/fastcore/utils.py in _f(*args, **kwargs)
    428         init_args.update(log)
    429         setattr(inst, 'init_args', init_args)
--> 430         return inst if to_return else f(*args, **kwargs)
    431     return _f
    432 

~/fastai2/fastai2/callback/schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
    111     scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
    112               'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
--> 113     self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
    114 
    115 # Cell

~/fastcore/fastcore/utils.py in _f(*args, **kwargs)
    428         init_args.update(log)
    429         setattr(inst, 'init_args', init_args)
--> 430         return inst if to_return else f(*args, **kwargs)
    431     return _f
    432 

~/fastai2/fastai2/learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
    204 
    205         except CancelFitException:             self('after_cancel_fit')
--> 206         finally:                               self('after_fit')
    207 
    208     def validate(self, ds_idx=1, dl=None, cbs=None):

~/fastai2/fastai2/learner.py in __call__(self, event_name)
    132     def ordered_cbs(self, event): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, event)]
    133 
--> 134     def __call__(self, event_name): L(event_name).map(self._call_one)
    135     def _call_one(self, event_name):
    136         assert hasattr(event, event_name)

~/fastcore/fastcore/foundation.py in map(self, f, *args, **kwargs)
    373               else f.format if isinstance(f,str)
    374               else f.__getitem__)
--> 375         return self._new(map(g, self))
    376 
    377     def filter(self, f, negate=False, **kwargs):

~/fastcore/fastcore/foundation.py in _new(self, items, *args, **kwargs)
    324     @property
    325     def _xtra(self): return None
--> 326     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
    327     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)
    328     def copy(self): return self._new(self.items.copy())

~/fastcore/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
     45             return x
     46 
---> 47         res = super().__call__(*((x,) + args), **kwargs)
     48         res._newchk = 0
     49         return res

~/fastcore/fastcore/foundation.py in __init__(self, items, use_list, match, *rest)
    315         if items is None: items = []
    316         if (use_list is not None) or not _is_array(items):
--> 317             items = list(items) if use_list else _listify(items)
    318         if match is not None:
    319             if is_coll(match): match = len(match)

~/fastcore/fastcore/foundation.py in _listify(o)
    251     if isinstance(o, list): return o
    252     if isinstance(o, str) or _is_array(o): return [o]
--> 253     if is_iter(o): return list(o)
    254     return [o]
    255 

~/fastcore/fastcore/foundation.py in __call__(self, *args, **kwargs)
    217         if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
    218         fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 219         return self.fn(*fargs, **kwargs)
    220 
    221 # Cell

~/fastai2/fastai2/learner.py in _call_one(self, event_name)
    135     def _call_one(self, event_name):
    136         assert hasattr(event, event_name)
--> 137         [cb(event_name) for cb in sort_by_run(self.cbs)]
    138 
    139     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

~/fastai2/fastai2/learner.py in <listcomp>(.0)
    135     def _call_one(self, event_name):
    136         assert hasattr(event, event_name)
--> 137         [cb(event_name) for cb in sort_by_run(self.cbs)]
    138 
    139     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

~/fastai2/fastai2/callback/core.py in __call__(self, event_name)
     22         _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or
     23                 (self.run_valid and not getattr(self, 'training', False)))
---> 24         if self.run and _run: getattr(self, event_name, noop)()
     25         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit
     26 

~/fastai2/fastai2/callback/progress.py in after_fit(self)
     37     def after_fit(self):
     38         if getattr(self, 'mbar', False):
---> 39             self.mbar.on_iter_end()
     40             delattr(self, 'mbar')
     41         self.learn.logger = self.old_logger

~/anaconda3/envs/fastai2/lib/python3.7/site-packages/fastprogress/fastprogress.py in on_iter_end(self)
    155         total_time = format_time(time.time() - self.main_bar.start_t)
    156         self.text = f'Total time: {total_time} <p>' + self.text
--> 157         self.out.update(HTML(self.text))
    158 
    159     def add_child(self, child):

AttributeError: 'NBMasterBar' object has no attribute 'out'

Please advise.

Try updating fastprogress. Also, when posting code, please surround it in three backticks at the top and bottom (these things between the quotation marks: "```"). It lets us read it better :slight_smile:

this is an example:

[image: screenshot of a code block wrapped in triple backticks]

fastprogress is already the newest, 0.2.2. Now I get the following error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-14-495233eaf2b4> in <module>
----> 1 learn.fit_one_cycle(4)

e:\fastcore\fastcore\utils.py in _f(*args, **kwargs)
    428         init_args.update(log)
    429         setattr(inst, 'init_args', init_args)
--> 430         return inst if to_return else f(*args, **kwargs)
    431     return _f
    432 

e:\fastai2\fastai2\callback\schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
    111     scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
    112               'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
--> 113     self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
    114 
    115 # Cell

e:\fastcore\fastcore\utils.py in _f(*args, **kwargs)
    428         init_args.update(log)
    429         setattr(inst, 'init_args', init_args)
--> 430         return inst if to_return else f(*args, **kwargs)
    431     return _f
    432 

e:\fastai2\fastai2\learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
    198                     try:
    199                         self.epoch=epoch;          self('begin_epoch')
--> 200                         self._do_epoch_train()
    201                         self._do_epoch_validate()
    202                     except CancelEpochException:   self('after_cancel_epoch')

e:\fastai2\fastai2\learner.py in _do_epoch_train(self)
    173         try:
    174             self.dl = self.dls.train;                        self('begin_train')
--> 175             self.all_batches()
    176         except CancelTrainException:                         self('after_cancel_train')
    177         finally:                                             self('after_train')

e:\fastai2\fastai2\learner.py in all_batches(self)
    151     def all_batches(self):
    152         self.n_iter = len(self.dl)
--> 153         for o in enumerate(self.dl): self.one_batch(*o)
    154 
    155     def one_batch(self, i, b):

e:\fastai2\fastai2\data\load.py in __iter__(self)
     96         self.randomize()
     97         self.before_iter()
---> 98         for b in _loaders[self.fake_l.num_workers==0](self.fake_l):
     99             if self.device is not None: b = to_device(b, self.device)
    100             yield self.after_batch(b)

E:\Anaconda3\envs\fastai2\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
    717             #     before it starts, and __del__ tries to join but will get:
    718             #     AssertionError: can only join a started process.
--> 719             w.start()
    720             self._index_queues.append(index_queue)
    721             self._workers.append(w)

E:\Anaconda3\envs\fastai2\lib\multiprocessing\process.py in start(self)
    110                'daemonic processes are not allowed to have children'
    111         _cleanup()
--> 112         self._popen = self._Popen(self)
    113         self._sentinel = self._popen.sentinel
    114         # Avoid a refcycle if the target function holds an indirect

E:\Anaconda3\envs\fastai2\lib\multiprocessing\context.py in _Popen(process_obj)
    221     @staticmethod
    222     def _Popen(process_obj):
--> 223         return _default_context.get_context().Process._Popen(process_obj)
    224 
    225 class DefaultContext(BaseContext):

E:\Anaconda3\envs\fastai2\lib\multiprocessing\context.py in _Popen(process_obj)
    320         def _Popen(process_obj):
    321             from .popen_spawn_win32 import Popen
--> 322             return Popen(process_obj)
    323 
    324     class SpawnContext(BaseContext):

E:\Anaconda3\envs\fastai2\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
     87             try:
     88                 reduction.dump(prep_data, to_child)
---> 89                 reduction.dump(process_obj, to_child)
     90             finally:
     91                 set_spawning_popen(None)

E:\Anaconda3\envs\fastai2\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
     58 def dump(obj, file, protocol=None):
     59     '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60     ForkingPickler(file, protocol).dump(obj)
     61 
     62 #

E:\Anaconda3\envs\fastai2\lib\site-packages\torch\multiprocessing\reductions.py in reduce_tensor(tensor)
    240          ref_counter_offset,
    241          event_handle,
--> 242          event_sync_required) = storage._share_cuda_()
    243         tensor_offset = tensor.storage_offset()
    244         shared_cache[handle] = StorageWeakRef(storage)

RuntimeError: cuda runtime error (801) : operation not supported at C:\w\1\s\tmp_conda_3.7_100118\conda\conda-bld\pytorch_1579082551706\work\torch/csrc/generic/StorageSharing.cpp:245

Thank you for your attention.

You had this error previously. Check that you have access to the GPU; you may need to remove the call to to_fp16.

torch.cuda.get_device_name() works and returns the name of my device, a 1050 Ti.
After removing to_fp16 I still get the same error.
Oops, sorry: now I am using Win10 directly and not VBox.
After removing to_fp16, the pets notebook's fit_one_cycle is working, but very slowly. I assume there is an issue with my GPU under Ubuntu: the GPU driver is gone, though it used to be there with Ubuntu 19 and I had tried CUDA in a notebook before. CUDA 10 is still there. Maybe I should try Ubuntu 20 later.

Installing drivers is always a nightmare for me as well.
I would recommend using Ubuntu and installing through pip. That is what typically works best for me with PyTorch-based frameworks such as fastai.

With Win10, the new runtime error is now (71). Any suggestions?

 cuda runtime error (71) : operation not supported at C:\w\1\s\tmp_conda_3.7_075911\conda\conda-bld\pytorch_1579075223148\work\torch/csrc/generic/StorageSharing.cpp:245
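For what it's worth, that StorageSharing.cpp error usually shows up when CUDA tensors are pickled to spawned DataLoader worker processes, which Windows doesn't support. A common workaround (an assumption on my part, not verified in this thread) is to keep data loading in the main process:

# the dls creation line is illustrative (dblock/source are placeholders);
# the key part is num_workers=0, since Windows uses spawn-based
# multiprocessing and can't share CUDA storages with worker processes
dls = dblock.dataloaders(source, num_workers=0)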

11 posts were split to a new topic: How to access the raw tensor inputs and outputs passed to the model?

This is definitely an installation problem. First try to get a working PyTorch with GPU support:

import torch
torch.cuda.is_available()

Maybe even run one of their examples if you still have an issue.

This has been bugging me for a day: I don't understand why L and TfmdLists give different results for a DataFrame. Steps:

Load the df:

source = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(source/'texts.csv')

Using TfmdLists gives us the first row of the df:

TfmdLists(df, [])[0]

Using L gives us the entire df:

L(df)[0]

I inspected the MRO of TfmdLists, and __getitem__ just goes directly to L, so I don't understand why they behave differently.

On your call to L, can you try setting use_list=True?

It's really great to use unique=True with show_batch to examine data augmentations on images of a single category. May I make a feature request to show a batch of a particular category? :slightly_smiling_face:

I see we're showing images of id=0 when unique=True; is it possible to look up the dictionary and find the index of a given category?
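Something like this lookup is what I mean (a rough sketch; first_idx_of is a hypothetical helper, and vocab.o2i follows fastai2's CategoryMap):

def first_idx_of(dls, category):
    # map the category name to its class id, then scan the training set
    target = dls.vocab.o2i[category]
    return next(i for i,(_,y) in enumerate(dls.train_ds) if int(y) == target)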

It just returns the name of a column:

[image: output showing the column name]
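For anyone hitting this later, here is what seems to be going on, based on reading fastcore's foundation.py (the asserts below sketch that understanding):

import pandas as pd
from fastcore.foundation import L

df = pd.DataFrame({'label':[0,1], 'text':['a','b']})

# By default, L treats anything array-like (including a DataFrame) as a
# single item, so L(df) is a one-element list and [0] is the whole df.
assert L(df)[0] is df

# TfmdLists passes use_list=None, which keeps the DataFrame itself as the
# items and indexes it with .iloc, so [0] is the first row. With
# use_list=True, list(df) is taken instead, and iterating a DataFrame
# yields its column labels, hence [0] is just the first column name.
assert L(df, use_list=True)[0] == 'label'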
