Fastai v2 chat

It’s likely one of the tests moved the file temporarily (to test that the dataset can be properly downloaded, for instance). In general, the safest way is to execute all tests with one worker (we run them in parallel for quick prototyping, and we’re used to those errors).
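For instance, with nbdev’s test runner, which fastai2’s notebook tests use (assuming your nbdev version exposes the n_workers flag; check nbdev_test_nbs --help):

nbdev_test_nbs --n_workers 1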

Yesterday it wasn’t showing any error, and now I’m constantly getting this one even if I fall back to the simplest version of DataLoaders:

My Kaggle directory structure: [screenshot: kaggle-tree]

path = Path("/kaggle/input/fruits/fruits-360_dataset/fruits-360")
fruits = ImageDataLoaders.from_folder(path, train='Training', valid='Test')
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-29-b35d1f01f506> in <module>
----> 1 fruits = ImageDataLoaders.from_folder(path, train='Training', valid='Test')

/opt/conda/lib/python3.6/site-packages/fastai2/vision/data.py in from_folder(cls, path, train, valid, valid_pct, seed, vocab, item_tfms, batch_tfms, **kwargs)

/opt/conda/lib/python3.6/site-packages/fastcore/foundation.py in _init(self, *args, **kwargs)
    148                 if isinstance(arg,MethodType): arg = MethodType(arg.__func__, self)
    149                 setattr(self, k, arg)
--> 150         old_init(self, *args, **kwargs)
    151     functools.update_wrapper(_init, old_init)
    152     cls.__init__ = use_kwargs(cls._methods)(_init)

/opt/conda/lib/python3.6/site-packages/fastai2/data/block.py in __init__(self, blocks, dl_type, getters, n_inp, item_tfms, batch_tfms, **kwargs)

/opt/conda/lib/python3.6/site-packages/fastai2/data/block.py in _merge_tfms(*tfms)

/opt/conda/lib/python3.6/site-packages/fastcore/foundation.py in map(self, f, *args, **kwargs)
    360              else f.format if isinstance(f,str)
    361              else f.__getitem__)
--> 362         return self._new(map(g, self))
    363 
    364     def filter(self, f, negate=False, **kwargs):

/opt/conda/lib/python3.6/site-packages/fastcore/foundation.py in _new(self, items, *args, **kwargs)
    313     @property
    314     def _xtra(self): return None
--> 315     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
    316     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)
    317     def copy(self): return self._new(self.items.copy())

/opt/conda/lib/python3.6/site-packages/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
     39             return x
     40 
---> 41         res = super().__call__(*((x,) + args), **kwargs)
     42         res._newchk = 0
     43         return res

/opt/conda/lib/python3.6/site-packages/fastcore/foundation.py in __init__(self, items, use_list, match, *rest)
    304         if items is None: items = []
    305         if (use_list is not None) or not _is_array(items):
--> 306             items = list(items) if use_list else _listify(items)
    307         if match is not None:
    308             if is_coll(match): match = len(match)

/opt/conda/lib/python3.6/site-packages/fastcore/foundation.py in _listify(o)
    240     if isinstance(o, list): return o
    241     if isinstance(o, str) or _is_array(o): return [o]
--> 242     if is_iter(o): return list(o)
    243     return [o]
    244 

/opt/conda/lib/python3.6/site-packages/fastcore/foundation.py in __call__(self, *args, **kwargs)
    206             if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
    207         fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 208         return self.fn(*fargs, **kwargs)
    209 
    210 # Cell

/opt/conda/lib/python3.6/site-packages/fastcore/utils.py in instantiate(t)
    367 def instantiate(t):
    368     "Instantiate `t` if it's a type, otherwise do nothing"
--> 369     return t() if isinstance(t, type) else t
    370 
    371 # Cell

/opt/conda/lib/python3.6/site-packages/fastcore/transform.py in __call__(cls, *args, **kwargs)
     36             getattr(cls,n).add(f)
     37             return f
---> 38         return super().__call__(*args, **kwargs)
     39 
     40     @classmethod

/opt/conda/lib/python3.6/site-packages/fastai2/data/transforms.py in __init__(self, div, div_mask, split_idx, as_item)

TypeError: __init__() got an unexpected keyword argument 'as_item'

Tried with fastai2 0.0.10 and 0.0.11


I just came here to comment about this: the pip version breaks, but the GitHub version works fine.
pip install git+https://github.com/fastai/fastai2.git --upgrade


Thanks, this trick really worked. But I had already tried doing:

git clone https://github.com/fastai/fastai2
cd fastai2 && pip install -e ".[dev]"

I wonder what’s the difference :thinking:

We made a release of fastcore, and we need to make a release of fastai2 to go along with it; doing that now.

This notebook is awesome!


Have you made a stable release? The error mentioned above is fixed when I install the latest version from GitHub, but not with this one from pip.

With the new fine_tune method, if we pass epochs=1, we are actually seeing each image twice. An epoch is defined as how many times to look at each image (known as the number of epochs) (from the draft of the book).
That makes naming that parameter weird. :smiley:

Is it though? Let’s look specifically at what fine_tune is doing:

def fine_tune(self:Learner, epochs, base_lr=1e-3, freeze_epochs=1, lr_mult=100,
              pct_start=0.3, div=5.0, **kwargs):
    "Fine tune with `freeze` for `freeze_epochs` then with `unfreeze` from `epochs` using discriminative LR"
    self.freeze()
    self.fit_one_cycle(freeze_epochs, slice(base_lr*2), pct_start=0.99, **kwargs)
    self.unfreeze()
    self.fit_one_cycle(epochs, slice(base_lr/lr_mult, base_lr), pct_start=pct_start, div=div, **kwargs)

So we can see that no matter what, we fit for at least two epochs, since we assume we want to fine-tune some transfer-learning model (the frozen epochs will always be at least one). Does this help?
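To make the arithmetic concrete (learn here is a hypothetical, already-built Learner; the counts follow directly from the body of fine_tune quoted above):

# freeze_epochs defaults to 1, so fine_tune(epochs) runs
# freeze_epochs frozen passes + epochs unfrozen passes in total.
learn.fine_tune(1)                   # 1 frozen + 1 unfrozen = 2 total epochs
learn.fine_tune(3, freeze_epochs=2)  # 2 frozen + 3 unfrozen = 5 total epochs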

So I was thinking the definition of an epoch is how many times we look at each image (is that correct?).
Whatever number we pass as epochs to fine_tune, we now end up seeing each image epochs + 1 times.

This depends on epochs, right?

Yes, an epoch is going over the dataset once from start to finish (and yes I changed my response there) :wink:


Your problem is in your version of fastprogress, I believe. Double-check you have the latest one.

  1. I tried installing with simple pip:
     pip install -q feather-format kornia pyarrow wandb nbdev fastprogress fastcore fastai2 --upgrade
     fastai2      0.0.11
     fastcore     0.1.14
     fastprogress 0.2.2
  2. I tried installing dev builds using:
     !pip install git+https://github.com/fastai/fastprogress.git git+https://github.com/fastai/fastai2.git git+https://github.com/fastai/fastcore.git --upgrade
     fastai2      0.0.12
     fastcore     0.1.14
     fastprogress 0.2.3

The issue still persists. You can find the notebook here

And with fastprogress master?

As in ?

As in installing it from GitHub directly.
(This probably won’t change anything since you are the only person having this bug AFAICT, so it’s some kind of environment issue I can’t pinpoint)

Edit: Sorry, you already tried that; I was confused while reading.

Looking at your notebook, you are missing the real error which is:

/usr/local/lib/python3.6/dist-packages/fastai2/callback/progress.py in begin_fit(self)
    100         "Prepare file with metric names."
--> 101         self.path.parent.mkdir(parents=True, exist_ok=True)
    102         self.file = (self.path/self.fname).open('a' if self.append else 'w')

AttributeError: 'str' object has no attribute 'parent'

It then causes weird things in the after_batch/after_fit events (which are always run). To catch the real error when looking at the stack trace, check whether there is something like

During handling of the above exception, another exception occurred:

You can then generally ignore whatever comes after that line and start looking at what is above it.
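A tiny standalone illustration (plain Python, nothing fastai-specific; the function name is made up):

def after_fit():
    # stand-in for a cleanup event that also fails
    raise RuntimeError("secondary error from cleanup")

try:
    "results".parent   # the real bug: str has no attribute 'parent'
except AttributeError:
    after_fit()        # fails too while handling the first error

Running this prints the original AttributeError first, then the “During handling of the above exception, another exception occurred:” line, then the RuntimeError; the traceback above that line is the one to debug.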

The bug is that CSVLogger does not convert a string into a Path object, so use Path objects for now and I’ll fix this later in fastai2.
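A minimal sketch of the workaround in the meantime (the dataset and learner setup are just illustrative; the point is wrapping the path in pathlib.Path instead of passing a plain string):

from pathlib import Path
from fastai2.vision.all import *

path = untar_data(URLs.MNIST_TINY)
dls = ImageDataLoaders.from_folder(path)
# Make sure the learner's `path` is a Path, not a str, so that
# CSVLogger's `self.path.parent.mkdir(...)` call works.
learn = cnn_learner(dls, resnet18, path=Path('.'), cbs=CSVLogger())
learn.fit(1)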


Oh, now I get it. I used to pass a Path object but somehow missed it this time. Thanks!

So what’s the ideal way of installing fastai2 as of now?

The GitHub install you did is the best way.

I was trying to use the ReduceLROnPlateau callback with a custom metric, but as you can see from the image below, the LR was reduced after epoch 1 even though 0.95 > 0.94 (so I aborted training).
Am I using the callback right, or am I missing something?
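In case a sketch helps while waiting for an answer: with fastai2’s tracker callbacks, monitor must name the metric exactly as it appears in the training output, and comp tells the callback which direction counts as an improvement. A minimal guess at the setup (learn and the metric name are illustrative):

import numpy as np
from fastai2.callback.tracker import ReduceLROnPlateau

# np.greater means "bigger is better", so the LR is reduced only when
# the monitored metric stops increasing for `patience` epochs.
cb = ReduceLROnPlateau(monitor='accuracy', comp=np.greater,
                       min_delta=0.01, patience=2)
learn.fit_one_cycle(10, cbs=cb)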