Fastai v2 chat

When trying to install fastcore on Win10 WSL, I also got the same No module named ‘fastcore.all’ error.
Then I tried the fastcore editable install and got this error:

ERROR: ipython 7.9.0 has requirement prompt-toolkit<2.1.0,>=2.0.0, but you’ll have prompt-toolkit 1.0.18 which is incompatible.

I tried to install fastai2 with both conda and the editable install, and fastcore with both pip from GitHub and the editable version.
What is the currently recommended way to install in a fresh conda environment?

This is in the latest version of fastcore (v0.0.3 on PyPI), so if an editable install doesn’t work, the pip one should.

Thanks, I tried the pip install (0.0.3) but I still have issues with the Jupyter kernel (I think it’s related to the prompt-toolkit version).
I want to start from a fresh new environment. Should I do:

git clone
cd fastai2
conda env create -f environment.yml
source activate fastai2

Then -

pip install packaging
pip install -e .[dev]

followed by -

pip install fastcore

I’m probably missing something; I’ve already tried to install it about 10 times today.

I’m trying to set up DeViSE at the moment. How would I extract the y’s from my databunch? (It’s no longer dbunch.valid_ds.y)

I expect something like dbunch.valid_ds.itemgot(1) should do it, if your dataset contains an L. Otherwise use a list comprehension.
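For anyone unsure what the two suggestions mean in practice, here is a minimal plain-Python sketch (the dataset of tuples is a hypothetical stand-in; a fastai `L` behaves like a list here):

```python
# Hypothetical stand-in for a dataset of (x, y) tuples.
valid_ds = [("img0", 0), ("img1", 1), ("img2", 0)]

# itemgot(1) on an L collects element 1 of every item; the plain-Python
# equivalent is the list comprehension:
ys = [item[1] for item in valid_ds]
```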

It wound up being ys = dbunch.valid_ds.itemgot[1](). itemgot returns an L of both the x’s and the y’s (unless that’s not the intended behavior and it should work the way you described). Still not through all the headaches, but progress :slight_smile:

Here was my issue: if I try using a pathlib Path, I get this stack trace:

/usr/local/lib/python3.6/dist-packages/numpy/lib/ in save(file, arr, allow_pickle, fix_imports)
    540         arr = np.asanyarray(arr)
    541         format.write_array(fid, arr, allow_pickle=allow_pickle,
--> 542                            pickle_kwargs=pickle_kwargs)
    543     finally:
    544         if own_fid:

/usr/local/lib/python3.6/dist-packages/numpy/lib/ in write_array(fp, array, version, allow_pickle, pickle_kwargs)
    641     """
    642     _check_version(version)
--> 643     _write_array_header(fp, header_data_from_array_1_0(array), version)
    645     if array.itemsize == 0:

/usr/local/lib/python3.6/dist-packages/numpy/lib/ in _write_array_header(fp, d, version)
    415     else:
    416         header = _wrap_header(header, version)
--> 417     fp.write(header)
    419 def write_array_header_1_0(fp, d):

/usr/local/lib/python3.6/dist-packages/fastcore/ in write(self, txt, encoding)
    429     "Write `txt` to `self`, creating directories as needed"
    430     self.parent.mkdir(parents=True,exist_ok=True)
--> 431         with self.open('w', encoding=encoding) as f: f.write(txt)
    433 #Cell

TypeError: write() argument must be str, not bytes

To recreate this, do the following:

ys = dbunch.valid_ds.itemgot[1]()
ys = ys[0:len(ys)].stack().numpy()
np.save(path/'val_lbl.npy', ys)

Also if there is a cleaner way to do the ys let me know :slight_smile:

If I just do'val_lbl.npy', ys) it will work (I’m doing this in the meantime)

When I use “learn.load()”, I get “No module named ‘’”. This happened after I installed “fastai2” and “fastcore”.

You need an editable install for both at this stage, since things are moving fast.

I’m still getting this error while trying to install the editable fastai2

ERROR: ipython 7.10.1 has requirement prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0, but you’ll have prompt-toolkit 1.0.18 which is incompatible.

The pip fastcore installation works without any errors, but when I try to run the tests I get tons of errors.

For example:

  1. UserWarning: get_ipython_dir has moved to the IPython.paths module since IPython 4.0.
  2. /python3.7/site-packages/", line 15, in
  3. ModuleNotFoundError: No module named ‘prompt_toolkit.formatted_text’
    I’m getting the same last error when I try to launch the Jupyter Notebook.

Just to make sure: do I still need to conda install the dependencies from the environment.yml file before running pip install -e .[dev] for fastai2?

I think this is due to the pip dependency from nbdev on jupyter, which I removed, so it should work now.
If the error persists, updating your jupyter (conda update jupyter) should normally solve it.


Great, thanks!
Now everything is working.



I am trying to train using grayscale images. Is this available in v2? If so, how do I do it? Thanks.

Following is my sample code:

camvid = DataBlock(blocks=(ImageBlock, ImageBlock(cls=PILMask)),
                   get_y=lambda o: path/'labels'/f'{o.stem}_P{o.suffix}') 
dbunch = camvid.databunch(path/"images", bs=8, path=path,
                          batch_tfms=[*aug_transforms(size=(360,480)), Normalize.from_stats(*imagenet_stats)])
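A grayscale image is just a single channel, and in v2 the usual route is to pass a single-channel image class to ImageBlock (PILImageBW, though check the current name in the API). As a library-free sketch of what the conversion amounts to, grayscale is a weighted sum over the RGB channel axis:

```python
import numpy as np

# Library-free sketch: grayscale = weighted sum over the channel axis,
# using ITU-R BT.601 luma weights. In fastai2 this is handled by the
# image class (e.g. PILImageBW), not done by hand.
rgb = np.random.rand(8, 8, 3)           # H x W x C image
gray = rgb @ np.array([0.299, 0.587, 0.114])  # H x W, single channel
```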

Thanks @sgugger, but I did install editable versions of both, and it still doesn’t work. I found two ways to solve the problem:

  1. Re-train the model, save it again, and then the load function works with the new .pth file. (It still doesn’t work with the old .pth file, which seems strange to me.)
  2. Put all the .py files in fastcore’s core folders back to their original place in the fastai2 folder, and import them with “import fastai2.core.fundation”; then the load function works with my old saved .pth file.

I don’t understand why this happens. Thanks.

This is because we changed the names of the modules, and pickle uses those names to save things. The names should be settled now, so it shouldn’t break like this anymore.

Is it possible to get idxs from batches? Or can I match a batch (image) to the databunch or loader?

xb, yb = dbunch.one_batch() # Image and Category

I want the id of the image.

To be more concrete (from the pets more nb):
I need the category info (y) for the hook.

b = dbunch.one_batch()
xb_im = TensorImage(dbunch.train_dl.decode(b)[0][0])
xb = b[0]

def hooked_backward(cat=y):
    with hook_output(m[0]) as hook_a:
        with hook_output(m[0], grad=True) as hook_g:
            preds = m(xb)
    return hook_a,hook_g

thank you
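One library-free way to keep track of which items land in a batch is to wrap the dataset so each item also yields its own index (a plain-Python sketch, not a fastai2 API):

```python
# Wrap a dataset so every item also yields its index, letting you map
# a batch back to the underlying records.
class IndexedDataset:
    def __init__(self, ds): self.ds = ds
    def __len__(self): return len(self.ds)
    def __getitem__(self, i):
        x, y = self.ds[i]
        return x, y, i   # the extra element is the item's index

ds = IndexedDataset([("img0", 0), ("img1", 1)])
x, y, idx = ds[1]
```

A DataLoader built on such a dataset then returns a third tensor of indices alongside each (x, y) batch.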

I’m working to migrate audiov2 from fastai_dev to fastai2 and am still getting ModuleNotFoundError: No module named 'fastai2'

My steps for migrating were:
~1. Fork fastai2~
~2. New branch for merging~
~3. Make new upstream on fastai2 that points at my old fastai dev fork~
~4. git remote update~
~5. remove stuff that was copied over but is no longer in fastai2 (nbdev, tools…etc)~
~6. merge in audio nbs~

After that, my steps for setting up fastai2 were…
~1. pip install packaging~
~2. pip install nbdev~
~3. nbdev_install_git_hooks~
~4. Tried to run an editable install with pip install -e .[dev]~
~5. Removed old egg-info from site-packages folder~
~6. Ran pip install -e .[dev] again, still couldn’t import fastai2 in 00_torch_core~
~7. Tried just pip install -e ., still couldn’t import fastai2 in 00_torch_core~
What am I missing? Thanks


Fixed. In hindsight, it feels like it should have been extremely obvious that I needed to install the repo first and then make my merge. I started from scratch following the editable install instructions in the repo, and everything went smoothly this time. For those who may need to move a fork of fastai_dev over to fastai2, here’s what worked for me.

  1. pip install packaging
  2. pip install nbdev
  3. git clone
  4. cd fastai2
  5. pip install -e .[dev]
  6. nbdev_install_git_hooks
  7. Test that install works by running a nb and making sure fastai2 imports
  8. git checkout -b ‘move-repo’ (create a new branch for merging)
  9. git remote add ‘remotename’ ‘repo-url’
  10. git remote update
  11. Cherry-pick the commits you made so that you don’t carry over 2700 commits from the old repo
  12. git merge --strategy-option ours ‘remotename/branch-to-be-merged’ (strategy option ours will keep the local files for any conflicts, i.e. fastai2 instead of fastai_dev)
  13. Move your notebooks to the right folder git mv dev/path-to-your-nb nbs/path-to-your-nb
  14. rm -rf folders and files that fastai2 no longer uses, also git rm -rf them
  15. git add everything that needs to be committed and commit
  16. Update your nbs to have proper imports and exports (I opened 07_vision_core and modeled mine off what was in there), and run the nbs to make sure everything works.
  17. Commit those changes and you’re done.

Hope this helps someone!

Thanks, @sgugger
So I guess I have to retrain my model. The old trained model cannot be used anymore?

They can, but you can’t use learn.load directly. You have to map the weights in the state dicts of each model (since the keys changed names).
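A minimal sketch of that key remapping (the rename table and key names here are hypothetical; derive yours by comparing the old and new state dict key lists, then pass the result to model.load_state_dict):

```python
# Hypothetical old state dict whose top-level module was renamed.
old_state = {"body.0.weight": [1.0], "body.0.bias": [0.0]}
rename = {"body": "encoder"}   # old name -> new name (example only)

def remap(key):
    # Rewrite the leading module name, keep the rest of the dotted path.
    head, *rest = key.split(".")
    return ".".join([rename.get(head, head)] + rest)

new_state = {remap(k): v for k, v in old_state.items()}
```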

Understand. Thanks.

@sgugger, little bug: I’m trying to do segmentation, and when I do dbunch.device I get the following:

TypeError                                 Traceback (most recent call last)
<ipython-input-41-dba5b319a125> in <module>()
----> 1 dbunch.device

11 frames
/usr/local/lib/python3.6/dist-packages/fastcore/ in __getattr__(self, k)
    220         if xtra is None or k in xtra:
    221             attr = getattr(self,self._default,None)
--> 222             if attr is not None: return getattr(attr, k)
    223         raise AttributeError(k)
    224     def __dir__(self): return custom_dir(self, self._dir() if self._xtra is None else self._dir())

/usr/local/lib/python3.6/dist-packages/fastai2/data/ in device(self)
     89     @property
     90     def device(self):
---> 91         if not hasattr(self, '_device'): _ = self._one_pass()
     92         return self._device

/usr/local/lib/python3.6/dist-packages/fastai2/data/ in _one_pass(self)
     42     def _one_pass(self):
---> 43         its = self.after_batch(self.do_batch([self.do_item(0)]))
     44         self._device = find_device(its)
     45         self._n_inp = 1 if not isinstance(its, (list,tuple)) or len(its)==1 else len(its)-1

/usr/local/lib/python3.6/dist-packages/fastcore/ in __call__(self, o)
    173         self.fs.append(t)
--> 175     def __call__(self, o): return compose_tfms(o, tfms=self.fs, split_idx=self.split_idx)
    176     def __repr__(self): return f"Pipeline: {self.fs}"
    177     def __getitem__(self,i): return self.fs[i]

/usr/local/lib/python3.6/dist-packages/fastcore/ in compose_tfms(x, tfms, is_enc, reverse, **kwargs)
    121     for f in tfms:
    122         if not is_enc: f = f.decode
--> 123         x = f(x, **kwargs)
    124     return x

/usr/local/lib/python3.6/dist-packages/fastcore/ in __call__(self, x, **kwargs)
     59     @property
     60     def use_as_item(self): return ifnone(self.as_item_force, self.as_item)
---> 61     def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)
     62     def decode  (self, x, **kwargs): return self._call('decodes', x, **kwargs)
     63     def setup(self, items=None): return self.setups(items)

/usr/local/lib/python3.6/dist-packages/fastcore/ in _call(self, fn, x, split_idx, **kwargs)
     68         f = getattr(self, fn)
     69         if self.use_as_item or not is_listy(x): return self._do_call(f, x, **kwargs)
---> 70         res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
     71         return retain_type(res, x)

/usr/local/lib/python3.6/dist-packages/fastcore/ in <genexpr>(.0)
     68         f = getattr(self, fn)
     69         if self.use_as_item or not is_listy(x): return self._do_call(f, x, **kwargs)
---> 70         res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
     71         return retain_type(res, x)

/usr/local/lib/python3.6/dist-packages/fastcore/ in _do_call(self, f, x, **kwargs)
     73     def _do_call(self, f, x, **kwargs):
---> 74         return x if f is None else retain_type(f(x, **kwargs), x, f.returns_none(x))
     76 add_docs(Transform, decode="Delegate to `decodes` to undo transform", setup="Delegate to `setups` to set up transform")

/usr/local/lib/python3.6/dist-packages/fastcore/ in __call__(self, *args, **kwargs)
     96         if not f: return args[0]
     97         if self.inst is not None: f = MethodType(f, self.inst)
---> 98         return f(*args, **kwargs)
    100     def __get__(self, inst, owner):

/usr/local/lib/python3.6/dist-packages/fastai2/data/ in encodes(self, x)
    289             self.mean,self.std = x.mean(self.axes, keepdim=True),x.std(self.axes, keepdim=True)+1e-7
--> 291     def encodes(self, x:TensorImage): return (x-self.mean) / self.std
    292     def decodes(self, x:TensorImage):
    293         f = to_cpu if x.device.type=='cpu' else noop

/usr/local/lib/python3.6/dist-packages/fastai2/ in _f(self, *args, **kwargs)
    256         def _f(self, *args, **kwargs):
    257             cls = self.__class__
--> 258             res = getattr(super(TensorBase, self), fn)(*args, **kwargs)
    259             return retain_type(res, self)
    260         return _f

TypeError: sub(): argument 'other' (position 1) must be Tensor, not list

The databunch generation follows the CamVid tutorial:

valid_fnames = (path/'valid.txt').read().split('\n')
def ListSplitter(valid_items):
  def _inner(items):
    val_mask = tensor([o.name in valid_items for o in items])
    return [~val_mask, val_mask]
  return _inner
get_msk = lambda o: path/'labels'/f'{o.stem}_P{o.suffix}'
codes = np.loadtxt(path/'codes.txt', dtype=str)
camvid = DataBlock(blocks=(ImageBlock, ImageBlock(cls=PILMask)),
                   get_items=get_image_files,
                   splitter=ListSplitter(valid_fnames),
                   get_y=get_msk)
dbunch = camvid.databunch(path/'images', bs=8,
                          batch_tfms=[*aug_transforms(size=(360,480)), Normalize(*imagenet_stats)])
dbunch.vocab = codes
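For what it’s worth, the TypeError looks like the stats reaching Normalize as plain lists, so x - self.mean becomes tensor minus list; Normalize.from_stats(*imagenet_stats), as used in the earlier snippet, converts and reshapes them first. A numpy sketch of the shape the stats need for broadcasting over an NCHW batch:

```python
import numpy as np

# Stand-in for imagenet_stats: plain (mean, std) lists per channel.
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])

# For broadcasting over a (batch, channel, h, w) tensor, the stats must be
# arrays shaped (1, C, 1, 1) — which is what Normalize.from_stats sets up.
mean, std = (np.array(s).reshape(1, -1, 1, 1) for s in imagenet_stats)

x = np.random.rand(2, 3, 4, 4)
normed = (x - mean) / std
```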