Fastai v2 chat


Yeah probably, I’ll add it to the templates.



I had to make a breaking change in nbdev that is used by the git hooks. So after upgrading to the latest nbdev you’ll need to rerun the command


in all the repos where you ran it before (like nbdev, fastai2).


(Zachary Mueller) #476


How do we turn off GPU-ported augmentation when training? I’d like to push some specific augmentations (like resize, etc.) onto the CPU to free up GPU memory, since my GPU usage in v2 is considerably higher than in v1. How would I go about setting that up?


(Jeremy Howard (Admin)) #477

Just use after_item instead of after_batch to run augmentation on the CPU. (Or with the data block API, it’s item_tfms instead of batch_tfms.)
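For intuition, here is a toy sketch of the two stages — the function and transform names below are made up for illustration, not fastai API. Per-item transforms run on each sample before collation (in the CPU-side data loading), while batch transforms run once on the collated batch (which fastai2 keeps on the GPU):

```python
# Toy two-stage pipeline illustrating where after_item vs after_batch apply.
# item_tfm stands in for a CPU-side transform (decode/resize); batch_tfm for
# a GPU-side batch augmentation. Plain ints stand in for images.

def apply_pipeline(samples, item_tfm, batch_tfm, bs=2):
    "Apply item_tfm per sample (pre-collation), then batch_tfm per batch"
    batches = []
    for i in range(0, len(samples), bs):
        items = [item_tfm(s) for s in samples[i:i+bs]]  # CPU, one item at a time
        batches.append(batch_tfm(items))                # GPU, whole batch at once
    return batches

resize  = lambda x: x * 2                # stand-in for an item-level resize
augment = lambda b: [x + 1 for x in b]   # stand-in for a batch-level augmentation

result = apply_pipeline([1, 2, 3, 4], resize, augment)
# result == [[3, 5], [7, 9]]
```

Moving a transform from the batch stage to the item stage trades GPU memory for CPU time, which is the lever being asked about above.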


(Zachary Mueller) #478

Thank you for the clarification! :slight_smile:


(Tako Tabak) #480

Just wanted to share my current progress with v2. It really is inviting to work on and gives a lot of room for (mis)use.

I made some fake data since I’m not allowed to share the real data. But it is time-series forecasting with an LSTM and a simple layer on top, for a couple of locations in 2 cities, which are included as embeddings. The target depends on the weather. I tried to use the transforms where possible and to plot and test as much as possible.

Any feedback/comments/tips are welcome.

PS: I’m currently stuck on a peculiar problem; any pointers are welcome. When I execute learn.show_results(), my *yb seems to contain a keyword argument reduction?? I don’t get it, because * unpacks positional arguments, right? And ** would be keyword arguments. But still:

~/devtools/fastai_dev/fastai2/ in one_batch(self, i, b)
    215             self.pred = self.model(*self.xb);                self('after_pred')
    216             if len(self.yb) == 0: return
--> 217             self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
    218             if not self.training: return
    219             self.loss.backward();                            self('after_backward')

TypeError: mse() got an unexpected keyword argument 'reduction'


You need to implement reduction in your custom loss function to be able to use Learner.get_preds or Learner.show_results; in particular, they call the loss with reduction='none'.
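A minimal sketch of what such a loss signature can look like — pure Python for illustration (fastai’s MSELossFlat handles this for you, so this is just the shape of the fix, not fastai code):

```python
# Custom MSE that accepts the reduction keyword which get_preds/show_results
# pass along. reduction='none' must return one loss per item rather than a
# single aggregated value.

def mse(pred, targ, reduction='mean'):
    "Mean squared error honoring reduction='none' | 'mean' | 'sum'"
    losses = [(p - t) ** 2 for p, t in zip(pred, targ)]
    if reduction == 'none': return losses            # one loss per item
    if reduction == 'sum':  return sum(losses)
    return sum(losses) / len(losses)                 # default: 'mean'
```

The error above comes from a loss defined without the reduction parameter, so the keyword fastai passes has nowhere to go.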


(Tako Tabak) #482

Thanks!!! I used the metric mse, but with MSELossFlat it works.

MSELossFlat is defined in the layers notebook, I was looking and using metrics/loss functions from the metrics notebook.



Breaking change to note: Normalize has been changed. The current Normalize(*imagenet_stats) needs to be replaced by Normalize.from_stats(*imagenet_stats).

Note that now, if you don’t pass in stats, Normalize will grab the mean and std of the first batch during setup. Also, the generic init makes it easier to use for something other than images (tabular data, audio…).
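A hedged stand-in showing the behavior described above — the class mirrors the fastai2 API shape loosely (from_stats plus setup-time stats), but the implementation here is plain Python for illustration, not fastai’s:

```python
# Stand-in Normalize: from_stats takes explicit stats, while a bare
# Normalize() computes mean/std from the first batch it sees at setup time.
# Works on plain lists of floats here; fastai operates on tensors.

class Normalize:
    def __init__(self, mean=None, std=None):
        self.mean, self.std = mean, std

    @classmethod
    def from_stats(cls, mean, std):
        "Explicit stats, analogous to Normalize.from_stats(*imagenet_stats)"
        return cls(mean, std)

    def setup(self, first_batch):
        "If no stats were given, grab mean/std from the first batch"
        if self.mean is None:
            n = len(first_batch)
            self.mean = sum(first_batch) / n
            self.std = (sum((x - self.mean) ** 2 for x in first_batch) / n) ** 0.5

    def __call__(self, x):
        return (x - self.mean) / self.std

norm = Normalize()
norm.setup([2.0, 4.0, 6.0])   # no stats given, so mean/std come from the batch
```

The same generic shape is what makes it reusable for tabular or audio data: nothing above is image-specific.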


(Zachary Mueller) #484

@sgugger Currently facing an issue with fastcore :frowning: I tried installing via pip install fastcore and

!pip install packaging
!git clone
%cd fastcore
!pip install -e .[dev]

In Colab, and I’m getting a No module named 'fastcore.all' error. pip show reports 0.0.2, and when I inspect the installed package I don’t see the file in there at all.

Edit: doing a !pip install git+ does work though. Do you need to update the pip release on your end?



Yes, but since it’s a fast-moving work in progress, you should keep an editable install. I’ll only make releases once a day at most.


(Zachary Mueller) #486

Got it :slight_smile: thanks! (I wasn’t sure when you added it; I’ve been away from v2 for a week and there are TONS of changes :wink:)



It was added two days ago, but the dependency in fastai2 only dates from yesterday morning :wink:



When I tried to install fastcore on Win10 WSL, I also got the same No module named ‘fastcore.all’ error.
Then I tried the fastcore editable install and got this error:

ERROR: ipython 7.9.0 has requirement prompt-toolkit<2.1.0,>=2.0.0, but you’ll have prompt-toolkit 1.0.18 which is incompatible.

I tried to install fastai2 both with conda and with an editable install, and then fastcore both with pip from GitHub and as the editable version.
What is the currently recommended way to install into a fresh conda environment?



This is in the latest version of fastcore (v0.0.3 on PyPI), so if the editable install doesn’t work, the pip one should.



Thanks, I tried the pip install (0.0.3) but I still have issues with the Jupyter kernel (I think it’s related to the prompt-toolkit version).
I want to start from a fresh new environment; should I do:

git clone
cd fastai2
conda env create -f environment.yml
source activate fastai2

Then -

pip install packaging
pip install -e .[dev]

followed by -

pip install fastcore

I’m probably missing something; I’ve already tried to install it about 10 times today.


(Zachary Mueller) #491

I’m trying to set up DeViSE at the moment. How do I extract the y’s from my databunch? (It’s no longer dbunch.valid_ds.y.)


(Jeremy Howard (Admin)) #492

I expect something like dbunch.valid_ds.itemgot(1) should do it, if your dataset contains an L. Otherwise use a list comprehension.
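Both options on a stand-in dataset of (x, y) tuples — the itemgot function here is a minimal imitation of fastcore’s L.itemgot for illustration, not the real implementation:

```python
# Extract the targets from a dataset of (x, y) items, two equivalent ways.

dataset = [('img0', 0), ('img1', 1), ('img2', 0)]   # stand-in for valid_ds

def itemgot(items, idx):
    "Grab element idx from each item, like L.itemgot(idx) on an L of tuples"
    return [item[idx] for item in items]

ys = itemgot(dataset, 1)                 # all the targets
ys_listcomp = [y for _, y in dataset]    # the list-comprehension alternative
```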


(Zachary Mueller) #493

It wound up being ys = dbunch.valid_ds.itemgot[1](). itemgot returns an L of both the x’s and the y’s (unless that’s not the intended behavior and it should work the way you described). Still not through all the headaches, but progress :slight_smile:

Here was my issue: if I try using a pathlib Path, I get this stack trace:

/usr/local/lib/python3.6/dist-packages/numpy/lib/ in save(file, arr, allow_pickle, fix_imports)
    540         arr = np.asanyarray(arr)
    541         format.write_array(fid, arr, allow_pickle=allow_pickle,
--> 542                            pickle_kwargs=pickle_kwargs)
    543     finally:
    544         if own_fid:

/usr/local/lib/python3.6/dist-packages/numpy/lib/ in write_array(fp, array, version, allow_pickle, pickle_kwargs)
    641     """
    642     _check_version(version)
--> 643     _write_array_header(fp, header_data_from_array_1_0(array), version)
    645     if array.itemsize == 0:

/usr/local/lib/python3.6/dist-packages/numpy/lib/ in _write_array_header(fp, d, version)
    415     else:
    416         header = _wrap_header(header, version)
--> 417     fp.write(header)
    419 def write_array_header_1_0(fp, d):

/usr/local/lib/python3.6/dist-packages/fastcore/ in write(self, txt, encoding)
    429     "Write `txt` to `self`, creating directories as needed"
    430     self.parent.mkdir(parents=True,exist_ok=True)
--> 431     with self.open('w', encoding=encoding) as f: f.write(txt)
    433 #Cell

TypeError: write() argument must be str, not bytes

To recreate this, do the following:

ys = dbunch.valid_ds.itemgot[1]()
ys = ys[0:len(ys)].stack().numpy()
np.save(path/'val_lbl.npy', ys)

Also if there is a cleaner way to do the ys let me know :slight_smile:

If I just do np.save('val_lbl.npy', ys) with a plain string instead of a Path, it works (I’m doing this in the meantime).
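The likely root cause, judging from the traceback: numpy’s save duck-types its argument, treating anything with a .write attribute as an already-open file, and the fastcore patch gave Path a text-mode write method. A stdlib-only sketch of that clash (PatchedPath and save_like are made-up names modeling the two sides, not real fastcore/numpy code):

```python
import tempfile
from pathlib import Path

class PatchedPath(type(Path())):
    "Path with a text-mode write method, modeled on fastcore's patch"
    def write(self, txt, encoding='utf8'):
        self.parent.mkdir(parents=True, exist_ok=True)
        with self.open('w', encoding=encoding) as f: f.write(txt)

def save_like(file, payload):
    "numpy-style dispatch: anything with .write is treated as an open file"
    if hasattr(file, 'write'):
        file.write(payload)                    # binary bytes hit a text-mode write
    else:
        with open(file, 'wb') as f: f.write(payload)

p = PatchedPath(tempfile.mkdtemp()) / 'val_lbl.npy'
try:
    save_like(p, b'\x93NUMPY')                 # fails like np.save(path/...)
    err = None
except TypeError as e:
    err = str(e)                               # write() argument must be str, not bytes

save_like(str(p), b'\x93NUMPY')                # a plain string path avoids the patch
```

That matches the workaround above: a string filename takes the open-a-fresh-binary-file branch, while a patched Path gets mistaken for a file object.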


(fanyi) #494

When I use learn.load(), I get a “No module named ‘’” error; this happened after I installed fastai2 and fastcore.