Yeah probably, will add it in the templates.
I had to make a breaking change in nbdev, in the part used by the git hooks. So after upgrading to the latest nbdev, you'll need to run the command again in all the repos you ran it in before (like nbdev, fastai2).
How do we turn off GPU-side augmentation when training? I'd like to push some specific augmentations (like resize, etc.) onto the CPU to free up GPU memory. The reason is that my GPU usage is considerably higher in v1 vs v2. How would I go about setting that up?
Just use `after_item` instead of `after_batch` for CPU augmentation (or, with the data block API, `item_tfms` instead of `batch_tfms`).
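To illustrate the distinction, here's a toy sketch in plain Python (not actual fastai code) of the two stages: item transforms run per sample before collation (on the CPU, in the DataLoader), while batch transforms run on the already-collated batch (which fastai has moved to the GPU). The function and transform names below are illustrative only.

```python
def apply_pipeline(samples, item_tfms=(), batch_tfms=()):
    """Mimic the two transform stages: item_tfms run per sample before
    batching (the CPU stage), batch_tfms run once on the collated batch
    (the stage fastai runs on the GPU)."""
    for tfm in item_tfms:
        samples = [tfm(s) for s in samples]
    batch = list(samples)          # stand-in for collation
    for tfm in batch_tfms:
        batch = tfm(batch)
    return batch

resize = lambda s: s[:2]                              # per-item "resize"
scale  = lambda b: [[x * 10 for x in s] for s in b]   # per-batch op

out = apply_pipeline([[1, 2, 3], [4, 5, 6]],
                     item_tfms=[resize], batch_tfms=[scale])
print(out)  # -> [[10, 20], [40, 50]]
```

Moving a transform from the batch stage to the item stage is just moving it from the second list to the first.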
Thank you for the clarification!
Just wanted to share my current progress with v2. It really is inviting to work on and gives a lot of room for (mis)use. I made some fake data since I'm not allowed to share the real data, but it is time-series forecasting with an LSTM and a simple layer on top, for a couple of locations in two cities, which are included as embeddings. The target depends on the weather. I tried to use the transforms where possible and to plot and test as much as possible. Any feedback, comments, or tips are welcome.
PS: I'm currently stuck on a peculiar problem; any pointers are welcome. When I execute `learn.show_results()`, my `*yb` seems to receive a keyword argument `reduction`?? I don't get it, because `*` means positional arguments, right, and `**` would be keyword? But still:

```
~/devtools/fastai_dev/fastai2/learner.py in one_batch(self, i, b)
    215         self.pred = self.model(*self.xb); self('after_pred')
    216         if len(self.yb) == 0: return
--> 217         self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
    218         if not self.training: return
    219         self.loss.backward(); self('after_backward')

TypeError: mse() got an unexpected keyword argument 'reduction'
```
You need to implement `reduction` in your custom loss function to be able to use `Learner.show_results` in particular.
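For anyone hitting the same error: the traceback shows the loss being called with a `reduction` keyword, so a custom loss needs to accept it. Here's a minimal plain-Python sketch of the idea (with real tensors you would forward `reduction` to `F.mse_loss`; fastai's `MSELossFlat` already handles this for you):

```python
# Sketch of a loss that accepts the `reduction` keyword seen in the
# traceback above. The name and behavior of the reduction modes follow
# the usual PyTorch convention ('mean', 'sum', 'none').
def mse(pred, targ, reduction='mean'):
    sq = [(p - t) ** 2 for p, t in zip(pred, targ)]
    if reduction == 'mean': return sum(sq) / len(sq)
    if reduction == 'sum':  return sum(sq)
    return sq  # reduction='none': keep the per-element losses

print(mse([1.0, 2.0], [0.0, 0.0]))                    # -> 2.5
print(mse([1.0, 2.0], [0.0, 0.0], reduction='none'))  # -> [1.0, 4.0]
```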
Thanks!!! I used the metric `mse`, but with `MSELossFlat` it works. `MSELossFlat` is defined in the layers notebook; I was looking at and using metrics/loss functions from the metrics notebook.
Breaking change to note: `Normalize` has been changed. The current `Normalize(*imagenet_stats)` needs to be replaced. Note that now, if you don't fill in stats, `Normalize` will grab the mean and std of the first batch during setup. Also, the generic init makes it easier to use for something other than images (tabular data, audio…).
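As a toy illustration (not fastai's actual implementation) of what "grab the mean and std of the first batch during setup" means:

```python
# Toy stand-in for the described behavior: if no stats are supplied at
# init, learn mean/std from the first batch seen at setup time.
class Normalize:
    def __init__(self, mean=None, std=None):
        self.mean, self.std = mean, std

    def setup(self, batch):
        if self.mean is None:  # no stats given: compute from this batch
            n = len(batch)
            self.mean = sum(batch) / n
            self.std = (sum((x - self.mean) ** 2 for x in batch) / n) ** 0.5

    def __call__(self, x):
        return (x - self.mean) / self.std

norm = Normalize()          # no stats passed in
norm.setup([1.0, 3.0])      # mean=2.0, std=1.0 learned from first batch
print(norm(3.0))  # -> 1.0
```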
@sgugger Currently facing an issue with fastcore. I tried installing via `pip install fastcore` and

```shell
!pip install packaging
!git clone https://github.com/fastai/fastcore.git
%cd fastcore
!pip install -e .[dev]
```

in Colab, and I'm getting a `No module named 'fastcore.all'` error. `pip show` shows 0.0.2, and when I investigate it I don't see the `all.py` file in there at all.

Edit: doing a `!pip install git+https://github.com/fastai/fastcore` does work though. Need to push a new pip version on your end?
Yes, but since it's a fast-moving work in progress, you should keep an editable install. I'll only make releases once a day at most.
Got it, thanks! (I wasn't sure when you added it; I've been away from 2.0 for a week and there are TONS of changes.)
It was added two days ago, but the dependency in fastai2 is only from yesterday morning.
When trying to install fastcore on Win10 WSL, I also got the same `No module named 'fastcore.all'` error. Then I tried the fastcore editable install and got this error:

```
ERROR: ipython 7.9.0 has requirement prompt-toolkit<2.1.0,>=2.0.0, but you'll have prompt-toolkit 1.0.18 which is incompatible.
```

I tried to install fastai2 with both conda and the editable install, and then fastcore with both pip from GitHub and the editable version. What is the current recommended way to install into a fresh conda environment?
This is in the latest version of fastcore (v0.0.3 on PyPI), so if an editable install doesn't work, the pip one should.
Thanks, I tried the pip install (0.0.3) but I still have issues with the Jupyter kernel (I think it's related to the prompt-toolkit version). I want to start from a fresh environment; should I do

```shell
git clone https://github.com/fastai/fastai2
conda env create -f environment.yml
source activate fastai2
pip install packaging
pip install -e .[dev]
```

followed by

```shell
pip install fastcore
```

I'm probably missing something; I've tried to install it about 10 times today already.
I'm trying to set up DeViSe at the moment. How would I extract the y's from my databunch? (It's no longer…)
I expect something like `dbunch.valid_ds.itemgot(1)` should do it, if your dataset contains an `L`. Otherwise use a list comprehension.
It wound up being `ys = dbunch.valid_ds.itemgot()`. `itemgot` will return an `L` of both the x's and the y's (unless that's not the intended behavior and I should be able to index it as you suggested). Still not through all the headaches, but progress.
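To illustrate the behavior being discussed, here's a toy stand-in (not fastcore's real `L`) for `itemgot`: with no index the items come back unchanged, and with an index it grabs that element of every item:

```python
class L(list):
    # Toy version of fastcore's L.itemgot: each index is applied in turn
    # to every item; with no index the items come back unchanged.
    def itemgot(self, *idxs):
        x = self
        for idx in idxs:
            x = L(item[idx] for item in x)
        return x

ds = L([('x0', 'y0'), ('x1', 'y1')])
print(ds.itemgot(1))   # -> ['y0', 'y1']
print(ds.itemgot())    # -> [('x0', 'y0'), ('x1', 'y1')]
```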
Here was my issue: if I try using the pathlib `Path`, I'll get this stack trace:

```
/usr/local/lib/python3.6/dist-packages/numpy/lib/npyio.py in save(file, arr, allow_pickle, fix_imports)
    540         arr = np.asanyarray(arr)
    541         format.write_array(fid, arr, allow_pickle=allow_pickle,
--> 542                            pickle_kwargs=pickle_kwargs)
    543     finally:
    544         if own_fid:

/usr/local/lib/python3.6/dist-packages/numpy/lib/format.py in write_array(fp, array, version, allow_pickle, pickle_kwargs)
    641     """
    642     _check_version(version)
--> 643     _write_array_header(fp, header_data_from_array_1_0(array), version)
    644
    645     if array.itemsize == 0:

/usr/local/lib/python3.6/dist-packages/numpy/lib/format.py in _write_array_header(fp, d, version)
    415     else:
    416         header = _wrap_header(header, version)
--> 417     fp.write(header)
    418
    419 def write_array_header_1_0(fp, d):

/usr/local/lib/python3.6/dist-packages/fastcore/utils.py in write(self, txt, encoding)
    429     "Write `txt` to `self`, creating directories as needed"
    430     self.parent.mkdir(parents=True,exist_ok=True)
--> 431     with self.open('w', encoding=encoding) as f: f.write(txt)
    432
    433 #Cell

TypeError: write() argument must be str, not bytes
```
To recreate this, do the following:

```python
ys = dbunch.valid_ds.itemgot()
ys = ys[0:len(ys)].stack().numpy()
np.save(tmp_path/'val_lbl.npy', ys)
```

Also, if there is a cleaner way to do the ys, let me know.
If I just do `np.save('val_lbl.npy', ys)` it works (I'm doing this in the meantime).
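A likely explanation, judging from the traceback: `np.save` treats any object with a `.write` attribute as an already-open file handle, and fastcore patches a text-mode `write` onto `Path` (the `fastcore/utils.py` frame above), so the binary header write fails. A sketch of the string-path workaround, under that assumption:

```python
import os, tempfile
import numpy as np

# Passing a plain string (or a file opened in binary mode) instead of a
# Path object lets np.save open the file itself in binary mode,
# bypassing any patched Path.write method.
ys = np.arange(6).reshape(2, 3)
out = os.path.join(tempfile.mkdtemp(), 'val_lbl.npy')
np.save(out, ys)                      # str path: np.save handles the I/O
print(np.load(out).shape)  # -> (2, 3)
```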
When I use `learn.load()`, I get `No module named 'fastai2.core.foundation'`. This happened after I installed fastai2 and fastcore.