Fastai v2 chat

In the Data block examples tutorial, the link is broken: it points to a URL with the letter "k" missing from "datablock" instead of the correct one. Also, the corresponding notebook is the only one that doesn’t have "tutorial" in its name (50_datablock_examples.ipynb), unlike the other tutorial notebooks. Shouldn’t it be named 50_tutorial.datablock_examples.ipynb for consistency (no relation to the missing k)? :slightly_smiling_face:

Hi @Jeremy, when can we expect the new course lectures with fastai v2 to start showing up on the website (or on Youtube)? After each live lecture or when the whole course completes? Do you have any dates to share?

Eagerly waiting for this. Sounds like the rewrite of the library would lend itself much better to writing custom models using the mid and low-level APIs - which is exciting.

He sent out invites for those invited to the live viewings. Otherwise it should be on YouTube, etc. in July.

In the meantime I’d recommend my Walk with fastai2 study group for what’s new in the API and an in-depth guide to certain techniques (search the forum, it’s a very large megathread :slight_smile:). Jeremy also did his own v2 code walkthroughs.

Thanks @muellerzr!

July :slight_smile:

Hi all,

I’m trying to use optimizers, and I’m following the docs, which have syntax like so:
opt = Lookahead(Lamb(params, lr=0.1))

But, I get the following error:
----> 1 learn.lr_find()

2 frames
/usr/local/lib/python3.6/dist-packages/fastai2/callback/ in lr_find(self, start_lr, end_lr, num_it, stop_div, show_plot, suggestions)
    195     n_epoch = num_it//len(self.dls.train) + 1
    196     cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
--> 197     with self.no_logging():, cbs=cb)
    198     if show_plot: self.recorder.plot_lr_find()
    199     if suggestions:

/usr/local/lib/python3.6/dist-packages/fastai2/ in fit(self, n_epoch, lr, wd, cbs, reset_opt)
    284     def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False):
    285         with self.added_cbs(cbs):
--> 286             if reset_opt or not self.opt: self.create_opt()
    287             self.opt.set_hypers(wd=self.wd if wd is None else wd, lr=self.lr if lr is None else lr)

/usr/local/lib/python3.6/dist-packages/fastai2/ in create_opt(self)
    233     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
    234     def create_opt(self):
--> 235         self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
    236         if not self.wd_bn_bias:
    237             for p in self._bn_bias_state(True ): p['do_wd'] = False

TypeError: 'Lookahead' object is not callable

My learner has:

Any tips?

You need to pass an opt_func, not an opt to Learner. So just wrap your code in a function that returns the opt you want (see the source code for Adam, or any other fastai optimizer for inspiration).

I had just tried that:
def opt_func(p, lr=slice(3e-3)): return Lookahead(Lamb(p, lr))

But, got this error:
/usr/local/lib/python3.6/dist-packages/fastai2/ in <genexpr>(.0)
    299     def _init_state(self): self.count,self.slow_weights = 0,None
--> 300     def _copy_weights(self): self.slow_weights = L(L(p.clone().detach() for p in pg) for pg in self.param_groups)
    302     @property

AttributeError: 'function' object has no attribute 'clone'

There is some problem with your parameters, apparently. In general, please share all the code you’re running, not just the line that threw the error.

I got it working now. I had an issue with my params definition.
I’ll remember to share all the code. Thanks for your help!

I hope this is the right place to ask this question. Since I am hoping to learn and maybe contribute to fastai someday, I pulled the latest fastai2 from GitHub onto my Ubuntu laptop to take a look. I am able to get quite far, installing prerequisites like CUDA and PyTorch, and can run most of the tests successfully except for two:

  • 04_data.external.ipynb
  • 09b_vision.utils.ipynb

I tried to troubleshoot 04_data.external.ipynb… The error is:

The log we have for IMAGENETTE in checks does not match the actual archive. To fix this, you need to run the following code in this notebook before making a PR (there is a commented cell for this below):

_add_check(url, URLs.path(url))

I opened the notebook using jupyter and tried to run the suggested cell but I still have the same error.

I looked into the file fastai2/data/checks.txt, the entry for IMAGENETTE is:”: [

While on


At least the size seems to match. Not sure how to proceed from here. Help?

I think you may have the old version of the dataset. Try removing it and downloading it again (with untar_data(URLs.IMAGENETTE)).
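If it helps, one way to script that cleanup is sketched below, assuming the default ~/.fastai cache layout (adjust the paths if you configured a different fastai data directory):

```python
import shutil
from pathlib import Path

# Assumed default fastai cache layout; your paths may differ.
fastai_home = Path.home() / ".fastai"
stale = [fastai_home / "archive" / "imagenette2.tgz",  # downloaded archive
         fastai_home / "data" / "imagenette2"]         # extracted dataset
for p in stale:
    if p.is_dir():
        shutil.rmtree(p)   # remove the extracted dataset directory
    elif p.exists():
        p.unlink()         # remove the downloaded archive file
# then re-download inside python with:
# from import untar_data, URLs
# untar_data(URLs.IMAGENETTE)
```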

BTW for those confused on why show_doc won’t work in colab:

IPython’s Markdown isn’t currently supported, and it doesn’t seem like they’ll be updating it any time soon…

@sgugger thank you. I did find an outdated copy of imagenette2.tgz under ~/.fastai, and after I cleaned out data/ and archive/ (under ~/.fastai/) the 04_data.external.ipynb test is now passing.

Remaining Issues:

  1. 09b_vision.utils.ipynb is failing with a timeout, i.e.

~/Projects/fastai2> nbdev_test_nbs --fname nbs/09b_vision.utils.ipynb
testing: /home/brian/Projects/fastai2/nbs/09b_vision.utils.ipynb

Error in /home/brian/Projects/fastai2/nbs/09b_vision.utils.ipynb:
Cell execution timed out
Traceback (most recent call last):

File "/home/brian/pyenvs/f2dev/bin/nbdev_test_nbs", line 8, in <module>

File "/home/brian/pyenvs/f2dev/lib/python3.7/site-packages/fastscript/", line 73, in _f

File "/home/brian/pyenvs/f2dev/lib/python3.7/site-packages/nbdev/", line 70, in nbdev_test_nbs
raise Exception(msg + '\n'.join([ for p,f in zip(passed,files) if not p]))

  2. Transient test failures with CUDA out-of-memory errors from tests such as 43_tabular.learner.ipynb, e.g.
An error occurred while executing the following cell:

dls = to.dataloaders(bs=64)

RuntimeError: CUDA error: out of memory

My laptop has a GTX 1050 with 4GB VRAM. Does fastai have a minimum acceptable spec? Should I submit a PR to limit the dataloaders call in tests to a smaller batch size?

I have no idea why 9b times out; you should run the notebooks to find out why.
We can certainly limit the batch size to 16 for the tests/docs, so a PR would be welcome, yes.

A small suggestion: how about adding a simple check that the directories passed to GrandparentSplitter actually exist? I had spelled one wrong and only found out about it much later when I called learn.fit_flat_cos :laughing:

I’d like to make a PR if you consider adding it!
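The suggested check could look something like the sketch below; check_split_dirs is a hypothetical helper for illustration, not an existing fastai function:

```python
from pathlib import Path

def check_split_dirs(path, train_name="train", valid_name="valid"):
    """Fail fast if the folder names given to GrandparentSplitter don't
    exist under the dataset root (e.g. because of a typo), instead of
    erroring much later during fit."""
    missing = [n for n in (train_name, valid_name)
               if not (Path(path) / n).is_dir()]
    if missing:
        raise FileNotFoundError(
            f"Split directories not found under {path}: {missing}")
```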

@kshitijpatil09 show_batch() didn’t catch it? (Not saying it can’t, just surprised)

@sgugger, I ran 09b in jupyter notebook and it passed. Seems like the cell with path = untar_data(URLs.IMAGENETTE) just took a long time downloading and extracting.

Drilling down into nbdev_test_nbs I see it calls concurrent.futures.ProcessPoolExecutor, which probably has a default timeout value somewhere that triggered. Now that the notebook has downloaded Imagenette once, running the 09b test via nbdev_test_nbs --fname nbs/09b_vision.utils.ipynb passes.

When I ran each test individually, all tests passed. When run concurrently with make test, some tests fail with CUDA out of memory and some with random assertion failures.

My guess is that concurrent tests are tripping over each other. Luckily I noticed I can force sequential test execution by setting the number of worker threads, e.g. nbdev_test_nbs --n_workers 1. On my laptop it takes about 5 min and shows all tests passing!

If I try nbdev_test_nbs --n_workers 2 I start to see assertion failure e.g.
FileNotFoundError: [Errno 2] No such file or directory: '~/.fastai/data/mnist_tiny/train/7/9603.png'.

I wonder if anyone has seen this behavior? How do you get around it?

It’s likely that one of the tests moved the file temporarily (to test that the dataset can be properly downloaded, for instance). In general, the safest way is to execute all tests with one worker (we use the parallel runs for quick prototyping, and we’re used to those errors).