Fastai v2 chat

I’m trying to run notebook tests from the command line and getting an error. What am I doing wrong?

> python --fn 00_test.ipynb 

Error in 00_test.ipynb
Traceback (most recent call last):
  File "", line 18, in <module>
    slow:Param("Run slow tests", bool)=False, cpp:Param("Run tests that require c++ extensions", bool)=False):
  File "/home/jupyter/fastai_dev/dev/local/", line 37, in call_parse
  File "", line 24, in main
    for f in sorted(fns): test_nb(f, flags=flags)
  File "/home/jupyter/fastai_dev/dev/local/notebook/", line 113, in test_nb
    raise e
  File "/home/jupyter/fastai_dev/dev/local/notebook/", line 110, in test_nb
TypeError: preprocess() missing 1 required positional argument: 'resources'

You need to update nbconvert to the latest version.
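For context, the traceback is a signature mismatch: the notebook test runner calls preprocess() with fewer arguments than the installed nbconvert expects, and updating nbconvert resolves it. A minimal, nbconvert-free illustration of that class of failure (the function here is made up, not fastai or nbconvert code):

```python
# Toy illustration of the TypeError above: the call site passes one
# argument, but the installed library version defines preprocess()
# with a second required positional parameter.
def preprocess(nb, resources):
    return nb, resources

try:
    preprocess({"cells": []})  # called without 'resources'
except TypeError as e:
    print(e)  # preprocess() missing 1 required positional argument: 'resources'
```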

fastai v2 is now compatible with pytorch 1.3 (and remains compatible with 1.2).


Do you plan to utilize the named tensors feature in any way, or will you avoid doing so to ensure compatibility with 1.2?

We plan to use named tensors once PyTorch considers them stable.

  1. Zoom transform: does it only zoom in, not out? There’s a max_zoom (>=1.0) parameter, but I couldn’t find a way to let it zoom out as well, say from x0.6 (smaller) to x1.3 (larger).

  2. Any plan to incorporate AutoAugment? [1] [2] [3]

  3. Where is the “suggested LR” from lr_find? It’s mentioned in v1 docs, but I don’t see it now.

  4. I was looking for a way to quickly pickle some data structure, and stumbled upon this post recommending MessagePack instead of pickle (which I haven’t tried and have no experience with).
    What is your opinion on that, for example when exporting a model/learner?

  5. What’s the equivalent of PyTorch’s DataLoader sampler for rebalancing classes, to prevent overfitting on an imbalanced dataset?

(moved everything to previous post)

Where does the DataBunch.c attribute get set? It’s used in the get_c() function, but I couldn’t find the code that sets it.

It may be set by one of the transforms.


I am unsure if these questions are mainly for v2 or for both v1 and v2. I might be able to answer some of them:

  1. I am unsure, but I guess you could probably try this yourself and see :slightly_smiling_face:
  2. Don’t know
  3. For fastai v1, you can do learn.recorder.plot(suggestion=True), but I am not sure if this is the same for v2.
  4. Don’t know
  5. For fastai v1 we have the OverSamplingCallback, which uses the weighted random sampler. I will probably write a version for fastai v2 in the next couple of weeks.
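To make the idea behind answer 5 concrete, here is a pure-Python sketch of the oversampling idea that torch.utils.data.WeightedRandomSampler (used by the OverSamplingCallback) implements: weight each sample inversely to its class frequency, then draw indices with replacement. The function and numbers here are illustrative only, not fastai code:

```python
import random
from collections import Counter

# Toy imbalanced dataset: 90 samples of class 0, 10 of class 1.
labels = [0] * 90 + [1] * 10
counts = Counter(labels)

# Each sample is weighted inversely to its class frequency,
# so minority-class samples are drawn more often.
weights = [1.0 / counts[y] for y in labels]

rng = random.Random(0)
resampled = rng.choices(range(len(labels)), weights=weights, k=1000)
print(Counter(labels[i] for i in resampled))  # roughly 500/500
```

In expectation each class now contributes about half the draws, which is exactly what rebalancing for training is after.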

I can connect to WSL and run python/jupyter, but for fastai v2 it reports No module named ‘local’. Who knows what I am missing?

See resolution here:

I was referring to @jeremy’s plans for v2, precisely because some of this was available in v1, as you mentioned.

Ah ok, I will try to answer again based on what’s in the code now:

  1. Here is the code for the zoom. It seems to use PyTorch’s .uniform_ from 1 to max_zoom, but I am not sure whether it will work if max_zoom < 1, since the upper bound would then be smaller than the lower bound.
  2. Not sure
  3. The code for the LR finder plot is here. There is no suggestion argument right now.
  4. Not sure
  5. Again, I will develop this as a callback soon.

I hope this helps answer some of your questions.

There’s weighted_databunch now, although it’s not well tested.


I was not aware of this. Does this mean there is no need for an OverSamplingCallback like there was for fastai v1?

Why does drop_last equal shuffle?

That’s right. I just added this thingie the other day :slight_smile:


Generally you just want your training set to have drop_last.
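A small sketch of why: with drop_last, the trailing partial batch is discarded, so every training batch has the same size (which things like BatchNorm prefer), while for validation you want to see every sample exactly once. The helper below is illustrative only, not the fastai implementation:

```python
# Sketch: 10 samples with batch size 4 leave a partial batch of 2.
# drop_last=True discards it so all batches are uniform in size.
def batches(n_items, bs, drop_last):
    idxs = list(range(n_items))
    out = [idxs[i:i + bs] for i in range(0, n_items, bs)]
    if drop_last and out and len(out[-1]) < bs:
        out = out[:-1]
    return out

print(batches(10, 4, drop_last=True))   # [[0, 1, 2, 3], [4, 5, 6, 7]]
print(batches(10, 4, drop_last=False))  # also includes [8, 9]
```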

So it’s implicit there that the training set has shuffle=True and the validation/test set has shuffle=False?