Fastai v2 chat

Why does drop_last equal shuffle?

That's right. I just added this thingie the other day :slight_smile:

1 Like

Generally you just want your training set to have drop_last.

So it's implicit there that the training set has shuffle=True and the test set has shuffle=False?
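Here is a minimal sketch of the convention I'm assuming, written with plain PyTorch DataLoaders rather than fastai's own classes (so nothing below is fastai API): the training loader shuffles and drops the last partial batch so every batch is full, while the validation/test loader keeps order and keeps every sample.

import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.randn(103, 10), torch.randint(0, 2, (103,)))

# Training: reshuffle each epoch and drop the incomplete final batch
train_dl = DataLoader(ds, batch_size=16, shuffle=True, drop_last=True)
# Validation/test: fixed order, keep every sample
valid_dl = DataLoader(ds, batch_size=16, shuffle=False, drop_last=False)

len(train_dl), len(valid_dl)  # 6 full batches vs 7 (last one partial)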

Are there any end-to-end training examples where we can see models (especially vision) being trained with the high-level API? I checked around the nbs but, besides a brief example in 21_tutorial_imagenette, didn't see any.

I just got one working for audio. 97% accuracy on speaker recognition using basic fit and xresnet18, even though the rest of our audio stuff is still pretty buggy.

2 Likes

IIRC Jeremy said the high level isn't quite done yet, just the medium and low levels.

1 Like

Do we have an example of inference on a test set? (Or how to add that to the databunch?)

At the end of the imagenette tutorial, there is an example of adding a test set (there is a test_dl function for that). Note that this part is still under development and isn't thoroughly tested yet.

2 Likes

Do we know the equivalent of a pip install for mkl? I cannot import anything from fastai2.basics due to an import error "No module named 'mkl'". It seems to be installable only via conda (but I cannot do that as I'm using Google Colab). Thoughts?

Edit: for those who do not know (like me just now!), pip is for Python packages; mkl is not a Python package.

Edit x2: things I have tried:

I tried following this to install conda, and when I double-checked and ran !conda install mkl-service (and regular mkl too) it said it was already installed.

The same thing happened to me. I could only work around it by removing the "mkl" import in local/core/imports.py for the time being. Hope someone can help solve the problem.
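A sketch of one temporary way to do that, guarding rather than deleting the import so nothing else in the file needs to change (not the official fix, just a stopgap):

# Guarded import: lets environments without mkl (e.g. Colab) keep working.
try:
    import mkl
except ImportError:
    mkl = None  # treat mkl as optional until the import is removed upstream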

I'll remove that import soon. In the meantime just remove it as @fanyi suggested.

1 Like

Will do, thanks Jeremy! :slight_smile:

On the high-level API question earlier: is your finalized vision for it a Pipeline + .databunch()? Or is there any intention of doing it in one step? (I don't see how to do that exactly, so I'm just pondering your thoughts/intentions, like how we could build an ImageDataBunch in one go.)

Sorry @muellerzr, I don't understand your question. Could you provide more context and detail please? Perhaps also some examples?

Realized I answered my own question there, my apologies! :slight_smile:

A separate question on test sets, though: Sylvain mentioned test sets were covered at the end of the Imagenette tutorial (nb 22), but I did not see a test set in there.

Do you know which notebook I should look in to see how to do this? :slight_smile:

In 38_tutorial_ulmfit.ipynb, when I run:
learn = language_model_learner(dbunch, AWD_LSTM, vocab, metrics=[accuracy, Perplexity()], path=path, opt_func = partial(Adam, wd=0.1)).to_fp16()

I get an error "not a gzip file". Any suggestions from you experts?

Ah yes, the example has been removed. Though the function is documented with an example.

1 Like

How do we use the new test_dl? Do we replace our Learner's data? Or create a new DataBunch with all three? And how do we get predictions from it? (A bit lost on that end of it.)

Learner.get_preds takes a dataloader if you pass it with dl=...

1 Like

Awesome! Thanks :slight_smile:
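For reference, a rough sketch of how those two pieces could fit together. The get_preds(dl=...) part is what Sylvain described above; the test_dl call signature here is an assumption on my part, so double-check it against the imagenette tutorial:

# Build a DataLoader for new items with the same transforms as training,
# then get predictions from it via get_preds(dl=...).
dl = test_dl(learn.dbunch, test_items)  # assumed signature; see 21_tutorial_imagenette
preds, _ = learn.get_preds(dl=dl)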

Any interest in a test_warns function for 00_test? The Python warnings library has a nice context manager for testing that we could wrap. I'd be happy to do it, or let me know if there's a better way.

#export
import warnings

def test_warns(f, args=None, kwargs=None, show=False):
    "Check that calling `f(*args, **kwargs)` raises at least one warning"
    args, kwargs = args or [], kwargs or {}
    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("always")  # record every warning, not just the first occurrence
        test_eq(len(w), 0)               # sanity check: nothing recorded yet
        f(*args, **kwargs)
        test_ne(len(w), 0)               # f should have raised at least one warning
        if show:
            for e in w: print(f"{e.category}: {e.message}")

It'd also be pretty easy to add a regex or msg argument to make sure a specific warning is given, similar to how test_stdout works; not sure if it's worth the trouble, though.
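For what it's worth, a quick sketch of what that optional check could look like (a hypothetical msg regex argument and function name, independent of how test_stdout does it):

import re, warnings

def test_warns_msg(f, args=None, kwargs=None, msg=None):
    "Like test_warns, but optionally check the warning text against a regex"
    args, kwargs = args or [], kwargs or {}
    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("always")
        f(*args, **kwargs)
        assert w, "expected at least one warning"
        if msg is not None:
            assert any(re.search(msg, str(e.message)) for e in w), f"no warning matched {msg!r}"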