Why does `drop_last` equal `shuffle`?
That's right. I just added this thingie the other day
Generally you just want your training set to have `drop_last`.
So it's implicit there that the training set has `shuffle=True` and the test set has `shuffle=False`?
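For concreteness, the convention being described looks roughly like this with plain PyTorch `DataLoader`s (a sketch; the dataset variables are placeholders, not from this thread):

```python
from torch.utils.data import DataLoader

# train_ds / valid_ds are placeholder Dataset objects.
# Training: reshuffle each epoch and drop the last partial batch so every
# batch has a uniform size; validation/test: keep order and keep all items.
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True,  drop_last=True)
valid_dl = DataLoader(valid_ds, batch_size=64, shuffle=False, drop_last=False)
```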
Are there any end-to-end training examples where we can see models (especially vision) being trained with the high-level API? I checked around the nbs but, besides a brief example in 21_tutorial_imagenette, didn't see any.
I just got one working for audio: 97% accuracy on speaker recognition using basic `fit` and `xresnet18`, even though the rest of our audio stuff is still pretty buggy.
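For anyone wanting a rough idea, a hedged sketch of that kind of run (the module paths and the prebuilt `dbunch` are assumptions; fastai2's API was still moving at the time):

```python
from fastai2.basics import Learner
from fastai2.vision.all import xresnet18, accuracy

# dbunch is assumed to be a DataBunch built earlier from the audio data
learn = Learner(dbunch, xresnet18(pretrained=False), metrics=accuracy)
learn.fit(5, 1e-3)  # plain fit: 5 epochs at a constant learning rate
```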
IIRC Jeremy said the high-level API isn't quite done yet, just the medium and low levels.
Do we have an example of inference on a test set? (Or how to add that to the databunch?)
At the end of the imagenette tutorial, there is an example of adding a test set (there is a `test_dl` function for that). Note that this part is still under development and isn't thoroughly tested yet.
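The pattern from that tutorial looks roughly like this (a sketch; the exact signature may change since this part is under development, and `test_items` is a placeholder list of new items):

```python
# Build a DataLoader over unlabelled test items, reusing the transforms
# of the existing DataBunch
tst_dl = test_dl(dbunch, test_items)
```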
Do we know the equivalent of a `pip install` for `mkl`? I cannot import anything from `fastai2.basics` due to an import error, "No module named 'mkl'", and it seems to be installable only via `conda` (which I cannot use, as I'm on Google Colab). Thoughts?
Edit: for those who do not know (like me just now!), pip is for Python packages; MKL itself is not a Python package.
Edit 2: things I have tried: I followed this to install conda, and when I double-checked and ran `!conda install mkl-service` (and plain `mkl` too), it said it was already installed.
Same thing happened to me. For the time being I can only work around it by removing the `mkl` import in local/core/imports.py. Hope someone can help solve the problem.
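A slightly less invasive version of that workaround is to make the import optional rather than deleting it (a sketch, assuming nothing downstream requires `mkl` unconditionally):

```python
# In local/core/imports.py: fall back gracefully when the mkl Python
# bindings aren't installed (e.g. on Google Colab without conda)
try:
    import mkl
except ImportError:
    mkl = None
```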
Will do, thanks Jeremy!
To the high-level API question earlier: is your finalized version of what you want it to look like a `Pipeline` + `.databunch()`? Or is there any intention of doing it in one step? (I don't see how to do that exactly, so just pondering your thoughts/intentions, like how we could build an `ImageDataBunch` in one go.)
Sorry @muellerzr I don't understand your question. Could you provide more context and detail please? Perhaps also some examples?
Realized I answered my own question there, my apologies!
Although, a separate question on test sets: Sylvain mentioned test sets were covered at the end of the Imagenette tutorial (nb 22), but I did not see a test set in there.
Do you know which notebook I should look in to see how to do this?
In 38_tutorial_ulmfit.ipynb, when I run:

```python
learn = language_model_learner(dbunch, AWD_LSTM, vocab, metrics=[accuracy, Perplexity()],
                               path=path, opt_func=partial(Adam, wd=0.1)).to_fp16()
```

I get an error, "not a gzip file". Any suggestions from the experts?
How do we use the new `test_dl`? Do we replace our `Learner`'s data, or create a new one with all three dataloaders? And how do we get predictions off of it? (A bit lost on that end of it.)
`Learner.get_preds` takes a dataloader if you pass it with `dl=...`
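Putting that together with `test_dl` from earlier (a usage sketch; `tst_dl` is the test DataLoader built above):

```python
# Run inference over the test DataLoader instead of the Learner's own data
preds, _ = learn.get_preds(dl=tst_dl)
```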
Awesome! Thanks
Any interest in a `test_warns` function for 00_test? The Python warnings library has a nice context manager for testing that we could wrap. I'd be happy to do it, or let me know if there's a better way.
```python
#export
import warnings

def test_warns(f, args=None, kwargs=None, show=False):
    "Check that calling `f(*args, **kwargs)` raises at least one warning"
    # test_eq / test_ne are defined earlier in 00_test
    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("always")  # record even previously-seen warnings
        test_eq(len(w), 0)
        f(*(args or ()), **(kwargs or {}))
        test_ne(len(w), 0)
        if show:
            for e in w: print(f"{e.category}: {e.message}")
```
It'd also be pretty easy to add a regex or msg argument to make sure a specific warning is given, similar to how `test_stdout` works; not sure if it's worth the trouble though.
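For what it's worth, a sketch of that variant (the name `test_warns_msg` and its exact behaviour are my suggestion, not an agreed API):

```python
import re
import warnings

def test_warns_msg(f, args=None, kwargs=None, msg=None):
    "Check `f` raises a warning; if `msg` is given, match it as a regex"
    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("always")  # record even previously-seen warnings
        f(*(args or ()), **(kwargs or {}))
        assert w, "no warning was raised"
        if msg is not None:
            assert any(re.search(msg, str(e.message)) for e in w), \
                f"no warning message matched {msg!r}"
```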