Where does the `DataBunch.c` attribute get set? It's used in the `get_c()` function, but I couldn't find the code that sets it.
It may be set by one of the transforms.
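In fastai, `c` conventionally means the number of target classes, so it would typically come from the label transform that builds the category vocab. A hypothetical, framework-free sketch of that idea (the class and attribute names here are illustrative, not the actual fastai2 source):

```python
class Categorize:
    # Illustrative label transform: builds a vocab from the raw labels
    # and exposes its size, which a DataBunch-like object could then
    # surface as `c` (assumption: this mirrors how a transform sets it).
    def __init__(self, labels):
        self.vocab = sorted(set(labels))

    @property
    def c(self):
        return len(self.vocab)

tfm = Categorize(['cat', 'dog', 'cat', 'fish'])
assert tfm.c == 3  # three distinct classes
```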
I am unsure if these questions are mainly for v2 or for both v1 and v2. I might be able to answer some of them:
- I am unsure, but you could probably try this yourself and see
- Don’t know
- For fastai v1, you could do `learn.recorder.plot(suggestion=True)`, but I am not sure if this is the same for v2
- Don’t know
- For fastai v1 we have the `OverSamplingCallback` that uses the weighted random sampler. I will probably write a version for fastai v2 in the next couple of weeks.
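On the `suggestion=True` point above: IIRC, fastai v1's suggestion picks the learning rate where the smoothed loss curve falls fastest, i.e. the minimum numerical gradient. A rough stdlib-only sketch of that idea (a simplification, not the actual fastai implementation):

```python
def suggest_lr(lrs, losses):
    # Central-difference slope of the loss at each interior point;
    # the suggested LR is where the loss drops most steeply.
    grads = [losses[i + 1] - losses[i - 1] for i in range(1, len(losses) - 1)]
    steepest = min(range(len(grads)), key=lambda i: grads[i])
    return lrs[steepest + 1]  # +1: grads[i] corresponds to lrs[i + 1]

lrs = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1.0]
losses = [2.0, 1.9, 1.5, 0.9, 1.4, 3.0]
print(suggest_lr(lrs, losses))  # -> 0.001
```

In practice the losses would be smoothed (e.g. an exponential moving average) before taking the gradient, since the raw curve is noisy.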
I can connect to WSL and run python/jupyter, but for fastai2 it reports `No module named 'local'`, so I'm not sure what I'm missing.
See resolution here:
I was referring to @jeremy's plans for v2, precisely because some of this was available in v1, as you mentioned.
Ah ok I will try to answer again based on what’s in the code now:
- Here is the code for the zoom. It seems to use PyTorch's `.uniform_` from 1 to `max_zoom`, but I am not sure it will work if `max_zoom < 1`, since the upper bound would then be smaller than the lower bound.
- Not sure
- The code for the LR finder plot is here. There is no `suggestion` argument right now.
- Not sure
- Again, I will develop this as a callback soon.
I hope this helps answer some of your questions.
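To make the zoom worry above concrete: if the scale is drawn uniformly from `(1, max_zoom)`, then `max_zoom < 1` inverts the bounds. A pure-Python sketch (using `random.uniform` as a stand-in for `Tensor.uniform_`) of one possible guard, sorting the bounds first:

```python
import random

def draw_zoom(max_zoom):
    # Mirrors uniform_(1., max_zoom), which assumes max_zoom >= 1.
    # When max_zoom < 1 (zooming out) the bounds are inverted, which is
    # exactly the concern above; sorting them makes the draw well-defined.
    lo, hi = min(1.0, max_zoom), max(1.0, max_zoom)
    return random.uniform(lo, hi)

random.seed(0)
z = draw_zoom(0.5)   # zooming out: max_zoom < 1 still gives a valid scale
assert 0.5 <= z <= 1.0
z = draw_zoom(1.3)   # zooming in: the usual case
assert 1.0 <= z <= 1.3
```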
There is a `weighted_databunch` now, although it's not well tested.
I was not aware of this. Does this mean there is no need for an `OverSamplingCallback` like there was for fastai v1?
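For context, the core of oversampling is just computing one weight per sample, inversely proportional to its class frequency, and handing those weights to PyTorch's `torch.utils.data.WeightedRandomSampler`. A stdlib-only sketch of the weight computation (my reading of the technique, not the callback's actual code):

```python
from collections import Counter

def oversample_weights(labels):
    # One weight per sample, inverse to its class frequency, so rare
    # classes are drawn as often as common ones under weighted sampling.
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]

labels = [0, 0, 0, 0, 1]          # imbalanced: class 1 is rare
w = oversample_weights(labels)
# the class-1 sample gets 4x the weight of each class-0 sample
assert w[-1] / w[0] == 4.0
```

These weights would then be passed as `WeightedRandomSampler(w, num_samples=len(w))` to build the training DataLoader.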
Does `drop_last` equal `shuffle`?
That's right. I just added this thingie the other day.
Generally you just want your training set to have `shuffle=True`.
So it's implicit there that the training set has `shuffle=True` and the test set has `shuffle=False`?
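A minimal pure-Python sketch of the batching semantics being discussed. The pairing of defaults (training: shuffle and drop the last partial batch; validation: neither) is my reading of the exchange above, not verified against the v2 source:

```python
import random

def batches(items, bs, shuffle=False, drop_last=False):
    # Simplified DataLoader-style batching: optionally shuffle indices,
    # slice into batches of size bs, optionally drop a short final batch.
    idxs = list(range(len(items)))
    if shuffle:
        random.shuffle(idxs)
    out = [[items[i] for i in idxs[b:b + bs]] for b in range(0, len(idxs), bs)]
    if drop_last and out and len(out[-1]) < bs:
        out.pop()
    return out

data = list(range(10))
train = batches(data, bs=4, shuffle=True, drop_last=True)  # 2 full batches
valid = batches(data, bs=4)                                # 3 batches, order kept
assert [len(b) for b in train] == [4, 4]
assert valid[0] == [0, 1, 2, 3] and len(valid) == 3
```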
Are there any end-to-end training examples where we can see models (especially vision) training using the high-level API? I checked around the nbs but, besides a brief example in `21_tutorial_imagenette`, didn't see any.
I just got one working for audio: 97% accuracy on speaker recognition using basic `fit` and `xresnet18`, even though the rest of our audio stuff is still pretty buggy.
IIRC Jeremy said the high-level API isn't quite done yet; just the medium and low levels are.
Do we have an example of inference on a test set? (Or how to add that to the databunch?)
At the end of the imagenette tutorial, there is an example of adding a test set (there is a `test_dl` function for that). Note that this part is still under development and isn't thoroughly tested yet.
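Whatever `test_dl` ends up returning, the inference pass itself is conceptually simple: iterate the test items in their original order (no shuffling, no dropped batches) and collect the outputs. A framework-free sketch, where `model` is just a stand-in callable mapping a batch to predictions:

```python
def get_preds(model, test_items, bs=2):
    # Collect predictions over an ordered, shuffle-free pass so each
    # output lines up with its input item (the key property for test sets).
    preds = []
    for b in range(0, len(test_items), bs):
        preds.extend(model(test_items[b:b + bs]))
    return preds

double = lambda batch: [2 * x for x in batch]   # stand-in "model"
assert get_preds(double, [1, 2, 3]) == [2, 4, 6]
```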
Do we know the equivalent of a `pip install` for `mkl`? I cannot import anything from `fastai2.basics` due to an import error: `No module named 'mkl'`. It seems to be installable only via `conda` (but I cannot do that, as I'm using Google Colab). Thoughts?
Edit: for those who do not know (like me until just now!): pip is for Python packages, and mkl is not a Python package.
Edit x2: things I have tried:
I tried following this to install conda, and when I double-checked and ran `!conda install mkl-service` (and plain `mkl` too), it said it was already installed.
The same thing happened to me. For the time being, I could only work around it by removing the `mkl` import in `local/core/imports.py`. I hope someone can help solve the problem.