In kaggle kernels, yes.
Thank you for this!
Have you (or anyone else) been able to import vision.core in Colab? I'm getting
AttributeError Traceback (most recent call last)
<ipython-input-18-755b6a8da78b> in <module>()
----> 1 from local.vision.core import *
/content/fastai_dev/dev/local/vision/core.py in <module>()
13
14 #Cell
---> 15 _old_sz = Image.Image.size.fget
16 @patch_property
17 def size(x:Image.Image): return Tuple(_old_sz(x))
AttributeError: type object 'Image' has no attribute 'size'
This works fine locally, so I'm wondering what the important difference is.
Thank you so much for your hard work. I'm really looking forward to v2. The feature I want most is GAN support; I have never seen any discussion of GANs for v2, so I'd like to know if there are plans for them!
@Jeremy bringing this to your attention, as I'm getting this on my end too. I also tried directly forcing PIL (import PIL.Image) just to make sure it wasn't a mixed-import issue, but that did not work. Any ideas?
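For context on why `_old_sz = Image.Image.size.fget` can fail: `.fget` only exists when `size` is a `property` on the class, so an environment where `size` is a plain instance attribute (as in older PIL builds) raises exactly this class-level AttributeError, and upgrading Pillow is the likely fix. A toy sketch of the pattern (the `Img` class is a hypothetical stand-in, not real PIL):

```python
class Img:
    """Stand-in for PIL's Image class (toy example, not real PIL)."""
    def __init__(self): self._sz = (3, 4)
    @property
    def size(self): return self._sz

# Saving the original getter only works when `size` is a property at the
# class level; property objects expose their getter via .fget.
old_sz = Img.size.fget

# Re-patch with a wrapper, mirroring fastai's patch of Image.size.
Img.size = property(lambda self: tuple(reversed(old_sz(self))))

print(Img().size)  # (4, 3)
```

If `size` were set per-instance in `__init__` instead, `Img.size` would fail at the class level with "type object 'Img' has no attribute 'size'", matching the Colab traceback.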
I'm trying to run notebook tests from the command line and getting an error. What am I doing wrong?
> python run_notebook.py --fn 00_test.ipynb
Error in 00_test.ipynb
Traceback (most recent call last):
File "run_notebook.py", line 18, in <module>
slow:Param("Run slow tests", bool)=False, cpp:Param("Run tests that require c++ extensions", bool)=False):
File "/home/jupyter/fastai_dev/dev/local/script.py", line 37, in call_parse
func(**args.__dict__)
File "run_notebook.py", line 24, in main
for f in sorted(fns): test_nb(f, flags=flags)
File "/home/jupyter/fastai_dev/dev/local/notebook/test.py", line 113, in test_nb
raise e
File "/home/jupyter/fastai_dev/dev/local/notebook/test.py", line 110, in test_nb
ep.preprocess(pnb)
TypeError: preprocess() missing 1 required positional argument: 'resources'
You need to update nbconvert to the latest version.
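The update helps because newer nbconvert made `resources` optional in `ExecutePreprocessor.preprocess`, while older versions required it as a positional argument, which is exactly the TypeError above. A toy sketch of the signature difference (simplified stand-ins, not the real nbconvert API):

```python
# Old-style signature: `resources` is a required positional argument.
def preprocess_old(nb, resources):
    return nb, resources

# New-style signature: `resources` is optional, so a one-argument call works.
def preprocess_new(nb, resources=None):
    return nb, resources if resources is not None else {}

try:
    preprocess_old({"cells": []})
except TypeError as e:
    print(e)    # missing 1 required positional argument: 'resources'

nb, res = preprocess_new({"cells": []})
print(res)      # {}
```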
fastai v2 is now compatible with pytorch 1.3 (and remains compatible with 1.2).
Do you plan to utilize the named tensors feature in any way, or will you avoid doing so to ensure compatibility with 1.2?
We plan to use named tensors once PyTorch considers them stable.
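As a small illustration of what named tensors buy, a sketch assuming PyTorch >= 1.3 (the feature was marked experimental when introduced):

```python
import torch

# Named tensors: dimensions carry names, and ops can refer to dims by
# name instead of by position.
t = torch.zeros(2, 3, names=('N', 'C'))
print(t.names)            # ('N', 'C')

s = t.sum('C')            # reduce over the channel dim by name
print(s.names, s.shape)   # ('N',) torch.Size([2])
```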
- Zoom transform - does it only zoom in and not out? There's a max_zoom (>=1.0) parameter, but I couldn't find a way to let it zoom, say, from x0.6 (smaller) to x1.3 (larger).
- Where is the "suggested LR" from lr_find? It's mentioned in the v1 docs, but I don't see it now.
- I was looking for a way to quickly pickle some data structure, and stumbled upon this post about not using pickle and instead using MessagePack (which I haven't tried and have no experience with). What is your opinion on that matter? For example, when exporting a model/learner.
- What's the equivalent of PyTorch's DataLoader sampler for balancing classes, to prevent overfitting on an imbalanced dataset?
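On the pickle vs. MessagePack question above, a quick stdlib sketch of the trade-off (the state dict is hypothetical):

```python
import pickle

# Round-trip with stdlib pickle: it handles arbitrary Python objects
# (which is what exporting a model/learner needs), but loading untrusted
# pickles can execute arbitrary code.
state = {'epochs': 5, 'lr': 1e-3, 'metrics': [0.91, 0.94]}
blob = pickle.dumps(state)
print(pickle.loads(blob) == state)   # True

# MessagePack (the third-party `msgpack` package, not used here) only
# covers JSON-like data, so it suits plain configs/metrics but cannot
# serialize arbitrary objects such as a whole learner.
```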
(moved everything to previous post)
Where does the DataBunch.c attribute get set? It's used in the get_c() function, but I couldn't find the code that sets it.
It may be set by one of the transforms.
Hello,
I am unsure if these questions are mainly for v2 or for both v1 and v2. I might be able to answer some of them:
- I am unsure, but I guess you could probably try this yourself and see.
- Don't know.
- For fastai v1 you could do learn.recorder.plot(suggestion=True), but I am not sure if it is the same for v2.
- Don't know.
- For fastai v1 we have the OverSamplingCallback, which uses the weighted random sampler. I will probably write a version for fastai v2 in the next couple of weeks.
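The weighted-random-sampler idea mentioned above boils down to giving each sample a weight inversely proportional to its class frequency. A minimal sketch of computing that weight vector (the labels are made up):

```python
from collections import Counter

# Per-sample weights for an imbalanced dataset: weight each sample by the
# inverse frequency of its class -- the vector you would hand to
# torch.utils.data.WeightedRandomSampler.
labels = [0, 0, 0, 0, 0, 0, 1, 1, 2]
counts = Counter(labels)
weights = [1.0 / counts[y] for y in labels]

print(counts[0], counts[1], counts[2])   # 6 2 1
print(weights[-1])                       # 1.0 (rarest class, highest weight)
```

With PyTorch this would typically be passed as `WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)` and given to the DataLoader's `sampler` argument.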
I can connect to WSL and run python/jupyter, but for fastai2 it reports "No module named 'local'". Any idea what I am missing?
See resolution here:
I was referring to @jeremy's plans for v2, exactly because some of this was available in v1, as you mentioned.
Ah ok, I will try to answer again based on what's in the code now:
- Here is the code for the zoom. It seems it uses PyTorch's .uniform_ from 1 to max_zoom, but I am not sure if it will work if max_zoom<1 and the upper bound is actually smaller than the lower bound.
- Not sure.
- The code for the LR finder plot is here. There is no suggestion argument right now.
- Not sure.
- Again, I will develop this as a callback soon.
I hope this helps answer some of your questions.
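On the zoom-bounds question above: whether torch's in-place `.uniform_(a, b)` tolerates a > b is version-dependent, but drawing from an explicit (min_zoom, max_zoom) range sidesteps the issue entirely. A minimal Python sketch (values are illustrative):

```python
import random

# Zoom factors drawn uniformly between a min and a max: allowing zoom-out
# just means choosing min_zoom < 1.0.
min_zoom, max_zoom = 0.6, 1.3
zooms = [random.uniform(min_zoom, max_zoom) for _ in range(1000)]

print(min(zooms) >= min_zoom, max(zooms) <= max_zoom)   # True True
```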
There's weighted_databunch now, although it's not well tested.
I was not aware of this. Does this mean there is no need for an OverSamplingCallback like there was for fastai v1?