Fastai v2 chat

In Kaggle kernels, yes.

Thank you for this!

Have you (or anyone else) been able to import vision.core in Colab? I'm getting

AttributeError                            Traceback (most recent call last)

<ipython-input-18-755b6a8da78b> in <module>()
----> 1 from local.vision.core import *

/content/fastai_dev/dev/local/vision/core.py in <module>()
     13 
     14 #Cell
---> 15 _old_sz = Image.Image.size.fget
     16 @patch_property
     17 def size(x:Image.Image): return Tuple(_old_sz(x))

AttributeError: type object 'Image' has no attribute 'size'

This works fine locally, so I'm wondering what the important difference is :thinking:
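
If anyone wants to narrow this down, a quick check of the Pillow version in Colab might help (an assumption on my part: Image.Image.size only exists as a class-level property in newer Pillow releases, which would explain the AttributeError):

    import PIL.Image

    # Older Pillow builds expose `size` per-instance rather than as a class property,
    # in which case `Image.Image.size.fget` cannot work.
    print(getattr(PIL, "__version__", getattr(PIL, "PILLOW_VERSION", "unknown")))
    print(hasattr(PIL.Image.Image, "size"))  # False would explain the AttributeError above

If the second line prints False, upgrading Pillow and restarting the runtime is probably worth trying.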

Thank you so much for your hard work. I'm really looking forward to v2. The feature I want most is GAN support. I have never seen any discussion of GANs for v2, so I want to know if there are any plans for them!

@Jeremy bringing this to your attention as I'm getting this on my end too. I also tried directly forcing PIL (import PIL.Image) just to make sure it wasn't a mixed import issue, but that did not work. Any ideas?

I'm trying to run notebook tests from the command line and getting an error. What am I doing wrong?

> python run_notebook.py --fn 00_test.ipynb 

Error in 00_test.ipynb
Traceback (most recent call last):
  File "run_notebook.py", line 18, in <module>
    slow:Param("Run slow tests", bool)=False, cpp:Param("Run tests that require c++ extensions", bool)=False):
  File "/home/jupyter/fastai_dev/dev/local/script.py", line 37, in call_parse
    func(**args.__dict__)
  File "run_notebook.py", line 24, in main
    for f in sorted(fns): test_nb(f, flags=flags)
  File "/home/jupyter/fastai_dev/dev/local/notebook/test.py", line 113, in test_nb
    raise e
  File "/home/jupyter/fastai_dev/dev/local/notebook/test.py", line 110, in test_nb
    ep.preprocess(pnb)
TypeError: preprocess() missing 1 required positional argument: 'resources'

You need to update nbconvert to the latest version.
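
If it helps to confirm, here is a quick version check (assuming the error comes from an older nbconvert where ExecutePreprocessor.preprocess() still required the resources argument):

    import nbconvert

    print(nbconvert.__version__)  # if this is old, `pip install -U nbconvert` and retry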

fastai v2 is now compatible with PyTorch 1.3 (and remains compatible with 1.2).

Do you plan to utilize the named tensors feature in any way, or will you avoid doing so to ensure compatibility with 1.2?

We plan to use named tensors once PyTorch considers them stable.
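
For reference, a minimal sketch of the named tensors API that shipped as experimental in PyTorch 1.3 (plain PyTorch, nothing fastai-specific):

    import torch

    # Factory functions accept a `names` argument in PyTorch >= 1.3.
    t = torch.zeros(2, 3, names=("batch", "channel"))
    print(t.names)           # ('batch', 'channel')
    print(t.sum("channel"))  # reduce over a dimension by name instead of by index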

  1. Zoom transform: does it only zoom in and not out? There's a max_zoom (>=1.0) parameter, but I couldn't find a way to let it zoom from, say, x0.6 (smaller) to x1.3 (larger).

  2. Any plan to incorporate AutoAugment? [1] [2] [3]

  3. Where is the "suggested LR" from lr_find? It's mentioned in the v1 docs, but I don't see it now.

  4. I was looking for a way to quickly pickle some data structures, and stumbled upon this post recommending MessagePack instead of pickle (which I haven't tried and have no experience with).
    What is your opinion on that matter? For example, when exporting a model/learner.

  5. What's the equivalent of PyTorch's DataLoader sampler for rebalancing classes, to prevent overfitting on an imbalanced dataset? (A plain PyTorch sketch follows after this list.)
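
On question 5, the plain PyTorch building block is WeightedRandomSampler; here is a minimal sketch with hypothetical labels (the fastai-specific wrappers are discussed in the replies below):

    import torch
    from torch.utils.data import WeightedRandomSampler

    labels = torch.tensor([0, 0, 0, 0, 1])        # hypothetical imbalanced class labels
    class_counts = torch.bincount(labels).float()
    sample_weights = 1.0 / class_counts[labels]   # weight each sample by 1 / its class frequency
    sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
    # Passing `sampler=sampler` to a DataLoader then draws the classes roughly uniformly.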

(moved everything to previous post)

Where does the DataBunch.c attribute get set? It's used in the get_c() function, but I couldn't find the code that sets it.

It may be set by one of the transforms.

Hello,

I am unsure if these questions are mainly about v2 or about both v1 and v2. I might be able to answer some of them:

  1. I am unsure, but I guess you could probably try this yourself and see :slightly_smiling_face:
  2. Don't know
  3. For fastai v1 you could do learn.recorder.plot(suggestion=True), but I am not sure if it is the same for v2 (see the sketch after this list).
  4. Don't know
  5. For fastai v1 we have the OverSamplingCallback that uses the weighted random sampler. I will probably write a version for fastai v2 in the next couple of weeks.
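
For point 3, the fastai v1 pattern looks roughly like this (a small sketch using the MNIST sample just as a quick example; v2 may differ):

    from fastai.vision import *  # fastai v1 imports

    path = untar_data(URLs.MNIST_SAMPLE)
    data = ImageDataBunch.from_folder(path)
    learn = cnn_learner(data, models.resnet18, metrics=accuracy)
    learn.lr_find()
    learn.recorder.plot(suggestion=True)  # marks the suggested learning rate on the plot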

I can connect to WSL and run Python/Jupyter, but for fastai v2 it reports No module named 'local'. Does anyone know what I am missing?

See resolution here:
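
For anyone hitting the same error: an assumption, not necessarily the linked resolution, is that Python/Jupyter was started outside the fastai_dev/dev directory that contains the local package, in which case something like this works around it:

    import sys

    # Hypothetical path; point it at the `dev` folder of your fastai_dev checkout.
    sys.path.insert(0, "/home/you/fastai_dev/dev")
    from local.vision.core import *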

I was referring to @jeremy's plans for v2, precisely because some of this was already available in v1, as you mentioned.

Ah OK, I will try to answer again based on what's in the code now:

  1. Here is the code for the zoom. It seems to use PyTorch's .uniform_ from 1 to max_zoom, but I am not sure whether it will work if max_zoom < 1 and the upper bound is actually smaller than the lower bound (a standalone check follows after this list).
  2. Not sure
  3. The code for the LR finder plot is here. There is no suggestion argument right now.
  4. Not sure
  5. Again, I will develop this as a callback soon.
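
On point 1, the underlying PyTorch call can at least be checked on its own (a standalone check of Tensor.uniform_, not the fastai transform itself):

    import torch

    # Tensor.uniform_(a, b) samples from [a, b); with a=1 and b=max_zoom this only
    # produces zoom-in factors. Whether b < a (i.e. max_zoom < 1) behaves sensibly
    # is exactly the open question above.
    factors = torch.empty(5).uniform_(1.0, 1.3)
    print(factors)  # values in [1.0, 1.3)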

I hope this helps answer some of your questions.

There's weighted_databunch now, although it's not well tested.

I was not aware of this. Does this mean there is no need for an OverSamplingCallback like there was for fastai v1?