Fastai v2 chat

I used it similarly to how the image regression one was done (I pointed to the image folder), IIRC. I don’t have the code in front of me at the moment, though.

I’d recommend looking at the Rossmann example; I think it may help answer some of your questions.


Ok. I looked at your examples, but they use PointBlock… I’ll try some more. :grinning:


Same concept in the end though :wink: If you still can’t get it let me know and I’ll try to find when I did it. Finally jumping back into the code now that the holidays are done :slight_smile:

Also @s.s.o is the dataset publicly available? I’d be interested in that for the study group :slight_smile:

@muellerzr Currently the dataset is not public; we are still collecting it. It’s dental data (not my domain, though). I’m trying to convince my colleagues to make it public.

You may check this fastai user, who shared his data:

https://hackernoon.com/building-an-age-predictor-web-app-using-deep-learning-25f0190ea18f


Hi everyone.

Following the latest instructions for setting up v2 in Google Colab from the “Fastai-v2 - read this before posting please!” thread, I ran into these errors:

  1. First, after installing:
ERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.
ERROR: gql 0.2.0 has requirement graphql-core<2,>=0.5.0, but you'll have graphql-core 2.2.1 which is incompatible.
  2. Second, after from fastai2.vision.all import *:
ImportError                               Traceback (most recent call last)
<ipython-input-1-533e7442bc6c> in <module>()
      1 from fastai2.basics import *
      2 from fastai2.callback.all import *
----> 3 from fastai2.vision.all import *
      4 from fastai2.notebook.showdoc import *
      5 

7 frames
/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py in <module>()
      3 import sys
      4 import math
----> 5 from PIL import Image, ImageOps, ImageEnhance, PILLOW_VERSION
      6 try:
      7     import accimage

ImportError: cannot import name 'PILLOW_VERSION'

Today is Friday, 1/3/2020.
Is there any way to fix this? Thanks.


You’re not alone. I’ve noticed this too.


This looks like an error in torchvision, not fastai (the stack trace ends there), so you should raise an issue in their repo.


This was caused by a new Pillow release.

For now you should install Pillow by doing:

!pip install Pillow==6.2.1
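
If you want to double-check that the workaround took effect, restart the Colab runtime so the downgraded Pillow is actually loaded, then (just as a sanity check, not part of the official setup steps) verify the version before importing fastai2:

import PIL
print(PIL.__version__)              # should print 6.2.1
from fastai2.vision.all import *    # should now import without the PILLOW_VERSION error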

I have a fastai2 model producing promising results and I am wondering about a remark Jeremy made in class: that he did well by training a model on small images and then scaling up to larger images. Two questions:

  1. How do you decide when it’s time to scale up? Is there any useful indicator, or is it just a matter of how much time you have left? I haven’t tried it yet, so I don’t have a gut feeling for how much larger images will slow the training, or how long it will take to get the model back to the same level of accuracy with larger images; and
  2. How big should I go? Is there a limit, either to the image size or to the size increment, beyond which you stop gaining from going larger? The main advantage I see is that the center crop on the test images will be larger.

I know I can figure this out the hard way, but I’m hoping some of you who are much more experienced than I am will have words of wisdom.
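
For anyone else wondering about the mechanics, here is a rough sketch of the progressive-resizing pattern being asked about, using the current fastai2 API; the dataset path, labelling rule, image sizes, batch sizes and epoch counts are all placeholders, and exact keyword names may differ slightly between recent versions:

from fastai2.vision.all import *

path = Path('path/to/images')   # placeholder: folder-per-class image dataset

def get_dbunch(size, bs):
    dblock = DataBlock(blocks=(ImageBlock, CategoryBlock),
                       get_items=get_image_files,
                       get_y=parent_label,          # placeholder labelling rule
                       splitter=RandomSplitter(seed=42))
    return dblock.databunch(path, bs=bs, item_tfms=Resize(460),
                            batch_tfms=[*aug_transforms(size=size), Normalize()])

# Stage 1: train on small images first
learn = cnn_learner(get_dbunch(128, 64), resnet34, metrics=accuracy)
learn.fit_one_cycle(4)

# Stage 2: swap in larger images and keep training the same weights
learn.dbunch = get_dbunch(224, 32)
learn.unfreeze()
learn.fit_one_cycle(4, lr_max=slice(1e-5, 1e-3))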

@sgugger just a minor tidbit I want to be sure of. In the most recent version you adjusted predict, and it seems we can no longer do something like the following:

learn.predict('image1.png')

Instead, I need to make a path first (i.e. path_im = Path('image1.png')). Is this a permanent adjustment?

Thanks! :slight_smile:


I have no idea what your dataset looks like, but, as the error message should have warned you, the predict method expects one of the types encountered while processing your training/validation data.
This is permanent, yes.


It was an image in the local directory, and the warning was that it was not a Path type or an image in the dataset. We used to be able to just pass in a string for the file location and it would convert it to a path object (PathOrStr, IIRC). Got it. Thanks :slight_smile:

Ah wait, I totally misread that. Looking at the lesson 2 example, pred_class,pred_idx,outputs = learn.predict(path/'black'/'00000021.jpg'), it still used a path. Sorry! :slight_smile:
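
So, to sum up for anyone skimming: either form works, as long as predict gets a path rather than a bare string (the file names below are just the placeholders from the posts above):

img_path = Path('image1.png')                      # placeholder file name
pred_class, pred_idx, outputs = learn.predict(img_path)
# ...or build the path with the usual path arithmetic, as in lesson 2:
pred_class, pred_idx, outputs = learn.predict(path/'black'/'00000021.jpg')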

I’m having trouble creating a test set where the get_x and get_y methods aren’t the same as for the training/validation data. I read x and y from a dataframe for the training/validation data, but the test set must be read from a folder.

I’ve tried creating a test databunch from scratch from a DataBlock:

def get_xpath(x): return x
colnames = df.columns #these are the category labels
test_dblk = DataBlock(blocks=(ImageBlock, MultiCategoryBlock()),
                     get_items = get_image_files,
                     get_x = get_xpath,
                     splitter=RandomSplitter(valid_pct=1))

item_tfms = Resize(512)
batch_tfms = [*aug_transforms(size=224), Normalize()]

testbunch = ImageDataBunch.from_dblock(test_dblk, 'testpath', test='test')

It works but I get a warning that it doesn’t know what c is, so I add a vocabulary and set c:

testbunch.vocab = CategoryMap(colnames,sort=False)
testbunch.c = c

All seems good, but then it throws an error when I try to grab one item using

testbunch.valid_ds[0][0]  --> Error: 'PosixPath' object is not iterable

The error is coming from transforms, but it somehow relates to the vocab:

.../fastai2/fastai2/data/transforms.py in encodes(self, o)
    201             self.vocab = CategoryMap(list(vals), add_na=self.add_na)
    202 
--> 203     def encodes(self, o): return TensorMultiCategory([self.vocab.o2i[o_] for o_ in o])
    204     def decodes(self, o): return MultiCategory      ([self.vocab    [o_] for o_ in o])

I tried making everything into lists, but then got other errors. I also tried calling test_dl with the main databunch I use for the train/valid data, but that also failed.

testfiles = get_image_files(testpath)
mytestdl = test_dl(main_dbunch,testfiles) 

In fastai v1, you could add a test set to a DataBunch with data.add_data() but that option has disappeared.

(I’m trying not to ask too many questions, but I’m so stuck!)

If you have different get_x/get_y methods for your test set, the data block API is not going to work for you; you need to dig into the mid-level API. Define your own TfmdList/DataSource for your dataset, then create your test_dl with

test_dl = dbunch.valid_dl.new(dataset=my_datasource_or_tfmd_list)
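
To make that concrete, here is a minimal sketch; testpath, main_dbunch and learn come from the earlier posts, and the x-only transform list is an assumption about what the training pipeline looked like:

test_items = get_image_files(testpath)

# x-only pipeline for the unlabelled test set: just open each file as an image
test_tl = TfmdList(test_items, tfms=[PILImage.create])

# clone the validation DataLoader so the test one reuses the same item/batch
# transforms (Resize, Normalize, ...) but iterates over the new dataset
test_dl = main_dbunch.valid_dl.new(dataset=test_tl)

preds = learn.get_preds(dl=test_dl)   # assuming get_preds accepts a dl argument in this version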

Ah, that explains it! Thanks very much for that clue, Sylvain.

Hello, friends!

I’m fiddling around with fastai2 and it seems that 03_data.core isn’t passing the tests:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-37-023481880db5> in <module>
      1 items = L([1.,2.,3.]); tfms = [neg_tfm, int2f_tfm]
----> 2 tl = TfmdList(items, tfms=tfms)
      3 test_eq_type(tl[0], TitledInt(-1))
      4 test_eq_type(tl[1], TitledInt(-2))
      5 test_eq_type(tl.decode(tl[2]), TitledFloat(3.))

/opt/anaconda3/lib/python3.7/site-packages/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
     39             return x
     40 
---> 41         res = super().__call__(*((x,) + args), **kwargs)
     42         res._newchk = 0
     43         return res

<ipython-input-32-15dcc14be248> in __init__(self, items, tfms, use_list, do_setup, as_item, split_idx, train_setup, splits, types)
     11         self.tfms = Pipeline(tfms, as_item=as_item, split_idx=split_idx)
     12         self.types = types
---> 13         if do_setup: self.setup(train_setup=train_setup)
     14 
     15     def _new(self, items, **kwargs): return super()._new(items, tfms=self.tfms, do_setup=False, types=self.types, **kwargs)

<ipython-input-32-15dcc14be248> in setup(self, train_setup)
     24 
     25     def setup(self, train_setup=True):
---> 26         self.tfms.setup(self, train_setup)
     27         if len(self) != 0:
     28             x,self.types = super().__getitem__(0),[]

TypeError: setup() takes from 1 to 2 positional arguments but 3 were given

Can you reproduce this, or is it just something in my environment?

Thanks!


Hey @mrdbarros! I made an issue on GitHub and @sgugger recommended:

I think you have a conflict between latest fastcore and not latest fastai2. This should be fixed now (please reopen if it is not).

How are you setting up the environment? And what versions do you have for fastai2 and fastcore? I have not had a chance to test out this solution yet today :slight_smile:

Hey @muellerzr. Thanks for the quick response.

I set up the fastai2 environment as an editable install. It is up to date with the repo. fastcore was downloaded as a dependency, I believe…

dataclasses               0.6                        py_0    fastai
fastai                    1.0.59                        1    fastai
fastai2                   0.0.4                     dev_0    <develop>
fastcache                 1.1.0            py37h7b6447c_0
fastcore                  0.1.6                    pypi_0    pypi
fastprogress              0.2.1                    pypi_0    pypi
fastscript                0.1.2                    pypi_0    pypi
nvidia-ml-py3             7.352.0                    py_0    fastai
regex                     2018.01.10      py37h14c3975_1000    fastai
spacy                     2.0.18          py37hf484d3e_1000    fastai
thinc                     6.12.1          py37h637b7d7_1000    fastai 

I’ve just tried to make an editable install for fastcore too, which upgraded its version:

jupyter@fastai-sp-1:~/mrdbarros/fastcore$ conda list | grep fast
    dataclasses               0.6                        py_0    fastai
    fastai                    1.0.59                        1    fastai
    fastai2                   0.0.4                     dev_0    <develop>
    fastcache                 1.1.0            py37h7b6447c_0
    fastcore                  0.1.10                    dev_0    <develop>
    fastprogress              0.2.1                    pypi_0    pypi
    fastscript                0.1.2                    pypi_0    pypi
    nvidia-ml-py3             7.352.0                    py_0    fastai
    regex                     2018.01.10      py37h14c3975_1000    fastai
    spacy                     2.0.18          py37hf484d3e_1000    fastai
    thinc                     6.12.1          py37h637b7d7_1000    fastai

I’m still getting the same error. :thinking:
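
In case it helps with the debugging, one quick way to check that the notebook is actually seeing both editable installs in sync at runtime (the shell commands below are just the usual editable-install update steps, nothing specific to this thread):

import fastai2, fastcore
print(fastai2.__version__, fastcore.__version__)   # both should match the freshly pulled repos

# if either looks stale, update both editable installs together and restart the kernel:
#   cd fastcore && git pull && pip install -e .
#   cd fastai2  && git pull && pip install -e .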

You can do this again; I added a bit that lets us customize the input types each transform can receive at inference.
