Developer chat

`create_cnn` is a function; it's not inside `Learner`.

Do the docs need to be updated then? It says here:

`learn = Learner.create_cnn(data, models.resnet18, metrics=accuracy)`
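
Going by the reply above, presumably the docs should show the plain function form instead, e.g.:

```python
learn = create_cnn(data, models.resnet18, metrics=accuracy)
```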

Yes, orig_user_name can be made into a parameter, and then you could use the script with any GitHub project.
That's why I called it fastai-make-pr-branch - it hardwires the fastai user :wink:

The only custom thing in the script is that it runs `tools/run-after-git-clone` if it finds it in the repo.


Hey all,

Just sharing this issue here with the validation set random seed.
https://forums.fast.ai/t/lesson-1-pets-benchmarks/27681/55?u=jamesrequa

Please feel free to verify this on your end as well. Steps to reproduce (sketched in code after the list):

  1. Set a random seed in the Jupyter notebook: `np.random.seed(2)`
  2. Create an `ImageDataBunch`: `data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=299, bs=bs, valid_pct=0.2)`
  3. Save the train/valid x for later: `trn_x, val_x = data.train_ds.x, data.valid_ds.x`
  4. Create a new `ImageDataBunch`, repeating step 2.
  5. Check the train/valid x again for this new data instance: `trn_x2, val_x2 = data.train_ds.x, data.valid_ds.x`. Compare with the first train/valid set and verify they are not the same.
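
A minimal sketch of those steps (assuming the pets-notebook variables `path_img`, `fnames`, `pat` and `bs` are already defined):

```python
import numpy as np
from fastai.vision import ImageDataBunch, get_transforms

np.random.seed(2)  # step 1: seed once, in its own cell

# step 2: valid_pct=0.2 draws a *random* 20% validation split
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
                                   size=299, bs=bs, valid_pct=0.2)
trn_x, val_x = data.train_ds.x, data.valid_ds.x  # step 3

# step 4: recreate *without* re-seeding -- the RNG state has moved on
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
                                   size=299, bs=bs, valid_pct=0.2)
trn_x2, val_x2 = data.train_ds.x, data.valid_ds.x  # step 5: differs from val_x
```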

I have already implemented the code changes that fix this issue, so if you like I can submit a PR :slight_smile: I think this is pretty important to fix right away, as it can result in validation loss/error-rate numbers that are not reliable, and it can happen in a very innocent way, e.g. if you just wanted to change the batch size or image size (as we saw, one student achieved a 1% error rate on the pets dataset for this reason).

I don't have that issue. The thing is, you say to repeat the creation from step 2, but you have to go back to step 1 and reset the seed to 2 first; then your validation set will be the same.
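
In code, the missing line is just the re-seed before repeating step 2:

```python
np.random.seed(2)  # reset the seed first...
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
                                   size=299, bs=bs, valid_pct=0.2)  # ...then the split matches
```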


Hi Sylvain, good to see you here!

Thank you for confirming this! I was not re-running the code cell `np.random.seed(2)` when I went to re-create the ImageDataBunch. My guess is that others did the same.

I would suggest updating the notebooks so that this seeding is done in the same cell block as the creation of the ImageDataBunch, to avoid something like this happening to others :slight_smile:

Alternatively, passing the seed value as a parameter to ImageDataBunch would ensure this mix-up never happens (more user-friendly imho), but I realize this affects the code, so it's probably less desirable.

Good idea.

Hi guys, I am working on a segmentation dataset. The current segmentation dataset assumes the mask is a file, but I often encounter the mask as a run-length-encoded string, so I wrote a function that takes the string to a fastai Image.

I am not sure whether I should combine open_mask and open_mask_rle_encoded into one function. I can make a PR for this later if it is useful.

Also, a follow-up question on segmentation masks: is it preferable to convert the mask to a file beforehand, or to decode it in a PyTorch dataset at run time? I am wondering whether I need to modify the current SegmentationDataset to take RLE-encoded masks.
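
For reference, a minimal sketch of such a decoder. The name `open_mask_rle_encoded` above is the poster's; this `rle_decode` helper is an illustrative assumption, using the Kaggle-style format of alternating 1-based start/length pairs over the flattened, column-major image:

```python
import numpy as np

def rle_decode(rle: str, shape: tuple) -> np.ndarray:
    "Decode a run-length-encoded string into a binary mask of `shape` (height, width)."
    s = rle.split()
    starts = np.asarray(s[0::2], dtype=int) - 1   # RLE positions are 1-based
    lengths = np.asarray(s[1::2], dtype=int)
    mask = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    for start, length in zip(starts, lengths):
        mask[start:start + length] = 1
    return mask.reshape(shape, order='F')         # column-major, per the Kaggle convention
```

Wrapping the resulting array in a tensor would then give the fastai Image the poster mentions.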

It’s best to write your own dataset class.

Converting RLE to a mask every cycle is likely to be very slow if you have more than a few hundred images. IMHO it is best to decode RLE masks into PNGs at the outset. Where possible, I prefer to munge data into formats a framework like fastai accepts rather than write bespoke classes.
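
A sketch of that one-off conversion, reusing the hypothetical `rle_decode` above (the CSV name, column names and 768×768 size are assumptions):

```python
from pathlib import Path
import pandas as pd
from PIL import Image

df = pd.read_csv('train_masks.csv')           # assumed columns: ImageId, EncodedPixels
out_dir = Path('masks'); out_dir.mkdir(exist_ok=True)
for img_id, rle in zip(df.ImageId, df.EncodedPixels):
    mask = rle_decode(rle, (768, 768))        # decode once, up front...
    Image.fromarray(mask * 255).save(out_dir / f'{Path(img_id).stem}.png')  # ...save as 8-bit PNG
```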

I believe that fastai doesn't work with 16-bit grayscale - this is a big issue for medical images.
My current solution is to replace open_image with my own implementation, but this is not a durable solution. I therefore traced the flow: open_image => pil2tensor.

@sgugger I have now created the post as an issue on GitHub so that it doesn't get lost: https://github.com/fastai/fastai/issues/1018

For pil2tensor I have a simple proposal: [pil2tensor notebook](https://github.com/albertnaur/fastaiNotebooks/blob/master/pil2tensor/pil2TensorTests.ipynb). The implementation is slightly faster, works for RGB and 16-bit grayscale (and probably most other formats, since it makes no assumptions about the format), and may even use less memory because image.tobytes() is avoided:

```python
import numpy as np
import torch
from torch import Tensor

TensorImage = Tensor  # fastai's tensor type alias

def pil2tensor(image) -> TensorImage:
    "Handles any format numpy can read from a PIL image."
    arr = torch.from_numpy(np.asarray(image))
    arr = arr.view(image.size[1], image.size[0], -1)  # PIL's .size is (width, height)
    return arr.permute(2, 0, 1)                       # HWC -> CHW
```

Things are more complicated for:

  1. open_image, where we need to attach the "conversion flag" and the divisor to the dataset:
     • grayscale: `x = PIL.Image.open(fn).convert('I')` # must be divided by 65536
     • RGB: `x = PIL.Image.open(fn).convert('RGB')` # must be divided by 255
     => I can now override `def _get_x(self, i):` in the new version (1.0.16.dev0) and open the image in the notebook; that solves the flag and divide-by issue (see the sketch after this list).
  2. show_image, where we need to be able to attach the colormap to the dataset.
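
A sketch of that override, reusing the proposed pil2tensor above (the dataset class itself is an illustrative assumption, not library code):

```python
import PIL
from fastai.vision import Image  # fastai's Image wrapper

class Gray16Dataset:
    "Hypothetical dataset whose _get_x opens 16-bit grayscale files."
    def __init__(self, fns): self.x = fns

    def _get_x(self, i):
        pil_img = PIL.Image.open(self.x[i]).convert('I')   # 16-bit data as 32-bit ints
        # pil2tensor here is the proposed version above, not the library one
        return Image(pil2tensor(pil_img).float() / 65536)  # attach the divisor here
```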

How do we do this so that it works as fluently behind the scenes as the current solution does?

A complicated example would be an image-segmentation dataset where the image is 16-bit grayscale and the mask is 8-bit and shown in colors (e.g. tab20 or another colormap).


Big new changes in the library (normally all backward compatible, don't worry).

  • ImageDataBunch.show_batch() is now unified to work with every task (classification, segmentation and object detection).
  • Image.show has been slightly modified: when we say x.show(y=y, classes=classes), with x the image and y the target in a classification task, a title with the classes is automatically set (like in show_batch()). This title can be overridden with the title argument.
  • Also added tiny versions of planet, camvid and coco yesterday, to use for tests or docs.
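
A quick usage sketch of that Image.show change (assuming a `data` bunch is already built):

```python
x, y = data.train_ds[0]                    # an (Image, target) pair
x.show(y=y, classes=data.classes)          # title is set from the class automatically
x.show(y=y, classes=data.classes, title='my title')  # or override it
```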

Thanks, this solves one of the issues with reading 16-bit images.
Could you have a look at the previous post about the issue with pil2tensor?

I can open an image from the RLE encoding pretty quickly; I have a storage issue, which is why I am wondering whether I should save a copy of the mask. But I agree it would be easier to convert it to a format where I can just use the available factory method to construct the dataset.

Image dimension is 768×768:

620 µs ± 12.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

```python
encoded_str = '7433 1 8199 4 8965 6 9732 8 10498 10 11265 12 12031 15 12798 16 13565 18 14331 20 15098 22 15865 24 16631 26 17398 28 18165 30 18931 32 19698 34 20465 35 21231 38 21998 38 22760 1 22765 37 23526 4 23531 37 24293 42 25059 42 25826 42 26592 42 27358 42 28125 42 28891 42 29657 43 30424 42 31190 42 31956 43 32723 42 33489 43 34256 42 35022 42 35788 42 36555 41 37324 39 38092 37 38861 34 39630 31 40398 30 41167 29 41936 26 42704 24 43473 22 44242 19 45010 18 45779 15 46548 12 47316 11 48085 8 48854 6 49622 4 50391 1'
```
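
Presumably that timing came from something like the following (using the hypothetical `rle_decode` sketched earlier in the thread):

```python
# in a notebook cell; rle_decode is the sketch from earlier in the thread
%timeit rle_decode(encoded_str, (768, 768))
# 620 µs ± 12.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```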

A few more changes: ObjectDetectDataset now takes slightly different arguments, to make it consistent with the rest of the general API. There's a full example of creating one in the data block docs.

This is really a beautiful improvement - thx

Is there a good place to quickly see the list of breaking changes with each release version? If not, where would be a good place to list those?

Relatedly, is there interest in following https://semver.org/, so that versions that are not backward compatible are more clearly identified?

That would be https://github.com/fastai/fastai/blob/master/CHANGES.md.


Thanks, @sgugger. What are thoughts on following semver?