Developer chat

Xposting
https://forums.fast.ai/t/fastai-documentation-o/26520/11?u=insoluble

Certainly too early at this point - not sure it’s quite appropriate for something used mainly for prototyping/training. Still thinking about this.

We may need to separate out the part of the API that needs to be stable for production, and have someone to deal with that.

For now, you should read CHANGES.md if you update your fastai lib.

1 Like

Thanks @jeremy. Good to know. Maybe worth putting front and center in the docs that there will be breaking changes, folks should read CHANGES.md, and folks should regularly update their fastai package versions.

That’s a good idea.

Maybe we could version the docs and provide links to the documentation for previous releases.

Can I specify the databunch to be on the CPU device, like databunch(bs=64, device='cpu')?

What do I need to do for the learner to also use the CPU (even though a CUDA device may be available)?
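For reference, something like this is what I'm after (a minimal sketch assuming fastai v1's defaults object from torch_core; I haven't verified it):

    import torch
    from fastai.vision import *

    # Assumption: defaults.device controls where new databunches and learners
    # are created; setting it to CPU before building anything should keep both
    # on the CPU even when a CUDA device is available.
    defaults.device = torch.device('cpu')

    path = untar_data(URLs.MNIST_SAMPLE)        # placeholder dataset
    data = ImageDataBunch.from_folder(path, bs=64)
    learn = create_cnn(data, models.resnet18)   # learner inherits the CPU device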

Thanks much!

The labeled images of CamVid in camvid-tiny are grayscale. I thought that was required only for Caffe.
Am I wrong?

With many thanks to @aayres we now have Google site search enabled for docs.fast.ai! :slight_smile:

2 Likes

Please don’t use this library dev thread for user questions. They can go over here: #fastai-users

I am not sure if I should post such small things (if not, please tell me):
A small inconvenience I discovered when using ClassificationInterpretation with normalize=True is that the numbers are displayed with all their decimals and the chart gets very crowded.

A small change in line 122 of learner.py fixes it so that only 3 decimals are shown: change cm[i, j] to "{:.3f}".format(cm[i, j]).

I uploaded a notebook for visualization: https://nbviewer.jupyter.org/github/MicPie/fastai_course_v3/blob/master/ClassificationInterpretation_normalized.ipynb

If I should create a PR just tell me. :slight_smile:

It would probably be even better to provide something like a num_format='{:.3f}' argument so this value can be overridden as needed.
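For illustration, a standalone sketch of that idea (plot_cm and num_format are made-up names here, not the fastai API):

    import numpy as np
    import matplotlib.pyplot as plt

    # Sketch: the caller controls how each confusion-matrix cell is printed.
    def plot_cm(cm, num_format='{:.3f}'):
        plt.imshow(cm, cmap='Blues')
        for i in range(cm.shape[0]):
            for j in range(cm.shape[1]):
                plt.text(j, i, num_format.format(cm[i, j]), ha='center')
        plt.show()

    cm = np.array([[0.9333333, 0.0666667], [0.1249999, 0.8750001]])
    plot_cm(cm)                       # cells read 0.933, 0.067, ...
    plot_cm(cm, num_format='{:.1%}')  # or as percentages: 93.3%, 6.7%, ...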

1 Like

You can create a PR, but only if you use f-strings, like everywhere else in the library :wink:
Here is an example of a formatted float in an f-string.
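For instance (a minimal illustration of the float format spec, not the exact line needed for the PR):

    value = 0.0666667
    print(f'{value:.3f}')  # prints 0.067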

1 Like

Thank you for the tips, I incorporated both in the PR: https://github.com/fastai/fastai/pull/1052
:smiley:

1 Like

I'm probably totally missing something, so I apologize, but I see the bb_pad_collate function being defined, yet I can't find it actually being used anywhere, so it seems like the default data_collate method is being used for ObjectDetectDataset. If data_collate is being used instead of bb_pad_collate, does this cause an issue, or is bb_pad_collate being used and I'm just not finding it?

A collate function is used to collate the elements of a dataset into batches in a dataloader. There is no dataloader or databunch for object detection inside the library, but when you want to build one, you'll need to use that function. There's an example in the docs (scroll down a bit from that link).
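For reference, a sketch along the lines of that docs example (exact names may differ a bit across fastai v1 versions, e.g. split_by_rand_pct was once called random_split_by_pct):

    from fastai.vision import *

    # Build an object detection databunch by hand, passing bb_pad_collate so
    # that samples with different numbers of bounding boxes can be batched.
    coco = untar_data(URLs.COCO_TINY)
    images, lbl_bbox = get_annotations(coco/'train.json')
    img2bbox = dict(zip(images, lbl_bbox))
    get_y_func = lambda o: img2bbox[o.name]

    data = (ObjectItemList.from_folder(coco)
            .split_by_rand_pct()
            .label_from_func(get_y_func)
            .transform(get_transforms(), tfm_y=True)
            .databunch(bs=16, collate_fn=bb_pad_collate))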

1 Like

Ok. Got it. Thank you!

I noticed the new functionality in verify_image for resizing while thinking about how to implement it myself, and I tried it today on the Open Images dataset from Google (1.7M images). The progress bar behaved strangely: it looked like an empty string was printed once in a while. I debugged the code and found that an assertion error will be empty if no message is provided, and the assertion error was raised because PIL mistakenly opened some of the images as one-channel images (they are all RGB images).

@sgugger it's really awesome that you've added this functionality! Thank you very much for your excellent work! Though I think this PR can speed up the whole process a bit by moving a few operations out of the loop (directory creation, file checking). What I've done here (a sketch of the reworked loop follows the list):

  1. Moved directory creation out of the loop, so we are not calling the os module for every image to check whether the directory exists.
  2. Made the dest path for verify_image required when max_size is set, and made it of Path type. This way we skip creating the dest folder path variable for each image and win a bit of time. I'm not sure about this one, but I don't see when we'd need to call this function outside a loop and with different paths, so it looks like an easy win.
  3. Added a resume param which skips images that already exist in the dest path; we win a lot of time if something unexpected happens in the middle of resizing. I also moved the path variable creation and checking further up, so we skip computations we don't need.
  4. Added RGB conversion for images when 3 channels are requested (it was done similarly in the previous version of the library: https://github.com/fastai/fastai/blob/master/old/fastai/dataset.py#L42).
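A minimal sketch of the reworked loop (resize_images, resume, and dest here follow this description, not the actual fastai signature):

    from pathlib import Path
    import PIL.Image

    def resize_images(files, dest, max_size=None, resume=True):
        dest = Path(dest)
        dest.mkdir(parents=True, exist_ok=True)      # create the folder once, outside the loop
        for file in map(Path, files):
            out = dest/file.name
            if resume and out.exists(): continue     # skip images finished on an earlier run
            img = PIL.Image.open(file)
            if img.mode != 'RGB': img = img.convert('RGB')   # force 3 channels
            if max_size is not None:
                img.thumbnail((max_size, max_size))  # shrinks in place, keeping aspect ratio
            img.save(out)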

Also, I'm thinking about a param like max_side_size that resizes an image so it loses no quality in further work: the image is resized with the same proportions as the original, and its smallest side is resized to the requested size. Why this can be helpful: say we have a 640x480 image and we want to train at 100x100. If we resize with the current max_size param, it is resized to ~100x75, so at training time we have to resize it up, pad it, or do something else that loses quality. With max_side_size it is resized to 133x100, and we can even random-crop 100x100 from it without any additional steps.
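A quick sketch of that arithmetic (max_side_size and the helper name are illustrative only):

    def resize_to_smallest_side(w, h, target):
        # scale so the *smaller* side lands exactly on target; aspect ratio is kept
        scale = target / min(w, h)
        return round(w * scale), round(h * scale)

    print(resize_to_smallest_side(640, 480, 100))  # (133, 100): a 100x100 crop then fits without padding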

1 Like

I have submitted a PR for pil2tensor based on this issue: https://github.com/fastai/fastai/issues/1018.
The problem and its solution are demonstrated here in the test cases and here: http://localhost:8888/notebooks/fastaiNotebooks/pil2tensor/pil2TensorTests.ipynb#

It can process all images that PIL can convert to RGB, grayscale, and binary, as PIL does the heavy lifting.
I just used the plain conda pillow recommended by fastai. That version can read 380 of Pillow's 468 test images with the following extensions: ['.ara', '.blp', '.bmp', '.bw', '.cur', '.dcx', '.dds', '.eps', '.fli', '.ftc', '.ftu', '.gbr', '.gif', '.icns', '.ico', '.im', '.im1', '.jpg', '.mic', '.mpo', '.msp', '.p7', '.pbm', '.pcd', '.pcx', '.pgm', '.png', '.ppm', '.psd', '.pxr', '.ras', '.rgb', '.sgi', '.spider', '.tga', '.tif', '.tiff', '.xbm', '.xpm']

Also, 378 out of 380 can be converted to a tensor without PIL making a conversion to "RGB", "I", or "L": just handing over PIL.Image.open(filename).
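The core idea is roughly this (a simplified sketch, not the exact PR code; 'img.png' is a placeholder path):

    import numpy as np
    import torch
    import PIL.Image

    def pil2tensor(image, dtype=np.float32):
        "Sketch: let PIL handle exotic modes, then go via numpy to a CxHxW tensor."
        arr = np.asarray(image, dtype=dtype)
        if arr.ndim == 2: arr = arr[:, :, None]                 # grayscale/binary: add a channel axis
        return torch.from_numpy(arr.transpose(2, 0, 1).copy())  # HxWxC -> CxHxW

    t = pil2tensor(PIL.Image.open('img.png')) / 255.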

I have only included a few test images and they may not all be meaningful - maybe some are missing?

Added support for different resize methods and made life easier for people who don't specify a list of transforms. Specifically, when you create an ImageDataBunch, you can add a resize_mtd argument that can be:

  • ResizeMtd.CROP: the image will be cropped to the desired size after the smaller dimension is resized to size
  • ResizeMtd.PAD: the image will be padded to the desired size after the greater dimension is resized to size
  • ResizeMtd.SQUISH: the image will be squished to the desired size
  • ResizeMtd.NO: no resize

If you pass an empty list of transforms, or a list of transforms that doesn’t contain crop_pad, the constructor will add a center crop/pad if you picked ResizeMtd.CROP or ResizeMtd.PAD.

Default is ResizeMtd.CROP if the size is a single int, and ResizeMtd.SQUISH if the size is a tuple (because crop_pad doesn't work well with rectangular images yet).
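A usage sketch based on the names in this post (the argument and enum may be spelled differently in the final API):

    from fastai.vision import *

    path = untar_data(URLs.MNIST_SAMPLE)   # placeholder dataset
    # resize the greater dimension to 224, then pad the rest up to 224x224
    data = ImageDataBunch.from_folder(path, ds_tfms=get_transforms(), size=224,
                                      resize_mtd=ResizeMtd.PAD)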

3 Likes

Could we discuss a flexible approach for reading images?

The current version of fastai does not allow you to customize open_image. In order to read 16-bit grayscale images in the current version you would, for example, have to:
1: create a class GrayImageClassificationDataset(ImageClassificationDataset) and override its _get_x(self, i) method
2: create a class GrayImageDataBunch(ImageDataBunch) to override the create function and stick in your own GrayImageClassificationDataset for the validation and train datasets.

(Note that point 2 might be avoided with the new datablocks.)

Anyway, having to override _get_x and class methods is too cumbersome for something as basic as reading input.

Instead I propose that we make it possible for a dataset to take a reader (a sketch follows below). By default it would use the current open_image, which will cover 90% of the needs. If users have DICOM input, microscopy, etc., they can provide their own reader. This way fastai can stay focused, yet include the medical community.
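A sketch of what I mean (ReaderDataset and open_fn are made-up names for illustration):

    import numpy as np
    import PIL.Image
    from fastai.vision import open_image   # the stock reader, used as the default

    def open_gray16(fn):
        "Hypothetical custom reader: 16-bit grayscale scaled to [0, 1]."
        return np.asarray(PIL.Image.open(fn), dtype=np.float32) / 65535.

    class ReaderDataset:
        "A dataset that takes a reader callable instead of hard-coding open_image."
        def __init__(self, fns, labels, open_fn=open_image):
            self.fns, self.labels, self.open_fn = fns, labels, open_fn
        def __getitem__(self, i): return self.open_fn(self.fns[i]), self.labels[i]
        def __len__(self): return len(self.fns)

    # ds = ReaderDataset(fns, labels, open_fn=open_gray16)  # DICOM, microscopy, ... likewise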

What do you think? @sgugger, @jeremy and all