A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

Yes, before exporting the model, bring it back from mixed precision (to_fp32).
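For example (a minimal sketch, assuming a Learner named learn that was trained with to_fp16()):

```python
# assuming `learn` was trained in mixed precision via learn.to_fp16()
learn.to_fp32()              # bring the weights back to full precision
learn.export('export.pkl')   # then export as usual
```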

And I do my best to stay active and give back :wink:

3 Likes

Yes, it finally works :slight_smile:! Thanks again for your help!

1 Like

Alright gang, we will be good to go today! For the last lecture (as posted earlier), we’ll be covering:

  • Super Resolution with GANs
  • Siamese Dataloaders
  • Audio

Here’s the link:

Also, after this lecture I’m going to break our megathread into a separate one for Tabular, and then once we hit NLP another for it, so each has a focus :slight_smile:

4 Likes

5 PM CST, right?

Correct :smiley:

Also I’ve made the new tabular thread, so others are aware of it:

https://forums.fast.ai/t/a-walk-with-fastai2-tabular-study-group-and-online-lectures-megathread/64361/2

And then finally, @mgloria I made a function to fix segmentation masks (that should keep the codes in the same order as you had before):

https://forums.fast.ai/t/dealing-with-cuda-device-assist-on-segmentation-some-tips/64363/2

3 Likes

Thanks to everyone who joined :slight_smile: Here are some links I mentioned today:

Wasserstein GAN example
SuperRes with Feature Loss
fastai2 audio
fastai2 audio megathread
Deep Learning with Audio Megathread
fastai book (draft)
Deep Learning for Coders with fastai and PyTorch

6 Likes

Just wanted to give you guys a heads up: I’m going through and deploying most of the models we went over (I’ll do this for tabular and NLP once we get to them). If anyone needs help (specifically vision for now), let me know.

Will you be uploading the notebook for this? Are you doing it with Voila?

I am not. I am deploying via Starlette, as that’s more realistic for people in production (i.e. the JavaScript, etc.), at least from what I’ve seen around here. I may try Jeremy’s new method later.

I will be uploading the website code to GitHub once it’s all done.

Edit: @barnacl as I go through this, I’ll update that deployment notebook with the examples (like the code that’s already there).

Edit x2: Also adding a correction to one of my statements: for now I am still installing the fastai2 library (it’s needed for loading in our pickle), but that’s the only instance where I use it and import from it (I don’t even call from fastai2 import).
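For anyone curious, the kind of Starlette endpoint I mean looks roughly like this. It’s a minimal sketch with placeholder names and routes, not the actual site code (it also assumes python-multipart is installed for form parsing):

```python
# Minimal sketch of a Starlette inference endpoint (placeholder names; not the real site code)
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route
from fastai2.vision.all import load_learner, PILImage
import uvicorn

learn = load_learner('export.pkl')  # the pickle exported after training

async def predict(request):
    data = await request.form()
    img_bytes = await data['file'].read()               # uploaded image file
    pred, _, probs = learn.predict(PILImage.create(img_bytes))
    return JSONResponse({'prediction': str(pred), 'confidence': float(probs.max())})

app = Starlette(routes=[Route('/predict', predict, methods=['POST'])])

if __name__ == '__main__':
    uvicorn.run(app, host='0.0.0.0', port=8000)
```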

5 Likes

Hi @muellerzr. I am getting the following error while running the object detection notebook on my data.

ValueError: This DataLoader does not contain any batches

I have created the lbl_bbox just as expected and passed the image names in the imgs variable. Then I passed the path to the folder containing the images to DataBlock.dataloaders and it ran fine. Then I set dls.c=10 as per my requirement. It all seemed to run well, but when I called dls.show_batch, I got that error. No clue why I am getting this…

@Vishucyrus we’ll need more information than this to debug properly. How is your data stored? How are you building the block? How is your data labeled?

My data is stored as jpg files in a folder named ‘/det’.
I have annotated the images using this tool – VGG Image Annotator
It provides the annotations as ‘x’, ‘y’, ‘width’, ‘height’, and ‘id’ with id names.
So I wrote a function to get the following:
a variable ‘imgs’ containing the image file names, and
a variable ‘lbl_bbox’ containing a list of tuples ([list of bboxes], [ids of the annotations]).
And I thought that was it.
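(For illustration, a converter like that can look roughly like the sketch below. It assumes the standard VIA JSON export layout; the keys and file name are not from the actual function in this post.)

```python
# Sketch of a converter from a VGG Image Annotator JSON export to (imgs, lbl_bbox).
# The JSON keys below are assumptions about the standard VIA export format.
import json

def via_to_fastai(annotation_file):
    with open(annotation_file) as f:
        via = json.load(f)
    imgs, lbl_bbox = [], []
    for entry in via.values():
        boxes, ids = [], []
        for region in entry['regions']:
            shape = region['shape_attributes']     # contains x, y, width, height
            x, y, w, h = shape['x'], shape['y'], shape['width'], shape['height']
            # fastai's bbox transforms expect corner coordinates: [x_min, y_min, x_max, y_max]
            boxes.append([x, y, x + w, y + h])
            ids.append(region['region_attributes']['id'])
        imgs.append(entry['filename'])
        lbl_bbox.append((boxes, ids))
    return imgs, lbl_bbox
```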

I then went on and replaced the imgs and lbl_bbox coming out of the get_annotations function in your notebook with my data.

I then simply ran the cells and got that error when I reached dls.show_batch().

Let me know if I missed anything.

@Vishucyrus I can’t do much without the exact code for how you set it up and the stack trace. Can you please provide the DataBlock setup, and what dblock.summary(path) gives you?

Also, after some discussion I’m adding a Binary Segmentation notebook (07_Binary_Segmentation) to the course, as it shows how to do the segmentation adjustments
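For those curious before the notebook lands, one common adjustment of that kind is remapping a 0/255 binary mask to 0/1 class indices. This is a general sketch (assumed function name and file handling), not the notebook’s exact code:

```python
# Sketch: remap a binary mask saved as 0/255 into class indices 0/1 for fastai2
import numpy as np
from fastai2.vision.all import PILMask

def get_binary_msk(fn):
    msk = np.array(PILMask.create(fn))
    msk[msk > 0] = 1                     # collapse any non-zero value into the foreground class
    return PILMask.create(msk)

codes = ['background', 'foreground']     # the two classes to pass to MaskBlock(codes)
```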

2 Likes

Hi Zachary

The datablock.summary(path) gives the following:

Setting-up type transforms pipelines
Collecting items from det
Found 30 items
2 datasets of sizes 24,6
Setting up Pipeline: -> PILBase.create
Setting up Pipeline: -> TensorBBox.create
Setting up Pipeline: -> MultiCategorize

Building one sample
Pipeline: -> PILBase.create
starting from
28_Cancel2.pdf.jpg
applying gives
det/28_Cancel2.pdf.jpg
applying PILBase.create gives
PILImage mode=RGB size=2550x3300
Pipeline: -> TensorBBox.create
starting from
28_Cancel2.pdf.jpg
applying gives
[[181, 571, 337, 75], [159, 632, 715, 117], [283, 1478, 651, 75], [1954, 178, 587, 109], [1389, 565, 331, 95], [729, 3046, 554, 64]]
applying TensorBBox.create gives
TensorBBox of size 6x4
Pipeline: -> MultiCategorize
starting from
28_Cancel2.pdf.jpg
applying gives
[phn, pha, ik, pn, in, don]
applying MultiCategorize gives
TensorMultiCategory([8, 7, 3, 9, 4, 2])

Final sample: (PILImage mode=RGB size=2550x3300, TensorBBox([[ 181., 571., 337., 75.],
[ 159., 632., 715., 117.],
[ 283., 1478., 651., 75.],
[1954., 178., 587., 109.],
[1389., 565., 331., 95.],
[ 729., 3046., 554., 64.]]), TensorMultiCategory([8, 7, 3, 9, 4, 2]))

Setting up after_item: Pipeline: BBoxLabeler -> PointScaler -> Resize -> ToTensor
Setting up before_batch: Pipeline: bb_pad
Setting up after_batch: Pipeline: IntToFloatTensor -> AffineCoordTfm -> Normalize

Building one batch
Applying item_tfms to the first sample:
/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py:2764: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
warnings.warn("Default grid_sample and affine_grid behavior has changed "
/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py:2705: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
warnings.warn("Default grid_sample and affine_grid behavior has changed "
Pipeline: BBoxLabeler -> PointScaler -> Resize -> ToTensor
starting from
(PILImage mode=RGB size=2550x3300, TensorBBox of size 6x4, TensorMultiCategory([8, 7, 3, 9, 4, 2]))
applying BBoxLabeler gives
(PILImage mode=RGB size=2550x3300, TensorBBox of size 6x4, TensorMultiCategory([8, 7, 3, 9, 4, 2]))
applying PointScaler gives
(PILImage mode=RGB size=2550x3300, TensorBBox of size 6x4, TensorMultiCategory([8, 7, 3, 9, 4, 2]))
applying Resize gives
(PILImage mode=RGB size=224x224, TensorBBox of size 6x4, TensorMultiCategory([8, 7, 3, 9, 4, 2]))
applying ToTensor gives
(TensorImage of size 3x224x224, TensorBBox of size 6x4, TensorMultiCategory([8, 7, 3, 9, 4, 2]))

Adding the next 3 samples

Applying before_batch to the list of samples
Pipeline: bb_pad
starting from
[(TensorImage of size 3x224x224, TensorBBox of size 6x4, TensorMultiCategory([8, 7, 3, 9, 4, 2])), (TensorImage of size 3x224x224, TensorBBox of size 6x4, TensorMultiCategory([3, 9, 8, 7, 4, 2])), (TensorImage of size 3x224x224, TensorBBox of size 9x4, TensorMultiCategory([ 2, 3, 9, 8, 12, 1, 6, 5, 4])), (TensorImage of size 3x224x224, TensorBBox of size 6x4, TensorMultiCategory([ 2, 9, 11, 8, 7, 1]))]
applying bb_pad gives
[(TensorImage of size 3x224x224, Tensor of size 5x4, tensor([8, 9, 4, 2, 0])), (TensorImage of size 3x224x224, Tensor of size 5x4, tensor([9, 8, 7, 4, 2])), (TensorImage of size 3x224x224, Tensor of size 5x4, tensor([ 2, 9, 8, 12, 6])), (TensorImage of size 3x224x224, Tensor of size 5x4, tensor([2, 9, 8, 7, 0]))]

Collating items in a batch

Applying batch_tfms to the batch built
Pipeline: IntToFloatTensor -> AffineCoordTfm -> Normalize
starting from
(TensorImage of size 4x3x224x224, Tensor of size 4x5x4, Tensor of size 4x5)
applying IntToFloatTensor gives
(TensorImage of size 4x3x224x224, Tensor of size 4x5x4, Tensor of size 4x5)
applying AffineCoordTfm gives
(TensorImage of size 4x3x224x224, Tensor of size 4x5x4, Tensor of size 4x5)
applying Normalize gives
(TensorImage of size 4x3x224x224, Tensor of size 4x5x4, Tensor of size 4x5)

Cool, so that seems to be fine. (Also, in the future it’s worth wrapping code like so:

“```python”

My Code

“```”
This way it’s readable for others (remove the quotations and wrap your code in the middle)
So clearly we can build a batch out of them. Can you build the dataloaders and provide the trace of show_batch()? (Wrapped in the ```python please :slight_smile: )

3 Likes

@muellerzr, by chance did you see any examples of how to deal with very big segments and small ones in the same mask? I’m asking because of the low training accuracy.
Thank you.

I’m confused by the wording here. Can you show an example of this issue?

dls.show_batch() gives the following trace:

“```python”

ValueError Traceback (most recent call last)
in ()
----> 1 dls.show_batch()

~/anaconda3/lib/python3.6/site-packages/fastai2/data/core.py in show_batch(self, b, max_n, ctxs, show, **kwargs)
88
89 def show_batch(self, b=None, max_n=9, ctxs=None, show=True, **kwargs):
---> 90 if b is None: b = self.one_batch()
91 if not show: return self._pre_show_batch(b, max_n=max_n)
92 show_batch(*self._pre_show_batch(b, max_n=max_n), ctxs=ctxs, max_n=max_n, **kwargs)

~/anaconda3/lib/python3.6/site-packages/fastai2/data/load.py in one_batch(self)
127 def do_batch(self, b): return self.retain(self.create_batch(self.before_batch(b)), b)
128 def one_batch(self):
--> 129 if self.n is not None and len(self)==0: raise ValueError(f'This DataLoader does not contain any batches')
130 with self.fake_l.no_multiproc(): res = first(self)
131 if hasattr(self, 'it'): delattr(self, 'it')

ValueError: This DataLoader does not contain any batches

“```”

Interesting. Are you splitting your data? (Also, remove the quotation marks when doing the wrapping and it will work; when you see the preview it should look something like so:

Hello World

What does the DataBlock code look like you’re building with?
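For reference, the kind of setup I’m after looks roughly like this. It is only a sketch: the path, splitter, getters, and batch size here are assumptions for illustration, not @Vishucyrus’s actual code.

```python
# Rough sketch of an object-detection DataBlock in fastai2 (illustrative values only)
from fastai2.vision.all import *

path = Path('det')                      # folder with the jpg images (assumption)
img2bbox = dict(zip(imgs, lbl_bbox))    # imgs / lbl_bbox built from the annotations earlier

block = DataBlock(
    blocks=(ImageBlock, BBoxBlock, BBoxLblBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=[lambda o: img2bbox[o.name][0],   # bounding boxes for this image
           lambda o: img2bbox[o.name][1]],  # labels for those boxes
    item_tfms=Resize(224),
    batch_tfms=aug_transforms(),
    n_inp=1)

dls = block.dataloaders(path, bs=4)     # small batch size since there are only 30 images
dls.show_batch()
```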

1 Like

Let’s say I have a 224x224 image with one big segment (mask of class 2) of 50x50 px and another one (mask of class 5) of 3x3 px for segmentation…
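A quick toy illustration of that imbalance (a sketch, not the actual data):

```python
# Toy mask matching the description above: one large class-2 region, one tiny class-5 region
import numpy as np

msk = np.zeros((224, 224), dtype=np.uint8)    # background = class 0
msk[10:60, 10:60] = 2                         # 50x50 region of class 2
msk[100:103, 100:103] = 5                     # 3x3 region of class 5

# class 2 covers 2500 pixels while class 5 covers only 9, so a plain pixel-wise
# loss or accuracy is dominated by the background and the large segment
print({int(c): int((msk == c).sum()) for c in np.unique(msk)})
```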