A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

I’ve got some time this morning, I’ll take a look

1 Like

Thank you so much :grinning:!!

@sebderhy I found a solution. Still use the np.array method for now, but when loading your model in, add:

learn.dls.device = 'cpu'

The DataLoaders don’t default to load_learner's environment (I made a post about this in another thread).

Also, one more adjustment to the template, under analyze:

return JSONResponse({
      'result': str(pred[0])
  })

cc @mrfabulous1 because I know you’ve had issues with this :wink:

Here’s the full analyze function now:

# (assumes the template's usual imports: numpy as np, PIL's Image, io's BytesIO, and Starlette's JSONResponse)
@app.route('/analyze', methods=['POST'])
async def analyze(request):
  img_data = await request.form()
  img_bytes = await (img_data['file'].read())
  img_np = np.array(Image.open(BytesIO(img_bytes)))  # decode the uploaded file to a numpy array
  learn.dls.device = 'cpu' # this can also be put into setup_learner
  pred = learn.predict(img_np)
  return JSONResponse({
      'result': str(pred[0])
  })
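
For reference, putting it into setup_learner would look roughly like this (a sketch based on the deployment template; the 'export.pkl' name comes from the template, and the download/error handling is omitted):

async def setup_learner():
    # load_learner comes from fastai2; the export file name is the template's
    learn = load_learner('export.pkl')
    learn.dls.device = 'cpu'  # otherwise the DataLoaders keep the device they were exported with
    return learn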

Looking into the BytesIO now

Edit x2:

The BytesIO issue is a type-annotations bug; it should hopefully be fixed later today

3 Likes

Thanks! The solution still doesn’t work for me, but I think it’s because I trained a model with to_fp16() (see error below). Will try again without it.

On a side note, I never stop being amazed by fastai and people like you pushing it forward. It’s really incredible that a bug discovered on a Saturday is being fixed the same day. I’ve never seen this before :muscle:!

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 385, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/usr/local/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.7/site-packages/fastapi/applications.py", line 140, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.7/site-packages/starlette/applications.py", line 134, in __call__
    await self.error_middleware(scope, receive, send)
  File "/usr/local/lib/python3.7/site-packages/starlette/middleware/errors.py", line 178, in __call__
    raise exc from None
  File "/usr/local/lib/python3.7/site-packages/starlette/middleware/errors.py", line 156, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.7/site-packages/starlette/middleware/cors.py", line 84, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "/usr/local/lib/python3.7/site-packages/starlette/middleware/cors.py", line 140, in simple_response
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.7/site-packages/starlette/exceptions.py", line 73, in __call__
    raise exc from None
  File "/usr/local/lib/python3.7/site-packages/starlette/exceptions.py", line 62, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 590, in __call__
    await route(scope, receive, send)
  File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 208, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 41, in app
    response = await func(request)
  File "app/server.py", line 75, in analyze
    pred = learn.predict(img_np)
  File "/usr/local/lib/python3.7/site-packages/fastai2/learner.py", line 215, in predict
    inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True)
  File "/usr/local/lib/python3.7/site-packages/fastai2/learner.py", line 203, in get_preds
    self._do_epoch_validate(dl=dl)
  File "/usr/local/lib/python3.7/site-packages/fastai2/learner.py", line 159, in _do_epoch_validate
    with torch.no_grad(): self.all_batches()
  File "/usr/local/lib/python3.7/site-packages/fastai2/learner.py", line 127, in all_batches
    for o in enumerate(self.dl): self.one_batch(*o)
  File "/usr/local/lib/python3.7/site-packages/fastai2/learner.py", line 133, in one_batch
    self.pred = self.model(*self.xb);                self('after_pred')
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 345, in forward
    return self.conv2d_forward(input, self.weight)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
    self.padding, self.dilation, self.groups)
RuntimeError: "unfolded2d_copy" not implemented for 'Half'
1 Like

Yes, before exporting the model, bring it back from mixed precision (to_fp32).
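
For example (a minimal sketch, assuming learn is your trained fp16 Learner):

learn.to_fp32()  # convert the model back to full precision
learn.export()   # the exported Learner will now predict fine on CPU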

And I do my best to stay active and give back :wink:

3 Likes

Yes, it finally works :slight_smile:! Thanks again for your help!

1 Like

Alright gang, we will be good to go today! For the last lecture (as posted earlier), we’ll be covering:

  • Super Resolution with GANs
  • Siamese Dataloaders
  • Audio

Here’s the link:

Also, after this lecture I’m going to break our megathread into a separate one for Tabular, and then once we hit NLP another for it, so each has a focus :slight_smile:

4 Likes

5pm CST, right?

Correct :smiley:

Also I’ve made the new tabular thread, so others are aware of it:

https://forums.fast.ai/t/a-walk-with-fastai2-tabular-study-group-and-online-lectures-megathread/64361/2

And then finally, @mgloria I made a function to fix segmentation masks (it should keep the codes in the same order as you had them before):

https://forums.fast.ai/t/dealing-with-cuda-device-assist-on-segmentation-some-tips/64363/2
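
The linked post has the actual function; the general idea is roughly the following (just a sketch, with fix_msk and code_map as hypothetical names):

import numpy as np
from PIL import Image

def fix_msk(fn, code_map):
    # code_map: {old_pixel_value: new_index}, built in the same order as your codes
    msk = np.array(Image.open(fn))
    out = np.zeros_like(msk)  # write into a copy so remapped values don't collide
    for old, new in code_map.items():
        out[msk == old] = new
    return Image.fromarray(out)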

3 Likes

Thanks to everyone who joined :slight_smile: Here are some links I mentioned today:

Wasserstein GAN example
SuperRes with Feature Loss
fastai2 audio
fastai2 audio megathread
Deep Learning with Audio Megathread
fastai book (draft)
Deep Learning for Coders with fastai and PyTorch

6 Likes

Just wanted to give you guys a heads up: I’m going through and deploying most of the models we went over (I’ll do this for tabular and NLP once we get to them). If anyone needs help (specifically vision for now), let me know.

Will you be uploading the notebook for this? Are you doing it with Voila?

I am not. I am deploying via Starlette, as that’s more realistic for people in production (i.e. the JavaScript etc.), at least from what I’ve seen around here. I may try Jeremy’s new method later.

I will be uploading the website code to GitHub once it’s all done.

Edit: @barnacl as I go through this, I’ll update that deployment notebook with the examples (like the code that’s already there).

Edit x2: Also, a correction to one of my statements: for now I am still installing the fastai2 library (it’s needed for loading in our pickle), but that’s the only place I use it and import from it (I don’t even use a from fastai2 import * style import).
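
E.g. the one fastai2-specific piece is roughly (a sketch):

from fastai2.learner import load_learner  # targeted import instead of a star import

learn = load_learner('export.pkl')  # unpickling the export needs fastai2 installed either way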

5 Likes

Hi @muellerzr. I am getting the following error while running the object detection notebook on my data:

ValueError: This DataLoader does not contain any batches

I have created the lbl_bbox just as expected and passed the image names in the imgs variable. Then I passed the path to the folder containing the images to DataBlock.dataloaders and it ran fine. Then I set dls.c=10 as per my requirement, and it all seemed to run well. But when I did dls.show_batch, I got that error. No clue why I am getting this…

@Vishucyrus we’ll need more information than this to debug properly. How is your data stored? How are you building the block? How is your data labeled?

My data is stored as jpg files in a folder named ‘/det’.
I have annotated the images using this tool – VGG Image Annotator
It provides the annotations as ‘x’, ‘y’, ‘width’, ‘height’ and ‘id’ with id names.
So I wrote a function to get the following:

  • a variable ‘imgs’ containing the image file names
  • a variable ‘lbl_bbox’ containing a list of tuples ([list of bboxes], [ids of the annotations])

And I thought that’s it.
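
(For reference, that kind of conversion would look roughly like this, assuming VIA’s x/y are the top-left corner and the bboxes should end up as [x_min, y_min, x_max, y_max] corners; via_to_corners and rects are hypothetical names:)

def via_to_corners(rects):
    # VIA gives top-left x/y plus width/height; corner format needs x+width, y+height
    return [[r['x'], r['y'], r['x'] + r['width'], r['y'] + r['height']] for r in rects]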

I then went on and replaced the imgs and lbl_bbox coming out of the get_annotations function in your notebook with my data.

I then simply ran the cells and got that error when I reached dls.show_batch().

Let me know if I missed anything.

@Vishucyrus I can’t do much without the exact code showing how you set it up, and the stack trace. Can you please provide the DataBlock setup, and what dblock.summary(path) gives you?

Also, after some discussion, I’m adding a Binary Segmentation notebook (07_Binary_Segmentation) to the course, as it shows how to do the segmentation adjustments

2 Likes

Hi Zachary

The datablock.summary(path) gives the following:

Setting-up type transforms pipelines
Collecting items from det
Found 30 items
2 datasets of sizes 24,6
Setting up Pipeline: -> PILBase.create
Setting up Pipeline: -> TensorBBox.create
Setting up Pipeline: -> MultiCategorize

Building one sample
Pipeline: -> PILBase.create
starting from
28_Cancel2.pdf.jpg
applying gives
det/28_Cancel2.pdf.jpg
applying PILBase.create gives
PILImage mode=RGB size=2550x3300
Pipeline: -> TensorBBox.create
starting from
28_Cancel2.pdf.jpg
applying gives
[[181, 571, 337, 75], [159, 632, 715, 117], [283, 1478, 651, 75], [1954, 178, 587, 109], [1389, 565, 331, 95], [729, 3046, 554, 64]]
applying TensorBBox.create gives
TensorBBox of size 6x4
Pipeline: -> MultiCategorize
starting from
28_Cancel2.pdf.jpg
applying gives
[phn, pha, ik, pn, in, don]
applying MultiCategorize gives
TensorMultiCategory([8, 7, 3, 9, 4, 2])

Final sample: (PILImage mode=RGB size=2550x3300, TensorBBox([[ 181., 571., 337., 75.],
[ 159., 632., 715., 117.],
[ 283., 1478., 651., 75.],
[1954., 178., 587., 109.],
[1389., 565., 331., 95.],
[ 729., 3046., 554., 64.]]), TensorMultiCategory([8, 7, 3, 9, 4, 2]))

Setting up after_item: Pipeline: BBoxLabeler -> PointScaler -> Resize -> ToTensor
Setting up before_batch: Pipeline: bb_pad
Setting up after_batch: Pipeline: IntToFloatTensor -> AffineCoordTfm -> Normalize

Building one batch
Applying item_tfms to the first sample:
/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py:2764: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
warnings.warn("Default grid_sample and affine_grid behavior has changed "
/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py:2705: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
warnings.warn("Default grid_sample and affine_grid behavior has changed "
Pipeline: BBoxLabeler -> PointScaler -> Resize -> ToTensor
starting from
(PILImage mode=RGB size=2550x3300, TensorBBox of size 6x4, TensorMultiCategory([8, 7, 3, 9, 4, 2]))
applying BBoxLabeler gives
(PILImage mode=RGB size=2550x3300, TensorBBox of size 6x4, TensorMultiCategory([8, 7, 3, 9, 4, 2]))
applying PointScaler gives
(PILImage mode=RGB size=2550x3300, TensorBBox of size 6x4, TensorMultiCategory([8, 7, 3, 9, 4, 2]))
applying Resize gives
(PILImage mode=RGB size=224x224, TensorBBox of size 6x4, TensorMultiCategory([8, 7, 3, 9, 4, 2]))
applying ToTensor gives
(TensorImage of size 3x224x224, TensorBBox of size 6x4, TensorMultiCategory([8, 7, 3, 9, 4, 2]))

Adding the next 3 samples

Applying before_batch to the list of samples
Pipeline: bb_pad
starting from
[(TensorImage of size 3x224x224, TensorBBox of size 6x4, TensorMultiCategory([8, 7, 3, 9, 4, 2])), (TensorImage of size 3x224x224, TensorBBox of size 6x4, TensorMultiCategory([3, 9, 8, 7, 4, 2])), (TensorImage of size 3x224x224, TensorBBox of size 9x4, TensorMultiCategory([ 2, 3, 9, 8, 12, 1, 6, 5, 4])), (TensorImage of size 3x224x224, TensorBBox of size 6x4, TensorMultiCategory([ 2, 9, 11, 8, 7, 1]))]
applying bb_pad gives
[(TensorImage of size 3x224x224, Tensor of size 5x4, tensor([8, 9, 4, 2, 0])), (TensorImage of size 3x224x224, Tensor of size 5x4, tensor([9, 8, 7, 4, 2])), (TensorImage of size 3x224x224, Tensor of size 5x4, tensor([ 2, 9, 8, 12, 6])), (TensorImage of size 3x224x224, Tensor of size 5x4, tensor([2, 9, 8, 7, 0]))]

Collating items in a batch

Applying batch_tfms to the batch built
Pipeline: IntToFloatTensor -> AffineCoordTfm -> Normalize
starting from
(TensorImage of size 4x3x224x224, Tensor of size 4x5x4, Tensor of size 4x5)
applying IntToFloatTensor gives
(TensorImage of size 4x3x224x224, Tensor of size 4x5x4, Tensor of size 4x5)
applying AffineCoordTfm gives
(TensorImage of size 4x3x224x224, Tensor of size 4x5x4, Tensor of size 4x5)
applying Normalize gives
(TensorImage of size 4x3x224x224, Tensor of size 4x5x4, Tensor of size 4x5)

Cool, so that seems to be fine. Also, in the future it’s worth wrapping code like so:

“```python”

My Code

“```”

This way it’s readable for others (remove the quotation marks and put your code in the middle).

So clearly we can build a batch out of them. Can you build the dataloaders and provide the trace of show_batch()? (Wrapped in the ```python fences please :slight_smile:)
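
I.e. something like this (assuming the DataBlock from the notebook is named dblock):

dls = dblock.dataloaders(path)
dls.show_batch()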

3 Likes

@muellerzr, by chance did you see any examples of how to deal with very big segments and small ones in the same mask? (I ask due to the low training accuracy.)
Thank you.