Consider the amount of information the network needs about the object to segment it vs. to put a bounding box on it. For a bounding box, it only needs to locate the approximate positions of certain features and use those as its decision criteria; much like classification, where I can get away with looking for one or two pivotal features and discarding the rest. For segmentation, the network needs far more knowledge about the object: its exact shape, how it interacts with its surroundings, occlusion, and so on. The main practical argument is that this precise information is very useful for certain tasks.
How would you put a bounding box on the sky without including the building tops that marginally get in the way, and without excluding any sky? It would have to be a non-linear bounding box, and that's essentially what segmentation is: infinitely flexible bounding boxes.
When I played with the Zeit Now demo shown in class, using the teddy-bear starter kit, I hit this error:
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/anaconda3/envs/fastai/lib/python3.6/site-packages/uvicorn/protocols/http/httptools_impl.py", line 389, in run_asgi
    result = await asgi(self.receive, self.send)
  File "/anaconda3/envs/fastai/lib/python3.6/site-packages/starlette/middleware/errors.py", line 128, in asgi
    raise exc from None
  File "/anaconda3/envs/fastai/lib/python3.6/site-packages/starlette/middleware/errors.py", line 106, in asgi
    await asgi(receive, _send)
  File "/anaconda3/envs/fastai/lib/python3.6/site-packages/starlette/middleware/cors.py", line 138, in simple_response
    await inner(receive, send)
  File "/anaconda3/envs/fastai/lib/python3.6/site-packages/starlette/exceptions.py", line 74, in app
    raise exc from None
  File "/anaconda3/envs/fastai/lib/python3.6/site-packages/starlette/exceptions.py", line 63, in app
    await instance(receive, sender)
  File "/anaconda3/envs/fastai/lib/python3.6/site-packages/starlette/routing.py", line 43, in awaitable
    response = await func(request)
  File "app/server.py", line 50, in analyze
    return JSONResponse({'result': learn.predict(img)[0]})
AttributeError: 'Learner' object has no attribute 'predict'
My fastai is the latest version (1.0.21), and I didn't change anything in the server.py file. Did I miss anything?
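For reference, here's how I checked my setup (a quick diagnostic, assuming fastai v1's basic_train module, which is where Learner lives):

import fastai
from fastai.basic_train import Learner  # Learner's home in fastai v1

print(fastai.__version__)            # 1.0.21 in my environment
print(hasattr(Learner, 'predict'))   # False would explain the AttributeError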
Files starting with a "." are hidden from the regular "ls" command. Use "ls -a" and you should see your missing directory in the /home/ubuntu directory in your example.
I think you can squish the regression output using this trick, as explained in the collaborative filtering part of lesson 5 (DL1 v2), or do something similar to it.
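If it helps, a minimal sketch of that trick in plain PyTorch (y_min/y_max are whatever range your target lives in):

import torch

def scaled_sigmoid(x, y_min, y_max):
    # Squash an unbounded regression output into [y_min, y_max],
    # as done for the rating range in the collab filtering lesson.
    return torch.sigmoid(x) * (y_max - y_min) + y_min

IIRC the lesson also stretched the range slightly past the true min/max, so the sigmoid doesn't have to saturate to reach the extreme values.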
Hmm… interesting point. If I want to jump around and change direction a lot to get out of bad local minima, it seems like I wouldn't want momentum encouraging me to keep going in one direction.
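For concreteness, the classic momentum update (a generic sketch, not fastai's exact implementation) shows why it resists direction changes:

import torch

def momentum_step(w, grad, v, lr=0.1, mu=0.9):
    # v is a running blend of past gradients, so successive steps
    # reinforce a consistent direction and damp sudden reversals.
    v = mu * v + grad
    return w - lr * v, v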
Edit:
Oh… it seems sgugger has already blogged about this.
Can one specify colors for different classes such as:
R    G    B    Class
64   128  64   Animal
192  0    128  Archway
0    128  192  Bicyclist
The label images would also match these colors.
The reason I am asking is that I may want to know how many different classes exist in an image. I could simply count the number of unique colors that belong to my color-to-class map.
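Counting them would be straightforward (a sketch assuming RGB label images like CamVid's; 'label.png' is just a placeholder filename):

import numpy as np
from PIL import Image

mask = np.array(Image.open('label.png').convert('RGB'))
# Flatten H x W x 3 into a list of pixels, then count distinct colors.
colors = np.unique(mask.reshape(-1, 3), axis=0)
print(len(colors), 'unique colors, i.e. classes present in this image')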
When @jeremy changed his model from 128 to 256 image sizes but kept the weights from the previous model, I couldn't get my head around how the learned weights were still useful. Everything has got 4 times bigger, and surely your filters won't work any more, particularly for satellite images where everything is at the same scale. The only way I can see this working is if the augmentation did a lot of zooming in and out, so the learned filters were able to adapt. Can anyone shed any light on this, please?
EDIT: I couldn’t watch all the lesson live, so I need to go back and watch the end, so apologies if this was covered.
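For anyone else puzzling over the mechanics, a quick sanity check in plain PyTorch (not the course notebook) showing that the same conv weights at least run at both sizes, since a convolution just slides its small kernel over whatever spatial extent it's given:

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # same 3x3 filters either way
print(conv(torch.randn(1, 3, 128, 128)).shape)     # torch.Size([1, 16, 128, 128])
print(conv(torch.randn(1, 3, 256, 256)).shape)     # torch.Size([1, 16, 256, 256])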