Lesson 3 In-Class Discussion ✅

Consider how much information the network needs about an object to segment it versus drawing a bounding box. For a bounding box, it only needs to locate the approximate positions of a few key features and use those as its decision criteria. For segmentation, the network needs much more knowledge about the object: its exact shape, its interaction with the surroundings, occlusion, and so on. As with classification, a bounding-box model can get away with looking for one or two pivotal features and discarding the rest. The main practical argument for segmentation is that this precise information is very useful for certain tasks.

How would you put a bounding box on the sky without including parts of building tops that marginally get in the way, and without excluding any sky? It would have to be a non-linear bounding box, and that's essentially what segmentation is: infinitely flexible bounding boxes.

When I played with the Zeit Now demo shown in class with the teddy-bear starter kit, I hit this error:

ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/anaconda3/envs/fastai/lib/python3.6/site-packages/uvicorn/protocols/http/httptools_impl.py", line 389, in run_asgi
    result = await asgi(self.receive, self.send)
  File "/anaconda3/envs/fastai/lib/python3.6/site-packages/starlette/middleware/errors.py", line 128, in asgi
    raise exc from None
  File "/anaconda3/envs/fastai/lib/python3.6/site-packages/starlette/middleware/errors.py", line 106, in asgi
    await asgi(receive, _send)
  File "/anaconda3/envs/fastai/lib/python3.6/site-packages/starlette/middleware/cors.py", line 138, in simple_response
    await inner(receive, send)
  File "/anaconda3/envs/fastai/lib/python3.6/site-packages/starlette/exceptions.py", line 74, in app
    raise exc from None
  File "/anaconda3/envs/fastai/lib/python3.6/site-packages/starlette/exceptions.py", line 63, in app
    await instance(receive, sender)
  File "/anaconda3/envs/fastai/lib/python3.6/site-packages/starlette/routing.py", line 43, in awaitable
    response = await func(request)
  File "app/server.py", line 50, in analyze
    return JSONResponse({'result': learn.predict(img)[0]})
AttributeError: 'Learner' object has no attribute 'predict'

My fastai is the latest 1.0.21 version, and I didn’t change anything in the server.py file. Did I miss anything?

5 Likes

To show the colors it uses a colormap, which maps each numeric class value to a color.
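
For intuition, a colormap is just a lookup table from numeric values to colors. Here is a minimal sketch in plain Python (the palette below is made up for illustration; it is not fastai's actual colormap):

```python
# Hypothetical palette: class index -> RGB tuple.
PALETTE = {
    0: (0, 0, 0),        # background -> black
    1: (255, 0, 0),      # class 1 -> red
    2: (0, 255, 0),      # class 2 -> green
}

def colorize(mask):
    """Map a 2D grid of class indices to a grid of RGB tuples."""
    return [[PALETTE[v] for v in row] for row in mask]

mask = [[0, 1], [2, 1]]
print(colorize(mask))
# [[(0, 0, 0), (255, 0, 0)], [(0, 255, 0), (255, 0, 0)]]
```

Real colormaps (e.g. in matplotlib) work the same way, just with interpolation over a continuous range.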

1 Like

To draw the grid lines, replace

    learn.recorder.plot()

with

    learn.recorder.plot(); plt.grid(True)

3 Likes

Files starting with a "." are hidden from the regular `ls` command. Use `ls -a` and you should see your missing directory in the `/home/ubuntu` directory in your example.

Why it's there I don't know :slight_smile:
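
A quick way to see the difference, done in a temp directory so it's safe to run anywhere:

```shell
# Dotfiles are hidden from plain ls but shown by ls -a.
d=$(mktemp -d)
mkdir "$d/.fastai"
ls "$d"          # prints nothing: the dot-directory is hidden
ls -a "$d"       # lists ".", "..", and ".fastai"
```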

2 Likes

I think you can squish the regression output using this trick, as explained in the collaborative filtering lesson (DL1 v2, lesson 5), or do something similar to this:

    def forward(self, ...):
        ...
        output = F.sigmoid(unsquished_output) * (max_value-min_value) + min_value
        return output
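
To make the scaling concrete, here is a self-contained sketch of the same trick in plain Python (the function name `squish` is mine, just for illustration):

```python
import math

def squish(x, min_value, max_value):
    """Sigmoid squashes any real x into (0, 1);
    rescaling then maps that into (min_value, max_value)."""
    sig = 1.0 / (1.0 + math.exp(-x))
    return sig * (max_value - min_value) + min_value

print(squish(0.0, 0.0, 5.0))    # midpoint of the range: 2.5
print(squish(10.0, 0.0, 5.0))   # saturates just below max_value
```

The upshot: no matter what raw value the network emits, the output is guaranteed to stay inside the target range.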

3 Likes

Hmm… interesting point. If I want to jump around and change directions a lot to get out of bad local minima, it seems like I wouldn't want momentum encouraging me to keep going in one direction.

Edit:
Oh… it seems like sgugger has already blogged about this:

> To accompany the movement toward larger learning rates, Leslie Smith found in his experiments that decreasing the momentum led to better results.

2 Likes

Update fastai and PyTorch.

You can train it further.

It is `binary_cross_entropy_with_logits`. For details, see the PyTorch documentation for `binary_cross_entropy_with_logits`.
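
For reference, what that loss computes can be sketched in a few lines of plain Python. This is the numerically stable form (my own re-derivation for illustration, not PyTorch's source):

```python
import math

def bce_with_logits(x, z):
    """Numerically stable binary cross-entropy on a raw logit x
    against target z in {0, 1}. Algebraically equivalent to
    -(z*log(sigmoid(x)) + (1-z)*log(1-sigmoid(x))), but avoids
    overflow by never exponentiating a large positive number."""
    return max(x, 0) - x * z + math.log1p(math.exp(-abs(x)))

# Sanity check against the naive formulation on a moderate logit:
x, z = 1.5, 1.0
sig = 1 / (1 + math.exp(-x))
naive = -(z * math.log(sig) + (1 - z) * math.log(1 - sig))
print(abs(bce_with_logits(x, z) - naive) < 1e-12)  # True
```

Combining the sigmoid and the log like this is why the "with logits" variant is preferred over applying `sigmoid` and then `binary_cross_entropy` separately.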

1 Like

You can check the loss function used by the learner by simply running `learn.loss_func??`

2 Likes

Thank you bluesky314 (Rahul). That makes a good case for segmentation.

Can one specify colors for different classes, such as:

| R   | G   | B   | Class     |
|-----|-----|-----|-----------|
| 64  | 128 | 64  | Animal    |
| 192 | 0   | 128 | Archway   |
| 0   | 128 | 192 | Bicyclist |

The label images would also match these colors.

The reason I am asking is that I may want to know how many different classes exist in an image. I can simply count the number of unique colors that belong to my color-to-class map.
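
If the colors do line up like that, the counting could be as simple as this sketch (the mapping just echoes the table above; `classes_in_image` is a made-up helper, not fastai code):

```python
# Hypothetical color-to-class map matching the table above.
COLOR_TO_CLASS = {
    (64, 128, 64): "Animal",
    (192, 0, 128): "Archway",
    (0, 128, 192): "Bicyclist",
}

def classes_in_image(pixels):
    """pixels: iterable of (R, G, B) tuples from a label image.
    Returns the set of class names whose colors appear."""
    return {COLOR_TO_CLASS[p] for p in set(pixels) if p in COLOR_TO_CLASS}

label_pixels = [(64, 128, 64), (64, 128, 64), (0, 128, 192)]
print(sorted(classes_in_image(label_pixels)))  # ['Animal', 'Bicyclist']
```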

1 Like

Is the lesson3-imdb.ipynb notebook pointing to the wrong URL for IMDb (https://s3.amazonaws.com/fast-ai-nlp/imdb.tgz)?

The gzipped file does not contain anything like:

[PosixPath('/home/jhoward/.fastai/data/imdb/imdb.vocab'),
 PosixPath('/home/jhoward/.fastai/data/imdb/models'),
 PosixPath('/home/jhoward/.fastai/data/imdb/tmp_lm'),
 PosixPath('/home/jhoward/.fastai/data/imdb/train'),
 PosixPath('/home/jhoward/.fastai/data/imdb/test'),
 PosixPath('/home/jhoward/.fastai/data/imdb/README')]

As a result

(path/'train').ls()

results in an error

I created a symlink for my data folder in place of .fastai/data, so I can avoid having to open that .fastai directory all the time.
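
The symlink idea, sketched in a temp directory so it's safe to run (in practice the link would be `~/.fastai/data` and the target wherever your data actually lives):

```shell
base=$(mktemp -d)
mkdir "$base/datasets"                # where the data really lives
ln -s "$base/datasets" "$base/data"   # stand-in for ~/.fastai/data
touch "$base/datasets/imdb.txt"
ls "$base/data"                       # shows imdb.txt through the link
```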

1 Like

I analyzed the lesson videos to get Jeremy’s facial expressions to find when he’s most happy, sad, surprised etc. during the lessons. Gist linked in the post over on “Share your work”: https://forums.fast.ai/t/share-your-work-here/27676/366?u=jerbly

2 Likes

Creating a symlink to the data folder has been great for me.

How do you pass in your own loss function?

Yes, I don't know why Dice was not used.
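
For anyone curious, the Dice coefficient being discussed measures overlap between a predicted mask and a target mask. A minimal plain-Python sketch (my own illustration, not fastai code):

```python
def dice_coefficient(pred, target, eps=1e-8):
    """Dice overlap between two binary masks (flat lists of 0/1):
    2*|intersection| / (|pred| + |target|). 1.0 means perfect overlap;
    eps avoids division by zero when both masks are empty."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * inter + eps) / (total + eps)

print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1/(2+1) ~ 0.667
```

As a loss, one would typically minimize `1 - dice` on soft (sigmoid) predictions rather than hard 0/1 masks.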

When @jeremy changed his model from 128 to 256 image sizes but kept the weights from the previous model, I can't get my head round how the learned weights were still useful. Everything has got four times bigger in area, and surely your filters won't work any more, particularly for satellite images where everything is at the same scale. The only way I can see this working is if the augmentation had done a lot of zooming in and out, so the learned filters were able to adapt. Can anyone shed any light on this, please?

EDIT: I couldn’t watch all the lesson live, so I need to go back and watch the end, so apologies if this was covered.

1 Like