Lesson 10 Discussion & Wiki (2019)

Jeremy covered that. There is nothing that is going to make your activations go high for the absence of classes, so you are making it a harder task for your model.


With the binomial function, what's the threshold to say there is / there isn't a 'thing' (say, fish) in the image? Is that 0.5?


You will have to decide on this :wink: For instance, on the planet dataset, 0.2 worked rather well.
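A minimal sketch of what this thresholding looks like in practice (the logits here are made up for illustration): each label gets an independent sigmoid probability, and the cutoff for "present" is a choice you make, not a fixed constant.

```python
import torch

# Made-up raw model outputs (logits) for one image over 4 possible labels
logits = torch.tensor([2.0, -1.0, 0.5, -3.0])
probs = torch.sigmoid(logits)   # independent probability per label

# The threshold is a free choice: 0.5 is the common default,
# but a lower value like 0.2 worked well on the planet dataset.
preds_default = probs > 0.5
preds_planet = probs > 0.2

print(preds_default.tolist())   # [True, False, True, False]
print(preds_planet.tolist())    # [True, True, True, False]
```

Lowering the threshold trades precision for recall: here the second label (probability ≈ 0.27) is only predicted present under the 0.2 cutoff.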


Thanks… For multiclass classification, can we also use this approach? The assumption there is that every image has to be exactly one class.

Did we already create nn.Conv2d last week?


No, we didn't. And I don't think we will, Jeremy is cheating :wink:


Is it better to build a layer dictionary and pass that to nn.Sequential or list the layers directly?


As you prefer; it's really a matter of style.
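For reference, both spellings are valid PyTorch: `nn.Sequential` accepts either layers passed directly (which get numeric names) or an `OrderedDict` that gives each layer a readable name.

```python
from collections import OrderedDict
import torch.nn as nn

# Option 1: pass layers directly; children are named "0", "1", ...
model_list = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),
    nn.ReLU(),
)

# Option 2: pass an OrderedDict to name each layer yourself
model_dict = nn.Sequential(OrderedDict([
    ("conv1", nn.Conv2d(3, 16, 3, stride=2, padding=1)),
    ("relu1", nn.ReLU()),
]))

print([n for n, _ in model_list.named_children()])  # ['0', '1']
print([n for n, _ in model_dict.named_children()])  # ['conv1', 'relu1']
```

Named layers make it easier to address submodules later (e.g. `model_dict.conv1`), which can be handy for hooks or partial freezing; otherwise the two are equivalent.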


Thanks

Is there ever any reason not to send things to CUDA?


GPU Memory Restrictions


If you don't have a GPU? More seriously, things like metrics or stored losses need to be put back on the CPU to avoid OOM errors.
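A small sketch of the pattern (the toy model and loop here are made up): calling `.item()` (or `.detach().cpu()`) on the loss before storing it detaches the value from both the GPU and the autograd graph; appending raw GPU tensors every iteration keeps whole computation graphs alive and eventually OOMs.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(10, 1).to(device)
loss_fn = torch.nn.MSELoss()

stored_losses = []
for _ in range(3):
    xb = torch.randn(4, 10, device=device)
    yb = torch.randn(4, 1, device=device)
    loss = loss_fn(model(xb), yb)
    # .item() copies the scalar to the CPU and drops the autograd graph;
    # storing `loss` itself would retain GPU memory across iterations.
    stored_losses.append(loss.item())

print(len(stored_losses))  # 3 plain Python floats, no GPU memory held
```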


If you want to test your model in inference mode to see how long it takes to run inference on a CPU… for implementation purposes.
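That kind of check can be sketched like this (the model is a made-up example): move the model to the CPU, switch it to `eval()` mode, and run the forward pass under `torch.no_grad()` so no autograd bookkeeping skews the timing.

```python
import time
import torch

# A made-up small model, just to have something to time
model = torch.nn.Sequential(
    torch.nn.Linear(784, 50),
    torch.nn.ReLU(),
    torch.nn.Linear(50, 10),
)
model = model.to("cpu").eval()   # inference on CPU, dropout/batchnorm frozen

x = torch.randn(1, 784)
with torch.no_grad():            # no gradient tracking needed for inference
    start = time.perf_counter()
    out = model(x)
    elapsed = time.perf_counter() - start

print(out.shape)                 # torch.Size([1, 10])
```

In a real benchmark you would run many iterations and average, and warm up first; this only shows the shape of the setup.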


Jeremy also mentioned that inference is often better done on CPU.

Why would that be the case?

Isn't there acceleration that can be had if you have a small sliver of GPU? i.e. for the forward pass

How did he decide on the order=2 for that batch transform cb?


I sometimes use CPU for inference. I don't need the speed of the GPU.

https://aws.amazon.com/machine-learning/elastic-inference/

I haven't checked the fastai code recently, but is image augmentation implemented (or could it be implemented) as a callback?
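The idea is certainly expressible as a callback. Since fastai's callback API has changed between versions, here is a framework-agnostic sketch rather than the actual fastai code: the class and hook names (`Callback`, `begin_batch`) only mirror the fastai style and are hypothetical.

```python
import torch

class Callback:
    """Hypothetical base class: hooks default to pass-through no-ops."""
    def begin_batch(self, xb, yb):
        return xb, yb

class FlipAugCallback(Callback):
    """Randomly flips each batch horizontally before the forward pass."""
    def __init__(self, p=0.5):
        self.p = p
    def begin_batch(self, xb, yb):
        if torch.rand(1).item() < self.p:
            xb = torch.flip(xb, dims=[-1])   # flip the width dimension
        return xb, yb

# A training loop would call the hook on every batch:
cb = FlipAugCallback(p=1.0)              # p=1.0 so the flip always fires here
xb = torch.arange(6.).reshape(1, 1, 2, 3)
xb_aug, _ = cb.begin_batch(xb, None)
print(xb_aug[0, 0, 0].tolist())          # first row reversed: [2.0, 1.0, 0.0]
```

Doing augmentation in a batch-level hook like this runs the transform on whole tensors (potentially on the GPU) rather than per-item in the data loader, which is the appeal of the callback approach.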
