Lesson 3 - Official Topic

Well then, if you keep decreasing the size, performance keeps decreasing too. In many cases 224x224 is a good tradeoff between performance and compute cost/memory usage. It will depend on the particular application, though, and you will likely have to try out different sizes.

What do we need to set up on the production server to deploy the model? Fastai 2? Anything else?

Great question. This classifier will fail (for reasons we will see later tonight) because the model will want to pick just one class over the others.
You need to use a different kind of loss function to deal with images that can have multiple labels (we will see this next week, I think).
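For reference, here is a rough sketch of what that multi-label setup could look like with a fastai2 DataBlock; the `get_labels` function and `path` are placeholders for however your data is actually organised, not part of the answer above:

```python
from fastai.vision.all import *

# Hypothetical labelling function: returns a *list* of labels per image,
# e.g. ['grizzly', 'black'] if both bears appear in the picture.
def get_labels(p): return p.parent.name.split('_')

dblock = DataBlock(
    blocks=(ImageBlock, MultiCategoryBlock),   # MultiCategoryBlock -> one-hot (multi-label) targets
    get_items=get_image_files,
    get_y=get_labels,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=Resize(224))

dls = dblock.dataloaders(path)                 # 'path' assumed to hold the images
learn = cnn_learner(dls, resnet18)

# With multi-label targets fastai picks BCEWithLogitsLossFlat automatically:
# a sigmoid per class instead of a softmax across classes.
print(learn.loss_func)
```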

6 Likes

It’s best to retrain your model regularly, on a mix of new and old data. What percentage of which depends on your problem: bears don’t change, so you probably want everything, but if your data could shift, you probably want some mix like 80-90% new and 10-20% old.
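A quick sketch of what that mixing could look like, assuming your labels live in CSV files of (filename, label) rows; the filenames and the 85/15 split are just placeholders:

```python
import pandas as pd

# Hypothetical CSVs listing the old and new labelled images.
old_df = pd.read_csv('old_labels.csv')
new_df = pd.read_csv('new_labels.csv')

# Keep everything new and add enough old rows for a roughly 85% new / 15% old mix.
n_old = min(len(old_df), int(len(new_df) * 0.15 / 0.85))
retrain_df = pd.concat([new_df, old_df.sample(n=n_old, random_state=42)])
retrain_df = retrain_df.sample(frac=1, random_state=42)   # shuffle before training
retrain_df.to_csv('retrain_labels.csv', index=False)
```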

4 Likes

Regarding rapid prototyping & deployment of a model: I like Flask Restplus for a quick and dirty API that you can send to other engineers.
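Below is a minimal sketch of such an endpoint using plain Flask (Flask Restplus layers namespaces and Swagger docs on top of this). The `export.pkl` filename, route name, and port are assumptions, not part of the original suggestion:

```python
from flask import Flask, request, jsonify
from fastai.vision.all import load_learner, PILImage

app = Flask(__name__)
learn = load_learner('export.pkl')   # load the exported Learner once at startup, not per request

@app.route('/predict', methods=['POST'])
def predict():
    # Expects a multipart form upload under the field name 'file'.
    img = PILImage.create(request.files['file'].read())
    pred, pred_idx, probs = learn.predict(img)
    return jsonify({'label': str(pred), 'confidence': float(probs[pred_idx])})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```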

4 Likes

Can one reliably classify an image as belonging to one of the given classes or to none of them?
(in this case, one of the bear classes vs. not a bear?)

You could try this blog post: https://asvcode.github.io/Blogs/fastai/augmentation/image-augmentation/2020/03/26/Fastai2-Image-Augmentation.html

1 Like

Yes, you would use a sigmoid activation function, which gives the probability of each class being present. I think this may be discussed when talking about multi-label models later in the class.
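A tiny illustration of the idea (the logits and the 0.5 threshold are arbitrary choices): each class gets its own independent probability, and “none of them” is simply the case where nothing clears the threshold.

```python
import torch

logits = torch.tensor([-2.1, 0.3, -1.5])   # hypothetical raw model outputs for 3 bear classes
probs = torch.sigmoid(logits)              # independent probability per class, not a softmax
present = probs > 0.5                      # the 0.5 threshold is a tunable choice

if present.any():
    print('classes present:', torch.where(present)[0].tolist())
else:
    print('none of the classes detected')  # i.e. "not a bear"
```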

2 Likes

The purpose of the augmentation step is to better train and improve your model by exposing it to a wider variety of examples. Therefore augmentation is applied only to the training data, not to validation or test data.
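A minimal sketch of how this looks in fastai2, assuming a bears DataBlock like the one from the lesson (`path` is a placeholder): the random transforms from `aug_transforms` are applied to the training set only, while validation images just get the deterministic resize.

```python
from fastai.vision.all import *

bears = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    get_y=parent_label,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=Resize(224),
    batch_tfms=aug_transforms(mult=2))   # random flips/rotates/zooms/lighting, training set only

dls = bears.dataloaders(path)            # 'path' assumed to hold the images
dls.train.show_batch(unique=True)        # several augmented views of the same training image
dls.valid.show_batch()                   # validation images are only resized, never augmented
```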

2 Likes

So if you added a different type of bear, would you want to retrain entirely then? If not, what sort of scenarios would involve retraining on all the data?

Also, is this covered in any of the lessons? I can’t actually think of a way to train on JUST new data.

1 Like

If you want to try deployment to an Azure Function, take a look at @zenlytix’s topic.


I tried this out earlier today and it worked a treat.

2 Likes

Not only if you have a new type of bear: if you have pictures from a new region, different weather, different times of day… all of those could be useful to make your model more robust by retraining it.

3 Likes

An interesting idea, but would require a tremendous amount of training data to get the model to understand the “not bear” class!

1 Like

Would it really, though? Couldn’t we use a dataset of random images without bears? Obviously it would have to be in the wilderness so that the classifier doesn’t become a wilderness detector!

Not necessarily, just 150 new images with no bears in them would probably be a good start.

3 Likes

I thought about this after lecture 2. I realized that it is easier to create a one-class neural network than to create a “not bear” class.

2 Likes

Having done a sports identifier sample for the last course, I found my ‘not a sport’ category was very difficult to curate. A scene with grass? Must be cricket!

1 Like

Do widgets work in Colab? I have not had much success using the image cleaner in it.

Will Voilà work in Paperspace?

1 Like

Yes, definitely you have to be careful about the background. Hence I said, “it would have to be in the wilderness so that the classifier doesn’t become a wilderness detector!”

1 Like