Lesson 3 - Official Topic

Look into this maybe?:

3 Likes

You are talking about binary classification. Sure, that works, and we all know that. Multi-label classification is where the issue is.

As we will see later on, it won’t work with this model, which has to produce numbers that add up to one. You need another kind of model/loss function for this. Stay tuned for the lesson about multi-label problems :wink:
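To see why, here is a tiny PyTorch illustration (just a sketch, not the lesson code): softmax forces the outputs to sum to one, so only one class can really “win”, while an independent sigmoid per class with binary cross-entropy lets several labels be on at once.

    import torch
    import torch.nn.functional as F

    # Raw scores ("logits") for 3 classes from some model.
    logits = torch.tensor([2.0, 1.5, -1.0])

    # Single-label case: softmax forces the probabilities to sum to one,
    # so the model has to pick a single winner among the classes.
    probs_single = F.softmax(logits, dim=0)
    print(probs_single, probs_single.sum())  # sums to 1.0

    # Multi-label case: an independent sigmoid per class lets several
    # labels be "on" at once; the usual loss is binary cross-entropy.
    targets = torch.tensor([1.0, 1.0, 0.0])  # two labels present
    probs_multi = torch.sigmoid(logits)
    loss = F.binary_cross_entropy_with_logits(logits, targets)
    print(probs_multi, loss)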

2 Likes

I used that approach to offer a ‘second guess’ classification if the top predicted probability was below a threshold.
https://sportsidentifier.azurewebsites.net/
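The logic is just a thresholded top-2 lookup; roughly like this (a toy sketch, not the actual code behind the site — the class names and threshold are made up):

    import torch

    def predict_with_second_guess(probs, classes, threshold=0.8):
        """Return the top prediction, plus the runner-up when the
        top probability falls below `threshold`."""
        top2_probs, top2_idx = torch.topk(probs, k=2)
        result = [(classes[top2_idx[0]], top2_probs[0].item())]
        if top2_probs[0] < threshold:
            result.append((classes[top2_idx[1]], top2_probs[1].item()))
        return result

    # Example: the model is only 55% sure, so a second guess is shown too.
    classes = ['football', 'rugby', 'cricket']
    probs = torch.tensor([0.55, 0.35, 0.10])
    print(predict_with_second_guess(probs, classes))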

1 Like

Installing voila with these commands:

    !pip install voila
    !jupyter serverextension enable voila --sys-prefix

fails in Colab with this error:

        Enabling: voila
        - Writing config: /usr/etc/jupyter
            - Validating...
        Error loading server extension voila
              X is voila importable?

Any known solution?

1 Like

I wouldn’t have expected it to work in Colab. Remember: Google runs its own fork of Jupyter that’s not compatible with the rest of the world (sadly…)

1 Like

Sure, this is the closest we can get to addressing this at present.

ipywidgets are not supported natively in Colab. Try this:

The same problem of biased algorithms is discussed in the book Algorithms of Oppression by Safiya Noble.

1 Like

So, if a face model is trained with pictures of only one face, how can we confirm it won’t get confused when tested with different faces? Wouldn’t it need to be trained by learning the differences between faces?

It couldn’t be trained solely on one person’s face. It would have to be a dataset that contains many pictures of the person you’re trying to identify and many pictures of other random people. My point was just that you don’t have to show it every other person in order to build a classifier that works.
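For concreteness, here is a minimal fastai sketch of that kind of setup (the folder layout, filenames, and label function are just assumptions for illustration):

    from fastai.vision.all import *

    # Assumed layout: photos of the target person have filenames starting
    # with "target_", everything else is pictures of random other people.
    path = Path('faces')

    def is_target(fname):
        return fname.name.startswith('target_')

    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, seed=42,
        label_func=is_target, item_tfms=Resize(224))

    # Binary classifier: "is this the person we care about, or someone else?"
    learn = cnn_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(3)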

1 Like

Is domain shift the same as concept drift?

2 Likes

I’ll plug my project, which looks at the “out of domain” situation with different chess piece sets (as noted by @wdhorton), if you have any thoughts.

In the example of data shift where the classifier goes from bears to raccoons, should you do transfer learning from your previous model or retrain completely?

@rachel Always love your additions!

3 Likes

thanks!

If the difference is big, like bears vs. raccoons, and you already have a good pretrained model (like the ones we use at the base of our training), I would say retrain.
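In fastai terms that advice looks roughly like the sketch below (the raccoons folder layout is assumed): build DataLoaders from the new data and fine-tune from the generic ImageNet-pretrained backbone, rather than loading the old bear classifier’s weights.

    from fastai.vision.all import *

    # Assumed folder layout: raccoons/{class_name}/*.jpg for the new data.
    raccoon_dls = ImageDataLoaders.from_folder(
        Path('raccoons'), valid_pct=0.2, seed=42, item_tfms=Resize(224))

    # Start again from the generic ImageNet-pretrained backbone rather
    # than loading the old bear classifier's weights, then fine-tune.
    learn = cnn_learner(raccoon_dls, resnet34, metrics=error_rate)
    learn.fine_tune(5)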

2 Likes

One interesting example of domain shift that’s present in astrophysics is when we train models on simulated data, e.g., a simulated Universe of galaxies, but then we want to do inference on observed data from an actual telescope, where you end up with correlated noise, weird systematic effects, and physical effects that are quite difficult to simulate.

10 Likes

Very interesting. What has been done in order to solve this problem?

Jeremy, does fast.ai have methods built in that provide for incremental learning?

(i.e. improving the model slowly over time with a single data point each time?)
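To illustrate what I mean by incremental updates, here’s a toy sketch at the plain PyTorch level (purely illustrative; the model and learning rate are made up, and this isn’t a claim about what fastai provides):

    import torch
    import torch.nn as nn

    # Toy model standing in for an already-trained classifier.
    model = nn.Linear(10, 2)
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)  # small lr for gentle updates
    loss_fn = nn.CrossEntropyLoss()

    def incremental_step(x, y):
        """Take one gradient step on a single new labelled example."""
        model.train()
        opt.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        opt.step()
        return loss.item()

    # Each time a new labelled example arrives, nudge the model slightly.
    new_x, new_y = torch.randn(10), torch.tensor(1)
    print(incremental_step(new_x, new_y))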

5 Likes