Understanding Softmax/Probabilities Output on a multi-class classification problem

yes please

So you can merge the PR.

  • the plot_most_uncertain function now requires an argument for the selected class
  • the class is better documented; I have learned what docstrings are and when to use them, and I applied them properly :slight_smile:
  • I have also cleaned up the plot_val_with_title function (since the selected class (y) will always be an integer from now on, not a vector), and I added handling for the case when idxs is empty (for example, when there are no incorrectly classified images), to prevent errors.


Wow you sure learn fast! :slight_smile: Thanks for the updated PR.

BTW, just a minor matter regarding this:

    # computes the probabilities
    self.probs = np.exp(log_preds)
    # extracts the number of classes
    self.num_classes = log_preds.shape[1]

My personal belief is that comments like this are redundant (since we know from the attribute names what is being set in each case), and therefore make the code a little harder to read. My opinion on this is somewhat widely shared, but certainly some disagree, so perhaps take this merely as a tip on how to fit in with the commenting style of this particular library.
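(As an aside on what the snippet itself does: since `log_preds` holds log-probabilities, exponentiating recovers probabilities that sum to 1 per row. A minimal numpy sketch with made-up values, not the library's actual data:)

```python
import numpy as np

# Hypothetical raw scores for 2 samples and 3 classes.
logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.0]])

# log-softmax: log(exp(x) / sum(exp(x))) per row.
log_preds = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

# Exponentiating undoes the log, giving probabilities.
probs = np.exp(log_preds)
num_classes = log_preds.shape[1]

# Each row of probs now sums to 1.
print(probs.sum(axis=1))
```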


Thanks for the feedback. Yes, you’re right; I’ll keep it simple from now on, especially when the variable names are descriptive enough (and they usually should be).

Hey @alessa here’s an idea - only if you have time and it seems interesting… I’m wondering if you might consider writing a blog post about your experience of contributing to fastai. I was thinking it would be nice if there were somewhere we could point people to learn from a real world example about:

  • What is the experience of contributing like in general?
  • What is a pull request?
  • How do you create a PR (preferably, using hub)?
  • What happens after you submit a PR?
  • What is a docstring, and why should you care
  • Anything else that you took away from the experience of contributing to this open source project

The blog post has been written since then; I just haven’t found time to review it - and I found it too emotional and not technical enough :slight_smile: With the holidays I will finally have time to publish it.

I have the following error

Input type (CUDAFloatTensor) and weight type (CPUFloatTensor) should be the same

I want to try the code on the initial network with 1000 classes, so here is what I do:

sz = 224
arch = models.resnet50
bs = 64

model = arch(True)

img_path = f'{path}/train/dogs/dog.802.jpg'
i = image_loader(img_path, expand_dim=True)
i = i.cuda()

# get activations for the first nr_layers (max: 10) layers - the 11th is already FC / flatten
nr_layers = 6
tmp_model = get_activation_layer(model, nr_layers)
layer_outputs = tmp_model(Variable(i))
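(`get_activation_layer` isn't defined in this thread; a common way to write such a helper in PyTorch, assuming the model's top-level children form a flat sequence, is shown below. The helper body is a guess for illustration, not the poster's actual code.)

```python
import torch
import torch.nn as nn

def get_activation_layer(model, n):
    """Return a model made of the first n top-level layers (hypothetical helper)."""
    return nn.Sequential(*list(model.children())[:n])

# Toy stand-in for a deep network, just to show the truncation.
net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(),
                    nn.Conv2d(8, 16, 3), nn.ReLU())
tmp = get_activation_layer(net, 2)           # keep conv + relu only
out = tmp(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 8, 30, 30])
```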

The error means that you have one tensor on the GPU and another on the CPU. At some point you need to bring both onto the same device.

Maybe it is your model that lives on the CPU, since IIRC ConvnetBuilder would normally bring it over to the GPU for us when using the fastai lib.
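(In other words, the input tensor and the model's weights must live on the same device. A minimal sketch in plain PyTorch - not the exact code above - showing both possible fixes:)

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)        # weights live on the CPU by default
x = torch.randn(1, 4)          # input also starts on the CPU

if torch.cuda.is_available():
    # Fix 1: move the model to the GPU so it matches a CUDA input.
    model = model.cuda()
    x = x.cuda()
# Fix 2 (what was done in this thread): keep everything on the CPU instead.

out = model(x)                 # no device mismatch: both sides agree
print(out.shape)               # torch.Size([1, 2])
```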


Yes indeed, I fixed it the other way around, by putting the image on the CPU :slight_smile:


Here is my draft

Any feedback will be really helpful <3


This is nice! One minor correction: use a single ? to get docs in jupyter. Using ?? shows the full source code. Even more minor - there are some extra empty bullet points in the last list.
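(The `?` vs `??` distinction mirrors Python's own introspection tools. Roughly, in plain stdlib terms - using `inspect`'s own `getdoc` function as the example target:)

```python
import inspect

# `?foo` in Jupyter shows the signature and docstring, i.e. roughly:
print(inspect.getdoc(inspect.getdoc))

# `??foo` additionally shows the full source, i.e. roughly:
print(inspect.getsource(inspect.getdoc))
```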

Let me know when you’re ready to share (and tell me your twitter handle so I can give credit).


Thank you, Jeremy, for your feedback. I made the corrections; I will review it once more and then publish it.


Twitter: alessaww


I’m curious what the code change was, to put the image on the CPU.