Developer chat

def show_image(img:Image, ax:plt.Axes=None, figsize:tuple=(3,3), hide_axis:bool=True, cmap:str='binary',
               alpha:float=None, **kwargs)->plt.Axes:
    "Display Image in notebook."
    if ax is None: fig,ax = plt.subplots(figsize=figsize)
    xtr = dict(cmap=cmap, alpha=alpha, **kwargs)
    ax.imshow(image2np(img.data), **xtr) if (hasattr(img, 'data')) else ax.imshow(img, **xtr)
    if hide_axis: ax.axis('off')
    return ax

For example, I looked up the source code of the function show_image, used to visualise tensors in lesson 3. I could understand a bit of it, but not what hasattr, kwargs, or image2np are. I probably have to dive deeper to see what each means.

How can we build a facial liveness detection model, and how can I get the relevant data?

Kindly help

I remember there was a discourse, slack, or gitter where you could ask questions in real time but cannot find the link to it. Could you please share it here?

here you go → fast.ai

hasattr means "has attribute". kwargs represents keyword arguments, and ** is the unpacking operator. These are all standard Python syntax. Python is a bit different than other programming languages, so a good Python tutorial is necessary for anyone (even programmers experienced in other languages) to understand fastai code. I don't have a good Python tutorial at my fingertips right now, but I'm sure if you use the "search" function of either this forum or Google you can find a good one. If you're going to write PyTorch or fastai code this should really be step one.
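To illustrate (a small standalone example I just made up, not fastai code):

def describe(obj, **kwargs):
    # hasattr(obj, 'data') is True only if `obj` has a `.data` attribute
    if hasattr(obj, 'data'):
        print('has .data:', obj.data)
    # kwargs collects any extra keyword arguments into a dict
    print('extra keyword args:', kwargs)

opts = dict(cmap='binary', alpha=0.5)
describe('hello', **opts)   # ** unpacks the dict back into keyword arguments
# prints: extra keyword args: {'cmap': 'binary', 'alpha': 0.5}

That's exactly the pattern show_image uses: check whether img has a .data attribute, and pass any extra keyword arguments straight through to imshow.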

Hi, I don’t know how to create a separate topic, but could someone please help me figure out what the default behavior is here? The docs aren’t very specific.

I have a net like this:

import torch 
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self, pretrained=False):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1) # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        return x

I create a learner like this:

learn = cnn_learner(dls, Net, metrics=[error_rate, accuracy], model_dir="/tmp/model/").to_fp16()

So my net technically returns a vector of length 10, but the dls I have has training data labelled 0 or 1. AFAIK the default loss function is CrossEntropyLoss, but when making predictions, how does fastai convert the vector to a single number? It definitely manages to do it somehow, as I am able to get predictions back after training, but I would like specifics on what it’s doing. Is it something like a softmax?
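For what it’s worth, my guess at the decoding step is something like this (a hypothetical sketch for a batch xb, not fastai’s actual source):

import torch.nn.functional as F

logits = learn.model(xb)            # raw outputs, shape (batch, 10)
probs  = F.softmax(logits, dim=1)   # probabilities summing to 1 per row
preds  = probs.argmax(dim=1)        # single class index per item

Is that roughly what happens under the hood?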

After updating PyTorch, plot_top_losses is not showing images anymore.


How can I use the previous version while the top-losses plot is being fixed?
I just started using fastai last week. I use it in Google Colab.

!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
from fastbook import *
from fastai.vision.widgets import *

I need a reproducer to know how to debug this, please, as this works just fine on the PETS dataset without issue.

I tried with the pets dataset in Google Colab and got the same result: no images in top losses, similar to the above.
Here is a link to the notebook.

OR

!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
from fastbook import *
from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(10)

PFA the screenshot.

There is a result there; there’s one image. And the error rate is <1%, meaning in all likelihood only 1 image was misclassified. What was the accuracy of your model before?

Is this expected behaviour, to have these empty plots? I never saw them before, at least not last week during training.

Also, the confusion matrix shows 8 misclassifications.

[screenshot: confusion matrix]


Thanks! Looks like it probably stems from an error in torchvision. I’ve made Jeremy aware of the issue! :smiley:


Great, thanks. Is there a way to use the previous version for the time being?
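I guess I could pin an older fastai release with pip before importing, e.g. (version numbers here are placeholders; I'd have to check PyPI for the right pairing):

!pip install -Uqq fastai==2.3.1 torchvision==0.9.1  # placeholder versions

but I'm not sure which combination was the last one that worked.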

Hi, is there any simple way to get the validation loss per batch and save the best model via this process? E.g. every 32 steps (32 = batch size), which is fewer than the number of steps in the overall epoch, i.e. more often than the normal once-per-epoch validation loss calculation. I found some methods such as learner('after_batch') and learner('before_batch') but couldn't figure out how to get the val loss from these.
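Something like this is what I have in mind (an untested sketch using a custom Callback; the class and checkpoint names are mine, and re-running validation mid-fit may interact badly with other callbacks such as Recorder):

from fastai.callback.core import Callback

class ValEveryN(Callback):
    "Hypothetical: compute validation loss every `n` training batches."
    def __init__(self, n=32): self.n = n
    def after_batch(self):
        if not self.training or (self.iter + 1) % self.n != 0: return
        val_loss = self.learn.validate()[0]   # returns [loss, *metrics]
        self.learn.model.train()              # validate() leaves the model in eval mode
        if val_loss < getattr(self, 'best', float('inf')):
            self.best = val_loss
            self.learn.save('best_by_batch')  # hypothetical checkpoint name

which I would then pass as learn.fit_one_cycle(n_epoch, cbs=ValEveryN(32)). Is that on the right track?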

I am currently using learn.fit_one_cycle() in my training loop and then performing inference on that.

Thanks for the help

Hi,
I get this popup every time I try to do a card authentication on Azure: "Check the info you entered. It doesn't match the info for this card." I'm filling in the information correctly. Can you help me with this?

Thanks!

Hi Fastai Community,

I’m trying to use the DeepSpeed library released by Microsoft to improve training time using methods such as the ZeRO optimizations (stages 1, 2, 3), but I did not find a way to incorporate DeepSpeed into fastai. Has anyone in the fastai community tried to use DeepSpeed with fastai?
Kindly help me with this.

Regards,

Hi everyone. I would like to skip the validation step that follows each epoch while using fit_one_cycle(), and validate only when wanted. Is there a clear way to do this?

Here is a method from 2019. It does suppress the validation loss calculation.

Is there a better way with fastai2?
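The simplest thing I can think of would be a tiny callback that cancels the validation phase (a hypothetical sketch on my part, not something from the docs):

from fastai.callback.core import Callback, CancelValidException

class SkipValidation(Callback):
    "Hypothetical: cancel the per-epoch validation pass."
    def before_validate(self): raise CancelValidException()

and then train with learn.fit_one_cycle(..., cbs=SkipValidation()) and call learn.validate() manually whenever I want. Would that be the idiomatic fastai2 way?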

Thanks for your help!

Malcolm

Hi @kcturgutlu ,

I can see here that you have created a new dataloader with half the batch size of the one you trained with. Could you explain the intuition behind this?

Thanks,
Vinayak.

Hello everyone!

I was wondering if any of you would like to give feedback on this security scanner my team and I finished working on recently.

It’s basically a programme that scans PHP code and suggests how to change it if it spots an issue; it’s currently in beta.

Obviously, who better to find out whether something actually does its job than a second (or more) pair of eyes trained for this sort of thing?

I’m not a cybersecurity specialist, so I come to you, lads, lasses and the rest: help us out, please?

I’d really appreciate it if you could check it out and leave any feedback you may have in the comments to this post!

Massive cheers!