Lesson 2 official topic

Hi, I am facing an error deploying my Gradio app on a Hugging Face Space. Here is the error message:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 14, in <module>
    learn = load_learner(os.path.abspath('./export.pkl'))
  File "/home/user/.local/lib/python3.10/site-packages/fastai/learner.py", line 446, in load_learner
    try: res = torch.load(fname, map_location=map_loc, pickle_module=pickle_module)
  File "/home/user/.local/lib/python3.10/site-packages/torch/serialization.py", line 1014, in load
    return _load(opened_zipfile,
  File "/home/user/.local/lib/python3.10/site-packages/torch/serialization.py", line 1422, in _load
    result = unpickler.load()
  File "/usr/local/lib/python3.10/pathlib.py", line 962, in __new__
    raise NotImplementedError("cannot instantiate %r on your system"
NotImplementedError: cannot instantiate 'WindowsPath' on your system

Here is my app.py code:

import gradio as gr
from fastai.vision.all import *
import skimage
import pathlib
import sys
import os

if sys.platform == "win32":
    path_class = pathlib.WindowsPath
else:
    path_class = pathlib.PosixPath


learn = load_learner(os.path.abspath('./export.pkl'))



labels = learn.dls.vocab
def predict(img):
    img = PILImage.create(img)
    pred,pred_idx,probs = learn.predict(img)
    return {labels[i]: float(probs[i]) for i in range(len(labels))}

title = "Pet Breed Classifier"
description = "A pet breed classifier trained on the Oxford Pets dataset with fastai. Created as a demo by Pranab Sarma."
article="<p style='text-align: center'><a href='https://tmabraham.github.io/blog/gradio_hf_spaces_tutorial' target='_blank'>Blog post</a></p>"
examples = ['siamese.webp']
interpretation='default'
enable_queue=True

gr.Interface(fn=predict,inputs=gr.Image(height=512, width=512),
            outputs=gr.Label(num_top_classes=3),
            title=title,description=description,article=article,examples=examples).launch()

UPDATE: I read over the gradio documentation (should have done so earlier), and it seems .inputs is no longer used, so I went with gr.Interface(fn=classify_image, inputs=gr.Image(type="pil"), outputs=gr.Label(), examples=['dog.jpg', 'cat.jpg']) as my interface, and it worked like a charm! I'll be keeping a closer eye on the updated docs from here on out.
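For anyone who hits the WindowsPath traceback itself (rather than the inputs issue): the widely shared workaround for loading an export.pkl that was exported on Windows onto a Linux host is to alias pathlib.WindowsPath to pathlib.PosixPath before calling load_learner. A minimal sketch of that monkey-patch:

import pathlib
import sys

from fastai.vision.all import load_learner

# The pickle references WindowsPath objects; on Linux, substitute PosixPath
# so unpickling can proceed. (Sketch only -- note the snippet in the question
# assigns path_class but never uses it, which is why the error persists.)
if sys.platform != "win32":
    pathlib.WindowsPath = pathlib.PosixPath

learn = load_learner('export.pkl')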


Hi there, I keep getting an error in my Jupyter notebook with gradio:
AttributeError: module 'gradio' has no attribute 'inputs'

I can't find any suitable solution online except to totally torch my env setup on Ubuntu and start over, which I'd like to avoid if there's anything obvious I can try. Anyone have any ideas, or has anyone run into this before?

Python 3.10.12
mamba 1.4.2
conda 23.3.1
gradio 4.8.0
pytorch 2.1.1

I initially installed gradio using mamba, then uninstalled it and reinstalled with pip as suggested on the gradio site, but this did not correct the issue. I don't have a stray .py file in my folder either - naming a script gradio.py can shadow the module, but there is no such file here, so I'm out of ideas on what may be going wrong. It's definitely installed, since it imports fine and Interface is found; it just can't find inputs for some reason.



Is anyone else having this problem in Kaggle:

bears = bears.new(item_tfms=Resize(128, ResizeMethod.Squish))
dls = bears.dataloaders(path)
dls.valid.show_batch(max_n=4, nrows=1)

TypeError                                 Traceback (most recent call last)
Cell In[26], line 2
      1 bears = bears.new(item_tfms=Resize(128, ResizeMethod.Squish))
----> 2 dls = bears.dataloaders(path)
      3 dls.valid.show_batch(max_n=4, nrows=1)

File /opt/conda/lib/python3.10/site-packages/fastai/data/block.py:155, in DataBlock.dataloaders(self, source, path, verbose, **kwargs)
    149 def dataloaders(self,
    150     source, # The data source
    151     path:str='.', # Data source and default Learner path
    152     verbose:bool=False, # Show verbose messages
    153     **kwargs
    154 ) -> DataLoaders:
--> 155     dsets = self.datasets(source, verbose=verbose)
    156     kwargs = {**self.dls_kwargs, **kwargs, 'verbose': verbose}
    157     return dsets.dataloaders(path=path, after_item=self.item_tfms, after_batch=self.batch_tfms, **kwargs)

File /opt/conda/lib/python3.10/site-packages/fastai/data/block.py:147, in DataBlock.datasets(self, source, verbose)
    145 splits = (self.splitter or RandomSplitter())(items)
    146 pv(f"{len(splits)} datasets of sizes {','.join([str(len(s)) for s in splits])}", verbose)
--> 147 return Datasets(items, tfms=self._combine_type_tfms(), splits=splits, dl_type=self.dl_type, n_inp=self.n_inp, verbose=verbose)

File /opt/conda/lib/python3.10/site-packages/fastai/data/core.py:454, in Datasets.__init__(self, items, tfms, tls, n_inp, dl_type, **kwargs)
    445 def __init__(self,
    446     items:list=None, # List of items to create Datasets
    447     tfms:MutableSequence|Pipeline=None, # List of Transform(s) or Pipeline to apply
        (...)
    451     **kwargs
    452 ):
    453     super().__init__(dl_type=dl_type)
--> 454     self.tls = L(tls if tls else [TfmdLists(items, t, **kwargs) for t in L(ifnone(tfms,[None]))])
    455     self.n_inp = ifnone(n_inp, max(1, len(self.tls)-1))

File /opt/conda/lib/python3.10/site-packages/fastai/data/core.py:454, in <listcomp>(.0)
    445 def __init__(self,
    446     items:list=None, # List of items to create Datasets
    447     tfms:MutableSequence|Pipeline=None, # List of Transform(s) or Pipeline to apply
        (...)
    451     **kwargs
    452 ):
    453     super().__init__(dl_type=dl_type)
--> 454     self.tls = L(tls if tls else [TfmdLists(items, t, **kwargs) for t in L(ifnone(tfms,[None]))])
    455     self.n_inp = ifnone(n_inp, max(1, len(self.tls)-1))

File /opt/conda/lib/python3.10/site-packages/fastcore/foundation.py:98, in _L_Meta.__call__(cls, x, *args, **kwargs)
     96 def __call__(cls, x=None, *args, **kwargs):
     97     if not args and not kwargs and x is not None and isinstance(x,cls): return x
---> 98     return super().__call__(x, *args, **kwargs)

File /opt/conda/lib/python3.10/site-packages/fastai/data/core.py:368, in TfmdLists.__init__(self, items, tfms, use_list, do_setup, split_idx, train_setup, splits, types, verbose, dl_type)
    366 if do_setup:
    367     pv(f"Setting up {self.tfms}", verbose)
--> 368     self.setup(train_setup=train_setup)

File /opt/conda/lib/python3.10/site-packages/fastai/data/core.py:397, in TfmdLists.setup(self, train_setup)
    395     x = f(x)
    396     self.types.append(type(x))
--> 397 types = L(t if is_listy(t) else [t] for t in self.types).concat().unique()
    398 self.pretty_types = '\n'.join([f' - {t}' for t in types])

TypeError: 'NoneType' object is not iterable
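One common cause of this particular TypeError (an educated guess, not confirmed in the thread) is that the item list handed to the DataBlock is empty -- for example, the image search or download step failed silently -- so TfmdLists.setup never populates self.types and it stays None. A quick check before calling dataloaders:

from fastai.vision.all import *

# 'bears' is the path used in the notebook; substitute your own
path = Path('bears')

# If this prints 0, the downloads failed and dataloaders() will raise
fns = get_image_files(path)
print(len(fns))

# Remove any files that downloaded but cannot be opened as images
failed = verify_images(fns)
failed.map(Path.unlink)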

Hey everyone!

  • How do I figure out how many examples there are in the training and validation sets after data augmentation?

  • What’s the best way to keep track of training/validation losses while training the learner?

The kernel restart tip was very helpful with Google Colab!


We don't create a fixed number of augmented images. Whenever an image is needed for training, a random augmentation (say, a 4.5-degree rotation) is applied before it is used, so each time we get a slightly different (augmented) image.
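To make that concrete, and to answer the loss-tracking question: dataset sizes are unchanged by augmentation (transforms run per batch), and fastai's recorder -- or a CSVLogger callback -- keeps per-epoch losses. A minimal sketch, assuming dls is an existing DataLoaders:

from fastai.vision.all import *

# Augmentation does not add examples: sizes are the same before and after
print(len(dls.train_ds), len(dls.valid_ds))

# CSVLogger appends train/valid loss and metrics per epoch to history.csv
learn = vision_learner(dls, resnet18, metrics=error_rate, cbs=CSVLogger())
learn.fine_tune(1)

# The same values stay in memory on the recorder, which can also plot them
print(learn.recorder.values)
learn.recorder.plot_loss()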

Thank you very much! :)

Does anyone have advice on how I could include a 'None of the Above' label for image classification? I was thinking of unsupervised learning, but I'm not too sure.

I just don't think it's realistic to feed the model enough 'garbage' images, and it would require a lot of labelling. I also thought of setting a threshold where, if the model is not confident enough, the image would be labelled 'None of the Above'. However, I don't think that is a guarantee, as the model could still try to find some pattern from its training.
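A minimal sketch of that threshold idea -- the 0.8 cutoff is an arbitrary assumption to tune on held-out data, and as noted it is no guarantee against confidently wrong predictions:

# Reject low-confidence predictions instead of forcing a known label
def predict_with_reject(learn, img, threshold=0.8):
    pred, pred_idx, probs = learn.predict(img)
    if float(probs.max()) < threshold:  # 0.8 is an arbitrary assumption
        return 'None of the Above'
    return pred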

Hi everyone,

I'm a bit confused about what I have to do in this lesson. I've watched the video; is following along enough?

Forgive me if this is basic/obvious; I'm just confused.

Hi Bencoman,

I was also getting the gradio "has no attribute 'inputs'" error (in Hugging Face Spaces). I resolved it by updating (or rather "backdating") the gradio sdk_version in the README.md file from 4.12.0 to 2.9.4.

Gradio has deprecated gr.inputs in the newer versions.
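For reference, the field lives in the YAML front matter at the top of the Space's README.md -- roughly like this, where everything except sdk_version is a placeholder:

---
title: Pet Breed Classifier
sdk: gradio
sdk_version: 2.9.4
app_file: app.py
---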


Why does Jeremy have two conda environments, (base)/(main) or (base)/(master), around the 56-minute mark when he is demonstrating his terminal?

Hi there: after changing search_images_ddg, I'm stuck on the same problem:

dls = bears.dataloaders(path)
TypeError                                 Traceback (most recent call last)
in <cell line: 1>()
----> 1 dls = bears.dataloaders(path)

6 frames
/usr/local/lib/python3.10/dist-packages/fastai/data/core.py in setup(self, train_setup)
    395     x = f(x)
    396     self.types.append(type(x))
--> 397 types = L(t if is_listy(t) else [t] for t in self.types).concat().unique()
    398 self.pretty_types = '\n'.join([f' - {t}' for t in types])
    399

TypeError: 'NoneType' object is not iterable

I can't seem to proceed; I'd appreciate some help. Thank you in advance.


AttributeError: module 'gradio' has no attribute 'inputs'
The issue is mentioned here; the recording's code can be updated by changing gr.inputs.Image(…) and gr.outputs.Label(…) to gr.Image(…) and gr.Label(…) respectively.
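Concretely, the change looks like this (minimal sketch; predict is assumed to be defined as in the earlier posts):

import gradio as gr

# Old style (removed in gradio 4.x -- raises the AttributeError above):
# gr.Interface(fn=predict, inputs=gr.inputs.Image(), outputs=gr.outputs.Label())

# Current style:
gr.Interface(fn=predict, inputs=gr.Image(type="pil"), outputs=gr.Label()).launch()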

Like several others on this forum, I noticed that Hugging Face Spaces no longer exposes API access to the hosted model the way it did when the video was recorded.

As far as I can tell, there isn't any easy way to access the model from a plain HTML page now, even though Spaces does offer Python and JavaScript API options.

The JavaScript option requires the @gradio/client NPM module. I fiddled around with Browserify to see if it could package that library up for the browser, but that didn't work. I did see a reference to Browserify being mostly unsupported at this point, so maybe that's why.

So it looks like API access to an HF Space now requires running either a Python webserver or Node.js, with the appropriate gradio client library installed.
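For the Python route, the gradio_client package is enough on its own -- a minimal sketch, where the Space id and api_name are placeholders (the real values are shown in the "Use via API" panel on the Space page):

# pip install gradio_client
from gradio_client import Client

client = Client("user/space-name")  # placeholder Space id
# Newer client versions may want gradio_client.handle_file(...) for file args
result = client.predict("siamese.webp", api_name="/predict")
print(result)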

As of January 2024, I had all sorts of Python dependency issues trying to set things up locally for Chapter 2. Everything worked great in Colab, but since I prefer to run things locally so I understand them, I've written a blog post describing how I got things running. Much of it is probably covered in this thread, which is hard to dig through given its length, so hopefully it helps someone; if not, well, at least I wrote my first blog post.


Your blog post is very helpful, thanks. I've been sitting here for a couple of hours now, having watched the course and read up to the part in the book where we have to get an Azure key… and trying to import the lesson 2 notebook into Kaggle yields a blank screen. I was about to follow the steps in your post and get Miniforge, but the Miniforge README says: "Apple silicon builds are experimental and haven't had testing like the other platforms." So now I'm wondering if I should use something else. Do you have any suggestions? I'm on an Apple M3.

In the meantime, I seem to have no such issue in Colab… I am kind of shocked by the night-and-day difference between Kaggle and Colab on this one. What am I missing?!

I suggest you work in Colab if it works for you. I would not spend a lot of time trying to set up a local environment; it can be a headache at times. If you are very serious about deep learning, you could build a machine with an Nvidia GPU, or use other computing resources online.


Hi, thanks for your quick reply! I will proceed with Colab for the time being, and hopefully it continues to work in future lessons. Can you please help me understand why using Kaggle equates to setting up a local environment?

I don't think using Kaggle equates to setting up a local environment.
Kaggle has great GPUs, but its CPUs are very weak, so the GPU ends up waiting on the CPU.
You can do anything with a local environment, so it is easier.
