Lesson 2 official topic

I noticed the same thing when I recently set up a local Ubuntu environment in WSL as described in the second course. Apparently, all of the functionality of mambaforge is contained within miniforge now. I went with the miniforge install and it worked as expected.


This still needs to be updated. You can follow along in the video and perhaps even submit a PR!

Does anyone know where I can find the duckduckgo (ddg) version of the 02_production.ipynb notebook?

Jeremy mentioned in this lesson that he would upload this notebook, but I couldn’t find a link to it in this topic.

I don’t know if or where that ddg version of 02_production.ipynb exists, but for reference, in case you haven’t seen it, there is a ddg-based approach shown for a different use case (identifying birds) in this Kaggle notebook by Jeremy.
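For reference, a minimal sketch of that ddg-based search approach (hedged: duckduckgo_search’s API has changed across versions, so the exact names here are assumptions rather than the notebook’s code):

from duckduckgo_search import DDGS

def search_images(term, max_images=30):
    # collect image URLs for a search term
    with DDGS() as ddgs:
        return [r['image'] for r in ddgs.images(term, max_results=max_images)]

urls = search_images('bird photos', max_images=5)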


Hi,
I am new to the course and this one question seems to keep popping into my mind: say we have a Cat_or_Dog model, is there any way to classify an image as neither cat nor dog? That is, the model should predict whether the image is a cat, a dog, or something else.
How should this problem be handled?

Thanks.

I haven’t tackled this problem myself, but here are some resources that I’ve found in the forums:

  • A notebook example which uses MultiCategoryBlock and BCEWithLogitsLossFlat to create a model that does not return a label for data from a class that’s not in the training set (a minimal sketch of that setup follows this list).
  • This and this forum topic, where folks are discussing this issue.
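
A minimal sketch of that setup, assuming a path pointing at folders of labeled images (names here are hypothetical, not the notebook’s exact code): wrapping each single label in a list makes MultiCategoryBlock treat the task as multi-label, and BCEWithLogitsLossFlat gives every class an independent sigmoid, so an out-of-distribution image can score below the threshold on all classes and receive no label.

from fastai.vision.all import *

def label_func(p): return [p.parent.name]  # wrap the single label in a list

dblock = DataBlock(
    blocks=(ImageBlock, MultiCategoryBlock),  # multi-label target
    get_items=get_image_files,
    get_y=label_func,
    splitter=RandomSplitter(seed=42),
    item_tfms=Resize(224))
dls = dblock.dataloaders(path)  # hypothetical path to the image folders

learn = vision_learner(dls, resnet18,
                       loss_func=BCEWithLogitsLossFlat(),  # per-class sigmoid
                       metrics=accuracy_multi)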

Hi @vbakshi, thanks! I guess the example you shared comes close to my use case. I still have a lot of reading to catch up on and will need to pick up the pace. Thanks again.


Adding the following before load_learner resolved it for me:

def is_cat(x): return x[0].isupper() 

I was getting the same error running locally in a notebook, whether the original pickle was generated locally or in Kaggle:

AttributeError: Custom classes or functions exported with your `Learner` not available in namespace.
Re-declare/import before loading:
	Can't get attribute 'is_cat' on <module '__main__'>
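
For context, a sketch of why re-declaring works (the export.pkl path is assumed): the exported pickle stores only a reference to is_cat by name and looks it up in __main__ when loading.

from fastai.vision.all import *

# Re-declare the labeling function used at export time; load_learner's
# unpickling resolves it by name in __main__.
def is_cat(x): return x[0].isupper()

learn = load_learner('export.pkl')  # assumed path to the exported model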

Hey guys, currently following Tanishq’s blog post but am having trouble importing gradio. Can anyone point me in the right direction here? Thx!


Have you installed Gradio?

pip install gradio

Also try a kernel restart after the install.

For those who deploy applications on Hugging Face Spaces but have been stuck at “Building…”: you can refer to my Space. Thanks to Tanishq’s blog, I finally got the application running.


Hello all,

I am relatively inexperienced in Python, hence my question. In the second lesson of Part 1, I see the following sentence:

To remove all the failed images, you can use unlink on each of them. Note that, like most fastai functions that return a collection, verify_images returns an object of type L, which includes the map method. This calls the passed function on each element of the collection:

Could somebody explain the above to me? I have trouble understanding it. What does it mean that L includes the map method?

I know that Python has its own built-in map function, which applies a function to each element of an iterable. How is that different?
Instead of the line below

failed.map(Path.unlink);

in standard Python I would have written

deleted = map(Path.unlink, failed)

Is it different?

Thank you in advance

L is like an advanced list, defined in fastcore (here are the docs that explain the different methods available for an L object). L.map is one of those methods. Here is the source code for L.map which in turn calls the fastcore-defined map_ex function which seems like a wrapper around the default python map function.

I think the two methods you have listed yield the same end result, with one difference worth noting: Python’s built-in map returns a lazy iterator, so nothing is deleted until you consume it (e.g. with list(...)), while L.map runs eagerly and returns a new L.

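A minimal sketch of both approaches (the file names are hypothetical):

from pathlib import Path
from fastcore.foundation import L

# create two throwaway files to delete
failed = L(Path('bad1.jpg'), Path('bad2.jpg'))
for p in failed: p.touch()

failed.map(Path.unlink)  # eager: deletes the files immediately, returns an L
# map(Path.unlink, failed) by itself deletes nothing; the calls only run
# when the iterator is consumed, e.g. list(map(Path.unlink, failed))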

Hi, I am facing an error when deploying the Gradio app on a Hugging Face Space. Here is my error message:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 14, in <module>
    learn = load_learner(os.path.abspath('./export.pkl'))
  File "/home/user/.local/lib/python3.10/site-packages/fastai/learner.py", line 446, in load_learner
    try: res = torch.load(fname, map_location=map_loc, pickle_module=pickle_module)
  File "/home/user/.local/lib/python3.10/site-packages/torch/serialization.py", line 1014, in load
    return _load(opened_zipfile,
  File "/home/user/.local/lib/python3.10/site-packages/torch/serialization.py", line 1422, in _load
    result = unpickler.load()
  File "/usr/local/lib/python3.10/pathlib.py", line 962, in __new__
    raise NotImplementedError("cannot instantiate %r on your system"
NotImplementedError: cannot instantiate 'WindowsPath' on your system

Here is my app.py code:

import gradio as gr
from fastai.vision.all import *
import skimage
import pathlib
import sys
import os

if sys.platform == "win32":
    path_class = pathlib.WindowsPath
else:
    path_class = pathlib.PosixPath


learn = load_learner(os.path.abspath('./export.pkl'))



labels = learn.dls.vocab
def predict(img):
    img = PILImage.create(img)
    pred,pred_idx,probs = learn.predict(img)
    return {labels[i]: float(probs[i]) for i in range(len(labels))}

title = "Pet Breed Classifier"
description = "A pet breed classifier trained on the Oxford Pets dataset with fastai. Created as a demo by Pranab Sarma."
article="<p style='text-align: center'><a href='https://tmabraham.github.io/blog/gradio_hf_spaces_tutorial' target='_blank'>Blog post</a></p>"
examples = ['siamese.webp']
interpretation='default'
enable_queue=True

gr.Interface(fn=predict,inputs=gr.Image(height=512, width=512),
            outputs=gr.Label(num_top_classes=3),
            title=title,description=description,article=article,examples=examples).launch()
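
In case it helps, here is the workaround commonly used for this error (a sketch, not something confirmed in this thread): the export.pkl was created on Windows, so it pickles WindowsPath objects that Linux cannot instantiate. Aliasing WindowsPath to PosixPath before loading sidesteps that. Note that the path_class variable above is assigned but never used, so it currently has no effect.

import pathlib
import sys
from fastai.vision.all import load_learner

# On Linux (e.g. a Hugging Face Space), let WindowsPath unpickle as PosixPath
if sys.platform != "win32":
    pathlib.WindowsPath = pathlib.PosixPath

learn = load_learner('export.pkl')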

UPDATE: Read over the Gradio documentation (should have done that earlier), and it seems .inputs is no longer used, so I just went with gr.Interface(fn=classify_image, inputs=gr.Image(type="pil"), outputs=gr.Label(), examples=['dog.jpg', 'cat.jpg']) as my interface and it worked like a charm! Will be keeping a closer eye on the updated docs from here on out.


Hi there, I keep getting an error in my Jupyter notebook for Gradio:
AttributeError: module 'gradio' has no attribute 'inputs'

I can’t find any suitable solution online except to totally torch my env setup on Ubuntu and start over, which I’d like to avoid if there’s anything obvious I can try. Anyone have any ideas or has run into this before?

Python 3.10.12
mamba 1.4.2
conda 23.3.1
gradio 4.8.0
pytorch 2.1.1

I initially installed Gradio using mamba, then uninstalled it and used pip to install, as suggested on the Gradio site, but this did not correct the issue. I do not yet have a .py file in my folder; naming a script after gradio can shadow the package, but I do not have such a file here, so I’m out of ideas on what may be going wrong. It’s definitely installed, as it does import and can easily find Interface, just not inputs for some reason.
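
Not certain this is your issue, but given gradio 4.8.0 in your version list: the gradio.inputs and gradio.outputs modules were removed in Gradio 4.x, so any code referencing them raises exactly this AttributeError. Components are now top-level, e.g. (a minimal sketch with a hypothetical function):

import gradio as gr

def classify(text): return text.upper()  # stand-in for a real model

# 3.x style (fails on 4.x): gr.Interface(fn=classify, inputs=gr.inputs.Textbox(), ...)
demo = gr.Interface(fn=classify, inputs=gr.Textbox(), outputs=gr.Textbox())
# demo.launch()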



Is anyone else having this problem in Kaggle:

bears = bears.new(item_tfms=Resize(128, ResizeMethod.Squish))
dls = bears.dataloaders(path)
dls.valid.show_batch(max_n=4, nrows=1)

TypeError                                 Traceback (most recent call last)
Cell In[26], line 2
      1 bears = bears.new(item_tfms=Resize(128, ResizeMethod.Squish))
----> 2 dls = bears.dataloaders(path)
      3 dls.valid.show_batch(max_n=4, nrows=1)

File /opt/conda/lib/python3.10/site-packages/fastai/data/block.py:155, in DataBlock.dataloaders(self, source, path, verbose, **kwargs)
    149 def dataloaders(self,
    150     source, # The data source
    151     path:str='.', # Data source and default Learner path
    152     verbose:bool=False, # Show verbose messages
    153     **kwargs
    154 ) -> DataLoaders:
--> 155     dsets = self.datasets(source, verbose=verbose)
    156     kwargs = {**self.dls_kwargs, **kwargs, 'verbose': verbose}
    157     return dsets.dataloaders(path=path, after_item=self.item_tfms, after_batch=self.batch_tfms, **kwargs)

File /opt/conda/lib/python3.10/site-packages/fastai/data/block.py:147, in DataBlock.datasets(self, source, verbose)
    145 splits = (self.splitter or RandomSplitter())(items)
    146 pv(f"{len(splits)} datasets of sizes {','.join([str(len(s)) for s in splits])}", verbose)
--> 147 return Datasets(items, tfms=self._combine_type_tfms(), splits=splits, dl_type=self.dl_type, n_inp=self.n_inp, verbose=verbose)

File /opt/conda/lib/python3.10/site-packages/fastai/data/core.py:454, in Datasets.__init__(self, items, tfms, tls, n_inp, dl_type, **kwargs)
    445 def __init__(self,
    446     items:list=None, # List of items to create Datasets
    447     tfms:MutableSequence|Pipeline=None, # List of Transform(s) or Pipeline to apply
        (...)
    451     **kwargs
    452 ):
    453     super().__init__(dl_type=dl_type)
--> 454     self.tls = L(tls if tls else [TfmdLists(items, t, **kwargs) for t in L(ifnone(tfms,[None]))])
    455     self.n_inp = ifnone(n_inp, max(1, len(self.tls)-1))

File /opt/conda/lib/python3.10/site-packages/fastai/data/core.py:454, in <listcomp>(.0)
    445 def __init__(self,
    446     items:list=None, # List of items to create Datasets
    447     tfms:MutableSequence|Pipeline=None, # List of Transform(s) or Pipeline to apply
        (...)
    451     **kwargs
    452 ):
    453     super().__init__(dl_type=dl_type)
--> 454     self.tls = L(tls if tls else [TfmdLists(items, t, **kwargs) for t in L(ifnone(tfms,[None]))])
    455     self.n_inp = ifnone(n_inp, max(1, len(self.tls)-1))

File /opt/conda/lib/python3.10/site-packages/fastcore/foundation.py:98, in _L_Meta.__call__(cls, x, *args, **kwargs)
     96 def __call__(cls, x=None, *args, **kwargs):
     97     if not args and not kwargs and x is not None and isinstance(x,cls): return x
---> 98     return super().__call__(x, *args, **kwargs)

File /opt/conda/lib/python3.10/site-packages/fastai/data/core.py:368, in TfmdLists.__init__(self, items, tfms, use_list, do_setup, split_idx, train_setup, splits, types, verbose, dl_type)
    366 if do_setup:
    367     pv(f"Setting up {self.tfms}", verbose)
--> 368     self.setup(train_setup=train_setup)

File /opt/conda/lib/python3.10/site-packages/fastai/data/core.py:397, in TfmdLists.setup(self, train_setup)
    395     x = f(x)
    396     self.types.append(type(x))
--> 397 types = L(t if is_listy(t) else [t] for t in self.types).concat().unique()
    398 self.pretty_types = '\n'.join([f'  - {t}' for t in types])

TypeError: 'NoneType' object is not iterable

Hey everyone!

  • How do I figure out how many examples there are in the training and validation sets after data augmentation?

  • What’s the best way to keep track of training/validation losses while training the learner? (one option is sketched below)
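
For the second question, one option (a minimal sketch, assuming a learn object as in the lesson): fastai’s Recorder tracks losses during training automatically.

learn.fine_tune(3)
learn.recorder.plot_loss()    # training and validation loss curves
print(learn.recorder.values)  # per-epoch [train_loss, valid_loss, metrics...]

The CSVLogger callback is another option if you want the values written to a file.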

The kernel restart was very helpful with Google Colab!


We don’t create a fixed number of augmented images. Whenever we need an image for training, a random augmentation (say, rotate 4.5 degrees) is applied before it is used, so each time we get a slightly different (augmented) image.
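
To make that concrete, a sketch assuming the bears DataBlock and path from earlier in the lesson, with batch_tfms=aug_transforms() (which also answers the first question above: the dataset size is unchanged by augmentation):

dls = bears.dataloaders(path)
print(len(dls.train_ds), len(dls.valid_ds))  # same counts as the source images
xb, yb = dls.train.one_batch()   # random augmentations applied on the fly...
xb2, yb2 = dls.train.one_batch() # ...so repeated draws differ slightly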

Thank you very much! 🙂