Lesson 2 official topic

If anyone wants help with spinning up a front end for their lesson 2 projects, I’d be happy to help for free. Jeremy goes into a bit of detail about how to do this at the end of the lesson, but if you have something that you really like and you want to share it with the world and maybe add some more functionality or information, I’d be happy to help. This is not a commercial solicitation - I’m totally open to just helping fellow students. I really liked the cloud recognition model that I trained, so I put together a simple front end here: https://cloudatlasai.netlify.app/ Feel free to reach out if you could use any help in that area. It was fun to make something that I actually enjoy using and sharing!

Video demo here: CloudAtlas UI demo - YouTube


Having a really stupid issue with my code. I’ve tried debugging it multiple times, but it does not seem to work. I’m basically getting the following when pushing app.py to a Space:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 7, in <module>
    learn = load_learner(r'C:\Users\...\OneDrive\...\export.pkl')
  File "/home/user/.local/lib/python3.10/site-packages/fastai/learner.py", line 446, in load_learner
    try: res = torch.load(fname, map_location=map_loc, pickle_module=pickle_module)
  File "/home/user/.local/lib/python3.10/site-packages/torch/serialization.py", line 791, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/user/.local/lib/python3.10/site-packages/torch/serialization.py", line 271, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/home/user/.local/lib/python3.10/site-packages/torch/serialization.py", line 252, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\...\\OneDrive\\...\\export.pkl'

Here is the relevant bit of the code:

learn = load_learner(r'C:\Users\...\OneDrive\...\export.pkl')

I’ve tried escaping the backslashes instead of using a raw string, and tried moving the .pkl file, but I always get the same error.
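For reference, a Hugging Face Space runs in a Linux container, so a Windows path like the one above will never exist there; the usual approach is to commit export.pkl to the Space repository next to app.py and load it with a relative path. A minimal sketch, assuming export.pkl has been uploaded alongside app.py:

from fastai.vision.all import *

# export.pkl sits in the same repo as app.py, so a relative path resolves inside the Space
learn = load_learner('export.pkl')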

Hello,

Just for reference, I am not a computer scientist by any means, so I really don’t know what I am talking about, but I do use Python for work (basically all in Anaconda). I am at around 46:15 in the video, the point where Jeremy creates a simple interface and then hits ‘launch’ in his Jupyter notebook. It looks like he is working in a locally hosted notebook; I am working in Google Colab. Everything in my model works, and it is classifying appropriately. The .jpg files and the .pkl file are presumably hosted in Google Drive (tbh I am still struggling to understand how to navigate drives and where exactly everything needs to be saved). When I run this code block in the Google Colab notebook:

#/export
image = gr.inputs.Image(shape=(192,192))
label = gr.outputs.Label()
examples = ['dog.jpg', 'cat.jpg', 'fox.jpg']
intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples)
intf.launch(inline=False)

I get this message, and nothing happens, i.e. no Gradio interface opens:

I understand the first half: basically Gradio got updated and the old methods are being phased out. The second half, though, I don’t know what it means. Do I have to run this notebook locally? Could I run this as a .py file in VS Code, with the .pkl and images in my working directory? I could open a Jupyter notebook locally through Anaconda; should I do that instead?

I am really stumped by this and tbh kind of frustrated. Why do the book and the video do all of this in such different ways? I set up VS Code, WSL2 Ubuntu, and Kaggle, for what reason? Tbh I would prefer to just learn one way that works, and branch out from there as necessary.

Hi all, first AI course, first post, so huge thanks, this is fun!

I’ve noticed that Hugging Face Spaces has changed its API documentation since the lesson 2 video. Even a pinned tutorial from ilovescience has a link to the old API that doesn’t work anymore.

Now they only seem to have API documentation for Python or JavaScript that uses Gradio. That’s fine, but I’m a long-time developer and want to know the actual JSON API spec, like the one they used to provide with curl examples, which is no longer there.

e.g. I want to know what the POST body should look like, what HTTP status codes are returned in which cases, etc., with a view to building my own consumer without using Gradio.

Does this doco still exist somewhere?

thanks :slight_smile:


I don’t really know gradio, so not sure if this is applicable, but I notice it says…
“To create a public link, set share=True in launch()”
and you only have “launch(inline=False)”

Now there is a whole lot going on in those few lines of code.
You don’t say that you ever had Gradio working, so try starting with the simplest possible Gradio app and work up from there, adding just one thing at a time to distinguish what works and what doesn’t. i.e. try googling: gradio colab hello world
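For example, something along these lines is about as small as a Gradio app gets; a minimal sketch (the greet function is purely illustrative, not from the lesson):

import gradio as gr

def greet(name):
    return f"Hello, {name}!"

# share=True gives a public URL, which is what you want from inside Colab
gr.Interface(fn=greet, inputs="text", outputs="text").launch(share=True)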

Thank you Ben, that fixed the issue with the Gradio app launching; I should have been able to catch such a simple error.


Hi Kamui, thanks for your help here; I had the same problem. But after making the changes, I still get the same error. I wonder if it has anything to do with the path.

path = Path('bears')
path

returns:

Path('bears')

The same problem occurs as in the OP:

---------------------------------------------------------------------------

TypeError                                 Traceback (most recent call last)

<ipython-input-52-add9d3ce560a> in <cell line: 1>()
----> 1 dls = bears.dataloaders(path)

6 frames

/usr/local/lib/python3.10/dist-packages/fastai/data/core.py in setup(self, train_setup)
    395                 x = f(x)
    396             self.types.append(type(x))
--> 397         types = L(t if is_listy(t) else [t] for t in self.types).concat().unique()
    398         self.pretty_types = '\n'.join([f'  - {t}' for t in types])
    399 

TypeError: 'NoneType' object is not iterable

Any ideas what I can do to fix it? I want to carry on with the production model, but this is tripping me up. The lesson 2 notebook on Kaggle is completely different.

Thanks for any help.

Are you using search_images_ddg? Because it seems that there is a problem with it.
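One way to narrow it down is to check whether the download step actually left any usable images under ‘bears’ before building the DataLoaders; a quick sketch along these lines, using the verify_images helper from the book:

from fastai.vision.all import *

path = Path('bears')
fns = get_image_files(path)
print(len(fns))            # if this prints 0, the image download failed and dataloaders(path) will break

failed = verify_images(fns)
failed.map(Path.unlink)    # remove corrupt downloads before calling bears.dataloaders(path)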

Hi everyone, I worked through my setup issues for the most part, but I am still having a few errors. I managed to get everything to work up until around 50:00. I compiled my model, then pushed it to Hugging Face using git commands in Ubuntu, but when I click on it in my Hugging Face account, I get a runtime error:

[screenshot of the runtime error]

So Hugging Face does not natively have the fastai library, but I am struggling to figure out how to install it, and my Google fu is proving too weak for this.

OK, if anyone else is having this issue: I fixed it by adding a text file called ‘requirements.txt’ to the folder with my model.pkl and app.py files, which simply lists the names of the libraries I needed, namely ‘fastai’ and ‘gradio’.
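For anyone following along, that requirements.txt is just a plain text file with one package name per line, e.g.:

fastai
gradio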

Thanks for the response.
DDG seems to work; it pulls up the images fine. The classifier works too.

I had tried to use a separate notebook for the production part, so I tried various ways to find the location of the .pkl file exported from the previous notebook. But I couldn’t find the correct path, either for the .pkl file as hosted on Kaggle or by specifying the path to the file downloaded to my computer.

For input files it’s easy to get their path, but not so for output files, which I tried to specify as an input to the production notebook.

Anyway, a workaround is to continue using the same notebook. Perhaps it’s a persistence issue, and the file is no longer available when you use a different notebook. But it seems it should be possible to make it available. No idea why I couldn’t specify a full path.

If you can offer any insight or suggestions on the above, I’d appreciate it. But at least now I’m able to continue on.
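In case it helps: on Kaggle, anything a notebook writes (including learn.export()) lands under /kaggle/working, and a second notebook can usually see it once you add the first notebook’s output as an input, at which point it shows up under /kaggle/input/<notebook-name>/. A rough sketch, where the notebook name is hypothetical:

from fastai.vision.all import *

# In the training notebook: the exported learner lands in /kaggle/working
learn.export('/kaggle/working/export.pkl')

# In the production notebook, after adding the training notebook's output as an input:
learn = load_learner('/kaggle/input/my-training-notebook/export.pkl')  # hypothetical path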

Glad you worked it out. FYI, ‘requirements.txt’ is mentioned in the Gradio tutorial in the resources section at the top of this thread.

Hello.
I am new to the course and starting to create my own notebooks. I would like to ask: where is a good place to save these notebooks? Maybe for a future job interview, somewhere to show as a portfolio.

You can save them in a GitHub repository, and you can also create a blog where you refer to them and explain what you do in each one.

Turning your notebooks into blog posts is itself a great way to store them.

You can easily set up a blog and turn Jupyter notebooks into blog posts with Quarto.

Hi. When trying to run my app.py file, I get the following error:

Runtime error
Traceback (most recent call last):
  File "/home/user/app/app.py", line 2, in <module>
    from fastai.vision.all import *
ModuleNotFoundError: No module named 'fastai'

I followed the tutorial and added the requirements.txt file to Hugging Face…

Just restart your Hugging Face Space and try again, with a valid requirements.txt containing:

fastai
torch
gradio

It should work.

Tried this, still doesn’t work…
This is my app.py file:
import gradio as gr
from fastai.vision.all import *

learn = load_learner('model.pkl')

categories = ('muffin', 'chihuahua')

def classify_image(img):
    pred, idx, probs = learn.predict(img)
    return dict(zip(categories, map(float, probs)))

image = gr.inputs.Image(shape=(192,192))
label = gr.outputs.Label()
examples1 = ['muffin.jpg', 'chihuahua.jpg', 'dunno.jpg']

#d
title = "Chihuahua or Muffin?!"
description = "A Chihuahua or Muffin classifier. Created as a demo for Gradio and HuggingFace Spaces."
intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples1, title=title, description=description)
intf.launch(share=True)

And I have a requirements file with:
fastai
gradio
torch
torchvision

I uploaded the model as well, and the example images.
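As a side note, the deprecation warnings mentioned earlier in the thread are about gr.inputs/gr.outputs; recent Gradio versions use the top-level components instead. A hedged sketch of how the wiring might look there, reusing classify_image, examples1, title, and description from the app.py above:

import gradio as gr

image = gr.Image()    # replaces gr.inputs.Image(shape=(192,192))
label = gr.Label()    # replaces gr.outputs.Label()

intf = gr.Interface(fn=classify_image, inputs=image, outputs=label,
                    examples=examples1, title=title, description=description)
intf.launch()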

Can you give me the link to your Hugging Face Space?

Fixed the issue by refreshing and pressing “factory reboot” in the settings.
