I have this problem where I took Lesson 2’s code, but instead of bears I’m building a cups detector (hence the “cups” DataBlock naming you see in the code). I ran all the code before, meaning `path` and `cups` are both instantiated, but when I run the show_batch() method, it fails.
Let me know if anyone’s interested and I can forward you the link to my Colab if you want to take a closer look.
@vishutanwar and @Shumbabala it may be because the default batch size (64) is too high, so you may need to specify the batch size. Please see this forum post.
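For example, a minimal sketch, assuming the DataBlock is named cups as in the original post (pick whatever batch size fits your memory):
dls = cups.dataloaders(path, bs=32)  # override the default bs=64
dls.show_batch(max_n=6)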
If that doesn’t solve it, please share a link to a Google Colab or Kaggle notebook with your code.
Hi everyone, I was trying to upload my exported model to a Hugging Face Space, but it wouldn’t let me because the file was too big. Has anyone else had that problem?
I cannot get Gradio to work at all. The library version I was running just shows “Error” in red in a circle whenever I submit an image. I updated to the latest version of Gradio, and now it just says “Loading…” forever when I launch it.
I tried a bunch of different versions, and it seems that for versions 3.44.4 and below I get the “Error” error, and for versions 3.45.0 and above I get the infinite “Loading…” screen.
My .pkl model works if I run it directly on an image in the notebook, so that’s not the issue.
Hello everybody. I’ve just spent a couple of hours debugging some of the common errors thrown by Gradio and the libraries needed to run this locally. I’d seen the same questions asked in other forums, GitHub issues, and blogs without any answers, so I hope to answer some of them here to help others.
I am running the Gradio app on Windows via a “normal” Python env, with VS Code as my IDE.
I basically had to install fastai, gradio, fastbook, and many other packages flagged by the editor itself. It’s a very long list, so if someone needs it please let me know.
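For anyone who just wants the core of it, this is roughly the set of installs and imports a minimal local Gradio app needs (a sketch; your exact package list may differ):
# pip install fastai gradio fastbook
from fastai.vision.all import *
import gradio as gr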
Change the label and the inputs to this:
image = gr.Image(height=192, width=192)
label = gr.Label()
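For context, here is a minimal sketch of how those two components plug into gr.Interface, following the Lesson 2 pattern (the 'model.pkl' filename and the classify_image helper are just placeholders for whatever you exported):
from fastai.vision.all import *
import gradio as gr

learn = load_learner('model.pkl')   # placeholder filename
categories = learn.dls.vocab

def classify_image(img):
    # predict returns (decoded prediction, index, probabilities)
    pred, idx, probs = learn.predict(img)
    return dict(zip(categories, map(float, probs)))

image = gr.Image(height=192, width=192)
label = gr.Label()

gr.Interface(fn=classify_image, inputs=image, outputs=label).launch()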
I also had to change my path like this:
import pathlib
temp = pathlib.PosixPath
pathlib.PosixPath = pathlib.WindowsPath
and remember to change it back like this:
pathlib.PosixPath = temp
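Putting it together, a sketch of the whole workaround wrapped around load_learner (assuming the exported file is called 'model.pkl'; the patch only matters when the model was exported on Linux/Colab and loaded on Windows):
import pathlib
from fastai.vision.all import *

temp = pathlib.PosixPath
pathlib.PosixPath = pathlib.WindowsPath   # the pickle references PosixPath objects
learn = load_learner('model.pkl')         # placeholder filename
pathlib.PosixPath = temp                  # restore the original class afterwards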
After that your model will work. Mine finally did in a local environment. I tried Colab as well, but connection errors appeared; after those minor changes the model ran locally with no issues. Mostly requirement problems, ngl.
Hello! I just worked through the Lesson 2 pet classifier model and wrote up a full tutorial for anyone who’s having issues with the following, due to some deprecations that require rewriting a couple of lines of code. It’s VERY detailed, but hopefully it helps someone! The link is below.
working with git
Git LFS / the model.pkl file
the gradio.Interface function
I spent so much time figuring out dependency issues and working through Jupyter extension quirks that I didn’t have time to build out a custom classifier. However, I did add my dog Roman as the example for how to use my app: Fast Ai Lesson 2 - a Hugging Face Space by chuckfinca
I will say, though, that now that I’ve got my head around mamba environments, I think they are pretty cool! They seem to work well with VS Code (which is also new to me as of this week); you just need to make sure to launch VS Code from your terminal while your mamba environment is activated, and then everything just works! Very cool.
Hi Folks!
I have completed the Lecture 2 assignment. The project I chose was classifying paintings by Picasso and Monet (I couldn’t think of any crazy project). I deployed it on Hugging Face Spaces and also wrote a blog post (I set up my blog using Quarto & GitHub Pages) about the project with the source code. Happy feelings!
@jeremy I want to thank you for creating these lessons! I wish I had found this sooner! Better late than never, I guess.
Just finished working through Lesson 2 (already watched 3 and 4, haha) and I wanted to give handwritten digit recognition a shot! So I got everything configured for running Jupyter locally (Kaggle was too slow compared to my 3080 Ti for small projects), fine-tuned the resnet18 model on the MNIST dataset (~99% accuracy), and configured it to work with the Gradio sketchpad. What a fun project!
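In case it helps anyone attempting the same project, a rough sketch of the fine-tuning part (my assumption of the usual fastai recipe, using the full MNIST dataset from fastai’s URLs; the Gradio sketchpad wiring depends on your Gradio version, so it’s omitted here):
from fastai.vision.all import *

path = untar_data(URLs.MNIST)  # has 'training' and 'testing' folders
dls = ImageDataLoaders.from_folder(path, train='training', valid='testing')
learn = vision_learner(dls, resnet18, metrics=accuracy)
learn.fine_tune(1)
learn.export('mnist.pkl')  # placeholder filename for the Gradio app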
I am supposed to get a Bing API search key so I can download images to train models. I am on the bear detector section of the book. However, I can’t seem to find a way to get this API key from Azure. I have searched through every inch of their website, but I can’t find it. Every time I try to use their search bar, it just redirects me to the Azure home page no matter what I click on. I tried searching for tutorials online, but none of them worked. Has anyone else had a similar issue? I am using a student Azure account and I am in Germany.
How accurate is the cat/dog model? After uploading it to a Hugging Face Space, I tried using it to classify the grizzly bear pic from earlier, and it predicted it was a Dog. But the concerning part is that it was 100% certain. So I tried it on other random images as well, and it would always confidently predict dog or cat (it even predicted an Excel sheet was a cat) at about 100% confidence (sometimes down to the mid-90s).
Check whether you have done this:
from fastbook import *
from fastai.vision.widgets import *
Those imports provide the function search_images_bing as well as search_images_ddg for DuckDuckGo.
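If the Bing key is still a blocker, a minimal sketch of the DuckDuckGo route (assuming your fastbook version exposes search_images_ddg; the ‘grizzly bear’ query and destination folder are just examples):
from fastbook import *
from fastai.vision.all import *

urls = search_images_ddg('grizzly bear', max_images=50)  # returns a list of image URLs
dest = Path('bears/grizzly')
dest.mkdir(exist_ok=True, parents=True)
download_images(dest, urls=urls)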