Lesson 2 official topic

Hello!

I have some problems using models in Colab when they were generated in Kaggle, and vice versa. After updating the Pillow package I can load the model with `load_learner()`, but then when I fine-tune, the columns appear with NaN or None values. Does anyone know why this happens?

Thanks :slight_smile:

Hi,

I am encountering a bug in lesson 2 on my local machine: running `ImageClassifierCleaner()` fails to render the widget with an "RGBA as JPEG" error under ipywidgets v8. I tried downgrading to ipywidgets v7.7.1, but then I get "Failed to load model class 'VBoxModel' from module '@jupyter-widgets/controls'". With ipywidgets v7.0.0 the error was "failed to load widget".

I am really confused about what to do with this class; different versions give different errors on my local machine. On the other hand, the cleaner works in Colab, so I believe the code in lesson 2 itself is correct.

I also tried matching the package versions between local and Colab, but local still doesn't work:

ipywidgets 7.7.1
python3 3.10.6
fastbook 0.0.29
fastai 2.7.12

The local machine is running Ubuntu 22.04 with an RTX 3060 (driver 535.54.03, CUDA 12.2). PyTorch is 2.0.1+cu117 and Jupyter Notebook is 7.0.0.

1 Like

Currently working on lesson 2. I'm using mamba for the first time: I installed a Python distribution by running setup-conda.sh from the fastsetup GitHub repo, and I have a folder where I'm building my first Gradio app. My typical pattern is to create a virtualenv for each project from whatever Python distribution I have. My questions are:

  1. Is this a problematic or not recommended process here?
  2. Is there a recommended way to build the virtualenv from the python distribution that fastsetup downloaded?

I am not able to import chapter-2. When I import chapter-2 in Kaggle, it shows up empty.

How can I run the chapter 2 notebook locally at localhost:8888/notebooks/fastbook/? The video uses this the entire time, but I wanted to set it up on my own machine.

Hi, I just finished lesson 2 and have made a slightly inaccurate (but fun-to-play-with) paintings classifier.

Here’s the Gradio / HF Space.

And here’s a blog post where I discuss the reasons for it being a little inaccurate.

I didn’t properly utilize Quarto’s “freeze render” option, so all of my code blocks are just screenshots. I’ll attempt to fix that in my next blog post.

Thank you all and thanks fast.ai for such an awesome course.

Hello everyone, I finally found time for this course and it is very exciting: great teaching and interaction. I followed Tanishq Abraham’s great tutorial on Gradio and decided, for fun, to dockerize the app, but I ran into some difficulties.

First, this is how my Dockerfile looks:

    FROM fastai/fastai

    WORKDIR /app

    COPY requirements.txt /app/
    RUN pip3 install --no-cache-dir -r requirements.txt

    COPY . /app

    EXPOSE 7861

    CMD ["gradio", "app.py"]

and requirements.txt

    scikit-image
    gradio

and the app itself

    import gradio as gr
    from fastai.vision.all import *
    import skimage

    learn = load_learner('export.pkl')
    labels = learn.dls.vocab

    def predict(img):
        img = PILImage.create(img)
        pred, pred_idx, probs = learn.predict(img)
        return {labels[i]: float(probs[i]) for i in range(len(labels))}

    title = "Pet Breed Classifier"
    description = "A pet breed classifier trained on the Oxford Pets dataset with fastai. Created as a demo for Gradio and HuggingFace Spaces."
    examples = ['Dog (1).jpg', 'Dog (10).jpg', 'Dog (1010).jpeg']
    interpretation = 'default'
    enable_queue = True

    gr.Interface(fn=predict,
                 inputs=gr.components.Image(shape=(512, 512)),
                 outputs=gr.components.Label(num_top_classes=3),
                 title=title,
                 description=description,
                 examples=examples,
                 interpretation=interpretation,
                 ).queue().launch(share=True)

The command I use to run the docker container: `docker run --rm -d --name gradio -p 3000:7861 gradio_app`

The problem is that I can connect to the app using the share URL, but I cannot connect to the exposed port of the docker container. Any suggestions would be very helpful, thank you!!!
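One likely cause (a guess, not confirmed from the post): inside a container, Gradio binds to 127.0.0.1 by default, so Docker's published port can't reach the server. Binding to all interfaces and pinning the port to the EXPOSEd one usually fixes this:

```python
# Hypothetical change to the end of app.py: bind to all interfaces so the
# port mapping -p 3000:7861 can reach the server inside the container.
gr.Interface(fn=predict,
             inputs=gr.components.Image(shape=(512, 512)),
             outputs=gr.components.Label(num_top_classes=3),
             ).queue().launch(server_name="0.0.0.0",  # listen on all interfaces, not just localhost
                              server_port=7861)       # match the EXPOSEd port
```

With that, `share=True` shouldn't be needed and http://localhost:3000 on the host should work. If the `gradio app.py` CLI runner ignores these launch arguments, switching the Dockerfile to `CMD ["python", "app.py"]` is another thing to try (again, an assumption to test rather than a confirmed fix).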

Hi all. Just thought I’d introduce myself. I’m currently working my way through the course and posting updates of my learning here. Will keep posting more as I go along.

1 Like

Hi all! This course is great and it’s wonderful to see such an active community. I faced a 404 error while trying to install fastchan. This is because the package platformdirs is not available on fastchan. A solution to this can be found here.

1 Like

will try it thx

Hi all,

Has anyone else had an issue with Hugging Face Spaces getting stuck in “Building…” mode forever after pushing a Gradio app and model?

Thanks!

I am running into this problem as well. Does anybody have any suggestions?

Hi Jeremy and FastAI team,
I am having a bit of trouble using Bing Image Search and I’m stuck at the moment.
I’ve successfully acquired the API key and endpoint and followed this guide for using the API: Bing Image Search Python client library quickstart - Bing Search Services | Microsoft Learn

However, I still get an error that I cannot figure out a solution to. Following is the code that I wrote and the error message:

    import os
    from azure.cognitiveservices.search.imagesearch import ImageSearchClient
    from msrest.authentication import CognitiveServicesCredentials

    key = os.environ['AZURE_SEARCH_KEY']

    endpoint = "https://api.bing.microsoft.com/v7.0/images/search"
    search_term = "dog"

    # Create an instance of CognitiveServicesCredentials
    client = ImageSearchClient(endpoint=endpoint, credentials=CognitiveServicesCredentials(key))
    image_results = client.images.search(query=search_term)

The error, raised at the last line (the `image_results` assignment):

    ErrorResponseException                    Traceback (most recent call last)
    Cell In[52], line 1
    ----> 1 image_results = client.images.search(query=search_term)

    File ~/.local/lib/python3.11/site-packages/azure/cognitiveservices/search/imagesearch/operations/_images_operations.py:491, in ImagesOperations.search(self, query, accept_language, user_agent, client_id, client_ip, location, aspect, color, country_code, count, freshness, height, id, image_content, image_type, license, market, max_file_size, max_height, max_width, min_file_size, min_height, min_width, offset, safe_search, size, set_lang, width, custom_headers, raw, **operation_config)
        488 response = self._client.send(request, stream=False, **operation_config)
        490 if response.status_code not in [200]:
    --> 491     raise models.ErrorResponseException(self._deserialize, response)
        493 deserialized = None
        494 if response.status_code == 200:

    ErrorResponseException: Operation returned an invalid status code 'Resource Not Found'

Hi,

I am seeing the same error and tried the solution you suggested. Even after installing platformdirs it did not work for me; I’m still getting the same error. Any ideas?

Thank you for your help.

Bing Search has been problematic.

The current Part 1 course uses Duck Duck Go (ddg_search) for searching.

Try:

    from duckduckgo_search import ddg_images
    from fastcore.all import *

    def search_images(term, max_images=30):
        print(f"Searching for '{term}'")
        return L(ddg_images(term, max_results=max_images)).itemgot('image')

Hi, I am running into the exact same issue. Were you by any chance able to resolve it? If so, how?

Thank you for your help.

I figured out what the issue was. Putting the solution out here in case anyone else runs into the same problem.

Basically, if you had run the code block below once and hit an error inside the for loop, the parent path folder would already have been created. If you then run the block again after fixing the error, it never reaches the for loop to search for images, because the first `if` condition evaluates to false. So the solution is to move the for loop out of the `if`. Alternatively, you can pass `exist_ok=True` to the `path.mkdir()` call so it ignores an existing path.

code block with issue:

if not path.exists():
    path.mkdir()
    for o in bear_types:
        dest = (path/o)
        dest.mkdir(exist_ok=True)
        results = search_images_ddg(f'{o} bear')
        download_images(dest, urls=results)

solution:

if not path.exists():
    print('path not exists')
    path.mkdir()
for o in bear_types:
    dest = (path/o)
    dest.mkdir(exist_ok=True)
    results = search_images_ddg(f'{o} bear')
    download_images(dest, urls=results)
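The `exist_ok=True` alternative mentioned above can be sketched like this (`bear_types` is the label list from the chapter; the search/download calls are left as comments so the sketch runs on its own without a network):

```python
from pathlib import Path

bear_types = ['grizzly', 'black', 'teddy']   # labels from the chapter
path = Path('bears')

# exist_ok=True makes every mkdir idempotent, so re-running the cell after a
# failure inside the loop no longer skips the search/download step
path.mkdir(exist_ok=True)
for o in bear_types:
    dest = path/o
    dest.mkdir(exist_ok=True)
    # results = search_images_ddg(f'{o} bear')   # network call, omitted here
    # download_images(dest, urls=results)
```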
2 Likes

1. I have fine-tuned a resnet to classify 10 types of animals commonly seen on African safaris using DuckDuckGo search, and reached quite good accuracy, with less than 1% error on the validation set. I am now trying to run the model on a test set of photos that I took, which I have organized into the parent-label file structure. I'm not sure how to predict on the test set and get accuracy numbers: `.predict` only works on a single image at a time, and `.get_preds` only outputs unlabeled class prediction probabilities. I looked on the forum and saw that `.validate` was what I needed, but its output does not do what I was expecting (screenshot below). How can I test my model's performance on the test set in a table, the same way I see validation performance?

Current Code:

More generally, is this not a normal task? I am surprised it has been so difficult to track down the answer so I’m wondering if it’s not part of a regular ML engineering workflow.

2.
When I use the code from lecture 1 to search DuckDuckGo for images, I get a few deprecation warnings. Is there newer code for searching DuckDuckGo already out there somewhere? I just don't want to write a function myself if it has already been done!

`get_preds` and `with_decoded` should be what you need.
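To turn those into an accuracy number on a labeled test set, a minimal sketch; the fastai calls are left as comments since they need a trained `learn` (and `with_labels=True` on `test_dl` is what makes the targets come from the parent folders):

```python
# Hypothetical fastai usage, assuming a trained `learn` and a labeled test folder:
#
#   test_dl = learn.dls.test_dl(get_image_files(test_path), with_labels=True)
#   probs, targs, decoded = learn.get_preds(dl=test_dl, with_decoded=True)
#
# `decoded` holds predicted class indices and `targs` the true ones, so
# accuracy is just an elementwise comparison:

decoded = [0, 2, 1, 1, 0]   # example predicted class indices
targs   = [0, 2, 1, 0, 0]   # example true class indices
accuracy = sum(d == t for d, t in zip(decoded, targs)) / len(targs)
print(accuracy)  # → 0.8
```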

This tutorial by Benjamin Warner covers a number of useful tips regarding inference: Inference With fastai - Model Saving, Loading, and Prediction | Just Stir It Some More