Lesson 2 official topic

It worked great!!! Thank you!!

1 Like

In deploying my model, I came across this problem. Any thoughts?

If you don’t have one already, create a ‘requirements.txt’ file in your root folder, then add ‘fastai’ to it. Once you’ve pushed your changes to Hugging Face, you should have access to fastai.
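For example, the whole file can be a single line (add any other packages your app imports as extra lines):

# requirements.txt — installed automatically when the Space builds
fastai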

3 Likes

It works, thank you!

Also, I am having trouble adding images with git. Any ideas?

Where it says…

Have you done that?

If you can’t work this out yourself, it’s discussed somewhere in the live coding videos.
(Note: you’ll gain a lot from watching the whole series.)

It’s also discussed in the HF Spaces tutorial linked in the topic and discussed in the lesson.

@ilovescience has a wonderful blog post on how exactly to go from a FastAI model to a published Gradio + HuggingFace demo:

Essentially, the export.pkl file is too large for plain git, so you install Git LFS to track it:

git lfs install                # enable Git LFS in this repository
git lfs track "*.pkl"          # route all .pkl files through LFS
git add .gitattributes         # the tracking rule is stored in .gitattributes
git commit -m "update .gitattributes so git lfs will track .pkl files"

Then just commit and push normally.
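For example, assuming your exported model is named export.pkl as in the lesson:

git add export.pkl
git commit -m "add trained model"
git push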

1 Like

About the description of ResizeMethod

I am a bit confused by the following description in the Lesson 2 content:

All of these approaches seem somewhat wasteful, or problematic. If we squish or stretch the images they end up as unrealistic shapes, leading to a model that learns that things look different to how they actually are, which we would expect to result in lower accuracy. If we crop the images then we remove some of the features that allow us to perform recognition…

In my understanding, can’t we just regard those “crop/squish/pad” methods as a kind of “augmentation”? How could they be wasteful or problematic?

Welcome to the forum @hiankun!
I believe the answer is within the quote itself:

“If we squish or stretch the images they end up as unrealistic shapes, leading to a model that learns that things look different to how they actually are” —> problematic

“If we crop the images then we remove some of the features that allow us to perform recognition” —> wasteful

Or have I not understood your question correctly?

I am afraid it’s me who keeps misunderstanding the original context, but let me try to elaborate on my question again. :smile:

When we apply augmentation techniques, the images are often “squished” and even “distorted”, so I don’t see why the “squishing” operation would cause any problems.

Similarly, “cropping” is also a common augmentation technique. How can it be okay for augmentation but wasteful when we apply it as a ResizeMethod?

1 Like

I think the statement should be viewed in the context of ‘Resize’ vs ‘RandomResizedCrop’. Jeremy is saying that randomly cropping a different part of an image is better than cropping the same part every epoch since the model will get to see different parts of the image.
It is also better than squishing the original image since squishing changes the aspect ratio. Hope this helps?
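A minimal sketch of the two options, assuming the bears DataBlock and path from the lesson notebook:

from fastai.vision.all import *

# Resize: the same fixed squish (or crop/pad) applied every epoch
dls = bears.new(item_tfms=Resize(128, ResizeMethod.Squish)).dataloaders(path)

# RandomResizedCrop: a different random crop each epoch, so the model sees
# more of each image while keeping the aspect ratio realistic
dls = bears.new(item_tfms=RandomResizedCrop(128, min_scale=0.3)).dataloaders(path)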

1 Like

Thank you for the context. Can I say that “cropping out different parts” is a better strategy than “just cropping the center part” in the sense of making better use of the original images?

For the squishing part, my understanding now is as follows (FYI, and any suggestions or corrections are welcome):

  • If we just squish the image blindly, the objects (e.g., the bears here) might be transformed to extreme aspect ratios that fall outside any reasonable range, and hence be harmful to our models.
  • When applying augmentation techniques, however, the aspect ratios are limited to certain ranges, so the effect is different from blindly squishing.

I never expected that I would have to think about such basic concepts again after training so many models. It’s great to revisit and clarify this stuff.

Even though it’s common for augmentation, it still has problems. Hopefully, from the description you quoted, you can see that it has downsides, and you can think about how best to mitigate those downsides when deciding how to augment your data.

2 Likes

In the video for this lesson, Jeremy said he would share the version of the notebook using DDG instead of Bing image search. I can’t see a link to it within this topic. Does anyone know if it is available? Thanks!

2 Likes

DDG is used in these notebooks: the ones in the course repo, or on Kaggle if you prefer.
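For reference, the DDG-based search in those notebooks looks roughly like this (search_images_ddg comes from the fastbook package):

from fastbook import search_images_ddg

# Returns a list of image URLs for the query
urls = search_images_ddg('grizzly bear', max_images=50)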

3 Likes

By the way: Yuqi’s code (Lesson 2 official topic - #335 by Yuqi) solved a bug in my HF app. It turns out I couldn’t just reuse my original ‘categories’ list from when I generated and saved the learner; after reloading it, I needed to do just what Yuqi did and use its “learn.dls.vocab” list instead. Maybe fast.ai alphabetizes the categories?
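For anyone hitting the same bug, a minimal sketch of the fix, assuming the export.pkl exported in the lesson:

from fastai.vision.all import *

learn = load_learner('export.pkl')
categories = learn.dls.vocab  # the learner's own label order, not a hand-written list

def classify_image(img):
    pred, idx, probs = learn.predict(img)
    return dict(zip(categories, map(float, probs)))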

I created an image classifier that takes in 3 labels, but it doesn’t recognize when the input is none of the labels. Did I miss a section in the lesson where this is addressed?
link: Nyc Iconic Building Classifier - a Hugging Face Space by jvrbntz

1 Like

Reworking lesson 2, let me share my learnings from putting my Bear Detector on HuggingFace:

  • Interestingly, some things had changed a little in the meantime: the Gradio API and working with nbdev.
  • Working on my local machine, I had to install gradio, nbdev and Git Large File Storage (LFS); see the sketch below.
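
For reference, those local installs amount to something like this (git-lfs itself may first need to come from your OS package manager):

pip install gradio nbdev
git lfs install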

For the complete summary and the source code, check out my GitHub.

3 Likes

See…

but note that cnn_learner() is now named vision_learner().
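For example, assuming dls is the DataLoaders built in the lesson:

from fastai.vision.all import *

# vision_learner is the new name for cnn_learner
learn = vision_learner(dls, resnet18, metrics=error_rate)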

1 Like