Lesson 2 official topic

One issue with GitHub Pages is that you now need to make your repo public to use it for free. Are there any good alternatives for static website hosting that you have tried out?


I also started with Jekyll and GitHub Pages for my personal website, but eventually migrated to a custom Go solution hosted on Digital Ocean. I didn’t mind keeping the source open; however, I found it a bit inconvenient and somewhat limited for my purposes. Still, a great way to start! You can rewrite it later. Especially, if you have a domain name that you can easily redirect to another platform.


Another great lesson! Thanks for the major shout-out Jeremy, was not expecting it :sweat_smile:


5 posts were merged into an existing topic: Help: Creating a dataset, and using Gradio / Spaces :white_check_mark:

Hi all, really enjoyable lesson again today; all the Gradio/Streamlit chat is inspiring me to plan a project with fast.ai. So, on to my main question:

Has anyone done any pose estimation with fast.ai before?



5 posts were merged into an existing topic: Help: Using Colab or Kaggle :white_check_mark:

There is an example of head pose (see Computer vision | fastai Points).
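For context, fastai handles keypoints via PointBlock (that's what the head-pose example uses). As I understand it, the targets are (x, y) pixel coordinates rescaled to [-1, 1] relative to the image size; here's a minimal pure-Python sketch of that convention (the scaling formula is my reading of the docs, not code from the tutorial):

```python
# Sketch of the coordinate convention fastai's PointBlock appears to use:
# pixel (x, y) coordinates rescaled to [-1, 1] relative to image size.
def scale_point(x, y, w, h):
    """Map a pixel coordinate to [-1, 1]: 2 * coord / size - 1."""
    return (2 * x / w - 1, 2 * y / h - 1)

# The image center maps to (0.0, 0.0); the top-left corner to (-1.0, -1.0).
print(scale_point(320, 240, 640, 480))
print(scale_point(0, 0, 640, 480))
```

The upshot is that a keypoint model is just regression on these scaled coordinates, so most of the image-classification machinery carries over.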

This new approach is now supported in the latest nbdev and will be the default in the future. We’ll be integrating with Quarto, and that’s what they use.


Awesome lecture @jeremy! I know this is a little ahead of the class, but I already know what I want to work on, and it partially involves segmentation. I found a great notebook from Zachary Mueller and thought I would give it a shot.

I think loading the pretrained weights fails, so I commented that out too.

Maybe it’s the Normalization function? I tried commenting that out and it’s the same error.

I got the images and labels to load, but I’m getting a GPU error.

I made the notebook easily re-runnable; it automatically downloads the dataset it uses.
Warning: it’s medical, so the images are a bit gross.

I played around with resnet sizes and image sizes, and I don’t think it’s an OOM. The CamVid notebook from Zachary can run lr_find and fit fine, so I guess it’s my data / accuracy / optimization functions?

I think there’s only one class, which might be an issue, but I have no idea where you would configure that.

Does anyone know what’s causing this issue?
I have a T4 and it looks like the memory is fine.


Wow, GPU-enabled WSL is that easy! I’ve been battling with CUDA-enabled WSL with Docker and so on for ages, but the method Jeremy showed us was easy as. For those doing it from scratch: I did very close to what he showed us, but had to git clone the repo from within my WSL (Linux) so it didn’t change the Unix line endings (whatever that means). Then running

source setup-conda.sh

worked. I also followed these CUDA WSL instructions (<= section 3), which are exactly the same except they recommend getting the right CUDA WSL driver from NVIDIA. Then I cloned the book’s repo, but also had to run sudo apt-get install graphviz in WSL to get the graphs to display. BUT yeah, local GPU-accelerated fast.ai, here we come! Thanks!


Please don’t take this the wrong way (I’ve done this myself in the past), but just a quick reminder that Jeremy has asked us not to at-mention admins unless it’s an admin issue. If you just state the issue clearly, people will be more than happy to help.

Also, it would be really helpful if the screenshot gave some context about what was being executed when the error occurred. All I can tell from it is that you got some kind of generic error, and as is the case in 90% of debugging situations, the generic error you see is usually not the real issue.

You’ll find that most people on this forum are quite helpful, but they need the OP to give some kind of context, i.e. “you need to help them help you”. For example, even if I try to reproduce your problem, here are the things I’d need to know:

  1. What cell are you trying to run that gives that error?
  2. Which environment were you running it in? (Kaggle notebook, Colab, your home machine?)
  3. What is your setup like?

These things matter because, without them, anyone trying to help you must spend an inordinate amount of time asking follow-up questions about specifics that could easily have been provided in the original post; including them up front also minimizes the valuable time helpers spend.



BTW, when I import your notebook into Kaggle and run it, I get the following error for learn.lr_find(), the third cell from the bottom of the notebook:

> /opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
>    2822     if size_average is not None or reduce is not None:
>    2823         reduction = _Reduction.legacy_get_string(size_average, reduce)
> -> 2824     return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
>    2825 
>    2826 
> IndexError: Target 255 is out of bounds.

I did not see any GPU exhaustion at all; I think it dies before things even get to the GPU (just a guess).
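For what it’s worth, `Target 255 is out of bounds` usually means the mask contains a pixel value (often 255, a common “void”/foreground encoding in binary and medical masks) that is outside the `[0, n_classes)` range cross-entropy expects. One common fix is to remap those pixels to a valid class index (or exclude them via the loss’s ignore_index). A minimal sketch of the remap, with illustrative values that are not from the original notebook:

```python
# Sketch: cross-entropy expects targets in [0, n_classes); a mask that encodes
# its labels as 255 must be remapped (or excluded via ignore_index).
VOID = 255  # common "void"/foreground value in binary masks (assumption)

def remap_mask(mask, void=VOID, to=1):
    """Replace `void` pixels with a valid class index `to`."""
    return [[to if px == void else px for px in row] for row in mask]

mask = [[0, VOID, 0],
        [VOID, VOID, 0]]
print(remap_mask(mask))  # pixels become 0/1, valid for a 2-class model
```

In a fastai pipeline the same remap would typically be applied to the mask array inside the label function, before the mask reaches the loss.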


Question: How to specify a destination folder when calling untar_data()?

This is the function signature:

untar_data(url, archive=None, data=None, c_key='data', force_download=False)

The documentation states:

"Download url to fname if dest doesn’t exist, and extract to folder dest"

However, the documentation and the function signature don’t match. Where do we specify dest?



On the right of the documentation is a link to the [source]

Click that and you see…

From that it seems you don’t “specify” dest. It’s implied from the fixed base and the default c_key='data', which produces the path shown in the documentation example…

If you look up to Line 115 you can see how fname is determined from url.

This is the first time I’ve looked at this code, so I’m only guessing, but your options seem to be…

  1. Use the data from the default download position.
  2. Copy the untar_data() function into your code as my_untar_data() and play with the parameter values used with FastDownload and get (i.e. maybe base ??).

Unless there is a compelling reason, the Option 1 default is probably your path of least resistance.
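To make the “implied dest” concrete, here’s a rough sketch (my guess from reading the source, not the actual fastai code) of how the destination path falls out of the URL plus the fixed base and c_key:

```python
from pathlib import Path

# Rough sketch (assumption, not the real fastai implementation) of how
# untar_data's destination is implied: the archive name comes from the URL,
# and the extract folder from a fixed base plus c_key.
def implied_dest(url, base='~/.fastai', c_key='data'):
    fname = url.split('/')[-1]   # e.g. 'mnist_png.tgz'
    stem = fname.split('.')[0]   # strip the archive suffix
    return Path(base).expanduser() / c_key / stem

print(implied_dest('https://s3.amazonaws.com/fast-ai-imageclas/mnist_png.tgz'))
```

So with the defaults, everything lands under one fixed base directory, which is why there is no dest parameter to pass.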


From what I can tell, it’s a wrapper around fastdownload, and the fastdownload docs state that you can change base to any directory you want. By default {base} points to ~/.fastdownload or something.

EDIT: But I see what you’re saying, base can’t be reached from untar_data()

Instead of get, use download to download the URL without extracting it, or extract to extract the URL without downloading it (assuming it’s already been downloaded to the archive directory). All of these methods accept a force parameter which will download/extract the archive even if it’s already present.

You can change any or all of the base, archive, and data paths by passing them to FastDownload:

d = FastDownload(base='~/.mypath', archive='downloaded', data='extracted')


Jeremy then showed the Image Classifier Cleaner, and Nick said it pays to visually inspect results when using these “open” image searches, as noted in @JaviNavarro’s post.


I’ve added a section to the README now.


3 posts were merged into an existing topic: Help: Python, git, bash, etc :white_check_mark:

I’m running into difficulty with ImageClassifierCleaner(learn).

delete() works as expected:

for idx in cleaner.delete(): cleaner.fns[idx].unlink()

change(), however, fails:

for idx,cat in cleaner.change(): shutil.move(str(cleaner.fns[idx]), path/cat)
Error                                     Traceback (most recent call last)
/tmp/ipykernel_27738/4259621786.py in <module>
----> 1 for idx,cat in cleaner.change(): shutil.move(str(cleaner.fns[idx]), path/cat)

~/anaconda3/lib/python3.9/shutil.py in move(src, dst, copy_function)
    811         if os.path.exists(real_dst):
--> 812             raise Error("Destination path '%s' already exists" % real_dst)
    813     try:
    814         os.rename(src, real_dst)

Error: Destination path 'four_seasons/summer/00000129.jpg' already exists

The problem appears to occur because search_images_ddg() indexes each category independently, starting with 00000000.jpg. This results in duplicate fnames across categories. Accordingly, attempting to change an image from one category to another results in a collision, and the code fails.

I can come up with a workaround, but wonder if anyone has solved this particular problem?

Maybe there is a utility I missed in the fastai code base. I will look more closely.
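One workaround sketch (not from the fastai code base; the helper name is mine): give the file a unique name when the destination already exists, instead of letting shutil.move raise:

```python
import shutil
import uuid
from pathlib import Path

# Workaround sketch: move a file into a category folder, renaming it when a
# file with the same name already exists (duplicate fnames across categories).
def safe_move(src, dst_dir):
    src, dst_dir = Path(src), Path(dst_dir)
    dst = dst_dir / src.name
    if dst.exists():  # collision: same index used in another category
        dst = dst_dir / f"{src.stem}_{uuid.uuid4().hex[:8]}{src.suffix}"
    shutil.move(str(src), str(dst))
    return dst

# Usage with the cleaner would then be roughly:
# for idx, cat in cleaner.change(): safe_move(cleaner.fns[idx], path/cat)
```

This keeps the original filename when there is no clash and only appends a random suffix on collision, so re-runs stay mostly stable.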


I haven’t seen a fix for this - suggestions most welcome!