Lesson 2 official topic

That should be fixed now - I modified the function to save images with random filenames.


Re: survey Q1, my answer is the middle option.

If you feel you are falling behind schedule and need to speed up, that’s fine with me. What I note with V5 is that the lessons are much shorter in duration than in the previous version, so I guess there is more pressure to get through all the material; perhaps I am mistaken.
Please continue with your excellent work

So the fastai lib will decide whether to use the GPU during training based on model size and/or the number of samples in the dataset? I was going through the notebook for the 2nd chapter of the book, and while training the bear classifier I noticed that nvidia-smi showed no activity on the GPU, even though I have a GPU available:

=== Software === 
python        : 3.9.12
fastai        : 2.6.0
fastcore      : 1.4.2
fastprogress  : 1.0.2
torch         : 1.11.0+cu102
nvidia driver : 440.64
torch cuda    : 10.2 / is available
torch cudnn   : 7605 / is enabled

=== Hardware === 
nvidia gpus   : 1
torch devices : 1
  - gpu0      : Tesla V100-PCIE-16GB

=== Environment === 
platform      : Linux-5.4.86-1.el7.x86_64-x86_64-with-glibc2.17
distro        : #1 SMP Wed Jan 6 09:06:12 UTC 2021
conda env     : Unknown
python        : ../fastai/env/bin/python3
sys.path      : ../fastai/fastbook
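Independently of fastai's defaults, a quick way to confirm where PyTorch will place work is to pick the device the same way fastai's default_device() does and move a tensor there (a minimal sketch, not tied to any particular Learner):

```python
import torch

# Use CUDA when available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Moving a tensor to that device confirms where computation will run
x = torch.zeros(2, 2, device=device)
print(x.device)
```

If this prints cuda:0, fastai's Learner will train on the GPU by default; if it prints cpu, the problem is in the torch/CUDA setup rather than in fastai.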



Not sure why; you probably have to look into your code/setup, because the Learner loads the model on the GPU whenever one is available.


So I flushed my setup and used fastsetup this time, still nvidia-smi shows no process running on the GPU. Interestingly, nvtop does show memory usage while training the bear classifier.

So, at least, I have confirmation that the GPU is being used :slight_smile:. Thank you for the pointer @imrandude

I suspect torch may not be recognising the GPU. You can easily check it with this code:

import torch
torch.cuda.is_available()

It should return True if torch can see the GPU. If it returns False, then something probably went wrong with the installation.


It does return True.

I hope you tried watch nvidia-smi, which refreshes the output. I am just wondering if you checked it while the model was not running.

If you have used watch nvidia-smi, then I am not sure what is happening :grinning:

Yes… I tried watch nvidia-smi and no processes show up. Also, while looking at the help for nvidia-smi I saw you can probe in a loop using -l or --loop and tried that, but same result: no processes.

1 Like

Maybe it’s happening too fast? I’d try it with the IMDB example in the first notebook with a large batch size (64) and run nvidia-smi -l to see if it hits the GPU or GPU RAM. On a V100 it should take about 60-120 seconds per epoch IIRC… if it takes much, much longer, then it’s probably not hitting the GPU.
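A quick sanity check along these lines (a sketch of the idea, not from this thread): time a large matrix multiply on CPU and, if available, on the GPU. If the CUDA version is not dramatically faster, the GPU is probably not being used.

```python
import time
import torch

def time_matmul(device, n=2048, reps=10):
    """Time `reps` large matmuls on `device` and return seconds elapsed."""
    x = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # make sure pending GPU work is done before timing
    t0 = time.perf_counter()
    for _ in range(reps):
        y = x @ x
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU to finish before stopping the clock
    return time.perf_counter() - t0

print(f"cpu : {time_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"cuda: {time_matmul('cuda'):.3f}s")  # should be far smaller than the CPU time
```

The explicit torch.cuda.synchronize() calls matter because CUDA kernels launch asynchronously; without them you would mostly be timing the kernel launches, not the work.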

That is a great addition!

I came up with a different solution, which was to create a new Python function, reindex, which can be used to create unique pathnames across a set of directories.

reindex fixes this problem by uniquely reindexing all files within the first-level directories of dest:

dest contains directories of images: dir1/{1.jpg, 2.jpg, 3.jpg}, dir2/{1.jpg, 2.jpg, 3.jpg}, dir3/{1.jpg, 2.jpg, 3.jpg}                                                                                     
reindex(dest, start_idx=1) -> dir1/{1.jpg, 2.jpg, 3.jpg}, dir2/{4.jpg, 5.jpg, 6.jpg}, dir3/{7.jpg, 8.jpg, 9.jpg}                                                                                                

It can be used from within python or from the shell. Here is how one would use it from the shell:

usage: reindex.py [-h] [--start_idx START_IDX] [--ext EXT] dest                                                                                                                                                 
Uniquely reindexes all files within first-level directories of `dest`                                                                                                                                           
positional arguments:                                                                                                                                                                                           
  dest                  Contains one or more directories                                                                                                                                                        
optional arguments:                                                                                                                                                                                             
  -h, --help            show this help message and exit                                                                                                                                                         
  --start_idx START_IDX                                                                                                                                                                                         
                        Starting index across all files                                                                                                                                                         
  --ext EXT             Optional file extension

It is on github at fastai_utilities
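For reference, the core of such a helper might look like this. This is only a sketch of the idea, not the actual fastai_utilities implementation; the two-pass rename is an assumption made here to avoid name clashes.

```python
from pathlib import Path

def reindex(dest, start_idx=1, ext=None):
    """Uniquely renumber files across the first-level directories of `dest`."""
    dest = Path(dest)
    idx = start_idx
    for d in sorted(p for p in dest.iterdir() if p.is_dir()):
        files = sorted(p for p in d.iterdir()
                       if p.is_file() and (ext is None or p.suffix == f".{ext}"))
        # First pass: move everything to temporary names, so a new index
        # can never collide with a file that has not been renamed yet.
        tmp = []
        for f in files:
            t = f.with_name(f"__tmp__{f.name}")
            f.rename(t)
            tmp.append(t)
        # Second pass: assign one running index across all directories.
        for t in tmp:
            t.rename(d / f"{idx}{t.suffix}")
            idx += 1
```

With the example above, dir1/{1.jpg, 2.jpg, 3.jpg} and dir2/{1.jpg, 2.jpg, 3.jpg} would come out as dir1/{1.jpg, 2.jpg, 3.jpg} and dir2/{4.jpg, 5.jpg, 6.jpg}.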

1 Like

If you are running it inside a container, the process will not be displayed. If you have access to the host, you can see the process there, though.

OK, that’s a good point; maybe an epoch lasts less than 60 seconds for the bear classifier, so I’ll try your suggestion. I am now sure it is using the GPU, as nvtop shows memory being used on the GPU during training, as I pointed out previously :slight_smile:

Anyone know of a downside to using the community build VSCodium… https://vscodium.com/
rather than Microsoft VS Code… https://code.visualstudio.com/ ?

I upgraded last week to Win 11 to get a WSL2 Ubuntu environment to use for this course. Win 11 allows Linux GUI apps to display seamlessly next to Windows desktop apps.

Since VS Code and VSCodium share the same code base, they both give the following error when installed and run using “code .” at a WSL prompt:

To use Visual Studio Code with the Windows Subsystem for Linux, please install Visual Studio Code in Windows and uninstall the Linux version in WSL. You can then use the code command in a WSL terminal just as you would in a normal command prompt.

So both seem hardcoded to operate only with Linux as a VS Code Server.

So I ended up taking the path of least resistance and installed MS VS Code on Windows.

I use nvtop a lot and can definitely recommend leaving it running in a terminal to monitor GPU usage. You seem to be missing some fonts, or the terminal is somehow not set up to use UTF-8 characters; it generally looks a lot better.


Also, given all the N/As in the screenshot, you could either be running it inside the container or without enough permissions. You should be able to see more details when running with enough credentials (sudo, perhaps) on the host itself.


I have been using Microsoft VS Code but did not even know VSCodium exists. Curious to know if there are any pros and cons apart from privacy.

1 Like

Ah, I see. I couldn’t make heads or tails of it; maybe something’s wrong with the terminal font? I usually just run nvidia-smi -l in a terminal (within my Paperspace container) and run the training in the notebook that connects to the container, all on my local machine. With a reasonable batch size (64 or above) for the IMDB example in the first notebook, you should see 8 GB+ utilization on the GPU. At least, that’s what I see on my 8 GB 1070 Ti…

EDIT: To clarify, I run a Docker container (built with fastai+fastbook and published by Paperspace) on my local server at home.

1 Like

Thank you for pointing that out… I did not pay attention to those weird characters before, as I had no idea what the output of nvtop should really look like :slight_smile:. It seems this is an issue with the ncurses lib and the UTF-8 locale, so doing export NCURSES_NO_UTF8_ACS=1 fixes the handling of line-drawing glyphs :sweat_smile:.

Much better!

1 Like