OK, thank you for showing me that it’s js/template-generated and not in the source code.
I guess I can’t rely on a local fs link checker then (it’s just much, much faster). That’s OK, the live site’s checker seems to work and reports a lot more problems. The latest one is here:
I hope this thread is OK for this: would it make sense to have a check_images method in ImageDataBunch? @Taka and I had a short discussion in the “share your work” thread of the ongoing course. I used a very simple, quick-and-dirty for loop to check file integrity in my little fun notebook:
A parallelized version with a delete flag could be nice for cleaning a dataset.
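For illustration, a parallelized version of that loop might look like the sketch below. The names check_image/check_images are hypothetical, not existing fastai API, and it uses Pillow for decoding; .load() forces a full decode rather than just a header read, which is what actually catches truncated files.

```python
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor

from PIL import Image

def check_image(fn):
    """Return fn if the image fails to decode, else None."""
    try:
        Image.open(fn).load()  # .load() forces a full decode, not just the header
        return None
    except Exception:
        return fn

def check_images(path, delete=False, max_workers=8):
    """Check every file under path in parallel; optionally delete broken ones."""
    fns = [p for p in Path(path).iterdir() if p.is_file()]
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        bad = [fn for fn in ex.map(check_image, fns) if fn is not None]
    if delete:
        for fn in bad:
            fn.unlink()
    return bad
```

Calling check_images(path, delete=True) would then both report and remove the corrupt files, which covers the “delete flag” use case above.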
@stas Great idea. But before doing the merge “surgery”, note that there may be other workloads occupying parts of the GPU, and many GPUs to choose from.
How about providing a simple API that
returns a list of how much RAM is available on each GPU (via nvidia-smi)
returns the RAM requirements for a given model x, so that we can then set the bs and GPU using whatever policy…
Also, using LMS or other high-bandwidth swapping GPU-CPU RAM mechanisms we can deploy much bigger models with a minimal performance penalty…
For beginners the simpler thing is to have a bs table/multiplier factor, as there are just a few GPU RAM configurations, eg 8GB, 11GB, 16GB, …
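As a sketch of the first bullet, free RAM per GPU can be read from nvidia-smi’s CSV query mode. get_gpu_free_mem is a hypothetical name, and the parsing is split into its own function so it can be exercised without a GPU present:

```python
import subprocess

def parse_free_mem(smi_output):
    """Parse 'memory.free' query output (one number per line, in MiB) into ints."""
    return [int(x) for x in smi_output.split()]

def get_gpu_free_mem():
    """Return a list of free RAM in MiB, one entry per GPU."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.free", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True).stdout
    return parse_free_mem(out)
```

A policy layer could then pick the GPU with the largest entry, or map the value through the bs table mentioned above.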
tools/update-nbs replaces docs_src/tools/gen_all.sh and docs_src/tools/sgen_notebooks.py
the usage of fire has been removed; one script can now update specific or all notebooks, and the way arguments are passed has changed a bit, to a simpler form:
update all notebooks: tools/update-nbs
update specific notebooks: tools/update-nbs docs_src/one.ipynb docs_src/two.ipynb
to pass arguments to update_notebooks, just add any of: --update_html --update_nb --update_nb_links --do_execute --update_line_num
each flag present indicates --arg=True; otherwise the defaults are used
docs_src/tools/make_sidebar.py moved to tools/make_sidebar.py and the data was split off into docs_src/sidebar/sidebar_data.py (feel free to relocate the data elsewhere if you don’t like that location, but please not into the tools/ folder - thank you).
I made a new mem-utils branch with the starter code. (new code is in fastai/utils/mem.py)
Very good start. Just tested it on 2 different servers…
One issue: it currently also reports the memory available on the smaller dedicated graphics card (e.g. a GT 710), which is not a good candidate for running CUDA.
@sgugger, both XXXs are where the function is called on a python file, not an ipynb - which I don’t think exists anymore, does it? If it does, can you update the example to reflect that? Thanks.
.dev0 just means it’s not a release, i.e. not guaranteed to be reliable.
The date would be useless because 20181025 in one part of the world will not be 20181025 in another, so it’s not a good reference point. If you need to rely on an exact time stamp, you can always use git tools to see when the last commit was made in the fastai repo of your checkout.
Unless you meant 1.0.14.20181025 (no dev in your example), then your question is of a different nature.
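For example, the git approach could be wrapped in a tiny helper like this (last_commit_date is a hypothetical name; %ci is git’s committer date in ISO format with timezone, so it is unambiguous across time zones):

```shell
# last_commit_date: print the timestamp of the last commit in the given
# checkout (defaults to the current directory)
last_commit_date () {
    git -C "${1:-.}" log -1 --format='%ci'
}
```

Running it against your fastai checkout prints the exact commit timestamp, which is a more reliable reference point than a date baked into the version string.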
Probably need to provide a way to exclude certain cards.
One way would be to tap into the existing CUDA_VISIBLE_DEVICES="1,3" env var from pytorch. That way you can exclude the devices you don’t want reported, and it’ll work in both fastai and pytorch (my code needs to be changed to honor it).
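A sketch of how the reporting code could honor that variable (visible_device_ids is a hypothetical helper, not current fastai or pytorch API; it only mimics CUDA’s documented semantics, including the empty-string case that hides all GPUs):

```python
import os

def visible_device_ids(n_physical):
    """Physical GPU ids this process may use, honoring CUDA_VISIBLE_DEVICES.

    If the variable is unset, all n_physical devices are visible; otherwise
    only the comma-separated ids it lists, in that order.
    """
    env = os.environ.get("CUDA_VISIBLE_DEVICES")
    if env is None:
        return list(range(n_physical))
    return [int(i) for i in env.split(",") if i.strip()]
```

The memory-reporting code would then iterate only over the returned ids, so the GT 710-style card simply never shows up.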
The order of problems:
- corrected wrn_22 to wrn_22(pretrained=False), because the new fastai version requires this argument
- corrected num_features so that it returns 0 when it cannot find the num_features attribute. However, that leads to problems later in layers line 33: "layers.append(nn.Linear(n_in, n_out))"
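To illustrate the second fix, here is a duck-typed sketch of such a fallback (this is not the actual fastai num_features; the real code walks a torch module tree). It also shows why the later line still breaks: when no layer defines num_features, the 0 propagates into nn.Linear(n_in, n_out), so the caller still has to handle that case.

```python
def num_features(model):
    """Return num_features of the last layer that defines it, or 0 if none does."""
    for layer in reversed(list(model.modules())):
        nf = getattr(layer, "num_features", None)
        if nf is not None:
            return nf
    return 0
```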