I hope this thread is OK for this: would it make sense to have a check_images method in ImageDataBunch? @Taka and I had a short discussion in the “Share your work” thread of the ongoing course. I used a very simple, quick-and-dirty for loop to check file integrity in my little fun notebook:
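For reference, a minimal sketch of such a quick-and-dirty check, assuming PIL is available (the helper name `find_broken_images` is my own, not part of fastai):

```python
from pathlib import Path
from PIL import Image

IMG_SUFFIXES = {".jpg", ".jpeg", ".png", ".gif", ".bmp"}

def find_broken_images(path):
    """Return the image files under `path` that PIL cannot open."""
    broken = []
    for fn in Path(path).glob("**/*"):
        if fn.suffix.lower() not in IMG_SUFFIXES:
            continue
        try:
            with Image.open(fn) as img:
                img.verify()  # cheap integrity check, does not decode the full image
        except Exception:
            broken.append(fn)
    return broken
```

Note that `Image.verify()` only checks the file header and structure; a fully decoded load (`img.load()`) would catch truncated files too, at the cost of speed.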
A parallelized version with a delete flag could be nice for cleaning a dataset.
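A rough sketch of what such a parallelized version with a delete flag could look like (the `check_images` name and signature here are my guess at the proposed API, not existing fastai code):

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
from PIL import Image

IMG_SUFFIXES = {".jpg", ".jpeg", ".png", ".gif", ".bmp"}

def _is_broken(fn):
    """Return True if PIL cannot verify the image file `fn`."""
    try:
        with Image.open(fn) as img:
            img.verify()
        return False
    except Exception:
        return True

def check_images(path, delete=False, max_workers=8):
    """Verify all images under `path` in parallel; optionally delete broken ones."""
    files = [f for f in Path(path).glob("**/*")
             if f.suffix.lower() in IMG_SUFFIXES]
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        flags = list(ex.map(_is_broken, files))
    broken = [f for f, bad in zip(files, flags) if bad]
    if delete:
        for f in broken:
            f.unlink()
    return broken
```

Threads work well here because `Image.verify()` spends most of its time in I/O; for CPU-heavy full decodes a `ProcessPoolExecutor` might be a better fit.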
tools/update-nbs replaces docs_src/tools/gen_all.sh and docs_src/tools/sgen_notebooks.py
the use of fire has been removed; one script can now update specific notebooks or all of them, and the way arguments are passed has been simplified:
update all notebooks: tools/update-nbs
update specific notebooks: tools/update-nbs docs_src/one.ipynb docs_src/two.ipynb
to pass arguments to update_notebooks, just add any of: --update_html --update_nb --update_nb_links --do_execute --update_line_num
passing a flag is equivalent to --arg=True; otherwise the defaults are used
docs_src/tools/make_sidebar.py moved to tools/make_sidebar.py and the data was split off into docs_src/sidebar/sidebar_data.py (feel free to relocate the data elsewhere if you don’t like that location, but please not in the tools/ folder - thank you).
.dev0 just means it’s not a release, i.e. not guaranteed to be reliable.
The date would be useless because 20181025 in one part of the world will not be 20181025 in another, so it’s not a good reference point. If you need to rely on an exact time stamp, you can always use git tools to see when the last commit was made in the fastai repo of your checkout.
Unless you meant 18.104.22.16881025 (no dev in your example), then your question is of a different nature.
Probably need to provide a way to exclude certain cards.
One way would be to tap into the existing CUDA_VISIBLE_DEVICES="1,3" env var that PyTorch already honors. That way you can exclude the devices you don’t want to be reported, and it will work in both fastai and PyTorch (my code needs to be changed to take it into account).
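A minimal sketch of how the env var could be honored when deciding which devices to report (the function name and fallback behavior are my assumptions, not existing fastai code):

```python
import os

def visible_device_ids(total_gpus):
    """Return the GPU indices to report, honoring CUDA_VISIBLE_DEVICES if set."""
    env = os.environ.get("CUDA_VISIBLE_DEVICES")
    if env is None:
        # Env var unset: all physical devices are visible.
        return list(range(total_gpus))
    if env.strip() == "":
        # Explicitly empty means no GPUs are visible.
        return []
    return [int(x) for x in env.split(",") if x.strip() != ""]
```

So with CUDA_VISIBLE_DEVICES="1,3" only devices 1 and 3 would be reported, matching what PyTorch itself would see.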
The order of problems:
- corrected wrn_22 to wrn_22(pretrained=False), because the new fastai version requires this argument
- corrected num_features so that it returns 0 when it cannot find the num_features attribute. However, that leads to problems later on in layers, line 33: “layers.append(nn.Linear(n_in, n_out))”