Thanks for letting me know. That’s fixed now.
Just a small note for others who may want to try Google Colaboratory as a free way of trying the library out: it will not work! Colab does not support Python 3.7 as of this writing. I will look into other free options for those of us who cannot scrape together the money at the moment.
It looks like Paperspace’s free tiers begin in March.
Came across two more errors when I ran build_docs.py.
First one was after
converting: 19_callback_mixup.ipynb => docs/callback.mixup.html
converting: 92_notebook_showdoc.ipynb => docs/notebook.showdoc.html
An error occurred while executing the following cell:
------------------
show_doc(show_doc, default_cls_level=2)
------------------
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-30-f55211237a06> in <module>
----> 1 show_doc(show_doc, default_cls_level=2)
<ipython-input-29-2f3af20125ce> in show_doc(elt, doc_string, name, title_level, disp, default_cls_level)
14 doc = f'<h{title_level} id="{name}" class="doc_header">{name}{source_link}</h{title_level}>'
15 doc += f'\n\n> {args}\n\n' if len(args) > 0 else '\n\n'
---> 16 if doc_string and inspect.getdoc(elt): doc += add_doc_links(inspect.getdoc(elt))
17 if disp: display(Markdown(doc))
18 else: return doc
<ipython-input-13-3a2dc205f4b1> in add_doc_links(text)
3 "Search for doc links for any item between backticks in `text`."
4 def _replace_link(m): return doc_link(m.group(1) or m.group(2))
----> 5 return _re_backticks.sub(_replace_link, text)
<ipython-input-13-3a2dc205f4b1> in _replace_link(m)
2 def add_doc_links(text):
3 "Search for doc links for any item between backticks in `text`."
----> 4 def _replace_link(m): return doc_link(m.group(1) or m.group(2))
5 return _re_backticks.sub(_replace_link, text)
<ipython-input-10-50cf3cb77d8f> in doc_link(name, include_bt)
4 cname = f'`{name}`' if include_bt else name
5 #Link to modules
----> 6 if is_fastai_module(name): return f'[{cname}]({FASTAI_DOCS}/{name}.html)'
7 #Link to fastai functions
8 try_fastai = source_nb(name, is_name=True)
<ipython-input-7-3edfbf61a3cc> in is_fastai_module(name)
3 "Test if `name` is a fastai module."
4 dir_name = os.path.sep.join(name.split('.'))
----> 5 return (Path(__file__).parent.parent/f"{dir_name}.py").exists()
NameError: name '__file__' is not defined
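(For context, __file__ is not defined when code runs inside a notebook, which is why is_fastai_module blows up here. A guarded fallback along these lines would avoid the NameError; this is only my own sketch, not necessarily how it was actually fixed:)

from pathlib import Path
try:
    lib_root = Path(__file__).parent.parent   # works when run as a .py module
except NameError:
    lib_root = Path.cwd().parent              # fallback for notebook execution (assumption)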
Second one was after
converting: 05_data_core.ipynb => docs/data.core.html
converting: 15_callback_hook.ipynb => docs/callback.hook.html
Traceback (most recent call last):
File "build_docs.py", line 43, in <module>
force_all:Param("Rebuild even notebooks that haven't changed", bool)=False
File "/Users/i077725/Documents/GitHub/fastai_dev/dev/local/script.py", line 34, in call_parse
func(**args.__dict__)
File "build_docs.py", line 46, in main
_make_sidebar()
File "build_docs.py", line 39, in _make_sidebar
open('docs/_data/sidebars/home_sidebar.yml', 'w').write(res_s)
FileNotFoundError: [Errno 2] No such file or directory: 'docs/_data/sidebars/home_sidebar.yml'
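(The second failure just means the docs/_data/sidebars directory doesn’t exist yet when _make_sidebar tries to write the yml. A workaround sketch, my own guess rather than the repo’s actual fix, would be to create the directory first:)

import os
os.makedirs('docs/_data/sidebars', exist_ok=True)   # ensure the target directory exists
with open('docs/_data/sidebars/home_sidebar.yml', 'w') as f:
    f.write(res_s)   # res_s is the sidebar yaml string built earlier in build_docs.py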
Yup I found them too as I was getting the docs up and running - both fixed now.
Perhaps we should try to support py36 for now, since py37 isn’t that widely released yet.
I’ve now made the v2 docs site available at: http://dev.fast.ai
For instance, here’s a tutorial on the new lower-level data APIs: http://dev.fast.ai/pets.tutorial.html
I suspect the notebooks are still the best way to learn what’s in the library so far - but the docs site might be useful too due to the hyperlinks.
I’ll wait patiently then
ndim has been available in PyTorch since 1.2.0: torch.Tensor — PyTorch 2.4 documentation
Therefore the monkey patch
Tensor.ndim = property(lambda x: x.dim())
is no longer required: pytorch>=1.2.0 is in environment.yml.
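(For anyone still on an older PyTorch, a guarded version of the patch keeps both cases working; the hasattr check below is my own illustration, not code from the library:)

import torch
# On PyTorch < 1.2.0 Tensor has no ndim, so patch it in; on >= 1.2.0 the
# built-in property is already there and this block is skipped.
if not hasattr(torch.Tensor, 'ndim'):
    torch.Tensor.ndim = property(lambda x: x.dim())

t = torch.zeros(2, 3)
assert t.ndim == 2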
PR submitted: Update imports.py by kdorichev · Pull Request #158 · fastai/fastai_dev · GitHub
Good point. Guess who suggested that addition to PyTorch
@jeremy how much work would be needed for 3.6 compatibility? Or which specific Python 3.7 features are being used? (If that’s even a simple answer to give.)
I can try to help out with that.
I have no idea. Let’s try to find out!
I’ll go ahead and make a new forum post for where differences are and how to fix them in hopes of a team effort with this
I’m going to take a quick look at it now. It might be an easy fix.
Looks to be just the MethodWrapperType
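(types.MethodWrapperType only exists in Python 3.7+, but on 3.6 the same type can be recovered from any bound method-wrapper. A sketch of a compatible import, just to illustrate the idea rather than the actual change:)

import types
try:
    MethodWrapperType = types.MethodWrapperType    # Python 3.7+
except AttributeError:
    MethodWrapperType = type(object().__str__)     # same type, derived on Python 3.6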
external.py is not generating a config.yml when we run untar_data; looking into it now. Temporary fix is copying the Config() from v1.
OK all works with py36 now. Also added some basic install and test-running steps to the readme.
There’s a few failing tests still, which I’ll look at.
Thanks for the 3.6 support! Did you intend to use the same Config() class from v1, or were you wanting to switch them? I noticed that if the config file does not exist then the config is just blank. E.g.:
config_path = Path(os.getenv('FASTAI_HOME', '~/.fastai')).expanduser()
config_file = config_path/'config.yml'
if config_file.exists():
    with open(config_file, 'r') as yaml_file:
        config = yaml.safe_load(yaml_file)
    if 'version' in config and config['version'] == 1: return config
else: config = {}  # <- here we are not creating the config file
I assume we’re hoping to use the same config.yml that’s in v1, unless that’s incompatible in some way.
And yes, if it’s missing, we should create it!
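(A minimal sketch of what creating it could look like, assuming v1-style keys such as data_path; the exact keys and defaults are my guess, not the actual fastai code:)

import os, yaml
from pathlib import Path

def get_config():
    config_path = Path(os.getenv('FASTAI_HOME', '~/.fastai')).expanduser()
    config_file = config_path/'config.yml'
    if config_file.exists():
        with open(config_file, 'r') as yaml_file:
            config = yaml.safe_load(yaml_file)
        if 'version' in config and config['version'] == 1: return config
    # Missing or outdated file: write a fresh default config (keys assumed, v1-style)
    config = {'data_path': str(config_path/'data'), 'version': 1}
    config_path.mkdir(parents=True, exist_ok=True)
    with open(config_file, 'w') as yaml_file:
        yaml.safe_dump(config, yaml_file, default_flow_style=False)
    return config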
I’ll put in the PR! It works
Any thoughts on optional labeled test datasets? I know this was a request for v1. If it were to work the way I’d like, it would go along these lines: by default, give the test set a single flat label unless labels are explicitly provided. Just as an unlabeled test set helps for Kaggle, a labeled test set helps for research
(something I have been doing quite often)
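(To make the idea concrete, here is a plain-Python illustration, not a fastai API proposal: dummy-label the test items by default, and only attach real labels when the caller supplies them:)

def label_test_set(items, labels=None, dummy_label=0):
    # Default: every test item gets the same flat dummy label (Kaggle-style, no ground truth)
    if labels is None: return [(item, dummy_label) for item in items]
    # Explicitly provided labels: pair them up so metrics can be computed (research-style)
    assert len(labels) == len(items)
    return list(zip(items, labels))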
And PR posted