IMHO, documentation by example is very useful, but at the end of the day, there is no substitute for thorough API documentation itself. Especially when an API relies so much on indirection (e.g., most of the magic in method X is done by these four public methods that receive their arguments through additional arguments passed to method X), not having actual documentation can make using the library very difficult.
As a case in point, I was working a couple of weeks ago with the data block API. The narrative introduction to that section of the docs reads:
The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized DataBunch for training, validation and testing.
This (justifiably, I believe) led me to believe that the methods in this API were chainable, as a number of the examples demonstrate. The problem is that some of those methods are chainable and some aren't. The only way to know what can be chained with what is to know which class each method is defined on and what each method returns. A good number of the methods on that page give no hint of where they are defined (short of reading the source) and no indication of their return values.
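To make the point concrete, here's a toy sketch (not fastai's actual code; all class and method names are made up) of why a chain silently breaks when a method's return type changes partway through:

```python
# Illustrative sketch only: a "chainable" API where one method
# returns a different type, so the chain's available methods change.

class ItemList:
    """Stands in for a data-block-style list of items."""
    def __init__(self, items):
        self.items = list(items)

    def filter(self, pred):
        # Chainable: returns the same type, so further ItemList
        # methods can follow.
        return ItemList(i for i in self.items if pred(i))

    def split_by_pct(self, pct=0.2):
        # NOT chainable with ItemList methods: returns a SplitList,
        # so only SplitList methods may follow from here on.
        cut = int(len(self.items) * (1 - pct))
        return SplitList(self.items[:cut], self.items[cut:])

class SplitList:
    def __init__(self, train, valid):
        self.train, self.valid = train, valid

il = ItemList(range(10)).filter(lambda i: i % 2 == 0)  # still an ItemList
sl = il.split_by_pct(0.2)                              # now a SplitList
# sl.filter(...) would raise AttributeError -- the chain's type changed,
# and nothing in the method names alone tells you that.
```

Without documented return types, the only way to discover where a chain like this stops working is trial and error or reading the source.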
So, I agree the fast.ai library desperately needs better documentation, but not just in example form—actual documentation. I’d be more than happy to help contribute to this. Has anyone spearheaded this effort yet?
As an aside, I’ve been bitten more than once by fast.ai’s non-adherence to semantic versioning. (And, I do understand that this is likely a conscious choice the maintainers have made.) However, do the maintainers expect to reach a point where a patch-level revision really is a patch-level revision in the semver sense (i.e., anything introduced in a minor or patch-level revision is completely backward-compatible, and backward-compatibility is only ever broken in major revisions)?
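For anyone unfamiliar with the contract I'm describing, here's a minimal sketch of the semver compatibility rule (the version numbers are hypothetical, not actual fastai releases):

```python
# Sketch of the strict-semver contract: within the same major version,
# any equal-or-later minor/patch release is backward-compatible.

def semver_compatible(installed, required):
    """Return True if `installed` (major, minor, patch) can safely
    replace `required` under strict semantic versioning."""
    imaj, imin, ipat = installed
    rmaj, rmin, rpat = required
    return imaj == rmaj and (imin, ipat) >= (rmin, rpat)

assert semver_compatible((1, 0, 42), (1, 0, 0))      # patch bump: safe
assert semver_compatible((1, 3, 0), (1, 0, 0))       # minor bump: safe
assert not semver_compatible((2, 0, 0), (1, 0, 0))   # major bump: breaking
```

Under that contract, a user could pin `fastai>=1.0,<2.0` and upgrade freely within the 1.x line; today, that kind of pin doesn't actually guarantee anything.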