SOURCE CODE: Mid-Level API

Yes, exactly, we should work towards adding that. That tab has never been closed since I found it, haha.

1 Like

That's one of my goals (I was thinking the same thing looking at the WWF2 notebooks), so I'll try to focus on that here today :slight_smile: (note: I'll be using my study group data bits, so there will be more than just what was in there).

I’ll write up a draft then you all can tell me your thoughts!

1 Like

Yes, I'd love to see that. I was planning to add some things and send you a PR. show_batch is where I'm stuck for a multi-regression case.

1 Like

I've set up the blog here: https://akashpalrecha.me/fastai-explained/
This is the github repo for whoever wants to contribute: https://github.com/akashpalrecha/fastai-explained

I've added @muellerz as a contributor. Ping me your GitHub handles and I'll add you too!

I've kept some of the initial example blog posts to serve as references/guides for people who haven't used something like fastpages before.

1 Like

We have also tried to make it so that the things being called behind the scenes are independently useful. So see if you can show some examples of using the lower-level bits to do useful things - I think that would be really helpful. You could even make a PR to the fastai2 docs/nbs with those examples and additional explanation.
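
For instance, something as small as this (just a quick sketch; L lives in fastcore.foundation on the version I'm looking at, so adjust the import to your install) shows a behind-the-scenes piece doing useful work on its own:

    # fastcore's L is the list-like class fastai uses everywhere internally,
    # and it's handy completely outside of deep learning
    from fastcore.foundation import L

    t = L(range(10))
    evens = t.filter(lambda o: o % 2 == 0)   # (#5) [0,2,4,6,8]
    tens = t.map(lambda o: o * 10)           # (#10) [0,10,20,...,90]
    print(evens, tens)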

Overall, if you can make it a goal to have fastai2 doc PRs as part of the outcome of your digging, that would potentially be very valuable to the project! :slight_smile:

5 Likes

nbdev also has a function that gives you a link to the actual notebook that a symbol is defined in! :slight_smile: (I don’t recall the function name - perhaps someone can look thru nbdev to find it?)

2 Likes

s/a blog post/a really amazingly great blog post/ :slight_smile:

1 Like

nb_source_link :grin:

Keep in mind that you need to import it first with:
from nbdev.showdoc import nb_source_link
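
For example (hedged: this is the nbdev 1.x behaviour as I understand it, and untar_data is just an arbitrary fastai2 symbol to look up - swap in whatever you're reading):

    from nbdev.showdoc import nb_source_link
    from fastai2.data.external import untar_data

    # displays a link to the notebook where untar_data is defined
    nb_source_link(untar_data)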

2 Likes

Precisely what I suggested yesterday:

1 Like

So I was going through Jeremy's FastAI V2 Walkthroughs, and the first walkthrough takes us through some of the low-level APIs and the design decisions that went into them. Going through the Transform and TypeDispatch classes and understanding them was a lot of fun. It was a bit hard, since there's a lot of jumping between functions involved, but it was certainly fun.
I'm feeling more inclined towards writing about the low-level APIs, but I've still got a lot of the DataBlock API to see, so I wouldn't count on this :sweat_smile:.
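
The part that really clicked for me is how a Transform picks which encodes/decodes to run from the type annotation of the argument. A toy sketch (not fastai's actual code, and I'm assuming Transform is importable from fastcore.transform on your version):

    from fastcore.transform import Transform

    class Norm(Transform):
        def encodes(self, x:int):   return x / 255     # used when x is an int
        def encodes(self, x:str):   return x.lower()   # used when x is a str
        def decodes(self, x:float): return int(x * 255)

    t = Norm()
    t(128), t('HELLO')   # -> (0.50196..., 'hello')
    t.decode(0.5)        # -> 127

Having both encodes methods coexist in one class is the TypeDispatch bit doing its thing.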

1 Like

Actually, that's a great thing to write on, as all the transforms work in that way! This would be especially handy for custom datatypes (like if we made an ImageTuple for a Siamese model).
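
Something roughly like this is what I have in mind (a very rough sketch: the names are made up, and the tuple base class is Tuple or fastuple depending on your fastcore version, so treat it as pseudocode until checked against the repo):

    from fastai2.vision.all import *

    class ImageTuple(Tuple):
        def show(self, ctx=None, **kwargs):
            im1, im2 = self                    # assumes two same-sized PILImages
            t1 = tensor(im1).permute(2, 0, 1)
            t2 = tensor(im2).permute(2, 0, 1)
            return show_image(torch.cat([t1, t2], dim=2), ctx=ctx, **kwargs)

    class ImageTupleTfm(Transform):
        # turns a pair of file names into our custom type
        def encodes(self, fns):
            return ImageTuple(PILImage.create(fns[0]), PILImage.create(fns[1]))

Once the type knows how to show itself, the display machinery has something to dispatch on.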

Also, I think a lot of it isn't documented very well, so helping out there could be another objective, as Jeremy already mentioned.

I guess the question becomes: how should that documentation look? For instance, if we took untar_data, would we want to explain each sub-function it calls and show an example of each? Or show one creative sub-example (such as a custom extract_func - rough sketch below) and explain the rest in text?
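
To make that concrete, the creative sub-example could be as small as this (hedged: I'm assuming untar_data exposes an extract_func callback taking (fname, dest), which is what I remember from the notebook - check the signature first):

    import tarfile
    from fastai2.data.external import untar_data, URLs

    def verbose_extract(fname, dest):
        # a custom extract_func that just logs what it unpacks (assumes a .tgz archive)
        print(f'Extracting {fname} to {dest}')
        tarfile.open(fname, 'r:gz').extractall(dest)

    path = untar_data(URLs.MNIST_TINY, extract_func=verbose_extract)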

Has the first actual meeting (aside from initial planning) happened? Is there a schedule?

1 Like

I don't believe there's a schedule set yet. The meeting notes (from the other day) say we'll bring in our first articles by Sunday, go over them together, and discuss our different approaches.

For the first article we agreed to all write our own takes on the Lesson 1 topics (untar_data, etc.), with a few of us branching into sub-ideas along the way.

2 Likes

So, as far as the documentation is concerned, I feel the existing fastai format of one-line docs for functions is something I wouldn't tamper with. Adding comments inside functions may seem like a good idea, but too much of that might make things look very clunky. That said, I've found that quite a lot of those one-line docs are not well formed, and sometimes they're even missing, so as we understand more of the code maybe we could just make those one-liners better.

Now, for the blogging part, I think we'll all know better by Sunday what works best, but I'd be more inclined to simply call a function, follow it through with something like set_trace (a tiny example below), and provide short, concise explanations of snippets of code and possibly the inferred design decisions, without beating around the bush too much. People already have too much to read, so I'd rather keep this short and concise for them, and for us - for us, because writing this way will allow us to cover much more.
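
To show what I mean by following a function through (assuming an interactive IPython/Jupyter session; any function works, untar_data is just the Lesson 1 example):

    from pdb import set_trace
    from fastai2.data.external import untar_data, URLs

    set_trace()                          # at the prompt: 's' steps into untar_data,
    path = untar_data(URLs.MNIST_TINY)   # 'n' runs the next line, 'q' quits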

1 Like

Thanks Jeremy, I have updated data.external quite a bit to include examples for Config, URLs, and download_url, and added much more prose than what currently exists.

I have also added a complete list of the datasets in the URLs class, with the best explanations I could find online for each one. Here is the PR. Hopefully future users will find these docs more useful.

4 Likes

Each sub-function it calls should already be in the docs (unless it's private, which isn't common) - so it would be best to add the examples and details for each function in the place it's already documented.

1 Like

We also have dataset descriptions here: https://course.fast.ai/datasets

Maybe just link to that, and add details for those that aren’t in that link?

2 Likes

Is my understanding correct that we write short Jupyter blog posts for functions with nice explanations, and then merge them into fastai-explained?

Also, if this is the case, we should have some tracker of who is writing about which function =)