Lesson 1 - Official topic

I think it’s worth mentioning that the other issue with import * is that it makes it difficult to browse code and understand where functions are defined without a REPL or a really good IDE. For example, VSCode (which I think is a great IDE) often has a hard time finding a function’s source if it was imported with import *.

1 Like

FastAI uses __all__ under the hood as well:
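Here's a minimal sketch of how __all__ interacts with import * (the module and function names are made up for illustration):

```python
# mymodule.py -- a hypothetical module illustrating __all__
__all__ = ['public_fn']  # only names listed here are exported by `from mymodule import *`

def public_fn():
    return 'exported'

def helper():            # not in __all__, so import * skips it,
    return 'hidden'      # even though it has no leading underscore

# elsewhere:
# from mymodule import *
# public_fn()   # works
# helper()      # NameError: import * did not bring it in
```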

I can see the from_name_func for the high-level API, but I noticed the mid-level API doesn’t have a matching one (it does have RegexLabeller). I’m wondering about the design decision, i.e. why there is no FuncLabeller. I want to understand the thought process more, thank you!

Ha ha, I’m going to say that to the next person who tells me to use VS Code instead of Jupyter notebooks :wink:

5 Likes

Good one. The TWIML episode on Snorkel is a good conversation!

2 Likes

Question for Jeremy (potentially): could you mention the differences between fastai v1 and fastai v2? For example, I think the method fine_tune is new (very clear name, btw).

The mid-level API works with a label_func. RegexLabeller is a kind of label_func.
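To illustrate, here’s a minimal sketch on the Pets dataset (the exact arguments are illustrative and may need adjusting for your data):

```python
from fastai.vision.all import *

# RegexLabeller is just a callable that maps a string to a label...
labeller = RegexLabeller(pat=r'^(.+)_\d+\.jpg$')
labeller('great_pyrenees_173.jpg')  # -> 'great_pyrenees'

# ...so any plain function with the same "string in, label out" shape works too
def is_cat(name):
    return name[0].isupper()  # Pets convention: cat breeds are capitalized

path = untar_data(URLs.PETS)
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path/'images'),
    valid_pct=0.2, seed=42,
    label_func=is_cat,        # a plain function used as the label_func
    item_tfms=Resize(224))
```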

1 Like

There are a ton of them, so I think it’s easier to just learn the new API. fine_tune (if you check the docs/source code) will do more or less what we’ve done for transfer learning (train a few epochs, unfreeze, and change the learning rate a little).
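As a rough sketch of the idea (simplified, not the exact fastai source; check Learner.fine_tune in the docs for the real signature and defaults):

```python
# Simplified sketch of what fine_tune does: train the new head while the
# body is frozen, then unfreeze and train the whole model with lower,
# discriminative learning rates
def fine_tune_sketch(learn, epochs, base_lr=2e-3, freeze_epochs=1):
    learn.freeze()                    # only the head is trainable
    learn.fit_one_cycle(freeze_epochs, base_lr)
    learn.unfreeze()                  # now all layers are trainable
    learn.fit_one_cycle(epochs, slice(base_lr/100, base_lr/2))
```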

2 Likes

I have used fastai v1 and fastai v2 for ~4 months now. The mid-level APIs in fastai v2 are a LOT better and easier to use. There’s much less “Now I need to rewrite everything!” when a problem doesn’t fit into the high-level API.

Ah! That makes sense! So it’s based more on callability than on the name.
Thank you!

How can training loss be greater than validation loss, considering that the weights are adjusted based on the training data?

It is very new, yes. We won’t work on a main-differences file since there are so many, but we may do one for the high-level API. Suggestions on it are welcome :slight_smile:

1 Like

Does this run n-fold (e.g., 10-fold) cross-validation to get the valid_error?

What’s the difference between validation loss and error rate in the model training output? Jeremy mentioned that the validation set isn’t touched while training, so what is the distinction between the two?

2 Likes

No, but cross-validation has been implemented with the library (not in the library) :slight_smile: (I have some notebooks on it too)
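For reference, here is a minimal sketch of how you could wire k-fold cross-validation around fastai yourself (this is not a built-in API; the DataBlock setup mirrors the Pets example above and may need adjusting):

```python
from fastai.vision.all import *
from sklearn.model_selection import KFold

path = untar_data(URLs.PETS)
items = get_image_files(path/'images')

scores = []
for train_idx, valid_idx in KFold(n_splits=5, shuffle=True, random_state=42).split(items):
    dls = DataBlock(
        blocks=(ImageBlock, CategoryBlock),
        get_y=using_attr(RegexLabeller(r'^(.+)_\d+\.jpg$'), 'name'),
        splitter=IndexSplitter(valid_idx),   # this fold's indices become the validation set
        item_tfms=Resize(224),
    ).dataloaders(items)
    learn = cnn_learner(dls, resnet18, metrics=error_rate)
    learn.fine_tune(1)
    scores.append(learn.validate()[1])       # error_rate on this fold's validation set

print(sum(scores) / len(scores))             # mean validation error across folds
```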

How (and why) to create a good validation set:

12 Likes

Haha, fair :slight_smile: I find myself browsing code on my phone somewhat frequently, too. import * makes that super tough.

@rachel, this please. As @jeremy mentions things, if possible, can he also specify if there are differences we need to be aware of?

I learned the library’s ~2017/18 version, with bits and updates from the other versions, but I’m sure I must have missed stuff. If there are changes for v2, it would be helpful to know. Thanks!

Look at the Walk with fastai2 megathread (it’s my study group): A walk with fastai2 - Vision - Study Group and Online Lectures Megathread. One of the notebooks covers it.

1 Like

Error rate is the metric (1 − accuracy) and, unlike the loss (cross-entropy), it is not used for computing gradients in backpropagation.
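A tiny illustration of the difference (plain PyTorch, made-up numbers):

```python
import torch
import torch.nn.functional as F

logits  = torch.tensor([[2.0, 0.5], [0.2, 1.5], [1.0, 0.9]])  # model outputs
targets = torch.tensor([0, 1, 1])                              # true labels

loss = F.cross_entropy(logits, targets)  # differentiable; this is what drives the weight updates
preds = logits.argmax(dim=1)             # hard predictions: [0, 1, 0]
err = (preds != targets).float().mean()  # error rate = 1 - accuracy; a readout only, no gradients

print(loss.item(), err.item())           # one of three predictions is wrong -> error rate 1/3
```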

1 Like