Here is what you were talking about. Thanks Sylvain.
yield keeps the state of the for loop until it's finished.
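In case it helps, here is a tiny standalone example (my own, not from the notebook) of what "keeps the state of the for loop" means:

```python
# A generator pauses inside its for loop at each `yield` and resumes
# from the same spot on the next request.
def sampler(items):
    for item in items:          # loop state (position in `items`) is preserved
        yield item              # execution pauses here until the next next()

gen = sampler([10, 20, 30])
print(next(gen))  # 10 -- runs the loop body up to the first yield
print(next(gen))  # 20 -- resumes right after the previous yield
print(next(gen))  # 30
```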
sometimes we don't want to reset gradients to zero, for example in a warm-start.
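One concrete case where you deliberately skip zeroing is gradient accumulation over several mini-batches (warm-starting from existing gradients relies on the same fact that gradients add up). A rough sketch, with model, loss_func, opt and dl standing in for whatever you already have:

```python
# Accumulate gradients over `accum_steps` mini-batches before stepping,
# so we intentionally do NOT zero the gradients on every iteration.
accum_steps = 4
for i, (xb, yb) in enumerate(dl):
    loss = loss_func(model(xb), yb)
    loss.backward()                      # gradients add up across iterations
    if (i + 1) % accum_steps == 0:
        opt.step()                       # update with the accumulated gradients
        opt.zero_grad()                  # only now do we reset them
```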
Where is iter() of the DataLoader invoked?
Doesn't the provided collate function void the benefits of iterators?
When you call for bla in dl: the for loop invokes it for you.
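Roughly what happens under the hood (dl is your DataLoader):

```python
# What `for bla in dl:` expands to behind the scenes:
it = iter(dl)          # this is where DataLoader.__iter__ gets invoked
while True:
    try:
        bla = next(it) # each next() yields one collated batch
    except StopIteration:
        break
    # ... loop body runs here with `bla` ...
```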
In what sense? You need a way to collate your samples together in a batch.
Right, my point was to always clear it by default, but if you don't want to clear it, you'd specify that explicitly with something like a keep_grad function.
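To make the suggestion concrete, a rough sketch of what that could look like; keep_grad here is hypothetical, not an existing PyTorch argument:

```python
# Hypothetical sketch of the suggestion: clear gradients by default,
# keep them only when explicitly asked to (e.g. for gradient accumulation).
def train_step(model, loss_func, opt, xb, yb, keep_grad=False):
    loss = loss_func(model(xb), yb)
    loss.backward()
    opt.step()
    if not keep_grad:      # clearing is the default behavior
        opt.zero_grad()
    return loss
```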
we must be sampling without replacement
Well, doesn't it get the whole batch into memory? What if more than one row of a batch can't fit in memory?
A link has been posted twice to answer that question. Please click on it before asking it again.
I have put it up in the wiki.
You only collate the samples you are about to yield right now: if your batch size is 64, just those 64 samples.
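A simplified sketch of what the DataLoader's iterator does (the real PyTorch implementation has more moving parts, e.g. workers and pin_memory):

```python
# Indices are grabbed and collated one batch at a time, so only `bs`
# samples are materialized per step, never the whole dataset.
def dataloader_iter(dataset, sampler, collate_fn, bs=64):
    batch = []
    for idx in sampler:                 # e.g. a shuffled list of indices
        batch.append(dataset[idx])
        if len(batch) == bs:
            yield collate_fn(batch)     # collate just these 64 samples
            batch = []
    if batch:                           # last, possibly smaller, batch
        yield collate_fn(batch)
```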
Note that num_workers > 1 could cause memory-leak problems. I'm not sure if that was fixed in the most recent PyTorch or if it's an inherent Python issue.
A lot of times in Kaggle and Colab kernels (a very big headache).
How do we incrementally train the model? Do we keep using the statistics of the original training data, i.e. the mean and standard deviation, and continue training it?
thanks…
What does with torch.no_grad() ensure?
@champs.jaideep It ensures that you do not perform backprop on the validation set in this case…
If I can ask a bit of an off topic question. Do we anticipate more integration of generative models in fast.ai soon? Or is it that the main components are given and it's up to us to construct the more complex models?
When you do the forward pass, you need to store intermediary results for the backward pass (as we have shown in notebook 02). Using with torch.no_grad() removes that default behavior to let you save GPU memory.
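For reference, the typical pattern looks something like this (a sketch; model, loss_func and valid_dl are placeholders for your own objects):

```python
import torch

# Validation loop sketch: inside the `with torch.no_grad()` block, PyTorch
# does not store the intermediate activations needed for backward, so the
# forward pass uses less GPU memory (and you can't call .backward() here).
model.eval()
with torch.no_grad():
    tot_loss = 0.
    for xb, yb in valid_dl:
        pred = model(xb)
        tot_loss += loss_func(pred, yb).item()
print(tot_loss / len(valid_dl))
```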