I seem to remember that in fastai there's an xSequential or something like that. If so, what does it do?
nn.Parameter is used to register a parameter inside a model. This is for basic layers like Linear.
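To make that concrete, here's a minimal sketch (my own example, not from the lesson) of registering a tensor as a trainable parameter, the way nn.Linear does internally:

```python
import torch
from torch import nn

class ScaleShift(nn.Module):
    "Toy layer: y = a*x + b, with a and b as trainable parameters."
    def __init__(self):
        super().__init__()
        # Wrapping a tensor in nn.Parameter registers it with the module,
        # so it shows up in .parameters() and gets updated by the optimizer.
        self.a = nn.Parameter(torch.ones(1))
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return self.a * x + self.b

m = ScaleShift()
print([p.shape for p in m.parameters()])  # [torch.Size([1]), torch.Size([1])]
```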
Thanks @sgugger ! makes sense
I think DummyModule's parameters method calls layer.parameters() - that's a bit like using the word in the word's definition
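For context, the pattern being described looks roughly like this (a reconstruction of the idea from memory, not necessarily the notebook's exact code): parameters() just delegates to each sub-layer's own parameters(), which is the "word in the word's definition" bit.

```python
import torch
from torch import nn

class DummyModule():
    "Hand-rolled stand-in for nn.Module."
    def __init__(self, n_in, nh, n_out):
        self._modules = {}
        self.l1 = nn.Linear(n_in, nh)
        self.l2 = nn.Linear(nh, n_out)

    def __setattr__(self, k, v):
        # Register sub-layers as they are assigned, like nn.Module does.
        if not k.startswith('_'): self._modules[k] = v
        super().__setattr__(k, v)

    def parameters(self):
        # Defined in terms of each layer's own parameters() method.
        for l in self._modules.values():
            yield from l.parameters()

mdl = DummyModule(784, 50, 10)
print(sum(p.numel() for p in mdl.parameters()))  # total trainable params
```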
In the training model section, how was the backpropagation calculated? Was PyTorch's autograd used? The derivative of the new loss function was not defined manually, so I assume "we are allowed" to use PyTorch's autograd then.
Yes, I believe we have been allowed to use it since the start of lesson 2
Yes, we have been allowed to do so since notebook 02
You do not want to see the gradients of a softmax.
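To illustrate the point (my own minimal example): no derivative of the loss is written by hand; calling loss.backward() makes autograd fill in all the gradients.

```python
import torch
from torch import nn
import torch.nn.functional as F

model = nn.Linear(784, 10)
xb, yb = torch.randn(64, 784), torch.randint(0, 10, (64,))

loss = F.cross_entropy(model(xb), yb)  # log_softmax + nll under the hood
loss.backward()                        # autograd computes all gradients

print(model.weight.grad.shape)  # torch.Size([10, 784])
```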
It's SequentialEx
… and yeah, I'm hoping that comes up soon too
Yes, it was created by fastai around the start of part 1 last year.
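For anyone curious: SequentialEx (in fastai.layers) behaves like nn.Sequential but keeps the block's original input around, which is what lets MergeLayer add it back for a residual connection. A rough usage sketch, assuming the fastai v1 layers API:

```python
from fastai.layers import SequentialEx, MergeLayer, conv_layer

# A residual block: two convs, then MergeLayer adds back the block's
# original input (SequentialEx stashes it on x.orig for each sub-layer).
res_block = SequentialEx(
    conv_layer(64, 64),
    conv_layer(64, 64),
    MergeLayer(),  # out = x + x.orig
)
```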
WHAT IS TORCH.NN REALLY?
by Jeremy Howard, fast.ai. Thanks to Rachel Thomas and Francisco Ingham.
This is like taking a walk through PyTorch development, a tiny but important part of it… Nice!
What are the default initializations for the fast.ai learners? I must be missing it.
Do similar things exist in other libs like MXNet, TF, etc.? These are the literal building blocks.
We are not only creating fast.ai from scratch, but PyTorch too
That's a very broad question, could you refine it? Do you mean model initialization?
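In the meantime, here's a generic sketch (not a statement about fastai's actual defaults) of how you'd apply an initialization scheme yourself in plain PyTorch, e.g. Kaiming init via model.apply:

```python
import torch
from torch import nn

def init_linear(m):
    # Apply Kaiming (He) init to every Linear layer in the model.
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight)
        nn.init.zeros_(m.bias)

model = nn.Sequential(nn.Linear(784, 50), nn.ReLU(), nn.Linear(50, 10))
model.apply(init_linear)  # .apply recurses over all submodules
```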
Sounds like a good use case for property-based testing.
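A minimal sketch of the idea, using the hypothesis library (my choice of framework; the post doesn't name one). The property: softmax output should sum to 1 for any input logits.

```python
import torch
from hypothesis import given
from hypothesis import strategies as st

@given(st.lists(st.floats(min_value=-10, max_value=10), min_size=1, max_size=100))
def test_softmax_sums_to_one(logits):
    # hypothesis generates many random logit vectors; the property must
    # hold for all of them, not just a few hand-picked cases.
    x = torch.tensor(logits)
    assert torch.allclose(x.softmax(dim=0).sum(), torch.tensor(1.0), atol=1e-5)
```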
Understanding how things developed gives much more context and information about them. Learning how to develop this kind of programming mindset is much more useful than only calling a library API.
What's fast.ai's degree of support for TensorFlow? i.e. I'd think that some classes could also be very helpful for training in TF (e.g. the whole data management bit)
Is it fair to think of generators (the output of yield) as task graphs?
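They're related but not quite the same: a generator is a single lazy, resumable computation rather than a graph of independent tasks. A minimal sketch (my own example) of the laziness:

```python
def batches(items, bs):
    # Body doesn't execute yet; calling this just builds a generator object.
    for i in range(0, len(items), bs):
        yield items[i:i+bs]   # work happens here, one step per next()

gen = batches(list(range(10)), bs=4)
print(next(gen))  # [0, 1, 2, 3] -- computed on demand
print(next(gen))  # [4, 5, 6, 7]
```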
There is no support for TensorFlow. For now, fast.ai is built on PyTorch only.
I believe TF already has a ton of its own iterators and datasets, as well as high-level estimators. Not sure about their data augmentation, though.