Lesson 14 (2019) discussion and wiki

Now I am starting to understand where the name of the Chainer library comes from :smile:

4 Likes

It’s cool how the order of function application in the forward pass is literally forward, from input to output, while the order in the backward pass is backward, from output to input. You can see how the derivative information is propagated backward by those function applications.
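
For concreteness, here is a minimal sketch in plain Swift (not the S4TF autodiff API; every name here is made up for illustration): each step returns its value plus a pullback, the forward pass applies the steps in order while saving the pullbacks, and the backward pass applies the saved pullbacks in reverse.

```swift
// Minimal sketch of chained function application (plain Swift, not the
// S4TF API). Each step returns its value plus a "pullback" mapping an
// output gradient back to an input gradient.
typealias Step = (Double) -> (value: Double, pullback: (Double) -> Double)

let square: Step = { x in (value: x * x, pullback: { dy in dy * 2 * x }) }
let scale2: Step = { x in (value: 2 * x, pullback: { dy in dy * 2 }) }

func runChain(_ steps: [Step], at x: Double) -> (value: Double, gradient: Double) {
    var value = x
    var pullbacks: [(Double) -> Double] = []
    for step in steps {                  // forward pass: input -> output
        let (v, pb) = step(value)
        value = v
        pullbacks.append(pb)
    }
    var grad = 1.0
    for pb in pullbacks.reversed() {     // backward pass: output -> input
        grad = pb(grad)
    }
    return (value, grad)
}

let (y, dydx) = runChain([square, scale2], at: 3.0)
print(y, dydx)  // 18.0 12.0: y = 2x², so dy/dx = 4x = 12 at x = 3
```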

5 Likes

Yes, value semantics and our formulation of the S4TF autodiff system based on functions result in something much closer to the actual underlying mathematics. :slight_smile: Once we finish building out this feature, we think it should make it possible to build APIs that are powerful, flexible, and easy to use.

2 Likes

How would one go about adding layer freezing? That is, how can I compute all the gradients but only update certain differentiable variables while using an optimizer like Adam? I have a hacky solution where I reimplemented the Adam optimizer and only update certain values when iterating through allDifferentiableVariables, but I feel there must be a better way to go about it.

What is the roadmap for fastai in the medium term? Will there be two versions, one based on S4TF/TensorFlow and the other on Python/PyTorch? And will the next iteration of the course be taught using S4TF?

5 Likes

Well, that course was a mouthful! As you warned us, there’s a lot of material to digest in the next year(s).

I wanted to thank the whole team again; we all really appreciate what you’re doing to make Deep Learning more accessible.
What you’ve put together is an amazing course that will probably inspire many people in diverse ways.

Also looking forward to the bonus lectures!

10 Likes

Right now, that’s exactly what makes sense. However, there’s a new Swift language feature that we’re super excited about: Property Delegates (aka Static Custom Attributes). This might allow us to naturally mark certain variables as frozen or not. We’re definitely still exploring the design space. For now, it’s possible to do exactly what you suggested, and it can be made more flexible by having layer-wise learning rates (which can themselves be set using keypaths). But we’re looking forward to iterating on our design based on your (and the rest of the community’s) feedback!
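
To make the keypath idea concrete, here is a hypothetical plain-Swift sketch (not S4TF’s actual optimizer API; `TwoLayerParams` and `SimpleSGD` are invented for illustration): per-parameter learning rates are stored against writable key paths, and a rate of 0 freezes that parameter even though its gradient was still computed.

```swift
// Hypothetical sketch, not the S4TF optimizer API: per-parameter learning
// rates keyed by writable key paths; a rate of 0 freezes that parameter.
struct TwoLayerParams {
    var w1 = 0.5
    var w2 = 0.5
}

struct SimpleSGD {
    var rates: [WritableKeyPath<TwoLayerParams, Double>: Double]

    func update(_ params: inout TwoLayerParams, along grads: TwoLayerParams) {
        // Skip frozen parameters (rate == 0); update the rest with SGD.
        for (path, rate) in rates where rate != 0 {
            params[keyPath: path] -= rate * grads[keyPath: path]
        }
    }
}

var params = TwoLayerParams()
let grads = TwoLayerParams(w1: 1.0, w2: 1.0)   // pretend gradients
let opt = SimpleSGD(rates: [\.w1: 0.0,          // frozen layer
                            \.w2: 0.1])         // trainable layer
opt.update(&params, along: grads)
print(params)  // w1 stays 0.5 (frozen); w2 becomes 0.4
```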

7 Likes

Certainly a wild ride. Thanks Rachel, Jeremy and Chris - and the rest of the team (Sylvain and the S4TF folks), and everyone on the forums too, for great insight and support, and for making me not feel alone and stupid. Looking forward to the extra sessions - and to going through this all again - with frequent pauses!

9 Likes

Great material tonight and last week! What you have accomplished with S4TF so far is beyond amazing. Congratulations! We can all be grateful that Chris sees the genius of the fast.ai research group as a collaborator in bringing DL to S4TF, even though he is now employed by Google, which from the outside appears to be the center of gravity of DL. This validates the conclusion we have all reached that fast.ai and fastai stand out above the crowd as the place to make DL work.

Many of us, myself included, are not DL experts or language builders, but are experts in something else, like medicine, that needs DL to make a quantum leap in the evolution of our field of expertise. In a year or two, S4TF will be complete enough for us to switch over, and when we do, there will be a very easy learning curve, great documentation, minimal bugs, and huge benefits in flexibility and performance when implementing DL. In the meantime, I’m grateful that we have this solid fastai/PyTorch/Python framework in which we can keep moving forward today.

7 Likes

Thanks a lot Jeremy and Rachel. And Chris Lattner for the amazing two weeks.

4 Likes

Sure… :grinning:

Great to be part of this amazing community. So much to learn. S4TF will surely conquer the depths of practical deep learning. However, “take one particular thing or project and make it fantastic” is what really keeps me going.

4 Likes

Thanks for a great course, wow! So much material in so little time. Thanks Rachel, Jeremy and the team, what a great way to spend my evenings (and my future evenings).

1 Like

Big thanks to Rachel, Sylvain and Jeremy from fast.ai and Chris and team from S4TF for this class. Amazing to be part of this!

1 Like

Well, that takes up the rest of 2019 and a good part of 2020. Very entertaining. It’s obvious that a lot of effort by a lot of people has gone into this, including some chance meetings by some very focused individuals.

A great many thanks to all

2 Likes

Thanks Jeremy, Rachel, Chris and fastai + S4TF teams for this great Part 2 v3! Looking forward to the future with Swift TF.

1 Like

Thank you Jeremy, Chris & Rachel, for this amazing course and a very exciting last two weeks. Looking forward to contributing to S4TF.

1 Like

“You can use his camera while looking through your Topology eye wear glasses.” :rofl:

1 Like

Since no one has said this so far, let me:
Jeremy-months need to be a thing. Henceforth, all effort should be measured in them.

Which makes me realise that when Jeremy says “I pack enough material to keep you busy for a year,” I need to allocate the next six years for it! :grimacing:

Thanks Chris for the reality check!

4 Likes

The forum probably doesn’t allow emotions, but allow me:

I feel both sad and heavy now that the course is over!

5 Likes