In the 2017 edition, even though we were provided with code snippets like vgg.py, we were encouraged by @jeremy to do things ourselves, learning the Keras framework.
With the 2018 edition, we are now presented with the fastai library.
Most of the lectures are now built on top of this library, and we hardly dig into its source code.
Does that mean learning fastai is enough, for both learning and production use?
Or should we learn pytorch and try to write our own version of fastai?
In my experience, learning the rudiments of PyTorch is very important. Sure, you can use fastai as an out-of-the-box solution for common problems, but there's only so much you can do with the fastai library alone. fastai, at the moment, is still mostly a set of utility tools for building a workflow pipeline, and does not define its own interfaces for building layers or data types as (for example) Keras does.
PyTorch shouldn’t be hard to learn at all. Maybe write one or two deep-learning models from scratch. You will see that the concepts are fairly straightforward. PyTorch is more like NumPy than it is anything else.
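To illustrate that NumPy-like feel, here's a minimal sketch of a model written from scratch: a tiny linear regression trained with plain autograd and manual SGD (the data and shapes are made up for the example):

```python
import torch

torch.manual_seed(0)

# Synthetic example data: y = 3x + 1 plus a little noise
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 3 * x + 1 + 0.1 * torch.randn(x.size())

# Model parameters, tracked by autograd
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

lr = 0.1
for step in range(200):
    y_pred = x * w + b                # tensor ops read just like NumPy
    loss = ((y_pred - y) ** 2).mean()
    loss.backward()                   # autograd fills in w.grad and b.grad
    with torch.no_grad():             # plain SGD update, no optimizer object
        w -= lr * w.grad
        b -= lr * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(w.item(), b.item())  # should approach 3 and 1
```

That's essentially the whole mental model: tensors, gradients, and a loop you control yourself.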
But of course that doesn’t mean one can become a PyTorch virtuoso quickly. Much of the learning curve comes from learning the core concepts of deep learning itself. Once you get into more exotic problems such as image segmentation, generative models, language translation etc., you will almost always need heavy model customization, which fastai by itself cannot provide.
Good luck, and enjoy the journey!
Here’s my own experience:
> we hardly digs into its source code.
We start to peel off the layers above and create a model from scratch using PyTorch starting in lesson 5.
There is so much to learn in just 7 lessons. fast.ai’s intention is not to bog you down with learning a library or framework too early in the course. Of course you can pick up PyTorch along the way; nothing is stopping you from doing so.
I think that, for effective learning and to get the most out of the course, we have to put in our own effort and take the initiative to learn. What is more important is the starting point, direction, and learning approach set out by Jeremy and the fast.ai team. BTW, I have taken both the 2017 and 2018 versions of part 1 of the course and can weigh in.
If you peek into the fastai library, it is really a thin, fancy wrapper on top of PyTorch: nothing more, and nothing unnecessary for going through the course. So don’t be afraid to take a look under the hood. I agree the fastai library needs proper and more extensive documentation, which is still lacking currently.
So far, the fastai library is sufficient for education and prototyping. For production, in my case, it depends on what you are building and where you are deploying your model. If you are deploying to mobile platforms, go with TensorFlow/Keras. Otherwise, you can export your PyTorch model/graph with the built-in ONNX tool and import it into TF. Theoretically, things should just work unless your model is particularly exotic.
In any case, happy learning and all the best!
You can definitely learn pytorch.
The fastai library is meant to be a set of wrappers over constructs that are used very frequently. I think it allows you to focus on the techniques of DL itself rather than on a framework.
Having said that, PyTorch is very easy to start learning, and it shouldn’t take you very long to understand what is happening behind the scenes of the fastai library.
Also, PyTorch has very high-quality examples demonstrating the workings of the framework (unlike TensorFlow, which doesn’t explain its code-design details very much).
You can find the examples at http://pytorch.org/tutorials/ . They go from beginner level up to fairly advanced stuff (like style transfer).
I’m trying to learn pytorch while following fastai lessons.
As a starting point, you could check these examples by Justin Johnson (Stanford):
They are super simple and yet allow you to learn a lot about PyTorch (which I find simpler than TF, btw).
Yes, these were my go-to examples in the early days when I started learning PyTorch, given the lack of good tutorials in the docs back then. I discovered them through the CS231n course. The good thing is that they are now part of the official PyTorch docs. I got a lot out of these examples. I remember this is where I first learned how to move the model and Variables to the GPU and back to the CPU, and how to train on multiple GPUs with PyTorch DataParallel.
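For reference, that device-movement pattern looks roughly like this in current PyTorch (Variable has since been merged into Tensor; the model and batch here are placeholders):

```python
import torch
import torch.nn as nn

# Placeholder model and batch standing in for a real network and data
model = nn.Linear(10, 2)
batch = torch.randn(4, 10)

# Pick the GPU when available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = model.to(device)   # move the parameters onto the device
batch = batch.to(device)   # move the data onto the same device

# Spread batches across GPUs when more than one is present
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

out = model(batch)
out = out.cpu()            # bring the result back to the CPU
print(out.shape)
```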