This topic is editable, so feel free to add, remove, and reorganize questions and answers based on what you’ve seen coming up in other threads. (Please don’t add a question to this FAQ unless it’s one you’ve actually seen asked more than once on the forums! Please include a full question and answer in English for each, not just a link.)
The course and forums
- I’ve started the 2017 version of the course already. Should I switch to the new 2018 version?
- Yes! The new course is a huge step up from version 1 (v1) in terms of the quality of models that you’ll learn and the amount you’ll be able to do. If you’ve already completed v1, you’ll find most of version 2 (v2) is new, so it’s worth doing as well (especially lessons 2-4).
- Should I complete Intro to Machine Learning (ML) before doing this course?
- The ML class goes at a somewhat gentler pace, but doesn’t show how to build world-class models (the focus is more on process and interpretation, along with a more in-depth discussion of foundational details). The Deep Learning (DL) class is more intense, and gets you building state-of-the-art models from lesson 1. Each can be understood on its own, but they complement each other.
The fastai library
- Why are we using PyTorch? Should I study TensorFlow instead?
- During the development of Cutting-Edge Deep Learning for Coders, fast.ai started to hit the limits of the libraries we had chosen: Keras and TensorFlow. Therefore PyTorch was used for the 2018 course, which allowed fast.ai to use all of the flexibility and capability of regular Python code to build and train neural networks, and to tackle a much wider range of problems. An additional benefit of PyTorch is that you can fully dive into every level of the computation, and see exactly what is going on. Furthermore, PyTorch tends to incorporate recent research advances earlier. For more details, see the post Introducing PyTorch for fast.ai. (We also briefly teach Keras+TensorFlow during the course, and the concepts transfer easily.)
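One concrete aspect of that flexibility is that in PyTorch a model’s forward pass is just Python code, so it can contain ordinary loops and conditionals that depend on the data. The sketch below uses plain Python (no PyTorch at all) purely to illustrate this “define-by-run” idea; the toy `forward` function and its layer are made up for illustration:

```python
# Plain-Python sketch (not PyTorch code) of the "define-by-run" idea:
# because the forward pass is ordinary code, it can use loops whose
# length depends on the data itself.
def forward(x, depth):
    # apply the same toy "layer" a data-dependent number of times
    for _ in range(depth):
        x = max(0.0, 0.5 * x + 1.0)   # a toy layer: affine transform + ReLU
    return x

print(forward(4.0, 1))   # one layer
print(forward(4.0, 3))   # three layers -- same code, different depth
```

In a static-graph framework like 2018-era TensorFlow, this kind of data-dependent structure had to be expressed through special graph operations; in PyTorch it is just a `for` loop.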
- Why are we using the fastai library?
- PyTorch does not have the clear, simple API that Keras offers for training models, and does not have defaults chosen based on best practices - you have to specify everything in detail yourself. We therefore took inspiration from Keras in creating a library on top of PyTorch designed to fill these gaps, and ended up creating a totally new library which allows more accurate models to be trained more quickly, with less code.
- Can I do the course in Keras or some other library, instead of fastai+PyTorch?
- Probably not. Many students have tried, but no-one has been successful yet, because there are a lot of important features in fastai that aren’t available in any other libraries, and trying to replicate them without the benefits provided by PyTorch is very difficult. We use fastai+PyTorch because it’s the most productive environment for prototyping and learning about deep learning algorithms. You’ll also learn in the course how to use Keras+TensorFlow, but you’ll find they are much slower, result in less accurate models, and require more code!
Python, Jupyter, numpy and friends
- What programming tools do I need to know?
- You need to be familiar with the basics of Python and NumPy. You can start the course if you haven’t used Python before but are a proficient programmer - you’ll just need to do some googling to learn as you go! Here is a brief NumPy tutorial to get you started quickly with this important library.
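As a quick taste of the NumPy basics you’ll lean on throughout the course (array creation, elementwise arithmetic, broadcasting, and reductions), here is a minimal sketch:

```python
import numpy as np

# A 2x3 array (two rows of data, three columns)
x = np.array([[1., 2., 3.],
              [4., 5., 6.]])

# Elementwise arithmetic applies to every element at once
print(x * 2)            # doubles each element

# Broadcasting: a length-3 array is applied across each row
means = x.mean(axis=0)  # column means -> [2.5, 3.5, 4.5]
print(x - means)        # centers each column at zero

# Reductions collapse an axis
print(x.sum(axis=1))    # row sums -> [6., 15.]
```

If these operations feel natural, you have enough NumPy to start lesson 1; the rest can be picked up as you go.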
- Why is the variable naming and code formatting different to established standards such as PEP-8?
- Jeremy prefers code that fits in the amount of screen space he can see at once. The approaches that work best for data science are not the same as those that work best for general software engineering. Unfortunately, few people have written about effective patterns for data science code. Note that every variable name is either a mnemonic (lr -> learning rate), or is based on standards from the ML and stats literature (x -> independent variables; y -> dependent variables).
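To make the naming style concrete, here is a hypothetical snippet (not taken from the fastai library) showing those mnemonics in a single gradient-descent loop:

```python
# Hypothetical illustration of the course's naming style:
# lr = learning rate, x = independent variable, y = dependent variable,
# y_hat = prediction, w = weight. Short names keep more code on screen.
lr = 0.1
w = 0.0
x, y = 2.0, 4.0               # one training example, where y = 2*x

for _ in range(50):
    y_hat = w * x              # prediction
    grad = 2 * (y_hat - y) * x # d(squared error)/dw
    w -= lr * grad             # gradient-descent update

print(round(w, 3))             # w converges toward 2.0
```

Once you know the handful of conventions, a dense line like `w -= lr * grad` reads as quickly as its spelled-out equivalent.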
- How can I access a GPU for minimal cost?
- If you’re a university student, AWS provides a few credits to students through AWS Educate Packs.
- GitHub provides additional AWS credits through their student pack.
- Google Cloud Platform (GCP) provides $300 worth of free trial credits that can be used over the course of 12 months. Please have a look at this thread for a complete guide on how to set up GCP step-by-step using the Paperspace bash script designed for this course.
- Can I use my own Linux box, instead of Paperspace/Crestle/AWS?
- Yes you can, as long as it has an NVIDIA GPU, and you don’t mind spending the time getting it set up and maintaining it. However, note that this can be quite a distraction from actually studying deep learning, so we normally recommend using the supported cloud-based options until you’ve completed part 1.
- Can I use my own Windows or Mac machine?
- Should I buy a laptop with an NVIDIA GPU?
- Probably not. You’ll get a much better GPU for much less money if you get a desktop and simply connect to it from a cheap laptop - but (as mentioned above) for now you’re likely better off using a cloud-based approach. Having said that, if you do want to work directly on a laptop, there’s a discussion of options in this thread.
Deep learning questions
- No questions yet