I can attest to this. For me, I’d use Colab for any models except NLP; for those I’d go to Paperspace (though only briefly).
I don’t think this will be covered in depth in the class. I have used different systems and depending on your needs, I can suggest looking into:
The fastai inference system (good for light use or batch inference, not good for intensive real-time work). Super easy, and it covers most “side-project” kinds of things, including building web interfaces around it with Flask, for example.
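To make the Flask idea concrete, here is a minimal sketch of wrapping a model behind a web endpoint. The `/classify` route and the `predict` helper are assumptions for illustration; with fastai you would typically load an exported model with `load_learner('export.pkl')` and call its `predict` method instead of the placeholder below.

```python
# Minimal sketch: wrapping a model behind a Flask endpoint.
# `predict` is a hypothetical stand-in; with fastai you would
# typically do `learn = load_learner('export.pkl')` and call
# `learn.predict(...)` instead.
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict(data: bytes) -> str:
    # Placeholder for a real model call, e.g. learn.predict(data)
    return "cat" if len(data) % 2 == 0 else "dog"

@app.route("/classify", methods=["POST"])
def classify():
    img_bytes = request.get_data()   # raw image bytes sent by the client
    label = predict(img_bytes)
    return jsonify({"label": label})

if __name__ == "__main__":
    app.run(port=5000)   # fine for a side project, not for heavy traffic
```

This kind of single-process server is exactly the “light use” case mentioned above; for high-throughput, parallel inference you’d move to something like TensorRT.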
Completely hosted solutions (there are many around; you can google them). Very easy, but they tend to be $$$.
Save the model as any other PyTorch model, then convert it to NVIDIA TensorRT (https://developer.nvidia.com/tensorrt). This is good for high-performance, parallel inference. It requires a bit of an investment to get right the first time around, but after that it becomes pretty straightforward. This is what we use in production at my company. Despite the name, you can use it with models from PyTorch, TensorFlow, Caffe…
Sorry if this is obvious, but I’m confused about the relationship between the platforms we can use. I am an avid Jupyter notebook user, and I see Jeremy using one now. But when I look at the setup instructions, I see Paperspace, Google Colab, etc., but not Jupyter specifically.
Those are platforms that give you a GPU to work with. Jupyter is the programming interface where you’ll write/run the code that uses the GPU.
This is great, thanks.
I think we should create a dedicated topic for deployment in different environments/devices.
I have made a thread for the questionnaire:
Never apologize for asking a question. For your information, I had to delete 7 replies to your (allegedly) obvious question (and left the best, IMHO), so that’s how eager people are to reply to obvious questions.
Shouldn’t fixing the random seed guarantee reproducibility of the results? How can they vary between multiple runs?
The seed is for the training/validation split, not the training loop (which also contains random operations, as we will see).
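A plain-Python sketch of the point above: seeding the RNG used for the split makes the *split* reproducible, but any randomness elsewhere (weight init, dropout, augmentation) draws from other RNGs and still varies unless those are seeded too. The `split` helper and item list are made up for illustration.

```python
# Seeding the split RNG fixes the split, not everything else.
import random

items = list(range(10))

def split(seed=42):
    rng = random.Random(seed)   # dedicated, seeded RNG for the split only
    shuffled = items[:]
    rng.shuffle(shuffled)
    return shuffled[:8], shuffled[8:]

train1, valid1 = split()
train2, valid2 = split()
assert valid1 == valid2         # same seed -> same validation set every run

# Training-loop randomness uses other RNGs; unseeded, it differs per run:
noise1 = random.random()
noise2 = random.random()
# noise1 != noise2 with near-certainty
```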
I’m lucky enough to have a good old GTX 1080 Ti available locally (through a Razer Core Thunderbolt GPU enclosure :-)). Is there any reason not to use it? I see that everything assumes we’re in the cloud. I guess that’s just because we can’t assume everybody has a local GPU, but I just want to make sure there isn’t anything beyond that.
It was a good question! And I will add for those answering to perhaps pause and see if someone else has already responded, as so many answers to one question can be overwhelming.
Is there documentation for what things in fastai v2 do? For example, what does .fine_tune() do?
Can we get to know the setup Jeremy uses, i.e., the GPU, the RAM, etc.?
Is there a course.fast.ai-like website for course-v4 and the fastai v2 docs? Thanks in advance.
You can use it later. Even if you are a professional, it is going to take you quite a bit of time to set up your GPU before you can use it, and that time will not be spent learning deep learning. I’d recommend only setting it up at the end of the course, in seven weeks, as a personal side project.
The documentation (a work in progress) is here.
Sorry, I should have mentioned it. It’s already set up and running; I can run the Intro notebook with it.
How did Jeremy hide code output?
Folks, don’t run the notebooks in parallel; watch the lecture first!