So I understand that I need a GPU to effectively train models, but how about evaluating a trained model against a new image? Is it possible to do that using CPU only?
I ask because I would love to be able to deploy a trained fastai/pytorch model to a much cheaper hosting provider in a way that allowed me to classify new images without paying for a GPU.
Yes, the confusion matrix is for the validation set. Typically you’re given a dataset with labels, and you split that labeled data into a training set and a validation set. The validation set helps you test whether your model is overfitting. The test set (for example in a Kaggle competition) is a set of data for which you do not have the labels. You wouldn’t be able to produce a confusion matrix on data where you don’t know the actual labels.
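To see why the labels are essential, here’s a toy sketch (hypothetical data, plain Python rather than fastai) of building a confusion matrix by counting (actual, predicted) pairs — the counting step is impossible when `y_true` doesn’t exist:

```python
# Toy confusion matrix: tally (actual, predicted) pairs.
# With a validation set we know both y_true and y_pred; with an
# unlabeled test set, y_true simply does not exist.
from collections import Counter

y_true = ["cat", "cat", "dog", "dog", "dog"]   # known validation labels
y_pred = ["cat", "dog", "dog", "dog", "cat"]   # model predictions

# confusion[(actual, predicted)] -> count
confusion = Counter(zip(y_true, y_pred))

print(confusion[("cat", "cat")])  # correctly classified cats -> 1
print(confusion[("dog", "dog")])  # correctly classified dogs -> 2
```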
@rachel would it be possible to get an answer on that? It’s really puzzling me. You asked Jeremy, but as my question wasn’t very clear he didn’t really understand what I meant and just said that we’ll talk more about plot_top_loss in the next lecture.
Installation trouble: TL/DR: ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
Full account:
I had CUDA 9.1 and CUDNN 7.0 working fine with PyTorch on my Ubuntu 16.04 system, and have run with these before. But tonight when I installed Fast.ai, it seemed it wasn’t using the GPU :-(.
Ok, so I see I need CUDA 9.2 and CUDNN 7.1, so I upgraded to them by following these instructions.
Tried conda uninstall the above packages, and then reinstalling them. No good. I’m out of ideas. Does anyone have a suggestion? Thanks.
PS- nvcc works, nvidia-smi shows both my cards as working.
EDIT: Just noticed Fast.ai GitHub says one doesn’t need a separate install of CUDA, that it comes packaged with PyTorch now. Well, in that case the nightly PyTorch is indeed not using the GPU, despite my grabbing the GPU version.
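A quick diagnostic (assuming PyTorch imports at all) is to ask the wheel itself what it thinks: if `torch.cuda.is_available()` prints `False`, you either grabbed the CPU-only build or the NVIDIA driver library (`libcuda.so.1`, which ships with the driver, not with the packaged CUDA runtime) isn’t on the loader path:

```python
# Check which build of PyTorch is installed and whether it sees a GPU.
import torch

print(torch.__version__)            # build string; CPU-only wheels often say +cpu
print(torch.version.cuda)           # CUDA runtime the wheel was compiled against (None for CPU builds)
print(torch.cuda.is_available())    # False -> CPU wheel, or driver/libcuda mismatch

if torch.cuda.is_available():
    # Should list both cards if the driver is healthy
    print(torch.cuda.device_count(), torch.cuda.get_device_name(0))
```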
Some of that is going to just depend on how close your images are to the problem that was already solved. The more different the problem, the more images you will need. Try it with however many images you have and if you need to get more images, you will have the pipeline to feed more images in easily.
With a lot of problems, though, you should be able to build something decent with 50–100 images. The more you have, the better it will be.
I have the same question. Jeremy talked about mobile devices and said the clean way is to keep models on a server and get predictions from them. Is it because even getting a prediction may require a GPU?
I’ve been able to run inference on images on a free Heroku instance, so it’s totally doable. Pytorch even makes a CPU-only wheel (Python package) to keep your bundle size low.
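For the CPU-only deployment case, the key PyTorch piece is `map_location="cpu"` when loading weights that were saved on a GPU machine. A minimal sketch (the model and file name here are stand-ins, not the fastai export API, which varies by version):

```python
# CPU-only inference sketch: save weights, then reload them on a
# machine with no GPU. map_location="cpu" remaps any CUDA tensors
# in the checkpoint onto the CPU.
import torch
import torch.nn as nn

# Stand-in classifier (a real deployment would load your trained model)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))
torch.save(model.state_dict(), "export.pth")

# --- On the cheap CPU-only host ---
state = torch.load("export.pth", map_location="cpu")
model.load_state_dict(state)
model.eval()                               # inference mode

with torch.no_grad():                      # no gradients needed for inference
    x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image
    probs = torch.softmax(model(x), dim=1)

print(probs.shape)                         # one row of class probabilities
```

Pairing this with the CPU-only wheel keeps the deployed bundle well under typical free-tier size limits, since you skip the bundled CUDA libraries entirely.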