Since GPU instances are pretty pricey, is it ever a good idea to self-host a deep learning web app on your own computer, or at least the DL backend? I realize that hosting your entire website (i.e. www.yourwebsite.com) on your own local machine is just asking for security problems and trouble with your ISP.
But what if you offloaded just the heavy GPU-intensive parts of your app to your local machine? Say you're paying for a cheap AWS instance with no GPU that serves www.yourwebsite.com, and your app has a feature where users can train their own image classifiers. When a user is ready to train, the AWS instance opens an SSH connection to your local machine, which has the GPUs; the model is trained there and its predictions are sent back to the AWS frontend, where the user sees the results.
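For concreteness, here's a minimal sketch of what that SSH trigger could look like from the web server's side. Everything named here is hypothetical — the `gpu-box` SSH alias, the `train.py` script, and the dataset path are placeholders, and a real setup would also need key-based auth and some way to report training progress back:

```python
import shlex
import subprocess

def build_train_command(host, script, dataset):
    """Build the ssh command that kicks off training on the remote GPU box.

    `host`, `script`, and `dataset` are placeholders -- substitute your
    own SSH alias and paths.
    """
    remote = f"python {shlex.quote(script)} --data {shlex.quote(dataset)}"
    return ["ssh", host, remote]

def train_remotely(host, script, dataset):
    # Blocks until the remote training run finishes; a real app would
    # run this in a background worker and poll for completion instead.
    subprocess.run(build_train_command(host, script, dataset), check=True)

# The command the web server would execute:
cmd = build_train_command("gpu-box", "train.py", "/data/user123")
# -> ["ssh", "gpu-box", "python train.py --data /data/user123"]
```

Note that blocking a web request on a training run is a bad idea in practice — you'd normally enqueue the job and notify the user when it finishes.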
Is this an acceptable setup when you’re still prototyping an app and don’t need superb performance but also can’t afford to sink a lot of money into a GPU instance before knowing if your app will be viable in the future?
I am not entirely sure what you are trying to achieve. Are you trying to host a web app in which users can train their own models on your computer? Or do you want to host a web app that uses your trained model for inference?
If you want to make your nicely trained model available so that people can use it to predict stuff, then you will not necessarily need a GPU to do so. Actually, there was a nice post earlier showing how you can easily host a small classifier for free on Heroku.
Oh, the web app I'm trying to build will require automatic (unattended) retraining from time to time.
I mean, you can do what you’re suggesting, but there are a few more caveats I guess.
First of all, you would have to keep that computer on all the time. You would have to pay for its electricity (which is not minor if it's running GPUs and is online 24/7). You probably don't have a good connection (compared to what AWS offers). And you cannot scale easily (dunno if that is necessary…).
I mean, I agree, hosting a GPU on AWS 24/7 is super expensive. But how about launching AWS spot instances when necessary? The data can probably be shared with the cheap AWS instance running your website. You might also benefit from the fastec2 tool that Jeremy shared recently: build the website and data gathering on your cheap AWS instance, then use fastec2 to fire up a spot instance with a GPU when necessary, for a few cents per run, and you should end up with a reasonable cost I believe. I don't know how much GPU time you are expecting to need, but since you're planning to use your home GPU I assume it is not enormous.
Depending on how much GPU time you actually need, this might even be cheaper than the electricity for the home computer.
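A quick back-of-the-envelope comparison shows how the break-even depends on usage. Every number below is an illustrative assumption (the spot price, the wattages, the electricity rate, the monthly GPU-hours) — plug in your own rates before drawing any conclusion:

```python
# All figures are assumptions for illustration only.
SPOT_PRICE_PER_HR = 0.90   # assumed spot price for a single-GPU instance, $/hr
GPU_HOURS_PER_MONTH = 20   # assumed retraining workload per month

IDLE_WATTS = 100           # assumed 24/7 idle draw of an always-on home GPU box
TRAIN_WATTS = 350          # assumed draw while actually training
ELEC_PRICE_PER_KWH = 0.15  # assumed electricity rate, $/kWh

# Spot: you only pay while the instance is up.
spot_cost = SPOT_PRICE_PER_HR * GPU_HOURS_PER_MONTH

# Home box: you pay idle power all month plus the extra draw while training.
hours_in_month = 24 * 30
home_kwh = (IDLE_WATTS * (hours_in_month - GPU_HOURS_PER_MONTH)
            + TRAIN_WATTS * GPU_HOURS_PER_MONTH) / 1000
home_cost = home_kwh * ELEC_PRICE_PER_KWH

print(f"spot: ${spot_cost:.2f}/month, home electricity: ${home_cost:.2f}/month")
```

With these particular numbers the two end up in the same ballpark, and the home box doesn't account for hardware cost or the option of simply powering it off between runs — the point is just that the comparison is a few lines of arithmetic once you know your workload.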
Oh, thanks for the fastec2 suggestion… looks like that might take care of my GPU woes, since I won't need the GPU 24/7 but will need it on demand when a model has to be retrained in a reasonable amount of time.