How to remotely access the GPU in a friend's computer without interrupting him?

Hi,

The thing is, I don't think you can avoid interrupting him if he's using the PC while you're using the GPU for deep learning at the same time :frowning:
When I train a network on my PC I can sometimes barely scroll in a web browser, so it's definitely noticeable.
(Okay, I run my GPU at ~97% load most of the time, so yeah - guilty ^^)

From the PC config you posted, it has a server-like Threadripper CPU. If he only uses it like a server somewhere, not as a personal desktop, then maybe, just maybe, you could take the GPU plus a few CPU cores (say 4 of them) to feed data to the GPU while he keeps the other 8 cores for whatever he needs, and it might not be that noticeable - but I still have doubts about it.
And we haven't even talked about SSD and HDD usage yet - that's what really makes things noticeable when the PC touches them, and it's even more problematic than the CPU part.
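If you do end up sharing the machine, you can at least cap how many cores your data pipeline grabs. Here is a minimal PyTorch sketch - the 4 workers just mirror the split I described above, so treat the exact numbers as assumptions:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # Toy dataset purely for illustration.
    dataset = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))

    # num_workers=4 caps data loading at 4 worker processes, leaving the
    # other cores free; pin_memory speeds up host-to-GPU copies.
    loader = DataLoader(dataset, batch_size=32, num_workers=4, pin_memory=True)

    for x, y in loader:
        x = x.cuda(non_blocking=True)  # feed batches to the GPU
        y = y.cuda(non_blocking=True)
        # ... forward/backward pass would go here ...

if __name__ == "__main__":  # required for multi-worker loading on Windows
    main()
```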

I think you can try 2 other options in your case.

    1. Use his PC for DL when he doesn't use it, so not at the same time.
      If it's a server that always runs, you only need to know when he uses it so you don't interrupt him (see the sketch after this list).
    2. Use some entry-level cloud solution, which can be free,
      and don't use his PC at all.
      Like this:
      https://www.paperspace.com/pricing
      There is a free plan for practicing DL with fastai and Jupyter notebooks,
      and you can surely access it remotely from your laptop.
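For option 1, you could even automate the "only when he's not using it" part with a rough check for active login sessions before training starts. A sketch under the assumption that his sessions show up in `who` (graphical logins don't always, so it's only a heuristic), with a hypothetical train.py as the payload:

```python
import subprocess
import time

def someone_logged_in() -> bool:
    # `who` lists active login sessions; empty output means nobody is on.
    out = subprocess.run(["who"], capture_output=True, text=True).stdout
    return bool(out.strip())

# Wait until the machine looks free, then kick off training.
while someone_logged_in():
    time.sleep(600)  # check again in 10 minutes

print("Machine looks free, starting training...")
# subprocess.run(["python", "train.py"])  # hypothetical training script
```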

Hope it helps ^^

Okay, I understand, so you still want your own stuff - that's the spirit :slight_smile:
Plus the GPU is already there - so no turning back.

Well, I also think Linux is a better platform for DL than Windows. Win10 doesn't let you use the whole GPU: about 10% of GPU memory is reserved by Win10, leaving 90%, and then the 2nd hit is the DWM (Desktop Window Manager), which reserves about 10% of that 90%, so roughly 81% remains :slight_smile:
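If you want to see the effect yourself, PyTorch can report how much of the card's memory your process can actually grab (assuming a CUDA build of PyTorch):

```python
import torch

# mem_get_info returns (free, total) in bytes for the current device.
free, total = torch.cuda.mem_get_info()
print(f"Free:  {free / 1024**3:.2f} GiB")
print(f"Total: {total / 1024**3:.2f} GiB")
print(f"Usable fraction: {free / total:.0%}")
```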
On Win10 you also cannot fully use PyTorch (or other libs) in terms of performance, because of compilation difficulties; some things run 2-3x slower on Win10 compared to Linux.
Also, when you want to use Docker with NVIDIA GPU acceleration, you'll realize NVIDIA built that for Linux and there's no Windows support for it - OK, it's not your problem, but it was mine once ^^
But the list goes on…
On Linux you can monitor things better in the terminal, like `watch -n 5 nvidia-smi` or `watch -n 5 sensors`, and you can pipe outputs; on Windows that's harder or impossible in some cases.
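And if you prefer polling from Python instead of the terminal, nvidia-smi has a machine-readable query mode you can parse; this sketch mirrors the 5-second `watch` interval above (stop it with Ctrl-C):

```python
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total,temperature.gpu",
         "--format=csv,noheader,nounits"]

while True:
    # One line per GPU, e.g. "97, 7421, 8192, 74"
    for line in subprocess.run(QUERY, capture_output=True, text=True).stdout.splitlines():
        util, mem_used, mem_total, temp = line.split(", ")
        print(f"GPU util {util}% | mem {mem_used}/{mem_total} MiB | {temp} C")
    time.sleep(5)
```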
Linux is more like Mac; Windows is a different beast. But that doesn't mean you can't use Win10 for DL - you can, it just sometimes needs more workarounds and can be less performant, and I wanted you to know that. Your friend will watch Netflix on Win10 anyway, so no need to disturb him with Linux if he doesn't want it.

So when you have 2 GPUs, of course you can use both if the motherboard has 2 PCIe slots, and you can dedicate one of them to DL. My suggestion: put the stronger GPU in the x16 slot, because the other slot is more than likely only x8, so you get the full memory bandwidth for DL, and the older GPU in the x8 slot handles Netflix and web browsing :slight_smile:
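To actually pin training to the stronger card, the usual knob is the CUDA_VISIBLE_DEVICES environment variable, or picking the device explicitly in PyTorch. The index 0 here is an assumption - check nvidia-smi to see which index is which card:

```python
import os

# Option A: hide every GPU except the one you want; set BEFORE importing torch.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

# Option B: pick the device explicitly.
device = torch.device("cuda:0")
print(torch.cuda.get_device_name(device))  # verify it's the stronger card

model = torch.nn.Linear(10, 2).to(device)  # toy model placed on that GPU
x = torch.randn(4, 10, device=device)
print(model(x).shape)
```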

It can work, so gogo :slight_smile:

Btw I didn’t want to discourage you, I want you to succeed and you will :slight_smile:
I just gave you the full picture of what I experienced when I tried some iterations before you - and I also don't have money, so we're in the same boat in this regard :slight_smile:
But I also wanted to do DL, and when I win some competition I will buy my own stuff ^^