For those who run their own AI box, or want to build one

Off the top of my head, python-qt4 sounds like a UI toolkit library (Python bindings for Qt 4), most likely not needed for fastai.
Perhaps it’s pulled in by some graphical libs, hard to tell, but I doubt fastai needs it directly.

When in doubt like this, I’d say just remove/comment those out and see if anything throws an error down the road. Dependencies can sometimes be tricky, but in this case it’s probably just accumulation over time. cc @mike.moloch!

1 Like

Sorry, yes that’s what I meant (the output tab). I’m an old dog from the sysadmin world and can’t help referring to things in the old ways even though it’s right in front of me. :slight_smile:

So, I pasted the file (edited for my needs), gave it the tags, then hit build image. It goes on for a bit, then just flashes a cryptic, generic message at the top right (“can’t build” or something; no mention of python-qt4). The output tab doesn’t become active.

It’s no big deal and probably related to my own setup; I was able to get the error message about pyqt4 and will try again after taking it out.

Thanks!

For those who want to build a DL box running Ubuntu, here’s a guide that I found super helpful (just ignore the RTX 3090 part, as if we could afford that monster lol). The tutorial covers the full CUDA toolkit + cuDNN installation. I just followed it to bring my old PC with a GTX 1080 back to life to train some models.

2 Likes

Hi Jeremy
Hope you are well. I like to use my own dedicated environments, which have worked on previous versions of the course. As a considerable amount of time has passed since the last course, I would like information on the software versions involved in the new one, so I can research beforehand whether this is still feasible with the equipment I have. Could you point me in that direction please? Perhaps, as before, there is a topic for this. It’s most unlikely my equipment won’t be up to it: I currently have Ubuntu 18.04 with a 1080 Ti GPU.

regards
Roger

We’ll be using the latest versions of fastai and PyTorch. If you pip install fastbook you’ll get what you need.
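
A quick sanity check once that install finishes might look like this (a minimal sketch; it only assumes that pip install fastbook pulled in fastai and torch):

```python
# Minimal sanity check after `pip install fastbook`
import fastai
import torch

print("fastai", fastai.__version__)
print("torch", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # True once the NVIDIA driver stack is working
```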

The 1080 Ti will work just fine. I just tested it out on Ubuntu 20.04. The card itself is 5 years old, but still performs decently. I will be making a post about it here soon. In short, it takes about 2x as long as a 3080 Ti to train models for image classification (PETS at ~25 sec per epoch) and text (IMDB at 4:52 for the fine-tuning w/ bs=32).

Did you use the same batch size for the comparison? Just curious.

Great, thanks for that. I am still on 18.04, but it’s now time to update to 20.04; I’ll get there by the time the course starts. I have posted here before about personal setups with the 1080 Ti; I think there is a separate category for that, if I remember correctly.

Yes, bs was 32 on both. I have never been able to get the 1080 Ti to do bs=64 on the IMDB example. I would bet that my 3080 Ti could handle that example at bs=64 if the card weren’t also handling graphics output to the monitor.
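
For context, the PETS timing above presumably comes from something like the standard fastai cats-vs-dogs example from the book. Here’s a rough sketch of that kind of run (resnet34 and the transforms are my assumptions; only bs=32 is stated in this thread):

```python
from fastai.vision.all import *

def is_cat(name): return name[0].isupper()  # PETS convention: cat breeds have capitalised filenames

path = untar_data(URLs.PETS) / "images"
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224), bs=32)  # bs=32, as in the comparison above

learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)  # the per-epoch wall time is what's being compared between cards
```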

2 Likes

Hi @RogerS49, you should be fine with a 1080 Ti. I have a 10-year-old Dell T3600 with a 1070 Ti and I get “reasonable” performance out of it. By reasonable I mean it’s about as good as the Google Colab free version :smiley: but that’s good enough for the purposes of the course, I think.

I also upgraded my Ubuntu from 18.xx to 20.xx and had to do a fresh install of the whole stack, which in a way was a good thing because I hadn’t done that in a while. I did end up going the NVIDIA and Paperspace fastbook container route, though, because I just couldn’t get it to install natively on my machine.

All the best.

1 Like

You should even be able to do the course on a 1080!

My advice is to stick to Linux (I’m still running 18.04 and it’s solid) and you should be in more than good shape doing the course on your 1080 Ti. One of the great lessons I got from fastai is how to do more with less… as a matter of fact, I built my open source library and compete on Kaggle using a single 1080 Ti myself.

6 Likes

Jeremy, I think I should have stated my question better. I have Anaconda and use various environments with different Python versions for different things. I wish to create a new env for this new version of the course; what versions are currently used to develop fastai?

thanks for your time

Edit
My new environment for this will be PyTorch 1.8 with Python 3.7 and CUDA 11.

I agree, but 18.04 support concludes next year and 22.04, I believe, comes out this year, so I thought this a good time to bite the bullet and get 20.04, which is supported till 2025. Plus the fact that you can upgrade straight from 18.04 with the software updater hopefully simplifies the process.

thanks for your time

EDIT
The best bit about this type of upgrade is being able to use editors and terminals alongside the updater when issues occur.

1 Like

These are pretty old versions of python and pytorch - I’d strongly recommend using the latest.

2 Likes

22.04 is out as of today (4/21)

3 Likes

That’s awesome. Did you try ubuntu 22.04 LTS?

I have upgraded from 18.04 LTS to 20.04 LTS via the Ubuntu Software Update app (the one with the circular motif); check the release notes on the Ubuntu web site. I backed up my data just in case. Everything went smoothly; it took about half a working day. Afterwards all my previously working applications still worked as before, including the pathways through the 1080 Ti GPU. I am glad I did it.

Please refer to the EDIT I did in my reply to ‘wgpubs’

1 Like

If anyone would like to make the sacrifice for the team and try 22.04 to confirm whether CUDA + everything works without pain, we would all be thankful :grin: :pray:
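
For whoever does try it, a smoke test along these lines would confirm the driver + CUDA + PyTorch stack end to end (just a sketch assuming a working PyTorch install; the matrix size is arbitrary):

```python
# GPU smoke test: checks the driver, the CUDA build of PyTorch, and actual compute
import torch

print("torch", torch.__version__, "built for CUDA", torch.version.cuda)
assert torch.cuda.is_available(), "PyTorch cannot see the GPU (driver/toolkit problem)"
print("device:", torch.cuda.get_device_name(0))

x = torch.randn(2048, 2048, device="cuda")
print("matmul OK, checksum:", (x @ x).sum().item())
```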

6 Likes

I tend to run fastai (+ other CUDA-related libs, etc.) & pretty much all my projects via Docker containers (with NVIDIA container support). I know that the course recommends NOT spending much time on “fighting” local setup, and if you’re starting out then I’d HIGHLY recommend following that advice.

However, if anybody wants Docker + nvidia-container-toolkit help/discussion in terms of Dockerfiles/Compose etc., then I can try to share what works for me (so far). Do note that I’m on a slightly non-standard Linux distro (NixOS), but the majority of it should work on any Linux host when the CUDA driver versions & libraries are matched.

Once again though, I would recommend NOT looking into this now unless you’re fairly comfortable with Docker already. The options listed at the top of this thread will make you much more productive.

I’ve documented the steps that work for me here (Docker + nvidia-container-toolkit + optionally NixOS).

8 Likes

I’ve been using Pop OS 22.04 (beta) for a while as a production machine. It will be automatically upgraded to the full-fledged 22.04 release within a few days.
Everything works (and has worked) without any hassle: Nvidia drivers, miniconda, docker, etc.

Why Pop and not Ubuntu? Because it’s a bit more stable (just my personal experience, of course) and polished. Ubuntu flavors are all ok, but there are always minor glitches.

One additional point regarding Alder Lake graphics: if you have a 12th-gen Intel CPU like me and want to use the IGP so as to leave the Nvidia GPU(s) alone, you need kernel 5.16 or newer.
Ubuntu 22.04 ships 5.15, while Pop gets 5.16.
Of course you can always install an Ubuntu mainline kernel (up to the 5.18 RCs, from bare .debs), but then the burden of periodically updating the kernel will be on you.
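
For anyone wanting to check which kernel they’re running before worrying about this, something like the following works (the 5.16 threshold is simply the one quoted above):

```python
# Print the running kernel and compare against the 5.16 threshold mentioned above
import platform

release = platform.release()  # e.g. "5.15.0-27-generic"
major, minor = map(int, release.split(".")[:2])
print(release, "->", "new enough for the Alder Lake IGP" if (major, minor) >= (5, 16) else "older than 5.16")
```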

6 Likes