How many sessions will there be? Are there any tentative dates besides the first session’s date mentioned above? Will these sessions also be recorded? Approximately how long will each session be?
Just trying to plan my schedule… It will be 3am in South Africa!
Wow sorry about the terrible timing for you! I’m thinking this might be Tues-Fri Aus time, but it just depends how things go really. They’ll all be recorded. I’m thinking that we might go for an hour at a time – but if my daughter is OK with it they might be longer.
If you’re only able to watch the recordings, you’d be welcome to ask questions or make requests about the previous day’s lesson here and we could always just answer them for you the next day…
I’ll be doing the walkthrus on Paperspace, so everything I describe will apply there directly. Colab definitely isn’t a good choice for the walkthrus, but using your own machine is fine, as long as you’ve got everything installed and working already.
I guess I’m not trying the fancy stuff. I just logged in, instantiated a fast.ai Paperspace container, and chose a free GPU (it’s a Quadro P4000, 8GB, but that’s about the same as my local 1070ti… so, good enough!). Then it started the workspace with all the fastbook notebooks already populated.
The only thing I find a little bit annoying is that I can’t drop down to a bash shell. When I click on it, it says I need to upgrade. Other than that, it was pretty seamless.
Thanks Jeremy, I am looking forward to this part of the series. What you have outlined seems very interesting. Could we please also have some introduction on how to read libraries/documentation?
I am reading through the course material and Radek’s book. I feel my next step is to get more comfortable navigating through libraries, but at this stage that seems very intimidating. Thanks
Yeah, I probably should have done that. I started an instance and was able to access it via the browser, but then I noticed it didn’t have:
It was the data science stack image. I couldn’t quite figure out how I would go about, for instance, pulling repos from GitHub, or using a custom Docker image… There are some instructions about pushing to a registry or something, but it all seemed fairly involved…
I think it is more a situation where every new environment requires a bit of a time investment; there is a bit of friction that feels painful. You also have to invest the time, and generally my answer right now when it comes to new tools is: “no, thank you, I’d rather not spend the time to figure out your crazy (new) way of doing things that I can already do on my hardware.”
But maybe learning how to use Paperspace is worth the investment. If I didn’t mind spending $100–$200 a month, something like Paperspace would be of no use to me, as I could just use GCP for everything.
But being part of a class will definitely speed up the process of learning the ins and outs of using the platform, plus hacking on things together can be fun, so definitely looking forward to that!
Yeah, I noticed if I hit the “advanced” slider it started asking for a docker image even though I had already selected the FASTAI image. So I just refreshed that page and selected the fastai+free gpu option without touching the advanced button and it fired up a notebook env with fastbook/fastai already installed. It’s a p4000 mind you so it’s rather slow. I probably won’t be using it for tonight’s walkthru but just wanted to fire it up and see what happens. I use a paperspace docker image on my local machine anyway so it’s pretty close to what they have on paperspace and my 1070ti is still slightly faster than the free GPU there.
Yup, that’s it. Essentially, how to have one conda env but multiple Python versions. You can have one notebook running Python 3.5 and another running Python 3.7. This is very rarely useful, but in the scenario where you want to run some old Python code, there is a solution that doesn’t require setting up a new environment.
(Also, if you have two envs and want to work on the code at the same time, or execute files in succession – for instance, to process some data in one notebook and continue in the other – it gets painful… this is just a cleaner way to achieve it, though it is a super rare case that this might be needed.)
rdkit is just some random repo I wanted to use for a Kaggle competition, not sure I will use it ever again in my life
In fact, this is such a niche case I don’t even think it is worth covering this in the walk-thrus, but probably good to know this solution exists if you ever encounter a situation where it might be helpful (googling for this is not that easy).
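For anyone who does hit this situation, one common way to get multiple Python versions selectable from the same Jupyter workspace is to register each interpreter as its own kernel. A rough sketch (the env names `py35` and `py37` are just examples, not anything from the course):

```shell
# Create one conda env per Python version you need, with ipykernel in each.
conda create -n py35 python=3.5 ipykernel -y
conda create -n py37 python=3.7 ipykernel -y

# Register each env's interpreter as a named Jupyter kernel.
# After this, both versions appear in the notebook's kernel-picker menu.
conda run -n py35 python -m ipykernel install --user --name py35 --display-name "Python 3.5"
conda run -n py37 python -m ipykernel install --user --name py37 --display-name "Python 3.7"
```

This way you can switch kernels per notebook without activating a different env each time – which is what makes the “process data in one notebook, continue in the other” workflow less painful.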
I’ve used this feature especially when I’m taking a course which has its own Python version and interdependent packages. So you create a new conda env at that Python version and do a pip install with the requirements.txt file provided by the course/hackathon.
For example, if you were to clone Jake Vanderplas’ book repo from GitHub, it uses an older version of Python and packages. Conda makes it very easy to create an environment at the right python/pip/package level so you can run those notebooks without the issues you’d hit if you used the latest versions of packages.
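The pin-an-env-per-course workflow described above looks something like this (env name and Python version are illustrative – you’d match whatever the repo’s docs or requirements.txt specify):

```shell
# Create an isolated env pinned to the older Python the repo expects.
conda create -n oldcourse python=3.5 -y
conda activate oldcourse

# Install the exact package versions the course/hackathon provides.
pip install -r requirements.txt
```

Because everything lives in its own env, the rest of your machine keeps its up-to-date packages, and deleting the env (`conda env remove -n oldcourse`) cleans it all up when the course is done.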