Part 1 FAQ

(ecdrid) #42

The way I use the latest version is this:

Keep a git clone of the library itself.

Make sure you have installed all the necessary modules. (Installing a couple of the big ones pulls in most of the dependencies; I remember explicitly installing bcolz, graphviz, spacy, etc.)

Keep all your notebooks in the same directory as Jeremy's (or work from wherever you want by creating a symlink in that directory).

Then, to use the bleeding-edge version, I just run git pull in the cloned repo.
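The steps above can be sketched as follows (paths here are illustrative, and the clone step is shown as a comment since it needs network access):

```shell
# One-time setup (commented out; requires network):
#   git clone ~/fastai

# Simulate the layout locally so the symlink step is concrete:
mkdir -p /tmp/fastai-faq-demo/fastai/fastai /tmp/fastai-faq-demo/my-notebooks
# Symlink the inner package into your own notebook directory so that
# `import fastai` in a notebook resolves against the cloned source:
ln -sfn /tmp/fastai-faq-demo/fastai/fastai /tmp/fastai-faq-demo/my-notebooks/fastai
readlink /tmp/fastai-faq-demo/my-notebooks/fastai

# Later, to get the bleeding edge:  cd ~/fastai && git pull
```

Because the symlink points at the working tree of the clone, a `git pull` updates what the notebooks import with no reinstall step.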


I submitted my request to Paperspace on May 22 and waited and waited, but they did not get back to me.
Paperspace sent me a mail on June 13 saying my request had been approved. I never expected to wait more than half a month; I waited so long that my motivation was gone. Please be careful: the waiting time can be several weeks.

(wenrei) #44

Hi, should we choose the AWS Deep Learning AMI (Ubuntu) as the AMI instead of creating a new EC2 instance from scratch? I am using an AWS Educate Starter account.

(Shaurya Goel) #45

Does anybody know how to get a free GPU service online (other than Google Colab)?

(Rishabh Agrahari) #46

I am also facing the same problem. I created an Entropay account and tried to pay with my debit card. I do receive the OTP on my phone, but after entering it and returning to the Entropay interface I get an error saying "We were unable to top up your Entropay card from this credit or debit card. Please check that your card details (including CVV and expiry) were entered correctly and that you have sufficient funds for this transaction on this credit or debit card."
I do have sufficient funds in my account, and I have checked all my details and they are correct.

Could you please help me with this? How did you pay for your Entropay top-up?

(Abhisek Panigrahi) #47

Hi @jeremy. I was looking into the script we get from the fastai setup. How did source activate fastai work, when no environment called fastai had been created beforehand?
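For context: `conda env create -f environment.yml` takes the environment's name from the `name:` field of the file, so an environment can appear without any explicit `conda create -n fastai` command. A minimal sketch of such a file (the field values here are illustrative, not the repo's actual dependency list):

```yaml
# environment.yml (sketch) --- running `conda env create -f environment.yml`
# creates an environment with this name, which is why
# `source activate fastai` works afterwards.
name: fastai
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.6
  - numpy
  - pytorch
```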

(William Horton) #48

You can run the lessons on Kaggle Kernels, which now have free GPU support.

(karthik palepu) #49

Can someone help me get a free GPU? I tried Google Cloud, but there are no free GPUs. I also tried Google Colab, but it is busy every time and has limited memory. I even tried Snark, but that offer is over. Could anyone point me to a currently available, good GPU option?

(William Horton) #50

I would suggest taking a look at Kaggle Kernels.

(karthik palepu) #52

Thanks @wdhorton for the suggestions. After a couple of tweaks, Google Colab worked better for me.


I'm having a disk-space problem on Paperspace. While I was downloading the planet data from Part 2, Paperspace ran out of disk space, so I tried to delete the big files I had downloaded through the Jupyter notebook, but a strange error appeared and the file would not delete.


If I refresh the page, the file is deleted, but no matter what I delete, it keeps saying that there is no space left on the device, so I cannot proceed with anything.
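One hedged guess at the cause, based on how Jupyter's delete behaves by default: deleting a file from the notebook interface moves it to a trash directory (on Linux, usually ~/.local/share/Trash) rather than freeing the space, so the disk stays full until the trash is emptied. A local sketch of the effect:

```shell
# Simulate: "deleting" by moving a file to a trash dir does not free space.
DEMO=/tmp/disk-demo
mkdir -p "$DEMO/data" "$DEMO/trash"
dd if=/dev/zero of="$DEMO/data/big.bin" bs=1024 count=1024 2>/dev/null
mv "$DEMO/data/big.bin" "$DEMO/trash/"   # roughly what a notebook "delete" does
du -s "$DEMO" | awk '{print $1}'         # the space is still in use
rm -rf "$DEMO/trash"                     # emptying the trash actually frees it
# On a real machine, try:  rm -rf ~/.local/share/Trash/*
```

If the trash directory is on the same full disk, emptying it from a terminal should reclaim the space and let the delete errors stop.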



The Google Colab method worked for me, but the Colab instance dies very quickly: the connection to the Clouderizer machine is lost in under 10 minutes. I am wondering whether this is related to my internet connection (around 30 Mbps) or something else.

If anyone else has solved this problem, I would appreciate the help.


(Carson David Schubert) #55

Hi all,

I set up an Ubuntu 18.04 system on my laptop (GTX 960M, yeah, it's not super powerful) and got everything running after having to build PyTorch from source to make it work with my older GPU. However, when I run the first training code in the Lesson 1 notebook, I get this stack trace:

TypeError                                 Traceback (most recent call last)
<ipython-input-12-676345a6c308> in <module>()
      2 data = ImageClassifierData.from_paths(PATH, bs=8, tfms=tfms_from_model(arch, sz))
      3 learn = ConvLearner.pretrained(arch, data, precompute=True)
----> 4, 2)

~/fastai/fastai/ in fit(self, lrs, n_cycle, wds, **kwargs)
    300         self.sched = None
    301         layer_opt = self.get_layer_opt(lrs, wds)
--> 302         return self.fit_gen(self.model,, layer_opt, n_cycle, **kwargs)
    304     def warm_up(self, lr, wds=None):

~/fastai/fastai/ in fit_gen(self, model, data, layer_opt, n_cycle, cycle_len, cycle_mult, cycle_save_name, best_save_name, use_clr, use_clr_beta, metrics, callbacks, use_wd_sched, norm_wds, wds_sched_mult, use_swa, swa_start, swa_eval_freq, **kwargs)
    247             metrics=metrics, callbacks=callbacks, reg_fn=self.reg_fn, clip=self.clip, fp16=self.fp16,
    248             swa_model=self.swa_model if use_swa else None, swa_start=swa_start,
--> 249             swa_eval_freq=swa_eval_freq, **kwargs)
    251     def get_layer_groups(self): return self.models.get_layer_groups()

~/fastai/fastai/ in fit(model, data, n_epochs, opt, crit, metrics, callbacks, stepper, swa_model, swa_start, swa_eval_freq, visualize, **kwargs)
    161         if not all_val:
--> 162             vals = validate(model_stepper, cur_data.val_dl, metrics, epoch, seq_first=seq_first, validate_skip = validate_skip)
    163             stop=False
    164             for cb in callbacks: stop = stop or cb.on_epoch_end(vals)

~/fastai/fastai/ in validate(stepper, dl, metrics, epoch, seq_first, validate_skip)
    240             loss.append(to_np(l))
    241             res.append([f(datafy(preds), datafy(y)) for f in metrics])
--> 242     return [np.average(loss, 0, weights=batch_cnts)] + list(np.average(np.stack(res), 0, weights=batch_cnts))
    244 def get_prediction(x):

~/anaconda3/envs/fastai/lib/python3.6/site-packages/numpy/lib/ in average(a, axis, weights, returned)
    381             wgt = wgt.swapaxes(-1, axis)
--> 383         scl = wgt.sum(axis=axis, dtype=result_dtype)
    384         if np.any(scl == 0.0):
    385             raise ZeroDivisionError(

~/anaconda3/envs/fastai/lib/python3.6/site-packages/numpy/core/ in _sum(a, axis, dtype, out, keepdims, initial)
     34 def _sum(a, axis=None, dtype=None, out=None, keepdims=False,
     35          initial=_NoValue):
---> 36     return umr_sum(a, axis, dtype, out, keepdims, initial)
     38 def _prod(a, axis=None, dtype=None, out=None, keepdims=False,

TypeError: No loop matching the specified signature and casting
was found for ufunc add

This seems to be an error in the fastai library, though I am not sure. I have looked extensively online for a solution to this problem, to no avail.

Any help would be greatly appreciated!

(Ankur Bansal) #56

Hi, I just started learning deep learning. I am watching the lecture series and bought a subscription plan for Paperspace. I followed every step mentioned in the wiki and reached the Jupyter notebook step. Then I opened the file courses/dl1/lesson1.ipynb. However, it is not complete: I only see content up to the line torch.cuda.is_available(). After that there is none of the content we saw in the lecture. I am attaching a screenshot.

I also ran git pull in the fastai folder, so my repository is up to date, but I still don't know why I am not getting the full file.

Please help. Thanks in advance.
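One possible explanation (an assumption, not a diagnosis): running a notebook modifies the .ipynb file, and git pull will not overwrite a locally modified file, so the working copy can stay stale even when the repo reports itself up to date. A sketch of restoring the committed version, demonstrated on a throwaway repo since it discards local changes:

```shell
# Throwaway repo to demonstrate restoring a modified file:
R=/tmp/git-restore-demo
rm -rf "$R" && mkdir -p "$R" && cd "$R"
git init -q .
git config "demo" && git config "demo@example.com"
echo "full notebook content" > lesson1.ipynb
git add . && git commit -qm "initial"
echo "truncated" > lesson1.ipynb   # simulate a clobbered working copy
git checkout -- lesson1.ipynb      # discard local changes to that file
cat lesson1.ipynb                  # back to the committed version
# In the real repo (only if you don't need your local edits):
#   cd ~/fastai && git stash && git pull
```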

(Niall) #57

FYI - they replied to me in three days.


I want to use an IDE (such as PyCharm or Visual Studio Code) together with Jupyter notebooks.

I saw that in his lecture, Jeremy could check a function definition in his IDE from Jupyter. Can anyone tell me how to set that up?

Thank you so much.


Hi. I have access to GPUs through my university's supercomputer. Has anyone tried installing the fastai material on a remote cluster? Thank you.

(Kaspar Lund) #60

watch lesson 1

(Nafiseh Salmaniniyasar) #61

After starting a Paperspace machine and entering the password, the curl | bash command doesn't work.
Am I missing anything?

  1. Tab -> get the list of methods.
  2. Shift + Tab -> get the argument-list help.
  3. Shift + Tab, twice -> documentation.
  4. Shift + Tab, three times -> documentation in a new window.
  5. ?learn.exp() -> result is similar to pressing Shift + Tab three times (item 4).
  6. ??learn.exp() -> opens the source code.
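The source lookup that ?? performs in a notebook can also be reproduced programmatically with Python's inspect module, which is essentially what an IDE's "go to definition" does (json.loads is used here only as a stand-in for any pure-Python function):

```shell
python3 - <<'PY'
import inspect
import json

# inspect.getsource returns the code that `??obj` shows in Jupyter:
print(inspect.getsource(json.loads).splitlines()[0])

# inspect.getsourcefile returns the file an IDE would open:
print(inspect.getsourcefile(json.loads))
PY
```

Note that this only works for objects implemented in Python; C extensions raise a TypeError from getsource.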