Edit: The solution to my errors below was to ensure ipython and torch were at their latest versions. Python 3.7.5 was needed to fix some other errors. Another really weird source of problems was naming a file "code.py": `pdb` (which fastai imports, as the traceback below shows) does an `import code` internally, so my local code.py shadowed the stdlib `code` module and got executed. For example, I had a temporary testing file called temp.py with two lines (import fastai; from fastai.text import *;), and running it would immediately execute code.py even though nothing referenced it at all.
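For anyone hitting the same thing, here is a quick diagnostic I found useful (my own check, not from the tutorial): ask Python where `import code` actually resolves to. If it points into your project directory rather than the Python installation, a local code.py is shadowing the stdlib module that `pdb` depends on.

```python
import importlib.util

# Where would `import code` resolve to? In a healthy environment this
# prints a path inside the Python installation (e.g. .../lib/python3.x/code.py).
# If it prints a file in your working directory, that file is shadowing
# the stdlib `code` module that pdb imports.
spec = importlib.util.find_spec("code")
print(spec.origin)
```

Renaming my code.py to something else made the shadowing go away.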
Now I’m just hoping for a solution to run long computations that would take days.
tldr; If anyone could suggest how to run a long training process for a fastai project, that would be great. Specifically, I would like to be able to run the code from this tutorial without keeping a constant SSH connection open from my computer to the VM: Tutorial.
I'm having a load of problems getting GCP to work with fastai. At the moment, the only thing I can get working is Jupyter notebooks, which are great for testing. Unfortunately this requires a constant SSH connection to the VM instance; otherwise progress is lost. I have a BERT NLP training process that is estimated to take 2-3 days, and I can't keep my computer connected for that long.
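To make the question concrete, this is the kind of setup I'm after, sketched with `nohup` (I don't know if this is the right tool for a fastai run, and `train.py` is just a placeholder name for a script containing the training loop):

```shell
# Detach the long-running job from the terminal so it survives SSH disconnects.
# train.py is a placeholder for the script with the training loop.
nohup python -u train.py > train.log 2>&1 &
echo $! > train.pid            # remember the PID to check on / kill the run later

# Alternative sketch: tmux keeps a reattachable interactive session
#   tmux new -s train          # start session, run `python train.py`, detach with Ctrl-b d
#   tmux attach -t train       # reattach after reconnecting over SSH
```

Is something like this the standard way, or is there a better-supported approach for GCP deep learning VMs?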
I want to run a long training job on GCP (or elsewhere, but preferably GCP). I have the code copied and pasted from this example of using BERT to classify toxic comments: Tutorial.
Things that went wrong (solved in the Edit above):
None of the fastai imports work; the biggest issue is the error below, which appears when I simply run the Python code from the above tutorial with `python code.py`. I tried many suggested fixes with no luck.
```
Traceback (most recent call last):
  File "code.py", line 3, in <module>
    from fastai.text import *
  File "/home/bluteaur/.conda/envs/fastai/lib/python3.6/site-packages/fastai/text/__init__.py", line 1, in <module>
    from ... import basics
  File "/home/bluteaur/.conda/envs/fastai/lib/python3.6/site-packages/fastai/basics.py", line 1, in <module>
    from .basic_train import *
  File "/home/bluteaur/.conda/envs/fastai/lib/python3.6/site-packages/fastai/basic_train.py", line 2, in <module>
    from .torch_core import *
  File "/home/bluteaur/.conda/envs/fastai/lib/python3.6/site-packages/fastai/torch_core.py", line 2, in <module>
    from .imports.torch import *
  File "/home/bluteaur/.conda/envs/fastai/lib/python3.6/site-packages/fastai/imports/__init__.py", line 1, in <module>
    from .core import *
  File "/home/bluteaur/.conda/envs/fastai/lib/python3.6/site-packages/fastai/imports/core.py", line 17, in <module>
    from pdb import set_trace
  File "/home/bluteaur/.conda/envs/fastai/lib/python3.6/pdb.py", line 76, in <module>
  File "/home/bluteaur/code.py", line 4, in <module>
    from fastai.metrics import error_rate
  File "/home/bluteaur/.conda/envs/fastai/lib/python3.6/site-packages/fastai/metrics.py", line 3, in <module>
    from .callback import *
  File "/home/bluteaur/.conda/envs/fastai/lib/python3.6/site-packages/fastai/callback.py", line 2, in <module>
    from .basic_data import *
  File "/home/bluteaur/.conda/envs/fastai/lib/python3.6/site-packages/fastai/basic_data.py", line 5, in <module>
    DatasetType = Enum('DatasetType', 'Train Valid Test Single Fix')
NameError: name 'Enum' is not defined
```
My instance has these settings:
```
gcloud compute instances create $INSTANCE_NAME \
    --zone=$ZONE \
    --image-family=$IMAGE_FAMILY \
    --image-project=deeplearning-platform-release \
    --maintenance-policy=TERMINATE \
    --accelerator="type=nvidia-tesla-p100,count=1" \
    --machine-type=$INSTANCE_TYPE \
    --boot-disk-size=200GB \
    --metadata="install-nvidia-driver=True"
```
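Assuming a detached run is the right approach, I imagine checking on progress later would look something like this (file names are placeholders; the log file is whatever the detached job writes to):

```shell
# Reconnect to the instance from the local machine...
gcloud compute ssh $INSTANCE_NAME --zone=$ZONE
# ...then, on the VM, follow the output of the still-running training job:
tail -f train.log
```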
Any help would be great, thanks