Lesson 4: AssertionError: Torch not compiled with CUDA enabled

Hi, I'm trying to re-run lesson 4's code on macOS High Sierra (version 10.13.4).

I want to use the CPU to run this…

In this part

FILES = dict(train=TRN_PATH, validation=VAL_PATH, test=VAL_PATH)
md = LanguageModelData.from_text_files(PATH, TEXT, **FILES, bs=bs, bptt=bptt, min_freq=10)

An error arises

AssertionError Traceback (most recent call last)
in ()
1 FILES = dict(train=TRN_PATH, validation=VAL_PATH, test=VAL_PATH)
----> 2 md = LanguageModelData.from_text_files(PATH, TEXT, **FILES, bs=bs, bptt=bptt, min_freq=10)

/Users/apple/anaconda3/lib/python3.6/site-packages/fastai/nlp.py in from_text_files(cls, path, field, train, validation, test, bs, bptt, **kwargs)
241 path, text_field=field, train=train, validation=validation, test=test)
242
--> 243 return cls(path, field, trn_ds, val_ds, test_ds, bs, bptt, **kwargs)
244
245

/Users/apple/anaconda3/lib/python3.6/site-packages/fastai/nlp.py in __init__(self, path, field, trn_ds, val_ds, test_ds, bs, bptt, **kwargs)
220
221 self.trn_dl, self.val_dl, self.test_dl = [ LanguageModelLoader(ds, bs, bptt)
--> 222 for ds in (self.trn_ds, self.val_ds, self.test_ds) ]
223
224 def get_model(self, opt_fn, emb_sz, n_hid, n_layers, **kwargs):

/Users/apple/anaconda3/lib/python3.6/site-packages/fastai/nlp.py in <listcomp>(.0)
220
221 self.trn_dl, self.val_dl, self.test_dl = [ LanguageModelLoader(ds, bs, bptt)
--> 222 for ds in (self.trn_ds, self.val_ds, self.test_ds) ]
223
224 def get_model(self, opt_fn, emb_sz, n_hid, n_layers, **kwargs):

/Users/apple/anaconda3/lib/python3.6/site-packages/fastai/nlp.py in __init__(self, ds, bs, bptt)
132 text = sum([o.text for o in ds], [])
133 fld = ds.fields['text']
--> 134 nums = fld.numericalize([text])
135 self.data = self.batchify(nums)
136 self.i,self.iter = 0,0

/Users/apple/anaconda3/lib/python3.6/site-packages/torchtext/data/field.py in numericalize(self, arr, device, train)
314 arr = arr.contiguous()
315 else:
--> 316 arr = arr.cuda(device)
317 if self.include_lengths:
318 lengths = lengths.cuda(device)

/Users/apple/anaconda3/lib/python3.6/site-packages/torch/_utils.py in cuda(self, device, async)
67 else:
68 new_type = getattr(torch.cuda, self.class.name)
—> 69 return new_type(self.size()).copy
(self, async)
70
71

/Users/apple/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py in _lazy_new(cls, *args, **kwargs)
356 @staticmethod
357 def _lazy_new(cls, *args, **kwargs):
--> 358 _lazy_init()
359 # We need this method only for lazy init, so we can remove it
360 del _CudaBase.__new__

/Users/apple/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py in _lazy_init()
118 raise RuntimeError(
119 "Cannot re-initialize CUDA in forked subprocess. " + msg)
--> 120 _check_driver()
121 torch._C._cuda_init()
122 torch._C._cuda_sparse_init()

/Users/apple/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py in _check_driver()
53 def _check_driver():
54 if not hasattr(torch._C, '_cuda_isDriverSufficient'):
---> 55 raise AssertionError("Torch not compiled with CUDA enabled")
56 if not torch._C._cuda_isDriverSufficient():
57 if torch._C._cuda_getDriverVersion() == 0:

AssertionError: Torch not compiled with CUDA enabled

Could anyone help?

Interesting. I was getting the same error, but not while initialising LanguageModelData; it happened while calling TEXT.numericalize.

I had a look at the torchtext documentation, and it looks like numericalize accepts a device argument, where -1 refers to the CPU. To get the line working on my MacBook, I changed the method invocation like this:

TEXT.numericalize([md.trn_ds[0].text[:12]], device=-1)
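
If you're wondering which line to change: the call that actually fails is fld.numericalize([text]) inside LanguageModelLoader (line 134 of fastai/nlp.py in the traceback above). Rather than editing the installed library file, one option is to wrap Field.numericalize so it always passes device=-1 when no GPU is available. This is just a sketch, assuming the torchtext 0.2.x signature shown in the traceback:

import torch
from torchtext.data import Field

if not torch.cuda.is_available():
    _orig_numericalize = Field.numericalize

    def _cpu_numericalize(self, arr, device=None, train=True):
        # device=-1 makes this torchtext version keep the tensor on the CPU,
        # so the arr.cuda(device) branch in field.py is never reached.
        return _orig_numericalize(self, arr, device=-1, train=train)

    Field.numericalize = _cpu_numericalize

Run this before building LanguageModelData and the loader's own numericalize call should stay on the CPU. Editing the device argument directly in fastai/nlp.py works the same way.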

Hopefully that’s somewhat useful :slight_smile:


This really works, but I changed a different line of code.
Anyway, your answer helped!

Can you tell me which line of code you modified to make it work? I'm having the same issue.

Hello @adi0229, I also have the same problem. Can you please say what changes you made to fix it?