AttributeError: '_FakeLoader' object has no attribute 'persistent_workers'

Just a beginner taking the course at https://course.fast.ai/

Set up a new environment on my laptop:
Anaconda3-2020.07-Windows-x86_64
conda install pytorch torchvision torchaudio cpuonly -c pytorch
conda install -c fastai -c pytorch fastai
conda install -c fastai fastbook

Installed versions:
pytorch 1.7.0
fastai 2.0.16
fastbook 0.0.11

In …notebooks/fastbook/clean/01_intro.ipynb, executing this cell:

# CLICK ME
from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)

resulted in this error:


---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
      9 
     10 learn = cnn_learner(dls, resnet34, metrics=error_rate)
---> 11 learn.fine_tune(1)

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastcore\logargs.py in _f(*args, **kwargs)
     54         init_args.update(log)
     55         setattr(inst, 'init_args', init_args)
---> 56         return inst if to_return else f(*args, **kwargs)
     57     return _f

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\callback\schedule.py in fine_tune(self, epochs, base_lr, freeze_epochs, lr_mult, pct_start, div, **kwargs)
    159     "Fine tune with `freeze` for `freeze_epochs` then with `unfreeze` from `epochs` using discriminative LR"
    160     self.freeze()
--> 161     self.fit_one_cycle(freeze_epochs, slice(base_lr), pct_start=0.99, **kwargs)
    162     base_lr /= 2
    163     self.unfreeze()

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastcore\logargs.py in _f(*args, **kwargs)
     54         init_args.update(log)
     55         setattr(inst, 'init_args', init_args)
---> 56         return inst if to_return else f(*args, **kwargs)
     57     return _f

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\callback\schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
    111     scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
    112               'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
--> 113     self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
    114 
    115 # Cell

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastcore\logargs.py in _f(*args, **kwargs)
     54         init_args.update(log)
     55         setattr(inst, 'init_args', init_args)
---> 56         return inst if to_return else f(*args, **kwargs)
     57     return _f

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
    205             self.opt.set_hypers(lr=self.lr if lr is None else lr)
    206             self.n_epoch = n_epoch
--> 207             self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
    208 
    209     def _end_cleanup(self): self.dl,self.xb,self.yb,self.pred,self.loss = None,(None,),(None,),None,None

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in _with_events(self, f, event_type, ex, final)
    153 
    154     def _with_events(self, f, event_type, ex, final=noop):
--> 155         try:       self(f'before_{event_type}')       ;f()
    156         except ex: self(f'after_cancel_{event_type}')
    157         finally:   self(f'after_{event_type}')        ;final()

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in _do_fit(self)
    195         for epoch in range(self.n_epoch):
    196             self.epoch=epoch
--> 197             self._with_events(self._do_epoch, 'epoch', CancelEpochException)
    198 
    199     @log_args(but='cbs')

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in _with_events(self, f, event_type, ex, final)
    153 
    154     def _with_events(self, f, event_type, ex, final=noop):
--> 155         try:       self(f'before_{event_type}')       ;f()
    156         except ex: self(f'after_cancel_{event_type}')
    157         finally:   self(f'after_{event_type}')        ;final()

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in _do_epoch(self)
    189 
    190     def _do_epoch(self):
--> 191         self._do_epoch_train()
    192         self._do_epoch_validate()
    193 

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in _do_epoch_train(self)
    181     def _do_epoch_train(self):
    182         self.dl = self.dls.train
--> 183         self._with_events(self.all_batches, 'train', CancelTrainException)
    184 
    185     def _do_epoch_validate(self, ds_idx=1, dl=None):

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in _with_events(self, f, event_type, ex, final)
    153 
    154     def _with_events(self, f, event_type, ex, final=noop):
--> 155         try:       self(f'before_{event_type}')       ;f()
    156         except ex: self(f'after_cancel_{event_type}')
    157         finally:   self(f'after_{event_type}')        ;final()

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in all_batches(self)
    159     def all_batches(self):
    160         self.n_iter = len(self.dl)
--> 161         for o in enumerate(self.dl): self.one_batch(*o)
    162 
    163     def _do_one_batch(self):

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\data\load.py in __iter__(self)
    100         self.before_iter()
    101         self.__idxs=self.get_idxs() # called in context of main process (not workers/subprocesses)
--> 102         for b in _loaders[self.fake_l.num_workers==0](self.fake_l):
    103             if self.device is not None: b = to_device(b, self.device)
    104             yield self.after_batch(b)

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
    761 
    762     def __init__(self, loader):
--> 763         super(_MultiProcessingDataLoaderIter, self).__init__(loader)
    764 
    765         assert self._num_workers > 0

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
    413         self._sampler_iter = iter(self._index_sampler)
    414         self._base_seed = torch.empty((), dtype=torch.int64).random_(generator=loader.generator).item()
--> 415         self._persistent_workers = loader.persistent_workers
    416         self._num_yielded = 0
    417 

AttributeError: '_FakeLoader' object has no attribute 'persistent_workers'

Any suggestions?
Regards


Current "workaround" would be to use pytorch version 1.7

edit: meant 1.6

I already mentioned the package versions at the beginning of the post.
The problem is occurring on pytorch 1.7.0.
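
If downgrading is not an option, one possible stopgap, judging purely from the traceback, is to give fastai's _FakeLoader the attribute that the torch 1.7 DataLoader iterator reads. This is an untested monkey-patch sketch, not an official fix:

# Untested sketch: torch 1.7's dataloader iterator reads loader.persistent_workers,
# which fastai 2.0.x's _FakeLoader does not define. Patch it in before training.
from fastai.data.load import _FakeLoader
_FakeLoader.persistent_workers = False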

Same problem with Azure DSVM. Bummer, was looking forward to getting started.

Having the same problem on my local machine, even though the requirements.txt file was installed right before running the intro notebook.

When run on AWS SageMaker I haven't seen any issues. It looks like I am missing something on my local machine, but since this points to an attribute error, I have not been able to figure it out yet.

Please advise

I'm also getting this on SageMaker now. Wasn't seeing it before.

pytorch 1.7.0
fastai 2.0.16

Tried downgrading fastai and that didn’t help…

Aha! I was able to fix it by downgrading to pytorch 1.6.0:

pip install torch==1.6.0 torchvision==0.7.0


Downgraded with:
conda install pytorch==1.6.0 torchvision==0.7.0 cpuonly -c pytorch

Now it produces this error:

---------------------------------------------------------------------------
Empty                                     Traceback (most recent call last)
D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\torch\utils\data\dataloader.py in _try_get_data(self, timeout)
    778         try:
--> 779             data = self._data_queue.get(timeout=timeout)
    780             return (True, data)

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\multiprocessing\queues.py in get(self, block, timeout)
    107                 if not self._poll(timeout):
--> 108                     raise Empty
    109             elif not self._poll():

Empty: 

During handling of the above exception, another exception occurred:

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
      9 
     10 learn = cnn_learner(dls, resnet34, metrics=error_rate)
---> 11 learn.fine_tune(1)

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastcore\logargs.py in _f(*args, **kwargs)
     54         init_args.update(log)
     55         setattr(inst, 'init_args', init_args)
---> 56         return inst if to_return else f(*args, **kwargs)
     57     return _f

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\callback\schedule.py in fine_tune(self, epochs, base_lr, freeze_epochs, lr_mult, pct_start, div, **kwargs)
    159     "Fine tune with `freeze` for `freeze_epochs` then with `unfreeze` from `epochs` using discriminative LR"
    160     self.freeze()
--> 161     self.fit_one_cycle(freeze_epochs, slice(base_lr), pct_start=0.99, **kwargs)
    162     base_lr /= 2
    163     self.unfreeze()

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastcore\logargs.py in _f(*args, **kwargs)
     54         init_args.update(log)
     55         setattr(inst, 'init_args', init_args)
---> 56         return inst if to_return else f(*args, **kwargs)
     57     return _f

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\callback\schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
    111     scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
    112               'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
--> 113     self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
    114 
    115 # Cell

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastcore\logargs.py in _f(*args, **kwargs)
     54         init_args.update(log)
     55         setattr(inst, 'init_args', init_args)
---> 56         return inst if to_return else f(*args, **kwargs)
     57     return _f

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
    205             self.opt.set_hypers(lr=self.lr if lr is None else lr)
    206             self.n_epoch = n_epoch
--> 207             self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
    208 
    209     def _end_cleanup(self): self.dl,self.xb,self.yb,self.pred,self.loss = None,(None,),(None,),None,None

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in _with_events(self, f, event_type, ex, final)
    153 
    154     def _with_events(self, f, event_type, ex, final=noop):
--> 155         try:       self(f'before_{event_type}')       ;f()
    156         except ex: self(f'after_cancel_{event_type}')
    157         finally:   self(f'after_{event_type}')        ;final()

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in _do_fit(self)
    195         for epoch in range(self.n_epoch):
    196             self.epoch=epoch
--> 197             self._with_events(self._do_epoch, 'epoch', CancelEpochException)
    198 
    199     @log_args(but='cbs')

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in _with_events(self, f, event_type, ex, final)
    153 
    154     def _with_events(self, f, event_type, ex, final=noop):
--> 155         try:       self(f'before_{event_type}')       ;f()
    156         except ex: self(f'after_cancel_{event_type}')
    157         finally:   self(f'after_{event_type}')        ;final()

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in _do_epoch(self)
    189 
    190     def _do_epoch(self):
--> 191         self._do_epoch_train()
    192         self._do_epoch_validate()
    193 

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in _do_epoch_train(self)
    181     def _do_epoch_train(self):
    182         self.dl = self.dls.train
--> 183         self._with_events(self.all_batches, 'train', CancelTrainException)
    184 
    185     def _do_epoch_validate(self, ds_idx=1, dl=None):

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in _with_events(self, f, event_type, ex, final)
    153 
    154     def _with_events(self, f, event_type, ex, final=noop):
--> 155         try:       self(f'before_{event_type}')       ;f()
    156         except ex: self(f'after_cancel_{event_type}')
    157         finally:   self(f'after_{event_type}')        ;final()

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\learner.py in all_batches(self)
    159     def all_batches(self):
    160         self.n_iter = len(self.dl)
--> 161         for o in enumerate(self.dl): self.one_batch(*o)
    162 
    163     def _do_one_batch(self):

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\fastai\data\load.py in __iter__(self)
    100         self.before_iter()
    101         self.__idxs=self.get_idxs() # called in context of main process (not workers/subprocesses)
--> 102         for b in _loaders[self.fake_l.num_workers==0](self.fake_l):
    103             if self.device is not None: b = to_device(b, self.device)
    104             yield self.after_batch(b)

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\torch\utils\data\dataloader.py in __next__(self)
    361 
    362     def __next__(self):
--> 363         data = self._next_data()
    364         self._num_yielded += 1
    365         if self._dataset_kind == _DatasetKind.Iterable and \

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\torch\utils\data\dataloader.py in _next_data(self)
    972 
    973             assert not self._shutdown and self._tasks_outstanding > 0
--> 974             idx, data = self._get_data()
    975             self._tasks_outstanding -= 1
    976 

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\torch\utils\data\dataloader.py in _get_data(self)
    939         else:
    940             while True:
--> 941                 success, data = self._try_get_data()
    942                 if success:
    943                     return data

D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\site-packages\torch\utils\data\dataloader.py in _try_get_data(self, timeout)
    790             if len(failed_workers) > 0:
    791                 pids_str = ', '.join(str(w.pid) for w in failed_workers)
--> 792                 raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
    793             if isinstance(e, queue.Empty):
    794                 return (False, None)

RuntimeError: DataLoader worker (pid(s) 20024, 19092, 20096, 16772) exited unexpectedly

The underlying error is displayed on the jupyter-notebook console/trace:

File "D:\ProgramData\Anaconda3\envs\IP-Fastai\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)

AttributeError: Can't get attribute 'is_cat' on <module '__main__' (built-in)>

This seems to be a different problem from the main one in this post.

It is probably related to how multiprocessing works on Windows and probably deserves a separate post.
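
In the meantime, a common workaround for this kind of Windows failure, assuming the spawn start method cannot pickle the notebook-defined is_cat into worker subprocesses, is to keep data loading in the main process with num_workers=0 (slower, but it avoids worker subprocesses entirely):

from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()

# num_workers=0 keeps loading in the main process, so is_cat never needs
# to be pickled into spawned worker subprocesses on Windows
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224), num_workers=0)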

Hello, I had the same error recently. It is a PyTorch error. I am now following the tutorial using Paperspace/Gradient and do not have any issues. On those machines the versions used are:
fastai version: 2.0.11
torch version: 1.6.0
It may be worth checking whether you still encounter the issue with those versions. Also, even if you make it work locally, I would strongly advise you to use a free GPU platform online; otherwise the training of your models will be very slow.

Hope it helps!
Charles


The experience of starting the fast.ai course while configuring an environment on Windows without a GPU is not smooth, despite the high expectations set by package managers like Anaconda, which claim to take care of platform/package dependencies.

I appreciate your advice and am now heading to cloud environments, starting with the one you suggested, in the hope that it will let me focus on the real content of the course instead of losing hours setting up an environment, which should not be the first challenge for beginners!

Thanks again!

You need fastai >=2.1.0 if you are using torch 1.7.0. Otherwise, fastai <=2.0.18 can only use torch 1.6.
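
A quick way to verify which combination is actually installed before training (this only prints the two versions; the pairing rule is the one above):

import torch, fastai
print(torch.__version__)   # 1.7.x needs fastai >= 2.1.0
print(fastai.__version__)  # fastai <= 2.0.18 pairs with torch 1.6.x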

! pip install "fastai==2.0.13" "fastcore==1.2.5" "torch==1.6.0"

This install threw the error on Saturday.
It runs fine today.

Something must have been fixed.

Sorry, I meant 1.6

This worked for me. I ran the notebook on a local machine and it worked after using the solution provided by @danielnbarbosa.