Memory error (not a CUDA out-of-memory error)

There appears to be a memory leak caused by the `ThreadPoolExecutor` used in `DataLoader` (fastai/dataloader.py). A temporary fix is to disable multi-threaded execution altogether. For example, the following diff fixes the issue, at the cost of slower data loading:

```diff
     def __iter__(self):
-        with ThreadPoolExecutor(max_workers=self.num_workers) as e:
-            for batch in e.map(self.get_batch, iter(self.batch_sampler)):
-                yield get_tensor(batch, self.pin_memory)
+        for batch in map(self.get_batch, iter(self.batch_sampler)):
+            yield get_tensor(batch, self.pin_memory)
```
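For context, here is a minimal self-contained sketch of the same single-threaded iteration pattern. The class and method names below (`SimpleDataLoader`, the toy `batch_sampler`/`get_batch`) are stand-ins I made up for illustration, not the actual fastai classes, which also handle pinned memory, tensor conversion, and configurable samplers:

```python
class SimpleDataLoader:
    """Toy stand-in for a DataLoader with worker threads disabled."""

    def __init__(self, data, batch_size=2):
        self.data = data
        self.batch_size = batch_size

    def batch_sampler(self):
        # Yield one list of indices per batch.
        for start in range(0, len(self.data), self.batch_size):
            yield list(range(start, min(start + self.batch_size, len(self.data))))

    def get_batch(self, indices):
        # Collate the items for one batch.
        return [self.data[i] for i in indices]

    def __iter__(self):
        # Single-threaded: plain map() instead of ThreadPoolExecutor.map(),
        # so no pool of worker threads exists to hold onto batch objects.
        for batch in map(self.get_batch, self.batch_sampler()):
            yield batch


loader = SimpleDataLoader(list(range(5)), batch_size=2)
batches = list(loader)
# batches == [[0, 1], [2, 3], [4]]
```

Since `map()` is lazy, each batch is built only when the training loop asks for it, which mirrors the behavior of the patched `__iter__` above.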
