Dataloader - parallel processing error - ParallelNative.cpp

Problem: the dataloader is not using all CPUs, and because of that training is very slow.

My environment:

macOS
Python 3.6
fastai==2.1.5
torch==1.7.0

I am creating a dataloader like this:

textblock = TextBlock.from_df(
    '_VALUE',  # Which dataframe column to read
    is_lm=True,  # We only have X and no Y for the language model
    tok=RulesTokenizer(),
    rules=[]  # Disable default fastai rules
)

datablock = DataBlock(
    blocks=textblock,  # That's how we read, tokenize and get X
    get_x=ColReader('text'),  # After going through TextBlock, tokens are in the column `text`
    splitter=RandomSplitter(0.2)  # Splitting to train/validation
)


dataloader = datablock.dataloaders(
    subset,  # Source of data
    bs=256,  # Batch size
    num_workers=8,  # Number of parallel worker processes for loading
    pin_memory=True,  # Pin batches in memory (mainly useful when training on a GPU)
)

When I attempt to train, I get many repeated copies of the same warning:

[W ParallelNative.cpp:206] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)

From this article, it appears that PyTorch 1.7 has a bug related to parallel processing.
Setting the following environment variable to 1 removes the warning:
export OMP_NUM_THREADS=1
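
In a Jupyter notebook, one way to apply this (a minimal sketch, not from the linked report) is to set the variable from Python before torch is imported, since the value is only picked up when the parallel backend initializes:

import os

# Must run before torch (or fastai, which imports torch) is imported;
# the thread count is read when the parallel backend is initialized.
os.environ["OMP_NUM_THREADS"] = "1"

import torch
from fastai.text.all import *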

Potentially as a result of this, CPU load is very low (no parallel computation?) and training is slow as well. I am training on a CPU-only Mac, so I expect all CPUs to run at 100% to reach reasonable speeds, but it seems the dataloader is the bottleneck due to the lack of parallel processing.

I wonder if anyone has had the same problem?

The problem also does not happen if the dataloader uses num_workers=0.
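
For reference, with the DataBlock from the first post that would look like this (a sketch; it avoids the warning at the cost of single-process loading):

dataloader = datablock.dataloaders(
    subset,
    bs=256,
    num_workers=0,  # Load batches in the main process: no worker processes, no warning
)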

I am using the exact same environment as you, @versus, except that my Python version is 3.8.3.

I am creating an instance of the ImageDataLoaders class and training like this:

path = untar_data(URLs.PETS)
files = get_image_files(path/"images")
def label_func(f): return f[0].isupper()
dls = ImageDataLoaders.from_name_func(path, files, label_func, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)

When I run the last two lines of code in a Jupyter notebook, I get the exact same error as you. I then tried setting:

torch.set_num_threads(1)

It resolved the warning, but training was too slow. If you have been able to solve it in the meantime, please let me know.
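
For anyone trying this: set_num_threads is a function, and the warning itself says it must run before any parallel work starts, so it belongs at the top of the notebook. A minimal placement sketch:

import torch
from fastai.vision.all import *

# Limit intraop threads before building dataloaders or starting training;
# the warning above complains precisely about calling this too late.
torch.set_num_threads(1)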

I was not able to solve it on a CPU-only machine. I tried Python 3.6 and 3.8 with the same result. The difference in 3.8 is that multiprocessing works differently, but it does not seem to affect the problem.
The problem does not reproduce on GPU machines: there I can use multi-worker data loading and there is no warning like this.

As I wrote in my first post, it seems to be a PyTorch bug, reported here: https://github.com/pytorch/pytorch/issues/46409
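
Until the upstream bug is fixed, one workaround sketch (my own, not something from the issue) is to pick num_workers based on whether a GPU is available, so the same notebook runs warning-free on a CPU-only Mac and still loads data in parallel elsewhere:

import torch

# Single-process loading on CPU-only machines (avoids the ParallelNative warning);
# parallel workers and pinned memory where a GPU is present.
on_gpu = torch.cuda.is_available()

dataloader = datablock.dataloaders(
    subset,
    bs=256,
    num_workers=8 if on_gpu else 0,
    pin_memory=on_gpu,
)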

Hello Dimitrii.

Did you finally solve it?
I am still facing this problem with Python 3.7.9.

I’m hitting this wall as well on a 2018 MacBook Pro with Python 3.7 + PyTorch 1.7.1 + fastai 2.2.7.

I am facing the same problem on a Mac.
Did you find any solution?

I’m still seeing this in fastai v2.5.3 and PyTorch 1.9.0.

Still seeing this.

2018 MacBook Pro
torch: 1.12.0
Python 3.9.13
fastai 2.7.7