Kernel dies when using the fastai library due to a CPU memory issue

Hi there,
I’ve been facing the same issue too!

So, I’ve tried doing the TTA separately for the ‘test-jpg’ and ‘test-jpg-additional’ folders, created separate result CSVs, and finally merged them using
‘result = pd.concat([test_df, addn_df])’

And it worked.
Hope this solves it for you too.
(Just be watchful of the filenames while creating addn_df, and use ‘index=False’ when calling .to_csv().)
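
For reference, here’s a minimal sketch of that merge step, assuming the two TTA runs have already been saved to (hypothetical) test_preds.csv and addn_preds.csv with identical columns:

```python
import pandas as pd

# Hypothetical outputs of the two separate TTA runs
test_df = pd.read_csv('test_preds.csv')   # predictions for 'test-jpg'
addn_df = pd.read_csv('addn_preds.csv')   # predictions for 'test-jpg-additional'

# Stack the two result sets into one submission frame
result = pd.concat([test_df, addn_df], ignore_index=True)

# index=False keeps pandas from writing an extra index column
result.to_csv('submission.csv', index=False)
```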

Thanks for the advice, I will look into it!

I think it’s causing problems in other places too, such as functions in metrics.py.

Please share some code to reproduce it if possible. Maybe we need to call to_tensor or something in the DataLoaderIter; I’ll investigate this weekend.

Cheers, Johannes

@j.laute I’m running into the same issue reported by @heisenburgzero. I’ve uploaded my notebook here and a Gist of the stack trace I get here. Let me know if you have any thoughts.

Update: resolved this (at least locally) by re-creating the calls to get_tensor that appear to have been lost in @j.laute’s original fix. Demonstrated here

Any suggestions on how to make a permanent fix? Is there anything that can be done to the ThreadPoolExecutor construction in dataloader.py?

https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor-example
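
To be clear, this isn’t the actual fastai fix, just a generic sketch of one way to keep a ThreadPoolExecutor from holding everything in memory at once: cap the number of outstanding futures instead of submitting every item up front. All the names below are made up for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import islice

def bounded_map(fn, items, max_workers=4, max_outstanding=8):
    # Apply fn to items on a thread pool, keeping at most max_outstanding
    # futures (and their buffered results) alive at any point in time.
    items = iter(items)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        window = [pool.submit(fn, x) for x in islice(items, max_outstanding)]
        while window:
            oldest = window.pop(0)
            yield oldest.result()          # block on the oldest submitted task
            for x in islice(items, 1):     # refill the window by one item
                window.append(pool.submit(fn, x))
```

Something along those lines would keep results coming out in order while bounding how much work is queued at once.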

Amazing! Will change that as well. Didn’t have time yet to submit a PR to master, hopefully tomorrow.

Cheers, Johannes

This should now be fixed in git. Let me know if you see any further problems.


I am still facing the same issue after the new update. Can anyone confirm whether it works for them now?

I have the error as well. The code seems to have been updated in the dataloader.py file, but the solution apparently isn’t working for us.

Git pull fixed the issues for me. Thank you!

It has not been completely solved, although it is better. (Thanks for improving the code)

Indeed, fitting the data uses less RAM (although still a lot). After an epoch the memory gets freed, but when swap was needed, only part of that memory gets freed. So when too many epochs are run, the kernel might still die.

For the Amazon dataset and the code I had, the fix is good enough, but I think that in some cases this won’t solve the issue completely.

I’m getting this same error when running ImageClassifierData.from_csv against the Yelp dataset (which is marked as “extra large”). I’ve verified that it is memory-related using top.

Seems odd, since I wouldn’t expect this code to actually load the images into memory.

Note: I tried a git pull and a conda env update.

Ignore that. I had a bug in my data processing code. :frowning:

Similar problem. How did you fix that, @harveynick?
I am loading data from a CSV for the Landmark Recognition Challenge and still can’t get it to work.
The kernel keeps dying. I am using Google Cloud with 26 GB RAM and 1x K80 GPU.

@wnurmi, I actually made the changes mentioned in your PR, but the kernel is still dying. I am working on the same Google Landmark Recognition competition, and the kernel dies while using ImageClassifierData.from_csv. Did any solution work for you? If so, could you let me know what changes need to be made and how?

@jeremy, I just tried a git pull and conda env update, but the kernel shuts down while trying to process ImageClassifierData.from_csv. I am working on the Google Landmark Recognition competition, where the dataset is very large. I am using PaperSpace with 30 GB RAM and a 16 GB GPU.

Hi. The PR is already merged, so there’s no need to make the same changes unless you are on an old version (in which case, try pulling the latest one!).

After the PR and after upgrading to 60 GB RAM, I was able to run from_csv without crashing, but I think there are still some parts of the fastai library that are very memory-intensive when run on big datasets with this many labels, so I had to do inference in parts, for example (due to out-of-RAM crashes, if I recall correctly).
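
As a rough illustration of what I mean by “inference in parts” (this is not fastai’s API; predict_fn and the part size are hypothetical):

```python
import numpy as np

def predict_in_parts(predict_fn, items, part_size=10000):
    # Run predictions over the test items in fixed-size parts so the
    # prediction pipeline never has to process the whole set at once,
    # then stitch the per-part results back together.
    parts = []
    for start in range(0, len(items), part_size):
        chunk = items[start:start + part_size]
        parts.append(np.asarray(predict_fn(chunk)))
    return np.concatenate(parts)
```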

Thanks for the quick response, William, but I have already tried doing a git pull and conda env update recently, and it hasn’t resolved the issue for me.

I’ll check with PaperSpace support if RAM can be increased.