Kernel dies when using the fastai library due to a CPU memory issue

Hmm, can you share a full stack trace or provide a minimal code example that fails? I will try to fix it later.
cheers, Johannes

Never mind, Johannes, it looks like this was unrelated! Apologies. I'll update later if I run into any trouble!

@j.laute's fix worked for me as well - cheers, well done. I would support this being PRed into master.


Hi there,
I’ve been facing the same issue too!

So, I tried running the TTA separately for the ‘test-jpg’ and ‘test-jpg-additional’ folders, created separate result CSVs, and finally merged them using
‘result = pd.concat([test_df, addn_df])’

And it worked.
Hope this solves for you too.
(Just be watchful of the filenames while creating addn_df, and use ‘index=False’ when calling .to_csv(). A minimal sketch of the merge step is below.)
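For anyone following along, here is a minimal sketch of that merge step, assuming the two TTA runs have already been written to CSV (the file names and column layout here are just placeholders):

```python
import pandas as pd

# Hypothetical file names: each CSV holds the TTA predictions for one
# of the two test folders ('test-jpg' and 'test-jpg-additional').
test_df = pd.read_csv('tta_test.csv')
addn_df = pd.read_csv('tta_test_additional.csv')

# Stack the two partial results into a single submission frame.
result = pd.concat([test_df, addn_df], ignore_index=True)

# index=False keeps the DataFrame index out of the file, so the
# submission has only the expected columns (e.g. image_name, tags).
result.to_csv('submission.csv', index=False)
```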

Thanks for the advice, I will look into it!

I think it’s causing problems in other places too, such as the functions in metrics.py.

Please share some code to reproduce it if possible. Maybe we need to call to_tensor or something in the DataLoaderIter; I will investigate this weekend.

Cheers, Johannes

@j.laute I’m running into the same issue reported by @heisenburgzero. I’ve uploaded my notebook here and a Gist of the stack trace I get here. Let me know if you have any thoughts.

Update: I resolved this (at least locally) by re-creating the calls to get_tensor that look to have been lost in @j.laute’s original fix. Demonstrated here.

Any suggestions on how to make a permanent fix? Is there anything that can be done to the ThreadPoolExecutor construction in dataloader.py?

https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor-example
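To make the question concrete, here is a minimal sketch in the spirit of the linked docs example; this is not the actual dataloader.py code, just an illustration of how a bounded max_workers limits the number of in-flight jobs:

```python
import concurrent.futures

def prepare_batch(idx):
    # Placeholder for the per-batch work a loader would do
    # (decoding images, converting them to tensors, etc.).
    return idx

# Capping max_workers bounds how many batches are being prepared at
# once, which in turn bounds the loader's peak CPU memory usage.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(prepare_batch, i) for i in range(16)]
    for future in concurrent.futures.as_completed(futures):
        batch = future.result()
```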

Amazing! I will change that as well. I didn’t have time yet to submit a PR to master; hopefully tomorrow.

Cheers, Johannes

This should now be fixed in git. Let me know if you see any further problems.


I am still facing the same issue after the new update. Can anyone confirm whether it works for them now?

I have the error as well. The code seems to have been updated in the dataloader.py file, but the solution apparently isn’t working for us.

Git pull fixed the issues for me. Thank you!

It has not been completely solved, although it is better. (Thanks for improving the code)

Indeed, fitting the data uses less RAM (although still a lot). After an epoch the memory gets freed, but when swap was needed, only part of that memory gets freed. So when too many epochs are run, the kernel might still die.

For the Amazon dataset and the code I had, the fix is good enough, but I think that in some cases it won’t solve the issue completely.
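In case it helps others confirm this, a small sketch I’d use to watch resident memory around an epoch (psutil and the commented-out fit call are assumptions on my side, not part of the library):

```python
import os
import psutil

process = psutil.Process(os.getpid())

def log_rss(tag):
    # Resident set size in GB, as a rough check on whether memory is
    # actually released between epochs.
    print(f'{tag}: {process.memory_info().rss / 1e9:.2f} GB')

log_rss('before epoch')
# learn.fit(lr, 1)  # placeholder for one training epoch
log_rss('after epoch')
```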

I’m getting this same error when running ImageClassifierData.from_csv against the Yelp dataset (which is marked as “extra large”). I’ve verified that it is memory-related using top.

Seems odd, since I wouldn’t expect this code to actually load the images into memory.

Note: I tried a git pull and a conda env update.

Ignore that. I had a bug in my data processing code. :frowning:

Similar problem. How did you fix it, @harveynick?
I am loading data from a CSV for the Landmark Recognition Challenge and still can’t get it to work.
The kernel keeps dying. I am using Google Cloud with 26 GB of RAM and 1x K80 GPU.

@wnurmi, I actually made the changes mentioned in your PR, but the kernel is still dying. I am working on the same Google image recognition challenge, and the kernel dies while using ImageClassifierData.from_csv. Did any solution work for you? If so, could you let me know what changes need to be made and how?