Kaggle Kernel fails on COMMIT using FastAI and ResNet101

Kernel is found here

I can manually SHIFT+ENTER through the entire kernel and it runs flawlessly, but when I COMMIT and RUN it, it goes all the way to the end and then fails. I have no idea why.

Here are the only sections specific to getting the kernel working with FastAI properly that might be causing the issue… I just don't know, and I'm frustrated.

# After some listdir fun we've determined the proper path
PATH = '../input/dice-d4-d6-d8-d10-d12-d20-images/dice-d4-d6-d8-d10-d12-d20/dice'

# Let's make the resnet101 model available to FastAI
# Credit to Shivam for figuring this out: 
# https://www.kaggle.com/shivamsaboo17/amazon-from-space-using-fastai/notebook AND http://forums.fast.ai/t/how-can-i-load-a-pretrained-model-on-kaggle-using-fastai/13941/7

from os.path import expanduser, join, exists
from os import makedirs
cache_dir = expanduser(join('~', '.torch'))
if not exists(cache_dir):
    makedirs(cache_dir)
models_dir = join(cache_dir, 'models')
if not exists(models_dir):
    makedirs(models_dir)
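(Side note: the exists()/makedirs() dance above can be collapsed into one call with `makedirs(exist_ok=True)`, which creates intermediate directories and silently skips ones that already exist. A minimal sketch:)

```python
import os
from os.path import expanduser, join

# Same effect as the exists()/makedirs() checks above:
# exist_ok=True makes the call a no-op if the directory already
# exists, and intermediate directories are created as needed.
models_dir = join(expanduser('~'), '.torch', 'models')
os.makedirs(models_dir, exist_ok=True)
print(models_dir)
```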

# copy time! IPython expands {models_dir} from the Python variable above,
# so the destination matches the cache directory we just created
# (torch's model zoo looks under ~/.torch/models by default)
!cp ../input/resnet101/resnet101.pth {models_dir}/resnet101-5d3b4d8f.pth
# point the data path at a writeable location (the Kaggle input
# directory is read-only, which otherwise triggers a permissions error)
import pathlib
data.path = pathlib.Path('.')
learn.save("240_resnet101_all")
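(For anyone wondering why the `data.path` trick is needed: in fastai v0.7, `learn.save(name)` writes the checkpoint under a `models` folder inside the learner's data path, as an `.h5` file if I remember right, and `../input` is read-only on Kaggle. A rough sketch of where the file ends up, assuming that layout:)

```python
import pathlib

# Assumed layout: fastai v0.7's learn.save(name) writes to
# <data.path>/models/<name>.h5. With data.path set to '.', the
# checkpoint lands in the (writeable) kernel working directory.
data_path = pathlib.Path('.')
save_target = data_path / 'models' / '240_resnet101_all.h5'
print(save_target)
```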

After some heavy commenting-out to isolate the culprit, it seems that setting precompute=True on your learner is what causes the commit to fail.

ah, have you tried clearing the data generated by precompute? Did that fix it?
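If it helps anyone: in fastai v0.7 the precomputed activations are cached in a `tmp` folder under the learner's data path, so with `data.path` set to `'.'` as in the kernel above, that would be `./tmp` in the working directory. Something like this should clear them (the path is an assumption based on that setup):

```shell
# Remove cached precomputed activations so fastai rebuilds them.
# Assumes data.path was set to '.' as in the snippet above.
rm -rf ./tmp
```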