Memory full with previous models

Hey Adam,

I ran into this problem a lot with one of my models. Have you tried setting up an EC2 instance on AWS, or using the free Kaggle GPUs?

One thing I found that made a big difference (note that this was on a tabular dataset) was setting the validation size to 0.5. That seemed to free up enough space for me to increase my batch_size and experiment with the architecture without hitting that error.
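Just to be concrete about what I mean by a 0.5 validation size, here's a rough plain-Python sketch of a random split (the `train_valid_split` helper is just mine for illustration, not from any particular library — in practice you'd pass the equivalent fraction to whatever data loader you're using):

```python
import random

def train_valid_split(items, valid_pct=0.5, seed=42):
    """Randomly hold out valid_pct of the items for validation."""
    rng = random.Random(seed)
    idxs = list(range(len(items)))
    rng.shuffle(idxs)
    cut = int(len(items) * valid_pct)
    # First valid_pct of the shuffled indices become the validation set,
    # the rest become the training set.
    valid = [items[i] for i in idxs[:cut]]
    train = [items[i] for i in idxs[cut:]]
    return train, valid

# With valid_pct=0.5, only half the data ends up in the training set,
# which is what seemed to leave me headroom for a larger batch_size.
train, valid = train_valid_split(list(range(100)), valid_pct=0.5)
```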

There are also a number of existing threads on this problem if you search for them, for example: