I've run into this too: it crashes at the validation stage, which I never experienced with v0.7.
Maybe it's because I often train in fp16 and validation may not use it (speculation on my part).
You can see it discussed here: Different batch_size for train and valid data loaders - #2 by sgugger
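For anyone who just wants the workaround from that thread in plain PyTorch (a sketch, not the exact code from the linked post): you can give validation its own, smaller batch size, which can avoid out-of-memory crashes that only show up at validation time.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Dummy datasets just for illustration
train_ds = TensorDataset(torch.randn(64, 3), torch.randn(64, 1))
valid_ds = TensorDataset(torch.randn(16, 3), torch.randn(16, 1))

# Validation can hold extra tensors (e.g. full-precision outputs for
# metrics), so a smaller batch size there can keep memory in check.
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
valid_dl = DataLoader(valid_ds, batch_size=8, shuffle=False)

xb, yb = next(iter(valid_dl))
print(xb.shape)  # torch.Size([8, 3])
```

The numbers (32 vs 8) are placeholders; the point is only that the two loaders don't have to share a batch size.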