This makes me believe that sentencepiece is not able to perform the subword tokenization. My understanding is that SPProcessor needs to create a tmp/spm.model and a tmp/spm.vocab in the directory where the notebook is located.
Also, the code is using fastai v1, and I haven't used sentencepiece with the v1 API.
I need additional information to diagnose this further. Please review this guide: How to debug your code and ask for help with fastai v2