The pre-trained model and the training process are available here:
https://github.com/hanan9m/hebrew_ULMFiT
The method gets significantly better results on the benchmark from this repository:
https://github.com/omilab/Neural-Sentiment-Analyzer-for-Modern-Hebrew
without a pre-trained model: 90%
with a model pre-trained on Hebrew Wikipedia: 91.4%
See final_rivlin.ipynb in the GitHub repository.