Results
ULMFiT with SentencePiece set a new state of the art on the PolEval 2018 language modeling task, reaching a perplexity of 95 against the previous best of 146.
Source code and weights
Available datasets and results

 Task 2 NER
 Task 3 Language model
 A modified version of ULMFiT achieved a perplexity of 95, 35% better than the best competing submission

 Task 2 Sentiment analysis
 The best model (TreeLSTMNR) reached an accuracy of 0.795, although the dataset is most likely slightly broken. Paper from the contest
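As a sanity check on the headline numbers above: perplexity is the exponential of the average per-token negative log-likelihood, and the quoted 35% is the relative reduction from 146 to 95. A minimal sketch (the numbers are from the results above; the helper name is ours):

```python
import math

def perplexity(avg_nll: float) -> float:
    # Perplexity is exp of the average per-token negative log-likelihood
    return math.exp(avg_nll)

prev_best, ulmfit = 146.0, 95.0
# Relative reduction in perplexity versus the previous best
improvement = (prev_best - ulmfit) / prev_best
print(f"{improvement:.0%}")  # prints "35%"
```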

New sentiment dataset, similar to the IMDB dataset
 We have approached a few companies about publishing their datasets of comments with ratings and are awaiting their responses. The data will be published as part of PolEval 2019.