Jeremy's demo - create new document from keywords and language model

(Gary Biggs) #1

I recall Jeremy doing an interesting demo where he created a new document from just a few keywords, using a pre-trained language model fine-tuned on a large corpus of academic document abstracts. I can’t find it in the course material. Can someone please let me know which lesson that was in and, ideally, whether there was a demo notebook? I’ve looked at all the 2018 course #1 and #2 notebooks and can’t find it.

The demo output looked something like what OpenAI’s GPT-2 might produce. I want to fine-tune a ULMFiT model on my own corpus.

Thanks!

(Seemant) #2

You can find it here in this notebook.

(Gary Biggs) #3

Thanks, Seemant. It looks like that notebook deals exclusively with the IMDb dataset, but maybe I haven’t looked closely enough. It’s also possible that Jeremy did the academic-paper demo without sharing the notebook. I’ll keep digging.

(Bobak Farzin) #4

.predict will work with any language model that you train or load. You give it a word or a set of words, and it will predict the next one, repeating until it has generated the number of words you asked for. The notebook above is just one example of that.
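
Roughly, with the fastai v1 text API, that looks like the sketch below (the data path and CSV name are placeholders for whatever corpus you load):

```python
from fastai.text import *

# Build a language-model DataBunch from your own texts
# (the path and file name here are placeholders).
data_lm = TextLMDataBunch.from_csv(Path('data'), 'texts.csv')

# AWD_LSTM loads the Wikitext-103 pre-trained weights by default.
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)

# Give it a prompt and the number of words to generate;
# predict() keeps sampling the next token until it has produced n_words.
print(learn.predict("in this paper we propose", n_words=50, temperature=0.75))
```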

You can also find an example of using beam search to predict the next word; it’s all in the docs. This should work “out of the box” with the Wikitext-103 pre-trained model, and you can fine-tune it to your particular needs.
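
Continuing from the snippet above, a rough sketch of the fine-tune-then-generate flow (the learning rates, epoch counts and beam size are illustrative, not tuned values):

```python
# Fine-tune the pre-trained model on your own corpus before generating.
learn.fit_one_cycle(1, 1e-2)   # train the new head first
learn.unfreeze()
learn.fit_one_cycle(3, 1e-3)   # then fine-tune the whole model

# Beam search keeps the top-scoring partial sequences
# instead of sampling one token at a time.
print(learn.beam_search("in this paper we propose", n_words=50, beam_sz=200))
```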

(Gary Biggs) #5

Great info, Bobak. Thank you very much!
