How to load pre-trained model (WT103) from local directory

Hello,
I’m new to fastai and currently making my way through part 1. I am working with version 1.0.55, and I am behind a firewall, so I can’t use URLs.WT103 to download the pre-trained weights from the S3 bucket.

learn = language_model_learner(data_lm, pretrained_model=URLs.WT103, drop_mult=0.3)

However, I do have access to the original wt103 model (located here) and have downloaded it locally to my machine. Can anyone provide guidance on how to load these files so that language_model_learner works, given my constraint of not being able to download them via URLs.WT103?

Thanks


Note that this model won’t fully work with v1.0.55, as we have changed things in the tokenization process and the special token ids. It won’t be horrible, just not as good as the latest one. Also note that the line of code you posted can’t work, as there is no pretrained_model argument in language_model_learner.

To use a pretrained model you have locally, put the .pth and .pkl files in the models directory of your Learner (by default data_path/models), then use

learn = language_model_learner(data_lm, pretrained_fnames=[name_of_the_pth_file, name_of_the_pkl_file], drop_mult=0.3)

The names of the files should be without extensions.
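For example, a minimal sketch of that call as it looks in 1.0.55+ (where the architecture is passed explicitly), assuming the two files use the names mentioned later in this thread and sit in data_lm.path/'models':

from fastai.text import *

# The two files live in <data_lm.path>/models (the Learner's default model_dir):
#   models/lstm_wt103.pth   - the weights
#   models/itos_wt103.pkl   - the vocabulary
learn = language_model_learner(data_lm, AWD_LSTM,
                               pretrained=False,                                # skip the download
                               pretrained_fnames=['lstm_wt103', 'itos_wt103'],  # names only, no extensions
                               drop_mult=0.3)

(If the weights are the older 1150-hidden-unit wt103 checkpoint, the config tweak shown further down the thread is also needed.)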


@sgugger Thanks for the help on @mlxMantic’s question; I’m wondering about a couple of things. First, a quick assumption I want to clear up: when you say “latest one”, you mean WT103_v1, correct?

Second, regarding placement of the pretrained models in the filesystem so that language_model_learner can find them: where exactly is data_path? Is it an environment variable that I need to set? I don’t see a folder in the filesystem with this name that fastai would care about. I also tried passing a relative path to the location where I saved the files, but I got a SyntaxError, which may be expected, I guess.

  File "<ipython-input-29-f35fc69e9e65>", line 1
learner = language_model_learner(new_LM, pretrained_fnames=[../rajan/WT103_v1/lstm_wt103,../rajan/WT103_v1/itos_wt103],drop_mult=0.3)
                                                            ^
SyntaxError: invalid syntax

FWIW, I started down this path because I couldn’t download the model the standard way (not related to @mlxMantic’s firewall problem). I get this exception if I try it as suggested:

learner = language_model_learner(new_LM, URLs.WT103_v1,drop_mult=0.3)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-30-04d9ce49c683> in <module>
----> 1 learner = language_model_learner(new_LM, URLs.WT103_v1,drop_mult=0.3)

AttributeError: type object 'URLs' has no attribute 'WT103_v1'

I’ll start another thread on this issue if it’s worthwhile (i.e. not trivial).

You should use URLs.WT103_FWD now; there is no URLs.WT103_V1 anymore.

@sgugger I keep getting this error when I try using that WT103_FWD model (or the WT103_BWD model):

KeyError: 'https://s3.amazonaws.com/fast-ai-modelzoo/wt103-fwd'

I can’t find anything online for this error so I think it’s probably something on my side. I am running this on GCP, but I’ve tried a few different instances and machine types without being able to figure this out.

Happy to start another thread if this isn’t a quick fix.

You don’t have to specify it when you create a Learner; it knows which model to download. The syntax is:

learn = language_model_learner(data_lm, AWD_LSTM)

as shown in the text example.
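For the backward model, the same call should work as long as the DataBunch is built with backwards=True. A sketch of what that might look like (the DataFrame df, path, and 'text' column name here are placeholders, not from this thread):

from fastai.text import *

# Backward LM: build the DataBunch with backwards=True; fastai should then pick
# the backward pretrained weights (WT103_BWD) instead of the forward ones.
data_bwd = (TextList.from_df(df, path, cols='text')
            .split_by_rand_pct(0.1)
            .label_for_lm()
            .databunch(backwards=True))

learn_bwd = language_model_learner(data_bwd, AWD_LSTM, drop_mult=0.3)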


Hello @sgugger !

I am having the following problem:

learn = language_model_learner(data_lm, arch=AWD_LSTM)

It starts downloading, but it stops, and the error message is the following:

"Fix the download manually"

I followed the instructions and re-ran the code, but the problem does not disappear.

Do you know how to fix it?

Thanks !


For what it’s worth, I was able to get the WT103 model working by downloading the latest files (lstm_wt103 and itos_wt103). I then had to adjust the config and write the language_model_learner call as shown below, and it worked…


from fastai.text import *  # awd_lstm_lm_config, AWD_LSTM, language_model_learner

# These weights expect 1150 hidden units, so the default config must be adjusted to match.
config = awd_lstm_lm_config.copy()
config['n_hid'] = 1150

learn = language_model_learner(insert_your_data, AWD_LSTM, config=config, pretrained=False,
                               pretrained_fnames=['lstm_wt103', 'itos_wt103'], drop_mult=0.3)



Hey!
I am getting the same error when building the backward model (data_bwd). Have you been able to solve this yet?

Hello
I am not able to understand this change. Can you please help me out with it?

Hello
What about the case of the backward model?

learn = language_model_learner(??, AWD_LSTM) ?
I have been facing difficulty with the same.

Kindly help.

I have run into a similar issue. Is it possible to download the pre-trained model locally and use it from an arbitrary location in a local directory? I do not have access to download from the internet or to make changes to the installation directory. For example, if I download wt103 to c:/temp/… can I specify that location as part of language_model_learner?
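In case a sketch helps here: the models directory lives next to your data, not inside the fastai installation, so one approach that should satisfy those constraints is to copy the two files from your local folder into <your DataBunch path>/models and load them by name, as described earlier in the thread. The paths and file names below are assumptions (a c:/temp layout with lstm_wt103.pth and itos_wt103.pkl), and data_lm stands for your own DataBunch:

import shutil
from pathlib import Path
from fastai.text import *

# Copy the locally downloaded files into the Learner's default model_dir
# (<data_lm.path>/models), then point pretrained_fnames at them.
src = Path('c:/temp/wt103')
dest = data_lm.path/'models'
dest.mkdir(parents=True, exist_ok=True)
for fname in ['lstm_wt103.pth', 'itos_wt103.pkl']:
    shutil.copy(src/fname, dest/fname)

learn = language_model_learner(data_lm, AWD_LSTM, pretrained=False,
                               pretrained_fnames=['lstm_wt103', 'itos_wt103'],
                               drop_mult=0.3)

(As noted above, if these are the older wt103 weights, the config['n_hid'] = 1150 tweak is also needed.)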

I have a pre-trained fastai tabular model saved as a pickle file in S3. How can I use load_learner() to make predictions in SageMaker? Thanks.