Given that the base language model is Wiki103 and you train a domain-specific language model yourself: if the domain language is very different (technical terms, etc.), do I add that vocabulary to the Wiki103 one? I remember Jeremy mentioned a limit of 60,000 words…
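To make the question concrete, here is a plain-Python illustration (not the fastai API) of what merging a domain vocabulary into a base one under a 60,000-token cap would look like; the token lists are made up. As far as I understand, in fastai v1 you usually don't merge by hand: when you build a learner from a pretrained model, the pretrained embeddings are remapped onto your databunch's own vocab.

```python
# Illustration only (plain Python, not the fastai API): extend a base
# vocabulary with unseen domain tokens, keeping the total under a cap.
# 60,000 is the limit mentioned in the lecture.
MAX_VOCAB = 60_000

def merge_vocabs(base_vocab, domain_tokens, max_size=MAX_VOCAB):
    """Append unseen domain tokens to the base vocab until max_size is hit."""
    merged = list(base_vocab)
    seen = set(merged)
    for tok in domain_tokens:
        if len(merged) >= max_size:
            break
        if tok not in seen:
            merged.append(tok)
            seen.add(tok)
    return merged

base = ["xxunk", "the", "of", "and"]
domain = ["the", "angiogram", "stent", "catheter"]
print(merge_vocabs(base, domain))
# → ['xxunk', 'the', 'of', 'and', 'angiogram', 'stent', 'catheter']
```

The duplicate "the" is skipped and only genuinely new domain terms are appended, so the cap is spent on vocabulary the base model has never seen.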
Is it normal that my GPU (GTX 1080 Ti) makes a "tic tic tic" sound while I'm training a model? It seems to make a "tic" every time the progress percentage increases.
It would be helpful if the course notebooks specified in their top cell which version of fastai they were created with. I'm having a hard time getting the Planet notebook to work now; I'm seeing this error:
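Something like the following could serve as that top cell; it only prints versions, and the try/except is there so the cell still runs on a machine without fastai installed:

```python
# Suggested top-of-notebook cell: record the versions the notebook was built with.
import sys

print("python:", sys.version.split()[0])
try:
    import fastai
    print("fastai:", fastai.__version__)
except ImportError:
    print("fastai: not installed")
```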
Next question: is there an interpretation class for multi-label classification? What's the right way to do similar analyses here? I would like to see a confusion matrix, top losses, etc. for my multi-label classification of pizza toppings, but the single-image-classification interp object doesn't seem to work for this.
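While waiting for a proper interpretation class, the multi-label analogue of a confusion matrix can be computed by hand: per-class TP/FP/FN/TN counts from thresholded predictions. A plain-Python sketch (no fastai; `classes`, `y_true`, and `probs` are made-up example data, and 0.5 is an assumed threshold):

```python
# Per-class confusion counts for multi-label predictions.
# y_true: one set of true labels per sample; probs: one probability per class.
def multilabel_confusion(y_true, probs, classes, thresh=0.5):
    counts = {c: {"tp": 0, "fp": 0, "fn": 0, "tn": 0} for c in classes}
    for truth, p in zip(y_true, probs):
        for i, c in enumerate(classes):
            pred = p[i] >= thresh
            actual = c in truth
            key = ("tp" if actual else "fp") if pred else ("fn" if actual else "tn")
            counts[c][key] += 1
    return counts

classes = ["pepperoni", "mushroom", "onion"]
y_true = [{"pepperoni"}, {"mushroom", "onion"}, {"pepperoni", "onion"}]
probs = [[0.9, 0.2, 0.1], [0.4, 0.8, 0.7], [0.6, 0.1, 0.3]]
print(multilabel_confusion(y_true, probs, classes))
```

In a real pipeline `probs` would come from `learn.get_preds()` and `classes` from the databunch; scikit-learn's `multilabel_confusion_matrix` does the same thing if you prefer a library call.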
The correct syntax is:

data = (TextList.from_csv(path, 'texts.csv', cols='text')
        .split_from_df(col=2)
        .label_from_df(cols=0)
        .databunch())

This is for fastai version:

import fastai
fastai.__version__  # '1.0.24'
Running the camvid segmentation notebook from lesson 3 as-is from the repo, but I'm not able to get the same results: the accuracy drops to almost zero. Using fastai version 1.0.24.
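I'm not sure this is the cause here, but one camvid gotcha worth checking is the metric: the lesson 3 notebook defines a custom accuracy that ignores the Void label, and plain pixel accuracy without that mask can look wildly off. A plain-Python sketch of the masking idea (the notebook's version works on torch tensors; `VOID_CODE = 30` is just an assumed index):

```python
# Sketch of the idea behind the notebook's custom camvid accuracy:
# pixel accuracy that ignores a "void" label code.
VOID_CODE = 30  # hypothetical index of the Void class

def masked_pixel_accuracy(preds, targets, void_code=VOID_CODE):
    """preds/targets: flat lists of per-pixel class indices."""
    kept = [(p, t) for p, t in zip(preds, targets) if t != void_code]
    if not kept:
        return 0.0
    return sum(p == t for p, t in kept) / len(kept)

preds   = [1, 2, 2, 3, 30]
targets = [1, 2, 3, 3, 30]
print(masked_pixel_accuracy(preds, targets))
# → 0.75 (the void pixel is excluded from the denominator)
```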
Does anyone know why training is sometimes super slow? With the same configuration, at two different times, the training time is very different. Sometimes I can fix it by rebooting my computer, but I don't know why.
I found that if I restart the kernel, I can train much faster. I think the reason is GPU memory: it isn't being freed after training is over. I'll read up on how to free the memory without restarting the kernel.
After banging my head against this for a long time, I couldn't identify the issue causing this unusual behaviour. I did a system reboot, and that looks like it has fixed it.
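For freeing GPU memory without a kernel restart, the usual recipe I've seen is: delete whatever still holds CUDA tensors (e.g. `del learn`), run the garbage collector, then ask PyTorch to release its cached blocks. A hedged sketch (the torch import is guarded so the function is safe to run anywhere):

```python
# Release GPU memory without restarting the kernel.
# Note: torch.cuda.empty_cache() only returns PyTorch's *cached* blocks to
# the driver; tensors still referenced from Python stay allocated, so run
# `del learn` (or whatever holds them) before calling this.
import gc

def free_gpu_memory():
    gc.collect()  # collect reference cycles that keep tensors alive
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
            return True
    except ImportError:
        pass
    return False

free_gpu_memory()
```

If memory still doesn't come back after this, the leak is usually a lingering reference (a stored exception traceback, a notebook `Out[]` entry, or an interp object holding predictions).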