Does Transfer Learning Have a Point of Diminishing Returns across Multiple Subject Matters?

Using language learning as an example: if I kept transfer-learning a model onto new subject matter indefinitely, would there be a point where its capacity to integrate the new vocabulary would begin to decrease, or does it have a seemingly indefinite capacity to integrate new ideas/concepts/manners of speech?

I've been searching for papers on the topic but haven't found anything yet. It seems like it would be a fun thing to explore if it hasn't been done already, though I doubt that's the case. :slight_smile:

That’s a great question! I honestly don’t know the answer, but if I had to guess I’d say you don’t have to worry unless you’re planning to train it to classify hundreds of thousands of things. I’m genuinely excited to hear an answer from someone who knows more about this topic.

I wouldn’t want to train the classifier manually, but I’d be more than happy to write some code that does it for us, roughly along the lines of the sketch below.
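
For what it’s worth, here is a minimal sketch of the kind of experiment I have in mind (plain PyTorch with synthetic data, so the model size, number of subjects, and data generator are all arbitrary stand-ins): fine-tune one model on a sequence of "subjects" and, after each one, re-measure accuracy on every subject seen so far. Falling scores on earlier subjects, or slower gains on later ones, would be the diminishing-returns signal the original question asks about.

```python
import torch
from torch import nn

torch.manual_seed(0)

NUM_SUBJECTS = 5          # how many subject matters to transfer onto, in sequence
CLASSES_PER_SUBJECT = 10  # each subject contributes this many new classes
FEATURES = 64

def make_subject(subject_idx, n=2000):
    """Synthetic stand-in for one subject's dataset: one Gaussian blob per class."""
    centers = torch.randn(CLASSES_PER_SUBJECT, FEATURES) * 3.0
    labels = torch.randint(0, CLASSES_PER_SUBJECT, (n,))
    x = centers[labels] + torch.randn(n, FEATURES)
    y = labels + subject_idx * CLASSES_PER_SUBJECT  # map to a global class id
    return x, y

model = nn.Sequential(
    nn.Linear(FEATURES, 128),
    nn.ReLU(),
    nn.Linear(128, NUM_SUBJECTS * CLASSES_PER_SUBJECT),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

subjects = [make_subject(i) for i in range(NUM_SUBJECTS)]

@torch.no_grad()
def accuracy(x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

for i, (x_train, y_train) in enumerate(subjects):
    # Fine-tune on the new subject only (sequential transfer, no rehearsal of old data).
    for _ in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        optimizer.step()

    # Re-check every subject learned so far to see how much was retained.
    scores = [f"S{j}: {accuracy(*subjects[j]):.2f}" for j in range(i + 1)]
    print(f"after subject {i}: " + ", ".join(scores))
```

Swapping the toy classifier for a real pretrained language model and the blobs for actual domain corpora would be the interesting version, but even this toy loop shows the shape of the measurement: one retention curve per subject, tracked as new subjects are added.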