The bottleneck of specialization vs. generalization

If you have spent long enough with any of the top-performing LLMs, you will notice something.

LLMs, much like humans, are only truly specialized in some niche area, because problem solving depends heavily on the context of that particular niche.

Now, dynamically adjusting the weights for a given problem in a given niche could, in principle, make an LLM an expert at anything.

But I think that diverges from the idea of a generalized LLM: a single model that is an expert at everything.
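To make the "adjust per niche" idea concrete, here is a toy sketch of one way it is sometimes approached in practice: keep a set of niche-specific adapters and route each prompt to the closest one. Everything here (the niche names, keyword sets, and `route` function) is hypothetical and purely illustrative, not a real API.

```python
# Hypothetical sketch: route a prompt to a niche-specific "adapter".
# All names and keyword sets below are made up for illustration.

NICHE_KEYWORDS = {
    "law": {"contract", "liability", "statute"},
    "medicine": {"diagnosis", "symptom", "dosage"},
    "code": {"function", "compile", "bug"},
}

def route(prompt: str, default: str = "general") -> str:
    """Pick the niche whose keywords overlap the prompt the most."""
    words = set(prompt.lower().split())
    best, best_score = default, 0
    for niche, keywords in NICHE_KEYWORDS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = niche, score
    return best

print(route("Is this contract clause a liability risk?"))  # law
```

A real system would swap specialized weights (e.g. fine-tuned adapters) rather than just labels, but the routing decision is the same shape: specialize per query instead of hoping one set of weights covers everything.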

What blows my mind is that the entire English language boils down to just 26 letters (plus punctuation and a few special characters).

But my point is, those 26 letters allowed us to build the ever-expanding, ever-improving soup we have now.

People had models of understanding long before alphabets. Alphabets allow writing, which allows records and transmission. Chinese had no alphabet until Pinyin (1950s) and uses concept symbols.
Chinese revolutionaries around the First World War commented on how Europeans could achieve literacy using alphabets, whereas literacy in China was constrained by the complexity of the symbols (simplified in the 1950s).

Customer: "How many can I buy?" "Unlimited, Madam." Employee: "How many days' holiday can I have?" …
So LLMs need reality, and that is why RAG is in the news, although there was a Microsoft paper a few years ago about using a cognitive model to keep deep learning grounded in reality.
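The RAG idea mentioned above can be sketched very minimally: retrieve the most relevant document and prepend it to the prompt as grounding context. This is a toy illustration under obvious assumptions (a three-document corpus, naive word-overlap retrieval, made-up helper names), not a production pipeline.

```python
# Minimal RAG sketch (illustrative only): retrieve the document with the
# highest word overlap and prepend it to the prompt as grounding context.

DOCS = [
    "The Eiffel Tower is in Paris and is 330 metres tall.",
    "Python was created by Guido van Rossum in 1991.",
    "RAG augments a language model with retrieved documents.",
]

def retrieve(query: str) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(DOCS, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the model answers from it, not memory."""
    return f"Context: {retrieve(query)}\nQuestion: {query}\nAnswer:"

print(build_prompt("Who created the Python language?"))
```

Real systems replace word overlap with embedding similarity and the toy corpus with a vector store, but the shape is the same: the retrieved text is the "reality" the model is asked to stay anchored to.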

Regards Conwyn