Is fast.ai part of a grand revolutionary movement? Software vs. learnware

Andrej Karpathy’s post last week, “Software 2.0”, renewed discussion of a hot topic: are we in a software revolution?

To uriy@dopamine.ai, the answer is “Yes”, and here is his comparison:

Some links here:

https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/fcs16learnware.pdf

How relevant are fast.ai and its community to this?


I’ve been thinking about this too since reading that thought-provoking post by Andrej Karpathy… it makes total sense to me, especially because Andrej is such a hands-on, practical guy…

Also see this sequel from Pete Warden of Google: Deep Learning is Eating Software

#excitingtimes

Well, what is software? We have a whole bunch of low-level plumbing: the drivers, the low-level network protocol stacks, the OSes. I don’t see how to replace that with DL. For all of this low-level stuff there is an exact sequence of steps that must happen, and each network event or OS task must be handled in an exact way. So it’s all written in low-level languages, and a big problem is that these languages have inherent flaws, both in the language itself and in its execution model.

For instance, each arithmetic operation in C/C++ can, by default, silently overflow or trigger a handful of other faults, and the programmer is expected to write code by hand to handle those faults every single time two numbers are added together. It’s similar with multi-threading: you can invoke all kinds of locks and mutexes, but it is trivially easy to introduce a fault condition that requires a precise interleaving of multiple threads, only shows up once every thousand operating hours, and goes unnoticed until then.
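
To make the overflow point concrete, here is a minimal C++ sketch of the kind of manual checking that has to be written by hand for every addition that might overflow; the `checked_add` helper is purely illustrative, not any particular library’s API:

```cpp
// Minimal sketch of manual overflow handling in C/C++ (illustrative only).
// Unsigned arithmetic silently wraps and signed overflow is undefined
// behaviour, so nothing faults by default: the programmer has to add
// checks like this everywhere they matter.
#include <climits>
#include <cstdio>

// Returns false instead of overflowing; every caller has to remember to check.
bool checked_add(int a, int b, int* out) {
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b)) {
        return false;  // addition would overflow
    }
    *out = a + b;
    return true;
}

int main() {
    unsigned int u = 4000000000u;
    // Unsigned addition wraps around silently, with no error of any kind.
    std::printf("%u + 1000000000 silently wraps to %u\n", u, u + 1000000000u);

    int result;
    if (!checked_add(2000000000, 2000000000, &result)) {
        std::printf("signed add would overflow; handled explicitly\n");
    }
    return 0;
}
```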

I mean, some day AI agents may be sophisticated enough to write all this stuff autonomously from a spec. Maybe in the nearer future, they could take over the chore of formally analyzing a program and proving that the usual faults are all handled.

Then we’ve got a whole bunch of high-level application stuff, from the reams of extra plumbing you need for something like a videogame or a web browser to things like all those endless online forms. The ‘revolution’ here is mostly reuse: everybody now uses a premade videogame engine for a new project, embeds Chromium into their app instead of trying to write something that complicated, and uses a premade e-commerce framework instead of hand-rolling their own.

I’m not exactly sure where DL helps with all of this, to be honest. Maybe you can help me out here. Obviously we can make videogame AIs better by having them learn a policy in the server cloud, passively observing the outcomes of past games against humans, and then running an AI agent locally on the gamer’s PC/console that just looks the policy up (and making that fast enough that it doesn’t need a dedicated GPU).
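
As a rough sketch of that split, the heavy learning happens offline on the server fleet and the client only does a cheap table lookup from game state to action. Everything here (the state encoding, `PolicyTable`, the action names) is hypothetical, just to show the shape of the idea:

```cpp
// Hypothetical "ship a policy, not a model" sketch: the policy is distilled
// offline into a flat lookup table, and the client picks actions without
// needing a GPU or any learning machinery at runtime.
#include <cstdint>
#include <unordered_map>

enum class Action { Idle, Advance, Retreat, Flank };

// Table produced offline, keyed by a coarse hash of the game state.
using PolicyTable = std::unordered_map<uint64_t, Action>;

uint64_t encode_state(int hp, int enemy_hp, int distance) {
    // Coarse bucketing keeps the table small; this encoding is made up.
    return (static_cast<uint64_t>(hp / 10) << 32) |
           (static_cast<uint64_t>(enemy_hp / 10) << 16) |
           static_cast<uint64_t>(distance / 5);
}

Action choose_action(const PolicyTable& policy, uint64_t state) {
    auto it = policy.find(state);
    return it != policy.end() ? it->second : Action::Idle;  // fallback action
}

int main() {
    // In reality this table would be shipped as a data file from the servers.
    PolicyTable policy = {
        {encode_state(80, 30, 10), Action::Advance},
        {encode_state(20, 90, 10), Action::Retreat},
    };
    Action a = choose_action(policy, encode_state(80, 30, 10));
    return a == Action::Advance ? 0 : 1;
}
```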

And we can make e-commerce better by building DL-based robotic systems that actually deliver the physical goods, warehouse and pack them, and eventually make the products themselves and gather the raw materials to make more.

But I don’t understand how the existing software domains really get supplanted by DL. There will probably be fewer lucrative jobs in existing software, but that’s because of reuse and because the existing techniques are now taught at mass scale worldwide.

Eventually we’ll be able to build some mountain of layered DL networks that’s a fully sentient, capable AI, but that’s many generations (of the algorithms, not of people) away.