How AI Might Encode Emotion From Entropy of Action

Sharing part of a conversation between Brian Roemmele & Jordan B Peterson
that I found to be fascinating speculation about a possibility for AI emotion…

Extract:
It looks like [human] anxiety is an index of emergent entropy.

So imagine that you’re moving towards a goal: you’re driving your car to work. You’ve calculated the complexity of the pathway that will take you to work, and you’ve taken into account the energy and time demands that walking that pathway will require. That binds your energy and resource output estimates.

Now imagine your car fails. Well, what happens is that the path length to your destination has now become unspecifiably complex, and the anxiety that you experience is an index of that emergent entropy. So that’s negative; that’s a lot of negative emotion.

So now, on the positive emotion side, Friston taught me this the last time we talked… he said, look, positive emotion is also an index of entropy, but it’s entropy reduction. So if you’re heading towards a goal and you take a step forward, and you’re now closer to your goal, you’ve reduced the entropic distance between you and the goal. That’s signified by a dopaminergic spike, and the dopaminergic spike feels good, but it also reinforces the neural structures that underlie that successful step forward. That’s very much analogous to how an AI system learns, right, because it’s rewarded when it gets closer to a target.

[…]

Emotion Theory […] relates negative emotion to the emergence of entropy,
because at that point you’ve actually bridged the gap between
psychophysiology and thermodynamics itself, and if you add this new insight
of Friston’s on the positive emotion side, you’ve linked positive emotion to it too.

But it also implies that a computer could calculate an emotion analog, because it could index anxiety as an increase in entropy and it could index hope as a stepwise decrease in entropy in relation to a goal, and so we should be able to model positive and negative emotion that way.
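
To make that concrete, here is a minimal sketch (in Python) of what such an emotion analog might look like. Everything in it is an assumption for illustration: the `EntropicEmotionIndex` class, the use of Shannon entropy over a distribution of estimated path costs, and the `sensitivity` gain are not from the conversation, just one way to index anxiety as an entropy increase and hope as a stepwise decrease.

```python
import math
from typing import Dict, List, Optional


def path_entropy(path_cost_probs: List[float]) -> float:
    """Shannon entropy (in bits) of a distribution over estimated path costs.

    A sharply peaked distribution (the route to the goal is well specified)
    has low entropy; a flat one (the route has become unspecifiable) is high.
    """
    return -sum(p * math.log2(p) for p in path_cost_probs if p > 0.0)


class EntropicEmotionIndex:
    """Toy emotion analog: anxiety indexes entropy increase, hope indexes
    stepwise entropy reduction relative to a goal (hypothetical design)."""

    def __init__(self, sensitivity: float = 1.0):
        self.sensitivity = sensitivity                  # personality-like gain
        self.previous_entropy: Optional[float] = None

    def update(self, path_cost_probs: List[float]) -> Dict[str, float]:
        current = path_entropy(path_cost_probs)
        delta = 0.0 if self.previous_entropy is None else current - self.previous_entropy
        self.previous_entropy = current
        return {
            "entropy": current,
            "anxiety": self.sensitivity * max(delta, 0.0),   # entropy emerged
            "hope": self.sensitivity * max(-delta, 0.0),     # entropy reduced
        }


if __name__ == "__main__":
    agent = EntropicEmotionIndex()
    agent.update([0.9, 0.05, 0.05])                 # planned commute: low entropy
    print(agent.update([0.25, 0.25, 0.25, 0.25]))   # car fails: entropy spikes -> anxiety
    print(agent.update([0.7, 0.2, 0.1]))            # alternative route found -> hope
```

Read this way, the “hope” term has the same shape as a reward signal in reinforcement learning, which is the analogy drawn in the extract.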


That is thought provoking! :thinking:

This could indeed be an interesting way to implement emotion in an AI system.

The problem that comes to mind (it may not really be a problem, so much as a point for thought) is that emotion, and consequently the entropy that encodes it, is relative. A situation that induces a feeling of, say, being overwhelmed may not induce the same feeling in somebody else. There won’t be a single right way to implement that feeling in an AI system. Then again, that could be the way to create various personalities.
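
As one hedged illustration of that point: the mapping from an objective entropy change to a felt intensity could itself be a tunable gain, and different settings of it would behave like different personalities. The `sensitivity` parameter below is purely hypothetical.

```python
def felt_anxiety(entropy_increase: float, sensitivity: float) -> float:
    """Same objective entropy change, different subjective intensity.

    `sensitivity` is a hypothetical per-agent gain: a higher value means the
    agent is more easily overwhelmed by the same emergent entropy.
    """
    return sensitivity * entropy_increase


for label, gain in [("calm agent", 0.3), ("average agent", 1.0), ("anxious agent", 2.5)]:
    print(label, felt_anxiety(entropy_increase=1.4, sensitivity=gain))
```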

I’m also racking my brain for emotions that could not be described in terms of entropy, and while not much comes to mind, a couple of candidates are excitement and jealousy. On further thought, they probably could be. But how about closely related emotions such as envy and jealousy? They feel much the same, yet they’re handled differently: envy is taken more positively, jealousy less so. Then again, that could probably be coded or trained in some way.

Lots of philosophical thoughts :smile:


Calibration of an emotion would come from life experience (human & AI). For the example of the car breaking down, if it’s the first time the car breaks down I might feel one level of anxiety. If it’s the tenth time, I’ll probably feel much less anxiety, knowing that I survived the other nine. Or someone who had a hard childhood might be desensitized to lesser hardships.
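
A rough sketch of that calibration idea: the anxiety response to a given kind of setback could be discounted by how many times the agent has already survived it. The decay rule below is an arbitrary assumption, just to show the shape of the habituation.

```python
from collections import defaultdict


class HabituatingAnxiety:
    """Anxiety response that decays with repeated, survived exposures to the
    same kind of setback (hypothetical habituation rule)."""

    def __init__(self, base_sensitivity: float = 1.0, habituation: float = 0.5):
        self.base_sensitivity = base_sensitivity
        self.habituation = habituation     # how fast repetition blunts the response
        self.survived = defaultdict(int)   # setback label -> times survived so far

    def anxiety(self, setback: str, entropy_increase: float) -> float:
        # The more often this setback has been survived, the smaller the response.
        gain = self.base_sensitivity / (1.0 + self.habituation * self.survived[setback])
        return gain * entropy_increase

    def record_survival(self, setback: str) -> None:
        self.survived[setback] += 1


agent = HabituatingAnxiety()
for breakdown in range(1, 11):
    print(f"breakdown #{breakdown}: anxiety {agent.anxiety('car_breakdown', 1.4):.2f}")
    agent.record_survival("car_breakdown")
```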

Yeah, that does make sense; a reinforcement system of sorts. Make the AI system start from a blank slate and let it “figure out” how much it wants to “feel” emotion.

Can’t think of how it could figure that out, though, if it hasn’t been “taught” emotion by being shown prior examples. Kinda brings us back to the notion that it would have been “taught” predetermined examples that induce certain levels of emotion; the emotion it feels would be trained off of the emotions it encounters in the examples provided to it.
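
If the calibration really has to be taught from prior examples, the simplest version is fitting the gain to labelled pairs of (entropy change, reported emotion). A toy least-squares fit, with the training pairs invented purely for illustration:

```python
# Toy supervised calibration: fit a single sensitivity gain so that the
# agent's anxiety index matches example ratings (the data here is invented).
examples = [
    (0.2, 0.3),   # (entropy increase, rated anxiety)
    (1.4, 2.0),
    (2.0, 3.1),
]

# Least-squares estimate of `sensitivity` for: anxiety = sensitivity * delta_entropy
numerator = sum(delta * rating for delta, rating in examples)
denominator = sum(delta * delta for delta, _ in examples)
sensitivity = numerator / denominator

print(f"learned sensitivity: {sensitivity:.2f}")
print(f"predicted anxiety for a new entropy spike of 1.0: {sensitivity * 1.0:.2f}")
```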
