< lecture thoughts – BPTT (backprop through time), I think >
I wonder how you could apply this to vision – say, a self-driving car or a plane, where context is very important (maybe especially if you could learn a weighting, since some states of a car or plane have a large effect on what's possible later on)… is there a way to encode images / state the way Jeremy just showed with words?
– around 1:50:00 in the lecture, when Jeremy showed the array of text after someone asked a question.
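Maybe the image analogue of a word embedding is just a small CNN that squashes each frame down to a fixed-size vector, so a sequence of frames becomes a sequence of "tokens" – a rough sketch in PyTorch (FrameEncoder, state_dim, etc. are all made-up names, not anything from the lecture):

```python
# Sketch: encode each camera frame into a fixed-size vector,
# analogous to the word embeddings in the lecture.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Maps one RGB frame to a state vector (the image 'embedding')."""
    def __init__(self, state_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pool -> (B, 32, 1, 1)
        )
        self.proj = nn.Linear(32, state_dim)  # project to embedding size

    def forward(self, x):                     # x: (B, 3, H, W)
        h = self.conv(x).flatten(1)           # (B, 32)
        return self.proj(h)                   # (B, state_dim)

frames = torch.randn(8, 3, 64, 64)            # a batch of 8 frames
print(FrameEncoder()(frames).shape)           # torch.Size([8, 128])
```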
Hmm… how to do backprop-through-time for images… maybe a multi-input setup: perceived state (each frame encoded to a vector) --> fed into a 'decision-maker' NN that's unrolled over time…
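Continuing the musing – a minimal sketch of that decision-maker, assuming PyTorch and reusing FrameEncoder from the sketch above: encode each frame, run the sequence through a GRU (the loss at the end backprops through every timestep, which is the "through time" part of BPTT), plus a learned weighting over the hidden states, per the note above that some states matter more for what's possible later. Every name here (DecisionMaker, attn, n_actions, …) is hypothetical.

```python
# Sketch of the multi-input 'decision-maker': per-frame encoder
# -> RNN over time -> learned weighting of hidden states -> action head.
import torch
import torch.nn as nn

class DecisionMaker(nn.Module):
    def __init__(self, state_dim=128, hidden=256, n_actions=4):
        super().__init__()
        self.encoder = FrameEncoder(state_dim)        # from the sketch above
        self.rnn = nn.GRU(state_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)              # learned per-step weight
        self.head = nn.Linear(hidden, n_actions)      # e.g. steering bins

    def forward(self, frames):                        # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        states = self.encoder(frames.flatten(0, 1))   # (B*T, state_dim)
        states = states.view(B, T, -1)                # (B, T, state_dim)
        hs, _ = self.rnn(states)                      # (B, T, hidden)
        w = torch.softmax(self.attn(hs), dim=1)       # weight each timestep
        ctx = (w * hs).sum(dim=1)                     # weighted context vector
        return self.head(ctx)                         # (B, n_actions)

# Training step: the loss backpropagates through every timestep of
# the GRU -- that is the BPTT part.
model = DecisionMaker()
opt = torch.optim.Adam(model.parameters())
clip = torch.randn(2, 10, 3, 64, 64)                  # 2 clips, 10 frames each
target = torch.randint(0, 4, (2,))                    # e.g. discrete actions
loss = nn.functional.cross_entropy(model(clip), target)
opt.zero_grad(); loss.backward(); opt.step()
```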