I know this isn’t necessarily part of the course, but I thought a lot of people here might have a better idea of how Theano works internally. Say I want to do a simple dot product between a matrix and a vector.

It makes sense that we first need to create the variables, as that’s how the computation graph is built:

```
A = T.matrix("A")
v = T.vector("v")
```

then we define the dot product, which continues building the computation graph by adding an operation between `A` and `v`, and stores the result of the dot product in a new graph node `x`:

```
x = A.dot(v)
```

then it makes sense that we have to compile the graph into actual executable code, so I assume this is where CUDA’s nvcc gets the job done and builds a native function:

```
f = theano.function(inputs=[A, v], outputs=x)
```
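Once compiled, `f` is just a callable that takes concrete NumPy arrays and returns the product. For reference, `f(A_val, v_val)` should give the same result as the plain NumPy computation (the example values here are made up):

```python
import numpy as np

# Concrete values for the symbolic inputs A and v (made-up example data).
A_val = np.array([[1.0, 2.0], [3.0, 4.0]])
v_val = np.array([10.0, 20.0])

# f(A_val, v_val) should return the same array as the plain NumPy dot product:
result = A_val.dot(v_val)
print(result)  # [ 50. 110.]
```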

but what I don’t get is why we need to specify both the `inputs` and the `outputs`. Why can’t Theano simply infer the inputs from the resulting node `x`? Is this because we might create a function for only a portion of the whole graph?

If so, can I take an arbitrary computation graph and “slice” a part of it as a function by simply saying which nodes are the starting points and which I want as the result, and build a Theano function out of that?

edit: Just to clarify, my question is: **why** do I need to specify both the `inputs` and the `outputs` when the computation dependencies are already defined by the `outputs`? Theano would have to verify that the graph is complete anyway, right?