How is learn/learn.model modified globally in fit and callbacks?

I am having a hard time understanding some fundamental aspects of fit and callbacks, which may stem from some basic misunderstandings I have about the use of Python in fastai.

Q1. In fit, I am seeing things like

loss = loss_batch(learn.model, xb, yb, learn.loss_func, learn.opt, cb_handler)

However, loss_batch only returns

return loss.detach().cpu()

So the modifications made to learn.model inside loss_batch are never returned by the function, and nothing in fit assigns new weights and biases back to learn. How is learn being modified so that fit can proceed?
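To make my confusion concrete, here is a stripped-down sketch of the pattern I am asking about (my own simplification with a toy model and optimizer, not the actual fastai source):

import torch
from torch import nn
import torch.nn.functional as F

def loss_batch(model, xb, yb, loss_func, opt):
    # computes the loss and steps the optimizer, but only ever
    # returns the detached loss value -- never the model itself
    loss = loss_func(model(xb), yb)
    loss.backward()
    opt.step()
    opt.zero_grad()
    return loss.detach().cpu()

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
xb, yb = torch.randn(4, 10), torch.randint(0, 2, (4,))

before = model.weight.detach().clone()
loss_batch(model, xb, yb, F.cross_entropy, opt)
print(torch.equal(before, model.weight))  # False: the weights changed anyway

Nothing is ever assigned back to model, and yet its weights are different after the call.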

I see the same thing when defining an optimizer. For example, in the 03_minibatch_training.ipynb notebook the following is defined:

class Optimizer():
    def __init__(self, params, lr=0.5): self.params,self.lr = list(params),lr

    def step(self):
        # update each parameter in place from its gradient
        with torch.no_grad():
            for p in self.params: p -= p.grad * self.lr

    def zero_grad(self):
        for p in self.params: p.grad.data.zero_()

opti = Optimizer(model.parameters(), lr=3e-3)

How does opti.step() manage to modify the parameters in model? I do not see why changing self.params would change model.parameters(); from what I can tell, these are two different variables. I did a quick sanity check:

>>> x = 1
>>> class check():
...     def __init__(self, x):
...         self.x = x
...     def step(self):
...         self.x += 10
...
>>> print(x)
1
>>> cc = check(x)
>>> cc.step()
>>> print(x)
1
>>> print(cc.x)
11
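However, if I repeat the check with a mutable object like a list instead of an int, the change does propagate, which makes me suspect this has something to do with both names referring to the same underlying object:

>>> x = [1]
>>> cc = check(x)
>>> cc.x is x           # both names point at the very same list object
True
>>> cc.x.append(10)     # mutating it through cc.x ...
>>> print(x)            # ... is visible through x as well
[1, 10]

Is this the mechanism that lets opti.step() change model.parameters(), since tensors are mutable and p -= ... modifies them in place?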

Q2. This is probably related to my first question, but I would like to understand how defining a LearnerCallback allows modifications to the learner inside fit. If we look at the __init__ method of LearnerCallback:

> class LearnerCallback(Callback):
>     "Base class for creating callbacks for a `Learner`."
>     def __init__(self, learn):
>         self._learn = weakref.ref(learn)
>         self.exclude,self.not_min = ['_learn'],[]
>         setattr(self.learn, self.cb_name, self)

I am not quite sure what weakref.ref is, but I assume it is making some memory-efficient copy of learn inside LearnerCallback. How is it then that changes to something like self.learn.data inside an on_batch_begin would change the original learn.data?

Lastly, what part of LearnerCallback allows me to reference self.learn later on? I only see the attribute self._learn = weakref.ref(learn) being set. Where is self.learn defined?
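For what it is worth, a quick REPL experiment (a plain made-up class, nothing fastai-specific) suggests that weakref.ref does not copy anything; it returns a callable that hands back the original object:

>>> import weakref
>>> class Learner(): pass
...
>>> learn = Learner()
>>> r = weakref.ref(learn)
>>> r() is learn        # calling the ref gives back the original object, not a copy
True

So if self.learn were somehow calling self._learn() under the hood, that would explain why the changes propagate, but I cannot find where self.learn itself is defined.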

Any help would be greatly appreciated.