I’m a firm believer in log-log plots for showing loss vs. epoch: they make it much easier to see what’s really happening, and to extrapolate where the loss is likely to be at later epochs.

Could a backwards-compatible option be added to `learn.recorder.plot_losses()` in `basic_train.py` to allow for this? Something like…

```python
def plot_losses(self, log:bool=False)->None:
    "Plot training and validation losses."
    _, ax = plt.subplots(1,1)
    iterations = range_of(self.losses)
    val_iter = np.cumsum(self.nb_batches)
    if log:
        ax.loglog(iterations, self.losses)
        ax.loglog(val_iter, self.val_losses)
    else:
        ax.plot(iterations, self.losses)
        ax.plot(val_iter, self.val_losses)
```

(I couldn’t figure out a one-line way to do this in the fastai coding style, because matplotlib requires separate plotting calls for log vs. linear axes.)

How would one write a unit test for such a change?
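One possibility, sketched below: test the plotting logic in isolation and assert on the resulting axes' scales. The helper `plot_losses_demo` is a hypothetical stand-in for `Recorder.plot_losses` (no `Learner`/`Recorder` involved), so its name and signature are assumptions, not fastai API; an integration-level version would run the same assertions against `learn.recorder` using whatever fixtures the fastai test suite provides.

```python
# Sketch of a unit test for the proposed `log` flag. plot_losses_demo
# re-implements the plotting logic standalone; it is illustrative only.
import matplotlib
matplotlib.use("Agg")  # headless backend so the test can run in CI
import matplotlib.pyplot as plt
import numpy as np

def plot_losses_demo(losses, val_iter, val_losses, log=False):
    "Stand-in for Recorder.plot_losses with the proposed `log` argument."
    _, ax = plt.subplots(1, 1)
    iterations = range(len(losses))
    if log:
        ax.loglog(iterations, losses)
        ax.loglog(val_iter, val_losses)
    else:
        ax.plot(iterations, losses)
        ax.plot(val_iter, val_losses)
    return ax

def test_plot_losses_log_flag():
    losses = np.exp(-np.linspace(0.0, 5.0, 100))      # fake training curve
    val_iter = np.cumsum([25, 25, 25, 25])            # fake batch counts
    val_losses = np.exp(-np.linspace(0.0, 5.0, 4))    # fake validation curve
    # log=True should put both axes on a log scale...
    ax = plot_losses_demo(losses, val_iter, val_losses, log=True)
    assert ax.get_xscale() == "log" and ax.get_yscale() == "log"
    # ...and the default must stay linear, for backwards compatibility.
    ax = plot_losses_demo(losses, val_iter, val_losses)
    assert ax.get_xscale() == "linear" and ax.get_yscale() == "linear"
    plt.close("all")
```

Run with `pytest`: asserting on `get_xscale()`/`get_yscale()` avoids any image comparison, which keeps the test fast and deterministic.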