I was talking about not using the Learner class at all. It looks like I was wrong, and there isn't a simple PyTorch train-function implementation there. Here is a basic example (UNTESTED CODE):

```
# Assumes fastai-era helpers: to_np comes from fastai, tnrange/tqdm from tqdm.
import numpy as np
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from tqdm import tqdm, tnrange

names = ["error"]
layout = "{!s:10} " * len(names)
epochs = 50
criterion = F.binary_cross_entropy
dataloader = ...  # your DataLoader here
net = ...  # your model here
optimizer = optim.Adam(net.parameters(), lr=0.001, betas=(0.9, 0.999))

def print_stats(epoch, values, decimals=6):
    layout = "{!s:^10}" + " {!s:10}" * len(values)
    values = [epoch] + list(np.round(values, decimals))
    print(layout.format(*values))

for epoch in tnrange(epochs, desc="Epoch"):
    t = tqdm(iter(dataloader), leave=False, total=len(dataloader))
    for i, batch in enumerate(t):
        xs = Variable(batch[0]).cuda()
        ys = Variable(batch[1]).cuda()
        optimizer.zero_grad()
        y_hats = net(xs)
        # F.binary_cross_entropy takes (input, target), so predictions come first
        err = criterion(y_hats, ys)
        err.backward()
        optimizer.step()
        t.set_postfix(err=to_np(err.mean()))
    if epoch == 0:
        print(f"\n{layout.format(*names)}")
    print_stats(epoch, [to_np(err.mean())])
```
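If it helps, the same loop also works in plain PyTorch without the fastai/notebook helpers (`Variable`, `to_np`, `tnrange`). Here is a minimal self-contained sketch on made-up synthetic data; the model, dataset, and hyperparameters are all hypothetical stand-ins for yours:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Hypothetical synthetic task: 4 features, binary label = "features sum to > 0".
xs = torch.randn(256, 4)
ys = (xs.sum(dim=1, keepdim=True) > 0).float()
dataloader = DataLoader(TensorDataset(xs, ys), batch_size=32, shuffle=True)

# Toy model for illustration only.
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
optimizer = optim.Adam(net.parameters(), lr=0.01)

losses = []  # mean loss per epoch, as a quick sanity check
for epoch in range(20):
    epoch_losses = []
    for batch_x, batch_y in dataloader:
        optimizer.zero_grad()
        y_hat = net(batch_x)
        loss = F.binary_cross_entropy(y_hat, batch_y)  # (input, target) order
        loss.backward()
        optimizer.step()
        epoch_losses.append(loss.item())
    losses.append(sum(epoch_losses) / len(epoch_losses))
```

On this separable toy data the epoch-mean loss should trend downward; swap in your own `DataLoader` and model, and move tensors to the GPU with `.cuda()` if you have one.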

I hope you can tailor it to your needs. Good luck!