Chapter 4 - Further research MNIST

Hi! I wanted to make the full MNIST digit classifier. I got some advice, so I made this digit classifier, and I did everything exactly like in the book (naturally, I've read the book many times, revised calculus, slope-intercept form, limits, and matrix multiplication, spent days understanding how neural networks work, and asked many questions, so in theory I understand the basic concepts… :slight_smile: )
However, I made some adjustments, and it doesn't work as expected. First of all, my accuracy is always 0.9, and I have no idea why. Secondly, I can't use my model: the prediction method keeps throwing an AttributeError (`'list' object has no attribute 'decode_batch'`), and I have no idea why either. Here's my code (as you can see I have difficulties using preformatted text, I'm sorry). It's anything but efficient, good, fast, or working, but I'm trying:

```python
from fastai.vision.all import *
from fastbook import *

path = untar_data(URLs.MNIST)
path_train = path/'training'
path_valid = path/'testing'

# images live in folders named after their digit (e.g. training/3/xxx.png),
# so the parent folder name is the label
train_x = get_image_files(path_train).sorted()
train_y = [int(element.parent.name) for element in train_x]
train_x = [tensor(Image.open(element)) for element in train_x]
train_x = torch.stack(train_x).float() / 255
train_x = train_x.view(-1, 28*28)

# one-hot encode the integer labels
for i in range(len(train_y)):
    sample = [0] * 10
    sample[train_y[i]] = 1
    train_y[i] = sample
train_y = tensor(train_y)
dset = list(zip(train_x, train_y))

valid_x = get_image_files(path_valid).sorted()
valid_y = [int(element.parent.name) for element in valid_x]
valid_x = [tensor(Image.open(element)) for element in valid_x]
valid_x = torch.stack(valid_x).float() / 255
valid_x = valid_x.view(-1, 28*28)
for i in range(len(valid_y)):
    sample = [0] * 10
    sample[valid_y[i]] = 1
    valid_y[i] = sample
valid_y = tensor(valid_y)
valid_dset = list(zip(valid_x, valid_y))

def mnist_loss(predictions, targets):
    predictions = predictions.sigmoid()
    return torch.where(targets==1, 1-predictions, predictions).mean()

dl = DataLoader(dset, batch_size=256)
valid_dl = DataLoader(valid_dset, batch_size=256)
dls = DataLoaders(dl, valid_dl)

simple_net = nn.Sequential(
    nn.Linear(28*28, 30),
    nn.ReLU(),
    nn.Linear(30, 10)
)

def batch_accuracy(xb, yb):
    preds = xb.sigmoid()
    correct = (preds>0.5) == yb
    return correct.float().mean()
```

It’s hard to understand the code without the formatting. Maybe changing the following will help?

def mnist_loss(predictions, targets):
    predictions = predictions.softmax(1)
    return torch.where(targets==1, 1-predictions, predictions).mean()
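(Side note, just for intuition: `softmax(1)` normalises each row of predictions to sum to 1, so the `torch.where` contributes `1 - p` at the target position and `p` everywhere else. A tiny standalone check with made-up numbers:)

```python
import torch

# one row of fake logits, normalised with softmax along dim 1
predictions = torch.tensor([[2.0, 1.0, 0.1]]).softmax(1)
print(predictions.sum(1))  # each row sums to 1

# one-hot target for class 0: loss takes 1-p there, p elsewhere
targets = torch.tensor([[1, 0, 0]])
loss = torch.where(targets == 1, 1 - predictions, predictions).mean()
```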

Same for the metric:

def batch_accuracy(xb, yb):
    preds = xb.softmax(1)
    correct = (preds>0.5) == yb
    return correct.float().mean()
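One caveat I'd flag (my reading, not from the book): with one-hot targets, `(preds>0.5) == yb` is an element-wise comparison across all 10 positions, so a model that keeps every output below 0.5 already scores 9 out of 10 positions "correct". That pins the metric near 0.9 whatever the activation. Comparing argmaxes instead avoids the threshold entirely; a sketch with hypothetical values:

```python
import torch

# four one-hot targets (classes 2, 0, 7, 7) and uniformly "low" predictions
yb = torch.zeros(4, 10)
yb[torch.arange(4), torch.tensor([2, 0, 7, 7])] = 1
preds = torch.full((4, 10), 0.1)

# element-wise accuracy: 36 of 40 positions match, even though
# no digit was actually predicted correctly
elementwise = ((preds > 0.5).float() == yb).float().mean()
print(elementwise)  # tensor(0.9000)

# compare the predicted class index instead of thresholding
def batch_accuracy_argmax(xb, yb):
    return (xb.argmax(dim=1) == yb.argmax(dim=1)).float().mean()

logits = torch.randn(4, 10)  # stand-in for raw model outputs
print(batch_accuracy_argmax(logits, yb))
```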

Also a tip for formatting.

If you insert 4 spaces at the beginning of a line, it will be in code mode. The `code` is used for small in-line code fragments. You can check the preview while writing a post, to see if the formatting is right.

You can get even more fancy by starting the code section with ```` ```python ```` and ending it with ```` ``` ````, like this:

    ```python
    def foo():
    ```

which will become

```python
def foo():
```

Thank you! I didn't want to continue reading the book before finishing the Further Research part, but the next, smaller chapter is about softmax :smiley:
Nevermind, thank you! Have a great day!
Update: I ran a few calculations and got the same accuracy, with a slightly better (in my case, worse) loss.

Hello, I am trying to do the research for chapter 4. You did really nice work! May I ask what this loop does?

```python
for i in range(len(train_y)):
    sample = [0] * 10
    sample[train_y[i]] = 1
    train_y[i] = sample
```

Well, that loop turned out to be a mistake! :slight_smile: It one-hot encodes the integer labels (label 3 becomes `[0,0,0,1,0,0,0,0,0,0]`).
Afterwards I continued reading the book, learned about CrossEntropyLoss and the softmax activation function, and rewrote my code; CrossEntropyLoss expects plain integer class indices, so the one-hot loop isn't needed any more.
After a single epoch I got 90% accuracy. I'm not sure it's working correctly, but it seems sensible; I used the complete set of 60,000 images.
Here's the forum post about my code:
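For anyone reading along, a minimal sketch of what such a CrossEntropyLoss rewrite looks like in principle (hypothetical names and values, not the actual code from that post):

```python
import torch
import torch.nn as nn

# with CrossEntropyLoss the labels stay as plain integers; no one-hot loop
targets = torch.tensor([3, 0, 7])    # e.g. labels read from the folder names

logits = torch.randn(3, 10)          # stand-in for the model's raw outputs
loss_fn = nn.CrossEntropyLoss()      # applies log-softmax + NLL internally
loss = loss_fn(logits, targets)
```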


Thank you very much, I appreciate the help!!