Draft of fastai book

Good point, thanks Sanyam :slight_smile:

How do I avoid Jupyter notebook metadata conflicts when submitting PRs for fastbook? Can someone point me to the steps?

Standard Jupyter commands, I think, no extension needed:

  • Esc then Up/Down moves up/down a cell; you can press Esc then Down, Down, Down, Down to move down four cells quickly.
  • Esc then Spacebar for page down; Esc then Shift-Spacebar for page up.

Follow the fastai2 repo README instructions, i.e. install nbdev and run the command for git hooks. That's about it.
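For reference, a minimal sketch of that setup, assuming the nbdev version current at the time (the hook command was later renamed to `nbdev_install_hooks` in nbdev 2):

```shell
# Install nbdev, then register its git hooks from inside the fastbook clone.
# The hooks strip notebook outputs and metadata on commit, which avoids most
# merge conflicts in .ipynb files when submitting PRs.
pip install nbdev
cd fastbook
nbdev_install_git_hooks
```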


I have a comment on chapter 1 regarding which version of the book you open, the full or the clean one. Perhaps in a server environment this is automated, but for myself and any individual running a local version, how you get these to display in Jupyter depends on where you make the call to jupyter notebook.

I cloned the fastbook repository and changed directory to fastbook. I then ran jupyter notebook in my terminal window, and the result is:

Screenshot 2020-03-08 at 11.32.52

As you can see, we have directories and files listed (not all are shown here, only the first few). The file with the green icon indicates a running notebook, which in my case is open in another Safari tab; the image shows the earlier tab, which is the result of running the jupyter notebook command.

To get to the cleaned version of the notebooks, we must click on the clean item with the folder icon, which gives this image:

Screenshot 2020-03-08 at 11.33.15

This is an image of what's shown after switching to the clean folder and clicking the clean version of 01_intro. Note that there are now three tabs open as a result of the jupyter notebook command and clicking items: the original tab, which now displays this image; the tab displaying the full version of 01_intro; and another tab displaying the clean version.

Forgive me; I have raised this because I thought it may be an issue for people unfamiliar with Jupyter. Perhaps in your top-down approach this gets resolved later; if not, I think something along these lines should be added. I feel that trying to gauge your audience's needs is a difficult challenge.
Feel free to use any of this if required.

This item refers to the book help page and not to the book directly.


Thanks, but I find scrolling with Vimium a lot closer to using the mouse. I find the Jupyter commands a lot more abrupt and not ideal for reading.


Thanks @Brad_S. It looks fine now, let’s see if it gets accepted.

Thanks @jeremy for sharing the draft. I am reading the book draft from O'Reilly.

I just found that in chapter 4, Fig 4-1 erroneously has a name mismatch in its caption.

One small typo in chapter 2:

Instead of the grizzly bear, the number 3 image is displayed.

Missing chapter reference in 04_mnist_basics.ipynb:

I'd love to see a concrete example of using a 'black-box' computation in the loss function. I have a case where it's difficult to use PyTorch tensors to do the math I need in the loss function: I need complex numbers and scipy, but PyTorch doesn't support complex math, and it's non-trivial to rewrite the bits of scipy I need using only real numbers.

A sidebar in chapter 3, or maybe in a later, more advanced chapter, would be good. Basically, it would describe that there are cases where it's difficult to use PyTorch tensors in your loss function; show how to convert a torch.Tensor to a numpy array, do some math, and convert back (for the forward pass); and show how to create gradients for what is essentially a black box on the backward pass. Of course, there would be the caveat that it will be way slower than a proper GPU-based backward pass, but it will at least function.
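For what it's worth, here is a minimal sketch of that pattern using `torch.autograd.Function`, with `np.square` as a stand-in for whatever scipy routine you actually need, and a hand-derived gradient supplied on the backward pass:

```python
import numpy as np
import torch

class NumpySquare(torch.autograd.Function):
    """Black-box forward pass in NumPy, hand-written backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Leave the PyTorch graph: convert to NumPy, do the math there.
        x_np = x.detach().cpu().numpy()
        out_np = np.square(x_np)  # stand-in for any NumPy/SciPy routine
        return torch.from_numpy(out_np).to(x.device)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Hand-derived gradient of x**2 is 2*x; chain rule via grad_output.
        return grad_output * 2 * x

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
loss = NumpySquare.apply(x).sum()
loss.backward()
print(x.grad)  # tensor([2., 4., 6.])
```

The same skeleton works for a scipy call: do the real computation in `forward`, and return whatever gradient you can derive (analytically or numerically) in `backward`. As noted above, this stays on the CPU and will be much slower than native tensor ops.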

Yeah, it should be Left Right Centre.

In chapter 10 (NLP), in this line:
tokens = tfm(stream)

what is this "tfm"? It gives me an error:
NameError: name 'tfm' is not defined

There are no other errors up to that point.

thank you :wink:


I have a question regarding the loss function in the 06_multicat.ipynb chapter.
The loss is stated as:

def binary_cross_entropy(inputs, targets):
    inputs = inputs.sigmoid()
    return torch.where(targets==1, 1-inputs, inputs).log().mean()

Shouldn't it be:

def binary_cross_entropy_updated(inputs, targets):
    inputs = inputs.sigmoid()
    return -torch.where(targets==1, inputs, 1-inputs).log().mean()

The two changes are:
1) the negative sign;
2) inputs and 1-inputs are interchanged in the torch.where.
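As a quick sanity check, here is a plain-Python transcription of the corrected formula (math.log and a list comprehension standing in for the tensor ops), confirming it behaves like a proper BCE loss:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def binary_cross_entropy_updated(inputs, targets):
    # Per element: -log(p) when target==1, -log(1-p) when target==0,
    # mirroring -torch.where(targets==1, inputs, 1-inputs).log().mean()
    probs = [sigmoid(x) for x in inputs]
    losses = [-math.log(p if t == 1 else 1 - p)
              for p, t in zip(probs, targets)]
    return sum(losses) / len(losses)

# A confident correct prediction gives a small positive loss;
# a confident wrong prediction gives a large one.
low = binary_cross_entropy_updated([4.0], [1])   # ~0.018
high = binary_cross_entropy_updated([4.0], [0])  # ~4.018
```

Without the leading negative sign, the original version returns negative values that decrease as predictions get worse, so it cannot be minimized meaningfully.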


Reading chapter 2, image augmentation: from what I understand, RandomResizedCrop doesn't squish images; however, the third from the left looks squished to me?

It does some random squishing by default too.


Yes, in Colab use this function:

    from google.colab import files
    uploaded = files.upload()

You can also upload any file type, e.g. .obj for PyTorch3D: http://www.bimhox.com/2020/03/15/pytorch3d-3d-deep-learning-in-architecture/


I also found these items not matching in the book. FYI.

When this is available on Amazon, will there be a Kindle version?
I'm happy to buy a physical book, but with the current state of the world I think I'd have to wait a long time (due to shipping issues around the world these days).

Yes I believe so.
