Follow the fastai2 repo README instructions - i.e. install nbdev and run the command for git hooks. That's about it.
I have a comment on chapter 1 regarding which version of the book you open, the full or the clean. Perhaps in a server environment this is automated. For myself, or any individual running a local copy, how these are displayed in Jupyter depends on where you make the call to jupyter notebook.
I cloned the fastbook repository, changed directory to fastbook, and then ran jupyter notebook in my terminal window. The result is a listing of directories and files (not all are shown in my screenshot, only the first few). The file with the green icon indicates a running notebook, which in my case is open in another Safari tab; the screenshot shows the earlier tab produced by the jupyter notebook command.
To get to the cleaned versions of the notebooks, we must click the clean item with the folder icon. After switching to the clean folder and clicking the clean version of 01_intro, there are now three tabs open as a result of the jupyter notebook command and the items clicked: the original tab showing the file listing, a tab displaying the full version of 01_intro, and another tab displaying the clean version.
Forgive me; I have raised this because I thought it may be an issue for people unfamiliar with Jupyter. Perhaps in your top-down approach this gets resolved later; if not, I think something along these lines should be added. I feel that trying to gauge your audience's needs is a difficult challenge.
Feel free to use any of this if required.
Edit:-
This item refers to the book help page and not to the book directly.
Thanks, but I find scrolling with Vimium much closer to using the mouse. Navigating with Jupyter commands feels a lot more abrupt and isn't ideal for reading.
Thanks @jeremy for sharing the draft. I am reading the book draft from O'Reilly.
I just noticed that in chapter 4, Fig 4-1 has a name mismatch in its caption.
One small typo in chapter 2: instead of a grizzly bear, the number 3 image is displayed.
I’d love to see a concrete example of using a ‘black-box’ computation in the loss function. I have a case where it’s difficult to use pytorch tensors to do the math I need in the loss function. This is because I need the use of complex numbers, and scipy, but PyTorch doesn’t support complex math, and it’s non-trivial to rewrite the bits of scipy I need using only real numbers.
A sidebar in chapter 3 or maybe in a later more advanced chapter would be good. Basically it would describe that there are cases where it’s difficult to use PyTorch tensors in your loss function, show how to convert something from a torch.tensor to a numpy array, do some math, convert back (for the forward pass), and how to create gradients on what is essentially a black-box on the backward pass. Of course, there would be the caveat that it will be way slower than a proper GPU based backwards pass, but it’ll at least function.
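As a tiny illustration of one piece of that idea, here is a hedged pure-Python sketch of a numerical (finite-difference) gradient for a black-box function. `black_box` is a made-up stand-in for the scipy/complex-number math; in PyTorch you would wrap this in a custom `torch.autograd.Function`, with `forward` converting the tensor to a numpy array and back, and `backward` returning this numerical gradient multiplied by the incoming `grad_output`.

```python
import math

def black_box(x):
    # made-up stand-in for a scipy/complex-number computation that
    # autograd can't trace: |e^{ix} - 1|, which equals 2*|sin(x/2)|
    return abs(complex(math.cos(x), math.sin(x)) - complex(1.0, 0.0))

def finite_diff_grad(f, x, eps=1e-6):
    # central-difference estimate of df/dx, usable as a backward pass
    # for a black-box scalar op (slow, but it functions)
    return (f(x + eps) - f(x - eps)) / (2 * eps)

g = finite_diff_grad(black_box, 0.5)
print(g)  # should be close to cos(0.25), the analytic derivative
```

This is far slower than a proper analytic backward pass, and it needs one extra function evaluation per input dimension, but it lets training proceed when rewriting the math in pure tensor ops is impractical.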
Yeah, it should be Left Right Centre.
In chapter 10 (NLP), in this line:

```python
tokens = tfm(stream)
```

what is this "tfm"? It gives me an error:

```
NameError: name 'tfm' is not defined
```

There are no other errors up to that point. Thank you.
I have a question regarding the loss function in the 06_multicat.ipynb chapter. The loss is stated as:

```python
def binary_cross_entropy(inputs, targets):
    inputs = inputs.sigmoid()
    return torch.where(targets==1, 1-inputs, inputs).log().mean()
```

Shouldn't it be:

```python
def binary_cross_entropy_updated(inputs, targets):
    inputs = inputs.sigmoid()
    return -torch.where(targets==1, inputs, 1-inputs).log().mean()
```

The two changes are:

1. the negative sign was added;
2. `inputs` and `1-inputs` were interchanged in the `torch.where`.
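To sanity-check the sign and the branch order without PyTorch at all, here is a hedged pure-Python sketch of the same computation (`sigmoid` and `bce` are scalar stand-ins for the tensor versions):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def bce(logits, targets):
    # corrected form: -log(p) when the target is 1,
    # -log(1-p) when the target is 0, averaged over items
    total = 0.0
    for x, t in zip(logits, targets):
        p = sigmoid(x)
        total += -math.log(p if t == 1 else 1 - p)
    return total / len(logits)

# confident, correct predictions -> small positive loss
good = bce([4.0, -4.0], [1, 0])
# confident, wrong predictions -> large positive loss
bad = bce([-4.0, 4.0], [1, 0])
print(good < bad)  # True
```

Note that without the leading minus sign the result would be negative (logs of probabilities are always ≤ 0), which can't be a valid loss to minimise.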
Reading chapter 2, image augmentation: from what I understand, RandomResizedCrop doesn't squish images, yet the third image from the left looks squished to me?
It does some random squishing by default too.
Yes, in Colab use this function:

```python
from google.colab import files
uploaded = files.upload()
```

You can also upload other filetypes, e.g. .obj for PyTorch3D: http://www.bimhox.com/2020/03/15/pytorch3d-3d-deep-learning-in-architecture/
When this is available on Amazon, will there be a Kindle version? I'm happy to buy a physical book, but with the current state of the world (due to shipping issues these days), I think I would have to wait a long time.
Yes I believe so.
Awesome. Thanks.
Further down in the chapter 4 notebook of lesson 3, in the SGD section, there is an area which may cause confusion:

```python
def train_epoch(model, lr, params):
    for xb,yb in dl:
        calc_grad(xb, yb, model)
        for p in params:
            p.data -= p.grad*lr
            p.grad.zero_()
```

It seems we have a reference to a global variable `dl` in this function, which may create confusion. Perhaps the function could be put into a class initialised with `dl` etc., so the relationship is explicit, and the methods of that class run instead.
Yes, the binary cross entropy loss is incorrect in the 06_multicat notebook. I believe the correct loss is:

```python
def binary_cross_entropy(inputs, targets):
    inputs = inputs.sigmoid()
    return -torch.where(targets==0, 1-inputs, inputs).log().mean()
```

The change is in the last line. It was:

```python
return -torch.where(targets==1, 1-inputs, inputs).log().mean()
```