Lesson 7 in-class chat ✅


(Jeremy Howard (Admin)) #1

Please ask any questions relevant to today’s lesson here. For more advanced or less directly relevant discussion, please use the further discussion topic.

Please remember to like a post instead of saying thanks, and that duplicate posts will be removed to make this topic easier to read (nothing personal).


(Sam) #3

@jeremy,

Some of the notebook titles look familiar. Will you please say a sentence or two about what is different about the Class7-V1 notebooks before you launch into them? It would help us all when making our notes.
Thank you very much for the effort you put in.


(Ariel Gamiño) #4

Fast.ai 1.0 now available in the Kaggle Kernel! Yeahh.


(Rick N) #5

It’s kind of sad to be at the last class! Do we get caps and gowns for the graduation ceremony? :grinning:


(Pierre Ouannes) #12

Thank you very much for the course! Thanks to the whole team!


(Cedric Chee) #16

Link to Sylvain’s documentation improvements.


(Daryl Wilson) #22

This has been one of the most conceptually challenging things I have ever done. Once I allowed myself to understand how a neural network works and what each layer does, my learning improved greatly. The axiom “Run the code, see what goes in and what comes out” is totally true. Thank you for this course @Jeremy, @Rachel and everyone involved.


(jaideep v) #26

How is it possible to view an augmented data batch?
Does show_batch show the augmented images?


(Sam) #27

I remember last year we had concluded cv2 is faster than PIL. Is that no longer our belief?


#29

It does.
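
If you want to check for yourself, here is a minimal sketch (the path, folder layout and transform choices are placeholders, assuming a fastai v1 style ImageDataBunch). Because the transforms are applied on the fly, each call to show_batch draws a freshly augmented set of images.

from fastai.vision import ImageDataBunch, get_transforms

# hypothetical folder arranged as path/train/<class>/... and path/valid/<class>/...
data = ImageDataBunch.from_folder(
    'path/to/images',           # placeholder path
    ds_tfms=get_transforms(),   # default augmentations: flips, rotation, zoom, lighting, warp
    size=224,
)

# show_batch pulls a batch through the full pipeline, so the images it
# displays already have the augmentations applied
data.show_batch(rows=3, figsize=(7, 6))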


(Charlie Harrington) #30

Is there a good way to know which components of from fastai import * we are using in a given notebook if we don’t want to import everything? I’ve been confused by the course notebooks using global variables (it’s hard to know where they come from). This seems especially relevant when we want to “productionalize” a model.


#31

We only use PIL for reading the JPEG and converting it; cv2 is faster for the transformations, but we do those internally now.
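
To illustrate the idea (just a sketch, not fastai’s actual code, and the filename is a placeholder): PIL only handles decoding, and once the image is a tensor the transforms are plain tensor operations, so cv2 isn’t needed.

import numpy as np
import torch
from PIL import Image

# PIL only decodes the JPEG and converts it to RGB...
img = Image.open('some_image.jpg').convert('RGB')   # placeholder filename

# ...after that the image lives as a float tensor, and transforms such as
# a horizontal flip are just tensor operations
t = torch.from_numpy(np.asarray(img)).permute(2, 0, 1).float() / 255
flipped = t.flip(2)   # flip along the width dimension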


(Kevin Bird) #32

Can you use the full dataset for your normalize or do you need to choose a random set of the data?


#33

If you call normalize() it just grabs a batch, so if you want the stats on the full dataset, you’ll need to write your own loop.
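
A minimal sketch of such a loop in plain PyTorch, assuming data is a DataBunch that hasn’t been normalized yet and whose train_dl yields (images, labels) batches of float tensors:

import torch

# accumulate channel-wise sums over every batch of the training DataLoader
n_pixels, channel_sum, channel_sqsum = 0, 0., 0.
for xb, yb in data.train_dl:                    # xb: (bs, 3, h, w) float tensor
    n_pixels      += xb.shape[0] * xb.shape[2] * xb.shape[3]
    channel_sum   += xb.sum(dim=[0, 2, 3])
    channel_sqsum += xb.pow(2).sum(dim=[0, 2, 3])

mean = channel_sum / n_pixels
std  = (channel_sqsum / n_pixels - mean.pow(2)).sqrt()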


#34

I wonder why libraries like fastai don’t support different image sizes and instead we choose to resize and crop? If all the images in a given batch are the same size it should work, or am I missing something here?


#35

What’s the reason black/white are inverted for the data batch?


#36

The problem is that your model likes randomness, so if you want to do that you would have to give it batches containing images that aren’t necessarily the same size.
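
Concretely, a mini-batch is a single tensor, so everything stacked into it has to share a shape; a minimal sketch of what goes wrong otherwise:

import torch

a = torch.zeros(3, 224, 224)   # a 224x224 RGB image
b = torch.zeros(3, 224, 224)
c = torch.zeros(3, 160, 160)   # a differently sized image

batch = torch.stack([a, b])    # fine: tensor of shape (2, 3, 224, 224)
# torch.stack([a, b, c])       # raises a RuntimeError because the shapes differ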


(Kevin Bird) #37

Is there an issue with this type of code?

(data.normalize(data.batch_stats()))

It uses the full training set’s batch stats (I think).


#38

No, batch_stats is what Jeremy described: it grabs a batch and computes the stats on it.
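
In other words (a sketch, assuming a fastai v1 DataBunch and the mean/std computed by a loop like the one above), the two choices look like this; only one of them should be called on a given DataBunch:

# let fastai grab a single batch and use its per-channel stats
data.normalize(data.batch_stats())

# or pass stats you computed yourself over the whole training set
data.normalize((mean, std))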


#39

The feature map, probably.