I have one more interesting question.
The LR finder finds a rate starting from random weights, say 1e-3. But after many epochs, once the model is well trained, should it still use that original LR? It could be far too high by the time the model has reached a low loss and is trying to go even lower.
See https://forums.fast.ai/t/lesson-3-in-class-discussion/29733/3?u=ricknta
Are you seeing the same problem?
How is it determined which loss function will be used by the library? We didn’t specify any in create_cnn().
I’ve used MoviePy in the past.
To extract images from video I’ve found ffmpeg to be much faster than opencv. I create a command line string to extract a specific set of frames.
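For example, here is a minimal sketch of building such a command string with ffmpeg's `select` filter (the video path and frame numbers are made up for illustration):

```python
# Build an ffmpeg command to extract a specific set of frames as images.
# The select filter keeps only the listed frame numbers; -vsync 0 stops
# ffmpeg from duplicating frames to match the original frame rate.
def ffmpeg_extract_cmd(video_path, frame_numbers, out_pattern="frame_%04d.png"):
    select = "+".join(f"eq(n\\,{n})" for n in frame_numbers)
    return (f'ffmpeg -i {video_path} -vf "select={select}" '
            f'-vsync 0 {out_pattern}')

cmd = ffmpeg_extract_cmd("clip.mp4", [10, 50, 90])
print(cmd)
```

The string can then be run with `subprocess.run(cmd, shell=True)` or pasted into a terminal.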
Is it safe to re-fit the model using a higher learning rate on only the misclassified data based on the feedback given by the user? Wouldn’t this risk catastrophic forgetting by overfitting on the misclassified data? Or is there a way to fine tune on misclassified data again using the same principles like how transfer learning is generally done?
Would you recommend working on a single project for a long time while spending a bit of time on other tasks, or is it better to pick a new task each week as the course progresses and not spend too much time on a single dataset/competition/model?
It infers it automatically from the data. Of course you can correct it when it’s wrong.
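You can inspect what was chosen with `learn.loss_func`. Roughly, the idea is that the label type in the DataBunch determines a matching default loss — a simplified sketch of that mapping (not fastai's actual code, and the names here are illustrative strings):

```python
# Sketch of loss-function inference: the library looks at the kind of
# labels in the data and picks a sensible default loss for that task.
def infer_loss(label_kind):
    defaults = {
        "category":       "cross_entropy",         # single-label classification
        "multi_category": "binary_cross_entropy",  # multi-label classification
        "float":          "mse",                   # regression
    }
    return defaults[label_kind]

print(infer_loss("category"))  # cross_entropy
```

You can still override the default by passing your own `loss_func` when creating the Learner.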
When we do fit_one_cycle after unfreeze, how can we send a targeted DataBunch (only the items that were misclassified, or reported by the user as misclassified) for that cycle?
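One library-agnostic starting point is to collect the indices of the misclassified items and build a new dataset from just those; `preds` and `targets` here are hypothetical lists of predicted and true labels:

```python
# Given per-item predictions and true labels, collect the indices of the
# misclassified items. Those indices can then be used to select the
# corresponding rows when constructing a smaller, targeted dataset.
def misclassified_indices(preds, targets):
    return [i for i, (p, t) in enumerate(zip(preds, targets)) if p != t]

idxs = misclassified_indices([0, 1, 1, 2], [0, 2, 1, 2])
print(idxs)  # [1]
```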
once per epoch
In short, should we always use the same LR we got at the start of training, or should we reduce it once the model is well trained and we are trying to do even better?
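In practice the rate is usually lowered as training progresses rather than kept at the value lr_find suggested at the start; schedules such as one-cycle or cosine annealing do this automatically within a training run. A minimal cosine-annealing sketch (the lr_max and lr_min values are made up):

```python
import math

# Cosine annealing: start near lr_max and decay smoothly to lr_min over
# n_steps, so the late phase of training uses a much smaller learning rate.
def cosine_lr(step, n_steps, lr_max=1e-3, lr_min=1e-5):
    frac = step / max(1, n_steps - 1)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * frac))

print(cosine_lr(0, 100))   # 0.001 at the start
print(cosine_lr(99, 100))  # 1e-05 at the end
```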
Can you explain again what learn.unfreeze does? Is it related to learn.save_model()?
Is there a way to use learn.lr_find() and have it return a suggested number directly rather than having to plot it as a graph and then pick a learning rate by visually inspecting that graph?
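For context, one heuristic sometimes used to automate the choice is picking the learning rate where the loss is dropping fastest (steepest negative slope). A sketch over hypothetical `lrs`/`losses` lists recorded by the finder:

```python
# Suggest the LR at the steepest loss descent: the index i where
# losses[i+1] - losses[i] is most negative.
def suggest_lr(lrs, losses):
    diffs = [b - a for a, b in zip(losses, losses[1:])]
    steepest = min(range(len(diffs)), key=lambda i: diffs[i])
    return lrs[steepest]

lrs = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]
losses = [2.0, 1.9, 1.2, 1.5, 3.0]
print(suggest_lr(lrs, losses))  # 0.0001 (biggest drop is between 1e-4 and 1e-3)
```

On real, noisy finder curves this heuristic can be fooled by bumps, which is why eyeballing the plot is often preferred.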
Your own function to open an image. This is more advanced, so please discuss it in the advanced topic.
It would really be interesting and useful to have a data handler for videos.
Is it possible to call the learning rate finder at the end of every epoch so we can watch that graph change?
No, because the graphs can have a lot of different shapes, and bumps that you wouldn’t expect. The human eye is likely to make a better choice here.
Maybe we can create a neural net for that 
That wouldn’t be useful: use the LR finder when you change things during training, not when training continues unchanged.