Lesson 3 In-Class Discussion ✅

See https://forums.fast.ai/t/lesson-3-in-class-discussion/29733/3?u=ricknta

Are you seeing the same problem?

How does the library determine which loss function to use? We didn’t specify one in create_cnn().

1 Like

I’ve used MoviePy in the past.

10 Likes

To extract images from video, I’ve found ffmpeg to be much faster than OpenCV. I build a command-line string to extract a specific set of frames.

1 Like
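As a sketch of the approach above, here is one way to build such an ffmpeg command from Python. The helper name, paths, and frame rate are all placeholders of my own, not anything from the thread; the `-vf fps=…` filter is standard ffmpeg for sampling frames at a fixed rate.

```python
import subprocess  # only needed if you actually run the command

# Hypothetical helper: build an ffmpeg command that samples `fps` frames
# per second from a video and writes them as numbered PNGs.
def ffmpeg_extract_cmd(video_path, out_pattern, fps=1):
    return [
        "ffmpeg",
        "-i", video_path,     # input video file
        "-vf", f"fps={fps}",  # video filter: sample `fps` frames per second
        out_pattern,          # e.g. "frames/img_%04d.png"
    ]

cmd = ffmpeg_extract_cmd("clip.mp4", "frames/img_%04d.png", fps=2)
# subprocess.run(cmd, check=True)  # uncomment to actually invoke ffmpeg
```

To grab a specific set of frames rather than a fixed rate, you can swap the filter for something like `select='eq(n\,100)'`, per the ffmpeg filter docs.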

Is it safe to re-fit the model using a higher learning rate on only the misclassified data based on the feedback given by the user? Wouldn’t this risk catastrophic forgetting by overfitting on the misclassified data? Or is there a way to fine tune on misclassified data again using the same principles like how transfer learning is generally done?

2 Likes

A post was merged into an existing topic: Lesson 3 Advanced Discussion ✅

Would you recommend working on a single project for a long time while spending a bit of time on other tasks, or is it better to pick a new task each week as the course progresses and not spend too much time on a single dataset/competition/model?

2 Likes

It infers it automatically from the data. Of course you can correct it when it’s wrong.

1 Like
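To illustrate the kind of dispatch meant by "it infers it automatically from the data": fastai picks the loss from the label type of the DataBunch. The function and label-kind names below are my own illustration, not fastai internals.

```python
# Illustrative sketch: the loss function follows from how the targets are
# labelled, which is why create_cnn() never asks for one.
def infer_loss(label_kind):
    if label_kind == "category":        # single-label classification
        return "CrossEntropyLoss"
    if label_kind == "multi_category":  # multi-label classification
        return "BCEWithLogitsLoss"
    if label_kind == "float":           # regression on continuous targets
        return "MSELoss"
    raise ValueError(f"unknown label kind: {label_kind}")
```

In fastai v1 you can inspect the inferred choice via `learn.loss_func`, and correct it when it's wrong by assigning a different loss to that attribute.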

When we call fit_one_cycle after unfreeze, how can we send a targeted DataBunch (containing only the examples that were misclassified, or reported as misclassified by the user) for that cycle?

2 Likes

Once per epoch.

In short: should we always use the same LR we found at the start of training, or should we reduce it once the model is trained enough and we’re trying to squeeze out further improvement?

Can you explain again what learn.unfreeze does? Is it related to learn.save_model()?

2 Likes

Is there a way to use learn.lr_find() and have it return a suggested number directly rather than having to plot it as a graph and then pick a learning rate by visually inspecting that graph?

8 Likes
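One common heuristic for automating that visual inspection is to pick the learning rate where the loss curve drops most steeply. The sketch below is my own stand-in for eyeballing the LR-finder plot, with a synthetic loss curve; `suggest_lr` was not a fastai API at the time of this thread.

```python
import numpy as np

# Hypothetical helper: pick the LR at the steepest descent of the
# loss-vs-LR curve (the most negative slope on a log-LR axis).
def suggest_lr(lrs, losses):
    lrs = np.asarray(lrs, dtype=float)
    losses = np.asarray(losses, dtype=float)
    slopes = np.gradient(losses, np.log(lrs))  # dLoss / d(log lr)
    return float(lrs[int(np.argmin(slopes))])

# Synthetic LR-finder curve: loss falls, flattens, then blows up near lr=1.
lrs = np.logspace(-5, 0, 100)
x = np.log10(lrs)
losses = -np.tanh(x + 3) + np.exp(x)

lr = suggest_lr(lrs, losses)  # lands near the steepest drop, around 1e-3
```

As the reply below notes, real curves have bumps and odd shapes that can fool a simple rule like this, which is why inspecting the plot by eye was still recommended.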

Your own function to open an image. This is more advanced, so please discuss it in the advanced topic.

It would be really interesting and useful to have a data handler for videos.

Is it possible to call the learning rate finder at the end of every epoch so we can watch the graph change?

No, because the graphs can have a lot of different shapes, and bumps you wouldn’t expect. The human eye is likely to make a better choice here.

Maybe we can create a neural net for that 🙂

5 Likes

That wouldn’t be useful: use the LR finder when you change something during training, not when training continues unchanged.

For web video, WebRTC can be used.

Look at how we did it here:

You can pull a frame, put it on a canvas, and then get the data URL, which contains the base64 PNG data.

8 Likes