Lesson 6 - Official topic

OK, is that preserved if the model gets exported to ONNX / run with an ONNX runtime?

Thank you, sgugger!
How about momentum variation over the epoch cycle? Does it always work better?

Any hint on what will happen when training the bear classifier using multi-label? Will it really be able to identify that an image doesn’t contain any bear?

That’s what your homework is to figure out :wink:

In traditional ML, we perform cross-validation and k-fold training to check the bias/variance tradeoff. Is this common in training DL models as well?

OK, never mind then.

How do you deal with recommendations when there’s no data? I.e., your platform was just created and you have no data yet.

AKA the cold-start problem.

What is an approach to training using video streams (e.g., from drone footage) instead of images? Would you need to break the footage up into individual image frames?

What would be some good applications of collaborative filtering outside of recommender systems?

Sorry, I did not express myself clearly: I was trying to find a link to fix the one in the notebooks.

I suspect that the first author was a visitor and they removed his personal page at ETH, which is why the link is dead.

The new home seems to be here: https://icu.ee.ethz.ch/research/datsets.html

Oh, you should make a PR to fix the notebook then :slight_smile:

Pretty fitting that Jeremy mentioned the movie “The Mask.” :slight_smile:

I was corrected below; fast.ai does support this.

Fast.ai doesn’t support it out of the box, but you can modify the network by changing the initial convolution from three input channels to four or more, and initializing those weights from the current weights, either by taking their mean or by copying them. The pretrained model will probably require more training than when used on three-channel images.

You can see an example from Iafoss on Kaggle here.
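
For concreteness, here is a minimal sketch of that idea in plain PyTorch (not the fastai API; the mean initialization for the extra channel is just one of the options mentioned above):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

model = resnet34(pretrained=True)
old_conv = model.conv1  # pretrained Conv2d with 3 input channels

# Rebuild the first convolution for 4 input channels.
new_conv = nn.Conv2d(4, old_conv.out_channels,
                     kernel_size=old_conv.kernel_size,
                     stride=old_conv.stride,
                     padding=old_conv.padding,
                     bias=False)

with torch.no_grad():
    # Copy the pretrained weights into the first three channels...
    new_conv.weight[:, :3] = old_conv.weight
    # ...and initialize the extra channel with their mean.
    new_conv.weight[:, 3:] = old_conv.weight.mean(dim=1, keepdim=True)

model.conv1 = new_conv
```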

Yet it does :wink:

Without cross-validation, how do we measure the bias/variance tradeoff in deep learning models? Or is that also not a thing in deep learning?

I don’t know of any tutorial, but you may like this pull request and the related discussion on the forums. The short answer is that you can pass in something like:

model = create_cnn_model(resnet34, n_in=4, n_out=dls.c)

(Also tagging @giacomov and @bwarner in case it’s useful – check out the updated fastai2 codebase!)
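
As a usage note, recent fastai versions expose this directly on the learner factory as well, so something like the following sketch should work (assuming `dls` holds 4-channel images; the names are illustrative):

```python
from fastai.vision.all import *

# `dls` is assumed to be DataLoaders built elsewhere for 4-channel images.
# n_in is forwarded to create_cnn_model, which rebuilds the first
# convolution for 4 input channels.
learn = cnn_learner(dls, resnet34, n_in=4)
```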

Can we say collaborative filtering is the same as a tabular dataset with only categorical variables, where we convert the categories into embeddings and train a neural network?

If I use fit_one_cycle for, say, 20 epochs but find that the model still needs more training, fit_one_cycle starts the learning-rate schedule all over from the beginning.
What is the way to resume training from a previous checkpoint? I loaded the weights from the previous checkpoint. How should I adjust the parameters of fit_one_cycle so it resumes from where it left off?
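
A minimal sketch of one common pattern, assuming a fastai Learner built from an existing `dls` (the checkpoint name, epoch counts, and learning rate are all illustrative, not from the thread): save a checkpoint, then reload it and launch a fresh, shorter one-cycle run with a lower lr_max rather than replaying the original schedule.

```python
from fastai.vision.all import *

# `dls` is assumed to be an existing DataLoaders object.
learn = cnn_learner(dls, resnet34, metrics=error_rate)

learn.fit_one_cycle(20)
learn.save('stage-1')  # checkpoint the weights after the first run

# Later: reload the checkpoint and start a fresh, shorter one-cycle run
# with a lower lr_max instead of repeating the full 20-epoch schedule.
learn.load('stage-1')
learn.fit_one_cycle(5, lr_max=1e-4)
```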

I am having a hard time convincing myself of this.
I guess the same reasoning applies to progressive resizing, e.g. changing image shapes along the way.

My point is that we are basically applying an already-trained model, which has weight matrices of specific shapes, so we are applying the exact same matrices to our problem, and I am trying to visualize how it all fits together.

I get that convolutions are image-shape independent and that, at the end of the day, it all boils down to how many filters we use. Still, I am trying to wrap my head around it :smiley:
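
One way to see it, in a minimal plain-PyTorch sketch (the sizes are illustrative): the convolution's weights depend only on the kernel size and the number of filters, so the exact same matrices apply at any image size, and an adaptive pooling layer squeezes whatever spatial size arrives into a fixed shape for the head.

```python
import torch
import torch.nn as nn

# The same convolution weights work at any spatial size.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
print(conv(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 16, 128, 128])
print(conv(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 16, 224, 224])

# Adaptive pooling collapses whatever arrives to a fixed size, so the
# final linear head never has to change shape.
pool = nn.AdaptiveAvgPool2d(1)
print(pool(conv(torch.randn(1, 3, 224, 224))).shape)  # torch.Size([1, 16, 1, 1])
```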

To put it in simpler terms: in a tabular dataset, you have Xs and ys. In collaborative filtering, you sort of have a table for each candidate, with holes in the table to be filled in. It doesn’t necessarily need to have categorical variables.
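
To make that concrete, here is a minimal sketch (plain PyTorch; the sizes and names are illustrative) of the "table with holes" view: each known (user, item) pair is a filled cell, and an embedding dot product predicts the missing ones.

```python
import torch
import torch.nn as nn

n_users, n_items, n_factors = 5, 4, 3

class DotProduct(nn.Module):
    """Collaborative filtering as filling holes in a users x items table."""
    def __init__(self):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, n_factors)
        self.item_emb = nn.Embedding(n_items, n_factors)

    def forward(self, user, item):
        # Predicted rating = dot product of the user and item embeddings.
        return (self.user_emb(user) * self.item_emb(item)).sum(dim=1)

model = DotProduct()
# Predict the rating for user 0 on item 2 -- one of the "holes" in the table.
pred = model(torch.tensor([0]), torch.tensor([2]))
```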

1 Like