The best chatbot so far is Meena, but it’s a humongous model and it still isn’t necessarily correct
Not really. It would be hard to get the best weights from multiple models to work together correctly.
That DL for tabular data works best with high-cardinality features is fascinating. Will we learn more about this in this course, or where can I find out more?
edit: oh, it’s chapter 9!
Will any PyTorch pretrained model work?
Is DL also good for regression, or only classification?
There is a chapter on tabular data.
Yes, fastai can be used with any PyTorch model.
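Roughly, wrapping a plain torchvision model in a Learner looks like this sketch (the dataset path and the choice of resnet18 are placeholders for illustration, not something from the lesson):

```python
from fastai.vision.all import *
from torch import nn
import torchvision.models as tvm

# Hypothetical example: any plain PyTorch model can be handed to a fastai Learner.
dls = ImageDataLoaders.from_folder('path/to/images')   # placeholder dataset
model = tvm.resnet18(pretrained=True)                   # ordinary torchvision model
model.fc = nn.Linear(model.fc.in_features, dls.c)       # swap the head to match our classes
learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), metrics=accuracy)
learn.fit_one_cycle(1)
```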
It is good for both.
It is also good for regression. We will see multiple examples of that.
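For instance, a tabular regression with a continuous target looks something like the sketch below. The CSV file and column names are made up for illustration, not from the course:

```python
from fastai.tabular.all import *
import pandas as pd

# Hypothetical sketch: predicting a continuous target ('price') instead of a class label.
df = pd.read_csv('houses.csv')                      # placeholder dataset
dls = TabularDataLoaders.from_df(
    df, y_names='price',
    cat_names=['neighborhood'], cont_names=['sqft', 'bedrooms'],
    procs=[Categorify, FillMissing, Normalize],
    y_block=RegressionBlock())                      # tells fastai this is regression
learn = tabular_learner(dls, metrics=rmse)          # same learner, now optimizing MSE
learn.fit_one_cycle(3)
```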
What is your take on Deep Learning models for Information Retrieval?
Dear Amazon, I bought a toilet seat because I needed one. Necessity, not desire. I do not collect them. I am not a toilet seat addict. No matter how temptingly you email me, I’m not going to think, oh go on then, just one more toilet seat, I’ll treat myself.
A really depressing example of this is someone who purchased an urn after the death of a loved one, and was then recommended more urns for months afterwards.
It’s a good thing to do because when you have a head-only part, which is usually a bunch of dense layers, you want those weights to converge to a decent range where the predictions are as good as possible. Then you allow the whole network to train, but at a much slower learning rate. If you look at the source code for fine_tune, it does one-cycle training on the head, followed by unfreezing and another round of one-cycle training at roughly 1/100 of the learning rate. This later stage stabilizes and adjusts the weights based on the data.
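Roughly, the schedule fine_tune follows looks like this sketch (simplified; the real method also adjusts the base learning rate and accepts more options):

```python
from fastai.vision.all import *

# Simplified sketch of the two-stage schedule fine_tune uses (not the exact source).
def fine_tune_sketch(learn, epochs, base_lr=2e-3, freeze_epochs=1, lr_mult=100):
    learn.freeze()                               # train only the new head
    learn.fit_one_cycle(freeze_epochs, base_lr)  # one-cycle training on the head
    learn.unfreeze()                             # now allow the whole network to train
    # discriminative learning rates: early layers get ~1/lr_mult of the head's rate
    learn.fit_one_cycle(epochs, slice(base_lr / lr_mult, base_lr))
```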
The problem with this data set is that it is relatively sparse at lower temperatures. I would worry that the relatively small number of points at the lower temperatures has too much leverage on the slope. I would not be inclined to believe that the result is different from the null hypothesis.
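To see why a handful of cold-temperature points can dominate the fit, here is a synthetic illustration with made-up numbers (not the paper's data): shifting only the five sparse low-temperature observations is enough to flip the sign of the fitted slope.

```python
import numpy as np

# Synthetic illustration: when data are sparse at low temperatures,
# those few points have high leverage on an ordinary least-squares slope.
rng = np.random.default_rng(0)
temp_dense = rng.uniform(15, 30, 200)                  # plenty of observations at warm temps
temp_sparse = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # only five cold-temperature points
temp = np.concatenate([temp_dense, temp_sparse])
r_dense = 1.5 + rng.normal(0, 0.3, temp_dense.size)    # flat relationship in the dense region

for shift in (-1.0, 0.0, +1.0):                        # nudge only the five sparse points
    r_sparse = 1.5 + shift + rng.normal(0, 0.3, temp_sparse.size)
    slope, _ = np.polyfit(temp, np.concatenate([r_dense, r_sparse]), 1)
    print(f"cold points shifted by {shift:+.1f}: fitted slope = {slope:+.4f}")
# The sign of the fitted slope is decided by just 5 of the 205 points.
```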
FYI: daily effective reproductive number = R
I’m assuming that paper has been peer reviewed. Was this something that was caught in review, or was the paper accepted anyway?
What are the numbers in the brackets (on the slide, below the equation)?
It seems like this aside might be more appropriate for the COVID video that comes out of tonight’s lecture, instead of the ML portion.