Looks like he is answering the question.
data.test_ds.fnames gives you the names of the files in the test in the same order as the predictions.
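Since the filenames come back in the same order as the prediction rows, you can just zip them together. A minimal sketch with made-up filenames and probabilities standing in for `data.test_ds.fnames` and the model's output:

```python
import os

# Hypothetical values standing in for data.test_ds.fnames and the
# predicted probabilities; in practice you'd pull these from the data
# object and the learner.
fnames = ["test/8.jpg", "test/10.jpg", "test/1.jpg"]
probs = [0.92, 0.13, 0.55]  # e.g. probability of "dog"

# fnames and probs are aligned row-for-row, so pair them up and pull
# the numeric id out of each filename for the submission file.
rows = []
for fname, p in zip(fnames, probs):
    img_id = os.path.splitext(os.path.basename(fname))[0]
    rows.append((img_id, p))

print(rows)
```

Note the order of `fnames` is whatever the dataset used, not necessarily sorted, which is why matching by id matters.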
I found TTA made my results slightly worse on the Dog Breeds experiment. Is there a way to get the probabilities without using TTA?
Jeremy mentioned the fastai machine learning course? I’m having trouble finding it on the interwebs. Does someone have a link?
The files are in the ml1 folder of the GitHub repo.
@yinterian Thanks. Does the order of the files matter in the submission? In the dogs vs. cats one, I thought I was getting the files out of order.
They are awesome. I am only through lesson 2, but there is a ton of really good content.
What matters is that you match the prediction with the right “id”.
Why is the model giving different predictions for the same single image?
First 37 and then 33?
Thank you. I remembered the ordering seeming more straightforward in the version 1 Keras notebooks, but I think that was only because they pulled from the test directory in file-system sort order. I thought I had specified something incorrectly in the fastai library’s predict method.
Is Octavio’s video publicly available? (if so, link please <3)
I don’t think so; it was unlisted.
where’s the Octavio video? Is it available? Tried searching and couldn’t find it.
How do we arrive at the filters?
Are these optimised as well (via gradient descent or other methods?)
Yes, the filters are optimized with Stochastic Gradient Descent (SGD), or a variant of it, just like the other weights.
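To make that concrete, here is a toy sketch (NumPy, made-up data) of a "filter" being learned by plain gradient descent. It's a length-3 weight vector rather than a real 3x3 kernel, but the mechanics are the same: the filter is just another parameter updated from the gradient of the loss.

```python
import numpy as np

# Toy 1-D convolution: the "filter" is a learnable weight vector,
# exactly like the 3x3 kernels in a CNN, just smaller for illustration.
rng = np.random.default_rng(0)
x = rng.standard_normal((100, 3))          # 100 length-3 input patches
true_filter = np.array([1.0, -2.0, 0.5])   # filter we hope to recover
y = x @ true_filter                        # target responses

w = np.zeros(3)                            # filter starts at zero
lr = 0.1
for _ in range(200):                       # gradient descent steps
    grad = 2 * x.T @ (x @ w - y) / len(x)  # d(MSE)/dw
    w -= lr * grad

print(np.round(w, 2))  # should end up close to [1, -2, 0.5]
```

In a real CNN the gradient comes from backpropagation through the whole network, but the update rule for each filter weight is the same idea.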
Right now it gets about 98.5% accuracy on Dogs vs. Cats. I am still looking for where fastai (PyTorch) and my library differ so I can get it up to 99%.
If @jeremy had a 3-channel input in his Excel spreadsheet, would the filter in the first hidden layer have dimension [3, 3, 3]?
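Right, each filter spans all input channels. A quick NumPy sketch (random data, just to show the shapes): with a 3-channel input, one 3x3 filter has shape (3, 3, 3), and its output at a single location is one number, an element-wise multiply-and-sum over the whole block.

```python
import numpy as np

# Shape convention assumed here: (channels, height, width).
rng = np.random.default_rng(0)
image_patch = rng.standard_normal((3, 3, 3))  # one 3x3 patch, 3 channels
filt = rng.standard_normal((3, 3, 3))         # one filter, also 3x3x3

# The convolution output at this location is a single scalar per filter:
activation = float(np.sum(image_patch * filt))
print(filt.shape, filt.size)  # (3, 3, 3) -> 27 weights per filter
```

So a first layer with, say, 16 filters on an RGB image holds a weight tensor of shape (16, 3, 3, 3).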
Why use a simple sum of the previous convolutions instead of a weighted one?