And the most_confused:
[('Song', 'Nest', 5),
 ('Tuck', 'Nest', 5),
 ('Begging', 'Long Tonal', 2),
 ('Nest', 'Whine', 2),
 ('Whine', 'Nest', 2)]
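For context, a list like this is just the off-diagonal cells of the confusion matrix, ranked largest first (in fastai it comes from `interp.most_confused(min_val=2)`). A minimal numpy sketch, with made-up class names and counts for illustration:

```python
import numpy as np

def most_confused(cm, classes, min_val=2):
    """Rank off-diagonal confusion-matrix cells, largest count first."""
    pairs = [(classes[i], classes[j], int(cm[i, j]))
             for i in range(len(classes)) for j in range(len(classes))
             if i != j and cm[i, j] >= min_val]
    return sorted(pairs, key=lambda t: -t[2])

# Toy confusion matrix (rows = actual, cols = predicted); values are made up
classes = ["Nest", "Song", "Tuck"]
cm = np.array([[10, 0, 1],
               [5,  9, 0],
               [5,  0, 8]])
print(most_confused(cm, classes))
```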
It seems Thuk and Tuck are actually not as confused with each other as the Nest class is.
@radek, for Task #5, what do you mean by an unsupervised way? I’m thinking about working on this task, and I think I still need the labels. I summarize what I plan to do below:
Run a Siamese network to extract features (I read that a matching model like a Siamese network gives better features for clustering than a classification model like a ResNet)
Run a clustering model (e.g. DBSCAN) over all the data to find the clusters and extract the anomalous samples. Or run anomaly detection directly (e.g. Isolation Forest) within a single class to detect the anomalous samples
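The clustering/anomaly-detection step above can be sketched with scikit-learn on synthetic 2D points standing in for Siamese embeddings (all data here is made up for illustration):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Stand-in for embeddings from a Siamese network: two tight clusters
# plus one far-away outlier (values are synthetic)
X = np.vstack([rng.normal(0, 0.1, (50, 2)),
               rng.normal(5, 0.1, (50, 2)),
               [[20.0, 20.0]]])

# DBSCAN groups dense regions; points labelled -1 are noise / anomalies
db = DBSCAN(eps=0.5, min_samples=5).fit(X)
print("noise points:", int(np.sum(db.labels_ == -1)))

# Isolation Forest scores each point; predict() returns -1 for anomalies
iso = IsolationForest(contamination=0.01, random_state=0).fit(X)
print("isolation forest flags:", int(np.sum(iso.predict(X) == -1)))
```

In practice you would replace `X` with the embeddings extracted from your network, and `eps`/`contamination` would need tuning on your data.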
I think you are on the right track. My thinking was just an extension of what we learned in class, how Jeremy was using the model to improve the data for the bear classifier. The idea is that the examples the model gets wrong, or is most confused about, may be worth taking a look at: maybe they were mislabeled, or they are low quality, or whatnot.
Task #5 could be a precursor to this in some way. You could do something like I do here
Those are UMAP 2D embeddings obtained in an unsupervised way for the macaque coo calls. Quite a nice separation, despite the algorithm not being informed of the labels. Is there something similar you could do for the zebra_finch dataset? Do the calls from chicks and adults differ in some obvious way? What if you obtain embeddings from a classifier / Siamese network? Are they better in some sense? How do you measure this? If you are training a model on spectrograms, which frequencies are most telling when it comes to discerning calls / bird identities? Would training a CAM model provide some useful insights? (CAM models are introduced in the fastbook.)
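The unsupervised 2D embedding step can be sketched as follows. I use scikit-learn’s t-SNE here as a stand-in, since `umap-learn` may not be installed; `umap.UMAP(n_components=2).fit_transform(X)` has the same `fit_transform` interface. The data is synthetic, standing in for flattened call spectrograms:

```python
import numpy as np
from sklearn.manifold import TSNE  # stand-in; umap.UMAP has the same fit_transform API

rng = np.random.default_rng(0)
# Stand-in for flattened spectrograms: two synthetic "call types"
X = np.vstack([rng.normal(0, 1, (40, 64)),
               rng.normal(4, 1, (40, 64))])
labels = np.array([0] * 40 + [1] * 40)  # used only for colouring a plot, never for fitting

# Unsupervised 2D embedding -- the algorithm never sees `labels`
emb = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(X)
print(emb.shape)
```

Plotting `emb` coloured by `labels` is then the visual check: do the unsupervised clusters line up with the known call types?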
I think the biggest question is what angle of exploration you would find fun and interesting.
As a side note, it’s quite interesting to compare the results you are getting with the results in the paper.
This is very useful! Have you tried listening to the calls? Do they sound to you like they might be misclassified? Quite a relief that the model seems to be happy with our tuck vs thuk split.
Does the error rate vary depending on the train/val split? Or is there something different about the training that you are doing? If you use the same validation set, what can you do to absolutely squeeze the most performance out of your single model? What if you were to take an ensemble of models, how does that differ? BTW, how are you accommodating the different lengths of the calls? What are some other ways one could approach this, and would it perform better?
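Two of the ideas above can be sketched in a few lines of numpy: fixing the variable call lengths by padding/cropping spectrograms to a common width, and ensembling by averaging per-model class probabilities. The probability values and the width of 128 below are made up for illustration:

```python
import numpy as np

def pad_or_crop(spec, width=128):
    """Give a (freq, time) spectrogram a fixed width: zero-pad short calls, centre-crop long ones."""
    t = spec.shape[1]
    if t < width:
        return np.pad(spec, ((0, 0), (0, width - t)))
    start = (t - width) // 2
    return spec[:, start:start + width]

def ensemble_predict(prob_list):
    """Average the class probabilities from several models, then take the argmax."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Hypothetical probabilities from three models, for two calls and three classes
p1 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
p2 = np.array([[0.5, 0.4, 0.1], [0.1, 0.6, 0.3]])
p3 = np.array([[0.4, 0.4, 0.2], [0.3, 0.3, 0.4]])
print(ensemble_predict([p1, p2, p3]))  # predicted class index per call
```

Other common options for variable lengths are random crops as augmentation during training, or models that pool over the time axis so any width works.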
Maybe some of these questions will be interesting to you, maybe not. Or maybe they will push you in a direction that is interesting to you. No clue. Exploring either of these directions can be extremely useful though, especially if you can start creating some narrative around what you are exploring and what your findings are, and trying to tell the story through your notebooks.
I am also wondering if maybe we could try to identify individuals, if there is any hope of that. There is some information on that in the paper. If we have some nice way of framing this, that might be a really tough challenge, an open-ended question in many regards as to how best to approach it.
Sorry, I do not have the answers myself, but it is great to discuss ideas with you. And thank you for sharing all your wonderful work and findings!
I really really appreciate your help Radek. I think I have several ideas for myself.
You know, I think I’m kind of prone to losing motivation, or lacking tenacity, when I’m working alone (and that’s usually the case). However, I’m very interested in sharing my knowledge with others. Now I have an idea of creating a notebook that really tells the story of how I’m learning. An emotional story always attracts me more than a purely technical article.
Thanks again for your help, and I hope I can really make it to the end of this fastai course.