The number of channels is obtained from the number of filters, which is our choice.
My understanding is that the disproportionate scales of the bounding-box error and the classification error are a one-time fix for a given model. Tweaking them once to match their scales, as part of model building, would be enough.
How about normalizing the losses to a desired range?
Tagging @jeremy on this. The YouTube link at the top now starts from mid-chapter (from where you started recording locally).
Maybe the YouTube stream video (from before the recording) can be glued onto that one. I was planning to go through the lecture again this evening. It would be super if it can somehow be restored.
I don’t think it is a joke. I would love to learn more of the story behind the person who authored this paper. Thus far my encounters with YOLO have been quite strange, so there must be a nice backstory. Love the graphs, BTW.
I haven’t had a chance to read it fully, but so far I think it is great.
Love the finishing words of it as well.
For another easter egg that borrows just a little of the light spirit of the YOLOv3 paper, check out the first bibliographical reference in this very serious Inception paper.
Thanks, time to read some papers.
Try this one out as well: AI Journal. It covers DL, NLP, RL, CV, adversarial examples, and GANs, plus the research aspect in depth.
Yes, that would be great. The missing initial part (I don’t know how much) makes it difficult to connect the lessons. If restoring it is not possible, could someone describe what happened from the real beginning up to this point? Thanks.
I know Joseph Redmon comes up with this kind of stuff regularly. Just look at the guy’s resume and you’ll have him figured out.
The published results might be real, but I think the paper wasn’t intended to be published. He wrote:
Times from either a K40 or Titan X, they are basically the same GPU.
Which is not true at all.
Although he later cleared it up on Reddit, saying he mixed up the names: he meant M40, not K40.
Anyway, I don’t think it is to be taken seriously. Let’s see if Redmon comes up with an explanation.
Really? I thought it was a real paper. Jeremy mentioned it in the class, no?
But to normalize we also need to know the scale of the loss, so I think it amounts to the same thing as multiplying by some constant to make the scales similar.
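To make the idea concrete, here is a minimal sketch of the constant-multiplier approach discussed above. The function name and the weight value are hypothetical, chosen only for illustration; in practice the weight would be picked once, during model building, by comparing the typical magnitudes of the two losses.

```python
# Hypothetical sketch: balancing a bounding-box loss and a classification
# loss by scaling one of them with a constant chosen once per model.
# combined_loss and bbox_weight=20.0 are illustrative names/values,
# not from the lecture.

def combined_loss(bbox_loss, class_loss, bbox_weight=20.0):
    """Scale the (typically much smaller) bbox loss so that both
    terms contribute on a similar scale to the total loss."""
    return bbox_weight * bbox_loss + class_loss

# e.g. a bbox loss around 0.05 and a classification loss around 1.0:
total = combined_loss(0.05, 1.0)  # 20.0 * 0.05 + 1.0 = 2.0
```

The point is that "normalizing" each loss to a target range still requires knowing its typical scale, which is exactly the information the fixed multiplier encodes.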
My bad. I just saw it on reddit and posted the link here.
The YOLOv3 paper is written in a humorous style, but I’m pretty sure that the architecture updates and the reported results are real.
Don’t let the light-hearted style of the paper fool you; the YOLOv3 update is real. The updated version is available for download on the official website, along with some other info. Feel free to check it out here: https://pjreddie.com/darknet/yolo/
Yes, I did not disagree with that. The update is real, most definitely. I still feel like the paper itself is a joke. Then again, after seeing his resume, I could be wrong for all I know.
For me the coolest thing is that this paper cites itself (see #14)…recursion!
It’s under Formulas. Use it together with Trace Dependents as an Excel debugger.
Definitely not a joke. Please stop confusing more people in the forums. It’s rather easy to look these things up if you’re unsure. Lighthearted language doesn’t necessarily mean that “It’s a joke paper”.
(Also, that’s one Damn Good Resume.)
EDIT: Re-reading this, I can see that I came off a bit rude. We’re all learning here, I apologize for the unnecessary tone.
Can someone point me to the video for this lesson? Would like to watch again.