Part 2 Lesson 9 wiki


(chunduri) #235

The number of output channels is just the number of filters, which is our choice.
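For illustration, here's a minimal PyTorch sketch (the shapes are just example values): the `out_channels` argument of `nn.Conv2d` is exactly the number of filters we choose.

```python
import torch
import torch.nn as nn

# The number of output channels equals the number of filters we choose:
# here 16 filters, each spanning all 3 input channels.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

x = torch.randn(1, 3, 224, 224)  # (batch, channels, height, width)
print(conv(x).shape)             # torch.Size([1, 16, 224, 224])
```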


(chunduri) #236

My understanding is that the disproportionate scales of the bounding-box error and the classification error are a one-time fix for a given model. Tweaking the weighting once to match their scales, as part of building the model, should be enough.


(chunduri) #237

How about normalizing the losses to a desired range?


(Suvash) #239

Tagging @jeremy on this. The YouTube link at the top now starts mid-chapter (from where you started recording locally).
Maybe the YouTube stream video (from before the local recording) can be glued onto that one. I was planning to go through the lecture again this evening; it would be super if it could somehow be restored.


#241

I don’t think it is a joke :slight_smile: I would love to learn more of the story behind the person who authored this paper :slight_smile: Thus far my encounters with YOLO have been quite strange, so there must be some nice backstory. Love the graphs BTW.

I haven’t had a chance to read it fully but so far I think it is great :slight_smile:

Love the finishing words of it as well.

For another easter egg that shares a little of the light spirit of the YOLOv3 paper, check out the first bibliographical reference in this otherwise very serious Inception paper :slight_smile:


(Nikhil B ) #242

Thanks, time to read some papers.


(Prajjwal) #243

Try this one out as well: AI Journal. It covers DL, NLP, RL, CV, adversarial examples, and GANs, plus the research side, in depth.


(Walter Vanzella) #244

Yes, it would be great. The missing initial part (I don’t know how much) makes it difficult to connect the lessons. If restoring it is not possible, could someone describe what happened from the real beginning up to this point? Thanks.


(Tanmay Agrawal) #245

I know Joseph Redmon comes up with this kind of stuff regularly. Just look at the guy’s resume and you’ll have him figured out. :laughing:
The published results might be real, but I think the paper wasn’t intended as a formal publication. He wrote:

Times from either a K40 or Titan X, they are basically the same GPU.

which is not true at all.
He later cleared it up on reddit: he mixed up the names and meant M40, not K40.
Anyway, I don’t think it is to be taken seriously. Let’s see if Redmon comes up with an explanation.


(Aseem Bansal) #246

Really? I thought it was a real paper. Jeremy mentioned it in the class, no?


(Lucas Goulart Vazquez) #247

But to normalize we would also need to know the scale of the loss; I think it would end up being equivalent to multiplying by a constant to match the scales.
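As a minimal sketch of the point being discussed (the `BOX_SCALE` constant and the function names are hypothetical, not from the lesson code): a combined detection loss where one constant, tuned once per model, rebalances the two terms.

```python
import torch.nn.functional as F

# Hypothetical combined detection loss: the bounding-box (L1) term and the
# classification (cross-entropy) term live on different scales, so a single
# constant, tuned once when building the model, rebalances them.
BOX_SCALE = 5.0  # hypothetical value, picked by comparing the two losses' scales

def detection_loss(box_pred, box_targ, clas_pred, clas_targ):
    loc_loss = F.l1_loss(box_pred, box_targ)
    clas_loss = F.cross_entropy(clas_pred, clas_targ)
    return BOX_SCALE * loc_loss + clas_loss

# Normalizing each term by its observed scale instead would amount to the
# same thing: multiplying each loss by a constant.
```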


(Aseem Bansal) #248

My bad. I just saw it on reddit and posted the link here.


(William Horton) #249

The YOLOv3 paper is written in a humorous style, but I’m pretty sure that the architecture updates and reported results are real.


(James Requa) #252

Don’t let the light-hearted style of the paper fool you; the YOLOv3 update is real :slight_smile: The updated version is available for download on the official website, along with some other info on it. Feel free to check it out here: https://pjreddie.com/darknet/yolo/


(Tanmay Agrawal) #253

Yes, I did not disagree with that. The update is real, most definitely. I still feel like the paper itself is a joke, though after seeing his resume, I could be wrong for all I know. :laughing:


(Arvind Nagaraj) #254

For me the coolest thing is that this paper cites itself (see #14)…recursion!


(Arvind Nagaraj) #255

Off the charts performance!!


(Sarada Lee) #256

[image] It’s under the Formulas ribbon in Excel. Use it together with Trace Dependents as an Excel debugger.


(Suvash) #257

Definitely not a joke. Please stop confusing more people in the forums. It’s rather easy to look these things up if you’re unsure. Lighthearted language doesn’t necessarily mean that “It’s a joke paper”.

(Also, that’s one Damn Good Resume.)

EDIT: Re-reading this, I can see that I came off a bit rude. We’re all learning here, I apologize for the unnecessary tone. :beers:


(Ganesh Krishnan) #258

Can someone point me to the video for this lesson? I would like to watch it again. :slight_smile: