On the neck and head area of the blue jay, the output seemed to show checkerboard artifacts.
@rachel if there is a good time to fit this in, I’m curious if Jeremy has an explanation.
If not, no biggie!
DON'T DO A PhD. I was enrolled for two years and dropped out. There are more opportunities outside without a PhD.
Or at the very least if you do one, take everything your professors tell you with a grain of salt (they have no idea what goes on in the real world). And do as many industry internships / work part time as you can.
I have a PhD, and worked for 2 startups during my degrees. There's still definitely an opportunity cost to spending that many years of your life. But if you go in with the understanding that you're responsible for also ensuring your industry success afterwards, you can leverage it into some interesting skills beyond your specific subject area.
@matttrent I did internships in summer during my abortive PhD. Was very happy to leave with some work experience.
Yes! And better to find that out 1-2 years in during your first internship than after 5+
What part of the journal publishing process would you place the blame on for having misleading stuff show up in published results?
How would somebody get these masks for training initially? MTurk or something?
Seems like getting labelled data for segmentation would be really expensive. What are your thoughts?
Why did Jeremy upsample in StdUpSample?
oops, I meant in simple_up
An interesting piece of research on annotating segmentation datasets: http://www.cs.toronto.edu/polyrnn/
Will it improve if we use attention instead of “copy and crop” in the U-Net?
Gray arrow in the U-Net diagram was labeled “copy and crop”. What does “crop” mean there?
In the original paper, they didn't use padding in the convolutions, so the feature maps get smaller at each stage. To match the sizes before concatenating, they cropped the encoder feature map.
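To make that concrete, here's a minimal sketch (my own code, not from the lesson) of the "copy and crop" step: the encoder feature map is center-cropped down to the decoder map's spatial size before the channel-wise concatenation. The sizes below are just illustrative.

```python
import torch

def center_crop(enc_feat: torch.Tensor, target_hw) -> torch.Tensor:
    """Center-crop an NCHW tensor to the given (height, width)."""
    _, _, h, w = enc_feat.shape
    th, tw = target_hw
    top = (h - th) // 2
    left = (w - tw) // 2
    return enc_feat[:, :, top:top + th, left:left + tw]

# Encoder map is larger than the decoder map because the convs had no padding.
enc = torch.randn(1, 512, 64, 64)   # skip connection from the contracting path
dec = torch.randn(1, 512, 56, 56)   # upsampled map in the expanding path

skip = center_crop(enc, dec.shape[-2:])
merged = torch.cat([skip, dec], dim=1)  # shape (1, 1024, 56, 56)
```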
Sorry for going off-topic, but, then, what are the basic requirements for a ML/DL job?
Fairly off topic, but wasn't there an XKCD about the nonsense of up-scaling a 4 by 4 pixel image? Anyone remember? The first half of the class reminded me of it, but I'm not sure which comic it was. Anyway, maybe not for 4x4, but for a specific domain the model is already trained on, something like this could actually get interesting up-scaled results.
Maybe. On the one hand, we need to take guesses for the entire image, so I’d think we need the whole thing. But then again, only really the edges of the boundaries seem to be hard, so it might be good to focus computational resources on the hard part.
Why does U-Net use transposed convs instead of the upsampling or subpixel filters from the first half of the lesson?
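For anyone unsure what the question is comparing, here's a hedged sketch (my own code, not the lesson notebook) of the three ways to double spatial resolution. All three map a 16x16 map to 32x32; they differ in how the new pixels are computed, which is why transposed convs can produce the checkerboard artifacts mentioned earlier.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 16, 16)

# 1. Transposed convolution, as in the original U-Net.
deconv = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)

# 2. Nearest-neighbor upsample followed by a regular conv.
upsample = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(64, 32, kernel_size=3, padding=1),
)

# 3. Sub-pixel convolution: conv to 4x the channels, then PixelShuffle
#    rearranges them into a 2x larger spatial grid.
subpixel = nn.Sequential(
    nn.Conv2d(64, 32 * 4, kernel_size=3, padding=1),
    nn.PixelShuffle(2),
)

for m in (deconv, upsample, subpixel):
    assert m(x).shape == (1, 32, 32, 32)
```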