@neuradai Thanks a ton for the notebook - it's merged into master so everyone can use it.
We got our ResNet (XResNet!) tonight as I had expected/hoped, along with the final updates to FastAI 1.2, so I think we're set in terms of framework and base CNN to work with.
I did some testing with the LiSHT activation and it looks promising, though it's prone to exploding at higher learning rates…but I'll try and test it in the XResNet soon.
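For anyone who hasn't seen it, LiSHT is just x * tanh(x) - here's a minimal sketch of it (my own few-liner, not from any of our notebooks), which also makes the exploding behavior easy to see, since unlike tanh it's unbounded above:

import torch

def lisht(x):
    # LiSHT: x * tanh(x)
    # Non-monotonic and unbounded above, so large activations grow
    # roughly linearly - which is why high learning rates can blow it up.
    return x * torch.tanh(x)

Note it's symmetric: lisht(-2) == lisht(2), both positive, so it never outputs negative values.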
I see we can do polls here, so maybe we should do that, but imo the current question is: how do we want to get started?
One idea I had was to build a tiny toy dataset with the images and throw it at XResNet as a simple test, to start seeing how it performs…then build up from there?
I'm in US PST and can do a video chat/hangout at pretty flexible times.
@pierreguillou - thanks for the link to the Stanford code. I did not know it was published…I downloaded it…
and I had to double-check b/c the model is so basic. Here it is:
import torch
import torch.nn as nn
from torchvision import models

class MRNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = models.alexnet(pretrained=True)
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(256, 1)

    def forward(self, x):
        x = torch.squeeze(x, dim=0)  # only batch size 1 supported
        x = self.model.features(x)
        x = self.gap(x).view(x.size(0), -1)
        x = torch.max(x, 0, keepdim=True)[0]
        x = self.classifier(x)
        return x
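(The forum mangled the underscores in __init__ - reconstructed above.) To make the shape flow concrete, here's a quick smoke test I put together (my own sketch, not theirs - I use pretrained=False just to skip the weight download, and the 12-slice series is made up):

import torch
import torch.nn as nn
from torchvision import models

class MRNet(nn.Module):
    def __init__(self):
        super().__init__()
        # pretrained=False here only to avoid downloading weights for a shape test
        self.model = models.alexnet(pretrained=False)
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(256, 1)

    def forward(self, x):
        x = torch.squeeze(x, dim=0)           # (1, s, 3, 224, 224) -> (s, 3, 224, 224)
        x = self.model.features(x)            # (s, 256, 6, 6) AlexNet conv features per slice
        x = self.gap(x).view(x.size(0), -1)   # (s, 256) one feature vector per slice
        x = torch.max(x, 0, keepdim=True)[0]  # (1, 256) element-wise max over all slices
        x = self.classifier(x)                # (1, 1) single logit for the whole exam
        return x

net = MRNet().eval()
series = torch.randn(1, 12, 3, 224, 224)  # one MRI exam of 12 slices (hypothetical count)
with torch.no_grad():
    out = net(series)
print(out.shape)  # torch.Size([1, 1])

So the whole trick is: treat the slices as a batch through AlexNet, then max-pool across slices to get one prediction per exam - which is also why only "batch size 1" (one exam) is supported.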
I still question whether I missed something b/c this looks like something an intern built, but anyway, if that's their model then we should be able to blow the doors off of them with FastAI.