Jeremy said it defaults to 0.003 if left blank.
I tested this wrapper on the lesson-1 example before running learner.fit, and it works beautifully for running the model on multiple GPUs:
learn = ConvLearner(data, models.resnet34, metrics=error_rate)
learn.model = torch.nn.DataParallel(learn.model)
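To see what that wrapper actually does, here is a minimal standalone PyTorch sketch (the tiny model is illustrative, not the learner's resnet34): nn.DataParallel splits each batch across the visible GPUs and gathers the results, and on a CPU-only machine it just forwards to the wrapped module, so the call is harmless either way.

```python
import torch
import torch.nn as nn

# A tiny stand-in model (illustrative only; not the fastai learner's resnet34).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)

# DataParallel splits each input batch across all visible GPUs and gathers
# the outputs; with no GPUs it simply calls the wrapped module directly.
model = nn.DataParallel(model)

x = torch.randn(4, 3, 32, 32)   # a batch of 4 RGB 32x32 images
out = model(x)
print(out.shape)                # torch.Size([4, 2])
```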
Look at feature visualisation papers. The first layer produces an activation map where the activations are high in specific locations. The next layer passes a filter over this, taking neighbourhood activations into account, and produces another map of activations and their locations, and so on…
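A quick sketch of that idea in plain PyTorch (shapes and channel counts are made up for illustration): each conv layer turns the previous layer's activation maps into new maps by filtering over local neighbourhoods, so deeper layers effectively see larger patches of the original image.

```python
import torch
import torch.nn as nn

# Two 3x3 conv layers: the first maps the image to activation maps; the
# second filters 3x3 neighbourhoods of those maps, so each of its units
# effectively sees a 5x5 patch of the original image.
conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)

img = torch.randn(1, 3, 8, 8)    # one RGB 8x8 image
act1 = torch.relu(conv1(img))    # (1, 16, 8, 8): 16 activation maps
act2 = torch.relu(conv2(act1))   # (1, 32, 8, 8): maps over neighbourhoods
print(act1.shape, act2.shape)
```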
Does fastai do it by default? And what does it do differently?
Thanks for the class and all your work responding to questions!
don’t forget to like the video! https://www.youtube.com/watch?v=7hX8yKCX6xM
Thank you to Jeremy and all involved!
thx a lot
Can anyone link to the thread/post of NLT project which Jeremy mentioned?
If we have segmentation problems where we have to read the images and predict the masks, what version of the loader can we use in that case, since we don't have any labels for that kind of problem?
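In segmentation the masks themselves are the labels, so each training item is an (image, mask) pair. A minimal plain-PyTorch sketch (synthetic tensors stand in for real image/mask files, and the class count is hypothetical):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SegDataset(Dataset):
    """Minimal sketch: for segmentation the mask *is* the label,
    so each item is an (image, mask) pair, not an (image, class) pair."""

    def __init__(self, images, masks):
        assert len(images) == len(masks)
        self.images, self.masks = images, masks

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        return self.images[i], self.masks[i]

# Synthetic stand-ins: 6 RGB images and 6 per-pixel class masks.
images = torch.randn(6, 3, 16, 16)
masks = torch.randint(0, 4, (6, 16, 16))   # 4 hypothetical classes

dl = DataLoader(SegDataset(images, masks), batch_size=2)
xb, yb = next(iter(dl))
print(xb.shape, yb.shape)   # a batch of images and the matching masks
```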
Great! thx a lot
As Jeremy said, it's multi-CPU by default, but only a single GPU.
Thank you, Jeremy!
Amazing lecture @jeremy. Looking forward to learning how to create our own dataset from Google Images. Thanks!
Gotta have labels; this is supervised training. Unsupervised training is possible, but that's in Part 2.
If we use nn.DataParallel, what difference would it make?