Lesson 1 In-Class Discussion ✅

Jeremy said it defaults to 0.003 if left blank.


I tested this wrapper on the lesson-1 example before running learner.fit, and it works beautifully for running the model on multiple GPUs:

learn = ConvLearner(data, models.resnet34, metrics=error_rate)
# wrap the model so forward passes are split across the available GPUs
learn.model = torch.nn.DataParallel(learn.model)

Thank you!

Look at feature-visualisation papers. The first layer produces an activation map where activations are high in their specific locations. The next layer passes a filter over this map, taking neighbourhood activations into account, and produces another map of activations and their locations, and so on…
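The growth described above can be sketched numerically. Below is a hedged illustration (the function name and the recurrence are my own, not from the lecture) of how the receptive field of a single unit grows as 3×3, stride-1 convolution layers are stacked, which is why later layers see ever-larger neighbourhoods:

```python
# Hypothetical sketch: receptive-field growth for stacked convolutions.
# rf grows by (kernel - 1) * cumulative stride at each layer.
def receptive_field(num_layers, kernel=3, stride=1):
    rf, jump = 1, 1
    for _ in range(num_layers):
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

for n in range(1, 5):
    print(n, receptive_field(n))  # 1 layer sees 3 pixels, 2 see 5, 3 see 7, 4 see 9
```

So even though every filter is only 3×3, a unit four layers deep already responds to a 9×9 patch of the input.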

Does fastai do it by default? What does it do differently?

Thank you!

Thanks for the class and all your work responding to questions!

don’t forget to like the video! https://www.youtube.com/watch?v=7hX8yKCX6xM


Thank you to Jeremy and all involved! :clap::clap::clap:

thx a lot

Can anyone link to the thread/post of the NLT project which Jeremy mentioned?


If we have segmentation problems where we have to read the images and predict the masks, what version of the loader can we use in that case, since we don't have any labels for that kind of problem?

thx all!

Great! thx a lot

As Jeremy said, it's multi-CPU by default, but only single GPU.

Thank you, Jeremy!

Big claps!:clap::clap::clap::clap::clap:


Amazing lecture @jeremy. Looking forward to learning how to create our own dataset from Google Images. Thanks!

You've got to have labels; this is supervised training. Unsupervised training is possible, but that's in Part 2.
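For segmentation specifically, the mask is the label. A minimal sketch of that idea (the function, folder layout, and matching-by-filename convention are assumptions for illustration, not the fastai API) is to pair each image with its same-named mask file to form the (input, target) samples:

```python
from pathlib import Path

# Hypothetical sketch: assuming masks live in a parallel folder and
# share filenames with their images, build (image, mask) label pairs.
def pair_images_with_masks(img_dir, mask_dir, suffix=".png"):
    pairs = []
    for img in sorted(Path(img_dir).glob("*" + suffix)):
        mask = Path(mask_dir) / img.name  # same filename, different folder
        if mask.exists():
            pairs.append((img, mask))
    return pairs
```

Any dataset loader for segmentation is doing some version of this pairing under the hood; the mask tensor then plays the role the class label plays in classification.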


If we do nn.DataParallel, what difference would it make?