Lesson 2 In-Class Discussion

Check out this one, AMI tunneling


Sorry for the brief forum outage - I was just upgrading the number of CPUs and the RAM so we have no problems tonight.


Due to daylight saving time, the live stream will be shifted by 1 hour for many folks in other countries.

Just to confirm: the data is from
https://www.kaggle.com/c/planet-understanding-the-amazon-from-space ?
I'm asking because I am running my own box.


Hi, is there any way to check if everything is up to date on Crestle? It seems not to have Anaconda installed. Do we need to set it up ourselves? Cheers.

edit: referring to

conda env update


You can sync the schedule with your Calendar app.

Before understanding DL, we should understand timezones first :frowning:


Timezones are hard. DL (the way Jeremy is teaching it) is easier than timezones :stuck_out_tongue:


@danielfr3 under the section ‘Lesson resources’ at the top of this page.

@jeremy Are embeddings different from features, or are they two ways of saying the same thing?


If so, I am having trouble downloading it with the Kaggle command-line tool.

@jeremy We can’t see you in the video.


How come the validation loss is smaller than the training loss?


Is lesson 1 using the test data at all? I don’t see a cats/dogs split in the test data dir.

Why aren’t we looking at validation accuracy? I thought validation accuracy and validation loss don’t correspond linearly. Or was it validation error?

In Kaggle competitions you don’t get the labels for the test dataset. You can only predict over the test dataset.

There’s a recent paper called “Don’t Decay the Learning Rate, Increase the Batch Size”. Is adjusting the learning rate the most effective way to converge, or is adjusting the batch size effective as well?


The test data doesn’t have labels. These are unlabeled images from the Kaggle competition. If you want to make a Kaggle submission, you should get these images, predict a class for each one using your model, and upload the predictions to Kaggle.
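The upload step usually means writing your predictions to a CSV in the format Kaggle expects. A minimal sketch, assuming predictions are already in hand as a filename-to-label mapping (the `id`/`label` column names here are placeholders; check the competition’s `sample_submission.csv` for the exact header):

```python
import csv

# Hypothetical predictions: test filename -> predicted class.
# In practice these would come from running your trained model
# over the unlabeled test images.
predictions = {
    "1.jpg": "dog",
    "2.jpg": "cat",
}

# Write a Kaggle-style submission file: one header row, then one
# row per test image.
with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "label"])
    for name, label in sorted(predictions.items()):
        writer.writerow([name, label])
```

The resulting `submission.csv` is what you upload on the competition’s submission page; Kaggle scores it against the held-back labels server-side.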

So the labels weren’t released after the fact?