General course chat

When I’m in the fastai forum and I click on a link to explore, then try to go back to the original page, I get this error (copied below). Has anyone else seen this?

I’m running Windows 7 64-bit, using the Firefox 63.0.1 (64-bit) browser.

==================
Corrupted Content Error

The site at https://forums.fast.ai/t/lesson-3-official-resources-and-updates/29732 has experienced a network protocol violation that cannot be repaired.

The page you are trying to view cannot be shown because an error in the data transmission was detected.

Please contact the website owners to inform them of this problem.

If I want to give images as input as well as labels, as we do in super resolution, how do I achieve this?

1 Like
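
One way, sketched with the fastai v1 data block API, is an item list whose labels are themselves images, with the transforms applied to the targets too. Everything below is an assumption on my part: the folder layout is hypothetical, and names like ImageImageList and split_by_rand_pct have shifted between fastai v1 releases, so treat it as a starting point rather than the official recipe.

from fastai.vision import *  # fastai v1

# hypothetical layout: low-res inputs in data/small, matching
# high-res targets with the same filenames in data/large
path_lr, path_hr = Path('data/small'), Path('data/large')

data = (ImageImageList.from_folder(path_lr)         # inputs are images
        .split_by_rand_pct(0.1)                     # hold out 10% for validation
        .label_from_func(lambda x: path_hr/x.name)  # labels are images too
        .transform(get_transforms(), size=128, tfm_y=True)  # same tfms on x and y
        .databunch(bs=16)
        .normalize(imagenet_stats, do_y=True))      # normalize targets as well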

Did others receive an email with the YouTube Live link for lecture 3 a day before the lecture (as happened for the first two lectures)? I didn’t receive it this time.

Also, I’m not sure if this is intended, but there’s an infinite loop in the first post. The link “Where is the YouTube Live stream details?” leads to a post in another thread, which links back to the first post in this thread.

Are there any Slack channels or online study groups going on?

No email. It’s probably worth checking the forum before the live stream starts; that’s where you will find the link.

For example, this pinned post is for lesson 3; there is going to be a similar post for lesson 4.

Thanks.

Hi Everyone,

This is to share my experience with progressive resizing (PR), taught in lesson 3.
I was able to get my error rate down from 11.2% to 6.2% using progressive resizing in two steps of image sizes (128 and 256).
Before learning about PR in lesson 3, I was stuck at an error rate of 11.2%.
I recommend everyone try it. I will share my notebook and the web app soon. (It is an image classifier for 3 different types of leaves.)
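
Roughly, the pattern looks like this (a sketch only; the dataset path is a placeholder, and the exact fastai v1 names, e.g. create_cnn, depend on your version):

from fastai.vision import *

path = Path('data/leaves')  # placeholder path

# stage 1: train at 128px
data_128 = ImageDataBunch.from_folder(path, valid_pct=0.2, ds_tfms=get_transforms(),
                                      size=128, bs=64).normalize(imagenet_stats)
learn = create_cnn(data_128, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)

# stage 2: swap in 256px data and keep fine-tuning the same weights
data_256 = ImageDataBunch.from_folder(path, valid_pct=0.2, ds_tfms=get_transforms(),
                                      size=256, bs=32).normalize(imagenet_stats)
learn.data = data_256
learn.freeze()                  # retrain the head at the new size first
learn.fit_one_cycle(4)
learn.unfreeze()
learn.fit_one_cycle(4, max_lr=slice(1e-5, 1e-4))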

3 Likes

I am working on the Planet Amazon classification. The data resizing line is as follows:

sz = 64
data = get_data(sz)
data = data.resize(int(sz*1.3), '/tmp')

Afterwards, when I call img.shape on the images inside the data object, I get values other than sz. What seems to be the issue?

Is it setting the data object to a sz of 64, with the resizing happening during training?
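
If this is the fastai 0.7 planet notebook, that is expected: data.resize() only pre-shrinks the image files on disk (here so the smaller side is int(sz*1.3)) to speed up loading, while the final crop down to sz is applied by the transforms each time a batch is drawn. So the stored images won’t be sz, but a training batch should be; a quick check (assuming the usual trn_dl attribute):

x, y = next(iter(data.trn_dl))
print(x.shape)  # expect (batch_size, 3, 64, 64) for sz = 64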

Is there any Python module that can do the same job as the untar_data function, but for zip files?

Within a Jupyter Notebook try:

!unzip FILENAME
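
Or, staying in Python, the standard library’s zipfile module does the same job; a minimal sketch (the archive name and destination are placeholders):

import zipfile

# extract every member of the archive into data/
with zipfile.ZipFile('FILENAME.zip') as zf:
    zf.extractall('data/')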

4 Likes

I downloaded a zip file with a bunch of image files (.pngs) from the Kaggle competition below. The zip file is 1 GB.

I downloaded them into a folder called data on my AWS instance and forgot to put the data directory path into my .gitignore.

I unzipped the files and successfully moved them from a complicated directory tree into two labeled folders for image classification.

Then I tried to git add -A

I got this error:

error: file write error (No space left on device)
fatal: unable to write sha1 file

Yikes! It looks like I’m using all of the space, according to the output below.

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             30G     0   30G   0% /dev
tmpfs           6.0G  8.8M  6.0G   1% /run
/dev/xvda1       73G   73G     0 100% /
tmpfs            30G     0   30G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            30G     0   30G   0% /sys/fs/cgroup
/dev/loop0       88M   88M     0 100% /snap/core/5662
/dev/loop1       17M   17M     0 100% /snap/amazon-ssm-agent/784
/dev/loop2       13M   13M     0 100% /snap/amazon-ssm-agent/295
/dev/loop3       17M   17M     0 100% /snap/amazon-ssm-agent/734
/dev/loop4       88M   88M     0 100% /snap/core/5548
/dev/loop5       88M   88M     0 100% /snap/core/5742
tmpfs           6.0G  4.0K  6.0G   1% /run/user/1000

$ df -hi
Filesystem     Inodes IUsed IFree IUse% Mounted on
udev             7.5M   330  7.5M    1% /dev
tmpfs            7.5M   442  7.5M    1% /run
/dev/xvda1       9.2M  1.6M  7.7M   17% /
tmpfs            7.5M     1  7.5M    1% /dev/shm
tmpfs            7.5M     3  7.5M    1% /run/lock
tmpfs            7.5M    16  7.5M    1% /sys/fs/cgroup
/dev/loop0        13K   13K     0  100% /snap/core/5662
/dev/loop1         15    15     0  100% /snap/amazon-ssm-agent/784
/dev/loop2         13    13     0  100% /snap/amazon-ssm-agent/295
/dev/loop3         15    15     0  100% /snap/amazon-ssm-agent/734
/dev/loop4        13K   13K     0  100% /snap/core/5548
/dev/loop5        13K   13K     0  100% /snap/core/5742
tmpfs            7.5M     6  7.5M    1% /run/user/1000

I guess my options are:
(1) get more disk space
(2) delete some of the .pngs

Is there a way to know how big a set of files is going to be after it is unzipped?

  • With this particular dataset, the first time I unzipped it, extraction produced a second zip file that I also unzipped

If I want to go with (2), since I seem to have hit 100% of my space, how do I know whether all the files were actually unzipped? (I didn’t get any errors after unzipping.)

Help interpreting the output of df -h/-hi, and any other suggestions for working around this problem, would be very much appreciated.

Thanks!
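
On the two unzip questions: the zip format records each member’s uncompressed size, so you can total it up before extracting anything, and you can compare the archive listing against the files on disk to check that extraction completed. A sketch with the standard library (the names are placeholders, and the existence check only works before you move the files around):

import zipfile
from pathlib import Path

archive, dest = 'train.zip', Path('data')  # placeholder names

with zipfile.ZipFile(archive) as zf:
    infos = zf.infolist()
    # (1) total uncompressed size, before extracting anything
    total = sum(i.file_size for i in infos)
    print(f'needs ~{total / 2**30:.1f} GiB uncompressed')
    # (2) check that every member actually made it onto disk
    missing = [i.filename for i in infos
               if not i.is_dir() and not (dest / i.filename).exists()]
    print(f'{len(missing)} files missing')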

Can anyone give the lesson 4 link?

1 Like

You can always attach more disks to an existing VM. Create one and then mount it.

Is it correct that the lesson 4 link has not yet been released?

2 Likes

No email here either, so indeed hoping the forum post appears.

1 Like

Lesson 4 link, anyone? I didn’t see a thread created yet for lesson 4 in the place I would normally find one.

20 Likes

Does anyone have the link for lesson 4?

2 Likes

Can anyone share the link for lesson 4 live stream?

2 Likes

I see that there is a recently added topic on the subject by @rachel here:

There is a placeholder there for the link to the livestream; however, it’s not working as of the time of writing.

4 Likes

Usually it’s up 15 minutes before the class starts…

2 Likes