# Lesson 3 - Official Topic

Yes, I get this part, but let’s say that W is the tensor [1,2] and x is [1,1], so the summed result would be 3, which is not a valid probability. What I don’t understand is why this sum is a valid probability.

I think, at this point in the notebook, the discussion is focused on interpreting the output of a linear layer. As you point out, there is no reason for the output to be between 0 and 1; in fact, the output can be any number, positive or negative. To get something interpretable as a probability, i.e. a number between `0` and `1`, pass this output through the `sigmoid` function in the case of binary classification.
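To make this concrete with the numbers from the question above, here is a tiny plain-Python sketch (the notebook uses `torch.sigmoid`, but the arithmetic is the same):

```python
import math

def sigmoid(z):
    # Squashes any real number into the open interval (0, 1)
    return 1 / (1 + math.exp(-z))

# W = [1, 2], x = [1, 1]: the linear layer outputs sum(W * x) = 3,
# which is not a probability on its own...
logit = 1 * 1 + 2 * 1

# ...but sigmoid(3) ~ 0.95 can be read as the probability of the positive class
prob = sigmoid(logit)
print(logit, prob)
```

No matter how large or negative the linear output gets, the sigmoid output always stays strictly between 0 and 1.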

3 Likes

I am using Paperspace for this work. I am wondering why my fastai notebooks won’t load. What can I do to fix it? I have tried reloading the nbs.

Were you able to find how to make it run on Paperspace?

There’s a trade-off between batch size, data size (image, text, etc.), and GPU memory. Experience playing with all of these will help you develop some intuition.
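As a rough back-of-the-envelope sketch of that trade-off (the constants here are illustrative only; real GPU memory use also depends on the model’s activations, gradients, and optimizer state):

```python
def approx_batch_mem_bytes(batch_size, img_side, channels=3, bytes_per_val=4):
    # Very rough proxy: memory for one batch of float32 input images.
    # Activation memory inside the network scales similarly with batch * pixels.
    return batch_size * channels * img_side * img_side * bytes_per_val

mem_224 = approx_batch_mem_bytes(64, 224)   # batch of 64 at 224x224
mem_448 = approx_batch_mem_bytes(64, 448)   # same batch at 448x448

# Doubling the side length quadruples the pixel count, so to stay within
# the same memory budget the batch size must shrink by roughly 4x.
print(mem_448 // mem_224)   # 4
```

This is why going from 224×224 to 448×448 images typically forces you to cut the batch size to about a quarter.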

Larger images result in smaller batch sizes, but you can overcome some of this with transfer learning: build a model with smaller images, then work up to larger, higher-resolution images if that is what your application calls for. An older fast.ai course demonstrated this.

You can check out a post by @anotherjesse to see more on how this works. Exploring Image Size & Accuracy of Transfer Learning in Lesson 1 Pets

1 Like

When we download images from Bing for a classification problem, say cats vs. dogs, a few of the results are hand-drawn sketches or “clipart”-style images. Should one keep these images or delete them? That is, do they help the model in any way, or are they actually detrimental to training?

I am aware that if the proportion is very small it’s not a big deal. However, if I am cleaning up a fairly large dataset of 30k images, it would be great to know whether I should keep these images in or not. Does anyone have any experience with this?

1 Like

Please help me. I get this error: `/opt/conda/envs/fastai/lib/python3.8/site-packages/PIL/Image.py:960: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images`

I am working on the 02_production notebook. I tried downloading my own images and dataset for a toy project, and it gives me the error above. Nonetheless, I am able to train on the images, but I can’t clean them and train further.

1 Like

Hi Chikwado, I hope you are having a fun day!

You could use the above to convert all the problem images.
You could delete the images causing the problem and replace them with others.
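If it helps, here is a minimal sketch of the first option, assuming the downloaded images live in some folder of PNGs (the folder path and the `.png` glob are assumptions; adjust them for your dataset):

```python
from pathlib import Path
from PIL import Image

def convert_palette_to_rgba(folder):
    # Convert palette-mode ('P') PNGs that carry transparency to RGBA,
    # which is exactly what the PIL UserWarning is asking for.
    for p in Path(folder).rglob('*.png'):
        img = Image.open(p)
        if img.mode == 'P' and 'transparency' in img.info:
            img.convert('RGBA').save(p)
```

Running this once over your image folder before training and cleaning should silence the warning.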

Doing a Google search shows some possible causes, but many of the posts look quite old.

Cheers mrfabulous1

2 Likes

Thank you Mrfabulous1, I will check those out

I loved the image similarity stuff in this lesson so much that I decided to extend it to see how accurate it would be with all of the digits, and in the process made an interesting discovery about how the L1 and L2 norm functions perform on this data, with L2 increasing the accuracy dramatically.
It’s fascinating to actually see HOW these things make a difference in practice, and also to get an idea, for the future, of why you’d use one over the other.
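For anyone curious about why the two norms can rank images differently, here is a tiny plain-Python illustration of the idea (not the notebook’s tensor code, just the arithmetic): L1 treats many small errors and one big error the same, while L2 penalizes the single big error more.

```python
import math

def l1_dist(a, b):
    # Mean absolute difference ("L1 norm")
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def l2_dist(a, b):
    # Root mean squared difference ("L2 norm")
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

img = [0.0, 0.0]
spread_error = [0.5, 0.5]   # error spread evenly over both pixels
single_error = [0.0, 1.0]   # the same total error on a single pixel

print(l1_dist(img, spread_error), l1_dist(img, single_error))  # 0.5 0.5
print(l2_dist(img, spread_error), l2_dist(img, single_error))  # 0.5 vs ~0.707
```

Because digit images differ from the “wrong” mean image in a few strongly different pixels, the extra penalty L2 puts on large per-pixel differences can sharpen the separation between classes.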

Hi gulliver, hope all is well!

Good job.

Cheers mrfabulous1

Hi, how did you resolve this issue of “404: Not Found. You are requesting a page that does not exist”? I’ve been at it for a few hours now. I would really appreciate any help.

Can somebody explain to me why we need to add `.sum()` to the function when we pass it a vector (around 1:51:30 in the video)? I did some research but could not find a satisfactory answer.

@chschoenenberger

Imagine the structure of a neural network, with a first and a second layer of nodes.
a) The first-layer nodes take X as input and output (X * weights/parameters).
b) The second-layer nodes take the SUM of (X * weights/parameters) as input. The SUM is needed because there are multiple X’s in the first layer.

Here is a picture of those layers:
1 and 2 are the first-layer inputs,
-0.55 and 0.1 are the parameters/weights.
The SUM matters for the second layer; see the calculation.
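Using the numbers from that picture, here is a plain-Python sketch of why the elementwise products must be summed before being passed on: the second-layer node expects a single number, not a vector.

```python
x = [1, 2]          # first-layer inputs
w = [-0.55, 0.1]    # one weight per input

# Elementwise multiplication gives one product per input: [-0.55, 0.2]
products = [wi * xi for wi, xi in zip(w, x)]

# .sum() collapses the vector into the single activation (~ -0.35)
# that the second-layer node receives
activation = sum(products)
print(products, activation)
```

In the notebook the same thing happens with tensors: `(w * x)` is still a vector of per-input products, and `.sum()` collapses it to the scalar the next layer needs.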

Hey guys, I have a problem with Binder. Here is how it looks:

ERROR: voila 0.2.3 has requirement nbconvert<7,>=6.0.0, but you’ll have nbconvert 5.6.1 which is incompatible.

Where do I have to update nbconvert? On my local machine? I have already updated it there. How could I still have a different version of nbconvert? Should it be specified in the notebook repo I am using for Binder?

Thanks a lot. That’s approximately what my guess was.

I would like to share an approach to comparing images that I use in some cases. This doesn’t use deep learning, but, like the pixel-mean technique used in this lesson, it can give great results. If anyone would like to understand some of the theory behind this technique, please ask me. I got 1.0 (100%) accuracy on the 3 vs. 7 MNIST task.

```python
# Center the mean images and the inputs (subtract 0.5 so pixel values
# are spread around zero before computing correlations)
mean3_c = mean3 - 0.5
mean7_c = mean7 - 0.5

# An image is a "3" if it correlates more strongly with the mean 3 than the mean 7
def is_3_c(x): return correlation(x, mean3_c) > correlation(x, mean7_c)

accuracy_3s_c = is_3_c(valid_3_tens - 0.5).float().mean()
accuracy_7s_c = (1 - is_3_c(valid_7_tens - 0.5).float()).mean()
accuracy_3s_c, accuracy_7s_c, (accuracy_3s_c + accuracy_7s_c) / 2
```

Out: (tensor(1.), tensor(1.), tensor(1.))
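The post doesn’t show its `correlation` function. For anyone who wants to try the idea, here is one plausible implementation, assuming Pearson correlation between a flattened image and a flattened mean image; the original may differ, and it would need to be vectorized to run over a whole stack of validation images at once as in the snippet above.

```python
import math

def correlation(x, m):
    # Pearson correlation between two equal-length sequences of pixel values:
    # covariance divided by the product of the standard deviations.
    xs = [float(v) for v in x]
    ms = [float(v) for v in m]
    mean_x = sum(xs) / len(xs)
    mean_m = sum(ms) / len(ms)
    cov = sum((a - mean_x) * (b - mean_m) for a, b in zip(xs, ms))
    std_x = math.sqrt(sum((a - mean_x) ** 2 for a in xs))
    std_m = math.sqrt(sum((b - mean_m) ** 2 for b in ms))
    return cov / (std_x * std_m)

print(correlation([1, 2, 3], [2, 4, 6]))   # ~ 1.0 (perfectly correlated)
print(correlation([1, 2, 3], [3, 2, 1]))   # ~ -1.0 (perfectly anti-correlated)
```

Correlation is insensitive to overall brightness and contrast shifts, which is one reason it can work well for template matching like this.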

1 Like

Hi, I was getting the same error as well. In my requirements file, I changed voila to `voila<0.2.0` and that error went away for me in Binder. Hope it helps.
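For reference, that pin goes in the `requirements.txt` at the root of the repo you point Binder at; the file might then look something like this (the `fastai` line is just an example of what else would typically be there):

```
fastai
voila<0.2.0
```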

Thank you for the solution @pierreguillou !

Wanted to note that your solution is missing a closing parenthesis right at the end, after `path/cat`. Should be this:

```python
import os
import shutil

for idx, cat in cleaner.change():
    real_dst = os.path.join(path/cat, cleaner.fns[idx].name)
    if os.path.exists(real_dst):
        old_file_path = cleaner.fns[idx]
        old_cat = old_file_path.parent.stem
        new_file_path = f'{path/cat/old_cat}_{str(old_file_path.name.replace(" ", "").lower())}'
        shutil.move(str(cleaner.fns[idx]), new_file_path)
    else:
        shutil.move(str(cleaner.fns[idx]), path/cat)
```
1 Like