General course chat

Ok, I’m confused. I’ve downloaded the fast.ai stuff onto a machine and have it working: I pull up the web browser and it shows me a directory of courses etc. Then I listen to lecture 1 and try to follow along in the matching notebook, and it doesn’t match. The notebook is about cats vs dogs, but the lecture is about dog breeds; maybe it’s in lesson1-breeds instead of lesson1. But hey ho, I move on to lesson 2. Now lesson 2 in the notebooks is only “lesson2-image_models.ipynb”, but the talk is unrelated to it. So clearly I’ve got the wrong notebooks/lessons, or a different version of the course. Can someone tell me where I’ve gone wrong? This course is excellent, but I’m hugely confused about why nothing matches. :slight_smile: Thanks!

Did you clone the right repository? You need to clone the course-v3 repo.

The one I cloned was: https://github.com/fastai/course-v3.git

1 Like

Thanks, no I was using the one that came with the main fast.ai repository, I didn’t realize I needed a different one for the course. Much appreciated.

1 Like

Thanks for the help man:)

Hey All,
Where do the edge values (the weights) initially come from? As I understand it, they are random and then become more useful via backprop. Is there a range that the random values have to fall within? If someone knows where Jeremy goes more in depth on this, I would love a lecture number and timestamp. Thanks!
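For reference: in most frameworks the initial weights are drawn at random from a range that depends on the layer's size (e.g. Kaiming/He or Xavier/Glorot initialization), rather than from a fixed range. A minimal sketch of the idea in plain Python, where the bound formula is one common choice and frameworks differ in the exact constant:

```python
import math
import random

def init_weights(n_in, n_out, seed=0):
    # Kaiming/He-style uniform init: the bound shrinks as fan-in (n_in)
    # grows, which helps keep activations from exploding or vanishing
    # in the early stages of training.
    rng = random.Random(seed)
    bound = math.sqrt(6 / n_in)
    return [[rng.uniform(-bound, bound) for _ in range(n_out)]
            for _ in range(n_in)]

# a toy 100-input, 10-output layer
w = init_weights(100, 10)
```

After backprop starts, these values move toward something useful; the initialization just gives training a sane starting point.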

I have a question regarding AI. I’m new to this, but I fail to see the point in having a single AI doing all the work.

I had an idea like this, but I don’t know if it’s too far off into space:

Multiple AIs with very specific jobs to do, that work in harmony:

  • AI 0: BIOS. Basic needs encoded into the AI; this BIOS overrides all other functions, reasoning to achieve the need. Survival instinct (such as more battery power).
  • AI 1: Suppression of unneeded information, prediction of consequences and actions, the judge between good and bad, right or wrong answers. Frontal lobe.
  • AI 2: Assistance in interpreting the senses, as well as numbers and jobs; understanding of objects, shapes, and space. Parietal lobe.
  • AI 3: Perception assist; interpretation of sound; plays a role in visual memory and objects. Temporal lobe.
  • AI 4: Processing of senses and visual stimuli, as well as information. Occipital lobe.
  • AI 5: Plays a major role in balance and movement, as well as other motor skills. Cerebellum.
  • AI 6: Allows the transfer of information between the finalized actions decided among the AIs and the body; this AI also controls automatic functions. Brainstem.
  • AI 7: Fight or flight.

Reasoning behind this: the human mind can have multiple personalities, so I don’t see why we can’t have multiple personalities acting in unity as one singular personality.

I fail to see, again and again, why only one AI is used for all of the work, which is frankly ridiculous and stupid, but works for simple things.

The idea is very simple.

Instead of having a single AI doing all the work and possibly making errors once in a while, you have other AIs that specialize in a particular field, have the final say in it, and can add to the information or correct the mistakes made.

Hello people, I created a Flower Classifier web app. You can upload an image and it will classify the type of flower. Currently it can only classify 5 types of flower: daisy, dandelion, rose, sunflower, and tulip. Here is the link: https://github.com/lkit57a03/Fastai-flask-flower-deploy. Please use the regular pip version to run the code (I haven’t tested the Docker setup).

I was watching the video about building the image recognition heatmap from scratch, and got to wondering about the different parts of an image that a neural net finds important when deciding on a classification. It seems like for misidentified images, the neural net might focus on a part of the image that is actually less important. In order to improve predictions, could you essentially tell the neural network to focus on something else by zeroing out the pixels it originally thought were important, and training again on that image? I’m just brainstorming some crazy ideas here, and I’m not really at the point technically where I’m comfortable implementing this myself, but I’m curious whether there is something fundamentally wrong with that line of thinking. Thanks!
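The zeroing-out idea described above is close to what is usually called occlusion: mask a region of the image and see how the prediction changes. The masking step itself is trivial; a toy sketch with NumPy, where the image and coordinates are made-up stand-ins:

```python
import numpy as np

def occlude(img, y, x, size):
    # zero out a size-by-size patch so a model can't rely on that region;
    # comparing predictions before and after shows how important it was
    out = img.copy()
    out[y:y + size, x:x + size] = 0
    return out

img = np.ones((8, 8))            # toy single-channel "image"
masked = occlude(img, 2, 2, 3)   # knock out a 3x3 patch
```

Whether retraining on such masked images actually improves a real model is an open question (this is essentially a data-augmentation variant), but the mechanics look like the above.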

1 Like

Very cool. It would also be great if you had the code where you train your model! I :heart: metrics.

Does anyone know exactly how to post new questions here?

Hello folks, this is a total newbie question. I’d like to know how to pick a learning rate from the LR plot below. Actually, I don’t understand: in a value like 5e-6, where do we get the 5 before the e?
Do we count the dashes after 1e-6 on the LR axis?
How do we pick values between, let’s say, 1e-6 and 1e-5?
lr_finder

1e-6 is the same as 0.000001.
So 5e-6 is 5 × 0.000001 = 0.000005, i.e. half of 1e-5, which sits between 1e-6 and 1e-5 on the axis.
:slight_smile:
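You can also check the notation in Python itself, since it uses the same scientific notation as the plot’s axis:

```python
# 5e-6 is scientific notation: 5 times 10 to the minus 6
assert 5e-6 == 0.000005

# it sits between the 1e-6 and 1e-5 gridlines on the LR plot,
# so any value in that gap can be written as <digit>e-6
assert 1e-6 < 5e-6 < 1e-5
```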

1 Like

Hello, fast.ai newcomer here. I started the first lesson last week, and I’m running the salamander.ai Jupyter notebook on a Win10 PC in the Chrome browser. I’m not an experienced Python developer; I recently completed the edX/MITx Python 6.00.1 & 6.00.2 courses (intro to Python and computer algorithms). I’m currently working on the Lesson 1 homework. I have images (crabs & crawfish) uploaded to the Jupyter notebook, and I’m working on notebook mods to reference the crab/crawfish paths and directories. I haven’t had any luck yet but am grinding through it. Excited to be here, and I look forward to completing the course.

1 Like

welcome,

let me know if you need help. I had the same thing when I started out.

Hi Matej, Thank you very much. I hadn’t used a Jupyter notebook prior to this course. I was able to complete most of the lesson 1 homework (classify your own image set).

(I’m going to ramble a bit, just to document what I learned. no need to read it all. LOL)

I wasn’t able to scrape images from Google and upload them. As an alternative, I downloaded the Caltech “101_ObjectCategories.tar” dataset and created a subset to train on (crab, crayfish, crocodile, seahorse, starfish, etc.).

I made a copy of the Lesson 1 notebook and modified the Python code. The most significant problem was figuring out the correct path to the images, LOL. (I uploaded mine to salamander.ai, so I didn’t need the untar() function Jeremy used.)

Finding the path took a while. The answers were found in help(Path), and I used Path.home() and Path.cwd() to show the general location. Then I modified the path to point to the training data.
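For anyone hitting the same path problem, the pathlib calls mentioned above are enough to orient yourself; the data/train layout below is just an assumed example, not the actual folder name on salamander.ai:

```python
from pathlib import Path

print(Path.cwd())    # the directory the notebook/kernel is running in
print(Path.home())   # the user's home directory

# hypothetical layout: adjust to wherever your images actually live
data_path = Path.cwd() / "data" / "train"
print(data_path.exists())
```

The `/` operator on Path objects joins path components portably, which is why it works the same on Windows and Linux instances.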

With regard to uploading the training and validation images, I found a clever tip on Stack Overflow. Once I organized the images into train, valid, and test directories, I zipped up the files and uploaded the archive into the Jupyter notebook. In the dataset folder, I created a notebook and ran the following code (I had to change the file name). The unzip process created the folder substructure that was difficult to create manually.

import zipfile

# open the uploaded archive and extract it, recreating the folder structure
with zipfile.ZipFile("ZippedFolder.zip", "r") as archive:
    archive.extractall("directory to extract")

This worked really nicely.

Next, I modified the ImageDataBunch to get tags from the folder names. Rewatching the end of the lesson 1 video showed how to do it.
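The labels-from-folder-names trick (roughly what the from-folder loading does under the hood) can be sketched with plain pathlib; the crab/crayfish structure below is a stand-in dataset built in a temporary directory:

```python
import tempfile
from pathlib import Path

def labels_from_folders(root):
    # each image's label is simply the name of its parent directory
    return {p.name: p.parent.name for p in sorted(Path(root).glob("*/*.jpg"))}

# build a tiny fake train/ layout to demonstrate
root = Path(tempfile.mkdtemp())
for cls in ["crab", "crayfish"]:
    d = root / cls
    d.mkdir()
    (d / f"{cls}_001.jpg").touch()

labels = labels_from_folders(root)
```

So as long as the images are sorted into one folder per class, the labels come for free from the directory structure.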

I’ll need to loop back and re-read how to scrape google images and upload.

2 Likes

Hi, I’m on lesson 3, downloading the Planet dataset from Kaggle.
I’m able to download the dataset but unable to unzip it.
I get a syntax error when I try running this code (as in the lesson notebook):

! 7za -bd -y -so x {path}/train-jpg.tar.7z | tar xf - -C {path.as_posix()}

@matejthetree

I do not get the data after entering my password. I tried the West Coast and East Coast machines (I have two machines now; I will delete one after I get everything set up right and complete). Does anyone know why I don’t get the DATA part? Thank you for your help.

Hi, I’m getting frustrated with versions of fast.ai and the course. I installed fast.ai on Windows, and it all seemed to work fine (the notebooks etc.). But then I found the course was a different version, so instead I had to download this repository:


Now this has the right notebooks, so I want to go through this one:
lesson7-superres.ipynb
But to make it work I’ve got to put it at the same/right part of the directory tree. I tried putting it into the same folder as the ones I installed. So I access it here:
/courses/dl1/lesson7-superres.ipynb
And it fails with this error:
ModuleNotFoundError: No module named 'fastai.vision'
The web suggests I do this: print(sys.modules['fastai'])
<module 'fastai' from 'C:\fastai\fastai\courses\dl1\fastai\__init__.py'>
So it seems I’m using the older fastai library:

Directory of C:\fastai\fastai\courses\dl1
05/09/2019 02:22 PM fastai […\old\fastai]

So I try it outside that path to use the latest version.
http://192.168.1.11:8888/notebooks/examples/lesson7-superres.ipynb
And I get other weird errors about missing things, so I try installing the things that are missing, then I get:

ImportError: Something is wrong with the numpy installation. While importing we detected an older version of numpy in ['C:\Users\chrisp\Anaconda3\envs\fastai\lib\site-packages\numpy']. One method of fixing this is to repeatedly uninstall numpy until none is found, then reinstall this version.

So I uninstalled all the numpy installs and installed a fresh one. Same error.

So clearly I’ve gone horribly wrong somewhere.

HELP :slight_smile:

ChrisP.

Ok, I found the solution.
Create a link to fastai from the course-v3 dl1 folder…

C:\fastai\fastai\course-v3-master\nbs\dl1>mklink /d fastai …\fastai
symbolic link created for fastai <<===>> …\fastai

And run jupyter notebook (with the fastai environment activated) from the C:\fastai\fastai\course-v3-master path.

And possibly something else I did. :slight_smile: (sorry)

ChrisP.

were you able to make it work?