Lesson 1 discussion - beginner

Hi, look here:


Thanks, that’s worked out: https://i.imgur.com/QmIIM8L.jpg

Great!

Hi dortonway, I noticed you were able to train on your own data. I am wondering how you did it.

I created a new folder in ‘data’ with new train and valid folders, and I put my images into them. But when I run the model, it always automatically creates .ipynb_checkpoints, and the run breaks.

May I ask if you did the same process? Or what should I do to train my own images?

Thank you!!!
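For context, the folder layout and loading code I would expect with the course’s fastai 0.7 API look roughly like this (the dataset name, image size and learning rate below are just placeholders):

# layout: data/myset/train/<class>/*.jpg and data/myset/valid/<class>/*.jpg
from fastai.conv_learner import *

PATH = 'data/myset/'
arch = resnet34
sz = 224

tfms = tfms_from_model(arch, sz)
data = ImageClassifierData.from_paths(PATH, tfms=tfms)
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 3)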

It shouldn’t happen now.
Are you on the latest build of fast.ai?

I just tried this afternoon and had this issue

Have you solved the problem? I read all your posts but still can not fix it.

Try removing them manually using rm

I set up my Paperspace machine and I seem to be having errors at the very beginning (when I am trying to import the libraries). The kernel is dying.

I updated the conda environment, but everything seems to be up to date.

I am unable to proceed with the rest of the code as it is throwing errors (like torch is not recognized, os is not recognized, etc.). I thought this might be the main reason.

Can anyone please let me know what is the problem?

Hi, superives.

You can see what I did / my experiments here.

Thanks for the reply. May I ask how you downloaded the images?
I used methods from here (https://github.com/hardikvasa/google-images-download) and moved pictures to new folders. Did you do the same?

Thank you!
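In case it helps, this is roughly how that repo’s Python API can be driven (the keywords and limit below are just made-up examples; I then moved the images by hand into the train and valid folders):

# pip install google_images_download
from google_images_download import google_images_download

downloader = google_images_download.googleimagesdownload()
# downloads up to 100 images per keyword into a local 'downloads/<keyword>/' folder
downloader.download({'keywords': 'plants,animals', 'limit': 100, 'format': 'jpg'})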

Did you just use ‘rm .ipynb_checkpoints’ in the proper folder? I tried it, but it said it was a directory so it could not be deleted. Do you mind telling me if I made a mistake here? Thank you!

Yes, I’ve used that repo, but I did nothing to .ipynb_checkpoints.

Actually, if it’s a directory, use rm -r to delete it recursively.
Warning: it will delete everything in that directory.

Just type man rm or do a quick Google search for more information.

Thanks, I used rm -r .ipynb_checkpoints to get rid of this issue
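If anyone prefers to do this from inside the notebook instead of the shell, something like this sketch (the data path is just an example) removes every checkpoint directory Jupyter has created under the data folder:

import os, shutil

PATH = 'data/myset/'
for root, dirs, files in os.walk(PATH):
    if '.ipynb_checkpoints' in dirs:
        # delete the checkpoint directory and skip descending into it
        shutil.rmtree(os.path.join(root, '.ipynb_checkpoints'))
        dirs.remove('.ipynb_checkpoints')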


Hey ecdrid,
Did you complete the whole planet classification problem?
I am wondering how long it took you to run TTA after training on the 256-pixel images.

Thanks

I did it on an AWS server; probably 30-45 mins.
Can’t recall exactly.
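For anyone curious what that step looks like, running TTA with the course’s fastai 0.7 API is roughly the following (learn is the trained ConvLearner from the notebook; the averaging line is the same pattern used in the lesson notebooks):

# test-time augmentation over the validation set
log_preds, y = learn.TTA()
probs = np.mean(np.exp(log_preds), 0)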

I did the same. I used the Python tool that @ecdrid showed at the beginning of the wiki thread to download Google images, and built a classifier for plants vs animals. It works pretty well, although some downloaded images couldn’t be opened.

If anyone wants to see the Jupyter notebook with my code, it’s here on GitHub.
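About the images that couldn’t be opened: a small sketch like this (the folder path is a placeholder) can be used to find and delete files PIL cannot read before training:

from pathlib import Path
from PIL import Image

for fn in Path('data/plants_animals').rglob('*.jpg'):
    try:
        Image.open(fn).verify()   # raises if the file is truncated or not an image
    except Exception:
        print('removing broken image:', fn)
        fn.unlink()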


I did not know we should raise questions here. I created separate threads and am not getting answers.

I am stuck with a small issue in SGDR, as explained here. Can anyone kindly have a look?

So now that the model is trained on dogs and cats, how do I input a new image and have the model tell me whether it is a dog or a cat (and with what confidence)? I guess normally you would write an application around it? For now, to keep it simple, could that be done inside a Jupyter notebook?
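One way to do that inside the notebook, assuming the fastai 0.7 API used in the course (arch, sz, data and learn are the ones already defined in the lesson notebook; the image path is a placeholder), is roughly:

trn_tfms, val_tfms = tfms_from_model(arch, sz)
im = val_tfms(open_image('data/dogscats/test1/some_image.jpg'))
learn.precompute = False              # predict from the image, not precomputed activations
log_pred = learn.predict_array(im[None])
print(data.classes[np.argmax(log_pred)])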

What is the equivalent of the line below in fastai 1.0?

tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)

I see there is get_transforms(), but it does not take a model architecture to get the normalization statistics from. Kindly help.
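If it helps, my understanding (not confirmed in this thread) is that in fastai 1.0 the augmentations and the model-specific normalization are specified separately, roughly like this (path and sz are placeholders):

from fastai.vision import *

# augmentations roughly matching transforms_side_on with max_zoom=1.1
tfms = get_transforms(flip_vert=False, max_zoom=1.1)

# the normalization statistics are applied on the DataBunch rather than taken from the arch
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=sz).normalize(imagenet_stats)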