Oh, thanks! Ah, yes. I forgot to mention that I am on Safari. You said ad blocker, but in my preferences I only see settings for Content Blockers, Notifications, and Pop-up Windows.
In Chrome, for me it was the pop-up setting: if you allow pop-ups, you get the downloaded file.
Actually, I did a conda install in a new conda env.
But it's in the FAQ:
- If using conda please try:
conda update conda -y
conda update -c fastai fastai
Does anybody know where filenames are stored, or how to modify data.show_batch() to display the file name as the image title?
I tried using the widget for deleting bad images that was demoed in class, but it didn't really work in my JupyterLab environment. In addition, I wanted to get a random batch of images rather than the most incorrect ones, because the model isn't good enough yet for "most incorrect" to be that useful (35% error rate). data.show_batch() already seems suited to the task; I just can't seem to find where the file name is stored in the dataloader or dataset.
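A minimal sketch of the idea, using only the standard library (in fastai v1 the file paths are reportedly exposed as `data.train_ds.x.items`, but that attribute name is an assumption here; check your installed version). The helper draws a random batch of (index, filename) pairs, which could then be used as plot titles:

```python
import random

def random_batch_with_names(items, batch_size=9, seed=None):
    """Return (index, filename) pairs for a random batch.

    `items` stands in for the dataset's list of file paths; in fastai v1
    this is reportedly `data.train_ds.x.items` (an assumption -- verify
    against your version). The indices can be fed back into the dataset,
    and the names used as image titles.
    """
    rng = random.Random(seed)
    k = min(batch_size, len(items))       # don't over-sample a small dataset
    idxs = rng.sample(range(len(items)), k)
    return [(i, str(items[i])) for i in idxs]

# Hypothetical usage: pick 4 random images and keep their names for titles.
paths = [f"bears/img_{n:03d}.jpg" for n in range(100)]
batch = random_batch_with_names(paths, batch_size=4, seed=0)
```

Sampling by index (rather than shuffling the paths themselves) keeps the mapping back to the dataset intact, so you can still look up the image tensor for each name.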
Looks interesting; any experience with it? The GitHub repo seems to have been created only a month ago?
Can we get some tips on how to deal with a small number of samples when training an image classifier, or maybe a model in general? Also, can Jeremy repeat some details about rectangular vs. square images in the training data?
The fastai lib is changing from lesson to lesson. Should we stick to a specific release, or should we always get the latest one?
I have been doing:
pip install --upgrade fastai
between lessons to keep current. YMMV.
You should stick to the latest conda update.
Just wanted to quickly point out one thing:
In GCP, I had to do a git pull in /home/jupyter/tutorials/fastai/course-v3 (as opposed to /home/jupyter/tutorials/fastai) to update the repo, and then update the fastai library. Not sure if this is just for me, or for everyone else using GCP. Thanks.
I just want to thank Jeremy & Rachel for being so motivating and reminding us that it's OK to not understand everything on the first go-through! I have failed too many Coursera courses by nitpicking over initial setbacks.
When generating new image datasets, how do you know how many images are “enough”? What are ways to measure “enough”?
Any image copyright issues while doing that?
One can use this repo to download images and create a dataset… (completely automated). It works out of the box.
Is training on this kind of dataset (where you pick images off of Google, as opposed to an open-source dataset) legal?
What is the minimum number of images of each class you would usually need for your classifier to work reasonably well?
How do I get these collapsible headings in notebooks? They are nice. Does anyone know the right extension name?
Generally, 500+ images per category is enough. You can always add more images and check how much this improves performance.
It's the "Collapsible Headings" extension, in nbextensions.