Developer chat

Thanks for your always quick response. I guess you mean more than 1 batch here.
When I slowly reduce the batch size so that there are at least 4 batches in an epoch, the problem disappears. Last time I got this ZeroDivisionError it was due to the RandomLight transformation, so I thought I was running into the same issue again until I realized it was something else.

Maybe I could do a small PR that throws a clear error to stop users from doing this on a dataset that is too small, instead of letting it surface as a ZeroDivisionError?
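Something along these lines, just a sketch of the kind of guard I mean (the function name and message are made up, not fastai's actual API):

def check_enough_data(n_items: int, batch_size: int) -> None:
    "Fail fast with a readable message instead of a ZeroDivisionError later on."
    if n_items < batch_size:
        raise ValueError(
            f"Dataset has only {n_items} items but batch_size={batch_size}; "
            f"use a smaller batch_size or a larger dataset.")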
Thanks!

Here is my attempt to reproduce this error with examples/vision.ipynb: put the notebook in the examples/ directory and run it to reproduce the error. It creates a subfolder “sample” in the original directory and copies 50 images into each class.

/home/mediumnok/.fastai/data/mnist_sample
├── models
├── sample
│   ├── models
│   ├── train
│   │   ├── 3
│   │   └── 7
│   └── valid
│       ├── 3
│       └── 7
├── train
│   ├── 3
│   └── 7
└── valid
    ├── 3
    └── 7
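
The copying step is roughly this (a sketch with assumed paths and per-class count, not the exact notebook code):

import random, shutil
from pathlib import Path

def make_sample(src: Path, n_per_class: int = 50) -> None:
    "Copy n_per_class random images per class from train/ and valid/ into src/sample/."
    for split in ("train", "valid"):
        for cls_dir in (src/split).iterdir():
            if not cls_dir.is_dir(): continue
            dest = src/"sample"/split/cls_dir.name
            dest.mkdir(parents=True, exist_ok=True)
            for img in random.sample(list(cls_dir.iterdir()), n_per_class):
                shutil.copy(img, dest/img.name)

make_sample(Path.home()/".fastai"/"data"/"mnist_sample")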

Already fixed in master :wink:

that was quick…:open_mouth:

The model_sizes() function in hooks.py throws an error when I run model_sizes(learn.model).

I found that it is because, by default, x = torch.zeros(1,ch_in,*size) creates x on the CPU rather than on CUDA. So changing this line to x = torch.zeros(1,ch_in,*size).cuda() fixes the problem.

I don't know if it's just my setup that creates tensors on the CPU by default; I don't remember configuring it :smiley:

P.S. Is it OK if I do a PR and also post it in the forum? I think more people will see the post in the forum and could do a PR in case the admins miss this post.

There's no problem with doing a PR to try and fix things. In this case, your solution only works when the model is on the GPU, whereas it could be on the CPU (in which case the old version works :wink: ). The right answer is to put x on the same device as the model.
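Roughly something like this, a minimal sketch of that idea (not necessarily the exact patch that went into hooks.py):

import torch
import torch.nn as nn

def dummy_batch(model: nn.Module, ch_in: int = 3, size: tuple = (64, 64)) -> torch.Tensor:
    "Create a zeroed fake batch on whatever device the model's parameters live on."
    device = next(model.parameters()).device
    return torch.zeros(1, ch_in, *size, device=device)

That way it works whether the model sits on the CPU or a GPU.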


Merged a big change: Learner objects now determine from the loss function if there is something to add on top of the model to get the true predictions (see the sketch after the list). As a result:

  • get_preds now returns the true probabilities
  • TTA averages the probabilities and not the last activations of the model
  • ClassificationInterpretation has been changed accordingly and the sigmoid argument has been deprecated
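
A hypothetical sketch of the idea (the function and mapping below are illustrative, not fastai's actual internals): inspect the loss function and pick the activation that turns the raw outputs into probabilities.

import torch
import torch.nn as nn
import torch.nn.functional as F

def activation_from_loss(loss_func):
    "Pick the activation that maps raw logits to probabilities for this loss."
    if isinstance(loss_func, nn.CrossEntropyLoss):
        return lambda out: F.softmax(out, dim=-1)
    if isinstance(loss_func, nn.BCEWithLogitsLoss):
        return torch.sigmoid
    return lambda out: out  # the loss already expects post-activation outputs

get_preds and TTA then apply something like this on top of the model's raw outputs before reporting or averaging.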

fastai.__version__ is still 1.0.15 as before… Is there an automated way to update the version number?
@stas This might (maybe?) be a case for using a date in .__version__, as torch does.

When I try to define a ConvLearner after the most recent git pull I get the following error:

NameError: name 'ConvLearner' is not defined

Switching to the normal Learner class results in this error:

AttributeError: 'function' object has no attribute 'to'

Did something change under the hood?

EDIT: Yes, something changed: http://docs.fast.ai/vision.learner.html
(I will write 100x: always check docs.fast.ai before asking a question. :wink: )
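
For anyone else hitting this: the ConvLearner constructor was replaced by a create_cnn function (see the changelog note further down). Something along these lines should work after the pull — a hedged example assuming the standard fastai v1 vision setup:

from fastai.vision import *   # assumes a current fastai v1 dev install

path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)

# old API (now removed):  learn = ConvLearner(data, models.resnet18)
learn = create_cnn(data, models.resnet18)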

I don't see any helper function for saving an Image class to a file. Maybe this could be useful for fastai/vision/image.py? I haven't quite figured out the imports yet: when I do from fastai.vision import *, I can call open_image() directly, but for this new function I added I can only do image.save_image().

def save_image(fn:PathOrStr, img:Image):
    "Save a fastai `Image` to file `fn` via PIL."
    x = image2np(img.data*255).astype(np.uint8)  # CxHxW float in [0,1] -> HxWxC uint8
    PIL.Image.fromarray(x).save(fn)

P.S. Just noticed that I need to add save_image to the __all__ list, which is quite magical to me.
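
For context on that last point, a minimal illustration of how __all__ interacts with star-imports (module and function names below are made up):

# hypothetical_module.py
__all__ = ['open_thing']      # a star-import exports only the names listed here

def open_thing(): ...         # visible after `from hypothetical_module import *`
def save_thing(): ...         # importable explicitly, but invisible to `import *`

That is why a new function is only reachable via the module path until it's added to __all__.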

That only updates when there’s a release. If you want to use the bleeding edge version, use the ‘developer install’ approach in the readme.

You haven't followed up on my reply to your question in Developer chat, so I'm not sure what you're commenting on, @gsg. Why did you expect it to suddenly become different? Why do you want the date in the version? You can look at the fastai/version.py timestamp if you really need the date.

@gsg is asking to have the date embedded in the version like the pre-release versions of pytorch. But pytorch is only doing that temporarily until 1.0 is released if I understand their strategy correctly. We have already released 1.0, so we are on normal version numbers now.

So I tried to clone my fork repo and then run this command; it creates a repo inside my current fork repo, and then I have to go inside and check out the new branch. Did I do something wrong here? Where should I run this command? Thanks.

tools/fastai-make-pr-branch

Yes, I do use the bleeding edge version w/developer install.
Sorry about the confusion, I was referring to Sylvain's post in Developer chat:
"Merged a big change: Learner objects now determine from the loss function…"

I (tried to) pull that merge, but the version was still 1.0.15, so I was not sure whether I got the newer version (with Sylvain's changes) or the previous one without them.
I understand that both before and after the merge are not "releases", so both are still 1.0.15… so adding something like 1.0.15.20181028 would help differentiate between them.
Just a small bandaid for those at the bleeding edge…

Thank you for trying the new tool, @nok.

First, to explain how it currently works:

It clones the repo into wherever you are running it from.

It first checks whether the directory you're in is already a clone of the repo you want to work on; if so, it doesn't clone but re-uses the current checkout. The logic is to compare the output of:

git config --get remote.origin.url

with the url you are asking for, so for example if I’m inside the original fastai repo, the above command will return:

git@github.com:fastai/fastai.git

but if I’m asking for the fork of the same, which in my case would be git@github.com:stas00/fastai.git, then it can’t reuse that checkout and must make a new one. And so it does.

However, if I'm already inside a checkout that matches git@github.com:stas00/fastai.git and I invoke tools/fastai-make-pr-branch for the same repo, it will not do a new checkout and will use the current one instead.
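
In other words, roughly this logic (a Python rendering just for illustration; the real tool is a bash script):

import subprocess

def can_reuse_checkout(wanted_url: str) -> bool:
    "Return True if the current directory is already a clone of wanted_url."
    try:
        current = subprocess.run(
            ["git", "config", "--get", "remote.origin.url"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
    except subprocess.CalledProcessError:
        return False  # no origin remote, or not inside a git checkout at all
    return current == wanted_url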


Now to how we can improve usability. I think the issue is that when you call it from the fastai repo with tools/fastai-make-pr-branch, that's where it will create the new clone. So ideally it should not be called that way, but as explained here: https://docs-dev.fast.ai/git.html#helper-program

curl -O https://raw.githubusercontent.com/fastai/fastai/master/tools/fastai-make-pr-branch
chmod a+x fastai-make-pr-branch
./fastai-make-pr-branch https your-github-username fastai new-feature

Another approach is to position yourself in the base directory where you want the clone to happen:

cd fastai
cd ..
fastai/tools/fastai-make-pr-branch https your-github-username fastai new-feature

Or put the script somewhere in your $PATH, so that you can invoke it from anywhere.

Or we could instrument it to take an extra argument so that the user can specify where the output should go. So if you do call it from the fastai checkout folder you could say:

cd fastai
tools/fastai-make-pr-branch https your-github-username fastai new-feature ../put-it-here

Thoughts?

Thank you for the clarification, @gsg. We already have that mechanism in place: it's .dev0. If you have 1.0.15 then you're using a released version; if you now do a dev install you will get 1.0.16.dev0 after git pull, and then you're on the bleeding edge. You weren't before.

The timeline is:

...
1.0.14
1.0.15.dev0
1.0.15
1.0.16.dev0
...
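
So a quick way to see what you are actually running after a git pull (plain Python, nothing assumed beyond the import):

import fastai

print(fastai.__version__)  # e.g. '1.0.15' on a release, '1.0.16.dev0' on the bleeding edge
print(fastai.__file__)     # shows which checkout or site-packages the import resolved to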

Thanks for the follow-up @stas.
My understanding now is that if we do the developer install

git clone https://github.com/fastai/fastai
cd fastai
tools/run-after-git-clone
pip install -e ".[dev]"

twice, and fastai.__version__ has not changed between the two installs, then there have been no changes to the fastai code.

In the above case, since it stayed the same (1.0.15.dev0),
this indicates that the changes Sylvain announced Sunday morning were not yet in the latest "bleeding" edge… (or that they were already in Saturday's git pull).
Correct?

See the updated doc here: https://docs-dev.fast.ai/develop.html#development-editable-install

You only need to do it once. After that you just do git pull and nothing else.

If you do git pull right now you will see 1.0.16.dev0, so now you’re on the bleeding edge and all the commits should be there. Please double check that it’s the case.

You can also run:

git log --oneline

in your checkout to see the short log of everything you have. If you want it pretty:

git log --graph --decorate --pretty=oneline --abbrev-commit

* 4c11bce (HEAD -> master, origin/master, origin/HEAD) improvements:
* 3856935 document version+timeline, adjust levels
*   ffa3f50 Merge branch 'master' of github.com:fastai/fastai
|\
| * 32377a3 new dev cycle: 1.0.16.dev0
| * 0a4e629 CHANGES
| * ef15dda rename create_cnn
* | 67d2ff3 require dev install for PR, plus run-after-git-clone, and split steps
* | b56c79a require coverage for dev, needed for testing on fastai and fastai_docs
* | 2f98955 azure support links
|/
*   14c02c2 Merge branch 'master' of github.com:fastai/fastai
|\
| * 70cb432 Add maybe copy tests (#980)
* | a456e56 Remove model type
|/
* fbd6235 Learner.create_cnn
*   768d606 Merge branch 'master' of github.com:fastai/fastai
|\
| * 2d63ae4 Update CHANGES
| * c037f61 Fix pred_batch
| * 644cb64 create x in cuda for model_sizes() (#990)
* | 01aec14 Learner.create_cnn
|/
* a1ff5c2 Auto activ (#992)
* 7da5bd3 SegmentationDataset classes
* bc255a8 document the issue with missing libcuda.so.1
* 2735255 document gpustat, and nvidia-smi dmon -s u (forum tips)
* bd62fdb add jekyll templates in the package
* 186739f Ensure that plot_pdp accepts an axis name. Fixes #986. (#987)
* ab4a39b Fix saleElapsed vs YearMade interaction plot in ml1/lesson2-rf_interpretation. Fixes #988. (#989)
* 0de3384 move property
*   7d68137 recurse flag

Plus, there is CHANGES.md, where important changes like bug fixes are logged.

Just pushed 1.0.15. Main change (from CHANGES.md):

ConvLearner ctor is replaced by a function called create_cnn
If you do git pull right now you will see 1.0.16.dev0, so now you’re on the bleeding edge and all the commits should be there. Please double check that it’s the case.

Confirmed!
Bleeding again… :slight_smile:
Thanks!!
