Distributed lr_find fails

Hi, running lr_find in distributed mode fails.

This issue was fixed in this PR earlier this year; however, it seems to have resurfaced.

To reproduce, just call lr_find in nbs/examples/train_imagenette.py.
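For reference, here is a minimal sketch of the kind of call that triggers it. The dataloader and learner setup below is illustrative only, not the exact code from train_imagenette.py:

```python
from fastai.vision.all import *
from fastai.distributed import *

# Illustrative setup (the real script is nbs/examples/train_imagenette.py)
path = untar_data(URLs.IMAGENETTE_320)
dls = ImageDataLoaders.from_folder(path, valid='val', item_tfms=Resize(224), bs=64)
learn = cnn_learner(dls, resnet18, metrics=accuracy)

# Launched with e.g. `python -m fastai.launch script.py`;
# calling lr_find inside the distributed context raises the error below
with learn.distrib_ctx():
    learn.lr_find()
```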

The error raised is the following:

File "…/fastai/callback/schedule.py", line 195, in after_fit
    os.remove(tmp_f)
FileNotFoundError: [Errno 2] No such file or directory: '…/.fastai/data/imagenette2-320/models/_tmp.pth'

Am I missing something here?

I opened a GitHub issue here.
In distributed mode every process runs after_fit and tries to remove the same temp file, so all but the first raise FileNotFoundError. The fix is to restrict the os.remove call to the master process (rank 0).
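A minimal sketch of the guard I have in mind, using fastai's rank_distrib() helper; the surrounding after_fit logic is paraphrased from schedule.py, not copied exactly:

```python
import os
from fastai.torch_core import rank_distrib

# Paraphrased idea for LRFinder.after_fit in fastai/callback/schedule.py:
# every process restores the saved weights, but only rank 0 deletes the
# temp file, since the other ranks would otherwise hit FileNotFoundError.
def after_fit(self):
    self.learn.opt.zero_grad()                   # as in the original callback
    tmp_f = self.path/self.model_dir/'_tmp.pth'  # temp checkpoint written in before_fit
    if tmp_f.exists():
        self.learn.load('_tmp', with_opt=True)
        if rank_distrib() == 0:                  # master process only
            os.remove(tmp_f)
```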