So, is there a temperature/humidity effect on the R0 of the virus or not?
(Utility alone can’t determine model validity, can it?)
BTW, here is how to sign up for the API: https://docs.microsoft.com/en-au/azure/cognitive-services/cognitive-services-apis-create-account?tabs=singleservice%2Cwindows
Will Jeremy provide a full toy analysis of combining priors with R-temperature data? I assume this is some sort of Bayesian regression model?
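In the meantime, here is a toy sketch of that kind of Bayesian regression (my own made-up numbers, not anything from the course): put a Normal prior on the temperature coefficient and combine it with simulated R-vs-temperature data, using the conjugate update for a linear model with known noise variance.

```python
import random

random.seed(0)
true_beta, sigma = -0.05, 0.2                     # made-up "truth": R drops slightly with temperature
temps = list(range(0, 31, 2))                     # temperatures 0..30 degrees C
r_obs = [2.5 + true_beta * t + random.gauss(0, sigma) for t in temps]

# centre both variables so only the slope needs inferring
t_bar = sum(temps) / len(temps)
r_bar = sum(r_obs) / len(r_obs)
xs = [t - t_bar for t in temps]
ys = [r - r_bar for r in r_obs]

# prior: slope ~ Normal(mu0, tau0^2); likelihood noise sd assumed known
mu0, tau0 = 0.0, 0.1
post_prec = 1 / tau0**2 + sum(x * x for x in xs) / sigma**2
post_mean = (mu0 / tau0**2 + sum(x * y for x, y in zip(xs, ys)) / sigma**2) / post_prec
post_sd = post_prec ** -0.5

print(f"posterior slope: {post_mean:.3f} +/- {post_sd:.3f}")
```

The prior precision `1/tau0**2` and the data precision just add, so with lots of data the posterior slope is pulled toward the least-squares estimate and away from the prior mean.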
Is there any consensus on whether using copyrighted images (without a license) to train a model is legal or not?
This is where behavioral economics and health behavior/psychology become very important: when you need people to engage in counter-intuitive behavior for which they can see, in the moment, neither the potential harm (if they don’t do something) nor the benefit (if they do).
I think the consensus is that it’s OK, as long as you do not redistribute the images and do not use them for anything but training. That’s why ImageNet is a bunch of URLs… it’s illegal to redistribute the images in a tar file, for example, but it’s OK to publish the weights of a model trained with those images. That’s at least my understanding of the issue.
You might be interested in a Reddit discussion on this very topic!
Adding on to the point: the virus will survive longer if the conditions are suitable, and high moisture sort of provides an atmosphere in which it can survive for a longer period.
Is verify_images part of the fastai module(s)?
Yes it is.
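Conceptually it does something like this pure-Python sketch (a rough stand-in, not the real implementation, which decodes each file with PIL and runs in parallel): check each path and return the ones that fail.

```python
from pathlib import Path

def verify_images_sketch(paths):
    """Rough stand-in for fastai's verify_images: return the paths that
    fail a basic image check. Here we only look for JPEG/PNG magic
    bytes; the real function actually decodes each file."""
    failed = []
    for p in paths:
        try:
            head = Path(p).read_bytes()[:8]
        except OSError:
            failed.append(p)
            continue
        if not (head[:2] == b"\xff\xd8" or head == b"\x89PNG\r\n\x1a\n"):
            failed.append(p)
    return failed
```

With fastai itself, the usual pattern (as in the book’s bear example) is `failed = verify_images(get_image_files(path))` followed by `failed.map(Path.unlink)` to delete the broken files.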
Perhaps I should rephrase: that would be a great addition to the COVID video (as well as being in the primary material).
The surface material will affect it as well.
high moisture sort of provides an atmosphere to survive for a longer period
Er, I think that higher humidity causes the virus to be less stable. It’s not obvious why this is the case, but see, e.g., this paper on influenza; in particular, the authors have a section discussing “Possible mechanisms linking temperature and humidity to transmission.”
Just reaching out here while I remember. Does anyone have experience deploying models to iOS with FastAI v2?
I see dls has a train and valid set. How do we load in the test set?
An example for mobile app deployment:
You may also be able to use the underlying PyTorch model with the PyTorch Mobile tools.
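A minimal sketch of that route (the tiny `Sequential` below is a stand-in; with a trained fastai Learner you’d trace `learn.model` instead):

```python
import torch

# Stand-in for `learn.model`; any eval-mode nn.Module traces the same way.
model = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.Softmax(dim=-1))
model.eval()

example = torch.rand(1, 4)                # dummy input with the right shape
traced = torch.jit.trace(model, example)  # convert to TorchScript by tracing
traced.save("model_mobile.pt")            # file you'd bundle into the iOS app
```

On the iOS side you’d load the saved file with the PyTorch Mobile libtorch bindings; PyTorch also ships an `optimize_for_mobile` pass you can apply to the traced module before saving.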
Can you explain the bears.new() syntax?
This is a part that I find confusing. I am used to using the names training, validation, and test sets for splits that all come from the same bucket of labeled images.
In fastai, instead, the test dataset is unlabeled (at least it was in v1).
I am curious why this choice was made.
This creates a new DataBlock based on the other parameters you pass into it (this is also how test_dl works).
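To illustrate the pattern with a made-up stand-in (not fastai’s actual DataBlock class): `.new` hands back a modified copy and leaves the original untouched, which is why the book reassigns `bears = bears.new(...)` to tweak the transforms.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DataBlockSketch:
    """Toy stand-in for fastai's DataBlock; only the .new pattern matters here."""
    item_tfms: str = "Resize(128)"
    batch_tfms: str = "aug_transforms()"

    def new(self, **kwargs):
        # return a copy with the given parameters overridden
        return replace(self, **kwargs)

bears = DataBlockSketch()
bears2 = bears.new(item_tfms="RandomResizedCrop(224, min_scale=0.5)")
```

Because the copy is cheap and the original is never mutated, you can build several variants of the same pipeline from one template object.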
Probably inherited from Kaggle, where the test set never comes with labels