Yes. I guess you were lagging before because you did not have the editable install.
I had the editable install for a long time now. I will monitor that and I will let you know if there is something.
Thank you again
In the dev.fast.ai install section (http://dev.fast.ai/#Installing) it is not clear what "fastcore" is and how to link the fastai2 repository to fastcore. Shouldn't fastai2's master branch already have the latest fastcore master?
These are two separate packages. When you are using a released version of fastai2, we can control the minimal version of fastcore you have through requirements, but for an editable install, we don't have a means to do this automatically, so you need to make sure to pull them both.
Especially when we are working on changes that impact both packages at the same time like right now.
Can I make my network output a specific type of tensor? e.g. TensorImage
I'm doing style transfer, so I have an Image-->Image scenario. Of course I tried just wrapping the output of the model with TensorImage(pred), but this seems to break gradient flow.
What I want to do is take the model output and apply some TensorImage-specific transforms to it (mainly Normalize). I'll then take this result and feed it to the feature model (standard in the style transfer technique). I need the gradients to flow all the way back to the model, but wrapping with TensorImage seems to break the chain.
Yes, our custom types do not support gradient computation. You'll have to add that transform manually.
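Adding the transform manually can mean sticking to plain tensor arithmetic, which keeps the autograd graph intact. A minimal pure-PyTorch sketch (the ImageNet-style stats here are an assumption; use whatever your feature model expects):

```python
import torch

# Hypothetical per-channel stats (ImageNet-style), shaped for broadcasting over (N, C, H, W)
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std  = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def normalize(pred):
    # Plain tensor ops are differentiable, so gradients still reach the model
    return (pred - mean) / std

pred = torch.rand(2, 3, 8, 8, requires_grad=True)  # stand-in for the model output
out = normalize(pred)
out.sum().backward()
print(pred.grad is not None)  # True: gradients flowed all the way back
```

The key point is that no new tensor type is constructed anywhere, so nothing detaches the output from its computation history.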
Is there still functionality in show_batch to display the image axes? Looks like hide_axis=False doesn't work anymore. It's difficult to tell if there's a new argument among the kwargs.
Also I saw a Twitter post about automatic normalization based on whether you're using pretraining or not. If I set pretrained to False, does this mean it will automatically calculate normalization stats based on my dataset?
Is there a reason for this, or is it just on a long "todo" list?
The subclasses need to create new tensors to work, so there is no way to have gradient computation supported, at least until PyTorch implements OO tensors properly.
No there is not; hide_axis is hard-coded to False, so you need to create a new image type and its show method, or monkey-patch the show method of the type you use.
If you pass the Normalize transform in your batch transforms, it will compute the stats on the first batch.
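Computing per-channel stats from a single batch amounts to a reduction over every dim except the channel one; a rough pure-PyTorch sketch of what that looks like:

```python
import torch

batch = torch.rand(16, 3, 32, 32)  # a first batch of images, shape (N, C, H, W)

# Reduce over batch, height and width, keeping the channel dim for broadcasting
mean = batch.mean(dim=(0, 2, 3), keepdim=True)
std  = batch.std(dim=(0, 2, 3), keepdim=True)

normed = (batch - mean) / std
print(normed.mean(dim=(0, 2, 3)))  # per-channel means, roughly zero after normalization
```

Stats from one batch are only an estimate of the dataset stats, which is the trade-off behind computing them lazily on the first batch.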
I'm no expert in the internals of PyTorch, so pardon my ignorance.
The gradients of a tensor are stored as an attribute, right? Can we, when copying the tensor to the subclass, just create a reference to the original gradient?
No, the gradients are only stored as an attribute once they are computed. Creating the new object of a subclass removes the history of its computation; that's what makes the gradient computation fail when creating new objects.
In general, the whole tensor OO is done for preprocessing.
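You can see this history loss with plain PyTorch: any construction that builds a fresh tensor from the values, rather than going through an autograd op, drops grad_fn:

```python
import torch

x = torch.rand(3, requires_grad=True)
y = x * 2                      # a regular op: autograd records it
print(y.grad_fn is not None)   # True: y knows how it was computed

# Building a brand-new tensor from the values discards that computation history
z = torch.tensor(y.detach().tolist())
print(z.grad_fn)               # None: backward() through z can never reach x
```

This is the same effect as casting to a custom type that constructs a new tensor: the values survive, the graph does not.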
Now that we are getting into this subject, just out of curiosity, can you briefly explain why we need to create a copy when subclassing to the new type?
Looking at TensorBase and cast, it seems the only thing we need to do is change res.__class__ = typ. Obviously this is not the only thing that is happening, because the tensor is being copied in reality as you said, but I'm failing to see where.
You were the one saying it broke the gradients computation. I know no more than you do; I'm just saying what we tested and what was working for us.
Yeah lol, what I understood from the previous answer was that fastai was the one creating the copies and breaking the gradients. But it's actually PyTorch's fault that this happens.
I wonder if a composition instead of inheritance approach to the custom types would get around this issue
A minor thing, but I have not seen it mentioned before: importing the all.py modules seems to conflict with the built-in all() function:
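The clash is ordinary Python name shadowing: if a star-import ends up binding the name all, the built-in is hidden in that namespace. A minimal illustration with the shadowing done by hand:

```python
all = [1, 2, 3]  # simulates a star-import that binds the name `all`

try:
    all([True, True])  # the list is not callable, so the built-in is unreachable by name
except TypeError as e:
    print("shadowed:", e)

# The built-in itself is untouched and still reachable explicitly
import builtins
print(builtins.all([True, True]))  # True
```

Deleting the shadowing name (del all) or importing the modules without the star also restores normal lookup of the built-in.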
I may be blanking here, but where's the equivalent of c2i in our DataLoaders?
Edit: lives in vocab
If you were a fan of ClassConfusion in fastai1, I've ported it over to v2:
Currently it just supports Colab, but I'm working on bringing it to native Jupyter. It supports image and tabular classification. If you don't know what it is, see here:
Nice contribution! Really useful tool; a pity I work mainly on multi-label problems.
I made it modular by design so I could eventually support more. How would you expect multi-label behavior to look? I'd assume similar to how images work, except you can have a number of combinations present?