You might be interested in another approach for Siamese networks: sample pairs of the same class from the dataset, pack them into batches, and mine the negatives within the batch. No specific class is needed.
I have implemented it for patch descriptor learning:
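A rough sketch of that in-batch hard-negative mining idea (not the actual implementation), assuming each batch is made of matching, L2-normalized descriptor pairs:

import torch
import torch.nn.functional as F

def in_batch_hard_negative_loss(anchors, positives, margin=1.0):
    # anchors, positives: (B, D) descriptors; row i of each is a matching pair
    dist = torch.cdist(anchors, positives)              # (B, B) pairwise distances
    pos = dist.diag()                                    # distances of the matching pairs
    # push the diagonal far away so a positive is never picked as its own negative
    neg = dist + 1e6 * torch.eye(dist.size(0), device=dist.device)
    hardest_neg = neg.min(dim=1).values                  # hardest in-batch negative per anchor
    return F.relu(margin + pos - hardest_neg).mean()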
Btw, is there a way to skip computing validation loss?
You can raise a CancelValidation error (can't remember the exact name but you should find it easily) that will skip the validation phase.
If the question is about skipping the loss but still computing other metrics, however, the answer is no.
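For the skip-the-whole-validation case, a minimal callback sketch (assuming the exception is named CancelValidException and the hook is before_validate; older fastai2 versions used begin_validate, so check the source):

from fastai2.basics import *

class SkipValidation(Callback):
    def before_validate(self):
        raise CancelValidException()

# learn = Learner(dls, model, loss_func=loss, cbs=SkipValidation())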
I always use
pip install git+https://github.com/fastai/fastcore.git --upgrade
pip install git+https://github.com/fastai/fastai2.git --upgrade
which is the better approach?
I don't know, I just added the branch at the end.
Skipping the loss but still doing validation. Ok, I'll write my callback then.
So you've written imshow_torch to work with that, that's pretty cool!
I wonder what would happen if you call _pre_show_batch on your dataloaders.
I was running lr_find and I see this printed
@log_args had an issue on LabelSmoothingCrossEntropy.__init__ -> missing a required argument: 'self'
It's not throwing any error though, anything to be concerned about?
I think you need to install it again; they updated something and it needs the latest fastcore. You should try installing it again:
pip install git+https://github.com/fastai/fastcore.git@master
pip install git+https://github.com/fastai/fastai2.git@master
I did, this happened after I updated it. Before installing the latest one I was getting an error.
It's a warning you can ignore, we are working on fixing those.
Can you share the notebook?
Even if it does not happen anymore, I'd like to check it works as intended, and you may have an edge case.
Here is a minimal example: https://colab.research.google.com/drive/1S1HyJs7a0ehwkltsBjGvRLDPSeOcXm0u
Quick question, is there a method to resize (pre-size) images in v2? To resize all images before running the training?
No, there is no utility for that.
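If you just want to shrink everything once up front, a quick standalone way with PIL (the folder names and target size below are only placeholders):

from pathlib import Path
from PIL import Image

src, dst, size = Path('images'), Path('images_resized'), 460
for p in src.rglob('*.jpg'):
    out = dst / p.relative_to(src)
    out.parent.mkdir(parents=True, exist_ok=True)
    img = Image.open(p)
    img.thumbnail((size, size))   # resizes in place, keeping the aspect ratio
    img.save(out)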
Thanks, I found the issue and am sending a PR. This should not affect your program. This is mainly used by WandbCallback.
Does everything in the CONTRIBUTING.md for fastai v1 still apply to v2, or are there important differences?
I only have labels for my validation data, how should I build my dsets? I tried doing the following:
tls_raw = TfmdLists(fns, [lambda o: o.read()], splits=splits)
tls_train = TfmdLists(tls_raw.train, [tkzer, nmzer], split_idx=0)
dset_valid = Datasets(tls_raw.valid, [[tkzer, nmzer], [parent_label]], split_idx=1)
tkzer and nmzer are just instances of Tokenizer and Numericalize.
The problem is that dset_valid is still considering its items to be part of the training set.
In the end I would like to have a single Datasets object, where dset.train[0] returns only an item and dset.valid[0] returns (item, label).
It's not necessary to follow this TfmdLists approach, this is just my best idea to solve it so far… I'm open to new, easier ways.
If you have such different structures for training and validation, you can't have them in a single Datasets object. When you are finished and have independently created the DataLoader objects, you can combine them in a DataLoaders.
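A minimal sketch of that, reusing the objects from the question above (TfmdDL and the batch size are just assumptions, adjust for your text pipeline):

# train side yields items only, valid side yields (item, label)
dsets_train = Datasets(tls_raw.train, [[tkzer, nmzer]], split_idx=0)
dsets_valid = Datasets(tls_raw.valid, [[tkzer, nmzer], [parent_label]], split_idx=1)
dls = DataLoaders(TfmdDL(dsets_train, bs=64), TfmdDL(dsets_valid, bs=64))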