@sgugger It seems to me that it conflicts with the splits. When I create a dataset without splits, the wgts seem to work; otherwise it complains about the sizes of the parameters (weighted_dataloaders):
Do you know how the weights must be passed to weighted_dataloaders? When I do not have splits, it works fine with an array of weights, but when I have splits, I get the following error:
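For reference, here is a minimal pure-Python sketch (no fastai) of why a weight array sized to the full dataset can stop matching once splits are involved; all names and sizes here are illustrative, not the actual weighted_dataloaders internals:

```python
import random

# Hypothetical setup: 10 items, an 80/20 split, one weight per item.
items = list(range(10))            # full dataset
train_idx = items[:8]              # training split after splitting
wgts_full = [1.0] * len(items)     # len 10 -> mismatch with the 8 training items

# Weighted sampling needs one weight per *training* item, so the weights
# must be aligned with the split before sampling:
wgts_train = [wgts_full[i] for i in train_idx]
batch = random.choices(train_idx, weights=wgts_train, k=4)
```

If the weights are passed for the whole dataset while sampling happens over the training split only, the two lengths disagree, which would explain a size-of-parameters error.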
I'm trying to run the fastbook notebooks. I installed fastai2 and fastcore as editable installs, but I get the following error when I try to run the first line of the notebook:
ModuleNotFoundError: No module named 'fastai2'
Good day, I'd like to ask if you had success with it using fastai. I am stuck at training: after passing it to a learner and trying to train, I get a "no target" error.
The easiest solution is to create df['text'] containing your reviews/text data. I'm not sure why, but I do know this fixed my issue. Then change get_x to 'text' and text_cols to 'text'.
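A minimal sketch of that workaround, assuming a DataFrame with placeholder columns 'reviews' and 'label' standing in for your own data:

```python
import pandas as pd

# Hypothetical minimal frame; 'reviews' and 'label' are assumed column names.
df = pd.DataFrame({'reviews': ['great film', 'not for me'],
                   'label':   ['pos', 'neg']})

# Duplicate the review text under the column name 'text', so that both
# get_x and text_cols can point at 'text'.
df['text'] = df['reviews']
```

After this, pointing get_x and text_cols at 'text' keeps everything consistent with the column the tokenizer expects.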
When you use item_tfms, the resize is done on each file independently. This is needed so the items all have the same size and the data can be collated into batches and loaded onto the GPU.
After that, you can apply additional transforms on the GPU, which are faster, through batch_tfms, such as scaling/rotating and resizing again to a smaller size.
In that case, you would have resized to a larger size in item_tfms so that your other transforms are more accurate and preserve more detail.
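The two-stage flow above can be sketched without fastai or a GPU; here images are just (height, width) tuples, and the sizes 460 and 224 are illustrative choices, not fastai defaults you must use:

```python
# Per-item transform: runs on each file independently (on CPU), making all
# items the same size so they can be collated into one batch tensor.
def item_resize(img, size=460):
    return (size, size)

# Per-batch transform: runs after collation (on GPU in fastai), where the
# final, smaller training size is applied alongside augmentations.
def batch_resize(batch, size=224):
    return [(size, size) for _ in batch]

images = [(375, 500), (768, 1024), (333, 333)]   # varied input sizes
batch = [item_resize(im) for im in images]        # uniform -> collatable
batch = batch_resize(batch)                       # final training size
```

Resizing first to the larger item size and only then down to the training size is what leaves the batch transforms more detail to work with.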
The tokenizer will read the texts in 'reviews' and tokenize them, but the result will be in a column called 'texts' (unless you pass an argument to change that; it should be something like output_col). So your get_x should use the column named 'texts'.
Sorry, that wasn't clear: get_x=ColReader('text'), get_y=ColReader('label'). After it is tokenized, the text column and label column names are changed to text and label.
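To make the renaming concrete, here is an illustrative stand-in for the tokenization step (a lowercase-and-split, not fastai's real tokenizer): it reads the raw column and writes its output into a 'text' column, which is why get_x should point at 'text' afterwards. Column names 'reviews' and 'label' are assumptions for the example.

```python
import pandas as pd

df = pd.DataFrame({'reviews': ['Great movie!', 'Would not watch again.'],
                   'label':   ['pos', 'neg']})

# Stand-in tokenizer: reads 'reviews', writes tokenized output to 'text'.
df['text'] = df['reviews'].str.lower().str.split()
```

After this step the DataFrame has both the original 'reviews' column and the tokenized 'text' column, and the readers in get_x/get_y address 'text' and 'label'.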