Actually, once you have arrived at a good training process (hyperparameters, unfreezing schedule, number of epochs, etc.), you can re-run your training from scratch with val_idxs (or val_pct) set so the validation set gets zero / near-zero data. Your model will then train on (almost) the full training set, and you simply ignore the reported "validation accuracy" metric, since it is no longer meaningful. More training data should mean you can train to a lower loss with better generalisation.
Of course, this depends on how long the model takes to train, as you might not want to re-train from scratch if it took you ages previously. In that case, as you said, you can just train a few more epochs with the full training set, which should still give you some benefit.
Regarding val_idxs (and val_pct): there are default values in the fast.ai code (v0.7 and v1), so you will need to override them manually to give the validation set nearly no data. I have not looked at all the code in detail, but parts of it may break if there is no validation sample at all, so it is probably safest to give it a single validation sample to load.
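To illustrate the idea, here is a minimal sketch (plain Python, not fast.ai itself) of building a val_idxs list that holds out exactly one sample, so the library has something to evaluate while essentially 100% of your data trains. The dataset size `n` is made up for the example:

```python
import random

n = 1000          # pretend dataset size (hypothetical)
random.seed(0)    # reproducible pick

# Hold out exactly one index for "validation"; everything else trains.
val_idxs = [random.randrange(n)]
trn_idxs = [i for i in range(n) if i not in val_idxs]

print(len(trn_idxs), len(val_idxs))  # 999 1
```

In fast.ai v0.7 you would then pass something like this list as the val_idxs argument to the data loader factory (e.g. `from_csv`), and in v1 set `valid_pct` very low instead; check the signatures in your version, as I am going from memory here.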
@Pomo has already answered your initial question about ‘how can we “fit” our network if 100% is training and there is no validation?’ : )