Text Classifier Input Size

Hey, this might be a simple question, but I was wondering if anyone could help me with this.
I’m trying to understand the code for Text Classification using a pretrained Language Model.
When creating an RNNLearner from a TextClasDataBunch and loading a pretrained encoder, does anyone know what the size or shape of the input data is at the first layer of the network? That is, for my given pieces of text, how are they split up/put together before being loaded in? I'm reading through the docs and having trouble understanding batches.
https://docs.fast.ai/text.data.html#Classifier-data
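To make the question concrete, here's my rough mental model of what I think happens before the first layer (this is a toy sketch, not fastai's actual code): texts get tokenized, tokens get mapped to ids via a vocab, and sequences get padded so each batch is a `[batch_size, seq_len]` tensor of ids. The vocab and padding id below are made up for illustration.

```python
# Toy sketch (assumption: not fastai's real pipeline) of how texts
# might be turned into a batch for a text classifier.
texts = ["hello world", "a slightly longer example sentence"]

# Build a toy vocab from the texts; id 1 is reserved for padding.
vocab = {"<pad>": 1}
for t in texts:
    for tok in t.split():
        vocab.setdefault(tok, len(vocab) + 1)

def numericalize(text):
    """Map each whitespace token to its vocab id."""
    return [vocab[tok] for tok in text.split()]

ids = [numericalize(t) for t in texts]
max_len = max(len(seq) for seq in ids)

# Pad every sequence to the longest one in the batch, so the
# result is rectangular: [batch_size, seq_len].
batch = [[1] * (max_len - len(seq)) + seq for seq in ids]

print(len(batch), max_len)  # e.g. 2 sequences, each padded to length 5
```

If my understanding is right, the first layer then just looks up an embedding for each id. (On fastai v1, I believe `data.one_batch()` will give you an `(x, y)` pair whose shapes you can inspect directly, which is what I've been trying to do.)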

I’m asking this because I want to experiment with passing in embeddings created from other models like BERT or InferSent.