Help understanding BOS and EOS in a language model

I am trying to apply ULMFiT to genomics data. My ultimate goal is to do both generative and predictive modeling with these models.

I have built a character-level tokenizer that currently looks like the following:

from fastai import *
from fastai.text import *

BOS,EOS,FLD,UNK,PAD = 'xxbos','xxeos','xxfld','xxunk','xxpad'
TK_MAJ,TK_UP,TK_REP,TK_WREP = 'xxmaj','xxup','xxrep','xxwrep'

defaults.text_spec_tok = [PAD]  # keep only the padding token as a special token

class MolTokenizer(BaseTokenizer):
    "Character-level tokenizer that wraps each sequence in GO/END markers."
    def __init__(self, lang):
        self.lang = lang
    def tokenizer(self, sequence):
        tokens = list(sequence.upper())     # one token per character
        tokens = ['GO'] + tokens + ['END']  # explicit start/end-of-sequence markers
        return tokens
    def add_special_cases(self, toks):
        pass
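
As a quick sanity check, calling the tokenizer directly on a toy sequence ('atgc' here is just for illustration) shows the wrapping:

tok_func = MolTokenizer(lang='en')
print(tok_func.tokenizer('atgc'))
# ['GO', 'A', 'T', 'G', 'C', 'END']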

And to create my databunch:

tok = Tokenizer(MolTokenizer, pre_rules=[], post_rules=[])
data = TextLMDataBunch.from_df(path, corpus_train, corpus_valid, bs=bs,
                               tokenizer=tok, text_cols='sequence', min_freq=1,
                               include_bos=False, include_eos=False, bptt=200)
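
For what it's worth, something like the following should show whether GO/END survive preprocessing (a sketch using the objects created above):

print(data.vocab.itos[:10])  # 'GO' and 'END' should appear in the vocab
data.show_batch()            # decoded sample of what the model will actually see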

I understand that include_bos=False and include_eos=False mean fastai does not automatically add its own xxbos/xxeos markers before tokenization. However, I do not understand the implications of omitting the EOS token from my own tokenization. The default for include_eos in TextLMDataBunch is False, which makes me think that adding an EOS token is either optional or even suboptimal in some cases. It seems to me that having the model learn when a sentence, or in my case a DNA sequence, ends would be essential. Should I keep adding 'END' to the token list for each sequence?
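
For context, on the generative side I was imagining sampling until the model emits 'END', along these lines (a rough sketch; learn stands for a language_model_learner fine-tuned on the databunch above, and the sampling parameters are illustrative):

generated = learn.predict('GO', n_words=500, temperature=0.8)
tokens = generated.split()
if 'END' in tokens:
    tokens = tokens[:tokens.index('END')]       # truncate at the first END token
print(''.join(t for t in tokens if t != 'GO'))  # reassemble the DNA sequence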