Useless variable in text_classifier_learner?

I am slowly working towards creating a learner and data loader for multi-label text classification.

I am currently looking at the code for text_classifier_learner, and it seems the variable ds is defined but never used. The function is quite short, so I have copied it here for convenience. The script on GitHub is here.

Am I missing something, or should it be removed?

def text_classifier_learner(data:DataBunch, bptt:int=70, emb_sz:int=400, nh:int=1150, nl:int=3, pad_token:int=1,
                            drop_mult:float=1., qrnn:bool=False, max_len:int=70*20, lin_ftrs:Collection[int]=None,
                            ps:Collection[float]=None, **kwargs) -> 'TextClassifierLearner':
    "Create a RNN classifier from `data`."
    dps = default_dropout['classifier'] * drop_mult
    if lin_ftrs is None: lin_ftrs = [50]
    if ps is None: ps = [0.1]
    ds = data.train_ds  # <-- assigned here, but never read afterwards
    vocab_size, n_class = len(data.vocab.itos), data.c
    layers = [emb_sz*3] + lin_ftrs + [n_class]
    ps = [dps[4]] + ps
    model = get_rnn_classifier(bptt, max_len, n_class, vocab_size, emb_sz, nh, nl, pad_token,
                               layers, ps, input_p=dps[0], weight_p=dps[1], embed_p=dps[2], hidden_p=dps[3], qrnn=qrnn)
    learn = RNNLearner(data, model, bptt, split_func=rnn_classifier_split, **kwargs)
    return learn
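
As a quick sanity check, here is a minimal standalone sketch of mine (nothing to do with fastai) that walks a function's AST and reports local names that are assigned but never read; a linter such as pyflakes/flake8 would flag the same thing as a local variable assigned but never used:

import ast

def unused_assignments(source:str) -> set:
    "Names assigned inside the first top-level function of `source` but never read."
    func = ast.parse(source).body[0]
    assigned, loaded = set(), set()
    for node in ast.walk(func):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store): assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load): loaded.add(node.id)
    return assigned - loaded

src = '''
def f(data):
    ds = data.train_ds  # assigned, never read again
    n_class = data.c
    return n_class
'''
print(unused_assignments(src))  # -> {'ds'}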

Similarly, how does get_rnn_classifier use the parameter n_class? I fear I am missing something important here!

In text_classifier_learner you can see:

layers = [emb_sz*3] + lin_ftrs + [n_class]

so the number of classes is already baked into layers as its last element before the call; it looks like n_class may indeed be an unused parameter!

From models:

def get_rnn_classifier(bptt:int, max_seq:int, n_class:int, vocab_sz:int, emb_sz:int, n_hid:int, n_layers:int,
                       pad_token:int, layers:Collection[int], drops:Collection[float], bidir:bool=False, qrnn:bool=False,
                       hidden_p:float=0.2, input_p:float=0.6, embed_p:float=0.1, weight_p:float=0.5) -> nn.Module:
    "Create a RNN classifier model."
    # note: n_class is never referenced in the body below
    rnn_enc = MultiBatchRNNCore(bptt, max_seq, vocab_sz, emb_sz, n_hid, n_layers, pad_token=pad_token, bidir=bidir,
                                qrnn=qrnn, hidden_p=hidden_p, input_p=input_p, embed_p=embed_p, weight_p=weight_p)
    model = SequentialRNN(rnn_enc, PoolingLinearClassifier(layers, drops))
    model.reset()
    return model
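
If it really is dead, the fix presumably amounts to deleting the parameter and updating the single call site; a rough sketch of what the change would look like (hypothetical, not an actual PR):

# signature without n_class; `layers` already ends with the output size
def get_rnn_classifier(bptt:int, max_seq:int, vocab_sz:int, emb_sz:int, n_hid:int, n_layers:int,
                       pad_token:int, layers:Collection[int], drops:Collection[float], bidir:bool=False, qrnn:bool=False,
                       hidden_p:float=0.2, input_p:float=0.6, embed_p:float=0.1, weight_p:float=0.5) -> nn.Module:
    ...  # body unchanged

# and in text_classifier_learner, the matching call site:
model = get_rnn_classifier(bptt, max_len, vocab_size, emb_sz, nh, nl, pad_token,
                           layers, ps, input_p=dps[0], weight_p=dps[1], embed_p=dps[2], hidden_p=dps[3], qrnn=qrnn)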

Yup, those two can indeed be removed. Feel free to submit a PR with those changes!


Done! I also updated the documentation for get_rnn_classifier to remove the mention of n_class. Hope it helps!


Funnily enough, I was thinking of submitting a PR which actually uses n_class and removes it from layers :smiley:


I was thinking the same (having n_class disassociated from layers), since it is “special”: unlike the hidden layer sizes, it is dictated by the data and cannot be arbitrarily chosen as a hyperparameter.
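
For concreteness, a rough sketch of that alternative (hypothetical, not an actual PR): get_rnn_classifier would keep n_class as a real parameter, accept only the intermediate widths, and assemble the layer list itself, mirroring the [emb_sz*3] + lin_ftrs + [n_class] line from text_classifier_learner, so the output size always comes from the data:

def get_rnn_classifier(bptt:int, max_seq:int, n_class:int, vocab_sz:int, emb_sz:int, n_hid:int, n_layers:int,
                       pad_token:int, lin_ftrs:Collection[int], drops:Collection[float], bidir:bool=False, qrnn:bool=False,
                       hidden_p:float=0.2, input_p:float=0.6, embed_p:float=0.1, weight_p:float=0.5) -> nn.Module:
    "Create a RNN classifier model whose output size is `n_class`."
    # the head derives its final size from n_class instead of trusting the caller
    layers = [emb_sz*3] + lin_ftrs + [n_class]
    rnn_enc = MultiBatchRNNCore(bptt, max_seq, vocab_sz, emb_sz, n_hid, n_layers, pad_token=pad_token, bidir=bidir,
                                qrnn=qrnn, hidden_p=hidden_p, input_p=input_p, embed_p=embed_p, weight_p=weight_p)
    model = SequentialRNN(rnn_enc, PoolingLinearClassifier(layers, drops))
    model.reset()
    return model

text_classifier_learner would then pass lin_ftrs through instead of building the full layers list itself.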