Hello everyone,

I am facing some problems with hook_output. Specifically, I don't know how to vectorize the calls to hook_output so that the feature-extraction step runs fast enough to stay below the 12-hour Google Colab limit.

I am operating on the MIT-BIH dataset, converting the signals into images so they can be trained by transferring ImageNet weights to EfficientNet B1 through B5. I have completed training the five models, and I am now trying to take the second-to-last layer of each model as a vector of features, then concatenate these into a giant NumPy array on which I can train a tabular model. Here is a snippet of the working code:

```
1  columns_table = [0, 1280, 2688, 4224, 6016, 8064]
2  tabular_train = np.zeros((109446, 8064))
3  def tabular_train_array():
4      for i in tqdm_notebook(range(1, 6), 'Learner compiling: '):
5          nn_module = list(learner_dict[f'learn_no{i}'].model.modules())[-3]
6          for j in tqdm_notebook(range(109446), 'Row-by-row compiling'):
7              hook_example = transform_input(data.values[j, :187])
8              hook = hook_output(nn_module)
9              learner_dict[f'learn_no{i}'].predict(hook_example)
10             tabular_train[j, columns_table[i-1]:columns_table[i]] = hook.stored.cpu().numpy()[0]
11             hook.remove()  # detach the hook so they don't accumulate on the module
```

Line 1 shows the 'separation points' for each of the five feature vectors across the columns of this giant table. The vector lengths for EfficientNet B1 through B5 are 1280, 1408, 1536, 1792, and 2048 activation neurons respectively.
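(In case it helps show where those numbers come from: the separation points are just a running sum of the five vector lengths, so they could also be computed rather than hard-coded. A minimal sketch:)

```python
import numpy as np

# Feature vector lengths of the penultimate activation layers of
# EfficientNet B1 through B5.
vector_lengths = [1280, 1408, 1536, 1792, 2048]

# The separation points are the cumulative sums, with a leading 0.
columns_table = np.concatenate(([0], np.cumsum(vector_lengths)))
print(columns_table)  # [   0 1280 2688 4224 6016 8064]
```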

Line 2 initializes the giant table, with 109446 examples from the original dataset (including both train and valid) and 8064 feature columns from the five different models.

Line 5 retrieves the third-to-last layer (which is the second-to-last activation layer) that I will use to form my new dataset, inside a for loop over the five different models stored in a dictionary.

Line 7 does the preprocessing, which is defined as follows:

```
def transform_input(numpy_array):
    assert numpy_array.shape == (187,)
    orig_img = proc_tfm_vec(numpy_array, augmentations=False)
    x = Image(pil2tensor(orig_img, dtype=np.float32))
    return x
```

For example, this takes the 1000th row of the original dataset for preprocessing:

` hook_example = transform_input(data.values[1000, :187])`

Line 8 initializes the hook, line 9 runs the prediction, and finally, line 10 writes the stored hook output into the giant NumPy array. My question is: is there a way to get rid of the for loop in line 6 (i.e. vectorize it) so as to reduce the time needed to run this code? I believe @jeremy and @sgugger will understand this very well; since I have a hard time understanding what is going on, could you help me with this problem?
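For what it's worth, here is a rough sketch of the direction I was imagining, in plain PyTorch rather than through Learner.predict: register a forward hook once per model, then push many preprocessed rows through as a single batch so the hook captures a whole batch of activations at once. The model below is a hypothetical stand-in (a tiny nn.Sequential), not my actual learner, and I'm not sure how to map fastai's predict pipeline onto raw forward passes like this, which is part of what I'm asking:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for one of the five models; in my real code
# nn_module would be list(learner_dict[f'learn_no{i}'].model.modules())[-3].
model = nn.Sequential(nn.Flatten(), nn.Linear(187, 1280), nn.ReLU())
nn_module = model[2]  # the layer whose activations we want

stored = {}

def capture(module, inputs, output):
    # Forward hook: stash the whole batch of activations each forward pass.
    stored['acts'] = output.detach().cpu()

handle = nn_module.register_forward_hook(capture)

# Instead of one predict() call per row, feed a batch of preprocessed rows.
batch = torch.randn(64, 187)  # stands in for 64 transformed rows
model.eval()
with torch.no_grad():
    model(batch)
handle.remove()

features = stored['acts'].numpy()  # shape (64, 1280), one row per example
```

If something like this is valid, the inner loop over 109446 rows would shrink to a loop over batches, with each slice of tabular_train filled 64 rows at a time.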

Thank you very much for your help.