Feature importance in deep learning

@Pak let me know if you can figure out a way to link them together. I’m very lost on how to do that right now. For now I’ll just not include the columns that have an _na.

Realized the easier way is to use the dataframe. But if you can think of a way to do it in the dataloader please let me know :slight_smile:

Oh I see, I didn’t have this problem as I did not mess with the tensor itself; I shuffled the column in the dataframe instead. Then I apply all the preprocessing we applied during the initial training (I saved it), so the _na column is recreated if needed for the right rows, and calculate the accuracy.
The same goes for the retraining part: _na columns are created or not depending on whether we have to do it after throwing away a particular column.
I’m on a cellphone now, so it’s hard to read GitHub code (even my own code :slight_smile: ), but it looks like my code works that way :wink:

1 Like

Got it! I’m doing that restructuring right now, just wanted to be sure you were doing the same thing :wink:

@Pak Fun fact, replacing the tensors with values drawn from the same distribution is different from shuffling the columns in pandas, even though it shouldn’t be.
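
A toy illustration of the gap between a pure shuffle and redrawing values from the same distribution (the column values here are made up):

import pandas as pd

s = pd.Series([1, 1, 1, 2, 3])
print(s.sample(frac=1).values)                  # shuffle: same multiset of values, just reordered
print(s.sample(n=len(s), replace=True).values)  # redraw with replacement: the counts will drift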

I tried fixing it as best I could, here’s what I got:

It still showed some mismatches comparing dropped to non-dropped, but I also believe anything < 0.00 is negligible. Let me know your thoughts.

1 Like

Oh, it looks like now I see why this difference between conts and cats is not so evident on the Adult dataset. Look at the cat features: each of them takes just 2-3 different values (especially considering that age is a cont here) and is represented in the model with only 2-3 floats (the embedding size is def emb_sz_rule(n_cat:int)->int: return min(600, round(1.6 * n_cat**0.56)), where n_cat is the cardinality). In the Rossmann data, on the other hand, the Store feature can take one of thousands of different values and is represented with 70+ floats per store name/id.
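
To make that concrete, plugging a few cardinalities into the rule (1115 is the store count in the Kaggle Rossmann data):

# fastai's embedding-size rule of thumb, as quoted above
def emb_sz_rule(n_cat: int) -> int:
    return min(600, round(1.6 * n_cat ** 0.56))

# a small Adult-style categorical vs. Rossmann's Store id
for n_cat in (3, 8, 1115):
    print(n_cat, emb_sz_rule(n_cat))
# 3 -> 3, 8 -> 5, 1115 -> 81 embedding floats per category value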

As for your notebook, it’s hard for me to judge as I’ve used a somewhat different approach, but everything looks fine. I’m not sure though that passing procs as a parameter works the way it seems to. I remember that I had to recreate the procs one by one, but I don’t really remember why. Maybe it was a bug I had to work around, or maybe I just could not find the right way.

1 Like

Perhaps it was due to the procs not matching up. If you need that, that is why processor = data.processor exists.
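
Roughly this pattern is what I mean (a sketch only; it assumes fastai v1’s TabularList.from_df passes a pre-fitted processor kwarg through to the underlying ItemList, and learn/test/cat_names/cont_names/dep_var are whatever you already have):

from fastai.tabular import *

# Reuse the processors already fitted on the training data, so Categorify /
# FillMissing / Normalize use the same categories and statistics the model saw.
data = learn.data.train_ds.x
dt = (TabularList.from_df(test, path='', cat_names=cat_names, cont_names=cont_names,
                          processor=data.processor)
                 .split_none()
                 .label_from_df(cols=dep_var)
                 .databunch(bs=learn.data.batch_size))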

I’ll rerun the above on Rossmann later today

1 Like

@Pak I wasn’t able to get to it yesterday, but I did run it today. My results differed from yours a bit. Here is my new function, where we take in a test dataframe, shuffle each column one at a time, and validate over it:

import copy
import pandas as pd
from fastai.tabular import *  # Learner, DataFrame, TabularList, FloatList

def feature_importance(learn:Learner, cats:list, conts:list, dep_var:str, test:DataFrame):
  "Permutation-style importance: resample one column at a time and compare the validation metric to the baseline."
  data = learn.data.train_ds.x
  procs = data.procs
  cat, cont = copy.deepcopy(cats), copy.deepcopy(conts)
  # baseline databunch built from the untouched test dataframe
  if 'CrossEntropyLoss' in str(learn.loss_func):
    dt = (TabularList.from_df(test, path='', cat_names=cat, cont_names=cont, 
                              procs=procs)
                             .split_none()
                             .label_from_df(cols=dep_var)
                             .databunch(bs=learn.data.batch_size))
  else:
    dt = (TabularList.from_df(test, path='', cat_names=cat, cont_names=cont, 
                              procs=procs)
                             .split_none()
                             .label_from_df(cols=dep_var, label_cls=FloatList, log=True)
                             .databunch(bs=learn.data.batch_size))

  learn.data.valid_dl = dt.train_dl
  loss0 = float(learn.validate()[1])  # baseline metric

  fi = dict()
  cat, cont = copy.deepcopy(cats), copy.deepcopy(conts)
  types = [cat, cont]
  for j, t in enumerate(types):
    for i, c in enumerate(t):
      print(c)
      base = test.copy()
      # resample the column (with replacement) to break its link with the target
      base[c] = base[c].sample(n=len(base), replace=True).reset_index(drop=True)
      cat, cont = copy.deepcopy(cats), copy.deepcopy(conts)
      if 'CrossEntropyLoss' in str(learn.loss_func):
        dt = (TabularList.from_df(base, path='', cat_names=cat, cont_names=cont, 
                              procs=procs)
                             .split_none()
                             .label_from_df(cols=dep_var)
                             .databunch(bs=learn.data.batch_size))
      else:
        # regression case: build from the resampled frame here as well
        dt = (TabularList.from_df(base, path='', cat_names=cat, cont_names=cont, 
                              procs=procs)
                             .split_none()
                             .label_from_df(cols=dep_var, label_cls=FloatList, log=True)
                             .databunch(bs=learn.data.batch_size))
      learn.data.valid_dl = dt.train_dl
      fi[c] = float(learn.validate()[1]) - loss0  # importance = shuffled metric - baseline

  d = sorted(fi.items(), key=lambda kv: kv[1], reverse=True)
  df = pd.DataFrame({'Variable': [l for l, v in d], 'Accuracy': [v for l, v in d]})
  df['Type'] = ''
  for x in range(len(df)):
    if df['Variable'].iloc[x] in cats:
      df.loc[x, 'Type'] = 'categorical'
    if df['Variable'].iloc[x] in conts:
      df.loc[x, 'Type'] = 'continuous'
  return df
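
Calling it looks something like this (the variable names are just the usual ones from the Rossmann lesson; test_df is the held-out dataframe, which must still contain the dependent variable):

fi = feature_importance(learn, cat_vars, cont_vars, 'Sales', test_df)
fi.head(10)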

This allows for a very standard approach to the two default loss functions fastai will use. My results were different from yours though. Anything negative was a negative impact on the training, so those features were the best.

╔════════╦══════════════════════════╦═══════════╦═════════════╗
β•‘ Number β•‘         Variable         β•‘ Accuracy  β•‘ Type        β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    0   β•‘       SchoolHoliday      β•‘ 0.001581  β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    1   β•‘           trend          β•‘ 0.001569  β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    2   β•‘     AfterStateHoliday    β•‘ 0.001444  β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    3   β•‘           Month          β•‘ 0.001159  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    4   β•‘      StateHoliday_bw     β•‘ 0.001103  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    5   β•‘         trend_DE         β•‘ 0.001090  β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    6   β•‘       Min_Humidity       β•‘ 0.001085  β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    7   β•‘    Max_Wind_SpeedKm_h    β•‘ 0.000958  β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    8   β•‘     Max_TemperatureC     β•‘ 0.000871  β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 9      β•‘ StateHoliday             β•‘ 0.000795  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 10     β•‘ Min_TemperatureC         β•‘ 0.000791  β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 11     β•‘ Events                   β•‘ 0.000748  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 12     β•‘ PromoInterval            β•‘ 0.000531  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 13     β•‘ Promo2Weeks              β•‘ 0.000477  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 14     β•‘ StoreType                β•‘ 0.000465  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 15     β•‘ Promo2SinceYear          β•‘ 0.000420  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 16     β•‘ Store                    β•‘ 0.000397  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 17     β•‘ Year                     β•‘ 0.000392  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 18     β•‘ CompetitionMonthsOpen    β•‘ 0.000334  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 19     β•‘ BeforeStateHoliday       β•‘ 0.000255  β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 20     β•‘ State                    β•‘ 0.000107  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 21     β•‘ Assortment               β•‘ -0.000095 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 22     β•‘ Day                      β•‘ -0.000122 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 23     β•‘ Promo_bw                 β•‘ -0.000333 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 24     β•‘ CloudCover               β•‘ -0.000406 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 25     β•‘ Mean_TemperatureC        β•‘ -0.000516 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 26     β•‘ Promo                    β•‘ -0.001300 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 27     β•‘ SchoolHoliday_bw         β•‘ -0.001309 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 28     β•‘ Mean_Humidity            β•‘ -0.001415 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 29     β•‘ SchoolHoliday_fw         β•‘ -0.001569 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 30     β•‘ StateHoliday_fw          β•‘ -0.001817 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 31     β•‘ Week                     β•‘ -0.004419 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 32     β•‘ DayOfWeek                β•‘ -0.008283 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 33     β•‘ Max_Humidity             β•‘ -0.008312 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 34     β•‘ CompetitionDistance      β•‘ -0.008432 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 35     β•‘ CompetitionOpenSinceYear β•‘ -0.008464 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 36     β•‘ Mean_Wind_SpeedKm_h      β•‘ -0.008909 β•‘ continuous  β•‘
β•šβ•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•

Store wound up being somewhere in the middle here, so perhaps I am doing something wrong?

Here are the results given the old function from earlier posts:

╔════════╦══════════════════════════╦═══════════╦═════════════╗
β•‘ Number β•‘         Variable         β•‘ Accuracy  β•‘ Type        β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    0   β•‘    Mean_Wind_SpeedKm_h   β•‘ 0.000946  β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    1   β•‘         Promo_bw         β•‘ 0.000924  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    2   β•‘           Promo          β•‘ 0.000844  β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    3   β•‘           Store          β•‘ 0.000747  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    4   β•‘     SchoolHoliday_fw     β•‘ 0.000728  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    5   β•‘      Promo2SinceYear     β•‘ 0.000717  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    6   β•‘        Assortment        β•‘ 0.000653  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    7   β•‘         Promo_fw         β•‘ 0.000611  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘    8   β•‘      StateHoliday_fw     β•‘ 0.000428  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 9      β•‘ Day                      β•‘ 0.000400  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 10     β•‘ Max_Wind_SpeedKm_h       β•‘ 0.000358  β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 11     β•‘ CompetitionDistance_na   β•‘ 0.000294  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 12     β•‘ Month                    β•‘ 0.000185  β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 13     β•‘ trend_DE                 β•‘ 0.000050  β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 14     β•‘ BeforeStateHoliday       β•‘ 0.000014  β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 15     β•‘ SchoolHoliday            β•‘ -0.000037 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 16     β•‘ CompetitionMonthsOpen    β•‘ -0.000058 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 17     β•‘ StateHoliday             β•‘ -0.000058 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 18     β•‘ Max_Humidity             β•‘ -0.000077 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 19     β•‘ Mean_TemperatureC        β•‘ -0.000136 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 20     β•‘ StateHoliday_bw          β•‘ -0.000148 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 21     β•‘ StoreType                β•‘ -0.000163 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 22     β•‘ Mean_Humidity            β•‘ -0.000193 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 23     β•‘ SchoolHoliday_bw         β•‘ -0.000246 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 24     β•‘ DayOfWeek                β•‘ -0.000286 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 25     β•‘ trend                    β•‘ -0.000390 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 26     β•‘ Promo2Weeks              β•‘ -0.000517 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 27     β•‘ Min_Humidity             β•‘ -0.000906 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 28     β•‘ PromoInterval            β•‘ -0.000937 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 29     β•‘ Min_TemperatureC         β•‘ -0.001001 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 30     β•‘ CompetitionOpenSinceYear β•‘ -0.001064 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 31     β•‘ AfterStateHoliday        β•‘ -0.001515 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 32     β•‘ CloudCover               β•‘ -0.001570 β•‘ continuous  β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 33     β•‘ State                    β•‘ -0.002007 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 34     β•‘ Events                   β•‘ -0.002613 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 35     β•‘ Year                     β•‘ -0.003186 β•‘ categorical β•‘
╠════════╬══════════════════════════╬═══════════╬═════════════╣
β•‘ 36     β•‘ CompetitionDistance      β•‘ -0.005161 β•‘ continuous  β•‘
β•šβ•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•

Both of these use the convention that importance is calculated as shuffled_accuracy - baseline_accuracy.

Let me know your thoughts.

1 Like

Hm… interesting results…
The problem with this dataset is us :slight_smile: in the sense that we are not domain experts here. It’s hard to interpret why, for example, Mean_Wind_SpeedKm_h seems to affect sales so much (first thought: it should not be at the top)… My intuition is that Store should be at the top (or top 3), but that’s not a very educated guess.

Unfortunately I don’t have enough time to dig in here right now (maybe I can find a slot on a weekend).
But here is one process I can come up with to check the results.

What if you take 4 distinct features from different parts of your feature importance table (from the top 3, from the bottom 3, and 2 from the middle, as far apart as possible)?
Then you train your whole model (just by hand, no automation) and record the accuracy. Then you throw away each of the features you chose in the previous step, one at a time (and only one per step), removing it from the list of cont/cat features, and save the accuracy again (again manually, recreating the databunch and learner each time, just to check that everything is ok on each step). Then, by comparing accuracies, you should be able to get the β€˜real’ relative feature importance of these four features. The order should match the order in your shuffling-based FI. If it’s completely different, then there is some problem with the shuffling or its implementation. A sketch of what I mean is below.
Oh, and I compared accuracies on validation sets. I split the initial dataframe in two (train and validation), just to be sure that I’m always comparing the same data (a fixed validation set). I do remember I made a separate parameter for whether to check accuracy on the whole dataset (train + valid) or on valid only, and I don’t really remember why I ended up using only the validation accuracies for comparison.
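
Something along these lines (a rough sketch, assuming the usual fastai v1 Rossmann setup; df, cat_vars, cont_vars, procs, dep_var, valid_idx, the layer sizes, the epoch count and the four example features are all placeholders for whatever your notebook already uses):

from fastai.tabular import *

def retrain_without(df, cat_vars, cont_vars, dep_var, procs, valid_idx, drop_col=None):
    "Retrain from scratch with one feature removed and return the validation metric."
    cats  = [c for c in cat_vars  if c != drop_col]
    conts = [c for c in cont_vars if c != drop_col]
    data = (TabularList.from_df(df, path='', cat_names=cats, cont_names=conts, procs=procs)
                       .split_by_idx(valid_idx)
                       .label_from_df(cols=dep_var, label_cls=FloatList, log=True)
                       .databunch())
    learn = tabular_learner(data, layers=[1000, 500], metrics=exp_rmspe)
    learn.fit_one_cycle(5, 1e-3)
    return float(learn.validate()[1])

baseline = retrain_without(df, cat_vars, cont_vars, dep_var, procs, valid_idx)
# pick 4 features from different parts of the permutation-importance table
for col in ['Store', 'Mean_Wind_SpeedKm_h', 'Week', 'CloudCover']:
    delta = retrain_without(df, cat_vars, cont_vars, dep_var, procs, valid_idx, drop_col=col) - baseline
    print(col, delta)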

And I’ve remembered one sanity check I made there. Make a fake feature (or, as I remember, there was already one in the data) that the dependent variable does not, you know,… depend on (for example, it has the same value for every row). After you apply your algorithm, its feature importance should be close to zero; if it’s not, that is strong proof that something is wrong.
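
In code it would look roughly like this (a sketch only; df, cat_vars, cont_vars, procs, valid_idx, dep_var and the training settings are placeholders for your own notebook’s, and feature_importance is the function from the earlier post):

import numpy as np
from fastai.tabular import *

# a feature the target cannot depend on: pure noise, added before training
# so that the model actually sees it
df['fake_noise'] = np.random.rand(len(df))
cont_vars = cont_vars + ['fake_noise']

data = (TabularList.from_df(df, path='', cat_names=cat_vars, cont_names=cont_vars, procs=procs)
                   .split_by_idx(valid_idx)
                   .label_from_df(cols=dep_var, label_cls=FloatList, log=True)
                   .databunch())
learn = tabular_learner(data, layers=[1000, 500], metrics=exp_rmspe)
learn.fit_one_cycle(5, 1e-3)

fi = feature_importance(learn, cat_vars, cont_vars, dep_var, df.iloc[valid_idx])
print(fi[fi['Variable'] == 'fake_noise'])   # its importance should come out near zero
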
The only other thing I can come up with is to try to add some checks at each step, as we are not able to determine whether FI works fine just by looking at the output table (I can’t :frowning: ).
Hope that helps :slight_smile:
Sorry for my very messy and unstructured thoughts and my ugly English (it’s obviously not my first language). I know that in English I sound less polite than I should, but that’s just an inherited feature of the Russian language (not sounding polite enough) :slight_smile:

1 Like

And there is one thing I did not quite understand: why are there two tables of FI?

Hi, have you made any progress here?
Also, one thought came to me on this topic. Yesterday I was trying to implement (naive) partial dependence for a text classification problem (substituting each word in a piece of text with unknown and monitoring how the probability of each class shifted). And I’ve also noticed (along with many on this forum, there are a lot of topics about it like this one) that predicting a batch as a whole doesn’t work the same as predicting one by one (.predict() vs .get_preds()). It is weird. And I thought maybe this magic with substituting the dataloader, like learn.data.valid_dl = dt.train_dl, is not working as intended (in .get_preds() something like this is used, .add_test() and substituting the data object, if I remember correctly). Just one crazy thought…

Sometimes it may skip one or two, but in my tests I’ve noticed it operates the same with tabular data (no skipping found)… I haven’t quite finished working on it yet, as I have a lot on my plate right now. I will get to it very soon though!

Maybe, when I find some time, I will try to compare your method of substituting the validation set in learn.data with manually applying .predict() to each row, and see if they produce the same result…

1 Like

Here’s a notebook where I explored just that :slight_smile: (If I missed something in there, or you see any mistakes, let me know. It’s currently 5am and I haven’t slept, so a weary once-over may have missed something.)

1 Like

@Pak I’m wondering if it has to do with the databunch generation itself (I haven’t looked into this yet, it’s just an idea). For instance, if we make a non-split databunch, is the order the same as in our original dataframe?

Edit: Confirmed it is.

Edit 2: I see the issue, or a hint to an issue. Say I call learn.get_preds(), which will go to the validation dataset. We get an array of predictions, with the second item in that array being the c2i indexes. I am not seeing the same predictions being generated at all, despite the confidence being well above 80% in most cases. Part of that could be variation in the model itself, but when I run learn.get_preds() multiple times I notice a large number of changes in those predictions. Meanwhile learn.predict() always gives me the same output.
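
The quick comparison I’m running looks roughly like this (valid_df is a placeholder for the dataframe the validation set was built from):

from fastai.tabular import *  # for DatasetType

# batch predictions over the whole validation set...
preds, _ = learn.get_preds(ds_type=DatasetType.Valid)
print(preds[:5].argmax(dim=1))

# ...versus predicting the same rows one at a time
for i in range(5):
    print(learn.predict(valid_df.iloc[i])[0])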

1 Like

Great point. I should definitely test whether my approach of applying the test set to the model outputs consistent results, in light of what you just said (though as I remember, I was checking it with learn.predict()).
Upd 1: I’ve tested it, and it doesn’t; there must be some errors in my approach :frowning: I will dive into it further.
Upd 2: Something weird has just happened. I tested the difference in results between .get_preds() and .predict() (there was one). And suddenly I was getting the same result out of nowhere. I can confirm that it’s not a change in code, because I did not edit it; I only appended new code to the notebook. Then I even reran my first experiments and they worked too. I have no idea what is going on. It looks like the library updated itself (as the progress bar also started working), and I did not do that manually. Maybe your case will magically start to work too :slight_smile:
Upd 3: I’ve figured out why my approach stopped working. Fastai changed how it deals with the last layers, so I had to update my code too. Now I get the same results with all 3 functions: .get_preds(), .predict(), and my own get_cust_preds().

1 Like

@Pak see the discussion here:

Turns out I was missing a step!

Hi again.
I have managed to run some experiments with my Rossmann notebook (and updated it as well). And I’ve noticed that you were probably right: the relative feature importance values (column permutation vs. retrain methods) between different features in my notebook really are comparable. I was confused by the absolute values, but if I normalize them, the numbers tell a different story (which, by the way, I only noticed after I plotted the FI :frowning: ).
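
The normalization is nothing fancy, just scaling each table by its largest absolute value so the two methods can be plotted on the same axis (fi_perm and fi_retrain are placeholder names for the two result dataframes):

# put both importance tables on a comparable scale before plotting them together
fi_perm['Normalized'] = fi_perm['Accuracy'] / fi_perm['Accuracy'].abs().max()
fi_retrain['Normalized'] = fi_retrain['Accuracy'] / fi_retrain['Accuracy'].abs().max()
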
My thoughts on this for now are the following:
Which to choose is a tricky question. On the one hand, the naive method (sorry, I will keep calling it that; I no longer think it is naive, but that’s how it is called in my notebook, so historical reasons I should say :slight_smile: it is really column permutation) is waaaaaaaay faster. On the other hand, it depends on what you mean by the word importance.
If every feature were a separate entity, unrelated to the others, I would expect the results to be much more similar (and I would definitely recommend the naive method), but in real life they hardly ever are. In real life we have a mess of interconnected (as well as derivative, created by ourselves) features. And what do we really want to know? How our current model ranks features against one another with respect to the dependent variable, or how much unique information each feature holds? I’d say it depends. I can see cases where the first option is better and some where the second is (at least the one where we try to eliminate redundant features).
So I think these two methods just answer two slightly different questions about importance. Given enough time, I would probably use both of them to get some insights.

1 Like

@Pak thanks for being so thorough with this! I agree, both are doable and it just depends on the budget (money and time) you can devote to the methods: the column permutation method looks at what the model pays the most attention to, whereas full retraining gets at what the model can find the most useful. Both are cut from the same cloth to some degree. But I agree that both could and should be done. The column permutation can help explain the model’s behavior quickly as well!