What is everyone's preferred method for doing EDA on tabular data with the updated library? Do you still use a random forest and then move over to a NN? Or are there functions I have overlooked?
During the machine learning courses, Jeremy discusses feature importance and provides the famous bulldozer notebook. Going back through it, I noticed many of the functions depend on rf_feat_importance, and with fast.ai now using the tabular module, that functionality broke.
After several hours of searching the forums, I saw this asked in the Jeremy AMA, but it didn't appear to get an answer. I also found "Measuring Feature Importance in NNets for structured data"; however, it uses the old structured data format.
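In the meantime, one model-agnostic workaround is plain permutation importance: shuffle one column at a time and measure how much the error rises. Here's a minimal sketch using scikit-learn rather than fastai's API (the function name and setup are my own, not from the library); it works with anything that exposes a `.predict`, so the same loop could wrap a tabular NN's predictions too:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def perm_importance(model, X, y, n_repeats=5, seed=0):
    """Permutation importance: increase in MSE when each column is
    shuffled, averaged over n_repeats shuffles. Model-agnostic —
    only requires a fitted model with .predict()."""
    rng = np.random.default_rng(seed)
    base = mean_squared_error(y, model.predict(X))
    imps = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break column j's link to y
            scores.append(mean_squared_error(y, model.predict(Xp)))
        imps[j] = np.mean(scores) - base  # rise in error = importance
    return imps

# Toy data: y depends only on column 0; column 1 is pure noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 2))
y = 3 * X[:, 0] + rng.normal(scale=0.1, size=500)
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
imps = perm_importance(rf, X, y)
```

Note that sklearn also ships `sklearn.inspection.permutation_importance`, which does essentially this with more options, if you'd rather not roll your own.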