I am currently training an XGBoost model and a fastai tabular learner. My goal is to compare both on different criteria. One criterion is computational effort.
I tried measuring that using the `%%time` cell magic in a Jupyter notebook.
However, I have now realized that this approach does not make much sense, because the measured time can differ enormously between runs (by a factor of 5, for example).
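To illustrate the kind of measurement I have in mind, here is a minimal sketch that times a callable several times with `time.perf_counter` and reports summary statistics, rather than relying on a single `%%time` sample. The `dummy_train` function is just a placeholder standing in for the actual training call:

```python
import time
import statistics

def benchmark(fn, n_runs=5):
    """Run fn() n_runs times and return summary stats of wall-clock seconds.

    A single timing is one noisy sample; repeating the run and looking at
    the minimum/median reduces noise from OS scheduling, caches, etc.
    """
    timings = []
    for _ in range(n_runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return {
        "min": min(timings),
        "median": statistics.median(timings),
        "mean": statistics.mean(timings),
    }

# Placeholder workload -- in practice this would be the model's fit/training call.
def dummy_train():
    sum(i * i for i in range(100_000))

stats = benchmark(dummy_train, n_runs=5)
print(stats)
```

Even with repetition like this, the run-to-run variance between the two models is still large, which is why I am asking whether there is a more principled way to compare them.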
Does anybody have an idea for a better approach to measuring and comparing the computational effort of the two models?