I find it difficult to keep track of everything when working on models: experiments, metrics (split by train/val/test), hyperparameters, architectures, cross-validation results, and so on. Any tips on how to do this efficiently and consistently across different algorithms (random forests, linear/logistic regression, neural nets, etc.)? I end up creating a new Excel sheet for each project to cover all of its experiments, and I find it cumbersome to maintain when switching algorithms within the same project.
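For context, this is roughly what I'm tracking per run, sketched as a tiny Python helper that appends one JSON line per experiment (all names here are just illustrative, not any particular tool's API):

```python
import json
import tempfile
from pathlib import Path

def log_run(log_path, *, algorithm, params, metrics):
    """Append one experiment record as a JSON line (illustrative helper)."""
    record = {"algorithm": algorithm, "params": params, "metrics": metrics}
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

def load_runs(log_path):
    """Read all logged runs back as a list of dicts."""
    with open(log_path) as f:
        return [json.loads(line) for line in f]

# Example: two runs with different algorithms, same record shape
log_file = Path(tempfile.mkdtemp()) / "runs.jsonl"
log_run(log_file, algorithm="random_forest",
        params={"n_estimators": 200, "max_depth": 8},
        metrics={"train_auc": 0.99, "val_auc": 0.91, "test_auc": 0.90})
log_run(log_file, algorithm="logistic_regression",
        params={"C": 1.0},
        metrics={"train_auc": 0.88, "val_auc": 0.87, "test_auc": 0.86})
runs = load_runs(log_file)
```

This works, but it's basically the Excel sheet with extra steps, which is why I'm looking for a proper tool.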
I came across dvc.org and comet-ml.org while searching for version control systems aimed specifically at the machine learning use case. Have you used either of them in the past? Are there better alternatives? What was your experience like? And how should one tackle this when collaborating in a team?
All suggestions are welcome.