Automatic experiment management and live model visualization

It’s difficult to keep track of model results when you’re iterating over different hyperparameters and model architectures. It can be frustrating even to see and share how an individual model is performing.

We built this after seeing many fellow data scientists grappling with disjointed scripts, notebooks (both Jupyter and paper ones), and Excel sheets to remember what they ran previously. It helps you automatically track your machine learning code, experiments, hyperparameters, and results, giving you reproducibility, transparency, and faster iteration cycles. This way you can actually see the results coming from your models across all experiments.
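To make the idea concrete, here is a toy sketch of what an experiment tracker captures for each run: hyperparameters, metrics, and a seed for reproducibility. The `ExperimentTracker` class below is hypothetical and written only to illustrate the pattern; it is not Comet's actual API, which handles all of this automatically.

```python
import hashlib
import json
import random

class ExperimentTracker:
    """Toy stand-in for an experiment tracker (hypothetical, for illustration).

    A real tracking tool records this automatically; here we do it by hand
    to show what gets captured per run.
    """

    def __init__(self, name, hyperparams, seed=42):
        self.name = name
        self.hyperparams = dict(hyperparams)
        self.seed = seed
        self.metrics = {}
        random.seed(seed)  # pin randomness so the run is reproducible

    def log_metric(self, key, value):
        self.metrics[key] = value

    def summary(self):
        # Hash the config so identical setups are easy to spot across runs
        config = json.dumps(self.hyperparams, sort_keys=True)
        return {
            "experiment": self.name,
            "config_hash": hashlib.sha256(config.encode()).hexdigest()[:8],
            "hyperparams": self.hyperparams,
            "metrics": self.metrics,
            "seed": self.seed,
        }

# Example: two runs with different learning rates, tracked side by side
runs = []
for lr in (0.1, 0.01):
    run = ExperimentTracker("baseline-cnn", {"lr": lr, "epochs": 5})
    run.log_metric("val_accuracy", 0.90 if lr == 0.01 else 0.84)  # made-up numbers
    runs.append(run.summary())

print(json.dumps(runs, indent=2))
```

Because every run carries its full config and results, comparing experiments becomes a matter of diffing these summaries instead of digging through old scripts and spreadsheets.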

Quick example with Comet

I put together a quick example of logging a real experiment from within a notebook: (plus the example notebook from the video and a sample public project so you can test out more features!)

Would love to hear your feedback + answer questions!