Hi DL enthusiasts,
I’d like to share a thank-you and then a question; feel free to skip to the question.
Shout out to Rachel and Jeremy for making this MOOC! I’m a professional software architect from Denmark. I was extremely enthusiastic about machine learning during the AI winter in the ’90s.
However, as some of you may recall, progress was slow back then and the limitations on funding and general interest made this area really hard to approach unless you wanted to pursue a career in academia. I gave it up, and left that part of my dream behind.
But here we are today with fast GPUs, vast amounts of data, updated theories, and an endless supply of applicable problems to tackle. That shift is simply so amazing that it makes you wonder if we are all part of some kind of elaborate simulation game and the kids playing it just drew a card from the pile saying “Your civilization develops AI, +2 points”.
So I made a New Year’s resolution and dedicated a couple of hours every day to getting up to speed on the current state of machine learning, promising myself that I would work professionally in the field as soon as possible. Fast.ai is by far the best online resource out there when it comes to getting your hands dirty: working on Kaggle competitions and just trying different things with rapid iteration.
I landed a dream project where I basically am free to research and explore whatever I want, as long as we have progress and I can demonstrate positive results. There is just no way I would have been able to make a career shift this fast without fast.ai (good name), I owe you guys so much - thank you!
I am currently working on a supervised expert system for selecting high-quality sports bets, where we model from samples given by expert human advisors and combine the results with a more general statistical approach to the bet outcomes. We use an ensemble deep NN for quality scoring and an XGBoost classifier model to detect outliers that are a result of overfitting to the few training samples that we can get from the human experts. Given the lack of training data, this works surprisingly well. I’d also be interested in knowing about other projects where logistic regression and linear regression are being used in combination.
As the human expert ratings are entered every day, we need to update the model every day. I retrain the weights from scratch with a batch job, because we have very few training samples to work with, and updates to the training set can cause big, fundamental, and necessary weight updates.
So I have a couple of questions:
- Rachel writes on the fast.ai blog (16-11-2017): “It is incredibly rare to need to train in production. Even if you want to update your model weights daily, you don’t need to train in production. Good news! This means that you are just doing inference (a forward pass through your model) in production, which is much quicker and easier than training.”
I’m not sure I understand that. We do need to update our model weights daily on production data; is there a better way than running a scheduled batch job for this? I can’t see how doing inference/forward passes only can help us here.
- When is the current fast.ai course coming online? I can hardly wait!
All the best,
I suspect you’re focused on your particular task, which does indeed sound like it needs to be updated daily (or even hourly), rather than thinking about ML as a whole. In your case you’ve got a data situation where training examples are provided regularly and the situations are constantly changing, so you can’t rely on old data/models to provide reasonable estimates.
But in many situations a model is developed, deployed, and then sits in production for months or even years. This is possible because the data and assumptions surrounding the model don’t change and so there’s no need to retrain.
I think for your situation it might be possible to avoid daily retraining if you had a representative set of betting behaviours and a way to generalize the inputs of the human advisors so that the data wasn’t per bet, but I suspect that model wouldn’t be quite as accurate. Right now you have a very human-in-the-loop ML model, which to me is one of the more interesting paradigms.
Thinking a bit outside the box, the other option would be to train the model not to emulate the human samples directly, but to predict the likely outcome based on a set of human samples. In that case the human samples aren’t your training data; they’re inputs to your model, and the model wouldn’t need to be updated as new samples come in.
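To make that concrete, here is a minimal sketch of the idea: the variable-size set of expert ratings for a bet is collapsed into a fixed-size feature vector, which a fixed, already-trained model could then consume. The function and feature names are hypothetical, just to illustrate the "samples as inputs" framing:

```python
# Sketch (hypothetical feature names): instead of treating each expert
# rating as a training label, summarise the current set of ratings into a
# fixed-size feature vector that a fixed model maps to an outcome prediction.
from statistics import mean, stdev

def summarise_expert_ratings(ratings):
    """Collapse a variable-size set of expert ratings into fixed features."""
    spread = stdev(ratings) if len(ratings) >= 2 else 0.0
    return {
        "n_experts": len(ratings),      # how many advisors rated this bet
        "mean_rating": mean(ratings),   # consensus strength
        "rating_spread": spread,        # disagreement between advisors
        "max_rating": max(ratings),
        "min_rating": min(ratings),
    }

# Today's ratings for one bet become model *inputs*, not labels, so the
# model itself does not need retraining when new ratings arrive.
features = summarise_expert_ratings([0.8, 0.6, 0.9])
```

Because the model only ever sees the summary, new expert ratings change the inputs at inference time rather than forcing a weight update, which is exactly the "inference only in production" setup from the blog post.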
Paul, your description suggests that you have a rather small number of training samples (a few thousand?) and that the number of features is also quite limited. In this case, I would retrain the model every time you get a new sample, as it shouldn’t take longer than a few minutes to train. You would probably need to weight later samples a bit more as well (see sample_weight at https://keras.io/models/model/).
If you have a continuous stream of data and don’t have time to retrain the whole model, you can look at online training/learning, where you just feed new samples to your model as soon as they arrive.
Thank you, I get it; the blog post was not directed at my situation then, as we are the rare case. I wholeheartedly agree that human-in-the-loop is the interesting paradigm. Regarding other approaches, we are also running a general collaborative filtering system on user ratings (fast.ai MovieLens-style setup) to qualify bets, which I believe is in the same direction as your idea of using collective betting patterns to predict outcomes. Due to high variance, collective bets from non-expert users are not a very accurate predictor of actual outcomes, but they tell us a lot about user preferences, and that is very valuable.
Dennis, you are exactly right. You are also correct about the importance of sample recency, thanks for mentioning it. I believe the easiest solution is to use features for time and let the training take care of it. Do you perhaps have a link or a single pointer to the online training/learning you mentioned? (It is a very broad search term.) Eventually we may want to train on a continuous stream of data, so I’d very much like to look into this.
Yeah, searching for “online learning” gives lots of links to Coursera.
See here. But it’s just about feeding your model new batches of data every time you receive them, so you are not retraining the model from scratch but rather feeding new data as it arrives. It can delay the response to recent regime shifts, and it is probably better suited to really massive tasks like click prediction, etc.
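The mechanics can be shown with a toy example: a logistic model whose weights are nudged by one SGD step per incoming sample, instead of being refit on the full history. This is a from-scratch sketch for illustration, not any particular library’s API (scikit-learn’s partial_fit offers the same pattern off the shelf):

```python
# Minimal sketch of online learning: a logistic model updated one sample at
# a time with SGD, rather than retrained from scratch on the full history.
import math

class OnlineLogistic:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # weights start at zero
        self.b = 0.0                 # bias term
        self.lr = lr                 # learning rate

    def predict_proba(self, x):
        """Probability of the positive class (sigmoid of the linear score)."""
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, y):
        """One SGD step on a single (features, 0/1 outcome) sample."""
        err = self.predict_proba(x) - y  # gradient of the log-loss w.r.t. z
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

# Feed samples as they arrive; no batch retraining job needed.
model = OnlineLogistic(n_features=2)
stream = [([1.0, 0.0], 1), ([0.0, 1.0], 0)] * 200
for x, y in stream:
    model.update(x, y)
```

The caveat in the thread applies directly: with very few samples, each single-pass update moves the weights only slightly, so convergence depends on volume, which is easy to benchmark in simulation as suggested below.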
Ok Dennis, so from what I understand, online training makes a single-pass weight update based on a continuous feed of data. Great idea, but I’m unsure how much data is needed to get an accurate model when you only have a single pass per sample (as you know, we have very few samples). But I do understand Rachel’s blog post now; this is what she is talking about, I think…
You train your model once in the beginning with all available data and then update it as soon as you get new data.
Yes, I understand. It is the sparsity of the new data that will potentially be a problem, because the training signal from each individual sample is next to nothing. That means the accuracy is highly dependent on the number of new samples, so I’m not sure it will be very effective. In any event, I can benchmark this by running a simulation. Thanks for your input.