Is it just me, or do the notebooks run way slower after my latest conda env update? It seems to be something about opencv; even opening single images takes forever, and my epochs are 3x slower. Nothing else changed on my p2 instance. I am working with the lesson7-CAM notebook.
I restarted my p2 instance for good measure.
Did something change in the env regarding pytorch, opencv, or cuda?
It is one of my goals in the near future to improve this, despite the fact that the validation metric could be a varying function of lists and whatnot! The notebooks just don’t look right with those numbers spread out like that.
Also, I want to report the wall-time used per epoch.
I’m hoping to use some kind of reflection to determine which loss function and metrics were passed, and print their names. That way we get something more specific than a plain “val metrics” label.
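A rough sketch of both ideas (not fastai’s actual code): time an epoch with the standard library and use simple reflection (`__name__`, unwrapping `functools.partial`) to label the loss and metrics that were passed in. The loss/metric choices and the placeholder numbers are only illustrative.

```python
import time
from functools import partial
import torch.nn.functional as F

def callable_name(fn):
    """Best-effort readable name for a loss/metric callable."""
    if isinstance(fn, partial):
        fn = fn.func
    return getattr(fn, '__name__', type(fn).__name__)

crit    = F.nll_loss                   # loss function passed to the learner
metrics = [F.cross_entropy]            # stand-in for e.g. accuracy

start = time.time()
# ... one epoch of training / validation would run here ...
trn_loss, vals = 0.031, [0.028]        # placeholder numbers

names  = [f'trn_{callable_name(crit)}', *(callable_name(m) for m in metrics)]
values = [trn_loss, *vals]
report = '  '.join(f'{n}: {v:.4f}' for n, v in zip(names, values))
print(f'epoch 1  {report}  time: {time.time() - start:.1f}s')
```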
I will try to rerun the entire set of notebooks and submit them as a PR.
I just added a line for compatibility with scikit-learn:
Add a scikit-learn wrapper for fastai, as done for Keras, XGBoost, or PyTorch with skorch
I feel like this could simplify code by making fastai models accessible to the scikit-learn ecosystem. It would also enable tools like xcessiv, which seems like a really cool way to do parameter tuning and stacking/ensembling.
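A hypothetical sketch of what such a wrapper could look like. The class name, constructor arguments, and the fastai-side calls (`fit(lr, n_cycle)`, a predict-on-array method) are assumptions, shown only to illustrate the estimator interface (fit / predict / get_params) in the spirit of KerasClassifier or skorch.

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin

class FastaiClassifier(BaseEstimator, ClassifierMixin):
    def __init__(self, build_learner=None, epochs=3, lr=1e-2):
        # build_learner: user-supplied callable returning a fastai learner for
        # given (X, y) arrays; kept generic so nothing version-specific is
        # hard-coded into this sketch.
        self.build_learner = build_learner
        self.epochs = epochs
        self.lr = lr

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.learner_ = self.build_learner(X, y)
        self.learner_.fit(self.lr, self.epochs)   # fastai-style fit(lr, n_cycle)
        return self

    def predict_proba(self, X):
        # Assumes the learner exposes some predict-on-array method; the exact
        # call depends on the fastai version being wrapped.
        return self.learner_.predict_array(X)

    def predict(self, X):
        return self.classes_[np.argmax(self.predict_proba(X), axis=1)]
```

With something like this, a fastai model could be dropped into `cross_val_score`, `GridSearchCV`, or xcessiv like any other scikit-learn estimator.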
This is using Nvidia-Docker v2.0 and contains all dependencies for fast.ai. You still have to download fast.ai once you are in the container. You can view the Dockerfile here: https://hub.docker.com/r/hamelsmu/ml-gpu/~/dockerfile/
Description: an option in fastai to utilize a set of GPUs for a task and have fastai automatically handle parallelization, data distribution, and synchronization. For example, I know Jeremy has 4 GPUs, as do others; I have 2. With this, you could pass a list, GPUS=[0, 2], for the first and third GPU to automagically work together on a task.
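PyTorch already exposes the low-level building block for this: wrapping a model in `nn.DataParallel` with an explicit `device_ids` list splits each batch across those GPUs and gathers the results. A fastai-level `GPUS=[0, 2]` option could presumably delegate to something like the sketch below (the fastai-side integration is an assumption; only the torch calls are standard).

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 10))

GPUS = [0, 2]  # first and third GPU
if torch.cuda.is_available() and torch.cuda.device_count() > max(GPUS):
    # Parameters must live on device_ids[0]; forward passes are scattered
    # across the listed devices and outputs gathered back on GPUS[0].
    model = nn.DataParallel(model, device_ids=GPUS).cuda(GPUS[0])

# Training then proceeds exactly as with a single-GPU model.
```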
Make the t_up, t_st, and t_mx flags in the Tokenizer in text.py optional, so a user can turn them off. I am currently working on a project with SQL text, and it would be nice to have a quick way to stop capitalization from being flagged. I am going to code this, but not tonight, so if somebody else wants to do it, I think it would be a good one.
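A rough sketch of the idea (the class structure, regex, and method names are illustrative, not fastai’s exact text.py implementation): the marker tokens become opt-in via constructor flags, so a SQL corpus can keep its natural capitalization without t_up markers being injected.

```python
import re

class Tokenizer:
    def __init__(self, mark_upper=True, mark_sentence_start=True, mark_mixed=True):
        self.mark_upper = mark_upper                    # controls t_up markers
        self.mark_sentence_start = mark_sentence_start  # controls t_st markers
        self.mark_mixed = mark_mixed                    # controls t_mx markers

    def _mark_caps(self, text):
        if not self.mark_upper:
            return text
        # prefix fully upper-cased words with t_up and lower-case them
        return re.sub(r'\b([A-Z][A-Z]+)\b',
                      lambda m: 't_up ' + m.group(1).lower(), text)

    def proc_text(self, text):
        text = self._mark_caps(text)
        # ... remaining flag-guarded steps and the actual tokenization ...
        return text.split()

# With mark_upper=False, "SELECT name FROM users" keeps SELECT/FROM intact.
tok = Tokenizer(mark_upper=False)
print(tok.proc_text('SELECT name FROM users'))
```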
Let’s face it: performance matters, but proper optimization takes hard work, time, and a lot of testing.
Weld enables end-to-end optimization across disjoint libraries and functions without changing the libraries’ user-facing APIs. So why not bake it into fast.ai?