Wiki: Fastai Library Feature Requests

Yes, you can use a forward hook. We’ll learn about them on Monday :slight_smile:

1 Like

Guys,
a very small bug that you may have already seen.

When you define the data object and provide test_name='test':

data = ImageClassifierData.from_csv(PATH, folder='train', csv_fname=f'{PATH}labels.csv',
                                    tfms=tfms, val_idxs=get_cv_idxs(n=4750), test_name='test', bs=12)

and there is no test folder (or the test folder is empty), this assignment results in the following error:


The function read_dir() in dataset.py is used to read the test folder. It already contains a TODO about this: "warn or error if no files found?"

def read_dir(path, folder):
    # TODO: warn or error if no files found?
    full_path = os.path.join(path, folder)
    fnames = iglob(f"{full_path}/*.*")
    return [os.path.relpath(f,path) for f in fnames]

read_dir() is only used to read the test data folder.


I think this should raise an error if it returns an empty list. That way the error message would be more explicit.
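A minimal sketch of the proposed check, assuming the current read_dir() from dataset.py (the exact wording of the error is of course up for discussion):

```python
import os
from glob import iglob

def read_dir(path, folder):
    # Fail loudly instead of silently returning an empty list,
    # so the user immediately sees the folder is missing or empty
    full_path = os.path.join(path, folder)
    fnames = list(iglob(f"{full_path}/*.*"))
    if not fnames:
        raise FileNotFoundError(f"{full_path} does not exist or contains no files")
    return [os.path.relpath(f, path) for f in fnames]
```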

You can edit your fastai code and check whether that works, then feel free to create a PR; Jeremy will see to it after that…

1 Like

Yes, I think it’s better to report the issue there and submit a PR.

How can we use multiple augmentations (RandomFlip, center crop, top-down, etc.) on the same dataset?

Would doing so make the learner more versatile, since it could then handle different images better?

Can it be done in this way:

  • Train the learner with any one set of augs.
  • Save the weights, then load them into a learner re-created exactly as before, except for the augs, and continue training.
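The staged approach above could be sketched as follows. This is a hypothetical outline, not fastai API: `build_learner`, `load_state`, and `save_state` are placeholder names, and each stage rebuilds the learner with a new augmentation set while resuming from the previous stage's weights.

```python
# Placeholder augmentation sets; the names are illustrative only
aug_sets = [
    ["RandomFlip"],                    # stage 1: side-on flips only
    ["RandomFlip", "RandomDihedral"],  # stage 2: add top-down transforms
]

def train_stages(build_learner, aug_sets, cycles=1):
    """Train the same model in stages, swapping augs between stages."""
    weights, history = None, []
    for augs in aug_sets:
        learn = build_learner(augs)    # same model, different transforms
        if weights is not None:
            learn.load_state(weights)  # resume from the previous stage
        learn.fit(cycles)
        weights = learn.save_state()
        history.append((augs, weights))
    return history
```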

Is it just me, or do the notebooks run way slower after my latest conda env update? It seems to be something about opencv; even opening single images takes forever. My epochs are 3x slower. Nothing else changed on my p2 instance. I am working with the lesson7-CAM notebook.

I restarted my p2 instance for good measure.

Did something change in the env regarding pytorch, opencv, or cuda?

LOL …

It is one of my goals in the near future to improve this, even though the validation metrics can be a varying mix of functions, lists, and whatnot! The notebooks just don’t look right with those numbers spread out like that.

Also, I want to output the wall time used per epoch.

I’m hoping to use some kind of reflection to determine which loss function and metrics were passed, and print them out. That way we can get something more specific than a plain “val metrics” label.
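A minimal sketch of that reflection idea (`metric_names` is a hypothetical helper, not part of fastai):

```python
def metric_names(loss_fn, metrics):
    """Recover readable names for a loss function and a list of metrics."""
    def name(f):
        # plain functions carry __name__; other callables (e.g. loss classes)
        # fall back to their type's name
        return getattr(f, '__name__', type(f).__name__)
    return [name(loss_fn)] + [name(m) for m in metrics]
```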

I will try to rerun the entire set of notebooks and submit a PR.

Would that be nice to have @jeremy ?

Thanks…

1 Like

I just added a line for compatibility with scikit-learn:

I feel like this could make code simpler by making scikit-learn more accessible to fastai. It would also enable using tools like xcessiv, which seems like a really cool way to do parameter tuning and stacking/ensembling.
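As an illustration of the kind of scikit-learn compatibility meant here: an sklearn-style estimator only needs fit, predict, and get_params/set_params. The following is a hypothetical sketch; `FastaiClassifier` and `learner_factory` are made-up names, not fastai API (deriving from sklearn.base.BaseEstimator would supply get_params/set_params automatically).

```python
class FastaiClassifier:
    """Hypothetical scikit-learn-compatible wrapper around a learner."""

    def __init__(self, learner_factory, epochs=1):
        self.learner_factory = learner_factory
        self.epochs = epochs

    def get_params(self, deep=True):
        # scikit-learn tooling (GridSearchCV, Pipeline) relies on this
        return {'learner_factory': self.learner_factory, 'epochs': self.epochs}

    def set_params(self, **params):
        for k, v in params.items():
            setattr(self, k, v)
        return self

    def fit(self, X, y):
        self.learner_ = self.learner_factory(X, y)
        self.learner_.fit(self.epochs)
        return self

    def predict(self, X):
        return self.learner_.predict(X)
```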

4 Likes

Hi all experienced coders,

Is it possible to put fastai environment into a Docker container (see example below)? @radek Any comment?

Thank you to @hamelsmu for sharing his knowledge, and congratulations on the recent success.

https://hub.docker.com/r/hamelsmu/ml-gpu/

It would be great if we can make this wiki accessible in the Part 2 category as well.

2 Likes

I think so :slight_smile: Today or tomorrow I am going to find out as I work through the tutorial from @hamelsmu (have not used docker before)

1 Like

Hey folks, I have made a docker container for this class: https://hub.docker.com/r/hamelsmu/ml-gpu/

This is using Nvidia-Docker v2.0 and contains all dependencies for fast.ai. You still have to download fast.ai once you are in the container. You can view the Dockerfile here: https://hub.docker.com/r/hamelsmu/ml-gpu/~/dockerfile/

I hope this is helpful!!! cc: @radek @Moody

4 Likes

Request: Multi-GPU automagic scaling

Description: An option in fastai to utilize a set of GPUs for a task, with fastai automatically handling parallelization, data distribution, and synchronization. For example, I know Jeremy has 4 GPUs, as do others; I have 2. With this, you could pass a list, GPUS=[0, 2], to have the first and third GPU automagically work together on a task.
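As a hedged sketch of how this could work: PyTorch already ships nn.DataParallel, and fastai models are plain nn.Modules underneath, so a wrapper could map the requested GPUS list straight onto device_ids. `parallelize` and `GPUS` below are hypothetical names, not fastai API.

```python
import torch
import torch.nn as nn

GPUS = [0, 2]  # hypothetical: first and third GPU

def parallelize(model, gpus=GPUS):
    """Wrap a model for multi-GPU training via nn.DataParallel."""
    # Fall back to the unwrapped model when fewer than two GPUs are present
    if torch.cuda.device_count() < 2:
        return model
    return nn.DataParallel(model, device_ids=gpus)
```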

2 Likes

Hi @Moody I’ve posted some docker files for fastai on github

1 Like

Make the t_up, t_st, and t_mx flags in the Tokenizer in text.py optional, so a user can turn them off. I am currently working on a project involving SQL, and it would be nice to have a quick way of not having capitalization flagged. I am going to code this, but not tonight, so if somebody else wants to do it, I think it would be a good one.
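As a hypothetical illustration of what such an optional flag could look like, here is a much-simplified stand-in for the t_up rule (the real fastai Tokenizer rules are more involved; `mark_caps` is a made-up name):

```python
TOK_UP = 't_up'  # marker token emitted before a lowercased all-caps word

def mark_caps(tokens, use_t_up=True):
    """Replace all-caps tokens with a t_up marker plus the lowercase form,
    unless the flag is disabled (e.g. when tokenizing SQL)."""
    out = []
    for t in tokens:
        if use_t_up and t.isupper() and len(t) > 1:
            out += [TOK_UP, t.lower()]
        else:
            out.append(t)
    return out
```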

Hi, I found that senet154 has been added to the fastai library, but I don’t know how to use it.
The original method does not work:
arch = senet154
learn = ConvLearner.pretrained(arch,data
What should I do?
Thanks!

Hi Rob, so was this implemented in the end?

1 Like

Is anybody aware of any sklearn-like wrapper for fastai?

[Feature Request] Weld end-to-end optimization

Let’s face it, performance matters but proper optimization takes hard work, time, and a lot of testing.

Weld enables end-to-end optimization across disjoint libraries and functions without changing the libraries’ user-facing APIs. So why not bake it into fast.ai?

https://dawn.cs.stanford.edu/2018/07/30/weldopt/

https://www.weld.rs/

Can I request mermaid support in nbdev?