Improving/Expanding Functional Tests


(Stas Bekman) #183

Yes, of course, @PierreO

Alternatively, we could use a different approach where we create a thread with no discussion, just the todo lists and howtos, and leave the discussion thread for discussion? Not sure… I’m thinking about how to make it look less intimidating to users.

That’s a good idea. A bit like the lessons are organised: a wiki post that recaps everything for a lesson, and then another post for discussion of said lesson. If you decide to use such an approach, it would be best to have it set up before making the post about contributing in the part 1 category.

Alright, let’s do it this way then. It surely won’t look intimidating then.


(Stas Bekman) #184

Amazing, @PierreO! I guess you had better things to do than sleep :wink:

It’s live here, as you suggested and pinned: How to contribute to fastai

Your writing was almost perfect; I tweaked a few small things and added a bunch more items. Hope it doesn’t become intimidating again by being largish :wink:

So everybody is welcome to edit that post to improve it.

@PierreO, if you get bothered by system notifications since you’re the owner of that post, let us know and we will switch the ownership.


(benedikt herudek) #185

@PierreO this is super good

@stas I would suggest we add forum and PR etiquette guidelines for contributing; there was also a comment about this here: Developer chat

Things like:

  • use PRs for things ready to merge
  • keep PRs small, in functionally ready chunks (?)
  • discuss design in the forums, discuss code snippets in the forums
  • this is how you ask the other developers; this is how and when you ask Stas and Sylvain
    - are status updates useful? (so you know if a task is still being worked on, but don’t get bothered with too many notifications)
  • if there are timelines for changes (e.g. you need it for a course), it’s best to indicate them. Typically there aren’t, but it can take away some unease for developers to know: ‘okay, it’s really fine if I look at that change over the next 4 Sunday afternoons, when I find time. No one needs it earlier, so I can pick a task without being concerned about that’

All things that would make it a good experience for you and Sylvain. There is a little concern on the ‘other end’ of the forum that one might get too noisy or cause you guys more work than value added. A little etiquette could help some of us overcome that.

@init_27


(benedikt herudek) #186

Yeeah, think so!

and I added a draft for the ‘Low Hanging Fruits’ section following @init_27’s proposal - what do you think @PierreO?

If we can get people to add such easy items here, it might be the best path for folks to do small things and grow along the way.


(Stas Bekman) #187

Go for it. As I said several times, please feel free to edit/add what you think will help, and especially those of you who went through a few cycles of submitting PRs and getting feedback and requests for changes are the most qualified to help others with the guidelines. Contributing to contributors is just as important. And thank you.


(benedikt herudek) #188

yes, added that comment on low-hanging fruit ;-):mango:


(Stas Bekman) #189

Except, what has been added so far is not low-hanging fruit - you linked right back to the main topics that are already linked to, which are big, and the user will have no idea which ones are low-hanging. So it’s quite misleading.

I think if you want to create a list of low-hanging fruit, it should be actual actionable items. For example, writing tests in general is not one of them. Finding and fixing typos is.


(benedikt herudek) #190

ok, understood. Corrected - if I see low-hanging :mango: I will add them to the list!


(Stas Bekman) #191

I also think this is what we have: https://forums.fast.ai/c/fastai-users/dev-projects for…
So listing all kinds of small things to do would be very useful.
Perhaps the small things could go directly into the wiki post, rather than creating a thread for each one.
Or maybe there should be a dedicated thread for small things, with the first top post being a wiki coordinating those.

For example, @sgugger has just created learn.purge, which resets your GPU memory to almost 100% free (amazing!), so someone could go over the course notebooks, replace gc.collect() calls with learn.purge(), and of course test that they still work. But it may be necessary to wait until the next release - this was just an example - i.e. watching CHANGES and adjusting docs/notebooks to sync with the latest improvements.


(Stas Bekman) #192

To me, if the spec is clear and well defined this is a relatively trivial project (the test_this() side of it; the docs side I don’t know - Andrew knows that side), so by now I trust you have all the input you need, and of course feel free to ask questions. Watching someone make small commits as they figure things out is not much fun. That’s why I suggested that I’d be happy to look at the code when it’s more or less done, via a PR or just the name of the branch you are working in.

Trust me, you wouldn’t want to watch me code in real time, trying all kinds of things, most of them failing, so it’s nothing personal.

PRs are for things that are ready, potentially with some fixes. They are not a development tool.

The dev way is to create a branch that several committers have access to and work on together. No need for GitHub’s pretty, but often not so useful, interface.


Doc_test project
(Sanyam Bhutani) #193

Thanks! @Benudek
Cross-posting Stas’s answer


(benedikt herudek) #194

ok @init_27, then we should incorporate these guidelines into the ‘How to contribute’ post, I think! @PierreO


(Sanyam Bhutani) #195

[Suggestion]
We could have some low-hanging fruit. For example: a few “open issues” where new contributors can get started (beginner-friendly issues).
I found it nerve-wracking to even think of submitting a PR at first, but these simple open issues might help many contributors find the momentum to get started.

Source: the OpenMined community, which follows a similar approach to get new contributors started.


(benedikt herudek) #196

good, some (not all!) of the test scripts are low-hanging fruit! They are a good way to get into the code base


(Pierre Ouannes) #197

Thanks Stas, glad you like it! Hopefully the post will serve its purpose and more people will start to contribute :slight_smile:

Would the FAQ seem like a good place to put these?


(Sanyam Bhutani) #198

Another thing that might be added is a few sections where we could use more contributions to the docs.
(Adding more examples/explanations, etc.)


(benedikt herudek) #199

do it :wink: it is a wiki where anyone can add things - then inform @PierreO :wink:


(Stas Bekman) #200

FYI, I split the doc_test project into its own thread as it completely dominated this topic, with 90 posts and growing. And we have other testing-related things to discuss here.


(Stas Bekman) #201

Here are some notes that we might need later.

If we want global learn_vision, learn_text, etc. objects, we can pre-create them from conftest.py:


import pytest
from fastai.vision import *

@pytest.fixture(scope="session", autouse=True)
def learn_vision():
    path = untar_data(URLs.MNIST_TINY)
    data = ImageDataBunch.from_folder(path, ds_tfms=(rand_pad(2, 28), []), num_workers=2)
    data.normalize()
    learn = Learner(data, simple_cnn((3,16,16,16,2), bn=True), metrics=[accuracy, error_rate])
    learn.fit_one_cycle(3)
    return learn

Now, inside let’s say tests/test_vision_train.py, I can access this via the fixture:

def test_accuracy(learn_vision):
    assert accuracy(*learn_vision.get_preds()) > 0.9

If we use:

@pytest.fixture(scope="session", autouse=True)

all global objects will be pre-created whether the tests need them or not, so we probably don’t want autouse=True. Without it, they will be created on demand.
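The on-demand behavior can be illustrated without pytest. The following is a minimal sketch (plain Python; the names get_fixture and build_learn_vision are hypothetical) showing the lazy pattern: each object is built only the first time a test asks for it, then reused for the rest of the session.

```python
# Session-level cache standing in for pytest's session-scoped fixtures.
_session_cache = {}

def get_fixture(name, builder):
    """Build the object on first request, then reuse it for the session."""
    if name not in _session_cache:
        _session_cache[name] = builder()
    return _session_cache[name]

build_count = {"vision": 0}

def build_learn_vision():
    # Stands in for an expensive Learner setup (download, fit, etc.)
    build_count["vision"] += 1
    return object()

# Two tests requesting the same fixture share one instance,
# and the expensive builder runs only once:
a = get_fixture("learn_vision", build_learn_vision)
b = get_fixture("learn_vision", build_learn_vision)
assert a is b and build_count["vision"] == 1
```

With autouse=True, by contrast, the builder would run at session start even if no test ever requested the object.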


There is a cosmetic issue with having learn_vision, learn_text, since now we either have to spell out:

def test_accuracy(learn_vision):
    assert accuracy(*learn_vision.get_preds()) > 0.9

or rename:

def test_accuracy(learn_vision):
    learn = learn_vision
    assert accuracy(*learn.get_preds()) > 0.9

neither of which is very great…

We want to be able to copy-n-paste quickly, and ideally it should always be learn.foo, especially since there are usually many calls.


Unrelated: with this, and even with the test-module-scoped variables (e.g. learn) that we currently use, there is an issue that tests can and will modify the object, so the next test doesn’t get the same object but a modified one. This can be very bad for tests, as it can mask serious issues.

Instead we need to have some kind of clone() method which will give each test a clean copy of the object.
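The clone() idea might look something like the following sketch (FakeLearner is a hypothetical stand-in; a real Learner holds GPU tensors and data loaders, so a plain deepcopy may not be sufficient in practice): the shared object stays pristine, and each test receives its own deep copy to mutate freely.

```python
import copy

class FakeLearner:
    """Hypothetical stand-in for a pre-created session Learner."""
    def __init__(self):
        self.metrics = ["accuracy"]

    def clone(self):
        # Give the caller an independent copy of the whole object graph.
        return copy.deepcopy(self)

shared = FakeLearner()

test_copy = shared.clone()
test_copy.metrics.append("error_rate")  # a test mutates its copy...

assert shared.metrics == ["accuracy"]   # ...the shared original is untouched
```

Each test would call clone() on the session object instead of using it directly, so test N+1 never sees the modifications test N made.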


(Stas Bekman) #202

Writing tests that exercise APIs with a large impact on GPU RAM (e.g., purge, load, destroy) has proved very problematic, since the memory utilization patterns seem to vary greatly depending on:

  1. what was run a second before the test, so changing the order of test execution changes the numbers
  2. the GPU card model

So at the moment I have to put all GPU tests in a single test, which could make troubleshooting somewhat difficult.

But the biggest problem is that the test assert numbers are currently tied to the GPU card model I use, and can’t be shared and tested by others unless they have the same model. Sure, we could prepare a few sets of these numbers based on common card models, but it’d be very difficult to maintain those sets as the tests and the code base evolve.

I already use a small tolerance in the checks (i.e. not checking exact numbers), but making it large would defeat the purpose of these tests - since we want to make sure that these special methods either actually reclaim memory or don’t leak it. E.g. until recently the save/load sequence actually wasted more memory than if you hadn’t called it at all, but that’s fixed now. So we absolutely need tests that check for possible regressions and leaks.
And it would have been easier to allow a large tolerance of, say, 100MB if the tests were to load huge models and data, but that’s not possible in the testing environment since it’d slow things down a lot.
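The tolerance trade-off can be sketched as follows (a hypothetical simplification of the check_mem_expected idea; the signature and default tolerance here are assumptions, not the actual fastai helper). The check passes when real usage is within an absolute band of the expectation, so a leak larger than the band still fails:

```python
def check_mem_expected(used_expected, used_real, abs_tol=10):
    """Assert real GPU memory usage (MB) is within abs_tol MB of expected."""
    assert abs(used_real - used_expected) <= abs_tol, (
        f"expected ~{used_expected}MB, got {used_real}MB "
        f"(tolerance {abs_tol}MB)")

check_mem_expected(50, 55, abs_tol=10)       # passes: 5MB off, within band
try:
    check_mem_expected(50, 200, abs_tol=10)  # a 150MB discrepancy...
except AssertionError:
    pass                                     # ...would flag a leak/regression
```

The tension described above: widening abs_tol to absorb per-card variance also widens the band in which a genuine leak goes undetected, unless the workload is scaled up so that leaks dwarf the tolerance.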

If you would like to see what I have been doing so far, skip to GPUMemTrace calls in https://github.com/fastai/fastai/blob/master/tests/test_basic_train.py

You can unskip the first part of test_memory to run these (unless you have the same card as me, in which case it’ll just run).

You can uncomment:
#check_mem_expected = report_mem_real
earlier in the test module to print the memory usage instead of asserting on it.

If you have some good ideas on how to handle this problem, I’m all ears.

Until then I will be testing it only on my machine.

p.s. I suppose the solution might be to use large models and data sets, so that the memory usage numbers become much larger and it’d be possible to greatly increase the absolute tolerance without defeating the purpose of these tests, thus supporting a wider range of (all?) card models. And make those into slow tests that run only occasionally, and certainly before a new release.