@stas I can't say I've had the time to go into all the details of the perf & load tests you are doing (and even if I had, it would take me a moment to understand them), but splitting the normal, fast-running, always-on tests from such larger tests, maybe with larger models, is imho totally the way to go. So one would have two sets of tests, perhaps gated by a flag or the like. One could also use the fake class to provide either small or large models based on such a flag, and then of course the perf & load test scripts would simply run only before releases. A minimal sketch of the flag idea follows below.
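To make the flag idea concrete, here is a minimal sketch assuming pytest (the option name `--perf`, the marker name `perf`, and the `model_size` fixture are all hypothetical, not existing fastai code):

```python
# conftest.py -- a minimal sketch of the "flag" idea, assuming pytest.
import pytest

def pytest_addoption(parser):
    # The option name --perf is hypothetical.
    parser.addoption("--perf", action="store_true", default=False,
                     help="also run the slow perf & load tests")

def pytest_configure(config):
    # Register the marker so pytest does not warn about it.
    config.addinivalue_line("markers", "perf: slow perf & load test")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--perf"):
        return  # flag given: run everything, e.g. before a release
    skip_perf = pytest.mark.skip(reason="perf & load tests need --perf")
    for item in items:
        if "perf" in item.keywords:
            item.add_marker(skip_perf)

@pytest.fixture
def model_size(request):
    # The "fake class" idea: a small model for the everyday fast suite,
    # a large one when the perf & load tests are switched on.
    return "large" if request.config.getoption("--perf") else "small"
```

A test decorated with `@pytest.mark.perf` would then be skipped in everyday runs and only executed via `pytest --perf` before a release.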
Having said that, such tests really add a lot of value (compared to the sometimes trivial assert tests). Great that you are helping out. If I had more time I would love to help after the current doc test project, but for now I can only offer (hopefully) these 'smart comments' here.
So my 2 cents summarised:
- maybe one just has to agree with Jeremy and Sylvain on one reference environment. That is, you say: we run and guarantee the load & perf tests on a typical, standard cloud like Google or Azure. Other setups might deviate, but we tested on a standard setup, and people can adapt these tests to their specific settings (this could eventually be an addition to the docs)
- I do wonder if we might want to separate these complex tests with a switch (so they run only before releases) and maybe even separate out the code base, or use naming conventions like test_perfload_[APIXYZ]. For example, I could imagine that in the doc_test project it would be great if one could identify the perf & load tests by name, so that @ashaw could later dump them into a separate "Perf & load tests" section (see the sketch after this list).
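And here is a hypothetical example of what the naming convention could look like (the file and function names are made up for illustration); with such a convention the perf & load tests can be selected or excluded purely by name:

```python
# test_perfload_dataloader.py -- a sketch of the naming-convention idea.
# With this convention the perf & load tests can be picked by name:
#   pytest -k "perfload"        # only the perf & load tests
#   pytest -k "not perfload"    # the everyday fast suite
import time
import pytest

@pytest.mark.perf
def test_perfload_dataloader_throughput(model_size):
    # `model_size` is the hypothetical fixture from the conftest.py sketch above.
    start = time.perf_counter()
    ...  # e.g. iterate one epoch of a DataLoader sized for `model_size`
    elapsed = time.perf_counter() - start
    assert elapsed < 60  # generous budget; tune on the reference environment
```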
This is of course completely fantastic and complex work you are doing here!