Description
- Version: v8.0.0-pre
- Platform: all
- Subsystem: benchmark, timers
The `depth` benchmark for timers sets a timer that sets a timer that sets a timer that... 500K of them.
Since each timer has to wait for the next tick of the event loop:
- This benchmark takes a very long time to run compared to the `breadth` test that is already in the file.
- This may be more of an event loop benchmark than a timer benchmark.
I wonder if it makes sense to do any of the following or something similar:
- Reduce the number of iterations for the depth test, since it really just runs the iterations in sequence, not in parallel. Even on an infinitely fast machine, it would take over 8 minutes to run because each tick of the event loop has to wait 1ms before firing the timer.
- Move and/or rename the depth benchmark, as it is unlikely to be significantly impacted by changes in the Node.js timers code.
I know I can send command-line arguments to skip the depth test or change the value of `N`. I just suspect that the default behavior right now isn't ideal. Every time I touch timers code and run a benchmark, this is an annoyance.
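For context, the depth-vs-breadth distinction described above can be sketched as follows. This is an assumed shape, not the actual benchmark source; `depth` and `breadth` are hypothetical helpers written for illustration:

```javascript
// Hypothetical sketch of the two benchmark shapes (not the real file).
// "Depth" chains timers: each callback schedules the next one, so every
// iteration waits for a later event-loop tick. With a >=1ms timer delay,
// 500000 iterations take over 8 minutes of wall-clock time even on an
// infinitely fast machine.
function depth(n, done) {
  let i = 0;
  function next() {
    if (++i >= n) return done(i);
    setTimeout(next, 1); // serial: the next timer is only scheduled here
  }
  setTimeout(next, 1);
}

// "Breadth" schedules all timers up front, so they can be pooled and fire
// across far fewer event-loop ticks.
function breadth(n, done) {
  let fired = 0;
  for (let j = 0; j < n; j++) {
    setTimeout(() => {
      if (++fired === n) done(fired);
    }, 1);
  }
}
```

The sequential scheduling in `depth` is why its run time scales with the timer delay rather than with timer-code performance, which is the crux of the complaint above.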
Activity
Trott commented on Nov 6, 2016
@mscdex @misterdjules @Fishrock123 @AndreasMadsen
Oh, I'll also note that it was moved from `misc` to `timers` earlier this year without apparent significant disruption, so presumably another move or split would not be disruptive either.

Fishrock123 commented on Nov 6, 2016
We could probably remove it; a test assertion (mentioning the benchmarks) that timers are indeed pooled in this case should be good enough, combined with other benchmarks.
Fishrock123 commented on Nov 6, 2016
Hmmm, at a second look I definitely think there are not enough benchmarks to replace it yet.
AndreasMadsen commented on Nov 6, 2016
There are definitely benchmarks that use many more iterations than required. I guess this is one of them.
I don't like moving the benchmark to a different category; I think timer benchmarks should be in the `timers` category. @Fishrock123's suggestion sounds reasonable!

If you are interested in reducing the number of iterations, a rough estimate of the appropriate number could be found by tuning the coefficient of variation `std(x)/mean(x)` (use the unbiased estimate). See #8139 (comment) for the practical meaning of this.

mscdex commented on Nov 6, 2016
I think in general there are lots of cases like this in many of the different benchmarks where some configurations take longer than others but the same iteration count is used for all of them. I don't know of a good way to solve this, since implicitly altering the iteration count for certain configurations could be seen as unexpected (even if the new iteration count is reported in the output).
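AndreasMadsen's coefficient-of-variation heuristic above could be sketched like this. This is an illustrative helper, not code from the thread or the benchmark suite, and the stopping threshold is an assumed example value:

```javascript
// Sketch: estimate whether an iteration count is "enough" by checking the
// coefficient of variation std(x)/mean(x) of per-batch timings, using the
// unbiased (n - 1) standard deviation estimate.
function coefficientOfVariation(samples) {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  const variance =
    samples.reduce((acc, x) => acc + (x - mean) ** 2, 0) / (n - 1);
  return Math.sqrt(variance) / mean;
}

// Hypothetical usage: stop adding iterations once the timings stabilize
// below some chosen threshold (0.05 here is an arbitrary example).
const timings = [10.2, 9.8, 10.1, 10.0, 9.9];
if (coefficientOfVariation(timings) < 0.05) {
  console.log('timings are stable enough; no more iterations needed');
}
```

The idea is that iteration counts are justified by measurement stability rather than picked by feel, which addresses the "more iterations than required" observation above.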
Trott commented on Nov 6, 2016
Maybe it makes sense to split this benchmark into two files but leave them both in the `timers` directory? That way they can have separate `N` values but both will still be categorized as timer benchmarks.

Trott commented on Nov 7, 2016
#9497
benchmark: split timers benchmark and refactor