In-process provider is 40 to 60% slower than ganache-cli launched at command line #481
Comments
Interesting observation! One difference between the two setups is that the in-process provider shares a process with the tests themselves. It could be that our assumptions about the performance of in-process ganache RPC vs. RPC over HTTP (running ganache-cli) are wrong. It could also be that the tests themselves are doing a lot of work in the process, which causes the "in-process ganache" to wait, whereas with ganache-cli that work happens in a separate process.

I'll definitely want to figure out what is going on here (as well as investigate why 6.7.0 has caused such a slowdown)! That said, have you run these benchmarks multiple times to make sure it wasn't a fluke of CircleCI? I've seen our Travis and AppVeyor vary test times by several minutes on successive runs without even changing any code.
@davidmurdoch Thanks!
I am (pretty) sure that the provider is consistently slower than the server - I have rerun that comparison many times. The version difference I am less sure about - that surprised me a bit. There's definitely variance run to run.
Yes, that makes a lot of sense. Do you know if this is something worker threads might help with? Have not looked into that stuff at all...
Worker threads are not great for I/O-bound tasks, according to the docs, and ganache currently is I/O bound. I know it doesn't make much sense for it to be this way, and it's something I've been wanting to optimize. ...which gives me an idea. Try creating the provider with an in-memory database backend, memdown, so state never touches disk.
This may make the in-process provider faster. Back to the idea of using worker threads: given that ganache is currently I/O bound, they're unlikely to help until the workload becomes CPU bound.
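If I'm reading the suggestion right (and given the follow-up below confirming memdown helped), a sketch of wiring this up in a `truffle-config.js` might look like the following. The names and options here are my assumptions, based on ganache-core 6.x, whose documented `db` option accepts a leveldown-compatible store:

```javascript
// truffle-config.js (sketch, not the thread's exact code): an in-process
// ganache provider backed by memdown, so chain state stays in memory
// instead of being written to disk on every block.
const ganache = require("ganache-core");
const memdown = require("memdown");

module.exports = {
  networks: {
    development: {
      network_id: "*",
      provider: () =>
        ganache.provider({
          db: memdown() // in-memory leveldown store; state is lost on exit
        })
    }
  }
};
```

The trade-off is obvious but worth stating: an in-memory db means nothing persists between runs, which is usually exactly what a test suite wants.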
@davidmurdoch Thanks so much - memdown does help a bit. There's still a gap but it's smaller. Also tried using a mocha reporter called 'min' that does almost no terminal writes, and that seems a bit faster too. There might be several things adding up - I'm going to close because I suspect there isn't a silver bullet in the offing here. Thanks!
Cross-linking to ganache-cli #677 - might be one piece of the differences seen here.
@davidmurdoch Just wondering, how much interest would there be in having a flag in ganache-cli to use an in-memory db?

I was playing around with running ganache in GitHub Actions recently, and they have a limit on file handles, making tests inconsistent (see aragon/aragon-court#219). Moving ganache to an in-memory db not only sounds faster for tests, but also solves this particular issue with GitHub Actions :).
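For anyone hitting the same wall: the runner's file-descriptor limits can be checked in a CI step before launching ganache-cli. This is a generic shell sketch, not specific to any one CI system:

```shell
# Print the soft and hard limits on open file descriptors for this shell.
# ganache's on-disk db can run into a low soft limit under heavy test suites.
ulimit -Sn
ulimit -Hn
```

Raising the soft limit (`ulimit -n <value>`) is only possible up to the hard limit, which is one reason an in-memory db is the more robust fix on constrained runners.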
If you'd like to put in the work to get this feature done I'd merge it in :-D |
In the past I've heard ganache engineers suggest that running as an in-process provider should be faster than running the client separately as a server. Intuitively this makes sense, since there's no IPC overhead, etc.
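That intuition can be illustrated with a toy micro-benchmark (my own sketch, not ganache code): an HTTP provider must JSON-encode every request and decode every response, while an in-process provider can pass objects straight through.

```javascript
// Toy micro-benchmark: compare calling a trivial JSON-RPC-style handler
// directly with calling it through a simulated serialization round-trip,
// i.e. the extra per-call work an HTTP transport must do.
function handle(req) {
  return { jsonrpc: "2.0", id: req.id, result: "0x1" };
}

const N = 100000;
const request = { jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] };

// In-process: hand the object straight to the handler.
let t0 = process.hrtime.bigint();
for (let i = 0; i < N; i++) handle(request);
const direct = Number(process.hrtime.bigint() - t0);

// Simulated transport: encode the request, decode it server-side, then
// encode and decode the response, as an HTTP client/server pair would.
t0 = process.hrtime.bigint();
for (let i = 0; i < N; i++) {
  const wire = JSON.stringify(request);      // client encodes
  const response = handle(JSON.parse(wire)); // server decodes and handles
  JSON.parse(JSON.stringify(response));      // client decodes the reply
}
const serialized = Number(process.hrtime.bigint() - t0);

console.log(`direct: ${direct}ns, with serialization: ${serialized}ns`);
```

This isolates only serialization; a real HTTP server adds socket and scheduling overhead on top, which is what makes the numbers below so surprising.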
However, in practice I'm seeing the opposite. Running zeppelin-solidity (~2200 Truffle unit tests) with the two options consistently has ganache.provider running 40-60% slower than ganache-cli as a separate process. Examples can be seen in the CircleCI jobs below.
Using 6.4.1 (provider ~40% slower than server)
Using 6.7.0 (provider ~60% slower than server)
Any ideas why this might be?
Is there any way to address the difference?
Context
I'm working on a coverage tool which inspects opcodes with an in-process provider, and I'm seeing worse performance than expected. In some cases, computationally intensive tests take more than twice as long to run with coverage than without. TL;DR: I'm trying to isolate the bottleneck.
NB: this issue is purely about perf disparities between the server and the in-process provider. In the CircleCI benchmarking jobs, coverage is run as a separate item.
(cf. solidity-coverage #372)
Your Environment