Performance Regression: Increased memory usage & slower overall execution in latest version #2932
This issue might also be related to a discussion I started earlier: Discussion #2896. In that discussion, I explored some performance concerns that could be connected to the memory usage and execution time regressions observed here. It might be useful to consider both cases together when investigating potential optimizations.
Thanks for taking the time to look at this @kryaksy. You are not testing the latest Perspective: this link has not existed since 2.0.1, which is the version you are actually testing as "latest". Follow the examples in the repo to load 3.3.4, and take a look at the user guide on CDN loading. I've opened a PR to your repo that fixes this.

I had been planning on writing a longer blog post about our benchmarking approach for the project, but since you asked ... (TL;DR: recent benchmarks on Prospective.co here.)

The first thing I always qualify about benchmarking is that "performance is not just a number". What we want out of benchmarking is a lot of automated, granular data on Perspective's performance that we can use to guide our decision making. Proper benchmarking is hard, and generally requires dedicated engineering around the isolation, measurement and analysis steps. It's really easy to make honest mistakes that wildly mis-measure, mis-interpret or mis-apply data.

For Perspective, a project focused on performance as a core driver of user experience, we decided to focus on measuring and improving our own performance version-to-version, and on making sure our process lets us address performance issues permanently (avoiding micro-benchmark whack-a-mole). To that end, we treat performance engineering the same as any other feature, focusing on regression testing and visibility to make sure we can measure, track and tactically apply fixes. Visibility helps us hold our code accountable for its performance impact, and emphasizes deterministic improvements that stay fixed across versions. Perspective has an entire package dedicated to cross-language, highly granular and configurable benchmarking in the

For memory, we have a suite of leak tests for both the engine and the UI, in which we measure heap allocation via both the interpreter's and Perspective's own instrumentation over a few thousand iterations of a variety of features, to validate that when features are unregistered or deleted they do not lose track of heap memory. These tests can be configured locally to run with Wasm memory growth disabled and a limited static heap, which we use to diagnose discrete allocation anomalies.

In addition to this data collection, which we use to drive our own development priorities, when we tactically address performance issues (e.g. just recently #2885) I always make a point to document this as part of the PR, and this is an explicit requirement for PRs in the

There are holes in our approach. While Perspective's semantic versioning has been pretty stable over the years, we have made semantic changes to APIs that make proper apples-to-apples benchmarking difficult, for example

I see a few inspirations for improvements to our own repo in your benchmarks, namely
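For anyone hitting the same pitfall, here is a minimal sketch of loading a pinned 3.x build from a CDN. The exact jsdelivr URL pattern and the awaited `perspective.worker()` call reflect my reading of the user guide, not code from this thread, so verify them against the docs for your target version:

```html
<script type="module">
    // Pin an explicit version rather than an unversioned "latest" URL,
    // so the benchmark exercises the release you think it does.
    import perspective from "https://cdn.jsdelivr.net/npm/@finos/perspective@3.3.4/dist/cdn/perspective.js";

    const worker = await perspective.worker();
    const table = await worker.table({x: [1, 2, 3]});
    const view = await table.view();
    console.log(await view.to_json());
</script>
```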
Thanks for the detailed and insightful explanation and for the PR to my repo; I appreciate it. I now see that my benchmarking approach wasn't as structured as it should have been, and I'll take some time to better understand the methodology you outlined. Still, I'm glad it provided some inspiration. The benchmarks on Prospective.co also offer useful insights I hadn't looked into before. I'll go through everything in more detail and see what I can take away from it.
Description
I've been comparing the latest version of Perspective with 2.10.1, and I've noticed that while some operations have gotten faster, others have slowed down significantly, leading to an overall increase in execution time. On top of that, memory usage seems to have increased in the latest release.
To better understand the differences, I set up a simple benchmark where both versions run the same workload. The only difference between them is the Perspective version. The results show that Worker Create, Table Update, and View ToJson are slower in the latest version, while Table Create and View Create have improved. Despite these improvements, the total execution time is still worse than in v2.10.1, and memory consumption is noticeably higher.
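To make the workload concrete, the phases I time look roughly like the sketch below. This is a simplified sketch rather than the exact benchmark code (the real code is in the repo linked under "Steps to Reproduce"); the data shape and the awaited 3.x-style API calls are assumptions:

```js
import perspective from "@finos/perspective";

// Await an async phase and report its wall-clock duration.
async function time(label, fn) {
    const start = performance.now();
    const result = await fn();
    console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
    return result;
}

const rows = Array.from({length: 100_000}, (_, i) => ({id: i, value: Math.random()}));

const worker = await time("Worker Create", () => perspective.worker());
const table = await time("Table Create", () => worker.table(rows));
await time("Table Update", () => table.update(rows));
const view = await time("View Create", () => table.view());
await time("View ToJson", () => view.to_json());
```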
Observations
How the Measurements Were Taken
I collected these measurements using browser-based performance tracking, monitoring execution times and memory usage from the client side. While this approach isn't as controlled as a low-level profiling tool, I believe it's the most relevant for real-world user experience, since it reflects how Perspective actually performs in practice.
I assume Perspective might have its own internal benchmarking tools, but I’m not familiar with them and I’m not sure if they would be applicable here. If there’s a better way to measure performance for this kind of scenario, I’d love to learn more!
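For the memory numbers specifically, I sample the JS heap from the page between phases, along the lines of the sketch below. Note that `performance.memory` is a non-standard, Chromium-only API and covers only the JS heap, so it can under-report WebAssembly memory growth; treat the numbers as indicative rather than exact:

```js
// Sample the current JS heap size in MiB (Chromium-only; NaN elsewhere).
function sampleHeapMiB() {
    const mem = performance.memory;
    return mem ? mem.usedJSHeapSize / (1024 * 1024) : NaN;
}

const before = sampleHeapMiB();
// ... run one benchmark phase here ...
console.log(`heap delta: ${(sampleHeapMiB() - before).toFixed(1)} MiB`);
```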
Steps to Reproduce
Repo for Reproduction: perspective-benchmark
Expected vs. Actual Results
Environment
Additional Context
I’d love to hear thoughts on whether this behavior is expected due to recent changes, or if there are optimizations planned. If there’s anything else I can do to improve these measurements, let me know!