Add updates test for react-jss #785
Conversation
Thanks, I didn't know about this syntax. I noticed something strange when running this locally.
I am not sure why the total time is higher, but updates are optimized in JSS; we even allow them to be used for animations. It is actually faster than inline styles, because we are able to exclude the reconciler by using SCU false. In this benchmark we have no access to the component which renders all the dots, so we can't apply SCU false. One theory is that your total time is measured including initial renders, but the update time is pure, and there JSS is actually faster.
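A minimal sketch of the idea being described (not the actual react-jss source; `StaticSubtree`, `updateDotStyle`, and the rule index are illustrative assumptions):

```js
import React from 'react';

// Block React's reconciler for the styled subtree and mutate the
// injected CSS rule directly on each update.
class StaticSubtree extends React.Component {
  // Returning false means React never re-renders the dots themselves.
  shouldComponentUpdate() {
    return false;
  }
  render() {
    return this.props.children;
  }
}

// Updating the shared rule through the CSSOM involves no React work at
// all; only style recalc / paint is left for the browser to do.
function updateDotStyle(sheet, color) {
  // `sheet` is a CSSStyleSheet; assumes the dot rule is its first rule.
  sheet.cssRules[0].style.setProperty('background-color', color);
}
```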
So there is no async magic involved, in case you are wondering whether we tricked the bench.
Actually, what you see with Aphrodite is a trick: it uses asap for scheduling rendering, and unless you mock it out, it probably renders outside of the bench.
Aphrodite uses asap to inject styles into the document, not to schedule rendering. |
Yes, I meant styles rendering === injecting styles.
That happens before the benchmark is run and won't impact the results. Something weird is going on with the react-jss update test, though: it's the only one that is visibly slow to update/complete but reports timings that suggest otherwise.
Are you saying rendering styles happens before the benchmark is run? I am confused…
Yes, the component under test is rendered in the preview panel before you start the test. I'm not testing how long it takes to inject styles one time into a style sheet, just how libraries perform when rendering styled trees.
So you don't measure the first mount, but after that each update is measured, right?
What I am saying is that when your component records the time in componentDidUpdate, Aphrodite has probably not flushed styles yet, because of asap. To measure correctly there, you would need to actually check the styles using getComputedStyle.
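A sketch of that kind of check (names are illustrative; `expectedColor` would have to be in computed form, e.g. `rgb(0, 0, 255)`):

```js
// Stop the timer only once the computed style actually reflects the
// change, so asap-deferred injection can't be missed.
function measureStyledUpdate(element, expectedColor, done) {
  const start = performance.now();
  (function check() {
    if (getComputedStyle(element).backgroundColor === expectedColor) {
      done(performance.now() - start);
    } else {
      // asap flushes in a microtask, so re-check on the next frame.
      requestAnimationFrame(check);
    }
  })();
}
```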
For example, https://github.com/tuchk4/css-in-js-app/blob/master/src/probe.css asks each lib to render a CSS probe and then waits for the animation start event, to be sure that the styles have actually been applied.
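Roughly, that probe approach works like the following sketch (not the linked repo's exact code; the rule and event wiring are assumptions):

```js
// Ask the library under test to render a rule such as
//   .probe { animation-name: probe-ready; animation-duration: 0.001s; }
// plus the matching @keyframes, then treat `animationstart` as proof
// that the styles have really been applied to the DOM.
function waitForProbe(probeElement) {
  return new Promise((resolve) => {
    probeElement.addEventListener('animationstart', resolve, { once: true });
  });
}
```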
The issue here is with the JSS results. Why are you speculating about Aphrodite? The fact that the preview renders correctly shows that the styles have (of course) been injected into the document before you even start the test.
Of course, but that's the first render; you want to make sure you test each style render properly, right? I brought up Aphrodite because that case came to mind while I was digging into how to measure properly when a library renders styles asynchronously. (Not speculating, I know it for a fact.) To get back to the original issue, I don't know why this is the case here; I need to dig into how you measure more deeply.
There are no changes to styles between mounts, and Aphrodite doesn't support dynamic styles. There's no need to speculate that Aphrodite is "tricking" the benchmarks. I'm going to close this for now because of the issue mentioned.
So your conclusion is that react-jss does something wrong and tricks the benchmark?
My conclusion is not to merge new code while there are associated issues. At the moment only the react-jss dynamic test seems to exhibit this peculiarity. I'm not speculating as to the reasons.
Alright, I will try to find out why. Since I know the react-jss code, though, I can't imagine anything it could do to trick the bench. I will dig into the benchmark's measuring logic and try to understand the difference.
Chatted with @paularmstrong a bit and he suggested a possible explanation: the benchmark cycles complete before all browser work is done. So in some cases, the browser may be busy dealing with reflows, repaints, recalcs, etc., but that work is only reflected in the total run time (not the individual cycles that are used to calculate mean/stdDev).
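For illustration (hypothetical numbers): if 100 update cycles each report ~2ms of scripting, the mean stays around 2ms, yet the wall-clock total could be ~400ms, because the recalc/repaint/GC work squeezed in between cycles only shows up in the total.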
Reflow, repaint, and recalc are async?
They're run and measured outside of the context of React's mount/unmount/update, which is what react-component-benchmark is written for. Just because a component has mounted doesn't mean that the browser has actually completed its paint cycle.
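A sketch of that lifecycle-based timing (hypothetical names, not the react-component-benchmark source) shows where the gap comes from: the sample is taken in componentDidUpdate, before the browser has done recalc/layout/paint for that update.

```js
import React from 'react';

class BenchmarkHarness extends React.Component {
  state = { tick: 0 };

  runCycle() {
    this.start = performance.now();
    this.setState(({ tick }) => ({ tick: tick + 1 }));
  }

  componentDidUpdate() {
    // Recorded right after React commits the DOM mutations; any style
    // recalculation, layout, or paint the browser does afterwards is
    // not included in this sample.
    this.props.onSample(performance.now() - this.start);
  }

  render() {
    return this.props.children(this.state.tick);
  }
}
```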
My understanding is that React calls componentDidMount and componentDidUpdate synchronously, right after inserting/mutating the DOM, in the same event loop tick. Then within the lifecycle hooks we inject CSS, also synchronously. Once we inject CSS, reflow/repaint/recalc also happens synchronously. Then the parent component which is used for measuring gets componentDidUpdate called, which is where we measure the time, afaik.
One thing that would be interesting is to see whether React 15 looks different on this benchmark. Maybe the behavior I know of has changed in 16.
Just read this; it suggests that all effects after DOM changes are applied in the next loop. That would mean my mistake was to think that reflow/repaint/recalc happen in sync; in that case, componentDidUpdate of the parent will happen before them.
How can it actually be the next loop? If you inject some CSS, you can immediately use getComputedStyle on an element and it will return the new value, so recalc definitely happens in sync. Not sure how to test reflow/repaint, though.
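A quick way to check that claim in the console (illustrative snippet):

```js
// Inject a rule and immediately read the computed value back: the read
// forces a synchronous style recalculation, so the new value is visible
// in the same task, without waiting for the next frame.
const style = document.createElement('style');
style.textContent = '.probe { color: rgb(255, 0, 0); }';
document.head.appendChild(style);

const el = document.createElement('div');
el.className = 'probe';
document.body.appendChild(el);

console.log(getComputedStyle(el).color); // "rgb(255, 0, 0)"
```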
Found this article: http://dbaron.github.io/browser-rendering/

> Authors might make changes where the work needed to handle one subsumes the work for the other:

```js
element.style.backgroundColor = "blue"; // It's time to redraw
getComputedStyle(element, "").color     // (Maybe Gecko-specific; might only flush style)
getComputedStyle(element, "").width
```

I think we should do something like this in order to make sure the entire thing is done within the benchmark.
You can see the trailing work in the performance profiles. Within the iteration's rAF callback, there's recalculation of style and layout happening after React has mounted the tree. The large GC events during the dynamic test (for libraries injecting a lot of styles) can occur within the mount phase or within the microtasks.
It's easy enough to force browser time to be included in the timings. I think I'd like to surface both scripting and recalc timings in the results; it's useful to understand where time is being spent.
I'm going to reopen this, as we've determined this test is fine. It just exposes something that is more obvious here but present for all the tests. It looks like browsers can take a few ms longer to recalculate styles when there are lots of class names on lots of elements. Libraries using "atomic css" consistently produce longer style-recalculation timings when layout is forced, even if they spend less time in scripting. So a lib might only need 1ms to calculate the style props for 1000 elements, but it might cause the browser to spend 10ms longer recalculating styles. I prototyped a patch to mitigate this in RNW and immediately saw style recalc times improve. So I think this is very useful information to know! While I update the benchmarks app to surface this info, you can add a couple of lines here to trigger a forced layout and record the time, e.g.:
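A minimal sketch of what those lines might look like (the `layoutTimes` accumulator and its placement are assumptions, not the exact snippet from the original comment):

```js
// Force the browser to do style recalc + layout now, inside the measured
// cycle, and record how long that synchronous work takes.
const layoutStart = performance.now();
// Reading a layout-dependent property flushes any pending recalc/layout.
void document.body.offsetHeight;
// `layoutTimes` is a hypothetical array collected across benchmark cycles.
layoutTimes.push(performance.now() - layoutStart);
```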
Then log the mean of those recorded layout times.
This is interesting in regard to atomic CSS.
Just checked the bench: now react-jss is quite slow compared to the others on updates. Interesting; now I can dig into it.
The S/L times are equivalent to "scripting" and "layout".