concrete-eval: Use concrete eval information even if the result is too large
When we originally implemented concrete-eval, we decided not to create `Const`
lattice elements for constants that are too large, for fear that these
would end up in the IR and blow up its size. Now that we have some
experience with this, I think that decision was incorrect, for several reasons:
1. We've already performed the concrete evaluation (and allocated the big
object), so we're just throwing away precision that we could
otherwise have used (although if the call result is unused, we probably
shouldn't do concrete eval at all - see #46774).
2. There are a number of other places in the compiler where we read large
values into `Const`. Unless we add these kinds of checks there too,
we need appropriate guards in the optimizer and the cache
anyway to prevent the IR size blowup.
3. It turns out that throwing away this precision actually causes
significant performance problems for code that falls just over the
size limit. Consider for example a lookup of a small struct inside
a large, multi-level constant structure. The final result might be
quite tiny, but if we refuse to propagate the intermediate levels,
we might end up doing an (expensive) full constprop where propagating
the constant information could have given us a much cheaper
concrete evaluation.
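To illustrate the scenario in point 3, here is a hypothetical sketch (the
names `BIG` and `lookup` are made up for this example, not compiler code):
the intermediate values are large, but the final result is tiny.

```julia
# A large, multi-level constant structure: the `payload` field makes the
# top-level value big, while the value we actually want is a small Int.
const BIG = (inner = (x = 1, y = 2), payload = ntuple(i -> i, 1000))

# If inference is allowed to form a `Const` for the large intermediate
# value `BIG`, the whole chain can be concretely evaluated to `Const(1)`.
# Refusing to form that intermediate `Const` throws the information away
# and forces a more expensive constant propagation instead.
lookup() = BIG.inner.x
```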
This commit simply removes that check. If we see any regressions
as a result, we should investigate whether additional guards are
needed in the optimizer.