
Don't attempt concrete eval if there's no information to be gained #46774

Merged 1 commit into master from kf/skipconcrete on Oct 23, 2022

Conversation

@Keno (Member) commented Sep 15, 2022

If the effects are already maximized and the rt is already Const, there is nothing concrete evaluation could possibly tell us that we don't already know - just bail early in that case to save some inference time.
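
As a hedged illustration (not taken from the PR; the function names are invented): a call like `f()` below is already inferred as `Const(3)` with fully maximized effects by ordinary inference, so concrete evaluation could not add anything and can be skipped, whereas `g` can still benefit because its return type is not yet a `Const`.

# Hypothetical example: `f()` folds to `Const(3)` with maximal effects during
# regular inference, so concrete evaluation is redundant for it.
f() = 1 + 2

# `g` also has maximal effects, but its return type is only `Int` until the
# argument is known to be a constant, so concrete evaluation can still refine it.
g(x) = x + 1

# Inspect the inferred return types (hypothetical usage):
code_typed(f, ())
code_typed(g, (Int,))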

@Keno requested a review from aviatesk on September 17, 2022 at 22:02
Comment on lines 890 to 894
if is_removable_if_unused(result.effects)
    if isa(result.rt, Const) || call_result_unused(sv)
        # There is no more information to be gained here. Bail out early.
        return nothing
    end
end
aviatesk (Member) commented:

Maybe we should implement this logic within concrete_eval_eligible? There might be a chance for const-prop' to shape up the optimized IR body even if it can't refine the type/effects information.

Keno (Member, Author) replied:

Ideally the IR will not be inlined at all; we should see that it's dead and remove it.
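
A hedged, minimal illustration of the unused-result case (function and variable names invented, not from the PR): the `x + y` call below has maximal effects and its result is unused, so rather than being inlined or const-propagated, it can simply be deleted as dead code.

# Hypothetical example: the first statement's result is unused and the call is
# effect-free, so dead-code elimination removes it instead of inlining it.
function unused_result(x::Int, y::Int)
    x + y            # result unused; removable because the call has no effects
    return x
end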

aviatesk (Member) replied:

If the IR body is inlineable but not simple enough for it to be folded into ConstAPI, it may hit this branch

isinvoke && rewrite_invoke_exprargs!(stmt)
push!(todo, idx=>(case::InliningTodo))

and the callee may end up being expanded (inlined) into the caller.

Having said that, we already intentionally miss this sort of optimization opportunity elsewhere, e.g.:

if isa(rt, Const)
    if !is_nothrow(result.effects)
        # Could still be improved to Bottom (or at least could see the effects improved)
        return true
    end
end

Keno (Member, Author) replied:

I've thought about this some more, but I'm not sure it's really worth the cost to do the extra constprop or semiconcrete eval. Yes, it's possible that would shape up the IR further, but we already have very strong effects on this. If a case comes up where this becomes an issue, I'd rather just see if there's something we can do to strengthen the optimizer.

codecov bot commented Sep 18, 2022

Codecov Report

Merging #46774 (f424431) into master (f424431) will not change coverage.
The diff coverage is n/a.

❗ Current head f424431 differs from pull request most recent head 728aa92. Consider uploading reports for the commit 728aa92 to get more accurate results

@@           Coverage Diff           @@
##           master   #46774   +/-   ##
=======================================
  Coverage   93.50%   93.50%           
=======================================
  Files         386      386           
  Lines       86523    86523           
=======================================
  Hits        80900    80900           
  Misses       5623     5623           


@@ -887,6 +887,12 @@ function abstract_call_method_with_const_args(interp::AbstractInterpreter,
if !const_prop_enabled(interp, sv, match)
    return nothing
end
if is_removable_if_unused(result.effects)
aviatesk (Member) commented:

Can we also bail out if the return type is Bottom?

Keno (Member, Author) replied:

I think if the rt is known to be Union{}, we need to skip concrete eval and let the regular heuristics take care of it; those heuristics disable const-prop for this case unless it is explicitly forced (which could make sense to simplify the inlined IR).
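
A hedged sketch of the kind of call being discussed (the function name is invented, not from the PR): its return type is inferred as `Union{}` because it always throws, so concretely evaluating it would only re-run the error path, and whether to const-prop it anyway is left to the regular heuristics.

# Hypothetical example: the inferred return type is Union{} (always throws),
# so concrete evaluation is skipped and the usual const-prop heuristics apply.
always_throws(x) = throw(ArgumentError("unsupported value: $x"))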

Keno (Member, Author) added:

That's mostly unrelated to this PR though, so we should probably merge this first and then I can put up a separate PR for that change.

@Keno (Member, Author) commented Oct 22, 2022

Rebased. I have some more tweaks I want to make to this logic, but let's get this in, since it should be a good improvement.

Keno added a commit that referenced this pull request Oct 22, 2022
…o large

When we originally implemented concrete-eval, we decided not to create `Const`
lattice elements for constants that are too large, for fear that these
would end up in the IR and blow up its size. Now that we have some
experience with this, I think that decision was incorrect, for several reasons:

1. We've already performed the concrete evaluation (and allocated the big
   object), so we're just throwing away precision here that we could
   have otherwise used (Although if the call result is unused, we probably
   shouldn't do concrete eval at all - see #46774).
2. There are a number of other places in the compiler where we read large
   values into `Const`. Unless we add these kinds of checks there too,
   we need appropriate guards in the optimizer and the cache anyway
   to prevent IR size blowup.
3. It turns out that throwing away this precision actually causes
   significant performance problems for code that is just over the line.
   Consider for example a lookup of a small struct inside a large,
   multi-level constant structure. The final result might be quite
   tiny, but if we refuse to propagate the intermediate levels,
   we might end up doing an (expensive) full-constprop when propagating
   the constant information could have given us a much cheaper
   concrete evaluation (see the sketch after this commit message).

This commit simply removes that check. If we see any regressions
as a result of this, we should see if there are additional guards
needed in the optimizer.
If the effects are already maximized and the rt is already Const,
there is nothing concrete evaluation could possibly tell us
that we don't already know - just bail early in that case to
save some inference time.
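
A hedged sketch illustrating point 3 of the commit message above; the types, names, and sizes are invented for illustration and do not come from the PR. The intermediate `Big` value is large, but the final lookup result is a small `Int`; if large intermediate values may not become `Const` lattice elements, inference has to fall back to comparatively expensive const-prop where cheap concrete evaluation of each lookup would have sufficed.

# Hypothetical illustration of a small value nested inside a large constant.
struct Leaf
    x::Int
end

struct Big
    leaf::Leaf
    payload::NTuple{512,Int}   # padding that makes the containing object large
end

# A multi-level constant structure; the intermediate `Big` is large even though
# the value we ultimately want is tiny.
const TABLE = (a = Big(Leaf(1), ntuple(identity, 512)),)

lookup() = TABLE.a.leaf.x   # tiny final result behind large intermediates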
Keno added a commit that referenced this pull request Oct 22, 2022
…o large

@aviatesk merged commit 54e6899 into master on Oct 23, 2022
@aviatesk deleted the kf/skipconcrete branch on October 23, 2022 at 04:10
aviatesk pushed a commit that referenced this pull request Oct 23, 2022
…o large

aviatesk pushed a commit that referenced this pull request Oct 24, 2022
…o large (#47283)

@vtjnash (Member) left a comment:

Good find!
