
[red-knot] Unpacking and for loop assignments to attributes #16004

Merged: 4 commits into main from david/more-attribute-assignments on Feb 7, 2025

Conversation

@sharkdp (Contributor) commented on Feb 6, 2025

Summary

  • Support assignments to attributes in more cases:
    • assignments in for loops
    • unpacking assignments
  • Add a test for multi-target assignments (these were already working, but untested)
  • Add tests for all other forms of attribute assignment that can occur (in decreasing order of likelihood):
    • augmented attribute assignments
    • attribute assignments in with statements
    • attribute assignments in comprehensions
    • Note: assignments to attributes in named expressions are not syntactically allowed

closes #15962

Test Plan

New Markdown tests
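The new tests exercise patterns roughly like the following (a hypothetical sketch for illustration, not copied from the actual Markdown tests):

class C:
    def __init__(self, names: list[str]) -> None:
        # unpacking assignment to attributes
        self.x, self.y = (1, "a")

        # for-loop assignment to an attribute
        for self.name in names:
            pass

        # multi-target assignment (already worked, now covered by tests)
        self.a = self.b = 0

        # augmented attribute assignment
        self.x += 1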

@sharkdp sharkdp added the red-knot Multi-file analysis & type inference label Feb 6, 2025
@sharkdp sharkdp changed the title [red-knot] For loop and unpacking assignments for attributes [red-knot] Unpacking and for loop assignments to attributes Feb 6, 2025
@sharkdp sharkdp force-pushed the david/more-attribute-assignments branch from a539809 to a00483a on February 6, 2025 22:10
@carljm (Contributor) left a comment


Love it!

//
// for self.name in <iterable>:

// TODO: Potential diagnostics resulting from the iterable are currently not reported.
@carljm (Contributor) commented on the diff

I expect they would still be reported by normal type inference, no? So I'm not sure any special handling is needed here, and maybe we don't need a TODO either?

Maybe ideally we'd add some tests for cases where diagnostics occur either in iteration, or more generally in type inference of the RHS of a self.x = ... assignment, and verify that we do surface those diagnostics.

@sharkdp (Contributor, Author) commented Feb 7, 2025

I expect they would still be reported by normal type inference, no?

I also hoped that would be the case, but no. We emit the non-iterable diagnostic from infer_for_statement_definition, and attribute assignments are not definitions. I added a test to demonstrate this.
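The missed case looks roughly like this (a sketch assuming a simple non-iterable class, not the exact test that was added):

class NotIterable: ...

class C:
    def __init__(self) -> None:
        # Iterating over a non-iterable: a diagnostic would ideally be reported
        # here, but currently is not, because the target is an attribute and
        # attribute assignments are not definitions.
        for self.x in NotIterable():
            pass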

We have a similar problem where we do not yet identify cases where (normal) assignments to attributes conflict with the declared type of that attribute. We have existing tests for this as well.

(Edit: no, I already implemented this 😄)
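For clarity, a declared-type conflict of that kind would look roughly like this (a hypothetical sketch):

class C:
    def __init__(self) -> None:
        self.x: int = 0

def f(c: C) -> None:
    c.x = "a"  # assignment conflicts with the declared type `int`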

I'll also open a new ticket to address both of these issues.

or more generally in type inference of the RHS of a self.x = ... assignment

The general case works fine. We only fail to emit diagnostics if they originate from infer_…_definition calls, which do not apply to attribute assignments. I added the following test case:

class C:
    def __init__(self) -> None:
        # error: [too-many-positional-arguments]
        self.x: int = len(1, 2, 3)

@sharkdp sharkdp merged commit 97e6fc3 into main Feb 7, 2025
21 checks passed
@sharkdp sharkdp deleted the david/more-attribute-assignments branch February 7, 2025 10:30
Labels
red-knot: Multi-file analysis & type inference

Development
Successfully merging this pull request may close these issues:
[red-knot] Support unpacking attribute assignments: self.x, self.y = …

5 participants