Enforce query response size limit after decompression in query-frontend #6607
base: master
Conversation
Signed-off-by: Ahmed Hassan <[email protected]>
Force-pushed from 4b1de34 to 967ef4f
If we apply the limit only in the instant query and range query tripperware, then we don't have a way to protect against large responses from the GetSeries and GetLabels APIs.
Do we plan to do something there, or rely on the existing gRPC message size limit?
	validation.SmallestPositiveIntPerTenant(tenantIDs, c.limits.MaxFetchedDataBytesPerQuery),
)

maxByteSize := validation.SmallestPositiveIntPerTenant(tenantIDs, c.limits.MaxFetchedDataBytesPerQuery)
Why do we use the same data bytes limit here instead of introducing a new limit?
This limit config is under the limits section, so I thought it was more appropriate to use it instead of creating a QFE-specific limit config. We can update the description to mention that it is applied in the query-frontend as well.
# The maximum combined size of all data that a query can fetch from each
# ingester and storage. This limit is enforced in the querier, ruler, **and query-frontend** for
# `query`, `query_range` and `series` APIs. 0 to disable.
# CLI flag: -querier.max-fetched-data-bytes-per-query
[max_fetched_data_bytes_per_query: <int> | default = 0]
I am okay with introducing another limit instead, but should it go under query-frontend section or limits section?
@@ -1,6 +1,7 @@
 # Changelog

 ## master / unreleased
+* [ENHANCEMENT] Query Frontend: Limit query response size after decompression in query frontend. #6607
Changelog items are ordered in a specific way: https://github.com/cortexproject/cortex/blob/master/RELEASE.md#prepare-your-release
Can we add an integration test?
What this PR does:
If `max_fetched_data_bytes_per_query` is configured, fail the query response returned to the query-frontend if the size of the response after decompression exceeds the limit. The decompression reader stops once the max bytes limit is reached.
Which issue(s) this PR fixes:
Fixes #
Checklist
- [ ] `CHANGELOG.md` updated - the order of entries should be `[CHANGE]`, `[FEATURE]`, `[ENHANCEMENT]`, `[BUGFIX]`