
Support multiple end points in http and grpc load test tasks #1364

Closed
sriumcp opened this issue Nov 3, 2022 · 4 comments

@sriumcp
Member

sriumcp commented Nov 3, 2022

Is your feature request related to a problem? Please describe.
The current http and grpc tasks support load testing a single HTTP or gRPC endpoint, respectively. We want to support load testing of multiple endpoints.

Design:

  1. Load test multiple endpoints; they may share some configuration.
iter8 k launch \
--set "tasks={ready,http,assess}" \
--set ready.deploy=httpbin \
--set ready.service=httpbin \
--set ready.timeout=60s \
--set http.numRequests=200 \
--set http.endpoints.getit.url=http://httpbin.default/get \
--set http.endpoints.postit.url=http://httpbin.default/post \
--set http.endpoints.postit.payloadStr=hello \
--set assess.SLOs.upper.http/getit/latency-mean=50 \
--set assess.SLOs.upper.http/getit/error-count=0 \
--set assess.SLOs.upper.http/postit/latency-mean=150 \
--set assess.SLOs.upper.http/postit/error-count=0 \
--set runner=job
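
Assuming the --set flags above follow standard Helm value semantics, the same experiment could be sketched as a values file (a mechanical translation of the flags; iter8's actual values-file handling may differ):

```yaml
tasks: [ready, http, assess]
ready:
  deploy: httpbin
  service: httpbin
  timeout: 60s
http:
  numRequests: 200        # shared across endpoints
  endpoints:
    getit:
      url: http://httpbin.default/get
    postit:
      url: http://httpbin.default/post
      payloadStr: hello
assess:
  SLOs:
    upper:
      http/getit/latency-mean: 50
      http/getit/error-count: 0
      http/postit/latency-mean: 150
      http/postit/error-count: 0
runner: job
```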
  2. Load test multiple grpc endpoints; they may share some configuration.
iter8 k launch \
--set "tasks={grpc,assess}" \
--set grpc.host="hello.default:50051" \
--set grpc.endpoints.hello.call="helloworld.Greeter.SayHello" \
--set grpc.endpoints.goodbye.call="helloworld.Greeter.SayGoodBye" \
--set grpc.protoURL="https://raw.githubusercontent.com/grpc/grpc-go/master/examples/helloworld/helloworld/helloworld.proto" \
--set assess.SLOs.upper.grpc/hello/error-rate=0 \
--set assess.SLOs.upper.grpc/goodbye/latency/p'97\.5'=800 \
--set runner=job
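
As with the HTTP case, and assuming standard Helm value semantics, the gRPC flags translate to a values file like this (a sketch; note the escaped key p'97\.5' simply becomes the literal key p97.5 in YAML):

```yaml
tasks: [grpc, assess]
grpc:
  host: hello.default:50051
  protoURL: https://raw.githubusercontent.com/grpc/grpc-go/master/examples/helloworld/helloworld/helloworld.proto
  endpoints:
    hello:
      call: helloworld.Greeter.SayHello
    goodbye:
      call: helloworld.Greeter.SayGoodBye
assess:
  SLOs:
    upper:
      grpc/hello/error-rate: 0
      grpc/goodbye/latency/p97.5: 800
runner: job
```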
@sriumcp sriumcp self-assigned this Nov 3, 2022
@sriumcp sriumcp added the v0.12 label Nov 3, 2022
@kalantar
Member

kalantar commented Nov 4, 2022

How should shared configuration be interpreted? In the HTTP example, does --set http.numRequests=200 mean 200 total requests? Or 200 per endpoint?

What does it look like if I want to test against a single endpoint? Do I need to include the endpoints label? In part this is a question about which configuration can be shared and which cannot.

@sriumcp
Member Author

sriumcp commented Nov 4, 2022

Good questions! Here's my take...

  1. The tasks continue to behave as they do today -- i.e., if you do not specify endpoints, then you get exactly the behavior you get today.
  2. If a task does have endpoint specifications, then the final config of each endpoint = shared spec merged with endpoint spec. If a field appears in both the shared and endpoint specs, the latter takes precedence.
  3. Any field of the spec can be shared (i.e., any field is syntactically valid for sharing). In practice, we expect that users will find it more useful to share certain fields (e.g., numRequests, the number of requests generated by each endpoint's load gen process) than others (e.g., URLs, which will typically have distinct values for different endpoints).
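
The merge semantics in point 2 can be sketched as a simple one-level field merge, where endpoint-level fields override shared (task-level) fields on conflict. The field names and override values below are illustrative, not part of the proposal:

```python
def endpoint_config(shared: dict, endpoint: dict) -> dict:
    """Final config = shared spec merged with endpoint spec;
    endpoint fields take precedence over shared fields."""
    merged = dict(shared)    # start from the shared (task-level) fields
    merged.update(endpoint)  # endpoint-level fields override on conflict
    return merged

shared = {"numRequests": 200}
getit = {"url": "http://httpbin.default/get"}
postit = {"url": "http://httpbin.default/post", "payloadStr": "hello",
          "numRequests": 50}  # hypothetical per-endpoint override

print(endpoint_config(shared, getit))
# {'numRequests': 200, 'url': 'http://httpbin.default/get'}
print(endpoint_config(shared, postit))
# numRequests is 50 here: the endpoint spec wins
```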

@sriumcp sriumcp assigned Alan-Cha and unassigned sriumcp Dec 13, 2022
@sriumcp
Member Author

sriumcp commented Feb 2, 2023

As an extension, we may want to introduce glob patterns or wildcards for SLOs. E.g.,

--set assess.SLOs.upper.http/*/latency-mean=50
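
One way such wildcard SLO keys could be expanded against concrete metric names is shell-style glob matching; a sketch with Python's fnmatch (metric names and the limit value are illustrative):

```python
from fnmatch import fnmatch

slos = {"http/*/latency-mean": 50}  # wildcard SLO spec (upper limits)
metrics = [
    "http/getit/latency-mean",
    "http/postit/latency-mean",
    "http/getit/error-count",
]

# Expand each wildcard pattern into one SLO per matching metric.
expanded = {
    metric: limit
    for pattern, limit in slos.items()
    for metric in metrics
    if fnmatch(metric, pattern)
}
print(expanded)
# both latency-mean metrics get the upper limit of 50; error-count does not
```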
