Describe the bug
When using `Runner.run_streamed`, the `run_item_stream_event` events are not received until the agent message and tool execution have completed. When the `on_tool_start` and `on_tool_end` hooks are used, the events are streamed only after these methods return.
```python
import asyncio
import random
from typing import Any

from agents import (
    Agent,
    AgentHooks,
    RunContextWrapper,
    Runner,
    Tool,
    function_tool,
    set_default_openai_key,
)

set_default_openai_key("sk-your-openai-key")


class JokeAgentHooks(AgentHooks):
    async def on_tool_start(self, wrapper: RunContextWrapper[Any], agent: Agent, tool: Tool):
        print("-- Hook: On Tool Start --")

    async def on_tool_end(
        self, wrapper: RunContextWrapper[Any], agent: Agent, tool: Tool, result: Any
    ):
        print("-- Hook: On Tool End --")


@function_tool
def how_many_jokes() -> int:
    print("-- Tool Execution --")
    # await asyncio.sleep(1)
    return random.randint(1, 10)


async def main():
    agent = Agent(
        name="Joker",
        model="gpt-4.1",
        instructions=(
            "Tell the user you are searching jokes for them and call the "
            "`how_many_jokes` tool simultaneously. Then tell that many jokes"
        ),
        tools=[how_many_jokes],
        hooks=JokeAgentHooks(),
    )
    result = Runner.run_streamed(
        agent,
        input="Hello",
    )
    print("=== Run starting ===")
    async for event in result.stream_events():
        if event.type == "raw_response_event":
            if event.data.type == "response.output_item.added":
                print(f"-- Raw Output: Item done: {event.data.item.type} --")
        elif event.type == "agent_updated_stream_event":
            print("-- Agent updated --")
            continue
        elif event.type == "run_item_stream_event":
            if event.item.type == "tool_call_item":
                print("-- Run Item: Tool Called --")
            elif event.item.type == "tool_call_output_item":
                print("-- Run Item: Tool Output --")
            elif event.item.type == "message_output_item":
                print("-- Run Item: Message Output --")
        else:
            pass  # Ignore other event types
    print("=== Run complete ===")


if __name__ == "__main__":
    asyncio.run(main())
```
Current Output
```
=== Run starting ===
-- Agent updated --
-- Raw Output: Item done: message --
-- Raw Output: Item done: function_call --
-- Hook: On Tool Start --
-- Tool Execution --
-- Hook: On Tool End --
-- Run Item: Message Output --
-- Run Item: Tool Called --
-- Run Item: Tool Output --
-- Raw Output: Item done: message --
-- Run Item: Message Output --
=== Run complete ===
```
Expected Output
```
=== Run starting ===
-- Agent updated --
-- Raw Output: Item done: message --
-- Run Item: Message Output --      # should be received as soon as raw item is added
-- Raw Output: Item done: function_call --
-- Run Item: Tool Called --         # should be received as soon as raw item is added
-- Hook: On Tool Start --
-- Tool Execution --
-- Run Item: Tool Output --         # should be received as soon as tool execution completes
-- Hook: On Tool End --
-- Raw Output: Item done: message --
-- Run Item: Message Output --
=== Run complete ===
```
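The expected interleaving can be sketched with a toy asyncio producer/consumer (the labels are illustrative, not the SDK's internal event names): run-item events are enqueued the moment the corresponding raw item completes, and only the tool-output event waits for tool execution.

```python
import asyncio


async def producer(queue: asyncio.Queue) -> None:
    # Toy model of the desired streaming order.
    queue.put_nowait("raw: message")
    queue.put_nowait("run_item: message_output")    # emitted immediately, not buffered
    queue.put_nowait("raw: function_call")
    queue.put_nowait("run_item: tool_call")         # emitted before the tool runs
    await asyncio.sleep(0.05)                       # simulated tool execution
    queue.put_nowait("run_item: tool_call_output")  # emitted as soon as the tool returns
    queue.put_nowait(None)                          # sentinel: stream finished


async def consume() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    asyncio.create_task(producer(queue))
    events = []
    while (event := await queue.get()) is not None:
        events.append(event)
    return events


events = asyncio.run(consume())
print(events)
```

With this ordering, a consumer acting on `run_item` events sees the tool call before the tool executes, instead of receiving everything in a burst afterwards.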
Expected behavior
The `run_item_stream_event` events should be streamed as soon as the corresponding raw items are completed, rather than waiting for tool execution. This is important in applications that need to perform logic based on these events, where the current ordering can cause issues.
Another case where this causes problems is when the LLM output is shown to the user through run_items and tool calls are involved: the user receives no output until tool execution has completed.
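Until the ordering is fixed, one workaround is to surface message text from the raw events instead of waiting for `run_item_stream_event`. A minimal sketch, assuming the `response.output_text.delta` raw event shape from the underlying Responses API stream (the `emit` callback and the stub event below are hypothetical glue, not SDK API):

```python
from types import SimpleNamespace


def handle_event(event, emit) -> None:
    """Forward message text to the user as soon as raw deltas arrive,
    rather than waiting for the buffered run_item_stream_event."""
    if event.type == "raw_response_event":
        # Text-delta raw events carry incremental message text.
        if getattr(event.data, "type", None) == "response.output_text.delta":
            emit(event.data.delta)


# Stub event standing in for what stream_events() would yield.
chunks = []
stub = SimpleNamespace(
    type="raw_response_event",
    data=SimpleNamespace(type="response.output_text.delta", delta="Searching jokes"),
)
handle_event(stub, chunks.append)
print(chunks)
```

In the repro above this would be called inside the `async for event in result.stream_events():` loop, e.g. `handle_event(event, print)`, so the user sees text before any tool finishes.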
Debug information
- Agents SDK version: v0.0.12
- Python version: 3.10.14

Repro steps
Adapted from https://github.com/openai/openai-agents-python/blob/main/examples/basic/stream_items.py