
Ordering of events in Runner.run_streamed is incorrect #583


Open
PrinceGupta1999 opened this issue Apr 24, 2025 · 2 comments
Labels
bug Something isn't working

Comments


PrinceGupta1999 commented Apr 24, 2025

Describe the bug

When using Runner.run_streamed, the run_item_stream_event events are not emitted until both the agent message and the tool execution have completed. When on_tool_start and on_tool_end hooks are used, the item events arrive only after those hooks have already returned.

Debug information

  • Agents SDK version: v0.0.12
  • Python version: 3.10.14

Repro steps

Adapted from https://github.com/openai/openai-agents-python/blob/main/examples/basic/stream_items.py

import asyncio
import random
from typing import Any

from agents import Agent, AgentHooks, ItemHelpers, set_default_openai_key, RunContextWrapper, Runner, Tool, function_tool

set_default_openai_key('sk-your-openai-key')

class JokeAgentHooks(AgentHooks):
    async def on_tool_start(self, wrapper: RunContextWrapper[Any], agent: Agent, tool: Tool):
        print("-- Hook: On Tool Start --")

    async def on_tool_end(
        self, wrapper: RunContextWrapper[Any], agent: Agent, tool: Tool, result: Any
    ):
        print("-- Hook: On Tool End --")

@function_tool
def how_many_jokes() -> int:
    print("-- Tool Execution --")
    # await asyncio.sleep(1)
    return random.randint(1, 10)


async def main():
    agent = Agent(
        name="Joker",
        model="gpt-4.1",
        instructions='Tell the user you are searching jokes for them and call the `how_many_jokes` tool simultaneously. Then tell that many jokes',
        tools=[how_many_jokes],
        hooks=JokeAgentHooks()
    )

    result = Runner.run_streamed(
        agent,
        input="Hello",
    )
    print("=== Run starting ===")
    async for event in result.stream_events():
        if event.type == "raw_response_event":
            if event.data.type == "response.output_item.added":
                print(f"-- Raw Output: Item done: {event.data.item.type} --")
        elif event.type == "agent_updated_stream_event":
            print("-- Agent updated --")
            continue
        elif event.type == "run_item_stream_event":
            if event.item.type == "tool_call_item":
                print("-- Run Item: Tool Called --")
            elif event.item.type == "tool_call_output_item":
                print("-- Run Item: Tool Output --")
            elif event.item.type == "message_output_item":
                print("-- Run Item: Message Output --")
            else:
                pass  # Ignore other event types

    print("=== Run complete ===")


if __name__ == "__main__":
    asyncio.run(main())

Current Output

=== Run starting ===
-- Agent updated --
-- Raw Output: Item done: message --
-- Raw Output: Item done: function_call --
-- Hook: On Tool Start --
-- Tool Execution --
-- Hook: On Tool End --
-- Run Item: Message Output --
-- Run Item: Tool Called --
-- Run Item: Tool Output --
-- Raw Output: Item done: message --
-- Run Item: Message Output --
=== Run complete ===

Expected Output

=== Run starting ===
-- Agent updated --
-- Raw Output: Item done: message --
-- Run Item: Message Output -- # should be received as soon as raw item is added
-- Raw Output: Item done: function_call --
-- Run Item: Tool Called -- # should be received as soon as raw item is added
-- Hook: On Tool Start --
-- Tool Execution --
-- Run Item: Tool Output -- # should be received as soon as tool execution completes
-- Hook: On Tool End --
-- Raw Output: Item done: message --
-- Run Item: Message Output -- 
=== Run complete ===

Expected behavior

The run_item_stream_event events should be streamed as soon as the corresponding raw items are completed, not held back until tool execution finishes. This matters in applications that run logic based on these events, where the current ordering can cause issues.

Another case where this will cause issues is when the LLM output is surfaced to the user through run_items and tool calls are involved. In that case the user receives no output at all until the tool execution has completed.

@PrinceGupta1999 PrinceGupta1999 added the bug Something isn't working label Apr 24, 2025
@PrinceGupta1999
Author

Hi team, let me know if more details are needed. Currently, there is no obvious workaround for this issue.

@PrinceGupta1999
Author

@rm-openai bumping this up
