
Reasoning model items provide to General model #569


Open
lyk0014 opened this issue Apr 22, 2025 · 13 comments
Labels
bug Something isn't working

Comments

@lyk0014

lyk0014 commented Apr 22, 2025

Agent A uses o4-mini,
Agent B uses gpt-4.1.

When I provide A_result.to_input_list() as B's input, I get this error:

openai.BadRequestError: Error code: 400 - {'error': {'message': 'Reasoning input items can only be provided to a reasoning or computer use model. Remove reasoning items from your input and try again.', 'type': 'invalid_request_error', 'param': 'input', 'code': None}}
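To make the failure mode concrete, here is a minimal pure-Python sketch of the workaround discussed below: the reasoning model's to_input_list() includes items with type "reasoning", and those must be stripped before the list is reused with a non-reasoning model. The item shapes are simplified illustrations of the Responses API format, and strip_reasoning_items is a hypothetical helper, not part of the SDK:

```python
# Sketch: strip "reasoning" items from a result's input list before
# passing it to a non-reasoning model. Item shapes are illustrative.

def strip_reasoning_items(items):
    """Drop any item whose type is "reasoning"."""
    return [item for item in items if item.get("type") != "reasoning"]

# Example input list as a reasoning model might produce it (simplified):
input_list = [
    {"type": "message", "role": "user", "content": "Plan a tour of Rome."},
    {"type": "reasoning", "id": "rs_123", "summary": []},
    {"type": "message", "role": "assistant", "content": "Here is a short plan..."},
]

cleaned = strip_reasoning_items(input_list)
# cleaned now contains only the two message items
```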

lyk0014 added the bug (Something isn't working) label Apr 22, 2025
@DanieleMorotti
Contributor

Hi, @rm-openai , I would like to propose two possible PRs:

  1. Add a convert_reasoning_items parameter to RunResultBase.to_input_list(). When enabled, this option would convert reasoning items into standard assistant messages, for example:
{
    "id": "msg_id",
    "content": [
        {
            "annotations": [],
            "text": msg["summary"],
            "type": "output_text"
        }
    ],
    "role": "assistant",
    "status": "completed",
    "type": "message"
}
  2. Implement the reasoning-message filter as a utility function (e.g. in utils) so it can be used across the package.

Let me know if you like these approaches, or if you would suggest a different solution, so that I can go ahead and open a PR. Thanks!
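A sketch of what the conversion proposed in point 1 might look like, following the message shape shown above. This is a hypothetical helper, not existing SDK code, and it assumes the reasoning item carries a `summary` list of `{"type": "summary_text", "text": ...}` entries:

```python
def reasoning_to_assistant_message(item):
    """Convert a reasoning item into a plain assistant message (sketch).

    `item` is assumed to follow the Responses API reasoning shape, with a
    `summary` list of {"type": "summary_text", "text": ...} entries.
    """
    text = " ".join(part.get("text", "") for part in item.get("summary", []))
    return {
        "id": item["id"],
        "content": [
            {"annotations": [], "text": text, "type": "output_text"}
        ],
        "role": "assistant",
        "status": "completed",
        "type": "message",
    }

converted = reasoning_to_assistant_message(
    {"id": "rs_1", "type": "reasoning",
     "summary": [{"type": "summary_text", "text": "Planned a short Rome tour."}]}
)
```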

@somecoulombs

@DanieleMorotti - Is there a way this can be implemented with input filters?

@DanieleMorotti
Contributor

@somecoulombs Yes. If we want to restrict this kind of filtering to handoffs only, I can put the function in the agents.extensions.handoff_filters submodule.

@rm-openai
Collaborator

@DanieleMorotti IMO we need a few things:

  1. A utility function that can strip reasoning items.
  2. A handoff filter that uses the utility from item 1.
  3. A lifecycle method def on_prepare_input(context, agent, input) -> input that lets you mutate the input before the LLM is called for an agent.

If you're interested, would love to see a PR for any/all of those!
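The lifecycle method proposed in point 3 might look something like this, as a pure-Python sketch. This hook does not exist in the SDK; the class and signature here are hypothetical, and `context` and `agent` are whatever the runner would pass in:

```python
class MyHooks:
    """Sketch of the proposed lifecycle hook: mutate the input right
    before the model is called for an agent. Only the return value
    matters for this illustration."""

    def on_prepare_input(self, context, agent, input):
        # e.g. drop reasoning items when the next agent's model
        # cannot accept them
        return [item for item in input if item.get("type") != "reasoning"]

hooks = MyHooks()
prepared = hooks.on_prepare_input(
    None,  # context (unused in this sketch)
    None,  # agent (unused in this sketch)
    [
        {"type": "reasoning", "id": "rs_1"},
        {"type": "message", "role": "user", "content": "hi"},
    ],
)
# prepared keeps only the message item
```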

@DanieleMorotti
Contributor

Yes, I'm happy to collaborate. I've already implemented some code for the first two points. I just need a few clarifications:

  1. Which file should I place the utility function in?
  2. Regarding the hook, should I call it within the _run_single_turn function, before generating the new response?

Thank you!

@rm-openai
Collaborator

  1. Somewhere in extensions
  2. In _get_new_response (before calling the model) and _run_single_turn_streamed (before calling model.stream_response)

@DanieleMorotti
Contributor

I'm running into a problem when removing the reasoning items:

openai.BadRequestError: Error code: 400 - {'error': {'message': "Item 'fc_68089c9c67108191a3c1560734f28ed60227726a225d0030' of type 'function_call' was provided without its required 'reasoning' item: 'rs_68089c98b91c81919a88671d1d2d06ce0227726a225d0030'.", 'type': 'invalid_request_error', 'param': 'input', 'code': None}}

I checked, and I found similar cases mentioned in the forum. Let me know if I can resolve this in my code or if it depends on the OpenAI API.
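The error message suggests each function_call item must be accompanied by the reasoning item that preceded it, so stripping reasoning items alone leaves orphaned function calls behind. A stricter filter would also drop those dependent function_call items and their matching function_call_output items. A pure-Python sketch (the pairing rule here is inferred from the error message, not from documented behavior, and strip_reasoning_and_orphans is a hypothetical helper):

```python
def strip_reasoning_and_orphans(items):
    """Remove reasoning items, plus any function_call items that
    immediately followed them (and their function_call_output items),
    since those would be rejected without their reasoning item."""
    filtered = []
    dropped_call_ids = set()
    prev_was_reasoning = False
    for item in items:
        itype = item.get("type")
        if itype == "reasoning":
            prev_was_reasoning = True
            continue
        if itype == "function_call" and prev_was_reasoning:
            # This call depended on the reasoning item we just dropped.
            dropped_call_ids.add(item.get("call_id"))
            continue
        if itype == "function_call_output" and item.get("call_id") in dropped_call_ids:
            continue
        prev_was_reasoning = False
        filtered.append(item)
    return filtered

history = [
    {"type": "message", "role": "user", "content": "hi"},
    {"type": "reasoning", "id": "rs_1"},
    {"type": "function_call", "call_id": "c1", "name": "lookup"},
    {"type": "function_call_output", "call_id": "c1", "output": "ok"},
    {"type": "message", "role": "assistant", "content": "done"},
]
cleaned = strip_reasoning_and_orphans(history)
# cleaned keeps only the two message items
```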

@rm-openai
Collaborator

@DanieleMorotti can you share a simple repro script for this? Happy to take a look

@DanieleMorotti
Contributor

This should reproduce the error:

import logging
import asyncio

from agents import (
    Agent,
    Runner,
    RunConfig,
    handoff,
    HandoffInputData,
    RunItem,
    ReasoningItem,
    TResponseInputItem
)
from agents.model_settings import ModelSettings
from agents.extensions.handoff_prompt import RECOMMENDED_PROMPT_PREFIX


##############
# SETUP LOGGER
##############

logger = logging.getLogger("openai.agents")  # or openai.agents.tracing for the tracing logger
# To make all logs show up
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.FileHandler("agents.log", mode="w"))



def _remove_reasoning_items(items: tuple[RunItem, ...]) -> tuple[RunItem, ...]:
    return tuple(item for item in items if not isinstance(item, ReasoningItem))

def _remove_reasoning_types_from_input(
    items: tuple[TResponseInputItem, ...],
) -> tuple[TResponseInputItem, ...]:
    reasoning_types = [
        "reasoning"
    ]

    filtered_items: list[TResponseInputItem] = []
    for item in items:
        itype = item.get("type")
        if itype in reasoning_types:
            continue
        filtered_items.append(item)
    return tuple(filtered_items)

def remove_reasoning(handoff_input_data: HandoffInputData) -> HandoffInputData:
    """Filters out reasoning items."""

    history = handoff_input_data.input_history
    new_items = handoff_input_data.new_items

    filtered_history = (
        _remove_reasoning_types_from_input(history) if isinstance(history, tuple) else history
    )
    filtered_pre_handoff_items = _remove_reasoning_items(handoff_input_data.pre_handoff_items)
    filtered_new_items = _remove_reasoning_items(new_items)

    return HandoffInputData(
        input_history=filtered_history,
        pre_handoff_items=filtered_pre_handoff_items,
        new_items=filtered_new_items,
    )

async def main():

    check_agent = Agent(
        name="Syntax check",
        model="gpt-4.1",
        instructions=f"""{RECOMMENDED_PROMPT_PREFIX}
        You have to check if a received plan contains syntax error and then you can return control.
        """
    )

    agent = Agent(
        name="Planner agent",
        model="o4-mini",
        handoff_description="A helpful agent",
        instructions=f"""{RECOMMENDED_PROMPT_PREFIX}
        Your goal is to write a plan for the user input and then run a syntax check before returning it to the user.
        """,
        handoffs=[handoff(check_agent, input_filter=remove_reasoning)],
        model_settings=ModelSettings(max_tokens=2048, reasoning={"effort": "low"})
    )

    check_agent.handoffs.append(handoff(agent))

    message = "I want to visit Rome, make me a very short plan about the tour."
    result = await Runner.run(starting_agent=agent, input=message, run_config=RunConfig(tracing_disabled=True))
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())

Sorry for the messy code, but I had to put all the functions in one file. Let me know if you have some time to investigate it.

@rm-openai
Collaborator

Thanks. Looking into this!

@DanieleMorotti
Contributor

Hi @pakrym-oai , any news on this?

@someshfengde

Is there something we can do about this? With o3-mini it's not a problem, but with o4-mini it raises this error:

{
  "error": "Error code: 400 - {'error': {'message': 'Reasoning input items can only be provided to a reasoning or computer use model. Remove reasoning items from your input and try again.', 'type': 'invalid_request_error', 'param': 'input', 'code': None}}"
}

@someshfengde

someshfengde commented May 8, 2025

Hi @DanieleMorotti @lyk0014, I figured out a workaround to hand off reasoning outputs to non-reasoning models.

Here is an example:

from agents import Agent, InputGuardrail, handoff
from agents.handoffs import HandoffInputData

def remove_reasoning_items(handoff_input_data: HandoffInputData) -> HandoffInputData:
    """Filters out all reasoning items from the handoff data."""
    history = handoff_input_data.input_history

    # Filter out reasoning items from new_items
    filtered_new_items = tuple(
        item for item in handoff_input_data.new_items
        if getattr(item, "type", None) != "reasoning_item"
    )

    # Filter out reasoning items from pre_handoff_items
    filtered_pre_handoff_items = tuple(
        item for item in handoff_input_data.pre_handoff_items
        if getattr(item, "type", None) != "reasoning_item"
    )

    return HandoffInputData(
        input_history=history,
        pre_handoff_items=filtered_pre_handoff_items,
        new_items=filtered_new_items,
    )

# `routing_agent`, `dashboard_guardrail`, `get_what_you_are`, and
# `REASONING_MODEL` are defined elsewhere in the application.
nighthawk_agent = Agent(
    name="NighthawkChatbot",
    instructions="You are a good chatbot",
    # Attach the guardrail to ensure only proper queries are processed.
    input_guardrails=[InputGuardrail(guardrail_function=dashboard_guardrail)],
    # Use handoffs to route the query after validation.
    handoffs=[handoff(routing_agent, input_filter=remove_reasoning_items)],
    tools=[get_what_you_are],
    model=REASONING_MODEL,
)

Here, routing_agent is an agent with a non-reasoning model, and nighthawk_agent is the agent with the reasoning model.
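The workaround keys off each run item's `type` attribute. The filtering logic can be checked in isolation with stand-in objects (a sketch: SimpleNamespace stands in for the SDK's RunItem, and the type strings mirror those used above):

```python
from types import SimpleNamespace

def drop_reasoning_run_items(items):
    """Keep only run items whose `type` attribute is not 'reasoning_item'."""
    return tuple(i for i in items if getattr(i, "type", None) != "reasoning_item")

# Stand-ins for run items; only the `type` attribute matters here.
items = (
    SimpleNamespace(type="message_output_item", text="hello"),
    SimpleNamespace(type="reasoning_item"),
    SimpleNamespace(type="tool_call_item"),
)
kept = drop_reasoning_run_items(items)
# kept contains the message and tool-call items only
```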

Development

No branches or pull requests

5 participants