Reasoning model items provide to General model #569
Hi @rm-openai, I would like to propose two possible PRs:
```
{
    "id": "msg_id",
    "content": [
        {
            "annotations": [],
            "text": msg["summary"],
            "type": "output_text"
        }
    ],
    "role": "assistant",
    "status": "completed",
    "type": "message"
}
```
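As a sketch of how that shape could be produced (field names follow the JSON above; a `msg` object exposing `id` and `summary` keys is an assumption, not a confirmed SDK interface), a filter could rewrite each reasoning item into a plain assistant message:

```python
def reasoning_to_message(msg: dict) -> dict:
    # Rewrap a reasoning item's summary as a regular assistant message
    # (sketch; assumes `msg` carries "id" and "summary" keys).
    return {
        "id": msg["id"],
        "content": [
            {
                "annotations": [],
                "text": msg["summary"],
                "type": "output_text",
            }
        ],
        "role": "assistant",
        "status": "completed",
        "type": "message",
    }
```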
Let me know if you like these approaches, or if you would suggest a different solution, so that I can go ahead and open a PR. Thanks!
@DanieleMorotti - Is there a way this can be implemented with input filters?
@somecoulombs Yes, if we want to restrict this kind of filtering only to handoffs, I can put the function in
@DanieleMorotti IMO we need a few things:
If you're interested, would love to see a PR for any/all of those!
Yes, I'm happy to collaborate. I've already implemented some code for the first two points. I just need a few clarifications:
Thank you!
I'm struggling with some problems removing the reasoning items:
I checked and found similar cases mentioned in the forum. Let me know if I can resolve this in my code or if it depends on the OpenAI API.
@DanieleMorotti can you share a simple repro script for this? Happy to take a look
This should reproduce the error:

```python
import logging
import asyncio

from agents import (
    Agent,
    Runner,
    RunConfig,
    handoff,
    HandoffInputData,
    RunItem,
    ReasoningItem,
    TResponseInputItem,
)
from agents.model_settings import ModelSettings
from agents.extensions.handoff_prompt import RECOMMENDED_PROMPT_PREFIX

##############
# SETUP LOGGER
##############
logger = logging.getLogger("openai.agents")  # or openai.agents.tracing for the Tracing logger
# To make all logs show up
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.FileHandler("agents.log", mode="w"))


def _remove_reasoning_items(items: tuple[RunItem, ...]) -> tuple[RunItem, ...]:
    filtered_items = []
    for item in items:
        if isinstance(item, ReasoningItem):
            continue
        filtered_items.append(item)
    return tuple(filtered_items)


def _remove_reasoning_types_from_input(
    items: tuple[TResponseInputItem, ...],
) -> tuple[TResponseInputItem, ...]:
    reasoning_types = ["reasoning"]
    filtered_items: list[TResponseInputItem] = []
    for item in items:
        itype = item.get("type")
        if itype in reasoning_types:
            continue
        filtered_items.append(item)
    return tuple(filtered_items)


def remove_reasoning(handoff_input_data: HandoffInputData) -> HandoffInputData:
    """Filters out reasoning items."""
    history = handoff_input_data.input_history
    new_items = handoff_input_data.new_items

    filtered_history = (
        _remove_reasoning_types_from_input(history) if isinstance(history, tuple) else history
    )
    filtered_pre_handoff_items = _remove_reasoning_items(handoff_input_data.pre_handoff_items)
    filtered_new_items = _remove_reasoning_items(new_items)

    return HandoffInputData(
        input_history=filtered_history,
        pre_handoff_items=filtered_pre_handoff_items,
        new_items=filtered_new_items,
    )


async def main():
    check_agent = Agent(
        name="Syntax check",
        model="gpt-4.1",
        instructions=f"""{RECOMMENDED_PROMPT_PREFIX}
        You have to check if a received plan contains syntax errors and then you can return control.
        """,
    )
    agent = Agent(
        name="Planner agent",
        model="o4-mini",
        handoff_description="A helpful agent",
        instructions=f"""{RECOMMENDED_PROMPT_PREFIX}
        Your goal is to write a plan for a user input and then make a syntax check before returning it to the user.
        """,
        handoffs=[handoff(check_agent, input_filter=remove_reasoning)],
        model_settings=ModelSettings(max_tokens=2048, reasoning={"effort": "low"}),
    )
    check_agent.handoffs.append(handoff(agent))

    message = "I want to visit Rome, make me a very short plan about the tour."
    result = await Runner.run(
        starting_agent=agent, input=message, run_config=RunConfig(tracing_disabled=True)
    )
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```

Sorry for the messy code, but I had to put all the functions in one file. Let me know if you have some time to investigate it.
Thanks. Looking into this!
Hi @pakrym-oai, any news on this?
Is there something we can do? If we use o3-mini it's not a problem, but if we use o4-mini it raises this error.
Hi @DanieleMorotti @lyk0014, I figured out a workaround to hand off reasoning outputs to non-reasoning models; here is the example.
Here the routing agent is an agent with a non-reasoning model.
Agent A uses o4-mini,
Agent B uses gpt-4.1.
When providing A_result.to_input_list() as B's input, I get this error:
openai.BadRequestError: Error code: 400 - {'error': {'message': 'Reasoning input items can only be provided to a reasoning or computer use model. Remove reasoning items from your input and try again.', 'type': 'invalid_request_error', 'param': 'input', 'code': None}}
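One minimal way to avoid this 400 (a sketch, not an official SDK helper; the variable names in the usage comment are hypothetical) is to drop items whose `type` is `"reasoning"` from the `to_input_list()` output before passing it to the non-reasoning agent:

```python
def strip_reasoning_items(items: list[dict]) -> list[dict]:
    # Keep everything except reasoning items, which non-reasoning
    # models such as gpt-4.1 reject with a 400 invalid_request_error.
    return [item for item in items if item.get("type") != "reasoning"]


# Hypothetical usage between the two runs:
# b_input = strip_reasoning_items(a_result.to_input_list())
# b_result = await Runner.run(starting_agent=agent_b, input=b_input)
```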