Description
Question
I have a non-reasoning model, `gpt-4.1`, that does some tool calls and then hands off to a reasoning model, `o3`.
I am seeing that the server wants a reasoning item to accompany each tool call:
openai.BadRequestError: Error code: 400 - {'error': {'message': "Item 'fc_682ce5bd92248191940996c8d9a04dbb0a9f269894a5abbf' of type 'function_call' was provided without its required 'reasoning' item: 'rs_682ce5b7b5ac8191ac7a77406c8ef8780a9f269894a5abbf'.", 'type': 'invalid_request_error', 'param': 'input', 'code': None}}
I tried providing custom reasoning items during the handoff using hooks, but the reasoning ID is validated, so that will not work. I also tried leaving the ID blank, hoping the backend would create one; that failed too.
openai.NotFoundError: Error code: 404 - {'error': {'message': "Item with id 'rs_682ce817c17081919e38004aaf16b288031dc804e3dfc07c' not found.", 'type': 'invalid_request_error', 'param': 'input', 'code': None}}
I tried removing the tool call items, but then the server complains that the tool call outputs have no matching tool call item:
openai.BadRequestError: Error code: 400 - {'error': {'message': 'No tool call found for function call output with call_id call_4GvznfhXXkBVbpXwu2bpLjRS.', 'type': 'invalid_request_error', 'param': 'input', 'code': None}}
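One way to express that removal is a handoff input filter; a hypothetical sketch (not my exact code; `drop_tool_calls` and the agent wiring are illustrative):

```python
from agents import Agent, HandoffInputData, handoff
from agents.items import ToolCallItem


def drop_tool_calls(data: HandoffInputData) -> HandoffInputData:
    # Keep everything except the function_call items themselves; their outputs
    # stay behind, which is what triggers the "No tool call found" error above.
    return HandoffInputData(
        input_history=data.input_history,
        pre_handoff_items=tuple(
            item for item in data.pre_handoff_items
            if not isinstance(item, ToolCallItem)
        ),
        new_items=tuple(
            item for item in data.new_items
            if not isinstance(item, ToolCallItem)
        ),
    )


reasoning_agent = Agent(name="Reasoning agent", model="o3")
to_o3 = handoff(reasoning_agent, input_filter=drop_tool_calls)
```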
I do need the tool call results to be present in the context so `o3` knows what happened. What do you suggest?
P.S. Switching between `gpt-4.1` and `o3` works great in the playground. Going from `o3` to `gpt-4.1` does remove the reasoning items, yes, but in the reverse direction nothing seems to be a problem.
Many thanks.
Activity
rm-openai commented on May 21, 2025
Would you mind sharing a quick code snippet that reproduces this? Looking into it
maininformer commented on May 21, 2025
Definitely; a setup like the following gives the first error.
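A minimal sketch of that setup (the weather tool and agent names are placeholders):

```python
import asyncio

from agents import Agent, Runner, function_tool


@function_tool
def get_weather(city: str) -> str:
    # Simple tool so the gpt-4.1 agent emits a function call before handing off.
    return f"The weather in {city} is sunny."


reasoning_agent = Agent(
    name="Reasoning agent",
    instructions="Use the conversation so far to answer the question.",
    model="o3",
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Call the weather tool, then hand off to the reasoning agent.",
    model="gpt-4.1",
    tools=[get_weather],
    handoffs=[reasoning_agent],
)


async def main() -> None:
    result = await Runner.run(triage_agent, "What's the weather in Tokyo?")
    print(result.final_output)


asyncio.run(main())
```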
Since I posted, I also tried removing the function calls and converting the function call results to plain text to pass to the reasoning model. That also gives me an error similar to the one from adding custom reasoning items.
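That attempt was roughly of this shape (again a sketch, not the exact code; it reuses `reasoning_agent` from the snippet above):

```python
from agents import HandoffInputData, handoff
from agents.items import ToolCallOutputItem


def tools_to_plain_text(data: HandoffInputData) -> HandoffInputData:
    # Drop the generated items and instead append each tool result as a plain
    # text message, so the reasoning model still sees what the tools returned.
    if isinstance(data.input_history, str):
        history = [{"role": "user", "content": data.input_history}]
    else:
        history = list(data.input_history)
    for item in (*data.pre_handoff_items, *data.new_items):
        if isinstance(item, ToolCallOutputItem):
            history.append(
                {"role": "assistant", "content": f"Tool result: {item.output}"}
            )
    return HandoffInputData(
        input_history=tuple(history),
        pre_handoff_items=(),
        new_items=(),
    )


to_o3 = handoff(reasoning_agent, input_filter=tools_to_plain_text)
```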
github-actions commented on May 29, 2025
This issue is stale because it has been open for 7 days with no activity.
maininformer commented on May 29, 2025
Not stale girlie pop, I am checking this every day.
lamaeldo commented on May 29, 2025
Seconded, I am also facing this issue: it is somehow impossible to have a multi-turn conversation with the agent by doing `next_input = result.to_input_list()` and then `result = await Runner.run(agent, next_input)` in a loop.
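i.e. roughly this pattern (a sketch; it runs inside an async function, with `agent` already defined):

```python
next_input = "Hi there"
while True:
    result = await Runner.run(agent, next_input)
    print(result.final_output)
    # Carry the full item history forward and append the next user turn.
    next_input = result.to_input_list() + [
        {"role": "user", "content": input("> ")}
    ]
```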
rm-openai commented on May 29, 2025
Thanks - lost track of this, but going to try and fix today.
rm-openai commented on May 30, 2025
deployed a fix, so you shouldn't see this error any more
Let me know if this resolves things or if there's more to be done!
maininformer commented on May 30, 2025
@rm-openai thank you. Just to check, does this change the behavior where non-reasoning models like `gpt-4.1` would not accept reasoning items? If so, that would be awesome!
Just to make sure I understand: this is the same behavior as in the playground, where if one switches from, say, `o3` to `gpt-4.1`, there is a modal saying reasoning items will be removed. With your change, we would expect that modal and behavior to be unnecessary, yes?
rm-openai commented on May 30, 2025
That's right. Reasoning items will be ignored if passed to `gpt-4.1`, instead of raising an error.
maininformer commented on May 30, 2025
@rm-openai Getting a 500; same snippet ^
rm-openai commented on May 30, 2025
@maininformer ah sorry about that. Just to confirm could you run once more and just make sure you keep getting a 500 error?
maininformer commented on May 30, 2025
@rm-openai Yeah, I ran this a couple of times before posting, but I just ran it again three more times to be sure. I do see you guys are having a couple of disruptions, but I'm unsure if it's related.
maininformer commented on Jun 4, 2025
@rm-openai the snippet now works! 🙏🏼 🙏🏼
However, there is another thing that is broken now:
Fixed 🟢: `o3 -> gpt-4.1 -> o3`
Broken 🔴: `o3 -> gpt-4.1 -> o4-mini`. Getting a 500 again. Here is the full breaking script for convenience:
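It is along these lines (a sketch rather than the exact script; the weather tool, the agent names, and the `remove_all_tools` filter workaround are stand-ins):

```python
import asyncio

from agents import Agent, Runner, function_tool, handoff
from agents.extensions import handoff_filters


@function_tool
def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny."


final_agent = Agent(
    name="Final agent",
    instructions="Write the final answer.",
    model="o4-mini",
)

tool_agent = Agent(
    name="Tool agent",
    instructions="Call the weather tool, then hand off to the final agent.",
    model="gpt-4.1",
    tools=[get_weather],
    # Filter added as a workaround while handoffs into reasoning models failed.
    handoffs=[handoff(final_agent, input_filter=handoff_filters.remove_all_tools)],
)

first_agent = Agent(
    name="First agent",
    instructions="Hand off to the tool agent.",
    model="o3",
    handoffs=[tool_agent],
)


async def main() -> None:
    result = await Runner.run(first_agent, "What's the weather in Tokyo?")
    print(result.final_output)


asyncio.run(main())
```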
rm-openai commented on Jun 4, 2025
@maininformer yeah just landed a fix earlier today.
As for o3 -> o4-mini, yeah, that's currently "expected" in the sense that reasoning items aren't necessarily transferable between different reasoning models. It shouldn't be a 500, but that's a separate issue. Let me open an internal thread and see what comes of it.
(For now I'd recommend sticking with the same reasoning model)
rm-openai commented on Jun 5, 2025
@maininformer ok your snippet should now work, landed another fix. You also won't need the handoff filter.
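In other words (a sketch, reusing the names from the script above), the middle agent can now hand off directly:

```python
tool_agent = Agent(
    name="Tool agent",
    instructions="Call the weather tool, then hand off to the final agent.",
    model="gpt-4.1",
    tools=[get_weather],
    handoffs=[final_agent],  # no input_filter workaround needed any more
)
```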
maininformer commented on Jun 5, 2025
yo legend! thank you very much.
rm-openai commented on Jun 5, 2025
Closing; feel free to reopen or create a new issue if needed.
fitzjalen commented on Jun 6, 2025
Same error here. Trying to make the Critic a multi-turn agent but got this error. The main model is `o3-mini` (high) and the search model is `gpt-4o`.
rm-openai commented on Jun 9, 2025
@fitzjalen could you create a new issue with your repro steps? thank you!