
How would I handoff a non-reasoning model with tool calls to a reasoning model? #722

Closed
@maininformer

Description


Question

I have a non-reasoning model, gpt-4.1, that does some tool calls, and then hands off to a reasoning model, o3.

I am seeing that the server requires a reasoning item alongside each tool call.

openai.BadRequestError: Error code: 400 - {'error': {'message': "Item 'fc_682ce5bd92248191940996c8d9a04dbb0a9f269894a5abbf' of type 'function_call' was provided without its required 'reasoning' item: 'rs_682ce5b7b5ac8191ac7a77406c8ef8780a9f269894a5abbf'.", 'type': 'invalid_request_error', 'param': 'input', 'code': None}}

I tried providing custom reasoning items during the handoff using hooks, but the reasoning ID is validated server-side, so that will not work. I also tried leaving the ID blank in case the backend would create one; that failed too.

openai.NotFoundError: Error code: 404 - {'error': {'message': "Item with id 'rs_682ce817c17081919e38004aaf16b288031dc804e3dfc07c' not found.", 'type': 'invalid_request_error', 'param': 'input', 'code': None}}

I tried removing the tool call items, but then the server complains that there are tool call results without a matching tool call.

openai.BadRequestError: Error code: 400 - {'error': {'message': 'No tool call found for function call output with call_id call_4GvznfhXXkBVbpXwu2bpLjRS.', 'type': 'invalid_request_error', 'param': 'input', 'code': None}}

I do need the tool call results to be present in the context so o3 knows what happened. What do you suggest?

P.S. Switching between gpt-4.1 and o3 works great in the playground. Going from o3 to gpt-4.1 it does remove reasoning items, yes, but in the reverse direction nothing seems to be a problem.

Many thanks.

Activity

rm-openai (Collaborator) commented on May 21, 2025

Would you mind sharing a quick code snippet that reproduces this? Looking into it

maininformer (Author) commented on May 21, 2025

Definitely; the following will give the first error.

import asyncio, random
from dataclasses import replace
from agents import Agent, Runner, function_tool, handoff
from agents.handoffs import HandoffInputData
from agents import ReasoningItem

# ── filter that strips reasoning items before the hand‑off ───────────────────
def strip_reasoning_bundle(data: HandoffInputData) -> HandoffInputData:
    def transform(seq):
        # Drop reasoning items; keep every other item unchanged.
        return [item for item in seq if not isinstance(item, ReasoningItem)]
    return replace(
        data,
        pre_handoff_items = transform(data.pre_handoff_items),
        new_items         = transform(data.new_items),
    )

# ── tools ────────────────────────────────────────────────────────────────────
@function_tool
def make_haiku_about_haikus() -> str:
    return "\n".join([
        "Seventeen small breaths,",
        "A world folded into three,",
        "Haiku makes haiku."
    ])

@function_tool
def choose_random_word(words: list[str]) -> str:
    return random.choice(words)

# ── agent 3 ──────────────────────────────────────────────────────────────────
final_haiku_agent = Agent(
    name="Final‑Haikuist",
    model="o3",
    instructions=("The prior agent will supply ONE word. "
                  "Write a 5‑7‑5 haiku containing that word exactly once."),
)

# ── agent 2 (general model) ─────────────────────────────────────────────────
word_picker_agent = Agent(
    name="Word‑Picker",
    model="gpt-4.1",
    instructions=("Call `choose_random_word` on the list you receive, then hand off "
                  "to Final‑Haikuist; add no extra text."),
    tools=[choose_random_word],
    handoffs=[handoff(final_haiku_agent)],
)

# ── agent 1 ──────────────────────────────────────────────────────────────────
intro_haiku_agent = Agent(
    name="Intro‑Haikuist",
    model="o3",
    instructions=("Call `make_haiku_about_haikus`, then hand off to Word‑Picker."),
    tools=[make_haiku_about_haikus],
    # filter strips reasoning items just before the hand‑off
    handoffs=[handoff(word_picker_agent, input_filter=strip_reasoning_bundle)],
)

# ── driver ───────────────────────────────────────────────────────────────────
WORDS = ["moon", "blossom", "mountain", "breeze", "river"]

async def main() -> None:
    run = await Runner.run(intro_haiku_agent,
                           f"The candidate words are: {WORDS}")

    print(run)

if __name__ == "__main__":
    asyncio.run(main())

Since I posted, I also tried removing the function calls and converting the function call result to plain text to pass to the reasoning model. That gives an error similar to the custom-reasoning attempt:

Error getting response: Error code: 404 - {'error': {'message': "Item with id 'msg_682d6cbbd7108191a2175ab5ef8a66830aab4b9647120386' not found.", 'type': 'invalid_request_error', 'param': 'input', 'code': None}}. (request_id: req_00b8858b2f83d2c46553bb4f587f9934)
github-actions commented on May 29, 2025

This issue is stale because it has been open for 7 days with no activity.

maininformer (Author) commented on May 29, 2025

This issue is stale because it has been open for 7 days with no activity.

Not stale girlie pop, I am checking this every day.

lamaeldo commented on May 29, 2025

Seconded, I am also facing this issue: it is somehow impossible to have a multi-turn conversation with the agent by doing `next_input = result.to_input_list()`, then `result = await Runner.run(agent, next_input)` in a loop.
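The loop described above looks roughly like the sketch below. `Runner.run` and `to_input_list()` are the real SDK calls named in this thread, but a minimal stub stands in for the SDK here so the control flow is self-contained; the stub classes and item dict shapes are illustrative, not the SDK's actual types.

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class FakeResult:
    """Stand-in for the SDK's run result: final text plus the items to feed back."""
    final_output: str
    items: list = field(default_factory=list)

    def to_input_list(self) -> list:
        # The real method returns the prior input plus all generated items,
        # including any reasoning items that triggered the errors in this thread.
        return self.items + [
            {"type": "message", "role": "assistant", "content": self.final_output}
        ]


class FakeRunner:
    """Stand-in for agents.Runner; echoes how many items it received."""

    @staticmethod
    async def run(agent: str, next_input: list) -> FakeResult:
        return FakeResult(
            final_output=f"{agent} saw {len(next_input)} items",
            items=list(next_input),
        )


async def multi_turn(agent: str, user_turns: list) -> list:
    next_input: list = []
    for turn in user_turns:
        next_input.append({"type": "message", "role": "user", "content": turn})
        result = await FakeRunner.run(agent, next_input)
        # Carry the full history (tool calls, reasoning items, ...) forward.
        next_input = result.to_input_list()
    return next_input


history = asyncio.run(multi_turn("o3", ["hi", "write a haiku"]))
```

With the real SDK, it is exactly this carried-forward history that can contain reasoning items the next model rejects.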

rm-openai (Collaborator) commented on May 29, 2025

Thanks - lost track of this, but going to try to fix it today.

rm-openai (Collaborator) commented on May 30, 2025

Deployed a fix, so you shouldn't see this error anymore:

Reasoning input items can only be provided to a reasoning or computer use model. Remove reasoning items from your input and try again

Let me know if this resolves things or if there's more to be done!

maininformer (Author) commented on May 30, 2025

@rm-openai thank you. Just to check, does this change the behavior where non-reasoning models like gpt-4.1 would not accept reasoning items? If so that would be awesome!

Just to make sure I understand: This is the same behavior in playground, where if one switched from, say, o3 to gpt-4.1, there is a modal saying reasoning items will be removed. With your change, we would expect that that modal and behavior would be unnecessary, yes?

rm-openai (Collaborator) commented on May 30, 2025

That's right. Reasoning items will be ignored if passed to gpt-4.1, instead of raising an error.
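For anyone reproducing the playground's behavior by hand in the meantime, the client-side equivalent of that modal is simply filtering reasoning items out of the input list before sending it to a non-reasoning model. The item dicts below are simplified illustrations, not the exact Responses API schemas.

```python
# Client-side sketch of what the playground modal describes: dropping
# reasoning items from the conversation history before handing it to a
# non-reasoning model. Item shapes are illustrative only.
history = [
    {"type": "message", "role": "user", "content": "Pick a word."},
    {"type": "reasoning", "id": "rs_123", "summary": []},
    {"type": "function_call", "call_id": "call_abc", "name": "choose_random_word"},
    {"type": "function_call_output", "call_id": "call_abc", "output": "moon"},
]


def drop_reasoning(items: list) -> list:
    # Tool calls and their outputs stay paired; only reasoning items go.
    return [item for item in items if item.get("type") != "reasoning"]


filtered = drop_reasoning(history)
```

Note that, unlike the failed attempts earlier in the thread, this removes only the reasoning items, so every function_call_output still has its matching function_call.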

maininformer (Author) commented on May 30, 2025

@rm-openai Getting a 500; same snippet ^

openai.InternalServerError: Error code: 500 - {'error': {'message': 'An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID req_cb60ea1236ced3875808c3a8dab3ac67 in your message.', 'type': 'server_error', 'param': None, 'code': 'server_error'}}

rm-openai (Collaborator) commented on May 30, 2025

@maininformer ah, sorry about that. Just to confirm, could you run once more and make sure you keep getting a 500 error?

maininformer (Author) commented on May 30, 2025

@rm-openai Yeah, I ran this a couple of times before posting, but I just ran it 3 more times to be sure. I do see you are having a couple of disruptions, but I'm unsure if that's related.

maininformer (Author) commented on Jun 4, 2025

@rm-openai the snippet now works! 🙏🏼 🙏🏼

However, there is another thing that is broken now:

Fixed 🟢 o3 -> gpt-4.1 -> o3
Broken 🔴 o3 -> gpt-4.1 -> o4-mini. Getting a 500 again. Here is the full breaking script for convenience:

import asyncio, random
from agents import Agent, Runner, function_tool, handoff 

# ── tools ────────────────────────────────────────────────────────────────────
@function_tool
def make_haiku_about_haikus() -> str:
    return "\n".join([
        "Seventeen small breaths,",
        "A world folded into three,",
        "Haiku makes haiku."
    ])

@function_tool
def choose_random_word(words: list[str]) -> str:
    return random.choice(words)

# ── agent 3 ──────────────────────────────────────────────────────────────────
final_haiku_agent = Agent(
    name="Final‑Haikuist",
    model="o4-mini",
    instructions=("The prior agent will supply ONE word. "
                  "Write a 5‑7‑5 haiku containing that word exactly once."),
)

# ── agent 2 (general model) ─────────────────────────────────────────────────
word_picker_agent = Agent(
    name="Word‑Picker",
    model="gpt-4.1",
    instructions=("Call `choose_random_word` on the list you receive, then hand off "
                  "to Final‑Haikuist; add no extra text."),
    tools=[choose_random_word],
    handoffs=[handoff(final_haiku_agent)],
)

# ── agent 1 ──────────────────────────────────────────────────────────────────
intro_haiku_agent = Agent(
    name="Intro‑Haikuist",
    model="o3",
    instructions=("Call `make_haiku_about_haikus`, then hand off to Word‑Picker."),
    tools=[make_haiku_about_haikus],
    handoffs=[handoff(word_picker_agent)],
)

# ── driver ───────────────────────────────────────────────────────────────────
WORDS = ["moon", "blossom", "mountain", "breeze", "river"]

async def main() -> None:
    run = await Runner.run(intro_haiku_agent,
                           f"The candidate words are: {WORDS}")

    print(run)

if __name__ == "__main__":
    asyncio.run(main())

rm-openai (Collaborator) commented on Jun 4, 2025

@maininformer yeah just landed a fix earlier today.

As for o3 -> o4-mini, yeah, that's currently "expected" in the sense that reasoning items aren't necessarily transferable between different reasoning models. It shouldn't be a 500, but that's a separate issue. Let me open an internal thread and see what comes of it.

(For now I'd recommend sticking with the same reasoning model)

rm-openai (Collaborator) commented on Jun 5, 2025

@maininformer ok, your snippet should now work; landed another fix. You also won't need the handoff filter.

maininformer (Author) commented on Jun 5, 2025

yo legend! thank you very much.

rm-openai (Collaborator) commented on Jun 5, 2025

Closing; feel free to reopen or create a new issue if needed.

fitzjalen commented on Jun 6, 2025

Same error here. I'm trying to run the Critic as a multi-turn agent but hit this error. The main model is o3-mini (high) and the search model is gpt-4o.

(screenshot of the error attached)

rm-openai (Collaborator) commented on Jun 9, 2025

@fitzjalen could you create a new issue with your repro steps? Thank you!


Labels: question (Question about using the SDK)


          How would I handoff a non-reasoning model with tool calls to a reasoning model? · Issue #722 · openai/openai-agents-python