Closed
Please read this first
- Have you read the docs? Agents SDK docs: Yes
- Have you searched for related issues? Others may have had similar requests: Yes
Question
I'd like to force the LLM to do chain-of-thought reasoning with step-by-step tool calls and ultimately return a structured output built from a Pydantic model. To achieve that, I'm using StopAtTools to stop the run once the structured output has been built:
```python
tool_use_behavior = StopAtTools(stop_at_tool_names=[build_output.name])
```

where

```python
@function_tool
def build_output(foo: Foo) -> Foo:
    return foo
```
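The thread never shows `Foo` itself; a minimal sketch of what such a Pydantic model might look like (the field names are illustrative, not from the thread):

```python
from pydantic import BaseModel


class Foo(BaseModel):
    # Illustrative fields; the real model is not shown in the thread.
    reasoning: str  # the model's step-by-step explanation
    answer: str     # the final structured answer


foo = Foo(reasoning="2 + 2 = 4", answer="4")
print(foo.model_dump())  # {'reasoning': '2 + 2 = 4', 'answer': '4'}
```

Because `build_output` takes a `Foo` argument, the SDK derives the tool's JSON schema from this model, so the LLM is forced to emit arguments matching it.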
Today I can make this work by calling run_streamed, inspecting each interim tool-call result, and returning when a build_output call of the expected type appears.
Is there a way to achieve the same thing without using run_streamed?
rm-openai commented on Jun 2, 2025
@akira108 I might be missing something, but why can't you do this:
The reasoning model should automatically call tools in its CoT, and keep going until it produces an output of that type.
akira108 commented on Jun 2, 2025
@rm-openai
Thanks for the reply!
I’d love to use the reasoning model, but due to cost considerations, I’m trying to make it work with gpt-4.1.
Ref: https://cookbook.openai.com/examples/gpt4-1_prompting_guide#prompting-induced-planning--chain-of-thought
Do you have any suggestions for achieving similar behavior with gpt-4.1?
rm-openai commented on Jun 2, 2025
Oh gotcha, your approach should work for that. Note that because you're prompting the agent, it might be finicky.

You could also try o4-mini; it might fit your budget.
akira108 commented on Jun 2, 2025
@rm-openai
Thanks so much — that really helps!
I didn’t realize that even without specifying an output_type, final_output would take the tool’s return value.
Appreciate the heads-up on the two pitfalls when prompting gpt-4.1, and I’ll definitely give o4-mini a try too!