In my agent setup, I configured both local tools and MCP server tools, and set `model_settings=ModelSettings(temperature=TEMPERATURE, tool_choice="required")`, but tracing showed that only the local tools were used. Is it possible to use both local tools and MCP server tools?
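For reference, a minimal sketch of my setup (the `get_weather` tool, the MCP server command, and the temperature value are simplified placeholders, not my real code):

```python
import asyncio

from agents import Agent, ModelSettings, Runner, function_tool
from agents.mcp import MCPServerStdio

TEMPERATURE = 0.2  # placeholder; the real constant is defined elsewhere


@function_tool
def get_weather(city: str) -> str:
    """A local tool the model can call."""
    return f"It is sunny in {city}."


async def main() -> None:
    # An MCP server exposing additional tools; the command is a placeholder.
    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
        }
    ) as mcp_server:
        agent = Agent(
            name="Assistant",
            instructions="Use the tools available to answer the user.",
            tools=[get_weather],       # local tools
            mcp_servers=[mcp_server],  # MCP server tools
            model_settings=ModelSettings(
                temperature=TEMPERATURE, tool_choice="required"
            ),
        )
        result = await Runner.run(agent, "List the files in the current directory.")
        print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```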
Activity
rm-openai commented on Jun 2, 2025
Yes it is. `tool_choice=required` tells the model it must use *some* tool, but leaves the specific tool up to the model. You could add more guidance to your prompt, or use an input that clearly requires the MCP tool.

Also note that the MCP server tool is called as part of the model, so it will show up in the call to the Responses API in the tracing UI (i.e. the row that says `POST /v1/responses`).
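For example, `tool_choice` also accepts the name of one specific tool if you want to force it. A minimal sketch, assuming a hypothetical MCP tool named `read_file`:

```python
from agents import Agent, ModelSettings

# tool_choice accepts "auto", "required", "none", or the name of one
# specific tool. "read_file" is a hypothetical MCP tool name here.
agent = Agent(
    name="Assistant",
    instructions="Always use the MCP file tools to answer.",
    model_settings=ModelSettings(tool_choice="read_file"),
)
```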
devtalker commented on Jun 9, 2025
You need to check whether the request sent to the LLM API includes all of your tools. If it does, there's no issue: which tool gets selected depends on your prompt and the LLM's decision-making, not on the Agents SDK itself.
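One quick way to check, assuming the Python SDK: turn on the SDK's debug logging, which prints each model request along with the tool definitions it carries.

```python
from agents import enable_verbose_stdout_logging

# Print the SDK's debug logs to stdout; each request to the model is
# logged with its tool definitions, so you can confirm that both the
# local tools and the MCP tools are included.
enable_verbose_stdout_logging()
```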
NCCYUNSONG commented on Jun 9, 2025
Thanks a lot. I found the reason: the prompt was not strong enough to make the model use the MCP tools.