Comparing changes

base repository: johntango/openai-agents-python01
base: f976349
head repository: openai/openai-agents-python
compare: 5c7c678

Commits on May 14, 2025

  1. Fixed a bug for "detail" attribute in input image (openai#685)

    When an image is provided as input, the code tried to access the
    'detail' key, which may not be present, as noted in openai#159.
    
    With this pull request, it now reads the key defensively and falls
    back to `None` when it is absent.
    @pakrym-oai or @rm-openai let me know if you want any changes.
    DanieleMorotti authored May 14, 2025
    2c46dae
  2. feat: pass extra_body through to LiteLLM acompletion (openai#638)

    **Purpose**  
    Allow arbitrary `extra_body` parameters (e.g. `cached_content`) to be
    forwarded into the LiteLLM call. Useful for context caching in Gemini
    models
    ([docs](https://ai.google.dev/gemini-api/docs/caching?lang=python)).
    
    **Example usage**  
    ```python
    import os
    from agents import Agent, ModelSettings
    from agents.extensions.models.litellm_model import LitellmModel
    
    cache_name = "cachedContents/34jopukfx5di"  # previously stored context
    
    gemini_model = LitellmModel(
        model="gemini/gemini-1.5-flash-002",
        api_key=os.getenv("GOOGLE_API_KEY")
    )
    
    agent = Agent(
        name="Cached Gemini Agent",
        model=gemini_model,
        model_settings=ModelSettings(
            extra_body={"cached_content": cache_name}
        )
    )
    ```
    AshokSaravanan222 authored May 14, 2025
    1994f9d
  3. Update search_agent.py (openai#677)

    Added missing word "be" in prompt instructions.
    
    This is unlikely to change the agent functionality in most cases, but
    optimal clarity in prompt language is a best practice.
    leohpark authored May 14, 2025
    02b6e70
  4. feat: Streamable HTTP support (openai#643)

    Co-authored-by: aagarwal25 <akshit_agarwal@intuit.com>
    Akshit97 and aagarwal25 authored May 14, 2025
    1847008

Commits on May 15, 2025

  1. v0.0.15 (openai#701)

    rm-openai authored May 15, 2025
    5fe096d

Commits on May 18, 2025

  1. Create AGENTS.md (openai#707)

    Adding an AGENTS.md file for Codex use
    dkundel-openai authored May 18, 2025
    c282324
  2. Added mcp 'instructions' attribute to the server (openai#706)

    Added the `instructions` attribute to the MCP servers to solve openai#704 .
    
    Let me know if you want to add an example to the documentation.
    DanieleMorotti authored May 18, 2025
    003cbfe

Commits on May 19, 2025

  1. 428c9a6

Commits on May 20, 2025

  1. Dev/add usage details to Usage class (openai#726)

    PR to enhance the `Usage` object and related logic to support more
    granular token accounting, matching the details available in the [OpenAI
    Responses API](https://platform.openai.com/docs/api-reference/responses).
    Specifically, it:
    
    - Adds `input_tokens_details` and `output_tokens_details` fields to the
    `Usage` dataclass, storing detailed token breakdowns (e.g.,
    `cached_tokens`, `reasoning_tokens`).
    - Flows this change through the related logic
    - Updates and extends tests to match
    - Adds a test for the `Usage.add` method
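Shape-wise, the enhanced `Usage` object can be sketched like this (a simplified illustration; the real dataclass carries more fields):

```python
from dataclasses import dataclass, field


@dataclass
class InputTokensDetails:
    cached_tokens: int = 0


@dataclass
class OutputTokensDetails:
    reasoning_tokens: int = 0


@dataclass
class Usage:
    input_tokens: int = 0
    output_tokens: int = 0
    input_tokens_details: InputTokensDetails = field(default_factory=InputTokensDetails)
    output_tokens_details: OutputTokensDetails = field(default_factory=OutputTokensDetails)

    def add(self, other: "Usage") -> None:
        # Sum both the top-level counts and the detailed breakdowns.
        self.input_tokens += other.input_tokens
        self.output_tokens += other.output_tokens
        self.input_tokens_details.cached_tokens += other.input_tokens_details.cached_tokens
        self.output_tokens_details.reasoning_tokens += other.output_tokens_details.reasoning_tokens
```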
    
    ### Motivation
    - Aligns the SDK’s usage with the latest OpenAI responses API Usage
    object
    - Supports downstream use cases that require fine-grained token usage
    data (e.g., billing, analytics, optimization) requested by startups
    
    ---------
    
    Co-authored-by: Wulfie Bain <wulfie@openai.com>
    WJPBProjects authored May 20, 2025
    466b44d

Commits on May 21, 2025

  1. Upgrade openAI sdk version (openai#730)

    ---
    [//]: # (BEGIN SAPLING FOOTER)
    * openai#732
    * openai#731
    * __->__ openai#730
    rm-openai authored May 21, 2025
    ce2e2a4
  2. Hosted MCP support (openai#731)

    ---
    [//]: # (BEGIN SAPLING FOOTER)
    * openai#732
    * __->__ openai#731
    rm-openai authored May 21, 2025
    9fa5c39
  3. 079764f
  4. v0.0.16 (openai#733)

    rm-openai authored May 21, 2025
    1992be3
  5. 1364f44

Commits on May 23, 2025

  1. Fix visualization recursion with cycle detection (openai#737)

    ## Summary
    - avoid infinite recursion in visualization by tracking visited agents
    - test cycle detection in graph utility
    
    ## Testing
    - `make mypy`
    - `make tests` 
    
    Resolves openai#668
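The cycle-detection idea can be sketched as follows (illustrative only; the real visualization utility builds Graphviz output rather than a name list):

```python
def collect_agent_names(agent, visited=None) -> list[str]:
    # Track visited agents by identity so cyclic handoff graphs terminate.
    if visited is None:
        visited = set()
    if id(agent) in visited:
        return []
    visited.add(id(agent))
    names = [agent.name]
    for handoff in getattr(agent, "handoffs", []):
        names.extend(collect_agent_names(handoff, visited))
    return names
```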
    rm-openai authored May 23, 2025
    db462e3
  2. Update MCP and tool docs (openai#736)

    ## Summary
    - mention MCPServerStreamableHttp in MCP server docs
    - document CodeInterpreterTool, HostedMCPTool, ImageGenerationTool and
    LocalShellTool
    - update Japanese translations
    rm-openai authored May 23, 2025
    a96108e
  3. Fix Gemini API content filter handling (openai#746)

    ## Summary
    - avoid AttributeError when Gemini API returns `None` for chat message
    - return empty output if message is filtered
    - add regression test
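The guard described above boils down to something like this (a sketch with hypothetical helper names; the actual SDK code lives in the chat-completions conversion layer):

```python
def extract_assistant_output(response) -> list[dict]:
    # Gemini (via the OpenAI-compatible API) may return a choice whose
    # message is None when the content filter triggers; return empty output
    # instead of raising AttributeError.
    message = response.choices[0].message if response.choices else None
    if message is None or message.content is None:
        return []
    return [{"role": "assistant", "content": message.content}]
```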
    
    ## Testing
    - `make format`
    - `make lint`
    - `make mypy`
    - `make tests`
    
    Towards openai#744
    rm-openai authored May 23, 2025
    6e078bf

Commits on May 29, 2025

  1. Add Portkey AI as a tracing provider (openai#785)

    This PR adds Portkey AI as a tracing provider. Portkey helps you take
    your OpenAI agents from prototype to production.
    
    Portkey turns your experimental OpenAI Agents into production-ready
    systems by providing:
    
    - Complete observability of every agent step, tool use, and interaction
    - Built-in reliability with fallbacks, retries, and load balancing
    - Cost tracking and optimization to manage your AI spend
    - Access to 1600+ LLMs through a single integration
    - Guardrails to keep agent behavior safe and compliant
    - Version-controlled prompts for consistent agent performance
    
    
    Towards openai#786
    siddharthsambharia-portkey authored May 29, 2025
    d46e2ec
  2. Added RunErrorDetails object for MaxTurnsExceeded exception (openai#743)

    ### Summary
    
    Introduced the `RunErrorDetails` object to get partial results from a
    run interrupted by `MaxTurnsExceeded` exception. In this proposal the
    `RunErrorDetails` object contains all the fields from `RunResult` with
    `final_output` set to `None` and `output_guardrail_results` set to an
    empty list. We can decide to return less information.
    
    @rm-openai At the moment the exception doesn't return the
    `RunErrorDetails` object for the streaming mode. Do you have any
    suggestions on how to deal with it in the `_check_errors` function of
    the `agents/result.py` file?
    
    ### Test plan
    
    I have not implemented any tests currently, but if needed I can
    implement a basic test to retrieve partial data.
    
    ### Issue number
    
    This PR is an attempt to solve issue openai#719 
    
    ### Checks
    
    - [x] I've added new tests (if relevant)
    - [ ] I've added/updated the relevant documentation
    - [x] I've run `make lint` and `make format`
    - [x] I've made sure tests pass
    DanieleMorotti authored May 29, 2025
    7196862
  3. 47fa8e8

Commits on May 30, 2025

  1. Small fix for litellm model (openai#789)

    Small fix:
    
    Removing `import litellm.types`: it sits outside the try/except block
    that guards the litellm import, so the friendly import-error message
    was never shown, and the line isn't needed anyway. I was reproducing a
    GitHub issue and came across this in the process.
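The guarded-import pattern the fix restores looks roughly like this (a sketch; the SDK raises its own error type rather than setting a flag):

```python
try:
    import litellm  # noqa: F401  # optional dependency
    HAS_LITELLM = True
except ImportError:
    # Any litellm import placed outside this block would raise before this
    # friendly fallback could take effect.
    HAS_LITELLM = False
```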
    robtinn authored May 30, 2025
    b699d9a
  2. Fix typo in assertion message for handoff function (openai#780)

    ### Overview
    
    This PR fixes a typo in the assert statement within the `handoff`
    function in `handoffs.py`, changing `'on_input'` to `'on_handoff'` for
    accuracy and clarity.
    
    ### Changes
    
    - Corrected the word “on_input” to “on_handoff” in the docstring.
    
    ### Motivation
    
    Clear and correct documentation improves code readability and reduces
    confusion for users and contributors.
    
    ### Checklist
    
    - [x] I have reviewed the docstring after making the change.
    - [x] No functionality is affected.
    - [x] The change follows the repository’s contribution guidelines.
    Rehan-Ul-Haq authored May 30, 2025
    16fb29c
  3. Fix typo: Replace 'two' with 'three' in /docs/mcp.md (openai#757)

    The documentation in `docs/mcp.md` listed three server types (stdio,
    HTTP over SSE, Streamable HTTP) but incorrectly stated "two kinds of
    servers" in the heading. This PR fixes the numerical discrepancy.
    
    **Changes:** 
    
    - Modified from "two kinds of servers" to "three kinds of servers". 
    - File: `docs/mcp.md` (line 11).
    luochang212 authored May 30, 2025
    0a28d71
  4. Update input_guardrails.py (openai#774)

    Changed the function comment, as input guardrails only deal with input
    messages.
    venkatnaveen7 authored May 30, 2025
    ad80f78
  5. docs: fix typo in docstring for is_strict_json_schema method (openai#775)
    
    ### Overview
    
    This PR fixes a small typo in the docstring of the
    `is_strict_json_schema` abstract method of the `AgentOutputSchemaBase`
    class in `agent_output.py`.
    
    ### Changes
    
    - Corrected the word “valis” to “valid” in the docstring.
    
    ### Motivation
    
    Clear and correct documentation improves code readability and reduces
    confusion for users and contributors.
    
    ### Checklist
    
    - [x] I have reviewed the docstring after making the change.
    - [x] No functionality is affected.
    - [x] The change follows the repository’s contribution guidelines.
    Rehan-Ul-Haq authored May 30, 2025
    6438350
  6. Add comment to handoff_occured misspelling (openai#792)

    People keep trying to fix this, but it's a breaking change.
    rm-openai authored May 30, 2025
    cfe9099

Commits on Jun 2, 2025

  1. Fix openai#777 by handling MCPCall events in RunImpl (openai#799)

    This pull request resolves openai#777. If you think we should introduce
    a new item type for MCP call output, please let me know. Since other
    hosted tools use this event, I believe reusing the same one should be fine.
    seratch authored Jun 2, 2025
    3e7b286
  2. Ensure item.model_dump only contains JSON serializable types (openai#801)
    
    The EmbeddedResource from an MCP tool call contains a field of type
    AnyUrl that is not JSON-serializable. To avoid this exception, use
    item.model_dump(mode="json") to ensure a JSON-serializable return value.
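To see why `mode="json"` matters, here is a minimal Pydantic sketch (a standalone illustration, not the SDK's code):

```python
import json

from pydantic import AnyUrl, BaseModel


class Resource(BaseModel):
    uri: AnyUrl


item = Resource(uri="https://example.com/doc")
# mode="python" (the default) keeps the AnyUrl instance, which json.dumps
# may reject; mode="json" coerces every field to a JSON-serializable type.
payload = item.model_dump(mode="json")
print(json.dumps(payload))
```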
    westhood authored Jun 2, 2025
    775d3e2
  3. Don't cache agent tools during a run (openai#803)

    ### Summary:
    Towards openai#767. We were caching the list of tools for an agent, so if you
    did `agent.tools.append(...)` from a tool call, the next call to the
    model wouldn't include the new tool. This is a bug.
    
    ### Test Plan:
    Unit tests. Note that MCP tools are now listed each time the agent runs
    (users can still cache the `list_tools` result, however).
    rm-openai authored Jun 2, 2025
    d4c7a23
  4. Only start tracing worker thread on first span/trace (openai#804)

    Closes openai#796. We shouldn't start a busy-waiting thread if there
    aren't any traces.
    
    Test plan
    ```
    import threading
    assert threading.active_count() == 1
    import agents
    assert threading.active_count() == 1
    ```
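The lazy-start behavior can be sketched like this (illustrative; the real tracing processor drains a queue of spans on the worker thread):

```python
import threading


class TraceExporter:
    """Sketch of lazy worker startup: no background thread until the first trace."""

    def __init__(self):
        self._thread = None
        self._lock = threading.Lock()

    def _ensure_thread(self):
        # Start the worker at most once, and only when work actually arrives.
        with self._lock:
            if self._thread is None:
                self._thread = threading.Thread(target=self._run, daemon=True)
                self._thread.start()

    def _run(self):
        pass  # real code would drain a queue of spans here

    def export(self, span):
        self._ensure_thread()
```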
    rm-openai authored Jun 2, 2025
    995af4d

Commits on Jun 3, 2025

  1. Add is_enabled to FunctionTool (openai#808)

    ### Summary:
    Allows a user to do `function_tool(is_enabled=<some_callable>)`; the
    callable is called when the agent runs.
    
    This allows you to dynamically enable/disable a tool based on the
    context/env.
    
    The meta-goal is to allow `Agent` to be effectively immutable. That
    enables some nice things down the line, and this allows you to
    dynamically modify the tools list without mutating the agent.
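The dynamic enable/disable mechanism can be sketched generically (this is not the SDK's actual implementation; the callable's signature here is an assumption):

```python
from typing import Callable, Union


class FunctionTool:
    """Minimal sketch of a tool with an is_enabled flag or callable."""

    def __init__(self, name: str, is_enabled: Union[bool, Callable[..., bool]] = True):
        self.name = name
        self.is_enabled = is_enabled


def enabled_tools(tools: list, context) -> list:
    # Filter at run time: callables are evaluated against the current context,
    # so the agent itself never needs to be mutated.
    result = []
    for tool in tools:
        enabled = tool.is_enabled(context) if callable(tool.is_enabled) else tool.is_enabled
        if enabled:
            result.append(tool)
    return result
```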
    
    ### Test Plan:
    Unit tests
    rm-openai authored Jun 3, 2025
    4046fcb

Commits on Jun 4, 2025

  1. v0.0.17 (openai#809)

    bump version
    rm-openai authored Jun 4, 2025
    204bec1
  2. 4a529e6
  3. 05db7a6
  4. Add release documentation (openai#814)

    ## Summary
    - describe semantic versioning and release steps
    - add release page to documentation nav
    
    ## Testing
    - `make format`
    - `make lint`
    - `make mypy`
    - `make tests`
    - `make build-docs`
    
    
    ------
    https://chatgpt.com/codex/tasks/task_i_68409d25afdc83218ad362d10c8a80a1
    rm-openai authored Jun 4, 2025
    5c7c678
Showing with 1,952 additions and 173 deletions.
  1. +69 −0 AGENTS.md
  2. +3 −0 README.md
  3. +5 −4 docs/ja/mcp.md
  4. +22 −0 docs/ja/repl.md
  5. +4 −0 docs/ja/tools.md
  6. +2 −1 docs/ja/tracing.md
  7. +4 −3 docs/mcp.md
  8. +6 −0 docs/ref/repl.md
  9. +18 −0 docs/release.md
  10. +19 −0 docs/repl.md
  11. +5 −1 docs/tools.md
  12. +2 −0 docs/tracing.md
  13. +1 −1 examples/agent_patterns/input_guardrails.py
  14. 0 examples/hosted_mcp/__init__.py
  15. +61 −0 examples/hosted_mcp/approvals.py
  16. +47 −0 examples/hosted_mcp/simple.py
  17. +13 −0 examples/mcp/streamablehttp_example/README.md
  18. +83 −0 examples/mcp/streamablehttp_example/main.py
  19. +33 −0 examples/mcp/streamablehttp_example/server.py
  20. +1 −1 examples/research_bot/agents/search_agent.py
  21. +34 −0 examples/tools/code_interpreter.py
  22. +54 −0 examples/tools/image_generator.py
  23. +4 −0 mkdocs.yml
  24. +3 −3 pyproject.toml
  25. +22 −0 src/agents/__init__.py
  26. +230 −6 src/agents/_run_impl.py
  27. +19 −3 src/agents/agent.py
  28. +1 −1 src/agents/agent_output.py
  29. +38 −5 src/agents/exceptions.py
  30. +15 −1 src/agents/extensions/models/litellm_model.py
  31. +35 −18 src/agents/extensions/visualization.py
  32. +1 −1 src/agents/handoffs.py
  33. +57 −3 src/agents/items.py
  34. +4 −0 src/agents/mcp/__init__.py
  35. +101 −9 src/agents/mcp/server.py
  36. +1 −1 src/agents/mcp/util.py
  37. +1 −1 src/agents/models/chatcmpl_converter.py
  38. +25 −1 src/agents/models/chatcmpl_stream_handler.py
  39. +31 −6 src/agents/models/openai_chatcompletions.py
  40. +44 −13 src/agents/models/openai_responses.py
  41. +65 −0 src/agents/repl.py
  42. +43 −13 src/agents/result.py
  43. +35 −6 src/agents/run.py
  44. +3 −0 src/agents/stream_events.py
  45. +128 −3 src/agents/tool.py
  46. +29 −3 src/agents/tracing/processors.py
  47. +21 −1 src/agents/usage.py
  48. +12 −0 src/agents/util/_pretty_print.py
  49. +2 −0 src/agents/voice/model.py
  50. +1 −0 tests/fake_model.py
  51. +39 −21 tests/mcp/test_mcp_tracing.py
  52. +14 −2 tests/models/test_litellm_chatcompletions_stream.py
  53. +44 −0 tests/models/test_litellm_extra_body.py
  54. +35 −0 tests/test_agent_runner.py
  55. +37 −0 tests/test_agent_runner_streamed.py
  56. +14 −6 tests/test_extra_headers.py
  57. +42 −1 tests/test_function_tool.py
  58. +49 −2 tests/test_openai_chatcompletions.py
  59. +14 −2 tests/test_openai_chatcompletions_stream.py
  60. +28 −0 tests/test_repl.py
  61. +23 −1 tests/test_responses_tracing.py
  62. +48 −0 tests/test_run_error_details.py
  63. +1 −1 tests/test_run_step_execution.py
  64. +18 −14 tests/test_run_step_processing.py
  65. +0 −4 tests/test_tracing_errors_streamed.py
  66. +52 −0 tests/test_usage.py
  67. +15 −0 tests/test_visualization.py
  68. +2 −0 tests/voice/test_workflow.py
  69. +20 −10 uv.lock
69 changes: 69 additions & 0 deletions AGENTS.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,69 @@
Welcome to the OpenAI Agents SDK repository. This file contains the main points for new contributors.

## Repository overview

- **Source code**: `src/agents/` contains the implementation.
- **Tests**: `tests/` with a short guide in `tests/README.md`.
- **Examples**: under `examples/`.
- **Documentation**: markdown pages live in `docs/` with `mkdocs.yml` controlling the site.
- **Utilities**: developer commands are defined in the `Makefile`.
- **PR template**: `.github/PULL_REQUEST_TEMPLATE/pull_request_template.md` describes the information every PR must include.

## Local workflow

1. Format, lint and type‑check your changes:

```bash
make format
make lint
make mypy
```

2. Run the tests:

```bash
make tests
```

To run a single test, use `uv run pytest -s -k <test_name>`.

3. Build the documentation (optional but recommended for docs changes):

```bash
make build-docs
```

Coverage can be generated with `make coverage`.

## Snapshot tests

Some tests rely on inline snapshots. See `tests/README.md` for details on updating them:

```bash
make snapshots-fix # update existing snapshots
make snapshots-create # create new snapshots
```

Run `make tests` again after updating snapshots to ensure they pass.

## Style notes

- Write comments as full sentences and end them with a period.

## Pull request expectations

PRs should use the template located at `.github/PULL_REQUEST_TEMPLATE/pull_request_template.md`. Provide a summary, test plan and issue number if applicable, then check that:

- New tests are added when needed.
- Documentation is updated.
- `make lint` and `make format` have been run.
- The full test suite passes.

Commit messages should be concise and written in the imperative mood. Small, focused commits are preferred.

## What reviewers look for

- Tests covering new behaviour.
- Consistent style: code formatted with `ruff format`, imports sorted, and type hints passing `mypy`.
- Clear documentation for any public API changes.
- Clean history and a helpful PR description.
3 changes: 3 additions & 0 deletions README.md
@@ -4,6 +4,9 @@ The OpenAI Agents SDK is a lightweight yet powerful framework for building multi

<img src="https://cdn.openai.com/API/docs/images/orchestration.png" alt="Image of the Agents Tracing UI" style="max-height: 803px;">

> [!NOTE]
> Looking for the JavaScript/TypeScript version? Check out [Agents SDK JS/TS](https://github.com/openai/openai-agents-js).
### Core concepts:

1. [**Agents**](https://openai.github.io/openai-agents-python/agents): LLMs configured with instructions, tools, guardrails, and handoffs
9 changes: 5 additions & 4 deletions docs/ja/mcp.md
@@ -12,12 +12,13 @@ Agents SDK は MCP をサポートしており、これにより幅広い MCP

## MCP サーバー

現在、MCP 仕様では使用するトランスポート方式に基づき 2 種類のサーバーが定義されています。
現在、MCP 仕様では使用するトランスポート方式に基づき 3 種類のサーバーが定義されています。

1. **stdio** サーバー: アプリケーションのサブプロセスとして実行されます。ローカルで動かすイメージです。
1. **stdio** サーバー: アプリケーションのサブプロセスとして実行されます。ローカルで動かすイメージです。
2. **HTTP over SSE** サーバー: リモートで動作し、 URL 経由で接続します。
3. **Streamable HTTP** サーバー: MCP 仕様に定義された Streamable HTTP トランスポートを使用してリモートで動作します。

これらのサーバーへは [`MCPServerStdio`][agents.mcp.server.MCPServerStdio][`MCPServerSse`][agents.mcp.server.MCPServerSse] クラスを使用して接続できます。
これらのサーバーへは [`MCPServerStdio`][agents.mcp.server.MCPServerStdio][`MCPServerSse`][agents.mcp.server.MCPServerSse][`MCPServerStreamableHttp`][agents.mcp.server.MCPServerStreamableHttp] クラスを使用して接続できます。

たとえば、[公式 MCP filesystem サーバー](https://www.npmjs.com/package/@modelcontextprotocol/server-filesystem)を利用する場合は次のようになります。

@@ -46,7 +47,7 @@ agent=Agent(

## キャッシュ

エージェントが実行されるたびに、MCP サーバーへ `list_tools()` が呼び出されます。サーバーがリモートの場合は特にレイテンシが発生します。ツール一覧を自動でキャッシュしたい場合は、[`MCPServerStdio`][agents.mcp.server.MCPServerStdio][`MCPServerSse`][agents.mcp.server.MCPServerSse] の両方に `cache_tools_list=True` を渡してください。ツール一覧が変更されないと確信できる場合のみ使用してください。
エージェントが実行されるたびに、MCP サーバーへ `list_tools()` が呼び出されます。サーバーがリモートの場合は特にレイテンシが発生します。ツール一覧を自動でキャッシュしたい場合は、[`MCPServerStdio`][agents.mcp.server.MCPServerStdio][`MCPServerSse`][agents.mcp.server.MCPServerSse][`MCPServerStreamableHttp`][agents.mcp.server.MCPServerStreamableHttp] の各クラスに `cache_tools_list=True` を渡してください。ツール一覧が変更されないと確信できる場合のみ使用してください。

キャッシュを無効化したい場合は、サーバーで `invalidate_tools_cache()` を呼び出します。

22 changes: 22 additions & 0 deletions docs/ja/repl.md
@@ -0,0 +1,22 @@
---
search:
exclude: true
---
# REPL ユーティリティ

`run_demo_loop` を使うと、ターミナルから手軽にエージェントを試せます。

```python
import asyncio
from agents import Agent, run_demo_loop

async def main() -> None:
agent = Agent(name="Assistant", instructions="あなたは親切なアシスタントです")
await run_demo_loop(agent)

if __name__ == "__main__":
asyncio.run(main())
```

`run_demo_loop` は入力を繰り返し受け取り、会話履歴を保持したままエージェントを実行します。既定ではストリーミング出力を表示します。
`quit` または `exit` と入力するか `Ctrl-D` を押すと終了します。
4 changes: 4 additions & 0 deletions docs/ja/tools.md
@@ -17,6 +17,10 @@ OpenAI は [`OpenAIResponsesModel`][agents.models.openai_responses.OpenAIRespons
- [`WebSearchTool`][agents.tool.WebSearchTool] はエージェントに Web 検索を行わせます。
- [`FileSearchTool`][agents.tool.FileSearchTool] は OpenAI ベクトルストアから情報を取得します。
- [`ComputerTool`][agents.tool.ComputerTool] はコンピュータ操作タスクを自動化します。
- [`CodeInterpreterTool`][agents.tool.CodeInterpreterTool] はサンドボックス環境でコードを実行します。
- [`HostedMCPTool`][agents.tool.HostedMCPTool] はリモート MCP サーバーのツールをモデルから直接利用できるようにします。
- [`ImageGenerationTool`][agents.tool.ImageGenerationTool] はプロンプトから画像を生成します。
- [`LocalShellTool`][agents.tool.LocalShellTool] はローカルマシンでシェルコマンドを実行します。

```python
from agents import Agent, FileSearchTool, Runner, WebSearchTool
3 changes: 2 additions & 1 deletion docs/ja/tracing.md
@@ -119,4 +119,5 @@ async def main():
- [Comet Opik](https://www.comet.com/docs/opik/tracing/integrations/openai_agents)
- [Langfuse](https://langfuse.com/docs/integrations/openaiagentssdk/openai-agents)
- [Langtrace](https://docs.langtrace.ai/supported-integrations/llm-frameworks/openai-agents-sdk)
- [Okahu‑Monocle](https://github.com/monocle2ai/monocle)
- [Okahu‑Monocle](https://github.com/monocle2ai/monocle)
- [Portkey AI](https://portkey.ai/docs/integrations/agents/openai-agents)
7 changes: 4 additions & 3 deletions docs/mcp.md
@@ -8,12 +8,13 @@ The Agents SDK has support for MCP. This enables you to use a wide range of MCP

## MCP servers

Currently, the MCP spec defines two kinds of servers, based on the transport mechanism they use:
Currently, the MCP spec defines three kinds of servers, based on the transport mechanism they use:

1. **stdio** servers run as a subprocess of your application. You can think of them as running "locally".
2. **HTTP over SSE** servers run remotely. You connect to them via a URL.
3. **Streamable HTTP** servers run remotely using the Streamable HTTP transport defined in the MCP spec.

You can use the [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse] classes to connect to these servers.
You can use the [`MCPServerStdio`][agents.mcp.server.MCPServerStdio], [`MCPServerSse`][agents.mcp.server.MCPServerSse], and [`MCPServerStreamableHttp`][agents.mcp.server.MCPServerStreamableHttp] classes to connect to these servers.

For example, this is how you'd use the [official MCP filesystem server](https://www.npmjs.com/package/@modelcontextprotocol/server-filesystem).

@@ -42,7 +43,7 @@ agent=Agent(

## Caching

Every time an Agent runs, it calls `list_tools()` on the MCP server. This can be a latency hit, especially if the server is a remote server. To automatically cache the list of tools, you can pass `cache_tools_list=True` to both [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse]. You should only do this if you're certain the tool list will not change.
Every time an Agent runs, it calls `list_tools()` on the MCP server. This can be a latency hit, especially if the server is a remote server. To automatically cache the list of tools, you can pass `cache_tools_list=True` to [`MCPServerStdio`][agents.mcp.server.MCPServerStdio], [`MCPServerSse`][agents.mcp.server.MCPServerSse], and [`MCPServerStreamableHttp`][agents.mcp.server.MCPServerStreamableHttp]. You should only do this if you're certain the tool list will not change.

If you want to invalidate the cache, you can call `invalidate_tools_cache()` on the servers.

6 changes: 6 additions & 0 deletions docs/ref/repl.md
@@ -0,0 +1,6 @@
# `repl`

::: agents.repl
options:
members:
- run_demo_loop
18 changes: 18 additions & 0 deletions docs/release.md
@@ -0,0 +1,18 @@
# Release process

The project follows a slightly modified version of semantic versioning using the form `0.Y.Z`. The leading `0` indicates the SDK is still evolving rapidly. Increment the components as follows:

## Minor (`Y`) versions

We will increase minor versions `Y` for **breaking changes** to any public interfaces that are not marked as beta. For example, going from `0.0.x` to `0.1.x` might include breaking changes.

If you don't want breaking changes, we recommend pinning to `0.0.x` versions in your project.

## Patch (`Z`) versions

We will increment `Z` for non-breaking changes:

- Bug fixes
- New features
- Changes to private interfaces
- Updates to beta features
19 changes: 19 additions & 0 deletions docs/repl.md
@@ -0,0 +1,19 @@
# REPL utility

The SDK provides `run_demo_loop` for quick interactive testing.

```python
import asyncio
from agents import Agent, run_demo_loop

async def main() -> None:
agent = Agent(name="Assistant", instructions="You are a helpful assistant.")
await run_demo_loop(agent)

if __name__ == "__main__":
asyncio.run(main())
```

`run_demo_loop` prompts for user input in a loop, keeping the conversation
history between turns. By default it streams model output as it is produced.
Type `quit` or `exit` (or press `Ctrl-D`) to leave the loop.
6 changes: 5 additions & 1 deletion docs/tools.md
@@ -13,6 +13,10 @@ OpenAI offers a few built-in tools when using the [`OpenAIResponsesModel`][agent
- The [`WebSearchTool`][agents.tool.WebSearchTool] lets an agent search the web.
- The [`FileSearchTool`][agents.tool.FileSearchTool] allows retrieving information from your OpenAI Vector Stores.
- The [`ComputerTool`][agents.tool.ComputerTool] allows automating computer use tasks.
- The [`CodeInterpreterTool`][agents.tool.CodeInterpreterTool] lets the LLM execute code in a sandboxed environment.
- The [`HostedMCPTool`][agents.tool.HostedMCPTool] exposes a remote MCP server's tools to the model.
- The [`ImageGenerationTool`][agents.tool.ImageGenerationTool] generates images from a prompt.
- The [`LocalShellTool`][agents.tool.LocalShellTool] runs shell commands on your machine.

```python
from agents import Agent, FileSearchTool, Runner, WebSearchTool
@@ -266,7 +270,7 @@ The `agent.as_tool` function is a convenience method to make it easy to turn an
```python
@function_tool
async def run_my_agent() -> str:
"""A tool that runs the agent with custom configs".
"""A tool that runs the agent with custom configs"""

agent = Agent(name="My agent", instructions="...")

2 changes: 2 additions & 0 deletions docs/tracing.md
@@ -115,3 +115,5 @@ To customize this default setup, to send traces to alternative or additional bac
- [Langfuse](https://langfuse.com/docs/integrations/openaiagentssdk/openai-agents)
- [Langtrace](https://docs.langtrace.ai/supported-integrations/llm-frameworks/openai-agents-sdk)
- [Okahu-Monocle](https://github.com/monocle2ai/monocle)
- [Galileo](https://v2docs.galileo.ai/integrations/openai-agent-integration#openai-agent-integration)
- [Portkey AI](https://portkey.ai/docs/integrations/agents/openai-agents)
2 changes: 1 addition & 1 deletion examples/agent_patterns/input_guardrails.py
@@ -20,7 +20,7 @@
Guardrails are checks that run in parallel to the agent's execution.
They can be used to do things like:
- Check if input messages are off-topic
- Check that output messages don't violate any policies
- Check that input messages don't violate any policies
- Take over control of the agent's execution if an unexpected input is detected
In this example, we'll setup an input guardrail that trips if the user is asking to do math homework.
Empty file added examples/hosted_mcp/__init__.py
Empty file.
61 changes: 61 additions & 0 deletions examples/hosted_mcp/approvals.py
@@ -0,0 +1,61 @@
import argparse
import asyncio

from agents import (
Agent,
HostedMCPTool,
MCPToolApprovalFunctionResult,
MCPToolApprovalRequest,
Runner,
)

"""This example demonstrates how to use the hosted MCP support in the OpenAI Responses API, with
approval callbacks."""


def approval_callback(request: MCPToolApprovalRequest) -> MCPToolApprovalFunctionResult:
answer = input(f"Approve running the tool `{request.data.name}`? (y/n) ")
result: MCPToolApprovalFunctionResult = {"approve": answer == "y"}
if not result["approve"]:
result["reason"] = "User denied"
return result


async def main(verbose: bool, stream: bool):
agent = Agent(
name="Assistant",
tools=[
HostedMCPTool(
tool_config={
"type": "mcp",
"server_label": "gitmcp",
"server_url": "https://gitmcp.io/openai/codex",
"require_approval": "always",
},
on_approval_request=approval_callback,
)
],
)

if stream:
result = Runner.run_streamed(agent, "Which language is this repo written in?")
async for event in result.stream_events():
if event.type == "run_item_stream_event":
print(f"Got event of type {event.item.__class__.__name__}")
print(f"Done streaming; final result: {result.final_output}")
else:
res = await Runner.run(agent, "Which language is this repo written in?")
print(res.final_output)

if verbose:
for item in res.new_items:
print(item)


if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true", default=False)
parser.add_argument("--stream", action="store_true", default=False)
args = parser.parse_args()

asyncio.run(main(args.verbose, args.stream))
47 changes: 47 additions & 0 deletions examples/hosted_mcp/simple.py
@@ -0,0 +1,47 @@
import argparse
import asyncio

from agents import Agent, HostedMCPTool, Runner

"""This example demonstrates how to use the hosted MCP support in the OpenAI Responses API, with
approvals not required for any tools. You should only use this for trusted MCP servers."""


async def main(verbose: bool, stream: bool):
agent = Agent(
name="Assistant",
tools=[
HostedMCPTool(
tool_config={
"type": "mcp",
"server_label": "gitmcp",
"server_url": "https://gitmcp.io/openai/codex",
"require_approval": "never",
}
)
],
)

if stream:
result = Runner.run_streamed(agent, "Which language is this repo written in?")
async for event in result.stream_events():
if event.type == "run_item_stream_event":
print(f"Got event of type {event.item.__class__.__name__}")
print(f"Done streaming; final result: {result.final_output}")
else:
res = await Runner.run(agent, "Which language is this repo written in?")
print(res.final_output)
# The repository is primarily written in multiple languages, including Rust and TypeScript...

if verbose:
for item in res.new_items:
print(item)


if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true", default=False)
parser.add_argument("--stream", action="store_true", default=False)
args = parser.parse_args()

asyncio.run(main(args.verbose, args.stream))
13 changes: 13 additions & 0 deletions examples/mcp/streamablehttp_example/README.md
@@ -0,0 +1,13 @@
# MCP Streamable HTTP Example

This example uses a local Streamable HTTP server in [server.py](server.py).

Run the example via:

```
uv run python examples/mcp/streamablehttp_example/main.py
```

## Details

The example uses the `MCPServerStreamableHttp` class from `agents.mcp`. The server runs in a sub-process at `https://localhost:8000/mcp`.