Insights: openai/openai-agents-python
Overview
4 Releases published by 2 people
- v0.0.11, published Apr 15, 2025
- v0.0.12, published Apr 22, 2025
- v0.0.13, published Apr 24, 2025
- v0.0.14, published Apr 30, 2025
35 Pull requests merged by 14 people
- feat: Streamable HTTP support (#643), merged May 14, 2025
- Update search_agent.py (#677), merged May 14, 2025
- feat: pass extra_body through to LiteLLM acompletion (#638), merged May 14, 2025
- Fixed a bug for "detail" attribute in input image (#685), merged May 14, 2025
- 0.0.14 release (#635), merged Apr 30, 2025
- Update litellm version (#626), merged Apr 29, 2025
- docs: add FutureAGI to tracing documentation (#592), merged Apr 25, 2025
- Make the TTS voices type exportable (#577), merged Apr 24, 2025
- Add usage to context in streaming (#595), merged Apr 24, 2025
- v0.0.13 (#593), merged Apr 24, 2025
- More tests for cancelling streamed run (#590), merged Apr 24, 2025
- Fix stream error using LiteLLM (#589), merged Apr 24, 2025
- Prevent MCP ClientSession hang (#580), merged Apr 24, 2025
- Create to_json_dict for ModelSettings (#582), merged Apr 24, 2025
- Allow cancel out of the streaming result (#579), merged Apr 23, 2025
- Examples: Fix financial_research_agent instructions (#573), merged Apr 23, 2025
- Adding extra_headers parameters to ModelSettings (#550), merged Apr 23, 2025
- v0.0.12 (#564), merged Apr 22, 2025
- Pass through organization/project headers to tracing backend, fix speech_group enum (#562), merged Apr 21, 2025
- Docs and tests for litellm (#561), merged Apr 21, 2025
- RFC: automatically use litellm if possible (#534), merged Apr 21, 2025
- Fix visualize graph filename to be without extension (#554), merged Apr 21, 2025
- Start and finish streaming trace in impl method (#540), merged Apr 21, 2025
- Enable non-strict output types (#539), merged Apr 21, 2025
- Examples for image inputs (#553), merged Apr 21, 2025
- Docs: Switch to o3 model; exclude translated pages from search (#533), merged Apr 17, 2025
- Docs for LiteLLM integration (#532), merged Apr 16, 2025
- Litellm integration (#524), merged Apr 16, 2025
- Show repo name/data in docs (#525), merged Apr 16, 2025
- Extract chat completions streaming helpers (#523), merged Apr 15, 2025
- Extract chat completions conversion code into helper (#522), merged Apr 15, 2025
- Run CI on all commits, not just ones on main (#521), merged Apr 15, 2025
- v0.0.11 (#520), merged Apr 15, 2025
- Only include stream_options when streaming (#519), merged Apr 15, 2025
- Examples and tests for previous_response_id (#512), merged Apr 15, 2025
23 Pull requests opened by 21 people
- Added cached_tokens to the usage monitoring (#555), opened Apr 20, 2025
- Add File Loading Utilities for Agent Instructions (#565), opened Apr 22, 2025
- Make input/new items available in the run context (#572), opened Apr 22, 2025
- Add a new GH Actions job to automatically update translated document pages (#598), opened Apr 24, 2025
- Add ProviderError exception and handle missing LLM response (#609), opened Apr 26, 2025
- feat: use stream api only (#629), opened Apr 29, 2025
- [MCP][Utils] Add support for FastMCP processing (#631), opened Apr 30, 2025
- feat: patch prompt for models that only support json-mode (#633), opened Apr 30, 2025
- feat: Implement get_tool_call_output method in RunResultBase and update doc (#637), opened May 1, 2025
- fix: add ensure_ascii=False to json.dumps for correct Unicode output (#639), opened May 2, 2025
- [fix] use openai model provider as default (#644), opened May 3, 2025
- Examples: Fixed agent_patterns/streaming guardrails (#648), opened May 5, 2025
- Fix typos in documentation and event naming across multiple files (#651), opened May 6, 2025
- feat: Add support for image function tools (#654), opened May 6, 2025
- Added support for gpt4o-realtime models for Speech to Speech interactions (#659), opened May 7, 2025
- Add keep_last_n_items filter to handoff_filters module (#660), opened May 7, 2025
- Add Galileo to external tracing processors list (#662), opened May 7, 2025
- Fixed Python syntax (#665), opened May 8, 2025
- feat: storage adapter (bubble / supabase) (#669), opened May 9, 2025
- Use `max_completion_tokens` param for OpenAI Chat Completion API (#679), opened May 11, 2025
- Added response cost in the Usage (#682), opened May 12, 2025
- Feature/message_filter (#687), opened May 12, 2025
- docs: add DeepWiki badge for AI-powered project documentation (#689), opened May 13, 2025
76 Issues closed by 24 people
- Braicool (#675), closed May 12, 2025
- Does StopAtTools return the tool result directly to the user instead of to the LLM? (#632), closed May 12, 2025
- additionalProperties should not be set for object types (#608), closed May 11, 2025
- Handoff Agent and Tool Call Not Triggering Reliably in Multi-Agent Setup (#617), closed May 11, 2025
- How to use on_handoff content in the agent (#627), closed May 11, 2025
- What is the role of ReasoningItem (#480), closed May 10, 2025
- example streaming events to the client (#653), closed May 10, 2025
- Triage agent cannot delegate task to handoff agent (#575), closed May 9, 2025
- Agent gets stuck 'in-progress' (#647), closed May 8, 2025
- How to use custom LLM Gateway having JWT authentication (#652), closed May 7, 2025
- Integration of deterministic conversations and other agents (#603), closed May 6, 2025
- Are MCPServer and MCPServerSse clients? (#640), closed May 5, 2025
- how to use Code Interpreter or Image Output in OpenAI Agents SDK (#360), closed May 5, 2025
- Custom Model Provider Not Working (#485), closed May 5, 2025
- function call can not get call_id (#559), closed May 4, 2025
- How to use llm outputs in the on_handoff function (#567), closed May 4, 2025
- Tools should not be executed until all input guardrails have completed (#624), closed May 2, 2025
- Files in the input user prompt (#557), closed May 2, 2025
- from agents.extensions.models.litellm_model import LitellmModel (#621), closed May 1, 2025
- Accessing reasoning tokens of another llm model in agents sdk (#462), closed May 1, 2025
- [Bug]: ModuleNotFoundError: No module named 'enterprise' When Using litellm==1.48.1 in Google Colab (#614), closed Apr 30, 2025
- ModuleNotFoundError: No module named 'enterprise' #10353 (#613), closed Apr 30, 2025
- Add HTTP (non-stdio) MCP server support to Agents SDK (#616), closed Apr 29, 2025
- OpenAI Agents SDK unable to contact local endpoint hosted by Ollama / LM Studio (#625), closed Apr 29, 2025
- https://static.hotmart.com/checkout/widget.min.js (#619), closed Apr 29, 2025
- AWS Bedrock via LiteLLM (#620), closed Apr 29, 2025
- How to print MCP tools (#615), closed Apr 28, 2025
- Is RunContext thread safe? (#537), closed Apr 28, 2025
- agent output format and handoff stability (#568), closed Apr 27, 2025
- Optimize Latency for Parallel Agent Runs with Streaming (#498), closed Apr 27, 2025
- How would you handle Pydantic output_type validation and retries? (#530), closed Apr 27, 2025
- handoff() returned Handoff objects aren't recognized by Agent(handoffs=…) without manual .name alias (#599), closed Apr 27, 2025
- Error in Stream in Runner.run_streamed() with LitellmModel(Model) class (#601), closed Apr 26, 2025
- Support AWS Bedrock (#86), closed Apr 26, 2025
- Issue when processing real time audio from a Twilio media stream (#304), closed Apr 26, 2025
- What are the best practices for faster voice conversations? (#306), closed Apr 26, 2025
- AttributeError: module 'planner_agent' has no attribute 'handoffs' (#596), closed Apr 25, 2025
- It seems I'm unable to access the file `sample.txt` (#584), closed Apr 25, 2025
- Optional Fields in Agent Output Cause JSON Schema Error with AWS Bedrock (#586), closed Apr 25, 2025
- 'AgentOutputSchema' object has no attribute '__mro__' (#597), closed Apr 25, 2025
- How can I pass dynamic instructions to the agent (#482), closed Apr 25, 2025
- context.usage returns 0 in streaming mode (#594), closed Apr 24, 2025
- LiteLLM extension crashes with run_streamed (#587), closed Apr 24, 2025
- Returning function call's responses in `raw_response_event` (#328), closed Apr 24, 2025
- Canceling the stream from result.stream_events() (#574), closed Apr 23, 2025
- When will the .NET version be available? (#571), closed Apr 23, 2025
- How to make hand-off decisions more reliable? (#541), closed Apr 23, 2025
- Nested objects in function tool input are empty (#563), closed Apr 22, 2025
- openai-agents package not installing correctly: "ModuleNotFound agents" (#399), closed Apr 22, 2025
- Missing packages on install for voice with macOS Intel (#478), closed Apr 22, 2025
- Adding Langsmith trace processor introduces huge latency to chat (#529), closed Apr 22, 2025
- ValueError when finishing trace in Runner.run_streamed(): "token was created in a different Context" (#538), closed Apr 21, 2025
- StreamingResponse with FastAPI returns ContextVar error (#435), closed Apr 21, 2025
- how to use custom json schema for agent's output? (#528), closed Apr 21, 2025
- Incorrect package name in documentation: 'pip install openai-agents' should be 'openai_agents' (#560), closed Apr 21, 2025
- function_call can not work (#467), closed Apr 21, 2025
- MCP server with custom OpenAI client (#470), closed Apr 21, 2025
- multi-agent use handoffs (#471), closed Apr 21, 2025
- Multi-agent collaboration to complete a complex task (#473), closed Apr 21, 2025
- Fact Checking Guardrails (#416), closed Apr 20, 2025
- Expose max_turns Parameter When Running an Agent as a Tool (#551), closed Apr 20, 2025
- [non-fatal] Tracing client error 401, Incorrect API key provided (#535), closed Apr 17, 2025
- MCP with context (#411), closed Apr 17, 2025
- why? (#441), closed Apr 17, 2025
- UTF-8 encoding issues in tool calling params with GPT-4o model (#315), closed Apr 16, 2025
- context management between the agents in a multi agent set up (#348), closed Apr 16, 2025
- timeout does not work for post_writer in MCP (#517), closed Apr 16, 2025
- Model stream error when running in Docker (#518), closed Apr 15, 2025
- Why is full history as input still necessary? (#102), closed Apr 15, 2025
- Runner hangs when using MCPServerStdio after successful tools/list (#434), closed Apr 15, 2025
59 Issues opened by 52 people
- How to pass hardcoded dynamic messages as agent's responses in the chat history? (#695), opened May 15, 2025
- Multiple handoffs requested Error tracing platform (#694), opened May 14, 2025
- MCP server restart causes Agent to fail (#693), opened May 14, 2025
- Feature Request: Support streaming tool call outputs (#692), opened May 14, 2025
- ImportError: cannot import name 'MCPServerStdio' from 'agents.mcp' (#691), opened May 14, 2025
- Please add time travel (#688), opened May 13, 2025
- Agent attempts to use non-existing tool (#686), opened May 12, 2025
- Feature Request: Allow Separate Models for Tool Execution and Final Response in OpenAI Agent SDK (#684), opened May 12, 2025
- Add response cost in the Usage (#683), opened May 12, 2025
- Troubleshooting Agent Handoff in Multi-Agent Workflow (#681), opened May 12, 2025
- Unable to use reasoning models with tool calls using LitellmModel (#678), opened May 11, 2025
- How to provide resources with an MCP server? (#676), opened May 11, 2025
- Threads api (#674), opened May 11, 2025
- Error code: 400 "No tool output found for function call" (#673), opened May 10, 2025
- Add an example for telephony voice agent (#672), opened May 9, 2025
- How to make LiteLLM models run in Reasoning mode (#671), opened May 9, 2025
- llm.txt? (#670), opened May 9, 2025
- Infinite recursion in src/agents/extensions/visualization.py due to circular references (#668), opened May 9, 2025
- Question about streaming for subagents and tools, and tool hallucinations (#667), opened May 8, 2025
- from agents.extensions.models.litellm_model import LitellmModel (#666), opened May 8, 2025
- Does the formatted output (Agent.output_type) require model support? (#664), opened May 8, 2025
- Custom model provider ignored when using agents as tools (#663), opened May 7, 2025
- First-class streaming tool output (#661), opened May 7, 2025
- The same isinstance(output, ResponseFunctionToolCall) check appears twice in _run_impl.py (#658), opened May 7, 2025
- OAuth support for MCPServerSse (#657), opened May 7, 2025
- Function calling fails on "application/json" MIME type with the latest Gemini models (#656), opened May 6, 2025
- Input format in agent as tool (#655), opened May 6, 2025
- When using Japanese in AzureOpenAI, answers may not be displayed (#649), opened May 5, 2025
- Providing a pydantic model instead of a docstring for tool parameters (#646), opened May 5, 2025
- Is there a way to access reasoning_content when calling Runner.run? (#645), opened May 4, 2025
- How to add messages to the conversation history (#642), opened May 2, 2025
- Creating Agents Dynamically (#641), opened May 2, 2025
- Human-In-The-Loop Architecture should be implemented as a top priority! (#636), opened May 1, 2025
- 'no attribute' error occurs while calling MCP (#630), opened Apr 30, 2025
- Intent Classifier Support (#628), opened Apr 29, 2025
- on_agent_start hook should be more performant (#623), opened Apr 29, 2025
- Can we use agent.run instead of Runner.run(starting_agent=agent) (#622), opened Apr 29, 2025
- Resource tracker warning (leaked semaphores) with MCPServerStdio (#618), opened Apr 28, 2025
- Bug: style guideline and formatting inconsistencies (#611), opened Apr 27, 2025
- [Bug]: UnicodeDecodeError when importing litellm_model on Windows (#610), opened Apr 26, 2025
- [Bug]: SDK crashes when `choices` is `None` (provider-error payload) (#604), opened Apr 25, 2025
- Add HTTP Streamable support for MCPs (#600), opened Apr 25, 2025
- openai-agents-dotnet (#588), opened Apr 24, 2025
- Is there a way to block the handoff to an agent based on custom logic? (#585), opened Apr 24, 2025
- Ordering of events in Runner.run_streamed is incorrect (#583), opened Apr 24, 2025
- input_guardrail is skipped (#576), opened Apr 23, 2025
- bugs in run.py (#570), opened Apr 22, 2025
- Reasoning model items provided to General model (#569), opened Apr 22, 2025
- Add llms.txt in the documentation (#556), opened Apr 20, 2025
- Add the possibility to add extra header fields in the RunConfig or Agents (#549), opened Apr 19, 2025
- Usage tokens no longer automatically show (#548), opened Apr 18, 2025
- Why does the Computer protocol not have the goto method? (#547), opened Apr 18, 2025
- History Cleaning (#545), opened Apr 18, 2025
- Support for MCP prompts and resources (#544), opened Apr 18, 2025
- Websocket streaming audio in realtime from client (#536), opened Apr 17, 2025
- How to make the conversation finally come back to the MAIN AGENT (#527), opened Apr 16, 2025
- Cannot get the last tool_call_output event in stream_events when MaxTurnsExceeded (#526), opened Apr 16, 2025
17 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- add reasoning content to ChatCompletions (#494), commented on May 5, 2025 • 7 new comments
- Add tool call parameters for `on_tool_start` hook (#253), commented on Apr 29, 2025 • 1 new comment
- Support for OpenAI agents sdk with Javascript/Typescript (#240), commented on May 14, 2025 • 0 new comments
- Retry mechanism for ModelBehaviorError (#325), commented on May 8, 2025 • 0 new comments
- Timeout after 300 seconds without any error message. Could it be rate limiting? (#511), commented on May 7, 2025 • 0 new comments
- Add Intro message function for VoicePipeline (#488), commented on May 6, 2025 • 0 new comments
- Streamed Voice Agent Demo - Multiple Performance Issues (#301), commented on May 6, 2025 • 0 new comments
- Tool Calling Running in Loop Until Max-Turn (#191), commented on May 5, 2025 • 0 new comments
- human-in-the-loop (#378), commented on May 5, 2025 • 0 new comments
- Add reasoning support for custom models (#492), commented on May 1, 2025 • 0 new comments
- invalid_request_error when using "chat_completions" with triage agent (gemini -> any other model) (#237), commented on May 1, 2025 • 0 new comments
- Random transcript gets printed/generated when talking to the voice agent implemented using "VoicePipeline", e.g. "Transcription: Kurs.", with no background noise (#368), commented on Apr 29, 2025 • 0 new comments
- Support For CodeAct In The Future? (#383), commented on Apr 29, 2025 • 0 new comments
- 'handoffs' and 'agent.as_tool' have different performances (#224), commented on Apr 29, 2025 • 0 new comments
- Enhance `on_tool_start` Hook to Include Tool Call Arguments (#252), commented on Apr 26, 2025 • 0 new comments
- Add `reasoning_content` to ChatCompletions (#415), commented on Apr 22, 2025 • 0 new comments
- Golang implementation / alternative (#350), commented on Apr 15, 2025 • 0 new comments