how to use Code Interpreter or Image Output in OpenAI Agents SDK #360


Closed
tasif498 opened this issue Mar 27, 2025 · 11 comments
Labels: question (Question about using the SDK), stale

Comments

@tasif498

Is there currently a way to use the Code Interpreter (Python execution) tool or return images (e.g. charts) in responses when using the OpenAI Agents SDK?
If not, I'd like to request one of the following:

Option 1: Code Interpreter Integration
Support for securely executing Python code (as with the code interpreter tool in the Assistants API) to enable dynamic data processing and chart generation.

Option 2: Image Output Support
If code execution isn't feasible, provide a way for agents to return image outputs (e.g. base64-encoded charts) in their responses. This would let external tools handle the code execution, with the agent simply returning the result.

Use case:
Building data analysis agents that can respond to user queries with summaries and visual charts (bar, pie, line, etc.).
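In the meantime, a rough sketch of a workaround is to expose code execution as a custom function tool. This is only an illustration, not an SDK-provided hosted tool: the `run_python` name and the bare `exec` call are assumptions, and a real deployment would need a proper sandbox.

```python
import contextlib
import io

from agents import Agent, Runner, function_tool


@function_tool
def run_python(code: str) -> str:
    """Execute a Python snippet and return anything it prints."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})  # NOTE: no sandboxing here; never run untrusted code like this
    return buffer.getvalue()


agent = Agent(
    name="Data analyst",
    instructions="Use the run_python tool to compute answers to data questions.",
    tools=[run_python],
)

result = Runner.run_sync(agent, "What is the mean of [3, 7, 10, 14]?")
print(result.final_output)
```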

@tasif498 tasif498 added the question Question about using the SDK label Mar 27, 2025
@pierluigi-D-segatto

Is there anyone who can say how or when these features will be implemented?


github-actions bot commented Apr 6, 2025

This issue is stale because it has been open for 7 days with no activity.

@github-actions github-actions bot added the stale label Apr 6, 2025
@tasif498
Author

tasif498 commented Apr 6, 2025

Can we get any update on this?

@github-actions github-actions bot removed the stale label Apr 7, 2025
@bruno-curta

I'm also struggling with this point.
I have tried creating assistants and tools that use the code interpreter to generate HTML based on the encoded image URL, but it isn't working. This should be easier to configure, I believe.
Any updates on that?
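A minimal sketch of that base64-embedding idea, assuming matplotlib is installed; the tool name `bar_chart_html` is illustrative, and the agent (or the calling application) can drop the returned tag straight into an HTML page:

```python
import base64
import io

import matplotlib
matplotlib.use("Agg")  # headless backend, no display required
import matplotlib.pyplot as plt

from agents import function_tool


@function_tool
def bar_chart_html(labels: list[str], values: list[float]) -> str:
    """Render a bar chart and return it as an embeddable HTML <img> tag."""
    fig, ax = plt.subplots()
    ax.bar(labels, values)
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    encoded = base64.b64encode(buf.getvalue()).decode("ascii")
    return f'<img src="data:image/png;base64,{encoded}" alt="bar chart"/>'
```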

@portizv26

I am struggling with this as well.
My initial approach is to orchestrate a flow in which an assistant creates an image and uploads it to a repository that my agent can later reach, but I don't know whether this approach is secure or whether there is a better one.
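A rough sketch of that upload-and-reference pattern, assuming a local `static/charts` directory served as static files; an object store with signed URLs would be the more secure production choice, and the `save_line_chart` name and paths here are illustrative:

```python
import uuid
from pathlib import Path

import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

from agents import function_tool

# Assumed directory that the application serves as static files.
CHART_DIR = Path("static/charts")
CHART_DIR.mkdir(parents=True, exist_ok=True)


@function_tool
def save_line_chart(x: list[float], y: list[float]) -> str:
    """Render a line chart, save it to disk, and return the URL the agent can cite."""
    fig, ax = plt.subplots()
    ax.plot(x, y)
    filename = f"{uuid.uuid4().hex}.png"
    fig.savefig(CHART_DIR / filename)
    plt.close(fig)
    return f"/static/charts/{filename}"
```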

github-actions bot commented Apr 16, 2025

This issue is stale because it has been open for 7 days with no activity.

@github-actions github-actions bot added the stale label Apr 16, 2025
@rm-openai
Collaborator

Hey, I'm so sorry, I totally missed this. I don't think we have a code-interpreter hosted tool. I can add some image examples. Can you share some sample code and point out the parts where you're not sure what to do?

@isaac47

isaac47 commented Apr 19, 2025

Hi, can you please give a code example showing an agent taking an image as input and generating a description?

@rm-openai
Collaborator

@isaac47 #553
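For reference, a rough sketch of passing an image as input to an agent; it assumes the Responses-style `input_image` content item, the agent name and image URL below are placeholders, and #553 has the maintained examples:

```python
from agents import Agent, Runner

agent = Agent(
    name="Image describer",
    instructions="Describe the image the user provides.",
)

image_url = "https://example.com/chart.png"  # placeholder image URL

result = Runner.run_sync(
    agent,
    [
        {
            "role": "user",
            "content": [
                {"type": "input_image", "detail": "auto", "image_url": image_url},
                {"type": "input_text", "text": "What does this image show?"},
            ],
        }
    ],
)
print(result.final_output)
```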

@github-actions github-actions bot removed the stale label Apr 23, 2025
github-actions bot commented Apr 30, 2025

This issue is stale because it has been open for 7 days with no activity.

@github-actions github-actions bot added the stale label Apr 30, 2025

github-actions bot commented May 5, 2025

This issue was closed because it has been inactive for 3 days since being marked as stale.

@github-actions github-actions bot closed this as not planned May 5, 2025