Closed
Description
Support for Short-term Memory in OpenAI Agents SDK
The current SDK doesn't support short-term memory, which is needed at two levels:
- Tools output
- User requests
For tool outputs, managing short-term memory on the server side (e.g., caching at the API or database level) seems like a reasonable approach.
For caching the LLM calls themselves, what would be the ideal approach?
I’d love to get feedback and suggestions on how best to tackle this.
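As one possible starting point for the LLM-call question, here is a sketch of an exact-match cache keyed on a hash of the model name plus the full message list. `LLMCallCache` is a hypothetical helper, not an SDK class; it only helps for deterministic, repeated prompts (e.g., temperature 0), and semantic caching would instead need embeddings plus a vector store:

```python
import hashlib
import json

class LLMCallCache:
    """Exact-match cache for LLM responses, keyed on a SHA-256 hash
    of the model name and message list. (Sketch only -- not part of
    the Agents SDK.)"""

    def __init__(self):
        self._store = {}

    def _key(self, model: str, messages: list) -> str:
        # Canonical JSON so logically identical requests hash the same.
        payload = json.dumps({"model": model, "messages": messages},
                             sort_keys=True)
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

    def get(self, model: str, messages: list):
        """Return a cached response, or None on a miss."""
        return self._store.get(self._key(model, messages))

    def put(self, model: str, messages: list, response) -> None:
        self._store[self._key(model, messages)] = response

# Usage sketch: consult the cache before making the real LLM call,
# and store the response afterwards.
cache = LLMCallCache()
msgs = [{"role": "user", "content": "What is 2 + 2?"}]
if cache.get("gpt-4o", msgs) is None:
    response = "4"  # placeholder for the actual model call
    cache.put("gpt-4o", msgs, response)
```

The main design question is what counts as a hit: exact-match is cheap and safe, but any change in the message list (including appended conversation history) produces a new key, so it mostly helps with repeated identical requests.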