Proper way of managing large context window for ComputerTool #111

Open
@huntersgordon

Description

I am enjoying using ComputerTool, but I want to create a persistent agent that runs for many turns, for example max_turns > 10000.

I run into the basic error:
Error getting response: Error code: 400 - {'error': {'message': 'This request exceeds the context size limit. Please reduce the size of the prompt and try again.', 'type': 'invalid_request_error', 'param': None, 'code': None}}

  • Are there built-in functions that can help me prune this?

  • I understand this comes from the OpenAI core API, but can someone please direct me to where I can manage this? What are some good approaches to managing context/input as it grows for a persistent agent?

Activity

rm-openai (Collaborator) commented on Mar 13, 2025

There are a few possibilities here, and I am not sure which one you are hitting. Would you be able to share some sample code?

I am basically wondering if it's:

  1. Too many history items being sent to the model (in which case you can just trim the input between calls to Runner.run(); see the sketch at the end of this comment)
  2. Too much data in the request (which could be fixed by using response or reference IDs)

I'm guessing it's the former, but let me know!
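
For option 1, a minimal sketch of trimming history between runs. result.to_input_list() is part of the Agents SDK; the MAX_HISTORY_ITEMS budget and the drop-oldest policy are illustrative assumptions, not a built-in pruning feature:

from agents import Runner

MAX_HISTORY_ITEMS = 50  # assumed budget; tune for your model's context window

async def run_with_trimming(agent, user_messages, hooks=None):
    history = []
    result = None
    for message in user_messages:
        history.append({"role": "user", "content": message})
        result = await Runner.run(agent, history, hooks=hooks)
        # Carry the conversation forward, then keep only the most recent items
        # so the next request stays under the context limit. Note that naive
        # slicing can separate related items (e.g. a reasoning item from its
        # function_call), so a robust version should trim at safe boundaries.
        history = result.to_input_list()[-MAX_HISTORY_ITEMS:]
    return result

For option 2, the underlying Responses API also supports previous_response_id, which lets the server reference the prior turn instead of the client resending it. A sketch with the raw openai client; the model and prompts are placeholders and tool configuration is omitted:

from openai import OpenAI

client = OpenAI()

first = client.responses.create(model="gpt-4.1", input="step one")
followup = client.responses.create(
    model="gpt-4.1",
    previous_response_id=first.id,  # server-side reference to the prior turn
    input="step two",
)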

huntersgordon (Author) commented on Mar 13, 2025

@rm-openai

Sure, I just have my basic Agent instantiation here, followed by an arbitrarily large max_turns:

agent = Agent(
    ...,
    tools=[ComputerTool(computer)],
    model="computer-use-preview",
    model_settings=ModelSettings(truncation="auto"),
)

await Runner.run(agent, prompt, max_turns=10000, hooks=hooks)

Now, I made a modification to run.py to truncate the input, along the lines of:

run.py -> _run_single_turn()

# if the context is too long:

pruned_input = input[len(input) // 2:]
print(f"Length of input after pruning: {len(pruned_input)}")

# Retry with pruned input
new_response = await cls._get_new_response(
    agent,
    system_prompt,
    pruned_input,
    output_schema,
    handoffs,
    context_wrapper,
    run_config,
)

and get:

Error getting response: Error code: 400 - {'error': {'message': "Item 'fc_67d21f40a2e88191b3b06fea5e92c66a07505fdb833b0cf2' of type 'function_call' was provided without its required 'reasoning' item: 'rs_67d21f3ce3fc8191a98c966d5161440407505fdb833b0cf2'.", 'type': 'invalid_request_error', 'param': 'input', 'code': None}}

Thanks for your feedback.
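
As an aside, the error message above indicates the midpoint slice kept a function_call item while dropping the reasoning item it depends on. A minimal sketch of pruning at a safer boundary instead, assuming the input is a list of Responses API item dicts (the type strings checked here are illustrative assumptions, not an SDK helper):

def prune_input_items(items, keep_last=50):
    # Keep at most keep_last items, but only start the pruned history at a
    # plain message item, so we never begin in the middle of a
    # reasoning -> function_call -> call output group, which is what
    # produces the "provided without its required 'reasoning' item" 400.
    if len(items) <= keep_last:
        return items
    start = len(items) - keep_last
    while start < len(items) and items[start].get("type") not in (None, "message"):
        start += 1
    if start == len(items):
        return items  # no safe boundary found; fall back to the full history
    return items[start:]

With something like this, the retry above could call prune_input_items(input) instead of slicing at the midpoint.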

rm-openai (Collaborator) commented on Mar 13, 2025

@huntersgordon do you happen to have a request ID so we can debug this? #114 adds the request ID to the error log

(specifically, I want to debug the original issue with "This request exceeds the context size limit")

huntersgordon (Author) commented on Mar 13, 2025

openai.BadRequestError: Error code: 400 - {'error': {'message': 'This request exceeds the context size limit. Please reduce the size of the prompt and try again.', 'type': 'invalid_request_error', 'param': None, 'code': None}}

Hi @rm-openai, not seeing any request ID here. Where should it appear?

rm-openai (Collaborator) commented on Mar 13, 2025

@huntersgordon It should have printed a bunch of text including a stack trace - at the very top of the stack trace it should say something like

Error getting response: <error> (request_id: ...)

(I'm assuming you have cloned the repo and hence have #114 included)

rm-openai (Collaborator) commented on Mar 13, 2025

If you haven't cloned, just uninstall and reinstall the SDK with:

pip install git+ssh://git@github.com/openai/openai-agents-python.git

github-actions commented on Mar 20, 2025

This issue is stale because it has been open for 7 days with no activity.

maininformer commented on May 20, 2025

Having the same issue as the second error described above: the function_call item is rejected because its required reasoning item was pruned from the input.

Metadata

Assignees: No one assigned
Labels: needs-more-info (Waiting for a reply/more info from the author), question (Question about using the SDK), stale
Type: No type
Projects: No projects
Milestone: No milestone
Relationships: None yet
Development: No branches or pull requests
Participants: @maininformer, @huntersgordon, @rm-openai
