
root ERROR Please provide OPENAI_API_KEY in preferences or via environment variable #15150

kittaakos (Contributor) opened this issue Mar 7, 2025 · 5 comments

Bug Description:

When I start Theia, I see this error repeatedly in the console:

2025-03-07T14:06:54.269Z root ERROR Please provide OPENAI_API_KEY in preferences or via environment variable Error: Please provide OPENAI_API_KEY in preferences or via environment variable
    at OpenAiModel.initializeOpenAi (/Users/kittaakos/dev/git/theia/examples/browser/lib/backend/main.js:6827:19)
    at OpenAiModel.request (/Users/kittaakos/dev/git/theia/examples/browser/lib/backend/main.js:6745:29)
    at LanguageModelFrontendDelegateImpl.request (/Users/kittaakos/dev/git/theia/examples/browser/lib/backend/main.js:5250:38)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async RpcProxyFactory.onRequest (/Users/kittaakos/dev/git/theia/examples/browser/lib/backend/packages_core_lib_common_index_js-node_modules_vscode-languageserver-types_lib_umd_sync_recursive.js:4265:24)
    at async RpcProtocol.handleRequest (/Users/kittaakos/dev/git/theia/examples/browser/lib/backend/packages_core_lib_common_index_js-node_modules_vscode-languageserver-types_lib_umd_sync_recursive.js:3728:28)

Steps to Reproduce:

  1. Start the browser example
  2. Open a workspace
  3. Edit a file (this step may not be necessary)
(Attached screen recording: Screen.Recording.2025-03-07.at.15.11.23.mov)

Additional Information

  • Operating System:
  • Theia Version: afde78f
sdirix (Member) commented Mar 7, 2025

AI Autocomplete is enabled by default when the overall AI feature is turned on. By default it attempts to use an OpenAI model, which requires an OPENAI_API_KEY to be set either in your preferences or as an environment variable. The autocomplete feature keeps triggering LLM requests, and the Theia OpenAI model throws these "key is missing" errors before actually sending a request to OpenAI.

To resolve this issue, you can:

  • Configure the Autocomplete agent to use an LLM that is properly set up in your application.
  • Disable the AI autocomplete feature if it's not needed.
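
For reference, the lookup order described above can be pictured with a small TypeScript sketch (resolveOpenAiApiKey and its parameter are illustrative names, not the actual Theia API): a non-empty preference value takes precedence, otherwise the OPENAI_API_KEY environment variable is used.

// Hypothetical sketch of the key lookup order; not the real Theia code.
function resolveOpenAiApiKey(preferenceValue?: string): string | undefined {
    // A non-empty preference value wins...
    if (preferenceValue && preferenceValue.trim().length > 0) {
        return preferenceValue;
    }
    // ...otherwise fall back to the environment variable.
    return process.env.OPENAI_API_KEY;
}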

kittaakos (Contributor, Author) commented Mar 7, 2025

> Disable the AI autocomplete feature if it's not needed.

Could Theia be smart enough to check the key once and, after a failure, stop trying to reach the OpenAI endpoints, instead of spamming the log on every user interaction while the app runs? A handler could re-run the check when the settings change.
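
A minimal sketch of that idea in TypeScript (MissingKeyReporter is a hypothetical helper, not an existing Theia class): remember that the missing-key error was already logged, and reset that memory from a preference-change listener.

// Sketch: log the missing-key error once, and allow it again only
// after the settings have changed. Hypothetical helper, not Theia code.
class MissingKeyReporter {
    private reported = false;

    reportMissingKey(log: (message: string) => void): void {
        if (this.reported) {
            return; // already logged since the last settings change
        }
        log('Please provide OPENAI_API_KEY in preferences or via environment variable');
        this.reported = true;
    }

    // Wire this to a preference-change listener so a corrected
    // configuration makes the error eligible to be logged again.
    onSettingsChanged(): void {
        this.reported = false;
    }
}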

sdirix (Member) commented Mar 7, 2025

Yes, I think some improvements could be made there 👍

At the moment the error occurs late in the chain, i.e. the Autocomplete agent has no idea that the model it talks to is essentially non-functional. It therefore triggers the request, and the model then throws the error.

I think the most sensible change would be to not even offer the OpenAI models (or any other model) when they would be in a non-functional state. In that case the agent would not even try to perform a request. However, at the moment this would just log another error elsewhere; that location is then probably the best place to make sure the error is only logged once between successful uses of autocomplete.

kittaakos (Contributor, Author) commented Mar 7, 2025

Could this work?

diff --git a/packages/ai-openai/src/node/openai-language-model.ts b/packages/ai-openai/src/node/openai-language-model.ts
index 33aa9874f..13427a08d 100644
--- a/packages/ai-openai/src/node/openai-language-model.ts
+++ b/packages/ai-openai/src/node/openai-language-model.ts
@@ -69,7 +69,15 @@ export class OpenAiModel implements LanguageModel {
 
     async request(request: LanguageModelRequest, cancellationToken?: CancellationToken): Promise<LanguageModelResponse> {
         const settings = this.getSettings(request);
-        const openai = this.initializeOpenAi();
+        let openai: OpenAI | undefined;
+        try {
+            openai = this.initializeOpenAi();
+        } catch (err) {
+            if (err instanceof NoOpenAiApiKeyError) {
+                return { text: '' };
+            }
+            throw err;
+        }
 
         if (request.response_format?.type === 'json_schema' && this.supportsStructuredOutput) {
             return this.handleStructuredOutputRequest(openai, request);
@@ -158,7 +166,7 @@ export class OpenAiModel implements LanguageModel {
     protected initializeOpenAi(): OpenAI {
         const apiKey = this.apiKey();
         if (!apiKey && !(this.url)) {
-            throw new Error('Please provide OPENAI_API_KEY in preferences or via environment variable');
+            throw new NoOpenAiApiKeyError();
         }
 
         const apiVersion = this.apiVersion();
@@ -176,6 +184,13 @@ export class OpenAiModel implements LanguageModel {
     }
 }
 
+export class NoOpenAiApiKeyError extends Error {
+    constructor() {
+        super('Please provide OPENAI_API_KEY in preferences or via environment variable');
+        this.name = 'NoOpenAiApiKeyError';
+    }
+}
+
 /**
  * Utility class for processing messages for the OpenAI language model.
  *

sdirix (Member) commented Mar 7, 2025

If we throw the error and then immediately catch and filter it again, the error becomes pointless. The problem with this workaround is that the user/caller receives no indication of what went wrong. For example, they select the OpenAI 4o model for the chat and forget to set their environment variable; now they send a request and just get an empty response back.

As said above, I think the model instance should not exist at all in the cases where we would throw this error, i.e. we could extend the OpenAiLanguageModelsManagerImpl to not even create the official models when there is currently no key available.

Alternatively, we could enrich the model interface with an isReady method and filter accordingly when querying the model registry.
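
A rough sketch of the isReady variant (the interfaces are simplified and hypothetical, not the actual Theia LanguageModel API): models report whether they are currently usable, and the registry filters out non-ready models before an agent can pick one.

// Hypothetical, simplified shapes; not the actual Theia interfaces.
interface ReadinessAwareModel {
    readonly id: string;
    // e.g. the OpenAI model would return false while no API key is configured
    isReady(): boolean;
}

class ModelRegistrySketch {
    private readonly models: ReadinessAwareModel[] = [];

    register(model: ReadinessAwareModel): void {
        this.models.push(model);
    }

    // Agents query here, so a known non-functional model is never
    // handed out and no request against it is ever attempted.
    getReadyModels(): ReadinessAwareModel[] {
        return this.models.filter(model => model.isReady());
    }
}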
