root ERROR Please provide OPENAI_API_KEY in preferences or via environment variable #15150
AI Autocomplete is enabled by default if the overall AI feature is turned on. By default, it attempts to use an OpenAI model, which requires an OPENAI_API_KEY. To resolve this issue, you can:
Could Theia be smart enough to check this once and stop trying to fetch OpenAI endpoints after a failure, instead of spamming the log on every user interaction while the app runs? A handler could re-run the check when the settings have changed.
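The suggestion above could be sketched roughly like this (a hypothetical helper, not existing Theia API): cache the result of the key check and invalidate it only from a preference-change handler, so a missing key is discovered once rather than on every interaction.

```typescript
// Hypothetical sketch: cache the configuration check and re-validate only
// when the relevant preferences change, instead of on every request.
class CachedKeyCheck {
    private valid?: boolean;

    constructor(private readonly readApiKey: () => string | undefined) {}

    /** Returns whether a key is configured; re-checks only after invalidate(). */
    isConfigured(): boolean {
        if (this.valid === undefined) {
            this.valid = !!this.readApiKey();
        }
        return this.valid;
    }

    /** Call from a preference-change handler to force a fresh check. */
    invalidate(): void {
        this.valid = undefined;
    }
}
```

With this shape, a request path that sees `isConfigured() === false` can bail out cheaply without hitting the OpenAI endpoint again until the user actually edits the settings.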
Yes, I think some improvements could be made there 👍 At the moment the error occurs late in the chain, i.e. the Autocomplete agent has no clue that the model it talks to is basically non-functional. Therefore it triggers the request and the model then throws the error. I think the most sensible change would be to not even offer the OpenAI models (or any other model) when they would be in a non-functional state. In that case the agent does not even try to perform a request. However, at the moment this would then just log another error. That location is probably the best place to make sure the error is only logged once between successful usages of autocomplete.
Could this work?

```diff
diff --git a/packages/ai-openai/src/node/openai-language-model.ts b/packages/ai-openai/src/node/openai-language-model.ts
index 33aa9874f..13427a08d 100644
--- a/packages/ai-openai/src/node/openai-language-model.ts
+++ b/packages/ai-openai/src/node/openai-language-model.ts
@@ -69,7 +69,15 @@ export class OpenAiModel implements LanguageModel {
     async request(request: LanguageModelRequest, cancellationToken?: CancellationToken): Promise<LanguageModelResponse> {
         const settings = this.getSettings(request);
-        const openai = this.initializeOpenAi();
+        let openai: OpenAI | undefined;
+        try {
+            openai = this.initializeOpenAi();
+        } catch (err) {
+            if (err instanceof NoOpenAiApiKeyError) {
+                return { text: '' };
+            }
+            throw err;
+        }
         if (request.response_format?.type === 'json_schema' && this.supportsStructuredOutput) {
             return this.handleStructuredOutputRequest(openai, request);
@@ -158,7 +166,7 @@ export class OpenAiModel implements LanguageModel {
     protected initializeOpenAi(): OpenAI {
         const apiKey = this.apiKey();
         if (!apiKey && !(this.url)) {
-            throw new Error('Please provide OPENAI_API_KEY in preferences or via environment variable');
+            throw new NoOpenAiApiKeyError();
         }
         const apiVersion = this.apiVersion();
@@ -176,6 +184,13 @@ export class OpenAiModel implements LanguageModel {
     }
 }
+export class NoOpenAiApiKeyError extends Error {
+    constructor() {
+        super('Please provide OPENAI_API_KEY in preferences or via environment variable');
+        this.name = 'NoOpenAiApiKeyError';
+    }
+}
+
 /**
  * Utility class for processing messages for the OpenAI language model.
  */
```
If we throw the error and immediately catch and filter it again, the error itself becomes unnecessary. The issue with this workaround is that the user/caller does not receive any message about what went wrong. For example, they selected the OpenAI 4o model for the chat and forgot to set their environment variable; now they send a request and just get an empty response back. As said above, I think the model instance should not exist at all in the cases where we would throw this error, i.e. we could extend the OpenAiLanguageModelsManagerImpl and not even create the official models when there is currently no key available. Alternatively, we could enrich the model interface with a …
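The second alternative (enriching the model interface) might be sketched as follows. This is purely illustrative: names like `LanguageModelStatus` and `OpenAiModelStatus` are hypothetical, not existing Theia API, and the key/URL check mirrors the condition from the diff above.

```typescript
// Hypothetical sketch: a status check on the model interface so agents and
// the model selection UI can skip models that are currently non-functional.
interface LanguageModelStatus {
    ready: boolean;
    reason?: string;
}

interface StatusAwareLanguageModel {
    status(): LanguageModelStatus;
}

class OpenAiModelStatus implements StatusAwareLanguageModel {
    constructor(
        private readonly apiKey: () => string | undefined,
        private readonly url?: string
    ) {}

    status(): LanguageModelStatus {
        // Same condition as initializeOpenAi(): no key and no custom endpoint.
        if (!this.apiKey() && !this.url) {
            return {
                ready: false,
                reason: 'Please provide OPENAI_API_KEY in preferences or via environment variable'
            };
        }
        return { ready: true };
    }
}
```

An agent could then consult `status()` before issuing a request and surface `reason` to the user once, rather than triggering the failure deep inside the request path.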
Bug Description:
When I start Theia, I see this error repeatedly in the console: `root ERROR Please provide OPENAI_API_KEY in preferences or via environment variable`
Steps to Reproduce:
Screen.Recording.2025-03-07.at.15.11.23.mov
Additional Information