tool use agent #266
Comments
Anyone working on this?
Hello! I believe that the Model Context Protocol (MCP) would be more appropriate than standard tool calling. Happy to discuss more on the topic :)
I agree. MCP would be more appropriate and cover a wider array of use cases. Anyone working on MCP integration?
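For context on what MCP integration would involve: an MCP server advertises its tools to a host via a `tools/list` request, each tool described by a name, description, and JSON Schema for its input. A minimal sketch of that shape (the `read_file` tool here is purely illustrative, not one of Void's actual tools):

```python
# Illustrative sketch of an MCP tool definition and the tools/list response
# shape. The tool name and schema are hypothetical examples, not Void's API.
read_file_tool = {
    "name": "read_file",
    "description": "Read the contents of a file in the workspace",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}

def list_tools():
    # An MCP host issues a tools/list request and receives definitions
    # like the one above; it can then forward them to any model.
    return {"tools": [read_file_tool]}
```

The appeal over bespoke tool wiring is that any MCP-compatible server can expose new tools to the editor without per-tool integration code.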
Great question @ziiw. Right now, we have tools for:
We currently have no "lint error" tool - if a lint error exists, it's automatically passed to the LLM with a prompt to fix it.
We weren't worried about tools being fixed on the first call because, last time I experimented, OpenAI's models could only output a message OR a tool call (not both, as Anthropic's can), so we were going to prompt after every message to handle OpenAI's models anyway. Happy to talk more about how this all works; would love to hear your thoughts. cc @abhicombine
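The re-prompting flow described above can be sketched as a simple loop: if a reply is a tool call, run the tool and feed back the result; if it is a plain message, re-prompt so the model keeps going. (`call_model` and `run_tool` are hypothetical stand-ins, not Void's real functions.)

```python
# Hedged sketch of the loop described above: a model reply is assumed to be
# either a plain message or a tool call (not both), so we re-prompt after
# each plain message until the model signals it is done.
# `call_model` and `run_tool` are hypothetical stand-ins, not Void's API.

def agent_loop(call_model, run_tool, task, max_turns=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = call_model(history)
        if reply.get("tool_call"):
            # Execute the requested tool and feed the result back.
            result = run_tool(reply["tool_call"])
            history.append({"role": "tool", "content": result})
        else:
            history.append({"role": "assistant", "content": reply["content"]})
            if reply.get("done"):
                break
            # Re-prompt: nudge the model to continue with a tool call.
            history.append({"role": "user", "content": "Continue."})
    return history
```

The `max_turns` cap matters: without it, a model that never signals completion would loop forever.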
Thanks for the details, @andrewpareles!
This would be amazing. If Void can support MCP, it will unlock a lot of potential. Here's a good YouTube video on it: https://www.youtube.com/watch?v=oAoigBWLZgE
I think we should be careful with how we get this to automatically react to linting errors. I've had multiple times where a model recognized a lint error where there was none and kept automatically trying to fix it until the code was entirely broken and it tried to throw out an entire file. Cursor's setting to turn off auto-fix doesn't get respected either, so every time you try to alter a file with something the model thinks is a mistake but isn't, it goes into a self-destructive loop. I've seen this tool calling be useful, but it can also spiral very quickly when it comes to linting; these models still hallucinate. A setting to enable or disable auto-fixing seems like a good solution, if we can get the model to actually respect it, or a flow that lets the user fix the issue before continuing.
Great suggestion; I totally agree that an LLM can confuse itself and spiral downward. We can definitely add a setting for disabling lint-fixing, especially for OSS models, which might not be as smart. One guiding principle for us is that lint fixes should always be done by an agent with tool use, because you might need to go into another file to fix a lint error. I think passively giving the agent tool use, without forcing it to fix the error before moving on, might help, but I'm totally open to other ways of dealing with lint errors.
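The safeguards discussed here could look something like the sketch below: an auto-fix setting that is actually respected, plus a retry cap so a hallucinated lint error can't trigger an endless fix loop. (`attempt_fix` is a hypothetical callback; none of these names are Void's real API.)

```python
# Illustrative sketch, assuming a per-error fix callback: respect an explicit
# auto-fix setting, and cap retries so a hallucinated lint error cannot
# cause the self-destructive loop described above.
# `attempt_fix` is a hypothetical stand-in, not Void's real API.

def fix_lint_errors(errors, attempt_fix, auto_fix_enabled=True, max_attempts=3):
    if not auto_fix_enabled:
        # Respect the setting: surface errors to the user instead of looping.
        return {"fixed": [], "needs_user": list(errors)}
    fixed, needs_user = [], []
    for err in errors:
        for _ in range(max_attempts):
            if attempt_fix(err):
                fixed.append(err)
                break
        else:
            # Give up after max_attempts and hand control back to the user.
            needs_user.append(err)
    return {"fixed": fixed, "needs_user": needs_user}
```

Anything in `needs_user` would go to the confirmation flow suggested above, rather than back into another automatic fix attempt.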