Replies: 1 comment
Also interested in this topic. Thanks for the mention of cline. I dug into their code a bit, and it seems that they expose the resources directly in the prompt with a description, similar to tools but not literally as a tool. I found it in their prompt here. I think I'll give that a go in my project, but let me know if others have any experience or recommendations here!
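Roughly what I have in mind for that, as a minimal sketch: it assumes the official MCP Python SDK's `ClientSession.list_resources()`, and `build_resource_prompt` is just an illustrative helper name, not anything from cline's code.

```python
# Sketch of the "resources described in the prompt" approach: list the server's
# resources and render them as a plain-text catalog for the system prompt.
# Assumes the MCP Python SDK, where list_resources() returns a result whose
# .resources entries carry uri / name / description fields.
from mcp import ClientSession


async def build_resource_prompt(session: ClientSession) -> str:
    """Render the connected server's resources as text for the system prompt."""
    result = await session.list_resources()
    lines = ["You can reference the following MCP resources:"]
    for res in result.resources:
        desc = res.description or "(no description provided)"
        lines.append(f"- {res.uri} ({res.name}): {desc}")
    return "\n".join(lines)
```

The catalog itself stays cheap (names and descriptions only), so the full contents of a resource still need a separate mechanism to actually enter the context.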
Your Question
Hey there,
I'm looking into building a client to work with MCP servers, and I'd like to know how others solve the problem of including resources in the context.
The official spec purposefully leaves this question open-ended, which is understandable.
However, I was wondering what approaches others take to integrate resources into an LLM's context, especially regarding the third point, automatic context inclusion.
I tried looking into different open-source implementations that use MCP, including continue and cline, and found different approaches. Continue seems to rely on the user to identify when they want to inject a specific resource into the context, whereas cline seems to wrap resource injection in a tool call so the LLM can decide for itself which context should be included.
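To make that second approach concrete, this is roughly how I picture it. The tool name and schema below are mine (written in the Anthropic Messages API shape), not what cline actually registers, and I'm assuming the MCP Python SDK's `ClientSession.read_resource()`.

```python
# Sketch of the tool-call approach: expose a single resource-reading tool to the
# model and resolve its calls against the MCP session. Tool name and schema are
# illustrative only.
from mcp import ClientSession

READ_RESOURCE_TOOL = {
    "name": "read_mcp_resource",
    "description": "Fetch the contents of an MCP resource by URI and add it to the context.",
    "input_schema": {
        "type": "object",
        "properties": {
            "uri": {"type": "string", "description": "Resource URI, e.g. file:///..."}
        },
        "required": ["uri"],
    },
}


async def handle_read_resource_call(session: ClientSession, uri: str) -> str:
    """Resolve the model's tool call by reading the resource and returning its text."""
    result = await session.read_resource(uri)
    # Concatenate any text contents; binary (blob) contents would need separate handling.
    return "\n".join(c.text for c in result.contents if hasattr(c, "text"))
```

The appeal of this pattern seems to be that the model only pulls in a resource when it decides it needs one, at the cost of an extra round trip per resource.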
Does anyone else have ideas or experience in this topic regarding what works and what doesn't?