WRT prompts vs sampling: why does the Prompts interface exclude model hints that are present in the Sampling interface? Maybe I am misunderstanding.
It appears that clients retrieve prompts from a server only to hydrate them with context, then execute/complete them somewhere else (like Claude Desktop, using Anthropic models). The server doesn't know how effective the prompt will be with the model the client has access to. It doesn't even know whether the client is a chat app or Zed code completion.
In the sampling interface - where the flow is inverted, and the server presents a completion request to the client - the server can suggest that the client use a particular model type/parameters. This makes sense, given that only the server knows what the request needs to be completed effectively.
Given that the server doesn't know the client's capabilities, why the asymmetry in these related interfaces?
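To make the asymmetry concrete, here's roughly what the two payloads look like per the spec (field names from the schema; values made up for illustration):

```typescript
// Sampling: server -> client request (sampling/createMessage params).
// The server can express model preferences; the client may honor or ignore them.
const samplingParams = {
  messages: [
    { role: "user", content: { type: "text", text: "Summarize this diff." } },
  ],
  modelPreferences: {
    hints: [{ name: "claude-3-sonnet" }], // advisory name hints only
    costPriority: 0.3,
    speedPriority: 0.2,
    intelligencePriority: 0.8,
  },
  maxTokens: 500,
};

// Prompts: the server's prompts/get result to the client.
// Note: no modelPreferences anywhere - the client decides where to run it.
const getPromptResult = {
  description: "Summarize a fetched page",
  messages: [
    { role: "user", content: { type: "text", text: "Summarize: <page text>" } },
  ],
};
```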
There's only one server example that uses prompts (fetch), and the one prompt it provides returns the same output as the tool call, except wrapped in a PromptMessage. EDIT: looks like there are some capabilities classes in the MCP; maybe these will evolve.
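For reference, that capability negotiation happens at initialize time - a rough sketch per the spec (names and versions invented):

```typescript
// Client -> server initialize params: the client advertises what it supports
// (e.g. sampling), so the server knows whether sampling requests are possible.
const initializeParams = {
  protocolVersion: "2024-11-05",
  capabilities: { sampling: {} },
  clientInfo: { name: "example-client", version: "0.1.0" },
};

// Server -> client initialize result: the server advertises prompts, tools, etc.
const initializeResult = {
  protocolVersion: "2024-11-05",
  capabilities: { prompts: {}, tools: {} },
  serverInfo: { name: "fetch", version: "0.1.0" },
};
```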
Our thinking is that prompts will generally be a user-initiated feature of some kind. These docs go into a bit more detail:
https://modelcontextprotocol.io/docs/concepts/prompts
https://spec.modelcontextprotocol.io/specification/server/pr...
… but TLDR, if you think of them a bit like slash commands, I think that's a pretty good intuition for what they are and how you might use them.
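For example, a "summarize" prompt (the name and arguments here are invented) would flow roughly like this:

```typescript
// 1. Client discovers the available "slash commands" (prompts/list result).
const listResult = {
  prompts: [
    {
      name: "summarize",
      description: "Summarize a fetched page",
      arguments: [{ name: "url", description: "Page to fetch", required: true }],
    },
  ],
};

// 2. User picks one (e.g. types /summarize), the client fills in arguments
//    and calls prompts/get; the server returns the hydrated messages.
const getParams = { name: "summarize", arguments: { url: "https://example.com" } };

// 3. The client feeds the returned messages to whatever model it has -
//    which is why the server never gets to pin model hints here.
```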