jascha_eng 3 days ago

Hmm, I like the idea of providing a unified interface for all LLMs to interact with outside data. But I don't really understand why this is local-only. It would be a lot more interesting if I could connect this to my GitHub in the web app and Claude automatically had access to my code repositories.

I guess I can do this for my local file system now?

I also wonder: if I build an LLM-powered app and currently simply do RAG and inject the retrieved data into my prompts, should this replace that? Can I even integrate this in a useful way?

The use case of "on your machine with your specific data" seems very narrow to me right now, considering how many different context sources and use cases there are.

jspahrsummers 3 days ago

We're definitely interested in extending MCP to cover remote connections as well. Both SDKs already support an SSE transport with that in mind: https://modelcontextprotocol.io/docs/concepts/transports#ser...
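Roughly, with the TypeScript SDK, swapping in the SSE transport looks like this (a sketch; the endpoint URL is made up, and the exact import paths and method names may differ slightly from the released SDK):

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

    // The transport is the only piece that changes between local (stdio)
    // and remote (SSE) servers; the rest of the client code stays the same.
    const transport = new SSEClientTransport(
      new URL("http://localhost:3000/sse") // assumed endpoint, not a real server
    );

    const client = new Client(
      { name: "example-client", version: "1.0.0" },
      { capabilities: {} }
    );

    await client.connect(transport);

    // Ask the server what resources it exposes, independent of any model.
    const resources = await client.listResources();
    console.log(resources);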

However, it's not quite a complete story yet. Remote connections introduce a lot more questions and complexity—related to deployment, auth, security, etc. We'll be working through these in the coming weeks, and would love any and all input!

jascha_eng 3 days ago

Will you also provide some info on how other LLM providers can integrate with this? So far it looks like it's mostly a protocol for integrating with Anthropic models and the desktop client. That's not what I thought of when I read "open-source".

It would be a lot more interesting to write a server for this if it allowed any model to interact with my data. Everyone would benefit from having more integrations, and you (Anthropic) would still have the advantage of basically controlling the protocol.

somnium_sn 3 days ago

Note that both Sourcegraph's Cody and the Zed editor support MCP now. They offer models other than Claude in their respective applications.

The initial release of the Model Context Protocol aims to solve the N-to-M relation between LLM applications (MCP clients) and context providers (MCP servers). The application is free to choose any model it wants; we carefully designed the protocol to be model-independent.
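To make the model independence concrete, here is what a minimal server looks like with the TypeScript SDK (treat it as a sketch; I'm writing the imports and schema names from memory). Nothing in it refers to a model; whichever MCP client connects decides which model to use:

    import { Server } from "@modelcontextprotocol/sdk/server/index.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import {
      ListToolsRequestSchema,
      CallToolRequestSchema,
    } from "@modelcontextprotocol/sdk/types.js";

    // Declare the server and the capabilities it offers.
    const server = new Server(
      { name: "demo-server", version: "0.1.0" },
      { capabilities: { tools: {} } }
    );

    // Advertise a single tool. Any client (Claude Desktop, Zed, Cody, ...)
    // can list and call it, regardless of which model it runs.
    server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [
        {
          name: "get_time",
          description: "Return the current server time",
          inputSchema: { type: "object", properties: {} },
        },
      ],
    }));

    server.setRequestHandler(CallToolRequestSchema, async (request) => {
      if (request.params.name === "get_time") {
        return { content: [{ type: "text", text: new Date().toISOString() }] };
      }
      throw new Error(`Unknown tool: ${request.params.name}`);
    });

    // Speak MCP over stdio; an SSE transport would work the same way.
    await server.connect(new StdioServerTransport());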

jascha_eng 3 days ago

"LLM applications" just means chat applications here though, right? This doesn't seem to cover use cases of more integrated software, like a typical documentation RAG chatbot.

nl 3 days ago

OpenAI has Actions, which is relevant here too: https://platform.openai.com/docs/actions/actions-library

Here's one for performing GitHub actions: https://cookbook.openai.com/examples/chatgpt/gpt_actions_lib...

mike_hearn 3 days ago

Local-only solves a lot of problems. Our infrastructure does tend to assume that data and credentials are on a local computer - OAuth is horribly complex to set up, and there's no real benefit to messing with that when local works fine.

TeMPOraL 3 days ago

I'm honestly happy with them starting local-first, because... imagine what it would look like if they did the opposite.

> It would be a lot more interesting if I could connect this to my github in the web app and claude automatically has access to my code repositories.

In which case the "API" would be governed by a contract between Anthropic and GitHub, to which you're a third party (read: sharecropper).

Interoperability on the web has already been mostly killed by the practice of companies integrating with other companies via back-channel deals. You are either a commercial partner, or you're out of the playground and no toys for you. Them starting locally means they're at least reversing this trend a bit by setting a different default: LLMs are fine to integrate with arbitrary code the user runs on their machine. No need to sign an extra contract with anyone!

bryant 3 days ago

> It would be a lot more interesting if I could connect this to my github in the web app and claude automatically has access to my code repositories.

From the link:

> To help developers start exploring, we’re sharing pre-built MCP servers for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer.
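If I'm reading the quickstart right, wiring one of those prebuilt servers into Claude Desktop is just a config entry in claude_desktop_config.json along these lines (a sketch; the package name and env var come from the GitHub server's README, and the token is a placeholder):

    {
      "mcpServers": {
        "github": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-github"],
          "env": {
            "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token-here>"
          }
        }
      }
    }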

jascha_eng 3 days ago

Yes but you need to run those servers locally on your own machine. And use the desktop client. That just seems... weird?

I guess the reason for this local focus is that it's otherwise hard to provide access to local files, which is a decently large use case.

Still it feels a bit complicated to me.

singularity2001 3 days ago

For me it's complementary to OpenAI's custom GPTs, which are non-local.