
Hey HN,

I built pg-mcp, a Model Context Protocol (MCP) server for PostgreSQL that provides structured schema inspection and query execution for LLMs and agents. It's multi-tenant and runs over HTTP/SSE (not stdio).

Features

- Supports multiple database connections from multiple agents

- Schema Introspection: Returns table structures, types, indexes, and constraints, enriched with descriptions from pg_catalog (for well-documented databases).

- Read-Only Queries: Controlled execution of queries via MCP.

- EXPLAIN Tool: Helps smart agents optimize queries before execution.

- Extension Plugins: YAML-based plugin system for Postgres extensions (supports pgvector and postgis out of the box).

- Server Mode: Spin up the container and it's ready to accept connections at http://localhost:8000/sse

saberience 2 days ago

Just for everyone here: the code for "building an MCP server" amounts to importing the standard MCP package for TypeScript, Python, etc., then writing as little as 10 lines of code to define something as an MCP tool.

Basically, it's not rocket science. I also built MCP servers for MySQL, Twilio, Polars, etc.

spennant 2 days ago

I built this to fill a specific need. It's not really made to talk to Claude Desktop (although it can). It's built to take multiple connections from agents who bring their own LLM (via API key or whatever) and provide context to the LLM model. Yes, it does import the standard Python MCP package, however it does quite a few non-standard things in order to achieve the multi-tenancy. I encourage you to look at the code.

koakuma-chan 2 days ago

What is multi-tenancy?

spennant 2 days ago

Just a fancy way of saying that multiple agents (with their own LLMs) can concurrently connect, also that pg-mcp can simultaneously connect to multiple Postgres servers as well.

yroc92 2 days ago

User isolation in a single data store. Basically, many customers sharing a single database securely without sharing their data.

runako 2 days ago

From HN guidelines:

> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

We are hackers here. Building is good. Sharing is good. All this is true even if you personally know how to do what is being shared, and it is easy for you. I promise you there are people who encounter every sharing post here and do not think what is posted is easy.

brulard 2 days ago

I think we exactly need to hear things like that. This is what I was wondering. Why is every MCP project such a big news? Isn't it just a few lines of code?

runako 2 days ago

Is this really “big news” or is it a GitHub link titled “Show HN”?

Is there a glitzy corporate PR page trying to sell something, or is this just code for people to read?

Did Ars Technica breathlessly cover it, or did a random hacker post and share something they worked on?

If it’s the work of a random hacker not promoted by media outlets, who benefits from negative comments about that person’s work?

Is it possible that there are at least some people who read this site who know less about the topics covered than you do, and so might find this interesting or useful?

When you post something, will it help you to improve if people post non-constructive negative feedback? Will dismissive comments like these make you more or less likely to show your work publicly?

Just food for thought…

BoorishBears 2 days ago

I'm not so sure why you're writing so much against a reasonable clarification that was made.

MCP is getting trendy, but a lot of people being drawn in can't find the actual meat of what it is outside of self-referential definitions.

Someone pointing out the reason for the flood of MCP servers being how trivial they are to implement is a helpful data point. Simple as that.

runako 2 days ago

> MCP is getting trendy, but a lot of people being drawn in can't find the actual meat of what it is outside of self-referential definitions.

My comment was against the shallow dismissal. Do you think that the negative comments helped anyone locate the meat of what it is? Did they clarify or educate?

Even the infamous Dropbox comment[1] was more constructive. I gently push back in the hope that we can have better discussions here than on other sites.

1 - https://news.ycombinator.com/item?id=9224

saberience 1 day ago

Saying you "built an MCP server" is literally the same as saying you wrote a method though.

Like you can write a single 20 line method to connect to a MySQL database, add the MCP tool decorator to the method and boom you "built an MCP server".

But no one is posting up to HN saying "Look guys, I wrote a method which connects to a MySQL server", because it takes almost zero expertise and is not novel at all.

BoorishBears 1 day ago

I didn't engage further because their comment made it transparent their protest is more about virtue signaling than anything of substance.

brulard 1 day ago

I didn't find the original comment negative. Saying it's not that hard to try to build an MCP server yourself is actually very helpful and constructive. That's how I read it. Nothing that would discourage me from submitting a "Show HN" in the future.

asdev 2 days ago

5% hackers, 95% FANG employees who think this is meaningless

esafak 2 days ago

You do have to implement every functionality that you want to expose.

1zael 2 days ago

This is wild. Our company has like 10 data scientists writing SQL queries on our DB for business questions. I can deploy pg-mcp for my organization so everyone can use Claude to answer whatever is on their mind? (e.g. "show me the top 5 customers by total sales")

sidenote: I'm scared of what's going to happen to those roles!

clusterfook 2 days ago

Yep gonna be easy

Q: show me the top 5 customers by total sales

A: System.Data.Odbc.OdbcException (0x80131937): ERROR [57014] ERROR: canceling statement due to statement timeout;

Q: Why do I get this error

A: Looks like it needs an index, let me create that for you. Done. Rerunning query.

could not close temporary statistics file "pg_stat_tmp/global.tmp": No space left on device

Q: Why this error

A: 429 Too Many Requests

*Rubs hands*... great next 10 years to be a backend dev.

curious_cat_163 2 days ago

That’s a good example of a worst case scenario. This is why we would still need humans loitering about.

The question is do they still need 10? Or 2 would suffice? How about 5?

This does not need to be a debate about the absolutes.

clusterfook 1 day ago

You will need 2. BUT, and here is the rub. https://en.wikipedia.org/wiki/Jevons_paradox

I am hoping Jevon will keep employing me. He has been a great boss for the last 25 years TBH.

brulard 1 day ago

I have to say I've had very good results creating and optimizing quite complex queries with Sonnet. But letting an LLM run them on its own in production is quite a different beast.

fullstackchris 2 days ago

and the next 10 after that, and the next 10 after that, and...

otabdeveloper4 2 days ago

Probably nothing. "Expose the database to the pointy-haired boss directly, as a service" is an idea as old as computing itself. Even SQL itself was originally an iteration of that idea. Every BI system (including PowerBI and Tableau) was supposed to be that.

It doesn't work because the PHB doesn't have the domain knowledge and doesn't know which questions to ask. (No, it's never as simple as group-by and top-5.)

jaccola 2 days ago

I would say SQL still is that! My wife had to learn some SQL to pull reports in some non-tech finance job 10 years ago. (I think she still believes this is what I do all day…)

I suppose this could be useful in that it prevents everyone in the company having to learn even the basics of SQL which is some barrier, however minimal.

Also the LLM will presumably be able to see all the tables/fields and ‘understand’ them (with the big assumption that they are even remotely reasonably named) so English language queries will be much more feasible now. Basically what LLMs have over all those older attempts is REALLY good fuzziness.

I see this being useful for some subset of questions.

pclmulqdq 2 days ago

A family friend maintains a SQL database of her knitting projects that she does as a hobby. The PHB can easily learn SQL if they want.

conradfr 2 days ago

But he doesn't.

The project manager also won't learn behat and write tests.

Your client also won't use the CMS to update their website.

spennant 2 days ago

It won’t be that easy. First off, most databases in the wild are not well documented. LLMs benefit from context, and if your tables/columns have non-intuitive or non-descriptive names, the SQL may not even work. Second, you might benefit from an LLM fine-tuned on writing code and/or an intelligent Agent that checks for relevancy and ambiguity in user input prior to attempting to answer the question. It would also help if the agent executed the query to see how it answered the user’s question. In other words “reasoning”… pg-mcp simply exposes the required context for Agents to do that kind of reasoning.
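
A hypothetical sketch of that check-and-retry loop (every helper below is a stand-in for illustration, not part of pg-mcp):

```python
# Hypothetical agent loop: generate SQL, run it, and verify the result
# actually answers the question, retrying with feedback otherwise.
def ask(question, generate_sql, run_query, answers_question, max_tries=3):
    feedback = None
    for _ in range(max_tries):
        sql = generate_sql(question, feedback)
        result = run_query(sql)
        if answers_question(question, result):
            return result
        feedback = f"Result {result!r} did not answer: {question}"
    return None

# Stubbed demo: the "LLM" and "database" are lambdas; the second try passes.
attempts = []
res = ask(
    "top 5 customers",
    generate_sql=lambda q, fb: f"SELECT /* attempt {len(attempts)} */ 1",
    run_query=lambda sql: attempts.append(sql) or len(attempts),
    answers_question=lambda q, r: r >= 2,
)
# res == 2 after two attempts
```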

nickdichev 2 days ago

The COMMENT command will finally be useful :)
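
Concretely, those descriptions come from Postgres's COMMENT command (table and column names below are hypothetical); they end up in the system catalogs that pg-mcp's introspection reads:

```sql
-- Hypothetical schema; these descriptions are stored in the system
-- catalogs and surfaced to the LLM during schema introspection.
COMMENT ON TABLE orders IS 'One row per customer order, including cancellations';
COMMENT ON COLUMN orders.status IS 'Lifecycle state: pending, shipped, cancelled';
```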

dinfinity 2 days ago

Then let the AI first complete the documentation by looking at the existing documentation, querying the DB (with pg-mcp), etc.

Do human reviewing and correcting of the updated documentation. Then ensure that the AI knows that the documentation might still contain errors and ask it to do the 'actual' work.

moltar 2 days ago

There are LLM SQL benchmarks. [1] And the state-of-the-art solution is still only at 77% accuracy. Would you trust that?

[1] https://bird-bench.github.io/

flappyeagle 2 days ago

Yes. Ask it to do it 10 times and pick the right answer
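
A minimal sketch of that sampling-and-voting idea (hypothetical answer strings; assumes responses can be canonicalized for comparison):

```python
from collections import Counter

def majority_vote(answers):
    """Pick the most common answer from repeated samples. This only helps
    when errors are independent; if every sample shares the same blind
    spot, voting just repeats the mistake with more confidence."""
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / len(answers)

# Hypothetical outputs from 10 runs of the same SQL-generation prompt:
answer, confidence = majority_vote(
    ["A", "A", "B", "A", "C", "A", "B", "A", "A", "A"])
# answer == "A", confidence == 0.7
```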

pclmulqdq 2 days ago

That only works if you assume the fail cases are uncorrelated. Spoiler alert: they are not.

flappyeagle 2 days ago

Ask 10 different models then

pclmulqdq 2 days ago

Same problem: The models are also correlated on what they can and can't solve.

To give you an extreme example, I can ask 1000000 different models for a counterexample to the 3n + 1 problem, and all will get it wrong.

flappyeagle 2 days ago

No. What a bizarre example to choose. This is so easy to demonstrate. They will all come back with the exact same correct answer

pclmulqdq 2 days ago

If it's so easy, go do it. You can publish the result in any math journal you like with just a title and a number, because this is one of the hardest problems in mathematics.

For reference: https://en.wikipedia.org/wiki/Collatz_conjecture

flappyeagle 2 days ago

My guy, every LLM has read Wikipedia

pclmulqdq 1 day ago

I don't know if you're purposely being dense. The first sentence of the Wikipedia article says that this is a famous unsolved problem.

So no, sampling 1000000 LLMs will not get you a solution to it. I guarantee you that.

flappyeagle 1 day ago

It will get you the correct answer, not a solution. Once again it’s a terrible example, I don’t know why you used it. It’s certainly not a gotcha

pclmulqdq 1 day ago

The reason I used it is that the correct answer to the actual problem is unknown and nobody has any idea how to solve it. No amount of sampling an LLM will give you a correct answer. It will give you the best known answer today, but it won't give you a correct answer. This is an example where LLMs all give correlated answers that do not solve the problem.

If you want to scale back, many programming problems are going to be like this, too. Failure points of different models are correlated as much as failure points during sampling are correlated. You only gain information from repeated trials when those trials are uncorrelated, and sampling multiple LLMs is still correlated.

flappyeagle 4 hours ago

the correct answer is "the solution is unknown"

risyachka 2 days ago

So you will ask "What is our churn?", get a random result, and then turn your whole marketing strategy around a wrong number?

That's cute.

Kiro 2 days ago

There are hundreds of text-to-SQL companies and integrations already. What's different about this that makes you react like that?

romanovcode 2 days ago

Those companies will be dead once this goes mainstream. Why pay a 3rd-party company when you can ask an LLM to create graphs and analysis of whatever you want? Pair it with scheduled tasks and I really don't see any value in those SaaS products.

slt2021 2 days ago

there are a lot of nuances in Business Analytics. You can maybe get away with GenAI for naive questions like "Who are my top 5 customers?", but that's not the type of insight usually needed. Most companies already know their top 5 customers by heart, and these don't change a lot.

Nuanced BI analytics can have a lot of toggles and filters and drilldowns, like compare sales of product A in category B subcategory C, but only for stores in regions X,Y and that one city Z during time periods T1, T2. and out of these sales, look at sales of private brand vs national brand, and only retail customers, but exclude purchases via business credit card or invoiced.

with every feature in a DB (of which there could be thousands), the number of permutations and dimensions grows very quickly.

what's probably going to happen is that simple questions could be self-served by GenAI, but more advanced usage will still need intervention by a specialist. So we would see some improvement in productivity, but people will not lose jobs. Perhaps the number of jobs could even increase due to increased demand for analytics, as often happens with increased efficiency/productivity (Jevons paradox).

Kiro 2 days ago

Those companies and integrations are already using LLMs. That's the whole point. I'm only talking about LLM products, many of which are free and open source. This has been mainstream for years.

a-dub 2 days ago

is that true? i'd like that, but i get the sense that this mcp stuff is more oriented around programming assistant and agent applications.

i suppose the desktop app can use it, but how good is it for this general-purpose "chat with the database for lightweight analytics" use case? is it worth the trouble of dealing with some electron app to make it work?

sshine 2 days ago

> i get the sense that this mcp stuff is more oriented around programming assistant and agent applications

Agents will become ubiquitous parts of what is currently the chat interface.

So if you bother with a website or an electron app now, MCP will just add more capabilities to what you can control using agents.

a-dub 1 day ago

yeah, i understand the premise. my question revolves around how well it actually works today for bi style applications. specifically, how close is it to being something that you can just drop in as a smart query and plotting interface rather than a bi stack that is built around something like tableau.

when i've read through documentation for mcp servers, it seems like the use cases they've mostly been focused on are improving effectiveness of programming assistants by letting them look at databases associated with codebases they're looking to modify.

i understand that these things are meant to be generic in nature, but you never really know if something is fit for purpose until it's been used for that purpose. (at least until agi, i suppose)

slt2021 2 days ago

didn't Tableau (and some other BI solutions) have this feature out of the box?

fulafel 2 days ago

From the docker-compose file:

    ports:
      - "8000:8000"
This will cause Docker to expose this to the internet and even helpfully configure an allow rule to the host firewall, at least on Linux.
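
One mitigation, if the server should only be reachable from the host itself, is to bind the published port to loopback (a sketch against the compose snippet above; adjust to your network):

```yaml
ports:
  - "127.0.0.1:8000:8000"   # published on loopback only, not all interfaces
```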

rubslopes 2 days ago

Good catch.

OP, exposing your application without authentication is a serious security risk!

Quick anecdote: Last week, I ran a Redis container on a VPS with an exposed port and no password (rookie mistake). Within 24 hours, the logs revealed someone attempting to make my Redis instance a slave to theirs! The IP traced back to Tencent, the Chinese tech giant... Really weird. Fortunately, there was nothing valuable stored in it.

acheong08 2 days ago

> The IP traced back to Tencent, the Chinese tech giant... Really weird.

They're a large cloud provider in Asia like Amazon AWS or Microsoft Azure. I doubt such a tech company would make it that obvious when breaking the law.

rubslopes 1 day ago

I didn't know that, thank you.

spennant 2 days ago

I made a few assumptions about the actual deployer and their environment that I shouldn’t have… I’ll need to address this. Thanks!

tudorg 2 days ago

This is great, I like in particular that there are extensions plugins. I’ll be looking at integrating this in the Xata Agent (https://github.com/xataio/agent) as custom tooling.

spennant 2 days ago

Xata.io looks very interesting!!! I was thinking about building an intelligent agent for pg-mcp as my next project, but it looks like you did a lot of the hard work already. When thinking about the "AI Stack" I usually separate concerns like this:

  UI <--> Agent(s) <--> MCP Server(s) <--> Tools/Resources
             |
           LLM(s)

tudorg 1 day ago

That's very similar to what we are thinking as well, and we'd like to separate the Agent tools into an MCP server as well as use MCP for custom tools.

scottpersinger 2 days ago

Where's the pagination? How does a large query here not blow up my context:

https://github.com/stuzero/pg-mcp/blob/main/server/tools/que...

spennant 1 day ago

It's coming...

jillesvangurp 2 days ago

Is there more to MCP than being a simple Remote Procedure Call framework that allows AI interactions to include function calls driven by the AI model? The various documentation pages are a bit hand wavy on what the protocol actually is. But it sounds to me that RPC describes all/most of it.

doug_durham 2 days ago

The biggest contribution is the LLM-compatible metadata that describes the tool and its arguments. It is trivial to adopt. In Python you can use FastMCP to add a decorator to a function, and as long as that function returns a JSON string you are in business. The decorator extracts the arguments and doc strings and presents that to the LLM.
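
The extraction step can be sketched with stdlib introspection (a simplified illustration of the idea, not FastMCP's actual code; the tool function is a stub):

```python
import inspect
import json

def tool_schema(fn):
    """Roughly what the decorator harvests: name, docstring, and typed
    parameters become the tool description the LLM sees."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {name: p.annotation.__name__
                       for name, p in sig.parameters.items()},
    }

def run_query(sql: str) -> str:
    """Run a read-only SQL query and return rows as JSON."""
    return json.dumps([])  # stub; a real tool would hit the database

schema = tool_schema(run_query)
# schema == {"name": "run_query",
#            "description": "Run a read-only SQL query and return rows as JSON.",
#            "parameters": {"sql": "str"}}
```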

jillesvangurp 2 days ago

What makes a spec LLM compatible? I've thrown a lot of different things at gpt o1 and it generally understands them better than I do. OpenAPI specifications, unstructured text, log output, etc.

spennant 2 days ago

Indeed. Anything you do with MCP can be done in more traditional ways.

oulipo 2 days ago

Nice!

What I'd be looking for is a MCP server where I can run in "biz/R&D exploration-mode", eg:

- assume I'm working on a replica (shared among all R&D engineers)

- they can connect and make read-only queries to the replica for the company data

- they have a temporary read-write schema just for their current connection so they can have temporary tables and caches

- temporary data is deleted when they close the session

How could you make a setup like that so that when using your MCP server, I'm not worried about the model / users modifying the data, but only doing their own private queries/tables?
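
One Postgres-level sketch of that setup (role name hypothetical): read-only transactions still permit temporary tables, and temp tables live in the session-private pg_temp schema, which is dropped automatically when the connection closes:

```sql
-- Hypothetical role for R&D exploration on the replica.
CREATE ROLE rd_explorer LOGIN PASSWORD 'change-me';
GRANT USAGE ON SCHEMA public TO rd_explorer;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO rd_explorer;
ALTER ROLE rd_explorer SET default_transaction_read_only = on;

-- Read-only mode still allows temp tables (they are session-private
-- and vanish when the connection closes):
CREATE TEMP TABLE scratch AS SELECT 1 AS n;
```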

ahamilton454 2 days ago

I don't understand the advantage of having the transport protocol be HTTP/SSE rather than stdio, especially in this case when it's literally running locally.

spennant 2 days ago

The use case for pg-mcp is server deployment - local running is just for dev purposes. HTTP/SSE enables multiple concurrent connections and network access, which stdio can't provide.

runako 2 days ago

I'm still trying to grok MCP, would be awesome if you could include usage examples in the doc.

Good luck!

saberience 2 days ago

It's not complicated at all.

All it does is expose methods as a "tool" which is then brought back to your LLM and defined with its name, description and input parameters.

E.g. Name: "MySqlTool", Description: "Allows arbitrary MySQL queries to the XYZ database", Parameters: "string: sqlToExecute"

The MCP Client (e.g. Claude Desktop, Claude Code), is configured to talk to an MCP server via stdio or sse, and calls a method like "tools/list", the server just sends a list back (in JSON) of all the tools, names, descriptions, params.

Then, if the LLM gets a query that calls for e.g. a web search or web scraping, it just outputs a tool-use token and stops inferencing. The code then calls that tool via stdio/SSE (JSON-RPC) on the MCP server, which runs that method and returns the result; the result is added to the message history in the LLM, and inferencing runs again from the beginning.
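
Sketched as wire messages (simplified shapes per my reading of the MCP spec, reusing the example tool above):

```python
import json

# Client asks the server what tools exist (MCP speaks JSON-RPC 2.0):
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Server answers with names, descriptions, and input schemas:
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [{
        "name": "MySqlTool",
        "description": "Allows arbitrary MySQL queries to the XYZ database",
        "inputSchema": {
            "type": "object",
            "properties": {"sqlToExecute": {"type": "string"}},
            "required": ["sqlToExecute"],
        },
    }]},
}

# When the model emits a tool-use token, the client issues the call:
call_request = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "MySqlTool",
               "arguments": {"sqlToExecute": "SELECT 1"}},
}

wire = json.dumps(call_request)
```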

runako 2 days ago

I think people who have been building with LLMs have a different view on what is complicated vs not :-)

It may be easy for you to configure, but you dropped some acronyms in there that I would have to look up. I have definitely not personally set up anything like this.

bavell 2 days ago

It's basically a simple rpc server, there's nothing complicated going on...

runako 2 days ago

Then please write up and share here some example usage documentation for someone who has never used MCP. (That was my suggestion upthread.)

As a side note, do people here not realize that less complicated examples are often better for learning? Have we as a community forgotten this basic truism?

Since it's not complicated, you should be able to write it up quickly, and parallel commenters to mine suggest there is an audience for such documentation.

Thanks!

mparis 2 days ago

+1

My first foray into using MCP was via Claude Desktop. Would be great if you packaged your tool such that one could add it with a few lines in their ‘~/Library/Application Support/Claude/claude_desktop_config.json’

jamestimmins 2 days ago

Same here. Tonight I added Whatsapp to Claude Desktop via Luke Harries' https://github.com/lharries/whatsapp-mcp. Very solid intro into how it all works.

romanovcode 2 days ago

It's very simple and this is actually good example.

1. You add this MCP to your DB (make sure it is securely connected to your AI of choice of course)

2. Ask anything about your data, ask to make graphs, ask to make scheduled tasks, ask to analyze queries and show optimizations and so on.

3. Profit, literally. No need to pay BI companies thousands each month.

teaearlgraycold 2 days ago

The main things that made MCP hard for me to understand at first are that it's both transport-agnostic (so no leveraging semantic HTTP) and an async task management protocol as well as a tool-use protocol. The name itself is also poorly chosen. I would call it Tool Use Protocol.

Think about each MCP implementer like an agent’s input/output device.

revskill 2 days ago

Everytime i see a cloud API_KEY is required, i'm off.

Jaxkr 2 days ago

Well the key it wants is for Anthropic and you can’t run those models locally.

spennant 2 days ago

The pg-mcp server doesn’t need an API key. Those variables are only needed to run the example claude-client. You can write your own agent to talk to whatever LLM you like.