throwpoaster 5 days ago

Let's do it! What's the idea?

__MatrixMan__ 5 days ago

A list of ways people can trust each other (e.g. to be a skilled plumber, to be a fair mediator, to be a real human, to not let their key get compromised, and many other things). I call these colors.

There's also a directed graph where the nodes are users. An edge indicates that this user trusts that user, and each edge is colored to indicate which type of trust it is.

Given two users, they can compare graphs to decide whether they both trust (transitively) any other users in some set of colors. And if a cycle appears, its members form a community of experts, who can follow the graph in reverse to find out which other users consider them experts.
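
To make that concrete, here's a minimal sketch of how the colored graph and those two queries could look. It's just one possible encoding (the class name, method names, and color strings are invented for illustration), assuming the whole graph fits in memory:

    from collections import defaultdict, deque

    class TrustGraph:
        """Directed graph: an edge (truster -> trustee, color) means 'truster trusts trustee for color'."""

        def __init__(self):
            # edges[user][color] -> set of users they trust directly in that color
            self.edges = defaultdict(lambda: defaultdict(set))

        def add_trust(self, truster, trustee, color):
            self.edges[truster][color].add(trustee)

        def trusted_set(self, user, colors):
            """Everyone `user` trusts, directly or transitively, following only edges in `colors`."""
            seen, queue = {user}, deque([user])
            while queue:
                current = queue.popleft()
                for color in colors:
                    for trustee in self.edges[current][color]:
                        if trustee not in seen:
                            seen.add(trustee)
                            queue.append(trustee)
            seen.discard(user)
            return seen

        def mutual_trust(self, a, b, colors):
            """Users that both a and b trust (transitively) in the given colors."""
            return self.trusted_set(a, colors) & self.trusted_set(b, colors)

        def community_of(self, user, color):
            """Users in a trust cycle with `user` for `color`: user trusts them and they trust user back."""
            forward = self.trusted_set(user, {color})
            return {other for other in forward if user in self.trusted_set(other, {color})}

    g = TrustGraph()
    g.add_trust("alice", "bob", "plumbing")
    g.add_trust("bob", "carol", "plumbing")
    g.add_trust("carol", "bob", "plumbing")
    g.add_trust("dave", "carol", "plumbing")
    print(g.mutual_trust("alice", "dave", {"plumbing"}))  # {'bob', 'carol'}
    print(g.community_of("bob", "plumbing"))              # {'carol'}: bob and carol trust each other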

It's sort of like how we have representatives in Congress, except that instead of being one layer deep, with millions of people represented by one person, it can be nested as deeply as needed to ensure that the experts are not overloaded (since many of us are somewhere in the middle, we distribute the load by playing both roles, depending on who we're dealing with). It also differs from typical representative democracy because you can express trust in somebody's diplomacy and simultaneously avoid trusting their understanding of economics (or whatever other colors you care to).

Ideally it would be a system in which the most trusted and capable people for any job are easy to find and easy to support, and in which we focus on becoming skilled and trustworthy rather than on the ownership of scarce things.

Human societies already work like this, and have for a million years or so, but it stops working well when the cognitive burden of walking all of these trust graphs becomes too much to bear; then things get authoritarian. We now have the technology to scale it better, but the implicit, non-specialized authorities are still in charge.

A couple of applications for this that could work in the near term:

1. If you have a dispute, you can use the data to find a mediator who is trusted by you and the other party. And not just trusted, but trusted in the relevant way (sketched in code after the list). This is a step towards a better court system, better because the arbiter is explicitly trusted by the complainants and because the arbiter is an expert in the color of the complaint.

This would solve the well-somebody-needs-to-be-able-to-undo-the-transaction problem without invoking a bank and without leaving it unsolved. The arbiter would be determined by the trust settings of the parties to the transaction. There's a lot more consent and specificity in that than in what we do today.

2. If you find a dubious claim, you can see who signed it and check for a trust path between you and that person (also sketched below). 1,000,000 fake Amazon reviews mean nothing through that lens, since you don't trust them. But two or three reviews signed by people that you explicitly trust (perhaps transitively) would mean a great deal. This gives us a way to ignore scammers and malicious AIs, and creates a space in which being trustworthy is an asset (contrast this with the world we've built, which is more about commanding the most attention).
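
For (1), the mediator search is just the intersection of what the two parties transitively trust under a mediation-type color. A toy sketch, assuming an in-memory edge list; the names and the "mediation" color string are invented:

    from collections import deque

    # trust[user] -> list of (trustee, color) edges; toy data, all names hypothetical
    trust = {
        "alice": [("bob", "mediation")],
        "bob":   [("carol", "mediation")],
        "dave":  [("carol", "mediation"), ("erin", "plumbing")],
    }

    def reachable(user, colors):
        """Everyone `user` trusts, directly or transitively, in the given colors."""
        seen, queue = {user}, deque([user])
        while queue:
            for trustee, color in trust.get(queue.popleft(), []):
                if color in colors and trustee not in seen:
                    seen.add(trustee)
                    queue.append(trustee)
        seen.discard(user)
        return seen

    def candidate_mediators(party_a, party_b):
        """Mediators trusted (transitively) by both parties, in the relevant color."""
        return reachable(party_a, {"mediation"}) & reachable(party_b, {"mediation"})

    print(candidate_mediators("alice", "dave"))  # {'carol'}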
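
And for (2), weighing a signed review reduces to a path-existence check from the reader to the signer in whatever colors matter. Another toy sketch with invented names and a "reviews" color:

    from collections import deque

    # Who trusts whom, per color; a signed review is just (signer, text). All names are invented.
    trust = {
        "me":    [("alice", "reviews")],
        "alice": [("bob", "reviews")],
    }
    reviews = [
        ("bob", "Great kettle, survived two years of daily use."),
        ("bot_42", "Best kettle ever!!!"),
        ("bot_43", "Best kettle ever!!!"),
    ]

    def has_trust_path(source, target, colors):
        """True if `source` trusts `target`, possibly transitively, within `colors`."""
        seen, queue = {source}, deque([source])
        while queue:
            current = queue.popleft()
            if current == target:
                return True
            for trustee, color in trust.get(current, []):
                if color in colors and trustee not in seen:
                    seen.add(trustee)
                    queue.append(trustee)
        return False

    # Reviews signed by people outside your trust graph simply don't count.
    credible = [(signer, text) for signer, text in reviews
                if has_trust_path("me", signer, {"reviews"})]
    print(credible)  # [('bob', 'Great kettle, survived two years of daily use.')]

The point is that the million bot-signed reviews never intersect your trust graph at all, while the couple you can reach through people you chose to trust carry real weight.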

I'm not saying I have it right, but such things are worth trying in general, and "crypto" is a much better medium for them than bureaucracy (although I'm more excited about CRDTs than blockchains, because I think partition tolerance is more important than consistency).