Those assume robots that are smarter than us. What if we assume, as is likely the case now, robots that are dumber? How do we address the actual, current issues with code-as-law, expectations-versus-rules, and conflicts of laws in a genuinely structured fashion, without relying on vibes (like people) or a bunch of RNG (like an LLM)?
What system do you propose that implements code-as-law? What kind of architecture does it have?
I don’t know! I’m currently trying a strong Bayesian prior on the RL action planner, which trades off well for enforcement but poorly for legibility and ingestion. Aside from Spain, there isn’t a lot of computer-legible law to transpile; LLM assistance always needs to be checked, and some of the larger submodels reach the limits of the explainability framework I’m using.
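To give a feel for what “strong prior” means here, a toy sketch (not my actual planner; the action names and numbers are made up): the transpiled rules put near-zero prior mass on illegal actions, and the learned policy’s preferences get multiplied through that prior, so the prior dominates unless the policy is overwhelmingly confident. Which is exactly why enforcement is easy and legibility/overrides are hard.

```python
import numpy as np

ACTIONS = ["proceed", "yield", "cross_solid_line", "stop"]

# Strong prior from the transpiled rules: near-zero mass on illegal actions.
legal_prior = np.array([0.32, 0.32, 0.04, 0.32])

# Whatever the learned planner currently prefers (e.g. crossing the solid
# line to get around a stalled truck).
policy_logits = np.array([0.1, -0.5, 2.0, -1.0])
policy_probs = np.exp(policy_logits) / np.exp(policy_logits).sum()

# Posterior-style combination: prior * policy preference, renormalized.
# The prior wins unless the policy is overwhelmingly confident.
posterior = policy_probs * legal_prior
posterior /= posterior.sum()

for action, p in zip(ACTIONS, posterior):
    print(f"{action:18s} {p:.3f}")
```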
There’s also still the human-feedback (HF) step that needs to be incorporated, which is expensive! But the alternative is Waymo, which keeps the law perfectly even when “everybody knows” it needs to be broken sometimes for traffic (society) to function acceptably. So the strong prior above needs to be coordinated with HF and the appropriate penalties assigned…
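One way to coordinate the two, again a hypothetical sketch with made-up rule names and penalty values: let HF calibrate a per-rule penalty instead of a hard mask, so the planner can break a rule when the situational reward genuinely outweighs it (the “everybody knows” case), rather than keeping the law perfectly no matter what.

```python
# HF-calibrated cost of violating each rule (names and numbers are made up).
RULE_PENALTIES = {
    "no_crossing_solid_line": 4.0,
    "full_stop_at_sign": 2.5,
}

def score(action_reward: float, violated_rules: list[str]) -> float:
    """Planner objective: task reward minus HF-weighted legal penalties."""
    return action_reward - sum(RULE_PENALTIES[r] for r in violated_rules)

# Stalled truck blocking the lane: crossing the solid line is illegal but
# still scores higher than waiting indefinitely, so the planner takes it.
print(score(action_reward=6.0, violated_rules=["no_crossing_solid_line"]))  # 2.0
print(score(action_reward=0.5, violated_rules=[]))                          # 0.5
```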
In other words: it’s a mess! But assumptions of “AGI” don’t really help anyone.