The whole issue of trusting other people's code does have a solution besides just not running it: capability-based code execution ("Capability-based security" if you want to look it up).
Processes can forfeit the permission to do certain things. Similarly, you could allow functions to forfeit the permission to call certain functions down the callstack (transitively, even), or only allow a function to be called from certain contexts/functions.
While dynamic enforcement certainly comes with a (minor) performance hit, it should be possible, especially in an interpreted language. I have not yet seen this done in the context of a programming language, which is probably due to how difficult it would be to set a proper scope and get it right. If A is forbidden from deleting files, for example, but the program allows it to send messages to B (perhaps even by user choice), and B is allowed to delete files, then that is a natural escape hatch unless B is specifically designed to take this into account. Nonetheless, a hybrid system should be possible to achieve, and would at least improve the situation by a non-negligible margin.
In the spirit of the original article: Most editor plugins don't need to interact with other processes or the file system directly, and instead could only affect an associated buffer. In this scenario, I believe that it is actually possible to solve this issue. For other applications, such as git integration, the mass of code that would have to be inspected can at least be reduced.
macOS has lots of "capabilities" stuff now. It's constantly giving me opaque prompts asking for some privilege or other, with no explanation of why it's necessary, or what will or won't happen if I grant or deny it.
I just had the joy of kicking off a large crawl across my file system(s); expecting it to take some time, I started it just before I went to bed. This morning I arrived to a dialog, "Terminal would like to access...", which naturally stalled my three-command pipeline in its tracks, eliminating the primary benefit of doing the job overnight in the first place.
The other day, I guess I installed Discord, or updated it, or something. Anyway, Discord was asking for keystrokes from other apps. "Why!?"
Why does my IDE need root access to install?
What happens if I don't let random program crawl my network?
As a carbon based user, "capabilities" are a pain in the neck.
Windows lost this battle decades ago, when every. single. new program required "admin" privs to be installed. Every single one. Heck, I bet Calculator asked for it. So we, the numbed users, wanting to just "use the software," said "yes. Yes. YES! PLEASE MAKE IT STOP!" to the point where we gave the prompts no second glance at all.
Capabilities look great on paper. They're fine for things like systemd and daemons and stuff "administrator folks" install.
For human beings, not so much.
The problem here is that Emacs is the Elisp interpreter. They are the same thing.
Emacs would have to start another process for Elisp analysis and code completion. That would be a massive rearchitecting of the system.
For folks who have never poked around in Emacs, the specific difficulty is that if you are in an Emacs Lisp file, you almost certainly want to edit what Emacs itself is doing.
I'm specifically talking about scenarios such as "you set debug-on-error."
To that end, the proposal would probably be something like: flymake/flycheck use a child Emacs process to query the code, but user-initiated evaluation would still happen in the main Emacs?
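A sketch of that child-process shape, using Python as a stand-in for Emacs since an Elisp example can't run standalone here (the real version would spawn something like `emacs -Q --batch` and ask it to analyze the buffer). The important part is that the untrusted buffer text reaches the child as data on stdin, and is never spliced into code the child evaluates.

```python
import subprocess
import sys

# The malicious example from the article, treated purely as text.
UNTRUSTED = '(rx (eval (call-process "touch" nil nil nil "/tmp/owned")))'

# A toy "analyzer" run in a separate interpreter: it reads the source
# from stdin and reports a verdict without ever evaluating it. The
# substring check is deliberately naive; it stands in for real analysis.
ANALYZER = r"""
import sys
src = sys.stdin.read()
print("unsafe" if "call-process" in src else "ok")
"""

result = subprocess.run(
    [sys.executable, "-c", ANALYZER],
    input=UNTRUSTED, capture_output=True, text=True,
)
print(result.stdout.strip())  # -> unsafe
```

Even if the child process does get tricked into evaluating something, the blast radius is the child, not the session holding your unsaved buffers.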
Wouldn't it be sufficient to "just" write a kind of context manager that watches the macro expansion, looks at what's being done at each step, and divvies that up into safe and unsafe execution, so that at least the example from the article
(rx (eval (call-process "touch" nil nil nil "/tmp/owned")))
doesn't just automatically run? Obviously it's a lot of work, depending on how sophisticated you want it to be, but you probably don't need to rearchitect much. I feel like capability systems are... the right solution to the right problem at the wrong time.
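As a rough illustration of that divvying-up (a hypothetical Python sketch, not Emacs's actual machinery, with a made-up denylist): parse the expansion as an s-expression and walk it, flagging any form whose head is a known-dangerous call, before anything is evaluated.

```python
import re

# Illustrative denylist of side-effecting calls; not exhaustive.
DENYLIST = {"call-process", "shell-command", "delete-file", "eval"}

def parse(src):
    """Read one s-expression from a string into nested Python lists."""
    tokens = re.findall(r'\(|\)|"[^"]*"|[^\s()"]+', src)
    def read(i):
        if tokens[i] == "(":
            form, i = [], i + 1
            while tokens[i] != ")":
                node, i = read(i)
                form.append(node)
            return form, i + 1
        return tokens[i], i + 1
    return read(0)[0]

def unsafe_calls(form):
    """Walk the tree and collect every call whose head is denylisted."""
    found = []
    if isinstance(form, list) and form:
        if isinstance(form[0], str) and form[0] in DENYLIST:
            found.append(form[0])
        for sub in form:
            found.extend(unsafe_calls(sub))
    return found

expansion = '(rx (eval (call-process "touch" nil nil nil "/tmp/owned")))'
print(unsafe_calls(parse(expansion)))  # -> ['eval', 'call-process']
```

A purely syntactic walk like this can't catch everything (the call can be constructed at runtime), but it would stop the article's example from firing silently during macro expansion.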
By that I mean: in an ideal world, nefarious code should never end up on my system in the first place, regardless of whether it gets run, regardless of whether it is properly sandboxed to avoid damage. At the end of the day, I don't want bad code on my system at all.
"Easier said than done" is why I said "right solution, right problem, wrong time." But this approach comes at a cost. A rather extreme cost, in some cases: walled-garden app stores. Runtime overhead.
Development overhead. I'm not just a user, I'm also a developer, so these things end up being roadblocks I have to navigate. And I have to navigate them for no reason of my own. I am not trying to steal my users' data. But in a way, I am getting punished for the actions of others.
Anyway. No solutions here.