Sure, I won't disagree that there exist plenty of unqualified people doing things wrong while pretending that they know what they're doing. That seems obvious enough in the general sense.
I'll also agree that there are systems that, for whatever reason, can't realistically be simplified.
However, on what basis do you claim that email - or rather email anti-abuse - qualifies as such?
> The alternative, having no requirements is having no messaging at all. You literally can't have it both ways.
You seem to be implying that the usefulness of the system derives from or otherwise depends on the difficulty of configuring it. However it doesn't seem to me that you've provided evidence of that. On the contrary, isn't the entire point of a reputation system that it avoids such gatekeeping by depending on historical behavior rather than some arbitrary barrier to entry?
I would make my own claim: that there exist software implementations far more complex than they realistically need to be, often because the thing being implemented has evolved over time and the resources, motivation, or whatever else is needed to re-engineer and rewrite the implementation aren't available.
I would also claim that sometimes software has shitty UX for no better reason than that the person developing it doesn't understand the needs of (some subset of) the people using it.
When configuring a network node to exchange messages in a really quite primitive protocol requires professional expertise to do correctly, I'd say that's a clear indication that something is very wrong somewhere in the stack. Where exactly is certainly up for debate, but a well-behaved entity should not find it difficult to self-host such basic functionality.
Communication as a whole, not just email. The failure to address this points to an inherent limitation of the systems we've built for computation. You'll have to revisit automata theory and have some knowledge of why CPUs are able to do work at the lowest levels of abstraction.
Boiled down, it comes to which system properties are preserved: the von Neumann architecture acts as a DFA. Computers act on a single state at any one time, only ever moving along one edge of an abstract state graph at each operation.
People, by contrast, are generally modeled as NFAs: we can operate on multiple states at once and decompose states, which gives us a wider range of problem types we can solve.
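If it helps make the distinction concrete, here's a minimal sketch; the states and transitions are toys invented purely for illustration:

```python
# Minimal sketch of the DFA/NFA distinction above.
# A DFA tracks exactly one state per step; an NFA tracks a *set*
# of states simultaneously.

DFA = {("start", "a"): "seen_a", ("seen_a", "b"): "accept"}

def dfa_step(state, symbol):
    # Exactly one state in, exactly one state out: one edge followed.
    return DFA[(state, symbol)]

NFA = {("start", "a"): {"seen_a", "branch"}, ("seen_a", "b"): {"accept"}}

def nfa_step(states, symbol):
    # A set of states in, a set of states out: all matching edges at once.
    out = set()
    for s in states:
        out |= NFA.get((s, symbol), set())
    return out

print(dfa_step("start", "a"))    # seen_a
print(nfa_step({"start"}, "a"))  # {'seen_a', 'branch'} (order may vary)
```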
This is abstract, but the gist is that the computer follows an abstract rail of decisions that is really quite dumb, but necessarily so. It doesn't halt unexpectedly or run away except with bugs, because we preserve properties that limit the math to areas where those problems cannot arise, except from outside the working environment (e.g. power loss, hardware failure, etc.).
There's a reduction to an abstract algebra system inherent in the architecture, achieved by preserving certain properties in the design. You first run across this paradigm in first-year EE (Signals and Systems), and a course is available on OCW if you haven't taken that. Detailed knowledge isn't needed, though, unless you plan on designing these hardware systems.
Any time an underlying state is both true and false given the same observable state (the message), as happens in adversarial environments, the property requirements for computation are broken. This can naturally occur in any communication system. The hoops we jump through, the requirements we bolt on, define a way to differentiate that hidden state indirectly: good actors follow the requirements more closely than bad actors do. This decomposes the state structurally, turning an NFA-type problem into a series of DFA-type problems, as I'm sure you might recall from a compiler design course (if you've taken one) or from the Dragon Book.
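For reference, the decomposition named here is the subset construction from compiler texts. A rough sketch, with a toy transition table and epsilon moves omitted:

```python
# Rough sketch of the subset construction (NFA -> DFA): each DFA
# state is a frozenset of NFA states, so nondeterminism is decomposed
# into deterministic steps over sets. Real constructions also handle
# epsilon moves; this toy does not.
from collections import deque

def subset_construction(nfa, start, alphabet):
    """Turn an NFA table {(state, symbol): {states}} into a
    DFA table {(frozenset, symbol): frozenset}."""
    start_set = frozenset([start])
    dfa = {}
    seen = {start_set}
    queue = deque([start_set])
    while queue:
        current = queue.popleft()
        for symbol in alphabet:
            nxt = frozenset(t for s in current
                            for t in nfa.get((s, symbol), ()))
            dfa[(current, symbol)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return dfa

# Toy NFA: from q0 on 'a' you might stay or advance.
nfa = {("q0", "a"): {"q0", "q1"}, ("q1", "b"): {"q2"}}
for (state_set, symbol), target in subset_construction(nfa, "q0", "ab").items():
    print(sorted(state_set), symbol, "->", sorted(target))
```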
Any message sent must be sent in an identical structure. Any bad actor will adapt to ensure their messages get sent, by flooding and raising the noise floor. Any good actor will adapt in a number of ways, sometimes by no longer using a system that doesn't provide benefit. You can only operate on the same state.
If you can only process and interact with the message structure itself, no computation system will ever be able to skew what is sent or received so that only the legitimate messages get through and the illegitimate ones don't. Everything goes through the same point. With everything going through, the noise floor is so high that nothing gets through; and since communication is the sharing of meaning/signal between two parties, people adapt and abandon the system for systems that work.
The core issue is a fundamental computer science issue.
When a computer hardware system first boots up, the bring-up stage in hardware sets up the constraints needed to do work. Ask yourself what it is about the design of computers today that prevents the classic unsolved computer science problems, and you'll find this staring back at you: halting and decidability (usually).
There are problems that are impossible to solve, because we've proven that math is incomplete, which bears on decidability.
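The classic diagonal argument behind the halting problem can at least be sketched in code. The oracle below is a stub, since a correct one provably can't be written; the point is the structure of the contradiction:

```python
# Sketch of the diagonalization behind the halting problem. Suppose
# a total, correct halts(f, x) existed; the function below would then
# have no consistent behavior, so halts() cannot exist. The stub is
# here only so the shape of the argument is visible.

def halts(f, x):
    """Hypothetical oracle: True iff f(x) eventually halts.
    Provably impossible to implement for all f, x."""
    raise NotImplementedError

def diagonal(f):
    if halts(f, f):    # oracle says f(f) halts...
        while True:    # ...so loop forever,
            pass
    return "halted"    # otherwise halt immediately.

# diagonal(diagonal) halts iff the oracle says it doesn't: contradiction.
```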
Computers work on specific principles, and when you don't understand how those work, you can easily jump to magical conclusions that simply don't work or have any basis in reality.
A very simple example demonstrates this same problem. You are given two spreadsheets in which names are not distinct (unique). You have 10,000 rows of employees, and a list of 400 people whose accounts must be deactivated within an hour; the list of people to be deactivated is by name only. You have a script that does everything necessary to deactivate an individual account given a specific account, but some of those people's names are identical to other people's names, and they are different people. The first such match you happen to see is the CEO.
How do you solve this?
If you pass the names to the automation blindly, you'll deactivate accounts that should not be deactivated, and you get fired. If you don't finish in the time period allotted, you're fired.
The only possible way to solve this, given the constraints, is to ask for a list that includes a unique identifier for each person to be deactivated, plus a matching list to work from; then the automation can work.
If you just did it blindly, the computer would do it blindly; it has no way to know otherwise. The function is a deactivation, so it would deactivate every item passed to it, ending in... you are fired.
There is no other way that does not result in you being fired. Fuzzy matching doesn't work because, without the identifier, you know that one of those two or three matches needs to be deactivated but not which one, and getting it wrong ends in you being fired. This type of problem is a question of decidability.
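A sketch of that failure mode and the fix; the data and the deactivate_account stand-in are invented for illustration:

```python
# Matching on a non-unique key (name) is ambiguous; matching on a
# unique identifier is decidable. All names and records are made up.

employees = [
    {"id": 101, "name": "Pat Lee", "title": "CEO"},
    {"id": 202, "name": "Pat Lee", "title": "Intern"},  # same name, different person
    {"id": 303, "name": "Sam Roe", "title": "Analyst"},
]

def deactivate_account(emp_id):
    print(f"deactivated account {emp_id}")  # stand-in for the real script

def deactivate_by_name(name):
    hits = [e for e in employees if e["name"] == name]
    if len(hits) != 1:
        # Two different people, one name: no computation over this
        # input can tell them apart. Refuse, don't guess.
        raise ValueError(f"{name!r} is ambiguous: {len(hits)} matches")
    deactivate_account(hits[0]["id"])

def deactivate_by_id(emp_id):
    # With a unique key, "which account?" has exactly one answer,
    # so the automation is safe.
    deactivate_account(emp_id)

deactivate_by_id(202)              # fine: unambiguous
try:
    deactivate_by_name("Pat Lee")  # the input cannot decide this
except ValueError as e:
    print(e)                       # refuse rather than guess the CEO
```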
You run into the same kind of subtle problem all over automation, in different forms. Take ldd's output on Linux, which is why it fails silently when passed to any automation: the overloaded null state means two different things, and it's undecidable once it flattens. If you examine it carefully, it breaks regular expressions. Why? That property isn't preserved.
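A sketch of what that looks like in practice. The sample lines are typical glibc ldd output (the exact format varies by version), and the naive pattern is mine, not anything ldd documents:

```python
# Several distinct realities flatten into "no path captured" when you
# parse ldd output naively, and nothing in the match tells you which
# case you hit.
import re

ldd_output = """\
\tlinux-vdso.so.1 (0x00007ffc8d5e2000)
\tlibc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f3a1c000000)
\tlibmissing.so.1 => not found
\t/lib64/ld-linux-x86-64.so.2 (0x00007f3a1c400000)"""

naive = re.compile(r"^\t(\S+) => (\S+)")  # assumes every line is "name => path"
for line in ldd_output.splitlines():
    m = naive.match(line)
    # - the vdso is a virtual library with no file and no "=>",
    # - the dynamic loader is listed by absolute path with no "=>",
    # - a genuinely missing library says "=> not found".
    # The regex either misses the line entirely or captures the word
    # "not" as a "path"; the overloaded null state is indistinguishable.
    print(m.groups() if m else (line.strip(), None))
```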
You are used to dealing with the top of the stack, where these properties are preserved unless you or others break them with a bug.