I guess what you are saying is the problem is the company culture - from a technical operations point of view at least - sucks. And no one wants to, or is able to, put in the effort to fix it.
What I normally see in oncall threads is people complaining "I got paged by an alert because of another system X" - but at least in a big enough organization this should not happen, and when it does it's an organizational failure. There should be a 24/7 operations center able to triage, escalate and evaluate, ideally not staffed only with L1 techs and given enough freedom to actually improve and automate. I know there are places where that is not true, and I ran away screaming from some of them in my career once I understood tech leadership had no grasp of why it was needed.
But you would be surprised how much of the oncall pain is actually self-inflicted by the application teams themselves. Some examples I ran into in the last year: TCP connect timeouts measured in minutes and with no retries, no retry policies in general, things that should be idempotent but are not, no circuit breaker strategy, connection pools churning because they're shared across 10+ remote endpoints, and wrong expectations about transaction isolation levels and how to handle conflicts even in simple scenarios.
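To make that first example concrete, here's a minimal sketch in Python using requests/urllib3 of what a sane baseline looks like: a connect timeout in seconds, not minutes, plus a small retry budget with backoff restricted to idempotent methods. The endpoint and the exact numbers are made up, just illustrative of the shape.

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Bounded retries with exponential backoff, only for failures that are
# likely transient and safe to retry (connect errors, 502/503/504),
# and only for idempotent methods.
retries = Retry(
    total=3,
    connect=3,
    backoff_factor=0.5,               # 0.5s, 1s, 2s between attempts
    status_forcelist=(502, 503, 504),
    allowed_methods=frozenset({"GET", "HEAD", "PUT", "DELETE"}),
)

session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retries))

# Fail fast on connect, and bound the read as well (hypothetical endpoint).
resp = session.get("https://api.example.internal/health", timeout=(3.0, 10.0))
resp.raise_for_status()
```

Nothing fancy, but it's exactly the kind of default that would have avoided several of the pages I got last year.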
> I guess what you are saying is the problem is the company culture ... sucks.
I believe the problem is the way devops is often practiced. I've worked as a developer, a manager and an operator, and I've occasionally carried a pager. I think there is value in rotating between those roles at different times, since it gives engineers knowledge and insight they often won't get any other way. But assigning engineers to after-hours on-call duty while they're simultaneously responsible for product development, "because they built the system", is just a stupid, unethical and unsustainable practice that needs to end.
Good companies hire and train engineers to develop, manage and operate systems sustainably.