Stable ABI implies stable size, but not the other way around. If I have a vector class that is a pointer and two sizes and change that to a vector class that is three pointers, then I didn't change the size, but I broke ABI.
Any change to the value representation of a class is an ABI break. A change that also changes size is just an obvious one. And value representation is an abstraction which is determined by the semantics of member functions, not something a linker can easily have access to.
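To make that concrete, here's a minimal sketch of the two layouts (VecA and VecB are made-up names; the size equality assumes a typical 64-bit target):

```cpp
// Two plausible vector layouts: same size, different value representation.
#include <cstddef>

struct VecA {            // a pointer and two sizes
    int*        data;
    std::size_t size;
    std::size_t capacity;
};

struct VecB {            // three pointers (the libstdc++/libc++ style)
    int* begin;
    int* end;
    int* capacity_end;
};

// Same object size on a typical 64-bit target, yet code compiled against
// VecA cannot be linked with code expecting VecB: an ABI break that no
// size check would catch.
static_assert(sizeof(VecA) == sizeof(VecB), "same size, different ABI");
```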
I think you're probably right, but maybe a bit too dismissive of the thought experiment.
> value representation is an abstraction which is determined by the semantics of member functions, not something a linker can easily have access to
This is the real problem. The GP's hypothetical extended linker could work even with semantic changes to the meaning of member variables, as in your sizes-to-pointers example, so long as all member functions are dynamically obtained from the shared library for that class (and no member variables are publicly exposed for use by application code). That means disabling inlining, which is a problem for templated code. Where does the machine code for std::vector<MyClass>::begin() go when MyClass is by definition unknown at the point when we're compiling the standard library? Even an exhaustive set of implementations for the types known at that time isn't feasible (e.g., should the library contain code for vector<vector<int>>::begin()? What about three or more levels of nesting?)
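To illustrate (MyClass and first_x are made-up names): the instantiation can only happen in the application's translation unit, so the generated code is pinned to whatever layout the headers described at build time.

```cpp
// app.cpp -- compiled by the application, not by the standard library.
#include <vector>

struct MyClass { int x; };   // unknown to libstdc++/libc++ when they were built

int* first_x(std::vector<MyClass>& v) {
    // Assumes v is non-empty. std::vector<MyClass>::begin() is instantiated
    // (and usually inlined) right here, baked against today's vector layout.
    // Shipping a new shared library later cannot update this machine code.
    return &v.begin()->x;
}
```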
One option might be to tailor template class implementations to this situation by making every template class a thin, fully inlined wrapper around a non-templated class (with non-inlined methods). Early template libraries often were a bit like this to avoid "code bloat", and some still are to an extent. But to do it fully, the inner class would need to hold the size of the element type at runtime and take callbacks for copy constructors, destructors, etc. This is where the concept really starts to break down.
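A rough sketch of that shape, just to show where it leads (VectorCore and ThinVector are made-up names; this ignores exceptions, allocation failure, over-aligned types, and move semantics):

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Non-templated core: in principle this lives behind the shared-library
// boundary, never inlined. Because it knows nothing about T, it must carry
// the element size and copy/destroy callbacks at runtime.
class VectorCore {
public:
    using CopyFn    = void (*)(void* dst, const void* src);
    using DestroyFn = void (*)(void* obj);

    VectorCore(std::size_t elem_size, CopyFn copy, DestroyFn destroy)
        : elem_size_(elem_size), copy_(copy), destroy_(destroy) {}
    VectorCore(const VectorCore&) = delete;
    VectorCore& operator=(const VectorCore&) = delete;

    ~VectorCore() {
        for (std::size_t i = 0; i < size_; ++i) destroy_(slot(i));
        std::free(data_);
    }

    void push_back(const void* elem) {
        if (size_ == capacity_) grow();
        copy_(slot(size_), elem);
        ++size_;
    }

    void*       at(std::size_t i) { return slot(i); }
    std::size_t size() const      { return size_; }

private:
    void* slot(std::size_t i) { return static_cast<char*>(data_) + i * elem_size_; }

    void grow() {
        std::size_t new_cap  = capacity_ ? capacity_ * 2 : 4;
        void*       new_data = std::malloc(new_cap * elem_size_);
        for (std::size_t i = 0; i < size_; ++i) {
            void* dst = static_cast<char*>(new_data) + i * elem_size_;
            copy_(dst, slot(i));    // copy into the new buffer via callback
            destroy_(slot(i));      // then destroy the old element
        }
        std::free(data_);
        data_     = new_data;
        capacity_ = new_cap;
    }

    void*       data_ = nullptr;
    std::size_t size_ = 0, capacity_ = 0;
    std::size_t elem_size_;
    CopyFn      copy_;
    DestroyFn   destroy_;
};

// Thin, fully inlined wrapper: all the type knowledge lives here, but none
// of the layout/allocation logic does.
template <class T>
class ThinVector {
public:
    ThinVector() : core_(sizeof(T), &copy_cb, &destroy_cb) {}

    void push_back(const T& v)     { core_.push_back(&v); }
    T&   operator[](std::size_t i) { return *static_cast<T*>(core_.at(i)); }
    std::size_t size() const       { return core_.size(); }

private:
    static void copy_cb(void* dst, const void* src) { new (dst) T(*static_cast<const T*>(src)); }
    static void destroy_cb(void* obj)               { static_cast<T*>(obj)->~T(); }

    VectorCore core_;
};
```

And even then the per-type callbacks (copy_cb, destroy_cb) are still compiled into the application binary, so the boundary only moves rather than disappears.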