There seems to be a bit of a disconnect between the first and the second sentence (to my completely uneducated mind).
If topological qubits turn out to be so much more reliable, then it doesn't really matter how much time was spent trying to make other types of qubits more reliable. It's not really a head start, is it?
Or are there other problems besides preventing unwanted decoherence that might take that much time to solve?
The point, I think, is this: if topological qubits are similar to other types of qubits, then investing in them is going to be disappointing, because the other approaches have so much more work put into them.
So, he is saying that this approach will only pay off if topological qubits are a fundamentally better approach than the others being tried. If they turn out to be, say, merely twice as good as trapped-ion qubits, they'll still need another, say, 10-15 years of continued investment just to reach what current trapped-ion designs already achieve.
The whole point, though, is that they are a step-function improvement over traditional qubits, to the degree that comparing them head-to-head is simply a type error.
The utility of traditional qubits depends entirely on how reliable and long-lived they are, and on how well they can scale to larger numbers of qubits. These topological qubits are effectively 100% reliable, infinite duration, and scale like semiconductors. According to the marketing literature, at least…
There are caveats there too. Generally, topological qubits can be immune to all kinds of noise (i.e. built-in error correction), but Majorana zero modes aren't exactly the right kind of topological for that to be true. They only enjoy protection on most operations, not all. So there is still a need for error correction here (and all the complication that entails); it is just hopefully less onerous, since essentially only one operation requires it.
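To make the "essentially one operation" point concrete: the usual story for Majorana-based schemes is that braiding gives you the Clifford gates with topological protection, while a non-Clifford gate like the T gate is not protected and has to come from something like magic-state distillation. A toy Python sketch of what that split looks like in a circuit (the circuit and gate sets below are made up purely for illustration):

    # Toy illustration: Clifford gates obtained by braiding are assumed to be
    # topologically protected, while the non-Clifford T gate is not and still
    # needs magic-state distillation / active error correction.
    BRAIDED = {"H", "S", "CNOT"}   # Clifford gates, assumed protected by braiding
    DISTILLED = {"T"}              # non-Clifford, still needs error correction

    def correction_burden(circuit):
        """Split a gate list into topologically protected vs. still-needs-EC."""
        protected = [g for g in circuit if g in BRAIDED]
        unprotected = [g for g in circuit if g in DISTILLED]
        return len(protected), len(unprotected)

    # A made-up circuit: mostly Clifford gates, a couple of T gates.
    toy_circuit = ["H", "CNOT", "S", "T", "CNOT", "H", "T", "S", "CNOT"]
    protected, unprotected = correction_burden(toy_circuit)
    print(f"{protected} gates protected by braiding, "
          f"{unprotected} still need magic-state distillation.")

So the error-correction machinery doesn't disappear; it just gets concentrated on the small non-Clifford part of the circuit.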
All the other qubits scaled the same way when they were in a simulator, too. When they actually hit reality, they all had huge problems.
Other qubits in general do not scale the same way. Some, for example, do not allow arbitrary point-to-point interactions, which means doubling your physical qubits doesn't double your number of logical qubits. There are other ways in which scaling has turned out to be nonlinear.
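For a rough sense of why doubling physical qubits doesn't double logical qubits, here's a back-of-the-envelope Python sketch. It uses the common surface-code rules of thumb (logical error rate ~ 0.1·(p/p_th)^((d+1)/2), ~2·d² physical qubits per logical qubit); the physical error rate, threshold, failure budget, and the "depth ~ 100·n" assumption are all made-up illustrative numbers, not measurements of any real device:

    # Back-of-the-envelope: bigger computations need lower logical error rates,
    # which means larger code distances, which means more physical qubits per
    # logical qubit. All constants here are illustrative assumptions.
    P_PHYS = 1e-3        # assumed physical error rate
    P_TH = 1e-2          # assumed error-correction threshold
    FAIL_BUDGET = 1e-2   # assumed acceptable chance the whole computation fails

    def distance_needed(n_logical, depth):
        """Smallest odd code distance keeping n_logical qubits over `depth`
        rounds under the overall failure budget."""
        per_op_target = FAIL_BUDGET / (n_logical * depth)
        d = 3
        while 0.1 * (P_PHYS / P_TH) ** ((d + 1) / 2) > per_op_target:
            d += 2
        return d

    def max_logical_qubits(physical_budget):
        """Largest n such that n logical qubits (distance chosen for a
        computation of depth ~100*n) fit in the physical budget,
        assuming ~2*d^2 physical qubits per logical qubit."""
        n = 0
        while True:
            d = distance_needed(n + 1, depth=100 * (n + 1))
            if (n + 1) * 2 * d * d > physical_budget:
                return n
            n += 1

    for budget in (10_000, 20_000, 40_000, 80_000):
        print(f"{budget:>6} physical qubits -> "
              f"{max_logical_qubits(budget):>3} logical qubits")

In that toy model, doubling the physical budget buys you less than double the logical qubits, because the required code distance creeps up with the size of the computation. Connectivity constraints pile further overhead on top of that.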
Note also that this isn’t a simulated result. Microsoft has an 8-qubit chip they are making available on Azure.
I am well aware of how other qubits scale, but I am also aware that the physicists who created them didn't expect decoherence to scale this rapidly at the time they took that approach.
IBM sells you 400 qubits with huge coherence problems. When IBM only had an 8-qubit chip, those qubits were pretty stable too.