It seems ridiculous to correct a parenthetical remark that’s wrong on the Internet, but I’m seeing this one repeated, so here goes.
Steve Bellovin says
(Some of the work in secure multiparty computation suggests that we need not trust anything, if we’re willing to accept a very significant performance penalty.)
I don’t know of any work in multiparty computation that eliminates trust. It’s impossible to do.
In multiparty computation, two or more parties cooperate to compute a function: each party supplies a private input to the function, and all parties learn the result. A good example is a secret ballot: each party supplies a vote, and the outcome of the voting is made public, but each individual’s vote remains secret, known only to that voter.
It’s easy to see how to conduct a secret ballot, provided you can entrust all of the votes to a third party (e.g., the government). Remarkably, multiparty computation eliminates the trusted third party: the parties can compute the result themselves, without revealing their inputs, via a cryptographic protocol. It’s a fascinating field that has spawned decades of research.
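To make the idea concrete, here is a minimal sketch of one classic construction, additive secret sharing over a prime field, applied to a yes/no tally. It assumes honest-but-curious parties and secure pairwise channels, and for simplicity lets the voters double as the tallying parties; the names and parameters are illustrative, not drawn from any particular library or deployed protocol.

```python
import random

P = 2_147_483_647   # a prime modulus; any prime larger than the largest possible tally works

def share(vote, n_parties):
    """Split one vote (0 or 1) into n_parties shares that sum to the vote mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((vote - sum(shares)) % P)
    return shares

def tally(votes):
    """Each voter sends one share to each tallying party; each party adds the
    shares it holds and publishes only that local sum.  No single party ever
    sees an individual vote, yet the published sums add up to the total."""
    n_parties = len(votes)                    # voters double as tallying parties here
    held = [[] for _ in range(n_parties)]     # held[j] = shares received by party j
    for v in votes:
        for j, s in enumerate(share(v, n_parties)):
            held[j].append(s)
    local_sums = [sum(h) % P for h in held]   # computed privately by each party
    return sum(local_sums) % P                # the revealed sums leak only the total

votes = [1, 0, 1]        # each voter's private ballot
print(tally(votes))      # -> 2: the result, with no trusted third party holding the votes
```

Real protocols handle far richer functions and stronger adversaries than this toy sum, but the shape is the same: the trusted tabulator is replaced by arithmetic on shares.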
What it does not do, however, is eliminate trust. True, the parties do not need to trust a third party with all of their secret inputs; this trust is replaced with trust in mathematics (good), plus trust in an implementation of the mathematics (a bit more sketchy, but good in the long term). More importantly, it does not eliminate the trust that the parties must have in each other. There is nothing in the definition of multiparty computation that says that a party’s input is a correct input. For example, in a secret ballot, a party can vote for a candidate because they have been paid or coerced to do so. Multiparty computation takes place in a social context, where trust is always in the background.
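The sketch above makes that last point concrete: the protocol has no way to tell a freely chosen ballot from a coerced one, because all it ever sees are well-formed shares.

```python
# Reusing the tally() sketch above: a coerced voter simply submits the ballot
# they were told to cast.  The shares are perfectly well-formed and the
# published tally is perfectly "correct"; nothing in the math flags the coercion.
coerced_ballot = 1
print(tally([coerced_ballot, 0, 1]))   # -> 2, indistinguishable from an honest election
```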