I had to laugh at this recent tweet:
Matthew Green @matthew_d_green
Particularly frustrating aspect of academic infosec is the bias against “attack papers”. The community systematically disincentivizes these.
If you’ve been following along, you’ll know that this is not even remotely credible: attacks get all the attention, all the publicity, and most of the rewards, while defense gets ignored.
Proving the point, within two weeks of that tweet Green released an attack paper with a hype-tastic title, Dancing on the Lip of the Volcano: Chosen Ciphertext Attacks on Apple iMessage, and the work was the subject of an article in the Washington Post (Johns Hopkins researchers poke a hole in Apple’s encryption). A day after that, Apple released iOS 9.3, which corrected the issue and explicitly thanked Green’s team.
The problem of “novelty”
To be sure, there are many problems with academia and academic publishing in computer science generally. One that is rarely discussed is the requirement for “novelty.” Or as Charlie Miller tweeted,
Charlie Miller @0xcharlie
One of the things I really hate about academic papers is how they always feel the need to describe their work as “novel”
Program committees and journals will not publish papers unless they are novel in some way. This makes some sense: if the purpose of academia is to further the advance of knowledge and human society, then a paper that puts forth new ideas seems like a contribution. However, novelty is not the same as an advance in knowledge, and making novelty a requirement for publication actually harms the advance of knowledge.
One of the bad effects is that authors will always say that their work is novel even when it is not—thus the sarcastic quote marks in Miller’s tweet. There is an incentive, indeed, an imperative, for authors to avoid citing any work that is too close to their own. One of the dirty secrets of academia is that this happens all the time.
We can see a more subtle bad effect by looking at Apple’s note that credits Green, About the security content of iOS 9.3. Including Green’s issue, there are 28 security issues listed in that note. Of those 28 issues, at least 13 (almost half) are clearly due to the use of unsafe programming languages: they are caused by memory corruption, use-after-free bugs, integer overflows, null pointer dereferences, and out-of-bounds reads, all of which are prevented in safe programming languages. And these issues are among the most severe on the list, because they can lead to arbitrary code execution.
In other words, many of the severe security issues that plague us today are solved by technology that was introduced in academia 57 years ago. Anyone trying to work in academic computer security has to grapple with this. It’s not novel to find an attack that causes a buffer overflow. And a novel defense against memory corruption may satisfy academia’s publishing requirements, but it’s probably not interesting because by the standards of academia, memory corruption is a solved problem.
Of course in the real world, memory corruption is not a solved problem. Academia should be helping here, but a paper showing yet another variant of ASLR (address space layout randomization) or CFI (control-flow integrity) that provides not quite as much protection as a safe language is not an advance, regardless of novelty.