Negative reporting and security research
13 Dec 2016

In academia, there’s an entrenched problem: negative results are not published. A research team finds something interesting and publishes a paper; that’s great! Now we need other teams to confirm or deny the finding. That rarely happens. Reproductions (whether positive or negative) are seldom published, and negative results especially (“we ran the test but did not observe the same outcome”) almost never are.

This is bad, because a single claim of a result is not very strong evidence. Only when multiple independent sources achieve the same result can we be confident it is true.

Likewise, in security, we only report positive results. We only report on things which are broken. All day, every day, every IoT device that people look at has security problems.

Every. Single. One.

So you might as well assume that everything is insecure. Security almost demands that you take that approach – it’s the safe, conservative way to build secure systems! Don’t know how secure something is? Assume that it is insecure and plan accordingly.

What is useful in this environment is… negative results! “We tested this device and were not able to break into it.” That’s extremely useful information! Yeah, you want someone to double-check it, to try more things. You want to know what things the researchers found that smelled funny but didn’t constitute a vulnerability. You want to know what the researchers tried. You can then refer back to your threat model, adjust your ‘probability’ figures appropriately, and have a better idea of what risks your business is exposed to.
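Adjusting those ‘probability’ figures can be framed as a simple Bayesian update: each published failed attack is weak evidence that the device is secure. The sketch below is illustrative only; the prior and the conditional probabilities (`p_fail_given_insecure`, `p_fail_given_secure`) are made-up numbers you would replace with your own threat-model estimates.

```python
def update_insecure(prior, p_fail_given_insecure=0.5, p_fail_given_secure=1.0):
    """One Bayesian update of P(device is insecure) after a reported failed attack.

    Assumed figures (not from the post): an attack against an insecure
    device still fails half the time; against a secure device it always fails.
    """
    num = p_fail_given_insecure * prior
    denom = num + p_fail_given_secure * (1 - prior)
    return num / denom

# Start from the conservative assumption that the device is probably insecure.
p = 0.9
for attempt in range(1, 4):
    p = update_insecure(p)
    print(f"after {attempt} published failed attempts: P(insecure) = {p:.2f}")
```

Each negative report nudges the estimate down rather than settling the question, which matches the post’s point: one “we couldn’t break it” is useful, several independent ones are much better.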

Please publish negative results. Please tell us about the attacks you tried that failed. You’ll stand out as doing something odd. You might look silly if someone contradicts you later. But it’s the best information we can get right now.
