Is responsible disclosure appropriate for IoT devices?
3 Jan 2017

Story time.

When I was an undergraduate, we did a research project. I looked at how difficult it was to break symmetric crypto using FPGAs. Nothing super novel, but it added some data points to the literature.

We presented these projects to industry. An IBM employee asked, “why are you breaking crypto? Are you a terrorist?” I did a double take. She was serious.

I’d forgotten that regular people on the street haven’t internalised why we intentionally break things. So let’s lay it out there and see if it still works.

Why we break things

We don’t know how secure our {software|devices|systems} are. We can’t prove the absence of security problems. We can prove the presence of security problems.

We don’t know what other people know, either. If we have reports of what is broken, we conservatively assume that attackers already know this and assess our risk accordingly. Absence of reports gives us weak confidence that a system has not been broken (but negative reporting is not commonplace at this time, so it doesn’t tell us much).

Let’s take a concrete example: breaking DES. I want to use DES in an application. My one-line threat model is “I want it to withstand attack by opposing militaries”. Should I use DES?

Well, the literature (my paper!) says that back in 2003, you could break a DES-encrypted message with a $100 FPGA board in about three weeks. My 2016 estimate is that you could break DES in about 6 seconds, if you could muster the computing power of the Bitcoin network.
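Here’s the back-of-the-envelope arithmetic behind those figures, as a runnable sketch. The trial rates are my assumptions for illustration – the 6-second figure works out to roughly 1.2 × 10^16 DES trials per second:

```python
# Back-of-the-envelope DES brute-force timing.
# Assumption (mine, for illustration): the attacker tests keys at some
# effective rate; real rates depend heavily on hardware.

DES_KEYSPACE = 2 ** 56  # single DES has a 56-bit key


def sweep_seconds(trials_per_second: float) -> float:
    """Worst-case time to try every key at the given rate."""
    return DES_KEYSPACE / trials_per_second


# Rate implied by the post's 6-second estimate:
implied_rate = DES_KEYSPACE / 6
print(f"6 s implies ~{implied_rate:.1e} DES trials/s")

# The 2003 data point: one $100 FPGA board, about three weeks.
three_weeks = 3 * 7 * 24 * 3600
print(f"2003 FPGA board: ~{DES_KEYSPACE / three_weeks:.1e} trials/s")
```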

A large proportion of the Bitcoin network is controlled by one government. If I needed to keep something secret from an opposing military for more than 6 seconds, I would not use DES.

If my threat model says “I want it to withstand attack by college students”, then yeah… you can probably use DES! Maybe not against Engineering students, but most of them, sure.

Crypto and attacks by foreign powers are a clear-cut case. We know that computing and attacks improve constantly. Defenders (people, governments, militaries, companies) have a legitimate case to use cryptography. Attacking cryptography and publishing results of breaks is therefore valuable.

Responsible disclosure

Another common case is where we attack software – say, a public-facing web application. Let’s call it BookFace. You attack BookFace wanting to know if your private data is safe there. You find problems. What do you do with this information?

Keep it to yourself?

Don’t tell anyone. This does the public a disservice: an attack is possible, but they don’t know about it. Most government infosec organisations (e.g. the NSA) take this approach; they keep attacks to themselves so they can use them later.

Sell it?

People will buy information on how to perform an attack; this is where malware black markets come from. The market price puts a lower bound on the value of a discovered attack: if you (the vendor) want some other form of disclosure, you’d better make sure that what you’re offering beats that price. Not all attackers are motivated by the mad props they get from publishing papers.

Tell the world about it?

You could just tell the world. You get your mad props immediately, and, as in the academic world, publishing first means there is no risk that someone else will publish your discovery and steal those props.

Crypto research takes the ‘tell the world’ approach. Crypto is widely deployed and there’s no single entity that can do anything about a break. Adversaries are usually governments. You might as well just publish the result and flee to a country with no extradition treaty.

If you’ve just breached BookFace, anyone can now breach BookFace and steal people’s private data. You might have good intentions, but not everyone does.

Tell the vendor about it?

You could tell the vendor first. For a long time, this tended to result in lawsuits and gag orders, with the vendor trying to force you never to reveal details of the attack.

Most vendors nowadays are more enlightened and will work with you. This is the path that Responsible Disclosure takes.

Let’s make a checklist of Desirable Outcomes For Responsible Disclosure:

- The vendor learns about the problem and fixes it.
- Users become more secure once the fix is deployed.
- The researcher is publicly recognised for the work.

But for IoT…

IoT vendors usually can’t fix their problems. Deployed devices often have no update mechanism at all, and even when one exists, most devices in the field will never receive the patch.

So let’s review the checklist:

- The vendor learns about the problem, but can’t fix it.
- Users remain insecure.
- The researcher is recognised for the work.

By publishing the attack, we didn’t really improve the state of the world. Miscreants can reuse the attack but vendors and users remain insecure.

Who’s the vendor?

Crypto algorithms are used by everyone. We don’t bother with private disclosure of attacks because there is no single authority to tell.

Websites and software have a single controlling entity. Websites are the ideal case for responsible disclosure because they are completely under the control of that entity: the vendor deploys the fix, and every user is instantly on the fixed version.

Desktop software and apps are controlled by the user, but most users will update most of the time. Not always. Windows updates are forced; this comes at a cost to the user, but it’s the responsible thing for the broader community.

IoT devices have a single vendor, but that vendor has very little control over the devices once deployed. Disclosing an attack to an IoT vendor doesn’t really help them.

Are you on the right side?

Responsible disclosure is about improving vendor and user security while making sure researchers are recognised for their research efforts.

Consider attacks on content control systems – DVD, Blu-ray, XBox, Playstation, iOS, Pay TV. The security system is designed to:

- stop content from being copied,
- stop players from cheating, and
- control which software the device will run.

So let’s say you publish an attack on one of these. The vendor will improve their future systems, but they can’t do anything about the existing ones (these are all embedded devices in people’s homes, after all). Depending on the severity of the breach, that content might be open forever and cheating rampant.

So, let’s look at our Responsible Disclosure checklist again:

- The vendor learns about the problem, but can only fix future systems.
- Users remain insecure.
- The researcher is recognised for the work.

The user might not want to be secure. Users generally don’t like content protection. The security system protects the vendor, not the user.

So by breaking content protection, did you do good? Did you improve the world? That’s a tricky question – ‘content wants to be free’, ‘commercial interests need to be protected’, ‘artists won’t produce art if they aren’t compensated’, and so on – and I’m not getting into it.

So what to do?

Let’s think about how much detail to publish.

One major problem with academic science research is that papers often lack enough detail to reproduce the result. Specifically, software and datasets are not released. The researcher gets props but does not advance human knowledge. We don’t even know if they did the research or just made it up.

Recently, there has been movement in the ‘full disclosure’ direction: everything is released, and people can repeat the analyses. This can be bad for the publishing researcher (the new information can be used to dispute the result) but is good for scientific progress, since we end up with a fuller, more confident ‘truth’ than before.

Likewise, with security research, you could publish everything: method, firmware images, exploit. Make it easy for someone to verify your result. This poses a particular risk for IoT systems, where it is difficult for the vendor to react to the publication.

You could publish just enough to prove that an attack is possible, in line with traditional scientific research. The recent Pay TV hacks are a good example: the methods are shown, along with some pretty compelling evidence. We don’t know for sure that the attack works – a demonstration device is great evidence, but not absolute proof – and there isn’t enough information for you to do it yourself. The existence of the paper and talks makes the attack much easier to reproduce, but it is technically challenging enough that the most common attackers (people at home) probably can’t execute it.

This is a good position. The vendor and users are not significantly hurt. The researcher shows evidence of a successful attack on a challenging system.
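A related trick – my illustration, not anything from the Pay TV work – is to prove today that you possess a working attack without revealing any detail at all: publish a cryptographic commitment to the exploit now, and reveal the preimage later. A minimal sketch:

```python
# Commit-then-reveal: prove possession of an exploit without disclosing it.
# Sketch under my assumptions; not the method used in the Pay TV hacks.
import hashlib
import os


def commit(exploit: bytes) -> tuple[bytes, bytes]:
    """Return (nonce, commitment). Publish only the commitment now."""
    nonce = os.urandom(32)  # stops anyone brute-forcing a short exploit
    return nonce, hashlib.sha256(nonce + exploit).digest()


def verify(exploit: bytes, nonce: bytes, commitment: bytes) -> bool:
    """Anyone can check the later reveal against the earlier commitment."""
    return hashlib.sha256(nonce + exploit).digest() == commitment


nonce, c = commit(b"full exploit details for the hypothetical target")
print("publish now:", c.hex())
# ...months later, reveal the exploit plus the nonce...
print("checks out:", verify(b"full exploit details for the hypothetical target", nonce, c))
```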

There’s no universal answer. For IoT, the fundamental problem is that devices aren’t updated. It’s not even clear that “vendors must do remote OTA updates” is a good strategy; not all devices can afford the downtime.
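For what it’s worth, if a vendor does attempt OTA updates, the bare minimum is that the device only accepts firmware signed by the vendor – otherwise the update channel is itself an attack vector. Here’s a sketch of that check; the function names and the Ed25519 choice are my assumptions (uses the Python `cryptography` package):

```python
# Minimal signed-OTA check: accept a firmware image only if the vendor's
# signature verifies against a public key baked in at manufacture.
# Illustrative sketch; flash_firmware is hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def apply_update(image: bytes, signature: bytes,
                 vendor_key: ed25519.Ed25519PublicKey) -> bool:
    """Flash the image only if the vendor's signature checks out."""
    try:
        vendor_key.verify(signature, image)
    except InvalidSignature:
        return False  # reject tampered or unofficial firmware
    # flash_firmware(image)  # hypothetical device-specific step
    return True


# Demo with a fresh keypair standing in for the vendor's real key:
vendor_private = ed25519.Ed25519PrivateKey.generate()
image = b"firmware v2.0"
sig = vendor_private.sign(image)
print(apply_update(image, sig, vendor_private.public_key()))       # True
print(apply_update(b"evil image", sig, vendor_private.public_key()))  # False
```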

