How do we fix IoT security?
13 Dec 2016

So, given the constraints I’ve ranted about over and over again: what are we to do? How do we improve the current state of affairs?

Produce a threat model and publish it

It’s a big dream, but I would like to see every device have a threat model produced, and I would like the vendors to publish those threat models.

This serves two purposes. One, the vendor thinks about security, if only briefly. And two, consumers can use the threat model to judge whether the device is appropriate to their situation (or, more likely, judge based on whether it has been published at all).

Don’t rely on the user to make a security decision

Users, generally, do not take the time to learn about security. They also don’t necessarily make the right security decision. Where possible, you need to make it for them.

This will sometimes conflict with your goals. If you’re selling a WiFi access point, you’ll get fewer support calls if you leave it open by default. Adding reasonable security (a unique WPA2 password on each device) costs you money in manufacturing time and documentation, but especially in support.
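As a sketch of what “unique WPA2 passwords on each device” looks like on the manufacturing line: generate a random passphrase per unit and print it on the label. This is a hypothetical illustration, not any vendor’s actual process; the alphabet choice is an assumption made to cut down on support calls from misread labels.

```python
import secrets
import string

# Hypothetical sketch: generate a unique random WPA2 passphrase per device
# at manufacturing time, to be printed on the device's label.
# Assumption: ambiguous characters (0/O, 1/l/I) are excluded so that users
# reading the label off the sticker make fewer mistakes.
ALPHABET = "".join(c for c in string.ascii_letters + string.digits
                   if c not in "0O1lI")

def generate_passphrase(length: int = 12) -> str:
    """Return a random passphrase; WPA2-PSK allows 8-63 ASCII characters."""
    if not 8 <= length <= 63:
        raise ValueError("WPA2 passphrases must be 8-63 characters")
    # secrets (not random) gives cryptographically strong choices.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Twelve characters from a ~57-symbol alphabet is roughly 70 bits of entropy, which is ample against offline WPA2 handshake cracking.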

Apple doesn’t require passcodes and Touch ID, but it strongly encourages users to enable them. It also enables disk encryption by default. These are good decisions, and they’re well tolerated by most users.

Many IP cameras use a default password and open UPnP ports by default. These are bad decisions. If the user does not explicitly intervene (they won’t!) then the camera is exposed to the world.

Forcing software updates is a good step in this direction, though users tend to hate it.

Government regulation

Probably nothing will come of regulation. Design and manufacturing occur across several countries, so regulation would need to apply at the point of sale, as the EU does with RoHS. But unlike RoHS, which is easy to define (you can’t use this list of materials), ‘adequate security’ differs completely depending on the type of device and the context in which it is used.

For many product categories, adding regulatory overhead would kill the business entirely.

It’s difficult regulation to write. You can’t write something blanket like “all comms must be encrypted”; this would make many devices impossible to design. Security needs to be tailored to the environment. As a result, a conventional risk management process is more appropriate (identify the risks, implement appropriate controls, then the regulator signs off to say that you’ve done that well enough.)

Product categories where there is significant risk of personal or property damage (medical devices, cars, industrial equipment, aircraft) might need specific security regulation. Chances are, they already have regulation by virtue of being high risk.

Regulation can also be politically motivated. You don’t want bogus media coverage of ‘pacemakers can be remotely hacked’ or ‘terrorists can explode your laptop’ to impact your business.

Positive and negative reporting

I’ve advocated for reports of negative results – i.e. “we pentested this device and did not find any problems”. I believe that these are more useful than positive (“dis shit be busted”) reports.

If negative reports became more commonplace, they might give vendors a reason to actually think about security. They want people to publish good things about them!

Right now, researchers have to speculate about the sort of security threats a device is designed to handle. If researchers publish a report claiming vulnerabilities that a device was never intended to handle, the vendor loses both ways: they paid the cost of implementing security and they still got bad press. If vendors explained their threat model up-front, it would prime the conversation; all future discussion would start from that reference point, and vendors would get to choose it.

Right now, ‘security researcher publishes bad report’ is the most plausible security threat that many vendors face.

We’ve got the ‘stick’ side of incentives right – vendors that ship bad security sometimes get bad press. We should have a ‘carrot’ side too – vendors that take the time to document and openly discuss their security decisions get good press.

Make the network resilient

Mirai is a botnet, and botnets are not an IoT phenomenon. They’ve been around for decades, often built from unpatched desktop/laptop machines. There are still Windows XP machines out there, and they’re not receiving updates any more!

We need to get more vendors producing updates, and we need to get more end-users installing updates, but we can never patch everything. I think a more pragmatic way to proceed is to make the network tolerate and/or prevent malicious behaviour.

Broadly, botnets are used for:

- launching DDoS attacks
- sending spam
- relaying traffic

We can’t do much about relaying, but DDoS and spam are easy to detect: lots of outbound traffic. DDoS is usually a lot of IP/UDP traffic to a small number of IPs. Spam is particularly easy to detect because it’s identifiable by destination port number (SMTP, port 25)!
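The heuristic above can be sketched in a few lines. This is a toy illustration, not a production traffic analyser: the flow-record shape, thresholds, and the assumption that a home network has no legitimate mail server are all mine.

```python
from collections import defaultdict

# Hypothetical flow record: (src, dst_ip, dst_port, proto, bytes_out).
# Sketch of the two detections described above:
#  - spam: outbound connections to TCP port 25 (SMTP) from a host that
#    (we assume) shouldn't be a mail server;
#  - DDoS: a large volume of UDP sent to a small number of destination IPs.

def flag_suspicious(flows, udp_bytes_threshold=10_000_000, max_targets=3):
    by_src = defaultdict(lambda: {"smtp": 0, "udp_bytes": 0, "udp_dsts": set()})
    for src, dst_ip, dst_port, proto, nbytes in flows:
        stats = by_src[src]
        if proto == "tcp" and dst_port == 25:
            stats["smtp"] += 1
        elif proto == "udp":
            stats["udp_bytes"] += nbytes
            stats["udp_dsts"].add(dst_ip)

    flagged = {}
    for src, s in by_src.items():
        reasons = []
        if s["smtp"] > 0:
            reasons.append("outbound SMTP")
        if (s["udp_bytes"] >= udp_bytes_threshold
                and len(s["udp_dsts"]) <= max_targets):
            reasons.append("high-volume UDP to few targets")
        if reasons:
            flagged[src] = reasons
    return flagged
```

A router or ISP running something like this could rate limit or quarantine the flagged host rather than block it outright, which keeps false positives survivable.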

Consumer routers could potentially rate limit this traffic and/or warn the user (though communicating with the user is an unsolved problem). Internet service providers, likewise, could detect this on the consumer side. Both, however, would incur costs to do so.

Network-enforced killswitches

A more radical proposition would be to globally share information on compromised devices, as we do with spam blacklists. Routers could automatically take corrective action (blocking UPnP, or all network traffic) against bad devices.

At a basic level, you’d want to be able to block ranges of MAC addresses. Some sort of software version detection would also be needed. This could potentially cover XP machines.
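Blocking by MAC range would most naturally key on the 24-bit OUI (vendor) prefix. A minimal sketch, assuming the router has already fetched a shared blacklist feed; the prefix shown is a made-up example, not a real vendor.

```python
# Hypothetical sketch of MAC-range blacklisting on a consumer router.
# A real deployment would fetch and verify this set from a shared feed
# (analogous to DNS-based spam blacklists); it's hard-coded here.

BLACKLISTED_PREFIXES = {
    "00:11:22",  # fictional OUI standing in for a vendor with known-bad firmware
}

def normalise(mac: str) -> str:
    """Canonicalise a MAC address to lowercase, colon-separated form."""
    return mac.lower().replace("-", ":")

def is_blacklisted(mac: str) -> bool:
    """Match on the 24-bit OUI prefix (first three octets) of the MAC."""
    return normalise(mac)[:8] in BLACKLISTED_PREFIXES
```

OUI matching is crude (one bad product line tars every device the vendor ever shipped, and MACs can be spoofed), which is part of why this proposal is radical.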

Consumer Internet routers are unfortunately very price-sensitive. It’s unlikely that vendors would add the software to do this, especially given that it might end up blocking the vendor’s own other products.

Done properly, this might incentivise vendors to properly address security in their products rather than risk being blacklisted.

Consumer education

For the entire history of computing, consumer education on security has been a total failure. Users just don’t care about security and they don’t want to learn about it.

The most effective security is either enforced on users (which they hate) or is built-in and convenient – e.g. Touch ID.

Better frameworks

Ehhh.

So here’s the thing. The popular dev boards from Intel, Google, Raspberry Pi and so on – they’re all Linux machines. Which is fine, but a full Linux system is more hardware than many shipped products can justify.

So go and build your product on one – you’ll find out quickly whether it’s going to work. Building on cheap, small hardware is a premature optimisation. Just don’t be surprised if manufacturing and/or management ask you to save $20 of BOM cost by switching away from Linux and your favourite framework.
