Commentary on the Sony IPELA IP Camera backdoor
7 Dec 2016

It turns out that a range of Sony IP cameras had a hidden telnet/SSH server: http://blog.sec-consult.com/2016/12/backdoor-in-sony-ipela-engine-ip-cameras.html?m=1

What’s good about the design?

The servers weren't wide open to the world. Getting access required:

- network access to the camera's built-in web server;
- the hardcoded web-interface credentials, recovered by reverse engineering the firmware;
- a specially crafted request to that web server to switch the hidden telnet/SSH service on;
- the root password, shared by every camera, recoverable by attacking a hardcoded hash.
So while this looks bad on the surface, exploiting it actually took a fair amount of effort. The design is obviously critically flawed, but in the ecosystem of IoT devices this one is better than most.

Sony responded appropriately and released an update for the cameras.

What’s bad about the design?

The servers weren't disabled before the cameras were shipped out. This, in my mind, is the critical problem. The manufacturing line needs to have privileged access to the device; this is where firmware gets uploaded, hardware gets calibrated and the device is tested. The servers need to be present during manufacture, but they must be disabled before the device ships.
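One way to make that structural rather than procedural, as a minimal sketch: gate the debug services on a factory-mode flag that the last production-line step clears. The flag file and daemon paths below are illustrative assumptions, not Sony's actual firmware layout.

    # Minimal sketch: only start the debug services if the factory-mode
    # flag is still present. Flag file and daemon paths are hypothetical.
    import os
    import subprocess

    FACTORY_FLAG = "/etc/factory_mode"  # deleted as the final line step

    def start_debug_services() -> None:
        if not os.path.exists(FACTORY_FLAG):
            return  # shipped device: telnet/SSH never come up
        subprocess.Popen(["/usr/sbin/telnetd"])
        subprocess.Popen(["/usr/sbin/sshd"])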

I can understand each device having the same passwords. This is a manufacturing convenience which saves money and time. Every device gets the same passwords, has the same public keys and the same binary firmware image. If you’re coming from a desktop/mobile security perspective this is problematic, but in the IoT space, sorry, cost concerns override the impurity of having a million devices with the same keys.

Some devices, especially Internet routers, will assign a different password to each unit, either at manufacture time or derived from a unique device ID embedded in the hardware. The cameras certainly have a unique MAC address, so the necessary hardware is already present.
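As a sketch of what such a derivation might look like: an HMAC over the MAC address, keyed with a secret that never leaves the production line. The secret, the truncation length and the scheme itself are assumptions for illustration, not any vendor's actual process.

    # Hypothetical per-device password derivation from the MAC address.
    import hashlib
    import hmac

    # Held only on the production line; never shipped on the device.
    FACTORY_SECRET = b"example-secret-keep-off-the-device"

    def per_device_password(mac: str) -> str:
        digest = hmac.new(FACTORY_SECRET, mac.encode(), hashlib.sha256)
        # Truncated for usability; store the full digest if you can.
        return digest.hexdigest()[:16]

    print(per_device_password("00:1a:80:12:34:56"))  # unique per camera

The trade-off is that the factory secret becomes a single point of failure: if it ever leaks, every password is derivable from a public MAC address. That's still a big improvement over shipping identical credentials on a million devices.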

What did we learn?

If you have hidden secrets on your device, they will be discovered given enough time.

Once any secrets are out, they're out for a whole class of devices. All cameras with this firmware are now vulnerable. One key mitigation for this type of attack is that each individual device gets different keys and passwords; at least then a breach of one device affects only that device.

Speculation

I don’t think this was intentional (in the sense of “hey let’s run telnet and SSH nobody will notice”). Certainly Sony have enough smart engineers to consider the security ramifications of onboard web, telnet and SSH servers; it’s even likely that they have a threat model and risk analysis. They also have enough history manufacturing this sort of device that they know that manufacturing wants certain access to the device. And this same mistake hasn’t been found in other Sony devices to date.

My bet is that the firmware engineers handed this off to manufacturing saying, "hey, we enable telnet and SSH so you can test and calibrate on the line. Make sure you turn it off." And the manufacturing engineers, being a totally different sort, wrote their scripts that run over SSH, set up the production lines and forgot the warning. Or maybe they left it on without understanding the security impact; disabling the servers makes their life difficult if they ever want to retest or service a device. It's an easy mistake to make in a big company.
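Processes that rely on someone remembering are exactly the ones that fail. Combined with the factory-mode flag sketched earlier, the line's final step could clear the flag and then actively verify the debug services are gone, so a unit can't pass unless they really are off. A hypothetical sketch (the ssh invocation, ports and timing are all assumptions):

    # Hypothetical end-of-line step: clear factory mode, reboot, and refuse
    # to pass the unit unless telnet (23) and SSH (22) are really closed.
    import socket
    import subprocess
    import time

    def ssh(host: str, command: str) -> None:
        # Assumes the line's test key is loaded in the operator's SSH agent.
        subprocess.run(["ssh", f"root@{host}", command], check=True)

    def port_open(host: str, port: int) -> bool:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                return True
        except OSError:
            return False

    def finalize(host: str) -> None:
        ssh(host, "rm -f /etc/factory_mode")  # flag from the earlier sketch
        # The connection may drop mid-reboot, so don't check the exit status.
        subprocess.run(["ssh", f"root@{host}", "reboot"], check=False)
        time.sleep(60)  # crude; a real line would poll until the camera is up
        for port in (22, 23):
            if port_open(host, port):
                raise RuntimeError(f"debug port {port} still open on {host}")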

What’s the impact?

Usual vulnerability disclosure ethics require that you give the vendor some time to correct the vulnerability before publishing; that has been done here, and all credit to the SEC Consult team. But I can't help feeling that this is bad policy for IoT devices. Sony have produced new firmware, distributed it, and yet… the vast majority of cameras in the wild will never get the update. They will remain vulnerable, and had someone not gone looking (using the specialist knowledge and tools above), it's unlikely that the vulnerability would have been discovered at all. Certainly there are more profitable places for miscreants to hunt for vulnerabilities on their own.

So while I advocate openness and disclosure, I think the usual disclosure policy might need some adjustment for devices which can’t be easily updated.

As it stands, any Internet-connected cameras still running this firmware can now be easily harvested for botnets.

