Often, the software or firmware developers end up building threat models. This is a terrible idea for two reasons:
1. Software developers have one priority: ship the product faster. Faster faster faster. You're asking them to produce a report (taking time) that will require them to implement controls (taking more time). Some of the controls will be burdensome and require major changes. So they will silently omit threats that are difficult to control, downgrade their probability/impact, or select an ineffective but easy-to-implement control.
2. Wait, let me back up. Do you even have software developers? A large proportion of IoT vendors don't employ software developers at all – the hardware/electrical engineers write the firmware. Sometimes the firmware is outsourced, and an outsourced firmware vendor has no stake in your long-term security needs.
So, IF you have software people and IF they have security training and IF they ever use it, they might be in a position to discuss security issues. But given point (1) – effective threat modelling conflicts with their goals – it’s not a great idea.
Who should build the threat model, then? Someone who isn't interested in the ship date. Preferably, someone who has no relationship with your firmware team.
If you use an external consultant, they can deliver the bad news and leave without causing conflict between staff.
So what happens to the finished threat model? It becomes software requirements. (You have requirements, right?)
Because they're requirements, the developers are free to select an appropriate control. Requirements fit into their existing workflow and give them something to test against. (You have tests, right?)
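To make the threat → requirement → test flow concrete, here's a minimal sketch. Everything in it is illustrative: the function names, the symmetric HMAC scheme (a real device would likely verify an asymmetric signature with the public key in secure storage), and the firmware strings are all assumptions, not from any particular product.

```python
# Hypothetical example: the threat "attacker installs tampered firmware"
# becomes the requirement "device must reject update images whose
# signature does not verify", which becomes tests the developers can run.
import hashlib
import hmac

SIGNING_KEY = b"device-signing-key"  # illustrative; real devices would use asymmetric keys


def sign(image: bytes) -> bytes:
    """Produce the expected signature for a firmware image."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()


def accept_update(image: bytes, signature: bytes) -> bool:
    """Requirement under test: reject any image whose signature doesn't verify.

    compare_digest does a constant-time comparison, avoiding timing leaks.
    """
    return hmac.compare_digest(sign(image), signature)


# Tests derived directly from the requirement:
good = b"firmware v1.2"
assert accept_update(good, sign(good))             # valid image accepted
assert not accept_update(b"tampered", sign(good))  # modified image rejected
assert not accept_update(good, b"\x00" * 32)       # forged signature rejected
```

The point isn't the crypto; it's that a requirement phrased this way gives developers a pass/fail check instead of a vague instruction to "be secure".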
I know, I know: most orgs don’t have requirements or tests. Hopefully your threat model does not show any significant risks to your business. If it does, you’re now in a much better position to advocate for proper testing. You can clearly show the risk and cost of NOT testing.