"Whatever, put a chip in it."
The careless days in IoT are numbered
by Sebastian Floss
6 minute read
About the author
Sebastian Floss
... writes about electronics, design for manufacturability, and embedded software, and is on an eternal mission to teach people about secure development. He tries to avoid any preaching, though. You can reach him through the contact form at the bottom of the article.
Some will recognize that the headline has been borrowed from the infamous Twitter account @internetofshit. I like it for the unfortunate reason that, if you take a closer look at the growing market of Internet-connected devices, you quickly have to admit: it really is not the worst summary of what is going on in IoT right now.
Not in the sense that these are only stupid or silly additions to previously “offline” products. I am referring to a simplified description of the apparent mindset of companies jumping on the IoT bandwagon.
A few numbers: according to the European Commission’s Eurobarometer for Cyber Security 2017, 69% of companies have no or only a basic understanding of their exposure to cyber risks. Gartner, on the other hand, says there are more than 8 billion IoT devices out there in the wild. A study conducted by HP in 2015 found that 70% of the most commonly used IoT devices were shipped with insecure defaults, a lack of encryption and so on. Even though that was three years ago, you can still do the math here.
Sure, consumers demand “smart products now!”, vendors are forced to react quickly, and the easiest way, of course, is to take what you already have and “put a chip in it”. But why ignore security? Isn’t that becoming a selling point?
Well, first of all: no, it isn’t. People might start to worry about cyber security as some abstract danger, but there are no benchmarks or other means of comparing products with regard to cyber security. Features, on the other hand, can be compared easily, so there really is no incentive for a vendor to spend money on security.
While it is easy to put the blame for the lack of IoT security on a bunch of cheapskate manufacturers, what is truly flawed is placing the responsibility for security solely in the hands of consumers.
It basically is a vicious circle: a new product gets released as “minimum viable” with a big technical debt in security, because the vendor wants to test its marketability at the least possible cost (which is what gets abusively called agile development). Later, if the product achieves the desired success, it attracts the attention of security researchers (or it just gets hacked by someone). Then, to avoid further damage to their brand, the vendor reacts and pays off the debt. But the same thing can and will happen again, even to the same product by the same manufacturer. Remember Intel’s disastrous last year? First the Management Engine, then Spectre and Meltdown. Yes, that is not pure IoT, but you get the picture.
Admittedly, in the very long term this behavioural pattern could lead to consumers mistrusting new “smart” products, especially in niche markets, making it difficult for companies without a certain security reputation to launch new products.
But let’s be honest: a lot has to happen before consumers arrive there.
I firmly believe trying to raise consumer awareness is a rather futile endeavour. Unless vendors are forced to put labels on the box saying “this device may try to attack your government or start mining bitcoins for organized crime”, we will not get consumers’ attention.
As long as the product works as described, how is the consumer even supposed to know it poses a threat - especially if not immediately to them? We as consumers are not expected to tell whether a device can be a safety risk, contains hazardous substances or disturbs the Wi-Fi in the whole building. Here, in governmental regulations we trust!
But these regulations do not include cybersecurity. At least not yet.
There will be governmental regulations. The question is to what extent.
For some time now, the voices demanding regulatory measures for IoT security have been getting louder. Unfortunately, but not surprisingly, there is no single solution in sight, since different stakeholders have different demands. Looking into current discussions in industry and politics, there are three major proposed approaches:
Mandatory minimum security
Since requirements need to be assessable, there is likely going to be a list of specifications that can be easily met, such as mandating a particular encryption standard or forbidding default passwords. But it will not be possible to assess the quality of the overall security implementation based on such standards. So while a set of minimum security requirements is easy for regulatory bodies to set up and for vendors to implement, it provides the least amount of security.
Non-mandatory certifications
In their State of the Union 2017 press release, the EU mentions plans for a cybersecurity certification programme: “[Vendors] will have to go through one single process in order to obtain a European certificate valid in all Member States. […] Finally, as the demand for more secure solutions is expected to rise worldwide, vendors and providers will also enjoy a competitive advantage to satisfy such a need”. Most likely, the marketing advantage of such certifications will wear off, especially since not every vendor will be able to afford them, so specific (niche) products will only be available “uncertified”. But what good is it to live in a fortress when some doors do not lock properly?
Software Liability
The third likely approach: extending manufacturer liability, for example by establishing maximum reaction times for providing updates after a vulnerability is discovered, with fines for missing them. That means there will be costs for dealing with incidents during a product’s lifetime. This is easy for governments to implement, since we already have product liability laws that only need to be extended.
Since I am not a lawyer, and making assumptions about tech developments is dangerous in general, I do not know what will be bestowed upon us in the end. But come what may, it will stir things up. Just think of the huge burden the GDPR is placing on companies right now - a perfect example of what happens when politics tries to solve what the markets missed out on.
What I do know, though, is that companies involved with IoT - vendors and users alike - should prepare themselves. And the vendors especially need to start now if they do not want to be coping with huge costs. FOMO - the “fear of missing out” - should from now on only be dreaded with regard to securing a product. For starters, it is time to get security into the heads of the people involved in software development. At present, even universities rarely teach classes about secure software development; if at all, the topic is treated as a mere side note. Make security a part of every software developer’s education and it will become a given in the design process.
A product that has been designed for security from the ground up will need fewer alterations once laws or regulations arrive, and it will pass certification more easily, too.
The only way to avoid additional costs is to start educating software developers.
To prove the point that it really is not that hard to get started, here is a short excerpt from our secure development guidelines at ImagineOn: four simple rules that do not cost any money if they are lived by. Don’t get me wrong: I am not suggesting this list is even remotely complete. I am claiming that if, in the past, every vendor had followed just a few basic rules like these, we would not be talking about IoT security here and now.
#1: Establish secure defaults
Minimize attack surfaces by disabling anything you do not need. Provide security by avoiding things like default passwords. Yet remember: “Security at the expense of usability comes at the expense of security.”
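To illustrate what secure defaults can mean in practice, here is a minimal Python sketch - not taken from our guidelines, and the `DeviceConfig` name and fields are hypothetical. Every non-essential service starts out disabled, and instead of shipping a shared default password, each device generates its own at first boot:

```python
import secrets
import string
from dataclasses import dataclass, field


def generate_device_password(length: int = 16) -> str:
    """Generate a unique per-device password instead of a shared default."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))


@dataclass
class DeviceConfig:
    # Anything not strictly needed is off until the user enables it.
    telnet_enabled: bool = False
    debug_port_enabled: bool = False
    remote_admin_enabled: bool = False
    # No hard-coded password: a fresh one is generated per device.
    admin_password: str = field(default_factory=generate_device_password)


config = DeviceConfig()
```

The point is not the specific fields but the direction of the defaults: an attacker scanning a fleet of these devices finds no open debug ports and no password shared across units.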
#2: Don’t talk to strangers
Neither trust your users to interact with your device in the desired manner, nor trust the cloud service, backend or whatever else you connect to. Any data you receive can be of malicious intent.
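A hedged Python sketch of the idea - the command names and limits are made up for the example: a device treats every message from its backend as hostile until it has passed a size cap, strict parsing, a command whitelist and bounds checks.

```python
import json

# Only commands the device actually supports; everything else is rejected.
ALLOWED_COMMANDS = {"set_temperature", "reboot"}


def parse_backend_message(raw: bytes) -> dict:
    """Validate a backend message before acting on it."""
    if len(raw) > 1024:  # cap message size before doing any work
        raise ValueError("message too large")
    try:
        msg = json.loads(raw.decode("utf-8"))
    except (UnicodeDecodeError, json.JSONDecodeError):
        raise ValueError("malformed message")
    if not isinstance(msg, dict) or msg.get("cmd") not in ALLOWED_COMMANDS:
        raise ValueError("unknown command")
    if msg["cmd"] == "set_temperature":
        temp = msg.get("value")
        # Bounds-check even "trusted" parameters: a compromised backend
        # should not be able to drive the hardware out of its safe range.
        if not isinstance(temp, (int, float)) or not 5 <= temp <= 30:
            raise ValueError("temperature out of range")
    return msg
```

The design choice worth noting is the whitelist: rejecting everything not explicitly allowed is far more robust than trying to enumerate bad inputs.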
#3: Security ≠ Secrecy
Avoid security by obscurity and keep things transparent. The black-box approach to protecting intellectual property is acceptable, but be under no illusions: any chip’s data can be read out, any software reverse-engineered.
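A small Python sketch of the principle: rely on a published, well-studied primitive - here HMAC-SHA256 from the standard library - where only the key is secret, instead of a homegrown obfuscation scheme that falls apart the moment someone reverse-engineers it.

```python
import hashlib
import hmac


def sign_payload(key: bytes, payload: bytes) -> bytes:
    """HMAC-SHA256: the algorithm is public; only the key is secret."""
    return hmac.new(key, payload, hashlib.sha256).digest()


def verify_payload(key: bytes, payload: bytes, signature: bytes) -> bool:
    """Check a payload's authenticity against its signature."""
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(sign_payload(key, payload), signature)
```

Even with the code above fully public, an attacker without the key cannot forge a valid signature - which is exactly what “security ≠ secrecy” means.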
#4: Accept it, it will be hacked
Develop your software with the ability to react to failure and be prepared to provide updates in case of a disclosed vulnerability. (The money-saving part here is “be prepared”.)
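One way “be prepared” can look in code - a simplified Python sketch, not a production update mechanism: before applying an update, reject version rollbacks and verify the image digest. (In a real system the digest would come from a cryptographically signed manifest, not a bare string.)

```python
import hashlib


def check_update(current_version: int, update_version: int,
                 image: bytes, expected_sha256: str) -> bool:
    """Decide whether a received firmware image may be applied."""
    # Block downgrade attacks: an attacker must not be able to
    # reinstall an older, vulnerable firmware version.
    if update_version <= current_version:
        return False
    # Verify the image arrived intact and is the one we expected.
    return hashlib.sha256(image).hexdigest() == expected_sha256
```

Building this check in from day one costs almost nothing; retrofitting an update path onto a fleet of already-deployed devices is where the huge costs come from.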
Admittedly, to follow these rules you do need to know what you are doing. So they will not help with a “Whatever, put a chip in it!” attitude.
But companies without some security expertise among their staff should no longer be developing their own IoT hardware anyway. Go seek guidance from others, or buy the required parts and software from third parties. Go now, time’s a-wastin’!
Your questions about IoT Security will be answered here