Information security policies are essential for tackling organisations’ biggest weakness: their employees.
Everything an organisation does to stay secure, from implementing technological defences to physical barriers, is reliant on people using them properly. It only takes one employee opening a phishing email or letting a crook into the premises for you to suffer a data breach.
Information security policies are designed to mitigate that risk by helping staff understand their data protection obligations in various scenarios.
Organisations can have as many policies as they like, covering anything that’s relevant to their business processes. But to help you get started, here are five policies that every organisation must have.
1. Remote access
The days of 9-to-5 office work were over even before COVID-19 – and many organisations will continue to allow employees to work remotely when life as normal resumes.
That means employees will not only be using work computers remotely but may also be using their phones to check work emails outside business hours or while travelling.
This is great for productivity and flexibility, but it also creates security concerns. Remote workers don’t have the privilege of the organisation’s physical and network security provisions, so they need to be instructed on what they can do to prevent breaches.
Policies should cover the use of public Wi-Fi, accessing sensitive information in public places and storing devices securely at a minimum.
2. Password creation
Pretty much everyone uses passwords at home and at work to access secure information, so you’d think we’d all have the hang of it by now.
Unfortunately, that’s not the case. Hacked passwords are among the most common causes of data breaches, and it’s hardly a surprise when people set weak passwords such as ‘123456’ and ‘Password’.
Organisations should mitigate this threat by creating a password policy that outlines specific instructions for creating passwords.
The received wisdom about passwords is that they should be at least eight characters, combining letters, numbers and special characters. However, this doesn’t always guarantee a strong password, as employees still gravitate towards easily guessable phrases such as ‘Password#1’.
You might be better off encouraging employees to use a mnemonic, such as taking the first letter, as well as numbers and punctuation, from a memorable sentence. So, for example, ‘The old man caught the 15:50 train’ becomes ‘Tomct15:50t’.
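If you wanted to hand employees a tool rather than just a rule, the transformation above is simple enough to script. This is a sketch of my own illustrating the mnemonic approach (the rule of keeping tokens containing digits or punctuation intact is an assumption, not a standard):

```python
import re

def mnemonic_password(sentence):
    """Build a password from a memorable sentence: keep the first
    letter of each word, but keep tokens containing digits or
    punctuation (like a train time) whole."""
    parts = []
    for word in sentence.split():
        if re.search(r"[\d:;!?#@]", word):
            parts.append(word)  # keep '15:50'-style tokens intact
        else:
            parts.append(word[0])
    return "".join(parts)

print(mnemonic_password("The old man caught the 15:50 train"))  # Tomct15:50t
```

The result is memorable to the owner but meaningless to anyone guessing from a dictionary.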
3. Password management
Strong passwords only work if their integrity remains intact. If you leave them written down, share them or select ‘remember this password’ on a public computer, you risk them falling into the wrong hands.
The same is true if you use the same password on multiple accounts. Let’s say a criminal hacker breaks into a database and finds the credentials for your personal email account.
If they can work out where you work (which they have a good chance of through a Google, Facebook or LinkedIn search), they’ll probably try that password on your work email and other work-related accounts.
It’s therefore essential that organisations include a policy that instructs employees not to share passwords, write them down or use them on multiple accounts.
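A simple way to illustrate (or audit) the reuse problem: given a set of account/password pairs, find any password protecting more than one account. A sketch of my own, not a real audit tool:

```python
from collections import defaultdict

def find_reused_passwords(vault):
    """vault: {account_name: password}. Returns each password used on
    two or more accounts, mapped to the accounts sharing it -- exactly
    the reuse that makes credential stuffing work."""
    by_password = defaultdict(list)
    for account, password in vault.items():
        by_password[password].append(account)
    return {pw: accounts for pw, accounts in by_password.items()
            if len(accounts) > 1}
```

One leaked database entry exposes every account in the returned lists at once.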
You might also suggest that employees use a password manager such as LastPass or 1Password to help them generate and keep track of unique passwords.
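What a password manager automates is essentially this: generating a long random password from a wide character set, one per account. A minimal sketch using Python's standard `secrets` module:

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password from letters, digits and punctuation
    using a cryptographically secure source -- the kind of unique,
    unguessable credential a password manager creates per account."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Because each password is random and unique, a breach of one account can't cascade into the others.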
4. Portable media
Cyber criminals can easily infect an organisation’s systems by planting malware on a removable device and then plugging it into a company computer.
Many organisations counteract this threat by banning removable devices and relying on email or the Cloud to transfer information.
This might not be viable for you, but there should always be safeguards in place. For example, you might set limits on who can use removable devices or create a rule instructing employees to scan devices before use.
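A "scan before use" rule can even be partially automated. This sketch (my own illustration, emphatically not an antivirus substitute) simply flags file types commonly abused to deliver malware on a mounted removable device:

```python
from pathlib import Path

# Extensions commonly abused to deliver malware (illustrative list)
RISKY_EXTENSIONS = {".exe", ".scr", ".vbs", ".js", ".bat", ".cmd"}

def flag_risky_files(mount_point):
    """Walk a removable device's mount point and return any files whose
    extension is on the risky list -- a crude first-pass triage before
    a proper malware scan."""
    root = Path(mount_point)
    return [p for p in root.rglob("*")
            if p.is_file() and p.suffix.lower() in RISKY_EXTENSIONS]
```

Anything this flags warrants a full scan before the device is trusted.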
5. Acceptable use
Organisations should never expect employees to spend 100% of their time at work doing work-related activities, because everyone needs a break now and then.
But just because you give employees this leeway, it doesn’t mean you can’t keep a careful eye on what they do during those breaks.
If an employee wants to spend a few minutes checking their personal email or how many likes their latest Instagram post got, there’s not much to complain about.
Indeed, giving employees the chance to quickly deal with personal issues or gain the validation of strangers on social media should lead to a happier, more productive workforce.
However, the same can’t be said if an employee wants to spend their time downloading files from a dodgy website or visiting other sites that are notorious for malware.
You can prevent much of the risk by blocking certain websites, but this isn’t a fool-proof system, so you should also include a policy prohibiting employees from visiting any site that you deem unsafe.
The policy should clearly state the types of site that are off-limits and the punishment that anyone found violating the policy will receive.
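The technical side of that policy usually starts with a blocklist check somewhere in the web proxy or DNS filter. As a minimal illustration of the matching logic (the domains here are made up):

```python
from urllib.parse import urlparse

# Hypothetical blocked domains for illustration only
BLOCKLIST = {"dodgy-downloads.example", "malware-site.example"}

def is_blocked(url):
    """Return True if the URL's host is a blocked domain or a
    subdomain of one -- the basic check a web filter performs."""
    host = urlparse(url).hostname or ""
    return host in BLOCKLIST or any(
        host.endswith("." + domain) for domain in BLOCKLIST)
```

A real deployment would use a maintained threat-intelligence feed rather than a hand-picked set, but the matching principle is the same.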
In part 1 of this series, I posited that the IoT landscape is an absolute mess but Home Assistant (HA) does an admirable job of tying it all together. In part 2, I covered IP addresses and the importance of a decent network to run all this stuff on, followed by Zigbee and the role of low power, low bandwidth devices. I also looked at custom firmware and soldering and why, to my mind, that was a path I didn't need to go down at this time.
Now for the big challenge - security. As with the rest of the IoT landscape, there's a lot of scope for improvement here and also just like the other IoT posts, it gets very complex for normal people very quickly. But there are also some quick wins, especially in the realm of "using your common sense". Let's dive into it.
The "s" in IoT is for Security
Ok, so the joke is a stupid oldie, but a hard truth lies within it: there have been some shocking instances of security lapses in IoT devices. I've been directly involved in the discovery or disclosure of a heap of these; indeed, security is the thing I most commonly write about. Let me break this down into logical parts, use real-world examples of where things have gone wrong, and cover it in two different ways:
Risks that impact IoT devices themselves
Risks that impact data collected by IoT devices
Let's take that first point; what immediately comes to mind is the Nissan Leaf vulnerability someone in my workshop found almost 5 years ago now. Here we had a situation where an attacker could easily control moving parts within a car from a remote location. Fortunately, that didn't include driving functions, but it did include the ability to remotely manage the climate control and as you can see in the video embedded in that post, I warmed things up for my mate Scott Helme from the other side of the world whilst he sat there on a cold, damp, English night.
Back to the bit about risks impacting data collected by IoT devices and back again to CloudPets, Context Security's piece aligned with my own story about kids' CloudPets messages being left exposed to the internet. I can't blame this on the teddy bears themselves, rather the fact that the MongoDB holding all the collected data was left publicly facing without a password. Same again with VTech who collected a bunch of data via children's tablets (IMHO, an IoT device as they're first and foremost a toy) then left it open to very simple vulnerabilities. Are these examples actually risks in IoT? Or are they just the same old risks we've always had with data stored on the internet? It's both, here's why:
Let's use smart vibrators as an example (yes, they're a real thing), in particular the WeVibe situation:
At the August Def Con conference in Las Vegas, two New Zealand hackers demonstrated that the We-Vibe 4 Plus vibrator was sending information — including device temperature and vibration intensity — back to its manufacturer, Standard Innovation.
If this data was compromised, it could potentially expose a huge amount of very personal information about their owners, information that never existed in digital form before the advent of IoT. Whilst the underlying risk that exposes the data may well be a classic lack of auth CloudPets style, there'd be no data to expose were it not for adding internet to devices that never had it before. Adult toys have been around forever and a day, they're not new, but recording their usage and storing it on the cloud is a whole different story.
So, what's to be done about it? Let's go through the options:
I'll start with the devices themselves and pose a question to you: can you remember the last time you patched the firmware in your light globes? Yeah, me either, because most of mine are probably like yours: the simplest electrical devices in the house. Some of them, however, are more like the LIFX example from before in that they have little microprocessors and are Wi-Fi (or Zigbee) enabled. And, just like the LIFX devices, they're going to need patching occasionally. They're complex little units doing amazing things and they run software written by humans which inevitably means that sooner or later, one of us (software developers) is going to screw something up that'll require patching.
IoT firmware should be self-healing. This is super important because your average person simply isn't going to manually patch their light bulbs. Or talking teddy bear. Or vibrator. Can you imagine - with any of those 3 examples - your non-tech friends consciously thinking about firmware updates? How often would you think about firmware updates? How often would I? To test that last question, I fired up a bunch of IoT device apps to see which ones are auto-updating (so I don't have to think about patching) versus requiring a manual update (in which case, I should have been thinking about patching). I started with the Philips Hue app which was both auto-updating and at the latest firmware version:
Ok, that's good, not something I need to think about then. Let's try Nanoleaf which are the LED light panels both kids have on their walls:
Ok, so they're up to date, but will they stay up to date? By themselves? I honestly don't know because it's not clear if, to use my earlier term again, they're self-healing. Same with the Shellys I've become so dependent on:
And just to perfectly illustrate the problem, I snapped that screen cap the day before posting this part of the series. Just over a day later, it's a different story and I only knew there was an update pending because I fired up the app and looked at the device:
I checked just one of the couple of dozen connected lights running in the Tuya app:
This looks good, but it wasn't the default state! I had to manually enable automatic updates and I had to do it on a per-device basis. People just aren't going to do this themselves.
The next thing I checked was my Thermomix and the firmware situation is directly accessible via the device itself:
I'm not sure whether this auto-updates itself or not (it's still fairly new in the house), but with a big TFT screen and the ability to prompt the user whilst in front of the device, I'd be ok if it required human interaction.
Uh... is that good? Bad? Does it need an update? Turns out you can't tell by looking at the device itself, you need to jump back out to the main menu, go down to settings, into firmware update then you see everything pending for all devices:
I don't know how to auto-update these nor do I have any desire to continue returning to the app and checking what's pending. I hit the update button and assumed all would be fine... (it wasn't, but I'll come back to that shortly)
Here's what I'm getting at with all this and I'll hark back to the title of part 1: it's a mess. There's no consistency across manufacturers or devices either in terms of defaulting to auto-updates or even where to find updates. And before anyone starts jumping up and down suggesting that devices shouldn't auto-update because you should carefully test any patches before rolling out to production and ensuring you have a robust rollback strategy, these are consumer devices made for people like my mum and dad! It needs to be easy. It's not.
When Patching Goes Wrong
Now that I've finished talking about how patching should be autonomous, let's talk about the problems with that, starting with an issue I raised in this tweet from yesterday:
In the first of my IoT blog series yesterday, I lamented how one of my smart plugs was unexplainably inaccessible. Looks like @tplinkuk broke it with a firmware update which will now break a bunch of stuff around the house. Check out the angry responses to their tweet, wow 😯 https://t.co/RKFxNF7v9a
What appears to have happened is that in order to address "security vulnerabilities on the plug", TP-Link issued a firmware update that killed the HA integration. More specifically, they closed off the port that allowed HA to talk directly to the smart plug which broke the integration, but didn't break the native Kasa app. As at the time of writing, the fix is to raise a support ticket with TP-Link, send them your MAC address then they'll respond with a firmware downgrade you can use to restore the device to its previous state. Ugh. (Sidenote: regarding this particular issue, it looks like work has been done to make HA play nice with the newer version of the firmware.)
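Incidentally, when an update closes a local port like this, the first symptom is usually just an integration that silently stops responding. A quick sketch of my own (not how HA actually probes devices) for checking whether a device still answers on its local API port:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Attempt a TCP connection to a LAN device's local API port.
    Returns True if something answers -- handy for spotting when a
    firmware update has closed off local access."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running this against a Kasa plug's local port before and after an update would show exactly when the local API disappeared.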
Let's start by looking at this from a philosophical standpoint:
But here’s the bigger philosophical question: the device still worked fine with the native app, should @TPLINKUK be held accountable for supporting non-documented use cases? Probably “no”, but in a perfect world they’d document local connections by other apps and not break that.
Clearly it was never TP-Link's intention for people to use their plugs in the fashion HA presently is and I'll talk more about why HA does this in the next section of this post. But rightly or wrongly, the risk you take when using devices in a fashion they weren't designed for is that the manufacturer may break that functionality at some time. One way of dealing with that is to simply block the devices from receiving any updates:
Troy, Firewall Rule number 1 for HA and Home IoT subnets (although breaks Wiz Bulb connectivity even though they have a “local” access API) pic.twitter.com/RGOhsGaq7F
But what if that device was the LIFX light bulb from earlier on and the patch was designed to fix a serious security vulnerability? Now you've introduced another risk because you're not taking patches and you have to trade that off against the risk you run when you do take patches! As @GerryD says further down that thread, it's a calculated risk and ultimately, you're trading one problem off against another one.
Speaking of trading problems, another approach is just to flash the devices with custom firmware like Tasmota:
Moral of story, avoid anything requiring proprietary access. They can always screw you. Use devices you can drop Tasmota onto. If only a company would sell devices that need no specific cloud service.
Tasmota is designed for precisely this sort of use case and I have a high degree of confidence that they wouldn't break functionality in the same way as TP-Link did. However, I also have a high degree of confidence that Tasmota is software, all software has bugs (open source or not), and you still need a patching mechanism. To my point about @GerryD's tweet earlier, firewalling off devices still remains a problem even when running open source custom firmware.
So, what's the right approach? For your average consumer (and remember, that's probably 99%+ of people buying TP-Link smart plugs), automatically updating firmware is key. For the rest of us, we need to recognise that we take on risks when using IoT devices in ways they weren't designed for. In a perfect world, companies would approach this in the same way Shelly has:
One company that we have partnered with is Shelly. They supplied the people working on the integration with the products, access to pre-release firmware and a dedicated QA group to talk to the CEO + engineers. The integration is maturing fast and next release will be really 👌
Paulus is the founder of HA and I've had a few chats with him during my IoT journey. This tweet is exemplary behaviour by Shelly and if I'm honest, my opinion of them rose a few bars after reading this. In that perfect world, TP-Link wouldn't necessarily need to go as far as devoting resources to building HA integrations (although that would be nice!), but they would make a commitment to ensure their devices are "open" and accessible to other platforms in a documented, supported fashion that won't be broken by future patches. Perhaps that's just a matter of time and as demand grows, who knows, we might even see HA on the TP-Link box alongside the tech behemoths.
This whole discussion about devices updating their firmware raised another philosophical debate which I want to delve into now, and that's the one about how self-contained the IoT ecosystem should be within the LAN versus having cloud dependencies.
Cloud Versus Local Only Access
In part 1 of the series I quoted from the HA website about how the project "puts local control and privacy first". What this means in practical terms is that HA can operate in a self-contained fashion within the local network. For example, before the aforementioned TP-Link firmware update, HA could reach out from its home in my server cabinet directly to the smart plug in Ari's room and communicate with it over port 9999. It would still work if there was no internet connectivity (local control) and TP-Link were none the wiser that I'd just toggled a switch (privacy first). There's also the added upside of the resiliency this brings with it should an IoT manufacturer have an outage on their cloud:
for my gear that is Tuya based, Tasmota has been flawless for me. I know Troy isn't fond of the firmware replacement approach, but I don't want to wake up one day (or not wake up!) to find that all my HA has broken because of an outage with the Tuya cloud servers.
That resiliency extends beyond just a cloud outage too; what if Tuya shuts down the service? Still want to be able to turn your lights on? There's a lot to be said about local control. That said, there's also a lot to be said about cloud integration and a perfect example of that is weather stations. I'm looking around at devices (the Davis Vantage Pro2 is the frontrunner at present, but I'm open to suggestions), and that then raises the question: which ones have an integration with HA? But also (and based on the TP-Link experience above), which ones have an integration that won't break in the future? A weather station is a sizable outlay compared to a smart plug and I don't want to go into it with an expectation of it working a certain way and then one day having that broken.
One approach is that rather than trying to integrate directly between the weather station and HA, you find a weather station that can integrate with Weather Underground (which Davis can do with WeatherLink Live) then use the Weather Underground integration. Now you're dependent on the cloud, but you've also dramatically widened your scope of compatible devices (WU integration is very common) and done so in a way that's a lot less hacky than custom integrations connecting to non-standard services. I don't have a problem with this, and I think that being too religious about "thou shalt not have any cloud dependencies" robs you of a lot of choices.
That said, from a simple security and privacy perspective (and often a performance perspective too), I always prioritise local communication. For example, each Shelly device in the house has cloud integration disabled:
That doesn't stop me controlling the device remotely because I can use HA's Nabu Casa to do that, but it does stop me being dependent on yet another IoT vendor to remotely manage my home. It also grants me more privacy as the devices aren't perpetually polling someone else's cloud... almost. For some reason, the Shelly on my garage door is making a DNS request for api.shelly.cloud once every second!
That data is from my Pi-hole and the Shelly is configured precisely per the earlier image. I've even pulled the JSON from the /settings API on the Shelly (you can hit that path on the IP of any Shelly on the network and retrieve all the config data), diffed it with other Shellys not displaying this behaviour and I still can't work out why it's so chatty. The point I'm making here is that devices can do a lot of communicating back to the mothership and where possible, this should be disabled.
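For what it's worth, the diffing itself is trivial once you've pulled the JSON from each device's /settings endpoint; something like this sketch (my own helper, not a Shelly-provided tool) is all it takes:

```python
def diff_settings(a, b):
    """Compare two Shelly /settings payloads (parsed to dicts) and
    return only the keys whose values differ -- a quick way to hunt
    for whatever config makes one device chattier than another."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}
```

In my case the diff showed nothing meaningful, which is exactly why the garage door Shelly's behaviour remains a mystery.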
IoT SSIDs and VLANs
If we recognise this whole thing is a mess and that at least as of today, we don't have a good strategy for keeping things patched, what should we do? One popular approach is to isolate the network the IoT things are on from the network the non-IoT things are on. This mindset is akin to putting all the potentially bad eggs in the one basket and the good eggs (such as your PC) in another basket.
The requirement for doing this is to have networking gear in the home that supports it. In part 2 I talked about the importance of good networking gear and indeed I've written many pieces about Ubiquiti before, both their AmpliFi consumer line and UniFi prosumer line, the latter having run in my house for the last 4 years. Running UniFi, I can easily create multiple Wi-Fi networks:
And yes, I name my SSIDs "HTTP403" 😊
As we then look at which clients have connected to which SSIDs, we can see them spread across the primary (HTTP403) and IoT (HTTP403 IoT) networks:
I've also got a heap of access points across my house so different devices are connected to different APs depending on where they're located and what signal strength they have. I've chosen to place all my highly trusted devices such as my iPhone, iPad and PCs on the primary network and all the IoT things on the IoT network. I've also placed the Ubiquiti cameras (including their doorbell) on the primary network figuring they're all essentially part of the UniFi ecosystem anyway.
But this is just segmentation by SSID; every device is on the same subnet and the same logical VLAN and there's not presently any segmentation of clients such that the Shelly controlling the lights on my fireplace can't see my iPhone. Ubiquiti has a good writeup of how to do this and in the first version of my UniFi network, that's precisely how things were configured. (Also check out how to configure interVLAN routing.) But there were problems...
The main problem is that you end up with all sorts of scenarios where a particular IoT device needs to see the app that controls it but because the very purpose of the VLAN is to lock the IoT things away, things would fail. So, you end up tracking down devices, ports and protocols and creating ever more complex firewall rules between networks. Troubleshooting was painful; every time I had an IoT device not behaving as expected, I'd look suspiciously at the firewall rules between the VLANs. I ended up constantly debugging network traffic and searching across endless threads just like this one trying to work out why Sonos wasn't playing nice across VLANs.
When I set up version 2 of my UniFi network (complete tweet thread here), I kept the IoT SSID but never bothered with the VLAN. It made it easy for all the existing devices to jump onto the new network (I used the same password from the v1 network) and it gives me the option to segment traffic later on. It also gives me the option to easily put it all on a different subnet later on, for example if I genuinely get to the point of IPv4 exhaustion on the 192.168.1.0/24 subnet. (Sidenote: even this can be painful as the native apps for many IoT devices want to join them to the same SSID the phone running the app is on, so I found myself continually joining my iPhone to the IoT SSID before pairing... then forgetting I'd done that and later wondering why my phone was on the IoT network! It's painful.)
Getting back to network compatibility, whilst Ubiquiti's UniFi range will happily support this approach, AmpliFi won't. To the best of my knowledge, most consumer-focused network products won't and why would they? Can you imagine your parents VLAN'ing their IoT things? It's painful enough for me! We need to think differently.
Enter "zero trust":

The main concept behind zero trust is that networked devices, such as laptops, should not be trusted by default, even if they are connected to a managed corporate network such as the corporate LAN and even if they were previously verified.
It's become a bit of a buzzword of late but the principle is important: instead of assuming everything on the network is safe because you only put good things on the network, assume instead that everything is bad and that each client must protect itself from other clients. It's akin to moving away from the old thinking that all the bad stuff was outside the network perimeter and all the good stuff was inside. That logic started eroding as soon as we had floppy disks, went quickly downhill with USB sticks and is all but gone in the era of cloud. We've been heading in this direction with enterprise security for years, now we also need to adopt that same thinking in the home.
Consider this finding about a TP-Link smart plug:

A critical flaw we found in testing meant that an attacker could seize total control of the plug, and of the power going to the connected device. The vulnerability is the result of weak encryption used by TP-Link. The attacker would have to be on your wi-fi network to do the hack.
The whole premise of an attacker already being on your network is precisely why zero trust is important. Somewhat ironically though, I suspect that whilst on the one hand the TP-Link situation is viewed as a vulnerability, the ability to connect directly to it on the local network is probably what made the HA integration feasible in the first place! In other words, one person's vulnerability is another person's integration 😎
When we put this into the context of your average consumer, it means that stuff just needs to work out of the box. The Windows machine should be resilient to a connected IoT vacuum cleaner gone bad. The personal NAS shouldn't be wide open to a connected sous vide turned rogue. Consumers can't configure this stuff nor should they, rather we need to do a better job as an industry of making IoT devices resilient to each other.
I appreciate this isn't concise "do this and you'll be fine" advice, but it's where we need to head in the future, and I'd be remiss not to push that view here. Let's look at one more related topic - TLS.
Transport Layer Security
Our view of SSL or HTTPS or TLS (and all those terms get used a bit interchangeably), has really changed over the years. Once upon a time, it was the sole domain of banks and e-commerce sites and it meant you were "secure" (Chrome literally used to use that word). The good guys had it, the bad guys didn't. In fact, most websites didn't have it but these days, it's quite the opposite; most websites do serve their traffic securely regardless of the type of business they are. The growth has been driven by the free and easy availability of certificates, largely due to the emergence of Let's Encrypt in 2016.
As it relates to IoT, let's look at it in 2 different ways:
Devices talking to hosted services over HTTPS
Devices hosting services that could support HTTPS
The first point is a bit of a no brainer because all the certificate management is done centrally by, say, Amazon for their Echo devices. Every time one of the kids asks Alexa a question, a TLS connection is established to Amazon's services and they get the benefit of confidentiality, integrity and authenticity.
Every Shelly I have in the house has its own little web server and I connect to it locally via IP address... over HTTP. An adversary sitting at the network routing level (i.e. on one of my switches) would be able to observe the traffic (no confidentiality), modify it (no integrity) or redirect it (no authenticity). Beyond a cursory Google search that returned no results, I haven't even begun to think about the logistics of installing a cert on a Shelly let alone the dozen other Shelly devices I have in the house.
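If you're curious which of your local devices even offer TLS, a rough probe like this works (a sketch of my own; it deliberately skips certificate validation, since local devices rarely carry trusted certs anyway):

```python
import socket
import ssl

def speaks_tls(host, port=443, timeout=3.0):
    """Attempt a TLS handshake with a device. Returns True if the
    handshake succeeds, False if the port answers in plaintext (as my
    Shellys do on HTTP) or doesn't answer at all. Cert validation is
    intentionally disabled -- this only tests for encrypted transport."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.SSLError):
        return False
```

Run it across the IoT subnet and the result is sobering: most of those little embedded web servers answer in the clear.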
Out of curiosity, I asked this question earlier today and got a response from Paulus just before publishing this blog post:
For Shelly we use a mix of HTTP (settings, control) and CoAP (state). Neither is encrypted.
I think the way IKEA does CoAP is neat. Bottom of gateway is a key / QR that can be used to generate an access key. Then use DTLs for encryption.
Reading through the responses to my original question, the resounding feedback was that when it comes to IoT communicating inside home networks, people weren't too concerned about a lack of transport layer encryption. I can understand that conclusion insofar as the LAN is a much lower risk part of the whole IoT ecosystem. I'd like everything to be sent over a secure transport layer (perhaps per Paulus' IKEA suggestion), and certainly any devices acting as clients communicating with external servers should be doing this already, but inevitably, there will be gaps.
There are, however, some very practical, very common-sense things we can do right now to improve the security posture of our IoT things so let's finish up by talking about those.
Security goes well beyond just digital controls, indeed there are many ways we can influence our IoT security posture simply by adjusting the way we think about the devices. I want to break this down into three common-sense approaches:
1. You cannot lose what you do not have: This is an old adage often used in a digital privacy context and it's never been truer than with IoT. Headlines such as Stranger hacks into baby monitor, tells child, 'I love you' are a near daily occurrence and there's a sure way to ensure a hacker doesn't end up watching and talking to your child: don't put a camera with a mic and speaker in their bedroom! Right about now, a small subset of my readership is getting ready to leave angry comments about "victim blaming" and I'll ask them to start with a blog post from almost 5 years ago titled Suggesting you shouldn’t digitise your sexual exploits isn’t “victim blaming”, it’s common-sense. The point in all these cases isn't to say someone is "wrong" for using a connected baby monitor or making kinky home movies, rather that doing so increases the chances of an otherwise private event being seen by others. Do your own assessment on whether you're willing to take that risk or not.
As it relates to my own approach to IoT, all cameras I have point at places that are publicly observable. (The only exceptions are inside my garage and my boatshed, both places where nothing happens I wouldn't be comfortable with the public seeing.) My worst-case scenario if my cameras are pwned isn't the exposure of my kids to strangers or an intimate moment with my partner, it's only publicly observable activity.
UniFi Protect lets me draw "privacy zones" onto the cameras: those black boxes are rendered onto all video the camera captures and shield both the master bedroom and the pool from view should someone obtain the video. If an adversary gained full control of the UniFi Protect server then yes, they could remove the privacy zones, but that would only apply to future videos and only until I cottoned on to something being wrong.
2. Be selective with what you connect: This whole journey began with me trying to automate my garage door, which I eventually did. But I actually have 2 garage doors with one leading to what could more appropriately be called a carport (a covered area inside the property boundary) and the other then leading inside the house. It looks like this:
I've divided this into risk zones and the reason the upper area is low risk is that it's easily accessible. There's a wall around the house behind those green palms, but it can be jumped. That door is internet connected and it allows me to remotely open it so couriers can drop off packages or I can easily ride my bike back inside the property boundary (I just ask Siri on my watch to open it up). The higher risk zone contains things like bikes, wakeboards and life vests (not to mention my beer fridge!). I've not connected that door as it presents a greater risk and provides less upside than the external door, so it's harder to justify being IoT enabled.
The point here is that I'm effectively doing my own little risk assessment on each IoT device, and you can too. What upside does it bring you? What downside does it present? How likely is that to happen? And finally, what's the impact if it does? Easy 🙂
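Those four questions can be sketched as a simple scoring exercise. To be clear, this is my own illustrative toy model, not a formal methodology from the post; the scores and the `worth_connecting` helper are assumptions picked purely to mirror the two garage doors:

```python
# A toy per-device IoT risk assessment following the four questions above:
# what's the upside, how likely is a compromise, and what's the impact if
# it happens? All scores (1-5) are illustrative assumptions, not measurements.

def worth_connecting(upside: int, likelihood: int, impact: int) -> bool:
    """Connect the device only if the benefit outweighs the expected risk."""
    risk = likelihood * impact   # expected downside, range 1-25
    return upside * 5 > risk     # scale upside (1-5) onto the same range

# Hypothetical scores loosely mirroring the two garage doors:
# the external door is in an easily accessible zone anyway (low impact),
# while the internal door leads into the house (high impact, less upside).
external_door = worth_connecting(upside=4, likelihood=2, impact=2)
internal_door = worth_connecting(upside=2, likelihood=2, impact=5)

print(external_door, internal_door)  # the external door clears the bar
```

The exact numbers don't matter; the point is forcing yourself to weigh upside against likelihood and impact per device rather than connecting everything by default.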
3. Choose who to trust: I'll give you a real-world example here, starting with this tweet:
Helping some friends out who are looking for a connected doorbell, what's the best option these days? Main thing is support for a chime box inside the house (also required) plus the usual video and audio to mobile devices.
The back story to this was that I'd just installed Ubiquiti's AmpliFi ALIEN unit at this friend's house and in doing so, set up a brand new network with new SSID and subsequently set about migrating all the connected things to the new one. Everything came over just fine... except the doorbell. I have absolutely no idea who made that doorbell; it seemed to be a cheap Chinese model with very little documentation and no clear way to join a new network. I was stumped and the doorbell was kinda crap anyway thus the tweet above.
Now, if I had to choose between trusting that old doorbell and the ones suggested in that thread (namely Ring, Nest and Ubiquiti), it's an easy decision. These companies invest serious dollars in security, in just the same way Amazon does with their Echo devices. Why mention Echo? Because people often ask if I trust them given I have one in each kid's room. Now that's a binary question with a non-binary response, because trust is not as simple as "completely" or "not at all"; it's much more nuanced. What I know about each of the multi-billion dollar tech companies mentioned here is that they have huge budgets for this stuff and are the most likely not just to get it right in the first place, but to deal with it responsibly if they get it wrong.
But a caveat: Nissan is also a huge company with massive budgets and they made an absolute mess of the security around their car. It doesn't surprise me that CloudPets and TicTocTrack made the mistakes they did because they're precisely the sorts of small organisations shipping cheap products that I expect to get this wrong, but clearly organisation size alone is not a measure of security posture.
There will be those who respond to this blog post along the lines of "well, you really don't need any of these things connected anyway, why take the risk?" There's an easy answer: because it improves my life. In the final part of this series I'm going to do video walkthroughs of a whole bunch of different ways in which I benefit from my connected environment, showing how each connected thing operates. I like my IoT devices, and in order to reap the benefits they provide, I'm willing to wear some risk.
Coming back to a recurring theme from this series, the security situation as it relates to normal everyday people using IoT devices isn't great, and I've given plenty of examples of why that's the case. I also don't believe the approaches taken by enthusiasts solve the problem in any meaningful way, namely custom firmware, blocking device updates and creating VLANs. It's fiddly, time-consuming, fraught with problems and, most importantly, completely out of reach for the huge majority of people using IoT devices. We need to do better as an industry: better self-healing devices, better zero trust networks and better interoperability.
Finally, and per the last couple of blogs in the series, Scott and I will be talking live about all things IoT (and definitely drilling much deeper into the security piece given the way both of us make a living), later this week via this scheduled broadcast 👇