In the traditional parlance of infosec, we've been taught repeatedly that the C-I-A triad (confidentiality, integrity, availability) must be balanced in accordance with the needs of the business. This concept is foundational to all of infosec, ensconced in standards and certification exams and policies. Yet, today, it's essentially wrong, and moreover isn't a helpful starting point for a security discussion.
The simple fact is this: availability is king, while confidentiality and integrity are secondary considerations with no default priority between them. We've reached this point thanks in large part to the cloud and the advent of utility computing. That is, we now assume uptime and availability will be near-optimal by default, and thus we don't need to think about them much, if at all. And when we do think about them, they fall under the domain of site reliability engineering (SRE) rather than being a security function. And that's a good thing!
If you remove availability from the C-I-A triad, you're then left with confidentiality and integrity, which can be boiled down to two main questions:
1) What are the data protection requirements for each dataset?
2) What are the anti-corruption requirements for each dataset and environment?
In the first case you quickly go down the data governance path (inclusive of data security), which must factor in requirements for control, retention, protection (including encryption), and masking/redaction, to name a few. From a "big picture" perspective, we can then view data protection more clearly through an inforisk lens, and interestingly, it becomes much easier to drill down in a quantitative risk analysis process to evaluate the overall exposure to the business.
As for anti-corruption (integrity) requirements, this is where traditional security practices enter the picture: ensuring systems are reasonably hardened against compromise, and appsec testing to protect the application itself. It then dovetails back into data governance to determine the potential business impact of data corruption, whether that means fraudulent orders/transactions; tampering with data, such as a student changing grades or an employee changing pay rates; or data corruption in the form of injection attacks.
What's particularly interesting about integrity is applying it to cloud-based systems and viewing it through a cost control lens. Consider, if you will, a cloud resource being compromised in order to run cryptocurrency mining. That's a violation of system integrity, which in turn may translate into sizable opex burn due to unexpected resource utilization. This example, of course, once again highlights how you can view things through a quantitative risk assessment perspective, too.
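To make the quantitative angle concrete, here is a minimal sketch of how exposure like the cryptomining example could be quantified using the standard annualized loss expectancy (ALE) formula. The dollar figures and frequencies below are hypothetical placeholders for illustration, not data from this article:

```python
# Minimal quantitative risk sketch: annualized loss expectancy (ALE).
# ALE = SLE * ARO, where SLE (single loss expectancy) is the cost of one
# incident and ARO (annualized rate of occurrence) is how many times per
# year we expect it to happen. All figures below are hypothetical.

def ale(single_loss_expectancy: float, annual_rate: float) -> float:
    """Annualized loss expectancy in dollars per year."""
    return single_loss_expectancy * annual_rate

# Hypothetical scenario: a compromised cloud resource burns ~$20,000 in
# unexpected opex on cryptomining before detection, and we estimate such
# a compromise occurring 0.5 times per year.
crypto_mining_ale = ale(20_000, 0.5)
print(f"Cryptomining exposure: ${crypto_mining_ale:,.0f}/year")
# -> Cryptomining exposure: $10,000/year
```

Even a toy model like this gives executives a dollar-denominated figure to weigh against the cost of mitigations, which is the whole point of shifting the conversation to quantitative risk.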
At the end of the day, C-I-A are still useful concepts, but we're beyond the point of thinking about them in balance. In a utility compute model, availability is assumed to approach 100%, which means it can largely be left to operations teams to own and manage. Even considerations like DDoS mitigation frequently fall to ops teams these days rather than to security. Making this shift allows one to talk more easily about inforisk assessment and management within each remaining vertical (confidentiality and integrity). That, in turn, makes it much easier to apply quantitative risk analysis and to articulate business exposure to executives in order to manage the risk portfolio more clearly.
(PS: Yes, I realize business continuity is often lumped under infosec, but I would challenge people to think about this differently. In many cases, business continuity is a standalone entity that blends together a number of different areas. The overarching point here is that the traditional status quo is a failed model. We must start doing things differently, which means flipping things around to identify better approaches. SRE is a perfect example of what happens when you move to a utility computing model and then apply systems and software engineering principles. We should be looking at other ways to change our perspective rather than continuing to do the same old broken things.)
Now that the 2018 Winter Olympics are over, we're learning who was responsible for hacking the games' systems... and the culprit won't surprise you at all. US intelligence officials speaking anonymously to the Washington Post claimed that spies at Russia's GRU agency had compromised up to 300 Olympics-related PCs as of early February, hacked South Korean routers in January and launched new malware on February 9th, the day the Olympics began. They even tried to make it look like North Korea was responsible by using North Korean internet addresses and "other tactics," according to the American sources.
Source: Washington Post
The internet, together with small and inexpensive digital cameras, has made us aware of the potential privacy concerns of sharing digital photos. Mobile phone cameras have escalated this development even further. Many people today carry a camera with the ability to publish photos and videos on the net almost in real time. Some people can handle that and act responsibly; some can't. Defamatory pictures are constantly posted on the net, either by mistake or intentionally. But that's not all. Now it looks like the next revolution that will rock the privacy scene is around the corner: Google Glass.
Having a camera in your phone has lowered the threshold for taking photos tremendously. It's always with you and ready to snap. But you still have to take it out of your pocket and aim it at your subject. The "victim" has a fair chance of noticing that you are taking photos, especially at close range.
Google Glass is a smartphone-like device integrated into a piece of headgear. You wear it all the time, just like ordinary glasses. The screen is a transparent piece in your field of view that shows output as an overlay on top of what's in front of you. No keyboard, mouse or touchscreen; you control it by voice commands. Cool, but here comes the privacy concern. Two of the voice commands are "ok, glass, take a picture" and "ok, glass, record a video". Yes, that's right. It has a camera too.
Imagine a world where Google Glasses are as common as mobile phones are today. Every time you talk to someone, you have a camera and microphone pointed at you, with no way of knowing whether it is recording. You have to take this into account when deciding what you say, or run the risk of having an embarrassing video on YouTube within minutes. It's a little like the old movie RoboCop, where the metallic law enforcement officer recorded constantly and the material could be used as evidence in court. Do we want a world like that? A world where we are all RoboCops?
Most countries have fairly clear and sensible legislation about taking photos. It is generally OK to photograph in public places, and people who appear there must accept being photographed. Private places have stricter rules, and there are also separate rules about publishing and commercial use of a photo. This is all fine, and it applies to any device, including Google Glass. The other side of the coin is people's awareness of these laws, or rather the lack thereof. In practice we have a law that very few care about, plus a varying degree of common sense. Common sense does indeed prevent many problems, but not all. It may work tolerably well today, but will it be enough if the glasses become common?
I think that if Google Glass becomes a hit, it will force us to rethink our relationship to photo privacy, both as individuals and as a society. There will certainly be problems if 90% of the population wear glasses yet still walk around with only a rudimentary understanding of how the law restricts photography. Some will suffer because they broke the law unintentionally, and many will suffer because of the published content.
I hope that our final way of dealing with the glasses isn't the solution that the 5 Point Cafe in Seattle came up with. They became the first to ban Google Glass. It is just the same old primitive reaction that has followed so many new technologies. Needless to say, much fine technology would be unavailable if that were our only way of dealing with new things.
But what will happen? That is no doubt an interesting question. My guess is that there will be a compromise. Camera users will gradually become more aware of the boundaries the law sets. Many people will also need to redefine their privacy expectations, as we have to adapt to a world with more cameras. That might be a good thing if the fear of being recorded makes us more thoughtful and polite toward others. It's very bad if it makes it harder to mingle in a relaxed way. Many questions remain to be answered, but one thing is clear: Google Glass will definitely be a hot topic when discussing privacy.
PS. I have an app idea for the Glass. Remember the meteorite in Russia in February 2013? It was captured by numerous car cameras, as drivers in Russia commonly use constantly recording cameras as a measure against fraudulent accusations. What if you had the same functionality on your head all the time? There would always be a video of the last hour of your life, automatically recorded and ready to get you out of tricky situations. Or to make sure you don't miss any juicy moments…
Photo by zugaldia @ Flickr