Category Archives: Culture

NBlog Sept 8 – chew before swallowing

The Global State of Online Digital Trust is a typical vendor-sponsored piece, a white paper (= marketing promotion in the guise of a 'survey') prepared by Frost & Sullivan for CA Technologies.

I say 'typical' in that they have disclosed hardly any information about the survey method and sample. The press release instructs us to see the report for "Full survey methodology details" but, unless I'm blind, it looks to me as if someone either 'forgot' to write the materials-and-methods section or casually neglected to include it in the published report. Oh dear.

A CA marketing VP called it "a survey of 1,000 consumers, 350 cybersecurity professionals and 325 business executives from all over the world" whereas the press release referred to it as "The global online survey of 990 consumers, 336 security professionals and 324 business executives across 10 countries" - the VP's figures, totalling 1,675 against the press release's 1,650, look suspiciously like the real numbers generously rounded up.

We can only guess at how they assigned respondents to the three categories - who would not qualify as a 'consumer'? Wouldn't a CISO fall into all three groups? In the report, numbers next to the graphs appear to indicate sample sizes of up to about 990.

Last time I checked, there were rather more than 10 countries in the world aside from USA, BRA, UK, FRA, GER, ITA, AUS, IND, JPN and CHN as listed in the report. If I'm interpreting those abbreviations correctly, that's well short of "all over the world".

If indeed the survey was online, that rather suggests the sample consisted only of people from the ten countries who were happy to answer an online survey - which itself implies a degree of trust in online security, as well as a willingness to respond to a vendor-sponsored survey.

It is unclear whether or how the report's conclusions relate to the survey findings ... and they are somewhat predictable given the report sponsor's commercial interests:
"CULTIVATE A CULTURE OF SECURITY Implement data protection policies that are in accordance with the world’s strictest data privacy regulations. Ensure company-wide familiarity with security policies, including among non-technical staff to reduce the risk of data breaches. 
START AT THE TOP Too many business executives see security initiatives as a negative return on investment. Alert the C-Suite to the tangible business impacts of a breach and a loss of consumer trust. 
COVER YOUR BASES Consumers consider both social and technical factors when determining whether to trust an organization; be sure that your organization has the technical foundation in place to mitigate attacks and have a response team ready to minimize damage to consumer trust in the event of a breach. 
KEEP IT SIMPLE Clear communication from organizations around policies and data handling practices is critical for building trust. Far too many organizations overestimate the degree to which consumers can easily manage their personal data online. Present your policies in simple language, and provide important details without overwhelming the consumer."
So they evidently equate "a culture of security" with data protection, data privacy and data breaches. Spot the common factor. A similar bias towards privacy law compliance and the protection of "customer data" is evident in all four paragraphs. That is an important issue, I agree, along with "cybersecurity" (an undefined term ... but I guess they mean IT security) but what about all the rest of information security: trade secrets, intellectual property, business continuity, physical and procedural security, information integrity, blah blah blah?

I freely admit to being heavily prejudiced in favour of both cultural development and management-level security awareness, but their emphasis on breach impacts and consumer trust once again betrays a myopic focus on privacy breach incidents, while the conclusion about return on investment seems very suspect to me. I wonder whether the survey question/s in that area were unambiguous enough to be interpreted in the same way by all the respondents. Or are the reported differences between the groups of respondents merely indicative of their distinct perspectives and assumptions? Did they even face the same questions? We can't tell, since they chose not to disclose the survey questions.

The report introduces the term "Digital trust index". Sounds great, right? A metric concerning trust in, errr, digits? A percentage value relative to, um, what exactly? Oh let me guess, relative to the score conjured out of the air for this, the first report. And unfortunately for the sponsors, the term "Digital Trust Index" is already in use elsewhere.

Overall, a disappointing and essentially pointless read, like most other commercially-sponsored and heavily-promoted "surveys" I have read in my career, with few exceptions.

Clearly, I'm a slow learner, stubborn as an old boot. Venting my spleen through this blog is immensely helpful though, along with the vain hope that you might perhaps be persuaded to take a more critical look at the next "survey" that plops onto your screen. Chew it over rather than swallowing whole.

The Quest for Optimal Security

There's no shortage of guidance available today about how to structure, build, and run a security program. Most of it comes from a standpoint of inherent bias, whether that's promoting a product class or a specific framework/standard, or aligning with particular technologies (legacy/traditional infrastructure, cloud, etc.). Given all the competing advice out there, I often find it hard to suss out exactly what one should be doing. As someone actively on the job hunt, this reality is even more daunting because job descriptions typically contain a smattering of biases, confirmed or contradicted through interview processes. But, I digress...

At the end of the day, the goal of your security program should be to chart a path to an optimal set of capabilities. What exactly constitutes "optimal" will in fact vary from org to org. We know this is true because otherwise there would already be a settled "best practice" framework to which everyone would align. That said, there are a lot of common pieces that can be leveraged in identifying the optimal program attributes for your organization.

The Basics

First and foremost, your security program must account for basic security hygiene, which creates the basis for arguing legal defensibility; which is to say, if you're not doing the basics, then your program can be construed as insufficient, exposing your organization to legal liability (a growing concern). That said, what exactly constitutes "basic security hygiene"?

There are a couple of different ways to look at basic security hygiene. For starters, you can look at it by technology grouping (a minimal sketch follows the list):
- Network
- Endpoint
- Data
- Applications
- IAM
- etc.
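
As a very rough illustration, here's a minimal Python sketch of that grouping - the groupings mirror the list above, while the example controls are my own, purely illustrative:

    # Illustrative only: basic hygiene controls keyed by technology grouping.
    HYGIENE_BASELINE = {
        "Network":      ["egress filtering", "segmentation", "firewall rule review"],
        "Endpoint":     ["patching cadence", "anti-malware", "disk encryption"],
        "Data":         ["classification", "backup and restore testing"],
        "Applications": ["secure SDLC gates", "dependency updates"],
        "IAM":          ["MFA for remote access", "joiner/mover/leaver process"],
    }

    def coverage_gaps(implemented: set) -> dict:
        """Return, per grouping, the baseline controls not yet implemented."""
        return {group: [c for c in controls if c not in implemented]
                for group, controls in HYGIENE_BASELINE.items()}

    # Example: a young program with only two controls in place so far.
    print(coverage_gaps({"anti-malware", "MFA for remote access"}))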

However, listing out specific technologies can become cumbersome, plus it doesn't necessarily lend itself well to thinking about security architecture and strategy. A few years ago I came up with an approach that looks like this:

[Image: Ben-matrix.png]

More recently, I learned of the OWASP Cyber Defense Matrix, which takes a similar approach to mine above but mixes it with the NIST Cybersecurity Framework.
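
To make that concrete, here's a minimal sketch of the CDM grid as I understand it - five asset classes crossed with the five NIST CSF functions. The cell entries are my own illustrative examples, not CDM canon:

    # The Cyber Defense Matrix crosses asset classes with NIST CSF functions.
    ASSET_CLASSES = ["Devices", "Applications", "Networks", "Data", "Users"]
    CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

    # Start with an empty grid, then map in the capabilities you actually have.
    matrix = {(a, f): [] for a in ASSET_CLASSES for f in CSF_FUNCTIONS}
    matrix[("Devices", "Protect")].append("endpoint hardening baseline")
    matrix[("Data", "Recover")].append("tested, restorable backups")

    # Empty cells highlight where the program may be thin.
    gaps = [cell for cell, caps in matrix.items() if not caps]
    print(f"{len(gaps)} of {len(matrix)} cells have no mapped capability yet")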


Overall, I like the simplicity of the CDM approach as I think it covers sufficient bases to project a legally defensible position, while also ensuring a decent starting point that will cross-map to other frameworks and standards depending on the needs of your organization (e.g., maybe you need to move to ISO 27001 or complete a SOC 1/2/3 audit).

Org Culture

One of the oft-overlooked, and yet insanely important, aspects of designing an approach to optimal security for your organization is understanding that it must exist completely within the organization's culture. After all, the organization is composed of people doing work, and pretty much everything you're looking to do will have some degree of impact on those people and their daily lives.

[Image: Ben-pyramid.png]

As such, when you think about everything, be it basic security hygiene, information risk management, or even behavioral infosec, you must first consider how it fits with org culture. Specifically, you need to look at the values of the organization (and its leadership), as well as the behaviors that are common, advocated, and rewarded.

If what you're asking people to do goes against the incentive model within which they're operating, then you must either better align with those incentives or find a way to change them so that they encourage preferred behaviors. We'll talk more about behavioral infosec below, so for this section the key takeaway is this: organizational culture creates the incentive model(s) upon which people make decisions, which means you absolutely must optimize for that reality.

For more on my thoughts around org culture, please see my post "Quit Talking About "Security Culture" - Fix Org Culture!"

Risk Management

Much has been said about risk management over the past decade-plus, whether it be PCI DSS advocating a "risk-based approach" to vulnerability management, updates to the NIST Risk Management Framework, advocacy around ISO 27005/31000, or proponents of a quantitative approach (such as the FAIR Institute).
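
For a flavor of what the quantitative camp is driving at, here's a toy Monte Carlo in the spirit of FAIR - annualized loss simulated as loss event frequency times loss magnitude. The distributions and numbers are invented for illustration; real FAIR analyses are considerably richer:

    import random

    # Toy FAIR-flavored simulation: annual loss ~ event frequency x magnitude.
    # random.triangular(low, high, mode); all parameters are invented.
    def simulate_annual_loss(trials: int = 10_000) -> list:
        losses = []
        for _ in range(trials):
            frequency = random.triangular(0, 4, 1)                  # events/year
            magnitude = random.triangular(10_000, 500_000, 50_000)  # $ per event
            losses.append(frequency * magnitude)
        return losses

    losses = sorted(simulate_annual_loss())
    print(f"median annual loss: ${losses[len(losses) // 2]:,.0f}")
    print(f"95th percentile:    ${losses[int(len(losses) * 0.95)]:,.0f}")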

The simple fact is that, once you have a reasonable base set of practices in place, almost everything else should be driven by a risk management approach. However, what this means within the context of optimal security can vary substantially, not least due to staffing challenges. If you are a small-to-medium-sized business, then your reality is likely one where, at best, you have a security leader of some sort (CISO, security architect, security manager, whatever), maybe a couple of security engineers (doers), maybe someone for compliance, and then most likely a lot of outsourcing (MSP/MSSP/MDR, DFIR retainer, auditors, contractors, consultants, etc.).

Risk management is not your starting point. As noted above, there are a number of security practices that we know must be done, whether that be securing endpoints, data, networks, access, or what-have-you. Where we start needing risk management is when we get beyond the basics and try to determine what else is needed. As such, the crux of optimal security is having an information risk management capability, which means your overall practice structure might look like this:

[Image: Ben-pyramid2.png]

However, don't get wrapped around the axle too much on how the picture fits together. Instead, be aware that your basics come first (out of necessity), then comes some form of risk mgmt., which will include gaining a deep understanding of org culture.

Behavioral InfoSec

The other major piece of a comprehensive security program is behavioral infosec, which I have talked about previously in my posts "Introducing Behavioral InfoSec" and "Design For Behavior, Not Awareness." In these posts, and other places, I talk about the imperative to key in on organizational culture, and specifically look at behavior design as part of an overall security program. However, a couple of key differences set this approach apart from traditional security awareness programs.
1) Behavioral InfoSec acknowledges that we are seeking preferred behaviors within the context of organizational culture, which is the set of values and behaviors promoted, supported, and rewarded by the organization.
2) We move away from basic "security awareness" programs like annual CBTs toward practices that seek measurable, lasting change in behavior that provide positive security benefit.
3) We accept that all security behaviors - whether it be hardening or anti-phishing or data security (etc) - must either align with the inherent cultural structure and incentive model, or seek to change those things in order to heighten the motivation to change while simultaneously making it easier to change.

To me, shifting to a behavioral infosec mindset is imperative for achieving success with embedding and institutionalizing desired security practices in your organization. Never is this more apparent than in the Fogg Behavior Model.

In words, the model says that behavior happens when three things come together: motivation, ability, and a trigger (prompt or cue). Diagrammed, motivation is charted on the Y-axis from low to high, ability on the X-axis from "hard to do" to "easy to do," and a prompt (or trigger) falls either to the left or right of the "line of action" - meaning the prompt itself is less important than one's motivation and the ease of the action.
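
A crude way to express that in code - the multiplicative relationship and the threshold are my own simplification of the "line of action," not an official formula:

    # Simplified Fogg Behavior Model: a prompt only converts into action when
    # motivation x ability clears the "line of action". Threshold is illustrative.
    LINE_OF_ACTION = 0.25

    def behavior_occurs(motivation: float, ability: float, prompted: bool) -> bool:
        """motivation and ability normalized: 0.0 = low/hard, 1.0 = high/easy."""
        return prompted and (motivation * ability) >= LINE_OF_ACTION

    # A motivated user asked to do something hard may still not act...
    print(behavior_occurs(motivation=0.9, ability=0.2, prompted=True))  # False
    # ...while a modestly motivated user will act, if the action is easy.
    print(behavior_occurs(motivation=0.4, ability=0.9, prompted=True))  # True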

We consistently fail in infosec by not properly accounting for incentive models (motivation) or by asking people to do something that is, in fact, too difficult (ability; that is, you're asking for a change that is hard, perhaps because it makes it difficult to do their job, or is just challenging in general). In all things, when we think about information risk mgmt. and the kinds of changes we want to see in our organizations beyond basic security hygiene, it's imperative that we also understand the cultural impact and how org culture will support, maybe even reward, the desired changes.

Overall, I would argue that my original pyramid diagram ends up being more useful inasmuch as it encourages us to think about info risk mgmt. and behavioral infosec in parallel and in conjunction with each other.

Putting It All Together

All of these practice areas - basic security hygiene, info risk mgmt, behavioral infosec - ideally come together in a strategic approach that achieves optimal security. But what does that really mean? What are the attributes, today, of an optimal security program? There are lessons we can learn from agile, DevOps, ITIL, Six Sigma, and various other related programs and research, ranging from Deming to Senge and everything in between. Combined, "optimal security" might look something like this:


Conscious
   - Generative (thinking beyond the immediate)
   - Mindful (thinking of people and orgs in the whole)
   - Discursive (collaborative, communicative, open-minded)

Lean
   - Efficient (minimum steps to achieve desired outcome)
   - Effective (do we accomplish what we set out to do?)
   - Managed (haphazard and ad hoc are the enemy of lasting success)

Quantified
   - Measured (applying qualitative or quantitative approaches to test for efficiency and effectiveness)
   - Monitored (not just point-in-time, but watched over time)
   - Reported (to align with org culture, as well as to help reform org culture over time)

Clear
   - Defined (what problem is being solved? what is the desired outcome/impact? why is this important?)
   - Mapped (possibly value stream mapping, possibly net flows or data flows, taking time to understand who and what is impacted)
   - Reduced (don't bite off too much at once, acknowledge change requires time, simplify simplify simplify)

Systematic
   - Systemic understanding (the organization is a complex organism that must work together)
   - Automated where possible (don't install people where an automated process will suffice)
   - Minimized complexity (perfect is the enemy of good, and optimal security is all about "good enough," so seek the least complex solutions possible)


Obviously, much, much more can be said about the above, but that's fodder for another post (or a book, haha). Instead, I present the above as a starting point for a conversation to help move everyone away from some of our traditional, broken approaches. Now is the time to take a step back and (re-)evaluate our security programs and how best to approach them.

NBlog August 6 – twins or triplets?


The next awareness and training module trundling into sight on the NoticeBored conveyor belt concerns "outsider threats" - principally malicious threats to corporate information that originate externally, coming from outside the organization's notional boundary.  

It's the obvious follow-up, a twin for August's module on "insider threats". This month's scope is reasonably straightforward except that once again we face the issue of people and organizations spanning organizational boundaries - contractors, consultants, temps, interns, ex-employees etc. plus outsiders colluding with, socially engineering, manipulating, fooling or coercing insiders. Maybe there's enough there for a further awareness module at some future point, turning the twins into triplets!

For now we'll stick to Plan A, focusing on threatening outsiders, of which there are many - quite a variety, in fact. For completeness, we should probably mention benign, accidental or incidental outside threats, and we'll definitely pick up on vulnerabilities and impacts in the risk analysis, as well as exploring ways to avoid or mitigate outsider threats.

Leaning back from the keyboard, it occurs to me that there is no shortage of relevant issues here for awareness and training purposes - the very opposite in fact. Even at this early stage I'm already thinking about narrowing the scope. 

Traditional IT/cybersecurity awareness approaches would barely have touched these topics, focusing purely on technology-related threats such as hackers. Broadening our perspective makes NoticeBored a more comprehensive service and, we trust, more interesting, engaging and thought-provoking, and more valuable. We'll bring up hacking, of course, and a whole lot more besides.

If your security awareness program consists of a few dog-eared posters and dire warning notices along the lines of "Comply with the policies ... or face the consequences", don't be surprised if bored stiff workers simply tune out. "La la la, can't hear you, don't see you ...". Worse still, the ones who pay attention find out about a narrow strip of a long, long tapestry. What are the chances of that strip covering all they ought to know, everything that matters? Not good.


Quit Talking About "Security Culture" – Fix Org Culture!

I have a pet peeve. Ok, I have several, but nonetheless, we're going to talk about one of them today. That pet peeve is security professionals wasting time and energy pushing a "security culture" agenda. This practice of talking about "security culture" has arisen over the past few years. It's largely coming from security awareness circles, though not always (looking at you, anti-phishing vendors intent on selling products without the means and methodology to make them truly useful!).

I see three main problems with references to "security culture," not the least of which being that it continues the bad old practices of days gone by.

1) It's Not Analogous to Safety Culture

First and foremost, you're probably sitting there grinding your teeth saying "But safety culture initiatives work really well!" Yes, they do, but here's why: safety culture can - and often does - achieve a zero-incident outcome. That is to say, you can reduce safety incidents to ZERO. That's excellent for when you're around construction sites or going to the hospital. However, I have very bad news for you: information (or cyber or computer) security will never get to zero. Until the entirety of computing is revolutionized, removing humans from the equation, you will never prevent all incidents. Just imagine your "security culture" sign by the entrance to your local office environment, forever emblazoned with "It Has Been 0 Days Since Our Last Incident." That's not healthy or encouraging. That sort of thing would be outright demoralizing!

Since you can't be 100% successful through preventative security practices, you must shift mindset to a couple of things: better decisions and resilience. Your focus, which most of your "security culture" programs are trying to address (or should be), is helping people make better decisions. Well, I should say, some of you - the few, the proud, the quietly isolated - have this focus. But at the end of the day/week/month/year you'll find that people - including well-trained and highly technical people - will still make mistakes or bad decisions, which means you can't bank on "solving" infosec through better decisions.

As a result, we must still architect for resiliency. We must assume something will break down at some point, resulting in an incident. When that incident occurs, we must be able to absorb the fault and continue to operate despite degraded conditions, while recovering to "normal" as quickly, efficiently, and effectively as possible. Note, however, that this focus on resiliency doesn't really align well with the "security culture" message. It's akin to telling people "Safety is really important, but since we have no faith in your ability to be safe, here's a first aid kit." (Yes, that's a bit harsh, to prove a point, which hopefully you're getting.)

2) Once Again, It Creates an "Other"

One of the biggest problems with a typical "security culture" focus is that it once again creates the wrong kind of enablement culture. It says "we're from infosec and we know best - certainly better than you." Why should people work to make better decisions when they can just abdicate that responsibility to infosec? Moreover, since we're trying to optimize resiliency, people can go ahead and make mistakes, no big deal, right?

Part of this is ok, part of it is not. On the one hand, from a DevOps perspective, we want people to experiment, be creative, be innovative. In this sense, resilience and failure are a good thing. However, note that in DevOps, the responsibility for "fail fast, recover fast, learn fast" is on the person doing the experimenting!!! The DevOps movement is diametrically opposed to fostering enablement cultures where people (like developers) don't feel the pain from their bad decisions. It's imperative that people have ownership and responsibility for the things they're doing. Most "security culture" dogma I've seen and heard works against this objective.

We want enablement, but we don't want enablement culture. We want "freedom AND responsibility," "accountability AND transparency," etc, etc, etc. Pushing "security culture" keeps these initiatives separate from other organizational development initiatives, and more importantly it tends to have at best a temporary impact, rather than triggering lasting behavioral change.

3) Your Goal Is Improving the Organization

The last point here is that your goal should be to improve the organization and the overall organizational culture. It should not be focused on point-in-time blips that come and go. Additionally, your efforts must be aimed toward lasting impact and not be anchored around a cult of personality.

As a starting point, you should be working with org dev personnel within your organization, applying behavior design principles. You should be identifying what the target behavior is, then working backward in a piecemeal fashion to determine whether that behavior can be evoked and institutionalized through one step or multiple steps. It may even take years to accomplish the desired changes.

Another key reason for working with your org dev folks is that you need to ensure any "culture" work you're pursuing is fully aligned with other org culture initiatives. People can only assimilate so many changes at once, so it's often better to align your work with efforts that are already underway in order to build reinforcing patterns. The worst thing you can do is design for a behavior that conflicts with other behavior and culture designs underway.

All of this is to underline the key point that "security culture" is the wrong focus, and can in some cases even detract from other org culture initiatives. You want to improve decision-making, but you have to do this one behavior at a time, and glossing over it with the "security culture" label is unhelpful.

Lastly, you need to think about your desired behavior and culture improvements in the broader context of organizational culture. Do yourself a favor and go read Laloux's Reinventing Organizations for an excellent treatise on a desirable future state (one that aligns extremely well with DevOps). As you read Laloux, think about how you can design for security behaviors in a self-managed world. That's the lens through which you should view things, and this is where you'll realize a "security culture" focus is at best distracting.

---
So... where should you go from here? The answer is three-fold:
1) Identify and design for desirable behaviors
2) Work to make those behaviors easy and sustainable
3) Work to shape organizational culture as a whole

Definitionally, here are a couple of starters for you...

First, per Fogg, behavior happens when three things come together: Motivation, Ability (how hard or easy it is to do the action), and a Trigger (a prompt or cue). When Motivation is high and the action is easy, it doesn't take much prompting to trigger it. However, if the action is difficult, or the motivation simply isn't there, you must start looking for ways to address those factors in order to achieve the desired behavioral outcome once triggered. This is the basis of behavior design.
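
In code terms (continuing the hedged sketch from the Fogg discussion earlier), behavior design amounts to a triage: when the desired action isn't happening, work out which factor is deficient and address that one first. The thresholds here are invented for illustration:

    # Toy behavior-design triage: suggest which Fogg factor to address first.
    def next_intervention(motivation: float, ability: float, prompted: bool) -> str:
        if not prompted:
            return "add a well-timed prompt at the moment of action"
        if ability < 0.5:
            return "make the action easier (ability) before anything else"
        if motivation < 0.5:
            return "realign incentives so the action is worth doing (motivation)"
        return "behavior should already be occurring; measure before intervening"

    # Example: prompted, motivated people failing at a hard task -> simplify it.
    print(next_intervention(motivation=0.8, ability=0.3, prompted=True))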

Second, when you think about culture, think of it as the aggregate of behaviors collectively performed by the organization, along with the values the organization holds. It may be helpful, as Laloux suggests, to think of the organization as its own person that has intrinsic motivations, values, and behaviors. Eliciting behavior change from the organization is, then, tantamount to changing the organizational culture.

If you put this all together, I think you'll agree with me that talking about "security culture" is anathema to the desired outcomes. Thinking about behavior design in the context of organizational culture shift will provide a better path to improvement, while also making it easier to explain the objectives to non-security people and to get buy-in on lasting change.

Bonus reference: You might find this article interesting as it pertains to evoking behavior change in others.

Good luck!