
NBlog Dec 7 – who owns the silos?


Michael Rasmussen has published an interesting, thought-provoking piece about the common ground linking specialist areas such as risk, security and compliance, breaking down the silos.

“Achieving operational resiliency requires a connected view of risk to see the big picture of how risk interconnects and impacts the organization and its processes. A key aspect of this is the close relationship between operational risk management (ORM) and business continuity management (BCM). It baffles me how these two functions operate independently in most organizations when they have so much synergy.”

While Michael’s perspective makes sense, connecting, integrating or simply seeking alignment between diverse specialist functions is, let's say, challenging. Nevertheless, I personally would much rather collaborate with colleagues across the organization to find and jointly achieve shared goals that benefit the business than perpetuate today's blinkered silos and turf wars. At the very least, I'd like to understand what drives and constrains, inspires and concerns the rest of the organization, outside my little silo.

Once you start looking, there are lots of overlaps, common ground, points of mutual interest and concern. Here are a few illustrative examples:
  • Information risk, information security, information technology: the link is glaringly obvious, and yet usually the second words are emphasized, leaving the first ("information") woefully neglected;
  • Risk and reward, challenge and opportunity: these are flip sides of the same coin that all parts of the business should appreciate. Management is all about both minimizing the former and maximizing the latter. Business is not a zero-sum game: it is meant to achieve objectives, typically profit and other forms of successful outcomes. And yes, that includes information security!
  • Business continuity involves achieving resilience for critical business functions, activities, systems, information flows, supplies, services etc., often by mitigating risks through suitable controls. The overlap between BCM, [information] risk management and [information] security is substantial, starting with the underlying issue of what 'critical' actually means to the organization;
  • Human Resources, Training, Health and Safety and Information Risk and Security are all concerned with people, as indeed is Management. People are tricky to direct and control. People have their own internal drivers and constraints, their biases and prejudices, aims and objectives. Taming the people without destroying the sparks of creativity and innovation that set us apart from the robots is a common challenge ... and, before long, taming those robots will be the next common challenge.

Dig deeper still and you'll also find points of mutual disinterest and conflicts within the organization. Marketing, for instance, yearns to gather and exploit all the information it can possibly obtain on prospective customers, causing sleepless nights for the Privacy Officer. Operations find it convenient or necessary to use shared accounts on shop-floor IT systems in the interest of speed, efficiency, safety etc. whereas Information Risk and Security point out that shared accounts are prohibited under corporate-wide security policies for accountability and control reasons.

You could view the organization as a multi-dimensional framework of interconnections and tensions between its constituent parts, all heading towards roughly the same goal/s (hopefully!) but on occasions pulling any which way at different speeds to get there. To make matters still more complex, the web of influence extends beyond the organization through its proximal contacts to The World At Large. That takes us into the realm of chaos theory, global politics and sociology. 'Nuff said.

All the organization's activities fall under the umbrella of corporate governance: senior managers clarify the organization's grand objectives and optimize its overall performance by establishing and monitoring the corporate structures, hierarchies, strategies, policies and other directives, information flows, relationships, systems, management arrangements etc. necessary to achieve them. Driving alignment and reducing conflicts is part of the governance art. Silos are governance failures.

NBlog Nov 14 – lack of control =/= vulnerability

A common misunderstanding among infosec professionals is that vulnerabilities include the lack or inadequacy of various infosec controls e.g. 'the lack of security awareness training'.

No     No!    NO!

Vulnerabilities are the inherent weaknesses that may be exposed and exploited by the threats, leading to impacts. 

In the lack-of-awareness example, people's naivete and ignorance are inherent weaknesses that may be exposed in various situations (e.g. when someone receives a phishing email) and exploited by threats (the phishers in this case i.e. fraudsters using social engineering techniques to mislead or misdirect victims into clicking dubious links etc.) leading to various impacts (malware infection, identity fraud, blackmail or whatever), hence risk. Naivete and ignorance are vulnerabilities. There are others too, including human tendencies such as greed and situations that distract us from important points, such as security warnings from our email and browser software, or that little voice in our head whispering "Too good to be true!".

Vulnerabilities exist with or without the controls. Sure, well-designed and implemented controls mostly reduce vulnerabilities but the lack of a control is not itself a vulnerability. It's a lack of control, something fundamentally different. 
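To make the distinction concrete, here's a minimal sketch (the class, names and numbers are mine, purely for illustration). The vulnerability is a property of the people or the asset, present whether or not any control exists; adding a control merely reduces the likelihood of exploitation, and removing it never creates a vulnerability - it just leaves the inherent one exposed:

    from dataclasses import dataclass, field

    @dataclass
    class Risk:
        """Toy model: the vulnerability is inherent to the asset/people,
        present whether or not any control exists."""
        threat: str
        vulnerability: str
        impact: str
        likelihood: float = 1.0   # chance the threat exploits the vulnerability
        controls: list = field(default_factory=list)

        def add_control(self, name: str, effectiveness: float) -> None:
            """A control mitigates (reduces likelihood); it does not
            remove the underlying vulnerability."""
            self.controls.append(name)
            self.likelihood *= (1.0 - effectiveness)

    phishing = Risk(
        threat="phishers using social engineering",
        vulnerability="people's naivete and ignorance",
        impact="malware infection, identity fraud, blackmail",
    )
    phishing.add_control("awareness and training", effectiveness=0.6)  # imperfect!
    print(f"Residual likelihood: {phishing.likelihood:.0%}")  # 40% - reduced, not eliminated

Delete the add_control() call and nothing is added to the model: the vulnerability was there all along; only the residual likelihood changes.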

Effective infosec awareness and training compensate for and reduce the naivete and ignorance, in part, and give people the skills and motivation to spot and deal appropriately with threats to information, such as phishing. The control is imperfect, though - we know that - hence the risk is not totally eliminated, merely reduced ('mitigated' in the lingo). The limitations are two-fold: (1) those inherent issues run deep, and (2) the threats are constantly morphing.

I've blogged about this before and was reminded of it yet again today when checking out some 'infosec threat catalogs' on the Web. There are some potentially useful generic infosec threat lists out there but most also list non-vulnerabilities such as lack of awareness, catching my beady eye and distracting me. Those hijack my attention and wind me up, to the point that I refuse to recommend the associated threat catalogs even if those bits are sound. I won't propagate the misconception that lack of control is vulnerability.

Yes, I'm vulnerable too. I'm human. Allegedly. My button is hot.

To complicate matters further, controls can contain or be associated with vulnerabilities. Controls sometimes fail to work as designed. They break or are broken, get bypassed, misconfigured or turned off, or are simply overwhelmed - a genuine concern for phishing given the sheer number and growing variety of attacks. Nevertheless, I maintain that control weaknesses are not vulnerabilities. They are conceptually distinct.

Weak or missing controls result from inherent weaknesses or flaws in our information security practices, which are vulnerabilities. Misunderstanding "vulnerability" is both a vulnerability and a threat, at which point I'm going to leave this top a-spinning as I stagger back to my morning coffee.

NBlog Nov 13 – what to ask in a security gap assessment (reprise)

Today on the ISO27k Forum, a newly-appointed Information Security Officer asked us for "a suitable set of questions ... to conduct security reviews internally to departments".

I pointed him at "What to ask in a gap assessment" ... and made the point that if I were him, I wouldn't actually start with ISO/IEC 27002's security controls as he implied. I'd start two steps back from there:
  1. One step back from the information security controls are the information risks. The controls help address the risks by avoiding, reducing or limiting the number and severity of incidents affecting or involving information: but what information needs to be protected, and against what kinds of incident? Without knowing that, I don't see how you can decide which controls are or are not appropriate, nor evaluate the controls in place.
  2. Two steps back takes us to the organizational or business context for information and the associated risks. Contrast, say, a commercial airline company against a government department: some of their information is used for similar purposes (i.e. general business administration and employee comms) but some is quite different (e.g. the airline is heavily reliant on customer and engineering information that few government departments would use, if at all). Risks and controls for the latter would obviously differ ... but less obviously there are probably differences even in the former - different business priorities and concerns, different vulnerabilities and threats. The risks, and hence the controls needed, depend on the situation.
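To make that ordering concrete - context, then risks, then controls - here's a toy sketch with invented entries for the hypothetical airline; only once the risks are on paper do the gap-assessment questions write themselves:

    # Invented, illustrative entries - the point is the ordering, not the content.
    business_context = "commercial airline"

    information_risks = [
        {"information": "engineering/maintenance records",
         "incident": "integrity failure grounding the fleet",
         "priority": "high"},
        {"information": "customer booking data",
         "incident": "privacy breach",
         "priority": "high"},
    ]

    # Candidate controls per incident type (illustrative, not quoted from 27002):
    candidate_controls = {
        "integrity failure grounding the fleet":
            ["change management", "backups", "access control"],
        "privacy breach":
            ["encryption", "access control", "awareness and training"],
    }

    # Only now do the gap-assessment questions make sense:
    print(f"Gap assessment questions for a {business_context}:")
    for risk in information_risks:
        for control in candidate_controls[risk["incident"]]:
            print(f"[{risk['priority']}] Is '{control}' adequate to protect "
                  f"{risk['information']} against {risk['incident']}?")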
I recommend several parallel activities for a new info sec pro, ISO, ISM or CISO – a stack of homework to get started:
  • First, I find it helps to start any new role deliberately and consciously “on receive” i.e. actively listening for the first few weeks at least, making contacts with your colleagues and sources and finding out what matters to them.  Try not to comment or criticize or commit to anything much at this stage, although that makes it an interesting challenge to get people to open up!  Keep rough notes as things fall into place.  Mind-mapping may help here.
  • Explore the information risks of most obvious concern to your business. Examples:
    • A manufacturing company typically cares most about its manufacturing/factory production processes, systems and data, plus its critical supplies and customers;
    • A services company typically cares most about customer service, plus privacy;
    • A government department typically cares most about ‘not embarrassing the minister’ i.e. compliance with laws, regs and internal policies & procedures;
    • A healthcare company typically cares most about privacy, integrity and availability of patient/client data;
    • Any company cares about strategy, finance, internal comms, HR, supply chains and so on – general business information – as well as compliance with laws, regs and contracts imposed on it - but which ones, specifically, and to what extent?;
    • Any [sensible!] company in a highly competitive field of business cares intensely about protecting its business information from competitors, and most commercial organizations actively gather, assess and exploit information on or from competitors, suppliers, partners and customers, plus industry regulators, owners and authorities;
    • Not-for-profit organizations care about their core missions, of course, plus finances and people and more (they are business-like, albeit often run on a shoestring);
    • A mature organization is likely to have structured and stable processes and systems (which may or may not be secure!) whereas a new greenfield or immature organization is likely to be more fluid, less regimented (and probably insecure!);
  • Keep an eye out for improvement opportunities - a polite way of saying there are information risks of concern, plus ways to increase efficiency and effectiveness – but don’t just assume that you need to fix all the security issues instantly: it’s more a matter of first figuring out your and your organization’s priorities. Being information risk-aligned suits the structured ISO27k approach. It doesn’t hurt to mention them to the relevant people and chat about them, but be clear that you are ‘just exploring options’ not ‘making plans’ at this stage: watch their reactions and body language closely and think on;
  • Consider the broader historical and organizational context, as well as the specifics. For instance:
    • How did things end up the way they are today? What most influenced or determined things? Are there any stand-out issues or incidents, or current and future challenges, that come up often and resonate with people?
    • Where are things headed? Is there an appetite to ‘sort this mess out’ or conversely a reluctance or intense fear of doing anything that might rock the boat? Are there particular drivers or imperatives or opportunities, such as business changes or compliance obligations? Are there any ongoing initiatives that do, could or should have an infosec element to them?
    • Is the organization generally resilient and strong, or fragile and weak? Look for examples of each, comparing and contrasting. A SWOT or PEST analysis generally works for me. This has a bearing on the safe or reckless acceptance of information and other risks;
    • Is information risk and security an alien concept, something best left to the grunts deep within IT, or a broad business issue? Is it an imposed imperative or a business opportunity, a budget black hole (cost centre) or an investment (profit centre)? Does it support and enable the business, or constrain and prevent it?
    • Notice the power and status of managers, departments and functions. Who are the movers and shakers? Who are the blockers and naysayers? Who are the best-connected, the most influential, the bright stars? Who is getting stuff done, and who isn’t? Why is that?
    • How would you characterize and describe the corporate culture? What are its features, its high and low points? What elements or aspects of that might you exploit to further your objectives? What needs to change, and why? (How will come later!)
  • Dig out and study any available risk, security and audit reports, metrics, reviews, consultancy engagements, post-incident reports, strategies, plans (departmental and projects/initiatives), budget requests, project outlines, corporate and departmental mission statements etc. There are lots of data here and plenty of clues that you should find useful in building up a picture of What Needs To Be Done. Competent business continuity planning, for example, is also business-risk-aligned, hence you can’t go far wrong by emphasizing information risks to the identified critical business activities. At the very least, obtaining and discussing the documentation is an excellent excuse to work your way systematically around the business, meeting knowledgeable and influential people, learning and absorbing info like a dry sponge.
  • Build your team. It may seem like you’re a team of 1 but most organizations have other professionals or people with an interest in information risk and security etc. What about IT, HR, legal/compliance, sales & marketing, production/operations, research & development etc.? Risk Management, Business Continuity Management, Privacy and IT Audit pro’s generally share many of your/our objectives, at least there is substantial overlap (they have other priorities too). Look out for opportunities to help each other (give and take). Watch out also for things, people, departments, phrases or whatever to avoid, at least for now.
  • Meanwhile, depending partly on your background, it may help to read up on the ISO27k and other infosec standards plus your corporate strategies, policies, procedures etc., not just infosec. Consider attending an ISO27k lead implementer and/or lead auditor training course, CISM or similar.  There’s also the ISO27k FAQ, ISO27k Toolkit and other info from ISO27001security.com, plus the ISO27k Forum archive (worth searching for guidance on specific issues, or browsing for general advice).  If you are to become the organization’s centre of excellence for information risk and security matters, it’s important that you are well connected externally, a knowledgeable expert in the field. ISSA, InfraGard, ISACA and other such bodies, plus infosec seminars, conferences and social media groups are all potentially useful resources, or a massive waste of time: your call. 
Yes, I know, I know, that’s a ton of work, and I appreciate that it’s not quite what was asked for i.e. questions to ask departments about their infosec controls. My suggestion, though, is to tackle this at a different level: the security controls in place today are less important than the security controls that the organization needs now and tomorrow. Understanding the information risks is key to figuring out the latter.

As a relative newcomer, doing your homework and building the bigger picture will give you an interesting and potentially valuable insight into the organization, not just on the information risk and security stuff … which helps when it comes to proposing and discussing strategies, projects, changes, budgets etc. How you go about doing that is just as important as what it is that you are proposing to do. In some organizations, significant changes happen only by verbal discussion and consensus among a core/clique (possibly just one all-powerful person), whereas in some others nothing gets done without the proper paperwork, in triplicate, signed by all the right people in the correct colours of ink! The nature, significance and rapidity of change all vary, as do the mechanisms or methods.

So, in summary, there's rather more to do than assess the security controls against 27002. 



PS  For the more cynical among us, there’s always the classic three envelope approach.

NBlog Oct – phishing awareness & training module

It's out: a fully revised (almost completely rewritten!) awareness and training module on phishing.

Phishing is one of many social engineering threats, perhaps the most widespread and most threatening.

Socially-engineering people into opening malicious messages, attachments and links has proven an effective way to bypass many technical security controls.

Phishing is a business enterprise: a highly profitable and successful one, making this a growth industry. Typical losses from phishing attacks have been estimated at $1.6m per incident, with some stretching into the tens and perhaps hundreds of millions of dollars.

Just as Advanced Persistent Threat (APT) takes malware to a higher level of risk, so Business Email Compromise (BEC) puts an even more sinister spin on regular phishing. With BEC, the social engineering is custom-designed to coerce employees in powerful, trusted corporate roles to compromise their organizations, for example by making unauthorized and inappropriate wire transfers or online payments from corporate bank accounts to accounts controlled by the fraudsters.

As with ordinary phishing, the fraudsters behind BEC and other novel forms of social engineering have plenty of opportunities to develop variants of existing attacks as well as developing totally novel ones. Therefore, we can expect to see more numerous, sophisticated and costly incidents as a result. Aggressive dark-side innovation is a particular feature of the challenges in this area, making creative approaches to awareness and training (such as NoticeBored!) even more valuable. We hope to prompt managers and professionals especially to think through the ramifications of the specific incidents described, generalize the lessons and consider the broader implications. We’re doing our best to make the organization future-proof. It’s a big ask though! Good luck.

Learning objectives

October’s module is designed to:
  • Introduce and explain phishing and related threats in straightforward terms, illustrated with examples and diagrams;
  • Expand on the associated information risks and controls, from the dual perspectives of individuals and the organization;
  • Encourage individuals to spot and react appropriately to possible phishing attempts targeting them personally;
  • Encourage workers to spot and react appropriately to phishing and BEC attacks targeting the organization, plus other social engineering attacks, frauds and scams;
  • Stimulate people to think - and most of all act - more securely in a general way, for example being more alert for the clues or indicators of trouble ahead, and reporting them.
Consider your organization’s learning objectives in relation to phishing. Are there specific concerns in this area, or just a general interest? Has your organization been used as a phishing lure, maybe, or suffered spear-phishing or BEC incidents? Do you feel particularly vulnerable in some way, perhaps having narrowly avoided disaster (a near-miss)? Are there certain business units, departments, functions, teams or individuals that could really do with a knowledge and motivational boost? Lots to think about this month!

Content outline



Get in touch to purchase the phishing module alone, or to subscribe to the NoticeBored service for more like this every month. Phishing is undoubtedly an important topic for awareness and training, but definitely not the only one. Build and sustain your corporate security culture through NoticeBored.

NBlog Sept 23 – what’s the best development method for security?

In answer to someone on CISSPforum asking for advice about the impact of various software development lifecycles, methods or (as if we need another ology) methodologies, I asserted that the SDLC method affects the way or the manner in which infosec is achieved (spec'd, built, confirmed, delivered, used, managed, monitored, maintained ...) more than how effective it ends up being.

There are pros and cons to all the methods - different strengths and weaknesses, different purposes, opportunities, risks and constraints. Software or systems development involves a load of trade-offs and compromises. For example, if information risks absolutely must be minimized, formal methods are a good way to achieve that ... at huge cost in terms of both the investment of money and time for the development, and the functionality and rigidity of the developed system. However, an even better way to minimize the risk is to avoid using software, sidestepping the whole issue!

In most circumstances, I would argue that other factors are more significant in relation to the information security achieved in the developed system than the choice of development method e.g.:
  • Governance, management and compliance arrangements, especially around the extended dev team and the key stakeholders;
  • Strategies (e.g. business drivers for information security), priorities, resources available (including maturity, skills and competence on infosec matters - not just $$$);
  • Policies and standards, especially good security practices embedding sound principles such as:
    • Don't bolt it on - build security in;
    • Be information risk-driven;
    • Address CIA and other security, privacy, compliance and related matters;
    • Secure the whole system, not just the software;
    • Focus on important security requirements and controls, taking additional care, increasing both strength and assurance over those;
    • Layer security, in anticipation of layers being breached: make it harder and more costly for adversaries and incidents to occur;
    • Trust but verify;
    • Accept that perfect or absolute security is literally unachievable, and security maturity is more quest than goal, hence provide for resilience, recovery and contingency as well as incident management and continuous improvement.
  • Well-defined critical decision points, sometimes known as hurdles, stage gates etc., plus the associated criteria and assurance requirements, plus the associated management processes to measure progress, handle issues, re-prioritize ...;
  • Corporate culture, attitudes towards information risk, infosec, cybersec, IT, compliance etc., among management, the intended system users, IT and the dev team, plus awareness and training;
  • Documentation: more than simply red tape, good quality documentation on information risk and security indicates a mature, considered, rational approach, facilitates wider involvement plus review and authorization, captures good practices and helps those not closely involved with the project appreciate what is being developed, how and why;
  • Systems thinking: alongside the people, hardware, networks and other system components and dynamics, the software is just part of the bigger thing being developed;
  • Team working: high performance teamwork can achieve more, better security and higher quality products with the same resources, especially if the extended team includes a wide range of experts, users, administrators, managers and more;
  • Suitable metrics, such that 'more, better security and higher quality products' is more than just a hand-waving notion, becoming criteria, measures and drivers;
  • Risk and change management practices and attitudes, maturity, support, drive etc.;
  • Most of all, the deep understanding that underpins sound requirements specs, planning and execution, and leadership: infosec is an integral part not a bolt-on, ideally to the point that it is taken for granted by all concerned that It Will Be Done Properly.
I would love an opportunity to try out dev-races, where two or more development teams set out in parallel to build and deliver whatever it is, in friendly competition with each other.  They will all have the same fixed specs for some aspects of the delivery, but latitude to innovate in other respects e.g. methods/approaches.  At the appropriate points during the project, the 'losers' admit defeat and either depart or join the 'winners', pushing through the final, toughest activities together on the home straight.  At first glance, it sounds like it will double the costs ... but that's only for the early stages, and has the advantages of improving both motivation and the end product.  Personally, from both the security and business perspectives, I see more investment in the early stages as an opportunity more than a cost!

NBlog Sept 8 – chew before swallowing

The Global State of Online Digital Trust is a typical vendor-sponsored piece, a white paper (= marketing promotion in the guise of a 'survey') prepared by Frost & Sullivan for CA Technologies.

I say 'typical' in that they have disclosed hardly any information about the survey method and sample. The press release instructs us to see the report for "Full survey methodology details" but unless I'm blind, it looks to me as if someone either 'forgot' to write the materials-and-methods section or casually neglected to incorporate it in the published report.  Oh dear.

A CA marketing VP called it "a survey of 1,000 consumers, 350 cybersecurity professionals and 325 business executives from all over the world" whereas the press release referred to it as "The global online survey of 990 consumers, 336 security professionals and 324 business executives across 10 countries". 

We can only guess at how they might have assigned respondents between the three categories e.g. who would not qualify as a 'consumer'? Wouldn't a CISO fall into all three groups? In the report, numbers next to the graphs appear to indicate sample sizes up to about 990.

Last time I checked, there were rather more than 10 countries in the world aside from USA BRA UK FRA GER ITA AUS IND JPN and CHN, as listed in the report. If I'm interpreting those abbreviations correctly, that's well short of "all over the world".

If indeed the survey was online, that rather suggests the sample only consisted of people from the ten countries who were happy to answer an online survey - which itself implies a degree of trust in online security as well as a willingness to respond to a vendor-sponsored survey. 

It is unclear whether or how the report's conclusions relate to the survey findings ... and they are somewhat predictable given the report sponsor's commercial interests:
"CULTIVATE A CULTURE OF SECURITY Implement data protection policies that are in accordance with the world’s strictest data privacy regulations. Ensure company-wide familiarity with security policies, including among non-technical staff to reduce the risk of data breaches. 
START AT THE TOP Too many business executives see security initiatives as a negative return on investment. Alert the C-Suite to the tangible business impacts of a breach and a loss of consumer trust. 
COVER YOUR BASES Consumers consider both social and technical factors when determining whether to trust an organization; be sure that your organization has the technical foundation in place to mitigate attacks and have a response team ready to minimize damage to consumer trust in the event of a breach. 
KEEP IT SIMPLE Clear communication from organizations around policies and data handling practices is critical for building trust. Far too many organizations overestimate the degree to which consumers can easily manage their personal data online. Present your policies in simple language, and provide important details without overwhelming the consumer."
So they evidently equate "a culture of security" with data protection, data privacy and data breaches. Spot the common factor. A similar bias towards privacy law compliance and the protection of "customer data" is evident in all four paragraphs. That is an important issue, I agree, along with "cybersecurity" (an undefined term ... but I guess they mean IT security) but what about all the rest of information security: trade secrets, intellectual property, business continuity, physical and procedural security, information integrity, blah blah blah?

I freely admit to being heavily prejudiced in favour of both cultural development and management-level security awareness but their emphasis on breach impacts and consumer trust once again betrays a myopic focus on privacy breach incidents, while the conclusion about return on investment seems very suspect to me. I wonder if the survey question/s in that area were unambiguous enough to be interpreted in the same way by all the respondents? Or are the reported differences between the groups of respondents merely indicative of their distinct perspectives and assumptions? Did they even face the same questions? We can't tell since they choose not to disclose the survey questions.

The report introduces the term "Digital trust index". Sounds great, right? A metric concerning trust in, errr, digits? A percentage value relative to, um, what exactly? Oh let me guess, relative to the score conjured out of the air for this, the first report. And unfortunately for the sponsors, the term "Digital Trust Index" is already in use elsewhere.

Overall, a disappointing and essentially pointless read, like most of the other commercially-sponsored and heavily-promoted "surveys" I have read in my career, with few exceptions.

Clearly, I'm a slow learner, stubborn as an old boot. Venting my spleen through this blog is immensely helpful though, along with the vain hope that you might perhaps be persuaded to take a more critical look at the next "survey" that plops onto your screen. Chew it over rather than swallowing whole.

The Quest for Optimal Security

There's no shortage of guidance available today about how to structure, build, and run a security program. Most guidance comes from a standpoint of inherent bias, whether it be to promote a product class, specific framework/standard, or to best align with specific technologies (legacy/traditional infrastructure, cloud, etc.). Given all the competing advice out there, I often find it's hard to suss out exactly what one should be doing. As someone actively on the job hunt, this reality is even more daunting because job descriptions will typically contain a smattering of biases, confirmed or contradicted through interview processes. But, I digress...

At the end of the day, the goal of your security program should be to chart a path to an optimal set of capabilities. What exactly constitutes "optimal" will in fact vary from org to org. We know this is true because otherwise there would already be a settled "best practice" framework to which everyone would align. That said, there are a lot of common pieces that can be leveraged in identifying the optimal program attributes for your organization.

The Basics

First and foremost, your security program must account for basic security hygiene, which creates the basis for arguing legal defensibility; which is to say, if you're not doing the basics, then your program can be construed as insufficient, exposing your organization to legal liability (a growing concern). That said, what exactly constitutes "basic security hygiene"?

There are a couple of different ways to look at basic security hygiene. For starters, you can look at it by technology grouping:
- Network
- Endpoint
- Data
- Applications
- IAM
- etc.

However, listing out specific technologies can become cumbersome, plus it doesn't necessarily lend itself well to thinking about security architecture and strategy. A few years ago I came up with an approach that looks like this:

[Image: Ben-matrix.png]

More recently, I learned of the OWASP Cyber Defense Matrix, which takes a similar approach to mine above, but mixes it with the NIST Cybersecurity Framework.
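As a rough illustration of the structure (the cell entries below are my own invented examples, not the CDM's content), the matrix crosses asset classes with the five NIST CSF functions, and the empty cells are exactly where a gap assessment should look next:

    # Sketch of a Cyber-Defense-Matrix-style grid. Axes follow the CDM's
    # pattern (asset classes x NIST CSF functions); the filled-in cells
    # are invented examples.
    asset_classes = ["Devices", "Applications", "Networks", "Data", "Users"]
    csf_functions = ["Identify", "Protect", "Detect", "Respond", "Recover"]

    coverage = {(a, f): [] for a in asset_classes for f in csf_functions}
    coverage[("Devices", "Identify")].append("asset inventory")
    coverage[("Devices", "Protect")].append("endpoint hardening")
    coverage[("Users", "Protect")].append("awareness and training")
    coverage[("Data", "Recover")].append("backups")

    # Empty cells flag potential gaps worth a risk-based look:
    gaps = [cell for cell, capabilities in coverage.items() if not capabilities]
    print(f"{len(gaps)} of {len(coverage)} cells uncovered, e.g. {gaps[:3]}")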


Overall, I like the simplicity of the CDM approach as I think it covers sufficient bases to project a legally defensible position, while also ensuring a decent starting point that will cross-map to other frameworks and standards depending on the needs of your organization (e.g., maybe you need to move to ISO 27001 or obtain a SOC 1/2/3 attestation).

Org Culture

One of the oft-overlooked, and yet insanely important, aspects of designing an approach to optimal security for your organization is to understand that it must exist completely within the organization's culture. After all, the organization is composed of people doing work, and pretty much everything you're looking to do will have some degree of impact on those people and their daily lives.

[Image: Ben-pyramid.png]

As such, when you think about everything, be it basic security hygiene, information risk management, or even behavioral infosec, you must first consider how it fits with org culture. Specifically, you need to look at the values of the organization (and its leadership), as well as the behaviors that are common, advocated, and rewarded.

If what you're asking people to do goes against the incentive model within which they're operating, then you must find a way to either better align with those incentives or find a way to change the incentives such that they encourage preferred behaviors. We'll talk more about behavioral infosec below, so for this section the key takeaway is this: organizational culture creates the incentive model(s) upon which people make decisions, which means you absolutely must optimize for that reality.

For more on my thoughts around org culture, please see my post "Quit Talking About "Security Culture" - Fix Org Culture!"

Risk Management

Much has been said about risk management over the past decade+, whether it be PCI DSS advocating for a "risk-based approach" to vulnerability management, or updates to the NIST Risk Management Framework, or various advocacy from ISO 27005/31000, or proponents of a quantitative approach (such as the FAIR Institute).
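For a flavour of the quantitative style, here's a toy Monte Carlo sketch, loosely in the spirit of FAIR's loss-event-frequency times loss-magnitude thinking. Every parameter and distribution below is invented purely for illustration:

    import random
    import statistics

    random.seed(1)

    def simulated_annual_loss(p_event_per_month: float = 0.2,
                              loss_mu: float = 11.0,
                              loss_sigma: float = 1.0) -> float:
        """One simulated year: a ~20% chance of a loss event each month,
        with lognormally distributed magnitude (median around $60k).
        All parameters are invented for illustration."""
        total = 0.0
        for _ in range(12):
            if random.random() < p_event_per_month:
                total += random.lognormvariate(loss_mu, loss_sigma)
        return total

    losses = sorted(simulated_annual_loss() for _ in range(10_000))
    print(f"Mean annualised loss: ${statistics.mean(losses):,.0f}")
    print(f"95th-percentile year: ${losses[9_500]:,.0f}")

The point is not the numbers - it's that, beyond the hygiene baseline, questions like "which risk do we treat next?" become answerable in business terms.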

The simple fact is that, once you have a reasonable base set of practices in place, almost everything else should be driven by a risk management approach. However, what this means within the context of optimal security can vary substantially, not least due to staffing challenges. If you are a small-to-medium-sized business, then your reality is likely one where you, at best, have a security leader of some sort (CISO, security architect, security manager, whatever) and then maybe up to a couple of security engineers (doers), maybe someone for compliance, and then most likely a lot of outsourcing (MSP/MSSP/MDR, DFIR retainer, auditors, contractors, consultants, etc, etc, etc).

Risk management is not your starting point. As noted above, there are a number of security practices that we know must be done, whether that be securing endpoints, data, networks, access, or what-have-you. Where we start needing risk management is when we get beyond the basics and try to determine what else is needed. As such, the crux of optimal security is having an information risk management capability, which means your overall practice structure might look like this:

[Image: Ben-pyramid2.png]

However, don't get wrapped around the axle too much on how the picture fits together. Instead, be aware that your basics come first (out of necessity), then comes some form of risk mgmt., which will include gaining a deep understanding of org culture.

Behavioral InfoSec

The other major piece of a comprehensive security program is behavioral infosec, which I have talked about previously in my posts "Introducing Behavioral InfoSec" and "Design For Behavior, Not Awareness." In these posts, and other places, I talk about the imperative to key in on organizational culture, and specifically look at behavior design as part of an overall security program. However, there are a couple key differences in this approach that set it apart from traditional security awareness programs.
1) Behavioral InfoSec acknowledges that we are seeking preferred behaviors within the context of organizational culture, which is the set of values and behaviors promoted, supported, and rewarded by the organization.
2) We move away from basic "security awareness" programs like annual CBTs toward practices that seek measurable, lasting change in behavior that provide positive security benefit.
3) We accept that all security behaviors - whether it be hardening or anti-phishing or data security (etc) - must either align with the inherent cultural structure and incentive model, or seek to change those things in order to heighten the motivation to change while simultaneously making it easier to change.

To me, shifting to a behavioral infosec mindset is imperative for achieving success with embedding and institutionalizing desired security practices into your organization. Never is this more apparent than in looking at the Fogg Behavior Model, which explains behavior thusly:

In words, the model says that behavior happens when three things come together: motivation, ability, and a trigger (prompt or cue). Behavior can be diagrammed with motivation charted on the Y-axis from low to high, ability charted on the X-axis from "hard to do" to "easy to do," and a prompt (or trigger) falling either to the left or right of the "line of action," which means the prompt itself is less important than one's motivation and the ease of the action.
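A crude way to render that in code (my own simplification - the multiplicative form and the fixed threshold are assumptions, not Fogg's published formulation):

    def behavior_occurs(motivation: float, ability: float,
                        prompt: bool, action_line: float = 0.5) -> bool:
        """Toy rendering of the Fogg model: a prompt only converts into
        action when motivation x ability clears the 'line of action'.
        The multiplication and 0.5 threshold are my simplification."""
        return prompt and (motivation * ability) > action_line

    # High motivation but a hard action fails, as described above ...
    print(behavior_occurs(motivation=0.9, ability=0.3, prompt=True))  # False
    # ... make the action easier and the same prompt now triggers it.
    print(behavior_occurs(motivation=0.9, ability=0.7, prompt=True))  # True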

We consistently fail in infosec by not properly accounting for incentive models (motivation) or by asking people to do something that is, in fact, too difficult (ability; that is, you're asking for a change that is hard, maybe in terms of making it difficult to do their job, or maybe just challenging in general). In all things, when we think about information risk mgmt. and the kinds of changes we want to see in our organizations beyond basic security hygiene, it's imperative that we also understand the cultural impact and how org culture will support, maybe even reward, the desired changes.

Overall, I would argue that my original pyramid diagram ends up being more useful insomuch as it encourages us to think about info risk mgmt. and behavioral infosec in parallel and in conjunction with each other.

Putting It All Together

All of these practice areas - basic security hygiene, info risk mgmt, behavioral infosec - ideally come together in a strategic approach that achieves optimal security. But, what does that really mean? What are the attributes, today, of an optimal security program? There are lessons we can learn from agile, DevOps, ITIL, Six Sigma, and various other related programs and research, ranging from Deming to Senge and everything in between. Combined, "optimal security" might look something like this:


Conscious
   - Generative (thinking beyond the immediate)
   - Mindful (thinking of people and orgs in the whole)
   - Discursive (collaborative, communicative, open-minded)

Lean
   - Efficient (minimum steps to achieve desired outcome)
   - Effective (do we accomplish what we set out to do?)
   - Managed (haphazard and ad hoc are the enemy of lasting success)

Quantified
   - Measured (applying qualitative or quantitative approaches to test for efficiency and effectiveness)
   - Monitored (not just point-in-time, but watched over time)
   - Reported (to align with org culture, as well as to help reform org culture over time)

Clear
   - Defined (what problem is being solved? what is the desired outcome/impact? why is this important?)
   - Mapped (possibly value stream mapping, possibly net flows or data flows, taking time to understand who and what is impacted)
   - Reduced (don't bite off too much at once, acknowledge change requires time, simplify simplify simplify)

Systematic
   - Systemic understanding (the organization is a complex organism that must work together)
   - Automated where possible (don't install people where an automated process will suffice)
   - Minimized complexity (perfect is the enemy of good, and optimal security is all about "good enough," so seek the least complex solutions possible)


Obviously, much, much more can be said about the above, but that's fodder for another post (or a book, haha). Instead, I present the above as a starting point for a conversation to help move everyone away from some of our traditional, broken approaches. Now is the time to take a step back and (re-)evaluate our security programs and how best to approach them.

Quit Talking About "Security Culture" – Fix Org Culture!

I have a pet peeve. Ok, I have several, but nonetheless, we're going to talk about one of them today. That pet peeve is security professionals wasting time and energy pushing a "security culture" agenda. This practice of talking about "security culture" has arisen over the past few years. It's largely coming from security awareness circles, though that's not always the case (looking at you, anti-phishing vendors intent on selling products without the means and methodology to make them truly useful!).

I see three main problems with references to "security culture," not the least of which being that it continues the bad old practices of days gone by.

1) It's Not Analogous to Safety Culture

First and foremost, you're probably sitting there grinding your teeth saying "But safety culture initiatives work really well!" Yes, they do, but here's why: safety culture can - and often does - achieve a zero-incident outcome. That is to say, you can reduce safety incidents to ZERO. That's excellent news when you're around construction sites or going to the hospital. However, I have very bad news for you. Information (or cyber or computer) security will never get to zero incidents. Until the entirety of computing is revolutionized, removing humans from the equation, you will never prevent all incidents. Just imagine your "security culture" sign by the entrance to your local office environment, forever emblazoned with "It Has Been 0 Days Since Our Last Incident." That's not healthy or encouraging. That sort of thing would be outright demoralizing!

Since you can't be 100% successful through preventative security practices, you must then shift mindset to a couple things: better decisions and resilience. Your focus, which most of your "security culture" programs are trying to address (or should be), is helping people make better decisions. Well, I should say, some of you - the few, the proud, the quietly isolated - have this focus. But at the end of the day/week/month/year you'll find that people - including well-trained and highly technical people - will still make mistakes or bad decisions, which means you can't bank on "solving" infosec through better decisions.

As a result, we must still architect for resiliency. We must assume something will break down at some point, resulting in an incident. When that incident occurs, we must be able to absorb the fault and continue to operate despite degraded conditions, while recovering to "normal" as quickly, efficiently, and effectively as possible. Note, however, that this focus on resiliency doesn't really align well with the "security culture" message. It's akin to telling people "Safety is really important, but since we have no faith in your ability to be safe, here's a first aid kit." (Yes, that's a bit harsh, to prove a point, which hopefully you're getting.)

2) Once Again, It Creates an "Other"

One of the biggest problems with a typical "security culture" focus is that it once again creates the wrong kind of enablement culture. It says "we're from infosec and we know best - certainly better than you." Why should people work to make better decisions when they can just abdicate that responsibility to infosec? Moreover, since we're trying to optimize resiliency, people can go ahead and make mistakes, no big deal, right?

Part of this is ok, part of it is not. On the one hand, from a DevOps perspective, we want people to experiment, be creative, be innovative. In this sense, resilience and failure are a good thing. However, note that in DevOps, the responsibility for "fail fast, recover fast, learn fast" is on the person doing the experimenting!!! The DevOps movement is diametrically opposed to fostering enablement cultures where people (like developers) don't feel the pain from their bad decisions. It's imperative that people have ownership and responsibility for the things they're doing. Most "security culture" dogma I've seen and heard works against this objective.

We want enablement, but we don't want enablement culture. We want "freedom AND responsibility," "accountability AND transparency," etc, etc, etc. Pushing "security culture" keeps these initiatives separate from other organizational development initiatives, and more importantly it tends to have at best a temporary impact, rather than triggering lasting behavioral change.

3) Your Goal Is Improving the Organization

The last point here is that your goal should be to improve the organization and the overall organizational culture. It should not be focused on point-in-time blips that come and go. Additionally, your efforts must be aimed toward lasting impact and not be anchored around a cult of personality.

As a starting point, you should be working with org dev personnel within your organization, applying behavior design principles. You should be identifying what the target behavior is, then working backward in a piecemeal fashion to determine whether that behavior can be evoked and institutionalized through one step or multiple steps. It may even take years to accomplish the desired changes.

Another key reason for working with your org dev folks is because you need to ensure that anything "culture" that you're pursuing is fully aligned with other org culture initiatives. People can only assimilate so many changes at once, so it's often better to align your work with efforts that are already underway in order to build reinforcing patterns. The worst thing you can do is design for a behavior that is in conflict with other behavior and culture designs underway.

All of this is to underline the key point that "security culture" is the wrong focus, and can in some cases even detract from other org culture initiatives. You want to improve decision-making, but you have to do this one behavior at a time, and glossing over it with the "security culture" label is unhelpful.

Lastly, you need to think about your desired behavior and culture improvements in the broader context of organizational culture. Do yourself a favor and go read Laloux's Reinventing Organizations for an excellent treatise on a desirable future state (one that aligns extremely well with DevOps). As you read Laloux, think about how you can design for security behaviors in a self-managed world. That's the lens through which you should view things, and this is where you'll realize a "security culture" focus is at best distracting.

---
So... where should you go from here? The answer is three-fold:
1) Identify and design for desirable behaviors
2) Work to make those behaviors easy and sustainable
3) Work to shape organizational culture as a whole

Definitionally, here are a couple starters for you...

First, per Fogg, Behavior happens when three things come together: Motivation, Ability (how hard or easy it is to do the action), and a Trigger (a prompt or cue). When Motivation is high and it's easy to do, then it doesn't take much prompting to trigger an action. However, if it's difficult to take the action, or the motivation simply isn't there, you must then start looking for ways to address those factors in order to achieve the desired behavioral outcome once triggered. This is the basis of behavior design.

Second, when you think about culture, think of it as the aggregate of behaviors collectively performed by the organization, along with the values the organization holds. It may be helpful, as Laloux suggests, to think of the organization as its own person that has intrinsic motivations, values, and behaviors. Eliciting behavior change from the organization is, then, tantamount to changing the organizational culture.

If you put this all together, I think you'll agree with me that talking about "security culture" is anathema to the desired outcomes. Thinking about behavior design in the context of organizational culture shift will provide a better path to improvement, while also making it easier to explain the objectives to non-security people and to get buy-in on lasting change.

Bonus reference: You might find this article interesting as it pertains to evoking behavior change in others.

Good luck!