Category Archives: Incidents

NBlog March 17 – cat-skinning

Incident reporting is a key objective of April's NoticeBored module. More specifically, we'd like workers to report information security matters promptly. 

So how might we achieve that through the awareness and training materials? Possible approaches include:
  1. Tell them to report incidents. Instruct them. Give them a direct order.
  2. Warn them about not doing it. Perhaps threaten some form of penalty if they don't.
  3. Convince them that it is in the organization's interests for workers to report stuff. Persuade them of the value.
  4. Convince workers that it is in their own best interest to report stuff. Persuade them.
  5. Explain the reporting requirement (e.g. what kinds of things should they report, and how?) and encourage them to do so.
  6. Make reporting incidents 'the easy option'.
  7. Reward people for reporting incidents.
  8. Something else? Trick them? Goad them? Follow up on those who did not report stuff promptly, asking about their reasons?
Having considered all of them, we'll combine a selection of these approaches in the awareness content and the train-the-trainer guide.

In the staff seminar and staff briefing, for instance, the line we're taking is to describe everyday situations where reporting incidents directly benefits the reporter (approach #4 in the list). Having seeded the idea in the personal context, we'll make the connection to the business context (#3) and expand a little on what ought to be reported (#5) ... and that's pretty much it for the general audience. 

For managers, there is mileage in #1 (policies and procedures) and #7 (an incentive scheme?) ... and #8 in the sense that we are only suggesting approaches, leaving NoticeBored subscribers to interpret or adapt them as they wish. Even #2 might be necessary in some organizations, although it is rather negative compared to the alternatives. 

For professionals, #6 hints at designing reporting systems and processes for ease of use, encouraging people to report stuff ... and, where appropriate, automatic reporting if specific criteria are met, which takes the awareness materials into another potentially interesting area. If the professionals are prompted at least to think about the issue, our job is done.

Mandatory reporting of incidents to third parties is a distinct but important issue, especially for management. The privacy breach reporting deadline under GDPR (a topical example) is a very tough challenge for some organizations, requiring substantial changes in their approach to internal incident reporting, escalation and external reporting, and more generally in the attitudes of those involved, making this a cultural issue.

NBlog March 16 – terrorism in NZ

Last evening I turned on the TV to veg-out at the end of a busy week. Instead of my favourite NZ comedy quiz show, both main national channels were looping endlessly with news of the terrorist incident in Christchurch. Well I say 'news': mostly it was lame interviews with people tenuously connected to Christchurch or the Muslim community in NZ, and fumbling interviewers seemingly trying to fill air-time. Ticker-tape banners across the bottom of the screen, ALL IN CAPS, kept repeating the same few messages about the PM mentioning terrorism, yet neglected to say what had actually happened. I managed to piece together a sketchy outline of the incident before eventually giving up. Too much effort for a Friday night.

I gather around 50 people died yesterday in the event. Also yesterday, about 90 other people died, and another ~90 will die today, and every day on average according to the official government statistics:  



This year, ~6,000 Kiwis will die of heart disease, and between 300 and 400 of us will die on the roads.  

Against that backdrop, deaths due to terrorism do not even feature in the stats, so here I'll give you a very rough idea of where we stand:

Don't get me wrong, it is tragic that ~50 people died in the incident yesterday and of course I regret that anyone died. But get real. The media have, as usual, blown it out of all proportion, and turned a relatively minor incident into an enormous drop-everything disaster. 

So what is it about 'terrorism' that sends the media - and it seems the entire population - into such a frenzy? Why is 'terrorism' so newsworthy? Why is it reported so badly? Who benefits from scaring the general population in this way?

Oh, hang on, the clue is in the name. Terrorism only works if we are terrified.

This looks to me like yet another example of 'outrage', a fascinating social phenomenon involving an emotional rather than rational response, amplified by the news and social media with positive feedback leading to a runaway situation. Here I am providing a little negative feedback to redress the balance but I'm sure I will be criticised for having the temerity to even express this. And that, to me, is terrorism of a different kind - information terrorism.

NBlog March 12 – pragmatic information risk management

Over the past three or four decades, the information risk and security management profession has moved slowly from absolute security (also known as "best practices") to relative security (aka "good practices" or "generally-accepted security") such as ISO27k.

Now as we totter into the next phase we find ourselves navigating our way through pragmatic security (aka "good enough"). The idea, in a nutshell, is to satisfy local information risk management requirements (mostly internal organizational/business-related, some externally imposed including social/societal norms) using a practicable, workable assortment of security controls where appropriate and necessary, plus other risk treatments including risk acceptance. 

The very notion of accepting risks is a struggle for those of us in the field with high standards of integrity and professionalism. Seeing the dangers in even the smallest chinks in our armor, we expect and often demand more. It could be argued that we are expected to push for high ideals but, in practice, at some point we have no choice but to acknowledge reality and make the best of the situation before us - or resign, which achieves little except lamely registering our extreme displeasure.

Speaking personally, my strategy for backing-off the pressure and accepting "good enough" security involves Business Continuity Management: I'll endorse incomplete, flawed and (to me) shoddy information security as being "good enough" IF management is willing to pay enough attention and invest sufficiently in BCM just in case unmitigated risks eventuate. 

That little bargain with management has two nice bonuses:
  1. Determining the relative criticality of various business processes, IT systems, business units, departments, teams, relationships, projects, initiatives etc. to the organization involves understanding the business in some depth, leading to a better appreciation of the associated information risks. Provided it is done well, the Business Impact Assessment part of BCM is sheer gold: it forces management to clarify, rationalize and prioritize ... which gives me a much tighter steer on where to push harder or back off the pressure. If we all agree that situation A is more valuable or important or critical to the organization than B, then I can readily justify (both to myself and to management, the auditors and other stakeholders) mitigating the risks in situation B to a lesser extent than for A. That's relative security in a form that makes sense and works for me. It gives me the rationale to accept imperfections.
  2. BCM (as I do it!) involves investing in appropriate resilience, recovery and contingency measures. The resilience part supports information security in a very general yet valuable way: it means not compromising too far on the preventive controls, ensuring they are sufficiently robust not to fall over like dominoes at the first whiff of trouble. The recovery part similarly involves detecting and responding reasonably effectively to incidents, hence I still have the mandate to maintain those areas too. Contingency adds a further element of preparing to deal with the unexpected, including information risks that weren't even foreseen, plus those that were in fact wrongly evaluated and only partially mitigated. Contingency thinking leads to flexible arrangements such as empowerment, multi-skilling, team working and broad capability development with numerous business benefits, adding to those from security, resilience and recovery.

My personal career-survival strategy also involves passing the buck, quite deliberately and explicitly. I value the whole information ownership thing, in particular the notion that whoever has the most to lose (or indeed gain) if information risks eventuate and incidents occur should be the one to determine and allocate resources for the risk treatments required. For me, it comes back to the oft-misunderstood distinction between accountability (being held to account for decisions, actions and inactions by some authority) and responsibility (being tasked with something, making the best of available resources). If an information owner - typically a senior manager for the department or business unit that most clearly has an interest in the information - is willing to live with greater information risks than I personally would feel comfortable accepting, and hence is unwilling to invest in even stronger information security, then fine: I'll help firstly in the identification and evaluation of information risks, and secondly by squeezing the most value I can from the available resources. 

At the end of the day, if it turns out that's not enough to avoid incidents, well too bad. Sorry it all turned to custard but my hands were tied. I'm only accountable for my part in the mess. Most of the grief falls to senior management, specifically the information owners. Now, let's learn the lessons here and make sure it doesn't happen again, eh?

So that's where we are at the moment but where next? Hmm, that's something interesting to mull over while I feed the animals and get my head in gear for the work-day ahead, writing security awareness and training content on incident detection.

I'd love to hear your thoughts on where we've come from, where we are now and especially where we're heading. There's no rush though: on past performance we have, oooh, about 10 or 20 years to get to grips with pragmatic security!

Meanwhile, here are two stimulating backgrounders to read and contemplate: The Ware Report from Rand, and a very topical piece by Andrew Odlyzko.

NBlog March 6 – new topic: detectability

On the SecurityMetrics.org discussion forum, Walt Williams posed a question about the value of 'time and distance' measures in information security, leading to someone suggesting that 'speed of response' might be a useful metric. However, it's a bit tricky to define and measure: exactly when does an incident occur? What about the response? Assuming we can define them, do we time the start, the end, or some intermediate point, or perhaps even measure the ranges?

Next month in the NoticeBored security awareness and training program, we're exploring a new topic: 'incident detectability' concerning the ease and hence likelihood of detection of information security incidents. 

Incidents that are highly visible and obvious to all (e.g. a ransomware attack at the point of the Denial of Service and ransom being demanded) are materially different from those that remain unrecognized for a long period, perhaps forever (e.g. a spyware attack), even if otherwise similar (using very similar remote-control Trojans in those cases). Detectability therefore might be a valuable third dimension to the classic Probability Impact Graphs for assessing and comparing risks.
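To make that 'third dimension' concrete, here's a minimal sketch (in Python) that borrows the FMEA-style risk priority number idea, multiplying probability, impact and detectability scores. It is purely illustrative, not part of the NoticeBored materials: the 1-10 scales and the example risks are invented, and the sketch simply assumes the scores already exist.

```python
# Illustrative only: detectability as a third scoring axis alongside
# probability and impact, loosely borrowing the FMEA "risk priority number"
# idea. The 1-10 scales and example risks are hypothetical.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: int    # 1 = rare ... 10 = almost certain
    impact: int         # 1 = trivial ... 10 = catastrophic
    detectability: int  # 1 = obvious at once ... 10 = likely never detected

    @property
    def priority(self) -> int:
        # Higher = more worrying: likely, damaging AND hard to detect
        return self.probability * self.impact * self.detectability

risks = [
    Risk("Ransomware (announces itself)", probability=6, impact=8, detectability=2),
    Risk("Spyware (stays hidden)",        probability=5, impact=7, detectability=9),
]

for r in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"{r.priority:4d}  {r.name}")
```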

However, that still leaves the question of how one might measure detectability. 

As is my wont, I'm leaning towards a subjective measure using a continuous scale along these lines:



For the awareness module, we'll be defining four or five waypoints, indicators or scoring norms for each of several relevant criteria, helping users of the metric assess, compare and score whatever information risks or incidents they have in mind. 

You may have noticed the implicit 'detection time' element to detectability, ranging from infinity down to zero. That's a fairly simple concept and parameter to explain and discuss, but not so easy to determine or measure in, say, a risk workshop situation. In practice we prefer subjective or relative scales, reducing the measurement issue from "What is the probable detection time for incidents of type X?" to "Would type X incidents generally be detected before or after types Y and Z?" - in other words a classic bubble-sort or prioritization approach, with which managers generally are comfortable. The absolute value of a given point on the measurement scale is almost incidental, an optional outcome of the discussion and prioritization decisions made rather than an input or driver. What matters more is the overall pattern and spread of values, and even more important is the process of considering and discussing these matters in some depth. The journey trumps the destination.

To those who claim "It's not a metric if it doesn't have a unit of measurement!", I say "So what?  It's still a useful way to understand, compare and contrast risks ... which is more important in practice than satisfying some academic and frankly arbitrary and unhelpful definition!" As shown on the sketch, we normally do assign a range of values (percentages) to the scale for convenience (e.g. to facilitate the discussion and for recording outcomes) but the numeric values are only ever meant to be indicative and approximate. Scale linearity and scientific/mathematical precision don’t particularly matter in the risk context, especially as uncertainty is an inherent factor anyway. It's good enough for government work, as they say.
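For what it's worth, here is a rough sketch of how that relative, roughly-percentaged approach might be written up after a workshop. Everything in it is illustrative rather than taken from the module: the incident types, the 'workshop view' ordering and the evenly-spread percentage positions are invented, and in reality the ordering comes from discussion in the room, not a lookup table.

```python
# Illustrative only: turn a relative ordering ("would X generally be detected
# before Y?") into indicative detectability percentages for the record.
from functools import cmp_to_key

incident_types = ["spyware implant", "ransomware attack", "lost laptop", "payroll data-entry error"]

def detected_sooner(a: str, b: str) -> int:
    """Negative if type a is generally detected sooner than type b, positive if
    later - in practice a judgement call by the workshop, not a lookup table."""
    workshop_view = {
        "ransomware attack": 1,        # announces itself almost immediately
        "lost laptop": 2,
        "payroll data-entry error": 3,
        "spyware implant": 4,          # may lurk undetected indefinitely
    }
    return workshop_view[a] - workshop_view[b]

ranked = sorted(incident_types, key=cmp_to_key(detected_sooner))

# Spread indicative scores across a 0-100% scale, highest for the type that is
# detected soonest; the absolute numbers are approximate by design.
for position, incident in enumerate(ranked):
    score = round(100 * (len(ranked) - 1 - position) / (len(ranked) - 1))
    print(f"{score:3d}%  {incident}")
```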

Finally, circling back, 'speed of response' could add yet another dimension to the risk assessment process, or more accurately the risk treatment part of risk management. I envisage a response-speed percentage scale (naturally), ranging from 'tectonic or never' up to 'instantaneous', with an implied pressure to speed up responses, especially to certain types of incident ... sparking an interesting and perhaps enlightening discussion about those types. "Regardless of what we are actually capable of doing at present, which kinds of incidents should we respond to most or least urgently, and why is that?" ... a discussion point that we'll be bringing out in the management materials for April. 

NBlog Feb 25 – TL;DR vs More details please


A substantial part of our effort goes into generating worthwhile and engaging awareness and training materials for a wide range of people, some of whom are too busy, too uninterested or simply can't be bothered with lengthy pieces, whereas others enjoy and in some cases need the details. 

Focusing on a single infosec topic each month gives us the chance to address both ends of the scale. Both of them require useful information: the shorter stuff isn't simply a cut-down summary of the longer material. Each has to reflect the different needs of its intended audience, which changes the focus and style as well as the length.

Personally, my preferred approach is to delve deep and work on the detailed stuff and conceptual diagrams/models etc. first, then pull out, gradually preparing the more succinct higher-level pieces ... but in practice we usually end up spiraling. Producing the more strategic stuff involves reviewing the models and reassessing perspectives ... or something. Anyway, as I draw out the key messages, I end up revisiting and revising the detailed stuff, and back around I go.

It is a spiral, though, not a circle because the monthly delivery deadline means eventually we have to call a halt. Often there are still loose ends, things we simply don't have the time to get into right now ... but it's not hard to park them for the next time we cover the same or a related topic - which hints at another part of our approach, namely creating a completely new awareness and training module, focused on one or more loose ends left dangling from previous topics.

Talking of which, next month we'll be working on "Spotting incidents" - the detection and initial notification part of incident management, specifically. Although we've covered incidents many times before, that will be a new angle. It was prompted by the thought that the probability and impacts of incidents do not fully describe the risks: incidents that remain undetected for long periods (perhaps indefinitely) are a particularly insidious concern. 'Detectability' is therefore another factor to take into account when assessing or evaluating information risks.


NBlog Feb 24 – how to challenge an audit finding

Although I wrote this in the context of ISO/IEC 27001 certification audits, it applies in other situations where there is a problem with something the auditors are reporting such as a misguided, out of scope or simply wrong audit finding.

Here are some possible strategies to consider:
  • Have a quiet word with the auditor/s about it, ideally before it gets written up and finalized in writing. Discuss the issue – talk it through, consider various perspectives. Negotiate a pragmatic mutually-acceptable resolution, or at least form a better view of the sticking points.
  • Have a quiet word with your management and specialist colleagues about it, before the audit gets reported. Discuss the issue. Agree how you will respond and try to resolve this. Develop a cunning plan and gain their support to present a united front. Ideally, get management ready to demonstrate that they are definitely committing to fixing this e.g. with budget proposals, memos, project plans etc. to substantiate their commitment, and preferably firm timescales or agreed deadlines.
  • Gather your own evidence to strengthen your case. For example:
    • If you believe an issue is irrelevant to certification since there is no explicit requirement in 27001, identify the relevant guidance about the audit process from ISO/IEC 27007 plus the section of 27001 that does not state the requirement (!)
    • If the audit finding is wrong, prove it wrong with credible counter-evidence, counter-examples etc. Quality of evidence does matter but quantity plays a part. Engage your extended team, management and the wider business in the hunt.
    • If it’s a subjective matter, try to make it more objective e.g. by gathering and evaluating more evidence, more examples, more advice from other sources etc. ‘Stick to the facts’. Be explicit about stuff. Choose your words carefully.
    • Ask us for second opinions and guidance e.g. on the ISO27k Forum and other social media, industry peers etc.
  • Wing-it. Duck-and-dive. Battle it out. Cut-and-thrust. Wear down the auditor’s resolve and push for concessions, while making limited concessions yourself if you must. Negotiate using concessions and promises in one area to offset challenges and complaints in another. Agree on and work towards a mutually-acceptable outcome (such as, um, being certified!).
  • Be up-front about it. Openly challenge the audit process, findings, analysis etc. Provide counter-evidence and arguments. Challenge the language/wording. Push the auditors to their limit. [NB This is a distinctly risky approach! Experienced auditors have earned their stripes and are well practiced at this, whereas it may be your first time. As a strategy, it could go horribly wrong, so what’s your fallback position? Do you feel lucky, punk?]
  • Suck it up! Sometimes, the easiest, quickest, least stressful, least risky (in terms of being certified) and perhaps most business-like response is to accept it, do whatever you are being asked to do by the auditors and move on. Regardless of its validity for certification purposes, the audit point might be correct and of value to the business. It might actually be something worth doing … so swallow your pride and get it done. Try not to grumble or bear a grudge. Re-focus on other more important and pressing matters, such as celebrating your certification!
  • Negotiate a truce. Challenge and discuss the finding and explore possible ways to address it. Get senior management to commit to whichever solution/s work best for the business and simultaneously persuade/convince the auditors (and/or their managers) of that.
  • Push back informally by complaining to the certification body’s management and/or the body that accredited them. Be prepared to discuss the issue and substantiate your concerns with some evidence, more than just vague assertions and generalities.
  • Push back hard. Review your contract with the certification body for anything useful to your case. Raise a formal complaint with the certification body through your senior management … which means briefing them and gaining their explicit support first. Good luck with that. You’ll need even stronger, more explicit evidence here. [NB This and the next bullet are viable options even after you have been certified … but generally, by then, nobody has the energy to pursue it and risk yet more grief.]
  • Push back even harder. Raise a complaint with the accreditation body about the certification body’s incompetence through your senior management … which again means briefing them and gaining their explicit support first, and having the concrete evidence to make a case. Consider enlisting the help of your lawyers and compliance experts willing to get down to the brass tacks, and with the experience to build and present your case.
  • Delay things. Let the dust settle. Review, reconsider, replan. Let your ISMS mature further, particularly in the areas that the auditors were critical of. Raise your game. Redouble your efforts. Use your metrics and processes fully.
  • Consider engaging a different certification body (on the assumption that they won’t raise the same concerns … nor any others: they might be even harder to deal with!).
  • Consider engaging different advisors, consultants and specialists. Review your extended ISMS team. Perhaps push for more training, to enhance the team’s competence in the problem areas. Perhaps broaden ‘the team’ to take on-board other specialists from across the business. Raise awareness.
  • Walk away from the whole mess. Forget about certification. Go back to your cave to lick your wounds. Perhaps offer your resignation, accepting personal accountability for your part in the situation. Or fire someone else!
Although that's a long shopping list, I'm sure there are other possibilities, including combinations of the above. The fact is that you have choices in how to handle such challenges: your knee-jerk response may not be ideal.

For bonus marks, you might even raise an incident report concerning the issue at hand, then handle it in the conventional manner through the incident management part of your ISMS. An adverse audit finding is, after all, a concern that needs to be addressed and resolved just like other information incidents. It is an information risk that has eventuated. You will probably need to fix whatever is broken, but first you need to assess and evaluate the incident report, then decide what (if anything) needs to be done about it. The process offers a more sensible, planned and rational response than jerking your knee. It's more business-like, more professional. I commend it to the house.

NBlog Feb 21 – victimization as a policy matter


An interesting example of warped thinking from Amos Shapir in the latest RISKS-List newsletter:

"A common tactic of authoritarian regimes is to make laws which are next to impossible to abide by, then not enforce them. This creates a culture where it's perfectly acceptable to ignore such laws, yet the regime may use selective enforcement to punish dissenters -- since legally, everyone is delinquent."
Amos is talking (I believe) about national governments and laws but the same approach could be applied by authoritarian managers through corporate rules, including policies. Imagine, for instance, a security policy stating that all employees must use a secret password of at least 35 random characters: it would be unworkable in practice but potentially it could be used by management as an excuse to single-out, discipline and fire a particularly troublesome employee, while at the same time ignoring noncompliance by everyone else (including themselves, of course).

It's not quite as straightforward as I've implied, though, since organizations have to work within the laws of the land, particularly employment laws designed to protect individual workers from rampant exploitation by authoritarian bosses. Workers sacked in such circumstances may have a valid legal defense: given the general lack of enforcement, it is reasonable to assume the policy is not in force, regardless of any stated mandate or obligation to comply ... which in turn has implications for all corporate policies and other rules (procedures, work instructions, contracts and agreements): if they are not substantially and fairly enforced, they may not have legal standing. 

[IANAL. This piece is probably wrong and/or inapplicable. It's a thought-provoker, not legal advice.]

NBlog Feb 14 – online lovers, offline scammers





Social engineering scams are all the rage, a point worth noting today of all days.

A Kiwi farmer literally lost the farm to a scammer he met and fell for online. 

Reading the news report, this was evidently a classic advance fee fraud or 419 scam that cost him a stunning $1.25m. 

This is not the first time I've heard about victims being drawn-in by the scammers to the extent that they refuse to accept that they have been duped when it is pointed out to them. There's probably something in the biology of our brains that leads us astray - some sort of emotional hijack going on, bypassing the normal rational thought processes.

On a more positive note, the risks associated with online dating are reasonably well known and relatively straightforward to counter. And old-school offline dating is not risk free either. 

Relationships generally are a minefield ... but tread carefully and amazing things can happen. Be careful, be lucky.

DNS Manipulation in Venezuela in regards to the Humanitarian Aid Campaign

Venezuela is a country facing an uncertain moment in its history. Reports suggest it is in significant need of humanitarian aid.

On February 10th, Mr. Juan Guaidó made a public call asking for volunteers to join a new movement called “Voluntarios por Venezuela” (Volunteers for Venezuela). According to the media, it already numbers thousands of volunteers, willing to help international organizations to deliver humanitarian aid to the country. How does it work? Volunteers sign up and then receive instructions about how to help. The original website asks volunteers to provide their full name, personal ID, cell phone number, whether they have a medical degree, a car or a smartphone, and also where they live:

This website appeared online on February 6th. Only a few days later, on February 11th, the day after the public announcement of the initiative, another almost identical website appeared with a very similar domain name and structure.

In fact, the false website is a mirror image of the original website, voluntariosxvenezuela.com.

Both the original and the false website use SSL from Let’s Encrypt. The differences are as follows:

The original voluntariosxvenezuela.com website:
  • First day on the Internet: Feb 6th
  • Whois information: registered in the name of Sigerist Rodriguez on Feb 4, 2019
  • Hosted on Amazon Web Services

The deception website:
  • First day on the Internet: Feb 11th
  • Whois information: registered via GoDaddy using the Privacy Protection feature on Feb 11, 2019
  • Hosted first on GoDaddy and then on DigitalOcean

Now, the scariest part is that these two different domains with different owners are resolved within Venezuela to the same IP address, which belongs to the fake domain owner:

That means it does not matter whether a volunteer opens the legitimate domain name or the fake one: either way, they will be entering their personal information into a fake website.

If resolved outside Venezuela, the two domains present different results:

Kaspersky Lab blocks the fake domain as phishing.

In this scenario, where the DNS servers are manipulated, it’s strongly recommended to use public DNS servers such as Google DNS servers (8.8.8.8 and 8.8.4.4) or CloudFlare and APNIC DNS servers (1.1.1.1 and 1.0.0.1). It’s also recommended to use VPN connections without a 3rd party DNS.
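For the curious, here is a minimal sketch of that advice in Python, assuming the third-party dnspython package (dnspython 2.x; older versions use .query() instead of .resolve()). It compares what the locally-configured resolver returns for the domain with the answers obtained directly from 8.8.8.8 and 1.1.1.1. A mismatch does not prove manipulation, but it is a prompt to investigate further; the script itself is an illustration, not something from the original article.

```python
# Illustrative only: compare the local (possibly ISP-controlled) resolver's
# answer for a domain against well-known public resolvers queried directly.
import socket
import dns.resolver  # third-party: pip install dnspython

DOMAIN = "voluntariosxvenezuela.com"
PUBLIC_RESOLVERS = {"Google": "8.8.8.8", "Cloudflare/APNIC": "1.1.1.1"}

# What the operating system's configured resolver says:
local_ips = sorted({ai[4][0] for ai in socket.getaddrinfo(
    DOMAIN, 443, family=socket.AF_INET, proto=socket.IPPROTO_TCP)})
print(f"Local resolver   : {local_ips}")

# What the public resolvers say when queried directly:
for name, server in PUBLIC_RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    public_ips = sorted(rr.to_text() for rr in resolver.resolve(DOMAIN, "A"))
    print(f"{name:<17}: {public_ips}")
    if public_ips != local_ips:
        print("  ! differs from the local resolver - possible DNS manipulation")
```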

Forcing the Adversary to Pursue Insider Theft

Jack Crook pointed me toward a story by Christopher Burgess about intellectual property theft by "Hongjin Tan, a 35 year old Chinese national and U.S. legal permanent resident... [who] was arrested on December 20 and charged with theft of trade secrets. Tan is alleged to have stolen the trade secrets from his employer, a U.S. petroleum company," according to the criminal complaint filed by the US DoJ.

Tan's former employer and the FBI allege that Tan "downloaded restricted files to a personal thumb drive." I could not tell from the complaint if Tan downloaded the files at work or at home, but the thumb drive ended up at Tan's home. His employer asked Tan to bring it to their office, which Tan did. However, he had deleted all the files from the drive. Tan's employer recovered the files using commercially available forensic software.

This incident, by definition, involves an "insider threat." Tan was an employee who appears to have copied information that was outside the scope of his work responsibilities, resigned from his employer, and was planning to return to China to work for a competitor, having delivered his former employer's intellectual property.

When I started GE-CIRT in 2008 (officially "initial operating capability" on 1 January 2009), one of the strategies we pursued involved insider threats. I've written about insiders on this blog before but I couldn't find a description of the strategy we implemented via GE-CIRT.

We sought to make digital intrusions more expensive than physical intrusions.

In other words, we wanted to make it easier for the adversary to accomplish his mission using insiders. We wanted to make it more difficult for the adversary to accomplish his mission using our network.

In a cynical sense, this makes security someone else's problem. Suddenly the physical security team is dealing with the worst of the worst!

This is a win for everyone, however. Consider the many advantages the physical security team has over the digital security team.

The physical security team can work with human resources during the hiring process. HR can run background checks and identify suspicious job applicants prior to granting employment and access.

Employees are far more exposed than remote intruders. Employees, even under cover, expose their appearance, likely residence, and personalities to the company and its workers.

Employees can be subject to far more intensive monitoring than remote intruders. Employee endpoints can be instrumented. Employee workspaces are instrumented via access cards, cameras at entry and exit points, and other measures.

Employers can cooperate with law enforcement to investigate and prosecute employees. They can control and deter theft and other activities.

In brief, insider theft, like all "close access" activities, is incredibly risky for the adversary. It is a win for everyone when the adversary must resort to using insiders to accomplish their mission. Digital and physical security must cooperate to leverage these advantages, while collaborating with human resources, legal, information technology, and business lines to wring the maximum results from this advantage.

NBlog Jan 31 – why so many IT mistakes?


Well, here we are on the brink of another month-end, scrabbling around to finalize and deliver February's awareness module in time for, errr, February.  

This week we've completed the staff and management security awareness and training materials on "Mistakes", leaving just the professional stream to polish-off today ... and I'm having some last-minute fun finding notable IT mistakes to spice-up the professionals' briefings. 

No shortage there!

Being 'notable' implies we don't need to explain the incidents in any detail - a brief reminder will suffice with a few words of wisdom to highlight some relevant aspect of the awareness topic. Link them into a coherent story and the job's a good 'un.

The sheer number of significant IT mistakes constitutes an awareness message in its own right: how come the IT field appears so extraordinarily error-prone? Although we don't intend to explore that question in depth through the awareness materials, our cunning plan is that it should emerge from the content and leave the audience pondering, hopefully chatting about it. Is IT more complex than other fields, making it harder to get right? Are IT pros unusually inept, slapdash and careless? What are the real root causes underlying IT's poor record? Does the blame lie elsewhere? Or is the assertion that IT has a poor record itself false, a mistake? 

The point of this ramble is that we've teased out something interesting and thought-provoking, directly relevant to the topic, contentious and hence stimulating. In awareness terms, that's a big win. Our job is nearly done. Just a few short hours to go now before the module is packaged and delivered, and the fun begins for our customers. 

NBlog Jan 23 – infosec policies rarer than breaches

I'm in shock.  While studying a security survey report, my eye was caught by the title page:


Specifically, the last bullet point is shocking: the survey found that less than a third of UK organizations have "a formal cyber security policy or policies". That seems very strange given the preceding two bullet points, firstly that more than a third have suffered "a cyber security breach or attack in the last 12 months" (so they can hardly deny that the risk is genuine), and secondly that a majority claim "cyber security is a high priority for their organisation's senior management" (and yet they don't even bother setting policies??).

Even without those preceding bullets, the third one seems very strange - so strange in fact that I'm left wondering if maybe there was a mistake in the survey report (e.g. a data, analytical or printing error), or in the associated questions (e.g. the questions may have been badly phrased) or in my understanding of the finding as presented. In my limited first-hand experience with rather less than ~2,000 UK organizations, most have information security-related policies in place today ... but perhaps that's exactly the point: they may have 'infosec policies' but not 'cybersec policies' as such. Were the survey questions in this area worded too explicitly or interpreted too precisely? Was 'cyber security' even defined for respondents, or 'policy' for that matter? Or is it that, being an infosec professional, I'm more likely to interact with organizations that have a clue about infosec, hence my sample is biased?

Thankfully, a little digging led me to the excellent technical annex with very useful details about the sampling and survey methods. Aside from some doubt about the way different sizes of organizations were sampled, the approach looks good to me, writing as a former research scientist, latterly an infosec pro - neither a statistician nor surveyor by profession.

Interviewers had access to a glossary defining a few potentially confusing terms, including cyber security:
"Cyber security includes any processes, practices or technologies that organisations have in place to secure their networks, computers, programs or the data they hold from damage, attack or unauthorised access." 
Nice! That's one of the most lucid definitions I've seen, worthy of inclusion in the NoticeBored glossary. It is only concerned with "damage, attack or unauthorised access" to "networks, computers, programs or the data they hold" rather than information risk and security as a whole, but still it is quite wide in scope. It is not just about hacks via the Internet by outsiders, one of several narrow interpretations in circulation. Nor is it purely about technical or technological security controls.

"Breach" was not defined though. Several survey questions used the phrase "breach or attack", implying that a breach is not an attack, so what is it? Your guess is as good as mine, or the interviewers' and the interviewees'!

Overall, the survey was well designed, competently conducted by trustworthy organizations, and hence the results are sound. Shocking, but sound.

I surmise that my shock relates to a mistake on my part. I assumed that most organizations had policies in this area. As to why roughly two thirds of them don't, one can only guess since the survey didn't explore that aspect, at least not directly. Given my patent lack of expertise in this area, I won't even hazard a guess. Maybe you are willing to give it a go?

Blog comments are open. Feedback is always welcome. 

NBlog Jan 17 – another day, another breach



https://haveibeenpwned.com/ kindly emailed me today with the news that my email credentials are among the 773 million disclosed in “Collection #1”.  Thanks Troy Hunt!

My email address, name and a whole bunch of other stuff about me is public knowledge so disclosure of that is no issue for me. I hope the password is an old one no longer in use. Unfortunately, though for good reasons, haveibeenpwned won’t disclose the passwords so I can’t tell directly which password was compromised … but I can easily enough change my password now so I have done, just in case.
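As an aside, HIBP's separate Pwned Passwords service does let you check whether a specific password appears in known breach corpora without disclosing it, via a k-anonymity range query: only the first five characters of the password's SHA-1 hash leave your machine. Here's a minimal sketch; the helper name and the prompt are my own invention, not part of the HIBP service.

```python
# Illustrative only: check a password against the Pwned Passwords range API
# (https://api.pwnedpasswords.com/range/<first 5 hex chars of SHA-1>).
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    """Return how many times this password appears in the breach corpus."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        for line in response.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    import getpass
    hits = times_pwned(getpass.getpass("Password to check: "))
    print(f"Seen {hits:,} times in breach data" if hits else "Not found in known breach data")
```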

I went through the tedious exercise of double-checking that all my hundreds of passwords are long, complex and unique some time ago – not too hard thanks to using a good password manager. [And, yes, I do appreciate that I am vulnerable to flaws, bugs, config errors and inept use of the password manager but I'm happy that it is relatively, not absolutely, secure. There are other information risks that give me more concern.]
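Generating long, complex, unique passwords is the easy part: any decent password manager does it for you, or a few lines of code will. A throwaway sketch using Python's standard secrets module, where the length and alphabet are arbitrary choices of mine:

```python
# Illustrative only: a long random password from the cryptographically-secure
# 'secrets' module; a password manager does the same job more conveniently.
import secrets
import string

def random_password(length: int = 24) -> str:
    """Generate a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```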

If you haven’t done that yet, take this latest incident as a prompt. Don't wait for the next one. 

Email compromises are pernicious. Aside from whatever salacious content there might be on my email account, most sites and apps now use email for password changes (and it’s often a fallback if multifactor authentication fails) so an email compromise may lead on to others, even if we use strong, unique passwords everywhere.

NBlog Jan 14 – mistaken awareness


Our next security awareness and training module for February concerns human error. "Mistakes" is its catchy title but what will it actually cover? What is its purpose? Where is it heading? 

[Scratches head, gazes vacantly into the distance]

Scoping any module draws on:
  • The preliminary planning, thinking, research and pre-announcements that led us to give it a title and a few vague words of description on the website;
  • Other modules, especially recent ones that are relevant to or touched on this topic with an eye to it being covered in February;
  • Preliminary planning for future topics that we might introduce or mention briefly in this one but need not cover in any depth - not so much a grand master plan covering all the awareness topics as a reasonably coherent overview, the picture-on-the-box showing the whole jigsaw;
  • Customer suggestions and feedback, plus conjecture about aspects or concerns that seem likely to be relevant to our customers given their business situations and industries e.g. compliance drivers;
  • General knowledge and experience in this area, including our understanding of good practices ... which reminds me to check the ISO27k and other standards for guidance and of course Google, an excellent way to dig out potentially helpful advice, current thinking in this area plus news of recent, public incidents involving human error;
  • Shallow and deep thought, day and night-dreaming, doodling, occasional caffeine-fueled bouts of mind-mapping, magic crystals and witchcraft a.k.a. creative thinking.

Scoping the module is not a discrete one-off event, rather we spiral-in on the final scope during the course of researching, designing, developing and finalizing the materials. Astute readers might have noticed this happen before, past modules sometimes changing direction and titles in the course of production. Maybe the planned scope turned out to be too ambitious or for that matter too limiting, too dull and boring for our demanding audiences, or indeed for us. Some topics are more inspiring than others.

So, back to "Mistakes": what will the NoticeBored module cover? What we have roughly in mind at this point is: human error, computer error, bugs and flaws, data-entry errors and GIGO, forced and unforced accidents, errors of commission and omission. Little, medium and massive errors, plus those that change. Errors that are are immediately and painfully obvious to all concerned, plus those that lurk quietly in the shadows, perhaps forever unrecognized as such. Error prevention, detection and correction. Failures of all sizes and kinds, including failures of controls to prevent, mitigate, detect and recover from incidents. Conceptual and practical errors. Strategic, tactical and operational errors, particularly mistaken assumptions, poor judgement and inept decision making (the perils of management foresight given incomplete knowledge and imperfect information). Mistakes by various third parties (customers, suppliers, partners, authorities, regulators, advisers, investors, other stakeholders, journalists, social media wags, the Great Unwashed ...) as well as by management and staff. Cascading effects due to clusters and dependencies, some of which are unappreciated until little mistakes lead to serious incidents.

Hmmm, that's more than enough already, if an unsightly jumble!

Talking of incidents, we've started work on a brand new awareness module due for April about incident detection, hence we won't delve far into incident management in February, merely titillating our audiences (including you, dear blog reader) with brief tasters of what's to come, sweet little aperitifs to whet the appetite.  

Q: is an undetected incident an incident?  

A: yes. The fact that it hasn't (yet) been detected may itself constitute a further incident, especially if it turns out to be serious and late/non-detection makes matters even worse.