Category Archives: Incidents

NBlog Sept 17 – fragility

In preparation for a forthcoming NoticeBored security awareness module, I'm researching business continuity. Today, by sheer coincidence, I've stumbled into a business discontinuity: specifically, the website of a commercial company sponsoring a popular multi-week promotion on a New Zealand radio show is currently unavailable. It seems to have been so fragile that it broke.

This is how the web page looks right now:

Mostly white space. 502 is the standard HTTP status code for a 'bad gateway', meaning that an intermediate network system (typically a front-end proxy) cannot contact the company's web server. It appears to be dead. Resting maybe.

The HTML code for the sparse error page is almost as sparse - just these 14 lines, half of which are comments:


DownForEveryoneOrJustMe.com tells me it's not just my Internet connection playing up. The website really is unreachable.

That's the NZ website. The company's Australian website is also unavailable, whereas its US site is up and running. 
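The status-code logic that uptime checkers rely on is easy to demonstrate. Here's a minimal sketch, assuming nothing beyond the standard HTTP status codes (the function and its wording are my own illustration, not any particular monitoring product):

```python
from http import HTTPStatus

def classify(status: int) -> str:
    """Crudely classify an HTTP status code, as a simple uptime monitor might."""
    if 200 <= status < 300:
        return "up"
    if status == HTTPStatus.BAD_GATEWAY:          # 502
        return "down: intermediary cannot reach the back-end server"
    if status == HTTPStatus.SERVICE_UNAVAILABLE:  # 503
        return "down: server overloaded or in maintenance"
    if status == HTTPStatus.GATEWAY_TIMEOUT:      # 504
        return "down: back-end server too slow to respond"
    if 500 <= status < 600:
        return "down: other server-side error"
    return "reachable, but returned %d" % status

print(classify(502))  # the code the company's site is showing right now
```

Any of the 5xx outcomes would count as "down" for a monitoring dashboard - which is why the lack of an effective reaction here is telling.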

nginx is a web server commonly deployed as a front-end reverse proxy, cache or load balancer. Given the radio promotion, it is possible the company is using nginx as a cache to reduce an anticipated heavy load on the web server, or to balance the load across several web servers, but either way it evidently isn't working out right now.
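We can't see the company's actual configuration, of course, but the kind of set-up speculated about above looks something like this sketch (the upstream server names are invented for illustration):

```nginx
# Hypothetical load-balancing front end: nginx proxies requests to a
# pool of back-end web servers. If every upstream is down or refusing
# connections, visitors see exactly this kind of bare 502 Bad Gateway page.
upstream backend {
    server web1.example.com;
    server web2.example.com;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        # With a proxy cache configured, stale cached copies can be served
        # when the back end errors out - which might have masked today's outage:
        proxy_cache_use_stale error timeout http_502 http_503;
    }
}
```

The point for awareness purposes: a front-end proxy adds resilience only if the back end, the cache and the monitoring around them all work together.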

Summing up the situation:

  • The company has planned and paid for a radio promotion including links to its website: management must have known this was coming;
  • Management appears (at some point) to have made technical arrangements to cope with a heavy load on the webserver: presumably, it anticipated the risk of the website being overloaded;
  • The technical arrangements appear to have failed: the website is currently unavailable;
  • Either management doesn't know the corporate website is down (due to the lack of effective monitoring) or it knows but hasn't reacted effectively (maybe nginx was the response: it hasn't worked for me, today);
  • The company has fallen off the web, making it hard for potential customers to make contact and do business;
  • That, in turn, has implications for its public image: its brand is becoming somewhat tarnished by this incident. It's not a good look.

This is a classic information security (availability) incident with business implications. The website evidently wasn't sufficiently resilient, and the incident does not appear to have been handled effectively.

Of course, we can only guess at some of this in the absence of further information. Perhaps my assumptions are wrong. Maybe the fault lies elsewhere and/or the situation is more complex than it appears. Conceivably, the site might even have been taken down deliberately as a response to some other incident. We just don't know.

But we do have a little case study for the awareness module. I'll continue checking the site to see what happens next - how the situation resolves and perhaps gleaning further information about the incident.

[I haven't named the company because it isn't necessary to do so, and I don't want to make the incident any worse for them than it already is by prompting YOU to go check out their website as well!]

NBlog Sept 15 – the business value of infosec

Thanks to a heads-up from Walt Williams, I'm mulling over a report by Comparitech indicating that the announcement of serious "breaches" by commercial organizations depresses their stock prices relative to the stock market as a whole.

I've put "breach" in quotes because the study focuses on public disclosures by large US commercial corporations of significant incidents involving the unauthorized release of large quantities of personal data, credit card numbers etc. That's just one type of information security incident, or breach of security, and just one type of organization. There are many others.


The situation is clearly complex, with a number of factors, some of which act in opposition (e.g. the publicity around a "breach" is still publicity!). There are several constraints and assumptions in the study (e.g. small samples) so personally I'm quite dubious about the conclusions ... but it adds some weight to the not unreasonable claim that "breaches" are generally bad for business. At the very least, it rejects the null hypothesis that "breaches" have no effect on business.
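The arithmetic behind studies of this kind is simple enough: a market-adjusted 'abnormal return' is the stock's return minus the market's return over the same period, summed across a window around the disclosure. A sketch with invented daily figures (these numbers are mine, not from the Comparitech report):

```python
def abnormal_returns(stock, market):
    """Market-adjusted abnormal return per period: stock return minus market return."""
    return [s - m for s, m in zip(stock, market)]

def cumulative(ar):
    """Cumulative abnormal return (CAR) over the event window."""
    return sum(ar)

# Invented daily returns (%) for the week around a hypothetical breach disclosure:
stock  = [0.1, -2.4, -1.1, 0.3, -0.5]
market = [0.2,  0.1, -0.3, 0.4,  0.0]

ar = abnormal_returns(stock, market)
print(round(cumulative(ar), 2))  # -> -4.0: under-performance relative to the market
```

A persistently negative CAR across many "breached" companies is what the study is claiming; the weakness of the effect is what intrigues me.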

Personally, I'm intrigued to find that "breaches" do not have a more marked effect on stock price. The correlation seems surprisingly weak to me, suggesting that I am biased, over-estimating the importance of infosec - another not unreasonable assumption given that I am an infosec pro! It's the centre of my little world after all!

Aside from the fairly weak "breach" effect, I'd be fascinated to learn more about the approaches towards information risk, security, privacy, governance, incident management, risk & security strategy, compliance etc. that differentiate relatively strong from relatively weak performers on the stock market, using that as an indicator of business performance ... and indeed various other indicators such as turnover, profitability, market share, brand value etc. I'm particularly interested in leading indicators - the things that tend to precede relatively strong or weak performance.

On the flip side, I'd be interested to know whether 'good news' security disclosures/announcements (such as gaining ISO27k or other security certifications, or winning court cases over intellectual property) can be demonstrated to be good for business. Given my inherent personal bias and focus on infosec, I rather suspect the effect (if any) will be weaker than I expect ... but I'm working on it!

NBlog August 30 – A-to-Z of outsider threats

I love it when a plan comes together!  

We're close to completing the NoticeBored 'outsider threats' security awareness module for September, checking and finalizing the materials. Things are getting tense as the IsecT office clock ticks away the remaining hours.

Normally, we develop awareness briefings for each of the three audience groups from the corresponding three awareness seminar slide decks, using the graphics and notes as donor/starter content and often following a similar structure. 

Having finished the staff seminar this morning, I anticipated using that as the basis for a staff briefing as usual ... but, on reflection, I realized that we have more than enough content to prepare a lengthier A-to-Z guide to outsider threats instead. 

The sheer number and variety of outsider threats and incidents is itself a strong awareness message. Listing and (briefly) describing them in an alphabetical sequence makes sense. 

This will be an interesting read for awareness and training purposes and, I believe, a useful reference document - essentially a 'threat catalog' to help identify and assess the information risks relating to outsiders and other external threats. 

If your current list of outsider threats and risks has only a handful of entries, you should expect to be caught out by any of the dozens you have failed to consider.

Preparing it sounds great in theory but potentially it's too much work for the little time remaining ... except that I had the foresight to prepare a Word template for the A-to-Z guides from the last one we prepared. Now 'all I have to do' is paste in lists of threats and incidents already written in other awareness materials, click the magic button to sort them alphabetically, apply the Word styles to make the whole product look presentable then check it through for consistency. OK so there's a bit more to it than that but it's coming along rapidly and will be done in time. Having written about 9 pages so far, I'm taking a break after some 9 hours' intense concentration, resting and hoping not to wake up at 2 or 3 am with a head full of it!  It needs about 2 or 3 more hours' work in the morning to complete the remaining 2 or 3 pages (spot the formula!). At least, that's the plan.

When it's all done, maybe we could offer it for sale as a combined awareness/training piece and outsider threat catalog through the SecAware website: what do you think?  Is it something that would interest you dear reader? Would you be prepared to invest a few dollars for immediate download? NoticeBored subscribers will receive it as part of their subscription, naturally, but I think it has some potential and value as a standalone product for wider readership. 

Failing that, we might just release it as a freebie for marketing purposes, or seek to get it published in one of the trade journals. Or sit on it, updating it from time to time as inspiration strikes. We'll see how it goes.  

For now, though, I'm all in and off to bed to recharge my flagging grey matter for the final slog.

NBlog August 29 – outsider threats and incidents

The wide variety of threatening people, organizations and situations Out There, and the even wider variety of outsider incidents, is quite overwhelming ... which means we need to simplify things for awareness purposes. If we try to cover too much at once, we'll confuse, overwhelm and maybe lose our audiences, if not ourselves.

On the other hand, that variety is itself an important lesson from September's awareness module. It's not sufficient, for instance, for the cybersecurity team to lock down the corporate firewall in order to block hackers and malware while neglecting other outsider threats such as intellectual property theft and disinformation. Organizations are in a difficult position, trying to avoid, prevent or limit all manner of outsider incidents, some of which are particularly difficult to even identify let alone control. It's soot-juggling really.

With our start-of-month delivery deadline imminent, we're currently finalizing September's NoticeBored slide decks and briefings, focusing on the key messages and making sure they have enough impact to resonate with the awareness audiences - our own version of soot-juggling. We have the advantage of being able to delve into things in more depth later, thanks to the rolling program of awareness topics. Next month, for example, we'll focus specifically on phishing, so this month we'll briefly mention phishing as a form of outsider social engineering cyber-attack without having to explain all of that just now.

Things always become a bit frantic in the IsecT office as the deadline looms. On the bright side, we've done a stack of prep-work during the month plus research prior to that so we have no shortage of content. And we've been here many times before - every single month for the past 15 years in fact! So, that's it for now. Must dash. Speling to dubble-chek. Shiny things to polish.

NBlog August 13 – xenophobic PIGs

Today, I'm meandering (rambling!) on from Friday's post about systematically managing outsider threats, returning to an older theme about using Probability Impact Graphs (PIGs) for both risk analysis and security awareness purposes.

One of the more unusual information risks on our radar for September's outsider threats awareness module is xenophobia - the fear of strangers. It has a deep biological basis: most animals naturally congregate and live with others of their kind, forming social groups (families, flocks, tribes etc.) while excluding those who are 'different' - most obviously predators. The differences aren't always obvious to us humans. Sheep, for instance, recognize each other more by sound and smell than by color.

Compared to other risks in this domain, xenophobia is fairly widespread, putting it roughly halfway along the probability scale. But what of the business impacts of xenophobia afflicting employees? Hmmm, not so easy. As is often the way, the consequences depend on the circumstances or context in which incidents may occur. In this specific case, there may even be benefits (such as spotting possible intruders - corporate predators!) as well as adverse impacts (such as racism). Personally, on balance and bearing in mind the other outsider threats we're also concerned about, I'd put the impacts towards the bottom of the scale, putting xenophobia somewhere left of center on the generic Probability Impact Graph ...

... but it doesn't end there. How does the xenophobia information risk compare to others?  I've shown just one other risk here of the 8 or so we have identified already as an indication of what we mean by 'information risk', and to illustrate the range. In our estimation, the risk of a "Targeted hack or malware attack" is slightly less likely than xenophobia but has a significantly higher impact on the organization if it does occur.
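For readers who prefer code to pictures, the zoning logic behind a PIG can be sketched in a few lines. The scale, thresholds and example scores below are my own illustrative choices, not a standard:

```python
def pig_zone(probability: float, impact: float) -> str:
    """Place a risk on a 0-1 probability/impact grid and colour its zone.
    Thresholds are illustrative, not standard."""
    score = probability * impact
    if score >= 0.5:
        return "red"    # clear priority for treatment
    if score >= 0.2:
        return "amber"  # worth discussing
    return "green"      # probably acceptable

# Rough, subjective positions for the two risks discussed above:
risks = {
    "Xenophobia": (0.5, 0.2),                       # mid probability, low impact
    "Targeted hack or malware attack": (0.45, 0.8), # slightly less likely, far higher impact
}

for name, (p, i) in risks.items():
    print(f"{name}: {pig_zone(p, i)}")
```

Notice that the value of the exercise lies less in the zone labels than in the argument about where each risk belongs - exactly the discussion a risk workshop is meant to provoke.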

OK, are you with me so far? What are you thinking at this point? My guess is that you're either cruising along, going with the flow, or puzzling over the meanings, implications and positions of those two information risks. Maybe that prior almost incidental mention of racism has lit your blue touchpaper already, and maybe you don't consider xenophobia even remotely relevant to the topic at hand. Perhaps you would put xenophobia elsewhere on the PIG, or split it into various incidents with differing implications - and likewise with the other risk. Possibly you are confused over the meaning of xenophobia, or consider it something that insiders might have and therefore out of scope of the outsider threats topic ...

Fantastic! In terms of the key objectives of security awareness and training, the PIG is working nicely: it has set you thinking about the topic area, considering those two risks, comparing and contrasting them. 

Now imagine there are another 6+ information risks plonked on the same PIG, described in fairly straightforward terms and analyzed subjectively in much the same manner, with similar issues and concerns arising ... and you'll appreciate the power of this technique, especially in a group setting such as a risk workshop or online discussion forum. It is both creative/stimulating and analytical/pragmatic, leading naturally into the discussions around what ought to be done about the information risks, particularly any in the red zone (clear priorities). It harnesses the group's expertise and experience, challenges prejudices and biases, and helps people contemplate quite complex matters productively.

I commend it to the house.

NBlog July 13 – ISO/IEC 27001 Annex A status


I've just completed an internal audit of an ISO27k ISMS for a client. By coincidence, a thread on ISO27k Forum this morning brought up an issue I encountered on the audit, and reminded me of a point that has been outstanding for several years now.

The issue concerns the formal status of ISO/IEC 27001:2013 Annex A arising from ambiguities or conflicts in the main body wording and in the annex. 

Is Annex A advisory or mandatory? Are the controls listed in Annex A required by default, or optional, simply to be considered or taken into account?

The standard is distinctly ambiguous on this point, in fact there are direct conflicts within the wording - not good for a formal specification against which organizations are being audited and certified compliant.

Specifically, main body clause 6.1.3 Information security risk treatment clearly states as a note that "Organizations can design controls as required, or identify them from any source." ... which means they are not required to use Annex A.

So far so good ... however, the very next line of the standard requires them to "compare the controls determined in 6.1.3 b) above with those in Annex A and verify that no necessary controls have been omitted". This, to me, is a badly-worded suggestion to use Annex A as a checklist. Some readers may interpret it to mean that, by default, all the Annex A controls are "necessary", but (as I understand the position) that was not the intent of SC 27. Rather, "necessary" here refers to the organization's decision to treat some information risks by mitigating them using specific controls, or not. If the organization chooses to use certain controls, those controls are "necessary" for the organization, not mandatory for compliance with the standard.

To make matters worse still, a further note describes Annex A as "a comprehensive list of control objectives and controls", a patently false assertion. No list of control objectives and controls can possibly be totally comprehensive since that is an unbounded set. For starters, someone might invent a novel security control today, one that is not listed in the standard since it didn't exist when it was published. Also, there is a near-infinite variety of controls including variants and combinations of controls: it is literally impossible to identify them all, hence "comprehensive" is wrong.

The standard continues, further muddying the waters: "Control objectives are implicitly included in the controls chosen. The control objectives and controls listed in Annex A are not exhaustive and additional control objectives and controls may be needed." This directly contradicts the previous use of "comprehensive".

As if that's not bad enough already, the standard's description of the Statement of Applicability yet again confuses matters. "d) produce a Statement of Applicability that contains the necessary controls (see 6.1.3 b) and c)) and justification for inclusions, whether they are implemented or not, and the justification for exclusions of controls from Annex A". So, despite the earlier indication that Annex A is merely one of several possible checklists or sources of information about information security controls, the wording here strongly implies, again, that it is a definitive, perhaps even mandatory set after all.
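The comparison-and-SoA exercise itself is mechanical, which a short sketch makes plain. The control names here are a tiny invented stand-in for the full Annex A list, purely for illustration:

```python
# Hypothetical fragment of an Annex A-style checklist (NOT the real, full list):
annex_a = {"A.9.2 Access control", "A.12.2 Malware protection", "A.12.3 Backup"}

# Controls the organization chose under 6.1.3 b), from whatever sources it likes:
chosen = {"A.9.2 Access control", "A.12.3 Backup", "Custom: DNS sinkholing"}

# 6.1.3 c): compare against Annex A to check nothing necessary was omitted.
# An omission is only a problem if the control is actually necessary here.
omitted = annex_a - chosen

# 6.1.3 d): the Statement of Applicability records each control's status,
# with justifications for inclusions and exclusions alike.
soa = {control: "included" for control in chosen}
soa.update({control: "excluded - justification required" for control in omitted})

for control, status in sorted(soa.items()):
    print(f"{control}: {status}")
```

Note that the code happily accepts controls from outside Annex A - which is precisely the reading of 6.1.3 that I believe SC 27 intended, ambiguous drafting notwithstanding.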

Finally, Annex A creates yet more problems. It is identified as "Normative", a key word in ISO-land meaning "mandatory". Oh. And then several of the controls use the key word "shall", another word reserved for mandatory requirements in ISO-speak.

What a bloody mess!

Until this is resolved by wording changes in a future release of the standard, I suggest taking the following line:
  • Identify and examine/analyse/assess/evaluate your information risks;
  • Decide how to treat them (avoid, mitigate, share and/or accept);
  • Treat them however you like: it is YOUR decision, and you should be willing to justify your decision … but I generally recommend prioritizing and treating the most significant risks first and best, working systematically down towards the trivia where the consequences of failing to treat them so efficiently and effectively are of less concern;
  • For risks you decide to mitigate with controls, choose whatever controls suit your situation. Aside from Annex A, there are many other sources of potential controls, any of which might be more suitable and that’s fine: go right ahead and use whatever controls you believe mitigate your information risks, drawing from Annex A or advice from NIST, DHS, CSA, ISACA, a friend down the pub, this blog, whatever. It is your choice. Knock yerself out;
  • If they challenge your decisions, refer the certification auditors directly to the note under 6.1.3 b): Organizations can design controls as required, or identify them from any source. Stand your ground on that point and fight your corner. Despite the other ambiguities, I believe that note expresses what the majority of SC 27 intended and understood. If the auditors are really stubborn, demonstrate why your controls are at least as effective or even better than those suggested in Annex A;
  • Perhaps declare the troublesome Annex A controls “Not applicable” because you prefer to use some other more appropriate control instead;
  • As a last resort, declare that the corresponding risks are acceptable, at least for now, pending updates to the standard and clearer, more useful advice;
  • Having supposedly treated the risks, check that the risk level remaining after treatment (“residual risk”) is acceptable, otherwise cycle back again, adjusting the risk treatment accordingly (e.g. additional or different controls).
If you are still uncertain about this, talk it through with your certification auditors – preferably keeping a written record of their guidance or ruling. If they are being unbelievably stubborn and unhelpful, find a different accredited certification body and/or complain about this to the accreditation body. You are the paying customer, after all, and it’s a free market!
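The iterate-until-acceptable logic in the final bullet can be sketched as a simple loop. The risk levels, control effectiveness figures and appetite threshold here are invented numbers, purely to illustrate the cycle:

```python
def treat(risk_level: float, controls, appetite: float) -> float:
    """Apply mitigating controls in priority order until residual risk is acceptable.
    Each control's value is the fraction of risk it removes (invented figures)."""
    for effectiveness in controls:
        if risk_level <= appetite:
            break  # residual risk acceptable: stop adding controls
        risk_level *= (1 - effectiveness)
    return risk_level

# Start at risk 0.8 with an appetite of 0.25; apply controls best-first:
residual = treat(risk_level=0.8, controls=[0.5, 0.4, 0.3], appetite=0.25)
print(round(residual, 2))  # -> 0.24: acceptable, so the third control was never needed
```

Real risk treatment is messier, of course - risks interact and controls fail - but the cycle of treat, measure the residual, and loop back if it's still unacceptable is exactly the discipline the standard is driving at.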