Category Archives: Incidents

NBlog Jan 23 – infosec policies rarer than breaches

I'm in shock.  While studying a security survey report, my eye was caught by the title page:


Specifically, the last bullet point is shocking: the survey found that less than a third of UK organizations have "a formal cyber security policy or policies". That seems very strange given the preceding two bullet points: firstly, that more than a third have suffered "a cyber security breach or attack in the last 12 months" (so they can hardly deny that the risk is genuine); and secondly, that a majority claim "cyber security is a high priority for their organisation's senior management" (and yet they don't even bother setting policies??).

Even without those preceding bullets, the third one seems very strange - so strange in fact that I'm left wondering if maybe there was a mistake in the survey report (e.g. a data, analytical or printing error), or in the associated questions (e.g. the questions may have been badly phrased), or in my understanding of the finding as presented. In my limited first-hand experience with rather fewer than ~2,000 UK organizations, most have information security-related policies in place today ... but perhaps that's exactly the point: they may have 'infosec policies' but not 'cybersec policies' as such. Were the survey questions in this area worded too explicitly or interpreted too precisely? Was 'cyber security' even defined for respondents, or 'policy' for that matter? Or is it that, being an infosec professional, I'm more likely to interact with organizations that have a clue about infosec, hence my sample is biased?

Thankfully, a little digging led me to the excellent technical annex with very useful details about the sampling and survey methods. Aside from some doubt about the way different sizes of organizations were sampled, the approach looks good to me, writing as a former research scientist, latterly an infosec pro - neither a statistician nor a surveyor by profession.

Interviewers had access to a glossary defining a few potentially confusing terms, including cyber security:
"Cyber security includes any processes, practices or technologies that organisations have in place to secure their networks, computers, programs or the data they hold from damage, attack or unauthorised access." 
Nice! That's one of the most lucid definitions I've seen, worthy of inclusion in the NoticeBored glossary. It is only concerned with "damage, attack or unauthorised access" to "networks, computers, programs or the data they hold" rather than information risk and security as a whole, but still it is quite wide in scope. It is not just about hacks via the Internet by outsiders, one of several narrow interpretations in circulation. Nor is it purely about technical or technological security controls.

"Breach" was not defined though. Several survey questions used the phrase "breach or attack", implying that a breach is not an attack, so what is it? Your guess is as good as mine, or the interviewers' and the interviewees'!

Overall, the survey was well designed, competently conducted by trustworthy organizations, and hence the results are sound. Shocking, but sound.

I surmise that my shock relates to a mistake on my part. I assumed that most organizations had policies in this area. As to why roughly two thirds of them don't, one can only guess since the survey didn't explore that aspect, at least not directly. Given my patent lack of expertise in this area, I won't even hazard a guess. Maybe you are willing to give it a go?

Blog comments are open. Feedback is always welcome. 

NBlog Jan 17 – another day, another breach



https://haveibeenpwned.com/ kindly emailed me today with the news that my email credentials are among the 773 million disclosed in “Collection #1”.  Thanks Troy Hunt!

My email address, name and a whole bunch of other stuff about me is public knowledge so disclosure of that is no issue for me. I hope the password is an old one no longer in use. Unfortunately, though for good reasons, haveibeenpwned won’t disclose the passwords so I can’t tell directly which password was compromised … but I can easily enough change my password now so I have done, just in case.

I went through the tedious exercise of double-checking that all my hundreds of passwords are long, complex and unique some time ago – not too hard thanks to using a good password manager. [And, yes, I do appreciate that I am vulnerable to flaws, bugs, config errors and inept use of the password manager but I'm happy that it is relatively, not absolutely, secure. There are other information risks that give me more concern.]
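As it happens, you can check whether a particular password (as opposed to an email address) appears in the Pwned Passwords corpus without ever sending the password itself: the range endpoint uses k-anonymity, so only the first five hex characters of the password's SHA-1 leave your machine. A minimal sketch, assuming the publicly documented api.pwnedpasswords.com/range endpoint and omitting error handling:

```python
import hashlib
import urllib.request

def count_in_range(range_body: str, suffix: str) -> int:
    # The range endpoint returns lines of "SUFFIX:COUNT"; find our suffix.
    for line in range_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip().upper() == suffix:
            return int(count)
    return 0

def pwned_count(password: str) -> int:
    # Only the 5-char SHA-1 prefix is sent; the suffix is matched locally.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as resp:
        return count_in_range(resp.read().decode("utf-8"), suffix)
```

A non-zero count means the password has appeared in a known breach and should be retired, whether or not your particular account was the source.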

If you haven’t done that yet, take this latest incident as a prompt. Don't wait for the next one. 

Email compromises are pernicious. Aside from whatever salacious content there might be on my email account, most sites and apps now use email for password changes (and it’s often a fallback if multifactor authentication fails) so an email compromise may lead on to others, even if we use strong, unique passwords everywhere.
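That cascade can be pictured as a small dependency graph: an edge from account A to account B means B can be reset via A, so everything reachable from a compromised mailbox is at risk. A sketch (the account names are invented for illustration):

```python
from collections import deque

# Illustrative recovery graph: "email" is the reset channel for three
# accounts, and "social" is in turn the reset channel for "forum".
RECOVERY_EDGES = {
    "email": ["banking", "shopping", "social"],
    "social": ["forum"],
}

def at_risk(compromised: str) -> set:
    """Breadth-first walk: every account reachable from the compromised one."""
    risky, queue = set(), deque([compromised])
    while queue:
        node = queue.popleft()
        for child in RECOVERY_EDGES.get(node, []):
            if child not in risky:
                risky.add(child)
                queue.append(child)
    return risky
```

Here a compromised mailbox exposes all four downstream accounts, which is why the email account deserves the strongest protection of the lot.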

NBlog Jan 14 – mistaken awareness


Our next security awareness and training module for February concerns human error. "Mistakes" is its catchy title but what will it actually cover? What is its purpose? Where is it heading? 

[Scratches head, gazes vacantly into the distance]

Scoping any module draws on:

  • The preliminary planning, thinking, research and pre-announcements that led us to give it a title and a few vague words of description on the website;
  • Other modules, especially recent ones that are relevant to, or touched on, this topic with an eye to its being covered in February;
  • Preliminary planning for future topics that we might introduce or mention briefly in this one but need not cover in any depth - not so much a grand master plan covering all the awareness topics as a reasonably coherent overview, the picture-on-the-box showing the whole jigsaw;
  • Customer suggestions and feedback, plus conjecture about aspects or concerns that seem likely to be relevant to our customers given their business situations and industries e.g. compliance drivers;
  • General knowledge and experience in this area, including our understanding of good practices ... which reminds me to check the ISO27k and other standards for guidance and of course Google, an excellent way to dig out potentially helpful advice, current thinking in this area plus news of recent, public incidents involving human error;
  • Shallow and deep thought, day and night-dreaming, doodling, occasional caffeine-fueled bouts of mind-mapping, magic crystals and witchcraft a.k.a. creative thinking.
Scoping the module is not a discrete one-off event, rather we spiral-in on the final scope during the course of researching, designing, developing and finalizing the materials. Astute readers might have noticed this happen before, past modules sometimes changing direction and titles in the course of production. Maybe the planned scope turned out to be too ambitious or for that matter too limiting, too dull and boring for our demanding audiences, or indeed for us. Some topics are more inspiring than others.

So, back to "Mistakes": what will the NoticeBored module cover? What we have roughly in mind at this point is: human error, computer error, bugs and flaws, data-entry errors and GIGO, forced and unforced accidents, errors of commission and omission. Little, medium and massive errors, plus those that change. Errors that are immediately and painfully obvious to all concerned, plus those that lurk quietly in the shadows, perhaps forever unrecognized as such. Error prevention, detection and correction. Failures of all sizes and kinds, including failures of controls to prevent, mitigate, detect and recover from incidents. Conceptual and practical errors. Strategic, tactical and operational errors, particularly mistaken assumptions, poor judgement and inept decision making (the perils of management foresight given incomplete knowledge and imperfect information). Mistakes by various third parties (customers, suppliers, partners, authorities, regulators, advisors, investors, other stakeholders, journalists, social media wags, the Great Unwashed ...) as well as by management and staff. Cascading effects due to clusters and dependencies, some of which are unappreciated until little mistakes lead to serious incidents.

Hmmm, that's more than enough already, if an unsightly jumble!

Talking of incidents, we've started work on a brand new awareness module due for April about incident detection, hence we won't delve far into incident management in February, merely titillating our audiences (including you, dear blog reader) with brief tasters of what's to come, sweet little aperitifs to whet the appetite.  

Q: is an undetected incident an incident?  

A: yes. The fact that it hasn't (yet) been detected may itself constitute a further incident, especially if it turns out to be serious and late/non-detection makes matters even worse.

NBlog Jan 1 – putting resilience on the agenda

Resilience means bending not breaking, surviving issues or incidents that might otherwise be disastrous. Resilient things aren’t necessarily invulnerable but they are definitely not fragile. Under extreme stress, their performance degrades gracefully but, mostly, they just keep on going ... like we do.  

In the 15 years since we launched the NoticeBored service, we've survived emigrating to New Zealand, the usual ups and downs of business plus the Global Financial Crisis. Lately we're seeing an upturn in sales as customers release the strangle-holds on their awareness and training budgets ... and invest in becoming more resilient to survive future challenges.  

The following snippet from the Financial Conduct Authority's new report "Cyber and Technology Resilience" caught my beady eye in the course of researching and writing January's materials:

The NoticeBored service supplies top-quality creative content for security awareness and training programs on a market-leading range of topics, with parallel streams of material aimed at the workforce in general plus managers and professionals specifically. Getting all three audiences onto the same page and encouraging interaction is a key part of socializing information risk and security, promoting the security culture.

'Resilience' is the 189th NoticeBored awareness and training module and the 67th topic in our bulging portfolio. If your security awareness and training program is limited to basic topics such as viruses and passwords, with a loose assortment of materials forming an unsightly heap, it's no wonder your employees are bored stiff. A dull and uninspiring program achieves little value, essentially a waste of money. Furthermore, if it only addresses "end users" and "cybersecurity" i.e. just IT security, again you're missing a trick. Resilience, for example, has profound implications for the entire business and beyond, with supply chains and stakeholders to consider. Resilient computing is just part of it.

For a highly cost-effective approach, read about January's NoticeBored security awareness and training module on resilience and get in touch to subscribe. I'm not just talking about the 'disappointing' 10% of financial companies apparently lacking an awareness program (!) but all organizations, including those of you who already get it and have something running. As we plummet towards 2020, seize the opportunity and ear-mark a tiny sliver of budget to energize your organization's approach to security awareness and training with NoticeBored. We're keen to help you toughen-up, making 2019 a happy, resilient and secure year ahead. Make security awareness your new year's resolution.

Happy new year!

NBlog Dec 29 – awareness case study

The drone incident at Gatwick airport makes a good backdrop for a security awareness case study discussion around resilience.  

It's a big story globally, all over the news, hence most participants will have heard something about it. Even if a few haven't, the situation is simple enough for them to pick up on and engage in the conversation.

The awareness objective is for participants to draw out, consider, discuss and learn about the information risk, information or cybersecurity aspects, in particular the resilience angle ... but actually, that's just part of it. It would be better if participants were able to generalize from the Gatwick drone incident, seeing parallels in their own lives (at work and at home) and ultimately respond appropriately. The response we're after involves workers changing their attitudes, decisions and behaviors e.g.:
  • Considering society's dependence on various activities, services, facilities, technologies etc., as well as the organization and their own dependencies, and ideally reducing dependence on vulnerable aspects;
  • Becoming more resilient i.e. stronger, more willing and able to cope with incidents and challenges of all kinds;
  • Identifying and reacting appropriately to various circumstances that are short on resilience e.g. avoiding placing undue reliance on relatively fragile or unreliable systems, comms, processes and relationships;
  • Perhaps even actively exploiting situations, gaining business advantage by persuading competitors or adversaries to rely unduly on their resilience arrangements (!).
Assorted journalists, authorities and bloggers are keen to point out that the Gatwick drone incident is 'a wake-up call' and that 'something must be done'. Most imply that they are concerned about other airports and, fair enough, the lessons are crystal clear in that context ... but we have deliberately expanded across other areas where resilience is just as important, along with risk, security, safety, reliability, technology and more.

That's a lot of awareness mileage from a public news story but, as with the awareness challenge, putting the concept into practice is where we earn our trivial fees!


Visit the website or contact me to find out more about the NoticeBored service, and to quote you a trivial price - so low in fact that avoiding a single relatively minor incident should more than justify the annual running costs of your entire security awareness and training program. 

By the way, we set our sights much higher than that!

NBlog Dec 28 – US Dept of Commerce shutdown

Earlier this year I heard about the threatened shutdown of WWV and WWVH, NIST's standard time and frequency services, due to the withdrawal of government funding - an outrageous proposal for those of us around the world who use NIST's scientific services routinely to calibrate our clocks and radios.

Today while hunting for a NIST security standard that appears to no longer be online, I was shocked to learn that it's not just WWV that is closing down: it turns out all of NIST is under threat, in fact the entire US Department of Commerce.

Naturally, being a large bureaucratic government organization, there is a detailed plan for the shutdown with details of certain 'exempt' government services that must be maintained according to US law although how those services and people are to be paid is unclear to me. After the funding ceases, DoC employees are required (or is that requested?) to turn up for work for a few more hours to set their out-of-office notifications (on the IT systems that are presumably about to be turned off?), then piss off basically.  

To me, that's an almost unbelievably callous way to treat public servants. 

So is this fake news? Is it "just politics", brinkmanship by Mr Trump's administration I wonder? 

The root cause, I presume, is the usual disparity between the government's income and expenses, fueled by battles between the political parties plus their 'lobbyists' and the extraordinarily xenophobic pressure to spend spend spend on 'defense'. I gather the US-Mexico border wall is, after all (surprise surprise), to be funded by the US, so that's yet another splash of red ink across the government's books.

DarkVishnya: Banks attacked through direct connection to local network

While novice attackers, imitating the protagonists of the U.S. drama Mr. Robot, leave USB flash drives lying around parking lots in the hope that an employee from the target company picks one up and plugs it in at the workplace, more experienced cybercriminals prefer not to rely on chance. In 2017-2018, Kaspersky Lab specialists were invited to research a series of cybertheft incidents. Each attack had a common springboard: an unknown device directly connected to the company’s local network. In some cases, it was the central office, in others a regional office, sometimes located in another country. At least eight banks in Eastern Europe were the targets of the attacks (collectively nicknamed DarkVishnya), which caused damage estimated in the tens of millions of dollars.

Each attack can be divided into several identical stages. At the first stage, a cybercriminal entered the organization’s building under the guise of a courier, job seeker, etc., and connected a device to the local network, for example, in one of the meeting rooms. Where possible, the device was hidden or blended into the surroundings, so as not to arouse suspicion.

High-tech tables with sockets are great for planting hidden devices

The devices used in the DarkVishnya attacks varied in accordance with the cybercriminals’ abilities and personal preferences. In the cases we researched, it was one of three tools:

  • netbook or inexpensive laptop
  • Raspberry Pi computer
  • Bash Bunny, a special tool for carrying out USB attacks

Inside the local network, the device appeared as an unknown computer, an external flash drive, or even a keyboard. Combined with the fact that Bash Bunny is comparable in size to a USB flash drive, this seriously complicated the search for the entry point. Remote access to the planted device was via a built-in or USB-connected GPRS/3G/LTE modem.
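One defensive counter is to reconcile what the network actually sees against an asset inventory, so that a planted netbook or Bash Bunny shows up as a MAC address nobody registered. A hedged sketch, assuming you can export observed MAC-to-IP pairs from your switches or an ARP scan and that you maintain a register of known MACs:

```python
def normalize_mac(mac: str) -> str:
    """Accept AA-BB-CC-00-11-22 or aa:bb:... forms; return lowercase colons."""
    return mac.replace("-", ":").lower()

def unknown_devices(observed: dict, inventory: set) -> dict:
    """Return observed MAC -> IP entries whose MAC is not in the register."""
    known = {normalize_mac(m) for m in inventory}
    return {normalize_mac(mac): ip
            for mac, ip in observed.items()
            if normalize_mac(mac) not in known}
```

Run regularly and alerted on, even this crude diff would have flagged the rogue device the moment it requested an address.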

At the second stage, the attackers remotely connected to the device and scanned the local network seeking to gain access to public shared folders, web servers, and any other open resources. The aim was to harvest information about the network, above all, servers and workstations used for making payments. At the same time, the attackers tried to brute-force or sniff login data for such machines. To overcome the firewall restrictions, they planted shellcodes with local TCP servers. If the firewall blocked access from one segment of the network to another, but allowed a reverse connection, the attackers used a different payload to build tunnels.
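On the detection side, the shellcode listeners stand out if each host's listening sockets are baselined. A rough sketch that diffs `netstat -tln`-style output against an approved baseline (the parsing regex assumes the common Linux netstat column layout; adapt for your platform):

```python
import re

# Matches lines like:
# tcp        0      0 0.0.0.0:5190            0.0.0.0:*               LISTEN
LISTEN_RE = re.compile(r"tcp\S*\s+\d+\s+\d+\s+(\S+):(\d+)\s+\S+\s+LISTEN")

def listeners(netstat_output: str) -> set:
    """Extract (address, port) pairs for listening TCP sockets."""
    return {(m.group(1), int(m.group(2)))
            for m in (LISTEN_RE.search(line)
                      for line in netstat_output.splitlines()) if m}

def new_listeners(netstat_output: str, baseline: set) -> set:
    """Sockets listening now that are absent from the approved baseline."""
    return listeners(netstat_output) - baseline
```

Anything in the diff, such as an unexplained listener on port 5190 or 7900, warrants immediate investigation.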

Having succeeded, the cybercriminals proceeded to stage three. Here they logged into the target system and used remote access software to retain access. Next, malicious services created using msfvenom were started on the compromised computer. Because the hackers used fileless attacks and PowerShell, they were able to avoid whitelisting technologies and domain policies. If they encountered a whitelist that could not be bypassed, or PowerShell was blocked on the target computer, the cybercriminals used impacket, and winexesvc.exe or psexec.exe to run executable files remotely.

Verdicts

not-a-virus.RemoteAdmin.Win32.DameWare
MEM:Trojan.Win32.Cometer
MEM:Trojan.Win32.Metasploit
Trojan.Multi.GenAutorunReg
HEUR:Trojan.Multi.Powecod
HEUR:Trojan.Win32.Betabanker.gen
not-a-virus:RemoteAdmin.Win64.WinExe
Trojan.Win32.Powershell
PDM:Trojan.Win32.CmdServ
Trojan.Win32.Agent.smbe
HEUR:Trojan.Multi.Powesta.b
HEUR:Trojan.Multi.Runner.j
not-a-virus.RemoteAdmin.Win32.PsExec

Shellcode listeners

tcp://0.0.0.0:5190
tcp://0.0.0.0:7900

Shellcode connects

tcp://10.**.*.***:4444
tcp://10.**.*.**:4445
tcp://10.**.*.**:31337

Shellcode pipes

\\.\xport
\\.\s-pipe

KoffeyMaker: notebook vs. ATM

Despite CCTV and the risk of being caught by security staff, attacks on ATMs using a direct connection — so-called black box attacks — are still popular with cybercriminals. The main reason is the low “entry requirements” for would-be cyber-robbers: specialized sites offer both the necessary tools and how-to instructions.

Kaspersky Lab experts investigated one such toolkit, dubbed KoffeyMaker, in 2017-2018, when a number of Eastern European banks turned to us for assistance after their ATMs were quickly and almost effortlessly emptied. It soon became clear that we were dealing with a black box attack — a cybercriminal opened the ATM, connected a laptop to the cash dispenser, closed the ATM, and left the crime scene, leaving the device inside. Further investigation revealed the “crime instrument” to be a laptop with ATM dispenser drivers and a patched KDIAG tool; remote access was provided through a connection to a USB GPRS modem. The operating system was Windows, most likely XP, ME, or 7 for better driver compatibility.

ATM dispenser connected to a computer without the necessary drivers

The situation then unfolded according to the usual scenario: the cybercriminal returned at the appointed hour and pretended to use the ATM, while an accomplice remotely connected to the hidden laptop, ran the KDIAG tool, and instructed the dispenser to issue banknotes. The attacker took the money and later retrieved the laptop, too. The whole operation could well be done solo, but the scheme whereby a “mule” handles the cash and ATM side, while a second “jackpotter” provides technical support for a share of the loot, is more common. A single ATM can spit out tens of thousands of dollars, and only hardware encryption between an ATM PC and its dispenser can prevent an attack from occurring.
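The point about authenticating traffic between the ATM PC and the dispenser can be illustrated with a toy message-authentication scheme. This is not any real ATM protocol, just a sketch of why a black-box device that lacks the shared key cannot forge valid dispense commands:

```python
import hashlib
import hmac
import os

KEY = os.urandom(32)   # in reality, provisioned into the dispenser hardware

def sign_command(key: bytes, command: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the dispenser can verify the sender."""
    return command + hmac.new(key, command, hashlib.sha256).digest()

def dispenser_accepts(key: bytes, wire: bytes) -> bool:
    """Dispenser side: accept only commands bearing a valid tag."""
    command, tag = wire[:-32], wire[-32:]
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

A real scheme would also need replay protection (a nonce or sequence counter), omitted here for brevity, otherwise a recorded legitimate command could simply be resent.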

Overall, the attack was reminiscent of Cutlet Maker, which we described last year, except for the software tools. We were able to reproduce all the steps of KoffeyMaker in our test lab. All the required software was found without too much difficulty. Legitimate tools were used to carry out the attack with the exception of the patched KDIAG utility, which Kaspersky Lab products detect as RiskTool.Win32.DIAGK.a. Note that the same version of this program was previously used by cybercriminals from the Carbanak group.

Hash sums

KDIAG, incl. patched files
49c708aad19596cca380fd02ab036eb2
9a587ac619f0184bad123164f2aa97ca
2e90763ac4413eb815c45ee044e13a43
b60e43d869b8d2a0071f8a2c0ce371aa
3d1da9b83fe5ef07017cf2b97ddc76f1
45d4f8b3ed5a41f830f2d3ace3c2b031
f2c434120bec3fb47adce00027c2b35e
8fc365663541241ad626183d6a48882a
6677722da6a071499e2308a121b9051d
a731270f952f654b9c31850e9543f4ad
b925ce410a89c6d0379dc56c85d9daf0
d7b647f5bcd459eb395e8c4a09353f0d
0bcb612e6c705f8ba0a9527598bbf3f3
ae962a624866391a4321c21656737dcb
83ac7fdba166519b29bb2a2a3ab480f8

Drivers
84c29dfad3f667502414e50a9446ed3f
46972ca1a08cfa1506d760e085c71c20
ff3e0881aa352351e405978e066d9796
4ea7a6ca093a9118df931ad7492cfed5
a8da5b44f926c7f7d11f566967a73a32
f046dc9e38024ab15a4de1bbfe830701
9a1a781fed629d1d0444a3ae3b6e2882

YARA rule

rule software_zz_patched_KDIAG
{
    meta:
        author = "Kaspersky Lab"
        filetype = "PE"
        date = "2018-04-28"
        version = "1.0"
        hash = "49c708aad19596cca380fd02ab036eb2"

    strings:
        $b0 = { 25 80 00 00 00 EB 13 FF 75 EC }
        $b1 = { EB 1F 8D 85 FC FE FF FF 50 68 7B 2F 00 00 }
        $s0 = "@$MOD$ 040908 0242/0000 CRS1.EXE W32 Copyright (c) Wincor Nixdorf"

    condition:
        uint16(0) == 0x5A4D and
        all of ( $s* ) and
        all of ( $b* )
}

NBlog Dec 1 – security awareness on ‘oversight’

We bring the year to a close with an awareness and training module on a universal control that is applicable and valuable in virtually all situations in some form or other.  Oversight blends monitoring and watching-over with directing, supervising and guiding, a uniquely powerful combination.
The diversity and flexibility of the risk and control principles behind oversight are applied naturally by default, and can be substantially strengthened where appropriate. Understanding the fundamentals is the first step towards making oversight more effective, hence this is a cracker of an awareness topic with broad relevance to information risk and security, compliance, governance, safety and all that jazz.
It’s hard to conceive of a security awareness and training program that would not cover oversight, but for most it is implicit, lurking quietly in the background.  NoticeBored draws it out, putting it front and center.  
In the most general sense, few activities would not benefit from being overseen in some fashion, whether by the people and machines performing them or by third parties.
To a large extent, management is the practical application of oversight.  It’s also fundamental to governance, compliance and many controls, including most of those in information risk and security. 
Imagine if you can a world without any form of oversight where:
  • People and organizations were free to do exactly as they wish without fear of anyone spotting and reacting to their activities;
  • Machines operated totally autonomously, with nobody monitoring or controlling them;
  • Organizations, groups and individuals acted with impunity, doing whatever they felt like without any guidance, direction or limits, nobody checking up on them or telling them what to do or not to do;
  • Compliance was optional at best, and governance was conspicuously absent. 
Such a world may be utopia for anarchists, egocentrics and despots but a nightmare scenario for information risk and security professionals, and for any civilized society!

Read more about December's NoticeBored security awareness and training module then get in touch to subscribe.

NBlog Nov 22 – SEC begets better BEC sec

According to an article on CFO.com by Howard Scheck, a former chief accountant of the US Securities and Exchange Commission’s Division of Enforcement: 
"Public companies must assess and calibrate internal accounting controls for the risk of cyber frauds. Companies are now on notice that they must consider cyber threats when devising and maintaining a system of internal accounting controls."

A series of Business Email Compromise frauds (successful social engineering attacks) against US companies evidently prompted the SEC to act. Specifically, according to Howard:
"The commission made it clear that public companies subject to Section 13(b)(2)(B) of the Securities Exchange Act — the federal securities law provision covering internal controls — have an obligation to assess and calibrate internal accounting controls for the risk of cyber frauds and adjust policies and procedures accordingly."
I wonder how the lawyers will interpret that obligation to 'assess and calibrate' the internal accounting controls? I am not a lawyer but 'assessing' typically involves checking or comparing something against specified requirements or specifications (compliance assessments), while 'calibration' may simply mean measuring the amount of discrepancy. 'Adjusting' accounting-related policies and procedures may help reduce the BEC risk, but what about other policies and procedures? What about the technical and physical controls such as user authentication and access controls on the computer systems? What about awareness and training on the 'adjusted' policies and procedures? Aside from 'adjusting', how about instituting entirely new policies and procedures to plug various gaps in the internal controls framework? Taking that part of the CFO article at face value, the SEC appears (to this non-lawyer) very narrowly focused, perhaps even a little misguided. 

Turns out there's more to this:
"As the report warns, companies should be proactive and take steps to consider cyber scams. Specific measures should include:
  • Identify enterprise-wide cybersecurity policies and how they intersect with federal securities laws compliance
  • Update risk assessments for cyber-breach scenarios
  • Identify key controls designed to prevent illegitimate disbursements, or accounting errors from cyber frauds, and understand how they could be circumvented or overridden. Attention should be given to controls for payment requests, payment authorizations, and disbursements approvals — especially those for purported “time-sensitive” and foreign transactions — and to controls involving changes to vendor disbursement data.
  • Evaluate the design and test the operating effectiveness of these key controls
  • Implement necessary control enhancements, including training of personnel
  • Monitor activities, potentially with data analytic tools, for potential illegitimate disbursements
While it’s not addressed in the report, companies could be at risk for disclosure failures after a cyber incident, and CEOs and CFOs are in the SEC’s cross-hairs due to representations in Section 302 Certifications. Therefore, companies should also consider disclosure controls for cyber-breaches."
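The 'monitor activities, potentially with data analytic tools' bullet can start as simply as a few rules run over the disbursement ledger. An illustrative sketch only: the field names and threshold are invented, and a real control would be tuned to the organization's payment profile:

```python
def flag_disbursements(payments, known_vendors, limit=10_000.0):
    """Flag payments to unknown vendors, above a limit, or following a
    recent change to the vendor's bank details (illustrative rules)."""
    flagged = []
    for p in payments:   # each p: dict with vendor, amount, details_changed
        reasons = []
        if p["vendor"] not in known_vendors:
            reasons.append("unknown vendor")
        if p["amount"] > limit:
            reasons.append("over limit")
        if p.get("details_changed"):
            reasons.append("recent bank-detail change")
        if reasons:
            flagged.append((p, reasons))
    return flagged
```

Crude as they are, rules like these directly address the report's concern about illegitimate disbursements and changed vendor data, and they generate an audit trail of what was reviewed and why.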
The Securities Exchange Act became law way back in 1934, well before the Internet or email were invented ... although fraud has been around for millennia. In just 31 pages, the Act led to the formation of the SEC itself and remains a foundation for the oversight and control of US stock exchanges, albeit supported and extended by a raft of related laws and regulations. Today's system of controls has come a long way already and is still evolving.

NBlog Nov 20 – go ahead, make my day


What can be done about the semi-literate reprobates spewing forth this sort of technobabble nonsense via email? 
"hello, my prey.
I write you since I attached a trojan on the web site with porn which you have visited.My malware captured all your private data and switched on your camera which recorded the act of your wank. Just after that the malware saved your contact list.I will erase the compromising video records and data if you pay me 350 EURO in bitcoin. This is wallet address for payment : [string redacted]
I give you 30h after you view my message for making the transaction.As soon as you read the message I'll know it immediately.It is not necessary to tell me that you have paid to me. This wallet address is connected to you, my system will delete everything automatically after transfer confirmation.If you need 48h just Open the calculator on your desktop and press +++If you don't pay, I'll send dirt to all your contacts.      Let me remind you-I see what you're doing!You can visit the police office but anyone can't help you.
If you try to cheat me , I'll see it immediately!
I don't live in your country. So anyone can not track my location even for 9 months.Goodbye for now. Don't forget about the disgrace and to ignore, Your life can be destroyed."

It's straightforward blackmail - a crime in New Zealand and elsewhere - but the perpetrators are of course lurking in the shadows, hoping to fleece their more naive and vulnerable victims then cash-out anonymously via Bitcoin. Identifying them is hard enough in the first place without the added burden of having to gather sufficient forensic evidence to build a case, then persuade the authorities to prosecute.



So instead I'm fighting back through awareness. If you receive vacuous threats of this nature, simply laugh at their ineptitude and bin them. Go ahead, bin them all. Train your spam filters to bin them automatically. Bin them without hesitation or concern. 
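Even a crude phrase-count heuristic catches most of these form-letter shakedowns. An illustrative sketch only; a production filter would use a trained classifier rather than a hard-coded phrase list:

```python
# Tell-tale sextortion phrases (illustrative, not exhaustive).
MARKERS = ("bitcoin", "wallet address", "your camera", "i'll send", "webcam")

def blackmail_score(message: str) -> int:
    """Count how many tell-tale phrases appear in the message."""
    text = message.lower()
    return sum(1 for phrase in MARKERS if phrase in text)

def looks_like_blackmail(message: str, threshold: int = 2) -> bool:
    """Two or more markers is a strong hint the mail belongs in the bin."""
    return blackmail_score(message) >= threshold
```

The sample email above trips several markers at once, which is exactly why a modest rule set, or simply training your existing spam filter on a few examples, deals with them automatically.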



Then, please help me pass the word about these ridiculous scams. Let your friends and family (especially the most vulnerable) know. Share this blog with your classmates and work colleagues. Send journalists and reporters the URL. Hold a bin-the-blackmail party. 

By all means call your national CERT or the authorities if that makes you feel better. Just don't expect much in the way of a response beyond "We're inundated! Sorry, this is not a priority. We simply don't have the resources."

If enough of us call their bluff, these pathetic social engineering attacks will not earn enough to offset the scammers' risks of being caught ... and who knows, we might just draw some of them into the open in the process. Let's find out just how confident they are of their security, their untraceability and invincibility. 

Recite after me: "Go ahead, make my day ..."

NBlog Nov 13 – what to ask in a security gap assessment (reprise)

Today on the ISO27k Forum, a newly-appointed Information Security Officer asked us for "a suitable set of questions ... to conduct security reviews internally to departments".

I pointed him at "What to ask in a gap assessment" ... and made the point that if I were him, I wouldn't actually start with ISO/IEC 27002's security controls as he implied. I'd start two steps back from there:
  1. One step back from the information security controls are the information risks. The controls help address the risks by avoiding, reducing or limiting the number and severity of incidents affecting or involving information: but what information needs to be protected, and against what kinds of incident? Without knowing that, I don't see how you can decide which controls are or are not appropriate, nor evaluate the controls in place.
  2. Two steps back takes us to the organizational or business context for information and the associated risks. Contrast, say, a commercial airline against a government department: some of their information is used for similar purposes (i.e. general business administration and employee comms) but some is quite different (e.g. the airline is heavily reliant on customer and engineering information that few government departments would use, if at all). Risks and controls for the latter would obviously differ ... but less obviously there are probably differences even in the former - different business priorities and concerns, different vulnerabilities and threats. The risks, and hence the controls needed, depend on the situation.
I recommend several parallel activities for a new infosec pro, ISO, ISM or CISO – a stack of homework to get started:
  • First, I find it helps to start any new role deliberately and consciously “on receive” i.e. actively listening for the first few weeks at least, making contacts with your colleagues and sources and finding out what matters to them.  Try not to comment or criticize or commit to anything much at this stage, although that makes it an interesting challenge to get people to open up!  Keep rough notes as things fall into place.  Mind-mapping may help here.
  • Explore the information risks of most obvious concern to your business. Examples:
    • A manufacturing company typically cares most about its manufacturing/factory production processes, systems and data, plus its critical supplies and customers;
    • A services company typically cares most about customer service, plus privacy;
    • A government department typically cares most about ‘not embarrassing the minister’ i.e. compliance with laws, regs and internal policies & procedures;
    • A healthcare company typically cares most about privacy, integrity and availability of patient/client data;
    • Any company cares about strategy, finance, internal comms, HR, supply chains and so on – general business information – as well as compliance with laws, regs and contracts imposed on it - but which ones, specifically, and to what extent?;
    • Any [sensible!] company in a highly competitive field of business cares intensely about protecting its business information from competitors, and most commercial organizations actively gather, assess and exploit information on or from competitors, suppliers, partners and customers, plus industry regulators, owners and authorities;
    • Not-for-profit organizations care about their core missions, of course, plus finances and people and more (they are business-like, albeit often run on a shoestring);
    • A mature organization is likely to have structured and stable processes and systems (which may or may not be secure!) whereas a new greenfield or immature organization is likely to be more fluid, less regimented (and probably insecure!);
  • Keep an eye out for improvement opportunities - a polite way of saying there are information risks of concern, plus ways to increase efficiency and effectiveness – but don’t just assume that you need to fix all the security issues instantly: it’s more a matter of first figuring out your own and the organization’s priorities. Being information risk-aligned suits the structured ISO27k approach. It doesn’t hurt to mention the opportunities to the relevant people and chat about them, but be clear that you are ‘just exploring options’ not ‘making plans’ at this stage: watch their reactions and body language closely and think on;
  • Consider the broader historical and organizational context, as well as the specifics. For instance:
    • How did things end up the way they are today? What most influenced or determined things? Are there any stand-out issues or incidents, or current and future challenges, that come up often and resonate with people?
    • Where are things headed? Is there an appetite to ‘sort this mess out’ or conversely a reluctance or intense fear of doing anything that might rock the boat? Are there particular drivers or imperatives or opportunities, such as business changes or compliance obligations? Are there any ongoing initiatives that do, could or should have an infosec element to them?
    • Is the organization generally resilient and strong, or fragile and weak? Look for examples of each, comparing and contrasting. A SWOT or PEST analysis generally works for me. This has a bearing on the safe or reckless acceptance of information and other risks;
    • Is information risk and security an alien concept, something best left to the grunts deep within IT, or a broad business issue? Is it an imposed imperative or a business opportunity, a budget black hole (cost centre) or an investment (profit centre)? Does it support and enable the business, or constrain and prevent it?
    • Notice the power and status of managers, departments and functions. Who are the movers and shakers? Who are the blockers and naysayers? Who are the best-connected, the most influential, the bright stars? Who is getting stuff done, and who isn’t? Why is that?
    • How would you characterize and describe the corporate culture? What are its features, its high and low points? What elements or aspects of that might you exploit to further your objectives? What needs to change, and why? (How will come later!)
  • Dig out and study any available risk, security and audit reports, metrics, reviews, consultancy engagements, post-incident reports, strategies, plans (departmental and projects/initiatives), budget requests, project outlines, corporate and departmental mission statements etc. There are lots of data here and plenty of clues that you should find useful in building up a picture of What Needs To Be Done. Competent business continuity planning, for example, is also business-risk-aligned, hence you can’t go far wrong by emphasizing information risks to the identified critical business activities. At the very least, obtaining and discussing the documentation is an excellent excuse to work your way systematically around the business, meeting knowledgeable and influential people, learning and absorbing info like a dry sponge.
  • Build your team. It may seem like you’re a team of one, but most organizations have other professionals or people with an interest in information risk and security. What about IT, HR, legal/compliance, sales & marketing, production/operations, research & development etc.? Risk Management, Business Continuity Management, Privacy and IT Audit pros generally share many of your/our objectives, or at least there is substantial overlap (they have other priorities too). Look out for opportunities to help each other (give and take). Watch out also for things, people, departments, phrases or whatever to avoid, at least for now.
  • Meanwhile, depending partly on your background, it may help to read up on the ISO27k and other infosec standards plus your corporate strategies, policies, procedures etc., not just infosec. Consider attending an ISO27k lead implementer and/or lead auditor training course, CISM or similar.  There’s also the ISO27k FAQ, ISO27k Toolkit and other info from ISO27001security.com, plus the ISO27k Forum archive (worth searching for guidance on specific issues, or browsing for general advice).  If you are to become the organization’s centre of excellence for information risk and security matters, it’s important that you are well connected externally, a knowledgeable expert in the field. ISSA, InfraGard, ISACA and other such bodies, plus infosec seminars, conferences and social media groups are all potentially useful resources, or a massive waste of time: your call. 
Yes, I know, I know, that’s a ton of work, and I appreciate that it’s not quite what was asked for i.e. questions to ask departments about their infosec controls. My suggestion, though, is to tackle this at a different level: the security controls in place today are less important than the security controls that the organization needs now and tomorrow. Understanding the information risks is key to figuring out the latter.

As a relative newcomer, doing your homework and building the bigger picture will give you an interesting and potentially valuable insight into the organization, not just on the information risk and security stuff … which helps when it comes to proposing and discussing strategies, projects, changes, budgets etc. How you go about doing that is just as important as what it is that you are proposing to do. In some organizations, significant changes happen only by verbal discussion and consensus among a core clique (possibly just one all-powerful person), whereas in others nothing gets done without the proper paperwork, in triplicate, signed by all the right people in the correct colours of ink! The nature, significance and rapidity of change all vary, as do the mechanisms or methods.

So, in summary, there's rather more to do than assess the security controls against 27002. 

PS  For the more cynical among us, there’s always the classic three envelope approach.