- Tell them to report incidents. Instruct them. Give them a direct order.
- Warn them about not doing it. Perhaps threaten some form of penalty if they don't.
- Convince them that it is in the organization's interests for workers to report stuff. Persuade them of the value.
- Convince workers that it is in their own best interest to report stuff. Persuade them.
- Explain the reporting requirement (e.g. what kinds of things should they report, and how?) and encourage them to do so.
- Make reporting incidents 'the easy option'.
- Reward people for reporting incidents.
- Something else? Trick them? Goad them? Follow up on those who did not report stuff promptly, asking about their reasons?
- Determining the relative criticality of various business processes, IT systems, business units, departments, teams, relationships, projects, initiatives etc. to the organization involves understanding the business in some depth, leading to a better appreciation of the associated information risks. Provided it is done well, the Business Impact Assessment part of BCM is sheer gold: it forces management to clarify, rationalize and prioritize ... which gives me a much tighter steer on where to push harder or back off the pressure. If we all agree that situation A is more valuable or important or critical to the organization than B, then I can readily justify (both to myself and to management, the auditors and other stakeholders) mitigating the risks in situation B to a lesser extent than for A. That's relative security in a form that makes sense and works for me. It gives me the rationale to accept imperfections.
- BCM (as I do it!) involves investing in appropriate resilience, recovery and contingency measures. The resilience part supports information security in a very general yet valuable way: it means not compromising too far on the preventive controls, ensuring they are sufficiently robust not to fall over like dominoes at the first whiff of trouble. The recovery part similarly involves detecting and responding reasonably effectively to incidents, hence I still have the mandate to maintain those areas too. Contingency adds a further element of preparing to deal with the unexpected, including information risks that weren't even foreseen, plus those that were in fact wrongly evaluated and only partially mitigated. Contingency thinking leads to flexible arrangements such as empowerment, multi-skilling, team working and broad capability development with numerous business benefits, adding to those from security, resilience and recovery.
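To make the 'relative security' idea concrete, here is a toy Python sketch (not the author's actual method): rank assets by an agreed criticality score and let the ranking, rather than absolute numbers, drive how much mitigation each one gets. The asset names, scores and tier thresholds are all invented for illustration.

```python
# Hypothetical criticality scores, as might emerge from a Business
# Impact Assessment discussion (higher = more critical to the business).
assets = {"payroll system": 9, "intranet wiki": 3, "customer database": 10}

def mitigation_tier(rank, total):
    """Map a relative rank (0 = most critical) to a mitigation stance.
    The one-third cut-offs are arbitrary illustration, not guidance."""
    if rank < total / 3:
        return "strong controls"
    if rank < 2 * total / 3:
        return "standard controls"
    return "accept residual risk"

# Sort by criticality, most critical first; the ordering is what matters.
ranked = sorted(assets, key=assets.get, reverse=True)
plan = {a: mitigation_tier(i, len(ranked)) for i, a in enumerate(ranked)}
print(plan)
```

The point of the sketch is that once management agrees the customer database outranks the wiki, accepting weaker controls on the wiki becomes easy to justify.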
Incidents that are highly visible and obvious to all (e.g. a ransomware attack at the point where service is denied and the ransom demanded) are materially different from those that remain unrecognized for a long period, perhaps forever (e.g. a spyware attack), even if they are otherwise similar (both may use very similar remote-control Trojans). Detectability therefore might be a valuable third dimension to the classic Probability Impact Graphs for assessing and comparing risks.
For the awareness module, we'll be defining four or five waypoints, indicators or scoring norms for each of several relevant criteria, helping users of the metric assess, compare and score whatever information risks or incidents they have in mind.
You may have noticed the implicit 'detection time' element to detectability, ranging from infinity down to zero. That's a fairly simple concept and parameter to explain and discuss, but not so easy to determine or measure in, say, a risk workshop situation. In practice we prefer subjective or relative scales, reducing the measurement issue from "What is the probable detection time for incidents of type X?" to "Would type X incidents generally be detected before or after types Y and Z?" - in other words a classic bubble-sort or prioritization approach, with which managers generally are comfortable. The absolute value of a given point on the measurement scale is almost incidental, an optional outcome of the discussion and prioritization decisions made rather than an input or driver. What matters more is the overall pattern and spread of values, and even more important is the process of considering and discussing these matters in some depth. The journey trumps the destination.
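As a rough illustration of treating detectability as a third scoring dimension alongside probability and impact, here is a minimal Python sketch. The incident types, the 1-5 ordinal scales and the scores are all invented for the example; in a real workshop the relative ordering would come from discussion, not arithmetic.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: int    # 1 = rare ... 5 = almost certain
    impact: int         # 1 = negligible ... 5 = severe
    detectability: int  # 1 = detected almost at once ... 5 = may never be detected

    def score(self):
        # A crude composite: harder-to-detect incidents rank higher,
        # all else being equal.
        return self.probability * self.impact * self.detectability

risks = [
    Risk("ransomware", probability=3, impact=4, detectability=1),
    Risk("spyware",    probability=3, impact=4, detectability=5),
    Risk("phishing",   probability=5, impact=2, detectability=2),
]

# Rank relatively rather than trusting absolute values: the ordering
# (and the discussion behind it) matters more than the numbers.
ranked = sorted(risks, key=Risk.score, reverse=True)
for r in ranked:
    print(r.name, r.score())
```

Note how spyware, otherwise similar to ransomware here, rises to the top purely because of its poor detectability.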
- Have a quiet word with the auditor/s about it, ideally before it gets written up and finalized in writing. Discuss the issue – talk it through, consider various perspectives. Negotiate a pragmatic mutually-acceptable resolution, or at least form a better view of the sticking points.
- Have a quiet word with your management and specialist colleagues about it, before the audit gets reported. Discuss the issue. Agree how you will respond and try to resolve this. Develop a cunning plan and gain their support to present a united front. Ideally, get management ready to demonstrate that they are definitely committing to fixing this e.g. with budget proposals, memos, project plans etc. to substantiate their commitment, and preferably firm timescales or agreed deadlines.
- Gather your own evidence to strengthen your case. For example:
- If you believe an issue is irrelevant to certification since there is no explicit requirement in 27001, identify the relevant guidance about the audit process from ISO/IEC 27007 plus the section of 27001 that does not state the requirement (!)
- If the audit finding is wrong, prove it wrong with credible counter-evidence, counter-examples etc. Quality of evidence does matter but quantity plays a part. Engage your extended team, management and the wider business in the hunt.
- If it’s a subjective matter, try to make it more objective e.g. by gathering and evaluating more evidence, more examples, more advice from other sources etc. ‘Stick to the facts’. Be explicit about stuff. Choose your words carefully.
- Ask us for second opinions and guidance e.g. on the ISO27k Forum and other social media, industry peers etc.
- Wing-it. Duck-and-dive. Battle it out. Cut-and-thrust. Wear down the auditor’s resolve and push for concessions, while making limited concessions yourself if you must. Negotiate using concessions and promises in one area to offset challenges and complaints in another. Agree on and work towards a mutually-acceptable outcome (such as, um, being certified!).
- Be up-front about it. Openly challenge the audit process, findings, analysis etc. Provide counter-evidence and arguments. Challenge the language/wording. Push the auditors to their limit. [NB This is a distinctly risky approach! Experienced auditors have earned their stripes and are well practiced at this, whereas it may be your first time. As a strategy, it could go horribly wrong, so what’s your fallback position? Do you feel lucky, punk?]
- Suck it up! Sometimes, the easiest, quickest, least stressful, least risky (in terms of being certified) and perhaps most business-like response is to accept it, do whatever you are being asked to do by the auditors and move on. Regardless of its validity for certification purposes, the audit point might be correct and of value to the business. It might actually be something worth doing … so swallow your pride and get it done. Try not to grumble or bear a grudge. Re-focus on other more important and pressing matters, such as celebrating your certification!
- Negotiate a truce. Challenge and discuss the finding and explore possible ways to address it. Get senior management to commit to whichever solution/s work best for the business and simultaneously persuade/convince the auditors (and/or their managers) of that.
- Push back informally by complaining to the certification body’s management and/or the body that accredited them. Be prepared to discuss the issue and substantiate your concerns with some evidence, more than just vague assertions and generalities.
- Push back hard. Review your contract with the certification body for anything useful to your case. Raise a formal complaint with the certification body through your senior management … which means briefing them and gaining their explicit support first. Good luck with that. You’ll need even stronger, more explicit evidence here. [NB This and the next bullet are viable options even after you have been certified … but generally, by then, nobody has the energy to pursue it and risk yet more grief.]
- Push back even harder. Raise a complaint with the accreditation body about the certification body’s incompetence through your senior management … which again means briefing them and gaining their explicit support first, and having the concrete evidence to make a case. Consider enlisting the help of your lawyers and compliance experts willing to get down to the brass tacks, and with the experience to build and present your case.
- Delay things. Let the dust settle. Review, reconsider, replan. Let your ISMS mature further, particularly in the areas that the auditors were critical of. Raise your game. Redouble your efforts. Use your metrics and processes fully.
- Consider engaging a different certification body (on the assumption that they won’t raise the same concerns … nor any others: they might be even harder to deal with!).
- Consider engaging different advisors, consultants and specialists. Review your extended ISMS team. Perhaps push for more training, to enhance the team’s competence in the problem areas. Perhaps broaden ‘the team’ to take on-board other specialists from across the business. Raise awareness.
- Walk away from the whole mess. Forget about certification. Go back to your cave to lick your wounds. Perhaps offer your resignation, accepting personal accountability for your part in the situation. Or fire someone else!
"A common tactic of authoritarian regimes is to make laws which are next to impossible to abide by, then not enforce them. This creates a culture where it's perfectly acceptable to ignore such laws, yet the regime may use selective enforcement to punish dissenters -- since legally, everyone is delinquent."
Venezuela is a country facing an uncertain moment in its history. Reports suggest it is in significant need of humanitarian aid.
On February 10th, Mr. Juan Guaidó made a public call for volunteers to join a new movement called “Voluntarios por Venezuela” (Volunteers for Venezuela). According to the media, it already numbers thousands of volunteers willing to help international organizations deliver humanitarian aid to the country. How does it work? Volunteers sign up and then receive instructions on how to help. The original website asks volunteers to provide their full name, personal ID, cell phone number, whether they have a medical degree, a car, or a smartphone, and where they live:
This website appeared online on February 6th. Only a few days later, on February 11th, the day after the public announcement of the initiative, another almost identical website appeared with a very similar domain name and structure.
In fact, the false website is a mirror image of the original, voluntariosxvenezuela.com.
Both the original and the false website use SSL certificates from Let’s Encrypt. The differences are as follows:
| Original voluntariosxvenezuela.com website | Deception website |
|---|---|
| First day on the Internet: Feb 6th | First day on the Internet: Feb 11th |
| Registered in the name of Sigerist Rodriguez on Feb 4, 2019 | Registered via GoDaddy using the Privacy Protection feature on Feb 11, 2019 |
| Hosted on Amazon Web Services | Hosted first on GoDaddy and then on DigitalOcean |
Now, the scariest part is that these two different domains with different owners are resolved within Venezuela to the same IP address, which belongs to the fake domain owner:
That means it does not matter whether a volunteer opens the legitimate domain or the fake one: either way, they end up entering their personal information into a fake website.
Resolved outside Venezuela, the two domains present different results:
Kaspersky Lab blocks the fake domain as phishing.
In this scenario, where the DNS servers are manipulated, it’s strongly recommended to use public DNS servers such as Google’s (8.8.8.8 and 8.8.4.4) or the Cloudflare and APNIC servers (1.1.1.1 and 1.0.0.1). It’s also recommended to use VPN connections that do not rely on a third-party DNS.
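The detection idea behind this advice can be sketched simply: if the local resolver returns a different address for a domain than a trusted public resolver does, DNS manipulation is one possible explanation. In this sketch the lookup functions are injected so the logic runs without network access; in practice they might wrap queries against 8.8.8.8 or 1.1.1.1. The second domain name and all IP addresses below are hypothetical (the IPs come from the RFC 5737 documentation ranges).

```python
def dns_looks_tampered(domain, local_lookup, trusted_lookup):
    """Return True when the local and trusted resolvers disagree
    about a domain, which warrants further investigation."""
    return local_lookup(domain) != trusted_lookup(domain)

# Illustrative data modelled on the incident: inside the manipulated
# network, both the real domain and the fake look-alike resolve to the
# same attacker-controlled address.
manipulated = {
    "voluntariosxvenezuela.com": "203.0.113.7",  # hijacked
    "fake-volunteers.example":   "203.0.113.7",  # hypothetical fake domain
}
trusted = {
    "voluntariosxvenezuela.com": "198.51.100.20",  # genuine address
    "fake-volunteers.example":   "203.0.113.7",    # fake site resolves honestly
}

print(dns_looks_tampered("voluntariosxvenezuela.com",
                         manipulated.get, trusted.get))  # True: hijacked
```

Note that the fake domain itself shows no mismatch, which is exactly why checking the resolver, not just the domain name, matters here.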
Tan's former employer and the FBI allege that Tan "downloaded restricted files to a personal thumb drive." I could not tell from the complaint if Tan downloaded the files at work or at home, but the thumb drive ended up at Tan's home. His employer asked Tan to bring it to their office, which Tan did. However, he had deleted all the files from the drive. Tan's employer recovered the files using commercially available forensic software.
This incident, by definition, involves an "insider threat." Tan was an employee who appears to have copied information outside the scope of his work responsibilities, resigned from his employer, and was planning to return to China to work for a competitor, taking his former employer's intellectual property with him.
When I started GE-CIRT in 2008 (officially "initial operating capability" on 1 January 2009), one of the strategies we pursued involved insider threats. I've written about insiders on this blog before but I couldn't find a description of the strategy we implemented via GE-CIRT.
We sought to make digital intrusions more expensive than physical intrusions.
In other words, we wanted to make it easier for the adversary to accomplish his mission using insiders. We wanted to make it more difficult for the adversary to accomplish his mission using our network.
In a cynical sense, this makes security someone else's problem. Suddenly the physical security team is dealing with the worst of the worst!
This is a win for everyone, however. Consider the many advantages the physical security team has over the digital security team.
The physical security team can work with human resources during the hiring process. HR can run background checks and identify suspicious job applicants prior to granting employment and access.
Employees are far more exposed than remote intruders. Employees, even under cover, expose their appearance, likely residence, and personalities to the company and its workers.
Employees can be subject to far more intensive monitoring than remote intruders. Employee endpoints can be instrumented. Employee workspaces are instrumented via access cards, cameras at entry and exit points, and other measures.
Employers can cooperate with law enforcement to investigate and prosecute employees. They can control and deter theft and other activities.
In brief, insider theft, like all "close access" activities, is incredibly risky for the adversary. It is a win for everyone when the adversary must resort to using insiders to accomplish their mission. Digital and physical security must cooperate to leverage these advantages, while collaborating with human resources, legal, information technology, and business lines to wring the maximum results from this advantage.
Thankfully, a little digging led me to the excellent technical annex with very useful details about the sampling and survey methods. Aside from some doubt about the way different sizes of organizations were sampled, the approach looks good to me, writing as a former research scientist, latterly an infosec pro - neither a statistician nor surveyor by profession.
Interviewers had access to a glossary defining a few potentially confusing terms, including cyber security:
"Cyber security includes any processes, practices or technologies that organisations have in place to secure their networks, computers, programs or the data they hold from damage, attack or unauthorised access."

Nice! That's one of the most lucid definitions I've seen, worthy of inclusion in the NoticeBored glossary. It is only concerned with "damage, attack or unauthorised access" to "networks, computers, programs or the data they hold" rather than information risk and security as a whole, but still it is quite wide in scope. It is not just about hacks via the Internet by outsiders, one of several narrow interpretations in circulation. Nor is it purely about technical or technological security controls.
"Breach" was not defined though. Several survey questions used the phrase "breach or attack", implying that a breach is not an attack, so what is it? Your guess is as good as mine, or the interviewers' and the interviewees'!
Overall, the survey was well designed, competently conducted by trustworthy organizations, and hence the results are sound. Shocking, but sound.
I surmise that my shock relates to a mistake on my part. I assumed that most organizations had policies in this area. As to why roughly two thirds of them don't, one can only guess since the survey didn't explore that aspect, at least not directly. Given my patent lack of expertise in this area, I won't even hazard a guess. Maybe you are willing to give it a go?
Blog comments are open. Feedback is always welcome.
- The preliminary planning, thinking, research and pre-announcements that led us to give it a title and a few vague words of description on the website;
- Other modules, especially recent ones that are relevant to or touched on this topic with an eye to it being covered in February;
- Preliminary planning for future topics that we might introduce or mention briefly in this one but need not cover in any depth - not so much a grand master plan covering all the awareness topics as a reasonably coherent overview, the picture-on-the-box showing the whole jigsaw;
- Customer suggestions and feedback, plus conjecture about aspects or concerns that seem likely to be relevant to our customers given their business situations and industries e.g. compliance drivers;
- General knowledge and experience in this area, including our understanding of good practices ... which reminds me to check the ISO27k and other standards for guidance and of course Google, an excellent way to dig out potentially helpful advice, current thinking in this area plus news of recent, public incidents involving human error;
- Shallow and deep thought, day and night-dreaming, doodling, occasional caffeine-fueled bouts of mind-mapping, magic crystals and witchcraft a.k.a. creative thinking.