Category Archives: compliance

With No Permission, Facebook Slurped up ‘Hundreds of Millions’ of Email Contacts

Move Fast and Break Things

This story only gets worse for Facebook: Two weeks ago, I told you about how Zuckerberg’s firm was demanding some users enter their email passwords so their accounts could be “verified.” But now, further revelations make the situation look much, much worse. It appears Facebook was actually copying those users’ entire lists of contacts—without permission.

The post With No Permission, Facebook Slurped up ‘Hundreds of Millions’ of Email Contacts appeared first on Security Boulevard.

Microsoft 365 security: Protecting users from an ever-evolving threat landscape

In this age of frequent security and data breaches, the statement “We take our customers’ privacy and security very seriously” has been heard from breached companies so often as to become a point of mockery, anger and frustration. But when Rob Lefferts, CVP of Microsoft 365 Security and Compliance, tells me the same thing (and the statement is not in response to a security breach), I believe him. If they didn’t, this cloud-based SaaS offering … More

The post Microsoft 365 security: Protecting users from an ever-evolving threat landscape appeared first on Help Net Security.

Healthcare orgs have to achieve true cybersecurity, not only compliance

How many organizations in the healthcare sector are conforming with the HIPAA Security and Privacy Rules and the National Institute of Standards and Technology Cybersecurity Framework (NIST CSF)? According to a report by CynergisTek, which is based on aggregate ratings from privacy and security assessments performed in 2018 at nearly 600 healthcare provider organizations and business associates across the US, an average of 72% of orgs conform with HIPAA rules and 47% with … More

The post Healthcare orgs have to achieve true cybersecurity, not only compliance appeared first on Help Net Security.

The top emerging risks organizations are facing

Gartner surveyed 98 senior executives across industries and geographies and found that “accelerating privacy regulation” had overtaken “talent shortages” as the top emerging risk in the Q1 2019 Emerging Risk Monitor survey. Concerns around privacy regulations were consistently spread across the globe, reflecting the increasingly numerous and geographically specific regulations that companies must now comply with. “With the General Data Protection Regulation (GDPR) now in effect, executives realize that complying with privacy regulations is more … More

The post The top emerging risks organizations are facing appeared first on Help Net Security.

Regulating the IoT: Impact and new considerations for cybersecurity and new government regulations

In 2019 we have reached a new turning point in the adoption of IoT: more markets and industries are migrating to a cloud-based infrastructure, and as the IoT continues to gain popularity and more devices and data move online, lawmakers and legislators around the globe are taking note. An often-critiqued part of IoT growth is its impact on cybersecurity and concerns around the ability to keep networks secure from cyber-attacks as they grow in … More

The post Regulating the IoT: Impact and new considerations for cybersecurity and new government regulations appeared first on Help Net Security.

NBlog Apr 11 – the KISS approach to ISO27k

From time to time on the ISO27k Forum, someone claims that certification auditors 'like to see', 'require' or even 'insist on' or 'demand' certain information security controls. Sometimes, it is further claimed or implied that certification auditors have actually raised or might yet raise nonconformances regarding the lack of certain controls, and consequently might refuse to certify their clients.

I'm not entirely convinced that such claims are true, for starters, but if they are, that hints at a problem with the certification and perhaps the accreditation processes.

In accordance with ISO/IEC 27006, ISO/IEC 27007, ISO 19011 (revised last year) and their own internal certification audit procedures, accredited certification auditors should be certifying an ISO27k Information Security Management System against the requirements formally specified in the main body clauses of ISO/IEC 27001. They should definitely raise major nonconformances and refuse to certify if they have evidence that an organization has not fulfilled particular requirements in the main body of '27001. However, if there are issues regarding the organization’s interpretation and/or implementation of '27001 Annex A controls, that’s a different matter because Annex A itself is not mandatory.

A (re)current example on the Forum concerns asset inventories. The main body of '27001 does not formally require that organizations prepare and maintain inventories, databases or lists of their assets. Compliant organizations are required to consider the advice in Annex A regarding inventories and other matters, but they do not have to take the advice and they are free to interpret it in whatever way happens to suit their purposes.

Arguably, if an organization has identified and evaluated its information risks and decided to implement certain mitigating controls based on Annex A, but has not in fact done so yet (at least not satisfactorily) and has no real intention of doing so, then that suggests a failure of the ISMS processes which would likely constitute a reportable nonconformance. However, if the organization acknowledges that the controls are not fully implemented yet and is in the process of addressing that (ideally with some evidence of genuine intent, such as approved projects with allocated resources), then the ISMS processes appear to be working as planned … which would be a basis to challenge a nonconformance raised by the certification auditors. One of the objectives for an ISO27k ISMS is to drive and facilitate systematic improvement and maturity in this area: that’s nothing to be ashamed of - quite the reverse!

Unfortunately a number of myths and misunderstandings persist in the field, including allegedly common practices and widespread approaches that are not entirely aligned with the ISO standards. Even if many certified organizations happen to have asset inventories, that does not mean the standard formally requires everyone to do so. The same thing applies to information classification, antivirus controls, backups and so forth – in fact, the whole of Annex A ("Reference control objectives and controls") is advisory: certified organizations are formally required to check their selection of controls against Annex A "to ensure that no necessary controls have been overlooked" [27001 clause 6.1.3c note 1] but they are not formally required to adopt and implement the Annex A controls. They are encouraged to select whatever controls happen to best address their risk mitigation needs, from any sources they choose including controls of their own invention.
"Organizations can design controls as required, or identify them from any source." 
[ISO/IEC 27001:2013 clause 6.1.3b (note)]
Oh and by the way, mitigation is just one of four perfectly acceptable forms of risk treatment, along with avoidance, sharing and acceptance. Again, the organization is fully within its rights to choose its approach and the auditors should not complain (with some provisos concerning how those choices were made).

This point drove our development of the ISMS mandatory documentation checklist for the ISO27k Toolkit (free!). If you analyze the wording of ‘27001 carefully and narrowly, almost like a lawyer analyzing a contract, you find that many common practices are optional, not mandatory after all. This has implications for the certification auditors: clients have a sound basis to challenge audit findings or nonconformances on options that, for whatever reason, they have chosen not to take up. Provided the process through which they evaluated and chose their options is compliant with '27001, and provided they duly complied with their own policies and procedures, the auditors should not insist that those options are in fact required.

Having said all that, there is more to this than certified compliance with '27001. It could equally be argued that Annex A constitutes good practice, hence in accordance with '27001 clause 6.1.3d, organizations that choose not to adopt Annex A controls should at least be able to justify their decisions in a Statement of Applicability. Right or wrong, discretion is appropriate and necessary under various circumstances, in practice.

Furthermore, while certification auditors might be going beyond their brief if they refuse to certify organizations that choose not to adopt all the controls in Annex A, they might appear negligent if they didn’t at least point out substantial information security concerns which crop up in the course of their audits … which is where minor nonconformances, ‘other findings’, ‘potential points of concern’, informal reporting and the negotiations towards the end of an audit generally come into play. 'We will certify your ISMS, but we advise you of the following issues: ...'.

ISMS management reviews, ISMS internal audits etc. probably should dig out and report concerns of this nature too: they generally have a wider brief than certification and are not necessarily constrained to compliance auditing solely against the formal requirements. Almost anything is potentially reportable internally if a competent person believes and has evidence that it is in the organization’s best interests. That includes audits and reviews of the ISMS against other requirements such as quality assurance or health and safety or environmental protection or corporate strategies or whatever. Organizations have many obligations and expectations in addition to those in ‘27001, not least meeting their own business objectives and duties towards various stakeholders.

So what does this all mean? Personally, despite being a fan of good security practices, I understand the value of a minimalist KISS approach (as in Keep your ISMS Simple, Stupid) with benefits such as:
  • Ease of understanding, use, management, maintenance and auditing;
  • Focus on the essentials, do them well and make them slick;
  • Lack of red tape and bloat - often itself a rat's nest of security issues as well as the obvious costs and delays;
  • Maximize bang for buck - the core processes and an ISO/IEC 27001 compliance certificate are valuable, even if the certified ISMS is minimalist;
  • Release the organization from the constraints of overbearing security, encouraging investment and effort in other more valuable business opportunities;
  • A solid foundation on which to build appropriate extensions at some future point - meaning both maturity and the flexibility to respond to novel situations as they arise.

Adhering to the mobility requirements of NIST 800-171 does not have to keep you awake at night

The majority of companies in the United States and Europe are required to comply with at least one IT security regulation – oftentimes more. This forces companies to exert strong control over how data is transferred, accessed and maintained throughout its lifecycle. One particularly toothy regulation is NIST SP 800-171, which requires that all non-federal organizations that want to continue working with U.S. government agencies be compliant with … More

The post Adhering to the mobility requirements of NIST 800-171 does not have to keep you awake at night appeared first on Help Net Security.

Insights gained from working on more than 750 cybersecurity incidents

Many entities face the same security risks, so it is essential to have insight into how to manage them and how to respond when incidents occur. BakerHostetler’s privacy and data protection team released its 2019 Data Security Incident Response Report, which leverages the metrics and insights drawn from 750 potential incidents in 2018 to help entities identify and prioritize the measures necessary to address their digital risk posture. “Privacy laws around the globe are shifting … More

The post Insights gained from working on more than 750 cybersecurity incidents appeared first on Help Net Security.

NBlog April 7 – time resilience


It's official - summer's over in the Southern hemisphere.  

Not only did we need to light a fire to keep warm yesterday, but at 3 am last night our clocks went back an hour at the end of NZ Daylight Saving Time. We're now 12 hours ahead of UTC.

My Windows PC clock reset itself automagically, dropping an informational entry into the system log 12 seconds later.



Consequently the normally sequential Windows system log appears out of sequence. According to the time stamps, log entries at 02:55 and 02:56 were followed by the informational entry at 02:00.

That's just a reporting/display artifact, though. Under the covers, the operating system uses UTC. UTC didn't change by an hour at 02:00 but just kept ticking away like normal. Log entries always join the top of the heap in a strictly sequential log.
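To make the artifact concrete, here's a minimal Python sketch using the standard zoneinfo module (Python 3.9+). The exact timestamps are my own illustration of the 2019 NZ changeover, not taken from the log described above:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# Two UTC instants ten minutes apart, straddling the end of NZ daylight
# saving (03:00 NZDT on 7 April 2019, when clocks fell back to 02:00 NZST).
nz = ZoneInfo("Pacific/Auckland")
before = datetime(2019, 4, 6, 13, 55, tzinfo=timezone.utc)  # 02:55 NZDT locally
after = datetime(2019, 4, 6, 14, 5, tzinfo=timezone.utc)    # 02:05 NZST locally

for t in (before, after):
    print(t.isoformat(), "->", t.astimezone(nz).isoformat())

# UTC marches forward monotonically, but the local rendering jumps from
# 02:55 back to 02:05 - exactly the kind of display artifact in the log.
```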

UTC does occasionally change by a second, though, to keep it in step with the Earth's rotation which is how we animals measure time - by reference to the cycle of days and nights, sunrises and sunsets.

We all know days and nights change gradually in length throughout the year. Thanks to their atomic clocks, the scientists know that the 'gradual change' is not, in fact, entirely consistent. For reasons that escape me, atomic clocks are more consistent than the Earth's rotation, hence UTC, which is tied to atomic time, slowly drifts out of step with solar time.

UTC is only ever adjusted in whole one-second increments ... which presents a problem for computer systems and processes that depend on UTC. Loggable events occurring within the period of a step adjustment could be logged with the wrong times, so a better approach is to speed up or slow down the clock tick rate ever so slightly until the one-second change is achieved. Now, log entries will be ever so slightly wrong for the period of the change, but provided 'ever so slightly' is less than the resolution of the date-time-stamps, it shouldn't matter, hopefully.
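That slewing trick is essentially what the big operators call 'leap smearing'. Here's a toy Python sketch of a linear smear; the 24-hour window and the end-of-2016 leap second are illustrative choices on my part, not a description of any particular implementation:

```python
from datetime import datetime, timedelta, timezone

# Instead of stepping the clock by a whole second at the leap, spread the
# adjustment over a long window so every timestamp is only slightly wrong.
LEAP = datetime(2017, 1, 1, tzinfo=timezone.utc)  # leap second at end of 2016
WINDOW = timedelta(hours=24)                      # smear over the prior 24 h

def smeared_offset(now: datetime) -> float:
    """Fraction of the one-second step already applied at time `now`."""
    start = LEAP - WINDOW
    if now <= start:
        return 0.0
    if now >= LEAP:
        return 1.0
    return (now - start) / WINDOW  # linear ramp from 0.0 to 1.0 seconds

# Halfway through the window, the clock has been slewed by half a second:
print(smeared_offset(LEAP - WINDOW / 2))  # 0.5
```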

Some systems and clocks don't adjust themselves, such as Sun.exe, a neat little Windows utility that displays a yellow or blue sun icon on the task bar depending on whether it is day or night. The times shown on its pop-up message about sunrise and sunset were wrong by an hour. After terminating and restarting Sun.exe, the times were correct.


So it looks as if Sun.exe takes its time reference as it launches, not as it calculates and displays the pop-up message and colours the task bar icon.

Along with assorted battery-powered clocks around the place, the one-hour error in Sun.exe is a trivial issue. For forensics purposes, accuracy of date-time-stamps to the second may be important when establishing the precise sequence of events, perhaps down to millisecond levels in some business situations (such as recording the precise moment that a bargain is struck in a volatile trading market). There might be safety or other implications as a result of strictly sequential activities getting out of sequence, unless the systems involved are coordinated to change at the same rate, which I guess is the reason for 'coordinated' in Coordinated Universal Time (i.e. UTC - the initialism was a compromise between the English and French word orders, as if this wasn't confusing enough already). What matters there is relative time ... and no, I'm not going into relativity at this point.

Overall, though, we manage. As with the much-feared Y2K, we scrape through. We're quite resilient, you could say. It takes me maybe a couple of days to adjust my body-clock to the 1 hour changes between winter and summer time, or other stepwise changes that occur when I fly East or West through one or more time zones. Of course I could cross just one time zone at the very point the clocks change between summer and winter time to cancel out the changes but the stress of figuring out whether I should change my watch, by how much and which way, would be worse than just coping with it. I'm glad I don't schedule flights though. 

So here I sit at 07:30, roughly an hour after sunrise this Sunday morning, in daylight outside. Yesterday at this clock time, I needed the desk lamp on because it was still quite dark. This evening, it will be drink o'clock an hour earlier than yesterday. Drink o'clock is more daylight- than clock-related ... so I'd better push on. Things to do while it's light.


PS  As I tagged this blog piece, I realised that the issue has numerous implications for information security. There's more to it than it seems.

Preparing for the CCPA: Leverage GDPR Investments to Accelerate Readiness

The European Union (EU)’s General Data Protection Regulation (GDPR) is about to celebrate its first birthday, and similar regulations scheduled to go into effect early in 2020 — such as Brazil’s Lei Geral de Proteção de Dados (LGPD) and the California Consumer Privacy Act (CCPA) — will press organizations to look more holistically at how they address privacy. Because I’m an optimist, I think it’s possible a U.S. federal privacy law could also be passed in the next 18 months. In my experience, modern data privacy readiness and controls are largely based on common privacy principles and practices from the GDPR, which began enforcement on May 25, 2018.

But what does that really mean?

Apply GDPR Best Practices to Your CCPA Readiness Plan

Let’s take a step back and look at several of the high-level overlaps between the GDPR and the CCPA as an example. Keep in mind that within each regulation there are fine points that clearly differentiate them. While those are beyond the scope of this article, we suggest seeking legal advice should you need further help on this topic. Here is a high-level review:

  • While definitions vary, the general definition of “personal data” or “personal information” is virtually anything that can be used to identify an individual. Both regulations define and enumerate rules to enforce protecting an individual’s rights around his or her personal information.
  • Both grant an important right of disclosure or access: individuals have rights to transparency around the collection of their personal data, and to receive a copy of that data or have it deleted altogether.
  • The CCPA does not directly impose specific data security requirements, but establishes a right of action for certain data breaches caused by business failure to maintain reasonable security practices and procedures appropriate to the risk. Somewhat similarly, the GDPR requires appropriate technical and organizational measures necessary to ensure security appropriate to the risk.

As these basic overlaps between the GDPR and the CCPA illustrate, there is a set of common principles about transparency, including an individual’s right to access or request deletion of personal data, the need for security, and the potential for substantial penalties for noncompliance. While there are implementation differences between the various regulations — such as which organizations and individuals qualify, personal data definitions and individual rights (access, correction, deletion) — the IT best practices required to help your compliance program are largely the same. Some of these include:

  1. Security and privacy by design and by default;
  2. Locating, identifying and classifying personal data;
  3. Tracking personal data use via audit trails to demonstrate compliance;
  4. Providing for response capabilities to individual requests for access, correction, deletion and transfer of personal data and audit trails to demonstrate compliance;
  5. Implementing security controls according to risk (vulnerability assessments, access controls, activity monitoring, encryption); and
  6. Effectively preparing for and responding to breaches.

A Repeatable Framework for Protecting Regulated Data

In my experience as a practitioner, I find that it’s often helpful to follow a framework that guides you as you bring these best practices to life in your data privacy program. That’s why IBM created a five-step program to help you establish a repeatable process for protecting personal and regulated data, known as the Critical Data Protection Program:

Figure 1: IBM’s Critical Data Protection Program

When it comes to preparing for the CCPA (and other regulations down the road), consider what steps you can take as an IT organization and how you will be working with your privacy/legal/compliance organizations. Your privacy team will undertake many of these activities, including assessments, policy setting and creating business processes.

  1. Start by obtaining executive sponsorship and budgets to support your privacy program. The higher up the executive chain, the better. The changes you may need to make will cross organizational boundaries, so support from the top will be critical to your success.
  2. Next, assess and understand your obligations — in other words, do a gap analysis. This may mean seeking legal counsel. Review your existing privacy policies, notices and statements. Do you have them? Where are they presented, and when were they last updated? Are they clearly written and easy to understand?
  3. Create a cross-functional team. When it comes to implementation, be sure to have all the right stakeholders involved. Privacy is not just a security issue, or even just a privacy issue; your cross-functional team should include departments such as marketing and HR, for example, due to the potentially regulated data they may be dealing with.
  4. Regardless of regulation, you will need to know what personal data assets you store, where they are located and how they are used. You will hear this often referred to as a data map. Data discovery is an essential part of creating a data map; it’s the process of identifying, inventorying and mapping personal data and data flows across your organization. A data security solution can help automate the process to avoid approaching it manually — after all, who couldn’t use fewer spreadsheets and more time? (A toy sketch of this discovery step follows this list.)
  5. Review data retention schedules. How long do you retain the personal data you collect? It should be retained only as long as required for a legitimate business need or as required by law.
  6. Document privacy compliance activities, including processing operations involving personal data.
  7. Develop audit capabilities and processes. You will be required to demonstrate what you are doing to address your compliance obligations. You will need a robust audit plan and process to monitor ongoing conformity and help mitigate risk, both internally and with your data processors and other vendors.
  8. Implement privacy by design and security by design. Although not spelled out in the CCPA, this is an important GDPR requirement and it can save you a lot of redundant work regardless of the regulation. Going forward, if you develop new services and systems, it is likely that you will be expected to embed — by default and by design — processes and features that will help ensure privacy of personal data.
  9. Create breach response and notification protocols. In the event of a breach with the GDPR, under certain scenarios, you have 72 hours to notify the regulatory authority. Other states and jurisdictions have varied timelines; sectoral regulations such as New York’s Department of Financial Services 23 NYCRR 500 also mandate 72 hours. Achieving these tight deadlines may depend on having defined processes and protocols in place for investigating, containing and responding to data breaches.
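As promised in step 4, here is a deliberately tiny Python sketch of the discovery idea. Real data discovery products use far richer classifiers and connectors; the email-only pattern, the .txt filter and the file-share path here are purely illustrative assumptions:

```python
import csv
import pathlib
import re

# Toy "data discovery": walk a directory tree, flag files containing
# email-like strings, and emit a starter data map as CSV.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def discover(root: str, out_csv: str = "data_map.csv") -> None:
    with open(out_csv, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["file", "email_like_hits"])  # skeleton data map
        for path in pathlib.Path(root).rglob("*.txt"):
            hits = EMAIL.findall(path.read_text(errors="ignore"))
            if hits:
                writer.writerow([str(path), len(hits)])

discover("/srv/shared")  # hypothetical file share to inventory
```

Even a toy like this makes the point: discovery output should land in a structured inventory you can query, not in yet another ad hoc spreadsheet.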

The bottom line is that approaching any privacy regulation requires a combination of people, process and technology. There is no one solution that can meet all needs. There are many technologies from IBM Security that can help — from data activity monitoring solutions to software-as-a-service (SaaS)-based risk analysis to encryption — and our privacy experts can help you get started in creating or augmenting your privacy program with services such as a CCPA readiness assessment.

Accelerate Your Readiness for New Data Privacy Regulations

Privacy regulations will continue to evolve, both in the U.S. and abroad. While there are many implementation differences, the IT controls and requirements for protecting personal data are largely the same. As you build out your program, don’t forget to leverage the existing investments you’ve made in preparing for other regulations — from both an organizational and technology perspective — to accelerate your readiness for new regulations.

With the right tools in place, you can implement a consolidated approach to help organize and automate your privacy controls program and, in the process, help build trust and accountability, whether with consumers, business partners or employees.

Learn more about privacy regulations: Download the white paper

Clients are responsible for ensuring their own compliance with various laws and regulations, including the European Union General Data Protection Regulation. Clients are solely responsible for obtaining advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulations that may affect the clients’ business and any actions the clients may need to take to comply with such laws and regulations. The products, services, and other capabilities described herein are not suitable for all client situations and may have restricted availability. IBM does not provide legal, accounting or auditing advice or represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation. Learn more about IBM’s own GDPR readiness journey and our GDPR capabilities and offerings to support your compliance journey here.

The post Preparing for the CCPA: Leverage GDPR Investments to Accelerate Readiness appeared first on Security Intelligence.

WHOIS after GDPR: A quick recap for CISOs

2018 was a big year for data protection with the implementation of the General Data Protection Regulation (GDPR) last May — forcing CISOs and other professionals to rethink how the personal data of European consumers should be collected and processed. Taking a closer look at WHOIS in that connection: the protocol gives access to public domain data, including TLDs and ccTLDs, as well as more personal information like the names and addresses of … More

The post WHOIS after GDPR: A quick recap for CISOs appeared first on Help Net Security.

SecurityWeek RSS Feed: US Colleges Halt Work With Huawei Following Federal Charges

Some of the nation’s top research universities are cutting ties with Chinese tech giant Huawei as the company faces allegations of bank fraud and trade theft.


79% of organizations want a federal privacy law amid lack of compliance

There is significant enthusiasm for a federal privacy law, even as organizations struggle to comply with data privacy rules stemming from both mushrooming government regulations and complex data-sharing agreements between companies. Organizations are also overconfident in knowing where private data resides, and tend to use inadequate tools such as spreadsheets to track it. Integris Software’s 2019 Data Privacy Maturity Study gathered detailed responses from 258 mid to senior executives from IT, general … More

The post 79% of organizations want a federal privacy law amid lack of compliance appeared first on Help Net Security.

Announcing new capabilities for the Microsoft Azure Security Center

Microsoft Azure Security Center—the central hub for monitoring and protecting against security-related incidents within Azure—has released new capabilities. The following features—announced at Hannover Messe 2019—are now generally available for the Azure Security Center:

  • Advanced Threat Protection for Azure Storage—Layer of protection that helps customers detect and respond to potential threats on their storage account as they occur—without having to be an expert in security.
  • Regulatory compliance dashboard—Helps Security Center customers streamline their compliance process by providing insight into their compliance posture for a set of supported standards and regulations.
  • Support for Virtual Machine Scale Sets (VMSS)—Easily monitor the security posture of your VMSS with security recommendations.
  • Dedicated Hardware Security Module (HSM) service, now available in U.K., Canada, and Australia—Provides cryptographic key storage in Azure and meets the most stringent customer security and compliance requirements.
  • Azure disk encryption support for VMSS—Now Azure disk encryption can be enabled for Windows and Linux VMSS in Azure public regions—enabling customers to help protect and safeguard the VMSS data at rest using industry standard encryption technology.

In addition, support for virtual machine scale sets is now generally available as part of the Azure Security Center. To learn more, read our Azure blog.

The post Announcing new capabilities for the Microsoft Azure Security Center appeared first on Microsoft Security.

SecurityWeek RSS Feed: Facebook’s Call for Global Internet Regulation Sparks Debate

Facebook chief Mark Zuckerberg's call for "globally harmonized" online regulation raises questions about how internet platforms can deal with concerns about misinformation and abusive content while remaining open to free speech.

Here are key questions about the latest proposal from Facebook:


UK Identifies Fresh Huawei Risks to Telecom Networks

Britain has identified "significant" issues in Huawei's engineering processes that pose "new risks" for the nation's telecommunications, a government report found Thursday amid lingering global suspicion over the Chinese technology giant.



Missed DNS Flag Day? It’s Not Too Late to Upgrade Your Domain Security

The Domain Name System (DNS) is a crucial component for the good functioning of the internet. It’s the system that makes it possible for visitors to find your websites and customers to exchange email with your organization.

DNS has been around for a very long time and a lot has changed since its original publication in RFC 882 and RFC 883 in the early 1980s. Over time, new features have been added, but unfortunately, the system still had to guarantee backward compatibility with previous implementations — that is, until DNS Flag Day.

Extension Mechanisms for DNS

One of those new features was the extension mechanisms for DNS (EDNS), which extended the old system with additional information such as new flags and response codes. A major benefit of EDNS is that it allows a server to understand whether it is talking to another server that supports EDNS as well. If they both support EDNS, they can exchange the additional information, and if a server doesn’t support EDNS, it will just ignore this information. This is important because it’s not mandatory to implement EDNS.

EDNS itself relies on resource records (RR). RRs are, in essence, domain name server database entries and can include A or CNAME records — the pointers to your website — or MX records, which tell everyone which mail servers to use when they want to contact your organization.

One record type, the OPT “pseudo” RR type, was defined specifically for EDNS. Whereas “normal” record types, such as A or MX records, are included in zone files, OPT record types will never appear in a zone file. The OPT record only exists in the messages exchanged between a server and a client. For completeness: based on the latest published standard (RFC 6891), EDNS should strictly be referred to as EDNS(0).
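You can see the OPT pseudo-record in action with a few lines of Python and the dnspython library. A hedged sketch: the domain and the 9.9.9.9 resolver are arbitrary choices, and the API is paraphrased from dnspython's 2.x releases:

```python
import dns.message
import dns.query
import dns.rcode  # dnspython (pip install dnspython)

# Build a query with EDNS(0) enabled. The EDNS data travels in an OPT
# pseudo-record inside the message itself; you'll never see OPT in a zone file.
query = dns.message.make_query("example.com", "A", use_edns=0, payload=1232)
response = dns.query.udp(query, "9.9.9.9", timeout=5)

# A server that supports EDNS answers with an OPT record of its own;
# dnspython reports the negotiated EDNS version (-1 means "no EDNS").
print("server speaks EDNS:", response.edns >= 0)
print("response code:", dns.rcode.to_text(response.rcode()))
```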

EDNS is also essential for the implementation of DNS Security Extensions (DNSSEC). More on that later.

Backward Compatibility and DNS Flag Day

In theory, the published specifications (and extension protocols) should result in a uniform implementation. Unfortunately, these specifications sometimes leave room for interpretation and certain coding decisions result in different implementations. Additionally, some implementations are just broken and are not at all compliant with these standards.

The end result is that developers continuously have to address interoperability between different existing implementations. This also means that a substantial amount of code (and effort) is spent on coping with these broken implementations. All this adds to the complexity of the solution and hogs valuable resources that developers could otherwise spend on improving the software.

To resolve these problems, different DNS software and service providers agreed to coordinate removing accommodations for these noncompliant implementations so their developers could focus more on deploying new features. That switch, which was referred to as DNS Flag Day, happened on February 1, 2019.

Wait … Nothing Happened, My DNS Still Functions

This is not unusual, as the changes only affected noncompliant software. However, because of the potential impact of a nonfunctioning DNS infrastructure, it is important to raise awareness. Also, note that Feb. 1 was merely the day that open-source resolver vendors released their updates. It doesn’t necessarily mean that these updates are already applied everywhere. You might still have to face the consequences of this update at a later stage.

Inventory and Test Your Domains

The first thing you should do is review if your asset inventory contains sufficient information on your domains, as well as where the DNS service providers for these domains are located. This type of information is also essential to your incident response plans — for example, the plan to combat distributed denial-of-service (DDoS) attacks. Ideally, you should be able to answer each of the following questions:

  • Is the service in-house or outsourced? If it’s in-house, which software do you use and what is the patch management policy?
  • On which logical and physical networks are the servers located?
  • What is the update process for domain and record information (via a web console, a master server on your network, etc.)?
  • Which protection measures are put in place to access the management consoles?
  • Is there an alert system if a change was committed? Do you monitor the changes in your domain records and registrations?
  • Do you use Registrar-Lock code to protect your domains (e.g., RFC3632)?
  • Are zone files versioned and can you easily roll back to older versions? This is not the same as the serial number in the Start of Authority (SOA) record.

When you have inventoried all your domains, you can test them on the DNS Flag Day homepage. The test results will tell you if the servers supporting your domain are in compliance with the new standards or not. You can do bulk testing of domains via the EDNS Compliance Scanner. Note that if you have multiple domains all hosted on the same server, then you only need to do the test once, for one domain. This is because the changes are tied to the server providing the service, not your actual domain configuration (zone file).
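If you'd rather script a first-pass check yourself before (not instead of) running the official tests, a rough dnspython sketch might look like the following: a crude approximation of what the EDNS Compliance Scanner probes, with illustrative domain and timeout values:

```python
import dns.exception
import dns.message
import dns.query
import dns.resolver  # dnspython 2.x

def edns_check(domain: str) -> dict:
    """Query each authoritative server for `domain` with EDNS(0) enabled.
    A toy first-pass check only, not a substitute for the official tests."""
    results = {}
    for ns in dns.resolver.resolve(domain, "NS"):
        ns_name = str(ns.target)
        addr = str(dns.resolver.resolve(ns_name, "A")[0])
        query = dns.message.make_query(domain, "SOA", use_edns=0)
        try:
            reply = dns.query.udp(query, addr, timeout=3)
            results[ns_name] = reply.edns >= 0  # answered with EDNS?
        except dns.exception.Timeout:
            results[ns_name] = False  # timeouts are what Flag Day punishes
    return results

print(edns_check("example.com"))
```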

Uh Oh, the Test Failed

If the test fails, you shouldn’t immediately start to panic. It is important to rule out any intermediate problems that could have caused the test to fail. Analyze the results and check if the failure could be caused by timeout problems or problems with load balancers or firewalls. If you’re unsure, try the test again. A temporary network hiccup might have negatively affected the results.

If the test continues to fail, it’s time to upgrade your software. If you control your own servers, you can do this yourself and repeat the testing after upgrading. Make sure you also review your firewall setup and verify that your firewalls do not drop packets with EDNS extensions. Also, check that your software is configured to support requests via both Transmission Control Protocol (TCP) and User Datagram Protocol (UDP).

If your services are hosted somewhere else, you’ll have to convince your host to upgrade. Be sure to rule out the possibility that security protection measures put in place by your host are the cause of the test failure. Some hosts implement rate limiting, and the volume of test queries could potentially be seen as an attack. It’s best to conduct the tests on different occasions. You can then use the information provided by PowerDNS and cz.nic to convince them of the necessity to upgrade.

What DNS Security Features Can I Deploy?

There is an additional benefit to making your DNS software compliant with the new standards and applying the latest updates; it allows you to make use of some of the new security features.

DNS Cookies

One of these features is the so-called DNS cookie. This cookie is an EDNS option that, if supported by both the server and the client, can help you ignore spoofed responses. Cookies are one of the mechanisms you can implement as protection against DDoS attacks.

A server can use DNS cookies to rate limit the number of requests a client can send. Here, cookies are used to whitelist known clients for Response Rate Limiting (RRL). The queries coming from clients with valid cookies will not be rate limited.

This mechanism can also be used to prevent a server from replying to spoofed packets. By using cookies, a server can separate legitimate requests from spoofed queries — as long as the client also supports the cookie option. This effectively prevents your server from flooding a victim with responses if the source IP was spoofed to the address of the victim. Some traffic (the error messages) will still reach the victim, but the total volume will be low.

An additional benefit is that resolvers can also reject replies that do not contain the correct cookies, preventing cache poisoning and answer forgery attacks.

As with most features, there’s also a downside to DNS cookies. Just like traditional HTTP cookies, DNS cookies can be used to track users on the web. One difference is that DNS cookies can also be used outside a web browser.

It will probably take some time before cookies are supported by all servers. Does this mean you should wait? No! Servers that are properly compliant with regard to EDNS options but do not support cookies will have no issues; they will simply ignore the cookies.
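To experiment with cookies yourself, dnspython can attach the COOKIE EDNS option (option code 10, RFC 7873) to a query. Another hedged sketch: the resolver address is arbitrary, and the option-handling API reflects dnspython 2.x as I understand it:

```python
import os

import dns.edns
import dns.message
import dns.query

# Attach an 8-byte client cookie to the query. A cookie-aware server echoes
# it back with its own server cookie appended; both sides can then cheaply
# reject spoofed traffic that lacks a valid cookie.
client_cookie = os.urandom(8)
option = dns.edns.GenericOption(dns.edns.COOKIE, client_cookie)
query = dns.message.make_query("example.com", "A", use_edns=0,
                               options=[option])
reply = dns.query.udp(query, "9.9.9.9", timeout=5)

# Servers without cookie support simply ignore the option, as EDNS requires.
returned = [o for o in reply.options if o.otype == dns.edns.COOKIE]
print("server returned a cookie:", bool(returned))
```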

DNS Security Extensions

DNSSEC is another feature that adds a layer of security to the lookup and exchange processes for domain name resolution. It requires the DNS records for a given domain to be digitally signed, and protects users (and systems) from going to unintended, possibly fraudulent, locations. Note that DNSSEC does not prevent perpetrators from listening in on traffic; it will not encrypt your traffic or queries. Also, to be successful, this feature needs to be implemented for both signing (the zones) and validation (the responses).

DNSSEC is a very effective solution for mitigating DNS spoofing or hijacking attacks. A spoofing attack is when an attacker is able to inject altered data into a resolver’s cache. A hijacking attack is when an attacker is able to force the victim into using a different nameserver (resolver) or is able to change the actual resource records that are configured for a domain. The net result is that, as a victim of such an attack, you get redirected to infrastructure under the control of the attacker. These widespread attacks can also potentially lead to attackers gaining access to your sensitive data, including, for example, the authentication credentials for email.

Unfortunately, despite the fact that the core DNSSEC specifications were published in RFCs back in 2005 and about 90 percent of top-level domains (TLDs) are now signed, its adoption rate is still fairly low. If you want to focus on further protecting your domain name infrastructure, then it’s absolutely worth investing some extra time and resources in deploying DNSSEC for your domains.
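A quick way to check what validation buys you: ask a validating resolver for a record with the DNSSEC-OK bit set and look for the AD (Authenticated Data) flag in the answer. Another hedged dnspython sketch; treating 9.9.9.9 as a validating resolver is an assumption on my part:

```python
import dns.flags
import dns.message
import dns.query

# Set the DO bit (want_dnssec) and see whether the resolver vouches for the
# response with the AD flag, i.e. the DNSSEC chain of trust validated.
query = dns.message.make_query("example.com", "A", want_dnssec=True)
reply = dns.query.udp(query, "9.9.9.9", timeout=5)
print("DNSSEC-validated:", bool(reply.flags & dns.flags.AD))
```

For an unsigned domain the flag stays clear, which is precisely the gap wider DNSSEC deployment would close.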

Have You Updated Yet?

Whether or not you plan on using DNS cookies or DNSSEC, planning an upgrade of your software to the latest version made available as part of DNS Flag Day is highly advisable. Not only will you experience a performance gain, but updating today allows you to make use of existing security features and those still to come.

The post Missed DNS Flag Day? It’s Not Too Late to Upgrade Your Domain Security appeared first on Security Intelligence.

Adopting the NIST 800-53 Control Framework? Learn More About the Anticipated Changes in 2019

The final version of the National Institute of Standards and Technology (NIST)’s Special Publication (SP) 800-53 Revision 5 is on the horizon for 2019. What does the initial public draft tell us about what we can expect in its final version? Even more importantly, what does it mean for organizations seeking to adopt the new guidelines?

NIST SP 800-53 Revision 5 is expected to deliver major updates to the existing fourth revision, which was originally published in 2013. Since its inception, this publication has been the de facto guideline for security control implementations, security assessments and Authorization to Operate (ATO) processes for government information systems. There are many draft changes in the fifth revision, but one of the most significant impacts is that it marks a departure from limiting the control sets to federal information systems. The framework is now recommended for all systems in all industries.

In addition to control baseline updates, other major changes NIST anticipates will be in the final version include:

  • A requirement that organizations designate a senior management official responsible for managing the security policies and procedures associated with each control family.
  • A more outcome-based structure for the controls, which leads to increased clarity, consistency and understanding.
  • Full integration of privacy controls into the security control catalog to create a consolidated view of all controls.
  • The addition of two new privacy control families: Individual Participation (IP) and Privacy Authorization (PA).
  • A Program Management (PM) control family that nearly doubles in scope, with additional emphasis on privacy and data management.
  • New appendices detailing the relationship between security and privacy controls.

What Will NIST 800-53 Rev. 5 Mean For Organizations?

The changes expected in the fifth revision touch on a variety of subjects and affect a wide range of business and security functions. Below are some areas that will be particularly affected and considerations that will have a significant impact on how organizations manage their security programs.

Senior Management Ownership

First and foremost, leadership accountability is given much greater emphasis across the framework. Organizations will need to identify key senior management personnel to own specific policy efforts and oversight actions for the life of each system. By driving accountability from the top down, organizations stand to benefit from executive sponsorship of security policies and gain better visibility into the effectiveness of governance controls and the organization’s overall security status.

Data Privacy

Dedicated privacy control families and new privacy guidance woven into existing controls drive greater focus on privacy and sensitive data management. Privacy needs to be ingrained into all aspects of cybersecurity now and in the future, especially with new regulations in place to protect personal data. Organizations may need to review their org chart to ensure it provides the most effective strategic alignment between C-suite, security and privacy teams. Ownership of control implementations between security and privacy will be a key decision point when transitioning to the final release of Revision 5 in the near future.

Third-Party Assessments

NIST SP 800-53A will undergo a fifth revision in conjunction with the updates to SP 800-53. This is the companion document third-party assessors use as part of the ATO process to determine the effectiveness of control implementations and evaluate risk posture. Implementing and adapting the updated controls will be crucial to new or existing ATO renewals in the long term.

How Can Business Leaders Enhance Security Over Time?

Chief information officers (CIOs), chief information security officers (CISOs) and other organizational leaders need to start thinking about how to advance security and privacy initiatives in unison to achieve business goals and manage risk effectively. The update to NIST 800-53 will affect each organization differently, so it’s still important to perform due diligence to determine how the final changes apply in each unique situation. As a whole, however, adopting the recommended guidelines serves to unify security standards and help all organizations strengthen their security posture as the threat and regulatory landscapes evolve.

Additional information and the full list of changes in the NIST 800-53 Revision 5 draft can be found on the NIST website, along with the publication schedule.

The post Adopting the NIST 800-53 Control Framework? Learn More About the Anticipated Changes in 2019 appeared first on Security Intelligence.

SecurityWeek RSS Feed: D.C. Attorney General Introduces New Data Security Bill

Karl A. Racine, the attorney general for the District of Columbia, on Thursday announced the introduction of a new bill that aims to expand data breach notification requirements and improve the way personal information is protected by organizations.


SecurityWeek RSS Feed: Kaspersky Files Complaint Against Apple Over App Store Policy

Kaspersky Lab on Tuesday filed a complaint against Apple with the Russian Federal Antimonopoly Service after the tech giant introduced a new App Store policy requiring it to remove some important features from its Safe Kids app.


Unsurprisingly, only 14% of companies are compliant with CCPA

With less than 10 months before the California Consumer Privacy Act (CCPA) goes into effect, only 14% of companies are compliant with CCPA and 44% have not yet started the implementation process. Of companies that have worked on GDPR compliance, 21% are compliant with CCPA, compared to only 6% for companies that did not work on GDPR, according to the TrustArc survey conducted by Dimensional Research. “At TrustArc, we’ve seen a significant increase in the … More

The post Unsurprisingly, only 14% of companies are compliant with CCPA appeared first on Help Net Security.

Smarter Vendor Security Assessments: Tips to Improve Response Rates

I have been on the receiving end of many vendor security assessments from customers and prospects.  Here are some tips to increase the likelihood that you’ll get a timely, usable response to the next vendor security assessment that you send out. Understand what data you will be providing One size doesn’t fit all. The level […]… Read More

The post Smarter Vendor Security Assessments: Tips to Improve Response Rates appeared first on The State of Security.


SecurityWeek RSS Feed: EU to Slap Google With Fresh Fine: Sources

The EU's anti-trust regulator is to slap tech giant Google with a new fine over unfair competition practices, sources told AFP on Friday.

Brussels has targeted the Silicon Valley firm's AdSense advertising service, saying it restricts some client websites from displaying ads from third parties.


NBlog March 17 – cat-skinning

Incident reporting is a key objective of April's NoticeBored module. More specifically, we'd like workers to report information security matters promptly. 

So how might we achieve that through the awareness and training materials? Possible approaches include:
  1. Tell them to report incidents. Instruct them. Give them a direct order.
  2. Warn them about not doing it. Perhaps threaten some form of penalty if they don't.
  3. Convince them that it is in the organization's interests for workers to report stuff. Persuade them of the value.
  4. Convince workers that it is in their own best interest to report stuff. Persuade them.
  5. Explain the reporting requirement (e.g. what kinds of things should they report, and how?) and encourage them to do so.
  6. Make reporting incidents 'the easy option'.
  7. Reward people for reporting incidents.
  8. Something else? Trick them? Goad them? Follow up on those who did not report stuff promptly, asking about their reasons?
Having considered all of them, we'll combine a selection of these approaches in the awareness content and the train-the-trainer guide.

In the staff seminar and staff briefing, for instance, the line we're taking is to describe everyday situations where reporting incidents directly benefits the reporter (approach #4 in the list). Having seeded the idea in the personal context, we'll make the connection to the business context (#3) and expand a little on what ought to be reported (#5) ... and that's pretty much it for the general audience. 

For managers, there is mileage in #1 (policies and procedures) and #7 (an incentive scheme?) ... and #8 in the sense that we are only suggesting approaches, leaving NoticeBored subscribers to interpret or adapt them as they wish. Even #2 might be necessary in some organizations, although it is rather negative compared to the alternatives. 

For professionals, #6 hints at designing reporting systems and processes for ease of use, encouraging people to report stuff ... and, where appropriate, automatic reporting if specific criteria are met, which takes the awareness materials into another potentially interesting area. If the professionals are prompted at least to think about the issue, our job is done.

Mandatory reporting of incidents to third parties is a distinct but important issue, especially for management. The privacy breach reporting deadline under GDPR (a topical example) is a very tough challenge for some organizations, requiring substantial changes in their approach to internal incident reporting, escalation and external reporting, and more generally the attitudes of those involved, making this a cultural issue. 

Google Took Down 2.3 Billion Bad Ads in 2018

Google this week revealed that it took down 2.3 billion bad ads last year, including 58.8 million phishing ads.

The ads were taken down for violations of both new and existing policies, and the Internet company said it faced challenges in areas where online advertising was used to scam or defraud users offline.



NATO Takes Huawei Security Concerns Seriously: Stoltenberg

Security concerns about the role of Huawei in Western 5G telecom infrastructure are to be taken seriously, the head of NATO said Thursday, as Washington steps up pressure on Europe not to use the Chinese firm.


Data breach reports delayed as organizations struggle to achieve GDPR compliance

Businesses routinely delayed data breach disclosure and failed to provide important details to the ICO in the year prior to the GDPR’s enactment. On average, businesses waited three weeks after discovery to report a breach to the ICO, while the worst offending organization waited 142 days. The vast majority (91%) of reports to the ICO failed to include important information such as the impact of the breach, recovery process and dates, according to Redscan’s … More

The post Data breach reports delayed as organizations struggle to achieve GDPR compliance appeared first on Help Net Security.

NBlog March 14 – carving up the policy pie


Today being Pi day 2019, think of the organization's suite of policies as a delicious pie with numerous ingredients, maybe a crunchy crust and toppings. Whether it's an award winning blue cheese and steak pie from my local baker, or a pecan pie with whipped cream and honey, the issue I'm circling around is how to slice up the pie. Are we going for symmetric segments, chords or layers? OK, enough of the pi-puns already, today I'm heading off at a tangent, prompted by an ongoing discussion around policies on the ISO27k Forum - specifically a thread about policy compliance.

Last month I blogged about policy management. Today I'll explore the policy management process and governance in more depth in the context of information risk and security or cybersecurity if you will.

In my experience, managers who are reluctant or unable to understand the [scary cyber] policy content stick to the bits they can do, i.e. the formalities of 'policy approval' ... and that's about it. They leave the experts to write the guts of the policy, and even take their lead on whether there ought to be a policy at all, plus what the actual policy position should be. I rather suspect some don't even properly read and understand the policies they are asked to approve, not that they'd ever admit it!

The experts, in turn, naturally concentrate on the bits they are most comfortable with, namely writing that [cyber] content. Competent and experienced policy authors are well aware of the potential implications of [cyber] policies in their areas of specialty, so a lot of their effort goes into the fine details, crafting the specific wording to achieve [their view of] the intended effect with the least amount of collateral damage: they are busy down in the weeds of the standards and procedures, thinking especially about implementation issues and practicalities rather than true policies. For some of them anyway, everything else is dismissed as 'mere formalities'. 

Incompetent and inexperienced policy authors - well, they just kind of have a go at it in the hope that either it's good enough or maybe someone else will sort it out. Mostly they don't even appreciate the issues I'm discussing. Those dreadful policies written in pseudo-legal language are a bit of a giveaway, plus the ones that are literally unworkable, half-baked, sometimes unreadable and usually unhelpful. Occasionally worse than useless. 

Many experts and managers address each policy independently as if it exists in a vacuum, potentially leading to serious issues down the road such as direct conflicts with other policies and directives, perhaps even laws, regulations, strategies, contractual commitments, statements of intent, corporate values and so forth. Pity the poor worker instructed to comply with everything! The underlying issue is that the policies, procedures, directives, laws etc. form a complex and dynamic multidimensional matrix including but stretching far beyond the specific subject area of any one: they should all support and complement each other with few overlaps and no conflicts or gaps but good luck to anyone trying to achieve that in practice! Simply locating and mapping them all would be a job in itself, let alone consistently managing the entire suite as a coherent whole. 

So, in practice, organizations normally structure their policies into clusters around business departments such as finance, IT and HR. If we're lucky, the policies use templates, making them reasonably consistent in style and tone, look and feel, across all areas, and hopefully consistent in content within each area ... but that enterprise-wide consistency and integration of the entire suite is almost as rare as trustworthy politicians.

That, to me, smells very much like a governance issue. Where is the high-level oversight, vision and direction? What kind of pie is it and how should it be sliced? Should 'cyber' policies (whatever that means) be part of the IT domain, or risk, or [information or IT] security, or assurance ... or should they form another distinct cluster? Who is going to deal with all those boundaries and interfaces, potential conflicts and overlaps? And how, in fact? 

But wait, there's more! Re the process, have you ever seen one of those, in practice - an actual, designed, documented and operational Policy Management Process? They do exist but I suspect only in mature, strongly ISO 9000-driven quality assurance cultures such as aerospace, or compliance-driven cultures such as finance, or highly bureaucratic organizations such as governments. Most organizations just seem to muddle through, essentially making things up as they go along. As auditors, we consider ourselves fortunate to find the basics such as identified policy owners and issue/approval status with a date! Refinements such as version numbers, defined review cycles, and systematic review processes, are sheer luxuries. As to proactively managing the entirety of the policy lifecycle from concept through to retirement, nah forgeddabahtit! 
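For what it's worth, even rudimentary records make those basics auditable. Here's a minimal sketch of a policy register entry in Python - the field names, review cycle and sample records are purely illustrative assumptions, not a prescription for any particular organization:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Illustrative policy register entry: the fields mirror the 'basics'
# an auditor hopes to find (owner, status, approval date) plus the
# 'luxuries' (version, review cycle). All names are assumptions.
@dataclass
class PolicyRecord:
    title: str
    owner: str                     # named, accountable policy owner
    version: str
    status: str                    # e.g. "draft", "approved", "retired"
    approved_on: Optional[date] = None
    review_cycle_days: int = 365   # assumed annual review cycle

    def review_overdue(self, today: date) -> bool:
        """True if the policy was never approved or has missed its review date."""
        if self.approved_on is None:
            return True
        return today > self.approved_on + timedelta(days=self.review_cycle_days)

# Usage: flag the policies an auditor would query
register = [
    PolicyRecord("Acceptable Use", "CISO", "2.1", "approved", date(2017, 3, 1)),
    PolicyRecord("Information Classification", "CISO", "0.9", "draft"),
]
print([p.title for p in register if p.review_overdue(date(2019, 4, 11))])
# -> ['Acceptable Use', 'Information Classification']
```

Even that little table of metadata, kept current, would answer most of the housekeeping questions above; the hard part, as ever, is the process and ownership around it.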

Compliance is an example of something that ought to be addressed in the policy management process, ideally leading to the compliance aspects being designed and then documented in the policies themselves, and supported at implementation time by associated awareness and training, metrics and activities to both enforce and reinforce compliance. Again, in practice, we're lucky if there is any real effort to 'implement' new policies: it's often an afterthought.

Finally, there's the time dimension: I just mentioned process maturity and policy lifecycle, but that's not all. The requirements and the organizational context are also dynamic. Laws, regs, contractual terms, standards and societal norms frequently change, sometimes quite sharply and dramatically (GDPR for a recent example) but usually more subtly. Statutes are relatively stable but the way they are interpreted and used in practice ('case law') evolves, especially early and late in their lifecycles - a bathtub curve. Various implementation challenges and incidents within the organization quite often lead to calls to 'update the policies and procedures', whether that's amending or drafting (seldom explicitly withdrawing or retiring failed or superseded policies!), plus there's the constant ebb and flow of new/amended policies (and strategies and objectives and ...) throughout the business - a version of the butterfly effect from chaos theory. And of course the people change. We come and go. We each have our interests and concerns, our blind spots and hot buttons. 

Bottom line: it's a mess because of those complications and dynamics. You may feel I'm over-complicating matters and yes maybe I am for the purposes of drawing attention to the issues ... but then I've been doing this stuff for decades, often stumbling across and trying to deal with similar issues in various organizations along the way. I see patterns. YMMV. 

I'm not sure these issues are even solvable but I believe that, as professionals, we could and should do better. This is the kind of thing that ISO27k could get further into, providing succinct, generic advice based on (I guess) ISO 9000 and governance practices. 

There's still more to say on this - another time. Meanwhile, I must press on with the awareness and training materials on 'spotting incidents'.

The Power of Vulnerability Management: Are You Maximizing Its Value?

Tripwire has been in the business of providing vulnerability management solutions with IP360 for about 20 years. With over 20,000 vulnerabilities discovered last year alone, vulnerability management continues to be an important part of most security plans. And most organizations agree. In a recent survey, 89 percent of respondents said that their organization runs vulnerability […]… Read More

The post The Power of Vulnerability Management: Are You Maximizing Its Value? appeared first on The State of Security.

Security roundup: March 2019

We round up interesting research and reporting about security and privacy from around the web. This month: ransomware repercussions, reporting cybercrime, vulnerability volume, everyone’s noticing privacy, and feeling GDPR’s impact.

Ransom vs ruin

Hypothetical question: how long would your business hold out before paying to make a ransomware infection go away? For Apex Human Capital Management, a US payroll software company with hundreds of customers, it was less than three days. Apex confirmed the incident, but didn’t say how much it paid or reveal which strain of ransomware was involved.

Interestingly, the story suggests that the decision to pay was a consensus between the company and two external security firms. This could be because the ransomware also encrypted data at Apex’s newly minted external disaster recovery site. Most security experts strongly advise against paying extortionists to remove ransomware. With that in mind, here’s our guide to preventing ransomware. We also recommend visiting NoMoreRansom.org, which has information about infections and free decryption tools.

Bonus extra salutary security lesson: while we’re on the subject of backup failure, a “catastrophic” attack wiped the primary and backup systems of the secure email provider VFEmail. Effectively, the lack of backup put the company out of business. As Brian Honan noted in the SANS newsletter, this case shows the impact of badly designed disaster recovery procedures.

Ready to report

If you’ve had a genuine security incident – neat segue alert! – you’ll probably need to report it to someone. That might be your local CERT (computer emergency response team), a regulator, or even law enforcement. (It’s called cybercrime for a reason, after all.) Security researcher Bart Blaze has developed a template for reporting a cybercrime incident which you might find useful. It’s free to download at Peerlyst (sign-in required).

By definition, a security incident will involve someone deliberately or accidentally taking advantage of a gap in an organisation’s defences. Help Net Security recently carried an op-ed arguing that it’s worth accepting that your network will be infiltrated or compromised. The key to recovering faster involves a shift in mindset and strategy from focusing on prevention to resilience. You can read the piece here. At BH Consulting, we’re big believers in the concept of resilience in security. We’ve blogged about it several times over the past year, including posts like this.

In incident response and in many aspects of security, communication will play a key role. So another helpful resource is this primer on communicating security subjects with non-experts, courtesy of SANS’ Lenny Zeltser. It takes a “plain English” approach to the subject and includes other links to help security professionals improve their messaging. Similarly, this post from Raconteur looks at language as the key to improving collaboration between a CISO and the board.

Old flaws in not-so-new bottles

More than 80 per cent of enterprise IT systems have at least one flaw listed on the Common Vulnerabilities and Exposures (CVE) list. One in five systems has more than ten such unpatched vulnerabilities. Those are some of the headline findings in the 2019 Vulnerability Statistics Report from Irish security company Edgescan.

Edgescan concluded that the average window of exposure for critical web application vulnerabilities is 69 days. Per the report, an average enterprise takes around 69 days to patch a critical vulnerability in its applications and 65 days to patch the same in its infrastructure layers. High-risk and medium-risk vulnerabilities in enterprise applications take up to 83 days and 74 days respectively to patch.

SC Magazine’s take was that many of the problems in the report come from companies lacking full visibility of all their IT assets. The full Edgescan report has even more data and conclusions and is free to download here.

From a shrug to a shun

Privacy practitioners take note: consumer attitudes to security breaches appear to be shifting at last. PCI Pal, a payment security company, found that 62 per cent of Americans and 44 per cent of Britons claim they will stop spending with a brand for several months following a hack or breach. The reputational hit from a security incident could be greater than the cost of repair. In a related story, security journalist Zack Whittaker has taken issue with the hollow promise of websites everywhere. You know the one: “We take your privacy seriously.”

If you notice this notice…

Notifications of data breaches have increased since GDPR came into force. The European Commission has revealed that companies made more than 41,000 data breach notifications in the six-month period since May 25. Individuals or organisations made more than 95,000 complaints, mostly relating to telemarketing, promotional emails and video surveillance. Help Net Security has a good writeup of the findings here.

It was a similar story in Ireland, where the Data Protection Commission saw a 70 per cent increase in reported valid data security breaches, and a 56 per cent increase in public complaints compared to 2017. The summary data is here and the full 104-page report is free to download.

Meanwhile, Brave, the privacy-focused browser developer, argues that GDPR doesn’t make doing business harder for a small company. “In fact, if purpose limitation is enforced, GDPR levels the playing field versus large digital players,” said chief policy officer Johnny Ryan.

Interesting footnote: a US insurance company, Coalition, has begun offering GDPR-specific coverage. Dark Reading quotes a lawyer who said insurance might be effective for risk transference, but it’s untested. Much will depend on the policy’s wording, the lawyer said.

Things we liked

Lisa Forte’s excellent post draws parallels between online radicalisation and cybercrime. MORE

Want to do some malware analysis? Here’s how to set up a Windows VM for it. MORE

You give apps personal information. Then they tell Facebook (PAYWALL). MORE

Ever wondered how cybercriminals turn their digital gains into cold, hard cash? MORE

This 190-second video explains cybercrime to a layperson without using computers. MORE

Blaming the user for security failings is a dereliction of responsibility, argues Ira Winkler. MORE

Tips for improving cyber risk management. MORE

Here’s what happens when you set up an IoT camera as a honeypot. MORE

The post Security roundup: March 2019 appeared first on BH Consulting.

How to Pick the Right Solution for FISMA SI-7 Compliance

It can be hard to know how to best allocate your federal agency’s resources and talent to meet FISMA compliance, and a big part of that challenge is feeling confident that you’re choosing the right cybersecurity and compliance reporting solution. A Few FISMA SI-7 Basics So what sorts of specifications do you need to look […]… Read More

The post How to Pick the Right Solution for FISMA SI-7 Compliance appeared first on The State of Security.

Why Is Penetration Testing Critical to the Security of the Organization?

A complete security program involves many different facets working together to defend against digital threats. To create such a program, many organizations spend much of their resources on building up their defenses by investing in their security configuration management (SCM), file integrity monitoring (FIM), vulnerability management (VM) and log management capabilities. These investments make sense, […]… Read More

The post Why Is Penetration Testing Critical to the Security of the Organization? appeared first on The State of Security.

Threat Actor Using Fake LinkedIn Job Offers to Deliver More_eggs Backdoor

Security researchers discovered that a threat actor is targeting LinkedIn users with fake job offers to deliver the More_eggs backdoor.

Since mid-2018, Proofpoint has observed various campaigns distributing More_eggs, each of which began with a threat actor creating a fraudulent LinkedIn profile. The attacker used these accounts to contact targeted employees at U.S. companies — primarily in retail, entertainment, pharmaceuticals and other industries that commonly employ online payments — with a fake job offer via LinkedIn messaging.

A week after sending these messages, the attacker contacted the targeted employees directly using their work email to remind them of their LinkedIn correspondence. This threat actor incorporated the targets’ professional titles into subject lines and sometimes asked recipients to click on a link to a job description. Other times, the message contained a fake PDF with embedded links.

These URLs all pointed to a landing page that spoofed a legitimate talent and staffing management company. There, the target received a prompt to download a Microsoft Word document that, once macros were enabled, downloaded the More_eggs backdoor. Written in JScript, this backdoor malware is capable of downloading additional payloads and profiling infected machines.

A Series of Malicious Activities on LinkedIn

The threat actor responsible for these campaigns appears to have had a busy 2019 so far. Proofpoint found ties between these operations and a campaign first disclosed by Krebs on Security in which phishers targeted anti-money laundering officers at U.S. credit unions. Specifically, the security firm observed similar PDF email attachments and URLs all hosted on the same domain.

This isn’t the first time an online actor has used LinkedIn for malicious activity, either. Back in September 2017, Malwarebytes Labs found evidence of attackers compromising people’s LinkedIn accounts and using them to distribute phishing links via private messages. Less than a year later, Alex Hartman of Network Solutions, Inc. disclosed a similar campaign in which threat actors attempted to spread malware via LinkedIn using fake business propositions.

How to Defend Against Backdoors Like More_eggs

Security professionals can help defend against backdoors like More_eggs by consistently monitoring endpoints and devices for suspicious activity. Security teams should simultaneously use real-time compliance rules to automate remediation in the event they observe behavior that appears to be malicious.
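As a simplified illustration of that kind of monitoring, the sketch below flags Office applications spawning script hosts - the macro-to-JScript delivery pattern described above. The event format and process names are assumptions for the example, not any particular product's API:

```python
# Hypothetical endpoint-monitoring rule: alert when an Office application
# spawns a script interpreter, as happens when a malicious Word macro
# launches a JScript payload. The event dicts are an assumed log format.
OFFICE_APPS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SCRIPT_HOSTS = {"wscript.exe", "cscript.exe", "mshta.exe", "powershell.exe"}

def suspicious(event: dict) -> bool:
    """True if a script host was launched by an Office parent process."""
    parent = event.get("parent_image", "").lower()
    child = event.get("image", "").lower()
    return parent in OFFICE_APPS and child in SCRIPT_HOSTS

# Example process-creation events (entirely made up for illustration)
events = [
    {"parent_image": "WINWORD.EXE", "image": "wscript.exe",
     "command_line": "wscript.exe payload.js"},
    {"parent_image": "explorer.exe", "image": "notepad.exe",
     "command_line": "notepad.exe notes.txt"},
]
for e in events:
    if suspicious(e):
        print("ALERT:", e["command_line"])  # hand off to automated remediation
```

A real deployment would rely on proper endpoint telemetry and tuned rules, of course; the point is simply that this delivery chain leaves a detectable parent-child process pattern.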

Additionally, experts recommend testing the organization’s phishing defenses by contacting a reputable penetration testing service that employs the same tactics, techniques and procedures (TTPs) as digital criminals.

The post Threat Actor Using Fake LinkedIn Job Offers to Deliver More_eggs Backdoor appeared first on Security Intelligence.

Max Schrems: lawyer, regulator, international man of privacy

Almost one decade ago, disparate efforts began in the European Union to change the way the world thinks about online privacy.

One effort focused on legislation, pulling together lawmakers from 28 member-states to discuss, draft, and deploy a sweeping set of provisions that, today, has altered how almost every single international company handles users’ personal information. The finalized law of that effort—the General Data Protection Regulation (GDPR)—aims to protect the names, addresses, locations, credit card numbers, IP addresses, and even, depending on context, hair color, of EU citizens, whether they’re customers, employees, or employers of global organizations.

The second effort focused on litigation and public activism, sparking a movement that has raised nearly half a million dollars to fund consumer-focused lawsuits meant to uphold the privacy rights of EU citizens, and has resulted in the successful dismantling of a 15-year-old intercontinental data-transfer agreement for its failure to protect EU citizens’ personal data. The 2015 ruling sent shockwaves through the security world, and forced companies everywhere to scramble to comply with a regulatory system thrown into flux.

The law was passed. The movement is working. And while countless individuals launched investigations, filed lawsuits, participated in years-long negotiations, published recommendations, proposed regulations, and secured parliamentary approval, we can trace these disparate yet related efforts back to one man—Maximilian Schrems.

Remarkably, as the two efforts progressed separately, they began to inform one another. Today, they work in tandem to protect online privacy. And businesses around the world have taken notice.

The impact of GDPR today

A Portuguese hospital, a German online chat platform, and a Canadian political consultancy all face GDPR-related fines issued last year. In January, France’s National Data Protection Commission (CNIL) hit Google with a 50-million-euro penalty—the largest GDPR fine to date—after an investigation found a “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization.”

The investigation began, CNIL said, after it received legal complaints from two groups: the nonprofit La Quadrature du Net and the non-governmental organization None of Your Business. None of Your Business, or noyb for short, counts Schrems as its honorary director. In fact, he helped crowdfund its launch last year.

Outside the European Union, lawmakers are watching these one-two punches as a source of inspiration.

When testifying before Congress about a scandal involving misused personal data, the 2016 US presidential election, and a global disinformation campaign, Facebook CEO Mark Zuckerberg repeatedly heard calls to regulate his company and its data-mining operations.

“The question is no longer whether we need a federal law to protect consumers’ privacy,” said Republican Senator John Thune of South Dakota. “The question is what shape will that law take.”

Democratic Senator Mark Warner of Virginia put it differently: “The era of the Wild West in social media is coming to an end.”

A new sheriff comes to town

In 2011, Schrems was a 23-year-old law student from Vienna, Austria, visiting the US to study abroad. He enrolled in a privacy seminar at the Santa Clara University School of Law where, along with roughly 22 other students, he learned about online privacy law from one of the field’s notable titans.

Professor Dorothy Glancy practiced privacy law before it had anything to do with the Internet, cell phones, or Facebook. Instead, she navigated the world of government surveillance, wiretaps, and domestic spying. She served as privacy counsel to one of the many subcommittees that investigated the Watergate conspiracy.

Later, still working for the subcommittee, she examined the number of federal agency databases that contained people’s personally identifiable information. She then helped draft the Privacy Act of 1974, which restricted how federal agencies collected, used, and shared that information. It is one of the first US federal privacy laws.

The concept of privacy has evolved since those earlier days, Glancy said. It is no longer solely about privacy from the government. It is also about privacy from corporations.

“Over time, it’s clear that what was, in the 70s, a privacy problem in regards to Big Brother and the federal government, has now gotten so that a lot of these issues have to do with the private [non-governmental] collection of information on people,” Glancy said.

In 2011, one of the biggest private, non-governmental collectors of that information was Facebook. So, when Glancy’s class received a guest presentation from Facebook privacy lawyer Ed Palmieri, Schrems paid close attention, and he didn’t like what he heard.

For starters, Facebook simply refused to heed Europe’s data privacy laws.

Speaking to 60 Minutes, Schrems said: “It was obviously the case that ignoring European privacy laws was the much cheaper option. The maximum penalty, for example, in Austria, was 20,000 euros. So, just a lawyer telling you how to comply with the law was more expensive than breaking it.”

Further, according to Glancy, Palmieri’s presentation showed that Facebook had “absolutely no understanding” about the relationship between an individual’s privacy and their personal information. This blind spot concerned Schrems to no end. (Palmieri could not be reached for comment.)

“There was no understanding at all about what privacy is in the sense of the relationship to personal information, or to human rights issues,” Glancy said. “Max couldn’t quite believe it. He didn’t quite believe that Facebook just didn’t understand.”

So Schrems investigated. (Schrems did not respond to multiple interview requests and he did not respond to an interview request forwarded by his colleagues at Noyb.)

Upon returning to Austria, Schrems decided to figure out just how much information Facebook had on him. The answer was astonishing: Facebook sent Schrems a 1,200-page PDF that detailed his location history, his contact information, information about past events he attended, and his private Facebook messages, including some he thought he had deleted.

Shocked, Schrems started a privacy advocacy group called “Europe v. Facebook” and uploaded redacted versions of his own documents onto the group’s website. The revelations touched a public nerve—roughly 40,000 Europeans soon asked Facebook for their own personal dossiers.

Schrems then went legal. With Facebook’s international headquarters in Ireland, he filed 22 complaints with Ireland’s Data Protection Commissioner, alleging that Facebook was violating EU data privacy law. Among the allegations: Facebook didn’t really “delete” posts that users chose to delete, Facebook’s privacy policy was too vague and unclear to constitute meaningful consent by users, and Facebook engaged in illegal “excessive processing” of user data.

The Irish Data Protection Commissioner rolled Schrems’ complaints into an already-running audit into Facebook, and, in December 2011, released non-binding guidance for the company. Facebook’s lawyers also met with Schrems in Vienna for six hours in February 2012.

And then, according to Schrems’ website, only silence and inaction from both Facebook and the Irish Data Protection Commissioner’s Office followed. There were no meaningful changes from the company. And no stronger enforcement from the government.

Frustrating as it may have been, Schrems kept pressing. Luckily, according to Glancy, he was just the right man for the job.

“He is innately curious,” Glancy said. “Once he sees something that doesn’t quite seem right, he follows it up to the very end.”

Safe Harbor? More like safety not guaranteed

On June 5, 2013, multiple newspapers exposed two massive surveillance programs in use by the US National Security Agency. One program, then called PRISM (now called Downstream), implicated some of the world’s largest technology companies, including Facebook.

Schrems responded by doing what he did best: He filed yet another complaint against Facebook—his 23rd—with the Irish Data Protection Commissioner. Facebook Ireland, Schrems claimed, was moving his data to Facebook Inc. in the US, where, according to The Guardian, the NSA enjoyed “mass access” to user data. Though Facebook and other companies denied their participation, Schrems doubted the accuracy of these statements.

“There is probable cause to believe that ‘Facebook Inc’ is granting the NSA mass access to its servers that goes beyond merely individual requests based on probable cause,” Schrems wrote in his complaint. “The statements by ‘Facebook Inc’ are in light of the US laws not credible, because ‘Facebook Inc’ is bound by so-called ‘gag orders.’”

Schrems argued that, when his data left EU borders, EU law required that it receive an “adequate level of protection.” Mass surveillance, he said, violated that.

The Irish Data Protection Commissioner disagreed. The described EU-to-US data transfer was entirely legal, the Commissioner said, because of Safe Harbor, a data privacy carve-out approved much earlier.

In 1995, the EU adopted the Data Protection Directive, which, up until 2018, regulated the treatment of EU citizens’ personal data. In 2000, the European Commission approved an exception to the law: US companies could agree to a set of seven principles, called the Safe Harbor Privacy Principles, to allow for data transfer from the EU to the US. This self-certifying framework proved wildly popular. For 15 years, nearly every single company that moved data from the EU to the US relied, at least briefly, on Safe Harbor.

Unsatisfied, Schrems asked the Irish High Court to review the Data Protection Commissioner’s inaction. In October 2013, the court agreed. Schrems celebrated, calling out the Commissioner’s earlier decision.

“The [Data Protection Commissioner] simply wanted to get this hot potato off his table instead of doing his job,” Schrems said in a statement at the time. “But when it comes to the fundamental rights of millions of users and the biggest surveillance scandal in years, he will have to take responsibility and do something about it.”

Less than one year later, the Irish High Court came back with its decision—the Court of Justice of the European Union would need to review Safe Harbor.

On March 24, 2015, the Court heard oral arguments for both sides. Schrems’ legal team argued that Safe Harbor did not provide adequate protection for EU citizens’ data. The European Commission, defending the Irish DPC’s previous decision, argued the opposite.

When asked by the Court how EU citizens might best protect themselves from the NSA’s mass surveillance, the lawyer arguing in favor of Safe Harbor made a startling admission:

“You might consider closing your Facebook account, if you have one,” said Bernhard Schima, advocate for the European Commission, all but admitting that Safe Harbor could not protect EU citizens from overseas spying. When asked more directly if Safe Harbor provided adequate protection of EU citizens’ data, the European Commission’s legal team could not guarantee it.

On September 23, 2015, the Court’s advocate general issued his initial opinion—Safe Harbor, in light of the NSA’s mass surveillance programs, was invalid.

“Such mass, indiscriminate surveillance is inherently disproportionate and constitutes an unwarranted interference with the rights [to respect for privacy and family life and protection of personal data,]” the opinion said.

Less than two weeks later, the entire Court of Justice agreed.

Ever a lawyer, Schrems responded to the decision with a 5,500-word blog post (assigned a non-commercial Creative Commons public copyright license) exploring current data privacy law, Safe Harbor alternatives, company privacy policies, a potential Safe Harbor 2.0, and mass surveillance. Written with “limited time,” Schrems thanked readers for pointing out typos.

The General Data Protection Regulation

Before the Court of Justice struck down Safe Harbor, before Edward Snowden shed light on the NSA’s mass surveillance, before Schrems received a 1,200-page PDF documenting his digital life, and before that fateful guest presentation in professor Glancy’s privacy seminar at Santa Clara University School of Law, a separate plan was already under way to change data privacy.

In November 2010, the European Commission, which proposes legislation for the European Union, considered a new policy with a clear goal and equally clear title: “A comprehensive approach on personal data protection in the European Union.”

Many years later, it became GDPR.

During those years, the negotiating committees looked to Schrems’ lawsuits as highly informative, Glancy said, because Schrems had successfully proven the relationship between the EU Charter of Fundamental Rights and its application to EU data privacy law. Ignoring that expertise would be foolish.

“Max [Schrems] was a part of just about all the committees working on [GDPR]. His litigation was part of what motivated the adoption of it,” Glancy said. “The people writing the GDPR would consult him as to whether it would solve his problems, and parts of the very endless writing process were also about what Max [Schrems] was not happy with.”

Because Schrems did not respond to multiple interview requests, it is impossible to know his precise involvement in GDPR. His Twitter and blog have no visible, corresponding entries about GDPR’s passage.

However, public records show that GDPR’s drafters recommended several areas of improvement in the year before the law passed, including clearer definitions of “personal information,” stronger investigatory powers to the EU’s data regulators, more direct “data portability” to allow citizens to directly move their data from one company to another while also obtaining a copy of that data, and better transparency in how EU citizens’ online profiles are created and targeted for ads.

GDPR eventually became a sweeping set of 99 articles that tightly govern the collection, storage, use, transfer, and disclosure of data belonging to all EU citizens, giving those citizens more direct control over how their data is treated.

For example, citizens have the “right to erasure,” in which they can ask a company to delete the data collected on them. Citizens also have the “right to access,” in which companies must provide a copy of the data collected on a person, along with information about how the data was collected, who it is shared with, and why it is processed.

Approved by a parliamentary vote in April 2016, GDPR took effect two years later.

GDPR’s immediate and future impact

On May 23, 2018, GDPR’s arrival was sounded not by trumpets, but by emails. Facebook, TicketMaster, eBay, PricewaterhouseCoopers, The Guardian, Marriott, KickStarter, GoDaddy, Spotify, and countless others began their public-facing GDPR compliance strategies by telling users about updated privacy policies. The email deluge inspired rankings, manic tweets, and even a devoted “I love GDPR” playlist. The blitz was so large, in fact, that several threat actors took advantage, sending fake privacy policy updates to phish for users’ information.

Since then, compliance looks less like emails and more like penalties.

Early this year, Google received its €50 million ($57 million) fine from France’s CNIL. Last year, a Portuguese hospital received a €400,000 fine for two alleged GDPR violations. Because of a July 2018 data breach, a German chat platform got hit with a €20,000 fine. And in the reported first-ever GDPR notice from the UK, Canadian political consultancy—and murky partner to Cambridge Analytica—AggregateIQ received a notice about potential fines of up to €20 million.

To Noyb, the fines are good news. Gaëtan Goldberg, a privacy lawyer with the NGO, said that data privacy law compliance has, for many years, been lacking. Hopefully GDPR, which Goldberg called a “major step” in protecting personal data, can help turn that around, he said.

“[We] hope to see strong enforcement measures being taken by courts and data protection authorities around the EU,” Goldberg said. “The fine of 50 [million] euros the French CNIL imposed on Google is a good start in this direction.”

The future of data privacy

Last year, when Senator Warner told Zuckerberg that “the era of the Wild West in social media is coming to an end,” he may not have realized how quickly that would come true. In July 2018, California passed a statewide data privacy law called the California Consumer Privacy Act. Months later, three US Senators proposed their own federal data privacy laws. And just this month, the Government Accountability Office recommended that Congress pass a data privacy law similar to GDPR.

Data privacy is no longer a concept. It is the law.

In the EU, that law has released a torrent of legal complaints. Hours after GDPR came into effect, Noyb lodged a series of complaints against Google, Facebook, Instagram, and WhatsApp.

Goldberg said the group’s legal complaints are one component of meaningful enforcement on behalf of the government. Remember: Google’s massive penalty began with an investigation that the French authorities said started after it received a complaint from Noyb.

Separately, privacy group Privacy International filed complaints against Europe’s data-brokers and advertising technology companies, and Brave, a privacy-focused web browser, filed complaints against Google and other digital advertising companies.

Google and Facebook did not respond to questions about how they are responding to the legal complaints. Facebook also did not respond to questions about its previous legal battles with Schrems.

Electronic Frontier Foundation International Director Danny O’Brien wrote last year that, while we wait for the results of the above legal complaints, GDPR has already motivated other privacy-forward penalties and regulations around the world:

“In Italy, it was competition regulators that fined Facebook ten million euros for misleading its users over its personal data practices. Brazil passed its own GDPR-style law this year; Chile amended its constitution to include data protection rights; and India’s lawmakers introduced a draft of a wide-ranging new legal privacy framework.”

As the world moves forward, one man—the one who started it all—might be conspicuously absent. Last year, Schrems expressed a desire to step back from data privacy law. If anything, he said, it was time for others to take up the mantle.

“I know I’m going to be deeply engaged, especially at the beginning, but in the long run [Noyb] should absolutely not be Max’s personal NGO,” Schrems told The Register in a January 2018 interview. Asked to clarify about his potential future beyond privacy advocacy, Schrems said: “It’s retirement from the first line of defense, let’s put it that way… I don’t want to keep bringing cases for the rest of my life.”

Surprisingly, for all of Schrems’ public-facing and public-empowering work, his interviews and blog posts sometimes portray him as a deeply humble, almost shy individual, with a down-to-earth sense of humor, too. When asked during a 2016 podcast interview if he felt he would be remembered in the same vein as Edward Snowden, Schrems bristled.

“Not at all, actually,” Schrems said. “What I did is a very conservative approach. You go to the courts, you have your case, you bring it and you do your thing. What Edward Snowden did is a whole different ballgame. He pretty much gave up his whole life and has serious possibilities to some point end up in a US prison. The worst thing that happened to me so far was to be on that security list of US flights.”

During the same interview, Schrems also deflected questions about his search-result popularity.

“Everyone knows your name now,” the host said. “If you Google ‘Schrems,’ the first thing that comes up is ‘Max Schrems’ and your case.”

“Yeah but it’s also a very specific name, so it’s not like ‘Smith,’” Schrems said, laughing. “I would have a harder time with that name.”

If anything, the popularity came as a surprise to Schrems. Last year, in speaking to Bloomberg, he described Facebook as a “test case” when filing his original 22 complaints.

“I thought I’d write up a few complaints,” Schrems said. “I never thought it would create such a media storm.”

Glancy described Schrems’ initial investigation into Facebook in much the same way. It started not as a vendetta, she said, but as a courtesy.

“He started out with a really charitable view of [Facebook],” Glancy said. “At some level, he was trying to get Facebook to wake up and smell the coffee.”

That’s the Schrems that Glancy knows best, a multi-faceted individual who makes time for others and holds various interests. A man committed to public service, not public spotlight. A man who still calls and emails her with questions about legal strategy and privacy law. A man who drove down the California coast with some friends during spring break. Maybe even a man who is tired of being seen only as a flag-bearer for online privacy. (He describes himself on his Twitter profile as “(Luckily not only) Law, Privacy and Politics.”)

“At some level, he considers himself a consumer lawyer,” Glancy said. “He’s interested in the ways in which to empower the little guy, who is kind of abused by large entities that—it’s not that they’re targeting them, it’s that they just don’t care. [The people’s] rights are not being taken account of.”

With GDPR in place, those rights, and the people they apply to, now have a little more firepower.

The post Max Schrems: lawyer, regulator, international man of privacy appeared first on Malwarebytes Labs.

NBlog Feb 24 – how to challenge an audit finding

Although I wrote this in the context of ISO/IEC 27001 certification audits, it applies in other situations where there is a problem with something the auditors are reporting such as a misguided, out of scope or simply wrong audit finding.

Here are some possible strategies to consider:
  • Have a quiet word with the auditor/s about it, ideally before it gets written up and finalized in writing. Discuss the issue – talk it through, consider various perspectives. Negotiate a pragmatic mutually-acceptable resolution, or at least form a better view of the sticking points.
  • Have a quiet word with your management and specialist colleagues about it, before the audit gets reported. Discuss the issue. Agree how you will respond and try to resolve this. Develop a cunning plan and gain their support to present a united front. Ideally, get management ready to demonstrate that they are definitely committing to fixing this e.g. with budget proposals, memos, project plans etc. to substantiate their commitment, and preferably firm timescales or agreed deadlines.
  • Gather your own evidence to strengthen your case. For example:
    • If you believe an issue is irrelevant to certification since there is no explicit requirement in 27001, identify the relevant guidance about the audit process from ISO/IEC 27007 plus the section of 27001 that does not state the requirement (!)
    • If the audit finding is wrong, prove it wrong with credible counter-evidence, counter-examples etc. Quality of evidence does matter but quantity plays a part. Engage your extended team, management and the wider business in the hunt.
    • If it’s a subjective matter, try to make it more objective e.g. by gathering and evaluating more evidence, more examples, more advice from other sources etc. ‘Stick to the facts’. Be explicit about stuff. Choose your words carefully.
    • Ask us for second opinions and guidance e.g. on the ISO27k Forum and other social media, industry peers etc.
  • Wing-it. Duck-and-dive. Battle it out. Cut-and-thrust. Wear down the auditor’s resolve and push for concessions, while making limited concessions yourself if you must. Negotiate using concessions and promises in one area to offset challenges and complaints in another. Agree on and work towards a mutually-acceptable outcome (such as, um, being certified!).
  • Be up-front about it. Openly challenge the audit process, findings, analysis etc. Provide counter-evidence and arguments. Challenge the language/wording. Push the auditors to their limit. [NB This is a distinctly risky approach! Experienced auditors have earned their stripes and are well practiced at this, whereas it may be your first time. As a strategy, it could go horribly wrong, so what’s your fallback position? Do you feel lucky, punk?]
  • Suck it up! Sometimes, the easiest, quickest, least stressful, least risky (in terms of being certified) and perhaps most business-like response is to accept it, do whatever you are being asked to do by the auditors and move on. Regardless of its validity for certification purposes, the audit point might be correct and of value to the business. It might actually be something worth doing … so swallow your pride and get it done. Try not to grumble or bear a grudge. Re-focus on other more important and pressing matters, such as celebrating your certification!
  • Negotiate a truce. Challenge and discuss the finding and explore possible ways to address it. Get senior management to commit to whichever solution/s work best for the business and simultaneously persuade/convince the auditors (and/or their managers) of that.
  • Push back informally by complaining to the certification body’s management and/or the body that accredited them. Be prepared to discuss the issue and substantiate your concerns with some evidence, more than just vague assertions and generalities.
  • Push back hard. Review your contract with the certification body for anything useful to your case. Raise a formal complaint with the certification body through your senior management … which means briefing them and gaining their explicit support first. Good luck with that. You’ll need even stronger, more explicit evidence here. [NB This and the next bullet are viable options even after you have been certified … but generally, by then, nobody has the energy to pursue it and risk yet more grief.]
  • Push back even harder. Raise a complaint with the accreditation body about the certification body’s incompetence through your senior management … which again means briefing them and gaining their explicit support first, and having the concrete evidence to make a case. Consider enlisting the help of your lawyers and compliance experts willing to get down to the brass tacks, and with the experience to build and present your case.
  • Delay things. Let the dust settle. Review, reconsider, replan. Let your ISMS mature further, particularly in the areas that the auditors were critical of. Raise your game. Redouble your efforts. Use your metrics and processes fully.
  • Consider engaging a different certification body (on the assumption that they won’t raise the same concerns … nor any others: they might be even harder to deal with!).
  • Consider engaging different advisors, consultants and specialists. Review your extended ISMS team. Perhaps push for more training, to enhance the team’s competence in the problem areas. Perhaps broaden ‘the team’ to take on-board other specialists from across the business. Raise awareness.
  • Walk away from the whole mess. Forget about certification. Go back to your cave to lick your wounds. Perhaps offer your resignation, accepting personal accountability for your part in the situation. Or fire someone else!
Although that's a long shopping list, I'm sure there are other possibilities, including some combination of the above. The fact is that you have choices in how to handle such challenges: your knee-jerk response may not be ideal.

For bonus marks, you might even raise an incident report concerning the issue at hand, then handle it in the conventional manner through the incident management part of your ISMS. An adverse audit finding is, after all, a concern that needs to be addressed and resolved just like other information incidents. It is an information risk that has eventuated. You will probably need to fix whatever is broken, but first you need to assess and evaluate the incident report, then decide what (if anything) needs to be done about it. The process offers a more sensible, planned and rational response than jerking your knee. It's more business-like, more professional. I commend it to the house.

NBlog Feb 22 – classification versus tagging



I'm not happy with the idea of 'levels' in many contexts, including information classification schemes. The term 'level' implies a stepped progression in one dimension. Information risk and security is more nuanced or fine-grained than that, and multidimensional too.
The problems with 'levels' include:
  • Boundary/borderline cases, when decisions about which level is appropriate are arbitrary but the implications can be significant; 
  • Dynamics - something that is a medium level right now may turn into a high or a low at some future point, perhaps when a certain event occurs; 
  • Context e.g. determining the sensitivity of information for deliberate internal distribution is not the same as for unauthorized access, especially external leakage and legal discovery (think: internal email); 
  • Dependencies and linkages e.g. an individual data point has more value as part of a time sequence or data set ... 
  • ... and aggregation e.g. a structured and systematic compilation of public information aggregated from various sources can be sensitive; 
  • Differing perspectives, biases and prejudices, plus limited knowledge, misunderstandings, plain mistakes and secret agendas of those who classify stuff, almost inevitably bringing an element of subjectivity to the process despite the appearance of objectivity; 
  • And the implicit "We've classified it and [maybe] done something about securing it ... so we're done here. Next!". It's dismissive. 
The complexities are pretty obvious if you think about it, especially if you have been through the pain of developing and implementing a practical classification scheme. Take a blood pressure reading, for instance, or an annual report or a system security log. How would you classify them? Whatever your answer, I'm sure I can think of situations where those classifications are inappropriate. We might agree on the classification for a particular situation, hence a specific level or label might be appropriate right there, but information and situations change constantly, so in the real world the classification can become misleading and unhelpful. And if you insist on narrowing the classification criteria, we're moving away from the main advantage of classification, which is to apply broadly similar risk treatments to each level. Ultimately, every item needs its own unique classification, so why bother?

Another issue with classification schemes is that they over-emphasize one aspect or feature of information - almost always that's confidentiality. What about integrity, availability, utility, value and so forth? I prefer a conceptually different approach using several tags or parameters rather than single classification 'levels'. A given item of information, or perhaps a collection of related items, might usefully be measured and tagged according to several parameters such as:
  • Sensitivity, confidentiality or privacy expectations; 
  • Source e.g. was it generated internally, found on the web, or supplied by a third party?; 
  • Trustworthiness, credibility and authenticity - could it have been faked?; 
  • Accuracy and precision, which matter for some applications - quite a lot, really; 
  • Criticality for the business, safety, stakeholders, the world ...; 
  • Timeliness or freshness, age and history, hinting at the information lifecycle; 
  • Extent of distribution, whether known and authorized or not; 
  • Utility and value to various parties - not just the current or authorized possessors; 
  • Probability and impact of various incidents i.e. the information risks; 
  • Etc. 
The tags or parameters required depend on what needs to be done. If we're determining access rights, for instance, access-related tags are more relevant than the others. If we're worried about fraud and deception, those integrity aspects are of interest. In other words, there's no need to attempt to fully assess and tag or measure everything, right now: a more pragmatic approach (measuring and tagging whatever is needed for the job in hand) works fine.
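As a minimal sketch of how that might look (the parameter names and values below are purely illustrative assumptions, not a proposed standard):

```python
# One information item, tagged along several parameters in parallel
# rather than squeezed into a single classification 'level'.
blood_pressure_reading = {
    "sensitivity":     "personal-health",  # privacy expectations
    "source":          "clinic-device",    # generated internally
    "trustworthiness": "calibrated",       # could it have been faked?
    "criticality":     "safety-relevant",  # matters for treatment decisions
    "freshness":       "2019-02-22",       # timeliness, hints at lifecycle
}

def tags_for(item: dict, needed: list) -> dict:
    """Pull only the parameters relevant to the job in hand, e.g. an
    access decision needs sensitivity but probably not freshness."""
    return {k: item[k] for k in needed if k in item}

# Access-control decision: only the access-related tags matter
print(tags_for(blood_pressure_reading, ["sensitivity", "criticality"]))
# -> {'sensitivity': 'personal-health', 'criticality': 'safety-relevant'}
```

The mechanism itself is trivial; the value lies in deciding which parameters matter for a given decision - exactly the kind of thinking discussed below.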

Within each parameter, you might consider the different tags or labels to represent levels but I'm more concerned with the broader concept of taking into account a number of relevant parameters in parallel, not just sensitivity or whatever. 

All that complexity can be hidden within Gary's Little World, handled internally within the information risk and security function and related colleagues. Beyond that in the wider organization, things get messy in practice but, generally speaking, people working routinely with information "just know" how important/valuable it is, what's important about it, and so on. They may express it in all sorts of ways (not just in words!), and that's fine. They may need a little guidance here and there but I'm not keen on classification as a method for managing information risk. It's too crude for me, except perhaps as a basic starting point. More useful is the process of getting people to think about this stuff and do whatever is appropriate under the circumstances. It's one of those situations where the journey is more valuable than the destination. The analysis generates understanding and insight, which are more important than the 'level'.

NBlog Feb 21 – victimization as a policy matter


An interesting example of warped thinking from Amos Shapir in the latest RISKS-List newsletter:

"A common tactic of authoritarian regimes is to make laws which are next to impossible to abide by, then not enforce them. This creates a culture where it's perfectly acceptable to ignore such laws, yet the regime may use selective enforcement to punish dissenters -- since legally, everyone is delinquent."
Amos is talking (I believe) about national governments and laws, but the same approach could be applied by authoritarian managers through corporate rules, including policies. Imagine, for instance, a security policy stating that all employees must use a secret password of at least 35 random characters: it would be unworkable in practice, but potentially it could be used by management as an excuse to single out, discipline and fire a particularly troublesome employee, while at the same time ignoring noncompliance by everyone else (including themselves, of course).

It's not quite as straightforward as I've implied, though, since organizations have to work within the laws of the land, particularly employment laws designed to protect individual workers from rampant exploitation by authoritarian bosses. There may be a valid legal defense for workers sacked in such circumstances due to the general lack of enforcement of the policy and the reasonable assumption that the policy is not in force, regardless of any stated mandate or obligations to comply ... which in turn has implications for all corporate policies and other rules (procedures, work instructions, contracts and agreements): if they are not substantially and fairly enforced, they may not have a legal standing. 

[IANAL. This piece is probably wrong and/or inapplicable. It's a thought-provoker, not legal advice.]

NBlog Feb 20 – policy governance

Kaspersky blogged about security policies in the context of human factors making organizations vulnerable to malware:
"In many cases, policies are written in such a difficult way that they simply cannot be effectively absorbed by employees. Instead of communicating risks, dangers and good practices in clear and comprehensive instructions, businesses often give employees multipage documents that everyone signs but very few read – and even less understand."
That is just the tip of an iceberg. Lack of readability is just one of at least six reasons why corporate security policies are so often found lacking in practice:
  • Lack of scope: ‘security policies’ are typically restricted to IT/cyber security matters, leaving substantial gaps, especially in the wider aspects of information risk and security such as human factors, fraud, privacy, intellectual property and business continuity.
  • Lack of consistency: policies that were drafted by various people at various times for various reasons, and may have been updated later by others, tend to drift apart and become disjointed. It is not uncommon to find bald contradictions, gross discrepancies or conflicts. Security-related obligations or expectations are often scattered liberally across the organization, partly on the corporate intranet, partly embedded in employment contracts, employee handbooks, union rulebooks, printed on the back of staff/visitor passes and so on. 
  • Lack of awareness: policies are passive, formal and hence rather boring written documents - dust-magnets. They take some effort to find, read and understand. Unless they are accompanied by suitable standards, procedures, guidelines and other awareness materials, and supported by structured training, awareness and compliance activities to promote and bring them to life, employees can legitimately claim that they didn’t even know of their existence - which indeed they often do when facing disciplinary action. 
  • Lack of accountability: if it is unclear who owns the policies and to whom they apply, noncompliance is the almost inevitable outcome. This, in turn, makes it risky for the organization to discipline, sack or prosecute people for noncompliance, even if the awareness, compliance and enforcement mechanisms are in place. Do your policies have specific owners and explicit responsibilities, including their promotion through awareness and training? Are people - including managers - actually held to account for compliance failures and incidents?
  • Lack of compliance: policy compliance and enforcement activities tend to be minimalist, often little more than sporadic reviews and the occasional ticking-off. Circulating a curt reminder to staff shortly before the auditors arrive, or shortly after a security incident, is not uncommon. Policies that are simply not enforced for some reason are merely worthless, whereas those that are literally unenforceable (including those where strict compliance would be physically impossible or illegal) can be a liability: management believes they have the information risks covered while in reality they do not. Badly-written, disjointed and inconsistent security policies are literally worse than useless.
Many of these issues can be traced back to lacking or inconsistent policy management processes. Policy ownership and purpose are often unclear. Even simple housekeeping activities such as version control and reviews are beyond many organizations, while policies generally lag well behind emerging issues.

That litany of issues and dysfunctional organizational practices stems from poor governance ... which intrigues me to the extent that I'm planning to write an article about it in conjunction with a colleague. He has similar views to me but brings a different perspective from working in the US healthcare industry. I'm looking forward to it.

NBlog Feb 7 – risks and opportunities defined


In the ISO27k context, 'risks and opportunities' has at least four meanings or interpretations:
  1. Information risks and information opportunities are the possibilities of information being exploited in a negative and positive sense, respectively. The negative sense is the normal/default meaning of risk in our field, in other words the possibility of harmful consequences arising from incidents involving information, data, IT and other ‘systems’, devices, IT and social networks, intellectual property, knowledge etc. This blog piece is an example of positively exploiting information: I am deliberately sharing information in order to inform, stimulate and educate people, for the benefit of the wider ISO27k user community (at least, that's my aim!). 
  2. Business risks and business opportunities arise from the use of information, data, IT and other ‘systems’, devices, IT and social networks, intellectual property, knowledge etc. to harm or further the organization’s business objectives, respectively. The kind of manipulative social engineering known as ‘marketing’ and ‘advertising’ is an example of the beneficial use of information for business purposes. The need for the organization to address its information-related compliance obligations is an example that could be a risk (e.g. being caught out and penalized for noncompliance) or an opportunity (e.g. not being caught and dodging the penalties) depending on circumstances.
  3. The ISMS itself is subject to risks and opportunities. Risks here include sub-optimal approaches and failure to gain sufficient support from management, leading to lack of resources and insufficient implementation, severely curtailing the capability and effectiveness of the ISMS, meaning that information risks are greater and information opportunities are lower than would otherwise have been achieved. Opportunities include fostering a corporate security culture through the ISMS leading to strong and growing support for information risk management, information security, information exploitation and more.
  4. There are further risks and opportunities in a more general sense. The possibility of gaining an ISO/IEC 27001 compliance certificate that will enhance organization’s reputation and lead to more business, along with the increased competence and capabilities arising from having a compliant ISMS, is an example of an opportunity that spans the 3 perspectives above. ‘Opportunities for improvement’ involve possible changes to the ISMS, the information security policies and procedures, other controls, security metrics etc. in order to make the ISMS work better, where ‘work better’ is highly context-dependent. This is the concept of continuous improvement, gradual evolution, maturity, proactive governance and systematic management of any management system. Risks here involve anything that might prevent or slow down the ongoing adaptation and maturity processes, for example if the ISMS metrics are so poor (e.g. irrelevant, unconvincing, badly conceived and designed, or the measurement results are so utterly disappointing) that management loses confidence in the ISMS and decides on a different approach, or simply gives up on the whole thing as a bad job. Again, the opportunities go beyond the ISMS to include the business, its information, its objectives and constraints etc.
Unfortunately, in my opinion, ISO/IEC JTC1/SC27 utterly confused interpretation (1) with (3) in ‘27001 clause 6. As I understand it, the ISO boilerplate text for all management systems standards concerns sense (3), specifically. Clause 6 should therefore have focused on the planning required by an organization to ensure that its ISMS meets its needs, both initially and in perpetuity, gradually integrating the ISMS as a routine, integral and beneficial part of the organization’s overall governance and management arrangements. Instead, ‘27001 clause 6 babbles on about information security objectives rather than the governance, management and planning needed to define and satisfy the organization’s objectives for its ISMS. The committee lost the plot - at least, that’s what I think, as a member of SC27: others probably disagree! 

NBlog Jan 8 – audit questions (braindump)


"What questions should an auditor ask?" is an FAQ that's tricky to answer since "It depends" is true but unhelpful.  

To illustrate my point, here are some typical audit questions or inquiries:
  • What do you do in the area of X
  • Tell me about X
  • Show me the policies and procedures relating to X
  • Show me the documentation arising from or relating to X
  • Show me the X system from the perspectives of a user, manager and administrator
  • Who are the users, managers and admins for X
  • Who else can access or interact or change X
  • Who supports X and how good are they
  • Show me what happens if X
  • What might happen if X
  • What else might cause X
  • Who might benefit or be harmed if X
  • What else might happen, or has ever happened, after X
  • Show me how X works
  • Show me what’s broken with X
  • Show me how to break X
  • What stops X from breaking
  • Explain the controls relating to X
  • What are the most important controls relating to X, and why is that
  • Talk me through your training in X
  • Does X matter
  • In the grand scheme of things, is X important relative to, say, Y and Z
  • Is X an issue for the business, or could it be
  • Could X become an issue for the business if Y
  • Under what circumstances might X be a major problem
  • When might X be most problematic, and why
  • How big is X - how wide, how heavy, how numerous, how often ... 
  • Is X right, in your opinion
  • Is X sufficient and appropriate, in your opinion
  • What else can you tell me about X
  • Talk me through X
  • Pretend I am clueless: how would you explain X
  • What causes X
  • What are the drivers for X
  • What are the objectives and constraints relating to X
  • What are the obligations, requirements and goals for X
  • What should or must X not do
  • What has X achieved to date
  • What could or should X have achieved to date
  • What led to the situation involving X
  • What’s the best/worst thing about X
  • What’s the most/least successful or effective thing within, about or without X
  • Walk or talk me through the information/business risks relating to X
  • What are X’s strengths and weaknesses, opportunities and threats
  • What are the most concerning vulnerabilities in X
  • Who or what might threaten X
  • How many changes have been made in X
  • Why and how is X changed
  • What is the most important thing about X
  • What is the most valuable information in X
  • What is the most voluminous information in X
  • How accurate is X …
  • How complete is X …
  • How up-to-date is X …
    • … and how do you know that (show me)
  • Under exceptional or emergency conditions, what are the workarounds for X
  • Over the past X months/years, how many Ys have happened … how and why
  • If X was compromised in some way, or failed, or didn’t perform as expected etc., what would/might happen
  • Who might benefit from or be harmed by X 
  • What has happened in the past when X failed, or didn’t perform as expected etc.
  • Why hasn’t X been addressed already
  • Why didn’t previous efforts fix X
  • Why does X keep coming up
  • What might be done to improve X
  • What have you personally tried to address X
  • What about your team, department or business unit: what have they done about X
  • If you were the Chief Exec, Managing Director or god, what would you do about X
  • Have there been any incidents caused by or involving X and how serious were they
  • What was done in response – what changed and why
  • Who was involved in the incidents
  • Who knew about the incidents
  • How would we cope without X
  • If X was to be replaced, what would be on your wishlist for the replacement
  • Who designed/built/tested/approved/owns X
  • What is X made of: what are the components, platforms, prerequisites etc.
  • What versions of X are in use
  • Show me the configuration parameters for X
  • Show me the logs, alarms and alerts for X
  • What does X depend on
  • What depends on X
  • If X was preceded by W or followed by Y, what would happen to Z
  • Who told you to do ... and why do you think they did that
  • How could X be done more efficiently/effectively
  • What would be the likely or possible consequences of X
  • What would happen if X wasn’t done at all, or not properly
  • Can I have a read-only account on system X to conduct some enquiries
  • Can I have a full-access account on test system X to do some audit tests
  • Can I see your test plans, cases, data and results
  • Can someone please restore the X backup from last Tuesday 
  • Please retrieve tape X from the store, show me the label and lend me a test system on which I can explore the data content
  • If X was so inclined, how could he/she cause chaos, or benefit from his/her access, or commit fraud/theft, or otherwise exploit things
  • If someone was utterly determined to exploit, compromise or harm X, highly capable and well resourced, what might happen, and how might we prevent them succeeding
  • If someone did exploit X, how might they cover their tracks and hide their shenanigans
  • If X had been exploited, how would we find out about it
  • How can you prove to me that X is working properly
  • Would you say X is top quality or perfect, and if not why not
  • What else is relevant to X
  • What has happened recently in X
  • What else is going on now in X
  • What are you thinking about or planning for the mid to long term in relation to X
  • How could X be linked or integrated with other things
  • Are there any other business processes, links, network connections, data sources etc. relating to X
  • Who else should I contact about X
  • Who else ought to know about the issues with X
  • A moment ago you/someone else told me about X: so what about Y
  • I heard a rumour that Y might be a concern: what can you tell me about Y
  • If you were me, what aspects of X would concern you the most
  • If you were me, what else would you ask, explore or conclude about X
  • What is odd or stands out about X
  • Is X good practice
  • What is it about X that makes you most uncomfortable
  • What is it about this audit that makes you most uncomfortable
  • What is it about me that makes you most uncomfortable
  • What is it about this situation that makes you most uncomfortable
  • What is it about you that makes me most uncomfortable
  • Is there anything else you’d like to say
I could go on all day, but that is more than enough already and I really ought to be earning a crust! If I had more time, stronger coffee and thought it would help, I might try sorting and structuring that braindump ... but in many ways it would be better still if you did so, considering and revising the list to suit your purposes if you are planning an audit. 
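If you do fancy structuring the braindump, a few lines of script will make a rough first pass. This is only a sketch (Python; the theme keywords and the input file name are my own invention, so revise both to suit the audit in hand):

from collections import defaultdict

# Hypothetical theme keywords; revise to suit the audit in hand.
THEMES = {
    "controls": ("control", "stops", "break"),
    "incidents": ("incident", "failed", "compromised", "exploit"),
    "people": ("who", "team", "training"),
    "change": ("change", "version", "improve", "replace"),
}

def classify(question: str) -> str:
    """Assign a question to the first theme whose keyword appears in it."""
    q = question.lower()
    for theme, keywords in THEMES.items():
        if any(k in q for k in keywords):
            return theme
    return "general"

def sort_braindump(path: str) -> dict:
    """Group the questions in a plain-text file (one per line) by theme."""
    grouped = defaultdict(list)
    with open(path) as f:
        for line in f:
            line = line.strip(" \t\u2022\n")  # drop bullets and whitespace
            if line:
                grouped[classify(line)].append(line)
    return grouped

if __name__ == "__main__":
    for theme, questions in sort_braindump("audit_questions.txt").items():
        print(f"\n== {theme} ({len(questions)} questions) ==")
        for q in questions:
            print(" -", q)

Crude keyword matching is no substitute for auditor judgment, of course, but it turns a wall of questions into themed working papers quickly.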

Alternatively, think about the questions you should avoid or not ask. Are there any difficult areas? What does that tell you?

It's one of those situations where the journey trumps the destination. Developing a set of audit concerns and questions is a creative process. It's fun.

I’m deliberately not specifying “X” because that is the vital context. The best way I know of determining X, and the nature of the questions/enquiries arising, is risk analysis. The auditor looks at the subject area, considers the possibilities, evaluates the risks and picks out the ones that are of most concern, does the research and fieldwork, examines the findings … and re-evaluates the situation (possibly leading to further investigation – it’s an iterative process, hence all the wiggly arrows and loops on the process diagram). 

Auditing is not simply a case of picking up and completing a questionnaire or checklist, although that might be part of the audit preparation. Competent, experienced auditors feed on lists, books, standards and Google as inputs and thought-provokers for the audit work, not definitive or restrictive descriptions of what to do. On top of all that, the stuff they discover often prompts or leads to further enquiries, sometimes revealing additional issues or risks or concerns almost by accident. The real trick to auditing is to go in with eyes, ears and minds wide open – curious, observant, naïve, doubtful (perhaps even cynical) yet willing to consider and maybe be persuaded.

[For yet more Hinson tips along these lines, try the computer audit FAQ.]

Information Security and the Zero-Sum Game

A zero-sum game is a mathematical representation of a situation in which each participant’s gain or loss is exactly balanced by the losses or gains of the other participant. In Information Security a zero-sum game usually references the trade-off between being secure and having privacy. However, there is another zero-sum game often played with Information […]

Measure Security Performance, Not Policy Compliance

I started my security (post-sysadmin) career heavily focused on security policy frameworks. It took me down many roads, but everything always came back to a few simple notions: that policies were a means of articulating security direction, that you had to prescriptively spell out desired behaviors, and that the more detail you could put into the guidance (such as in standards, baselines, and guidelines), the better off the organization would be. Except, of course, that in the real world nobody ever took the time to read the more detailed documents, Ops and Dev teams really didn't like being told how to do their jobs, and, at the end of the day, I was frequently reminded that publishing a policy document didn't translate to implementation.

Subsequently, I've spent the past 10+ years thinking about better ways to tackle policies, eventually reaching the point where I believe "less is more" and that anything written and published in a place and format that isn't "work as usual" will rarely, if ever, get implemented without a lot of downward force applied. I've seen both good and bad policy frameworks within organizations. Often they cycle around between good and bad. Someone will build a nice policy framework, it'll get implemented in a number of key places, and then it will languish from neglect and inadequate upkeep until it's irrelevant and ignored. This is not a recipe for lasting success.

Thinking about it further this week, it occurred to me that part of the problem is thinking in the old "compliance" mindset. Policies are really to blame for driving us down the checkbox-compliance path. Sure, we can easily stand back and try to dictate rules, but without adequate authority to enforce them, and without the resources needed to continually update them, they're doomed to obsolescence. Instead, we need to move to that "security as code" mentality and find ways to directly codify requirements so that they are naturally adapted and maintained.
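To illustrate what "directly codify requirements" might look like, here is a minimal sketch: a hypothetical policy requirement ("TLS 1.2 or later everywhere") expressed as an automated test that runs in CI. The config file name and test structure are my own assumptions, not any particular product's API:

# Hypothetical CI check: the requirement "TLS 1.2 or later everywhere"
# lives as a test in the build pipeline, not as a paragraph in a PDF.
import json

MIN_TLS = (1, 2)

def test_tls_minimum_version():
    # "service_config.json" is an assumed artifact of the deployment repo.
    with open("service_config.json") as f:
        config = json.load(f)
    for service, settings in config["services"].items():
        major, minor = map(int, settings["tls_min_version"].split("."))
        assert (major, minor) >= MIN_TLS, (
            f"{service} allows TLS {major}.{minor}; requirement is >= 1.2"
        )

Run under pytest, a failing build becomes the enforcement mechanism; nobody has to remember to re-read a policy document.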

End Dusty Tomes and (most) Out-of-Band Guidance

The first daunting challenge of security policy framework reform is to throw away the old, broken approach with as much gusto and finality as possible. Yes, there will always be a need for certain formally documented policies, but overall an organization Does. Not. Need. piles of dusty tomes providing out-of-band guidance to a non-existent audience.

Now, note a couple things here. First, there is a time and a place for providing out-of-band guidance, such as via direct training programs. However, it should be the minority of guidance, and wherever possible you should seek to codify security requirements directly into systems, applications, and environments. For a significant subset of security practices, it turns out we do not need to repeatedly consider whether or not something should be done, but can instead make the decision once and then roll it out everywhere as necessary and appropriate.

Second, we have to realize and accept that traditional policy (and related) documents serve only a formal purpose, not a practical or pragmatic one. Essentially, the reason you put something into writing is that a) you're required to do so (such as by regulations), or b) you're driven to do so by ongoing infractions or by the inability to directly codify requirements (for example, requirements on human behavior). What this leaves you with are requirements that can be directly implemented, and thus easily measured.

KPIs as Policies (et al.)

If the old ways aren't working, then it's time to take a step back and think about why that might be and what might be better going forward. I'm convinced the answer to this query lies in stretching the "security as code" notion a step further by focusing on security performance metrics for everything and everyone instead of security policies. Specifically, if you think of policies as requirements, then you should be able to recast those as metrics and key performance indicators (KPIs) that are easily measured, and in turn are easily integrated into dashboards. Moreover, going down this path takes us into a much healthier sense of quantitative reasoning, which can pay dividends for improved information risk awareness, measurement, and management.

Applied, this approach scales very nicely across the organization. Businesses already operate on a KPI model, and converting security requirements (née policies) into specific measurables at various levels of the organization means ditching the ineffective, out-of-band approach in favor of directly specifying, measuring, and achieving desired performance objectives. Simply put, we no longer have to go out of our way to argue for people to conform to policies; instead we simply start measuring their performance and incentivize them to improve to meet performance objectives. It's then a short step to integrating security KPIs into all roles, even going so far as to establish departmental, if not whole-business, security performance objectives that are then factored into overall performance evaluations.

Examples of security policies-become-KPIs might include metrics around vulnerability and patch management, code defect reduction and remediation, and possibly even phishing-related metrics that are rolled up to the department or enterprise level. When creating security KPIs, think about the policy requirements as they're written and take time to truly understand the objectives they're trying to achieve. Convert those objectives into measurable items, and there you are on the path to KPIs as policies. For more thoughts on security metrics, I recommend checking out the CIS Benchmarks as a starting point.
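As a worked illustration, here is a minimal sketch (Python, with an entirely hypothetical data shape of my own) of a patch-management policy line recast as a KPI: instead of stating "critical patches within 14 days," we measure the percentage of critical patches actually applied within 14 days, per team:

from datetime import date
from statistics import mean

# Hypothetical records: (team, vulnerability published, patch applied).
records = [
    ("payments", date(2019, 3, 1), date(2019, 3, 9)),
    ("payments", date(2019, 3, 4), date(2019, 3, 30)),
    ("platform", date(2019, 3, 2), date(2019, 3, 10)),
]

SLA_DAYS = 14  # the old policy sentence, now a measurable target

def kpi_by_team(records):
    """Percentage of critical patches applied within the SLA, per team."""
    hits = {}
    for team, published, patched in records:
        hits.setdefault(team, []).append((patched - published).days <= SLA_DAYS)
    return {team: 100 * mean(results) for team, results in hits.items()}

for team, pct in kpi_by_team(records).items():
    print(f"{team}: {pct:.0f}% of critical patches within {SLA_DAYS} days")

Those per-team percentages drop straight into a dashboard, which is exactly the reporting property discussed next.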

Better Reporting and the Path to Accountability

Converting policies into KPIs means that nearly everything is natively built for reporting, which in turn enables executives to have better insight into the security and information risk of the organization. Moreover, shifting the focus to specific measurables means that we get away from the out-of-band dusty tomes, instead moving toward achieving actual results. We can now look at how different teams, projects, applications, platforms, etc., are performing and make better-informed decisions about where to focus investments for improvements.

This notion also potentially sparks an interesting future for current GRC-ish products. If policies go away (mostly), then we don't really need repositories for them. Instead, GRC products can shift to being true performance monitoring dashboards, allowing those products to broaden their scope while continuing to adapt other capabilities, such as those related to the so-called "SOAR" market (Security Orchestration, Automation, and Response). If GRC products are to survive, I suspect it will be by either heading further down the information risk management path, pulling in security KPIs in lieu of traditional policies and compliance, or driving more toward SOAR+dashboards with a more tactical performance focus (or some combination of the two). Suffice it to say, I think GRC as it was once known and defined is in its final days of usefulness.

There's one other potentially interesting tie-in here, and that's to overall data analytics, which I've noticed slowly creeping into organizations. A lot of the focus has been on using data lakes, mining, and analytics in lieu of traditional SIEM and log management, but I think there's also a potentially interesting confluence with security KPIs. In fact, thinking about pulling in SOAR capabilities and other monitoring and assessment capabilities and data, it's not unreasonable to think that KPIs become the tweakable dials CISOs (and up) use to balance risk vs. reward and to provide strategic guidance for addressing information risk within the enterprise. At any rate, this is all very speculative and unclear right now, but something to watch nonetheless. But I have digressed...

---
The bottom line here is this: traditional policy frameworks have generally outlived their usefulness. We cannot afford to continue writing and publishing security requirements in formats that aren't easily accessible as part of "work as usual." In an Agile/DevOps world, "security as code" is imperative, and that includes converting security requirements into KPIs.

Compliance and Security Seals from a Different Perspective


Compliance attestations. Quality seals like “Hacker Safe!” All of these things bother most security people I know because, to us, they provide very little tangible insight into the security of anything. Or do they? I saw this reply to my blog post on compliance vs. security which made an interesting point. A point, I dare say, I had not really put front-of-mind but probably should have.

Ron Parker was of course correct… and he touched on a much bigger point of which this comment was a part. Much of the time, compliance and ‘security badges’, aka “security seals”, on websites aren’t done for the sake of making the website or product actually more secure… they’re done to assure the customer that the site or entity is worthy of their trust and business. This is contrary to conventional thinking in the security community.

Think about that for a second.

With that frame of reference, all the push to compliance and all the silly little “Hacker Safe!” security seals on websites make sense. Maybe they’re not secure, or maybe they are, but the point isn’t to demonstrate some level of absolute security. The point is to reassure you, the user, that you are doing business with someone who thought about your interests. Well…at least they pretended to. Whether it’s privacy, security, or both… the proprietors of this website or that store want to give you some way to feel safe doing business with them.

All this starts to bend the brain a bit around the idea of why we really do security things. We need to earn someone’s business through his or her trust. The risks we take on the road to earning their business… well, that’s up to us to worry about. Who do you suppose is more qualified to assess the ‘appropriate risk level’ – you or your customers? With some notable exceptions, the answer won’t be your customers.

Realistically you don’t want your customers trying to decide for themselves what is or isn’t an appropriate level of security. Frankly, I wouldn’t be comfortable with that either. The reality behind this thinking is that the customer typically doesn’t know any better and would likely make the wrong decision given the chance. So it’s up to you to decide, and that’s fair. Of course, this assumes that you as the proprietor have the customer’s interests in mind, and have some clue about how to do risk assessments and balance risk/reward. Lots to assume, I know. Also, you know what happens when you ass-u-me, right?

So let’s wind back to my point now. Compliance and security seals are a good thing. Before you pick up that rock to throw at me, think about this again. The problem isn’t that compliance and “security seals” exist, but that we’re misunderstanding their utility. The answer isn’t to throw these tools away and create something else, because that something else would likely be just as complicated (or useless) and needlessly waste resources on solving a problem that is already somewhat on its way to being solved. Instead, let’s make compliance and security seals more useful to the end customer, so you can focus on making that risk equation balance in your favor. I don’t quite know what that solution would look like yet, but I’m going to investigate it with some smart people. I think ultimately there needs to be some way to convey the level of security ‘effort’ made by the proprietor, which becomes binding, so that the owner can be held liable for providing false information or stretching the truth.

With this perspective, I think we could take these various compliance regulations and align them with the expectations customers have, while tying them to some security and risk goals. This makes more sense than what I see being adopted today. The goal isn’t to be compliant, well, I mean, it is… but it’s not to be compliant and call that security. It’s to be compliant as a result of being more secure. Remembering that compliance and security seals are for your customers is liberating, and lets you focus on the bigger picture of balancing risk/reward for your business.


What do you think? Am I totally off my rocker?

Virtual Directory as Database Security

I've written plenty of posts about the various use-cases for virtual directory technology over the years. But I came across another today that I thought was pretty interesting.

Think about enterprise security from the viewpoint of the CISO. There are numerous layers of overlapping security technologies that work together to reduce risk to a point that's comfortable. Network security, endpoint security, identity management, encryption, DLP, SIEM, etc. But even when these solutions are implemented according to plan, I still see two common gaps that need to be taken more seriously.

One is control over unstructured data (file systems, SharePoint, etc.). The other is back-door access to application databases. There is a ton of sensitive information exposed through those two avenues that isn't protected by the likes of SIEM solutions or IAM suites. Even DLP solutions tend to focus on perimeter defense rather than on who has access. STEALTHbits has solutions to fill the gaps for unstructured data and for Microsoft SQL Server, so I spend a fair amount of time talking to CISOs and their teams about these issues.

While reading through some IAM industry materials today, I found an interesting write-up on how Oracle is using its virtual directory technology to solve the problem for Oracle database customers. Oracle's IAM suite leverages Oracle Virtual Directory (OVD) as an integration point with an Oracle database feature called Enterprise User Security (EUS). EUS enables database access management through an enterprise LDAP directory (as opposed to managing a spaghetti mapping of users to database accounts and the associated permissions).

By placing OVD in front of EUS, you get instant LDAP-style management (and IAM integration) without a long, complicated migration process. Pretty compelling use-case. If you can't control direct database permissions, your application-side access controls matter a lot less. Essentially, you've locked the front door but left the back window wide open. Something to think about.
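To show what that LDAP-style management buys you in practice, here is a minimal sketch using the Python ldap3 library against a hypothetical virtual directory endpoint. The DNs, object classes, and attribute names below are illustrative placeholders, not Oracle's actual EUS schema:

from ldap3 import ALL, Connection, Server

# Hypothetical virtual directory endpoint and schema; illustrative only.
server = Server("ovd.example.com", port=636, use_ssl=True, get_info=ALL)
conn = Connection(
    server,
    user="cn=auditor,dc=example,dc=com",
    password="change-me",  # use a vaulted credential in real life
    auto_bind=True,
)

# One LDAP search answers: which enterprise roles map this user into the DB?
conn.search(
    search_base="ou=db-mappings,dc=example,dc=com",
    search_filter="(&(objectClass=dbEnterpriseRole)"
                  "(member=uid=jsmith,ou=people,dc=example,dc=com))",
    attributes=["cn", "dbGlobalRole"],
)
for entry in conn.entries:
    print(entry.cn, "->", entry.dbGlobalRole)

One search answers a question that is painful to resolve when user-to-account mappings live separately in each database. That, roughly, is the appeal of centralizing database entitlements behind a directory.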