Category Archives: Data Protection and Privacy

Security roundup: June 2019

Every month, we dig through cybersecurity trends and advice for our readers. This edition: GDPR+1, the cost of cybercrime revealed, and a ransomware racket.

If you notice this notice…

If year one of GDPR has taught us anything, it’s that we can expect more data breach reports, which means more notifications. Most national supervisory authorities saw an increase in queries and complaints compared to 2017, the European Data Protection Board found.

But are companies following through with breach notifications that are effective, and easy to understand? Possibly not. Researchers from the University of Michigan analysed 161 sample notifications using readability guidelines, and found confusing language that doesn’t clarify whether consumers’ private data is at risk.

The researchers had previously found that people often don’t take action after being informed of a data breach. Their new findings suggest a possible connection with poorly worded notifications. That’s why the report recommends three steps for creating more usable and informative breach notifications.

  • Pay more attention to visual attractiveness (headings, lists and text formatting) and visually highlight key information.
  • Make the notice readable and understandable to everyone by using short sentences, common words (and very little jargon), and by not including unnecessary information.
  • Avoid hedging terms and vague claims like “there is no evidence of misuse”, because consumers could misinterpret this as evidence that no risk exists.

AT&T inadvertently gave an insight into its own communications process after mistakenly publishing a data breach notice recently. Vice Motherboard picked up the story, and pointed out that its actions would have alarmed some users. But it also reckoned AT&T deserves praise for having a placeholder page ready in case of a real breach. Hear, hear. At BH Consulting, we’re big advocates of advance planning for potential incidents.

The cost of cybercrime, updated

Around half of all property crime is now online, when measured by volume and value. That’s the key takeaway from a new academic paper on the cost of cybercrime. A team of nine researchers from Europe and the USA originally published work in this field in 2012 and wanted to evaluate what’s changed. Since then, consumers have moved en masse from PCs to smartphones, but the pattern of cybercrime is much the same.

The body of the report looks at what’s known about the various types of crime and what’s changed since 2012. It covers online card frauds, ransomware and cryptocrime, fake antivirus and tech support scams, business email compromise, telecoms fraud along with other related crimes. Some of these crimes have become more prominent, and there’s also been fallout from cyberweapons like the NotPetya worm. It’s not all bad news: crimes that infringe intellectual property are down since 2012.

Ross Anderson, professor of security engineering at Cambridge University and a contributor to the research, has written a short summary. The full 32-page study is free to download as a PDF here.

Meanwhile, one expert has estimated fraud and cybercrime costs Irish businesses and the State a staggering €3.5bn per year. Dermot Shea, chief of detectives with the NYPD, said the law is often behind criminals. His sentiments match those of the researchers above. They concluded: “The core problem is that many cybercriminals operate with near-complete impunity… we should certainly spend an awful lot more on catching and punishing the perpetrators.” Speaking of which, Europol released an infographic showing how the GozNym criminal network operated, following the arrest of 10 people connected with the gang.

Ransom-go-round

Any ransomware victim will know that their options are limited: restore inaccessible data from backups (assuming they exist), or grudgingly pay the criminals because they need that data badly. The perpetrators often impose time limits to amp up the psychological squeeze, making marks feel like they have no other choice.

Enter third-party companies that claim to recover data on victims’ behalf. A pricey but risk-free option, perhaps? It turns out, maybe not. If it sounds too good to be true, it probably is – and that’s just what some top-quality sleuthing by ProPublica unearthed. It found two companies that simply paid the ransom and pocketed the profit, without telling law enforcement or their customers.

This is important because ransomware is showing no signs of stopping. Fortinet’s latest Q1 2019 global threat report said these types of attacks are becoming targeted. Criminals are customising some variants to go after high-value targets and to gain privileged access to the network. Figures from Microsoft suggest ransomware infection levels in Ireland dropped by 60 per cent. Our own Brian Honan cautioned that last year’s figures might look good just because 2017 was a blockbuster year that featured WannaCry and NotPetya.

Links we liked

Finally, here are some cybersecurity stories, articles, think pieces and research we enjoyed reading over the past month.

If you confuse them, you lose them: a post about clear security communication. MORE

This detailed Wired report suggests Bluetooth’s complexity is making it hard to secure. MORE

Got an idea for a cybersecurity company? ENISA has published expert help for startups. MORE

A cybersecurity apprenticeship aims to provide a talent pipeline for employers. MORE

Remember the Mirai botnet malware for DDoS attacks? There’s a new variant in town. MORE

The hacker and pentester Tinker shares his experience in a revealing interview. MORE

So it turns out most hackers for hire are just scammers. MORE

The cybersecurity landscape and the role of the military. MORE

What are you doing this afternoon? Just deleting my private information from the web. MORE


GDPR one year on

May 2019 marks the first anniversary since the General Data Protection Regulation came into force. What has changed in the world of privacy and data protection since then? BH Consulting looks at some of the developments around data breaches, and we briefly outline some of the high-profile cases that could impact on local interpretation of the GDPR.

Breach reporting – myths and misconceptions

Amongst the most immediate and visible impacts of the GDPR was the requirement to report data breaches to the supervisory authority. In the context of GDPR, a personal data breach means a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data. The regulation introduced a duty on all organisations to report personal data breaches to the supervisory authority where they are likely to pose a risk to data subjects. This report must take place within 72 hours of the controller becoming aware of the breach, where feasible. There are additional obligations to report the breach to data subjects, without undue delay, if the breach is likely to result in a high risk of adversely affecting individuals’ rights and freedoms.

Between May 2018 when GDPR came into force, and January 2019, there were 41,502 personal data breaches reported across Europe, according to figures from the European Data Protection Board. In Ireland, the Data Protection Commission recorded 3,542 valid data security breaches from 25 May to 31 December 2018. This was a 70 per cent increase in reported valid data breaches compared to 2017.

Notwithstanding the uptick in the number of reported breaches, it has been suggested that many organisations are still unsure how to spot a data breach, when a breach may meet the criteria for reporting, or even how to go about reporting. With this in mind, the key lessons to consider are:

Not every breach needs to be reported

Organisations controlling and processing personal data should have a process in place to assess the risks to data subjects if a breach occurs. This assessment should focus on the severity and likelihood of the potential negative consequences of the breach on the data subject.

Assess the risks

When assessing whether to report, the controller will need to consider the type of breach, sensitivity and volume of the personal data involved, how easily individuals can be identified from it, the potential consequences and the characteristics of the individual or the controller (such as if the data relates to children or it involves medical information).
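
To make that assessment a little more concrete, here is a minimal Python sketch of how an organisation might structure such a triage. The factors, thresholds and function names are illustrative assumptions based on the criteria above, not official DPC or EDPB guidance, so treat it as a thinking aid rather than a compliance tool.

```python
from dataclasses import dataclass

@dataclass
class BreachAssessment:
    """Illustrative risk factors drawn from the criteria above (hypothetical weightings)."""
    special_category_data: bool   # e.g. health data or data relating to children
    easily_identifiable: bool     # can individuals be identified from the data?
    large_volume: bool            # large number of records or data subjects affected
    data_encrypted: bool          # strong encryption reduces the likely impact

    def triage(self) -> str:
        """Very rough triage: document only, notify the authority, or notify authority and subjects."""
        if self.data_encrypted and not self.special_category_data:
            return "document only"                    # unlikely to pose a risk to data subjects
        if self.special_category_data and self.easily_identifiable:
            return "notify authority and subjects"    # likely high risk to rights and freedoms
        return "notify authority"                     # report within 72 hours of becoming aware

# Example: a lost, unencrypted file of identifiable patient records
incident = BreachAssessment(special_category_data=True, easily_identifiable=True,
                            large_volume=False, data_encrypted=False)
print(incident.triage())   # -> "notify authority and subjects"
```

Whatever form the assessment takes, the point is the same: the decision should be recorded, repeatable and based on the risk to the data subject, not on convenience for the organisation.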

Who’s reporting first?

It’s possible the supervisory authority may hear about the breach from other sources including the media or affected data subjects. If this is the case, an authority such as the DPC may reach out to the affected organisation first, even before that entity has reported.

Establish the facts

As a final point, it is important not to forget that, even if you do not need to report a breach, the GDPR requires you to document the facts relating to it, its effects and the remedial action taken. Therefore, you should keep a record of all privacy incidents, even if they do not rise to a reportable level. This will help you learn from any mistakes and meet your accountability obligations.

Points to note

Keep in mind that it is not just about reporting a breach; organisations must also contain the breach, attempt to mitigate its negative effects, evaluate what happened, and prevent a repeat.

Breach reporting myths

Several misconceptions quickly emerged about GDPR, so here is a short primer to clarify them:

  1. Not all data breaches need to be reported to the supervisory authority
  2. Not all details need to be provided as soon as a data breach occurs
  3. Human error can be a source of a data breach
  4. Breach reporting is not all about punishing organisations
  5. Fines are not necessarily automatic or large if you don’t report in time

Resource cost – beyond the obvious

There have been a limited number of GDPR-related fines to date (see below) but that number is likely to increase. Aside from financial penalties relating to breaches, organisations and businesses also need to consider the cost involved in complying with the regulation more generally.

This includes the resources needed to engage with a supervisory authority like the Data Protection Commission, as well as the amount of time it typically takes to manage a subject access request (SAR). The number of SARs is increasing because GDPR allows individuals to make a request free of charge.

GDPR enforcement actions: Google

In the runup to May 25 2018, there had been significant doubts about effective enforcement of the GDPR. If the seemingly invulnerable American social media and technology giants were able to ignore its requirements without consequence, what would happen to the regulation’s credibility and enforceability elsewhere? But against the current global backdrop, those technology companies have become far less invulnerable than they once seemed. Most cases are still making their way through the appeals procedure, but initial verdicts and sanctions are causing ripples for everyone within scope.

On January 21, 2019, the French Supervisory Authority for data protection (CNIL) fined Google €50 million for GDPR violations – the largest data protection fine ever imposed. The case raises several important privacy issues and provides useful insights into how one supervisory authority interprets the GDPR.

CNIL’s decision focuses on two main aspects: (i) violation of Google’s transparency obligations under the GDPR (specifically under Articles 12 and 13) and (ii) the lack of a legal basis for processing personal data (a requirement under Article 6). The CNIL is of the opinion that the consent obtained by Google does not meet the requirements for consent under the GDPR. Google is appealing the decision.

The decision dismisses the application of the GDPR’s one-stop-shop mechanism by holding that Google Ireland Limited is not Google’s main establishment in the EU (which would have made Ireland’s DPC the competent authority, rather than the CNIL). Since the fine is more than €20 million, it is clearly based on the turnover of Alphabet, Google’s holding company in the United States, not on any European entity.

GDPR enforcement actions: Facebook

On 7 February, Germany’s competition law regulator, FCO, concluded a lengthy investigation into Facebook and found that the company abused its dominant market position by making the use of its social network conditional on the collection of user data from multiple sources.

Facebook has not been fined; instead, the FCO imposed restrictions on its processing of user data from private users based in Germany. Facebook-owned services such as WhatsApp and Instagram may continue to collect data but assigning that data to a Facebook user account will only be possible with the user’s voluntary consent. Collecting data from third party websites and assigning it to a Facebook user account will also only be possible with a user’s voluntary consent.

Facebook is required to implement a type of internal unbundling; it can no longer make use of its social network conditional on agreeing to its current data collection and sharing practices relating to its other services or to third party apps and websites. Facebook intends to appeal this landmark decision under both competition and data protection law in the EU.

Other enforcement actions

After Birmingham Magistrates’ Court fined workers in two separate cases for breaching data protection laws, the UK Information Commissioner’s Office warned that employees could face a criminal prosecution if they access or share personal data without a valid reason.

The first hospital GDPR violation penalty was issued in Portugal after the Portuguese supervisory authority audited the hospital and discovered 985 hospital employees had access rights to sensitive patient health information when there were only 296 physicians employed by the hospital. The failure to implement appropriate access controls is a violation of the GDPR, and the hospital was fined €400,000 for the violations.

Lessons from year one

For data controllers and processors, the lessons to be learned from the first year of GDPR are clear:

Transparency is key

You must give users clear, concise, easily accessible information to allow them to understand fully the extent of the processing of their data. Without this information, it is unlikely that any consent you collect will meet the GDPR standard for consent.

Fines can be large

CNIL’s response to Google demonstrates that regulators will get tough when it comes to fines and take several factors into account when determining the level of fine.

Watch the investigations

There are currently 250 ongoing investigations: 200 arising from complaints or breaches, and 50 opened independently by the data protection authorities. These will be interesting to watch in 2019.

Lead Supervisory Authority identity

Google and Facebook have both appointed the DPC in Ireland as their lead supervisory authority and have raised this in the appeals process. CNIL took the lead in the Google investigation, even though Google has its EU headquarters in Ireland, because the complaints were made against Google LLC (the American entity) in France.

Further challenges

Challenges to the way the tech giants use personal data show no sign of dwindling. A complaint has been filed with Austria’s data protection office alleging a breach of Article 15 of the GDPR, relating to users of Amazon, Apple, Netflix, Google (again) and Spotify being unable to access their data. 2019 should be an interesting year for privacy.

What lies ahead?

The GDPR cannot be seen in isolation; it emerged at the same time as a growing public movement that frames privacy as a fundamental right. The research company Gartner identified digital ethics and privacy as one of its top trends for 2019. From a legislative perspective, the GDPR is part of a framework aimed at making privacy protection more robust.

PECR is the short form of the Privacy and Electronic Communications (EC Directive) Regulations 2003. These regulations implement the e-privacy directive and sit alongside the Data Protection Act and the GDPR. They give people specific privacy rights in relation to electronic communications, with rules covering marketing calls, emails, texts and faxes; cookies and similar technologies; keeping communications services secure; and customer privacy relating to traffic and location data, itemised billing, line identification and directory listings.

Further afield in the US, the California Consumer Privacy Act (CCPA) was signed into law in June 2018 and will come into effect on 1 January 2020. It’s intended to give California residents the right to know what personal data is being collected about them, and whether that information is sold or disclosed. Many observers believe the Act will trigger other U.S. states to follow suit.

For the remainder of 2019 and beyond, it promises to be an interesting time for privacy and data protection.


That’s classified! Our top secret guide to helping people protect information

As information security professionals, we often face a challenge when trying to explain what we mean by ‘data classification’. So here’s my suggestion: let’s start by not calling it that. In my experience, the minute you call it that, people switch off.

Our role should be to try to engage an audience, not scare them away. Classification sounds like a military term, and if the reaction that greets you is an eye-roll that says: ‘you’re talking security again’, then they’ve zoned out before you’ve even got to the second sentence. I try and change the language, because otherwise, what we have here is a failure to communicate.

In reality, it’s very simple if you explain what you mean by classification. If we strip away any jargon or names, what we’re doing is asking an organisation to decide what information is most important to it. Then, it’s about asking the organisation’s people to apply appropriate layers of protection to that information based on its level of importance.

De do do do, de da da da

Who needs to use data classification? These days, it’s everyone. Why is it important? Why make people do this work? Data is a precious commodity. Think of it like water in many parts of the world: there’s a lot of it about, it’s too easily leaked if you don’t protect it, it’s extremely valuable if you control the source, and you can combine it with other things to increase its worth. Well, it’s a similar story with data. Data is just a bunch of numbers, but context turns it into information. You could have 14 seemingly random numbers, and that’s data. Now, split them into two groups, one of eight digits and another of six digits with some dashes in between. Suddenly those numbers become a bank account number and sort code. Then it’s information.

Message in a bottle

The first step for security professionals to win people over to the concept is to make it real for their audience. If your message is personal, people can relate it to what they have to do in their work.

We handle types of information in different ways and make decisions all the time on who should have access to it. Think of it this way: do you file paperwork – utility bills, appointment letters, bank statements – at home? Would you leave your payslip lying around the home for your kids to read?

In a work context, a CEO might want their executive assistant to access their calendar for meetings, but they don’t necessarily want to share their bank account details, revealing how much money they make or what they spend it on.

Naturally, the type of information that’s most valuable will vary by industry, so you have to adapt any message to suit. In healthcare, it might be sensitive medical records about someone’s health. For someone working in the food and drinks industry, maybe IP (intellectual property) like the recipe for the secret sauce or the package design is the most valuable item to protect. In pharmaceuticals, it might be the blueprints or ingredients of a new drug.

You don’t have to put on the red light

So now we’ve established that information may have different values, how do we group them? Deciding on the value of information may require the employee to apply good judgement. I like using the traffic light idea of three tiers of information (red, amber and green) rather than the binary option of just public or private. Those three levels then become public (green), confidential (amber), and restricted or private (red). It allows for an extra level of data management, and therefore protection, where needed, while three is still a simple number of levels to grasp.


This approach is easy to picture. People can very quickly understand what category information falls into, and what to do with it. Using the traffic light approach, public material (green) might be a brochure about a new product, or it could be the menu in the staff canteen. That’s the material that you want many people to see. The company contact directory or minutes from a meeting would be confidential (amber). Items that aren’t for general distribution outside board level (such as merger discussions) are extremely sensitive or privileged (red).

Once we know what we’re protecting, we get to the how.

  • If we’re dealing with physical paper documents, we can mark the sensitive information with a red sticker or red mark on the corner. The rule might be: never leave a red file unattended unless an authorised person is actively reading it and doing something with it. You know it shouldn’t leave the building unless it’s extremely well protected.
  • If the mark or sticker is amber, the person holding it must lock it away overnight.
  • Any document with a green mark doesn’t have to be locked away.

Every breath you take

You can extend that system beyond individual files to folders and to filing cabinets if necessary. You can apply this very easily by adding the appropriate colour to each document, folder, filing cabinet or even rooms in the building. Leave marker pens, stickers or anything that clearly shows the classification available for people to use.

It’s relatively easy to get people to apply the exact same marking system to electronic data. So you mark the Word file or Excel sheet with the same colour scheme, and folders, and so on. Once you’ve put the colours on it, the application of it is easy. If you use templates or forms of any kind it’s easy to start applying rules automatically, and you can then tie in the classification to your data leakage prevention tools, or DLP solutions, by blocking the most sensitive information from leaving the organisation, or at least flagging it for attention. It’s possible to put markers in the metadata of document templates, so amber or red documents could flag to the user that they need to encrypt before sending.
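
As a rough illustration of how that tie-in between classification labels and outbound controls might work, here is a small Python sketch. The labels, rules and function names are hypothetical; a real deployment would hook into your email gateway or DLP product rather than a standalone script.

```python
from enum import Enum

class Classification(Enum):
    GREEN = "public"        # brochures, the staff canteen menu
    AMBER = "confidential"  # contact directory, meeting minutes
    RED = "restricted"      # board-level material such as merger discussions

def outbound_check(filename: str, label: Classification) -> str:
    """Illustrative outbound rule, mirroring the traffic light scheme described above."""
    if label is Classification.RED:
        return f"BLOCK {filename}: restricted material should not leave the organisation"
    if label is Classification.AMBER:
        return f"FLAG {filename}: encrypt before sending and double-check the recipient"
    return f"ALLOW {filename}"

print(outbound_check("product-brochure.docx", Classification.GREEN))
print(outbound_check("merger-notes.docx", Classification.RED))
```

In practice the label would live in the document’s metadata, as described above, so the gateway can read it automatically rather than relying on the filename or the sender’s memory.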

Ultimately, we’re in the business of changing behaviour, and the net result should be that people become more aware of information and data protection because it’s a relatable concept that they’re applying in their daily work, almost without realising.

So if not classification, what do we call it? The importance of information? Data management? It’s still not very snappy, so any suggestions or answers on a postcard please.

Oh, and as a footnote, if you have any information you want everyone in the company to read, just put it in an unsealed envelope marked “CONFIDENTIAL” and leave it near the printer/photocopier/coffee area. I guarantee everyone passing will take a look.


Upcoming cybersecurity events featuring BH Consulting

Here, we list upcoming events, conferences, webinars and training featuring members of the BH Consulting team presenting about cybersecurity, risk management, data protection, GDPR, and privacy. 

ISACA Last Tuesday: Dublin, 25 June

BH Consulting COO Valerie Lyons will present a talk on building an emotionally intelligent security team, and the role that leadership plays in influencing team style. It will be an interactive and fun session with several takeaways and directions to free online tools to help analyse team member roles. The evening event will take place at the Carmelite Community Centre on Aungier Street in Dublin 2. Attendance is free; to register, visit this link.

Data Protection Officer certification course: Vilnius/Maastricht June/July

BH Consulting contributes to this specialised hands-on training course that provides the knowledge needed to carry out the role of a data protection officer under the GDPR. This course awards the ECPC DPO certification from Maastricht University. Places are still available at the courses scheduled for June and July, and a link to book a place is available here.

IAM Annual Conference: Dublin, 28-30 August

Valerie Lyons is scheduled to speak at the 22nd annual Irish Academy of Management Conference, taking place at the National College of Ireland. The event will run across three days, and its theme considers how business and management scholarship can help to solve societal challenges. For more details and to register, visit the IAM conference page.


Password-less future moves closer as Google takes FIDO2 for a walk

For years, many organisations – and their users – have struggled with the challenge of password management. The technology industry has toiled on this problem by trying to remove the need to remember passwords at all. Recent developments suggest we might finally be reaching a (finger) tipping point.

At Mobile World Congress this year, Google and the FIDO Alliance announced that most devices running Android 7.0 or later can provide password-less logins in their browsers. To clarify, the FIDO2 authentication standard is sometimes called password-less web authentication. Strictly speaking, that’s a slightly misleading name because people still need to authenticate to their devices using a PIN or a biometric identifier like a fingerprint. It’s more accurate to say FIDO2 authentication but, not surprisingly, the term ‘password-less’ seems to have caught the imagination.

Wired reported that web developers can now make their sites work with FIDO2, which would mean people can log in to their online accounts on their phones without a password. This feature will be available to an estimated one billion Android devices, so it’s potentially a significant milestone on the road to a password-less future. Last November, Microsoft announced password-less sign-in for its account users, using the same FIDO2 standard. One caveat: Microsoft’s option requires using the Edge browser on the Windows 10 1809 build. So, the true number of users is likely to be far lower than the 800 million Microsoft had been promising. But this is just the latest place where Microsoft has inserted FIDO technology into its products.

It’s not what you know

I spoke to Neha Thethi, BH Consulting’s senior information security analyst, who gave her reaction to this development. “Through this standard, FIDO and Google pave the way for users to authenticate primarily using ‘something they have’ (the phone) rather than ‘something they know’ (the password). While a fingerprint or PIN would typically be required to unlock the device itself, no shared secret or private key is transferred over the network or stored with the website, as it is in the case of a password. Only a public key is exchanged between the user and the website.”
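
To illustrate what that public-key exchange means in practice, here is a simplified Python sketch of the underlying challenge-response idea, using the open-source cryptography library. It is a bare-bones illustration of the principle, not the actual FIDO2/WebAuthn protocol, which adds attestation, origin binding and much more.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the authenticator (the phone) creates a key pair.
# The private key never leaves the device; only the public key is sent to the website.
device_private_key = Ed25519PrivateKey.generate()
website_stored_public_key = device_private_key.public_key()

# Login: instead of asking for a password, the website sends a random challenge.
challenge = os.urandom(32)

# After local user verification (PIN or fingerprint), the device signs the challenge.
signature = device_private_key.sign(challenge)

# The website verifies the signature with the stored public key.
# No shared secret was ever transmitted over the network or stored server-side.
website_stored_public_key.verify(signature, challenge)  # raises InvalidSignature if forged
print("Challenge verified: user authenticated without a password")
```

Because the server only ever holds a public key, a database leak gives attackers nothing they can replay as a credential, which is exactly the weakness of stored passwords.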

From the perspective of improving security, Google’s adoption of FIDO2 is a welcome development, Neha added. “Most of the account compromises we’ve seen in the past few years are because of leaked passwords – exposed on the likes of Pastebin or captured through phishing – which attackers then exploit. The Have I Been Pwned website gives a sense of the scale of this problem. By that measure, going password-less for logging in to online accounts will definitely decrease the attack surface significantly,” she said.

“The technology that enables this ease of authentication is public key cryptography, and it has been around since the 1970s. The industry has recognised this problem of shared secrets for a long time now. Personally, I welcome this solution to quickly and securely log in to online accounts. It might not be bulletproof, but it takes the onerous task of remembering passwords away from individuals,” she said.

Don’t try to cache me

Organisations have been using passwords for a long time to log into systems that store their confidential or sensitive information. However, even today, many of these organisations don’t have a systematic way of managing passwords for their staff. If an organisation or business wants to become certified to the ISO 27001 security standard, for example, they will need to put in place measures in the form of education, process and technology, to ensure secure storage and use of passwords. Otherwise, you tend to see less than ideal user behaviour like storing passwords on a sticky note or in the web browser cache. “I discourage clients from storing passwords in the browser cache because if their machine gets hacked, the attacker will have access to all that information,” said Neha. 

That’s not to criticise users, she emphasised. “If an organisation is not facilitating staff with a password management tool, they will find the means. They try the best they can, but ultimately they want to get on with their work.”

The credential conundrum

The security industry has struggled with the problem of access and authentication for years. It hasn’t helped by shifting the burden onto the people least qualified to do something about it. Most people aren’t security experts, and it’s unfair to expect them to be. Many of us struggle to remember our own phone numbers, let alone a complex password. Yet some companies force their employees to change their passwords regularly. What happens next is the law of unintended consequences in action. People choose a really simple password, or one that barely changes from the one they’d been using before.

For years, many security professionals followed the advice of the US National Institute of Standards and Technology (NIST) for secure passwords. NIST recommended using a minimum of seven characters and including numbers, capital letters or special characters. By that measure, a password like ‘Password1’ would meet the recommendations, even if no-one would think it was secure.
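
To see why that kind of rule falls short, consider a minimal Python sketch of such a complexity check: the easily guessed ‘Password1’ sails through, while a long multi-word phrase fails.

```python
import re

def meets_old_complexity_rules(password: str) -> bool:
    """Naive check in the spirit of the old guidance: length plus character classes."""
    return (len(password) >= 7
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[0-9]", password) is not None)

print(meets_old_complexity_rules("Password1"))                     # True, yet trivially guessable
print(meets_old_complexity_rules("correct horse battery staple"))  # False, yet far harder to crack
```

The rule measures the wrong thing: superficial character variety rather than how hard the password actually is to guess.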

Poor password advice

Bill Burr, the man who literally wrote the book on passwords for NIST, has since walked back on his own advice. In 2017, he told the Wall Street Journal, “much of what I did I now regret”. He added: “In the end, it was probably too complicated for a lot of folks to understand very well, and the truth is, it was barking up the wrong tree”. NIST has since updated its password advice, and you can find the revised recommendations here.

As well as fending off cybercrime risks, another good reason for implementing good access control is GDPR compliance. Although the General Data Protection Regulation doesn’t specifically refer to passwords, it requires organisations to process personal data in a secure manner. The UK’s Information Commissioner’s Office has published useful free guidance about good password practices with GDPR in mind.

Until your organisation implements password-less login, ensure you protect your current login details. Neha recommends using a pass phrase instead of a password, along with two-factor authentication where possible. People should also use a different pass phrase for each website or online service, because reusing the same phrase puts them at risk if attackers compromise any one of those sites: once criminals get one set of login credentials, they try them on other popular websites to see if they work. She also recommends using a good password manager or password keeper in place of having to remember multiple pass phrases or passwords. Just remember to think of a strong master password to protect all of those other login details!
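
For anyone curious what generating a pass phrase might look like, here is a small Python sketch using the standard library’s secrets module. The word list is a short stand-in; a real implementation would use a long diceware-style list of several thousand words.

```python
import secrets

# Short stand-in list; a proper diceware-style list contains thousands of words.
WORDS = ["orbit", "velvet", "cactus", "ember", "harbour", "quartz",
         "meadow", "falcon", "ripple", "tundra", "saffron", "glacier"]

def generate_passphrase(word_count: int = 5, separator: str = "-") -> str:
    """Pick words using a cryptographically secure random source."""
    return separator.join(secrets.choice(WORDS) for _ in range(word_count))

print(generate_passphrase())   # e.g. "falcon-ember-quartz-meadow-ripple"
```

Each account should still get its own pass phrase, stored in the password manager mentioned above.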


When is it fair to infer?

While the GDPR framework is robust in many respects, it struggles to provide adequate protection against the emerging risks associated with inferred data (sometimes called derived data, profiling data, or inferential data). Inferred data pose potentially significant risks in terms of privacy and/or discrimination, yet they would seem to receive the least protection of the personal data types prescribed by GDPR. Defined as assumptions or predictions about future behaviour, inferred data cannot be verified at the time of decision-making. Consequently, data subjects are often unable to predict, understand or refute these inferences, whilst their privacy rights, identity and reputation are impacted.

Reaching dangerous conclusions

Numerous applications drawing potentially troubling inferences have emerged; Facebook is reported to be able to infer protected attributes such as sexual orientation and race, as well as political opinions and the likelihood of a data subject attempting suicide. Facebook data has also been used by third parties to decide on loan eligibility, to infer political leanings, to predict views on social issues such as abortion, and to determine susceptibility to depression. Google has attempted to predict flu outbreaks, other diseases and medical outcomes. Microsoft can predict Parkinson’s and Alzheimer’s from search engine interactions. Target can predict pregnancy from purchase history, users’ satisfaction can be determined by mouse tracking, and China’s social credit system infers a score from citizens’ behaviour.

What protections does GDPR offer for inferred data?

The European Data Protection Board (EDPB) notes that both verifiable and unverifiable inferences are classified as personal data (for instance, the outcome of a medical assessment regarding a user’s health, or a risk management profile). However it is unclear whether the reasoning and processes that led to the inference are similarly classified. If inferences are deemed to be personal data, should the data protection rights enshrined in GDPR also equally apply?

The data subject’s right to be informed, right to rectification, right to object to processing, and right to portability are significantly reduced when data is not ‘provided by the data subject’. For example, the EDPB notes (in its guidelines on the right to data portability) that “though such data may be part of a profile kept by a data controller and are inferred or derived from the analysis of data provided by the data subject, these data will typically not be considered as ‘provided by the data subject’ and thus will not be within scope of this new right”.

The data subject, however, can still exercise their “right to obtain from the controller confirmation as to whether or not personal data concerning the data subject are being processed, and, where that is the case, access to the personal data”. The data subject also has the right to information about “the existence of automated decision-making, including profiling (Article 22(1) and (4)), meaningful information about the logic involved, as well as the significance and consequences of such processing” (Article 15). However, the data subject must actively make such an access request, and if the organisation does not provide the data, how will the data subject know that derived or inferred data is missing from their access request?

A data subject can also object to direct marketing based on profiling and/or have it stopped. However, there is no obligation on the controller to inform the data subject that any profiling is taking place – “unless it produces legal or significant effects on the data subject”.

No answer just yet…

Addressing the challenges and tensions of inferred and derived data will necessitate further case law on the interpretation of “personal data”, particularly regarding interpretations of the GDPR. Future case law on the meaning of “legal effects… or similarly significantly affects”, in the context of profiling, would also be helpful. It would also seem reasonable to suggest that, where possible, data subjects should be informed at the point of collection that the organisation derives or infers data from what they provide, and for what purposes. If the data subject doesn’t know that an organisation uses their data to infer new data, they cannot fully exercise their data subject rights, since they won’t know that such data exists.

In the meantime, it seems reasonable to suggest that inferred data is ‘fair’ where its existence has been clearly communicated to the data subject, its purpose is benevolent, and it offers the data subject positive added value.


Five data protection tips from the DPC’s annual report

The first post-GDPR report from the Data Protection Commission makes for interesting reading. The data breach statistics understandably got plenty of coverage, but there were also many pointers for good data protection practice. I’ve identified five of them which I’ll outline in this blog.

Between 25 May and 31 December 2018, the DPC recorded 3,542 valid data security breaches. (For the record, the total number of breaches for the calendar year was 4,740.) This was a 70 per cent increase in reported valid data security breaches compared to 2017 (2,795), and a 56 per cent increase in public complaints compared to 2017.

1. Watch that auto-fill!

By far the largest single category was “unauthorised disclosures”, which accounted for 3,134 of the total. Delving further, we find that many of the complaints to the DPC relate to unauthorised disclosure of personal data in an electronic context. In other words, an employee at a company or public sector agency sent an email containing personal data to the wrong recipient.

Data breaches in Ireland during 2018 and their causes

A case study on page 21 of the report illustrates this point: a data subject complained to the DPC after their web-chat with a Ryanair employee “was accidentally disclosed by Ryanair in an email to another individual who had also used the Ryanair web-chat service. The transcript of the webchat contained details of the complainant’s name and that of his partner, his email address, phone number and flight plans”.

It’s a common misconception that human error doesn’t count as a data breach, but in the eyes of the GDPR, it does. The most common cause of breaches like this is the auto-fill function in some software applications, such as email clients.

Where an organisation deals with high-risk data like healthcare information (because of the sensitivity involved), best practice is to disable auto-fill. I recommend this step to many of my clients. Many organisations don’t like doing this because it disrupts staff and makes their jobs a little bit harder. In my experience, employees soon get used to the inconvenience, while organisations greatly reduce their chances of a breach.

2. Encrypted messaging may not be OK

Another misconception I hear a lot is that it’s OK to use WhatsApp as a messaging tool because it’s encrypted. The case study on page 19 of the DPC report clarifies this position. A complainant claimed the Department of Foreign Affairs and Trade’s Egypt mission had shared his personal data with a third party (his employer) without his knowledge. A staff member at the mission was checking the validity of a document and the employer had no email address, so they sent a supporting document via WhatsApp.

In this case, the DPC “was satisfied that given the lack of any other secure means to contact the official in question, the transmission via WhatsApp was necessary to process the personal data for the purpose provided (visa eligibility)”.

My reading of this is that although the DPC ruled that WhatsApp was sufficient in this case, this was only because no other secure means of communication was available.

3. Do you need a DPO?

The report tells us that there were 900 Data Protection Officers appointed between 25 May and 31 December 2018. My eyes were immediately drawn to some text accompanying that graph (below). “During 2019, the DPC plans to undertake a programme of work communicating with relevant organisations regarding their obligations under the GDPR to designate a DPO.” This suggests to me that the DPC doesn’t believe there are enough DPOs, hence the outreach and awareness-raising efforts.

Notifications of new DPOs between 25 May and 31 December 2018

Private and public organisations will need to decide whether they should appoint a full-time DPO or avail of a service-model from a third-party data protection specialist.

4. A data protection policy is not a ‘get out of jail free’ card

Case study 9 from the report concerns an employee of a public-sector body who lost an unencrypted USB device. The device contained personal information belonging to a number of colleagues and service users. The data controller had policies and procedures in place that prohibited the removal and storage of personal data on unencrypted devices. But the DPC found that it “lacked the appropriate oversight and supervision necessary to ensure that its rules were complied with”.

The lesson I take from this is, “user error” is not a convenient shield for all data protection shortcomings. Many organisations expended effort last year in writing policies, and some think they’re covered from sanction because they did so. But unless they implement and enforce the policy – and provide training to staff about it – then it’s not enough.

5. Email marketing penalties may change

My final point is more of an observation than advice. Between 25 May and 31 December, the DPC prosecuted five entities for 30 offences involving email marketing. The report details those cases. A recurring theme is that the fines were mostly in the region of a couple of thousand euro. However, all of these cases began before GDPR was in force; since then, the DPC has the power to levy fines directly rather than going through the courts. This is an area I expect the DPC to address. Any organisation that took a calculated risk in the past because the fines were low should not expect this situation to continue.

There are plenty of other interesting points in the 104-page report, which is free to download here.


Security roundup: March 2019

We round up interesting research and reporting about security and privacy from around the web. This month: ransomware repercussions, reporting cybercrime, vulnerability volume, everyone’s noticing privacy, and feeling GDPR’s impact.

Ransom vs ruin

Hypothetical question: how long would your business hold out before paying to make a ransomware infection go away? For Apex Human Capital Management, a US payroll software company with hundreds of customers, it was less than three days. Apex confirmed the incident, but didn’t say how much it paid or reveal which strain of ransomware was involved.

Interestingly, the story suggests that the decision to pay was a consensus between the company and two external security firms. This could be because the ransomware also encrypted data at Apex’s newly minted external disaster recovery site. Most security experts strongly advise against paying extortionists to remove ransomware. With that in mind, here’s our guide to preventing ransomware. We also recommend visiting NoMoreRansom.org, which has information about infections and free decryption tools.

Bonus extra salutary security lesson: while we’re on the subject of backup failure, a “catastrophic” attack wiped the primary and backup systems of the secure email provider VFEmail. Effectively, the loss of its backups put the company out of business. As Brian Honan noted in the SANS newsletter, this case shows the impact of badly designed disaster recovery procedures.

Ready to report

If you’ve had a genuine security incident – neat segue alert! – you’ll probably need to report it to someone. That entity might be your local CERT (computer emergency response team), a regulator, or even law enforcement. (It’s called cybercrime for a reason, after all.) Security researcher Bart Blaze has developed a template for reporting a cybercrime incident which you might find useful. It’s free to download at Peerlyst (sign-in required).

By definition, a security incident will involve someone deliberately or accidentally taking advantage of a gap in an organisation’s defences. Help Net Security recently carried an op-ed arguing that it’s worth accepting that your network will be infiltrated or compromised. The key to recovering faster involves a shift in mindset and strategy from focusing on prevention to resilience. You can read the piece here. At BH Consulting, we’re big believers in the concept of resilience in security. We’ve blogged about it several times over the past year, including posts like this.

In incident response and in many aspects of security, communication will play a key role. So another helpful resource is this primer on communicating security subjects with non-experts, courtesy of SANS’ Lenny Zeltser. It takes a “plain English” approach to the subject and includes other links to help security professionals improve their messaging. Similarly, this post from Raconteur looks at language as the key to improving collaboration between a CISO and the board.

Old flaws in not-so-new bottles

More than 80 per cent of enterprise IT systems have at least one flaw listed on the Common Vulnerabilities and Exposures (CVE) list. One in five systems have more than ten such unpatched vulnerabilities. Those are some of the headline findings in the 2019 Vulnerability Statistics Report from Irish security company Edgescan.

Edgescan concluded that the average window of exposure for critical web application vulnerabilities is 69 days. Per the report, an average enterprise takes around 69 days to patch a critical vulnerability in its applications and 65 days to patch the same in its infrastructure layers. High-risk and medium-risk vulnerabilities in enterprise applications take up to 83 days and 74 days respectively to patch.

SC Magazine’s take was that many of the problems in the report come from companies lacking full visibility of all their IT assets. The full Edgescan report has even more data and conclusions and is free to download here.

From a shrug to a shun

Privacy practitioners take note: consumer attitudes to security breaches appear to be shifting at last. PCI Pal, a payment security company, found that 62 per cent of Americans and 44 per cent of Britons claim they will stop spending with a brand for several months following a hack or breach. The reputational hit from a security incident could be greater than the cost of repair. In a related story, security journalist Zack Whittaker has taken issue with the hollow promise of websites everywhere. You know the one: “We take your privacy seriously.”

If you notice this notice…

Notifications of data breaches have increased since GDPR came into force. The European Commission has revealed that companies made more than 41,000 data breach notifications in the six-month period since May 25. Individuals or organisations made more than 95,000 complaints, mostly relating to telemarketing, promotional emails and video surveillance. Help Net Security has a good writeup of the findings here.

It was a similar story in Ireland, where the Data Protection Commission saw a 70 per cent increase in reported valid data security breaches, and a 56 per cent increase in public complaints compared to 2017. The summary data is here and the full 104-page report is free to download.

Meanwhile, Brave, the privacy-focused browser developer, argues that GDPR doesn’t make doing business harder for a small company. “In fact, if purpose limitation is enforced, GDPR levels the playing field versus large digital players,” said chief policy officer Johnny Ryan.

Interesting footnote: a US insurance company, Coalition, has begun offering GDPR-specific coverage. Dark Reading quotes a lawyer who said insurance might be effective for risk transference, but it’s untested. Much will depend on the policy’s wording, the lawyer said.

Things we liked

Lisa Forte’s excellent post draws parallels between online radicalisation and cybercrime. MORE

Want to do some malware analysis? Here’s how to set up a Windows VM for it. MORE

You give apps personal information. Then they tell Facebook (PAYWALL). MORE

Ever wondered how cybercriminals turn their digital gains into cold, hard cash? MORE

This 190-second video explains cybercrime to a layperson without using computers. MORE

Blaming the user for security failings is a dereliction of responsibility, argues Ira Winkler. MORE

Tips for improving cyber risk management. MORE

Here’s what happens when you set up an IoT camera as a honeypot. MORE
