Category Archives: GDPR

Cyber Incident Response Plan – It Won’t Be Any Worse Off

Take a glance at the most discussed cybersecurity news of the week. Get to know ...

The post Cyber Incident Response Plan – It Won’t Be Any Worse Off appeared first on EdGuards - Security for Education.


The Business Value of Cybersecurity

Cybersecurity is rising as a key issue on the radar of virtually all organisations. According to a recent AT Kearney report, cyber-attacks have been topping executives’ lists of business risks for

The post The Business Value of Cybersecurity appeared first on The Cyber Security Place.

The top emerging risks organizations are facing

Gartner surveyed 98 senior executives across industries and geographies and found that “accelerating privacy regulation” had overtaken “talent shortages” as the top emerging risk in the Q1 2019 Emerging Risk Monitor survey. Concerns around privacy regulations were consistently spread across the globe, denoting the increasingly numerous and geographically specific regulations that companies must now comply with. “With the General Data Protection Regulation (GDPR) now in effect, executives realize that complying with privacy regulations is more … More

The post The top emerging risks organizations are facing appeared first on Help Net Security.

What is personal information? In legal terms, it depends

In early March, cybersecurity professionals around the world filled the San Francisco Moscone Convention Center’s sprawling exhibition halls to discuss and learn about everything infosec, from public key encryption to incident response, and from machine learning to domestic abuse.

It was RSA Conference 2019, and Malwarebytes showed up to attend and present. Our Wednesday afternoon session—“One person can change the world—the story behind GDPR”—explored the European Union’s new, sweeping data privacy law which, above all, protects “personal data.”

But the law’s broad language—and its steep penalties—left audience members with a lingering question: What exactly is personal data?

The answer: It depends.

Personal data, as defined by the EU’s General Data Protection Regulation, is not the same as “personally identifiable information,” as defined by US data protection and cybersecurity laws, or even “personal information” as defined by California’s recently-signed data privacy law. Further, in the US, data protection laws and cybersecurity laws serve separate purposes and, accordingly, define personal data in slightly different ways.

Complicating the matter is the public’s instinctual approach to personal information, personal data, and online privacy. For everyday individuals, personal information can mean anything from telephone numbers to passport information to postal codes—legal definitions be damned.

Today, in the latest blog for our cybersecurity and data privacy series, we discuss the myriad conditions and legal regimes that combine to form a broad understanding of personal information.

Companies should not overthink this. Instead, data privacy lawyers said businesses should pay attention to what information they collect and where they operate to best understand personal data protection and compliance.

As Duane Morris LLP intellectual property and cyber law partner Michelle Donovan said:

“What it comes down to, is, it doesn’t matter what the rules are in China if you’re not doing business in China. Companies need to figure out what jurisdictions apply, what information are they collecting, where do their data subjects reside, and based on that, figure out what law applies.”

What law applies?

The personal information that companies need to protect changes from law to law. However, even though global data protection laws define personal information in diverse ways, not every definition matters to every business.

For instance, a small company in California that has no physical presence in the European Union and makes no concerted efforts to market to EU residents does not have to worry about GDPR. Similarly, a Japanese startup that does not collect any Californians’ data does not need to worry about that state’s recently-signed data privacy law. And any company outside the US that does not collect any US personal data should not have to endure the headaches of complying with 50 individual state data breach notification laws.

Baker & McKenzie LLP of counsel Vincent Schroeder, who advises companies on privacy, data protection, information technology, and e-commerce law, said that the various rules that determine which laws apply to which businesses can be broken down into three basic categories: territorial rules, personal rules, and substantive rules.

Territorial rules are simple—they determine legal compliance based on a company’s presence in a country, state, or region. For instance, GDPR applies to companies that physically operate in any of the EU’s 28 member states, along with companies that directly market and offer their products to EU residents. That second rule of direct marketing is similar to a data privacy law in Japan, which applies to any company that specifically offers its products to Japanese residents.

“That’s the ‘marketplace rule,’ they call it,” Schroeder said. “If you’re doing business in that market, consciously, then you’re affecting the rights of the individuals there, so you need to adhere to the local regulatory law.” 

Substantive rules, on the other hand, determine compliance based on a company’s characteristics. For example, the newly-passed California Consumer Privacy Act applies to companies that meet any single one of the following three criteria: pull in annual revenue of $25 million, derive 50 percent or more of that annual revenue from selling consumers’ personal information, or buy, receive, sell, or share the personal information of 50,000 or more consumers, households, or devices.
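The three CCPA thresholds above form an any-one-of test, which can be sketched as a short, hypothetical Python check. The function name and parameters below are illustrative assumptions, and this is obviously a sketch, not legal advice:

```python
# Hypothetical sketch of the CCPA's "meet any single criterion" test
# described above. Thresholds come from the text; names are invented.

def ccpa_applies(annual_revenue_usd: float,
                 revenue_share_from_selling_pi: float,
                 pi_records_handled: int) -> bool:
    """Return True if any one of the three CCPA thresholds is met."""
    return (
        annual_revenue_usd >= 25_000_000            # $25M annual revenue
        or revenue_share_from_selling_pi >= 0.50    # 50%+ of revenue from selling PI
        or pi_records_handled >= 50_000             # PI of 50,000+ consumers/households/devices
    )

# A small company below every threshold is out of scope:
print(ccpa_applies(2_000_000, 0.0, 10_000))   # False
# A data broker earning 60% of revenue from selling PI is in scope:
print(ccpa_applies(2_000_000, 0.60, 10_000))  # True
```

A business clears the bar as soon as any single criterion is met, regardless of the other two.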

Businesses that want to know what personal information to legally protect should look first to which laws apply. Only then should they move forward, because “personal information” is never just one thing, Schroeder said.

“It’s an interplay of different definitions of the territorial, personal, and substantive scopes of application, and for definitions of personal data,” Schroeder said.

Personal information—what’s included?

The meaning of personal information changes depending on who you ask and which law you read. Below, we focus on five important interpretations. What does personal information mean to the public? What does it mean according to GDPR? And what does it mean according to three state laws in California—the country’s legislative vanguard in protecting its residents’ online privacy and personal data?

The public

Let’s be clear: Any business concerned with legal obligations to protect personal information should not start a compliance journey by, say, running an employee survey on Slack and getting personal opinions.

That said, public opinions on personal data are important, as they can influence lawmakers into drafting new legislation to better protect online privacy.

Jovi Umawing, senior content writer for Malwarebytes Labs who recently compiled nearly 4,000 respondents’ opinions on online privacy, said that personal information is anything that can distinguish one person from another.

“Personal information for me is relevant data about a person that makes them unique or stand out,” Umawing wrote. “It’s something intangible that one owns or possesses that (when combined with other information) points back to the person with very high or unquestionable accuracy.”

Pieter Arntz, malware intelligence researcher for Malwarebytes, provided a similar view. He said he considers “everything that can be used to identify me or find more specific information about me as personal information.” That includes addresses, phone numbers, Social Security numbers, driver’s license info, passport info, and, “also things like the postal code,” which, for people who live in very small cities, can be revealing, Arntz said.

Interestingly, some of these definitions overlap with some of the most popular data privacy laws today.


The General Data Protection Regulation

In 2018, the General Data Protection Regulation took effect, granting EU citizens new rights to access, transfer, and delete personal data. In 2019, companies are still figuring out what that personal data encompasses.

The text of the law offers little clarity, instead providing this ocean-wide ideology: “Personal data should be as broadly interpreted as possible.”

According to GDPR, the personal data that companies must protect includes any information that can “directly or indirectly” identify a person—or data subject—to whom the data belongs or describes. Included are names, identification numbers, location data, online identifiers like screen names or account names, and even characteristics that describe the “physical, physiological, genetic, mental, economic, cultural, or social identity” of a person.

That last piece could include things like an employee’s performance record, a patient’s medical diagnosis history, a user’s specific anarcho-libertarian political views, and even a person’s hair color and length, if it is enough to determine that person’s identity.

Donovan, the attorney from Duane Morris, said that GDPR’s definition could include just about any piece of information about a person that is not anonymized.

“Even if that information is not identifying [a person] by name, if it identifies by a number, and that number is known to be used to identify that person—either alone or in combination—it could still associate with that person,” Donovan said. “You should assume that if you have any data about an individual that is not anonymized when you get it, it’s likely going to be covered.”

The California Consumer Privacy Act

In June 2018, California became the first state in the nation to respond to frequent online privacy crises by passing a comprehensive, statewide data privacy law. The California Consumer Privacy Act, or CCPA, places new rules on companies that collect California residents’ personal data.

The law, which will go into effect in 2020, calls this type of data “personal information.”

“Personal information,” according to the CCPA, is “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.”

What that includes in practice, however, is a broad array of data points, including a person’s real name, postal address, and online IP address, along with biometric information—like DNA and fingerprint data—and even their browsing history, education history, and what the law vaguely describes as “audio, electronic, visual, thermal, olfactory, or similar information.”

Aside from protecting several new data types, the CCPA also makes a major change to how Californians can assert their data privacy rights in court. For the first time ever, a statewide data privacy law details “statutory damages,” which are legislatively-set, monetary amounts that an individual can ask to recover when filing a private lawsuit against a company for allegedly violating the law. Under the CCPA, people who believe their data privacy rights were violated can sue a company and ask for up to $750.

This is a huge shift in data privacy law, Donovan said.

“For the first time, there’s a real privacy law with teeth,” Donovan said.

Previously, if individuals wanted to sue a company for a data breach, they needed to prove some type of economic loss when asking for monetary damages. If, say, a fraudulent credit card was created with stolen data, and then fraudulent charges were made on that card, monetary damages might be easy to figure out. But it’s rarely that simple.  

“Now, regardless of the monetary damage, you can get this statutory damage of $750 per incident,” Donovan said.

California’s data breach notification law and data protection law

If we stay in California but go back in time several years, we see the start of a trend—California has been the first state, more than once, to pass data protection legislation.

In 2002, California passed its data breach notification law. The first of its kind in the United States, the law forced companies to notify California residents about unauthorized access to their “personal information.”

The previous definitions of personal information and data that we’ve covered—GDPR’s broad, anything-goes approach, and CCPA’s inclusion of heretofore unimagined “olfactory,” smell-based personal data—do not apply here.

Instead, personal information in the 17-year-old law—which received an update five years ago—is defined as a combination of data types: a Californian’s first and last name, or first initial and last name, paired with an element such as a Social Security number, driver’s license number, or credit card number with its corresponding security code. An email address combined with a password also qualifies on its own.

So, if a company suffers a data breach of a California resident’s first and last name plus their Social Security number? That’s considered personal information. If a data breach compromises another California resident’s first initial, last name, and past medical insurance claims? Once again, that data is considered personal information, according to the law.
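The combination rule described above lends itself to a toy sketch. The field names below are invented for illustration; the actual statute enumerates the qualifying data elements in much more detail:

```python
# Toy model of California's breach-notification combination rule:
# "personal information" is a name element paired with a sensitive
# element, or credentials (email plus password) on their own.
# Field names are illustrative, not the statute's terms.

NAME_ELEMENTS = {"first_name_last_name", "first_initial_last_name"}
SENSITIVE_ELEMENTS = {
    "ssn", "drivers_license", "card_number_with_security_code",
    "medical_information", "health_insurance_info",
}

def is_personal_information(breached_fields: set) -> bool:
    """True if the breached fields constitute 'personal information'."""
    has_name = bool(breached_fields & NAME_ELEMENTS)
    has_sensitive = bool(breached_fields & SENSITIVE_ELEMENTS)
    has_credentials = {"email", "password"} <= breached_fields
    return (has_name and has_sensitive) or has_credentials

print(is_personal_information({"first_name_last_name", "ssn"}))  # True
print(is_personal_information({"first_initial_last_name"}))      # False
```

A name on its own triggers nothing; it is the pairing with a sensitive element that matters.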

In 2014, this definition largely carried over into California’s data protection law. That year, then-California governor Jerry Brown signed changes to the state’s civil code that created data protection requirements for any company that owns, licenses, or maintains the “personal information” of California residents.

According to Assembly Bill No. 1710, “personal information” is, once again, a first name and last name (or first initial and last name) combined with data such as a Social Security number, driver’s license number, credit card number and corresponding security code, or medical and health information.

The definitions are not identical, though. California’s data protection law, unlike its data breach notification law, does not cover data collected by automated license plate readers, or ALPRs. ALPRs can indiscriminately—and sometimes disproportionately—capture the license plate numbers of any vehicles that cross into their field of vision.

Roughly one year later, California passed a law to strengthen protections of ALPR-collected data.

The takeaway

By now, it’s probably easier to define what personal information isn’t rather than what it is (obviously, there is a legal answer to that, too, but we’ll spare the details). These evolving definitions point to a changing legal landscape, where data is not protected solely because of its type, but because of its inherent importance to people’s privacy.

Just as there is no one-size-fits-all definition to personal information, there is no one-size-fits-all to personal data protection compliance. If a company finds itself wondering what personal data it should protect, may we suggest something we have done for every blog in this series: Ask a lawyer.

Join us again soon for the next blog in our series, in which we will discuss consumer protections for data breaches and online privacy invasions.  

The post What is personal information? In legal terms, it depends appeared first on Malwarebytes Labs.

Article 13, Copyright and Internet Freedom

Ongoing efforts to regulate Internet usage in the European Union have created some serious and wide-ranging legislation. The General Data Protection Regulation (GDPR) introduced significant fines for firms that mishandle personal data belonging to EU citizens.

Recently, attention shifted to the issue of copyright protections and infringement on the Internet. And this has led to the creation of a new copyright directive.

The (in)famous Article 13

The modern Internet is hugely interactive, allowing users to easily upload and share content. Facebook has become the default place for sharing family photos, and hundreds of thousands of young people upload videos to their YouTube channels every day. And so long as users own the rights to the content they share, everything will be fine.

However, sites like Reddit, 9GAG, and even YouTube are also popular places for sharing content protected by copyright. Sharing TV shows or music videos without the permission of the rights owner has always been illegal, but the new regulation has tightened platforms’ responsibilities.

Under Article 13, site operators now have a responsibility to remove copyright-protected content immediately, or face a major fine. For very large sites like YouTube and Facebook, this will mean using technology to detect and remove protected content automatically. Smaller sites will have to devote time and attention to managing content manually; some experts worry that the expense of enforcing the regulation will force smaller businesses to shut.

When the regulation first started to attract attention, many were concerned that memes – images and jokes built around short-lived pop-culture references – would be outlawed. The good news for those who like a laugh as they browse the web is that memes have been exempted from the copyright ban “for purposes of quotation, criticism, review, caricature, parody and pastiche”.

Automation can’t solve every problem

Large Internet players like Google are warning that it is actually ordinary users who will be worst affected by the new law. Despite massive improvements in automated recognition technologies, experts warn that current filters are still not accurate enough to identify and block all infringing uploads.

This means that many images will be blocked incorrectly. YouTube has even warned that it may have to stop showing videos to European users altogether because it cannot guarantee that its filter will be 100% accurate.

Bad, but in a different way

Where many users and campaigners were concerned that the copyright regulation would prevent free speech online, the reality is slightly different. People will still be able to share memes on their Facebook pages for instance.

Instead we now have a situation where content will be automatically filtered as it is uploaded. In the event that the filter cannot decide if a video is protected or not, the video will be deleted “just in case”.

Worse still, sites and services that cannot provide adequate filters may simply block access for European users. This isn’t an idle threat, either – some US news sites already block EU access because they cannot guarantee they comply with GDPR requirements. So you may find you are unable to visit some of your favourite sites in the near future.

The copyright regulation is an important step towards protecting the incomes of people who create and sell digital content. For users, Internet freedom remains intact. Almost. Unfortunately, current technical limitations mean that the new law causes more problems than it solves.

The post Article 13, Copyright and Internet Freedom appeared first on Panda Security Mediacenter.

Don’t build a Maginot Line of data security because without cyber security you are still vulnerable

Data security and cyber security overlap, but they are different, and there is a risk that if you focus too much on data security you could be left exposed. Bridewell’s

The post Don’t build a Maginot Line of data security because without cyber security you are still vulnerable appeared first on The Cyber Security Place.

Security roundup: April 2019

We round up interesting research and reporting about security and privacy from around the web. This month: gender rebalance, cookie walls crumble, telecom threats and incident response par excellence.

A welcome improvement

Women now make up almost a quarter of information security workers, according to new figures from (ISC)². For years, female participation in security roles hovered around the 10-11 per cent mark. The industry training and certification group’s latest statistics show that figure is much higher than was generally thought.

Some of this increase is due to the group widening its parameters beyond pure cybersecurity roles. The full report shows that higher percentages of women security professionals are attaining senior roles. This includes chief technology officer (7 per cent of women vs. 2 per cent of men), vice president of IT (9 per cent vs. 5 per cent), IT director (18 per cent vs. 14 per cent) and C-level or executive (28 per cent vs. 19 per cent).

“While men continue to outnumber women in cybersecurity and pay disparity still exists, women in the field are buoyed by higher levels of education and certifications, and are finding their way to leadership positions in higher numbers,” (ISC)² said.

The trends are encouraging for any girls or women who are considering entering the profession; as the saying goes, if you can see it, you can be it. (The report’s subtitle is ‘young, educated and ready to take charge’.) After the report was released, Kelly Jackson Higgins at Dark Reading tweeted a link to her story from last year about good practice for recruiting and retaining women in security.

Great walls of ire

You know those annoying website pop-ups that ask you to accept cookies before reading further? They’re known as cookie walls or tracker walls, and the Dutch data protection authority has declared that they violate the General Data Protection Regulation. If visitors can’t access a website without first agreeing to be tracked, they are being forced to share their data. The argument is that this goes against the principle of consent, since the user has no choice but to agree if they want to access the site.

Individual DPAs have taken different interpretations of GDPR matters. SC Magazine quoted Omar Tene of the International Association of Privacy Professionals, who described the Dutch approach as “restrictive”.

This might be a case of GDPR solving a problem of its own making: The Register notes that cookie consent notices showed a massive jump last year, from 16 per cent in January to 62.1 per cent by June.

Hanging on the telephone

Is your organisation’s phone system in your threat model? New research from Europol’s European Cybercrime Centre and Trend Micro lifts the lid on network-based telecom fraud and infrastructure attacks. The Cyber-Telecom Crime Report includes case studies of unusual attacks to show how they work in the real world.

By accessing customers’ or carriers’ accounts, criminals have a low-risk alternative to traditional forms of financial fraud. Among the favoured tactics is vishing, a voice scam designed to trick people into revealing personal or financial information over the phone. ‘Missed call’ scams, also known as Wangiri, involve calling a number once; when the recipient calls back, thinking it’s a genuine call, they connect to a premium rate number. The report includes the eye-watering estimate that criminals make €29 billion per year from telecom fraud.

Trend Micro’s blog takes a fresh angle on the report findings, focusing on the risks to IoT deployments and to the arrival of 5G technology. The 57-page report is free to download from this link. Europol has also launched a public awareness page about the problem.  

From ransom to recovery

Norsk Hydro, one of the world’s largest aluminium producers, unexpectedly became a security cause célèbre following a “severe” ransomware infection. After the LockerGoga variant encrypted data on the company’s facilities in the US and Europe, the company shut its global network, switched to manual operations at some of its plants, and stopped production in others.

Norsk Hydro said it planned to rely on its backups rather than paying the ransom. Through it all, the company issued regular updates, drawing widespread praise for its openness, communication and preparedness. Brian Honan wrote: “Norsk Hydro should be a case study in how to run an effective incident response. They were able to continue their business, although at a lower level, in spite of their key systems being offline. Their website contains great examples of how to provide updates to an issue and may serve as a template for how to respond to security breaches.”

Within a week, most of the company’s operations were back running at capacity. Norsk Hydro has released a video showing how it was able to recover. Other victims weren’t so lucky. F-Secure has a good analysis of the ransomware that did the damage, as does security researcher Kevin Beaumont.

Links we liked

Remember the Melissa virus? Congratulations, you’re old: that was 20 years ago. MORE

New trends in spam and phishing, whose popularity never seems to fade. MORE and MORE

For parents and guardians: videos to spark conversations with kids about online safety. MORE

A look behind online heists on Mexican banks that netted perpetrators nearly $20 million. MORE

While we’re on the subject, more cybercriminal tactics used against financial institutions. MORE

This is a useful high-level overview of the NIST cybersecurity framework. MORE

This campaign aims to hold tech giants to account for fixing security and privacy issues. MORE

How can security awareness programmes become more effective at reducing risk? MORE

An excellent security checklist for devices and accounts, courtesy of Bob Lord. MORE

Shodan Monitor alerts organisations when their IoT devices become exposed online. MORE

The post Security roundup: April 2019 appeared first on BH Consulting.

Equifax breach leads U.S. Senate to propose America draft its own GDPR

A US Senate report on an investigation into the monumental Equifax breach chastises the company for lax security, and proposes heading off similar incidents in the future by making American companies legally liable for mishandling personally identifiable information.

The 67-page report is replete with information on the 2017 incident, including the fact that Equifax was aware it had cybersecurity deficiencies as early as 2015. One statement in the report, though, could serve to summarize the investigators’ findings:

“Equifax was unable to detect attackers entering its networks because it failed to take the steps necessary to see incoming malicious traffic online.”

The Executive Summary is a few pages long, but it aggregates the key findings. Those curious to learn more can access the report here.

Perhaps the most interesting proposal in the Senate’s report would create an American version of the EU’s General Data Protection Regulation. In short, the breach has convinced some lawmakers that America needs its own unified legal framework for protecting the personally identifiable information of residents in all 50 states. Under Findings of Fact and Recommendations (page 11), the upper chamber of the legislature proposes the following:

“Congress should pass legislation that establishes a national uniform standard requiring private entities that collect and store PII to take reasonable and appropriate steps to prevent cyberattacks and data breaches. Several cybersecurity recommendations, including a widely known framework from NIST, already exist. However, the framework is not mandatory, and there is no federal law requiring private entities to take steps to protect PII.

Congress should pass legislation requiring private entities that suffer a data breach to notify affected consumers, law enforcement, and the appropriate federal regulatory agency without unreasonable delay. There is no national uniform standard requiring a private entity to notify affected individuals in the event of a data breach. All 50 states, the District of Columbia, Guam, Puerto Rico, and the Virgin Islands have enacted legislation requiring data breach notification laws. In the absence of a national standard, states have taken significantly different approaches to notification standards with different triggers for notifications and different timelines for notifying individuals whose information has been stolen or improperly disclosed.”

The report outlines some of this new law’s scope, such as forcing private entities to re-examine their data retention policies.

In related news, Senator Elizabeth Warren last week proposed a bill that would establish criminal liability for negligent executives of major corporations. The Corporate Executive Accountability Act seeks to fine and even imprison executives of companies that suffer data breaches or engage in scams. The act would apply to entities that turn over $1 billion or more annually.

Equifax’s blunder, revealed soon after the WannaCry and Petya ransomware pandemics that same year, has served as inspiration for legislators and corporations alike on a global scale. Two years after the incident, the repercussions are still palpable for the credit reporting agency, highlighting once again the importance of having the right tools and processes to keep hackers at bay.

HOTforSecurity: Equifax breach leads U.S. Senate to propose America draft its own GDPR

A US Senate report on an investigation into the monumental Equifax breach chastises the company for lax security, and proposes heading off similar incidents in the future – by making American companies punishable by law for mishandling personally identifiable information.

The 67-page report is replete with information on the 2017 incident, including that Equifax was aware it had cybersecurity deficiencies as early as 2015. One statement in the report, though, could serve to summarize the investigator’s findings:

“Equifax was unable to detect attackers entering its networks because it failed to take the steps necessary to see incoming malicious traffic online.”

The Executive Summary is a few pages long, but it aggregates the key findings. Those curious to learn more can access the report here.

For those tired of reading stories covering the incident, an interesting proposal in the Senate’s report would create an American version of the E.U.’s General Data Protection Regulation. In short, the breach has convinced some lawmakers that America needs its own unified legal framework for protecting personally identifiable information of residents in all 50 states. Under Findings of Fact and Recommendations (page 11), the upper chamber of the legislature proposes the following:

“Congress should pass legislation that establishes a national uniform standard requiring private entities that collect and store PII to take reasonable and appropriate steps to prevent cyberattacks and data breaches. Several cybersecurity recommendations, including a widely known framework from NIST, already exist. However, the framework is not mandatory, and there is no federal law requiring private entities to take steps to protect PII.

Congress should pass legislation requiring private entities that suffer a data breach to notify affected consumers, law enforcement, and the appropriate federal regulatory agency without unreasonable delay. There is no national uniform standard requiring a private entity to notify affected individuals in the event of a data breach. All 50 states, the District of Columbia, Guam, Puerto Rico, and the Virgin Islands have enacted legislation requiring data breach notification laws. In the absence of a national standard, states have taken significantly different approaches to notification standards with different triggers for notifications and different timelines for notifying individuals whose information has been stolen or improperly disclosed.”

The report outlines some of this new law’s scope, such as forcing private entities to re-examine their data retention policies.

In related news, outspoken politician Elizabeth Warren last week proposed an amendment that would establish criminal liability for negligent executive officers of major corporations. The Corporate Executive Accountability Act seeks to fine and even imprison executives of companies that suffer data breaches or engage in scams. The act would apply to entities that turn over $1 billion or more annually.

Equifax’s blunder, revealed soon after the WannaCry and Petya ransomware outbreaks that same year, has served as inspiration for legislators and corporations alike on a global scale. Two years after the incident, the repercussions are still palpable for the credit reporting agency, highlighting once again the importance of having the right tools and processes to keep hackers at bay.


WHOIS after GDPR: A quick recap for CISOs

2018 was a big year for data protection with the implementation of the General Data Protection Regulation (GDPR) last May — forcing CISOs and other professionals to rethink how the personal data of European consumers should be collected and processed. Taking a closer look at WHOIS in that context, the protocol gives access to public domain data, including TLDs and ccTLDs, as well as more personal information like the names and addresses of … More
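The post-GDPR shift is visible directly in WHOIS output: registrant fields that once carried names and addresses now often read “REDACTED FOR PRIVACY.” As an illustrative sketch (the helper name and the marker list are assumptions, not part of any WHOIS standard, and real registrars vary in their redaction wording), a CISO could scan a record for redacted fields like this:

```python
# Hypothetical helper: flag registrant fields that a post-GDPR WHOIS
# service has redacted. The placeholder strings follow common registrar
# conventions, but this list is an assumption, not a standard.
REDACTION_MARKERS = ("REDACTED FOR PRIVACY", "DATA REDACTED", "GDPR MASKED")

def redacted_fields(whois_text: str) -> list:
    """Return the names of WHOIS fields whose values look redacted."""
    redacted = []
    for line in whois_text.splitlines():
        if ":" not in line:
            continue
        field, _, value = line.partition(":")
        if value.strip().upper() in REDACTION_MARKERS:
            redacted.append(field.strip())
    return redacted

sample = """Domain Name: example.com
Registrant Name: REDACTED FOR PRIVACY
Registrant Organization: Example Corp
Registrant Email: REDACTED FOR PRIVACY"""

print(redacted_fields(sample))  # ['Registrant Name', 'Registrant Email']
```

A scan like this over a portfolio of domains can quantify how much registrant data is still publicly available after GDPR.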

The post WHOIS after GDPR: A quick recap for CISOs appeared first on Help Net Security.

79% of organizations want a federal privacy law amid lack of compliance

There is significant enthusiasm for a federal privacy law amid organizations’ inability to comply with data privacy rules stemming from both mushrooming government regulations and complex data sharing agreements between companies. Organizations are also overconfident in knowing where private data resides, and tend to use inadequate tools such as spreadsheets to track it. Integris Software’s 2019 Data Privacy Maturity Study gathered detailed responses from 258 mid to senior executives from IT, general … More

The post 79% of organizations want a federal privacy law amid lack of compliance appeared first on Help Net Security.

The global data privacy roadmap: a question of risk

For most American businesses, complying with US data privacy laws follows a somewhat linear, albeit lengthy, path. Set up a privacy policy, don’t lie to the consumer, and check the specific rules if you’re a health care provider, video streaming company, or kids’ app maker.

For American businesses that want to expand to a new market, though, complying with global data privacy laws is more akin to finding dozens of forks in the road, each one marked with an indecipherable signpost.

Should a company expand to China? That depends on whether the company wants to have its source code potentially analyzed by the Chinese government. Okay, what about South Korea? Well, is the company ready to pay three percent of its revenue for a wrongful data transfer, or to have one of its executives spend time behind bars?

Europe is an obvious market to capture, right? That’s true, but, depending on which country, the local data protection authorities could issue enormous fines for violating the General Data Protection Regulation.

What if a company just follows in the footsteps of the more established firms, like Google, Amazon, or Microsoft, which all opened data centers in Singapore in the past two years? Once again, the answer depends on the company. If it’s providing a service that Singapore considers “essential,” it will have to heed a new cybersecurity law there.

At this point, a company might think about entering a country with no data privacy laws. No laws, no getting in trouble, right? Wrong. Data privacy laws can sprout up seemingly overnight, and future compliance costs could severely cut into a company’s budget.

While this may appear overcomplicated, one guiding principle helps: If a company cannot afford to comply with a country’s data privacy laws, it probably should not expand to that country. The risk, which could be millions in penalties, might not outweigh the reward.

Today, for the third piece in our data privacy and cybersecurity blog series, which also took a look at current US data privacy laws and federal legislation on the floor, we explore the decision-making process of a mid-market-sized company that wants to expand its business outside the United States.

With the help of Reed Smith LLP counsel Xiaoyan Zhang, we looked at several notable data privacy laws in Europe, Asia, Latin America, the Middle East, and Africa.

Issue-spotting within a culturally-crafted landscape

Before a company expands into a new country, it should try to truly comprehend that country’s data privacy laws, Zhang said. This involves more than just reading the law; it requires adapting one’s thinking to an entirely different culture.

Unlike crimes such as manslaughter and robbery—which have near-universal definitions—data privacy violations fluctuate from region to region, Zhang said, with interpretations rooted in a country’s history, economy, public awareness, and opinions on privacy.

“Data privacy is not like murder, which is much more straightforward,” Zhang said. “Privacy law is very intimately tied into culture.”

So, while overseas concepts might appear familiar—like protecting “personally identifiable information” in the US and protecting “personal data” in the European Union—the culture behind those concepts varies.

For example, in the European Union, a history of fierce antitrust regulation and government enforcement helped usher GDPR’s passage. In fact, Austrian online privacy advocate Max Schrems—whose legal complaints against Facebook heavily influenced the final text of GDPR—remarked years ago that he was surprised at the lack of tall garden hedges around Americans’ homes. The country’s understanding of privacy, Schrems realized, was different from that of Austria, and so, too, were its data privacy laws.

Similarly, Zhang said she has fielded many questions from EU lawyers who assume that data privacy regulations around the world are similar to those in GDPR.

“EU lawyers are used to thinking that, for every data collection, there must be a legitimate purpose, and they insist on asking the same questions,” Zhang said. “When I’m talking about legal advice in China, they’ll say ‘Oh, our medical device needs to collect data from users, does China have any law or statutes that give us a legitimate business purpose to collect that data?’”

Zhang continued: “No. In China, you don’t need that. It’s totally different.”

The differences can be managed with the right help, though.

The safest path for market expansion is to rely on a global data privacy lawyer to “issue-spot” any obvious global compliance issues, Zhang said. These experts will look at what type of data a company handles—including medical, financial, geolocation, biometric, and others—what type of service the company performs, and whether the company will need to perform frequent cross-border data transfers. Depending on all these factors, each company’s individual roadmap for data privacy compliance will be unique.

However, Zhang led us on a bit of a world tour, detailing some of the notable data privacy laws in Europe, Asia, Africa, the Middle East, and Latin America. Company expansion into these markets, Zhang emphasized, depends on whether a company is ready for compliance.

Many countries, many laws


Europe

Starting with Europe, there is, of course, GDPR. Complying with the sweeping set of provisions is tricky because GDPR gives each EU member-state the authority to enforce the new data protection law on its own turf.

This enforcement is done through Data Protection Authorities (DPAs), which oversee, investigate, and issue fines for GDPR violations. Each member-state has its own DPA, and, in the months before GDPR’s implementation, the DPAs gave mixed signals about what local enforcement would look like.

France’s DPA, the National Data Protection Commission (CNIL), said that companies that are at least trying to comply with GDPR “can expect to be treated leniently initially, provided that they have acted in good faith.”

Less than one year later, though, that leniency met its limit. CNIL hit Google with the largest GDPR-violation fine on record, at roughly $57 million.

The best defense to these penalties, Zhang said, is to consult with local legal experts who know the region’s enforcement history and details.

“You cannot just seek consultation from a GDPR expert. If you want to go specifically to Germany, you need German lawyers who can offer insight on things that are specific to Germany,” Zhang said. “That’s for all of Europe.”

Latin America

Outside of Europe—but still inspired by GDPR—is Latin America. Zhang said several Latin American countries have enacted, or are considering, legislation that protects the data privacy rights of individuals.

In 2018, Brazil passed its comprehensive data protection law, which protects people’s personal information and includes tighter protections for sensitive information that discloses race, ethnicity, religion, political affiliation, and biometrics. Argentina also advanced privacy protections for its citizens, and it earned a special clearance in GDPR as a “whitelisted” party, meaning that personal data can be moved to Argentina from the EU without extra safeguards.


Asia

Moving to China, a whole new risk factor comes into play—surveillance.

China’s cybersecurity law grants the Chinese government broad, invasive powers to spy on Internet-related businesses that operate within the country. Implemented in 2017, the law allows China’s foreign intelligence agency to perform “national security reviews” on technology that foreign companies want to sell or offer in China.

This authority raised alarm bells for the researchers at Recorded Future, who attributed past cyberattacks directly to the Chinese government. Researchers said the law could give the Chinese government the power to both find and exploit zero-day vulnerabilities in foreign companies’ products, all for the price of admission into the Chinese market.

“China’s law has a hidden angle for government control and monitoring,” Zhang said. “It has a different rationale.”

Outside of China, Singapore has garnered the attention of Google, Microsoft, and Amazon, which all built data centers in the country in the past few years. The country passed its Personal Data Protection Act in 2012 and its Cybersecurity Act in 2018, the latter of which sets up a framework for monitoring cybersecurity threats in the country.

The law has a narrow scope, as it only applies to companies and organizations that control what the Singaporean government calls “critical information infrastructure,” or CII. This includes computer systems that manage banking, government, healthcare, and aviation services, among others. The law also includes data breach notification requirements.

Moving to South Korea, the risk for organizations goes up dramatically, Zhang said. The country’s Personal Information Protection Act preserves the privacy rights of its citizens, and its penalties include criminal and regulatory fines, and even jail time. Cross-border data transfers, in particular, are strictly guarded. One wrongful transfer can result in a fine of up to three percent of a company’s revenue.
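To put that penalty in perspective, a quick back-of-the-envelope calculation shows the exposure. The helper below is an illustrative simplification of the article’s figure—a flat 3% cap applied to annual revenue—not a legal formula; actual PIPA penalties depend on the regulator and the facts of the case.

```python
# Illustrative worst-case exposure under South Korea's PIPA for one
# wrongful cross-border data transfer. The 3% cap comes from the
# article; this is a sketch, not legal advice.
PIPA_FINE_CAP_RATE = 0.03  # up to 3% of annual revenue

def max_fine_exposure(annual_revenue: float,
                      rate: float = PIPA_FINE_CAP_RATE) -> float:
    """Worst-case fine for one wrongful transfer, as described above."""
    return annual_revenue * rate

# A company with $500M in revenue risks up to $15M per wrongful transfer.
print(f"${max_fine_exposure(500_000_000):,.0f}")  # $15,000,000
```

Run against a company’s actual revenue figures, even this crude estimate makes clear why Zhang flags South Korea as a dramatically higher-risk market.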


Africa

Traveling once again, expansion into Africa requires an understanding of the continent’s burgeoning, or sometimes non-existent, data privacy laws. Zhang said that, of Africa’s more than 50 countries, only about 15 have data protection laws, and even fewer have the regulators necessary to enforce those laws.

“Among [the countries], nine have no regulators to enforce the law, and five have a symbolic law but it’s not enforced,” Zhang said.

So, that raises the question: What exactly happens if a company expands into a country that doesn’t have any data privacy laws?

What happens is potentially more risk.

First, a country could actually develop and pass a data privacy law within years of a company’s expansion into its borders. It’s not unheard of—less than one year after Amazon announced its rollout into Bahrain, the country introduced its first comprehensive data privacy law. Second, compliance with the new data privacy law could be expensive, Zhang said, forcing a company into a tough situation where it might have to withdraw entirely from the new market.

“One common misconception is that if a country doesn’t have a law at all, it’s a good country to go to,” Zhang said. “You should think twice about whether that’s the case.”

Expand or not? It’s up to each company

There is no single roadmap for companies entering new markets outside the United States. Instead, there are multiple paths a company can take depending on its product, services, the data it collects, data it will need to move between borders, and its tolerance for risk.

The safest path, Zhang said, is to ask questions upfront. It is far better to make an informed decision about how to enter a market—even if compliance is costly—than to be surprised with fines or penalties later on.

The post The global data privacy roadmap: a question of risk appeared first on Malwarebytes Labs.

Putting trust back into your supply chain with a little help from blockchain

Ian Hume, GM for UK&I at Lenovo DCG, discusses the vital role blockchain is playing in optimising the efficiency, reliability and visibility of the supply chainBlockchain is not just confined

The post Putting trust back into your supply chain with a little help from blockchain appeared first on The Cyber Security Place.

US Congress proposes comprehensive federal data privacy legislation—finally

The United States might be the only country of its size—both in economy and population—to lack a comprehensive data privacy law protecting its citizens’ online lives.

That could change this year.

Never-ending cybersecurity breaches, recently-enacted international privacy laws, public outrage, and crisis after crisis from the world’s largest social media company have pushed US Senators and Representatives into rarely-charted territory: regulation.

Before Congressmembers’ desks are at least four federal bills that would change how companies handle and protect Americans’ private data. The bills seek better user privacy through increased transparency, oversight, fines, and liability, and, in the case of one bill, the possibility of jail time for dishonest tech executives.

Several US states are also considering comprehensive data privacy bills, taking inspiration from California, which passed its own law last year. If those state laws pass, a new wrinkle will be added to the broader country-wide debate: Should state privacy protections be respected or should one federal law supersede those rules?

This month, Malwarebytes Labs launched its limited blog series about data privacy and cybersecurity laws. In this second blog in the series, we explore five federal data privacy bills.

How we got here

For decades, Congress regulated data privacy based on single, sector-specific issues. Rather than writing laws to protect all types of data, they instead wrote laws to combat individual crises.

In the late 80s, that crisis was a Supreme Court nominee’s video rental history being leaked to the press, resulting in the Video Privacy Protection Act. In the late 90s, that crisis was the potential targeting of children online, resulting in the Children’s Online Privacy Protection Act. In the mid-2000s, the kidnapping and murder of a Kansas teenager prompted lawmakers to discuss lowering protections on GPS data held by cell phone providers. (The proposed bill failed passage multiple times.)

This reactive approach is just how Congress works, said Michelle Richardson, director of the data and privacy project at Center for Democracy and Technology (CDT).

“This country has generally allowed companies to do their thing until something goes quite wrong,” Richardson said. “It has to get worse before the US and its decision-makers and its cowboy personality feel ready to intervene.”

Today, Congress is again ready to intervene. The crisis at hand is two-fold.

First, data breaches of Yahoo, Uber, Equifax, Marriott, Target, the Sony PlayStation Network, Facebook, Anthem, JPMorgan Chase, and many more have resulted in Americans’ personally identifiable information being stolen or accessed by cybercriminals. This PII includes names, Social Security numbers, credit card numbers, passport numbers, dates of birth, account passwords, physical and email addresses, and even employment histories.

Second, even when a company hasn’t suffered a breach, Americans’ personal data has been misused or left astray. The FBI searched private company DNA databases. A period-tracking app shared its users’ pregnancy decisions and menstrual tracking information with Facebook. And political beliefs were reaped in an effort to sway a US presidential election.

Congress has concluded that user privacy can no longer be solely entrusted to America’s technology companies.

“The digital space can’t keep operating like the Wild West at the expense of our privacy,” said Amy Klobuchar, Democratic Senator of Minnesota and presidential candidate.

Data privacy legislation has huge support outside of Capitol Hill, too—from the public. Richardson said that, thanks to the work of researchers, journalists, and civil liberties advocates, the public better understands how their data moves from company to company.

“We don’t give nearly enough credit to civil media [outlets] and civil society [groups] for the research they’ve done into data practices and for giving people cold, hard facts about how their data is collected,” Richardson said.

That research has exposed not just personal data misuse, but also corporate irresponsibility.

Last year, Reuters showed that Facebook failed to fulfill its promise to control the wildfire-like spread of hate speech on its platform in Myanmar. The Intercept exposed Google’s plans to build a censored version of its online search tool in China, resulting in several employee departures and renewed questions about Google’s removal of its “Don’t Be Evil” tagline. ACLU showcased the failures in Amazon’s facial recognition software, revealing that the technology falsely matched 28 members of Congress with mugshots of arrestees.

Some US states have already responded.

Last year, Vermont passed a law regulating data brokers, and California passed its California Consumer Privacy Act. The law gives Californians the right to know which data is collected on them, whether that data is sold, the option to opt out of those sales, and the right to access that data. The law will take effect at the start of 2020.

In the meantime, other states are aiming to follow suit. Washington, Utah, and New York legislatures are all considering new laws that could give their residents better access to, and control over, the information that companies collect on them.

International data privacy law is even further ahead.

Last year, the European Union successfully completed its effort to pull together the data privacy laws of its 28 member-states into one cohesive package. The General Data Protection Regulation came into effect on May 25, 2018, and since then, it has produced lawsuits against Facebook and a record fine out of France against Google.

At home and abroad, regulation is in the air.

The proposals

Since last April, multiple US Senators have tried to take on the mantle of the public’s chief data privacy protector. Some tried to show their commitment to data privacy by asking Facebook CEO Mark Zuckerberg pointed questions during his Congressional testimony regarding the Cambridge Analytica scandal. One Senator—and presidential candidate—made a direct public appeal to break up Amazon, Google, and Facebook.

But in putting actual ideas onto paper, four Senators have emerged as frontrunners in America’s data privacy debate. Senators Klobuchar, Ron Wyden of Oregon, Marco Rubio of Florida, and Brian Schatz of Hawaii have directly sponsored individual, separate bills to protect Americans from opaque and unfair data collection.

Google, Facebook, Amazon, Apple, Microsoft, Yahoo, Uber, Netflix, and countless others could be affected by these proposals.

The bills ask for essentially the same thing: tighter controls on user data. Consequences often include higher fines from the Federal Trade Commission (FTC), which currently serves as the country’s primary data misuse regulator.

Sen. Klobuchar’s bill—the first of the four to be formally introduced in April 2018—would require certain companies to write their terms of service agreements in “language that is clear, concise, and well-organized.” It would also require companies to give users the right to access data collected on them (similar to California’s state bill and to GDPR), along with notifying users about a data breach within 72 hours.
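The 72-hour window is concrete enough to sketch: from the moment a breach is discovered, the clock runs for exactly three days. The helper below is illustrative only—the bill prescribes no implementation, and the names here are assumptions—but it shows how simple the deadline arithmetic is once discovery time is recorded with a timezone.

```python
from datetime import datetime, timedelta, timezone

# Sketch of the 72-hour notification window proposed in Sen. Klobuchar's
# bill (GDPR Article 33 imposes a similar window for notifying
# supervisory authorities). Names are illustrative, not statutory.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(discovered_at: datetime) -> datetime:
    """Latest time affected users must be notified after breach discovery."""
    return discovered_at + NOTIFICATION_WINDOW

# Example: breach discovered April 1, 2019 at 09:30 UTC.
discovered = datetime(2019, 4, 1, 9, 30, tzinfo=timezone.utc)
print(notification_deadline(discovered).isoformat())
# 2019-04-04T09:30:00+00:00
```

Using timezone-aware timestamps matters here: a deadline computed from a naive local time could be off by hours when a breach spans regulators in different jurisdictions.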

Sen. Rubio’s bill—the American Data Dissemination Act (ADD)—would require the FTC to write its own privacy recommendations for Congress to later approve. The ADD asks that the FTC’s rules closely align with the Privacy Act of 1974, which restricts how federal agencies collect, store, and share Americans’ personal information. If passed, the FTC would have up to 27 months to get its own recommendations approved.

The ADD would also “preempt”—meaning, it would nullify—current and upcoming state data privacy laws. If passed, companies would only need to comply with the FTC’s federal rules that Congress would later approve. California and Vermont would wave goodbye to their newly-passed laws, and Utah, Washington, and New York would likely shut down their own efforts.

But preemption could be a deal-breaker for free speech advocates, digital rights groups, and government representatives.

“Under the Rubio bill, Americans would not have their privacy protected,” said Center for Digital Democracy Executive Director Jeff Chester, in speaking to Bloomberg. “State preemption is a non-starter as far as the consumer and privacy groups community and their allies in Congress are concerned.”

In California, the state’s attorney general also pushed back.

“For those of you following debate over data #privacy, note: We oppose any attempt to pre-empt #California’s privacy laws…” wrote Sarah Lovenheim, communications advisor to California Attorney General Xavier Becerra.

The opposition to Sen. Rubio’s bill is compounded by its slow timeline, making it impossible for lawmakers to know what specific rules they could be asked to approve in two years’ time.

The ADD demands Congress make an unknown, gameshow-style choice: Keep the data privacy protections you have, or choose what’s behind Door Number Two?

Sen. Wyden’s bill—the Consumer Data Protection Act—sets itself apart as the only bill that includes jail time consequences.

Sen. Wyden’s bill would require data-collecting companies to deliver annual reports that detail their internal privacy-protecting efforts. Those reports would need to be signed and confirmed by a high-level company executive, like a CEO or CTO. But if those executives confirm a false report, they could face jail time, the bill proposes.

The Consumer Data Protection Act would also require the FTC to set up a “Do Not Track” website where Americans could register to opt out of online tracking and third-party data sharing. Companies that fail to comply with consumers’ wishes would face fines.

This “Do Not Track” proposal is far from perfect. If a company’s requirement to get user consent clashes with that user’s Do Not Track preferences, the bill proposes a harmful compromise: Put the services behind a price tag. Paying for privacy is wrong, and, even if the bill passes, companies should refuse to engage in such a dangerous practice.

Finally, there is Sen. Schatz’s Data Care Act, which relies on a novel interpretation of corporate responsibility. The bill equates the responsibility that doctors have to their patients’ information with the responsibility that technology companies should have to user data.

“Just as doctors and lawyers are expected to protect and responsibly use the personal data they hold, online companies should be required to do the same,” Sen. Schatz said in a press release.

The bill creates rules under five broad umbrellas—the “duty of care,” the “duty of loyalty,” the “duty of confidentiality,” federal and state enforcement, and rulemaking authority by the FTC to enforce the bill.

Fifteen Senators from both parties have signed on as co-sponsors, including Sen. Klobuchar. (Sens. Rubio and Wyden have not.) Several civil rights organizations, including Free Press, EFF, and CDT, have voiced support.

“We commend Senator Schatz for tackling the difficult task of drafting privacy legislation that focuses on routine data processing practices instead of consumer data self-management,” said CDT’s Richardson in a press release.

Here, Richardson is talking about something that she and the policy team at CDT find particularly important: consent. Many of today’s data privacy bills lean heavily on the idea that clearer terms of service and more notifications and more annual reports will somehow empower consumers to make the right choices for themselves when consenting to use online platforms.

But that’s unfair, Richardson said.

“[CDT’s] biggest concern is that a lot of these proposals are a notice-and-consent model. They look at these agreements we sign and say, ‘Maybe make them clearer,’ for example,” Richardson said. “That’s doubling down on our existing system, where it’s up to individuals to micromanage their relationships with hundreds, if not thousands of companies that touch their data every day.”

So, CDT—which routinely discusses already-authored legislation with Congressmembers—took a different approach. The organization wrote its own bill.

The bill’s rules are not built on consent. Instead, CDT’s bill focuses, Richardson said, on “what are the things you can’t sign away? What are your digital civil rights?”

CDT’s bill would give US persons—including residents—the rights to access, correct, and delete data that is collected on them, along with the right to take their personal data and move it somewhere else (which is similar to a right granted in the European Union’s GDPR). The bill would also require the FTC to investigate and write rules barring discriminatory practices in online advertising.

Companies affected by CDT’s bill would be given 30 days to put into place mechanisms for users to exercise their above rights. Also, if those companies license or sell personal information to third parties, they would need to assure that their third-party partners are practicing the same privacy commitments as the companies themselves.

Similar to Sen. Rubio’s bill, CDT’s bill would preempt state laws, but only those that focus on data privacy. Laws that deal with, say, consumer protection or data breaches would remain intact.

As to which federal bill will prevail—it’s a bit of a tossup. Passing a bill into law is never as easy as getting the best idea forward. Big Tech is sure to lobby against any bill that would cut into its business model, and civil liberties groups could, depending on the legislation, disagree with one another about the best path forward.

Until then, CDT thinks it is taking the right approach, removing the burden from users and instead protecting what their rights should look like in the future.

Richardson put it plainly: “This is a moment about having corporations treat us better.”

In our next blog in the series, we will look at data privacy compliance for businesses seeking to expand outside the US market.

The post US Congress proposes comprehensive federal data privacy legislation—finally appeared first on Malwarebytes Labs.

Washington D.C. takes a leaf from GDPR book, introduces new data privacy bill

The US capital region is on track to implement new regulations akin to the EU’s GDPR, the local government of Washington D.C. said in a press release. The law seeks to expand protections for residents’ personal data and includes new compliance requirements for entities handling data of D.C. residents.

Attorney General Karl A. Racine says D.C. residents have been among those recently hit by some of the most serious data breaches in history. The Equifax breach alone, which exposed personal information of over 143 million people, affected 350,000 District residents, he said.

“Data breaches and identity theft continue to pose major threats to District residents and consumers nationwide,” Racine said. “The District’s current data security law does not adequately protect residents. Today’s amendment will bolster the District’s ability to hold companies responsible when they collect and use vast amounts of consumer data and do not protect it. I urge the Council to pass this legislation quickly for the benefit of District residents.”

The Security Breach Protection Amendment Act of 2019 seeks to:

  • Expand the definition of personal information subject to legal protection, including passport numbers, military ID numbers, health and biometric data, and even genetic information.
  • Create new compliance requirements for companies that handle personal information, so as to provide identity theft protection if they expose Social Security numbers, and to inform customers of their rights when a breach occurs and their personal data is at risk.

The Office of the Attorney General would also become the go-to authority for reporting any violation of the District’s Consumer Protection Procedures Act, according to the news release. Readers can view the full bill here.


The privacy risks of pre-installed software on Android devices

Many pre-installed apps facilitate access to privileged data and resources without the average user being aware of their presence or being able to uninstall them. On the one hand, the permission model of the Android operating system and its apps allows a large number of actors to track and obtain personal user information. At the same time, it reveals that the end user is not aware of these actors in the Android terminals or of … More

The post The privacy risks of pre-installed software on Android devices appeared first on Help Net Security.

Businesses have cybersecurity best practice guidelines but fail in practice

Almost 70% of companies have cybersecurity best practice guidelines in place but neglect to take the necessary steps to secure their business. A staggering 44% of businesses admitted to not securing removable devices using anti-virus software, leaving their IT systems exposed to cybersecurity risks and GDPR fines, according to a new research conducted by ESET and Kingston Digital. The ESET and Kingston research looked at over 500 British business leaders to investigate how they are … More

The post Businesses have cybersecurity best practice guidelines but fail in practice appeared first on Help Net Security.

When is it fair to infer?

While the GDPR framework is robust in many respects, it struggles to provide adequate protection against the emerging risks associated with inferred data (sometimes called derived data, profiling data, or inferential data). Inferred data pose potentially significant risks in terms of privacy and/or discrimination, yet they would seem to receive the least protection of the personal data types prescribed by GDPR. Defined as assumptions or predictions about future behaviour, inferred data cannot be verified at the time of decision-making. Consequently, data subjects are often unable to predict, understand or refute these inferences, whilst their privacy rights, identity and reputation are impacted.

Reaching dangerous conclusions

Numerous applications drawing potentially troubling inferences have emerged. Facebook is reported to be able to infer protected attributes such as sexual orientation and race, as well as political opinions and the likelihood of a data subject attempting suicide. Facebook data has also been used by third parties to decide on loan eligibility, to infer political leanings, to predict views on social issues such as abortion, and to determine susceptibility to depression. Google has attempted to predict flu outbreaks, other diseases and medical outcomes. Microsoft can predict Parkinson’s and Alzheimer’s from search engine interactions. Target can predict pregnancy from purchase history, user satisfaction can be determined by mouse tracking, and China operates a social credit scoring system built on inferred data.

What protections does GDPR offer for inferred data?

The European Data Protection Board (EDPB) notes that both verifiable and unverifiable inferences are classified as personal data (for instance, the outcome of a medical assessment regarding a user’s health, or a risk management profile). However it is unclear whether the reasoning and processes that led to the inference are similarly classified. If inferences are deemed to be personal data, should the data protection rights enshrined in GDPR also equally apply?

The data subject’s right to be informed, right to rectification, right to object to processing, and right to portability are significantly reduced when data is not ‘provided by the data subject’. For example, the EDPB notes in its guidelines on the right to data portability that “though such data may be part of a profile kept by a data controller and are inferred or derived from the analysis of data provided by the data subject, these data will typically not be considered as ‘provided by the data subject’ and thus will not be within scope of this new right”.

The data subject can, however, still exercise their “right to obtain from the controller confirmation as to whether or not personal data concerning the data subject are being processed, and, where that is the case, access to the personal data” (Article 15). The data subject also has the right to information about “the existence of automated decision-making, including profiling” (Article 22(1) and (4)), along with “meaningful information about the logic involved, as well as the significance and consequences of such processing”. However, the data subject must actively make such an access request, and if the organisation does not provide the data, how will the data subject know that derived or inferred data is missing from the response?

A data subject can also object to direct marketing based on profiling and have it stopped. However, there is no obligation on the controller to inform the data subject that any profiling is taking place, unless it produces legal or similarly significant effects on the data subject.

No answer just yet…

Addressing the challenges and tensions of inferred and derived data will necessitate further case law on the interpretation of “personal data”, particularly regarding interpretations of GDPR. Future case law on the meaning of “legal effects… or similarly significantly affects”, in the context of profiling, would also be helpful. It would also seem reasonable to suggest that, where possible, data subjects should be informed at the point of collection that data is derived by the organisation, and for what purposes. If data subjects don’t know that an organisation uses their data to infer new data, they cannot fully exercise their data subject rights, since they won’t know that such data exists.

In the meantime, it seems reasonable to suggest that an inference which has been clearly disclosed to the data subject, is benevolent in its intentions, and offers the data subject positive added value is ‘fair’.

The post When is it fair to infer? appeared first on BH Consulting.

Unsurprisingly, only 14% of companies are compliant with CCPA

With less than 10 months before the California Consumer Privacy Act (CCPA) goes into effect, only 14% of companies are compliant with CCPA and 44% have not yet started the implementation process. Of companies that have worked on GDPR compliance, 21% are compliant with CCPA, compared to only 6% for companies that did not work on GDPR, according to the TrustArc survey conducted by Dimensional Research. “At TrustArc, we’ve seen a significant increase in the … More

The post Unsurprisingly, only 14% of companies are compliant with CCPA appeared first on Help Net Security.

Data breach reports delayed as organizations struggle to achieve GDPR compliance

Businesses routinely delayed data breach disclosure and failed to provide important details to the ICO in the year prior to the GDPR’s enactment. On average, businesses waited three weeks after discovery to report a breach to the ICO, while the worst offending organization waited 142 days. The vast majority (91%) of reports to the ICO failed to include important information such as the impact of the breach, recovery process and dates, according to Redscan’s … More

The post Data breach reports delayed as organizations struggle to achieve GDPR compliance appeared first on Help Net Security.

Whack-a-Fraud: EU’s Crackdown Could Increase U.S. Payments Scams

U.S. providers should be "on alert" for an increase in payments fraud, experts warn. The European Union's (EU's) new Payment Services Directive (PSD2) raises the bar for security and may cause cybercriminals to focus on targets in this country.

The post Whack-a-Fraud: EU’s Crackdown Could Increase U.S. Payments Scams appeared first on ...


Five data protection tips from the DPC’s annual report

The first post-GDPR report from the Data Protection Commission makes for interesting reading. The data breach statistics understandably got plenty of coverage, but there were also many pointers for good data protection practice. I’ve identified five of them which I’ll outline in this blog.

Between 25 May and 31 December 2018, the DPC recorded 3,542 valid data security breaches. (For the record, the total for the full calendar year was 4,740.) That full-year figure represents a 70 per cent increase on the 2,795 valid data security breaches reported in 2017, and public complaints rose by 56 per cent over the same period.

1. Watch that auto-fill!

By far the largest single category was “unauthorised disclosures”, accounting for 3,134 of the total. Delving further, we find that many of the complaints to the DPC relate to unauthorised disclosure of personal data in an electronic context. In other words, an employee at a company or public sector agency sent an email containing personal data to the wrong recipient.

Data breaches in Ireland during 2018 and their causes

A case study on page 21 of the report illustrates this point: a data subject complained to the DPC after their web-chat with a Ryanair employee “was accidentally disclosed by Ryanair in an email to another individual who had also used the Ryanair web-chat service. The transcript of the webchat contained details of the complainant’s name and that of his partner, his email address, phone number and flight plans”.

It’s a common misconception that human error doesn’t count as a data breach; in the eyes of GDPR, it does. One of the most common causes of breaches like this is the auto-fill function in software applications such as email clients.

Where an organisation deals with high-risk data like healthcare information (because of the sensitivity involved), best practice is to disable auto-fill. I recommend this step to many of my clients. Many organisations don’t like doing this because it disrupts staff and makes their jobs a little bit harder. In my experience, employees soon get used to the inconvenience, while organisations greatly reduce their chances of a breach.

2. Encrypted messaging may not be OK

Another misconception I hear a lot is that it’s OK to use WhatsApp as a messaging tool because it’s encrypted. The case study on page 19 of the DPC report clarifies this position. A complainant claimed the Department of Foreign Affairs and Trade’s Egypt mission had shared his personal data with a third party (his employer) without his knowledge. A staff member at the mission was checking the validity of a document and the employer had no email address, so they sent a supporting document via WhatsApp.

In this case, the DPC “was satisfied that given the lack of any other secure means to contact the official in question, the transmission via WhatsApp was necessary to process the personal data for the purpose provided (visa eligibility)”.

My reading of this is that although the DPC ruled that WhatsApp was sufficient in this case, this was only because no other secure means of communication was available.

3. Do you need a DPO?

The report tells us that there were 900 Data Protection Officers appointed between 25 May and 31 December 2018. My eyes were immediately drawn to some text accompanying that graph (below). “During 2019, the DPC plans to undertake a programme of work communicating with relevant organisations regarding their obligations under the GDPR to designate a DPO.” This suggests to me that the DPC doesn’t believe there are enough DPOs, hence the outreach and awareness-raising efforts.

Notifications of new DPOs between 25 May and 31 December 2018

Private and public organisations will need to decide whether they should appoint a full-time DPO or avail of a service-model from a third-party data protection specialist.

4. A data protection policy is not a ‘get out of jail free’ card

Case study 9 from the report concerns an employee of a public-sector body who lost an unencrypted USB device. The device contained personal information belonging to a number of colleagues and service users. The data controller had policies and procedures in place that prohibited the removal and storage of personal data on unencrypted devices. But the DPC found that it “lacked the appropriate oversight and supervision necessary to ensure that its rules were complied with”.

The lesson I take from this is that “user error” is not a convenient shield for all data protection shortcomings. Many organisations expended effort last year in writing policies, and some think they’re covered from sanction because they did so. But unless they implement and enforce the policy – and provide training to staff about it – then it’s not enough.

5. Email marketing penalties may change

My final point is more of an observation than advice. Between 25 May and 31 December, the DPC prosecuted five entities for 30 offences involving email marketing. The report details those cases. A recurring theme is that the fines were mostly in the region of a couple of thousand euro. However, all of these cases began before GDPR was in force; since then, the DPC has had the power to levy fines directly rather than going through the courts. This is an area I expect the DPC to address. Any organisation that took a calculated risk in the past because the fines were low should not expect this situation to continue.

There are plenty of other interesting points in the 104-page report, which is free to download here.

The post Five data protection tips from the DPC’s annual report appeared first on BH Consulting.

Explained: Payment Service Directive 2 (PSD2)

Payment Service Directive 2 (PSD2) is the implementation of a European guideline designed to further harmonize money transfers inside the EU. The ultimate goal of this directive is to simplify payments across borders so that it’s as easy as transferring money within the same country. Since the EU was set up to diminish the borders between its member states, this makes sense. The implementation offers a legal framework for all payments made within the EU.

After the introduction of PSD in 2009, and with the Single Euro Payments Area (SEPA) migration completed, the EU introduced PSD2 on January 13, 2018. However, this new harmonizing plan came with a catch: it allows third parties, such as financial institutions, to offer new online payment and account information services, which requires them to access the bank accounts of EU users. While they first need to obtain users’ consent to do so, we all know consent is not always freely given, or given with a full understanding of the implications. Still, it must be noted: nothing will change if you don’t give your consent, and you are not obliged to do so.

Which providers?

Before these institutions are allowed to ask for consent, they have to be authorized and registered under PSD2. The directive sets out information requirements both for applying for authorization as a payment institution and for registering as an account information services provider (AISP). The European Banking Authority (EBA) has published guidelines on the information to be provided by applicants intending to obtain authorization as payment and electronic money institutions, as well as to register as an AISP.

From the pages of the Dutch National Bank (De Nederlandsche Bank):

“In this register are also (foreign) Account information service providers based upon the European Passport. These Account information service providers are supervised by the home supervisor. Account information service providers from other countries of the European Economic Area (EEA) could issue Account information services based upon the European Passport through an Agent in the Netherlands. DNB registers these agents of foreign Account information service providers without obligation to register. The registration of these agents are an extra service to the public. However the possibility may exist that the registration of incoming agents differs from the registration of the home supervisor.”

So, an AISP can obtain a European Passport to conduct its services across the entire EU, while only being obligated to register in its country of origin. And even though the European Union is supposed to be equal across the board, the reality is, in some countries, it’s easier to worm yourself into a comfortable position than in others.

Access to bank account = more services

Wait a minute. What exactly does all of this mean? Third parties often live under a separate set of rules and are not always subject to the same scrutiny. (Case in point: AISPs can move to register in “easier” countries and get away with much more.) So while that offers an AISP better flexibility to provide smooth transfer services, it would also allow those payment institutions to offer new services based on their view into your bank account. That includes a wealth of information, such as:

  • How much money is coming into and out of the account each month
  • Spending habits: what you spend money on and where you spend it
  • Payment habits: Are you paying bills way ahead of deadline or tardy?

AISPs can check your balance, request your bank to initiate a payment (transfer) on your behalf, or create a comprehensive overview of your balances for you.

Simple example: There is an AISP service that keeps tabs on your payments and income and shows you how much you can spend freely until your next payment is expected to come in. This is useful information to have when you are wondering if you can make your money last until the end of the month if you buy that dress.
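The arithmetic behind such a “safe to spend” service is straightforward once the AISP can see your account. A minimal sketch of the idea, with hypothetical figures and a function name of our own invention (no real AISP’s API is implied):

```python
def safe_to_spend(balance, upcoming_bills):
    """Roughly how much can be spent freely before the next expected income:
    the current balance minus bills already committed, floored at zero."""
    committed = sum(upcoming_bills)
    return max(balance - committed, 0.0)

# Example: EUR 1,200 in the account, EUR 450 rent and EUR 80 utilities still due
print(safe_to_spend(1200.0, [450.0, 80.0]))  # 670.0
```

A real service would of course infer the upcoming bills and the income date from transaction history, which is exactly the window into spending habits described above.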

However, imagine this information in the hands of a commercial party that wants to sell you something. They would be able to figure out how much you are spending with their competitors and make you a better offer. Or pepper you with ads tailored to your spending habits. Is that a problem? Yes, because why did you choose your current provider in the first place? Better service or product? Customer friendliness? Exactly what you needed? In short, the competitor might use your information to help themselves, and not necessarily you.

What is worrying about PSD2?

Consumer consent is a good thing. But if we can learn from history, as we should, it will not be too long before consumers are being tricked into clicking a big green button that gives a less trustworthy provider access to their banking information. Maybe they don’t even have to click it themselves. We can imagine Man-in-the-Middle attacks that sign you up for such a service.

Any offer of a service that requires your consent to access banking information should be carefully examined. How will AISPs that work for free make money? Likely by advertising to you or selling your data.

And then there is the possibility of “soft extortion,” like a mortgage provider that doesn’t want to do business with you unless you provide them with access to your banking information. Or will offer you a better deal if you do.

In all of these scenarios, consent was given in one way or another, but is the deal really all that beneficial for the customer?

What we’d like to see

Some of the points below may already be under consideration in some or all of the EU member states, but we think they offer a good framework for the implementation of these new services.

  • We only want AISPs that work for the consumer and not for commercial third parties. Ideally, the consumer pays the AISP for its services, so the abuse or misuse seen in “free” product business models does not take place.
  • AISPs that want to do business in a country should be registered in that country, as well as in other countries where they want to do business.
  • AISPs should be constantly monitored, with the option to revoke their license if they misbehave. Note that GDPR already requires companies to delete data after services have stopped or when consent is withdrawn.
  • Access to banking information should not be used as a requirement for unrelated business models, or be traded for a discount on certain products.
  • GDPR regulations should be applied with extra care in this sensitive area. Some data- and privacy-related bodies have already expressed concerns about the discrepancies between GDPR and PSD2, even though they come from the same source.
  • An obligatory double-check by the AISP, through another medium, that the customer has signed up of their own free will, with a cooling-off period during which they can withdraw the permission.

Would anyone consent to PSD2 access?

For the moment, it’s hard to imagine a reason for allowing another financial institution or other business access to personal banking information. But despite the obvious red flags, it’s possible that people might be convinced with discounts, denials of service, or appealing benefits to give their consent.

And some of our wishes could very well be implemented as some kinks are still being ironed out. The Dutch Data Protection Authority (DPA) has pointed out that there are discrepancies between GDPR and PSD2 and expressed their concern about them. The DPA acknowledges this in their recommendation on the Implementation Act, and most recently in the Implementation Decree.

In both recommendations, the DPA concludes, in essence, that the GDPR was not adequately taken into consideration in the course of the Dutch implementation of PSD2. The same may happen in other EU member states. Of course, the financial world tells us that licenses will not be issued to just anybody, but the public has not entirely forgotten the global 2008 banking crisis.

On top of that, there are major lawsuits in progress against insurance companies and other companies that sold products constructed in a way the general public could not possibly understand. These products are now considered misleading, and some even fraudulent. To put it mildly, the trust of the European public in financials is not high at the moment.

And we are not just looking at traditional financials.

Did you know that Google has obtained an eMoney license in Lithuania and that Facebook did the same in Ireland?

Are you worried now? To be fair, all of these concerns have been brought up before, and the general consensus is that the regulations are strict enough to ensure that PSD2 will only admit trustworthy partners that have been vetted and will be monitored by the authorities.

Nevertheless, you can rest assured that we will keep an eye on this development. When the time comes that PSD2 is introduced to the public, it might also turn out to be a subject that phishers are interested in. We can already imagine the “Thank you for allowing us to access your bank account; click here to revoke permission” email buried in junk mail.

Stay safe, everyone!

The post Explained: Payment Service Directive 2 (PSD2) appeared first on Malwarebytes Labs.

Learning from the Big Data Breaches of 2018

Guest article by Cybersecurity Professionals

What can we learn from the major data breaches of 2018?
2018 was a major year for cybersecurity. With the introduction of GDPR, the public’s awareness of their cyber identities has vastly increased – and the threat of vulnerability along with it. The Information Commissioner’s Office received an increased number of complaints this year, and the news was filled with reports of multinational, multi-million-pound businesses suffering dramatic breaches at the hands of cybercriminals.

2018 Data Breaches
Notable breaches last year include:

5. British Airways
The card details of 380,000 customers were left vulnerable after a hack affected bookings on BA’s website and app. The company insists that no customer’s card details have been used illegally, but it is expected to suffer major losses in revenue and fines as a result of the attack.

4. T-Mobile
Almost 2 million users had their personal data, including billing information and email addresses, accessed through an API by an international group of hackers last August.

3. Timehop
A vulnerability in the app’s cloud computing account meant that the names and contact details of 21 million Timehop users were exposed. The company assured users that “memories” were only shared on the day and deleted afterwards, meaning the hackers were not able to access users’ Facebook and Twitter history.

2. Facebook & Cambridge Analytica
One of the most sensationalised news stories of the last year, Facebook suffered a string of scandals after it was revealed that analytics firm Cambridge Analytica had used the Facebook profile data of 87 million users in an attempt to influence President Trump’s campaign and potentially aid the Vote Leave campaign in the UK-EU referendum.

1. Quora
After a “malicious third party” accessed Quora’s system, the account information, including passwords, names and email addresses, of 100 million users was compromised. The breach was discovered in November 2018.

As the UK made the switch from the Data Protection Act to GDPR, businesses and internet users across the country suddenly became more aware of their internet identities and their rights pertaining to how businesses handled their information.

With the responsibility now firmly on the business to protect the data of UK citizens, companies are expected to keep a much higher standard of security in order to protect all personal data of their clients.

How many complaints to the ICO?
Elizabeth Denham, the UK’s Information Commissioner, said that the year 2017-18 was ‘one of increasing activity and challenging actions, some unexpected, for the office’.

This is shown in a 15% increase in data protection complaints, as well as a 30% increase in self-reported breaches. Since this is the first year of GDPR, the rise in self-reported breaches is expected, as businesses seek to protect themselves against the much higher fines now levied for delaying disclosure.

The ICO also reports 19 criminal prosecutions and 18 convictions last year, and fines totalling £1.29 million for serious security failures under the Data Protection Act 1998. The office has said it does not intend to make an example of firms reporting data breaches in the early period of GDPR, but as time goes on, leniency is likely to fade as businesses settle into the higher standards.

What does it mean for SMEs?
With 36% of SMEs having no cybersecurity plan, the general consensus is that they make unpopular targets. However, under GDPR the responsibility is on the business to protect its data, so being vulnerable could result in business-destroying costs. Considering a fine could total the higher of 2% of annual turnover or €10 million, data protection is of paramount importance to small businesses.
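That “higher of” rule is worth spelling out, because for small firms it is the €10 million floor, not the 2% figure, that bites. A quick illustration with hypothetical turnover figures (this models only the lower fine tier mentioned above; GDPR also has an upper tier of 4% or €20 million):

```python
def gdpr_lower_tier_cap(annual_turnover_eur):
    """Maximum fine under GDPR's lower tier: the greater of
    EUR 10 million or 2% of worldwide annual turnover."""
    return max(10_000_000, 0.02 * annual_turnover_eur)

# A small firm with EUR 5M turnover still faces the EUR 10M cap;
# a large firm with EUR 2bn turnover faces 2%, i.e. EUR 40M.
print(gdpr_lower_tier_cap(5_000_000))      # 10000000
print(gdpr_lower_tier_cap(2_000_000_000))  # 40000000.0
```

For an SME turning over €5 million, the cap is two thousand times its 2% figure, which is why the fine regime looms so large for small businesses.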

How exposed are we in the UK?
At 31%, the UK’s vulnerability rating is higher than those of the Netherlands, Germany and Estonia (30%) and Finland (29%), and the UK is a more likely target for cybercriminals looking to exploit the high-tech and financial services industries, which are among the most vulnerable across Great Britain.

Despite a higher level of vulnerability, the UK has one of the largest cyber security talent pools, showing there is time and manpower being dedicated to the protection of our data online.

The not-so-definitive guide to cybersecurity and data privacy laws

US cybersecurity and data privacy laws are, to put it lightly, a mess.

Years of piecemeal legislation, Supreme Court decisions, and government surveillance crises, along with repeated corporate failures to protect user data, have created a legal landscape that is, for the American public and American businesses, confusing, complicated, and downright annoying.

Businesses are expected to comply with data privacy laws based on the data’s type. For instance, there’s a law protecting health and medical information, another law protecting information belonging to children, and another law protecting video rental records. (Seriously, there is.) Confusingly, though, some of those laws only apply to certain types of businesses, rather than just certain types of data.

Law enforcement agencies and the intelligence community, on the other hand, are expected to comply with a different framework that sometimes separates data based on “content” and “non-content.” For instance, there’s a law protecting phone call conversations, but another law protects the actual numbers dialed on the keypad.

And even when data appears similar, its protections may differ. GPS location data might, for example, receive a different protection if it is held with a cell phone provider versus whether it was willfully uploaded through an online location “check-in” service or through a fitness app that lets users share jogging routes.

Congress could streamline this disjointed network by passing comprehensive federal data privacy legislation; however, questions remain about regulatory enforcement and whether states’ individual data privacy laws will be either respected or steamrolled in the process.

To better understand the current field, Malwarebytes is launching a limited blog series about data privacy and cybersecurity laws in the United States. We will cover business compliance, sectoral legislation, government surveillance, and upcoming federal legislation.

Below is our first blog in the series. It explores data privacy compliance in the United States today from the perspective of a startup.

A startup’s tale—data privacy laws abound

Every year, countless individuals travel to Silicon Valley to join the 21st century Gold Rush, staking claims not along the coastline, but up and down Sand Hill Road, where striking it rich means bringing in some serious venture capital financing.

But before any fledgling startup can become the next Facebook, Uber, Google, or Airbnb, it must comply with a wide, sometimes-dizzying array of data privacy laws.

Luckily, there are data privacy lawyers to help.

We spoke with D. Reed Freeman Jr., cybersecurity and privacy practice co-chair at the Washington, D.C.-based law firm Wilmer Cutler Pickering Hale and Dorr, about what a hypothetical, data-collecting startup would need to do to become compliant with current US data privacy laws. What does its roadmap look like?

Our hypothetical startup is based in San Francisco and focused entirely on the US market. The company has developed an app that collects users’ data to improve the app’s performance and, potentially, deliver targeted ads in the future.

This is not an exhaustive list of every data privacy law that a company must consider for data privacy compliance in the US. Instead, it is a snapshot, providing information and answers to some of the most common questions today.

The startup’s online privacy policy

To kick off data privacy compliance on the right foot, Freeman said the startup needs to write and post a clear and truthful privacy policy online, as defined in the 2004 California Online Privacy Protection Act.

The law requires businesses and commercial website operators that collect personally identifiable information to post a clear, easily-accessible privacy policy online. These privacy policies must detail the types of information collected from users, the types of information that may be shared with third parties, the effective date of the privacy policy, and the process—if any—for a user to review and request changes to their collected information.

Privacy policies must also include information about how a company responds to “Do Not Track” requests, which are web browser settings meant to prevent a user from being tracked online. The efficacy of these settings is debated, and Apple recently decommissioned the feature in its Safari browser.

Freeman said companies don’t need to worry about honoring “Do Not Track” requests as much as they should worry about complying with the law.

“It’s okay to say ‘We don’t,’” Freeman said, “but you have to say something.”
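Mechanically, a browser with the setting enabled sends a `DNT: 1` request header; the obligation under the California law is only to disclose how, or whether, the site responds to it. A minimal server-side sketch of reading that header, using a plain dictionary of request headers rather than any particular web framework:

```python
def dnt_requested(headers):
    """Return True if the client sent the Do Not Track header with value '1'.
    Header names are matched case-insensitively, as HTTP requires."""
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("dnt") == "1"

print(dnt_requested({"DNT": "1", "User-Agent": "ExampleBrowser"}))  # True
print(dnt_requested({"User-Agent": "ExampleBrowser"}))              # False
```

Whether a site then suppresses tracking is a policy decision; as Freeman notes, the legal requirement is to say something about it in the privacy policy.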

The law covers more than what to say in a privacy policy. It also covers how prominently a company must display it. According to the law, privacy policies must be “conspicuously posted” on a website.

More than 10 years ago, Google tried to test that interpretation and later backed down. Following a 2007 New York Times report that revealed that the company’s privacy policy was at least two clicks away from the home page, multiple privacy rights organizations sent a letter to then-CEO Eric Schmidt, urging the company to more proactively comply.

“Google’s reluctance to post a link to its privacy policy on its homepage is alarming,” the letter said, which was signed by the American Civil Liberties Union, Center for Digital Democracy, and Electronic Frontier Foundation. “We urge you to comply with the California Online Privacy Protection Act and the widespread practice for commercial web sites as soon as possible.”

The letter worked. Today, users can click the “Privacy” link on the search giant’s home page.

What About COPPA and HIPAA?

The startup, like any nimble Silicon Valley startup, is ready to pivot. At one point in its growth, it considered becoming a health tracking and fitness app, meaning it would collect users’ heart rates, sleep regimens, water intake, exercise routines, and even their GPS location for selected jogging and cycling routes. It also once considered pivoting into mobile gaming, developing an app that isn’t made for children but could still be downloaded onto children’s devices and played by kids.

The startup’s founder is familiar with at least two federal data privacy laws: the Health Insurance Portability and Accountability Act (HIPAA), which regulates medical information, and the Children’s Online Privacy Protection Act (COPPA), which regulates information belonging to children. She wants to know: if her company starts collecting health-related information, will it need to comply with HIPAA?

Not so, Freeman said.

“HIPAA, the way it’s laid out, doesn’t cover all medical information,” Freeman said. “That is a common misunderstanding.”

Instead, Freeman said, HIPAA only applies to three types of businesses: health care providers (like doctors, clinics, dentists, and pharmacies), health plans (like health insurance companies and HMOs), and health care clearinghouses (like billing services that process nonstandard health care information).

Without fitting any of those descriptions, the startup doesn’t have to worry about HIPAA compliance.

As for complying with COPPA, Freeman called the law “complicated” and “very hard to comply with.” Attached to a massive omnibus bill at the close of the 1998 legislative session, COPPA is a law that “nobody knew was there until it passed,” Freeman said.

That said, COPPA’s scope is easy to understand.

“Some things are simple,” Freeman said. “You are regulated by Congress and obliged to comply with its byzantine requirements if your website is either directed to children under the age of 13, or you have actual knowledge that you’re collecting information from children under the age of 13.”

That raises the question: What counts as a website directed to children? According to Freeman, the Federal Trade Commission created a rule that helps answer it.

“Things like animations on the site, language that looks like it’s geared towards children, a variety of factors that are intuitive are taken into account,” Freeman said.

Other factors include a website’s subject matter, its music, the age of its models, the display of “child-oriented activities,” and the presence of any child celebrities.
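These FTC factors can be thought of as a checklist. The sketch below is purely illustrative: the factor names and the any-match threshold are assumptions for demonstration, not the FTC's actual rule, which is a totality-of-circumstances judgment.

```python
# Illustrative checklist of "directed to children" factors.
# Factor names and the numeric threshold are hypothetical, not the FTC's rule.

CHILD_DIRECTED_FACTORS = [
    "animations",
    "child_oriented_language",
    "child_subject_matter",
    "childrens_music",
    "young_models",
    "child_oriented_activities",
    "child_celebrities",
]

def coppa_factors_present(site: dict) -> list:
    """Return which child-directed factors a site description exhibits."""
    return [f for f in CHILD_DIRECTED_FACTORS if site.get(f)]

def likely_child_directed(site: dict, threshold: int = 2) -> bool:
    """Crude heuristic: multiple factors together suggest a closer legal look.

    The real determination is made by the FTC on the whole picture,
    not by counting factors against a threshold.
    """
    return len(coppa_factors_present(site)) >= threshold
```

A site description such as `{"animations": True, "childrens_music": True}` would trip the heuristic, flagging it for proper legal review.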

Because the startup is not making a child-targeted app, and it does not knowingly collect information from children under the age of 13, it does not have to comply with COPPA.

A quick note on GDPR

No concern about data privacy compliance is complete without bringing up the European Union’s General Data Protection Regulation (GDPR). Passed in 2016 and in effect since last year, GDPR regulates how companies collect, store, use, and share the personal information of people in the EU. On the day GDPR took effect, countless Americans received email after email about updated privacy policies, often from companies founded in the United States.

The startup’s founder is worried. She might have EU users, but she isn’t certain. Do those users force her to become GDPR compliant?

“That’s a common misperception,” Freeman said. He said one section of GDPR explains this topic, which he called “extraterritorial application.” Or, to put it a little more clearly, Freeman said: “If you’re a US company, when does GDPR reach out and grab you?”

GDPR affects companies around the world depending on three factors. First, whether the company is established within the EU, either through employees, offices, or equipment. Second, whether the company directly markets or communicates to EU residents. Third, whether the company monitors the behavior of EU residents.

“Number three is what trips people up,” Freeman said. He said that US websites and apps—including those operated by companies without a physical EU presence—must still comply with GDPR if they specifically track users’ behavior that takes place in the EU.

“If you have an analytics service or network, or pixels on your website, or you drop cookies on EU residents’ machines that tracks their behavior,” that could all count as monitoring the behavior of EU residents, Freeman said.

Because those services are rather common, Freeman said many companies have already found a solution. Rather than dismantling an entire analytics operation, companies can instead capture the IP addresses of users visiting their websites. The companies then perform a reverse geolocation lookup. If the companies find any IP addresses associated with an EU location, they screen out the users behind those addresses to prevent online tracking.
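A minimal sketch of that screening step might look like the following. The `geolocate` function is a hypothetical stand-in for a real reverse geolocation lookup (production systems would query a GeoIP database or service), and the country list covers only EU member states:

```python
# Sketch: disable tracking for visitors whose IP geolocates to the EU.
# geolocate() is a hypothetical stand-in, not a real API.

EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SK", "SI", "ES", "SE",
}

def geolocate(ip_address: str) -> str:
    """Stand-in for a reverse geolocation lookup against a GeoIP database."""
    # A real implementation would query a GeoIP provider here.
    sample = {"203.0.113.7": "US", "198.51.100.9": "FR"}
    return sample.get(ip_address, "UNKNOWN")

def tracking_allowed(ip_address: str) -> bool:
    """Only enable analytics/cookies for visitors outside known EU locations."""
    country = geolocate(ip_address)
    # This simple version blocks only IPs that positively geolocate to the EU;
    # a cautious deployment might also block unknown locations.
    return country not in EU_COUNTRIES
```

The gating decision would run before any analytics script or tracking pixel is served, so EU visitors never have their behavior monitored in the first place.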

Asked whether this setup has been proven to protect against GDPR regulators, Freeman instead said that these steps showcase an understanding and a concern for the law. That concern, he said, should hold up against scrutiny.

“If you’re a startup and an EU regulator initiates an investigation, and you show you’ve done everything you can to avoid tracking—that you get it, you know the law—my hope would be that most reasonable regulators would not take a Draconian action against you,” Freeman said. “You’ve done the best you can to avoid the thing that is regulated, which is the track.”

A data breach law for every state

The startup has a clearly posted privacy policy. It knows about HIPAA and COPPA, and it has a plan for GDPR. Everything is going well…until it isn’t. The startup suffers a data breach.

Depending on which data was taken and whom it referred to, the startup will need to comply with the many requirements laid out in California’s data breach notification law. There are rules on when the law is triggered, what counts as a breach, who to notify, and what to tell them.

The law protects Californians’ “personal information,” which it defines as a combination of information. For instance, a first and last name plus a Social Security number count as personal information. So do a first initial and last name plus a driver’s license number, or a first and last name plus any past medical insurance claims, or medical diagnoses. A Californian’s username and associated password also qualify as “personal information,” according to the law.
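As an illustration only (not legal advice), the statute's "name plus data element" structure can be expressed as a simple check. The field names below are hypothetical, and the real statute enumerates more data elements than this sketch covers:

```python
# Illustrative sketch of California's "name + data element" definition.
# Field names are hypothetical; the statute, not this code, is authoritative.

SENSITIVE_ELEMENTS = {
    "ssn",               # Social Security number
    "drivers_license",   # driver's license number
    "medical_info",      # past insurance claims, medical diagnoses
}

def is_personal_information(record: dict) -> bool:
    """True if the record pairs a name with a protected data element."""
    has_name = bool(record.get("first_name") or record.get("first_initial")) \
        and bool(record.get("last_name"))
    has_element = any(record.get(field) for field in SENSITIVE_ELEMENTS)
    # A username plus its password qualifies on its own under the law.
    credentials = bool(record.get("username") and record.get("password"))
    return (has_name and has_element) or credentials
```

A name alone does not qualify; it is the combination with a sensitive element (or a credential pair) that triggers the law.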

The law also defines a breach as any “unauthorized acquisition” of personal information. So, a rogue threat actor merely accessing a database? Not a breach. That same threat actor downloading the information from the database? Breach.

In California, once a company discovers a data breach, it next has to notify the affected individuals. These notifications must include details on which type of personal information was taken, a description of the breach, contact information for the company, and, if the company was actually the source of the breach, an offer for free identity theft prevention services for at least one year.

The law is particularly strict on these notifications to customers and individuals impacted. There are rules on font size and requirements for which subheadings to include in every notice: “What Happened,” “What Information Was Involved,” “What We Are Doing,” “What You Can Do,” and “More Information.”
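That mandated structure lends itself to a template. The helper below is a hypothetical sketch that assembles the five required subheadings and refuses to produce a notice with any of them missing (font-size rules and delivery requirements are out of scope here):

```python
# Hypothetical sketch: assemble a breach notice with the subheadings
# California requires. Formatting rules (font size, etc.) are not modeled.

REQUIRED_SECTIONS = [
    "What Happened",
    "What Information Was Involved",
    "What We Are Doing",
    "What You Can Do",
    "More Information",
]

def build_breach_notice(sections: dict) -> str:
    """Assemble a notice containing every required subheading, in order."""
    missing = [s for s in REQUIRED_SECTIONS if s not in sections]
    if missing:
        raise ValueError(f"Notice is missing required sections: {missing}")
    parts = []
    for heading in REQUIRED_SECTIONS:
        parts.append(heading.upper())  # render subheading
        parts.append(sections[heading])
    return "\n\n".join(parts)
```

Failing loudly on a missing section is the point of the sketch: the law treats these subheadings as mandatory, not optional.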

After the startup sends out its bevy of notices, it could still have a lot more to do.

As of April 2018, every single US state has its own data breach notification law. These laws, which can sometimes overlap, still include important differences, Freeman said.

“Some states require you to notify affected consumers. Some require you to notify the state’s Attorney General,” Freeman said. “Some require you to notify credit bureaus.”

For example, Florida’s law requires that, if more than 1,000 residents are affected, the company must notify all nationwide consumer reporting agencies. Utah’s law, on the other hand, only requires notifications if, after an investigation, the company finds that identity theft or fraud occurred, or likely occurred. And Iowa has one of the few state laws that protects both electronic and paper records.

Of all the data compliance headaches, this one might be the most time-consuming for the startup.

In the meantime, Freeman said, taking a proactive approach—like posting an accurate and truthful privacy policy and being upfront and honest with users about business practices—will put the startup at a clear advantage.

“If they start out knowing those things on the privacy side and just in the USA,” Freeman said, “that’s a great start that puts them ahead of a lot of other startups.”

Stay tuned for our second blog in the series, which will cover the current fight for comprehensive data privacy legislation in the United States.

The post The not-so-definitive guide to cybersecurity and data privacy laws appeared first on Malwarebytes Labs.

Labs survey finds privacy concerns, distrust of social media rampant with all age groups

Before Cambridge Analytica made Facebook an unwilling accomplice to a scandal by appropriating and misusing more than 50 million users’ data, the public was already living in relative unease over the privacy of their information online.

The Cambridge Analytica incident, along with other, seemingly day-to-day headlines about data breaches pouring private information into criminal hands, has eroded public trust in corporations’ ability to protect data, as well as their willingness to use the data in ethically responsible ways. In fact, the potential for data interception, gathering, collation, storage, and sharing is increasing exponentially in all private, public, and commercial sectors.

Concerns of data loss or abuse have played a significant role in the US presidential election results, the legal and ethical drama surrounding WikiLeaks, Brexit, and the implementation of the European Union’s General Data Protection Regulation. But how does the potential for the misuse of private data affect the average user in Vancouver, British Columbia; Fresno, California; or Lisbon, Portugal?

To that end, The Malwarebytes Labs team conducted a survey from January 14 to February 15, 2019 to inquire about the data privacy concerns of nearly 4,000 Internet users in 66 countries, including respondents from: Australia, Belgium, Brazil, Canada, France, Germany, Hong Kong, India, Iran, Ireland, Japan, Kenya, Latvia, Malaysia, Mexico, New Zealand, the Philippines, Saudi Arabia, South Africa, Taiwan, Turkey, the United Kingdom, the United States, and Venezuela.

The survey, which was conducted via SurveyMonkey, focused on the following key areas:

  • Feelings on the importance of online privacy
  • Rating trust of social media and search engines with data online
  • Cybersecurity best practices followed and ignored (a list of options was provided)
  • Level of confidence in sharing personal data online
  • Types of data respondents are most comfortable sharing online (if at all)
  • Level of consciousness of data privacy at home vs. the workplace


For a high-level look at our analysis of the survey results, including an exploration of why there is a disconnect between users’ emotions and their behaviors, as well as which privacy tools Malwarebytes recommends for those who wish to do more to protect their privacy, download our report:

The Blinding Effect of Security Hubris on Data Privacy


For this blog, we explored commonalities and differences among Baby Boomers (ages 56+), Gen Xers (ages 36 – 55), Millennials (ages 18 – 35), and Gen Zeds, or the Centennials (ages 17 and under) concerning feelings about privacy, level of confidence sharing information online, trust of social media and search engines with data, and which privacy best practices they follow.

Lastly, we delved into the regional data compiled from respondents in Europe, the Middle East, and Africa (EMEA) and compared it against North America (NA) to examine whether US users share common ground on privacy with other regions of the world.

Privacy is complicated

If, 10 years ago, someone had asked you to carry an instrument that could listen in on your conversations, broadcast your exact location to marketers, and allow you to be tracked as you moved between the grocery aisles (noting how long you lingered in front of the Cap’n Crunch cereal), most people would have declined, suggesting it was a crazy joke. Of course, that was before the advent of smartphones, which can do all that and more today.

Many regard the public disclosure of surreptitious information-gathering programs conducted by the National Security Agency (NSA) here in the US as a watershed moment in the debate over government surveillance and privacy. Despite the outcry, experts noted that the disclosures hardly made a dent in US laws about how the government may monitor citizens (and non-citizens) legally.

Tech companies in Silicon Valley were equally affected (or unaffected, depending on how you look at it) by Edward Snowden’s actions. Yet, over time, they have felt the effects of people’s changing behaviors and attitudes toward their services. In the face of increasing pressure from criminal actions and public perception in key demographics, companies like Google, Apple, and Facebook have taken steps to beef up encryption and better secure user data. But is this enough to make people trust them again?

Challenge: Put your money where your mouth is

In reality, particularly in commerce, we may have reservations about allowing companies to collect data from us, especially because we have little influence over how they use it, but that doesn’t stop us from handing it over. The care for the protection of our own data, and that of others, may well be nonexistent, signed away in an End-User License Agreement (EULA) buried 18 pages deep.

Case in point: A 2017 study conducted on students of the Massachusetts Institute of Technology (MIT) revealed that, among other findings, there is a paradox between how people feel about privacy and their willingness to give away data easily, especially when enticed with rewards (in this case, free pizza).

Indeed, we have a complicated relationship with our data and online privacy. One minute, we’re declaring on Twitter how the system has failed us, and the next, we’re taking a big bite of a warm slice of BBQ chicken pizza after giving away our best friend’s email address.

This raises the question: Is getting something in exchange for data a square deal? More specifically, should we have to give something away to use free services? Has a scam just taken place? But more to the point: Do people really, really care about privacy? If they do, why, and to what extent?

In search of answers

Before we conducted our survey, we had theories of our own, and these were colored by many previous articles on the topic. We assumed, for example, that Millennials and Gen Zeds, having grown up with the Internet already in place, would be much less concerned about their privacy than Baby Boomers, who spent a few decades on the planet before ever having created an online account. Rather than further a bias, we started from scratch—we wanted to see for ourselves how people of different generations truly felt about privacy.

Privacy by generations: an overview

This section outlines the survey’s overall findings across generations and regions. A breakdown of each generation’s privacy profile follows, including some correlations from studies that tackled similar topics in the past.

  • An overwhelming majority of respondents (96 percent) feel that online privacy is crucial. And their actions speak for themselves: 97 percent say they take steps to protect their online data, whether they are on a computer or mobile device.
  • Among seven options provided, below are the top four cybersecurity and privacy practices they follow:
    • “I refrain from sharing sensitive personal data on social media.” (94 percent)
    • “I use security software.” (93 percent)
    • “I run software updates regularly.” (90 percent)
    • “I verify the websites I visit are secured before making purchases.” (86 percent)
  • Among seven options provided, below are the top four cybersecurity faux pas they admitted to:
    • “I skim through or do not read End User License Agreements or other consent forms.” (66 percent)
    • “I use the same password across multiple platforms.” (29 percent)
    • “I don’t know which permissions my apps have access to on my mobile device.” (26 percent)
    • “I don’t verify the security of websites before making a purchase. (e.g. I don’t look for “https” or the green padlock on sites.)” (10 percent)

This shows that while respondents feel the need to protect their privacy and data online, they manage to do so consistently only most of the time, not all of the time.

  • There is a near equal percentage of people who trust (39 percent) and distrust (34 percent) search engines across all generations.
  • Across the board, there is a universal distrust of social media (95 percent). We can then safely assume that respondents are more likely to trust search engines to protect their data than social media.
  • When asked to agree or disagree with the statement, “I feel confident about sharing my personal data online,” 87 percent of respondents disagree or strongly disagree.
  • On the other hand, confident data sharers—or those who give away information to use a service they need—would most likely share their contact info (26 percent), such as name, address, phone number, and email address; card details when shopping online (26 percent); and banking details (16 percent).
  • A small portion (2 percent) of highly confident sharers are also willing to share (or already have shared) their Social Security Number (SSN) and health-related data.
  • In practice, however, 59 percent of respondents said they don’t share any of the sensitive data we listed online.
  • When asked to rate the statement, “I am more conscious of data privacy when at work than I am at home,” a large share (84 percent) said “false.”

Breaking it down

Many events within this decade have shaped the way Internet users across generations perceive privacy and how they act on that perception. The astounding number of breaches that have taken place since 2017 and the billions of records stolen, leaked, and bartered on the digital underground market—not to mention the seemingly endless opportunities for governments, institutions, and individuals to spy on and harvest data about people—can drive Internet users with even a modicum of interest in preserving privacy to either (1) live off the grid or (2) completely change their perception of data privacy. The former is unlikely to happen for the majority of users. The latter, however, is already taking place. In fact, not only have perceptions changed, but so has behavior, in some cases almost instantly.

We profiled each age group in light of past and present privacy-related events and how these have changed their perceptions, feelings, and online practices. Here are some of the important findings that emerged from our survey.

Centennials are no noobs when it comes to privacy.*

It’s important to note that while many users who are 18 years old and under (83 percent) admit that privacy is important to them, even more (87 percent) are taking steps to ensure that their data is secure online. Ninety percent of them do this by making sure that the websites they visit are secure before making online purchases. They also refrain from sharing sensitive PII on social media (86 percent) and use security software (86 percent).

Jerome Boursier, security researcher and co-founder of AdwCleaner, is also a privacy advocate. He disagrees with Gen Zeds’ claims that they don’t disclose their personally identifiable information (PII) on social media. “I think most people in the survey would define PII differently. People—especially the younger ones—tend to have a blurry definition of it and don’t consider certain information as personally identifiable the same way older generations do.”

Other notable practices Gen Zeds admit to partaking in are borrowed from the Cybersecurity 101 handbook, such as using complicated passwords and tools like a VPN on their mobile devices. Others go above and beyond normal practices, such as checking whether a downloaded file is malicious using VirusTotal and modifying files to prevent telemetry logging or reporting, something Microsoft has been doing since the release of Windows 7.

They are also the generation least likely to update their software.

Contrary to public belief, Millennials do care about their privacy.

This bears repeating: Millennials do care about their privacy.

An overwhelming majority (93 percent) of Millennials admitted to caring about their privacy. On the other hand, a small portion of this age group, while disclosing that they aren’t that bothered about their privacy, also admit that they still take steps to keep their online data safe.

One reason Millennials may care about their privacy is that they want to manage their online reputations, and they are the most active at it, according to the Pew Research Center. In the report “Reputation Management and Social Media,” researchers found that Millennials take steps to limit the amount of PII online, are well-versed in personalizing their social media privacy settings, delete unwanted comments about them on their profiles, and un-tag themselves from photos they were tagged in by someone else. Given that many employers Google their prospective employees (and Millennials know this), they take a proactive role in putting their best foot forward online.

Like Centennials, Millennials also use VPNs and Tor to protect their anonymity and privacy. In addition, they regularly conduct security checks on their devices and account activity logs, use two-factor authentication (2FA), and do their best to get on top of news, trends, and laws related to privacy and tech. A number of Millennials also admit to not having a social media presence.

While a large share (92 percent) of Millennials polled distrust social media with their data (and 64 percent of them feel the same way about search engines), they continue to use Google, Facebook, and other social media and search platforms. Several Millennials also admit that they can’t seem to stop themselves from clicking links.

Lastly, only a little over half of the respondents (59 percent) are as conscious of their data privacy at home as they are at work. This means that there is a sizable chunk of Millennials who are only conscious of their privacy at work but not so much at home.

Gen Xers feel and behave online almost the same way as Baby Boomers.

Gen Xers are the youngest of the older generations, but their habits better resemble their elder counterparts than their younger compatriots. Call it coincidence or bad luck—depending on your predisposition—or even “wisdom in action.” Either way, being likened to Baby Boomers is a compliment when it comes to privacy and security best practices.

Respondents in this age group have the highest number of people who are privacy-conscious (97 percent), and they are no doubt deliberate (98 percent) in their attempts to secure and take control of their data. Abstaining from posting personal information on social media ranks high in their list of “dos” at 93 percent. Apart from using security software and regularly updating all programs they use, they also do their best to opt out of everything they can, use strong passwords and 2FA, install blocker apps on browsers, and surf the web anonymously.

On the flip side, they’re second only to Millennials in avoiding reading EULAs (71 percent). Gen Xers also bagged the award for the fewest password reusers in a generation (24 percent).

When it comes to a search engine’s ability to secure their data, over half of Gen Xers (65 percent) distrust them, while nearly a quarter (24 percent) chose to remain neutral.

Baby Boomers know more about protecting privacy online than other generations, and they act upon that knowledge.

Our findings of Baby Boomers have challenged the longstanding notion that they are the most clueless bunch when it comes to cybersecurity and privacy.

Of course, this isn’t to say that there are no naïve users in this generation—all generations have them—but our survey results profoundly contrast what most of us accepted as truth about what Boomers feel about privacy and how they behave when online. They’re actually smarter and more prudent than we care to give them credit for.

Baby Boomers came out as the most distrustful generation (97 percent) of social media when it comes to protecting their data. Because of this, those who have a social media presence hardly disclose (94 percent) any personal information when active.

In contrast, only a little over half (57 percent) of Boomers trust search engines, making them the most trustful among other groups. This means that it is highly likely for a Baby Boomer to trust search engines with their data over social media.

Boomers are also the least confident (89 percent) generation in terms of sharing personal data online. This correlates to a nationwide study commissioned by Hide My Ass! (HMA), a popular VPN service provider, about Baby Boomers and their different approach to online privacy. According to their research, Boomers are likely to respond “I only allow trusted people to see anything I post & employ a lot of privacy restrictions.”

Lastly, they’re also the most consistent in terms of guarding their data privacy both at home and at work (88 percent).

“I am immediately surprised that Baby Boomers are the most conscious about data privacy at work and at home. Anecdotally, I guess it makes sense, at least in work environments,” says David Ruiz, Content Writer for Malwarebytes Labs and a former surveillance activist for the Electronic Frontier Foundation (EFF). He further recalls: “I used to be a legal affairs reporter and 65-and-up lawyers routinely told me about their employers’ constant data security and privacy practices (daily, changing Wi-Fi passwords, secure portals for accessing documents, no support of multiple devices to access those secure portals).”

Privacy by region: an overview of EMEA and NA

A clear majority of survey respondents within the EMEA region are from countries in Europe. One would think that Europeans are more versed in online privacy practices, given that they are known for taking privacy and data protection more seriously than those in North America (NA). Although that savviness shows in certain age groups in EMEA, our data shows that the privacy-savviness of those in NA is not far off. In fact, certain age groups in NA match or even trump the numbers in EMEA.

Comparing and contrasting user perception and practice in EMEA and NA

There is no denying that those polled in EMEA and NA care about privacy and take steps to secure themselves, too. Most of them refrain from disclosing any information they deemed as sensitive in social media (an average of 89 percent of EMEA users versus 95 percent of NA users), verify websites where they plan to make purchases are secure (an average of 90 percent of EMEA users versus 91 percent of NA users), and use security software (an average of 89 percent of EMEA users versus 94 percent of NA users).

However, like what we’ve seen in the generational profiles, they also recognize the weaknesses that dampen their efforts. All respondents are prone to skimming through or completely avoiding reading the EULA (an average of 77 percent of EMEA users versus 71 percent of NA users). This is the most prominent problem across generations, followed by reusing passwords (an average of 26 percent of EMEA users versus 38 percent of NA users) and not knowing which permissions their apps have access to on their mobile devices (an average of 19 percent of EMEA users versus 17 percent of NA users).

As you can see, there are more users in NA that are embracing these top online privacy practices than those in EMEA.

All respondents from EMEA and NA are significantly distrustful of social media—92 and 88 percent, respectively—when it comes to protecting their data. For those who are willing to disclose their data online, they usually share their credit card details (26 percent), contact info (26 percent), and banking details (16 percent). Essentially, the most common pieces of information you normally give out when you do online banking and purchasing.

Millennials in both EMEA and NA (61 percent) feel the least conscious about their data privacy at work vs. at home. On the other hand, Baby Boomers (85 percent) in both regions feel the most conscious about their privacy in said settings.

It’s also interesting to note that Baby Boomers in both regions appear to share a similar profile.

Privacy in EMEA and NA: notable trends

When it comes to knowing which permissions apps have access to on mobile devices, Gen Zeds in EMEA (90 percent) are far more aware than Gen Zeds in NA (63 percent). In fact, Gen Zeds and Millennials (73 percent) are the only generations in EMEA that are conscious of app permissions. They are also the groups least likely to reuse passwords (20 and 24 percent, respectively) across generations in both regions, although Gen Xers in EMEA have the highest rate of users who recycle passwords (31 percent).

It also appears that the average percentage of older respondents—the Gen Xers (31 percent) and Baby Boomers (37 percent)—in both regions are more likely to read EULAs or take the time to do so than the average percentage of Gen Zeds and Millennials (both at 18 percent).

Gen Zeds in NA are the most distrustful generation of search engines (75 percent) and social media (100 percent) when it comes to protecting their data. They’re also the most uncomfortable (100 percent) when it comes to sharing personal data online.

Among the Baby Boomers, those in NA are the most conscious (85 percent) when it comes to data privacy at work. However, Baby Boomers in EMEA are not far off (84 percent).

With privacy comes universal reformation, for the betterment of all

The results of our survey have merely provided a snapshot of how generations and certain regions perceive privacy and what steps they take (and don’t take) to control what information is made available online. Many might be surprised by these findings while others may correlate them with other studies in the past. However you take it, one thing is clear: Online privacy has become as important an issue as cybersecurity, and people are beginning to take notice.

With this current privacy climate, it is not enough for Internet users to do the heavy lifting. Regulators play a part, and businesses should act quickly to guarantee that the data they collect from users is only what is reasonably needed to keep services going. In addition, they should secure the data they handle and store, and ensure that users are informed of changes to which data they collect and how they are used. We believe that this demand from businesses will continue at least for the next three years, and any plans or reforms that elevate the importance of online privacy of user data will serve as cornerstones to future transformations.

At this point in time, there is no real way to have complete privacy and anonymity when online. It’s a pipe dream in the current climate. Perhaps the best we can hope for is a society where businesses of all sizes recognize that the user data they collect has a real impact on their customers, and to respect and secure that data. Users should not be treated as a collection of entries with names, addresses, and contact numbers in a huge database. Customers are customers once again, who are always on the lookout for products and services to meet their needs.

The privacy advocate mantle would then be taken up by Centennials and “Alphas” (or the iGeneration), the first age group born entirely within the 21st century and considered the most technologically infused of us all. For those who wish to conduct future studies on privacy like this, it would be really, really interesting to see how Alphas and Centennials would react to a free box of pizza in exchange for their mother’s maiden name.

[*] Malwarebytes Labs was only able to poll a total of 31 respondents in Gen Zed. This isn’t enough to create an accurate profile of this age group. However, this author believes that what we were able to gather is enough to give an informed assessment of this age group’s feelings and practices.

The post Labs survey finds privacy concerns, distrust of social media rampant with all age groups appeared first on Malwarebytes Labs.

Cyber Security Roundup for February 2019

The perceived threat posed by Huawei to the UK national infrastructure continued to make headlines throughout February, as politicians, UK government agencies and the Chinese telecoms giant continued to play out their rather public spat in the media. See my post Is Huawei a Threat to UK National Security? for further details, and also for why DDoS might be a greater threat to 5G than Huawei-supplied network devices.

February was a rather quiet month for hacks and data breaches in the UK: Mumsnet reported a minor data breach following a botched upgrade, and that was about it. The month was a busy one for security updates, however, with Microsoft, Adobe and Cisco all releasing high numbers of patches to fix various security vulnerabilities, including several released outside their scheduled monthly patch release cycles.

A survey by PCI Pal concluded the consequences of a data breach had a greater impact in the UK than the United States, in that UK customers were more likely to abandon a company when let down by a data breach. The business reputational impact should always be taken into consideration when risk assessing security.

Another survey of interest was conducted by Nominet, who polled 408 Chief Information Security Officers (CISOs) at midsize and large organisations in the UK and the United States. A whopping 91% of respondents admitted to experiencing high to moderate levels of stress, with 26% saying the stress had led to mental and physical health issues, and 17% saying they had turned to alcohol. The contributing factors were job insecurity, inadequate budget and resources, and a lack of support from the board and senior management. A CISO role can certainly be a poisoned chalice, so it's really no surprise most CISOs don't stay put for long.

A Netscout Threat Landscape Report declared that in the second half of 2018, cyber attacks against IoT devices and DDoS attacks both rose dramatically. Fuelled by the compromise of high numbers of IoT devices, the number of DDoS attacks in the 100Gbps to 200Gbps range increased 169%, while those in the 200Gbps to 300Gbps range exploded by 2,500%. The report concluded cybercriminals had built and used cheaper, easier-to-deploy and more persistent malware, and that cyber gangs had achieved this higher level of efficiency by adopting the same principles used by legitimate businesses. These improvements have helped malicious actors greatly increase the number of medium-size DDoS attacks while infiltrating IoT devices even more quickly.

In a rare speech, Jeremy Fleming, the head of GCHQ, warned the internet could deteriorate into "an even less governed space" if the international community doesn't come together to establish a common set of principles. He said "China, Iran, Russia and North Korea" had broken international law through cyber attacks, and made the case for when "offensive cyber activities" were justified, saying "their use must always meet the three tests of legality, necessity and proportionality. Their use, in particular to cause disruption or damage - must be in extremis". Clearly international law wasn't developed with cyber space in mind, so it looks like GCHQ is attempting to raise awareness to remedy that.

I will be speaking at the e-crime Cyber Security Congress in London on 6th March 2019, on cloud security, new business metrics, future risks and priorities for 2019 and beyond.

Finally, completely out of the blue, I was informed by 4D that this blog had been picked by a team of their technical engineers and Directors as one of the best cyber security blogs in the UK, in "The 6 Best Cyber Security Blogs - A Data Centre's Perspective". Truly humbled, and in great company, to be on that list.


    Will pay-for-privacy be the new normal?

    Privacy is a human right, and online privacy should be no exception.

    Yet, as the US considers new laws to protect individuals’ online data, at least two proposals—one statewide law that can still be amended and one federal draft bill that has yet to be introduced—include an unwelcome bargain: exchanging money for privacy.

    This framework, sometimes called “pay-for-privacy,” is plain wrong. It casts privacy as a commodity that individuals with the means can easily purchase. But a move in this direction could further deepen the separation between socioeconomic classes. The “haves” can operate online free from prying eyes. But the “have nots” must forfeit that right.

    Though this framework has been used by at least one major telecommunications company before, and there are no laws preventing its practice today, those in cybersecurity and the broader technology industry must put a stop to it. Before pay-for-privacy becomes law, privacy as a right should become industry practice.

    Data privacy laws prove popular, but flawed

    Last year, the European Union put into effect one of the most sweeping set of data privacy laws in the world. The General Data Protection Regulation, or GDPR, regulates how companies collect, store, share, and use EU citizens’ data. The law has inspired countries everywhere to follow suit, with Italy (an EU member) issuing regulatory fines against Facebook, Brazil passing a new data-protective bill, and Chile amending its constitution to include data protection rights.

    The US is no exception to this ripple effect.

    In the past year, Senators Ron Wyden of Oregon, Marco Rubio of Florida, Amy Klobuchar of Minnesota, and Brian Schatz of Hawaii, joined by 14 other senators as co-sponsors, proposed separate federal bills to regulate how companies collect, use, and protect Americans’ data.

    Sen. Rubio’s bill asks the Federal Trade Commission to write its own set of rules, which Congress would then vote on two years later. Sen. Klobuchar’s bill would require companies to write clear terms of service agreements and to send users notifications about privacy violations within 72 hours. Sen. Schatz’s bill introduces the idea that companies have a “duty to care” for consumers’ data by providing a “reasonable” level of security.

    But it is Sen. Wyden’s bill, the Consumer Data Protection Act, that stands out, and not for good reason. Hidden among several privacy-forward provisions, like stronger enforcement authority for the FTC and mandatory privacy reports for companies of a certain size, is a dangerous pay-for-privacy stipulation.

    According to the Consumer Data Protection Act, companies that require user consent for their services could charge users a fee if those users have opted out of online tracking.

    If passed, here’s how the Consumer Data Protection Act would work:

    Say a user, Alice, no longer feels comfortable having companies collect, share, and sell her personal information to third parties for the purpose of targeted ads and increased corporate revenue. First, Alice would register with the Federal Trade Commission’s “Do Not Track” website, where she would choose to opt-out of online tracking. Then, online companies with which Alice interacts would be required to check Alice’s “Do Not Track” status.

    If a company sees that Alice has opted out of online tracking, that company is barred from sharing her information with third parties and from following her online to build and sell a profile of her Internet activity. Companies that are run almost entirely on user data—including Facebook, Amazon, Google, Uber, Fitbit, Spotify, and Tinder—would need to heed users’ individual decisions. However, those same companies could present Alice with a difficult choice: She can continue to use their services, free of online tracking, so long as she pays a price.
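    The decision flow described above can be sketched in code. This is purely illustrative: the bill specifies no API, and the registry, fee mechanics, and function names below are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of the company-side check under the proposed
# Consumer Data Protection Act. Nothing here is a real FTC interface;
# the registry and the return values are illustrative only.

OPT_OUT_REGISTRY = {"alice@example.com"}  # stand-in for the FTC "Do Not Track" list


def handle_user(user_email, pays_privacy_fee=False):
    """Return the tracking treatment a service could apply to a user."""
    if user_email not in OPT_OUT_REGISTRY:
        return "tracked"  # user never opted out of online tracking
    if pays_privacy_fee:
        return "untracked"  # opted out and paying: tracking is barred
    # Opted out but unpaid: the bill would let the service charge a fee
    # as a condition of continued, untracked use.
    return "untracked-or-denied"


print(handle_user("bob@example.com"))                           # tracked
print(handle_user("alice@example.com"))                         # untracked-or-denied
print(handle_user("alice@example.com", pays_privacy_fee=True))  # untracked
```

    The third branch is the pay-for-privacy stipulation the article objects to: Alice's opt-out is honored only if she absorbs the cost.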

    This represents a literal price for privacy.

    Electronic Frontier Foundation Senior Staff Attorney Adam Schwartz said his organization strongly opposes pay-for-privacy systems.

    “People should be able to not just opt out, but not be opted in, to corporate surveillance,” Schwartz said. “Also, when they choose to maintain their privacy, they shouldn’t have to pay a higher price.”

    Pay-for-privacy schemes can come in two varieties: individuals can be asked to pay more for more privacy, or they can pay a lower (discounted) amount and be given less privacy. Both options, Schwartz said, incentivize people not to exercise their privacy rights, either because the cost is too high or because the monetary gain is too appealing.

    Both options also harm low-income communities, Schwartz said.

    “Poor people are more likely to be coerced into giving up their privacy because they need the money,” Schwartz said. “We could be heading into a world of the ‘privacy-haves’ and ‘have-nots’ that conforms to current economic statuses. It’s hard enough for low-income individuals to live in California with its high cost-of-living. This would only further aggravate the quality of life.”

    Unfortunately, a pay-for-privacy provision is also included in the California Consumer Privacy Act, which the state passed last year. Though the law includes a “non-discrimination” clause meant to prevent just this type of practice, it also includes an exemption that allows companies to provide users with “incentives” to still collect and sell personal information.

    In a larger blog about ways to improve the law, which was then a bill, Schwartz and other EFF attorneys wrote:

    “For example, if a service costs money, and a user of this service refuses to consent to collection and sale of their data, then the service may charge them more than it charges users that do consent.”

    Real-world applications

    The alarm for pay-for-privacy isn’t theoretical—it has been implemented in the past, and there is no law stopping companies from doing it again.

    In 2015, AT&T offered broadband service for a $30-a-month discount if users agreed to have their Internet activity tracked. According to AT&T’s own words, that Internet activity included the “webpages you visit, the time you spend on each, the links or ads you see and follow, and the search terms you enter.”

    Paying for privacy isn’t always so obvious, with real dollars coming out of or going into a user’s wallet or checking account. Often, it happens behind the scenes, and it isn’t the user getting richer—it’s the companies.

    Powered by mountains of user data for targeted ads, Google-parent Alphabet recorded $32.6 billion in advertising revenue in the last quarter of 2018 alone. In the same quarter, Twitter recorded $791 million in ad revenue. And, notable for its CEO’s insistence that the company does not sell user data, Facebook’s prior plans to do just that were revealed in documents posted this week. Signing up for these services may be “free,” but that’s only because the product isn’t the platform—it’s the user.

    A handful of companies currently reject this approach, though, refusing to sell or monetize users’ private information.

    In 2014, CREDO Mobile separated itself from AT&T by promising users that their privacy “is not for sale. Period.” (The company does admit in its privacy policy that it may “sell or trade mailing lists” containing users’ names and street addresses, though.) ProtonMail, an encrypted email service, positions itself as a foil to Gmail because it does not advertise on its site, and it promises that users’ encrypted emails will never be scanned, accessed, or read. In fact, the company claims it can’t access these emails even if it wanted to.

    As for Google’s very first product—online search—the clearest privacy alternative is DuckDuckGo. The privacy-focused service does not track users’ searches, and it does not build individualized profiles of its users to deliver unique results.

    Even without monetizing users’ data, DuckDuckGo has been profitable since 2014, said community manager Daniel Davis.

    “At DuckDuckGo, we’ve been able to do this with ads based on context (individual search queries) rather than personalization.”

    Davis said that DuckDuckGo’s decisions are steered by a long-held belief that privacy is a fundamental right. “When it comes to the online world,” Davis said, “things should be no different, and privacy by default should be the norm.”

    It is time other companies follow suit, Davis said.

    “Control of one’s own data should not come at a price, so it’s essential that [the] industry works harder to develop business models that don’t make privacy a luxury,” Davis said. “We’re proof this is possible.”

    Hopefully, other companies are listening, because it shouldn’t matter whether pay-for-privacy is codified into law—it should never be accepted as an industry practice.

    The post Will pay-for-privacy be the new normal? appeared first on Malwarebytes Labs.

    Max Schrems: lawyer, regulator, international man of privacy

    Almost one decade ago, disparate efforts began in the European Union to change the way the world thinks about online privacy.

    One effort focused on legislation, pulling together lawmakers from 28 member-states to discuss, draft, and deploy a sweeping set of provisions that, today, has altered how almost every single international company handles users’ personal information. The finalized law of that effort—the General Data Protection Regulation (GDPR)—aims to protect the names, addresses, locations, credit card numbers, IP addresses, and even, depending on context, hair color, of EU citizens, whether they’re customers, employees, or employers of global organizations.

    The second effort focused on litigation and public activism, sparking a movement that has raised nearly half a million dollars to fund consumer-focused lawsuits meant to uphold the privacy rights of EU citizens, and has resulted in the successful dismantling of a 15-year-old intercontinental data-transfer agreement for its failure to protect EU citizens’ personal data. The 2015 ruling sent shockwaves through the security world, and forced companies everywhere to scramble to comply with a regulatory system thrown into flux.

    The law was passed. The movement is working. And while countless individuals launched investigations, filed lawsuits, participated in years-long negotiations, published recommendations, proposed regulations, and secured parliamentary approval, we can trace these disparate yet related efforts back to one man—Maximilian Schrems.

    Remarkably, as the two efforts progressed separately, they began to inform one another. Today, they work in tandem to protect online privacy. And businesses around the world have taken notice.

    The impact of GDPR today

    A Portuguese hospital, a German online chat platform, and a Canadian political consultancy all face GDPR-related fines issued last year. In January, France’s National Data Protection Commission (CNIL) hit Google with a 50-million-euro penalty—the largest GDPR fine to date—after an investigation found a “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization.”

    The investigation began, CNIL said, after it received legal complaints from two groups: the nonprofit La Quadrature du Net and the non-governmental organization None of Your Business. None of Your Business, or noyb for short, counts Schrems as its honorary director. In fact, he helped crowdfund its launch last year.

    Outside the European Union, lawmakers are watching these one-two punches as a source of inspiration.

    When testifying before Congress about a scandal involving misused personal data, the 2016 US presidential election, and a global disinformation campaign, Facebook CEO Mark Zuckerberg repeatedly heard calls to regulate his company and its data-mining operations.

    “The question is no longer whether we need a federal law to protect consumers’ privacy,” said Republican Senator John Thune of South Dakota. “The question is what shape will that law take.”

    Democratic Senator Mark Warner of Virginia put it differently: “The era of the Wild West in social media is coming to an end.”

    A new sheriff comes to town

    In 2011, Schrems was a 23-year-old law student from Vienna, Austria, visiting the US to study abroad. He enrolled in a privacy seminar at the Santa Clara University School of Law where, along with roughly 22 other students, he learned about online privacy law from one of the field’s notable titans.

    Professor Dorothy Glancy practiced privacy law before it had anything to do with the Internet, cell phones, or Facebook. Instead, she navigated the world of government surveillance, wiretaps, and domestic spying. She served as privacy counsel to one of the many subcommittees that investigated the Watergate conspiracy.

    Later, still working for the subcommittee, she examined the number of federal agency databases that contained people’s personally identifiable information. She then helped draft the Privacy Act of 1974, which restricted how federal agencies collected, used, and shared that information. It is one of the first US federal privacy laws.

    The concept of privacy has evolved since those earlier days, Glancy said. It is no longer solely about privacy from the government. It is also about privacy from corporations.

    “Over time, it’s clear that what was, in the 70s, a privacy problem in regards to Big Brother and the federal government, has now gotten so that a lot of these issues have to do with the private [non-governmental] collection of information on people,” Glancy said.

    In 2011, one of the biggest private, non-governmental collectors of that information was Facebook. So, when Glancy’s class received a guest presentation from Facebook privacy lawyer Ed Palmieri, Schrems paid close attention, and he didn’t like what he heard.

    For starters, Facebook simply refused to heed Europe’s data privacy laws.

    Speaking to 60 Minutes, Schrems said: “It was obviously the case that ignoring European privacy laws was the much cheaper option. The maximum penalty, for example, in Austria, was 20,000 euros. So, just a lawyer telling you how to comply with the law was more expensive than breaking it.”

    Further, according to Glancy, Palmieri’s presentation showed that Facebook had “absolutely no understanding” about the relationship between an individual’s privacy and their personal information. This blind spot concerned Schrems to no end. (Palmieri could not be reached for comment.)

    “There was no understanding at all about what privacy is in the sense of the relationship to personal information, or to human rights issues,” Glancy said. “Max couldn’t quite believe it. He didn’t quite believe that Facebook just didn’t understand.”

    So Schrems investigated. (Schrems did not respond to multiple interview requests, including one forwarded by his colleagues at Noyb.)

    Upon returning to Austria, Schrems decided to figure out just how much information Facebook had on him. The answer was astonishing: Facebook sent Schrems a 1,200-page PDF that detailed his location history, his contact information, information about past events he attended, and his private Facebook messages, including some he thought he had deleted.

    Shocked, Schrems started a privacy advocacy group called “Europe v. Facebook” and uploaded redacted versions of his own documents onto the group’s website. The revelations touched a public nerve—roughly 40,000 Europeans soon asked Facebook for their own personal dossiers.

    Schrems then went legal. With Facebook’s international headquarters in Ireland, he filed 22 complaints with Ireland’s Data Protection Commissioner, alleging that Facebook was violating EU data privacy law. Among the allegations: Facebook didn’t really “delete” posts that users chose to delete, Facebook’s privacy policy was too vague and unclear to constitute meaningful consent by users, and Facebook engaged in illegal “excessive processing” of user data.

    The Irish Data Protection Commissioner rolled Schrems’ complaints into an already-running audit into Facebook, and, in December 2011, released non-binding guidance for the company. Facebook’s lawyers also met with Schrems in Vienna for six hours in February 2012.

    And then, according to Schrems’ website, only silence and inaction from both Facebook and the Irish Data Protection Commissioner’s Office followed. There were no meaningful changes from the company. And no stronger enforcement from the government.

    Frustrating as it may have been, Schrems kept pressing. Luckily, according to Glancy, he was just the right man for the job.

    “He is innately curious,” Glancy said. “Once he sees something that doesn’t quite seem right, he follows it up to the very end.”

    Safe Harbor? More like safety not guaranteed

    On June 5, 2013, multiple newspapers exposed two massive surveillance programs in use by the US National Security Agency. One program, then called PRISM (now called Downstream), implicated some of the world’s largest technology companies, including Facebook.

    Schrems responded by doing what he did best: He filed yet another complaint against Facebook—his 23rd—with the Irish Data Protection Commissioner. Facebook Ireland, Schrems claimed, was moving his data to Facebook Inc. in the US, where, according to The Guardian, the NSA enjoyed “mass access” to user data. Though Facebook and other companies denied their participation, Schrems doubted the accuracy of these statements.

    “There is probable cause to believe that ‘Facebook Inc’ is granting the NSA mass access to its servers that goes beyond merely individual requests based on probable cause,” Schrems wrote in his complaint. “The statements by ‘Facebook Inc’ are in light of the US laws not credible, because ‘Facebook Inc’ is bound by so-called ‘gag orders.’”

    Schrems argued that, when his data left EU borders, EU law required that it receive an “adequate level of protection.” Mass surveillance, he said, violated that.

    The Irish Data Protection Commissioner disagreed. The described EU-to-US data transfer was entirely legal, the Commissioner said, because of Safe Harbor, a data privacy carve-out approved much earlier.

    In 1995, the EU adopted the Data Protection Directive, which, up until 2018, regulated the treatment of EU citizens’ personal data. In 2000, the European Commission approved an exception to the law: US companies could agree to a set of seven principles, called the Safe Harbor Privacy Principles, to allow for data transfer from the EU to the US. This self-certifying framework proved wildly popular. For 15 years, nearly every single company that moved data from the EU to the US relied, at least briefly, on Safe Harbor.

    Unsatisfied, Schrems asked the Irish High Court to review the Data Protection Commissioner’s inaction. In October 2013, the court agreed. Schrems celebrated, calling out the Commissioner’s earlier decision.

    “The [Data Protection Commissioner] simply wanted to get this hot potato off his table instead of doing his job,” Schrems said in a statement at the time. “But when it comes to the fundamental rights of millions of users and the biggest surveillance scandal in years, he will have to take responsibility and do something about it.”

    Less than one year later, the Irish High Court came back with its decision—the Court of Justice for the European Union would need to review Safe Harbor.

    On March 24, 2015, the Court heard oral arguments for both sides. Schrems’ legal team argued that Safe Harbor did not provide adequate protection for EU citizens’ data. The European Commission, defending the Irish DPC’s previous decision, argued the opposite.

    When asked by the Court how EU citizens might best protect themselves from the NSA’s mass surveillance, the lawyer arguing in favor of Safe Harbor made a startling admission:

    “You might consider closing your Facebook account, if you have one,” said Bernhard Schima, advocate for the European Commission, all but admitting that Safe Harbor could not protect EU citizens from overseas spying. When asked more directly if Safe Harbor provided adequate protection of EU citizens’ data, the European Commission’s legal team could not guarantee it.

    On September 23, 2015, the Court’s advocate general issued his initial opinion—Safe Harbor, in light of the NSA’s mass surveillance programs, was invalid.

    “Such mass, indiscriminate surveillance is inherently disproportionate and constitutes an unwarranted interference with the rights [to respect for privacy and family life and protection of personal data,]” the opinion said.

    Less than two weeks later, the entire Court of Justice agreed.

    Ever a lawyer, Schrems responded to the decision with a 5,500-word blog post (assigned a non-commercial Creative Commons public copyright license) exploring current data privacy law, Safe Harbor alternatives, company privacy policies, a potential Safe Harbor 2.0, and mass surveillance. Written with “limited time,” Schrems thanked readers for pointing out typos.

    The General Data Protection Regulation

    Before the Court of Justice struck down Safe Harbor, before Edward Snowden shed light on the NSA’s mass surveillance, before Schrems received a 1,200-page PDF documenting his digital life, and before that fateful guest presentation in professor Glancy’s privacy seminar at Santa Clara University School of Law, a separate plan was already under way to change data privacy.

    In November 2010, the European Commission, which proposes legislation for the European Union, considered a new policy with a clear goal and equally clear title: “A comprehensive approach on personal data protection in the European Union.”

    Many years later, it became GDPR.

    During those years, the negotiating committees looked to Schrems’ lawsuits as highly informative, Glancy said, because Schrems had successfully proven the relationship between the European Charter of Fundamental Human Rights and its application to EU data privacy law. Ignoring that expertise would be foolish.

    “Max [Schrems] was a part of just about all the committees working on [GDPR]. His litigation was part of what motivated the adoption of it,” Glancy said. “The people writing the GDPR would consult him as to whether it would solve his problems, and parts of the very endless writing process were also about what Max [Schrems] was not happy with.”

    Because Schrems did not respond to multiple interview requests, it is impossible to know his precise involvement in GDPR. His Twitter and blog have no visible, corresponding entries about GDPR’s passage.

    However, public records show that GDPR’s drafters recommended several areas of improvement in the year before the law passed, including clearer definitions of “personal information,” stronger investigatory powers to the EU’s data regulators, more direct “data portability” to allow citizens to directly move their data from one company to another while also obtaining a copy of that data, and better transparency in how EU citizens’ online profiles are created and targeted for ads.

    GDPR eventually became a sweeping set of 99 articles that tightly govern the collection, storage, use, transfer, and disclosure of data belonging to all EU citizens, giving those citizens more direct control over how their data is treated.

    For example, citizens have the “right to erasure,” in which they can ask a company to delete the data collected on them. Citizens also have the “right to access,” in which companies must provide a copy of the data collected on a person, along with information about how the data was collected, who it is shared with, and why it is processed.
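    The “right to access” response described above can be sketched as a data structure. This is a minimal illustration, not an implementation of any company’s actual compliance system: the field names, record shape, and helper function are all assumptions made for the example, though the four disclosures mirror the ones the article lists.

```python
# Hypothetical sketch of a "right to access" response bundle.
# Field names are illustrative; GDPR mandates the disclosures,
# not any particular payload format.
import json


def build_access_response(user_record):
    """Assemble the disclosures a right-to-access request calls for."""
    return {
        "personal_data": user_record["data"],            # copy of the data held
        "collection_sources": user_record["sources"],    # how it was collected
        "recipients": user_record["shared_with"],        # who it is shared with
        "processing_purposes": user_record["purposes"],  # why it is processed
    }


record = {
    "data": {"name": "Alice", "email": "alice@example.com"},
    "sources": ["signup form"],
    "shared_with": ["payment processor"],
    "purposes": ["account management"],
}
print(json.dumps(build_access_response(record), indent=2))
```

    A “right to erasure” request would be the inverse operation: on a verified request, the service deletes the `personal_data` it holds and confirms the deletion to the citizen.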

    Approved by a parliamentary vote in April 2016, GDPR took effect two years later.

    GDPR’s immediate and future impact

    On May 23, 2018, GDPR’s arrival was sounded not by trumpets, but by emails. Facebook, TicketMaster, eBay, PricewaterhouseCoopers, The Guardian, Marriott, KickStarter, GoDaddy, Spotify, and countless others began their public-facing GDPR compliance strategies by telling users about updated privacy policies. The email deluge inspired rankings, manic tweets, and even a devoted “I love GDPR” playlist. The blitz was so large, in fact, that several threat actors took advantage, sending fake privacy policy updates to phish for users’ information.

    Since then, compliance looks less like emails and more like penalties.

    Early this year, Google received its €50 million ($57 million) fine from French regulators. Last year, a Portuguese hospital received a €400,000 fine for two alleged GDPR violations. Because of a July 2018 data breach, a German chat platform got hit with a €20,000 fine. And in the reported first-ever GDPR notice from the UK, Canadian political consultancy—and murky partner to Cambridge Analytica—AggregateIQ received a notice about potential fines of up to €20 million.

    To Noyb, the fines are good news. Gaëtan Goldberg, a privacy lawyer with the NGO, said that data privacy law compliance has, for many years, been lacking. Hopefully GDPR, which Goldberg called a “major step” in protecting personal data, can help turn that around, he said.

    “[We] hope to see strong enforcement measures being taken by courts and data protection authorities around the EU,” Goldberg said. “The fine of 50 [million] euros the French CNIL imposed on Google is a good start in this direction.”

    The future of data privacy

    Last year, when Senator Warner told Zuckerberg that “the era of the Wild West in social media is coming to an end,” he may not have realized how quickly that would come true. In July 2018, California passed a statewide data privacy law called the California Consumer Privacy Act. Months later, three US Senators proposed their own federal data privacy laws. And just this month, the Government Accountability Office recommended that Congress pass a data privacy law similar to GDPR.

    Data privacy is no longer a concept. It is the law.

    In the EU, that law has released a torrent of legal complaints. Hours after GDPR came into effect, Noyb lodged a series of complaints against Google, Facebook, Instagram, and WhatsApp.

    Goldberg said the group’s legal complaints are one component of meaningful enforcement on behalf of the government. Remember: Google’s massive penalty began with an investigation that the French authorities said started after it received a complaint from Noyb.

    Separately, privacy group Privacy International filed complaints against Europe’s data-brokers and advertising technology companies, and Brave, a privacy-focused web browser, filed complaints against Google and other digital advertising companies.

    Google and Facebook did not respond to questions about how they are responding to the legal complaints. Facebook also did not respond to questions about its previous legal battles with Schrems.

    Electronic Frontier Foundation International Director Danny O’Brien wrote last year that, while we wait for the results of the above legal complaints, GDPR has already motivated other privacy-forward penalties and regulations around the world:

    “In Italy, it was competition regulators that fined Facebook ten million euros for misleading its users over its personal data practices. Brazil passed its own GDPR-style law this year; Chile amended its constitution to include data protection rights; and India’s lawmakers introduced a draft of a wide-ranging new legal privacy framework.”

    As the world moves forward, one man—the one who started it all—might be conspicuously absent. Last year, Schrems expressed a desire to step back from data privacy law. If anything, he said, it was time for others to take up the mantle.

    “I know I’m going to be deeply engaged, especially at the beginning, but in the long run [Noyb] should absolutely not be Max’s personal NGO,” Schrems told The Register in a January 2018 interview. Asked to clarify about his potential future beyond privacy advocacy, Schrems said: “It’s retirement from the first line of defense, let’s put it that way… I don’t want to keep bringing cases for the rest of my life.”

    Surprisingly, for all of Schrems’ public-facing and public-empowering work, his interviews and blog posts sometimes portray him as a deeply humble, almost shy individual, with a down-to-earth sense of humor, too. When asked during a 2016 podcast interview if he felt he would be remembered in the same vein as Edward Snowden, Schrems bristled.

    “Not at all, actually,” Schrems said. “What I did is a very conservative approach. You go to the courts, you have your case, you bring it and you do your thing. What Edward Snowden did is a whole different ballgame. He pretty much gave up his whole life and has serious possibilities to some point end up in a US prison. The worst thing that happened to me so far was to be on that security list of US flights.”

During the same interview, Schrems also played down his search-result popularity.

    “Everyone knows your name now,” the host said. “If you Google ‘Schrems,’ the first thing that comes up is ‘Max Schrems’ and your case.”

    “Yeah but it’s also a very specific name, so it’s not like ‘Smith,’” Schrems said, laughing. “I would have a harder time with that name.”

    If anything, the popularity came as a surprise to Schrems. Last year, in speaking to Bloomberg, he described Facebook as a “test case” when filing his original 22 complaints.

    “I thought I’d write up a few complaints,” Schrems said. “I never thought it would create such a media storm.”

    Glancy described Schrems’ initial investigation into Facebook in much the same way. It started not as a vendetta, she said, but as a courtesy.

    “He started out with a really charitable view of [Facebook],” Glancy said. “At some level, he was trying to get Facebook to wake up and smell the coffee.”

That’s the Schrems that Glancy knows best, a multi-faceted individual who makes time for others and holds various interests. A man committed to public service, not public spotlight. A man who still calls and emails her with questions about legal strategy and privacy law. A man who drove down the California coast with some friends during spring break. Maybe even a man who is tired of being seen only as a flag-bearer for online privacy. (He describes himself on his Twitter profile as “(Luckily not only) Law, Privacy and Politics.”)

    “At some level, he considers himself a consumer lawyer,” Glancy said. “He’s interested in the ways in which to empower the little guy, who is kind of abused by large entities that—it’s not that they’re targeting them, it’s that they just don’t care. [The people’s] rights are not being taken account of.”

    With GDPR in place, those rights, and the people they apply to, now have a little more firepower.

    The post Max Schrems: lawyer, regulator, international man of privacy appeared first on Malwarebytes Labs.

    Security roundup: February 2019

    We round up interesting research and reporting about security and privacy from around the web. This month: security as a global business risk, insured vs protected, a 12-step programme, subject access requests made real, French fine for Google, and an imperfect getaway.

    Risks getting riskier

    Some top ten lists are not the kind you want to appear on. Data theft and cyber attacks both featured in the World Economic Forum’s Global Risks Report 2019. Only threats relating to extreme weather, climate change and natural disasters ranked above both security risks.

    The report is based on a survey which asked 1,000 decision makers to rate global risks by likelihood over a 10-year horizon. As ZDNet reports, 82 per cent of those surveyed believe there’s an increased risk of cyberattacks leading to the theft of money and data. Some 80 per cent believe there’s a greater risk of cyberattacks disrupting operations.

    The report also refers to the increased risk of cyberattacks against critical infrastructure, along with concerns about identity theft and decreasing privacy. The WEF’s overview includes a video of a panel discussing the risks, and the report itself is free to download.

    Insuring against cyber attacks

Thinking of buying cyber risk insurance in the near future? The legal spat between Mondelez and Zurich might give reason to reconsider. The US food company sued its insurer for refusing to pay a $100 million claim for ransomware damages. NotPetya left Mondelez with 1,700 unusable servers and 24,000 permanently broken laptops. Zurich called this “a hostile or warlike action” by a government or foreign power, which it said therefore excluded it from cover.

    As InfoSecurity’s story suggests, Zurich might have been on safer ground by invoking a gross negligence clause instead, since Mondelez got hit not once but twice. And where does this leave victims? “Just because you have car insurance does not mean you won’t have a car crash. Just because you have cyber insurance does not mean you won’t have a breach,” said Brian Honan.

    Lesley Carhart of Dragos Security said the case would have implications for cyber insurance sales and where CISOs spend money. “Not only is Zurich’s claim apparently that nation state adversaries can’t be insured against, but it adds the ever tenuous question of attribution to insurance claims,” she wrote.

    The 12 steps to better cybersecurity

    Somewhat under the radar, but no less welcome for that, Ireland’s National Cyber Security Centre has published guidance on cybersecurity for Irish businesses. It’s a high-level document that takes the form of a 12-step guide. It’s written in non-technical language, clearly intended for a wide audience. The steps include tips like getting senior management support for a cybersecurity strategy. The full report is free to download from here. We’ve taken a deep dive into the contents and you can read our thoughts here.

    Fight for your right to part…ake of your data

    GDPR obliges companies to cough up the personal data they hold about us on request, but what does that mean in practice? Journalist Jon Porter exercised his right to a subject access request with Apple, Amazon, Facebook, and Google. Just under 138GB of raw data later, he discovered that little of the information was in a format he could easily understand. If some of the world’s biggest tech companies are struggling with this challenge, what does that say for everyone else? It’s a fascinating story, available here.

    Google grapples French fine

    And speaking of all things GDPR-related, France’s data protection regulator CNIL has hit Google with a €50 million fine for violating the regulation. The CNIL claims Google didn’t make its data collection policies transparent enough and didn’t obtain sufficient, specific consent for personalising ads.

    As Brian Honan wrote in the SANS Institute newsletter: “While the €50 million fine is the item grabbing the headlines, the key issue here is the finding by CNIL of the unlawfulness of Google’s approach to gathering people’s personal data. This will have bigger implications for Google, and many other organisations, in how they ensure they legally gather and use people’s personal data in line with the GDPR.”

    You can run, but you can’t hide

Here’s a cautionary tale about the dangers of oversharing personal data on smart devices. UK police collared a hitman for an unsolved murder after data from his GPS watch linked him to scouting expeditions of the crime scene. Runner’s World covered the story and the Liverpool Echo published CCTV footage of an alleged recon trip near the victim’s home.

    It’s an extreme example maybe, but the story shows how heavy our digital footprints can be (running shoes or not). Social media sharing can also be a security risk for a company’s remote workers. Trend Micro’s Bob McArdle outlined this very subject in his excellent Irisscon 2018 presentation. Social engineering expert Lisa Forte tweeted that she can gather intel about target companies from what their employees post online.

    Things we liked

    Protector, puzzle master, moral crusader, change agent: the many faces of a CISO. MORE

    And another thing: want to be a good security leader? Learn to tell a good story first. MORE

    Making the contentious case that breaches can be a good thing, and aren’t automatically bad for business. MORE

Google Chrome, used by almost two-thirds of web users, has a new plugin that warns users when entering a username/password combination that’s been detected in a data breach. MORE

    An offer you couldn’t retweet: meeting the godfather of fake news. MORE

    The Council to Secure the Digital Economy (CSDE) has published a guide to help protect the Internet from botnets. The International Anti-Botnet Guide will be updated every year. MORE

    ENISA has released a study of CSIRTs and incident response capabilities in Europe to 2025. MORE

    The post Security roundup: February 2019 appeared first on BH Consulting.

    43% of Cybercrimes Target Small Businesses – Are You Next?

Cybercrimes cost UK small companies an average of £894 in the year ending February 2018. Small businesses are an easy target for cybercrooks, so it is little surprise that around 43% of cybercrime is committed against small businesses. According to research conducted by EveryCloud, there is much more at stake than a £900 annual loss: six out of ten small businesses close within six months of a data breach.

Damage to a small company’s reputation can be difficult to repair and recover from following a data breach. Since the GDPR data privacy law came into force in May 2018, companies face significant financial sanctions from regulators if found negligent in safeguarding personal information. Add in the potential for civil suits and the costs start mounting fast, which could even turn a breach into a business killer. A case in point is political consulting and data mining firm Cambridge Analytica, which went under in May 2018 after being implicated in data privacy issues related to its use of personal data held on Facebook. However, most small businesses taken out by cyber attacks don’t have the public profile to make the headlines.

Most big companies have contingency plans and the resources to take the hit from a major cyber attack; although such attacks prove highly costly to big business, the vast majority are able to recover and continue trading. Working on tight budgets, small businesses just don’t have the deep pockets of big business. Cyber resilience is not a high priority in most small business strategies; as you might imagine, business plans are typically focused on growth.

Cyber resilience within a small business need not be difficult, but it does involve going beyond installing antivirus. A great starting point is the UK National Cyber Security Centre’s Cyber Essentials Scheme, a simple but effective approach to help businesses protect themselves from the most common cyber attacks. You’ll also need to pay attention to staff security awareness training in the workplace.

    Every employee must ensure that the company is protected from attacks as much as possible. It’s your responsibility to make sure that everyone understands this and knows what preventative measures to put in place.

    It may cost a few bob, but getting an expert in to check for holes in your cybersecurity is a good place to start. They can check for potential risk areas and also educate you and your staff about security awareness.

    We all know the basics, but how many times do we let convenience trump good common sense? For example, how many times have you used the same password when registering for different sites?

    How strong is the password that you chose? If it’s easy for you to remember, then there’s a good chance that it’s not as secure as you’d like. If you’d like more tips on keeping your information secure, then check out the infographic below.
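One practical check on password reuse is whether a password has already appeared in a known breach. Have I Been Pwned’s Pwned Passwords API supports this via a k-anonymity range query: the client hashes the password with SHA-1 and sends only the first five hex characters of the digest, so the password itself never leaves the machine. A minimal Python sketch of the client-side hashing step (the range endpoint is real; the helper name is our own):

```python
import hashlib

def pwned_range_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 digest for a k-anonymity range query.

    Only the 5-character prefix would be sent to the Pwned Passwords
    endpoint (https://api.pwnedpasswords.com/range/<prefix>), which
    returns every known hash suffix for that prefix. You then look for
    your own suffix locally, so neither the full hash nor the password
    is ever transmitted.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_range_parts("password")
print(prefix)  # 5BAA6 — this prefix alone matches many unrelated hashes
```

Comparing the returned suffix list against `suffix` locally completes the check without revealing anything useful to the server.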

    No-deal Brexit and GDPR: here’s what you need to know

    Business craves certainty and Brexit is currently giving us anything but. At the time of writing, it’s looking increasingly likely that Britain will leave the EU without a withdrawal agreement. This blog rounds up the latest developments on data protection after a no-deal Brexit. (Appropriately, we’re publishing on Data Protection Day, the international campaign to raise public awareness about privacy rights and protecting data.)

Under the General Data Protection Regulation, no deal would mean the UK will become a ‘third country’ outside of the European Economic Area. Last week, the Minister for Data Protection Pat Breen said a no-deal Brexit would have a “profound effect” on personal data transfers into the UK from the EU. Speaking at the National Data Protection Conference, he pointed out that although Brexit commentary has focused on trade in goods, services activity relies heavily on flows of personal data to and from the UK.

    “In the event of a ‘no-deal’ Brexit, the European Commission has clarified that no contingency measures, such as an ‘interim’ adequacy decision, are foreseen,” the minister said.

This means personal data transfers can’t continue as they do today. At 11pm GMT on Friday 29 March 2019, the UK will legally leave the European Union. All transfers of data between Ireland and the UK or Northern Ireland will then be considered international transfers.

    Keep calm and carry on

    Despite the ongoing uncertainty, there are backup measures, as the Minister pointed out. “While Brexit does give rise to concerns, it should not cause alarm. The GDPR explicitly provides for mechanisms to facilitate the transfer of personal data in the event of the United Kingdom becoming a third country in terms of its data protection regime,” he said.

    The latest advice from the Data Protection Commissioner is that Irish-based organisations will need to implement legal safeguards to transfer personal data to the UK after a no-deal Brexit. The DPC’s guidance outlined some typical scenarios if the UK becomes a third country.

    “For example, if an Irish company currently outsources its payroll to a UK processor, legal safeguards for the personal data transferred to the UK will be required. If an Irish government body uses a cloud provider based in the UK, it will also require similar legal safeguards. The same will apply to a sports organisation with an administrative office in Northern Ireland that administers membership details for all members in Ireland and Northern Ireland,” it said.

    Some organisations and bodies in Ireland will already be familiar with the legal transfer mechanisms available for the transfer of personal data to recipients outside of the EU, as they will already be transferring to the USA or India, for example.

    Next steps for ‘third country’ status

    BH Consulting’s senior data protection consultant Tracy Elliott says that data protection officers should take these steps to prepare for the UK’s ‘third country’ status under a no-deal Brexit.

•	review their organisation’s processing activities

•	identify what data they transfer to the UK

•	check if that includes data about EU citizens

    “Consider your options of using a contract or possibly changing that supplier. If your data is hosted on servers in the UK, contact your hosting partner and find out what options are available,” she said.

Larger international companies may already have data sharing frameworks in place, but SMEs that routinely deal with the UK, or that have subsidiaries in the UK, might not have considered this issue yet. All communication between them, even if they’re part of the same group structure, will need to be covered contractually for data sharing. “There are five mechanisms for doing this, but the simplest and quickest way to do this is to roll out model contract clauses, or MCCs. They are a set of guidelines issued by the EU,” Tracy advised.

    Sarah Clarke, a specialist in privacy, security, governance, risk and compliance with BH Consulting, points out that using MCCs has its own risks. The clauses are due for an update to bring them into line with GDPR. Meanwhile the EU-US data transfer mechanism known as Privacy Shield is still not finalised, she added.

    In the short term, however, MCCs are sufficient both for international transfers between legal entities in one organisation, and for transfers between different organisations. “For intra-group transfers, binding corporate rules are too burdensome to implement ‘just in case’. You can switch if the risk justifies it when there is more certainty,” Sarah Clarke said.

    Further reading

    The European Commission website has more information on legal mechanisms for transferring personal data to third countries. The UK Information Commissioner’s Office has a recent blog that deals with personal data flows post-Brexit. You can also check the Data Protection Commission site for details about transfer mechanisms and derogations for specific situations. The DPC also advises checking back regularly for updates between now and Brexit day.

    The post No-deal Brexit and GDPR: here’s what you need to know appeared first on BH Consulting.

    Privacy and Security by Design: Thoughts for Data Privacy Day

    Data Privacy Day has particular relevance this year, as 2018 brought privacy into focus in ways other years have not. Ironically, in the same year that the European Union’s (EU) General Data Protection Regulation (GDPR) came into effect, the public also learned of glaring misuses of personal information and a continued stream of personal data breaches. Policymakers in the United States know they cannot ignore data privacy, and multiple efforts are underway: bills were introduced in Congress, draft legislation was floated, privacy principles were announced, and a National Institute of Standards and Technology (NIST) Privacy Framework and a National Telecommunications and Information Administration (NTIA) effort to develop the administration’s approach to consumer privacy are in process.

    These are all positive steps forward, as revelations about widespread misuse of personal data are causing people to mistrust technology—a situation that must be remedied.

    Effective consumer privacy policies and regulations are critical to the continued growth of the U.S. economy, the internet, and the many innovative technologies that rely on consumers’ personal data. Companies need clear privacy and security expectations to not only comply with the diversity of existing laws, but also to grow businesses, improve efficiencies, remain competitive, and most importantly, to encourage consumers to trust organizations and their technology.

    If an organization puts the customer at the core of everything it does, as we do at McAfee, then protecting customers’ data is an essential component of doing business. Robust privacy and security solutions are fundamental to McAfee’s strategic vision, products, services, and technology solutions. Likewise, our data protection and security solutions enable our enterprise and government customers to more efficiently and effectively comply with regulatory requirements.

    Our approach derives from seeing privacy and security as two sides of the same coin. You can’t have privacy without security. While you can have security without privacy, we strongly believe the two should go hand in hand.

    In comments we submitted to NIST on “Developing a Privacy Framework,” we made the case for Privacy and Security by Design. This approach requires companies to consider privacy and security on the drawing board and throughout the development process for products and services going to market. It also means protecting data through a technology design that considers privacy engineering principles. This proactive approach is the most effective way to enable data protection because the data protection strategies are integrated into the technology as the product or service is created. Privacy and Security by Design encourages accountability in the development of technologies, making certain that privacy and security are foundational components of the product and service development processes.

    The concept of Privacy and Security by Design is aspirational but is absolutely the best way to achieve privacy and security without end users having to think much about them. We have some recommendations for organizations to consider in designing and enforcing privacy practices.

    There are several layers that should be included in the creation of privacy and data security programs:

    • Internal policies should clearly articulate what is permissible and impermissible.
    • Specific departments should specify further granularity regarding policy requirements and best practices (e.g., HR, IT, legal, and marketing will have different requirements and restrictions for the collection, use, and protection of personal data).
    • Privacy (legal and non-legal) and security professionals in the organization must have detailed documentation and process tools that streamline the implementation of the risk-based framework.
    • Ongoing organizational training regarding the importance of protecting personal data and best practices is essential to the continued success of these programs.
• The policy requirements should be tied to the organization’s code of conduct and enforced as required when policies are violated.

    Finally, an organization must have easy-to-understand external privacy and data security policies to educate the user/consumer and to drive toward informed consent to collect and share data wherever possible. The aim must be to make security and privacy ubiquitous, simple, and understood by all.

As we acknowledge Data Privacy Day this year, we hope that privacy will not only be a talking point for policymakers but that it will also result in action. Constructing and agreeing upon U.S. privacy principles through legislation or a framework will be a complicated process. We had better start now, because we’re already behind many other countries around the globe.

    The post Privacy and Security by Design: Thoughts for Data Privacy Day appeared first on McAfee Blogs.

    Why other Hotel Chains could Fall Victim to a ‘Marriott-style’ Data Breach

    A guest article authored by Bernard Parsons, CEO, Becrypt

    Whilst I am sure more details behind the Marriott data breach will slowly come to light over the coming months, there is already plenty to reflect on given the initial disclosures and accompanying hypotheses.

With the prospects of regulatory fines and lawsuits looming, the sheer magnitude of the numbers involved is naturally alarming. The figure of up to 500 million records containing personal and potentially financial information is quite staggering. In the eyes of the Information Commissioner’s Office (ICO), this is deemed a ‘Mega Breach’, even though it falls short of the Yahoo data breach. But equally concerning are the various timeframes reported.

    Marriott said the breach involved unauthorised access to a database containing Starwood properties guest information, on or before 10th September 2018. Its ongoing investigation suggests the perpetrators had been inside the company’s networks since 2014.

    Starwood disclosed its own breach in November 2015 that stretched back to at least November 2014. The intrusion was said to involve malicious software installed on cash registers and other payment systems, which were not part of its guest reservations or membership systems.

The extent of Marriott’s regulatory liabilities will be determined by a number of factors not yet fully in the public domain. For GDPR this will include the date at which the ICO was informed, the processes Marriott has undertaken since discovery, and the extent to which it followed ‘best practice’ prior to, during and after breach discovery. Despite the magnitude and nature of the breach, it is not impossible to imagine that Marriott might have followed best practice, albeit such a term is not currently well defined; it is fairly easy to imagine that its processes and controls reflect common practice.

A quick internet search reveals just how commonplace and seemingly inevitable the industry’s breaches are. In December 2016, a pattern of fraudulent transactions on credit cards was reportedly linked to use at InterContinental Hotels Group (IHG) properties. IHG stated that the intrusion resulted from malware installed on point-of-sale systems at restaurants and bars of 12 properties in 2016, and later, in April 2017, acknowledged that cash registers at more than 1,000 of its properties were compromised.

According to KrebsOnSecurity, other reported card breaches include Hyatt Hotels (October 2017), the Trump Hotel (July 2017), Kimpton Hotels (September 2016), Mandarin Oriental properties (2015), and Hilton Hotel properties (2015).

Perhaps, therefore, the most important lessons to be learnt in response to such breaches are those that seek to understand the factors that make data breaches all but inevitable today. Whilst it is Marriott in the news this week, the challenges we collectively face are systemic, and it could very easily be another hotel chain next week.

Reflecting on the role of payment (EPOS) systems and cash registers within leisure industry breaches is illustrative of the challenge. Paste the phrase ‘EPOS software’ into your favourite search engine, and see how prominent, or indeed absent, the notion of security is. Is it any wonder that organisations often unwittingly connect devices with common and often unmanaged vulnerabilities to systems that may at the same time be used to process sensitive data? Many EPOS systems effectively run general purpose operating systems, but are typically subject to fewer controls and less monitoring than conventional IT systems.

    So Why is This?
Often the organisation can’t justify having a full-blown operating system and sophisticated defence tools on these systems, especially when it has a large number of them deployed out in the field, accessing bespoke or online applications. Often they are in widely geographically dispersed locations, which means there are significant costs to go out and update, maintain, manage and fix them.

    Likewise, organisations don’t always have the local IT resource in many of these locations to maintain the equipment and its security themselves.

    Whilst a light is currently being shone on Marriott, perhaps our concerns should be far broader. If the issues are systemic, we need to think about how better security is built into the systems and supply chains we use by default, rather than expecting hotels or similar organisations in other industries to be sufficiently expert. Is it the hotel, as the end user that should be in the headlines, or how standards, expectations and regulations apply to the ecosystem that surrounds the leisure and other industries? Or should the focus be on how this needs to be improved in order to allow businesses to focus on what they do best, without being quite such easy prey?

    CEO and co-founder of Becrypt

    Marriott Hotels 4 Year Hack Impacts Half a Billion Guests!

    A mammoth data breach was disclosed by hotel chain Marriott International today (30 Nov 18), with a massive 500 million customer records said to have been compromised by an "unauthorized party". 
    The world's largest hotel group launched an internal investigation in response to a system security alert on 8th September 2018, and found an attacker had been accessing the hotel chain's "Starwood network" and customer personal data since 2014, copying and encrypting customer records. In addition to the Marriott brand, Starwood includes W Hotels, Sheraton, Le Méridien and Four Points by Sheraton. 

    You are at risk if you have stayed at any of the above hotel brands in the last 4 years

The Marriott statement said that for around 326 million of its guests, the personal information compromised included "some combination" of name, address, phone number, email address, passport number, date of birth, gender, and arrival and departure information. The hotelier also said encrypted payment card data was copied, and it could not rule out that the encryption keys needed to decrypt cardholder data had also been stolen.

The hotel giant said it would notify customers affected and offer some of them a fraud detection service free for a year, so I expect they will be making contact with me soon. In the meantime, Marriott has launched a website for affected customers and a free helpline for concerned UK customers: 0808 189 1065.

The UK ICO said it would be investigating the breach, and warned those who believe they are impacted to be extra vigilant and to follow the advice on the ICO website and from the National Cyber Security Centre. The hotel chain could face huge fines under the GDPR, and possibly a large-scale class action lawsuit by its affected guests, which could cost it millions of pounds.

What I really would like to know is why the hotel chain had retained such vast numbers of guest records after their stays. Why it held its customers' passport details, and whether those encryption keys were stolen or not. And finally, why the unauthorised access went undetected for four years.

    Tom Kellermann, Chief Cybersecurity Officer for Carbon Black, said "It appears there had been unauthorised access to the Starwood network since 2014, demonstrating that attackers will get into an enterprise and attempt to remain undetected. A recent Carbon Black threat report found that nearly 60% of attacks now involve lateral movement, which means attackers aren’t just going after one component of an organisation - they’re getting in, moving around and seeking more targets as they go."

“The report also found that 50% of today’s attackers now use the victim primarily for island hopping. In these campaigns, attackers first target an organisation's affiliates, often smaller companies with immature security postures, and this can often be the case during an M&A. This means that data at every point in the supply chain may be at risk, from customers, to partners and potential acquisitions.”

Jake Olcott, VP of Strategic Partnerships at BitSight, said "Following the breaking news today that Marriott’s Starwood bookings database has been compromised, with half a billion people affected, it highlights the importance of organisations undertaking sufficient security posture checks to avoid such compromises. Marriott’s acquisition of Starwood in 2016 allowed it to utilise the Starwood customer database. Therefore, proactive due diligence during this acquisition period would have helped Marriott to identify the potential cybersecurity risks, and the impact of a potential breach".

“This is yet another example of why it is critical that companies perform cybersecurity analysis during the due diligence period, prior to an acquisition or investment. Traditionally, companies have approached cyber risk in acquisitions by issuing questionnaires to the target company; unfortunately, these methods are time consuming and reflect only a “snapshot in time” view.

    “Understanding the cybersecurity posture of an investment is critical to assessing the value of the investment and considering reputational, financial, and legal harm that could befall the company. After an investment has been made, continuous monitoring is essential.”

    GDPR Material and Territorial Scopes

The new EU General Data Protection Regulation will enter into force on 25 May of this year. The GDPR contains rules concerning the protection of natural persons when their personal data are processed and rules on the free movement of personal data. The new regulation is not revolutionary but an evolution from the previous Data Protection Act 1998 […]