Category Archives: data privacy

Facebook Faces Criminal Investigation over Data Handling Partnerships

Facebook’s troubles seem never-ending, as the company now faces a criminal investigation by federal prosecutors for its data-sharing practices and partnerships with global tech companies, writes the New York Times. More than 150 companies including Netflix, Spotify, Apple, Microsoft, Sony and Amazon have apparently “cut sharing deals” to get access to user data without users’ knowledge.

A grand jury has already subpoenaed documents from big names in the smartphone and gadget manufacturing industry accused of gaining access to the data of hundreds of millions of Facebook accounts.

“Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent … and gave Netflix and Spotify the ability to read Facebook users’ private messages,” according to 2017 records obtained by the New York Times in 2018.

Soon after, Facebook’s Konstantinos Papamiltiadis, Director of Developer Platforms and Programs, defended the partnerships, claiming “none of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC.”

Shortly before, he had explained that “this work was about helping people do two things. First, people could access their Facebook accounts or specific Facebook features on devices and platforms built by other companies like Apple, Amazon, Blackberry and Yahoo. These are known as integration partners. Second, people could have more social experiences – like seeing recommendations from their Facebook friends – on other popular apps and websites, like Netflix, The New York Times, Pandora and Spotify.”

This is not the first time the company has come under scrutiny for allegedly shady practices, and the Cambridge Analytica scandal won’t soon be forgotten. As a result, Facebook might be looking at a multibillion-dollar fine from the FTC.

According to privacy advocates, Facebook allegedly violated an agreement with the FTC by sharing “data in ways that deceived consumers,” so officials are now investigating multiple accusations.

Facebook is cooperating with law enforcement.

HOTforSecurity: Facebook Faces Criminal Investigation over Data Handling Partnerships

HOTforSecurity

Google’s Nest fiasco harms user trust and invades their privacy

Technology companies, lawmakers, privacy advocates, and everyday consumers likely disagree about exactly how a company should go about collecting user data. But, following a trust-shattering move by Google last month regarding its Nest Secure product, consensus on one issue has emerged: Companies shouldn’t ship products that can surreptitiously spy on users.

Failing to disclose that a product can collect information from users in ways they couldn’t have reasonably expected is bad form. It invades privacy, breaks trust, and robs consumers of the ability to make informed choices.

While collecting data on users is nearly inevitable in today’s corporate world, secret, undisclosed, or unpredictable data collection—or data collection abilities—is another problem.

A smart-home speaker shouldn’t be secretly hiding a video camera. A secure messaging platform shouldn’t have a government-operated backdoor. And a home security hub that controls an alarm, keypad, and motion detector shouldn’t include a clandestine microphone feature—especially one that was never announced to customers.

And yet, that is precisely what Google’s home security product includes.

Google fumbles once again

Last month, Google announced that its Nest Secure would be updated to work with Google Assistant software. Following the update, users could simply utter “Hey Google” to access voice controls on the product line-up’s “Nest Guard” device.

The main problem, though, is that Google never told users that its product had an internal microphone to begin with. Nowhere inside the Nest Guard’s hardware specs, or in its marketing materials, could users find evidence of an installed microphone.

When Business Insider broke the news, Google fumbled ownership of the problem: “The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” a Google spokesperson said. “That was an error on our part.”

Customers, academics, and privacy advocates balked at this explanation.

“This is deliberately misleading and lying to your customers about your product,” wrote Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation.

“Oops! We neglected to mention we’re recording everything you do while fronting as a security device,” wrote Scott Galloway, professor of marketing at the New York University Stern School of Business.

The Electronic Privacy Information Center (EPIC) spoke in harsher terms: Google’s disclosure failure wasn’t just bad corporate behavior, it was downright criminal.

“It is a federal crime to intercept private communications or to plant a listening device in a private residence,” EPIC said in a statement. In a letter, the organization urged the Federal Trade Commission to take “enforcement action” against Google, with the hope of eventually separating Nest from its parent. (Google purchased Nest in 2014 for $3.2 billion.)

Days later, the US government stepped in. The Senate Select Committee on Commerce sent a letter to Google CEO Sundar Pichai, demanding answers about the company’s disclosure failure. Whether Google was actually recording voice data didn’t matter, the senators said, because hackers could still have taken advantage of the microphone’s capability.

“As consumer technology becomes ever more advanced, it is essential that consumers know the capabilities of the devices they are bringing into their homes so they can make informed choices,” the letter said.

This isn’t just about user data

Collecting user data is essential to today’s technology companies. It powers Yelp recommendations based on a user’s location, product recommendations based on an Amazon user’s prior purchases, and search results based on a Google user’s history. Collecting user data also helps companies find bugs, patch software, and retool their products to their users’ needs.

But some of that data collection is visible to the user. And when it isn’t, it can at least be learned by savvy consumers who research privacy policies, read tech specs, and compare similar products. Other home security devices, for example, advertise the ability to trigger alarms at the sound of broken windows—a functionality that demands a working microphone.

Google’s failure to disclose its microphone prevented even the most privacy-conscious consumers from knowing what they were getting in the box. It is nearly the exact opposite of the approach that rival smart speaker maker Sonos took when it installed a microphone in its own device.

Sonos does it better

In 2017, Sonos revealed that its newest line of products would eventually integrate with voice-controlled smart assistants. The company opted for transparency.

Sonos updated its privacy policy and published a blog about the update, telling users: “The most important thing for you to know is that Sonos does not keep recordings of your voice data.” Further, Sonos eventually designed its speaker so that, if an internal microphone is turned on, so is a small LED light on the device’s control panel. These two functions cannot be separated—the LED light and the internal microphone are hardwired together. If one receives power, so does the other.
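Sonos’s hardwired design is an instance of a broader fail-safe pattern: derive the indicator and the capability from a single shared state, so no control path can enable one without the other. A minimal software analogy of that coupling (all names hypothetical; this is an illustrative sketch, not actual Sonos code):

```python
class CoupledMicrophone:
    """Toy model of a microphone whose status LED cannot be controlled
    separately: both read the same underlying power state, mirroring a
    shared hardware power rail."""

    def __init__(self):
        self._powered = False  # single source of truth for mic AND LED

    def enable(self):
        self._powered = True

    def disable(self):
        self._powered = False

    @property
    def mic_on(self):
        return self._powered

    @property
    def led_on(self):
        # Deliberately no separate setter: the LED can never diverge
        # from the microphone's state.
        return self._powered


mic = CoupledMicrophone()
mic.enable()
print(mic.mic_on, mic.led_on)  # both True: they always move together
```

Because there is no interface to toggle the LED alone, the guarantee holds by construction rather than by convention, which is precisely the property Sonos built into its hardware.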

While this function has upset some Sonos users who want to turn off the microphone light, the company hasn’t budged.

A Sonos spokesperson said the company values its customers’ privacy because it understands that people are bringing Sonos products into their homes. Adding a voice assistant to those products, the spokesperson said, resulted in Sonos taking a transparent and plain-spoken approach.

Now compare this approach to Google’s.

Consumers purchased a product that they trusted—quite ironically—with the security of their homes, only to realize that, by purchasing the product itself, their personal lives could have become less secure. This isn’t just a company failing to disclose the truth about its products. It’s a company failing to respect the privacy of its users.

A microphone in a home security product may well be a useful feature that many consumers will not only endure but embrace. In fact, internal microphones are available in many competitor products today, suggesting their popularity. But a secret microphone installed without user knowledge instantly erodes trust.

As we showed in our recent data privacy report, users care a great deal about protecting their personal information online and take many steps to secure it. To win over their trust, businesses need to responsibly disclose features included in their services and products—especially those that impact the security and privacy of their customers’ lives. Transparency is key to establishing and maintaining trust online.

The post Google’s Nest fiasco harms user trust and invades their privacy appeared first on Malwarebytes Labs.

How CISOs Can Facilitate the Advent of the Cognitive Enterprise

Just as organizations are getting more comfortable with leveraging the cloud, another wave of digital disruption is on the horizon: artificial intelligence (AI), and its ability to drive the cognitive enterprise.

In early 2019, the IBM Institute for Business Value (IBV) released a new report titled “The Cognitive Enterprise: Reinventing your company with AI.” The report highlights key benefits and provides a roadmap to becoming a cognitively empowered enterprise, a term used to indicate an advanced digital enterprise that fully leverages data to drive operations and push its competitiveness to new heights.

Such a transformation is only possible with the extensive use of AI in business and technology platforms to continuously learn and adapt to market conditions and customer demand.

CISOs Are Key to Enabling the Cognitive Enterprise

The cognitive enterprise is an organization with an unprecedented level of convergence between technology, business processes and human capabilities, designed to achieve competitive advantage and differentiation.

To enable such a change, the organization will need to leverage more advanced technology platforms and must no longer be limited to dealing only with structured data. New, more powerful business platforms will enable a competitive advantage by combining data, unique workflows and expertise. Internal-facing platforms will drive more efficient operations while external-facing platforms will allow for increased cooperation and collaboration with business partners.

Yet these changes will also bring along new types of risks. In the case of the cognitive enterprise, many of the risks stem from the increased reliance on technology to power more advanced platforms — including AI and the internet of things (IoT) — and the need to work with a lot more data, whether it’s structured, unstructured, in large volume or shared with partners.

As the trusted adviser of the organization, the chief information security officer (CISO) has a strong role to play in enabling and securing the organization’s transformation toward:

  • Operational agility, powered in part by the use of new and advanced technologies, such as AI, 5G, blockchain, 3D printing and the IoT.

  • Data-driven decisions, supported by systems able to recognize and provide actionable insights based on both structured and unstructured data.

  • Fluid boundaries with multiple data flows going to a larger ecosystem of suppliers, customers and business partners. Data is expected to be shared and accessible to all relevant parties.

[Figure: the relationship between data, processes, people, outside forces, and internal drivers (automation, blockchain, AI). Source: IBM Institute for Business Value (IBV) analysis.]

Selection and Implementation of Business Platforms

Among the major tasks facing organizations embarking on this transformation is the need to choose and deploy new mega-systems, equivalent to the monumental task of switching enterprise resource planning (ERP) systems — or, in some cases, actually making the switch.

The choice of a new platform will impact many areas across the enterprise, including HR and capital allocation processes, in addition to the obvious impact on how the business delivers value via its product or service. Yet, as the IBM IBV report points out, the benefits can be significant. Leading organizations have been able to deliver higher revenues — as high as eight times the average — by adopting new business and technology platforms and fully leveraging all their data, both structured and unstructured.

That said, having large amounts of data doesn’t automatically translate into an empowered organization. As the report cautions, organizations can no longer simply “pour all their data into a data lake and expect everyone to go fishing.” The right digital platform choice can empower the organization to deliver enhanced profits or squeeze additional efficiency, but only if the data is accurate and can be readily accessed.

Once again, the CISO has an important role to play in ensuring the organization has considered all the implications of implementing a new system, so governance will be key.

Data Governance — When Security and Privacy Converge

For the organization to achieve the level of trust needed to power cognitive operations, the CISO will need to drive conversations and choices about the security and privacy of sensitive data flowing across the organization. Beyond the basic tenets of confidentiality, integrity and availability, the CISO will need to be fully engaged on data governance, ensuring data is accurate and trustworthy. For data to be trusted, the CISO will need to review and guarantee the data’s provenance and lineage. Yet the report mentions that, for now, fewer than half of organizations have developed “a systemized approach to data curation,” so there is much progress to be made.

Organizations will need to balance larger amounts of data — several orders of magnitude larger — with greater access to this data by both humans and machines. They will also need to balance security with seamless customer and employee experiences. To handle this data governance challenge, CISOs must ensure the data flows with external partners are frictionless yet also provide security and privacy.

AI Can Enable Improved Cybersecurity

The benefits of AI aren’t limited to the business side of the organization. In 2016, IBM quickly recognized the benefits cognitive security could bring to organizations that leverage artificial intelligence in the cybersecurity domain. As attackers explore more advanced and more automated attacks, organizations simply cannot afford to rely on slow, manual processes to detect and respond to security incidents. Cognitive security will enable organizations to improve their ability to prevent and detect threats, as well as accelerate and automate responses.

Leveraging AI as part of a larger security automation and orchestration effort has clear benefits. The “2018 Cost of a Data Breach Study,” conducted by Ponemon Institute, found that security automation decreases the average total cost of a data breach by around $1.55 million. By leveraging AI, businesses can find threats up to 60 times faster than via manual investigations and reduce the amount of time spent analyzing each incident from one hour to less than one minute.

Successful Digital Transformation Starts at the Top

Whether your organization is ready to embark on the journey to becoming a cognitive enterprise or simply navigating through current digital disruption, the CISO is emerging as a central powerhouse of advice and strategy regarding data and technology, helping choose an approach that enables security and speed.

With the stakes so high — and rising — CISOs should get a head start on crafting their digital transformation roadmaps, and the IBM IBV report is a great place to begin.

The post How CISOs Can Facilitate the Advent of the Cognitive Enterprise appeared first on Security Intelligence.

An Apple a Day Won’t Improve Your Security Hygiene, But a Cyber Doctor Might

You might’ve begun to notice a natural convergence of cybersecurity and privacy. It makes sense that these two issues go hand-in-hand, especially since 2018 was littered with breaches that resulted in massive amounts of personally identifiable information (PII) making its way into the wild. These incidents alone demonstrate why an ongoing assessment of security hygiene is so important.

You may also see another convergence: techno-fusion. To put it simply, you can expect to see technology further integrating itself into our lives, whether it is how we conduct business, deliver health care or augment our reality.

Forget Big Data, Welcome to Huge Data

Underlying these convergences is the amount of data we produce, which poses an assessment challenge. According to IBM estimates, we produce 2.5 quintillion bytes of data every day. If you’re having problems conceptualizing that number — and you’re not alone — try rewriting it like this: 2.5 million terabytes of data every day.

Did that help? Perhaps not, especially since we are already in the Zettabyte era and the difficulty of conceptualizing how much data we produce is, in part, why we face such a huge data management problem. People are just not used to dealing with these numbers.
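For anyone who wants to sanity-check those figures, the unit conversions are straightforward. A quick sketch, assuming decimal (SI) units throughout:

```python
# Back-of-the-envelope check of the data-volume figures above (SI units).
BYTES_PER_DAY = 2.5e18  # "2.5 quintillion bytes" per IBM's estimate
TERABYTE = 1e12         # SI terabyte
ZETTABYTE = 1e21        # SI zettabyte

terabytes_per_day = BYTES_PER_DAY / TERABYTE    # 2,500,000 TB per day
days_per_zettabyte = ZETTABYTE / BYTES_PER_DAY  # 400 days

print(f"{terabytes_per_day:,.0f} TB per day")
print(f"about {days_per_zettabyte:,.0f} days to produce one zettabyte")
```

At roughly 400 days per zettabyte at today’s rate, it is easy to see why data management is already straining, even before 5G-driven IoT growth multiplies these numbers.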

With the deployment of 5G on the way — which will spark an explosion of internet of things (IoT) devices everywhere — today’s Big Data era may end up as a molehill in terms of data production and consumption. This is why how you manage your data going forward could be the difference between surviving and succumbing to a breach.

Furthermore, just as important as how you will manage your data is who will manage it, and who will help you do so.

Expect More Auditors

It’s not uncommon for larger organizations to use internal auditors to see what impact IT has on their business performance and financial reporting. With more organizations adopting some sort of cybersecurity framework (e.g., the Payment Card Industry Data Security Standard or NIST’s Framework for Improving Critical Infrastructure Cybersecurity), you can expect to hear more compliance and audit talk in the near future.

There is utility in having these internal controls. It’s a good way to maintain and monitor your organization’s security hygiene. It’s also one way to get internal departments to talk to each other. Just as IT professionals are not necessarily auditors, auditors are not necessarily IT professionals. But when they’re talking, they can learn from each other, which is always a good thing.

Yet internal-only assessments and controls come with their own set of challenges. To begin, the nature of the work is generally reactive. You can’t audit something you haven’t done yet. Sure, your audit could find that you need to do something, but the process itself may be very laborious, and by the time you figure out what you need to do, you may very well have an avalanche of new problems.

There are also territorial battles. Who is responsible for what? Who reports to whom? And my personal favorite: Who has authority? It’s a mess when you have all the responsibility and none of the authority.

Another, perhaps bigger problem is that internal controls may have blind spots. That’s why there is value in having a regular, external vulnerability assessment.

When It Comes to Your Security Hygiene, Don’t Self-Diagnose

Those in the legal and medical fields have undoubtedly been cautioned not to act as their own counsel or doctor. Perhaps we should consider similar advice for security professionals too. It’s not bad advice, considering a recent Ponemon Institute report found that organizations are “suffering from investments in disjointed, non-integrated security products that increase cost and complexity.”

Think about it like this: You, personally, have ultimate responsibility to take care of your own health. Your cybersecurity concerns are no different. Even at the personal level, if you take care of the basics, you’re doing yourself a huge favor. So do what you can to keep yourself in the best possible health.

Part of healthy maintenance normally includes a checkup with a doctor, even when you feel everything is perfectly fine. Assuming you’re happy with your doctor and have a trusting relationship, after an assessment and perhaps some tests, your doctor will explain to you, in a way that you are certain to understand, what is going on. If something needs a closer look or something requires immediate attention, you can take care of it. That’s the advantage of going to the doctor, even when you think you’re all right. They have the assessment tools and expertise you generally do not.

‘I Don’t Need a Doctor, I Feel Fine’

Undoubtedly, this is a phrase you have heard before, or have even invoked on your own. But cybersecurity concerns continue to grow, and internal resources remain overwhelmed, whether by the flood of alerts, financial constraints or understaffing. Therefore, outside assistance may be not only necessary but welcome, as that feeling of security fatigue has been around for some time now.

There is an added wildcard factor too: I’m confident many of us in the field have heard IT professionals say, “We’ve got this” with a straight face. My general rule of thumb is this: If attackers can get into the U.S. Department of Defense, they can get to you, so the “I feel fine” comment could very well include a dose of denial.

When considering external assistance — really just a vulnerability assessment — it’s worth thinking through the nuance of this question: Is your IT department there to provide IT services, or is it there to secure IT systems? I suggest the answer is not transparently obvious, and much of it will depend on your business mission.

Your IT team may be great at innovating and deploying services, but that does not necessarily mean its strengths also include cybersecurity audits/assessments, penetration testing, remediation or even operating intelligence-led analytics platforms. Likewise, your security team may be great at securing your networks, but that does not necessarily mean it understands your business limitations and continuity needs. And surely, the last thing you want to do is get trapped in some large capital investment that just turns into shelfware.

Strengthen Your Defenses by Seeing a Cyber Doctor

Decision-makers — particularly at the C-suite and board level, in tandem with the chief information security officer (CISO) and general counsel — should consider the benefits of a regular external assessment by trusted professionals who understand not only the cybersecurity landscape in real time, but also the business needs of the organization.

It’s simple: Get a checkup from a cyber doctor who will explain what’s up in simple language, fix it with help if necessary and then do what you can on your own. Or, get additional external help if needed. That’s it. That semiannual or even quarterly assessment could very well be that little bit of outside help that inoculates you from the nastiest of cyber bugs.

The post An Apple a Day Won’t Improve Your Security Hygiene, But a Cyber Doctor Might appeared first on Security Intelligence.

At RSAC 2019, It’s Clear the World Needs More Public Interest Technologists

Cybersecurity experts are no longer the only ones involved in the dialogue around data privacy. At RSA Conference 2019, it’s clear how far security and privacy have evolved since RSAC was founded in 1991. The 28th annual RSAC has a theme of “better,” a concept that speaks to the influence of technology on culture and people.

“Today, technology makes de facto policy that’s far more influential than any law,” said Bruce Schneier, fellow and lecturer at the Harvard Kennedy School, in his RSAC 2019 session titled “How Public Interest Technologists are Changing the World.”

“Law is forever trying to catch up with technology. And it’s no longer sustainable for technology and policy to be in different worlds,” Schneier said. “Policymakers and civil society need the expertise of technologists badly, especially cybersecurity experts.”

Public policy and personal privacy don’t always coexist peacefully. This tension is clear among experts from cryptography, government and private industry backgrounds at RSAC 2019. In the past year, consumer awareness and privacy regulations, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), have created an intensely public dialogue about data security for perhaps the first time in history.

The Cryptographer’s Panel, which opened the conference on Tuesday, delved into issues of policy, spurred in part by the fact that Adi Shamir — the “S” in RSA — was denied a visa to attend the conference. Bailey Whitfield Diffie, who co-invented public-key cryptography, directly addressed the tension between the legislature, personal privacy and autonomy. Other keynote speakers called for collaboration.

“We are not seeking to destroy encryption, but we are duty-bound to protect the people,” stated FBI Director Christopher Wray. “We need to come together to figure out a way to do this.”

Moving forward to create effective policy will require technical expertise and the advent of a new type of cybersecurity expert: the public interest technologist.

Why Policymakers Need Public Interest Technologists

“The problem is that almost no policymakers are discussing [policy] from a technologically informed perspective, and very few technologists truly understand the policy contours of the debate,” wrote Schneier in a blog post this week. “The result is … policy proposals — ­that occasionally become law­ — that are technological disasters.”

“We also need cybersecurity technologists who understand­ — and are involved in — ­policy. We need public-interest technologists,” Schneier wrote. This profession can be defined as a skilled individual who collaborates on tech policy or projects with a public benefit, or who works in a traditional technology career at an organization with a public focus.

The idea of the public interest technologist isn’t new. It has been formally defined by the Ford Foundation, and it’s the focus of a class taught by Schneier at the Harvard Kennedy School. However, it’s clear from the discussions at RSAC and the tension that exists between privacy, policy and technology in cybersecurity dialogue that public interest technologists are more critically needed than ever before.

Today, Schneier said, “approximately zero percent” of computer science graduates directly enter the field of public interest work. What can cybersecurity leaders and educators do to increase this number and the impact of their talent on the public interest?

Technology and Policy Have to Work Together

Schneier wants public interest technology to become a viable career path for computer science students and individuals currently working in the field of cybersecurity. To that end, he worked with the Ford Foundation and RSAC 2019 to set up an all-day mini-track at the conference on Thursday. Throughout the event, there was a focus on dedicated individuals who are already working to change the world.

Schneier isn’t the only expert pushing for more collaboration and public interest work. A Tuesday panel discussion focused on how female leaders in government are breaking down barriers, creating groundbreaking policy and helping the next generation of talent flourish. Public interest track speaker and former data journalist Matt Mitchell was inspired by the 2013 George Zimmerman trial to create the nonprofit organization CryptoHarlem and start a new career as a public interest cybersecurity expert, according to Dark Reading.

On Thursday, IBM Security General Manager Mary O’Brien issued a clear call for organizations to change their approach to cybersecurity, including focusing on diversity of thought in her keynote speech. “Cross-disciplinary teams provide the ideas and insights that help us get better,” O’Brien said. “We face complex challenges and diverse attackers. Security simply will not be better or best if we rely on technologists alone.”

It’s Time for Organizations to Take Action

When it comes to creating an incentive for talented individuals to enter public interest work, a significant piece of responsibility falls on private industry. Schneier challenged organizations to work to establish public interest technology as a viable career path and become more involved in creating informed policy. He pointed to the legal sector’s offering of pro bono work as a possible financial model for organizations in private industry.

“In a major law firm, you are expected to do some percentage of pro bono work,” said Schneier. “I’d love to have the same thing happen in technology. We are really trying to jump start this movement … [however, many] security vendors have not taken this seriously yet.”

There are already some examples of private organizations that are creating new models of collaboration to create public change, including the Columbia-IBM Center for Blockchain and Data Transparency, a recent initiative to create teams of academics, scientists, business leaders and government officials to work through issues of “policy, trust, sharing and consumption” by using blockchain technology.

It’s possible to achieve the idea of “better” for everyone when organizations become actively involved in public interest work. There is an opportunity to become a better company, strengthen public policy and attract more diverse talent at the same time.

“We need a cultural change,” said Schneier.

In a world where technology and culture are one and the same, public interest technologists are critical to a better future.

The post At RSAC 2019, It’s Clear the World Needs More Public Interest Technologists appeared first on Security Intelligence.

The not-so-definitive guide to cybersecurity and data privacy laws

US cybersecurity and data privacy laws are, to put it lightly, a mess.

Years of piecemeal legislation, Supreme Court decisions, and government surveillance crises, along with repeated corporate failures to protect user data, have created a legal landscape that is, for the American public and American businesses, confusing, complicated, and downright annoying.

Businesses are expected to comply with data privacy laws based on the data’s type. For instance, there’s a law protecting health and medical information, another law protecting information belonging to children, and another law protecting video rental records. (Seriously, there is.) Confusingly, though, some of those laws only apply to certain types of businesses, rather than just certain types of data.

Law enforcement agencies and the intelligence community, on the other hand, are expected to comply with a different framework that sometimes separates data based on “content” and “non-content.” For instance, there’s a law protecting phone call conversations, but another law protects the actual numbers dialed on the keypad.

And even when data appears similar, its protections may differ. GPS location data might, for example, receive different protection depending on whether it is held by a cell phone provider or was willfully uploaded through an online location “check-in” service or a fitness app that lets users share jogging routes.

Congress could streamline this disjointed network by passing comprehensive federal data privacy legislation; however, questions remain about regulatory enforcement and whether states’ individual data privacy laws will be either respected or steamrolled in the process.

To better understand the current field, Malwarebytes is launching a limited blog series about data privacy and cybersecurity laws in the United States. We will cover business compliance, sectoral legislation, government surveillance, and upcoming federal legislation.

Below is our first blog in the series. It explores data privacy compliance in the United States today from the perspective of a startup.

A startup’s tale—data privacy laws abound

Every year, countless individuals travel to Silicon Valley to join the 21st century Gold Rush, staking claims not along the coastline, but up and down Sand Hill Road, where striking it rich means bringing in some serious venture capital financing.

But before any fledgling startup can become the next Facebook, Uber, Google, or Airbnb, it must comply with a wide, sometimes-dizzying array of data privacy laws.

Luckily, there are data privacy lawyers to help.

We spoke with D. Reed Freeman Jr., the cybersecurity and privacy practice co-chair at the Washington, D.C.-based law firm Wilmer Cutler Pickering Hale and Dorr, about what a hypothetical, data-collecting startup would need to do to become compliant with current US data privacy laws. What does its roadmap look like?

Our hypothetical startup—let’s call it Spuri.us—is based in San Francisco and focused entirely on a US market. The company developed an app that collects users’ data to improve the app’s performance and, potentially, deliver targeted ads in the future.

This is not an exhaustive list of every data privacy law that a company must consider for compliance in the US. Instead, it is a snapshot, providing answers to some of the most common questions startups face today.

Spuri.us’ online privacy policy

To kick off data privacy compliance on the right foot, Freeman said the startup needs to write and post a clear and truthful privacy policy online, as defined in the 2004 California Online Privacy Protection Act.

The law requires businesses and commercial website operators that collect personally identifiable information to post a clear, easily-accessible privacy policy online. These privacy policies must detail the types of information collected from users, the types of information that may be shared with third parties, the effective date of the privacy policy, and the process—if any—for a user to review and request changes to their collected information.

Privacy policies must also include information about how a company responds to “Do Not Track” requests, which are web browser settings meant to prevent a user from being tracked online. The efficacy of these settings is debated, and Apple recently decommissioned the feature in its Safari browser.

Freeman said companies don’t need to worry about honoring “Do Not Track” requests as much as they should worry about complying with the law.

“It’s okay to say ‘We don’t,’” Freeman said, “but you have to say something.”
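Freeman’s point, that a site must state its response even if it doesn’t honor the signal, hinges on a simple mechanism: browsers with the setting enabled send a “DNT: 1” request header. As a rough illustration (the helper name and the plain-dict headers are invented here), a server could detect the signal like this:

```python
# Hypothetical sketch: detect the browser's "Do Not Track" signal, which
# arrives as the HTTP request header "DNT: 1". Under the California law,
# the privacy policy must disclose how the site responds to it, even if
# the answer is "we don't honor it."
def sent_do_not_track(headers: dict) -> bool:
    """Return True if the client asked not to be tracked (DNT: 1)."""
    return headers.get("DNT", "").strip() == "1"

print(sent_do_not_track({"DNT": "1"}))  # True
print(sent_do_not_track({}))            # False
```

Whether or not the signal is honored, detecting it lets the privacy policy’s claim match the site’s actual behavior.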

The law covers more than what to say in a privacy policy. It also covers how prominently a company must display it. According to the law, privacy policies must be “conspicuously posted” on a website.

More than 10 years ago, Google tried to test that interpretation and later backed down. Following a 2007 New York Times report that revealed that the company’s privacy policy was at least two clicks away from the home page, multiple privacy rights organizations sent a letter to then-CEO Eric Schmidt, urging the company to more proactively comply.

“Google’s reluctance to post a link to its privacy policy on its homepage is alarming,” read the letter, which was signed by the American Civil Liberties Union, the Center for Digital Democracy, and the Electronic Frontier Foundation. “We urge you to comply with the California Online Privacy Protection Act and the widespread practice for commercial web sites as soon as possible.”

The letter worked. Today, users can click the “Privacy” link on the search giant’s home page.

What About COPPA and HIPAA?

Spuri.us, like any nimble Silicon Valley startup, is ready to pivot. At one point in its growth, it considered becoming a health tracking and fitness app, meaning it would collect users’ heart rates, sleep regimens, water intake, exercise routines, and even their GPS location for selected jogging and cycling routes. Spuri.us also once considered pivoting into mobile gaming, developing an app that isn’t made for children, but could still be downloaded onto children’s devices and played by kids.

Spuri.us’ founder is familiar with at least two federal data privacy laws—the Health Insurance Portability and Accountability Act (HIPAA), which regulates medical information, and the Children’s Online Privacy Protection Act (COPPA), which regulates information belonging to children.

Spuri.us’ founder wants to know: If her company starts collecting health-related information, will it need to comply with HIPAA?

Not so, Freeman said.

“HIPAA, the way it’s laid out, doesn’t cover all medical information,” Freeman said. “That is a common misunderstanding.”

Instead, Freeman said, HIPAA only applies to three types of businesses: health care providers (like doctors, clinics, dentists, and pharmacies), health plans (like health insurance companies and HMOs), and health care clearinghouses (like billing services that process nonstandard health care information).

Without fitting any of those descriptions, Spuri.us doesn’t have to worry about HIPAA compliance.

As for complying with COPPA, Freeman called the law “complicated” and “very hard to comply with.” Attached to a massive omnibus bill at the close of the 1998 legislative session, COPPA is a law that “nobody knew was there until it passed,” Freeman said.

That said, COPPA’s scope is easy to understand.

“Some things are simple,” Freeman said. “You are regulated by Congress and obliged to comply with its byzantine requirements if your website is either directed to children under the age of 13, or you have actual knowledge that you’re collecting information from children under the age of 13.”

That raises the question: What is a website directed to children? According to Freeman, the Federal Trade Commission created a rule that helps answer it.

“Things like animations on the site, language that looks like it’s geared towards children, a variety of factors that are intuitive are taken into account,” Freeman said.

Other factors include a website’s subject matter, its music, the age of its models, the display of “child-oriented activities,” and the presence of any child celebrities.

Because Spuri.us is not making a child-targeted app, and it does not knowingly collect information from children under the age of 13, it does not have to comply with COPPA.

A quick note on GDPR

No concern about data privacy compliance is complete without bringing up the European Union’s General Data Protection Regulation (GDPR). Passed in 2016 and having taken effect last year, GDPR regulates how companies collect, store, use, and share EU citizens’ personal information online. On the day GDPR took effect, countless Americans received email after email about updated privacy policies, often from companies that were founded in the United States.

Spuri.us’ founder is worried. She might have EU users, but she isn’t certain. Do those users force her to become GDPR compliant?

“That’s a common misperception,” Freeman said. He said one section of GDPR explains this topic, which he called “extraterritorial application.” Or, to put it a little more clearly, Freeman said: “If you’re a US company, when does GDPR reach out and grab you?”

GDPR affects companies around the world depending on three factors. First, whether the company is established within the EU, either through employees, offices, or equipment. Second, whether the company directly markets or communicates to EU residents. Third, whether the company monitors the behavior of EU residents.

“Number three is what trips people up,” Freeman said. He said that US websites and apps—including those operated by companies without a physical EU presence—must still comply with GDPR if they specifically track users’ behavior that takes place in the EU.

“If you have an analytics service or network, or pixels on your website, or you drop cookies on EU residents’ machines that tracks their behavior,” that could all count as monitoring the behavior of EU residents, Freeman said.

Because those services are rather common, Freeman said many companies have already found a solution. Rather than dismantling an entire analytics operation, companies can instead capture the IP addresses of users visiting their websites. The companies then perform a reverse geolocation lookup. If the companies find any IP addresses associated with an EU location, they screen out the users behind those addresses to prevent online tracking.
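The screening workflow Freeman describes can be sketched in a few lines. This is a hypothetical illustration, not a compliance recipe: the country table, IP addresses, and function names are invented, and a real deployment would query a maintained geolocation database rather than a hard-coded dict.

```python
# Sketch of the screening approach: resolve each visitor's IP address to a
# country, then suppress tracking for visitors who geolocate to the EU.
EU_COUNTRY_CODES = {"AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL"}  # abbreviated

def lookup_country(ip: str, geo_db: dict) -> str:
    """Reverse-geolocate an IP address using the supplied lookup table."""
    return geo_db.get(ip, "UNKNOWN")

def tracking_allowed(ip: str, geo_db: dict) -> bool:
    """Screen out visitors whose IP geolocates to an EU member state."""
    return lookup_country(ip, geo_db) not in EU_COUNTRY_CODES

# Sample data for illustration only.
geo_db = {"203.0.113.7": "US", "198.51.100.2": "DE"}
print(tracking_allowed("203.0.113.7", geo_db))   # True: US visitor
print(tracking_allowed("198.51.100.2", geo_db))  # False: EU visitor, screened out
```

Note that unknown addresses fall through to “tracking allowed” in this sketch; a cautious implementation might default the other way.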

Asked whether this setup has been proven to protect against GDPR regulators, Freeman instead said that these steps showcase an understanding and a concern for the law. That concern, he said, should hold up against scrutiny.

“If you’re a startup and an EU regulator initiates an investigation, and you show you’ve done everything you can to avoid tracking—that you get it, you know the law—my hope would be that most reasonable regulators would not take a Draconian action against you,” Freeman said. “You’ve done the best you can to avoid the thing that is regulated, which is the track.”

A data breach law for every state

Spuri.us has a clearly-posted privacy policy. It knows about HIPAA and COPPA and it has a plan for GDPR. Everything is going well…until it isn’t.

Spuri.us suffers a data breach.

Depending on which data was taken from Spuri.us and whom it referred to, the startup will need to comply with the many requirements laid out in California’s data breach notification law. There are rules on when the law is triggered, what counts as a breach, whom to notify, and what to tell them.

The law protects Californians’ “personal information,” which it defines as a combination of data elements. For instance, a first and last name plus a Social Security number count as personal information. So do a first initial and last name plus a driver’s license number, or a first and last name plus past medical insurance claims or medical diagnoses. A Californian’s username and associated password also qualify as “personal information,” according to the law.
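As a rough sketch of how those combinations compose (the field names here are invented for illustration, and the statute’s actual definition is broader), a breach-triage script might check a record like this:

```python
# Encode the combinations listed above: a name plus a sensitive element,
# or a username plus its associated password.
SENSITIVE_FIELDS = {"ssn", "drivers_license", "medical_claims", "medical_diagnoses"}

def is_personal_information(record: dict) -> bool:
    """Return True if the record matches one of the qualifying combinations."""
    # A full name, or a first initial plus last name, counts as a "name".
    has_name = "last_name" in record and (
        "first_name" in record or "first_initial" in record
    )
    if has_name and SENSITIVE_FIELDS & record.keys():
        return True
    # A username and its associated password qualify on their own.
    return {"username", "password"} <= record.keys()

print(is_personal_information({"first_name": "A", "last_name": "B", "ssn": "123-45-6789"}))  # True
print(is_personal_information({"username": "ab", "password": "hunter2"}))                    # True
print(is_personal_information({"first_name": "A", "last_name": "B"}))                        # False
```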

The law also defines a breach as any “unauthorized acquisition” of personal information. So, a rogue threat actor merely accessing a database? Not a breach. That same threat actor downloading the information from the database? Breach.

In California, once a company discovers a data breach, it next has to notify the affected individuals. These notifications must include details on which type of personal information was taken, a description of the breach, contact information for the company, and, if the company was actually the source of the breach, an offer for free identity theft prevention services for at least one year.

The law is particularly strict on these notifications to customers and individuals impacted. There are rules on font size and requirements for which subheadings to include in every notice: “What Happened,” “What Information Was Involved,” “What We Are Doing,” “What You Can Do,” and “More Information.”

After Spuri.us sends out its bevy of notices, it could still have a lot more to do.

As of April 2018, every single US state has its own data breach notification law. These laws, which can sometimes overlap, still include important differences, Freeman said.

“Some states require you to notify affected consumers. Some require you to notify the state’s Attorney General,” Freeman said. “Some require you to notify credit bureaus.”

For example, Florida’s law requires that, if more than 1,000 residents are affected, the company must notify all nationwide consumer reporting agencies. Utah’s law, on the other hand, only requires notifications if, after an investigation, the company finds that identity theft or fraud occurred, or likely occurred. And Iowa has one of the few state laws that protects both electronic and paper records.

Of all the data compliance headaches, this one might be the most time-consuming for Spuri.us.

In the meantime, Freeman said, taking a proactive approach—like posting the accurate and truthful privacy policy and being upfront and honest with users about business practices—will put the startup at a clear advantage.

“If they start out knowing those things on the privacy side and just in the USA,” Freeman said, “that’s a great start that puts them ahead of a lot of other startups.”

Stay tuned for our second blog in the series, which will cover the current fight for comprehensive data privacy legislation in the United States.

The post The not-so-definitive guide to cybersecurity and data privacy laws appeared first on Malwarebytes Labs.

Labs survey finds privacy concerns, distrust of social media rampant with all age groups

Before Cambridge Analytica made Facebook an unwilling accomplice to a scandal by appropriating and misusing more than 50 million users’ data, the public was already living in relative unease over the privacy of their information online.

The Cambridge Analytica incident, along with other, seemingly day-to-day headlines about data breaches pouring private information into criminal hands, has eroded public trust in corporations’ ability to protect data, as well as their willingness to use the data in ethically responsible ways. Meanwhile, the potential for data interception, gathering, collation, storage, and sharing is increasing rapidly in all private, public, and commercial sectors.

Concerns of data loss or abuse have played a significant role in the US presidential election results, the legal and ethical drama surrounding WikiLeaks, Brexit, and the implementation of the European Union’s General Data Protection Regulation. But how does the potential for the misuse of private data affect the average user in Vancouver, British Columbia; Fresno, California; or Lisbon, Portugal?

To that end, the Malwarebytes Labs team conducted a survey from January 14 to February 15, 2019 to inquire about the data privacy concerns of nearly 4,000 Internet users in 66 countries, including respondents from Australia, Belgium, Brazil, Canada, France, Germany, Hong Kong, India, Iran, Ireland, Japan, Kenya, Latvia, Malaysia, Mexico, New Zealand, the Philippines, Saudi Arabia, South Africa, Taiwan, Turkey, the United Kingdom, the United States, and Venezuela.

The survey, which was conducted via SurveyMonkey, focused on the following key areas:

  • Feelings on the importance of online privacy
  • Rating trust of social media and search engines with data online
  • Cybersecurity best practices followed and ignored (a list of options was provided)
  • Level of confidence in sharing personal data online
  • Types of data respondents are most comfortable sharing online (if at all)
  • Level of consciousness of data privacy at home vs. the workplace

____________________________________________________________________________________________________________________________

For a high-level look at our analysis of the survey results, including an exploration of why there is a disconnect between users’ emotions and their behaviors, as well as which privacy tools Malwarebytes recommends for those who wish to do more to protect their privacy, download our report:

The Blinding Effect of Security Hubris on Data Privacy

____________________________________________________________________________________________________________________________

For this blog, we explored commonalities and differences among Baby Boomers (ages 56+), Gen Xers (ages 36 – 55), Millennials (ages 18 – 35), and Gen Zeds, or the Centennials (ages 17 and under) concerning feelings about privacy, level of confidence sharing information online, trust of social media and search engines with data, and which privacy best practices they follow.

Lastly, we delved into the regional data compiled from respondents in Europe, the Middle East, and Africa (EMEA) and compared it against North America (NA) to examine whether US users share common ground on privacy with other regions of the world.

Privacy is complicated

If, 10 years ago, someone had asked you to carry an instrument that could listen in on your conversations, broadcast your exact location to marketers, and track you as you moved between the grocery aisles (including how long you lingered in front of the Cap’n Crunch cereal), you likely would have declined, suspecting a crazy joke. Of course, that was before the advent of smartphones, which can do all that and more today.

Many regard the public disclosure of surreptitious information-gathering programs conducted by the National Security Agency (NSA) here in the US as a watershed moment in the debate over government surveillance and privacy. Despite the outcry, experts noted that the disclosures hardly made a dent in US laws about how the government may monitor citizens (and non-citizens) legally.

Tech companies in Silicon Valley were equally affected (or unaffected, depending on how you look at it) by Edward Snowden’s actions. Yet, over time, they have felt the effects of people’s change in behaviors and actions toward their services. In the face of increasing pressure from criminal actions and public perception in key demographics, companies like Google, Apple, and Facebook have taken steps to beef up the encryption of and better secure user data. But is this enough to make people trust them again?

Challenge: Put your money where your mouth is

In reality, particularly in commerce, we may have reservations about letting companies collect our data, especially because we have little influence over how they use it, but that doesn’t stop us from handing it over. The care for the protection of our own data, and that of others, may well be nonexistent, signed away in an End User License Agreement (EULA) buried 18 pages deep.

Case in point: Students of the Massachusetts Institute of Technology (MIT) conducted a study in 2017 and revealed that, among other findings, there is a paradox between how people feel about privacy and their willingness to easily give away data, especially when enticed with rewards (in this case, free pizza).

Indeed, we have a complicated relationship with our data and online privacy. One minute, we’re declaring on Twitter how the system has failed us; the next, we’re taking a big bite of a warm slice of BBQ chicken pizza after giving away our best friend’s email address.

This raises the question: Is getting something in exchange for data a square deal? More specifically, should we have to give something away to use free services? Has a scam just taken place? But more to the point: Do people really, really care about privacy? If they do, why, and to what extent?

In search of answers

Before we conducted our survey, we had theories of our own, and these were colored by many previous articles on the topic. We assumed, for example, that Millennials and Gen Zeds, having grown up with the Internet already in place, would be much less concerned about their privacy than Baby Boomers, who spent a few decades on the planet before ever having created an online account. Rather than further a bias, we started from scratch—we wanted to see for ourselves how people of different generations truly felt about privacy.

Privacy by generations: an overview

This section outlines the survey’s overall findings across generations and regions. A breakdown of each generation’s privacy profile follows, including some correlations from studies that tackled similar topics in the past.

  • An overwhelming majority of respondents (96 percent) feel that online privacy is crucial. And their actions speak for themselves: 97 percent say they take steps to protect their online data, whether they are on a computer or mobile device.
  • Among seven options provided, below are the top four cybersecurity and privacy practices they follow:
    • “I refrain from sharing sensitive personal data on social media.” (94 percent)
    • “I use security software.” (93 percent)
    • “I run software updates regularly.” (90 percent)
    • “I verify the websites I visit are secured before making purchases.” (86 percent)
  • Among seven options provided, below are the top four cybersecurity faux pas they admitted to:
    • “I skim through or do not read End User License Agreements or other consent forms.” (66 percent)
    • “I use the same password across multiple platforms.” (29 percent)
    • “I don’t know which permissions my apps have access to on my mobile device.” (26 percent)
    • “I don’t verify the security of websites before making a purchase (e.g., I don’t look for ‘https’ or the green padlock on sites).” (10 percent)

This shows that while respondents feel the need to take care of their privacy and data online, they only manage to protect it most of the time, not all of the time.

  • There is a near equal percentage of people who trust (39 percent) and distrust (34 percent) search engines across all generations.
  • Across the board, there is a universal distrust of social media (95 percent). We can then safely assume that respondents are more likely to trust search engines to protect their data than social media.
  • When asked to agree or disagree with the statement, “I feel confident about sharing my personal data online,” 87 percent of respondents disagree or strongly disagree.
  • On the other hand, confident data sharers—or those who give away information to use a service they need—would most likely share their contact info (26 percent), such as name, address, phone number, and email address; card details when shopping online (26 percent); and banking details (16 percent).
  • A small portion (2 percent) of highly confident sharers are also willing to share (or already have shared) their Social Security Number (SSN) and health-related data.
  • In practice, however, 59 percent of respondents said they don’t share any of the sensitive data we listed online.
  • When asked to rate the statement, “I am more conscious of data privacy when at work than I am at home,” a large share (84 percent) said “false.”

Breaking it down

There are many events within this decade that have shaped the way Internet users across generations perceive privacy and how they act on that perception. The astounding number of breaches that have taken place since 2017 and the billions of records stolen, leaked, and bartered on the digital underground market, not to mention the seemingly endless opportunities for governments, institutions, and individuals to spy on people and harvest their data, can drive Internet users with even a modicum of interest in preserving privacy to either (1) live off the grid or (2) completely change their perception of data privacy. The former is unlikely for the majority of users. The latter, however, is already taking place. In fact, not only have perceptions changed, but so has behavior, in some cases almost instantly.

We profiled each age group in light of past and present privacy-related events and how these have changed their perceptions, feelings, and online practices. Here are some of the important findings that emerged from our survey.

Centennials are no noobs when it comes to privacy.

It’s important to note that while many users who are 18 years old and under (83 percent) admit that privacy is important to them, even more (87 percent) are taking steps to ensure that their data is secure online. Ninety percent of them do this by making sure that the websites they visit are secure before making online purchases. They also refrain from sharing sensitive PII on social media (86 percent) and use security software (86 percent).

Jerome Boursier, security researcher and co-founder of AdwCleaner, is also a privacy advocate. He disagrees with Gen Zeds’ claims that they don’t disclose their personally identifiable information (PII) on social media. “I think most people in the survey would define PII differently. People—especially the younger ones—tend to have a blurry definition of it and don’t consider certain information as personally identifiable the same way older generations do.”

Other notable practices Gen Zeds admit to are borrowed from the Cybersecurity 101 handbook, such as using complicated passwords and tools like a VPN on their mobile devices. Others go above and beyond normal practice, checking downloaded files against VirusTotal and modifying system files to prevent telemetry logging or reporting, something Microsoft has been doing since the release of Windows 7.

They are also the generation that is the most unlikely to update their software.

Contrary to public belief, Millennials do care about their privacy.

This bears repeating: Millennials do care about their privacy.

An overwhelming majority (93 percent) of Millennials admitted to caring about their privacy. On the other hand, a small portion of this age group, while disclosing that they aren’t that bothered about their privacy, also admit that they still take steps to keep their online data safe.

One reason we can cite why Millennials may care about their privacy is that they want to manage their online reputations, and they are the most active at it, according to the Pew Research Center. In the report “Reputation Management and Social Media,” researchers found that Millennials take steps to limit the amount of PII online, are well-versed at personalizing their social media privacy settings, delete unwanted comments about them on their profiles, and un-tag themselves from photos they were tagged in by someone else. Given that a lot of employers are Google-ing their prospective employees (and Millennials know this), they take a proactive role in putting their best foot forward online.

Like Centennials, Millennials also use VPNs and Tor to protect their anonymity and privacy. In addition, they regularly conduct security checks on their devices and account activity logs, use two-factor authentication (2FA), and do their best to get on top of news, trends, and laws related to privacy and tech. A number of Millennials also admit to not having a social media presence.

While a large share (92 percent) of Millennials polled distrust social media with their data (and 64 percent of them feel the same way about search engines), they continue to use Google, Facebook, and other social media and search platforms. Several Millennials also admit that they can’t seem to stop themselves from clicking links.

Lastly, only a little over half of the respondents (59 percent) are as conscious of their data privacy at home as they are at work. This means that there is a sizable chunk of Millennials who are only conscious of their privacy at work but not so much at home.

Gen Xers feel and behave online almost the same way as Baby Boomers.

Gen Xers are the youngest of the older generations, but their habits better resemble their elder counterparts than their younger compatriots. Call it coincidence or bad luck—depending on your predisposition—or even “wisdom in action.” Either way, being likened to Baby Boomers is a compliment when it comes to privacy and security best practices.

Respondents in this age group have the highest number of people who are privacy-conscious (97 percent), and they are no doubt deliberate (98 percent) in their attempts to secure and take control of their data. Abstaining from posting personal information on social media ranks high in their list of “dos” at 93 percent. Apart from using security software and regularly updating all programs they use, they also do their best to opt out of everything they can, use strong passwords and 2FA, install blocker apps on browsers, and surf the web anonymously.

On the flip side, they’re second only to Millennials for The Generation Good at Avoiding Reading EULAs (71 percent). Gen Xers also bagged The Least Number of People in a Generation to Reuse Passwords (24 percent) award.

When it comes to a search engine’s ability to secure their data, over half of Gen Xers (65 percent) distrust them, while nearly a quarter (24 percent) chose to remain neutral.

Baby Boomers know more about protecting privacy online than other generations, and they act upon that knowledge.

Our findings on Baby Boomers have challenged the longstanding notion that they are the most clueless bunch when it comes to cybersecurity and privacy.

Of course, this isn’t to say that there are no naïve users in this generation—all generations have them—but our survey results profoundly contrast what most of us accepted as truth about what Boomers feel about privacy and how they behave when online. They’re actually smarter and more prudent than we care to give them credit for.

Baby Boomers came out as the most distrustful generation (97 percent) of social media when it comes to protecting their data. Because of this, those who have a social media presence hardly disclose (94 percent) any personal information when active.

In contrast, a little over half (57 percent) of Boomers trust search engines, making them the most trusting of all the groups. This means a Baby Boomer is highly likely to trust search engines with their data over social media.

Boomers are also the least confident (89 percent) generation when it comes to sharing personal data online. This correlates with a nationwide study commissioned by Hide My Ass! (HMA), a popular VPN service provider, about Baby Boomers and their different approach to online privacy. According to that research, Boomers are likely to respond: “I only allow trusted people to see anything I post & employ a lot of privacy restrictions.”

Lastly, they’re also the most consistent in terms of guarding their data privacy both at home and at work (88 percent).

“I am immediately surprised that Baby Boomers are the most conscious about data privacy at work and at home. Anecdotally, I guess it makes sense, at least in work environments,” says David Ruiz, Content Writer for Malwarebytes Labs and a former surveillance activist for the Electronic Frontier Foundation (EFF). He further recalls: “I used to be a legal affairs reporter and 65-and-up lawyers routinely told me about their employers’ constant data security and privacy practices (daily, changing Wi-Fi passwords, secure portals for accessing documents, no support of multiple devices to access those secure portals).”

Privacy by region: an overview of EMEA and NA

A clear majority of survey respondents within the EMEA region are from countries in Europe. One would think that Europeans are more versed in online privacy practices, given that they are known for taking privacy and data protection more seriously than those in North America (NA). Although this can be seen in certain age groups in EMEA, our data shows that the privacy-savviness of those in NA is not far off. In fact, certain age groups in NA match or even trump the numbers in EMEA.

Comparing and contrasting user perception and practice in EMEA and NA

There is no denying that those polled in EMEA and NA care about privacy and take steps to secure themselves, too. Most of them refrain from disclosing information they deem sensitive on social media (an average of 89 percent of EMEA users versus 95 percent of NA users), verify that websites where they plan to make purchases are secure (90 percent versus 91 percent), and use security software (89 percent versus 94 percent).

However, as we've seen in the generational profiles, they also recognize the weaknesses that dampen their efforts. All respondents are prone to skimming through or completely skipping the EULA (an average of 77 percent of EMEA users versus 71 percent of NA users). This is the most prominent problem across generations, followed by reusing passwords (26 percent versus 38 percent) and not knowing which permissions their apps have on their mobile devices (19 percent versus 17 percent).

As you can see, more users in NA are embracing these top online privacy practices than in EMEA.

All respondents from EMEA and NA are significantly distrustful of social media—92 and 88 percent, respectively—when it comes to protecting their data. Those who are willing to disclose data online usually share their credit card details (26 percent), contact info (26 percent), and banking details (16 percent). Essentially, these are the most common pieces of information you give out when banking and purchasing online.

Millennials in both EMEA and NA (61 percent) feel the least conscious about their data privacy at work vs. at home. On the other hand, Baby Boomers (85 percent) in both regions feel the most conscious about their privacy in said settings.

It’s also interesting to note that Baby Boomers in both regions appear to share a similar profile.

Privacy in EMEA and NA: notable trends

When it comes to knowing which permissions apps have access to on mobile devices, Gen Zeds in EMEA (90 percent) are far more aware than Gen Zeds in NA (63 percent). In fact, Gen Zeds and Millennials (73 percent) are the only generations in EMEA that are conscious of app permissions. They're also the groups least likely to reuse passwords (at 20 and 24 percent, respectively) across generations in both regions, while Gen Xers in EMEA have the highest rate of users (31 percent) who recycle passwords.

It also appears that older respondents—the Gen Xers (31 percent) and Baby Boomers (37 percent)—in both regions are more likely to read EULAs, or at least take the time to do so, than Gen Zeds and Millennials (both at 18 percent).

Gen Zeds in NA are the most distrustful generation of search engines (75 percent) and social media (100 percent) when it comes to protecting their data. They’re also the most uncomfortable (100 percent) when it comes to sharing personal data online.

Among the Baby Boomers, those in NA are the most conscious (85 percent) when it comes to data privacy at work. However, Baby Boomers in EMEA are not far off (84 percent).

With privacy comes universal reformation, for the betterment of all

The results of our survey have merely provided a snapshot of how generations and certain regions perceive privacy and what steps they take (and don’t take) to control what information is made available online. Many might be surprised by these findings while others may correlate them with other studies in the past. However you take it, one thing is clear: Online privacy has become as important an issue as cybersecurity, and people are beginning to take notice.

In the current privacy climate, it is not enough for Internet users to do the heavy lifting. Regulators play a part, and businesses should act quickly to guarantee that the data they collect from users is only what is reasonably needed to keep services running. In addition, they should secure the data they handle and store, and ensure that users are informed of changes to which data is collected and how it is used. We believe this demand on businesses will continue for at least the next three years, and any plans or reforms that elevate the importance of online privacy of user data will serve as cornerstones of future transformations.

At this point in time, there is no real way to have complete privacy and anonymity when online. It’s a pipe dream in the current climate. Perhaps the best we can hope for is a society where businesses of all sizes recognize that the user data they collect has a real impact on their customers, and to respect and secure that data. Users should not be treated as a collection of entries with names, addresses, and contact numbers in a huge database. Customers are customers once again, who are always on the lookout for products and services to meet their needs.

The privacy advocate mantle would then be taken up by Centennials and "Alphas" (or the iGeneration), the first age group born entirely within the 21st century and considered the most technologically infused of us all. For those who wish to conduct future studies on privacy like this one, it would be really, really interesting to see how Alphas and Centennials would react to a free box of pizza in exchange for their mother's maiden name.


[*] Malwarebytes Labs was only able to poll a total of 31 respondents in Gen Zed. This isn't enough to create an accurate profile of this age group. However, this author believes that what we were able to gather is enough to give an informed assessment of the age group's feelings and practices.

The post Labs survey finds privacy concerns, distrust of social media rampant with all age groups appeared first on Malwarebytes Labs.

Roses Are Red, Violets Are Blue – What Does Your Personal Data Say About You?

A classic meet-cute – the moment where two people, destined to be together, meet for the first time. This rom-com cornerstone is turned on its head by Netflix’s latest bingeable series “You.” For those who have watched, we have learned two things. One, never trust someone who is overly protective of their basement. And two, in the era of social media and dating apps, it’s incredibly easy to take advantage of the amount of personal data consumers readily, and somewhat naively, share online and with the cloud every day.

We first meet Joe Goldberg and Guinevere Beck—the show's lead characters—in a bookstore: she's looking for a book, he's a book clerk. They flirt, she buys a book, he learns her name. For all intents and purposes, this is where their story should end—but it doesn't. With a simple search of her name, Joe discovers the world of Guinevere Beck's social media channels, all conveniently set to public. And before we know it, Joe has made himself a figurative rear window into Beck's life, which brings to light the dangers of social media and highlights how a lack of digital privacy could put users at unnecessary risk. With this information on Beck, Joe soon becomes both a physical and digital stalker, even managing to steal her phone while trailing her one day, which, as luck would have it, is not password protected. From there, Joe follows her every text, plan, and move thanks to the cloud.

Now, while Joe and Beck’s situation is unique (and a tad dramatized), the amount of data exposed via their interactions could potentially occur through another romantic avenue – online dating. Many millennial couples meet on dating sites where users are invited to share personal anecdotes, answer questions, and post photos of themselves. The nature of these apps is to get to know a stranger better, but the amount of personal information we choose to share can create security risks. We have to be careful as the line between creepy and cute quickly blurs when users can access someone’s every status update, tweet, and geotagged photo.

While "You" is an extreme case of social media gone wrong, dating app, social media, and cloud usage are all very prevalent in 2019. Therefore, if you're a digital user, be sure to consider these precautions:

  • Always set privacy and security settings. Anyone with access to the internet can view your social media if it’s public, so turn your profiles to private in order to have control over who can follow you. Take it a step further and go into your app settings to control which apps you want to share your location with and which ones you don’t.
  • Use a screen name for social media accounts. If you don’t want a simple search of your name on Google to lead to all your social media accounts, consider using a different variation of your real name.
  • Watch what you post. Before tagging your friends or location on Instagram and posting your location on Facebook, think about what this private information reveals about you publicly and how it could be used by a third-party.
  • Use strong passwords. In the event your data is exposed, or your device is stolen, a strong, unique password can help prevent your accounts from being hacked.
  • Leverage two-factor authentication. Remember to always implement two-factor authentication to add an extra layer of security to your device. This will help strengthen your online accounts with a unique, one-time code required to log in and access your data.
  • Use the cloud with caution. If you plan to store your data in the cloud, be sure to set up an additional layer of access security (one way of doing this is through two-factor authentication) so that no one can access the wealth of information your cloud holds. If your smartphone is lost or stolen, you can access your password protected cloud account to lock third-parties out of your device, and more importantly your personal data.
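
A strong, unique password can simply be a long random string drawn from a large character set. As a minimal sketch (the 16-character length and character set here are illustrative choices, not a McAfee recommendation), Python's standard-library `secrets` module generates cryptographically secure random choices:

```python
import secrets
import string

def generate_password(length=16):
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. a 16-character random string
```

In practice, a reputable password manager does this for you and remembers the result, which is what makes a unique password per account feasible.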

Interested in learning more about IoT and mobile security trends and information? Follow @McAfee_Home on Twitter, and "Like" us on Facebook.

The post Roses Are Red, Violets Are Blue – What Does Your Personal Data Say About You? appeared first on McAfee Blogs.

What is Data Privacy and why is it an important issue?

The question of whether privacy is a fundamental right is being argued before the honorable Supreme Court of India. It is a topic to which a young India is waking up. Privacy is often equated with Liberty, and young Indians want adequate protection to express themselves.

Privacy, according to Wikipedia, is the ability of an individual or group to seclude themselves, or information about themselves, and thereby express themselves selectively. There is little contention over the fact that privacy is an essential element of Liberty, and that the voluntary disclosure of private information is part of both human relationships and a digitized economy.

The reason for debating data privacy is the inherent potential for surveillance and disclosure of electronic records that contain private information, such as sexual orientation, medical records, credit card information, and email.

Disclosure could take place through wrongful use and distribution of the data, such as for marketing, surveillance by governments, or outright data theft by cybercriminals. In each case, a cybercitizen's right to disclose specific information to specific companies or people, for a specific purpose, is violated.

Citizens in Western countries are legally protected through data protection regulation. There are eight principles designed to prevent unauthorized use of personal data by governments, organizations, and individuals:

Lawfulness, Fairness & Transparency
Personal data needs to be processed based on the consent given by data subjects. Companies have an obligation to tell data subjects what their personal data will be used for. Data acquired cannot be sold to other entities, such as marketers.
Purpose limitation
Personal data collected for one purpose should not be used for a different purpose. If data was collected to deliver an insurance service, it cannot be used to market a different product.
Data minimization
Organizations should restrict collection of personal data to only those attributes needed to achieve the purpose for which consent from the data subject has been received.
Accuracy
Data has to be collected, processed, and used in a manner that ensures it is accurate. A data subject has the right to inspect and even alter the data.
Storage limitation
Personal data should be collected for a specific purpose and not retained for longer than necessary in relation to that purpose.
Integrity and confidentiality
Organizations that collect this data are responsible for its security against data thefts and data entry/processing errors that may alter the integrity of data.
Accountability
Organizations are accountable for the data in their possession.
Cross-border transfer
Personal data must be processed and stored in a secure environment, a requirement that must also be met if the data is processed outside the borders of the country.
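
To make one of these principles concrete, the storage limitation principle can be sketched as a simple retention check. This is an illustrative sketch only—the record fields and the 365-day window are hypothetical choices, not drawn from any specific regulation:

```python
from datetime import datetime, timedelta

# Hypothetical retention window for illustration; real retention periods
# depend on the purpose of collection and applicable law.
RETENTION = timedelta(days=365)

def records_to_purge(records, now=None):
    """Return records held longer than the retention period.

    Each record is assumed to be a dict with a 'collected_at' datetime.
    """
    now = now or datetime.utcnow()
    return [r for r in records if now - r["collected_at"] > RETENTION]
```

A scheduled job running a check like this would then delete or anonymize the flagged records, keeping storage in line with the stated purpose of collection.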

It is important for cybercitizens to understand their privacy rights particularly in context of information that can be misused for financial gain or to cause reputational damage.




Three Must-Dos to Make a Security Awareness Champion

Setting an example is the best way to institutionalize security awareness in the workplace or at home. Colleagues and children naturally follow examples set by champions, as it is easier to mimic than to spend time self-learning. I found three important aspects of championing security awareness.

Be a role model

Cybercitizen champions take an active interest in being secure by keeping themselves updated and implementing security guidelines for the gadgets and services they use at home, at work, and on the Internet. Knowledge of the dos and don'ts of security for workplace systems is normally obtained through corporate security awareness programs, but for personal gadgets and services one needs to invest time in reading the security guidelines provided by the service or product provider, or on gadget blogs. Security guidelines provide information on best practices for secure configuration of gadgets, use of passwords, malware prevention, and methods to erase data. Besides security issues like password theft or loss of privacy, there is the possibility of becoming a victim of fraud when using ecommerce. Most ecommerce sites have a fraud awareness section to educate customers on the common types of fraud and on techniques to safeguard against them. Role models take pride in what they do, and this passion becomes a source of motivation for others around them. A security champion delights in possessing detailed insights on how to use the best security features in gadgets (say, mobile phones) or on recent security incidents.

Be a security buddy at your home

Telling people what to do to keep themselves secure online is difficult, primarily because security controls lower the user experience; as an example, most people may prefer not to have a password, or to keep a simple one for ease of use. People tend to accept risk because they do not fully realize the consequences of a damaged reputation or the financial impact of the fraudulent use of credit cards until they or someone close to them experiences these effects firsthand. Security champions act as security buddies at home. They take time to understand how their family members, both young and old, use the Internet, and to learn for themselves about the safety, privacy, and security issues related to those sites. Buddies perform the role of coaches, engaging in regular discussions on the use of these sites from the perspective of avoiding security pitfalls and risky behavior that may lead to unwanted attention from elements looking to groom children for sex or terrorism. Highlighting incidents of a similar nature helps raise awareness of the reality of the risk.

Display commitment to security at your workplace

Small acts go a long way in promoting good security behavior. A small security cartoon displayed on a workbench can add immensely to the corporate security awareness effort. Champions bring attention to the importance of security in business by bringing up security in routine business discussions; for example, circulating insights into recently published security incidents within a discussion group (leadership, business) and popping the security question "what if a customer's security or privacy is affected?" during project discussions.