Author Archives: Adam Levin

How Your Company Can Prevent a Cyberattack

Capital One’s announcement of a hack that affected more than 100 million people should have you asking not what, but who’s in your wallet. The company estimated a year-one expense ranging from $100-$150 million. Equifax settled recently on a penalty of more than $700 million. Getting cyber wrong is expensive.

Getting cyber wrong is, of course, also complex in all the ways it can become manifest. There will soon be more than 30 billion connected devices “out there” in consumer hands, on their wrists, in their laps, cars, kitchens, walls, and, yes, at work. In short, IoT is everywhere, and our connectables almost always go with us.

Okay, so the obvious metaphor everyone is used to is the vectors of a virus on the move. The president catches a bug in North Korea, and next thing you know everyone at Mar-a-Lago has it. Rachel Maddow catches a cold while fly-fishing on the Housatonic, and next thing you know the whole Democratic establishment has it. Bob from accounting goes on vacation with his laptop, and the next thing you know, millions of customers get hacked.

Bob, you’re fired.

It’s All About Attackable Surface

Tortoises have cyber down pat, both literally and metaphorically. Ever heard of a tortoise getting hacked? The reason you haven’t is that there’s nothing to get.

Tortoises have no finances and, taken as a group, they rarely have names and social media accounts. When they do have names and Instagram accounts, there’s a hackable human somewhere nearby. Tortoises are not the problem.

If only our employees had the cyber equivalent of what tortoises have. What’s not to like about having a hard shell? Better, what about one into which you can retract all your vulnerable areas? They also move slowly, which in fable allowed at least one of them to beat a hare in a foot race. Among other things, this slowness means fewer clicked links in phishing emails.

Tortoises have a lot of what it takes to be cybersafe–though admittedly in an environment where things have to get done, often quickly, they don’t make the most attractive choice for corporate spirit animal.

Cyber Is a Marathon, Not a Sprint

So, the order of the day is for sure not something like, “Consumers and businesses alike: Be the tortoise!” Not quite. The tortoise is to the cybersecurity of your enterprise what campaign slogans like “Make America Great Again” or “Yes We Can” are to the country. I mean, let’s face it, tortoises are not renowned for their earning capacity. That said, they can be inspirational–or at least aspirational. They can help us think about what good cyber looks like.

My marketing department would do a facepalm if I were to recommend courses you can offer employees to improve their cybersecurity practices, because I own a company dedicated to helping companies and individuals stay as safe as possible in our current state of persistent threat. That said, there are some guiding principles of cybersecurity, particularly in the workplace, that I will share with you. They are the bedrock of our practice, because they work.

Choices? There’s Really Only One

There is a critical mass of options out there for cybersecurity employee training, online and otherwise. By now, we should expect to be seeing puppet shows on the dangers of phishing.

All that aside, the best solution is free. It is creating a culture of cyber threat awareness and best practices. As Peter Drucker once said, “Culture eats strategy for breakfast.”

While I am only going to name one here, there are programs–both for-profit and public advocacy based–that help small and medium-sized businesses learn to be safer and more secure. A non-profit called the National Cyber Security Alliance offers a series of in-person, highly interactive and easy-to-understand workshops based on the National Institute of Standards and Technology (NIST) Cybersecurity Framework.

For-profit choices are legion. They may offer continuous training programs to help thwart phishing attacks and malware infections. There may be modules to go through for employees, or PowerPoint courses, or quizzes. Other programs cover specific topics, like how to navigate the web without picking up a virus, how to recognize social engineering (a fancy term for the hacking practice of luring in unsuspecting victims with links and offers of this or that slice of paradise), safe mobile practice, safe travel practices, safe email practice, and much more.

Other companies offer training courses as part of the onboarding process, and it should go without saying that at this point in the story arc of cyber insecurity, any enterprise that doesn’t secure employee devices during the onboarding process is courting disaster.

Cybersecurity Is Not a Spectator Sport

Whether you send daily (or weekly) emails listing the latest threats or you talk about it at all-hands meetings, cyber needs to be a part of everyday life to keep your enterprise as safe as possible.

The basic tasks that need to be accomplished:

1. Phish-proof your employees. Teach employees how to recognize phishing attacks, and what happens when they occur.

2. Foster good end-user practices. Make sure employees know what good password practices look like. Talk about computer-hygiene practices, and commonsense defenses against the threat of insider attacks.

3. Manage change. Change fosters insecurity, and that’s when we’re most vulnerable to attack. Teach employees how to manage cyber during enterprise-wide change.
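Item 2 above lends itself to a concrete check. The sketch below is a minimal, hypothetical illustration of what an automated password-policy screen might look like; the length threshold, character rules, and tiny blocklist are assumptions for illustration, not a standard, and a real deployment should also screen candidates against known-breached-password lists and encourage passphrases or a password manager.

```python
import re

def password_issues(pw: str) -> list[str]:
    """Return the reasons a candidate password falls short of a
    (hypothetical) baseline policy. An empty list means it passes."""
    issues = []
    if len(pw) < 12:
        issues.append("shorter than 12 characters")
    if not re.search(r"[a-z]", pw):
        issues.append("no lowercase letter")
    if not re.search(r"[A-Z]", pw):
        issues.append("no uppercase letter")
    if not re.search(r"\d", pw):
        issues.append("no digit")
    # Stand-in for a real breached-password blocklist.
    if pw.lower() in {"password", "letmein", "qwerty123"}:
        issues.append("on a common-password blocklist")
    return issues

print(password_issues("Password"))  # fails on length and the blocklist
```

A screen like this is only a floor, of course; the point of item 2 is the conversation around it, not the regex.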

And then there is the more technical stuff for your CISO, whether that person is in-house or subcontracted. Don’t have anyone playing this role? Figure it out by Monday.

All of the above is well and good, but I think principles–creating a culture of cyber awareness–are generally more effective, which is why I favor cyber training aimed at minimizing, monitoring, and managing cyber risk.

While there are many products and classes out there, and many of them are no doubt workable solutions, here are the basics of a cultural (and free) approach:

Minimize exposure.

Employees should never authenticate themselves to anyone unless they are in control of the interaction. Oversharing on social media expands one’s attackable surface. Be a good steward of passwords, safeguard any documents that can be used to hack an account or workstation, and in general stay vigilant. Attacks happen. All the time.

Monitor accounts.

A compromised employee can lead to a compromised company. One way your employees can make sure they haven’t been personally compromised is to check their credit reports religiously, keep track of their credit score, and review major accounts daily. Transaction alerts from financial services institutions and credit card companies can help. Your human resources department may want to explore the possibility of offering a credit and identity monitoring program to employees as an added benefit.

Manage the damage.

When something happens, get on top of it quickly and/or get help from professionals who can help navigate and resolve the situation–whatever it is.

Slow and steady wins this seemingly unwinnable race. Sound paradoxical? It is. Cybersecurity is a practice, not a product. There is no one way to solve the cybersecurity quagmire, but there are well-established routes through it, and you owe it to your company to learn them and teach them to everyone you work with.

The post How Your Company Can Prevent a Cyberattack appeared first on Adam Levin.

How to Get Rich and Be Super Creepy

If you missed the news about Russian-owned FaceApp going viral, you’ve probably been vacationing on the coast of a dust pond on the dark side of the moon. The episode highlights the general lack of privacy laws out there, and may herald the start of meaningful legislation.

FaceApp allows users to tap into the power of artificial intelligence to see what they might look like with a perfect Hollywood smile, different hair, no hair, facial hair, or, alternately, as a much older version of themselves. The app essentially offers a rogue’s gallery of oneself, making it good fodder for social media sharing, and probably much harder to enter witness protection.

While the ability to hide out in South Dakota or South Jersey may not be foremost among the concerns of most FaceApp users, neither apparently is being hacked.

The locust-like media coverage of FaceApp spurred widespread day-after anxiety about how user images might be repurposed, and about fake versions of the app laced with malware. Somehow the same fear isn’t daily in the minds of social media users, who waive many of the rights grabbed by the makers of FaceApp.

Consider that in the event user photos fall into the hands of a hacker–state-sponsored or freelance–bad things could happen. The same problem holds true for Facebook and FaceApp users alike. That very same image (or images), proffered for fun on the user side and profit on the corporate side, could be legally (or illegally) repurposed by the company that lawfully acquired it. We live in an information Wild West. As of today, in most of America we neither know, nor have the right to know, how our data is being used.

There Ought to Be A Law

While many were alarmed by the specter of Russia owning pictures of them and the privacy implications that went with that state of affairs, the bigger picture got lost in the scramble.

With FaceApp, the terms of use freaked everyone out. Sen. Chuck Schumer wrote an impassioned letter about it. Others–many others–cried foul. But here’s the thing: those terms of service were not terribly dissimilar from the terms applied by most big tech companies. The fact is, no one is protected.

As far as facial recognition goes, Snapchat and Instagram have FaceApp beat. Those two apps have far more information about user faces, and they are operating in a low regulatory oversight environment. They are in fact breaking laws that will be written in the years to come. Big Tech is to privacy what cigarettes were to healthy lungs and hearts before 1970 when the Surgeon General’s warning became mandatory.

Forget “Don’t Be Evil,” Don’t Be Creepy!

Still don’t think regulations are lacking? Google was recently out on the street offering random people $5 to be photographed. The pictures were being collected to help the company perfect 3D imaging and facial recognition. This will not be allowed to happen for five dollars or five hundred dollars in five years without full disclosure about the transaction and full consent.

Until these fast and loose practices are illegal, consumers should insist that such encroachments stop, and use their clicks and downloads (or more to the point the withholding of them) to change untoward uses of user data.

A citizen’s face is private information, and the collection of it for the purposes of identifying them and placing them here, there or anywhere–something done regularly in the Uyghur regions of China today to control that ethnic group–should not be deemed acceptable in a country like the United States, which is governed by a Constitution that doesn’t condone such encroachments.

Two Steps Forward, One Back

Facebook reached a $5 billion settlement for misrepresenting the way it handles user privacy, the SEC fined the company $100 million for misleading investors about the risks associated with the misuse of user information, and, still later in the day, Facebook admitted that it was the target of an FTC anti-trust investigation. Oh yeah, then came second quarter results, which exceeded expectations. All this in one day.

The settlement required that Facebook create new roles at the company to oversee privacy and police it, and that the company set about creating a more transparent environment for the information that the company collects, and how it’s used. The $5 billion fine was specifically for misleading users regarding their control over the ways Facebook used their data.

The settlement was met with a general outcry, with many experts saying it was toothless. With $56 billion in revenue, the fine is absorbable, and the new strictures in no way restrict the way Facebook collects and sells user information. In other words, it signaled business as usual for the time being.

“The F.T.C. is sending the message that wealthy executives and massive corporations can rampantly violate Americans’ privacy and lie about how our personal information is used and abused and get off with no meaningful consequences,” Sen. Ron Wyden said.

The anti-trust investigation is not really news. It has been speculated for some time that the FTC was looking into the possibility that Facebook used its muscle to squash competition; news of an investigation may signal a more intense phase of regulatory action regarding the way big tech uses, and, by implication, abuses consumer information.

The bottom line is everything here. Right now, it is robust. Companies are making a killing using consumer information to mint money. The time for this Forty-Niner mentality is drawing to a close. So, if you are starting a company now, it might be a good time to join the handful of entrepreneurial pioneers who are now making money by protecting consumer privacy. The boom days of data strip mining are coming to a close.


The Content Streaming Gold Rush is a Hacker’s El Dorado

Hard to imagine, but appointment television hasn’t been a real thing for more than a decade now. First, we recorded. Now, we stream. After transforming (actually killing) the movie rental industry, Netflix started streaming in 2010. It changed how consumers viewed television by providing subscribers access to a sizable library of movies and shows on a wide variety of devices.

With a low price point, it wasn’t a very attractive target for hackers, and the model worked: by 2018, Netflix streaming accounted for 15% of all worldwide downstream traffic on the internet.

The rise of Netflix’s streaming service also led to a decline in piracy. Traffic on BitTorrent, the preferred method of illicit (if not illegal) file downloading and sharing, decreased by a whopping 25 percent between 2011 and 2015. It was no longer the only quasi-infinite virtual warehouse of digital content. That approach to content had been monetized by Netflix; the paradigm of “everything, all the time” went mainstream.

For those who ask, “How so?”: piracy has long been a hot-button topic among intellectuals, some of whom argue it’s not about cost (free, in the case of piracy), but rather ease of use. Consumers could see popular shows and movies on multiple platforms without the maelstrom of channels and hidden fees presented by cable plans, and without having to resort to piracy.

Netflix created a commercial play at the piracy game–all above board, and it worked.

The Wrong Idea

Intellectual viewpoints are not always welcome in boardrooms where decisions about distribution are made, and if in fact they wiggle their way in, they are not often embraced. The entertainment industry didn’t see the Netflix move as a mainstreaming of ease of use.

Enter the “walled garden” approach.

You see it everywhere. Instead of sharing its intellectual property with Netflix, Disney is launching its own streaming service, Disney+. NBC is pulling its tremendously popular workplace comedy, The Office, from Netflix and Hulu and making it available exclusively on NBCUniversal’s own streaming service. AT&T is following suit with its recent acquisition of Time Warner and HBO. Apple, Google, and Facebook are all entering the ring as well. Most of these services are throwing massive amounts of money at original content and licensing to make their own platforms “must-have.”

What amounts to a cash grab for streaming services is a Byzantine snarl for consumers. Anyone who watched Avengers: Infinity War on Netflix in the last year will need to see its sequel, Endgame, on Disney+. Soon, certain podcasts will not be available on both Android and iOS. Support for streaming services on devices can be revoked, as was the case for Hulu on Samsung Smart TVs, or HBO GO on the Xbox 360. Movies “purchased” on Apple may vanish from a consumer’s account if the rights lapse. Streaming services are becoming Balkanized, and as the need for different accounts, payments, memberships, and in some cases, hardware becomes ever more complex, a BitTorrent-style warehouse may once again become the more attractive alternative for tech-savvy users.

This fee-ridden decentralization of content has no doubt contributed mightily to the rebound of piracy, and in this new eco-system hackers are the main beneficiaries.

Yo Ho Ho

To pirate a show or a movie, one need only download a small file from a website such as The Pirate Bay and open it with a BitTorrent client (most of them are free). A user then downloads pieces of said movie or show from however many people are sharing that file, while in turn uploading to other users. The more popular the video being downloaded, the faster it goes. Depending on your connection, a full high-quality movie can be downloaded in less time than it takes to make a bowl of popcorn.
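That piece-by-piece exchange works because the protocol builds in integrity checks: a torrent’s metadata carries a SHA-1 digest for every fixed-size piece, and clients re-hash each piece they receive from a peer before accepting it. A minimal sketch of that mechanism (the piece size and data here are made up for illustration):

```python
import hashlib

PIECE_SIZE = 256 * 1024  # a common piece size; real torrents vary

def piece_hashes(data: bytes) -> list[bytes]:
    """Split content into fixed-size pieces and hash each with SHA-1,
    as a .torrent file's 'pieces' field records."""
    return [
        hashlib.sha1(data[i:i + PIECE_SIZE]).digest()
        for i in range(0, len(data), PIECE_SIZE)
    ]

def verify_piece(index: int, piece: bytes, expected: list[bytes]) -> bool:
    """A client re-hashes each piece received from a peer and discards
    it when the digest doesn't match the torrent's manifest."""
    return hashlib.sha1(piece).digest() == expected[index]
```

Note that this only proves a piece matches the torrent’s own manifest; if the torrent packages malware to begin with, every “verified” piece is still malicious.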

Is it any wonder that many users decided to watch the Game of Thrones finale using BitTorrent?

From a cybersecurity perspective, BitTorrent is beyond problematic. It is in fact “accepting candy from a stranger in a windowless van” dangerous. Downloading a pirated torrent ultimately means getting files from a network of anonymous sources, and not just downloading them, but actually opening and running them. Malware has only gotten more sophisticated in recent years; if a payload can be delivered through a single link or file in a phishing scam, it doesn’t take much to imagine what can be digitally smuggled within a several gigabyte download of the latest Spider-Man movie. BitTorrent provides a relatively simple way to infect thousands of computers without needing to actively target anyone. It’s passive and potentially quite pervasive.

If this sounds speculative or far-fetched, it could be that you’re simply not reading enough news. For instance, a hacking campaign has been targeting South Korean BitTorrent users for the last few weeks by embedding backdoors into pirated television episodes. It’s only a matter of time before we see similar campaigns closer to home–and it’s a safe bet there already are such hacks happening in the U.S. market now.

The threat to corporate and government networks shouldn’t be overlooked. When the U.S. Geological Survey’s networks were infected with Russian malware in late 2018, the culprit was traced back to malware embedded in pornographic videos downloaded by an employee; the infection spread to a USB drive and a mobile device, and finally compromised that employee’s entire office network.

The Takeaway

Understood correctly, piracy presents an object lesson in the unintended consequences of a business decision in the realm of cybersecurity.

Movies, television shows and podcasts are expensive to produce, and companies are necessarily going to try to get the most bang for their buck by trying to corral the cash flow around their intellectual property. Multiple streaming accounts are expensive and often confusing to maintain, and consumers are similarly going to try to go the cheapest route, namely by pirating shows rather than juggling plans and platforms–especially when doing so creates a one-stop shopping experience.

Hackers tend to seek the path of least resistance. An increasing number of potential targets trading relative security for convenience represents a lucrative and potentially dangerous avenue for attack. But it’s avoidable. Digital marketplaces are more profitable when they are free(er) and (more) open.


The Government Claims a Private Sector Fail, But It Just Doesn’t Know How to Pick a Vendor

The Government Accountability Office recently released a report that analyzed the results as well as the relative effectiveness of the identity theft services, including insurance, provided to victims of data breaches and other forms of digital compromise.

The report is entitled “Range of Consumer Risks Highlights Limitations of Identity Theft Services,” and it largely reiterates the GAO’s 2017 assertion that the identity theft insurance provided by agencies in the wake of a data breach was both unnecessary and largely ineffective. The findings also included a conclusion that credit monitoring, identity monitoring, and identity restoration services were of questionable value. The GAO recommended that Congress explore whether government agencies should be, or indeed are at present, legally required to offer victims of federal data breaches any of the services examined in the report.

At the center of the report’s finding was $421 million set aside by the Office of Personnel Management for the purchase of a suite of identity protection products and services following the 2015 data breach that exposed extremely sensitive personal information of 22 million individuals. According to the report, the “obligated” money expended was largely squandered.

“3 million had used the services and approximately 61 individuals had received payouts from insurance claims, for an average of $1,800 per claim… GAO’s review did not identify any studies that analyzed whether consumers who sign up for or purchase identity theft services were less subject to identity theft or detected financial or other fraud more or less quickly than those who monitored their own accounts for free…” To be clear, there is a jump in logic here. Just because the GAO was unable to find data to support these services does not mean the services are ineffective. In fact, it could just as easily be that the services work.

Then there was the GAO’s observation that, “The services also do not prevent or directly address risks of nonfinancial harm such as medical identity theft.” When millions of Social Security Numbers have been exposed, prevention of identity theft is purely aspirational. Frankly, this assertion would not pass muster with the FTC, since it is actually frowned upon to suggest that any service provider can prevent identity theft. The goal is awareness and targeted action, and medical fraud, in particular, is an area where detection is, at best, difficult and resolution is often complicated and requires professional assistance.

While the report raises an important point, it is too limited in scope to pinpoint it effectively. Not all identity theft services are the same. Those offered by the OPM to victims of its massive breach may or may not have been ineffective, but if they were, most likely it was because they were inadequate to the task or “mis-underestimated” during onboarding, not because they’re unnecessary. In other words, it’s not a question of how much money changed hands, it’s how those funds were spent.


In the case of the services offered to victims of the OPM breach, the results do look damning: 61 paid insurance claims out of 3 million service users is the kind of figure unworthy of rounding-error status. The above result must not, however, be mistaken for a demonstration of why identity theft insurance isn’t useful, but rather should be understood as a real-life metric of the usefulness of the specific plan provided, and the applicability of that plan’s provisions to the majority of the individuals covered by it.

Consider this counterpoint: If the services provided worked, little to no insurance payments would be necessary. (See above.)

Rather than scrapping the requirement, policies should either be expanded to cover more of the expenses associated with identity theft (there are many), or they should prioritize more robust monitoring tools and full identity fraud remediation solutions with the funds available.

Lack of Participation

Another issue raised by the report is participation on the part of those affected by data breaches. According to data from OPM, only 13 percent of those affected took advantage of the services made available to them, at least as of September 30, 2018. While the number may seem low, anecdotally it isn’t out of line. Regardless, the question remains: Were those services made available in an accessible way that encouraged action on the part of users?

History suggests that paltry participation figures are due in no small part to a lack of awareness among consumers of the dangers posed by the exposure of personal information and the often free (to the consumer) availability of products and services that help manage the damage. Workplace education in this area is lacking, for sure, but that alone doesn’t explain it. Beyond breach fatigue, a larger factor may be lack of confidence in or clarity about the services provided–and that is an issue that belongs to vendor selection, because it’s their job to make clear what’s at risk and how the proffered solutions can help.

As described elsewhere in the report, organizations that offer services don’t do it based on what should be the pivotal question here: “how effective these services are.” Instead, “some base their decisions on federal or state legal requirements to offer such services and the expectations of affected customers or employees for some action on the breached entities’ part.” If the standard is to offer a certain amount of protection, they do that. Does it matter what kind? Can it be a generic? That’s the crux of the matter here.

Spoiler alert: It matters what service provider you choose. If you take nothing else away here, let it be this: identity protection services and insurance are useless in a low-information environment. Indeed, if the service provider doesn’t produce an ocean of content that explains to users why they need to use the services, then it’s probably not right for mass allocation.

Data breaches have become so commonplace and the threat of identity fraud so widespread that token offerings to those affected are increasingly viewed as a B.S. attempt at better optics while a company is in disaster mode. A vicious cycle ensues: lack of confidence in a breach response leads to lack of participation in identity theft protection offered, and lack of participation is used to justify offering less comprehensive protection–all while identity theft incidents and data breaches increase.

The GAO report raises many salient points about the services offered in the wake of data breaches. The current legislation and its requirements for both identity theft protection services and insurance can rightly be viewed as an expensive boondoggle with little to show when it comes to actual results, but the conclusion of the GAO–to pull back instead of getting the right services in place to protect against future breaches and assist their victims when they can’t be avoided–is worrisome.

We need to focus now more than ever on high-information, robust solutions that provide greater protection as well as more guidance and assistance–not less.



What This Report on Cyber Risk Gets Wrong

The Marsh brokerage unit of Marsh & McLennan recently announced a new evaluation process called Cyber Catalyst, designed to determine the usefulness of enterprise cyber risk tools.

The goal of the new offering is to identify and implement industry-wide standards to help cyber insurance policyholders make more informed decisions about cyber-related products and services; basically, what works and what doesn’t. Other major insurers participating in Cyber Catalyst include Allianz, AXA XL, AXIS, Beazley, CFC, and Sompo International.

While this collaboration between insurance companies is unusual, it’s not entirely surprising. Cyber insurance is a $4 billion market globally. While it’s difficult to accurately gauge how many hacking attempts were successfully foiled by the products targeted here, data breaches and cyber attacks on businesses continue to increase in frequency and severity. The 2019 World Economic Forum’s Global Risks Report ranks “massive data fraud and theft” as the fourth greatest global risk, followed by “cyber-attacks” in the fifth slot.

Meanwhile, cybersecurity products and vendors have been, to be charitable, a mixed bag.

Good in Theory

From this standpoint, Cyber Catalyst seems like not just a good idea, but an obvious one. A standardized metric to determine which cybersecurity solutions are no better than a fig leaf and which ones provide real armor to defend against cyberattacks is sorely lacking in the cybersecurity space. By Marsh’s own estimates, there are more than three thousand cybersecurity vendors amounting to a $114 billion marketplace. Many of them don’t inspire confidence on the part of businesses.

Insurers have a vested interest in determining the effectiveness of cybersecurity products, weeding out buggy software and promoting effective solutions that can help address risk aggregation issues. Businesses and their data are in turn better protected, and at least in theory, they would pay less for coverage. Everyone wins.

Insurance companies did something similar in the 1950s with the creation of the Insurance Institute for Highway Safety. In the face of rising traffic collisions and fatalities, the insurance industry collaborated to establish a set of tests and ratings for vehicles, and the result has been a gold standard for automotive safety for decades. Using a similar strategy for cybersecurity would at least in theory help mitigate the ever-increasing costs and risks to companies and their data.

Or Maybe Not

Where the analogy to the Insurance Institute for Highway Safety breaks down is here: the threats to car drivers and passengers have ultimately stayed the same since its inception. Everything we’ve learned over the years about making cars has progressively led to safer vehicles. Information technology is vastly different in that iterative improvements in one specific area don’t necessarily make an organization as a whole safer or better protected against cyber threats–in fact, sometimes they can have the opposite effect, when a newly added feature turns out to be a bug.

Cyber defenses are meaningless in the presence of an unintended, yet gaping, hole in an organization’s defenses. Then there is the march of sound innovation. Products that provided first-in-class protection for a business’s network a few years ago may no longer be so great where cloud computing, virtual servers, or BYOD are concerned. The attackable surface of every business continues to increase with each newly introduced technology, and it seems overly optimistic to assume the standard evaluation process (currently twice a year) would be able to keep pace with new threats.

There’s also the risk of putting too many eggs into one basket. While the diffuse nature of the cybersecurity market causes headaches for everyone involved, establishing a recommended solution or set of solutions effectively makes them an ideal target for hackers. While it’s important to keep consumers and businesses informed of potential risk to their information, cybersecurity issues require a certain amount of secrecy until they have been properly addressed. Compromising, or even identifying and reporting on a vulnerability before it’s been patched in an industry standard security product, process or vendor practice could cause a potentially catastrophic chain reaction for cyber insurers and their clients.

Culture Eats Strategy for Breakfast

Where the Cyber Catalyst program seems to potentially miss the mark is by overlooking the weakest link in any company’s security (i.e., its users). An advanced cybersecurity system or set of tools capable of blocking the most insidious and sophisticated attack can readily be circumvented by a spear phishing campaign, a compromised smartphone, or a disgruntled employee. Social engineering cannot be systematically addressed. Combatting the lures of compromise requires organizations to foster and maintain a culture of privacy and security.

The risk of over-reliance on tools and systems at the expense of training, awareness, and a company culture that puts cybersecurity front and center must not be underestimated. While it is tempting to opt for the quick and easy approach of purchasing a recommended solution, companies still need a comprehensive, evolving playbook to meet the ever-changing tactics of persistent, sophisticated, and creative hackers.

While industry-wide cooperation may be a good thing, it’s vital for companies and insurers alike to recognize that any security program or service is fallible. Without an equal investment in functional cybersecurity–one that places as much value on training employees and staying aware of new threats as on tools and systems–the rise in breaches and compromises will continue.

This article originally appeared on

The post What This Report on Cyber Risk Gets Wrong appeared first on Adam Levin.

The One Word No One Is Talking About in the Disney-Fox Deal

On March 20, The Walt Disney Company completed its purchase of 21st Century Fox. The acquisition added huge properties like The Simpsons and National Geographic, as well as blockbuster film franchises, to Disney’s star-studded stable that includes Star Wars, Marvel Comics, Pixar, the Muppets, and a decades-long catalog of major intellectual properties.

While major acquisitions and mergers often give rise to antitrust issues–and this one was no exception–the transfer of properties with complex privacy policies, and how that data is handled going forward, has not been a big topic of discussion.

Corralling such a massive amount of children’s and family-friendly entertainment under one roof may seem, at least on the surface, like a world-friendly move, but to quote a song from Disney’s 1998 direct-to-video sequel, “Pocahontas 2”–“things aren’t always what they appear.”

While Disney’s acquisition lacks the dark-mirror quality of Amazon’s ever-expanding home networking business or Google’s inescapable array of services (all of them tracking users with mindboggling granularity), there is considerable consumer data tied to the properties that just changed hands. All of it is governed by the privacy policies attached to those properties–policies that also changed hands but cannot be changed without user consent. This is not about whatever privacy fail we might expect next from Facebook. It’s about the potential privacy conflicts caused by Disney’s acquisition of Fox.

It Was All Started by a Mouse

Walt Disney liked to remind people that his company started humbly, “by a mouse.” Today, we are also dealing with something mouse-related: Our data.

Few of us ever read the privacy policies we agree to when we download software or an app–the exception being those among us who are in the business of selling data. Privacy policies are binding. When a company changes hands, the data in its possession remains governed by the privacy policy that was in place when the user accepted its terms, even after that data is transferred to the new owner. Policies can be changed with user consent, but that consent is usually given by users who haven’t studied the new terms of engagement.

Disney, of course, pre-dates the era of the surveillance economy, but it has invested aggressively in data analytics and customer tracking. Strategic data deployment has been central to Disney’s increased profits in recent years, both at its theme parks and its brick-and-mortar stores. While RFID customer tracking, facial recognition, and personalized offers based on prior purchases and behavior can all vastly improve the customer experience, we’ve seen far too many instances of companies abusing their privileged access to consumer data.

The “Don’t Be Evil” Option

Companies can start with good intentions (see Google’s recently retired “Don’t Be Evil” motto) and eventually expand their data mining practices to Orwellian dimensions. It’s a matter of grave concern.

When a disproportionate number of the customers being tracked are children, there is even greater cause for concern. That’s the hot-button aspect of prime interest in the Disney-Fox deal.

Case in point, the 2017 lawsuit filed against Disney and still pending in court that claims the company was tracking children through at least 42 of its mobile apps via unique device fingerprints to “detect a child’s activity across multiple apps and platforms… across different devices, effectively providing a full chronology of the child’s actions.”

Disney denies these allegations, but it did cop to generating “anonymous reporting” from specific user activity through “persistent identifiers,” and to the information being collected by a laundry list of third-party providers, many of which are ad tracking platforms.
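The cross-app “chronology” described in the lawsuit can be illustrated with a short sketch. Everything here is hypothetical–the app names, field names, and the `build_chronology` helper are illustrative assumptions, not any company’s actual data schema. The point is only how a persistent device identifier lets separate event streams be stitched into a single timeline.

```python
from collections import defaultdict

# Hypothetical event logs from three separate apps, each tagged with the
# same persistent device identifier ("fingerprint").
app_events = [
    {"app": "puzzle_game", "device_id": "fp-93ac", "ts": 1609459200, "action": "level_complete"},
    {"app": "video_app",   "device_id": "fp-93ac", "ts": 1609462800, "action": "watched_trailer"},
    {"app": "sticker_app", "device_id": "fp-93ac", "ts": 1609466400, "action": "opened_store"},
]

def build_chronology(events):
    """Group events by device fingerprint, then sort each group by
    timestamp, yielding a cross-app activity timeline per device."""
    timelines = defaultdict(list)
    for e in events:
        timelines[e["device_id"]].append(e)
    for evs in timelines.values():
        evs.sort(key=lambda e: e["ts"])
    return dict(timelines)

timeline = build_chronology(app_events)
# One device id now maps to an ordered, cross-app record of activity.
```

No account, name, or login is needed for this join to work–the fingerprint alone is enough, which is exactly why persistent identifiers are at the center of the complaint.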

The company is by no means alone in this practice. A 2018 study found that 3,337 family- and child-oriented apps available on the Google Play store were improperly tracking children under the age of 13. It’s not hard to see why. If consumer data is valuable, starting to collect data on an individual as early as possible can provide marketing companies with extremely deep data about their target’s preferences and habits long before that target has disposable income. The U.S. Children’s Online Privacy Protection Rule (“COPPA”) was created to stop this from happening. But as we’ve seen from companies like TikTok, it’s often skirted or flouted outright, and the penalties are often laughable compared to the profits.

The collection of data on kids is a problem. Enter Disney, the sheer scale of whose empire makes its data position comparable to that of Facebook or Google. The same is true of the Fox properties, though to a lesser extent. The upshot: an immense amount of data just changed hands, and no one is talking about it–and they should be.

Changing Privacy Policies

While privacy policies are easy to find, they are not so much fun to read. They are not all alike. But without engaging in a tale of the tape regarding Disney and Fox policies, there is still reason for concern.

The problem from a privacy standpoint is a side effect of Disney’s aggressive expansion. Those of us who signed up for Marvel Comics sites or apps before 2009, for Star Wars before 2012, or for National Geographic before this year all belong to Disney’s data holdings now. We have no way of knowing how our data is being used, or whether the privacy policy we agreed to is the one governing the current use of our data. Disney announced changes to each of its new properties’ privacy policies on its main website and updated them accordingly, but is that enough?

Companies can reserve the right to change their privacy policies, and if we don’t like it we can always opt out. Things become murkier when data is purchased by a third party; this can happen with acquisitions, or when major retailers go belly up. It happened when Radio Shack went out of business, and its entire customer database was suddenly put up for sale to the highest bidder.

The creation of meaningful standards for consumer privacy is a moving target, but it should be a legislatively mandated consideration for large scale mergers and acquisitions. Once a customer’s information is sold, there’s no way to get it back. An effective stopgap might be to demand a data transfer “opt out” button when we’re giving consent to privacy policies. When it comes to children, we might even consider legislating automatic “opt out” for anyone under a certain age. Where safeguarding children’s data is concerned, there’s still much work to be done.



Facebook May Have Gotten Hacked, and Maybe It’s Better We Don’t Know

Unless you live under a bottle cap rusting on the bottom of Loon Lake, you know that if you’re concerned about privacy, Facebook CEO Mark Zuckerberg is the gift that keeps on taking.

A week after it landed with a curious (and most likely spurious) thud, Zuckerberg’s announcement about a new tack on consumer privacy still has the feel of an unexpected message from some parallel universe where surveillance (commercial and/or spycraft) isn’t the new normal.

“I believe a privacy-focused communications platform will be even more important than today’s open platforms,” Zuckerberg said. “Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.” And maybe share more freely their inmost wants and needs, thereby making it easier to serve them ads that convert.

While Facebook has a lengthy history of leaks, gaffes, and outright violations of privacy for users and non-users alike, and Zuckerberg has made unfulfilled promises to remedy its problematic and unpopular practices, one needn’t look further than recent news to view this pivot in company policy with deep skepticism:

  • Facebook’s lobbying against data privacy laws worldwide: Leaked internal memos revealed an extensive lobbying effort against data privacy laws on Facebook’s part, targeting the U.S., U.K., Canada, India, Vietnam, Argentina, Brazil, and every member state of the European Union.
  • Facebook’s Two-Factor Authentication phone numbers exposed: After prompting users to provide phone numbers to secure their accounts, Facebook allows anyone to look up their account by using them. These phone numbers are publicly accessible by default, and users have no way of opting out once they’ve provided them. (The company has also used security information for advertising in the past.)
  • Mobile apps send user data to Facebook (even for non-Facebook users): A study by Privacy International showed that several Android apps, including Yelp, Duolingo, Indeed, the King James Bible app, Qibla Connect, and Muslim Pro all transmit users’ personal data back to Facebook. A later update showed that iPhone users were similarly affected: iOS versions of OKCupid, Grindr, Tinder, Migraine Buddy, Kwit, Muslim Pro, Bible, and others were also found to eavesdrop on Facebook’s behalf.
  • Hundreds of millions of user passwords left exposed to Facebook employees: News recently broke that Facebook left the passwords of between 200 million and 600 million users unencrypted and available to the company’s 20,000 employees going back as far as 2012.

Facebook has had more than its share of bad press in recent years, including Russian meddling in U.S. elections and complicity in a genocide campaign in Myanmar, but the company’s antipathy toward user privacy seems to belie a wider disdain for the public interest, which leads to a bigger question.

Facebook has become the most profitable, debt-free business in the world by selling the private information of its users. Do you really think it’s going to stop? Privacy is increasingly important to consumers, but Facebook is proof that a company need not respect the privacy of the lives it comes in contact with in order to thrive–quite the contrary.

When Did You Stop Beating Your Users?

It seems fair to say that Facebook has not earned the benefit of the doubt when it comes to being open and transparent with the public, and I’m not just saying that because I’ve been betting against the company’s stock (I have a fair amount, and, possibly perversely, I think it’s still a sound investment).

I bring this up because Facebook could be doing something to make itself an even better investment. In fact, any business can do it, and increase its value in the process. Put simply, companies can make themselves harder to hit by hackers, and less prone to compromise. While it’s impossible to know for certain whether a company has been compromised or not, organizations have reputations. Reputations tend to color the way we read events. And finally, reputation management in the day and age of near-constant compromise and breach requires transparency–or at least the perception of transparency.

This was the cybersecurity song stuck in my head when Facebook, Instagram, and WhatsApp experienced widespread service outages on March 13, marking the company’s longest ever downtime.

An announcement on Facebook’s Twitter feed described the outage as a result of a “server configuration change,” contradicting a widespread assumption that it was caused by a cyberattack.

A little context: MySpace recently announced a major migration gaffe: “As a result of a server migration project, any photos, videos, and audio files you uploaded more than three years ago may no longer be available on or from Myspace.” People in the know have estimated the mistake affected 53 million songs from 14 million artists.

The same day as the MySpace buzzkill, Zoll Medical reported it had experienced a data breach during an email server migration that exposed select confidential patient data, including patient names, addresses, dates of birth, limited medical information, and some Social Security numbers.

While Facebook’s statement regarding its server configuration change may have been accurate, there may have been more to the story. The problem here is that we’re not dealing with a company that releases reliable information (that isn’t associated with their users as marketing targets).

While the outage may indeed have been caused by an honest sort of epic fail, Facebook has earned a dose of healthy skepticism. Indeed, scandals and overall wrongdoing sometimes seem the way of the world at Facebook, and as a result of this perception–true, false, or truth-y–there is a significant deficit of trust among the general public. While Facebook is too large to fail as a result of this situation, small- to medium-size companies cannot afford the luxury of being perceived as untrustworthy.

Perception Is Everything

Gustave Flaubert said, “There is no truth. There is only perception.” It mattered when he wrote that, and it still matters today.

When a company doesn’t report a cyberattack–or only reports the more harmless aspects of an incident–that needn’t always be ascribed to sinister motives. Consider what would have happened to Facebook if 1) the recent downtime was caused by an attack (possibly made possible by the configuration change the company reported), and 2) the company admitted it. Admitting publicly that a cyberattack effectively brought a multibillion-dollar business to a halt for the better part of a day would, first and foremost, have the potential to encourage further attacks. Denying anything happened gives system administrators more time to identify and patch newly discovered vulnerabilities. Then there are the repercussions to the company’s stock price. In short, there is no upside.

Regardless of whether the Facebook outage was the result of a cyberattack or internal error, one factor that’s been largely overlooked is the company’s plan to integrate all of its platforms–specifically to make the previously separate Messenger, WhatsApp, and Instagram applications interoperable.

This cross-platform integration represents a monumental undertaking. Each of these services has, at a minimum, hundreds of millions of active users, all with different security protocols, data structures, and network requirements. Changing the architecture of three separate applications at a fundamental level not only opens the door to human error and system glitches but also presents a golden opportunity for hackers, and that should be what we’re talking about–before anything bad happens.

The primary means of detecting cyber incidents for trained experts or artificial intelligence is to look for inconsistent or unexpected behavior in a system: An influx of traffic could mean a major news event, but it could also mean a DDoS attack. An unexpected delay in network connections could mean a hardware failure, but it could also signify a hijacked DNS server.
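That “unexpected behavior” signal can be sketched in a few lines. The traffic numbers and threshold below are illustrative assumptions, not a real detection rule; the point is only that a statistical outlier flags something worth investigating, without saying whether it’s an attack or a glitch.

```python
import statistics

# Hypothetical requests-per-minute readings from a quiet baseline period.
baseline = [1020, 980, 1005, 995, 1010, 990, 1000, 1015]

def is_anomalous(observed, history, z_threshold=3.0):
    """Flag a reading whose z-score against recent history exceeds the
    threshold -- the 'inconsistent or unexpected behavior' an analyst
    or model would investigate further. It cannot distinguish a DDoS
    attack from a news spike; that judgment comes later."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return observed != mean
    return abs(observed - mean) / stdev > z_threshold

is_anomalous(1003, baseline)   # ordinary fluctuation -> False
is_anomalous(25000, baseline)  # sudden spike worth investigating -> True
```

The same ambiguity the article describes lives in that final judgment call: the detector only says “this is unusual,” and a human (or a press release) decides what story to tell about it.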

It doesn’t matter what caused Facebook’s recent day-long inter-platform outage. There is a valuable takeaway for businesses regardless: As Facebook trundles toward platform unification, it will be increasingly vulnerable to attack. While all companies are easier to breach when they are making a major change, Facebook and its holdings may represent a clear and present danger the likes of which we’ve never seen, and one that can help lead the way to better cyber solutions, no matter how big a company is.

