Capital One’s announcement of a hack that affected more than 100 million people should have you asking not what, but who’s in your wallet. The company estimated a year-one expense ranging from $100 million to $150 million. Equifax recently settled on a penalty of more than $700 million. Getting cyber wrong is expensive.
Getting cyber wrong–that is, all the ways it can happen–is of course also complex. There will soon be more than 30 billion connected devices “out there” in consumer hands, on their wrists, in their laps, cars, kitchens, walls, and, yes, at work. In short, IoT is everywhere, and our connectables almost always go with us.
Okay, so the obvious metaphor everyone is used to is the vectors of a virus on the move. The president catches a bug in North Korea, and next thing you know everyone at Mar-a-Lago has it. Rachel Maddow catches a cold while fly-fishing on the Housatonic, and next thing you know the whole Democratic establishment has it. Bob from accounting goes on vacation with his laptop, and the next thing you know, millions of customers get hacked.
Bob, you’re fired.
It’s All About Attackable Surface
Tortoises have cyber down pat, both literally and metaphorically. Ever heard of a tortoise getting hacked? The reason you haven’t is that there’s nothing to get.
Tortoises have no finances and, taken as a group, they rarely have names and social media accounts. When they do have names and Instagram accounts, there’s a hackable human somewhere nearby. Tortoises are not the problem.
If only our employees had the cyber equivalent of what tortoises have. What’s not to like about having a hard shell? Better still, what about one into which you can retract all your vulnerable areas? They also move slowly, which in fable allowed at least one of them to beat a hare in a foot race. Among other things, this slowness means fewer clicked links in phishing emails.
Tortoises have a lot of what it takes to be cybersafe–though admittedly in an environment where things have to get done, often quickly, they don’t make the most attractive choice for corporate spirit animal.
Cyber Is a Marathon, Not a Sprint
So, the order of the day is for sure not something like, “Consumers and businesses alike: Be the tortoise!” Not quite. The tortoise is to the cybersecurity of your enterprise what campaign slogans like “Make America Great Again” or “Yes We Can” are to the country. I mean, let’s face it, tortoises are not renowned for their earning capacity. That said, they can be inspirational–or at least aspirational. They can help us think about what good cyber looks like.
My marketing department would do a facepalm if I were to recommend courses that you can offer employees to improve their cybersecurity practices, because I own a company that is dedicated to helping companies and individuals stay as safe as possible in our current state of persistent threat. That said, there are some guiding principles of cybersecurity, particularly in the workplace, that I will share with you. They are at the bedrock of our practice, because they work.
Choices? There’s Really Only One
There is a critical mass of options out there for cybersecurity employee training, online and otherwise. By now, we should expect to be seeing puppet shows on the dangers of phishing.
All that aside, the best solution is free. It is creating a culture of cyber threat awareness and best practices. As Peter Drucker once said, “Culture eats strategy for breakfast.”
While I am only going to name one here, there are programs–both for-profit and public advocacy based–that help small and medium-sized businesses learn to be safer and more secure. A non-profit called the National Cyber Security Alliance offers a series of in-person, highly interactive and easy-to-understand workshops based on the National Institute of Standards and Technology (NIST) Cybersecurity Framework.
For-profit choices are legion. They may offer continuous training programs to help thwart phishing attacks and malware infections. There may be modules to go through for employees, or PowerPoint courses, or quizzes. Other programs cover specific topics, like how to navigate the web without picking up a virus, how to recognize social engineering (a fancy term for the hacking practice of luring in unsuspecting victims with links and offers of this or that slice of paradise), safe mobile practice, safe travel practices, safe email practice, and much more.
Other companies offer training courses as part of the onboarding process, and it should go without saying that at this point in the story arc of cyber insecurity, any enterprise that doesn’t secure employee devices during the onboarding process is courting disaster.
Cybersecurity Is Not a Spectator Sport
Whether you send daily (or weekly) emails listing the latest threats or you talk about it at all-hands meetings, cyber needs to be a part of everyday life to keep your enterprise as safe as possible.
The basic tasks that need to be accomplished:
1. Phish-proof your employees. Teach employees how to recognize phishing attacks, and what happens when they occur.
2. Foster good end-user practices. Make sure employees know what good password practices look like. Talk about computer-hygiene practices, and commonsense defenses against the threat of insider attacks.
3. Change management. Change fosters insecurity, and that’s when we’re most vulnerable to attack. Teach employees how to manage cyber during enterprise-wide change.
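Phish-proofing (item 1 above) is easier when employees know the concrete warning signs in a link. Below is a minimal, hypothetical Python sketch of the kind of URL heuristics a training exercise might walk through; the signal list is illustrative only, not a real detection engine, and the function name is my own invention.

```python
import re
from urllib.parse import urlparse

def phishing_risk_signals(url: str) -> list:
    """Return heuristic warning signs for a URL.

    Illustrative only -- real anti-phishing tools use far richer
    signals (reputation feeds, certificate data, ML models).
    """
    signals = []
    host = urlparse(url).hostname or ""

    # Raw IP addresses are rarely used by legitimate login pages.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        signals.append("raw IP address host")

    # Punycode domains can disguise lookalike (homoglyph) names.
    if host.startswith("xn--") or ".xn--" in host:
        signals.append("punycode domain")

    # user@host URLs hide the real destination after the '@'.
    if "@" in url.split("?", 1)[0]:
        signals.append("'@' in URL")

    # Long subdomain chains often imitate a trusted brand.
    if host.count(".") >= 4:
        signals.append("deeply nested subdomains")

    return signals
```

A training session might run known-bad and known-good links through checks like these so employees internalize what to look for before they click.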
And then there is the more technical stuff for your CISO, whether that person is in-house or subcontracted. Don’t have anyone playing this role? Figure it out by Monday.
All of the above is well and good, but I think principles–creating a culture of cyber awareness–are generally more effective, which is why I favor cyber training aimed at minimizing, monitoring, and managing cyber risk.
While there are many products and classes out there, and many of them are no doubt workable solutions, here are the basics of a cultural (and free) approach:
Employees should never authenticate themselves to anyone unless they are in control of the interaction. Oversharing on social media expands one’s attackable surface. Be a good steward of passwords, safeguard any documents that can be used to hack an account or workstation, and in general stay vigilant. Attacks happen. All the time.
A compromised employee can lead to a compromised company. One way your employees can make sure they haven’t been personally compromised is to check their credit reports religiously, keep track of their credit score, and review major accounts daily. Transaction alerts from financial services institutions and credit card companies can help. Your human resources department may want to explore the possibility of offering a credit and identity monitoring program to employees as an added benefit.
Manage the damage.
When something happens, get on top of it quickly and/or bring in professionals who can navigate and resolve the situation–whatever it is.
Slow and steady wins this seemingly unwinnable race. Sound paradoxical? It is. Cyber security is a practice, not a product. There is no one way to solve the cybersecurity quagmire, but there are very established routes through it, and you owe it to your company to learn them and teach them to everyone you work with.
Local governments and agencies in twenty-three Texas towns were hit by a coordinated ransomware campaign last week.
The Texas Department of Information Resources (DIR) became aware of the ransomware campaign after being contacted by the municipal governments of several towns that were unable to access critical files. The DIR has yet to identify the affected government entities and is currently working with the Texas Military Department as well as the Texas A&M Cyberresponse and Security Operation Center to investigate the attack and restore critical services where possible.
Although the DIR has released few details about the ransomware campaign, it did confirm that the attack originated from a single “threat actor.” The ransomware deployed is known as .JSE and typically works by encrypting files and appending the suffix “.jse.” .JSE differs from other ransomware variants and malware in that it doesn’t leave behind a ransom message.
U.S. local governments have increasingly been targeted by ransomware campaigns, including Baltimore, Atlanta and several Florida cities. Municipal governments tend to have lower budgets for IT and cybersecurity support, and are often willing to pay ransom to be able to restore services.
Hacker Paige Thompson, the main suspect in the recent Capital One data breach, may also be responsible for hacking as many as 30 other companies and organizations.
Prosecutors from the Seattle U.S. Attorney’s Office announced the discovery of data from more than 30 targeted entities in the bedroom of Paige Thompson, who was arrested in connection with the Capital One data breach. While the Office declined to identify the other potential victims, Israeli security firm CyberInt believes they may include Vodafone, Ford, and Michigan State University.
There has been widespread speculation that Capital One was one of multiple targets based on recovered Slack messages from Thompson’s account, where she reportedly referred to several other companies being vulnerable to the same misconfiguration exploited in the Capital One attack.
“The government expects to add an additional charge against Thompson based upon each such theft of data, as the victims are identified and notified,” said prosecutors.
The Capital One data breach compromised over 144,000 Social Security numbers and a million Canadian Social Insurance numbers from credit card applications. Thompson currently faces up to five years in prison and a $250,000 fine, but both penalties may increase upon further investigation.
The post Woman Charged in Capital One Breach May Have Hacked Over 30 Companies appeared first on Adam Levin.
Facebook hired hundreds of third-party contractors to transcribe recordings of the site’s users, according to Bloomberg.
The social media giant hired the contractors to transcribe audio gleaned from its Messenger app in order to ensure the accuracy of its artificial intelligence-based voice recognition software. It has since discontinued the practice.
“[W]e paused human review of audio more than a week ago,” Facebook said in an announcement released Tuesday.
Some commentators noted that “paused” is not the same as “permanently ceased.” Facebook is the latest company to come under scrutiny for hiring outside contractors to listen in on recordings of users, following Apple, Amazon, and Google. While each of the companies maintains that the recordings were anonymized, several whistleblowers reported potentially egregious violations of user privacy. Apple and Google have since stopped the practice.
Read the Bloomberg article here.
The post Facebook Hired Outside Contractors to Transcribe User Audio appeared first on Adam Levin.
If you missed the news about Russian-owned FaceApp going viral, you’ve probably been vacationing on the coast of a dust pond on the dark side of the moon. It highlights the general lack of privacy laws out there, and may herald the start of meaningful legislation.
FaceApp allows users to tap into the power of artificial intelligence to see what they might look like with a perfect Hollywood smile, different hair, no hair, facial hair, or, alternately, as a much older version of themselves. The app essentially offers a rogue’s gallery of oneself, making it good fodder for social media sharing, and probably much harder to enter witness protection.
While the ability to hide out in South Dakota or South Jersey may not be foremost among the concerns of most FaceApp users, neither apparently is being hacked.
The locust-like media coverage of FaceApp spurred widespread day-after anxiety, both about how user images might be repurposed and about fake versions of the app laced with malware. Somehow the same fear isn’t daily in the minds of social media users, who waive many of the rights grabbed by the makers of FaceApp.
Consider that if user photos were to fall into the hands of a hacker–state-sponsored or freelance–bad things could happen. The same problem holds true for Facebook and FaceApp users alike. That very same image (or images), proffered for fun on the user side and profit on the corporate side, could be legally (or illegally) repurposed by the company that lawfully acquired it. We live in an information Wild West. As of today, in most of America we neither know, nor have the right to know, how our data is being used.
There Ought to Be A Law
While many were alarmed by the specter of Russia owning pictures of them and the privacy implications that went with that set of affairs, the bigger picture got lost in the scramble.
As far as facial recognition goes, Snapchat and Instagram have FaceApp beat. Those two apps have far more information about user faces, and they are operating in a low regulatory oversight environment. They are in fact breaking laws that will be written in the years to come. Big Tech is to privacy what cigarettes were to healthy lungs and hearts before 1970 when the Surgeon General’s warning became mandatory.
Forget “Don’t Be Evil,” Don’t Be Creepy!
Still don’t think regulations are lacking? Google was recently out on the street offering random people $5 to be photographed. The pictures were being collected to help the company perfect 3D imaging and facial recognition. This will not be allowed to happen for five dollars or five hundred dollars in five years without full disclosure about the transaction and full consent.
Until these fast and loose practices are illegal, consumers should insist that such encroachments stop, and use their clicks and downloads (or more to the point the withholding of them) to change untoward uses of user data.
A citizen’s face is private information, and the collection of it for the purposes of identifying them and placing them here, there or anywhere–something done regularly in the Uyghur regions of China today to control that ethnic group–should not be deemed acceptable in a country like the United States, which is governed by a Constitution that doesn’t condone such encroachments.
Two Steps Forward, One Back
Facebook reached a $5 billion settlement for misrepresenting the way it handles user privacy, the SEC fined the company $100 million for misleading investors about the risks associated with the misuse of user information, and, still later in the day, Facebook admitted that it was the target of an FTC anti-trust investigation. Oh yeah, then came second quarter results, which exceeded expectations. All this in one day.
The settlement required that Facebook create new roles at the company to oversee privacy and police it, and that the company set about creating a more transparent environment for the information that the company collects, and how it’s used. The $5 billion fine was specifically for misleading users regarding their control over the ways Facebook used their data.
The settlement was met with a general outcry, with many experts saying it was toothless. With $56 billion in revenue, the fine is absorbable, and the new strictures in no way proscribe the way Facebook collects and sells user information. In other words, it signaled business as usual for the time being.
“The F.T.C. is sending the message that wealthy executives and massive corporations can rampantly violate Americans’ privacy and lie about how our personal information is used and abused and get off with no meaningful consequences,” Sen. Ron Wyden said.
The antitrust investigation is not really news. It has been speculated for some time that the FTC was looking into the possibility that Facebook used its muscle to squash competition; news of an investigation may signal a more intense phase of regulatory action regarding the way big tech uses, and, by implication, abuses consumer information.
The bottom line is everything here. Right now, it is robust. Companies are making a killing using consumer information to mint money. The time for this Forty-Niner mentality is drawing to a close. So, if you are starting a company now, it might be a good time to join the handful of entrepreneurial pioneers who are now making money by protecting consumer privacy. The boom days of data strip mining are coming to a close.
Security researchers have announced the discovery of several election systems across the country connected to the internet that are vulnerable to hacking.
As a security policy, voting machines and election systems are supposed to remain disconnected from the internet, or “air-gapped,” unless they are transmitting data. This is to prevent the possibility of hackers connecting to them and subverting the results. Despite assurances to the contrary from Election Systems & Software, the largest voting machine vendor in the country, researchers identified 35 election systems with persistent internet connections.
The systems were identified in ten states, including swing states Wisconsin, Michigan and Florida and in some cases had been connected to the internet for years.
“Not only should ballot tallying systems not be connected to the internet, they shouldn’t be anywhere near the internet,” said Senator Ron Wyden of the findings. Wyden has been a longtime advocate of election security and has proposed legislation banning internet connections and transmissions in voting machines.
Adding to the potential danger of exposure to hackers is the finding that many of the identified voting systems are running out-of-date software or have yet to implement security patches and upgrades. Many districts require any new software to be vetted and certified by state and/or federal authorities before being applied to voting machines. While this is ostensibly done for security purposes, it effectively means that any internet-connected voting machine is vulnerable to known methods of hacking or cyberattack, sometimes for months at a time.
“What you are describing is a bad behavior amplified by sloppiness and complete negligence of security,” said election security expert Harri Hursti.
See the Motherboard article describing the findings here.
When your organization is young and growing, you may find yourself overwhelmed with a never-ending to-do list. It can be easy to overlook security when you’re hiring new employees, finding infrastructure, and adopting policies. Without a proper cybersecurity strategy, however, the business that you’ve put your heart and soul into, or the brilliant idea that you’ve spent years bringing to life, is on the line. Every year, businesses suffer significant financial, brand, and reputational damage from data breaches, and many small businesses never recover.
Not only that, but as you grow you may be looking to attract investors or strategic partners. Many of these firms won’t take a chance on organizations that don’t take security seriously. A strong security stance can be a differentiator with your customers and within the venture capital landscape.
One thing’s for sure: you’ve spent a great deal of time creating a business of your own, so why throw it all away by neglecting your security? You can begin building your own cybersecurity strategy by following these steps:
1. Start by identifying your greatest business needs.
This understanding is critical when determining how your vulnerabilities could affect your organization. Possible business needs could include manufacturing, developing software, or gaining new customers. Make a list of your most important business priorities.
2. Conduct a third-party security assessment to identify and remediate the greatest vulnerabilities to your business needs.
The assessment should evaluate your organization’s overall security posture, as well as the security of your partners and contractors.
Once you understand the greatest risks to your business needs, you can prioritize your efforts and budget based on ways to remediate these.
3. Engage a network specialist to set up a secure network or review your existing network.
A properly designed and configured network can help prevent unwanted users from getting into your environment and is a bare necessity when protecting your sensitive data.
Don’t have a set office space? If you and your team are working from home or communal office spaces, be sure to never conduct sensitive business on a shared network.
4. Implement onboarding (and offboarding) policies to combat insider threat, including a third-party vendor risk management assessment.
Your team is your first line of defense, but as you grow, managing the risk of bringing on more employees can be challenging. Whether attempting to maliciously steal data or clicking a bad link unknowingly, employees pose great threats to organizations.
As part of your onboarding policy, be sure to conduct thorough background checks and monitor users’ access privileges. This goes for your employees, as well as any third parties and contractors you bring on.
5. Implement a security awareness training program and take steps to make security awareness part of your company culture.
Make sure your training program includes topics such as password best practices, phishing identification and secure travel training. Keep in mind, though, that company-wide security awareness should be more than once-a-year training. Instead, focus on fostering a culture of cybersecurity awareness.
6. Set up multi-factor authentication and anti-phishing measures.
Technology should simplify your security initiatives, not complicate them. Reduce the number of administrative notifications to only what is necessary and consider improvements that don’t necessarily require memorizing more passwords, such as password managers and multi-factor authentication for access to business-critical data.
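For context on how one common second factor works: time-based one-time passwords (TOTP, RFC 6238) derive a short code from a shared secret and the current time, so nothing extra has to be memorized. A minimal sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    ts = time.time() if timestamp is None else timestamp
    return hotp(secret, int(ts // step))

# Using the RFC 6238 reference secret, the code at t=59s is 287082.
print(totp(b"12345678901234567890", timestamp=59))
```

Because the code changes every 30 seconds, a phished password alone isn’t enough to log in, which is precisely why MFA blunts so many phishing campaigns.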
7. Monitor your data and endpoints continuously with a Managed Security Services Provider.
As you grow, so do the number of endpoints you have to manage and the amount of data you have to protect. One of the best ways to truly ensure this data is protected is to have analysts monitoring it at all hours. A managed security services provider will monitor your data through a 24/7 security operations center, keeping an eye out for suspicious activity such as phishing emails, malicious sites, and unusual network activity.
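To make “unusual network activity” concrete: even without a full SOC, a simple log scan can surface brute-force login attempts. A hypothetical sketch (the log format mimics OpenSSH’s “Failed password” lines; the threshold and function name are arbitrary choices of mine):

```python
import re
from collections import Counter

# Matches OpenSSH-style failure lines and captures the source address.
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def brute_force_suspects(log_lines, threshold=5):
    """Return source IPs with at least `threshold` failed logins."""
    failures = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1
    return {ip: count for ip, count in failures.items() if count >= threshold}

log = ["sshd[1]: Failed password for root from 203.0.113.9 port 22"] * 6
print(brute_force_suspects(log))
```

A real MSSP correlates far more than auth logs, but the principle is the same: aggregate events, flag outliers, and escalate to a human analyst.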
You’re not done yet: revisit your security strategy as you evolve.
It’s important to remember that effective cybersecurity strategies vary among organizations. As you grow, you’ll want to consider performing regular penetration testing and implementing an Incident Response Plan.
And, as your business changes, you must continually reassess your security strategy and threat landscape.
For more information, get the Comprehensive Guide to Building a Cybersecurity Strategy from Scratch.
The post 7 Steps to Building a Cybersecurity Strategy from Scratch appeared first on GRA Quantum.
An unsecured database containing 40 gigabytes of data belonging to the Honda Motor Company was discovered recently.
The data contained employee information, including names, email addresses, and employee IDs, as well as sensitive network information detailing the security status of connected computers. Among those exposed in the leak was Honda CEO Takahiro Hachigo.
“What makes this data particularly dangerous in the hands of an attacker is that it shows you exactly where the soft spots are,” wrote security researcher Justin Paine, who first discovered the data. “In the hands of an attacker this leaked data could be used to silently monitor those executives to identify ways to launch very targeted attacks.”
Paine discovered the leaked data on Shodan, an IoT-centric search engine that has identified similar unsecured servers in the past. The data was indexed July 1, discovered by Paine July 4, and secured by Honda two days later.
“We investigated the system’s access logs and found no signs of data download by any third parties. At this moment, there is no evidence that data was leaked,” Honda wrote in a response to Paine’s notification.
Read Paine’s article describing his findings here.
A report from the Senate Intelligence Committee released last week concluded that the Russian government extensively interfered in U.S. elections from 2014 to at least 2017.
The partially redacted bipartisan report describes several findings related to Russian activities, including:
- “While the Committee does not know with confidence what Moscow’s intentions were, Russia may have been probing vulnerabilities in voting systems to exploit later… [or] may have sought to undermine confidence in the 2016 U.S. election simply through the discovery of their activity.”
- “State election officials… were not sufficiently warned or prepared to handle an attack from a hostile nation-state actor.”
- “DHS and FBI alerted states to the threat of cyber attacks in… 2016, but the warnings did not provide enough information or go to the right people.”
- “In 2016, cybersecurity for electoral infrastructure at the state and local level was sorely lacking… [V]oter registration databases were not as secure as they could have been. Aging voting equipment… were vulnerable to exploitation by a committed adversary.”
While citing an “unprecedented level of activity,” the report also maintains that it found no evidence of the alteration of vote tallies, but it qualified its position by stating that “the Committee and IC’s insight into this are limited.”
The report details evidence of Russian activity targeting elections in all 50 states in 2016, and cites an inability on the part of the Committee or Department of Homeland Security to determine a clear pattern or goal. DHS representatives are quoted as saying that “there wasn’t a clear red state-blue state-purple state more electoral votes, less electoral votes” pattern.
The report concluded with a series of recommendations, including creating a policy of deterrence and responses to attacks on election infrastructure to “send a clear message and create significant costs for the perpetrator.” Strengthening cyber defenses for election-related systems, replacing outdated equipment, and providing greater funding for states were also recommended.
“I hope the bipartisan findings and recommendations outlined in this report will underscore to the White House and all of our colleagues, regardless of political party, that this threat remains urgent, and we have a responsibility to defend our democracy against it,” said committee member Senator Mark Warner in a statement.
The post Bipartisan Senate Support Reveals Russian Election Interference appeared first on Adam Levin.
Hard to imagine, but appointment television hasn’t been a real thing for more than a decade now. First, we recorded. Now, we stream. After transforming (actually killing) the movie rental industry, Netflix started streaming in 2010. It changed how consumers viewed television by providing subscribers access to a sizable library of movies and shows on a wide variety of devices.
At its low price point, Netflix made piracy a much less attractive alternative. It worked. By 2018, Netflix streaming accounted for 15% of all worldwide downstream traffic on the internet.
The rise of Netflix’s streaming service also led to a decline in piracy. BitTorrent, the preferred method of illicit (if not illegal) file downloading and sharing, saw usage decrease by a whopping 25 percent between 2011 and 2015. It was no longer the only quasi-infinite virtual warehouse of digital content. That approach to content had become monetized by Netflix; the paradigm of “everything, all the time” went mainstream.
For those who say, “How so?” piracy has long been a hot button topic among intellectuals, some saying it’s not about cost (free, in the case of piracy), but rather ease of use. Consumers could see popular shows and movies on multiple platforms without the maelstrom of channels and hidden fees presented by cable plans and without having to resort to piracy.
Netflix created a commercial play at the piracy game–all above board, and it worked.
The Wrong Idea
Intellectual viewpoints are not always welcome in boardrooms where decisions about distribution are made, and if in fact they wiggle their way in, they are not often embraced. Entertainment didn’t see the Netflix move as a mainstreaming of ease of use.
Enter the “walled garden” approach.
You see it everywhere. Instead of sharing its intellectual property with Netflix, Disney is launching its own streaming service, Disney+. NBC is pulling its tremendously popular workplace comedy, The Office, from Netflix and Hulu and making it available exclusively on NBCUniversal. AT&T is following suit with its recent acquisition of Time Warner and HBO. Apple, Google, and Facebook are all entering the ring as well. Most of these services are throwing massive amounts of money at original content and licensing to make their own platforms “must-have.”
What amounts to a cash grab for streaming services is a Byzantine snarl for consumers. Anyone who watched Avengers: Infinity War on Netflix in the last year will need to see its sequel, Endgame, on Disney+. Soon, certain podcasts will not be available on both Android and iOS. Support for streaming services on devices can be revoked, as was the case for Hulu on Samsung Smart TVs, or HBO GO on the Xbox 360. Movies “purchased” on Apple may vanish from a consumer’s account if the rights lapse. Streaming services are becoming Balkanized, and as the need for different accounts, payment, memberships, and in some cases, hardware becomes ever more complex, once again, a BitTorrent-style warehouse may become the more attractive alternative for tech savvy users.
This fee-ridden decentralization of content has no doubt contributed mightily to the rebound of piracy, and in this new eco-system hackers are the main beneficiaries.
Yo Ho Ho
To pirate a show or a movie, one need only to download a small file from a website such as the Pirate Bay and open it with a BitTorrent client (most of them are free). A user then downloads pieces of said movie or show from however many people are sharing that file while in turn uploading to other users. The more popular the video being downloaded, the faster it goes. Depending on your connection, a full high-quality movie can be downloaded in less time than it takes to make a bowl of popcorn.
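That “small file” is a .torrent file: a bencoded dictionary listing the tracker URL, file names, and piece hashes. As a sketch of what a client parses before downloading anything, here is a minimal bencode decoder (the example metadata and URL are made up):

```python
def bdecode(data: bytes, i: int = 0):
    """Decode one bencoded value starting at index i; return (value, next_i)."""
    if data[i:i + 1] == b"i":                  # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if data[i:i + 1] == b"l":                  # list: l<items>e
        i += 1
        items = []
        while data[i:i + 1] != b"e":
            item, i = bdecode(data, i)
            items.append(item)
        return items, i + 1
    if data[i:i + 1] == b"d":                  # dict: d<key><value>...e
        i += 1
        result = {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            value, i = bdecode(data, i)
            result[key] = value
        return result, i + 1
    colon = data.index(b":", i)                # string: <length>:<bytes>
    length = int(data[i:colon])
    return data[colon + 1:colon + 1 + length], colon + 1 + length

meta, _ = bdecode(b"d8:announce18:http://example.come")
print(meta)
```

Nothing in that metadata authenticates the people you download from, which is the heart of the security problem described below: the content itself comes from anonymous peers.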
Is it any wonder that many users decided to watch the Game of Thrones finale using BitTorrent?
From a cybersecurity perspective, BitTorrent is beyond problematic. It is in fact “accepting candy from a stranger in a windowless van” dangerous. Downloading a pirated torrent ultimately means getting files from a network of anonymous sources, and not just downloading them, but actually opening and running them. Malware has only gotten more sophisticated in recent years; if a payload can be delivered through a single link or file in a phishing scam, it doesn’t take much to imagine what can be digitally smuggled within a several gigabyte download of the latest Spider-Man movie. BitTorrent provides a relatively simple way to infect thousands of computers without needing to actively target anyone. It’s passive and potentially quite pervasive.
If this sounds speculative or far-fetched, it could be that you’re simply not reading enough news. For instance, a hacking campaign has been targeting South Korean BitTorrent users for the last few weeks by embedding backdoors into pirated television episodes. It’s only a matter of time before we see similar campaigns closer to home–and it’s a safe bet there already are such hacks happening in the U.S. market now.
The threat to corporate and government networks shouldn’t be overlooked. When the U.S. Geological Survey’s networks were infected with Russian malware in late 2018, the culprit was malware embedded in pornographic videos downloaded by an employee; it spread to a USB drive and a mobile device before compromising that employee’s entire office network.
Understood correctly, piracy presents an object lesson in the unintended consequences of a business decision in the realm of cybersecurity.
Movies, television shows and podcasts are expensive to produce, and companies are necessarily going to try to get the most bang for their buck by trying to corral the cash flow around their intellectual property. Multiple streaming accounts are expensive and often confusing to maintain, and consumers are similarly going to try to go the cheapest route, namely by pirating shows rather than juggling plans and platforms–especially when doing so creates a one-stop shopping experience.
Hackers tend to seek the path of least resistance. An increasing number of potential targets trading relative security for convenience represents a lucrative and potentially dangerous avenue for attack. But it’s avoidable. Digital marketplaces are more profitable when they are free(er) and (more) open.
The post The Content Streaming Gold Rush is a Hacker’s El Dorado appeared first on Adam Levin.
Fsociety Hacking Tools Pack is a penetration testing framework; you will have every script that a...
The post fsociety Hacking Tools Pack – A Penetration Testing Framework appeared first on HackingVision.
Consumer audio recorded by Apple’s Siri platform has been shared with external contractors.
A whistleblower working as a contractor revealed that the company’s digital voice assistant software records audio collected by consumer devices–including iPhones, Apple Watches, and HomePods–and shares it with external contractors. The recordings contained potentially sensitive information.
“A small portion of Siri requests are analysed to improve Siri and dictation. User requests are not associated with the user’s Apple ID. Siri responses are analysed in secure facilities and all reviewers are under the obligation to adhere to Apple’s strict confidentiality requirements,” Apple told the Guardian, which broke the story.
“Amazon and Google allow users to opt out of some uses of their recordings; Apple offers no similar choice short of disabling Siri entirely,” wrote Alex Hern for the Guardian.
Privacy concerns about the practice are compounded by the fallibility of Apple’s voice recognition software. The phrase “Hey, Siri” can be triggered by other sounds and words. Siri is also activated in Apple Watches when the user raises their wrist and speaks.
News about Apple’s overshare followed on the heels of news about Google’s virtual assistant software.
Apple has recently attempted to distance itself from Google and other IoT device makers with ad campaigns directly targeting its competitors as less privacy-friendly.
Read more here.
Adam Levin was featured in a short video on TicToc by Bloomberg, where he discussed the trade-offs between security and convenience for mobile banking and payment apps.
“As business tries in its technological innovation to make things more convenient, you end up with the conundrum between convenience and security,” Levin said.
See the video below, or on Bloomberg.com:
The post Adam Levin Discusses Mobile Banking and Security with TicToc appeared first on Adam Levin.
The Government Accountability Office recently released a report that analyzed the results as well as the relative effectiveness of the identity theft services, including insurance, provided to victims of data breaches and other forms of digital compromise.
The report is entitled “Range of Consumer Risks Highlights Limitations of Identity Theft Services,” and it largely reiterates the GAO’s 2017 assertion that the identity theft insurance provided by agencies in the wake of a data breach was both unnecessary and largely ineffective. The findings also concluded that credit monitoring, identity monitoring, and identity restoration services were of questionable value. The GAO recommended that Congress explore whether government agencies should be, or indeed already are, legally required to offer victims of federal data breaches any of the services examined in the report.
At the center of the report’s findings was $421 million set aside by the Office of Personnel Management for the purchase of a suite of identity protection products and services following the 2015 data breach that exposed extremely sensitive personal information of 22 million individuals. According to the report, the “obligated” money expended was largely squandered.
“3 million had used the services and approximately 61 individuals had received payouts from insurance claims, for an average of $1,800 per claim… GAO’s review did not identify any studies that analyzed whether consumers who sign up for or purchase identity theft services were less subject to identity theft or detected financial or other fraud more or less quickly than those who monitored their own accounts for free…” To be clear, there is a jump in logic here. Just because the GAO was unable to find data to support these services does not mean the services are ineffective. In fact, it could just as easily be that the services work.
Then there was the GAO’s observation that, “The services also do not prevent or directly address risks of nonfinancial harm such as medical identity theft.” When millions of Social Security Numbers have been exposed, prevention of identity theft is purely aspirational. Frankly, this assertion would not pass muster with the FTC, since it is actually frowned upon to suggest that any service provider can prevent identity theft. The goal is awareness and targeted action, and medical fraud, in particular, is an area where detection is, at best, difficult and resolution is often complicated and requires professional assistance.
While the report raises an important point, it is too limited in scope to pinpoint it effectively. Not all identity theft services are the same. Those offered by the OPM to victims of its massive breach may or may not have been ineffective, but if they were, most likely it was because they were inadequate to the task or “mis-underestimated” during on-boarding, not because they’re unnecessary. In other words, it’s not a question of how much money changed hands, it’s how those funds were spent.
In the case of the services offered to victims of the OPM breach, the results do look damning: 61 paid insurance claims out of 3 million service users is the kind of figure unworthy of rounding error status. The above result must not, however, be mistaken for a demonstration of why identity theft insurance isn’t useful, but rather should be understood as a real-life metric of the usefulness of the specific plan provided, and the applicability of that plan’s provisions to the majority of the individuals covered by it.
Consider this counterpoint: If the services provided worked, few or no insurance payouts would be necessary. (See above.)
Rather than scrapping the requirement, policies should either be expanded to cover more of the expenses associated with identity theft (there are many), or they should prioritize more robust monitoring tools and full identity fraud remediation solutions with the funds available.
Lack of Participation
Another issue raised by the report is participation on the part of those affected by data breaches. According to data from OPM, only 13 percent of those affected took advantage of the services made available to them–at least as of September 30, 2018. While the number may seem low, anecdotally it’s not unusual. Regardless, the question remains: Were those services made available in an accessible way that encouraged action on the part of users?
History suggests that paltry participation figures are due in no small part to a lack of awareness among consumers of the dangers posed by the exposure of personal information and the often free (to the consumer) availability of products and services that help manage the damage. Workplace education in this area is lacking, for sure, but that alone doesn’t explain it. Beyond breach fatigue, a larger factor may be lack of confidence in or clarity about the services provided–and that is an issue that belongs to vendor selection, because it’s their job to make clear what’s at risk and how the proffered solutions can help.
As described elsewhere in the report: Organizations that offer services don’t do so based on what should be the pivotal question here: “how effective these services are.” Instead, “some base their decisions on federal or state legal requirements to offer such services and the expectations of affected customers or employees for some action on the breached entities’ part.” If the standard is to offer a certain amount of protection, they do that. Does it matter what kind? Can it be a generic? That’s the crux of the matter here.
Spoiler alert: It matters what service provider you choose. If you take nothing else away here let it be this: identity protection services and insurance are useless in a low-information environment. Indeed, if the service provider doesn’t produce an ocean of content that explains to users why they need to use the services, then it’s probably not right for mass allocation.
Data breaches have become so commonplace and the threat of identity fraud so widespread that token offerings to those affected are increasingly viewed as a B.S. attempt at better optics while a company is in disaster mode. A vicious cycle ensues: lack of confidence in a breach response leads to lack of participation in identity theft protection offered, and lack of participation is used to justify offering less comprehensive protection–all while identity theft incidents and data breaches increase.
The GAO report raises many salient points about the services offered in the wake of data breaches. The current legislation and its requirements for both identity theft protection services and insurance can rightly be viewed as an expensive boondoggle with little to show when it comes to actual results, but the conclusion of the GAO–to pull back instead of getting the right services in place to protect against future breaches and assist their victims when they can’t be avoided–is worrisome.
We need to focus now more than ever on high-information, robust solutions that provide greater protection as well as more guidance and assistance–not less.
This article originally appeared on Inc.com.
The post The Government Claims a Private Sector Fail, But It Just Doesn’t Know How to Pick a Vendor appeared first on Adam Levin.
The Marsh brokerage unit of Marsh and McLennan recently announced a new evaluation process called Cyber Catalyst designed to determine the usefulness of enterprise cyber risk tools.
The goal of the new offering is to identify and implement industry-wide standards to help cyber insurance policyholders make more informed decisions about cyber-related products and services; basically, what works and what doesn’t. Other major insurers participating in Cyber Catalyst include Allianz, AXA XL, AXIS, Beazley, CFC, and Sompo International.
While this collaboration between insurance companies is unusual, it’s not entirely surprising. Cyber insurance is a $4 billion market globally. While it’s difficult to accurately gauge how many hacking attempts were successfully foiled by the products targeted here, data breaches and cyber attacks on businesses continue to increase in frequency and severity. The 2019 World Economic Forum’s Global Risks Report ranks “massive data fraud and theft” as the fourth greatest global risk, followed by “cyber-attacks” in the fifth slot.
Meanwhile, cybersecurity products and vendors have been, to be charitable, a mixed bag.
Good in Theory
From this standpoint, Cyber Catalyst seems like not just a good idea, but an obvious one. A standardized metric to determine which cybersecurity solutions are no better than a fig leaf and which ones provide real armor to defend against cyberattacks is sorely lacking in the cybersecurity space. By Marsh’s own estimates, there are more than three thousand cybersecurity vendors amounting to a $114 billion marketplace. Many of them don’t inspire confidence on the part of businesses.
Insurers have a vested interest in determining the effectiveness of cybersecurity products, weeding out buggy software and promoting effective solutions that can help address risk aggregation issues. Businesses and their data are in turn better protected, and at least in theory, they would pay less for coverage. Everyone wins.
Insurance companies did something similar in the 1950s with the creation of the Insurance Institute for Highway Safety. In the face of rising traffic collisions and fatalities, the insurance industry collaborated to establish a set of tests and ratings for vehicles, and the result has been a gold standard for automotive safety for decades. Using a similar strategy for cybersecurity would at least in theory help mitigate the ever-increasing costs and risks to companies and their data.
Or Maybe Not
Where the analogy to the Insurance Institute for Highway Safety breaks down is here: The threats to car drivers and passengers have largely stayed the same since its inception. Everything we’ve learned over the years about making cars has progressively led to safer vehicles. Information technology is vastly different in that iterative improvements in one specific area don’t necessarily make an organization as a whole safer or better protected against cyber threats–in fact, sometimes they can have the opposite effect, when a newly added feature turns out to be a bug.
Cyber defenses are meaningless in the presence of an unintended, yet gaping, hole in an organization’s defenses. Then there is the march of innovation. Products that provided first-in-class protection for a business’s network a few years ago may no longer be so great where cloud computing, virtual servers, or BYOD are concerned. The attackable surface of every business continues to increase with each newly introduced technology, and it seems overly optimistic to assume the standard evaluation process (currently twice a year) would be able to keep pace with new threats.
There’s also the risk of putting too many eggs into one basket. While the diffuse nature of the cybersecurity market causes headaches for everyone involved, establishing a recommended solution or set of solutions effectively makes them an ideal target for hackers. While it’s important to keep consumers and businesses informed of potential risk to their information, cybersecurity issues require a certain amount of secrecy until they have been properly addressed. Compromising, or even identifying and reporting on a vulnerability before it’s been patched in an industry standard security product, process or vendor practice could cause a potentially catastrophic chain reaction for cyber insurers and their clients.
Culture Eats Strategy for Breakfast
Where the Cyber Catalyst program seems to potentially miss the mark is by overlooking the weakest link in any company’s security (i.e., its users). An advanced cybersecurity system or set of tools capable of blocking the most insidious and sophisticated attack can readily be circumvented by a spear phishing campaign, a compromised smartphone, or a disgruntled employee. Social engineering cannot be systematically addressed. Combatting the lures of compromise requires organizations to foster and maintain a culture of privacy and security.
The risk of employee over-reliance on tools and systems at the expense of training, awareness, and a company culture where cybersecurity is front and center must not be underestimated. While it is easier to opt for the quick and easy approach of purchasing a recommended solution, companies still need a comprehensive and evolving playbook to meet the ever-changing tactics of persistent, sophisticated and creative hackers.
While industry-wide cooperation may be a good thing, it’s vital for companies and insurers alike to recognize that any security program or service is fallible. Without an equal investment in functional cybersecurity, one that places as much store in employee training and threat awareness as in technology, the rise in breaches and compromises will continue.
This article originally appeared on Inc.com.
Facebook announced that it was preparing for a massive fine from the Federal Trade Commission for its mishandling of user privacy. The fine could be as much as $5 billion.
The social media giant revealed the fine as a one-time expense in its quarterly earnings statement, explaining a 51% decline in income, “in connection with the inquiry of the FTC into our platform and user data practices.”
“We estimate that the range of loss in this matter is $3.0bn to $5.0bn,” the company’s statement explained. “The matter remains unresolved, and there can be no assurance as to the timing or the terms of any final outcome.”
Despite the size of the fine, the company showed continued growth and an expansion of its ecosystem of apps.
Read more about the story here.
A messaging app released by the French government to secure internal communications has gotten off to a troubled start.
Tchap was released in beta earlier this month as a secure messaging app exclusively for government officials. It was developed and released to address security concerns and data vulnerabilities in more widely used apps including WhatsApp and Telegram (a favorite of French President Emmanuel Macron).
WhatsApp Meet “What Were You Thinking?”
Tchap was built with security in mind, and was initially touted as being “more secure than Telegram.” Man plans and God laughs. The app was hacked less than a day after its release. Elliot Alderson, the hacker who discovered the initial security vulnerability, subsequently found four more major flaws in its code, and confirmed with the app’s developer that no security audit was performed on the app prior to release.
DINSIC, the government agency responsible for Tchap, issued a press release stating that the software “will be subject to continuous improvement, both in terms of usability and security,” and has since announced a bug bounty for further vulnerabilities.
The French government’s attempt at creating a secure messaging alternative highlights a cybersecurity conundrum. Recent incidents, including allegations of Chinese government “backdoors” in telecom giant Huawei’s hardware and confirmed NSA backdoors in Windows software, have left governments and businesses increasingly wary of using software or hardware developed internationally, or of storing data abroad. At the same time, developing in-house or “proprietary” solutions is significantly more resource-intensive and not necessarily more secure than relying on more widely used counterparts.
The post French Government App Shows Difficulties with Secure Communications appeared first on Adam Levin.
The European Union’s parliament voted to create a biometric database of over 350 million people.
The Common Identity Repository, or CIR, will consolidate the data from the EU’s border, migration, and law enforcement agencies into one system to be quickly accessible and searchable by any or all of them. Information will include names, birthdates, passport numbers as well as fingerprints and face scans.
While the CIR’s purpose is to eliminate several bottlenecks currently affecting border control and law enforcement, many are concerned about its privacy and security implications.
“[U]nlike other personal data, biometric data are neither given by a third party nor chosen by the individual; they are immanent to the body itself and refer uniquely and permanently to a person,” wrote the European Data Protection Supervisor, an independent EU institution responsible for advising on matters of privacy and security, in an opinion document on the Repository.
“[T]he consequences of any data breach affecting the CIR could seriously harm a potentially large number of individuals. If it ever falls into the wrong hands, the CIR could become a dangerous tool against fundamental rights if it is not surrounded by strict and sufficient legal, technical, and organizational safeguards,” the EDPS continued.
Once deployed, the CIR will be the third largest government biometric database in the world, right behind India’s Aadhaar and the Chinese government’s tracking systems. With the Aadhaar’s history of breaches and recent revelations about the Chinese government tracking ethnic and religious minorities, there seems to be plenty of cause for alarm here.
Facebook announced that it “unintentionally” harvested the email contacts of 1.5 million of its users without their consent.
The social media company automatically uploaded the information from users who had registered with the site after 2016 and provided their email addresses and passwords. Upon submitting a form to “confirm” their accounts, registrants saw a screen indicating that their email contact lists were being harvested, with no means of providing consent, opting out, or interrupting the process.
“We estimate that up to 1.5 million people’s email contacts may have been uploaded. These contacts were not shared with anyone and we’re deleting them. We’ve fixed the underlying issue and are notifying people whose contacts were imported. People can also review and manage the contacts they share with Facebook in their settings,” a Facebook spokesperson said.
Facebook’s practice of requesting user email passwords during account registration garnered strong criticism from security and privacy experts and led to the company halting the practice earlier this month.
The news comes at an awkward time for the gaffe-prone company in light of its recent attempts to rebrand itself as being more privacy-focused.
The post Facebook Acknowledges “Unintentional” Harvesting of Email Contacts appeared first on Adam Levin.
A technical glitch took down a wireless network used by New York City’s municipal government, raising serious questions about security and reliability of operational technology used by the city.
The New York City Wireless Network, or NYCWiN, was initially deployed in 2008 at a cost of $500 million. It costs the city an additional $37 million per year to maintain. The stated purpose of NYCWiN is to “support public safety and other essential City operations.” The city uses the GPS-based system to manage license plate readers and to monitor water meters and traffic lights, among other applications.
NYCWiN went down on April 6 in an outage caused by an issue similar to the Y2K bug. The GPS protocol stores its week number in a 10-bit field, so the counter rolls over to zero every 1024 weeks (roughly 19.7 years), and systems that don’t account for the rollover misread the date and require resetting.
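The arithmetic behind that failure is easy to sketch. This assumes the legacy 10-bit GPS week counter, a public protocol detail; the specifics of NYCWiN’s own implementation aren’t described in the reporting.

```python
from datetime import datetime

# The legacy GPS signal stores the week number in a 10-bit field,
# so the counter wraps to zero every 2**10 = 1024 weeks (~19.7 years).
GPS_EPOCH = datetime(1980, 1, 6)   # start of GPS time
WEEK_ROLLOVER = 2 ** 10            # 1024 weeks


def gps_week_field(date):
    """Return the truncated week number a legacy receiver sees."""
    full_weeks = (date - GPS_EPOCH).days // 7
    return full_weeks % WEEK_ROLLOVER


# The second rollover happened the night of April 6, 2019: the
# broadcast week number wrapped from 1023 back to 0, and receivers
# that never accounted for the wrap jumped ~19.7 years into the past.
print(gps_week_field(datetime(2019, 4, 6)))    # -> 1023 (last week of the old epoch)
print(gps_week_field(datetime(2019, 4, 13)))   # -> 0 (first week after the wrap)
```

Any device that derives calendar dates from the raw week field without tracking which 1024-week epoch it is in will fail in exactly this way.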
City officials have been less than transparent about the problem, raising concerns about government communications in general, but especially in the face of increasing attacks against government and municipal targets at the operations level.
“If the city’s paying $40 million a year to maintain software infrastructure, first, when it goes down, the Council and the public should know about it,” said Councilman Brad Lander to the New York Times.
Mayor Bill de Blasio said the city was investigating who was responsible for the problem. Read more about the outage here.
The post NYC Wireless Network Outage Raises Questions About Effectiveness, Transparency appeared first on Adam Levin.
On March 20, The Walt Disney Company completed its purchase of 21st Century Fox. The acquisition added huge properties like The Simpsons and National Geographic as well as film blockbuster franchises to Disney’s star-studded stable that includes Star Wars, Marvel Comics, Pixar, the Muppets, and a decades-long catalog of major intellectual properties.
While major acquisitions and mergers often give rise to anti-trust issues–and this one was no exception–the transfer of properties with complex privacy policies, and how that works going forward, has not been a big topic of discussion.
Corralling such a massive amount of children’s and family-friendly entertainment under one roof may seem, at least on the surface, like a world-friendly move, but to quote a song from Disney’s 1998 direct-to-video sequel, “Pocahontas II”–“things aren’t always what they appear.”
While Disney’s acquisition lacks the dark mirror quality of Amazon’s ever-expanding home networking business or Google’s inescapable array of services (all of them tracking users with mindboggling granularity), there is considerable consumer data tied to the properties that just changed hands, all of it governed by the privacy policies attached to them, which also changed hands but cannot be changed without user consent. This is not about whatever privacy fail we might expect next from Facebook. It’s about the potential privacy conflicts caused by Disney’s acquisition of Fox.
It Was All Started by a Mouse
Walt Disney liked to remind people that his company started humbly, “by a mouse.” Today, we are also dealing with something mouse-related: Our data.
Disney of course pre-dates the era of a surveillance economy, but it has invested aggressively in data analytics and customer tracking. Strategic data deployment has been central to Disney’s increased profits in recent years, both at its theme parks and brick-and-mortar stores. While RFID tracking for customers, facial recognition, and personalized offers based on prior purchases and behavior can all vastly improve the customer experience, we’ve seen far too many instances of companies abusing their privileged access to consumer data.
The “Don’t Be Evil” Option
Companies can start with good intentions (see Google’s recently retired “Don’t Be Evil” motto) and eventually expand their data mining practices to Orwellian dimensions. It’s a matter of grave concern.
When a disproportionate number of the customers being tracked are children, this should be even greater cause for concern. That’s the red button aspect of prime interest in the Disney-Fox deal.
Case in point, the 2017 lawsuit filed against Disney and still pending in court that claims the company was tracking children through at least 42 of its mobile apps via unique device fingerprints to “detect a child’s activity across multiple apps and platforms… across different devices, effectively providing a full chronology of the child’s actions.”
Disney denies these allegations, but they did cop to generating “anonymous reporting” from specific user activity through “persistent identifiers,” and that the information was collected by a laundry list of third party providers, many of which are ad tracking platforms.
The company is by no means alone in this practice. A 2018 study found that 3,337 family- and child-oriented apps available on the Google Play store were improperly tracking children under the age of 13. It’s not hard to see why. If consumer data is valuable, starting the process of collecting data associated with an individual as early as possible can provide marketing companies with extremely deep data about their target’s preferences and habits long before they have a disposable income. The U.S. Children’s Online Privacy Protection Rule (“COPPA”) was created to stop this from happening. But as we’ve seen from companies like TikTok, it’s often skirted or flouted outright, and the penalties are often laughable compared to profits.
The collection of data on kids is a problem. Enter Disney, the sheer scale of that empire making its data position comparable to that held by Facebook or Google. It is similar with Fox properties, though to a lesser extent. The upshot: An immense amount of data just changed hands and no one is talking about it–and they should be.
Changing Privacy Policies
While privacy policies are easy to find, they are not so much fun to read. They are not all alike. But without engaging in a tale of the tape regarding Disney and Fox policies, there is still reason for concern.
Companies can reserve the right to change their privacy policies, and if we don’t like it we can always opt out. Things become murkier when data is purchased by a third party; this can happen with acquisitions, or when major retailers go belly up. It happened when Radio Shack went out of business, and its entire customer database was suddenly put up for sale to the highest bidder.
The creation of meaningful standards for consumer privacy is a moving target, but it should be a legislatively mandated consideration for large scale mergers and acquisitions. Once a customer’s information is sold, there’s no way to get it back. An effective stopgap might be to demand a data transfer “opt out” button when we’re giving consent to privacy policies. When it comes to children, we might even consider legislating automatic “opt out” for anyone under a certain age. Where safeguarding children’s data is concerned, there’s still much work to be done.
This article originally appeared on Inc.com.
The post The One Word No One Is Talking About in the Disney-Fox Deal appeared first on Adam Levin.
A security analysis of 30 major banking and financial apps has shown major security holes and a lax approach to protecting user data.
The analysis was conducted by the Aite Group, which looked at mobile apps in eight categories: retail banking, credit cards, mobile payment, healthcare savings, retail finance, health insurance, auto insurance and cryptocurrency.
Among the most alarming findings was the practice of embedding and hard-coding private certificates and API keys into banking apps. API keys and certificates are the primary means of authenticating a user’s identity and determining their level of access to data; leaving hard-coded versions in an app makes it significantly easier for a would-be hacker to gain far too much access to the data underpinning the apps themselves.
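Why hard-coded keys are so dangerous is easy to demonstrate: anyone with a copy of an app’s binary can recover embedded secrets simply by scanning for runs of printable characters, much as the Unix `strings` utility does. The key and “binary” below are, of course, made up for illustration.

```python
import re

# Hypothetical anti-pattern: a secret compiled directly into an app.
# In a real mobile app the key would sit inside the packaged APK/IPA,
# but the principle is identical.
HARDCODED_KEY = "sk_live_EXAMPLE_1234567890"


def build_fake_binary():
    """Simulate a compiled binary: the key embedded among opaque bytes."""
    return b"\x7fELF\x01\x02\x00" + HARDCODED_KEY.encode() + b"\x00\x9c\x03"


def extract_strings(blob, min_len=8):
    """Pull printable ASCII runs out of a binary, as `strings` does."""
    return [m.decode() for m in re.findall(rb"[ -~]{%d,}" % min_len, blob)]


leaked = extract_strings(build_fake_binary())
print(leaked)  # the "secret" falls right out
```

No reverse engineering skill is required; this is why credentials belong on the server side, issued per user after authentication, never baked into the client.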
Other findings included improperly secured database commands (capable of allowing man-in-the-middle attacks), weak encryption, and the ability to reverse-engineer the app code into a readable format.
Aite declined to identify the companies behind the apps researched or say whether they had warned the companies about the security holes discovered in their apps.
Read more about their report’s findings here.
Israeli cybersecurity researchers have created malware capable of showing fake cancerous growths on CT and MRI scans.
The malware, called CT-GAN, served as a proof of concept to show the potential for hacking medical devices with fake imagery convincing enough to fool medical technicians. In a video demonstrating the exploit, researchers at Ben Gurion University described how such an attack might be deployed.
“Attacker[s] can alter 3D medical scans to remove existing, or inject non-existing, medical conditions. An attacker may do this to remove a political candidate / leader, sabotage / falsify research, perform murder / terrorism, or hold data ransom for money.”
In a blind study, CT-GAN had a 99% success rate in deceiving radiologists with fake cancer nodules, and a 94% success rate in hiding actual cancer nodules.
Medical facilities are frequently targeted by hackers, due in part to their reliance on networking technologies and their archives of sensitive personal information. A recent study showed that 1 in 4 healthcare facilities were hit by ransomware in 2018 alone.
Click here to see the original report describing the malware findings.
The post Malware Infected Medical Equipment Shows Fake Tumors appeared first on Adam Levin.
A week after it landed with a curious (and most likely spurious) thud, Zuckerberg’s announcement about a new tack on consumer privacy still has the feel of an unexpected message from some parallel universe where surveillance (commercial and/or spycraft) isn’t the new normal.
“I believe a privacy-focused communications platform will be even more important than today’s open platforms,” Zuckerberg said. “Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.” And maybe share more freely their inmost wants and needs, thereby making it easier to serve them ads that convert.
While Facebook has a lengthy history of leaks, gaffes, and outright violations of privacy for users and non-users alike, and Zuckerberg has made unfulfilled promises to remedy their problematic and unpopular practices, one needn’t look further than recent news to view this pivot in company policy with deep skepticism:
- Facebook’s lobbying against data privacy laws worldwide: Leaked internal memos revealed an extensive lobbying effort against data privacy laws on Facebook’s part, targeting the U.S., U.K., Canada, India, Vietnam, Argentina, Brazil, and every member state of the European Union.
- Facebook’s Two-Factor Authentication phone numbers exposed: After prompting users to provide phone numbers to secure their accounts, Facebook allows anyone to look up their account by using them. These phone numbers are publicly accessible by default, and users have no way of opting out once they’ve provided them. (The company has also used security information for advertising in the past.)
- Mobile apps send user data to Facebook (even for non-Facebook users): A study by Privacy International showed that several Android apps, including Yelp, Duolingo, Indeed, the King James Bible app, Qibla Connect, and Muslim Pro all transmit users’ personal data back to Facebook. A later update showed that iPhone users were similarly affected: iOS versions of OKCupid, Grindr, Tinder, Migraine Buddy, Kwit, Muslim Pro, Bible, and others were also found to eavesdrop on Facebook’s behalf.
- Hundreds of millions of user passwords left exposed to Facebook employees: News recently broke that Facebook left the passwords of between 200 million and 600 million users unencrypted and available to the company’s 20,000 employees going back as far as 2012.
Facebook has had more than its share of bad press in recent years, including Russian meddling in U.S. elections and complicity in a genocide campaign in Myanmar, but the company’s antipathy toward user privacy seems to betray a wider disdain for the public interest, which leads to a bigger question.
Facebook has become the most profitable, debt-free business in the world by selling the private information of its users. Do you really think it’s going to stop? Privacy is increasingly important to consumers, but Facebook is proof that a company need not respect the privacy of the lives it comes in contact with in order to thrive–quite the contrary.
When Did You Stop Beating Your Users?
It seems fair to say that Facebook has not earned the benefit of the doubt when it comes to being open and transparent with the public, and I’m not just saying that because I’ve been betting against the company’s stock (I have a fair amount, and, possibly perversely, I think it’s still a sound investment).
I bring this up because Facebook could be doing something to make itself an even better investment. In fact, any business can do it, and increase its value in the process. Put simply, companies can make themselves harder for hackers to hit, and less prone to compromise. While it’s impossible to know for certain whether a company has been compromised or not, organizations have reputations. Reputations tend to color the way we read events. And finally, reputation management in the day and age of near-constant compromise and breach requires transparency–or at least the perception of transparency.
This was the cybersecurity song stuck in my head when Facebook, Instagram, and WhatsApp experienced widespread service outages on March 13, marking the company’s longest ever downtime.
A little context: MySpace recently announced a major migration gaffe: “As a result of a server migration project, any photos, videos, and audio files you uploaded more than three years ago may no longer be available on or from Myspace.” People in the know have estimated the mistake affected 53 million songs from 14 million artists.
The same day as the MySpace buzzkill, Zoll Medical reported it had experienced a data breach during an email server migration that exposed select confidential patient data, including patient names, addresses, dates of birth, limited medical information, and some Social Security numbers.
While Facebook’s statement regarding its server configuration change may have been accurate, there may have been more to the story. The problem here is that we’re not dealing with a company that releases reliable information (that isn’t associated with their users as marketing targets).
While the outage may indeed have been caused by an honest sort of epic fail, Facebook has earned a dose of healthy skepticism. Indeed, scandals and overall wrongdoing sometimes seem the way of the world at Facebook, and as a result of this perception–true, false, or truth-y–there is a significant deficit of trust among the general public. While Facebook is too large to fail as a result of this situation, small- to medium-size companies cannot afford the luxury of being perceived as untrustworthy.
Perception Is Everything
Gustave Flaubert said, “There is no truth. There is only perception.” It mattered when he wrote that, and it still matters today.
When a company doesn’t report a cyberattack–or only reports the more harmless aspects of an incident–that needn’t always be ascribed to sinister motives. Consider what would have happened to Facebook if 1) the recent downtime was caused by an attack (possibly made possible by the configuration that they reported), and 2) they admitted it. Admitting publicly that a cyberattack effectively brought a multibillion-dollar business to a halt for the better part of a day would, first and foremost, have the potential to encourage further attacks. Denying anything happened gives system administrators more time to identify and patch newly discovered vulnerabilities. Then there are the repercussions to the company’s stock price. In short, there is no upside.
Regardless of whether the Facebook outage was the result of a cyberattack or internal error, one factor that’s been largely overlooked is the company’s plan to integrate all of its platforms–specifically to make the previously separate Messenger, WhatsApp, and Instagram applications interoperable.
This cross-platform integration represents a monumental undertaking. Each of these services has, at a minimum, hundreds of millions of active users, all of them with different security protocols, data structures, and network requirements. Changing the architecture of three separate applications at a fundamental level not only opens the door to human error and system glitches but also presents a golden opportunity for hackers, and that should be what we’re talking about–before anything bad happens.
The primary means of detecting cyber incidents for trained experts or artificial intelligence is to look for inconsistent or unexpected behavior in a system: An influx of traffic could mean a major news event, but it could also mean a DDoS attack. An unexpected delay in network connections could mean a hardware failure, but it could also signify a hijacked DNS server.
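The core of that detection logic can be sketched very simply: compare a current reading against a recent baseline and flag large deviations. This is a minimal illustration of the idea, not a description of any real monitoring stack; the numbers are invented:

```python
from statistics import mean, stdev


def is_anomalous(history, current, threshold=3.0):
    """Flag a reading that deviates more than `threshold` standard
    deviations from the recent baseline -- the same basic logic that
    lets a monitor separate a news spike from a possible DDoS."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold


# Hypothetical requests-per-minute baseline vs. two new readings:
baseline = [980, 1010, 995, 1005, 990, 1000, 1015, 985]
print(is_anomalous(baseline, 1020))   # modest bump -> False
print(is_anomalous(baseline, 9500))   # order-of-magnitude spike -> True
```

Real systems layer far more context on top (seasonality, known release windows, correlated signals), which is exactly why the same symptom can mean either a hardware failure or a hijacked DNS server.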
It doesn’t matter what caused Facebook’s recent day-long inter-platform outage. There is a valuable takeaway for businesses regardless: As Facebook trundles toward platform unification, it will be increasingly vulnerable to attack. While all companies are easier to breach when they are making a major change, Facebook and its holdings may represent a clear and present danger the likes of which we’ve never seen, and one that can help lead the way to better cyber solutions, no matter how big a company is.
This article originally appeared on Inc.com.
The post Facebook May Have Gotten Hacked, and Maybe It’s Better We Don’t Know appeared first on Adam Levin.
Multiple sales subsidiaries of Toyota Motor Corp. were breached in an apparent cyberattack that may have leaked the personal information of up to 3.1 million people in the Tokyo area.
Toyota announced the possible breach, the result of “unauthorized access” to a network server containing customer information, in late March, but said it was unable to confirm whether any data was actually leaked.
The hacking attempt was followed the next day by similar cyberattacks on Toyota’s subsidiaries in Vietnam and Thailand, each of which issued statements about the possibility of breaches without any further details or confirmation regarding the data compromised.
These three attempts followed another announcement by Toyota’s Australian subsidiary in February, which disclosed an attempted hack but was similarly light on details.
Toyota has yet to issue further statements on these incidents, but has apologized and promised to implement stronger security measures on its networks and at its facilities.
The post Possible Toyota Breach Affects Up to 3.1 Million Customers appeared first on Adam Levin.
The Georgia Institute of Technology disclosed a data breach that exposed the data of up to 1.3 million people, including current and former students, faculty, and staff.
The breach occurred in late March after what the school is calling an “unknown outside entity” gained access to a web application’s data. While the full scope is yet to be determined, the accessed data included names, addresses, Social Security numbers and birthdates.
This marks the second time in the past year that the school has disclosed a compromise of its data, after accidentally emailing out the information of 8,000 students in July 2018. In addition to the size of the breach, the news is noteworthy because of the school’s reputation in the field of computer science.
Read more about the story here.
The post Georgia Tech Data Breach Exposes 1.3 Million People appeared first on Adam Levin.
A woman carrying two Chinese passports and a thumb drive containing malware was arrested by Secret Service agents after gaining entry to President Trump’s Mar-A-Lago resort.
The woman, Yujing Zhang, initially claimed to be on the premises to use a swimming pool, but later said she had arrived early for a United Nations Chinese American Association Event when questioned by a receptionist. There was no such event scheduled.
Zhang was then detained by Secret Service Special Agent Samuel Ivanovich with whom she became “verbally aggressive,” claiming she was onsite to speak with President Trump, who was golfing nearby.
She was arrested and charged with making false statements to a federal law enforcement officer and entering a restricted area. Zhang faces a maximum of six years in prison and $350,000 in fines. A search of her belongings revealed four cell phones, a laptop, a hard drive, and a thumb drive containing “malicious malware,” the nature of which has yet to be announced.
Zhang’s attorney has declined to comment.
Read more about the story here.
The post Secret Service Arrests Chinese Woman Carrying Malware at Mar-A-Lago appeared first on Adam Levin.
A report issued by the British government has concluded that products developed and manufactured by the Chinese telecommunications company Huawei present significant security risks.
Assembled by the Huawei Cyber Security Evaluation Centre (HCSEC) and presented to the UK National Security Adviser, the report found that on a wide range of security issues related to both its software and engineering, Huawei has failed to maintain adequate protections.
“Poor software engineering and cybersecurity processes lead to security and quality issues, including vulnerabilities. The number and severity of vulnerabilities discovered, along with architectural and build issues, by the relatively small team in HCSEC is a particular concern. If an attacker has knowledge of these vulnerabilities and sufficient access to exploit them, they may be able to affect the operation of the network, in some cases causing it to cease operating correctly,” stated the report, going on to add:
“These findings are about basic engineering competence and cybersecurity hygiene that give rise to vulnerabilities that are capable of being exploited by a range of actors.”
Huawei has been the subject of ongoing controversy in the West. Its bids to build the infrastructure for 5G wireless networks have been blocked in the United States, Australia, and New Zealand over security concerns and allegations that its equipment contains backdoors the Chinese government can exploit. U.S. Secretary of State Mike Pompeo has warned European nations that using Huawei equipment would make it “more difficult” for the U.S. to partner with them.
Huawei is currently suing the United States over the ban, and the company’s chairman Guo Ping has accused the U.S. government of having a “loser’s attitude,” adding that “the U.S. has abandoned all table manners.”
The post British Government Report Confirms Huawei Cybersecurity Concerns appeared first on Adam Levin.
DMitry Deepmagic Information Gathering Tool Kali Linux DMitry (Deepmagic Information Gathering Tool) is an open-source Linux CLI tool developed by James Greig and coded in C. DMitry is a powerful information gathering tool that aims to gather as much information about a host as possible. Features include subdomain search, email address harvesting, uptime information, […]
The post DMitry Deepmagic information Gathering Tool Kali Linux appeared first on HackingVision.