Category Archives: Google

A Eulogy For Every Product Google Has Ruthlessly Killed (145 and Counting)

An anonymous reader shares a report: Tez. Trendalyzer. Panoramio. Timeful. Bump! SlickLogin. BufferBox. The names sound like a mix of mid-2000s blogs and startups you'd see onstage at TechCrunch Disrupt. In fact, they are just some of the many, many products that Google has acquired or created -- then killed. Google is notorious for eliminating underperforming products -- even though they often cost little to keep running, they can pose a serious legal liability for the company -- yet it's rare to hear them spoken of after they've been shuttered. Killed By Google is the first website to memorialize them all in one place. Created by front-end developer Cody Ogden, the site features a tombstone and epitaph for each product the company has killed since its founding.

Read more of this story at Slashdot.

Cyber Security Week in Review (March 22)


Welcome to this week's Cyber Security Week in Review, where Cisco Talos runs down all of the news we think you need to know in the security world. For more news delivered to your inbox every week, sign up for our Threat Source newsletter here.

Top headlines this week


  • Norwegian aluminum company Norsk Hydro was hit with a “severe” ransomware attack. The malware affected production operations in the U.S. and Europe. The company says it does not know the origin of the attack and is still working to contain the effects. 
  • Cisco disclosed several vulnerabilities in some of its IP phones. The bugs could allow an attacker to carry out a cross-site request forgery attack or write arbitrary files to the filesystem. The vulnerabilities affect Cisco’s IP Phone 8800 series, a business desk phone that includes HD video features, and the 7800 series, which is mainly used in conference rooms at businesses. Snort rules 49509 - 49511 protect users from these vulnerabilities. 
  • A new variant of the Mirai botnet is in the wild, targeting televisions that host signage and presentation systems. The malware uses 27 different exploits to infect systems, 11 of which are completely new to Mirai. Snort rules 49512 - 49520 protect users from this new variant. 

From Talos


  • The new LockerGoga malware straddles the line between a wiper and ransomware. Earlier versions of LockerGoga use an encryption process to remove the victim's ability to access files and other data stored on infected systems. A ransom note is then presented that demands payment in Bitcoin in exchange for keys that can be used to decrypt the data LockerGoga has encrypted.
  • The latest episode of the Beers with Talos podcast covers point-of-sale malware. Additionally, the guys recap the RSA Conference from earlier this month and talk OpSec fails. 
  • We recently discovered 11 vulnerabilities in the CUJO Smart Firewall. These vulnerabilities could allow an attacker to bypass the safe browsing function and completely take control of the device, either by executing arbitrary code in the context of the root account or by uploading and executing unsigned kernels on affected systems. Snort rules 47234, 47663, 47809, 47811, 47842, 48261 and 48262 provide coverage for these bugs.
  • Our researchers discovered a new way to unmask IPv6 addresses using UPnP. This allows us to enumerate a particular subset of active IPv6 hosts which can then be scanned. We performed comparative scans of discovered hosts on both IPv4 and IPv6 and presented the results and analysis.

The rest of the news


  • A health care vendor in Singapore mistakenly exposed the personal information of 800,000 blood donors. The vendor reportedly used an unsecured database on an internet-facing server without properly protecting it from unauthorized access. All affected donors have been notified by Singapore’s government. 
    • Talos Take: "The data leak in Singapore is the latest in a string of these. Last summer (June/July) it was 1.5 million records, earlier this year it was 14,000 HIV patients and now this 800,000 blood donor info that you have," said Nigel Houghton, director of Talos operations.
  • Google patched a bug in its Photos app that could have allowed an attacker to track users. The vulnerability opened mobile devices to browser-based timing attacks that could produce information about when, where and with whom a user had taken a photo. 
  • The European Union hit Google with another fine, this time worth roughly $1.7 billion. A recent report from the European Commission found that Google “shielded itself from competitive pressure” by blocking rivals from placing advertisements on third-party websites by adding certain clauses in AdSense contracts.
  • Microsoft is ending support for Windows 7. The company says it will cease support for the operating system on Jan. 14, 2020. Users are being notified of the change via a recent update. 
  • U.S. officials at the recent RSA Conference warned that China is the greatest cyber threat to America, not Russia. Rob Joyce, a cybersecurity adviser at the National Security Agency, compared Russia to a hurricane that can move quickly, while China is closer to the long-term problems that can come with climate change.


Google Disallows VPN Ads Targeting Chinese Users Due To ‘Local Legal Restrictions’

China is already known for its strict internet censorship policies. It is also among the few countries that have

Google Disallows VPN Ads Targeting Chinese Users Due To ‘Local Legal Restrictions’ on Latest Hacking News.

Google announces Stadia, a browser-based streaming game service

Google’s new cloud gaming platform ‘Stadia’ to launch in 2019

Google on Tuesday entered the video game business by announcing its new cloud-based gaming platform ‘Stadia’ at the Game Developers Conference (GDC) 2019 keynote presentation in San Francisco.

Stadia is being promoted as the central community for gamers, creators, and developers that aims to take on Microsoft’s Xbox One, Sony’s PlayStation 4, Nintendo’s Switch, as well as upcoming streaming services like Microsoft’s xCloud and Nvidia’s GeForce Now.

Stadia is not a piece of hardware or a dedicated console but a streaming platform meant to free the player of hardware limitations of PCs and gaming consoles. All games will run on Google’s global network of data centers, and a video feed of the game will be accessible by the user on desktops, laptops, tablets, phones, and TVs connected to Chromecast streaming media sticks to start with.

The streaming technology allows users to play high-quality video games on their internet browser or YouTube without waiting for any content to be downloaded to their device. It gives players “instant access” to a game by clicking a link.

“Our ambition is far beyond a single game,” said Google’s Phil Harrison, vice president overseeing the new service. “The power of instant access is magical, and it’s already transformed the music and movie industries.”

At launch, Google is aiming to stream games at up to 4K and 60 fps (frames per second) with HDR and full surround audio, for both playing games and sharing game streams. It plans to go up to 8K streaming at 120+ fps in the future.

During the keynote presentation, Google demonstrated titles from the Doom and Assassin’s Creed franchises. The search giant said that it would also develop some games in-house. Further, Google has partnered with the makers of popular game development software such as Unreal Engine, Unity, and Havok.

Google will also launch a game controller called the Stadia controller to use with its service. The controller connects through Wi-Fi directly to the game running in the Google data center. It also has a dedicated screenshot button, a Google Assistant shortcut, and a built-in microphone to access features created by developers to use in games. The controller has a full-size directional pad, two joysticks, four action buttons, and two shoulder buttons on either side.

Stadia will first launch in the US, Canada, the UK, and Europe in 2019. However, no pricing, exact release date, or list of supported titles was announced.

The post Google announces Stadia, a browser-based streaming game service appeared first on TechWorm.

Microsoft Launches Application Guard Extension For Firefox and Chrome

Earlier, Microsoft introduced a dedicated Windows Defender browser extension for its browser Microsoft Edge with Windows 10. The extension, named

Microsoft Launches Application Guard Extension For Firefox and Chrome on Latest Hacking News.

The State of Security: Google and Facebook scammed out of $123 million by man posing as hardware vendor

Even the most tech savvy companies in the world can fall for business email compromise.

A Lithuanian man has this week pleaded guilty to tricking Google and Facebook into transferring over $100 million into a bank account under his control, after posing as a company that provided the internet giants with hardware for their data centres.

The post Google and Facebook scammed out of $123 million by man posing as hardware vendor appeared first on The State of Security.




Google Launches New Policy Manager To Tackle Bad Ads

Every year, Google shares updates about how it handles malicious and scam advertisements. This year, Google announced the launch of

Google Launches New Policy Manager To Tackle Bad Ads on Latest Hacking News.

Google Will Implement a Microsoft-Style Browser Picker For EU Android Devices

Back in 2009, the EU's European Commission said Microsoft was harming competition by bundling its browser -- Internet Explorer -- with Windows. Eventually Microsoft and the European Commission settled on the "browser ballot," a screen that would pop up and give users a choice of browsers. Almost 10 years later, the tech industry is going through this again, this time with Google and the EU. After receiving "feedback" from the European Commission, Google announced last night that it would offer Android users in the EU a choice of browsers and search engines. Ars Technica reports: In July, the European Commission found Google had violated the EU's antitrust rules by bundling Google Chrome and Google Search with Android, punishing manufacturers that shipped Android forks, and paying manufacturers for exclusively pre-installing Google Search. Google was fined a whopping $5.05 billion (which it is appealing) and then the concessions started. Google said its bundling of Search and Chrome funded the development and free distribution of Android, so any manufacturer looking to ship Android with unbundled Google apps would now be charged a fee. Reports later pegged this amount as up to $40 per handset. We don't have many details on exactly how Google's new search and browser picker will work; there's just a single paragraph in the company's blog post. Google says it will "do more to ensure that Android phone owners know about the wide choice of browsers and search engines available to download to their phones. This will involve asking users of existing and new Android devices in Europe which browser and search apps they would like to use."

Read more of this story at Slashdot.

E Hacking News – Latest Hacker News and IT Security News: Google to shut down Google+ and Inbox on April 2





Following its social media website Google+, the company has announced that it is also shutting down its Inbox app.

Starting March 18, Google will notify users of Inbox’s closure via a pop-up screen that appears every time they open the app.

The notification will also include a link to the Gmail app to ensure that it does not disappoint its users. Gmail has recently updated its app with new eye-catching features like Smart Reply, Smart Compose, and Follow-ups.

It is now difficult to find Inbox by Gmail on the Google Play Store.

“This app will be going away in 13 days,” the notification reads. “You can find your favorite inbox features in the Google app. Your messages are already waiting for you.”

On its official website, Google said:

“Inbox is signing off. Find your favorite features in the new Gmail. We are saying goodbye to Inbox at the end of March 2019. While we were here, we found a new way to email with ideas like snooze, nudges, Smart Reply and more. That’s why we’ve brought your favorite features to Gmail to help you get more done. All your conversations are already waiting for you. See you there.”



Google fined by EU for blocking its rivals advertisements



European Union regulators have fined Google $1.68 billion (€1.49 billion/£1.28 billion) for blocking advertisements from rival search engine companies.

This is the third time in the last two years that the EU has hit the company with a major antitrust fine.

The EU's competition commissioner, Margrethe Vestager, announced the decision at a news conference in Brussels on Wednesday.

'Today's decision is about how Google abused its dominance to stop websites using brokers other than the AdSense platform,' Vestager said.

According to the probe, Google and its parent company, Alphabet, violated EU antitrust rules by inserting clauses into contracts with websites that use AdSense; the clauses prevented those websites from placing ads from Google's rivals.

The company 'prevented its rivals from having a chance to innovate and to compete in the market on their merits,' Vestager said.

'Advertisers and website owners, they had less choice and likely faced higher prices that would be passed on to consumers.'

Just after the fine was announced, the company said it has already made several changes and will make a number of others to address EU antitrust regulators' concerns.

'We've always agreed that healthy, thriving markets are in everyone's interest,' Kent Walker, senior vice-president of global affairs, said in a statement.

'We've already made a wide range of changes to our products to address the Commission's concerns.

'Over the next few months, we'll be making further updates to give more visibility to rivals in Europe,' he continued.

Post-Perimeter Security: Addressing Evolving Mobile Enterprise Threats

Experts from Gartner, Lookout and Google talk enterprise mobile security in this webinar replay.

E Hacking News – Latest Hacker News and IT Security News: Fresh e-mails have been floating around warning users to back their Google+ data…




Fresh e-mails have been floating around warning users to back up their Google+ data by April 2, 2019, before it gets deleted forever.

Google announced in October last year that it planned to shut the platform down.

The reasons outlined were lack of usage, the discovery of API bugs that leaked user information, and the later discovery of more bugs.

In the last few days, Google has started sending these e-mails to warn users to back up their data before it is too late.

Per the e-mail, the shutdown will not in any way affect other Google products, including YouTube, Google Photos, and Google Drive.

The following steps can be used to create the backup:

1. Go to the Google+ download page.

2. Select the data categories you want to save by putting a check mark on them.

3. Click “Next Step”.

4. Choose how you would like to retrieve the archive: via an e-mailed link, or saved to Dropbox, Google Drive, or OneDrive.

5. You can also decide how large you want the archive files to be.

6. Once done, click “Create Archive” and Google will start creating the archive for you.

After following all of the above steps, you will be presented with an archive containing all your material as HTML files and images.

To be on the safe side, Google advises starting this process soon and completing it by March 31, 2019.

This will ensure that the archive is prepared in good time.




Now-Patched Google Photos Vulnerability Let Hackers Track Your Friends and Location History

A now-patched vulnerability in the web version of Google Photos allowed malicious websites to expose where, when, and with whom your photos were taken.


Background

One trillion photos were taken in 2018. With image quality and file size increasing, it’s obvious why more and more people choose to host their photos on services like iCloud, Dropbox and Google Photos.

One of the best features of Google Photos is its search engine. Google Photos automatically tags all your photos using each picture’s metadata (geographic coordinates, date, etc.) and a state-of-the-art AI engine, capable of describing photos with text, and detecting objects and events such as weddings, waterfalls, sunsets and many others. If that’s not enough, facial recognition is also used to automatically tag people in photos. You could then use all this information in your search query just by writing “Photos of me and Tanya from Paris 2018”.

The Threat

I’ve used Google Photos for a few years now, but only recently learned about its search capabilities, which prompted me to check for side-channel attacks. After some trial and error, I found that the Google Photos search endpoint is vulnerable to a browser-based timing attack called Cross-Site Search (XS-Search).

In my proof of concept, I used the HTML link tag to create multiple cross-origin requests to the Google Photos search endpoint. Using JavaScript, I then measured the amount of time it took for the onload event to trigger. I used this information to calculate the baseline time — in this case, timing a search query that I know will return zero results.

Next, I timed the following query “photos of me from Iceland” and compared the result to the baseline. If the search time took longer than the baseline, I could assume the query returned results and thus infer that the current user visited Iceland.

As I mentioned above, the Google Photos search engine takes into account the photo metadata. So by adding a date to the search query, I could check if the photo was taken in a specific time range. By repeating this process with different time ranges, I could quickly approximate the time of the visit to a specific place or country.

Attack Flow

The video below demonstrates how a third-party site can use time measurements to extract the names of the countries you took photos in. The first bar in the video, named “controlled,” represents the baseline timing of an empty results page. Any time measurement above the baseline indicates a non-empty result, i.e., the current user has visited the queried country.

For this attack to work, we need to trick a user into opening a malicious website while logged into Google Photos. This can be done by sending the victim a direct message on a popular messaging service or an email, or by embedding malicious JavaScript inside a web ad. The JavaScript code will silently generate requests to the Google Photos search endpoint, extracting Boolean answers to any query the attacker wants.

This process can be incremental, as the attacker can keep track of what has already been asked and continue from there the next time you visit one of his malicious websites.

You can see below the timing function I implemented for my proof of concept:
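The original post embedded the author's actual code here. As a stand-in, below is a minimal sketch of how such a timing probe might be structured; the helper names, the stylesheet-link trick shown in the comments, and the 20 ms margin are my own assumptions, not the researcher's implementation.

```javascript
// In the browser, the probe would issue a cross-origin request to the search
// endpoint via a <link> element and time how long onload takes to fire, e.g.:
//
//   function timeQuery(query) {
//     return new Promise((resolve) => {
//       const link = document.createElement("link");
//       link.rel = "stylesheet"; // triggers a fetch the page can time
//       link.href = SEARCH_ENDPOINT + "?q=" + encodeURIComponent(query);
//       const start = performance.now();
//       link.onload = link.onerror = () => resolve(performance.now() - start);
//       document.head.appendChild(link);
//     });
//   }
//
// The decision logic itself needs no browser APIs: take several samples per
// query and compare their median against the empty-result baseline.

function median(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}

// A query counts as a "hit" (non-empty result set) when its median timing
// exceeds the baseline median by more than marginMs.
function hasResults(querySamples, baselineSamples, marginMs = 20) {
  return median(querySamples) > median(baselineSamples) + marginMs;
}
```

With `hasResults`, each search query is reduced to a single Boolean bit, which is all the location-history extraction described later needs.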

Below is the code I used to demonstrate how users’ location history can be extracted.
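The author's demonstration code is not reproduced here; the following is a hypothetical sketch of the date-range approach described above. `visited(country, fromYear, toYear)` is a name I made up for a callback wrapping the timing oracle, answering whether the victim has photos of a country within a range; repeatedly halving the range narrows a visit down to a single year.

```javascript
// Binary-search a date range for the year of a visit, using only Boolean
// answers from the injected oracle. Returns null if the country was never
// visited in the range.
function findVisitYear(visited, country, fromYear, toYear) {
  if (!visited(country, fromYear, toYear)) return null;
  while (fromYear < toYear) {
    const mid = Math.floor((fromYear + toYear) / 2);
    if (visited(country, fromYear, mid)) {
      toYear = mid;        // a visit falls in the earlier half
    } else {
      fromYear = mid + 1;  // otherwise it must be in the later half
    }
  }
  return fromYear;
}

// Sweep a list of candidate countries and build a {country: year} history.
function extractHistory(visited, countries, fromYear, toYear) {
  const history = {};
  for (const country of countries) {
    const year = findVisitYear(visited, country, fromYear, toYear);
    if (year !== null) history[country] = year;
  }
  return history;
}
```

Because each oracle call costs one timed request, the binary search keeps the number of requests per country logarithmic in the size of the date range, which is what makes the incremental extraction practical.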

Closing Thoughts

As I said in my previous blog post, it is my opinion that browser-based side-channel attacks are still overlooked. While big players like Google and Facebook are catching up, most of the industry is still unaware.

I recently joined an effort to document those attacks and vulnerable DOM APIs. You can find more information on the xsleaks repository (currently still under construction).

As a researcher, it was a privilege to contribute to protecting the privacy of the Google Photos user community, as we continuously do for our own Imperva customers.

***

Imperva is hosting a live webinar with Forrester Research on Wednesday, March 27, at 1 p.m. PT on the topic “Five Best Practices for Application Defense in Depth.” Join Terry Ray, Imperva SVP and Imperva Fellow; Kunal Anand, Imperva CTO; and Forrester principal analyst Amy DeMartine as they discuss how the right multi-layered defense strategy, bolstered by real-time visibility that helps security analysts distinguish real threats from noise, can provide true protection for enterprises. Sign up to watch and ask questions live, or see the recording!

The post Now-Patched Google Photos Vulnerability Let Hackers Track Your Friends and Location History appeared first on Blog.

Google Will Prompt European Android Users to Select Preferred Default Browser

Google announced some major changes for its Android mobile operating system in October after the European Commission hit the company with a record $5 billion antitrust fine for pre-installing its own apps and services on third-party Android phones. The European Commission accused Google of forcing Android phone manufacturers to "illegally" tie its proprietary apps and services—specifically,

How the Google and Facebook outages could impact application security

With major outages impacting Gmail, YouTube, Facebook and Instagram recently, consumers are right to be concerned over the security of their private data. While details of these outages haven’t yet been published – a situation I sincerely hope Alphabet and Facebook correct – the implications of these outages are something we should be looking closely at. The first, and most obvious, implication is the impact of data management during outages. Software developers tend to design … More

The post How the Google and Facebook outages could impact application security appeared first on Help Net Security.

Google Debuts Video Games Streaming Service Stadia

Google today launched its Stadia cloud gaming service at the Game Developers Conference (GDC) in San Francisco. From a report: Stadia is not a dedicated console or set-top box. The platform will be accessible on a variety of platforms: browsers, computers, TVs, and mobile devices. In an onstage demonstration of Stadia, Google showed someone playing a game on a Chromebook, then playing it on a phone, then immediately playing it on PC -- a low-end PC, no less -- picking up where the game left off in real time. Stadia will be powered by Google's worldwide data centers, which live in more than 200 countries and territories, streamed over hundreds of millions of miles of fiber optic cable, Google CEO Sundar Pichai said. Phil Harrison, previously at PlayStation and Xbox, now at Google, said the company will give developers access to its data centers to bring games to Stadia. Harrison said that players will be able to access and play Stadia games, like Assassin's Creed Odyssey, within seconds. Harrison showed a YouTube video of Odyssey featuring a "Play" button that would offer near-instant access to the game. Pichai announced the new platform at the Game Developers Conference, saying that Google wants to build a gaming platform for everyone and break down barriers to access for high-end games. Users will be able to move from YouTube directly into gameplay without any downloads. Google says this can be done in as little as 5 seconds. At launch, Stadia will stream games at 4K resolution, but Google claimed in the future it will be able to stream at a video quality of 8K. The company says it will launch the service later this year in the U.S. and UK.

Read more of this story at Slashdot.

G Suite admins can now disable SMS and voice 2FA

G Suite administrators can now prevent enterprise users from using SMS and voice codes as their second authentication/verification factor for accessing their accounts. The ability to disable those two options will be made available in the next two weeks to admins using any of the G Suite editions. Why and how? It has been known for quite a while that additional authentication via SMS and voice codes is the least secure option for 2-factor authentication, … More

The post G Suite admins can now disable SMS and voice 2FA appeared first on Help Net Security.

Google Launched Numerous Privacy Features In Android Q

Google's new Android version not only brings new features but also heightens user privacy. Recently, Google

Google Launched Numerous Privacy Features In Android Q on Latest Hacking News.

Android Q will come with improved privacy protections

Android Q, the newest iteration of Google’s popular mobile OS, is scheduled to be made available to end users at the end of August. While we still don’t know what its official release name will be, the first preview build and accompanying information released by Google give us a peek into some of the privacy improvements that we can look forward to. Stronger protections for user privacy 1. The platform will stop keeping track of … More

The post Android Q will come with improved privacy protections appeared first on Help Net Security.

Security Affairs: Google took down 2.3 billion bad ads in 2018, including 58.8M phishing ads

Google recently shared details about its efforts against malicious advertising; the giant took down 2.3 billion bad ads last year.

Google revealed that it took down 2.3 billion bad ads in 2018, including 58.8 million phishing ads, for violations of its policies.

Google introduced 31 new ads policies in 2018 aimed at protecting users from scams and other fraudulent activities (e.g., third-party tech support, ticket resellers, and cryptocurrency).

Some of the policies Google added in 2018 include a ban on ads from for-profit bail bond providers, which were abused to take advantage of vulnerable communities.

“In all, we introduced 31 new ads policies in 2018 to address abuses in areas including third-party tech support, ticket resellers, cryptocurrency and local services such as garage door repairmen, bail bonds and addiction treatment facilities.” reads the press release published by Google.

“We took down 2.3 billion bad ads in 2018 for violations of both new and existing policies, including nearly 207,000 ads for ticket resellers, over 531,000 ads for bail bonds and approximately 58.8 million phishing ads. Overall, that’s more than six million bad ads, every day.”

Malicious ads that Google took down in 2018 include nearly 207,000 ads for ticket resellers and over 531,000 ads for bail bonds.

Google announced that next month it will launch a new policy manager in Google Ads that will give advertisers tips on avoiding common policy mistakes.

Google also revealed that improved machine learning technology helped it identify the threat actors behind bad ads; it terminated nearly one million bad advertiser accounts.

“When we take action at the account level, it helps to address the root cause of bad ads and better protect our users,” continues Google.

In 2017, Google launched new technology for more granular analysis of ads; one year later, the company launched 330 detection classifiers to help it better detect “badness” at the page level (nearly three times the number of classifiers launched in 2017).

“So while we terminated nearly 734,000 publishers and app developers from our ad network, and removed ads completely from nearly 1.5 million apps, we were also able to take more granular action by taking ads off of nearly 28 million pages,” Google adds.

Last year, Google introduced a new policy specifically created for election ads in the U.S. ahead of the 2018 midterm elections. Aiming to prevent misinformation and fake news, the company verified nearly 143,000 election ads; similar tools are being launched ahead of elections in the EU and India.

Google removed ads from approximately 1.2 million pages, more than 22,000 apps, and nearly 15,000 sites last year.

Ads were removed from almost 74,000 pages for violating Google's “dangerous or derogatory” content policy, and 190,000 ads were taken down for violating this policy.

In 2018, Google helped the FBI, along with the cyber-security firm White Ops, to take down a sophisticated ad fraud scheme called ‘3ve’ that allowed its operators to earn tens of millions of dollars. 3ve infected over 1.7 million computers to carry out advertising frauds.

Pierluigi Paganini

(SecurityAffairs – malicious ads, Google)

The post Google took down 2.3 billion bad ads in 2018,including 58.8M phishing ads appeared first on Security Affairs.



Security Affairs

Google took down 2.3 billion bad ads in 2018, including 58.8M phishing ads

Google recently shared details about its efforts against malicious advertising: the giant took down 2.3 billion bad ads last year.

Google revealed that it took down 2.3 billion bad ads in 2018 for violations of its policies, including 58.8 million phishing ads.

Google introduced 31 new ads policies in 2018 aimed at protecting users from scams and other fraudulent activity in areas such as third-party tech support, ticket resales, and cryptocurrency.

The policies Google added in 2018 include a ban on ads from for-profit bail bond providers, a category that had been abused to take advantage of vulnerable communities.

“In all, we introduced 31 new ads policies in 2018 to address abuses in areas including third-party tech support, ticket resellers, cryptocurrency and local services such as garage door repairmen, bail bonds and addiction treatment facilities.” reads the press release published by Google.

“We took down 2.3 billion bad ads in 2018 for violations of both new and existing policies, including nearly 207,000 ads for ticket resellers, over 531,000 ads for bail bonds and approximately 58.8 million phishing ads. Overall, that’s more than six million bad ads, every day.”
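As a quick sanity check of the "more than six million bad ads, every day" claim, the per-day figure follows directly from the annual total. This is a back-of-the-envelope sketch, not part of Google's report:

```python
# Back-of-the-envelope check: 2.3 billion takedowns spread over 2018
# works out to roughly 6.3 million bad ads removed per day.
total_bad_ads = 2_300_000_000
days_in_2018 = 365

per_day = total_bad_ads / days_in_2018
print(round(per_day))  # roughly 6.3 million, consistent with "more than six million"
```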

As those figures show, ticket reseller and bail bond ads, both targets of the new policies, featured prominently among the takedowns.

Google announced that next month it will launch a new policy manager in Google Ads to give advertisers tips on avoiding common policy mistakes.

Google also revealed that improved machine learning technology helped it identify the threat actors behind bad ads; as a result, it terminated nearly one million bad advertiser accounts.

“When we take action at the account level, it helps to address the root cause of bad ads and better protect our users,” continues Google.

In 2017, Google launched new technology for more granular analysis of ads. One year later, the company launched 330 detection classifiers to help it better detect “badness” at the page level, nearly three times the number of classifiers launched in 2017.

“So while we terminated nearly 734,000 publishers and app developers from our ad network, and removed ads completely from nearly 1.5 million apps, we were also able to take more granular action by taking ads off of nearly 28 million pages,” Google adds.

Last year, Google introduced a new policy specifically for election ads in the U.S. ahead of the 2018 midterm elections. Aiming to prevent misinformation and fake news, the company verified nearly 143,000 election ads; similar tools are being launched ahead of elections in the EU and India.

Google removed ads from approximately 1.2 million pages, more than 22,000 apps, and nearly 15,000 sites last year.

Ads from almost 74,000 pages were removed for violating Google’s “dangerous or derogatory” content policy, and roughly 190,000 individual ads were taken down under the same policy.

In 2018, Google worked with the FBI and the cybersecurity firm White Ops to take down a sophisticated ad fraud scheme called ‘3ve’ that allowed its operators to earn tens of millions of dollars. 3ve infected over 1.7 million computers to carry out ad fraud.

Pierluigi Paganini

(SecurityAffairs – malicious ads, Google)

The post Google took down 2.3 billion bad ads in 2018, including 58.8M phishing ads appeared first on Security Affairs.

Before Google+ Shuts Down, The Internet Archive Will Preserve Its Posts

Google+ "was an Internet-based social network. It was almost 8 years old," reports KilledByGoogle.com, which bills itself as "The Google Graveyard: A list of dead products Google has killed and laid to rest in the Google Cemetery." But before Google+ closes for good in April, its posts are being preserved by Internet Archive and the ArchiveTeam, reports the Verge: In a post on Reddit, the sites announced that they had begun their efforts to archive the posts using scripts to capture and back up the data in an effort to preserve it. The teams say that their efforts will only encompass posts that are currently available to the public: they won't be able to back up posts that are marked private or deleted... They also note that they won't be able to capture everything: comment threads have a limit of 500 comments, "but only presents a subset of these as static HTML. It's not clear that long discussion threads will be preserved." They also say that images and video won't be preserved at full resolution... They also urge people who don't want their content to be archived to delete their accounts, and pointed to a procedure to request the removal of specific content. A bit of history: Linus Torvalds launched a Google+ page in 2017 called "Gadget Reviews" -- where he made exactly six posts.

Read more of this story at Slashdot.

Google’s Bad Data Wiped Another Neighborhood Off the Map

Medium's technology publication ran a 3,600-word investigation into a mystery that began when a 66-year-old New York woman Googled directions to her neighborhood, "and found that the app had changed the name of her community..." It's just as well no one contacted Google, because Google wasn't the company that renamed the Fruit Belt to Medical Park. When residents investigated, they found the misnomer repeated on several major apps and websites including HERE, Bing, Uber, Zillow, Grubhub, TripAdvisor, and Redfin... Monica Stephens, a geographer at the University at Buffalo who studies digital maps and misinformation, immediately suspected the geographic clearinghouse Pitney Bowes. Founded in 1920 as a maker of postage meters -- the machines that stamp mail with proof it's been sent -- Pitney Bowes expanded into neighborhood data in 2016 when it bought the leading U.S. provider, Maponics. In its 15-year run, Maponics had supplied neighborhood data to companies from Airbnb to Twitter to the Houston Chronicle. And it had also just acquired a longtime competitor, Urban Mapping, which had previously supplied Facebook, Microsoft, MapQuest, Yahoo, and Apple. Though Pitney Bowes is far from a household name, the $3.4 billion data broker is "a huge company at this point," says Stephens, with enough influence to inadvertently rename a neighborhood across hundreds of sites... In the early 2000s, Urban Mapping offered new college grads $15 to $25 per hour to comb local blogs, home listings, city plans, and brochures for possible neighborhood names and locations. Maponics, meanwhile, used nascent technologies such as computer vision and natural language processing to pull neighborhoods from images and blocks of text, one former executive with the company said... I visited the Buffalo Central Library to find the source of the error... Sure enough, one of the librarians located a single planning office map that used the "Medical Park" label.
It was a 1999 report on poverty and housing conditions -- long since relegated to a dusty shelf stacked with old binders and file folders... Somehow, likely in the early 2000s, this map made its way into what is now the Pitney Bowes data set -- and from there, was hoovered into Google Maps and out onto the wider internet. Buffalo published another map in 2017, with the Fruit Belt clearly marked, and broadcast on the city's open data portal. For whatever reason, Pitney Bowes and its customers never picked that map up. This is not the first time Google Maps has seemed to spontaneously rename a neighborhood. In the Fruit Belt's case, though, the reporter's query eventually prompted corrections to the maps on Redfin, TripAdvisor, Zillow, Grubhub, and Google Maps. Still, the article argues that when it comes to how city names are represented online, "the process is too opaque to scrutinize in public. And that ambiguity foments a sense of powerlessness." Pitney Bowes doesn't even have a method for submitting corrections. Yet, in an emailed statement, a spokesperson for Google defended its use of third-party neighborhood sources. "Overall, this provides a comprehensive and up-to-date map," the spokesperson said, "but when we're made aware of errors, we work quickly to fix them."

Read more of this story at Slashdot.

Was Venezuela’s 5-Day Blackout Caused By Cyberattacks — or Wildfires?

What caused a devastating five-day blackout in Venezuela? Two engineers with expertise in geospatial technologies believe the answer lies in images from a NASA weather satellite showing thermal activity, which they superimposed onto Google Earth, the AP reports: Within hours of the attack, the government of embattled President Nicolas Maduro began accusing the U.S. of a cyberattack. Maduro has stuck to that narrative, saying hackers in the U.S. first shut down the Guri Dam and then delivered several "electromagnetic" blows. Engineers have questioned that assertion, contending that the Guri Dam's operating system is on a closed network with no internet connection. Several consulted by The Associated Press speculated that a more likely cause was a fire along one of the electrical grid's powerful 765-kilovolt lines that connect the dam to much of Venezuela. The transmission lines traverse through some of Venezuela's most remote and difficult to access regions on their way toward Caracas, making it difficult to obtain any first-hand information that could back up or pinpoint the location of a fire. Working with an expert at Texas Tech University's Geospatial Technologies Laboratory, Jose Aguilar, an expert on Venezuela's electrical grid, said satellite data indicates that on the day of the blackout there were three fires in close proximity to the 765-kilovolt lines transmitting power generated from the Guri Dam, which provides about 80 percent of Venezuela's electricity... Engineers have warned for years that Venezuela's state-run electricity corporation was failing to properly maintain power lines, letting brush that can catch fire during Venezuela's hot, dry months grow near and up the towering structures.

Read more of this story at Slashdot.

Kaspersky Lab official blog: Changes to Kaspersky Internet Security for Android and Kaspersky Safe Kids mobile apps

In early March, we modified the functions of several Kaspersky Lab mobile apps for Android and iOS. Here, we examine what’s different, what apps are affected, and the reasons for the changes.

Why Kaspersky Internet Security for Android and Safe Kids are set to lose some features

What’s changed in Kaspersky Safe Kids?

The parental control software Kaspersky Safe Kids for iPhone and iPad has shed two features: application control and Safari blocking.

Application control is used to block certain programs that parents consider unsuitable for their children. Safari blocking is needed to ensure that children go online only through the secure browser built into Kaspersky Safe Kids. In iOS, these features are now unavailable, but on other platforms, application blocking still works as before.

In Kaspersky Safe Kids for Android, the change is different: call and SMS monitoring (an Android-only feature) has disappeared. This means that Kaspersky Safe Kids will no longer notify parents about how much, and with whom, their children communicate by voice and SMS.

What’s changed in Kaspersky Internet Security for Android?

Some features have also changed in Kaspersky Internet Security for Android. The following are no longer available:

  • SMS scanning for phishing
  • Incoming call identification
  • Allowing calls from contacts only

The privacy protection feature for hiding selected contacts is also gone. These contacts can either be returned to the general list or deleted. See here for how to do this.

Likewise, the built-in Anti-Theft module has been stripped of two features: the option to receive on your phone the number of a SIM card inserted into a missing device after Lost Mode is enabled, and the option to erase selected personal data from the lost device. That said, it remains possible to delete all data at once from the device and reset it to factory settings.

No changes to other apps?

Kaspersky Safe Kids and Internet Security for Android are part of the Kaspersky Internet Security, Kaspersky Total Security, and Kaspersky Security Cloud integrated security systems. Therefore, the changes described also apply to users of these suites on Android and iOS platforms. On all other operating systems, everything is the same as before.

Why change anything?

The features in the iOS and Android versions of the app were removed for similar reasons in both cases: Apple and Google modified their operating systems, and with those changes came new requirements for apps allowed into the App Store and Google Play.

Google considers access to call log and SMS permissions too dangerous. As such, these permissions are now available only to apps whose main function is directly related to messages or calls.

The situation with Apple is slightly different. In iOS 12, the company implemented a proprietary feature (Screen Time) to control how much time the user spends, or can spend, on a particular app. Apple then deemed such features redundant in other apps and updated the requirements for App Store programs accordingly.

What about other developers? Are their apps affected as well?

The rules are the same for all who upload apps to the two services. That is, they extend not only to Kaspersky Lab, but to all developers who create mobile apps and distribute them through these official stores.

So, many other apps have also been deprived of similar features. In the case of Google Play, no antivirus app will be able to access call log or SMS permissions. In the App Store, no apps except ones made by Apple will be permitted to block access to other apps.

How will this affect users?

As often happens, the changes to the Google and Apple rulebooks are both good and bad.

On the one hand, plenty of examples come to mind of programs in Android misusing the SMS access feature. We already looked at Trojans that exploit this function to intercept one-time passwords sent by banks. So in theory, this screw-tightening should make Android a safer platform.

On the other hand, an exception could’ve been made for apps from trusted developers who need this feature for security purposes. That would ultimately benefit users.

The situation with Apple is less clear-cut. The company is essentially a newcomer to the parental control apps market, and it is seeking to cut off the oxygen supply to competitors — those with a long-established presence in the market (not only Kaspersky Lab by any means).

One could argue that because iOS and the App Store belong to Apple, there is nothing underhanded here — the company is entitled to act as it pleases. But that’s missing the bigger picture.

If Apple were a relatively small company, no one would say a word. But it happens to be a monopolist in its market segment: iOS-driven mobile devices.

The iPhone and iPad are not interchangeable with Android smartphones and tablets. Die-hard iPhone users cannot simply hop over to Android. It takes time, effort, and money to get set up with a new interface. The upshot is that Apple devices represent a separate market fully controlled by one company.

Leveraging its monopoly position, Apple restricts competition in the relatively small segment of parental control apps. This is certainly not good for users; healthy competition is the prime catalyst of progress and price reduction.

What comes next?

On the whole, neither Kaspersky Internet Security for Android nor Kaspersky Safe Kids has lost any key features. What we had to axe, our developers will endeavor to make good in subsequent releases through other means.

Moreover, we remain hopeful that Google and Apple, having carefully weighed the pros and cons, will relax their requirements for tried-and-trusted developers.



Kaspersky Lab official blog

Google Is Shutting Down Its Emmy Award-Winning VR Film Studio

Google is shutting down its Spotlight Stories immersive entertainment unit, according to an email sent out by Spotlight Stories executive producer Karen Dufilho Wednesday evening. "Google Spotlight Stories is shutting its doors after over six years of making stories and putting them on phones, on screens, in VR, and anywhere else we could get away with it," Dufilho said in her email sent to supporters of the studio. Variety reports: Spotlight Stories originally began as a group within Motorola, tasked with exploring the future of storytelling for mobile devices. The group then became part of Google's Advanced Technologies and Products (ATAP) group, and went on to produce a number of 360-degree videos and VR experiences with creators like Glen Keane, Justin Lin, Jorge Gutierrez and Aardman Animation, the makers of "Wallace and Gromit." "Pearl," a Spotlight Story from Patrick Osborne, the director of Disney's Oscar-nominated short film "Feast," was nominated for an Academy Award, and won a Creative Arts Emmy for Outstanding Innovation in Interactive Programming in 2017. Most recently, Spotlight Stories released "Age of Sail," an animated short film directed by Oscar-winning animator John Kahrs. Google is said to have invested significant amounts of money into Spotlight Stories over the years, without giving the group a mandate to monetize their works. However, while Spotlight Stories films pushed the medium forward, the group didn't necessarily improve the fortunes of Google's VR efforts, with the company struggling to find an audience for its Daydream VR headset. A Google spokesperson said in a statement to Variety: "Since its inception, Spotlight Stories strove to re-imagine VR storytelling. From ambitious shorts like 'Son of Jaguar,' 'Sonaria' and 'Back to The Moon' to critical acclaim for 'Pearl' (Emmy winner and first-ever VR film nominated for an Oscar) the Spotlight Stories team left a lasting impact on immersive storytelling. 
We are proud of the work the team has done over the years." A source with knowledge of the situation told Variety that staffers were given a chance to look for new positions within the company. Most artists who had been working on projects for Spotlight Stories were thought to be contractors on a by-project basis.

Read more of this story at Slashdot.

SimBad malware on Play Store infected millions of Android devices

By Waqas

Most of the applications infected by SimBad malware are simulator games. The IT security researchers at Check Point have discovered a sophisticated malware campaign that has been targeting Android users through Google Play Store on a global level and so far more than 150 million users have fallen prey to it. Dubbed SimBad by researchers; the malware disguises […]

This is a post from HackRead.com Read the original post: SimBad malware on Play Store infected millions of Android devices

Google Maps, Gmail, Drive, Facebook and Instagram Suffered Outage

Google addressed an influx of complaints from users about the misbehavior of popular services such as Gmail, YouTube, and Google Drive. Users across the world were affected by the outage of services they rely on heavily for various day-to-day activities.

Though the cause of the outage has not been confirmed, the issues of the users were addressed by Google.

YouTube also received complaints from its users, which it addressed on Twitter, saying the platform was aware of the service disruption and the problems it was causing. YouTube assured affected users that it was already looking into the matter and would come up with a fix.

Notably, YouTubers and other content creators had problems uploading videos, and viewers were unable to watch videos smoothly.

Addressing the issues with Google Drive, the company said, “We’re investigating reports of an issue with Google Drive. We will provide more information shortly. The affected users are able to access Google Drive, but are seeing error messages, high latency, and/or other unexpected behavior.”

Similarly, for Gmail, the company stated: “We’re investigating reports of an issue with Gmail. We will provide more information shortly. The affected users are able to access Gmail but are seeing error messages, high latency, and/or other unexpected behavior.”

Furthermore, Google noted in its G Suite Status Dashboard that the issue had been rectified and that Gmail and Google Drive would soon be functioning properly again.

“The problem with Google Drive should be resolved. We apologize for the inconvenience and thank you for your patience and continued support. Please rest assured that system reliability is a top priority at Google, and we are making continuous improvements to make our systems better.”

While acknowledging the disruption to Google App Engine, Google said, “We are still seeing the increased error rate with Google App Engine Blobstore API. Our Engineering Team is investigating possible causes. Mitigation work is currently underway by our Engineering Team. We will provide another status update by Tuesday, 2019-03-12 20:45 US/Pacific with current details.”

Meanwhile, Facebook was down for more than 14 hours, denying millions of users across the globe access to the platform. By Thursday morning, Facebook and its associated apps appeared to be regaining operational status.

While Facebook had yet to explain the disruption, it said, "We're aware that some people are currently having trouble accessing the Facebook family of apps. We're working to resolve the issue as soon as possible."

Instagram, which fell prey to the same outage, left its users unable to refresh their feeds and facing other glitches while accessing content.

Commenting on the matter, Elizabeth Warren, a potential Democratic candidate in the next US presidential election, said in a statement to the New York Times, "We need to stop this generation of big tech companies from throwing around their political power to shape the rules in their favor and throwing around their economic power to snuff out or buy up every potential competitor."

New Samsung Galaxy S10 review and features

By Uzair Amir

The all new Samsung Galaxy S10 family has been released by Samsung on 8th March, 2019 and despite the high price & a lazy start, the Galaxy S10 has made a record in pre-orders for Samsung in the US. When the pre-orders for Samsung Galaxy S10 family began, there were rumors and conflicting reports about […]

This is a post from HackRead.com Read the original post: New Samsung Galaxy S10 review and features

Google’s Nest fiasco harms user trust and invades their privacy

Technology companies, lawmakers, privacy advocates, and everyday consumers likely disagree about exactly how a company should go about collecting user data. But, following a trust-shattering move by Google last month regarding its Nest Secure product, consensus on one issue has emerged: Companies shouldn’t ship products that can surreptitiously spy on users.

Failing to disclose that a product can collect information from users in ways they couldn’t have reasonably expected is bad form. It invades privacy, breaks trust, and robs consumers of the ability to make informed choices.

While collecting data on users is nearly inevitable in today’s corporate world, secret, undisclosed, or unpredictable data collection—or data collection abilities—is another problem.

A smart-home speaker shouldn’t be secretly hiding a video camera. A secure messaging platform shouldn’t have a government-operated backdoor. And a home security hub that controls an alarm, keypad, and motion detector shouldn’t include a clandestine microphone feature—especially one that was never announced to customers.

And yet, that is precisely what Google’s home security product includes.

Google fumbles once again

Last month, Google announced that its Nest Secure would be updated to work with Google Assistant software. Following the update, users could simply utter “Hey Google” to access voice controls on the product line-up’s “Nest Guard” device.

The main problem, though, is that Google never told users that its product had an internal microphone to begin with. Nowhere inside the Nest Guard’s hardware specs, or in its marketing materials, could users find evidence of an installed microphone.

When Business Insider broke the news, Google fumbled ownership of the problem: “The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” a Google spokesperson said. “That was an error on our part.”

Customers, academics, and privacy advocates balked at this explanation.

“This is deliberately misleading and lying to your customers about your product,” wrote Eva Galperin, director of cybersecurity at Electronic Frontier Foundation.

“Oops! We neglected to mention we’re recording everything you do while fronting as a security device,” wrote Scott Galloway, professor of marketing at the New York University Stern School of Business.

The Electronic Privacy Information Center (EPIC) spoke in harsher terms: Google’s disclosure failure wasn’t just bad corporate behavior, it was downright criminal.

“It is a federal crime to intercept private communications or to plant a listening device in a private residence,” EPIC said in a statement. In a letter, the organization urged the Federal Trade Commission to take “enforcement action” against Google, with the hope of eventually separating Nest from its parent. (Google purchased Nest in 2014 for $3.2 billion.)

Days later, the US government stepped in. The Senate Select Committee on Commerce sent a letter to Google CEO Sundar Pichai, demanding answers about the company’s disclosure failure. Whether Google was actually recording voice data didn’t matter, the senators said, because hackers could still have taken advantage of the microphone’s capability.

“As consumer technology becomes ever more advanced, it is essential that consumers know the capabilities of the devices they are bringing into their homes so they can make informed choices,” the letter said.

This isn’t just about user data

Collecting user data is essential to today’s technology companies. It powers Yelp recommendations based on a user’s location, product recommendations based on an Amazon user’s prior purchases, and search results based on a Google user’s history. Collecting user data also helps companies find bugs, patch software, and retool their products to their users’ needs.

But some of that data collection is visible to the user. And when it isn’t, it can at least be learned by savvy consumers who research privacy policies, read tech specs, and compare similar products. Other home security devices, for example, advertise the ability to trigger alarms at the sound of broken windows—a functionality that demands a working microphone.

Google’s failure to disclose its microphone prevented even the most privacy-conscious consumers from knowing what they were getting in the box. It is nearly the exact opposite approach that rival home speaker maker Sonos took when it installed a microphone in its own device.

Sonos does it better

In 2017, Sonos revealed that its newest line of products would eventually integrate with voice-controlled smart assistants. The company opted for transparency.

Sonos updated its privacy policy and published a blog about the update, telling users: “The most important thing for you to know is that Sonos does not keep recordings of your voice data.” Further, Sonos eventually designed its speaker so that, if an internal microphone is turned on, so is a small LED light on the device’s control panel. These two functions cannot be separated—the LED light and the internal microphone are hardwired together. If one receives power, so does the other.

While this function has upset some Sonos users who want to turn off the microphone light, the company hasn’t budged.

A Sonos spokesperson said the company values its customers’ privacy because it understands that people are bringing Sonos products into their homes. Adding a voice assistant to those products, the spokesperson said, resulted in Sonos taking a transparent and plain-spoken approach.

Now compare this approach to Google’s.

Consumers purchased a product that they trusted—quite ironically—with the security of their homes, only to realize that, by purchasing the product itself, their personal lives could have become less secure. This isn’t just a company failing to disclose the truth about its products. It’s a company failing to respect the privacy of its users.

A microphone in a home security product may well be a useful feature that many consumers will not only endure but embrace. In fact, internal microphones are available in many competitor products today, proving their popularity. But a secret microphone installed without user knowledge instantly erodes trust.

As we showed in our recent data privacy report, users care a great deal about protecting their personal information online and take many steps to secure it. To win over their trust, businesses need to responsibly disclose features included in their services and products—especially those that impact the security and privacy of their customers’ lives. Transparency is key to establishing and maintaining trust online.

The post Google’s Nest fiasco harms user trust and invades their privacy appeared first on Malwarebytes Labs.

Explained: Payment Service Directive 2 (PSD2)

Payment Service Directive 2 (PSD2) is the implementation of a European guideline designed to further harmonize money transfers inside the EU. The ultimate goal of this directive is to make payments across borders as easy as transferring money within the same country. Since the EU was set up to diminish the borders between its member states, this makes sense. The implementation offers a legal framework for all payments made within the EU.

After the introduction of PSD in 2009, and with the Single Euro Payments Area (SEPA) migration completed, the EU introduced PSD2 on January 13, 2018. However, this new harmonizing plan came with a catch: the new online payment and account information services provided by third parties, such as financial institutions, require that those parties be able to access the bank accounts of EU users. While they first need to obtain users’ consent to do so, we all know consent is not always given freely or with a full understanding of the implications. Still, it must be noted: Nothing will change if you don’t give your consent, and you are not obliged to do so.

Which providers?

Before these institutions are allowed to ask for consent, they have to be authorized and registered under PSD2. PSD2 sets out the information requirements both for applying for authorization as a payment institution and for registering as an account information services provider (AISP). The European Banking Authority (EBA) has published guidelines on the information to be provided by applicants intending to obtain authorization as payment and electronic money institutions, as well as to register as an AISP.

From the pages of the Dutch National Bank (De Nederlandsche Bank):

“In this register are also (foreign) Account information service providers based upon the European Passport. These Account information service providers are supervised by the home supervisor. Account information service providers from other countries of the European Economic Area (EEA) could issue Account information services based upon the European Passport through an Agent in the Netherlands. DNB registers these agents of foreign Account information service providers without obligation to register. The registration of these agents are an extra service to the public. However the possibility may exist that the registration of incoming agents differs from the registration of the home supervisor.”

So, an AISP can obtain a European Passport to conduct its services across the entire EU, while only being obligated to register in its country of origin. And even though the European Union is supposed to be equal across the board, the reality is, in some countries, it’s easier to worm yourself into a comfortable position than in others.

Access to bank account = more services

Wait a minute. What exactly does all of this mean? Third parties often live under a separate set of rules and are not always subject to the same scrutiny. (Case in point: AISPs can move to register in “easier” countries and get away with much more.) So while that offers an AISP better flexibility to provide smooth transfer services, it would also allow those payment institutions to offer new services based on their view into your bank account. That includes a wealth of information, such as:

  • How much money is coming into and out of the account each month
  • Spending habits: what you spend money on and where you spend it
  • Payment habits: Are you paying bills well ahead of the deadline, or late?

AISPs can check your balance, request your bank to initiate a payment (transfer) on your behalf, or create a comprehensive overview of your balances for you.

Simple example: There is an AISP service that keeps tabs on your payments and income and shows you how much you can spend freely until your next payment is expected to come in. This is useful information to have when you are wondering if you can make your money last until the end of the month if you buy that dress.
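Under the hood, a service like that is simple arithmetic over data the AISP can now read from your account. A minimal sketch in Python (all names and figures are hypothetical):

```python
def safe_to_spend(balance, upcoming_bills):
    """Roughly how much is free to spend before the next expected income:
    the current balance minus everything already committed to bills."""
    reserved = sum(upcoming_bills)
    return max(balance - reserved, 0)

# Example: 800 in the account, with 250 rent and 60 utilities still due.
print(safe_to_spend(800, [250, 60]))  # 490
```

The point is not the arithmetic; it's that the inputs (balance, bills, income schedule) are exactly the data PSD2 access exposes.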

However, imagine this information in the hands of a commercial party that wants to sell you something. They would be able to figure out how much you are spending with their competitors and make you a better offer. Or pepper you with ads tailored to your spending habits. Is that a problem? Yes, because why did you choose your current provider in the first place? Better service or product? Customer friendliness? Exactly what you needed? In short, the competitor might use your information to help themselves, and not necessarily you.

What is worrying about PSD2?

Consumer consent is a good thing. But if we can learn from history, as we should, it will not be too long before consumers are being tricked into clicking a big green button that gives a less trustworthy provider access to their banking information. Maybe they don’t even have to click it themselves. We can imagine Man-in-the-Middle attacks that sign you up for such a service.

Any offer of a service that requires your consent to access banking information should be carefully examined. How will AISPs that work for free make money? Likely by advertising to you or selling your data.

And then there is the possibility for “soft extortion,” like a mortgage provider that doesn’t want to do business with you unless you provide them with access to your banking information, or that will offer you a better deal if you do.

In all of these scenarios, consent was given in one way or another, but is the deal really all that beneficial for the customer?

What we’d like to see

Some of the points below may already be under consideration in some or all of the EU member states, but we think they offer a good framework for the implementation of these new services.

  • We only want AISPs that work for the consumer and not for commercial third parties. Ideally, the consumer pays the AISP for its services, so the abuse or misuse seen in “free” product business models does not take place.
  • AISPs that want to do business in a country should be registered in that country, as well as in other countries where they want to do business.
  • AISPs should be constantly monitored, with the option to revoke their license if they misbehave. Note that GDPR already requires companies to delete data after services have stopped or when consent is withdrawn.
  • Access to banking information should not be used as a requirement for unrelated business models, or be traded for a discount on certain products.
  • GDPR regulations should be applied with extra care in this sensitive area. Some data- and privacy-related bodies have already expressed concerns about the discrepancies between GDPR and PSD2, even though they come from the same source.
  • An obligatory double-check by the AISP, through another medium, that the customer signed up of their own free will, plus a cooling-off period during which they can withdraw the permission.

Would anyone consent to PSD2 access?

For the moment, it’s hard to imagine a reason for allowing another financial institution or other business access to personal banking information. But despite the obvious red flags, it’s possible that people might be convinced with discounts, denials of service, or appealing benefits to give their consent.

And some of our wishes could very well be implemented as some kinks are still being ironed out. The Dutch Data Protection Authority (DPA) has pointed out that there are discrepancies between GDPR and PSD2 and expressed their concern about them. The DPA acknowledges this in their recommendation on the Implementation Act, and most recently in the Implementation Decree.

In both recommendations, the DPA concludes, in essence, that the GDPR has not been taken into consideration adequately in the course of the Dutch implementation of PSD2. The same may happen in other EU member states. Of course, the financial world tells us that licenses will not be issued to just anybody, but the public has not entirely forgotten the global 2008 banking crisis.

On top of that, there are major lawsuits in progress against insurance companies and other companies that sold products constructed in a way the general public could not possibly understand. These products are now considered misleading, and some even fraudulent. To put it mildly, the trust of the European public in financials is not high at the moment.

And we are not just looking at traditional financials.

Did you know that Google has obtained an eMoney license in Lithuania and that Facebook did the same in Ireland?

Are you worried now? All of these concerns have been raised before, and the general consensus is that the regulations are strict enough: PSD2 will be introduced in a way that only allows trustworthy partners, which have been vetted and will be monitored by the authorities.

Nevertheless, you can rest assured that we will keep an eye on this development. When the time comes that PSD2 is introduced to the public, it might also turn out to be a subject that phishers are interested in. We can already imagine the “Thank you for allowing us to access your bank account; click here to revoke permission” email buried in junk mail.

Stay safe, everyone!

The post Explained: Payment Service Directive 2 (PSD2) appeared first on Malwarebytes Labs.

More than Half of Android apps ask for dangerous permissions. Is yours among?

By John Mason

It wasn’t very long ago that I revealed that most free VPN services are provided as a front for the big corporations running them to collect user data. Spurred by the findings of that study, I decided to dig deeper to see how much of a threat, especially when it comes to user data, Android VPN […]

This is a post from HackRead.com Read the original post: More than Half of Android apps ask for dangerous permissions. Is yours among?

Google’s security program has caught issues in 1 million apps in 5 years

Security is a common concern when it comes to smartphones and it has always been especially important for Android. Google has done a lot over the years to change Android’s reputation and improve security. Monthly Android security patches are just one part of the puzzle. Five years ago, the company launched the Application Security Improvement Program. Recently, they shared some of the success they’ve had.

First, a little information on the program. When an app is submitted to the Play Store, it gets scanned to detect a variety of vulnerabilities. If something is found, the app gets flagged and the developer is notified. Diagnosis is provided to help get the app back in good standing. Google doesn’t distribute those apps to Android users until the issues are resolved.

Google likens the process to a doctor performing a routine physical.

Google recently offered an update on its Application Security Improvement Program. First launched five years ago, the program has now helped more than 300,000 developers fix more than 1 million apps on Google Play. In 2018 alone, it resulted in over 30,000 developers fixing over 75,000 apps.

In the same year, Google says it deployed the following six additional security vulnerability classes:

▬ SQL Injection

▬ File-based Cross-Site Scripting

▬ Cross-App Scripting

▬ Leaked Third-Party Credentials

▬ Scheme Hijacking

▬ JavaScript Interface Injection

The list is always growing as Google continues to monitor and improve the capabilities of the program.
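To make the first class on that list concrete, here is the classic SQL injection pattern and its fix, sketched against an in-memory SQLite database (table and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"  # attacker-controlled

# Vulnerable: the input is spliced into the SQL text itself,
# so the injected OR clause matches every row.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % user_input
).fetchall()
print(vulnerable)  # [('s3cret',)] -- leaked despite the bogus name

# Fixed: a parameterized query treats the input purely as data.
fixed = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
print(fixed)  # [] -- no user is literally named "nobody' OR '1'='1"
```

Google's scanner looks for patterns like the vulnerable query above; the parameterized form is the standard remediation.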

Google originally created the Application Security Improvement Program to harden Android apps. The goal was simple: help Android developers build apps without known vulnerabilities, thus improving the overall ecosystem.

Google understands that developers can make mistakes sometimes and they hope to help catch those issues for years to come. Security will continue to be a big talking point as technology evolves. It’s important for users to be able to trust the apps on their phones.

Google Chrome zero-day: Now is the time to update and restart your browser

It’s not often that we hear about a critical vulnerability in Google Chrome, and perhaps it’s even more rare when Google’s own engineers are urging users to patch.

There are several good reasons why you need to take this new Chrome zero-day (CVE-2019-5786) seriously. For starters, we are talking about a full exploitation that escapes the sandbox and leads to remote code execution. This in itself is not an easy feat, and is usually observed only sporadically, perhaps during a Pwn2Own competition. But this time, Google is saying that this vulnerability is actively being used in the wild.

According to Clément Lecigne, the researcher from Google’s Threat Analysis Group who discovered the attack, there is another, yet-to-be-patched zero-day in Microsoft Windows, suggesting the two could be chained together for even greater damage.

If you are running Google Chrome and its version is below 72.0.3626.121, your computer could be exploited without your knowledge. While it’s true that Chrome features an automatic update component, in order for the patch to be installed you must restart your browser.

This may not seem like a big deal but it is. Another Google engineer explains why this matters a lot, in comparison to past exploits:

Considering how many users keep Chrome and all their tabs opened for days or even weeks without ever restarting the browser, the security impact is real.

Some might see a bit of irony with this latest zero-day considering Google’s move to ban third-party software injections. Many security programs, including Malwarebytes, need to hook into processes, such as the browser and common Office applications, in order to detect and block exploits from happening. However, we cannot say for sure whether or not this could prevent the vulnerability from being exploited, since few details have been shared yet.

In the meantime, if you haven’t done so yet, you should update and relaunch Chrome; and don’t worry about your tabs, they will come right back.

The post Google Chrome zero-day: Now is the time to update and restart your browser appeared first on Malwarebytes Labs.

The not-so-definitive guide to cybersecurity and data privacy laws

US cybersecurity and data privacy laws are, to put it lightly, a mess.

Years of piecemeal legislation, Supreme Court decisions, and government surveillance crises, along with repeated corporate failures to protect user data, have created a legal landscape that is, for the American public and American businesses, confusing, complicated, and downright annoying.

Businesses are expected to comply with data privacy laws based on the data’s type. For instance, there’s a law protecting health and medical information, another law protecting information belonging to children, and another law protecting video rental records. (Seriously, there is.) Confusingly, though, some of those laws only apply to certain types of businesses, rather than just certain types of data.

Law enforcement agencies and the intelligence community, on the other hand, are expected to comply with a different framework that sometimes separates data based on “content” and “non-content.” For instance, there’s a law protecting phone call conversations, but another law protects the actual numbers dialed on the keypad.

And even when data appears similar, its protections may differ. GPS location data might, for example, receive a different protection if it is held with a cell phone provider versus whether it was willfully uploaded through an online location “check-in” service or through a fitness app that lets users share jogging routes.

Congress could streamline this disjointed network by passing comprehensive federal data privacy legislation; however, questions remain about regulatory enforcement and whether states’ individual data privacy laws will be either respected or steamrolled in the process.

To better understand the current field, Malwarebytes is launching a limited blog series about data privacy and cybersecurity laws in the United States. We will cover business compliance, sectoral legislation, government surveillance, and upcoming federal legislation.

Below is our first blog in the series. It explores data privacy compliance in the United States today from the perspective of a startup.

A startup’s tale—data privacy laws abound

Every year, countless individuals travel to Silicon Valley to join the 21st century Gold Rush, staking claims not along the coastline, but up and down Sand Hill Road, where striking it rich means bringing in some serious venture capital financing.

But before any fledgling startup can become the next Facebook, Uber, Google, or Airbnb, it must comply with a wide, sometimes-dizzying array of data privacy laws.

Luckily, there are data privacy lawyers to help.

We spoke with D. Reed Freeman Jr., the cybersecurity and privacy practice co-chair at the Washington, D.C.-based law firm Wilmer Cutler Pickering Hale and Dorr about what a hypothetical, data-collecting startup would need to become compliant with current US data privacy laws. What does its roadmap look like?

Our hypothetical startup—let’s call it Spuri.us—is based in San Francisco and focused entirely on a US market. The company developed an app that collects users’ data to improve the app’s performance and, potentially, deliver targeted ads in the future.

This is not an exhaustive list of every data privacy law that a company must consider for data privacy compliance in the US. Instead, it is a snapshot, providing information and answers to potentially some of the most common questions today.

Spuri.us’ online privacy policy

To kick off data privacy compliance on the right foot, Freeman said the startup needs to write and post a clear and truthful privacy policy online, as defined in the 2004 California Online Privacy Protection Act.

The law requires businesses and commercial website operators that collect personally identifiable information to post a clear, easily-accessible privacy policy online. These privacy policies must detail the types of information collected from users, the types of information that may be shared with third parties, the effective date of the privacy policy, and the process—if any—for a user to review and request changes to their collected information.

Privacy policies must also include information about how a company responds to “Do Not Track” requests, which are web browser settings meant to prevent a user from being tracked online. The efficacy of these settings is debated, and Apple recently decommissioned the feature in its Safari browser.

Freeman said companies don’t need to worry about honoring “Do Not Track” requests as much as they should worry about complying with the law.

“It’s okay to say ‘We don’t,’” Freeman said, “but you have to say something.”

The law covers more than what to say in a privacy policy. It also covers how prominently a company must display it. According to the law, privacy policies must be “conspicuously posted” on a website.

More than 10 years ago, Google tried to test that interpretation and later backed down. Following a 2007 New York Times report that revealed that the company’s privacy policy was at least two clicks away from the home page, multiple privacy rights organizations sent a letter to then-CEO Eric Schmidt, urging the company to more proactively comply.

“Google’s reluctance to post a link to its privacy policy on its homepage is alarming,” the letter said, which was signed by the American Civil Liberties Union, Center for Digital Democracy, and Electronic Frontier Foundation. “We urge you to comply with the California Online Privacy Protection Act and the widespread practice for commercial web sites as soon as possible.”

The letter worked. Today, users can click the “Privacy” link on the search giant’s home page.

What About COPPA and HIPAA?

Spuri.us, like any nimble Silicon Valley startup, is ready to pivot. At one point in its growth, it considered becoming a health tracking and fitness app, meaning it would collect users’ heart rates, sleep regimens, water intake, exercise routines, and even their GPS location for selected jogging and cycling routes. Spuri.us also once considered pivoting into mobile gaming, developing an app that isn’t made for children, but could still be downloaded onto children’s devices and played by kids.

Spuri.us’ founder is familiar with at least two federal data privacy laws—the Health Insurance Portability and Accountability Act (HIPAA), which regulates medical information, and the Children’s Online Privacy Protection Act (COPPA), which regulates information belonging to children.

Spuri.us’ founder wants to know: If her company starts collecting health-related information, will it need to comply with HIPAA?

Not so, Freeman said.

“HIPAA, the way it’s laid out, doesn’t cover all medical information,” Freeman said. “That is a common misunderstanding.”

Instead, Freeman said, HIPAA only applies to three types of businesses: health care providers (like doctors, clinics, dentists, and pharmacies), health plans (like health insurance companies and HMOs), and health care clearinghouses (like billing services that process nonstandard health care information).

Without fitting any of those descriptions, Spuri.us doesn’t have to worry about HIPAA compliance.

As for complying with COPPA, Freeman called the law “complicated” and “very hard to comply with.” Attached to a massive omnibus bill at the close of the 1998 legislative session, COPPA is a law that “nobody knew was there until it passed,” Freeman said.

That said, COPPA’s scope is easy to understand.

“Some things are simple,” Freeman said. “You are regulated by Congress and obliged to comply with its byzantine requirements if your website is either directed to children under the age of 13, or you have actual knowledge that you’re collecting information from children under the age of 13.”

That begs the question: What is a website directed to children? According to Freeman, the Federal Trade Commission created a rule that helps answer that question.

“Things like animations on the site, language that looks like it’s geared towards children, a variety of factors that are intuitive are taken into account,” Freeman said.

Other factors include a website’s subject matter, its music, the age of its models, the display of “child-oriented activities,” and the presence of any child celebrities.

Because Spuri.us is not making a child-targeted app, and it does not knowingly collect information from children under the age of 13, it does not have to comply with COPPA.

A quick note on GDPR

No concern about data privacy compliance is complete without bringing up the European Union’s General Data Protection Regulation (GDPR). Passed in 2016 and having taken effect last year, GDPR regulates how companies collect, store, use, and share EU citizens’ personal information online. On the day GDPR took effect, countless Americans received email after email about updated privacy policies, often from companies that were founded in the United States.

Spuri.us’ founder is worried. She might have EU users but she isn’t certain. Do those users force her to become GDPR compliant?

“That’s a common misperception,” Freeman said. He said one section of GDPR explains this topic, which he called “extraterritorial application.” Or, to put it a little more clearly, Freeman said: “If you’re a US company, when does GDPR reach out and grab you?”

GDPR affects companies around the world depending on three factors. First, whether the company is established within the EU, either through employees, offices, or equipment. Second, whether the company directly markets or communicates to EU residents. Third, whether the company monitors the behavior of EU residents.

“Number three is what trips people up,” Freeman said. He said that US websites and apps—including those operated by companies without a physical EU presence—must still comply with GDPR if they specifically track users’ behavior that takes place in the EU.

“If you have an analytics service or network, or pixels on your website, or you drop cookies on EU residents’ machines that tracks their behavior,” that could all count as monitoring the behavior of EU residents, Freeman said.

Because those services are rather common, Freeman said many companies have already found a solution. Rather than dismantling an entire analytics operation, companies can instead capture the IP addresses of users visiting their websites. The companies then perform a reverse geolocation lookup. If the companies find any IP addresses associated with an EU location, they screen out the users behind those addresses to prevent online tracking.
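That screening step can be sketched in a few lines. A real implementation would query a commercial IP-geolocation database; here the lookup is stubbed out with hypothetical data, and unknown IPs are conservatively not tracked:

```python
# EU/EEA country codes (abbreviated list for illustration).
EU_COUNTRIES = {"DE", "FR", "NL", "IE", "ES", "IT", "SE", "PL"}

# Stand-in for a real IP-geolocation database lookup (hypothetical data).
GEO_DB = {
    "203.0.113.7": "US",
    "198.51.100.23": "DE",
}

def should_track(ip):
    """Screen out visitors who appear to be in the EU before
    dropping any analytics cookies or tracking pixels."""
    country = GEO_DB.get(ip)  # unknown IPs return None
    return country is not None and country not in EU_COUNTRIES

print(should_track("203.0.113.7"))    # True  -- US visitor, tracking OK
print(should_track("198.51.100.23"))  # False -- EU visitor, screened out
```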

Asked whether this setup has been proven to protect against GDPR regulators, Freeman instead said that these steps showcase an understanding and a concern for the law. That concern, he said, should hold up against scrutiny.

“If you’re a startup and an EU regulator initiates an investigation, and you show you’ve done everything you can to avoid tracking—that you get it, you know the law—my hope would be that most reasonable regulators would not take a Draconian action against you,” Freeman said. “You’ve done the best you can to avoid the thing that is regulated, which is the track.”

A data breach law for every state

Spuri.us has a clearly-posted privacy policy. It knows about HIPAA and COPPA and it has a plan for GDPR. Everything is going well…until it isn’t.

Spuri.us suffers a data breach.

Depending on which data was taken from Spuri.us and who it referred to, the startup will need to comply with the many requirements laid out in California’s data breach notification law. There are rules on when the law is triggered, what counts as a breach, who to notify, and what to tell them.

The law protects Californians’ “personal information,” which it defines as a combination of information. For instance, a first and last name plus a Social Security number count as personal information. So do a first initial and last name plus a driver’s license number, or a first and last name plus any past medical insurance claims, or medical diagnoses. A Californian’s username and associated password also qualify as “personal information,” according to the law.

The law also defines a breach as any “unauthorized acquisition” of personal information data. So, a rogue threat actor accessing a database? Not a breach. That same threat actor downloading the information from the database? Breach.
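Because the definition is combinatorial, it is natural to express as a check. A rough sketch of the combinations described above (field names are our own, not the statute’s):

```python
def is_personal_information(record):
    """Rough check: does a record (a dict of fields) meet California's
    definition of 'personal information'? Simplified for illustration."""
    has_name = "first_name" in record and "last_name" in record
    has_initial_name = "first_initial" in record and "last_name" in record

    # A name (or initial + last name) combined with a sensitive field.
    sensitive = {"ssn", "drivers_license", "medical_info"}
    if (has_name or has_initial_name) and sensitive & record.keys():
        return True

    # A username plus its password qualifies on its own.
    return "username" in record and "password" in record

print(is_personal_information({"first_name": "A", "last_name": "B", "ssn": "123"}))  # True
print(is_personal_information({"first_name": "A", "last_name": "B"}))                # False
print(is_personal_information({"username": "ab", "password": "pw"}))                 # True
```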

In California, once a company discovers a data breach, it next has to notify the affected individuals. These notifications must include details on which type of personal information was taken, a description of the breach, contact information for the company, and, if the company was actually the source of the breach, an offer for free identity theft prevention services for at least one year.

The law is particularly strict on these notifications to customers and individuals impacted. There are rules on font size and requirements for which subheadings to include in every notice: “What Happened,” “What Information Was Involved,” “What We Are Doing,” “What You Can Do,” and “More Information.”

After Spuri.us sends out its bevy of notices, it could still have a lot more to do.

As of April 2018, every single US state has its own data breach notification law. These laws, which can sometimes overlap, still include important differences, Freeman said.

“Some states require you to notify affected consumers. Some require you to notify the state’s Attorney General,” Freeman said. “Some require you to notify credit bureaus.”

For example, Florida’s law requires that, if more than 1,000 residents are affected, the company must notify all nationwide consumer reporting agencies. Utah’s law, on the other hand, only requires notifications if, after an investigation, the company finds that identity theft or fraud occurred, or likely occurred. And Iowa has one of the few state laws that protects both electronic and paper records.

Of all the data compliance headaches, this one might be the most time-consuming for Spuri.us.

In the meantime, Freeman said, taking a proactive approach—like posting the accurate and truthful privacy policy and being upfront and honest with users about business practices—will put the startup at a clear advantage.

“If they start out knowing those things on the privacy side and just in the USA,” Freeman said, “that’s a great start that puts them ahead of a lot of other startups.”

Stay tuned for our second blog in the series, which will cover the current fight for comprehensive data privacy legislation in the United States.

The post The not-so-definitive guide to cybersecurity and data privacy laws appeared first on Malwarebytes Labs.

Update your Chrome browser now! 0-day actively exploited in the wild

Google has released a new stable version of its Internet surfing software equipped with a patch for a zero-day vulnerability reportedly being exploited in the wild. The flaw can allow an attacker to gain full access to the victim’s machine.

Last month, Clément Lecigne of Google’s Threat Analysis Group revealed that Chrome suffered a “use-after-free” vulnerability (CVE-2019-5786) in the FileReader component of the browser. FileReader is an API that lets web applications asynchronously read the contents of files (or raw data buffers) on the user’s computer, using File or Blob objects to specify the file or data to read. A bad actor leveraging the use-after-free flaw can perform remote code execution attacks.

“Access to bug details and links may be kept restricted until a majority of users are updated with a fix,” Google says in a blog post. “We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.”

The Internet giant says it is aware of reports that an exploit for this vulnerability exists in the wild. The flaw is present in all desktop versions of Chrome (Windows, macOS, Linux).

As Google itself said, the technicalities are being kept under tight wraps until enough people apply the patch, which can be found under the Help menu > About Google Chrome. If you don’t know where that is, just paste chrome://settings/help into your browser’s address bar and hit Enter. At the end of the updating process, your browser should be at version 72.0.3626.121 or higher. Get patching!
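Checking whether an installed version predates the patched release is just a numeric comparison of the dotted components, which Python tuples handle lexicographically:

```python
PATCHED = (72, 0, 3626, 121)  # first Chrome release carrying the fix

def needs_update(version_string):
    """True if the given Chrome version predates the patched release."""
    version = tuple(int(part) for part in version_string.split("."))
    return version < PATCHED

print(needs_update("72.0.3626.119"))  # True  -- still vulnerable
print(needs_update("72.0.3626.121"))  # False -- patched
```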

How to Add SSL & Move WordPress from HTTP to HTTPS

Moving a WordPress website from HTTP to HTTPS should be a priority for any webmaster. Recent statistics show that over 33% of websites across the web run on WordPress, and many of them still have not added an SSL certificate.

Why is Important to Have a WordPress SSL Certificate?

SSL has become increasingly important in the past couple of years, not only for securely transmitting information to and from your website, but also to increase visibility and lower the chances of being penalized by website authorities.

Continue reading How to Add SSL & Move WordPress from HTTP to HTTPS at Sucuri Blog.

Hacked Website Trend Report – 2018

We are proud to be releasing our latest Hacked Website Trend Report for 2018.

This report is based on data collected and analyzed by the GoDaddy Security / Sucuri team, which includes the Incident Response Team (IRT) and the Malware Research Team (MRT).

The data presented is based on the analysis of 25,168 cleanup requests and summarizes the latest trends by bad actors. We’ve built this analysis from prior reports to identify the latest tactics, techniques, and procedures (TTPs) detected by our Remediation Group.

Continue reading Hacked Website Trend Report – 2018 at Sucuri Blog.

Spectre, Google, and the Universal Read Gadget

Spectre, a seemingly never ending menace to processors, is back in the limelight once again thanks to the Universal Read Gadget. First seen at the start of 2018, Spectre emerged alongside Meltdown as a major potential threat to people’s system security.

Meltdown and Spectre

Meltdown targeted Intel processors and required a malicious process running on the system to interact with it. Spectre could be launched from browsers via a script. Because these threats target hardware flaws in the CPU, they were difficult to address, requiring BIOS and microcode updates among other mitigations to ensure a safe online experience. As per our original blog:

The core issue stems from a design flaw that allows attackers access to memory contents from any device, be it desktop, smart phone, or cloud server, exposing passwords and other sensitive data. The flaw in question is tied to what is called speculative execution, which happens when a processor guesses the next operations to perform based on previously cached iterations.

The Meltdown variant only impacts Intel CPUs, whereas the second set of Spectre variants impacts all vendors of CPUs with support of speculative execution. This includes most CPUs produced during the last 15 years from Intel, AMD, ARM, and IBM.

This is not a great situation for everyone to suddenly find themselves in. Manufacturers were caught on the back foot, and customers rightly demanded a solution.

If this is the part where you’re thinking, “What caused this again?” then you’re in luck.

Speculative patching woes

The issues came from something called “speculative execution.” As we said in this follow up blog about patching difficulties:

Speculative execution is an effective optimization technique used by most modern processors to determine where code is likely to go next. Hence, when it encounters a conditional branch instruction, the processor makes a guess for which branch might be executed based on the previous branches’ processing history. It then speculatively executes instructions until the original condition is known to be true or false. If the latter, the pending instructions are abandoned, and the processor reloads its state based on what it determines to be the correct execution path.

The issue with this behaviour and the way it’s currently implemented in numerous chips is that when the processor makes a wrong guess, it has already speculatively executed a few instructions. These are saved in cache, even if they are from the invalid branch. Spectre and Meltdown take advantage of this situation by comparing the loading time of two variables, determining if one has been loaded during the speculative execution, and deducing its value.
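The loading-time comparison in that last sentence can be illustrated with a toy simulation. A real Spectre attack times actual memory loads on the CPU cache; here the cache is just a Python set, which keeps the logic visible (this is a simplified model, not working exploit code):

```python
SECRET = 42
cache = set()  # the set of "addresses" currently sitting in cache

def speculative_gadget():
    """Models the misprediction: the CPU transiently reads a secret and
    uses it as an array index, leaving a footprint in the cache even
    though the architectural result is later thrown away."""
    cache.add(SECRET)  # i.e., array2[SECRET * stride] gets cached

def probe():
    """The attacker 'times' a load at every possible index; the one that
    comes back fast (already cached) reveals the secret value."""
    for guess in range(256):
        if guess in cache:  # stands in for "this load was fast"
            return guess
    return None

speculative_gadget()
print(probe())  # 42 -- the secret, recovered purely from cache state
```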

Four variants existed across Spectre and Meltdown, with Intel, IBM, ARM, and AMD being snagged by Spectre and “just” Intel being caught up by Meltdown.

The vulnerabilities impacting CPUs (central processing units) made it a tricky thing to fix. Software alterations could cause performance snags, and hardware fixes could be even more complicated. A working group was formed to try and thrash out the incredibly complicated details of how this issue would be tackled.

In January 2018, researchers stressed the only real way to solve Spectre was redesigning computer hardware from the ground up. This is no easy task. Replace everything, or suffer the possible performance hit from any software fixes. Fairly complex patching nightmares abound, with operating systems, pre/post Skylake CPUs, and more needing tweaks or wholesale changes.

Additional complications

It wasn’t long before scams started capitalising on the rush to patch. Now people suddenly had to deal with unrelated fakes, malware, and phishes on top of actual Meltdown/Spectre threats.

Alongside the previously mentioned scams, fake websites started to pop up, too. Typically, they claimed to be official government portals or plain old download sites offering up a fix. They might also use SSL, because displaying a padlock is now a common phishing trick. That padlock offers a false sense of security: it only means the data in transit is encrypted, not that the site is safe. Beyond that, you’re on your own.

The site in our example offered up a ZIP file. Contained within was SmokeLoader, malware well known for attempting to grab additional malicious downloads.


Eventually, the furore died down and people slowly forgot about Spectre. It’d pop up again in occasional news articles, but for the most part, people treated it as out of sight, out of mind.

Which brings us to last week’s news.

Spectre: What happened now?

What happened now is a reiteration of the “it’s not safe yet” message. The threat is mostly the same, and a lot of people may not need to worry about this. However, as The Register notes, the problem hasn’t gone away and some developers will need to keep it in mind.

Google has released a paper titled, unsurprisingly enough, “Spectre is here to stay: An analysis of side-channels and speculative execution.”

The Google paper

First things first: it’s complicated, and you can read the full paper [PDF] here.

There are a lot of moving parts to this, and frankly, nobody should be expected to understand everything in it unless they work in or around this area in some capacity. Some of this has already been mentioned, but that was about 700 words ago, so a short recap may be handy:

  1. Side channels are bad. Your computer may be doing a bunch of secure tasks, keeping your data safe. All those bits and pieces of hardware, however, are doing all sorts of things to make those secure processes happen. Side channel attacks come at the otherwise secure data from another angle, in the realm of the mechanical. Sound, power consumption, timing between events, electromagnetic leaks, cameras, and more. All of these provide a means for a clever attacker to exploit this leaky side channel and grab data you’d rather they didn’t.
  2. In Spectre’s case, they do this by exploiting speculative execution. Modern processors make extensive use of speculative execution: it improves performance by guessing what a program will do next, executing ahead, and abandoning the work if the guess turns out to be wrong. When the guess is right, the precomputed results are kept and everything gets a nice speed boost. Those speculatively executed possibilities are where Spectre comes in.
  3. As the paper says, “computations that should never have happened…allow for information to be leaked” via Spectre. It allows the attacker to inject “dangerously speculative behaviour” into trusted code, or untrusted code typically subjected to safety checks. Both are done through triggering “ordinarily impossible computations” through specific manipulations of the processor’s shared micro-architectural states.

Everything is a trade-off between speed and security, and here security lost out. Manufacturers realised too late just how hefty the security price of that trade-off was once Spectre arrived on the scene. Assuming bad actors couldn’t tamper with speculative execution (or worse, not considering the possibility in the first place) has turned out to be a bit of a disaster.

The paper goes on to list that Intel, ARM, AMD, MIPS, IBM, and Oracle have all reported being affected. It’s also clear that:

Our paper shows these leaks are not only design flaws, but are in fact foundational, at the very base of theoretical computation.

This isn’t great. Nor is the fact that they estimate it’s probably more widely distributed than any security flaw in history, affecting “billions of CPUs in production across all device classes.”

Spectre: no exorcism due

The research paper asserts that Spectre is going to be around for a long time. Software-based mitigations will never quite remove the issue; they ward off the threat at a performance cost, and stacking on more layers of defence can become enough of a drag that they stop being worth it.

The fixes end up being a mixed bag of trade-offs and performance hits, and Spectre is so variable and evasive that it quickly becomes impossible to pin down a 100 percent satisfactory solution. At this point, Google’s “Universal Read Gadget” wades in and makes everything worse.

What is the Universal Read Gadget?

A way to read data without permission that is, for all intents and purposes, unstoppable. By combining multiple vulnerabilities in the language runtimes executing on the CPU, untrusted code can construct such a read gadget, and that construction is the real meat of Google’s research. Nobody is going to ditch speculative execution anytime soon, and nobody is going to magically solve the side-channel issue, much less something like a Universal Read Gadget.

As the paper states,

We now believe that speculative vulnerabilities on today’s hardware defeat all language-enforced confidentiality with no known comprehensive software mitigations…as we have discovered that untrusted code can construct a universal read gadget to read all memory in the same address space through side-channels.

On the other hand, it’s clear we shouldn’t start panicking. It sounds bad, and it is bad, but it’s unlikely anyone is exploiting you using these techniques. Of course, unlikely doesn’t mean unfeasible, and this is why hardware and software organisations continue to wrestle with this particular genie.

The research paper stresses that the URG is very difficult to pull off.

The universal read gadget is not necessarily a straightforward construction. It requires detailed knowledge of the μ-architectural characteristics of the CPU and knowledge of the language implementation, whether that be a static compiler or a virtual machine. Additionally, the gadget might have particularly unusual performance and concurrency characteristics.

Numerous scenarios require different approaches, and the paper lists multiple instances where the gadget will potentially fail. In short, nobody is going to come along and Universal Read Gadget your computer; for now, much of this is at the theoretical stage. That doesn’t mean tech giants are becoming complacent, however, and hardware and software organisations have a long road ahead to finally lay this spectre to rest.

The post Spectre, Google, and the Universal Read Gadget appeared first on Malwarebytes Labs.

Cyber Security Week in Review (March 1)


Welcome to this week's Cyber Security Week in Review, where Cisco Talos runs down all of the news we think you need to know in the security world. For more news delivered to your inbox every week, sign up for our Threat Source newsletter here.

Top headlines this week


  • Drupal patched a “highly critical” vulnerability that attackers exploited to deliver cryptocurrency miners and other malware. Some field types in the content management system did not properly sanitize data from non-form sources, which allowed an attacker to execute arbitrary PHP code. Users need to update to the latest version of Drupal to patch the bug. Snort rule 49257 also protects users from this vulnerability.
  • Cryptocurrency mining tool Coinhive says it’s shutting down, but not due to malicious use. Attackers have exploited the tool for months as part of malware campaigns, stealing computing power from users to mine cryptocurrencies. However, the company behind the miner says it’s shutting down because it’s no longer economically viable to run. Snort rules 44692, 44693, 45949 - 45952, 46365 - 46367, 46393, 46394 and 47253 can protect you against the use of Coinhive. 
  • Several popular apps unknowingly share users’ personal information with Facebook. In many cases, this can include sensitive personal data, such as menstruation cycles, heart rates, and recent home purchases. The data is sent to Facebook even if the user doesn’t have a Facebook profile. 

From Talos


  • Attackers are increasingly going after unsecured Elasticsearch clusters. These attackers are targeting clusters using versions 1.4.2 and lower, and are leveraging old vulnerabilities to pass scripts to search queries and drop the attacker's payloads. These scripts are being leveraged to drop both malware and cryptocurrency miners on victim machines.
  • The latest Beers with Talos podcast covers the importance of privacy. Special guest Michelle Dennedy, Cisco’s chief privacy officer, talks about recent initiatives the company is taking on and how other organizations can do better. 

Vulnerability roundup


  • A flaw in the Ring doorbell could allow an attacker to spy on users’ homes and even inject falsified video. The vulnerability could open the door for a man-in-the-middle attack against the smart doorbell app, since the sound and video recorded by the doorbell are transmitted in plaintext. 
  • Cisco disclosed multiple vulnerabilities in a variety of its products, including severe bugs in routers. The company urged users of its firewall routers and VPN to patch immediately Thursday, warning against a remote code execution vulnerability. There’s also a certificate validation vulnerability in Cisco Prime Infrastructure that could allow an attacker to perform a man-in-the-middle attack against the SSL tunnel between Cisco’s Identity Service Engine and Prime Infrastructure. Snort rule 49240 protects users from the Prime Infrastructure vulnerability. 
  • New flaws in 4G and 5G could allow attackers to track users’ location and intercept phone calls. A new research paper discloses what is believed to be the first vulnerabilities that affect both broadband technologies. 

The rest of the news


  • Cisco Duo recently launched a new service, CRXcavator, that scans Google Chrome extensions. It scans the Chrome Web Store and delivers reports on each extension based on the permissions it requests and the potential use of those permissions. 
  • Google is under fire for allegedly forgetting to inform users of a microphone inside of its Nest smart hub. While the company says it was never supposed to be a secret, users, security researchers and even politicians now are questioning why the microphone was installed in the first place. 
    • Talos Take: "To be clear, because some news outlets have reported this microphone as being present in the Nest THERMOSTAT.  It is NOT present in the thermostat, it’s present in the Smart Hub, which is the centerpiece of their home security solution," Joel Esler, senior manager, Communities Division.


Will pay-for-privacy be the new normal?

Privacy is a human right, and online privacy should be no exception.

Yet, as the US considers new laws to protect individuals’ online data, at least two proposals—one statewide law that can still be amended and one federal draft bill that has yet to be introduced—include an unwelcome bargain: exchanging money for privacy.

This framework, sometimes called “pay-for-privacy,” is plain wrong. It casts privacy as a commodity that individuals with the means can easily purchase. But a move in this direction could further deepen the separation between socioeconomic classes. The “haves” can operate online free from prying eyes. But the “have nots” must forfeit that right.

Though this framework has been used by at least one major telecommunications company before, and there are no laws preventing its practice today, those in cybersecurity and the broader technology industry must put a stop to it. Before pay-for-privacy becomes law, privacy as a right should become industry practice.

Data privacy laws prove popular, but flawed

Last year, the European Union put into effect one of the most sweeping set of data privacy laws in the world. The General Data Protection Regulation, or GDPR, regulates how companies collect, store, share, and use EU citizens’ data. The law has inspired countries everywhere to follow suit, with Italy (an EU member) issuing regulatory fines against Facebook, Brazil passing a new data-protective bill, and Chile amending its constitution to include data protection rights.

The US is no exception to this ripple effect.

In the past year, Senators Ron Wyden of Oregon, Marco Rubio of Florida, Amy Klobuchar of Minnesota, and Brian Schatz of Hawaii, joined by 14 other senators as co-sponsors, proposed separate federal bills to regulate how companies collect, use, and protect Americans’ data.

Sen. Rubio’s bill asks the Federal Trade Commission to write its own set of rules, which Congress would then vote on two years later. Sen. Klobuchar’s bill would require companies to write clear terms of service agreements and to send users notifications about privacy violations within 72 hours. Sen. Schatz’s bill introduces the idea that companies have a “duty to care” for consumers’ data by providing a “reasonable” level of security.

But it is Sen. Wyden’s bill, the Consumer Data Protection Act, that stands out, and not for good reason. Hidden among several privacy-forward provisions, like stronger enforcement authority for the FTC and mandatory privacy reports for companies of a certain size, is a dangerous pay-for-privacy stipulation.

According to the Consumer Data Protection Act, companies that require user consent for their services could charge users a fee if those users have opted out of online tracking.

If passed, here’s how the Consumer Data Protection Act would work:

Say a user, Alice, no longer feels comfortable having companies collect, share, and sell her personal information to third parties for the purpose of targeted ads and increased corporate revenue. First, Alice would register with the Federal Trade Commission’s “Do Not Track” website, where she would choose to opt-out of online tracking. Then, online companies with which Alice interacts would be required to check Alice’s “Do Not Track” status.

If a company sees that Alice has opted out of online tracking, that company is barred from sharing her information with third parties and from following her online to build and sell a profile of her Internet activity. Companies that are run almost entirely on user data—including Facebook, Amazon, Google, Uber, Fitbit, Spotify, and Tinder—would need to heed users’ individual decisions. However, those same companies could present Alice with a difficult choice: She can continue to use their services, free of online tracking, so long as she pays a price.

This represents a literal price for privacy.
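The workflow Alice faces can be sketched as decision logic. This is a hypothetical illustration of the bill’s pay-for-privacy provision as described above; the registry, fee amount, and function names are all assumptions, since the bill defines no such API:

```python
# Hypothetical sketch of the opt-out workflow described for the Consumer
# Data Protection Act. The registry contents, fee, and names are
# illustrative assumptions, not anything specified in the bill itself.

DO_NOT_TRACK_REGISTRY = {"alice@example.com"}  # users opted out via the FTC site

def handle_user(user_id, monthly_fee=4.99):
    """Return what a consent-requiring company may do for this user."""
    if user_id not in DO_NOT_TRACK_REGISTRY:
        # Not opted out: the company may track, share, and sell as before.
        return {"tracking": True, "fee": 0.0}
    # Opted-out users may not be tracked or profiled, but under the
    # pay-for-privacy provision the company may charge them a fee instead.
    return {"tracking": False, "fee": monthly_fee}

print(handle_user("alice@example.com"))  # {'tracking': False, 'fee': 4.99}
print(handle_user("bob@example.com"))    # {'tracking': True, 'fee': 0.0}
```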

Electronic Frontier Foundation Senior Staff Attorney Adam Schwartz said his organization strongly opposes pay-for-privacy systems.

“People should be able to not just opt out, but not be opted in, to corporate surveillance,” Schwartz said. “Also, when they choose to maintain their privacy, they shouldn’t have to pay a higher price.”

Pay-for-privacy schemes can come in two varieties: individuals can be asked to pay more for more privacy, or they can pay a lower (discounted) amount and be given less privacy. Both options, Schwartz said, incentivize people not to exercise their privacy rights, either because the cost is too high or because the monetary gain is too appealing.

Both options also harm low-income communities, Schwartz said.

“Poor people are more likely to be coerced into giving up their privacy because they need the money,” Schwartz said. “We could be heading into a world of the ‘privacy-haves’ and ‘have-nots’ that conforms to current economic statuses. It’s hard enough for low-income individuals to live in California with its high cost-of-living. This would only further aggravate the quality of life.”

Unfortunately, a pay-for-privacy provision is also included in the California Consumer Privacy Act, which the state passed last year. Though the law includes a “non-discrimination” clause meant to prevent just this type of practice, it also includes an exemption that allows companies to provide users with “incentives” to still collect and sell personal information.

In a larger blog about ways to improve the law, which was then a bill, Schwartz and other EFF attorneys wrote:

“For example, if a service costs money, and a user of this service refuses to consent to collection and sale of their data, then the service may charge them more than it charges users that do consent.”

Real-world applications

The alarm over pay-for-privacy isn’t theoretical: the model has been implemented in the past, and there is no law stopping companies from doing it again.

In 2015, AT&T offered broadband service for a $30-a-month discount if users agreed to have their Internet activity tracked. According to AT&T’s own words, that Internet activity included the “webpages you visit, the time you spend on each, the links or ads you see and follow, and the search terms you enter.”

Paying for privacy isn’t always so obvious, with real dollars coming out of or going into a user’s wallet or checking account. More often, it happens behind the scenes, and it isn’t the user getting richer; it’s the companies.

Powered by mountains of user data for targeted ads, Google-parent Alphabet recorded $32.6 billion in advertising revenue in the last quarter of 2018 alone. In the same quarter, Twitter recorded $791 million in ad revenue. And, notable for its CEO’s insistence that the company does not sell user data, Facebook’s prior plans to do just that were revealed in documents posted this week. Signing up for these services may be “free,” but that’s only because the product isn’t the platform—it’s the user.

A handful of companies currently reject this approach, though, refusing to sell or monetize users’ private information.

In 2014, CREDO Mobile separated itself from AT&T by promising users that their privacy “is not for sale. Period.” (The company does admit in its privacy policy that it may “sell or trade mailing lists” containing users’ names and street addresses, though.) ProtonMail, an encrypted email service, positions itself as a foil to Gmail because it does not advertise on its site, and it promises that users’ encrypted emails will never be scanned, accessed, or read. In fact, the company claims it can’t access these emails even if it wanted to.

As for Google’s very first product, online search, the clearest privacy alternative is DuckDuckGo. The privacy-focused service does not track users’ searches, and it does not build individualized profiles of its users to deliver unique results.

Even without monetizing users’ data, DuckDuckGo has been profitable since 2014, said community manager Daniel Davis.

“At DuckDuckGo, we’ve been able to do this with ads based on context (individual search queries) rather than personalization.”

Davis said that DuckDuckGo’s decisions are steered by a long-held belief that privacy is a fundamental right. “When it comes to the online world,” Davis said, “things should be no different, and privacy by default should be the norm.”

It is time other companies follow suit, Davis said.

“Control of one’s own data should not come at a price, so it’s essential that [the] industry works harder to develop business models that don’t make privacy a luxury,” Davis said. “We’re proof this is possible.”

Hopefully, other companies are listening, because it shouldn’t matter whether pay-for-privacy is codified into law—it should never be accepted as an industry practice.

The post Will pay-for-privacy be the new normal? appeared first on Malwarebytes Labs.

Phishing Campaign Uses Fake Google reCAPTCHA to Distribute Malware

A recent phishing campaign used a fake Google reCAPTCHA as part of its efforts to target Polish bank employees with malware.

Sucuri researchers discovered that the campaign sent out malicious emails masquerading as a confirmation for a recent transaction. Digital attackers deployed this disguise in the hopes that employees at the targeted bank would click on a link to a malicious PHP file out of alarm. That file was responsible for loading a fake 404 error page for visitors that had specifically defined user-agents.

If a visitor passed through the user-agent filter, the PHP code loaded a fake Google reCAPTCHA. This fake used static HTML and JavaScript, so it was not capable of rotating the individual images used in each authentication test, and it did not support audio replay.

At that point, the PHP code checked the victim’s browser user-agent to determine what payload it should deliver. If it found the victim was using an Android device, the attack would load a malicious APK file capable of intercepting two-factor authentication (2FA) codes. Otherwise, it would download a malicious ZIP archive.
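The two-stage filtering described above can be sketched as decision logic. The real campaign used PHP; this Python sketch is a schematic reconstruction only, and the filter strings and file names are illustrative assumptions, not the actual attack code:

```python
# Schematic reconstruction of the payload-selection logic described above.
# The campaign itself was written in PHP; the user-agent substrings and
# payload names here are assumptions for illustration, not the real code.

BLOCKED_AGENTS = ("googlebot", "curl", "wget")  # scanners get the fake 404

def choose_response(user_agent):
    ua = user_agent.lower()
    if any(bot in ua for bot in BLOCKED_AGENTS):
        return "404.html"        # fake error page for filtered user-agents
    if "android" in ua:
        return "payload.apk"     # APK that intercepts 2FA codes
    return "payload.zip"         # malicious ZIP archive for everyone else

print(choose_response("Mozilla/5.0 (Linux; Android 9; SM-G960F)"))  # payload.apk
print(choose_response("curl/7.64.0"))                               # 404.html
```

Serving different content to scanners, mobile devices, and desktops in this way is a common cloaking technique, which is why the campaign was harder for automated crawlers to flag.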

A History of Abusing and Bypassing CAPTCHAs

This isn’t the first time threat actors have incorporated CAPTCHAs into their attack campaigns. Back in 2016, researchers at the University of Connecticut and Bar Ilan University identified a malicious attack in which threat actors could trick users into divulging some of their personal information by completing a fake CAPTCHA. In February 2018, My Online Security observed a campaign that used an image pretending to be a Google reCAPTCHA to download a malicious ZIP file.

Malefactors have also tried to bypass legitimate CAPTCHAs for the purpose of conducting attack campaigns. All the way back in 2009, for example, IT World reported on a worm named Gaptcha that circumvented Gmail’s authentication feature to create new dummy accounts from which to send spam mail. More recently, BullGuard discovered some survey scams using CAPTCHAs to make their ploys more believable.

Defending Against Fake reCAPTCHA Phishing Campaigns

Security professionals can help protect their organizations from fake reCAPTCHA-wielding phishing campaigns by taking an ahead-of-threat approach to detection. Companies should also reject SMS-based 2FA schemes in favor of more practical and convenient multifactor authentication (MFA) deployments that fit into a context-based access strategy.

The post Phishing Campaign Uses Fake Google reCAPTCHA to Distribute Malware appeared first on Security Intelligence.

Google Analytics and Angular in Magento Credit Card Stealing Scripts


Over the last few months, we’ve noticed several credit card-stealing scripts that use variations of the Google Analytics name to make them look less suspicious and evade detection by website owners.

The malicious code is obfuscated and injected into legitimate JS files, such as skin/frontend/default/theme122k/js/jquery.jscrollpane.min.js, js/meigee/jquery.min.js, and js/varien/js.js.

The obfuscated code loads another script from www.google-analytics[.]cm/analytics.js.

Continue reading Google Analytics and Angular in Magento Credit Card Stealing Scripts at Sucuri Blog.

Max Schrems: lawyer, regulator, international man of privacy

Almost one decade ago, disparate efforts began in the European Union to change the way the world thinks about online privacy.

One effort focused on legislation, pulling together lawmakers from 28 member-states to discuss, draft, and deploy a sweeping set of provisions that, today, has altered how almost every single international company handles users’ personal information. The finalized law of that effort—the General Data Protection Regulation (GDPR)—aims to protect the names, addresses, locations, credit card numbers, IP addresses, and even, depending on context, hair color, of EU citizens, whether they’re customers, employees, or employers of global organizations.

The second effort focused on litigation and public activism, sparking a movement that has raised nearly half a million dollars to fund consumer-focused lawsuits meant to uphold the privacy rights of EU citizens, and that has resulted in the successful dismantling of a 15-year-old intercontinental data-transfer agreement for its failure to protect EU citizens’ personal data. The 2015 ruling sent shockwaves through the security world, and forced companies everywhere to scramble to comply with a regulatory system thrown into flux.

The law was passed. The movement is working. And while countless individuals launched investigations, filed lawsuits, participated in years-long negotiations, published recommendations, proposed regulations, and secured parliamentary approval, we can trace these disparate yet related efforts back to one man—Maximilian Schrems.

Remarkably, as the two efforts progressed separately, they began to inform one another. Today, they work in tandem to protect online privacy. And businesses around the world have taken notice.

The impact of GDPR today

A Portuguese hospital, a German online chat platform, and a Canadian political consultancy all face GDPR-related fines issued last year. In January, France’s National Data Protection Commission (CNIL) hit Google with a 50-million-euros penalty—the largest GDPR fine to date—after an investigation found a “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization.”

The investigation began, CNIL said, after it received legal complaints from two groups: the nonprofit La Quadrature du Net and the non-governmental organization None of Your Business. None of Your Business, or noyb for short, counts Schrems as its honorary director. In fact, he helped crowdfund its launch last year.

Outside the European Union, lawmakers are watching these one-two punches as a source of inspiration.

When testifying before Congress about a scandal involving misused personal data, the 2016 US presidential election, and a global disinformation campaign, Facebook CEO Mark Zuckerberg repeatedly heard calls to regulate his company and its data-mining operations.

“The question is no longer whether we need a federal law to protect consumers’ privacy,” said Republican Senator John Thune of South Dakota. “The question is what shape will that law take.”

Democratic Senator Mark Warner of Virginia put it differently: “The era of the Wild West in social media is coming to an end.”

A new sheriff comes to town

In 2011, Schrems was a 23-year-old law student from Vienna, Austria, visiting the US to study abroad. He enrolled in a privacy seminar at the Santa Clara University School of Law where, along with roughly 22 other students, he learned about online privacy law from one of the field’s notable titans.

Professor Dorothy Glancy practiced privacy law before it had anything to do with the Internet, cell phones, or Facebook. Instead, she navigated the world of government surveillance, wiretaps, and domestic spying. She served as privacy counsel to one of the many subcommittees that investigated the Watergate conspiracy.

Later, still working for the subcommittee, she examined the number of federal agency databases that contained people’s personally identifiable information. She then helped draft the Privacy Act of 1974, which restricted how federal agencies collected, used, and shared that information. It is one of the first US federal privacy laws.

The concept of privacy has evolved since those earlier days, Glancy said. It is no longer solely about privacy from the government. It is also about privacy from corporations.

“Over time, it’s clear that what was, in the 70s, a privacy problem in regards to Big Brother and the federal government, has now gotten so that a lot of these issues have to do with the private [non-governmental] collection of information on people,” Glancy said.

In 2011, one of the biggest private, non-governmental collectors of that information was Facebook. So, when Glancy’s class received a guest presentation from Facebook privacy lawyer Ed Palmieri, Schrems paid close attention, and he didn’t like what he heard.

For starters, Facebook simply refused to heed Europe’s data privacy laws.

Speaking to 60 Minutes, Schrems said: “It was obviously the case that ignoring European privacy laws was the much cheaper option. The maximum penalty, for example, in Austria, was 20,000 euros. So, just a lawyer telling you how to comply with the law was more expensive than breaking it.”

Further, according to Glancy, Palmieri’s presentation showed that Facebook had “absolutely no understanding” about the relationship between an individual’s privacy and their personal information. This blind spot concerned Schrems to no end. (Palmieri could not be reached for comment.)

“There was no understanding at all about what privacy is in the sense of the relationship to personal information, or to human rights issues,” Glancy said. “Max couldn’t quite believe it. He didn’t quite believe that Facebook just didn’t understand.”

So Schrems investigated. (Schrems did not respond to multiple interview requests and he did not respond to an interview request forwarded by his colleagues at Noyb.)

Upon returning to Austria, Schrems decided to figure out just how much information Facebook had on him. The answer was astonishing: Facebook sent Schrems a 1,200-page PDF that detailed his location history, his contact information, information about past events he attended, and his private Facebook messages, including some he thought he had deleted.

Shocked, Schrems started a privacy advocacy group called “Europe v. Facebook” and uploaded redacted versions of his own documents onto the group’s website. The revelations touched a public nerve—roughly 40,000 Europeans soon asked Facebook for their own personal dossiers.

Schrems then went legal. With Facebook’s international headquarters in Ireland, he filed 22 complaints with Ireland’s Data Protection Commissioner, alleging that Facebook was violating EU data privacy law. Among the allegations: Facebook didn’t really “delete” posts that users chose to delete, Facebook’s privacy policy was too vague and unclear to constitute meaningful consent by users, and Facebook engaged in illegal “excessive processing” of user data.

The Irish Data Protection Commissioner rolled Schrems’ complaints into an already-running audit into Facebook, and, in December 2011, released non-binding guidance for the company. Facebook’s lawyers also met with Schrems in Vienna for six hours in February 2012.

And then, according to Schrems’ website, only silence and inaction from both Facebook and the Irish Data Protection Commissioner’s Office followed. There were no meaningful changes from the company. And no stronger enforcement from the government.

Frustrating as it may have been, Schrems kept pressing. Luckily, according to Glancy, he was just the right man for the job.

“He is innately curious,” Glancy said. “Once he sees something that doesn’t quite seem right, he follows it up to the very end.”

Safe Harbor? More like safety not guaranteed

On June 5, 2013, multiple newspapers exposed two massive surveillance programs in use by the US National Security Agency. One program, then called PRISM (now called Downstream), implicated some of the world’s largest technology companies, including Facebook.

Schrems responded by doing what he did best: He filed yet another complaint against Facebook—his 23rd—with the Irish Data Protection Commissioner. Facebook Ireland, Schrems claimed, was moving his data to Facebook Inc. in the US, where, according to The Guardian, the NSA enjoyed “mass access” to user data. Though Facebook and other companies denied their participation, Schrems doubted the accuracy of these statements.

“There is probable cause to believe that ‘Facebook Inc’ is granting the NSA mass access to its servers that goes beyond merely individual requests based on probable cause,” Schrems wrote in his complaint. “The statements by ‘Facebook Inc’ are in light of the US laws not credible, because ‘Facebook Inc’ is bound by so-called ‘gag orders.’”

Schrems argued that, when his data left EU borders, EU law required that it receive an “adequate level of protection.” Mass surveillance, he said, violated that.

The Irish Data Protection Commissioner disagreed. The described EU-to-US data transfer was entirely legal, the Commissioner said, because of Safe Harbor, a data privacy carve-out approved much earlier.

In 1995, the EU adopted the Data Protection Directive, which, up until 2018, regulated the treatment of EU citizens’ personal data. In 2000, the European Commission approved an exception to the law: US companies could agree to a set of seven principles, called the Safe Harbor Privacy Principles, to allow for data transfer from the EU to the US. This self-certifying framework proved wildly popular. For 15 years, nearly every single company that moved data from the EU to the US relied, at least briefly, on Safe Harbor.

Unsatisfied, Schrems asked the Irish High Court to review the Data Protection Commissioner’s inaction. In October 2013, the court agreed. Schrems celebrated, calling out the Commissioner’s earlier decision.

“The [Data Protection Commissioner] simply wanted to get this hot potato off his table instead of doing his job,” Schrems said in a statement at the time. “But when it comes to the fundamental rights of millions of users and the biggest surveillance scandal in years, he will have to take responsibility and do something about it.”

Less than one year later, the Irish High Court came back with its decision—the Court of Justice of the European Union would need to review Safe Harbor.

On March 24, 2015, the Court heard oral arguments for both sides. Schrems’ legal team argued that Safe Harbor did not provide adequate protection for EU citizens’ data. The European Commission, defending the Irish DPC’s previous decision, argued the opposite.

When asked by the Court how EU citizens might best protect themselves from the NSA’s mass surveillance, the lawyer arguing in favor of Safe Harbor made a startling admission:

“You might consider closing your Facebook account, if you have one,” said Bernhard Schima, advocate for the European Commission, all but admitting that Safe Harbor could not protect EU citizens from overseas spying. When asked more directly if Safe Harbor provided adequate protection of EU citizens’ data, the European Commission’s legal team could not guarantee it.

On September 23, 2015, the Court’s advocate general issued his initial opinion—Safe Harbor, in light of the NSA’s mass surveillance programs, was invalid.

“Such mass, indiscriminate surveillance is inherently disproportionate and constitutes an unwarranted interference with the rights [to respect for private and family life and protection of personal data],” the opinion said.

Less than two weeks later, the entire Court of Justice agreed.

Ever a lawyer, Schrems responded to the decision with a 5,500-word blog post (assigned a non-commercial Creative Commons public copyright license) exploring current data privacy law, Safe Harbor alternatives, company privacy policies, a potential Safe Harbor 2.0, and mass surveillance. Written with “limited time,” Schrems thanked readers for pointing out typos.

The General Data Protection Regulation

Before the Court of Justice struck down Safe Harbor, before Edward Snowden shed light on the NSA’s mass surveillance, before Schrems received a 1,200-page PDF documenting his digital life, and before that fateful guest presentation in professor Glancy’s privacy seminar at Santa Clara University School of Law, a separate plan was already under way to change data privacy.

In November 2010, the European Commission, which proposes legislation for the European Union, considered a new policy with a clear goal and equally clear title: “A comprehensive approach on personal data protection in the European Union.”

Many years later, it became GDPR.

During those years, the negotiating committees looked to Schrems’ lawsuits as highly informative, Glancy said, because Schrems had successfully proven the relationship between the European Charter of Fundamental Human Rights and its application to EU data privacy law. Ignoring that expertise would be foolish.

“Max [Schrems] was a part of just about all the committees working on [GDPR]. His litigation was part of what motivated the adoption of it,” Glancy said. “The people writing the GDPR would consult him as to whether it would solve his problems, and parts of the very endless writing process were also about what Max [Schrems] was not happy with.”

Because Schrems did not respond to multiple interview requests, it is impossible to know his precise involvement in GDPR. His Twitter and blog have no visible, corresponding entries about GDPR’s passage.

However, public records show that GDPR’s drafters recommended several areas of improvement in the year before the law passed, including clearer definitions of “personal information,” stronger investigatory powers to the EU’s data regulators, more direct “data portability” to allow citizens to directly move their data from one company to another while also obtaining a copy of that data, and better transparency in how EU citizens’ online profiles are created and targeted for ads.

GDPR eventually became a sweeping set of 99 articles that tightly regulate the collection, storage, use, transfer, and disclosure of data belonging to all EU citizens, giving those citizens more direct control over how their data is treated.

For example, citizens have the “right to erasure,” in which they can ask a company to delete the data collected on them. Citizens also have the “right to access,” in which companies must provide a copy of the data collected on a person, along with information about how the data was collected, who it is shared with, and why it is processed.

Approved by a parliamentary vote in April 2016, GDPR took effect two years later.

GDPR’s immediate and future impact

On May 23, 2018, GDPR’s arrival was sounded not by trumpets, but by emails. Facebook, TicketMaster, eBay, PricewaterhouseCoopers, The Guardian, Marriott, KickStarter, GoDaddy, Spotify, and countless others began their public-facing GDPR compliance strategies by telling users about updated privacy policies. The email deluge inspired rankings, manic tweets, and even a devoted “I love GDPR” playlist. The blitz was so large, in fact, that several threat actors took advantage, sending fake privacy policy updates to phish for users’ information.

Since then, compliance looks less like emails and more like penalties.

Early this year, Google received a €50 million ($57 million) fine from France’s data protection authority. Last year, a Portuguese hospital received a €400,000 fine for two alleged GDPR violations. Because of a July 2018 data breach, a German chat platform got hit with a €20,000 fine. And in the reported first-ever GDPR notice from the UK, Canadian political consultancy—and murky partner to Cambridge Analytica—AggregateIQ received a notice about potential fines of up to €20 million.

To Noyb, the fines are good news. Gaëtan Goldberg, a privacy lawyer with the NGO, said that data privacy law compliance has, for many years, been lacking. Hopefully GDPR, which Goldberg called a “major step” in protecting personal data, can help turn that around, he said.

“[We] hope to see strong enforcement measures being taken by courts and data protection authorities around the EU,” Goldberg said. “The fine of 50 [million] euros the French CNIL imposed on Google is a good start in this direction.”

The future of data privacy

Last year, when Senator Warner told Zuckerberg that “the era of the Wild West in social media is coming to an end,” he may not have realized how quickly that would come true. In July 2018, California passed a statewide data privacy law called the California Consumer Privacy Act. Months later, three US Senators proposed their own federal data privacy laws. And just this month, the Government Accountability Office recommended that Congress pass a data privacy law similar to GDPR.

Data privacy is no longer a concept. It is the law.

In the EU, that law has released a torrent of legal complaints. Hours after GDPR came into effect, Noyb lodged a series of complaints against Google, Facebook, Instagram, and WhatsApp.

Goldberg said the group’s legal complaints are one component of meaningful enforcement on behalf of the government. Remember: Google’s massive penalty began with an investigation that the French authorities said started after it received a complaint from Noyb.

Separately, privacy group Privacy International filed complaints against Europe’s data-brokers and advertising technology companies, and Brave, a privacy-focused web browser, filed complaints against Google and other digital advertising companies.

Google and Facebook did not respond to questions about how they are responding to the legal complaints. Facebook also did not respond to questions about its previous legal battles with Schrems.

Electronic Frontier Foundation International Director Danny O’Brien wrote last year that, while we wait for the results of the above legal complaints, GDPR has already motivated other privacy-forward penalties and regulations around the world:

“In Italy, it was competition regulators that fined Facebook ten million euros for misleading its users over its personal data practices. Brazil passed its own GDPR-style law this year; Chile amended its constitution to include data protection rights; and India’s lawmakers introduced a draft of a wide-ranging new legal privacy framework.”

As the world moves forward, one man—the one who started it all—might be conspicuously absent. Last year, Schrems expressed a desire to step back from data privacy law. If anything, he said, it was time for others to take up the mantle.

“I know I’m going to be deeply engaged, especially at the beginning, but in the long run [Noyb] should absolutely not be Max’s personal NGO,” Schrems told The Register in a January 2018 interview. Asked to clarify about his potential future beyond privacy advocacy, Schrems said: “It’s retirement from the first line of defense, let’s put it that way… I don’t want to keep bringing cases for the rest of my life.”

Surprisingly, for all of Schrems’ public-facing and public-empowering work, his interviews and blog posts sometimes portray him as a deeply humble, almost shy individual, with a down-to-earth sense of humor, too. When asked during a 2016 podcast interview if he felt he would be remembered in the same vein as Edward Snowden, Schrems bristled.

“Not at all, actually,” Schrems said. “What I did is a very conservative approach. You go to the courts, you have your case, you bring it and you do your thing. What Edward Snowden did is a whole different ballgame. He pretty much gave up his whole life and has serious possibilities to some point end up in a US prison. The worst thing that happened to me so far was to be on that security list of US flights.”

During the same interview, Schrems also deflected his search result popularity.

“Everyone knows your name now,” the host said. “If you Google ‘Schrems,’ the first thing that comes up is ‘Max Schrems’ and your case.”

“Yeah but it’s also a very specific name, so it’s not like ‘Smith,’” Schrems said, laughing. “I would have a harder time with that name.”

If anything, the popularity came as a surprise to Schrems. Last year, in speaking to Bloomberg, he described Facebook as a “test case” when filing his original 22 complaints.

“I thought I’d write up a few complaints,” Schrems said. “I never thought it would create such a media storm.”

Glancy described Schrems’ initial investigation into Facebook in much the same way. It started not as a vendetta, she said, but as a courtesy.

“He started out with a really charitable view of [Facebook],” Glancy said. “At some level, he was trying to get Facebook to wake up and smell the coffee.”

That’s the Schrems that Glancy knows best, a multi-faceted individual who makes time for others and holds various interests. A man committed to public service, not public spotlight. A man who still calls and emails her with questions about legal strategy and privacy law. A man who drove down the California coast with some friends during spring break. Maybe even a man who is tired of being seen only as a flag-bearer for online privacy. (He describes himself on his Twitter profile as “(Luckily not only) Law, Privacy and Politics.”)

“At some level, he considers himself a consumer lawyer,” Glancy said. “He’s interested in the ways in which to empower the little guy, who is kind of abused by large entities that—it’s not that they’re targeting them, it’s that they just don’t care. [The people’s] rights are not being taken account of.”

With GDPR in place, those rights, and the people they apply to, now have a little more firepower.

The post Max Schrems: lawyer, regulator, international man of privacy appeared first on Malwarebytes Labs.

Cyber Security Week in Review (Feb. 22)



Welcome to this week's Cyber Security Week in Review, where Cisco Talos runs down all of the news we think you need to know in the security world. For more news delivered to your inbox every week, sign up for our Threat Source newsletter here.

Top headlines this week


  • U.S. officials charged a former member of the Air Force with defecting in order to help an Iranian cyber espionage unit. The Department of Justice says the woman collected information on former colleagues, and the Iranian hackers then attempted to target those individuals and install spyware on their computers.
  • The U.S. Department of Justice is dismantling two task forces aimed at protecting American elections. The groups were originally created after the 2016 presidential election to prevent foreign interference, but after the 2018 midterms, the Trump administration shrank their sizes significantly. 
  • Facebook and the U.S. government are closing in on a settlement over several privacy violations. Sources familiar with the discussions say it will likely result in a multimillion-dollar fine, likely to be the largest the Federal Trade Commission has ever imposed on a technology company. 

From Talos


  • There’s been a recent uptick in the Brushaloader infections. While the malware has been around since mid-2018, this new variant makes it more difficult than ever to detect on infected machines. New features include the ability to evade detection in sandboxes and the avoidance of anti-virus protection. 
  • New features in WinDbg makes it easier for researchers to debug malware. A new JavaScript bridge brings WinDbg in line with other modern programs. Cisco Talos walks users through these new features and shows off how to use them. 

Malware roundup


  • Google says it’s stepping up its banning of malicious apps. The company says it’s seen a 66 percent increase in the number of apps it’s banned from the Google Play store over the past year. Google says it scans more than 50 billion apps a day on users’ phones for malicious activity. 
  • A new campaign using the Separ malware is attempting to steal login credentials at large businesses. The malware uses short scripts and legitimate executable files to avoid detection. 
  • A new ATM malware called "WinPot" turns the machines into "slot machines." This allows hackers to essentially gamify ATM hacking, randomizing how much money the machine dispenses. 

The rest of the news


  • The U.S. is reviving a secret program to carry out supply-chain attacks against Iran. The cyber attacks are targeted at the country’s missile program. Over the past two months, two of Iran’s efforts to launch satellites have failed within minutes, though it’s difficult to assign those failures to the U.S. 
  • Australia says a “sophisticated state actor” carried out a cyber attack on its parliament. The ruling Liberal-National coalition parties say their systems were compromised in the attack. Since then, the country says it’s put “a number of measures” in place to protect its election system. 
  • Cisco released security updates for 15 vulnerabilities. Two critical bugs could allow attackers to gain root access to a system, and a third opens the door for a malicious actor to bypass authentication altogether. 
  • Facebook keeps a list of users that it believes could be a threat to the company or its employees. The database is made up of users who have made threatening posts against the company in the past. 


Hackers Use Fake Google reCAPTCHA to Cloak Banking Malware

Hackers Use Fake Google reCAPTCHA to Cloak Banking Malware

The most effective phishing and malware campaigns usually employ one of the following two age-old social engineering techniques:

Impersonation

These online phishing campaigns impersonate a popular brand or product through specially crafted emails, SMS, or social media networks. These campaigns employ various methods including email spoofing, fake or real employee names, and recognized branding to trick users into believing they are from a legitimate source. Impersonation phishing campaigns may also contain a victim’s name, email address, account number, or some other personal detail.
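As a rough illustration of how one facet of impersonation can be surfaced mechanically, the sketch below flags messages whose From domain disagrees with the Return-Path domain, a common tell when a spoofed display name claims a trusted brand. This is only a single heuristic (real detection also relies on SPF, DKIM, and DMARC results), and the header values used here are hypothetical:

```python
from email import message_from_string
from email.utils import parseaddr

def from_mismatch(raw_message):
    """Flag a simple impersonation tell: the From address domain
    differs from the Return-Path (bounce) address domain."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, return_addr = parseaddr(msg.get("Return-Path", ""))
    from_domain = from_addr.rpartition("@")[2].lower()
    return_domain = return_addr.rpartition("@")[2].lower()
    # Only flag when both domains are present and disagree.
    return bool(from_domain and return_domain and from_domain != return_domain)
```

A mismatch alone does not prove phishing (mailing lists legitimately diverge here), which is why production filters combine many such signals.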

Continue reading Hackers Use Fake Google reCAPTCHA to Cloak Banking Malware at Sucuri Blog.

Googlebot or a DDoS Attack?

Googlebot or a DDoS Attack?

A bot is a software application that uses automation to run scripts on the internet. Also called crawlers or spiders, these programs take on the simple yet repetitive tasks we would otherwise do ourselves. There are legitimate bots and malicious ones. A Web Application Firewall (WAF) filters web traffic and blocks malicious bots while letting the good ones pass.

Googlebot is Google’s web crawling bot. Google uses it to discover new and updated pages to be added to the search engine index.
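One documented way to tell genuine Googlebot traffic from an attacker spoofing its user agent is a reverse-then-forward DNS check: resolve the visitor's IP to a hostname, confirm the hostname ends in googlebot.com or google.com, then resolve that hostname back and confirm it matches the original IP. A minimal sketch, with the resolver functions injectable so the logic can be exercised without network access:

```python
import socket

def is_genuine_googlebot(ip, reverse_lookup=None, forward_lookup=None):
    """Verify a claimed Googlebot IP via the reverse-then-forward DNS check.

    reverse_lookup/forward_lookup default to the system resolver but can be
    injected for testing.
    """
    reverse_lookup = reverse_lookup or (lambda addr: socket.gethostbyaddr(addr)[0])
    forward_lookup = forward_lookup or socket.gethostbyname
    try:
        host = reverse_lookup(ip)
    except OSError:
        return False
    # Genuine crawler hostnames end in googlebot.com or google.com.
    if not (host.endswith(".googlebot.com") or host.endswith(".google.com")):
        return False
    try:
        # Forward-confirm: the hostname must resolve back to the same IP.
        return forward_lookup(host) == ip
    except OSError:
        return False
```

A WAF applying this check can safely drop "Googlebot" requests that fail it, since a DDoS bot can fake a user agent but not Google's DNS records.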

Continue reading Googlebot or a DDoS Attack? at Sucuri Blog.

Police arrest alleged Russian hacker behind huge Android ad scam

Police in Bulgaria have arrested an alleged Russian hacker who may be responsible for a huge Android ad scam that netted $10 million. The individual identified as Alexander Zhukov is a Saint Petersburg native who's been living in Varna, Bulgaria, since 2010 and was apprehended on November 6th after the US issued an international warrant for his arrest, according to ZDNet.

Source: Kommersant

Android Ecosystem Security Transparency Report is a wary first step

Reading through Google’s first quarterly Android Ecosystem Security Transparency Report feels like a mix of missed opportunities and déjà vu all over again.

Much of what is in the new Android ecosystem security report is data that has been part of Google’s annual Android Security Year in Review report, including the rates of potentially harmful applications (PHAs) on devices with and without sideloaded apps — spoiler alert: sideloading is much riskier — and rates of PHAs by geographical region. Surprisingly, the rates in Russia are lower than in the U.S.

The only other data in the Android ecosystem security report shows the percentage of devices with at least one PHA installed, broken down by Android version. This new data shows that the newer the version of Android, the less likely it is that a device will have a PHA installed.

However, this also hints at the data Google didn’t include in the report, like how well specific hardware partners have done in updating devices to those newer versions of Android. Considering that Android 7.x Nougat is the most common version of the OS in the wild at 28.2% and the latest version 9.0 Pie hasn’t even cracked the 0.1% marker to be included in Google’s platform numbers, the smart money says OEM updating stats wouldn’t be too impressive.

There’s also the matter of Android security updates and the data around which hardware partners are best at pushing them out. Dave Kleidermacher, head of Android security and privacy, said at the Google I/O developer conference in May 2018 that the company was tracking which partners were best at pushing security updates and that it was considering adding hardware support details to future Android Ecosystem Security Transparency Reports. More recently, Google added stipulations to its OEM contracts mandating at least four security updates per year on Android devices.

It’s unclear why Google ultimately didn’t include this data in the report on Android ecosystem security, but Google has been hesitant to call out hardware partners for slow updates in the past. In addition to new requirements in Android partner contracts regarding security updates, there have been rules stating hardware partners need to update any device to the latest version of Android released in the first 18 months after a device launch. However, it has always been unclear what the punishment would be for breaking those rules. Presumably, it would be a ban on access to Google Play services, the Play Store and Google Apps, but there have never been reports of those penalties being enforced.

Google has taken steps to make Android updates easier, including Project Treble in Android 8.0 Oreo, which effectively decoupled the Android system from any software differentiation added by a hardware partner. But, since Android 7.x is still the most common version in the wild, it doesn’t appear as though that work has yielded much fruit yet.

Adding OS and security update stats to the Android Ecosystem Security Transparency Report could go a long way towards shaming OEMs into being better and giving consumers more information with which to make purchasing decisions, but time will tell if Google ever goes so far as to name OEMs specifically.

The post Android Ecosystem Security Transparency Report is a wary first step appeared first on Security Bytes.

Google sets Android security updates rules but enforcement is unclear

The vendor requirements for Android are a strange and mysterious thing but a new leak claims Google has added language to force manufacturers to push more regular Android security updates.

According to The Verge, Google’s latest contract will require OEMs to supply Android security updates for two years and provide at least four updates within the first year of a device’s release. Vendors will also have to release patches within 90 days of Google identifying a vulnerability.

Mandating more consistent Android security updates is certainly a good thing, but it remains unclear what penalties Google would levy against manufacturers that fail to provide the updates or if Google would follow through on any punitive actions.

It has been known for years that Google sets certain rules for manufacturers who want to include the Play Store, Play services and Google apps on Android devices, but because enforcement has been unclear the rules have sometimes been seen as mere suggestions.

For example, Google has had a requirement in place since the spring of 2011 mandating manufacturers to upgrade devices to the latest version of the Android OS released within 18 months of a device’s launch. However, because of the logistics issues of providing those OS updates, Google has rarely been known to enforce that requirement.

This can be seen in the Android OS distribution numbers, which are a complete mess. Currently, according to Google, the most popular version of Android on devices in the wild is Android 6.0 Marshmallow (21.6%), followed by Android 7.0 (19%), Android 5.1 (14.7%), Android 8.0 (13.4%) and Android 7.1 (10.3%). And not even showing up on Google’s numbers because it hasn’t hit the 0.1% threshold for inclusion is Android 9.0 released in August.

Theoretically, the ultimate enforcement of the Android requirements would be Google barring a manufacturer from releasing a device that includes Google apps and services, but there have been no reports of that ever happening. Plus, the European Union’s recent crackdown on Android gives an indication that Google does wield control over the Android ecosystem — and was found to be abusing that power.

The ruling in the EU will allow major OEMs to release forked versions of Android without Google apps and services (something they were previously barred from doing by Google’s contract). It will also force Google to bundle the Play Store, services and most Google apps into a paid licensing bundle, while offering — but not requiring — the Chrome browser and Search as a free bundle. Early rumors suggest, however, that Google might offset the cost of the apps bundle by paying OEMs to use Chrome and Google Search, effectively making it all free and sidestepping any actual change.

These changes only apply to Android devices released in the EU, but it should lead to more devices on the market running Android but featuring third-party apps and services. This could mean some real competition for Google from less popular Android forks such as Amazon’s Fire OS or Xiaomi’s MIUI.

It’s still unknown if the new rules regarding Android security updates are for the U.S. only or if they will be part of contracts in other regions. But, an unintended consequence of the EU rules might be to strengthen Google’s claim that the most secure Android devices are those with the Play Store and Play services.

Google has long leaned on its strong record of keeping malware out of the Play Store and off of user devices, provided Play services are installed. Google consistently shows that the highest rates of malware come from sideloading apps in regions where the Play Store and Play services are less common — Russia and China — and where third-party sources are more popular.

Assuming the requirements for Android security updates do apply in other regions around the globe, it might be fair to also assume they’d be tied to the Google apps and services bundle (at least in the EU) because otherwise Google would have no way to put teeth behind the rules. So, not only would Google have its stats regarding how much malware is taken care of in the Play Store and on user devices by Play services, it might also have more stats showing those devices are more consistently updated and patched.

The Play Store, services and Google apps are an enticing carrot to dangle in front of vendors when requiring things like Android security updates, and there is reason to believe manufacturers would be willing to comply in order to get those apps and services, even if the penalties are unclear.

More competition will be coming to the Android ecosystem in the EU, and it’s not unreasonable to think that competition could spread to the U.S., especially if Google is scared of facing similar actions by the U.S. government (as unlikely as that may seem). And the less power Google apps and services have in the market, the less force there will be behind any Google requirements for security updates.


The post Google sets Android security updates rules but enforcement is unclear appeared first on Security Bytes.

Safer Internet Day: 4 Things You Might Not Realise Your Webfilter Can Do

Since it's Safer Internet Day today, I thought I'd use it as an excuse to write a blog post. Regular readers will know I don't usually need an excuse, but I always feel better if I do.

Yesterday, I was talking to our Content Filter team about a post on the popular Edugeek forum, where someone asked "is it possible to block adult content in BBC iPlayer?". Well, with the right web filter, the answer is "yes", but how many people think to even ask the question? Certainly we hadn't thought much about formalising the answer. So I'm going to put together a list of things your web filter should be capable of, but you might not have realised...


1. Blocking adult content on "TV catch up" services like iPlayer. With use of the service soaring, it's important that any use in education is complemented with the right safeguards. We don't need students in class seeing things their parents wouldn't want them watching at home. There's a new section of the Smoothwall blocklist now which will deal with anything on iPlayer that the BBC deem unsuitable for minors.

2. Making Facebook and Twitter "Read Only". These social networks are great fun, and it can be useful to relax the rules a bit to prevent students swarming for 4G. A read-only approach can help reduce the incidence of cyber-bullying and keep users more focused.

3. Stripping the comments out of YouTube. YouTube is a wonderful resource, and the majority of video is pretty safe (use YouTube for Schools if you want to tie that down further — your filter can help you there too). The comments on videos, however, are often at best puerile and at worst downright offensive. Strip out the junk, and leave the learning tool - win-win!

4. Busting Google searches back down to HTTP and forcing SafeSearch. Everybody appreciates a secure service, but when Google moved their search engine to HTTPS by default, they alienated the education community. With SSL traffic it is much harder to vet search terms, log accesses in detail, and, importantly, force SafeSearch. Google gives you DNS trickery to force the site back into plain HTTP - but that's a pain to implement, especially on a Windows DNS server. Use your web filter to rewrite the requests, and have the best of both.
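To make the rewriting idea in point 4 concrete: once searches are back on plain HTTP, a filter can simply append Google's documented safe=active query parameter to search URLs. A simplified sketch of that rewrite step (the URL patterns a production filter matches will be broader than this):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def force_safesearch(url):
    """Rewrite a plain-HTTP Google search URL to enforce SafeSearch."""
    parts = urlsplit(url)
    # Only touch Google web-search requests; pass everything else through.
    if "google." not in parts.netloc or parts.path != "/search":
        return url
    query = dict(parse_qsl(parts.query))
    query["safe"] = "active"  # Google's documented SafeSearch parameter
    return urlunsplit(parts._replace(query=urlencode(query)))
```

Because the parameter is set by the proxy on every matching request, a student clearing the setting in Google's preferences has no effect.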

Analyzing [Buy Cialis] Search Results

A few days ago I was updating the spammy word highlighting functionality in Unmask Parasites results and needed to test the changes on real websites. To find hacked websites with spammy content I would normally google for [viagra] or [cialis], which are arguably the most targeted keywords used in black hat SEO hacks. However, after Google’s June update to how it ranks web pages for spammy queries, I didn’t have much expectation of seeing hacked sites on the first page of search results for my usual [buy cialis] query and was ready to check a few more pages.

Indeed, for queries like [payday loans] I can see quite relevant results on the first three pages. All sites are specialized and don’t look like doorways on hacked sites. That’s really good. For [viagra] I found only one result on the first page pointing to a doorway on a hacked site. Still good.

However, when I entered a really spammy combination [buy viagra], the search results were less than optimal — 5 out of 10 led to hacked sites. And at least 2 of the remaining 5 specialized sites were promoted using hidden links on hacked sites. Not good. And the worst results (although ideal for testing my update) were for the [buy cialis] query — 100% of results on the first page (10 out of 10) led to doorways on hacked sites or simply spammy web pages. Not a single result from websites that really have anything to do with cialis.

buy cialis results

Results analysis

Here is the breakdown of the first 10 results (links go to real time Unmask Parasites reports for these pages and at the moment of writing they all reveal spammy content. However this may change over time):

  1. www.epmonthly .com/advertise/ — doorway on a hacked site
  2. werenotsorry .com/ — strange spammy site with rubbish content like this: “The car buy cialis in your car is the ultimate well source of electrical amazing power in your car.”
  3. incose .org/dom/ — doorway on a hacked site.
  4. www.deercrash .org/buy/cialis/online/ — doorway on a hacked site
  5. jon-odell .com/?p=54 — doorway on a hacked site
  6. www.goodgrief .org .au/Cialis/ — doorway on a hacked site
  7. www.asm .wisc .edu/buy-cialis — doorway on a hacked site
  8. www.mhfa .com .au/cms/finance-home/ — doorway on a hacked site
  9. www .plowtoplate .org/library/51.html — doorway on a hacked site
  10. john-leung .com/?p=16 — doorway on a hacked site

Over the course of the past week the results fluctuated slightly, and other links sometimes appeared on the first SERP.

Out of the 18 links I encountered on the first page for [buy cialis], 15 pointed to doorways on hacked sites, 1 to a site with unreadable machine-generated text (I'm still not sure whether it's an SEO experiment or a backdoor with some tricky search-traffic processing), and 2 to specialized sites relevant to the query but with quite bad backlink profiles. Overall, 0% of the results follow Google's quality guidelines.

So Google's update for spammy queries doesn't seem to work as it should, at least for some über-spammy queries. It's sad. And the reason I'm sad is not that I worry about people who use such queries on Google to buy counterfeit drugs. My major concern is that this situation justifies the huge number of sites (many thousands) that cyber-criminals hack in order to push a few of their doorways to the top of Google for relevant queries.

Behind the scenes

The 15 hacked sites that I found on the first Google SERP are actually only the tip of the iceberg. Each of them is linked to from many thousands (if not millions) of pages on similarly hacked sites. Here you can see a sample list of sites that link to those 15 (you might need a specialized tool like Unmask Parasites to see the hidden and cloaked links there).

Many of the hacked web pages link to more than one doorway page, which maximizes the chances that one of them will eventually be chosen by Google for the first page for one of the many targeted keywords. At the same time, this provides a pool of alternative doorways in case some of them are removed by webmasters or penalized by Google. As a result, these networks of doorways, landing pages and link pages can be massive. Here you can see a list with just a small part of the spammy links (338 unique domains) that can be found on hacked web pages.

.gov, .edu and .org

Among those hacked sites you can find the sites of many reputable organizations, which most likely helps the doorways rank well on Google. There are many compromised sites of professional associations, universities and even government agencies, for example (as of August 19th, 2013):

Volume of spammy backlinks

If you take some of the top results and check their backlink profiles (I used Majestic SEO Site Explorer), you'll see how many domains can be compromised (or spammed) in just one black hat SEO campaign. And since there are many ongoing competing campaigns just for “cialis” search traffic, you can imagine the overall impact.

backlink profile

The above screenshot shows thousands of domains linking to “www .epmonthly .com/advertise/” with various “cialis” keywords.

The situation with “www .epmonthly .com/advertise/” is quite interesting. If you google for [“www.epmonthly .com/advertise/”], you'll see more than a million results pointing to web pages where spammers used automated tools to post spammy links (including this one) in comments, profiles, etc., but failed to verify whether those sites accept the HTML code they were posting. (Many sites, while escaping the HTML code, automatically make all URLs clickable, so the spammers achieve their goal anyway.)

Typical black hat SEO tricks

In addition to annoying but relatively harmless comment spamming, forum spamming and fake user profiles, black hats massively hack websites with established reputations and turn them into their SEO assets.

The most common use for a hacked site is injecting links pointing to promoted resources (it can be a final landing page, or a doorway, or an intermediary site with links). Here is what such web pages may look like in Unmask Parasites reports:

spammy keyword highlighting

To hide such links from site owners, hackers make them invisible. For example, they can place them in an off-screen <div>:

<div style="position:absolute; left:-8745px;">...spammy links here...</div>

Or they put them in a normal <div> and add JavaScript that makes the <div> invisible when a browser loads the page:

<div id='hideMe'> ... spammy links here.... </div>
<script type='text/javascript'>
if (document.getElementById('hideMe') != null) {
    document.getElementById('hideMe').style.visibility = 'hidden';
    document.getElementById('hideMe').style.display = 'none';
}
</script>

The JavaScript itself can also be obfuscated (here, with a common JS packer):

eval(function(p,a,c,k,e,d){e=function(c){return(c<a?"":e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)d[e(c)]=k[c]||e(c);k=[function(e){return d[e]}];e=function(){return'\\w+'};c=1;};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p;}('2.1(\'0\').5.4="3";',6,6,'bestlinks|getElementById|document|none|display|style'.split('|'),0,{}))

which translates to

document.getElementById('bestlinks').style.display="none";

where “bestlinks” is the id of the <div> with spammy links.

Sometimes, encrypted JavaScript can be coupled with dynamic HTML generation of the link container. After decryption it looks like this:

document.write('<style><!-- .read {display:none} --></style><address class="read">');
...spammy links here...
document.write('</address>');

Of course, this is only the client-side representation of the problem. On the server side, it's rarely this straightforward. More often it involves obfuscated (usually PHP) code in sneaky places (e.g. themes, plugins, the database, etc.).

Doorways

Sites that rely on black hat SEO techniques get penalized by Google soon enough, so they can't expect much search traffic directly from search engines. Instead, they promote many disposable doorways on other, reputable sites that redirect search traffic to them.

The typical approach is to hack a website and use cloaking tricks (generating a specialized version with spammy keywords specifically for search engines while leaving the original content for normal visitors) to make search engines think its pages are relevant to those spammy queries. For example, check the title of “www.epmonthly .com/advertise/” when you visit it in a browser (“Advertise”) versus when you check it in Unmask Parasites or in Google's cache (“Buy Cialis (Tadalafil) Online – OVERNIGHT Shipping”). Then the hackers add some functionality to distinguish visitors coming from search engines and redirect them to third-party sites that pay for such traffic.
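For illustration, the crawler-detection half of such a cloaking scheme can be as simple as a User-Agent check. Here is a minimal sketch in JavaScript; the bot patterns, function names and page strings are my own illustrative assumptions (real attacks typically implement this logic in obfuscated server-side PHP):

```javascript
// Hypothetical sketch of the crawler check behind a cloaking script.
const BOT_PATTERNS = [/googlebot/i, /bingbot/i, /slurp/i, /baiduspider/i];

function isSearchEngineBot(userAgent) {
  // Crude but typical: match known crawler names in the User-Agent string.
  return BOT_PATTERNS.some((p) => p.test(userAgent || ''));
}

function pickContent(userAgent) {
  // Crawlers get the keyword-stuffed doorway version of the page;
  // everyone else (including the site owner) sees the original content.
  return isSearchEngineBot(userAgent) ? 'spammy doorway page' : 'original page';
}

console.log(pickContent('Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'));
// → spammy doorway page
```

This asymmetry is exactly why the page title differs between your browser and Google's cache.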

The redirects may be implemented as .htaccess rules, client-side JavaScript code, or server-side PHP code.
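When the redirect is done client-side, the script typically inspects document.referrer. Below is a hedged sketch of that logic as a testable helper function; the referrer list and the landing URL are invented for illustration, not taken from an actual attack:

```javascript
// Hypothetical sketch of referrer-based traffic filtering on a doorway page.
// Visitors arriving from a search results page get redirected to the paying
// site; direct visitors (including the site owner) see nothing unusual.
const SEARCH_REFERRERS = ['google.', 'bing.', 'yahoo.'];

function redirectTargetFor(referrer) {
  const fromSearch = SEARCH_REFERRERS.some((s) => (referrer || '').includes(s));
  return fromSearch ? 'http://pharma-landing.example/' : null; // null = no redirect
}

// In a browser it would be wired up roughly like this:
// const target = redirectTargetFor(document.referrer);
// if (target) window.location.replace(target);
```

The .htaccess variant does the same thing server-side by matching the HTTP Referer header in a RewriteCond.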

Sometimes, instead of cloaking, hackers simply create a whole spammy section in a subdirectory of a legitimate site, or a standalone doorway page. An example from our cialis search results: www .asm .wisc .edu/buy-cialis.

To Webmasters

It might be tricky to determine whether your site has fallen victim to a black hat SEO hack, since hackers do their best to hide the evidence from site owners and regular visitors. Antivirus tools won't help you here either, since links and redirects (if the tools can even see them) are not considered harmful. Nonetheless, a thoughtful webmaster is always equipped with the proper tools and tricks (click here for details) to detect such issues. They range from specialized Google search queries and reports in Webmaster Tools to log analysis and server-side integrity control.

In addition to the tricks that I described here, you can try to simply load your site with JavaScript turned off. Sometimes this is all it takes to find hidden links whose visibility is controlled by a script.
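That manual check can also be automated by scanning the page's raw HTML for containers that match the hiding tricks shown above. Here is a rough sketch; the regexes and the off-screen threshold are my own guesses at what counts as suspicious, and a flagged element is only a lead, not proof (hidden elements can be perfectly legitimate):

```javascript
// Rough sketch: flag chunks of raw HTML that look like hidden link containers.
// The patterns mirror the tricks described above: far off-screen positioning
// and inline display/visibility toggles.
function findSuspiciousContainers(html) {
  // Elements positioned hundreds of pixels off-screen (e.g. left:-8745px)
  const offScreen = html.match(/<\w+[^>]*left\s*:\s*-\d{3,}px[^>]*>/gi) || [];
  // Elements hidden with inline display:none or visibility:hidden
  const hidden = html.match(/<\w+[^>]*(display\s*:\s*none|visibility\s*:\s*hidden)[^>]*>/gi) || [];
  return offScreen.concat(hidden);
}

const sample = '<div style="position:absolute; left:-8745px;"><a href="http://spam.example/">buy cialis</a></div>';
console.log(findSuspiciousContainers(sample).length); // → 1
```

Running something like this against your own pages (fetched with a crawler User-Agent, to defeat cloaking) surfaces the same containers you would otherwise only spot with JavaScript disabled.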

Fighting black hat SEO hacks

Of course, site owners are responsible for what happens with their sites; they should protect them and clean them up in case of a hack. Doorways on hacked sites would never appear in search results if all webmasters quickly mitigated such issues.

But let's look at this from a different perspective. The main goal of all black hat SEO hacks is to push doorways to the top of Google for relevant keywords and get targeted search traffic. And 80% (or even more) of the massive campaigns target a very narrow set of keywords and their modifications. If Google actively monitored the first pages of search results for such keywords and penalized doorways, it could significantly reduce the efficacy of these campaigns, leaving very little incentive to hack websites to place spammy links there. And you don't have to monitor every possible keyword combination; in my experience, most of them eventually point to the same doorways.

I can see Google moving in this direction. The description of the above-mentioned ranking algorithm update is very promising. However, as the [buy cialis] query with 0% of relevant results on the first page shows, a lot still needs to be improved.

P.S. Just before posting this article, I checked the results for [buy cialis] once more and ... surprise! ... found a link to the Wikipedia article about Tadalafil in 4th position. Wow! Now we have 1 result that doesn't seem to have anything to do with hacked sites.
