Category Archives: Privacy

Facebook Admits SMS Notifications Sent Using Two-Factor Number Was Caused by Bug

Facebook has clarified the situation around SMS notifications sent using the company's two-factor authentication (2FA) system, admitting that the messages were indeed caused by a bug. From a report: In a blog post penned by Facebook Chief Security Officer Alex Stamos, the company says the error led it to "send non-security-related SMS notifications to these phone numbers." Facebook uses the automated shortcode 32665, or "FBOOK," as its two-factor authentication number, which is a secure way of confirming a user's identity by sending a numeric code to a secondary device like a mobile phone. That same number ended up sending users Facebook notifications without their consent. When users attempted to get the SMS notifications to stop, their replies were posted to their own Facebook profiles as status updates.

Read more of this story at Slashdot.

NBlog February 17 – The I part of CIA

Integrity is a universal requirement, especially if you interpret the term widely to include aspects such as:
  • Completeness of information;
  • Accuracy of information;
  • Veracity, authenticity and assurance levels in general e.g. testing and measuring to determine how complete and accurate a data set is, or is not (an important control, often neglected);
  • Timeliness (or currency or ‘up-to-date-ness’) of information (with the implication of controls to handle identifying and dealing appropriately with outdated info – a control missing from ISO/IEC 27001 Annex A, I think);
  • Database integrity plus aspects such as contextual appropriateness plus internal and external consistency (and, again, a raft of associated controls at all levels of the system, not just Codd’s rules within the DBMS);
  • Honesty, justified credibility, trust, trustworthiness, ‘true grit’, resilience, dependability and so forth, particularly in the humans and systems performing critical activities (another wide-ranging issue with several related controls);
  • Responsibility and accountability, including custodianship, delegation, expectations, obligations, commitments and all that …
  • … leading into ethics, professional standards of good conduct, ‘rules’, compliance and more.
The full breadth of meanings and the implications of “integrity” are the key reason I believe it deserves its place at information risk and security’s high table, along with confidentiality and availability. However, for some people in the field (perhaps a greater proportion of non-native English speakers?), it evidently has a much more restricted meaning, hence the reason for the note to this definition of information security:
information security
preservation of confidentiality (3.10), integrity (3.36) and availability (3.7) of information
Note 1 to entry: In addition, other properties, such as authenticity (3.6), accountability, non-repudiation (3.48), and reliability (3.55) can also be involved.

Those additional properties, and more, are to me all part of “integrity” (plus availability in the case of “reliability”).

By the way, Donn Parker has argued for years (decades!) that the CIA triad is deficient. Aside from the vagueness of “integrity”, which is at least partially addressed by that note, Donn points out that there are other, materially different properties or requirements or features of information that are also an integral part of the domain, such as ownership and control – and I must say I think he’s right. A significant part of privacy, for example, is the concept that we data subjects own and hence have a right to control or choose how our personal information is used, disclosed, stored, maintained and disposed of, regardless of who actually has possession of it at any moment, and regardless of the fact that we may have chosen to disclose it to them, or failed to prevent them accessing it (e.g. by standing naked at a window!). That, for me, goes beyond CIA, although some would say it falls under responsibility, accountability and trust which is part of integrity, and of course there is a confidentiality angle. Regardless of the official/academic definitions, it’s an intriguing perspective.

Scanned IDs of 119,000 FedEx customers exposed online

An unsecured Amazon Web Services bucket holding personal information and scans of IDs of some 119,000 US and international citizens was found sitting online by a Kromtech security researcher earlier this month. The stored data had been stockpiled by Bongo International, a company that specialized in helping North American retailers and brands sell online to consumers in other countries. Bongo was acquired by FedEx in 2014, relaunched as FedEx Cross-Border International, and ultimately shuttered in … More

GDPR quick guide: Why non-compliance could cost you big

If you conduct business in the EU, offer goods or services to, or monitor the online behavior of EU citizens, then the clock is ticking. You only have a few more months – until May – to make sure your organization complies with GDPR data privacy regulations. Failure to abide by GDPR means you could get hit with huge fines. Finding and investigating data breaches: Why it’s always too little, too late Personal data protection … More

Smashing Security #065: Cryptominomania, Poppy, and your Amazon Alexa


Cryptomining goes nuclear, YouTube for Kids gets scary, and TV ads have been given the green light to mess with your Amazon Alexa.

All this and much much more is discussed in the latest edition of the "Smashing Security" podcast by computer security veterans Graham Cluley and Carole Theriault, who are joined this week by special guest Maria Varmazis.

Seattle To Remove Controversial City Spying Network After Public Backlash

schwit1 shares a report from Activist Post: Following years of resistance from citizens, the city of Seattle has decided to completely remove controversial surveillance equipment -- at a cost of $150,000. In November 2013, Seattle residents pushed back against the installation of several mesh network nodes attached to utility poles around the downtown area. The American Civil Liberties Union of Washington and privacy advocates were immediately concerned about the ability of the nodes to gather user information via the Wi-Fi connection. The Seattle Times reports on the latest developments: "Seattle's wireless mesh network, a node of controversy about police surveillance and the role of federal funding in city policing, is coming down. Megan Erb, spokeswoman for Seattle Information Technology, said the city has budgeted $150,000 for contractor Prime Electric and city employees to remove dozens of surveillance cameras and 158 'wireless access points' -- little, off-white boxes with antennae mounted on utility poles around the city." The nodes were purchased by the Seattle Police Department via a $3.6 million grant from the Department of Homeland Security. The Seattle Police Department argued the network would be helpful for protecting the port and for first-responder communication during emergencies. As the Times notes, "the mesh network, according to the ACLU, news reports and anti-surveillance activists from Seattle Privacy Coalition, had the potential to track and log every wireless device that moved through its system: people attending protests, people getting cups of coffee, people going to a hotel in the middle of the workday." However, by November 2013, SPD spokesman Sean Whitcomb announced, "The wireless mesh network will be deactivated until city council approves a draft (privacy) policy and until there's an opportunity for vigorous public debate." 
The privacy policy for the network was never developed and, instead, the city has now opted to remove the devices at a cost of $150,000. The Times notes that, "crews are tearing its hardware down and repurposing the usable parts for other city agencies, including Seattle Department of Transportation traffic cameras."


NBlog February 14 – IoT security & privacy standard

I've just added another new page for ISO/IEC 27030, a standard now being developed for IoT security and privacy.

I've been arguing for years that it would be appropriate, since they specify a risk-based approach to security management, for the ISO27k standards to specify the information risks they address. To that end, I've published a PIG (Probability Impact Graph) graphic from the NoticeBored security awareness module on IoT and BYOD, to set the ball rolling ...

There seems little chance of persuading ISO/IEC to incorporate such a colorful image in the standard, unfortunately, but hopefully the analytical approach will at least prove useful for the project team busily drafting the new standard.

On the web page I've described the red and amber zone IoT risks. I'm sure we could have an excellent discussion about those and other risks in the committee, except there is never enough time at the twice-yearly SC27 meetings to get far into the nitty-gritty of stuff like this. Instead I'll see whether I can raise any interest on the ISO27k Forum, perhaps feeding relevant content and creative suggestions to SC27 via formal comments submitted by NZ Standards - the tedious, antiquated, laborious, slow and expensive approach that we are presently lumbered with. It hardly seems worth the effort.

German court says Facebook use of personal data is illegal

Facebook’s default privacy settings and some of its terms of service fall afoul of the German Federal Data Protection Act, the Berlin Regional Court has found. By not adequately securing the informed consent of its users, Facebook’s use of personal data is illegal – and so is the social network’s “real-name” clause, as the German Telemedia Act says that providers of online services must allow users to use their services anonymously or by using a … More

NBlog February 13: ISO/IEC 27000:2018 FREE download

I’ve caught up with a small mountain of ISO/IEC JTC1/SC27 emails, and updated the site with a smattering of news.

A few new and updated standards have been released in the past 4 months or so, including ISO/IEC 27000:2018, the overview and glossary of terms used throughout ISO27k.

As usual, ITTF offers legitimate FREE single-user PDF versions of ISO/IEC 27000 in both English and French.

Please observe the copyright notice. The free ITTF PDFs are for personal use and are not to be shared or networked.

Other recent (but not free) releases include ISO/IEC 27007 (management system auditing), 27019 (securing SCADA/ICS process controls in the energy industry) and 27034-5 (application security).

ISO/IEC 27021 is an interesting new one: it explains the competences (knowledge and skills) required by ISMS professionals. It’s fairly straightforward, really, but nice to see it laid out in black and white, with the implication that assorted ISO27k training courses will gradually fall into line.

Perhaps we should develop an ISO27021-aligned training course. Would you like to pop down to the South Pacific to learn how to do this ISO27k ISMS stuff, or invite me over to wherever you are? If so, please get in touch. It's a lot of work to put a course together, so we'd need to establish first whether there would be sufficient demand. 😊

There are also some privacy standards in preparation with ISO27k numbers, hinting at commonality/convergence between information risk/security management and privacy management. It's a shame they aren't already available, given the massive push towards GDPR compliance right now.

Finally, I have some choice words to say on the site about a slew of “cybersecurity” standards projects on the go, with a common concern that “cyber” and derivative words are not properly defined – a bit of a drawback for international standards, I feel. That’s one bandwagon I’m happy to observe cynically from the sidelines.

Polisis: AI-based framework for analyzing privacy policies in real time

It has been known for a while that the overwhelming majority of Internet users don’t read privacy policies and terms of service before agreeing to them. Those few that do usually skim over them. That’s mostly because these documents and agreements are extremely long and – intentionally or unintentionally – written in a way that makes them unintelligible to the great majority of users. Companies’ privacy policies and terms of service also change through time, … More

SecurityWeek RSS Feed: New Details Surface on Equifax Breach

Documents provided recently by Equifax to senators revealed that the breach suffered by the company last year may have involved types of data not mentioned in the initial disclosure of the incident.


Equifax breach may have exposed more data than first thought

The 2017 Equifax data breach was already extremely serious by itself, but there are hints it was somehow worse. CNN has learned that Equifax told the US Senate Banking Committee that more data may have been exposed than initially determined. The hack may have compromised more driver's license info, such as issuing dates and states, as well as tax IDs. In theory, it would be that much easier for intruders to commit fraud.

Source: CNN Money

A Flaw in Hotspot Shield VPN From AnchorFree Can Expose Users Locations

Security expert Paulos Yibelo has discovered a vulnerability in Hotspot Shield VPN from AnchorFree that can expose the locations of its users.

Paulos Yibelo, a security researcher, has discovered a vulnerability that can expose users’ locations around the globe, compromising their anonymity and privacy. The company has about 500 million users globally.

VPN services are used nowadays to protect the identity of individual users and to guard their browsing habits against eavesdropping. They are popular among political activists and dissidents in countries such as North Korea and China, where internet access is restricted by censorship or heavily monitored, because these services hide users' real IP addresses, which could otherwise be used to locate them.

The Great Firewall of China is an example. Locating a Hotspot Shield user in a rogue country could pose a risk to their life and their family.

According to the researcher, the Hotspot Shield VPN, developed by AnchorFree to secure users’ connections and protect their privacy, contained flaws that allow the disclosure of sensitive information such as the user’s country, the name of the connected Wi-Fi network and the user’s real IP address.

“By disclosing information such as Wi-Fi name, an attacker can easily narrow down or pinpoint where the victim is located, you can narrow down a list of places where your victim is located,” states Paulos Yibelo.

The vulnerability, CVE-2018-6460, was disclosed on Monday without a response from the company, but on Wednesday a patch was released to address the issue. The vulnerability is present in the local web server (on port 895) that Hotspot Shield installs on the user’s machine.

“http://localhost:895/status.js generates a sensitive JSON response that reveals whether the user is connected to VPN, to which VPN he/she is connected to what and what their real IP address is & other system juicy information. There are other multiple endpoints that return sensitive data including configuration details,” continues the researcher.

“While that endpoint is presented without any authorization, status.js is actually a JSON endpoint so there are no sensitive functions to override, but when we send the parameter func with $_APPLOG.Rfunc, it returns that function as a JSONP name. We can obviously override this in our malicious page and steal its contents by supplying a tm parameter timestamp, that way we can provide a logtime”.

Once running, the server hosts multiple JSONP endpoints that require no authentication and whose responses leak sensitive information about the VPN service, such as configuration details. The researcher released a proof of concept (PoC) for the flaw; however, ZDNet reporter Zack Whittaker independently verified it and found that it revealed only the Wi-Fi network name and the country, not the real IP address.
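The weakness described here is a classic JSONP pitfall: the endpoint wraps its data in a caller-chosen callback name and demands no authentication, so any page the victim visits can define that callback and receive the payload. A generic sketch of the mechanism (the response shape and field names are illustrative assumptions, not Hotspot Shield's actual output):

```javascript
// What a vulnerable JSONP endpoint might return when asked for callback "leak":
const jsonpResponse = 'leak({"connected": true, "ssid": "HomeWifi", "country": "US"})';

// The attacker's page defines the callback before loading the script:
const captured = [];
function leak(info) {   // same name as the callback parameter it requested
  captured.push(info);  // the attacker now holds the sensitive fields
}

// Loading the response via a <script> tag amounts to evaluating it,
// which is why same-origin policy does not protect JSONP responses:
eval(jsonpResponse);    // stands in for <script src="http://localhost:895/...">

console.log(captured[0].ssid, captured[0].country); // HomeWifi US
```

Because the endpoint lived on localhost, the browser of anyone running Hotspot Shield would happily fetch it on behalf of a hostile page.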

The company replied to the researcher’s allegations:

“We have found that this vulnerability does not leak the user’s real IP address or any personal information, but may expose some generic information such as the user’s country. We are committed to the safety and security of our users, and will provide an update this week that will completely remove the component capable of leaking even generic information”. 



About the author Luis Nakamoto

Luis Nakamoto is a Computer Science student specializing in cryptology and an information security enthusiast who has participated in groups such as Comissão Especial de Direito Digital e Compliance (OAB/SP) and CCBS (Consciência Cibernética Brasil) as a researcher into new technologies related to ethical hacking, forensics and reverse engineering. He is also a prolific writer, contributing to Portal Tic from Sebrae Nacional.

Pierluigi Paganini

(Security Affairs – Hotspot Shield VPN, privacy)

The post A Flaw in Hotspot Shield VPN From AnchorFree Can Expose Users Locations appeared first on Security Affairs.

New Deepfakes forum goes mining with Coinhive

You may or may not be familiar with the furore over Deepfakes, a relatively new development in pornography involving a tool called FakeApp, which is capable of producing a realistic porn clip that replaces the original actors’ heads with those of celebrities—or indeed, anyone at all.

Online fakes have been around since the early 2000s or possibly even earlier; alongside those old photos, fakers would also make the odd terrible porno flick. Those movies would quite literally be a static cutout of a celebrity’s head stuck onto the body. Some 20 years later, the tech has caught up, and the web is suddenly dealing with the fallout.

FakeApp allows people to “train” an AI to create a realistic head so the scene is practically indistinguishable from reality. The AI is trained by feeding it images or footage of people; the more data it has to go on, the more realistic everything is.

After a media firestorm, the inevitable has happened. All of the Deepfake subreddits, where the majority of content was being created, have been taken offline after major players such as Twitter and PornHub had already effectively banned Deepfake content from their networks.

The Deepfake tech is available for pretty much anyone to make use of—the only real barrier to entry is having a powerful PC capable of withstanding the intensive training process, which can take hours or days to complete.

Now, if you were a crafty cybercriminal and knew that the main Deepfakes sources were taken offline, with a sizable community of content consumers and creators with heavy-duty PC rigs suddenly set adrift, what would you do?

The answer, of course, is monetize potentially dubious fakes that you didn’t create yourself and hammer visitors’ PCs with mining scripts.

One of the most popular “lifeboat” sites we’ve seen for those unceremoniously dumped from the tender embrace of reddit was being promoted pretty heavily on surviving subreddits:

[Screenshot: promo messages on surviving subreddits]

On the surface, it looks like a fairly typical forum, and it’s been getting a fair bit of activity so far. It all looks legit—or at least as legit as can be given the controversial content on offer:

[Screenshot: the forum]

A quick check of the source code, while your CPU likely ramps up to 100 percent, would tell a slightly different story:

[Screenshot: miner code in the page source]

We have some JavaScript located at:

[Screenshot: the script’s URL]

Sure, you could try to make sense of it as is. Or, you could just unpack it instead and save yourself a headache because that is a large, confusing pile of code. What is it doing?

miner function

var Miner=function

…miner…function? Did this site place mining scripts in the background?




They sure did, and we block both the mining and the website in question.



Coinhive is something we’ve been blocking since October. It allows you to place cryptocurrency mining scripts on your webpage, similar to how regular adverts are placed, except it’ll try to make as much use of your machine as possible to whip up some Monero coins for the site owner. Here’s an example of a site pushing a PC to the limit via mining scripts in the background. Check out the resources being gobbled up on the right-hand side:

[Screenshot: CPU usage ramping up]

In an age of people leaving dozens of tabs open and going for dinner, websites running scripts that ramp your CPU up to 100 percent and generate a fair bit of heat into the bargain just aren’t my thing. Now that we have DIY fake porn tech that demands high system specs, with people simultaneously making content as well as downloading it, those users are prime targets for a spot of surreptitious cryptomining taking place behind the scenes.
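Blocking works because a miner that never loads can't mine. Here's a toy sketch of the kind of check a blocker applies to each script URL before fetching it; the two-entry blocklist is purely illustrative, not the large, frequently updated lists real blockers maintain:

```javascript
// Toy request filter: flag scripts served from known miner hosts.
const minerHosts = ["coinhive.com", "coin-hive.com"]; // illustrative blocklist

function findMinerScripts(scriptUrls) {
  return scriptUrls.filter((u) => {
    try {
      const host = new URL(u).hostname;
      // match the host itself or any subdomain of it
      return minerHosts.some((h) => host === h || host.endsWith("." + h));
    } catch (e) {
      return false; // ignore malformed URLs
    }
  });
}

const urls = [
  "https://example.com/app.js",
  "https://coinhive.com/lib/coinhive.min.js",
];
console.log(findMinerScripts(urls)); // only the coinhive URL is flagged
```

Real blockers do this inside the browser's request pipeline, so the script is refused before a single hash is computed.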

We’ve seen a few mentions of other Deepfake aficionados complaining about dodgy sites, and we’ll be taking a closer look to see what’s out there. All in all, you’re probably better off steering clear of the whole mess and taking up a less stress-inducing hobby (for you and your computer).

Keep your security tools up to date, make informed decisions about what you want to block, and keep those CPU temperatures down to a minimum!

The post New Deepfakes forum goes mining with Coinhive appeared first on Malwarebytes Labs.

Data of 800,000 Swisscom customers compromised in breach

Swisscom, the biggest telecom company in Switzerland, has suffered a data breach that resulted in the compromise of personal data of some 800,000 customers, i.e., nearly ten percent of the entire Swiss population. “The data accessed included the first and last names, home addresses, dates of birth and telephone numbers of Swisscom customers; contact details which, for the most part, are in the public domain or available from list brokers,” the company explained. The data … More

Smashing Security #064: So just a ‘teeny tiny’ security issue then?


A Namecheap vulnerability allows strangers to make subdomains for your website, Troy Hunt examines password length, and ex-Google and Facebook employees are fighting to protect kids from social media addiction.

All this and much much more is discussed in the latest edition of the "Smashing Security" podcast by computer security veterans Graham Cluley and Carole Theriault, who are joined this week by special guest HaveIBeenPwned's Troy Hunt.

How to track smartphone users when they’ve turned off GPS

As it turns out, turning off location services (e.g., GPS) on your smartphone doesn’t mean an attacker can’t use the device to pinpoint your location. A group of Princeton University researchers has devised a novel user-location mechanism that exploits non-sensory and sensory data stored on the smartphone (the environment’s air pressure, the device’s heading, timezone, network status, IP address, etc.) and publicly-available information to estimate the user’s location. The PinMe mechanism The non-sensory and … More

Hotspot Shield VPN flaw can betray users’ location

A flaw in the widely used Hotspot Shield VPN utility can be exploited by attackers to obtain sensitive information that could be used to discover users’ location and, possibly and ultimately, their real-world identity. About the vulnerability According to the entry for the vulnerability (CVE-2018-6460) in the National Vulnerability Database, Hotspot Shield runs a webserver with a static IP address and port 895, and the web server uses JSONP and hosts sensitive information including … More

FTC Brings Its Thirtieth COPPA Case Against Online Talent Agency

On February 5, 2018, the Federal Trade Commission (“FTC”) announced its most recent Children’s Online Privacy Protection Act (“COPPA”) case against Explore Talent, an online service marketed to aspiring actors and models. According to the FTC’s complaint, Explore Talent provided a free platform for consumers to find information about upcoming auditions, casting calls and other opportunities. The company also offered a monthly fee-based “pro” service that promised to provide consumers with access to specific opportunities. Users who registered online were asked to input a host of personal information including full name, email, telephone number, mailing address and photo; they also were asked to provide their eye color, hair color, body type, measurements, gender, ethnicity, age range and birth date.

The FTC alleges that Explore Talent collected the same range of personal information from users who indicated they were under age 13 as from other users, and made no attempt to provide COPPA-required notice or obtain parental consent before collecting such information. Once users registered, all profiles, including children’s, became publicly visible, and registered adults were able to “friend” and exchange direct private messages with registered children. The FTC alleges that, between 2014 and 2016, more than 100,000 children registered. As part of the settlement, Explore Talent agreed to (1) pay a $500,000 civil penalty (which was suspended upon payment of $235,000), (2) comply with COPPA in the future and (3) delete the information it previously collected from children.
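The collection failure the FTC describes amounts to a missing age gate. A minimal sketch of the check a COPPA-aware registration flow performs, assuming a birth date is collected up front (13 is the statutory threshold; the dates are invented):

```javascript
// Age-gate check: is the registrant under 13 as of "today"?
function isUnder13(birthDate, today = new Date()) {
  const cutoff = new Date(today);
  cutoff.setFullYear(cutoff.getFullYear() - 13); // 13 years before today
  return birthDate > cutoff; // born after the cutoff -> younger than 13
}

// A COPPA-aware flow would pause here and seek verifiable parental
// consent instead of collecting and publishing the profile.
console.log(isUnder13(new Date("2010-06-01"), new Date("2018-02-05"))); // true
console.log(isUnder13(new Date("2000-06-01"), new Date("2018-02-05"))); // false
```

The point of the case is that Explore Talent ran no such check: children's data flowed through the same pipeline as adults'.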

Singapore PDPC Issues Response to Public Feedback Regarding Data Protection Consultation

On February 1, 2018, the Singapore Personal Data Protection Commission (the “PDPC”) published its response to feedback collected during a public consultation process conducted during the late summer and fall of 2017 (the “Response”). During that public consultation, the PDPC circulated a proposal relating to two general topics: (1) the relevance of two new alternative bases for collecting, using and disclosing personal data (“Notification of Purpose” and “Legal or Business Purpose”), and (2) a mandatory data breach notification requirement. The PDPC invited feedback from the public on these topics.

“Notification of Purpose” as a new basis for an organization to collect, use and disclose personal data.

In its consultation, the PDPC solicited views on “Notification of Purpose” as a possible new basis for data processing. In its Response, the PDPC noted that it intends to amend its consent framework to incorporate the “Notification of Purpose” approach (also called “deemed consent by notification”), which will essentially provide for an opt-out approach.

Under that approach, organizations may collect, use and disclose personal data merely by providing (1) some form of appropriate notice of purpose in situations where there is no foreseeable adverse impact on the data subjects, and (2) a mechanism to opt out. The PDPC will issue guidelines on what would be considered “not likely to have any adverse impact.” The approach will also require organizations to undertake risk and impact assessments to determine any such possible adverse impacts. Where the risk assessments determine a likely adverse impact, the approach may not be used. Also, the “Notification of Purpose” approach may not be used for direct marketing purposes.

The PDPC will not specify how organizations will be required to notify individuals of purpose, and will leave it to organizations to determine the most appropriate method under the circumstances, which might include a general notification on a website or social media page. The notification must, however, include information on how to opt out or withdraw consent from the collection, use or disclosure. The PDPC also said it would provide further guidance on situations where opt-out would be challenging, such as where large volumes of personal data are collected by sensors, for example.

“Legitimate Interest” as a basis to collect, use or disclose personal data.

In its consultation, the PDPC also sought feedback on a proposed “Legal and Business Purpose” ground for processing personal information. In its Response, the PDPC said that based on the feedback, it intends to adopt this concept under the EU term “legitimate interest.” The PDPC will provide guidance on the legal and business purposes that come within the ambit of “legitimate interest,” such as fraud prevention. “Legitimate interest” will not cover direct marketing purposes. The intent behind this ground for processing is to enable organizations to collect, use and disclose personal data in contexts where there is a need to protect legitimate interests that will have economic, social, security or other benefits for the public or a section thereof, and the processing should not be subject to consent. The benefits to the public or a section thereof must outweigh any adverse impacts to individuals. Organizations must conduct risk assessments to determine whether they can meet this requirement. Organizations relying on “legitimate interest” must also disclose this fact and make available a document justifying the organization’s reliance on it.

Mandatory Data Breach Notification

Regarding the 72-hour breach notification requirement it proposed in the consultation, the PDPC acknowledged in its Response that the affected organization may need time to determine the veracity of a suspected data breach incident. Thus, it stated that the time frame for the breach notification obligation only commences when the affected organization has determined that a breach is eligible for reporting. This means that when an affected organization first becomes aware that an information security incident may have occurred, the organization still has time to conduct a digital forensic investigation to determine precisely what has happened, including whether any breach of personal information security has happened at all, before the clock begins to run on the 72-hour breach notification deadline. From that time, the organization must report the incident to the affected individuals and the PDPC as soon as practicable, but still within 72 hours.

The PDPC requires that the digital forensic investigation be completed within 30 days. It allows, however, that an investigation may run longer than 30 days if the affected organization documents why the additional time taken was reasonable and expeditious.
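The two clocks described above can be sketched as a small calculation; the dates are invented, and the fixed 30-day and 72-hour windows are simplifications of the PDPC's position:

```javascript
const HOUR = 3600 * 1000;
const DAY = 24 * HOUR;

// firstAwareness: when the organisation first suspects an incident.
// determinedReportable: when it concludes the breach is notifiable;
// only then does the 72-hour notification clock start.
function breachDeadlines(firstAwareness, determinedReportable) {
  return {
    investigationDue: new Date(firstAwareness.getTime() + 30 * DAY),
    notificationDue: new Date(determinedReportable.getTime() + 72 * HOUR),
  };
}

const d = breachDeadlines(
  new Date("2018-02-01T09:00:00Z"),
  new Date("2018-02-20T09:00:00Z")
);
console.log(d.investigationDue.toISOString()); // 2018-03-03T09:00:00.000Z
console.log(d.notificationDue.toISOString()); // 2018-02-23T09:00:00.000Z
```

Note the asymmetry: the investigation window runs from first awareness, while the notification window runs only from the reportability determination.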

Both the Centre for Information Policy and Leadership and Hunton & Williams LLP filed public comments in the PDPC’s consultation.

Identity fraud enters a new era of complexity

The number of identity fraud victims increased by eight percent (rising to 16.7 million U.S. consumers) in the last year, a record high since Javelin Strategy & Research began tracking identity fraud in 2003. The 2018 Identity Fraud Study found that despite industry efforts to prevent identity fraud, fraudsters successfully adapted to net 1.3 million more victims in 2017, with the amount stolen rising to $16.8 billion. With the adoption of EMV cards and terminals, … More

Realistic, well-positioned Reddit clone is out to grab users’ login credentials

A convincing clone of the popular social news aggregation and discussion site Reddit has been spotted on a look-alike domain. The author is obviously counting on users not to spot it for what it is: a site meant to harvest users’ usernames and passwords. HEADSUP: Looking for infosec people at @Reddit. Website at (phishing?) domain reddit(.)co — using the Colombian TLD — was acting a pitch-perfect apparent MITM of the actual Reddit. Now returning 500 … More

Safer Internet Day 2018: ad blockers and anti-trackers

The path to a safer Internet can be a bit of a quandary. What programs should you buy? How long should your passwords be? Is it okay to write them down? What makes a website secure?

All of these questions can merit their own lengthy essays, so today, on Safer Internet Day, we’re going to look at some of the simplest solutions for security. What is the easiest, fastest, completely free thing you can do to have a safer Internet experience? The answer: ad blockers and anti-tracking browser extensions. Let’s take a look at how.

Ad blockers

Some people feel that ad blockers are unethical, as they deprive others in the content chain of income. While this can be debated, it’s indisputable that cybercriminals love using ads as a malware delivery mechanism.

Traditionally, bad ads have delivered exploit kits, forced redirects, fake plugin updates, and more.  Recently, malicious ads have been caught running cryptominers, monopolizing your CPU to make the owners a few pennies. Given that you can’t be infected by an ad that doesn’t load, you might want to check out one of the following ad blockers.

uBlock Origin (Chrome, Firefox, Safari, Edge)

Is simply blocking most ads not good enough for you? Does the idea of “acceptable ads” seem like a contradiction? uBlock Origin might be for you. Most ad blockers are designed for the casual user, eschewing features in favor of keeping a low barrier to entry. uBlock Origin instead aims to give the user maximum power to determine what content they wish to see, with block granularity down to individual ads on a single site. It used to lose points for being a little tough to get going, but its developers have improved the interface to offer a simplified dashboard of the nastiness it’s blocking, as well as a much more detailed view if you’re so inclined.

Adblock (Chrome, Firefox, Safari, Edge, Android)

Adblock is one of the earlier blockers out there, and is relatively easy to set and forget. Depending on your block list subscriptions, it may not banish 100 percent of ads from your view, and occasionally struggles with YouTube pre-roll ads.

While its baseline functionality is perfectly serviceable, many privacy advocates take issue with Adblock’s policy on “acceptable ads.” Basically, if your ad meets certain criteria making it less annoying than most, Adblock will let it through. This is something that can be switched off if you’d prefer, but blocking advocates tend to be irritated by the need to go menu diving for what they view as a core function of any blocker—blocking ads.

1Blocker (iOS)

Mobile ads, even when not malicious, are some of the worst out there. We’ve observed tech support scams, forced redirects to PUP downloads, and lock screens on the rise for all mobile platforms. 1Blocker’s free version will give you back control of what code runs on your iPhone, and in some instances will reduce the load on your battery as well.


Anti-trackers

When you visit a website, part of its content will be delivered by domains separate from the one you actually clicked on. Some of these domains host trackers that send information about your browsing habits to third parties, often for the purpose of serving up ads. Not only can this feel like a violation of privacy, but it can also result in longer load times and wasted bandwidth.

This is a little harder to understand in terms of safety. Aren’t all those people up in arms over privacy concerns being a little paranoid? The threat here is not that Google AdWords is going to take your aggregated data and use it to come club you over the head. A more realistic threat is that AdWords and other poorly vetted (that is to say—all of them) ad networks are accumulating data at a scale that is impossible to moderate, police, or secure.

Given that third parties have had a pretty awful track record at protecting customer data stores at scale, perhaps we should let them have less of it. Anti-tracking browser extensions like Ghostery and the EFF’s Privacy Badger are easy to install, and give you back some measure of control over who is holding onto data about your Internet use.
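As a rough sketch of what such extensions look for (the page and tracker domains below are made up, and real extensions match against curated block lists rather than this naive heuristic), one can scan a page's markup for resources loaded from hosts other than the page's own:

```python
# Minimal sketch (not a hardened parser): list the third-party hosts a page
# would load scripts, images, and frames from -- where trackers usually live.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyFinder(HTMLParser):
    def __init__(self, page_host):
        super().__init__()
        self.page_host = page_host
        self.third_party = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            src = dict(attrs).get("src", "")
            host = urlparse(src).netloc
            # Relative URLs have no netloc; same-host resources are first-party.
            if host and host != self.page_host:
                self.third_party.add(host)

sample = """<html><body>
<script src="https://example-tracker.net/t.js"></script>
<img src="https://cdn.news-site.example/logo.png">
<img src="/local.png">
</body></html>"""

finder = ThirdPartyFinder("news-site.example")
finder.feed(sample)
print(sorted(finder.third_party))
```

Note that not every third-party host is a tracker (CDNs are third-party too), which is why real extensions pair this kind of detection with maintained block lists.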

How do these services keep me safe?

At its core, safety is not a product or service; safety is a collection of behaviors.  While we referred to a handful of products above, they’re really just tools in furtherance of an important behavior—keeping control of what data goes out, and what code goes into your system.

Keeping a vigilant eye on both processes can go a long way towards staying safe online without spending a lot of money. To learn a little more about common online threats, check out our post on bad ads here, and our post on avoiding scams here.

Stay safe, everyone!

The post Safer Internet Day 2018: ad blockers and anti-trackers appeared first on Malwarebytes Labs.

A Safer Internet Starts With Us All: Three Things I Pledge to Do

This past weekend, my seven-year-old told me that her homework was to research and learn a poem. While I was clearing away from breakfast, I was also thinking about what poem we might choose. Twenty minutes later, I went up to her room, where she was cozy in her top bunk with her iPad open beside her, beautifully writing out the words to “A Wintry Night.”

She was in her element and enjoying the task, but I didn’t understand how she knew how to do this. “Mummy, it’s easy,” she said. “I just got the iPad, went to Google and put in ‘poems for kids with words.’ I liked this one, so I am copying it.” Impressed, I asked her how she knew what to do. She said she knew to include “kids” in her search, otherwise she wouldn’t get poems that were right for her.

While I was cuddling her really hard for being so creative and smart, I was also a little worried that she knew just what to do. I realized that I needed to have a conversation with my daughter about internet safety.

Building a Safer Internet, One Step at a Time

Today, the U.K. celebrates Safer Internet Day, which is about getting involved to help promote the safe, responsible and positive use of digital technology for children and young people. One in three internet users is a child, and online safety is now in the front of my mind.

The campaign slogan is “Create, connect and share respect: A better internet starts with you,” which made me think, how can I play a bigger part in making the internet a safer place? Below are three things that I plan to do, and I hope you will join in too.

1. Volunteer My Knowledge

I’ve been working in cybersecurity for 3 1/2 years now, and I’ve learned a huge amount. While I don’t always grasp the technical side of things, I fully understand the importance of internet safety and how easy it is to be one click away from danger.

SkillsBuild, which is part of IBM Volunteers, is a call to action for IBM employees and retirees to reach 5 million students in five years — or 1 million students per year — starting in 2018. The initiative includes an activity kit designed to help raise cybersecurity awareness among students. Now is a great time to connect to the program and share helpful tips for staying safe online.

2. Immunize My Computer

It’s not just me and my husband logging on to the internet anymore. I need to think about a more private and safer internet browsing experience, especially for the little eyes in our household.

By 2025, there will be 80 billion internet-connected devices in homes and offices, according to Forbes. It’s time to get smart and provide my family with a more secure gateway to accessing the internet. That’s where Quad9 comes in.

This free service routes the domain names you type into a browser through a secure network of servers around the globe. The system uses real-time threat intelligence from more than a dozen of the industry’s leading cybersecurity companies to help you determine which websites are safe and which ones are known to include malware or other threats. It only takes about four minutes to follow four simple steps to immunize your PC.
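In practice, “immunizing” a device mostly means pointing its DNS settings at Quad9’s resolver address, 9.9.9.9. For the curious, here is a minimal sketch of the wire-format query a resolver client would send it (the transaction ID and hostname are arbitrary examples, and the function name is ours):

```python
import struct

QUAD9 = ("9.9.9.9", 53)  # Quad9's public resolver address

def build_dns_query(hostname: str, txid: int = 0x1234) -> bytes:
    """Encode a standard DNS A-record query in RFC 1035 wire format."""
    # Header: id, flags (recursion desired), 1 question, 0 other records
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question name: length-prefixed labels terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.rstrip(".").split(".")
    ) + b"\x00"
    # QTYPE=A (1), QCLASS=IN (1)
    return header + qname + struct.pack(">HH", 1, 1)

packet = build_dns_query("example.com")
# To actually resolve through Quad9 you would send `packet` over UDP:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(packet, QUAD9)
```

Quad9’s threat filtering then happens server-side: domains on its blocklists simply fail to resolve, so the browser never connects to them.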

3. Report Suspicious Communications

I’m sure that, like me, you receive plenty of phishing emails, but do you delete them? Let’s think about this for a moment: Every time we delete a suspicious email, we lose the opportunity to share what happened, and that intelligence can no longer be acted upon.

By sharing information related to phishing attempts, we can help security teams and law enforcement better understand the underlying cybercriminal activity. In the U.K., users can help disrupt fraudsters by reporting attempted scams to Action Fraud.

A Better Internet Starts With Us All

Safer Internet Day is a great way to raise awareness about cybersecurity, but it’s important to follow these best practices year-round to ensure a more secure online experience for all and, more importantly, help our children develop safe browsing habits. That’s why, on this cold, wintry February night, I pledge to help build a safer internet, because a better internet starts with us all.

The post A Safer Internet Starts With Us All: Three Things I Pledge to Do appeared first on Security Intelligence.

Apple is Sending Some Developers Ad Spend and Install Details For Other People’s Apps

An issue at Apple appears to be resulting in app developers getting emails of ad spend and install summaries for apps belonging to other developers. From a report: The issue -- which appears specific right now to developers using Search Ads Basic, pay-per-install ads that appear as promoted apps when people search on the App Store -- was raised on Twitter by a number of those affected, including prominent developer Steve Troughton-Smith, who posted a screenshot of an email that summarized January's ad spend and install data for another developer's two apps. Several others replied noting the same issue, listing more developers and random apps.

Read more of this story at Slashdot.

ISP’s Wi-Fi weakness highlights privacy and security shortfalls as GDPR approaches

Having been involved in GDPR preparation work for clients, I’ve become more conscious of how other people and organisations access my data. That brings me to how I first noticed one way our privacy could be at risk without us realising.

It was quite by chance I even noticed. I had left my house and forgot to turn my phone’s Wi-Fi network connection off and my data back on. Walking down the street and browsing my phone (obstacles be damned), I suddenly noticed I’d connected to a Wi-Fi network. Turns out it was some random network in one of the houses I happened to be walking past.

How did this happen? Because my ISP provides customers with an option that gives visiting guests free Wi-Fi for up to five devices. They don’t need to be authenticated on your network; they just have to be a customer of the same ISP already. (I hadn’t known about that option until recently, probably due to my own lack of research and not reading the documents my ISP sent me.)

Security fail

Because I work in the information security industry, I’m usually more sensitive to, and aware of, what technologies I use. (Just not in this case.) So, I was a bit miffed that this got past me so easily without my ISP drawing more attention to it.

While connected to that random network, I had no clue who was managing it, who could intercept my traffic, or what else they could do with the data. What if I had been logging in to my bank, or downloading sensitive data? And what if I were to run Wireshark on my neighbour’s guest network when I know they’re having a party or have people over?

More worryingly, it’s not possible to disable this “feature” on your router manually. You have to log in to your account with the ISP and ask them to deactivate it. This then stops you from being able to connect to others’ networks, as well as them connecting to yours. Deactivation can take “up to” 72 hours.

Putting privacy first

My ISP has an “opt out” policy for its Wi-Fi sharing feature. I don’t know about you, but for me to opt out of something, I need to be made aware of it properly. Other customers of the same ISP complained on Twitter they weren’t aware of these terms and conditions. When an ISP enables a feature giving random people I don’t know access to my network, without me having input over the controls in place to protect both my and my guests’ data, it really needs to consider having an “opt in” policy instead.

There are two sides to the privacy debate. Many of us want to live in a future where we are all connected. Some want that “Smart City” utopia with a free flow of useful information. But we also want to know when this occurs – and that we have consented to sharing our information. With GDPR fast approaching, it’s never been more important to know who has access to your data and who they share it with.  Let’s head towards utopia by all means – provided we keep our fundamental rights to privacy intact along the way.

The post ISP’s Wi-Fi weakness highlights privacy and security shortfalls as GDPR approaches appeared first on BH Consulting.

HHS Announces $3.5 Million Settlement with Fresenius Medical Care

On February 1, 2018, the Department of Health and Human Services’ Office for Civil Rights (“OCR”) announced a settlement with dialysis clinic operator Fresenius Medical Care (“Fresenius”). Fresenius will pay OCR $3.5 million to settle claims brought under the Health Insurance Portability and Accountability Act rules, alleging that lax security practices led to five breaches of electronic protected health information.

The breaches, which occurred at Fresenius facilities in Alabama, Arizona, Florida, Georgia and Illinois from February 23 to July 18, 2012, form the basis of OCR’s claims. According to the settlement, these breaches led to the exposure of 521 patients’ health data.

In announcing the settlement, OCR stated that Fresenius “failed to conduct an accurate and thorough risk analysis of potential risk and vulnerabilities to the confidentiality, integrity, and availability” of protected health data at its locations. Although Fresenius did not admit fault in the settlement, the company agreed to complete a risk analysis and risk management plan, update facility access controls, develop an encryption report and update employees on new policies and procedures.

Episode 82: the skinny on the Autosploit IoT hacking tool and a GDPR update from the front lines

In this week’s episode of The Security Ledger Podcast (#82), we take a look at Autosploit, the new Internet of Things attack tool that was published on the open source code repository Github last week. Brian Knopf of the firm Neustar joins us to talk about what the new tool might mean for attacks on Internet of Things endpoints in 2018....

Read the whole entry... »

Related Stories

CIPL Submits Comments to Article 29 WP’s Updated BCR Working Documents

On January 18, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP submitted formal comments to the Article 29 Working Party (the “Working Party”) on its updated Working Documents, which include a table with the elements and principles found in Binding Corporate Rules (“BCRs”) and Processor Binding Corporate Rules (the “Working Documents”). The Working Documents were adopted by the Working Party on October 3, 2017, for public consultation.

In its comments, CIPL recommends several changes or clarifications the Working Party should incorporate in its final Working Documents.

Comments Applicable to Both Controller and Processor BCRs

  • The Working Documents should clarify that, with respect to the BCR application, providing confirmation of assets to pay for damages resulting from a BCR-breach by members outside of the EU does not extend to fines under the GDPR. Additionally, the Working Party should clarify that access to sufficient assets, such as a guarantee from the parent company, is sufficient to provide valid confirmation.
  • The Working Document should confirm that bringing existing BCRs in line with the GDPR requires updating the BCRs in line with the Working Documents and sending the updated BCRs to the respective supervisory authority.
  • The Working Party should clarify that companies currently in the process of BCR approval through a national mutual recognition procedure should be treated the same as fully approved BCRs, and must simply update the BCRs in line with the GDPR.

Comments Applicable to BCR Controllers (“BCR-C”) Only

  • The Working Party should clarify that companies with approved BCR-C do not have to implement additional controller-processor contracts reiterating the processors’ obligations under Article 28(3) of the GDPR with respect to internal transfers between controllers and processors within the same group of companies.
  • The Working Party should also clarify that BCRs only need to include the requirement that individuals benefitting from third-party beneficiary rights be provided with the information as required by Article 13 and 14 of the GDPR. The BCRs do not need to restate the actual elements of these provisions.

Comments Applicable to BCR Processors Only

  • The Working Documents should emphasize that an individual’s authority to enforce the duty of a processor to cooperate with the controller is limited to situations where cooperation is required to allow the individual to exercise their rights or to make a complaint.
  • The Working Party should remove the requirement that processors must open their facilities for audit, and clarify that the completion of questionnaires or the provision of independent audit reports are sufficient to meet the requirements of Article 28(3)(h). Furthermore, the Working Documents should make clear that certifications can be used in accordance with Article 28(5) to demonstrate compliance with Article 28(3)(h).

General BCR Recommendations

  • The Working Party should clarify that BCR-approved companies are deemed adequate and transfers between two BCR-approved companies (either controllers or processors) or transfers from any controller (not BCR-approved) to a BCR-approved controller are permitted.
  • The status for existing and UK-approved BCRs post-Brexit should be clarified, along with the future role of the UK ICO with regard to BCRs and the situation for new BCR applications post-Brexit.
  • The Working Party should highlight the importance of BCR interoperability with other transfer mechanisms, and propose that the EU Commission consider and promote such interoperability through appropriate means and processes.
  • The Working Party should recommend the EU Commission consider third-party BCR approval by approved certification bodies or “Accountability Agents” and/or a self-certified system for BCRs, which would streamline the BCR approval process and facilitate faster processing times.

To read the above recommendations in more detail, along with all of CIPL’s other recommendations on BCRs, view the full paper.

CIPL’s comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 90 individual private sector organizations. As part of this initiative, CIPL will continue to provide formal input about other GDPR topics the Working Party prioritizes.

7 steps for getting your organization GDPR-ready

While the EU has long had established data protection standards and rules, its regulators haven’t truly commanded compliance until now. Under the General Data Protection Regulation (GDPR), financial penalties for data protection violations are severe – €20 million (about $24.8 million USD) or 4 percent of annual global turnover (whichever is higher), to be exact. What’s more, the GDPR does not merely apply to EU businesses, but to any organization processing personal data of EU … More
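The headline fine formula is simple enough to state as code; this sketch (the function name is ours, not from any regulation text) just takes the greater of the fixed €20 million cap and 4 percent of annual global turnover:

```python
# GDPR's headline maximum fine: whichever is higher of a fixed EUR 20M
# cap or 4% of annual global turnover.
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

print(max_gdpr_fine(100_000_000))    # 4% = EUR 4M, so the EUR 20M floor applies
print(max_gdpr_fine(2_000_000_000))  # 4% = EUR 80M, which exceeds the floor
```

The practical consequence is that the fixed cap dominates for any organization with turnover under €500 million; above that, the percentage term takes over.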

CIPL Submits Comments to Article 29 WP’s Proposed Guidelines on Transparency

On January 29, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP submitted formal comments to the Article 29 Working Party (the “Working Party”) on its Guidelines on Transparency (the “Guidelines”). The Guidelines were adopted by the Working Party on November 28, 2017, for public consultation.

CIPL acknowledges and appreciates the Working Party’s emphasis on user-centric transparency and the use of layered notices to achieve full disclosure, along with its statements on the use of visualization tools and the importance of avoiding overly technical or legalistic language in providing transparency. However, CIPL also identified several areas in the Guidelines that would benefit from further clarification or adjustment.

In its comments to the Guidelines, CIPL recommends several changes or clarifications the Working Party should incorporate in its final guidelines relating to elements of transparency under the EU GDPR, information to be provided to the data subject, information related to further processing, exertion of data subjects’ rights, and exceptions to the obligation to provide information.

Some key recommendations include:

  • Clear and Concise yet Comprehensive Disclosure: The Guidelines should more clearly acknowledge the tension between asking for clear and concise notices and including all of the information required by the GDPR and recommended by the Working Party. CIPL believes Articles 13 and 14 of the GDPR already require sufficient information, and the risk-based approach gives organizations the opportunity to prioritize which information should be provided.
  • Consequences of Processing: The Working Party should amend their “best practice” recommendation that controllers “spell out” what the most important consequences of the processing will be. The Working Party should clarify that in providing information beyond what is required under the GDPR, controllers must be able to exercise their judgement on whether and how to provide such information.
  • Use of Certain Qualifiers: CIPL recommends removing the Working Party’s statement that qualifiers such as “may,” “might,” “some,” “often” and “possible” be avoided in privacy statements. Sometimes these terms are more appropriate than others. For instance, saying certain processing “will occur” is not as accurate as “may occur” when it is not certain whether the processing will in fact occur.
  • Proving Identity Orally: The Guidelines state that information may be provided orally to a data subject on request, provided that their identity is proven by other non-oral means. CIPL believes the Working Party should revise this statement, as voice recognition or verbal identity confirming questions and answers are valid mechanisms of proving one’s identity orally.
  • Updates to Privacy Notices: The Working Party should remove its suggestion that any changes to an existing privacy statement or notice must be notified to individuals. CIPL believes communications to individuals should be required only for changes having a significant impact.
  • Reminder Notices: The Working Party should remove the recommendation that the controller send reminder notices to individuals when processing occurs on an ongoing basis, even when they have already received the information. This is not required by the GDPR and individuals may feel overwhelmed or frustrated by such constant reminders. Individuals should, however, be able to easily pull such information from an accessible location.
  • New Purposes of Processing: The Guidelines should amend the statement and example suggesting that in addition to providing individuals new information in connection with a new purpose of processing, the controller, as a matter of best practice, should re-provide the individual with all of the information under the notice requirement received previously. CIPL believes this could potentially distract individuals from focusing on any new key information which could undermine transparency, and it should be up to the data controller to determine whether the re-provision of information would be useful.
  • Active Steps: The Working Party should clarify its statement that individuals should not have to take “active steps” to obtain information covered by Articles 13 and 14 of the GDPR, to the effect that clicking links to access notices would not constitute taking an “active step.”
  • Compatibility Analyses: The Working Party states that in connection with processing for compatibility purposes, organizations should provide individuals with “further information on the compatibility analysis carried out under Article 6(4).” CIPL believes such a requirement undermines transparency, as the information would provide little benefit to an individual’s understanding of the organization’s data processing, and burden organizations who have to reform, redact, compose and deliver such information.
  • Disproportionate Efforts: The Guidelines should acknowledge that the disproportionate efforts clause (Article 14(5)(b)) can be relied upon by controllers for purposes other than archiving in the public interest, scientific or historical research purposes or for statistical purposes (e.g., confirming identity or preventing fraud). The Working Party should also revise its statement that controllers who rely on Article 14(5)(b) should have to carry out a balancing exercise to assess the effort of the controller to provide the information versus the impact on the individual if not provided with the information. The GDPR does not require this and the disproportionality at issue refers to the disproportionality between the effort associated with the provision of such information and the intended data use.

To read the above recommendations in more detail along with all of CIPL’s other recommendations on transparency, view the full paper.

CIPL’s comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 90 individual private sector organizations. As part of this initiative, CIPL will continue to provide formal input about other GDPR topics the Working Party prioritizes.

DuckDuckGo CEO: ‘Google and Facebook Are Watching Our Every Move Online. It’s Time To Make Them Stop’

An anonymous reader shares a report from CNBC, written by Gabriel Weinberg, CEO and founder of DuckDuckGo: You may know that hidden trackers lurk on most websites you visit, soaking up your personal information. What you may not realize, though, is 76 percent of websites now contain hidden Google trackers, and 24 percent have hidden Facebook trackers, according to the Princeton Web Transparency & Accountability Project. The next highest is Twitter with 12 percent. It is likely that Google or Facebook are watching you on many sites you visit, in addition to tracking you when using their products. As a result, these two companies have amassed huge data profiles on each person, which can include your interests, purchases, search, browsing and location history, and much more. They then make your sensitive data profile available for invasive targeted advertising that can follow you around the Internet.

[...] So how do we move forward from here? Don't be fooled by claims of self-regulation, as any useful long-term reforms of Google and Facebook's data privacy practices fundamentally oppose their core business models: hyper-targeted advertising based on more and more intrusive personal surveillance. Change must come from the outside. Unfortunately, we've seen relatively little from Washington. Congress and federal agencies need to take a fresh look at what can be done to curb these data monopolies. They first need to demand more algorithmic and privacy policy transparency, so people can truly understand the extent of how their personal information is being collected, processed and used by these companies. Only then can informed consent be possible. They also need to legislate that people own their own data, enabling real opt-outs. Finally, they need to restrict how data can be combined including being more aggressive at blocking acquisitions that further consolidate data power, which will pave the way for more competition in digital advertising.
Until we see such meaningful changes, consumers should vote with their feet.

Read more of this story at Slashdot.

CIPL Submits Comments to Article 29 WP’s Proposed Guidelines on Consent

On January 29, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP submitted formal comments to the Article 29 Working Party (the “Working Party”) on its Guidelines on Consent (the “Guidelines”). The Guidelines were adopted by the Working Party on November 28, 2017, for public consultation.

CIPL acknowledges and appreciates the Working Party’s elaboration on some of the consent-related requirements, such as providing information relevant to consent in layered format and the acknowledgment of both the push and pull models for providing such information. Additionally, CIPL welcomes the clear acknowledgement that controllers have the flexibility to develop consent experiences suitable to their organizations. However, CIPL also identified several areas in the Guidelines that would benefit from further clarification or adjustment.

In its comments to the Guidelines, CIPL recommends several changes or clarifications the Working Party should incorporate in its final guidelines relating to the elements of valid consent, rules on obtaining explicit consent, the interaction between consent and other processing grounds in the EU GDPR, and specific areas of concern such as scientific research and consent obtained under the Data Protection Directive.

Some key recommendations include:

  • Status of Consent: The Working Party should revise its statement that when initiating processing, controllers must always consider whether consent is the appropriate ground. No processing ground, including consent, is privileged over the other.
  • Imbalance of Power: The Guidelines should clarify what constitutes an imbalance of power outside of cases involving public authorities and employers, and emphasize that such imbalances occur in only narrow situations where the individual truly does not have a meaningful opportunity to consent.
  • Conditionality: The Working Party should clarify that incentivizing an individual (e.g., by reducing the generally applicable fee or providing additional features or services) to consent to additional processing should not be deemed inappropriate pressure preventing an individual from exercising their free will.
  • Informed: While it should be easy to identify directly what information relates to the consent sought, the Guidelines should clarify that it may be important to include such information in context with other information to provide a full picture to the individual and safeguard transparency.
  • Unambiguous Indication of Wishes: Consent must be expressed by a clear affirmative act and the Guidelines note that “merely proceeding with a service” cannot be regarded as such an act. The Working Party should clarify that “merely proceeding with a service” refers to a situation where no affirmative action is taking place at all. Completing a free-text field or other similar action may constitute a valid explicit affirmative act.
  • Obtaining Explicit Consent: The Guidelines should clarify that mechanisms for “regular” consent, as defined in the GDPR, may also meet the “explicit consent” standard.
  • Withdrawing Consent: The Working Party should clarify that withdrawal of consent should not automatically result in deletion of data processed prior to withdrawal. This may be contrary to the individual’s wishes, potentially interfere with other data subject rights (e.g., portability), and may even conflict with other regulations such as those regulating clinical trials or research.
  • Alternative Processing Grounds: The Guidelines should clarify that it is possible to have multiple grounds for one and the same processing, and if consent is withdrawn but another ground is available and the conditions for the validity of the alternative ground are met, the controller may continue to process the data.
  • Scientific Research: The Working Party should clarify that scientific research goes beyond medical research and also encompasses private sector R&D. Additionally, the Guidelines should revise the recommendation that providing a comprehensive research plan is a way to compensate for a lack of purpose specification related to research, as disclosures of such plans would carry risks for organizations’ intellectual property rights, undermine innovation and diminish transparency.
  • Consent under the Directive: The Working Party should revise its statement that all consents obtained under the Directive that do not meet the GDPR standard must be re-obtained. Organizations should only have to re-obtain such consents if there is a material change in the processing and its purposes, the consents do not comply with the GDPR rules on conditionality (Article 7(4)), or the requirements of Article 8(1) on processing children’s data have not been met.

To read the above recommendations in more detail, along with all of CIPL’s other recommendations on consent, view the full paper.

CIPL’s comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 90 individual private sector organizations. As part of this initiative, CIPL will continue to provide formal input about other GDPR topics the Working Party prioritizes.

Facebook Publishes Privacy Principles and Announces Introduction of Privacy Center

On January 28, 2018, Facebook published its privacy principles and announced that it will centralize its privacy settings in a single place. The principles were announced in a newsroom post by Facebook’s Chief Privacy Officer and include:

  • “We give you control of your privacy.”
  • “We help people understand how their data is used.”
  • “We design privacy into our products from the outset.”
  • “We work hard to keep your information secure.”
  • “You own and can delete your information.”
  • “Improvement is constant.”
  • “We are accountable.”

In conjunction with the publication of the privacy principles, Facebook also announced the creation of a new privacy center and an educational video campaign for its users that focuses on advertising, reviewing and deleting old posts, and deleting accounts. The videos will appear in users’ news feeds and will be refreshed throughout the year.

Messaging App Telegram Pulled From Apple’s App Store Due To ‘Inappropriate Content’

An anonymous reader shares a report: Apple has removed Telegram's official app from its iOS App Store. The app disappeared yesterday, shortly after Telegram launched a rewritten Telegram X app for Android. Telegram X is currently in testing on iOS, and it was also removed from the App Store. "We were alerted by Apple that inappropriate content was made available to our users and both apps were taken off the App Store," says Telegram CEO Pavel Durov. "Once we have protections in place we expect the apps to be back on the App Store."

Read more of this story at Slashdot.

Smashing Security #063: Carole’s back!


Fitness trackers breaching your privacy, how anyone can create convincing celebrity porn, and how ransomware authors are getting ripped off by scammers.

All this and much much more is discussed in the latest edition of the "Smashing Security" podcast by computer security veterans Graham Cluley and Carole Theriault, who are joined this week by special guest Maria Varmazis.

GSA to Upgrade Cybersecurity Requirements

Recently, the General Services Administration (“GSA”) announced its plan to upgrade its cybersecurity requirements in an effort to build upon the Department of Defense’s new cybersecurity requirements, DFARS clause 252.204-7012, which became effective on December 31, 2017.

The first proposed rule, GSAR Case 2016-G511 “Information and Information Systems Security,” will require that federal contractors “protect the confidentiality, integrity and availability of unclassified GSA information and information systems from cybersecurity vulnerabilities and threats in accordance with the Federal Information Security Modernization Act of 2014 and associated Federal cybersecurity requirements.” The proposed rule will apply to “internal contractor systems, external contractor systems, cloud systems and mobile systems.” It will mandate compliance with applicable controls and standards, such as those of the National Institute of Standards and Technology, and will update existing GSAR clauses 552.239-70 and 552.239-71, which address data security issues. Contracting officers will be required to incorporate these cybersecurity requirements into their statements of work. The proposed rule is scheduled to be released in April 2018. Thereafter, the public will have 60 days to offer comments.

The second proposed rule, GSAR Case 2016-G515 “Cyber Incident Reporting,” will “update requirements for GSA contractors to report cyber incidents that could potentially affect GSA or its customer agencies.” Specifically, contractors will be required to report any cyber incident “where the confidentiality, integrity or availability of GSA information or information systems are potentially compromised.” The proposed rule will establish a timeframe for reporting cyber incidents, detail what the report must contain and provide points of contact for filing the report. The proposed rule is intended to update the existing cyber reporting policy within GSA Order CIO-9297.2 that did not previously undergo the rulemaking process. Additionally, the proposed rule will establish requirements for contractors to preserve images of affected systems and impose training requirements for contractor employees. The proposed rule is scheduled to be released in August 2018, and the public will have 60 days to comment on the proposed rule.

Although the proposed rules have not yet been published, it is anticipated that they will share similarities with the Department of Defense’s new cybersecurity requirements, DFARS clause 252.204-7012.

UK Court of Appeal Rules DRIPA Inconsistent with EU Law

On January 30, 2018, the UK Court of Appeal ruled that the Data Retention and Investigatory Powers Act (“DRIPA”) was inconsistent with EU law. The judgment, pertaining to the now-expired act, is relevant to current UK surveillance practices and is likely to result in major amendments to the Investigatory Powers Act (“IP Act”), the successor of DRIPA.

In the instant case, the Court of Appeal ruled that DRIPA was inconsistent with EU law as it permitted access to communications data when the objective was not restricted solely to fighting serious crime. Additionally, the Court held that DRIPA lacked adequate safeguards since it permitted access to communications data without subjecting such access to a prior review by a court or independent administrative authority. The ruling follows the judgment of the Court of Justice of the European Union (“CJEU”), to which the Court of Appeal referred questions regarding the instant case in 2015.

The IP Act, which came into force in 2017, largely replicates and further expands upon the powers contained in DRIPA. Though the present judgment does not change the way UK law enforcement agencies can currently access communications data for the detection and disruption of crime under the IP Act, the UK government is currently facing a separate case challenging the IP Act in the High Court, due to be heard in February 2018.

Reacting to the 2016 ruling of the CJEU, the UK government in late 2017 published a consultation document and proposed amendments to the IP Act which aimed to address the judgment of the CJEU. The proposed changes were deemed to fall short of the CJEU ruling by Liberty, the UK human rights organization bringing the proceedings against the IP Act in the High Court.

The present case and the future ruling of the High Court on the IP Act could impact the UK significantly when Brexit negotiations turn to discussions on adequacy and data sharing between the UK and the EU. UK surveillance legislation that is incompatible with EU data protection law could bring a halt to data flows between EU and UK law enforcement agencies and organizations.

In 2018, Data Security Is No Longer an Underdog

Underdogs are on everyone’s mind these days, especially with so many winter sporting events just around the corner. Rooting for unexpected or undervalued players and seeing them come out on top is a satisfying, exhilarating experience that can lead us to adopt new perspectives and a greater sense of optimism and possibility.

In the enterprise security world, data security has traditionally been viewed as an underdog. Don’t get me wrong — data security is a necessary control to reduce risks, lower costs and support compliance. In fact, given the increasing number and sophistication of data breaches, explosion of sensitive data being exchanged across dizzyingly complex IT environments and mounting regulatory pressures across the globe, data security is more important than ever when it comes to protecting an enterprise’s crown jewels. This is reflected not only in the conversations I engage in with my own colleagues and clients, but also in market data and predictions. According to one recent report, the database security market alone is expected to grow to $7.01 billion by 2022, up from $2.96 billion this past year.

Even with all this in mind, data security is still not often thought of as a business driver — something that can actually increase revenue and facilitate positive results, rather than just protecting against loss. If we take a closer look at where technology is headed and where business needs are emerging, however, this perception changes. Data security, once an underdog, is becoming a strategic driver capable of enabling competitive differentiation for those who approach it with the right mindset.

Emphasizing Data Security Controls Enhances Customer Trust

According to the “2017 Cost of Data Breach Study,” failure to retain customers after a security incident accounts for a significant portion of costs incurred. Customers are making purchasing decisions based on trust that an organization will protect their privacy. Given the enhanced public scrutiny on the issue after multiple high-profile data breaches occurred in the past year, it’s fair to predict that this trend will continue into 2018 and beyond. In addition, new compliance regulations coming into effect underscore the need to protect the privacy rights of data subjects.

Building and maintaining customer trust is one of the most important — and challenging — things a company must do to maintain competitive advantage and enhance market share. A recent Deloitte study reported that 73 percent of consumers would reconsider using a company if it failed to keep their data safe, but only 51 percent would reconsider if they were charged a higher price for a similar product. Data security and data privacy can function as a competitive differentiator. By ensuring that they’ll keep customer information safe — or, better yet, safer than the competition — from external threats and internal prying eyes, organizations can attract security-savvy customers at a higher rate than competitors who fail to highlight these critical capabilities.

Robust Data Security Facilitates Digital Transformation

Another critical component in the race to capture and retain customers is the ability to accelerate digital transformation initiatives. We all know that digital transformation cannot occur without a mature security program in place to support it, but how does data security drive this transformation forward? If we look at this question through the lens of zero trust and security by design, data security’s centrality to digital transformation becomes clear: By securing the data itself, organizations are able to set it free.

Infusing data security everywhere throughout the organization can empower your mobile workforce, enable faster cloud migrations and assuage the threat of privacy or ethical violations that comes along with leveraging big data analytics. By adopting data security as a foundational element of digital transformation, organizations can attain it more quickly and activate the multitude of business benefits it touts.

Read the complete Forrester report: The future of data security and privacy

Data Security Is Now Top Dog

Enhanced customer trust, faster digital transformation and the revenue associated with both are just some of the ways in which data security can function as a business driver. As a result, these technologies are no longer the underdogs of the security world. As data security takes on a more central role to enabling businesses around the globe to reach their full potential, the question becomes, “What’s next?”

To learn more, download the Forrester Report, “The Future of Data Security and Privacy: Growth and Competitive Differentiation.”

The post In 2018, Data Security Is No Longer an Underdog appeared first on Security Intelligence.

Open Banking and PSD2: Disruption or Confusion?

If you’re not yet familiar with the concept of open banking, you’re not alone. U.K. consumer advice firm Which? reported that 92 percent of the public is unaware of the initiative, which officially launched on Jan. 13, 2018, to promote the use of application programming interfaces (APIs) to enable developers to build applications to augment banking services.

Open banking is the main driver behind the EU’s Revised Payment Service Directive (PSD2), which requires the largest financial institutions in the U.K. to release their data in a standardized format so that authorized third parties can share it more easily. Despite the lack of public awareness, this initiative has the potential to bring many benefits to the banking industry, including more valuable data insights and improved customer experience. However, it also introduces additional challenges from a cybersecurity, antifraud and data protection perspective.

A Brief History of Open Banking

In October 2015, the European Parliament revised its original Payment Services Directive (PSD1) to get the open banking ball rolling. A year later, the U.K. Competition and Markets Authority (CMA) identified competition concerns in the retail business and consumer current account markets. The agency subsequently issued a ruling that is consistent with PSD2, requiring the nine largest banks in the U.K. to grant licensed providers access to certain types of data. The open banking initiative is designed to comply with PSD2, providing the legal framework for the CMA requirements.

In short, PSD2 aims to increase regulation between banks and approved third-party payment providers (TPPs) regarding data they hold, access and share. It grants these entities access to payment accounts that hold information related to credit cards, mortgages, loans, savings and more via APIs. Another goal of open banking is to increase choice, control and protection over how consumers manage financial transactions and help promote the ongoing development and innovation of digital payments.

TPPs are categorized as Payment Initiation Service Providers (PISPs) and Account Information Service Providers (AISPs). Banks may already fall into one or both categories, but the initiative opens the market to retailers, insurers, price comparison websites, utility providers, financial technology companies and more.
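The consent-scoped access at the heart of PSD2 — a customer authorizes a specific, licensed TPP to see specific categories of account data — can be sketched in a few lines. Everything below is hypothetical and for illustration only: the `Consent` class, scope names such as `accounts:read`, and the `authorize` helper are invented here, not taken from any real open banking API specification.

```python
from dataclasses import dataclass

# Hypothetical model of PSD2-style scoped access. Real open banking APIs
# define their own consent objects, scopes and token formats.

@dataclass
class Consent:
    tpp_id: str        # the licensed third-party provider the customer approved
    scopes: frozenset  # data categories covered, e.g. {"accounts:read"}
    expired: bool = False

def authorize(consent: Consent, tpp_id: str, scope: str) -> bool:
    """Grant API access only to the TPP the customer consented to,
    and only for the data categories covered by that consent."""
    return (not consent.expired
            and consent.tpp_id == tpp_id
            and scope in consent.scopes)

consent = Consent("pisp-123", frozenset({"accounts:read", "payments:initiate"}))
print(authorize(consent, "pisp-123", "payments:initiate"))  # True
print(authorize(consent, "pisp-123", "transactions:read"))  # False: outside scope
print(authorize(consent, "aisp-999", "accounts:read"))      # False: different TPP
```

The point of the sketch is that the bank's API, not the TPP, enforces the boundary: a provider holding a valid consent for account balances still cannot read transactions, and a consent granted to one provider is useless to another.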

Key Concerns Related to Open Banking

PSD2 has the potential to be a catalyst for disruption in the banking and financial services industry, but there is widespread confusion about its purpose and benefit to customers. Let’s take a look at some of the key concerns related to open banking.

1. Ethics

Customers will begin to rely significantly on TPPs to ethically manage their financial transaction data. Some experts are concerned that third-party access to accounts and data will create opportunities for TPPs to intrusively profile customers. This profiling may increase predatory lending, where TPPs target vulnerable borrowers with invasive advertising to sell products and services. Access to financial data puts significant power in the hands of lenders.

2. Cybercrime

Providing access to multiple core banking platforms will significantly increase attack vectors for cybercriminals, meaning that banks will need to reassess and re-engineer their security controls and processes. Applying these controls on legacy IT systems may be extremely complex and costly. Conversely, some of the smaller new entrants may not be equipped with the expertise required to manage fraud, human error, identity theft and the loss of customer data.

3. Social Engineering

Cyberattacks will not be limited to exploiting technical vulnerabilities. Open banking may trigger an increase in social engineering attacks against customers who may be inexperienced using new technology platforms. Risks include phishing, malware, fraudulent apps, and physical theft or loss of endpoint devices that could provide access to third parties.

4. Compliance

As we have seen in recent years, not even highly regulated banks and financial services organizations are impervious to cyberattacks, and the aggregated customer data held and managed by TPPs via open APIs could be an easy target. There is an increased risk of information asymmetry, which could result in significant fines under various privacy regulations. Reputational risk is at stake if data is lost or tampered with in the chain of TPPs.

5. Privacy

The issues of consent, data privacy and permission need to be carefully reassessed. Consumers must fully understand what they are agreeing to, and where, when, how and with whom their data is being shared. According to McKinsey, “There is a fine line to walk: educating and empowering consumers without confusing, scaring or boring them.”

Potential Benefits of PSD2

Open banking also holds potential to bring numerous benefits to financial institutions and their customers. Below are some of the most significant.

1. Financial Control

Open banking and PSD2 will transform the banking sector much like the insurance industry changed after the emergence of price comparison websites. Customers will have greater visibility into and control of their finances to make efficient and meaningful decisions.

2. Security

PSD2 drives both PISPs and AISPs to embed security and privacy directly into the APIs they are designing and implementing. However, they must balance security and the user experience. Open banking also provides an opportunity for banks to reassess their business model and security posture.

3. Increased Competition

Open banking will generate increased competition between established providers and innovative new entrants aiming to make existing products more flexible, bespoke and convenient. These entities include the likes of Amazon, Apple, Google and Facebook, who have agility in their investment capabilities as well as an advanced technological architecture to utilize customer data insights at scale.

4. Fraud Reduction

Despite concerns of increased fraud, PSD2 enhances existing consumer protection rules through increased security requirements. This includes the mandatory use of strong customer authentication, such as two-factor authentication (2FA) with biometrics. The data gathered will be enriched to reduce the number of false positives, thus ensuring that the customer experience is not adversely impacted.
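The strong customer authentication PSD2 mandates frequently takes the form of one-time codes as a second factor. As a rough illustration of how such codes are generated, here is a minimal TOTP implementation following RFC 6238 (a sketch for illustration, not a production authenticator — real deployments add secret provisioning, clock-drift windows and rate limiting):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    now = time.time() if at is None else at
    return hotp(secret, int(now // step))

# RFC 6238 test time 59s falls in time window 1; the 6-digit code is "287082".
print(totp(b"12345678901234567890", at=59))
```

Because the code depends on a shared secret and the current time window, a phished password alone is not enough to authorize a payment, which is the fraud-reduction logic behind the mandate.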

5. Innovation

Open banking will also help accelerate the use of blockchain and cryptocurrencies in mainstream financial services. As cryptocurrencies such as bitcoin and Ethereum progressively become acceptable forms of payment, providers can take the opportunity to embed cryptocurrency payment mechanisms in their open banking platforms.

Open Banking Gains Momentum

As we march into 2018, the open banking initiative will gain momentum as banks and financial services organizations in the U.K. change their security, antifraud, and privacy policies and controls to comply with PSD2. These policies must include strong governance as well as robust processes and technical controls to protect the privacy and security of customer data.

Like any initiative that introduces sweeping changes to an industry as vital as financial services, PSD2 will come with its fair share of growing pains. However, organizations that embrace open banking and tailor their security strategies accordingly will unlock the benefits of shared data for their business and their customers.

Read the white paper: Harnessing the power of open banking

The post Open Banking and PSD2: Disruption or Confusion? appeared first on Security Intelligence.

After Section 702 Reauthorization

For over a decade, civil libertarians have been fighting government mass surveillance of innocent Americans over the Internet. We've just lost an important battle. On January 18, President Trump signed the renewal of Section 702, and domestic mass surveillance became effectively a permanent part of US law.

Section 702 was initially passed in 2008, as an amendment to the Foreign Intelligence Surveillance Act of 1978. As the title of that law says, it was billed as a way for the NSA to spy on non-Americans located outside the United States. It was supposed to be an efficiency and cost-saving measure: the NSA was already permitted to tap communications cables located outside the country, and it was already permitted to tap communications cables from one foreign country to another that passed through the United States. Section 702 allowed it to tap those cables from inside the United States, where it was easier. It also allowed the NSA to request surveillance data directly from Internet companies under a program called PRISM.

The problem is that this authority also gave the NSA the ability to collect foreign communications and data in a way that inherently and intentionally also swept up Americans' communications as well, without a warrant. Other law enforcement agencies are allowed to ask the NSA to search those communications, give their contents to the FBI and other agencies and then lie about their origins in court.

In 1978, after Watergate had revealed the Nixon administration's abuses of power, we erected a wall between intelligence and law enforcement that prevented precisely this kind of sharing of surveillance data under any authority less restrictive than the Fourth Amendment. Weakening that wall is incredibly dangerous, and the NSA should never have been given this authority in the first place.

Arguably, it never was. The NSA had been doing this type of surveillance illegally for years, something that was first made public in 2006. Section 702 was secretly used as a way to paper over that illegal collection, but nothing in the text of the later amendment gives the NSA this authority. We didn't know that the NSA was using this law as the statutory basis for this surveillance until Edward Snowden showed us in 2013.

Civil libertarians have been battling this law in both Congress and the courts ever since it was proposed, and the NSA's domestic surveillance activities even longer. What this most recent vote tells me is that we've lost that fight.

Section 702 was passed under George W. Bush in 2008, reauthorized under Barack Obama in 2012, and now reauthorized again under Trump. In all three cases, congressional support was bipartisan. It has survived multiple lawsuits by the Electronic Frontier Foundation, the ACLU, and others. It has survived the revelations by Snowden that it was being used far more extensively than Congress or the public believed, and numerous public reports of violations of the law. It has even survived Trump's belief that he was being personally spied on by the intelligence community, as well as any congressional fears that Trump could abuse the authority in the coming years. And though this extension lasts only six years, it's inconceivable to me that it will ever be repealed at this point.

So what do we do? If we can't fight this particular statutory authority, where's the new front on surveillance? There are, it turns out, reasonable modifications that target surveillance more generally, and not in terms of any particular statutory authority. We need to look at US surveillance law more generally.

First, we need to strengthen the minimization procedures to limit incidental collection. Since the Internet was developed, all the world's communications travel around in a single global network. It's impossible to collect only foreign communications, because they're invariably mixed in with domestic communications. This is called "incidental" collection, but that's a misleading name. It's collected knowingly, and searched regularly. The intelligence community needs much stronger restrictions on which American communications channels it can access without a court order, and rules that require they delete the data if they inadvertently collect it. More importantly, "collection" should be defined as the point the NSA takes a copy of the communications, and not later when they search their databases.

Second, we need to limit how other law enforcement agencies can use incidentally collected information. Today, those agencies can query a database of incidental collection on Americans. The NSA can legally pass information to those other agencies. This has to stop. Data collected by the NSA under its foreign surveillance authority should not be used as a vehicle for domestic surveillance.

The most recent reauthorization modified this lightly, forcing the FBI to obtain a court order when querying the 702 data for a criminal investigation. There are still exceptions and loopholes, though.

Third, we need to end what's called "parallel construction." Today, when a law enforcement agency uses evidence found in this NSA database to arrest someone, it doesn't have to disclose that fact in court. It can reconstruct the evidence in some other manner once it knows about it, and then pretend it learned of it that way. This right to lie to the judge and the defense is corrosive to liberty, and it must end.

Pressure to reform the NSA will probably first come from Europe. Already, European Union courts have pointed to warrantless NSA surveillance as a reason to keep Europeans' data out of US hands. Right now, there is a fragile agreement between the EU and the United States -- called "Privacy Shield" -- that requires Americans to maintain certain safeguards for international data flows. NSA surveillance goes against that, and it's only a matter of time before EU courts start ruling this way. That'll have significant effects on both government and corporate surveillance of Europeans and, by extension, the entire world.

Further pressure will come from the increased surveillance coming from the Internet of Things. When your home, car, and body are awash in sensors, privacy from both governments and corporations will become increasingly important. Sooner or later, society will reach a tipping point where it's all too much. When that happens, we're going to see significant pushback against surveillance of all kinds. That's when we'll get new laws that revise all government authorities in this area: a clean sweep for a new world, one with new norms and new fears.

It's possible that a federal court will rule on Section 702. Although there have been many lawsuits challenging the legality of what the NSA is doing and the constitutionality of the 702 program, no court has ever ruled on those questions. The Bush and Obama administrations successfully argued that defendants don't have legal standing to sue. That is, they have no right to sue because they don't know they're being targeted. If any of the lawsuits can get past that, things might change dramatically.

Meanwhile, much of this is the responsibility of the tech sector. This problem exists primarily because Internet companies collect and retain so much personal data and allow it to be sent across the network with minimal security. Since the government has abdicated its responsibility to protect our privacy and security, these companies need to step up: Minimize data collection. Don't save data longer than absolutely necessary. Encrypt what has to be saved. Well-designed Internet services will safeguard users, regardless of government surveillance authority.
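One concrete form of that "minimize and don't retain" advice is to store a keyed pseudonym instead of a raw identifier, so records can still be linked per user without the raw value ever sitting in the database. The sketch below is illustrative only — the key handling, function name and record layout are all invented for the example, and in practice the key would live in a secrets manager, not in source code:

```python
import hashlib
import hmac

# Illustrative key; a real deployment would load this from a secrets manager
# and keep it strictly out of the datastore it protects.
PSEUDONYM_KEY = b"example-key-kept-out-of-the-database"

def pseudonymize(identifier: str) -> str:
    """Replace a raw identifier (e.g. an email address) with a stable token:
    the same input always yields the same token, but the raw value cannot be
    recovered from stored records without the key."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# The stored record carries no raw email, yet analytics can still
# join events on the "user" field.
record = {"user": pseudonymize("alice@example.com"), "event": "login"}
print(pseudonymize("alice@example.com") == record["user"])  # True
```

A service built this way has less to lose in a breach or to hand over under a surveillance order, which is exactly the "well-designed Internet services will safeguard users" point.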

For the rest of us concerned about this, it's important not to give up hope. Everything we do to keep the issue in the public eye -- and not just when the authority comes up for reauthorization again in 2024 -- hastens the day when we will reaffirm our rights to privacy in the digital age.

This essay previously appeared in the Washington Post.

Man Arrested for Allegedly Hacking Car-Sharing Company Database

Australian law enforcement officers have arrested a man for allegedly hacking the company database of a car-sharing service. On 30 January, investigators of Strike Force Artsy, a division of the State Crime Command’s Cybercrime Squad, executed a search warrant at a home in Penrose. Officers arrested a 37-year-old man and charged him with two counts […]… Read More

The post Man Arrested for Allegedly Hacking Car-Sharing Company Database appeared first on The State of Security.

Data Privacy Concerns Cause Costly Sales Delays

Recent research suggested that organizations around the world are struggling to keep up with their sales goals due to data privacy concerns.

According to Cisco’s “2018 Privacy Maturity Benchmark Study,” which analyzed the importance of privacy processes in the wake of forthcoming legislation, almost two-thirds of companies experienced substantial delays in their sales cycles due to challenges related to data privacy.

Delays to the sales cycle can have damaging results for businesses. Security professionals should assess data concerns in their own organizations and detail the potential benefits of tighter privacy processes.

Data Privacy Concerns Cause Massive Sales Delays

The report revealed that privacy-related issues forced 65 percent of businesses to delay their sales cycles by an average of 7.8 weeks in 2017. The good news is that firms with a mature approach to privacy suffered less impact. In fact, privacy-mature organizations suffered average deferrals of 3.4 weeks, as opposed to 17 weeks for less advanced firms.

These mature companies also reported reduced losses due to data breaches, SecurityWeek reported. In addition, just 39 percent of privacy-mature organizations lost more than $500,000, compared to 74 percent of companies with unsophisticated privacy processes.

In many cases, according to the report, the length of the sales delay depended on the privacy model adopted by the business. Organizations with centralized approaches, for example, suffered an average delay of 10 weeks, while those with decentralized resources were delayed for an average of seven weeks. Firms that adopted a hybrid mix of the two approaches managed to cut delays down to less than five weeks.

Average sales delays also varied considerably according to location and sector. The report suggested countries and industries with tighter regulations and higher customer expectations experienced longer delays. Latin America topped the list with an average wait of 15.4 weeks, followed by Mexico (13 weeks) and Japan (12.1 weeks). In terms of industries, government and healthcare organizations suffered the longest deferral times.

The Link Between Data Privacy and the Sales Cycle

The research highlighted the importance of strong data privacy processes. Simply put, privacy-mature organizations suffer shorter sales delays and experience lower losses from data breaches. It also emphasized the importance of making sure executives understand how data privacy concerns affect the sales cycle.

To reduce delays, businesses should:

  1. Ensure that salespeople have timely access to information on customer privacy concerns.
  2. Create teams to investigate customer privacy issues as they arise.
  3. Work with development teams to ensure that privacy is built in from the beginning.

In a press release accompanying the report, William Lehr, an economist at Massachusetts Institute of Technology (MIT), noted that the study provides “empirical evidence of the linkage between firm privacy policies and performance-relevant impacts.” He added that the research should help shape future understandings of privacy and cybersecurity.

As privacy regulations mount around the world, these insights will be valuable to help companies that handle customer data reduce sales delays.

The post Data Privacy Concerns Cause Costly Sales Delays appeared first on Security Intelligence.

Aetna Agrees to $1.15 Million Settlement with New York Attorney General

On January 23, 2018, the New York Attorney General announced that Aetna Inc. (“Aetna”) agreed to pay $1.15 million and enhance its privacy practices following an investigation alleging it risked revealing the HIV status of 2,460 New York residents by mailing them information in transparent window envelopes. In July 2017, Aetna sent HIV patients information on how to fill their prescriptions using envelopes with large clear plastic windows, through which patient names, addresses, claims numbers and medication instructions were visible. As a result, the HIV status of some patients was visible to third parties. The letters were sent to notify members of a class action lawsuit that, pursuant to that suit’s resolution, they could purchase HIV medications at physical pharmacy locations, rather than via mail order delivery.

In addition to the monetary penalty, the settlement also requires Aetna to change its standard mailing practices and hire an independent consultant to oversee its compliance with the terms of the settlement. A spokesperson for Aetna said that the company is “implementing measures designed to ensure something like this does not happen again as part of our commitment to best practices in protecting sensitive health information.”

Cisco Patches Critical VPN Vulnerability

Cisco Systems released a patch Monday to fix a critical security vulnerability, rated the maximum 10 on the CVSS scale, in the Secure Sockets Layer (SSL) VPN functionality of its Adaptive Security Appliance (ASA) software.

CIPL President Bojana Bellamy Interviewed for Capgemini Video Series

On January 23, 2018, multinational consulting firm Capgemini interviewed Bojana Bellamy, President of the Centre for Information Policy Leadership at Hunton & Williams, for their “Jane Meets” video series with the Chief Information Security Officer (“CISO”). Bellamy spoke with the CISO of Capgemini about companies’ readiness to comply with the EU General Data Protection Regulation (“GDPR”). In response to a question about the key responsibilities of a CISO in GDPR compliance, Bellamy said, “…where I see great involvement for CISO also is in ensuring that the company is ready to deal with security breaches. So it’s not just about preventing the breach, which is obvious, but it’s also about readiness to deal with the breach and readiness to then manage the breach and notify individuals and regulators, because that is what [the] GDPR requires.”

Capgemini’s video series also focuses on cybersecurity and risk, with an emphasis on the need for a proactive approach to keep companies ahead of the increased number of cyber threats.

View Bellamy’s interview on GDPR Compliance: The Critical Role of the CISO on Capgemini’s website.

Lenovo’s Fingerprint Scanner Can Be Bypassed via a Hardcoded Password

Lenovo has issued an update to address a vulnerability in the fingerprint scanner app it ships with ThinkPad, ThinkCentre, and ThinkStation models running Windows 8.1 or an older version of Windows. From a report: Fingerprint Manager Pro is an application developed by Lenovo that allows users to log into Windows machines and online websites by scanning one of their fingerprints using the fingerprint scanner embedded in selected Lenovo products. "A vulnerability has been identified in Lenovo Fingerprint Manager Pro," said Lenovo in a security advisory published last week. "Sensitive data stored by Lenovo Fingerprint Manager Pro, including users' Windows logon credentials and fingerprint data, is encrypted using a weak algorithm, contains a hard-coded password, and is accessible to all users with local non-administrative access to the system it is installed in," the company said.
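The core problem the advisory describes can be illustrated with a toy sketch. This is not Lenovo's actual code, and the key and cipher below are invented stand-ins: the point is that a hard-coded key combined with a weak algorithm provides no real protection, because any local user who can read the program can extract the key and reverse the "encryption."

```python
# Illustrative anti-pattern only -- NOT Lenovo's actual code.
# A hard-coded key means anyone who can read the binary (any local,
# non-administrative user) can recover the key and decrypt the data.
from itertools import cycle

HARDCODED_KEY = b"0xdeadbeef"  # hypothetical embedded password

def weak_encrypt(data: bytes, key: bytes = HARDCODED_KEY) -> bytes:
    # Repeating-key XOR stands in for "a weak algorithm".
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

stored = weak_encrypt(b"windows-logon-password")
# Any local user with access to the same key recovers the secret
# instantly, because XOR with the same key is its own inverse:
recovered = weak_encrypt(stored)
assert recovered == b"windows-logon-password"
```

The fix for this class of flaw is to derive keys from per-user secrets (or the OS credential store) and use a vetted cipher, rather than embedding one shared password in the application.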

Read more of this story at Slashdot.

Strava user heatmap reveals patterns of life in western military bases

In November 2017, online fitness tracker Strava published a heatmap of the activity many of its users around the world engage in (and track) daily. But what might have seemed like harmless sharing of anonymized, aggregated data turned out to reveal potentially sensitive information about (mostly western) military bases and secret sites. The revelation was made and shared over the weekend by Nathan Ruser, an Australian university student and founding member of Institute for … More

Heat Map Released by Fitness Tracker Reveals Location of Secret Military Bases

Every one of us now has at least one internet-connected smart device, which makes this question ever more pressing: how much does your smart device know about you? Over the weekend, the popular fitness tracking app Strava proudly published a "2017 heat map" showing activities from its users around the world, but unfortunately, the map revealed what it shouldn't: locations of the United States

Locations of Military Bases Inadvertently Exposed by Fitness Tracker Users

Users of a fitness tracking app have inadvertently exposed the locations of military bases by publicly sharing their jogging/cycling routes. Many service people who use Strava, an app which allows them to record their exercise activity using GPS plotting, are sharing their data publicly. Their movements have ended up in Strava Labs’ Global Heatmap consisting […]… Read More

The post Locations of Military Bases Inadvertently Exposed by Fitness Tracker Users appeared first on The State of Security.

DuckDuckGo App and Extension Upgrades Offer Privacy ‘Beyond the Search Box’

An anonymous reader quotes the Verge: DuckDuckGo is launching updated versions of its browser extension and mobile app, with the promise of keeping internet users safe from snooping "beyond the search box." The company's flagship product, its privacy-focused search engine, will remain the same, but the revamped extension and app will offer new tools to help users keep their web-browsing as safe and private as possible. These include grade ratings for websites, factoring in their use of encryption and ad tracking networks, and offering summaries of their terms of service (with summaries provided by the third-party project Terms of Service; Didn't Read). The app and extension are available for Firefox, Safari, Chrome, iOS, and Android. The ability to block ad tracking networks is probably the most important feature here. These networks are used by companies like Google and Facebook to follow users around the web, stitching together their browsing history to create a more accurate profile for targeted advertising. DuckDuckGo calls it "a major step to simplify online privacy," adding that without it, "It's hard to use the Internet without it feeling a bit creepy -- like there's a nosey neighbor watching everything you do from across the street."

Read more of this story at Slashdot.

Old Bitcoin transactions can come back to haunt you

A group of researchers from Qatar University and Hamad Bin Khalifa University have demonstrated how years-old Bitcoin transactions can be used to retroactively deanonymize users of Tor hidden services. It seems that Bitcoin users’ past transactions – and especially if they used the cryptocurrency for illegal deals on the dark web and didn’t think to launder their payments – may come back to haunt them. Researchers’ findings “We crawled 1.5K hidden service pages and created … More

Facebook, Microsoft announce new privacy tools to comply with GDPR

In four months the EU General Data Protection Regulation (GDPR) comes into force, and companies are racing against time to comply with the new rules (and avoid being brutally fined if they fail). One of the things that the regulation mandates is that EU citizens must be able to get access to their personal data held by companies and information about how these personal data are being processed. Facebook users to get new privacy center … More

China Releases National Standard on Personal Information Security

On January 25, 2018, the Standardization Administration of China published the full text of the Information Security Technology – Personal Information Security Specification (the “Specification”). The Specification will come into effect on May 1, 2018. The Specification is voluntary, but could become influential within China because it establishes benchmarks for the processing of personal information by a wide variety of entities and organizations. In effect, the Specification constitutes a best practices guide for the collection, retention, use, sharing and transfer of personal information, and for the handling of related information security incidents.

The Specification divides personal information into two categories: personal information and sensitive personal information. “Sensitive personal information” includes personal information such as financial information, identifying information (such as an ID card, social insurance card, passport or driver’s license) and biological identifying information. The Specification provides specific requirements for the collection and use of sensitive personal information, as well as a sample functional interface with a data subject which could be incorporated by an enterprise in its products or services for the collection of sensitive personal information. The sample functional interface is a template for an interactive web page or software that is designed in accordance with the Specification, shows information such as the purpose, scope and transfer of personal information, and contains a checkbox to obtain consent.

The Specification reiterates the applicability of the principles of legitimacy and minimization, and the obligation to obtain the consent of a data subject, when collecting personal information, as well as the requirement to formulate and publish a privacy policy. These appear in earlier privacy-related laws and regulations, such as the Cybersecurity Law. In addition, the Specification provides several exceptions to the consent requirement, including when the collection and use of personal information is (1) directly related to national security, public security, a matter of material public interest, the investigation or trial of a crime or the enforcement of a judgement, or (2) requested by a data subject and is necessary for the execution and performance of a contract. The Specification also includes a template privacy policy. When collecting personal information indirectly from a third party (rather than directly from the data subject), an entity must require the party providing the information to explain the source by which the personal information was originally obtained, and to check whether that party obtained the consent of the data subject for the sharing, transfer or disclosure of the personal information.

According to the Specification, personal information must be retained for only the minimum extent necessary, and must be deleted or anonymized after the expiration of the retention period. Encryption measures must be adopted whenever sensitive personal information is retained. When a personal information controller ceases to provide a product or service, it must inform the relevant data subjects and must delete or anonymize all personal information retained in relation to the data subjects.
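The retention rule above (delete or anonymize once the retention period expires) can be sketched as a simple check a controller might run over its records. The field names and the anonymization strategy here are illustrative assumptions, not terms from the Specification:

```python
# Toy retention check: anonymize records whose retention period has
# expired, per the rule described above. Field names ("collected_on",
# "retention_days") and the null-out anonymization are assumptions.
from datetime import date, timedelta

def expired(collected_on: date, retention_days: int, today: date) -> bool:
    return today > collected_on + timedelta(days=retention_days)

def enforce_retention(records: list[dict], today: date) -> list[dict]:
    out = []
    for rec in records:
        if expired(rec["collected_on"], rec["retention_days"], today):
            rec = {**rec, "name": None, "id_number": None}  # anonymize
        out.append(rec)
    return out

records = [
    {"name": "Alice", "id_number": "X1",
     "collected_on": date(2017, 1, 1), "retention_days": 90},
    {"name": "Bob", "id_number": "X2",
     "collected_on": date(2018, 1, 1), "retention_days": 365},
]
result = enforce_retention(records, today=date(2018, 2, 1))
assert result[0]["name"] is None     # past retention: anonymized
assert result[1]["name"] == "Bob"    # within retention: kept
```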

When an enterprise uses personal information, it must adopt controls on access and restrictions on the display of the information. The use of personal information must not go beyond the purpose stated when collecting it. Personal data subjects have the right to request correction, deletion and copies of personal information that pertains to them, as well as the right to withdraw their consent to the collection and use of the personal information. An enterprise must respond to the request of a data subject for correction, deletion or copying once it has verified his or her identity.

When an enterprise engages a third party to process personal information, it must conduct a security assessment to ensure that the processor possesses sufficient security capabilities. The enterprise must also require the third party to safeguard the personal information, and must also supervise the third party’s processing of the personal information. If an enterprise needs to share or transfer personal information, it must conduct a security assessment and adopt security measures, inform the data subjects of the purpose of the sharing or transfer and of the categories of recipients, and obtain the consent of the data subjects.

An enterprise must formulate a contingency plan for security incidents that involve personal information and conduct emergency drills at least once a year. In the event of an actual data breach incident, the enterprise must inform the affected data subjects by email, letter, telephone or other reasonable and efficient method. The notice must include information such as the substance of the incident and its impact, remedial measures that have been taken or will be taken, suggestions for the data subjects on how to reduce risks, remedial measures made available to data subjects, and the responsible person and his or her contact information.

The Specification requires entities to clarify which of their departments and staff would be responsible for the protection of personal information, and to establish a system to evaluate impacts on the security of personal information. Enterprises must also implement staff training and audit the security measures which they have adopted to protect personal information.

Good privacy is good for business, so pay attention

Data privacy concerns are causing significant sales cycle delays for up to 65 percent of businesses worldwide, according to findings in the new Cisco 2018 Privacy Maturity Benchmark Study. The study shows that privacy maturity is connected to lower losses from cyber events: 74 percent of privacy-immature organizations experienced breach-related losses of more than $500,000 last year, compared with only 39 percent of privacy-mature organizations. Privacy maturity is a framework defined by the … More

Kaspersky Lab official blog: Transatlantic Cable podcast, episode 21

In this week’s edition of the Transatlantic Cable podcast, Dave and I discuss teenage hackers, a woman who has a bad habit of sneaking onto airplanes, Sonic the Hedgehog and more.

Kaspersky Lab official blog

Former Employees Say Lyft Staffers Spied On Passengers

An anonymous reader quotes a report from TechCrunch: Similar to Uber's "God View" scandal, Lyft staffers have been abusing customer insight software to view the personal contact info and ride history of the startup's passengers. One source that formerly worked with Lyft tells TechCrunch that widespread access to the company's backend let staffers "see pretty much everything including feedback, and yes, pick up and drop off coordinates." When asked if staffers, ranging from core team members to customer service reps, abused this privilege, the source said "Hell yes. I definitely looked at my friends' rider history and looked at what drivers said about them. I never got in trouble." Another supposed employee anonymously reported on workplace app Blind that staffers had access to this private information and that the access was abused. Our source says that the data insights tool logs all usage, so staffers were warned by their peers to be careful when accessing it surreptitiously. For example, some thought that repeatedly searching for the same person might get noticed. But despite Lyft logging the access, enforcement was weak, so team members still abused it. A Lyft spokesperson issued the following statement to TechCrunch: "Maintaining the trust of passengers and drivers is fundamental to Lyft. The specific allegations in this post would be a violation of Lyft's policies and a cause for termination, and have not been raised with our Legal or Executive teams. We are conducting an investigation into the matter. Access to data is restricted to certain teams that need it to do their jobs. For those teams, each query is logged and attributed to a specific individual. We require employees to be trained in our data privacy practices and responsible use policy, which categorically prohibit accessing and using customer data for reasons other than those required by their specific role at the company. Employees are required to sign confidentiality and responsible use agreements that bar them from accessing, using, or disclosing customer data outside the confines of their job responsibilities."

Read more of this story at Slashdot.

FERC Proposes to Adopt Reliability Standards Designed to Mitigate Cybersecurity Risk

On January 18, 2018, the Federal Energy Regulatory Commission (“FERC”) issued a Notice of Proposed Rulemaking (“NOPR”) that proposes the adoption of new mandatory Reliability Standards designed to mitigate cybersecurity risk in the supply chain for electric grid-related cyber systems. The Reliability Standards were developed by the North American Electric Reliability Corporation (“NERC”) in response to FERC Order No. 829, which ordered the development of standards to address supply chain risk management for industrial control system hardware, software and computing and networking services.

FERC’s NOPR acknowledged the “substantial progress” NERC had made in addressing the supply chain cybersecurity risks and identified remaining areas of “significant” cybersecurity risk. The NOPR proposes that NERC amend the Reliability Standards to address Electronic Access Control and Monitoring Systems associated with “medium-and-high-impact bulk electric system cyber systems.” The NOPR also proposes to direct NERC to evaluate the cybersecurity risks presented by Physical Access Controls and Protected Cyber Assets, as part of a study previously proposed by the NERC Board.

Comments on the NOPR are due 60 days after publication in the Federal Register.

Blog | Avast EN: Online privacy: this is a conversation we all need to have

Ten years ago, the first Data Privacy Day was held in the US and Canada on January 28. Since then, the National Cyber Security Alliance (NCSA) has commemorated it every year with online privacy awareness efforts aimed at both consumers and businesses. With the recent loss of net neutrality in the United States, this year’s Data Privacy Day takes on greater significance.

Blog | Avast EN

WeLiveSecurity: Data Privacy vs. Data Protection: Reflections on Privacy Day and GDPR

Data privacy is also a topic that can spark big debates, like the one between the US and the EU as to what protections should be accorded to data pertaining to people, specifically by those who collect, control, or process such data.

The post Data Privacy vs. Data Protection: Reflections on Privacy Day and GDPR appeared first on WeLiveSecurity


The Year of Privacy

For decades, oil was considered to be the world’s most valuable commodity. Due to its relative scarcity and society’s dependence upon it, this resource has been at the center of countless conflicts. Today, the story has changed; “black gold” has been deposed, as digital gold becomes more and more sought after. They say that data is the oil of the 21st century. The ability to know, through data, who we are gives companies immense power in the creation of business opportunities. And the abuse of this information can, naturally, raise concerns among consumers and legislators. These concerns have led policy-makers to invoke increasingly strict measures for the protection of personal data, a process which this year will reach its peak. 2018 will be the Year of Privacy.

The battle for privacy

Our privacy is at stake. The Internet has made the boundary between the public and the personal more porous than ever before. However, Internet users are increasingly aware of the relevance of protecting their identity online.

With this in mind, Data Privacy Day is celebrated every January 28 in order to raise awareness and promote data protection and healthy privacy practices. This celebration aims to educate users on the importance of protecting their online identity. It also seeks to encourage companies to implement technological solutions to respect user privacy. The date is no coincidence: it corresponds to the anniversary of the signature in 1981 of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, one of the pioneering documents in the field of data protection.

More Data, More Responsibility

Although storing large amounts of customer data can offer multiple business opportunities, it also implies a high level of responsibility. The tightening of data protection regulations and the growing number of cyberattacks are making it necessary to increase investment in privacy. The combination of customer data and employee data puts enormous pressure on security.

The global production rate of new data is increasing exponentially. According to IDC estimates, in 2025 there will be 163 ZB of data, ten times the data generated in 2016. Yes, that’s… let’s see, 163,000,000,000,000,000,000,000 bytes of information flowing around the world! Moreover, as indicated by IDC, 90% of the data generated in 2025 will require some type of security, but less than half of it will be protected.
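The arithmetic in the figures above is easy to sanity-check, since a zettabyte is 10^21 bytes:

```python
# Sanity-check the IDC figures quoted above: 1 ZB = 10**21 bytes.
ZB = 10**21

data_2025 = 163 * ZB                    # IDC's 2025 estimate
assert data_2025 == 163_000_000_000_000_000_000_000

data_2016 = data_2025 // 10             # "ten times the data generated in 2016"
needs_security = data_2025 * 90 // 100  # 90% will require some type of security
# ...of which, per IDC, less than half will actually be protected.
```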

GDPR: Four Letters to Define 2018

May 25, 2018. Security experts the world over have circled this date on their calendars with big red markers. On that day, the adaptation period stipulated by the regulatory bodies of the European Union for GDPR compliance will expire. This is the primary reason that we are calling 2018 “The Year of Privacy”.

By now you have almost certainly heard this acronym, but we’re here to summarize the fundamental aspects of the regulation that will revolutionize personal data protection not only in Europe, but worldwide.

The General Data Protection Regulation (GDPR) seeks to protect the privacy of citizens of the European Union and control how companies and institutions process, store, and use their personal data. It is the result of advances that have been made in the field of personal data protection, beginning in the 80s. The rapid evolution of technology was making the previous legislation obsolete, giving rise to the GDPR, the legal framework by which the European Commission intends to eliminate the ambiguities of the previous directive (the 1995 Data Protection Directive) and unify the specific legislations of each member country of the EU.

The fact that it is a unique, EU-approved regulation has generated many questions among companies, with two questions above all others: What happens with companies from European countries that are not part of the EU? And what about companies from other continents? As we explained in this post, the GDPR applies to all companies that process EU citizens’ data, regardless of their location. This confusion has led to very few companies being adequately prepared for the GDPR.

In recent months we have also debunked some other myths that surround the GDPR. One of the most widespread is the idea that we must encrypt all data to comply with the GDPR. Another: the personal data that we already have in our database is not subject to the GDPR. (Both are false).

Knowing the ins and outs of the regulation is the only way to avoid being caught off-guard. To simplify this task, we highlighted some of the main changes stemming from this new regulation and explained a series of recommendations for your company to be prepared. The inherent risks of unpreparedness are considerable: fines that could reach up to 20 million euros, as well as potential reputational damages and loss of customers.

So now is not the time to rest on your laurels. With four months to go before the GDPR becomes a strict requirement, the protection of privacy and personal data must become a business priority. To help you on the road to compliance, we have created this microsite. Don’t wait until May!

The post The Year of Privacy appeared first on Panda Security Mediacenter.

Smashing Security #062: Tinder spying, Amazon shoplifting, and petrol pump malware

Your Tinder swipes can be spied upon, Amazon is opening high street stores that don't require any staff, and Russian fuel pumps are being infected with malware in an elaborate scheme to make large amounts of money.

With Carole on a top secret special assignment, it's left to security veteran Graham Cluley to discuss all this and much much more on the "Smashing Security" podcast with special guests David McClelland and Vanja Švajcer.

The State of Security: Data Privacy Day: Expert Advice to Help Keep Your Data Private

Data Privacy Day began in the USA in 2008 as an extension of Data Protection Day in Europe. Since then, The National Cyber Security Alliance (NCSA) has led this international effort, which is held annually on January 28 to help create awareness about the importance of safeguarding data, respecting privacy, and enabling trust. In our […]… Read More

The post Data Privacy Day: Expert Advice to Help Keep Your Data Private appeared first on The State of Security.

The State of Security

EU Commission Releases Communication on Remaining Issues for GDPR Preparation

On January 24, 2018, the European Commission issued a communication to the European Parliament and the Council (the “Communication”) on the direct application of the EU General Data Protection Regulation (“GDPR”). The Communication (1) recounts novel elements of the GDPR that create stronger protections for individuals and new opportunities for organizations, (2) reviews preparatory work undertaken to date for GDPR implementation, (3) outlines remaining steps for successful preparation and (4) outlines measures the European Commission intends to take up until May 25, 2018.

EU GDPR – Stronger Protection and New Opportunities

The Communication begins by recapping the main innovations and opportunities afforded by the GDPR, including:

  • a harmonized legal framework among EU Member States and a level playing field for all organizations operating in the EU market;
  • stronger protections for individuals through the integration of data protection by design and default in the GDPR, enhanced individual rights and increased control through concepts such as data portability;
  • a comprehensive set of rules on personal data breaches, and clarity on (1) the obligations of data processors and (2) data protection authorities’ (“DPA”) power to impose fines;
  • flexibility for controllers and processors by virtue of the accountability principle; and
  • a modern governance system ensuring consistent application of the GDPR along with guarantees of protection for personal data transferred outside the EU.

Preparatory Work to Date

The second section of the Communication details actions taken by the European Commission and the Article 29 Working Party (“Working Party”) to date in preparing for the GDPR. These include:

  • supporting EU Member States and their authorities’ efforts to prepare for the GDPR through the establishment of an Expert Group and engaging in bilateral meetings with EU Members States’ authorities to discuss issues arising at the national level;
  • supporting individual DPAs and the work of the Working Party, with a view to ensuring a smooth transition to the European Data Protection Board (“EDPB”);
  • working on a modernized Council of Europe Convention 108, which reflects the same principles as those enshrined in the GDPR;
  • pursuing international outreach by actively engaging with key trading partners, notably in East Asia, Southeast Asia and Latin America, working with Japan towards achieving a finding of adequacy and launching talks with South Korea in view of a possible adequacy decision;
  • engaging with stakeholders through organized events and dedicated sectoral discussions, and setting up a multi-stakeholder group of civil society and business representatives, academics and practitioners on the GDPR;
  • funding actions to develop tools supporting the effective application of the GDPR in relation to consent and privacy-preserving methods of data analytics through its Framework Program for research and innovation, Horizon 2020; and
  • issuing Working Party guidelines on GDPR concepts for companies and other stakeholders.

Remaining Steps for Successful Preparation

The third section of the Communication outlines outstanding issues and remaining steps for GDPR preparation among all those involved in data protection.

  • Though the GDPR is directly applicable, EU Member States must take necessary steps to adapt their existing legislation, set up national DPAs, choose an accreditation body and lay down rules for the reconciliation of freedom of expression and data protection. The European Commission notes that only two EU Member States (Germany and Austria) have already adopted relevant national legislation to date.
  • The EDPB must be fully operational as of May 25, 2018. The European Data Protection Supervisor will provide the secretariat of the EDPB and has started the necessary preparations.
  • EU Member States must ensure that national DPAs are provided with the necessary financial, human and technical resources to carry out their duties, both effectively and independently.
  • Businesses, public administrations and other organizations processing personal data must undertake thorough reviews of their data policy cycle, potentially revise existing contracts, especially those between controllers and processors, and review their current transfer and governance mechanisms. The Communication notes organizations should use this opportunity to put their house in order, develop privacy within their organization and reset relations with DPAs.

Next Steps

The final section of the Communication outlines next steps the European Commission will take in the months leading up to the GDPR becoming directly applicable, including:

  • The European Commission will continue working with EU Member States and, from May 2018, will monitor how they apply the new rules, taking appropriate action as necessary.
  • The European Commission is making available new online guidance to help businesses comply with and benefit from the GDPR.
  • The European Commission will make use of its power to issue implementing or delegated acts only where there is clearly demonstrated added value based on stakeholder feedback. The European Commission plans to look into GDPR certifications based on a study contracted with external experts and input from the multi-stakeholder group set up in 2017.
  • The European Commission will work with the three EFTA states (Iceland, Liechtenstein and Norway) in the European Economic Area (“EEA”) to integrate the GDPR into the EEA agreement.
  • The European Commission will pursue the objective of ensuring that provisions of EU data protection law applicable in the UK on the day preceding its withdrawal from the European Union will continue to apply to personal data processed in the UK before the withdrawal date.
  • One year after the GDPR enters into application, the European Commission will organize an event to review different stakeholders’ experiences of implementing the GDPR. This will provide insight for the report the European Commission is required to complete on the evaluation and review of the GDPR by May 2020.

NY Department of Financial Services Issues Reminder for Cybersecurity Filing Deadline

On January 22, 2018, the New York Department of Financial Services (“NYDFS”) issued a press release reminding entities covered by its cybersecurity regulation that the first certification of compliance with the regulation is due on or prior to February 15, 2018. Covered entities must file the certification, which covers the 2017 calendar year, at the NYDFS online portal.

Maria T. Vullo, the Superintendent of the NYDFS, noted the critical importance of the certification of compliance and stated that “DFS’s regulation requires each entity to have an annual review and assessment of the program’s achievements, deficiencies and overall compliance with regulatory standards and the DFS cybersecurity portal will allow the safe and secure reporting of these certifications. DFS’s goal is to prevent cybersecurity attacks, and we therefore will now include cybersecurity in all DFS examinations to ensure that proper cybersecurity governance is being practiced by our regulated entities. As DFS continues to implement its landmark cybersecurity regulation, we will take proactive steps to protect our financial services industry from cyber criminals.”

Superintendent Vullo also announced that the NYDFS will incorporate cybersecurity in all of its regulatory examinations. This includes adding questions related to cybersecurity to “first day letters,” which are notices that the NYDFS issues to commence its examinations of financial services companies, including examinations of banks and insurance companies for safety and soundness and market conduct.

Read more about other key deadlines for the NYDFS cybersecurity regulation.

DuckDuckGo offers new privacy extension and app

DuckDuckGo, the company behind the eponymous privacy-minded Internet search engine, has announced a new browser extension and mobile app: DuckDuckGo Privacy Essentials. DuckDuckGo Privacy Essentials does four things: it makes DuckDuckGo the default search engine (this feature is optional and can be switched off); forces websites to serve users the encrypted (i.e., HTTPS) version of the site, if one is available; blocks all hidden, third-party trackers it can find; and provides users … More

Are you a Tinder user? Watch out, someone could spy on you

Experts at security firm Checkmarx discovered two security vulnerabilities in the Tinder mobile apps that could be exploited to spy on users.

Security experts at Checkmarx discovered two security vulnerabilities in the Tinder Android and iOS dating applications that could be exploited by an attacker on the same Wi-Fi network as a target to spy on users and modify the content they see.

Attackers can view a target user’s Tinder profile, see the profile images they view and determine the actions they take.

“The vulnerabilities, found in both the app’s Android and iOS versions, allow an attacker using the same network as the user to monitor the user’s every move on the app. It is also possible for an attacker to take control over the profile pictures the user sees, swapping them for inappropriate content, rogue advertising or other type of malicious content (as demonstrated in the research),” reads the analysis published by Checkmarx.

“While no credential theft and no immediate financial impact are involved in this process, an attacker targeting a vulnerable user can blackmail the victim, threatening to expose highly private information from the user’s Tinder profile and actions in the app.”

An attacker can conduct many other malicious activities, including intercepting traffic and launching DNS poisoning attacks.

The first issue stems from the fact that both the iOS and Android Tinder apps download profile pictures over insecure HTTP connections, which means an attacker monitoring the traffic can determine which profiles a Tinder user views.
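To see why plaintext HTTP is so damaging here, consider what a passive observer on the same Wi-Fi network actually captures. The sketch below is illustrative (the hostname and path are invented, not Tinder's real CDN): a sniffed HTTP GET exposes the exact resource being fetched, no decryption required.

```python
# Illustrative: a plaintext HTTP request as it would appear on the wire.
# The hostname and path below are hypothetical, for demonstration only.
raw_request = (
    b"GET /profiles/12345/photo_1.jpg HTTP/1.1\r\n"
    b"Host: images.example-cdn.com\r\n"
    b"\r\n"
)

def extract_fetched_resource(packet: bytes) -> str:
    """Pull the host and path out of a sniffed plaintext HTTP GET."""
    lines = packet.decode("ascii", errors="replace").split("\r\n")
    method, path, _version = lines[0].split(" ")
    host = next(l.split(": ", 1)[1] for l in lines[1:] if l.startswith("Host:"))
    return f"{host}{path}"

print(extract_fetched_resource(raw_request))
# -> images.example-cdn.com/profiles/12345/photo_1.jpg
```

With HTTPS, the observer would see only the destination host (via SNI) and encrypted bytes, not the requested image.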

Tinder data leak


An attacker could also modify traffic for example to swap images.

“Attackers can easily discover what device is viewing which profiles,” continues the analysis. “Furthermore, if the user stays online long enough, or if the app initializes while on the vulnerable network, the attacker can identify and explore the user’s profile. Profile images that the victim sees can be swapped, rogue advertising can be placed and malicious content can be injected.”

This issue could be mitigated simply by adopting HTTPS.

Checkmarx also discovered a second issue, this one related to the use of HTTPS itself, which it dubbed “Predictable HTTPS Response Size.”

“By carefully analyzing the traffic coming from the client to the API server and correlating with the HTTP image requests traffic, it is possible for an attacker to determine not only which image the user is seeing on Tinder, but also which action did the user take,” states Checkmarx. “This is done by checking the API server’s encrypted response payload size to determine the action.”

An attacker in a position to analyze the traffic can discover a user’s interest in a specific profile: when the user swipes left on a profile picture, the API server delivers a 278-byte encrypted response, whereas swiping right to like a profile generates a 374-byte response.

The researchers also noticed that Tinder member pictures are downloaded to the app over HTTP, making it possible for an attacker to view the profile images of the users being swiped left and right.

To mitigate this issue, the researchers suggest padding responses: if every response were padded to a fixed size, it would be impossible to distinguish the user’s actions.
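The side channel and its fix can be sketched in a few lines. The 278- and 374-byte figures come from the Checkmarx analysis above; the padding scheme and block size here are assumptions for illustration, not Tinder's actual implementation.

```python
# Response sizes reported by Checkmarx; an eavesdropper never decrypts
# anything -- the encrypted payload length alone reveals the action.
RESPONSE_SIZES = {278: "swipe_left", 374: "swipe_right"}

def classify_action(encrypted_response_len: int) -> str:
    """Infer the user's action purely from the observed response size."""
    return RESPONSE_SIZES.get(encrypted_response_len, "unknown")

def pad_to_block(payload: bytes, block: int = 512) -> bytes:
    """The suggested mitigation (hypothetical block size): pad every
    response so all actions produce identically sized traffic."""
    return payload + b"\x00" * (block - len(payload) % block)

left, right = b"A" * 278, b"B" * 374
print(classify_action(len(left)), classify_action(len(right)))  # distinguishable
print(len(pad_to_block(left)), len(pad_to_block(right)))        # identical after padding
```

After padding, both responses arrive as 512 bytes, so length-based classification returns nothing useful.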

Checkmarx disclosed both vulnerabilities to Tinder.

Pierluigi Paganini

(Security Affairs – Tinder, privacy)

The post Are you a Tinder user? Watch out, someone could spy on you appeared first on Security Affairs.

WordPress Plugin Fixes Bug Allowing Download of 100K+ Sites’ Subscriber Lists

A popular WordPress plugin has fixed a vulnerability that allowed an unauthenticated user to download the subscriber lists for more than 100,000 websites. Email Subscribers & Newsletters incorporated the fix into version 3.4.8 on 19 January after working closely with Dominykas Gelucevicius from ThreatPress, a company which offers security products and services for WordPress users. […]… Read More

The post WordPress Plugin Fixes Bug Allowing Download of 100K+ Sites’ Subscriber Lists appeared first on The State of Security.

Detecting Drone Surveillance with Traffic Analysis

This is clever:

Researchers at Ben Gurion University in Beer Sheva, Israel have built a proof-of-concept system for counter-surveillance against spy drones that demonstrates a clever, if not exactly simple, way to determine whether a certain person or object is under aerial surveillance. They first generate a recognizable pattern on whatever subject­ -- a window, say -- someone might want to guard from potential surveillance. Then they remotely intercept a drone's radio signals to look for that pattern in the streaming video the drone sends back to its operator. If they spot it, they can determine that the drone is looking at their subject.

In other words, they can see what the drone sees, pulling out their recognizable pattern from the radio signal, even without breaking the drone's encrypted video.

The details have to do with the way drone video is compressed:

The researchers' technique takes advantage of an efficiency feature streaming video has used for years, known as "delta frames." Instead of encoding video as a series of raw images, it's compressed into a series of changes from the previous image in the video. That means when a streaming video shows a still object, it transmits fewer bytes of data than when it shows one that moves or changes color.

That compression feature can reveal key information about the content of the video to someone intercepting the streaming data, security researchers have shown in recent work, even when the data is encrypted.
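The detection idea above can be sketched as a correlation test. Everything here is illustrative (the frame sizes and threshold are invented, not from the paper): modulate a physical pattern on and off on the subject, then check whether the per-frame sizes of the drone's encrypted stream track that modulation, since delta frames are small when the scene is still and large when it changes.

```python
# Our flicker schedule for the probe pattern (0 = off, 1 = on).
probe_pattern = [0, 1, 0, 1, 1, 0, 1, 0]

# Hypothetical per-frame byte counts sniffed from the drone's radio link.
watching_us   = [900, 5200, 850, 5100, 5300, 880, 5150, 910]
watching_else = [1200, 1100, 1250, 1180, 1220, 1150, 1190, 1210]

def correlates(pattern, frame_sizes, threshold=3000):
    """True if large delta frames line up with our 'on' intervals,
    i.e. the encrypted stream mirrors our modulation."""
    observed = [1 if size > threshold else 0 for size in frame_sizes]
    return observed == pattern

print(correlates(probe_pattern, watching_us))    # drone is filming our subject
print(correlates(probe_pattern, watching_else))  # drone is looking elsewhere
```

The real system is more robust (it correlates over many frames and tolerates noise), but the principle is the same: the bitrate of the encrypted stream leaks whether the watched scene contains our changing pattern.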

Research paper and video.

Singapore government gets into the network defense game

There is a common assumption in the infosec community that enormous breaches like those at Equifax, Anthem, and Target are the new norm. That the next mega breach is simply a matter of time. This is because large companies loathe spending money on things that are not directly profitable like secure infrastructure or quality training for employees. Further, there isn’t really any external pressure on corporations to do better—so they won’t.

Some countries have recognized that these sorts of negative externalities cause significant public harm, and have sought to get ahead of the threat curve with cybersecurity legislation. Singapore currently has a comprehensive cybersecurity bill under consideration that is trying very hard to bring a bit of order to the wild west of technology threats. The bill is exhaustive in covering management of cyberthreats, so let’s look at what it does well and what it does not do well.

The good

  • Appoints a national CISO. US cyberdefenses frequently suffer from an unclear chain of command, as well as competing agency priorities. The buck needs to stop somewhere to mount an effective defense.
  • Designates critical infrastructure. You cannot prioritize defenses for systems you aren’t looking at.
  • Duty to report. This is a big one. Often fearful of liability, stock impact, or impact to reputation, corporations will often sit on cyberattack disclosure for months—sometimes until an executive can sell his company’s stock. Removing any ambiguity on when and how to report breaches gets everyone on the same page.
  • Designates best standards and obliges companies to follow them. There’s currently no consistent, agreed-upon best cybersecurity practices for companies to follow.
  • Power to investigate and force remediation. In contrast to US defense contractors handling critical infrastructure, who were not obligated to report breaches until 2015 and to date have not lost any contracts due to loss of classified data, Singapore’s draft bill grants a cybersecurity officer the authority both to investigate a critical infrastructure breach and to compel remediation along industry best practices.
  • Licenses infosec corps. While this could be a little iffy in the implementation, holding companies that audit critical infrastructure to an agreed-upon standard benefits everyone. Infrastructure owners know precisely what services they are paying for, cybersecurity officials can judge the impact of standardized services more accurately, and no one has to deal with a Norse Corp.

The not so good

  • Criminal sanctions for offenses. While seemingly a no-brainer, breaches are rarely due to a single individual’s malfeasance, and much more often the end result of a sick corporate process. A more effective deterrent would be fines levied at the corporate level, and large enough to hurt. While an ineffective company can lose a handful of employees quite easily, it would feel the loss of a profit percentage much more acutely.
  • Secrecy. Many sections within the bill contain provisions for non-disclosure and corresponding fines and imprisonment for anyone speaking out about a breach in a non-approved way. From a governance perspective, this makes sense. Singapore is deriving their authority to monitor critical infrastructure by classifying breaches as a security threat, and a classic belief of governments is that one does not speak publicly of security threats. Network threats are different. Configurations and applications used by a shipping company can have significant overlap with those used at non-critical corporations. Transparency and information sharing not only pressure a breached company to demonstrate an adequate remediation but also offer lessons learned that can keep hundreds of less critical organizations safe. Sunlight and sharing are proven methods for defenders to propagate best solutions to everyone.

What does it mean?

Traditionally, information security has been viewed as the responsibility of individual companies, and not a particularly important one at that. Efforts of countries like Singapore to centralize cyberthreat defense and vulnerability remediation are an attempt to acknowledge the reality that breached infrastructure affects everyone. A hack might stay within an offshore drilling company, but the knock-on effects to shipping, trade, and the environment can create an impact on millions of citizens.

While the law has not traditionally been responsive to technology needs, that is gradually changing. With input from industry leaders and privacy advocates, technology law has the potential to change for our benefit.

Check out the full text of the bill here.

The post Singapore government gets into the network defense game appeared first on Malwarebytes Labs.

Hunton Publishes Retail Year in Review

On January 18, 2018, Hunton & Williams LLP’s retail industry lawyers, composed of more than 100 lawyers across practices, released their annual Retail Year in Review publication. The Retail Year in Review includes several articles authored by our Global Privacy and Cybersecurity lawyers, and touches on many topics of interest including blockchain, ransomware, cyber insurance and the Internet of Things.

Read the full publication.

SecurityWeek RSS Feed: Seagate Patches Flaws in Personal Cloud, GoFlex Products

Seagate recently patched several vulnerabilities discovered by researchers in the company’s Personal Cloud and GoFlex products, but some weaknesses impacting the latter remain unfixed.

GoFlex Home vulnerabilities

read more

SecurityWeek RSS Feed

Richard Thomas Appointed to the UK Advisory Committee on Business Appointments

Hunton & Williams LLP is pleased to announce that Richard Thomas, Global Strategy Advisor to the Centre for Information Policy Leadership, has been appointed by the UK Prime Minister to serve as a member of its Advisory Committee on Business Appointments (“ACOBA”), effective February 1, 2018.

ACOBA was instituted by the Prime Minister in 1975, and considers new jobs for former ministers, senior civil servants and other Crown servants in accordance with the government’s Business Appointments Rules.

Thomas served as Information Commissioner for the United Kingdom, was recognized as “Privacy Leader of the Year” by the International Association of Privacy Professionals, and ranked third in a global “IT Agenda Setters” poll. Thomas served as a visiting professor at Northumbria University, speaks frequently at industry conferences and events, and is regularly quoted in the business and trade media.

FTC Releases 2017 Privacy and Data Security Update

On January 18, 2018, the Federal Trade Commission (“FTC”) released its 2017 Privacy & Data Security Update (the “Report”). The annual Report, which summarizes the privacy and data security-related activities conducted by the FTC over the past year, is broken down into six key areas: (1) enforcement, (2) advocacy, (3) workshops, (4) reports and surveys, (5) consumer education and business guidance, and (6) international engagement.

Read the full Report.

GDPR: Whose problem is it anyway?

With the GDPR deadline looming on May 25, 2018, every organization in the world that transmits data related to EU citizens is focused on achieving compliance. And for good reason. The regulation carries the most serious financial consequences of any privacy law to date – the greater of 20 million EUR or 4 percent of global revenue, potentially catastrophic penalties for many companies. Compounding matters, the scope and complexity of GDPR extends beyond cyber security, … More

The US Global surveillance bill has been signed by President Trump

The US Government missed a historic opportunity to reform a dangerous surveillance law that enables global surveillance; instead, it signed a version that makes it worse.

President Trump signed the bill governing US domestic surveillance one day after the Senate approved it with 65 votes against 34. The bill will remain in effect for another six years.

Privacy and civil rights advocates have long criticized Section 702 of the Foreign Intelligence Surveillance Act (FISA), which allows US intelligence agencies to conduct domestic surveillance under certain conditions without a warrant.

Section 702 allows the NSA to conduct warrantless spying on foreigners located abroad, including any communications with US citizens.

NSA surveillance activities

Section 702 was revealed by NSA whistleblower Edward Snowden in 2013. Civil rights and privacy advocates consider it unconstitutional under the Fourth Amendment.

The bill increases the spying powers of intelligence agencies and blocks safeguards; curiously, it was passed by Republicans who have long criticized government abuses.

Politicians who voted for Section 702 believe it is crucial to protecting Americans from foreign governments and terrorism, and highlighted that the revisions to the bill will guard citizens against any abuse.

“There is a glimmer of light,” wrote ACLU legislative counsel Neema Singh Guliani in a blog post. “The last few weeks have demonstrated that bipartisan efforts to reform our surveillance laws continue on an arc of progress.”

“With only two more votes, reformers could have halted this bill from advancing and forced a floor debate over badly needed improvements. And an effort to pass the most comprehensive Section 702 reform bill introduced in Congress garnered the support of over 180 members in the House. With actual debate, real reform provisions likely would have passed.”

Just hours before the Section 702 program was signed by the President, the Senate’s intelligence committee approved the release to the rest of Congress of a confidential four-page memo alleging previous abuse of the FISA spying program.

“Scores of Republicans have since viewed the document in a Sensitive Compartmented Information Facility on Capitol Hill. They left expressing shock, saying the special counsel investigation into whether Trump’s campaign officials had improper contacts with Russia is based on politically motivated actions at the highest level of law enforcement.” reported The Hill.

House Freedom Caucus Chairman Mark Meadows (R-N.C.) called the memo “shocking.”

“I’m here to tell all of America tonight that I’m shocked to read exactly what has taken place,” Meadows (R-N.C.) said in a speech on the House floor.

“I thought it could never happen in a country that loves freedom and democracy like this country. It is time that we become transparent with all of this, and I’m calling on our leadership to make this available so all Americans can judge for themselves.”

Politicians opposing the Section 702 program have described its contents as “worse than Watergate.”

In conclusion, this is a dark chapter in American history. The six-year extension of the law that allows the US government to monitor foreigners’ communications abroad without a warrant has been approved. Moreover, US intelligence will also be able to spy on the American citizens, politicians, businessmen, and journalists who communicate with them, despite the Fourth Amendment.

Pierluigi Paganini

(Security Affairs – NSA surveillance, Section 702- FISA)

The post The US Global surveillance bill has been signed by President Trump appeared first on Security Affairs.

Trump Signs Surveillance Extension Into Law

President Trump took to Twitter this afternoon to announce that he has signed a six-year renewal of a powerful government surveillance tool. "Just signed 702 Bill to authorize foreign intelligence collection," Trump tweeted. "This is NOT the same FISA law that was so wrongly abused during the election. I will always do the right thing for our country and put the safety of the American people first!" The Hill reports: Section 702 of the Foreign Intelligence Surveillance Act (FISA), which the Senate voted to renew with a few small tweaks this week, allows the U.S. to spy on foreigners overseas. The intelligence community says the program is a critical tool in identifying and disrupting terror plots. But the broader surveillance law, which governs U.S. spying on foreigners, has become politically entangled with the controversy over the federal investigation into Trump's campaign and Russia. Some Republicans have claimed that the FBI inappropriately obtained a politically motivated FISA warrant to spy on Trump during the transition and on Friday, Capitol Hill was consumed with speculation about a four-page memo produced by House Intelligence Committee Republicans that some GOP lawmakers hinted contained evidence of such wrongdoing.

Read more of this story at Slashdot.

Researchers uncover mobile, PC surveillance platform tied to different nation-state actors

The Electronic Frontier Foundation (EFF) and mobile security company Lookout have uncovered a new malware espionage campaign that has targeted activists, journalists, lawyers, military personnel, and enterprises in more than 20 countries in North America, Europe, the Middle East, and Asia. They have dubbed the threat Dark Caracal, and have traced its activities to as far back as 2012. The malware used by Dark Caracal The attackers went after information stored on targets’ Android devices … More

MailChimp Fixes Privacy Issue that Leaked Respondents’ Email Addresses

MailChimp has plugged a privacy issue that leaked users’ email addresses when they responded to websites’ newsletter campaigns. Self-proclaimed mobile enthusiast Terence Eden discovered what he calls an “annoying privacy violation” while viewing the referral logs for his website. Those logs help document “Referer Headers” (misspelling intended), optional header fields which specify the address of […]… Read More

The post MailChimp Fixes Privacy Issue that Leaked Respondents’ Email Addresses appeared first on The State of Security.