Category Archives: Privacy

Chinese facial recognition database tracking Muslims left exposed

By Waqas

China is often accused of discreetly conducting surveillance and espionage campaigns, not only against its own citizens but also against governments across continents. Now, a misconfigured facial recognition database has emerged that appears to confirm those allegations. A Dutch security researcher has reportedly exposed a database containing exclusive details about the deep-rooted surveillance tactics […]

This is a post from HackRead.com. Read the original post: Chinese facial recognition database tracking Muslims left exposed

Website uses Artificial Intelligence to create utterly realistic human faces

By Waqas

A new way for cybercriminals to create fake social media profiles and carry out identity scams using an Artificial Intelligence-powered tool? A couple of months ago it was reported that NVIDIA had developed a tool that uses Artificial Intelligence to create extremely realistic human faces which in reality do not exist. Now, there is a website that […]

This is a post from HackRead.com. Read the original post: Website uses Artificial Intelligence to create utterly realistic human faces

The Risks of Public Wi-Fi and How to Close the Security Gap

As I write this blog post, I’m digitally exposed, and I know it. For the past week, I’ve had to log on to a hospital’s public Wi-Fi each day to work while a loved one recuperates.

What seems like a routine, casual connection to the hospital’s Wi-Fi isn’t. Using public Wi-Fi is a daily choice loaded with risk. Sure, I’m conducting business and knocking out my to-do list like a rock star, but at what cost to my security?

The Risks

By using public Wi-Fi, I’ve opened my online activity and personal data (via my laptop) up to a variety of threats including eavesdropping, malware distribution, and bitcoin mining. There’s even a chance I could have logged on to a malicious hotspot that looked like the hospital network.

Like many public Wi-Fi spots, the hospital’s network could lack encryption, which is a security measure that scrambles the information sent from my computer to the hospital’s router so other people can’t read it. Minus encryption, whatever I send over the hospital’s network could potentially be intercepted and used maliciously by cybercriminals.
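To make the idea concrete, here is a minimal sketch of what encryption buys you, using the symmetric Fernet recipe from Python's third-party cryptography package (a stand-in chosen for illustration; Wi-Fi itself uses protocols such as WPA2, not this library):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # secret shared between sender and receiver
cipher = Fernet(key)

message = b"password: hunter2"
ciphertext = cipher.encrypt(message)  # what an eavesdropper on the network sees

assert message not in ciphertext              # the plaintext is unreadable in transit
assert cipher.decrypt(ciphertext) == message  # only the key holder can recover it
```

Without the key, the ciphertext is just noise to anyone capturing packets on the network.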

Because logging on to public Wi-Fi is often a necessity — like my situation this week — security isn’t always the first thing on our minds. But over the past year, a new normal is emerging. A lot of us are thinking twice. With data breaches, privacy concerns, the increase in the market for stolen credentials, and increasingly sophisticated online scams making the headlines every day, the risks of using public Wi-Fi are front and center.

Rising Star: VPN

The solution to risky public Wi-Fi? A Virtual Private Network (VPN). A VPN allows users to securely access a private network and share data remotely through public networks. Much like a firewall protects the data on your computer, a VPN protects your online activity by encrypting your data when you connect to the internet from a remote or public location. A VPN also conceals your location, IP address, and online activity.

Using a VPN helps protect you from potential hackers using public Wi-Fi, which is one of their favorite easy-to-access security loopholes.

Who Needs a VPN?

If you (or your family members) travel and love to shop online, access your bank account, watch movies, and do everyday business via your phone or laptop, a VPN would allow you to connect safely and encrypt your data no matter where you are.

A VPN can mask, or scramble, your physical location, bank account credentials, and credit card information.

Also, if you have a family data plan, you’ve likely encouraged your kids to save data by connecting to public Wi-Fi whenever possible. With a VPN, that habit would be protected from criminal sniffers and snoopers.

A VPN allows you to connect to a proxy server that accesses online sites on your behalf, enabling a secure connection almost anywhere you go. A VPN also hides your IP address and allows you to browse anonymously from any location.

How VPNs work

To use a VPN, you subscribe to a VPN service, download the app onto your desktop or phone, set up your account, and then log on to a VPN server to conduct your online activity privately.

If you are still logging on to public Wi-Fi, here are a few tips to keep you safe until VPNs become as popular as Wi-Fi.

Stay Safe on Public Wi-Fi 

Verify your connection. Fake networks that mine your data abound. If you are logging on to Wi-Fi in a coffee shop, hotel, airport, or library, verify the exact name of the network with an employee. Also, only use Wi-Fi that requires a password to log on.

Don’t get distracted. For adults, as well as kids, it’s easy to get distracted and absorbed with our screens — this is risky when on public Wi-Fi, according to Diana Graber, author of Raising Humans in a Digital World. “Knowing how to guard their personal information online is one of the most important skills parents need to equip their young kids with today,” says Graber. “Lots of young people visit public spaces, like a local coffee shop or library, and use public Wi-Fi to do homework, for example. It’s not uncommon for them to get distracted by something else online or even tempted to buy something, without realizing their personal information (or yours!) might be at risk.”

Disable auto Wi-Fi connect. If your phone automatically joins surrounding networks, you can disable this function in your settings. Avoid linking to unknown or unrecognized networks.

Turn off Wi-Fi when done. Your computer or phone can still transmit data even when you are not using it. Be sure to disable your Wi-Fi from the network when you are finished using it.

Avoid financial transactions. If you must use public Wi-Fi, don’t conduct a sensitive transaction such as banking, shopping, or any kind of activity that requires your social security or credit card numbers or password use. Wait until you get to a secured home network to conduct personal business.

Look for the HTTPS. Unsecured websites will not have HTTPS in their address. Also, look for the little lock icon in the address bar to confirm an encrypted connection (keeping in mind that HTTPS alone does not prove a site is legitimate).
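The lock icon reflects certificate verification, and Python's standard ssl module mirrors the same checks: the default client context refuses connections whose certificate is invalid or doesn't match the hostname. A small sketch:

```python
import ssl

# The default client-side context performs the checks the browser's lock
# icon represents: a valid certificate that matches the site's hostname.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: certificate must be valid
print(ctx.check_hostname)                    # True: name on the cert must match
```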

Secure your devices. Use a personal VPN as an extra layer of security against hackers and malware.

The post The Risks of Public Wi-Fi and How to Close the Security Gap appeared first on McAfee Blogs.

Venezuela’s Government Appears To be Trying To Hack Activists With Phishing Pages

Hackers allegedly working for the embattled Venezuelan government tried to trick activists into giving away their passwords to popular services such as Gmail, Facebook, Twitter, and others, according to security researchers. From a report: Last week, the Venezuelan opposition leader Juan Guaido called for citizens to volunteer with the goal of helping international humanitarian organizations deliver aid into the country. President Nicolas Maduro is refusing to accept aid and has erected blockades across a border bridge with Colombia with the military's help. The volunteer efforts were organized around the website voluntariosxvenezuela.com. A week later, on February 11, someone registered an almost identical domain, voluntariosvenezuela[.]com. And on Wednesday, users in Venezuela who were trying to visit the original and official VoluntariosxVenezuela website were redirected to the newer one, according to security firm Kaspersky Lab, as well as Venezuelan users on Twitter.

Read more of this story at Slashdot.

Three reasons employee monitoring software is making a comeback

Companies are increasingly implementing employee and user activity monitoring software to: ensure data privacy; protect intellectual property and sensitive data from falling into the wrong hands; stop malicious or unintentional data exfiltration attempts; and find ways to optimize processes and improve employee productivity. Modern user activity monitoring software is incredibly flexible, providing companies with the insights they need while offering the protection they demand. By examining three prominent use cases, it’s evident that employee monitoring software … More

The post Three reasons employee monitoring software is making a comeback appeared first on Help Net Security.

Smashing Security #115: Love, Nests, and is 2FA destroying the world?

Is two-factor authentication such a pain in the rear end that it’s costing the economy millions? Do you feel safe having a Google Nest in your home? And don’t get caught by a catfisher this Valentine’s Day.

All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by B J Mendelson.

Netflix Has Saved Every Choice You’ve Ever Made In ‘Black Mirror: Bandersnatch’

According to a technology policy researcher, Netflix records all the choices you make in Black Mirror's Bandersnatch episode. "Michael Veale, a technology policy researcher at University College London, wanted to know what data Netflix was collecting from Bandersnatch," reports Motherboard. "People had been speculating a lot on Twitter about Netflix's motivations," Veale told Motherboard in an email. "I thought it would be a fun test to show people how you can use data protection law to ask real questions you have." From the report: The law Veale used is Europe's General Data Protection Regulation (GDPR). The GDPR granted EU citizens a right to access -- anyone can request a wealth of information from a company collecting data. Users can formally request that a company such as Netflix tell them the reason it's collecting data, the categories it's sorting data into, the third parties it's sharing the data with, and other information. Veale used this right of access to ask Netflix questions about Bandersnatch and revealed the answers in a Twitter thread. He found that Netflix is tracking the decisions its users make (which makes sense considering how the film works), and that it is keeping those decisions long after a user has finished the film. It also stores aggregated forms of the users' choices to "help [Netflix] determine how to improve this model of storytelling in the context of a show or movie," the company said in its email response to him. After sending along a copy of his passport to prove his identity, Veale got the answers he wanted from Netflix via email and -- in a separate email -- a link to a website where he downloaded an encrypted version of his data. He had to use a Netflix-provided key to unlock the data, which came in the form of a .csv file and a PDF. The .csv and PDF files displayed Veale's journey through Bandersnatch, every choice laid out in a long line for him to see. Veale is concerned by what he learned.
Netflix didn't tell Veale how long it keeps the data and what the long term deletion plans are. "They claim they're doing the processing as it's 'necessary' for performing the contract between me and Netflix," Veale told Motherboard. "Is storing that data against my account really 'necessary'? They clearly haven't delinked it or anonymized it, as I've got access to it long after I watched the show. If you asked me, they should really be using consent (which you should be able to refuse) or legitimate interests (meaning you can object to it) instead."
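The exact schema of Netflix's export isn't public, but a subject-access response like Veale's typically arrives as tabular data. Here is a hypothetical sketch of reading such a file with Python's csv module (the column names and rows below are invented for illustration):

```python
import csv
import io

# Invented rows standing in for a GDPR data export; the real schema is not public.
export = io.StringIO(
    "timestamp,choice_point,selection\n"
    "2019-01-02T20:15:00Z,breakfast,Frosties\n"
    "2019-01-02T20:18:30Z,music,Thompson Twins\n"
)

choices = list(csv.DictReader(export))
print([row["selection"] for row in choices])  # ['Frosties', 'Thompson Twins']
```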

Read more of this story at Slashdot.

Apple Fails To Block Porn and Gambling ‘Enterprise’ Apps

Facebook and Google were far from the only developers openly abusing Apple's Enterprise Certificate program meant for companies offering employee-only apps. A TechCrunch investigation uncovered a dozen hardcore pornography apps and a dozen real-money gambling apps that escaped Apple's oversight. From the report: The developers passed Apple's weak Enterprise Certificate screening process or piggybacked on a legitimate approval, allowing them to sidestep the App Store and Cupertino's traditional safeguards designed to keep iOS family friendly. Without proper oversight, they were able to operate these vice apps that blatantly flout Apple's content policies. The situation shows further evidence that Apple has been neglecting its responsibility to police the Enterprise Certificate program, leading to its exploitation to circumvent App Store rules and forbidden categories.

Read more of this story at Slashdot.

Is 2019 the year national privacy law is established in the US?

Data breaches and privacy violations are now commonplace. Unfortunately, the consequences for US companies involved can be complicated. A company’s obligation to a person affected by a data breach depends in part on the laws of the state where the person resides. A person may be entitled to free credit monitoring for a specified period of time or may have the right to be notified of the breach sooner than somebody living in another state. … More

The post Is 2019 the year national privacy law is established in the US? appeared first on Help Net Security.

People still shocked by how easy it is to track someone online

Netflix’s hit series You has got people discussing their online privacy and traceability. However, McAfee, the device-to-cloud cybersecurity company, discovered that less than a fifth (17%) of the Brits who lost or had their phone stolen (43%) made any attempt to prevent criminals from accessing data stored on the device or in the cloud. Only 17% said they remotely locked or changed passwords, and a mere 12% remotely erased data from the lost or stolen device to … More

The post People still shocked by how easy it is to track someone online appeared first on Help Net Security.

Supply Chain Security – Sex Appeal, Pain Avoidance and Allies

Every security professional and every privacy professional understands that supply chain security is as important as in-house security. (If you don’t understand this, stop and read Maria Korolov’s January 25, 2019 article in CSO, What is a supply chain attack? Why you should be wary of third-party providers.) So how do you marshal the resources […]… Read More

The post Supply Chain Security – Sex Appeal, Pain Avoidance and Allies appeared first on The State of Security.

There’s a growing disconnect between data privacy expectations and reality

There is a growing disconnect between how companies capitalize on customer data and how consumers expect their data to be used, according to a global online survey commissioned by RSA Security. Consumer backlash in response to the numerous high-profile data breaches in recent years has exposed one of the hidden risks of digital transformation: loss of customer trust. According to the study, which surveyed more than 6,000 adults across France, Germany, the United Kingdom and … More

The post There’s a growing disconnect between data privacy expectations and reality appeared first on Help Net Security.

Ep. 114 – Finding Love with Whitney Merrill

What do you get when you mix a lawyer, crypto junkie and a romantic together? Well, none other than our guest for this month, Whitney Merrill. – Feb 11, 2019

Get Involved

Got a great idea for an upcoming podcast? Send us a quick message on the contact form! Enjoy the outro music? Thanks to Clutch for allowing us to use Son of Virginia as our new SEPodcast theme music. And check out the schedule for all our training at Social-Engineer.Com. Finally, check out the Innocent Lives Foundation to help unmask online child predators.

The post Ep. 114 – Finding Love with Whitney Merrill appeared first on Security Through Education.

83% Of Consumers Believe Personalized Ads Are Morally Wrong

An anonymous reader quotes Forbes: A massive majority of consumers believe that using their data to personalize ads is unethical. And a further 76% believe that personalization to create tailored newsfeeds -- precisely what Facebook, Twitter, and other social applications do every day -- is unethical. At least, that's what they say on surveys. RSA surveyed 6,000 adults in Europe and America to evaluate how our attitudes are changing towards data, privacy, and personalization. The results don't look good for surveillance capitalism, or for the free services we rely on every day for social networking, news, and information-finding. "Less than half (48 percent) of consumers believe there are ethical ways companies can use their data," RSA, a fraud prevention and security company, said when releasing the survey results. Oh, and when a company gets hacked? Consumers blame the company, not the hacker, the report says.

Read more of this story at Slashdot.

‘Why Data, Not Privacy, Is the Real Danger’

"While it's creepy to imagine companies are listening in to your conversations, it's perhaps more creepy that they can predict what you're talking about without actually listening," writes an NBC News technology correspondent, arguing that data, not privacy, is the real danger. Your data -- the abstract portrait of who you are, and, more importantly, of who you are compared to other people -- is your real vulnerability when it comes to the companies that make money offering ostensibly free services to millions of people. Not because your data will compromise your personal identity. But because it will compromise your personal autonomy. "Privacy as we normally think of it doesn't matter," said Aza Raskin, co-founder of the Center for Humane Technology [and a former Mozilla team leader]. "What these companies are doing is building little models, little avatars, little voodoo dolls of you. Your doll sits in the cloud, and they'll throw 100,000 videos at it to see what's effective to get you to stick around, or what ad with what messaging is uniquely good at getting you to do something...." With 2.3 billion users, "Facebook has one of these models for one out of every four humans on earth. Every country, culture, behavior type, socio-economic background," said Raskin. With those models, and endless simulations, the company can predict your interests and intentions before you even know them.... Without having to attach your name or address to your data profile, a company can nonetheless compare you to other people who have exhibited similar online behavior... A professor at Columbia law school decries the concentrated power of social media as "a single point of failure for democracy." But the article also warns about the dangers of health-related data collected from smartwatches. "How will people accidentally cursed with the wrong data profile get affordable insurance?"

Read more of this story at Slashdot.

Study Analyzes Challenges, Concerns for IT/OT Convergence

A survey conducted by the Ponemon Institute on behalf of security solutions provider TUV Rheinland OpenSky analyzes the security, safety and privacy challenges and concerns related to the convergence between information technology (IT), operational technology (OT), and industrial internet of things (IIoT).

read more

Adiantum: A new encryption scheme for low-end Android devices

Google has created an alternative disk and file encryption mode for low-end Android devices that don’t have enough computational power to use the Advanced Encryption Standard (AES). About Adiantum: For the new encryption scheme, dubbed Adiantum, Google used existing standards, ciphers and hashing functions, but combined them in a more efficient way. Paul Crowley and Eric Biggers from the Android Security & Privacy Team noted that they have high confidence in the security of the … More
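Adiantum itself isn't exposed by mainstream Python libraries, but the ChaCha family of stream ciphers it builds on is. As a hedged illustration of the style of cipher involved (ChaCha20-Poly1305 via the third-party cryptography package, not Google's actual Adiantum construction):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()  # 256-bit key
nonce = os.urandom(12)                 # must never repeat for the same key
aead = ChaCha20Poly1305(key)

# ChaCha is fast in software, which is why it suits devices without AES hardware.
ciphertext = aead.encrypt(nonce, b"file contents", b"")  # b"" = no associated data
assert aead.decrypt(nonce, ciphertext, b"") == b"file contents"
```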

The post Adiantum: A new encryption scheme for low-end Android devices appeared first on Help Net Security.

These iOS apps have been secretly recording your screen activities

By Waqas

Apple has vowed to remove iOS apps that record screen data. User data recording has become an issue of concern in the cyber-security community, as the data is used to launch a variety of scams, profile customer demographics, and run targeted marketing gimmicks. Mobile phone manufacturers are trying to ensure that apps that indulge in sneaky […]

This is a post from HackRead.com. Read the original post: These iOS apps have been secretly recording your screen activities

Apple Tells App Developers To Disclose Or Remove Screen Recording Code

An anonymous reader quotes a report from TechCrunch: Apple is telling app developers to remove or properly disclose their use of analytics code that allows them to record how a user interacts with their iPhone apps -- or face removal from the app store, TechCrunch can confirm. In an email, an Apple spokesperson said: "Protecting user privacy is paramount in the Apple ecosystem. Our App Store Review Guidelines require that apps request explicit user consent and provide a clear visual indication when recording, logging, or otherwise making a record of user activity." "We have notified the developers that are in violation of these strict privacy terms and guidelines, and will take immediate action if necessary," the spokesperson added. It follows an investigation by TechCrunch that revealed major companies, like Expedia, Hollister and Hotels.com, were using a third-party analytics tool to record every tap and swipe inside the app. We found that none of the apps we tested asked the user for permission, and none of the companies said in their privacy policies that they were recording a user's app activity. Even though sensitive data is supposed to be masked, some data -- like passport numbers and credit card numbers -- was leaking.

Read more of this story at Slashdot.

Merging Facebook Messenger, WhatsApp, and Instagram: a technical, reputational hurdle

Secure messaging is supposed to be just that—secure. That means no backdoors, strong encryption, private messages staying private, and, for some users, the ability to securely communicate without giving up tons of personal data.

So, when news broke that scandal-ridden, online privacy pariah Facebook would expand secure messaging across its Messenger, WhatsApp, and Instagram apps, a broad community of cryptographers, lawmakers, and users asked: Wait, what?

Not only is the technology difficult to implement, but the company implementing it also has a poor track record with both user privacy and online security.

On January 25, the New York Times reported that Facebook CEO Mark Zuckerberg had begun plans to integrate the company’s three messaging platforms into one service, allowing users to potentially communicate with one another across its separate mobile apps. According to the New York Times, Zuckerberg “ordered that the apps all incorporate end-to-end encryption.”

The initial response was harsh.

Abroad, Ireland’s Data Protection Commission, which regulates Facebook in the European Union, immediately asked for an “urgent briefing” from the company, warning that previous data-sharing proposals raised “significant data protection concerns.”

In the United States, Democratic Senator Ed Markey of Massachusetts said in a statement: “We cannot allow platform integration to become privacy disintegration.”

Cybersecurity technologists swayed between cautious optimism and just plain caution.

Some professionals focused on the clear benefits of enabling end-to-end encryption across Facebook’s messaging platforms, emphasizing that any end-to-end encryption is better than none.

Former Facebook software engineer Alec Muffet, who led the team that added end-to-end encryption to Facebook Messenger, said on Twitter that the integration plan “clearly maximises the privacy afforded to the greatest [number] of people and is a good idea.”

Others questioned Facebook’s motives and reputation, scrutinizing the company’s established business model of hoovering up mass quantities of user data to deliver targeted ads.

Johns Hopkins University Associate Professor and cryptographer Matthew Green said on Twitter that “this move could potentially be good or bad for security/privacy. But given recent history and financial motivations of Facebook, I wouldn’t bet my lunch money on ‘good.’”

On January 30, Zuckerberg confirmed the integration plan during a quarterly earnings call. The company hopes to complete the project either this year or in early 2020.

It’s going to be an uphill battle.

Three applications, one bad reputation

Merging three separate messaging apps is easier said than done.

In a phone interview, Green said Facebook’s immediate technological hurdle will be integrating “three different systems—one that doesn’t have any end-to-end encryption, one where it’s default, and one with an optional feature.”

Currently, the messaging services across WhatsApp, Facebook Messenger, and Instagram have varying degrees of end-to-end encryption. WhatsApp provides default end-to-end encryption, whereas Facebook Messenger provides optional end-to-end encryption if users turn on “Secret Conversations.” Instagram provides no end-to-end encryption in its messaging service.

Further, Facebook Messenger, WhatsApp, and Instagram all have separate features—like Facebook Messenger’s ability to support more than one device and WhatsApp’s support for group conversations—along with separate desktop or web clients.

Green said to imagine someone using Facebook Messenger’s web client—which doesn’t currently support end-to-end encryption—starting a conversation with a WhatsApp user, where encryption is set by default. These lapses in default encryption, Green said, could create vulnerabilities. The challenge is in pulling together all those systems with all those variables.

“First, Facebook will have to likely make one platform, then move all those different systems into one somewhat compatible system, which, as far as I can tell, would include centralizing key servers, using the same protocol, and a bunch of technical development that has to happen,” Green said. “It’s not impossible. Just hard.”

But there’s more to Facebook’s success than the technical know-how of its engineers. There’s also its reputation, which, as of late, portrays the company as a modern-day data baron, faceplanting into privacy failure after privacy failure.

After the 2016 US presidential election, Facebook refused to call the surreptitious collection of 50 million users’ personal information a “breach.” When brought before Congress to testify about his company’s role in a potential international disinformation campaign, Zuckerberg deflected difficult questions and repeatedly claimed the company does not “sell” user data to advertisers. But less than one year later, a British parliamentary committee released documents that showed how Facebook gave some companies, including Airbnb and Netflix, access to its platform in exchange for favors—no selling required.

Five months ago, Facebook’s Onavo app was booted from the Apple App Store for gathering app data, and early this year, Facebook reportedly paid users as young as 13 years old to install the “Facebook Research” app on their own devices, an app intended strictly for Facebook employee use. Facebook pulled the app, but Apple had extra repercussions in mind: It removed Facebook’s enterprise certificate, which the company relied on to run its internal developer apps.

These repeated privacy failures are enough for some users to avoid Facebook’s end-to-end encryption experiment entirely.

“If you don’t trust Facebook, the place to worry is not about them screwing up the encryption,” Green said. “They want to know who’s talking to who and when. Encryption doesn’t protect that at all.”

If not Facebook, then who?

Reputationally, there are at least two companies that users look to for both strong end-to-end encryption and strong support of user privacy and security—Apple and Signal, which respectively run the iMessage and Signal Messenger apps.

In 2013, Open Whisper Systems developed the Signal Protocol. This encryption protocol provides end-to-end encryption for voice calls, video calls, and instant messaging, and is implemented by WhatsApp, Facebook Messenger, Google’s Allo, and Microsoft’s Skype to varying degrees. Journalists, privacy advocates, cryptographers, and cybersecurity researchers routinely praise Signal Messenger, the Signal Protocol, and Open Whisper Systems.

“Use anything by Open Whisper Systems,” said former NSA defense contractor and government whistleblower Edward Snowden.

“[Signal is] my first choice for an encrypted conversation,” said cybersecurity researcher and digital privacy advocate Bruce Schneier.
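The Signal Protocol's full double ratchet is far more elaborate, but the Diffie-Hellman key agreement at its base can be sketched with the X25519 primitive from Python's third-party cryptography package (a simplified illustration, not Signal's actual code):

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Each party generates a key pair and shares only the public half.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Each side combines its own private key with the other's public key...
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())

# ...and both arrive at the same 32-byte secret, which never crosses the wire.
assert alice_shared == bob_shared
```

Signal layers exchanges like this repeatedly so that each message is protected by fresh keys, limiting the damage if any single key leaks.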

Separately, Apple has proved its commitment to user privacy and security through statements made by company executives, updates pushed to fix vulnerabilities, and legal action taken in US courts.

In 2016, Apple fought back against a government request that the company design an operating system capable of allowing the FBI to crack an individual iPhone. Such an exploit, Apple argued, would be too dangerous to create. And last year, when an American startup began selling iPhone-cracking devices—called GrayKey—Apple fixed the vulnerability through an iOS update.

Repeatedly, Apple CEO Tim Cook has supported user security and privacy, saying in 2015: “We believe that people have a fundamental right to privacy. The American people demand it, the constitution demands it, morality demands it.”

But even with these sterling reputations, the truth is, cybersecurity is hard to get right.

Last year, cybersecurity researchers found a critical vulnerability in Signal’s desktop app that allowed threat actors to obtain users’ plaintext messages. Signal’s developers fixed the vulnerability within a reported five hours.

Last week, Apple’s FaceTime app, which encrypts video calls between users, suffered a privacy bug that allowed threat actors to briefly spy on victims. Apple fixed the bug after news of the vulnerability spread.

In fact, several secure messaging apps, including Telegram, Viber, Confide, Allo, and WhatsApp have all reportedly experienced security vulnerabilities, while several others, including Wire, have previously drawn ire because of data storage practices.

But vulnerabilities should not scare people from using end-to-end encryption altogether. On the contrary, they should spur people into finding the right end-to-end encrypted messaging app for themselves.

No one-size-fits-all, and that’s okay

There is no such thing as a perfect, one-size-fits-all secure messaging app, said Electronic Frontier Foundation Associate Director of Research Gennie Gebhart, because there’s no such thing as a perfect, one-size-fits-all definition of secure.

“In practice, for some people, secure means the government cannot intercept their messages,” Gebhart said. “For others, secure means a partner in their physical space can’t grab their device and read their messages. Those are two completely different tasks for one app to accomplish.”

In choosing the right secure messaging app for themselves, Gebhart said people should ask what they need and what they want. Are they worried about governments or service providers intercepting their messages? Are they worried about people in their physical environment gaining access to their messages? Are they worried about giving up their phone number and losing some anonymity?

In addition, it’s worth asking: What are the risks of an accident, like, say, mistakenly sending an unencrypted message that should have been encrypted? And, of course, what app are friends and family using?

As for the constant news of vulnerabilities in secure messaging apps, Gebhart advised not to overreact. The good news is, if you’re reading about a vulnerability in a secure messaging tool, then the people building that tool know about the vulnerability, too. (Indeed, developers fixed the majority of the security vulnerabilities listed above.) The best advice in that situation, Gebhart said, is to update your software.

“That’s number one,” Gebhart said, explaining that, though this line of defense is “tedious and maybe boring,” sometimes boring advice just works. “Brush your teeth, lock your door, update your software.”

Cybersecurity is many things. It’s difficult, it’s complex, and it’s a team sport. That team includes you, the user. Before you use a messenger service, or go online at all, remember to follow the boring advice. You’ll better secure yourself and your privacy.

The post Merging Facebook Messenger, WhatsApp, and Instagram: a technical, reputational hurdle appeared first on Malwarebytes Labs.

Many Popular iPhone Apps Secretly Record Your Screen Without Asking

An anonymous reader quotes a report from TechCrunch: Many major companies, like Air Canada, Hollister and Expedia, are recording every tap and swipe you make on their iPhone apps. In most cases you won't even realize it. And they don't need to ask for permission. You can assume that most apps are collecting data on you. Some even monetize your data without your knowledge. But TechCrunch has found several popular iPhone apps, from hoteliers, travel sites, airlines, cell phone carriers, banks and financiers, that don't ask or make it clear -- if at all -- that they know exactly how you're using their apps. Worse, even though these apps are meant to mask certain fields, some inadvertently expose sensitive data. Apps like Abercrombie & Fitch, Hotels.com and Singapore Airlines also use Glassbox, a customer experience analytics firm, one of a handful of companies that allows developers to embed "session replay" technology into their apps. These session replays let app developers record the screen and play them back to see how its users interacted with the app to figure out if something didn't work or if there was an error. Every tap, button push and keyboard entry is recorded -- effectively screenshotted -- and sent back to the app developers. [...] Apps that are submitted to Apple's App Store must have a privacy policy, but none of the apps we reviewed make it clear in their policies that they record a user's screen. Glassbox doesn't require any special permission from Apple or from the user, so there's no way a user would know. When asked, Glassbox said it doesn't enforce its customers to mention its usage in their privacy policy. A mobile expert known as The App Analyst recently found Air Canada's iPhone app to be improperly masking the session replays when they were sent, exposing passport numbers and credit card data in each replay session. Just weeks earlier, Air Canada said its app had a data breach, exposing 20,000 profiles.
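The core of the "session replay" problem described above is a masking step that only protects fields someone remembered to list. A minimal, hypothetical sketch (the function and field names are illustrative, not Glassbox's actual API):

```python
# Hypothetical sketch of session-replay field masking: every tap and keystroke
# is recorded as an event; fields listed as sensitive are masked before the
# event is sent to the analytics backend. Fields NOT on the list go out raw.
SENSITIVE_FIELDS = {"passport_number", "card_number", "cvv"}

def record_event(field: str, value: str) -> dict:
    """Record one user-input event, masking listed sensitive fields."""
    masked = "****" if field in SENSITIVE_FIELDS else value
    return {"field": field, "value": masked}

# A listed field is masked...
assert record_event("card_number", "4111111111111111")["value"] == "****"
# ...but an unlisted one is captured verbatim.
assert record_event("search_query", "flights to YYZ")["value"] == "flights to YYZ"
```

The Air Canada failure reported above is exactly the second case: fields that should have been masked weren't on the list, so passport and card data went out in the replays.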

Read more of this story at Slashdot.

How much does your credit card issuer know about you?

Cash is slowly but steadily becoming one of the least popular payment methods in developed countries. Here in the US, consumers make roughly ten times as many purchases with cards as with cash. Consumers are giving up on checks and cash handling and opting in to the convenience, protection, and rewards often offered by issuing banks.

Credit card companies often attract clients by offering comprehensive reward-points systems, sign-up bonuses, and perks such as early access to concert tickets and invitations to special events organized for clients of a particular network: Visa, Mastercard, American Express, or Discover. Credit cards let cardholders purchase goods and services; the transactions rest on the cardholder’s promise to pay back the borrowed amounts along with additional charges such as interest and monthly service fees.

Credit card issuers possess all sorts of personal information, including current and previous addresses, income, full name, and date of birth. There is no harm there; it’s normal for businesses to ask for personal information so they can verify your identity and determine your trustworthiness. However, personal information is not the only valuable thing cardholders give away when they start a relationship with a credit card company.

While issuing banks are known to profit from the fees associated with credit card usage, consumers are giving up vast amounts of personal information that may be used by the credit card companies and end up shared with third parties. Such information includes your spending habits, shopping patterns, preferences, life secrets, and in some cases even your location.

What information do you give to credit card issuers and how do credit card companies keep track of your buying habits?

Location

If you use mobile banking, the chances that your credit card issuer knows your location at all times are high. The information collected can be used for both marketing and security purposes. If you tend to spend a lot on dining, you might be offered a new credit card that gives you even more rewards for money spent on a night out. Sharing your location with your credit card issuer also helps banks battle fraud: your issuer won’t be concerned by an international transaction if you tend to travel a lot.

Spending habits and patterns

Credit card issuers can learn a lot about you from your spending habits and patterns. If you spend a lot of money on international trips, they might use that information to suggest travel cards with no foreign transaction fees, or guide you to an affiliated travel website so you spend more on the same card. Yearly, monthly, and weekly patterns show banks what your day looks like and give them an idea of the products and services you may need.

Trustworthiness

Card issuers use your transaction history to decide whether you are trustworthy and reliable. You may qualify for a credit limit increase if your income and debt ratios are at acceptable levels, all your payments are on time, and you pay a regular monthly fee to a luxury carmaker. Banks love people who pay their bills on time! It won’t be a surprise if you are offered better credit card conditions as your credit score grows over the years. Banks may even disregard a lousy credit history if you are a long-term client with a pattern they like: you are considered a valued customer as long as you use their card and pay your bills on time.

How do they use the data?

In the age of big data, your card transaction history says a lot about you and how you live your life, so it is no surprise that many organizations want access to such data. Life insurance companies might give more favorable quotes to people who go to the gym four times a week, spend no money on tobacco or liquor, and buy organic. So you can imagine that apart from providing you with solutions that suit your lifestyle, card issuers often partner with data-mining companies whose goal is to make you spend more money.

Banks also share transactional data with third parties such as data brokers, who work with advertisers and marketers always ready to target you with what they believe are relevant marketing campaigns for goods and services you may be willing to purchase. If you do not want your data analyzed, you can opt out of your Visa cards here and your Mastercard cards here. Opt-out requests last only five years, so to maintain your choice you have to manually enter the card details of every new or replacement card you receive.

Is this enough to be secure and to prevent your data from being spread around?

Not really. The best way to know what data you share with your credit card issuer is to read the Terms & Conditions agreement they give you at sign-up. Having antivirus software installed on all your connected devices also helps: staying protected denies cybercriminals the ability to obtain the missing pieces about you from the constant data leaks of the last decade.

Download Panda FREE VPN

The post How much does your credit card issuer know about you? appeared first on Panda Security Mediacenter.

Google Launches Password Checkup Extension To Detect Breached Credentials

Breached usernames and passwords have become a pain in the neck for online security. Even if your account

Google Launches Password Checkup Extension To Detect Breached Credentials on Latest Hacking News.

NYPD To Google: Stop Revealing the Location of Police Checkpoints

schwit1 shares a report from the New York Post: The NYPD is calling on Google to yank a feature from its Waze traffic app that tips off drivers to police checkpoints -- warning it could be considered "criminal conduct," according to a report on Wednesday. The department sent a cease-and-desist letter over the weekend demanding Google disable the crowd-sourced app's function that allows motorists to pinpoint police whereabouts, StreetsBlog reported. "Individuals who post the locations of DWI checkpoints may be engaging in criminal conduct since such actions could be intentional attempts to prevent and/or impair the administration of the DWI laws and other relevant criminal and traffic laws," wrote Acting Deputy Commissioner for Legal Matters Ann Prunty in the letter, according to the website. My $0.02 is that the NYPD loses on first amendment grounds.

Read more of this story at Slashdot.

Smashing Security #114: Darknet Diaries, death, and beauty apps


Jack Rhysider from the “Darknet Diaries” podcast joins us to chat about his interview with the elusive Hacker Giraffe, how a death is preventing cryptocurrency investors from reaching their money, and how ‘beauty camera’ apps are redirecting users to phishing websites and stealing their selfies.

All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast hosted by computer security veterans Graham Cluley and Carole Theriault.

How to Delete Accidentally Sent Messages, Photos on Facebook Messenger

Ever sent a message on Facebook Messenger and immediately regretted it? Fired off an embarrassing text to your boss in the heat of the moment late at night? Or accidentally sent messages or photos to the wrong group chat? Of course you have. We have all sent drunk texts and embarrassing photos that we later regretted, forced to live with our mistakes. Good news,

DuckDuckGo Warns that Google Does Not Respect ‘Do Not Track’ Browser Setting

DuckDuckGo cautions internet users that companies like Google, Facebook, and Twitter do not respect the "Do Not Track" setting on web browsers. From a report: According to DuckDuckGo's research, over 77% of US adults are not aware of that fact. The "Do Not Track" (DNT) setting on browsers sends signals to web services to stop tracking a user's activity. However, the DNT setting is only a voluntary signal which websites are not obligated to respect. "It can be alarming to realize that Do Not Track is about as foolproof as putting a sign on your front lawn that says "Please, don't look into my house" while all of your blinds remain open."
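Under the hood, DNT is nothing more than an HTTP request header (`DNT: 1`) that the browser attaches to every request. Honoring it is entirely the server's choice, as this minimal sketch (hypothetical function name, not any real site's code) shows:

```python
# "Do Not Track" is just a request header the browser sends; nothing enforces it.
# A site that respects DNT makes a voluntary check like this -- and a site that
# ignores DNT simply never performs the check at all.
def should_track(request_headers: dict) -> bool:
    """Return False only if the site *chooses* to honor DNT: 1."""
    return request_headers.get("DNT") != "1"

assert should_track({"DNT": "1"}) is False  # honored only by cooperative sites
assert should_track({}) is True             # no DNT header: tracking proceeds
```

This is why DuckDuckGo's front-lawn-sign analogy fits: the signal is sent, but compliance is purely voluntary.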

Read more of this story at Slashdot.

Upcoming Firefox version to offer fingerprinting & cryptomining protection

By Uzair Amir

There is very good news for Mozilla Firefox users. After improving the user experience with tracking protection function offering content blocking features and other changes in Firefox 63, Mozilla is aiming for another significant update in the upcoming version of the browser. The new version of Mozilla Firefox called Firefox 67, which is planned to […]

This is a post from HackRead.com Read the original post: Upcoming Firefox version to offer fingerprinting & cryptomining protection

Facebook’s New Privacy Hires

The Wired headline sums it up nicely -- "Facebook Hires Up Three of Its Biggest Privacy Critics":

In December, Facebook hired Nathan White away from the digital rights nonprofit Access Now, and put him in the role of privacy policy manager. On Tuesday of this week, lawyers Nate Cardozo, of the privacy watchdog Electronic Frontier Foundation, and Robyn Greene, of New America's Open Technology Institute, announced they also are going in-house at Facebook. Cardozo will be the privacy policy manager of WhatsApp, while Greene will be Facebook's new privacy policy manager for law enforcement and data protection.

I know these people. They're ethical, and they're on the right side. I hope they continue to do their good work from inside Facebook.

Four differences between the GDPR and the CCPA

By passing the California Consumer Privacy Act (CCPA), which goes into effect on January 1, 2020, the Golden State is taking a major step in the protection of consumer data. The new law gives consumers insight into and control of their personal information collected online. This follows a growing number of privacy concerns around corporate access to and sales of personal information with leading tech companies like Facebook and Google. The bill was signed by … More

The post Four differences between the GDPR and the CCPA appeared first on Help Net Security.

World’s largest data dump surfaces on web with 2.2 billion accounts

By Waqas

It hasn’t even been 15 days since security researchers discovered details of the world’s biggest online private data dump, and now its second “installment” has been posted online. As per the report from Heise.de, a German-language website, the first collection, published on January 17 and dubbed Collection #1, had approx. 770 […]

This is a post from HackRead.com Read the original post: World’s largest data dump surfaces on web with 2.2 billion accounts

Apple issued a partial fix for recent FaceTime spying bug

On Friday, Apple announced that the recently discovered FaceTime issue has been partially fixed; the company plans to release a complete update next week.

This week, Apple issued a partial fix for the recently discovered FaceTime issue; the tech giant plans to release a complete update next week.

Apple experts implemented a server-side patch, but the Group FaceTime feature will be enabled again next week.

The security vulnerability in Apple’s FaceTime lets you hear the audio of the person you are calling before they pick up, simply by adding your own number to a group chat.

On the receiver’s side, it appears as if the call still hasn’t been answered.
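The reported mechanics of the flaw can be illustrated with a simplified model (this is a hypothetical sketch of the described behavior, not Apple's code): converting a one-on-one call into a group call activated the callee's audio even though the call was never answered.

```python
# Simplified, hypothetical illustration of the reported Group FaceTime flaw:
# promoting a ringing call to a *group* call started transmitting the callee's
# audio before the callee ever accepted.
class FaceTimeCall:
    def __init__(self, caller: str, callee: str):
        self.participants = [caller]
        self.callee = callee
        self.answered = False
        self.transmitting_callee_audio = False

    def add_participant(self, number: str) -> None:
        self.participants.append(number)
        # Buggy behavior: becoming a group call (2+ participants) activated
        # the callee's microphone regardless of whether the call was answered.
        if len(self.participants) > 1 and not self.answered:
            self.transmitting_callee_audio = True

call = FaceTimeCall(caller="+1-555-0100", callee="+1-555-0199")
call.add_participant("+1-555-0100")   # caller adds their *own* number
assert call.answered is False
assert call.transmitting_callee_audio  # audio leaks before pickup
```

Apple's server-side mitigation, described above, amounted to disabling the group-call path entirely until a proper client fix shipped.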

The bug was discovered by Grant Thompson, a 14-year-old from Arizona, who attempted to report the flaw to Apple for more than 10 days without success.

“There’s a major bug in FaceTime right now that lets you connect to someone and hear their audio without the person even accepting the call.” reads a thread published on MacRumors.  

“This bug is making the rounds on social media, and as 9to5Mac points out, there are major privacy concerns involved. You can force a FaceTime call with someone and hear what they’re saying, perhaps even without their knowledge. 

We tested the bug at MacRumors and were able to initiate a FaceTime call with each other where we could hear the person on the other end without ever having pressed the button to accept the call.”

The flaw affected iOS versions 12.1 and 12.2, and macOS Mojave.

FaceTime bug

Just after the bug was disclosed, Apple suspended the Group FaceTime feature.

Apple has officially thanked Thompson for reporting the bug and apologized for the delay in receiving the report. The company has promised to improve the process for receiving reports such as the one related to the FaceTime issue.

“We sincerely apologize to our customers who were affected and all who were concerned about this security issue. We appreciate everyone’s patience as we complete this process,” reads the statement issued by Apple.

“We want to assure our customers that as soon as our engineering team became aware of the details necessary to reproduce the bug, they quickly disabled Group FaceTime and began work on the fix.”

New York Governor Andrew M. Cuomo and Attorney General Letitia James announced a probe into Apple’s failure to warn customers about the flaw and its delay in responding to the report.

“In the wake of this egregious bug that put the privacy of New Yorkers at risk, I support this investigation by the Attorney General into this serious consumer rights issue and direct the Division of Consumer Protection to help in any way possible,” Governor Cuomo announced. “We need a full accounting of the facts to confirm businesses are abiding by New York consumer protection laws and to help make sure this type of privacy breach does not happen again.”

“This FaceTime breach is a serious threat to the security and privacy of the millions of New Yorkers who have put their trust in Apple and its products over the years,” said Attorney General James.

“My office will be conducting a thorough investigation into Apple’s response to the situation, and will evaluate the company’s actions in relation to the laws set forth by the State of New York. We must use every tool at our disposal to ensure that consumers are always protected.”

Pierluigi Paganini

(SecurityAffairs – FaceTime bug, privacy)

The post Apple issued a partial fix for recent FaceTime spying bug appeared first on Security Affairs.


One of the Biggest At-Home DNA Testing Companies Is Working With the FBI

An anonymous reader quotes a report from BuzzFeed News: Family Tree DNA, one of the largest private genetic testing companies whose home-testing kits enable people to trace their ancestry and locate relatives, is working with the FBI and allowing agents to search its vast genealogy database in an effort to solve violent crime cases, BuzzFeed News has learned. Federal and local law enforcement have used public genealogy databases for more than two years to solve cold cases, including the landmark capture of the suspected Golden State Killer, but the cooperation with Family Tree DNA and the FBI marks the first time a private firm has agreed to voluntarily allow law enforcement access to its database. While the FBI does not have the ability to freely browse genetic profiles in the library, the move is sure to raise privacy concerns about law enforcement gaining the ability to look for DNA matches, or more likely, relatives linked by uploaded user data. The Houston-based company, which touts itself as a pioneer in the genetic testing industry and the first to offer a direct-to-consumer test kit, disclosed its relationship with the FBI to BuzzFeed News on Thursday, saying in a statement that allowing access "would help law enforcement agencies solve violent crimes faster than ever." While Family Tree does not have a contract with the FBI, the firm has agreed to test DNA samples and upload the profiles to its database on a case-by-case basis since last fall, a company spokesperson told BuzzFeed News. Its work with the FBI is "a very new development, which started with one case last year and morphed," she said. To date, the company has cooperated with the FBI on fewer than 10 cases. The Family Tree database is free to access and can be used by anyone with a DNA profile to upload, not just paying customers.

Read more of this story at Slashdot.

Apple Will Store Russian User Data Locally, Possibly Decrypt on Request: Report

After resisting local government's mandates for years, Apple appears to have agreed to store Russian citizens' data within the country, a report says. From a report: According to a Foreign Policy report, Russia's telecommunications and media agency Roskomnadzor has confirmed that Apple will comply with the local data storage law, which appears to have major implications for the company's privacy initiatives. Apple's obligations in Russia would at least parallel ones in China, which required it turn over Chinese citizens' iCloud data to a partially government-operated data center last year. In addition to processing and storing Russian citizens' data on servers physically within Russia, Apple will apparently need to decrypt and produce user data for the country's security services as requested.

Read more of this story at Slashdot.

SecurityWeek RSS Feed: UK Data Watchdog Fines Leave.EU, Eldon Insurance

The UK data protection regulator (the Information Commissioner's Office – ICO) launched a wide-ranging investigation into the use of personal information for political purposes following the Facebook/Cambridge Analytica affair. It resulted in the publication of a lengthy report titled 'Democracy disrupted? Personal information and political influence' in July 2018, and a fine on Facebook set at the maximum amount possible – £500,000 ($645,000).

read more



SecurityWeek RSS Feed

Webroot Blog: Cyber News Rundown: Apple Removes Facebook Research App

Reading Time: ~2 min.

Facebook Research App Removed from App Store

After seeing its Onavo VPN application removed from the Apple App Store last year, Facebook re-branded the service as a “research” app and made it available through non-Apple testing services. The app requires users to download and install a Facebook enterprise developer certificate, essentially allowing the company complete access to the device. While many users seem to be in it only for the monthly gift cards, they remain unaware of the extreme level of surveillance the app is capable of conducting, including accessing all social media messages, sent and received SMS messages and images, and more. Apple has since revoked Facebook’s iOS enterprise developer certificate after seeing how the company collects data on its customers.

Japan Overwhelmed by Love Letter Malware Campaign

Since its discovery a couple of weeks ago, the Love Letter malware campaign has been determined to be responsible for a massive spike in malicious emails. Hidden in the suspiciously titled attachments are several harmful elements, ranging from cryptocurrency miners to the latest version of the GandCrab ransomware. Unfortunately for users outside Japan, the campaign’s country of origin, the initial payload can determine the system’s location and download additional malicious payloads tailored to the specific country.

Apple FaceTime Bug Leads to Lawsuit

Following the recent disclosure of a critical vulnerability in Apple’s FaceTime app, the manufacturer was forced to take the Group FaceTime feature offline. Unfortunately, prior to the shutdown, a Houston lawyer filed a case alleging that the vulnerability allowed unauthorized callers to eavesdrop on a private deposition without any consent. By simply adding a user to a group FaceTime call, callers were able to listen through the other device’s microphone without that user answering the call.

Authorities Seize Servers for Dark Online Marketplace

Authorities from the US and Europe announced this week that, through their combined efforts, they had located and seized servers belonging to an illicit online marketplace known as xDedic. While this was only one of many such marketplaces, it is believed to have facilitated over $68 million in fraud and other malicious activity. Hopefully, this seizure will help law enforcement understand how such marketplaces operate and assist in uncovering larger operations.

French Engineering Firm Hit with Ransomware

Late last week the French engineering firm Altran Technologies was forced to take its central network and supported applications offline after suffering a ransomware attack. While not yet confirmed, the malware used in the attack has been tentatively traced to a LockerGoga ransomware sample uploaded to a malware detection site the very same day. Along with appending a “.locked” extension to encrypted files, LockerGoga has been spotted in multiple European countries and appears to spread via an initial phishing campaign, then through compromised internal networks.

The post Cyber News Rundown: Apple Removes Facebook Research App appeared first on Webroot Blog.



Webroot Blog

SecurityWeek RSS Feed: New York Investigating Apple’s Response to FaceTime Spying Bug

New York authorities have announced the launch of an investigation into the recently disclosed FaceTime vulnerability that can be exploited to spy on users. The probe focuses on Apple’s failure to warn customers and the company’s slow response.

read more



SecurityWeek RSS Feed

Is your organization ready for the data explosion?

“Data is the new oil,” and its quantity is growing at an exponential rate, with IDC forecasting a 50-fold increase from 2010 to 2020. In fact, by 2020 an estimated 1.7 megabytes of new information will be generated every second for every human being. This creates bigger operational issues for organizations, with both NetOps and SecOps teams grappling to achieve superior performance, security, speed and network visibility. This delicate balancing act will become even … More

The post Is your organization ready for the data explosion? appeared first on Help Net Security.

Apple pulls Facebook enterprise certificate

It’s been an astonishing few days for Facebook: they’ve seen an app pulled and their enterprise certificate revoked, with big consequences.

What happened?

Apple issues enterprise certificates to organizations so they can create internal apps. Those apps don’t end up on the App Store, because the terms of service don’t allow it; anything storefront-bound must pass Apple’s mandatory app review before being put up for sale.

What went wrong?

Facebook put together a “Facebook Research” market-research app using the internal process. However, they then went on to distribute it externally to non-Facebook employees. And by “non-Facebook employees” we mean “people between the ages of 13 and 35.” In return for access to large swathes of user data, the participants received monthly $20 gift cards.

The program was managed via various Beta testing services, and within hours of news breaking, Facebook stated they’d pulled the app.

Problem solved?

Not exactly. Apple has, in fact, revoked Facebook’s certificate, essentially breaking all of their internal apps and causing major disruptions for their 33,000 or so employees in the process. As per the Apple statement:

We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers…a clear breach of their agreement.

Whoops

Yes, whoops. Now the race is on to get things back up and running over at Facebook HQ. Things may be a little tense behind the scenes due to, uh, something similar involving a VPN-themed app collecting data it shouldn’t have been earlier this year. That one didn’t use the developer certificate, but it took some 33 million downloads before Apple noticed and decided to pull the plug.

Could things get any worse for Facebook?

Cue Senator Ed Markey, with a statement on this particular subject:

“It is inherently manipulative to offer teens money in exchange for their personal information when younger users don’t have a clear understanding of how much data they’re handing over and how sensitive it is,” said Senator Markey. “I strongly urge Facebook to immediately cease its recruitment of teens for its Research Program and explicitly prohibit minors from participating. Congress also needs to pass legislation that updates children’s online privacy rules for the 21st century. I will be reintroducing my ‘Do Not Track Kids Act’ to update the Children’s Online Privacy Protection Act by instituting key privacy safeguards for teens.

“But my concerns also extend to adult users. I am alarmed by reports that Facebook is not providing participants with complete information about the extent of the information that the company can access through this program. Consumers deserve simple and clear explanations of what data is being collected and how it is being used.”

Well, that definitely sounds like a slide towards “worse” instead of “better.”

A one-two punch?

Facebook has also drawn heavy criticism this past week for the wonderfully named “friendly fraud” practice of kids making dubious purchases, followed by chargebacks. It happens, sure, but perhaps not quite like this. From the linked Register article:

Facebook, according to the full lawsuit, was encouraging game devs to build Facebook-hosted games that allowed children to input parents’ credit card details, save those details, and then bill over and over without further authorisation.

While large amounts of money were being spent, some refunds proved to be problematic. Employees were querying why most apps with child-related issues are “defaulting to the highest-cost setting in the purchase flows.” You’d better believe there may be further issues worth addressing.

What next?

The Facebook research program app will continue to run on Android, which is unaffected by the certificate antics. There’s also an app from Google in Apple land, which has since been pulled for likewise operating under Apple’s enterprise developer program. No word yet as to whether or not Apple will revoke Google’s certificate, too. It could be a bumpy few days for some organizations as we wait to see what Apple does next. Facebook, too, could certainly do with a lot less bad publicity as it struggles to regain positive momentum. Whether that happens or not remains to be seen.

The post Apple pulls Facebook enterprise certificate appeared first on Malwarebytes Labs.

Naked Security – Sophos: 14k HIV+ records leaked, Singapore says sorry

Singapore's Ministry of Health said the HIV status of 14,200 people, plus confidential data of 2,400 of their contacts, is in the possession of somebody who's not authorized to have it and who's published it online.




Google also abused its Apple developer certificate to collect iOS user data

It turns out that Google, like Facebook, abused its Apple Enterprise Developer Certificate to distribute a data collection app to iOS users, in direct contravention of Apple’s rules for the distribution program. Unlike Facebook, though, the company did not wait for Apple to revoke its certificate. Instead, it quickly disabled the app on iOS devices, admitted its mistake and extended a public apology to Apple. Google’s Screenwise Meter app is very similar … More

The post Google also abused its Apple developer certificate to collect iOS user data appeared first on Help Net Security.

Kaspersky Lab official blog: Transatlantic Cable podcast, episode 76

In the 76th edition of the Kaspersky Lab Transatlantic Cable Podcast, David and I cover a number of stories pertaining to privacy and, surprisingly, browsers. To start things off, we look at the issue Apple faced earlier in the week, where a bug in FaceTime reported by a kid wound up in the public eye.

Following that tale, we jump into a stranger-than-fiction story about Facebook and their controversial tactic to have users install a VPN to share their data with Facebook. The kicker is that the target audience included kids.

Following Facebook, we stay on the privacy bandwagon and look at the work that Mozilla did to improve the latest version of Firefox. We close out the podcast bidding happy trails to Internet Explorer 10. If you like the podcast, please consider sharing with your friends or subscribing below; if you are interested in the full text of the articles, please click the links below.




Taking ethical action in identity: 5 steps for better biometrics

Glance at your phone. Tap a screen. Secure access granted! This is the power of biometric identity at work. The convenience of unlocking your phone with a fingertip or your face is undeniable. But ethical issues abound in the biometrics field. The film Minority Report demonstrated one possible future, in terms of precise advertising targeting based on a face. But the Spielberg film also demonstrated some of the downsides of biometrics – the stunning lack … More

The post Taking ethical action in identity: 5 steps for better biometrics appeared first on Help Net Security.

Smashing Security #113: FaceTime, Facebook, faceplant


FaceTime bug allows callers to see and hear you *before* you answer the phone, Facebook’s Nick Clegg tries to convince us the social network is changing its ways, and IoT hacking is big in Japan.

All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by John Hawes from AMTSO.

SecurityWeek RSS Feed: Yahoo Breach Settlement Rejected by Judge

A U.S. judge has rejected the settlement between Yahoo and users impacted by the massive data breaches suffered by the company, citing, among other things, inadequate disclosure of the settlement fund and high attorney fees.


Security Affairs: Facebook paid teens $20 to install a Research App that spies on them

Facebook is paying teens $20 a month to use its VPN app, called Facebook Research, which monitors their activity via their mobile devices.

2018 was a terrible year for Facebook, which found itself in the middle of the Cambridge Analytica privacy scandal. The social network giant was involved in other cases as well; for example, it was forced to remove its Onavo VPN app from Apple’s App Store after it was caught collecting user data through Onavo Protect, the Virtual Private Network (VPN) service it acquired in 2013.

According to a report presented by Privacy International in December at the 35C3 hacking conference in Germany, the list of Android apps that send tracking and personal information back to Facebook includes dozens of apps, among them Kayak, Yelp, and Shazam.

Now, according to a report published by TechCrunch, Facebook is paying teenagers around $20 a month to use its VPN app, which monitors their activity via their mobile devices.

Facebook Research App Icon

Facebook is accused of using the VPN app to track users’ activities across multiple apps, especially their use of third-party apps.

“Desperate for data on its competitors, Facebook  has been secretly paying people to install a ‘Facebook Research’ VPN that lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August.” reads the report published by Techcrunch.

“Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity, a TechCrunch investigation confirms.”

TechCrunch reported that some documentation refers to the Facebook Research program as “Project Atlas,” and added that Facebook confirmed the existence of the app.

The news is disconcerting: despite the privacy cases in which Facebook has been involved, the company has been paying users aged 13 to 35 as much as $20 per month, plus referral fees, to install Facebook Research on their iOS or Android devices. The company describes Facebook Research as a “paid social media research study.”

Facebook is distributing the app via the third-party beta testing services Applause, BetaBound, and uTest, which were also running ads on Instagram and Snapchat to recruit participants to install Facebook Research.

Let’s take a closer look at the Facebook Research app. The app requires users to install a custom root enterprise certificate that allows the social media giant to collect private messages in social media apps, chats from instant messaging apps, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location-tracking apps installed on the users’ devices.

Experts pointed out that in some cases the Facebook Research app also asked users to take screenshots of their Amazon order histories and send them back to Facebook.

Reading the Applause site, it is possible to learn more about how the company could use the data:

“By installing the software, you’re giving our client permission to collect data from your phone that will help them understand how you browse the internet, and how you use the features in the apps you’ve installed . . . This means you’re letting our client collect information such as which apps are on your phone, how and when you use them, data about your activities and content within those apps, as well as how other people interact with you or your content within those apps. You are also letting our client collect information about your internet browsing activity (including the websites you visit and data that is exchanged between your device and those websites) and your use of other online services. There are some instances when our client will collect this information even where the app uses encryption, or from within secure browser sessions,” the terms read.

Facebook confirmed that the app was developed for research purposes, in particular to study how people use their mobile devices.

“Like many companies, we invite people to participate in research that helps us identify things we can be doing better,” explained Facebook.

“helping Facebook understand how people use their mobile devices, we have provided extensive information about the type of data we collect and how they can participate. We do not share this information with others, and people can stop participating at any time.”

Facebook’s spokesperson claimed that the app doesn’t violate Apple’s Enterprise Certificate program. TechCrunch points out that since Apple requires developers to use this certificate system only for distributing internal corporate apps to their own employees, “recruiting testers and paying them a monthly fee appears to violate the spirit of that rule.”

After the disclosure of the report, Facebook announced that it is planning to shut down the iOS version of the Facebook Research app.

Pierluigi Paganini

(SecurityAffairs – Facebook Research app, Privacy)

The post Facebook paid teens $20 to install a Research App that spies on them appeared first on Security Affairs.





Facebook to shut down iOS app that allowed for near total data access

When Apple banned its Onavo VPN app from its App Store last summer, Facebook repackaged the app, named it “Facebook Research” and offered it for download through three app beta testing services, TechCrunch has discovered. About the Facebook Research app Facebook used the Onavo app to collect the aforementioned data of both Android and iOS users and, based on the information gleaned from it, made decisions to acquire competing apps and add popular features … More

The post Facebook to shut down iOS app that allowed for near total data access appeared first on Help Net Security.

Mozilla releases anti tracking policy, enhances tracking protection in Firefox 65

Mozilla has released Firefox 65, which includes enhanced, configurable protection against online tracking. The organization has also published an official anti tracking policy that effectively maps out the direction which its popular browser will take when it comes to blocking online tracking. Enhanced Tracking Protection controls Firefox 65 carries a number of improvements and various security fixes, but the one that gets most attention is enhanced tracking protection through simplified content blocking settings. Users can … More

The post Mozilla releases anti tracking policy, enhances tracking protection in Firefox 65 appeared first on Help Net Security.

Ask Slashdot: What Could Go Wrong In Tech That Hasn’t Already Gone Wrong?

dryriver writes: If you look at the last 15 years in tech, just about everything that could go wrong seemingly has gone wrong. Everything you buy and bring into your home tracks you in some way or the other. Some software can only be rented now -- no permanent licenses available to buy. PC games are tethered into cloud crap like Steam, Origin and UPlay. China is messing with unborn baby genes. Drones have managed to mess up entire airports. The Scandinavians have developed a serious hatred of cash money and are instead getting themselves chipped. CPUs have horrible security. Every day some huge customer database somewhere gets pwned by hackers. Cybercrime has gone through the roof. You cannot trust the BIOS on your PC anymore. Windows 10 just will not stop updating itself. And AI is soon going to kill us all, if a self-driving car by Uber doesn't do it first. So: What has -- so far -- not gone wrong in tech that still could go wrong, and perhaps in a surprising way?

Read more of this story at Slashdot.

New Proposal Would Ban Government Facial Recognition Use In San Francisco

An anonymous reader quotes a report from The San Francisco Examiner: San Francisco could be the first city in the nation to ban city agencies from using facial recognition surveillance technology under proposed legislation announced Tuesday by Supervisor Aaron Peskin. The legislation, which will be introduced at Tuesday's Board of Supervisors meeting, echoes ordinances adopted by cities including Oakland and Berkeley, as well as by the transit agency BART, that require legislative approval before city agencies or law enforcement adopt new surveillance technologies or policies for the use of existing technologies. However, the new proposal takes things a step further with an outright ban on facial recognition technology. The San Francisco proposal would not only ban facial recognition but would also require the Board of Supervisors to approve new surveillance technology in general. The board would have to find that the benefits of the technology outweigh the costs, that civil rights will be protected and that the technology will not disparately impact a community or group. Peskin portrayed the proposal to be introduced Tuesday as an extension of his "Privacy First Policy," approved by voters in November, which sets new limits and transparency requirements on the collection and use of personal data by companies doing business with The City.

Read more of this story at Slashdot.

2019 and Beyond: The (Expanded) RSAC Advisory Board Weighs in on What’s Next: Pt. 2

Part two of RSA’s Conference Advisory Board look into the future tackles how approaches to cybersecurity must evolve to meet new emerging challenges.

FaceTime bug exposes live audio & video before recipient picks call

By Waqas

FaceTime bug is exposing calls and videos – Here’s how to disable FaceTime until this issue is fixed. According to reports, there is a major bug in iPhone FaceTime’s video calling function that lets users hear audio from the call even before the recipient has accepted the video call. Moreover, the flaw also lets people see […]

This is a post from HackRead.com Read the original post: FaceTime bug exposes live audio & video before recipient picks call

What steps consumers need to take to protect themselves online

Yesterday was Data Privacy Day, so McAfee warned consumers that cybercriminals are continuing to access personal information through weak passwords, phishing emails, connected things, malicious apps and unsecure Wi-Fi networks. Weak passwords: Consumers often pick simple passwords for the multiple accounts they use daily, not realizing that choosing weak passwords can open the door to identity theft and identity fraud. Tip: Use strong passwords that include uppercase and lowercase letters, numbers and symbols. Don’t use the … More
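McAfee’s tip about mixing character classes is easy to put into practice. The helper below is an illustrative sketch (not part of the quoted advice) using Python’s standard `secrets` module, which is designed for cryptographic randomness:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing uppercase and lowercase
    letters, digits, and symbols, as the tip above recommends."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every recommended character class is present.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())
```

Pairing a generator like this with a password manager avoids the temptation to reuse one simple password across accounts.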

The post What steps consumers need to take to protect themselves online appeared first on Help Net Security.

A Bug in FaceTime Allows One To Access Someone’s iPhone Camera And Microphone Before They Answered the Call; Apple Temporarily Disables Group FaceTime Feature

Social media sites lit up today with anxious Apple users after a strange glitch in iPhone's FaceTime app became apparent. The issue: It turns out that an iPhone user can call another iPhone user and listen in on -- and access live video feed of -- that person's conversations through the device's microphone and camera -- even if the recipient does not answer the call. In a statement, Apple said it was aware of the bug and was working to release a fix later this week. In the meantime, the company has disabled Group calling functionality in the FaceTime app. From a report: The issue was so serious that Twitter CEO Jack Dorsey, and even Andrew Cuomo, governor of the state of New York, weighed in and urged their followers to disable FaceTime. [...] That's bad news for a company that's been vocal about privacy and customer data protection lately. The timing couldn't be worse, given that Apple is set to host its earnings call for the October-December quarter of 2018 in just a matter of hours.

Read more of this story at Slashdot.

What If Your VPN Keeps Logs and Why You Should Care

By David Balaban

Have you ever asked yourself the question: “So what if my VPN keeps logs?” Don’t worry. It’s a good question to ask. It means you’re actually curious about the nuances of data collection, management and how they affect you. In order to answer this question, we first have to delve into the inner workings of […]

This is a post from HackRead.com Read the original post: What If Your VPN Keeps Logs and Why You Should Care

Blog | Avast EN: It’s Time for a New Privacy Code of Conduct | Avast

When Facebook founder Mark Zuckerberg infamously declared that privacy “is no longer a social norm” in 2010, he was merely parroting a corporate imperative that Google had long since established. That same year, then-Google CEO Eric Schmidt publicly admitted that Google’s privacy policy was to “get right up to the creepy line and not cross it.”




What does ‘consent to tracking’ really mean?

Thanks to Jerome Boursier for contributions.

Post GDPR, many social media platforms will ask end users to consent to some form of tracking as a condition of using the service. It’s easy to make assumptions as to what that means, especially when the actual terms of service or data policy for the service in question are tough to find, full of legal jargon, or just long and boring. Part of the shock of the Facebook stories was discovering just how expansive their consent to tracking really was. Let’s take a look at what can happen after you hit OK on a new site’s Terms of Service.

What we think they’re doing

Most commonly, users think that social media sites limit their tracking to actual interactions with the site while logged in. This includes likes, follows, favorites, and general use of the site as intended. Those interactions are then analyzed to determine a user’s rough interests, and serve them corresponding ads.

We asked some non-technical Malwarebytes staffers what they thought popular companies collected on them and got the following responses:

“Hmm I would assume just my name, birthday, trends in the hashtags I use, and locations I’m at. Nothing else.”

“As far as IG goes, I’m guessing they collect data on the hashtags I follow and what I look at because all the ads are home improvement ads.”

While these are common use cases for tracking, innovations in user surveillance have allowed companies to take much more invasive actions.
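At its simplest, the interest profiling these staffers describe is just frequency counting over engagement events. The sketch below is a deliberately naive illustration (the event log and topic names are hypothetical, and real platforms use far richer signals):

```python
from collections import Counter

# Hypothetical engagement log: (action, topic) pairs a platform might record.
events = [
    ("like", "home-improvement"),
    ("follow", "home-improvement"),
    ("like", "travel"),
    ("like", "home-improvement"),
    ("favorite", "music"),
]

def infer_interests(events, top_n=2):
    """Rank topics by engagement count -- the crudest form of the
    profiling used to pick which ads a user sees."""
    counts = Counter(topic for _action, topic in events)
    return [topic for topic, _ in counts.most_common(top_n)]

print(infer_interests(events))  # home-improvement ranks first
```

Even this toy version explains the staffer’s observation that following home-improvement hashtags yields home-improvement ads; the invasive part is how much more than likes and follows actually feeds the counter.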

What they’re actually doing

The Cambridge Analytica reports were quite shocking, but in theory their data practices were actually a violation of the agreement they had with Facebook. Somewhat more concerning are actions that Facebook and other social media companies take overtly with third parties, or as part of their explicit terms of service.

In June 2018, a New York Times report revealed that partnerships between Facebook and mobile device manufacturers allowed data collection on your Facebook friends, irrespective of whether those friends had allowed data sharing with third parties. This data collection varied by device manufacturer, and most of it was relatively benign. Blackberry, however, seemed to go beyond what most of us expect to be collected when we log in:

Facebook has been known for years to have somewhat creepy partnerships like this. But what about other platforms? Instagram has an interesting paragraph in its terms and conditions:

Do “communications” include direct messages? How long is this information stored, where, and under what conditions? It could be perfectly secure and anonymized, but it’s difficult to tell because Instagram is a little vague on these points. Companies consistently tell us what they collect, but they don’t always tell us why, or disclose retention conditions, which makes it difficult for a user to make a proper risk assessment before allowing tracking.

Outside of the Facebook family of products, Pinterest does some data sharing that you might not expect:

Kudos to Pinterest for providing clear opt-out instructions.

A reasonable user might not expect that, when consenting to tracking connected with a Pinterest account, they would also be agreeing to offsite tracking. Pinterest does stand out, however, by presenting well organized and clear information followed by simple opt-out instructions after each section.

What they might be doing

Most platforms that engage in user tracking do so in ways that raise concern, but are not overtly alarming. Abuses we’ve heard about tend to center on the tracking company sharing information with third parties. So what might happen if the wrong third party gains access to this data?

In 2016, a ProPublica investigation was able to use Facebook ad targeting to create a housing ad that excluded minorities from seeing it. (This probably violates the US Fair Housing Act.) Using user data to discriminate in plausibly deniable ways predates the Internet, but the unprecedented volume of data collected makes schemes by bad actors far more efficient and easier to launch.

A more speculative harm is the use of tracking tags on sensitive websites. In France, a government website providing accurate information on reproductive health services was using a Facebook tracker. A “trusted partner” receiving user metadata, as well as which sections of the site that user clicks on, has the potential to be profoundly invasive. From a risk mitigation perspective, a user with a Facebook account might not have anticipated this sort of tracking when they initially consented to Facebook’s terms of service.

A common counter to complaints regarding user tracking is, “Well, you agreed to their terms, so you should have expected this.” This is arguably applicable to basic metadata collection and targeted ads, but is it reasonable to expect a Facebook user to understand that their off-platform browsing is subject to surveillance as well? User tracking has progressed so far in sophistication that an average user most likely does not have the background necessary to imagine every possible use case for data collection prior to accepting a user agreement.

What you can do about it

If any of the above examples make you uncomfortable, check out how to secure some common social media platforms using internal settings. If you want to implement additional technical solutions, browser extensions like Ghostery and the EFF’s Privacy Badger can prevent trackers from sucking up data you would prefer not to hand over.
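Extensions like Ghostery and Privacy Badger work, in essence, by matching outgoing request hosts against lists of known tracker domains. A minimal, stdlib-only illustration of that core check (the blocklist entries here are hypothetical examples, not a real tracker list):

```python
from urllib.parse import urlparse

# Hypothetical tracker blocklist; real extensions ship curated lists
# with thousands of entries that are updated continuously.
BLOCKLIST = {"graph.facebook.com", "ads.example-tracker.net"}

def is_blocked(url: str) -> bool:
    """Return True if the request's host, or any parent domain of it,
    appears on the blocklist."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the host itself and every parent domain: a.b.c -> b.c -> c.
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("https://graph.facebook.com/tr?id=123"))  # True
print(is_blocked("https://example.org/page"))              # False
```

Checking parent domains matters because trackers often load from arbitrary subdomains; the hard part in practice is maintaining the list, which is why the browser extensions do it for you.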

Messenger services are a bit harder to transition away from, but not impossible. Signal is a well-regarded messenger app with end-to-end encryption and a history of respecting user privacy. Wire provides a more business-oriented alternative, with screen sharing, file sharing, and access role management.

Most important is to stay suspicious when accessing a new platform. No one can mishandle data that you never agree to hand over to begin with. Stay vigilant, stay safe, and enjoy your social media platforms knowing exactly how your data is being used.

The post What does ‘consent to tracking’ really mean? appeared first on Malwarebytes Labs.

No-deal Brexit and GDPR: here’s what you need to know

Business craves certainty and Brexit is currently giving us anything but. At the time of writing, it’s looking increasingly likely that Britain will leave the EU without a withdrawal agreement. This blog rounds up the latest developments on data protection after a no-deal Brexit. (Appropriately, we’re publishing on Data Protection Day, the international campaign to raise public awareness about privacy rights and protecting data.)

Under the General Data Protection Regulation, no deal would mean the UK will become a ‘third country’ outside of the European Economic Area. Last week, the Minister for Data Protection Pat Breen said a no-deal Brexit would have a “profound effect” on personal data transfers into the UK from the EU. Speaking at the National Data Protection Conference, he pointed out that although Brexit commentary has focused on trade in goods, services activity relies heavily on flows of personal data to and from the UK.

“In the event of a ‘no-deal’ Brexit, the European Commission has clarified that no contingency measures, such as an ‘interim’ adequacy decision, are foreseen,” the minister said.

This means personal data transfers can’t continue as they do today. At 11pm BST on Friday 29 March 2019, the UK will legally leave the European Union. All transfers of data between Ireland and the UK or Northern Ireland will then be considered international transfers.

Keep calm and carry on

Despite the ongoing uncertainty, there are backup measures, as the Minister pointed out. “While Brexit does give rise to concerns, it should not cause alarm. The GDPR explicitly provides for mechanisms to facilitate the transfer of personal data in the event of the United Kingdom becoming a third country in terms of its data protection regime,” he said.

The latest advice from the Data Protection Commissioner is that Irish-based organisations will need to implement legal safeguards to transfer personal data to the UK after a no-deal Brexit. The DPC’s guidance outlined some typical scenarios if the UK becomes a third country.

“For example, if an Irish company currently outsources its payroll to a UK processor, legal safeguards for the personal data transferred to the UK will be required. If an Irish government body uses a cloud provider based in the UK, it will also require similar legal safeguards. The same will apply to a sports organisation with an administrative office in Northern Ireland that administers membership details for all members in Ireland and Northern Ireland,” it said.

Some organisations and bodies in Ireland will already be familiar with the legal transfer mechanisms available for the transfer of personal data to recipients outside of the EU, as they will already be transferring to the USA or India, for example.

Next steps for ‘third country’ status

BH Consulting’s senior data protection consultant Tracy Elliott says that data protection officers should take these steps to prepare for the UK’s ‘third country’ status under a no-deal Brexit.

  • review their organisation’s processing activities
  • identify what data they transfer to the UK
  • check if that includes data about EU citizens

“Consider your options of using a contract or possibly changing that supplier. If your data is hosted on servers in the UK, contact your hosting partner and find out what options are available,” she said.

Larger international companies may already have data sharing frameworks in place, but SMEs that routinely deal with the UK, or that have subsidiaries in the UK, might not have considered this issue yet. All communication between them, even if they’re part of the same group structure, will need to be covered contractually for data sharing. “There are five mechanisms for doing this, but the simplest and quickest way to do this is to roll out model contract clauses, or MCCs. They are a set of guidelines issued by the EU,” Tracy advised.

Sarah Clarke, a specialist in privacy, security, governance, risk and compliance with BH Consulting, points out that using MCCs has its own risks. The clauses are due for an update to bring them into line with GDPR. Meanwhile, the EU-US data transfer mechanism known as Privacy Shield is still not finalised, she added.

In the short term, however, MCCs are sufficient both for international transfers between legal entities in one organisation, and for transfers between different organisations. “For intra-group transfers, binding corporate rules are too burdensome to implement ‘just in case’. You can switch if the risk justifies it when there is more certainty,” Sarah Clarke said.

Further reading

The European Commission website has more information on legal mechanisms for transferring personal data to third countries. The UK Information Commissioner’s Office has a recent blog that deals with personal data flows post-Brexit. You can also check the Data Protection Commission site for details about transfer mechanisms and derogations for specific situations. The DPC also advises checking back regularly for updates between now and Brexit day.

The post No-deal Brexit and GDPR: here’s what you need to know appeared first on BH Consulting.

Industry reactions to Data Privacy Day 2019

The purpose of Data Privacy Day is to raise awareness and promote privacy and data protection best practices. It began in the United States and Canada in January 2008 as an extension of Europe’s Data Protection Day celebration, and is observed annually on Jan. 28. Cindy Provin, CEO, nCipher Security: These high profile policy developments are sending a signal that the days of using personal data for commercial advantage … More

The post Industry reactions to Data Privacy Day 2019 appeared first on Help Net Security.

#PrivacyAware: Will You Champion Your Family’s Online Privacy?

The perky cashier stopped my transaction midway to ask for my email and phone number.

Not now. Not ever. No more. I’ve had enough. I thought to myself.

“I’d rather not, thank you,” I replied.

The cashier finished my transaction and moved on to the next customer without a second thought.

And my email and phone number lived in one less place that day.

This seemingly insignificant exchange happened over a year ago, but it represents the day I decided to get serious and champion my (and my family’s) privacy.

I just said no. And I’ve been doing it a lot more ever since.

A few changes I’ve made:

  • Pay attention to privacy policies (especially of banks and health care providers).
  • Read the terms and conditions of apps before downloading.
  • Block cookies from websites.
  • Use a VPN instead of public Wi-Fi.
  • Refuse to purchase from companies that (appear to) take privacy lightly.
  • Max out my privacy settings on social networks.
  • Change my passwords regularly and keep them strong.
  • Delete apps I no longer use.
  • Stay on top of software updates on all devices and add extra protection.
  • Be hyper-aware before giving out my email, address, phone number, or birth date.
  • Limit the number of photos and details shared on social media.

~~~

The amount of personal information we share every day online — and off — is staggering. There’s information we post directly online such as our birth date, our location, our likes, and dislikes. Then there’s the data that’s given off unknowingly via web cookies, metadata, downloads, and apps.

While some data breaches are out of our control, at the end of the day, we — along with our family members — are one giant data leak.

Studies show that, by the time a child turns 13, parents have posted an average of 1,300 photos and videos of them to social media. By the time kids get devices of their own, they are posting to social media 26 times per day on average — a total of nearly 70,000 posts by age 18.

The Risks

When we overshare personal data, a few things can happen. The digital fallout includes data misuse by companies, identity theft, credit card fraud, medical fraud, home break-ins, reputation damage, location and purchasing tracking, ransomware, and other risks.

The Mind Shift

The first step toward boosting your family’s privacy is to start thinking differently about privacy. Treat your data like gold (after all, that’s the way hackers see it). Guiding your family in this mind-shift will require genuine, consistent effort.

Talk to your family about privacy. Elevate its worth and the consequences when it’s undervalued or shared carelessly.

Teach your kids to treat their personal information — their browsing habits, clicks, address, personal routine, school name, passwords, and connected devices — with great care. Consider implementing this 11 Step Privacy Take Back Plan.

This mind and attitude shift will take time but, hopefully, your kids will learn to pause and think before handing over personal information to an app, a social network, a retail store, or even to friends.

Data Protection Tips*

  1. Share with care. Think before posting about yourself and others online. Consider what it reveals, who might see it and how it could be perceived now and in the future.
  2. Own your online presence. Set the privacy and security settings on websites and apps to your comfort level for information sharing. Each device, application or browser you use will have different features to limit how and with whom you share information.
  3. Think before you act. Information about you, such as the games you like to play, your contacts list, where you shop and your geographic location, has tremendous value. Be thoughtful about who gets that information and understand how it’s collected through websites and apps.
  4. Lock down your login. Your usernames and passwords are not enough to protect critical accounts like email, banking, and social media. Strengthen online accounts and use strong authentication tools like a unique, one-time code through an app on your mobile device.

* Provided by the National Cyber Security Alliance (NCSA).
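The “unique, one-time code through an app” in tip 4 usually refers to a time-based one-time password (TOTP), the scheme behind most authenticator apps. As an illustrative sketch (not part of the NCSA guidance), here is roughly how those codes are derived from a shared secret and the current time, following RFC 6238:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, digits=6, step=30):
    """Derive an RFC 6238 time-based one-time password.

    The shared secret and the current 30-second time window are combined
    with HMAC-SHA1; a 31-bit slice of the digest, truncated to `digits`
    decimal places, becomes the code the app displays.
    """
    if timestamp is None:
        timestamp = int(time.time())
    # The moving factor is the number of elapsed time steps, big-endian.
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides can compute the code independently, a phished password alone is not enough to log in — which is why this is the tip’s recommended second factor.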

January 28 is National Data Privacy Day. The day highlights one of the most critical issues facing families today — protecting personal information in a hyper-connected world. It’s a great opportunity to commit to taking real steps to protect your online privacy. For more information on National Data Privacy Day or to get involved, go to Stay Safe Online.

The post #PrivacyAware: Will You Champion Your Family’s Online Privacy? appeared first on McAfee Blogs.

Millions of Bank Loan and Mortgage Documents Have Leaked Online

An anonymous reader quotes a report from TechCrunch: [M]illions of documents were found leaking after an exposed Elasticsearch server was found without a password. The documents contained highly sensitive financial data on tens of thousands of individuals who took out loans or mortgages over the past decade with U.S. financial institutions. The documents were converted using a technology called OCR from their original paper documents to a computer readable format and stored in the database, but they weren't easy to read. That said, it was possible to discern names, addresses, birth dates, Social Security numbers and other private financial data by anyone who knew where to find the server.

Independent security researcher Bob Diachenko and TechCrunch traced the source of the leaking database to a Texas-based data and analytics company, Ascension. When reached, the company said that one of its vendors, OpticsML, a New York-based document management startup, had mishandled the data and was to blame for the data leak.

It turns out that data was exposed again -- but this time, it was the original documents. Diachenko found the second trove of data in a separate exposed Amazon S3 storage server, which too was not protected with a password. Anyone who went to an easy-to-guess web address in their web browser could have accessed the storage server to see -- and download -- the files stored inside. The bucket contained 21 files containing 23,000 pages of PDF documents stitched together -- or about 1.3 gigabytes in size. Diachenko said that portions of the data in the exposed Elasticsearch database on Wednesday matched data found in the Amazon S3 bucket, confirming that some or all of the data is the same as what was previously discovered. Like in Wednesday's report, the server contained documents from banks and financial institutions across the U.S., including loans and mortgage agreements. We also found documents from the U.S. Department of Housing and Urban Development, as well as W-2 tax forms, loan repayment schedules and other sensitive financial information. Many of the files also contained names, addresses, phone numbers, Social Security numbers and more.
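Both exposures come down to services answering unauthenticated HTTP requests. As a hedged sketch of how researchers find buckets like this (the function names and classification labels are illustrative, not from the report — and you should only probe buckets you own or are authorized to test), an anonymous listing request against a bucket’s virtual-hosted URL tells you whether it is open:

```python
import urllib.error
import urllib.request

def classify_status(status):
    """Map the HTTP status of an anonymous S3 listing request to an
    exposure label (labels are ours, not AWS terminology)."""
    return {
        200: "public-listing",      # bucket lists its contents to anyone
        403: "exists-but-private",  # bucket exists; anonymous access denied
        404: "no-such-bucket",
    }.get(status, "unknown")

def bucket_exposure(bucket_name):
    """Issue an unauthenticated GET against the bucket's listing endpoint."""
    url = f"https://{bucket_name}.s3.amazonaws.com/"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            status = resp.status
    except urllib.error.HTTPError as err:
        status = err.code
    return classify_status(status)
```

A 200 on that request is exactly the “easy-to-guess web address in a web browser” failure described above; blocking public access at the bucket (or account) level turns it into a 403.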

Read more of this story at Slashdot.

GDPR-ready organizations see lowest incidence of data breaches

Organizations worldwide that invested in maturing their data privacy practices are now realizing tangible business benefits from these investments, according to Cisco’s 2019 Data Privacy Benchmark Study. The study validates the link between good privacy practice and business benefits as respondents report shorter sales delays as well as fewer and less costly data breaches. Business benefits of privacy investments The GDPR, which focused on increasing protection for EU residents’ privacy and personal data, became enforceable … More

The post GDPR-ready organizations see lowest incidence of data breaches appeared first on Help Net Security.

Smashing Security #112: Payroll scams, gold coin heists, web giants spanked

Business email compromise evolves to target your company’s payroll, how the world’s largest gold coin was stolen from a Berlin museum, and are internet giants feeling the heat yet over data security?

All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by people hacker Jenny Radcliffe.

Why Free Software Evangelist Richard Stallman is Haunted by Stalin’s Dream

Richard Stallman recently visited Mandya, a small town about 60 miles from Bengaluru, India, to give a talk. On the sidelines, Indian news outlet FactorDaily caught up with Stallman for an interview. In the wide-ranging interview, Stallman talked about companies that spy on users, popular Android apps, media streaming and transportation apps, smart devices, DRM, software backdoors, subscription software, and Apple and censorship. An excerpt from the interview: If you are carrying a mobile phone, it is always tracking your movements and it could have been modified to listen to the conversations around you. I call this product Stalin's dream. What would Stalin have wanted to hand out to every inhabitant of the former Soviet Union? Something to track that person's movements and listen to the person's conversations. Fortunately, Stalin could not do it because the technology didn't exist. Unfortunately for us, now it does exist and most people have been pressured or lured into carrying around such a Stalin's dream device, but not me. I am suspicious of new digital technology. I expect it to have new malicious functionalities. It has happened so many times that I have learned to expect this, so I have always checked before I start using some new digital technology. I asked to find out what is nasty about it and I found out these two things. It was something like 20 years ago, and I decided it was my duty as a citizen to refuse, regardless of whatever convenience it might offer me. To surrender my freedom in this way was failing to defend a free society. This is why I do not have a portable phone. I refuse to carry a portable phone. I never have one and unless things change, I never will. I do use portable phones, lots of different ones. If I needed to call someone right now, I would ask one of you, "Could you please make a call for me?" If I am on a bus and it is late and I need to tell somebody that I am going to arrive late, there is always some other passenger in the bus who will make a call for me or send a text for me. Practically speaking, it is not that hard.

Read more of this story at Slashdot.