Category Archives: Privacy

New Data Leak Exposes 3M Facebook Users

Newly Discovered Data Leak Exposed Intimate Details of 3 Million Facebook Users

Facebook’s data privacy problems and reputation troubles have been snowballing over the last few months. A report recently released by New Scientist claims that sensitive information of more than three million Facebook users gathered by a quiz-app has been readily available online for the last four years. The news comes months after Facebook’s CEO Mark Zuckerberg was grilled before Congress for letting consultancy firm Cambridge Analytica improperly handle data for political purposes.

myPersonality

Developed by Cambridge University researchers, myPersonality was a Facebook app that allowed users to take psychometric tests and obtain instant results. The app was active between 2007 and 2012, and more than 6 million people participated in the project. All quiz answers were recorded, and roughly half of participants opted in to share the data from their Facebook profiles with the researchers. All of the data gathered by the app was stored in a database, making it one of the most extensive social science research databases in history. The data was anonymized and then shared with academics around the world.

The database contained highly sensitive and revealing information about millions of Facebook users, and even though the academics at the University of Cambridge never charged for access to the database and intended it to be used only for academic purposes, the login details giving access to the database were easily reachable online. Anyone interested in peeking into the personal lives of millions of Facebook users merely had to search for the username and password on GitHub – the largest host of source code in the world.

While the data was anonymized and no names are known to have been exposed, every single Facebook profile was assigned an ID connected to the user's age, gender, location, status updates, etc. With so much information attached to one ID, finding out the real identity of the person behind a profile would have been an easy task, and it might have been easily automated.
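To see why attaching attributes like age, gender and location to an "anonymous" ID is risky, here is a toy sketch of the re-identification step. All names, IDs and attributes below are made up; the point is only that a unique match on a handful of quasi-identifiers against any public dataset unmasks the ID, and the loop makes clear how trivially the process can be automated.

```python
# Toy re-identification sketch: join "anonymized" records to a public,
# named dataset on shared quasi-identifiers (age, gender, location).
# All data here is fabricated for illustration.

anonymized = [
    {"id": "u1029", "age": 34, "gender": "F", "location": "Leeds"},
    {"id": "u4417", "age": 27, "gender": "M", "location": "Austin"},
]

public_profiles = [
    {"name": "Alice Example", "age": 34, "gender": "F", "location": "Leeds"},
    {"name": "Bob Example", "age": 27, "gender": "M", "location": "Austin"},
]

def reidentify(anon_rows, public_rows):
    """Match anonymized rows to named rows on shared quasi-identifiers."""
    matches = {}
    for a in anon_rows:
        candidates = [
            p["name"] for p in public_rows
            if (p["age"], p["gender"], p["location"])
               == (a["age"], a["gender"], a["location"])
        ]
        if len(candidates) == 1:  # a unique match de-anonymizes the ID
            matches[a["id"]] = candidates[0]
    return matches

print(reidentify(anonymized, public_profiles))
```

With real data the matching would use many more fields (status updates, likes, timestamps), which only makes unique matches more likely.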

Currently, there isn’t a conclusive answer as to how many people have had access to the database over the years, or the ways they might have used it. The report released by New Scientist suggests that Facebook has been aware of the quiz since 2011 but did not act until last month.

Facebook suspended the app on April 7th. It is one of roughly 200 banned apps that might have collected data in the same manner. The official website of myPersonality is currently offline too.

The news comes only days after Mark Zuckerberg confirmed that he will be facing MEPs in Brussels and will be meeting with French President Macron.


The post New Data Leak Exposes 3M Facebook Users appeared first on Panda Security Mediacenter.

NBlog May 22 – EU vs Spammers

I guess everyone has received a slew of emails this week from companies asking us to opt-in to their newsletters.

Most have said something along the lines of "If you don't click the link to reconfirm your details by May 25th, you will be unsubscribed" – almost identical to the millions of phishing emails we have been patiently training people to avoid for many years now. Hmmm.

Most are going directly to the bin, some as a result of the training but most as a result of people taking the opportunity not to opt-in to being marketed-at. I suspect contact databases around the world are being decimated as a result of GDPR, so we might finally see a drop in the volume of spam once this week is out of the way.

Spam reduction is a very welcome side-effect of GDPR. Previous anti-spam laws have had limited effect. This one, although badged 'privacy', could be the best yet.

Hoorah for 'privacy'!  A round of applause for the EU!

Most GDPR Emails Unnecessary and Some Illegal, Say Experts

The vast majority of emails flooding inboxes across Europe from companies asking for consent to keep recipients on their mailing list are unnecessary and some may be illegal, privacy experts have said, as new rules over data privacy come into force at the end of this week. From a report: Many companies, acting based on poor legal advice, a fear of fines of up to $23.5 million and a lack of good examples to follow, have taken what they see as the safest option for hewing to the General Data Protection Regulation (GDPR): asking customers to renew their consent for marketing communications and data processing. But Toni Vitale, the head of regulation, data and information at the law firm Winckworth Sherwood, said many of those requests would be needless paperwork, and some that were not would be illegal.

Read more of this story at Slashdot.

Episode 97: On eve of GDPR, frightening lack of data privacy, security in US

In this episode, #97, we talk with Robert Xiao, the Carnegie Mellon researcher who investigated LocationSmart, a free web application that allowed anyone to track the location of a mobile phone using just the phone’s number. Also: we welcome University of Washington researcher Kate Starbird back into the SL studio to talk about her latest...

Read the whole entry... »

Related Stories

Are you ready for the GDPR deadline?

The General Data Protection Regulation (GDPR) compliance deadline looms four days away, but only 29 percent of companies will be ready, according to a new global survey by ISACA. Not only are most unprepared for the deadline, but only around half of the companies surveyed (52 percent) expect to be compliant by end-of-year 2018, and 31 percent do not know when they will be fully compliant. Top GDPR challenges According to the research, the top … More

The post Are you ready for the GDPR deadline? appeared first on Help Net Security.

‘TeenSafe’ Phone Monitoring App Leaked Thousands of User Passwords

An anonymous reader quotes a report from ZDNet: At least one server used by an app for parents to monitor their teenagers' phone activity has leaked tens of thousands of accounts of both parents and children. The mobile app, TeenSafe, bills itself as a "secure" monitoring app for iOS and Android, which lets parents view their child's text messages and location, monitor who they're calling and when, access their web browsing history, and find out which apps they have installed. But the Los Angeles, Calif.-based company left its servers, hosted on Amazon's cloud, unprotected and accessible by anyone without a password. "We have taken action to close one of our servers to the public and begun alerting customers that could potentially be impacted," a TeenSafe spokesperson told ZDNet on Sunday. The database stores the parent's email address along with their child's Apple ID email address. It also includes the child's device name -- which is often just their name -- and their device's unique identifier. The data contains the plaintext passwords for the child's Apple ID. Because the app requires that two-factor authentication be turned off, a malicious actor viewing this data only needs to use the credentials to break into the child's account to access their personal content.

Read more of this story at Slashdot.

Repo Men Scan Billions of License Plates — For the Government

The Washington Post notes the billions of license plate scans coming from modern repo men "able to use big data to find targets" -- including one who drives "a beat-up Ford Crown Victoria sedan." It had four small cameras mounted on the trunk and a laptop bolted to the dash. The high-speed cameras captured every passing license plate. The computer contained a growing list of hundreds of thousands of vehicles with seriously late loans. The system could spot a repossession in an instant. Even better, it could keep tabs on a car long before the loan went bad... Repo agents are the unpopular foot soldiers in the nation's $1.2 trillion auto loan market... they are the closest most people come to a faceless, sophisticated financial system that can upend their lives... Derek Lewis works for Relentless Recovery, the largest repo company in Ohio and its busiest collector of license plate scans. Last year, the company repossessed more than 25,500 vehicles -- including tractor trailers and riding lawn mowers. Business has more than doubled since 2014, the company said. Even with the rising deployment of remote engine cutoffs and GPS locators in cars, repo agencies remain dominant. Relentless scanned 28 million license plates last year, a demonstration of its recent, heavy push into technology. It now has more than 40 camera-equipped vehicles, mostly spotter cars. Agents are finding repos they never would have a few years ago. The company's goal is to capture every plate in Ohio and use that information to reveal patterns... "It's kind of scary, but it's amazing," said Alana Ferrante, chief executive of Relentless. Repo agents are responsible for the majority of the billions of license plate scans produced nationwide. But they don't control the information. Most of that data is owned by Digital Recognition Network (DRN), a Fort Worth company that is the largest provider of license-plate-recognition systems. 
And DRN sells the information to insurance companies, private investigators -- even other repo agents. DRN is a sister company to Vigilant Solutions, which provides the plate scans to law enforcement, including police and U.S. Immigration and Customs Enforcement. Both companies declined to respond to questions about their operations... For repo companies, one worry is whether they are producing information that others are monetizing.

Read more of this story at Slashdot.

‘I Asked Apple for All My Data. Here’s What Was Sent Back’

"I asked Apple to give me all the data it's collected on me since I first became a customer in 2010," writes the security editor for ZDNet, "with the purchase of my first iPhone." That was nearly a decade ago. As most tech companies have grown in size, they began collecting more and more data on users and customers -- even on non-users and non-customers... Apple took a little over a week to send me all the data it's collected on me, amounting to almost two dozen Excel spreadsheets at just 5MB in total -- roughly the equivalent of a high-quality photo snapped on my iPhone. Facebook, Google, and Twitter all took a few minutes to an hour to send me all the data they store on me -- ranging from a few hundred megabytes to a couple of gigabytes in size... The zip file contained mostly Excel spreadsheets, packed with information that Apple stores about me. None of the files contained content information -- like text messages and photos -- but they do contain metadata, like when and who I messaged or called on FaceTime. Apple says that any data it collects on you is yours to have if you want it, but as of yet, it doesn't turn over your content, which is largely stored on your slew of Apple devices. That's set to change later this year... And, of the data it collects to power Siri, Maps, and News, it does so anonymously -- Apple can't attribute that data to the device owner... One spreadsheet -- handily -- contained explanations for all the data fields, which we've uploaded here... [T]here's really not much to it. As insightful as it was, Apple's treasure trove of my personal data is a drop in the ocean to what social networks or search giants have on me, because Apple is primarily a hardware maker and not ad-driven, like Facebook and Google, which use your data to pitch you ads. CNET explains how to request your own data from Apple.

Read more of this story at Slashdot.

FCC Investigating LocationSmart Over Phone-Tracking Flaw

The FCC has opened an investigation into LocationSmart, a company that buys your real-time location data from the four largest carriers in the United States. The investigation comes a day after a security researcher from Carnegie Mellon University exposed a vulnerability on LocationSmart's website. CNET reports: The bug has prompted an investigation from the FCC, the agency said on Friday. An FCC spokesman said LocationSmart's case was being handled by its Enforcement Bureau. After The New York Times revealed last week that Securus, an inmate call tracking service, had offered the same tracking service, Sen. Ron Wyden, a Democrat from Oregon, called for the FCC and major wireless carriers to investigate these companies. On Friday, Wyden praised the investigation but asked the FCC to expand its look beyond LocationSmart. "The negligent attitude toward Americans' security and privacy by wireless carriers and intermediaries puts every American at risk," Wyden said. "I urge the FCC to expand the scope of this investigation, and to more broadly probe the practice of third parties buying real-time location data on Americans." He is also calling for FCC Chairman Ajit Pai to recuse himself from the investigation because Pai previously worked as an attorney for Securus.

Read more of this story at Slashdot.

Most firms struggle to comply with GDPR deadline

With GDPR coming into effect in just over a week from today, 85 percent of firms in Europe and the United States will not be ready on time. Additionally, one in four will not be fully compliant by the end of this year. Capgemini’s Digital Transformation Institute surveyed 1,000 executives and 6,000 consumers across eight markets to explore attitudes to, readiness for, and the opportunities of GDPR. A race against the GDPR clock With the … More

The post Most firms struggle to comply with GDPR deadline appeared first on Help Net Security.

Smashing Security #078: Hounds hunt hackers, too-human Google AI, and ethnic recognition tech – WTF?


Dogs are trained to sniff out hackers’ hard drives, facial recognition takes an ugly turn, and do you trust Google to book your hair appointment?

All this and more is discussed in the latest edition of the “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by investigative journalist Geoff White.

Achieving GDPR Compliance: The Spark That Will Light a Fire of Change

It is said that innovation and creativity best flourish under pressure and constraint. Think about what the engineers and flight controllers had to do during the Apollo 13 moon mission after an explosion on the vessel. They were constrained by time, fuel, air and many other factors. They had to do things that had never been done before to save the lives of the astronauts.

Another example is the movie “Jaws.” The mechanical sharks used for the movie were extremely problematic, so director Steven Spielberg changed the way he made the movie, using the shark only sparingly to create a more dramatic impact. Arguably, this actually created a better movie.

As a final example, American musician Jack White has said that it is essential for him to use things like self-imposed tight deadlines to force his creative hand. He said that having all the money, time or colors in the palette ultimately kills creativity.

The process of complying with General Data Protection Regulation (GDPR) could present organizations with this same type of unexpected opportunity. IBM Security and the IBM Institute for Business Value wanted to understand if there was a group of organizations that was using their GDPR preparations as an opportunity to transform how they were approaching security and privacy; data and analytics; and customer relationships. Were organizations turning this compliance challenge into an impetus for broader transformation?

To answer this question, we surveyed 1,500 GDPR leaders — such as chief privacy officers (CPOs), chief data officers (CDOs), general counsels, chief information security officers (CISOs) and data protection officers — representing 15 industries in 34 countries between February and April 2018. We wanted to capture their practices and opinions as close to the May 25 GDPR compliance deadline as possible.

The results of that research are captured in the new report, The End of the Beginning: Unleashing the Transformational Power of GDPR.

Common GDPR Compliance Challenges

During the last couple of years, as organizations have been preparing for GDPR, they have been tested by both the effort involved and the cost of compliance. Organizations have been busy changing processes and developing new ones; creating new roles and building new relationships; training employees; and deploying new tools and technologies. Hopefully, all this can be leveraged for more than just compliance.

IBM’s CPO, Cristina Cabella, agrees. She has said, “In the market, I see GDPR as a great opportunity to make this culture shift and make privacy more understandable and more leveraged as an opportunity to improve the way we protect data, rather than be perceived as a very niche area that is only for technical experts … So, I think it is a great opportunity in that sense.”

The first thing we found was that many organizations still have a lot of work to do before they can achieve full GDPR compliance, even at this late a date. Only 36 percent of surveyed executives say they will be fully compliant with GDPR by the enforcement date, and nearly 20 percent told us that they had not started their preparations yet but planned to before the May deadline. Organizations could be waiting because of a lack of commitment from organizational leadership, or because they are willing to risk a wait-and-see approach to how enforcement plays out.

Using GDPR Compliance as an Opportunity for Innovation

And yet there was some good news in our respondents’ views of GDPR. The majority held a positive view of the potential of the regulation and what it could do for their organizations. Thirty-nine percent saw GDPR as a chance to transform their security, privacy and data management efforts, and 20 percent said it could be a catalyst for new data-led business models. This is evidence that organizations may see GDPR as a means to improve their organizations in the longer term by enabling a stronger overall digital strategy, better security, closer customer relationships, improved efficiency through streamlined data management and increased competitive differentiation.

In our research, we identified a group of leaders who met a specific set of criteria and see GDPR as a spark for change. Among other insights, we found that:

  • Eighty-three percent of GDPR leaders see security and privacy as key business differentiators.
  • Nearly three times more GDPR leaders than other surveyed executives believe that GDPR will create new opportunities for data-led business models and data monetization.
  • Ninety-one percent of GDPR leaders agree that GDPR will enable more trusted relationships and new business opportunities.

We have crossed a threshold and entered a new era for data, security, privacy and digital customer interactions. While many organizations may not have completed all GDPR compliance activities yet, it is vital for organizations large and small to ask themselves how GDPR can help position them for long-term success by unlocking new opportunities and unleashing their creativity.

To learn more about how organizations are using GDPR to drive transformation, please register for the May 22 live webinar, The Transformative Power of GDPR for People and Business, and download the complete IBV study.

Read the study: The End of the Beginning — Unleashing the Transformational Power of GDPR

The post Achieving GDPR Compliance: The Spark That Will Light a Fire of Change appeared first on Security Intelligence.

Accessing Cell Phone Location Information

The New York Times is reporting about a company called Securus Technologies that gives police the ability to track cell phone locations without a warrant:

The service can find the whereabouts of almost any cellphone in the country within seconds. It does this by going through a system typically used by marketers and other companies to get location data from major cellphone carriers, including AT&T, Sprint, T-Mobile and Verizon, documents show.

Another article.

Boing Boing post.

GDPR causes a flood of new policies

The European Union claims that the General Data Protection Regulation (GDPR), which takes effect on May 25, is the most important change in data privacy regulation in 20 years. Many companies have spent months preparing for the changes, working on policy and compliance, and introducing changes to their products in order to meet the new standards.

We have received quite a few alerts and emails about those policy changes from a wide variety of companies. Combing through the alerts allowed us to see some interesting methods to solve—or evade—the problems that come with making businesses compliant. Let’s take a look at how different companies are coping with GDPR changes, and what you’ll need to pay attention to in those emails.

Total evasion

For some companies whose business interests in Europe are too slim, giving up seemed like the best option. File this alert from Unroll.Me, an app to unsubscribe from unwanted mailing lists, under “why bother.”

Unroll.Me says goodbye

because our service was not designed to comply with all GDPR requirements, Unroll.Me will not be available to EU residents…. And we must delete any EU user accounts by May 24.

Obviously, there is a reason for such drastic measures, and I would call it a good guess if someone were to suggest that this might be related to Unroll.Me having been found selling email data to Uber.

Unroll.Me may not be the only company walking away from its European customers in the face of GDPR. Some services have popped up claiming to help companies stay compliant by blocking EU visitors to their websites. The GDPR Shield shown below was promoted for a period as a possible solution, but the site seems to be down now. Or perhaps I could not reach it because I’m in the EU, and the block works too well.

 

GDPR shield

Keep EU visitors off your site by using a GDPR Shield

Chain responsibility for advertisers

Some sites and platforms have advertising partners with whom they share user data, and under GDPR the party sharing the data remains responsible for how it is handled. So, you would hope that they take special care in selecting partners who will handle that shared data. Instagram and other Facebook companies have decided on a different approach, shifting that portion of the responsibility to their advertisers:

Facebook for businesses

Businesses who advertise with Instagram and the Facebook companies can continue to use our platforms and solutions in the same way they do today. Each company is responsible for ensuring their own compliance with the GDPR, just as they are responsible for compliance with the laws that apply to them today.

Helping B2B customers

Google Cloud, on the other hand, offers to help their customers.

Google Cloud

You can count on the fact that Google is committed to GDPR compliance across Google Cloud services. We are also committed to helping our customers with their GDPR compliance journey…

What deserves your attention

Under the GDPR rules, companies need explicit and informed consent from their customers to collect and use their data, so you can expect, and probably have already seen, a lot of policy changes (Terms of Service). As much as you might be tempted to automatically delete the influx of emails from online providers, it’s important to pay attention to those new privacy policies—especially if it appears that the company may be cutting corners in meeting GDPR standards.

When sifting through these emails, I’ve come across some that I would not count as informed consent. A banner that looks and behaves like a cookie warning does not qualify, and neither does providing a less-than-comprehensive picture by spreading out information across several different web pages. I’m hoping that these platforms will provide more detailed and specific information before the magic GDPR drop date arrives.

LinkedIn

In contrast to these flimsy attempts at GDPR compliance, Google has done an excellent job informing its users of changes. Its Privacy Policy has been updated to make the content easier to understand in light of the GDPR demand that users be able to make informed decisions. It has updated the language and navigation of the document, and introduced videos and illustrations in order to make things clear.

Some companies that are active worldwide do make a distinction between EU and non-EU customers, but offer the same functionality that is automatically applied to EU-based IP addresses as an option to users outside of the EU.

Disqus

When a user is in Privacy Mode, we will not collect or process any personal data, as defined by GDPR. In cases where we do not have a lawful basis for processing personal data we will apply Privacy Mode to requests from IP addresses associated with an EU country.
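The gating logic described in the Disqus excerpt is simple to sketch. The function and field names below are hypothetical (the excerpt does not describe an API), but the sketch shows the two conditions it implies: requests from EU-associated IP addresses get Privacy Mode when there is no lawful basis for processing, and non-EU users can opt in. The country lookup itself, normally a GeoIP step, is stubbed out as a plain country code.

```python
# Hypothetical sketch of geo-gated "Privacy Mode" as described above.
# In practice the country code would come from a GeoIP lookup on the
# request's IP address; here it is passed in directly.

EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE", "GR",
    "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL", "PL", "PT", "RO", "SK",
    "SI", "ES", "SE", "GB",  # GB was an EU member state in 2018
}

def privacy_mode_enabled(country_code, user_opted_in=False, lawful_basis=False):
    """Apply Privacy Mode to EU requests unless processing has a lawful basis."""
    if user_opted_in:  # non-EU users may opt in to the same treatment
        return True
    return country_code in EU_COUNTRIES and not lawful_basis

print(privacy_mode_enabled("DE"))                      # True
print(privacy_mode_enabled("US"))                      # False
print(privacy_mode_enabled("US", user_opted_in=True))  # True
```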

Other, smaller companies made an effort to send out more personalized notifications letting me know I needed to approve their new policy in order to stay in touch:

Conclusiv

While the ongoing influx might be a nuisance in your inbox, this is a great opportunity to review the privacy policies and maybe say goodbye to some of the companies that have your email address. (Although the professional spammers will probably just keep on going as if nothing has changed.)

 

Where will GDPR lead us?

Looking at the examples we have seen so far, we can divide the big players from the small players and see that some small players from outside the EU are giving up that part of the market—at least for the time being. The big players and European companies are mostly applying the same policies for EU and non-EU customers, although there will always be some exceptions.

Some have predicted there will be two separate Internets as a result of GDPR. I don’t think that will happen. But we will soon get a better idea of how things will play out once the implementation is done and the first shots across the bow have been fired.

In the meantime, it is worth your time to review the changed policies carefully and pay close attention to privacy policies when you sign up for something new.

And in case you were wondering about ours, feel free to review the Malwarebytes Privacy Policy.

The post GDPR causes a flood of new policies appeared first on Malwarebytes Labs.

NBlog May 15 – joining the dots

Security awareness and training materials are inevitably aligned in the general sense that they all concern or relate in some way to information security. The materials have a lot in common, building upon the same foundational principles and concepts. 

With NoticeBored, consistency is virtually guaranteed since the materials are all conceived, researched and prepared by the same close-knit team. While we enjoy exploring novel approaches, and our own perspective is constantly evolving, we can't help but continue along the same tracks.

Most of the time, relationships between topics are incidental. Every so often, though, we like to point out and use the linkages deliberately as part of the awareness approach. We're delivering a coherent campaign, a planned rolling/continuous program rather than a sequence of discrete, independent and unconnected episodes. 

Grab the crayons and join the dots to reveal the whole glorious technicolor picture.

It occurred to me this morning that by the time June's awareness module is released, GDPR will be live, meaning that most if not all of our customers will be legally obliged to report or disclose privacy breaches within 72 hours.

That's just 3 days in old money [gulp]. Barely enough time for a corporate crisis [cue: panic].

I'm not entirely sure at this point precisely when the breach reporting clock starts counting down the 4,320 minutes, nor when it stops, so I ought to dig out and read the regulation, again, from this month's awareness module. Leaving that issue aside for a moment, those quarter-of-a-million seconds will doubtless fly right by in a flash, hence organizations would be wise to prepare for that eventuality ... which thought feeds directly into June's awareness topic around incidents and disasters. Breach disclosure is a neat example of the value in considering and preparing for incidents, getting ready to respond, ideally practicing and refining the response arrangements in order to beat the regulatory deadline in the most cost-effective and professional manner.

So, that's the topic of June's case study decided, plus a relevant example to bring up in the awareness seminars and briefings, and something for customers to check out using the Internal Controls Questionnaire from the module.

The cool part about these links between topics and modules is that they work both ways. We refer forward to future topics with little tasters of things to come without needing to delve right into them. We refer back to prior topics as reminders of what we covered previously. Glancing at our schedule for the rest of this year, I see we will be exploring security frameworks and methods in July, then insider and outsider threats pop up in August and September: we must remember to mention those topics where applicable in the incidents and disasters material for June.

Careless researchers expose millions of Facebook users’ sensitive data

If you needed another reason to stop sharing intimate information with apps on Facebook or Facebook itself, consider this newest revelation: academics at the University of Cambridge have been using the data harvested through myPersonality, a popular personality app, as a basis for a tool used for targeting adverts based on personality types. Access to the tool was reserved for those who paid for it but, by now, we’re all used to companies earning money … More

The post Careless researchers expose millions of Facebook users’ sensitive data appeared first on Help Net Security.

Researchers disclosed details of EFAIL attacks on PGP and S/MIME tools. Experts believe claims are overblown

EFAIL attacks – researchers found critical vulnerabilities in PGP and S/MIME tools and advise users to immediately disable and/or uninstall tools that automatically decrypt PGP-encrypted email.

A few hours ago, I reported the news that security researchers from three universities in Germany and Belgium have found critical vulnerabilities in PGP and S/MIME Tools that could be exploited by attackers to read emails encrypted with OpenPGP and S/MIME.

Pretty Good Privacy is the open source end-to-end encryption standard used to encrypt emails, while S/MIME, Secure/Multipurpose Internet Mail Extensions, is an asymmetric cryptography-based technology that allows users to send digitally signed and encrypted emails.

The existence of the vulnerabilities was also confirmed by researchers at the Electronic Frontier Foundation (EFF), which recommended that users uninstall Pretty Good Privacy and S/MIME applications until the issues are fixed.

The experts initially planned to disclose details on Tuesday morning, but later decided to publicly share their findings earlier due to incorrect information circulating online.

The experts disclosed two variants of the attack, dubbed EFAIL. In both scenarios, attackers need to be in a position to intercept encrypted emails, for example by hacking the target email account or conducting a man-in-the-middle (MitM) attack.

“The EFAIL attacks exploit vulnerabilities in the OpenPGP and S/MIME standards to reveal the plaintext of encrypted emails. In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs.” reads the blog post published by the researchers.

“To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago.”

The attacker manipulates the ciphertext in the protected emails and sends a modified message containing custom HTML code to the original receiver or sender.

EFAIL attack

The first attack technique, dubbed the direct exfiltration attack, exploits vulnerabilities in the Apple Mail (for iOS and macOS) and Mozilla Thunderbird email clients. The attacker sends the targeted user a specially crafted multipart email with three HTML body parts. When the victim’s client opens and decrypts the email, the attacker’s code causes the application to send the decrypted text to a server controlled by the attacker.

The direct exfiltration technique could be used against both PGP and S/MIME.

The second technique, named the CBC/CFB gadget attack, exploits vulnerabilities in the OpenPGP (CVE-2017-17688) and S/MIME (CVE-2017-17689) standards. In this attack scenario, the victim must still be in possession of their private key; if the private key has been lost, the technique cannot be used.

“He then sends the manipulated email to one of the original receivers, or to the original sender. He may hide this by choosing new FROM, DATE and SUBJECT fields, and he may hide the manipulated ciphertext by hiding it within an invisible iFrame. Thus the attack mail the victim receives looks unsuspicious” reads the research paper published by the experts.

“Once he opens the email in his client, the manipulated ciphertext will be decrypted – first the private key of the victim is used to decrypt the session key s, and then this session key is used to decrypt the manipulated ciphertext c. The decrypted plaintext now contains, due to the manipulations, an exfiltration channel (e.g., an HTML hyperlink) that will send the decrypted plaintext as a whole or in parts to the attacker,” researchers wrote in their paper on EFAIL.

The CBC/CFB gadget attack is effective against PGP; researchers observed a success rate of 33%.
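The ciphertext manipulation behind the gadget attack rests on CBC malleability: XORing bits into one ciphertext block (or the IV) flips the same bits in the next plaintext block. Here is a toy sketch of that property, assuming a keyed-XOR "block cipher" as a stand-in for a real cipher; this is purely illustrative and not the researchers' actual exploit code:

```python
import hashlib

BLOCK = 16

def toy_block_cipher(key, block):
    # Stand-in for a real block cipher: a keyed pad. XOR is its own
    # inverse, so the same function serves as "encrypt" and "decrypt".
    pad = hashlib.sha256(key).digest()[:BLOCK]
    return bytes(a ^ b for a, b in zip(block, pad))

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key, iv, plaintext):
    prev, out = iv, []
    for i in range(0, len(plaintext), BLOCK):
        c = toy_block_cipher(key, xor(plaintext[i:i + BLOCK], prev))
        out.append(c)
        prev = c
    return b"".join(out)

def cbc_decrypt(key, iv, ciphertext):
    prev, out = iv, []
    for i in range(0, len(ciphertext), BLOCK):
        c = ciphertext[i:i + BLOCK]
        out.append(xor(toy_block_cipher(key, c), prev))
        prev = c
    return b"".join(out)

key = b"victim session k"
iv = b"A" * BLOCK
ct = cbc_encrypt(key, iv, b"Content-type: teSECRET MESSAGE!!")

# The attacker knows the first plaintext block (a standard MIME header)
# and rewrites the IV so decryption yields attacker-chosen HTML instead:
# P0' = D(C0) XOR IV' = P0 XOR IV XOR IV'.
known = b"Content-type: te"
inject = b'<img src="http:/'          # exactly one block of HTML
evil_iv = xor(xor(iv, known), inject)

assert cbc_decrypt(key, evil_iv, ct)[:BLOCK] == inject
```

In the real attack, the injected HTML becomes an exfiltration channel: when rendered, it requests an attacker-controlled URL carrying the rest of the decrypted plaintext. The fix the researchers call for is enforcing ciphertext integrity, which vulnerable clients failed to do.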

Test results show the EFAIL attacks work against 25 of 35 tested S/MIME email clients and 10 of 28 tested OpenPGP clients.

“Our analysis shows that EFAIL plaintext exfiltration channels exist for 25 of the 35 tested S/MIME email clients and 10 of the 28 tested OpenPGP email clients.” states the blog post.

“While it is necessary to change the OpenPGP and S/MIME standards to reliably fix these vulnerabilities, Apple Mail, iOS Mail and Mozilla Thunderbird had even more severe implementation flaws allowing direct exfiltration of the plaintext that is technically very easy to execute.” 

Many security experts downplayed the importance of the EFAIL attack techniques explaining that the attacks work only against buggy email clients.

EFAIL attacks can be mitigated by disabling HTML rendering for incoming emails; patches released by email client developers can also prevent the attacks.

Pierluigi Paganini

(Security Affairs – privacy, EFAIL)

The post Researchers disclosed details of EFAIL attacks on PGP and S/MIME tools. Experts believe claims are overblown appeared first on Security Affairs.

Protecting your business behind a shield of privacy

In this podcast recorded at RSA Conference 2018, Francis Knott, VP of Business Development at Silent Circle, talks about the modern privacy landscape, and introduces Silent Circle’s Silent Phone and GoSilent products. Here’s a transcript of the podcast for your convenience. We are here at the RSA Conference with Francis Knott, the VP of Business Development at Silent Circle, to discuss the recent claims by Homeland Security that the organization has observed anomalous activity in … More

The post Protecting your business behind a shield of privacy appeared first on Help Net Security.

Critical Flaws in PGP and S/MIME Tools – Immediately disable tools that automatically decrypt PGP-encrypted email

Researchers found critical vulnerabilities in PGP and S/MIME tools; users are advised to immediately disable and/or uninstall tools that automatically decrypt PGP-encrypted email.

If you are one of the users of the email encryption tools Pretty Good Privacy and S/MIME there is an important warning for you.

A group of European security experts has discovered a set of critical vulnerabilities in PGP and S/MIME encryption tools that could reveal your encrypted emails in plaintext, including messages you sent in the past.

Pretty Good Privacy is the open source end-to-end encryption standard used to encrypt emails, while S/MIME, Secure/Multipurpose Internet Mail Extensions, is an asymmetric cryptography-based technology that allows users to send digitally signed and encrypted emails.

Sebastian Schinzel, a professor of Computer Security at the Münster University of Applied Sciences, warned that Pretty Good Privacy (PGP) might actually allow “Pretty Grievous P0wnage” due to the vulnerabilities, and the worst news is that there are currently no reliable fixes.

The existence of the vulnerabilities was also confirmed by researchers at the Electronic Frontier Foundation (EFF); the organization also recommended that users uninstall Pretty Good Privacy and S/MIME applications until the issues are fixed.

“A group of European security researchers have released a warning about a set of vulnerabilities affecting users of PGP and S/MIME. EFF has been in communication with the research team, and can confirm that these vulnerabilities pose an immediate risk to those using these tools for email communication, including the potential exposure of the contents of past messages.” reads the blog post published by the EFF. 

“Our advice, which mirrors that of the researchers, is to immediately disable and/or uninstall tools that automatically decrypt PGP-encrypted email.”

PGP and S/MIME Tools, hacking encryption

The EFF also provided links to guides on how to temporarily disable PGP plug-ins for Thunderbird with Enigmail, Apple Mail with GPGTools, and Outlook with Gpg4win.

“Until the flaws described in the paper are more widely understood and fixed, users should arrange for the use of alternative end-to-end secure channels, such as Signal, and temporarily stop sending and especially reading PGP-encrypted email,” states the advisory.

Schinzel will disclose full details on Tuesday morning at 0700 UTC.

Stay tuned!

Pierluigi Paganini

(Security Affairs – privacy, hacking)

The post Critical Flaws in PGP and S/MIME Tools – Immediately disable tools that automatically decrypt PGP-encrypted email appeared first on Security Affairs.

GDPR compliance: Identifying an organization’s unique profile

After a two-year transition period, the General Data Protection Regulation (GDPR) becomes enforceable beginning 25 May 2018. Presumably, many large companies have been working on a compliance program for months now. As the deadline approaches, many organizations are finding that ensuring compliance is a more complex endeavor than they had initially expected. GDPR replaces the 1995 Data Protection Directive (Directive 95/46/EC), and the new regulation imposes a substantial increase in requirements, reflecting major technological changes … More

The post GDPR compliance: Identifying an organization’s unique profile appeared first on Help Net Security.

NBlog May 12 – plummeting toward the deadline


With less than a fortnight now remaining, are you all set for the GDPR deadline with everything on your privacy projects either completed or well in hand?

If not, now is your last chance to refocus on priorities and squeeze the last ounce of effort from all involved.

The usual approach for many managers and team leaders facing just such a situation is to crack the whip. Maybe you have already done that. Maybe you are being thrashed, and feel obliged to do the same.

Hey, listen. Stop a moment and think. That's not the only way.

Assuming things have been run reasonably effectively to this point, everyone is well aware of the impending deadline. The increasing tension will be plain to all. People will have been slaving away, playing their part and (in most cases) doing their level best to hit the goal ... so piling on the pressure now may be counterproductive. When people are close to their breaking points, there's a chance they'll snap rather than bend, especially if they've learnt that bending gets them nothing but sore backs and yet more grief. The team and team leader need to trust each other, and that's achieved by experience, not by demand.

What else would help move things along in the right direction? There are almost always other options, other avenues to try besides whip-cracking. Has it occurred to you to ask the team? Seriously, find out what are their main pain points, and do something positive about them, now, before it's too late. 

A significant part of management's role is to facilitate things, enabling the workers to work and give of their best. This includes reducing or removing barriers, tackling issues and, well, teamworking. OK so the deadline is fixed. What about everything else? Look harder for slack in the system, opportunities to cut corners safely and sprint for the finish. Ask for creative suggestions and explore the options as a team. It's not just about 'sharing the solution': given some slack, people will often surprise us with novel responses.

By the way, once the line is crossed and the crowd cheers, what's in store for your little athletes? Maybe not a medal, but will there be anything at all to thank them for their supreme efforts, and celebrate a job well done? 

Aside from you, who is most anxious right now? Who has the biggest stake in the success (or failure!) of this effort? What are their main concerns? And can you persuade them to help out, if only to turn up at or before the medal ceremony in order to congratulate the team on a job well done?

Thinking still further forward, what does the current situation teach us? Deadlines are a fact of life, hence we have plenty of chances to try different approaches and learn what works best. Aside from that, right now a substantial number of organizations and teams around the globe are plummeting towards May 25th. What can we learn from others' experiences?

Speaking personally, I'll certainly be reading all I can about how organizations, teams and individuals have faced up to the GDPR challenge, both out of my general interest in management and perhaps to pick up new motivational techniques worth including in my toolbox or, for that matter, the ones to avoid like the plague. 

This motivational stuff is highly relevant to making security awareness and training more or less effective - obvious, if you think about it, which hopefully now you are.

Data breach disclosure is still taking too long, report reveals as GDPR looms

The accepted wisdom in the field of cybersecurity is that things are getting worse, and that more businesses are losing control of more data than ever before.

What a bunch of pessimists we are… The truth, however, might be rather different.

Read more in my article on the Bitdefender Business Insights blog.

Smashing Security #077: Why Paris Hilton doesn’t use iCloud, lottery hacking, and Facebook dating

The tricky-to-pronounce Paytsar Bkhchadzhyan is jailed for hacking Paris Hilton, we hear the story of the man who hacked the lottery and almost got away with $16.5 million, and Facebook thinks it is the perfect partner to find you a date.

Find out in this special splinter episode of the “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, who are joined this week by special guest Dave Bittner from The Cyberwire podcast.

Browsing in incognito mode is not private at all

Currently, most modern web browsers offer some type of private browsing. Such privacy shields are gaining popularity amongst internet users from all backgrounds, ages, and occupations. Web surfers are starting to realize the real value of their online privacy amid events such as the foreign state interference in the 2016 US presidential elections and Brexit.

Did companies such as the recently shut down Cambridge Analytica completely change the course of the 2016 presidential election and the Brexit referendum? No one knows for sure, but they certainly had an impact, and sadly, privacy incidents happen more often than they should. Internet users are starting to take matters into their own hands by using tools that somehow mislead them into thinking they browse anonymously.

According to a study published by researchers from the University of Chicago, USA and Leibniz University Hannover, Germany, there is a common misconception that sometimes makes users believe that browsing in incognito mode gives them online anonymity and protects them from malicious software. Incognito mode helps but is not a cure-all should you want to stay entirely anonymous while online. And you definitely need antivirus software if you want to surf safely.

How to achieve online anonymity

Private browsing offered by modern web browsers is undoubtedly a good step towards achieving online anonymity, but it isn’t all you need to be anonymous. Surfing the internet with a browser in incognito mode might prevent your browser from storing your browsing history, cookies and site data, and information entered in forms but does not keep your online anonymity intact.

While no one, including your parents or spouse, would be able to determine the sites you’ve visited by looking at your browsing history, your online activities are not a secret to your internet service provider, your employer, or your school. Even though no trace is left on the computer, your employer can see the destination of the traffic that goes in and out of your connected device. Private browsing mode does not make you anonymous to the websites that you visit either. So, briefly, everyone but your spouse and parents knows your browsing habits.

How can you stop them from monitoring your traffic?

VPN Service

The only way to prevent your system administrator and internet service provider from knowing more about the sites you visit is to use a VPN service. When someone, such as your employer or ISP, gets curious about the websites that you tend to visit, all they will see will be the traffic coming in and out of one single place – your VPN service provider. Unless they get a court order, reputable VPN service providers would never share with third parties any details involving your browsing history.

Even if we blindly believe tech companies keep our data secure, things sometimes go sideways. Twitter just advised its userbase to change their passwords, Equifax got hacked leaving hundreds of millions of US citizens vulnerable, and Facebook’s CEO Mark Zuckerberg was grilled before Congress where he admitted Facebook ‘didn’t do enough’ to protect users. Using incognito mode is indeed a good start, but having a quality VPN service provider is a must should you want to be a step closer towards achieving online anonymity.

Download Panda FREE VPN

The post Browsing in incognito mode is not private at all appeared first on Panda Security Mediacenter.

Android P to Block Apps From Monitoring Device Network Activity

Do you know that any app you have installed on your Android phone can monitor network activity—even without asking for any sensitive permission—to detect when other apps on your phone are connecting to the Internet? Obviously, they can’t see the content of the network traffic, but they can easily find out which server you are connecting to, all without your knowledge. Knowing what apps you

NBlog May 7 – [NZ] privacy week

I expect you know already but hey it's privacy week everyone!  Woo-hoo!  


[Cue rockets and Catherine wheels]

Well OK, it's privacy week in New Zealand.

And a short week at that, 5 days not 7.

But who am I to knock it?  We've settled and live here.  We pay our dues.  We have both a personal and proprietary interest in the NZ gummt's privacy and security, and we're doing our level best to ensure that the NZ authorities Get It.  We want the same things.

Don't get me wrong, 5 days of privacy awareness stuff is better than nothing ... but hang on, isn't this the month that GDPR comes into effect?  Isn't this privacy month?

Oh well.

Here's Dilbert's take.

Trivia Time: Test Your Family’s Password Safety Knowledge

Passwords have become critical tools for every citizen of the digital world. Passwords stand between your family’s gold mine of personal data and the entirety of the internet. While most of us have a love-hate relationship with passwords, it’s beneficial to remember they do serve a powerful purpose when created and treated with intention.

But asking your kids to up their password game is like asking them to recite the state capitals — booooring! So, during this first week of May as we celebrate World Password Day, add a dash of fun to the mix. Encourage your family to test their knowledge with some Cybersavvy Trivia.

Want to find out what kind of password would take two centuries to crack? Or, discover the #1 trick thieves use to crack your password? Then take the quiz and see which family member genuinely knows how to create an awesome password.

We’ve come a long way in our understanding of what makes a strong password and the many ways nefarious strangers crack our most brilliant ones. We know that unique passwords are the hardest to crack, but we also know that human nature means we lean toward creating passwords that are also easy to remember. So striking a balance between strong and memorable may be the most prudent challenge to issue to your family this year.

Several foundational principles remain when it comes to creating strong passwords. Share them with your family and friends and take some of the worries out of password strength once and for all.

5 Password Power Principles

  1. Unique = power. A strong password includes numbers, lowercase and uppercase letters, and symbols. The more complicated your password is, the more difficult it will be to crack. Another option is a password that is a passphrase only you could know. For instance, look across the room and what do you see? I can see my dog. Only I know her personality; her likes and dislikes. So, a possible password for me might be #BaconDoodle$. You can even throw in a misspelling of your password to increase its strength such as Passwurd4Life. Just be sure to remember your intentional typos if you choose this option.
  2. Diverse = power. Mixing up your passwords for different websites, apps, and accounts can be a hassle to remember but it’s necessary for online security. Try to use different passwords for online accounts so that if one account is compromised, several accounts aren’t put in jeopardy.
  3. Password manager = power. Working in conjunction with our #2 tip, forget about remembering every password for every account. Let a password manager do the hard work for you. A password manager is a tech tool for generating and storing passwords, so you don’t have to. It will also auto-log you onto frequently visited sites.
  4. Private = power. The strongest password is the one that’s kept private. Kids especially like to share passwords as a sign of loyalty between friends. They also share passwords to allow friends to take over their Snapchat streaks if they can’t log on each day. This is an unwise practice that can easily backfire. The most powerful password is the one that is kept private.
  5. 2-step verification = power. Use multi-factor (two-step) authentication whenever possible. Multiple login steps can make a huge difference in securing important online accounts. Sometimes the steps can be a password plus a text confirmation or a PIN plus a fingerprint. These steps help keep the bad guys out even if they happen to gain access to your password.

It’s a lot to manage, this digital life but once you’ve got the safety basics down, you can enjoy all the benefits of online life without the worry of your information getting into the wrong hands. So have a fun and stay informed knowing you’ve equipped your family to live their safest online life!


Toni Birdsong is a Family Safety Evangelist to McAfee. You can find her on Twitter @McAfee_Family. (Disclosures).

The post Trivia Time: Test Your Family’s Password Safety Knowledge appeared first on McAfee Blogs.

How To Protect Your Password and Keep Hackers Away

Passwords are the most common way to prove we are who we say we are when it comes to using websites, social media accounts, email, and even the computer itself. Passwords also give us and others access to mobile phones, bank applications, work log-ins, and confidential files. For many online systems, a password is the only thing keeping a hacker from stealing our personal data. Read on to learn how cyber criminals hack passwords, and which techniques protect them.

Why It’s Easy for Hackers to Hack

While creating a password may seem like a safe bet, large, reliable companies such as eBay, LinkedIn and most recently Facebook have all been breached, compromising passwords for many of their users. According to the chief executive of specialist insurer Hiscox, in 2016 cyber crime cost the global economy more than $450 billion and over two billion records were stolen. Why is it so easy for hackers to access accounts and obtain secure passwords?

First and foremost, we reuse our passwords. Over 60 percent of the population use the same password across multiple sites. And since 39 percent have a hard time keeping track of passwords, we become incredibly susceptible to hackers when we keep passwords for years or even decades.

People are also incredibly predictable. We tend to use passwords that are personalized in some form to our lives, because they are easier to remember. Because of our visual memory capacity, it is easier to remember images and information that we are already familiar with and have some meaning to us. This is why we often create easy to remember, predictable passwords based on things like family members, pets, or birthdays.

The average user also has about 26 password-protected accounts, but only five different passwords across those accounts. That makes us more susceptible to hacks, especially brute force attacks. With more than 85 percent of Americans keeping track of online passwords by memorizing them in their heads, it’s nearly impossible to remember up to 26 passwords. With a plethora of passwords to juggle, it’s important to install a password management program. However, a shockingly low 12 percent of Americans actually have one installed.

The standard rule of thumb used to be to change passwords every 90 days. However, in recent years this method has been defined as ineffective by the FTC Chief Technologist and Carnegie Mellon computer science professor, Lorrie Cranor. She found that when people are forced to change their passwords on the regular, they put less mental effort into it. This is another way that hackers can take advantage of people’s lack of effort or desire to change or diversify their passwords.

How Long it Takes Cyber Criminals to Determine Your Password

If you have a password as simple as “password” or “abcdefg”, it would only take a hacker 0.29 milliseconds to crack it according to BetterBuys’ password-cracking times. Even more surprising? The password 123456789 is cracked 431 times during the blink of an eye. Even more complicated passwords are being hacked faster. What used to take hackers three years to crack is now taking under two months.

Hackers first go after the easiest and most common worst passwords, then move on to passwords with the least amount of characters. While a password with seven characters may take only 0.29 milliseconds to crack, one with 12 characters can take up to two centuries. The longer that passwords are, the longer it will take for the hackers to get the right combination.
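The gap between those figures is just search-space arithmetic: each extra character multiplies the attacker's work by the size of the character set. A rough back-of-the-envelope version (the guesses-per-second rate is an assumed figure for an offline cracking rig, not taken from the BetterBuys data):

```python
GUESSES_PER_SECOND = 1e11  # assumed offline GPU rig; purely illustrative

def worst_case_seconds(alphabet_size, length):
    # Time to exhaust every password of the given length over the alphabet.
    return alphabet_size ** length / GUESSES_PER_SECOND

short = worst_case_seconds(26, 7)    # 7 lowercase letters: under a second
long = worst_case_seconds(95, 12)    # 12 chars, full keyboard: millennia
```

This is why length beats cleverness: going from 7 lowercase characters to 12 drawn from the full printable set grows the search space by many orders of magnitude, even against the fastest assumed hardware.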

How Cyber Criminals Hack Passwords

So how do hackers actually do their dirty work? First off, it’s important to understand that this is their job. For most modern, successful hackers, this is what they put their time and effort into on a daily basis. The most common ways that hackers can access your accounts through your credentials are:

  • keylogger attacks
  • brute force attacks
  • dictionary attacks
  • phishing attacks

Keylogger Attacks

A keylogger is a type of surveillance technology used to record and monitor each keystroke typed on a specific device’s keyboard. Cyber criminals use keyloggers as a spyware tool to steal personal information, login information, and sensitive enterprise data.

How to Protect Yourself:

Use a firewall to prevent a keylogger from transmitting information to a third party. You can also install a password manager, which will autofill your passwords and prevent keyloggers from accessing your credentials. Make sure to also keep your software updated, as keyloggers can take advantage of software vulnerabilities to inject themselves into your system.

Brute Force Attacks

We use passwords that are simple, relevant, and can be guessed within a few tries. When using the brute force method, hackers use software that repeatedly tries many password combinations. This is a reliable way to steal your information, as many users choose passwords as easy as “abcd”. Some of the most common password-cracking tools include Brutus, Wfuzz, and RainbowCrack.

How to Protect Yourself:

There are a number of ways to prevent brute force attacks. First, you can implement an account lockout policy, so after a few failed login attempts, the account is locked until an administrator unlocks it. You can also implement progressive delays, which lock out user accounts for a set period of time after failed attempts, increasing the lock out time after each failed attempt.

Another solution is using a challenge-response test to prevent an automated submission to the login page. Systems such as reCAPTCHA can require a word or math problem to make sure a person is entering credentials rather than a hacking system.
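The lockout and progressive-delay policies described above can be sketched in a few lines. This is a minimal in-memory toy; the class name, backoff schedule, and five-strike hard lock are illustrative choices, and a real deployment would persist state and also throttle per source IP:

```python
import time

class LoginThrottle:
    """Progressive delays after failed logins, hard lock after too many."""

    def __init__(self, base_delay=1.0, max_failures=5):
        self.base_delay = base_delay
        self.max_failures = max_failures
        self.failures = {}      # username -> consecutive failed attempts
        self.locked_until = {}  # username -> earliest allowed retry time

    def allowed(self, user, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.locked_until.get(user, 0.0)

    def record_failure(self, user, now=None):
        now = time.monotonic() if now is None else now
        n = self.failures.get(user, 0) + 1
        self.failures[user] = n
        if n >= self.max_failures:
            # Account lockout policy: locked until an administrator resets it.
            self.locked_until[user] = float("inf")
        else:
            # Progressive delay: 1s, 2s, 4s, 8s ...
            self.locked_until[user] = now + self.base_delay * 2 ** (n - 1)

    def record_success(self, user):
        self.failures.pop(user, None)
        self.locked_until.pop(user, None)
```

The exponential backoff makes automated guessing impractical (a few dozen attempts already cost minutes) while barely inconveniencing a legitimate user who mistypes once or twice.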

Dictionary Attacks

In 2012, more than 6 million passwords were hacked on LinkedIn due to a dictionary attack. A dictionary attack works by systematically entering every word in a dictionary as a password. Dictionary attacks seem to succeed because people have a tendency to choose short, common passwords.

How to Protect Yourself:

Choose a password that is at least 8 characters long. Avoid any words in the dictionary, or common, predictable variations on words. For remote servers, use SSH keys rather than passwords to log in. You should also only allow SSH connections from certain hosts or IP addresses so you know which computers are connecting to your server.
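The advice above can be enforced at signup time with a simple check against a wordlist. A minimal sketch, where the wordlist and the two "predictable variation" rules (trailing digits, basic leetspeak) are illustrative only; real checkers use much larger lists and more rules:

```python
def is_dictionary_weak(password, wordlist):
    # Flag short passwords, exact dictionary words, and two common
    # variations: trailing digits ("monkey123") and leetspeak ("p@ssw0rd").
    words = {w.lower() for w in wordlist}
    candidate = password.lower()
    deleet = candidate.translate(str.maketrans("0134$@", "oieasa"))
    variants = {candidate, candidate.rstrip("0123456789"),
                deleet, deleet.rstrip("0123456789")}
    return len(password) < 8 or any(v in words for v in variants)
```

Note how "P@ssw0rd" normalizes back to "password": the variations users rely on to dress up a dictionary word are exactly the ones cracking tools try first.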

Phishing Attacks

Phishing attacks involve hackers using fake emails and websites to steal your credentials. They are most commonly emails that disguise as legitimate companies, asking you to download a file or click on a link. Most commonly, phishing attacks can involve a hacker masking as your bank provider, which can be especially detrimental.

How to Protect Yourself

Be cautious of emails that come from unrecognized senders, are not personalized, ask you to confirm personal or financial information, or are urging you to act quickly with threatening information. Do not click on links, download files, or open attachments from unknown senders. Never email personal or financial information to even those you trust, as your email can still be breached.

Creating a Fool-Proof Password

Cyber criminals have become experts in determining passwords. 50 percent of small to midsize organizations suffered at least one cyberattack in 2017. That’s half of all small businesses, not to mention the large corporations such as T-Mobile, JP Morgan, and eBay who have suffered massive cyber attacks affecting hundreds of millions of customers. That’s not even the scariest part.

According to WordPress’ “UnMasked” study, even high-level executives like the senior engineer at PayPal or the program manager at Microsoft have faulty, predictable passwords. This could seriously impact their businesses. When creating a password, there are a few tips that can significantly help you keep your accounts safe and hackers out.

A password that is at least 14 characters is ideal; eight characters is the shortest a password should be. Make sure to use a variety of characters, numbers, and letters that have seemingly no correlation or direct link to you or your hobbies.

Avoid predictable patterns in letter capitalization like at the beginning or end of your password, or for proper nouns. Also, try to use your entire keyboard, and not just characters you use on a daily basis, as hackers know this and will target the common characters.
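Those guidelines are easy to automate. A sketch using Python's `secrets` module, which draws cryptographically strong random choices; the re-draw loop simply rejects candidates that happen to miss a character class:

```python
import secrets
import string

def generate_password(length=14):
    # Full printable set: letters, digits, and the whole symbol row,
    # per the "use your entire keyboard" advice above.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Require at least one character from each class.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

A password like this is unmemorable by design, which is exactly where the password managers discussed below come in: let the machine generate and remember it.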

Password Protection: Keeping Your Passwords Safe

In order to keep your passwords locked and secure, it’s important to create quality passwords and use security measures when creating new accounts. While many studies used to say to change your password every 90 days, the newest guidelines actually suggest changing your passwords when necessary, as changing too often can actually hurt you rather than help you.

Also, be careful with password hints and security questions, as these are easy ways for hackers to trigger a “recovery email” with your account information. Try to use uncommon answers, such as obscure teacher names, or even create random answers and write them down so you remember them. Another technique is to create a sentence or acronym that only applies to you but is random enough to fool hackers.

Use a password manager such as Dashlane, LastPass, or Sticky Password. These tools generate and store complex passwords for you. The password managers live in your browser and can fill in your login information whenever on a site.

Lastly, install antivirus software for password protection across the internet. Install your antivirus on all devices, in order to keep tabs on suspicious activity and keep unknown downloads from installing on your computer.

The post How To Protect Your Password and Keep Hackers Away appeared first on Panda Security Mediacenter.

This Week in Security News: Zippy’s and Flynn

Welcome to our weekly roundup, where we share what you need to know about the cybersecurity news and events that happened over the past few days. This week, Hawaii-based restaurant Zippy’s suffered a POS data breach. In addition, Uber executive John Flynn argued that user expectations on data protection are rising, but consumers still aren’t implementing the right precautions for their own data safety.

Read on to learn more.

State-of-the-Art Security: The Role of Technology in the Journey to GDPR Compliance

As we’ve discussed over the last 7 weeks in our video case study series, the GDPR impacts many different areas of our company, including our employees, customers, and partners.

PROTECTING YOUR PRIVACY – Part 1: The Privacy Risks of Social Networks and Online Browsing

Most Americans today spend many of their waking hours online. In fact, we’re up to spending an average of five hours per day just on our mobiles.

What HIPAA and Other Compliance Teaches Us About the Reality of GDPR

The date for General Data Protection Regulation (GDPR) compliance is three months away, yet many organizations, especially those outside Europe, remain unprepared.

PROTECTING YOUR PRIVACY – Part 2: How to Maximize Your Privacy on Social Media and in Your Browser

You can manually configure your Privacy Settings on sites including Facebook, Twitter, LinkedIn, and more. However, no two sites are the same, and some are easier than others to navigate. 

Securing the Connected Industrial World with Trend Micro

At Trend Micro we’ve made it our business over the past 30 years to anticipate where technology is taking the world. That’s why our message has evolved over that time from Peace of Mind Computing to Your Internet Firewall and most recently Securing Your Journey to the Cloud.

FacexWorm Targets Cryptocurrency Trading Platforms, Abuses Facebook Messenger for Propagation

Trend Micro’s Cyber Safety Solutions team identified a malicious Chrome extension we named FacexWorm, which uses a miscellany of techniques to target cryptocurrency trading platforms accessed on an affected browser and propagates via Facebook Messenger.  

How cryptocurrency is shaping today’s threat environment

Cryptocurrency has exploded as a popular way to support digital transactions. Since its creation, users have discovered an array of different ways to leverage cryptocurrency, including within mining strategies and digital wallets. 

Cryptocurrency-Mining Malware Targeting IoT, Being Offered in the Underground

Cryptocurrencies have been generating much buzz of late. While some governments are at work to regulate transactions involving them, there are others that want to stop mining activities related to them altogether.  

Legitimate Application AnyDesk Bundled with New Ransomware Variant

Trend Micro recently discovered a new ransomware (Detected as RANSOM_BLACKHEART.THDBCAH), which drops and executes the legitimate tool known as AnyDesk alongside its malicious payload.  

Zippy’s Restaurants Suffers POS Data Breach

Zippy’s Restaurants’ point-of-sale system was compromised for four months, exposing customer data.

ASEAN Cybersecurity in the Spotlight Under Singapore’s Chairmanship

At ASEAN Summit 2018 in Singapore, the strong focus on cybersecurity reflected regional and international attention to growing cyber threats in Southeast Asia.

Almost Half of UK Businesses Suffered Cyberattack or Security Breach Last Year, Figures Show

The 2018 Cyber Security Breaches Survey found 19 percent of charities and 43 percent of businesses in the UK had reported cyber security breaches or attacks in the last year.

Commentary: States Are Getting Tough on Data Security—but That Might Be a Problem

The Facebook-Cambridge Analytica scandal shines a light on the need for more regulation protecting data; more than 240 bills were introduced in 42 states last year covering a range of security issues.

Uber Security Head Says Users Need to Care More About Data After Breach

At the 2018 Collision Tech Conference, John Flynn relayed that user expectations on data protection are rising, but customers still aren’t taking the right actions to protect their personal information. 

Alexa Can Listen Indefinitely, Potentially Exploited to Transcribe Information to Cybercriminals

Researchers discovered a new internet of things (IoT) design flaw in a popular smart home system: they found that Amazon’s Alexa service can be programmed to eavesdrop on its users and transcribe all the information heard.

Securing the Internet of Things Through Effective Regulation

According to a survey done by Gartner, almost 20 percent of organizations have observed at least one IoT-based attack in the last three years.

As cities get high-tech, hackers become more dangerous

Remember when a major U.S. city’s computer infrastructure was hacked, and held ransom, by a group of cyber criminals?

Do you agree with John Flynn’s speech on user expectations for data protection? Share your thoughts in the comments below or follow me on Twitter to continue the conversation; @JonLClay.


What HIPAA and Other Compliance Teaches Us About the Reality of GDPR

with contributing author, William J. Malik, CISA | VP, Infrastructure Strategies

The date for General Data Protection Regulation (GDPR) compliance is just weeks away, yet many organizations, especially those outside Europe, remain unprepared. It turns out that the experiences from other privacy compliance regulations are less helpful than assumed, but the best lessons learned may be from non-privacy regulations.

GDPR Lessons from Other Privacy Compliance Aren’t Very Helpful

Because compliance is tied to regulations and laws, it is often regional. In Canada, the Personal Information Protection and Electronic Documents Act (PIPEDA) became law in 2000. PIPEDA is mostly about privacy, specifically obtaining consent from people and letting them know why their information is being collected. As with too many privacy laws and regulations, to date there have been no penalties for PIPEDA non-compliance other than reputational ones. Governments are eager to pass compliance regulations but often balk at implementing penalties. This history of penalty-free non-compliance will surprise organizations that run afoul of GDPR expecting it to be as toothless as privacy regulations in many other jurisdictions: GDPR has penalties in its first iteration. Rather than looking to other privacy regulations, financial compliance is a better example to use for convincing your organization to get serious about GDPR. The penalties in GDPR are real.

GDPR Lessons from PCI-DSS

PCI-DSS is a better comparison to GDPR: a regional compliance regime with global impact and real penalties. When PCI-DSS was first introduced, many organizations assumed it wouldn’t apply to them because they were not credit card processors. The next phase was compliance surprise, when organizations discovered credit card holder information in new apps, or added to existing apps, that had previously been out of scope for PCI. One noteworthy case saw a $13.3M fine levied. The GDPR lesson: even if you are not subject to compliance on day one, monitor changes to your business to check whether you later become subject to GDPR.

GDPR Lessons from HIPAA

US companies are generally not ready for GDPR compliance. By examining the history of compliance with HIPAA, we can forecast how GDPR compliance will roll out. HIPAA is focused on privacy, so it offers some lessons; initially, HIPAA enforcement was light. GDPR applies to any organization processing personally identifiable information belonging to EU citizens, a requirement previously defined under the European Data Protection Directive. Those basic definitions remain in place. What has changed:

  1. The Safe Harbor has been supplanted by the EU-US Privacy Shield, which requires US companies to self-certify with the Federal Trade Commission (see https://www.privacyshield.gov/Program-Overview for details).
  2. Reporting requirements are much more stringent. An organization has 72 hours after discovery to report a breach.
  3. Organizations must show that they are using best-in-class or state-of-the-art technology to protect personally identifiable information.
  4. Fines are greater. There are two tiers of fines: the first up to a maximum of 10M Euros or 2 percent of global revenue (whichever is higher), and the second up to 20M Euros or 4 percent of global revenue (whichever is higher).
  5. Organizations must name a Data Protection Officer (DPO), who has a broad remit to investigate and report on data breaches. This individual cannot be dismissed or sanctioned by their organization for doing that job.
  6. Individuals have the right to request their information be corrected or erased, by application to the DPO.
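The two fine tiers in point 4 boil down to a one-line formula: the cap is the greater of the flat amount and the revenue percentage. A minimal sketch (the function name and sample revenue figures are invented for illustration):

```python
def gdpr_max_fine(global_revenue_eur: float, tier: int) -> float:
    # Tier 1: up to 10M Euros or 2% of global revenue, whichever is higher.
    # Tier 2: up to 20M Euros or 4% of global revenue, whichever is higher.
    caps = {1: (10_000_000, 0.02), 2: (20_000_000, 0.04)}
    flat, pct = caps[tier]
    return max(flat, pct * global_revenue_eur)

# A multinational with 2bn Euros of global revenue: the 4% cap (80M) beats the flat 20M.
assert gdpr_max_fine(2_000_000_000, 2) == 80_000_000
# A small firm: the flat amount dominates.
assert gdpr_max_fine(50_000_000, 1) == 10_000_000
```

The "whichever is higher" wording matters: for large multinationals the percentage term, not the flat amount, sets the exposure.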

But penalties for HIPAA non-compliance have grown steadily over the past 10 years.

Note that under the terms of the Privacy Shield, individuals and government agencies (specifically the FTC) can bring actions against organizations in US courts. The mechanisms for levying fines are already in place. Organizations that fail to prepare for GDPR will face the financial consequences of non-compliance, that is, Stage 3, in short order. Unlike HIPAA, GDPR is familiar to many multinationals. Organizations have faced penalties under the current Data Protection Directive for over a decade. The learning curve will be much shorter this time. Do not expect a multi-year gap before US-based organizations face substantial financial consequences. We expect to see fines levied within the next 18 to 24 months.

GDPR Lessons from Increasing Compliance Maturity

Not all compliance is created equal. For many privacy regulations there is no penalty for non-compliance, even for willful breaches, whereas in some geographies privacy breaches bring significant consequences. So compliance falls along a gradient of maturity, not by category of compliance (e.g. financial, privacy) but by the specific regulation or standard. This isn’t to argue that every compliance regime needs penalties, formality and significant oversight, but there are noteworthy differences in the ‘seriousness’ or impact of compliance with each system. We foresee that organizations will mature in their compliance following this proposed maturity model:

Maturity Level | Characteristics | Likely Examples (and fodder for arguments)
0 | Minimal utility in compliance; can be used as an excuse for doing less than due-diligence standards | OWASP Top 10
1 | Guidance and checklists | NIST standards, ISO 27001
2 | Regulations and formal laws without penalties ("name and shame") | PIPEDA (current version)
3 | Non-compliance brings impact: significant fines | PCI-DSS, HIPAA, GDPR
4 | Embedded into the business; compliance because it makes life better | FIPS 140-2

 

We will move rapidly through stages 0 and 1 to stage 2. We already have organizations that report on breaches, investigations in progress, and fines for HIPAA. The Privacy Shield site tracks registered organizations, and will provide a platform for reporting on breaches and fines, as well.

The Bottom Line

Although GDPR deadlines are approaching rapidly, this is not wholly unfamiliar territory. Use the practices already in place for your non-privacy compliance. Yes, GDPR is a more mature model of privacy compliance than most North American organizations are used to, but the compliance already in place for other regulations and laws can be a roadmap in getting compliant quickly.



PROTECTING YOUR PRIVACY – Part 2: How to Maximize Your Privacy on Social Media and in Your Browser

As social media sites become a bigger part of users' daily lives, they must be increasingly careful about their online privacy.

In the last post we highlighted the privacy risks associated with using popular social networking sites and browsers. You might not appreciate just how much of your personal data is being accessed by advertisers and other third parties via your social media accounts and internet browsing. Similarly, your privacy settings may have changed significantly since the last time you checked them, meaning that you’re now over-sharing via your updates and posts online.

This could lead to various unintended consequences. For example, a prospective employer may cut you from a shortlist of candidates because they don’t like what they see on your Facebook page. Or an enterprising burglar might see from a Twitter post that you’re not at home and raid your property. Hackers might even harvest the information you share and use your identity to apply for new bank cards in your name.

Fortunately, there are things you can do to protect your privacy online — both within the sites themselves and by using third-party tools like Trend Micro’s Privacy Scanner. Let’s take a look.

Changing your Privacy Settings

You can manually configure your Privacy Settings on sites including Facebook, Twitter, Google+, LinkedIn, and more, as well as in your browser. However, no two sites are the same, and some are easier than others to navigate.

Facebook:

The good news is that following the Cambridge Analytica scandal, Facebook has made several changes designed to make it easier for you to manage your privacy settings. A privacy shortcuts button is now accessible from the top right of any Facebook page and will help you manage who can view your content; who can contact you; and how you can stop someone hassling you. In addition, anywhere you’re able to share your status updates, photos and other posts, there’s an “audience selector” tool which allows you to specify whether they can be seen by the Public (anyone on or off Facebook), Friends, or just you. Be aware that Facebook remembers your most recent setting.

The amount of data you share with apps is also increasingly important to users. Following the recent data leakage scandal, Facebook has promised to notify users if it removes any apps for breaching its terms of service; to remove an app’s access if the app hasn’t been used in three months; and to reduce the data that an app can request without app review. If you want to manually review what info your Facebook apps can access, click the drop-down menu in the top right, click Settings, then go to Apps and Websites on the left-hand side. You can choose between Active, Expired or Removed websites/apps and remove those you no longer wish to access your personal data.

Twitter:

As mentioned in the previous blog, Twitter is easier to manage than Facebook, but there are some settings users may want to change to enhance their privacy. In your account, click on Settings and Privacy, then Privacy and Safety, and you’ll be given several options. Tweets are public by default, so if you want them to be private and only shared with approved friends, click Protect your Tweets. Similarly, there are options to remove your geolocation, to stop users tagging you in photos, or to stop others finding you by email address/phone number. Also switch personalization off to stop sharing data with advertisers, and switch off Receive Direct Messages from anyone to avoid spam direct messages.

Browser (Chrome on Windows):

As the most popular browser in the world, Google Chrome tracks and sells much of your activity to advertisers as well as sharing it with other Google products. If you don’t want to sync your personal browsing history to all devices, including your work machine, then click on the three dots in the top right-hand corner, Settings, Sync, and then toggle off the features you don’t want. You’ll need to do the same at work or for other machines.

The browser also shares information with various other services. If you’re not happy with that happening, you can toggle them off by going to Settings, Advanced (at the bottom of the page). Enabling Do Not Track will help discourage third-party sites from storing your data, although it’s not 100% effective. It’s also a good idea to keep on the service protecting you and your device from dangerous sites.

Click on “content settings” to dive into additional privacy settings. Go into Cookies and enable “keep local data until you quit your browser” to limit the data sites can keep about you. Finally, consider using a password manager from a third-party expert like Trend Micro instead of storing your passwords in the browser, since it’s far more secure.

Automate Privacy Settings with Trend Micro Privacy Scanner

If you want an easier way to manage your privacy on social media and browsers, consider the Trend Micro Privacy Scanner feature, which is available within Trend Micro Security on Windows and Mac, and within Mobile Security on Android and iOS. While we can’t help you with all your social network settings, we can certainly help you with quick and easy fixes on four major platforms, as well as their linked apps, and in Windows browsers.

For Windows, the social networks covered are Facebook, Twitter, Google+, and LinkedIn, as well as Internet Explorer, Chrome, and Firefox browsers. Privacy Scanner also works on Macs the same way for the same social networking platforms. And it works on Android (for Facebook) and iOS (for Facebook and Twitter). It’s turned on by default in Trend Micro Internet, Maximum and Premium Security and can also be launched from the Trend Micro Toolbar. Either click on the Privacy icon in the Console, or in the browser, select the Trend Micro Toolbar and “Check your Online Privacy.” Here are a few scenarios:

Facebook on Windows

A Facebook sign-in page is shown by default by the Privacy Scanner. Sign in, then click See Scan Results. Click Fix All and then Fix to fix all the issues highlighted, or click the drop-down to tackle them individually. You can also view any apps here which may have privacy concerns. If you want to fix each separately, click “Who can see each app and its posts?”

Once the fixes are complete, you may get a message saying your friends’ accounts need help too. In that case you can share a link to the Privacy Scanner with them on the social network.

Chrome on Windows

To start a scan, open up your browser. In the Trend Micro toolbar, select Check your online privacy. The Trend Micro Privacy Scanner portal will appear. Click on the browser you want to check. The scanner will show you where there are privacy concerns. Click Fix All and then Fix or manually fix/edit each one.

Twitter on iOS

To scan and fix Twitter via Trend Micro Mobile Security on iOS, swipe the Safe Surfing shield to the left and tap the Social Network Privacy Shield in the main Console. (Note: this UI will change in the Fall of 2018.) Tap the Twitter icon to sign-in and then Login to start the scan. Tap Improve Now or the individual settings panel to change the settings. The feature works similarly on Android.

Trend Micro Password Manager

Finally, Trend Micro Password Manager has been designed to help you protect the privacy of your account passwords across PCs, Macs, Android and iOS. It’s worth considering as an alternative to storing your online credentials in the browser, which exposes them to hackers. Trend Micro Password Manager is automatically installed with Trend Micro Maximum Security, but you can also install a free or paid stand-alone edition of the product, Password Manager.

  • Generates highly secure, unique and tough-to-hack passwords for each of your online accounts
  • Securely stores and recalls these credentials so you don’t have to remember them
  • Offers an easy way to change passwords, if any do end up being leaked or stolen
  • Makes it quick and easy to manage your passwords from any location, on any device and browser
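The password-generation bullet above can be sketched in a few lines of Python using the standard library’s cryptographic random source. This is a generic illustration, not Trend Micro’s actual implementation:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    # secrets draws from the OS cryptographic RNG; never use random() for passwords.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password(20))  # a different, unguessable string every run
```

Length and character variety both matter: every extra character multiplies the work an attacker must do by the size of the alphabet.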

At Trend Micro we understand that protecting your privacy and security online is becoming increasingly challenging. That’s why we’ve done our best to do the hard work for you—helping you to enjoy your digital life safely and easily.

For more info or to purchase Trend Micro Security for PC and Mac, as well as Trend Micro Mobile Security for iOS and Android, go here.

To watch a video on using Trend Micro Privacy Scanner, go here.

For more info on Trend Micro Password Manager go here, or to watch videos on using Password Manager go here.


Permission slip: what consent means and where it really applies to GDPR

As data protection and privacy professionals, we use terms from data protection legislation daily and they roll off the tongue as if we were born knowing what the words mean. The problem is, GDPR contains words that have both a legal meaning and a different semantic meaning.

Talking with consumers and clients, I realise that we must temper our language carefully. As practitioners, we understand the legal meaning and frequently we don’t account for clients only understanding the semantic meaning.

You say consent, I say ‘consent’

The domain of GDPR where I have felt this disparity most strongly is the legal instrument referred to in the GDPR as ‘consent’. Beyond GDPR’s ‘consent as legal basis for processing data’, many other forms of consent exist in our society. I recently gave a talk to a group of executives on GDPR and when I discussed ‘consent’ as a legal basis for processing, one individual in the audience noted how horrified he was that Ireland was reducing the age of consent from 16 to 13. Realising his confusion, I quickly reaffirmed that the ‘consent’ I was referring to was a legal instrument for certain types of processing such as e-marketing – and no other kind.

Recent media coverage of the Ulster Rugby players’ trial made me realise how clear we must be with clients and others when expressing ourselves about ‘consent’. So, if you aren’t doing so already, I encourage you to explain to your clients not just the differences between consent and the other legal instruments in the GDPR, but the actual meaning of consent under GDPR.

Consent and permission ≠ the same

The second issue is that I frequently encounter people who do not understand ‘consent’ in GDPR terms and who confuse it with ‘consent’ in its literal sense.

It is an understandable confusion, as consent in its literal sense means ‘permission for something to happen or agreement to do something’. Under GDPR, consent can only be given and revoked for processing that uses consent as its legal basis. I have found that organisations (and data subjects) often discuss how they will facilitate ‘revoking consent’ from processing of information that is processed under a legal basis other than consent.

Lawful bases for processing data

Elizabeth Denham, the UK Information Commissioner, summarised this issue succinctly when she noted that headlines about consent often lack context or understanding about all the different lawful bases that organisations will have for processing personal information under the GDPR. For processing to be lawful under the GDPR, at least one lawful basis is required.

Consider the following examples: a government body processing property tax information; banks sharing data for fraud protection purposes; or insurance companies processing claims information. All these examples require a lawful basis for processing personal information other than ‘consent’. Each legal instrument has its own set of requirements. If the legal basis for processing is, for example, legitimate interest, the GDPR outlines a completely different set of requirements. In such cases, you do not need consent. This also means that the rules for ‘consent’, such as positive affirmative opt-ins and freedom to change preferences, are only mandatory for consent-based processing.

With less than a month to go until GDPR, some organisations may still be grappling with the issue of ‘consent’ and the related implications for data processing under a misunderstanding of its meaning and where it applies. For our part, privacy professionals can help by being completely clear in how we communicate – while watching for signs that our intended audience understands what we mean. The regulation will be with us for a long time to come after 25 May. It is always worthwhile to ensure privacy policies apply ‘consent’ only where it’s legally necessary to do so.

The post Permission slip: what consent means and where it really applies to GDPR appeared first on BH Consulting.

Smashing Security #076: Spying phones, hacked ski lifts, and World Password Day


Cheap Android smartphones sold on Amazon have been sending customers’ full text messages to a Chinese server, ski lifts are found to be the latest devices left open to abuse by hackers, and we remind you why password managers are a good idea on World Password Day. Oh, and our guest serenades us with a hit from the 1980s!

All this and more is discussed in the latest edition of the “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by broadcaster and journalist David McClelland.

Cyber Security Roundup for April 2018

The fallout from the Facebook privacy scandal rumbled on throughout April and culminated with the closure of the company at the centre of the scandal, Cambridge Analytica.
Ikea was forced to shut down its freelance labour marketplace app and website 'TaskRabbit' following a 'security incident'. Ikea advised users of TaskRabbit to change their credentials if they had used them on other sites, suggesting a significant database compromise.

TSB bosses came under fire after a botched upgrade to their online banking system forced the Spanish-owned bank to shut down its online banking facility, locking out over 5 million TSB customers. Cybercriminals were quick to take advantage of TSB's woes.

Great Western Railway reset the passwords of more than a million customer accounts following a breach by hackers, US bank SunTrust reported that an ex-employee stole 1.5 million client records, an NHS website was defaced by hackers, and US retailers Saks and Lord & Taylor had 5 million payment cards stolen after a staff member was successfully phished.

The UK National Cyber Security Centre (NCSC) blacklisted China's state-owned firm ZTE, warning UK telecom providers that use of ZTE's equipment could pose a national security risk. Interestingly, BT formed a research and development partnership with ZTE in 2011 and had distributed ZTE modems. The NCSC, along with the United States government, released statements accusing Russia of large-scale cyber campaigns aimed at compromising vast numbers of Western network devices.

IBM released its 2018 X-Force Report, a comprehensive report which stated, for the second year in a row, that the financial services sector was the most targeted by cybercriminals, typically with sophisticated malware such as Zeus, TrickBot and Gootkit. NTT Security released its 2018 Global Threat Intelligence Report, which unsurprisingly confirmed that ransomware attacks had increased 350% last year.

A concerning report by the EEF said UK manufacturers' IT systems are often outdated and highly vulnerable to cyber threats, with nearly half of all UK manufacturers having already been victims of cybercrime. An Electropages blog questioned whether the boom in public cloud service adoption opens the door to cybercriminals.

Finally, it was yet another frantic month of security updates, with critical patches released by Microsoft, Adobe, Apple, Intel, Juniper, Cisco, and Drupal.


NBlog May 1 – privacy & GDPR awareness module

The awareness series on privacy is complete, with the third and final installment delivered to NoticeBored subscribers today, in time for the GDPR deadline on the 25th of this month.

Parts one and two on GDPR and privacy were delivered in December 2016 and November 2017.


Privacy is a perennial topic, of course, so it's not literally 'final'. We will be back to cover it again. Maybe in a year or so we'll refresh the materials with a GDPR retrospective, a look back at the privacy changes brought about globally by the regulation and a look forward to how the field is evolving.

Alternatively, the global news media picking up on the first major prosecution under GDPR will present a golden opportunity for awareness. Although we can't predict exactly when it will happen, we could still prepare for it while the topic is front-of-mind. We might perhaps pre-assemble a mini-module as an awareness refresher on privacy, the OECD principles and the GDPR requirements. Customizing the materials to name the organization/s in the headlines, outline the specific allegations and draw out the implications might only take a few hours, thanks to having the stash of privacy awareness content to hand. We could deliver relevant awareness content shortly after the news breaks, while it is still hot.

In the same way, we have plenty of awareness content in the bag ready to tweak and roll out at short notice in the event of almost any other major information security-related incident. We've done exactly that previously in the wake of terrorist attacks, the Sony hack and fake news, for example, leveraging the saturation news coverage for security awareness purposes. When employees see security stuff in the press, it raises all sorts of questions that the awareness content can address.

Aside from such major incidents, information security-related news is a rich seam for awareness purposes all year round. Whatever the monthly topic, there's always something relevant going on ... otherwise it wouldn't be worth covering in the awareness program. This month, for instance, we're using the Facebook privacy incident as a topical example, supplementing the GDPR stuff. We picked up on ransomware incidents in the malware module, and other events as applicable. It's surprising* just how often hot news seems to fall into our laps!

NoticeBored subscribers can take the same approach with corporate situations, internal initiatives and local incidents that don't hit the news. Whatever happens in the information risk and security domain, we have probably covered it in one or more of the 60+ topics in the awareness portfolio, hence there are awareness materials to hand - enough to get started at least. Having an issue with backups? No problem. People not patching? A breeze. Social engineers breathing down your necks? A doddle. Speak to us nicely and we might even prepare something just for you.

And that thought leads us towards our next planned awareness topic on business continuity and incident management. I'll be blogging about it here during the month ahead as the module takes shape. Come back often!

Meanwhile, if your organization needs its privacy and security awareness boosting, email me today. We can get you up and running in next to no time - prior to the GDPR deadline if you're on-the-ball. Mention this blog and ask for a special blog-readers' price. I'll see what we can do for you.

* Actually, it's not surprising at all. It takes continuous effort to stay abreast of the field, researching every topic and chasing down relevant examples to incorporate into the awareness stream. Our experience leads us to see information risk and security aspects of all sorts of news, events and situations. You may think us obsessive or paranoid. We consider ourselves passionate about this stuff, driven, evangelical even. It's what we do.

Does Your Family Need a VPN? Here are 3 Reasons it May Be Time

At one time, Virtual Private Networks (VPNs) were tools exclusive to corporations and techie friends who seemed overly zealous about masking their online activity. However, with data breaches and privacy concerns at an all-time high, VPNs are becoming powerful security tools for anyone who uses digital devices.

What’s a VPN?

A VPN allows users to securely access a private network and share data remotely through public networks. Much like a firewall protects the data on your computer, a VPN protects your activity by encrypting (or scrambling) your data when you connect to the internet from a remote or public location. A VPN allows you to hide your location, IP address, and online activity.

For instance, if you need to send a last-minute tax addendum to your accountant or a legal contract to your office but must use the airport’s public Wi-Fi, a VPN would protect that data – creating a secure tunnel in which it can travel – while you are connected to the open network. Or, if your child wants to watch a YouTube or streaming video while on vacation and only has access to the hotel’s Wi-Fi, a VPN would encrypt your child’s data and allow a more secure internet connection. Without a VPN, any online activity – including gaming, social networking, and email – is fair game for hackers, since public Wi-Fi lacks encryption.
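The “secure tunnel” idea can be illustrated with a deliberately simplified toy cipher. This is purely conceptual: real VPNs use vetted protocols such as IPsec, OpenVPN or WireGuard, never a hand-rolled XOR scheme like this:

```python
import secrets

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR each byte with a repeating key (illustration only).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(32)            # secret agreed when the tunnel is set up
message = b"tax addendum contents"
on_the_wire = xor_stream(message, key)   # what an eavesdropper on public Wi-Fi sees
assert xor_stream(on_the_wire, key) == message  # the VPN endpoint recovers the data
```

The point is the shape of the exchange: only scrambled bytes cross the open network, and only the two tunnel endpoints hold the key needed to recover the original data.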

Why VPNs matter

  • Your family is constantly on the go. If you find yourself conducting a lot of business on your laptop or mobile device, a VPN could be an option for you. Likewise, if you have a high school or college-aged child who likes to take his or her laptop to the library or coffee shop to work, a VPN would protect data sent or received from that location. Enjoy shopping online whenever you feel the urge? A VPN also has the ability to mask your physical location, banking account credentials, and credit card information. If your family shares a data plan like most, connecting to public Wi-Fi has become a data/money-saving habit. However, it’s a habit that puts you at risk of nefarious people eavesdropping, stealing personal information, and even infecting your device. Putting a VPN in place, via a subscription service, could help curb this risk. In addition, a VPN can encrypt conversations via texting apps and help keep private chats and content private.
  • You enjoy connected vacations/travel. It’s a great idea to unplug on vacation but let’s be honest, it’s also fun to watch movies, check in with friends via social media or email, and send Grandma a few pictures. Service to some of your favorite online streaming sites can be interrupted when traveling abroad. A VPN allows you to connect to a proxy server that will access online sites on your behalf and allow a secure and easier connection almost anywhere you go.
  • Your family’s data is a big deal. Protecting personal information is a hot topic these days and for good reason. Most everything we do online is being tracked by Internet Service Providers (ISPs). ISPs track us by our individual Internet Protocol (IP) addresses generated by each device that connects to a network. Much like an identification number, each digital device has an IP address which allows it to communicate within the network. A VPN routes your online activity through different IP addresses, allowing you to remain anonymous. A favorite entry point hackers use to eavesdrop on your online activity is public Wi-Fi and unsecured networks. In addition to potentially stealing your private information, hackers can also use public Wi-Fi to distribute malware. Using a VPN cuts cyber crooks off from their favorite watering hole — public Wi-Fi!

As you can see, VPNs can give you an extra layer of protection as you surf, share, access, and receive content online. If you look for a VPN product to install on your devices, make sure it’s one that is trustworthy and easy to use, such as McAfee’s Safe Connect. A robust VPN product will provide bank-grade encryption to ensure your digital data is safe from prying eyes.

Toni Birdsong is a Family Safety Evangelist to McAfee. You can find her on Twitter @McAfee_Family. (Disclosures).

The post Does Your Family Need a VPN? Here are 3 Reasons it May Be Time appeared first on McAfee Blogs.

NBlog April 28 – awareness devices

Today in a sudden flash of inspiration I invented a "device", a mechanism to raise awareness. 

It's a graphical image, a metric, a simple visual device, an analytical or rhetorical tool to set people thinking about and discussing the topic - privacy in this case. It explores their perceptions of the state of readiness of the organization to meet the May 25th GDPR deadline. 

The specific thinkers and discussers I have in mind at this point are senior managers, executives or board members, with a significant interest in the organization's readiness for GDPR. They ought to know where things stand, and ought to have a reasonable grasp of the situation, but do they? The device is a way to find out.

I'm not going to describe it further at this point but NoticeBored subscribers will find it in the 'board agenda' paper in May's module.

Generalizing from there, with minor changes the same device could be used to stimulate analysis and discussion on almost any deadline or situation where there are several non-exclusive options or possibilities on the table, and inherent uncertainties. That's most business decisions, then! It's something I'm sure I'll be using elsewhere in awareness materials, training courses and more.

While it could be used by individuals working in private, it is really intended for group or team settings where people feed off each others' energy and hopefully reach a consensus. Stimulating a productive discussion around a given topic is the main awareness goal, with measuring and comparing perceptions a subsidiary aim.

There are loads of techniques for creative thinking and teamwork so I'd be amazed if my idea is totally novel (and potentially patentable!) ... which hints at the value of exploring and exploiting such methods for awareness and training purposes. It exemplifies the value of employing professionals to handle awareness and training, people like me with sufficient experience and interest to make interesting stuff happen. Another nail in the coffin of those deadly dull death-by-Powerpoint bullet-point-ridden torture sessions that pass for awareness and training in some organizations, stuck firmly in the Dark Ages.

So that's it for today. May's module is nearly over the line, down to the last few hours' slog. Must dash ...

Please don’t buy this: smart toys

Smart toys attempt to offer what a lot of us imagined as kids—a toy that we can not only play with, but one that plays back. Many models offer voice recognition, facial expressions, hundreds of words and phrases, reaction to touch and impact, and even the ability to learn and retain new information. These features provide an obvious thrill for many children, whose imaginary friend just became a lot more real.

At the low end, smart toys can be as simple as a motion-activated rattle designed with features intended to help with developmental milestones. Higher-end toys can be as engaging as a real-life R2-D2 that will watch Star Wars with you and offer commentary.

But much like other Internet of Things products, smart toys don’t have a great track record of protecting personal information, designing software according to industry best practices, and updating in a timely manner. And we’re in fairly new territory when it comes to young children and the Internet. Suddenly, we have to worry about protecting the digital footprint of our kids before they’re even online as active participants. Not only that, we don’t yet know the repercussions of a person’s data being collected and transmitted online essentially from birth.

As cool as that R2-D2 is, we suggest for the time being that you please don’t buy smart toys.

Why not?

The problems start to creep in with the data collection necessary for a toy to be properly interactive. While simple games and preprogrammed phrases can run from on-board memory or a Bluetooth connection to a computer, more complex speech recognition and “remembering” of user preferences and conversations generally require sending input data to a remote server for analysis against a training data set.

This process can be completely benign if all points in the data transmission chain are configured and secured properly. Unfortunately, there is a lot of room in the collection chain for vulnerabilities to creep in.

At the point of collection, decisions need to be made to appropriately sanitize personal information. (Doubly important if the user is a child.) The collected data needs to be transmitted in a manner that’s secure against third-party eavesdroppers. And at the other end of the collection chain, all data needs to be stored on a secure server using patched, up-to-date software, and hashed with a modern, secure algorithm. Smart toy makers have not done well on any of these benchmarks in the past.
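As a hedged sketch of what "hashed with a modern, secure algorithm" means at the storage end of that chain: scrypt (available in Python's standard library) is deliberately slow and memory-hard, which blunts brute-force attacks on a leaked database. The helper names are invented for illustration, not taken from any toy maker's code:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = b"") -> tuple:
    # A unique random salt per user defeats precomputed "rainbow table" attacks.
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                            n=2**14, r=8, p=1)  # commonly cited cost parameters
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                               n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```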

Setting privacy issues aside for a moment, software update lag is a common issue with any IoT device. A smart toy may be smart today, but new functionality and bug fixes might be rare or nonexistent as manufacturers move on to new product releases. Security patches, in particular, vary wildly in frequency across IoT manufacturers. Of the manufacturers we reviewed, only Fisher Price disclosed anything specific about their update and data collection practices, and none provided any information about security features.

Lastly, security design of these products—in particular, their associated mobile apps—is generally not very good. Hong Kong maker VTech Electronics made the news in 2015 for what they described as a “sophisticated” SQL injection attack that resulted in exposure of personal information for millions of children. Breaches happen quite a bit, and the temptation is to dismiss them as something unavoidable. But an outstanding article by Troy Hunt took a look at their security practices and found:

  • No usage of SSL anywhere on their websites
  • Password hashing with a deprecated, easily-cracked algorithm
  • Storage of security questions in plain text
  • Extensive use of Flash

For those not in the know, these are basic, 101-level security design flaws that in total suggest irresponsibility by the company rather than a one-off event by a hyper-competent hacker. (Please read Troy’s followup article, which goes into greater detail on the impact of VTech’s poor design.)

Until companies can be held to a unified standard of foundational security practices, allowing them access to an underage user in any way is ill-advised.

Maybe buy this instead

Beyond the security issues built into the product out of the box, adult users aren’t always helping the cause, ignoring updates or clicking through agreements without reading privacy notices in detail. Often simple computer hygiene, like changing the default password, could save a family from creepy hacks of their baby monitors and teddy bears.

Sitting down with your toddler and having a conversation about privacy and secure PII best practices probably won’t go well, either. Should your child not be amenable to an IoT ban, Fisher Price makes a series of smart toys that state clearly that no personal information is transmitted via WiFi. Clear, unequivocal statements like that are rare in the IoT space.

However, in 2016, a Fisher Price smart bear was found to be leaking customer and children’s data via an unsecured API. Industry security standards for most IoT products are so low that even the best in a particular class can still be a risk.

For the sake of the children

Smart toys take all of the risk of IoT products and apply them to children. Prior negligence by some companies, as well as the larger impact of security flaws when the user is a child, prompted the FBI to release an advisory on potential issues with smart toys. Until manufacturers operate under a shared security standard with meaningful enforcement, we advise that please, for the sake of the kids—don’t buy this.

The post Please don’t buy this: smart toys appeared first on Malwarebytes Labs.

NBlog April 24 – privacy policies under GDPR [UPDATED x3]

As the world plummets towards the May 25th GDPR deadline, organizations are revising their web-based privacy policies to align with both the new regulatory regime and their internal privacy practices.


From May 10th, PayPal, for instance, has a new privacy policy of roughly 4,000 words (about 11 A4 pages) - well, several in fact, depending on the user's location. Among other things, I notice that they "do not respond to DNT signals" (meaning, I think, that they simply ignore the Do Not Track flag sent by cautious browsers) and they:

"... maintain technical, physical, and administrative security measures designed to provide reasonable protection for your Personal Data against loss, misuse, unauthorized access, disclosure, and alteration. The security measures include firewalls, data encryption, physical access controls to our data centers, and information access authorization controls ..."
Providing 'reasonable' protection is perhaps all we can expect of anyone. It would be unreasonable to insist on absolute security, although it would be nice to have greater assurance than a simple assertion - for example, confirmation that their privacy and data security measures have been competently and independently checked (audited) for compliance with applicable legal and regulatory obligations (GDPR for instance), as well as good practices such as the ISO27k or NIST SP800 standards.
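Honoring the Do Not Track flag is, incidentally, technically trivial: the preference arrives as a "DNT: 1" request header. A hypothetical helper (not PayPal's code) that a web application could consult before loading trackers might look like this:

```python
def wants_no_tracking(headers: dict) -> bool:
    # HTTP header names are case-insensitive, so normalise before checking.
    # "DNT: 1" means the user has asked not to be tracked.
    normalised = {name.lower(): value for name, value in headers.items()}
    return normalised.get("dnt") == "1"
```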



Google's privacy policy was revised in December. It has a similar length and structure to the PayPal one, with personal choice and transparency being prominent up-front.


Google does mention compliance:
"... We regularly review our compliance with our Privacy Policy. We also adhere to several self-regulatory frameworks, including the EU-US and Swiss-US Privacy Shield Frameworks ..." 
There's nothing in there about GDPR compliance as yet, and personally I'm dubious about the assurance value of the Privacy Shield which, as I understand it, is another self-assertion rather than an independent audit and certification mechanism.

Although the information security section highlights a few specific controls, most remain unspecified.

Re people deleting their personal information, I like the way they put this:  
"... We aim to maintain our services in a manner that protects information from accidental or malicious destruction. Because of this, after you delete information from our services, we may not immediately delete residual copies from our active servers and may not remove information from our backup systems ..." 
They are right in saying that backups and other measures are needed for security and resilience reasons, which can make it tricky to ensure that all primary and backup copies of personal data are revised or deleted in line with privacy requirements. It might be nice to know that those backups will eventually expire and be deleted too, preferably within a 'reasonable' period (maybe a year?) but formally ensuring that happens across such a massive, complex and dynamic network would be tough too. So they don't even make the promise. Seems fair enough to me, provided their approach fulfills their privacy obligations, and I'm not in a position to challenge that.

In contrast to PayPal and Google, Santander UK's 'privacy statement' follows the typical European structure and style. It is much shorter (just the 2 pages, not 11) with only brief, plain English statements in most cases, such as this carefully-crafted line near the top:
"We're committed to keeping your personal information safe and confidential both online and offline."
Although that may or may not be a strict promise in the legal sense of a warranty or contractual obligation, it's reassuring to know, especially right up-front. If you can't be bothered to read the rest of the statement, it's a comforting message to take away.

The rest of the message includes the obligatory yawn-inducing tripe about cookies that most EU sites are compelled to trot out as a result of some EU bureaucrat or committee's edict, I guess. What were they thinking it would achieve? Had they no idea how the Web works? Oh well. Aside from that drivel, most of the other sections are an admirable 1-3 sentences each - readable and sufficiently informative for an overview. As an infosec pro, I would have preferred links to further details on many areas but I accept I am "special".

[Update 25th April] Twitter's new privacy policy that comes into effect a month from today is another lengthy tome of about 11-12 pages, although they have at least made an effort to provide a readable summary version as well.

[Update 26th April] The Facebook/Cambridge Analytica privacy breach, plus the widespread adoption of GDPR, may mark a turning point in US attitudes towards privacy and personal data. As I understand it, if the Social Media Privacy Protection and Consumer Rights Act, for instance, became law as proposed, it would give Americans the right to opt out of having to provide their personal data [to social media sites] and to have the [social media] sites delete any or all of their personal data. It would force the [social media] sites to clarify their terms of service, and introduce a 72-hour privacy breach notification rule [for social media sites?] - requirements curiously similar to the EU and OECD approach to privacy, including GDPR. The apparent myopic focus purely on social media sites strikes me as odd, though, given that the same issues affect anyone using personal data, including big business, the marketing industry and the US Government. Aha, the light just went on.

Meanwhile, Facebook is preparing to update its privacy policy on some as yet unspecified date. The new version is ~4,300 words and ~12 A4 pages, with no mention of GDPR. The pattern is becoming clear.

[Update 27th April] GoDaddy's new privacy policy is shorter, simpler and clearer than most US organizations. There's also a Privacy Center, essentially an FAQ or help page with minimal content at present, but hopefully that will be fleshed out in time. Good on 'em!  It doesn't mention GDPR as such but the phrasing (such as 'only using personal data for the purposes for which it was provided' and having a Data Protection Officer) suggests GDPR compliance is an objective.


NBlog April 23 – David v Goliath


Thanks to a mention in the latest RISKS-list email, I've been reading a blog piece by Bruce Schneier about the Facebook incident and changing US cultural attitudes towards privacy.
"As creepy as Facebook is turning out to be, the entire industry is far creepier. It has existed in secret far too long, and it's up to lawmakers to force these companies into the public spotlight, where we can all decide if this is how we want society to operate and -- if not -- what to do about it ... [The smartphone] is probably the most intimate surveillance device ever invented. It tracks our location continuously, so it knows where we live, where we work, and where we spend our time. It's the first and last thing we check in a day, so it knows when we wake up and when we go to sleep. We all have one, so it knows who we sleep with."
With thousands of data brokers in the US actively obtaining and trading personal information between a far larger number of sources and exploiters, broad-spectrum and mass surveillance is clearly a massive issue in America. The size and value of the commercial market makes it especially difficult to reconcile the rights and expectations of individuals against those of big business, plus the government and security services. This is David and Goliath stuff.

GDPR is the EU's attempt to re-balance the equation by imposing massive fines on noncompliant organizations: over the next few years, we'll see how well that works in practice. 

Meanwhile, US-based privacy advocates such as EPIC and EFF have been bravely fighting the individuals' corner. I wonder if they would consider joining forces? 

NBlog April 20 – whistleblower policy

For more than two decades now, I have been fascinated by whistleblowers - people who blow the whistle on various forms of impropriety. 

In my experience, they are high-integrity, ethically-motivated and aggrieved individuals willing to take a stand rather than put up with Things That Should Not Be Going On. They are powerful change agents. To my mind, they are brave heroes taking significant risks to their careers, personal lives, liberty and safety (nods hat to Ed Snowden among others).

I've blogged about it several times, most recently at the start of this month when I said:

Organizations clearly need strategies, policies and procedures for receiving and dealing with incident notifications and warnings of all sorts. 
And that set me thinking: do we actually offer anything along those lines - any awareness and training materials supporting such activities?

We don't currently have a whistleblower policy as such in our suite of information security policy templates, although the term is mentioned in a few of them, generally in reference to a "Whistleblowers' Hotline".  We envisage a corporate service being run by a trustworthy, competent and independent person or group such as Internal Audit, or a suitable external service provider.

Whistleblowing has certainly come up in the context of oversight, compliance, governance, fraud etc., so we ought to check through the back catalog to see what we have to hand in the way of guidance/awareness content. I'm thinking the incident management procedures might be adapted to suit, but what else is there? I'll be exploring this further, figuring out the common approaches and concerns and perhaps drafting a whistleblower policy.

This is partially relevant to May's materials on GDPR in that compliant organizations are expected to receive and address privacy-related requests and complaints in a professional manner, a process that arguably ought to be in effect today but patently (in my unhappy experience with a certain French hotel chain, for example) it ain't necessarily so. The controversial right to be forgotten, for instance, requires organizations to expunge personal information on request from a data subject, a situation that strongly suggests a serious breakdown of trust between the parties, perhaps as a result of an undisclosed incident. There may be no formal obligation for individuals to explain why they want their personal information erased, but asking the question at least would seem like a sensible thing for the organization to do. It might suggest the need for further investigation, even if the person's reasons are withheld or obscure.

Obvious when you think about it. I wonder how many are?

NBlog April 19 – looking beyond the horizon [UPDATED]

We are fast approaching an event horizon - May 25th 2018 - beyond which the privacy landscape will be changed forever.

As of today, most of the world respects the rights of individuals to control information about themselves that they consider personal, with the glaring exception of the US, which treats personal information as merely another information asset, to be obtained, exploited and traded like any other. The changes brought about by GDPR will directly and indirectly affect the whole world, including the US, in ways that are not entirely clear at this precise point.

The European Union anticipates the whole world falling neatly into line, playing the privacy game the EU way or facing punitive fines until they do. 

Some players in the US are making noises about continuing their exploitation of personal information with impunity, perhaps grudgingly paying their GDPR fines but only after a massive playground punch-up over whether the EU's rules even apply to the US, and without necessarily falling into line. [Cue cartoon of someone's eyes rolling like a fruit machine, stopping on $$$ $$$ to the sound of a ker-ching cash register or tinkle-tinkle Vegas coin payout.]

Some are talking about fracturing the Internet along the GDPR/non-GDPR boundary, maintaining different privacy rules and approaches on each side and somehow handling the not inconsiderable issue of personal information crossing the boundary. I think this is either fake news, panic, bravado or tongue-in-cheekiness, not dissimilar to those cranky but desperate suggestions to call the year 2000 "199A" followed by "199B" giving a stay of execution for the non-Y2K compliant organizations, perhaps, but a world of pain for the rest of us. 

This strikes me as an interesting perspective to get management thinking differently about GDPR, in strategic business terms. 

Another approach we'll be taking is to treat personal information as a valuable and sensitive information asset not totally dissimilar to secret recipes for herbs and spices, business plans, customer and prospect lists, and more - another opportunity to get management thinking differently about privacy. Securing personal info is not just A Jolly Good Idea for compliance reasons.

Those two concepts, plus the remainder of the NoticeBored materials for May, are all aimed at raising awareness of the privacy and related issues. As always, we'll be supplying a blend of factual information, motivational suggestions, tools and techniques, metrics, strategic options, policy matters, guidance and more: if you think your GDPR project would benefit from any of this, email me soon about subscribing to NoticeBored - if you care about crossing the event horizon at full pelt on both feet anyway, rather than crawling exhaustedly across the line, collapsing dejectedly in a heap on the home straight, or sticking your head in the sand and pretending it won't affect you. We have awareness content on privacy and other information security topics ready to deliver today, and we're working hard on the privacy and GDPR awareness module for delivery to subscribers on May 1st, for sure. Will your GDPR/privacy awareness stuff be done in time? With just 35 days remaining, have you even started preparing it yet?! Good luck Jim.


[Added 20th April] Talking of heads-in-sand, what do you make of this?


5 Common Sense Security and Privacy IoT Regulations

F-Secure invites our fellows to share their expertise and insights. For more posts by Fennel, click here

For most of human history, the balance of power in commercial transactions has been heavily weighted in favour of the seller. As the Romans would say, caveat emptor – buyer beware!

However, there is just as long a history of people using their collective power to protect consumers from unscrupulous sellers, whose profits are too often based on externalising their costs, which are then borne by society. Probably the earliest known consumer safety law is found in Hammurabi’s Code, nearly 4000 years ago - it is quite a harsh example:

If a builder builds a house for someone, and does not construct it properly, and the house which he built falls in and kills its owner, then that builder shall be put to death.

However, consumer safety laws as we know them today are a relatively new invention. The Consumer Product Safety Act became law in the USA in 1972. The Consumer Protection Act became law in the UK in 1987.

Today’s laws provide for stiff penalties - for example, the UK’s CPA makes product safety issues criminal offenses punishable by up to 6 months in prison and unlimited fines. These laws also mandate enforcement agencies to set standards, buy and test products, and to sue sellers and manufacturers.

So if you sell a household device that causes physical harm to someone, you run some serious risks to your business and to your personal freedom. The same is not true if you sell a household device that causes very real financial, psychological, and physical harm to someone by putting their digital security at risk. The same is not true if you sell a household device that causes very real psychological harm, civil rights harm, and sometimes physical harm to someone by putting their privacy rights at risk. In those cases, your worst case risk is currently a slap on the wrist.

This situation may well change at the end of May 2018, when the EU General Data Protection Regulation (GDPR) comes into force across the EU, and for all companies with any presence or doing business in the EU. The GDPR provides two very welcome threats that can be wielded against would-be negligent vendors: the possibility of real fines - up to 4% of worldwide turnover; and a presumption of guilt if there is a breach - it will be up to the vendor to show that they were not negligent.

However, the GDPR does not specifically regulate digital consumer goods – in other words Internet of Things (IoT) “smart” devices. Your average IoT device is a disaster in terms of both security and privacy – as our Mikko Hypponen‘s eponymous Law states: “smart device” = “vulnerable device”, or if you prefer the Fennel Corollary: “smart device” = “vulnerable surveillance device”.

The current IoT market is like the household goods market before consumer safety laws were introduced. This is why I am very happy to see initiatives like the UK government’s proposed Secure by Design: Improving the cyber security of consumer Internet of Things Report. While the report has many issues, there is clearly a need for the addition of serious consumer protection laws in the security and privacy area.

So if the UK proposal does not go far enough, what would I propose as common sense IoT security and privacy regulation? Here are 5 things I think are mandatory for any serious regulation in this area:

  1. Consumer safety laws largely work due to the severe penalties in place for any company (and their directors) who provide consumers with goods that place their safety in danger, as well as the funding and willingness of a governmental consumer protection agency to sue companies on consumers’ behalf. The same rigorous, severe, and funded structure is required for IoT goods that place consumers’ digital and physical security in danger.
  2. The danger to consumers from IoT goods is not only in terms of security, but also in terms of privacy. I believe similar requirements must be put in place for Privacy by Design, including severe penalties for any collecting, storing, and selling (whether directly, or indirectly via profiling for targeting of advertising) of consumers’ personal data if it is not directly required for the correct functioning of the device and service as seen by the consumer.
  3. Similarly, the requirements should include a strict prohibition on any backdoor, including government or law enforcement related, to access user data, usage information, or any form of control over the devices. Additionally, the requirements should include a strict prohibition on vendors providing any such information or control via “gentleman’s agreements” with a governmental or law enforcement agency/representative.
  4. In terms of the requirements for security and privacy, I believe that any requirements specifically written into law will always be outdated and incomplete. Therefore I would mandate independent standards agencies in a similar way to other internet governing standards bodies. A good example is the management of TLS certificate security rules by the CA/Browser Forum.
  5. Requirements must also deal with cases of IoT vendors going out of business or discontinuing devices and/or software updates. There must be a minimum software update duration, and in the case of discontinuation of support, vendors should be required to provide the latest firmware and update tool as Open Source to allow support to be continued by the user or a third party.

Just as there will always be ways for a determined person to hack around any physical or software security controls, people will find ways around any regulations. However, it is still better to attempt to protect vulnerable consumers than to pretend the problem doesn’t exist; or even worse, to blame the users who have no real choice and no possibility to have any kind of informed consent for the very real security and privacy risks they face.

Let’s start somewhere!

NBlog April 18 – GDPR full immersion

Today I've dived deep into GDPR, poring over, becoming immersed in and trying to make sense of the legislation.

The regulation itself is freely available online - handy really since it is intended to apply and to be implemented and complied-with very widely.

It is an official EU regulation - in effect a directly applicable law - and as such it has clearly been drafted by and for the lawyers. Readability is clearly not as high on their priority list as making it watertight.

So, the door swings open to interpret and explain it for the common man and, for that matter, the common manager.

NBlog April 17 – GDPR countdown



A countdown is a common way to align everyone towards some event - the launch of a space mission or start of a new year for instance, or the completion of your GDPR compliance project. As a communications, awareness and motivational technique, countdowns work well for that rather narrow objective, focusing attention on a given point in time.

With a little more creativity and effort, it's not hard to use countdowns to get people to re-assess their progress and maybe prioritize things on the way down to the deadline ... and then to follow-through with count-ups - in other words, keep the timer going past the zero point, displaying the time since the deadline passed or expired. 

This is often done for overdue activities, starting with gentle reminders then steadily ramping up the pressure (red reminders, warnings) and perhaps escalating matters (court orders, bailiffs) as time marches inexorably on. 

Before you know it, the point-in-time spot focus has turned into a zone of concern, with an accompanying sequence of activities, a plan and a process. 

The passage of time can also be used in a more positive manner, in the sense of "Look how far we've come!". It is generally implied in the concept of maturity. It takes time to reach then stabilize and become comfortable at each level before starting the assault on the next, like climbing the stairs or a mountain. [Maturity also implies gaining competence and wisdom, which are the more obvious objectives.]

A related concept is that of momentum or inertia - winding things up to reach a critical speed, then sustaining it as long as possible. This is not just Newton's first law of motion as it literally applies to boulders, wheels and space rockets in the physical world. It's also figurative, applying to organizations and processes, even to individuals. Our energy/activity levels and motivations vary and, to an extent, can be influenced by others. Some things fire us up and get us going. Others wear us out and exhaust us. Understanding the difference goes a long way towards making awareness activities more effective.

I'll end with a simple suggestion to use the countdown to the GDPR go-live deadline quite deliberately as a means to align and drive everyone to May 25th, and perhaps to lead them ever onward and upwards thereafter, having hopefully achieved the specific goal. Privacy is no less important on May 26th!


To the GDPR deadline ... and beyond!

NBlog April 11 – a rich seam

Surprisingly often, a breaking news story falls into our laps at precisely the right moment.

Today, I've been developing a general staff awareness presentation on privacy. Three core messages appeal to me, this time around:
  1. Privacy is an ethical consideration - something we anticipate or expect of each other as members of a civilized society.
  2. Privacy is also a compliance obligation - something enshrined in the laws of the land and imposed on our organizations.
  3. Those two issues together make privacy a business issue.

So, what's been all over the news lately in relation to privacy? Why, the latest Facebook incident, of course. 

I'm not going to re-hash the story now, nor draw out the privacy lessons for you. I've given you more than enough of a clue already, and if you read the press coverage with a slightly cynical and jaundiced eye, you'll find your own take on the incident - as indeed will our subscribers' employees ... which makes it an excellent, highly relevant case study to incorporate into the awareness content.

Thanks to the saturation media coverage, we barely need mention 'Facebook' for people to think of the incident. Almost all will have seen the news reports. Those who use Facebook (a substantial proportion of people, we are led to believe) probably have perfectly reasonable concerns about their own privacy. Those who don't use it are also implicated, although we might need to explain that a little. Either way, it's something they can relate to, a story that resonates and has impact. We can pose a few questions that they can contemplate, in their own way, in their own time.

We will exploit their interest to engage them with the awareness program so, in a way, we are also exploiting the victims' personal information, but (we assert) it's for their own good, for the benefit of their employer and for the sake of human society. We mean well. We are not even vaguely approaching the boundaries of decency or legislation. Public incidents of this nature are perfectly legitimate and in fact rich resources for awareness, training and educational purposes. It would be a waste to let them drift back below our consciousness without milking them for all they're worth.

The real trick is to be constantly scanning the horizon for relevant news items. Information security is such a broad topic that finding stuff is hardly ever the issue - the very opposite in fact. The Facebook incident, for instance, is directly and obviously relevant to privacy, but also to incident management, compliance, governance, information risk, information security, cybersecurity, social engineering, fraud, accountability, business continuity and more.

Ethically speaking, I have no qualms about using reported incidents in this way, particularly where the protagonists are implicated in the incidents rather than merely being the poor unfortunate victims of some malicious third party. I'm currently trying to track down the original source of a quoted Goldman Sachs assessment of the eye-wateringly huge amount of revenue Facebook may forgo once GDPR comes into effect, with the strong implication that they have been making their fortune by exploiting the personal information of their users. OK so it may have been entirely legal, but was it appropriate? Was it ethical? Was it socially acceptable? These rhetorical questions hint at how we might explore the same incident from the business perspective in the management awareness materials, making a link that will hopefully get staff and managers thinking and talking animatedly about privacy.

And that's another security awareness win, right there.

Cloudflare Quad 1 DNS is privacy-centric and blazing fast


This year I have witnessed plenty of DNS stories, ranging from government censorship programs to privacy-centric secure DNS (DNS over TLS) designed to protect customers' queries from businesses that profile or profit from them. Some DNS services attempt to block malicious sites (IBM Quad9 DNS and SafeDNS) while others try to give unrestricted access to the world (Google DNS and Cisco OpenDNS) at low or no cost.

Yesterday, I read that Cloudflare has joined the race with its DNS service (Quad1, or 1.1.1.1), and before I dig further (#punintended) let me tell you - it's blazing fast! Initially I thought it was a classic April Fool's prank, but then Quad1 - 1.1.1.1 on 4/1 - made sense. This is not a prank, and it works just as proposed. This blog post summarizes some speed tests and highlights why it's best to use the Cloudflare Quad1 DNS.

Quad1 DNS 1.1.1.1 Speed Test

To test the query times (in milliseconds), I resolved three sites - cybersins.com, my girlfriend's website palunite.de, and my friend's blog namastehappiness.com - against four existing DNS services - Google DNS (8.8.8.8), OpenDNS (208.67.222.222), SafeDNS (195.46.39.39), and IBM Quad9 DNS (9.9.9.9) - plus Cloudflare Quad1 (1.1.1.1).

Website                Google DNS   OpenDNS   IBM Quad9   SafeDNS   Cloudflare
cybersins.com              158         187        43         238          6
palunite.de                365         476       233         338          3
namastehappiness.com       207         231       178         336          3


These numbers looked so unrealistic that I had to run the tests again to verify - and they are indeed accurate.
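For anyone who wants to reproduce numbers like these, the harness can be as small as a stopwatch around a resolver call. Here is a minimal Python sketch (the lambda is a stub resolver so the example runs offline; in practice you would pass `socket.gethostbyname` for the system resolver, or a dnspython query pinned to each server's IP - that wrapper is assumed, not shown):

```python
import time

def time_lookup(resolve, hostname):
    """Time a single name resolution and return milliseconds.

    `resolve` is any callable taking a hostname: e.g. socket.gethostbyname
    for the system resolver, or a function that queries a specific DNS
    server such as 1.1.1.1 (hypothetical wrapper, not shown here).
    """
    start = time.perf_counter()
    resolve(hostname)
    return (time.perf_counter() - start) * 1000.0

# Stub resolver so the sketch needs no network access.
elapsed = time_lookup(lambda name: "198.51.100.1", "cybersins.com")
```

Averaging several runs per server smooths out caching effects, which is why repeating the test (as above) is worthwhile.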

Privacy and Security with Quad1 DNS 1.1.1.1

This is the key element that has gone unaddressed for quite a while. The existing DNS services are not only slower; they also store logs and can profile a user based on the domains they query. They run plain DNS on UDP port 53, which is vulnerable to MITM (man-in-the-middle) attacks, and your ISP has visibility into this clear-text traffic to censor or monetize you, if required. In a blog post last weekend, Matthew Prince, co-founder and CEO of Cloudflare, mentioned,

The web should have been encrypted from the beginning. It's a bug it wasn't. We're doing what we can to fix it ... DNS itself is a 35-year-old protocol and it's showing its age. It was never designed with privacy or security in mind.

The Cloudflare Quad1 DNS overcomes this by supporting both DNS over TLS and DNS over HTTPS, which means you can set up your internal DNS server and then route the queries to Cloudflare over TLS or HTTPS. Addressing the story behind the choice of Quad1, or 1.1.1.1, Matthew Prince quoted,

But DNS resolvers inherently can't use a catchy domain because they are what have to be queried in order to figure out the IP address of a domain. It's a chicken and egg problem. And, if we wanted the service to be of help in times of crisis like the attempted Turkish coup, we needed something easy enough to remember and spraypaint on walls.

Kudos to Cloudflare for launching this service and committing to the privacy and security of end users by keeping only short-lived logs. Cloudflare has confirmed that it sees no need to write customers' IP addresses to disk, and will not retain logs for more than 24 hours.
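As a small illustration of the DNS-over-HTTPS mode mentioned above, Cloudflare also exposes a JSON API at cloudflare-dns.com/dns-query. A minimal Python sketch that only builds the request (no network call is made here, so the answer parsing is left as a comment):

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_doh_request(name, rtype="A"):
    """Build a DNS-over-HTTPS query against Cloudflare's JSON API."""
    url = "https://cloudflare-dns.com/dns-query?" + urlencode(
        {"name": name, "type": rtype}
    )
    # The Accept header selects the JSON answer format.
    return Request(url, headers={"Accept": "application/dns-json"})

req = build_doh_request("cybersins.com")
# urlopen(req) would return a JSON body whose "Answer" list holds the
# resolved records; it is deliberately not executed in this sketch.
```

Because the query rides inside ordinary HTTPS, an on-path observer (including your ISP) sees only a TLS connection to Cloudflare, not the domain being resolved.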

Cheers and be safe.

Don’t Get Duped: How to Spot 2018’s Top Tax Scams

It’s the most vulnerable time of the year. Tax time is when cyber criminals pull out their best scams and manage to swindle consumers — smart consumers — out of millions of dollars.

According to the Internal Revenue Service (IRS), crooks are getting creative and putting new twists on old scams, using email, phishing and malware, threatening phone calls, and various forms of identity theft to gain access to your hard-earned tax refund.

While some of these scams are harder to spot than others, almost all of them can be avoided by understanding the covert routes crooks take to access your family’s data and financial accounts.

According to the IRS, the con games around tax time regularly change. Here are just a few of the recent scams to be aware of:

Erroneous refunds

According to the IRS, schemes are getting more sophisticated. By stealing client data from legitimate tax professionals or buying social security numbers on the black market, a criminal can file a fraudulent tax return. Once the IRS deposits the tax refund into the taxpayer’s account, crooks then use various tactics (phone or email requests) to reclaim the refund from the taxpayer. Multiple versions of this sophisticated scam continue to evolve. If you see suspicious funds in your account or receive a refund check you know is not yours, alert your tax preparer, your bank, and the IRS. To return erroneous refunds, take these steps outlined by the IRS.

Phone scams

If someone calls you claiming to be from the IRS demanding a past due payment in the form of a wire transfer or money order, hang up. Imposters have been known to get aggressive and will even threaten to deport, arrest, or revoke your license if you do not pay the alleged outstanding tax bill.

In a similar scam, thieves call potential victims posing as IRS representatives and say that two certified letters were previously sent and returned as undeliverable. The callers then threaten arrest if the victim does not immediately pay through a prepaid debit card, and claim that the purchase of the card is linked to the Electronic Federal Tax Payment System (EFTPS).

Note: The IRS will never initiate an official tax dispute via phone. If you receive such a call, hang up and report the call to the IRS at 1-800-829-1040.

Robo calls

Baiting you with fear, scammers may also leave urgent “callback” requests through prerecorded robocalls or through a phishing email. Bogus IRS robocalls often politely ask taxpayers to verify their identity over the phone, and will even spoof caller ID numbers to make it look as if the IRS or another official agency is calling.

Phishing schemes

Be on the lookout for emails with links to websites that ask for your personal information. According to the IRS, thieves now send very authentic-looking messages from credible-looking addresses. These emails coax victims into sharing sensitive information, or contain links that install malware designed to collect data.

To protect yourself, stay alert and be wary of any emails from financial groups or government agencies. Don't share any information online, via email, phone, or text, and don't click on random links sent to you via email. Once that information is shared anywhere, a crook can steal your identity and use it in different scams.

Human resource/data breaches

In one particular scam, crooks target human resources departments. A thief sends an email, purporting to come from a company executive, to an employee in the payroll or human resources department, requesting a list of all employees and their Forms W-2. This scam is sometimes referred to as business email compromise (BEC) or business email spoofing (BES).

Using the collected data, criminals then attempt to file fraudulent tax returns to claim refunds. Or they may sell the data on the Internet's black-market sites to others, who file fraudulent tax returns or use the names and Social Security numbers to commit other identity-theft crimes. While you can't personally avoid this scam, be sure to ask about your firm's security practices, and try to file your tax return early every year to beat any potentially false filing. Businesses and payroll service providers should file a complaint with the FBI's Internet Crime Complaint Center (IC3).

As a reminder, the IRS will never:

  • Call to demand immediate payment over the phone, nor will the agency call about taxes owed without first having mailed you several bills.
  • Call or email you to verify your identity by asking for personal and financial information.
  • Demand that you pay taxes without giving you the opportunity to question or appeal the amount they say you owe.
  • Require you to use a specific payment method for your taxes, such as a prepaid debit card.
  • Ask for credit or debit card numbers over the phone or e-mail.
  • Threaten to immediately bring in local police or other law-enforcement groups to have you arrested for not paying.

If you are the victim of identity theft, be sure to take the proper reporting steps. Forward any unsolicited emails claiming to be from the IRS to phishing@irs.gov (and then delete the emails).

This post is part II of our series on keeping your family safe during tax time. To read more about helping your teen file his or her first tax return, here’s Part I.


Toni Birdsong is a Family Safety Evangelist to McAfee. You can find her on Twitter @McAfee_Family. (Disclosures). 

The post Don’t Get Duped: How to Spot 2018’s Top Tax Scams appeared first on McAfee Blogs.

What Were the CryptoWars?

F-Secure invites our fellows to share their expertise and insights. For more posts by Fennel, click here

In a previous article, I mentioned the cryptowars against the US government in the 1990s. Some people let me know that it needed more explanation. Ask and thou shalt receive! Here is a brief history of the 1990s cryptowars and cryptography in general.

Crypto in this case refers to cryptography (not cryptocurrencies like Bitcoin). Cryptography is a collection of clever ways for you to protect information from prying eyes. It works by transforming the information into unreadable gobbledegook (a process called encryption). If the cryptography is successful, only you and the people you want can transform the gobbledegook back to plain English (a process called decryption).

People have been using cryptography for at least 2500 years. While we normally think of generals and diplomats using cryptography to keep battle and state plans secret, it was in fact used by ordinary people from the start. Mesopotamian merchants used crypto to protect their top secret sauces, lovers in ancient India used crypto to protect their messages, and mystics in ancient Egypt used crypto to keep more personal secrets.

However, until the 1970s, cryptography was not very sophisticated. Even the technically and logistically impressive Enigma machines, used by the Nazis in their repugnant quest for Slavic slaves and Jewish genocide, were just an extreme version of one of the simplest possible encryptions: a substitution cipher. In most cases simple cryptography worked fine, because most messages were time sensitive. Even if you managed to intercept a message, it took time to work out exactly how the message was encrypted and to do the work needed to break that cryptography. By the time you finished, it was too late to use the information.
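To make the idea concrete, here is a toy substitution cipher in Python - a Caesar-style shift of three, nothing like Enigma's rotating rotors, but from the same family of encryption the paragraph above describes:

```python
import string

def make_cipher(key):
    """Build encrypt/decrypt translation tables for a substitution cipher.

    `key` is a 26-letter permutation of the lowercase alphabet: each
    plaintext letter is replaced by the letter at the same position in key.
    """
    enc = str.maketrans(string.ascii_lowercase, key)
    dec = str.maketrans(key, string.ascii_lowercase)
    return enc, dec

# A Caesar-style key: the alphabet rotated three places.
key = string.ascii_lowercase[3:] + string.ascii_lowercase[:3]
enc, dec = make_cipher(key)

ciphertext = "attack at dawn".translate(enc)   # -> "dwwdfn dw gdzq"
plaintext = ciphertext.translate(dec)          # -> "attack at dawn"
```

A cipher like this falls to simple frequency analysis, which is exactly why interception at scale (radio, then computers) made such schemes obsolete.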

World War II changed the face of cryptography for multiple reasons – the first was the widespread use of radio, which meant mass interception of messages became almost guaranteed instead of a matter of chance and good police work. The second reason was computers. Initially computers meant women sitting in rows doing mind-numbing mathematical calculations. Then later came the start of computers as we know them today, which together made decryption orders of magnitude faster. The third reason was concentrated power and money being applied to surveillance across the major powers (Britain, France, Germany, Russia) leading to the professionalization and huge expansion of all the relatively new spy agencies that we know and fear today.

The result of this huge influx of money and people to the state surveillance systems in the world’s richest countries (i.e. especially the dying British Empire, and then later America’s growing unofficial empire) was a new world where those governments expected to be able to intercept and read everything. For the first time in history, the biggest governments had the technology and the resources to listen to more or less any conversation and break almost any code.

In the 1970s, a new technology came on the scene to challenge this historical anomaly: public key cryptography, invented in secret by British spies at GCHQ and later in public by a growing body of work from American university researchers Merkle, Diffie, Hellman, Rivest, Shamir, and Adleman. All cryptography before this invention relied on algorithm secrecy in some aspect - in other words, the cryptography worked by having a magical secret method known only to you and your friends. If the baddies managed to capture, guess, or work out your method, decrypting your messages became much easier.

This is what is known as “security by obscurity” and it was a serious problem from the 1940s on. To solve this, surveillance agencies worldwide printed thousands and thousands of sheets of paper with random numbers (one-time pads) to be shipped via diplomatic courier to embassies and spies around the world. Public key cryptography changed this: the invention meant that you could share a public key with the whole world, and share the exact details of how the encryption works, but still protect your secrets. Suddenly, you only had to guard your secret key, without ever needing to share it. Suddenly it didn’t matter if someone stole your Enigma machine to see exactly how it works and to copy it. None of that would help your adversary.
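The public/secret key split can be illustrated with textbook RSA using deliberately tiny primes. This is a sketch of the underlying mathematics only; real systems use 2048-bit primes and padding schemes:

```python
# Toy RSA: anyone may encrypt with the public key (e, n); only the
# holder of the secret exponent d can decrypt. Tiny primes for clarity.
p, q = 61, 53
n = p * q                      # modulus, published as part of the public key
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, shared with the whole world
d = pow(e, -1, phi)            # secret exponent, guarded by the key owner

message = 42
ciphertext = pow(message, e, n)    # encryption: needs only the public key
recovered = pow(ciphertext, d, n)  # decryption: needs the secret key
```

Note that publishing e and n (and the full algorithm) gives an adversary no practical way to recover d, which is exactly the break from "security by obscurity" described above.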

And because this was all normal mathematical research, it appeared in technical journals, could be printed out and go around the world to be used by anyone. Thus the US and UK governments’ surveillance monopoly was in unexpected danger. So what did they do? They tried to hide the research, and they treated these mathematics research papers as “munitions”. It became illegal to export these “weapons of war” outside the USA without a specific export license from the American government, just like for tanks or military aircraft.

This absurd situation persisted into the early 1990s when two new Internet-age inventions made their continued monopoly on strong cryptography untenable. Almost simultaneously, Zimmermann created a program (PGP) to make public key cryptography easy for normal people to use to protect their email and files, and Netscape created the first SSL protocols for protecting your connection to websites. In both cases, the US government tried to continue to censor and stop these efforts. Zimmermann was under constant legal threat, and Netscape was forced to make an “export-grade” SSL with dramatically weakened security. It was still illegal to download, use, or even see, these programs outside the USA.

But by then the tide had turned. People started setting up mirror websites for the software outside the USA. People started putting copies of the algorithm on their websites as a protest. Or wearing t-shirts with the working code (5 lines of Perl is all that’s needed). Or printing the algorithm on posters to put up around their universities and towns. In the great tradition of civil disobedience against injustice, geeks around the world were daring the governments to stop them, to arrest them. Both the EFF (Electronic Frontier Foundation) and the EPIC (Electronic Privacy Information Center) organizations were created as part of this fight for our basic (digital) civil rights.

In the end, the US government backed down. By the end of the 1990s, the absurd munitions laws still existed but were relaxed sufficiently to allow ordinary people to have basic cryptographic protection online. Now they could be protected when shopping at Amazon without worrying that their credit card and other information would be stolen in transit. Now they could be protected by putting their emails in an opaque envelope instead of sending all their private messages via postcard for anyone to read.

However that wasn’t the end of the story. Like in so many cases “justice too long delayed is justice denied”. The internet is becoming systematically protected by encryption in the last two years thanks to the amazing work of LetsEncrypt. However, we have spent almost 20 years sending most of our browsing and search requests via postcard, and that “export-grade” SSL the American government forced on Netscape in the 1990s is directly responsible for the existence of the DROWN attack putting many systems at risk even today.

Meanwhile, thanks to the legal threats, email encryption never took off. We had to wait until the last few years for the idea of protecting everybody’s communications with cryptography to become mainstream with instant messaging applications like Signal. Even with this, the US and UK governments continue to lead the fight to stop or break this basic protection for ordinary citizens, despite the exasperated mockery from everyone who understands how cryptography works.

Can’t Keep Up? 6 Easy Things You Can Do to Keep Your Kids Safe Online

Having a hard time doing what needs to be done to keep your kids safe online? Do you mentally shrink back when you realize you don't do any of the tips experts so often recommend? Let the guilt go, parent, because you are not alone.

Family life moves at warp speed. We want to keep up, we do everything we can to keep up, but sometimes — depending on the season of life — our best intentions get left on the roadside gulping dust.

So if you feel like you are falling behind, we put together this quick cheat sheet that will allow you to cover your safety bases and regain some ground on the technology front.

6 Easy Things You Can Do to Keep Your Kids Safe Online

Ask about apps

Restrictions on apps exist for a reason. Glance through your child’s home screen and ask about any app you don’t recognize. If you are unsure about an app’s functionality, audience, or risks, dig deeper. This step covers a lot of ground since apps are the #1 way tweens and teens gain access to mature content.

YouTube Safety Mode

Your kids probably spend a ton of time watching videos online, and who knows what their eyes have seen or what links they've clicked. What you may not realize is that YouTube has a safety feature that will block most inappropriate or sexual content from search, related videos, playlists, shows, and films. For kids under four, there's YouTube Kids.

Google SafeSearch

While it’s not going to be as powerful as filtering software, Google has a SafeSearch feature that will filter explicit content (links, videos, and images) on any device. Google also has a reporting system if anything gets through their feature.

Verify Privacy Settings

This step is a five-minute conversation with your child that will remove some risks. If your child is on Facebook, Instagram, Snapchat or Twitter, make sure their privacy settings are marked “private.” This will keep anyone outside of their friend group from connecting with them. As part of the privacy settings chat, review strong password practices.

Relationship over rules

The #1 way to safeguard your kids against online risk is making sure you have a strong relationship. Spend tech-free time together, listen, and observe how your child uses and enjoys his or her devices. A healthy parent-child relationship is foundational to raising a wise digital citizen who can make good choices and handle issues such as cyberbullying, sexting, conflict, or online scams. Connect with your child daily. Talk about what's new with school, their friends, and anything else important to them. Along the way, you'll find out plenty about their online life and have the necessary permission (and trust) to work your concerns about online safety into any conversation.

Friend and follow but don’t stalk

Many parents cringe at the thought of opening a Twitter or Snapchat account, but if that is where your child spends most of his or her time, it’s time to open an account. It’s easy by the way. The wise rule here is that once you follow your child, give them space and privacy. Don’t chime in on the conversation or even compliment them. While they may appreciate your “likes” on Instagram, they aren’t too happy with “mom comments” as my daughter calls them. If you have a concern about a photo or comment your child has uploaded, handle it through a Direct Message or face to face but never in the public feed.


Toni Birdsong is a Family Safety Evangelist to McAfee. You can find her on Twitter @McAfee_Family. (Disclosures). 

The post Can’t Keep Up? 6 Easy Things You Can Do to Keep Your Kids Safe Online appeared first on McAfee Blogs.

How prepared is your business for the GDPR?

The GDPR is the biggest privacy shake-up since the dawn of the internet, and it is just weeks before it comes into force on 25th May. The GDPR comes with potentially head-spinning financial penalties for businesses found not to be complying, so it really is essential for any business which touches EU citizens' personal data to do its privacy-rights homework thoroughly and prepare properly.

Sage have produced a nice GDPR infographic which breaks down the basics of the GDPR with tips on complying, which is shared below.

I am currently writing a comprehensive GDPR Application Developer's Guidance series for IBM developerWorks, which will be released in the coming weeks.


The GDPR: A guide for international business - A Sage Infographic

Equifax breach may have exposed more data than first thought

The 2017 Equifax data breach was already extremely serious by itself, but there are hints it was even worse. CNN has learned that Equifax told the US Senate Banking Committee that more data may have been exposed than initially determined. The hack may have compromised more driver's license info, such as the issuing date and state, as well as tax IDs. In theory, that would make it that much easier for intruders to commit fraud.

Source: CNN Money

An Interview by Timecamp on Data Protection

An Interview by Timecamp on Data Protection

A few months back I was featured in an interview on data protection tips with Timecamp. It contains only a handful of questions, but they are well articulated for any proactive organisation that wants to address corporate security and its employees' and customers' responsibilities.

--

How do you evaluate people's awareness regarding the need to protect their private data?

This is an exciting question, as we have often faced challenges during data protection training in evaluating with certainty that a person has understood the importance of data security and is not just cramming for the test.

Enterprise Security is as closely related to the systems as with the people interacting with them.

One way to perform evaluations is to include surprise checks and discussions within the teams. A team of security-aware individuals is trained and then asked to carry out such inspections. For example, if a laptop is found logged in and unattended for long, the team confiscates it and submits it to a C-level executive (e.g. the CIO or COO). As a consultant, I have also worked on an innovative solution that uses such awareness questions as a "second-level" check while logging into intranet applications. And we are all aware of phishing campaigns that management can execute on all employees to measure their susceptibility to such emails. But these must be followed up with training on how an individual can detect such an attack, and what they can do to avoid falling prey to such scammers in the future. We must understand that while data protection is vital, awareness training and assessment should not cause speed bumps in the daily schedule.

These awareness checks must be performed regularly without adding much stress for the employee. The more effort they require, the more the employee will try to bypass or avoid them. Security teams must work with employees and support their understanding of data protection. Data protection must function as the beginning of understanding security, not as a forced argument.

Do you think that an average user pays enough attention to the issue of data protection?

Data protection is an issue which can only be dealt with through a cumulative effort, and though each one of us cares about privacy, few do so collectively within an enterprise. It is critical to understand that security is a culture, not a product. It needs an ongoing commitment to providing a resilient ecosystem for the business. Social engineering is on the rise, with phishing attacks, USB drops, and fraudulent calls and messages. Employees must understand that their casual approach towards data protection can bring the whole business to ground zero. And the core business must be cautious when it does data identification and classification. The business must discern the scope of its applications and specify the direct and indirect risks if the data gets breached. A data breach is not only an immediate loss of information but a ripple effect leading to disclosure of the enterprise's inner sanctum.

Now, how close are we to achieving this? Unfortunately, we are far from the point where an "average user" accepts data protection as a cornerstone of success in a world where information is the asset. Businesses consider security a tollgate which everyone wants to bypass, because they neither like riding with it nor being assessed by it. Reliable data protection can be achieved when it is not a one-time effort but the base on which we build our technology.

Unless and until we can use the words "security" and "obvious" in the same line, positively, it will always be a challenge which the "average user" tries to deceive rather than achieve.

Why is the introduction of procedures for the protection of federal information systems and organisations so important?

Policies and procedures are essential for the protection of federal or local information, as they harmonise security with usability. We should understand that security is a long road, and when we attempt to protect data, it often has quirks which confuse or discourage an enterprise as it evolves. I have watched many Fortune 500 firms pour money into safeguarding their assets as if into a black hole: they invest millions of dollars and still don't reach par with the scope and requirements. Therefore, it becomes essential to understand the needs of the business, the data it handles, and which procedures apply to its range. Specifically, procedures help keep teams aligned on how to implement a technology or a product for the enterprise. Team experts, or SMEs, usually have telescopic vision in their own domain but a blind eye to the broader defence in depth. Their skills tunnel their view, but a procedure helps them stay in sync with the current security posture and the projected roadmap. A procedure also reduces the probability of error while aligning with a holistic approach towards security: it dictates what to do and how to do it, leaving a minimal margin of misunderstanding when implementing sophisticated security measures.

Are there any automated methods to test the data susceptibility to cyber-attacks, for instance, by the use of frameworks like Metasploit? How reliable are they in comparison to manual audits?

Yes, there are automated methods to perform audits, and to some extent they are well devised to detect the low-hanging fruit. In simpler terms, an automated assessment has three key phases - information gathering, tool execution to identify issues, and report review. Security-aware companies, and those that fall under strict regulations, often integrate such tools into their development and staging environments. This CI (continuous integration) keeps the code clean and checks for vulnerabilities and bugs on a regular basis. It also helps smooth out errors that may have crept in from reusing existing code or outdated functions. On the other side, there are tools which validate the sanity of the production environment and perform regular checks on the infrastructure and data flows.

Are these automated tools enough? No. They are not "smart" enough to replace manual audits.

They can validate configurations and flag issues in the software, but they can't evolve with the threat landscape. Manual inspections, on the other hand, provide peripheral vision while verifying the ecosystem's resilience. It is essential to have manual audits, and to use their feedback to assess, and further tune, the tools. If you are working in a regulated and well-observed domain like finance, health or data collection, the compliance officer will always rely on manual audits for final assurance. The tools are still there to support, but remember: they are only as good as they are programmed and configured to be.

How should one present the procedures that prevent attacks in one's company, e.g., to external customers who demand an adequate level of data protection?

This is a paramount concern, and thanks for asking. External clients need to trust you before they can share data or plug you into their organisation. The best approach that has worked for me is assurance through what you have in place and how well you are prepared for the worst. The cyber world is very fragile, and where we used to say "if things go bad ...", we now say "when things go bad ...".

This means we have accepted the fact that an attack is inevitable if we are dealing with data or information. Someone is watching, waiting to strike at the right time, especially if you are a successful firm. Now, assurance can be achieved by demonstrating the policies you have in place for Information Security and Enterprise Risk Management. These policies must be supplemented with standards which identify the requirements, with procedures serving as the how-to documents for implementation. In most cases, if you have to assure a client of your defence in depth, the security policy, architecture and a previous third-party assessment or audit suffice. In rare cases, a client may ask to perform its own assessment of your infrastructure, which is at your discretion. I would recommend making sure that your policy covers not only security but also incident response, to reflect your preparedness for a breach or attack.

On the other hand, if your end customers want assurance, you can reflect that by being proactive on your product, blog, media and so on about how dedicated you are to securing their data. For example, the kinds of authentication you support signal your commitment to protecting the vault. Whether they are mandated or optional depends on usability and UI, but supporting them at all shows your commitment to addressing security-aware customers and understanding the need of the hour.

--
Published at https://www.timecamp.com/blog/index.php/2017/11/data-protection-tips/ with special thanks to Ola Rybacka for this opportunity.

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies." and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard, or impossible, at times. And there's some truth to that: there are far too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented them. Or the companies involved hide details about what actually happened, or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when in reality it was a simple phishing attack in which credentials were simply handed over.

In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.
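To make the masking idea concrete, here is a small illustrative sketch (not any vendor's actual product). Each sensitive value is replaced with a fake but consistent stand-in derived from the original, so joins and tests still behave realistically while the real data is gone; the field names, salt, and token format are invented for the example.

```python
import hashlib

def mask_value(value, salt="mask-salt"):
    """Derive a stable fake token from the original value, then discard it.

    The same input always maps to the same token, which preserves
    referential integrity across tables without keeping the real data.
    """
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:8]
    return f"user_{digest}"

def mask_rows(rows, sensitive_fields):
    """Return a copy of the rows with sensitive fields replaced."""
    return [{k: mask_value(v) if k in sensitive_fields else v
             for k, v in row.items()}
            for row in rows]

rows = [{"name": "Alice Smith", "ssn": "123-45-6789", "plan": "gold"}]
masked_out = mask_rows(rows, {"name", "ssn"})
print(masked_out)  # name and ssn replaced, plan untouched
```

Note that real masking tools also preserve formats (a masked SSN still looks like an SSN) and handle edge cases this toy version ignores; the point is only that masked output keeps the shape of the data, not the data itself.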

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Security Access Brokers can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.
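The drift check such a monitoring tool performs can be sketched very simply: compare a live server's settings against the corporate baseline and report every divergence. The setting names and values below are invented placeholders, not any real cloud provider's API.

```python
# Corporate baseline: what every instance is supposed to look like.
BASELINE = {
    "public_access": False,
    "encryption_at_rest": True,
    "allowed_ports": [22, 443],
}

def find_drift(live_config, baseline=BASELINE):
    """Return {setting: (expected, actual)} for every deviation."""
    return {key: (expected, live_config.get(key))
            for key, expected in baseline.items()
            if live_config.get(key) != expected}

# A misconfigured instance: someone flipped the bucket to public.
live = {"public_access": True, "encryption_at_rest": True,
        "allowed_ports": [22, 443]}
print(find_drift(live))  # {'public_access': (False, True)}
```

An auto-remediation loop would then write the expected value back (or raise a ticket) for each flagged key, which is essentially what the CASB products described above automate at scale.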

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to implement an advanced crypto attack which would take enormous resources and time to conduct and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the security stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.
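The following is a toy illustration only of why at-rest encryption neutralises a stolen file: without the key, the bytes are noise. It uses a simple HMAC-derived keystream for the demonstration; real databases use vetted ciphers such as AES, and you should never roll your own cryptography in practice.

```python
import hashlib
import hmac

def keystream(key, length):
    """Derive a pseudo-random byte stream from the key (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, str(counter).encode(), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def toy_encrypt(key, data):
    """XOR the data with the keystream; symmetric, so it also decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

db_bytes = b"row1: alice, 123-45-6789"
ciphertext = toy_encrypt(b"secret-key", db_bytes)

# The right key recovers the data; a wrong key yields garbage.
assert toy_encrypt(b"secret-key", ciphertext) == db_bytes
assert toy_encrypt(b"wrong-key", ciphertext) != db_bytes
```

The point the asserts make is exactly the layer-3 argument: possession of the file alone is worthless, because the protection travels with the data rather than with the disk it sits on.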

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).
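The password half of this layer can be sketched with the standard library alone: salt each password and run it through a slow key-derivation function, so even a fully authorized reader of the table sees no usable secret. The iteration count and field names are illustrative choices, not a recommendation for any specific system.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Salt and hash a password with PBKDF2; returns (salt, digest)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=200_000):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("guess1", salt, stored)
```

Hashing works only for fields you never need to read back, like passwords. Fields such as SSNs, which must be recoverable, would instead be encrypted at the application tier with a vetted cryptography library before being written to the database.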

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But, that’s why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn’t require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked/replaced before it was even made available. And, if it was a production DB, database encryption and access control protections that stay with the database during export or if the database file is moved away from an encrypted volume should have been applied. The data should have been protected before the vendor's analyst ever got his/her hands on it. Oracle Database Vault would have prevented even a DBA-type user from being able to access the sensitive user data that was exposed here. These are not new technologies; they’ve been around for many years with plentiful documentation and industry awareness.

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings about how this proves that cloud is less secure than on-premises deployments. I don’t agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and that offer layered security controls; more security than their own data centers. It’s more than selecting the right Cloud Service Provider. You also need to choose the right service; one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it’s easy and low cost, ease-of-use and cost are not always the most important factors when choosing the right cloud service. When sensitive data is involved, security needs to be weighed heavily when making service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.) Specific techniques or advantages mentioned may not apply to other vendors’ similar solutions.

Backdoors in messaging apps – what’s really going on?

We are in one of those phases again. The Paris attacks caused, once again, a cascade of demands for more surveillance and weakening of encryption. These demands appear every time, regardless of if the terrorists used encryption or not.

The perhaps most controversial demand is to make backdoors mandatory in communication software. Encryption technology can be practically unbreakable if implemented right. And the use of encryption has skyrocketed after the Snowden revelations. But encryption is not only used by terrorists. As a matter of fact, it’s one of the fundaments we are building our information society on. Protection against cybercrime, authentication of users, securing commerce, maintaining business secrets, protecting the lives of political dissidents, etc. etc. These are all critical functions that rely on encryption. So encryption is good, not bad. But as any good thing, it can be both used and misused.

And besides that, as people from the Americas prefer to express it: encryption is speech, referring to the First Amendment that grants people free speech. Both encryption technology and encrypted messages can be seen as information that people are free to exchange. Encryption technology is already out there and widely known. How on earth can anyone think that we could get this genie back in the bottle? Banning strongly encrypted messages would just harm ordinary citizens without stopping terrorists from using secure communications, as they are known to disregard laws anyway. Banning encryption as an anti-terror measure would work just as well as simply banning terrorism. (* So can the pro-backdoor politicians really be that stupid and ignorant?

Well, that might not be the whole truth. But let's first take a look at the big picture. What kind of tools do the surveillance agencies have to fight terrorism, or spy on their enemies or allies, or anybody else who happens to be of interest? The methods in their toolboxes can roughly be divided into three sections:

  • Tapping the wire. Reading the content of communications this way is becoming futile thanks to extensive use of encryption, but traffic analysis can still reveal who’s communicating with whom. People with unusual traffic patterns may also get attention at this level, despite the encryption.
  • Getting data from service provider’s systems. This usually reveals your network of contacts, and also the contents unless the service uses proper end-to-end encryption. This is where they want the backdoors.
  • Putting spying tools on the suspects’ devices. This can reveal pretty much everything the suspect is doing. But it’s not a scalable method and they must know whom to target before this method can be used.

And their main objectives:

  • Listening in to learn if a suspect really is planning an attack. This requires access to message contents. This is where backdoors are supposed to help, according to the official story.
  • Mapping contact networks starting from a suspect. This requires metadata from the service providers or traffic analysis on the cable.
  • Finding suspects among all network users. This requires traffic analysis on the cable or data mining at the service providers’ end.

So forcing vendors to weaken end-to-end encryption would apparently make it easier to get message contents from the service providers. But as almost everyone understands, a program like this can never be water-tight. Even if the authorities could force companies like Apple, Google and WhatsApp to weaken security, others operating in another jurisdiction will always be able to provide secure solutions. And more skillful gangs could even use their own home-brewed encryption solutions. So what’s the point if we just weaken ordinary citizens’ security and let the criminals keep using strong cryptography? Actually, this is the real goal, even if it isn’t obvious at first.

Separating the interesting targets from the mass is the real goal in this effort. Strong crypto is in itself not the intelligence agencies’ main threat. It’s the trend that makes strong crypto a default in widely used communication apps. This makes it harder to identify the suspects in the first place as they can use the same tools and look no different from ordinary citizens.

Backdoors in the commonly used communication apps would however drive the primary targets towards more secure, or even customized, solutions. These solutions would of course not disappear. But the use of them would not be mainstream, and function as a signal that someone has a need for stronger security. This signal is the main benefit of a mandatory backdoor program.

But it is still not worth it; the price is far too high. Real-world metaphors are often a good way to describe IT issues. Imagine a society where the norm is to leave your home door unlocked. The police walk around and check all doors. They may peek inside to see what you are up to. And those with a locked door must have something to hide, so they are automatically suspects. Does this feel right? Would you like to live in a society like that? This is the IT society some agencies and politicians want.


Safe surfing,
Micke


(* Yes, demanding backdoors and banning cryptography is not the same thing. But a backdoor is always a deliberate fault that makes an encryption system weaker. So it’s fair to say that demanding backdoors is equal to banning correctly implemented encryption.

Why Cameron hates WhatsApp so much

It’s a well-known fact that UK’s Prime Minister David Cameron doesn’t care much about peoples’ privacy. Recently he has been driving the so called Snooper’s Charter that would give authorities expanded surveillance powers, which got additional fuel from the Paris attacks.

It is said that terrorists want to tear down the Western society and lifestyle. And Cameron definitively puts himself in the same camp with statements like this:

“In our country, do we want to allow a means of communication between people which we cannot read? No, we must not.”
David Cameron

Note that he didn't say terrorists, he said people. Kudos for the honesty. It's a fact that terrorists blend in with the rest of the population, and any attempt to weaken their security affects all of us. And it should be a no-brainer that a nation where the government can listen in on everybody is bad, at least if you have read Orwell's Nineteen Eighty-Four.

But why does WhatsApp occur over and over as an example of something that gives the snoops grey hair? It’s a mainstream instant messenger app that wasn’t built for security. There are also similar apps that focus on security and privacy, like Telegram, Signal and Wickr. Why isn’t Cameron raging about them?

The answer is both simple and very significant, but it may not be obvious at first. The Internet was insecure by default, and you had to use tools to fix that. The pre-Snowden era was the golden age for agencies tapping into the Internet backbone. Everything was open and unencrypted, except the really interesting stuff. Encryption itself became a signal that someone was of interest, and the authorities could use other means to find out what that person was up to.

More and more encryption is being built in by default now that we, thanks to Snowden, know the real state of things. A secured connection between client and server is becoming the norm for communication services. And many services are deploying end-to-end encryption, which means that messages are secured and opened by the communicating devices, not by the servers. Stuff stored on the servers is thus also safe from snoops. So yes, people with Cameron's mindset have a real problem here. Correctly implemented end-to-end encryption can be next to impossible to break.

But there's still one important thing that tapping the wire can reveal: what communication tool you are using, and this is the important point. WhatsApp is a mainstream messenger with security. Telegram, Signal and Wickr are security messengers used by only a small group of people with special needs. Traffic from both WhatsApp and Signal, for example, is encrypted. But the fact that you are using Signal is the important point. You stick out, just like encryption users before.

WhatsApp is the prime target of Cameron’s wrath mainly because it is showing us how security will be implemented in the future. We are quickly moving towards a net where security is built in. Everyone will get decent security by default and minding your security will not make you a suspect anymore. And that’s great! We all need protection in a world with escalating cyber criminality.

WhatsApp is by no means a perfect security solution. The implementation of end-to-end encryption started in late 2014 and is still far from complete. The handling of metadata about users and communication is not very secure. And there are tricks the wire-snoops can use to map people's networks of contacts. So check it out thoroughly before you start using it for really hot stuff. But they seem to be on the path to becoming something unique: among the first communication solutions that are easy to use, popular and secure by default.

Apple’s iMessage is another example. So easy that many are using it without knowing it, when they think they are sending SMS-messages. But iMessage’s security is unfortunately not flawless either.


Safe surfing,
Micke


PS. Yes, weakening security IS a bad idea. An excellent example is the TSA luggage locks, which have a master key that *used to be* secret.


Image by Sam Azgor

POLL – Is it OK for security products to collect data from your device?

We have a dilemma, and maybe you want to help us. I have written a lot about privacy and the trust relationship between users and software vendors. Users must trust the vendor to not misuse data that the software handles, but they have very poor abilities to base that trust on any facts. The vendor’s reputation is usually the most tangible thing available.

Vendors can be split into two camps based on their business model. The providers of "free" services, like Facebook and Google, must collect comprehensive data about the users to be able to run targeted marketing. The other camp, where we at F-Secure are, sells products that you pay money for. This camp does not have the need to profile users, so the privacy threats should be smaller. But is that the whole picture?

No, not really. Vendors of paid products do not have the need to profile users for marketing. But there is still a lot of data on customers’ devices that may be relevant. The devices’ technical configuration is of course relevant when prioritizing maintenance. And knowing what features actually are used helps plan future releases. And we in the security field have additional interests. The prevalence of both clean and malicious files is important, as well as patterns related to malicious attacks. Just to name a few things.

One of our primary goals is to guard your privacy. But we could on the other hand benefit from data on your device. Or to be precise, you could benefit from letting us use that data as it contributes to better protection overall. So that’s our dilemma. How to utilize this data in a way that won’t put your privacy in jeopardy? And how to maintain trust? How to convince you that data we collect really is used to improve your protection?

Our policy for this is outlined here, and the anti-malware product’s data transfer is documented in detail in this document. In short, we only upload data necessary to produce the service, we focus on technical data and won’t take personal data, we use hashing of the data when feasible and we anonymize data so we can’t tell whom it came from.
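The two techniques named in that policy, hashing and anonymization, can be sketched briefly. This is an illustrative example of the general ideas, not F-Secure's actual pipeline; the event fields and detection name are invented for the demonstration.

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    """A SHA-256 hash identifies a file for reputation lookups
    without uploading the file's contents themselves."""
    return hashlib.sha256(data).hexdigest()

def anonymize_event(event, personal_fields=("username", "hostname")):
    """Drop fields that could identify the person or the machine
    before the event leaves the device."""
    return {k: v for k, v in event.items() if k not in personal_fields}

event = {"username": "micke", "hostname": "laptop-01",
         "file_hash": file_fingerprint(b"suspicious bytes"),
         "detection": "Trojan.Generic"}
print(anonymize_event(event))  # only file_hash and detection remain
```

The hash lets the cloud answer "have we seen this file before, and is it malicious?" while the stripped event means the answer cannot be tied back to a particular user, which is the balance between protection and privacy described above.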

The trend is clearly towards lighter devices that rely more on cloud services. Our answer to that is Security Cloud. It enables devices to off-load tasks to the cloud and benefit from data collected from the whole community. But to keep up with the threats we must develop Security Cloud constantly. And that also means that we will need more info about what happens on your device.

That’s why I would like to check what your opinion about data upload is. How do you feel about Security Cloud using data from your device to improve the overall security for all users? Do you trust us when we say that we apply strict rules to the data upload to guard your privacy?


Safe surfing,
Micke


Image by balticservers.com


The ‘Safe Harbor’ ruling divides the ‘old world’ and ‘new world’

This week's ruling by the European Court of Justice striking down the 2000 "Safe Harbor" agreement between the European Union and the United States was celebrated as vindication by privacy activists, who saw the decision as a first major international consequence of the Snowden revelations detailing the extraordinary extent of mass surveillance being conducted by the U.S. and its allies.

“The safe harbor agreement allowed U.S. companies to self-certify they abided by EU-strength data protection standards,” Politico’s David Meyer reported. “This gave them a relatively simple mechanism to start legally handling Europeans’ personal data.”

That simple mechanism did not abide by the Commission's own privacy standards, the Court decided.

“The court, by declaring invalid the safe harbor which currently permits a sizeable amount of the commercial movement of personal data between the EU and the U.S., has signaled that PRISM and other government surveillance undermine the privacy rights that regulates such movements under European law,” the EFF’s Danny O’Brien wrote.

A new Safe Harbor agreement is currently being negotiated, and the Court's ruling seems designed to speed that up. But for now, many companies — especially smaller ones — and users are in a sort of legal limbo.

And that legal limbo may not be great news for your privacy, according to F-Secure Security Advisor Sean Sullivan, as it creates legal uncertainty that could easily be exploited by government spy agencies and law enforcement.

“Uncertainty is their bread and butter,” he told me.

To Sean, this ruling and the urge to break the old agreement without a new one yet in place represent an “old world” view of the Internet where geography was key.

The U.S. government has suggested that it doesn't need to respect borders when it comes to companies like Microsoft, Facebook and Google, which are headquartered in the U.S. but do business around the world. Last month, the Department of Justice said it could demand Microsoft turn over Hotmail data of any user, regardless of where s/he lives.

“The cloud doesn’t have any borders,” Sean said. “Where stuff is located geographically is kind of quaint.”

You can test this out by using an app like Citizen Ex that tests your “Algorithmic Citizenship.” Sean, an American who lives in Finland, is identified as an American online — as much of the world would be.

What Europe gave up in privacy with Safe Harbor was, to some, made up for in creating a cohesive marketplace that made it easier for businesses to prosper.

Facebook and Google warned that the U.S.’s aggressive surveillance risked “breaking the Internet.” This ruling could be the first crack in that break.

Avoiding a larger crackup requires a “new world” view of the Internet that respects privacy regardless of geography, according to Sean. He’s hopeful that reform comes quickly and democratically in a way that doesn’t require courts to force politicians’ hands.

The U.S. showed some willingness to reform its surveillance state when it passed the USA FREEDOM Act — the first new limitations on intelligence gathering since 9/11. But more needs to be done, says the EFF. The digital rights organization is calling for "reforming Section 702 of the Foreign Intelligence Surveillance Amendments Act, and re-formulating Executive Order 12333."

Without these reforms, it’s possible that any new agreement that’s reached between the U.S. and Europe might not reach the standards now reaffirmed by the European Court of Justice.

You don’t need to upgrade to Windows 10 to get some of its worst privacy features

Windows 10 is hungry — for your data.

For instance, it gives every device running the OS a unique advertising ID for tracking purposes. In addition, anything you say or type can be processed by Microsoft along with telemetry data — including software configuration, network usage and connection data. You can turn much of this off (but not all of the telemetry collection) and you should definitely take a tour of Windows 10’s privacy settings right now if you haven’t already.

“Even when you’ve disabled a number of the nosier features (like Windows 10’s new digital assistant, Cortana), the OS ceaselessly and annoyingly opens an array of encrypted channels back to the Redmond mother ship that aren’t entirely under the user’s control,” TechDirt‘s Karl Bode explains.

The upgrade to the "last" new edition of Windows ever is free, in an effort to reduce the fragmentation that now exists in the Windows landscape.

But just in case you haven’t or won’t upgrade, Microsoft doesn’t want you to feel ignored. A new set of updates for Windows 7 and 8 add Windows 10’s telemetry collection to its ancestors.

“If these updates are installed on the system, data is sent to Microsoft regularly about various activities on it,” Ghacks‘ Martin Brinkmann writes. His suggestion is that you not install them.

If you do install them, you can turn off much of the collection by opting out of the Customer Experience Improvement Program (CEIP). You can do that in Windows 8 by going to the Control Panel’s Action Center. On the left side menu, select “Change Action Center Settings”.

How big of a privacy concern is this?

It all comes down to how much you trust Microsoft with access, in some way, to almost everything you do with your devices. While Windows 10 has received lots of praise, the aggressiveness of the data collection continues to push more privacy-focused users to the far more customizable Linux OS.

For more everyday users, the growing connectedness of all devices forces us to do additional thinking about privacy when making purchases. "What kind of data is my car or refrigerator tracking?" just isn't a question many of us had thought about before.

It makes sense that Microsoft needs to monitor its product for usability and stability, but when that monitoring goes beyond the product, it makes sense if your hackles begin to rise.

[Image by Lali Masriera | Flickr]

After the Ashley Madison hack, should you ever trust the internet again?

5 ways to avoid having your life ruined by a data breach

If there were ever a security story that raises complex questions about intersections of morality, technology and privacy, it’s the Ashley Madison hack.

With a pair of suicides possibly connected to the leaking of over 30 million online identities connected to the infidelity website, we should not forget that there are very human consequences to the things we do and are done to us online.

“The most concrete fear for users listed in the database is that they’re now framed as cheaters, whether they actually did it or not,” our Mikko Hypponen told Bloomberg. “We have to remember that they are victims of a crime.”

This crime exposes the victims to the whims of scammers and has the very real possibility of destroying families and lives.

The excellent service Have I Been Pwned? has taken the extra step of requiring email verification before disclosing whether an email was part of the hack. Site owner Troy Hunt has been documenting what he's hearing from Ashley Madison members, and a story like this in the comments (hat tip to @BrianHonan) makes it much easier to ignore the "puritanical glee" that accompanied much of the initial press coverage.

If our browsers could talk to the world, there likely would be very few of us who would not end up ashamed. Knowing that, you likely take all the steps you should to secure your privacy online: updated system and security software, a VPN from a provider you trust, unique passwords….

But you could have done all that and still ended up being exposed as a victim of the Ashley Madison hack, which is the second biggest breach of all time, according to Have I Been Pwned?.

Ashley Madison's owner, Avid Life Media, will face massive legal claims, which should act as encouragement for all sites to pursue better security — or to at least not brag about their security, taunting hackers.

Is that enough to calm all your nerves? Probably not.

The truth is, your privacy is in others’ hands once you pass your information on. But there are some things you can do to try to keep it.

1. You can start by not using identifiable email addresses — and definitely not your work email! — when you sign up for a site that promotes activity many frown upon. Some people use what they call “burner” accounts that provide disconnection from your real life identity and other benefits.

2. Never leave your computer or mobile devices unlocked, and get smart about your security questions. Don’t ever use answers that can be guessed or figured out through your social media accounts. Consider using fictional answers that you save in a password manager.

3. Never save the passwords to any site you don’t want to be discovered using. You shouldn’t do this in general — a password manager like our Key is much smarter.

4. Identity protection is an extra but imperfect layer of protection that might protect your finances, but not your reputation in a case like the Ashley Madison hack.

5. Ultimately and ironically, trust is the most important factor. Stick with sites, services and providers that you trust.

Moral perfection is not something we should expect from internet users. But we should demand as close to perfection as we can get from those who promise to protect our data.

Whether that trust can be rebuilt remains to be seen.

[Screenshot via Ashley Madison]

3 Password Tips from the Pros

Passwords are the keys to online accounts. A good password known only to account owners can ensure email, social media accounts, bank accounts, etc. stay accessible only to the person (or people) that need them. But a bad password will do little to prevent people from getting access to those accounts, and can expose you to serious security risks (such as identity theft). And sadly, many people continue to recycle easy to guess/crack passwords.

A recent study conducted by researchers from Google attempted to nail down the most common pieces of advice and practices recommended by security researchers, and unsurprisingly, several of them had to do with passwords. And there were several gaps between what security experts recommend people do when creating passwords and what actually happens. Here are 3 expert tips to help you use passwords to keep your accounts safe and secure.

  1. Unique Passwords are Better than Strong Passwords

One thing experts recommend doing is to choose a strong and unique password – advice many people hear but few actually follow. Chances are, if your password is on this computer science professor’s dress, it’s not keeping your accounts particularly secure.

Many major online service providers automatically force you to choose a password that follows certain guidelines (such as length and character combinations), and even provide you feedback on the password’s strength. But security researchers such as F-Secure Security Advisor Sean Sullivan say that, while strong passwords are important, the value of choosing unique passwords is an equally important part of securing your account.

Basically, using unique passwords means you shouldn’t recycle the same password for use with several different accounts, or even slight variations of the same word or phrase. Google likens that to having one key for all the doors in your house, as well as your car and office. Each service should get its own password. That way, one compromised account won’t give someone else the keys to everything you do online.

A strong password will be long, use combinations of upper-case and lower-case letters, numbers, and symbols. The password should also be a term or phrase that is personal to you – and not a phrase or slogan familiar to the general public, or something people that know you could easily guess. But there are still many ways to compromise these passwords, as proven by The Great Politician Hack.

So using unique passwords prevents criminals, spies, etc. from using one compromised password to access several different services. Sullivan says choosing strong and unique passwords for critical accounts – such as online banking, work related email or social media accounts, or cloud storage services containing personal documents – is a vital part of having good account security.
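To make both properties concrete, here is a minimal Python sketch that generates a fresh random password per account. The character set and default length are illustrative choices of mine, not an official recommendation; the key point is that each call produces an independent, unpredictable password.

```python
import secrets
import string

# Candidate characters: upper- and lower-case letters, digits and a few symbols.
# This particular set is an example, not a standard.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def make_password(length=16):
    """Generate one strong password; call it once per account so each is unique."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Because `secrets` draws from the operating system’s cryptographic randomness, two calls are overwhelmingly unlikely to collide — which is exactly the “one key per door” property described above.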

  2. Experts Use Password Managers for a Reason

One study showed that the average Internet user has 26 different online accounts. Assuming you’re choosing unique passwords, and you fit the bill of an “average Internet user”, you’ll find yourself with a large number of passwords to remember. You’ve now made your accounts so safe and secure that you can’t even use them!

That’s why experts recommend using a password manager. Password managers help people maintain strong account security by letting them choose strong and unique passwords for each account, and by storing them securely so that they’re centralized and accessible. They keep 26 or more online accounts secure with strong, unique passwords known only to you, which is why 73% of the experts that took part in Google’s study use them, compared to just 24% of non-experts.

  3. Take Advantage of Additional Security Features

Another great way to secure accounts is to activate two-factor authentication whenever it’s made available. Two-factor (or multi-factor) authentication essentially uses two different methods to verify the identity of a particular account holder. An example of this would be protecting your account with a password, but also having your phone number registered as a back-up, so any kind of password reset done on the account makes use of your phone to verify you are who you say you are.
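Many services implement the second factor as a time-based one-time password (TOTP, standardized in RFC 6238): a 6-digit code derived from a shared secret and the current time. As a rough sketch of how that derivation works — standard-library Python only, and real deployments should rely on a vetted authenticator library rather than this:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, now=None):
    """Derive a time-based one-time password from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of elapsed time steps (default 30 s).
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because both sides compute the code from the same secret and clock, the server can verify it without the code ever being reusable for long — that’s what makes a stolen password alone insufficient.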

While the availability of this option may be limited, security experts recommend taking advantage of it whenever you can. You can find a list of some popular services that use two-factor authentication here, as well as some other great tips for using passwords to keep your online accounts secure.

[Photo by geralt | Pixabay]

5 things you need to know to feel secure on Windows 10

New versions of Windows used to be like an international holiday. PC users around the world celebrated by sharing what they liked — much of Windows 7 — and hated — all of Windows 8 and Vista — about the latest version of the world’s most popular operating system.

In this way, Windows 10 is the end of an era.

This is the “final version” of the OS. After you step up to this version, there will be continual updates but no new version to upgrade to. It’s the birth of “Windows as a service,” according to The Verge.

So if you’re taking the free upgrade to the new version, here are 5 things you need to know as you get used to the Windows that could be with you for the rest of your life.

1. Our Chief Research Officer Mikko Hypponen noted Windows 10 still hides double extensions by default.
“Consider a file named doubleclick.pdf.bat. If ‘hide extensions’ is enabled, then this will be shown in File Explorer as ‘doubleclick.pdf’. You, the user, might go ahead and double-click on it, because it’s just a PDF, right?” F-Secure Security Advisor Tom Gaffney told Infosecurity Magazine.

“In truth, it’s a batch file, and whatever commands it contains will run when you double-click on it.”

Keep this in mind when you do — or DON’T — click on unknown files.
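The pattern in that example can be checked mechanically. Here is a small illustrative Python sketch that flags names that would look harmless once the final extension is hidden; the two extension lists are examples I’ve picked, not an exhaustive rule:

```python
from pathlib import Path

# Extensions that look like harmless documents vs. ones Windows will execute.
# These lists are illustrative, not a complete catalogue.
DOC_LIKE = {".pdf", ".doc", ".docx", ".jpg", ".txt"}
EXECUTABLE = {".bat", ".cmd", ".exe", ".scr", ".vbs"}

def looks_deceptive(filename):
    """True if hiding the last extension would disguise an executable as a document."""
    suffixes = [s.lower() for s in Path(filename).suffixes]
    return (len(suffixes) >= 2
            and suffixes[-1] in EXECUTABLE
            and suffixes[-2] in DOC_LIKE)
```

With the example above, `looks_deceptive("doubleclick.pdf.bat")` is true, while a plain `report.pdf` passes — File Explorer would display both as ending in “.pdf”.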

2. You could end up sharing your Wi-Fi connection with all your contacts.
There’s some debate about whether or not Windows 10’s Wi-Fi Sense shares your Wi-Fi connection with social media contacts by default, as Windows Phone has for a while now.

ZDNet‘s Ed Bott says no, noting that “you have to very consciously enable sharing for a network. It’s not something you’ll do by accident.”

Security expert Brian Krebs is more skeptical, given how we’re “conditioned to click ‘yes’ to these prompts.”

“In theory, someone who wanted access to your small biz network could befriend an employee or two, and drive into the office car park to be in range, and then gain access to the wireless network,” The Register‘s Simon Rockman wrote. “Some basic protections, specifically ones that safeguard against people sharing their passwords, should prevent this.”

Gaffney notes that Wi-Fi Sense is “open to accidental and deliberate misuse.”

So what to do?

Krebs recommends the following:

  1. Prior to upgrading to Windows 10, change your Wi-Fi network name/SSID to something that includes the terms “_nomap_optout”. [This is the Windows opt-out for Wi-Fi Sense.]
  2. After the upgrade is complete, change the privacy settings in Windows to disable Wi-Fi Sense sharing.

3. There are some privacy issues you should know about.
Basically “whatever happens, Microsoft knows what you’re doing,” The Next Web‘s Mic Wright noted.

Microsoft, according to its terms and conditions, can gather data “from you and your devices, including for example ‘app use data for apps that run on Windows’ and ‘data about the networks you connect to.’” And it can also disclose that data to third parties as it sees fit.

Here’s a good look at the privacy and security settings you should check now.

Want a deep dive into the privacy issues? Visit Extreme Tech.

4. The new Action Center could be useful but it could get annoying.
This notification center makes Windows feel more like an iPhone — because isn’t the point of everything digital to eventually merge into the same thing?

BGR‘s Zach Epstein wrote “one location for all of your notifications is a welcome change.” But it can get overwhelming.

“In Windows 10, you can adjust notification settings by clicking the notifications icon in the system tray,” he wrote. “Then click All settings, followed by System and then Notifications & actions.”

5. Yes, F-Secure SAFE, Internet Security and Anti-Virus are all Windows 10 ready.

[Image by Brett Morrison | Flickr]

How to avoid your worst social media nightmare

There wouldn’t be 1.44 billion active users on Facebook if the risks outweighed the rewards.

Likewise, with more than a billion people using a website that requires you to use your real identity to share your media, thoughts and feelings, we can’t expect there to be zero risks to social media.

The same way someone can study your driveway to find out when you’re not home, your profile can be stalked for insights into your life. Despite this, the worst most of us have had to deal with is being awkwardly contacted by people we’ve purposely kept out of our lives. Most of us will never have to deal with what female gamers were forced to endure when they ignored or rejected friend requests from a seventeen-year-old resident of British Columbia.

“He exposed their private secrets to the world, put their lives in danger and shut down Disneyland in the process,” CBC News’ Jason Proctor explains.

His alleged speciality was a combination of “doxing” and “swatting.”

“Doxing” has come to mean “using the internet to find and expose a target’s personal information,” which is technically legal, though against the terms and conditions of many sites, in most places. “Swatting,” which is “faking emergency calls to trigger the deployment of SWAT teams to a victim’s house,” is not legal anywhere.

What can you do to prevent this kind of behavior? If the perpetrator is fixated enough, not much.

“You’re not going to stop a dedicated attacker from doxxing you,” F-Secure Security Advisor Sean Sullivan told me. “Get offline for that.”

Any threat of harm online should be taken seriously. Take a screenshot and report it to both the platform where the threat was posted and the appropriate law enforcement agency.

But the good news is that most perpetrators are not clever and lawless enough to go to the extremes this young man was. And even if they were, most of us have gotten pretty good at not oversharing after more than a decade of living in a world where Google makes researching people’s lives easy.

“The world has gotten smaller because of the internet, not just social media,” Sean explained.

If you Google your name along with the name of the city you live in, for instance, you may be disturbed at what you find. And even if you are good at limiting what’s posted online about you along with what you share and with whom, you still may be vulnerable.

“Oversharing is not the problem,” he said. “Security questions are.”

The answers to many of the security questions attackers could use to infiltrate your accounts and dig out private information from you or your friends are based on “trivia” from your life, like what school you attended. Such information can be easily Googled.

What can you do about that?

“Consider lying,” Sean said.

But that does create a problem. As Mark Twain said, “If you tell the truth, you don’t have to remember anything.” If you lie on your security questions, you’ll have to remember those lies.

Sean’s suggestion: “Use a password manager like F-Secure KEY that has a notes section.”

Then you can record your fibs and protect your strong, unique passwords, which are — along with updated system and security software and a reliable VPN — essential for keeping intruders from accessing your accounts.
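One way to follow that advice is to generate the fictional answers randomly, so they can’t be Googled or guessed, and then keep them in the manager’s notes. A minimal Python sketch — the word list here is a made-up example, and any list of unrelated words works:

```python
import secrets

# A made-up word list; in practice use a longer one of your own choosing.
WORDS = ["falcon", "harbor", "nutmeg", "quartz", "velvet", "zeppelin"]

def fake_answer(n_words=3):
    """A random fictional 'security answer' to store in a password manager's notes."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))
```

So the answer to “What school did you attend?” might become something like “quartz-falcon-velvet” — meaningless to an attacker, but safely recorded where you can look it up.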

“Now would be a good time to update your security questions.”

[Image by Secretive Ireland | Flickr]

Sunset for section 215, but is the world better now?

Section 215 of the US Patriot Act has been in the headlines a lot lately. This controversial section was used by the US intelligence agencies to scoop up large quantities of US phone records, among other things. The section had a sunset clause and needed to be renewed periodically, with the latest deadline at midnight May 31st 2015. The renewal has previously been a rubber-stamp thing, but not this time. Section 215 has expired and been replaced by the Freedom Act, which is supposed to be more restrictive and better protect our privacy. And that made it headline news globally.

But what does this mean in practice? Is this the end of the global surveillance Edward Snowden made us aware of? How significant is this change in reality? These are questions that aren’t necessarily answered by the news coverage.

Let’s keep this simple and avoid going into details. Section 215 was just a part in a huge legal and technical surveillance system. The old Section 215 allowed very broad secret warrants to be issued by FISA courts using secret interpretations of the law, forcing companies to hand over massive amounts of data about citizens’ communications. All this under gag orders preventing anyone from talking about it or even seeking legal advice. The best known example was probably the bulk collection of US phone records. It’s not about tapping phones, rather about keeping track of who called whom at what time. People in the US could quite safely assume that if they placed calls, the NSA had them on record.

The replacing Freedom Act still allows a lot of surveillance, but aims to restrict the much-criticized mass surveillance. Surveillance under the Freedom Act needs to be more specific than under Section 215. Authorities can’t just tell a telecom operator to hand over all phone records to see if they can find something suspicious. Now they have to specify an individual or a device they are interested in. Telecom operators must store certain data about all customers, but only hand over the requested data. That’s not a problem; it is pretty much data that the operators have to keep anyway for billing purposes.

This sounds good on paper, but reality may not be so sunny. First, the Freedom Act is a new thing and we don’t know yet how it will work in practice. Its interpretation may be more or less privacy friendly; time will tell. The surveillance legislation is a huge and complex whole. A specific kind of surveillance may very well be able to continue, sanctioned by some other paragraph, even if Section 215 is gone. It’s also misleading when media reports that the Section 215 intelligence stopped on June 1st. In reality it continues for at least six months, maybe longer, to safeguard ongoing investigations.

So the conclusion is that the practical impact of this mini reform is a lot less significant than the headlines would suggest. It’s not the end of surveillance. It doesn’t guarantee privacy for people using US-based services. It is, however, an important and welcome signal that the political climate in the US is changing. It’s a sign of a more balanced view on security versus basic human rights. Let’s hope that this climate change continues.

Safe surfing,
Micke

Image by Christian Holmér

Your favorite breakfast cereal and other things Twitter knows about you

At Re:publica 2015, our Chief Research Officer Mikko Hypponen told the main stage crowd that the world’s top scientists are now focused on the delivery of ads. “I think this is sad,” he said.

To give the audience a sense of how much Twitter knows about its users, he showed them the remarkable targeting the microblogging service offers its advertisers. If you use the site, you may be served promoted tweets based on the following:

1. What breakfast cereal you eat.

2. The alcohol you drink.

3. Your income.

4. If you suffer from allergies.

5. If you’re expecting a child.

And that’s just the beginning. You can be targeted based not only on your recent device purchases but things you may be in the market for, like a new house or a new car. You can see all the targeting offered by logging into your Twitter, going to the top right corner of the interface, clicking on your icon and selecting “Twitter Ads”.

Can Twitter learn all this just from your tweets and which accounts you follow?

No, Mikko said. “They buy this information from real world shops, from credit card companies, and from frequent buyer clubs.”

Twitter then connects this information to you based on… your phone number. And you’ve agreed to have this happen to you because you read and memorized the nearly 7,000 words in its Terms and Conditions. Because everyone reads the terms and conditions.

Full disclosure: We do occasionally promote tweets on Twitter to promote our digital freedom message and tools like Freedome that block ad trackers. It’s an effective tool and we find the irony rich.

Part of our mission is to make it clear that there’s no such thing as “free” on the internet. If you aren’t paying a price, you are the product. Aral Balkan compares social networks to a “creepy uncle” that pays the bills by listening to as many of your conversations as it can, then selling what it’s heard to its actual customers.

And with the world’s top minds dedicated to monetizing your attention, we just think you should be as aware of advertisers as they are of you. Most of the top URLs in the world are actually trackers that you never access directly. To get a sense of what advertisers learn every time you click, check out our new Privacy Checker.

Cheers,

Jason

The one question that could change the privacy debate

How important is it to ask the right question? Our Security Advisor Sean Sullivan thinks it’s so important that it can either help or hurt your cause.

Most anyone who has debated the issues of government surveillance and online tracking by corporations has likely faced someone who dismisses concerns with “I don’t have anything to hide.”

This is apparently a very popular sentiment. 83 percent of respondents in the United Kingdom answered “No” to the question “Do you have anything to hide?” in a new F-Secure survey.

“You might as well be asking people – are you a dishonest person?” Sean wrote in our latest Threat Report (link goes to PDF). “The question is emotionally charged and so of course people react to it in a defensive manner – I think it is perfectly natural that 83% of people said no.”

Sean suggested another question that reframes the debate: “Would you want to share everything about your life with everyone everywhere, all the time, forever?”

Think about just your Google Search history. Seriously, take a look at it — here’s how you can see it (and delete it).

“And my prediction was proven correct – 89% of respondents did not want to be exhibitionists,” he wrote.

Both questions, he notes, at the core ask, “Do you think privacy is important?” One does it in a way that’s accusatory. The other in a way that’s explanatory.

Sean suggests that we all have things in our past we’d rather forget and asking the right question can get people to see that quite quickly.

There’s reason to be pessimistic about privacy given that there has been little substantial change in U.S. government policy since the Snowden revelations began. But even that may change soon with bipartisan revisions to the law that legalized mass surveillance.

This imperfect attempt to limit the NSA’s bulk collection is a promising start of a major shift away from methods that have done more to stifle digital freedom than to achieve the unachievable goal of creating a world without threats, if it’s indeed just a start.

Maybe we’re starting to ask the right questions.

[Image by Ashleigh Nushawg | Flickr]

How about ‘Take Your Work to Kid’ Day?

In the United States, Australia and Canada, April 23 will be Take Our Sons and Daughters to Work Day. But given our changing economy and workplace, is one day enough to improve the bonds between parent and child?

Originally created to give girls a chance to “shadow” their parents in the workplaces women have so often been excluded from, Take Your Kid to Work Day, as it’s often called, was expanded in 2003 to include boys as a way to help all kids see “the power and possibilities associated with a balanced work and family life.”

It’s a nice ideal, but it isn’t much of a reality, at least in many industrial countries.

Americans spend an average of 1,788 hours a year at work. Most parents with full-time jobs will spend almost two-thirds of their day working and sleeping, leaving little time for anything else.

[Chart: average time spent sleeping, working, on leisure and with family]

Hopefully your country is a little better at balancing work and home. Finnish workers, for instance, spent 1,666 hours on average at work in 2013; that’s 122 hours, or 3 full weeks, less than their American counterparts. Don’t be jealous: German workers only averaged 1,388 hours at work in 2013.

Chances are wherever you live your kids already see you at work. A 2012 survey found that 60 percent of Americans are email accessible for 13.5 hours a weekday with an extra 5 hours on the weekend.

Given the extraordinary demands work makes on us, perhaps you can make a demand on your work to be a bit more flexible. Given that we’re nearly always accessible, why can’t parents plan around their kids’ schedules and get some work done?

Activities like sports, dance, karate and other arts offer parents a chance to be an active observer of their kids while getting some work done on a mobile PC or device while their children are being supervised by another adult.

Given that 70 percent of millennials use their own devices for work, it’s likely that younger parents already do this to some degree on their phones and tablets. But they’re likely not thinking about the potential data leakage that can occur, especially when using public Wi-Fi built on old technology that could expose your identity and possibly even your email.

But with security software and a virtual private network — like our Freedome VPN — you can be about as secure in the office as you are out in the world seeing how your kids work, as they get another chance to see you.

Cheers,
Sandra

[Image by Wesley Fryer | Flickr]

5 things you need to know about securing our future

“Securing the future” is a huge topic, but our Chief Research Officer Mikko Hypponen narrowed it down to the two most important issues in his recent keynote address at the CeBIT conference. Watch the whole thing for a Matrix-like immersion into the two greatest needs for a brighter future — security and privacy.

To get started, here are some quick takeaways from Mikko’s insights into data privacy and data security in a threat landscape where everyone is being watched, everything is getting connected and anything that can make criminals money will be attacked.

1. Criminals are using the affiliate model.
About a month ago, one of the guys running CTB Locker — ransomware that infects your PC and holds your files until you pay in bitcoin to release them — did a reddit AMA to explain how he makes around $300,000 with the scam. After a bit of questioning, the poster revealed that he isn’t CTB’s author but an affiliate who simply pays for access to a trojan and an exploit kit created by a Russian gang.

“Why are they operating with an affiliate model?” Mikko asked.

Because now the authors are most likely not breaking the law. In the over 250,000 samples F-Secure Labs processes a day, our analysts have seen similar affiliate models used with the largest banking trojans and GameOver ZeuS, which he notes are also coming from Russia.

No wonder online crime is the most profitable IT business.

2. “Smart” means exploitable.
When you think of the word “smart” — as in smart tv, smartphone, smart watch, smart car — Mikko suggests you think of the word exploitable, as it is a target for online criminals.

Why would the emerging Internet of Things (IoT) be a target? Think of the motives, he says. Money, of course. You don’t need to worry about your smart refrigerator being hacked until there’s a way to make money off it.

How might the IoT become a profit center? Imagine, he suggests, if a criminal hacked your car and wouldn’t let you start it until you pay a ransom. We haven’t seen this yet — but if it can be done, it will.

3. Criminals want your computer power.
Even if criminals can’t get you to pay a ransom, they may still want into your PC, watch or fridge for the computing power. The denial of service attack against Xbox Live and PlayStation Network last Christmas, for instance, likely employed a botnet that included mobile devices.

IoT devices have already been hijacked to mine for crypto-currencies that could be converted to Bitcoin, then dollars or “even more stupidly into Rubles.”

4. If we want to solve the problems of security, we have to build security into devices.
Knowing that almost everything will be able to connect to the internet requires better collaboration between security vendors and manufacturers. Mikko worries that companies that have never had to worry about security — like a toaster manufacturer, for instance — are now getting into the IoT game. And given that the cheapest devices will sell the best, they won’t invest in proper design.

5. Governments are a threat to our privacy.
The success of the internet has led to governments increasingly using it as a tool of surveillance. What concerns Mikko most is the idea of “collecting it all.” As Glenn Greenwald and Edward Snowden pointed out at CeBIT the day before Mikko, governments seem to be collecting everything — communication, location data — on everyone, even if you are not a person of interest, just in case.

Who knows how that information may be used in a decade from now given that we all have something to hide?

Cheers,

Sandra

The Freedome approach to privacy

We were recently asked a series of questions about how Freedome protects private data by TorrentFreak.com. Since we believe transparency and encryption are keys to online freedom, we wanted to share our answers that explain how we try to make the best privacy app possible.

1. Do you keep ANY logs which would allow you to match an IP-address and a time stamp to a user of your service? If so, exactly what information do you hold and for how long?
We do not keep any such logs. If ever required by law under a jurisdiction, we would implement such a system, but only where applicable and keeping storage time to the minimum required by law of that respective jurisdiction. Note also that no registration is required to use our service, so any log information would generally map to an anonymous, random user ID (UUID) and the user’s public IP address.

2. Under what jurisdiction(s) does your company operate?
Freedome is a service provided from Finland by a Finnish company, and manufactured and provided in compliance with applicable Finnish laws.

3. What tools are used to monitor and mitigate abuse of your service?
We have proprietary tools for fully automated traffic pattern analysis, including some DPI for the purpose of limiting peer-to-peer traffic on some gateway sites. Should we detect something that is not in line with our acceptable use policy, we can rate limit traffic from a device, or block a device from accessing the VPN service. All of this is automated and happens locally on the VPN gateway.

4. Do you use any external email providers (e.g. Google Apps) or support tools ( e.g Live support, Zendesk) that hold information provided by users?
We do not use any external email providers, but our users can, for example, sign up for beta programs with their email address and send us feedback by email. The email addresses are used only to communicate things like product availability.

In the future, paying customers can also use our support services and tools such as chat. In those cases, we do hold information that customers provide us voluntarily. This information is incident based (connected to the support request) and is not connected to any other data (e.g. customer information, marketing, licensing, purchase or any Freedome data). This data is purely used for managing and solving support cases.

5. In the event you receive a DMCA takedown notice or European equivalent, how are these handled?
There is no content in the service to be taken down. Freedome is a data pipeline and does not obtain direct financial benefit from user content accessed while using the service. While some of the other liability exclusions of DMCA (/ its European equivalent) apply, the takedown process itself is not really applicable to (this) VPN service.

6. What steps are taken when a valid court order requires your company to identify an active user of your service? Has this ever happened?
Law enforcement data requests can effectively be made only to F-Secure Corporation in Finland. If a non-Finnish authority wants to request such data from F-Secure, the request will be made by the foreign authorities directly to the Finnish police or via Interpol in accordance with procedures set out in international conventions. To date, this has never happened for the Freedome service.

7. Does your company have a warrant canary or a similar solution to alert customers to gag orders?
We do not have a warrant canary system in place. Instead, Freedome is built to store as little data as possible.

Since a warrant canary would typically be triggered by a law enforcement request on an individual user, they are more reflective of the size of the customer base and of how interesting the data in the service is from a law enforcement perspective. They are a good, inventive barometer but do not really measure the risk regarding a specific user’s data.

8. Is BitTorrent and other file-sharing traffic allowed on all servers? If not, why?
BitTorrent and other peer-to-peer file sharing is rate limited / blocked on some gateway servers due to acceptable use policies of our network providers. Some providers are not pleased with a high volume of DMCA takedown requests. We use multiple providers (see Question #12) and these blocks are not in place on all the servers.

9. Which payment systems do you use and how are these linked to individual user accounts?
There are multiple options. The most anonymous way to purchase is by buying a voucher code in a retail store. If you pay in cash, the store will not know who you are. You then enter the anonymous voucher code in the Freedome application, and we will then confirm from our database that it is a valid voucher which we have given for sale to one of our retail channels. The retail store does not pass any information to us besides the aggregate number of sold vouchers, so even if you paid by a credit card, we do not get any information about the individual payment.

For in-app purchases (e.g., Apple App Store, Google Play) you do in most cases need to provide your details, but we never receive those; we get just an anonymous receipt. The major app stores do not give application vendors any contact information about end users.

When a purchase is made through our own e-store, the payment and order processing is handled by our online reseller, cleverbridge AG, in Germany. Our partner collects payment information together with name, email, address, etc., and stores these, but in a separate system from Freedome. In this case we have a record of who has bought Freedome licenses, but linking a person to any usage of Freedome is intentionally difficult and against our policies. We also don’t keep any actual usage log, and therefore could not point to one anyway.

10. What is the most secure VPN connection and encryption algorithm you would recommend to your users? Do you provide tools such as “kill switches” if a connection drops and DNS leak protection?
Our application does not provide user-selectable encryption algorithms. Servers and clients are authenticated using X.509 certificates with 2048-bit RSA keys and SHA-256 signatures. iOS clients use IPsec with AES-128 encryption. Other clients (Android, Windows, OS X) use OpenVPN with AES-128 encryption. Perfect Forward Secrecy is enabled (Diffie-Hellman key exchange).
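As a rough illustration, the OpenVPN-based clients’ parameters described above would correspond to a client configuration along these lines. This is a hypothetical sketch only; the actual Freedome configuration is not public, and the option values here are inferred from the answer above:

```text
# Hypothetical OpenVPN client settings matching the parameters above;
# the real Freedome configuration is not published.
client
proto udp
cipher AES-128-CBC        # AES-128 data-channel encryption
auth SHA256               # SHA-256 HMAC authentication
remote-cert-tls server    # verify the server's X.509 certificate role
# DHE key exchange for forward secrecy, 2048-bit RSA authentication:
tls-cipher TLS-DHE-RSA-WITH-AES-128-CBC-SHA256
```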

We provide DNS leak protection by default, and we also provide IPv6 over the VPN so that IPv6 traffic will not bypass the VPN. Kill switches are not available. The iOS IPsec client does not allow traffic to flow unless the VPN is connected, or unless the VPN is explicitly turned off by the user. The Android app, in the “Protection ON” state, keeps capturing internet traffic even if the network or VPN connection drops, so there are no traffic or DNS leaks during connection drops. If the Freedome application process gets restarted by the Android system, there is a moment where traffic could theoretically leak outside the VPN. At device startup, Android 4.x requires the user’s consent before it allows a VPN app to start capturing traffic; until consent is given, traffic may theoretically leak. (Android 5 changes this, as it does not forget the user’s consent at device reboot.)

11. Do you use your own DNS servers? (if not, which servers do you use?)
We do have our own DNS servers.

12. Do you have physical control over your VPN servers and network or are they outsourced and hosted by a third party (if so, which ones)? Where are your servers located?
In most locations we utilize shared hardware operated by specialized hosting vendors, but we also have our own dedicated hardware in some locations. Providers vary from country to country and over time. In some countries we also use multiple providers at the same time for improved redundancy. An example provider is SoftLayer, an IBM company, which we use in multiple locations.

For real data privacy, transparency and encryption are our best hopes

With Net Neutrality close to becoming a reality in the United States, Europe’s telecom companies appear ready to fight for consumers’ trust.

At the Mobile World Congress in Barcelona this week, Telefonica CEO Cesar Alierta called for strict rules that will foster “digital confidence”. Vodafone CEO Vittorio Colao’s keynote highlighted the need for both privacy and security. Deutsche Telekom’s Tim Höttges was in agreement, noting that “data privacy is super-critical”.

“80% [of consumers] are concerned about data security and privacy, but they are always clicking ‘I accept [the terms and conditions], I accept, I accept’ without reading them,” said Höttges, echoing a reality we found when conducting an experiment that — in the fine print — asked people to give up their first born child in exchange for free Wi-Fi.

The fight for consumers’ digital freedom is close to our hearts at F-Secure, and we agree that strong rules about data breach disclosure are essential to regaining consumers’ trust. However, we believe that anything that limits freedom in the name of privacy must be avoided.

Telenor CEO and GSMA chairman Fredrik Baksaas noted the very real problem that consumers face managing multiple online identities with multiple passwords. He suggested tying digital identity to SIM cards. This dream of a single identity may seem liberating on a practical level. But beyond recently exposed problems with SIM security, a chained identity could disrupt some of the key benefits of online life — the right to define your identity, the liberty to separate work life from home life, the ability to participate in communities with an alternate persona.

GSMA is behind a single authentication system, adopted by more than a dozen operators, that is tied to phones and could simplify life for many users. But it will likely not quench the desire for multiple email accounts or identities on a site, nor completely solve the conundrum of digital identity.

The biggest problem is that so many of us aren’t aware of what we’ve already given up.

The old saying goes, “If it’s free, you’re the product”.  This was a comfortable model for generations who grew up trading free content in exchange for watching or listening to advertisements. But now the ads are watching us back.

F-Secure Labs has found that more than half of the most popular URLs in the world aren’t accessed directly by users. They’re accessed automatically when we visit the sites we intend to visit, and they’re used to track our activity.

Conventional terms and conditions are legal formalities that offer no benefits to users. As our Mikko Hypponen often says, the biggest lie on the Internet is “I have read and agreed to the terms and conditions.” This will have to change if there is to be any hope of a world where privacy is respected.

In the advanced world, store-bought food is mandated to have its nutritional information printed on the packaging. We don’t typically read — nor understand — all the ingredients. But we get a snapshot of what effect it will have on us physically.

How about something similar for privacy: a label that tells us how a particular site or application treats our data?

What data is captured?

Is it just on this site, or does it follow you around the web?

How long is it stored?

With whom is it shared?

Key questions, simply answered — all with the purpose of making it clear that your privacy has value.
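A machine-readable version of such a label could be as simple as a small structured document. The sketch below is purely illustrative; the field names and values are invented for this example, not part of any real standard:

```python
# A purely illustrative "privacy nutrition label" - field names are invented.
privacy_label = {
    "data_captured": ["email address", "browsing history", "location"],
    "tracking_scope": "this site only",   # or "across the web"
    "retention": "13 months",
    "shared_with": ["advertising partners", "analytics providers"],
}

def summarize(label):
    """Render the label as the short, human-readable snapshot proposed above."""
    return "\n".join([
        f"Captured: {', '.join(label['data_captured'])}",
        f"Tracking: {label['tracking_scope']}",
        f"Stored for: {label['retention']}",
        f"Shared with: {', '.join(label['shared_with'])}",
    ])

print(summarize(privacy_label))
```

A site could publish the structured form for browsers and auditors, while users see only the four-line snapshot, much like the nutrition box on food packaging.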

Along with this increased transparency, operators and everyone who cares about digital rights must pay close attention to the effort to ban or limit encryption in the name of public safety. The right of law-abiding citizens to cloak their online activity is central to democracy. And all the privacy innovations in the world won’t matter if we cannot expect that right to exist.

We are entering an era where consumers will have more reasons, need and opportunities to connect than ever before. The services that offer us the chance to be more than a product will be the ones that thrive.

UPDATE: Micke reminds me to point out that F-Secure has already taken steps towards simple, clean disclosure with documents like this Data Transfer Declaration.

Mikko Hypponen on the Internet: “There are so many ways we could screw it up”

It’s Safer Internet Day, but what does that mean? In a recent F-Secure poll, 50% of respondents said they trust the Internet very little or not at all when it comes to their security and privacy. I spoke with F-Secure’s Chief Research Officer Mikko Hypponen, known worldwide for fighting viruses and defending the Net, about the state of the Internet today, the innovation he’s most excited about, and what he’d miss the most if it all came to an end.

 

The theme of Safer Internet Day this year is “Let’s create a better internet together.” What would you say is the state of the Internet today?

The Internet is a wonderful thing, but there are things going wrong in the online world. Malware spread by cybercriminals of course, governments using the technology to spy on their own citizens, and all the other privacy concerns that technology creates. We have two roads we can go down: we can either continue the slide away from our digital freedoms, or we can take action to try to preserve a free and open Internet.

 

Lately you’ve been talking about some of the pitfalls of Internet innovations (Bitcoin, Internet of Things, etc.). So are there any services or innovations out there you’re really excited about, that you think really hold promise for creating a better Internet?

I’m a strong believer in crowdsourcing. Two examples: Wikipedia and crowd funding like Kickstarter. Wikipedia is the greatest collection of knowledge for mankind. To today’s teens, the idea that an encyclopedia would be written by just five people, the way they used to be, is totally foreign. Today’s teens think “how could you even trust that?”

With crowd funding, you have a network of people with a niche interest who previously would never have found each other, like, say, people who collect pink ribbons. They can get together and fund something unique they’re all passionate about. You would never find enough in one city to make a difference, but on the Internet they can find each other and have the power to do something they care about. I have funded over 20 Kickstarters myself.

 

You’ve said that you’re concerned that we might not have an Internet to pass down to our children. What did you mean by that? What could possibly happen to the Internet?

I didn’t mean the Internet will cease to exist. What I meant is we’ve gotten to enjoy a free and open Internet, but there are so many ways we can screw it up. I would hate for our children to have a less free, open Internet than we have because of our inactivity. Back when the Internet was born, the powers that be didn’t pay it any attention. Nowadays the powers that be realize very well its power and that’s why they want to control it, restrict it, monitor it. There is a price to pay to enjoy a free and open Internet and that price is that you have to defend it. It’s your – and all of our – responsibility.

 

Safer Internet Day has a strong focus on children. What are your observations about kids online?

Kids don’t think about the Internet at all – simply because they don’t know of a world without it. To talk to them about a world where that isn’t the norm is like reading them a book about the 18th century. It’s as natural to them as electricity. But of course it’s different from things like electricity in that it has to be defended.

 

OK, so the Internet isn’t going to completely collapse. But let’s say it did, today…if that happened, what would you miss the most?

Serendipity. Randomly surfing the Internet and randomly learning about things you never heard of before, never knew existed.

 

What thought would you like to leave us with?

We’re seeing things going wrong in the online world, and very few people are taking action. Some people say there’s nothing that can be done, so give up trying to do anything. Just get used to it. But I don’t believe that. When we see things going wrong, we should try to stop them. We can’t all become activists, but even so there are small things we can all do.

 

Check out Mikko’s ideas for practical ways we can all take action.

 

Safer Internet Day: Do you trust the Internet?

 

Privacy is non-negotiable: We have the right to cover our arse — or expose it

We can’t all be as brave as Emma Holten.

When an ex-boyfriend shared a series of intimate photos taken when she was 17 on a “revenge porn” website, Emma — now 23 years old — decided she couldn’t let him and the anonymous people who attacked her online have the last word.

She got her own photographer and published a series of images she was comfortable with, along with an evocative essay on “Consent”.

“The pictures are an attempt at making me a sexual subject instead of an object,” she wrote. “I am not ashamed of my body, but it is mine. Consent is key. Just as rape and sex have nothing to do with each other, pictures shared with and without consent are completely different things.”

Last year when a series of hacks resulted in nude photos of Hollywood stars being posted online, many of us were shocked at the attempt to blame the victims.

No one asks, “Why would you dare to keep your credit card information on your phone?” when someone has had their private financial data stolen.

But these women were lambasted for daring to have private images of themselves stored somewhere they assumed was private. It was as if a woman should expect images of her body to become public property. Certainly, we at F-Secure believe that you should take the best precautions possible to secure your online life, especially if you’re highly visible. But if the most intimate parts of our lives are not sacred, nothing is.

On Friday the 26th of September 2014, eight of my female colleagues, a male coworker and myself decided to pose naked in front of the camera to support the victims of the hackings.

As we removed our bathrobes and became vulnerable to anyone who had a camera, I thought, “If this was hard for us, with a professional photographer in a private space, doing this voluntarily and knowing that the pictures would be published with our full consent, then how hard was it for the celebrities who were exposed with zero consent? What will happen when we, anonymous people, are seen naked by the whole world? What will my colleagues think when they see my naked pictures? What will my friends and parents in Spain, a macho country, say? Why do I care?”

As I read Emma’s essay I realized the answer. It is not about nudity.

It is not about the relationship I have with my body. It is about power, fear and consequences: fear of judgment and a lack of support for the victims. It was about knowing that the victims were taking more of the blame than the perpetrators. It was about the fear that I could be next, and the greater fear that by doing nothing, I was ensuring there would be more victims who’d be blamed for their own violation.

We held back on publishing the photos because we feared how they would be interpreted. Now I realize that there was a crucial point we hadn’t yet identified in what we were trying to say. Emma helped us understand that consent is the key. Emma had to reclaim how her body was being reinterpreted, and she inspired us to make the statement we wanted to make all along: only we have the right to take what is private and make it public.

We reached out to Emma to let her know she inspired us and now we’re reaching out to you to ask you to join us. Shout out loud that privacy is a fundamental right and violating it is a crime. Join us by posting a nude image of yourself online with the hashtag #uncoveryourarse.

The message is: consent matters. And like Emma, we won’t be shamed into silence.

– By Laura

Have the Snowden revelations changed your attitudes about privacy?

It’s been well over a year since the first revelations from former National Security Agency contractor Edward Snowden became public.

Though President Obama has called for reforms of his government’s mass surveillance policies, the one significant attempt to reform U.S. laws and end “bulk collection” of data, the USA Freedom Act, failed in November. And many privacy advocates warned that even that bill was far too limited to do much good or excite the public. With the PATRIOT Act, the law passed in the immediate aftermath of 9/11, up for renewal in 2015, a larger debate may be coming about the tactics the NSA has embraced over the last decade and a half.

But for now, all that has changed is that we are slightly more informed about how governments may be spying on us.


Will we just give in to an “aquarium” life and a perverse definition of “privacy”? Watch our Mikko Hypponen’s latest talk “The Internet is On Fire” and see if you’re ready to grab the microphone.

How have the Snowden revelations changed your views about privacy?

[Image by Josh Hallett via Flickr]

Facebook’s new terms, is the sky falling?

You have seen them if you are on Facebook, and perhaps even posted one yourself. I’m talking about the statements that aim to defuse Facebook’s new terms of service, which are claimed to take away the copyright to the stuff you post. To put it shortly, the virally spreading disclaimer is meaningless from a legal point of view and contains several fundamental errors. But I think it is very good that people are becoming aware of their intellectual property rights, and of the fact that new terms may be a threat.

Terms of service? That stuff in legalese that most people just click away when starting to use a new service or app. What is it really about and could it be important? Let’s list some basic points about them.

  • The terms of service or EULA (End User License Agreement) is a legally binding agreement between the service provider and the user. It’s basically a contract. Users typically agree to the contract by clicking a button or simply by using the service.
  • These terms are dictated by the provider of the service and are not negotiable. This is quite natural for services with a large number of users; negotiating individual contracts would not be feasible.
  • Terms of service is a defensive tool for companies. One of their primary goals is to protect against lawsuits.
  • These terms are dictated by one party and almost never read by the other. Needless to say, this may result in terms that are quite unfavorable for us users. This was demonstrated in London a while ago. No, we have not collected any children yet.
  • Another bad thing for us users is the lack of competition. There are many social networks, but only one Facebook. Opting out of the terms means quitting, and going to another service is not really an option if all your friends are on Facebook. Social media is by its nature monopolizing.
  • The upside is that terms of service can’t change the law. Legislation provides a framework of consumer and privacy protection that can’t be overridden by an agreement. Unreasonable terms, like paying with your firstborn child, are void.
  • But be aware that the law of your own country may not be applicable if the service is run from another country.
  • Also be aware that these terms only affect your relationship to the provider of the service. Intelligence performed by authorities is a totally different thing and may break privacy promises given by the company, especially for services located in the US.
  • The terms usually include a clause that grants the provider a license to do certain things with the stuff users upload. There’s a legitimate reason for this, as the provider needs to copy the data between servers and publish it in the agreed way. This Facebook debacle is really about the extent of these clauses.

Ok, so what about Facebook’s new terms of service? Facebook claims it wants to clarify the terms and make them easier to understand, which really isn’t the full story. The terms have always been pretty intrusive regarding both privacy and intellectual property rights to your content, and the latest change is just one more step on that path. Most of the recent stir is about people fearing that their photos and other content will be sold or utilized commercially in some other way. This is no doubt a valid concern with the new terms.

Let’s first take a look at the importance of user content for Facebook. Many services, like newspapers, rely on user-provided content to an increasing extent, but Facebook is probably the ultimate example. All the content you see on Facebook is provided either by the users or by advertisers – none by Facebook itself. And Facebook’s revenue is almost US$8 billion without creating any content of its own. Needless to say, the rights to use our content are important to the company. What Facebook is doing now is ensuring that it has a solid legal base on which to build current and future business models.

But another thing of paramount importance to Facebook is the users’ trust. This trust would be severely damaged if private photos started appearing in public advertisements. It would cause a significant change in people’s relationship with Facebook and decrease the volume of shared content, which is what Facebook lives on. This is why I am ready to believe Facebook when it promises to honor our privacy settings when utilizing user data.

Let’s debunk two myths spread by the disclaimer. Facebook is *not* taking away the copyright to your stuff. Copyright is like ownership. What they do, and have done previously too, is create a license that grants them rights to do certain things with your stuff. But you still own your data. The other myth is that a statement posted by users would have some kind of legal significance. No, it doesn’t. The terms of service are designed to be approved by using the service; anyone can opt to stop using Facebook and thus no longer be bound by the terms. But the viral statements are just one-sided declarations that are in conflict with the mutually agreed contract.

I’m not going to dig deeper into the changes, as that would make this post long and boring; instead, I’ll just link to an article with more info. But let’s share some numbers underlining why it is futile for ordinary mortals to even try to keep up with the terms. I browsed through Facebook’s set of terms and found 10 different documents containing some kind of terms – and that’s just the material for ordinary users; I left out the terms for advertisers, developers, etc. Transferring the text from all these into MS Word gave 41 pages in a 10 pt font, almost 18,000 words and about 108,000 characters. Quite a read! Worst of all, there’s no indication of which parts have changed. Is anyone still surprised that users don’t read the terms?
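Counts like these are easy to reproduce for any terms-of-service text. A minimal sketch in Python (the sample string here is just a stand-in for the real documents):

```python
def terms_stats(text):
    """Word and character counts of a terms-of-service text."""
    words = text.split()  # split on any whitespace
    return {"words": len(words), "characters": len(text)}

# Tiny demo; for the real exercise you would paste in all ten documents.
sample = "I have read and agreed to the terms and conditions."
print(terms_stats(sample))  # {'words': 10, 'characters': 51}
```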

So it’s obvious that ordinary users really can’t keep up with terms like this. The most feasible way to deal with Facebook’s terms of service is to consider these three strategies and pick the one that suits you best.

  1. Keep using Facebook and don’t worry about how they make money with your data.
  2. Keep using Facebook but be mindful about what you upload. Use other services for content that might be valuable, like good photos or very private info.
  3. Quit Facebook. That’s really the only way to decline their terms of service.

By the way, my strategy is number 2 in the list above, as I have explained in a previous post. That means ignoring the terms, expecting the worst possible treatment of your data, and posting selectively with that in mind. One can always put valuable stuff on some other service and post a link on Facebook.

So posting the viral disclaimer is futile, but I disagree with those who say it’s bad and shouldn’t be done. It lacks legal significance, but it is an excellent way to raise awareness. Part of the problem with unbalanced terms is that nobody cares about them. A higher level of awareness will make people think before posting, put some pressure on providers to make the terms more balanced, and make legislators more active, thus improving the legal framework that controls these services. Legislation is, by the way, our most important line of defense, as it is created by a more neutral party. The legislator should, at least in theory, balance the companies’ and end users’ interests in a fair way.

 

Safe surfing,
Micke

 

Image: Screenshot from facebook.com

A Few Thoughts on Privacy in the Age of Social Media

Everyone already knows there are privacy issues related to social media and new technologies. Non-tech-oriented friends and family members often ask me questions about whether they should avoid Facebook messenger or flashlight apps. Or whether it's OK to use credit cards online in spite of recent breach headlines. The mainstream media writes articles about leaked personal photos and the Snappening. So, it's out there. We all know. We know there are bad people out there who will attempt to hack their way into our personal data. But, that's only a small part of the story.

For those who haven't quite realized it, there's no such thing as a free service. Businesses exist to generate returns on investment capital. Some have said about Social Media, "if you can't tell what the product is, it's probably you." To be fair, most of us are aware that Facebook and Twitter will monetize via advertising of some kind. And yes, it may be personalized based on what we like or retweet. But, I'm not sure we fully understand the extent to which this personal, potentially sensitive, information is being productized.

Here are a few examples of what I mean:

Advanced Profiling

I recently viewed a product marketing video targeted at communications service providers. It describes the massive adoption of mobile devices and broadband connections, suggesting that by next year there will be 7.7 billion mobile phones in use with 15 billion connections globally, and that "all of these systems produce an amazing amount of customer data" — to the tune of 40 TB per day, only 3% of which is transformed into revenue. The rest isn't monetized. (Gasp!) The pitch is that by better profiling customers, telcos can improve their ability to monetize that data. The thing that struck me was the extent of the profiling.



As seen in the screen capture, the user profile presented extends beyond the telco services acquired or service usage patterns into the detailed information that flows through the system. The telco builds a very personal profile using information such as favorite sports teams, life events, contacts, location, favorite apps, etc. And we should assume that favorite sports team could easily be religious beliefs, political affiliations, or sexual interests.

IBM and Twitter

On October 29, IBM and Twitter announced a new relationship that enables enterprises to "incorporate Twitter data into their decision-making." In the announcement, Twitter describes itself as "an enormous public archive of human thought that captures the ideas, opinions and debates taking place around the world on almost any topic at any moment in time." And now all of those thoughts, ideas, and opinions are available for purchase through a partnership with IBM.

I'm not knocking Twitter or IBM. The technology behind these capabilities is fascinating and impressive. And perhaps Twitter users allow their data to be used in these ways by accepting the Terms of Use. But, it feels a lot more invasive to essentially provide any third party with a siphon into the massive data that is our Twitter accounts than it would be to, for example, insert a sponsored tweet into my feed that may be selected based on which accounts I follow or keywords I've tweeted.

Instagram Users and Facebook

I recently opened Facebook to see an updated list of People I may know. Most Facebook users are familiar with the feature. It can be an easy way to locate old friends or people who recently joined the network. But something was different. The list was heavily populated with people I sort of recognize but have never known personally.

I realized that Facebook was trying to connect me with many of the people behind the accounts I follow on Instagram. Many of these people don't use their real names, talk about their work, or discuss personal family matters on Instagram. They're photographers sharing photos. Essentially, they're artists sharing their art with anyone who wants to take a look. And it feels like a safe way to share.

But now I'm looking at a profile of someone I knew previously only as "Ty_Chi the landscape photographer" and I can now see that he is actually Tyson Kendrick, retail manager from Chicago, father of three girls and a boy. Facebook is telling me more than Mr. Kendrick wanted to share. And I'm looking at Richard Thompson, who's a marketing specialist for one of the brands I follow. I guess Facebook knows the real people behind brand accounts too. It started feeling pretty creepy.

What does it all mean?

Monetization of social media goes way beyond targeted advertising. Businesses are reaching deep into any available data to make connections or discover insights that produce better returns. Service providers and social media platforms may share customer details with each other or with third parties to improve their own bottom lines. And the more creative they get, the more our sense of privacy erodes.

What I've outlined here extends only slightly beyond what I think most people expect. But, we should collectively consider how far this will all go. If companies will make major financial decisions based on Twitter user activity, will there be well-funded campaigns to change user behavior on Social Media platforms? Will the free-flow exchange of ideas and opinions become more heavily and intentionally influenced?

The sharing/exchanging of users' personal data is becoming institutionalized. It's not a corner case of hackers breaking in. It's a systemic business practice that will grow, evolve, and expand.

I have no recipe to avoid what's coming. I have no suggestions for users looking to hold on to the last threads of their privacy. I just think it's worth thinking critically about how our data may be used and what that may mean for us in years to come.

Your privacy is our pride, part 2 of 3 – security is the foundation

Welcome back to this three-post series about F-Secure’s privacy principles. The first post is here.

We have already covered the fundamentals: the importance of privacy. In short, we avoid collecting unnecessary data, and we never use what we collect for purposes you have not endorsed.

But that’s not enough. We take on a great responsibility as soon as your data is stored on our systems. It’s not enough that we have good intentions; we must also ensure that others with malicious intent can’t misuse your data. That’s what we talk about today.

NO BACKDOORS

Many government agencies show an increasing interest in data ordinary people store in cloud services. There are several known cases where vendors have been forced to implement backdoors allowing agencies to examine and fetch users’ data. F-Secure operates in countries where we can’t be forced to do this, so you are secured against bulk data collection. But we are not trying to build a safe haven for criminals. We support law enforcement when a warrant is issued against a defined suspect based on reasonable suspicions. We do cooperate with officials in these cases, but validate each warrant separately.

THERE IS NO PRIVACY WITHOUT SECURITY

It’s not enough to promise we don’t misuse your data ourselves. We must make sure that no one else can either. This is done by applying high security standards to all planning and implementation work we do. Another security aspect is our own personnel. We have technical systems and processes that prevent employees from misusing your data.

WE CHOOSE SERVICE PROVIDERS WE CAN TRUST

Today’s complex systems are rarely built from the ground up by one company, and that’s the case for our systems as well. The level of security and privacy is defined by the chain’s weakest link, which means that we must apply the same strict principles to technology partners and subcontractors as well. Customers should never have to think about which licensed services a product contains. We naturally carry full responsibility for what we deliver to you, and our privacy principles cover it all, even if we rely on services and code made by someone else.

The last three principles will be covered in the next and final post. Stay tuned.

 

Safe surfing,
Micke

 

 

Our fundamental human rights are being violated

We are worried about our digital freedom and need your help. The world our children will inherit may lack some fundamental rights we take for granted, unless actions are taken now. Our Digital Freedom Manifesto is one such action. Read on to learn more.

The United Nations’ Universal Declaration of Human Rights, Article 12:

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.

I think this is a very good and important article, and most people probably agree. We have all gotten used to concepts like the secrecy of telephony and the postal service. In short, we have the right to privacy and the right to decide for ourselves what private information we share with others. And we value these rights. We would not accept our letters arriving opened, or the police installing cameras in our homes.

But on the Internet everything seems to be different. The information you think is private may actually be transferred and stored by systems far away from you, often in other countries. This gives a wide range of agencies and companies the technical ability to access your data. Article 12 is often your only protection, but you have no way to verify that all involved parties respect it. After the Snowden leaks we know for sure what we previously only feared: several countries pay no respect at all to Article 12. The ability to monitor most of the world’s Internet traffic, and that way gain political and economic benefits, is just too desirable, no matter how unethical it is. The USA, where most of our data is hosted, is sadly among the worst offenders.

If warrantless wholesale data collection for political and economic purposes isn’t a violation of Article 12, then what is? What’s really going on here? Are we ready to dump Article 12, or should something be done? Why do we accept the erosion of our digital rights, when similar violations in any other area of our lives would cause an immediate outcry?

We at F-Secure are ready to fight for your digital freedom. We do that by providing products that guard your online life, like F-Secure SAFE and F-Secure Freedome. But that is not enough. Guarding privacy is an uphill battle if the network’s foundations are unreliable or hostile. And the real foundations have nothing to do with technology; they are the laws regulating network use and the attitude of the authorities that enforce or break those laws.

That’s why we need the F-Secure Digital Freedom Manifesto. We know that many people around the world share our concern. The manifesto is crowdsourced and will be made available to the public and selected decision makers when ready. We want you to participate, preferably in your own words, or just by reading it and thinking about how valuable digital freedom is to you. The manifesto will not change anything by itself, but it will help raise awareness. And when people are aware, we can demand change. We have democracy, after all, right?

You can participate until June 30th. Or just read the draft and think about how all this affects your digital life. Right now is a good moment to get familiar with it.

Micke

#DigitalFreedom is always on the ballot — so vote!

“Should we worry?” Mikko Hypponen asked during his TED Talk How the NSA betrayed the world’s trust — time to act. “No, we shouldn’t worry. We should be angry, because this is wrong, and it’s rude, and it should not be done.”

What can be done to force politicians to listen to people who are fed up with the internet and smartphones being turned into tracking tools?

One of the most direct actions any citizen can take in a functioning democracy is to vote for candidates who respect #digitalfreedom. Elections for all 751 Members of the European Parliament will be held across the European Union from 22 to 25 May.

Unfortunately, in elections where voters are not motivated or informed, it’s those already with power who tend to have the most influence over the results.

WePromise.eu is attempting to raise the prominence of digital rights issues by encouraging candidates for the European Parliament to endorse a 10 Point Charter of Digital Rights. Like our own #DigitalFreedom Manifesto, it lays out what governments need to do to regain our trust.

Unfortunately, only 3,615 of the 503 million people living in the EU have endorsed the Charter. But it’s a start.

The old saying is, “If you don’t vote, you can’t complain.” Now we should say, “If you don’t vote #digitalfreedom, the government will know all your complaints — whether you want them to or not.”

Cheers,

Sandra

[Image by Rob Boudon via Flickr.com]

New look, new mission, new FREEDOM

Since classified documents illuminating America’s mass surveillance began to be released last year, we at F-Secure have been feeling different: uneasy, even a little angry.

For well over a decade, we’ve vowed to identify any government trojans. However, the general disregard for the privacy of individuals all over the world shown by the NSA and other government intelligence agencies shocked us. Suddenly, it seemed that we couldn’t just offer the award-winning security and backup solutions in the same way we had for 25 years.

We had to take a stand.

F-Secure was born in Finland, of a spirit of both connection and independence. Finland is part of the EU but outside NATO. It’s a global hotspot for tech innovation, but so committed to privacy that employers aren’t even allowed to Google job applicants. It’s part of the online revolution that has reshaped the way we communicate, but its economy was forced to dramatically re-adapt as cell phones became smartphones.

We’ve spent the last six months rethinking our mission, our products and what we stand for. Now we’re ready to announce that we’re no longer simply about “Protecting the Irreplaceable”, though our legacy will always burn deep in our DNA. Now, we have to be about something deeper.

Our new tagline is: Switch on Freedom


And it isn’t just about a promise to you, our customers, it’s about a stand against an invasive mindset that doesn’t value the privacy of your personal data, no matter how many times people suggest that surveillance is only a problem “if you have something to hide.” (Who doesn’t? Did you leave your house wearing pants today? Furthermore, it’s no one’s business but your own what you are doing online.)

Our new look says that we fight for digital freedom.

That’s what our Labs has been doing for decades by protecting you from online criminals. Now that we have even bigger enemies, our products can’t just protect your PC—they have to protect the way you connect and share, too.


Your hard drive, your VPN, your content is only as secure and private as the partner you choose to protect you. Our promise to you is that we will never compromise your privacy and we will never open our technology to any government for any reason other than a direct, lawful criminal investigation.

Your privacy is non-negotiable and that’s the core value we operate upon as we build our tools to set you free as you engage in the life you wish to lead.

Now – as the world is waking up to the real threat of governments with unchecked power to capture data on everything we do and keep it forever – is the time for us to stand up.

“The world is changing,” our Mikko Hypponen recently told the TED Radio hour. “We shouldn’t just blindly accept the change. Just because something is technologically possible, it might not be right. And we really have to think about these things now when we can still change them.”

We are doing what we can both in advocating for governments to respect privacy and in building solutions that protect your freedom, regardless of what politicians and corporations want.

We hope you’ll join us.

Cheers,
Sandra


Is e-mail OK for secret stuff?

Image by EFF

Short answer: No. Slightly longer answer: Maybe, but not without additional protection.

E-mail is one of the oldest and most widely used services on the Internet. It was developed during an era when we were comfortably unaware of viruses, worms, spam, e-crime and the NSA. And that is clearly visible in its architecture and blatant lack of security features. Without going deep into technical details, one can conclude that the security of plain e-mail is next to nonexistent. The mail standards by themselves do not provide any kind of encryption or verification of the communicating parties’ identities. All of this can be added with extra protection arrangements. But are you doing it, and do you know how?

Here are some points to keep in mind.

  • Hackers or intelligence agencies may tap into the traffic between you and the mail server. This is very serious, as it could reveal even your user ID and password, enabling others to log in to the server and read your stored mail. The threat can be mitigated by ensuring that the network traffic is encrypted. Most mail client programs offer an option to use SSL or TLS encryption for sent and received mail; see the documentation for your mail program or service provider. If you use webmail in your browser, you should make sure the connection is encrypted. See this article for more details. If it turns out that you can’t use encryption with your current service provider, then start looking for another one promptly.
  • Your mail is stored on the mail server. Three main factors affect how secure it is there: your own password and how secret you keep it, the service provider’s security policies, and the legislation of the country where the service provider operates. Most ordinary service providers offer decent protection against hackers and other low-resource parties, but less protection against the authorities in their home country.
  • Learn how to recognize phishing attacks as that is one of the most common reasons for mail accounts to be compromised.
  • There are some mail service providers that focus purely on secrecy and use some kind of encryption to keep messages secret. Hushmail (Canada) and Mega’s (New Zealand) planned service are good examples. Lavabit and Silent Mail used to provide this kind of service too, but they were shut down under pressure from officials. This recent development shows that services run in the US can’t be considered safe. US authorities can walk in at any time and request your data, or force the provider to implement backdoors, no matter what security measures the service provider implements. And it’s foolish to believe that this is used only against terrorists. It’s enough that a friend of a friend of a friend is targeted for some reason, or that some business interest competes with American interests.
  • The safest way to deal with most of these threats is to use end-to-end encryption. For this you need additional software like Pretty Good Privacy, a.k.a. PGP. It’s a bit of a hassle, as both parties need compatible encryption programs and must exchange encryption keys. But once that’s done, you have protection for both stored messages and messages in transit. In addition to secrecy, PGP also provides strong authentication of the message sender. This is the way to go if you deal with hot stuff frequently.
  • An easier way to transfer secret stuff is to attach encrypted files. You can, for example, use WinZip or 7-Zip to create encrypted packages. Select the AES encryption algorithm (if you have a choice) and use a hard-to-guess password that is long enough and contains upper- and lowercase letters, numbers and special characters. Needless to say, do not send the password to the other party by mail. Agreeing on the password is often the weakest link, so pay attention to it. Even phone and SMS may be unsafe if an intelligence agency is interested in you.
  • Remember that traffic metadata may reveal a lot even if you have encrypted the content: that is, information about whom you have communicated with and when. The only real protection against this is to use anonymous mail accounts that can’t be linked to you. This article touches on the topic.
  • Remember that there are always at least two parties in any communication, and no chain is stronger than its weakest link. It doesn’t matter how well you secure your mail if you send a message to someone with sloppy security.
  • Mail is typically stored in plaintext on your own computer if you use a mail client program. Webmail may also leave messages in the browser cache. This means you need to care about the computer’s security if you deal with sensitive information. Laptops and mobile devices are especially easy to lose or steal, which can lead to data leaks. Data can also leak through malware that has infected your computer.
  • If you work for a company and use the mail services it provides, then the company should have implemented suitable protection. Most large companies run their own internal mail services and route traffic between sites over encrypted connections. In this case you don’t have to take care of it yourself, but it may still be a good idea to check. Just ask the IT guy at the coffee table whether the NSA can read your mail, and see how he reacts.
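To make the advice about encrypted connections a little more concrete, here is a minimal Python sketch of the kind of verified TLS setup a mail client should use. The server name in the comment is a placeholder, not a real service; this only shows what a properly verifying configuration looks like, it is not a complete mail client.

```python
import ssl

# A default context verifies the server's certificate against trusted CAs
# and checks that the certificate matches the hostname -- exactly what you
# want when talking to a mail server over an encrypted connection.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # certificates are checked
print(context.check_hostname)                    # hostnames are verified

# With the standard imaplib module you would pass the context like this
# (hostname is a placeholder):
#   imaplib.IMAP4_SSL("imap.example.com", 993, ssl_context=context)
```

If a provider only works when you disable these checks, that is a strong sign you should look for another one.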
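As a small aid for the password advice above, here is a sketch that generates a hard-to-guess password for an encrypted package using Python’s standard `secrets` module. The length and the particular set of special characters are arbitrary choices for illustration, not a recommendation from any standard.

```python
import secrets
import string

def random_password(length=20):
    """Generate a random password containing upper- and lowercase letters,
    digits and special characters, using a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + "!#%&*+-=?@"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

A password like this is only useful if it reaches the other party over a channel separate from the mail itself, as the text above points out.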

Finally, sit down and think about what kind of mail secrecy you need. Imagine that all the messages you have sent and received were made public. What harm would that cause? Would it be embarrassing to you or your friends? Would it hurt your career or employer? Would it mean legal problems for you or your associates? (No, you do not need to be a criminal for this to happen; signing an NDA may be enough.) Would it damage the security of your country? Would it put your life, or the lives of others, at risk? And, harder to estimate, could any of this material cause you harm if it is stored for ten or twenty years and then released into a world quite different from today’s?

At this point you can go back to the list above and decide if you need to do something to improve your mail security.

Safe surfing,
Micke

Is democracy ready for the Internet age?

Information technology is infiltrating almost every area of our society, but there is one front where progress is notably slow: democracy. Why?

We still use representative democracy and elect politicians for several years at a time. This is largely done using pen and paper, and the votes are counted manually. Processing the votes seems like a task well suited for computers. And why do we even need to elect representatives when we could vote directly over the net on the big and important questions? Representative democracy was, after all, invented thousands of years ago, when people had to gather physically to hold a meeting. Back then it made sense to send someone to represent a group of people, but now we could involve a whole nation directly using the net. So what’s stopping us?

Let’s first look at IT as an aid in representative democracy. Voting machines have already been used for a long time in some countries, including the US. But there have been many controversies, and elections have even been declared invalid by courts (link in Finnish) due to problems in the electronic handling of votes.

Handling an election seems like a straightforward IT problem, but it really isn’t. Let’s keep in mind the fundamental requirements for an election: 1. The identity of voters and their right to vote must be verified. 2. It must be ensured that no one votes more than once. 3. It shall not be possible to determine how a person has voted. 4. The integrity of the result must be verifiable. The big problem is that these requirements conflict with each other. You must know who is voting, yet store the data in a way that makes it possible to verify the result without identifying the voter. This leads to complex designs involving cryptography. It is no doubt possible to develop systems that fulfill these needs. The hard part is verifying the systems thoroughly enough to be sure they really work.
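To illustrate why these requirements pull in different directions, here is a deliberately simplified Python sketch (all names are hypothetical): a registrar verifies identity and hands out one random token per eligible voter, while a separate ballot box accepts each token once and stores only the token and the choice. Real systems use cryptography such as blind signatures so that not even the registrar can link a token to a voter; this sketch only separates the two roles to show the idea.

```python
import secrets

class Registrar:
    """Verifies identity and issues one anonymous voting token per voter."""
    def __init__(self, eligible):
        self.eligible = set(eligible)
        self.issued_to = set()     # who has received a token (requirements 1 and 2)
        self.valid_tokens = set()  # tokens alone, never stored with a name (requirement 3)

    def issue_token(self, voter_id):
        if voter_id not in self.eligible or voter_id in self.issued_to:
            return None            # not eligible, or already got a token
        self.issued_to.add(voter_id)
        token = secrets.token_hex(16)
        self.valid_tokens.add(token)
        return token

class BallotBox:
    """Accepts each valid token once; stores only (token, choice)."""
    def __init__(self, valid_tokens):
        self.valid_tokens = valid_tokens
        self.votes = {}

    def cast(self, token, choice):
        if token not in self.valid_tokens or token in self.votes:
            return False           # forged token, or double voting
        self.votes[token] = choice
        return True

    def tally(self):
        # Anyone holding the stored (token, choice) pairs can recount
        # the result (requirement 4) without learning any voter's name.
        counts = {}
        for choice in self.votes.values():
            counts[choice] = counts.get(choice, 0) + 1
        return counts
```

Even this toy version shows the tension: the registrar must know identities, the ballot box must not, and the integrity of the whole thing depends on the two never being combined, which in a real system must be enforced cryptographically rather than by trust.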

And here psychology enters the scene. We all know pen and paper well, and we have learned to trust the traditional election system. There is a fairly large number of unclear votes in every election, and we have accepted that as a fact. But people are a lot more suspicious of computerized systems. Most of us lack the ability to understand how electronic voting works, and the requirements described above cause complexity that makes it hard even for many professionals. Only crypto experts have the true ability to audit it. This makes it hard to build a chain of trust between ordinary people and the voting system.

Is our suspicious attitude justified? Yes and no. We should be suspicious of complex electronic systems and put them through thorough scrutiny before using them in elections. We must demand that their design be open and audited by independent experts. But at the same time we forget that traditional security measures are far from perfect. Written signatures are a very weak method of proving identity, and a photo ID is not much better. A nice example is a friend of mine who keeps using an expired ID card just to test the system. The card is his own, and he still looks like the picture. The only problem is that the card expired 11 years ago. During these years the card has been rejected only once! It has been used several times when voting in elections. Needless to say, an expired electronic signature would not pass even once. Despite this, people typically trust written signatures and ID cards a lot more than computerized security measures. The same attitude is visible when discussing electronic voting.

Another real reason to be suspicious of electronic voting is computers’ ability to process massive amounts of data very quickly. There are always minor errors in traditional voting systems, but massive manipulation of the result is hard. In a computerized system, on the other hand, even a fairly small glitch may enable someone to make a big impact on the result.

The other side of the coin is the question of whether we need representative democracy at all anymore. Should we have net polls on the important questions instead? Well, representative democracy has an important benefit: continuity. The same people are given at least some time to achieve results before voters decide whether they should continue. But a four-to-six-year term is really too short to change the big things, and our politicians tend to focus on smaller and easier issues. Imagine how it would be if the people had a more direct say in decision making. That could lead to an even bigger lack of focus and strategic direction. Probably not a good idea after all.

But representative democracy can be complemented instead of replaced. Crowdsourcing is one area that is taking off. A lot of things can be crowdsourced, and legislative proposals are one of them. Many countries already have a constitution that allows ordinary citizens to prepare proposals and force the parliament to vote on them, if enough people support the proposal. Here in Finland a crowdsourced copyright act proposal made headlines globally when it recently passed the 50,000-supporter threshold (1.2% of the voting population). This is an excellent example of how modern Internet-based schemes can complement representative democracy. Finland’s current copyright legislation is almost 10 years old and is heavily influenced by entertainment industry lobbyists. It was written at a time when most ordinary people had no clue about copyright issues, and the politicians knew even less. For example, most ordinary people probably think that downloading a song illegally from the net is less severe than selling a truckload of counterfeit CDs. Our current copyright law disagrees.

Issues like this can easily become a political hot potato that no one wants to touch. Here crowdsourced initiatives come in really handy. Other examples of popular initiatives in Finland are a demand for equal rights for same-sex couples and making a minority language optional in schools. Even Edward Snowden has inspired a proposal: it should be possible to apply for political asylum remotely, without visiting the target country. However, these initiatives still need to pass the parliament to become law; representative democracy gets the final word. Even popular crowdsourced initiatives may be dismissed, but they are still not in vain. Every method of bringing more feedback to the decision makers during their term in office is good and helps mitigate the problems of indirect democracy.

So what will our democracy look like in ten or twenty years? Here’s my guess. We will still have representative democracy. Electronic voting machines will take care of most of the load, but we may still have traditional voting on paper available as an alternative. Some countries rely heavily on voting machines already today, and the electronic machines are accepted as the norm even if some failures do occur. Voting over the Internet will certainly be available in many countries, and is actually already in use in Estonia. Direct ways to affect the political system, like legislative proposals, will be developed further and play a more important role. And last but not least, the Internet has already become a very powerful tool for improving the transparency of our legislative institutions and for providing feedback from voters. This trend will continue and will actually make representative democracy blend into some kind of hybrid democracy. The representatives do in theory have carte blanche to rule, but they also need to constantly mind their public reputation. This means that you get some extra power to affect the legislative institutions if you participate in the monitoring and express your opinion constantly, rather than just cast a vote every fourth year.

Safe surfing,
Micke

Looking Through the Cloudy PRISM

As you have no doubt heard, a lot of fuss has been made over the past couple of days involving the NSA, Verizon, and Facebook, as well as several other companies and governments. Here we want to provide a concise overview of the information available at this point, along with some links to additional reading about the program known as “PRISM”.

On June 6, 2013, the Guardian published an article suggesting that a classified order, issued on April 25, 2013, allowed the United States government to collect data until July 19, 2013 and hand it over to the NSA. This order was issued to Verizon, and its existence was not allowed to be spoken of. Currently the revealed documents cover only Verizon, but there may have been similar orders involving other companies, not just ones that provide phone service. PRISM, a program allowing the NSA access to company data, was originally enabled in December 2007 by President Bush under a U.S. surveillance law and then renewed by President Obama in December 2012. The program was started to aid anti-terrorism efforts, and the government claims it has already prevented a terrorist plot in Colorado.

These documents reveal that the NSA is performing massive data mining covering millions of U.S. citizens. Wired reported that the collected data includes the phone numbers of both parties involved in a call, the time and duration of the call, the calling card numbers used, and the International Mobile Subscriber Identity (IMSI) number for mobile callers. The location of the calls may have been recorded using cell tower data. Data that was NOT collected includes names, addresses, account information, and recordings of call content. There is heated debate about whether this metadata is sensitive or not. On the one hand, no names or call content suggests that your fundamental privacy is intact. On the other hand, consider that the government knows you “spoke with an HIV testing service, then your doctor, then your health insurance company in the same hour. But they don’t know what was discussed.”

Edward Snowden has been identified as the whistleblower who released the documents that exposed this classified order. He had access to them as an NSA contractor, having worked for the agency over the last four years through outside organizations, including Booz Allen and Dell. When Snowden released the documents he stated, “I can’t allow the US to destroy privacy and Internet freedom.”

This article by the Guardian highlights multiple comments made by President Obama about the issue. He called this a “very limited issue” when discussing the disclosures of the NSA accessing phone data. In an attempt to deflect criticism, the President also stated that he had privacy concerns regarding private corporations, as they collect more data than the government.

Both Facebook and Google denied any previous knowledge of the PRISM surveillance program after concerns that they may have been part of it. Many other technology companies thought to be part of PRISM issued similar statements saying that they did not allow the government “direct access” to their systems. However, the NY Times reports that Google, Microsoft, Apple, Facebook, Yahoo, AOL, and Paltalk all negotiated with the government and were required to share information under the Foreign Intelligence Surveillance Act (FISA). The Guardian also states that Microsoft has been part of this information-sharing program since its beginning in December 2007 and was joined by Yahoo in 2008; Google, Facebook and PalTalk in 2009; YouTube in 2010; Skype and AOL in 2011; and Apple in 2012. At this point, it is a game of “who do you trust?” The government, which finds such data incredibly valuable, or the corporations that sometimes rely on such data for their business model (e.g. Facebook)?

In an article by Mark Jaquith, he notes how important the details are in this situation. There are two different reports on how PRISM actually works: one says the government can directly and unilaterally access company servers to take data, and the other describes it as merely an easier way to transfer data requested by court orders. The majority of reports point toward the second description. If this is true, the transfer of data is moderated and indirect, making PRISM essentially a lock box for securely passing information. Now that this has been brought to light, we hope more details will continue to surface and provide clarity.

As with many big information leaks, emotions and politics quickly take hold and begin to dominate the argument. Veterans of the Internet are largely unsurprised by the PRISM news, remembering ECHELON, Carnivore, and likely other initiatives that never came to light. Regardless, the PRISM program represents a serious threat to individual privacy, and every citizen should be concerned.

Written by eabsetz

Are we all RoboCops in the future?

The Internet, together with small and inexpensive digital cameras, has made us aware of the potential privacy concerns of sharing digital photos. Mobile phone cameras have escalated this development even further. Many people today carry a camera with the ability to publish photos and videos on the net almost in real time. Some people can handle that and act in a responsible way; some can’t. Defamatory pictures are constantly posted on the net, either by mistake or intentionally. But that’s not all. Now it looks like the next revolution that will rock the privacy scene is around the corner: Google Glass.

Having a camera in your phone has lowered the threshold for taking photos tremendously. It’s always with you and ready to snap. But you still have to take it out of your pocket and aim it at your subject. The “victim” has a fair chance of noticing that you are taking photos, especially at close range.

Google Glass is a smartphone-like device integrated into a piece of headgear. You wear it all the time, just like ordinary glasses. The screen is a transparent piece in your field of view that shows output as an overlay on top of what’s in front of you. No keyboard, mouse or touchscreen; you control it with voice commands. Cool, but here comes the privacy concern. Two of the voice commands are “ok, glass, take a picture” and “ok, glass, record a video”. Yes, that’s right. It has a camera too.

Imagine a world where Google Glasses are as common as mobile phones are today. You know that every time you talk to someone, a camera and microphone are pointed at you. You have no way of knowing whether it is recording or not. You have to take this into account when deciding what to say, or run the risk of having an embarrassing video on YouTube within minutes. A little bit like the old movie RoboCop, where the metallic law enforcement officer recorded constantly and the material could be used as evidence in court. Do we want a world like that? A world where we are all RoboCops?

We have fairly clear and good legislation on the rules for taking photos. In most countries it is OK to take photos in public places, and people who show up there must accept being photographed. Private places have stricter rules, and there are also separate rules about publishing and the commercial use of a photo. This is all fine, and it applies to any device, including Google Glass. The other side of the coin is people’s awareness of these laws, or rather the lack of it. In practice we have a law that very few care about, and a varying degree of common sense. People’s common sense does indeed prevent many problems, but not all. It may work fairly well today, but will it be enough if the glasses become common?

I think that if Google Glass becomes a hit, it will force us to rethink our relationship to photo privacy, both as individuals and as a society. There will certainly be problems if 90% of the population have glasses and still walk around with only a rudimentary understanding of how the law restricts photography. Some would suffer because they broke the law unintentionally, and many would suffer because of the published content.

I hope that our final way of dealing with the glasses isn’t the solution that the 5 Point Cafe in Seattle came up with. They became the first to ban Google Glass. It is just the same old primitive reaction that has followed so many new technologies. Needless to say, much fine technology would be unavailable if that were our only way of dealing with new things.

But what will happen? That is no doubt an interesting question. My guess is that there will be a compromise. Camera users will gradually become more aware of the boundaries the law sets. Many people will also need to redefine their privacy expectations, as we have to adapt to a world with more cameras. That might be a good thing if the fear of being recorded makes us more thoughtful and polite toward others. It’s very bad if it makes it harder to mingle in a relaxed way. Many questions remain to be answered, but one thing is clear: Google Glass will definitely be a hot topic when discussing privacy.

Micke

PS. I have an app idea for the Glass. Remember the meteorite in Russia in February 2013? It was captured by numerous car cameras, as drivers in Russia commonly use constantly recording cameras as a measure against fraudulent accusations. What if you had the same functionality on your head all the time? There would always be a video of the last hour of your life, automatically on and ready to get you out of tricky situations. Or to make sure you don’t miss any juicy moments…

Photo by zugaldia @ Flickr