Daily Archives: October 7, 2020

Election 2020 – Keep Misinformation from Undermining the Vote


On September 22nd, the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) issued an advisory about the potential threat from foreign actors and cybercriminals attempting to spread false information. Their joint public service announcement makes a direct statement regarding how this could affect our election:

“Foreign actors and cybercriminals could create new websites, change existing websites, and create or share corresponding social media content to spread false information in an attempt to discredit the electoral process and undermine confidence in U.S. democratic institutions.”

Their call to action is clear—critically evaluate the content you consume and seek out reliable, verified information from trusted sources, such as state and local election officials. Not just leading up to Election Day, but during and after it as well.

Here’s why: it’s estimated that roughly 75% of American voters will be eligible to vote by mail, potentially leading to some 80 million mail-in ballots being cast. That’s twice the number from the 2016 presidential election, which could prolong the normal certification process. Election results will likely take days, even weeks, to ensure every legally cast ballot is counted accurately so that the election results can ultimately get certified.

That extended stretch of time is where the concerns come in. Per the FBI and CISA:

“Foreign actors and cybercriminals could exploit the time required to certify and announce elections’ results by disseminating disinformation that includes reports of voter suppression, cyberattacks targeting election infrastructure, voter or ballot fraud, and other problems intended to convince the public of the elections’ illegitimacy.”

In short, bad actors may attempt to undermine people’s confidence in our election as the results come in.

The need to act as smart consumers, and sharers, of online news has never been more urgent.

Misinformation flies quicker, and farther, than the truth

Before we look at how we can combat the spread of false information this election, let’s see how it cascades across the internet.

False political news traveled deeper and more broadly, reached more people, and was more viral than any other category of false information, according to a Massachusetts Institute of Technology study on the spread of true and false news online, published in Science in 2018.

Why’s that so? In a word: people. According to the research findings,

“We found that false news was more novel than true news, which suggests that people were more likely to share novel information … Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.”

Thus, bad actors pick their topics, pump false information about them into social media channels, and then let people spread it by way of shares, retweets, and the like—aided by “novel,” click-baity headlines for content people may not even read or watch, let alone fact-check.

Done at a large scale, false information can thus hit millions of feeds, which is exactly what the FBI and CISA are warning us about.

Five ways you can combat the spread of false information this election

The FBI and CISA recommend the following:

  1. Seek out information from trustworthy sources, such as state and local election officials; verify who produced the content; and consider their intent.
  2. Verify through multiple reliable sources any reports about problems in voting or election results and consider searching for other reliable sources before sharing such information via social media or other avenues.
  3. For information about final election results, rely on state and local government election officials.
  4. Report potential election crimes—such as disinformation about the manner, time, or place of voting—to the FBI.
  5. If appropriate, make use of in-platform tools offered by social media companies for reporting suspicious posts that appear to be spreading false or inconsistent information about election-related problems or results.

Stick to trustworthy sources

If there’s a common theme across our election blogs so far, it’s trustworthiness.

Knowing which sources deserve our trust, and spotting the ones that don’t, takes effort—such as fact-checking against reputable outlets like FactCheck.org, the Associated Press, and Reuters, or researching the publisher of the content in question to review their credentials. Yet that effort is worthwhile, even necessary, today. The resources listed in my recent blogs can help.

Stay Updated 

To stay updated on all things McAfee and for more resources on staying secure from home, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.


The post Election 2020 – Keep Misinformation from Undermining the Vote appeared first on McAfee Blogs.

US County Election Websites (Still) Fail to Fulfill Basic Security Measures

In January 2020, McAfee released the results of a survey establishing the extent of the use of .GOV validation and HTTPS encryption among county government websites in 13 states projected to be critical in the 2020 U.S. Presidential Election. The research grew out of my concern that the lack of .GOV and HTTPS among county government websites and election-specific websites could allow foreign or domestic malicious actors to create fake websites and use them to spread disinformation in the final weeks and days leading up to Election Day 2020.

Subsequently, reports emerged in August that the U.S. Federal Bureau of Investigation had, between March and June, identified dozens of suspicious websites made to look like official U.S. state and federal election domains, some of them referencing voting in states such as Pennsylvania, Georgia, Tennessee, and Florida.

Just last week, the FBI and Department of Homeland Security released another warning about fake websites taking advantage of the lack of .GOV on election websites.

These revelations compelled us to conduct a follow-up survey of county election websites in all 50 U.S. states.

Why .GOV and HTTPS Matter

Using a .GOV web domain reinforces the legitimacy of a site. Government entities that purchase .GOV web domains have submitted evidence to the U.S. government that they truly are the legitimate local, county, or state governments they claim to be. Websites using .COM, .NET, .ORG, and .US domain names can be purchased without such validation, meaning that no governing authority prevents malicious parties from using these names to set up and promote any number of fraudulent web domains mimicking legitimate county government domains.

An adversary could use fake election websites for disinformation and voter suppression by targeting specific citizens in swing states with misleading information on candidates or inaccurate information on the voting process such as poll location and times. In this way, a malicious actor could impact election results without ever physically or digitally interacting with voting machines or systems.

The HTTPS encryption measure assures citizens that any voter registration information shared with the site is encrypted, providing greater confidence in the entity with which they are sharing that information. Websites lacking the combination of .GOV and HTTPS cannot provide 100% assurance that voters seeking election information are visiting legitimate county and county election websites. This leaves an opening for malicious actors to steal information or set up disinformation schemes.
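These two checks are mechanical enough to automate. As an illustrative sketch (the function name and example URL below are ours, not part of the survey methodology), a short script can flag election links that lack either protection:

```python
from urllib.parse import urlparse

def check_election_url(url):
    """Return (has_https, has_gov) for a given election-site URL."""
    parsed = urlparse(url)
    has_https = parsed.scheme == "https"
    hostname = (parsed.hostname or "").lower()
    # .gov registrations are restricted to verified U.S. government entities
    has_gov = hostname.endswith(".gov")
    return has_https, has_gov

# A hypothetical county election URL lacking both protections:
print(check_election_url("http://www.votesomewherecounty.com/polls"))
```

A site passing both checks is a necessary signal of legitimacy, though as the survey notes, only the combination of the two gives voters real assurance.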

I recently demonstrated how such a fake website could be created by mimicking a genuine county election website and then inserting misleading information that could influence voter behavior. This was done in an isolated lab environment that was not accessible from the internet, so as not to create any confusion for legitimate voters.

In many cases, election websites have been set up with a focus on a strong user experience rather than on mitigating the risk that they could be spoofed to exploit the communities they serve. Malicious actors can pass off fake election websites and mislead large numbers of voters before government organizations detect them. A campaign close to Election Day could confuse voters and prevent votes from being cast, resulting in missing votes or an overall loss of confidence in the democratic system.

September 2020 Survey Findings

McAfee’s September survey of county election administration websites in all 50 U.S. states (3089 counties) found that 80.2% of election administration websites or webpages lack the .GOV validation that confirms they are the websites they claim to be.

Nearly 45% of election administration websites or webpages lack the HTTPS encryption necessary to prevent third parties from redirecting voters to fake websites or stealing voters’ personal information.

Only 16.4% of U.S. county election websites implement U.S. government .GOV validation and HTTPS encryption.

State / Counties / .GOV count / .GOV % / HTTPS count / HTTPS % / Both count / Both %
Alabama 67 8 11.9% 26 38.8% 6 9.0%
Alaska 18 1 5.6% 12 66.7% 1 5.6%
Arizona 15 11 73.3% 14 93.3% 11 73.3%
Arkansas 75 18 24.0% 30 40.0% 17 22.7%
California 58 8 13.8% 45 77.6% 6 10.3%
Colorado 64 21 32.8% 49 76.6% 20 31.3%
Connecticut 8 1 12.5% 2 25.0% 1 12.5%
Delaware 3 0 0.0% 0 0.0% 0 0.0%
Florida 67 4 6.0% 64 95.5% 4 6.0%
Georgia 159 40 25.2% 107 67.3% 35 22.0%
Hawaii 5 4 80.0% 4 80.0% 4 80.0%
Idaho 44 6 13.6% 28 63.6% 5 11.4%
Illinois 102 14 13.7% 60 58.8% 12 11.8%
Indiana 92 28 30.4% 41 44.6% 16 17.4%
Iowa 99 27 27.3% 80 80.8% 25 25.3%
Kansas 105 8 7.6% 46 43.8% 2 1.9%
Kentucky 120 19 15.8% 28 23.3% 15 12.5%
Louisiana 64 5 7.8% 12 18.8% 2 3.1%
Maine 16 0 0.0% 0 0.0% 0 0.0%
Maryland 23 9 39.1% 22 95.7% 8 34.8%
Massachusetts 14 3 21.4% 5 35.7% 2 14.3%
Michigan 83 9 10.8% 63 75.9% 9 10.8%
Minnesota 87 5 5.7% 59 67.8% 5 5.7%
Mississippi 82 8 9.8% 30 36.6% 5 6.1%
Missouri 114 8 7.0% 49 43.0% 7 6.1%
Montana 56 15 26.8% 21 37.5% 8 14.3%
Nebraska 93 35 37.6% 73 78.5% 32 34.4%
Nevada 16 3 18.8% 13 81.3% 2 12.5%
New Hampshire 10 0 0.0% 0 0.0% 0 0.0%
New Jersey 21 3 14.3% 11 52.4% 2 9.5%
New Mexico 33 7 21.2% 20 60.6% 6 18.2%
New York 62 15 24.2% 48 77.4% 14 22.6%
North Carolina 100 37 37.0% 69 69.0% 29 29.0%
North Dakota 53 3 5.7% 19 35.8% 2 3.8%
Ohio 88 77 87.5% 88 100.0% 77 87.5%
Oklahoma 77 1 1.3% 24 31.2% 1 1.3%
Oregon 36 1 2.8% 22 61.1% 0 0.0%
Pennsylvania 67 11 16.4% 40 59.7% 7 10.4%
Rhode Island 5 2 40.0% 3 60.0% 0 0.0%
South Carolina 46 15 32.6% 33 71.7% 13 28.3%
South Dakota 66 2 3.0% 14 21.2% 1 1.5%
Tennessee 95 23 24.2% 38 40.0% 12 12.6%
Texas 254 10 3.9% 86 33.9% 6 2.4%
Utah 29 8 27.6% 16 55.2% 7 24.1%
Vermont 14 0 0.0% 0 0.0% 0 0.0%
Virginia 95 33 34.7% 61 64.2% 35 36.8%
Washington 39 7 17.9% 26 66.7% 6 15.4%
West Virginia 55 18 32.7% 33 60.0% 16 29.1%
Wisconsin 72 16 22.2% 61 84.7% 11 15.3%
Wyoming 23 4 17.4% 15 65.2% 2 8.7%
Total 3089 611 19.8% 1710 55.4% 507 16.4%
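
The headline percentages reduce to simple ratios over the raw counts; a few lines of Python reproduce the bottom-line figures from the “Total” row above:

```python
# Reproduce the survey's bottom-line percentages from the raw counts
# in the "Total" row of the table above.
total_counties = 3089
gov_count = 611      # counties with .GOV validation
https_count = 1710   # counties with HTTPS encryption
both_count = 507     # counties with both

pct = lambda n: round(100 * n / total_counties, 1)

print(pct(gov_count))          # share with .GOV
print(100 - pct(gov_count))    # share lacking .GOV (the 80.2% headline figure)
print(pct(https_count))        # share with HTTPS
print(pct(both_count))         # share with both
```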

We found that the battleground states were largely in a bad position when it came to .GOV and HTTPS.

Only 29% of election websites used both .GOV and HTTPS in North Carolina, 22% in Georgia, 15.3% in Wisconsin, 10.8% in Michigan, 10.4% in Pennsylvania, and 2.4% in Texas.

While 95.5% of Florida’s county election websites and webpages use HTTPS encryption, only 6% validate their authenticity with .GOV.

During the January 2020 survey, only 11 Iowa counties protected their election administration pages and domains with .GOV validation and HTTPS encryption. By September 2020, that number rose to 25 as 14 counties added .GOV validation. But 72.7% of the state’s county election sites and pages still lack official U.S. government validation of their authenticity.

By contrast, Ohio led the survey pool with 87.5% of election webpages and domains validated by .GOV and protected by HTTPS encryption. Four of five (80%) Hawaii counties protect their main county and election webpages with both .GOV validation and HTTPS encryption, and 73.3% of Arizona county election websites do the same.

What’s not working

Separate Election Sites. As many as 166 counties set up websites that were completely separate from their main county web domain.  Separate election sites may have easy-to-remember, user-friendly domain names to make them more accessible for the broadest possible audience of citizens. Examples include my own county’s www.votedenton.com as well as www.votestanlycounty.com, www.carrollcountyohioelections.gov, www.voteseminole.org, and www.worthelections.com.

The problem with these election-specific domains is that while 89.1% of these sites have HTTPS, 92.2% lack .GOV validation to guarantee that they belong to the county governments they claim. Furthermore, only 7.2% of these domains have both .GOV and HTTPS implemented. This suggests that malicious parties could easily set up numerous websites with similarly named domains to spoof these legitimate sites.

Not on OUR website. Some smaller counties with few resources often reason that they can inform and protect voters simply by linking from their county websites to their states’ official election sites. Other smaller counties have suggested that social media platforms such as Facebook are preferable to election websites to reach Internet-savvy voters.

Unfortunately, neither of these approaches prevents malicious actors from spoofing their county government web properties. Such actors could still set up fake websites regardless of whether the genuine websites link to a .GOV validated state election website or whether counties set up amazing Facebook election pages.

For that matter, Facebook is not a government entity focused on validating that organizational or group pages are owned by the entities they claim to be. The platform could just as easily be used by malicious parties to create fake pages spreading disinformation about where and how to vote during elections.

It’s not OUR job. McAfee found that some states’ voters could be susceptible to fake county election websites even though their counties have little if any role at all in administering elections. States such as Connecticut, Delaware, Maine, Massachusetts, New Hampshire, Rhode Island and Vermont administer their elections through their local governments, meaning that any election information is only available at the states’ websites and those websites belonging to major cities and towns. While this arrangement makes county-level website comparisons with other states difficult for the purpose of our survey, it doesn’t make voters in these states any less susceptible to fake versions of their county website.

There should be one recipe for the security and integrity of government websites such as election websites and that recipe should be .GOV and HTTPS.

What IS working: The Carrot & The Stick

Ohio’s leadership position in our survey appears to be the result of a state-led initiative to transition county election-related content to .GOV validated web properties. Ohio’s Secretary of State used “the stick” approach by demanding by official order that counties implement .GOV and HTTPS on their election web properties. If counties couldn’t move their existing websites to .GOV, he offered “the carrot” of allowing them to leverage the state’s domain.

A majority of counties have subsequently transitioned their main county websites to .GOV domains, their election-specific websites to .GOV domains, or their election-specific webpages to Ohio’s own .GOV-validated https://ohio.gov/ domain.

Examples:

https://adamscountyoh.gov/elections.asp
https://www.allen.boe.ohio.gov/
https://boe.ashland.oh.gov/
https://www.boe.ohio.gov/ashtabula
https://elections.bcohio.gov/
https://www.carrollcountyohioelections.gov/
https://boe.clermontcountyohio.gov/
https://crawfordcountyohioboe.gov/
https://vote.delawarecountyohio.gov/
https://votehamiltoncountyohio.gov/

While Ohio’s main county websites still largely lack .GOV validation, Ohio does provide a mechanism for voters to quickly assess if the main election website is real or potentially fake. Other states should consider such interim strategies until all county and local websites with election functions can be fully transitioned to .GOV.

Ultimately, the goal should be that we can tell voters: if you don’t see .GOV and HTTPS, don’t assume a website is legitimate or safe. The message must be that simple, because the general public lacks the technical background to tell real sites from fake ones.

For more information on our .GOV-HTTPS county website research, potential disinformation campaigns, other threats to our elections, and voter safety tips, please visit our Elections 2020 page: https://www.mcafee.com/enterprise/en-us/2020-elections.html

The post US County Election Websites (Still) Fail to Fulfill Basic Security Measures appeared first on McAfee Blogs.

Achieving Compliance with Qatar’s National Information Assurance Policy

Qatar is one of the wealthiest countries in the world; Finances Online, Global Finance Magazine, and others consider it the wealthiest nation. This is because the country has a small population of under 3 million while oil accounts for the majority of its exports and Gross Domestic Product (GDP). These two factors […]… Read More

The post Achieving Compliance with Qatar’s National Information Assurance Policy appeared first on The State of Security.

How Tripwire Custom Workflow Automation Can Enhance Your Network Visibility

Tripwire Enterprise is a powerful tool. It provides customers insight into nearly every aspect of their systems and devices. From change management to configuration and compliance, Tripwire can provide “eyes on” across the network. Gathering that vast amount of data for analysis does not come without challenges. Customers have asked for better integration with their […]… Read More

The post How Tripwire Custom Workflow Automation Can Enhance Your Network Visibility appeared first on The State of Security.

Smashing Security podcast #199: A few tech cock-ups, and one cock lock-up

An internet-connected adult toy could leave its users encaged, the official NHS COVID-19 contact-tracing app alarms users, and would you be happy if a robot interviewed you for a job? All this and much more is discussed in the latest edition of the award-winning "Smashing Security" podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by BBC technology correspondent Zoe Kleinman.

Spot Fake News and Misinformation in Your Social Media Feed


Where do you get your news? There’s a good chance much of it comes from social media.

In 2019, Pew Research found that 55% of American adults said they get their news from social media either “often” or “sometimes,” an eight-percentage-point rise over the previous year. It’s easy to picture what that mix looks like: some of the news on social media comes from information sources people have subscribed to, while more arrives via articles reposted or retweeted by friends.

So, as we scroll through our feeds and quickly find ourselves awash in a cascade of news and comments on the news, we also find ourselves wondering: what’s true and false here?

And that’s the right question to ask. With the advent of the internet, anyone can become a publisher. That’s one of the internet’s greatest strengths—we can all have a voice. Publishing is no longer limited to newspaper, TV, and radio ownership bodies. Yet it’s one of the internet’s greatest challenges as well—with millions of publishers out there, not everyone is posting the truth. And sometimes, people aren’t doing the posting at all.

For example, last May, researchers at Carnegie Mellon University studied more than 200 million tweets about COVID-19. Of the top 50 most influential retweeters, 82% were bots, as were some 62% of the top 1,000 retweeters. What were they retweeting? Researchers said the tweets revolved around more than 100 types of inaccurate stories, including unfounded conspiracy theories and phony cures. Researchers cited two reasons for the surge: “First, more individuals have time on their hands to create do-it-yourself bots. But the number of sophisticated groups that hire firms to run bot accounts also has increased.”

With the sheer volume of news and information we wade through each day, you can be assured that degrees of false and misleading information make their way into people’s social media mix. And that calls for all of us to build up our media literacy—which is our ability to critically analyze the media we consume for bias and accuracy.

What follows are a few basics of media literacy that can help you to discern what’s fact and what’s fiction as you scroll through your social media feed for news.

The difference between misinformation and disinformation

When talking about spotting truth from falsehood on social media, it helps to first define two types of falsehood: the unintentional and the deliberate.

First off, there’s unintentional misinformation. We’re only human, and sometimes that means we get things wrong. We forget details, recall things incorrectly, or we pass along unverified accounts that we mistakenly take for fact. Thus, misinformation is wrong information that you don’t know is wrong. An innocent everyday example of this is when someone on your neighborhood Facebook group posts that the drug store closes at 8pm on weeknights when in fact it really closes at 7pm. They believe it closes at 8pm, but they’re simply mistaken.

That differs entirely from deliberate disinformation. This is intentionally misleading information or facts that have been manipulated to create a false narrative—typically with an ulterior motive in mind. The readiest example of this is propaganda, yet other examples also extend to deliberate untruths engineered to discredit a person, group, or institution. In other words, disinformation can take forms both large and small. It can apply to a person just as easily as it can to a major news story.

Now, let’s take a look at some habits and tactics designed to help you get a better grasp on the truth in your social media feed.

Consider the source

Some of the oldest advice is the best advice, and that holds true here: consider the source. Take time to examine the information you come across. Look at its source. Does that source have a track record of honesty and dealing plainly with the facts? Likewise, that source has sources too. Consider them in the same way as well.

Now, what’s the best way to go about that? For one, social media platforms are starting to embed information about publications into posts where their content is shared. For example, if a friend shares an article from The Economist, Facebook now includes a small link in the form of an “i” in a circle. Clicking on this presents information about the publication, which can give you a quick overview of its ownership, when it was founded, and so forth.

Another fact-finding trick comes by way of Michael Caulfield, the Director of Blended and Networked Learning at Washington State University. He calls it “Just Add Wikipedia.” It entails searching for a Wikipedia page using the URL of an information source. For example, if you saw an article published on Vox.com, you’d simply search “Wikipedia www.vox.com.” The Wikipedia entry will give you an overview of the information source, its track record, its ownership, and whether it has fired reporters or staff for false reporting. Of course, be aware that Wikipedia entries are written by public editors and contributors. These articles will only be as accurate as the source material they are drawn from, so be sure to reference the footnotes cited in the entry. Reading those will let you know if the entry is informed by facts from reputable sources, and they may open up other avenues of fact-finding as well.
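
The “Just Add Wikipedia” step can even be reduced to a tiny helper that turns any article URL into the suggested search query (the function name here is ours, for illustration):

```python
from urllib.parse import urlparse

def wikipedia_query(article_url):
    """Build the 'Just Add Wikipedia' search string for a source URL."""
    # Keep only the hostname; the article path doesn't matter for the search.
    host = urlparse(article_url).hostname or article_url
    return f"Wikipedia {host}"

print(wikipedia_query("https://www.vox.com/some-article"))  # Wikipedia www.vox.com
```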

Expand your media diet

A single information source or story won’t provide a complete picture. It may only cover a topic from a certain angle or narrow focus. Likewise, information sources are helmed by editors, and stories are written by people, all of whom have their biases, whether overt or subtle. It’s for this reason that expanding your media diet to include a broader range of information sources is so important.

So, see what other information sources have to say on the same topic. Consuming news across a spectrum will expose you to thoughts and coverage you might not otherwise get if you keep your consumption to a handful of sources. The result is that you’re more broadly informed and have the ability to compare and contrast different sources and points of view. Using the tips above, you can find other reputable sources to round out your media diet.

Additionally, for a list of reputable information sources, along with the reasons why they’re reputable, check out “10 Journalism Brands Where You Find Real Facts Rather Than Alternative Facts” published by Forbes and authored by an associate professor at The King’s College in New York City. It certainly isn’t the end all, be all of lists, yet it should provide you with a good starting point.

Let your emotions be your guide

Has a news story you’ve read or watched ever made you shake your fist at the screen or want to clap and cheer? How about something that made you fearful or simply laugh? Bits of content that evoke strong emotional responses tend to spread quickly, whether they’re articles, a post, or even a tweet. That’s a ready sign that a quick fact check could be in order.

There’s a good reason for that. Bad actors who wish to foment unrest, unease, or simply spread disinformation use emotionally driven content to plant a seed. Whether or not their original story gets picked up and viewed firsthand doesn’t matter to these bad actors. Their aim is to actually get some manner of disinformation out into the ecosystem. They rely on others who will re-post, re-tweet, or otherwise pass it along on their behalf—to the point where the original source of the information is completely lost. This is one instance where people readily begin to accept certain information as fact, even if it’s not factual at all.

Certainly, some legitimate articles will generate a response as well, yet it’s a good habit to do a quick fact check and confirm what you’ve read. This leads us right back to our earlier points about considering the source and cross-checking against other sources of information as well.

Keep an eye out for “sponsored content”

You’ve probably seen headlines similar to this before: THIS FAT-BURNING TRICK HAS DOCTORS BAFFLED! You’ll usually spot them in big blocks laden with catchy photos and illustrations, almost to the point that they look like they’re links to other news stories. They’re not. They’re ads, which often strike a sensationalistic tone.

The next time you spot one of these, look around the area of the web page where it’s placed. You should find a little graphic or snippet of text that says “Advertisement,” “Paid Sponsor,” or something similar. And there you go, you’ve spotted some sponsored content. These so-called articles aren’t necessarily developed to misinform you; more likely, they’re trying to bait you into buying something.

However, in some less reputable corners of the web, ads like these can take you to malicious sites that install malware or expose you to other threats. Always surf with web browser protection, which will either identify such links as malicious right away or prevent your browser from proceeding to the malicious site if you click one.

Be helpful, not right

So, let’s say you’ve been following these practices of media literacy for a while. What do you do when you see a friend posting what appears to be misinformation on their social media account? If you’re inclined to step in and comment, try to be helpful, not right.

We can only imagine how many spoiled relationships and “unfriendings” have occurred thanks to moments where one person comments on a post with the best intentions of “setting the record straight,” only to see tempers flare. We’ve all seen it happen. The original poster, instead of being open to the new information, digs in their heels and becomes that much more convinced of being right on the topic.

One way to keep your friendships and good feelings intact is this: instead of entering the conversation with the intention of being “right,” help people discover the facts for themselves. You can present your information as part of a discussion on the topic. So while you shouldn’t expect this to act like a magic wand that whisks away misinformation, what you can do is provide a path toward a reputable source of information that the original poster, and their friends, can follow if they wish.

Be safe out there

Wherever your online travels take you as you read and research the news, be sure to go out there with a complete security suite. In addition to providing virus protection, it will also help protect your identity and privacy as you do anything online. Also look for an option that will protect your mobile devices too, as we spend plenty of time scrolling through our social media feeds on our smartphones.

If you’re interested in learning more about savvy media consumption, pop open a tab and give these articles a read—they’ll give you a great start:

Bots in the Twittersphere: Pew Research
How to Spot Fake News: FactCheck.org

Likewise, keep an eye on your own habits. We forward news in our social media feeds too—so follow these same good habits when you feel like it’s time to post. Make sure that what you share is truthful too.

Be safe, be well-read, and be helpful!

Stay Updated

To stay updated on all things McAfee and for more resources on staying secure from home, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post Spot Fake News and Misinformation in Your Social Media Feed appeared first on McAfee Blogs.

Privacy-Preserving Smart Input with Gboard

Google Keyboard (a.k.a. Gboard) has a critical mission: provide frictionless input on Android that empowers users to communicate accurately and express themselves effortlessly. To accomplish this mission, Gboard must also protect users’ private and sensitive data; nothing users type is sent to Google servers. We recently launched privacy-preserving input by further advancing the latest federated technologies. In Android 11, Gboard also launched a contextual input suggestion experience by integrating on-device smarts into users’ daily communication in a privacy-preserving way.

Before Android 11, input suggestions were surfaced to users in several different places. In Android 11, Gboard launched a consistent and coordinated way to access contextual input suggestions. For the first time, we’ve brought Smart Replies to the keyboard suggestions, powered by system intelligence running entirely on device. The smart input suggestions are rendered as a transparent layer on top of Gboard’s suggestion strip. This structure maintains the trust boundaries between the Android platform and Gboard, meaning sensitive personal content cannot be accessed by Gboard. The suggestions are only sent to the app after the user taps to accept them.

For instance, when a user receives the message “Have a virtual coffee at 5pm?” in WhatsApp, on-device system intelligence predicts the smart text and emoji replies “Sounds great!” and “👍”. The Android system intelligence can see the incoming message, but Gboard cannot. In Android 11, these Smart Replies are rendered by the Android platform on Gboard’s suggestion strip as a transparent layer. When the user taps a suggestion, the Android platform sends it to the input field directly; if the user doesn’t tap it, neither Gboard nor the app can see it. In this way, Android and Gboard surface the best of Google smarts while keeping users’ data private: none of their data goes to any app, including the keyboard, unless they’ve tapped a suggestion.

Additionally, federated learning has enabled Gboard to train intelligent input models across many devices while keeping everything individual users type on their own devices. Today, emoji are as common as punctuation and have become a key way for users to express themselves in messaging. Users want fresh and diverse emoji suggestions to better express their thoughts in messaging apps. Recently, we launched new on-device transformer models in Gboard, fine-tuned with federated learning, to produce more contextual emoji predictions for English, Spanish, and Portuguese.

Furthermore, following the success of privacy-preserving machine learning techniques, Gboard continues to leverage federated analytics to understand how Gboard is used from decentralized data. What we've learned from privacy-preserving analysis has let us make better decisions in our product.

When a user shares an emoji in a conversation, their phone keeps an ongoing count of which emojis are used. Later, when the phone is idle, plugged in, and connected to WiFi, Google’s federated analytics server invites the device to join a “round” of federated analytics computation with hundreds of other participating phones. Every device in a round computes its emoji share frequencies, encrypts the result, and sends it to the federated analytics server. The server can’t decrypt any individual device’s data, but the final tally of total emoji counts can be decrypted once the encrypted contributions are combined across devices. The aggregated data shows that the most popular emoji is 😂 in WhatsApp, 😭 in Roblox (gaming), and ✔ in Google Docs. The 😷 emoji moved up from 119th to 42nd in frequency during COVID-19.
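The core idea that lets the server learn only the aggregate, never any one phone's counts, can be sketched with additive masking. This is a simplified illustration under assumed parameters (fixed device set, shared seed), not Google's actual secure aggregation protocol, which uses cryptographic key agreement and tolerates device dropouts:

```python
import random

MASK_MODULUS = 2**32  # all arithmetic is done modulo a shared constant

def make_pairwise_masks(device_ids, seed=0):
    """Each pair of devices agrees on a random mask; one adds it, the
    other subtracts it, so the masks cancel when the server sums."""
    rng = random.Random(seed)
    masks = {d: 0 for d in device_ids}
    ids = sorted(device_ids)
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            m = rng.randrange(MASK_MODULUS)
            masks[ids[i]] = (masks[ids[i]] + m) % MASK_MODULUS
            masks[ids[j]] = (masks[ids[j]] - m) % MASK_MODULUS
    return masks

def mask_count(count, mask):
    """A device uploads only its masked count, which looks random alone."""
    return (count + mask) % MASK_MODULUS

# Three devices with private usage counts for one emoji (illustrative data).
counts = {"a": 5, "b": 2, "c": 9}
masks = make_pairwise_masks(counts.keys())
uploads = {d: mask_count(c, masks[d]) for d, c in counts.items()}

# The server sums the masked uploads; the pairwise masks cancel out,
# revealing only the total (5 + 2 + 9 = 16), not any single count.
total = sum(uploads.values()) % MASK_MODULUS
print(total)  # 16
```

Each individual upload is indistinguishable from random noise; only the sum across the full round is meaningful, which mirrors how the aggregate emoji tallies above can be recovered without exposing any one user's data.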

Gboard maintains a strong commitment to Google’s Privacy Principles, striving to build privacy-preserving, effortless input products that let users freely express their thoughts in 900+ languages. We will keep pushing the state of the art in smart input technologies on Android while safeguarding user data. Stay tuned!

Best practices for defending Azure Virtual Machines

One of the things our Detection and Response Team (DART) and Customer Service and Support (CSS) security teams see frequently during investigations of customer incidents is attacks on virtual machines from the internet.

This is one area of the cloud security shared responsibility model where customer tenants are responsible for security. Security is a shared responsibility between Microsoft and the customer: as soon as you put even one virtual machine on Azure or any cloud, you need to ensure you apply the right security controls.

The diagram below illustrates the layers of security responsibilities:

Image of the shared responsibility model showing customer, service, and cloud responsibilities

Fortunately, with Azure, we have a set of best practices designed to help protect your workloads, including virtual machines, and keep them safe from constantly evolving threats. This blog will share the most important security best practices to help protect your virtual machines.

The areas of the shared responsibility model we will touch on in this blog are as follows:

  • Tools
  • Identity and directory infrastructure
  • Applications
  • Network Controls
  • Operating System

We will refer to the Azure Security Top 10 best practices as applicable for each:

Best practices

1. Use Azure Secure Score in Azure Security Center as your guide

Secure Score within Azure Security Center is a numeric view of your security posture. If it is at 100 percent, you are following best practices. Otherwise, work on the highest priority items to improve the current security posture. Many of the recommendations below are included in Azure Secure Score.

2. Isolate management ports on virtual machines from the Internet and open them only when required

The Remote Desktop Protocol (RDP) is a remote access solution that is very popular with Windows administrators. Because of its popularity, it’s a very attractive target for threat actors. Do not be fooled into thinking that changing the default port for RDP serves any real purpose. Attackers are always scanning the entire range of ports, and it is trivial to figure out that you changed from 3389 to 4389, for example.

If you are already allowing RDP access to your Azure VMs from the internet, you should check the configuration of your Network Security Groups. Find any rule that is publishing RDP and look to see if the Source IP Address is a wildcard (*). If that is the case, you should be concerned, and it’s quite possible that the VM could be under brute force attack right now.
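One quick way to audit for this condition is to export your NSG rules (for example, as JSON via the Azure CLI) and scan them for inbound allow rules that expose RDP to any source. The field names below mirror the general shape of exported NSG rules but are illustrative assumptions, not a definitive schema:

```python
def find_exposed_rdp_rules(rules):
    """Flag inbound 'Allow' rules that open RDP (3389) to any source.

    `rules` is a list of dicts parsed from exported NSG rule JSON
    (field names are assumptions for illustration).
    """
    risky = []
    for r in rules:
        if (r.get("direction") == "Inbound"
                and r.get("access") == "Allow"
                and r.get("destinationPortRange") in ("3389", "*")
                and r.get("sourceAddressPrefix") in ("*", "Internet", "0.0.0.0/0")):
            risky.append(r["name"])
    return risky

# Sample data: one wildcard-source rule (risky), one scoped to an office range.
rules = [
    {"name": "allow-rdp-any", "direction": "Inbound", "access": "Allow",
     "destinationPortRange": "3389", "sourceAddressPrefix": "*"},
    {"name": "allow-rdp-office", "direction": "Inbound", "access": "Allow",
     "destinationPortRange": "3389", "sourceAddressPrefix": "203.0.113.0/24"},
]
print(find_exposed_rdp_rules(rules))  # ['allow-rdp-any']
```

Any rule this flags is a candidate for tightening: restrict the source to known management ranges, or close the port entirely and use just-in-time access instead.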

It is relatively easy to determine if your VMs are under a brute force attack, and there are at least two methods we will discuss below:

  • Azure Defender (formerly Azure Security Center Standard) will alert you if your VM is under a brute force attack.
  • If you are not using the Security Center Standard tier, open the Windows Event Viewer and find the Windows Security event log. Filter for Event ID 4625 (an account failed to log on). If you see many such events occurring in quick succession (seconds or minutes apart), you are likely under a brute force attack.
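The Event Viewer check above boils down to counting failed logons inside a short time window. A minimal sketch of that sliding-window logic, with an illustrative threshold and synthetic timestamps rather than a real event log feed:

```python
from datetime import datetime, timedelta

def is_brute_force(failed_logon_times, threshold=10, window=timedelta(minutes=5)):
    """Return True if `threshold` or more failed logons (Event ID 4625)
    fall within any sliding `window`. Threshold/window are illustrative."""
    times = sorted(failed_logon_times)
    start = 0
    for end in range(len(times)):
        # Shrink the window from the left until it spans at most `window`.
        while times[end] - times[start] > window:
            start += 1
        if end - start + 1 >= threshold:
            return True
    return False

base = datetime(2020, 10, 7, 3, 0, 0)

# 12 failures a few seconds apart: a typical brute-force signature.
burst = [base + timedelta(seconds=5 * i) for i in range(12)]
print(is_brute_force(burst))   # True

# 12 failures spread over a day: more likely ordinary mistyped passwords.
spread = [base + timedelta(hours=2 * i) for i in range(12)]
print(is_brute_force(spread))  # False
```

In practice you would feed this from the Security event log (for example, timestamps of Event ID 4625 entries) and tune the threshold to your environment.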

Other commonly attacked ports include SSH (22), FTP (21), Telnet (23), HTTP (80), HTTPS (443), SQL (1433), and LDAP (389). This is just a partial list of commonly published ports. Always be cautious about allowing inbound network traffic from unlimited source IP address ranges unless it is necessary for the business needs of that machine.

A couple of methods for managing inbound access to Azure VMs:

Just-in-time access will allow you to reduce your attack surface while still allowing legitimate users to access virtual machines when necessary.

Network security groups contain rules that allow or deny inbound traffic to, or outbound traffic from, several types of Azure resources, including VMs. There are limits to the number of rules, and they can become difficult to manage if many users from various network locations need to access your VMs.

For more information, see this top Azure Security Best Practice:

3. Use complexity for passwords and user account names

If you are required to allow inbound traffic to your VMs for business reasons, this next area is of critical importance. Do you have complete confidence that any user account that would be allowed to access this machine is using a complex username/password combination? What if this VM is also domain joined? It’s one thing to worry about local accounts, but now you must worry about any account in the domain that would have the right to log on to that Virtual Machine.

For more information, see this top Azure Security Best Practice:

4. Keep the operating system patched

Operating system vulnerabilities are particularly worrisome when they are combined with a port and service that is likely to be published. A good example is the recent Remote Desktop Protocol vulnerability known as “BlueKeep.” A consistent patch management strategy will go a long way toward improving your overall security posture.

5. Keep third-party applications current and patched

Applications are another often overlooked area, especially third-party applications installed on your Azure VMs. Whenever possible, use the most current version available and patch for any known vulnerabilities. An example is an IIS server running a third-party Content Management System (CMS) application with known vulnerabilities. A quick internet search for CMS vulnerabilities will reveal many that are exploitable.

For more information, see this top Azure Security Best Practice:

6. Actively monitor for threats

Utilize the Azure Security Center Standard tier to ensure you are actively monitoring for threats. Security Center uses machine learning to analyze signals across Microsoft systems and services to alert you to threats to your environment. One such example is remote desktop protocol (RDP) brute-force attacks.

For more information, see this top Azure Security Best Practice:

7. Azure Backup Service

In addition to turning on security monitoring, it’s always a good idea to have a backup. Mistakes happen, and unless you tell Azure to back up your virtual machine, there is no automatic backup. Fortunately, it takes just a few clicks to turn on.

Next steps

Equipped with the knowledge contained in this article, we believe you will be less likely to experience a compromised VM in Azure. Security is most effective when you use a layered (defense in depth) approach and do not rely on one method to completely protect your environment. Azure has many different solutions available that can help you apply this layered approach.

If you found this information helpful, please drop us a note at csssecblog@microsoft.com.

To learn more about Microsoft Security solutions visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Best practices for defending Azure Virtual Machines appeared first on Microsoft Security.

Promising Infusions of Cash, Fake Investor John Bernard Walked Away With $30M

September featured two stories on a phony tech investor named John Bernard, a pseudonym used by a convicted thief named John Clifton Davies who’s fleeced dozens of technology companies out of an estimated $30 million with the promise of lucrative investments. Those stories prompted a flood of tips from Davies’ victims that paints a much clearer picture of this serial con man and his cohorts, including allegations of hacking, smuggling, bank fraud and murder.

KrebsOnSecurity interviewed more than a dozen of Davies’ victims over the past five years, none of whom wished to be quoted here out of fear of reprisals from a man they say runs with mercenaries and has connections to organized crime.

As described in Part II of this series, John Bernard is in fact John Clifton Davies, a 59-year-old U.K. citizen who absconded from justice before being convicted on multiple counts of fraud in 2015. Prior to his conviction, Davies served 16 months in jail before being cleared of murdering his third wife on their honeymoon in India.

The scam artist John Bernard (left) in a recent Zoom call, and a photo of John Clifton Davies from 2015.

After eluding justice in the U.K., Davies reinvented himself as The Private Office of John Bernard, pretending to be a billionaire Swiss investor who made his fortune in the dot-com boom 20 years ago and was seeking investment opportunities.

In case after case, Bernard would promise to invest millions in tech startups, and then insist that companies pay tens of thousands of dollars worth of due diligence fees up front. However, the due diligence company he insisted on using — another Swiss firm called Inside Knowledge — also was secretly owned by Bernard, who would invariably pull out of the deal after receiving the due diligence money.

Bernard found a constant stream of new marks by offering extraordinarily generous finders fees to investment brokers who could introduce him to companies seeking an infusion of cash. When it came time for companies to sign legal documents, Bernard’s victims interacted with a 40-something Inside Knowledge employee named “Katherine Miller,” who claimed to be his lawyer.

It turns out that Katherine Miller is a onetime Moldovan attorney who was previously known as Ecaterina “Katya” Dudorenko. She is listed as a Romanian lawyer in the U.K. Companies House records for several companies tied to John Bernard, including Inside Knowledge Solutions Ltd., Docklands Enterprise Ltd., and Secure Swiss Data Ltd (more on Secure Swiss data in a moment).

Another of Bernard’s associates listed as a director at Docklands Enterprise Ltd. is Sergey Valentinov Pankov. This is notable because in 2018, Pankov and Dudorenko were convicted of cigarette smuggling in the United Kingdom.

Sergey Pankov and Ecaterina Dudorenco, in undated photos. Source: Mynewsdesk.com

According to the Organized Crime and Corruption Reporting Project, “illicit trafficking of tobacco is a multibillion-dollar business today, fueling organized crime and corruption [and] robbing governments of needed tax money. So profitable is the trade that tobacco is the world’s most widely smuggled legal substance. This booming business now stretches from counterfeiters in China and renegade factories in Russia to Indian reservations in New York and warlords in Pakistan and North Africa.”

Like their erstwhile boss Mr. Davies, both Pankov and Dudorenko disappeared before their convictions in the U.K. They were sentenced in absentia to two and a half years in prison.

Incidentally, Davies was detained by Ukrainian authorities in 2018, although he is not mentioned by name in this story from the Ukrainian daily Pravda. The story notes that the suspect moved to Kiev in 2014 and lived in a rented apartment with his Ukrainian wife.

John’s fourth wife, Iryna Davies, is listed as a director of one of the insolvency consulting businesses in the U.K. that was part of John Davies’ 2015 fraud conviction. Pravda reported that in order to confuse the Ukrainian police and hide from them, Mr. Davies constantly changed their place of residence.

John Clifton Davies, a.k.a. John Bernard. Image: Ukrainian National Police.

The Pravda story says Ukrainian authorities were working with the U.K. government to secure Davies’ extradition, but he appears to have slipped away once again. That’s according to one investment broker who’s been tracking Davies’ trail of fraud since 2015.

According to that source — who we’ll call “Ben” — Inside Knowledge and The Private Office of John Bernard have fleeced dozens of companies out of nearly $30 million in due diligence fees over the years, with one company reportedly paying over $1 million.

Ben said he figured out that Bernard was Davies through a random occurrence. Ben said he’d been told by a reliable source that Bernard traveled everywhere in Kiev with several armed guards, and that his entourage rode in a convoy that escorted Davies’ high-end Bentley. Ben said Davies’ crew was even able to stop traffic in the downtown area in what was described as a quasi military maneuver so that Davies’ vehicle could proceed unobstructed (and presumably without someone following his car).

Ben said he’s spoken to several victims of Bernard who, shortly after cutting off contact with Bernard and his team, saw phony invoices for payments to banks in Eastern Europe that appeared to come from people within their own organizations.

While Ben allowed that these invoices could have come from another source, it’s worth noting that by virtue of participating in the due diligence process, the companies targeted by these schemes would have already given Bernard’s office detailed information about their finances, bank accounts and security processes.

In some cases, the victims had agreed to use Bernard’s Secure Swiss Data software and services to store documents for the due diligence process. Secure Swiss Data is one of several firms founded by Davies/Inside Knowledge and run by Dudorenko, and it advertised itself as a Swiss company that provides encrypted email and data storage services. In February 2020, Secure Swiss Data was purchased in an “undisclosed multimillion buyout” by SafeSwiss Secure Communication AG.

Shortly after the first story on John Bernard was published here, virtually all of the employee profiles tied to Bernard’s office removed him from their work experience as listed on their LinkedIn resumes — or else deleted their profiles altogether. Also, John Bernard’s main website — the-private-office.ch — replaced the content on its homepage with a note saying it was closing up shop.

Incredibly, even after the first two stories ran, Bernard/Davies and his crew continued to ply their scam with companies that had already agreed to make due diligence payments, or that had made one or all of several installment payments.

One of those firms actually issued a press release in August saying it had been promised an infusion of millions in cash from John Bernard’s Private Office. They declined to be quoted here, and continue to hold onto hope that Mr. Bernard is not the crook that he plainly is.

Recorded Future Express gives you elite security intelligence at zero cost

Graham Cluley Security News is sponsored this week by the folks at Recorded Future. Thanks to the great team there for their support! Recorded Future empowers your organization, revealing unknown threats before they impact your business, and helping your teams respond to alerts 10 times faster. How does it do this? By automatically collecting and …

A Handy Guide for Choosing a Managed Detection & Response (MDR) Service

Every company needs help with cybersecurity. No CISO ever said, "I have everything I need and am fully confident that our organization is fully protected against breaches." This is especially true for small and mid-sized enterprises that don't have the luxury of enormous cybersecurity budgets and a deep bench of cybersecurity experts. To address this issue, especially for small and mid-sized

New Valak Variant Makes “Most Wanted Malware” List for First Time

An updated variant of the Valak malware family earned a place on a security firm’s “most wanted malware” list for the first time. Check Point revealed that an updated version of Valak ranked as the ninth most prevalent malware in its Global Threat Index for September 2020. First detected back in 2019, Valak garnered the […]

The post New Valak Variant Makes “Most Wanted Malware” List for First Time appeared first on The State of Security.

ALERT! Hackers targeting IoT devices with a new P2P botnet malware

Cybersecurity researchers have taken the wraps off a new botnet hijacking Internet-connected smart devices in the wild to perform nefarious tasks, mostly DDoS attacks and illicit cryptocurrency coin mining. Discovered by Qihoo 360's Netlab security team, the HEH Botnet, written in Go and armed with a proprietary peer-to-peer (P2P) protocol, spreads via a brute-force attack of the