Drivers are distracted by a hacked billboard, we take a deeper look at how the deepfake problem has… uh… deepened, and Carole is less than happy about Amazon’s announcement about new Alexa integrations.
All this, an annoying goose, and much much more is discussed in the latest edition of the “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by Maria Varmazis.
Worldwide concern over the adverse effects deepfakes could have on society is increasing, and for good reason. Recently, an employee of a UK-based energy company was tricked into believing he was on the phone with his boss, the CEO of the firm’s German parent company, who asked him to transfer €220,000 (approximately $243,000) to a Hungarian supplier. Of course, the employee was not speaking with the actual CEO, but with a scammer impersonating him through voice-altering AI.
This kind of social engineering attack is not new. In fact, only two months earlier, cybersecurity researchers had identified three successful deepfake audio attacks on companies. In each case, a fraudster posing as the “CEO” called a financial officer to request an urgent money transfer. The real CEOs’ voices had been taken from earnings calls, YouTube videos, TED talks, and other recordings, then fed into an AI program that enabled the fraudsters to imitate them.
These incidents are the audio equivalent of deepfake videos, which have been causing global alarm for the past couple of years. As we grow accustomed to the existence of deepfakes, our trust in any video or audio footage we encounter may erode, including the genuine kind. Video, once regarded as the ultimate form of truth because it transcended easily altered photographs, can now deceive us as well.
And this brings us to the question:
How safe is your business in the face of the deepfake threat?
What are Deepfakes?
Deepfakes are fabricated video and audio recordings of individuals, designed to make it look like they have said and done things which, in fact, they haven’t. “Deep” refers to the “deep learning” technology used to produce the media, and “fake” to its artificial nature. Most of the time, one person’s face is superimposed on another’s body, or their actual likeness is altered so that they appear to say or do something they never did.
The term was born in 2017 when a Reddit user posted a fake adult video showing the faces of some Hollywood celebrities. Later, the user also published the machine learning code used to create the video.
Can we detect and stop Deepfakes?
Right now, researchers and companies are investigating how AI can be used to detect and remove deepfakes. New tools are starting to emerge that aim to help us identify which pictures and recordings are real and which are fake.
For example, Facebook, Microsoft, the Partnership on AI coalition, and academics from several universities are launching a contest to improve deepfake detection. Their goal is to encourage the development of technology that anyone can use to spot deepfake material. The Deepfake Detection Challenge will feature a data set and leaderboard, alongside grants and awards, to motivate participants to design new methods of identifying and stopping fake footage meant to deceive others.
Yet this won’t prevent fake media from being created, shared, seen, and heard by millions of people before it is removed. And without doubt, it can be extremely difficult to face the consequences and repair the damage once malicious material has been distributed.
How can you spot Deepfake videos?
Until some highly reliable technical solutions are designed, we should learn to identify the tell-tale signs of deepfakes. So, here are the flaws you should be looking for:
- Blinking – Research suggests that eye blinking tends to be poorly reproduced in deepfake videos, so watch for subjects who blink unnaturally rarely.
- Face borders – Watch out for blurry face edges that subtly blend into the background.
- Artificial-looking skin – If the face looks unusually smooth, as though it’s been edited, this may be another warning sign. Also watch for a skin tone that differs slightly from the rest of the body.
- Slow speech and odd intonation – Sometimes the person being impersonated talks unusually slowly, or the fake voice doesn’t quite match the real person’s.
- An overall strange look and feel – In the end, you should trust your instinct. Sometimes, you can simply tell something’s not right.
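For the blinking cue in particular, researchers often work with the eye aspect ratio (EAR), a per-frame measure that drops sharply when the eyes close. Below is a minimal, illustrative Python sketch of a blink-rate check. It assumes you already have per-frame EAR values from a facial-landmark detector (e.g. dlib or MediaPipe), and the thresholds are rough assumptions rather than calibrated values:

```python
def count_blinks(ear_values, closed_threshold=0.2):
    """Count blinks in a sequence of per-frame eye aspect ratios.

    A blink is a run of frames where the EAR drops below the
    closed-eye threshold and then rises again.
    """
    blinks = 0
    eye_closed = False
    for ear in ear_values:
        if ear < closed_threshold:
            if not eye_closed:
                blinks += 1
                eye_closed = True
        else:
            eye_closed = False
    return blinks


def blink_rate_suspicious(ear_values, fps=30, min_blinks_per_minute=5):
    """Flag footage whose blink rate falls well below the human norm.

    Humans blink roughly 15-20 times per minute; 5 is a deliberately
    loose floor chosen for this illustration.
    """
    duration_minutes = len(ear_values) / fps / 60
    if duration_minutes == 0:
        return False
    rate = count_blinks(ear_values) / duration_minutes
    return rate < min_blinks_per_minute


# Example: 60 seconds of "footage" (1800 frames at 30 fps) with a
# single blink near the start -- one blink per minute looks suspicious.
frames = [0.3] * 100 + [0.15] * 5 + [0.3] * 1695
print(blink_rate_suspicious(frames))  # True
```

A real detector would also have to cope with noisy landmarks, varying frame rates, and partial occlusion; this sketch only shows the shape of the heuristic.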
At the moment, deepfakes can often be spotted with the naked eye. But as the technology progresses, spotting them will gradually become more difficult.
Deepfakes could destroy everything
Here are some of the areas on which deepfakes could have a highly negative impact:
#1. Politics

Deepfakes could influence elections, since they can put words into politicians’ mouths and make them look like they’ve done or said things which, in fact, they haven’t. Deepfake producers could target popular social media channels, where shared content can instantly go viral.
#2. Justice

Fake evidence could be used against people in criminal trials, and they could be accused of crimes they did not commit. Thus, the wrong people could go to jail. On the other hand, guilty people could be set free on the strength of false proof.
#3. Stock market
Deepfakes could be used to manipulate stock prices if altered footage of influential people making certain statements were distributed. Imagine what would happen if a fake video surfaced of the CEO of a company such as Apple, Amazon, or Google declaring they’ve done something illegal. For instance, back in 2008, Apple’s stock dropped 10 points after a false rumor emerged that Steve Jobs had suffered a major heart attack.
#4. Online bullying
Deepfake technology could also be used to amplify cyberbullying, especially now that it’s becoming widely available. People can easily become victims when manipulated media of them is posted online. Or they can be blackmailed by cybercriminals who threaten to leak the footage unless, for instance, they pay a certain amount of money.
#5. Business reputation

Someone could make false statements about your business to destabilize and degrade it. Malicious actors could make it appear that you, or someone within your organization, has admitted to involvement in consumer fraud, bribery, sexual abuse, or any other wrongdoing you can think of. Obviously, these kinds of false statements can destroy your company’s reputation and make it difficult for you to prove otherwise.
What can be done?
Due to current gaps in the law, producers of deepfakes largely go unpunished. However, the DEEPFAKES Accountability Act (short for the “Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act” – yes, you’ve correctly identified an acronym right there) aims to criminalize this type of fake media.
In short, anyone who creates a deepfake would be required to disclose that the footage has been altered, and failing to do so would be a crime. Regulations like these are essential for protecting deepfake victims, as well as the general public, from distorted information.
How can you protect your business from Deepfakes?
Your competitors could even resort to deepfake blackmail to try to push you out of the industry.
No matter how good technological deepfake detection solutions become, they won’t prevent manipulated media from being shared and reaching large numbers of people. So the best defense is to teach your employees how to identify fake footage and to question anything inside the organization that seems suspicious.
#1. Train your employees
The topic of deepfakes can be covered in your cybersecurity training. For instance, if an employee receives an unexpected call from the “CEO” asking them to transfer $1 million to a bank account, they should first question whether the person on the other end of the line is who they say they are. A good countermeasure would be to have a few security questions in place that must be asked to verify a caller’s identity.
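To make the idea concrete, here is a hypothetical Python sketch of a call-back policy for transfer requests. The directory, phone numbers, roles, and risk threshold are all invented for the example, not taken from any real policy:

```python
# Hypothetical internal directory of known phone numbers, maintained
# independently of any inbound call (all numbers are made up).
DIRECTORY = {
    "ceo": "+1-555-0100",
    "cfo": "+1-555-0101",
}

# Transfers at or above this amount always require a call-back
# (illustrative threshold).
HIGH_RISK_THRESHOLD = 10_000


def requires_callback(amount, caller_id, claimed_role):
    """Return True when the request must be verified by calling the
    claimed person back on their directory number.

    Never trust the inbound caller ID alone: both the voice and the
    caller ID can be spoofed, so any mismatch or any large amount
    triggers an out-of-band call-back.
    """
    known_number = DIRECTORY.get(claimed_role)
    if known_number is None or caller_id != known_number:
        return True  # unknown role or unexpected number: always verify
    return amount >= HIGH_RISK_THRESHOLD


print(requires_callback(243_000, "+1-555-0100", "ceo"))  # True: large amount
print(requires_callback(500, "+1-555-0100", "ceo"))      # False: small, known number
print(requires_callback(500, "+1-555-9999", "ceo"))      # True: spoofed number
```

The design choice worth noting is that verification happens over a channel the attacker does not control: staff call the directory number back rather than continuing the inbound call.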
#2. Monitor your brand’s online presence
Your brand’s online presence is probably already being monitored. Make sure your designated people also keep an eye out for fake content involving your organization and, if anything suspicious comes to light, do their best to take it down as soon as possible and mitigate the damage.
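As an illustration only, a monitoring workflow might triage incoming brand mentions with a simple keyword filter before human review. The keyword list and the mention format below are assumptions made for this sketch:

```python
# Illustrative list of terms that often accompany reputation attacks
# (assumed for the example; a real list would be tuned per brand).
RISK_KEYWORDS = {"deepfake", "leaked video", "caught on tape", "scandal"}


def flag_for_review(mentions):
    """Given (source, text) pairs from a monitoring feed, return the
    ones containing risk keywords so staff can review them first."""
    flagged = []
    for source, text in mentions:
        lowered = text.lower()
        if any(keyword in lowered for keyword in RISK_KEYWORDS):
            flagged.append((source, text))
    return flagged


feed = [
    ("twitter", "Loving the new product line!"),
    ("forum", "Shocking deepfake video shows the CEO admitting fraud"),
]
print(flag_for_review(feed))  # only the forum mention is flagged
```

A keyword filter like this only prioritizes the queue; the judgment about whether a flagged mention is genuinely fake still belongs to a human.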
This brings us to the next point.
#3. Be transparent
If you become the victim of a deepfake, make sure your audience is aware of the targeted attack. Ignoring what happened, or assuming that people didn’t believe what they saw or heard, won’t make the issue disappear. Your PR efforts should therefore center on communicating that someone from your company has been impersonated, and on highlighting the artificial nature of the distributed footage.
Never let misinformation erode your public’s confidence!
Wrapping it all up
The dangers of deepfakes are real and should not be underestimated. A single ill-intended rumor could destroy your business. So, both as an individual and as an organization, you should be prepared to stand against these threats.
A company is said to have lost €220,000 (approximately $243,000) after receiving a phone call from a boss requesting the money be transferred into a supplier’s bank account.
But it wasn’t the real boss on the phone…
Read more in my article on the Hot for Security blog.
Was a cybercrime committed on the International Space Station? What on earth were Ukrainian scientists thinking when they plugged a nuclear power station into the internet? And someone has cloned Canadian clinical psychologist Jordan Peterson’s voice…
All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast with Graham Cluley and Carole Theriault, joined this week by Mark Stockley.