Category Archives: artificial intelligence

How artificial intelligence stopped an Emotet outbreak

At 12:46 a.m. local time on February 3, a Windows 7 Pro customer in North Carolina became the first would-be victim of a new malware attack campaign for Trojan:Win32/Emotet. In the next 30 minutes, the campaign tried to attack over a thousand potential victims, all of whom were instantly and automatically protected by Windows Defender AV.

How did Windows Defender AV uncover the newly launched attack and block it at the outset? Through layered machine learning, including use of both client-side and cloud machine learning (ML) models. Every day, artificial intelligence enables Windows Defender AV to stop countless malware outbreaks in their tracks. In this blog post, we'll take a detailed look at how the combination of client and cloud ML models detects new outbreaks.

Figure 1. Layered detection model in Windows Defender AV

Client machine learning models

The first layer of machine learning protection is an array of lightweight ML models built right into the Windows Defender AV client that runs locally on your computer. Many of these models are specialized for file types commonly abused by malware authors, including JavaScript, Visual Basic Script, and Office macros. Some models target behavior detection, while other models are aimed at detecting portable executable (PE) files (.exe and .dll).

In the case of the Emotet outbreak on February 3, Windows Defender AV caught the attack using one of the PE gradient boosted tree ensemble models. This model classifies files based on a featurization of the assembly opcode sequence as the file is emulated, allowing the model to look at the file's behavior as it was simulated to run.

Figure 2. A client ML model classified the Emotet outbreak as malicious based on its emulated execution opcode sequence.

The tree ensemble was trained using LightGBM, a Microsoft open-source framework used for high-performance gradient boosting.
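As a rough illustration of how such an ensemble reaches a verdict (the real model's trees, features, and weights are Microsoft-internal; everything below is a hypothetical sketch), each tree contributes a small positive or negative margin and the summed margins are squashed into a probability:

```python
import math

# Hypothetical hand-made stumps standing in for the real trees: each maps one
# feature of the emulated run to a positive (malware-leaning) or
# negative (clean-leaning) contribution.
def tree_1(f): return 0.8 if f["suspicious_api_calls"] > 3 else -0.5
def tree_2(f): return 0.6 if f["writes_to_startup"] else -0.4
def tree_3(f): return 0.7 if f["entropy"] > 7.0 else -0.3

TREES = [tree_1, tree_2, tree_3]

def ensemble_score(features):
    """Sum the per-tree margins and squash through a logistic link,
    as a gradient-boosted binary classifier does at prediction time."""
    margin = sum(tree(features) for tree in TREES)
    return 1.0 / (1.0 + math.exp(-margin))

sample = {"suspicious_api_calls": 5, "writes_to_startup": True, "entropy": 7.4}
prob = ensemble_score(sample)          # ≈ 0.89
verdict = "malicious" if prob > 0.5 else "benign"
```

The production model combines 20 trees over many more features, but the prediction-time arithmetic follows this shape.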

Figure 3a. Visualization of the LightGBM-trained client ML model that successfully classified Emotet’s emulation behavior as malicious. A set of 20 decision trees are combined in this model to classify whether the file's emulated behavior sequence is malicious or not.

Figure 3b. A more detailed look at the first decision tree in the model. Each decision is based on the value of a different feature. Green triangles indicate weighted-clean decision result; red triangles indicate weighted malware decision result for the tree.

When the client-based machine learning model predicts a high probability of maliciousness, a rich set of feature vectors is then prepared to describe the content. These feature vectors include:

  • Behavior during emulation, such as API calls and executed code
  • Similarity fuzzy hashes
  • Vectors of content descriptive flags optimized for use in ML models
  • Researcher-driven attributes, such as packer technology used for obfuscation
  • File name
  • File size
  • Entropy level
  • File attributes, such as number of sections
  • Partial file hashes of the static and emulated content
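Entropy, for instance, is cheap to compute and a classic hint of packing or encryption. A minimal sketch of the Shannon entropy of a byte buffer (assuming nothing about Windows Defender's actual featurization):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte: ~0 for constant data, ~8 for
    random-looking (packed or encrypted) data."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A long run of one byte value is maximally predictable...
low = shannon_entropy(b"\x00" * 1024)          # → 0.0
# ...while a uniform spread over all 256 byte values reaches 8 bits/byte.
high = shannon_entropy(bytes(range(256)) * 4)  # → 8.0
```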

This set of features forms a signal sent to the Windows Defender AV cloud protection service, which runs a wide array of more complex models in real time to instantly classify the signal as malicious or benign.

Real-time cloud machine learning models

Windows Defender AV's cloud-based real-time classifiers are powerful and complex ML models that use a lot of memory, disk space, and computational resources. They also incorporate global file information and Microsoft reputation as part of the Microsoft Intelligent Security Graph (ISG) to classify a signal. Relying on the cloud for these complex models has several benefits. First, it doesn't use your own computer's precious resources. Second, the cloud allows us to take into consideration the global information and reputation information from ISG to make a better decision. Third, cloud-based models are harder for cybercriminals to evade. Attackers can take a local client and test our models without our knowledge all day long. To test our cloud-based defenses, however, attackers have to talk to our cloud service, which will allow us to react to them.

The cloud protection service is queried by Windows Defender AV clients billions of times every day to classify signals, resulting in millions of malware blocks per day, and translating to protection for hundreds of millions of customers. Today, the Windows Defender AV cloud protection service has around 30 powerful models that run in parallel. Some of these models incorporate millions of features each; most are updated daily to adapt to the quickly changing threat landscape. Altogether, these classifiers provide an array of classifications that provide valuable information about the content being scanned on your computer.

Classifications from cloud ML models are combined with ensemble ML classifiers, reputation-based rules, allow-list rules, and data in ISG to come up with a final decision on the signal. The cloud protection service then replies to the Windows Defender client with a decision on whether the signal is malicious or not all in a fraction of a second.
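That final decision step can be pictured as a small arbitration routine; the sketch below is purely illustrative (the real service combines many more signals than a max over classifier scores and a hash allow-list):

```python
def cloud_verdict(classifier_scores, sha256, allow_list, malware_threshold=0.7):
    """Toy final-decision step: allow-list rules override, otherwise the
    parallel classifier scores are pooled (here: a simple max) against a
    threshold. Values and pooling rule are invented for illustration."""
    if sha256 in allow_list:
        return "benign"
    if max(classifier_scores) >= malware_threshold:
        return "malicious"
    return "benign"

# Hypothetical hash of a known-good file.
allow_list = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
scores = [0.45, 0.91, 0.88, 0.30]   # outputs of several parallel models
verdict = cloud_verdict(scores, "0f" * 32, allow_list)   # → "malicious"
```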

Figure 4. Windows Defender AV cloud protection service workflow.

In the Emotet outbreak, one of our cloud ML servers in North America received the most queries from customers, corresponding to the region where the outbreak began. At least nine real-time cloud-based ML classifiers correctly identified the file as malware. The cloud protection service replied to the signals, instructing the Windows Defender AV client to block the attack using two of our ML-based threat names, Trojan:Win32/Fuerboos.C!cl and Trojan:Win32/Fuery.A!cl.

This automated process protected customers from the Emotet outbreak in real time. But Windows Defender AV's artificial intelligence didn't stop there.

Deep learning on the full file content

Automatic sample submission, a Windows Defender AV feature, sent a copy of the malware file to our backend systems less than a minute after the very first encounter. Deep learning ML models immediately analyzed the file based on the full file content and behavior observed during detonation. Not surprisingly, deep neural network models identified the file as a variant of Trojan:Win32/Emotet, a family of banking Trojans.

While the ML classifiers ensured that the malware was blocked at first sight, deep learning models helped associate the threat with the correct malware family. Customers who were protected from the attack can use this information to understand the impact the malware might have had if it were not stopped.

Additionally, deep learning models provide another layer of protection: in relatively rare cases where real-time classifiers are not able to come to a conclusive decision about a file, deep learning models can do so within minutes. For example, during the Bad Rabbit ransomware outbreak, Windows Defender AV protected customers from the new ransomware just 14 minutes after the very first encounter.

Intelligent real-time protection against modern threats

Machine learning and AI are at the forefront of the next-gen real-time protection delivered by Windows Defender AV. These technologies, backed by unparalleled optics into the threat landscape provided by ISG as well as world-class Windows Defender experts and researchers, allow Microsoft security products to quickly evolve and scale to defend against the full range of attack scenarios.

Cloud-delivered protection is enabled in Windows Defender AV by default. To check that it's running, go to Windows Settings > Update & Security > Windows Defender. Click Open Windows Defender Security Center, then navigate to Virus & threat protection > Virus & threat protection settings, and make sure that Cloud-delivered protection and Automatic sample submission are both turned On.

In enterprise environments, the Windows Defender AV cloud protection service can be managed using Group Policy, System Center Configuration Manager, PowerShell cmdlets, Windows Management Instrumentation (WMI), Microsoft Intune, or via the Windows Defender Security Center app.

The intelligent real-time defense in Windows Defender AV is part of the next-gen security technologies in Windows 10 that protect against a wide spectrum of threats. Of particular note, Windows 10 S is not affected by this type of malware attack. Threats like Emotet won't run on Windows 10 S because it exclusively runs apps from the Microsoft Store. Learn more about Windows 10 S. To know about all the security technologies available in Windows 10, read Microsoft 365 security and management features available in Windows 10 Fall Creators Update.


Geoff McDonald, Windows Defender Research
with Randy Treit and Allan Sepillo



Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft community and Windows Defender Security Intelligence.

Follow us on Twitter @WDSecurity and Facebook Windows Defender Security Intelligence.

Most CIOs plan to deploy artificial intelligence

Meaningful artificial intelligence (AI) deployments are just beginning to take place. Gartner’s 2018 CIO Agenda Survey shows that four percent of CIOs have implemented AI, while a further 46 percent have developed plans to do so. “Despite huge levels of interest in AI technologies, current implementations remain at quite low levels,” said Whit Andrews, research vice president and distinguished analyst at Gartner. “However, there is potential for strong growth as CIOs begin piloting AI programs … More

What does the GDPR and the “right to explanation” mean for AI?

Security teams increasingly rely on machine learning and artificial intelligence to protect assets. Will a requirement to explain how they make decisions make them less effective?

The post What does the GDPR and the “right to explanation” mean for AI? appeared first on The Cyber Security Place.

E Hacking News – Latest Hacker News and IT Security News: Advancing Ransomware Attacks and Creation of New Cyber Security Strategies

As ransomware is on the rise, organisations are focusing too heavily on anti-virus software rather than proactively forming strategies to deal with cyber-attacks, which pose an ongoing threat to users. Nevertheless, one good piece of advice for dealing with this issue is the creation of air gaps, which make it easy to store and protect critical data, including offline copies. When a ransomware attack occurs, it should then be possible to restore your data without much downtime – if any at all.

In practice, however, organisations often find themselves taking one step forward and one step back. Ransomware has traditionally focused on backup programs and their associated storage, and its persistent targeting of storage subsystems has spurred organisations into putting robust backup procedures in place to counter an attack that gets through.

To be proactive, organisations should therefore protect data in ways that allow it to be readily recovered whenever a ransomware attack, or some other cyber security issue, threatens to disrupt day-to-day business operations and activities.

Clive Longbottom, client services director at analyst firm Quocirca explains: “If your backup software can see the back-up, so can the ransomware. Therefore, it is a waste of time arguing about on-site v off-site – it comes down to how well air locked the source and target data locations are.”

However, defending against any cyber-attack requires several layers of defence, which may include a firewall, anti-virus software and backups. The last layer of defence must be the most robust of them all, to stop any potentially costly disruption in its tracks before it is too late. So, anti-virus software must still play a key defensive role.

A ransomware attack is pretty brutal, warns Longbottom: “It requires a lot of CPU and disk activity. It should be possible for a system to pick up this type of activity and either block it completely, throttle it, or prevent it from accessing any storage system other than ones that are directly connected physically to the system.”
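The kind of detector Longbottom describes can be sketched as a rolling window over CPU and disk-write samples; the thresholds, window size, and sample values here are arbitrary illustrations, not figures from any real product:

```python
from collections import deque

class ActivityMonitor:
    """Toy detector for a ransomware-like burst: flag only when a full
    rolling window of samples stays above both the CPU and disk-write
    thresholds, so short legitimate spikes are ignored."""
    def __init__(self, window=5, cpu_threshold=80.0, disk_threshold=50.0):
        self.window = window
        self.cpu_threshold = cpu_threshold
        self.disk_threshold = disk_threshold   # MB/s written
        self.samples = deque(maxlen=window)

    def observe(self, cpu_percent, disk_mb_s):
        self.samples.append((cpu_percent, disk_mb_s))
        if len(self.samples) < self.window:
            return False
        return all(cpu > self.cpu_threshold and disk > self.disk_threshold
                   for cpu, disk in self.samples)

mon = ActivityMonitor()
# Normal use: isolated CPU spikes without sustained disk writes.
for cpu, disk in [(95, 2), (30, 1), (40, 3), (90, 5), (35, 2)]:
    mon.observe(cpu, disk)
# Encryption-like burst: five consecutive high-CPU, high-write samples.
alerts = [mon.observe(97, 120) for _ in range(5)]
# Only the fifth sample fills the window with sustained activity.
```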

Under the traditional approach, data centres are often positioned in close proximity to each other to minimise the impact of latency, but the fact that they are all too often situated within the same circles of disruption increases the financial, operational and reputational risks associated with downtime.

There are, therefore, a few tips that can help users mitigate ransomware attacks:

• The more layers you can add the better.
• User education.
• Update your backup regularly – it can be the last layer of defence.
• Have a copy off site – tape or cloud but don’t leave the drawbridge down.
• Planning of your backup process for your recovery requirement.
By following these, one can prevent cyber-attacks with far greater ease and precision.


Data and analytics maturity: Most organizations should be doing better

91 percent of organizations have not yet reached a transformational level of maturity in data and analytics, despite this area being a number one investment priority for CIOs in recent years, according to a worldwide survey of 196 organizations by Gartner. Overview of the maturity model for data and analytics “Most organizations should be doing better with data and analytics, given the potential benefits,” said Nick Heudecker, research VP at Gartner. “Organizations at transformational levels … More

How do your IT complexity challenges compare to those of other CIOs?

A global survey of 800 CIOs conducted by Vanson Bourne reveals that 76% of organizations think IT complexity could soon make it impossible to manage digital performance efficiently. IT complexity is growing The study further highlights that IT complexity is growing exponentially; a single web or mobile transaction now crosses an average of 35 different technology systems or components, compared to 22 just five years ago. This growth has been driven by the rapid adoption … More

AI in the Workplace: How Digital Assistants Impact Cybersecurity

Digital Assistants (sometimes seen as AIs) are becoming ubiquitous in living rooms and smartphones everywhere. Now, these devices are taking the leap to the business world. With Amazon’s announcement of

The post AI in the Workplace: How Digital Assistants Impact Cybersecurity appeared first on The Cyber Security Place.

Google CEO Sundar Pichai compares AI to fire and electricity

Google CEO Sundar Pichai thinks the impact of AI on humanity is more profound than that of electricity or fire

While fire is regarded as one of the greatest inventions of the Early Stone Age, followed by the harnessing of electricity in the 1800s, Google CEO Sundar Pichai considers the impact of AI (artificial intelligence) on humanity to be more ‘profound’ than that of fire or electricity.

Speaking as part of a new show hosted by MSNBC’s Ari Melber and Recode’s Kara Swisher, Pichai, 45, said that AI is ‘one of the most important things that humanity is working on. It’s more profound than, I don’t know, electricity or fire.’

“Fire’s pretty good,” Swisher said.

However, Pichai drew parallels between AI, electricity and fire, saying that AI is both useful and dangerous at the same time.

“But it kills people, too. We learned to harness fire for the benefits of humanity, but we had to overcome its downsides, too,” Pichai said.

While admitting some concerns about AI taking over the world one day, Pichai also mentioned that he feels that AI technology will inevitably play a significant role in the advancement of humanity, such as curing cancer and providing climate change solutions.

“My point is AI is really important, but we have to be concerned about it,” Pichai said. “It’s fair to be worried about it—I wouldn’t say we’re just being optimistic about it— we want to be thoughtful about it. AI holds the potential for some of the biggest advances we’re going to see.

“Whenever I see the news of a young person dying of cancer, you realize AI is going to play a role in solving that in the future. So I think we owe it to make progress too.”

However, Pichai did acknowledge the need for “balance” to be struck between AI technology’s downsides and upsides.

The extracts are part of a new series on MSNBC called “Revolution: Google and YouTube Changing the World” where Pichai along with YouTube CEO Susan Wojcicki were interviewed. The show featuring the full interview with Pichai is scheduled to air on US news channel MSNBC on January 26.

The post Google CEO Sundar Pichai compares AI to fire and electricity appeared first on TechWorm.

Future workforce: Intelligent technology meets human ingenuity

Businesses risk missing major growth opportunities unless CEOs take immediate steps to pivot their workforces and equip their people to work with intelligent technologies. The potential of AI Accenture estimates that if businesses invest in Artificial Intelligence (AI) and human-machine collaboration at the same rate as top performing companies, they could boost revenues by 38 percent by 2022 and raise employment levels by 10 percent. Collectively, this would lift profits by $4.8 trillion globally over … More

CAPTCHA + reCAPTCHA: Are they the Best Fraud Prevention Solution for your Business?

As someone who has worked in cybersecurity for years, it’s been fascinating to watch the evolution of CAPTCHA. Whilst you’re probably familiar with the acronym, you may not know that

The post CAPTCHA + reCAPTCHA: Are they the Best Fraud Prevention Solution for your Business? appeared first on The Cyber Security Place.

DFINITY is the Third Wave of Blockchain

The use of blockchain technology is rapidly proliferating, and it has already become a strong candidate to be the most revolutionary technology of this decade. The first generation of blockchain technology came with the invention of bitcoin by Satoshi Nakamoto in 2008. It depended heavily on a virtual ledger, which keeps track of all transactions […]

The post DFINITY is the Third Wave of Blockchain appeared first on Hacked: Hacking Finance.

The End of Human Money Managers

Quantitative Easing by central banks around the world has led to dramatic changes in the money management industry over the past six years. Not only have we seen increasing regional differences, but stock picking has also become more difficult as the money injected into the markets by central banks has lifted pretty much everything, regardless […]

The post The End of Human Money Managers appeared first on Hacked: Hacking Finance.

Is AI allegedly hacking users’ account?

Recently, a few documents leaked online appear to shed light on the computer gaming industry's use of Artificial Intelligence (AI) to increase advertising revenue and gaming deals. The purported documents appeared on Imgur two days ago and have been doing the rounds on Twitter. The leaked documents, if genuine, reveal the startling lengths to which the computer game industry will go in order to snoop on gamers using AI.

The documents state that surveillance data is gathered to compile detailed profiles of users. According to the reports, the AI targets users' smartphones and uses passive listening technology to access the smartphone's microphone; phones are checked to see whether users stay in the same location for eight hours or more. Where this is found to be the case, the subject is marked as "at home".

The unsubstantiated documents then go on to describe the detailed monitoring that takes place inside a user’s home:
 “When in home, monitor area of common walking space. Pair with information about number of staircases gathered from footfall audio patterns. Guess square footage of house.”

A part of the document marked "Example Highlight" then goes on to explain how it was decided that "high bonus gaming sessions during relaxing times are paradoxically not the time to encourage premium engagement."

At those times, users are targeted with free rewards, bonuses and "non-revenue-generating gameplay ads." According to the leak, during these periods "the AI severely discourages premium ads.”

As if this weren't enough, the AI also listens in, not only for keywords but also for "non word sounds." Examples include microwave sounds and even chewing noises, which are used to work out whether packaged meals have been consumed.

A section marked "Calendar K" explains how psychological manipulation is used to coerce users into making purchases. The AI may wait for players to be tired after long gaming sessions, or reverse the colours of free and paid game titles (usually blue and red) in order to "trick a player into making a buy unintentionally."

Unbelievably, it gets worse. According to the leaked documents, the gaming industry also uses hacked data dumps to gather additional information about users, and a section marked "Schedule O" even explains how the AI gathers side-channel data.

For the present, however, it remains to be seen whether this data dump will turn out to be genuine.

As is always the case, we encourage smartphone users to be careful about the applications they install. Always check for intrusive permissions before consenting to install any application or game. If a game requests permission to use the microphone, please bear in mind that this sort of surveillance might be taking place.

According to the leaked documents, AI software may also be using previously hacked data to gain access to third-party services. If so, gaming companies might be breaking into auxiliary services to put users under surveillance and develop detailed profiles about them.

For now, these serious allegations have yet to be proven true. Nevertheless, users are reminded to always use strong, unique passwords for all of their different online accounts – to make it much harder for companies to employ such practices.

3 ways to keep businesses safe from cybercrime in 2018

In the fight against cybercrime, companies can expect things to keep getting worse before they get any better With 2.7 billion more IoT devices brought online in 2017, businesses saw

The post 3 ways to keep businesses safe from cybercrime in 2018 appeared first on The Cyber Security Place.

Top enterprise security predictions for 2018

2017 delivered a good deal of excitement (as well as massive, massive headaches) in IT security. WannaCry attacked more than 300,000 computers in 150 countries only to be followed by

The post Top enterprise security predictions for 2018 appeared first on The Cyber Security Place.

Behavioral biometrics will replace passwords by 2022 – Gartner

In just a few years, we can all safely forget those cumbersome passwords we use to secure and unlock our devices. And we will be able to thank on-device artificial intelligence (AI) for easing the strain on our memory, according to a forecast by Gartner.

Gartner analysts believe on-device AI, as opposed to cloud-based AI, will mark a paradigm shift in digital security, and will do so sooner than most people think.

“On-device AI is currently limited to premium devices and provides better data protection and power management than full cloud-based AI, since data is processed and stored locally,” Gartner says in a report published on January 4.

The research company outlines 10 AI solutions expected to run on 80% of smartphones in 2022 that will become an essential part of vendor roadmaps and our everyday lives. At least four of them impact security.

“Digital Me”

“Smartphones will be an extension of the user, capable of recognizing them and predicting their next move,” reads the report. “They will understand who you are, what you want, when you want it, how you want it done and execute tasks upon your authority.”

This ability will not only ensure that your digital devices act under your authority, and your authority alone, but it will also ensure you know what to expect from them in terms of functionality and behavior. Going by Gartner’s forecast, “digital me” will be a crucial selling point for IoT / smart home vendors in the next couple of years.

Personal Profiling

New-generation smartphones will collect behavioral data to more accurately profile the user, paving the way for dynamic protection and assistance in emergency situations. It will also benefit insurers. Gartner speculates that car insurers will be able to adjust insurance rates based on driving behavior.

Behavioral Biometrics is an emerging technology that analyzes user behavior (including keystroke dynamics, gait analysis, voice ID, mouse use characteristics, signature analysis and cognitive biometrics), and creates a unique biometric template on the device. When the behavior doesn’t match the template, the (presumed) impostor is blocked from using the device or the device requires multi-layer authentication (just in case it makes a mistake).
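A toy version of that template check (the feature set, distance measure, and tolerance are invented for illustration; real systems use far richer models) might compare observed inter-key timings against the stored template:

```python
def matches_template(template, observed, tolerance=0.25):
    """Toy keystroke-dynamics check: compare observed inter-key timings
    (seconds) against the stored template, feature by feature, using the
    mean relative deviation. On a mismatch, the device would fall back
    to multi-layer authentication rather than hard-lock."""
    deviations = [abs(t - o) / t for t, o in zip(template, observed)]
    return sum(deviations) / len(deviations) <= tolerance

template = [0.12, 0.20, 0.15, 0.30]   # owner's typical timings
owner    = [0.13, 0.18, 0.16, 0.28]   # small natural variation
impostor = [0.30, 0.45, 0.05, 0.60]   # different typing rhythm

owner_ok    = matches_template(template, owner)      # → True
impostor_ok = matches_template(template, impostor)   # → False
```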

Content Censorship/Detection

A device with on-board AI could automatically detect inappropriate content – such as objectionable images, videos or text – and flag it, or block it altogether.

“Computer recognition software can detect any content that violates any laws or policies,” according to the report. “For example, taking photos in high security facilities or storing highly classified data on company-paid smartphones will notify IT.”

User Authentication

Probably the boldest, but also the most-likely-to-materialize prediction from the report is the idea that on-device AI will render password-based authentication obsolete. Passwords / passcodes and PINs are indeed a weak defense, with hundreds of millions of credentials leaked, stolen or otherwise compromised every year.

For example, a list of the 100 worst passwords compiled by SplashData was only made possible thanks to 5 million leaked credentials.

“Password-based, simple authentication is becoming too complex and less effective, resulting in weak security, poor user experience, and a high cost of ownership,” Gartner asserts.

“Security technology combined with machine learning, biometrics and user behavior will improve usability and self-service capabilities. For example, smartphones can capture and learn a user’s behavior, such as patterns when they walk, swipe, apply pressure to the phone, scroll and type, without the need for passwords or active authentications.”

Gartner isn’t just making assumptions either – Australian scientists have successfully prototyped a small wearable that uses your gait as an authentication token.

Other AI technologies that Gartner expects in portable devices by 2022 include emotion recognition, natural-language understanding, audio analytics, and more.

The road to “true AI”

Artificial intelligence was founded as an academic discipline in the 1950s and it has since had many ups and downs. Tasks requiring “intelligence” from a machine are often discarded from the definition as they become ubiquitous.

Optical character recognition, for example, has become so mundane that it no longer fits the definition. This has led computer scientist Larry Tesler to postulate a theorem along with a now-famous quip: “AI is whatever hasn’t been done yet.”

More recently AI has become a controversial topic, where even those actively developing AI systems express deep concerns about its implications if not handled correctly. Tesla CEO Elon Musk and theoretical physicist Stephen Hawking are just two of many prominent figures of our time casting a gloomy projection of AI in the years to come.

Still, humanity is a long way from true AI. Even the most complex computer systems today can’t emulate the most basic characteristics of human intelligence, such as reasoning or planning.

Top Five Trends IT Security Pros Need to Think About Going into 2018

It’s that time of the year when we look back at the tech trends of 2017 to provide us with a hint of things to come. Accordingly, let’s engage in our favorite end-of-year pastime: predictions about the coming year.

Equipped with Imperva’s own research, interactions with our customers, and a wealth of crowdsourcing data analyzed from installations around the world, we’ve looked ahead to the future of cybersecurity and compiled a few significant trends IT security pros can expect to see in 2018.

Here are our top five predictions for 2018 and what you can do to prepare for them:

1. Massive Cloud Data Breach

Companies have moved to cloud data services faster than anticipated even in traditional industries like banking and healthcare where security is a key concern. As shown in Figure 1, take-up of cloud computing will continue to increase, attaining a compound annual growth rate (CAGR) of 19%, from $99B in 2017 to $117B in 2018.
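The single-year growth implied by those spending figures can be checked directly; it works out to roughly 18%, in line with the ~19% CAGR quoted:

```python
# Implied year-over-year growth from the cloud spending figures above (in $B).
spend_2017 = 99.0
spend_2018 = 117.0
growth = spend_2018 / spend_2017 - 1   # ≈ 0.182, i.e. about 18%
```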

In 2018, in parallel with the take-up of cloud computing, we’ll see massive cloud data breaches—primarily because companies are not yet fully aware of the complexities involved with securing cloud data.


Figure 1: Rapid Growth of Cloud Computing (Source: IDC)

Data Breaches: A Troubling Past, A Worrying Future

It is estimated that in 2017 alone, over 1.9 billion records were exposed because of data breaches. Of the various circumstances behind the breaches, hacking of IT systems is by far the most prevalent cause, followed by poor security, inside jobs, and lost or stolen hardware and media.

Major breaches at healthcare and financial services companies indicate a growing trend of vulnerabilities and exploits in these two vital business sectors.

Healthcare was one of the hardest hit sectors in 2017, and that trend is expected to worsen in the coming year. Some 31 million records were stolen, accounting for 2% of the total and up a whopping 423% from just 6 million.

The financial services industry is the most popular target for cyber attackers (see Figure 2), and this dubious distinction is likely to continue in the upcoming year. Finance companies suffered 125 data breaches, 14% of the total, up 29% from the previous six months.

Data breaches in various other industries totaled 53, up 13% and accounting for 6% of the total. The number of records involved in these attacks was a staggering 1.34 billion (71% of the total) and significantly up from 14 million.

It is estimated that the average cost of a data breach will be over $150 million by 2020, with the global annual cost forecast to be $2.1 trillion.


Figure 2: Data Records Stolen or Lost by Sector (Source: IDC)

Critical Cloud-based Security Misconfigurations

Missteps in cloud-based security configurations often lead to data breaches. This is likely to increase as more organizations move some or most of their operations to the cloud.

As organizations and business units migrate to public cloud services, centralized IT departments will find it increasingly difficult to control their company’s IT infrastructure. These enterprises lack the visibility necessary to manage their cloud environments and don’t have the monitoring tools to detect and report on security governance and compliance. Many are not even aware of the specific workloads they’ve migrated to the cloud. And without a doubt, you can’t secure what you can’t see.

For example, unsecured Amazon Web Services (AWS) S3 storage buckets have been an ongoing concern for cloud users. A bucket can be configured to allow public access, and misconfigured buckets have in the past leaked highly sensitive information. In one major security breach, a whopping 111 GB of data was exposed, affecting tens of thousands of consumers.

Most significantly, Amazon is aware of the security issue but is unlikely to mitigate it, since it is caused by cloud-user misconfiguration.
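To make the misconfiguration concrete, here is a minimal, standard-library-only sketch (hypothetical policy document and bucket name, not AWS tooling) of how an audit script might flag a bucket policy that grants access to everyone:

```python
import json

def policy_allows_public_access(policy_json: str) -> bool:
    """Return True if any Allow statement grants access to everyone ("*")."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        # A principal of "*" (or {"AWS": "*"}) means anyone on the internet.
        if principal == "*" or (isinstance(principal, dict)
                                and "*" in principal.values()):
            return True
    return False

# Hypothetical policy that mistakenly opens a bucket to the world.
risky_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
})

print(policy_allows_public_access(risky_policy))  # → True
```

In practice, cloud security tools run checks like this continuously across every bucket, because a single wildcard principal is all it takes to expose an entire data store.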

2. Cryptocurrency Mining

We expect to see growth in cryptocurrency mining attacks, in which attackers hijack endpoint resources (CPU/GPU) to mine cryptocurrency either through cross-site scripting (XSS) or through malware. It is increasingly likely that remotely vulnerable or hackable IoT devices will also be conscripted as a mining force to further maximize an attacker's profits.

Illegal mining operations set up by insiders, which can be difficult to detect, are also on the rise—often carried out by employees with high-level network privileges and the technical skills needed to turn their company’s computing infrastructure into a currency mint.

These attacks will quickly grow in popularity given their lucrative nature. As long as there is a potential windfall involved, such inside jobs are likely to remain high on the list of cybersecurity challenges faced by companies.

Although attacks that attempt to embed crypto-mining malware are currently unsophisticated, we expect to see an increase in the sophistication of attacks as word gets out that this is a lucrative enterprise. We also expect these attacks to target higher-traffic websites, since the potential to profit increases greatly with higher numbers of concurrent site visitors.

3. Malicious Use of AI/Deception of AI Systems

The malicious use of artificial intelligence (AI) will continue to grow quickly. The industry has started to see early traces of attackers leveraging AI to learn normal behavior and mimic it in order to bypass current user and entity behavior analytics (UEBA) solutions. This tactic is still at a very early stage and will continue to mature beyond 2018; however, it will force current UEBA vendors to develop a 2.0 approach to identifying anomalous behavior.

AI and internet of things (IoT) use cases drive cloud adoption. Artificial intelligence in the cloud promises to be the next great disrupter as computing is evolving from a mobile-first to an artificial intelligence-first model. The proliferation of cloud-based IoT in the marketplace continues to drive cloud demand, as cloud allows for secure storage of massive amounts of structured and unstructured data central to IoT core functions.

Without proper awareness and security measures, AI can be easily fooled by adversarial behavior. In 2018 we will see more:

  • Attacks on AI systems (for example, self-driving cars)
  • Cyber attackers who adapt their attacks to bypass AI-based cybersecurity systems

4. Cyber Extortion Targets Business Disruption

Cyber extortion will become more disruption-focused. Encryption, corruption, and exfiltration will still lead in cyber extortion, but disruption will intensify this year, manifesting in disabled networks, internal denial-of-service attacks, and crashed email services.

In the last few years, attackers have adopted a “traditional” ransomware business model—encrypt, corrupt or exfiltrate the data and extort the owner in order to recover the data or prevent it from leaking. Fortunately, techniques such as deception or machine learning have helped to prevent these types of attacks and made it more difficult for attackers to successfully complete a ransomware attack.

From a cost perspective, most of the damage associated with ransomware attacks is not the data loss itself, since many firms have backups, but the downtime. Attackers will therefore increasingly leverage a disrupt-and-extort method. DDoS is the classic and most familiar technique, but attackers will probably adopt new ones: shutting down an internal network (web app to database, point-of-sale systems, communication between endpoints, etc.), modifying computer configurations to cause software errors, crashing software, forcing system restarts, or disrupting corporate email or any other infrastructure that is essential to the day-to-day functions of an organization's employees and customers. Basically, any event that leaves the company unable to conduct business.

While absolute protection is impossible, you can help lower your chance of business interruption due to a cyber-attack. Start by creating a formal, documented risk management plan that addresses the scope, roles, responsibilities, compliance criteria and methodology for performing cyber risk assessments. This plan should include a characterization of all systems used at the organization based on their functions, the data they store and process, and their importance to the organization.

5. Breach by Insiders

Businesses are relying more and more on data, which means more people within the business have access to it. The result is a corresponding increase in data breaches by insiders, whether through intentional (stealing) or unintentional (negligent) behavior of employees and partners.

While the most sensational headlines typically involve infiltrating an ironclad security system or an enormous and well-funded team of insurgents, the truth of how hackers are able to penetrate your system is more boring: it’s your employees.

A new IT security report paints a bleak picture of the actual gravity of the situation. Researchers found that IT workers in the government sector overwhelmingly think that employees are actually the biggest threat to cybersecurity. In fact, 100% of respondents said so.

Fortunately, security-focused companies have begun identifying these traditionally difficult-to-detect breaches using data monitoring, analytics, and expertise. The difference is that in 2018, more companies will invest in technology to identify this behavior, where previously they were blind to it.

In fact, 75% of IT employees in government reported that rather than their organization having dedicated cybersecurity personnel on staff (which is becoming more and more necessary with each passing year), an overworked IT team was left to deal with security and employee compliance. As a result, 57% reported that they didn’t even have enough time to implement stronger security measures while 54% cited too small of a budget.

Here’s another fact for you: insider threats are the cause of the biggest security breaches out there, and they are very costly to remediate. According to a 2017 Insider Threat Report, 53% of companies estimate remediation costs of $100,000 or more, with 12% estimating a cost of more than $1 million. The same report suggests that 74% of companies feel that they are vulnerable to insider threats, with 7% reporting extreme vulnerability.

These are the steps every company should take to minimize insider threats:

  • Perform background checks
  • Watch employee behavior
  • Apply the principle of least privilege
  • Control user access
  • Monitor user actions
  • Educate employees

Insider threats are one of the top cybersecurity threats and a force to be reckoned with. Every company will face insider-related breaches sooner or later regardless of whether it is caused by a malicious action or an honest mistake. And it’s much better to put the necessary security measures in place now than to spend millions of dollars later.


Join Imperva on January 23rd for a live webinar where we’ll discuss these trends in more detail and review the security measures necessary to mitigate the risks. Register to attend today.

The good, the bad and the anomaly

This blog was originally published on LinkedIn.

The security industry is rampant with vendors peddling anomaly detection as the cure all for cyber attacks. This is grossly misleading.

The problem is that anomaly detection over-generalizes: All normal behavior is good; all anomalous behavior is bad – without considering gradations and context. With anomaly detection, the distinction between user behaviors and attacker behaviors is nebulous, even though they are fundamentally different.
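To see why this over-generalization matters, consider a bare-bones anomaly detector (a toy z-score sketch, not any vendor's product): it can flag statistical outliers, but it cannot say whether an outlier is an attack or a harmless busy day, and that judgment requires context.

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

# Daily login counts for an application. The spike on the last day is
# statistically anomalous, but only context can tell us whether it is a
# credential-stuffing attack or simply a product launch.
logins = [10, 12, 11, 9, 10, 13, 11, 10, 12, 11, 120]
print(zscore_anomalies(logins))  # → [120]
```

The detector dutifully reports the spike, and nothing more. Deciding whether that spike is "bad" is exactly the gradation-and-context problem that pure anomaly detection ignores.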

Analytics 101

From today’s smart home applications to the autonomous vehicles of the future, the efficiency of automated decision-making is becoming widely embraced. Sci-fi concepts such as “machine learning” and “artificial intelligence” have been realized; however, it is important to understand that these terms are not interchangeable. Each represents a step up in complexity and knowledge, driving better decisions.

Distinguishing Between Machine Learning, Deep Learning and Artificial Intelligence

Put simply, analytics is the scientific process of transforming data into insight for making better decisions. Within the world of cybersecurity, this definition can be expanded to mean the collection and interpretation of security event data from multiple sources, and in different formats for identifying threat characteristics.

Simple explanations for each are as follows:

  • Machine Learning: Automated analytics that learn over time, recognizing patterns in data.  Key for cybersecurity because of the volume and velocity of Big Data.
  • Deep Learning: Uses many layers of input and output nodes (similar to brain neurons), with the ability to learn.  Typically makes use of the automation of Machine Learning.
  • Artificial Intelligence: The most complex and intelligent analytical technology, as a self-learning system applying complex algorithms which mimic human-brain processes such as anticipation, decision making, reasoning, and problem solving.

Benefits of Analytics within Cybersecurity

Big Data, the term coined in October 1997, is ubiquitous in cybersecurity as the volume, velocity and veracity of threats continue to explode. Security teams are overwhelmed by the immense volume of intelligence they must sift through to protect their environments from cyber threats. Analytics expand the capabilities of humans by sifting through enormous quantities of data and presenting it as actionable intelligence.

While the technologies must be used strategically and can be applied differently depending upon the problem at hand, here are some scenarios where human-machine teaming of analysts and analytic technologies can make all the difference:

  • Identify hidden malware with Machine Learning: Machine Learning algorithms recognize patterns far more quickly than your average human. This pattern recognition can detect behaviors that cause security breaches, whether known or unknown, periodically “learning” to become smarter. Machine Learning can be descriptive, diagnostic, predictive, or prescriptive in its analytic assessments, but typically is diagnostic and/or predictive in nature.
  • Defend against new threats with Deep Learning: Complex and multi-dimensional, Deep Learning reflects similar multi-faceted security behaviors in its actual algorithms; if the situation is complex, the algorithm is likely to be complex. It can detect, protect, and correct old or new threats by learning what is reasonable within any environment and identifying outliers and unique relationships.  Deep Learning can be descriptive, diagnostic, predictive, and prescriptive as well.
  • Anticipate threats with Artificial Intelligence: Artificial Intelligence uses reason and logic to understand its ecosystem. Like a human brain, AI considers value judgements and outcomes in determining good or bad, right or wrong.  It utilizes a number of complex analytics, including Deep Learning and Natural Language Processing (NLP). While Machine Learning and Deep Learning can span descriptive to prescriptive analytics, AI is extremely good at the more mature analytics of predictive and prescriptive.
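As a toy illustration of the pattern-recognition idea behind the first bullet (hypothetical features and labels, not any real product's model), a nearest-centroid classifier learns the "shape" of benign and malicious files from labeled examples and then labels new samples by proximity:

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Hypothetical training data: (file entropy, suspicious-API-call count),
# labeled by analysts. Real malware classifiers use far richer features.
benign  = [(4.1, 2), (3.8, 1), (4.5, 3), (4.0, 2)]
malware = [(7.6, 14), (7.9, 11), (7.2, 16), (7.8, 13)]

def centroid(points):
    """Average each coordinate to get the class's center point."""
    return tuple(sum(c) / len(points) for c in zip(*points))

CENTROIDS = {"benign": centroid(benign), "malware": centroid(malware)}

def classify(sample):
    """Label a sample by its nearest class centroid."""
    return min(CENTROIDS, key=lambda label: dist(sample, CENTROIDS[label]))

print(classify((7.5, 12)))  # → malware
print(classify((4.2, 2)))   # → benign
```

Even this crude model "recognizes patterns far more quickly than your average human"; production systems simply scale the same idea to millions of samples and thousands of features.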

With any security solution, therefore, it is important to identify the use case and ask “what problem are you trying to solve?” when selecting Machine Learning, Deep Learning, or Artificial Intelligence analytics.  In fact, sometimes a combination of these approaches is required, as in many McAfee products, including McAfee Investigator.  Human-machine teaming, as well as a layered approach to security, can further help to detect, protect, and correct the simplest or most complex of breaches, providing a complete solution for customers’ needs.

The post Analytics 101 appeared first on McAfee Blogs.

The Future of Cyber Safety: Could Artificial Intelligence Be The Silver Bullet?

Stay Safe Online Week 2017

Cyber safety: outsourcing to experts makes such sense!

Like most multi-tasking millennial mums, I’m a BIG fan of outsourcing: ironing, cleaning and gardening – it just makes such sense! Why not get an expert involved so you can focus on the things you love? Smart, I say!

But did you know that the future of cyber safety might just be heading the same way? Many technology experts and futurists including Ian Yip, McAfee® APAC’s CTO, believe that many of the decisions we make each day regarding our online safety could soon be made for us by digital assistants powered by artificial intelligence.

Sound a little crazy? Let me explain.

We’re human, after all.

An unfortunate reality of digital life is the fact that many of us have been hacked or scammed online, or know someone who has. But the truth is that almost all this pain could have been avoided if we had taken the necessary steps to protect ourselves and our families online. It is our ‘humanness’ that often gets us into trouble – our inconsistent and imperfect approach means we may take risks online without thinking, accidentally overshare or inadvertently click on a dodgy link.

But what if we could offload the management of our cyber safety to a true expert? An expert who is 100% organised, never forgets to renew security software, never uses the same password twice AND who can constantly analyse behaviour and immediately implement limits, should they be required?

Welcome to the future world of cyber safety digital assistants!!

Sounds good? Well, it could get even better… Imagine if your digital assistant, powered by artificial intelligence, had been programmed with the latest scientific research on brain and human development. Your loyal digital assistant could then interject at crucial points during your child’s interaction with digital content to educate them, or it could tell them to perform a chore before allowing more online time. Or limit their screen time when a scientifically proven or parent-enforced limit has been reached. All the while keeping them safe online.

Sounds like every parent’s dream!!

No longer would technology be the enemy of committed parents. Computers set up with digital assistants could instead be a positive influence and assist committed parents to raise healthy, well-adjusted young people.

But we’re not there quite yet…

Before we get too excited, we need to remember that this paradise is still some time away. So, until that time, we need to embrace our ‘humanness’ and this means doing what we can to protect ourselves and our families online.

One of our biggest jobs as parents is to teach our kids how to independently navigate the complexities of life and this includes the online world. Although tempting, wrapping your offspring in cotton wool and keeping them away from risks is unfortunately not the best way to prepare them for the complications of the online world.

Instead we need to teach them to question what they see, dig deeper and take a moment to reflect before they act. These critical thinking skills will stand them in good stead and mean you don’t need to panic unnecessarily about new online threats – if they have the skills then they can be smart, safe online operators!

But we also need to practise what we preach! As parents, it is essential that we also model appropriate online behaviour and healthy digital habits. Psychologist Jocelyn Brewer believes that our generation of parents are ‘just as likely to be glued to their screens as their teenage offspring.’ And while we are checking work emails from the sporting field or playground, we are playing a ‘powerful role in (our) child’s social learning’ – modelling behaviour that we then spend much energy trying to rid our children of.

Stay Smart Online Week

This week is Stay Smart Online Week, an initiative by the Australian Government together with business and community groups to raise awareness about the ways people can protect themselves online. So, why not take a moment and do a quick audit on your personal cyber safety strategy? Here are my top tips to get you started:

1. Create complex passwords.

Creating strong, unique passwords is the best way to keep your personal and financial information safe online. This is especially true in the era of widespread corporate hacks, where one database breach can reveal tens of thousands of user passwords. Why not consider McAfee’s password manager, the True Key™ app? It uses multiple authentication factors to sign you in – no need to remember anything!!
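If you do create passwords yourself rather than relying on a manager, the key is to generate them randomly instead of inventing them. A minimal sketch using Python's standard `secrets` module (just an illustration of the principle, not a replacement for a full password manager):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a password from cryptographically secure random choices."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A fresh, unguessable password every call - never reuse one across sites.
print(generate_password())
```

The `secrets` module draws from the operating system's secure random source, which is what makes the result unguessable; ordinary `random` would not be safe for this purpose.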

2. Secure your connections.

When at home or work, you probably use a password-protected router that encrypts your data. However, when you’re out you might use free, public Wi-Fi that is often unsecured – meaning a hacker can easily access your device or information. Consider using a Virtual Private Network (VPN) so you can connect safely from anywhere.

3. Keep your software up-to-date.

Mobile devices face new threats each day, such as risky apps and dangerous links sent by text (smishing). Make sure your security software is enabled and your software apps are up-to-date on your mobile, computers and other devices to ensure you have the latest security patches. Turn on automatic updates to avoid forgetting.

4. Keep your guard up.

Always be cautious about what you do online and which websites you visit. Make sure you know what to look out for: incorrect spelling and/or grammar in website addresses is often a sign of an illegitimate website. To keep your defence up, use comprehensive security software like McAfee Total Protection, and make sure to back up your data on a regular basis.

5. Be a selective sharer and practise safe surfing.

Be cautious about what you share online, particularly when it comes to your identity information. This is especially important for online shopping and banking. Always make sure that the site’s address starts with “https” instead of just “http”, and has a padlock icon in the URL field. This indicates that your connection to the website is encrypted. Use safe search tools such as McAfee WebAdvisor to help you steer clear of risky websites.
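The “https” check is simple enough to automate. A tiny sketch (illustrative URLs only) that verifies a link uses HTTPS before you trust it with identity or payment details:

```python
from urllib.parse import urlparse

def looks_secure(url: str) -> bool:
    """True only when the URL uses HTTPS - the 'padlock' precondition.

    Note: HTTPS means the connection is encrypted; it does not by itself
    prove the site is trustworthy.
    """
    return urlparse(url).scheme == "https"

print(looks_secure("https://shop.example.com/checkout"))  # → True
print(looks_secure("http://shop.example.com/checkout"))   # → False
```

It is the same check your browser performs before showing the padlock icon, which is why the icon and the “https” prefix always go together.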


Parenting in the digital age can definitely be complicated. As the first generation of digital parents, we are learning on the job – sometimes even making it up as we go. But help is on its way!! Artificial Intelligence will, without a doubt, transform the way we manage our online safety and, in my opinion, make a positive contribution to the next generation of cyber citizens.

Take care

Alex x


The post The Future of Cyber Safety: Could Artificial Intelligence Be The Silver Bullet? appeared first on McAfee Blogs.