Category Archives: AI

2019 State of Malware report: Trojans and cryptominers dominate threat landscape

Each quarter, the Malwarebytes Labs team gathers to share intel, statistics, and analysis of the tactics and techniques made popular by cybercriminals over the previous three months. At the end of the year, we synthesize this data into one all-encompassing report—the State of Malware report—that aims to follow the most important threats, distribution methods, and other trends that shaped the threat landscape.

Our 2019 State of Malware report is here, and it’s a doozy.

In our research, which covers January to November 2018 and compares it against the same period in 2017, we found that two major malware categories dominated the scene: cryptominers positively drenched users at the back end of 2017 and into the first half of 2018, while information stealers in the form of Trojans took over for the second half of the year.

But that’s not all we discovered.

The 2019 State of Malware report follows the top 10 global threats for consumers and businesses, as well as top threats by region and by corporate industry verticals. In addition, we followed noteworthy distribution techniques for the year, as well as popular scams. Some of our findings include:

  • In 2018, we saw a shift in ransomware attack techniques from malvertising and exploits that deliver ransomware as a payload to targeted, manual attacks. The shotgun approach was replaced with brute force, as witnessed in the most successful SamSam campaigns of the year.
  • Malware authors pivoted in the second half of 2018 to target organizations over consumers, recognizing that the bigger payoff was in making victims out of businesses instead of individuals. Overall business detections of malware rose significantly over the last year—79 percent, to be exact—primarily due to the increase in backdoors, miners, spyware, and information stealers.

  • The fallout from the ShadowBrokers’ leak of NSA exploits in 2017 continued, as cybercriminals used SMB vulnerabilities EternalBlue and EternalRomance to spread dangerous and sophisticated Trojans, such as Emotet and TrickBot. In fact, information stealers were the top consumer and business threat in 2018, as well as the top regional threat for North America, Latin America, and Europe, the Middle East, and Africa (EMEA).

Finally, our Labs team stared into its crystal ball and predicted top trends for 2019. Of particular note are the following:

  • Attacks designed to avoid detection, like soundloggers, will slip into the wild.

  • Artificial Intelligence will be used in the creation of malicious executables.

  • Movements such as Bring Your Own Security (BYOS) to work will grow as trust declines.

  • IoT botnets will come to a device near you.

To learn more about top threats and trends in 2018 and our predictions for 2019, download our report from the link below.

2019 State of Malware Report

The post 2019 State of Malware report: Trojans and cryptominers dominate threat landscape appeared first on Malwarebytes Labs.

AI is Sending People To Jail — and Getting it Wrong

At the Data for Black Lives conference last weekend, technologists, legal experts, and community activists put the kind of impact AI has on our lives into sharp perspective with a discussion of America's criminal justice system. There, an algorithm can determine the trajectory of your life. From a report: The US imprisons more people than any other country in the world. At the end of 2016, nearly 2.2 million adults were being held in prisons or jails, and an additional 4.5 million were in other correctional facilities. Put another way, 1 in 38 adult Americans was under some form of correctional supervision. The nightmarishness of this situation is one of the few issues that unite politicians on both sides of the aisle. Under immense pressure to reduce prison numbers without risking a rise in crime, courtrooms across the US have turned to automated tools in attempts to shuffle defendants through the legal system as efficiently and safely as possible. This is where the AI part of our story begins. Police departments use predictive algorithms to strategize about where to send their ranks. Law enforcement agencies use face recognition systems to help identify suspects. These practices have garnered well-deserved scrutiny for whether they in fact improve safety or simply perpetuate existing inequities. Researchers and civil rights advocates, for example, have repeatedly demonstrated that face recognition systems can fail spectacularly, particularly for dark-skinned individuals -- even mistaking members of Congress for convicted criminals. But the most controversial tool by far comes after police have made an arrest. Say hello to criminal risk assessment algorithms.

Read more of this story at Slashdot.

Is AI the Answer to Never-Ending Cybersecurity Problems?

Paul German, CEO, Certes Networks, talks about the impact and benefits of Artificial Intelligence (AI)-driven cybersecurity, and how AI adoption is helping organisations stay ahead in the never-ending game that is cybersecurity.

Artificial Intelligence (AI) isn’t going anywhere anytime soon. With 20% of the C-suite already using machine learning and 41% of consumers believing that AI will improve their lives, wide-scale adoption is imminent across every industry - and cybersecurity is no exception. A lot has changed in the cyber landscape over the past few years, and AI is being pushed to the forefront of conversations. It’s becoming more than a buzzword and is delivering true business value. Its ability to aid the cybersecurity industry is increasingly being debated; some argue it has the potential to revolutionise cybersecurity, whilst others insist that the drawbacks outweigh the benefits.

With several issues facing the current cybersecurity landscape - a disappearing IT perimeter, a widening skills gap, increasingly sophisticated cyber attacks, and data breaches that continue to hit the headlines - a remedy is needed. The nature of stolen data has also changed: CVV and passport numbers are now being compromised, and, coupled with regulations such as GDPR, organisations are facing a minefield.

Research shows that 60% of those surveyed think AI has the ability to find attacks before they do damage. But is AI the answer to the never-ending cybersecurity problems facing organisations today?

The Cost-Benefit Conundrum
On one hand, AI could bring an enormous benefit to the overall framework of cybersecurity defences. On the other, the reality that it equally has the potential to be a danger under certain conditions cannot be ignored. Hackers are fast gaining the ability to foil security algorithms by targeting the very data AI technology trains on - a tactic that could have devastating consequences.

AI can be deployed by both sides - by the attackers and the defenders. It has a number of benefits, such as the ability to learn and adapt to its environment and the current threat landscape. If deployed correctly, AI could consistently collect intelligence about new threats, attempted attacks, successful data breaches, and blocked or failed attacks, and learn from all of it, fulfilling its purpose of defending the digital assets of an organisation. By immediately reacting to attempted breaches and mitigating and addressing the threat, cybersecurity could truly reach the next level, as the technology would be constantly learning to detect and protect.

Additionally, AI technology can pick up abnormalities within an organisation’s network and flag them more quickly than a member of the cybersecurity or IT team could; AI’s ability to understand ‘normal’ behaviour allows it to draw attention to potentially malicious activity from suspicious or abnormal users or devices.
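
As a concrete illustration of that idea, the sketch below trains an unsupervised anomaly detector on hypothetical 'normal' activity and asks it to judge one clearly abnormal session. It is a minimal sketch under invented data, not a production detector; it assumes Python with numpy and scikit-learn available.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-session features: [logins_per_hour, MB_uploaded, failed_auths]
    rng = np.random.default_rng(0)
    normal_activity = rng.normal(loc=[5.0, 20.0, 1.0], scale=[2.0, 8.0, 1.0], size=(500, 3))

    # Learn what 'normal' looks like, without labels.
    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_activity)

    # A clearly abnormal session: a burst of logins, a huge upload, many failures.
    suspicious_session = np.array([[40.0, 900.0, 30.0]])
    print(detector.predict(suspicious_session))  # -1 means flagged as anomalous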

As with most new technologies, for each positive there is an equal negative. AI could be configured by hackers to learn the specific defences and tools it runs up against, giving way to larger and more successful data breaches. Viruses could be created to host this type of AI, producing malware that bypasses even more advanced security implementations. This approach would likely be favoured by hackers because they don’t even need to tamper with the data itself - they could work out the features of the code a model is using and mirror them with their own, as the sketch below illustrates. In this particular case, the tables would be turned, and organisations could find themselves in sticky situations if they can’t keep up with hackers.
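
One way to read that last claim is as model extraction: an attacker who can only query a deployed detector can still train a local copy that mirrors its decisions, then probe the copy offline. The sketch below is a toy illustration with invented data, assuming scikit-learn; it is not any specific attack seen in the wild.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)

    # 'Victim' detector, trained on private data the attacker never sees.
    X_private = rng.normal(size=(500, 4))
    y_private = (X_private[:, 0] - X_private[:, 1] > 0).astype(int)
    victim = LogisticRegression().fit(X_private, y_private)

    # The attacker only submits probe inputs and records the verdicts...
    X_probe = rng.normal(size=(2000, 4))
    y_probe = victim.predict(X_probe)

    # ...then fits a local surrogate that mirrors the victim's behaviour.
    surrogate = DecisionTreeClassifier(max_depth=5).fit(X_probe, y_probe)

    X_fresh = rng.normal(size=(500, 4))
    agreement = (surrogate.predict(X_fresh) == victim.predict(X_fresh)).mean()
    print(f"surrogate matches victim on {agreement:.0%} of fresh inputs")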

Organisations must be wary that they don’t adopt AI technology in cybersecurity ‘just because.’ As attack surfaces expand and hackers become more sophisticated, cybersecurity strategies must evolve to keep up. AI contributes to this expanding attack surface so when it comes down to deployment, the benefits must be weighed up against the potential negatives. A robust, defence-in-depth Information Assurance strategy is still needed to form the basis of any defence strategy to keep data safe.



Paul German, CEO, Certes Networks

A Poker-Playing Robot Goes To Work for the Pentagon

In 2017, a poker bot called Libratus made headlines when it roundly defeated four top human players at no-limit Texas Hold 'Em. Now, Libratus' technology is being adapted to take on opponents of a different kind -- in service of the US military. From a report: Libratus -- Latin for balanced -- was created by researchers from Carnegie Mellon University to test ideas for automated decision making based on game theory. Early last year, the professor who led the project, Tuomas Sandholm, founded a startup called Strategy Robot to adapt his lab's game-playing technology for government use, such as in war games and simulations used to explore military strategy and planning. Late in August, public records show, the company received a two-year contract of up to $10 million with the US Army. It is described as "in support of" a Pentagon agency called the Defense Innovation Unit, created in 2015 to woo Silicon Valley and speed US military adoption of new technology. [...] Sandholm declines to discuss specifics of Strategy Robot's projects, which include at least one other government contract. He says it can tackle simulations that involve making decisions in a simulated physical space, such as where to place military units. The Defense Innovation Unit declined to comment on the project, and the Army did not respond to requests for comment. Libratus' poker technique suggests Strategy Robot might deliver military personnel some surprising recommendations. Pro players who took on the bot found that it flipped unnervingly between tame and hyperaggressive tactics, all the while relentlessly notching up wins as it calculated paths to victory.

Read more of this story at Slashdot.

McAfee Blogs: Artificial Intelligence & Your Family: The Wows & the Risks

Am I the only one? When I hear or see the term Artificial Intelligence (AI), my mind instantly defaults to images from sci-fi movies I’ve seen like I, Robot, The Matrix, and Ex Machina. There’s always been a futuristic element — and self-imposed distance — between AI and myself.

But AI is anything but futuristic or distant. AI is here, and it’s now. And, we’re using it in ways we may not even realize.

AI has been woven throughout our lives for years in various expressions of technology. AI is in our homes, workplaces, and our hands every day via our smartphones.

Just a few everyday examples of AI:

  • Cell phones with built-in smart assistants
  • Toys that listen and respond to children
  • Social networks that determine what content you see
  • Social networking apps with fun filters
  • GPS apps that help you get where you need to go
  • Movie apps that predict what show you’d enjoy next
  • Music apps that curate playlists that echo your taste
  • Video games that deploy bots to play against you
  • Advertisers who follow you online with targeted ads
  • Refrigerators that alert you when food is about to expire
  • Home assistants that carry out voice commands
  • Flights you take that operate via an AI autopilot

The Technology

While AI sounds a little intimidating, it’s not when you break it down. AI is technology that can be programmed to accomplish a specific set of goals without assistance. In short, it’s a computer’s ability to be predictive — to process data, evaluate it, and take action.

AI is being implemented in education, business, manufacturing, retail, transportation, and just about any other sector of industry and culture you can imagine. It’s the smarter, faster, more profitable way to accomplish manual tasks.

And there’s plenty of AI-generated good going on. Instagram — the #2 most popular social network — is now using AI technology to detect and combat cyberbullying in both comments and photos.

No doubt, AI is having a significant impact on everyday life and is positioned to transform the future.

Still, there are concerns: the self-driving cars, the robots that malfunction, the potential jobs lost to AI robots.

So, as quickly as this popular new technology is being applied, now is a great time to talk with your family about both the exciting potential of AI and the risks that may come with it.

Talking points for families

Fake videos, images. AI is making it easier for people to face swap within images and videos. A desktop application called FakeApp allows users to seamlessly swap faces and share fake videos and images. This has led to the rise of “deep fake” videos that appear remarkably realistic (many of which go viral). Tip: Talk to your family about the power of AI technology and the responsibility and critical thinking they must exercise as they consume and share online content.

Privacy breaches. Following the 2018 Cambridge Analytica/Facebook scandal, in which AI technology was allegedly used unethically to collect Facebook user data, we’re reminded of those out to gather our private (and public) information for financial or political gain. Tip: Discuss locking down privacy settings on social networks and encourage your kids to be hyper-mindful about the information they share in the public feed. That information includes liking and commenting on other content — all of which AI technology can piece together into a broader digital picture for misuse.

Cybercrime. As outlined in McAfee’s 2019 Threats Prediction Report, AI technology will likely make it easier for hackers to bypass security measures on networks undetected. This can lead to data breaches, malware attacks, ransomware, and other criminal activity. Additionally, AI-generated phishing emails are scamming people into handing over sensitive data. Tip: Bogus emails can be highly personalized and trick intelligent users into clicking malicious links. Discuss the sophistication of AI-related scams and warn your family to think about every click — even those from friends.

IoT security. With homes becoming “smarter” and equipped with AI-powered IoT products, the opportunity for hackers to get into these devices and steal sensitive data is growing. According to McAfee’s Threat Prediction Report, voice-activated assistants are especially vulnerable as a point of entry for hackers. Also at risk, say security experts, are routers, smartphones, and tablets. Tip: Be sure to keep all devices updated. Secure all of your connected devices and your home internet at its source — the network. Avoid relying on the router that comes from your ISP (Internet Service Provider), since these are often less secure. And be sure to change the default password and secure both your primary network and your guest network with strong passwords.

The post Artificial Intelligence & Your Family: The Wows & the Risks appeared first on McAfee Blogs.

Microsoft India to set up 10 AI labs, train 5 lakh youths in the country

Microsoft to set up 10 AI labs, train 5 lakh youths and upskill 10,000 Indian developers

Microsoft India on Wednesday announced plans to set up Artificial Intelligence (AI) labs in 10 universities in the country over the next three years. The company also plans to upskill over 10,000 developers to bridge the skills gap and enhance employability, and to train 5 lakh (500,000) youths across the country in disruptive technologies.

Microsoft has 715 partners who are working with the company in India to help customers design and implement a comprehensive AI strategy.

“The next wave of innovation for India is being driven by the tech intensity of companies – how you combine rapid adoption of cutting edge tech with your company’s own distinctive tech and business capabilities,” Anant Maheshwari, President of Microsoft India, said at the ‘Media and Analyst Days 2019’ held in Bengaluru.

“We believe AI will enable Indian businesses and more for India’s progress, especially in education, skilling, healthcare, and agriculture. Microsoft also believes that it is imperative to build higher awareness and capabilities on security, privacy, trust, and accountability. The power of AI is just beginning to be realized and can be a game-changer for India.”

According to Microsoft, the company’s AI and cloud technologies have digitally transformed more than 700 customers, of which 60 percent are large manufacturing and financial services enterprises.

The Redmond giant has partnered with the Indian government’s policy think tank, NITI Aayog, to “combine the cloud, AI, research and its vertical expertise for new initiatives and solutions across several core areas including agriculture and healthcare and the environment,” said Microsoft India in an official press release.

“We are also an active participant along with CII in looking at building solution frameworks for application in AI across areas such as Education, skills, health, and agriculture,” the company added.

In December last year, Microsoft had announced a three-year “Intelligent Cloud Hub” collaborative programme in India, which will “equip research and higher education institutions with AI infrastructure, build curriculum and help both faculty and students to build their skills and expertise in cloud computing, data sciences, AI and IoT.”

Source: Microsoft

The post Microsoft India to set up 10 AI labs, train 5 lakh youths in the country appeared first on TechWorm.

Giving Algorithms a Sense of Uncertainty Could Make Them More Ethical

An anonymous reader quotes a report from MIT Technology Review: Algorithms are increasingly being used to make ethical decisions. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers' lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like "freedom" and "well-being," a satisfactory mathematical solution doesn't always exist. "We as humans want multiple incompatible things," says Peter Eckersley, the director of research for the Partnership on AI, who recently released a paper that explores this issue. "There are many high-stakes situations where it's actually inappropriate -- perhaps dangerous -- to program in a single objective function that tries to describe your ethics." These solutionless dilemmas aren't specific to algorithms. Ethicists have studied them for decades and refer to them as impossibility theorems. So when Eckersley first recognized their applications to artificial intelligence, he borrowed an idea directly from the field of ethics to propose a solution: what if we built uncertainty into our algorithms? Eckersley puts forth two possible techniques to express this idea mathematically. He begins with the premise that algorithms are typically programmed with clear rules about human preferences. We'd have to tell it, for example, that we definitely prefer friendly soldiers over friendly civilians, and friendly civilians over enemy soldiers -- even if we weren't actually sure or didn't think that should always be the case. The algorithm's design leaves little room for uncertainty. The first technique, known as partial ordering, begins to introduce just the slightest bit of uncertainty. You could program the algorithm to prefer friendly soldiers over enemy soldiers and friendly civilians over enemy soldiers, but you wouldn't specify a preference between friendly soldiers and friendly civilians. In the second technique, known as uncertain ordering, you have several lists of absolute preferences, but each one has a probability attached to it. Three-quarters of the time you might prefer friendly soldiers over friendly civilians over enemy soldiers. A quarter of the time you might prefer friendly civilians over friendly soldiers over enemy soldiers. The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs, Eckersley says.
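
To make the second technique concrete, here is a small, hypothetical sketch of uncertain ordering: each complete preference list carries a probability, and instead of committing to a single answer the program can summarize how each option fares across the lists. The probabilities and labels come from the example in the story; the code itself is illustrative, not Eckersley's implementation.

    # Uncertain ordering: several absolute preference lists, each with a probability.
    uncertain_ordering = [
        (0.75, ["friendly_soldier", "friendly_civilian", "enemy_soldier"]),
        (0.25, ["friendly_civilian", "friendly_soldier", "enemy_soldier"]),
    ]

    def expected_rank(option):
        # Average position across the weighted lists; lower = more preferred.
        return sum(p * order.index(option) for p, order in uncertain_ordering)

    # Rather than one 'right' answer, surface the options and their trade-offs.
    for option in ["friendly_soldier", "friendly_civilian", "enemy_soldier"]:
        print(f"{option}: expected rank {expected_rank(option):.2f}")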

Read more of this story at Slashdot.

Ethics In Artificial Intelligence: Introducing The SHERPA Consortium

In May of this year, Horizon 2020 SHERPA project activities kicked off with a meeting in Brussels. F-Secure is a partner in the SHERPA consortium – a group consisting of 11 members from six European countries – whose mission is to understand how the combination of artificial intelligence and big data analytics will impact ethics and human rights issues today, and in the future (https://www.project-sherpa.eu/).

As part of this project, one of F-Secure’s first tasks will be to study security issues, dangers, and implications of the use of data analytics and artificial intelligence, including applications in the cyber security domain. This research project will examine:

  • ways in which machine learning systems are commonly mis-implemented (and recommendations on how to prevent this from happening)
  • ways in which machine learning models and algorithms can be adversarially attacked (and mitigations against such attacks) - see the toy sketch after this list
  • how artificial intelligence and data analysis methodologies might be used for malicious purposes
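
For the second item in the list above, here is a toy sketch of what an adversarial attack can look like: nudging each input feature against the weights of a linear detector until its score drops below the decision threshold, the idea behind fast-gradient-style evasion. The weights, features, and threshold are all invented for illustration.

    import numpy as np

    # Toy linear 'detector': score = w.x + b, flag as malicious when score > 0.
    w = np.array([0.9, -0.4, 0.3])
    b = -0.2
    x = np.array([1.2, 0.1, 0.8])  # a sample the detector flags (score = 1.08)

    # Fast-gradient-style evasion: move each feature against the sign of its
    # weight, which is the direction that lowers the score fastest.
    epsilon = 0.8
    x_adv = x - epsilon * np.sign(w)

    print("original score:", w @ x + b)      #  1.08 -> flagged
    print("evaded score:  ", w @ x_adv + b)  # -0.20 -> slips past the detector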

We’ve already done a fair bit of this research*, so expect to see more articles on this topic in the near future!

As strange as it sounds, I sometimes find PowerPoint a good tool for arranging my thoughts, especially before writing a long document. As an added bonus, I have a presentation ready to go, should I need it.

Some members of the SHERPA project recently attended WebSummit in Lisbon – a four-day event with over 70,000 attendees and over 70 dedicated discussions and panels. Topics related to artificial intelligence were prevalent this year, ranging from tech presentations on how to develop better AI, to existential debates on the implications of AI on the environment and humanity. The event attracted a wide range of participants, including many technologists, politicians, and NGOs.

During WebSummit, SHERPA members participated in the Social Innovation Village, where they joined forces with projects and initiatives such as Next Generation Internet, CAPPSI, MAZI, DemocratieOuverte, grassroots radio, and streetwize to push for “more social good in technology and more technology in social good”. Here, SHERPA researchers showcased the work they’ve already done to deepen the debate on the implications of AI in policing, warfare, education, health and social care, and transport.

The presentations attracted the keen interest of representatives from more than 100 large and small organizations and networks in Europe and further afield, including the likes of Founder’s Institute, Google, and Amazon, and also led to a public commitment by Carlos Moedas, the European Commissioner for Research, Science and Innovation. You can listen to the highlights of the conversation here.

To get a preview of SHERPA’s scenario work and take part in the debate click here.

* If you’re wondering why I haven’t blogged in a long while, it’s because I’ve been hiding away, working on a bunch of AI-related research projects (such as this). Down the road, I’m hoping to post more articles and code – if and when I have results to share 😉

Ep. 111 – Crypto AI Blockchain Smoothies at Walmart with Nick Furneaux

What do cryptocurrency, blockchain, artificial intelligence, and Walmart smoothies have to do with social engineering? Join us this month as Nick Furneaux lets us know. Nov 12, 2018

Get Involved

Got a great idea for an upcoming podcast? Send us a quick message on the contact form!

Enjoy the Outtro Music? Thanks to Clutch for allowing us to use Son of Virginia as our new SEPodcast Theme Music

And check out a schedule for all our training at Social-Engineer.Com

Check out the Innocent Lives Foundation to help unmask online child predators.

The post Ep. 111 – Crypto AI Blockchain Smoothies at Walmart with Nick Furneaux appeared first on Security Through Education.

AI and the future of cybersecurity work

In February 2014, journalist Martin Wolf wrote a piece for the London Financial Times[1] titled “Enslave the robots and free the poor.” He began the piece with the following quote:

“In 1955, Walter Reuther, head of the US car workers’ union, told of a visit to a new automatically operated Ford plant. Pointing to all the robots, his host asked: How are you going to collect union dues from those guys? Mr. Reuther replied: And how are you going to get them to buy Fords?”

Most attacks against energy and utilities occur in the enterprise IT network

The United States has not been hit by a paralyzing cyberattack on critical infrastructure like the one that sidelined Ukraine in 2015. That attack disabled Ukraine's power grid, leaving more than 700,000 people in the dark.

But the enterprise IT networks inside energy and utilities networks have been infiltrated for years. Based on an analysis by the U.S. Department of Homeland Security (DHS) and FBI, these networks have been compromised since at least March 2016 by nation-state actors who perform reconnaissance activities, looking for industrial control system (ICS) designs and blueprints to steal.

Near and long-term directions for adversarial AI in cybersecurity

The frenetic pace at which artificial intelligence (AI) has advanced in the past few years has begun to have transformative effects across a wide variety of fields. Coupled with an increasingly (inter)-connected world in which cyberattacks occur with alarming frequency and scale, it is no wonder that the field of cybersecurity has now turned its eye to AI and machine learning (ML) in order to detect and defend against adversaries.

The use of AI in cybersecurity not only expands the scope of what a single security expert is able to monitor, but importantly, it also enables the discovery of attacks that would have otherwise been undetectable by a human. Just as it was nearly inevitable that AI would be used for defensive purposes, it is undeniable that AI systems will soon be put to use for attack purposes.

Choosing an optimal algorithm for AI in cybersecurity

In the last blog post, we alluded to the No-Free-Lunch (NFL) theorems for search and optimization. While NFL theorems are criminally misunderstood and misrepresented in the service of crude generalizations intended to make a point, I intend to deploy a crude NFL generalization to make just such a point.

You see, NFL theorems (roughly) state that given a universe of problem sets where an algorithm’s goal is to learn a function that maps a set of input data X to a set of target labels Y, for any subset of problems where algorithm A outperforms algorithm B, there will be a subset of problems where B outperforms A. In fact, averaging their results over the space of all possible problems, the performance of algorithms A and B will be the same.

With some hand waving, we can construct an NFL theorem for the cybersecurity domain:  Over the set of all possible attack vectors that could be employed by a hacker, no single detection algorithm can outperform all others across the full spectrum of attacks.
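
A quick, hypothetical way to see the flavour of this claim: compare a linear model and a nearest-neighbour model on two toy problems, one linearly separable and one not. The exact scores vary by seed, but the nonlinear 'moons' problem typically favours the k-NN model, while on the linearly separable problem the gap narrows or reverses. This sketch assumes scikit-learn and illustrates the generalization above; it is not a proof of the theorem.

    import numpy as np
    from sklearn.datasets import make_classification, make_moons
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    # Two toy 'problem subsets': one linearly separable, one not.
    problems = {
        "linear": make_classification(n_samples=400, n_features=2, n_redundant=0,
                                      n_informative=2, class_sep=2.0, random_state=0),
        "moons": make_moons(n_samples=400, noise=0.25, random_state=0),
    }

    for name, (X, y) in problems.items():
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        acc_a = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
        acc_b = KNeighborsClassifier().fit(X_tr, y_tr).score(X_te, y_te)
        print(f"{name}: logistic={acc_a:.2f}  knn={acc_b:.2f}")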

Neural networks and deep learning

Deep learning refers to a family of machine learning algorithms that can be used for supervised, unsupervised and reinforcement learning. 

These algorithms have become popular again after many years in the wilderness. The name comes from the realization that adding more and more layers to a neural network enables a model to learn increasingly complex representations of the data.
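
As a bare-bones illustration of that 'more layers, richer representations' point, here is an untrained forward pass through a three-layer network in plain numpy. The layer sizes are arbitrary and training is skipped entirely; the sketch only shows how each layer's output becomes the next layer's input, with a nonlinearity in between.

    import numpy as np

    def relu(x):
        # Nonlinearity between layers; without it, stacked layers
        # collapse into a single linear transformation.
        return np.maximum(0.0, x)

    rng = np.random.default_rng(0)
    layers = [
        rng.normal(size=(4, 8)),  # input (4 features) -> first hidden layer
        rng.normal(size=(8, 8)),  # first hidden -> second hidden ('deeper')
        rng.normal(size=(8, 1)),  # second hidden -> single output
    ]

    def forward(x):
        for i, weights in enumerate(layers):
            x = x @ weights
            if i < len(layers) - 1:
                x = relu(x)  # each layer re-represents the previous layer's output
        return x

    sample = rng.normal(size=(1, 4))
    print(forward(sample))  # one untrained prediction; training would fit the weights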