Category Archives: artificial intelligence

When all else fails, organizations realize they must share threat intel

A large majority of security IT decision makers are ready and willing to share valuable threat intelligence data to help the collective industry make better, more informed decisions when it comes to cyber attacks, an IronNet Cybersecurity report reveals. To compile the “Collective Offense Calls for a Collective Defense: A Reality Check for Cybersecurity Decision Makers” report, IronNet commissioned survey firm Vanson Bourne to interview 200 U.S. security IT decision makers across many industries including … More

The post When all else fails, organizations realize they must share threat intel appeared first on Help Net Security.

CEOs and business leaders trust AI, but employees are more cautious

Most senior executives (85%) classify themselves as artificial intelligence (AI) optimists, citing increased investment and trust in the technology. Eighty-seven percent say their company will invest in AI initiatives this year, the EY study reveals. The data was collected via an online study conducted by Engine on behalf of EY among a sample of 500 US CEOs and business leaders ages 21 and older who work for a company with US$25m–US$50m in revenue or US$50m … More

The post CEOs and business leaders trust AI, but employees are more cautious appeared first on Help Net Security.

Malware Training Sets: FollowUP

The popular expert Marco Ramilli provided a follow-up to his malware classification work by adding a scripting section that is useful for several purposes.

In 2016 I was working hard to find a way to classify malware families through artificial intelligence (machine learning). One of the first difficulties I met was finding a classified testing set on which to run new algorithms and test specific features. So I came up with this blog post and this GitHub repository, where I proposed a new testing set based on a modified version of the Malware Instruction Set for Behavior-Based Analysis, also referred to as MIST. Since that day I have received hundreds of emails from students, researchers and practitioners all around the world asking me how to follow up on that research and how to contribute to expanding the training set.


I am so glad that many international researchers used my classified malware dataset as a building block for great analyses and for improving the state of the art in malware research. Some of them are listed here, but many other papers, articles and research projects have been released (just ask Google).

Today I finally had the chance to follow it up by adding a scripting section which is useful to: (i) generate the modified version of the MIST files (the ones in the training sets) and (ii) convert the obtained results to ARFF (Attribute-Relation File Format, from the University of Waikato). The first script, named mist_json.py, is a reporting module that can be integrated into a running Cuckoo Sandbox environment. It takes the Cuckoo report and converts it into a modified version of a MIST file. To do that, drop mist_json.py into your running instance of Cuckoo Sandbox v1 (modules/reporting/) and add the specific configuration section to conf/reporting.conf. You might decide to force its execution without configuration by editing the source code directly. The result is a MIST file for each Cuckoo-analysed sample. The MIST file wraps the generated features as described in the original post here. The second script, named fromMongoToARFF.py, converts your JSON objects into ARFF, which can then be imported into WEKA to test your favourite algorithms.
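As a rough illustration of what such a reporting step does (this is a hypothetical sketch, not the actual mist_json.py, which lives in the linked GitHub repository; the field names below are assumptions about a Cuckoo-style report layout):

```python
# Hypothetical sketch: reduce a Cuckoo-style behaviour report to
# MIST-like (category, api, args) triples, one per observed API call.
# Not the real mist_json.py.
import json


def cuckoo_report_to_mist(report):
    """report: dict shaped like a Cuckoo analysis report.
    Returns a list of MIST-like feature triples."""
    mist = []
    for process in report.get("behavior", {}).get("processes", []):
        for call in process.get("calls", []):
            mist.append({
                "category": call.get("category", ""),
                "api": call.get("api", ""),
                "args": [a.get("value", "") for a in call.get("arguments", [])],
            })
    return mist


if __name__ == "__main__":
    sample = {"behavior": {"processes": [{"calls": [
        {"category": "filesystem", "api": "NtCreateFile",
         "arguments": [{"name": "path", "value": "C:\\sample.exe"}]},
    ]}]}}
    print(json.dumps(cuckoo_report_to_mist(sample)))
```

In the real module this conversion runs inside Cuckoo's reporting pipeline and writes one MIST file per analysed sample.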

Now, if you wish, you can generate training sets yourself and test new algorithms directly in WEKA. The creation process follows these steps:

  • Upload the samples into a running Cuckoo Sandbox patched with
    mist_json.py
  • The mist_json.py produces a MIST.json file for each submitted sample
  • Use a simple script to import your desired MIST.json files into MongoDB. For example: for i in */*.json; do mongoimport --db test --collection test --file "$i"; done
  • Use fromMongoToARFF.py to generate the ARFF file
  • Import the generated ARFF into Weka
  • Start your experimental sessions
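For orientation, the JSON-to-ARFF step of the pipeline above can be sketched as follows (a simplified illustration, not the actual fromMongoToARFF.py; the feature layout and attribute names are assumptions):

```python
# Simplified illustration of converting feature dictionaries (as
# exported from MongoDB) into ARFF text for WEKA.
# Not the real fromMongoToARFF.py.
def to_arff(samples, relation="malware"):
    """samples: list of dicts mapping feature name -> numeric value,
    plus a 'family' class label. Returns the ARFF file contents."""
    features = sorted({k for s in samples for k in s if k != "family"})
    families = sorted({s["family"] for s in samples})
    lines = ["@RELATION %s" % relation, ""]
    for f in features:
        lines.append("@ATTRIBUTE %s NUMERIC" % f)
    # The class attribute enumerates the known malware families.
    lines.append("@ATTRIBUTE family {%s}" % ",".join(families))
    lines += ["", "@DATA"]
    for s in samples:
        row = [str(s.get(f, 0)) for f in features] + [s["family"]]
        lines.append(",".join(row))
    return "\n".join(lines)
```

The resulting text can be saved with an .arff extension and opened directly in WEKA's Explorer.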

If you want to share your new MIST classified files with the community, please feel free to make pull requests directly on GitHub. Everybody using this set will appreciate it.

The original post, along with many other interesting analyses, is available on Marco Ramilli's blog:

https://marcoramilli.com/2019/05/14/malware-training-sets-followup/

About the author: Marco Ramilli, Founder of Yoroi


I am a computer security scientist with an intensive hacking background. I hold an MD in computer engineering and a PhD in computer security from the University of Bologna. During my PhD program I worked for the US Government (at the National Institute of Standards and Technology, Security Division), where I did intensive research on malware evasion techniques and penetration testing of electronic voting systems.


I have experience in security testing, having performed penetration testing on several US electronic voting systems. I was also in charge of testing the uVote voting system for the Italian Ministry of Homeland Security. I met Palantir Technologies, where I was introduced to the intelligence ecosystem. I decided to amplify my cybersecurity experience by diving into SCADA security issues with some of the biggest industrial conglomerates in Italy. I finally decided to found Yoroi, an innovative Managed Cyber Security Service Provider that has developed some of the most amazing cybersecurity defence centers I have ever experienced. Now I technically lead Yoroi, defending our customers and strongly believing that Defence Belongs To Humans.

Pierluigi Paganini

(SecurityAffairs – malware, artificial intelligence)

The post Malware Training Sets: FollowUP appeared first on Security Affairs.

Is AI the Answer to never-ending Cybersecurity Problems?

Paul German, CEO of Certes Networks, talks about the impact and benefits of Artificial Intelligence (AI)-driven cybersecurity, and how AI adoption is helping organisations stay ahead in the never-ending game that is cybersecurity.

Artificial Intelligence (AI) isn’t going anywhere anytime soon. With 20% of the C-suite already using machine learning and 41% of consumers believing that AI will improve their lives, wide-scale adoption is imminent across every industry - and cybersecurity is no exception. A lot has changed in the cyber landscape over the past few years, and AI is being pushed to the forefront of conversations. It is becoming more than a buzzword and is delivering true business value. Its ability to aid the cybersecurity industry is increasingly being debated; some argue it has the potential to revolutionise cybersecurity, whilst others insist that the drawbacks outweigh the benefits.

With several issues facing the current cybersecurity landscape - a disappearing IT perimeter, a widening skills gap, increasingly sophisticated cyber attacks and data breaches that continue to hit the headlines - a remedy is needed. The nature of stolen data has also changed: CVV and passport numbers are being compromised, so coupled with regulations such as GDPR, organisations are facing a minefield.

Research shows that 60% think AI has the ability to find attacks before they do damage. But is AI the answer to the never-ending cybersecurity problems facing organisations today?

The Cost-Benefit Conundrum
On one hand, AI could provide an extremely large benefit to the overall framework of cybersecurity defences. On the other, the reality that it equally has the potential to be a danger under certain conditions cannot be ignored. Hackers are fast gaining the ability to foil security algorithms by targeting the data AI technology is training on. Inevitably, this could have devastating consequences.

AI can be deployed by both sides - by the attackers and the defenders. It does have a number of benefits such as the ability to learn and adapt to its current learning environment and the threat landscape. If it was deployed correctly, AI could consistently collect intelligence about new threats, attempted attacks, successful data breaches, blocked or failed attacks and learn from it all, fulfilling its purpose of defending the digital assets of an organisation. By immediately reacting to attempted breaches, mitigating and addressing the threat, cybersecurity could truly reach the next level as the technology would be constantly learning to detect and protect.

Additionally, AI technology has the ability to pick up abnormalities within an organisation’s network and flag it quicker than a member of the cybersecurity or IT team could; AI’s ability to understand ‘normal’ behaviour would allow it to bring attention to potentially malicious behaviour of suspicious or abnormal user or device activity.
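As a toy illustration of that idea (an arbitrary sketch, not any vendor's actual detection algorithm; the event-count feature and the 3-sigma threshold are assumptions), baselining "normal" activity and flagging deviations can be as simple as a z-score check:

```python
# Toy anomaly detector: learn a baseline from historical per-user
# daily event counts, then flag activity far from that baseline.
import statistics


def build_baseline(history):
    """history: list of daily event counts observed for a user."""
    return statistics.mean(history), statistics.stdev(history)


def is_anomalous(count, baseline, threshold=3.0):
    """Flag a count more than `threshold` standard deviations
    away from the learned mean."""
    mean, stdev = baseline
    if stdev == 0:
        return count != mean
    return abs(count - mean) / stdev > threshold
```

Production systems model many such features at once (login times, destinations, data volumes), but the principle is the same: learn what is normal, then surface what is not.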

As with most new technologies, for each positive there is an equal negative. AI could be configured by hackers to learn the specific defences and tools that it runs up against, which would give way to larger and more successful data breaches. Viruses could be created to host this type of AI, producing more malware that can bypass even more advanced security implementations. This approach would likely be favoured by hackers as they don’t even need to tamper with the data itself - they could work out the features of the code a model is using and mirror it with their own. In this particular case, the tables would be turned and organisations could find themselves in sticky situations if they can’t keep up with hackers.

Organisations must be wary that they don’t adopt AI technology in cybersecurity ‘just because.’ As attack surfaces expand and hackers become more sophisticated, cybersecurity strategies must evolve to keep up. AI contributes to this expanding attack surface so when it comes down to deployment, the benefits must be weighed up against the potential negatives. A robust, defence-in-depth Information Assurance strategy is still needed to form the basis of any defence strategy to keep data safe.



Paul German, CEO, Certes Networks

AI & Your Family: The Wows and Potential Risks

Am I the only one? When I hear or see the words Artificial Intelligence (AI), my mind instantly defaults to images from sci-fi movies I’ve seen, like I, Robot, The Matrix, and Ex Machina. There’s always been a futuristic element (and a self-imposed distance) between AI and myself.

But AI is anything but futuristic or distant. AI is here, and it’s now. And, we’re using it in ways we may not even realize.

AI has been woven throughout our lives for years in various expressions of technology. AI is in our homes, workplaces, and our hands every day via our smartphones.

Just a few everyday examples of AI:

  • Cell phones with built-in smart assistants
  • Toys that listen and respond to children
  • Social networks that determine what content you see
  • Social networking apps with fun filters
  • GPS apps that help you get where you need to go
  • Movie apps that predict what show you’d enjoy next
  • Music apps that curate playlists that echo your taste
  • Video games that deploy bots to play against you
  • Advertisers who follow you online with targeted ads
  • Refrigerators that alert you when food is about to expire
  • Home assistants that carry out voice commands
  • Flights you take that operate via an AI autopilot

The Technology

While AI sounds a little intimidating, it’s not when you break it down. AI is technology that can be programmed to accomplish a specific set of goals without assistance. In short, it’s a computer’s ability to be predictive — to process data, evaluate it, and take action.

AI is being implemented in education, business, manufacturing, retail, transportation, and just about any other sector of industry and culture you can imagine. It’s the smarter, faster, more profitable way to accomplish manual tasks.

And there’s tons of AI-generated good going on. Instagram, the #2 most popular social network, is now using AI technology to detect and combat cyberbullying in both comments and photos.

No doubt, AI is having a significant impact on everyday life and is positioned to transform the future.

Still, there are concerns. The self-driving cars. The robots that malfunction. The potential jobs lost to AI robots.

So, as quickly as this popular new technology is being applied, now is a great time to talk with your family about both the exciting potential of AI and the risks that may come with it.

Talking points for families

Fake videos, images. AI is making it easier for people to face swap within images and videos. A desktop application called FakeApp allows users to seamlessly swap faces and share fake videos and images. This has led to the rise in “deep fake” videos that appear remarkably realistic (many of which go viral). Tip: Talk to your family about the power of AI technology and the responsibility and critical thinking they must exercise as they consume and share online content.

Privacy breaches. Following the Cambridge Analytica/Facebook scandal of 2018 that allegedly used AI technology unethically to collect Facebook user data, we’re reminded of those out to gather our private (and public) information for financial or political gain. Tip: Discuss locking down privacy settings on social networks and encourage your kids to be hyper mindful about the information they share in the public feed. That information includes liking and commenting on other content — all of which AI technology can piece together into a broader digital picture for misuse.

Cybercrime. As outlined in McAfee’s 2019 Threats Prediction Report, AI technology will likely allow hackers more ease to bypass security measures on networks undetected. This can lead to data breaches, malware attacks, ransomware, and other criminal activity. Additionally, AI-generated phishing emails are scamming people into handing over sensitive data. Tip: Bogus emails can be highly personalized and trick intelligent users into clicking malicious links. Discuss the sophistication of the AI-related scams and warn your family to think about every click — even those from friends.

IoT security. With homes becoming “smarter” and equipped with AI-powered IoT products, the opportunity for hackers to get into these devices to steal sensitive data is growing. According to McAfee’s Threat Prediction Report, voice-activated assistants are especially vulnerable as a point of entry for hackers. Also at risk, say security experts, are routers, smartphones, and tablets. Tip: Be sure to keep all devices updated. Secure all of your connected devices and your home internet at its source — the network. Avoid routers that come with your ISP (Internet Service Provider) since they are often less secure. And, be sure to change the default password and secure your primary network and guest network with strong passwords.

The post AI & Your Family: The Wows and Potential Risks appeared first on McAfee Blogs.