Category Archives: Threats

Humans vs. Machines: Will Adversarial AI Become the Better Hacker?

Humans versus machines: Who’s the better hacker? The advent of artificial intelligence (AI) brought with it a new set of attacks using adversarial AI, and this influx of threats suggests the answer is likely the machines.

With each innovation in technology comes the reality that attackers who study security tools will find ways to exploit them. AI can make a phone number look like it’s coming from your home area code — or slip past your firewall like a machine learning Trojan horse.

How can organizations fight an unknown enemy that’s not even human?

Humans vs. Machines: The Problem for Security

When cybersecurity company ZeroFOX asked whether humans or machines were better hackers back in 2016, it took to Twitter with an automated end-to-end (E2E) spear phishing attack. The results? According to the experiment, machines are much more effective at getting humans to click on malicious links.

AI models are built with a type of machine learning called deep neural networks (DNNs), which are loosely modeled on the networks of neurons in the human brain. DNNs make machines capable of mimicking human behaviors such as decision-making, reasoning and problem-solving.

When researchers and developers train a model to generate an image, they are trying to depict an object, such as a cup, stop sign or cat. Using machine learning, they can generate data that attempts to mimic real data — and each iteration brings that image closer to the real object. Now apply the same idea to medical imaging: The power of AI offers massive benefits when it comes to analyzing images.

So, what’s the problem for security? “Adversarial examples are inputs (say, images) which have deliberately been modified to produce a desired response by a DNN,” according to IBM Research – Ireland.

The differences between the real and the fabricated are too small for the human eye to catch, yet a trained DNN will pick them up and classify the image as something altogether different — which is exactly what the attacker wants.
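
To make this concrete, here is a minimal sketch (in PyTorch, purely for illustration; it is not taken from IBM's research) of the classic fast gradient sign method, which nudges each pixel of an image in the direction that most increases the classifier's loss:

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Craft an adversarial image with the fast gradient sign method (FGSM).

    `model` is any differentiable classifier, `image` a tensor of shape
    (1, C, H, W) with pixel values in [0, 1], and `epsilon` caps the
    per-pixel change.
    """
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the true label.
    loss = F.cross_entropy(model(image), label)

    # Gradient of the loss with respect to the input pixels.
    model.zero_grad()
    loss.backward()

    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return torch.clamp(adversarial, 0.0, 1.0).detach()
```

Because the change is capped at epsilon per pixel, the altered image looks unchanged to a person, yet the model's prediction can flip to an entirely different class.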

An Adversarial AI Arms Race

As the amount of data increases, nefarious actors will become more efficient at deploying new types of attacks by leveraging adversarial AI. This tactic will make attack attribution even more challenging.

“Adversaries will increase their use of machine learning to create attacks, experiment with combinations of machine learning and AI and expand their efforts to discover and disrupt the machine learning models used by defenders,” according to a 2018 cybercrime report. Enterprises must essentially prepare for an adversarial arms race.

Attacks will also become more affordable, according to the report — an additional bonus for attackers. An attacker can use an AI system to perform functions that would be virtually impossible for humans, given the brain power and technical expertise required to pull them off at scale.

Rage Against the Machine

What’s different about adversarial AI attacks? They can mount the same malicious offensives with far greater speed and depth. While AI is not a fully accessible tool for cybercriminals just yet, its weaponization is quickly growing more widespread. These threats can multiply the variations of an attack, vector or payload and increase the volume of attacks. But beyond speed and scale, the attacks are fundamentally quite similar to current threat tactics.

So, how can organizations defend themselves? IBM recently released the Adversarial Robustness Toolbox to help defend DNNs against weaponized AI attacks, allowing researchers and developers to measure the robustness of their DNN models. This, in turn, will improve AI systems.
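
As a rough sketch of how such a toolbox is used to measure robustness, the snippet below wraps an existing PyTorch model and compares its accuracy on clean and adversarially perturbed test data. The module and class names reflect a recent 1.x release of the Adversarial Robustness Toolbox and have shifted across versions; `model`, `x_test` and `y_test` are placeholders, not code from IBM's documentation.

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Wrap a trained PyTorch model (placeholders: model, x_test, y_test).
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
)

# Craft adversarial versions of the test set and compare accuracy.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y_test)
print(f"clean accuracy {clean_acc:.2%}, adversarial accuracy {adv_acc:.2%}")
```

A large gap between the two numbers is a sign the model needs hardening, for example through adversarial training.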

Sharing intelligence information with the cybersecurity community is also important in building strong defenses. The solution to adversarial AI will come from a combination of technology and policy, but all hands must be on deck. The risks threaten all sectors across public and private institutions. Coordinated efforts among key stakeholders will help to build a more secure future.

After all, the union of man and machine has the power to give defenders a leg up.

Visit the Adversarial Robustness Toolbox and contribute to IBM’s ongoing research into Adversarial AI Attacks

The post Humans vs. Machines: Will Adversarial AI Become the Better Hacker? appeared first on Security Intelligence.

Kaspersky Lab official blog: Mobile beasts and where to find them — part one

In recent years, cybercriminals have been increasingly fixated on our phones. After all, we never part company with our smartphones; they are our primary means for storing personal docs and photos, communicating, and taking pictures. We even use them as tickets and wallets, and much more besides.

They also store oodles of valuable data that can fetch a handsome reward in certain quarters. And mobile devices are excellent for other malicious purposes as well. So there’s no shortage of smartphone malware out there.

Last year we caught 42.7 million pieces of malware on smartphones and tablets. For this series on mobile malware, we divided them into several types according to purpose and behavior. In part one, we look at three fairly common types.

Adware: Ad clickers and intrusive banners

One of the most common types of mobile infection comes in the shape of adware. Its task is to increase the number of clicks on online banners either automatically or manually (by exploiting users). Some just show you unwanted advertising.

In the first case, you don’t even see the ads, but the clicker uses up your smartphone’s resources, including battery charge and mobile data. The infected smartphone’s battery dies in just a few hours, and the next bill may hold an unpleasant surprise.

The second type of adware replaces online banners with ones of its own and drowns the user in so many ads that, like it or not, they end up following some of the links. In many cases, the flow of spam is so overwhelming that the device becomes impossible to use — everything is smothered with ad banners.

Some malware also collects information about your online habits without asking. This data then ends up in the hands of advertisers, who use it to fine-tune their advertising campaigns. What’s more, banners can link to malicious sites where your device might pick up something even worse.

SMS and Web subscribers

The second type of malware we discuss today is subscribers, also known as Trojan clickers. Their job is to steal money from your mobile account, where thievery is much simpler because it bypasses card numbers, which tend to be under tighter guard. The funds flow out through WAP or SMS billing, and in some cases through calls to premium numbers at the victim’s expense.

See here for details of what WAP is and how cybercriminals exploit it. To take out a paid subscription in your name, all the WAP clicker needs to do is click the relevant button on the site. SMS malware requires permission to send messages, but many users grant it to any app without a second thought. Programs that waste your money on IP telephony have a slightly harder task: They have to register an account with the service.

A striking example of a subscriber is the Trojan Ubsod. This pest is a WAP specialist. To conceal its activity for as long as possible, it deletes all SMS messages containing the text string “ubscri” (a fragment of the word “subscribe” or “subscription”). Moreover, it can switch from Wi-Fi to mobile Internet, which is required for WAP operations.

Fortunately, getting rid of unwanted subscriptions isn’t complicated; all subscriptions are displayed in the user’s personal account on the operator’s website. There, you can delete them and even forbid new ones from being linked to the phone number (though in some cases such a block can be imposed only temporarily). The main thing is to notice money leaking from your account as early as possible to prevent a deluge.

SMS flooders and DDoSers

These two categories cover malware that, instead of downloading data, sends it — lots of it! And it does so on the sly, without requesting permission. Scammers are able to make a pretty penny from ruining other people’s lives at your expense.

SMS flooding, for example, is often used by hooligans to tease their victims or disable their devices. A user can willingly install a flooding app on their own device to swamp an enemy with thousands of SMS messages. But many go further and try to send messages at others’ expense, surreptitiously planting the malicious app on the devices of unsuspecting owners.

DDoSers are able to overwhelm not only smartphones, but also far more powerful devices and even major online resources. Cybercriminals do so by combining infected gadgets into a network, known as a botnet, and bombarding a victim with requests from it. Incidentally, clickers can also act as DDoSers when trying to open the same Web page countless times.

Both flooders and DDoSers try to use your smartphone to harm third parties. But you too will suffer from the load on your device’s battery and processor, not to mention your wallet. Typically, such programs are not widely distributed, but in July 2013, the SMS flooder Didat made it into the Top 20 malicious programs sent by e-mail.

The further you get, the harder the going

To be honest, the types of mobile miscreants we’ve covered today are small fry. At worst, they’ll siphon off a bit of cash from your phone account and frazzle your nerves. In any event, many of them are easy to detect and remove with the help of antivirus software.

In the chapters to come, we’ll discuss some villains higher up in the pecking order. Keep track of updates and remember the rules of mobile security:

  • Don’t install apps from third-party sources, or better still, block them in the operating system settings!
  • Keep your mobile OS and all installed apps updated to the latest versions.
  • Protect all of your Android devices with a mobile antivirus solution.
  • Regularly check the list of paid services in your personal account with your mobile operator and disable anything that you didn’t subscribe to yourself. If you see a subscription you don’t recognize, immediately scan the entire device for viruses.
  • Always read the list of permissions requested by an app, and grant only what’s absolutely essential.


Kaspersky Lab official blog: Experiment: How easy is it to spy on a smartwatch wearer?

Can a smartwatch be used to spy on its owner? Sure, and we already know lots of ways. But here’s another: A spying app installed on a smartphone can send data from the built-in motion sensors (namely, accelerometer and gyroscope) to a remote server, and that data can be used to piece together the wearer’s actions — walking, sitting, typing, and so on.

How extensive is the threat in practice, and what data can really be siphoned off? We decided to investigate.

Experiment: Can smartwatch movements reveal a password?

We started with an Android-based smartwatch, wrote a no-frills app to process and transmit accelerometer data, and analyzed what we could get from this data. For more details, see our full report.

The data can indeed be used to work out if the wearer is walking or sitting. Moreover, it’s possible to dig deeper and figure out if the person is out for a stroll or changing subway trains — the accelerometer patterns differ slightly; that’s also how fitness trackers differentiate between, say, walking and cycling.

It’s also easy to see when a person is typing on a computer. But working out what they are typing is way more complex. Everyone has a specific way of typing: the ten-finger method, the one- or two-finger keyboard stab, or something in between. Basically, different people typing the same phrase can produce very different accelerometer signals — although one person entering a password several times in a row will produce pretty similar graphs.

So, a neural network trained to recognize how a particular individual enters text could make out what that person types. And if this neural network happens to be schooled in your particular way of typing, the accelerometer data from the smartwatch on your wrist could be used to recognize a password based on your hand movements.
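
To give a simplified flavor of how such recognition works (a sketch under assumed conditions, not the code from the experiment), the snippet below slices raw accelerometer readings into windows, computes simple per-axis statistics, and trains a small neural network to tell activities apart. The window length, the features, and labels such as "walking" and "typing" are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(samples, window=128):
    """Turn an (N, 3) array of accelerometer samples into per-window features:
    mean, standard deviation and energy for each axis."""
    feats = []
    for start in range(0, len(samples) - window + 1, window):
        w = samples[start:start + window]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                     (w ** 2).mean(axis=0)]))
    return np.array(feats)

# `recordings` maps an activity label ("walking", "typing", ...) to raw samples.
X_parts, y_parts = [], []
for label, data in recordings.items():
    feats = window_features(data)
    X_parts.append(feats)
    y_parts.extend([label] * len(feats))

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
clf.fit(np.vstack(X_parts), np.array(y_parts))

# Classify freshly captured windows from the watch.
print(clf.predict(window_features(new_samples)))
```

Recovering actual keystrokes or a password takes far more granular modeling of one person's hand movements, but the pipeline (capture, window, featurize, classify) is the same.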

However, the training process would require the neural network to track you for quite a long time. The processors in modern portable gadgets are not powerful enough to run a neural network directly, so the data has to be sent to a server.

And therein lies trouble for a would-be spy: The constant upload of accelerometer readings consumes a fair bit of Internet traffic and zaps the smartwatch battery in a matter of hours (six, to be precise, in our case). Both of those telltale signs are easy to spot, alerting the wearer that something is wrong. Both, however, are easily minimized by scooping up data selectively, for example when the target arrives at work, a likely time for password entry.

In short, your smartwatch can be used to identify what you’re typing. But it’s hard, and accurate recovery relies on repeat text entry. In our experiment, we were able to recover a computer password with 96% accuracy and a PIN code entered at an ATM with 87% accuracy.

It could be worse

For cybercriminals, however, such data is not all that useful. To use it, they’d still need access to your computer or credit card. The task of determining a card number or CVC code is way trickier.

Here’s why. On returning to the workplace, the first thing the smartwatch owner types is almost certainly the password to unlock their computer. That is, the accelerometer graph indicates first walking, then typing. Based on data obtained for just this brief period, it’s possible to recover the password.

But the person won’t enter a credit card number as soon as they sit down — or get up and walk away immediately after entering that data. What’s more, no one will ever enter this information several times in short succession.

To steal data-entry information from a smartwatch, attackers need predictable activity followed by data entered several times. The latter part, incidentally, is yet another reason not to use the same password for different services.

Who should worry about smartwatches?

Our research has shown that data obtained from a smartwatch acceleration sensor can be used to recover information about the wearer: movements, habits, some typed information (for example, a laptop password).

Infecting a smartwatch with data-siphoning malware that lets cybercriminals recover this information is quite straightforward. They just need to create an app (say, a trendy clockface or fitness tracker), add a function to read accelerometer data, and upload it to Google Play. In theory, such an app will pass the malware screening, since there is nothing outwardly malicious in what it does.

Should you worry about being spied on by someone using this technique? Only if that someone has a strong motivation to spy on you, specifically. The average cybercrook is after easy pickings and won’t have much to gain.

But if your computer password or route to the office is of value to someone, a smartwatch is a viable tracking tool. In this case, our advice is:

  • Take note if your smartwatch is overly traffic-hungry or the battery drains quickly.
  • Don’t give apps too many permissions. In particular, watch out for apps that want to retrieve account info and geographical coordinates. Without this data, intruders will struggle to ascertain that it’s your smartwatch they’ve infected.
  • Install a security solution on your smartphone that can help detect spyware before it starts spying.


Kaspersky Lab official blog: GDPR bustle: Even scammers have new privacy policy

Recently, you’ve probably been drowning in messages from every service you’ve ever used, informing you of changes to privacy policies and the need to resubscribe to their newsletters in order to carry on receiving them.

No, it’s not an international flash mob of global companies — they’re just trying to fall in line with the EU’s new General Data Protection Regulation (GDPR), which came into force on May 25, 2018.

The GDPR applies to all companies operating in the territory of the EU, and requires them to handle user data responsibly, which includes storing it securely, not transferring it to anyone without the user’s permission, and providing timely notification of any leaks that do happen.

What’s more, companies do not have the right to send messages to users without their consent. That’s why your mailbox is full of resubscription requests — services are keen to keep sending you stuff, but can’t do so without that OK from you, which they are desperately trying to get.

GDPR fraud

Cybercriminals sniffed out a perfect opportunity to harvest a good deal of user data from this situation. After all, millions of people worldwide are blindly clicking “Yes, I agree” in countless messages and entering personal info on multiple sites without a second thought.

For example, we came across a mailshot seemingly on behalf of Apple menacingly informing recipients that their Apple ID is locked and set to be deleted in three days unless they fill out a form to confirm their account information.

Apple is unable to confirm your billing details, the message said, and this allegedly violates the company’s security policy. Your account is frozen and will be deleted within three days, continued the warning, unless you follow the link and enter your data.

This, of course, has nothing to do with Apple. Just plain phishing.

The authors of the mailshot employed the oldest social engineering trick in the book: intimidation. Afraid of parting company with such a precious account, the less savvy user panics and acts rashly, entering data in places where it shouldn’t. Such scams are as effective as they are numerous, i.e. very.

Example of a GDPR-related phishing email on behalf of Apple

How to spot phishing

If you keep a cool head, it’s fairly easy to see that you’re being phished. Let’s take a closer look at this Apple ID-related message.

In most cases, it’s possible to determine that it’s fraud even without opening it. For example, look at the sender’s address in the From field and the topic in the Subject field (see screenshot). There is something obviously fake about a long email address containing generic words and a sequence of numbers, especially when you know that all legitimate messages about the Apple ID account come from appleid@id.apple.com.

The message subject also contains strange numbers that don’t make any sense. Spammers use them to create information noise and make the message look unique. Also pay attention to the RE tag, which means that the received message is a reply to a message that you sent. This is highly suspicious if you never wrote to this company (again, this is done to bypass spam filters).

If the subject and sender’s address aren’t enough, an analysis of the message text should dispel all doubts. No self-respecting company in possession of your personal data will ever address you using your email address instead of your first and last names.

Another way to recognize a fraudulent email is to look at the address of the link that you are being asked to follow. If you hover the mouse cursor over the text of the link, the address it points to will appear nearby or in the bottom-left corner of the browser window. It should not contain any strange domains or short links, such as bit.ly or similar.
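
Much of this checking can be automated. Below is a minimal sketch, assuming a raw message saved as a .eml file, that flags a From address outside the expected domain and any link that points at a URL shortener; the expected domain and the shortener list are assumptions for the example.

```python
import email
import re
from email.utils import parseaddr
from urllib.parse import urlparse

EXPECTED_DOMAIN = "id.apple.com"                  # assumed legitimate domain
SHORTENERS = {"bit.ly", "goo.gl", "tinyurl.com"}  # assumed shortener list

def suspicious_signs(path):
    with open(path, "rb") as f:
        msg = email.message_from_binary_file(f)

    findings = []

    # Does the From address really belong to the expected domain?
    _, addr = parseaddr(msg.get("From", ""))
    if not addr.lower().endswith("@" + EXPECTED_DOMAIN):
        findings.append(f"unexpected sender: {addr}")

    # Pull URLs out of the body and flag shortened links.
    # (A multipart message would need msg.walk(); kept simple here.)
    body = msg.get_payload(decode=True) or b""
    for raw_url in re.findall(rb"https?://[^\s\"'<>]+", body):
        url = raw_url.decode(errors="ignore")
        if urlparse(url).netloc.lower() in SHORTENERS:
            findings.append(f"shortened link: {url}")

    return findings

print(suspicious_signs("message.eml"))
```

No script replaces caution, but automated checks like these catch the crudest fakes before a tired reader clicks.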

How to protect your data

  • Never enter personal data on suspicious sites. All actions involving personal data should be performed on official company websites.
  • Before clicking on a link in a message and agreeing to provide personal information, make sure that the message is genuine. Check the sender’s address, subject, and text for anything untoward. If something looks odd, don’t click on anything. Contact the technical support team of the service in whose name the message was sent. They will help clarify the situation.
  • Use a reliable security solution, such as Kaspersky Internet Security, with anti-spam and anti-phishing components. It will sift out dubious messages and give warning of suspicious links.


Episode 96: State Elections Officials on Front Line against Russian Hackers

In this episode of The Security Ledger Podcast (#96): with primary elections taking place in states across the United States in the coming weeks, we talk to John Dickson about how state elections offices have become the front line in a pitched battle with state-sponsored hackers – with the fate of a 240-year-old democracy hanging in the...


Opinion: Don’t Be Blinded by APTs

In this industry perspective, Thomas Hofmann of Flashpoint says that sensational coverage of advanced persistent threat (APT) actors does little to help small and midsized firms defend their IT environments from more common threats like cyber criminals. The key to getting cyber defense right is understanding the risks to your firm and...


Own goal: email scam nets €2 million for criminals who hijack Lazio transfer

Business email compromise has found its way into the beautiful game. The Italian football club Lazio fell for an email scam and paid €2 million to fraudsters.

Rome daily Il Tempo reported that scammers tricked Lazio by sending an email pretending to be from Feyenoord. Lazio was due to pay the final instalment of a transfer fee to the Dutch club for the defender Stefan de Vrij, who joined the Italians in 2014.

Foul!

The email contained bank transfer details for another Dutch account not belonging to Feyenoord, Il Tempo said. The Dutch club said it didn’t send the email to Lazio, nor did it get the transfer fee. So, in fact both parties suffered a heavy defeat in this case.

Lazio just happens to be the latest to fall foul of a growing problem affecting many kinds of business. Last year the FBI called it ‘the 5 billion dollar scam’, recognising the huge amounts of money fraudsters have made since 2013. Between 2015 and 2017, fraudulent wire transfers grew at a staggering rate of 2,370 per cent.

Offside!

The eye-watering sums of money swirling around the football ‘industry’ make it an enticing target for crooks. Arguably the surprise is that a case like this didn’t come to light sooner. News of multi-million transfer deals is the stuff of back-page stories, bar-room chatter, and online speculation. It’s not hard to imagine a would-be scammer scouring the sports pages for easy research about potential victims. Many of the details for concocting a plausible cover story were in the public domain.

Regardless of the industry, this is a useful test case for infosec professionals. Companies in many industries routinely announce M&A activity, supplier agreements and product launches. News of big deals might be an opportune moment for criminals to strike. Sophos’ Naked Security blog noted that scammers may try to hack actual email accounts in the target company. This would allow them to send scam emails from a genuine address, and then delete them from the sent folder.

As we’ve noted previously on this blog, there’s also a business process aspect to beating the scammers. Requiring approval from more than one director for large-scale payments could reduce the chances of falling for a fake email.

By way of a footnote, the name of the player in question translates from Dutch into English as ‘free’. As Lazio discovered, this transfer was anything but.

The post Own goal: email scam nets €2 million for criminals who hijack Lazio transfer appeared first on BH Consulting.

The DFIR Hierarchy of Needs & Critical Security Controls

As you weigh how best to improve your organization's digital forensics and incident response (DFIR) capabilities heading into 2017, consider Matt Swann's Incident Response Hierarchy of Needs. Likely, at some point in your career (or therapy 😉) you've heard reference to Maslow's Hierarchy of Needs. In summary, Maslow's terms,  physiological, safety, belongingness & love, esteem, self-actualization, and self-transcendence, describe a pattern that human motivations generally move through, a pattern that is well represented in the form of a pyramid.
Matt has made great use of this model to describe an Incident Response Hierarchy of Needs, through which your DFIR methods should move. I argue that his powerful description of capabilities extends to the whole of DFIR rather than response alone. From Matt's Github, "the Incident Response Hierarchy describes the capabilities that organizations must build to defend their business assets. Bottom capabilities are prerequisites for successful execution of the capabilities above them:"

The Incident Response Hierarchy of Needs
"The capabilities may also be organized into plateaus or phases that organizations may experience as they develop these capabilities:"

Hierarchy plateaus or phases
As visualizations, these representations really do speak for themselves, and I applaud Matt's fine work. I would like to propose that a body of references and controls may be of use to you in achieving this hierarchy to its utmost. I also welcome your feedback and contributions regarding how to achieve each of these needs and phases. Feel free to submit controls, tools, and tactics you have or would deploy to be successful in these endeavors; I'll post your submission along with your preferred social media handle.
Aspects of the Center for Internet Security Critical Security Controls Version 6.1 (CIS CSC) can be mapped to each of Matt's hierarchical entities and phases. Below I offer one control and one tool to support each entry. Note that there is a level of subjectivity to these mappings and tooling, but the intent is to help you adopt this thinking and achieve this agenda. Following is an example for each one, starting from the bottom of the pyramid.

 INVENTORY - Can you name the assets you are defending?  
Critical Security Control #1: Inventory of Authorized and Unauthorized Devices
Family: System
Control: 1.4     
"Maintain an asset inventory of all systems connected to the network and the network devices themselves, recording at least the network addresses, machine name(s), purpose of each system, an asset owner responsible for each device, and the department associated with each device. The inventory should include every system that has an Internet protocol (IP) address on the network, including but not limited to desktops, laptops, servers, network equipment (routers, switches, firewalls, etc.), printers, storage area networks, Voice Over-IP telephones, multi-homed addresses, virtual addresses, etc.  The asset inventory created must also include data on whether the device is a portable and/or personal device. Devices such as mobile phones, tablets, laptops, and other portable electronic devices that store or process data must be identified, regardless of whether they are attached to the organization’s network." 
Tool option:
Spiceworks Inventory
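
As a minimal sketch of the record-keeping this control calls for, the snippet below defines an asset record with the fields named in control 1.4 and appends it to a CSV inventory. The file name and the sample device are hypothetical.

```python
import csv
import os
from dataclasses import dataclass, asdict, fields

@dataclass
class Asset:
    ip_address: str
    machine_name: str
    purpose: str
    owner: str
    department: str
    portable: bool         # laptop, phone, tablet, ...
    personal_device: bool  # BYOD or corporate-owned

def append_asset(path, asset):
    """Append one asset record to the inventory CSV, writing a header if the file is new."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fl.name for fl in fields(Asset)])
        if new_file:
            writer.writeheader()
        writer.writerow(asdict(asset))

append_asset("inventory.csv", Asset(
    ip_address="10.0.0.15", machine_name="fin-laptop-03",
    purpose="finance workstation", owner="j.doe",
    department="Finance", portable=True, personal_device=False,
))
```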

 TELEMETRY - Do you have visibility across your assets?  
Critical Security Control #6: Maintenance, Monitoring, and Analysis of Audit Logs
Family: System
Control: 6.6
"Deploy a SIEM (Security Information and Event Management) or log analytic tools for log aggregation and consolidation from multiple machines and for log correlation and analysis. Using the SIEM tool, system administrators and security personnel should devise profiles of common events from given systems so that they can tune detection to focus on unusual activity, avoid false positives, more rapidly identify anomalies, and prevent overwhelming analysts with insignificant alerts."
Tool option:  
AlienVault OSSIM
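
A small sketch of the aggregation side of control 6.6: forwarding application security events to a central syslog collector that a SIEM such as OSSIM can ingest and correlate. The collector host and port are assumptions.

```python
import logging
import logging.handlers

# Ship events to the central collector over UDP syslog (host/port assumed).
handler = logging.handlers.SysLogHandler(address=("siem.example.local", 514))
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

log = logging.getLogger("myapp.security")
log.setLevel(logging.INFO)
log.addHandler(handler)

# Events like these end up aggregated and correlated on the SIEM side.
log.info("login success user=alice src=10.0.0.15")
log.warning("login failure user=admin src=203.0.113.7")
```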

 DETECTION - Can you detect unauthorized activity? 
Critical Security Control #8: Malware Defenses
Family: System
Control: 8.1
"Employ automated tools to continuously monitor workstations, servers, and mobile devices with anti-virus, anti-spyware, personal firewalls, and host-based IPS functionality. All malware detection events should be sent to enterprise anti-malware administration tools and event log servers."
Tool option:
OSSEC Open Source HIDS SECurity

 TRIAGE - Can you accurately classify detection results? 
Critical Security Control #4: Continuous Vulnerability Assessment and Remediation
Family: System
Control: 4.3
"Correlate event logs with information from vulnerability scans to fulfill two goals. First, personnel should verify that the activity of the regular vulnerability scanning tools is itself logged. Second, personnel should be able to correlate attack detection events with prior vulnerability scanning results to determine whether the given exploit was used against a target known to be vulnerable."
Tool option:
OpenVAS         
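
A toy sketch of the correlation control 4.3 describes: join attack-detection events against the latest vulnerability-scan results by host and CVE, so analysts can see at triage time whether an exploit hit a target known to be vulnerable. The data structures and sample values are hypothetical.

```python
# Latest scan results: host -> CVEs found on that host (hypothetical sample).
scan_results = {
    "10.0.0.15": {"CVE-2017-0144", "CVE-2018-7600"},
    "10.0.0.22": {"CVE-2016-6662"},
}

# Attack-detection events exported from IDS logs (hypothetical sample).
detections = [
    {"host": "10.0.0.15", "cve": "CVE-2017-0144", "signature": "SMB exploit attempt"},
    {"host": "10.0.0.22", "cve": "CVE-2018-7600", "signature": "Drupal RCE attempt"},
]

for event in detections:
    known_vulnerable = event["cve"] in scan_results.get(event["host"], set())
    priority = "HIGH: target known vulnerable" if known_vulnerable else "lower: not seen in last scan"
    print(f"{event['host']} {event['signature']} -> {priority}")
```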

 THREATS - Who are your adversaries? What are their capabilities? 
Critical Security Control #19: Incident Response and Management
Family: Application
Control: 19.7
"Conduct periodic incident scenario sessions for personnel associated with the incident handling team to ensure that they understand current threats and risks, as well as their responsibilities in supporting the incident handling team."
Tool option:
Security Incident Response Testing To Meet Audit Requirements

 BEHAVIORS - Can you detect adversary activity within your environment? 
Critical Security Control #5: Controlled Use of Administrative Privileges
Family: System
Control: 5.1
"Minimize administrative privileges and only use administrative accounts when they are required.  Implement focused auditing on the use of administrative privileged functions and monitor for anomalous behavior."
Tool option: 
Local Administrator Password Solution (LAPS)

 HUNT - Can you detect an adversary that is already embedded? 
Critical Security Control #6: Maintenance, Monitoring, and Analysis of Audit Logs       
Family: System
Control: 6.4
"Have security personnel and/or system administrators run biweekly reports that identify anomalies in logs. They should then actively review the anomalies, documenting their findings."
Tool option:
GRR Rapid Response
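
To illustrate the kind of recurring anomaly report control 6.4 calls for, here is a small sketch that counts failed SSH logins per source address in a standard auth log and flags sources well above the norm. The log path and threshold are assumptions.

```python
import re
from collections import Counter

FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def failed_login_report(path, threshold=50):
    """Count failed SSH logins per source IP and flag unusually noisy sources."""
    counts = Counter()
    with open(path, errors="ignore") as log:
        for line in log:
            match = FAILED.search(line)
            if match:
                counts[match.group(1)] += 1

    for ip, n in counts.most_common():
        flag = "  <-- review" if n >= threshold else ""
        print(f"{ip:15s} {n:6d} failed logins{flag}")

failed_login_report("/var/log/auth.log")
```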

 TRACK - During an intrusion, can you observe adversary activity in real time? 
Critical Security Control #12: Boundary Defense
Family: Network
Control: 12.10
"To help identify covert channels exfiltrating data through a firewall, configure the built-in firewall session tracking mechanisms included in many commercial firewalls to identify TCP sessions that last an unusually long time for the given organization and firewall device, alerting personnel about the source and destination addresses associated with these long sessions."
Tool option:
Bro
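
In the spirit of control 12.10, the sketch below reads a Bro/Zeek conn.log and reports TCP sessions whose duration exceeds a threshold that would be unusual for the organization. The four-hour threshold and the log location are assumptions.

```python
def long_sessions(path, max_seconds=4 * 3600):
    """Yield (source, destination, duration) for unusually long TCP sessions
    found in a tab-separated Bro/Zeek conn.log."""
    field_names = []
    with open(path) as log:
        for line in log:
            if line.startswith("#fields"):
                field_names = line.rstrip("\n").split("\t")[1:]
                continue
            if line.startswith("#") or not field_names:
                continue
            rec = dict(zip(field_names, line.rstrip("\n").split("\t")))
            if rec.get("proto") != "tcp" or rec.get("duration", "-") == "-":
                continue
            duration = float(rec["duration"])
            if duration >= max_seconds:
                yield rec["id.orig_h"], rec["id.resp_h"], duration

for src, dst, dur in long_sessions("conn.log"):
    print(f"long-lived session: {src} -> {dst} lasted {dur / 3600:.1f} hours")
```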

 ACT - Can you deploy countermeasures to evict and recover? 
Critical Security Control #20: Penetration Tests and Red Team Exercises       
Family: Application
Control: 20.3
"Perform periodic Red Team exercises to test organizational readiness to identify and stop attacks or to respond quickly and effectively."
Tool option:
Red vs Blue - PowerSploit vs PowerForensics


 COLLABORATE - Can you collaborate with trusted parties to disrupt adversary campaigns? 
Critical Security Control #19: Incident Response and Management       
Family: Application
Control: 19.5
"Assemble and maintain information on third-party contact information to be used to report a security incident (e.g., maintain an e-mail address of security@organization.com or have a web page http://organization.com/security)."
Tool option:
MISP

I've mapped the hierarchy to the controls in the CIS CSC 6.1 spreadsheet, again based on my experience and perspective; yours may differ, but consider similar activity.

CIS CSC with IR Hierarchy mappings


My full mapping of Matt's Incident Response Hierarchy of Needs in the CIS CSC 6.1 spreadsheet is available here: http://bit.ly/CSC-IRH

I truly hope you familiarize yourself with Matt's Incident Response Hierarchy of Needs and find ways to implement, validate, and improve your capabilities accordingly. Consider that the controls and tools mentioned here are but a starting point and that you have many other options available to you. I look forward to hearing from you regarding your preferred tactics and tools as well. Kudos to Matt for framing this essential discussion so distinctly.