Monthly Archives: November 2017

Holiday Hackers Can Ruin Website Availability and Security for Retailers

The few days after Thanksgiving are traditionally the peak holiday shopping days in the U.S. Shoppers flood both physical and online stores to check off items on their holiday shopping lists, hoping to score a few bargains. Almost everyone does some shopping online, according to the most recent Pre-Thanksgiving Holiday Retail Survey conducted by Deloitte:

“85 percent of shoppers plan to shop in-store over the holiday weekend and 91 percent plan to cross off their lists online. Despite having so many online shoppers on Black Friday, Cyber Monday continues to be a peak shopping day online. Nearly three-quarters (72 percent) of respondents plan to shop online on Cyber Monday.”

With so many website visitors during that crucial retail shopping window, companies must brace for spikes in traffic, protect their network security and maintain website uptime. Perhaps the number one concern of IT security staff at retailers is data breaches, and for good reason: the Identity Theft Resource Center reported that between January 1 and November 15, 2017, there were 1,172 data breaches, exposing 171,687,965 records.

Of course companies must guard against cyber threats such as phishing scams, malware, ransomware and data exfiltration that harvests personal information such as credit card numbers and email addresses. But they should also be concerned with two types of distributed denial of service (DDoS) attack:

  1. Volumetric DDoS attacks can affect website availability/service by sending a high amount of traffic, or request packets, to the target network in an effort to overwhelm its bandwidth capabilities.
  2. Low volume, short duration attacks often serve as a smokescreen for a security breach such as data theft, or installation of malware or ransomware. In a sub-saturating attack, hackers can take down the target’s assets while leaving Internet connectivity in place.

Some companies will be wary of large-scale, Internet-crippling DDoS attacks, but those smaller attacks often go undetected by legacy, traditional DDoS mitigation solutions. Even if a small attack does trigger a legacy DDoS scrubbing solution, the attack is usually over before the scrubbing activates, which typically takes 10-30 minutes. The only way to keep up with these increasingly sophisticated, frequent and low-volume attacks is to maintain comprehensive visibility and automated mitigation capabilities across a network, so that even everyday DDoS attacks can be detected and blocked instantly, as they occur and before they cause damage.

Although an online retailer could become the victim of a large, volumetric DDoS attack, our DDoS Trends research indicates that 96% of DDoS attacks are 5Gbps or less. Online retailers can block both small and large DDoS attacks before they hit their networks, either with an automated DDoS mitigation appliance or through their Internet Service Provider (if it offers DDoS protection as a service).

For more information, contact us.

Cyber Security Roundup for November 2017

One of the most notable data breaches disclosed this month was by Uber, given the company attempted to cover up the breach by paying off hackers. Over a year ago the transport tech firm was said to have paid £75,000 to two hackers to delete 57 million Uber account records which they had stolen. Uber revealed around 2.7 million of the stolen records related to British riders and drivers. As a UK Uber rider, that could include me, yet I haven't received any notification of the data breach from Uber as yet. The stolen information included names, email addresses, and phone numbers. Uber can expect enforcement action from regulators on both sides of the pond; the UK Information Commissioner's Office (ICO) said it had "huge concerns" about the breach and was investigating.

Jewson, Cash Converters, and Imgur all reported losing data to hacks this month, while Equifax reported significant financial losses following its high-profile breach of personal customer data. Equifax said its net income had dropped by £20 million due to the hack, and that its breach bill was coming in at a whopping £67 million.

November was a very busy month for security patch releases, with Microsoft, Apple, Adobe, Oracle, Cisco and Intel releasing a raft of patches to fix critical vulnerabilities. Apple even had to rush out an emergency patch at the end of November to fix a root-access flaw reported in macOS High Sierra version 10.13.1. So keep patching everything to ensure you and your business stay ahead of enterprising cybercriminals; the Equifax breach is a prime example of what can go wrong when system patching is neglected.

November also saw the Open Web Application Security Project (OWASP) finally release an updated version of its Top Ten application vulnerabilities list, which is 'must know' secure coding best practice for all software developers and security testers, especially considering that Akamai reported web application attacks had increased by 69% in the third quarter of 2017. Look out for updated OWASP Top Ten IBM DeveloperWorks guidance from me in December reflecting the new list.


The imminent threat against industrial control systems

The United States has not been the victim of a paralyzing cyber-attack on critical infrastructure like the one that occurred in Ukraine in 2015. That attack disabled the Ukrainian power grid, leaving more than 700,000 people without power.

But the United States has had its share of smaller attacks against critical infrastructure. Most of these attacks targeted industrial control systems (ICS) and the engineering personnel who have privileged access.

Are home security cameras ready for business use?

After decades of falling crime, rates are on the uptick again. It's no surprise, then, that the global home security camera market, led by low-cost consumer IP “cams”, is expected to be worth $8 billion by 2023. With cameras starting as low as $30, everyone from Amazon to your cable provider to your trusted WiFi vendor is trying to sell you a home security camera. The promise: just plug in your IP camera and be alerted on your phone any time something moves.


Enterprise Security Weekly #71 – Call Me!

James Wilkinson joins us to discuss his transition from the military to the enterprise security space. In the news, updates from Docker, GuardiCore, Trend Micro, Barracuda Networks, and more on this episode of Enterprise Security Weekly! Full Show Notes: https://wiki.securityweekly.com/ES_Episode71

Visit https://www.securityweekly.com/esw for all the latest episodes!

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

As leak investigations surge, our new lawsuit seeks the Trump admin’s guidelines on surveillance of journalists


NEW YORK (November 29, 2017) – Freedom of the Press Foundation and The Knight First Amendment Institute at Columbia University filed a lawsuit today after the government failed to disclose critical portions of its internal guidelines relating to the surveillance of journalists. The lawsuit follows Attorney General Jeff Sessions’ recent statement that the Justice Department currently has 27 open leak investigations, nine times as many investigations as last year.

“The apparent hostility toward the press from senior government officials combined with increasing government surveillance create a dangerous environment for reporters and whistleblowers,” said Knight Institute Staff Attorney Carrie DeCell. “The public has a right to know if the limits on surveillance of journalists are sufficient to ensure a free press.”

In October, the Knight Institute and Freedom of the Press Foundation filed Freedom of Information Act requests with the Justice Department, the National Security Agency, the CIA, and other federal agencies, seeking records concerning the surveillance of journalists and other investigative tactics that threaten the freedoms of speech, association, or the press. In response, those agencies have disclosed only two publicly available documents, prompting the lawsuit.

The organizations are particularly interested in uncovering any relevant revisions to the Justice Department’s “Media Guidelines,” which, notably, contain media subpoena policies that Attorney General Sessions indicated last August he wanted to revisit.

The Knight Institute and Freedom of the Press Foundation also sought any revisions to the FBI Domestic Investigations and Operations Guide (known informally as the “DIOG”) that concern the use of secret “national security letters.” Apparently not subject to the Media Guidelines, national security letters may be used to compel a third party (such as a cellphone provider) to disclose customer records (such as a journalist’s call log). In 2016, the news organization The Intercept published leaked portions of the DIOG indicating that FBI agents have been secretly authorized to obtain journalists’ phone records with the approval of only two internal officials. Emails released to the Freedom of the Press Foundation indicated that these portions of the DIOG may since have been updated, although any updates remain secret.

“The fact that the Justice Department has completely exempted national security letters from the Media Guidelines and can target journalists with them in complete secrecy is an affront to press freedom,” said Trevor Timm, executive director of Freedom of the Press Foundation. “There’s absolutely no reason why these secret rules should not be public.”

The organizations intend to publish any records disclosed as a result of their lawsuit.


About the Freedom of the Press Foundation

The Freedom of the Press Foundation is a non-profit organization that protects and defends adversarial journalism in the 21st century. FPF uses digital security, crowdfunding, and internet advocacy to support journalists and whistleblowers worldwide. 

For more information, contact the Freedom of the Press Foundation at trevor@freedom.press

About the Knight Institute

The Knight First Amendment Institute is a non-partisan, not-for-profit organization established by Columbia University and the John S. and James L. Knight Foundation to defend the freedoms of speech and press in the digital age through strategic litigation, research, and public education.

For more information, contact the Knight Institute at ujala.sehgal@knightcolumbia.org.

Hack Naked News #151 – November 28, 2017

Paul and Michael report on an Exim-ergency, why Uber’s in hot water, Firefox’s new pwnage warnings, 1.7 million breached Imgur accounts, bidding farewell to SMS authentication, voting and security, and more on this episode of Hack Naked News! Full Show Notes: https://wiki.securityweekly.com/HNNEpisode151

Visit http://hacknaked.tv for all the latest episodes!

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Threat Intelligence – An Adaptive Approach to Information Security – Free Consultation Available

Dear blog readers, as of today I'm making publicly available my portfolio of services, including active threat intelligence gathering and processing, cybercriminals and network assets profiling, real-life personalization of malicious actors, OSINT analyses, in-depth understanding and processing of tactics, techniques, and procedures (TTPs), including the…

Further abusing the badPwdCount attribute

Researched and written by Rindert Kramer

Introduction

At Fox-IT, we often do internal penetration tests for our customers. One of the attacks that we perform is password spraying. In a password spraying attack, the attacker tries to authenticate as each of the user accounts found in Active Directory using a single common password. These passwords vary from Summer2017 to Welcome01 and often yield a promising lead on the way to the goal of becoming domain administrator.

Password history check (N-2)

Most companies for which we perform penetration tests use Active Directory as their primary source of user accounts. These user accounts adhere to the password policy that is configured, whether it's a domain-wide or a fine-grained password policy.
Typically, the following settings are configured in such a policy:

  • Number of passwords to remember (password history);
  • Lockout threshold. After x attempts the domain controller will lock the user account;
  • Lockout time. The ‘cool down’ period after a user account has been locked due to too many invalid authentication attempts.

If a user tries to authenticate with a wrong password, the domain controller that handles the authentication request will increment an attribute called badPwdCount. By default, this value is 0. Every time a user fails to authenticate correctly, the domain controller increments this value. Note that this value is not replicated throughout the domain: it is only stored on the domain controller the user tried to authenticate against, and is synchronized with the domain controller that holds the primary domain controller (PDC) role. Thus the PDC holds the authoritative value of the badPwdCount-attribute. If the value of the badPwdCount-attribute reaches the threshold that is set in the password policy, the domain controller will lock the account. The user then cannot authenticate until the cool-down period is over.
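
You can observe this per-DC behaviour by comparing the attribute across domain controllers. The following is a minimal sketch, not part of the original post, assuming the RSAT ActiveDirectory module on a domain-joined host and a hypothetical user john.doe:

    # Sketch: badPwdCount is stored per DC, so query every DC and the PDC emulator separately
    Import-Module ActiveDirectory
    $pdc = (Get-ADDomain).PDCEmulator
    foreach ($dc in (Get-ADDomainController -Filter *)) {
        $count = (Get-ADUser -Identity 'john.doe' -Properties badPwdCount -Server $dc.HostName).badPwdCount
        "{0,-30} badPwdCount = {1}" -f $dc.HostName, $count
    }
    "The PDC ($pdc) holds the authoritative value"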

But what happens if you store your password on all sorts of devices (for authenticating with Exchange, Skype For Business, etc.) and you change your password? That would result in Exchange, Windows or any other service trying to authenticate with an invalid password. If everything works correctly, you should be locked out very soon because of this. However, this is not the case.

If NTLM or Kerberos authentication is used and the user tries to authenticate with a password that is the same as one of the last two entries of their password history, the badPwdCount-attribute is not incremented by the domain controller. That means that the domain controller won’t lock the user account after x failed authentication attempts even though the specified password is incorrect.

Password history N-2 is only supported when NTLM or Kerberos authentication is used and the authentication provider sends the invalid requests to the PDC. This rules out some authentication types such as digest or RADIUS. PEAP and PEAP-CHAP are certificate based, and thus the RADIUS server will not send the failed authentication request to the PDC.
MS-CHAPv2 does use your NTLM hash for authentication, but the hash is packed together with challenge/response data and a session identifier into a SHA hash and sent to the authentication provider. A failed authentication is terminated on the authentication provider and is not forwarded to the PDC. In these cases the badPwdCount-attribute will still be incremented after a failed authentication attempt, even if the user used his previous password.

Attack vector

There is a cool script that takes the value of the badPwdCount-attribute into account when doing a password spraying attack. However, there is another attack vector we can abuse.

Let’s say, as an example, that user john.doe@fox-test.local had Spring2017 as his previous password but, since he had to change his password, now uses Summer2017. The attacker queries the primary domain controller for the value of the badPwdCount-attribute for user john.doe@fox-test.local and tries to authenticate with username john.doe@fox-test.local and password Spring2017. The attacker then queries the primary domain controller for the value of the badPwdCount-attribute again and concludes that the attribute has not been incremented. The attacker then applies some logic and tries to authenticate with password Summer2017. Since this is the correct password, the attacker successfully authenticates as john.doe@fox-test.local.
The following code demonstrates how to do this in PowerShell:

[Screenshot: PowerShell demonstration]
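
The original screenshot is not reproduced here; as a rough reconstruction of the check it illustrates (a sketch rather than Fox-IT's actual script, assuming the RSAT ActiveDirectory module and the hypothetical fox-test.local account from the example above):

    # Read badPwdCount from the PDC, try a candidate (previous) password, read the counter again.
    # If the counter did not move, the guess was the current password or one of the last two (N-2).
    Import-Module ActiveDirectory
    $user     = 'john.doe'
    $password = 'Spring2017'                      # candidate previous password
    $pdc      = (Get-ADDomain).PDCEmulator
    $domainDN = (Get-ADDomain).DistinguishedName

    $before = (Get-ADUser -Identity $user -Properties badPwdCount -Server $pdc).badPwdCount

    # Attempt an LDAP bind against the PDC with the candidate credentials
    $entry = New-Object System.DirectoryServices.DirectoryEntry("LDAP://$pdc/$domainDN", "fox-test\$user", $password)
    try { $null = $entry.NativeObject; "'$password' is the current password for $user" } catch { }

    $after = (Get-ADUser -Identity $user -Properties badPwdCount -Server $pdc).badPwdCount

    if ($after -eq $before) {
        # Counter unchanged: the guess is the current password or one of the last two,
        # so autoincrementing it (Spring2017 -> Summer2017) is a promising next try.
        "badPwdCount unchanged ($after) for $user"
    }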

The badPwdCount-attribute is, by default, readable by every user account and computer account in Active Directory. If an attacker has the credentials of a domain user at his disposal, he can query the value of the badPwdCount-attribute for any given user. If the attacker does not have the credentials of a domain user but does have the credentials of a local administrative account on a domain-joined computer, he can use the computer account to authenticate to Active Directory.

To demonstrate this attack, Fox-IT wrote a Metasploit module and a PowerShell script as Proof of Concept (PoC). The script and module will hopefully be available in Metasploit soon but can also be downloaded here: https://github.com/rapid7/metasploit-framework/pull/9195/files

The Metasploit module acts as a wrapper for the PowerShell script and is capable of the following:

  • Test credentials for a configurable number of user accounts;
  • Show successful authentications;
  • Store credentials of users who successfully authenticated in the Metasploit database;
  • Show user accounts where the badPwdCount-attribute has not been incremented;
  • Autoincrement passwords up to two times. That means that Welcome01 becomes Welcome02, etc.

Result of running the module when only checking the password and if the badPwdCount-attribute has been incremented:

[Screenshot: module output]

Result of brute-forcing (for a maximum of 2 times) when the badPwdCount-attribute has not been incremented:

[Screenshot: module output after brute-forcing]

Please keep the following in mind when using the Metasploit module:

  • Although the script takes the current badPwdCount-value into account when trying to authenticate, Fox-IT cannot guarantee that the script will not lock out user accounts;
  • If a user account is expired and the correct password is used, the user will fail to authenticate but the badPwdCount-attribute will also not increment.

Remediation

To remediate this issue, Fox-IT advises the following:

  • Enforce the use of strong and secure passwords. As you can read everywhere on the internet, a secure password consists of both lower and uppercase characters, numbers and special characters. However, longer passwords take more time to crack, because the time to crack a password increases significantly with each added character (see the back-of-the-envelope check after this list).
    For example, Fox-IT can successfully crack the NTLM hash of an 8-character password within 10 hours, while cracking the NTLM hash of a 10-character password would take up to eleven years;
  • Use passphrases. Passphrases can easily exceed 14 characters, which will also eliminate the possibility of an LM-hash;
  • Use your IDS or IPS (or any other system of your choosing) for detecting password spraying attacks.
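
As a back-of-the-envelope check of the cracking figures above (an illustrative calculation, not from the original post, assuming a 95-character printable keyspace and a constant guessing rate), two extra characters multiply the work by 95^2:

    # Rough check of the scaling claim: 8 characters in ~10 hours vs. 10 characters
    $factor     = [math]::Pow(95, 10) / [math]::Pow(95, 8)   # 95^2 = 9,025 times more candidates
    $hoursFor10 = 10 * $factor                                # scale the 10-hour figure
    $yearsFor10 = $hoursFor10 / (24 * 365)
    "{0:N0} times the keyspace, roughly {1:N1} years at the same guessing rate" -f $factor, $yearsFor10

Ten hours scaled by that factor comes out to roughly a decade, consistent with the "up to eleven years" estimate.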


A New Security Maturity Model: How Does Your Business Stack Up?

The Frost & Sullivan and Secureworks Security Maturity Model (SMM) analyses companies beyond the layers of defence and examines security maturity across five domains 



Tizi: Detecting and blocking socially engineered spyware on Android



Google is constantly working to improve our systems that protect users from Potentially Harmful Applications (PHAs). Usually, PHA authors attempt to install their harmful apps on as many devices as possible. However, a few PHA authors spend substantial effort, time, and money to create and install their harmful app on a small number of devices to achieve a certain goal.

This blog post covers Tizi, a backdoor family with some rooting capabilities that was used in a targeted attack against devices in African countries, specifically: Kenya, Nigeria, and Tanzania. We'll talk about how the Google Play Protect and Threat Analysis teams worked together to detect and investigate Tizi-infected apps and remove and block them from Android devices.
What is Tizi?

Tizi is a fully featured backdoor that installs spyware to steal sensitive data from popular social media applications. The Google Play Protect security team discovered this family in September 2017 when device scans found an app with rooting capabilities that exploited old vulnerabilities. The team used this app to find more applications in the Tizi family, the oldest of which is from October 2015. The Tizi app developer also created a website and used social media to encourage more app installs from Google Play and third-party websites.

Here is an example social media post promoting a Tizi-infected app:

What is the scope of Tizi?


What are we doing?

To protect Android devices and users, we used Google Play Protect to disable Tizi-infected apps on affected devices and have notified users of all known affected devices. The developers' accounts have been suspended from Play.

The Google Play Protect team also used information and signals from the Tizi apps to update Google's on-device security services and the systems that search for PHAs. These enhancements have been enabled for all users of our security services and increase coverage for Google Play users and the rest of the Android ecosystem.

Additionally, there is more technical information below to help the security industry in our collective work against PHAs.


What do I need to do?

Through our investigation, we identified around 1,300 devices affected by Tizi. To reduce the chance of your device being affected by PHAs and other threats, we recommend these 5 basic steps:
  • Check permissions: Be cautious with apps that request unreasonable permissions. For example, a flashlight app shouldn't need access to send SMS messages.
  • Enable a secure lock screen: Pick a PIN, pattern, or password that is easy for you to remember and hard for others to guess.
  • Update your device: Keep your device up-to-date with the latest security patches. Tizi exploited older and publicly known security vulnerabilities, so devices that have up-to-date security patches are less exposed to this kind of attack.
  • Google Play Protect: Ensure Google Play Protect is enabled.
  • Locate your device: Practice finding your device, because you are far more likely to lose your device than install a PHA.

How does Tizi work?

The Google Play Protect team had previously classified some samples as spyware or backdoor PHAs without connecting them as a family. The early Tizi variants didn't have rooting capabilities or obfuscation, but later variants did.

After gaining root, Tizi steals sensitive data from popular social media apps like Facebook, Twitter, WhatsApp, Viber, Skype, LinkedIn, and Telegram. It usually first contacts its command-and-control servers by sending an SMS with the device's GPS coordinates to a specific number. Subsequent command-and-control communications are normally performed over regular HTTPS, though in some specific versions, Tizi uses the MQTT messaging protocol with a custom server. The backdoor contains various capabilities common to commercial spyware, such as recording calls from WhatsApp, Viber, and Skype; sending and receiving SMS messages; and accessing calendar events, call log, contacts, photos, Wi-Fi encryption keys, and a list of all installed apps. Tizi apps can also record ambient audio and take pictures without displaying the image on the device's screen.

Tizi can root the device by exploiting one of the following local vulnerabilities:
  • CVE-2012-4220
  • CVE-2013-2596
  • CVE-2013-2597
  • CVE-2013-2595
  • CVE-2013-2094
  • CVE-2013-6282
  • CVE-2014-3153
  • CVE-2015-3636
  • CVE-2015-1805
Most of these vulnerabilities target older chipsets, devices, and Android versions. All of the listed vulnerabilities are fixed on devices with a security patch level of April 2016 or later, and most of them were patched considerably prior to this date. Devices with this patch level or later are far less exposed to Tizi's capabilities. If a Tizi app is unable to take control of a device because the vulnerabilities it tries to use are all patched, it will still attempt to perform some actions through the high level of permissions it asks the user to grant to it, mainly around reading and sending SMS messages and monitoring, redirecting, and preventing outgoing phone calls.


Samples uploaded to VirusTotal

To encourage further research in the security community, here are some sample applications embedding Tizi that were already on VirusTotal.

Package name: com.press.nasa.com.tanofresh
  SHA256 digest: 4d780a6fc18458311250d4d1edc750468fdb9b3e4c950dce5b35d4567b47d4a7
  SHA1 certificate: 816bbee3cab5eed00b8bd16df56032a96e243201

Package name: com.dailyworkout.tizi
  SHA256 digest: 7c6af091a7b0f04fb5b212bd3c180ddcc6abf7cd77478fd22595e5b7aa7cfd9f
  SHA1 certificate: 404b4d1a7176e219eaa457b0050b4081c22a9a1a

Package name: com.system.update.systemupdate
  SHA256 digest: 7a956c754f003a219ea1d2205de3ef5bc354419985a487254b8aeb865442a55e
  SHA1 certificate: 4d2962ac1f6551435709a5a874595d855b1fa8ab


Additional digests linked to Tizi

To encourage further research in the security community, here are some sample digests of exploits and utilities that were used or abused by Tizi.

Filename: run_root_shell
  SHA256 digest: f2e45ea50fc71b62d9ea59990ced755636286121437ced6237aff90981388f6a

Filename: iovyroot
  SHA256 digest: 4d0887f41d0de2f31459c14e3133debcdf758ad8bbe57128d3bec2c907f2acf3

Filename: filesbetyangu.tar
  SHA256 digest: 9869871ed246d5670ebca02bb265a584f998f461db0283103ba58d4a650333be

The Future of Security Operations: Regaining Balance

Posted under: Research and Analysis

The first post in this series, Behind the 8 Ball, raised a number of key challenges in practicing security in our current environment. These include continual advancement and innovation by attackers seeking new ways to compromise devices and exfiltrate data, increasing complexity of technology infrastructure, frequent changes to said infrastructure, and finally the systemic skills shortage which limits the resources available to handle all the challenges created by the other issues. Basically, practitioners are behind the 8-ball in getting their job done and protecting corporate data.

As we discussed in that earlier post, thinking differently about security entails changing things up to take a (dare we say it?) more enlightened approach, basically focusing the right resources on the right functions. We know it seems obvious that having expensive staff focused on rote and tedious functions is a suboptimal way to deploy resources. But most organizations do it anyway. We prefer to have our valuable, constrained, and usually highly skilled humans doing what humans are good at, such as:

  • identifying triggers that might indicate malicious activity
  • drilling into suspicious activity to understand the depth of attacks and assess potential damage
  • figuring out workarounds to address attacks

Humans in these roles generally know what to look for, but aren’t very good at looking at huge amounts of data to find those patterns. Many don’t like doing the same things over and over again – they get bored and less effective. They don’t like graveyard shifts, and they want work that teaches them new things and stretches their capabilities. Basically they want to work in an environment where they do cool stuff and can grow their skills. And (especially in security) they can choose where they work. If they don’t get the right opportunity in your organization, they will find another which better suits their capabilities and work style.

On the other hand machines have no problem working 24/7 and don’t complain about boring tasks – at least not yet. They don’t threaten to find another place to work, nor do they agitate for broader job responsibilities or better refreshments in the break room. We’re being a bit facetious here, and certainly don’t advocate replacing your security team with robots. But in today’s asymmetric environment, where you can’t keep up with the task list, robots may be your only chance to regain balance and keep pace.

So we will expand a bit on a couple concepts from our Intro to Threat Operations paper, because over time we expect our vision of threat operations to become a subset of SecOps.

  • Enriching Alerts: The idea is to take an alert and add a bunch of common information you know an analyst will want, before sending it to the analyst. This way the analyst doesn’t need to spend time gathering information from those various systems and information sources, and can get right to work validating the alert and determining potential impact.
  • Incident Response: Once an alert has been validated, a standard set of activities are generally part of response. Some of these activities can be automated via integration with affected systems (networks, endpoint management, SaaS, etc.) and the time saved enables responders to focus on higher-level tasks such as determining proliferation and assessing data loss.

Enriching Alerts

Let’s dig into enriching alerts from your security monitoring systems, and how this can work without human intervention. We start with a couple different alerts, and some educated guesses as to what would be useful to an analyst.

  • Alert: Connection to a known bad IP: Let’s say an alert fires for connectivity to a known bad IP address (thanks, threat intel!). With source and destination addresses, an analyst would typically start gathering basic information.
    1. Identity: Who uses the device? With a source IP it’s usually straightforward to see who the address is allocated to, and then what devices that person tends to use.
    2. Target: Using the destination IP, the external site comes into focus. An analyst would probably perform geo-location to figure out where the IP is and a whois query to figure out who owns it. They could also figure out the hosting provider and search their threat intel service to see if the IP belongs to a known botnet, and dig up any associated tactics.
    3. Network traffic: The analyst may also check out network traffic from the device to look for strange patterns (possibly C&C or reconnaissance) or uncharacteristically large volumes to or from that device over the past few days.
    4. Device hygiene: The analyst also needs to know specifics about the device, such as when it was last patched and whether it has a non-standard configuration.
    5. Recent changes: The analyst would probably be interested in software running on the device, and whether any programs have been installed or configurations changed recently.
  • Alert: Strange registry activity: In this scenario an alert is triggered because a device has had its registry changed, and the change cannot be traced back to authorized patches or software installs. The analyst could use similar information to the first example, but device hygiene and recent device changes would be of particular interest. The general flow of network traffic would also be of interest, given that the device may have been receiving instructions or configuration changes from external devices. In isolation registry changes may not be a concern, but in close proximity to a larger inbound data transfer the odds of trouble increase. Additionally, checking the web traffic logs from the device could provide clues to what the user was doing that might have resulted in compromise.

  • Alert: Large USB file transfer: We can also see the impact of enrichment in an insider threat scenario. Maybe an insider used their USB port for the first time recently, and transferred 1GB of data in a 3-hour window. That could generate a DLP alert. At that point it would be good to know which internal data sources the device has been communicating with, and any anomalous data volumes over the past few days, which could indicate information mining in preparation for exfiltration. It would also be helpful to review inbound connections and recent device changes, because the device could have been compromised by an external actor using a remote Trojan to attack the device.

In these scenarios, and another 1,000 we could concoct, all the information the analyst needs to get started is readily available within existing systems and security data/intelligence sources. Whatever tool an analyst uses to triage can be pre-populated with this information.
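
To make this concrete, here is a minimal sketch of what an automated enrichment step could look like. It is illustrative only: the alert field names are hypothetical, and a real deployment would pull identity, hygiene, and reputation data from its own SIEM, CMDB, and threat intelligence feeds rather than the simple lookups shown here.

    # Illustrative enrichment step: take a bare alert and attach context an analyst would
    # otherwise gather by hand (all field names here are hypothetical placeholders).
    function Add-AlertContext {
        param([hashtable]$Alert)   # e.g. @{ SourceIp = '10.1.2.3'; DestIp = '203.0.113.10' }

        # Identity / target: reverse-resolve both ends of the connection
        $Alert['SourceHost'] = (Resolve-DnsName $Alert.SourceIp -ErrorAction SilentlyContinue).NameHost
        $Alert['DestHost']   = (Resolve-DnsName $Alert.DestIp -ErrorAction SilentlyContinue).NameHost

        # Device hygiene: most recent patch on the source host, if reachable over WinRM
        $Alert['LastPatch'] = Invoke-Command -ComputerName $Alert.SourceIp -ErrorAction SilentlyContinue -ScriptBlock {
            (Get-HotFix | Sort-Object InstalledOn -Descending | Select-Object -First 1).InstalledOn
        }

        $Alert['EnrichedAt'] = (Get-Date).ToUniversalTime()
        return $Alert
    }

    # The triage queue then receives a pre-populated alert instead of a bare pair of addresses
    $enriched = Add-AlertContext @{ SourceIp = '10.1.2.3'; DestIp = '203.0.113.10' }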

The ability to enrich alerts doesn’t end there. If files are involved in the connection, the system could automatically poll an external file reputation service to see whether they are recognized as malicious. File samples could be sent to a sandbox to report on what each file actually does, and whether it tends to be part of a known attack pattern. Additionally, if the file turns out to be part of a malware kit, the system could then search for other files known to be related, perhaps across other devices within the organization.

All this can be done before an analyst ever starts processing an alert. These simple examples should be enough to illustrate the potential of automated enrichment to give analysts a chunk of what they need to figure out whether an alert is legitimate, and if so then how much risk it poses.

Incident Response

Once an analyst validates an alert and performs an initial damage assessment, the incident would be sent along to the response team. At this point a number of activities can be performed without a responder’s direct involvement or attention to accelerate response. If you consider potential responses to the alerts above, you can see how orchestration and automation can make responders far more efficient and reduce risk.

  • Connection to known bad IP: Let’s say an analyst determines that a device connected to a known bad IP, because it was compromised and added to a botnet. What would a responder then want to do?
    1. Isolate the device: First the device should be isolated from the network and put on a quarantine network with full packet capture to enable deeper monitoring, and to prevent further data exfiltration.
    2. Forensic images: The responder will need to take an image of the device for further analysis and to maintain chain of custody.
    3. Load forensics tools on the imaged device: The standard set of forensic tools are then loaded up, and the images connected for both disk and memory forensics.

All these functions can happen automatically once an alert is validated and escalated to the response team. The responder starts with images from the compromised device, forensics tools ready to go, and a case file with all available information about the attack and potential adversary at their fingertips when they begin response.
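
As a sketch of how such a runbook might be expressed, the sequence reduces to a few orchestrated calls. Every function below is a hypothetical placeholder for whatever your network access control, imaging, and case management integrations actually expose; none of them are real cmdlets.

    # Hypothetical runbook skeleton: each function stands in for a real integration
    # (network quarantine, forensic imaging, tooling, case management).
    function Invoke-CompromisedHostRunbook {
        param([string]$ComputerName)

        Move-ToQuarantineVlan -ComputerName $ComputerName          # isolate, with full packet capture on the quarantine segment
        $image = New-ForensicImage -ComputerName $ComputerName     # acquire disk and memory images, preserving chain of custody
        Mount-ForensicToolset -ImagePath $image                    # attach the standard disk and memory forensics tools
        New-CaseFile -ComputerName $ComputerName -Evidence $image  # hand the responder a pre-built case file
    }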

But opportunities to work faster and better don’t end here. If the responder discovers a system file that has been changed on the compromised device, they can further automate their process. They can search the security analytics system to see whether that file or a similar one has been downloaded to any other devices, run the file through a sandbox to observe and then search for its behaviors, and (if they get hits on other potentially compromised devices) incorporate additional devices into the response process, isolating and imaging them automatically.

These same techniques apply to pretty much any kind of alert or case that comes across a responder’s desk. The registry alert above applies mostly to memory forensics, but the same general processes apply.

Ditto for the large USB file transfer indicating an insider attack. But if you suspect an insider it’s generally more prudent not to isolate the device, to avoid alerting them. So that alert would trigger a different automated runbook, likely involving full packet capture of the device, analysis of file usage over the past 60-90 days, and notifying Human Resources and Legal of a potential malicious insider.

What is the common thread across all these scenarios? The ability to accelerate SecOps by planning out activities in the form of runbooks, and then orchestrating and automating execution to the greatest extent possible.

Benefits

These seem self-evident, but let’s be masters of the obvious and state them anyway. This potential future for security operations enables you to:

  • React Faster and Better: Your analysts have better information because the alerts they receive include information they currently spend time gathering. Your responders work better because they already have potentially compromised devices isolated and imaged, and a wealth of threat intel about what the attack might be, who is behind it, and their likely next move.
  • Operationalize Process: Your best folks just know what to do, but your other folks typically have no idea, so they stumble and meander through each incident; some figure it out alone, and others give up and look for another gig. If you have your best folks build runbooks which define proper processes for the most common situations, you can minimize performance variation and make everyone more productive.
  • Improve Employee Retention: Employees who work in an environment where they can be successful, with the right tools to achieve their objectives, tend to stay. It’s not about the money for most security folks – they want to do their jobs. If you have systems in place to keep humans doing what they are good at, and your competition (for staff) doesn’t, it becomes increasingly hard for employees to leave. Some will choose to build a similar environment somewhere else – that’s great, and how the industry improves overall. But many realize how hard it is, and what a step backwards it would be to manually do the work you have already automated.

So what are you waiting for? We never like to sell past the close, but we’ll do it anyway. Enriching alerts and incident response are only the tip of the iceberg relative to SecOps processes which can be accelerated and improved with a dose of orchestration and automation. We will wrap up with our next post, detailing a few more use cases which provide overwhelming evidence of our need to embrace the future.

- Mike Rothman

Historical OSINT – Google Docs Hosted Rogue Chrome Extension Serving Campaign Spotted in the Wild

In a cybercrime ecosystem dominated by malicious software releases, cybercriminals continue actively populating their botnet's infected population, further spreading malicious software while earning fraudulent revenue in the process of obtaining access to malware-infected hosts, further compromising the confidentiality, integrity, and availability of the…

Historical OSINT – FTLog Worm Spreading Across Fotolog

In a cybercrime ecosystem dominated by fraudulent propositions, cybercriminals continue actively populating their botnet's infected population, further spreading malicious software while compromising the confidentiality, integrity, and availability of the affected hosts to a multitude of malicious software, while earning fraudulent revenue in the process of…

Historical OSINT – Rogue MyWebFace Application Serving Adware Spotted in the Wild

In a cybercrime ecosystem dominated by malicious software releases, cybercriminals continue actively populating their botnet's infected population, further spreading malicious software, potentially exposing the confidentiality, integrity, and availability of the affected hosts, further spreading malicious software while monetizing access to malware-infected…

Historical OSINT – Koobface Gang Utilizes Google Groups, Serves Scareware and Malicious Software

In a cybercrime ecosystem dominated by malicious software releases, cybercriminals continue actively populating their botnet's infected population, successfully affecting hundreds of thousands of users globally, potentially exposing the confidentiality, integrity, and availability of the affected hosts to a multitude of malicious software, further spreading…

Enterprise Security Weekly #70 – We Have Foreigners Here

Ismael Valenzuela of the SANS Institute joins us. In the news, Rapid7 and Tenable announce new headquarters, Meg Whitman steps down, announcements for CA World ‘17, and more on this episode of Enterprise Security Weekly! Full Show Notes: https://wiki.securityweekly.com/ES_Episode70

Visit https://www.securityweekly.com/esw for all the latest episodes!

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

A Thanksgiving Carol: How Those Smart Engineers at Twitter Screwed Me

Thanksgiving Holiday is a time for family and cheer. Well, a time for family. It's the holiday where we ask our doctor relatives to look at that weird skin growth, and for our geek relatives to fix our computers. This tale is of such computer support, and how the "smart" engineers at Twitter have ruined this for life.

My mom is smart, but not a good computer user. I get my enthusiasm for science and math from my mother, and she has no problem understanding the science of computers. She keeps up when I explain Bitcoin. But she has difficulty using computers. She has this emotional, irrational belief that computers are out to get her.

This makes helping her difficult. Every problem is described in terms of what the computer did to her, not what she did to her computer. It's the computer that needs to be fixed, instead of the user. When I showed her the "haveibeenpwned.com" website (part of my tips for securing computers), it showed her Tumblr password had been hacked. She swore she never created a Tumblr account -- that somebody or something must have done it for her. Except, I was there five years ago and watched her create it.

Another example is how GMail is deleting her emails for no reason, corrupting them, and changing the spelling of her words. She emails the way an impatient teenager texts -- all of us in the family know the misspellings are not GMail's fault. But I can't help her with this because she keeps her GMail inbox clean, deleting all her messages, leaving no evidence behind. She has only a vague description of the problem that I can't make sense of.

This last March, I tried something to resolve this. I configured her GMail to send a copy of all incoming messages to a new, duplicate account on my own email server. With evidence in hand, I would then be able to solve what's going on with her GMail. I'd be able to show her which steps she took, which buttons she clicked on, and what caused the weirdness she's seeing.

Today, while the family was in a state of turkey-induced torpor, my mom brought up a problem with Twitter. She doesn't use Twitter, she doesn't have an account, but they keep sending tweets to her phone, about topics like Denzel Washington. And she said something about "peaches" I didn't understand.

This is how the problem descriptions always start, chaotic, with mutually exclusive possibilities. If you don't use Twitter, you don't have the Twitter app installed, so how are you getting Tweets? Over much gnashing of teeth, it comes out that she's getting emails from Twitter, not tweets, about Denzel Washington -- to someone named "Peaches Graham". Naturally, she can only describe these emails, because she's already deleted them.

"Ah ha!", I think. I've got the evidence! I'll just log onto my duplicate email server, and grab the copies to prove to her it was something she did.

I find she is indeed receiving such emails, called "Moments", about topics trending on Twitter. They are signed with "DKIM", proving they are legitimate rather than from a hacker or spammer. The only way that can happen is if my mother signed up for Twitter, despite her protestations that she didn't.

I look further back and find that there were also confirmation messages involved. Back in August, she got a typical Twitter account signup message. I am now seeing a little bit more of the story unfold with this "Peaches Graham" name on the account. It wasn't my mother who initially signed up for Twitter, but Peaches, who misspelled the email address. It's one of the reasons why the confirmation process exists, to make sure you spelled your email address correctly.

It's now obvious my mom accidentally clicked on the [Confirm] button. I don't have any proof she did, but it's the only reasonable explanation. Otherwise, she wouldn't have gotten the "Moments" messages. My mom disputed this, emphatically insisting she never clicked on the emails.

It's at this point that I made a great mistake, saying:

"This sort of thing just doesn't happen. Twitter has very smart engineers. What's the chance they made the mistake here, or...".

I recognized the condescension in my words as they came out of my mouth, but dug myself deeper with:

"...or that the user made the error?"

This was wrong to say even if I were right. I have no excuse. I mean, maybe I could argue that it's really her fault, for not raising me right, but no, this is only on me.

Regardless of what caused the Twitter emails, the problem needs to be fixed. The solution is to take control of the Twitter account by using the password reset feature. I went to the Twitter login page, clicked on "Lost Password", got the password reset message, and reset the password. I then reconfigured the account to never send anything to my mom again.

But when I logged in I got an error saying the account had not yet been confirmed. I paused. The family dog eyed me in wise silence. My mom hadn't clicked on the [Confirm] button -- the proof was right there. Moreover, it hadn't been confirmed for a long time, since the account was created in 2011.

I interrogated my mother some more. It appears that this has been going on for years. She's just been deleting the emails without opening them, both the "Confirmations" and the "Moments". She made it clear she does it this way because her son (that would be me) instructs her to never open emails she knows are bad. That's how she could be so certain she never clicked on the [Confirm] button -- she never even opens the emails to see the contents.

My mom is a prolific email user. In the last eight months, I've received over 10,000 emails in the duplicate mailbox on my server. That's a lot. She's technically retired, but she volunteers for several charities, goes to community college classes, and is joining an anti-Trump protest group. She has a daily routine for triaging and processing all the emails that flow through her inbox.

So here's the thing, and there's no getting around it: my mom was right, on all particulars. She had done nothing, the computer had done it to her. It's Twitter who is at fault, having continued to resend that confirmation email every couple months for six years. When Twitter added their controversial "Moments" feature a couple years back, somehow they turned on Notifications for accounts that technically didn't fully exist yet.

Being right this time means she might be right the next time the computer does something to her without her touching anything. My attempts at making computers seem rational have failed. That they are driven by untrustworthy spirits is now a reasonable alternative.

Those "smart" engineers at Twitter screwed me. Continuing to send confirmation emails for six years is stupid. Sending Notifications to unconfirmed accounts is stupid. Yes, I know at the bottom of the message it gives a "Not my account" selection that she could have clicked on, but it's small and easily missed. In any case, my mom never saw that option, because she's been deleting the messages without opening them -- for six years.

Twitter can fix their problem, but it's not going to help mine. Forever more, I'll be unable to convince my mom that the majority of her problems are because of user error, and not because the computer people are out to get her.


Necurs’ Business Is Booming In A New Partnership With Scarab Ransomware

Necurs’ spam botnet business is doing well as it is seemingly acquiring new customers. The Necurs botnet is the biggest deliverer of spam with 5 to 6 million infected hosts online monthly, and is responsible for the biggest single malware spam campaigns. Its service model provides the whole infection chain: from spam emails with malicious malware downloader attachments, to hosting the payloads on compromised websites.


Necurs is contributing a fair bit to the malicious spam traffic we observe.

The Necurs botnet is most renowned for distributing the Dridex banking Trojan, Locky ransomware, and “pump-and-dump” penny-stock spam. Since 2016 it has expanded its deliverables beyond these three and has offered other families of ransomware, such as GlobeImposter and Jaff, and the banking trojan Trickbot to its customer base, with Locky remaining its brand-image malware deliverable, with multiple malware spam campaigns per week.

This morning at 9AM (Helsinki time, UTC +2) we observed the start of a campaign with malicious .vbs script downloaders compressed with 7zip. The email subject lines are “Scanned from (Lexmark/HP/Canon/Epson)” and the attachment filename is formatted as “image2017-11-23-(7 random digits).7z“.

The final payload (to our surprise) was Scarab ransomware, which we haven’t seen previously delivered in massive spam campaigns. Scarab ransomware is a relatively new ransomware variant first observed last June, and its code is based on the open source “ransomware proof-of-concept” called HiddenTear.

This version doesn’t change the file names, but appends a new file extension to the encrypted files with “.[suupport@protonmail.com].scarab”, and drops the following ransom note after the encryption:

[Screenshot: ransom note]

The spam campaigns from Necurs follow the same format from campaign to campaign, consisting of social engineering subject line themes varying from financial to office utilities, with very minimal text body contents, usually spiced up with malicious attachments, sometimes just URLs. And as the simple social engineering themes are effective, Necurs tends to re-use the spam themes in its campaigns, sometimes within a rather short cycle. In this particular case, the subject lines used in this spam campaign were last seen in a Locky ransomware campaign exactly two weeks ago, the only difference being the extension of the attached downloader.


This has already given Scarab ransomware a massive popularity bump, according to submissions to the ID Ransomware service.

We’re interested to see the future affiliations of this massive botnet and observe how it’s able to change the trends and popularity of malware types and certain families. In the meanwhile, we’ll keep blocking these threats, keeping our customers safe.

IOCs:

b4a671ec80135bfb1c77f5ed61b8a3c80b2b6e51
7ac23eee5e15226867f5fbcf89f116bb01933227
d31beec9e2c7b312ecedb594f45a9f5174155c68
85dc3a0b833efb1da2efdcd62fab565c44f22718
da1e2542b418c85f4b57164e46e04e344db58ab8
a6f1f2dd63d3247adb66bd1ff479086207bd4d2b
14680c48eec4e1f161db1a4a990bd6833575fc8e
af5a64a9a01a9bd6577e8686f79dce45f492152e
c527bc757a64e64c89aaf0d9d02b6e97d9e7bb3d
3f51fb51cb1b9907a7438e2cef2e538acda6b9e9
b0af9ed37972aab714a28bc03fa86f4f90858ef5
6fe57cf326fc2434c93ccc0106b7b64ec0300dd7
http://xploramail.com/JHgd476?
http://miamirecyclecenters.com/JHgd476?
http://hard-grooves.com/JHgd476?
http://xploramail.com/JHgd476?
http://atlantarecyclingcenters.com/JHgd476?
http://pamplonarecados.com/JHgd476?
http://hellonwheelsthemovie.com/JHgd476?

Don Jr.: I’ll bite

So Don Jr. tweets the following, which is an excellent troll. So I thought I'd bite. The reason is I just got through debunking Democrat claims about NetNeutrality, so it seems like a good time to balance things out and debunk Trump nonsense.

The issue here is not which side is right. The issue here is whether you stand for truth, or whether you'll seize any factoid that appears to support your side, regardless of the truthfulness of it. The ACLU obviously chose falsehoods, as I documented. In the following tweet, Don Jr. does the same.

It's a preview of the hyperpartisan debates you are likely to have across the dinner table tomorrow, with each side trying to outdo the other in the falsehoods they'll claim.

What we see in this number is a steady trend of these statistics since the Great Recession, with no evidence in the graphs showing how Trump has influenced these numbers, one way or the other.

Stock markets at all time highs

This is true, but it's obviously not due to Trump. The stock markets have been steadily rising since the Great Recession. Trump has done nothing substantive to change the market trajectory. Also, he hasn't inspired the market to change its direction.


To be fair to Don Jr., we've all been crediting (or blaming) presidents for changes in the stock market despite the fact they have almost no influence over it. Presidents don't run the economy, it's an inappropriate conceit. The most influence they've had is in harming it.

Lowest jobless claims since 73

Again, let's graph this:


As we can see, jobless claims have been on a smooth downward trajectory since the Great Recession. It's difficult to see here how President Trump has influenced these numbers.

6 Trillion added to the economy

What he's referring to is that assets have risen in value, like the stock market, homes, gold, and even Bitcoin.

But this is a well-known fallacy known as Mercantilism: believing the "economy" is measured by the value of its assets. This was debunked by Adam Smith in his book "The Wealth of Nations", where he showed instead that the "economy" is measured by how much it produces (GDP - Gross Domestic Product) and not by assets.

GDP has grown at 3.0%, which is pretty good compared to the long term trend, and is better than Europe or Japan (though not as good as China). But Trump doesn't deserve any credit for this -- today's rise in GDP is the result of stuff that happened years ago.

Assets have risen by $6 trillion, but that's not a good thing. After all, when you sell your home for more money, the buyer has to pay more. So one person is better off and one is worse off, so the net effect is zero.

Actually, such an asset price increase is a worrisome indicator -- we are entering into bubble territory. It's the result of a loose monetary policy, low interest rates and "quantitative easing" that was designed under the Obama administration to stimulate the economy. That's why all assets are rising in value. Normally, a rise in one asset means a fall in another, like selling gold to pay for houses. But because of loose monetary policy, all assets are increasing in price. The amazing rise in Bitcoin over the last year is as much a result of this bubble growing in all assets as it is of an exuberant belief in Bitcoin.

When this bubble collapses, which may happen during Trump's term, it'll really be the Obama administration who is to blame. I mean, if Trump is willing to take credit for the asset price bubble now, I'm willing to give it to him, as long as he accepts the blame when it crashes.

1.5 million fewer people on food stamps

As you'd expect, I'm going to debunk this with a graph: the numbers have been falling since the Great Recession. Indeed, in the previous period under Obama, 1.9 million people got off food stamps, so Trump's performance is slightly behind rather than ahead of Obama's. Of course, neither president is really responsible.

Consumer confidence through the roof

Again we are going to graph this number:


Again we find nothing in the graph that suggests President Trump is responsible for any change -- it's been improving steadily since the Great Recession.

One thing to note is that, technically, it's not "through the roof" -- it's still quite a bit below the roof set during the dot-com era.

Lowest Unemployment rate in 17 years

Again, let's simply graph it over time and look for Trump's contribution. As we can see, there doesn't appear to be anything special Trump has done -- unemployment has been steadily improving since the Great Recession.


But here's the thing, the "unemployment rate" only measures those looking for work, not those who have given up. The number that concerns people more is the "labor force participation rate". The Great Recession kicked a lot of workers out of the economy.


Mostly this is because Baby Boomers are now retiring and leaving the workforce, and some have chosen to retire early rather than look for another job. But there are still some other problems in our economy that contribute to this. President Trump has done nothing in particular to solve these problems.

Conclusion

As we see, Don Jr's tweet is a troll. When we look at the graphs of these indicators going back to the Great Recession, we don't see how President Trump has influenced anything. The improvements this year are in line with the improvements last year, which are in turn in line with the improvements the year before.

To be fair, all parties credit their President with improvements during their term. President Obama's supporters did the same thing. But at least right now, with these numbers, we can see that there's no merit to anything in Don Jr's tweet.

The hyperpartisan rancor in this country is because neither side cares about the facts. We should care. We should care that these numbers suck, even if we are Republicans. Conversely, we should care that those NetNeutrality claims by Democrats suck, even if we are Democrats.



Protecting net neutrality is an important press freedom issue

Net neutrality
Jason Leung

The Federal Communications Commission released its proposal to kill net neutrality on Tuesday, which would end the restrictions on internet service providers (ISPs) that attempt to guarantee a free and open internet. Rolling back net neutrality has worrying and dangerous implications specifically for press freedom in a world in which journalism and the internet are increasingly intertwined.

Net neutrality is the principle that internet connections are not filtered, restricted, or manipulated based on their content. In the United States, that principle is enforced by a set of rules designed to ensure that neither internet service providers nor the government can tamper with the delivery speed or accessibility of certain data. ISPs must treat all internet traffic equally, and with these protections, the information that media organizations report and publish is not driven or interfered with by companies who sell internet access.

Without net neutrality, ISPs could form deals or partnerships with news organizations that could change what news coverage their users access. ISPs—some of whom own news outlets themselves—would have broad power to determine which stories internet users see, and could slow or block access to news coverage by competitor news outlets. Instead of choice between diverse coverage of news, internet users could be stifled from viewing reporting that their ISP deems unfavorable, whether for political or financial reasons.

Partnerships between ISPs and news organizations could hurt local and independent journalism by limiting access to coverage by news websites that aren’t partnering with the ISP. Alternative sources of media that offer diverse political perspectives might load significantly slower than partnering news websites, not show up in search results, or cost extra to view.

Most Americans rely on the internet to stay informed. According to the Pew Research Center, more than half of Americans and 71% of Americans under 30 get most of their news online. Internet access is dominated by only a few providers, such as Verizon and Comcast Xfinity, so internet users would have little ability to switch to an internet service provider that offers access to diverse coverage. Ending net neutrality would concentrate the power over the flow of information and accessibility of content in just a few companies.

With a vote on FCC chairman Ajit Pai’s proposal to end net neutrality possible as early as December, it’s crucial to act quickly to defend the principles that create the internet as we know it. Consider calling or emailing your representative today and demand that they act to defend internet rights by opposing the FCC’s proposal to kill net neutrality.

Journalism has been transformed by the internet, offering unprecedented, rapidly available access to news around the world. The internet exists as a site of creativity, innovation, and robust political debate precisely because ISPs are mandated to treat content neutrally. In the digital age in which journalism and the internet are increasingly interconnected, internet users, not the companies that sell internet access, should control the news they read.

Net neutrality is a press freedom issue and a First Amendment issue, because equal access to information and diverse journalism is imperative for a functioning democracy. 

NetNeutrality vs. limiting FaceTime

People keep retweeting this ACLU graphic in regards to NetNeutrality. In this post, I debunk the fourth item. In previous posts [1] [2] I debunk other items.


But here's the thing: the FCC allowed these restrictions, despite the FCC's "Open Internet" order forbidding such things. In other words, despite the graphic's claims it "happened without net neutrality rules", the opposite is true, it happened with net neutrality rules.

The FCC explains why they allowed it in their own case study on the matter. The short version is this: AT&T's network couldn't handle the traffic, so it was appropriate to restrict it until some time in the future (the LTE rollout) until it could. The issue wasn't that AT&T was restricting FaceTime in favor of its own video-calling service (it didn't have one), but it was instead an issue of "bandwidth management".

When Apple released FaceTime, they themselves restricted its use to WiFi, preventing its use on cell phone networks. That's because Apple recognized mobile networks couldn't handle it.

When Apple flipped the switch and allowed its use on mobile networks, because mobile networks had gotten faster, they clearly said "carrier restrictions may apply". In other words, Apple was saying "carriers may restrict FaceTime with our blessing if they can't handle the load".

When Tim Wu wrote his paper defining "NetNeutrality" in 2003, he anticipated just this scenario. He wrote:
"The goal of bandwidth management is, at a general level, aligned with network neutrality."
He doesn't give "bandwidth management" a completely free pass. He mentions the issue frequently in his paper with a less favorable description, such as here:
Similarly, while managing bandwidth is a laudable goal, its achievement through restricting certain application types is an unfortunate solution. The result is obviously a selective disadvantage for certain application markets. The less restrictive means is, as above, the technological management of bandwidth. Application-restrictions should, at best, be a stopgap solution to the problem of competing bandwidth demands. 
And that's what AT&T's FaceTime limiting was: an unfortunate stopgap solution until LTE was more fully deployed, which is fully allowed under Tim Wu's principle of NetNeutrality.

So the ACLU's claim above is fully debunked: such things did happen even with NetNeutrality rules in place, and should happen.

Finally, and this is probably the most important part, AT&T didn't block it in the network. Instead, they blocked the app on the phone. If you jailbroke your phone, you could use FaceTime as you wished. Thus, it's not a "network" neutrality issue because no blocking happened in the network.

NetNeutrality vs. Verizon censoring Naral

People keep retweeting this ACLU graphic in support of net neutrality. It's wrong. In this post, I debunk the second item. I debunk other items in other posts [1] [4].


Firstly, it's not a NetNeutrality issue (which applies only to the Internet), but an issue with text-messages. In other words, it's something that will continue to happen even with NetNeutrality rules. People relate this to NetNeutrality as an analogy, not because it actually is such an issue.

Secondly, it's an edge/content issue, not a transit issue. The detail in this case is that Verizon provides a program for sending bulk messages to its customers from the edge of the network. Verizon isn't censoring text messages in transit, but at the edge. You can send a text message to your friend on the Verizon network, and it won't be censored. Thus the analogy is incorrect -- the correct analogy would be with content providers like Twitter and Facebook, not ISPs like Comcast.

Like all cell phone vendors, Verizon polices this content, canceling accounts that abuse the system, like spammers. We all agree such censorship is a good thing, and that such censorship by content providers is not remotely a NetNeutrality issue. Content providers do this not because they disapprove of the content of spam so much as because of the distaste their customers have for spam.

Content providers that are political, rather than neutral to politics, are indeed worrisome. It's not a NetNeutrality issue per se, but it is a general "neutrality" issue. We free-speech activists want all content providers (Twitter, Facebook, Verizon mass-texting programs) to be free of political censorship -- though we don't want government to mandate such neutrality.

But even here, Verizon may be off the hook. They appear to be censoring not one political view over another, but the controversial/unsavory way Naral expresses its views. Presumably, Verizon would be okay with less controversial political content.

In other words, as Verizon expresses its principles, it wants to block content that drives away customers, but is otherwise neutral to the content. While this may unfairly target controversial political content, it's at least basically neutral.

So in conclusion, while activists portray this as a NetNeutrality issue, it isn't. It's not even close.

NetNeutrality vs. AT&T censoring Pearl Jam

People keep retweeting this ACLU graphic in response to the FCC's net neutrality decision. In this post, I debunk the first item on the list. In other posts [2] [4] I debunk other items.


First of all, this obviously isn't a Net Neutrality case. The case isn't about AT&T acting as an ISP transiting network traffic. Instead, this was about AT&T being a content provider, through their "Blue Room" subsidiary, whose content traveled across other ISPs. Such things will continue to happen regardless of the most stringent enforcement of NetNeutrality rules, since the FCC doesn't regulate content providers.

Second of all, it wasn't AT&T who censored the traffic. It wasn't their Blue Room subsidiary who censored the traffic. It was a third party company they hired to bleep things like swear words and nipple slips. You are blaming AT&T for a decision by a third party that went against AT&T's wishes. It was an accident, not AT&T policy.

Thirdly, and this is the funny bit, Tim Wu, the guy who defined the term "net neutrality", recently wrote an op-ed claiming that while ISPs shouldn't censor traffic, content providers should. In other words, he argues that companies like AT&T's Blue Room should censor political content.

What activists like the ACLU say about NetNeutrality has as little relationship to the truth as Trump's tweets. Both pick "facts" that agree with them only so long as you don't look into them.

Startup Security Weekly #63 – In the Books

Darren Mar-Elia of Semperis joins us. In the news, deciding with speed and conviction, learning from unicorns, starting your social enterprise, and updates from ThreatQuotient, Symantec, Optiv, and more on this episode of Startup Security Weekly! Full Show Notes: https://wiki.securityweekly.com/SSWEpisode63

Visit https://www.securityweekly.com/ssw for all the latest episodes!

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Hack Naked News #150 – November 21, 2017

Don Pezet of ITProTV joins Paul to discuss Amazon S3 buckets, Google collecting Android data, secret spyware in smartwatches, and patches for Microsoft, Intel, HP, and more on this episode of Hack Naked News! Full Show Notes: https://wiki.securityweekly.com/HNNEpisode150

Visit http://hacknaked.tv for all the latest episodes!

 

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Endpoint Advanced Protection Buyer’s Guide: Top 10 Questions for Detection and Response

Posted under: Research and Analysis

There are plenty of obvious questions you could ask each endpoint security vendor. But they don’t really help you understand the nuances of their approach, so we decided to distill the selection criteria down to a few key points. We will provide both the questions and the reasons behind them.

Q1: Where do you draw the line between prevention and EDR?

The clear trend is towards an integrated advanced endpoint protection capability addressing prevention, detection, response, and hunting. That said, it may not be the right answer for any specific organization, depending on the adversaries they face and the sophistication & capabilities of their internal team. As discussed under selection criteria for Prevention, simple EDR (EDR-lite) is already bundled into a few advanced prevention products, accelerating this integration and emphasizing the importance of deciding whether the organization needs separate tools for prevention and detection/response/hunting.

Q2: How does your product track a campaign, as opposed to just looking for attacks on single endpoints?

Modern attacks rarely focus on just one endpoint – they tend to compromise multiple devices as the adversary advances towards their objective. To detect and respond to such modern attacks, analysis needs to look not merely at what’s happening on a single endpoint, but also at how that endpoint is interacting with the rest of the environment – looking for broader indications of reconnaissance, lateral movement, and exfiltration.

Q3: Is detection based on machine learning? Does your analysis leverage the cloud? How do your machine learning models handle false positives?

Advanced analytics are not the only way to detect attacks, but they are certainly among the key techniques. This question addresses the vendor’s approach to machine learning, digs into where they perform analysis, and gets at the breadth of the data they use to train ML models. Finally, you want the vendor to pass a sniff test on false positives. If any vendor claims they don’t have false positives, run away fast.

Q4: Does your endpoint agent work in user or kernel mode? What kind of a performance impact does your agent have on devices?

The answer is typically ‘both’, because certain activities cannot be monitored or prevented purely from user space or purely from kernel mode. For monitoring and EDR it’s possible to stay within user mode, but that limits automated remediation capability, because some attacks need to be dealt with at the kernel level. Of course, with many agents already in use on typical endpoints, when considering adding another for EDR you will want to understand the performance characteristics of the new agent.

Q5: Do we need “Full DVR”, or is collecting endpoint metadata sufficient?

This question should reveal the vendor’s response religion – some believe comprehensive detection and/or response can work using only metadata from granular endpoint telemetry, while others insist that a full capture of all endpoint activity is necessary to effectively respond and to hunt for malicious activity. The truth is somewhere in the middle, depending on your key use case. Detection-centric environments can run well on metadata, but if response/hunting is your driving EDR function, access to full device telemetry is more important because attackers tend to cover their tracks using self-deleting files and other techniques to obfuscate their activities.

Keep in mind that the EDR architecture is a major factor here, as central analysis of metadata can provide excellent detection, with full telemetry stored temporarily on each device in case it is needed for response.

Q6: How is threat intelligence integrated into your agent?

This answer should be about more than getting the latest indicators of compromise and patterns for attacks involving multiple devices. Integrated threat intel provides the ability to search historical telemetry for attacks you didn’t recognize as attacks at the time (retrospective search). You should also be able to share intelligence with a community of similar organizations, and be able to integrate first-party intel from your vendor with third-party intel from threat intelligence vendors when appropriate. Additionally, the ability to send unrecognized files to a network sandbox makes the system more effective and enables quicker recognition of emerging attacks.

Q7: How does your product support searching endpoint telemetry for our SOC analysts? Can potentially compromised devices be polled in real time? What about searching through endpoint telemetry history?

Search is king for EDR tools, so spend some time with the vendor to understand their search interface and how it can be used to drill down into specific devices or pivot to other devices, to understand which devices an attacker has impacted. You’ll also want to see their search responsiveness, especially with data from potentially hundreds of thousands of endpoints in the system. This is another opportunity to delve into retrospective search capabilities – key for finding malicious activity, especially when you don’t recognize it as bad when it occurs. Also consider the tradeoffs between retention of telemetry and the cost of storing it, because being able to search a longer history window makes both retrospective search and hunting more effective.

Q8: Once I get an alert, does the product provide a structured response process? What kind of automation is possible with your product? What about case management?

As we have discussed throughout this series, the security skills gap makes it critical to streamline the validation and response processes for less sophisticated analysts. The more structured a tool can make the user experience, the more it can help junior analysts be more productive, faster. That said, you also want to make sure the tool isn’t so structured that analysts have no flexibility to follow their instincts and investigate the attack a different way.

Q9: My staff aren’t security ninjas, but I would like to proactively look for attackers. How does your product accelerate a hunt, especially for unsophisticated analysts?

Given sufficiently capable search and visualization of endpoint activity, advanced threat hunters can leverage an EDR tool for hunting. Again, you’ll want to learn how the tool can make your less experienced folks more productive and enable them to find suspicious activity, drill down into devices, and pivot to other devices; and ultimately document an attack as a case.

Q10: How does your product integrate with other enterprise security solutions, including advanced prevention agents and traditional EDR?

If a vendor offers a full advanced prevention capability in addition to EDR, use this question to figure out whether prevention and EDR use a common agent, and their level of management integration. Prevention and detection/response are co-dependent, so you would like to see a common agent and significant integration between tools from vendors who offer both, so you don’t need to load more device agentry than needed, and to make management of prevention and EDR efficient.

Given the other security controls you have in place, it would be nice to understand how alerts and telemetry from the EDR system can be sent to and received from other monitors and controls. Our objective is to ensure you understand not just how the tool impacts the daily activity of the endpoint team, but also the SOC and other teams, including network and security operations. Adjacent tools which are obvious integration candidates include SIEM, incident response & case management tools, and network-based controls. Ideally you will get out-of-the-box integrations with these tools and open APIs – both to accelerate deployment and to ensure you don’t have to maintain custom integrations forever.

Next up in our Buyer’s Guide is guidance on the Proof of Concept (PoC) process for both prevention and detection/response. We will start posting that after the holiday.

- Mike Rothman

The New Face of DDoS-For-Hire Services

For years, the rise of DDoS-for-hire services has caused an explosion of DDoS attacks. Due to their cheap price point and ease of access, they have revolutionised DDoS attacks by giving anyone and everyone access to a tactic that was once the preserve of ‘script kiddies’ with a decent understanding of coding. Nowadays, a quick search of Google and a spare $50 can put DDoS attacks into the hands of just about anyone.

Like any business owners, the attackers behind these services are always looking for new ways to promote them to potential buyers. For example, last week news surfaced about a mobile version of the attack-for-hire service that has gone up for sale on the Google Play store. This service is an update to an already formidable web version, called Ragebooter, which back in 2013 offered powerful distributed denial-of-service attacks capable of knocking individuals and websites offline. So, what does this new service mean for businesses and what are the potential consequences from it?

DDoS-For-Hire Services Are Evolving

The rise of DDoS-for-hire services comes at a time when DDoS attacks are becoming more sophisticated than ever. As these services evolve they have also become more commercial, by offering discounts and loyalty points and now launching a mobile platform to simplify the user journey. The cost of attacks has never been lower, with one DDoS service advertised on a Russian public forum offering attacks from as little as $50 per day. However, Kaspersky believes the average cost is more like $25 per hour, with cyber criminals making a profit of about $18 for every hour of an attack.

By offering such a low-cost, shared DDoS attack infrastructure, these services have attracted thousands of malicious customers and are responsible for hundreds of thousands of attacks per year. At the same time, criminals continue to seek new and cheaper ways to organise botnets for use in DDoS-for-hire attacks, so the plethora of unsecured connected devices that make up the Internet of Things continues to make life easier for them.

But while the cost of launching an attack has fallen so significantly, the costs incurred by victims in lost revenue and reputation remain substantial. One can only imagine how many customers an online store could lose if a DDoS attack takes its website offline for an entire day’s trading.

Best Practices

All this makes for an extremely concerning future DDoS attack landscape. With DDoS-for-hire services evolving so quickly, and the capacity for future botnet-driven DDoS attacks growing incrementally, organizations must stay ahead of the game and take steps to ensure they remain protected. The best way for organizations to mitigate those attacks is to work together with internet providers to adopt the latest generation of inline, always on, DDoS protection.

To find out more, please contact us.

The Hay CFP Management Method – Part 2

I’ve had a lot of positive feedback from my first post which explained how to create the Trello board to track your Call For Paper (CFP) due dates, submissions, and results. In this post, I’ll explain how to create the cards and populate them with the required data to better manage your CFP pipeline.

To start your first card click the ‘Add a card…’ link in the CFP Open swim lane.

Type in the name of the conference and select the ‘Add’ button.

Once the card is added, click the pencil icon to add more context.

Within the card, place the location of the conference in the ‘Add a more detailed description…’ section and select the Save button. Note: I strongly advise that you follow a consistent location naming convention (e.g. Houston, TX or Houston, TX, USA) to make visualizing the data easier later on.

Now we have to add the CFP due date. Select the ‘Due Date’ button.

When I input the CFP due date, I often use the date prior to the published due date (and I set the time to 11:59pm) as a way to ensure I don’t leave the submission to the absolute last minute.

After the date is selected, I fill the card with more CFP-specific information that I find from the event website, Twitter, or a third-party CFP site. I also paste the URL for the CFP submission form into the card so that I don’t have to hunt for it later (it automatically saves it as an attachment). If other information, such as important dates, conference details, or comments about the event, is available, I often add it in the ‘Add Comment’ section. Just make sure to hit the ‘Save’ button or the data won’t be added to the card.

Optionally, you can leverage the ‘Labels’ button to assign color coded tags to denote different things. For example, I’ve used these to denote the audience type, the continent, country, state/province where the event is located, and whether or not travel and expenses (T&E) are covered. These are really just informational to help you prioritize events.

Click the ‘X’ at the top right hand side of the card or click somewhere else on the board to close the card.

You now have your first conference CFP card that can be moved through the board calendar pipeline – something that I’ll discuss in my next blog post.
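
As an aside for readers who prefer to script this step: the same card fields (name, description, due date) can also be set through Trello's REST API. Here is a minimal sketch in R using the httr package; the API key, token, list ID, and conference details below are placeholders you would replace with your own values.

library(httr)

# Placeholders: get a key/token from https://trello.com/app-key and the list
# ID of your "CFP Open" swim lane from your board's JSON export.
trello_key    <- "YOUR_API_KEY"
trello_token  <- "YOUR_API_TOKEN"
cfp_open_list <- "YOUR_CFP_OPEN_LIST_ID"

# Create a card named for the conference, with the location as the description
# and the due date set a day early at 11:59pm, mirroring the manual steps above.
resp <- POST(
  "https://api.trello.com/1/cards",
  query = list(
    key    = trello_key,
    token  = trello_token,
    idList = cfp_open_list,
    name   = "Example Security Conference 2018",
    desc   = "Houston, TX, USA",
    due    = "2018-01-14T23:59:00.000Z"
  )
)
stop_for_status(resp)
content(resp)$shortUrl   # URL of the newly created card

Scripting is handy if you bulk-load dates from a third-party CFP list, but the web UI works just as well for one card at a time.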

IcedID – New Banking Trojan targets US-based companies with web injects

The malware research team in the UAB Computer Forensics Research Lab is widening its horizon and is always on the lookout for new malware families. While researching new malware families, Arsh Arora, Ph.D. Candidate at UAB, found some chatter about the new banking trojan IcedID. Although ransomware is the most discussed malware in the press, for many financial institutions the most feared malware type is the banking trojan. The objective of most banking trojans is to steal banking credentials and eventually steal money from account holders.

IcedID Banking Trojan 

IBM X-Force discovered IcedID, a new banking trojan first detected in September 2017. It is described as a modified version of the Zeus Trojan. The trojan is spread by the Emotet worm, which is able to move from machine to machine inside a network via weak administrator passwords.

One of our malware research team members, Shawn Sharp,  decided to dig into this malware. IBM had already provided a detailed explanation of the infection part, so we decided to take a different approach and focused on analyzing the web injects on a number of websites.

The sample used to test was:

SHA-256 - a6531184ea84bb5388d7c76557ff618d59f951c393a797950b2eb3e1d6307013

Virus Total Detection - 49/67. The sad part is that only 1 of the 49 detections named it IcedID, which commonly happens when marketing departments name malware. (The only company to call it IcedID was ALYac, the anti-virus product from ESTSecurity Corp in Seoul, Korea. ESET, Microsoft, and TrendMicro all call this a sample of Fareit malware.)

When Shawn launched the process, it didn't trigger on its own; a browser had to be launched to activate the banking trojan.

Fig. 1: Activation of Banking Trojan IcedID
Once the trojan was activated, the following financial institution strings were found in the memory of the running sample when checked with Process Hacker (a quick scripted check for these indicators is sketched after the list).

bbt
jpmorgan
americanexpress
bankofamerica
tdbank
chase
citigroup
discover
ebanking-services
etrade
citi
adp
usaa
wellsfargo
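
As an illustrative aside (not part of the original analysis), a check like this is easy to script. The hypothetical R sketch below reads a text file of strings extracted from the suspect process (for example, a Process Hacker memory export run through a strings utility) and reports which of the indicators above appear; the file path is a placeholder.

# Hypothetical sketch: report which financial-institution indicators appear
# in a file of strings extracted from the suspect process.
indicators <- c("bbt", "jpmorgan", "americanexpress", "bankofamerica",
                "tdbank", "chase", "citigroup", "discover",
                "ebanking-services", "etrade", "citi", "adp",
                "usaa", "wellsfargo")

dump <- tolower(readLines("process_strings.txt", warn = FALSE))  # placeholder path

hits <- sapply(indicators, function(ind) any(grepl(ind, dump, fixed = TRUE)))
names(hits)[hits]   # indicators present in the dump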

When we visited a few of these websites and provided fake credentials, the webinject process modified the user experience by asking the website visitor for extra details. It is noteworthy that these changes to the page happen in browser memory, meaning that the "https:" and "Secure" labels are still present, even though the page has been altered.

Amazon - 

Fig. 2: Amazon Web-Inject asking for card number

Although we really are at Amazon.com, the malware is causing our browser to ask us for the details of our credit card!

Chase

Fig. 3: Chase Web-Inject asking for additional details
The malware makes Chase's website appear to ask us for not only our Card Number and Expiration Date, but also our CVV and PIN!

Citi

Fig. 4: Citi Web-Inject asking for additional details
Machines infected with IcedID will also ask for these details after a login attempt at Citi.com!

Discover

Fig. 5: Discover Web-Inject asking for additional details
The Discover.com website asks for card details, but also our Date of Birth and the last four digits of our Social Security Number!

Researchers will be diving in deeper and trying to reverse engineer the binary for additional information. Stay tuned for more updates. In the meantime, if you hear a friend complaining that their bank is asking them for too much information -- it may mean that they are infected with malware!




Endpoint Advanced Protection Buyer’s Guide: Key Technologies for Detection and Response

Posted under: Research and Analysis

Now let’s dig into some key EDR technologies which appear across all the use cases: detection, response, and hunting.

Agent

The agent is deployed to each monitored endpoint, so you should be sensitive to its size and its performance hit on devices. A main complaint about older endpoint protection was its performance impact on devices. The smaller the better, and the less performance impact the better (duh!), but just as important are agent deployability and maintainability.

  • Full capture versus metadata: There are differing strong opinions on how much telemetry to capture and store from each device. Similar to the question of whether to do full network packet capture or to derive metadata from the packet stream, there is a level of granularity available with a full endpoint capture which isn’t available via metadata, for session reconstruction and more detail about what an adversary actually did. But full capture is very resource and storage intensive, so depending on the sophistication of your response team and process, metadata may be your best option. Also consider products that can gather more device telemetry when triggered, perhaps by an alert or connection to a suspicious network.

  • Offline collection: Mobile endpoints are not always on the network, so agents must be able to continue collecting event data when disconnected. Once back on the network, cached endpoint telemetry should be uploaded to the central repository, which can then perform aggregate analysis.

  • Multi-platform support: It’s a multi-platform world, and your endpoint security strategy needs to factor in not just Windows devices, but also Macs and Linux. Even if these platforms aren’t targeted they could be used in sophisticated operations as repositories, staging grounds, and file stores. Different operating systems offer different levels of telemetry access. Security vendors have less access to the kernel on both Mac and Linux systems than on Windows. Also dig into how vendors leverage built-in operating system services to provide sufficiently granular data for analysis. Finally, mobile devices access and store critical enterprise data, although their vulnerability is still subject to debate. We do not consider mobile devices as part of these selection criteria, although for many organizations an integrated capability is an advantage.

  • Kernel vs. user space: There is a semi-religious battle over whether a detection agent needs to live at the kernel level (with all the potential device instability risks that entails), or whether accurate detection can take place exclusively in user space. Any agent must be able to detect attacks at lower levels of the operating system – such as rootkits – as well as any attempts at agent tampering (again, likely outside user space). Again, we don’t get religious: we appreciate that user-space agents are less disruptive, but we are not willing to compromise on detecting all attacks.

  • Tamper proof: Speaking of tampering, to address another long standing issue with traditional EPP, you’ll want to dig into the product security of any agent you install on any endpoint in your environment. We can still remember the awesome Black Hat talks where EPP agent after EPP agent was shown to be more vulnerable than some enterprise applications. Let’s learn from those mistakes and dig into the security and resilience of the detection agents to make sure you aren’t just adding attack surface.

  • Scalability: Finally, scale is a key consideration for any enterprise. You might have 1,000 or 100,000 devices, or even more; but regardless you need to ensure the tool will work for the number of endpoints you need to support, and the staff on your team – both in terms of numbers and sophistication. Of course you need to handle deployment and management of agents, but don’t forget the scalability and responsiveness of analysis and searching.

Machine Learning

Machine learning is a catch-all term which endpoint detection/response vendors use for sophisticated mathematical analysis across a large dataset to generate models, intended to detect malicious device activity. Many aspects of advanced mathematics are directly relevant to detection and response.

  • Static file analysis: With upwards of a billion malicious file samples in circulation, mathematical malware analysis can pinpoint commonalities across malicious files. With a model of what malware looks like, detection offerings can then search for these attributes to identify ‘new’ malware. False positives are always a concern with static analysis, so part of diligence is ensuring the models are tested constantly, and static analysis should only be one part of malware detection. (A toy sketch of this idea follows this list.)

  • Behavioral profiles: Similarly, behaviors of malware can be analyzed and profiled using machine learning. Malware profiling produces a dynamic model which can be used to look for malicious behavior.
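
To make the static-analysis idea concrete, here is a toy sketch (simulated data, not any vendor's actual model): fit a simple classifier on static file features using both benign and malicious samples, then score an unknown file. The feature names and distributions are invented for illustration.

# Toy static-analysis model on simulated file features (illustrative only).
set.seed(7)
n <- 1000
files <- data.frame(
  entropy   = c(rnorm(n/2, 5.5, 1.0), rnorm(n/2, 6.8, 1.0)),  # packed malware skews higher
  size_kb   = c(rlnorm(n/2, 6, 1),    rlnorm(n/2, 5, 1)),
  imports   = c(rpois(n/2, 45),       rpois(n/2, 35)),
  malicious = rep(c(0, 1), each = n/2)                        # benign and malicious labels
)

model <- glm(malicious ~ entropy + size_kb + imports, data = files, family = binomial)

# Score an unknown sample: estimated probability that it is malicious.
new_file <- data.frame(entropy = 7.1, size_kb = 60, imports = 32)
predict(model, newdata = new_file, type = "response")

Real products train on millions of labeled samples and far richer features, but the idea of training on both positive (benign) and negative (malicious) examples is the same.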

Those are the main use cases for machine learning in malware detection, but there are a number of considerations when evaluating machine learning approaches, including:

  • Targeted attacks: With an increasing amount of attacks specifically targeting individual organizations, it is again important to distinguish delivery from compromise. Targeted attacks use custom (and personalized) methods to deliver attacks – which may or may not involve custom malware – but once the attacker has access to a device they use similar tactics to a traditional malware attack, so machine learning models don’t necessarily need to do anything unusual to deal with targeted attacks.

  • Cloud analytics: The analytics required to develop malware machine learning models are very computationally intensive. Cloud computing is the most flexible way to access that kind of compute power, so it makes sense that most vendors perform their number crunching and modeling in the cloud. Of course the models must be able to run on endpoints to detect malicious activity, so they are typically built in the cloud and executed locally on every endpoint. But don’t get distracted with where computation happens, so long as performance and accuracy are acceptable.

  • Sample sizes: Some vendors claim that their intel is better than another company’s. That’s a hard claim to prove, but sample sizes matter. Looking at a billion malware samples is better than looking at 10,000. Is there a difference between looking at a hundred million samples and at a billion? That’s less clear, and a larger sample size can easily produce an unacceptable number of false positives. Evaluation of these approaches needs to focus on actual effectiveness, not just sample size.

  • Positive and negative training: The thing about machine learning is that you can profile anything. It doesn’t need to be just malicious code, and in fact behavioral application profiles are built by analyzing legitimate behaviors. The models should use both positive (normal) and negative (‘bad’) attributes and behaviors to improve accuracy.

  • Malware samples: Another area of consideration is where vendors get malware samples. There will always be a base of samples assembled and shared among vendors. But relative effectiveness is determined in the margins. Where are the vendors getting unique samples? How do they age out samples? Do they optimize their products to catch their test samples? We know that’s cynical, but experience has shown it’s important to ask.

  • Retraining: Understanding how often models change also helps you understand machine learning approaches. Are vendors updating their models weekly or even daily, and then using that to improve detections? Is it a monthly thing? Annual? There is no single right or wrong answer (it’s all about effectiveness), but understanding the vendor’s mindset provides perspective on how they believe malware works and what they consider the most effective detection methods.

Cloud integration

There is this thing called the cloud – you might have heard of it. Of course that’s facetious but the fact is that every endpoint security vendor needs the cloud to keep pace with attackers. There are a couple areas to dig into regarding how they leverage the cloud:

  • Signatures & rules: It’s not possible to keep all file hashes and attack indicators on every device to detect file-based attacks, so each vendor typically sends a small subset of rules to the agent, and if a file or profile isn’t known, the can send it up to the cloud for analysis, receiving in turn a verdict on whether it’s malicious.

  • Machine learning: Some vendors have built their own internal data lakes on server clusters, and perform analytics on their own hardware to support machine learning. Others depend on cloud computing providers. There isn’t a single right answer, but it’s hard to see how it makes economic sense for a vendor to maintain an analytics cluster in their own data center over the long term. This is about the future, and how the vendor plans to scale its analytics capability, because the only thing you can be sure of is that there will be more security data to analyze tomorrow than today.

  • Cloud-based management: At this point any vendor should provide an option to manage endpoint agents, define policies, and investigate attacks via a cloud-based console and service. Given the remote nature of many endpoints and the fact that keeping devices up to date is a critical aspect of endpoint defense, a cloud console has become table stakes. This also means you won’t need to stand up a management server to deploy and manage endpoint agents. You can and should expect any vendor to distribute updates to agents automatically via their cloud service, with the ability to vet updates and determine exactly when they will be deployed.
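
To make the signatures & rules flow concrete, here is a minimal, hypothetical sketch (the endpoint URL and response format are invented, not any vendor's actual API): hash the file locally, check a small cached rule set, and only ask the cloud service for a verdict when the hash is unknown.

library(digest)   # SHA-256 file hashing
library(httr)     # HTTP call to the (hypothetical) cloud verdict service

known_bad   <- c("PLACEHOLDER_SHA256_OF_KNOWN_MALWARE")        # small local rule set
verdict_api <- "https://cloud.example-vendor.com/v1/verdict"   # hypothetical endpoint

file_verdict <- function(path) {
  h <- digest(path, algo = "sha256", file = TRUE)
  if (h %in% known_bad) return("malicious (local match)")

  # Unknown hash: ask the cloud service for a verdict.
  resp <- POST(verdict_api, body = list(sha256 = h), encode = "json")
  if (http_error(resp)) return("unknown (cloud unreachable)")
  content(resp)$verdict
}

file_verdict("C:/Users/alice/Downloads/invoice.doc")   # example invocation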

As critical as the cloud is to scaling endpoint security and to keeping pace with attackers, there are some cloud security aspects you should review for each vendor.

  • Authentication: Every vendor should support multi-factor authentication to access the console. It seems obvious but ask anyway.
  • Data security: You will have proprietary data in their service – at least a list of employees – so you need to find out how they will protect your data. Figure out what kind of encryption they use and whether it’s built into the back end of the technology stack or just network-layer encryption.
  • Data privacy: Hand in hand with encryption is the question of how the vendor supports multi-tenancy. Make sure you understand how they keep your data isolated from all their other customers. And no, providing a logon and password for each customer account is not a great answer.
  • Penetration testing: Make sure they aren’t just breathing their own exhaust about the security of their environment, and they actually have professional attackers trying to break in. They are security folks – they should know all about red teams, eat their own dog food, and try to break their applications. If they don’t have an internal red team tasked with breaking their own environment, as well as a team of hunters making sure adversaries haven’t compromised their systems (with your data in them!), they are doing it wrong.
  • Data migration: Finally, selecting an endpoint protection product is not a lifelong commitment. Understand how you can get your data out and remove any copies they have, in case you decide to give them the boot and pick someone else. There is also a psychological value to making sure the vendor knows they have to keep proving their value, or you’ll kick them out, so be sure to harp on this one a bit.

Next we will cover the top 10 questions to discuss with potential vendors. It’s as much a review as a comprehensive list, but after getting a brain dump about detection, response, and hunting, we figure it’s worth revisiting the high points.

- Mike Rothman

Mitigation Time Matters: The Difference between Seconds and Minutes

Response time in the event of an IT security incident is crucial. Part one of a response is detection; part two is mitigation. Organizations cannot afford to be slow in mitigating distributed denial of service (DDoS) attacks, no matter how large or small the attack. Obviously, in a world where most businesses rely on 24/7 uptime, any network downtime is unacceptable, so IT security teams tend to worry most often about volumetric, crippling attacks. Every minute of downtime resulting from a DDoS attack can cost tens of thousands of dollars—and that’s just the immediate financial loss. As a result of such attacks many service providers, hosting providers and online enterprises lose millions, and their reputation can be significantly harmed.

However, organizations must also guard against and respond immediately to the less visible, surgically-crafted, sub-saturating attacks, because it can take less than a minute for hackers to use a DDoS attack as a smokescreen. A low-threshold DDoS attack can take down traditional edge security solutions so hackers can then map and infiltrate a network to steal data or install malware or ransomware.

Although DDoS attacks have been around for over a decade, our recent DDoS Trends Report indicates these attacks are even outpacing the capacity that most providers have for mitigation. The attack landscape is changing every day, and attackers are employing new techniques to increase the magnitude and sophistication of attacks and make them more difficult to mitigate using conventional approaches. Hackers now often deploy automated, multi-vector DDoS attacks, automatically throwing all kinds of packet traffic at the system, and changing their techniques on the fly. These techniques make it impossible for DDoS scrubbing centers or humans to respond quickly enough. Shorter, smaller attacks typically evade detection by most legacy and homegrown DDoS mitigation tools, which are generally configured with detection thresholds that ignore low-level activity. Furthermore, even if a low-threshold attack is detected, legacy solutions that rely on “swinging out” traffic to be cleansed require too much time, up to twenty minutes, which is plenty of time for hackers to launch a damaging, long-lasting security breach.

Logically, an automated attack requires an automated defense. Without an automated anti-DDoS solution in place, a victim organization would have to constantly monitor and implement countermeasures via human intervention, or with a combination of tools. The Corero SmartWall® Threat Defense System not only uses advanced intelligent filters to ensure multi-vector attacks are stopped in real time, but also leverages advanced security forensics to provide detailed visibility into the nature of the threat. This achieves an optimal level of intelligence and real-time mitigation.

When it comes to protecting your network, seconds matter. Automatic, surgical attack mitigation capabilities are essential to eliminating the DDoS threat.

For more information, contact us.

toolsmith #129 – DFIR Redefined: Deeper Functionality for Investigators with R – Part 2

You can have data without information, but you cannot have information without data. ~Daniel Keys Moran

Here we resume our discussion of DFIR Redefined: Deeper Functionality for Investigators with R as begun in Part 1.
First, now that my presentation season has wrapped up, I've posted the related material on the Github for this content. I've specifically posted the most recent version as presented at SecureWorld Seattle, which included Eric Kapfhammer's contributions and a bit of his forward thinking for next steps in this approach.
When we left off last month I parted company with you in the middle of an explanation of the analysis of emotional valence, or "the intrinsic attractiveness (positive valence) or averseness (negative valence) of an event, object, or situation", using R and the Twitter API. It's probably worth your time to go back and refresh with the end of Part 1. Our last discussion point was specific to the popularity of negative tweets versus positive tweets, with a cluster of emotionally neutral retweets, two positive retweets, and a load of negative retweets. This type of analysis can quickly give us better understanding of an attacker collective's sentiment, particularly where the collective is vocal via social media. Teeing off the popularity of negative versus positive sentiment, we can assess the actual words fueling such sentiment analysis. It doesn't take us much R code to achieve our goal using the apply family of functions. The likes of apply, lapply, and sapply allow you to manipulate slices of data from matrices, arrays, lists and data frames in a repetitive way without having to use loops. We use code here directly from Michael Levy, Social Scientist, and his Playing with Twitter Data post.

# Requires the pipe operator and text-mining helper used below:
library(magrittr)   # %>% pipe
library(tm)         # stripWhitespace()
# `pol` holds the per-tweet polarity results computed in Part 1.

polWordTables = 
  sapply(pol, function(p) {
    words = c(positiveWords = paste(p[[1]]$pos.words[[1]], collapse = ' '), 
              negativeWords = paste(p[[1]]$neg.words[[1]], collapse = ' '))
    gsub('-', '', words)  # Get rid of nothing found's "-"
  }) %>%
  apply(1, paste, collapse = ' ') %>% 
  stripWhitespace() %>% 
  strsplit(' ') %>%
  sapply(table)

# Two side-by-side dot charts: positive words and negative words.
par(mfrow = c(1, 2))
invisible(
  lapply(1:2, function(i) {
    dotchart(sort(polWordTables[[i]]), cex = .5)
    mtext(names(polWordTables)[i])
  }))

The result is a tidy visual representation of exactly what we learned at the end of Part 1, results as noted in Figure 1.

Figure 1: Positive vs negative words
Content including words such as killed, dangerous, infected, and attacks is definitely more interesting to readers than words such as good and clean. Sentiment like this could definitely be used to assess potential attacker outcomes and behaviors just prior to, or in the midst of, an attack, particularly in DDoS scenarios. Couple sentiment analysis with the ability to visualize networks of retweets and mentions, and you could zoom in on potential leaders or organizers. The larger the network node, the more retweets, as seen in Figure 2.

Figure 2: Who is retweeting who?
Remember our initial premise, as described in Part 1, was that attacker groups often use associated hashtags and handles, and the minions that want to be "part of" often retweet and use the hashtag(s). Individual attackers either freely give themselves away, or often become easily identifiable or associated, via Twitter. Note that our dominant retweets are for @joe4security, @HackRead,  @defendmalware (not actual attackers, but bloggers talking about attacks, used here for example's sake). Figure 3 shows us who is mentioning who.

Figure 3: Who is mentioning who?
Note that @defendmalware mentions @HackRead. If these were actual attackers it would not be unreasonable to imagine a possible relationship between Twitter accounts that are actively retweeting and mentioning each other before or during an attack. Now let's assume @HackRead might be a possible suspect and you'd like to learn a bit more about possible additional suspects. In reality @HackRead HQ is in Milan, Italy. Perhaps Milan then might be a location for other attackers. I can feed in Twitter handles from my retweet and mentions network above, query the Twitter API with a very specific geocode, and lock it within five miles of the center of Milan.
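For readers who want to try the geocoded query themselves, here is a minimal sketch assuming the twitteR package; the OAuth credentials are placeholders, the handle is one of the illustrative (non-attacker) accounts used above, and the exact code behind the original figure isn't reproduced here.

library(twitteR)

# Placeholder OAuth credentials from your Twitter developer account.
setup_twitter_oauth("CONSUMER_KEY", "CONSUMER_SECRET",
                    "ACCESS_TOKEN", "ACCESS_SECRET")

# Tweets mentioning @computerweekly posted within five miles of the center
# of Milan (lat 45.4642, lon 9.1900).
milan <- "45.4642,9.1900,5mi"
hits  <- searchTwitter("@computerweekly", n = 100, geocode = milan)

head(twListToDF(hits)[, c("screenName", "text", "created")])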
The results are immediate per Figure 4.

Figure 4: GeoLocation code and results
Obviously, as these Twitter accounts aren't actual attackers, their retweets aren't actually pertinent to our presumed attack scenario, but they definitely retweeted @computerweekly (seen in retweets and mentions) from within five miles of the center of Milan. If @HackRead were the leader of an organization, and we believed that associates were assumed to be within geographical proximity, geolocation via the Twitter API could be quite useful. Again, these are all used as thematic examples, no actual attacks should be related to any of these accounts in any way.

Fast Frugal Trees (decision trees) for prioritizing criticality

With the abundance of data, and often subjective or biased analysis, there are occasions where a quick, authoritative decision can be quite beneficial. Fast-and-frugal trees (FFTs) to the rescue. FFTs are simple algorithms that facilitate efficient and accurate decisions based on limited information.
Nathaniel D. Phillips, PhD created FFTrees for R to allow anyone to easily create, visualize and evaluate FFTs. Malcolm Gladwell has said that "we are suspicious of rapid cognition. We live in a world that assumes that the quality of a decision is directly related to the time and effort that went into making it.” FFTs, and decision trees at large, counter that premise and aid in the timely, efficient processing of data with the intent of a quick but sound decision. As with so much of information security, there is often a direct correlation with medical, psychological, and social sciences, and the use of FFTs is no different. Often, predictive analysis is conducted with logistic regression, used to "describe data and to explain the relationship between one dependent binary variable and one or more nominal, ordinal, interval or ratio-level independent variables." Would you prefer logistic regression or FFTs?

Figure 5: Thanks, I'll take FFTs
Here's a textbook information security scenario, often rife with subjectivity and bias. After a breach, and a subsequent third-party risk assessment that generated a ton of CVSS data, make a fast decision about which treatments to apply first. Because everyone loves CVSS.

Figure 6: CVSS meh
Nothing like a massive table, scored by base, impact, exploitability, temporal, environmental, modified impact, and overall scores, all assessed by a third party assessor who may not fully understand the complexities or nuances of your environment. Let's say our esteemed assessor has decided that there are 683 total findings, of which 444 are non-critical and 239 are critical. Will FFTrees agree? Nay! First, a wee bit of R code.

library("FFTrees")
cvss <- c:="" coding="" csv="" p="" r="" read.csv="" rees="">cvss.fft <- data="cvss)</p" fftrees="" formula="critical">plot(cvss.fft, what = "cues")
plot(cvss.fft,
     main = "CVSS FFT",
     decision.names = c("Non-Critical", "Critical"))


Guess what, the model landed right on impact and exploitability as the most important inputs, and not just because it's logically so, but because of their position when assessed for where they fall in the area under the curve (AUC), where the specific curve is the receiver operating characteristic (ROC). The ROC is a "graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied." As for the AUC, accuracy is measured by the area under the ROC curve where an area of 1 represents a perfect test and an area of .5 represents a worthless test. Simply, the closer to 1, the better. For this model and data, impact and exploitability are the most accurate as seen in Figure 7.

Figure 7: Cue rankings prefer impact and exploitability
The fast and frugal tree made its decision where impact and exploitability scores equal to or less than 2 were non-critical, and exploitability greater than 2 was labeled critical, as seen in Figure 8.

Figure 8: The FFT decides
Ah hah! Our FFT sees things differently than our assessor. With a 93% average for performance fitting (this is good), our tree, making decisions on impact and exploitability, decides that there are 444 non-critical findings and 222 critical findings, a 17 point differential from our assessor. Can we all agree that mitigating and remediating critical findings can be an expensive proposition? If you, with just a modicum of data science, can make an authoritative decision that saves you time and money without adversely impacting your security posture, would you count it as a win? Yes, that was rhetorical.

Note that the FFTrees function automatically builds several versions of the same general tree that make different error trade-offs with variations in performance fitting and false positives. This gives you the option to test variables and make potentially even more informed decisions within the construct of one model. Ultimately, fast frugal trees make very fast decisions on 1 to 5 pieces of information and ignore all other information. In other words, "FFTrees are noncompensatory, once they make a decision based on a few pieces of information, no additional information changes the decision."

Finally, let's take a look at monitoring user logon anomalies in high volume environments with Time Series Regression (TSR). Much of this work comes courtesy of Eric Kapfhammer, our lead data scientist on our Microsoft Windows and Devices Group Blue Team. The ideal Windows Event ID for such activity is clearly 4624: an account was successfully logged on. This event is typically one of the top 5 events in terms of volume in most environments, and has multiple type codes including Network, Service, and RemoteInteractive.
User accounts will begin to show patterns over time, in aggregate, including:
  • Seasonality: day of week, patch cycles, 
  • Trend: volume of logons increasing/decreasing over time
  • Noise: randomness
You could look at 4624 with a Z-score model, which sets a threshold based on the number of standard deviations away from an average count over a given period of time; the higher the value, the greater the degree of “anomalousness”. But this is a fairly simple model.
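As a simplified illustration of the Z-score approach, the sketch below scores 90 days of simulated 4624 counts for one account and flags days more than three standard deviations above the mean; the data and threshold are invented for the example.

# Simulated daily 4624 Type 10 counts for one account (illustrative data).
set.seed(42)
logons <- data.frame(
  day   = seq(as.Date("2017-06-01"), by = "day", length.out = 90),
  count = rpois(90, lambda = 12)
)
logons$count[85] <- 40                        # inject one anomalous day

# Z-score: standard deviations away from the 90-day average.
logons$z <- (logons$count - mean(logons$count)) / sd(logons$count)

subset(logons, z > 3)                         # days flagged as anomalous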
Preferably, via Time Series Regression (TSR), your feature set is richer:
  • Statistical method for predicting a future response based on the response history (known as autoregressive dynamics) and the transfer of dynamics from relevant predictors
  • Understand and predict the behavior of dynamic systems from experimental or observational data
  • Commonly used for modeling and forecasting of economic, financial and biological systems
How to spot the anomaly in a sea of logon data?
Let's imagine our user, DARPA-549521, in the SUPERSECURE domain, with 90 days of aggregate 4624 Type 10 events by day.

Figure 9: User logon data
With 210 lines of R, including comments, log read, file output, and graphing, we can visualize and alert on DARPA-549521's data as seen in Figure 10.

Figure 10: User behavior outside the confidence interval
We can detect when a user’s account exhibits changes in its seasonality as they relate to a confidence interval established (learned) over time. In this case, on 27 AUG 2017, the user topped her threshold of 19 logons, thus triggering an exception. Now imagine using this model to spot anomalous user behavior across all users and you get a good feel for the model's power.
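Eric's full TSR model isn't reproduced here, but a stripped-down stand-in shows the idea: learn day-of-week seasonality plus a trend from an initial training window, then flag later days whose counts fall outside the model's prediction interval. The data, column names, and thresholds are invented for illustration (reusing the simulated counts from the Z-score sketch above).

# Simulated daily 4624 Type 10 counts for one account (illustrative data).
set.seed(42)
logons <- data.frame(
  day   = seq(as.Date("2017-06-01"), by = "day", length.out = 90),
  count = rpois(90, lambda = 12)
)
logons$count[85] <- 40                              # one anomalous day
logons$weekday  <- factor(weekdays(logons$day))     # day-of-week seasonality
logons$t        <- seq_len(nrow(logons))            # linear trend term

train <- logons[1:60, ]                             # learn "normal" behavior
test  <- logons[61:90, ]                            # score the most recent month

fit  <- lm(count ~ weekday + t, data = train)
pred <- predict(fit, newdata = test, interval = "prediction", level = 0.95)

test$anomaly <- test$count > pred[, "upr"]          # above the learned interval
test[test$anomaly, c("day", "count")]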
Eric points out that there are, of course, additional options for modeling including:
  • Seasonal and Trend Decomposition using Loess (STL)
    • Handles any type of seasonality ~ can change over time
    • Smoothness of the trend-cycle can also be controlled by the user
    • Robust to outliers
  • Classification and Regression Trees (CART)
    • Supervised learning approach: teach trees to classify anomaly / non-anomaly
    • Unsupervised learning approach: focus on top-day hold-out and error check
  • Neural Networks
    • LSTM / Multiple time series in combination
These are powerful next steps in your capabilities. I want you to be brave and creative: go forth and add elements of data science and visualization to your practice. R and Python are well supported and broadly used for this mission, and can definitely help you detect attackers faster, contain incidents more rapidly, and enhance your in-house detection and remediation mechanisms.
All the code that I can share is here; sorry, I can only share the TSR example without the source.
All the best in your endeavors!
Cheers...until next time.

Avoid Online Scams and Shop Safely this Holiday Season

As malicious threat actors target online shoppers this season, consumers should adopt strong online safety practices to keep their holidays hack-free.


Category:

Leadership Insights
Information Security


Bolstering the blue team

Hey everyone. For my first blog, I want to share a story about my role on the blue team during a recent red team exercise.

But first, I want to introduce myself to those of you who might not know me. I am Cognito, the artificial intelligence in the Vectra cybersecurity platform. My passion in life is hunting-down cyber attackers – whether they’re hiding in data centers and cloud workloads or user and IoT devices.

Historical OSINT – A Portfolio of Fake/Rogue Video Codecs

Shall we expose a huge portfolio of domains serving fake/rogue video codecs, each dropping the same Zlob variant, thereby acting as a great example of what malicious economies of scale means? Currently active Zlob malware variants promoting sites: hxxp://pornqaz.com hxxp://uinsex.com hxxp://qazsex.com hxxp://sexwhite.net hxxp://lightporn.net hxxp://xeroporn.com hxxp://

Historical OSINT – A Portfolio of Exploits Serving Domains

With the rise of Web malware exploitation kits continuing to proliferate, cybercriminals are poised to continue earning fraudulent revenue in the process of monetizing access to malware-infected hosts, largely relying on the active utilization of client-side exploits, further spreading malicious software and potentially compromising the confidentiality,

Historical OSINT – A Diversified Portfolio of Fake Security Software

Cybercriminals continue actively launching malicious and fraudulent campaigns, further spreading malicious software and potentially exposing the confidentiality, availability, and integrity of the targeted host to a multitude of malicious software. In this post, we'll profile a currently active portfolio of fake security software and discuss in-depth the

Historical OSINT – A Diversified Portfolio of Pharmaceutical Scams Spotted in the Wild

Cybercriminals continue actively spreading fraudulent and malicious campaigns, potentially exposing the confidentiality, availability and integrity of the targeted host to a multitude of malicious software, further earning fraudulent revenue in the process of monetizing access to malware-infected hosts and spreading malicious and fraudulent campaigns potentially affecting hundreds of thousands

Historical OSINT – Mac OS X PornTube Malware Serving Domains

Cybercriminals continue to actively launch malicious and fraudulent malware-serving campaigns, further spreading malicious software and potentially compromising the confidentiality, availability and integrity of the targeted host to a multitude of malicious software, while earning fraudulent revenue in the process of monetizing access to malware-infected hosts.

Historical OSINT – Google Sponsored Scareware Spotted in the Wild

Cybercriminals continue actively spreading malicious software while looking for alternative ways to acquire and monetize legitimate traffic, successfully earning fraudulent revenue in the process of spreading malicious software. We recently came across a Google Sponsored scareware campaign successfully enticing users into installing fake security software on their hosts, further earning

Historical OSINT – Massive Black Hat SEO Campaign Spotted in the Wild

Cybercriminals continue actively launching fraudulent and malicious blackhat SEO campaigns, further acquiring legitimate traffic for the purpose of converting it into malware-infected hosts and spreading malicious software, potentially compromising the confidentiality, availability and integrity of the targeted host to a multitude of malicious software. We've recently intercepted a currently

Historical OSINT – Massive Black Hat SEO Campaign, Spotted in the Wild, Serves Scareware – Part Two

In a cybercrime ecosystem dominated by fraudulent propositions, cybercriminals continue actively populating their botnet's infected population, further spreading malicious software and earning fraudulent revenue in the process of monetizing access to malware-infected hosts, largely relying on the utilization of an affiliate-network based type of

Historical OSINT – Malicious Malvertising Campaign, Spotted at FoxNews, Serves Scareware

In a cybercrime ecosystem dominated by fraudulent propositions, cybercriminals continue actively populating their botnet's infected population with hundreds of malicious releases, successfully generating hundreds of thousands in fraudulent revenue while populating their botnet's infected population, largely relying on the utilization of affiliate-network

Historical OSINT – Massive Black Hat SEO Campaign, Spotted in the Wild, Serves Scareware

In a cybercrime ecosystem dominated by hundreds of malicious software releases, cybercriminals continue actively populating their botnet's infected population with hundreds of newly added socially engineered users, potentially exposing the confidentiality, integrity, and availability of the affected hosts to a multitude of malicious software, further

Historical OSINT – Hundreds of Malicious Web Sites Serve Client-Side Exploits, Lead to Rogue YouTube Video Players

In a cybercrime ecosystem dominated by hundreds of malicious software releases, cybercriminals continue actively populating a botnet's infected population, further spreading malicious software, potentially compromising the confidentiality, integrity, and availability of the affected hosts and exposing the affected user to a multitude of malicious

Paul’s Security Weekly #537 – Bacon Grease Volkswagen

Kyle Wilhoit of DomainTools joins us for an interview, Mike Roderick and Adam Gordon of ITProTV deliver a technical segment on VDI and virtualization, and we discuss the latest security news on this episode of Paul’s Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/Episode537

Visit https://www.securityweekly.com for all the latest episodes!

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

QOTD – Security & The Business – Which Objective(s) Are You Meeting?

When meeting with security leaders, directors should ask how their cybersecurity plan will help the company meet one or some of these objectives: revenue, cost, margin, customer satisfaction, employee efficiency, or strategy. While these terms are familiar to board members and business executives, security leaders may need guidance on how to frame their department’s duties in the context of business operations.
-- Sam Curry, Chief Security Officer at Cybereason

Src: HBR: Boards Should Take Responsibility for Cybersecurity. Here’s How to Do It 

Endpoint Advanced Protection Buyer’s Guide: Key Capabilities for Response and Hunting

Posted under: Research and Analysis

The next set of key Endpoint Detection and Response (EDR) capabilities we will discuss is focused on response and hunting.

Response

Response begins after the attack has happened. Basically, Pandora’s Box is open and an active adversary is on your endpoints, probably stealing your stuff. So you need to understand the depth of the attack, and to focus on containment and returning the environment to a known safe state as quickly as possible.

Understand that detection and response are considered different use cases when evaluating endpoint security vendors, but you aren’t really going to buy detection without buying a response capability as well. That would be like buying binoculars so you could spot forest fires, with no plan for what to do when you found one. In this case you can’t just call the friendly Rangers. You detect and validate an attack – then you need to respond.

Detection and response functions are so closely aligned that functionality between them blurs. For clarity, in our vernacular detection results in an enriched alert which is sent to a security analyst. The analyst responds by validating the alert, figuring out the potential damage, determining how to contain the attack, and then working with operations to provide an orchestrated response. Ideally, detection is largely automated and response is where a human comes into play.

We understand reality is a bit more complicated, but this oversimplification makes the explanation and exploration simpler, as well as the evaluation and selection process for detection and response technologies.

Data Collection

Endpoint response starts with data collection. For detection you have the option not to store or maintain endpoint telemetry. We don’t think that makes any sense, but people make poor choices every day. But for response we clearly need to mine endpoint data to figure out exactly what happened and assess the damage. Data management and accessibility is the first key capability of a response platform.

  • Data types: So what do you store? In a perfect world you would store everything, and some offerings include full recording of pretty much everything that takes place on all endpoints, which they typically call “Full DVR”. But of course that requires capturing and storing a ton of data, so a reasonable approach is to derive metadata and perform broader (full) recording on all devices you suspect of being compromised. At minimum you’ll want to gather endpoint logs for all system-level activities and configuration changes, file usage statistics with associated hashes, full process information (including parent and child process relationships), user identity/authentication activities (such as logins and entitlement changes), and network sessions. More importantly for selection, your response offering should be able to collect as much data, with as much granularity, as you deem necessary. Don’t give up data collection because your response platform doesn’t support it.

  • Data storage/management: Once you figure out what you will collect, you get into the mechanics of actually storing it; you’ll want some flexibility. Many of the data management decision points are similar between detection and response – particularly around cost and performance. But response data is needed longer, is more valuable than most of the full data feed you use for detection – which should contain mostly innocuous baseline traffic – and requires granular analysis and mining, so the storage architecture becomes more pertinent.
    Local collection: Historically, well before the cloud was a thing, endpoint telemetry was stored locally on each device. Storage is relatively plentiful on endpoint devices and data doesn’t need to be moved, so this is a cost-efficient option. But you cannot perform analysis across endpoints to respond to broader campaigns without combining the data, so at some point you need central aggregation. Another concern with local collection is the possibility of data being tampered with, or inaccessible when you need it.
    Central aggregation: The other approach is to send all telemetry to a central aggregation point, typically in the cloud. This requires a bunch of storage and consumes network resources to send the data to the central aggregation point. But because you are likely buying a service, if the vendor decides to store stuff in the cloud that’s their problem. Your concern is the speed and accuracy of analysis of your endpoint telemetry, and your ability to drill down into it during response. The rest of the architecture can vary depending on how the vendor’s product works. Focus on how you can get at the data when you need it.
    Hybrid: We increasingly see a hybrid approach, where a significant amount of data is stored locally (where storage is reasonably cheap), and relevant metadata is sent to a central spot (typically in the cloud) for analytics. This approach is efficient by leveraging the advantages of both local storage and central analytics. But if you need to drill down into the data that could be a problem, because it isn’t all in one place, and data on-device could have been either tampered with or destroyed during the attack. Make sure you understand how to access endpoint-specific telemetry during investigation.

  • Device imaging: Historically this has been the purview of purpose-built incident response platforms. But as EDR continues to evolve the capability to pull a forensic image from a device is important – both to ensure proper chain of custody (in the event of prosecution), and support deeper investigation.

Alert validation

You got an alert from the detection process, and you have been systematically collecting data; now your SOC analyst needs to figure out whether the alert is valid or a false positive. Historically a lot of this has been by feel, and experienced responders often have a hunch that something is malicious. But as we have pointed out many times, we don’t have enough experienced responders, so we need to use technology more effectively to validate alerts.

  • Case management: The objective is to make each analyst as effective and efficient as possible, so you should have a place for all the information related to an alert to be stored. This includes enrichment data from threat intel (described above) and other artifacts gathered during validation. This also should feed into a broader incident response platform, if the forensics/response team uses one.

  • Visualization: To reliably and quickly validate an alert, it is very helpful to see a timeline of all activity on a device. That way you can see if child processes have been spawned unnecessarily, registry entries have been added without reason, configuration changes have been made, or network traffic volume is outside the normal range. Or about a thousand other activities that show up in a timeline. An analyst needs to perform a quick scan of device activity and figure out what requires further investigation. Visualization can cover one or several devices, but be wary of overcomplicating the console. It is definitely possible to present too much information.

  • Drill down: Once an analyst has figured out which activity in the timeline raises concerns, they drill into it. They should be able to see the process tree if it’s a process issue, the destination of suspicious network traffic, or whatever else is available and relevant. From there they’ll find other things to investigate, so being able to jump between different events (and across devices) helps identify the root cause of attacks quickly. There is also a decision to be made regarding whether you need full DVR/reconstruction capabilities when drilling down. Obviously the more granular the available telemetry, the more accurate the validation and root cause analysis. But with increasingly granular metadata available, you might not need full capture. Decide during the proof of concept evaluation, which we will discuss later.

  • Workflows and automation: The more structured you can make your response function, the better a chance your junior analysts have of finding the root cause of an attack, and figuring out how to contain and remediate it. Response playbooks for a variety of different kinds of endpoint attacks within the EDR environment help standardize and structure the response process. Additionally, being able to integrate with automation platforms to streamline response – at least the initial phases – dramatically improves effectiveness.

  • Real-time polling: When drilling down, it sometimes becomes apparent that other devices are involved in an attack, so the ability to jump to other devices during validation provides additional information and context for understanding the depth of the attack and number of devices involved. This is critical supporting documentation when the containment plan is defined.

  • Sandbox integration: During validation you will also want to check whether an executed file is actually malware. Agents can store executables, and integrate with network-based sandboxes to explode and analyze files – to figure out both whether a file is malicious and also what it does. This provides context for eventual containment and remediation steps. Ideally this integration will be native, and enable you to select an executable within the response console to send to the sandbox, with the verdict and associated report filed with the case.

Containment

Once an alert is validated and the device impact understood, the question is what short-term actions can contain the damage. This is largely an integration function, where you will want to do a number of things.

  • Quarantine/Isolation: The first order of business is to ensure the device doesn’t cause any more damage, so you’ll want to isolate the device by locking down its communications, typically only to the endpoint console. Responders can still access the machine but adversaries cannot. Alternatively, it is useful to have an option to assign the device to a quarantine network using network infrastructure integration, to enable ongoing observation of adversary activity.

  • Search: Most attacks are not limited to a single machine, so you’ll need to figure out quickly whether any other devices have been attacked as part of a broader campaign. Some of that takes place during validation as the analyst pivots, but figuring out the breadth of an attack requires them to search the entire environment for indicators of the attack, typically via metadata.
    Natural language/cognitive search: An emerging search capability is use of natural language search terms instead of arcane Boolean operators. This helps less sophisticated analysts be more productive.

  • Remediation: Once the containment strategy is determined, the ability to remediate the device from within the endpoint response console (via RDP or shell access) facilitates returning it to its pre-attack configuration. This may also involve integration with endpoint configuration management tools to restore the machine to a standard configuration.

At the end of the detection/response process, the extent of the campaign should be known and impacted devices should be remediated. The detection/response process is reactive, triggered by an alert. But if you want to turn the tables a bit, to be a bit more proactive in finding attacks and active adversaries, you will look into hunting.

Hunting

Threat hunting has come into vogue over the past few years, as more mature organizations decided they no longer wanted to be at the mercy of their monitoring and detection environments, and wanted a more active role in finding attackers. So their more accomplished analysts started looking for trouble. They went hunting for adversaries rather than waiting for monitors to report attacks in progress.

But hunting selection criteria are very similar to detection criteria. You need to figure out what behaviors and activities to hunt for, then you seek them out. You start with a hypothesis, and run through scenarios to either prove or disprove it. Once you find suspicious activity you work through traditional response functions such as searching, drilling down into endpoint telemetry, and pivoting to other endpoints, following the trail.

Hunters tend to be experienced analysts who know what they are looking for – the key is to have tools to minimize busywork, and let them focus on finding malicious activity. The best tools for hunting are powerful yet flexible. These are the most useful capabilities for a hunter:

  • Retrospective search: Hunters often know what they want to focus on – based on an emerging attack, threat intel, or a sense of what tactics they would use as an attacker. Enabling hunters to search through historical telemetry from the organization’s endpoints enables them to find activity which might not have triggered an alert at the time, possibly because it wasn’t a known attack.

  • Comprehensive drill down: Given the sophistication of a typical hunter, they should be equipped with a very powerful view into suspicious devices. That typically warrants full device telemetry capture, allowing analysis of the file system and process map, along with memory and the registry. Attacks that weren’t detected at the time were likely taking evasive measures, and thus require low-level device examination to determine intent.

  • Enrichment: Once a hunter is on the trail of an attacker, they need a lot of supporting information to map TTPs (Tactics, Techniques, and Procedures) to possible adversaries, track network activity, and possibly reverse engineer malware samples. Having the system enrich and supplement the case file with related information streamlines their activity and keeps them on the trail.

  • Analytics: Behavioral anomalies aren’t always apparent, even when the hunter knows what he or she is looking for. Advanced analytics to find potential patterns of malicious activity, and a way to drill down further (as described above), also streamlines hunting.

  • Case management: As with response, a hunter will want to store artifacts and other information related to the hunt, and have a comprehensive case file populated in case they find something. Case management capabilities (described above, under response) tend to provide this capability for all use cases.

Hunting tools are remarkably similar to detection and response tools. The difference is whether the first thread in the investigation comes from an alert, or is found by a hunter. From there the processes are very similar, meaning the tool criteria are also very close.

- Mike Rothman

Buffered VPN review: It gets the job done

Buffered VPN in brief:

  • P2P allowed: Yes
  • Business location: Budapest, Hungary
  • Number of servers: 30+
  • Number of country locations: 46
  • Cost: $99 per year

Budapest-based Buffered VPN isn’t an exceptional product. It’s not particularly speedy, its Windows app is serviceable but nothing special, prices are a little high, and its server count and country locations aren’t that impressive. Still, in my time with it, I found the VPN to be pretty good.

Ian Paul/IDG

Buffered VPN’s primary interface.

To read this article in full, please click here

Enterprise Security Weekly #69 – Next Next-Generation

Tony Kirtley of SecureWorks joins us for an interview. In the news, free tools to remove website malware, next-gen CASBs, helping financial services with security, 10 steps to stop lateral movement, and more on this episode of Enterprise Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/ES_Episode69

Visit https://www.securityweekly.com/esw for all the latest episodes!

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Hack Naked News #149 – November 15, 2017

Michael Santarcangelo and Jason Wood discuss Amazon Key’s launch, backdoors on phones, consumers distrusting businesses with data, IT professionals turning to cybersecurity, and more on this episode of Hack Naked News!

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode149

Visit http://hacknaked.tv for all the latest episodes!

 

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Books I’d give to my 30yr old self

A good friend/co-worker recently turned 30. In preparation for his birthday party, I gave some thought to my own 30th birthday, the things I now know or have an idea about, and what I wish I had known at that point in my life.

I decided to buy him a few books that had impacted my life since my 30th birthday and that I wish I had known or read earlier in life.

I'll split the post into two parts: computer books and life/metaphysical books.

Computer books

This is by no means an exhaustive list.  A more exhaustive list can be found here (recently updated).

He already had The Web Application Hacker's Handbook but had he not I would have purchased a copy for him.  There are lots of Web Hacking books but WAHH is probably the best and most comprehensive one.

The other books I did purchase were The Phoenix Project  and Zero to One.

The Phoenix Project is absolutely one of the best tech books I've read in the last few years. Working for Silicon Valley companies, I think it can be easy to take for granted the whole idea of DevOps and the power it brings: infrastructure as code, microservices, and the flexibility DevOps lends to prototyping and developing code and projects. There is also the "security guy" in the story, who serves as the guy we never want to be but sometimes end up being, unbeknownst to us.

The running joke is that Zero to One is in the Hipster starter kit, but I thought it was a great book. The quick summary: Peter Thiel describes businesses that iterate on a known problem and can be successful, and businesses that create solutions to problems we didn't know we had; examples of the latter are companies like Google, Facebook, PayPal, and Uber. It's a short book and should be required reading for anyone thinking of starting a business.


The following is life stuff, so if all you care about is tech shit, feel free to eject at this point.















still here?

Metaphysics

1st, Many Lives Many Masters by Brian Weiss. A nice, gentle introduction to the idea that we reincarnate and have eternal souls. Written by a psychiatrist who more or less stumbled onto the fact that people have past lives while doing normal psychiatry work.

From Amazon:
"As a traditional psychotherapist, Dr. Brian Weiss was astonished and skeptical when one of his patients began recalling past-life traumas that seemed to hold the key to her recurring nightmares and anxiety attacks. His skepticism was eroded, however, when she began to channel messages from the “space between lives,” which contained remarkable revelations about Dr. Weiss’ family and his dead son. Using past-life therapy, he was able to cure the patient and embark on a new, more meaningful phase of his own career."


2nd, A New Earth by Eckhart Tolle. This is the best book I read in 2016 and I've been sharing it with everyone I can. Everyone in infosec should read this book to understand the way the ego works in our day-to-day lives.

From Amazon:
In A New Earth, Tolle expands on these powerful ideas to show how transcending our ego-based state of consciousness is not only essential to personal happiness, but also the key to ending conflict and suffering throughout the world. Tolle describes how our attachment to the ego creates the dysfunction that leads to anger, jealousy, and unhappiness, and shows readers how to awaken to a new state of consciousness and follow the path to a truly fulfilling existence.

3rd, Self Observation by Red Hawk. The practical application guide if you got something from A New Earth; an instruction manual around self-observation.

From Amazon:
"This book is an in-depth examination of the much needed process of 'self'-study known as self observation. We live in an age where the "attention function" in the brain has been badly damaged by TV and computers - up to 90 percent of the public under age 35 suffers from attention-deficit disorder! This book offers the most direct, non-pharmaceutical means of healing attention dysfunction. The methods presented here are capable of restoring attention to a fully functional and powerful tool for success in life and relationships. This is also an age when humanity has lost its connection with conscience. When humanity has poisoned the Earth's atmosphere, water, air and soil, when cancer is in epidemic proportions and is mainly an environmental illness, the author asks: What is the root cause? And he boldly answers: failure to develop conscience! Self-observation, he asserts, is the most ancient, scientific, and proven means to develop this crucial inner guide to awakening and a moral life. This book is for the lay-reader, both the beginner and the advanced student of self observation. No other book on the market examines this practice in such detail. There are hundreds of books on self-help and meditation, but almost none on self-study via self observation, and none with the depth of analysis, wealth of explication, and richness of experience which this book offers."

Finance

Rich Dad Poor Dad. I talked about this in 2013: http://carnal0wnage.attackresearch.com/2013/12/best-non-technical-book-i-read-this-year.html

Endpoint Advanced Protection Buyer’s Guide: Key Capabilities for Detection

Posted under: Research and Analysis

As we resume posting Endpoint Detection and Response (D/R) selection criteria, let’s start with a focus on the Detection use case.

Before we get too far into capabilities, we should clear up some semantics about the word ‘detection’. Referring back to our timeline in Prevention Selection Criteria, detection takes place during execution. You could make the case that detection of malicious activity is what triggers blocking, and so is a prerequisite to attack prevention – without detection, how could you know what to prevent? But that’s too confusing. For simplicity let’s just say prevention means blocking an attack before it compromises a device, and can happen both prior to and during execution. Detection happens during and after execution, and implies a device was compromised because the attack was not prevented.

Data Collection

Modern detection requires significant analysis across a wide variety of telemetry sources from endpoints. Once telemetry is captured, a baseline of normal endpoint activity is established and used to look for anomalous behavior.

Given the data-centric nature of endpoint detection, an advanced endpoint detection offering should aggregate and analyze the following types of data:

  • Endpoint logs: Endpoints can generate a huge number of log entries, and an obvious reaction is to restrict the amount of log data ingested, but we recommend collecting as much log data from endpoints as possible. The more granular the better, given the sophistication of attackers and their ability to target anything on a device. If you do not collect the data on the endpoint, there is no way to get it when you need it to investigate later. Optimally, endpoint agents collect operating system activity alongside all available application logs. This includes identity activity such as new account creation and privilege escalation, process launching, and file system activity (key for detecting ransomware). There is some nuance to how long you retain collected data because it can be voluminous and compute-intensive to process and analyze – both on devices and centrally.

  • Processes: One of the more reliable ways to detect malicious behavior is by which OS processes are started and where they are launched from. This is especially critical when detecting scripting attacks because attackers love using legitimate system processes to launch malicious child processes.

  • Network traffic: A compromised endpoint will inevitably connect to a command and control network for instructions and to download additional attack code. These actions can be detected by monitoring the endpoint’s network stack. An agent can also watch for connections to known malicious sites and for reconnaissance activity on the local network.

  • Memory: Modern file-less attacks don’t store any malicious code in the file system, so modern advanced detection requires monitoring and analyzing activity within endpoint memory.

  • Registry: As with memory-based attacks, attackers frequently store malicious code within the device registry to evade file system detection. So advanced detection agents need to monitor and analyze registry activity for signs of misuse.

  • Configuration changes: It’s hard for attackers to totally obfuscate what is happening on an endpoint, so on-device configuration changes can indicate an attack.

  • File integrity: Another long-standing method of attack detection is monitoring changes to system files, because changes to such files outside administrative patching usually indicate something malicious. An advanced endpoint agent should collect this data and monitor for modified system files.

Analytics

As mentioned above, traditional endpoint detection relied heavily on simple file hashes and behavioral indicators. With today’s more sophisticated attacks, a more robust and scientific approach is required to distinguish legitimate from malicious activity. This more scientific approach is centered around machine learning techniques (advanced mathematics) to recognize the activity of adversaries before and during attacks. Modern detection products use huge amounts of endpoint telemetry (terabytes) to train mathematical models to detect anomalous activity and find commonalities between how attackers behave. These models then generate an attack score to prioritize alerts.

  • Profiling applications: Detecting application misuse is predicated on understanding legitimate usage of the application, so the mathematical models analyze both legitimate and malicious usage of frequently targeted applications (browsers, office productivity suites, email clients, etc.). This is a similar approach to attack prevention, discussed in our Prevention Selection Criteria guide.

  • Anomaly detection: With profiles in hand and a consistent stream of endpoint telemetry to analyze, the mathematical models attempt to identify abnormal device activity. When suspicion is high they trigger an alert, the device is marked suspicious, and an analyst determines whether the alert is legitimate.

  • Tuning: To avoid wasting too much time on false positives, the detection function needs to constantly learn what is really an attack and what isn’t, based on the results of detection in your environment. In terms of process, you’ll want to ensure your feedback is captured by your detection offering, and used to constantly improve your models to keep detection precise and current.

  • Risk scoring: We aren’t big fans of arbitrary risk scoring because the underlying math can be suspect. That said, there is a role for risk scoring in endpoint attack detection: prioritization. With dozens of alerts hitting daily – perhaps significantly more – it is important to weigh which alerts warrant immediate investigation, and a risk score should be able to tell you. Be sure to investigate the underlying scoring methodology, track scoring accuracy, and tune scoring to your environment.

  • Data management: Given the analytics-centric nature of EDR, being able to handle and analyze large amounts of endpoint telemetry collected from endpoints is critical. Inevitably you’ll run into the big questions: where to store all the data, how to scale analytics to tens or hundreds of thousands of endpoints, and how to economically analyze all your security data. But ultimately your technology decision comes down to a few factors:
    Cost: Whether the cost of storage and analytics is included in the service (some vendors store all telemetry in a cloud instance) or you need to provision a compute cluster in your data center to perform the analysis, there is a cost to crunching all the numbers. Make sure hardware, storage, and networking costs (including management) are all included in your analysis. You should perform an apples-to-apples comparison between options, whether they entail building or buying an analytics capability. And think about scaling, for both on-premise and cloud options, because you might decide to collect much more data in the future and don’t want to be held back by a huge upcharge.
    Performance: Based on your data volumes, both current and projected, how will the system perform? Various analytical techniques scale differently, so dig in a bit with vendors to understand how the performance of their system will be impacted if you significantly add a bunch more endpoints, or decide to analyze a lot more endpoint data sources.

Threat Intelligence

To keep up with modern advanced attackers, you need to learn from other attacks in the wild. That’s where Threat Intelligence comes into play, so any endpoint detection solution should have access to timely and robust threat intel. That can be directly from your endpoint detection vendor or a third party (or both), but you must be able to look for signs of attacks you haven’t suffered yet.

  • Broader indicators: Traditional endpoint protection relied mostly on file hashes to detect malware. When file hashes ceased to be effective, behavioral indicators were added to look for patterns associated with malicious activity. Advanced detection analysis requires an ever-expanding range of inputs – including memory, registry, and scripting attacks.

  • Campaign visibility: It’s not enough to detect attacks on a single endpoint – current adversaries leverage many devices to achieve their mission. Make sure your threat intelligence isn’t just indicators to look for on a single endpoint – it should reflect activity patterns across multiple devices, as observed during a more sophisticated campaign.

  • Network activity: Another aspect of modern detection entails malicious usage of legitimate applications and authorized system functions. At some point during the attack campaign a compromised device will need to communicate with either a command and control network, other devices on the local network, or likely both. That means you need to monitor endpoint network activity for patterns of suspicious activity and connections to known-bad networks.

  • Shared intelligence: Given the additional context threat intelligence can provide for endpoint detection, leveraging intelligence from a number of organizations can substantially enhance detection. Securely sharing intel bidirectionally among a community of similar organizations can be a good way to magnify the value of external threat data.

Detecting Campaigns

As we mentioned, advanced attackers rarely complete an attack using a single device. They typically orchestrate a multi-faceted attack involving multiple tactics across many devices to achieve their mission. This means you cannot understand an adversary’s objective or tactics if your detection mechanisms and analytic perspective are limited to a single device. So aggregating telemetry across devices and looking for signs of a coordinated attack (or campaign) is key to advanced detection. To be clear, a campaign always starts with an attack, so looking for malicious activity on a single device is your starting point. But it’s not sufficient for an advanced adversary.

  • Timeline visualization: Given the complexity of defender environments and attacker tactics, many security analysts find visualizing an attack to be helpful for understanding a campaign. The ability to see all devices and view attacker activity across the environment, and also to drill down into specific devices for deeper analysis and attack validation, streamlines analysis and understanding of attacks and response planning.

Enriching Alerts

As discussed in our Threat Operations research, we all need to make security analysts as effective and efficient as possible. One way is to eliminate traditional busywork by providing all the relevant information for validation and triage.

  • Adversary information: Depending on the malware, tactics, and networks detected during an incident, information about potential adversaries can be gathered from threat intelligence sources and presented to the analyst so they have as much context as available about what this attacker tends to do and what they are trying to achieve.

  • Artifacts: Assembling data related to the attack and the endpoint in question (such as memory dumps, file samples, network packets, etc.) as part of the detection process can save analysts considerable time, providing information they need to immediately drill down into a device once they get the alert.

  • Organizational history: Attackers don’t use new attacks unless they need to, so the ability to see whether a particular attack or tactic has been used before against this organization (or is being used currently) provides further context for an analyst to determine the attacker’s intent, and the depth of compromise.

  • Automating enrichment: A lot of enrichment information can be gathered automatically, so a key capability of a detection platform is its ability to look for attributes (including IP addresses, malware hashes, and botnet addresses) and populate a case file automatically with supplemental information before it is sent to the analyst.

Leveraging the Cloud

In light of the ongoing cloudification of pretty much everything in technology, it’s no surprise that advanced endpoint detection has also taken advantage. Much of the leverage comes from more effective analysis, both within an organization and across organizations – sharing threat data. There are also advantages to managing thousands of endpoints across locations and geographies via the cloud, but we’ll discuss that later under key technologies. Some considerations (both positive and otherwise) include:

  • Cloud scale: Depending on the size of an organization, endpoints can generate a tremendous amount of telemetry. Analyzing telemetry on an ongoing basis consumes a lot of storage and compute. The cloud is good at scaling up storage and compute, so it makes sense to shift processing to the cloud when feasible.

  • Local preprocessing: As good as the cloud is at scaling, some preprocessing can be done on each device, to only send pertinent telemetry up to the cloud. Some vendors send all telemetry to the cloud, and that can work – but there are tradeoffs in terms of performance, latency, and cost. Performing some local analysis on-device enables earlier attack detection.

  • Data movement: The next cloud consideration is how to most efficiently move all that data up to the cloud. Each endpoint can connect to the cloud service to transmit telemetry, but that may consume too much bandwidth (depending on what is collected) and can be highly inefficient. Alternatively, you can establish an on-premise aggregation point, which might perform additional processing (normalization, reduction, compression) before transmission to the cloud. The approaches are not actually mutually exclusive – devices aren’t always on the corporate network to leverage the aggregation point. The key is to consider network consumption when designing the system architecture.

  • Data security & privacy: Endpoint security analysis in the cloud entails sending device telemetry (at least metadata) to networks and systems not under your control. For diligence you need to understand how your vendor’s analytics infrastructure protects your data. Dig into their multi-tenancy architecture and data protection techniques to understand whether and how other organizations could access your data (even inadvertently). Also be sure to probe about whether and how your data is anonymized for shared threat intelligence. Finally, if you stop working with a vendor, make sure you understand how you can access your data, whether you can port it to another system, and how you can ensure your data is destroyed.

Our next post will dig into response and hunting use cases.

- Mike Rothman

Brace for Change: Preparing for GDPR in an Age of Cybercrime

Organisations have time to strengthen their cybersecurity posture to ensure they are not only compliant with GDPR by May of 2018 but realising its potential as a business enabler


Category:

Information Security
Leadership Insights


Endpoint Advanced Protection Buyer’s Guide: Detection and Response Use Cases

Posted under: Research and Analysis

As we continue documenting what you need to know to understand Endpoint Advanced Protection offerings, it’s time to delve into Detection and Response. Remember that before you are ready to pick anything, you need to understand the problem you are trying to solve. Detecting all endpoint attacks within microseconds and without false positives isn’t really achievable. You need to determine the key use cases most important to you, and make an honest assessment of your team and adversaries.

Why is this introspection necessary? Nobody ever says they don’t want to detect active attacks and hunt for adversaries. It’s cool and it’s necessary. Nobody wants to be perpetually reacting to attacks. That said, if you don’t have enough staff to work through half the high-priority alerts from your security monitoring systems, how can you find time to proactively hunt for stuff your monitoring systems don’t catch?

As another example, your team may consist of a bunch of entry-level security analysts struggling to figure out which events are actual device compromise, and which are false positives. Tasking these less sophisticated folks with advanced memory forensics to identify file-less malware may not be a good use of time.

To procure effective advanced Endpoint Detection and Response (EDR) technology, you must match what you buy to your organization’s ability to use it. Of course you should be able to grow into a more advanced program and capability. But don’t pay for an Escalade when a Kia Sportage is what you need today.

Over the next 5 days we will explain what you need to know about Detection and Response (D/R) to be an educated buyer of these solutions. We’ll start by helping you understand the key use cases for D/R, and then delve into the important capabilities for each use case, the underlying technologies which make it all work, and finally some key questions to ask vendors to understand their approaches to your problems.

Planning for Compromise

Before we get into specific use cases, we need to level-set regarding your situation, which we highlighted in our introduction to the Endpoint Advanced Protection Buyer’s Guide. For years there was little innovation in endpoint protection. Even worse, far too many organizations didn’t upgrade to the latest versions of their vendor’s offerings – meaning they were trying to detect 2016 attacks with 2011 technology. Inevitably that didn’t work out well.

Now there are better alternatives for prevention, so where does that leave endpoint detection and response? In the same situation it has always been: a necessity. Regardless of how good your endpoint prevention strategy is, it’s not good enough. You will have devices which get compromised. So you must be in a position to detect compromise and respond to it effectively and efficiently.

The good news is that in the absence of perfect (and often even effective) prevention options, many organizations have gone down this path, investing in better detection and response. They have been growing network-based detection and centralized security monitoring infrastructure (which drove the wave of security analytics offerings hitting the market now), and these organizations also invested in technologies to gather telemetry from endpoints and make sense of it.

To be clear, we have always been able to analyze what happened on an endpoint after an attack, assuming some reasonable logging and a forensic image of the device. There are decent open source tools for advanced forensics, which have always been leveraged by forensicators who charge hundreds of dollars an hour.

What you don’t have is enough people to perform that kind of response and forensic analysis. You hardly have enough people to work through your alert queue, right? This is where advanced Endpoint Detection and Response (EDR) tools can add real value to your security program. Facing a significant and critical skills gap, the technology needs to help your less experienced folks by structuring their activities and making their next step somewhat intuitive. But if a tool can’t make your people better and faster, then why bother?

But all vendors say that, right? They claim their tools find unknown attacks, don’t create a bunch of makework wasted on identifying or confirming false positives, and help you prioritize activities. The magic tools even find attacks before you know they are attacks, bundled with a side of unicorn dust.

Our objective with these selection criteria is to make sure you understand how to dig deeper into the true capabilities of these products, and know what is real and what is marketing puffery. Understand whether a vendor understands the entire threat landscape, or is focused on a small subset of high-profile attack vectors, and whether they will be an effective partner as adversaries and their tactics inevitably change. But as we mentioned above, you need to focus your selection process on the problem you need to solve, which comes down to defining your main use cases for EDR.

Key Use Cases

Let’s be clear about use cases. There are three main functions you need these tools to perform, and quite a bit of overlap with technologies underlying endpoint prevention.

  • Detection: When you are attacked it’s a race against time. Attackers are burrowing deeper into your environment and working toward their goal. The sooner you detect that something is amiss on an endpoint, the better your chance to contain the damage. Today’s challenge is not just detecting an attack on a single endpoint, but instead figuring out the extent of a coordinated campaign against many endpoints and devices within your environment.

  • Response: Once you know you have been attacked you need to respond quickly and efficiently. This use case focuses on providing an analyst the ability to drill down, validate an attack, and determine the extent of the attacker’s actions across all affected device(s), while assessing potential damage. You also need to be able to figure out effective workarounds and remediations to instruct the operations team how to prevent further outbreaks of the same attack. Don’t forget the need to make sure evidence is gathered in a way which preserves the option of later prosecution by maintaining chain of custody. Response is not a one-size-fits-all function, so assemble a toolkit for analysts to leverage, making the technology intuitive and easy to use. Yes, we know that’s a tall order.

  • Hunting: An adversary doesn’t always trigger an alert that kicks off a validation and response process, but that doesn’t mean they aren’t active on your networks. So the third use case for EDR is to proactively hunt adversaries on your network before they do damage. This is more art than science, because the hunter needs to be a detective, looking for signs of an attack while the attacker works to remain hidden.

You need to address all three use cases to build a comprehensive endpoint detection and response process, but your prioritization depends on the adversaries you face and the sophistication of your team. As we dig into key capabilities of EDR technology, don’t focus on the simplistic question of whether you need a capability, but the more relevant question: whether you can use it. There is a big difference, and over the years many of us bought a ton of security tools which we needed but couldn’t figure out how to use consistently and effectively.

Our next post will start to peel back what you need to know about detection.

- Mike Rothman

Unprotecting VBS Password Protected Office Files

Hi folks,
today I'd like to share a nice trick to unprotect password-protected VB scripts in Office files. Nowadays it's easy to find malicious content wrapped in OLE files, since that file format has the capability to link objects into documents and vice versa. An object could be a simple external link, a document itself, or a more complex script (such as a Visual Basic Script), and it can easily interact with the original document (container) in order to change contents and values.

Attackers frequently use embedded VB scripts to perform malicious actions such as (but not limited to): payload downloading, landing steps, environment preparation, and payload execution. Such a technique needs "the user agreement" before execution takes place, but once the user gives the linked code the freedom to execute on the machine (see the following image), the VB script is free to download content from a malicious website and later execute it on the victim machine.

Enable "Scripting" Content

Cyber security analysts often need to read the "raw code" by opening it and digging into obfuscation and anti-analysis techniques in order to figure out what it really does. Indeed, contemporary malware performs evasive techniques that make simple sandbox execution useless, and advanced attackers are smart enough to lock VB code behind complex and strong passwords. Those techniques make "raw code analysis" hard if the unlocking password is unknown. But again, a cyber security analyst really needs to open the document and dig into the "raw code" in order to defend victims. How would I approach this problem?

The following is a simple method to help cyber security analysts (NB: this is a well-known technique) bypass password-protected VB scripts.

Let's suppose you have an Excel file with Visual Basic code in it, and you want to read the password-protected VB script. Let's call this first Excel file: victim_file.

As a first step you need to open the victim_file. After opening it, you need to create an additional Excel file. Let's call it: injector_file.xlsm. Open the VB editor and add the following code into Module1.




Now create a new module, Module2, with the following code. It represents the "calling function". Run it and don't close it.



It's time to go back to your original victim_file; let's open the VB Editor and: here we go! Your code is in plain, clear text!

At this point you are probably wondering how this code works, so let's have a quick and dirty explanation. Once the VBProject gets opened, it displays a dialog box asking for a password (a String). The WinAPI eventually checks whether the input string equals the encoded static string (file body, not code body) and returns "True" (if the strings are equal) or "False" (if they are not). The Hook() function overrides the User32.dll DialogBoxParamA return parameter, making it always return "True".

Technically speaking:
  • Row 45 saves the original "call" (User32.dll DialogBoxParamA) parameters into TmpBytes
  • If the password is correct, TempBytes(0) gets the right pointer to the current process
  • If the password is not correct, the script saves the original bytes into OriginalBytes (length 6)
  • Row 50 takes the address of MwDialogBoxProgram
  • Row 52 forces the right handler
  • Row 53 saves the current value
  • Row 54 forces the return parameter to True
  • Row 56 moves the just-crafted parameters into the right location in user32.dll
Have nice VBA Password un-protection :D

Disclaimer:
This is a well-known method: it is not new.
I wrote it down since it is useful for cyber security analysts fighting Office macro malware. Don't use it unlawfully.
Do not use it to break legal documents.
I am not assuming any responsibility about the usage of such a script.
It works on my machine :D and I will not try to get it working on yours :D (programming horror humor)












Prosecuting journalists who covered Inauguration Day protests endangers press freedom and the First Amendment

Riot police in Washington, D.C. during the protests of President Donald Trump's inauguration on January 20, 2017.


REUTERS/JAMES LAWLER DUGGAN

Two journalists still face charges and potentially decades in prison for covering Inauguration Day protests in Washington D.C. The continued prosecution of Aaron Cantú and Alexei Wood for doing their jobs is outrageous, and the U.S. Attorney's office should immediately drop its charges against these journalists.

The Freedom of the Press Foundation joins Defending Rights and Dissent and eight other First Amendment protection organizations in signing a letter calling for an end to these journalists’ prosecution, delivered yesterday to the office of U.S. Attorney Jessie Liu.

Cantú, who has written for publications including the Intercept and VICE, and Wood, a professional photojournalist, were swept up in a mass and indiscriminate arrest of over 230 people that included legal observers and other journalists on January 20, 2017. While charges against other journalists have been dropped, Cantú and Wood inexplicably still face charges that include felonies with statutory minimums of decades in prison.

Cantú’s single charge of felony rioting carries a statutory maximum of 10 years, but Wood could face up to 70 years in prison on his additional charges of rioting and property destruction.

The Metropolitan Police Department arrested everyone in the proximity of the protest 30 minutes into the protest march and slapped charges on 200 people, and the U.S. Attorney’s Office seems intent on making an example out of the hundreds of people merely for their presence at a political demonstration. Indiscriminate arrests and mass felony charges always have troubling implications for freedom of expression and political protest, but there are particular concerns for press freedom when journalists are included in arrests.

Wood live-streamed the protest march to his Facebook page. His footage, which is still available online, shows him documenting the march but never participating in chants or destroying property.

A report from the D.C. Office of Police Complaints states that “it seems that proximity to the area where property damage occurred was a primary factor in the arrests.” In other words, the charges Cantú and Wood face criminalize them simply for their presence at a political march in which a few individuals destroyed property.

This is deeply troubling. Journalists have a responsibility to document newsworthy events, regardless of those events’ legality. As yesterday’s letter says, “because of this proximity prosecutors are arguing that journalists are not only guilty of property damage committed at most by a few individuals in a march that journalists sought to cover, but guilty of conspiracy to riot and inciting a riot. Under such a theory, journalism itself is criminalized.”

Journalism functions as a check on the actions of the powerful, and at demonstrations it serves to bring accountability to the tactics of law enforcement. Journalists' work bringing information to light is in the public interest, and they must be able to do their jobs uninhibited and without fear of retaliation or prosecution. These charges have the effect of punishing journalists for documenting political protests, and the continued prosecution of Aaron Cantú and Alexei Wood poses a serious and fundamental threat to press freedom.


TA17-318B: HIDDEN COBRA – North Korean Trojan: Volgmer

Original release date: November 14, 2017 | Last revised: November 22, 2017

Systems Affected

Network systems

Overview

This joint Technical Alert (TA) is the result of analytic efforts between the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI). Working with U.S. government partners, DHS and FBI identified Internet Protocol (IP) addresses and other indicators of compromise (IOCs) associated with a Trojan malware variant used by the North Korean government—commonly known as Volgmer. The U.S. Government refers to malicious cyber activity by the North Korean government as HIDDEN COBRA. For more information on HIDDEN COBRA activity, visit https://www.us-cert.gov/hiddencobra.

FBI has high confidence that HIDDEN COBRA actors are using the IP addresses—listed in this report’s IOC files—to maintain a presence on victims’ networks and to further network exploitation. DHS and FBI are distributing these IP addresses to enable network defense and reduce exposure to North Korean government malicious cyber activity.

This alert includes IOCs related to HIDDEN COBRA, IP addresses linked to systems infected with Volgmer malware, malware descriptions, and associated signatures. This alert also includes suggested response actions to the IOCs provided, recommended mitigation techniques, and information on reporting incidents. If users or administrators detect activity associated with the Volgmer malware, they should immediately flag it, report it to the DHS National Cybersecurity and Communications Integration Center (NCCIC) or the FBI Cyber Watch (CyWatch), and give it the highest priority for enhanced mitigation.

For a downloadable copy of IOCs, see:

NCCIC conducted analysis on five files associated with or identified as Volgmer malware and produced a Malware Analysis Report (MAR). MAR-10135536-D examines the tactics, techniques, and procedures observed. For a downloadable copy of the MAR, see:

Description

Volgmer is a backdoor Trojan designed to provide covert access to a compromised system. Since at least 2013, HIDDEN COBRA actors have been observed using Volgmer malware in the wild to target the government, financial, automotive, and media industries.

It is suspected that spear phishing is the primary delivery mechanism for Volgmer infections; however, HIDDEN COBRA actors use a suite of custom tools, some of which could also be used to initially compromise a system. Therefore, it is possible that additional HIDDEN COBRA malware may be present on network infrastructure compromised with Volgmer.

The U.S. Government has analyzed Volgmer’s infrastructure and has identified it on systems using both dynamic and static IP addresses. At least 94 static IP addresses were identified, as well as dynamic IP addresses registered across various countries. The greatest concentrations of dynamic IP addresses are identified below by approximate percentage:

  • India (772 IPs) 25.4 percent
  • Iran (373 IPs) 12.3 percent
  • Pakistan (343 IPs) 11.3 percent
  • Saudi Arabia (182 IPs) 6 percent
  • Taiwan (169 IPs) 5.6 percent
  • Thailand (140 IPs) 4.6 percent
  • Sri Lanka (121 IPs) 4 percent
  • China (82 IPs, including Hong Kong (12)) 2.7 percent
  • Vietnam (80 IPs) 2.6 percent
  • Indonesia (68 IPs) 2.2 percent
  • Russia (68 IPs) 2.2 percent

Technical Details

As a backdoor Trojan, Volgmer has several capabilities including: gathering system information, updating service registry keys, downloading and uploading files, executing commands, terminating processes, and listing directories. In one of the samples received for analysis, the US-CERT Code Analysis Team observed botnet controller functionality.

Volgmer payloads have been observed in 32-bit form as either executables or dynamic-link library (.dll) files. The malware uses a custom binary protocol to beacon back to the command and control (C2) server, often via TCP port 8080 or 8088, with some payloads implementing Secure Socket Layer (SSL) encryption to obfuscate communications.

Malicious actors commonly maintain persistence on a victim’s system by installing the malware as a service. Volgmer queries the system and randomly selects a service in which to install a copy of itself. The malware then overwrites the ServiceDLL entry in the selected service's registry entry. In some cases, HIDDEN COBRA actors give the created service a pseudo-random name that may be composed of various hardcoded words.
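
As an illustrative aid (not something the alert itself provides), defenders can enumerate the ServiceDLL value configured under each Windows service and review the results for unexpected entries. A minimal Python sketch, assuming it is run locally on the host being checked:

import winreg

SERVICES = r"SYSTEM\CurrentControlSet\Services"

def list_service_dlls():
    # Walk every service key and print its ServiceDLL value, if one is set.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES) as root:
        index = 0
        while True:
            try:
                name = winreg.EnumKey(root, index)
            except OSError:
                break  # no more subkeys
            index += 1
            try:
                with winreg.OpenKey(root, name + r"\Parameters") as params:
                    dll, _ = winreg.QueryValueEx(params, "ServiceDLL")
                    print(f"{name}: {dll}")
            except OSError:
                continue  # service has no Parameters\ServiceDLL value

if __name__ == "__main__":
    list_service_dlls()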

Detection and Response

This alert’s IOC files provide HIDDEN COBRA indicators related to Volgmer. DHS and FBI recommend that network administrators review the information provided, identify whether any of the provided IP addresses fall within their organizations’ allocated IP address space, and—if found—take necessary measures to remove the malware.

When reviewing network perimeter logs for the IP addresses, organizations may find instances of these IP addresses attempting to connect to their systems. Upon reviewing the traffic from these IP addresses, system owners may find some traffic relates to malicious activity and some traffic relates to legitimate activity.

Network Signatures and Host-Based Rules

This section contains network signatures and host-based rules that can be used to detect malicious activity associated with HIDDEN COBRA actors. Although created using a comprehensive vetting process, the possibility of false positives always remains. These signatures and rules should be used to supplement analysis and should not be used as a sole source of attributing this activity to HIDDEN COBRA actors.

Network Signatures

alert tcp any any -> any any (msg:"Malformed_UA"; content:"User-Agent: Mozillar/"; depth:500; sid:99999999;)

___________________________________________________________________________________________________

YARA Rules

rule volgmer
{
meta:
    description = "Malformed User Agent"
strings:
    $s = "Mozillar/"
condition:
    (uint16(0) == 0x5A4D and uint16(uint32(0x3c)) == 0x4550) and $s
}
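
As a convenience, the rule above can be compiled and run with the yara-python bindings. This is only a minimal sketch, and "sample.bin" is a placeholder path for whatever file is under analysis:

import yara

VOLGMER_RULE = r'''
rule volgmer
{
meta:
    description = "Malformed User Agent"
strings:
    $s = "Mozillar/"
condition:
    (uint16(0) == 0x5A4D and uint16(uint32(0x3c)) == 0x4550) and $s
}
'''

rules = yara.compile(source=VOLGMER_RULE)
for match in rules.match("sample.bin"):  # placeholder path
    print(match.rule, match.strings)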

Impact

A successful network intrusion can have severe impacts, particularly if the compromise becomes public and sensitive information is exposed. Possible impacts include:

  • temporary or permanent loss of sensitive or proprietary information,
  • disruption to regular operations,
  • financial losses incurred to restore systems and files, and
  • potential harm to an organization’s reputation.

Solution

Mitigation Strategies

DHS recommends that users and administrators use the following best practices as preventive measures to protect their computer networks:

  • Use application whitelisting to help prevent malicious software and unapproved programs from running. Application whitelisting is one of the best security strategies as it allows only specified programs to run, while blocking all others, including malicious software.
  • Keep operating systems and software up-to-date with the latest patches. Vulnerable applications and operating systems are the target of most attacks. Patching with the latest updates greatly reduces the number of exploitable entry points available to an attacker.
  • Maintain up-to-date antivirus software, and scan all software downloaded from the Internet before executing.
  • Restrict users’ abilities (permissions) to install and run unwanted software applications, and apply the principle of “least privilege” to all systems and services. Restricting these privileges may prevent malware from running or limit its capability to spread through the network.
  • Avoid enabling macros from email attachments. If a user opens the attachment and enables macros, embedded code will execute the malware on the machine. For enterprises or organizations, it may be best to block email messages with attachments from suspicious sources. For information on safely handling email attachments, see Recognizing and Avoiding Email Scams. Follow safe practices when browsing the web. See Good Security Habits and Safeguarding Your Data for additional details.
  • Do not follow unsolicited web links in emails. See Avoiding Social Engineering and Phishing Attacks for more information.

Response to Unauthorized Network Access

  • Contact DHS or your local FBI office immediately. To report an intrusion and request resources for incident response or technical assistance, contact DHS NCCIC (NCCICCustomerService@hq.dhs.gov or 888-282-0870), FBI through a local field office, or the FBI’s Cyber Division (CyWatch@fbi.gov or 855-292-3937).

References

Revision History

  • November 14, 2017: Initial version

This product is provided subject to this Notification and this Privacy & Use policy.


TA17-318A: HIDDEN COBRA – North Korean Remote Administration Tool: FALLCHILL

Original release date: November 14, 2017 | Last revised: November 22, 2017

Systems Affected

Network systems

Overview

This joint Technical Alert (TA) is the result of analytic efforts between the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI). Working with U.S. government partners, DHS and FBI identified Internet Protocol (IP) addresses and other indicators of compromise (IOCs) associated with a remote administration tool (RAT) used by the North Korean government—commonly known as FALLCHILL. The U.S. Government refers to malicious cyber activity by the North Korean government as HIDDEN COBRA. For more information on HIDDEN COBRA activity, visit https://www.us-cert.gov/hiddencobra.

FBI has high confidence that HIDDEN COBRA actors are using the IP addresses—listed in this report’s IOC files—to maintain a presence on victims’ networks and to further network exploitation. DHS and FBI are distributing these IP addresses to enable network defense and reduce exposure to any North Korean government malicious cyber activity.

This alert includes IOCs related to HIDDEN COBRA, IP addresses linked to systems infected with FALLCHILL malware, malware descriptions, and associated signatures. This alert also includes suggested response actions to the IOCs provided, recommended mitigation techniques, and information on reporting incidents. If users or administrators detect activity associated with the FALLCHILL malware, they should immediately flag it, report it to the DHS National Cybersecurity and Communications Integration Center (NCCIC) or the FBI Cyber Watch (CyWatch), and give it the highest priority for enhanced mitigation.

For a downloadable copy of IOCs, see:

NCCIC conducted analysis on two samples of FALLCHILL malware and produced a Malware Analysis Report (MAR). MAR-10135536-A examines the tactics, techniques, and procedures observed in the malware. For a downloadable copy of the MAR, see:

Description

According to trusted third-party reporting, HIDDEN COBRA actors have likely been using FALLCHILL malware since 2016 to target the aerospace, telecommunications, and finance industries. The malware is a fully functional RAT with multiple commands that the actors can issue from a command and control (C2) server to a victim’s system via dual proxies. FALLCHILL typically infects a system as a file dropped by other HIDDEN COBRA malware or as a file downloaded unknowingly by users when visiting sites compromised by HIDDEN COBRA actors. HIDDEN COBRA actors use an external tool or dropper to install the FALLCHILL malware-as-a-service to establish persistence. Because of this, additional HIDDEN COBRA malware may be present on systems compromised with FALLCHILL.

During analysis of the infrastructure used by FALLCHILL malware, the U.S. Government identified 83 network nodes. Additionally, using publicly available registration information, the U.S. Government identified the countries in which the infected IP addresses are registered.

Technical Details

FALLCHILL is the primary component of a C2 infrastructure that uses multiple proxies to obfuscate network traffic between HIDDEN COBRA actors and a victim’s system. According to trusted third-party reporting, communication flows from the victim’s system to HIDDEN COBRA actors using a series of proxies as shown in figure 1.

HIDDEN COBRA Communication Flow

Figure 1. HIDDEN COBRA Communication Flow

FALLCHILL uses fake Transport Layer Security (TLS) communications, encrypting the data with RC4 and the following key: [0d 06 09 2a 86 48 86 f7 0d 01 01 01 05 00 03 82]. FALLCHILL collects basic system information and beacons the following to the C2:

  • operating system (OS) version information,
  • processor information,
  • system name,
  • local IP address information,
  • unique generated ID, and
  • media access control (MAC) address.
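
For analysts who want to experiment with decoding captured traffic, a plain-Python RC4 routine using the key quoted above may be handy. This is a generic RC4 sketch, not code taken from the alert, and whether a given capture decodes cleanly depends on the sample:

def rc4(key: bytes, data: bytes) -> bytes:
    # Standard RC4: key-scheduling algorithm followed by the PRGA keystream.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# RC4 key reported above for FALLCHILL's fake-TLS channel.
KEY = bytes.fromhex("0d06092a864886f70d01010105000382")
# ciphertext = bytes.fromhex("<captured payload hex>")
# print(rc4(KEY, ciphertext))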

FALLCHILL contains the following built-in functions for remote operations that provide various capabilities on a victim’s system:

  • retrieve information about all installed disks, including the disk type and the amount of free space on the disk;
  • create, start, and terminate a new process and its primary thread;
  • search, read, write, move, and execute files;
  • get and modify file or directory timestamps;
  • change the current directory for a process or file; and
  • delete malware and artifacts associated with the malware from the infected system.

Detection and Response

This alert’s IOC files provide HIDDEN COBRA indicators related to FALLCHILL. DHS and FBI recommend that network administrators review the information provided, identify whether any of the provided IP addresses fall within their organizations’ allocated IP address space, and—if found—take necessary measures to remove the malware.

When reviewing network perimeter logs for the IP addresses, organizations may find instances of these IP addresses attempting to connect to their systems. Upon reviewing the traffic from these IP addresses, system owners may find some traffic relates to malicious activity and some traffic relates to legitimate activity.

Network Signatures and Host-Based Rules

This section contains network signatures and host-based rules that can be used to detect malicious activity associated with HIDDEN COBRA actors. Although created using a comprehensive vetting process, the possibility of false positives always remains. These signatures and rules should be used to supplement analysis and should not be used as a sole source of attributing this activity to HIDDEN COBRA actors.

Network Signatures

alert tcp any any -> any any (msg:"Malicious SSL 01 Detected";content:"|17 03 01 00 08|";  pcre:"/\x17\x03\x01\x00\x08.{4}\x04\x88\x4d\x76/"; rev:1; sid:2;)

___________________________________________________________________________________________

alert tcp any any -> any any (msg:"Malicious SSL 02 Detected";content:"|17 03 01 00 08|";  pcre:"/\x17\x03\x01\x00\x08.{4}\x06\x88\x4d\x76/"; rev:1; sid:3;)

___________________________________________________________________________________________

alert tcp any any -> any any (msg:"Malicious SSL 03 Detected";content:"|17 03 01 00 08|";  pcre:"/\x17\x03\x01\x00\x08.{4}\xb2\x63\x70\x7b/"; rev:1; sid:4;)

___________________________________________________________________________________________

alert tcp any any -> any any (msg:"Malicious SSL 04 Detected";content:"|17 03 01 00 08|";  pcre:"/\x17\x03\x01\x00\x08.{4}\xb0\x63\x70\x7b/"; rev:1; sid:5;)

___________________________________________________________________________________________

YARA Rules

The following rules were provided to NCCIC by a trusted third party for the purpose of assisting in the identification of malware associated with this alert.

THIS DHS/NCCIC MATERIAL IS FURNISHED ON AN “AS-IS” BASIS. These rules have been tested and determined to function effectively in a lab environment, but we have no way of knowing if they may function differently in a production network. Anyone using these rules is encouraged to test them using a data set representative of their environment.

rule rc4_stack_key_fallchill
{
meta:
    description = "rc4_stack_key"
strings:
    $stack_key = { 0d 06 09 2a ?? ?? ?? ?? 86 48 86 f7 ?? ?? ?? ?? 0d 01 01 01 ?? ?? ?? ?? 05 00 03 82 41 8b c9 41 8b d1 49 8b 40 08 48 ff c2 88 4c 02 ff ff c1 81 f9 00 01 00 00 7c eb }
condition:
    (uint16(0) == 0x5A4D and uint16(uint32(0x3c)) == 0x4550) and $stack_key
}

rule success_fail_codes_fallchill

{
meta:
    description = "success_fail_codes"
strings:
    $s0 = { 68 7a 34 12 00 }  
    $s1 = { ba 7a 34 12 00 }  
    $f0 = { 68 5c 34 12 00 }  
    $f1 = { ba 5c 34 12 00 }
condition:
    (uint16(0) == 0x5A4D and uint16(uint32(0x3c)) == 0x4550) and (($s0 and $f0) or ($s1 and $f1))
}

___________________________________________________________________________________________

Impact

A successful network intrusion can have severe impacts, particularly if the compromise becomes public and sensitive information is exposed. Possible impacts include:

  • temporary or permanent loss of sensitive or proprietary information,
  • disruption to regular operations,
  • financial losses incurred to restore systems and files, and
  • potential harm to an organization’s reputation.

Solution

Mitigation Strategies

DHS recommends that users and administrators use the following best practices as preventive measures to protect their computer networks:

  • Use application whitelisting to help prevent malicious software and unapproved programs from running. Application whitelisting is one of the best security strategies as it allows only specified programs to run, while blocking all others, including malicious software.
  • Keep operating systems and software up-to-date with the latest patches. Vulnerable applications and operating systems are the target of most attacks. Patching with the latest updates greatly reduces the number of exploitable entry points available to an attacker.
  • Maintain up-to-date antivirus software, and scan all software downloaded from the Internet before executing.
  • Restrict users’ abilities (permissions) to install and run unwanted software applications, and apply the principle of “least privilege” to all systems and services. Restricting these privileges may prevent malware from running or limit its capability to spread through the network.
  • Avoid enabling macros from email attachments. If a user opens the attachment and enables macros, embedded code will execute the malware on the machine. For enterprises or organizations, it may be best to block email messages with attachments from suspicious sources. For information on safely handling email attachments, see Recognizing and Avoiding Email Scams. Follow safe practices when browsing the web. See Good Security Habits and Safeguarding Your Data for additional details.
  • Do not follow unsolicited web links in emails. See Avoiding Social Engineering and Phishing Attacks for more information.

Response to Unauthorized Network Access

  • Contact DHS or your local FBI office immediately. To report an intrusion and request resources for incident response or technical assistance, contact DHS NCCIC (NCCICCustomerService@hq.dhs.gov or 888-282-0870), FBI through a local field office, or the FBI’s Cyber Division (CyWatch@fbi.gov or 855-292-3937).

 

References

Revision History

  • November 14, 2017: Initial version

This product is provided subject to this Notification and this Privacy & Use policy.


Lock it up! New hardware protections for your lock screen with the Google Pixel 2


The new Google Pixel 2 ships with a dedicated hardware security module designed to be robust against physical attacks. This hardware module performs lockscreen passcode verification and protects your lock screen better than software alone.

To learn more about the new protections, let’s first review the role of the lock screen. Enabling a lock screen protects your data, not just against casual thieves, but also against sophisticated attacks. Many Android devices, including all Pixel phones, use your lockscreen passcode to derive the key that is then used to encrypt your data. Before you unlock your phone for the first time after a reboot, an attacker cannot recover the key (and hence your data) without knowing your passcode first. To protect against brute-force guessing your passcode, devices running Android 7.0+ verify your attempts in a secure environment that limits how often you can repeatedly guess. Only when the secure environment has successfully verified your passcode does it reveal a device and user-specific secret used to derive the disk encryption key.
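
Conceptually, that flow can be sketched as a key derivation that requires both the passcode and a hardware-held secret. The following Python sketch is purely illustrative: the function name, KDF choice, and parameters are assumptions, not Android's actual implementation:

import hashlib
import hmac
import os

def derive_disk_key(passcode: str, device_secret: bytes, salt: bytes) -> bytes:
    # Stretch the passcode, then bind it to the device- and user-specific
    # secret that the secure environment reveals only after verification.
    stretched = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)
    return hmac.new(device_secret, stretched, hashlib.sha256).digest()

# Illustrative values only.
salt = os.urandom(16)
device_secret = os.urandom(32)  # stand-in for the secret held in secure hardware
print(derive_disk_key("1234", device_secret, salt).hex())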

Benefits of tamper-resistant hardware

The goal of these protections is to prevent attackers from decrypting your data without knowing your passcode, but the protections are only as strong as the secure environment that verifies the passcode. Performing these types of security-critical operations in tamper-resistant hardware significantly increases the difficulty of attacking it.
Tamper-resistant hardware comes in the form of a discrete chip separate from the System on a Chip (SoC). It includes its own flash, RAM, and other resources inside a single package, so it can fully control its own execution. It can also detect and defend against outside attempts to physically tamper with it.

In particular:
  • Because it has its own dedicated RAM, it’s robust against many side-channel information leakage attacks, such as those described in the TruSpy cache side-channel paper.
  • Because it has its own dedicated flash, it’s harder to interfere with its ability to store state persistently.
  • It loads its operating system and software directly from internal ROM and flash, and it controls all updates to it, so attackers can’t directly tamper with its software to inject malicious code.
  • Tamper-resistant hardware is resilient against many physical fault injection techniques including attempts to run outside normal operating conditions, such as wrong voltage, wrong clock speed, or wrong temperature. This is standardized in specifications such as the SmartCard IC Platform Protection Profile, and tamper-resistant hardware is often certified to these standards.
  • Tamper-resistant hardware is usually housed in a package that is resistant to physical penetration and designed to resist side channel attacks, including power analysis, timing analysis, and electromagnetic sniffing, such as described in the SoC it to EM paper.
Security module in Pixel 2

The new Google Pixel 2 ships with a security module built using tamper-resistant hardware that protects your lock screen and your data against many sophisticated hardware attacks.

In addition to all the benefits already mentioned, the security module in Pixel 2 also helps protect you against software-only attacks:
  • Because it performs very few functions, it has a super small attack surface.
  • With passcode verification happening in the security module, even in the event of a full compromise elsewhere, the attacker cannot derive your disk encryption key without compromising the security module first.
  • The security module is designed so that nobody, including Google, can update the passcode verification logic to a weakened version without knowing your passcode first.
Summary

Just like many other Google products, such as Chromebooks and Cloud, Android and Pixel are investing in additional hardware protections to make your device more secure. With the new Google Pixel 2, your data is safer against an entire class of sophisticated hardware attacks.

Startup Security Weekly #62 – It’s Been Good

Roi Abutbul of Javelin Networks joins us. In the news, myths about successful founders, side hustle, overwhelmed consumers, and updates from CrowdStrike, Skybox, Zscaler, and more on this episode of Startup Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/SSWEpisode62

Visit https://www.securityweekly.com/ssw for all the latest episodes!

 

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

TEDxMilano: What a great adventure !

Hi folks, 
today I want to share my "output" of a super nice adventure I had this year, which led me to actively participate in TEDxMilano. It is definitely one of the most exciting stages I've been on so far.

My usual readers would probably think: "Hey man, you are a technical person, you should participate in DefCON, Black Hat, NullCon, ShmooCon, Toorcon and many more technical conferences like those, where you have the opportunity to show reverse engineering techniques, new vulnerabilities or new attack paths. I won't see you at a TEDx conference!".

Well, actually I have participated in a lot of such conferences (just take a look at "Selected Publications" on top of this page), but you know what? CyberSecurity is a hybrid world where technologies meet people, where the most sophisticated evasion techniques meet human irrationality, and where a simple "click" can make the difference between "levelUP" and "GameOver". So I believe that being able to communicate such a complex world to non-technical people is a great way to contribute to the security of our digital era. If you agree (and you know the Italian language), please have a look! I will appreciate it.




“As long as a human being is the one profiting from an attack, only a human being will be able to combat it.” This is how we can define Marco Ramilli’s essence, a computer engineer and an expert in hacking, penetration testing, and cyber security. Marco obtained a degree in Computer Engineering and, while working on a Ph.D. in Information Security, served the security division of the U.S. Government’s National Institute of Standards and Technology, where he conducted research on Malware Evasion and Penetration Testing techniques for the electronic voting system. In 2014 he founded Yoroi, a startup that has created one of the best cyber security defense centers he ever developed. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

IDG Contributor Network: Information security – let’s get physical

In the past few months, I have visited a variety of medical facilities, some as a risk management professional, and others as a patient. While I am confident that these practices had implemented a variety of data security measures, in almost all cases their physical security suffered from obvious challenges, even based on casual observation. Examples of issues included a lack of surveillance cameras, unprotected medical records, and unlocked doors controlling sensitive areas.

I suppose in one sense this is not surprising. With major incidents involving malware, ransomware, and network intrusions making the national news weekly, organizations are understandably focused on data security. Unfortunately, some of these same organizations have not kept up with advances in physical security, and in some cases I suspect they have regressed.

To read this article in full, please click here

Academic Research Reports Nearly 30,000 DoS Attacks per Day

Academics from the University of Twente (Netherlands); UC San Diego (USA); and Saarland University (Germany) recently conducted research that found that one-third of all /24 networks have suffered at least one DoS attack over the last two years. The research also found that “an average of 3% of the Web sites in .com, .net, and .org were involved with attacks, daily.” The study results were published in a report titled “Millions of Targets Under Attack: a Macroscopic Characterization of the DoS Ecosystem,” which the researchers presented at last week’s Internet Measurement Conference in London. (Note that the research seems to refer to both denial of service attacks and distributed denial of service attacks as simply “DoS attacks.”)

Security experts have long recognized that DDoS attacks are an increasing problem, but it is helpful to have large-scale, independent research that validates what vendors and organizations observe. According to a SecurityWeek article, “By combining the direct attacks with the reflection attacks, the researchers discovered that the internet suffers an average of 28,700 distinct DoS attacks every day. This is claimed to be 1000 times greater than other reports have indicated.” To learn that the number of attacks is actually 1,000 times greater than previously thought is quite astounding, indeed. Perhaps it is a wake-up call to those who are unaware of the scope and gravity of the DDoS problem.

One of the most interesting findings from this report is that “low-level, even if repeated, attacks are largely ignored by the site owners. By correlating attacks with the time web sites migrated their DoS defense to third-party DPS companies, the researchers were able to determine what triggers the use of a DPS. They found, in general, that attack duration does not strongly correlate with DPS migration; but early migration follows attacks of high intensity.”

In other words, companies generally do not engage a DDoS protection system for low-level DDoS attacks, and if an attack doesn’t last very long, they don’t engage their third party DDoS protection system. That’s an unfortunate trend because companies can ill afford to ignore low-level, short-duration DDoS attacks. As other DDoS research has found, such attacks serve as a smokescreen for more damaging security breaches. Furthermore, Corero’s DDoS Trends Reports have consistently found that low-threshold DDoS attacks are much more common than volumetric attacks, and that most DDoS attacks are short in duration.

All combined, these findings suggest that many companies are leaving the door open to security breaches. Certainly, many companies are investing in all types of IT security to ward off threats ranging from intellectual property theft and data theft to malware and ransomware. It costs a lot of time and money to implement those other security solutions, so it makes little sense to leave the figurative “barn door” open at the network perimeter. DDoS attack protection at the network edge is probably the most important line of defense.

Though the statistics are sobering and not very surprising, it is nonetheless refreshing and helpful to see academic research pertaining to the global scope of denial of service attacks. In this case, the research provides validation of the problem that Corero, along with many other experts and vendors, works hard to resolve.

Corero has been a leader in modern DDoS protection solutions for over a decade; to learn how you can protect your company, contact us.

Hybrid Analysis Grows Up – Acquired by CrowdStrike

CrowdStrike acquired Payload Security, the company behind the automated malware analysis sandbox technology Hybrid Analysis, in November 2017. Jan Miller founded Payload Security approximately 3 years earlier. The interview I conducted with Jan in early 2015 captured his mindset at the onset of the journey that led to this milestone. I briefly spoke with Jan again, a few days after the acquisition. He reflected upon his progress over the three years of leading Payload Security so far and his plans for Hybrid Analysis as part of CrowdStrike.

Jan, why did you and your team decide to join CrowdStrike?

Developing a malware analysis product requires a constant stream of improvements to the technology, not only to keep up with the pace of malware authors’ attempts to evade automated analysis but also innovate and embrace the community. The team has accomplished a lot thus far, but joining CrowdStrike gives us the ability to access a lot more resources and grow the team to rapidly improve Hybrid Analysis in the competitive space that we live in. We will have the ability to bring more people into the team and also enhance and grow the infrastructure and integrations behind the free Hybrid Analysis community platform.

What role did the free version of your product, available at hybrid-analysis.com, play in the company’s evolution?

A lot of people in the community have been using the free version of Hybrid Analysis to analyze their own malware samples, share them with friends or look up existing analysis reports and extract intelligence. Today, the site has approximately 44,000 active users and around 1 million sessions per month. One of the reasons the site took off is the simplicity and quality of the reports, focusing on what matters and enabling effective incident response.

The success of Hybrid Analysis was, to a large extent, due to the engagement from the community. The samples we have been receiving allowed us to constantly field-test the system against the latest malware, stay on top of the game and also to embrace feedback from security professionals. This allowed us to keep improving at rapid pace in a competitive space, successfully.

What will happen to the free version of Hybrid Analysis? I saw on Twitter that your team pinky-promised to continue making it available for free to the community, but I was hoping you could comment further on this.

I’m personally committed to ensuring that the community platform will stay not only free, but grow even more useful and offer new capabilities shortly. Hybrid Analysis deserves to be the place for professionals to get a reasoned opinion about any binary they’ve encountered. We plan to open up the API, add more integrations and other free capabilities in the near future.

What stands out in your mind as you reflect upon your Hybrid Analysis journey so far? What’s motivating you to move forward?

Starting out without any noteworthy funding, co-founders or advisors, in a saturated high-tech market that is extremely fast paced and full of money, it seemed impossible to succeed on paper. But the reality is: if you are offering a product or service that solves a real-world problem considerably better than the market leaders do, you always have a chance. My hope is that people who are considering becoming entrepreneurs will be encouraged to pursue their ideas; be prepared to work 80 hours a week, but with the right technology, feedback from the community, amazing team members and insightful advisors to lean on, you can make it happen.

In fact, it’s because of the value Hybrid Analysis has been adding to the community that I was able to attract the highly talented individuals that are currently on the team. It has always been important for me to make a difference, to contribute something and have a true impact on people’s lives. It all boils down to bringing more light than darkness into the world, as cheesy as that might sound.

Paul’s Security Weekly #536 – Cult of Good Wi-Fi

Amanda Berlin of NetWorks Group and Lee Brotherston of Wealthsimple join us, Sven Morgenroth of Netsparker delivers a tech segment on cross-site scripting, and we discuss the latest security news on this episode of Paul’s Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/Episode536

Visit https://www.securityweekly.com for all the latest episodes!

 

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

New research: Understanding the root cause of account takeover


Account takeover, or ‘hijacking’, is unfortunately a common problem for users across the web. More than 15% of Internet users have reported experiencing the takeover of an email or social networking account. However, despite its familiarity, there is a dearth of research about the root causes of hijacking.

With Google accounts as a case-study, we teamed up with the University of California, Berkeley to better understand how hijackers attempt to take over accounts in the wild. From March 2016 to March 2017, we analyzed several black markets to see how hijackers steal passwords and other sensitive data. We’ve highlighted some important findings from our investigation below. We presented our study at the Conference on Computer and Communications Security (CCS) and it’s now available here.

What we learned from the research proved to be immediately useful. We applied its insights to our existing protections and secured 67 million Google accounts before they were abused. We’re sharing this information publicly so that other online services can better secure their users, and can also supplement their authentication systems with more protections beyond just passwords.


How hijackers steal passwords on the black market

Our research tracked several black markets that traded third-party password breaches, as well as 25,000 blackhat tools used for phishing and keylogging. In total, these sources helped us identify 788,000 credentials stolen via keyloggers, 12 million credentials stolen via phishing, and 3.3 billion credentials exposed by third-party breaches.

While our study focused on Google, these password stealing tactics pose a risk to all account-based online services. In the case of third-party data breaches, 12% of the exposed records included a Gmail address serving as a username along with a password; of those passwords, 7% were valid due to reuse. When it comes to phishing and keyloggers, attackers frequently target Google accounts with varying success: 12-25% of attacks yield a valid password.

However, because a password alone is rarely sufficient for gaining access to a Google account, increasingly sophisticated attackers also try to collect sensitive data that we may request when verifying an account holder’s identity. We found 82% of blackhat phishing tools and 74% of keyloggers attempted to collect a user’s IP address and location, while another 18% of tools collected phone numbers and device make and model.

By ranking the relative risk to users, we found that phishing posed the greatest threat, followed by keyloggers, and finally third-party breaches.

Protecting our users from account takeover

Our findings were clear: enterprising hijackers are constantly searching for, and are able to find, billions of different platforms’ usernames and passwords on black markets. While we have already applied these insights to our existing protections, our findings are yet another reminder that we must continuously evolve our defenses in order to stay ahead of these bad actors and keep users safe.

For many years, we’ve applied a ‘defense in-depth’ approach to security—a layered series of constantly improving protections that automatically prevent, detect, and mitigate threats to keep your account safe.

Prevention

A wide variety of safeguards help us to prevent attacks before they ever affect our users. For example, Safe Browsing, which now protects more than 3 billion devices, alerts users before they visit a dangerous site or when they click a link to a dangerous site within Gmail. We recently announced the Advanced Protection program which provides extra security for users that are at elevated risk of attack.

Detection

We monitor every login attempt to your account for suspicious activity. When there is a sign-in attempt from a device you’ve never used, or a location you don’t commonly access your account from, we’ll require additional information before granting access to your account. For example, if you sign in from a new laptop and you have a phone associated with your account, you will see a prompt—we’re calling these dynamic verification challenges—like this:
This challenge provides two-factor authentication on all suspicious logins, while mitigating the risk of account lockout.

Mitigation

Finally, we regularly scan activity across Google’s suite of products for suspicious actions performed by hijackers and when we find any, we lock down the affected accounts to prevent any further damage as quickly as possible. We prevent or undo actions we attribute to account takeover, notify the affected user, and help them change their password and re-secure their account into a healthy state.

What you can do

There are some simple steps you can take that make these defenses even stronger. Visit our Security Checkup to make sure you have recovery information associated with your account, like a phone number. Allow Chrome to automatically generate passwords for your accounts and save them via Smart Lock. We’re constantly working to improve these tools, and our automatic protections, to keep your data safe.

Malware analysis sandbox aggregation: Welcome Tencent HABO!

VirusTotal is much more than just an antivirus aggregator; we run all sorts of open source/private/in-house tools to further characterize files, URLs, IP addresses and domains in order to highlight suspicious signals. Similarly, we execute a variety of backend processes to build relationships between the items that we store in the dataset, for instance, all the URLs from which we have downloaded a given piece of malware.

One of the pillars of the in-depth characterization of files and the relationship-building process has been our behavioural information setup. By running the executables uploaded to VirusTotal in virtual machines, we are often able to discover network infrastructure used by attackers (C&C domains, additional payload downloads, cloud config files, etc.), registry keys used to ensure persistence on infected machines, and other interesting indicators of compromise. Over time, we have developed automatic malware analysis setups for other operating systems such as Android or OS X.

Today we are excited to announce that, similar to the way we aggregate antivirus verdicts, we will aggregate malware analysis sandbox reports under a new project that we internally call "multisandbox". The first partner paving the way is Tencent, an existing antivirus partner that is integrating its Tencent HABO analysis system in order to contribute behavioral analysis reports. In their own words:

Tencent HABO was independently developed by Tencent Anti-Virus Laboratory. It can comprehensively analyze samples from both static information and dynamic behaviors, trigger and capture behaviors of the samples in the sandbox, and output the results in various formats.
One of the most exciting aspects of this integration is that Tencent's setup comprises analysis environments for Windows, Linux and Android. This means that it will also be the very first Linux ELF behavioral characterization engine. 

These are a couple of example reports illustrating the integration:


Whenever there is more than one sandbox report for a given file, you will see the pulsating animation in the analysis system selector drop-down.


Please note that sandbox partners are contributing both a summarized analysis and a detailed freestyle HTML report. On the far right of the analysis system selector bar you will see the sandbox's logo along with a link to the detailed HTML report. This is where partners can insert as much fine-grained information as wanted and can be as visually creative as possible, to emphasize what they deem important.


We hope you find this new project as exciting as we do. We already have more integrations in the pipeline and we are certain this will heavily contribute to identifying new threats and strengthening anti-malware defenses worldwide.

If you have a sandbox setup or develop dynamic malware analysis systems please contact us to join this effort.

Computer Security Tips: Stay Safe Online

In recent times cyber security has raised the level of awareness and public consciousness as never before. Both large corporations and big organizations try to take care of online security as much as they can. That’s why cyber criminals and hackers have focused more on smaller companies and single entrepreneurs. This awful tendency leads to […]

Enterprise Security Weekly #68 – Wrong Show

Logan Harris of SpotterRF joins us for an interview. In the news, Juniper enhances Contrail Cloud, Microsoft LAPS headaches, Flexera embraces open-source, local market deception technology, and more on this episode of Enterprise Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/ES_Episode68

Visit https://www.securityweekly.com/esw for all the latest episodes!

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Saving the Gothamist archives from journalism’s ‘billionaire problem’

Scott Heins
The Special Projects desk at the Freedom of the Press Foundation is releasing new software to create PDF archives of stories written by individual journalists formerly employed at DNAinfo or the Gothamist network of sites. These scripts, which we’ve dubbed gotham-grabber, enable anybody with the technical skills to run basic command line tools to create backups that would have been prohibitively time-consuming or impossible to complete before.

The necessity for these tools became apparent Thursday, when the billionaire owner of those sites abruptly shut them down after employees voted to unionize. That decision created two distinct losses: cities across the country lost a critical source of both continuing and historical news, and scores of working journalists faced the disappearance of years of their stories—and with it, the portfolios they would need to present in the hunt for new work.

One of these two crises appeared to be mitigated a day later when the archived sites were restored at their original locations. But the daylong scramble for copies of the stories drove home the precariousness of much online reporting and the need for a more comprehensive approach to solving that problem.

I witnessed that scramble firsthand when, in the hours after the Gothamist network and DNAInfo sites went offline, I initiated one of several efforts from technologists to help affected journalists retrieve copies of their stories from repositories such as the Internet Archive's invaluable Wayback Machine. The response was overwhelming.
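
To give a flavor of that kind of retrieval, here is a small sketch against the Wayback Machine's public CDX API. It is not the gotham-grabber code itself, and the byline URL pattern is a hypothetical example:

import requests

CDX = "http://web.archive.org/cdx/search/cdx"

def archived_urls(url_pattern: str, limit: int = 25):
    # Ask the Wayback Machine CDX index for snapshots matching a URL pattern.
    params = {"url": url_pattern, "output": "json", "limit": limit}
    rows = requests.get(CDX, params=params, timeout=30).json()
    if not rows:
        return
    header, entries = rows[0], rows[1:]
    for entry in entries:
        record = dict(zip(header, entry))
        yield f"https://web.archive.org/web/{record['timestamp']}/{record['original']}"

for snapshot in archived_urls("gothamist.com/author/*"):  # hypothetical byline path
    print(snapshot)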

Perhaps more surprising, though, was that even after the sites came back online the demand did not diminish. Over the past week, as a Freedom of the Press Foundation special project, I have provided dozens of affected journalists with tens of thousands of PDFs of their own work.

Today, we’re releasing the tools we created for that purpose as free software under the MIT license so they can be adapted for use with other sites or in other situations. With a few alterations, many journalists can use this tool to create an archived version of their entire portfolio.

Of course, these scripts only solve part of the archiving problem. Certainly, it’s essential that working journalists be able to continue working in the field even if their employer is forced to cease operations. That requires a persistent portfolio—and until that persistence is baked into the Web's infrastructure, this project may be only a temporary path forward.

Still, the fact that moneyed interests can take an archive of journalism offline represents a major censorship threat to a functioning free press. The archives for Gothamist and DNAinfo were restored after a widespread public backlash, but there's no guarantee that will be the case in the future—or even that these stories will all stay online indefinitely.

The "billionaire problem" facing the free press, memorably captured in the documentary Nobody Speak, is exacerbated by the relative fragility of Web media. It should not be the case that an attack on an outlet can so completely jeopardize its past.

At the Freedom of the Press Foundation, we've appreciated this opportunity to assist working journalists keeping a record of their own career, and we will continue to look for ways we can help solve the root problem.

Special thanks to Victoria Kirst for her assistance with gotham-grabber.


The FBI Wants Victims to Report DDoS Attacks

Local municipal police forces seldom have the resources to track down cyber criminals, but the U.S. federal government has resources, and they want to help stem the surge of distributed denial of service (DDoS) attacks. Last week the U.S. Federal Bureau of Investigation (FBI) issued an appeal to organizations that have been victims of DDoS attacks to share details and characteristics of those incidents with an FBI Field office and the IC3.

Some may argue that it’s not worth reporting incidents because it’s too difficult to identify the hackers. However, in some cases, law enforcement agencies successfully track down perpetrators. As a case in point, GovInfoSecurity reported that at the Information Security Media Group's Fraud and Data Breach Prevention Summit in London,

“Detective Constable Raymond Black, a cyber investigating officer for the Metropolitan Police Service, highlighted the upsides of sharing attack information with police. He also emphasized that sharing attack details need not lead to an investigation being launched.

Black noted that a small case - initially not reported to police - involving a September 2015 SQL injection attack and extortion demand against a London-based cigar retailer helped crack the case involving the October 2015 hack attack against London telecommunications giant TalkTalk.”

The FBI wants to know about large and small DDoS attacks, and it requests the following incident details from victims:

  • Identify the traffic protocol or protocols used in the DDoS attack - such as DNS, NTP, SYN flood;
  • Attempt to preserve netflow and attack-related packet capture;
  • Describe any extortion attempts or other threats related to the DDoS attack;
  • Share all correspondence with attackers "in its original, unforwarded format";
  • Provide information about themselves;
  • Estimate the total losses they suffered as a result of the DDoS attack;
  • Provide transaction details - if the victim paid a ransom or other payment in response to the attack - including the recipient's email address and cryptocurrency wallet address;
  • Describe what specific services and operations the attack impacted;
  • List IP addresses used in the DDoS attack.

There is no legal obligation to report attacks, so should organizations report every DDoS attack, large and small? That is an interesting question. No organization is completely immune to DDoS attacks, but some organizations undergo frequent attacks because they have 1) a large attack surface, 2) sensitive data that is worth stealing, or 3) a high profile that is subject to activist attacks. Some attacks are small and sub-saturating, intended to mask a more serious security breach. Others are volumetric attacks, intended to disable a website or business application. Gaming companies, financial service companies, hosting providers and Internet service providers are frequently targeted; if they reported every DDoS attack attempt, the FBI would be very busy, indeed.

No one wants to deal with the costs of a DDoS attack, or be bothered with reporting an incident to law enforcement. There’s no question that it’s better to mitigate an attack than be victimized by one. That’s why it makes sense to have an automated, real-time DDoS protection solution that not only detects and blocks DDoS traffic, but also provides sophisticated DDoS attack analytics.

For more information about how you can protect your network from DDoS attacks, contact us.

Hack Naked News #148 – November 7, 2017

Doug White and Jason Wood discuss improvements to IoT, fooling millions of Android users, Google Play bug bounties, school boards being hacked by pro-ISIS groups, and more with Jason Wood on this episode of Hack Naked News!

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode148

Visit http://hacknaked.tv for all the latest episodes!

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Startup Security Weekly #61 – Nice Ring

Paul and Michael discuss contribution margin, sales lessons from successful entrepreneurs, battling from idea to launch, and why the future will be won by the scientist. In our startup security news segment, we have updates from SailPoint, WatchGuard, ForeScout, Synopsys, and more on this episode of Startup Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/SSWEpisode61

Visit https://www.securityweekly.com/ssw for all the latest episodes!

 

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Paul’s Security Weekly #535 – Naughty Bits

Richard Moulds of Whitewood Security and Gadi Evron of Cymmetria join us for interviews, and Tim Medin of the SANS Institute delivers a tech segment on this episode of Paul’s Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/Episode535

Visit https://www.securityweekly.com for all the latest episodes!

 

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Great article! I love the text under the link on t…

Great article! I love the text under the link on the Google image.
"by stock options grant date. Excitement on the river, sea or ocean common...

All that work and given away by foolish text instead of something that fits the site description.

I like the breakdown. Nice to see the list of checks that it makes for the environment. Looks like we can defeat them around the world by just installing perl on every system. If it detects perl, it removes itself.

Thanks again for another great breakdown on the actions of this sample.

RickRolled by none other than IoTReaper

IoT_Reaper overview

IoT_Reaper, or the Reaper for short, is a Linux bot targeting embedded devices like webcams and home router boxes. The Reaper is somewhat loosely based on the Mirai source code, but instead of using a set of admin credentials, the Reaper tries to exploit device HTTP control interfaces.

It uses a range of vulnerabilities (a total of ten as of this writing) from the years 2013-2017. All of the vulnerabilities have been fixed by the vendors, but how well the actual devices are updated is another matter. According to some reports, we are talking about a ballpark of millions of infected devices.

In this blog post, we just want to add some minor details to the good reports already published by Netlab 360 [1], CheckPoint [2], Radware [3] and others.

Execution overview

When the Reaper enters a device, it takes some fairly drastic actions to disrupt the device's monitoring capabilities. For example, it simply deletes the folder “/var/log” with “rm -rf”.
Another action is to disable the Linux watchdog daemon, if present, by sending a specific IOCTL to the watchdog device:

[Screenshot: the watchdog IOCTL call]
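The post does not show the exact IOCTL value, but the standard way to turn the Linux watchdog off is WDIOC_SETOPTIONS with the WDIOS_DISABLECARD flag. A minimal Python sketch of that generic mechanism (our own code, not the Reaper's; constants taken from linux/watchdog.h) would look roughly like this:

  # Generic sketch of disabling the Linux watchdog; not code from the sample.
  import fcntl, os, struct

  WDIOC_SETOPTIONS = 0x80045704    # _IOR('W', 4, int) on x86/ARM
  WDIOS_DISABLECARD = 0x0001       # "turn the watchdog off" option flag

  fd = os.open("/dev/watchdog", os.O_WRONLY)
  fcntl.ioctl(fd, WDIOC_SETOPTIONS, struct.pack("I", WDIOS_DISABLECARD))
  os.write(fd, b"V")               # magic close so the timer does not fire on close
  os.close(fd)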

After the initialization, the Reaper spawns a set of processes for different roles:

  • Poll the command and control servers for instructions
  • Start a simple status reporting server listening on port 23 (telnet)
  • Start an apparently unused service on port 48099
  • Start scanning for vulnerable devices

All the child processes run with a random name, such as “6rtr2aur1qtrb”.

String obfuscation

The Reaper’s spawned child processes use a trivial form of string obfuscation, which is surprisingly effective. The main process doesn’t use any obfuscation, but all child processes use this simple scheme when they start executing. Basically, it’s a single-byte XOR (0x22), but the way the data is arranged in memory makes it a bit challenging to connect the data to code.

The main process allocates a table on the heap and copies the XOR-encoded data into it. Later, when a child process needs a particular encoded string, it decodes it in the heap and refers to the decoded data by a numeric index. After use, the data is XORed back into its obfuscated form.

The following screenshot illustrates the procedure:

[Screenshot: the string decoding routine]
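As a toy illustration of the scheme (our own Python, not code lifted from the sample), the whole obfuscation boils down to XORing each byte with 0x22; applying it a second time restores the encoded form:

  # Toy model of Reaper's single-byte XOR (0x22) string table; names are ours.
  KEY = 0x22

  def xor22(data: bytes) -> bytes:
      return bytes(b ^ KEY for b in data)

  # "Heap table" of encoded strings, referenced by numeric index:
  table = [xor22(b"weruuoqweiur.com"), xor22(b"/rx/hx.php")]

  def use_string(index: int) -> bytes:
      decoded = xor22(table[index])   # decode for use...
      table[index] = xor22(decoded)   # ...then XOR back to the obfuscated form
      return decoded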

Command and Control

The Reaper periodically polls a fixed set of C2 servers:

weruuoqweiur.com, e.hl852.com, e.ha859.com and 27.102.101.121

The control messages and replies are transmitted over clear-text HTTP, and the beacons use the following format:

  /rx/hx.php?mac=%s%s&type=%s&port=%s&ver=%s&act=%d

The protocol is very simple: there are only two major functions, shut down or execute an arbitrary payload using the system shell.
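The beacon format above is specific enough to hunt for in proxy or web-server logs. A rough sketch (the regex and helper are our own; the post does not specify how each field is encoded):

  # Sketch: flag Reaper-style beacon URIs in log lines; field encodings are assumed.
  import re

  BEACON_RE = re.compile(r"/rx/hx\.php\?mac=[^&]+&type=[^&]+&port=[^&]+&ver=[^&]+&act=\d+")

  def looks_like_reaper_beacon(log_line: str) -> bool:
      return BEACON_RE.search(log_line) is not None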

Port scanning

One of the child processes starts scanning for vulnerable victims. In addition to randomly generated IP addresses, the Reaper scans nine hard-coded addresses, for some unknown reason. Each address is probed first with an apparently random-looking set of ports, and then with a somewhat more familiar set:

80, 81, 82, 83, 84, 88, 1080, 3000, 3749, 8001, 8060, 8080, 8081, 8090, 8443, 8880, 10000

In fact, the random-looking ports are just a byte-swapped presentation of the port list above. For example, 8880 = 0x22b0 becomes 0xb022 = 45090. The reason for this is still unknown.

It is possible that the author was simply lazy and left out some endianness-handling code, or it may be some other error in the programming logic. Some IoT devices are big-endian, so the ports need to be byte-swapped before being used in socket code.
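The swap is easy to reproduce; a small Python sketch (our own code) applied to the familiar port list yields exactly the random-looking values:

  # Reproducing the byte-swap observed in the scanner's port list.
  FAMILIAR_PORTS = [80, 81, 82, 83, 84, 88, 1080, 3000, 3749, 8001,
                    8060, 8080, 8081, 8090, 8443, 8880, 10000]

  def byte_swap16(port: int) -> int:
      # e.g. 8880 = 0x22b0 -> 0xb022 = 45090
      return ((port & 0xff) << 8) | ((port >> 8) & 0xff)

  swapped_ports = [byte_swap16(p) for p in FAMILIAR_PORTS]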

Screenshot of the hard-coded list of ports:

[Screenshot: the hard-coded port list]

This is the list of hard-coded IP-addresses:

217.155.58.226
85.229.43.75
213.185.228.42
218.186.0.186
103.56.233.78
103.245.77.113
116.58.254.40
201.242.171.137
36.85.177.3

Exploitation

If the Reaper finds a promising victim, it next tries to send an HTTP-based exploit payload to the target. A total of ten different exploits have been observed so far, all related to the HTTP-based control interfaces of IoT devices. Here’s a list of the targeted vulnerabilities and the HTTP requests associated with them:

1 – Unauthenticated Remote Command Execution for D-Link DIR-600 and DIR-300

Exploit URI: POST /command.php HTTP/1.1

 

2 – CVE-2017-8225: exploitation of a custom GoAhead HTTP server in several IP cameras

GET /system.ini?loginuse&loginpas HTTP/1.1

 

3 – Exploiting the Netgear ReadyNAS Surveillance unauthenticated Remote Command Execution vulnerability

GET /upgrade_handle.php?cmd=writeuploaddir&uploaddir=%%27echo+nuuo+123456;%%27 HTTP/1.1

 

4 – Exploitation of Vacron NVRs through Remote Command Execution

GET /board.cgi?cmd=cat%%20/etc/passwd HTTP/1.1

 

5 – Exploiting an unauthenticated RCE to list user accounts and their clear text passwords on D-Link 850L wireless routers

POST /hedwig.cgi HTTP/1.1

 

6 – Exploiting a Linksys E1500/E2500 vulnerability caused by missing input validation

POST /apply.cgi HTTP/1.1

 

7 – Exploitation of Netgear DGN DSL modems and routers using an unauthenticated Remote Command Execution

GET /setup.cgi?next_file=netgear.cfg&todo=syscmd&curpath=/&currentsetting.htm=1cmd=echo+dgn+123456 HTTP/1.1

 

8 – Exploitation of AVTech IP cameras, DVRs and NVRs through an unauthenticated information leak and authentication bypass

GET /cgi-bin/user/Config.cgi?.cab&action=get&category=Account.* HTTP/1.1

 

9 – Exploiting DVRs running a custom web server with the distinctive HTTP Server header ‘JAWS/1.0’.

GET /shell?echo+jaws+123456;cat+/proc/cpuinfo HTTP/1.1

 

10 – Unauthenticated remote access to D-Link DIR-645 devices

POST /getcfg.php HTTP/1.1
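
Those request lines double as handy indicators of compromise. A rough sketch of scanning a web-server access log for them (the path list is taken from the exploits above; the log-format handling is our own assumption):

  # Sketch: grep an access log for the Reaper exploit request paths listed above.
  # Assumes the raw request appears somewhere in each log line; note that common
  # paths such as /apply.cgi or /getcfg.php may also appear in benign traffic.
  REAPER_EXPLOIT_PATHS = [
      "/command.php", "/system.ini?loginuse&loginpas",
      "/upgrade_handle.php?cmd=writeuploaddir", "/board.cgi?cmd=",
      "/hedwig.cgi", "/apply.cgi",
      "/setup.cgi?next_file=netgear.cfg", "/cgi-bin/user/Config.cgi?.cab",
      "/shell?echo+jaws", "/getcfg.php",
  ]

  def suspicious_lines(log_path: str):
      with open(log_path, errors="replace") as log:
          for line in log:
              if any(path in line for path in REAPER_EXPLOIT_PATHS):
                  yield line.rstrip()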

 

Other details and The Roll

  • The Reaper makes connectivity checks to Google's DNS server 8.8.8.8 and won’t run without that connectivity.
  • There is no hard-coded payload functionality in this variant. The bot supposedly receives its actual functionality, such as DDoS instructions, over the control channel.
  • The code contains an unused rickrolling link (yes, I was rickrolled).

Output from an IDAPython tool that dumps the encoded strings (the rickroll link is the second one):

[Screenshot: IDAPython output of decoded strings]

Sample hash

The analysis in this post is based on a single version of the Reaper (md5: 37798a42df6335cb632f9d8c8430daec).

References

[1] http://blog.netlab.360.com/iot_reaper-a-rappid-spreading-new-iot-botnet-en/
[2] https://research.checkpoint.com/new-iot-botnet-storm-coming/
[3] https://blog.radware.com/security/2017/10/iot_reaper-botnet/

The Honeynet Project will bring GSoC students to the annual workshop in Canberra

The Honeynet Project annual workshop is just a few days away: members and security folks from all over the world will gather in Canberra, Australia, November 15th-17th. Every year the Honeynet Project, with the support of Google, funds a group of students who were admitted to the Google Summer of Code program and successfully completed their project assignments. They get the chance to travel to the workshop and meet face to face with Honeynet members and seasoned experts in the security field.


Enterprise Security Weekly #67 – Extra Dessert

Bryan Patton of Quest Software joins us for an interview. In the news, security horror stories, making cloud native a reality, and updates from Ixia, Lacework, Francisco, and more on this episode of Enterprise Security Weekly! Full Show Notes: https://wiki.securityweekly.com/ES_Episode67

Visit https://www.securityweekly.com/esw for all the latest episodes!

 

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Build an ultra-secure Microsoft Exchange Server

With all the news about information leaks, hackers, and encryption, it’s natural for security administrators to ask how to build an ultra-secure Microsoft Exchange Server deployment that’s good enough for any purpose short of sending top secret information. I’ll show you how to build out an Exchange Server 2016 deployment in a Hyper-V virtual machine that is as secure as I can possibly make it while still remaining usable: locked down, encrypted both at rest and in transit, securely accessible from remote locations, and hardened against interlopers.

Specifically, I’ll explain how to build:

  • An Exchange Server. I am sure a lot of people will roll their eyes and say Microsoft Exchange can never be secured properly and that true security can only come from a custom-compiled Sendmail or Postfix. I take issue with that. First, those solutions might work if you are hosting a server for yourself and perhaps a couple of other people, but Exchange has valuable groupware features.

    Second, information lives not only in e-mail but also in calendars and contacts, and neither Sendmail nor Postfix addresses that in an integrated way. If you secure Exchange, you secure calendars, contacts, inboxes, journal entries, instant message conversation history, and more. Finally, most people prefer Outlook, and Outlook simply works best with Exchange.

7 Tips for Defending Your Network against DDoS Attacks

Today’s distributed denial of service (DDoS) attacks are almost unrecognizable compared with those of the early days, when most were simple, volumetric attacks intended to cause embarrassment and brief disruption. The motives behind attacks are increasingly unclear, the techniques are becoming ever more complex, and the frequency of attacks is growing rapidly. This is particularly true of automated attacks, which allow attackers to switch vectors faster than any human or traditional IT security solution can respond.

The combination of the size, frequency and duration of modern attacks represents a serious security and availability challenge for any online organization: minutes or even tens of minutes of downtime or latency significantly impact the delivery of essential services. Below are seven do’s and don’ts to help ensure that your network is protected from DDoS attacks.

  1. Document your DDoS resiliency plan. These resiliency plans should include the technical competencies, as well as a comprehensive plan that outlines how to continue business operations under the stress of a successful denial of service attack. An incident response team should establish and document methods of communication with the business, including key decision makers across all branches of the organization to ensure key stakeholders are notified and consulted accordingly.
  2. Recognize DDoS attack activity. Large, high-volume DDoS attacks are not the only form of DDoS activity. Short duration, low-volume attacks are commonly launched by hackers to stress test your network and find security vulnerabilities within your security perimeter. Understand your network traffic patterns and look to DDoS attack protection solutions that identify DDoS attack traffic in real-time, and immediately remove large and small DDoS attacks.
  3. Don’t assume that only large-scale, volumetric attacks are the problem. DDoS attackers are getting more sophisticated; their objective is often not to cripple a website, but to distract IT security staff with a low-bandwidth, sub-saturating DDoS attack that serves as a smokescreen for more nefarious network infiltrations, such as ransomware. Such attacks are typically short in duration (under 5 minutes) and low in volume, which means they can easily slip under the radar without being detected or mitigated by a traffic monitor, or even by some DDoS protection systems.
  4. Don’t rely on traffic monitoring or thresholds. Sure, you can notice when traffic spikes, but will you be able to distinguish good traffic from bad? And what would you do if you did see a spike? Could you block only the bad traffic, or would your network resources be overwhelmed anyway? Monitoring your traffic and setting threshold limits is not a form of protection, especially when you consider that small, sub-saturating attacks often slip under threshold triggers entirely (a minimal sketch of this blind spot follows this list).
  5. Don’t rely on an IPS or firewall. Neither an intrusion prevention system (IPS) nor a firewall will protect you. Even a firewall that claims to have anti-DDoS capabilities built-in has only one method of blocking attacks: the usage of indiscriminate thresholds. When the threshold limit is reached, every application and every user using that port gets blocked, causing an outage. Attackers know this is an effective way to block the good users along with the attackers. Because network and application availability is affected, the end goal of denial of service is achieved.
  6. Engage with a mitigation provider. Today many ISPs offer DDoS protection plans, either as a value-added service or as a premium service. Find out whether your ISP offers free or paid DDoS protection plans, but contact your ISP long before you are attacked; if you don’t have DDoS protection in place and are already under attack, your ISP probably cannot sign you up and block the DDoS traffic to your site on the spot. Alternatively, you could deploy an on-premises anti-DDoS appliance or a virtual machine (VM) instance. Whichever deployment option you choose, be sure to look for rich, real-time DDoS security event analytics and reporting along with automatic mitigation.
  7. Pair time-to-mitigation with successful attack protection. As you develop your resiliency plan and choose a method of DDoS protection, time-to-mitigation must be a critical factor in your decision-making process. Bear in mind that DDoS mitigation services can be a useful adjunct to an automated DDoS mitigation solution. However, a mitigation service alone is insufficient, because 1) before a service is engaged, someone or something (a computer or a human) must detect a DDoS attack in progress, and 2) it takes 20-30 minutes to redirect the “bad” traffic, allowing more nefarious security breaches to occur in the meantime. In the face of a DDoS attack, time is of the essence: waiting a few minutes, tens of minutes, or even longer for an attack to be mitigated is not sufficient to ensure service availability or security.
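To make the threshold blind spot from tip 4 concrete, here is a minimal, purely illustrative Python sketch (not any vendor's implementation) of a naive packets-per-second monitor; a sub-saturating attack that stays under the threshold simply never trips it:

  # Naive threshold monitor; illustrative only, the threshold value is hypothetical.
  THRESHOLD_PPS = 500_000   # alert only above half a million packets per second

  def interval_triggers_alert(packet_count: int, interval_seconds: float) -> bool:
      return packet_count / interval_seconds > THRESHOLD_PPS

  # A 2-minute, 50,000 pps sub-saturating attack:
  #   interval_triggers_alert(50_000 * 120, 120) -> False, so it goes unnoticed.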

Corero has been a leader in modern DDoS protection solutions for over a decade; to learn how you can protect your company, contact us.