Monthly Archives: November 2019

What sort of security software and backups do I need for a home business?

Allen wants to set up a small company working from home, and would like some advice

I’m looking to set up a small business working from home, and would like some advice on backup and security measures. I have an Office 365 account, so my main directory for saving documents will be OneDrive. I was looking to back up to a Synology NAS drive, perhaps to two separate hard drives as a precaution. Also, I currently just use Windows’ built-in security, but wondered whether I should look for something else.

Initially, it would just be me, but if things go well then I may have another two or three people helping. I’m assuming I can just scale up any security measures as the need arises. Allen

Technology manufacturers cater to two very large markets with different needs: home users and businesses. You’re about to enter the SoHo (small office, home office) market where home technologies dominate because most single traders don’t need proper business systems with all the extra costs and complications involved.


Twitter to clear out inactive accounts and free up usernames

Company has been criticised for handling of move it says will reduce risk from hacking

Twitter has announced it is to clear out inactive accounts, freeing up dormant usernames and reducing the risk of old accounts being hacked.

But the company is facing criticism for the way it has handled the announcement, with many concerned that the accounts of people who have died over the past decade will be removed with no way of saving their Twitter legacies.


Tips for Brits to Stay Secure on Black Friday

Brits plan to go to extreme lengths to grab a bargain this Black Friday, but are they leaving themselves exposed to cyber-criminals?
  • Brits are gearing up to grab a bargain this Black Friday and Cyber Monday, with 17% already considering pulling a sickie.
  • Over half of UK online shoppers will use a mobile device to shop for deals, but more than one in five (21%) will shop on unsecured smartphones or using open wifi networks (19%).
  • F-Secure is warning people to install security software on any devices they’re shopping online with, as last year the average volume of spam increased by 45% during Cyber Monday.
  • Brits are one and a half times more likely to be affected by financial fraud than people in other countries with 26% of people reporting they or someone in their family has been affected by credit card fraud, compared to an average 17% in other countries.
  • New research highlights the lengths Brits will go to grab a bargain online, even though they may be leaving themselves vulnerable to cybercrime.
Ahead of the big shopping weekend, 17% of those surveyed admitted that they would consider pulling a sickie during Black Friday and Cyber Monday; 32% stated they were already putting items in their basket in anticipation; while a dedicated 18% admitted they would shop on their mobile phone while on the toilet to secure the best deals*.

Over half of UK online shoppers use a mobile device to shop for deals on Black Friday and Cyber Monday**, but more than one in five (21%) will shop on unsecured smartphones (with no security software installed). And nearly 1 in 5 (19%) intend to shop on their commute or during their lunch break using free public wifi (17%)*, all of which puts them at greater risk from cybercriminals who are also looking to cash in.

With Black Friday a growing phenomenon in the UK, and now the busiest retail period of the year, it’s no surprise that last year GCHQ predicted consumer losses of around £30 million due to online fraud.

The Common Security Pitfalls with Online Shopping at Peak Times
F-Secure has found that the biggest security pitfalls people fall into during online shopping peaks are:
  • Not having any protection on their mobile devices with three in four (75%) admitting that they don’t have security software installed to protect themselves from spam*.
  • Making one fateful click on a fake promotional email that promises an incredible deal, with over two thirds (69%) admitting they click on the links in emails rather than going directly to the website.
  • Having easy to guess passwords or the same password across multiple account log-ins, with only three in ten (30%) people ensuring all their accounts have strong, unique passwords, and just under one in eight (12%) using a password manager*. 
Tom Gaffney, cybersecurity consultant at F-Secure said: “When using a mobile phone, people are more likely to be in a System 1 mode of thinking where their guard is down and they make fast, unconscious, automatic decisions. People are much more error prone in this state of mind and more susceptible to cybercrime like phishing attacks. Add to this the heightened number of phishing emails - an increase of 45% during Cyber Monday and 21% leading up to the New Year in 2018 - and you’ve created a hacker’s field day.”

He continued: “Hackers prey on our vulnerabilities around this time of year so we’re urging consumers to be extra vigilant and to use software protection online to keep themselves safe.”

Additional international research by F-Secure, a company which has three decades of experience stopping advanced cyber attacks, shows that Brits are one and a half times more likely to be affected by financial fraud than people in other countries. Just over a quarter (26%) of people reported that they or someone in their family have been affected by credit card fraud, compared to an average 17% in other countries. Additionally, almost twice as many Brits (9%) reported unauthorised access to their online bank, in comparison to an average of 5% in other countries.


Top Tips to Stay Secure this Black Friday
To help keep consumers safe when shopping online this Black Friday, F-Secure has shared its top five tips:

1. Forget your Passwords

If you can remember your passwords, they’re too weak. So what do you do with more than a dozen passwords you cannot remember? Use a password manager.
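
For the curious, the sort of password a manager generates and remembers for you can be sketched in a few lines of Python (a minimal illustration using the standard secrets module, not any particular manager’s algorithm):

    # Minimal sketch: the long, random, per-account passwords a password
    # manager generates and stores so you don't have to remember them.
    import secrets
    import string

    alphabet = string.ascii_letters + string.digits + string.punctuation
    password = "".join(secrets.choice(alphabet) for _ in range(20))
    print(password)  # unique per account, never reused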

2. Secure all your Accounts with Two-Factor Authentication

The best password in the world can still be compromised if it is not properly secured by the site you’ve trusted it with. Use two-factor authentication to secure your accounts whenever possible.
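
Under the hood, many two-factor prompts are time-based one-time codes. Here is a minimal sketch using the third-party pyotp library (the secret below is a placeholder, not a real enrolment secret):

    # Sketch of the time-based one-time codes (TOTP) behind many 2FA prompts.
    import pyotp

    totp = pyotp.TOTP("JBSWY3DPEHPK3PXP")  # placeholder base32 secret
    print(totp.now())  # six-digit code that rotates every 30 seconds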

3. If you’re going to Shop on your Smart Phone, use a Retailer’s App

On your phone you may be even more vulnerable to some basic scams. Since URLs are harder to view on a smaller screen, you could be tricked by the explosion of newer top-level domains, such as .family or .club. Stick to retailers’ official apps on your device and you won’t have to worry about checking those web addresses.

4. Use one Web Browser for all of your Shopping and Financial Transactions

It doesn't matter if it's Chrome, Firefox, Edge, Opera or Safari. Pick one browser and only use it for anything that involves shopping, banking or checking your financial accounts. And don’t touch it for anything else, especially social media.

5. Take a Break from Clicking on Links within Emails

Criminals take advantage of holiday distractions and expectations of gifts and packages being shipped to your home. Keep your focus by avoiding links in all emails and going directly to a retailer’s or shipper’s site.

Consumers can find out more about F-Secure’s online safety tools here: https://www.f-secure.com/gb-en/home

Consumer research for this article’s statistics was carried out by Vital Research and Statistics on behalf of F-Secure, surveying a sample of 2,005 UK adults. Research was carried out online between 1st November 2019 and 4th November 2019.

[VIDEO] How Veracode Leverages AWS to Eliminate AppSec Flaws at Scale

Veracode’s SaaS-native platform has scanned more than 10 trillion lines of code for security defects; that breaks down to more than 4 million applications, with 1 million of those scanned in the last year alone. By scanning in the Veracode platform, our customers benefit from the convenience of running programs, not systems, and developers free up much-needed processing power so they can continue writing code without any obstacles.

To deliver application security solutions with speed and accuracy at scale, Veracode needs massive computing power. Follow along as Veracode’s EMEA CTO, Paul Farrington, explains how the Veracode platform leverages Amazon Web Services to solve some of the hardest problems facing organizations today: securing software in an ever-changing digital landscape.


The Dark Web: What You Need to Know

Despite its negative connotations, the Dark Web is nothing to be afraid of. Few know that the Dark Web was actually conceived as a means of preserving privacy and security. However, this also enabled it to become a breeding ground for illegal activity.

There are certainly things to be distrustful of when navigating the Dark Web, and before venturing into it head-first, you should understand certain things about it.

What is the Dark Web?

The first thing you need to know is that there is no central database for the Dark Web. Instead, there are only peer-to-peer connections, which means that the data you are accessing is not stored in just one place.

Instead, it is found on thousands of different computers that are part of the network, so that no one can actually identify where the information is coming from. You can upload to the network, but when downloading, there is no telling where you’re getting the data from.

Why do people use the Dark Web?

There are all kinds of uses for the dark web. Some of them are downright nefarious; others, not so much.

  • Drug sales

Taking into consideration the anonymous nature of the Dark Web, it was only a matter of time before it came into use to sell illegal drugs. That inherent anonymity makes it the ideal avenue for this kind of transaction.

  • Illegal commerce

To say that you can buy anything on the Dark Web would be an understatement. Anything you can imagine, no matter how gruesome, can be purchased on the Dark Web, from guns to stolen data to organs.

  • Child porn

Is it really a surprise that child porn is rampant on the Dark Web? It’s one of the darker aspects of it, but the anonymous nature of it does lend itself to concealing horrible realities like this.

  • Communication

For all its negative connotations and activities, the Dark Web can also be a way to foster open communication that can sometimes save lives or make a change. Especially in cases where governments monitor online activity, having a place to speak out freely can be invaluable.

  • Reporting

The Dark Web can be used as an excellent source for journalists because sources can remain anonymous. Additionally, no one can track their activity, so it cannot attract consequences from authorities.

How to access

You may be wondering how you can access the Dark Web – after all, you can’t just Google it or access it in a regular browser.

Here are some of the aspects you need to keep in mind about accessibility, including the browser you need to use, the URLs, personal credentials you may need, and even acceptable currency, should you decide to make a purchase.

  • TOR browser

The most common way to access the Dark Web is via The Onion Router (TOR), the browser used by most people for this purpose. This ensures that your identity will remain concealed, as will your activity, because it encrypts everything.

You can obtain the TOR browser by downloading it from the official website. It’s as easy as installing it and running it like any normal program. And if you were worried about the legality of it – have no fear.

Both accessing the Dark Web and downloading the means to do so are entirely legal. While this can enable some pretty dark human behavior, it can also give us very necessary freedom to do positive things, as you will see. Not everyone uses it for nefarious purposes.

  • Exact URLs

Something that makes it difficult to navigate the Dark Web is the fact that its pages are not indexed by search engines. Anything you may be looking for will require an exact URL. That limits the number of people who can access the Dark Web, as well as the scope of the pages one can gain access to.

Unless you know exactly where to look, you may not have a lot of luck finding what you want. That can deter you from searching, or, on the contrary, it can drive you to seek out someone well versed in illegal activity who can help you out.

  • Criminal activity

It comes as no surprise that the Dark Web is a hotbed of criminal activity. No one is advocating that one pick up criminal undertakings in order to use the Dark Web. But generally speaking, the people who will most likely be looking to access URLs here are people who are engaged in all manner of criminal activity.

  • Bitcoin

All transactions on the Dark Web are completed via Bitcoin, as this type of currency is difficult to trace. That increases the degree of safety of the transaction, both for buyers and for sellers.

However, that does not mean that these transactions are always safe. There is a high degree of uncertainty that accompanies these transactions, regardless of what you are purchasing.

You might find that the person you are buying from is a scammer who can end up taking your money, but not sending over your product. While identities are protected, transactions are not, so a degree of care is always necessary.

The future of the Dark Web

While authorities are always making efforts to cut down on the number of sites present on the Dark Web, more are always created. In the end, it proves to be a bit of a wasted effort. The more websites get shut down, the more pop up in their place.

Does that mean that the Dark Web will continue in perpetuity? No one can say with any degree of certainty. It is entirely possible that people will seek refuge in the anonymity of the Dark Web as the degree of surveillance grows, or the opposite can happen and we can grow to accept surveillance as a means of ensuring a thin veneer of security.

Conclusion

The Dark Web will always be controversial, but it’s not nearly as scary as it seems. It’s true that it certainly conceals some illegal and immoral behavior, but it can also be used for good. The anonymous and untraceable aspects of it help it remain a somewhat neutral space where one can find the freedom to communicate, investigate, search, trade, make purchases, etc.


FIDL: FLARE’s IDA Decompiler Library

IDA Pro and the Hex-Rays decompiler are a core part of any toolkit for reverse engineering and vulnerability research. In a previous blog post we discussed how the Hex-Rays API can be used to solve small, well-defined problems commonly seen as part of malware analysis. Having access to a higher-level representation of binary code makes the Hex-Rays decompiler a powerful tool for reverse engineering. However, interacting with the Hex-Rays API and its underlying data sources can be daunting, making the creation of generic analysis scripts difficult or tedious.

This blog post introduces the FLARE IDA Decompiler Library (FIDL), FireEye’s open source library which provides a wrapper layer around the Hex-Rays API.

Background

Output from the Hex-Rays decompiler is exposed to analysts via an Abstract Syntax Tree (AST). Out of the box, processing a binary using the Hex-Rays API means iterating this AST using a tree visitor class which visits each node in the tree and issues a callback.  For every callback we can check to see what kind of node we are visiting (calls, additions, assignments, etc.) and then process that node. For more information on these constructs see our previous blog post.
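
To make the workflow concrete, here is a minimal visitor sketch using the Hex-Rays Python bindings; it simply counts call expressions in the function under the cursor. The script itself is illustrative, but the classes and constants are part of the standard ida_hexrays module:

    import idc
    import ida_hexrays

    class CallCounter(ida_hexrays.ctree_visitor_t):
        """Tree visitor that counts call expressions in a decompiled function."""

        def __init__(self):
            ida_hexrays.ctree_visitor_t.__init__(self, ida_hexrays.CV_FAST)
            self.ncalls = 0

        def visit_expr(self, expr):
            # Called once per expression node; check what kind of node this is
            if expr.op == ida_hexrays.cot_call:
                self.ncalls += 1
            return 0  # returning 0 continues the traversal

    cfunc = ida_hexrays.decompile(idc.here())  # decompile the current function
    visitor = CallCounter()
    visitor.apply_to(cfunc.body, None)         # walk the AST, issuing callbacks
    print("%d call expressions found" % visitor.ncalls)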

The Problem

While powerful, this workflow can be difficult to use when creating a generic API for several reasons:

  • The order in which nodes are visited is not always obvious from the decompiler output
  • When visiting a node, we have no context about where we are in the AST
  • Any problem requiring multiple steps needs multiple visitors or complicated logic in our callback function
  • The number of cases to handle when walking up or down the AST can grow exponentially

Handling each of these cases in a single visitor callback function is untenable, so we need a way to more flexibly interact with the decompiler.

FIDL

FIDL, the FLARE IDA Decompiler Library, is our implementation of a wrapper around the Hex-Rays API. FIDL’s main goal is to abstract away the lower level details of the default decompiler API. FIDL solves multiple problems:

  • Provides analysts an easy-to-understand API layer which can be used to write more complicated binary processing scripts
  • Abstracts away the minutiae of processing the AST
  • Provides helper implementations for commonly needed functionality when working with the decompiler
  • Provides documented examples on how to use various Hex-Rays APIs

Many of FIDL’s benefits are exposed to users via the controlFlowinator class. When constructing this object, FIDL parses the AST for us and provides a high-level summary of a function using information extracted via the decompiler, including APIs called, their parameters, and a summary of the function’s local variables and parameters.
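
As a rough usage sketch (the import path follows the FIDL repository layout, but the attribute names below are assumptions based on the description above; consult the repository for the exact API):

    import idc
    import FIDL.decompiler_utils as du

    # Build the high-level summary for the function under the cursor
    c = du.controlFlowinator(ea=idc.here(), fast=False)

    # Inspect what FIDL extracted via the decompiler
    # (attribute names assumed for illustration)
    print(c.args)    # the function's parameters
    print(c.lvars)   # its local variables
    print(c.calls)   # APIs called, with their parameters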

Figure 1 shows a subset of information available via a controlFlowinator next to the decompilation of the function.


Figure 1: Sample output available as part of a controlFlowinator

When parsing the AST during construction, the controlFlowinator also combines nodes representing the same logical expression into a more digestible form where each block translates roughly to one line of pseudocode. Figure 2 and Figure 3 show the AST and controlFlowinator representations of the same function.


Figure 2: The default rendering of the AST of a function


Figure 3: The control flow graph created by the controlFlowinator for the function shown in Figure 2

Compared to the default AST, this graph is organized by potential code paths that can be taken through a function. This gives analysts a much more logical structure to iterate when trying to determine context for a particular expression.

Readily available access to variables and API calls used in a function makes creating scripts to leverage the Hex-Rays API much more straightforward. In our previous blog post we introduced a script which uses the Hex-Rays API to rename global variables based on the parameter to GetProcAddress. Figure 4 shows this script rewritten using the FIDL API. This new script is both easier to understand and does not rely on manually walking the AST.


Figure 4: Script that uses the FIDL API to map all calls to GetProcAddress to global variables

Rather than calling GetProcAddress, malware commonly resolves needed imports manually by walking the Export Address Table (EAT) and comparing the hashes of a DLL’s exports against pre-computed values. Being able to quickly or automatically map these functions to their intended APIs makes it easier for analysts to identify which functions are worth analyzing. Figure 5 shows an example of how FIDL can be used to handle these cases. This script targets a DRIDEX sample with MD5 hash 7B82CF2CF9D08191C6828C3F62A2F914. This binary uses CRC32 with an XOR key of 0x65C54023 as the hashing algorithm during import resolution.
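
For reference, the hashing scheme described above is easy to reproduce. A minimal sketch (details such as name normalization are assumptions and may differ in the actual sample):

    import zlib

    XOR_KEY = 0x65C54023  # XOR key used by this DRIDEX sample

    def export_hash(name: bytes) -> int:
        """CRC32 of an export name, XORed with the sample's key."""
        return (zlib.crc32(name) ^ XOR_KEY) & 0xFFFFFFFF

    # Pre-compute hashes for a DLL's export names, then match them against
    # the constants found in the binary's import-resolution routine.
    for api in (b"LoadLibraryA", b"GetProcAddress", b"VirtualAlloc"):
        print("0x%08X  %s" % (export_hash(api), api.decode()))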


Figure 5: IDAPython script to automatically process and markup a DRIDEX sample

Running the above script results in output similar to what is shown in Figure 6, with comments labeling which functions are resolved.


Figure 6: The script in Figure 5 inserts comments into the decompiler output annotating the resolved functions

You can find FIDL in the FireEye GitHub repository.

Conclusion

While the Hex-Rays decompiler is a powerful source of information during reverse engineering, writing generic scripts and plugins using the default API is difficult and requires handling numerous edge cases. This post introduced the FIDL library, a wrapper around the Hex-Rays API, which addresses this by reducing the amount of low-level detail an analyst needs to understand in order to create a script leveraging the decompiler, and which should make the creation of these scripts much faster. In future blog posts we will publish more scripts and analysis utilizing this library.

The Challenges of UK Cyber Security Standards

Article by Matt Cable, VP Solutions Architect and MD Europe, Certes Networks

Public sector organisations in the UK are in the midst of changing cyber security regulations. In mid-2018, the Government, in collaboration with the NCSC, published a minimum set of cyber security standards. These standards are now mandated, along with a focus on continually “raising the bar”. The standards set minimum requirements for organisations to protect sensitive information and key operational services, which – given the way in which these services are increasingly dispersed – is driving significant changes in public sector network architecture and security.

In addition to setting today’s ‘minimum’ standards, however, the guidance also sets a target date of 2023 by which public sector organisations will be expected to have adopted a ‘gold-standard’ cyber security profile. Matt Cable, VP Solutions Architect and MD Europe, Certes Networks, therefore outlines the essential considerations that will help organisations select an encryption solution provider that can easily integrate into any network infrastructure as they migrate from Legacy MPLS to SDN or SD-WAN network architectures.

The Principles
For both public and private sector organisations, customer experience is key. From finance and utilities, to local authorities and smart cities, customer touchpoints are increasingly dispersed, remote and application-driven, necessitating a move from Legacy MPLS to SDN or SD-WAN. However, under the Government’s new minimum cyber security standards framework, ensuring sensitive information and key services are protected is a critical consideration.

The UK’s National Cyber Security Centre (NCSC) has therefore issued principles for cyber secure enterprise technology to organisations, including guidance on deploying and buying network encryption, with the aim of reducing risks to the UK by securing public and private sector networks. This guidance bears parallels with the US National Institute of Standard and Technology’s (NIST) Cybersecurity Framework and therefore applies equally to US and other federal organisations in a similar scenario.

Similar to the NIST framework, the NCSC guidance shares the same principle that networks should not be trusted. It recommends that to keep sensitive information protected, encryption should be used between devices, the applications on them, and the services being accessed. IPsec is the recommended method for protecting all data travelling between two points on a network to provide an understood level of security, with further guidance outlining a specific ‘gold-standard’ cipher suite profile known as PRIME.

The guidance is based on the network vendor being CAS(T) certified (CESG (Communications Electronics Security Group) Assured Services (Telecommunications)), which involves an independent assessment focused on the key security areas of service availability, insider attack, unauthorised access to the network and physical attack.

However, there are challenges.

Challenge #1 – Public Sector Adherence to CAS(T)
Many public sector organisations are no longer mandating CAS(T)-based services, and the risk appetite is therefore expected to be lowered, mainly to support the emergence of internet and SD-WAN suppliers’ network solutions. This is key, as the current NCSC-recommended Foundation standards for IPsec will expire in 2023, and users are being encouraged to move quickly off legacy platforms.

Challenge #2 – Impact to Cloud Service Providers and Bearer Networks
This guidance, such as the protection of information flows on dedicated links between organisations, also applies to cloud service providers and to the inter-data-centre connections within such providers’ networks.

The underlying bearer network is assumed not to provide any security or resilience. This means that any bearer network (such as the Internet, Wi-Fi 4/5G, or a commercial MPLS network) can be used. The choice of bearer network(s) will have an impact on the availability that an encrypted service can provide.

Challenge #3 – Partner Collaboration
NCSC explicitly states in its guidance that establishing trustworthy encrypted network links is not just about technology. It is also important that the management of these network links is carried out by appropriate individuals, performing their assigned management activities in a competent and trusted fashion, from a management system that protects the overall integrity of the system. Thus, for encryption solution providers, the partner’s service credentials impact how the end user may use the technology.

The Solution
IPsec helps protect the confidentiality and integrity of information as it travels across less-trusted networks, by implementing network-based encryption to establish Virtual Private Networks (VPNs).

Under PRIME principles, devices which implement cryptographic protection of information using IPsec should:

  • Be managed by a competent authority in a manner that does not undermine the protection they provide, from a suitable management platform
  • Be configured to provide effective cryptographic protection
  • Use certificates as a means of identifying and trusting other devices, using a suitable PKI
  • Be independently assured to Foundation Grade, and operated in accordance with published Security Procedures
  • Be initially deployed in a manner that ensures their future trustworthiness
  • Be disposed of securely
Keeping the network design simple is one of the most effective ways to ensure the network provides the expected security and performance. The use of certificates generated in a cryptographically secure manner allows VPN gateways and clients to successfully identify themselves to each other while helping to mitigate brute force attacks.
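
To illustrate that last point, the sketch below generates a P-256 key and a self-signed certificate with Python’s cryptography package. This is illustrative only: in a PRIME deployment, devices would enrol against a proper CA-backed PKI rather than self-sign, and the hostname here is a placeholder.

    from datetime import datetime, timedelta

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # P-256 private key for a hypothetical VPN gateway
    key = ec.generate_private_key(ec.SECP256R1())

    # Self-signed for illustration; a real gateway would submit a CSR to a CA
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"vpn-gw-1.example.org")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())
        .not_valid_after(datetime.utcnow() + timedelta(days=365))
        .sign(key, hashes.SHA256())
    )
    print(cert.subject)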

Conclusion
There are many encryption solutions to help agencies and federal governments who want to move from Legacy MPLS to SDN or SD-WAN. Layer 4 encryption, for example, can integrate easily into any network and encrypt data in transit without disrupting performance or replacing the current network architecture.

Selecting a provider that can offer a PRIME-compliant solution, such as Layer 4 encryption, is key to conforming to both today’s and tomorrow’s cybersecurity standards. And with the NCSC starting to treat all networks as untrusted (especially those of agencies using the internet), PRIME is becoming the gold standard against which the NCSC will measure regulatory compliance.

Therefore, it is important to consider a vendor that can offer a security solution that is not only compliant but is simple and uncomplicated, minimising disruption, resources and costs.

Expanding the Android Security Rewards Program

The Android Security Rewards (ASR) program was created in 2015 to reward researchers who find and report security issues to help keep the Android ecosystem safe. Over the past 4 years, we have awarded over 1,800 reports, and paid out over four million dollars.

Today, we’re expanding the program and increasing reward amounts. We are introducing a top prize of $1 million for a full chain remote code execution exploit with persistence which compromises the Titan M secure element on Pixel devices. Additionally, we will be launching a specific program offering a 50% bonus for exploits found on specific developer preview versions of Android, meaning our top prize is now $1.5 million.

As mentioned in a previous blog post, in 2019 Gartner rated the Pixel 3 with Titan M as having the most “strong” ratings in the built-in security section out of all devices evaluated. This is why we’ve created a dedicated prize to reward researchers for exploits found to circumvent the secure element’s protections.

In addition to exploits involving Pixel Titan M, we have added other categories of exploits to the rewards program, such as those involving data exfiltration and lockscreen bypass. These rewards go up to $500,000 depending on the exploit category. For full details, please refer to the Android Security Rewards Program Rules page.

Now that we’ve covered some of what’s new, let’s take a look back at some milestones from this year. Here are some highlights from 2019:

  • Total payouts in the last 12 months have been over $1.5 million.
  • Over 100 participating researchers have received an average reward amount of over $3,800 per finding (46% increase from last year). On average, this means we paid out over $15,000 (20% increase from last year) per researcher!
  • The top reward paid out in 2019 was $161,337.

Top Payout

The highest reward paid out to a member of the research community was for a report from Guang Gong (@oldfresher) of Alpha Lab, Qihoo 360 Technology Co. Ltd. This report detailed the first reported 1-click remote code execution exploit chain on the Pixel 3 device. Guang Gong was awarded $161,337 from the Android Security Rewards program and $40,000 from the Chrome Rewards program, for a total of $201,337. The $201,337 combined reward is also the highest reward for a single exploit chain across all Google VRP programs. The Chrome vulnerabilities leveraged in this report were fixed in Chrome 77.0.3865.75 and released in September, protecting users against this exploit chain.

We’d like to thank all of our researchers for contributing to the security of the Android ecosystem. If you’re interested in becoming a researcher, check out our Bughunter University for information on how to get started.

Starting today, November 21, 2019, the new rewards take effect. Any reports that were submitted before November 21, 2019 will be rewarded based on the previously existing rewards table.

Happy bug hunting!

How Much is Your Data Worth on the Dark Web?

You may not know much about the dark web, but it may know things about you.

What is the Dark Web?

The dark web is a part of the internet that is not visible to search engines. What makes the dark web dark? It allows users to anonymise their identity by hiding their IP addresses, which makes those using the dark web nearly impossible to identify.

Only 4% of the internet is visible to the general public, which means a vast 96% of the internet is made up of the deep web. It’s important to note here that the dark web is just a small section of the internet, but it’s a powerful one.

How much are your bank details worth?
The dark web is full of stolen personal bank credentials. It’s common to see MasterCard, Visa, and American Express credentials on the dark web from a variety of different countries.

Credit card data in the US, UK, Canada and Australia increased in price anywhere from 33% to 83% between 2015 and 2018. The average price for a UK Visa or Mastercard was £9 in 2015, rising to £17 in 2018, an increase of approximately 83%. Bank accounts that can transfer funds in stealth mode to United Kingdom banks are considerably more expensive: an account with a £12,500 balance goes for around £700.


How much are your subscription services worth?
The sale value of your PayPal credentials depends on the available account balance. PayPal details can be sold for as little as £40, rising to between £820 and £2,500 for an available balance of £6,580.

Your Amazon, British Airways, Facebook, Fortnite and Netflix logins are also available on the dark web. These can go for around £7, which is surprising given they hold a variety of information about your banking and identity. Stolen hotel loyalty program and auction accounts can cost as much as £1,150 due to the extensive information they provide the buyer.

Are you surprised to learn that even reward programs and viewing subscriptions can be purchased on dark web markets?


How much is your whole identity worth on the dark web?
The average modern person now has many online accounts. These can range from email and Facebook to online shopping, food delivery and banking. Combine all of those accounts and the typical internet user's identity is worth around £987 to hackers. The personal loss for victims is of course much higher.

Jade works for Total Processing, an advanced independent payment gateway provider that answers only to its customers.

GTP Security: Securing 5G Networks with a GTP Firewall

Anthony Webb, EMEA Vice President at A10 Networks

It is often written that 5G will usher in the Fourth Industrial Revolution and change the economy. The speed and capacity that 5G networks promise to deliver have the potential to make 5G an indispensable technology. Verizon estimated that by 2035, 5G “will enable £10.5 trillion of global economic output and support 22 million jobs worldwide”.

Therefore, 5G is not only important because it has the potential to support millions of devices at ultrafast speeds, but also because it has the potential to transform the lives of people around the world. But with this new opportunity also comes higher security risks as cyberattacks grow in sophistication and volume and use lightly protected mobile and IoT devices in their botnets or targeted attacks.

GTP today

Since the early days of 3G or 2.5G, GPRS Tunnelling Protocol (GTP) has been used to carry traffic and signalling through mobile networks and has continued to do so in 4G/LTE and recent 5G non-standalone architectures. But GTP was never designed with security in mind and therefore has no inherent security mechanisms.

As traffic, devices and interconnection partners surge, so does the use of GTP. The transition to 5G is happening, and most operators will opt to deploy 5G in stages, using a common 4G core as they build out the 5G RAN. As a result, threats to 4G core elements from GTP-based attacks will still be present during this hybrid period. This is where operators must now include a GTP firewall as part of their current network security posture and as they evolve the network to 5G.

GTP vulnerabilities have been well known by the industry and documented in GSMA reports. What is required is a GTP firewall which stops attackers from trying to exploit GTP vulnerabilities on the interfaces exposed to the network. These attacks target both mobile subscribers and mobile network infrastructure. The most common GTP security issues include confidential data disclosures, denial of service, network overloads, and a range of fraud activities. In 5G, additional security measures have been added, but GTP will continue to play an important role, especially in roaming.
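
To see why a dedicated firewall matters, consider how little effort it takes to forge GTP-C signalling. A minimal sketch using Scapy’s contrib GTP layer (the destination address is a placeholder for an exposed GTP-C endpoint):

    from scapy.all import IP, UDP
    from scapy.contrib.gtp import GTPHeader, GTPEchoRequest

    # GTP itself has no authentication, so a well-formed echo request
    # towards an exposed interface is trivial to construct.
    pkt = (
        IP(dst="198.51.100.10")          # placeholder GTP-C endpoint
        / UDP(sport=2123, dport=2123)    # GTPv1-C runs over UDP port 2123
        / GTPHeader(teid=0)              # echo requests carry TEID 0
        / GTPEchoRequest()
    )
    pkt.show()  # inspect; Scapy's send(pkt) would transmit it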

What is required?

The simple answer is scalable security. Mobile operators face the challenge of securing roaming and EPC interfaces where GTP protocols are used extensively in and are known to have vulnerabilities that can be readily exploited by malicious actors. As vulnerable devices and partners expand, so does the attack surface available for malicious purposes. Operators need to meet the growing security challenges while also providing a seamless subscriber experience.

As they move towards 5G, with likely a 4G common core for many years, operators will need to tackle the risks inherent in GTP, as threats continue to grow against a much larger volume of traffic and applications. Roaming traffic, with its high complexity and large number of interconnect partners and hubs, can be an especially vulnerable and attractive target for malicious actors.

Common Threats

The most common threats from GTP-based attacks include the following:

  • Eavesdropping: intercepting and snooping on GTP traffic to gain valuable and confidential subscriber information
  • Fraud: attackers can use services at the expense of the operator or another subscriber, using invalid or hijacked IMSIs
  • Injection of malicious GTP messages: disrupting sessions and creating DDoS conditions
  • Subscriber denial of service: spoofing subscriber IDs to generate malicious messages that cause service disruption for an individual subscriber
  • Message suppression and modification: preventing message delivery or allowing malicious content delivery, disrupting service
  • Network overload/DDoS: sending malicious, malformed or invalid signalling packets that overwhelm network elements or cause vulnerable elements to fail
GTP Firewall 
A GTP firewall provides security and scalability, protecting the mobile core against the GTP-based threats mentioned above across GTP interfaces in the access network and the GRX/IPX interconnect, supporting uninterrupted operations. The GTP firewall can be inserted into multiple interfaces carrying GTP traffic. The primary use case is insertion on the S5-Gn and S8-Gp (roaming firewall) interfaces.

Using a built-in FIDO authenticator on latest-generation Chromebooks



We previously announced that starting with Chrome 76, most latest-generation Chromebooks gained the option to enable a built-in FIDO authenticator backed by hardware-based Titan security. For supported services (e.g. G Suite, Google Cloud Platform), enterprise administrators can now allow end users to use the power button on these devices to protect against certain classes of account takeover attempts. This feature is disabled by default; administrators can enable it by changing the DeviceSecondFactorAuthentication policy in the Google Admin console.

Before we dive deeper into this capability, let’s first cover the main use cases FIDO technology solves, and then explore how this new enhancement can satisfy an advanced requirement that can help enterprise organizations.

Main use cases
FIDO technology aims to solve three separate use cases for relying parties (otherwise referred to as Internet services) by helping to:
  1. Prevent phishing during initial login to a service on a new device;
  2. Reverify a user’s identity to a service on a device they’ve already logged in to; 
  3. Confirm that the device a user is connecting from is still the original device they logged in from previously. This is typically needed in the enterprise. 
Security-savvy professionals may interpret the third use case as a special instance of use case #2. However, there are some differences, which we break down a bit further below:
  • In case #2, the problem that FIDO technology tries to solve is re-verifying a user’s identity by unlocking a private key stored on the device.
  • In case #3, FIDO technology helps to determine whether a previously created key is still available on the original device without any proof of who the user is.
How use case #1 works: Roaming security keys 

Because the whole premise of this use case is one in which the user logs in on a brand new device they’ve never authenticated before, it requires the user to have a FIDO security key (a removable, cross-platform, or roaming authenticator). By this definition, a built-in FIDO authenticator on Chrome OS devices would not be able to satisfy this requirement, because it would not be able to help verify the user’s identity without being set up previously. Upon initial log-in, the user’s identity is verified together with the presence of a security key (such as Google’s Titan Security Key) previously tied to their account.

Titan Security Keys 


Once the user is successfully logged in, trust is conferred from the security key to the device on which the user is logging on, usually by placing a cookie or other token on the device in order for the relying party to “remember” that the user already performed second-factor authentication on this device. Once this step is completed, it is no longer necessary to require a physical second factor on this device, because the presence of the cookie signals to the relying party that this device is to be trusted.

Optionally, some services might require the user to still periodically verify that it’s the correct user in front of the already recognized device (for example, particularly sensitive and regulated services such as financial services companies). In almost all cases, it shouldn’t be necessary for the user, in addition to providing their knowledge factor (such as a password), to re-present their second factor when re-authenticating, as they’ve already done so during initial bootstrapping.

Note that on Chrome OS devices, your data is encrypted when you’re not logged on, which further protects your data against malicious access.

How use case #2 works: Re-authentication 

Frequently referred to as “re-authentication,” use case #2 allows a relying party to reverify that the same user is still interacting with the service from a previously verified device. This mainly happens when a user performs an action that’s particularly sensitive, such as changing their password or when interacting with regulated services, such as financial services companies. In this case, a built-in biometric authenticator (e.g. a fingerprint sensor or PIN on Android devices) can be registered, which offers users a more convenient way to re-verify their identity to the service in question. In fact, we have recently enabled this use case on Android devices for some Google services.

Additionally, there are security benefits to this particular solution: the relying party no longer has to rely only on a previously issued cookie, but can now verify both that the right user is present (by means of a biometric) and that a particular private key is available on this particular device. Sometimes this promise is made based on key material stored in hardware (e.g. Titan security in Pixel Slate), which can be a strong indicator that the relying party is interacting with the right user on the right device.

How use case #3 works: Built-in device authenticator

The challenge of verifying that a device a user has previously logged in on is still the device from which they’re interacting with the relying party is what the built-in FIDO authenticator on most latest-generation Chromebooks is able to help solve.

Earlier we noted that upon initial log-in, relying parties regularly place cookies or tokens on a user’s device, so they can remember that a user has previously authenticated. Under some circumstances, such as when there’s malware present on a device, it might be possible for these tokens to be exfiltrated. Asking for the “touch of a built-in authenticator” at regular intervals helps the relying party know that the user is still interacting from a legitimate device which has previously been issued a token. It also helps verify that the token has not been exfiltrated to a different device, since FIDO authenticators offer increased protection against exfiltration of the private key, which is usually housed in the hardware itself. For example, in the case of most latest-generation Chromebooks (e.g. Pixel Slate), it’s protected by hardware-based Titan security.

Pixel Slate devices are built with hardware-based Titan security 

In the case of our implementation on Chrome OS, the FIDO keys are also scoped to the specific logged in user, meaning that every user on the device essentially gets their own FIDO authenticator that can’t be accessed across user boundaries. We expect this use case to be particularly useful in enterprise environments, which is why the feature is not enabled by default. Administrators can enable it in the Google Admin console.

We still highly recommend that users have a primary FIDO security key, such as a Titan Security Key or an Android phone. This should be used in conjunction with a “FIDO re-authentication” policy, which is supported by G Suite.

Enabling the built-in FIDO authenticator in the Google Admin console

Even though it’s technically possible to register the built-in FIDO authenticator on a Chrome OS device as a “security key” with services, it’s best to avoid doing so, as users run an increased risk of account lockout if they ever need to sign in to the service from a different machine.

Supported Chromebooks
Starting with Chrome 76, most latest-generation Chromebooks gained the option to enable a built-in FIDO authenticator backed by hardware-based Titan security. To see if your Chromebook can be enabled with this capability, you can navigate to chrome://system and check the “tpm-version” entry. If “vendor” equals “43524f53”, then your Chromebook is backed by Titan security.
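
That vendor value is simply ASCII encoded as hex, as a quick check shows:

    # "43524f53" decodes to b'CROS'
    print(bytes.fromhex("43524f53"))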

Navigating to chrome://system on your Chromebook

Summary

In summary, we believe that this new enhancement can provide value to enterprise organizations that want to confirm that the device a user is connecting from is still the original device from which a user logged in from in the past. Most users, however, should be using roaming FIDO security keys, such as Titan Security Key, their Android phone, or security keys from other vendors, in order to avoid account lockouts.

5 Exciting players in the Breach and Attack Simulation (BAS) Cyber Security Category

Breach and Attack Simulation is a new concept that helps organizations evaluate their security posture in a continuous, automated, and repeatable way. This approach allows for the identification of imminent threats, provides recommended actions, and produces valuable metrics about cyber-risk levels. Breach and attack simulation is a fast-growing segment within the cybersecurity space, and it provides significant advantages over traditional security evaluation methods, including penetration testing and vulnerability assessments.

Going over the players in this industry, it is clear that the BAS category includes a number of different approaches with the common goal of providing the customer with a clear picture of their actual vulnerabilities and how to mitigate them.

CyberDB has handpicked in this blog a number of exciting and emerging vendors. These players are (in alphabetical order): Cymulate, Pcysys, Picus Security, Randori and XM Cyber.

These companies have a number of characteristics in common, including a very fast time to market, a successful management team and strong traction. In addition, all of them have managed to raise Series A or B funding over the last 16 months, ranging from $5M to $32M.

Other notable players, ranging from incumbents to emerging vendors, include Rapid7, Qualys, ThreatCare, AttackIQ, GuardiCore, SafeBreach, Verodin (recently acquired by FireEye) and WhiteHaX.

Gartner defines Breach & Attack Simulation (BAS) technologies as tools “that allow enterprises to continually and consistently simulate the full attack cycle (including insider threats, lateral movement, and data exfiltration) against enterprise infrastructure, using software agents, virtual machines, and other means”.

What makes BAS special is its ability to provide continuous and consistent testing at limited risk, and the fact that it can be used to alert IT and business stakeholders to existing gaps in the security posture, or to validate that security infrastructure, configuration settings and detection/prevention technologies are operating as intended. BAS can also help validate whether security operations and SOC staff can detect specific attacks when used as a complement to red team or penetration testing exercises.

CyberDB strongly recommends exploring and embedding BAS technologies as part of the overall modern cyber security technology stack.


Cymulate was founded by an elite team of former IDF intelligence officers who identified frustrating inefficiencies during their cyber security operations. From this came their mission to empower organizations worldwide and make advanced cyber security as simple and familiar as sending an e-mail. Since the company’s inception in 2016, Cymulate’s platform has been recognized as a Gartner “Cool Vendor” in Application and Data Security (2018) and has received dozens of industry awards to date. Today, Cymulate has offices in Israel, the United States, the United Kingdom, and Spain. The company has raised $11 million with backing from investors Vertex Ventures, Dell Technologies Capital, Susquehanna Growth Equity, and Eyal Gruner.

Cymulate is a SaaS-based breach and attack simulation platform that makes it simple to test, measure and optimize the effectiveness of your security controls any time, all the time. With just a few clicks, Cymulate challenges your security controls by initiating thousands of attack simulations, showing you exactly where you’re exposed and how to fix it—making security continuous, fast and part of every-day activities.

Fully automated and customizable, Cymulate challenges your security controls against the full attack kill chain with thousands of simulated threats, both common and novel. Testing both internal and external defenses, Cymulate shortens test cycles, provides 360° visibility and actionable reporting, and offers a continuous counter-breach assessment technology that empowers security leaders to take a proactive approach to their cyber stance, so they can stay one step ahead of attackers. Always.

With a Research Lab that keeps abreast of the very latest threats, Cymulate proactively challenges security controls against the full attack kill chain, allowing hyper-connected organizations to avert damage and stay safe.

Overtaking manual, periodic penetration testing and red teaming, breach and attack simulation is becoming the most effective method to prepare for and predict oncoming attacks. Security professionals realize that to cope with evolving attackers, a continuous and automated solution is essential to ensure optimal non-stop security.

Cymulate is trusted by hundreds of companies worldwide, from small businesses to large enterprises, including leading banks and financial services. They share our vision—to make it easy for anyone to protect their company with the highest levels of security. Because the easier cybersecurity is, the more secure your company—and every company—will be.


Established in 2015 with offices in Israel, Boston, London and Zurich, Pcysys delivers an automated network penetration testing platform that assesses and helps reduce corporate cybersecurity risks. Hundreds of security professionals and service providers around the world use Pcysys to perform continuous, machine-based penetration tests that improve their immunity against cyber-attacks across their organizational networks. With over 60 global enterprise customers across all industries, Pcysys is the fastest-growing cybersecurity startup in Israel.

The Problem – Missing Cyber Defense Validation

We believe that penetration testing, as it is known today, is becoming obsolete. Traditionally, penetration testing has been performed manually by service firms, deploying expensive labor to uncover hidden vulnerabilities and produce lengthy reports, with little transparency along the way. Professional services-based penetration testing is limited in scope, time-consuming and costly. It represents a point-in-time snapshot, and cannot comply with the need for continuous security validation within a dynamic IT environment.

PenTera™ – One-Click Penetration Testing

Requiring no agents or pre-installations, Pcysys’s PenTera™ platform uses an algorithm to scan and ethically penetrate the network with the latest hacking techniques, prioritizing remediation efforts with a threat-facing perspective. The platform enables organizations to focus their resources on the remediation of the vulnerabilities that take part in a damaging “kill chain” without the need to chase down thousands of vulnerabilities that cannot be truly exploited towards data theft, encryption or service disruption.

Benefits

  • Continual vigilance – the greatest benefit of employing the PenTera platform is the ability to continually validate your security from an attacker’s perspective and grow your cyber resilience over time. Pen-testing is becoming a daily activity.
  • Reduce external testing costs – with PenTera, you can minimize cost and dependency on external risk validation providers. While in some cases an annual 3rd-party pen-test is still required for compliance reasons, it can be reduced in scope and spend.
  • Test against the latest threats – as the threat landscape evolves, it is crucial to incorporate the latest threats into your regular pen-testing practices. Your PenTera subscription assures you stay current.

Differentiators

  •  Agentless – zero agent installations or network configurations.
  • Real Exploits, No Simulations – PenTera performs real-time ethical exploitations.
  • Automated – press ‘Play’ and get busy doing other things while the penetration test progresses.
  • Complete Attack Vector Visibility – every step in the attack vector is presented and reported in detail to explain the attack “kill chain”.


Company

Founded in 2014, Picus has more than 100 customers and has been backed by EarlyBird, Social Capital and ACT-VC. Headquartered in San Francisco, Picus has offices in London and Ankara to serve its global customer base.

Picus Security’s customers include leading mid-sized companies and enterprises, across LATAM, Europe, APAC and the Middle East regions.

Solution

Picus continuously validates your security operations to harden your defenses. Picus empowers organizations to identify imminent threats, take the most viable defensive actions, and helps businesses understand cyber risks so they can make the right decisions.

Picus Security is one of the leading Breach and Attack Simulation (BAS) vendors featured in several Gartner reports such as BAS Market Report, Market Guide For Vulnerability Assessment and Hype Cycle for Threat Facing Technologies. Picus has recently been recognized as a Cool Vendor in Security and Risk Management, 2H19 by Gartner. Picus was distinguished as one of the top 10 innovative cyber startups by PwC and the most innovative Infosec Startup of the year by Cyber Defense Magazine.

Unlike penetration testing methods, Picus validates security effectiveness continuously and in a repeatable manner that is completely risk-free for production systems. This approach helps customers identify imminent threats, take action and get a continuous view of actual risk. Picus customers also maximize ROI from existing security tools, get continuous metrics on their security level, and can demonstrate the positive impact of security investments to the business.

Picus provides measurable context about the descriptions, behavior and methods of adversaries by running an extensive set of cyber-threats and attack scenarios on a 24/7 basis, in production networks, on a fully risk-free and false-positive-free platform. Picus constantly assesses organizational readiness for adversarial actions, prioritizes findings based on adversarial context, and helps drive immediate mitigation of imminent threats. The platform offers:

  1. An in-depth, full-coverage threat database with more than 7,600 real-world payloads that are updated daily, plus adversary-based attack scenarios and techniques mapped to the MITRE ATT&CK framework, covering web application attacks, exploitation, malware, data exfiltration and endpoint scenarios.
  2. More than 34,000 mitigation signatures and 10 security vendor partnerships, so analysts can gain insight into the most viable defense actions in response to adversaries, with immediate mitigation validation.
  3. Actionable remediation recommendations tailored to organizations and their defense stacks, focusing only on attacks with mitigation solutions.


Randori’s mission is to build the world’s most authentic, automated attack platform, to help security teams “train how they fight”. Founded in 2018 by a former Carbon Black executive and leading red teamers, Randori provides a SaaS platform that allows security teams of all maturity levels to spar against an authentic adversary. Customers are testing their incident response, identifying weaknesses (not just vulnerabilities), and, as a result, producing justifiable ways to ask for further investment.

Randori is based in Waltham, MA with offices in Denver, CO. Known customers include Houghton Mifflin Harcourt, Greenhill & Co, Carbon Black, RapidDeploy, and ClickSoftware.

The Randori platform consists of two products, Recon and Attack.

Recon provides comprehensive attack surface management powered by black-box discovery. Customers can “see” how attackers perceive their company from the outside. This is especially useful for enterprise organizations with a changing network footprint, such as during M&A, high seasonality, or cloud migration. Their approach differs from “internet-wide scan” methods, which can produce false positives and are not actionable. Recon results are prioritized using a Target Temptation engine, which takes into account factors like known weaknesses, post-exploitation potential, and the cost of action to an attacker. Recon is available for free trial; a complimentary Recon report can be provided to any company over 1,000 employees.

Attack provides authentic adversary emulation across all stages of the kill chain. Customers choose from objective-based runbooks that the platform uses to gain initial access, maintain persistence, and move laterally across the network. Risk is assessed across vulnerabilities, misconfigurations, and credentials, the same ways attackers breach companies. Attack is available to select early-access partners, with access broadening in 2020.

The Randori differentiator is authenticity: to get started with the platform, only a single email address is needed to understand one’s attack surface and put it to the test. The platform seeks not to “validate existing controls” or “detect MITRE ATT&CK techniques”, but to help security teams train against a real adversary.


XM Cyber, a multi-award-winning breach and attack simulation (BAS) leader, was founded in 2016 by top security executives from the elite Israeli intelligence sector. XM Cyber’s core team comprises highly skilled and experienced veterans of Israeli intelligence with expertise in both offensive and defensive cyber security.

Headquartered in the Tel Aviv metro area, XM Cyber has offices in the US, UK, Israel and Australia, with global customers including leading financial institutions, critical infrastructure organizations, healthcare, manufacturers, and more.

HaXM by XM Cyber is the first BAS platform to simulate, validate and remediate attackers’ paths to your critical assets 24×7. HaXM’s automated purple teaming aligns red and blue teams, providing the full, realistic advanced persistent threat (APT) experience on the one hand while delivering vital, prioritized, actionable remediation on the other. Addressing real user behavior and exploits, the full spectrum of scenarios is aligned to your organization’s own network to expose blind spots, and is executed safely using the most up-to-date attack techniques, without affecting network availability or user experience.

By continuously challenging the organizational network with XM Cyber’s platform, organizations gain clear visibility of their cyber risks and an efficient, data-driven, actionable remediation plan aimed at the most pressing issues.

  • The HaXM simulation and remediation platform continuously exposes attack vectors, from breach point to any organizational critical asset so you always know the attack vectors to your crown jewels.
  • The continuous loop of automated red teaming is completed by ongoing and prioritized actionable remediation of security gaps, so you know how to focus your resources on the most critical issues.
  • The platform addresses real user behavior, poor IT hygiene and security exploits to expose the most critical blind spots so that you improve your IT hygiene and practices.

Even when an organization has deployed and configured modern security controls, applied patches and refined policies, there is a plethora of ways hackers can still infiltrate the system and compromise critical assets. XM Cyber is the only vendor to address the crucial question for enterprises: “Are my critical assets really secure?” It provides the only solution on the market that actually simulates a real APT attacker automatically and continuously.

By automating sophisticated hacking tools and techniques and running them internally, XM Cyber allows you to see the impact a breach would have on your actual environment. And you can remediate gaps and strengthen security for your organization’s “crown jewels”, including your customer data, financial records, intellectual capital and other digital assets.


The post 5 Exciting players in the Breach and Attack Simulation (BAS) Cyber Security Category appeared first on CyberDB.

NIST Seeking Input on Updates to NICE Cybersecurity Workforce Framework

The National Initiative for Cybersecurity Education (NICE), led by the National Institute of Standards and Technology (NIST), is planning to update the NICE Cybersecurity Workforce Framework, NIST Special Publication 800-181. The public is invited to provide input by January 13, 2020, for consideration in the update. The list of topics below covers the major areas in which NIST is considering updates. Comments received by the deadline will be incorporated to the extent practicable. The resulting draft revision to the NICE Cybersecurity Workforce Framework (NICE Framework), once completed, also

Something new from the Technology Partnerships Office

Patenting NIST technologies serves as a gateway to the commercialization of NIST innovations. Patents promote NIST’s critical mission, “To promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.” Quick and easy access to NIST patents increases the likelihood of forming partnerships and ultimately commercialization of NIST technology. Previously, access to NIST patent information was only available through the Federal Laboratories Consortium (FLC) website. The

A UK Small Business is hacked every 19 seconds

Small UK businesses bear the brunt of cyberattacks, according to the latest industry reports. SecureTeam have crunched the numbers and put together an infographic depicting how cybercrime is impacting UK small business. They conclude that UK small businesses are targeted with 65,000 cyberattacks per day, with one small business hacked every 19 seconds!

As expected, email is by far the most commonly used attack vector, and the security posture of small businesses is not robust enough to withstand cyberattacks, a weakness the cybercriminals clearly understand.


Stem Cells and AI: Better Together

One day in the future when you need medical care, someone will examine you, diagnose the problem, remove some of your body’s healthy cells, and then use them to grow a cure for your ailment. The therapy will be personalized and especially attuned to you and your body, your genes, and the microbes that live in your gut. This is the dream of modern medical science in the field of “regenerative medicine.” There are many obstacles standing between this dream and its implementation in real life, however. One obstacle is complexity. Cells often differ so much from one another and differ in so many

Combating the Accidental Insider Data Leakage Threat

Article by Andrea Babbs, UK General Manager, VIPRE SafeSend

Cybercrime has rapidly become the world’s fastest-growing form of criminal activity, and is showing no sign of slowing down, with the number of attacks on businesses rising by more than 50% in the last year alone. While most corporates have made significant efforts to invest in cybersecurity defences to protect their organisations from the outside threat of cybercrime, few have addressed the risk of breaches that stem from the inside in the same way. Insider threats can come from accidental error, such as an employee mistakenly sending a sensitive document to the wrong contact, or from negligence, such as an employee downloading unauthorised software that results in a virus spreading through the company’s systems.

We’re all guilty of accidentally hitting send on an email to the wrong person, or attaching the wrong document; but current levels of complacency around email security culture are becoming an ever greater threat. Few organisations have a clear strategy for helping their employees understand how a simple error can put the company at significant risk; even fewer have a strategy for mitigating that risk and protecting their staff from becoming an inside threat.

So where does the responsibility lie to ensure that company data is kept secure and confidential?
According to reports, 34% of all breaches are caused by insiders, yet many employees are unaware of their responsibility when it comes to data protection. With employee carelessness and complacency the leading causes of data breaches - understandable when human error is inevitable in pressured working environments - there is clearly a lack of awareness and training. And while there is an obvious and urgent need for better employee education, should IT leaders not be doing more to provide the tools that take the risk of accidental mistakes out of employees’ hands?

With simple technology in place that provides an essential double check for employees - with parameters determined by corporate security protocols - before they send sensitive information via email, accidental data loss can be minimised and an improved and proactive email security culture achieved. In addition to checking the validity of outbound and inbound email addresses and attachments - thereby also minimising the risk of staff falling foul of a phishing attack - the technology can also be used to check for keywords and data strings in the body of the email, to identify confidential or sensitive data before the user clicks send.
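
As a rough illustration of the kind of double check described above, the sketch below flags outbound mail that is addressed outside the company or appears to contain sensitive content. This is not VIPRE’s implementation; the patterns and function names are invented for the example.

import re

# Illustrative patterns only; a real deployment would load the organisation's
# own data classification rules and keyword lists.
SENSITIVE_PATTERNS = {
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "sensitive keyword": re.compile(r"\b(confidential|internal only|payroll)\b", re.I),
}

def outbound_checks(recipients, body, company_domain):
    """Return a list of warnings to show the sender before the mail leaves."""
    warnings = [f"External recipient: {r}" for r in recipients
                if not r.lower().endswith("@" + company_domain)]
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(body):
            warnings.append(f"Body appears to contain a {label}")
    return warnings

# Example: prompts a double check before payroll data goes to a personal address
print(outbound_checks(["alice@gmail.com"], "Q3 payroll summary attached", "example.co.uk"))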

In order for organisations to limit the number of insider data breaches, it’s crucial for employees to understand the role they play in keeping the company’s data secure. But in addition to supporting employees with training, deploying an essential tool that prompts for a second check and warns when a mistake is about to be made, organisations can mitigate the risk of accidental error, and the potentially devastating consequences that might have on the business.

Email is arguably the key productivity tool in most working environments today; placing the full burden of responsibility for the security of that tool on employees is both an unnecessary overhead and, increasingly, a security risk. In contrast, supporting staff with a simple, extra prompt for them to double check they aren’t mistakenly sharing confidential data raises awareness, understanding and provides that essential security lock-step – before it’s too late.

Using Benchmarks to Make the Case for AppSec

In a recent Veracode webinar on the subject of making the business case for AppSec, Colin Domoney, DevSecOps consultant, introduced the idea of using benchmarking to rally the troops around your AppSec cause. He says, “What you can do is you can show where your organization sits relative to other organizations and then your peers. If you’re lagging, that’s probably a good reason to further invest. If you’re leading, perhaps you can use that opportunity to catch up on some of your more ambitious projects. We use benchmarking quite frequently. It’s quite a useful task to undertake.”

Ultimately, the value of benchmarks is two-fold; you can see, as Colin says, “where you’re lagging” and use that data to make the case for more budget. But it also strengthens your ask by giving it priorities and a clear road map. For instance, you could say, “we need more AppSec budget,” but your argument is more powerful if you can say, “OWASP’s maturity model recommends automating security testing,” or “most organizations in the retail industry are testing for security monthly.”

If you’re looking for some AppSec benchmarking data, we recommend considering the following:

OWASP’s OpenSAMM Maturity Model: OWASP’s Software Assurance Maturity Model (SAMM) is “an open framework to help organizations formulate and implement a strategy for software security that is tailored to the specific risks facing the organization. The resources provided by SAMM will aid in:

  • Evaluating an organization’s existing software security practices.
  • Building a balanced software security assurance program in well-defined iterations.
  • Demonstrating concrete improvements to a security assurance program.
  • Defining and measuring security-related activities throughout an organization.”

At the highest level, SAMM defines four critical business functions related to software development. Within each business function are three security practices, and within each practice there are three levels of maturity, each with related activities. For instance, under the Business Function “Verification,” there is a security practice called “Implementation review,” which has the following maturity levels:

  • Level one: “Opportunistically finding basic code-level vulnerabilities and other high-risk security issues.”
  • Level two: “Make implementation review during development more accurate and efficient through automation.”
  • Level three: “Mandate comprehensive implementation review process to discover language-level and application-specific risks.”

The model also goes into detail on each of the security activities, the success metrics, and more. There is also a related “How-To Guide” and “Quick Start Guide.”

Veracode’s Verified Program: We created Verified both to give customers a way to prove to their customers that security is a priority, and to give them a road map toward application security maturity, based on our 10+ years of experience of what good AppSec looks like. Want to see how you stack up against a mature program? Take a look at the requirements for the highest Verified tier, Verified Continuous. If your program looks more like the Standard or Team levels, use that to make the case to grow your program, with a clear road map of what is entailed in taking your program to the next level.

Veracode State of Software Security (SOSS) report: Our annual report offers some valuable benchmarking data for your AppSec program. Because we are a SaaS platform, we are able to aggregate all our scan data and look at trends across industries, geographies, and development processes.

You can use the SOSS report to benchmark your program against all organizations, those in your industry, or against those that are implementing practices that are improving the state of their software security. For instance, this year’s report found that 80 percent of applications don’t contain any high-severity flaws; how do you measure up? In addition, we found that those who are scanning the most (260+ times per year) have 5x less security debt and improve their median time to remediation by 72 percent. How often are you scanning?

You can also use the SOSS report to measure your program and progress against your peers in your industry. For example, this year, we found that most of the top 10 flaw categories show a lower prevalence among retailers compared to the cross-industry average. The exceptions to that rule are Credentials Management and, to a lesser extent, Code Injection. It’s possible these tie back to core functionality in retail applications: authenticating users and handling user input. If you’re in the retail industry, you’ve now got a solid starting point for vulnerability types to focus on. If you’re in the Government and Education sector, your peers are struggling with Cross-Site Scripting flaws; are you? And finally, those in the financial sector have the best fix rate among all industries at 76 percent; does your fix rate compare favorably?

Learn more

To find out more about making the case for AppSec, check out our new guide, Building a Business Case for Expanding Your AppSec Program.

Broken Security? Most Business Leaders aren’t confident about their Cybersecurity

Cybersecurity is a critical battleground for UK businesses today, as the digital footprints of individuals and enterprises continue to grow. However, according to a new study commissioned by VMware in partnership with Forbes Insights, only a quarter (25%) of business leaders across EMEA are confident in their current cybersecurity practices, and in the UK spending without adequate assessment of organisational needs is now commonplace.

VMware research reveals British businesses battle sophisticated security threats with old tools and misplaced spend

Key findings of the Study
  • 78% of UK business and IT security leaders believe the cybersecurity solutions their organisation is working with are outdated (despite 40% having acquired new tools over the past 12 months to address potential threats)
  • 74% reveal plans to invest even more in detecting and identifying attacks in the next three years, despite having a multitude of products already installed – a quarter (26%) of businesses currently have 26 or more products for this
  • Only 16% state extreme confidence in the readiness of their organisation to address emerging security challenges
The research shows UK businesses are trapped in a routine of spending without adequately assessing the needs of their organisation. More than three quarters (78%) of business and IT security leaders believe the cybersecurity solutions their organisation is working with are outdated, despite 40% having acquired new tools over the past year to address potential threats. Nearly three quarters (74%), meanwhile, reveal plans to invest even more in detecting and identifying attacks in the next three years, despite having a multitude of products already installed – a quarter (26%) of businesses currently have 26 or more products across their enterprises for this.

The apparent hope of UK businesses to spend their way out of security crises is coupled with a significant security skills gap: just 16% of UK respondents state extreme confidence in the readiness of their organisation to address emerging security challenges, with only 14% extremely confident in the readiness of their people and talent.

The result is that, despite British businesses shoring up their defences against an evolving threat landscape, the complexity surrounding multiple cybersecurity solutions is making it harder for organisations to respond, urgently adapt or improve their strategies. In fact, a third (34%) of IT security leaders state it can take up to an entire week to address an issue.

Ian Jenkins, Director, Networking and Security UK & Ireland, VMware, said of the findings: “Businesses across the UK and beyond continue to follow the same IT security paths, and yet expect to see different results. Yet we now live in a world of greater complexity, with more and more intricate interactions, more connected devices and sensors, dispersed workers and the cloud, all of which have created an exponentially larger attack surface. Investment in traditional security solutions continues to be dwarfed by the economic repercussions of breaches.”

The lack of confidence highlighted in this study sits within a chasm forming between business leaders and security teams. In the UK, only a quarter (24%) of IT teams consider C-suite executives in their organisation to be ‘highly collaborative’ when it comes to cybersecurity. Across EMEA, meanwhile, only 27% of executives and only 16% of IT security practitioners say they are collaborating in a significant way to address cybersecurity issues.

Jenkins concludes, “Modern-day security requires a fundamental shift away from prevailing preventative solutions that try to prevent breaches at all costs. British businesses must invest in solutions that make security intrinsic to everything – the application, the network, essentially everything that connects and carries data. Breaches are inevitable, but how fast and how effectively you can mitigate that threat and protect the continuity of operations is what matters. Combining this approach with a culture of security awareness and collaboration across all departments is crucial to driving cyber best practice forward, and helping enterprises in the UK and across EMEA stay one step ahead in the world of sophisticated cybercrime.”

How to write an ISO 27001-compliant risk assessment procedure

As part of your ISO 27001 certification project, your organisation will need to prove its compliance with appropriate documentation.

ISO 27001 says that you must document your information security risk assessment process.

Key elements of the ISO 27001 risk assessment procedure

Clause 6.1.2 of the Standard states that organisations must “define and apply” a risk assessment process.

An information security risk assessment is a formal, top management-driven process and sits at the core of an ISO 27001 information security management system (ISMS).

There are five simple steps that you should take to conduct a successful risk assessment:

  1. Establish a risk management framework
  2. Identify risks
  3. Analyse risks
  4. Evaluate risks
  5. Select risk treatment options

The risk assessment process determines the controls that have to be deployed in your ISMS. It leads to the Statement of Applicability, which identifies the controls that you are deploying in light of your risk assessment process.
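
To make the scoring step concrete, here is a minimal sketch of the likelihood-times-impact calculation that many ISO 27001 risk methodologies use; the 1-5 scales and thresholds are illustrative assumptions, not values prescribed by the Standard.

# Illustrative 1-5 scales; ISO 27001 does not prescribe particular values,
# only that results be consistent, valid and comparable.
def risk_level(likelihood, impact):
    """Score a risk and map it to a treatment decision band."""
    score = likelihood * impact              # simple qualitative risk score
    if score >= 15:
        return "treat immediately"           # exceeds risk acceptance criteria
    if score >= 8:
        return "treat (plan controls)"
    return "accept and monitor"              # within risk appetite

# Example: a likely threat (4) against a moderately valuable asset (3)
print(risk_level(4, 3))                      # -> "treat (plan controls)"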

Our bestselling book, Nine Steps to Success – An ISO 27001 Implementation Overview, provides more information on the topic of risk management.

Conducting a risk assessment

For an ISO 27001 risk assessment to be successful, it needs to reflect the organisation’s view on risk management – and it must produce “consistent, valid and comparable results”.

The risk assessment procedure should be detailed, and describe who is responsible for each task, when they must be completed and in what order.

This can be a daunting task for many. Inexperienced assessors often rely on spreadsheets, spending hours interviewing people in their organisation, exchanging documents and methodologies with other departments and filling in data. After all that, they’ll probably realise how inconvenient spreadsheets are. For example:

  • They are prone to user error;
  • They are hard to maintain;
  • It’s difficult to find relevant data in multiple tabs; and
  • They don’t automatically conform to ISO 27001.

It doesn’t have to be like this. The risk assessment software vsRisk Cloud provides a simple and fast way to identify relevant threats, and deliver repeatable, consistent assessments year after year.

Find out more about vsRisk Cloud

Its asset library assigns organisational roles to each asset group, applying relevant potential threats and risks by default.

Additionally, its integrated risk, vulnerability and threat databases eliminate the need to compile a list of potential risks, and the built-in control sets help you comply with multiple frameworks.

 



A version of this blog was originally published on 11 January 2018.


The post How to write an ISO 27001-compliant risk assessment procedure appeared first on IT Governance UK Blog.

Attention is All They Need: Combatting Social Media Information Operations With Neural Language Models

Information operations have flourished on social media in part because they can be conducted cheaply, are relatively low risk, have immediate global reach, and can exploit the type of viral amplification incentivized by platforms. Using networks of coordinated accounts, social media-driven information operations disseminate and amplify content designed to promote specific political narratives, manipulate public opinion, foment discord, or achieve strategic ideological or geopolitical objectives. FireEye’s recent public reporting illustrates the continually evolving use of social media as a vehicle for this activity, highlighting information operations supporting Iranian political interests such as one that leveraged a network of inauthentic news sites and social media accounts and another that impersonated real individuals and leveraged legitimate news outlets.

Identifying sophisticated activity of this nature often requires the subject matter expertise of human analysts. After all, such content is purposefully and convincingly manufactured to imitate authentic online activity, making it difficult for casual observers to properly verify. The actors behind such operations are not transparent about their affiliations, often undertaking concerted efforts to mask their origins through elaborate false personas and the adoption of other operational security measures. With these operations being intentionally designed to deceive humans, can we turn towards automation to help us understand and detect this growing threat? Can we make it easier for analysts to discover and investigate this activity despite the heterogeneity, high traffic, and sheer scale of social media?

In this blog post, we will illustrate an example of how the FireEye Data Science (FDS) team works together with FireEye’s Information Operations Analysis team to better understand and detect social media information operations using neural language models.

Highlights

  • A new breed of deep neural networks uses an attention mechanism to home in on patterns within text, allowing us to better analyze the linguistic fingerprints and semantic stylings of information operations using modern Transformer models.
  • By fine-tuning an open source Transformer known as GPT-2, we can detect social media posts being leveraged in information operations despite their syntactic differences to the model’s original training data.
  • Transfer learning from pre-trained neural language models lowers the barrier to entry for generating high-quality synthetic text at scale, and this has implications for the future of both red and blue team operations as such models become increasingly commoditized.

Background: Using GPT-2 for Transfer Learning

OpenAI’s updated Generative Pre-trained Transformer (GPT-2) is an open source deep neural network that was trained in an unsupervised manner on the causal language modeling task. The objective of this language modeling task is to predict the next word in a sentence from previous context, meaning that a trained model ends up being capable of language generation. If the model can predict the next word accurately, it can be used in turn to predict the following word, and so on until, eventually, the model produces fully coherent sentences and paragraphs. Figure 1 depicts an example of language model (LM) predictions we generated using GPT-2. To generate text, single words are successively sampled from distributions of candidate words predicted by the model until it predicts an <|endoftext|> word, which signals the end of the generation.


Figure 1: An example GPT-2 generation prior to fine-tuning after priming the model with the phrase “It’s disgraceful that.”  
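
For readers who want to reproduce this kind of generation, a minimal sketch using current versions of HuggingFace’s transformers library (the library mentioned later in this post) might look like the following; the sampling parameters are illustrative choices, not the exact settings used here.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Prime the model with the same phrase as Figure 1
input_ids = tokenizer.encode("It's disgraceful that", return_tensors="pt")

# Successively sample candidate words until <|endoftext|> or max_length
with torch.no_grad():
    output = model.generate(
        input_ids,
        do_sample=True,                       # sample from the predicted distribution
        top_k=50,                             # restrict sampling to the 50 likeliest tokens
        max_length=60,
        eos_token_id=tokenizer.eos_token_id,  # id of <|endoftext|>
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))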

The quality of this synthetically generated text along with GPT-2’s state-of-the-art accuracy on a host of other natural language processing (NLP) benchmark tasks is due in large part to the model’s improvements over prior 1) neural network architectures and 2) approaches to representing text. GPT-2 uses an attention mechanism to selectively focus the model on relevant pieces of text sequences and identify relationships between positionally distant words. In terms of architectures, Transformers use attention to decrease the time required to train on enormous datasets; they also tend to model lengthy text and scale better than other competing feedforward and recurrent neural networks. In terms of representing text, word embeddings were a popular way to initialize just the first layer of neural networks, but such shallow representations required being trained from scratch for each new NLP task and in order to deal with new vocabulary. GPT-2 instead pre-trains all the model’s layers using hierarchical representations, which better capture language semantics and are readily transferable to other NLP tasks and new vocabulary.

This transfer learning method is advantageous because it allows us to avoid starting from scratch for each and every new NLP task. In transfer learning, we start from a large generic model that has been pre-trained for an initial task where copious data is available. We then leverage the model’s acquired knowledge to train it further on a different, smaller dataset so that it excels at a subsequent, related task. This process of training the model further is referred to as fine-tuning, which involves re-learning portions of the model by adjusting its underlying parameters. Fine-tuning not only requires less data compared to training from scratch, but typically also requires less compute time and resources.

In this blog post, we will show how to perform transfer learning from a pre-trained GPT-2 model in order to better understand and detect information operations on social media. Transformers have shown that Attention is All You Need, but here we will also show that Attention is All They Need: while transfer learning may allow us to more easily detect information operations activity, it likewise lowers the barrier to entry for actors seeking to engage in this activity at scale.

Understanding Information Operations Activity Using Fine-Tuned Neural Generations

In order to study the thematic and linguistic characteristics of a common type of social media-driven information operations activity, we first fine-tuned an LM that could perform text generation. Since the pre-trained GPT-2 model's dataset consisted of 40+ GB of Internet text data extracted from 8+ million reputable web pages, its generations display relatively formal grammar, punctuation, and structure that corresponds to the text present within that original dataset (e.g. Figure 1). To make it appear like social media posts with their shorter length, informal grammar, erratic punctuation, and syntactic quirks including @mentions, #hashtags, emojis, acronyms, and abbreviations, we fine-tuned the pre-trained GPT-2 model on a new language modeling task using additional training data.

For the set of experiments presented in this blog post, this additional training data was obtained from the following open source datasets of identified accounts operated by Russia’s famed Internet Research Agency (IRA) “troll factory”:

  • NBCNews, over 200,000 tweets posted between 2014 and 2017 tied to IRA “malicious activity.”
  • FiveThirtyEight, over 1.8 million tweets associated with IRA activity between 2012 and 2018; we used accounts categorized as Left Troll, Right Troll, or Fearmonger.
  • Twitter Elections Integrity, almost 3 million tweets that were part of the influence effort by the IRA around the 2016 U.S. presidential election.
  • Reddit Suspicious Accounts, consisting of comments and submissions emanating from 944 accounts of suspected IRA origin.

After combining these four datasets, we sampled English-language social media posts from them to use as input for our fine-tuned LM. Fine-tuning experiments were carried out in PyTorch using the 355 million parameter pre-trained GPT-2 model from HuggingFace’s transformers library, and were distributed over up to 8 GPUs.

As opposed to other pre-trained LMs, GPT-2 conveniently requires minimal architectural changes and parameter updates in order to be fine-tuned on new downstream tasks. We simply processed social media posts from the above datasets through the pre-trained model, whose activations were then fed through adjustable weights into a linear output layer. The fine-tuning objective here was the same that GPT-2 was originally trained on (i.e. the language modeling task of predicting the next word, see Figure 1), except now its training dataset included text from social media posts. We also added the <|endoftext|> string as a suffix to each post to adapt the model to the shorter length of social media text, meaning posts were fed into the model according to:

“#Fukushima2015 Zaporozhia NPP can explode at any time
and that's awful! OMG! No way! #Nukraine<|endoftext|>”
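
A simplified sketch of this fine-tuning loop is shown below, assuming current versions of HuggingFace’s transformers and a handful of posts already loaded as strings; in practice the data would come through a batched DataLoader over millions of posts, distributed across GPUs.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")   # the 355M-parameter model
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)  # illustrative learning rate

posts = ["#Fukushima2015 Zaporozhia NPP can explode at any time and that's awful! OMG! No way! #Nukraine"]

model.train()
for post in posts:
    # Suffix each post with <|endoftext|> so the model learns post boundaries
    ids = tokenizer.encode(post + "<|endoftext|>", return_tensors="pt")
    # Passing labels=ids makes the model compute the causal LM (next-word) loss
    loss = model(ids, labels=ids).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()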

Figure 2 depicts a few example generations made after fine-tuning GPT-2 on the IRA datasets. Observe how these text generations are formatted like something we might expect to encounter scrolling through social media – they are short yet biting, express certainty and outrage regarding political issues, and contain emphases like an exclamation point. They also contain idiosyncrasies like hashtags and emojis that positionally manifest at the end of the generated text, depicting a semantic style regularly exhibited by actual users.


Figure 2: Fine-tuning GPT-2 using the IRA datasets for the language modeling task. Example generations are primed with the same phrase from Figure 1, “It’s disgraceful that.” Hyphens are added for readability and not produced by the model.

How does the model produce such credible generations? Besides the weights that were adjusted during LM fine-tuning, some of the heavy lifting is also done by the underlying attention scores that were learned by GPT-2’s Transformer. Attention scores are computed between all words in a text sequence, and represent how important one word is when determining how important its nearby words will be in the next learning iteration. To compute attention scores, the Transformer performs a dot product between a Query vector q and a Key vector k:

  • q encodes the current hidden state, representing the word that searches for other words in the sequence to pay attention to that may help supply context for it.
  • k encodes the previous hidden states, representing the other words that receive attention from the query word and might contribute a better representation for it in its current context.

Figure 3 displays how this dot product is computed based on single neuron activations in q and k using an attention visualization tool called bertviz. Columns in Figure 3 trace the computation of attention scores from the highlighted word on the left, “America,” to the complete sequence of words on the right. For example, to decide to predict “#” following the word “America,” this part of the model focuses its attention on preceding words like “ban,” “Immigrants,” and “disgrace,” (note that the model has broken “Immigrants” into “Imm” and “igrants” because “Immigrants” is an uncommon word relative to its component word pieces within pre-trained GPT-2's original training dataset).  The element-wise product shows how individual elements in q and k contribute to the dot product, which encodes the relationship between each word and every other context-providing word as the network learns from new text sequences. The dot product is finally normalized by a softmax function that outputs attention scores to be fed into the next layer of the neural network.


Figure 3: The attention patterns for the query word highlighted in grey from one of the fine-tuned GPT-2 generations in Figure 2. Individual vertical bars represent neuron activations, horizontal bars represent vectors, and lines represent the strength of attention between words. Blue indicates positive values, red indicates negative values, and color intensity represents the magnitude of these values.
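
The arithmetic behind these visualizations is compact; the sketch below shows scaled dot-product attention for a single query word over five context words, with illustrative dimensions (GPT-2 computes this across many heads and layers at once).

import torch
import torch.nn.functional as F

d_k = 64                      # dimensionality of one attention head
q = torch.randn(1, d_k)       # query: the word searching for context ("America")
K = torch.randn(5, d_k)       # keys: the five preceding words in the sequence

# Dot product between the query and every key, scaled by sqrt(d_k) as in
# "Attention is All You Need", then normalized by softmax into attention scores
scores = (q @ K.T) / d_k ** 0.5           # shape: (1, 5)
attention = F.softmax(scores, dim=-1)     # how much attention each prior word receives
print(attention)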

Syntactic relationships between words like “America,” “ban,” and “Immigrants“ are valuable from an analysis point of view because they can help identify an information operation’s interrelated keywords and phrases. These indicators can be used to pivot between suspect social media accounts based on shared lexical patterns, help identify common narratives, and even to perform more proactive threat hunting. While the above example only scratches the surface of this complex, 355 million parameter model, qualitatively visualizing attention to understand the information learned by Transformers can help provide analysts insights into linguistic patterns being deployed as part of broader information operations activity.

Detecting Information Operations Activity by Fine-Tuning GPT-2 for Classification

In order to further support FireEye Threat Analysts’ work in discovering and triaging information operations activity on social media, we next fine-tuned a detection model to perform classification. Just like when we adapted GPT-2 for a new language modeling task in the previous section, we did not need to make any drastic architectural changes or parameter updates to fine-tune the model for the classification task. However, we did need to provide the model with a labeled dataset, so we grouped together social media posts based on whether they were leveraged in information operations (class label CLS = 1) or were benign (CLS = 0).

Benign, English-language posts were gathered from verified social media accounts, which generally corresponded to public figures and other prominent individuals or organizations whose posts contained diverse, innocuous content. For the purposes of this blog post, information operations-related posts were obtained from the previously mentioned open source IRA datasets. For the classification task, we separated the IRA datasets that were previously combined for LM fine-tuning, and selected posts from only one of them for the group associated with CLS = 1. To perform dataset selection quantitatively, we fine-tuned LMs on each IRA dataset to produce three different LMs while keeping 33% of the posts from each dataset held out as test data. Doing so allowed us to quantify the overlap between the individual IRA datasets based on how well one dataset’s LM was able to predict post content originating from the other datasets.


Figure 4: Confusion matrix representing perplexities of the LMs on their test datasets. The LM corresponding to the GPT-2 row was not fine-tuned; it corresponds to the pretrained GPT-2 model with reported perplexity of 18.3 on its own test set, which was unavailable for evaluation using the LMs. The Reddit dataset was excluded due to the low volume of samples.

In Figure 4, we show the result of computing perplexity scores for each of the three LMs and the original pre-trained GPT-2 model on held out test data from each dataset. Lower scores indicate better perplexity, which captures the probability of the model choosing the correct next word. The lowest scores fell along the main diagonal of the perplexity confusion matrix, meaning that the fine-tuned LMs were best at predicting the next word on test data originating from within their own datasets. The LM fine-tuned on Twitter’s Elections Integrity dataset displayed the lowest perplexity scores when averaged across all held out test datasets, so we selected posts sampled from this dataset to demonstrate classification fine-tuning.
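
Perplexity falls straight out of the LM loss; a minimal sketch for scoring one held-out post, reusing a fine-tuned model and tokenizer from the previous section, might look like this.

import torch

def perplexity(model, tokenizer, text):
    """Perplexity = exp(average per-token negative log-likelihood)."""
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean cross-entropy over tokens
    return torch.exp(loss).item()

# Lower is better: each fine-tuned LM should score held-out posts from its own
# dataset lower than posts originating from the other IRA datasets.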


Figure 5: (A) Training loss histories during GPT-2 fine-tuning for the classification (red) and LM (grey, inset) tasks. (B) ROC curve (red) evaluated on the held out fine-tuning test set, contrasted with random guess (grey dotted).

To fine-tune for the classification task, we once again processed the selected dataset’s posts through the pre-trained GPT-2 model. This time, activations were fed through adjustable weights into two linear output layers instead of just the single one used for the language modeling task in the previous section. Here, fine-tuning was formulated as a multi-task objective with classification loss together with an auxiliary LM loss, which helped accelerate convergence during training and improved the generalization of the model. We also prepended posts with a new [BOS] (i.e. Beginning Of Sentence) string and suffixed posts with the previously mentioned [CLS] class label string, so that each post was fed into the model according to:

“[BOS]Kevin Mandia was on @CNBC’s @MadMoneyOnCNBC with @jimcramer discussing targeted disinformation heading into the… https://t.co/l2xKQJsuwk[CLS]”

The [BOS] string played a similar delimiting role to the <|endoftext|> string used previously in LM fine-tuning, and the [CLS] string encoded the hidden state ∈ {0, 1} that was the label fed to the model’s classification layer. The example social media post above came from the benign dataset, so this sample’s label was set to CLS = 0 during fine-tuning. Figure 5A shows the evolution of classification and auxiliary LM losses during fine-tuning, and Figure 5B displays the ROC curve for the fine-tuned classifier on its test set consisting of around 66,000 social media posts. The convergence of the losses to low values, together with a high Area Under the ROC Curve (i.e. AUC), illustrates that transfer learning allowed this model to accurately detect social media posts associated with IRA information operations activity versus benign ones. Taken together, these metrics indicate that the fine-tuned classifier should generalize well to newly ingested social media posts, providing analysts a capability they can use to separate signal from noise.
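
As an illustrative reconstruction (not the exact code used here), the sketch below bolts a linear classification layer onto the pre-trained GPT-2 body and computes the classification half of the multi-task loss; the auxiliary LM loss would be added with a weighting coefficient, and [BOS]/[CLS] would normally be registered as special tokens rather than tokenized as plain text.

import torch
import torch.nn as nn
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")

class GPT2Classifier(nn.Module):
    """Pre-trained GPT-2 body with an adjustable linear output layer on top."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.gpt2 = GPT2Model.from_pretrained("gpt2-medium")
        self.classifier = nn.Linear(self.gpt2.config.n_embd, n_classes)

    def forward(self, ids):
        hidden = self.gpt2(ids).last_hidden_state   # (batch, seq_len, n_embd)
        return self.classifier(hidden[:, -1, :])    # classify from the final ([CLS]) position

model = GPT2Classifier()
loss_fn = nn.CrossEntropyLoss()

ids = tokenizer.encode("[BOS]example social media post[CLS]", return_tensors="pt")
label = torch.tensor([0])                  # 0 = benign, 1 = information operations
clf_loss = loss_fn(model(ids), label)
# total_loss = clf_loss + lm_coef * lm_loss   # multi-task objective described above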

Conclusion

In this blog post, we demonstrated how to fine-tune a neural LM on open source datasets containing social media posts previously leveraged in information operations. Transfer learning allowed us to classify these posts with a high AUC score, and FireEye’s Threat Analysts can utilize this detection capability in order to discover and triage similar emergent operations. Additionally, we showed how Transformer models assign scores to different pieces of text via an attention mechanism. This visualization can be used by analysts to tease apart adversary tradecraft based on posts’ linguistic fingerprints and semantic stylings.

Transfer learning also allowed us to generate credible synthetic text with low perplexity scores. One of the barriers actors face when devising effective information operations is adequately capturing the nuances and context of the cultural climate in which their targets are situated. Our exercise here suggests this costly step could be bypassed using pre-trained LMs, whose generations can be fine-tuned to embody the zeitgeist of social media. GPT-2’s authors and subsequent researchers have warned about potential malicious use cases enabled by this powerful natural language generation technology, and while it was conducted here for a defensive application in a controlled offline setting using readily available open source data, our research reinforces this concern. As trends towards more powerful and readily available language generation models continue, it is important to redouble efforts towards detection as demonstrated by Figure 5 and other promising approaches such as Grover.

This research was conducted during a three-month FireEye IGNITE University Program summer internship, and represents a collaboration between the FDS and FireEye Threat Intelligence’s Information Operations Analysis teams. If you are interested in working on multidisciplinary projects at the intersection of cyber security and machine learning, please consider applying to one of our 2020 summer internships.

How to Leverage YAML to Integrate Veracode Solutions Into CI/CD Pipelines

YAML scripting is frequently used to simplify configuration management of CI/CD tools. This blog post shows how YAML scripts for build tools like Circle CI, Concourse CI, GitLab, and Travis can be edited in order to create integrations with the Veracode Platform. Integrating Veracode AppSec solutions into CI/CD pipelines enables developers to embed remediation of software vulnerabilities directly into their SDLC workflows, creating a more efficient process for building secure applications. You can also extend the script template proposed in this blog to integrate Veracode AppSec scanning with almost any YAML-configured build tool.  

Step One: Environment Requirements

The first step is to confirm that your selected CI tool supports YAML-based pipeline definitions; we assume that you are spinning up Docker images to run your CI/CD workflows. Your Docker images can run either Java or .NET, but the scripts included in this article target Java only, so you will need to confirm this before moving on to the next step.

Step Two: Setting Up Your YAML File

The second step is to locate the YAML configuration file, which for many CI tools is labeled as config.yml. The basic syntax is the same for most build tools, with some minor variations. The links below contain configuration file scripts for Circle CI, Concourse CI, GitLab, and Travis, which you can also use as examples for adjusting methods of config files for other build tools.

Step Three: Downloading the Java API Wrapper

The next step requires downloading the Java API wrapper, which can be done by using the script below.

# grab the Veracode agent (YAML requires spaces, not tabs)
- run:
    name: "Get the Veracode agent"
    command: |
      wget https://repo1.maven.org/maven2/com/veracode/vosp/api/wrappers/vosp-api-wrappers-java/19.2.5.6/vosp-api-wrappers-java-19.2.5.6.jar -O VeracodeJavaAPI.jar

Step Four: Adding Veracode Scan Attributes to Build Pipelines

The final step requires entering into the script all the information required to interact with Veracode APIs, including data attributes like users’ access credentials, application name, build version, etc. Veracode has created a rich library of APIs that provide numerous options for interacting with the Veracode Platform, and that enable customers and partners to create their own integrations. Information on Veracode APIs is available in the Veracode Help Center.

The script listed below demonstrates how to add attributes to the Circle CI YAML configuration file, so that the script can run the uploadandscan API, which will enable application uploading from Circle CI to the Veracode Platform, and trigger the Platform to run the application scan.

- run:
    name: "Upload to Veracode"
    # ">" folds the indented lines below into a single command
    command: >
      java -jar VeracodeJavaAPI.jar
      -vid $VERACODE_API_ID
      -vkey $VERACODE_API_KEY
      -action uploadandscan
      -appname $VERACODE_APP_NAME
      -createprofile false
      -version CircleCI-$CIRCLE_BUILD_NUM
      -filepath upload.zip

In this example, we have defined:

name: the name of this step as it appears in the YAML script.

command: the command that invokes the Veracode API. Details on downloading the API jar were provided in the previous step.

-vid $VERACODE_API_ID: the user’s Veracode ID access credential.

-vkey $VERACODE_API_KEY: the user’s Veracode Key access credential.

-action uploadandscan: the name of the Veracode API action invoked by this script.

-appname $VERACODE_APP_NAME: the name of the customer application targeted for uploading and scanning. This application name should be defined identically to the way it is defined in the application profile on the Veracode Platform.

-createprofile false: a Boolean that defines whether an application profile should be created automatically if $VERACODE_APP_NAME does not match an existing application profile.

  • If defined as true, an application profile will be created automatically when no match is found, and the upload and scan steps will continue
  • If defined as false, no application profile will be created, and no further upload and scan actions will be taken

-version CircleCI-$CIRCLE_BUILD_NUM: the version label for this scan; here it is set to the Circle CI build number so that each scan maps back to a specific build.

-filepath upload.zip: the location of the application file to be uploaded for scanning.

With these four steps, Veracode scanning is now integrated into a new CI/CD pipeline.
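
Putting the pieces together, a complete minimal config.yml for Circle CI might look like the sketch below. The Docker image, build command and job names are illustrative assumptions for a Java project that produces upload.zip; the Veracode credentials should be stored as CI environment variables, never committed to the repository.

version: 2.1
jobs:
  build-and-scan:
    docker:
      - image: cimg/openjdk:8.0        # any Java-capable image works
    steps:
      - checkout
      - run:
          name: "Build the application"
          command: ./gradlew build && zip -r upload.zip build/libs
      # grab the Veracode agent
      - run:
          name: "Get the Veracode agent"
          command: |
            wget https://repo1.maven.org/maven2/com/veracode/vosp/api/wrappers/vosp-api-wrappers-java/19.2.5.6/vosp-api-wrappers-java-19.2.5.6.jar -O VeracodeJavaAPI.jar
      - run:
          name: "Upload to Veracode"
          command: >
            java -jar VeracodeJavaAPI.jar
            -vid $VERACODE_API_ID
            -vkey $VERACODE_API_KEY
            -action uploadandscan
            -appname $VERACODE_APP_NAME
            -createprofile false
            -version CircleCI-$CIRCLE_BUILD_NUM
            -filepath upload.zip
workflows:
  scan:
    jobs:
      - build-and-scan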

Integrating application security scanning directly into your build tools enables developers to incorporate security scans directly into their SDLC cycles. Finding software vulnerabilities earlier in the development cycle allows for simpler remediation and more efficient issue resolution, enabling Veracode customers to build more secure software, without compromising on development deadlines.

For additional information on Veracode Integrations, please visit our integrations page.

Caught in the Crossfire of Cyberwarfare

Authored by Dr Sandra Bell, Head of Resilience Consulting EMEA, Sungard Availability Services 

The 2019 National Cyber Security Centre’s (NCSC) Annual Review does not shy away from naming the four key protagonists when it comes to state-based cyber threats against our country. The review cites China, Russia, North Korea and Iran as being actively engaged in cyber operations against our Critical National Infrastructure and other sectors of society. That being said, the main cyber threat to businesses and individual citizens remains organised crime. But with the capability of organised crime matching some state-based activity, and the sharing (if not direct support) of state-based techniques with cyber criminals, how are we expected to defend ourselves against such sophisticated means of cyberattack?

The answer offered by Ciaran Martin, CEO of the NCSC, in his Foreword to the 2019 Review only scratches the surface of the cultural change we need to embrace if we are to become truly cyber resilient to these modern-day threats.

“Looking ahead, there is also the risk that advanced cyberattack techniques could find their way into the hands of new actors, through the proliferation of such tools on the open market. Additionally, we must always be mindful of the risk of accidental impact from other attacks. Cyber security has moved away from the exclusive prevail of security and intelligence agencies towards one that needs the involvement of all of government, and indeed all of society.”

There are a few key points to draw out from this statement. Firstly, there is an acceptance that any of us may be collateral damage in a broader state-on-state cyberattack. Secondly, we should also accept that we may be the victims of very sophisticated cyberattacks that have their roots in state-sponsored development. And finally, we must all accept that cyber security is a collective responsibility and, where businesses are concerned, this responsibility must be accepted and owned at the very top.

Modern life is now dependent on cyber security, but we are yet to truly embrace the concept of a cyber-secure culture. When we perceived terrorism as the major threat to our security, society quickly adopted a ‘reporting culture’ of anything suspicious, but have we seen the same mindset shift with regard to cyber threats? The man in the street may not be the intended target of a state-based or organised crime cyberattack, but we can all easily become a victim, either accidentally as collateral damage or intentionally as low-hanging fruit. Either way we can all, individual citizens and businesses alike, fall victim to the new battleground of cyberwarfare.

What can business do in the face of such threats?
One could argue that becoming a victim of cybercrime is a question of when, not if. This can in turn bring about a sense of inevitability. But what is clear when you see the magnitude of recent Information Commissioner’s Office (ICO) fines is that businesses cannot ignore cyber security issues. A business that embraces the idea of a cybersecurity culture within its organisation will not only be less likely to be hit with a fine from the ICO should things go horribly wrong, but will also be less likely to fall victim in the first place. Cyber security is about doing the basics well: preparing your organisation to protect itself, and responding correctly when an incident occurs.

Protecting against a new kind of warfare
Organisations need to prepare to potentially become the unintended targets of broad-brush cyberattacks, protecting themselves against the impact these could have on their operations and customer services. With each attack growing in complexity, businesses must in turn respond in a swift and sophisticated manner. Defence mechanisms need to be as scalable as the nefarious incidents they may be up against. To give themselves the best chance of ensuring that an attack doesn’t debilitate them and the country in which they operate, there are a few key things that businesses can do:

1) Act swiftly
A cyberattack requires an immediate response from every part of a business. Therefore, when faced with a potential breach, every individual must know how to react precisely and quickly. IT and business teams will need to locate and close any vulnerabilities in IT systems or business processes and switch over to Disaster Recovery arrangements if they believe there has been a data corruption. Business units need to invoke their Business Continuity Plans and the executive Crisis Management Team needs to assemble. This team needs to be rehearsed in cyber-related crisis events, not just the more traditional Business Continuity type of crisis.

Both the speed and effectiveness of a response will be greatly improved if businesses have at their fingertips the results of a Data Protection Impact Assessment (DPIA) that details all the personal data collected, processed and stored, categorised by level of sensitivity. If companies are scrambling around, unsure of who should be taking charge and what exactly should be done, then the damage caused by the data encryption will only be intensified.

2) Isolate the threat
Value flows from business to business through networks and supply chains, but so do malware infections. Having adequate back-up resources not only brings back business availability in the wake of an attack, but it also serves to act as a barrier to further disruption in the network. The key element that cybercriminals and hacking groups have worked to iterate on is their delivery vector.

Phishing attempts are more effective if they’re designed using the techniques employed in social engineering. A study conducted by IBM found that human error accounts for more than 95 per cent of security incidents. The majority of the most devastating attacks from recent years have been of the network-based variety, i.e. worms and bots.

Right now, we live in a highly connected world with hyper-extended networks comprised of a multitude of mobile devices and remote workers logging in from international locations. Having a crisis communication plan that sets out in advance who needs to be contacted should a breach occur will mean that important stakeholders based in different locations don’t get forgotten in the heat of the moment.

3) Rely on resilience
Prevention is always better than cure. Rather than waiting until a data breach occurs to discover the hard way which threats and vulnerabilities are present in IT systems and business processes, act now.

It’s good business practice to continuously monitor risk, including information risk, and ensure that the controls are adequate. However, in the fast-paced cyber world where the threats are constantly changing this can be difficult in practice.

With effective Disaster Recovery and cyber focused Business Continuity practices written into business contingency planning, organisations remain robust and ready to spring into action to minimise the impact of a data breach.

The most effective way to test business resilience, without unconscious bias risking false-positive results, is evaluation by external security professionals. By conducting physical and logical penetration testing and regularly checking an organisation’s susceptibility to social engineering, effective business continuity can be ensured, and back-up solutions can be rigorously tested.

Cyber Resilience must be woven into the fabric of business operations, including corporate culture itself. Crisis leadership training ensures the C-suite has the skills, competencies and psychological coping strategies that help lead an organisation through the complex, uncertain and unstable environment caused by a cyberattack, so that it emerges on the other side stronger and more competitive than ever before.

A look ahead to the future
A cyberattack is never insignificant, nor expected, but if a business suffers one it is important to inform those that are affected as quickly as possible. Given the scale at which these are being launched, this couldn’t be truer. It’s vital in the current age of state-backed attacks that businesses prioritise resilience lest they be caught in the crossfire. In a business landscape defined by hyper-extended supply chains, having a crisis communication plan that sets out in advance who needs to be contacted should a breach occur will mean that important stakeholders don’t get forgotten in the heat of the moment and that the most important assets remain protected.

To Measure Bias in Data, NIST Initiates ‘Fair Ranking’ Research Effort

A new research effort at the National Institute of Standards and Technology (NIST) aims to address a pervasive issue in our data-driven society: a lack of fairness that sometimes turns up in the answers we get from information retrieval software. Software of this type is everywhere, from popular search engines to less-known algorithms that help specialists comb through databases. This software usually incorporates forms of artificial intelligence that help it learn to make better decisions over time. But it bases these decisions on the data it receives, and if that data is biased in some way

NICE Webinar: Cybersecurity Career Opportunities with the Federal Government

The PowerPoint slides used during this webinar can be found here. Speakers: Danielle Santos, Program Manager, National Initiative for Cybersecurity Education (NICE); Caitlin Bertoni, Deputy Chief for Marketing, Outreach and Testing, National Security Agency (NSA) Recruitment; Amanda Martens, Recruiter, Office of the Chief Human Capital Officer, Cybersecurity and Infrastructure Security Agency (CISA). Synopsis: According to CyberSeek.org, there are nearly 20,000 cybersecurity job openings in the public sector. The cybersecurity jobs available in the federal government cover the range of occupations from

State of Software Security v10: Top 5 Takeaways for Security Professionals

It’s the 10th anniversary of our State of Software Security (SOSS) report! This year, like every year, we dug into our data from a recent 12-month period (this year we analyzed 85,000 applications, 1.4 million scans, and nearly 10 million security findings), but we also took a look back at 10 years of software security. With a decade’s worth of analysis about software vulnerabilities and the best ways to address them, we’re in a unique position to offer insights into creating secure code. There’s a lot to unpack in our most recent SOSS, including some then vs. now comparisons, a look at the most popular vulnerabilities, and a deep dive into security debt. Here are the five takeaways we consider most noteworthy for security professionals:

Apps are insecure

Eighty-three percent of applications have at least one flaw in their initial scan. And we’ve been hovering around that number for the past decade. In addition, the types of flaws that were plaguing code a decade ago are still wreaking havoc today. The top two flaw types seen in code 10 years ago are the same top two we saw this past year: information leakage and cryptographic issues. And many of the top 10 flaws in Volume 1 remain on the top 10 list today, including CRLF injection, Cross-Site Scripting, SQL Injection, and Credentials Management.

What is going on here? We’ve said it before, and we’ll say it again: we need to do a better job helping developers create secure code. We recently partnered with DevOps.com to conduct a survey on DevSecOps skills and found that fewer than one in four developers or other IT pros were required to take a single college course on security. Meanwhile, once developers are on the job, employers aren’t advancing their security training options either: approximately 68 percent of developers and IT pros say their organizations don’t provide them adequate training in application security.
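
To make the injection point above concrete (this example is ours, not from the Veracode report): SQL injection arises whenever untrusted input is spliced into query text, and the standard fix is a parameterized statement, shown here with the SQLite C API.

```cpp
// Hypothetical illustration of SQL injection and its standard fix,
// using the SQLite C API. Vulnerable version (don't do this):
//   std::string q = "SELECT id FROM users WHERE name = '" + name + "'";
// Input such as  x' OR '1'='1  would change the query's meaning.
#include <sqlite3.h>
#include <string>

// Safer version: the user-supplied value is bound as data, so it can
// never be parsed as SQL, whatever characters it contains.
bool find_user(sqlite3* db, const std::string& name) {
    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?1",
                           -1, &stmt, nullptr) != SQLITE_OK)
        return false;
    sqlite3_bind_text(stmt, 1, name.c_str(), -1, SQLITE_TRANSIENT);
    const bool found = (sqlite3_step(stmt) == SQLITE_ROW);
    sqlite3_finalize(stmt);
    return found;
}
```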

Security debt is a significant problem

In the good news department, we do see improvement in fix rates. For example, half of applications showed a net reduction in flaws over the sample time frame. Another 20 percent either had no flaws or showed no change. This means 70 percent of development teams are keeping pace or pulling ahead in the flaw-busting race! However, we also found that teams are prioritizing newly found security flaws over older flaws, leading to security debt piling up. This year’s data reveals that flaws are much more likely to be fixed soon after they’re discovered, leaving older flaws to linger.

We’re doing a better job tackling high-severity flaws, but not the most exploitable ones

As we said above, developers are doing a better job fixing what they find, prioritizing both the most recently discovered flaws and the most severe. On the one hand, this is good news. On the other, we found that the security debt accumulated across organizations is composed primarily of Cross-Site Scripting, with Injection, Authentication, and Misconfiguration flaws making up sizable portions as well. This is noteworthy because Injection is the second most prevalent flaw category in reported exploits. Bottom line: the exploitability of a flaw needs to be prioritized, and older flaws need to be addressed. An older injection flaw is just as dangerous as a newly discovered one.

When you scan more, you secure more

This year’s report also looked at the effect of both scanning cadence and frequency on security debt and fix rate, and the results were striking. Those that scanned the most, and the most regularly, had dramatically better fix rates and less security debt. In fact, those with the highest scan frequency (260+ scans per year) carried 5x less security debt and saw a 72 percent reduction in median time to remediation.
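
For reference, the “median time to remediation” metric is just the middle value of the per-flaw fix times. A toy sketch with made-up numbers:

```cpp
// Toy illustration (hypothetical data): median time to remediation is
// the middle of the sorted (fixed - found) intervals for closed flaws.
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // Each pair: day offset when a flaw was found, and when it was fixed.
    std::vector<std::pair<int, int>> closed = {
        {0, 3}, {2, 40}, {5, 9}, {7, 120}, {10, 12}};
    std::vector<int> days;
    for (const auto& [found, fixed] : closed)
        days.push_back(fixed - found);
    // Sort the remediation times and take the middle element.
    std::sort(days.begin(), days.end());
    std::cout << "median TTR: " << days[days.size() / 2] << " days\n";  // 4 days
}
```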

There are some differences in how organizations in different industries are securing software

Looking at the software security trends in your own industry gives you an idea of how your program compares, and where to focus your security efforts.

And we did find some significant differences this year in how different industries are tackling AppSec. For instance, we found that organizations in the retail sector are doing the best job at keeping security debt at bay, while those in the government and education space are doing the worst.

The infrastructure industry is fixing flaws almost 4X faster than any other industry, and 13X faster than the median time to remediation for healthcare. The financial industry has an impressive fix rate, but one of the slowest median times to remediation.

You’ll find all the SOSS X industry infosheets, which include details on which vulnerabilities are most common in each industry, on our Resources page.

Read the report

Read the full SOSS report to learn more about best practices that can help keep your software secure. Check out our SOSS X page for access to the full report, additional data highlights, videos of Veracode experts discussing the results, and more.

National Cybersecurity Career Awareness Week is Here!

#cybercareerweek | #mycyberjob | nist.gov/nice/nccaw
National Cybersecurity Career Awareness Week, brought to you by the National Initiative for Cybersecurity Education (NICE), is an opportunity for government, business, education, community-based organizations, students, and workers to inspire, educate, and engage citizens to pursue careers in cybersecurity. The week-long campaign provides an opportunity to learn about the contributions and innovations of cybersecurity practitioners and the plethora of job opportunities that …

GWP-ASan: Sampling heap memory error detection in-the-wild

Memory safety errors, like use-after-frees and out-of-bounds reads/writes, are a leading source of vulnerabilities in C/C++ applications. Despite investments in preventing and detecting these errors in Chrome, over 60% of high severity vulnerabilities in Chrome are memory safety errors. Some memory safety errors don’t lead to security vulnerabilities but simply cause crashes and instability.

Chrome uses state-of-the-art techniques to prevent these errors, including:

  • Coverage-guided fuzzing with AddressSanitizer (ASan)
  • Unit and integration testing with ASan
  • Defensive programming, like custom libraries to perform safe math or provide bounds-checked containers (a minimal sketch of the idea follows this list)
  • Mandatory code review
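
A minimal sketch of the defensive-programming idea above, using standard facilities rather than Chrome’s actual helpers (Chrome’s own safe-math and checked-container libraries differ in detail):

```cpp
// Illustrative only: checked arithmetic and bounds-checked access.
// __builtin_mul_overflow is a real GCC/Clang builtin; std::vector::at
// is the standard bounds-checked accessor.
#include <cstddef>
#include <stdexcept>
#include <vector>

// Refuse to compute an allocation size that silently wraps around.
size_t checked_total(size_t count, size_t elem_size) {
    size_t total;
    if (__builtin_mul_overflow(count, elem_size, &total))
        throw std::overflow_error("allocation size overflows");
    return total;
}

// at() throws std::out_of_range instead of reading past the buffer.
int read_at(const std::vector<int>& v, size_t i) {
    return v.at(i);
}
```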

Chrome also makes use of sandboxing and exploit mitigations to complicate exploitation of memory errors that go undetected by the methods above.

AddressSanitizer is a compiler instrumentation that finds memory errors occurring on the heap, stack, or in globals. ASan is highly effective, and among instrumentations that detect this class of error it has some of the lowest overhead available; however, it still incurs an average 2-3x performance and memory overhead. This makes it suitable for use with unit tests or fuzzing, but not for deployment to end users. Chrome used to deploy SyzyASAN-instrumented binaries to detect memory errors. SyzyASAN had a similar overhead, so it was only deployed to a small subset of users on the canary channel; it was discontinued after the Windows toolchain switched to LLVM.
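
As a concrete illustration (ours, not Chrome’s) of the class of bug this instrumentation exists to catch: the following heap use-after-free compiles silently, but built with Clang’s -fsanitize=address flag it aborts at the bad read with a report naming the allocation and free sites.

```cpp
// Minimal heap use-after-free; build with: clang++ -fsanitize=address -g
#include <cstdlib>

int main() {
    int* data = static_cast<int*>(malloc(4 * sizeof(int)));
    data[0] = 42;
    free(data);
    return data[0];  // heap-use-after-free: ASan reports and aborts here
}
```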

GWP-ASan, also known by its recursive backronym, GWP-ASan Will Provide Allocation Sanity, is a sampling allocation tool designed to detect heap memory errors occurring in production with negligible overhead. Because its overhead is negligible, we can deploy GWP-ASan to the entire Chrome user base to find memory errors happening in the real world that are not caught by fuzzing or testing with ASan. Unlike ASan, GWP-ASan cannot find memory errors on the stack or in globals.

GWP-ASan is currently enabled for all Windows and macOS users for allocations made using malloc() and PartitionAlloc. It is only enabled for a small fraction of allocations and processes, keeping the performance and memory overhead negligible. At the time of writing it has found over sixty bugs (many of the reports are still restricted). About 90% of the issues GWP-ASan has found are use-after-frees; the remainder are out-of-bounds reads and writes.
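
The core trick behind GWP-ASan is classic guard-page allocation applied to a small random sample of allocations. The following is a heavily simplified POSIX sketch of the idea, not Chrome’s implementation (which adds sampling logic, a fixed slot pool, and crash reporting):

```cpp
// Simplified guard-page allocation in the spirit of GWP-ASan (POSIX).
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>

// Place the allocation flush against an inaccessible guard page, so a
// read/write past the end faults immediately instead of corrupting memory.
void* guarded_alloc(size_t size) {
    const size_t page = static_cast<size_t>(sysconf(_SC_PAGESIZE));
    const size_t pages = (size + page - 1) / page;
    char* base = static_cast<char*>(mmap(nullptr, (pages + 1) * page,
                                         PROT_READ | PROT_WRITE,
                                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    if (base == MAP_FAILED) return nullptr;
    mprotect(base + pages * page, page, PROT_NONE);  // trailing guard page
    return base + pages * page - size;               // right-aligned
}

// On free, make the whole region inaccessible so any later
// use-after-free access traps with a clean fault.
void guarded_free(void* ptr, size_t size) {
    const size_t page = static_cast<size_t>(sysconf(_SC_PAGESIZE));
    const size_t pages = (size + page - 1) / page;
    char* base = static_cast<char*>(ptr) + size - pages * page;
    mprotect(base, pages * page, PROT_NONE);
}

int main() {
    char* p = static_cast<char*>(guarded_alloc(16));
    p[15] = 'x';     // in bounds: fine
    // p[16] = 'x';  // out of bounds: hits the guard page, SIGSEGV
    guarded_free(p, 16);
    // p[0];         // use-after-free: region is PROT_NONE, SIGSEGV
    return 0;
}
```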

To learn more, check out our full write up on GWP-ASan here.

National Cyber Security Committee urges vigilance as two concerning cyber security threats are in the wild

UPDATE: As of 12 November 2019, the CIMA level returned to Level 5 – Normal Conditions. The Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), with its state and territory partners, is continuing to respond to the widespread malware campaign known as Emotet while responding to reports that hackers are exploiting the BlueKeep vulnerability to mine cryptocurrency. The Cyber Incident Management Arrangements (CIMA) remain activated, however the alert level has been downgraded to Level 4 – ‘Lean Forward’.

Seven Security Strategies, Summarized

This is the sort of story that starts as a comment on Twitter, then becomes a blog post when I realize I can't fit all the ideas into one or two Tweets. (You know how much I hate Tweet threads, and how I encourage everyone to capture deep thoughts in blog posts!)

In the interest of capturing the thought, and not in the interest of thinking too deeply or comprehensively (at least right now), I offer seven security strategies, summarized.

When I mention the risk equation, I'm talking about the idea that one can conceptually imagine the risk of some negative event using this "formula": Risk (of something) is the product of some measurements of Vulnerability × Threat × Asset Value, or R = V × T × A.
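
As a worked (and entirely hypothetical) numeric example of that formula, with a vulnerability score of 0.4, a threat score of 0.5, and an asset value of 100,000:

$$R = V \times T \times A = 0.4 \times 0.5 \times 100{,}000 = 20{,}000$$

Driving any single factor to zero drives the whole product, and thus the risk, to zero, which is exactly what strategies 4, 5, and 6 below attempt.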

  1. Denial and/or ignorance. This strategy assumes the risk due to loss is low, because those managing the risk assume that one or more of the elements of the risk equation are zero or almost zero, or they are apathetic to the cost.
  2. Loss acceptance. This strategy may assume the risk due to loss is low, or more likely those managing the risk assume that the cost of risk realization is low. In other words, incidents will occur, but the cost of the incident is acceptable to the organization.
  3. Loss transferal. This strategy may also assume the risk due to loss is low, but in contrast with risk acceptance, the organization believes it can buy an insurance policy which will cover the cost of an incident, and the cost of the policy is cheaper than alternative strategies.
  4. Vulnerability elimination. This strategy focuses on driving the vulnerability element of the risk equation to zero or almost zero, through secure coding, proper configuration, patching, and similar methods.
  5. Threat elimination. This strategy focuses on driving the threat element of the risk equation to zero or almost zero, through deterrence, dissuasion, co-option, bribery, conversion, incarceration, incapacitation, or other methods that change the intent and/or capabilities of threat actors. 
  6. Asset value elimination. This strategy focuses on driving the asset value element of the risk equation to zero or almost zero, through minimizing data or resources that might be valued by adversaries.
  7. Interdiction. This is a hybrid strategy which welcomes contributions from vulnerability elimination, primarily, but is open to assistance from loss transferal, threat elimination, and asset value elimination. Interdiction assumes that prevention eventually fails, but that security teams can detect and respond to incidents post-compromise and pre-breach. In other words, some classes of intruders will indeed compromise an organization, but it is possible to detect and respond to the attack before the adversary completes his mission.
As you might expect, I am most closely associated with the interdiction strategy. 

I believe the denial and/or ignorance and loss acceptance strategies are irresponsible.

I believe the loss transferal strategy continues to gain momentum with the growth of cybersecurity breach insurance policies. 

I believe the vulnerability elimination strategy is important but ultimately, on its own, ineffective and historically shown to be impossible. When used in concert with other strategies, it is absolutely helpful.

I believe the threat elimination strategy is generally beyond the scope of private organizations. As the state retains the monopoly on the use of force, usually only law enforcement, military, and sometimes intelligence agencies can truly eliminate or mitigate threats. (Threats are not vulnerabilities.)

I believe asset value elimination is powerful but has not gained the ground I would like to see. This is my "If you can’t protect it, don’t collect it" message. The limitation here is obviously one's raw computing elements. If one were to magically strip down every computing asset into basic operating systems on hardware or cloud infrastructure, the fact that those assets exist and are networked means that any adversary can abuse them for mining cryptocurrencies, or as infrastructure for intrusions, or for any other uses of raw computing power.

Please notice that none of the strategies listed mentions tools, techniques, tactics, or operations. Those are important, but they sit below the level of strategy in the conflict hierarchy. I may have more to say on this in the future.

The App Defense Alliance: Bringing the security industry together to fight bad apps

Fighting against bad actors in the ecosystem is a top priority for Google, but we know there are others doing great work to find and protect against attacks. Our research partners in the mobile security world have built successful teams and technology, helping us in the fight. Today, we’re excited to take this collaboration to the next level, announcing a partnership between Google, ESET, Lookout, and Zimperium. It’s called the App Defense Alliance and together, we’re working to stop bad apps before they reach users’ devices.
The Android ecosystem is thriving with over 2.5 billion devices, but this popularity also makes it an attractive target for abuse. This is true of all global platforms: where there is software with worldwide proliferation, there are bad actors trying to attack it for their gain. Working closely with our industry partners gives us an opportunity to collaborate with some truly talented researchers in our field and the detection engines they’ve built, all with the shared goal of reducing the risk of app-based malware, identifying new threats, and protecting our users.
What will the App Defense Alliance do?
Our number one goal as partners is to ensure the safety of the Google Play Store, quickly finding potentially harmful applications and stopping them from being published.
As part of this Alliance, we are integrating our Google Play Protect detection systems with each partner’s scanning engines. This will generate new app risk intelligence as apps are being queued to publish. Partners will analyze that dataset and act as another, vital set of eyes prior to an app going live on the Play Store.
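In outline, the flow described above fans each queued app out to several detection engines, any of which can flag it before publication. Purely as a hypothetical sketch (none of these types or functions are real Google or partner APIs):

```cpp
// Hypothetical fan-out of a queued app to multiple scanning engines.
#include <string>
#include <vector>

struct Verdict {
    std::string engine;
    bool potentially_harmful;
};

// Each alliance partner plugs in its own detection engine behind a
// common interface (all names here are invented for illustration).
class ScanEngine {
 public:
    virtual ~ScanEngine() = default;
    virtual Verdict Scan(const std::string& apk_path) = 0;
};

// An app queued for publishing is scanned by every engine; any
// "potentially harmful" verdict holds it back for review.
bool ClearedForPublish(const std::string& apk_path,
                       const std::vector<ScanEngine*>& engines) {
    for (ScanEngine* engine : engines) {
        if (engine->Scan(apk_path).potentially_harmful) return false;
    }
    return true;
}
```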
Who are the partners?
All of our partners work in the world of endpoint protection, and offer specific products to protect mobile devices and the mobile ecosystem. Like Google Play Protect, our partners’ technologies use a combination of machine learning and static/dynamic analysis to detect abusive behavior. Multiple heuristic engines working in concert will increase our efficiency in identifying potentially harmful apps.
We hand-picked these partners based on their successes in finding potential threats and their dedication to improving the ecosystem. These partners are regularly recognized in analyst reports for their work.
Industry collaboration is key
Knowledge sharing and industry collaboration are important aspects in securing the world from attacks. We believe working together is the ultimate way we will get ahead of bad actors. We’re excited to work with these partners to arm the Google Play Store against bad apps.
Want to learn more about the App Defense Alliance’s work? Visit us here.

Open Security Controls Assessment Language (OSCAL) Workshop

The National Institute of Standards and Technology is hosting the first of a new series of workshops focusing on the Open Security Controls Assessment Language (OSCAL). OSCAL provides a standardized set of XML-, JSON- and YAML-based formats for use by authors and maintainers of security and privacy control catalogs, control baselines, and system security plans. These formats provide for the automated exchange of control-related information between tools and facilitate the automated assessment of security and privacy controls implemented in an information system. We are seeking attendees who …

SBIR funds internationally marketable quality audit instrument

En’Urga Inc., an advanced diagnostic equipment company, recently found success with a Small Business Innovation Research award from the National Institute of Standards and Technology (NIST) by developing an instrument that will help with quality audits in the pharmaceutical, automotive and aerospace industries. The small business developed a device for estimating the number and size of drops in a spray. This may sound trivial, but the implications for efficiency and accuracy in those industries are profound on an international scale. “People are …