Daily Archives: November 14, 2019

Balancing patient security with healthcare innovation | TECH(talk)

Healthcare is one of the most targeted verticals when it comes to cyberattacks. While healthcare organizations must work to secure patients' sensitive data, analyzing that data can also help improve patient outcomes. Jason James, CIO of Net Health, joins Juliet to discuss why attackers target healthcare organizations, Google's Project Nightingale, and what it means for a tech giant to have access to the medical data of millions of people.

Attention is All They Need: Combatting Social Media Information Operations With Neural Language Models

Information operations have flourished on social media in part because they can be conducted cheaply, are relatively low risk, have immediate global reach, and can exploit the type of viral amplification incentivized by platforms. Using networks of coordinated accounts, social media-driven information operations disseminate and amplify content designed to promote specific political narratives, manipulate public opinion, foment discord, or achieve strategic ideological or geopolitical objectives. FireEye’s recent public reporting illustrates the continually evolving use of social media as a vehicle for this activity, highlighting information operations supporting Iranian political interests such as one that leveraged a network of inauthentic news sites and social media accounts and another that impersonated real individuals and leveraged legitimate news outlets.

Identifying sophisticated activity of this nature often requires the subject matter expertise of human analysts. After all, such content is purposefully and convincingly manufactured to imitate authentic online activity, making it difficult for casual observers to properly verify. The actors behind such operations are not transparent about their affiliations, often undertaking concerted efforts to mask their origins through elaborate false personas and the adoption of other operational security measures. With these operations being intentionally designed to deceive humans, can we turn towards automation to help us understand and detect this growing threat? Can we make it easier for analysts to discover and investigate this activity despite the heterogeneity, high traffic, and sheer scale of social media?

In this blog post, we will illustrate an example of how the FireEye Data Science (FDS) team works together with FireEye’s Information Operations Analysis team to better understand and detect social media information operations using neural language models.

Highlights

  • A new breed of deep neural networks uses an attention mechanism to home in on patterns within text, allowing us to better analyze the linguistic fingerprints and semantic stylings of information operations using modern Transformer models.
  • By fine-tuning an open source Transformer known as GPT-2, we can detect social media posts being leveraged in information operations despite their syntactic differences to the model’s original training data.
  • Transfer learning from pre-trained neural language models lowers the barrier to entry for generating high-quality synthetic text at scale, and this has implications for the future of both red and blue team operations as such models become increasingly commoditized.

Background: Using GPT-2 for Transfer Learning

OpenAI’s updated Generative Pre-trained Transformer (GPT-2) is an open source deep neural network that was trained in an unsupervised manner on the causal language modeling task. The objective of this task is to predict the next word in a sentence from previous context, meaning that a trained model ends up being capable of language generation. If the model can predict the next word accurately, it can be applied to its own output to predict the following word, and so on until it eventually produces fully coherent sentences and paragraphs. Figure 1 depicts an example of language model (LM) predictions we generated using GPT-2. To generate text, single words are successively sampled from distributions of candidate words predicted by the model until it predicts an <|endoftext|> word, which signals the end of generation.


Figure 1: An example GPT-2 generation, prior to fine-tuning, after priming the model with the phrase “It’s disgraceful that.”
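The sampling procedure behind generations like Figure 1 can be reproduced in a few lines. The sketch below is a minimal example using HuggingFace’s transformers library; the "gpt2-medium" checkpoint (the 355 million parameter model referenced later in this post) and the decoding parameters are illustrative assumptions, not the exact settings used for Figure 1.

# A minimal sketch of GPT-2 sampling, assuming HuggingFace's transformers
# library; the checkpoint and decoding parameters are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
model.eval()

# Prime the model with the same phrase used in Figure 1.
input_ids = tokenizer.encode("It's disgraceful that", return_tensors="pt")

# Successively sample candidate words until <|endoftext|> or a length cap.
with torch.no_grad():
    output = model.generate(
        input_ids,
        do_sample=True,                        # sample instead of greedy argmax
        top_k=40,                              # restrict to the 40 most likely candidates
        max_length=100,
        eos_token_id=tokenizer.eos_token_id,   # <|endoftext|> ends the generation
    )
print(tokenizer.decode(output[0]))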

The quality of this synthetically generated text, along with GPT-2’s state-of-the-art accuracy on a host of other natural language processing (NLP) benchmark tasks, is due in large part to the model’s improvements over prior 1) neural network architectures and 2) approaches to representing text. GPT-2 uses an attention mechanism to selectively focus the model on relevant pieces of text sequences and identify relationships between positionally distant words. In terms of architectures, Transformers use attention to decrease the time required to train on enormous datasets; they also tend to model lengthy text and scale better than competing feedforward and recurrent neural networks. In terms of representing text, word embeddings were a popular way to initialize just the first layer of neural networks, but such shallow representations had to be trained from scratch for each new NLP task and whenever new vocabulary was encountered. GPT-2 instead pre-trains all the model’s layers using hierarchical representations, which better capture language semantics and are readily transferable to other NLP tasks and new vocabulary.

This transfer learning method is advantageous because it allows us to avoid starting from scratch for each and every new NLP task. In transfer learning, we start from a large generic model that has been pre-trained for an initial task where copious data is available. We then leverage the model’s acquired knowledge to train it further on a different, smaller dataset so that it excels at a subsequent, related task. This process of training the model further is referred to as fine-tuning, which involves re-learning portions of the model by adjusting its underlying parameters. Fine-tuning not only requires less data compared to training from scratch, but typically also requires less compute time and resources.

In this blog post, we will show how to perform transfer learning from a pre-trained GPT-2 model in order to better understand and detect information operations on social media. Transformers have shown that Attention is All You Need, but here we will also show that Attention is All They Need: while transfer learning may allow us to more easily detect information operations activity, it likewise lowers the barrier to entry for actors seeking to engage in this activity at scale.

Understanding Information Operations Activity Using Fine-Tuned Neural Generations

In order to study the thematic and linguistic characteristics of a common type of social media-driven information operations activity, we first fine-tuned an LM that could perform text generation. Since the pre-trained GPT-2 model's dataset consisted of 40+ GB of Internet text data extracted from 8+ million reputable web pages, its generations display relatively formal grammar, punctuation, and structure corresponding to the text present within that original dataset (e.g. Figure 1). To make the model's generations resemble social media posts, with their shorter length, informal grammar, erratic punctuation, and syntactic quirks including @mentions, #hashtags, emojis, acronyms, and abbreviations, we fine-tuned the pre-trained GPT-2 model on a new language modeling task using additional training data.

For the set of experiments presented in this blog post, this additional training data was obtained from the following open source datasets of identified accounts operated by Russia’s famed Internet Research Agency (IRA) “troll factory”:

  • NBCNews, over 200,000 tweets posted between 2014 and 2017 tied to IRA “malicious activity.”
  • FiveThirtyEight, over 1.8 million tweets associated with IRA activity between 2012 and 2018; we used accounts categorized as Left Troll, Right Troll, or Fearmonger.
  • Twitter Elections Integrity, almost 3 million tweets that were part of the influence effort by the IRA around the 2016 U.S. presidential election.
  • Reddit Suspicious Accounts, consisting of comments and submissions emanating from 944 accounts of suspected IRA origin.

After combining these four datasets, we sampled English-language social media posts from them to use as input for our fine-tuned LM. Fine-tuning experiments were carried out in PyTorch using the 355 million parameter pre-trained GPT-2 model from HuggingFace’s transformers library, and were distributed over up to 8 GPUs.

As opposed to other pre-trained LMs, GPT-2 conveniently requires minimal architectural changes and parameter updates in order to be fine-tuned on new downstream tasks. We simply processed social media posts from the above datasets through the pre-trained model, whose activations were then fed through adjustable weights into a linear output layer. The fine-tuning objective here was the same that GPT-2 was originally trained on (i.e. the language modeling task of predicting the next word, see Figure 1), except now its training dataset included text from social media posts. We also added the <|endoftext|> string as a suffix to each post to adapt the model to the shorter length of social media text, meaning posts were fed into the model according to:

“#Fukushima2015 Zaporozhia NPP can explode at any time
and that's awful! OMG! No way! #Nukraine<|endoftext|>”
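For concreteness, the sketch below shows how this fine-tuning setup might look in PyTorch with HuggingFace’s transformers library. The single example post, optimizer choice, and hyperparameters are simplified assumptions rather than our actual training pipeline.

# A simplified sketch of LM fine-tuning on <|endoftext|>-suffixed posts;
# data handling and hyperparameters here are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
model.train()

posts = [
    "#Fukushima2015 Zaporozhia NPP can explode at any time and that's awful! OMG! No way! #Nukraine",
]
# Suffix each post with <|endoftext|> so the model learns where posts end.
examples = [tokenizer.encode(p + "<|endoftext|>") for p in posts]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
for epoch in range(3):
    for ids in examples:
        input_ids = torch.tensor([ids])
        # Passing labels=input_ids makes the model compute the causal LM
        # loss, i.e. predicting each next word from its preceding context.
        loss = model(input_ids, labels=input_ids).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()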

Figure 2 depicts a few example generations made after fine-tuning GPT-2 on the IRA datasets. Observe how these text generations are formatted like something we might expect to encounter scrolling through social media – they are short yet biting, express certainty and outrage regarding political issues, and contain emphases like an exclamation point. They also contain idiosyncrasies like hashtags and emojis that positionally manifest at the end of the generated text, depicting a semantic style regularly exhibited by actual users.


Figure 2: Fine-tuning GPT-2 using the IRA datasets for the language modeling task. Example generations are primed with the same phrase from Figure 1, “It’s disgraceful that.” Hyphens are added for readability and not produced by the model.

How does the model produce such credible generations? Besides the weights that were adjusted during LM fine-tuning, some of the heavy lifting is also done by the underlying attention scores that were learned by GPT-2’s Transformer. Attention scores are computed between all words in a text sequence, and represent how important each word is in determining the representation of the words around it. To compute attention scores, the Transformer performs a dot product between a Query vector q and a Key vector k:

  • q encodes the current hidden state, representing the word that is searching for other words in the sequence to attend to, i.e. words that may help supply context for it.
  • k encodes the previous hidden states, representing the other words that receive attention from the query word and may contribute a better representation for it in its current context.

Figure 3 displays how this dot product is computed based on single neuron activations in q and k using an attention visualization tool called bertviz. Columns in Figure 3 trace the computation of attention scores from the highlighted word on the left, “America,” to the complete sequence of words on the right. For example, to decide to predict “#” following the word “America,” this part of the model focuses its attention on preceding words like “ban,” “Immigrants,” and “disgrace” (note that the model has broken “Immigrants” into “Imm” and “igrants” because “Immigrants” is an uncommon word relative to its component word pieces within pre-trained GPT-2's original training dataset). The element-wise product shows how individual elements in q and k contribute to the dot product, which encodes the relationship between each word and every other context-providing word as the network learns from new text sequences. The dot product is finally normalized by a softmax function that outputs attention scores to be fed into the next layer of the neural network.


Figure 3: The attention patterns for the query word highlighted in grey from one of the fine-tuned GPT-2 generations in Figure 2. Individual vertical bars represent neuron activations, horizontal bars represent vectors, and lines represent the strength of attention between words. Blue indicates positive values, red indicates negative values, and color intensity represents the magnitude of these values.
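As a toy illustration of the computation traced in Figure 3, the snippet below reproduces the basic arithmetic: element-wise products between a query vector and key vectors sum into dot products, which a softmax then normalizes into attention scores. The vector dimensionality and values here are arbitrary assumptions.

# Toy illustration of attention scoring: dot products between a query
# vector and key vectors, normalized by a softmax. Values are arbitrary.
import torch
import torch.nn.functional as F

d_k = 64                   # per-head dimensionality of q and k (illustrative)
q = torch.randn(1, d_k)    # query: hidden state of the current word
K = torch.randn(5, d_k)    # keys: hidden states of 5 context words

# The element-wise products q * K contribute term-by-term to each dot product.
scores = (q * K).sum(dim=-1) / d_k ** 0.5  # scaled dot products
attention = F.softmax(scores, dim=-1)      # attention scores over context words
print(attention)                           # non-negative weights summing to 1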

Syntactic relationships between words like “America,” “ban,” and “Immigrants” are valuable from an analysis point of view because they can help identify an information operation’s interrelated keywords and phrases. These indicators can be used to pivot between suspect social media accounts based on shared lexical patterns, to help identify common narratives, and even to support more proactive threat hunting. While the above example only scratches the surface of this complex, 355 million parameter model, qualitatively visualizing attention to understand the information learned by Transformers can provide analysts with insights into linguistic patterns being deployed as part of broader information operations activity.

Detecting Information Operations Activity by Fine-Tuning GPT-2 for Classification

In order to further support FireEye Threat Analysts’ work in discovering and triaging information operations activity on social media, we next fine-tuned a detection model to perform classification. Just like when we adapted GPT-2 for a new language modeling task in the previous section, we did not need to make any drastic architectural changes or parameter updates to fine-tune the model for the classification task. However, we did need to provide the model with a labeled dataset, so we grouped together social media posts based on whether they were leveraged in information operations (class label CLS = 1) or were benign (CLS = 0).

Benign, English-language posts were gathered from verified social media accounts, which generally corresponded to public figures and other prominent individuals or organizations whose posts contained diverse, innocuous content. For the purposes of this blog post, information operations-related posts were obtained from the previously mentioned open source IRA datasets. For the classification task, we separated the IRA datasets that were previously combined for LM fine-tuning, and selected posts from only one of them for the group associated with CLS = 1. To perform dataset selection quantitatively, we fine-tuned LMs on each IRA dataset to produce three different LMs while keeping 33% of the posts from each dataset held out as test data. Doing so allowed us to quantify the overlap between the individual IRA datasets based on how well one dataset’s LM was able to predict post content originating from the other datasets.


Figure 4: Confusion matrix representing perplexities of the LMs on their test datasets. The LM corresponding to the GPT-2 row was not fine-tuned; it corresponds to the pretrained GPT-2 model with reported perplexity of 18.3 on its own test set, which was unavailable for evaluation using the LMs. The Reddit dataset was excluded due to the low volume of samples.

In Figure 4, we show the result of computing perplexity scores for each of the three LMs and the original pre-trained GPT-2 model on held out test data from each dataset. Lower scores are better: perplexity captures how well a model predicts the next word in text it has not seen before. The lowest scores fell along the main diagonal of the perplexity confusion matrix, meaning that the fine-tuned LMs were best at predicting the next word on test data originating from within their own datasets. The LM fine-tuned on Twitter’s Elections Integrity dataset displayed the lowest perplexity scores when averaged across all held out test datasets, so we selected posts sampled from this dataset to demonstrate classification fine-tuning.
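For readers who want to reproduce this kind of comparison, perplexity can be computed by exponentiating a model’s average next-word loss on held out text. The sketch below is a minimal version of that scoring; the HuggingFace checkpoint shown is a stand-in for a fine-tuned LM.

# A minimal sketch of perplexity scoring: exponentiate the average
# next-word cross-entropy loss on held out text. Checkpoint is assumed.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")  # or a fine-tuned LM
model.eval()

def perplexity(text):
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # average next-word cross-entropy
    return math.exp(loss.item())

print(perplexity("example held out social media post"))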


Figure 5: (A) Training loss histories during GPT-2 fine-tuning for the classification (red) and LM (grey, inset) tasks. (B) ROC curve (red) evaluated on the held out fine-tuning test set, contrasted with random guess (grey dotted).

To fine-tune for the classification task, we once again processed the selected dataset’s posts through the pre-trained GPT-2 model. This time, activations were fed through adjustable weights into two linear output layers instead of just the single one used for the language modeling task in the previous section. Here, fine-tuning was formulated as a multi-task objective with classification loss together with an auxiliary LM loss, which helped accelerate convergence during training and improved the generalization of the model. We also prepended posts with a new [BOS] (i.e. Beginning Of Sentence) string and suffixed posts with the previously mentioned [CLS] class label string, so that each post was fed into the model according to:

“[BOS]Kevin Mandia was on @CNBC’s @MadMoneyOnCNBC with @jimcramer discussing targeted disinformation heading into the… https://t.co/l2xKQJsuwk[CLS]”

The [BOS] string played a similar delimiting role to the <|endoftext|> string used previously in LM fine-tuning, while the final hidden state at the [CLS] position was fed to the model’s classification layer to predict the class label ∈ {0, 1}. The example social media post above came from the benign dataset, so this sample’s label was set to CLS = 0 during fine-tuning. Figure 5A shows the evolution of classification and auxiliary LM losses during fine-tuning, and Figure 5B displays the ROC curve for the fine-tuned classifier on its test set consisting of around 66,000 social media posts. The convergence of the losses to low values, together with a high Area Under the ROC Curve (i.e. AUC), illustrates that transfer learning allowed this model to accurately distinguish social media posts associated with IRA information operations activity from benign ones. Taken together, these metrics indicate that the fine-tuned classifier should generalize well to newly ingested social media posts, providing analysts a capability they can use to separate signal from noise.
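The sketch below outlines one way this multi-task objective might be wired up: GPT-2’s final hidden state at the [CLS] position feeds a second linear layer for classification, while the causal LM loss is retained as the auxiliary objective. The layer names, loss weighting, and training details are illustrative assumptions, not our exact implementation.

# A sketch of multi-task classification fine-tuning: GPT-2's final hidden
# state at the [CLS] position feeds a second linear (classification) layer,
# with the causal LM loss kept as an auxiliary objective. Illustrative only.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
tokenizer.add_special_tokens({"bos_token": "[BOS]", "cls_token": "[CLS]"})

model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
model.resize_token_embeddings(len(tokenizer))   # account for the new strings
classifier = nn.Linear(model.config.n_embd, 2)  # the second linear output layer

def multitask_loss(post, label, lm_weight=0.5):
    ids = tokenizer.encode(f"[BOS]{post}[CLS]", return_tensors="pt")
    out = model(ids, labels=ids, output_hidden_states=True)
    cls_hidden = out.hidden_states[-1][0, -1]    # final hidden state at [CLS]
    clf_logits = classifier(cls_hidden).unsqueeze(0)
    clf_loss = nn.functional.cross_entropy(clf_logits, torch.tensor([label]))
    return clf_loss + lm_weight * out.loss       # classification + auxiliary LM loss

# The example post above is benign, so its label is 0.
loss = multitask_loss("Kevin Mandia was on @CNBC ...", label=0)
loss.backward()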

Conclusion

In this blog post, we demonstrated how to fine-tune a neural LM on open source datasets containing social media posts previously leveraged in information operations. Transfer learning allowed us to classify these posts with a high AUC score, and FireEye’s Threat Analysts can utilize this detection capability in order to discover and triage similar emergent operations. Additionally, we showed how Transformer models assign scores to different pieces of text via an attention mechanism. This visualization can be used by analysts to tease apart adversary tradecraft based on posts’ linguistic fingerprints and semantic stylings.

Transfer learning also allowed us to generate credible synthetic text with low perplexity scores. One of the barriers actors face when devising effective information operations is adequately capturing the nuances and context of the cultural climate in which their targets are situated. Our exercise here suggests this costly step could be bypassed using pre-trained LMs, whose generations can be fine-tuned to embody the zeitgeist of social media. GPT-2’s authors and subsequent researchers have warned about potential malicious use cases enabled by this powerful natural language generation technology. While our research was conducted for a defensive application in a controlled offline setting using readily available open source data, it reinforces this concern. As trends towards more powerful and readily available language generation models continue, it is important to redouble efforts towards detection, as demonstrated by Figure 5 and by other promising approaches such as Grover.

This research was conducted during a three-month FireEye IGNITE University Program summer internship, and represents a collaboration between the FDS and FireEye Threat Intelligence’s Information Operations Analysis teams. If you are interested in working on multidisciplinary projects at the intersection of cyber security and machine learning, please consider applying to one of our 2020 summer internships.

How to Leverage YAML to Integrate Veracode Solutions Into CI/CD Pipelines

YAML scripting is frequently used to simplify configuration management of CI/CD tools. This blog post shows how YAML scripts for build tools like Circle CI, Concourse CI, GitLab, and Travis can be edited in order to create integrations with the Veracode Platform. Integrating Veracode AppSec solutions into CI/CD pipelines enables developers to embed remediation of software vulnerabilities directly into their SDLC workflows, creating a more efficient process for building secure applications. You can also extend the script template proposed in this blog to integrate Veracode AppSec scanning with almost any YAML-configured build tool.  

Step One: Environment Requirements

The first step is to confirm that your selected CI tool supports YAML-based pipeline definitions; we assume that you are spinning up Docker images to run your CI/CD workflows. Your Docker images can run either Java or .NET, but the scripts included in this article target Java only, so confirm this requirement before moving on to the next step.

Step Two: Setting Up Your YAML File

The second step is to locate the YAML configuration file, which for many CI tools is labeled as config.yml. The basic syntax is the same for most build tools, with some minor variations. The links below contain configuration file scripts for Circle CI, Concourse CI, GitLab, and Travis, which you can also use as templates when adjusting the config files of other build tools.
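For orientation, here is a minimal sketch of how a Circle CI 2.0 config.yml is laid out; the job name and Docker image are illustrative assumptions, and the Veracode-specific steps from Steps Three and Four below slot in as additional entries under steps.

version: 2
jobs:
  build:
    docker:
      - image: circleci/openjdk:8-jdk   # illustrative Java build image
    steps:
      - checkout
      # the Veracode agent download and upload/scan "- run:" steps go here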

Step Three: Downloading the Java API Wrapper

The next step requires downloading the Java API wrapper, which can be done by using the script below.

# grab the Veracode agent
- run:
    name: "Get the Veracode agent"
    command: |
      wget https://repo1.maven.org/maven2/com/veracode/vosp/api/wrappers/vosp-api-wrappers-java/19.2.5.6/vosp-api-wrappers-java-19.2.5.6.jar -O VeracodeJavaAPI.jar

Step Four: Adding Veracode Scan Attributes to Build Pipelines

The final step requires adding to the script all the information required to interact with the Veracode APIs, including data attributes like users’ access credentials, application name, build tool version number, etc. Veracode has created a rich library of APIs that provide numerous options for interacting with the Veracode Platform, and that enable customers and partners to create their own integrations. Information on Veracode APIs is available in the Veracode Help Center.

The script listed below demonstrates how to add these attributes to the Circle CI YAML configuration file so that it can run the uploadandscan API, which uploads the application from Circle CI to the Veracode Platform and triggers the Platform to run the application scan.

- run:
    name: "Upload to Veracode"
    command: >
      java -jar VeracodeJavaAPI.jar
      -vid $VERACODE_API_ID
      -vkey $VERACODE_API_KEY
      -action uploadandscan
      -appname $VERACODE_APP_NAME
      -createprofile false
      -version CircleCI-$CIRCLE_BUILD_NUM
      -filepath upload.zip

In this example, we have defined:

  • name – the name of the YAML workflow step defined in this script.
  • command – the command that invokes the Veracode API, using the wrapper JAR downloaded in the previous step.
  • -vid $VERACODE_API_ID – the user’s Veracode API ID credential.
  • -vkey $VERACODE_API_KEY – the user’s Veracode API key credential.
  • -action uploadandscan – the name of the Veracode API action invoked by this script.
  • -appname $VERACODE_APP_NAME – the name of the customer application targeted for uploading and scanning by the Platform. This application name should be defined identically to the way it is defined in the application profile on the Veracode Platform.
  • -createprofile false – a Boolean that defines whether an application profile should be created automatically if $VERACODE_APP_NAME does not match an existing application profile:
      • If defined as true, an application profile is created automatically when no match is found, and the upload and scan steps continue.
      • If defined as false, no application profile is created, and no further upload and scan actions are taken.
  • -version CircleCI-$CIRCLE_BUILD_NUM – a version label for this scan, derived from the Circle CI build number.
  • -filepath upload.zip – the location where the application file resides prior to interacting with the Veracode API.

With these four steps, Veracode scanning is now integrated into a new CI/CD pipeline.

Integrating application security scanning directly into your build tools enables developers to incorporate security scans into their SDLC workflows. Finding software vulnerabilities earlier in the development cycle allows for simpler remediation and more efficient issue resolution, enabling Veracode customers to build more secure software without compromising on development deadlines.

For additional information on Veracode Integrations, please visit our integrations page.

Caught in the Crossfire of Cyberwarfare

Authored by Dr Sandra Bell, Head of Resilience Consulting EMEA, Sungard Availability Services 

The 2019 National Cyber Security Centre’s (NCSC) Annual Review does not shy away from naming the four key protagonists when it comes to state-based cyber threats against our country. The review cites China, Russia, North Korea and Iran as being actively engaged in cyber operations against our Critical National Infrastructure and other sectors of society. That being said, the main cyber threat to businesses and individual citizens remains organised crime. But with the capability of organised crime matching some state-based activity, and state-based techniques being shared with (if not directly supplied to) cyber criminals, how are we expected to defend ourselves against such sophisticated means of cyberattack?

The answer offered by Ciaran Martin, CEO of the NCSC, in his foreword to the 2019 Review only scratches the surface of the cultural change we need to embrace if we are to become truly cyber resilient to these modern-day threats.

“Looking ahead, there is also the risk that advanced cyberattack techniques could find their way into the hands of new actors, through the proliferation of such tools on the open market. Additionally, we must always be mindful of the risk of accidental impact from other attacks. Cyber security has moved away from the exclusive preserve of security and intelligence agencies towards one that needs the involvement of all of government, and indeed all of society.”

There are a few key points to draw out from this statement. Firstly, there is an acceptance that all of us may be collateral damage in a broader state-on-state cyberattack. Secondly, we should also accept that we may be the victims of very sophisticated cyberattacks that have their roots in state-sponsored development. And finally, we must all accept that cyber security is a collective responsibility and, where businesses are concerned, this responsibility must be accepted and owned at the very top.

Modern life is now dependent on cyber security but we are yet to truly embrace the concept of a cyber secure culture. When we perceived terrorism as the major threat to our security, society quickly adopted a ‘reporting culture’ of anything suspicious, but have we seen the same mindset shift with regards to cyber threats? The man in the street may not be the intended target of a state-based or organised crime cyberattack but we can all easily become a victim, either accidentally as collateral damage or intentionally as low-hanging fruit. Either way we can all, individual citizens and businesses alike, fall victim to the new battleground of cyberwarfare.

What can business do in the face of such threats?
One could argue that becoming a victim of cybercrime is a question of when, not if. This can in turn bring about a sense of inevitability. But what is clear when you see the magnitude of recent Information Commissioner’s Office (ICO) fines is that businesses cannot ignore cyber security issues. A business that embraces the idea of a cybersecurity culture within its organisation will not only be less likely to be hit with a fine from the ICO should things go horribly wrong, but will also be less likely to fall victim in the first place. Cyber security is about doing the basics well: preparing your organisation to protect itself, and responding correctly when an incident occurs.

Protecting against a new kind of warfare
Organisations need to prepare to potentially become the unintended targets of broad-brush cyberattacks, protecting themselves against the impact these could have on their operations and customer services. With each attack growing in complexity, businesses must respond in a correspondingly swift and sophisticated manner. Defence mechanisms need to be as scalable as the nefarious incidents they may be up against. To give themselves the best chance of ensuring that an attack doesn’t debilitate them and the country in which they operate, there are a few key things that businesses can do:

1) Act swiftly
A cyberattack requires an immediate response from every part of a business. Therefore, when faced with a potential breach, every individual must know how to react precisely and quickly. IT and business teams will need to locate and close any vulnerabilities in IT systems or business processes and switch over to Disaster Recovery arrangements if they believe there has been data corruption. Business units need to invoke their Business Continuity Plans, and the executive Crisis Management Team needs to assemble. This team needs to be rehearsed in cyber-related crisis events, not just the more traditional Business Continuity type of crisis.

Both the speed and effectiveness of a response will be greatly improved if businesses have at their fingertips the results of a Data Protection Impact Assessment (DPIA) that details all the personal data collected, processed and stored, categorised by level of sensitivity. If companies are scrambling around, unsure of who should be taking charge and what exactly should be done, then the damage caused by the data encryption will only be intensified.

2) Isolate the threat
Value flows from business to business through networks and supply chains, but so do malware infections. Having adequate back-up resources not only restores business availability in the wake of an attack, but also acts as a barrier to further disruption in the network. The key element that cybercriminals and hacking groups have worked to iterate on is their delivery vector.

Phishing attempts are more effective if they’re designed using the techniques employed in social engineering. A study conducted by IBM found that human error accounts for more than 95 per cent of security incidents. The majority of the most devastating attacks from recent years have been of the network-based variety, i.e. worms and bots.

Right now, we live in a highly connected world with hyper-extended networks comprised of a multitude of mobile devices and remote workers logging in from international locations. Having a crisis communication plan that sets out in advance who needs to be contacted should a breach occur will mean that important stakeholders based in different locations don’t get forgotten in the heat of the moment.

3) Rely on resilience
Prevention is always better than cure. Rather than waiting until a data breach occurs to discover the hard way which threats and vulnerabilities are present in IT systems and business processes, act now.

It’s good business practice to continuously monitor risk, including information risk, and ensure that the controls are adequate. However, in the fast-paced cyber world where the threats are constantly changing this can be difficult in practice.

With effective Disaster Recovery and cyber focused Business Continuity practices written into business contingency planning, organisations remain robust and ready to spring into action to minimise the impact of a data breach.

The most effective way to test business resilience without unconscious bias risking false-positive results is via evaluation by external security professionals. By conducting physical and logical penetration testing and regularly checking an organisation’s susceptibility to social engineering, effective business continuity can be ensured, and back-up solutions can be rigorously tested.

Cyber Resilience must be woven into the fabric of business operations, including corporate culture itself. Crisis leadership training ensures the C-suite has the skills, competencies and psychological coping strategies that help lead an organisation through the complex, uncertain and unstable environment that is caused by a cyberattack, emerging the other side stronger and more competitive than ever before.

A look ahead to the future
A cyberattack is never insignificant, and never expected, but if a business suffers one it is important to inform those affected as quickly as possible. Given the scale at which these attacks are being launched, this has never been more true. It’s vital in the current age of state-backed attacks that businesses prioritise resilience lest they be caught in the crossfire. In a business landscape defined by hyper-extended supply chains, having a crisis communication plan that sets out in advance who needs to be contacted should a breach occur will mean that important stakeholders don’t get forgotten in the heat of the moment and that the most important assets remain protected.