Monthly Archives: May 2018

Application Development GDPR Compliance Guidance

Last week IBM developerWorks released a three-part guidance series I wrote to help application developers build GDPR-compliant applications.

Developing GDPR Compliant Applications Guidance

The GDPR
The General Data Protection Regulation (GDPR) was created by the European Commission and Council to strengthen and unify Europe's data protection law, replacing the 1995 European Data Protection Directive. Although the GDPR is a European Union (EU) regulation, it applies to any organization outside of Europe that handles the personal data of EU citizens. This includes the development of applications that are intended to process the personal information of EU citizens. Therefore, organizations that provide web applications, mobile apps, or traditional desktop applications that can indirectly process EU citizens' personal data or allow EU citizens to sign in are subject to the GDPR's privacy obligations. Organizations face the prospect of powerful sanctions should applications fail to comply with the GDPR.

Part 1: A Developer's Guide to the GDPR
Part 1 summarizes the GDPR and explains how the privacy regulation impacts and applies to developing and supporting applications that are intended to be used by European Union citizens.

Part 2: Application Privacy by Design
Part 2 provides guidance for developing applications that are compliant with the European Union’s General Data Protection Regulation. 

Part 3: Minimizing Application Privacy Risk

Part 3 provides practical application development techniques that can alleviate an application's privacy risk.

APT28 Rollercoaster: The Lowdown on Hijacked LoJack

Recently, the ASERT team at Arbor Networks published a report on an old version of the Absolute Software product, Absolute LoJack for laptops, being illicitly modified by suspected APT28 actors. The LoJack implant, previously known as Computrace and brought into the spotlight in 2014 at Black Hat USA because it was enabled on some brand-new laptops, is an anti-theft technology used in modern laptops to allow remote tracing, data deletion, and system lockdown.

Based on information from a number of reports, ASERT estimates with moderate confidence that the APT28 group, also known as Fancy Bear, has maliciously modified and deployed Absolute LoJack samples to support its own campaigns against government and defense-related contractors. (See the advice from Absolute Software regarding these modified samples.) As sophisticated implants often reveal non-trivial dynamic behaviors, we began an investigation to analyze this threat in more detail.

A New Sample

From the IOCs in the ASERT report, we hunted through our knowledge base for additional samples using the search query below:

code_hash: '21B04C7DF33277B9927D0D3E3ADC545D' AND user_agent: 'Mozilla/4.0 (compatible; MSIE 6.0;)' AND NOT domain: 'search.namequery.com'

The code hash search term allowed us to query for all LoJack implants. We filtered on user agent while removing the resolved domain used by all legitimate implants. The search uncovered a new and previously unmentioned sample, below (see Figure 1 for the analysis overview):

SHA1: 09d2e2c26247a4a908952fee36b56b360561984f
Compilation time: 2008-04-01 19:35:07
C&C server: webstp[.]com

This new sample was first detected in the second half of 2017 and relies on a previously unknown C&C server: the webstp[.]com domain originally resolved to the IP address 185.94.191[.]65, but was later sinkholed at sinkhole.tigersecurity.pro via 91.134.203[.]113 and later 54.36.134[.]247.

Figure 1: Analysis overview of the newly discovered hijacked LoJack sample.

Our investigation started by focusing on that specific domain and IP association. While we had no reputation information related to the IP 185.94.191[.]65, whois historical data showed that webstp[.]com had been registered on August 11, 2017 with the hosting company ititch[.]com (Figure 2).

Figure 2: Registration details for webstp.com before sinkholing (from whoxy.com).

In their own words:

“IT Itch is different. We’re all about helping people, businesses and activist groups keep their digital identity anonymous, their online data private and their websites always online. We’re doing this by actively ignoring and impeding digital data requests and take-down notices, and we’re pretty good at it – we have a 100% non-compliance rate. We’re also helping domain name owners and website operators understand more about data privacy protection, and offering products and services designed to keep your identity secret, such as anonymous web hosting and private domain registration.”

Whether intentionally or not, this desire to support internet freedoms has attracted cyberthreat actors because of the operational anonymity and bulletproof infrastructure it offers.

The Fallback Revelation

During our investigation, we decided to verify whether we could detect other hijacked LoJack samples being used in the wild. As the traffic from hijacked agents is otherwise indistinguishable from that of legitimate agents, we deployed a network signature to monitor HTTP traffic with the same distinct User-Agent header but connecting to other (non-legitimate) domains. Unfortunately, this approach quickly proved to be prone to false positives; in particular, in a small number of legitimate LoJack interactions, the contacted host of the HTTP request was a plain IP address instead of a domain name (see Figure 3).
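
To make the approach concrete, here is a rough sketch (not the actual signature we deployed) of how such a rule could be approximated over HTTP logs; the log format and column names are assumptions.

```python
# Hypothetical sketch, not the actual Lastline signature: flag HTTP requests that
# carry the LoJack User-Agent but contact a host other than the known legitimate
# infrastructure. The CSV log format ('user_agent' and 'host' columns) is assumed.
import csv

LOJACK_UA = "Mozilla/4.0 (compatible; MSIE 6.0;)"
LEGITIMATE_HOSTS = {"search.namequery.com"}

def suspicious_lojack_requests(http_log_path):
    """Yield log rows whose User-Agent matches LoJack but whose Host is not whitelisted."""
    with open(http_log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["user_agent"] == LOJACK_UA and row["host"] not in LEGITIMATE_HOSTS:
                yield row   # note: plain-IP hosts used by legitimate agents will also match
```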

Figure 3: A legitimate LoJack network interaction using the IP address as contacted hostname.

The fact that a legitimate sample was including this fallback mechanism led us to wonder whether the malicious binary was doing the same. As Figure 4 shows, that was indeed the case: it turns out Computrace samples embed both an XORed IP address and an XORed domain. The XORed IP address is used as a backup communication channel when the domain callout fails (the logic can be seen in Figure 5).
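
As a purely illustrative sketch of the mechanism, the snippet below decodes a single-byte XOR-obfuscated configuration; the key and encoded values are invented for the example and are not those used by the real Computrace/LoJack agent.

```python
# Illustrative only: decode a single-byte XOR-obfuscated C&C configuration.
# The key and the "embedded" values are made up; they are not the real agent's.
def xor_decode(blob: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in blob)

KEY = 0xB5
encoded_domain = bytes(b ^ KEY for b in b"webstp.com")      # stand-in for data read from the binary
encoded_ip     = bytes(b ^ KEY for b in b"185.94.191.65")

print(xor_decode(encoded_domain, KEY).decode())   # -> webstp.com
print(xor_decode(encoded_ip, KEY).decode())       # -> 185.94.191.65 (fallback channel)
```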

Figure 4: The XORed IP and domain name.

Figure 5: The logic used to decide when to fallback to an IP address.

We also checked all the other hijacked LoJack implants, and in every case the binary included a fallback IP, meaning that the implant would have been able to connect to the C&C regardless of whether the domain had been sinkholed or blacklisted. In only two cases did the fallback IP address differ from the address the domain pointed to. Table 1 shows detailed network IOCs for all known hijacked LoJack samples.

Sample SHA1 | Domain | Original resolving IP | Fallback IP
1470995de2278ae79646d524e7c311dad29aee17 | sysanalyticweb[.]com | 54.37.104[.]106 | 93.113.131[.]103
10d571d66d3ab7b9ddf6a850cb9b8e38b07623c0 | sysanalyticweb[.]com | 54.37.104[.]106 | 93.113.131[.]103
397d97e278110a48bd2cb11bb5632b99a9100dbd | elaxo[.]org | 86.106.131[.]54 | 86.106.131[.]54
ddaa06a4021baf980a08caea899f2904609410b9 | ikmtrust[.]com | 185.144.82[.]239 | 185.144.82[.]239
2529f6eda28d54490119d2123d22da56783c704f | lxwo[.]org | 185.86.149[.]54 | 185.86.149[.]54
09d2e2c26247a4a908952fee36b56b360561984f | webstp[.]com | 185.94.191[.]65 | 185.94.191[.]65

Table 1: Hijacked LoJack samples and their C&C infrastructure (including the fallback IP). The last row (webstp[.]com) is the sample we uncovered.

Attribution

We assess with high confidence that this specific hijacked Absolute LoJack for laptops sample is related to the campaign recently unveiled by Arbor Networks. Our reasoning follows:

  • The domain webstp[.]com is associated with an Absolute LoJack agent that has the exact same compile time as the other hijacked LoJack samples, thus matching the criteria used to cluster the original artifacts.
  • The domains sysanalyticweb[.]com and webstp[.]com share the same registrar, ititch[.]com, a company claiming to provide bulletproof hosting that has been used in other APT28 campaigns.
  • The domain in the registrant email address for webstp[.]com is centrum[.]cz, which appears frequently in the registrant email addresses of other domains linked to previous APT28 campaigns associated with ititch[.]com.
  • The Absolute LoJack for laptops agent connecting to webstp[.]com has been seen in the wild during the same time frame as all the other samples.

Conclusions

In this blog post, we further analyzed the C&C infrastructure used by the Absolute LoJack for laptops samples illicitly modified by APT28. We unveiled a new sample, submitted by organizations whose victimology is consistent with other LoJack targets, along with new domains that are part of the C&C infrastructure. We also discovered that all Absolute LoJack for laptops samples include a fallback mechanism that increases their robustness against sinkholing. The infrastructure used by our new sample is linked to pro-European Union companies and/or ex-Soviet Union states. We will continue to publish new IOCs for the LoJack campaign on this blog as future submissions appear.

The post APT28 Rollercoaster: The Lowdown on Hijacked LoJack appeared first on Lastline.

Attacks against machine learning — an overview

This blog post surveys the attack techniques that target AI (artificial intelligence) systems and explains how to protect against them.

At a high level, attacks against classifiers can be broken down into three types:

  • Adversarial inputs, which are specially crafted inputs that have been developed with the aim of being reliably misclassified in order to evade detection. Adversarial inputs include malicious documents designed to evade antivirus, and emails attempting to evade spam filters.
  • Data poisoning attacks, which involve feeding adversarial training data to the classifier. The most common attack type we observe is model skewing, where the attacker attempts to pollute training data in such a way that the boundary between what the classifier categorizes as good data and what it categorizes as bad data shifts in the attacker's favor. The second type of attack we observe in the wild is feedback weaponization, which attempts to abuse feedback mechanisms in an effort to manipulate the system toward misclassifying good content as abusive (e.g., competitor content or as part of revenge attacks).
  • Model stealing techniques, which are used to “steal” (i.e., duplicate) models or recover training data membership via blackbox probing. These can be used, for example, to steal stock market prediction models or spam filtering models in order to use them, or to optimize more efficiently against them.

This post explores each of these classes of attack in turn, providing concrete examples and discussing potential mitigation techniques.

This post is the fourth, and last, post in a series of four dedicated to providing a concise overview of how to use AI to build robust anti-abuse protections. The first post explained why AI is key to building robust protection that meets user expectations and increasingly sophisticated attacks. Following the natural progression of building and launching an AI-based defense system, the second post covered the challenges related to training classifiers. The third one looked at the main difficulties faced when using a classifier in production to block attacks.

This series of posts is modeled after the talk I gave at RSA 2018. Here is a re-recording of this talk:

You can also get the slides here.

Disclaimer: This post is intended as an overview for everyone interested in the subject of harnessing AI for anti-abuse defense, and it is a potential blueprint for those who are making the jump. Accordingly, this post focuses on providing a clear high-level summary, deliberately not delving into technical details. That said, if you are an expert, I am sure you’ll find ideas, techniques and references that you haven’t heard about before, and hopefully you’ll be inspired to explore them further.

Adversarial inputs

Adversaries constantly probe classifiers with new inputs/payloads in an attempt to evade detection. Such payloads are called adversarial inputs because they are explicitly designed to bypass classifiers.

[Figure: Gmail adversarial input]

Here is a concrete example of an adversarial input: A few years back, one clever spammer realized that if the same multipart attachment appeared multiple times in an email, Gmail would only display the last attachment, as visible in the screenshot above. He weaponized this knowledge by adding an invisible first multipart that contained many reputable domains in an attempt to evade detection. This attack is a variation of the class of attacks known as keyword stuffing.

More generally, sooner or later classifiers face two types of adversarial input: mutated inputs, which are variations of known attacks specifically engineered to avoid your classifier, and zero-day inputs, which are never-before-seen payloads. Let’s explore each of these in turn.

Mutated inputs

[Figure: Blackmarket services]

Over the last few years we have seen an explosion of underground services designed to help cybercriminals craft undetectable payloads, best known in the underworld as “FUD” (fully undetectable) payloads. These services range from testing services that let authors check their payloads against all anti-virus software, to automated packers that obfuscate malicious documents in a way that makes them undetectable (with a warranty!). The screenshot above showcases two such services.

This resurgence of underground services specializing in payload crafting emphasizes the fact that:

Attackers actively optimize their attacks to minimize classifier detection rates.

Therefore, it is essential to develop detection systems in such a way that they make it hard for attackers to perform payload optimization. Below are three key design strategies to help with that.

1. Limit information leakage

[Figure: CommitStrip]

The goal here is to ensure that attackers gain as little insight as possible when they are probing your system. It’s important to keep the feedback minimal and delay it as much as possible; for example, avoid returning detailed error codes or confidence values.
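
As a minimal sketch (with a hypothetical API), a public endpoint might expose only a coarse, delayed verdict rather than raw scores or detailed error codes:

```python
# Minimal sketch with a hypothetical API: expose only a coarse, delayed verdict,
# never raw scores, confidence values, or detailed error codes.
import random
import time

def public_verdict(internal_score: float) -> str:
    time.sleep(random.uniform(0.5, 2.0))                          # delay/jitter the feedback to slow probing
    return "rejected" if internal_score >= 0.5 else "accepted"    # coarse answer only
```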

2. Limit probing

The goal of this strategy is to slow down attackers by limiting how many payloads they can test against your systems. By restricting how much testing an attacker can perform against your systems, you’ll effectively reduce the rate at which they can devise harmful payloads.

[Figure: StackOverflow CAPTCHA]

This strategy is mostly carried out by implementing rate limiting on scarce resources such as IPs and accounts. A classic example of such rate limiting is to ask the user to solve a CAPTCHA if they post too frequently, as illustrated above.
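
A simplified sketch of this kind of rate limiting, with arbitrary thresholds, might look like the following:

```python
# Simplified sketch of rate limiting on a scarce resource (an account or an IP):
# after too many posts inside a time window, require a CAPTCHA. Thresholds are arbitrary.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_POSTS_PER_WINDOW = 5
_recent_posts = defaultdict(deque)        # key (user id or IP) -> timestamps of recent posts

def needs_captcha(key: str) -> bool:
    now = time.time()
    timestamps = _recent_posts[key]
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()              # drop events outside the window
    timestamps.append(now)
    return len(timestamps) > MAX_POSTS_PER_WINDOW
```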

[Figure: Account blackmarket]

The negative side effect of such active rate limiting is that it creates an incentive for bad actors to create fake accounts and use compromised user computers to diversify their pool of IPs. The widespread use of rate limiting throughout the industry is a major driving factor behind the rise of very active blackmarket forums where accounts and IP addresses are routinely sold, as visible in the screenshot above.

3. Ensemble learning

Last, but not least, it’s important to combine various detection mechanisms to make it harder for attackers to bypass the overall system. Using ensemble learning to combine different types of detection methods, such as reputation-based ones, AI classifiers, detection rules, and anomaly detection, improves the robustness of your system because bad actors have to craft payloads that avoid all those mechanisms at once.

[Figure: Gmail classifier breakdown]

For example, as shown in the screenshot above, to ensure the robustness of the Gmail classifier against spammers we combine multiple classifiers and auxiliary systems. Such systems include a reputation system, a large linear classifier, a deep learning classifier and a few other secret techniques ;)
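
As a rough illustration (not Gmail's actual logic), combining heterogeneous detectors could look something like this:

```python
# Rough illustration of ensemble-style combination (weights and threshold are arbitrary,
# not Gmail's actual logic): a payload must evade every mechanism at once to get through.
def combined_verdict(reputation_score: float, classifier_score: float,
                     anomaly_score: float, rule_hit: bool) -> bool:
    """Scores are in [0, 1]; returns True when the item should be blocked."""
    if rule_hit:                          # hand-written detection rules short-circuit the decision
        return True
    blended = 0.4 * classifier_score + 0.3 * reputation_score + 0.3 * anomaly_score
    return blended >= 0.7
```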

Examples of adversarial attacks against deep neural networks

[Figure: Adversarial input example]

A very active related research field studies how to craft adversarial examples that fool deep neural networks (DNNs). It is now trivial to create imperceptible perturbations that completely fool DNNs, as shown in the screenshot above, taken from this paper.

Recent work suggests that CNNs are vulnerable to adversarial input attacks because they tend to learn superficial dataset regularities instead of generalizing well and learning high-level representations that would be less susceptible to noise.

This type of attack affects all DNNs, including reinforcement-learning-based ones, as highlighted in the video above. To learn more about such attacks, you should read Ian’s intro post on the subject or start experimenting with CleverHans.

From a defender’s perspective, this type of attack has proven (so far) to be very problematic because we don’t yet have an effective way of defending against it. Fundamentally, we don’t have an efficient way to get DNNs to generate good output for all inputs. Getting them to do so is incredibly hard because DNNs perform nonlinear/non-convex optimization within very large spaces, and we have yet to teach them to learn high-level representations that generalize well. You can read Ian and Nicolas’ in-depth post to learn more about this.
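
For readers who want to experiment, here is a minimal FGSM-style sketch; it assumes you already have a loss_gradient(x, y) function for your model, which is not shown:

```python
# Minimal FGSM-style sketch in NumPy. It assumes you already have a function
# loss_gradient(x, y) that returns dLoss/dx for your model; that part is not shown.
import numpy as np

def fgsm_perturb(x: np.ndarray, y, loss_gradient, epsilon: float = 0.01) -> np.ndarray:
    """Nudge x in the direction that increases the model's loss for label y."""
    grad = loss_gradient(x, y)
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)       # keep pixel values in a valid range
```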

Zero-day inputs

The other obvious category of adversarial inputs that can completely throw off classifiers is new attacks. New attacks don’t happen very often, but it’s still important to know how to deal with them, as they can be quite devastating.

Incentive for new attacks

While there are many unpredictable underlying reasons why new attacks emerge, in our experience the following two types of events are likely to trigger their emergence:

New product or feature launch: By nature, adding functionality opens up new attack surfaces that attackers are very quick to probe. This is why it is essential (and yet hard) to provide day-zero defenses when a new product launches.

Increased incentive: While rarely discussed, many new attack surges are driven by an attack vector becoming very profitable. A very recent example of such behavior is the rise in abuse of cloud services such as Google Cloud to mine cryptocurrencies in response to the surge in bitcoin prices in late 2017.

As bitcoin prices skyrocketed past $10,000, we saw a surge of new attacks that attempted to steal Google cloud compute resources to mine. I will cover how we detected those new attacks a little later in this post.

[Figure: Black swan]

All in all, the Black Swan theory formalized by Nassim Taleb applies to AI-based defenses, as it does to any other type of defense:

Sooner or later an unpredictable attack will throw off your classifier and it will have a major impact.

However, just because you can’t predict which attack will throw off your classifier, or when such an attack will strike, doesn’t mean you are powerless. You can plan for such attacks and put contingency plans in place to mitigate them. Here are a few directions to explore while preparing for black swan events.

1. Develop an incident response process

The first thing to do is to develop and test an incident recovery process to ensure you react appropriately when you get caught off guard. This includes, but is not limited to, having the necessary controls in place to delay or halt processing while you are debugging your classifiers, and knowing who to call.

The (free) Google SRE (Site Reliability Engineering) handbook has a chapter on managing incidents, and another on emergency responses. For a more cybersecurity-centric document, you should look at the NIST (National Institute of Standards and Technology) cybersecurity event recovery guide. Finally, if you’d rather watch a talk instead, take a look at the video on how Google runs its Disaster Recovery Training (DiRT) program, and the video on how Facebook does incident response (the recording doesn’t show the slides).

2. Use transfer learning to protect new products

The obvious key difficulty is that you don’t have past data to train your classifiers on. One way to mitigate this issue is to make use of transfer learning, which allows you to reuse existing data from one domain and apply it to another.

For example, if you’re dealing with images you can leverage an existing pre-trained model, while if you’re dealing with text you can use public datasets such as the Jigsaw dataset of toxic comments.
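
As an illustration, a transfer-learning setup for an image classifier might look like the sketch below; TensorFlow/Keras and MobileNetV2 are one possible choice, not something prescribed here:

```python
# Sketch of transfer learning for an image-abuse classifier, shown with TensorFlow/Keras
# and a pre-trained MobileNetV2 as one possible choice (both are assumptions, not
# something prescribed by this post).
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                          input_shape=(224, 224, 3))
base.trainable = False                                   # reuse the pre-trained features as-is

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # new head: abusive vs. benign
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(small_new_product_dataset, ...)              # only the new head needs much data
```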

3. Leverage anomaly detection

Anomaly detection algorithms can be used as a first line of defense because, by nature, a new attack will create a never-before-seen set of anomalies related to how it exploits your system.
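
As a small illustration, an off-the-shelf unsupervised detector can be trained on per-entity behavioral features; the features and data below are placeholders:

```python
# Small illustration: unsupervised anomaly detection with scikit-learn's IsolationForest
# over per-entity behavioral features. The features and data below are placeholders so
# the snippet runs; use real behavioral telemetry in practice.
import numpy as np
from sklearn.ensemble import IsolationForest

# rows = entities (accounts, instances, ...), columns = e.g. [requests/min, cpu_usage, egress_mb]
X_train = np.random.rand(1000, 3)
detector = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

X_new = np.random.rand(5, 3)
flags = detector.predict(X_new)           # -1 = anomalous, 1 = normal
```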

[Figure: WinFall lottery]

A historical example of a novel attack triggering a swath of new anomalies is the “MIT gambling syndicate” attack against the Massachusetts WinFall lottery game.

Back in 2005, multiple gambling syndicates discovered a flaw in the WinFall lottery system: when the jackpot was split among all participants, you would earn $2.30 on average for each $2 ticket you bought. This split, known as a “roll-down,” occurred every time the pool of money exceeded $2 million.

To avoid sharing the gains with other groups, the MIT gang decided to trigger a roll-down well ahead of time by performing a massive buyout of tickets three weeks before a roll-down was expected. Obviously, this massive surge of tickets, bought from very few retailers, created a host of anomalies that were detected by the lottery organization.

[Figure: Bitcoin visual]

More recently, as alluded to earlier in this post, when Bitcoin prices rose like crazy in 2017 we started to see an army of bad actors trying to benefit from this surge by mining using Google cloud instances for free. To acquire instances for “free,” they attempted to exploit many attack vectors that included trying to abuse our free tier, use stolen credit cards, compromise legitimate cloud users’ computers, and hijack cloud users’ accounts via phishing.

[Figure: YouTube tutorial for cloud mining]

Very quickly, this type of attack became so popular that it led to thousands of people watching YouTube tutorials on how to mine on Google cloud (which is unprofitable under normal circumstances). Obviously, we couldn’t anticipate that abusive mining would become such a huge issue.

[Figure: Cloud miner detection]

Fortunately, we did have an anomaly detection system in place for Google Cloud instances when that happened. As expected, and as shown in the chart above (taken directly from our anomaly detection system dashboard), when instances start mining, their temporal behavior shifts drastically because the associated resource usage is fundamentally different from the resource usage exhibited by uncompromised cloud instances. We were able to use this shift detection to curb this new attack vector, ensure that our cloud platform remained stable, and warn GCE clients that they had been compromised.
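
As a toy illustration only (our production system is more involved), a drastic temporal shift in a resource-usage series can be flagged with a simple rolling z-score:

```python
# Toy illustration only (not the production detector): flag a drastic shift in an
# instance's CPU-usage time series using a rolling mean/std z-score.
import numpy as np

def shift_points(cpu_usage, window: int = 60, z_threshold: float = 6.0):
    """Return indices where usage deviates sharply from the recent baseline."""
    cpu_usage = np.asarray(cpu_usage, dtype=float)
    flagged = []
    for t in range(window, len(cpu_usage)):
        baseline = cpu_usage[t - window:t]
        mu, sigma = baseline.mean(), baseline.std() + 1e-9
        if abs(cpu_usage[t] - mu) / sigma > z_threshold:
            flagged.append(t)
    return flagged
```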

Data poisoning

The second class of attacks faced by classifiers relates to adversaries attempting to poison your data to make your system misbehave.

Model skewing

The first type of poisoning attack is called model skewing, where attackers attempt to pollute training data to shift the learned boundary between what the classifier categorizes as good input, and what the classifier categorizes as bad input. For example, model skewing can be used to try to pollute training data to trick the classifier to mark specific malicious binaries as benign.

Concrete example

[Figure: Gmail adversarial skewing attack]

In practice, we regularly see some of the most advanced spammer groups trying to throw the Gmail filter off-track by reporting massive amounts of spam emails as not spam. As shown in the figure above, between the end of November 2017 and early 2018, there were at least four malicious large-scale attempts to skew our classifier.

Thus, when designing an AI-based defense, you need to account for the fact that:

Attackers actively attempt to shift the learned boundary between abusive and legitimate use in their favor.

Mitigation strategies

To prevent attackers from skewing models, you can leverage the following three strategies:

  • Use sensible data sampling: You need to ensure that a small group of entities, whether IPs or users, can't account for a large fraction of the model training data. In particular, be careful not to over-weight false positives and false negatives reported by users. This can be achieved by limiting the number of examples each user can contribute, or by using decaying weights based on the number of examples reported (a minimal sketch of such sampling follows this list).
  • Compare your newly trained classifier to the previous one to estimate how much it has changed. For example, you can perform a dark launch and compare the two outputs on the same traffic. Alternative options include A/B testing on a fraction of the traffic and backtesting.
  • Build a golden dataset that your classifier must accurately predict in order to be launched into production. This dataset ideally contains a set of curated attacks and normal content that are representative of your system. This process ensures that you can detect when an attack has generated a significant regression in your model before it negatively impacts your users.
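
Here is the minimal sampling sketch referenced in the first bullet above; the per-user cap and decaying weights are arbitrary illustrative choices:

```python
# Sketch of sensible data sampling (cap and weighting are arbitrary illustrative choices):
# limit how many reported examples a single user can contribute and decay their weight.
from collections import defaultdict

MAX_REPORTS_PER_USER = 10

def sample_training_reports(reports):
    """reports: iterable of (user_id, example). Returns (example, weight) pairs."""
    per_user = defaultdict(int)
    sampled = []
    for user_id, example in reports:
        per_user[user_id] += 1
        n = per_user[user_id]
        if n <= MAX_REPORTS_PER_USER:
            sampled.append((example, 1.0 / n))   # each extra report from the same user counts less
    return sampled
```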

Feedback weaponization

The second type of data poisoning attack is the weaponization of user feedback systems to attack legitimate users and content. As soon as attackers realize that you are using user feedback in one way or another for penalization purposes, they will try to exploit this fact to their advantage.

Concrete examples

[Figure: Mobile assault]

One of the most egregious attempts to weaponize user feedback we witnessed in 2017 was a group of 4chan users who decided to tank the CNN app ranking on the Play Store and App Store by leaving thousands of 1-star ratings.

[Figure: Blackmarket feedback weaponization]

Feedback weaponization is actively used by bad actors for a number of reasons, including: attempting to take down the competition, exacting revenge, and covering their tracks. The screenshot above showcases a blackmarket post discussing how to “use Google” to take out a competitor.

Accordingly, while building your system you need to work under the assumption that:

Any feedback mechanism will be weaponized to attack legitimate users and content.

Mitigation strategies

Here are the two key points to keep in mind while working on mitigating feedback weaponization:

  • Don’t create a direct loop between feedback and penalization. Instead, make sure the feedback’s authenticity is assessed and combined with other signals before making a decision (a minimal sketch of this follows the list).
  • Don’t assume that the owner of the content benefiting from the abuse is responsible for it. For example, the fact that a photo has hundreds of fake likes doesn’t mean the owner bought them. We have seen countless cases where attackers juiced up legitimate content in an attempt to cover their tracks or to get us to penalize innocent users.
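
Here is the minimal sketch referenced in the first bullet above; every signal, weight, and threshold is hypothetical:

```python
# Hypothetical sketch: raw feedback never drives penalization directly. Each report is
# weighted by the reporter's reputation and combined with an independent abuse signal.
def penalization_score(reporter_reputations, other_abuse_signal):
    """reporter_reputations: list of values in [0, 1]; other_abuse_signal in [0, 1]."""
    # Smoothed, reputation-weighted feedback: a pile of low-reputation reports counts little.
    weighted_feedback = sum(reporter_reputations) / (len(reporter_reputations) + 10)
    score = 0.5 * min(weighted_feedback, 1.0) + 0.5 * other_abuse_signal
    return score    # route high scores to human review rather than automatic penalties
```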

Model-stealing attacks

This post would not be complete without mentioning attacks that aim to recover models or information about the data used during training. Such attacks are a key concern because models represent valuable intellectual property assets that are trained on some of a company’s most valuable data, such as financial trades, medical information or user transactions.

Ensuring the security of models trained on user-sensitive data such as cancer-related data is paramount, as such models can potentially be abused to disclose sensitive user information.

Attacks

The two main model-stealing attacks are:

Model reconstruction: The key idea here is that the attacker can recreate a model by probing the public API and gradually refining their own model by using the target as an oracle. A recent paper showed that such attacks appear to be effective against most AI algorithms, including SVMs, random forests, and deep neural networks.

Membership leakage: Here, the attacker builds shadow models that enable them to determine whether a given record was used to train the target model. While such attacks don’t recover the model itself, they can disclose sensitive information.
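
As an illustration of the reconstruction idea, the sketch below queries an assumed black-box victim_predict API as an oracle and fits a local surrogate on its answers:

```python
# Illustrative sketch of model reconstruction: query an assumed black-box function
# victim_predict (the public API) as an oracle and fit a local surrogate on its answers.
import numpy as np
from sklearn.linear_model import LogisticRegression

def steal_model(victim_predict, n_queries: int = 10_000, n_features: int = 20):
    X = np.random.rand(n_queries, n_features)   # probe points; smarter query strategies do better
    y = victim_predict(X)                       # labels returned by the victim's API
    surrogate = LogisticRegression(max_iter=1000).fit(X, y)
    return surrogate                            # local copy approximating the victim's behavior
```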

Defense

[Figure: PATE aggregation]

The best-known defense against model-stealing attacks is PATE (latest paper), a privacy framework developed by Ian Goodfellow et al. As shown in the figure above, the key idea behind PATE is to partition the data and train multiple models that are combined to make a decision. This decision is then perturbed with noise, as in other differential privacy systems.
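
To make the aggregation step concrete, here is a sketch of PATE-style noisy vote aggregation; the noise scale is illustrative, not a calibrated privacy budget:

```python
# Sketch of PATE-style noisy aggregation: each teacher (trained on a disjoint partition
# of the data) votes, Laplace noise is added to the vote counts, and the noisy winner
# becomes the label. The noise scale here is illustrative, not a calibrated privacy budget.
import numpy as np

def noisy_aggregate(teacher_votes, n_classes: int, noise_scale: float = 1.0,
                    rng=np.random.default_rng()):
    """teacher_votes: array of per-teacher predicted class labels for a single input."""
    counts = np.bincount(teacher_votes, minlength=n_classes).astype(float)
    counts += rng.laplace(0.0, noise_scale, size=n_classes)
    return int(np.argmax(counts))
```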

To learn more about differential privacy, read Matt’s intro post. To learn more about PATE and model-stealing attacks, read Ian’s post on the subject.

Conclusion

It’s about time to wrap up this (rather long!) series of posts on how to use AI to combat fraud and abuse. The key takeaway of this series (as detailed in the first post) is that:

AI is key to building protections that keep up with both users’ expectations and increasingly sophisticated attacks.

As discussed in this post and the two previous ones, there are some challenges to overcome to make this work in practice. However, now that AI frameworks are mature and well documented, there’s never been a better time to start using AI in your defense systems, so don’t let those challenges stop you as the upsides are very strong.

Thank you for reading this blog post up to the end. And please don’t forget to share it, so your friends and colleagues can also learn how to use AI for anti-abuse purposes.

To get notified when my next post is online, follow me on Twitter, Facebook, Google+, or LinkedIn. You can also get the full posts directly in your inbox by subscribing to the mailing list or via RSS.

A bientôt!

ACM Digital Threats: Research and Practice… and Columns!

We at CERT are very proud of our collaboration with ACM to create the journal ACM Digital Threats: Research and Practice. One of the goals of the journal is to facilitate the communication between researchers and practitioners in the field of Cybersecurity. We have two columns to aid us in achieving this goal.

The first column is called "With the Benefit of Hindsight".

Learning from past experience--both mistakes and successes--is important for researchers and practitioners in the field of cybersecurity. The goal of this column is to consider past security events and reflect on how they changed the security landscape. We ask the authors to consider themes such as:

• Lessons learned--and difficulties faced
• How would the event be approached differently in today's cybersecurity climate?
• Has the landscape changed significantly since the event? In a good or bad way?
• What further development remains to be done?
• Would the author have changed anything, knowing what is known today?

The topic also applies to authors of seminal works and papers. Learning from past mistakes and planning for the future is an important part of any research topic.

The second column is called "Leaving the Laboratory: Putting Research into Practice".

In this column, a practitioner examines a peer-reviewed research article from a recent issue of the publication and provides his or her insight with respect to implementing that research, thus taking the research into practice.

Other possible topics for this column are general concerns, shared by researchers and practitioners, that relate to doing research effectively: for example, considering sample bias and how it affects research, or examining actual changes seen in the security landscape as a result of researchers and practitioners collaborating.

If you are interested in contributing to the journal by writing a column, please contact the editors of the journal at dtrap-editors@acm.org.

The Art of the Presentation (Part 2 of 3)

You’ve created your presentation; now you need to practise. Or as the great Yogi Berra put it: In theory there is no difference between theory and practice. In practice, there is. Almost certainly in the early days of your presenting you will need to practise a considerable amount. There are two main reasons for this; … Read More