Daily Archives: May 3, 2018

Risky Biz Soap Box: Root9b on agentless threat hunting

In this edition of Soap Box we’re chatting with Root9b. They’ve just launched an updated version of their ORION platform. And I guess the way you’d describe Root9b is as a threat hunt product maker and managed threat hunt provider. And their approach is a bit different – their software is agentless. They basically authenticate to a machine, inject various payloads into memory, and use that to pull back all sorts of telemetry from machines.

They say this means it’s much less likely that attackers will see them, and they offer this as a product, ORION, or as a service. They say their managed services customers come to them because they’re pretty unhappy with their MDR and MSSP providers and want better signalling.

So I was joined by John Harbaugh, COO of Root9b, and Mike Morris, CTO. Both of these guys were US Air Force cyberdudes before jumping out to the private sector. The company actually started off doing training before developing their platform ORION.

John and Mike joined me by Skype for this podcast. Enjoy!

Bug Alert! All 330 Million Twitter Users Need to Change Their Passwords Immediately

Tweet, tweet! No, that’s not a bird you’re hearing outside your window, that’s Twitter kindly reminding you to change your password immediately. And that goes for every single user, as it was discovered just today, on World Password Day no less, that all 330 million Twitter users need to change their account passwords after a bug exposed them in plain text.

So, how exactly did this happen? According to Twitter, this vulnerability came about due to an issue within the hashing process that masks passwords. Hashing is supposed to mask passwords by replacing them with a scrambled string of characters, which is what gets stored on Twitter’s systems. However, an error in this process caused these passwords to be saved in plain text to an internal log.
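For readers curious what “hashing” means in practice: a hash function turns a password into a fixed, scrambled string, and only that string should ever be stored, so even a database leak does not directly reveal passwords. Twitter has said it uses bcrypt; the sketch below uses Python’s built-in PBKDF2 purely as an illustration of the idea.

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> str:
    """Derive a one-way hash of the password; only this string should be stored."""
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return dk.hex()

salt = os.urandom(16)          # a per-user random salt, stored alongside the hash
stored = hash_password("hunter2", salt)

# The stored value reveals nothing obvious about the password...
assert "hunter2" not in stored
# ...and the same password with the same salt always yields the same hash,
# which is how a login check works without keeping the plaintext around.
assert hash_password("hunter2", salt) == stored
```

Twitter’s bug was that the plaintext slipped into a log *before* this masking step, which is why the stored hashes being strong didn’t help.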

This news first came to light via a company blog post, as Twitter confirmed that “we found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again.” So far, Twitter has not revealed how many users’ passwords may have been compromised or how long the bug was exposing passwords before the issue was discovered – which is precisely why the company has advised every user to change their password just in case. But, beyond changing their passwords, what other security steps can Twitter users take to ensure they stay protected? Start by following these tips:

  • Make your next password strong. When changing your password, make sure the next one you create is a strong password that is hard for cybercriminals to crack. Include numbers, lowercase and uppercase letters, and symbols. The more complex your password is, the more difficult it will be to crack. Finally, avoid common, easy-to-crack passwords like “12345” or “password.”
  • Use unique passwords for every account. Was your Twitter password the same one used for other accounts? If so, you need to change those passwords immediately as well. It’s a good security rule of thumb – always use different passwords for your online accounts, so that a single breach doesn’t leave all of your accounts vulnerable. It might seem difficult to keep track of so many passwords, but it will help you keep your online accounts secure.
  • Use a password manager. Take your security to another level with a password manager. A password manager can help you create strong passwords, remove the hassle of remembering numerous passwords and log you into your favorite websites automatically.
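Under the hood, a password manager builds passwords much the way you could yourself with a few lines of code. Here is a rough Python sketch of a generator that meets the advice above; the length and the requirement of one character from each class are arbitrary choices, not any particular manager’s policy.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password containing letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until at least one of each character class is present.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate

print(generate_password())  # a different 20-character password every run
```

Note the use of the `secrets` module rather than `random`: the former is designed for security-sensitive randomness.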

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post Bug Alert! All 330 Million Twitter Users Need to Change Their Passwords Immediately appeared first on McAfee Blogs.

On The Road – Enterprise Security Weekly #89

This week, Paul and John interview Adam Gordon, Edutainer at ITPro.TV! In the news, we have updates from Cisco, IBM, LogRhythm, ServiceNow, and more! In our final segment, we are joined by Security Weekly's own Jeff Man, who will give us an RSA Vendor Wrap-Up! All that and more, on this episode of Enterprise Security Weekly!

 

Full Show Notes: https://wiki.securityweekly.com/ES_Episode89

 

Visit https://www.securityweekly.com/esw for all the latest episodes!

Introducing ThreatConnect’s Intel Report Cards

Providing insight into how certain feeds are performing within ThreatConnect

As part of our latest release, we've introduced a new feature to help users better understand the intelligence they're pumping into their systems.  Intelligence can be a fickle thing; indicators are by their very nature ephemeral and part of our job is to better curate them. We find patterns not only in the intelligence itself, but in the sources of intelligence. As analysts, we frequently find ourselves asking a simple question: "Who's telling me this, and how much do I care?" We sought to tackle this problem on a few fronts in ThreatConnect in the form of Report Cards, giving you insight into how certain feeds are performing across the ThreatConnect ecosystem.  

First and foremost, we wanted to leverage any insights gleaned from our vast user base. We have users spanning dozens of industries across a global footprint. If a customer in Europe is seeing a lot of false positives come from a set of indicators, we want the rest of ThreatConnect users to learn from that. This is where ThreatConnect's CAL™ (Collective Analytics Layer) comes in. All participating instances of CAL are sending some anonymized, aggregated telemetry back. This gives us centralized insight which we can distribute to our customers. This telemetry includes automated tallies, such as how often an indicator is being observed in networks, as well as human-curated data such as how often False Positives are being reported.

Feed selection interface, driven by CAL's insights (27 March 2018)

 

By combining and standardizing these metrics, CAL can start to paint a picture of various intelligence feeds. CAL knows which feeds are reporting on which indicators, and can overlay this information at scale with the above telemetry. This has an impact at the strategic level when deciding which feeds to enable in your instance. We're all familiar with the "garbage in, garbage out" problem -- simply turning on every feed may not be productive for your environment and team. High-volume feeds that yield a lot of false positives, report on indicators outside of your areas of interest, or simply repeat what's available elsewhere may not be worth your time. Now system administrators can make an informed decision about which feeds to enable in their instance, and with a single button click can get months of historical data. These feeds are curated by the ThreatConnect Research team, which prunes and automatically deprecates older data to keep the feeds relevant.

ThreatConnect's Intelligence Report card helps you better understand a candidate feed (27 March 2018)

 

The Report Card view goes into more depth on a particular feed. For each feed CAL knows about, it will give you a bullet chart containing the feed's performance on a few key dimensions, as determined by the ThreatConnect analytics team.  In short, a bullet chart identifies ranges of performance (red, yellow, and green here) to give you a quick representation of the groupings we've identified for a particular metric. A vertical red line indicates what we consider to be a successful "target" number for that metric, and the gray line indicates the selected feed's actual performance on that metric. We've identified a few key metrics that we think will help our users make decisions:

  • Reliability Rating is a measure of false positive reports on indicators reported by this feed. It's more than just a count of how many votes have been tallied by users. We also consider things like how egregious a false positive is, since alerting on something like google.com in your SIEM is a particularly grave offense in our book. We give this a letter grade, from A-F, to help you identify how likely this feed is to waste your time.
  • Unique Indicators is a simple percentage of how many indicators contained within this feed aren't found anywhere else. If a feed's indicators are often found elsewhere, then some organizations may prefer not to duplicate data by adding them again. There may be reasons to keep them regardless, as we see below with ThreatAssess. Nonetheless, this metric is a good way to help you answer the question: how much novelty does this feed add?
  • First Reported measures the percentage of indicators which, when identified in other feeds, were found in this feed first. Even if a feed's indicators are often found elsewhere, the feed may have value if it's reporting those indicators significantly earlier. This metric helps you understand how timely a feed is relative to other feeds.
  • Scoring Disposition is a measure of the score that CAL assigns to indicators, on a 0-1000 scale. This score can be factored into the ThreatAssess score (alongside your tailored, local analysis). The Scoring Disposition is not an average of those scores, but a weighted selection based on the indicators we know our users care about. This metric helps answer the question: how bad are the things in this feed, according to CAL?
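To make the Unique Indicators and First Reported definitions concrete, here is a rough Python sketch over invented toy data. The feed names, indicators, and dates are made up, and CAL's real computation is certainly more involved; this only illustrates the arithmetic behind the two percentages.

```python
from datetime import date

# Hypothetical feed contents: indicator -> date first reported by each feed.
feeds = {
    "feed_a": {"evil.example": date(2018, 1, 1), "bad.example": date(2018, 1, 5)},
    "feed_b": {"bad.example": date(2018, 1, 3), "worse.example": date(2018, 2, 1)},
}

def unique_indicators(name: str) -> float:
    """Percentage of a feed's indicators found in no other feed."""
    others = set().union(*(f.keys() for n, f in feeds.items() if n != name))
    mine = feeds[name].keys()
    return 100 * len(mine - others) / len(mine)

def first_reported(name: str) -> float:
    """Of the indicators shared with other feeds, the percentage this feed saw first."""
    shared = []
    for indicator, seen in feeds[name].items():
        other_dates = [f[indicator] for n, f in feeds.items()
                       if n != name and indicator in f]
        if other_dates:
            shared.append(seen < min(other_dates))
    return 100 * sum(shared) / len(shared) if shared else 0.0

print(unique_indicators("feed_a"))  # 50.0: evil.example appears only in feed_a
print(first_reported("feed_b"))     # 100.0: feed_b saw bad.example before feed_a
```

The interplay of the two metrics is visible even in this toy example: feed_b duplicates half of feed_a's indicators, but it reported the shared one first, which may justify keeping both.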

The Report Card also contains a few other key fields, namely the Daily Indicators graph and the Common Classifiers box. The Daily Indicators chart shows you the indicator volume coming from a source over time, to help you understand the ebbs and flows of a particular feed. The Common Classifiers box shows which Classifiers are most common on indicators in the selected feed. Combined, these answer two questions: how many indicators am I signing up for, and what flavors are they?

All of these insights are designed to help you make better decisions throughout your security lifecycle. Ultimately, the decision to add a feed should be a calculated one. When an analyst sees that an indicator was found in a particular feed, they may choose to use that information based on the Reliability Rating of that feed. You can leverage these insights as trust levels via ThreatAssess, allowing you to make such choices for every indicator in your instance.

We'll continue to improve our feed offering and expand upon our Report Cards as we hear more feedback from you, so please feel free to Tweet us @ThreatConnect.

The post Introducing ThreatConnect's Intel Report Cards appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Cybersecurity pervasiveness subsumes all security concerns

Given the increased digitization of society and explosion of devices generating data (including retail, social media, search, mobile, and the internet of things), it seems like it might have been inevitable that cybersecurity pervasiveness would eventually touch every aspect of life. But, it feels more like everything has been subsumed by infosec.

All information in our lives is now digital — health records, location data, search habits, not to mention all of the info we willingly share on social media — and all of that data has value to us. However, it also has value to companies that can use it to build more popular products and serve ads and it has value to malicious actors too.

The conflict between the interests of these three groups means cybersecurity pervasiveness is present in every facet of life. Users want control of their data in order to have a semblance of privacy. Corporations want to gather and keep as much data as possible, just in case trends can be found in it to increase the bottom line. And, malicious actors want to use that data for financial gain — selling PII, credit info or intellectual property on the dark web, holding systems for ransom, etc. — or political gain.

None of these cybersecurity pervasiveness trends are necessarily new for those in the infosec community, but issues like identity theft or stolen credit card numbers haven’t always registered with the general public or mass media as cybersecurity problems because they tended to be considered in individual terms — a few people here and there had those sorts of issues but it couldn’t be too widespread, right?

Now, there are commercials on major TV networks pitching “free dark web scans” to let you know whether your data is being sold on the black market. (Spoiler alert: your data has almost certainly been compromised, it’s more a matter of whether you’re unlucky enough to have your ID chosen from the pile by malicious actors or not. And, a dark web scan won’t make the awful process of getting a new social security number any better.)

Data breaches are so common and so far-reaching that everyone has either been directly affected or is no more than about two degrees of separation from someone who has been. Remember: the Yahoo breach alone affected 3 billion accounts and the latest stats say there are currently only about 4.1 billion people who have internet access. The Equifax breach affected 148 million U.S. records and the U.S. has an estimated population of 325 million.

Everyone has been affected in one way or another. Everything we do can be tracked including our location, our search and purchase history, our communications and more.

But, cybersecurity pervasiveness no longer affects only financial issues and the general public has seen in stark reality how digital platforms and the idea of truth itself can be manipulated by threat actors for political gain.

Cyberattacks have become shows of nation-state power in a type of new Cold War, at least until cyberattacks impact industrial systems and cause real-world harm.

Just as threat actors can find the flaws in software, there are flaws in human psychology that can be exploited as part of traditional phishing schemes or fake news campaigns designed to sway public opinion or even manipulate elections.

For all of the issues that arise from financially-motivated threat actors, the security fixes range from the relatively simple to implement — encryption, data protection, data management, stronger privacy controls, and so on — to far more complex undertakings, like replacing the woefully outmatched social security number as a primary form of ID.

However, the politically-minded attacks are far more difficult to mitigate, because you can’t patch human psychology. Better critical reading skills are hard to build across people who might not believe there’s even an issue that needs fixing. Pulling people out of echo chambers will be difficult.

Social networks need to completely change their platforms to be better at enforcing abuse policies and to devalue constant sharing of links. And the media also needs to stop prioritizing conflict and inflammatory headlines over real news. All of this means prioritizing the public good over profits, a notoriously difficult proposition under the almighty hand of capitalism.

None of these are easy to do and some may be downright impossible. But, like it or not, the infosec community has been brought to the table and can have a major voice in how these issues get fixed. Are we ready for the challenge?

The post Cybersecurity pervasiveness subsumes all security concerns appeared first on Security Bytes.

Yahoo! Fined 35 Million USD For Late Disclosure Of Hack


Ah, Yahoo! in trouble again. This time the news is that Yahoo! has been fined 35 million USD by the SEC for its two-year-delayed disclosure of the massive hack; we actually reported on the incident in 2016 when it became public – Massive Yahoo Hack – 500 Million Accounts Compromised.

Yahoo! has been having a rocky time for quite a few years now and just recently sold Flickr to SmugMug for an undisclosed amount – I hope that at least helps pay off some of the fine.

Read the rest of Yahoo! Fined 35 Million USD For Late Disclosure Of Hack now! Only available at Darknet.

State Machine Testing with Echidna

Property-based testing is a powerful technique for verifying arbitrary properties of a program via execution on a large set of inputs, typically generated stochastically. Echidna is a library and executable I’ve been working on for applying property-based testing to EVM code (particularly code written in Solidity).

Echidna is a library for generating random sequences of calls against a given smart contract’s ABI and making sure that their evaluation preserves some user-defined invariants (e.g.: the balance in this wallet must never go down). If you’re from a more conventional security background, you can think of it as a fuzzer, with the caveat that it looks for user-specified logic bugs rather than crashes (as programs written for the EVM don’t “crash” in any conventional way).

The property-based testing functionality in Echidna is implemented with Hedgehog, a property-based testing library by Jacob Stanley. Think of Hedgehog as a nicer version of QuickCheck. It’s an extremely powerful library, providing automatic minimal testcase generation (“shrinking”), well-designed abstractions for things like ranges, and most importantly for this blog post, abstract state machine testing tools.

After reading a particularly excellent blog post by Tim Humphries (“State machine testing with Hedgehog,” which I’ll refer to as the “Hedgehog post” from now on) about testing a simple state machine with this functionality, I was curious if the same techniques could be extended to the EVM. Many contracts I see in the wild are just implementations of some textbook state machine, and the ability to write tests against that invariant-rich representation would be invaluable.

The rest of this blog post assumes at least a degree of familiarity with Hedgehog’s state machine testing functionality. If you’re unfamiliar with the software, I’d recommend reading Humphries’s blog post first. It’s also worth noting that the below code demonstrates advanced usage of Echidna’s API, and you can also use it to test code without writing a line of Haskell.

First, we’ll describe our state machine’s states, then its transitions, and once we’ve done that we’ll use it to actually find some bugs in contracts implementing it. If you’d like to follow along on your own, all the Haskell code is in examples/state-machine and all the Solidity code is in solidity/turnstile.

Step 0: Build the model

Fig. 1: A turnstile state machine

The state machine in the Hedgehog post is a turnstile with two states (locked and unlocked) and two actions (inserting a coin and pushing the turnstile), with “locked” as its initial state. We can copy this code verbatim.

data ModelState (v :: * -> *) = TLocked
                              | TUnlocked
                              deriving (Eq, Ord, Show)

initialState :: ModelState v
initialState = TLocked

However, in the Hedgehog post the effectful implementation of this abstract model was a mutable variable that required I/O to access. We can instead use a simple Solidity program.

contract Turnstile {
  bool private locked = true; // initial state is locked

  function coin() {
    locked = false;
  }

  function push() returns (bool) {
    if (locked) {
      return(false);
    } else {
      locked = true;
      return(true);
    }
  }
}

At this point, we have an abstract model that just describes the states, not the transitions, and some Solidity code we claim implements a state machine. In order to test it, we still have to describe this machine’s transitions and invariants.

Step 1: Write some commands

To write these tests, we need to make explicit how we can execute the implementation of our model. The examples given in the Hedgehog post work in any MonadIO, as they deal with IORefs. However, since EVM execution is deterministic, we can work instead in any MonadState VM.

The simplest command is inserting a coin. This should always result in the turnstile being unlocked.

s_coin :: (Monad n, MonadTest m, MonadState VM m) => Command n m ModelState
s_coin = Command (\_ -> Just $ pure Coin)
                 -- Regardless of initial state, we can always insert a coin
  (\Coin -> cleanUp >> execCall ("coin", []))
  -- Inserting a coin is just calling coin() in the contract
  -- We need cleanUp to chain multiple calls together
  [ Update $ \_ Coin _ -> TUnlocked
    -- Inserting a coin sets the state to unlocked
  , Ensure $ \_ s Coin _ -> s === TUnlocked
    -- After inserting a coin, the state should be unlocked
  ]

Since the push function in our implementation returns a boolean value we care about (whether or not pushing “worked”), we need a way to parse EVM output. execCall has type MonadState VM m => SolCall -> m VMResult, so we need a way to check whether a given VMResult is true, false, or something else entirely. This turns out to be pretty trivial.

match :: VMResult -> Bool -> Bool
match (VMSuccess (B s)) b = s == encodeAbiValue (AbiBool b)
match _ _ = False

Now that we can check the results of pushing, we have everything we need to write the rest of the model. As before, we’ll write two Commands, modeling pushing while the turnstile is locked and unlocked, respectively. Pushing while locked should fail, and leave the turnstile locked. Pushing while unlocked should succeed, and result in the turnstile becoming locked.

s_push_locked :: (Monad n, MonadTest m, MonadState VM m) => Command n m ModelState
s_push_locked = Command (\s -> if s == TLocked then Just $ pure Push else Nothing)
                        -- We can only run this command when the turnstile is locked
  (\Push -> cleanUp >> execCall ("push", []))
  -- Pushing is just calling push()
  [ Require $ \s Push -> s == TLocked
    -- Before we push, the turnstile should be locked
  , Update $ \_ Push _ -> TLocked
    -- After we push, the turnstile should be locked
  , Ensure $ \before after Push b -> do before === TLocked
                                        -- As before
                                        assert (match b False)
                                        -- Pushing should fail
                                        after === TLocked
                                        -- As before
  ]
s_push_unlocked :: (Monad n, MonadTest m, MonadState VM m) => Command n m ModelState
s_push_unlocked = Command (\s -> if s == TUnlocked then Just $ pure Push else Nothing)
                          -- We can only run this command when the turnstile is unlocked
  (\Push -> cleanUp >> execCall ("push", []))
  -- Pushing is just calling push()
  [ Require $ \s Push -> s == TUnlocked
    -- Before we push, the turnstile should be unlocked
  , Update $ \_ Push _ -> TLocked
    -- After we push, the turnstile should be locked
  , Ensure $ \before after Push b -> do before === TUnlocked
                                        -- As before
                                        assert (match b True)
                                        -- Pushing should succeed
                                        after === TLocked
                                        -- As before
  ]

If you can recall the image from Step 0, you can think of the states we enumerated there as the shapes and the transitions we wrote here as the arrows. Our arrows are also equipped with some rigid invariants about the conditions that must be satisfied to make each state transition (that’s our Ensure above). We now have a language that totally describes our state machine, and we can simply describe how its statements compose to get a Property!

Step 2: Write a property

This composition is actually fairly simple: we just tell Echidna to execute our actions sequentially, and since the invariants are captured in the actions themselves, that’s all that’s required to test! The only thing we need now is the actual subject of our testing, which, since we work in any MonadState VM, is just a VM, which we can parametrize the property on.

prop_turnstile :: VM -> Property
prop_turnstile v = property $ do
  actions <- forAll $ Gen.sequential (Range.linear 1 100) initialState
    [s_coin, s_push_locked, s_push_unlocked]
  -- Generate between 1 and 100 actions, starting with a locked (model) turnstile
  evalStateT (executeSequential initialState actions) v
  -- Execute them sequentially on the given VM.

You can think of the above code as a function that takes an EVM state and returns a Hedgehog-checkable assertion that it implements our (Haskell) state machine definition.

Step 3: Test

With this property written, we’re ready to test some Solidity! Let’s spin up ghci to check this property with Echidna.

λ> (v,_,_) <- loadSolidity "solidity/turnstile/turnstile.sol" -- set up a VM with our contract loaded
λ> check $ prop_turnstile v -- check that the property we just defined holds
  ✓ passed 10000 tests.
True
λ>

It works! The Solidity we wrote implements our model of the turnstile state machine. Echidna evaluated 10,000 random call sequences without finding anything wrong.

Now, let’s find some failures. Suppose we initialize the contract with the turnstile unlocked, as below. This should be a pretty easy failure to detect, since it’s now possible to push successfully without putting a coin in first.

We can just slightly modify our initial contract as below:

contract Turnstile {
  bool private locked = false; // initial state is unlocked

  function coin() {
    locked = false;
  }

  function push() returns (bool) {
    if (locked) {
      return(false);
    } else {
      locked = true;
      return(true);
    }
  }
}

And now we can use the exact same ghci commands as before:

λ> (v,_,_) <- loadSolidity "solidity/turnstile/turnstile_badinit.sol"
λ> check $ prop_turnstile v
  ✗ failed after 1 test.

       ┏━━ examples/state-machine/StateMachine.hs ━━━
    49 ┃ s_push_locked :: (Monad n, MonadTest m, MonadState VM m) => Command n m ModelState
    50 ┃ s_push_locked = Command (\s -> if s == TLocked then Just $ pure Push else Nothing)
    51 ┃   (\Push -> cleanUp >> execCall ("push", []))
    52 ┃   [ Require $ \s Push -> s == TLocked
    53 ┃   , Update $ \_ Push _ -> TLocked
    54 ┃   , Ensure $ \before after Push b -> do before === TLocked
    55 ┃                                         assert (match b False)
       ┃                                         ^^^^^^^^^^^^^^^^^^^^^^
    56 ┃                                         after === TLocked
    57 ┃ ]

       ┏━━ examples/state-machine/StateMachine.hs ━━━
    69 ┃ prop_turnstile :: VM -> Property
    70 ┃ prop_turnstile v = property $ do
    71 ┃   actions <- forAll $ Gen.sequential (Range.linear 1 100) initialState
    72 ┃   [s_coin, s_push_locked, s_push_unlocked]
       ┃   │ Var 0 = Push
    73 ┃   evalStateT (executeSequential initialState actions) v

    This failure can be reproduced by running:
    > recheck (Size 0) (Seed 3606927596287211471 (-1511786221238791673))

False
λ>

As we’d expect, our property isn’t satisfied. The first time we push it should fail, as the model thinks the turnstile is locked, but it actually succeeds. This is exactly the result we expected above!

We can try the same thing with some other buggy contracts as well. Consider the below Turnstile, which doesn’t lock after a successful push.

contract Turnstile {
  bool private locked = true; // initial state is locked

  function coin() {
    locked = false;
  }

  function push() returns (bool) {
    if (locked) {
      return(false);
    } else {
      return(true);
    }
  }
}

Let’s use those same ghci commands one more time:

λ> (v,_,_) <- loadSolidity "solidity/turnstile/turnstile_nolock.sol"
λ> check $ prop_turnstile v
  ✗ failed after 4 tests and 1 shrink.

       ┏━━ examples/state-machine/StateMachine.hs ━━━
    49 ┃ s_push_locked :: (Monad n, MonadTest m, MonadState VM m) => Command n m ModelState
    50 ┃ s_push_locked = Command (\s -> if s == TLocked then Just $ pure Push else Nothing)
    51 ┃   (\Push -> cleanUp >> execCall ("push", []))
    52 ┃   [ Require $ \s Push -> s == TLocked
    53 ┃   , Update $ \_ Push _ -> TLocked
    54 ┃   , Ensure $ \before after Push b -> do before === TLocked
    55 ┃                                         assert (match b False)
       ┃                                         ^^^^^^^^^^^^^^^^^^^^^^
    56 ┃                                         after === TLocked
    57 ┃  ]

       ┏━━ examples/state-machine/StateMachine.hs ━━━
    69 ┃ prop_turnstile :: VM -> Property
    70 ┃ prop_turnstile v = property $ do
    72 ┃   [s_coin, s_push_locked, s_push_unlocked]
       ┃   │ Var 0 = Coin
       ┃   │ Var 1 = Push
       ┃   │ Var 3 = Push
    73 ┃   evalStateT (executeSequential initialState actions) v

    This failure can be reproduced by running:
    > recheck (Size 3) (Seed 133816964769084861 (-8105329698605641335))

False
λ>

When we insert a coin then push twice, the second should fail. Instead, it succeeds. Note that in all these failures, Echidna finds the minimal sequence of actions that demonstrates the failing behavior. This is because of Hedgehog’s shrinking features, which provide this behavior by default.
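Stripped of Hedgehog's machinery, the core of that list-shrinking behavior can be sketched in a few lines of Python. This is a toy greedy shrinker rather than Hedgehog's actual algorithm, and the failure predicate is invented here to mimic the "no re-lock" bug above.

```python
def shrink(actions, fails):
    """Greedily remove elements while the predicate still fails: a toy
    version of the list shrinking Hedgehog performs automatically."""
    i = 0
    while i < len(actions):
        candidate = actions[:i] + actions[i + 1:]
        if fails(candidate):
            actions = candidate  # keep the smaller failing case
        else:
            i += 1               # this element is needed to fail; keep it
    return actions

# Toy failure predicate: a sequence "fails" our turnstile model if it
# contains a coin and at least two pushes (the "no re-lock" bug above).
fails = lambda seq: "coin" in seq and seq.count("push") >= 2

print(shrink(["coin", "push", "coin", "push", "push"], fails))
# -> ['coin', 'push', 'push']: the minimal failing sequence
```

Real shrinkers are cleverer (they also shrink individual elements and explore removals in smarter orders), but the principle is the same: keep deleting while the failure reproduces.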

More broadly, we now have a tool that will accept arbitrary contracts (that implement the push/coin ABI), check whether they implement our specified state machine correctly, and return a minimal falsifying counterexample if they do not. As a Solidity developer working on a turnstile contract, I can run this on every commit and get a simple explanation of any regression that occurs.

Concluding Notes

Hopefully the above presents a motivating example for testing with Echidna. We wrote a simple description of a state machine, then tested three different contracts against it; each case yielded either a statement of assurance that the contract implemented the machine or a minimal counterexample proving it did not.

If you’d like to try implementing this kind of testing yourself on a canal lock, use this exercise we wrote for a workshop.

What to do after a data breach: 5 steps to minimize risk

It happened again. Another major web service lost control of its database, and now you’re scrambling to stay ahead of the bad guys. As much as we hate them, data breaches are here to stay. The good news is they don’t have to elicit full-blown panic no matter how sensitive the pilfered data might be. There are usually some very simple steps you can take to minimize your exposure to the potential threat.

Here’s how.

Step 1: Determine the damage


The first thing to figure out is what the hackers took. If they got your username and password, for example, there’s little point in alerting your credit card company.


COURSE LAUNCH: Penetration Testing Professional version 5 – PTPv5

We are launching the Penetration Testing Professional training course version 5 (PTPv5), the best way to learn professional pentesting skills, on May 22, 2018.

Penetration Testing Professional version 5

Find out why Penetration Testing Professional version 5 – PTPv5 is the best way to learn Professional Penetration Testing skills, see the complete syllabus and of course take part in an exciting live demonstration during this launch Webinar on May 22nd. Special deals and prizes are waiting for all attendees, so please invite your friends and colleagues too.


Win PTPv5 for Free

To make your start in IT security even easier, we decided to give every attendee of this live webinar the option to get PTSv3 for free. This Penetration Testing Student training course covers all the prerequisites to start with the newly launching PTPv5. Two lucky winners will also get their hands on the brand-new PTPv5 training course in the Full or Elite Edition for free. The winners will be picked from all attendees and announced during the webinar along with special deals and prizes for everyone! Register for the webinar below:

Register for the launch webinar HERE

See you on May 22, 2018, at 1:00pm ET.