
Announcing New Veracode Dynamic Analysis

Effective application security assesses applications across the entire software lifecycle – beyond the development phase and into production. Why is this necessary? If you’ve shifted security left, into the development process, why do you need to shift it right, into production? To put it bluntly: because people aren’t perfect, and bad guys never sleep. With the speed of today’s development processes, it would be foolish to assume that every defect has been found and fixed by the time an app hits production, and just as foolish to assume that cyberattackers are done inventing new ways to access your code. In addition, scanning an app dynamically at runtime finds issues and vulnerabilities you simply can’t identify by examining the app statically. The bottom line is that scanning apps in production with dynamic analysis is a critical piece of an effective application security program. However, dynamic analysis solutions have to work with DevOps processes, keeping software secure without slowing or stopping releases.

To help you meet this need to dynamically scan apps in production, while ensuring you keep pace in a DevOps world, we’re launching a new and improved DAST solution, Veracode Dynamic Analysis. With its automation, depth of coverage, and unmatched scalability, Veracode Dynamic Analysis helps you:

Save time and effort on production scanning

With Veracode Dynamic Analysis’ recurring scheduling feature, you don’t have to remember to kick off scans; you can easily set them up on a schedule you don’t need to continuously monitor. And with the automated pause-and-resume feature, you don’t have to worry about disrupting IT maintenance windows: Dynamic Analysis automatically pauses when a maintenance window opens and resumes where it left off once it closes.
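
To make the pause-and-resume behavior concrete, here is a minimal Python sketch of the idea; the maintenance window, function names, and checkpoint logic are all illustrative assumptions, not Veracode’s actual implementation or API.

```python
import time as systime
from datetime import datetime, time

# Illustrative sketch only; not Veracode's implementation or API.
# Hypothetical maintenance window: 02:00-04:00 local time each night.
MAINTENANCE_START = time(2, 0)
MAINTENANCE_END = time(4, 0)

def in_maintenance_window(now=None):
    """True while the target's IT maintenance window is open."""
    current = (now or datetime.now()).time()
    return MAINTENANCE_START <= current < MAINTENANCE_END

def audit_page(url):
    """Stand-in for the real crawl-and-audit step."""
    print(f"auditing {url}")

def scan(urls, checkpoint=0):
    """Audit URLs in order, pausing during maintenance windows and
    resuming from the last checkpoint instead of starting over."""
    position = checkpoint
    while position < len(urls):
        if in_maintenance_window():
            systime.sleep(60)  # paused; re-check every minute
            continue
        audit_page(urls[position])
        position += 1  # the checkpoint advances only after a page finishes
    return position

scan(["https://example.com/login", "https://example.com/search"])
```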

Dynamically scan all your apps quickly and accurately

Veracode Dynamic Analysis covers all your applications, even difficult-to-scan web apps such as single-page and large web apps. And we keep your development teams moving, both with the speed at which our solution crawls and audits pages and with our low false-positive rate (<1%), which keeps your developers from spinning their wheels chasing down non-existent vulnerabilities.

Easily onboard apps and scale to cover your entire application landscape

You can set up a Veracode Dynamic Analysis scan with just the URL; you don’t need to coordinate with the development team to hunt down code or binaries. And when you need to scan multiple applications, you don’t have to add them one at a time: simply upload a .csv document listing all of the URLs to Dynamic Analysis. In addition, you can schedule a group of applications as a batch scan and assess multiple applications concurrently. No matter the size of your organization, concurrent scanning means you don’t have to wait for one scan to complete before starting the next.
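
As a rough illustration of the batch-onboarding idea, here is a Python sketch; the CSV layout and function names are hypothetical, not Veracode’s actual interface.

```python
import csv
from concurrent.futures import ThreadPoolExecutor

def scan_url(url):
    """Stand-in for submitting one application URL for dynamic analysis."""
    print(f"scanning {url}")
    return url, "complete"

def batch_scan(csv_path, max_concurrent=4):
    """Read application URLs from a CSV file and scan them concurrently,
    so no scan waits for the previous one to finish."""
    with open(csv_path, newline="") as f:
        urls = [row[0] for row in csv.reader(f) if row]
    with ThreadPoolExecutor(max_workers=max_concurrent) as pool:
        return list(pool.map(scan_url, urls))

# Usage: batch_scan("applications.csv"), where each row holds one URL.
```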

Get all your testing results in one place

With Veracode, you’ll find results from all your AppSec tests – static, dynamic, SCA, pen testing – in one central location. This single view of test results makes it easy to coordinate remediation between multiple teams and track your progress.

Learn more

Keep your code secure across the software lifecycle, without slowing development cycles; get more details on the new Veracode Dynamic Analysis.

Threat Model Thursdays: Crispin Cowan

Over at the Leviathan blog, Crispin Cowan writes about “The Calculus Of Threat Modeling.” Crispin and I have worked together over the years, and our approaches are explicitly aligned around the four-question frame.

What are we working on?

One of the places where Crispin goes deeper is definitional. He’s very precise about what a security principal is:

A principal is any active entity in a system with access privileges that are in any way distinct from some other component it talks to. Corollary: a principal is defined by its domain of access (the set of things it has access to). Domains of access can, and often do, overlap, but that they are different is what makes a security principal distinct.

This also leads to definitions of attack surface (where principals interact), trust boundaries (the sum of the attack surfaces), and security boundaries (trust boundaries for which the engineers will fight). These are more crisply defined than the ones I tend to use, and I think it’s a good set of definitions, or perhaps a good step forward in the discussion if you disagree.
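
Because these definitions are set-based, they translate naturally into code. Here is a minimal sketch (my illustration, not Crispin’s formalism or tooling) that models principals by their domains of access and derives the attack surfaces; the example system is made up.

```python
# A principal is defined by its domain of access; an attack surface
# exists wherever two principals with distinct domains interact.
principals = {
    "browser":  {"ui", "cookies"},
    "web_app":  {"ui", "session_store", "db_conn"},
    "database": {"db_conn", "disk"},
}

def distinct(a, b):
    """Components are distinct principals if their domains of access
    differ in any way; overlap alone doesn't make them the same."""
    return principals[a] != principals[b]

interactions = [("browser", "web_app"), ("web_app", "database")]

# Each interaction between distinct principals is an attack surface;
# the trust boundary is the sum of those attack surfaces.
attack_surfaces = [(a, b) for a, b in interactions if distinct(a, b)]
print(attack_surfaces)  # [('browser', 'web_app'), ('web_app', 'database')]
```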

What can go wrong?

His approach adds a much more explicit description of the principals who own elements of the diagram, and several self-check steps (“Ask again if we have all the connections...”). I think of these as part of “did we do a good job?”, and it’s great to integrate such checks on an ongoing basis rather than treating them as a step at the end.

What are we going to do about it?

Here Crispin covers assessing complexity and mitigations. Assessing complexity is an interesting approach — a great many vulnerabilities appear on the most complex interfaces, and I think it’s a useful strategy, similar to ‘easy fixes first’ as a prioritization approach.

He also has “c. Be sure to take a picture of the whiteboard after the team is done describing the system” and “d. Go home and create a threat model diagram.” These are interesting steps, and I think they deserve some discussion as to form (which I see as part of ‘what are we working on?’) and function. As to function, we already have “a threat model diagram,” and a record of it, in the picture of the whiteboard. I’m nitpicking here for two very specific reasons: first, the implication that what was done isn’t a threat model diagram is inaccurate, and second, as the agile world likes to ask, “why are you doing this work?”

I also want to ask: is there a reason to go from whiteboard to Visio? As Crispin says, he’s not simply transcribing; he’s doing some fairly nuanced technical editing, such as “Collapse together any nodes that are actually executing as the same security principal.” That means you can’t hand the work off to a graphic designer; you need an expensive security person to reconsider the whiteboard diagram. There are times that’s important: if the diagram will be shown widely across many meetings, if it will go outside the organization (say, to regulators), or if the engineering process is waterfall-like.
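
That editing step is mechanical enough to sketch. Here is a rough illustration (mine, not Crispin’s tooling), assuming each diagram node is labeled with the principal it executes as; the system and labels are hypothetical.

```python
# Hypothetical system: each diagram node is labeled with the security
# principal it executes as.
node_principal = {
    "web_frontend":    "www-data",
    "template_engine": "www-data",   # runs as the same principal
    "auth_service":    "auth",
    "database":        "postgres",
}
edges = [
    ("web_frontend", "template_engine"),
    ("web_frontend", "auth_service"),
    ("auth_service", "database"),
]

def collapse_by_principal(node_principal, edges):
    """Merge nodes that execute as the same principal into a single
    node, dropping edges that become internal to a merged node."""
    return {(node_principal[a], node_principal[b])
            for a, b in edges
            if node_principal[a] != node_principal[b]}

print(collapse_by_principal(node_principal, edges))
# {('www-data', 'auth'), ('auth', 'postgres')} -- set order may vary
```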

Come together

Crispin says that tools are substitutes for expertise, and that (a? the?) best practice is for a security expert and the engineers to talk. I agree, this is a good way to do it — I also like to train the engineers to do this without security experts each time.

And that brings me to the we/you distinction. Crispin conveys the four-question frame in the second person (“What are you doing? What did you do about it?”), while I try to use the first person plural (“What are we doing?”). Saying ‘we’ focuses on collaboration, on dialogue, on exploration. Saying ‘you’ frames this as a review, a discussion, and, who knows, possibly a fight. Both of us used the second-person frame at a prior employer, and today when I consult, I still use it because I’m really not part of the doing team.

That said, I think this was a super-interesting post for the definitions, and for showing the diagram evolution and the steps taken from a whiteboard to a completed, colored diagram.

The image is the frontispiece of Leviathan by Thomas Hobbes, with its famous model of the state, made up of the people.

What Security Pros Will Get Out of Our Virtual Summit on Open Source Risk

There has been a fundamental shift in the way code is developed over the past 15 to 20 years. Today, developers do far more reusing of existing code than creating code from scratch. Taking advantage of the millions of open source libraries available has become standard operating procedure. And this new model comes with tremendous benefits – both for developers and for the business – allowing both to move and innovate with unprecedented speed. Ultimately, with everyone creating code this way, it has become a necessary practice for remaining competitive.

But what about security? Shouldn’t open source code be more secure because it’s got millions of eyeballs on it? It’s becoming increasingly clear that the “eyeball theory” is simply not playing out, and that open source code is just as vulnerable as in-house-developed code. In fact, in many ways open source code is less secure, if you consider that reusable code means reusable vulnerabilities, and that a breach in one piece of open source code has far-reaching implications. And the bad guys know that attacking open source code gives them the most bang for their buck – one breach, millions affected. In addition, given the pace at which open source libraries are being created, vulnerabilities are appearing faster than anyone can keep track of them.

How should security professionals approach this new threat landscape? If stopping open source library use is not an option, how do you secure its use? Our Virtual Summit, The Open Source Library Conundrum: Managing Your Risk, set out to tackle this complicated and critical issue. We gathered experts from across and beyond our company to give you advice and tips on all sides of this issue – the problem, the solutions, the process, and the technology. Of note is the keynote speaker for this summit, OWASP founder Mark Curphey. Veracode recently acquired Mark’s company SourceClear, which has a groundbreaking approach to open source library security, centered around the idea that the accuracy of your security testing of open source code is critical. For instance, you may be using a vulnerable open source library, but are you using the vulnerable part of that library? This approach and Mark’s expertise are woven throughout the Summit.
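
That “vulnerable part of the library” question is essentially a reachability check over the application’s call graph. Here is a toy Python sketch of the idea; the module and function names are made up, and SourceClear’s actual analysis is far more sophisticated.

```python
# Toy sketch of vulnerable-method reachability: does any of our code
# actually call into the vulnerable part of the library?
call_graph = {
    "app.handle_upload":  ["lib.parse_archive"],
    "app.render_page":    ["lib.escape_html"],
    "lib.parse_archive":  ["lib.unsafe_extract"],  # the vulnerable method
    "lib.escape_html":    [],
    "lib.unsafe_extract": [],
}

def reaches(entry, target, graph, seen=None):
    """Depth-first search: can `entry` reach `target` in the call graph?"""
    if entry == target:
        return True
    seen = seen if seen is not None else set()
    seen.add(entry)
    return any(reaches(callee, target, graph, seen)
               for callee in graph.get(entry, ())
               if callee not in seen)

# The vulnerable library is in use either way, but only the upload
# path can actually reach the vulnerable method.
print(reaches("app.handle_upload", "lib.unsafe_extract", call_graph))  # True
print(reaches("app.render_page", "lib.unsafe_extract", call_graph))    # False
```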

Check out the Summit to get:

  • A better understanding of today’s threat landscape
  • A clear view into the latest approaches toward the open source security problem
  • Practical tips and advice on using open source libraries securely, from both a technology and process perspective
  • A look at how companies are currently managing their open source library risk

Get more details on the Summit sessions and how to register here.

A Closer Look at Security’s Role in a DevSecOps Organization

In February, we hosted a virtual summit titled “Assembling the Pieces of the DevSecOps Puzzle.” The goal of the summit was to provide organizations with tools and information to implement a DevSecOps strategy in their organization—and make the shift from theory into practice. 

In his educational webinar at the summit, Chris Wysopal—Veracode’s CTO and co-founder—tackles an important, practical question head-on: If AppSec is shifting left, and the responsibility of testing security now belongs to developers, what does this mean for the security team?

First, it is important to note that DevSecOps does not shift all security responsibility onto the development team. Instead, it requires that the teams integrate “so that they’re working together as one team as opposed to a separate team [development] with an audit function [security],” noted Wysopal.

Therefore, security teams need to shift their thinking and start acting as “builders,” rather than “breakers.” They need to work to integrate security into development, instead of fixing it after the fact. This requires educating developers, forming open means of communication between the teams, and, most importantly, automating as much as possible so that it does not burden development teams.

Automate or batch

Automation is key to shifting security testing left into development. You need to keep developers moving forward, not ask them to stop coding to open another tool and start scanning. But what about testing that simply can’t be automated? For activities like manual testing and threat modeling, take them out of band from the DevOps pipeline – you don’t want to create gates that hold up the build. Additionally, conduct manual processes in small batches so they don’t hold things up for extended periods. For example, conduct threat modeling on a piece of the software rather than the whole application, or manually test just the pieces of code that need it – a password reset mechanism, for example.
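
One way to picture “out of band” is a pipeline step that gates only on automated findings while queuing manual, small-batch work separately. The following is an illustrative Python sketch under those assumptions; the stage names and helpers are hypothetical, not a prescribed pipeline.

```python
from queue import Queue

# Illustrative sketch: automated tests run inside the pipeline and may
# gate the build; manual, small-batch work is queued out of band so it
# never blocks a release.
manual_review_queue = Queue()

def run_automated_scans(build):
    """Stand-in for fast, automated security testing in the pipeline."""
    return []  # no blocking findings in this toy example

def pipeline(build):
    findings = run_automated_scans(build)
    if findings:
        raise SystemExit(f"{build}: blocking security findings")
    # Small batches: target specific high-risk pieces, not the whole app.
    manual_review_queue.put(("threat-model", "password reset flow"))
    manual_review_queue.put(("manual-test", "file upload endpoint"))
    print(f"{build}: promoted; {manual_review_queue.qsize()} manual items queued")

pipeline("build-1234")
```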

Keep in mind that you not only want to help developers find problems; you also want to give them strategies for fixing the vulnerabilities and errors they find. You can do this by creating guidelines, checklists, and recipe-style formats so developers know what to do in a particular situation. This way, developers have the necessary information on hand and don’t need to waste time calling around and searching for help.

Create security champions

Wysopal also discussed the importance of “security champions,” which Sonali Shah, VP of Product Strategy at Veracode, discussed in her summit webinar. Security champions play an increasingly important role with the rise of DevSecOps and the shifting left of security. A security champion acts as a security ambassador on the development team, maintaining consistent communication and knowledge sharing between the two groups. In a poll conducted during the webinar, Wysopal found that most AppSec advisors oversee nearly 200 developers, so it is evident that security officers need liaisons and well-trained security specialists to relay important information.

Have developer empathy

If the security team is going to start working more closely with the development team, they need a better understanding of what developers do. To effectively develop relationships between the development and security teams, Wysopal emphasized the importance of having developer empathy. Essentially, security should understand developers’ goals, incentives, and, most importantly, struggles. This way, security can understand what motivates developers and how to properly integrate security so as to maintain their culture and deadlines.

The security/development relationship is still a struggle

At the end of the webinar, Wysopal answered viewer questions that were revealing about what organizations are struggling with in the shift toward DevSecOps:

How do I foster a relationship between security and engineering?

Be united: Meet with counterparts and think of yourselves as one team.

Have empathy: What is the other team struggling with? What are the other team’s goals?

Shared accountability: Set common goals and hold each other to them.

What if my security officers don’t want to work with my developers?

Be clear: Working with developers is a mandatory part of their role.

Vocalize benefits: It is better for their career going forward since they will gain a breadth of skills, understanding, and experiences.

How do I get my overworked AppSec experts to prioritize helping DevOps teams?

Security champions: Build this program quickly and effectively to alleviate some of the responsibility from overworked security officers.

Listen to Chris Wysopal’s full talk here.