Daily Archives: October 12, 2018

Kanye’s Password

Everyone and his brother, inside infosec and out, has been chortling at Kanye’s iPhone password. It’s 00000.

Not everyone is in on the joke.
Some express OUTRAGE: “How dare you share that man’s password!” (It was on CNN; it’s out there now.)
Some (and these remind me of the 4D Chess MAGA people) theorize that Kanye is thinking 12 steps ahead: he knew his password input would be on camera while at the White House, so he temporarily set it to 00000.
And last come the virtue signalers: “How dare you password-shame Kanye; at least he has a password.”

And it is true. By HAVING a password, Kanye enables encryption on the iPhone.
It is also true that few of us could exploit knowledge of Kanye’s password, since we don’t have hands on his phone.

But let’s be honest: a password of 00000 is pretty hilarious.

Self-disclosure: one of my older accounts failed the LastPass password audit because the password had previously been compromised and it’s used in more than one place. But given the difficulty of updating this particular password, and the low risk level, I let it be until a forced password change recently. Sometimes you just use dumb passwords. And if it were filmed, I’d be razzed for it too.


GAO Report on Equifax

I have regularly asked why we don’t know more about the Equifax breach, including in comments in “That Was Close! Reward Reporting of Cybersecurity ‘Near Misses’.” These questions are not intended to attack Equifax. Rather, we can use their breach as a mirror: reflect on it, ask questions about how defenses work, and learn things we can bring to our own systems.

Ian Melvin was kind enough to point out a GAO report, “Actions Taken by Equifax and Federal Agencies in Response to the 2017 Breach.” As you’d expect of a GAO report, it is level-headed and provides a set of facts.


However, I still have lots of questions. Some very interesting details start on page 11:

Equifax officials added that, after gaining the ability to issue system-level commands on the online dispute portal that was originally compromised, the attackers issued queries to other databases to search for sensitive data. This search led to a data repository containing PII, as well as unencrypted usernames and passwords that could provide the attackers access to several other Equifax databases. According to Equifax’s interim Chief Security Officer, the attackers were able to leverage these credentials to expand their access beyond the 3 databases associated with the online dispute portal, to include an additional 48 unrelated databases.

The use of encryption allowed the attackers to blend in their malicious actions with regular activity on the Equifax network and, thus, secretly maintain a presence on that network as they launched further attacks without being detected by Equifax’s scanning software. (Editor’s note: I’ve inverted the order of the paragraphs from the source.)

So my questions include:

  • How did the attackers get root?
  • Why wasn’t the root shell noticed? Would our organization notice an extra root shell in production?
  • How did they get access to the other 48 databases?
  • Why didn’t the pattern of connections raise a flag? “As before, Equifax officials stated that the attackers were able to disguise their presence by blending in with regular activity on the network.” I find this statement surprising, and it raises questions: Does the dispute resolution database normally connect to these other databases and run the queries that were run? How was that normal activity characterized and analyzed? Encryption provides content confidentiality, not metadata confidentiality. Would we detect these extra connections? (A minimal detection sketch follows this list.)
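On that last question, here is a minimal sketch of one way to look: learn the set of (client, database) connection pairs during a known-good period, then alert on pairs never seen before. The CSV layout and field names are illustrative assumptions, not anything from the GAO report.

```python
# Minimal sketch: flag database connections outside a learned baseline of
# (client, database) pairs. The log format is an assumption for illustration.
import csv
from collections import Counter

def learn_baseline(log_path):
    """Count (client, database) pairs seen during a known-good period."""
    baseline = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: client, database
            baseline[(row["client"], row["database"])] += 1
    return baseline

def flag_new_pairs(log_path, baseline):
    """Yield connections whose (client, database) pair was never baselined."""
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            pair = (row["client"], row["database"])
            if pair not in baseline:
                yield pair

if __name__ == "__main__":
    baseline = learn_baseline("connections_last_month.csv")
    for client, db in flag_new_pairs("connections_today.csv", baseline):
        print(f"ALERT: first-seen connection {client} -> {db}")
```

Encryption would hide the query contents from this kind of check, but not the fact that a dispute portal host suddenly talks to 48 new databases.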

Specifically, while Equifax had installed a device to inspect network traffic for evidence of malicious activity, a misconfiguration allowed encrypted traffic to pass through the network without being inspected. According to Equifax officials, the misconfiguration was due to an expired digital certificate. The certificate had expired about 10 months before the breach occurred, meaning that encrypted traffic was not being inspected throughout that period.

Would your organization notice if one of dozens or hundreds of IDS sensors went quiet for a week, or if one ruleset stopped firing?
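A minimal sketch of a dead-sensor check, assuming you can pull a timestamp for each sensor’s most recent alert or heartbeat (the one-day threshold and sensor names are arbitrary illustrations):

```python
# Minimal sketch: detect a sensor that has gone quiet. Assumes a dict of
# sensor name -> timestamp of its most recent alert or heartbeat.
from datetime import datetime, timedelta

def quiet_sensors(last_seen, now, max_silence=timedelta(days=1)):
    """Return sensors whose most recent event is older than max_silence."""
    return [name for name, ts in last_seen.items() if now - ts > max_silence]

last_seen = {
    "ids-dmz-01": datetime(2018, 10, 12, 9, 30),
    "ids-core-02": datetime(2018, 10, 2, 14, 0),  # silent for ten days
}
for name in quiet_sensors(last_seen, now=datetime(2018, 10, 12, 12, 0)):
    print(f"WARNING: {name} has produced no events recently -- dead, or blind?")
```

A sensor that stops alerting is indistinguishable from a network that stopped being attacked, unless you check.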

Google and Android have your back by protecting your backups



Android is all about choice. As such, Android strives to provide users with many options to protect their data. By combining Android’s Backup Service and Google Cloud’s Titan Technology, Android has taken additional steps toward securing users’ data while maintaining their privacy.

Starting in Android Pie, devices can take advantage of a new capability where backed-up application data can only be decrypted by a key that is randomly generated at the client. This decryption key is encrypted using the user's lockscreen PIN/pattern/passcode, which isn’t known by Google. Then, this passcode-protected key material is encrypted to a Titan security chip on our datacenter floor. The Titan chip is configured to only release the backup decryption key when presented with a correct claim derived from the user's passcode. Because the Titan chip must authorize every access to the decryption key, it can permanently block access after too many incorrect attempts at guessing the user’s passcode, thus mitigating brute force attacks. The limited number of incorrect attempts is strictly enforced by a custom Titan firmware that cannot be updated without erasing the contents of the chip. By design, this means that no one (including Google) can access a user's backed-up application data without specifically knowing their passcode.
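To make that flow concrete, here is a minimal sketch of the client-side half of the design, using PBKDF2 and AES-GCM as stand-ins (the actual primitives and parameters Google uses are not public in this post, so these are assumptions). The Titan chip’s rate-limited release of the wrapped key is deliberately not modeled.

```python
# Minimal sketch (assumptions, not Google's implementation) of wrapping a
# random backup key under a passcode-derived key. Requires the
# 'cryptography' package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _passcode_key(passcode, salt):
    # Derive a wrapping key from the lockscreen PIN/pattern/passcode.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=200_000)  # illustrative parameters
    return kdf.derive(passcode)

def wrap_backup_key(passcode, salt):
    data_key = AESGCM.generate_key(bit_length=256)  # random, generated client-side
    nonce = os.urandom(12)
    wrapped = AESGCM(_passcode_key(passcode, salt)).encrypt(nonce, data_key, None)
    return data_key, nonce, wrapped   # only nonce + wrapped leave the device

def unwrap_backup_key(passcode, salt, nonce, wrapped):
    # Raises InvalidTag on a wrong passcode -- the attempt counting that
    # limits such guesses is the Titan chip's job, not shown here.
    return AESGCM(_passcode_key(passcode, salt)).decrypt(nonce, wrapped, None)

salt = os.urandom(16)
data_key, nonce, wrapped = wrap_backup_key(b"123456", salt)
assert unwrap_backup_key(b"123456", salt, nonce, wrapped) == data_key
```

The point of the design is that the server only ever holds the wrapped blob, so offline guessing is useless without the chip’s cooperation.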

To increase our confidence that this new technology securely prevents anyone from accessing users’ backed-up application data, the Android Security & Privacy team hired global cybersecurity and risk mitigation expert NCC Group to complete a security audit. Among the positive outcomes were validation of Google’s security design processes and code quality, and confirmation that mitigations for known attack vectors were taken into account prior to launching the service. While the audit did uncover some issues, engineers corrected them quickly. For more details on how the end-to-end service works and a detailed report of NCC Group’s findings, click here.

Getting external reviews of our security efforts is one of many ways that Google and Android maintain transparency and openness, which in turn helps users feel safe when it comes to their data. Whether it’s hundreds of hours of gaming data or your personalized preferences in your favorite Google apps, our users’ information is protected.

We want to acknowledge the contributions of Shabsi Walfish, Software Engineering Lead, Identity and Authentication, to this effort.

Support FBI whistleblower Terry Albury, who is set to be sentenced next week


Terry Albury

FBI whistleblower Terry Albury, the second person prosecuted by the Trump administration for leaking information to the press, will be sentenced next week in federal court. The documents he is assumed to have shared detail the FBI’s recruitment tactics and how the agency monitors journalists. For his act of courage, he could face years in prison.

Albury pleaded guilty to two counts of violating the Espionage Act in March—each punishable by up to ten years in prison. Passed in 1917, the archaic law was originally intended for use against foreign spies, but since its inception it has been weaponized against whistleblowers and journalists’ sources. (Read more about the history of the Espionage Act here.)

Albury is no spy. His attorneys have described him as being driven to action by a “conscientious commitment to long-term national security and addressing the well-documented systemic biases within the FBI.” Albury has stated he witnessed discrimination both while working as the only black field agent in the agency’s Minneapolis office and by observing profiling of minority communities in Minnesota.

Although the complaint against Albury did not name a specific news organization, he is assumed to be the source behind The Intercept’s important “FBI Files” investigative series, which detailed controversial FBI tactics for investigating minorities and for monitoring journalists through National Security Letters (NSLs).

By using NSLs, the FBI can obtain journalists’ phone records with no court oversight whatsoever and can circumvent the Justice Department’s normally strict guidelines for spying on journalists. The fact that we know this is (presumably) thanks to Albury.

The charges against Albury came as the Justice Department ramped up its leak investigations, an 800% increase over the previous administration. Albury’s case is the latest in a travesty of leak prosecutions under the Espionage Act, a practice normalized by the Obama administration and expanded under Trump.

Albury will be sentenced on October 18 in St. Paul, Minnesota. His attorneys argue that guidelines suggest a sentence of approximately three years, but that given his moral character, role as a father of two young children, and the fact that he no longer works for the FBI, a sentence of probation would be most appropriate.

In the sentencing motion, Albury’s defense draws attention to his workplace environment, and to how the racism he experienced within the FBI, and the racial profiling he witnessed the agency propagate in Minnesota, sickened and isolated him.

“In 2016, Terry Albury, a highly-regarded and decorated FBI agent in the Minneapolis office (and who had previously served the FBI in Iraq), and the only black field agent in his region at that time, disclosed classified materials to a reporter relating to FBI surveillance, profiling, and informant-recruitment practices in national security cases,” the motion reads. “He did so as an act of conscience, of patriotism and in the public interest, and for no personal gain whatsoever.”

Trevor Timm, executive director of Freedom of the Press Foundation, noted in April that former FBI officials like James Comey and Andrew McCabe have received extensive media coverage and public and financial support—and don’t face prosecution. But Albury, who apparently released information with huge implications for racial inequity and press freedom, has received very little such support.

Albury's lawyers have launched a GoFundMe page to help with his legal expenses, which you can contribute to here.

The justice system is deeply broken if a courageous whistleblower like Albury faces any prison time at all for speaking out about racial profiling and discrimination within his workplace and for making details of the monitoring of the press public.

Pushing Left, Like a Boss: Part 4 — Secure Coding

As previously published in my blog, SheHacksPurple.
In the previous article in this series we discussed secure design concepts such as least privilege, reducing attack surface, failing safe, and defense in depth (layered protection). In this article, we are going to talk about secure coding principles that can be used to help guide developers when implementing security controls within software.
As we discussed before, a security flaw is a design problem, while a security bug is an implementation problem (a problem in the code). Whoever wrote that code had the best intentions, but may not have had enough information, time or guidance on how to do it correctly.  
Coding Phase of the SDLC

What is “secure coding”?

Sometimes called “defensive coding,” it is the act of coding with security in mind and guarding against accidental or intentional misuse of your application. It means assuming that your application will be used in a myriad of ways (not necessarily just the way you intended) and coding it accordingly.

Why is secure coding important?

I’m not going to answer that. If you are reading this blog, you already understand why secure coding is important. I think the real question here is: “How do I explain how important it is to developers? To project managers? To executives? How do I get them to give me time in the project for it?” I’m asked this quite often, so let me give you a few options.
  • You can explain using statistics and numbers, to predict the financial implications of a major security incident or breach. You can provide a cost/benefit analysis of how much less an AppSec program would cost. I used this approach and I was approved to launch my first AppSec program.
  • You can explain the business implications of a major incident, the loss of reputation or legal implications that would result from a major incident or data breach. I tend to use this when trying to justify large changes such as creating a disaster recovery site, or an AppSec advocacy program, or giving developers security tools (that tends to scare the pants off of most management types).
  • You can create a proof of concept to explain a current vulnerability you have in your product, to show them directly the consequences that can occur. This might lose you some friends, but it certainly does get your point across.
  • You can sit down with whoever is blocking you and have a real discussion about why you are worried about your current security posture. Explain it to them like they are a highly intelligent person who happens to not know much about security (which means respectfully, and with enough detail that they understand the gravity of the situation). It is at this point that I would tell them that I need them to sign off on the risk if we do not correct the problem, and that I can no longer be responsible for it. At that point, either 1) I get what I want, or 2) I know this is no longer my responsibility.

Why are users the worst?

The one thing that you should always remember when coding defensively is to assume that users will do something that you did not plan on.
Photo: https://wiki.sei.cmu.edu/confluence/display/seccode/Top+10+Secure+Coding+Practices
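As a tiny illustration of that mindset, here is a minimal sketch of defensive input handling: validate against explicit bounds and fail safely rather than trusting the caller. The quantity rules are invented for the example.

```python
# Minimal sketch of defensive input handling: allow-list what you expect,
# reject everything else, and fail safely with a clear error.
def parse_quantity(raw):
    """Accept only a small positive integer; reject everything else."""
    try:
        qty = int(raw, 10)
    except (TypeError, ValueError):
        raise ValueError("quantity must be a whole number")
    if not 1 <= qty <= 100:           # users WILL send 0, -5, and 10**9
        raise ValueError("quantity must be between 1 and 100")
    return qty

for attempt in ["3", "-5", "1e9", "DROP TABLE orders", None]:
    try:
        print(attempt, "->", parse_quantity(attempt))
    except ValueError as err:
        print(attempt, "-> rejected:", err)
```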
In the next post in this series I intend to publish a secure coding guideline. But before we continue on to that, please allow me to present my #1 piece of advice on this topic: always use the security features in your framework. If your framework passes an anti-CSRF token for you, output-encodes your data, or handles session management, use those features! *Never* write your own security control if one is available to you in your framework. This is especially true of encryption; leave it to the experts. Also, whenever possible, use the latest and greatest version of your framework — it’s usually the most secure version. Keep your framework up to date for less technical debt and more cool features.
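For example, here is an illustrative excerpt (using Django, since it ships CSRF protection out of the box, and assuming a hypothetical update_profile view): keep the framework’s middleware enabled and use its decorator rather than inventing your own token scheme.

```python
# Illustrative Django excerpts: lean on the framework's CSRF protection
# instead of writing your own. (Settings list is abbreviated.)
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",  # framework-provided CSRF checks
    # ...
]

from django.views.decorators.csrf import csrf_protect

@csrf_protect        # belt-and-suspenders if middleware config ever changes
def update_profile(request):
    # forms rendered for this view should include {% csrf_token %}
    ...
```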
Up next in the ‘Pushing Left, Like a Boss’ series: a secure coding guideline.

The Language and Nature of Fileless Attacks Over Time

The language of cybersecurity evolves in step with changes in attack and defense tactics. You can get a sense for such dynamics by examining the term fileless. It fascinates me not only because of its relevance to malware—which is one of my passions—but also because of its knack for agitating many security practitioners.

I traced the origins of “fileless” to 2001, when Eugene Kaspersky (of Kaspersky Lab) used it in reference to the Code Red worm’s ability to exist solely in memory. Two years later, Peter Szor defined the term in a patent for Symantec, explaining that such malware doesn’t reside in a file, but instead “appends itself to an active process in memory.”

Eugene was prophetic in predicting that fileless malware “will become one of the most widespread forms of malicious programs” due to antivirus’ ineffectiveness against such threats. Today, when I look at the ways in which malware bypasses detection, the evasion techniques often fall under the fileless umbrella, though the term expanded beyond its original meaning.

Fileless was synonymous with in-memory until around 2014.

The adversary’s challenge with purely in-memory malware is that it disappears once the system restarts. In 2014, Kevin Gossett’s Symantec article explained how Poweliks malware overcame this limitation by using the legitimate Windows programs rundll32.exe and powershell.exe to maintain persistence, extracting and executing malicious scripts from the registry. Kevin described this threat as “fileless” because it avoided placing code directly on the file system. Paul Rascagnères at G Data further explained that Poweliks infected systems by using a booby-trapped Microsoft Word document.

The Poweliks discussion, and similar malware that appeared afterwards, set the tone for the way fileless attacks are described today. Yes, fileless attacks strive to maintain clearly malicious code solely or mostly in memory. They also tend to involve malicious documents and scripts. They often misuse utilities built into the operating system and abuse various capabilities of Windows, such as the registry, to maintain persistence.
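For a sense of what hunting for this style of persistence looks like, here is a minimal sketch that enumerates the classic Run keys and flags entries invoking script hosts, the pattern Poweliks-style persistence relies on. It is Windows-only, and the watch-list is an illustrative assumption, not a complete detection.

```python
# Minimal sketch: flag autorun registry entries that launch script hosts.
import winreg

SUSPICIOUS = ("rundll32", "powershell", "mshta", "wscript", "javascript:")
RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

for hive, path in RUN_KEYS:
    try:
        key = winreg.OpenKey(hive, path)
    except OSError:
        continue  # key absent or access denied
    i = 0
    while True:
        try:
            name, value, _ = winreg.EnumValue(key, i)
        except OSError:
            break  # no more values under this key
        if any(s in str(value).lower() for s in SUSPICIOUS):
            print(f"suspicious autorun: {path}\\{name} = {value}")
        i += 1
```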

However, the growing ambiguity behind the modern use of the term fileless is making it increasingly difficult to understand what specific methods fileless malware uses for evasion. It’s time to disambiguate this word to hold fruitful conversations about our ability to defend against its underlying tactics.

Here’s my perspective on the methods that comprise modern fileless attacks:

  • Malicious Documents: They can act as flexible containers for other files. Documents can also carry exploits that execute malicious code. They can execute malicious logic that begins the infection and initiates the next link in the infection chain.
  • Malicious Scripts: They can interact with the OS without the restrictions that some applications, such as web browsers, might impose. Scripts are harder for anti-malware tools to detect and control than compiled executables. In addition, they offer an opportunity to split malicious logic across several processes.
  • Living Off the Land: Microsoft Windows includes numerous utilities that attackers can use to execute malicious code with the help of a trusted process. These tools allow adversaries to “trampoline” from one stage of the attack to another without relying on compiled malicious executables (see the detection sketch after this list).
  • Malicious Code in Memory: Memory injection abuses features of Microsoft Windows to interact with the OS even without exploiting vulnerabilities. Attackers can wrap their malware into scripts, documents or other executables, extracting payload into memory during runtime.
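To ground the “living off the land” bullet, here is a minimal sketch of the detection side: scan process-creation events for an Office application spawning a script host, a classic fileless tell. The event format and the parent/child pairings are illustrative assumptions, not a complete ruleset.

```python
# Minimal sketch: flag "living off the land" process chains such as an
# Office application spawning a script host.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}
SCRIPT_HOSTS = {"powershell.exe", "wscript.exe", "cscript.exe",
                "mshta.exe", "rundll32.exe"}

def suspicious_spawns(events):
    """events: iterable of dicts with 'parent' and 'child' image names."""
    for e in events:
        if (e["parent"].lower() in OFFICE_PARENTS
                and e["child"].lower() in SCRIPT_HOSTS):
            yield e

events = [
    {"parent": "explorer.exe", "child": "winword.exe"},
    {"parent": "WINWORD.EXE", "child": "powershell.exe"},  # should fire
]
for e in suspicious_spawns(events):
    print("ALERT: {parent} spawned {child}".format(**e))
```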

While some attacks and malware families are fileless in all aspects of their operation, most modern malware that evades detection includes at least some fileless capabilities. Such techniques allow adversaries to operate in the periphery of anti-malware software. The success of these methods is the reason for the continued use of the term fileless in discussions among cybersecurity professionals.

Language evolves as people adjust the way they use words and the meaning they assign to them. This certainly happened to fileless, as the industry looked for ways to discuss evasive threats that avoided the file system and misused OS features. For a deeper dive into this topic, read the following three articles upon which I based this overview: