Daily Archives: October 12, 2018

Kanye’s Password

Everyone and his brother, inside of infosec and out, has been chortling at Kanye’s iPhone password. It’s 00000.

Not everyone is in on the joke.
Some express OUTRAGE: “How dare you share that man’s password!” (It was on CNN; it’s out there now.)
Some (and these remind me of the 4D Chess MAGA people) theorize that Kanye is thinking 12 steps ahead. He knew his password input would be on camera while at the White House, so he temporarily set it to 00000.
And the last group were the virtue signalers I mentioned: “How dare you password-shame Kanye? At least he has a password.”

And that is true. By HAVING a password, encryption is enabled on the iPhone.
It is also true that few of us could exploit knowledge of Kanye’s password, since we don’t have hands on his phone.

But let’s be honest: a password of 00000 is pretty hilarious.

Self-disclosure: one of my older accounts failed the LastPass password audit because the password had been compromised previously and was used in more than one place. But given the difficulty of updating this particular password, and the low risk level, I let it be until a recent forced password change. Sometimes you just use dumb passwords. And if it were filmed, I’d be razzed for it too.

The post Kanye’s Password appeared first on Roger's Information Security Blog.

The Dangers of Linking Your Apple ID to Financial Accounts

The digital wallets of Chinese citizens are under attack thanks to a few bad apples. A recent string of cyberattacks in China utilized stolen Apple IDs to break into customers’ accounts and steal an undisclosed amount of money, according to a Bloomberg report. Almost immediately, Chinese e-transaction giants Tencent Holdings and Alipay warned their customers to monitor their accounts carefully, especially those who have linked their Apple IDs to Alipay accounts, WeChat Pay or their digital wallets and credit cards.

While Alipay works with Apple to figure out how this rare security breach happened and how hackers were able to hijack Apple IDs, it is urging customers to lower their transaction limits to prevent further losses while the investigation remains ongoing. Because Apple has yet to resolve this issue, any users who have linked their Apple IDs to payment methods including WeChat Pay — the popular digital wallet of WeChat, which boasts over a billion users worldwide and can be used to pay for almost anything in China — remain vulnerable to theft. Apple also advises users to change their passwords immediately.

This security breach represents a large-scale example of a trend that continues to rise: the targeting of digital payment services by cybercriminals, who are capitalizing on the growing popularity of these services. Apple IDs represent an easy entry point of attack considering they connect Apple users to all the information, devices and products they care about. That interconnectivity of personal data is a veritable goldmine for cybercriminals if they get their hands on something like an Apple ID. With so much at stake for something as seemingly small as an Apple ID, it’s important for consumers to know how to safeguard their digital identifiers against potential financial theft. Here are some ways they can go about doing so:

  • Make a strong password. Your password is your first line of defense against attack, so make it as hard as possible for potential cybercriminals to crack. Including a combination of uppercase and lowercase letters, numbers, and symbols will help you craft a stronger, more complex password. Avoid easy-to-guess passwords like “1234” or “password” at all costs (see the sketch after this list).
  • Change login information for different accounts. An easy trap is using the same email and password across a wide variety of accounts, including Apple IDs. To better protect your Apple ID, especially if it’s linked to your financial accounts, it’s best to create a wholly original and complex password for it.
  • Enable two-factor authentication. While Apple works on identifying how these hackers hijacked Apple IDs, do yourself a favor and add an extra layer of security to your account by enabling two-factor authentication. By having to provide two or more pieces of information to verify your identity before you can log into your account, you place yourself in a better position to avoid attacks.
  • Monitor your financial accounts. When linking credentials like Apple IDs to your financial accounts, it’s important to regularly check your online bank statements and credit card accounts for any suspicious activity or transactions. Most banks and credit cards offer free credit monitoring as well. You could also invest in an identity protection service, which will reimburse you in the case of identity fraud or financial theft.
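
To make the first two tips concrete, here is a small Python sketch, not tied to any particular product, that generates a unique random password per account using the standard library’s secrets module; a password manager does the same thing for you automatically.

    import secrets
    import string

    SYMBOLS = "!@#$%^&*()-_=+"

    def generate_password(length: int = 20) -> str:
        """Generate a random password mixing upper- and lowercase letters, digits, and symbols."""
        alphabet = string.ascii_letters + string.digits + SYMBOLS
        while True:
            candidate = "".join(secrets.choice(alphabet) for _ in range(length))
            # Keep only candidates that contain every character class.
            if (any(c.islower() for c in candidate)
                    and any(c.isupper() for c in candidate)
                    and any(c.isdigit() for c in candidate)
                    and any(c in SYMBOLS for c in candidate)):
                return candidate

    # One distinct credential per account, e.g. your Apple ID, bank, and email.
    print(generate_password())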

Stay on top of the latest consumer and mobile security threats by following me and @McAfee_Home on Twitter, listening to our podcast Hackable?, and ‘Liking’ us on Facebook.

The post The Dangers of Linking Your Apple ID to Financial Accounts appeared first on McAfee Blogs.

GAO Report on Equifax

I have regularly asked why we don’t know more about the Equifax breach, including in comments in “That Was Close! Reward Reporting of Cybersecurity ‘Near Misses’.” These questions are not intended to attack Equifax. Rather, we can use their breach as a mirror to reflect, and ask questions about how defenses work, and learn things we can bring to our own systems.

Ian Melvin was kind enough to point out a GAO report, “Actions Taken by Equifax and Federal Agencies in Response to the 2017 Breach.” As you’d expect of a GAO report, it is level-headed and provides a set of facts.


However, I still have lots of questions. Some very interesting details start on page 11:

Equifax officials added that, after gaining the ability to issue system-level commands on the online dispute portal that was originally compromised, the attackers issued queries to other databases to search for sensitive data. This search led to a data repository containing PII, as well as unencrypted usernames and passwords that could provide the attackers access to several other Equifax databases. According to Equifax’s interim Chief Security Officer, the attackers were able to leverage these credentials to expand their access beyond the 3 databases associated with the online dispute portal, to include an additional 48 unrelated databases.

The use of encryption allowed the attackers to blend in their malicious actions with regular activity on the Equifax network and, thus, secretly maintain a presence on that network as they launched further attacks without being detected by Equifax’s scanning software. (Editor’s note: I’ve inverted the order of the paragraphs from the source.)

So my questions include:

  • How did the attackers get root?
  • Why wasn’t the root shell noticed? Would our organization notice an extra root shell in production?
  • How did they get access to the other 48 databases?
  • Why didn’t the pattern of connections raise a flag? “As before, Equifax officials stated that the attackers were able to disguise their presence by blending in with regular activity on the network.” I find this statement surprising, and it raises questions: Does the dispute resolution database normally connect to these other databases and run the queries which were run? How was that normal activity characterized and analyzed? Encryption provides content confidentiality, not metadata confidentiality. Would we detect these extra connections? (See the sketch after this list.)
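
To make that last question concrete, here is a minimal sketch of the kind of baseline check that could surface it. It assumes you can export connection records (source host, destination database) from flow logs or database audit logs into CSV; the column names are illustrative, not any particular product’s schema.

    import csv
    from collections import defaultdict

    def baseline_pairs(path):
        """Collect (source_host, dest_database) pairs seen in a historical log export."""
        pairs = set()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                pairs.add((row["source_host"], row["dest_database"]))
        return pairs

    def new_connections(baseline_path, today_path):
        """Return connection pairs seen today that never appeared during the baseline window."""
        baseline = baseline_pairs(baseline_path)
        flagged = defaultdict(int)
        with open(today_path, newline="") as f:
            for row in csv.DictReader(f):
                pair = (row["source_host"], row["dest_database"])
                if pair not in baseline:
                    flagged[pair] += 1
        return flagged

    # A dispute-portal host suddenly querying dozens of unrelated databases would
    # show up here as a burst of never-before-seen pairs.
    for (src, db), count in new_connections("flows_90d.csv", "flows_today.csv").items():
        print(f"NEW: {src} -> {db} ({count} connections)")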

Specifically, while Equifax had installed a device to inspect network traffic for evidence of malicious activity, a misconfiguration allowed encrypted traffic to pass through the network without being inspected. According to Equifax officials, the misconfiguration was due to an expired digital certificate. The certificate had expired about 10 months before the breach occurred, meaning that encrypted traffic was not being inspected throughout that period.

Would your organization notice if one of dozens or hundreds of IDS sensors went quiet for a week, or if one ruleset stopped firing?
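
On the expired-certificate point above, an external check run on a schedule would have flagged the problem months before the breach. Here is a minimal sketch using only Python’s standard library; the hostname is a placeholder, and it assumes the inspection device presents a certificate your trust store accepts.

    import socket
    import ssl
    from datetime import datetime, timezone

    def days_until_expiry(host: str, port: int = 443) -> int:
        """Connect to host:port over TLS and return the days until its certificate expires."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
        return (expires - datetime.now(timezone.utc)).days

    # Alert well before expiry so traffic inspection never silently stops.
    remaining = days_until_expiry("inspection-device.example.com")
    if remaining < 30:
        print(f"WARNING: certificate expires in {remaining} days")

Pair a job like this with an alert that fires when a sensor stops reporting events, and both failure modes above become visible.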

Google and Android have your back by protecting your backups



Android is all about choice, and as such it strives to give users many options to protect their data. By combining Android’s Backup Service and Google Cloud’s Titan technology, Android has taken additional steps to secure users' data while maintaining their privacy.

Starting in Android Pie, devices can take advantage of a new capability where backed-up application data can only be decrypted by a key that is randomly generated at the client. This decryption key is encrypted using the user's lockscreen PIN/pattern/passcode, which isn’t known by Google. Then, this passcode-protected key material is encrypted to a Titan security chip on our datacenter floor. The Titan chip is configured to only release the backup decryption key when presented with a correct claim derived from the user's passcode. Because the Titan chip must authorize every access to the decryption key, it can permanently block access after too many incorrect attempts at guessing the user’s passcode, thus mitigating brute force attacks. The limited number of incorrect attempts is strictly enforced by a custom Titan firmware that cannot be updated without erasing the contents of the chip. By design, this means that no one (including Google) can access a user's backed-up application data without specifically knowing their passcode.
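
The flow is easier to see in code. Below is a deliberately simplified sketch of the client-side key wrapping described above, written with the Python cryptography package; it is illustrative only and is not Google’s implementation. In particular, the real design hands the wrapped key material to a Titan chip that enforces the attempt limit, which a software-only sketch cannot reproduce.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def back_up(app_data: bytes, passcode: str):
        """Encrypt app_data with a random client-side key, then wrap that key with the passcode."""
        data_key = AESGCM.generate_key(bit_length=256)   # random key generated at the client
        data_nonce = os.urandom(12)
        ciphertext = AESGCM(data_key).encrypt(data_nonce, app_data, None)

        salt = os.urandom(16)
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=310_000)
        wrapping_key = kdf.derive(passcode.encode())     # derived from the lockscreen PIN/pattern/passcode
        wrap_nonce = os.urandom(12)
        wrapped_key = AESGCM(wrapping_key).encrypt(wrap_nonce, data_key, None)

        # Only the ciphertext and the wrapped key leave the device; the passcode never does,
        # so the backup can be decrypted only by someone who knows the passcode.
        return ciphertext, data_nonce, wrapped_key, wrap_nonce, salt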

To increase our confidence that this new technology securely prevents anyone from accessing users' backed-up application data, the Android Security & Privacy team hired global cyber security and risk mitigation expert NCC Group to complete a security audit. The audit’s findings included positive assessments of Google’s security design processes, validation of code quality, and confirmation that mitigations for known attack vectors were taken into account prior to launching the service. Some issues were discovered during the audit, but engineers corrected them quickly. For more details on how the end-to-end service works and a detailed report of NCC Group’s findings, click here.

Getting external reviews of our security efforts is one of the many ways that Google and Android maintain transparency and openness, which in turn helps users feel safe when it comes to their data. Whether it’s hundreds of hours of gaming data or your personalized preferences in your favorite Google apps, our users' information is protected.

We want to acknowledge the contributions of Shabsi Walfish, Software Engineering Lead for Identity and Authentication, to this effort.

Support FBI whistleblower Terry Albury, who is set to be sentenced next week


Terry Albury

FBI whistleblower Terry Albury, the second person prosecuted by the Trump administration for leaking information to the press, will be sentenced next week in federal court. The documents he is assumed to have shared detail the FBI’s recruitment tactics and how the agency monitors journalists. For his act of courage, he could face years in prison.

Albury pleaded guilty to two counts of violating the Espionage Act in March—each punishable by up to ten years in prison. Passed in 1917, the archaic law was originally intended for use against foreign spies, but since its inception it has been weaponized against whistleblowers and journalists’ sources. (Read more about the history of the Espionage Act here.)

Albury is no spy. His attorneys have described him as being driven to action by a “conscientious commitment to long-term national security and addressing the well-documented systemic biases within the FBI.” Albury has stated he witnessed discrimination both while working as the only black field agent in the agency’s Minneapolis office and while observing the profiling of minority communities in Minnesota.

Although the complaint against Albury did not name a specific news organization, he is assumed to be the source behind The Intercept’s important “FBI Files” investigative series that  detailed controversial FBI tactics for investigating minorities and for monitoring journalists through National Security Letters (NSLs).

By using NSLs, the FBI can obtain journalists’ phone records with no court oversight whatsoever and can circumvent the Justice Department’s normally strict guidelines for spying on journalists. The fact that we know this is (presumably) thanks to Albury.

The charges against Albury came as the Justice Department has ramped up leak investigations by 800% compared with the previous administration. Albury’s case is the latest in a string of leak prosecutions under the Espionage Act, a travesty normalized by the Obama administration and expanded on by Trump.

Albury will be sentenced on October 18 in St. Paul, Minnesota. His attorneys argue that guidelines suggest a sentence of approximately three years, but that given his moral character, role as a father of two young children, and the fact that he no longer works for the FBI, a sentence of probation would be most appropriate.

In the sentencing motion, Albury’s defense draws attention to his workplace environment, and to how the racism he experienced within the FBI and the racial profiling he witnessed the agency propagate in Minnesota sickened and isolated him.

“In 2016, Terry Albury, a highly-regarded and decorated FBI agent in the Minneapolis office (and who had previously served the FBI in Iraq), and the only black field agent in his region at that time, disclosed classified materials to a reporter relating to FBI surveillance, profiling, and informant-recruitment practices in national security cases,” the motion reads. “He did so as an act of conscience, of patriotism and in the public interest, and for no personal gain whatsoever.”

Trevor Timm, executive director of Freedom of the Press Foundation, noted in April that former FBI officials like James Comey and Andrew McCabe have received extensive media coverage and public and financial support—and don’t face prosecution. But Albury, who apparently released information with huge implications for racial inequity and press freedom, has received very little such support.

Albury's lawyers have launched a GoFundMe page to help with his legal expenses, which you can contribute to here.

The justice system is deeply broken when a courageous whistleblower like Albury faces any prison time at all for speaking out about racial profiling and discrimination within his workplace and for making details of the monitoring of the press public.

McAfee Database Security in Rapid Deployment Mode

Location independent – speed is everything

Deploying any software into the enterprise has always been a race against time. Every time someone has to manually install software, valuable time for business-critical tasks is lost. Ever since the cloud became more than just something for fast-paced start-ups, the focus on speedy deployment has only increased.

To deploy endpoint-related software quickly and seamlessly, you may use Windows Server Update Services (WSUS) or something similar.

Then there is software that requires more configuration, where setting up all the required additional services and dependencies is generally a manual process; or, in many cases, these are baked into virtual machines (VMs) for easier deployment.

Plus, VMs, as practical as they might be, have their own challenges. The overhead created by deploying multiple VMs is often considerable, and creating a new set of VMs, even from templates, is generally not what would be considered fast and is impractical for environments requiring rapid deploy-and-destroy cycles.

The cloud has proven what rapid deployment should look like. New resources are created within a few minutes and are always only a few clicks away. Scalable application services, VMs, backend databases: everything is easy to roll out and get into production.

Introducing Containers

Container deployments have been around for a number of years, but they have been growing steadily in popularity, especially in the cloud. It’s easy to see why: containers are ultra-portable, ultra-lightweight and ultra-fast to deploy.

This lends itself to environments where rapid deployment is a must-have. Most cloud platform providers have introduced support for their own container environments, and simple, fast, push-button deployments in both the cloud and on premises are becoming ever more popular.

With this in mind, McAfee Database Security has recently introduced a containerized version of its offering, both as a Docker image and through the Microsoft Azure Marketplace. Not only is this the first security software offered as a container in the Azure Marketplace, it also underlines McAfee Database Security’s clear direction: providing one set of controls for monitoring databases, no matter their location. This allows for fast and easy deployment of the Database Security solution without lengthy install processes.
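
For a sense of what that deployment can look like, here is a hedged sketch using the Docker SDK for Python (pip install docker). The image reference and port are placeholders, not the actual McAfee Database Security listing; consult the Docker or Azure Marketplace entry for the real values.

    import docker

    # Placeholder image and port; check the official marketplace listing for the real ones.
    IMAGE = "example.registry.io/mcafee/database-security:latest"

    client = docker.from_env()
    container = client.containers.run(
        IMAGE,
        name="mcafee-dbsec",
        detach=True,                      # run in the background
        ports={"8443/tcp": 8443},         # expose the management console (placeholder port)
        restart_policy={"Name": "unless-stopped"},
    )
    print(f"Started {container.name} ({container.short_id})")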

Take a look at what I had to say at this year’s Microsoft Ignite conference about our recent McAfee Database Security container release:

More information on McAfee Database Security in the Azure Marketplace can be found here, and don’t forget to take a look at the product page.

The post McAfee Database Security in Rapid Deployment Mode appeared first on McAfee Blogs.

The Language and Nature of Fileless Attacks Over Time

The language of cybersecurity evolves in step with changes in attack and defense tactics. You can get a sense for such dynamics by examining the term fileless. It fascinates me not only because of its relevance to malware—which is one of my passions—but also because of its knack for agitating many security practitioners.

I traced the origins of “fileless” to 2001, when Eugene Kaspersky (of Kaspersky Lab) used it in reference to the Code Red worm’s ability to exist solely in memory. Two years later, Peter Szor defined this term in a patent for Symantec, explaining that such malware doesn’t reside in a file, but instead “appends itself to an active process in memory.”

Eugene was prophetic in predicting that fileless malware “will become one of the most widespread forms of malicious programs” due to antivirus’ ineffectiveness against such threats. Today, when I look at the ways in which malware bypasses detection, the evasion techniques often fall under the fileless umbrella, though the term expanded beyond its original meaning.

Fileless was synonymous with in-memory until around 2014.

The adversary’s challenge with purely in-memory malware is that it disappears once the system restarts. In 2014, Kevin Gossett’s Symantec article explained how the Poweliks malware overcame this limitation by using the legitimate Windows programs rundll32.exe and powershell.exe to maintain persistence, extracting and executing malicious scripts from the registry. Kevin described this threat as “fileless” because it avoided placing code directly on the file system. Paul Rascagnères at G Data further explained that Poweliks infected systems by using a booby-trapped Microsoft Word document.

The Poweliks discussion, and similar malware that appeared afterward, set the tone for the way fileless attacks are described today. Yes, fileless attacks strive to maintain clearly malicious code solely or mostly in memory. They also tend to involve malicious documents and scripts. They often misuse utilities built into the operating system and abuse various capabilities of Windows, such as the registry, to maintain persistence.

However, the growing ambiguity behind the modern use of the term fileless is making it increasingly difficult to understand what specific methods fileless malware uses for evasion. It’s time to disambiguate this word to hold fruitful conversations about our ability to defend against its underlying tactics.

Here’s my perspective on the methods that comprise modern fileless attacks:

  • Malicious Documents: They can act as flexible containers for other files. Documents can also carry exploits that execute malicious code. They can execute malicious logic that begins the infection and initiates the next link in the infection chain.
  • Malicious Scripts: They can interact with the OS without the restrictions that some applications, such as web browsers, might impose. Scripts are harder for anti-malware tools to detect and control than compiled executables. In addition, they offer an opportunity to split malicious logic across several processes.
  • Living Off the Land: Microsoft Windows includes numerous utilities that attackers can use to execute malicious code with the help of a trusted process. These tools allow adversaries to “trampoline” from one stage of the attack to another without relying on compiled malicious executables.
  • Malicious Code in Memory: Memory injection abuses features of Microsoft Windows to interact with the OS even without exploiting vulnerabilities. Attackers can wrap their malware into scripts, documents or other executables, extracting the payload into memory at runtime.

While some attacks and malware families are fileless in all aspects of their operation, most modern malware that evades detection includes at least some fileless capabilities. Such techniques allow adversaries to operate in the periphery of anti-malware software. The success of such attack methods is the reason for the continued use of the term fileless in discussions among cybersecurity professionals.
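
From the defender’s side, these techniques still leave telemetry. The sketch below is a minimal, illustrative heuristic, not a production detection: it scans exported process-creation events (the CSV column names are assumptions) for commonly abused living-off-the-land binaries launched with suspicious arguments.

    import csv
    import re

    # Heuristic patterns for commonly abused built-in utilities (illustrative, not exhaustive).
    SUSPICIOUS = [
        re.compile(r"powershell(\.exe)?\s+.*-enc(odedcommand)?", re.I),  # encoded PowerShell
        re.compile(r"rundll32(\.exe)?\s+.*javascript:", re.I),           # rundll32 script abuse
        re.compile(r"mshta(\.exe)?\s+https?://", re.I),                  # mshta fetching remote content
        re.compile(r"regsvr32(\.exe)?\s+.*/i:https?://", re.I),          # regsvr32 pulling a remote scriptlet
    ]

    def flag_events(path):
        """Yield (parent_image, command_line) for process-creation events matching any heuristic."""
        with open(path, newline="") as f:
            for row in csv.DictReader(f):    # assumed columns: parent_image, command_line
                cmd = row["command_line"]
                if any(p.search(cmd) for p in SUSPICIOUS):
                    yield row["parent_image"], cmd

    # An Office parent process spawning encoded PowerShell is a classic fileless chain.
    for parent, cmd in flag_events("process_events.csv"):
        print(f"{parent} -> {cmd[:120]}")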

Language evolves as people adjust the way they use words and the meaning they assign to them. This certainly happened to fileless, as the industry looked for ways to discuss evasive threats that avoided the file system and misused OS features. For a deeper dive into this topic, read the following three articles upon which I based this overview:

CVE-2018-12469 (enterprise_developer, enterprise_server)

Incorrect handling of an invalid value for an HTTP request parameter by Directory Server (aka Enterprise Server Administration web UI) in Micro Focus Enterprise Developer and Enterprise Server 2.3 Update 2 and earlier, 3.0 before Patch Update 12, and 4.0 before Patch Update 2 causes a null pointer dereference (CWE-476) and subsequent denial of service due to process termination.