Category Archives: security

Cisco fixes Remote Code Execution flaws in Webex Network Recording Player

Cisco released security patches to fix RCE flaws in the Webex Network Recording Player for Advanced Recording Format (ARF).

Cisco released security patches to address vulnerabilities in the Webex Network Recording Player for Advanced Recording Format (ARF) (CVE-2018-15414, CVE-2018-15421, and CVE-2018-15422) that could be exploited by an unauthenticated, remote attacker to execute arbitrary code on a vulnerable system.

The Webex Meetings Server is a collaboration and communications solution that can be deployed on a private cloud and which manages the Webex Meetings Suite services and Webex Meetings Online hosted multimedia conferencing solutions.

The Meetings services allow customers to record meetings and store them either online, in ARF format, or on a local computer, in WRF format.

The corresponding Network Recording Player can be installed either automatically, when a user accesses a recording file hosted on a Webex Meetings Suite site, or manually, by downloading it from the Webex site.

The root cause of the vulnerabilities is the lack of proper validation of Webex recording files, which could allow an unauthenticated, remote attacker to execute arbitrary code on the target machine.

“Multiple vulnerabilities in the Cisco Webex Network Recording Player for Advanced Recording Format (ARF) could allow an unauthenticated, remote attacker to execute arbitrary code on a targeted system.” reads the security advisory published by Cisco.

“The vulnerabilities are due to improper validation of Webex recording files. An attacker could exploit these vulnerabilities by sending a user a link or email attachment containing a malicious file and persuading the user to open the file in the Cisco Webex Player. A successful exploit could allow the attacker to execute arbitrary code on an affected system.”

An attacker could exploit the flaws by tricking victims into opening a malicious file in the Cisco Webex Player; the file could be sent via email as an attachment or through a link referencing it.

The vulnerabilities affect the following ARF recording players:

  • Cisco Webex Meetings Suite (WBS32) – Webex Network Recording Player versions prior to WBS32.15.10
  • Cisco Webex Meetings Suite (WBS33) – Webex Network Recording Player versions prior to WBS33.3
  • Cisco Webex Meetings Online – Webex Network Recording Player versions prior to 1.3.37
  • Cisco Webex Meetings Server – Webex Network Recording Player versions prior to 3.0MR2

Each version of the Webex Network Recording Players for Windows, OS X, and Linux is affected by at least one of the issues.

The following Network Recording Player updates address the vulnerabilities:

  • Meetings Suite (WBS32) – Player versions WBS32.15.10 and later
  • Meetings Suite (WBS33) – Player versions WBS33.3 and later
  • Meetings Online – Player versions 1.3.37 and later
  • Meetings Server – Player versions 3.0MR2 and later
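
For administrators auditing installed players, the advisory boils down to a version comparison against the first fixed release for each product train. The sketch below is a hypothetical illustration of that check (the train labels and version parsing are assumptions, not a Cisco tool):

```python
# Hypothetical version check against the fixed releases listed above.
# Assumes versions like "WBS32.15.10" split into numeric components.
FIRST_FIXED = {
    "WBS32": (32, 15, 10),   # Webex Meetings Suite (WBS32)
    "WBS33": (33, 3),        # Webex Meetings Suite (WBS33)
}

def parse(version: str) -> tuple:
    """Turn 'WBS32.15.9' or '32.15.9' into a comparable tuple of ints."""
    return tuple(int(p) for p in version.replace("WBS", "").split("."))

def is_vulnerable(train: str, installed: str) -> bool:
    fixed = FIRST_FIXED[train]
    inst = parse(installed)
    # pad the shorter tuple with zeros so (33, 3) compares like (33, 3, 0)
    width = max(len(fixed), len(inst))
    pad = lambda t: t + (0,) * (width - len(t))
    return pad(inst) < pad(fixed)

print(is_vulnerable("WBS32", "WBS32.15.9"))   # True: predates WBS32.15.10
print(is_vulnerable("WBS33", "WBS33.3"))      # False: first fixed release
```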

Cisco warns that there are no known workarounds for these issues.

“The Cisco Webex Network Recording Player (for .arf files) will be automatically upgraded to the latest, non-vulnerable version when users access a recording file that is hosted on a Cisco Webex Meetings site that contains the versions previously specified,” concludes the Cisco advisory.

Pierluigi Paganini

(Security Affairs – Cisco Webex Network Recording Player, RCE)

The post Cisco fixes Remote Code Execution flaws in Webex Network Recording Player appeared first on Security Affairs.

Is Your Security Dashboard Ready for the Cloud?

The ability to feed key security information onto a big screen dashboard opens up many new opportunities for managing the day-to-day security and maintenance workload as well as providing a useful method of highlighting new incidents faster than “just another email alert.” Most Security Operation Centres I’ve visited in recent years have embraced having a […]… Read More

The post Is Your Security Dashboard Ready for the Cloud? appeared first on The State of Security.

Radware Blog: Don’t Let Your Data Seep Through The Cracks: Cybersecurity For the Smart Home

Technology and wireless connectivity have forever changed households. While we don’t have the personal hovercrafts or jetpacks that we were promised as children, infinite connectivity has brought a whirlwind of “futuristic” benefits and luxuries few could have imagined even a decade ago. But more importantly, it has re-defined how the modern domicile needs to be […]

The post Don’t Let Your Data Seep Through The Cracks: Cybersecurity For the Smart Home appeared first on Radware Blog.



Radware Blog

California May Ban Terrible Default Passwords On Connected Devices

According to Engadget, the California Senate has sent Governor Jerry Brown draft legislation that would require manufacturers either to use unique preprogrammed passwords or to make users change the credentials the first time they use a device. "Companies will also have to 'equip the device with a reasonable security feature or features that are appropriate to the nature and function of the device,'" reports Engadget. From the report: If Brown signs the bill into law, it will take effect at the beginning of 2020. But critics claim the wording is vague and doesn't go far enough in ensuring manufacturers don't include unsecured features. "It's like dieting, where people insist you should eat more kale, which does little to address the problem you are pigging out on potato chips," Robert Graham of Errata Security said in a blog post. "The key to dieting is not eating more but eating less." Given the huge number of connected devices available, it's also not clear how the state plans to enforce and regulate the rules.

Read more of this story at Slashdot.

Adobe issued a critical out-of-band patch to address CVE-2018-12848 Acrobat flaw

Adobe releases a critical out-of-band patch for the CVE-2018-12848 Acrobat flaw; the security updates address a total of seven vulnerabilities.

Adobe addressed seven vulnerabilities in Acrobat DC and Acrobat Reader DC, including one critical flaw that could be exploited by attackers to execute arbitrary code.

“Adobe has released security updates for Adobe Acrobat and Reader for Windows and MacOS. These updates address critical and important vulnerabilities.  Successful exploitation could lead to arbitrary code execution in the context of the current user.” reads the security advisory.

The flaws affect Acrobat DC and Acrobat Reader DC for Windows and macOS (versions 2018.011.20058 and earlier); Acrobat 2017 and Acrobat Reader 2017 for Windows and macOS (versions 2017.011.30099 and earlier); and Acrobat DC and Acrobat Reader DC for Windows and macOS (versions 2015.006.30448 and earlier).

The security patches have been released just one week after Adobe released its Patch Tuesday updates for September 2018 that addressed 10 vulnerabilities in Flash Player and ColdFusion.

The most severe flaw, tracked as CVE-2018-12848, is a critical out-of-bounds write issue that could allow arbitrary code execution.

The flaw was reported by Omri Herscovici, research team leader at Check Point Software Technologies; the expert also found three of the other vulnerabilities.

CVE-2018-12848 Adobe Acrobat Reader flaw

The remaining flaws are out-of-bounds read vulnerabilities (CVE-2018-12849, CVE-2018-12850, CVE-2018-12801, CVE-2018-12840, CVE-2018-12778, CVE-2018-12775) that are rated as “important” and could lead to information disclosure.

The CVE-2018-12778 and CVE-2018-12775 vulnerabilities were anonymously reported via Trend Micro’s Zero Day Initiative, while the CVE-2018-12801 issue was discovered by experts at Cybellum Technologies LTD.

The good news is that Adobe is not aware of any of these flaws being exploited in attacks.

Pierluigi Paganini

(Security Affairs – Adobe, CVE-2018-12848)

The post Adobe issued a critical out-of-band patch to address CVE-2018-12848 Acrobat flaw appeared first on Security Affairs.

‘WaitList.dat’ Windows File May Be Secretly Hoarding Your Passwords, Emails

A file named WaitList.dat, found only on touchscreen-capable Windows PCs, may be collecting your sensitive data like passwords and emails. According to ZDNet, in order for the file to exist users have to enable "the handwriting recognition feature that automatically translates stylus/touchscreen scribbles into formatted text." From the report: The handwriting to formatted text conversion feature has been added in Windows 8, which means the WaitList.dat file has been around for years. The role of this file is to store text to help Windows improve its handwriting recognition feature, in order to recognize and suggest corrections or words a user is using more often than others. "In my testing, population of WaitList.dat commences after you begin using handwriting gestures," [Digital Forensics and Incident Response expert Barnaby Skeggs] told ZDNet in an interview. "This 'flicks the switch' (registry key) to turn the text harvester functionality (which generates WaitList.dat) on." "Once it is on, text from every document and email which is indexed by the Windows Search Indexer service is stored in WaitList.dat. Not just the files interacted via the touchscreen writing feature," Skeggs says. Since the Windows Search Indexer service powers the system-wide Windows Search functionality, this means data from all text-based files found on a computer, such as emails or Office documents, is gathered inside the WaitList.dat file. This doesn't include only metadata, but the actual document's text. "The user doesn't even have to open the file/email, so long as there is a copy of the file on disk, and the file's format is supported by the Microsoft Search Indexer service," Skeggs told ZDNet. "On my PC, and in my many test cases, WaitList.dat contained a text extract of every document or email file on the system, even if the source file had since been deleted," the researcher added. Furthermore, Skeggs says WaitList.dat can be used to recover text from deleted documents.

Read more of this story at Slashdot.

Newegg fell victim to month-long card skimming hack

It's not just British companies succumbing to large-scale payment data breaches in recent weeks. RiskIQ and Volexity have discovered that hackers inserted Magecart card skimming code into Newegg's payment page between August 14th and September 18th, intercepting credit card data and sending it to a server with a similar-looking domain.

Via: TechCrunch

Source: Volexity, RiskIQ

Integration with Cisco Technologies Delivers IT / ICS Security

Large organizations utilize a variety of technologies and solutions to create cyber resiliency, an important part of the best practice known as Defense in Depth. But using disparate systems can actually result in increased security exposure and risk, and slower response to threats.

A few years ago, Cisco began working with the best and brightest minds around the world to address this issue. This led to the creation of their security technology program, which included an open platform for collaboration called the Cisco Security Technology Alliance (CSTA).

Nozomi Networks has integrated its ICS security solution with the CSTA to deliver comprehensive operational visibility and cyber security across IT/OT networks.

Nozomi Networks Integrates with Cisco Security Policy Platform and Devices

The CSTA provides an environment for leading security solution providers like us to integrate with Cisco APIs and SDKs across the Cisco security portfolio.

Nozomi Networks kicked off membership in CSTA with security integration for Cisco’s Identity Services Engine (ISE).

The Identity Services Engine (ISE) is a security policy management platform that helps organizations manage users and devices on business networks. Sharing contextual usage data amongst IT systems and solutions makes it much easier to enforce policies for resource access, and more.

If you want to learn more, click here.

The post Integration with Cisco Technologies Delivers IT / ICS Security appeared first on IT SECURITY GURU.

Radware Blog: Millennials and Cybersecurity: Understanding the Value of Personal Data

From British Airways to Uber, recent data breaches have shown how valuable our data is to cybercriminals – and the lengths to which they will go to access it. The size and impact of these breaches has meant that topics once reserved for tech experts and IT personnel have transitioned into a more mainstream conversation. […]

The post Millennials and Cybersecurity: Understanding the Value of Personal Data appeared first on Radware Blog.



Radware Blog

6.4 billion fake emails a day: How can you avoid the risks

How to avoid the risks of fake emails

Employees send and receive dozens of emails every day and, although the majority are innocuous, buried among them, there are more and more fake emails that can damage companies in a myriad of ways. This is one of the findings of the report, 2018 Email Fraud Landscape, which has uncovered an alarming figure: 6.4 billion fraudulent emails are sent every day. If we also take into account the fact that, according to Cofense, 91% of all cyberattacks start with a phishing email, there can be no doubt that email is the highest risk attack vector for companies. Similarly, 81% of heads of corporate IT security have detected an increase in the number of cases of attacks getting in through this channel. But what are the most dangerous phishing scams, and how can we avoid them?

BEC: a costly scam

As we have previously explained on this blog, a BEC (Business Email Compromise) scam is a type of phishing attack where cyberattackers pass themselves off as a client or supplier in order to try to get money. One distinctive feature of this type of email fraud is that around 60% of the emails involved in BEC scams don’t contain a link, making it harder for cybersecurity systems to detect them. At times, they make use of something as simple as writing an account number in order for the recipient to make a transfer.

Another aspect that makes it stand out from most phishing attacks is that, rather than being based on indiscriminate mass emailing, BEC scams usually seek very specific individual profiles. Following this pattern, there is an even more sophisticated version of the BEC scam, known as “CEO fraud”. In this case, as the name suggests, the cyberattacker passes himself off as the head of the whole company. To do so, attackers employ spear phishing techniques; that is, they research the company and the employee, looking for news and social network profiles in order to read up on the victim and make the email as believable as possible.

For all these reasons, this type of scam is especially dangerous and costly for companies: according to FBI figures, they have cost businesses over 12 billion dollars since 2013.

How can you avoid the risks of the most dangerous phishing attacks?

Finding vulnerabilities and security breaches is a complex task for cyberattackers who have their eye on companies: a lot of the time they come across firewalls or security systems that require an advanced level of skill to get through. This is why it is much easier for them to rely on deceit, and it is also the reason that phishing attacks are so common. BEC scams add a sense of urgency and authority to this kind of fraudulent activity, especially the CEO fraud variant, since nobody wants to put themselves in a compromising position in front of the boss. Cybercriminals know how to take advantage of this, which is what makes them so dangerous. For this reason, the first thing to bear in mind is that common sense and calm must prevail in order to avoid making a false step.

In this vein, here are some key recommendations to help avoid email attacks on your company:

  • Carrying out phishing drills so that employees can learn to identify such emails.
  • Training employees to detect social engineering and to ask themselves questions before responding to an email.
  • Encrypting emails to keep sensitive information from being stolen.

Practices like this are also valid for BEC scams, but they are not enough. Since it is such a personalized type of phishing, it’s advisable to verify the source of the email in any way possible. To do so, there’s no better way than to teach employees not to rely exclusively on email: it is better that they check the content of the email with the workmate they suspect is being impersonated, or with the CEO, whether on the phone or face to face.
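
One lightweight technical complement to that human verification is checking whether the apparent sender’s domain even publishes SPF and DMARC records, since a domain with neither is trivially spoofable. A minimal sketch, assuming the third-party dnspython package (the domain shown is a placeholder):

```python
import dns.resolver  # third-party package: dnspython (2.x)

def txt_records(name: str) -> list[str]:
    """Return the TXT records for a DNS name, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

def spoofing_signals(domain: str) -> dict:
    """Flag whether a sender domain publishes SPF and DMARC policies."""
    return {
        "spf": any(t.startswith("v=spf1") for t in txt_records(domain)),
        "dmarc": any(t.startswith("v=DMARC1")
                     for t in txt_records(f"_dmarc.{domain}")),
    }

# Placeholder domain; {'spf': False, 'dmarc': False} means receiving
# servers have no way to authenticate mail claiming to come from it.
print(spoofing_signals("example.com"))
```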

Finally, as can be said of most cybersecurity problems, the risks related to being attacked over email can be avoided with a combination of human and technological factors: common sense and employee training in order to acquire experience and prevent and detect attacks, along with the use of advanced cybersecurity platforms that have the capacity to warn of any dangers that we may have overlooked.

The post 6.4 billion fake emails a day: How can you avoid the risks appeared first on Panda Security Mediacenter.

U.S. Federal IoT Policy: What You Need to Know

Over the past several months, increased attention has been paid to U.S. federal government policies surrounding internal use of IoT devices. In January 2018, researchers discovered they could track the movements of fitness tracker-wearing military personnel over the Internet. In July, a similar revelation occurred with fitness app Polar, which was exposing the locations of […]… Read More

The post U.S. Federal IoT Policy: What You Need to Know appeared first on The State of Security.

SN 681: The Browser Extension Ecosystem

This week we prepare for the first ever Presidential Alert unblockable nationwide text message, we examine Chrome's temporary "www" removal reversal, check out Comodo's somewhat unsavory marketing, discuss a forthcoming solution to BGP hijacking, examine California's forthcoming IoT legislation, deal with the return of Cold Boot attacks, choose not to click on a link that promptly crashes Safari on any OS, congratulate Twitter on adding some auditing, check in on the Mirai Botnet's steady evolution, look at the past year's explosion in DDoS attack numbers and size, note another new annoyance brought to us by Windows 10... Then we take a look at the state of the quietly evolving web browser extension ecosystem.

We invite you to read our show notes.

Hosts: Steve Gibson and Jason Howell

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Dark Web: US court seizes assets and properties of deceased AlphaBay operator

By Waqas

AlphaBay was one of the largest dark web marketplaces – In 2017, its admin Alexandre Cazes committed suicide in a Thai prison. The Fresno Division of the U.S. District Court for the Eastern District of California has finally concluded a 14-month long civil forfeiture case and allowed seizure of property and assets of a Canadian national Alexandre Cazes […]

This is a post from HackRead.com Read the original post: Dark Web: US court seizes assets and properties of deceased AlphaBay operator

State Department email breach leaks employees’ personal data

The latest government data breach affected State Department employee emails. On September 7th, workers were notified that their personally identifiable information was obtained by an unnamed actor, according to a recent report from Politico. It apparently impacted "less than one percent" of employees and direct victims of the breach were alerted at the time. Apparently, this didn't affect classified information, so at least there's that.

Via: TechCrunch

Source: Politico

Radware Blog: IoT, 5G Networks and Cybersecurity: Safeguarding 5G Networks with Automation and AI

By 2020, Gartner says there will be 20.4 billion IoT devices. That rounds out to almost three devices per person on earth. As a result, IoT devices will show up in just about every aspect of daily life. While IoT devices promise benefits such as improved productivity, longevity and enjoyment, they also open a Pandora’s […]

The post IoT, 5G Networks and Cybersecurity: Safeguarding 5G Networks with Automation and AI appeared first on Radware Blog.



Radware Blog

Linux & Windows hit with disk wiper, ransomware & cryptomining Xbash malware

By Waqas

Xbash is an “all in one” malware. Palo Alto Networks’ Unit 42 researchers have come to the conclusion that the notorious Xbash malware that has been attacking Linux and Windows servers is being operated by the Iron Group which is an infamous hacker collective previously involved in a number of cyber crimes involving the use […]

This is a post from HackRead.com Read the original post: Linux & Windows hit with disk wiper, ransomware & cryptomining Xbash malware

Firewalls and the Need for Speed

I was looking for resources on campus network design and found these slides (pdf) from a 2011 Network Startup Resource Center presentation. These two caught my attention:

[Two slides from the 2011 NSRC presentation]

This bothered me, so I Tweeted about it.

This started some discussion, and prompted me to see what NSRC suggests for architecture these days. You can find the latest, from April 2018, here. Here is the bottom line for their suggested architecture:

[Diagram of NSRC's suggested campus architecture, April 2018]

What do you think of this architecture?

My Tweet has attracted some attention from the high speed network researcher community, some of whom assume I must be a junior security apprentice who equates "firewall" with "security." Long-time blog readers will laugh at that, like I did. So what was my problem with the original recommendation, and what problems do I have (if any) with the 2018 version?

First, let's be clear that I have always differentiated between visibility and control. A firewall is a poor visibility tool, but it is a control tool. It controls inbound or outbound activity according to its ability to perform in-line traffic inspection. This inline inspection comes at a cost, which is the major concern of those responding to my Tweet.

Notice how the presentation author thinks about firewalls. In the slides above, from the 2018 version, he says "firewalls don't protect users from getting viruses" because "clicked links while browsing" and "email attachments" are "both encrypted and firewalls won't help." Therefore, "since firewalls don't really protect users from viruses, let's focus on protecting critical server assets," because "some campuses can't develop the political backing to remove firewalls for the majority of the campus."

The author is arguing that firewalls are an inbound control mechanism, and they are ill-suited for the most prevalent threat vectors for users, in his opinion: "viruses," delivered via email attachment, or "clicked links."

Mail administrators can protect users from many malicious attachments. Desktop anti-virus can protect users from many malicious downloads delivered via "clicked links." If that is your worldview, of course firewalls are not important.

His argument for firewalls protecting servers is, implicitly, that servers may offer services that should not be exposed to the Internet. Rather than disabling those services, or limiting access via identity or local address restrictions, he says a firewall can provide that inbound control.

These arguments completely miss the point that firewalls are, in my opinion, more effective as an outbound control mechanism. For example, a firewall helps restrict adversary access to his victims when they reach outbound to establish post-exploitation command and control. This relies on the firewall identifying the attempted C2 as being malicious. To the extent intruders encrypt their C2 (and sites fail to inspect it) or use covert mechanisms (e.g., C2 over Twitter), firewalls will be less effective.

The previous argument assumes admins rely on the firewall to identify and block malicious outbound activity. Admins might alternatively identify the activity themselves, and direct the firewall to block outbound activity from designated compromised assets or to designated adversary infrastructure.
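
To make that outbound-control decision concrete, here is a toy sketch of the logic in Python (all addresses are made-up documentation ranges; this illustrates the decision, not any particular firewall's API):

```python
from ipaddress import ip_address, ip_network

# Hypothetical inputs: hosts analysts have designated as compromised,
# and known adversary infrastructure (e.g., C2 ranges).
COMPROMISED_ASSETS = {ip_address("10.0.5.23")}
ADVERSARY_INFRA = [ip_network("203.0.113.0/24")]  # TEST-NET-3, for illustration

def allow_outbound(src: str, dst: str) -> bool:
    """Deny egress from designated compromised assets or to adversary infrastructure."""
    s, d = ip_address(src), ip_address(dst)
    if s in COMPROMISED_ASSETS:
        return False
    if any(d in net for net in ADVERSARY_INFRA):
        return False
    return True

print(allow_outbound("10.0.5.23", "198.51.100.10"))  # False: compromised source
print(allow_outbound("10.0.1.7", "203.0.113.9"))     # False: known C2 destination
print(allow_outbound("10.0.1.7", "198.51.100.10"))   # True: no block applies
```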

As some Twitter responders said, it's possible to do some or all of this without using a stateful firewall. I'm aware of the cool tricks one can play with routing to control traffic. Ken Meyers and I wrote about some of these approaches in 2005 in my book Extrusion Detection. See chapter 5, "Layer 3 Network Access Control."

Implementing these non-firewall-based security choices requires a high degree of diligence, which requires visibility. I did not see this emphasized in the NSRC presentation. For example:

[Slide listing NSRC's network management goals]

These are fine goals, but I don't equate "manageability" with visibility or security. I don't think "problems and viruses" captures the magnitude of the threat to research networks.

The core of the reaction to my original Tweet is that I don't appreciate the need for speed in research networks. I understand that. However, I can't understand the requirement for "full bandwidth, un-filtered access to the Internet." That is a recipe for disaster.

On the other hand, if you define partner specific networks, and allow essentially site-to-site connectivity with exquisite network security monitoring methods and operations, then I do not have a problem with eliminating firewalls from the architecture. I do have a problem with unrestricted access to adversary infrastructure.

I understand that security doesn't exist to serve itself. Security exists to enable an organizational mission. Security must be a partner in network architecture design. It would be better to emphasize enhanced monitoring for the networks discussed above, and to think carefully about enabling speed without restrictions. The NSRC resources on the science DMZ merit consideration in this case.

US government payment site leaks 14 million customer records

Government Payment Service Inc -- the company thousands of local governments in the US use to accept online payments for everything from court-ordered fines and licensing fees -- has exposed more than 14 million customer records dating back to 2012, KrebsOnSecurity reports. According to the security investigation site, the leaked information includes names, addresses, phone numbers and the last four digits of credit cards.

Source: KrebsOnSecurity

The future of security lies in quantum computing

The future of security in quantum computing

“Quantum” is a word that stirs up a litany of questions in its wake. No one can deny that the future of computing lies in the unique features of quantum mechanics, the branch of physics that studies nature at an infinitely small scale. However, it seems hard to grasp that the sector with the most to gain from quantum computing is, in fact, the security sector.

What is quantum computing?

Computers currently work in bits. Traditional computing is conditioned by the amount of information that can be contained in these binary chains of zeros and ones, which sets a series of technological hurdles and limits on what we can do.

But what if we were to expand this binary limit? Qubits, the computation units of these systems, are not restricted to two values: they can use a set of quantum states that includes superpositions of the two.

In other words, a qubit can adopt a value representing 0, 1, or any quantum superposition of those two states. This follows directly from the characteristics of quantum physics and, with appropriate adaptations, multiplies the computing capacity available for certain tasks that would otherwise be impossible to tackle.
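
In standard notation, a single qubit state and the normalization constraint on its amplitudes can be written as:

```latex
% A single qubit is a weighted superposition of the basis states |0> and |1>.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
% An n-qubit register carries 2^n such amplitudes at once, which is where
% the multiplied computing capacity described above comes from.
```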

What is quantum computing for?

Or rather, can it be used for everything? No; quantum computing isn't intended to substitute for current computing. At least, not for now. This is what Mikhail Lukin (co-founder of the Russian Quantum Center and creator of the first 51-qubit quantum computer) explained during last year's International Conference on Quantum Technologies, ICQT 2017.

Because of the very characteristics that grant them their special properties, quantum machines aren't useful for carrying out many of the everyday tasks that our computers perform. But what they do allow is doing things that until now we thought impossible. Thus, the first quantum computers, just as we saw at ICQT 2017, will be applied to research, in order to process massive amounts of data; to artificial intelligence, especially in self-driving cars; and, above all, to digital security.

The highest possible security

Are we really looking at impregnable systems? If we take into account the fact that no systems are 100% safe forever, we can’t make such a claim. But if we understand how quantum cryptography works, we can understand why it is so important for the future of security.

Quantum cryptography is a cryptographic system that harnesses several properties of quantum mechanics to send messages securely. In fact, it is the safest form known to date.

Firstly, if a third party were to intercept the information during the creation of the secret key, the quantum states involved would be disturbed, so the system would reveal the intruder before any information could be sent.

Secondly, quantum cryptography makes use of another property called entanglement, which can be used to send information safely without a means of transmission. This means that there is no way that a failure in the channel can cause an information leak.
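
For concreteness (a standard result from quantum information, not something stated in the article), the maximally entangled Bell state used in entanglement-based schemes can be written as:

```latex
% Measuring either qubit of this pair fixes the other's outcome, and any
% eavesdropping disturbs the correlations, exposing the intruder.
\[
  \lvert \Phi^{+} \rangle = \frac{1}{\sqrt{2}}
  \bigl( \lvert 00 \rangle + \lvert 11 \rangle \bigr)
\]
```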

To all of this can be added encryption under the most secure conditions ever known, thanks to the incredible processing capacity offered by quantum computing. All of this makes it the most promising system for safeguarding privacy in the future of communication. A future that is almost upon us.

Quantum cryptography is already here

While it may seem like we still have decades of development ahead before it can be implemented, the fact is that we have already seen several examples of quantum computing and cryptography in practice. For example, during ICQT 2017 Lukin announced the first computer with 51 real (not simulated) qubits, the most powerful up to that point. During the same conference, John Martinis, head of Google's quantum computing hardware effort, explained the company's plans to develop its own quantum computers.

According to the experts at the conference, in a few years' time we will have practical machines that meet the requirements for commercial use. Security in companies will have to adapt to the new possibilities brought by these super-powerful computers.

Because, all of a sudden, passwords won't be so secure unless we have quantum security measures. This leads us to the second question: quantum cryptography is much more advanced than we thought. In January this year, a joint China-Austria team showed that intercontinental communication with quantum encryption was possible.

The latest breakthrough achieved by this group consists of combining quantum communication from the Micius satellite with the fiber optic network in Beijing. It is the first practical proof that technology that allows networks to use quantum encryption is already available. How long will it be before we see a commercial application? Probably not long.

While we’re waiting for quantum computing and security to come to the business world, however, we need to continue to make sure we have the strongest measures against cybercriminals: a good security plan and a good security team; efficient tools like Panda Adaptive Defense, which allows us to have absolute control over what happens on the company’s systems and networks. We can even consider including new approaches to security, such as applying chaos engineering to our security plan.

The post The future of security lies in quantum computing appeared first on Panda Security Mediacenter.

Matt Flynn: Information Security | Identity & Access Mgmt.: Convergence is the Key to Future-Proofing Security

I published a new article today on the Oracle Security blog that looks at the benefits of convergence in the security space as the IT landscape grows more disparate and distributed.

Security professionals have too many overlapping products under management and it's challenging to get quick and complete answers across hybrid, distributed environments. It's challenging to fully automate detection and response. There is too much confusion about where to get answers, not enough talent to cover the skills requirement, and significant hesitation to put the right solutions in place because there's already been so much investment.

Here are a couple of excerpts:
Here’s the good news: Security solutions are evolving toward cloud, toward built-in intelligence via Machine Learning, and toward unified, integrated-by-design platforms. This approach eliminates the issues of product overlap because each component is designed to leverage the others. It reduces the burden related to maintaining skills because fewer skills are needed and the system is more autonomous. And, it promotes immediate and automated response as opposed to indecision. While there may not be a single platform to replace all 50 or 100 of your disparate security products today, platforms are emerging that can address core security functions while simplifying ownership and providing open integration points to seamlessly share security intelligence across functions.
 ...
 Forward-looking security platforms will leverage hybrid cloud architecture to address hybrid cloud environments. They’re autonomous systems that operate without relying on human maintenance, patching, and monitoring. They leverage risk intelligence from across the numerous available sources. And then they rationalize that data and use Machine Learning to generate better security intelligence and feed that improved intelligence back to the decision points. And they leverage built-in integration points and orchestration functionality to automate response when appropriate.
Click to read the full article: Convergence is the Key to Future-Proofing Security

Hackers disrupt UK’s Bristol Airport flight info screens after ransomware attack

By Uzair Amir

The ransomware attack disrupted the screens for two days. In a nasty incident, flight information screens at the United Kingdom’s Bristol Airport were taken over by malicious hackers on Friday morning, September 15th. The attack forced the airport staff to go manual, using whiteboards and hand-written information to assist passengers regarding their […]

This is a post from HackRead.com Read the original post: Hackers disrupt UK’s Bristol Airport flight info screens after ransomware attack

Extended Validation Certificates are Dead

That's it - I'm calling it - extended validation certificates are dead. Sure, you can still buy them (and there are companies out there that would just love to sell them to you!), but their usefulness has now descended from "barely there" to "as good as non-existent". This change has come via a combination of factors including increasing use of mobile devices, removal of the EV visual indicator by browser vendors and as of today, removal from Safari on iOS (it'll also be gone in Mac OS Mojave when it lands next week):

[screenshot]

I chose Comodo's website to illustrate this change as I was reminded of the desperation involved in selling EV just last month when they sent around a marketing email with the title "How To Get The Green Address Bar On Your Website". The "alternate truth" of what EV does comes through very early on, starting with this image:

[screenshot]

This is indeed what Firefox looks like today, but they entirely neglect to mention anywhere within the marketing email that this is an arbitrary visual indicator chosen at the discretion of the browser vendor. Obviously Apple have already killed it off, but even for many people on Chrome, the Comodo website actually looks very different:

The email goes on to talk about how EV fights deceptive websites and claims the following:

The verified company name display allows the user to quickly determine the legal entity behind the website, making phishing and deception harder.

In other words, seeing the company name results in higher levels of trust or if we invert that statement, not seeing the company name results in decreased trust, right? The problem is, people simply aren't conditioned to expect to see the company name and there's very simple, effective demonstration of why this is the case:

Comodo goes on with an attempt to establish the efficacy of EV by referring to "a recent study":

A recent survey by DevOps.com found that customers are 50% more likely to trust and purchase from a website with a green address bar.

They link through to a lengthy page on the Comodo store and whilst never explicitly saying it, use language that implies the study was somehow independent and unbiased: "Devops.com conducted a survey", and other such phrases. I shared a tweet thread about this back in July, but this one tweet tells you all you need to know about the motives of the "survey":

I did honestly try to get clarity on the source of this work as well, first by tweeting the author of it then, after not receiving a reply, following up with him again and copying @TechSpective for whom he's the editor in chief along with @devopsdotcom (which follows me) who published the survey:

Eventually, what was already abundantly clear was confirmed:

I wish this was made clear in the report itself because Comodo's vested interest is clearly going to introduce bias. It'd be like an oil company commissioning a report that concludes fossil fuels aren't harmful to the environment or a tobacco company stating smoking doesn't lead to adverse health outcomes. If you ever had any doubt about whether DevOps.com actually believes in the "findings", take a look at how much confidence they themselves have in EV certificates and who they chose to go to when acquiring a cert:

[screenshot]

This resource is mentioned again throughout the Comodo email but we'll skip that for now. Moving on, they then state that you can "activate the green address bar" simply by purchasing an EV cert:

To activate the green address bar on your website, you just need to purchase and install an Extended Validation (EV) SSL certificate.

Unless you're using the world's most popular browser running on an iOS device:

[screenshot]

Same again if you load the site up in Chrome on an Android, the world's most popular operating system:

[screenshot]

Even try going to Microsoft Edge on iOS and it's a now predictable result:

[screenshot]

These are really, really important images as far as the value proposition of EV goes for two key reasons: Firstly, we're approaching two thirds of all browsing being done on mobile which means that those images above - the ones that don't show EV - are the predominant browsing experience any website owner should be considering. Secondly, as a result, this means that companies cannot tell their customers to expect EV because most of them will never see it. Despite this, Comodo suggests there's value in EV because of the "bigger security display":

The larger security indicator makes it very clear to the user that the website is secure.

You know what makes people think the website is "secure"? When the website says "secure" just as it does next to the URL in the browser right now if you're reading this in Chrome on the desktop! Paradoxically, you only get the "secure" indicator when not using an EV cert and one could quite reasonably argue that this actually creates a greater sense of confidence by literally using the word "secure". And in case you're reading this and thinking "hang on, Chrome doesn't do that anymore", you're completely right:

[screenshot]

I wrote the first part of that paragraph before Chrome 69 hit on September 4 and removed both the "Secure" text and the green indicators. That's not just a DV change either, sites with EV now also look rather different:

[screenshot]

The point I'm trying to highlight here is both the fact that visual indicators are entirely at the discretion of the client and that they change over time. As such, the title "How To Get The Green Address Bar On Your Website" is now even more incorrect than it was when it was written! In fact, the only piece of the email that even came close to accurately representing EV was the admission that you can't get an EV wildcard cert. But wait! There's a solution and it's easily available just by spending more money, it's called a multi-domain certificate and the default option when looking at Comodo's Enterprise SSL Pro with EV Multi-Domain product will actually save you $5,002.44*:

[screenshot]

* Note: You must spend $9,746.75 before the saving is realised

To be clear, this isn't a 4-year certificate either; as the text at the bottom of the image points out, the CA/B Forum guidelines limit certificate validity to 2 years and after that you need to manually go back through the entire verification and issuance process again. But hey, let's not allow that to get in the way of selling 4 year's worth of certs!

And what if you don't renew the cert then? Well, you get a great big pile of this:

[screenshot]

Now, you may be thinking "well that's kinda obvious and the same holds true whether it's EV or DV", but it's more nuanced than that. Firstly, neglecting to renew a cert happens with alarming regularity and it happens to the big guys too. For example, Microsoft failed to renew secure.microsoft.co.uk back in 2001. Too long ago? They also failed to renew an Azure one in 2013 and just to be clear about it certainly not being a Microsoft thing, HSBC forgot one in 2008, Instagram forgot one in 2015 and LinkedIn forgot one last year. There are many, many more examples and they all adhere to the same underlying truth; if something is important and repetitive, automate it!

Which brings me to the second point: certificate renewal should be automated and that's something that you simply can't do once identity verification is required. DV is easy and indeed automation is a cornerstone of Let's Encrypt which is a really important attribute of it. I recently spent some time with the development team in a major European bank and they were seriously considering ditching EV for precisely this reason. Actually, it was more than that reason alone, it was also the risk presented if they needed to quickly get themselves a new cert (i.e. due to key compromise) as the hurdles you have jump over are so much higher for EV than they are DV. Plus, long-lived certs actually create other risks due to the fact that revocation is broken so iterating quickly (for example, Let's Encrypt certs last for 3 months) is a virtue. Certs lasting for 2 years is not a virtue, unless you're coming from the perspective of being able to cash in on them...
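
To illustrate why DV renewal is so amenable to automation, here is a minimal sketch using only Python's standard library that reports how many days remain on a site's certificate; the hostname is a placeholder, and in practice this check belongs in a scheduled job feeding an alert (or an ACME client doing the renewal itself):

```python
import socket
import ssl
import time

def days_until_expiry(host: str, port: int = 443) -> float:
    """Fetch a site's TLS certificate and return days until its notAfter date."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400

# Placeholder host; alert (or auto-renew) well before this reaches zero.
print(f"{days_until_expiry('example.com'):.0f} days remaining")
```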

(Paradoxically, the LinkedIn story I linked to above is on TheSSLStore.com which is a certificate reseller. You can probably see where this is going, but rather than suggesting that automation is a key part of the solution to cert renewal, they instead suggest solutions "that scale to Enterprise level" from CAs such as Comodo who, of course, are pushing EV. No mention of Let's Encrypt, but then this is also the company that's been vocally critical of them for issuing certs to phishing sites (that do correctly validate domain ownership) whilst neglecting to mention that Comodo was issuing just as many at that time!)

A lack of wildcard support is one of the big technical reasons EV is avoided (the other reasons are mostly just common-sense ones), and loading up subject alternate names is a barely sufficient alternative. For example, we use a wildcard cert for Report URI so that you can send reports to https://[my company name].report-uri.com and we've got hundreds of those. Comodo will happily support that scale too:

[screenshot]

Other than the fact that Scott Helme and I aren't really in a position to shell out $808k, this is also a far cry from what a genuine wildcard cert does as you need to specify all host names at the time of issuance as opposed to being able to dynamically serve them up.

The final point of note on the marketing email is the promise of a warranty:

[screenshot]

That actually links straight back to the page with the super pricey multi-domain EV certs and doesn't even attempt to explain what the warranty is, which is a bit odd. But it's also consistent because nobody actually knows what the warranty is or whether anyone has ever claimed it. Seriously - that's not intended to be a flippant statement; Scott and I genuinely tried to get to the bottom of that earlier this year and we simply couldn't get straight answers. When we did manage to engage in dialogue, I was accused of being in "nerdville":

This was admittedly a very surprising response from someone that holds a position as the CEO at CertCentre because one would imagine that he, of all people, would want to espouse the virtues of cert warranties (assuming there actually are any, of course). If you're paying a company like CertCentre money for a product with a stated set of features, being a "nerd" by asking how those features work seems perfectly reasonable and not something that should result in ridicule from the bloke running the place. Unfortunately, rather than answering the question, Andreas decided it was easier to take the tried and tested ostrich approach:

[screenshot: the Twitter exchange]

The thing I have a real issue with here is that there's a financial incentive to promote the warranty (you certainly don't get a warranty with a Let's Encrypt certificate), but no willingness to explain what you get for your money. CertCentre actively lists warranties as a "Top Security Feature" too:

[screenshot: CertCentre's "Top Security Features" list, with "warranty" misspelled]

But hey, if you can't even spell warranty, what are the chances of actually understanding what it does?!

Driving the nail even further into the EV coffin is Scott's six-monthly Alexa Top 1M report from last month. In it, he shared a very encouraging stat: the growth in sites redirecting from HTTP to HTTPS:

[chart: sites redirecting from HTTP to HTTPS, from Scott's report]

It's now 52% which is enormously positive for the web in general. But it was this comment about EV which piqued my curiosity:

Despite seeing strong growth in HTTPS across the top 1 million sites, EV certificates have not seen much of that growth at all.

Let's put it in raw numbers: in February there were 366,005 sites redirecting from HTTP to HTTPS and 19,802 of them used EV certs, so call it 5.41% of all HTTPS sites using EV. Fast forward to August and there were 489,293 sites redirecting to HTTPS with 25,158 serving up EV certs, which equates to 5.14%. In other words, EV's market share declined by about 5% in relative terms. As a proportion of all sites using certificates, EV is far from growing; it's actually going backwards.

(Incidentally, in case you're looking at the 489k figure above and thinking "that's actually less than half of 1M", Scott's scan failed on about 47k websites so they're excluded from the stats.)
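
For transparency, here's the arithmetic from the paragraph above as a few lines of Python (the figures are Scott's; the calculation is just mine):

```python
# Figures from Scott Helme's Alexa Top 1M reports
feb_https, feb_ev = 366_005, 19_802
aug_https, aug_ev = 489_293, 25_158

feb_share = feb_ev / feb_https * 100   # ~5.41%
aug_share = aug_ev / aug_https * 100   # ~5.14%
relative_decline = (feb_share - aug_share) / feb_share * 100

print(f"Feb: {feb_share:.2f}%  Aug: {aug_share:.2f}%")
print(f"Relative decline in EV's share: {relative_decline:.1f}%")  # ~5%
```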

As it turns out, many sites are actually removing EV certs. Last month Scott detailed a number of major sites that used to have EV and they spanned everything from Shutterstock to Target to UPS to Visa to the UK police. Around the same time, I noticed that even Twitter had killed their EV cert:

Twitter has been a bit of an odd duck for a while as far as EV goes; back on the earlier tweet showing that the world's largest websites don't have EV, there were a bunch of replies from people saying Twitter does have it. We later discovered that depending on where you are in the world, you may or may not see EV on Twitter. For example:

[screenshot: Twitter with EV in one location and without it in another]

Certainly, as of today, EV is not being served up when I connect from Australia, so for whatever reason, Twitter don't see it as important enough to show consistently and will switch in and out of EV as you move across the globe. That also says something significant about the effectiveness of EV: if they're willing to constantly add and remove it depending on where you are, do you think people are behaving differently and no longer trusting the site when they don't see EV? No, of course not, but that's the foundation the mechanics of EV are built on!

I don't just want to focus on Comodo and CertCentre though, because disinformation campaigns go well beyond those two. For example:

Moving past the choice of historic browsers used in the illustration (just how old is that image?!), the piece that tweet links to makes the following claim:

Web security experts recommend adopting EV SSL Certificate for platforms such as E-commerce, Banking, Social Media, Health Care, Governmental and Insurance platforms.

Now I'm not sure who they're referring to in those first few words, but I do know that with the exception of banking, that statement simply doesn't hold water for the remaining industry sectors. It only takes a few minutes to demonstrate how fundamentally wrong this is so let's do it now:

Here are the world's top shopping sites; click through to see if any of them are on EV:

  1. Amazon
  2. Netflix
  3. eBay

You might argue that Alexa has miscategorised Netflix as "shopping" so just for good measure, try the next largest which is walmart.com and, well, it's the same result. No EV. Anywhere.

Moving on and social media is the same deal:

  1. Facebook
  2. Twitter
  3. LinkedIn

As discussed earlier, Twitter has a bit of an identity crisis in terms of whether it's in or out on the EV front, so if in doubt, give the 4th largest a go, which is Pinterest.

Onto the world's most popular health sites and it's more of the same:

  1. National Institutes of Health
  2. WebMD
  3. Mayo Clinic

No EV. Nada. Zip. Not a single one.

I couldn't find one clear listing of global government websites so I pulled the data from Scott's nightly Alexa Top 1M crawl and grabbed the biggest .gov ones. The NIH was the largest but we've already covered that so let's take the next 3:

  1. Unique Identification Authority of India (which has other fundamentally basic HTTPS problems)
  2. Indian Income Tax Department
  3. GOV.UK

By now you'll already realise the chances of EV being anywhere aren't real good. You're right - not a single EV cert to be seen.

Last up is the top insurance sites:

  1. United Services Automobile Association
  2. Kaiser Permanente
  3. Geico

We got one! The USAA actually does have an EV cert! The other two don't but hey, at least that's something, right?

If "web security experts" are recommending EV for sites of these classes then clearly those responsible for making the decisions aren't listening. Except that nobody who's properly thought through the logic of EV is making these recommendations anyway, so perhaps there's just a bit of poetic licence in the copy.

Another set of unsubstantiated claims made by About SSL is that EV "increases transaction conversion rates", "lowers shopping cart abandonment" and "protects from phishing attacks". You can understand why they're making these claims and there's a pretty clear call to action immediately under the list of conveniently bold green selling points of EV:

[screenshot: About SSL's EV selling points and call to action]

So we're back to there being a clear bias again. But hey, they're just out there trying to run a business so I get the motives. One would also assume that in running this business where you can purchase items online they'd like to increase their transaction conversion rates and lower shopping cart abandonment, right? Well there's a funny thing about that:

[screenshot: About SSL's own site, served without EV]

Even the company selling EV is smart enough to know it's not worth actually paying money for! Plus, of course, the whole "green address bar" thing is now completely defunct courtesy of the world's most popular browser killing it in version 69.

But then there's the phishing situation and indeed this is often touted as being a strength of EV in that it somehow reduces it. In fact, this (much maligned) slide by Entrust from earlier this year makes precisely that point:

There's a whole pile of things wrong here and the best way to understand precisely what is to read through this thread from Ryan Sleevi who analysed the paper the claims were based on:

Ryan is a super smart crypto guy working on Chromium and has a very articulate way of tearing bullshit arguments to shreds. Towards the end of the thread he summarises the problem:

And we're back to EV only being effective if people behave differently due to a UI change they don't know to look for and which, increasingly, doesn't even exist anymore. Either that or it's changed in nuanced ways people don't expect to look for; remember the first image in the blog post showing Comodo in Safari no longer displaying the registered business name in their EV cert? Take a look at it next to this blog, also loaded in Safari on iOS 12:

[screenshot: an EV site and this blog side by side in Safari on iOS 12]

See the difference? The URL of the EV site and the padlock next to it are now in green whereas the DV site is in black. So now if you want to set an EV expectation you have to tell customers to look for the green URL and padlock... unless they're on Chrome which has now removed all the green bits! You can see how ridiculous this whole premise of telling normal everyday folks what nuances to look for in the browser is, especially with the rate at which they're changing.
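
Incidentally, even when the browser chrome hides it, the certificate itself still records the difference: an EV cert embeds the verified legal entity in its subject, which a DV cert lacks (strictly, EV status is signalled by CA-specific policy OIDs, but the organisation name is the visible proxy). A minimal Python sketch, with an illustrative hostname:

```python
import socket
import ssl

def cert_subject(hostname: str) -> dict:
    """Fetch a site's certificate and flatten its subject into a dict."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return {key: value for rdn in cert["subject"] for key, value in rdn}

subject = cert_subject("example.com")  # illustrative hostname
# EV: a verified organisation name appears; DV: typically just the domain.
print(subject.get("organizationName", "no organisation - likely DV"))
```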

Back on the About SSL site, there's an embedded video which espouses the virtues of EV along the same sorts of lines we've seen already. It's about 6 minutes long if you've got the patience to view it:

Or we can just skip to the good bits, such as when the presenter (and Comodo Product Marketing Manager) talks about the criticality of EV during a financial transaction:

Right at the moment of truth, when they're weighing whether or not to go forward with a transaction, this striking visual indicator (the green EV bar) accompanied by information certifying their business name, location and certification authority that validated it is presented providing needed reassurance to continue

Backing up her position is a screen cap of the Excalibur Cutlery & Gifts website:

[screenshot: Excalibur Cutlery & Gifts with the green EV bar]

You can probably sense where this is going by now... and you're right:

[screenshot: Excalibur Cutlery & Gifts today]

No EV. No commercial DV either but instead a perfectly good free Let's Encrypt cert. It's like the video was a remnant of a bygone era and as it progressed and showed websites running in IE8 on Windows XP I couldn't help but feel the information was somewhat... dated. Which turned out to be a fair assumption:

[screenshot: the video's publication date]

Now I wouldn't normally hold a video from almost a decade ago against today's standards were it not for the fact that the views expressed there are consistent with those expressed today. Plus, of course, the video was linked to from a tweet only last month under the guise of "An essential guide about an Extended Validation SSL Certificate" so it's fair game in this case.

Comodo using sites to promote EV that don't use EV seems to be a bit of a pattern. Just this month, someone forwarded me a domain renewal email they got from Comodo that looks like this:

[screenshot: Comodo's domain renewal email citing Mostlydead.com]

Naturally, he was curious about Mostlydead.com and headed over to take a look at how well that "20% increase in sales was going". You know, because of how much EV "creates consumer confidence". Apparently, not so much anymore:

[screenshot: Mostlydead.com today, without EV]

The more you delve into it, the more you can't help but conclude that EV is... mostly dead (we're beginning to see a pattern here). The thing is, this isn't just some random site that went from EV to DV, it's one that Comodo specifically chose to show the value of EV! This is meant to be a poster child site for the value proposition of extended validation and it's one Comodo still promotes to this very day. Yet, here we are, with Ken Kriz obviously having had a change of heart on the efficacy of EV (or possibly never having really believed in it in the first place).

Right about now, the whole EV thing may be starting to feel a bit like this:

[image]

But we're not done yet, there's more and that brings me to another site which used to have EV and has now gone back to DV. It's this site:

[screenshot: this site, now on a DV cert]

I changed that cert just over one day ago and so far, nobody has even mentioned it. Nobody. Not a single person and I've got an audience that's far more aware of this sort of thing than your average person. There's certainly been no shortage of people that could have noticed it over that period too:

[screenshot: site traffic over that period]

Nearly 2 years ago now, I wrote about my journey to an EV cert. Like many of the posts I write, this one was as much for my own education as it was for yours; I wanted to go through the EV process myself (it had always been done by other teams in my previous roles), and frankly, I wanted to see if it actually provided any value. I honestly didn't know at the time and I summarised the post as follows:

This whole EV cert thing is hard to measure in terms of value; I have no idea how many more people will put their email address into HIBP or how much more media or good will or donations it will get. No idea at all.

A couple of years on, I'm pretty convinced of the value: there isn't any. Now that's not to say there was a downside to having the cert in place (even as I became increasingly disillusioned with the whole premise of EV), but rather that there's also no upside. As the renewal date approached (it was 14 December), I made the call to proactively kill the cert and roll over to a free one issued by Cloudflare. There was absolutely no reason at all to pay the renewal fee (I'd previously paid $472 for a 2 year cert) and there was also no reason to wait to roll over to DV, short of loss aversion, which makes about as much sense as, well, EV certs.

I've often pondered the rationale of paying for EV certs and indeed paying for certs at all in an era of freely available ones. I spend a lot of time in companies around the world talking about HTTPS and when I probe on the decision-making process for certs, the phrase "nobody ever got fired for buying IBM" regularly comes up. I wanted to find a good reference to explain the intention of this phrase and I found an excellent one on Wikipedia's definition of FUD:

By spreading questionable information about the drawbacks of less well known products, an established company can discourage decision-makers from choosing those products over its own, regardless of the relative technical merits. This is a recognized phenomenon, epitomized by the traditional axiom of purchasing agents that "nobody ever got fired for buying IBM equipment". The aim is to have IT departments buy software they know to be technically inferior because upper management is more likely to recognize the brand.

In other words, people are making uninformed decisions on what they think is a "safe bet" due to the marketing FUD. I suspect it's a similar mentality to companies placing third party security seals on their websites; they lack the sophistication to realise they can actually increase risk but hey, they were marketed well!

So that's it - EV is now gone from HIBP and nobody will miss it, which would be entirely consistent with the experiences of others who've dropped it:

This turned out to be a long blog post because every time I sat down to write, more and more evidence of the absolute pointlessness of EV presented itself. I started jotting notes down well before some of the events listed above, not least of which was Chrome 69 and the removal of the green address bar, which killed one of the big EV marketing headlines. It's hard to conclude anything other than that EV has gradually suffered death by a thousand cuts; it was something that could be sold at a point in the past when the landscape was very different, but today it's just become a pointless relic of a bygone era. Browser vendors know this and are acting accordingly and it's only a matter of time before the final nail is in the coffin:

That tweet was obviously from before I removed EV from HIBP and it's a glimpse into the future. When Chrome does finally remove the EV visual indicator from the browser (just as they've already done on mobile devices and as Apple has done across the Safari line), that'll well and truly be the end of EV. Perhaps then, the FUD will finally end.

I'll leave you with one final piece that explains the absolute futility of EV and it's a talk I did in London earlier this year. It's embedded at the point where I begin talking about EV and it's the audience interaction here that really makes it. Have a look at how a room full of smart technical people responds when I ask about what visual indicators they expect to see on popular websites. Enjoy!

Safari & Firefox browser to block user data tracking with new security add-ons

By Waqas

Apple has been trying hard to improve the security mechanisms of its hardware and software products.  The addition of new privacy features in Safari browser is yet another attempt to toughen security measures for preventing breaches and tracking by websites like Facebook. It is a well-known fact that companies use cookies to keep track of […]

This is a post from HackRead.com Read the original post: Safari & Firefox browser to block user data tracking with new security add-ons

Emergence of Global Legislation Against ‘Fake News’ May Present Regulatory Risks

In response to fake news becoming an increasingly pervasive issue affecting the global political climate, many countries have implemented, or are in the process of implementing, legislation to combat the online spread of false information. While it’s difficult to reach uniform conclusions about these different legislative acts, organisations with an online presence in countries with anti-fake news laws may be subjected to increased government scrutiny, as well as potential fines or sanctions.

The following countries have passed legislation to combat the spread of fake news:

Qatar

As the first country to pass legislation criminalising the spread of fake news, Qatar’s 2014 cybercrime law provoked a great deal of controversy due to its broad language, which leaves ample room for interpretation. Under this law, it is illegal to spread false news that jeopardises the safety of the state, its general order, and its local or international peace. Offenders found guilty of circulating false information may face prison sentences and/or hefty fines. The law also places harsh sanctions on those found guilty of libel or slander.

The lack of clear criteria for fake news under Qatari law, as well as the prohibition of news that violates “any social values or principles,” presents considerable risks for individuals and businesses in Qatar. For example, in November 2015 a woman was found guilty of violating Qatari cybercrime law because she used insulting language in private messages to her landlord. In the absence of a clear standard for what constitutes such language, this law could similarly be used against firms doing business in Qatar if any of their employees happen to use insulting language over digital channels.

These laws have also been used against media organisations. In 2016, an assistant editor of a Doha newspaper was reportedly questioned by police and spent a night in jail after an individual convicted of child molestation demanded that the newspaper redact a story describing the crimes he had been accused of, on the grounds that such a story damaged his reputation. Although the assistant editor’s case was eventually dismissed, the arrest still illustrates the law’s ability to impact the operations of media outlets.

Malaysia

On April 2, the lower house of Malaysia’s parliament passed the controversial Anti-Fake News Act, a bill calling for fines of up to RM500,000 ($123,100 USD) or up to six years in prison for individuals found guilty of spreading “news, information, data and reports which is or are wholly or partly false.” The first person prosecuted under the law was a Danish citizen, who was fined RM10,000 ($2,460 USD) after accusing Malaysian police of responding slowly to the April 21 shooting of a Palestinian lecturer.

Since the legislation was passed shortly before Malaysia's May elections, following a corruption scandal involving then-incumbent prime minister Najib Razak, many commentators framed the law as an attempt to shield Najib from negative publicity. Najib ultimately lost the election, and the Anti-Fake News Act was repealed on Aug. 16.

The passing and subsequent repeal of Malaysia’s short-lived Anti-Fake News Act demonstrates the potential for political volatility to affect the regulatory business climate. According to Reuters, the law applied to digital publications and social media, including offenders outside of Malaysia, if Malaysia or a Malaysian citizen were affected. As such, if it had achieved longevity, the law could have had serious implications for any international news outlet or social media platform with users in Malaysia.

Kenya

On May 16, Kenyan president Uhuru Kenyatta signed the Computer Misuse and Cybercrimes Act, intended to combat illegal online activity, including the spread of fake news. The law was criticised for the broad, ambiguous language used to define fake news, which leaves enough room for interpretation for the Kenyan government to prosecute dissenting journalism or online speech. Although Kenyatta has already signed the bill into law, it remains to be seen how the law will be implemented and whether it will stand up to legal challenges.

France

After heated debates, the French parliament passed a bill on July 3 to combat fake news during the three months leading up to elections. The law requires social media platforms to allow users to flag stories they believe are false, notify authorities, and publicly disclose actions taken to address fake news. In addition, political candidates would be able to call upon a judge to rule on whether to take down a news story within 48 hours.

The law has been widely criticised for threatening free speech, for causing confusion, and for its unrealistic 48-hour window for judges to verify contested news stories. Moreover, since the law concerns the spread of fake news rather than its production, it will affect a variety of social media websites and other digital platforms with users in France.

Egypt

On July 16, the Egyptian parliament passed legislation that classifies social media users with more than 5,000 followers as media outlets, making them subject to prosecution if found guilty of spreading fake news or inciting readers to break the law. The bill fails to establish clear standards by which the veracity of reports can be judged, leading human-rights activists to express concern that the law was enacted simply as a legal justification for ongoing efforts to suppress free speech.

The Egyptian bill has not yet been signed into law by President Abdel Fattah el-Sisi, but there are no indications that he opposes the measure, and he recently ratified other legislation tightening government control of online activity.

Russia

On July 22, the Russian parliament conducted its first of three votes on a bill that would hold social networks accountable for users’ circulation of false information on their platform. According to the legislation, websites with more than 100,000 visitors per day and a commenting function could be fined 50 million RUB ($800,000 USD) for not removing inaccurate content within 24 hours of its appearance. The law will also require social media companies operating in Russia to establish offices there, which could subject social media giants to increased surveillance from the Russian government.

Flashpoint analysts believe the bill is likely to pass without any serious hurdles, as Russian parliament has demonstrated a willingness to adopt laws governing social media content in the past.

Assessment

Laws intended to combat fake news introduce a variety of regulatory risks for businesses, especially in countries that adopt legislation worded broadly enough to hold online platforms accountable not only for the content they publish, but also for the content shared or created by users. As such, companies operating media platforms or social networks with international user bases should monitor the global regulatory landscape for legislation that may present liabilities and adjust their operations accordingly.

The post Emergence of Global Legislation Against ‘Fake News’ May Present Regulatory Risks appeared first on IT SECURITY GURU.

Kroll Earns Global CREST Accreditation for Penetration Testing Services

Kroll, a division of Duff & Phelps, a global leader in risk mitigation, investigations, compliance, cyber resilience, security and incident response solutions, announces that CREST has accredited Kroll as a global CREST Penetration Testing service provider. This accreditation affirms Kroll’s expertise and authority to conduct penetration testing for clients around the world and helps provide assurance to organisations regarding the strength of their cyber resilience.


CREST was set up in 2006 in response to the need for more regulated professional services in the technical security sector. The non-profit organisation is now recognised globally as the preeminent accreditation and certification body for providers of penetration testing, cyber incident response, threat intelligence and security operations centre (“SOC”) services. CREST accreditation is a mandatory requirement for CBEST engagements commissioned under the framework of the Bank of England.


“Earning this elite accreditation exemplifies how Kroll is continuously enhancing the depth and breadth of our Cyber Risk offerings to help clients around the world achieve greater security and resiliency,” said Jason Smolanoff, Senior Managing Director and Global Cyber Risk Practice Leader for Kroll. “We are proud to be part of an influential community of organisations and professionals who are shaping cyber security best practices for a dynamically changing future.”


“Ultimately, it’s the knowledge, skills and relevant insight that the professional tester brings to the client’s environment that determines the value of penetration testing to an organisation,” said Andrew Beckett, Managing Director and EMEA Leader for Kroll’s Cyber Risk Practice. “Kroll works on hundreds of cases a year, including some of the most complex and highest profile matters in the world. This CREST accreditation underscores how our wide-ranging experience on the cyber security front lines, rigorous methodologies and threat intelligence-based technology all combine to deliver meaningful cyber risk assessments and, if necessary, pragmatic remedial solutions.”


“CREST is delighted to welcome Kroll as a member company,” said Ian Glover, president of CREST. “To become a CREST member, Kroll has been through a demanding assessment process that examined test methodologies, legal and regulatory requirements, data protection standards, logging and auditing, internal and external communications with stakeholders, as well as how test data security is maintained.  Awarding Kroll membership for its penetration testing services means that we are formally recognising that the company consistently delivers the highest professional security services standards to its customers.”


Associate Managing Director William Rimington, based in London, leads the global CREST program for Kroll. Rimington, a prominent authority in the area of penetration testing, has over 20 years of experience in technology architecture and testing, risk and cyber security. Prior to joining Kroll, Rimington led the Global Centre of Excellence for Ethical Hacking at a Big Four firm and was instrumental in the firm’s becoming a global member of CREST as well as a UK-approved provider of services for CBEST.

The post Kroll Earns Global CREST Accreditation for Penetration Testing Services appeared first on IT SECURITY GURU.

Weaving the security thread into the business conversation

It used to be difficult to discuss security within an organisation: terms like Phishing needed explanation, Denial of Service was when the local garage couldn't change the oil in your car, and forget about Botnets. However, over the years, and at an accelerating pace, it has become easier for us security professionals to communicate types of risks and vulnerabilities. Why? Because they are now part of our everyday lives, and when they become normal they don't require explaining; they are familiar.


We all consume services that often today carry the same fundamental weaknesses as they did ten years ago. Can an attacker steal your password today? Yes. Can an adversary take down your preferred social channel? More than likely.


Agreed, improvements have been made and security has been bolstered to make successful attacks that much more difficult, but let's not forget the opposition: those hackers, hacktivists and state-sponsored, military-led attackers have also matured in leaps and bounds. The progress on both sides almost cancels out. Good against bad, right against wrong; it's a stalemate position right now and there doesn't seem to be an end in sight.


“So Nick, what are the options, what do you suggest?”. One thing is for sure, we cannot stop, we must collectively continue to invest in all areas of security, to improve on what we have today and protect against what we sense may be the attacks of tomorrow; to do anything else would be almost negligent. But what we really need is a change to break this cycle. The hamster wheel will always spin when there is a hamster running on it.


Can we rely on technology when technologies can always be broken? After all, if a human put it together, a human can pull it apart. As an example, there are a lot of companies in the security world hedging their bets on Blockchain as a silver bullet for some of our security problems, with practical uses being debated in R&D labs. Fighting technology with technology - is that what we are doing?


However, I do believe that we are closer to solving some of the problems we face, such as Phishing. Changes to how we manage 'identity' and 'access' and getting rid of passwords where possible; that ball is already rolling and gathering speed. But that's just one example, and there are many others where the ball isn't rolling; it's as good as stuck.


Once again it all comes back to people, to be vigilant, to understand the risks, to remain informed, to be responsible, to identify when something isn’t quite right. And until there is a breakthrough in the fundamental way we technically protect, such as a re-engineering or security overlay to the Internet, new attacks will be born and gifted a name, which at first will require explanation until they are simply weaved into the fabric of our everyday lives.

The post Weaving the security thread into the business conversation appeared first on IT SECURITY GURU.

Security Affairs: One year later BlueBorne disclosure, over 2 Billion devices are still vulnerable

One year after the discovery of the BlueBorne Bluetooth vulnerabilities more than 2 billion devices are still vulnerable to attacks.

In September 2017, experts with Armis Labs devised a new attack technique, dubbed BlueBorne, aimed at mobile, desktop, and IoT devices that use Bluetooth. The BlueBorne attack exposes devices to remote attack without any user interaction or pairing; the only condition is that Bluetooth is enabled on the targeted system.

The attack technique leverages a total of nine vulnerabilities in the Bluetooth design that expose devices to cyber attacks.

A hacker in range of the targeted device can trigger one of the Bluetooth implementation issues for malicious purposes, including remote code execution and man-in-the-middle (MitM) attacks. The attacker only needs to determine the operating system running on the targeted device in order to use the correct exploit.

According to the experts, in order to launch a BlueBorne attack, it is not necessary to trick the victim into clicking on a link or opening a malicious file.

The attack is stealthy and victims will not notice any suspicious activity on their device.

blueborne attack

Two months later, experts at Armis also revealed that millions of AI-based voice-activated personal assistants, including Google Home and Amazon Echo, were affected by the Blueborne flaws.

At the time of BlueBorne disclosure, Armis estimated that the security flaw initially affected roughly 5.3 billion Bluetooth-enabled devices.

One year later, the company published a new report warning that roughly one-third of the 5.3 billion impacted devices are still vulnerable to cyber attacks.

“Today, about two-thirds of previously affected devices have received updates that protect them from becoming victims of a BlueBorne attack, but what about the rest? Most of these devices are nearly one billion active Android and iOS devices that are end-of-life or end-of-support and won’t receive critical updates that patch and protect them from a BlueBorne attack.” states the new report published by Armis.

“The other 768 million devices are still running unpatched or unpatchable versions of Linux on a variety of devices from servers and smartwatches to medical devices and industrial equipment.

  • 768 million devices running Linux
  • 734 million devices running Android 5.1 (Lollipop) and earlier
  • 261 million devices running Android 6 (Marshmallow) and earlier
  • 200 million devices running affected versions of Windows
  • 50 million devices running iOS version 9.3.5 and earlier”

It is disconcerting that roughly one billion devices are still running a version of Android that no longer receives security updates: Android 5.1 Lollipop and earlier (734 million) plus Android 6 Marshmallow and earlier (261 million).

It is interesting to note that 768 million Linux devices are running an unpatched or unpatchable version; they include servers, industrial equipment, and IoT systems across many industries.
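
For what it's worth, Armis' per-platform figures do roughly add up to the headline number; a quick sanity check:

```python
# Armis' counts of still-vulnerable devices, in millions
still_vulnerable = {
    "Linux": 768,
    "Android 5.1 (Lollipop) and earlier": 734,
    "Android 6 (Marshmallow) and earlier": 261,
    "Windows (affected versions)": 200,
    "iOS 9.3.5 and earlier": 50,
}

total = sum(still_vulnerable.values())
print(f"{total} million = roughly {total / 1000:.1f} billion devices")
# 2013 million -> ~2.0 billion, consistent with the "over 2 billion" headline
```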

“An inherent lack of visibility hampers most enterprise security tools today, making it impossible for organizations to know if affected devices connect to their networks,” continues the report published by Armis.

“Whether they’re brought in by employees and contractors, or by guests using enterprise networks for temporary connectivity, these devices can expose enterprises to significant risks.”

Armis notified vendors of its findings five months before public disclosure, but the situation has not changed.

“As vulnerabilities and threats are discovered, it can take weeks, months, or more to patch them. Between the time Armis notified affected vendors about BlueBorne and its public disclosure, five months had elapsed. During that time, Armis worked with these vendors to develop fixes that could then be made available to partners or end-users.” added Armis.

Unmanaged and IoT devices are growing exponentially in the enterprise, dramatically enlarging the attack surface and attracting the interest of hackers focused on exploiting Bluetooth as an attack vector.

Pierluigi Paganini

(Security Affairs – BlueBorne, hacking)

The post One year later BlueBorne disclosure, over 2 Billion devices are still vulnerable appeared first on Security Affairs.

HACKMAGEDDON: 16-31 August Cyber Attacks Timeline

Here we go with the second timeline of August, covering the main cyber attacks that occurred between August 16th and August 31st. The timeline suggests that malicious actors decided to end their summer break quite early, as the number of recorded events is considerably higher than in the first timeline.

HACKMAGEDDON

The State of Security: The Challenges of Artificial Intelligence (AI) for Organisations

Governments, businesses and societies as a whole benefit enormously from Artificial Intelligence (AI). AI assists organisations in reducing operational costs, boosting user experience, elevating efficiency and cultivating revenue. But it also creates a number of security challenges for personal data and forms many ethical dilemmas for organisations. Such challenges for information security professionals mean re-calibration […]… Read More

The post The Challenges of Artificial Intelligence (AI) for Organisations appeared first on The State of Security.

What Is the Most Important Skill Cyber Security Professionals Can Possess? The Experts Weigh In

The cyber security field is booming, with demand for cyber security professionals far outpacing supply. This talent shortage has created an industry where pay is high and the options for job seekers are plentiful. Yet there is also a shortage of cyber talent caused by a confluence of factors, including employers demanding too many required […]… Read More

The post What Is the Most Important Skill Cyber Security Professionals Can Possess? The Experts Weigh In appeared first on The State of Security.

Medical records & patient-doctor recordings of thousands of people exposed

By Carolina

Another day, another trove of medical records leaked online, thanks to a misconfigured AWS S3 bucket. Medical records are considered sensitive documents, and when a malicious third party has access to them it is bad news, as these records can be used for fraud, blackmail, and marketing purposes against patients' will. However, […]

This is a post from HackRead.com Read the original post: Medical records & patient-doctor recordings of thousands of people exposed

Apache Struts & SonicWall’s GMS exploits key targets of Mirai & Gafgyt IoT malware

By Waqas

Security researchers at Palo Alto Networks' Unit 42 have discovered modified versions of the notorious Mirai and Gafgyt Internet of Things (IoT) malware. The new variants are capable of targeting flaws that affect Apache Struts and SonicWall Global Management System (GMS). Moreover, the Unit 42 researchers also discovered new versions of Mirai and Gafgyt (aka BASHLITE) […]

This is a post from HackRead.com Read the original post: Apache Struts & SonicWall’s GMS exploits key targets of Mirai & Gafgyt IoT malware

Cryptocurrency App Mocks Competitor For Getting Hacked. Gets Hacked 4 Days Later

An anonymous reader writes: A hacker going online by the pseudonym of "aabbccddeefg" has exploited a vulnerability to steal over 44,400 EOS coins ($220,000) from a blockchain-based betting app. The hack targeted a blockchain app that lets users bet with EOS coins in a classic dice game. The entire incident is quite hilarious because four days before it happened, the company behind the app was boasting on Twitter that every other dice betting game had been hacked and lost funds. "DEOS Games, a clone and competitor of our dice game, has suffered a severe hack today that drained their bankroll," the company said in a now deleted tweet. "As of now every single dice game and clone site has been hacked. We have the biggest bankroll, the best developers, and a superior UI. Play on." While the hack is somewhat the definition of karma police, it is also quite funny because the hacker himself didn't really care about hiding his tracks or laundering the stolen funds. "So this guy hacks EOSBET and what does he do? Play space invaders. I'm not even kidding...," a user analyzing the hacker's account said.

Read more of this story at Slashdot.

Official mobile version of Tor Browser released for Android – Download now

By Waqas

There is good news for pro-anonymity web users who rely upon the Tor browser for using the internet. The Tor Project, the organization behind the Tor network, has released the first ever mobile browser app for Tor. The mobile version of the Tor browser is available for download at the Google […]

This is a post from HackRead.com Read the original post: Official mobile version of Tor Browser released for Android – Download now

Canadian town forced to pay Bitcoin after nasty ransomware attack

By Uzair Amir

The town of Midland, Ontario, Canada, has decided to pay cybercriminals after its servers were targeted and infected with nasty ransomware on Saturday, September 1, at approximately 2 a.m. The total amount of the ransom payment has not been disclosed, but the demand from the cybercriminals was that they must be paid in Bitcoin if the town wants […]

This is a post from HackRead.com Read the original post: Canadian town forced to pay Bitcoin after nasty ransomware attack

Cloud data management firm exposes database with over 440M emails & IP addresses

By Waqas

Data management software companies are naturally assumed to be perfectly capable of managing their own data. However, it turns out that some companies, even the most popular ones, struggle to do so. The well-known cloud data management firm Veeam has been in the news lately for grave mismanagement of its customer data, something the […]

This is a post from HackRead.com Read the original post: Cloud data management firm exposes database with over 440M emails & IP addresses

Russian Cybercriminal Pleads Guilty to Operating Kelihos Botnet

By Uzair Amir

A Russian national, Peter Yuryevich Levashov, has pleaded guilty to operating the Kelihos botnet, which was used to launch a huge spamming and credential-stealing campaign across the globe. Levashov, a 38-year-old resident of St. Petersburg, Russia, was presented before a Connecticut US District Court and admitted to being involved in a large […]

This is a post from HackRead.com Read the original post: Russian Cybercriminal Pleads Guilty to Operating Kelihos Botnet

ICS CERT warns of several flaws in Fuji Electric V-Server

Experts discovered several flaws in Fuji Electric V-Server, a tool that connects PCs within the organizations to Industrial Control Systems (ICS).

Experts discovered several vulnerabilities in Fuji Electric V-Server, a tool that connects PCs within the organizations to Industrial Control Systems (ICS) on the corporate network. The ICS-CERT published two advisories to warn of the existence of the flaws that could have a severe impact on a broad range of companies in the critical manufacturing sector.

Fuji Electric V server

The vulnerabilities, rated as "high severity", could be exploited by a remote attacker to execute arbitrary code. These kinds of issues affecting products that control ICS systems are very dangerous and pose a severe threat to companies; their security is essential to avoid ugly surprises.

Vulnerabilities affecting products that connect the corporate network to industrial control systems (ICS) can pose a serious threat since that is how many threat actors attempt to make their way onto sensitive systems.

Fuji Electric V-Server gives PCs access to programmable logic controllers (PLCs) on the corporate network via Ethernet. Control of the PLCs is implemented via the Monitouch human-machine interfaces (HMIs).

“Successful exploitation of these vulnerabilities could allow for remote code execution on the device, causing a denial of service condition or information exposure.” reads the advisory published by the ICS CERT.

The list of vulnerabilities includes use-after-free, untrusted pointer dereference, heap-based buffer overflow, out-of-bounds write, integer underflow, out-of-bounds read, and stack-based buffer overflow vulnerabilities that could be exploited by remote attackers to execute arbitrary code and trigger denial-of-service (DoS) condition or information disclosure.

The bad news is that public exploits for some flaws are already available online.

The ICS-CERT also warns of another high severity buffer overflow in V-Server Lite that can lead to a DoS condition or information leakage. The flaw could be triggered by tricking victims into opening specially crafted project files.

The vendor addressed the issues with the release of version 4.0.4.0.

The flaws were reported to the vendor via Trend Micro’s Zero Day Initiative (ZDI) by researchers Steven Seeley from Source Incite and Ariele Caltabiano.

ZDI rated the flaws as "medium severity" with a CVSS score of 6.8; the most severe issue was the one found by Caltabiano.

“This vulnerability allows remote attackers to execute arbitrary code on vulnerable installations of Fuji Electric V-Server. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file.” states the advisory from ZDI.

“The specific flaw exists within the parsing of a VPR file. The issue results from the lack of proper validation of user-supplied data, which can result in an integer underflow before writing to memory. An attacker can leverage this vulnerability to execute code under the context of the V-Server process.”
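
To illustrate the class of bug ZDI describes (a generic sketch only; the field layout and 16-byte header below are invented for illustration, not taken from the actual V-Server parser), here's how an unvalidated length field can underflow in 32-bit arithmetic before a copy:

```python
import struct

HEADER_SIZE = 16      # hypothetical header length
MASK32 = 0xFFFFFFFF   # emulate 32-bit unsigned arithmetic, as in native code

def payload_length(header: bytes) -> int:
    """Parse a length field the way buggy native code might: no sanity check."""
    (declared,) = struct.unpack_from("<I", header, 0)
    # If declared < HEADER_SIZE, this subtraction wraps around to a huge
    # value in 32-bit arithmetic - the classic integer underflow.
    return (declared - HEADER_SIZE) & MASK32

# A malicious file declaring a total length of 4, smaller than the header:
evil = struct.pack("<I", 4) + b"\x00" * 12
print(payload_length(evil))  # 4294967284 bytes "to copy" -> memory corruption
```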

Pierluigi Paganini

(Security Affairs – Fuji Electric V-Server, China)

The post ICS CERT warns of several flaws in Fuji Electric V-Server appeared first on Security Affairs.

Researchers Discover Vulnerability in Tesla Model S Key

A group of COSIC experts from KU Leuven University in Belgium have developed a new relay attack called Passive Key

Researchers Discover Vulnerability in Tesla Model S Key on Latest Hacking News.

Apple removes top anti-malware apps from its store for “stealing data”

By Waqas

Adware Doctor and Trend Micro Apps have been kicked out by Apple. Apple Inc. has always propagated its products as designed with most advanced security and privacy practices. The company has also promoted itself as the only firm that prioritizes and safeguards user privacy. The iOS and Mac App Stores are referred to as the […]

This is a post from HackRead.com Read the original post: Apple removes top anti-malware apps from its store for “stealing data”

The 42M Record kayo.moe Credential Stuffing Data

This is going to be a brief blog post but it's a necessary one because I can't load the data I'm about to publish into Have I Been Pwned (HIBP) without providing more context than what I can in a single short breach description. Here's the story:

Kayo.moe is a free, public, anonymous hosting service. The operator of the service (Kayo) reached out to me earlier this week and advised they'd noticed a collection of files uploaded to the site which appeared to contain personal data from a breach. Let me be crystal clear about one thing early on:

This is not about a data breach of kayo.moe - there's absolutely no indication of any sort of security incident involving a vulnerability of that service.

Concerned that the data may indicate a previously unknown breach, Kayo then sent me over a total of 755 files totaling 1.8GB. The vast majority of the files were in a format similar to this:

[screenshot: redacted username:password pairs from one of the files]

This is the very typical username:password pair format used in credential stuffing attacks. These attacks take data from multiple breaches and combine it into a single unified list so that it can be used in account takeover attempts on other services. In May last year, I loaded more than 1 billion records from other incidents very similar to this, and the real risk these lists pose to people is that if they've reused their password in multiple places, each of those accounts is now in jeopardy once the username and password appear in one of these lists.

The data also contained a variety of other files; some with logs, some with partial credit card data and some with Spotify details. This doesn't indicate a Spotify breach, however; I consistently see pastes implying a Spotify breach, yet every time I've delved into one it's always come back to account takeover via password reuse. In short, this data is a combination of sources intended to be used for malicious purposes.

When I pulled the email addresses out of the files, I found almost 42M unique values. I took a sample set and found about 89% of them were already in HIBP, which meant there was a significant amount of data I'd never seen before. (Later, after loading the entire data set, that figure went up to 93%.) There was no single pattern for the breaches they appeared in, and the only noteworthy thing that stood out was a high hit rate against numeric email address aliases from Facebook, also seen in the (most likely fabricated) Badoo incident. Inverting that number and pro-rata'ing it to the entire data set, I'd never seen more than 4M of the addresses. So I loaded the data.
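
For the curious, pulling the unique addresses out of a pile of combo files is conceptually simple; a rough Python sketch (the folder name and file layout are assumptions, and real dumps need far more defensive parsing):

```python
import re
from pathlib import Path

EMAIL_RE = re.compile(r"[^\s:;,]+@[^\s:;,]+\.[^\s:;,]+")

def unique_emails(folder: str) -> set:
    """Extract and deduplicate email addresses across a folder of dump files."""
    seen = set()
    for path in Path(folder).glob("*.txt"):
        with path.open(errors="ignore") as handle:
            for line in handle:
                match = EMAIL_RE.search(line)
                if match:
                    seen.add(match.group().lower())
    return seen

emails = unique_emails("dumps/")  # illustrative folder name
print(f"{len(emails):,} unique addresses")
```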

There's always questions after data like this is loaded so let me do a very brief Q&A:

Do the filenames indicate the source? No, each file name is obfuscated, I believe as part of the upload process to kayo.moe.

Can I provide the password used? No, I've written about why not and it still poses an unacceptable risk to both individuals in the breach and myself.

Had these passwords been seen before? A sample set of the data showed that more than 91% of the passwords were already in Pwned Passwords, so if you're worried about yours, check there (there's a sketch of such a check after this Q&A).

Will you load these into Pwned Passwords? Possibly. My hesitation is that there's a large number of files that aren't all in a consistent format so it's a non-trivial exercise. I'm committing to looking at it, but I can't put a timeframe on it.

Doesn't this make the data useless in HIBP? Time and time again, I've asked if I should load incidents like this under the constraints mentioned above and I always get a resounding "yes". If it's not of use to you, ignore it.

What can I actually do about this? These lists take advantage of password reuse so if you're not reusing passwords, you're all good. If you are, get a password manager (I use 1Password).
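
For completeness, here's what a check against the public Pwned Passwords range API looks like; only the first five characters of the password's SHA-1 hash ever leave your machine (a minimal sketch with error handling omitted):

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Check a password via the Pwned Passwords k-anonymity range API."""
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        body = response.read().decode()
    # The API returns lines of "SUFFIX:COUNT" for the given prefix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(pwned_count("P@ssw0rd"))  # a frequently-breached example password
```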

In short, this is another one of those awareness incidents. I made a commitment to HIBP subscribers to let them know when I see their data so here we are, even if it's not as immediately actionable as a data breach with a clearly identifiable source is. To be honest, if your personal security practices are up to scratch (password manager plus 2FA), this is a bit of a non-event.

Finally, I want to thank Kayo for the support and I'll ask for their input in the comments below if there's any questions related specifically to the kayo.moe service.

The kayo.moe credential stuffing data is now in HIBP and as with previous similar data sets, it's flagged as unverified.

What Cloud Migration Means for Your Security Posture

It shouldn’t come as a surprise to anyone reading this article that there has been a major shift towards businesses hosting their critical applications in the cloud. Software-as-a-Service (SaaS), as well as cloud-based servers from Amazon or Microsoft, have changed the way we build networked business systems for any size organization. Cloud-hosted solutions can (but […]… Read More

The post What Cloud Migration Means for Your Security Posture appeared first on The State of Security.

Pakistani hacker reports address bar spoofing flaws in Edge & Safari browser

By Waqas

Rafay Baloch has reported vulnerabilities in the Edge and Safari browsers that allow address bar exploitation. Nowadays phishing attacks have become increasingly sophisticated and difficult to detect, so it is indeed appreciable that security researchers are managing to spot such campaigns in their initial phases. Reportedly, a security researcher from Pakistan, Rafay Baloch, has discovered […]

This is a post from HackRead.com Read the original post: Pakistani hacker reports address bar spoofing flaws in Edge & Safari browser

Air-conditioned apocalypse: A blackout scenario involving smart climate control devices

By David Balaban

Science fiction movies often depict various situations related to cybercriminals’ activity. These can include predicaments where threat actors disrupt the transportation system of a large city or cause power outages in entire regions. In fact, this is beyond science fiction these days – impacting the power grid isn’t that difficult. The only viable way to […]

This is a post from HackRead.com Read the original post: Air-conditioned apocalypse: A blackout scenario involving smart climate control devices

The many faces of omnichannel fraud

The rise of new technologies, social networks, and other means of online communication has brought about compelling changes in industries across the board.

For example, in retail, organizations use digital tools such as websites, email, and apps to reach out to their current and potential clients, anticipate their needs, and fully tailor their business strategies around making the user shopping experience as positive, seamless, frictionless, and convenient as possible.

This is the heart of the omnichannel approach. And while the foreseen outcome may sound lovely to consumers and businesses alike, it's actually easier said than done. A lot of planning, executing, aligning of goals and core values, and—most importantly to us—securing is involved.

As for the organizations that have adopted this approach, a majority believe that they don't have adequate tools and measures in place to protect their businesses against fraud in the omnichannel environment.

What is omnichannel?

To understand how we can protect businesses in an omnichannel environment, we should go back to basics. It’s important to know what omnichannel is, how it works, and how it affects clients of organizations using this approach.

Omnichannel—also spelled omni-channel—is a compound word composed of the words “omnis” and “channel.” Omnis is the Latin word for “all,” while channel, in this case, pertains to a way of making something, such as information or a product, available. With these in mind, one could roughly define omnichannel as available in all channels, irrespective of the business or the industry it belongs to.

For example, although an omnichannel banking strategy looks different from an omnichannel retail strategy, both apply the same principles. Here’s a simple illustration:

In omnichannel banking, the customer can access their accounts anywhere, pay their bills anywhere, and get money anywhere.

In omnichannel retail, the customer can browse items anywhere, pay anywhere, and return them anywhere.

It's safe to assume that a majority of businesses already have the "all channels" part covered, but the basic tenet that sets the omnichannel approach apart from the multi-channel approach is its focus: Omnichannel pays more attention to how the organization interacts with the client and less to the actual transaction. The interaction between customer and organization is seamless—meaning, the customer won't meet bumps when switching from one device to another in the middle of a purchase—regardless of the channel the customer chooses.

Because communication among channels also happens at the backend, the organization is able to anticipate a customer’s future needs, wants, and likes, which they then use to (1) tailor their pitches and/or ads and (2) communicate messages to the customer consistently across channels.

A successful and effective omnichannel strategy fosters a deeper relationship between customer and organization, which in turn translates into invaluable, loyal, and happy customers.

When a new strategy introduces new security risks

Risks are unavoidable when an organization undergoes strategic change. It’s already challenging enough for organizations to let their channels start talking to each other as part of the drive to enhance customer experience. With customers now becoming more informed, connected, and knowledgeable about what they want and what they don’t want to encounter when interacting with a brand, they significantly influence and shape the way retailers respond to them.

And why not? Nowadays, it’s relatively easy for customers to be put off by a brand that doesn’t address their growing demand for a faster, more personalized, flexible, and seamless experience overall.

Addressing such demands inevitably leads to introducing new ways consumers can shop, an uptick in the availability of fulfillment options, and the increased availability of new payment options to users. Of course, where a hand-over of money, product, or data is involved, fraud is fast on its heels.

Types of fraud in omnichannel

Organizations looking into adopting an omnichannel approach should also look into ways they can protect user data, user accounts, and sensitive financial data (if they haven't already), on top of protecting their physical and digital assets. Below, we have identified several fraud types that are found in an omnichannel retail environment. (Note that some of these can also be found in multi-channel retail environments):

  • Card-not-present (CNP) fraud. A well-known scam where a fraudster uses stolen card and owner details to make online or over-the-phone purchases. As the fraudster cannot show the card to the retailer for visual inspection, they get away with the fraudulent purchase.
  • Cross-border or cross-channel fraud. Fraudsters steal credentials and sensitive personal information used by their target in one channel so they can commit fraud in another or an associated channel.
  • Click-and-collect fraud. This is otherwise known as "buy online, pick up in store" fraud. It occurs when a fraudster, armed with stolen card details and details of the real owners (for backup), buys online and then picks up the item from the store before the purchase can be flagged as fraudulent.
  • Card-testing fraud. Also known as "stolen card number testing," this tactic occurs when fraudsters use a merchant's website to test whether stolen card credentials are still valid by making small, incremental purchases. According to Radial, an omnichannel solutions company, card-testing fraud increased 200 percent in 2017. (A minimal detection sketch follows this list.)
  • Return fraud. This comes in many shapes and sizes. One type, known as friendly fraud, happens when a seemingly legitimate buyer purchases an item online, receives it, and then contacts their card issuer to claim that they never received the item they bought. Return fraud also happens when a buyer purchases electronics, removes the expensive parts, and then returns the item to the store.
  • Mobile payment fraud. In a world that is now described as "mobile-first," it's only logical to expect that fraud born from mobile device usage could outpace web fraud. And it has. Previously, mobile browsers were typically the point of origin of such fraud; nowadays, fraud can be done via mobile apps.
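The card-testing pattern above lends itself to a simple velocity check: flag any card that makes several very small purchases in a short window. Here is a minimal sketch; the Transaction shape and all thresholds are illustrative assumptions, not an industry standard.

```python
# Hypothetical velocity check for card-testing fraud: flag a card that makes
# several very small purchases within a short window. Thresholds are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Transaction:
    card_token: str   # tokenised card reference, never the raw card number
    amount: float
    timestamp: datetime

def looks_like_card_testing(history: list[Transaction],
                            max_small_txns: int = 5,
                            small_amount: float = 2.00,
                            window: timedelta = timedelta(minutes=10)) -> bool:
    # Collect timestamps of "small" transactions, then slide a window over them.
    small = sorted(t.timestamp for t in history if t.amount <= small_amount)
    for i, start in enumerate(small):
        if sum(1 for t in small[i:] if t - start <= window) >= max_small_txns:
            return True
    return False
```

A real fraud engine would tune these thresholds per merchant and combine this signal with others (IP reputation, device fingerprinting) before declining anything.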

Addressing omnichannel fraud

With the amount of fraud omnichannel organizations are currently exposed to, a unified approach to addressing it is a must. There are already third-party solution providers that an organization can approach for assistance. However, there are also practical steps organizations can take, especially if the budget is particularly tight, to nip fraud in the bud.

Track fraud across your channels. This allows organizations to identify the flaws in each of their channels so they can tailor their security strategy. Consider putting together an exclusive department to oversee this task and manage the data. With a team or one person focused on assessing, identifying, and coming up with ways to mitigate the business's risk against fraud, it would be easier to get executive backing, especially when it's time to invest funds on more sophisticated protection tools as the business grows.

Come up with a fraud prevention strategy. And this can only be done after the data from tracking channels has been collected and analyzed. Remember that for a fraud prevention strategy (or any strategy for that matter) to be effective, it should be designed based on the current and future needs of the organization.

Implement multi-factor authentication (MFA). Authentication is the first line of defense against fraud, so implementing at least two factors is better than relying on a single one. But organizations must make sure that the authentication methods they adopt are reliable and difficult to intercept. That said, SMS authentication should no longer be an option.
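As one concrete example of a factor stronger than SMS, here is a minimal sketch of server-side TOTP (time-based one-time password) verification using the third-party pyotp library; the account name and issuer are illustrative placeholders.

```python
# Minimal TOTP sketch with the third-party 'pyotp' library.
import pyotp

# At enrolment: generate a per-user secret and hand it to the user's
# authenticator app, typically as a QR code of the provisioning URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleShop"))

# At login: verify the six-digit code the user typed.
# valid_window=1 tolerates one 30-second step of clock drift.
user_code = input("Enter your authenticator code: ")
print("OK" if totp.verify(user_code, valid_window=1) else "Rejected")
```

The secret must be stored server-side as carefully as a password hash; anyone who obtains it can generate valid codes.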

If consumers want a unified and consistent experience across all channels, they should expect the same when it comes to identity authentication. While true omnichannel authentication is still in its infancy, many organizations already recognize its importance and potential. This is good news, and something organizations must keep an eye on.

Encrypt data. It’s one of the fundamental ways an organization can protect the exchange of data between their clients and their systems. Yet, there are still organizations that transfer, share, and store sensitive data in human-readable format. They probably think it’s still okay to do this in the age of breaches, even when point-to-point encryption methods are already available for businesses to use. But here’s the truth: This. Shouldn’t. Be. Happening. Anymore.

Dear Organization, please don’t be that company.


Read: Encryption: types of secure communication and storage
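To make the encryption point concrete, here is a minimal sketch of protecting sensitive data at rest with the cryptography package's Fernet recipe (authenticated symmetric encryption). It is an illustration only; for payment data specifically, the point-to-point encryption mentioned above and a compliant payment provider do the heavy lifting.

```python
# Minimal at-rest encryption sketch using the 'cryptography' package's Fernet
# recipe (symmetric encryption with built-in integrity checking).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production, keep this in a secrets manager, not in code
f = Fernet(key)

token = f.encrypt(b"sensitive customer record")  # ciphertext is safe to persist
print(f.decrypt(token))                          # only holders of the key can read it back
```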


Secure your e-commerce website. Principles we learned in Security 101 apply here: keep your software updated, use HTTPS hosting, use strong passwords (especially for admin accounts), back up data regularly, and use security software. Also, we hasten to add: don't store sensitive data on your server. Instead, use a third-party payment solution to conduct secure payment transactions between your organization and your clients.

The store of the future and cybersecurity: final thoughts

Going omnichannel is a continuing trend that won't be going away any time soon. In retail, today's customer demands and expectations are high, and businesses are expected to meet or exceed them. Doing so gives organizations an edge over their competitors, not to mention that evolving to omnichannel is a sure way of future-proofing their businesses. However, organizations must keep this in mind: If the omnichannel approach increases user convenience, it may be convenient for fraudsters, too.

While overall growth is a business's main objective, cybersecurity considerations should not be deprioritized. In an omnichannel environment, exposure to fraud, malware, and other digital crimes is heightened. As such, a lot more assets need to be protected.

The post The many faces of omnichannel fraud appeared first on Malwarebytes Labs.

What is the difference between sandboxing and honeypots?

Honeypots and Sandboxing

We’ve said it more than once on this blog: when it comes to cybersecurity, it’s not enough to simply act reactively: acting preventively is also vital, because the best way to defend against an attack is to get ahead of it, preempt it, and stop it from happening.

For this reason, in their eagerness to stay ahead, an increasing number of companies allocate part of their corporate cybersecurity resources to studying new trends, analyzing the latest cybercrime strategies, and, ultimately, to being able to protect their company’s IT security in a much more efficient way, avoiding problems before they even appear.

This is where we start to see two concepts that are very common in the sector: honeypots and sandboxing, two IT risk prevention strategies that, while they may seem similar, in fact differ in several ways.

What is a honeypot?

A honeypot is a cybersecurity strategy aimed, among other things, at deceiving potential cybercriminals. Whether implemented via software or human actions, a honeypot is a setup in which a company pretends to have a few "ways in" to its systems that haven't been adequately protected.

The tactic is as follows: In the first step, a company decides to activate a series of servers or systems that seem to be sensitive. Ostensibly, this company has left a few loose ends untied and seems to be vulnerable. Once the trap is set, the aim is to attract attackers, who will respond to this call and attempt to get in. However, what the cybercriminal doesn't know is that, far from having found a vulnerable door, they are being watched and monitored the whole time by the company in question.
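To make the idea concrete, here is a minimal sketch of a low-interaction honeypot: a listener on an otherwise unused port that serves nothing real and logs every connection attempt. The port and log file are illustrative choices; a production honeypot would be far more hardened and isolated.

```python
# Minimal low-interaction honeypot sketch: log every connection attempt to a
# decoy port. Never run this on a machine holding real data.
import logging
import socket

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2222))   # decoy port that looks like an exposed service
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            first_bytes = conn.recv(1024)  # capture whatever the scanner sends
            logging.info("connection from %s:%s, first bytes: %r", *addr, first_bytes)
```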

This gives companies a triple benefit: firstly, they can stop genuinely dangerous attacks; secondly, they can keep attackers busy, wearing them out and making them waste time; and finally, they can analyze their movements and use this information to detect possible new attack strategies that are being used in the sector.

Honeypots are similar to so-called cyber counterintelligence, which also uses a strategy of placing cybersecurity bait that, because of its vulnerable appearance, lures attackers in and tricks them, thwarting their attempts while at the same time spying on them, analyzing and monitoring their movements.

In fact, there are ways to make the tactic even more sophisticated: if the honeypot isn't developed on unused networks, but rather on real applications and systems, we start to talk about a honeynet, which can further mislead the cybercriminal and make them believe without a shadow of a doubt that they are attacking the very heart of the company's IT security.

Ultimately, honeypots are a strategy that can be very useful, especially for large companies, since these companies usually store a large amount of confidential information and, as a result of the volume of activity, are extremely tempting targets for potential attackers.

What is a sandbox?

Sandboxes, on the other hand, have several elements that set them apart from honeypots. This is a much less risky tactic, and is carried out when a company suspects that some of their programs or applications may contain malware.

In this case, the company totally isolates the process. Not only is it carried out on a separate server with the possible ways in closed off, but it is also run on a single computer, making sure that at no time does this computer establish any kind of connection with other devices in the company.
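A minimal sketch of the core idea on Unix: run the suspect program in a child process with hard CPU and memory caps and a timeout. A real sandbox would also isolate the filesystem and network (via a VM or container); this sketch only shows the resource-limiting part, and the binary path is illustrative.

```python
# Run an untrusted program with hard resource limits (Unix only).
import resource
import subprocess

def limit_resources():
    # Applied in the child process just before the program starts.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))                     # 5 seconds of CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))  # 256 MB of memory

result = subprocess.run(
    ["/path/to/suspect/binary"],   # illustrative path to the sample under analysis
    capture_output=True,
    timeout=30,                    # raises TimeoutExpired (after killing it) if it hangs
    preexec_fn=limit_resources,
)
print(result.returncode, result.stdout[:200])
```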

So, while the goal of the honeypot is to attract attackers in order to avoid their attacks, making them waste their time, sandboxing is focused on evaluating possible infections that could already have affected the system, and running them in isolation so that they don’t affect the rest of the company.

Sandboxing is therefore a perfect strategy for companies that work with material downloaded from the Internet that could potentially compromise IT security. It is also very useful when an employee, through a lack of cybersecurity training and awareness, downloads an attachment that could be a threat to the company's IT systems.

The fact is that one thing needs to be made clear to companies: regardless of their size, right now, all of them are susceptible to being attacked and falling victim to cybercrime. In this context, it is therefore vital to broaden the range of options when it comes to protecting cybersecurity through IT risk prevention.

The post What is the difference between sandboxing and honeypots? appeared first on Panda Security Mediacenter.

SN 680: Exploits & Updates

This week we discuss Windows 7's additional three years of support life, MikroTik routers back in the news (and not in a good way), Google Chrome 69's new features, the hack of MEGA's cloud storage extension for Chrome, week 3 of the Windows Task Scheduler 0-day, a new consequence of using '1234' as your password, Tesla making their white hat hacking policies clear... just in time for a big new hack, our PCs as the new malware battlefield, a dangerous OpenVPN feature that's been spotted, and Trend Micro, caught spying, getting kicked out of the MacOS store.

Hosts: Steve Gibson and Jason Howell

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.


Security firm uses Twitter to disclose critical zero-day flaw in Tor Browser

By Waqas

Zerodium, an infosec and premium zero-day acquisition platform, tweeted about the flaw in the Tor browser on Monday. The infamous exploit vendor and buyer/seller of popular software vulnerabilities has revealed a critical flaw in the Tor browser software. According to a tweet posted by Zerodium, the zero-day vulnerability is present in the NoScript browser plugin and can […]

This is a post from HackRead.com Read the original post: Security firm uses Twitter to disclose critical zero-day flaw in Tor Browser

British Airways hackers used same tools behind Ticketmaster breach

The British Airways web hack wasn't an isolated incident. Analysts at RiskIQ have reported that the breach was likely perpetrated by Magecart, the same criminal enterprise that infiltrated Ticketmaster UK. In both cases, the culprits used similar virtual card skimming JavaScript to swipe data from payment forms. For the British Airways attack, it was just a matter of customizing the scripts and targeting the company directly instead of going through compromised third-party customers.
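One widely recommended defence against this class of script tampering is Subresource Integrity (SRI): the page publishes a hash of each script it loads, and the browser refuses to execute a script whose content no longer matches. A minimal sketch of generating the hash (the filename is an illustrative placeholder):

```python
# Compute a Subresource Integrity (SRI) value: base64 of the SHA-384 digest.
import base64
import hashlib

with open("checkout-widget.js", "rb") as fh:   # illustrative script name
    digest = hashlib.sha384(fh.read()).digest()

print(f'integrity="sha384-{base64.b64encode(digest).decode()}"')
# Used in HTML as:
# <script src="https://cdn.example.com/checkout-widget.js"
#         integrity="sha384-..." crossorigin="anonymous"></script>
```

SRI helps most with scripts served from third parties or CDNs; it cannot save a page whose own origin is already compromised.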

Via: The Verge

Source: RiskIQ

Researchers demonstrate how to unlock Tesla wireless key fobs in 2 seconds

By Waqas

What's surprising about vulnerabilities and security flaws in vehicle security systems is not that they exist, but that even the most renowned car manufacturers are unable to provide consumers with fool-proof systems. Wired reports that Tesla recently fixed a vulnerability in the security systems of its cars after a group of researchers in Belgium proved that the […]

This is a post from HackRead.com Read the original post: Researchers demonstrate how to unlock Tesla wireless key fobs in 2 seconds

How to bridge the cybersecurity skills gap

By 2021, there will be more than 3.5 million unfilled jobs in the cybersecurity sector.

That statistic, published by Cybersecurity Ventures in June 2017, highlighted the growing structural deficit of security professionals. The cybersecurity skills gap continues to grow – but just how large and severe is it? And what can businesses do to mitigate the problem?

Bridging the cybersecurity skills gap is one of the biggest challenges organisations face today – and many are already struggling. Few organisations have the resources to deal with the growing threat posed by cyber criminals and advanced attacks. Viruses, malware and other threats are increasingly diverse and complex, and most organisations lack the staff and skill to deal with the threats appearing now, let alone the ones that will appear in the future.

  • Hire and train more talent

    Organisations need to acquire the best cybersecurity analysts and use them as mentors for talented but inexperienced cybersecurity trainees.

    The benefit is twofold. On the one hand, organisations benefit from the expertise that trained analysts can provide, and on the other, cybersecurity trainees learn from the best and can quickly get up to speed.

Only 1 in 10 organisations have cybersecurity experts on their teams

A study conducted earlier this year by Forrester Consulting for Hiscox revealed that only 11% of the organisations reviewed actually had ‘experts’ on their security teams and were, therefore, well prepared to face cybersecurity challenges. On the other hand, nearly three-quarters of organisations (73%) fell into the novice category, suggesting they had a long way to go before they were ‘cyber ready’.

With skilled cybersecurity professionals in short supply, it’s expected that organisations will continue to struggle to fill cybersecurity positions with the right employees.

  • Outsource endpoint security management to specialist service providers or managed detection and response teams

    Gartner estimates that, by 2020, 50% of managed security service providers (MSSPs) will offer Managed Detection and Response (MDR) services.

    For organisations unable to hire or train cybersecurity analysts quickly enough, outsourcing cybersecurity management (or elements of it) to specialist service providers or MDR teams is a viable option. This should reduce risk through 24/7 threat monitoring, detection and response capabilities, and also give organisations access to the best cybersecurity professionals.

    With such an approach, organisations can augment their existing cybersecurity network, providing an additional layer of protection, as well as use the expertise provided by MDR teams to get insight, actionable advice, threat context and coverage.

Almost half of security alerts are not investigated

According to the Cisco 2017 Security Capabilities Benchmark Study, 44% – almost half – of security alerts are not investigated.

The study found that, due to “various constraints”, such as resource, budget and lack of trained personnel, organisations can only investigate 56% of the security alerts they receive. Of the alerts investigated, only 46% are remediated, leaving 54% of those alerts unresolved.

The main problem is that security alerts need to be reviewed and remediated manually. Cybersecurity systems can flag threats, yes, but those threats also need to be manually verified and prioritised by analysts. As a result, the process takes significantly longer – and with so many threats being received on a daily basis, it’s no surprise that many go unchecked.

  • Invest in more robust and accurate cybersecurity systems

    A major challenge for organisations is the remediation and reprioritisation of threats. Cybersecurity systems can detect issues, but often those issues need to be resolved manually. According to our own research, more than half of the cybersecurity professionals we surveyed estimated that half of threat alerts are improperly prioritised by systems and have to be fixed manually.

    With many organisations’ security teams stretched thin and responding to an overwhelming number of threats on a daily basis, systems need to be honed and adapted as threats evolve and increase. That is the only way to truly be cyber resilient.

Don’t make the mistake of treating cybersecurity as a “technical problem” and delegating it to the IT department. The reality is that cybersecurity is a business-wide issue. Defending an organisation from cyber-attack requires an understanding of what is at stake.

The IT department can resolve the issue, sure, but what’s the point if poor employee practice means that they face another problem as soon as one is fixed?

Wider business context and an appreciation of business risk, exposure and priorities are needed. Departments within organisations need to work together with the IT department, not treat it as a separate entity.

If you want to learn more about the cybersecurity skills gap, the threats facing modern businesses, and how best to prepare for and combat those threats, download our report by clicking the button below.

Download the PandaLabs Annual Report 2017

The post How to bridge the cybersecurity skills gap appeared first on Panda Security Mediacenter.

The Effectiveness of Publicly Shaming Bad Security


Here's how it normally plays out: It all begins when a company pops up online and makes some sort of ludicrous statement related to their security posture, often as part of a discussion on a public social media platform such as Twitter. Shortly thereafter, the masses descend on said organisation and express their outrage at the stated position. Where it gets interesting (and this is the whole point of the post), is when another group of folks pop up and accuse the outraged group of doing a bit of this:

[image from the original post]

Shaming. Or chastising, putting them in their place or taking them down a peg or two. Whatever synonym you choose, the underlying criticism is that the outraged group is wrong for expressing their outrage towards the organisation involved, especially if it's ever construed as being targeted towards whichever individual happens to be the mouthpiece of the organisation at the time. Shame, those opposed to it will say, is not the way. I disagree and I want to explain - and demonstrate - precisely why.

Let's start with a few classic examples of the sort of behaviour I'm talking about in terms of those ludicrous statements:

[image from the original post]

See the theme? Crazy statements made by representatives of the companies involved. The last one from Betfair is a great example and the entire thread is worth a read. What it boiled down to was the account arguing with a journalist (pro tip: avoid arguing with, let alone being a dick to, those in a position to write publicly about you!) that no, you didn't just need a username and birth date to reset the account password. Eventually, it got to the point where Betfair advised that providing this information to someone else would be a breach of their terms. Now, keeping in mind that the username is your email address and that many among us like cake and presents and other birthday celebratory patterns, it's reasonable to say that this was a ludicrous statement. Further, I propose that this is a perfect case where shaming is not only due, but necessary. So I wrote a blog post.

Shortly after that blog post, three things happened and the first was that it got press. The Register wrote about it. Venture Beat wrote about it. Many other discussions were held in the public forum with all concluding the same thing: this process sucked. Secondly, it got fixed. No longer was a mere email address and birthday sufficient to reset the account; you actually had to demonstrate that you controlled the email address! And finally, something else happened that convinced me of the value of shaming in this fashion:

A couple of months later, I delivered the opening keynote at OWASP's AppSec conference in Amsterdam. After the talk, a bunch of people came up to say g'day and many other nice things. And then, after the crowd died down, a bloke came up and handed me his card - "Betfair Security". Ah shit. But the hesitation quickly passed as he proceeded to thank me for the coverage. You see, they knew this process sucked - any reasonable person with half an idea about security did - but the internal security team alone telling management this was not cool wasn't enough to drive change. Negative media coverage, however, is something management actually listens to. Exactly the same scenario played out at a very similar time when I wrote about how you really don't want bank grade security with one of the financial institutions on that list rapidly fixing their shortcomings after that blog post. A little while later at another conference, the same discussion I'd had in Amsterdam played out: "we knew our SSL config was bad, we just couldn't get the leadership support to fix it until we were publicly shamed".

I wanted to set that context because it helps answer questions such as this one:

What public shaming does is appeal to a different set of priorities; if, for example, I were to privately email NatWest about their lack of HTTPS then I'd likely get back a response along the lines of "we take security seriously" and my feedback would go into a queue somewhere. As it was, the feedback I was providing was clearly falling on deaf ears:

And now we have another perfect example of precisely the sort of response that needs to be shamed so NatWest earned themselves a blog post. How this changed their priorities was to land the negative press on the desk of an executive somewhere who decided this wasn't a good look. As a result, their view on the security of this page is rather different than it was just 9 months ago:

[image from the original post]

Now I don't know how much of this change was due to my public shaming of their security posture, maybe they were going to get their act together afterward anyway. Who knows. However, what I do know for sure is that I got this DM from someone not long after that post got media attention (reproduced with their permission):

Hi Troy, I just want to say thanks for your blog post on the Natwest HTTPS issue you found that the BBC picked up on. I head up the SEO team at a Media agency for a different bank and was hitting my head against a wall trying to communicate this exact thing to them after they too had a non secure public site separate from their online banking. The quote the BBC must have asked from them prompted the change to happen overnight, something their WebDev team assured me would cost hundreds of thousands of pounds and at least a year to implement! I was hitting my head against the desk for 6 months before that so a virtual handshake of thanks from my behalf! Thanks!

Let me change gear a little and tackle a common complaint about shaming in this fashion and I'll begin with this tweet:

Notwithstanding my civic duty as an Aussie to take the piss out of the English, clearly this was a ridiculous statement for Santander to make. Third party password managers are precisely what we need to address the scourge of account takeover attacks driven by sloppy password management on behalf of individuals. Yet somehow, Santander had deliberately designed their system to block the ability to use them. Their customer service rep then echoed this position which subsequently led to the tweet above. That tweet, then led to this one:

Andy is concerned that shaming in this fashion targets the individual behind the social media account (JM) rather than the organisation itself. I saw similar sentiments expressed after T-Mobile in Austria defended storing passwords in plain text with this absolute clanger:

In each incident, the respective corporate Twitter accounts got a lot of pretty candid feedback. And they deserved it - here's why:

These accounts are, by design, the public face of the respective organisations. Santander literally has the word "help" in the account name and T-Mobile's account indicates that Käthe is a member of the service team. They are absolutely, positively the coal faces of the organisation and it's perfectly reasonable to expect that feedback about their respective businesses should go to them.

This is not to say that the feedback should be rude or abusive; it shouldn't and at least in the discussions I've been involved in, that's extremely rare to see. But to suggest that one shouldn't engage with the individuals controlling the corporate social media account in this fashion is ludicrous - that's exactly who you should be engaging with!

A huge factor in how these discussions play out is how the organisations involved deal with shaming of the likes mentioned above. Many years ago now I wrote about how customer care people should deal with technical queries and I broke it down into 5 simple points:

  1. Never get drawn into technical debates
  2. Never allow public debate to escalate
  3. Always take potentially volatile discussions off the public timeline
  4. Make technical people available (privately)
  5. Never be dismissive

Let me give you a perfect example of how to respond well to public shaming and we'll start with my own tweet:

Business as usual there, just another day on the internet. But watch how Medibank then deals with that tweet:

And in case you're wondering, yes, I did give them an e-pat on the back for that because they well and truly deserved it! The point is that shaming, when done right, leads to positive change without needing to be offensive or upsetting to the folks controlling the social accounts.

The final catalyst for finishing this blog post (I've been dropping examples into it since Xmas!) was a discussion just last week which, once again, highlighted everything said here. As per usual, it starts with a ridiculous statement on security posture:

Shaming ensues (I mentioned my Aussie civic duty, right?!):

Once again, the press picks it up and also once again, people get uppity about it:

And just to be clear, stating that "Non HTTPS pages are safe to use despite messages from some browsers" is not a very bright position to take whether you're on minimum wage or you're the CEO. Income doesn't factor when you make public statements as a company representative. Predictably, just as with all the previous examples, positive change followed:

[image from the original post]

That whole incident actually turned out to be much more serious than they originally thought and once again, the issue was brought to the forefront by shaming. I've seen this play out so many times before that frankly, I've little patience for those decrying shaming in this fashion because it might hurt the feelings of the very people charged with receiving feedback from the public. If a company is going to take a position on security either in the way they choose to build their services or by what their representatives state on the public record, they can damn well be held accountable for it:

Whether those rejecting shaming of the likes I've shared above agree with the practice or not, they can't argue with the outcome. I'm sure there'll be those that apply motherhood statements such as "the end doesn't justify the means", but that would imply that the means is detrimental in some way which it simply isn't. Keep it polite, use shaming constructively to leverage social pressure and we're all better off for it.

Answers to Your Questions on Our Apps in the Mac App Store

https://blog.trendmicro.com/answers-to-your-questions-on-our-mac-apps-store/

Reports that Trend Micro is “stealing user data” and sending them to an unidentified server in China are absolutely false.

Trend Micro has completed an initial investigation of a privacy concern related to some of its MacOS consumer products. The results confirm that Dr. Cleaner, Dr. Cleaner Pro, Dr. Antivirus, Dr. Unarchiver, Dr. Battery, and Duplicate Finder collected and uploaded a small snapshot of the browser history on a one-time basis, covering the 24 hours prior to installation. This was a one-time data collection, done for security purposes (to analyze whether a user had recently encountered adware or other threats, and thus to improve the product & service). The potential collection and use of browser history data was explicitly disclosed in the applicable EULAs and data collection disclosures accepted by users for each product at installation (see, for example, the Dr. Cleaner data collection disclosure here:  https://esupport.trendmicro.com/en-us/home/pages/technical-support/1119854.aspx). The browser history data was uploaded to a U.S.-based server hosted by AWS and managed/controlled by Trend Micro.

Trend Micro is taking customer concerns seriously and has decided to remove this browser history collection capability from the products at issue.

Update as of September 10

We apologize to our community for any concern they might have felt, and can reassure all that their data is safe and at no point was compromised.

We have taken action and have 3 updates to share with all of you.

First, we have completed the removal of browser collection features across the consumer products in question. Second, we have permanently dumped all legacy logs, which were stored on US-based AWS servers; this includes the one-time 24-hour log of browser history, held for three months and permitted by users upon install. Third, we believe we have identified the core issue, which is the result of the use of common code libraries: browser collection functionality was designed in common across a few of our applications and then deployed the same way for both security-oriented and non-security-oriented apps such as the ones in discussion. This has been corrected.

Update as of September 11

We can confirm this situation is contained to the consumer apps in question. None of the other Trend Micro products, including consumer, small business or enterprise, are known to have ever utilized the browser data collection module or behavior leveraged in these consumer apps.

We’ve always aimed for full transparency concerning our collection and use of customer data and this incident has highlighted an opportunity for further improvement in some areas. To that end, we are currently reviewing and re-verifying the user disclosure, consent processes and posted materials for all Trend Micro products.

All of our apps are currently unavailable on the App Store. Thank you for your patience as we address this.

Update as of September 12 

Please note that the ‘Open Any Files’ app leverages the same module in question. Henceforth, we will no longer publish or support this product.

We have updated our consumer apps in question to fully comply with Apple’s requirements and are in the process of resubmitting them to Apple. We are aware that our other apps have been suspended as well and we are working to resolve this as soon as possible, but thus far the basis for these suspensions is unclear. We are actively pursuing the chance to engage with Apple to understand their decision further and address any issues.

As we read through your questions, we realized that there is some confusion between Trend Micro consumer products and one from another vendor. Several of our apps have been grouped together with a completely unrelated vendor in media articles. To be clear – Adware Doctor is not a Trend Micro product.

Which of your apps were collecting 24 hours' worth of browser history prior to installation? The specific MacOS consumer apps are Dr. Cleaner, Dr. Cleaner Pro, Dr. Antivirus, Dr. Unarchiver, Dr. Battery, and Duplicate Finder.

What information did these apps collect and why? They collected and uploaded a small snapshot of the browser history on a one-time basis, covering the 24 hours prior to installation. This was a one-time data collection, done for security purposes (to analyze whether a user had recently encountered adware or other threats, and thus to improve the product & service).

What actions have you taken to date? We have removed the browser collection module from the consumer products listed above and disabled the backend API that enabled the collection for older versions. In addition, we have permanently deleted all legacy logs that contained the one-time 24-hour log of browser history, held for three months and permitted by users upon install. We have also updated the one app which did not include a clear pop-up window during installation, Dr. Unarchiver, with links to our EULA, privacy policy, and data collection notice.

Is Open Any Files a Trend Micro app? Yes, but we have decided to no longer publish or support this product.

What kind of information do these apps acquire? A complete and transparent overview of what data our apps were collecting is available in our Data Collection Notice: https://success.trendmicro.com/data-collection-disclosure

Do these apps obtain consent from users about data acquisition? Yes. During installation the user accepts a EULA with links to the detailed Data Collection Notice for the applicable product. Please note that the EULA pop-up was not active in the GUI for one of our apps, Dr. Unarchiver, but was available from the download page on the App Store. We have rectified that, and it will be reflected once this app is available on the App Store again.

Media reports claim that browser information was sent to a server in China. Is that true? No. Any reports saying that Trend Micro is “stealing user data” and sending them to an unidentified server in China are absolutely false. The browser history data was uploaded to a server hosted by AWS and managed/controlled by Trend Micro, physically located in the U.S.

Why were your apps removed from the Mac App Store? Apple has suspended our apps and we are working with Apple via their formal dispute process. We have updated the consumer apps in question to fully comply with Apple’s requirements and are in the process of resubmitting them to Apple. We are aware that our other apps have been suspended as well and we are working to resolve this as soon as possible, but thus far the basis for these suspensions has not been clearly articulated to us. We are actively pursuing the chance to engage with Apple to understand their decision further and address any issues.

 

The post Answers to Your Questions on Our Apps in the Mac App Store appeared first on .

From the year of ransomware to the year of cryptojacking

2017 was the year when the word ransomware stopped being a term exclusive to cybersecurity experts and IT departments. The enormous media attention that attacks such as WannaCry and Petya/GoldenEye received turned this type of cyberthreat into one of the key trends for businesses last year.

But the constant evolution of cybercriminality has found a new mother-lode: cryptomining. It is no coincidence that bitcoin was included on Fundéu BBVA’s shortlist for word of the year in 2017, highlighting the impact that virtual currencies are currently having. And if there’s one group that knows this more than anyone, it’s cybercriminals, who have been able to develop a strategy of attacking third party computers and using them without consent to mine cryptocurrencies for their own financial gain. This has given rise to the concept that has irrefutably defined cybersecurity in 2018: cryptojacking.

2018, the year of cryptojacking

Back in March, we at Panda Security warned of the rise of cryptojacking as a threat to businesses, given the large amount of IT resources found in companies. As we explained, using malware, cybercriminals are able to leverage part of a device’s processing power in order to covertly mine cryptocurrencies; the victim notices nothing more than the slowing down of the device — an occurrence that they will most likely put down to something other than a cyberattack.

The year kicked off with several notable cases in which such well-known IT programs and websites as Microsoft Word, GitHub, and YouTube were affected. But illegitimate cryptomining continues. We’ve recently seen new massive attacks: 200,000 MikroTik routers in Brazil were affected by one attack; the CMS Drupal by another; and in China, a criminal group that had infected more than a million computers with cryptojacking tools over two years was arrested.

In light of all of this, it is perhaps unsurprising that in the first half of 2018 alone there was a 4,000% increase in the number of cryptojacking attacks on public administration bodies. Conversely, the number of ransomware cases fell 2% in the same period, according to data from the CNI (Spanish National Intelligence Center).

Other European countries have also been witness to this astronomical growth. In the United Kingdom, 59% of companies have been affected at one time or another by this cyberthreat, and 80% of the attacks that have been detected happened in 2018. This trend is also on the up in the Netherlands. The Dutch National Coordinator for Security and Counterterrorism warned that cryptojacking has become an “attractive and notable” cybercriminal strategy, and highlighted that criminals seek to illegally mine cryptocurrencies “more and more often”.

What to do in light of such a pessimistic landscape

First of all, don’t panic. By following a series of handy tips, your company can protect itself against possible incidents related to the cyberthreat de rigueur. Among the most indispensable tips on the list are:

  • Carrying out periodical risk evaluations to identify possible vulnerabilities.
  • Regularly updating all of the company’s systems and devices, and considering uninstalling software that isn’t being used.
  • Protecting web browsers on endpoints with the installation of specific extensions that stop cryptomining by blocking malicious scripts.
  • Thoroughly investigating any spikes in IT problems related to unusual CPU performance. If multiple employees report that their computers are slowing down or overheating, it could be a case of cryptojacking (see the sketch after this list).
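As a starting point for that last tip, here is a minimal sketch using the third-party psutil package to flag processes sustaining high CPU usage; the 80% threshold is an illustrative assumption and would need tuning in practice.

```python
# Flag processes with sustained high CPU usage, a common first indicator
# of cryptojacking. Requires the third-party 'psutil' package.
import time
import psutil

procs = list(psutil.process_iter(attrs=["pid", "name"]))
for p in procs:
    try:
        p.cpu_percent(interval=None)   # prime the per-process CPU counter
    except psutil.Error:
        pass

time.sleep(1.0)                        # measurement window

for p in procs:
    try:
        usage = p.cpu_percent(interval=None)
    except psutil.Error:
        continue                       # process exited or access denied
    if usage > 80.0:                   # illustrative threshold
        print(f"Investigate PID {p.info['pid']} ({p.info['name']}): {usage:.0f}% CPU")
```

A single sample proves nothing; it's the sustained pattern across many endpoints that warrants investigation.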

These actions need to be accompanied by the implementation of an advanced cybersecurity solution that provides key features such as detailed visibility of the activity on every endpoint and control of all running processes. This is what Panda Adaptive Defense, Panda Security’s cybersecurity suite, provides: it is primed to protect all your company’s computers against any kind of cyberthreat, be it a classic or the latest trend.

The post From the year of ransomware to the year of cryptojacking appeared first on Panda Security Mediacenter.

BSides Idaho Falls Preview: The Industrialization of Red and Blue Teaming

When we think of industrialization and the industrial revolution, images of smoke stacks, purpose-built machinery, and automation come to mind. Some examples are the Jacquard Machine, as pictured below. This machine simplified the process of manufacturing textiles in the early 1800s, and some consider it an early example of computer punch cards and punch tape […]… Read More

The post BSides Idaho Falls Preview: The Industrialization of Red and Blue Teaming appeared first on The State of Security.

A Clean Start: Finding Vulnerabilities in your Docker Base Images

The ability to find and use a free public Docker base image makes it easy to bootstrap the creation of a new Microservice. However, “easy” doesn’t equate to “good.” Using a Docker base image is much like including an external library. It’s really important to know what baggage you are dragging into your project, particularly […]… Read More

The post A Clean Start: Finding Vulnerabilities in your Docker Base Images appeared first on The State of Security.

Teen arrested for DDoS attack on ProtonMail & making fake bomb threats

By Waqas

ProtonMail, a Swiss-based end-to-end email encryption service, has announced the name of one of the attackers involved in the DDoS attack against the company earlier this year. Due to the attack, ProtonMail’s email service stopped responding several times, for about a minute each time, despite adequate mitigation measures being in place. The identified hacker, a teenager […]

This is a post from HackRead.com Read the original post: Teen arrested for DDoS attack on ProtonMail & making fake bomb threats

Key suspect in JPMorgan hack is now in US custody

Closure might be coming for victims of the massive JPMorgan Chase hack in 2014. The country of Georgia has extradited the alleged (and until now mysterious) hacker at the core of the crime, Andrei Tyurin, to the US. The Russian citizen pleaded not guilty in a New York court to charges that included conspiracy, hacking, identity theft and wire fraud. He reportedly worked with mastermind Gery Shalon to steal personal data from JPMorgan and other banks for use in a pump-and-dump stock scheme that may have made hundreds of millions of dollars.

Source: Bloomberg

Sound, Fury, And Nothing One Year After Equifax

One year ago today, Equifax suffered what remains one of the largest and most impactful data breaches in U.S. history. Last September, it was revealed that the personal information of 145 million Americans, almost 700,000 UK citizens, and 19,000 Canadians was stolen by cybercriminals.

This information included names, addresses, birthdays, Social Security numbers, and—in some cases—driver’s licenses. All critical, personally identifiable information (PII) that can be resold in the underground and used to commit identity fraud.

This breach had a very real impact on the millions affected. On Equifax? Or the industry as a whole? Not so much…

The result is that your personal information remains “entrusted” with various agencies without your knowledge. Agencies that may or may not have your best interests at heart. A year after the Equifax breach, your data has never been at greater risk. Why?

The Equifax breach made international headlines for weeks. It’s a story that has corporate intrigue, political uproar, and controversy…yet nothing really has changed.

What Happened?

Cybercriminals gained access to Equifax’s systems through a known vulnerability in Apache Struts (a web application framework). This easily exploited vulnerability had been left unpatched and unmitigated by Equifax for weeks.

When Equifax discovered the breach, they waited weeks to notify affected individuals and the general public. That notification came in the form of an insecure site on a new domain name. This contributed to the criticism the company faced as they bumbled the response.

The saga took a number of twists and unexpected turns as executives were accused of insider trading, having sold shares valued at $1.8 million after the breach was discovered but before the public announcement. The CIO and CISO stepped down in the wake of the breach. As the company continued to face pushback and political and consumer frustration, the CEO eventually resigned, allowing the company to try to turn the page.

After all, Equifax had the tools, people, and process in place to prevent the breach but simply dropped the ball…with catastrophic results.

Customers?

One of the biggest challenges in light of this breach was the relationship that Equifax had with the affected individuals. Equifax maintained a significant amount of personally identifiable information on hundreds of millions of individuals in the US and around the world, yet very few of these individuals had a direct relationship with the company.

Equifax and a handful of other consumer credit reporting agencies make their money by selling customer profiles and credit ratings to other businesses, essentially acting as massive reputation clearing houses.

Given the role played by these agencies, individuals in the US have alarmingly few avenues of recourse when an error occurs or their information is breached in the care of such an agency. This was a key point raised in the uproar after the Equifax breach.

One year later, let’s check in on the progress made so far…

Lack of Personal Data Protections

Alarmingly, there has been no federal action and only one state has passed legislation regarding personal data protections since the Equifax breach.

In June, California passed the California Consumer Privacy Act of 2018 (AB 375). This landmark legislation takes a much-needed step towards personal data protections in the state of California. While not the driving factor for the legislation, the breach contributed to awareness of the need for such protections.

This protects Californians in a similar manner to how GDPR protects Europeans. If either piece of legislation had been in effect during the Equifax breach, the company would have been looking at major fines.

What Now?

Despite the initial uproar, very little has happened in wake of the Equifax breach. The creation of strict regulation in the EU had been underway for years. The initiative in California had already been underway when this breach happened.

Despite the outrage, very little came of the breach outside of Equifax itself. The company brought in new leadership and has tried to shift its security culture, both solid steps. The consent letter Equifax signed will help ensure that it continues to build a strong security culture, but it doesn’t affect any of the other agencies.

Is this the future? As more and more companies move to monetize data and customer behaviours, a lack of political will and a lack of consumer pressure mean that YOUR data remains at risk.

Regulation is always challenging but it’s clear that the market isn’t providing a solution as few of the affected individuals have a relationship with the companies holding the data. Your personal information is just that…yours and very personal.

Individuals need the ability to hold organizations that put that information at risk accountable.

The post Sound, Fury, And Nothing One Year After Equifax appeared first on .

Red player one: learning the right security lessons from a red team exercise

A red team exercise can be a valuable way of testing how effective your security controls are. Having your internal security team, or an external consultant, simulate an attacker trying to breach your defences can reveal plenty. Their success or otherwise can show where you need to improve from a security perspective, or what you’re doing right.

I asked Neha Thethi, senior cybersecurity analyst at BH Consulting, to explain how a red team exercise works, how it can be useful, and how to interpret the results.

Are you ready to play?

Firstly, she said that red team exercises are best suited to companies with advanced security already in place. Part of the aim is to check how the organisation reacts when it discovers a breach or attack in progress. For anyone still working to understand their risk exposure, a more general vulnerability assessment is more useful than a red team.

Prepare for attack

Whereas pen testing usually involves testing a specific server for technical flaws, a red team exercise has a much wider brief. That could involve finding a weakness in the company’s online application, or dropping USB keys near its offices to see if someone will plug one into their machine and unwittingly infect the network with malware. In some cases, it’s about using social engineering to gain physical entry to a target’s building or to extract information over a phone call.

“The very nature of red team is to check how you can infiltrate an organisation through whatever means possible, so it could be anything and everything,” Neha explains.

Easy intelligence gathering

A red team generally starts by carrying out reconnaissance of its target, just like any good attacker would. This stage involves gathering open source intelligence, also called OSINT. In practice, Neha describes this as “Google the hell out of them and gather email addresses of key employees from places like LinkedIn.” This can reap big returns for attackers, as it’s a great chance to size up a target’s weak spots.

With a red team exercise, nothing is out of scope unless the client specifically asks for it. Neha says it’s worth agreeing in advance what systems or processes, if any, are included in the scope of the project.

Set the time

Allocating the right time to a red team exercise is a matter of balancing budgets and priorities. Some red team exercises can go on for months. When the brief is simply to “try and break in”, the “attacker” has the element of surprise on their side. Other times, the goal is to see what progress a supposed attacker could make within a defined period. This might be as little as a week. So, the attacker might find vulnerabilities in a victim’s application but may not get to exploit them in time. Even such cases yield valuable intelligence for the organisation: they identify potential threats it might not have known about, and allow it to assess the risk and decide whether it needs to invest in better security.

Learning the lessons

So what benefit does an organisation get from asking a red team to probe and poke at its security? “The most value the organisation gets is getting the assurance about the areas that they’re good in. It’s very important not just to know what they were weak at, but to report on the things they do well,” Neha says.

The outcome of a red team exercise is a report highlighting any negative aspects of security that the “attacker” uncovers. The document should make recommendations about ways to make those weak points more secure. Equally, it’s just as important to highlight areas where the organisation is doing things right. Many real-world attacks start with phishing. So if a red team’s attempt was unsuccessful because the organisation had trained its employees to spot suspicious emails, that’s a valid finding to include. “One of the most important things in a report is to assure the customer that the measures they have are effective,” Neha says.

With a red team report, context is everything. Some security people tend to focus only on weaknesses. While those findings have their place, it’s important to take the red team’s findings in context. There will usually be room for improvement, but it’s always encouraging to know what you’re doing right.

The post Red player one: learning the right security lessons from a red team exercise appeared first on BH Consulting.

British Airways website hack exposed customer financial data

While we've gotten used to regular data breaches, it's been a while since news of one hit the airline industry. But customers who booked flights on British Airways' website or app between 22:58 BST on August 21st and 21:45 BST on September 5th had their personal and financial data compromised due to a cybersecurity breach. The company's post announcing the event unwaveringly stated that anyone who made a booking in that time frame had their information stolen.

Via: BNO News

US charges North Korean man linked to Sony hack and WannaCry

The US Treasury Department announced today that it has sanctioned one individual and one group connected to malicious cyber activities perpetrated by North Korea's government. Park Jin Hyok, a computer programmer, was sanctioned along with Korea Expo Joint Venture, an agency he allegedly worked for. The Treasury Department claims Park is part of a conspiracy responsible for the 2014 Sony Pictures hack, the 2016 Bangladesh Bank heist and last year's WannaCry ransomware attack. The Department of Justice also confirmed to reporters that it has charged Park with extortion, wire fraud and hacking crimes, according to Motherboard.

Source: US Treasury Department

DOJ will reportedly charge North Korean operative for Sony hack

The Justice Department will reportedly announce charges today against at least one North Korean operative connected to the 2014 cyberattack on Sony Pictures, the Washington Post reports. Officials told the publication that computer hacking charges would be brought against Park Jin Hyok, who is said to have worked with North Korea's military intelligence agency the Reconnaissance General Bureau. It's the first time these types of charges have been brought against an operative of North Korea.

Source: Washington Post

Five school cybersecurity questions you should ask at your next parent-teacher conference

The summer is officially over, and children are back at school! Whether you are at work or home alone, you are probably feeling a mixture of relief and sadness. Even though you always keep them in mind, your precious bundles of joy are now away from you for most of the day.

Children spend more time in front of small screens than ever. We are sure you’ve given them plenty of advice on how to stay safe in both the real world and the online one. You most likely already have full control over their digital life and have installed parental control software on their connected devices. However, sometimes children are not the only ones who need cybersecurity education; school employees may need some guidance too. Even if you are one of the lucky parents who send their children to schools that ban the use of smartphones on school grounds, there are a few questions concerning the safety of your children that you should raise the next time you speak with the school administration.

What information do schools keep on your children and who has access to it?

The school system stores a lot of information about your children. It often includes standard directory information such as names, addresses, and phone numbers, as well as more sensitive data such as Social Security numbers and dates of birth. This information is generally protected by the federal Family Educational Rights and Privacy Act (FERPA). However, educational institutions sometimes grant access to it to school employees who do not need it but could take advantage of it. It’s always worth asking the question!

What happens if the school becomes a target of a ransomware attack?

Educational institutions are targets of hacker attacks all the time, and sometimes the attackers demand a ransom. Asking what the procedure would be if your children’s personal details were stolen is a must, and it is also worth asking what the school does to prevent such attacks in the first place. Educational entities are often underfunded and do not have the resources to take good care of their students’ cybersecurity needs, so knowing more about these procedures should be on your checklist every time you choose a school.

How often do children and school employees change passwords?

Following sound password maintenance procedures is a must for educational entities that store such sensitive information, and those procedures must be enforced for both students and employees. The last thing you want is for hackers to steal the identity of innocent children and ruin their lives before they even have a chance to defend themselves. SSNs generally never change, so information taken now might be used 5-10 years from now, when the children become adults. Best practice says that passwords should never be reused and must be changed every three months.

What data is kept after students and employees leave the school?

Educational entities are supposed to deactivate the accounts of former students and employees. However, this does not always happen: some accounts are overlooked and left active for years. IT departments either do this by mistake or as a favor to former students and employees so they can keep taking advantage of educational benefits – as you may know, some services, including Apple Music, offer discounts through educational email verification. Such unmonitored accounts can be used by hackers to get into the internal systems of educational institutions.

What steps does your school administration take to prevent cyberbullying?

The fight against cyberbullying, access to inappropriate websites, online predators and dangerous games such as the Blue Whale Challenge starts at school – proper cybersecurity education helps both students and employees. Students learn not only how to protect themselves but also how to report inappropriate behavior, and teachers become better at spotting disturbing actions. These are questions that need to be discussed on a regular basis, as technology trends change all the time and staying up to date is not an easy task, especially in underfunded and underperforming schools.

For some of these questions there is no right or wrong answer. However, raising the topics is vital: it will encourage educational institutions to stay on top of their game, and it will give you the peace of mind you need.


The post Five school cybersecurity questions you should ask at your next parent-teacher conference appeared first on Panda Security Mediacenter.

Stop Impersonations of Your CEO by Checking the Writing Style

If one of your employees receives an email that looks like it’s from the CEO asking them to send sensitive data or make a wire transfer, could that employee spot it as a fake based on how it is written? He or she may be so concerned with pleasing the CEO that they respond urgently without a second thought. What if artificial intelligence could spot the fraud by recognizing that the writing style of the suspect email doesn’t match your CEO’s? It can.

Writing Style DNA technology is now available to prevent Business Email Compromise (BEC) attacks, which according to the FBI have cost organizations $12.5 billion, with some companies losing as much as $56 million.


Unique Writing Style

Some of us write long sentences with a varied vocabulary, while others are more direct, with short words and small paragraphs. If we look at the emails of three Enron executives (based on a dataset of 500,000 emails released publicly during the Federal Energy Regulatory Commission’s investigation), we can see the differences in how they write. Looking at the emails of Jeffrey Skilling, Sally Beck, and David Delainey, we can compare writing style elements such as sentence length, word length, repeated words, paragraph length, pronoun usage, and adjective usage.

Graph of writing style elements of 3 Enron executives

We can see that the three executives’ styles vary across the 16 elements in the chart above. As humans, we can perhaps come up with 50 or maybe 100 different writing style elements to measure. A computer AI, though, can see many more differences between users’ writing. The AI powering Writing Style DNA can examine an email for 7,000 writing style elements in less than a quarter of a second.

If we know what an executive’s writing style looks like, then the AI can compare the expected style to the writing in an email suspected of impersonating that executive. 
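
As a rough illustration of the idea – emphatically not Trend Micro’s actual Writing Style DNA implementation – the sketch below computes a handful of style features from an email body and compares them to a baseline profile using cosine similarity. The feature set, baseline text, and threshold are all assumptions for the example.

```python
# Toy stylometry sketch: extract a few writing-style features and compare
# a suspect email against a baseline profile. Purely illustrative.
import math
import re

def style_features(text: str) -> list[float]:
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return [0.0, 0.0, 0.0, 0.0]
    return [
        len(words) / len(sentences),              # average sentence length
        sum(len(w) for w in words) / len(words),  # average word length
        len({w.lower() for w in words}) / len(words),  # vocabulary richness
        sum(w.lower() in {"i", "we", "you"} for w in words) / len(words),
    ]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# In practice the profile would be trained on 300-500 sent emails.
baseline = style_features(
    "Thanks for the update. Let's review the quarterly numbers on Friday "
    "and walk through the regional forecasts before the board meeting."
)
suspect = style_features("URGENT. Wire $45,000 now. Confirm when done.")
if cosine(baseline, suspect) < 0.95:  # illustrative threshold
    print("Style mismatch - flag for sender verification")
```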

Training an AI model of a User’s Writing Style

Previous Business Email Compromise attacks show that the CEO and Director are the roles most likely to be impersonated, so administrators can define these individuals as “high-profile users” within the admin console for Trend Micro Cloud App Security or ScanMail for Exchange.

 

Titles of impersonated senders in 1H 2018 Business Email Compromise attempts 

To create a well-defined model of a high-profile user’s writing style, the AI examines 300-500 previously sent emails. An executive’s email is highly sensitive, so to protect privacy the AI extracts only metadata describing the writing style, not the actual text.

Your executive’s writing style isn’t static; it evolves over time, just as this infographic shows JK Rowling’s style changing over the course of the Harry Potter books. As such, the AI model for a high-profile user can be updated regularly at a selected interval.

Process Flow

When an external email arrives from a name similar to a high-profile user’s, the writing style of the email content is examined after other anti-fraud checks. The volume of BEC attacks is small to start with (compared to other types of phishing), and other AI-based technologies catch most attacks, which leaves only a small number of the stealthiest attacks for writing style examination. For these attacks, if the style doesn’t match, the recipient is warned not to act on the email unless he or she verifies the sender’s identity using a phone number or email address from the company directory. Optionally, the impersonated executive can also be warned of the fraud attempt made in their name.
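
A hedged sketch of that ordering is below. The helper functions are stand-ins for the product’s real (non-public) checks, so treat this purely as an outline of the flow described above, not as the actual implementation.

```python
# Outline of the described pipeline: only external mail whose display name
# matches a configured high-profile user, and which slips past the earlier
# anti-fraud layers, reaches the writing-style comparison.
HIGH_PROFILE = ["Eva Chen"]  # names an admin might configure (example)

def earlier_layers_flag(msg: dict) -> bool:
    """Stand-in for the other anti-fraud checks that run first."""
    return "known-bad.example" in msg.get("body", "")

def style_matches(body: str) -> bool:
    """Stand-in for the trained writing-style model."""
    return False  # assume this stealthy fake fails the comparison

def handle_inbound(msg: dict) -> str:
    sender = msg.get("display_name", "").lower()
    if not any(name.lower() in sender for name in HIGH_PROFILE):
        return "deliver"  # not an impersonation candidate
    if earlier_layers_flag(msg):
        return "quarantine"  # caught by the earlier layers
    if not style_matches(msg["body"]):
        return "warn recipient: verify sender via the company directory"
    return "deliver"

print(handle_inbound({"display_name": "Eva Chen",
                      "body": "Please wire the funds today"}))
```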

Internal and Beta Results

Internally, Trend Micro has been testing this technology since January 2018. Writing style models are in place for our executive team and some other high-profile users. During this time, Writing Style DNA detected 15 additional BEC attacks attempting to impersonate our CEO, Eva Chen – an average of one additional attack detected every other week. To date, there have been no false positives.

Sample BEC attempt detected with Writing Style DNA

More than 60 beta customers have also tried the technology over the past few months. Many initially found that their executives were occasionally using personal email accounts to email others at the organization; these personal accounts can be whitelisted by the admin. Writing Style DNA detected 15 additional BEC attacks across 7 organizations.

Available now and included with your license

Writing Style DNA is now available with Cloud App Security for Office 365 and ScanMail for Microsoft Exchange at no additional charge.

The Cloud App Security service has already been updated to include this functionality, and ScanMail customers can upgrade to SMEX 12.5 SP1 to start using it. ScanMail customers can learn more about upgrading to v12.5 SP1 at a webinar on September 6.

The post Stop Impersonations of Your CEO by Checking the Writing Style appeared first on .

Why Security Configuration Management (SCM) Matters

In The Godfather Part II, Michael Corleone says, “There are many things my father taught me here in this room. He taught me: keep your friends close, but your enemies closer.” This lesson Vito Corleone taught his son Michael is just as applicable to IT security configuration management (SCM). Faster breach detection: today’s cyber threat […]… Read More

The post Why Security Configuration Management (SCM) Matters appeared first on The State of Security.

The Risk of IoT Security Complacency

Cyber security professionals are increasingly relying on their key security solutions to bridge staff and knowledge gaps.

Trend Micro recently surveyed 1,150 IT executives globally and found a gap between the perceived risk from IoT and the planned mitigation for that risk. Most senior executives recognize that IoT can introduce security risk to the organization, but few will invest resources to remediate that risk. Click here for more details. Senior leadership should perform an IoT gap analysis comparing their IoT risk with their remediation plans.

Your gap analysis should look at two things. First, discover the perceived risk of any IT-connected IoT (and Industrial Control Systems generally) as held by your senior leadership, including their understanding of the people and organizational functions responsible for addressing that risk. Compare that with the view held by the supervisors and technicians managing those devices, including the actual individuals and teams handling the mitigation efforts. Second, assess the investment in mitigation supported by senior leadership against the steps the supervisors and technicians have actually taken to deploy and use those supported measures (plus anything else they might have innovated along the way).

You are looking for consistency of intent and effectiveness of investment.

Figure 1: Cybersecurity Risk Matrix

When evaluating the likelihood of an event, remember that cybercriminals use automated tools to scan all networks for vulnerabilities. Any Internet-connected, unpatched IoT device has a “High” likelihood of being attacked.

As you stage your remediation activities, start with the items in the top right three boxes, labeled in red. Then address the items on the diagonal. Once you have mitigated the issues in these zones, you can institute a program of continually monitoring activity that might change the risk profile and, as a result, broaden your attack surface. As you roll new technology out, build this assessment into your design or product selection, release, and operations procedures. Verify that business partners who also use those technologies are in harmony with your security program – supply chain risks abound in the Internet of Things. Finally, consider the items that fall into the three lower left boxes shown in green.
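
As a sketch of that triage order (with invented findings and a simple 1-3 likelihood/impact scoring), the snippet below sorts issues into the red, diagonal, and green zones described above and works the highest-scoring items first.

```python
# Illustrative risk triage for a 3x3 matrix: likelihood and impact each
# scored 1-3. Findings and scores are made up for the example.
findings = [
    ("Unpatched internet-facing IoT camera", 3, 3),
    ("Default passwords on plant sensors", 3, 2),
    ("Legacy HMI on an isolated segment", 1, 2),
    ("Unused test gateway, no data access", 1, 1),
]

def zone(likelihood: int, impact: int) -> str:
    score = likelihood + impact
    if score >= 5:
        return "red"       # top-right three boxes: remediate first
    if score == 4:
        return "diagonal"  # address next
    return "green"         # lower-left three boxes: monitor

for name, likelihood, impact in sorted(findings, key=lambda f: -(f[1] + f[2])):
    print(f"{zone(likelihood, impact):8s} {name}")
```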

Use your internal audit team to help with your IoT gap analysis. They know how to assess program risk, and they know how to talk to technologists and managers. If your organization perceives high risk but does not take steps to mitigate it, you may be violating your business partners’ expectations – not to mention their contractual terms, and possibly regulatory or legal constraints.

As you complete your risk assessment, remain pragmatic. It is far better to be generally right than precisely wrong. With a bit of effort, you can justify a cost-effective, comprehensive information security program that will include your ICS, IoT, and existing IT infrastructure.

Let me know what you think! Comment below, or reach me on Twitter: @WilliamMalikTM . 

The post The Risk of IoT Security Complacency appeared first on .

An EHR Systems Check-Up: 3 Use Cases for Updating Cyber Hygiene

Have you ever wondered how much your patient health record could garner on the black market? Whereas a cybercriminal needs to shell out a mere dollar for your Social Security number, your electronic health record (EHR) is likely to sell for closer to $50. This is according to research firm […]… Read More

The post An EHR Systems Check-Up: 3 Use Cases for Updating Cyber Hygiene appeared first on The State of Security.

SN 679: SonarSnoop

This week we cover the expected exploitation of the most recent Apache Struts vulnerability, a temporary interim patch for the Windows 0-day privilege elevation, an information disclosure vulnerability in all Android devices, Instagram's moves to tighten things up, another OpenSSH information disclosure problem, an unexpected outcome of the GDPR legislation and its sky-high fines, the return of the Misfortune Cookie, the many thousands of Magento commerce sites being exploited, a fundamental design flaw in the TPM v2.0 spec, trouble with Mitre's CVE service, Mozilla's welcome plans to further control tracking, a gratuitous round of Win10 patches from Microsoft.... and then a working sonar system which tracks smartphone finger movements!

We invite you to read our show notes!

Hosts: Steve Gibson and Jason Howell

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6.


Social-Engineer Newsletter Vol 08 – Issue 108

 

Vol 08 Issue 108
September 2018

In This Issue

  • Information Security, How Well is it Being Used to Protect Our Children at School?
  • Social-Engineer News
  • Upcoming classes

As a member of the newsletter you have the option to OPT-IN for special offers. You can click here to do that.


Check out the schedule of upcoming training on Social-Engineer.com

3-4 October, 2018 Advanced Open Source Intelligence for Social Engineers – Louisville, KY (SOLD OUT)

If you want to ensure your spot on the list, register now – classes are filling up fast and early!


The SEVillage at Def Con 26 would not have been possible without its amazing sponsors!

Thank you to our Sponsor for SEVillage at DerbyCon 8.0!


Do you like FREE Stuff?

How about the first chapter of ALL OF Chris Hadnagy’s Best Selling Books

If you do, you can register to get the first chapter completely free – just go over to http://www.social-engineer.com to download it now!


To contribute your ideas or writing send an email to contribute@social-engineer.org


If you want to listen to our past podcasts hit up our Podcasts Page and download the latest episodes.


Our good friends at CSI Tech just put their RAM ANALYSIS COURSE ONLINE – FINALLY.

The course is designed for Hi-Tech Crime Units and other digital investigators who want to leverage RAM to acquire evidence or intelligence which may be difficult or even impossible to acquire from disk. The course does not focus on the complex structures and technology behind how RAM works, but rather on how an investigator can extract what they need for an investigation quickly and simply.

Interested in this course? Enter the code SEORG and get an amazing 15% off!
http://www.csitech.co.uk/training/online-ram-analysis-for-investigators/

You can also pre-order CSI Tech CEO Nick Furneaux’s new book, Investigating Cryptocurrencies: Understanding, Extracting, and Analyzing Blockchain Evidence, now!




A Special Thanks to:

The EFF for supporting freedom of speech

Keep Up With Us

Friend on Facebook
Follow on Twitter

Information Security, How Well is it Being Used to Protect Our Children at School?


August and September are ordinary months to some, but to others they are a time of mixed emotions. It’s the start of another school year. Some are sad to see their children off, while others celebrate that day. The start of the school year brings with it a lot of paperwork and sharing of sensitive information. How well is information security being used to protect our children’s information, and even the school staff’s, personally identifiable information (PII)? How well is it being used to protect against social engineering attacks?

Think about the information that schools keep. When you registered your child, you may have had to hand over copies of their birth certificate, their Social Security number, your phone number, and other personal information. You may have had to give your own Social Security number, especially if you filled out an application for free and reduced-price meals or registered to volunteer at the school. If your child is in a college or university, even more information has to be provided, such as financial records, medical records, and high school transcripts. What is being done to keep that information secure?

When I read the following headlines, they make me a little concerned – how about you?

These are only a few of the many stories out there. According to the Breach Level Index by Gemalto, the education sector had 33.4 million records breached in 2017, across a total of 199 reported breaches – a 20% increase in reported incidents over 2016. Seeing it visualized on the K-12 Cyber Incident Map by the K-12 Cybersecurity Resource Center gives a sense of just how widespread the incidents are.

Who is breaching school networks, and why?

Who is trying to breach a school’s network? It’s not just students doing it to change grades or for fun; it’s also elite attackers and common cybercriminals. Thanks to the easy availability of hacking tools and the sharing of malicious attack techniques on the dark web, they are able to install ransomware, encrypt drives, and demand payment to decrypt them. They are also able to exfiltrate PII and passwords to gain further access to networks and to steal and create identities. Identity thieves will use a child’s information to create a false identity with which they can take out credit cards and loans, ruining your child’s credit. When this happens, it can make it difficult to get a license, go to college, or get any loans.

How are they doing it?

Cybercriminals are opportunists who will take advantage of any vulnerability, especially in organizations that are less secure. Unfortunately, the security posture of educational institutions is usually poor, putting them at high risk. They battle staffing and budgetary constraints, and cybersecurity has been a low priority, often viewed as an inconvenience.

Another point of weakness is the ease of access to the school’s network. Schools usually have free Wi-Fi, large numbers of desktop and mobile devices, and weak passwords, all of which present potential points of entry into the network. In addition, students browse the web from insecure networks and often pick up malware, which can then be inadvertently shared with others via email or uploads of coursework to the otherwise secure school network.

So, what do cybercriminals do? They use a variety of web- and email-based attacks at their disposal. One web-based attack is malvertising: criminals place malicious ads on sites where students commonly browse – often completely legitimate sites, such as Thesaurus.com. No click is required; just viewing the ad can initiate a malware download.

An example of an email-based (phishing) attack targeting education occurred at Northeastern University, where some Blackboard Learning users were targeted by an email that tried to influence readers into clicking a link disguised to look legitimate, compelling action by imposing a time constraint.

With web- and email-based attacks, cybercriminals can deliver ransomware and steal student records – all at great cost to the school system and to those whose information is compromised.

What can be done?

When it comes to protecting our children, we are willing to do anything. So what can we do to protect their information?

Here are some things that parents can do:

1. Make sure that the personal computer that is used to log into the school’s network is up-to-date;

2. Make sure that computer has more than just an antivirus installed, add malware protection as well;

3. Be proactive and educate yourself and your children on security awareness;

  • Read the Social Engineer Framework;
  • Have your child create usernames that don’t contain personal information, such as birth year;
  • Look at using a private VPN when on an insecure network, such as at Starbucks. Trustworthy VPNs will usually have a fee for using them;
  • Teach children the importance of not giving out information;
  • Use a secure password manager and don’t share passwords;
  • Make sure teens don’t take a picture of their license and share it on social media; and
  • Don’t throw important documents in the trash, shred them.

4. Be watchful of your student’s browsing activity; and

5. Look into an identity theft protection service to guard your child against identity theft.

Remember that just because you are asked to give out information doesn’t mean you have to. Ask: “Why is it necessary for them to have that information?”

Schools need to follow industry best practices in information security, and we, as parents, need to demand that they do. Schools should also address the human element in security:

  • Staff, teachers, students, and parents need to be educated and used as a line of defense; and
  • Institute security awareness training which includes: Performing simulated phishing exercises; Recruiting on-campus security advocates; and Holding onsite security education activities, lectures, and in-class training.

Following these suggestions will help to protect our children’s information at school.

Need Inspiration?

If you want some inspiration, look at what some schools are doing:

  • One example: a July 2017 article in The Educator reported that in San Diego, CA, “the local ESET office runs an annual cyber boot camp for about 50 middle and high school students.”
  • Another example, from a June 2017 article in The Educator, discusses how Macquarie University in Australia uses BlackBerry AtHoc as part of the university’s Emergency Management Plan; the system helps the school manage and mitigate social engineering incidents, for example by sending a message to staff and students recommending that they not open a certain email or click on a certain link.

To some, these suggestions may be easier said than done, but if they aren’t followed, the school nearest you may be the next cybersecurity incident we read about. Information security must be implemented to protect the sensitive information (PII) housed at schools – especially our children’s.

Stay safe and secure.

Written By: Mike Hadnagy

Sources:

https://www.theeducatoronline.com/au/news/is-your-school-protected-against-cyber-threats/237855

https://www.theeducatoronline.com/au/technology/infrastructure-and-equipment/how-malware-could-be-threatening-your-school/246146

https://edtechmagazine.com/k12/article/2016/04/how-ever-worsening-malware-attacks-threaten-student-data

https://blogs.cisco.com/education/the-surprisingly-high-cost-of-malware-in-schools-and-how-to-stop-it

https://blog.barkly.com/school-district-malware-cyber-attacks

https://in.pcmag.com/asus-zenpad-s-80-z580ca/124559/news/facebook-serves-up-internet-101-lessons-for-kids

https://www.stuff.co.nz/business/105950814/schools-promised-better-protection-from-ransomware-as-taranaki-school-blackmailed

https://www.eset.com/int/about/why-eset/

As part of the newsletter group, you will be the first to receive special offers to services and products by Social-Engineer.Com.


 

 

The post Social-Engineer Newsletter Vol 08 – Issue 108 appeared first on Security Through Education.

Securing the Convergence of IT with OT

The Industrial Internet of Things (IIoT) is the leading edge of the convergence of Operational Technology (OT) with IT. This convergence begins with network connectivity but requires enhancements in operational procedures, technology, and training as well.

Beginning with the network, IT and OT use different protocols. Within the OT world, vendors have created many proprietary protocols over the past 50 years: MODBUS dates from 1979, and ABB alone has over 20 protocols. IIoT vendors offer gateways to simplify and transform information before it moves to IT’s cloud for aggregation and processing. The volume of data can be huge, so IIoT gateways use compression, aggregation, and exception reporting to minimize network traffic. Gateways are Edge processors.

Operational procedures differ between IT and OT environments. OT networks have two guiding principles: safety and service reliability. The IT information security principles, by contrast, are data availability, data integrity, and data confidentiality. These principles are orthogonal: they do not overlap. From an IT perspective, an industrial process is not “information”, so it falls out of scope for information security.

IT and OT processes could converge as they each evolve. DevOps breaks down the barriers between development and operations for more rapid deployment of new function without compromising controls governing software quality. Figure 1 shows a converged DevOps process:

 

Figure 1: Converged DevOps Process

In the OT realm, enhancements to Process Hazard Analysis are driving the evolution of Cyber Process Hazard Analysis, as shown in Figure 2.

Figure 2: Cyber Process Hazard Analysis (Cyber PHA)

The OT evolution shows two processes: on the left in blue, the ongoing asset security analysis, which influences the OT Program and Governance Model in step 5 on the right. As new threats come to light, engineers update the model which flows into a new, more secure, steady state for the environment.

OT technology is evolving as core technologies offer greater processing power, storage capacity, battery life, and network connectivity. Early OT protocols had no authentication or encryption and could not accept over-the-air software and firmware updates securely. Newer processor chips can support these requirements, but IIoT vendors must build the capabilities, requiring larger code bases for development and some mechanism for issuing patches during operations. IIoT vendors do not have experience running bug bounty programs; they will need some way to get feedback from their customers and researchers so they can fix problems before those problems get out of hand.

Training means more than ad hoc learning as the opportunity presents itself. Information security skills are scarce and growing more so. Organizations need to provide additional skills to their existing staff, and may need to rely on outsourced support to bridge the gap while those new skills come online. But simply handing off responsibility to a third party will not eliminate risk: the organization itself will have to enhance its operational procedures to handle patch/fix requirements in time.

At Trend Micro, we understand this complexity, so we address it from several angles; securing the connected world is one of our highest priorities. So far this year, we have launched a series of programs and partnerships to help IIoT manufacturers and their marketplaces. The Zero-Day Initiative (ZDI) includes Industrial Control Systems (ICS) defect reports, and ZDI processed 202 SCADA HMI defects in the first half of 2018. Deep Security already has over 500 filters/virtual patches for OT protocols traveling over IP. Trend Micro offers guidance on deploying information security tools in the development cycle so the CI/CD process does not experience a disruption as security contexts change with production deployment. The IoT SDK helps IoT device manufacturers build core information security functions into devices during development, as with Panasonic’s In-Vehicle Infotainment (IVI) systems. By offering IoT vendors access to ZDI, Trend Micro extends its expertise in managing bug bounty programs to new entrants from outside the conventional IT realm. Partnerships with IIoT vendors such as Moxa extend 30 years of Trend’s information security expertise to a broad range of industrial control platforms. Trend Micro’s offering for telecommunications brings work-hardened network and server security to carriers for secure, reliable communications. Contact Trend Micro for more information about the threat landscape and available solutions.

For more information, please click here.

What do you think? Let me know by commenting below, or reach me @WilliamMalikTM .

The post Securing the Convergence of IT with OT appeared first on .

BEC is Big Business for Hackers: What makes these attacks so hard to prevent?

For years, one of the most lucrative ways for hackers to generate profits was the ransomware attack: using strong encryption to lock victims out of their files and data, attackers then sell the decryption key in exchange for a hard-to-trace Bitcoin ransom payment.

Now, however, another highly profitable attack style is emerging, particularly within the enterprise sector.

Business Email Compromise, or BEC, is creating considerable opportunities for cybercriminals to make money off of their malicious activity, and the sophistication and urgency of these infiltrations make them particularly difficult to guard against.

The rise of BEC

Although organizations are now becoming increasingly aware of the BEC attack approach, this strategy has actually been generating income for hackers for years now. Trend Micro researchers reported that, in 2016, attackers generated an average of $140,000 in losses by launching BEC attacks on businesses across the globe.

In the past, BEC was known as a “man-in-the-email” scam, in which hackers leverage legitimate-looking emails to support bogus wire transfers from enterprise victims. As Trend Micro researchers pointed out, these attacks can come in an array of different styles, including fraudulent invoices, attacks on the company CEO, account compromise or impersonation, and even traditional data theft.

Judging by the level of profit hackers have been able to generate, supported by the successful attacks they’ve been able to pull off, chances are good that BEC will only continue its rise in the near future.

How big of a business is BEC?

Whereas hackers caused an average of $140,000 in business losses two years ago, cybercriminals who leverage BEC schemes have been able to increase their potential for profit since then.

In July 2018, the FBI’s Internet Crime Complaint Center reported a 136 percent rise in losses related to BEC attacks, specifically between December 2016 and May 2018. Overall, this means hackers have raked in a total of $12.5 billion in company BEC losses, spanning both international and domestic attacks. The sheer amount of loss – and profit on the side of hackers – is $3 billion higher than the prediction Trend Micro researchers made in our Paradigm Shifts: Security Predictions for 2018 report.

Fueling BEC: What makes these attacks difficult to guard against?

An increase in successful attacks translates to a rise in profits for hackers and a larger number of affected business victims. Given this threat landscape, it’s imperative that enterprise decision-makers and IT stakeholders not only understand that these attacks are taking place, but also boost their awareness of the challenges in protecting against them. In this way, businesses can take proactive action to better protect their email systems, critical data, finances and other assets.

Let’s examine a few of the factors that contribute to the difficulties in protecting against BEC attacks:

Sophisticated use of social engineering

With BEC, hackers don’t just craft a catch-all email with common language and hope it dupes their target. Instead, they take their time to carry out sophisticated social engineering. In this way, they are able to use an attack style that boosts the chances of the target opening and responding to the message.

Specially-crafted email

Thanks to the robust social engineering involved, cybercriminals can create incredibly legitimate-looking emails that include targets’ names, and can even appear to be from others within the organization. For example, an accountant may receive a fraudulent email request for a wire transfer from the company CEO, which includes a spoofed version of the CEO’s email address and even the CEO’s own email signature. Accordingly, he or she will be more likely to send the funds, because the email appears very real.
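
One countermeasure that can be layered on top of user vigilance is flagging sender addresses that are a near-miss of a genuine executive address – a lookalike or “cousin” domain. The sketch below uses plain Levenshtein edit distance; the addresses are invented, and real secure email gateways combine this signal with many others.

```python
# Flag sender addresses within a small edit distance of a real executive's
# address. Addresses are made up; the threshold is illustrative.
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

LEGITIMATE = "ceo@example.com"
for sender in ["ceo@example.com", "ceo@examp1e.com", "ceo@exarnple.com"]:
    distance = edit_distance(sender, LEGITIMATE)
    if 0 < distance <= 2:
        print(f"{sender}: suspicious near-miss of {LEGITIMATE} "
              f"(distance {distance})")
```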

Lack of malicious links or attachments

While the hackers’ background and foundational effort is in-depth and sophisticated, the delivery is surprisingly simple. BEC attacks rely on a convincing email with a strong message, meaning the normal red flags used to identify a potential attack are absent.

“Because these scams do not have any malicious links or attachments, they can evade traditional solutions,” Trend Micro pointed out.

Sense of urgency in the message

In addition to leveraging social engineering to include legitimate names, addresses and other details to fool victims, hackers also include a strong sense of urgency in BEC messages to encourage a successful attack. Many messages analyzed by Trend Micro researchers were found to include powerful language like “urgent,” “payment,” “transfer,” “request,” and other words that can support the overall message.

“The sense of urgency, a request for action, or a financial implication used in BEC schemes tricks targets into falling for the trap,” Trend Micro explained. “For instance, a cybercriminal contacts either the employees and/or executives of the company and pose as either third-party suppliers, representatives of law firms or even chief executive officers (CEOs), manipulating the targeted employee/executive into secretly handling the transfer of funds.”
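
As a toy illustration of how a filter might use exactly the keywords researchers observed, the following sketch scores a message for high-pressure language. A production gateway would weigh far more signals than keyword counts, and the threshold here is arbitrary.

```python
# Count the urgency keywords listed above; flag messages that combine
# several of them. Purely illustrative, not a production detector.
URGENCY_TERMS = {"urgent", "payment", "transfer", "request"}

def urgency_score(body: str) -> int:
    words = (w.strip(".,!?:;") for w in body.lower().split())
    return sum(w in URGENCY_TERMS for w in words)

message = "Urgent request: process this wire transfer payment before 3pm."
if urgency_score(message) >= 2:
    print("High-pressure language detected - verify the request out of band.")
```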

Business Email Compromise attacks involve social engineering and strong language.

Array of different styles to appeal to different victims

In addition, the fact that attackers have established a wide variety of different attack styles means they can utilize the one that will be most successful with their target, based on their social engineering research. For instance, a hacker who wants to attack a company CEO could pose as a third-party vendor requiring payment for an overdue invoice. An attacker looking to launch an attack on a company that may not commonly use outside vendors, and thus may not fall for that approach, could pose as an internal HR employee needing personally identifiable data.

With so many different styles available, hackers have a veritable playbook to choose from and can craft the most legitimate-looking message, improving the chances of a successful fraud.

Further leveraging a compromised account: Continuing the cycle

Finally, and unfortunately, the BEC cycle doesn’t have to end after the victim makes a fraudulent wire transfer. Once an account has been compromised, it can be leveraged to support further BEC schemes, sending phishing or other BEC messages to contacts in the compromised account’s address book.

Hackers are also positioning victims as “money mules,” according to the FBI IC3’s report. These are victims, recruited through romance or blackmail scams, that hackers use to open new accounts to leverage for BEC. While these accounts may only remain open for a short time, they provide additional, malicious opportunities for attackers.

Security experts don’t believe BEC attacks will diminish anytime in the near future. In addition to user awareness, enterprises should leverage advanced security solutions to prevent BEC intrusions. Technology from Trend Micro, which utilizes advanced strategies like artificial intelligence to detect email impersonators and machine learning to strengthen overall security, can be beneficial assets.

To find out more about how to guard against BEC within your enterprise, connect with the experts at Trend Micro today.

The post BEC is Big Business for Hackers: What makes these attacks so hard to prevent? appeared first on .

This Week in Security News: Air Canada and Cryptojacking

Welcome to our weekly roundup, where we share what you need to know about the cybersecurity news and events that happened over the past few days. This week, Air Canada reported a data breach that exposed passport details for more than 20,000 customers on their mobile app. Also, Trend Micro’s Midyear Security Roundup reported an increase in cryptojacking and a decrease in ransomware attacks.

Read on:

Cybercriminals Changing Tactics as Seen in First Half Report

Trend Micro has seen a shift from large ransomware spam campaigns to more targeted attacks using ransomware as the tool to disrupt critical business operations.

The Urpage Connection to Bahamut, Confucius and Patchwork

In the process of monitoring changes in the threat landscape, we get clearer insight into the way threat actors work behind the scenes.

Microsoft Windows zero-day vulnerability disclosed through Twitter

Microsoft has quickly reacted to the disclosure of a previously unknown zero-day vulnerability in the Windows operating system.

Addressing Challenges in Hybrid Cloud Security

Hybrid environments can come with risks and challenges, especially for organizations adopting DevOps.

Air Canada Reveals Mobile Data Breach, Passport Numbers Potentially Exposed

Air Canada reported a data breach involving the airline’s mobile app which may have led to the exposure of passport details for 20,000 customers.

Banks in Peru Hit by Phishing Attack Using Bitcoin Advertisements as Lure

The campaign used phishing emails with clickable links to lure victims; similar phishing attempts were also seen in other countries, including Thailand, Malaysia, Indonesia, and the USA.

Tech Industry Pursues a Federal Privacy Law, on Its Own Terms

Tech giants are lobbying government officials to outline a federal privacy law that would overrule the recent California law.

Unseen Threats, Imminent Losses

A review of the first half of 2018 shows a threat landscape that not only has familiar features, but also has morphing and uncharted facets: Ever-present threats grew while emerging ones used stealth.

Exclusive: Iran-Based Political Influence Operation – Bigger, Persistent, Global

An Iranian influence operation targeting internet users worldwide is bigger than previously identified, encompassing a network of anonymous websites and social media accounts in 11 different languages.

Supply Chain Attack Operation Red Signature Targets South Korean Organizations

Together with our colleagues at IssueMakersLab, Trend Micro uncovered Operation Red Signature, an information theft-driven supply chain attack targeting organizations in South Korea.

T-Mobile was Hit by a Data Breach Affecting Around 2 Million Customers

Hackers gained access to personal information from roughly 2 million T-Mobile customers, including the name, billing zip code, phone number, email address, account number and account type of users.

Did the results from Trend Micro’s 2018 security report roundup surprise you? Why or why not? Share your thoughts in the comments below or follow me on Twitter to continue the conversation: @JonLClay.

The post This Week in Security News: Air Canada and Cryptojacking appeared first on .

Operational Update Regarding the KSK Rollover for Administrators of Recursive Name Servers

The Internet Corporation for Assigned Names and Numbers (ICANN) plans to change the cryptographic key that helps to secure the internet’s Domain Name System (DNS) by performing a Root Zone Domain Name System Security Extensions (DNSSEC) key signing key (KSK) rollover, currently scheduled for October 11, 2018.

ICANN originally scheduled the root zone KSK rollover for October 2017, but decided to postpone it in light of newly available data at the time from recursive name servers.[1] This data showed that a small population of recursive servers did not update their trust anchors as expected. Users of those recursive servers could experience resolution failures when the KSK rollover occurs.

Since then, ICANN has undertaken efforts to determine whether and when the KSK rollover should proceed. Earlier this year, it began contacting operators of recursive servers that reported only the old trust anchor; in many cases, however, a responsible party could not be identified, due in large part to dynamic addressing of ISP subscribers. Also, late last year, ICANN began receiving trust anchor signaling data from more root server operators, as well as from more recursive name servers as they updated to software versions that provide these signals. ICANN makes this data publicly available.[2] As of now, percentages are relatively stable, with roughly 7% of reporters still signaling the 2010 trust anchor.

After soliciting community feedback,[3] ICANN now plans to take the next step of rolling the root KSK on October 11, 2018, subject to final approval by the ICANN Board of Directors. This date was chosen to give the community time to review the plan and to get more validating resolver operators ready for the rollover.[4]

In advance of the KSK rollover, Verisign is conducting a multi-faceted technical outreach program as a root server operator, a registry operator, and the Root Zone Maintainer, to help ensure the security, stability, and resiliency of the internet. Building on ICANN’s previous outreach effort, Verisign is coordinating with US-CERT and other national CERTs, industry partners, and various DNS operator groups, and is performing direct outreach to out-of-date signalers. For more information, see Verisign.com/KSKRollover.

If you operate a recursive name server, we strongly encourage you to check your trust anchor configuration immediately. ICANN provides instructions for monitoring the current trust anchors in DNS validating resolvers,[5] which will walk you through the process of updating the trust anchor for your servers. To remain informed about the rollover schedule, visit https://www.icann.org/resources/pages/ksk-rollover.
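
For a quick spot check from the resolver side, the hedged sketch below uses the third-party dnspython package (an assumption; it is not part of the standard library) to fetch the root DNSKEY RRset and print the key tags of the key-signing keys. Around the rollover you should see KSK-2017 (tag 20326) published alongside KSK-2010 (tag 19036); the critical thing is that your resolver's own trust anchor file trusts 20326. This is an illustration, not ICANN's official test procedure.

```python
# Print the key tags of the root zone's key-signing keys (flags == 257,
# i.e. the SEP bit is set). Requires dnspython 2.x: pip install dnspython
import dns.dnssec
import dns.resolver

answer = dns.resolver.resolve(".", "DNSKEY")  # queries your system resolver
for key in answer:
    if key.flags == 257:  # key-signing key
        print("Root KSK key tag:", dns.dnssec.key_id(key))
```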

[1] https://blog.verisign.com/domain-names/root-zone-ksk-rollover-postponed/
[2] http://root-trust-anchor-reports.research.icann.org/
[3] https://www.icann.org/public-comments/ksk-rollover-restart-2018-02-01-en
[4] https://www.icann.org/en/system/files/files/plan-continuing-root-ksk-rollover-01feb18-en.pdf
[5] https://www.icann.org/dns-resolvers-checking-current-trust-anchors

The post Operational Update Regarding the KSK Rollover for Administrators of Recursive Name Servers appeared first on Verisign Blog.

     


SN 678: Never a Dull Moment

This week we catch up on another busy week. We look at Firefox's changing certificate policies, the danger of grabbing a second-hand domain, the Fortnite mess on Android, another patch-it-now Apache Struts RCE, a frightening jump in Mirai botnet capability, an unpatched Windows 0-day privilege elevation, malware with a tricky new C&C channel, A/V companies predictably unhappy with Chrome, more serious problems Tavis found in GhostScript, a breakthrough in contactless RSA key extraction, a worrisome flaw that has always been present in OpenSSH, and problems with never-dying Hayes AT commands in Android devices.

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6.


Cybercriminals Changing Tactics as Seen in First Half Report

Today, Trend Micro released its first half 2018 security roundup report in which we want to share the threat intelligence we discovered through the Trend Micro™ Smart Protection Network™ that allows us to identify the threats that have targeted our customer base. Below are some thoughts I’d like to share with you about these trends and how they could affect you and your organization.

Cybercriminals regularly change who they target, how they target them, and what they are after. Most recently we’ve seen a shift from large ransomware spam campaigns to more targeted attacks using ransomware as the tool to disrupt critical business operations. Any organization that depends on critical systems to run its business needs to ensure it has prepared for a targeted attack. Secondly, we’ve seen a shift towards cryptomining and cryptojacking as the predominant threat for many cybercriminals today. This has taken over as the threat du jour within the criminal underground, with a lot of chatter on how best to perpetrate this crime. While not as destructive as ransomware, it can disrupt system operations: the goal of most cryptomining malware is to use as many system resources as possible for mining, so the system will not be supporting its primary business function.

Any organization that supports critical infrastructure needs to look at how to harden its ICS/SCADA networks, as we’re starting to see threat actors performing destructive attacks rather than simply doing reconnaissance and testing capabilities when compromising these networks. As our Zero Day Initiative is finding, vulnerabilities within the applications and devices in this sector are increasing and, more worryingly, we’re not seeing the affected vendors patch them quickly. This will likely change as vendors are held more accountable for fixing their bugs, but until then, providers of critical infrastructure need to build improved patching processes, such as virtual patching at the network and host layers.

As the FBI has shared, the BEC threat has been increasing every year since 2013 with total losses from this threat reaching $12B US. This shows the threat actors behind these attacks are emboldened due to the simplicity (i.e. low investment in perpetrating), as well as the high monetary returns. We will likely see more actors and criminal syndicates leveraging this threat to target businesses of all sizes. The good news is that diligence in educating your financial and HR employees on how to identify this threat, along with implementing two-factor verification of requests, can greatly mitigate the risk of compromise.

Overall, organizations need to remain vigilant in reviewing their security processes, as well as their existing cybersecurity solutions. Solution sprawl is a real problem, due to technological complexity and a lack of the trained personnel required to run the many products involved. Instead, businesses should look at consolidating and connecting their defenses in a way that allows faster protection from new threats and improved visibility across their entire network infrastructure. Lastly, look to invest in and enable the advanced threat protections coming to market that use artificial intelligence and machine learning, but don’t forget that many traditional technologies are still very effective at stopping the bulk of today’s threats.

There are more details in our report, which you should read to gain a full understanding of the threats we saw during this most recent first half. I will also cover the trends and data in my live monthly threat webinar on August 30, which you can watch on demand afterwards.

If you have any questions or comments, please share them below.

The post Cybercriminals Changing Tactics as Seen in First Half Report appeared first on .

Buckle up: what the auto industry can teach us about IoT security

Help Net Security has published an op-ed from Brian Honan entitled ‘IoT security: lessons we can learn from the evolution of road safety’. The piece compares the lack of safety features in cars 50 years ago with today’s Internet of Things.

Inspired by a conversation with his father about growing up in rural Ireland, when cars were a relatively new technology, Brian writes:

“My father’s description of the near-total disregard for road safety in a relatively new mode of transport made me think about cybersecurity and how many of the issues we face today as professionals also relate to the interaction of user behaviour, devices, systems and rules.”

As Brian points out, it took decades for the safety features we now take for granted in cars to evolve. In many cases, the driver for this change – pun partly intended – was not manufacturers but governments and insurance companies. He argues that the security industry has arrived at a similar crossroads. It urgently needs to act to tackle the risk from ever-increasing numbers of IoT devices.

Many of the safety features in modern cars rely on a combination of user knowledge and inherent design. That’s a model the security industry needs to follow for IoT, Brian concludes.

The full article is available to read here for free at Help Net Security. At BH Consulting, we’ve previously blogged about the risks in IoT security. In a post from last year, we looked at the debate for stronger security. In our May newsround, we wrote about a free tool from ENISA that identifies good security practices for IoT projects.

 

The post Buckle up: what the auto industry can teach us about IoT security appeared first on BH Consulting.

Hackers gain access to millions of T-Mobile customer details

T-Mobile has fallen foul of yet another cybersecurity issue. In a statement released this week the company said that an unauthorized entry into its network may have given hackers access to customer records, including billing ZIP codes, phone numbers, email addresses and account numbers. According to T-Mobile, the intrusion was quickly shut down, and no financial data, social security numbers or passwords were compromised.

Source: ZDNet

This Week in Security News: Facebook and Faxploits

Welcome to our weekly roundup, where we share what you need to know about the cybersecurity news and events that happened over the past few days. This week, Facebook removed 652 fake accounts originating from Russia and Iran. Also, Microsoft identified and removed fake internet domains that mimicked U.S political institutions thought to be created by Russian operatives.

Read on:

Back to Basics: Why We Need to Encourage More Secure IoT Development

In cybersecurity, there’s a rule: It’s more effective to fix a problem in the development phase. Nowhere is this truer than in IoT, where devices may never be secured once they leave the production line.

Microsoft’s Anti-Hacking Efforts Make it an Internet Cop

Microsoft identified and forced the removal of fake internet domains mimicking conservative U.S. political institutions, which led Russian officials to accuse the company of an anti-Russian “witch hunt.”

How Digital Extortion Impacts Today’s Enterprises

As digital extortion attacks increase, it’s imperative that IT leaders are aware of these threats and the impact they can have on overall company reputation and its relationships with partners and customers.

Experts Urge Rapid Patching of ‘Struts’ Bug

In September 2017, Equifax disclosed that a failure to patch one of its Internet servers against a pervasive software flaw — in a Web component known as Apache Struts — led to a breach that exposed personal data on 147 million Americans. 

DNC Says Hack Attack Was Actually Just a Cybersecurity Test

After initially reporting a potential cyberattack, the Democratic National Committee now believes its database of voters was the target of a third-party test of its cybersecurity.

AI and Machine Learning: Boosting Compliance and Preventing Spam

Trend Micro takes a closer look at AI and machine learning and the ways these approaches can support compliance with industry requirements and prevent spam messages.

Philips Reveals Code Execution Vulnerabilities in Cardiovascular Devices

Vulnerabilities have been discovered in multiple versions of Philips cardiovascular imaging devices.

Simplifying and Prioritizing Advanced Threat Response Measures

Trend Micro recently introduced Deep Security™, which will help IT security professionals understand more about the attacks on their networks.

Facebook Identifies New Influence Operations Spanning Globe

Facebook found and removed 652 fake accounts, pages and groups originating from Iran and Russia that were trying to mislead people around the world.

Phishing for Payroll: Nigerian National Convicted for Attempted Stealing of $6M+ via Phishing

After using phishing scams in an attempt to steal over $6 million from employees of several US colleges and universities, a Nigerian national was convicted on several charges.

Trend Micro Takes Multi-Pronged Approach to Narrowing the Gaping Cybersecurity Skills Gap

To alleviate the skills gap, barriers between software developers and IT operations must be broken down, and cybersecurity training must be applied to DevOps and across an organization.

Faxploit: Vulnerabilities in HP OfficeJet Printers Can Let Hackers Infiltrate Networks

At DEF CON 2018, security researchers demonstrated how they were able to infiltrate networks by exploiting vulnerabilities in HP OfficeJet All-in-One printers.

Do you think using AI and Machine Learning to boost compliance and prevent spam will be effective? Why or why not? Share your thoughts in the comments below or follow me on Twitter to continue the conversation: @JonLClay.


DNC cyberattack scare was just a phishing test

Yesterday, reports surfaced that the Democratic National Committee had been the target of a phishing scheme aimed at collecting officials' login information for a voter database. But it turns out the incident was just a security test. "We, along with the partners who reported the [fake] site, now believe it was built by a third party as part of a simulated phishing test on VoteBuilder," DNC Chief Security Officer Bob Lord said in a statement to the Washington Post. "The test, which mimicked several attributes of actual attacks on the Democratic party's voter file, was not authorized by the DNC, Votebuilder nor any of our vendors," he said.

Via: Washington Post

DNC reports attempted cyberattack targeting its voter database

The Democratic National Committee appears to be the target of another cybersecurity attack, CNN reports, and it has alerted the FBI about a phishing attempt aimed at gaining access to its voter database. A fake login page created to look just like the one Democratic officials use to log into a service called Votebuilder was spotted by the cybersecurity firm Lookout earlier this week. Lookout then informed the DNC of its findings.

Source: CNN

SN 677: The Foreshadow Flaw

As we head into our 14th year of Security Now, this week we look at some of the research released during last week's USENIX Security symposium, we also take a peek at last week's Patch Tuesday details, Skype's newly released implementation of Open Whisper Systems' Signal privacy protocol, Google's Chrome browser's increasing pushback against being injected into, news following last week's observation about Google's user tracking, Microsoft's announcement of more spoofed domain takedowns, another page table sharing vulnerability, believe it or not... "Malicious Regular Expressions", some numbers on how much money CoinHive is raking in, flaws in browsers and their add-ons that allow tracking-block bypasses, two closing-the-loop bits of feedback, and then a look at the details of the latest Intel Speculation disaster known as "The Foreshadow Flaw".

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.


Australian teen pleads guilty to hacking Apple

An Australian teenager pleaded guilty today to charges over repeatedly hacking into Apple's computer systems, The Age reports. He reportedly was able to access authorized keys, view customer accounts and download 90GB of secure files before being caught. Once alerted to the repeated intrusions, Apple blocked the teen and notified the FBI of the breaches. The agency in turn contacted the Australian Federal Police who raided the teenager's home last year, seizing two Apple laptops, a mobile phone and a hard drive.

Via: Apple Insider

Source: The Age

Here’s the missing ingredient in a solid security and business continuity plan

Security incidents can cast an unforgiving light on many organisations’ readiness. They highlight the need for security programmes that go further than just fixing things when they break.

Response has been security’s classic default reaction to an incident. Something is broken, so we need to fix it. But this misses a critical ingredient: resilience. If an important system fails, organisations need to know they can continue by using alternative systems, be they technical or manual.

Resilience, not just recovery

“Security incidents invariably lead to downtime. So it makes sense to focus on resilience in security programmes, not just detection and recovery. This way, a business can continue to survive and function even if key services are disrupted or temporarily unavailable,” says Brian Honan, CEO of BH Consulting.

Last year’s ransomware outbreaks were a classic case in point. FedEx’s TNT subsidiary shipped a $300 million loss following the NotPetya infection. It took weeks to restore IT operations fully, and deliveries and sales declined during this time. The NHS in the UK cancelled 22,000 hospital appointments as it struggled to cope in the wake of WannaCry.

Steps to resilience

Brian recommends that organisations should become more resilient by integrating incident response and business continuity. He suggests the following four steps:

  • Identify key systems and services for your business
  • Look at the key risks and threats to those services
  • Based on that risk analysis, identify the key areas to address such as single points of failure, inter-reliance of systems and interdependency of systems
  • Engineer ways to mitigate the impact of any potential failure, either through cybercrime or other means.

Once you start talking the language of risk, you’re talking the language of business, not IT. That’s why Brian recommends getting agreement from business owners as to how the organisation manages the risks that it discovers. Suppose the assessment stage uncovers a vulnerable system. The organisation has three choices: replace the system outright, upgrade the current version, or accept the risk that the business will be unavailable for whatever time it takes to recover from downtime.

Decision time for the business

Each option comes at a price, but it’s up to the business to determine the cost it’s willing to bear. “The security professional’s role is to give the business well informed data and analysis so that they can make the appropriate decision for the business. The CSO is chief security officer, not the chief scapegoat officer,” says Brian.

There’s still vigorous debate over the meaning of resilience in the context of information security. Kelly Shortridge of Security Scorecard recently wrote a lengthy and thoughtful post that’s well worth reading. Drawing on a range of examples from far beyond security, she says security has too often focused on robustness. “Resilience is ultimately about accepting reality and building a defensive strategy around reality,” she writes. Quoting the ecological economics scholar Peter Timmerman, she adds: “resilience is the building of ‘buffering capacity’ into a system, to improve its ability to continually cope going forward.”

Another word for resilience is flexibility. That’s arguably an incomplete definition too, but the term hints at an ability to bounce back from an interruption to something approaching normal service.

The post Here’s the missing ingredient in a solid security and business continuity plan appeared first on BH Consulting.

SN 676: The Mega FaxSploit

This week we cover lots of discoveries revealed during last week's Black Hat 2018 and DEF CON 26 Las Vegas security conferences. Among them, 47 vulnerabilities across 25 Android smartphones, Android "Disk-In-The-Middle" attacks, Google tracking when asked not to, more Brazilian DLink router hijack hijinks, a backdoor found in VIA C3 processors, a trusted-client attack on WhatsApp, a macOS 0-day, a tasty new feature for Win10 Enterprise, a new Signal-based secure eMail service, Facebook's FIZZ TLS v1.3 library, another Let's Encrypt milestone, and then "FaxSploit" the most significant nightmare in recent history (FAR worse, I think, than any of the theoretical Spectre & Meltdown attacks).

Check out our Show Notes!

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.


Hundreds of Instagram users report similarly hacked accounts

A number of people have reported having their Instagram accounts hacked this month, Mashable reports, and many of these hacks appear to have taken the same approach. Users suddenly find themselves logged out of their accounts and when they try to log back in, they discover that their handle, profile image, contact info and bios have all been changed. Often the profile image has been changed to a Disney or Pixar character and the email address connected to the account is changed to one with a .ru Russian domain, according to Mashable. Some even had their two-factor authentication turned off by hackers.

Via: CNET

Source: Mashable

Elaborate hack turned Amazon Echo speakers into spies

Some people worry that hackers could infiltrate their smart speakers and spy on them, but that hasn't been the practical reality -- not for Amazon's Echo, at least. A team of researchers from China's Tencent has come about as close as you can get right now, however. They've disclosed an attack on the Echo that uses both a modified speaker and a string of Alexa web interface vulnerabilities to remotely eavesdrop on regular models. It sounds nefarious, but it requires more steps than would be viable for most intruders.

Via: Wired

Source: Def Con

New Pluralsight Course: Modern Browser Security Reports



Rounding out a recent spate of new Pluralsight courses is one final one: Modern Browser Security Reports. This time, it's with Scott Helme who, for most of my followers, needs no introduction. You may remember Scott from such previous projects as securityheaders.io, Report URI and, as it relates to this course, our collective cleaning up at a couple of recent UK awards nights:

That particular awards night relates to this course because at that particular event, our little Report URI project won the SC Award for Best Emerging Technology, and what that project does is precisely what we're talking about in this course. In fact, we recorded this course in London only a few days after that pic was taken (although admittedly in a less well-dressed fashion).

Clearly, Scott and I have a bias when it comes to how awesome we think browser reporting is, but bear with me because it is awesome! Many of you would have seen us talk about CSP reporting before and that's a great place to start: create a policy about all the things that are allowed to load into a website and from where; then, as soon as that policy is violated (for example, when someone finds an XSS vector on your site), you get notified. You can do the same thing with HPKP and Expect-CT, plus there's also CAA reporting (although strictly speaking that's a report issued by a CA rather than a browser).

One of my favourites to demo is XSS auditor reporting. Check out that link and you'll see an alert box; dismiss that, open your dev tools, reload the page then have a look at the response headers. See the "x-xss-protection" header? And the "report" directive in the value? Open up your favourite proxy (such as Fiddler) then click on the link in the web page above and watch the requests. See the one that gets sent to https://demo.report-uri.com/r/default/xss/enforce - that's the browser automatically sending a report to let me know that its XSS auditor fired and I may well have a cross site scripting vulnerability on my site. Like the other reporting constructs, it's free, dead simple to implement and could well be the early warning sign I need to identify a major vulnerability in my site.
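If you want to emit this header yourself, it's essentially a one-liner. Here's a minimal, hypothetical sketch; Flask and the report URL are my own placeholders, not something from the course:

```python
# Hypothetical sketch: emitting the XSS auditor header with a report directive.
from flask import Flask, Response

app = Flask(__name__)

@app.route("/")
def index():
    resp = Response("<html><body>Hello</body></html>")
    # "1; mode=block" enables the auditor and blocks the page when it fires;
    # "report=" tells the browser where to POST a report when that happens.
    # The URL is a placeholder for your own collection endpoint.
    resp.headers["X-XSS-Protection"] = (
        "1; mode=block; report=https://example.report-uri.com/r/default/xss/enforce"
    )
    return resp

if __name__ == "__main__":
    app.run()
```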

That's just a very brief intro; there's about an hour's worth of content in this course with Scott and me casually discussing the topic in typical "Play by Play" fashion. I hope you enjoy our course; Modern Browser Security Reports is now live!

Just a little side note: for the first time ever, I had to stop a fellow Pluralsight author during the recording of this course due to an unfortunate language incident. I captured this piece of video after the course which I believe aptly covers the nature of the problem:

I know what I heard, you be the judge and chime in with your comments below 🙂

SN 675: New WiFi Password Attack

This week we discuss yet another new and diabolical router hack and attack, Reddit's discovery of SMS 2FA failure, WannaCry refuses to die, law enforcement's ample unused forensic resources, a new and very clever BGP-based attack, Windows 10 update dissatisfaction, Google advances their state-sponsored attack notifications, what is Google's project Dragonfly?, a highly effective and highly targeted Ransomware campaign, some closing-the-loop feedback from our listeners, and a breakthrough in hacking/attacking WiFi passwords.

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.


Reviewing election cybersecurity in this week’s primary states

Since learning of Russia's attempts to hack into the elections systems of 21 states during the 2016 US presidential race, legislators have been on high alert. Cybersecurity experts have warned it's likely the Kremlin will attack again, and already it's been caught attempting to infiltrate legislators' computers and use phony social media accounts to influence the outcome of 2018 state primaries.

Four states are holding elections on Tuesday -- Kansas, Michigan, Missouri and Washington -- and some lawmakers are doing more than others to protect their systems against cyber attacks. Here's a breakdown of each state's approach to elections cybersecurity, as of August 2018:

New Pluralsight Course: Defending Against JavaScript Keylogger Attacks on Payment Card Information



Only a few weeks ago, I wrote about a new GDPR course with John Elliott. We've been getting fantastic feedback on that course and I love the way John has been able to explain GDPR in a way that's actually practical and makes sense! In my experience, that's a bit of a rare talent in GDPR land...

When we recorded that course in London a couple of months back, we also recorded another one on Defending Against JavaScript Keylogger Attacks on Payment Card Information. John has a background in payment systems and he's seen more than his fair share of attacks against them, particularly those which scrape card data straight out of the client side.

As luck would have it (or "bad luck", depending on your perspective), after recording that course but before posting this piece we saw a perfect industry example of the problem. Actually, it dates back to before the June recording date, to this tweet from the month before:

In this tweet, Shane is attempting to draw Ticketmaster's attention to the fact that there was some malicious JavaScript running on their site. Literally whilst John and I were recording this course, visitors to their site were being served this script and having their card data siphoned off. More than 6 weeks after that tweet, Monzo bank in the UK identified a pattern of Ticketmaster customers experiencing card fraud. It's a little alarming that it took a bank to figure out that one of its merchants had been pwned rather than the merchant identifying it themselves, yet here we are.

It later eventuated that the compromise was due to a single line of code or, more specifically, a script tag on Ticketmaster's website that embedded a chatbot from a company called Inbenta. Inbenta then had their script compromised and, because it was embedded on the Ticketmaster payment page, that's it, game over: the contents of the DOM and any input fields are now accessible to a malicious party. This is eerily similar to the Browsealoud incident only a few months earlier, although rather than a bit of (mostly) harmless crypto coin mining, it led to full-on card theft.

This sort of thing is alarmingly common and you really want to think about whose script you embed on your site:

But we also have good defences against these things going wrong. For example, John and I talk about content security policies and subresource integrity, both free and easily accessible browser constructs that stop attacks like this dead. CSP in particular could have not only stopped that attack, but actually alerted Ticketmaster to it as soon as it began. There's a whole heap more beyond that, of course, and it's all baked into one of those very conversational "play by plays" so it's easy watching and only just over an hour long.
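To make those two defences concrete, here's an illustrative sketch of a payment page using both; Flask, the vendor URL and the integrity hash are all placeholders of mine rather than anything from the course:

```python
# Illustrative sketch: CSP with reporting plus Subresource Integrity (SRI).
from flask import Flask, Response

app = Flask(__name__)

PAGE = """<html><body>
<!-- SRI: if the vendor's script ever changes, its hash no longer matches
     and the browser refuses to run it, so a compromised script fails closed. -->
<script src="https://chat.example-vendor.com/bot.js"
        integrity="sha384-PLACEHOLDER_HASH_OF_THE_EXPECTED_SCRIPT"
        crossorigin="anonymous"></script>
</body></html>"""

@app.route("/payment")
def payment():
    resp = Response(PAGE)
    # CSP: only run scripts from our own origin and the vetted vendor, and
    # POST a violation report the moment anything else tries to load.
    resp.headers["Content-Security-Policy"] = (
        "default-src 'self'; "
        "script-src 'self' https://chat.example-vendor.com; "
        "report-uri https://example.report-uri.com/r/default/csp/enforce"
    )
    return resp

if __name__ == "__main__":
    app.run()
```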

Defending Against JavaScript Keylogger Attacks on Payment Card Information is now live!

The Quest for Optimal Security

There's no shortage of guidance available today about how to structure, build, and run a security program. Most guidance comes from a standpoint of inherent bias, whether it be to promote a product class, specific framework/standard, or to best align with specific technologies (legacy/traditional infrastructure, cloud, etc.). Given all the competing advice out there, I often find it's hard to suss out exactly what one should be doing. As someone actively on the job hunt, this reality is even more daunting because job descriptions will typically contain a smattering of biases, confirmed or contradicted through interview processes. But, I digress...

At the end of the day, the goal of your security program should be to chart a path to an optimal set of capabilities. What exactly constitutes "optimal" will in fact vary from org to org. We know this is true because otherwise there would already be a settled "best practice" framework to which everyone would align. That said, there are a lot of common pieces that can be leveraged in identifying the optimal program attributes for your organization.

The Basics

First and foremost, your security program must account for basic security hygiene, which creates the basis for arguing legal defensibility; which is to say, if you're not doing the basics, then your program can be construed as insufficient, exposing your organization to legal liability (a growing concern). That said, what exactly constitutes "basic security hygiene"?

There are a couple different ways to look at basic security hygiene. For starters, you can look at it by technology grouping:
- Network
- Endpoint
- Data
- Applications
- IAM
- etc.

However, listing out specific technologies can become cumbersome, plus it doesn't necessarily lend itself well to thinking about security architecture and strategy. A few years ago I came up with an approach that looks like this:

[Diagram: security capabilities matrix]

More recently, I learned of the OWASP Cyber Defense Matrix, which takes a similar approach to mine above but mixes it with the NIST Cybersecurity Framework.


Overall, I like the simplicity of the CDM approach as I think it covers sufficient bases to project a legally defensible position, while also ensuring a decent starting point that will cross-map to other frameworks and standards depending on the needs of your organization (e.g., maybe you need to move to ISO 27001 or complete a SOC 1/2/3 certification).

Org Culture

One of the oft-overlooked, and yet insanely important, aspects of designing an approach to optimal security for your organization is to understand that it must exist completely within the organization's culture. After all, the organization is made up of people doing work, and pretty much everything you're looking to do will have some degree of impact on those people and their daily lives.

[Diagram: security program pyramid grounded in org culture]

As such, when you think about everything, be it basic security hygiene, information risk management, or even behavioral infosec, you must first consider how it fits with org culture. Specifically, you need to look at the values of the organization (and its leadership), as well as the behaviors that are common, advocated, and rewarded.

If what you're asking people to do goes against the incentive model within which they're operating, then you must find a way to either better align with those incentives or find a way to change the incentives such that they encourage preferred behaviors. We'll talk more about behavioral infosec below, so for this section the key takeaway is this: organizational culture creates the incentive model(s) upon which people make decisions, which means you absolutely must optimize for that reality.

For more on my thoughts around org culture, please see my post "Quit Talking About "Security Culture" - Fix Org Culture!"

Risk Management

Much has been said about risk management over the past decade+, whether it be PCI DSS advocating for a "risk-based approach" to vulnerability management, updates to the NIST Risk Management Framework, or advocacy from ISO 27005/31000 and proponents of a quantitative approach (such as the FAIR Institute).

The simple fact is that, once you have a reasonable base set of practices in place, almost everything else should be driven by a risk management approach. However, what this means within the context of optimal security can vary substantially, not least due to staffing challenges. If you are a small-to-medium-sized business, then your reality is likely one where you, at best, have a security leader of some sort (CISO, security architect, security manager, whatever) and then maybe up to a couple security engineers (doers), maybe someone for compliance, and then most likely a lot of outsourcing (MSP/MSSP/MDR, DFIR retainer, auditors, contractors, consultants, etc, etc, etc).

Risk management is not your starting point. As noted above, there are a number of security practices that we know must be done, whether that be securing endpoints, data, networks, access, or what-have-you. Where we start needing risk management is when we get beyond the basics and try to determine what else is needed. As such, the crux of optimal security is having an information risk management capability, which means your overall practice structure might look like this:

[Diagram: practice structure pyramid with risk management layered on basic security hygiene]

However, don't get wrapped around the axle too much on how the picture fits together. Instead, be aware that your basics come first (out of necessity), then comes some form of risk mgmt., which will include gaining a deep understanding of org culture.

Behavioral InfoSec

The other major piece of a comprehensive security program is behavioral infosec, which I have talked about previously in my posts "Introducing Behavioral InfoSec" and "Design For Behavior, Not Awareness." In these posts, and other places, I talk about the imperative to key in on organizational culture, and specifically look at behavior design as part of an overall security program. However, there are a couple key differences in this approach that set it apart from traditional security awareness programs.
1) Behavioral InfoSec acknowledges that we are seeking preferred behaviors within the context of organizational culture, which is the set of values and behaviors promoted, supported, and rewarded by the organization.
2) We move away from basic "security awareness" programs like annual CBTs toward practices that seek measurable, lasting change in behavior that provide positive security benefit.
3) We accept that all security behaviors - whether it be hardening or anti-phishing or data security (etc) - must either align with the inherent cultural structure and incentive model, or seek to change those things in order to heighten the motivation to change while simultaneously making it easier to change.

To me, shifting to a behavioral infosec mindset is imperative for achieving success with embedding and institutionalizing desired security practices into your organization. Never is this more apparent than in looking at the Fogg Behavior Model, which explains behavior thusly:

In writing, it says that behavior happens when three things come together: motivation, ability, and a trigger (prompt or cue). We can diagram behavior (as above) wherein motivation is charted on the Y-axis from low to high, ability is charted on the X-axis from "hard to do" to "easy to do," and then a prompt (or trigger) falls either to the left or right of the "line of action," which means the prompt itself is less important than one's motivation and the ease of the action.
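As a toy illustration (my own simplification, not Fogg's formal model), the line of action can be thought of as a threshold that motivation and ability must jointly clear at the moment of a prompt:

```python
# Toy sketch of the Fogg model's "line of action" (a simplification).
def behaviour_occurs(motivation, ability, prompted, action_line=0.5):
    """motivation and ability each scored 0 (low / hard to do) to 1 (high / easy)."""
    if not prompted:
        return False  # no trigger, no behaviour, however motivated
    return motivation * ability >= action_line

# High motivation can't rescue a change that's too hard to make...
print(behaviour_occurs(motivation=0.9, ability=0.3, prompted=True))  # False
# ...but making the secure path easy lowers the motivation required.
print(behaviour_occurs(motivation=0.6, ability=0.9, prompted=True))  # True
```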

We consistently fail in infosec by not properly accounting for incentive models (motivation) or by asking people to do something that is, in fact, too difficult (ability; that is, you're asking for a change that is hard, maybe in terms of making it difficult to do their job, or maybe just challenging in general). In all things, when we think about information risk mgmt. and the kinds of changes we want to see in our organizations beyond basic security hygiene, it's imperative that we also understand the cultural impact and how org culture will support, maybe even reward, the desired changes.

Overall, I would argue that my original pyramid diagram ends up being more useful insomuch as it encourages us to think about info risk mgmt. and behavioral infosec in parallel and in conjunction with each other.

Putting It All Together

All of these practice areas - basic security hygiene, info risk mgmt, behavioral infosec - ideally come together in a strategic approach that achieves optimal security. But, what does that really mean? What are the attributes, today, of an optimal security program? There are lessons we can learn from agile, DevOps, ITIL, Six Sigma, and various other related programs and research, ranging from Deming to Senge and everything in between. Combined, "optimal security" might look something like this:


Conscious
   - Generative (thinking beyond the immediate)
   - Mindful (thinking of people and orgs in the whole)
   - Discursive (collaborative, communicative, open-minded)

Lean
   - Efficient (minimum steps to achieve desired outcome)
   - Effective (do we accomplish what we set out to do?)
   - Managed (haphazard and ad hoc are the enemy of lasting success)

Quantified
   - Measured (applying qualitative or quantitative approaches to test for efficiency and effectiveness)
   - Monitored (not just point-in-time, but watched over time)
   - Reported (to align with org culture, as well as to help reform org culture over time)

Clear
   - Defined (what problem is being solved? what is the desired outcome/impact? why is this important?)
   - Mapped (possibly value stream mapping, possibly net flows or data flows, taking time to understand who and what is impacted)
   - Reduced (don't bite off too much at once, acknowledge change requires time, simplify simplify simplify)

Systematic
   - Systemic understanding (the organization is a complex organism that must work together)
   - Automated where possible (don't install people where an automated process will suffice)
   - Minimized complexity (perfect is the enemy of good, and optimal security is all about "good enough," so seek the least complex solutions possible)


Obviously, much, much more can be said about the above, but that's fodder for another post (or a book, haha). Instead, I present the above as a starting point for a conversation to help move everyone away from some of our traditional, broken approaches. Now is the time to take a step back and (re-)evaluate our security programs and how best to approach them.

New Pluralsight Course: Bug Bounties for Researchers



Earlier this year, I spent some time in San Fran with friend and Bugcrowd founder Casey Ellis where we recorded a Pluralsight "Play by Play" titled Bug Bounties for Companies. I wrote about that in the aforementioned post which went out in May and I mentioned back then that we'd also created a second course targeted directly at researchers. We had to pull together some additional material on that one but I'm pleased to now share the finished product with you: Bug Bounties for Researchers.

This course covers many of the issues folks considering getting involved in bug bounties often ask: How do they find bounties? How do they stay out of legal trouble? How successful can good researchers be? This is only a 36-minute course and it's in the very casual Play by Play format (basically just Casey and me having a chat and sharing some screen content) so it's easy watching.

We hope you enjoy this course, Bug Bounties for Researchers is now live!

DNC-led Def Con event tests election websites against child hackers

At the Def Con hacker conference next week, the Democratic National Committee is co-sponsoring a contest that will pit child hackers against replicas of state government websites, Wired reports. Kids between the ages of eight and 16 will try to break into replicas of the websites secretaries of state use to post election results, and the one that devises the best defensive strategy will win $500 from the DNC. Another $2,000 will be awarded to whoever can penetrate a site's defenses. The University of Chicago and a non-profit called r00tz Asylum that offers cybersecurity lessons for children are also sponsoring the event.

Source: Wired

Three men arrested for stealing over 15 million payment cards

US officials announced today that three alleged leaders of the cybercrime group known alternatively as Fin7, Carbanak and the Navigator Group have been arrested in Germany, Poland and Spain and charged with 26 felony counts. The charges include conspiracy, wire fraud, computer hacking, access device fraud and aggravated identity theft. The Department of Justice alleges that Fin7 members have targeted more than 100 US companies, hacked thousands of computer systems and stolen 15 million credit and debit card numbers. The group is said to have breached networks in 47 states and Washington, DC and hacked 6,500 point-of-sale terminals at over 3,600 business locations.

Source: Department of Justice

SN 674: Attacking Bluetooth Pairing

This week we examine still another new Spectre processor speculation attack, we look at the new "Death Botnet", the security of the US DoD websites, lots of Google Chrome news, a push by the US Senate toward more security, the emergence and threat of clone websites in other TLDs, more cryptocurrency mining bans, Google's Titan hardware security dongles, and we finish by examining the recently discovered flaw in the Bluetooth protocol which has device manufacturers and OS makers scrambling. (But do they really need to?)

We invite you to read our show notes.

Hosts: Jason Howell and Steve Gibson

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Bandwidth for Security Now is provided by CacheFly.


Why No HTTPS? Questions Answered, New Data, Path Forward



So that little project Scott Helme and I took on - WhyNoHTTPS.com - seems to have garnered quite a bit of attention. We had about 81k visitors drop by on the first day and for the most part, the feedback has been overwhelmingly positive. Most people have said it's great to have the data surfaced publicly and they've used that list to put some pressure on sites to up their game. We're already seeing some sites on the Day 1 list go HTTPS (although frankly, if the site is that large and they've done it that quickly then I doubt it's because of our list), and really, that's the best possible outcome of this project - seeing websites drop off because an insecure request is now redirected to a secure one.

In the launch blog post, I wrote about the nuances of assessing whether a site redirects insecure requests appropriately. The tl;dr of it was that there's a bunch of factors that can lead to pretty inconsistent behaviour. Just read the comments there and you'll see a heap of them along the lines of "Hey Troy, site X is redirecting to HTTPS and shouldn't be on there", followed by me saying "No they're not, here's the evidence". For example, roblox.com:

[Screenshot: roblox.com loading over insecure HTTP]

And if you're going to roblox.com over the insecure scheme now and thinking "these guys have got it wrong", look at the requests the browser makes:

[Screenshot: the browser's network requests for roblox.com]

If we drill into the response of that first request, we can see how it's all tied together:

It's a rather bizarre redirect model where it sets a cookie then reloads the same insecure path, but by virtue of having said cookie present, that request then redirects to HTTPS. I'm going to talk more about this later on in terms of why it doesn't warrant removing Roblox from the list; for now I just wanted to highlight how inconsistent redirects can be and how what you observe in the browser may not be consistent with what's on WhyNoHTTPS.com.

Moving on, we wanted to get an updated list out ASAP because there are indeed sites that are going secure and they deserve that recognition. However, that's turned out to be a non-trivial task and I want to explain why here.

What Causes a Site to be Removed From WhyNoHTTPS.com?

Well, when it starts redirecting everyone to HTTPS by default, right? Easy? No.

I ran Scott's latest crawl of the Alexa Top 1M sites and grabbed the JSON for sites served over HTTP (he makes all of these publicly accessible so you can follow along if you'd like). I then pumped it into a local DB and worked out what had dropped off the list from Day 1 and found sites that included the following domains:

  1. sberbank.ru
  2. 360doc.com
  3. ci123.com

Nope, nope and nope. Each one of them still sticks on HTTP, even in the browser which would otherwise follow client-side script redirects. So what gives?

What we have to be conscious of is that these 3 sites stuck out because they weren't on Scott's list of HTTP sites. However, they also weren't on his list of HTTPS sites (also available for you to download), so, again, what gives? Quite simply, the crawler couldn't get a response from the site. The site could have been down, the network connection could have dropped or the crawler itself could have been blocked (although note that it should be indistinguishable from a normal browser as far as the server is concerned). So what should we do?

My biggest concern was that now that we have a baseline on the site with the Day 1 data, deviations from that state will be seen by many people. If we publish an update and sberbank.ru drops off the list and everyone is like "good on you Sberbank" (and yes, that is a bank), that would be rather misleading and wouldn't do much for confidence in our project.

So what if we go the other way? I mean what if instead of listing everything in the HTTP list, we took the entire 1M list and just subtracted the HTTPS one? This changes the business rule from "the site loaded over HTTP" to "the site didn't load over HTTPS". Anything caught in the gaps between those 2 is then flagged as not doing HTTPS. So I ran the numbers from the latest scan, and here they are:

  1. Of the top 1M sites, there were 451,938 in the HTTP list and 399,179 in the HTTPS list
  2. In total, that means 851,117 sites were logged as loading over either HTTP or HTTPS
  3. Subsequently, 148,883 sites couldn't be accounted for because they simply didn't return a response
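Expressed as set arithmetic, the revised rule is a one-liner. Here's a sketch, where the file names are my assumptions standing in for the crawl results Scott publishes:

```python
# Sketch of the revised business rule: flag everything in the Alexa Top 1M
# that did NOT respond over HTTPS, not just what responded over HTTP.
import json

with open("alexa-top-1m.json") as f:
    top_1m = set(json.load(f))        # 1,000,000 domains
with open("crawl-https.json") as f:
    https_ok = set(json.load(f))      # 399,179 domains served over HTTPS

flagged = top_1m - https_ok           # 600,821 domains by the numbers above

# The catch: "flagged" also contains the 148,883 domains that never returned
# any response, and each of those needs an independent re-test before it can
# fairly make the list.
```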

Nearly 15% is a lot and that's worrying because there could easily be a heap of false-positives in there. For example, the list includes Instagram, Google in Russia and Netflix. They all do HTTPS.

Consequently, the HTTP list alone won't cut it, and the Alexa Top 1M list minus the HTTPS list won't cut it either, so what are we left with? There was only one way forward:

  1. All the sites explicitly returning content over HTTP to the crawler and not redirecting make the list
  2. All the sites that never returned any response need to be tested entirely independently and if they don't redirect, they make the list

That last point may seem redundant but after some quick checks, I found that I could consistently get responses from sites on the 15% gap list where Scott's crawler couldn't. Maybe it's my location, maybe it's because I wrote my own that inevitably behaves slightly differently, I don't know, the point is that this effectively gives those 148,883 sites a second chance at serving content over HTTPS.

But that doesn't always work either! Of the top 10K Alexa ranked domains in that gap list, I still couldn't get a response from 2,907 of them. I was still finding domains which simply wouldn't resolve at all, for example the top 3 are:

  1. microsoftonline.com
  2. googleusercontent.com
  3. resultieser.com

These were the 81st, 118th and 162nd top ranked sites respectively and even manually testing them in the browser, none of them go anywhere. The first 2 don't resolve at all and the 3rd one redirects to home.resultieser.com which then, itself, doesn't resolve. I don't know why they make the Alexa Top 1M list (certainly not that high in the order) and I've not dug into it any further so if you have ideas on why this is, leave a comment below.

The bottom line is that we're down to about 4% and a bit of the Alexa Top 1M we simply can't account for and a bunch of those definitely don't go anywhere. I'd like to have a perfect list but the reality of it is that we never will so we just need to do our best within the constraints that we have.

Our first revision of sites not supporting HTTPS is now live at whynohttps.com.

I would have liked to have gotten a revised list out earlier but it's because of idiosyncrasies like those above that it took a while to get there. This is also the reason we haven't automated it yet; I'd love to rebuild the index on the site nightly, but right now it really needs that extra bit of human validation to make sure everything is spot on. Automation is definitely on the cards though.

Is it OK to Redirect Without a 30X?

I want to touch on a question that came up quite a few times and indeed I showed this behaviour earlier on with Roblox. What happens if a website doesn't respond with a redirect in the HTTP response header? Is an HTTP 200 and a meta refresh tag or some funky JS sufficient? Let's address that by starting with precisely what these response codes mean.

An HTTP 301 is "Moved Permanently", that is forever and a day the client should assume the resource is no longer at the requested location and is instead now at the location returned in the "Location" header. For example, take this request for troyhunt.com over HTTP:

[Screenshot: HTTP response for troyhunt.com showing 301 Moved Permanently]

You can see "301 Moved Permanently" on the second line then further down,

Location: https://www.troyhunt.com/

I'm telling any client that attempts to request that naked domain name (without www) over the insecure scheme that it should always refer to my site securely and with the www prefix. This is the correct way to redirect from HTTP to HTTPS; a 301 response with the final destination URL in that location header. You'll see sites sometimes doing multiple redirects which doesn't influence how we grade them HTTPS wise, but is inefficient as each redirect requires another request from the client.

Then there's HTTP 302 which is "Found", that is the resource has been temporarily redirected to the location in the response header. This is not what you want when using the status code to redirect people from HTTP to HTTPS because this shouldn't be a temporary situation, it should be permanent. Always do HTTPS all of the time, and 301 is the semantically correct response code to do just that. We will still flag the site as redirecting to HTTPS if a 302 is used, but it's not ideal.

As you'll see from the Mozilla articles I linked to on those status codes, this has an impact on SEO. A 301 indicates that crawlers should index the content on the page being redirected to; a 302 indicates that they shouldn't. Browsers will also permanently cache a 301 redirect. Now this actually is important in terms of HTTPS because it ensures that the same request for an insecure URL issued at a later date is upgraded to HTTPS before being sent over the wire, where it would otherwise be at risk of interception. (And yes, I'll get to HSTS shortly, let's just finish on status codes first.)
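To make that concrete, here's a minimal sketch of doing the redirect right. I'm assuming a Flask app sitting behind a proxy that sets X-Forwarded-Proto (both assumptions of mine), but the same logic applies to any stack:

```python
# Minimal sketch: permanently redirect every insecure request to HTTPS.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    scheme = request.headers.get("X-Forwarded-Proto", request.scheme)
    if scheme != "https":
        # One hop, a 301 with the final destination in the Location header:
        # cacheable by browsers and honoured by crawlers for indexing.
        return redirect(request.url.replace("http://", "https://", 1), code=301)
```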

Meta refresh tags and client-side script are not "correct" implementations of redirects from insecure to secure schemes. Yes, they usually work, but they also have exceptions. For example, look at what happens to Roblox if I disable JS in Chrome:

[Screenshot: roblox.com served entirely over HTTP with JavaScript disabled]

This simply isn't a sufficient implementation of HTTPS as it's just served the entire page over HTTP without any redirect. I have no idea why Roblox has taken this approach, it's very unusual and it's hard to see what upside they're gaining from it.

Of course, the other issue is that particularly in a case such as Roblox's, it's extremely difficult for a parser to reliably figure out if HTTPS redirection is happening. We'd have to somehow programmatically work out that there's a cookie being set and the page reloaded, then the behaviour changing when the cookie is there. Consequently, a semantically correct redirect is the only thing that will keep the site off the HTTP list.

But a 301 is only the first step, let's talk about HSTS.

HSTS

Let me reiterate that last sentence - a 301 is the first step - because whilst there are other steps after that, you're not going anywhere without first 301'ing insecure requests. I'm going to delve into HSTS now and just in case that's a new acronym for you, have a read of Understanding HTTP Strict Transport Security (HSTS) and preloading it into the browser if need be.

There are a couple of dependencies for properly implementing HSTS and they're worth understanding before we proceed because I'm going to highlight a case where it hasn't been understood. The first is that the browser will only honour the response header when returned over an HTTPS connection. Yes, you can return an HSTS header over HTTP but the browser will ignore it (think of the havoc a MitM could cause if it didn't...)
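In server terms, that means the HSTS header belongs on the secure responses that follow the 301, since the browser discards it everywhere else. Continuing the hypothetical Flask sketch from the previous section:

```python
# Sketch: only emit Strict-Transport-Security on HTTPS responses.
from flask import Flask, request

app = Flask(__name__)

@app.after_request
def add_hsts(resp):
    if request.headers.get("X-Forwarded-Proto", request.scheme) == "https":
        # max-age is in seconds (one year here); includeSubDomains and the
        # preload directive are among hstspreload.org's submission criteria.
        resp.headers["Strict-Transport-Security"] = (
            "max-age=31536000; includeSubDomains; preload"
        )
    return resp
```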

Next is that if you want to preload HSTS (which is really where you want to be), there are certain criteria to be met:

[Screenshot: hstspreload.org submission criteria]

I've highlighted the key one because that speaks to one of the misunderstandings I saw in the wake of us launching the site. Let me illustrate with this site:

[Screenshot: whirlpool.net.au served over HTTP]

This is the 4th largest Aussie site to make the list and it's a popular local one with an active forum. A thread sprung up last week about some local media coverage getting its inclusion on WhyNoHTTPS.com wrong which, judging by the image above, is clearly not correct. There's some "passionate" backwards and forwards there as people are prone to do on forums, amongst which there's some genuinely insightful commentary. But there's also this from the operator of the site:

[Screenshot: forum reply from the site operator]

The first para is best skipped, so getting to the HSTS bit: it's missing the most fundamental requirement of HSTS, namely that you must redirect insecure requests. (There's also the whole point of a 301 putting requests on the secure scheme ASAP in order to dramatically reduce the number of requests sent insecurely.) Just as hstspreload.org explains in the earlier image, without a redirect from HTTP to HTTPS you can't preload. And just in case you're thinking, "ah, but the Whirlpool forum in question is on a subdomain of forums.whirlpool.net.au", firstly, it doesn't redirect from HTTP to HTTPS (that only happens once proceeding to the login page) and secondly, you can't preload subdomains:

[Screenshot: hstspreload.org noting that subdomains can't be preloaded]

What you end up with on a site like Whirlpool which isn't consistently redirecting insecure requests to secure ones is a bit of a hodgepodge of security postures, which leads to things like this:

So I'll end this section the way I began it: HTTP 301 is the first step to doing HTTPS right and if a site can't do that on request to the domain then it deserves a place on WhyNoHTTPS.com.

Giving People Actionable Advice

One change we've made to the site since launch is to address precisely the sort of thing we saw in the Whirlpool case above: help fill knowledge gaps by providing actionable resources. As a result, you'll now see this on the site:

[Screenshot: the actionable resources now shown on WhyNoHTTPS.com]

If there are other good ones you know of that fill in the sorts of knowledge gaps you see people with when going HTTPS, do please let me know in the comments.

Summary

One of the design decisions I made early on was to only show the top 100 sites globally and the top 50 on a country-by-country basis. The reason was simply to avoid getting into all the sorts of nuanced debates that people have already had about a much broader collection of sites. If ever we get a much more reliable means of addressing all the sorts of edge cases I outlined above that might change, but for now keeping it simple is making it easier to manage.

If nothing else, I hope this post illustrates just how much effort has gone into trying to represent fair and accurate accounts of who's doing HTTPS properly and who's not.

Idaho inmates hacked prison tablets and stole $225,000

Inmates in five Idaho prisons exploited a vulnerability on their JPay tablets to steal almost $225,000 worth of credits, according to officials. The Idaho Department of Correction said 364 prisoners boosted their JPay account balances, according to The Associated Press. The department unearthed the issue earlier this month, and noted taxpayer dollars were not affected.

Source: The Associated Press

How Dropbox dropped the ball with anonymized data

Dropbox found itself in hot water this week over an academic study that used anonymized data to analyze the behavior and activity of thousands of customers.

The situation seemed innocent enough at first: in an article in Harvard Business Review, researchers at the Northwestern University Institute on Complex Systems (NICO) detailed an extensive two-year study of best practices for collaboration and communication on the cloud file hosting platform. Specifically, the study examined how thousands of academic scientists used Dropbox, which gave the NICO researchers project-folder data from more than 1,000 university departments.

But it wasn’t long before serious issues were revealed. The article, titled “A Study of Thousands of Dropbox Projects Reveals How Successful Teams Collaborate,” initially claimed that Dropbox gave the research team raw user data, which the researchers then anonymized. After Dropbox was hit with a wave of criticism, the article was revised to say the original version was incorrect – Dropbox anonymized the user data first and then gave it to the researchers.

That’s an extremely big error for the authors to make (if indeed it was an error) about who anonymized the data and when the data was anonymized — especially considering the article was co-authored by a Dropbox manager (Rebecca Hinds, head of Enterprise Insights at Dropbox). I have to believe the article went through some kind of review process from Dropbox before it was published.

But let’s assume one of the leading cloud collaboration companies in the world simply screwed up the article rather than the process of handling and sharing customer data. There are still issues and questions for Dropbox, starting with the anonymized data itself. A Dropbox spokesperson told WIRED the company “randomized or hashed the dataset” before sharing the user data with NICO.

Why did Dropbox randomize *or* hash the datasets? Why did the company use two different approaches to anonymizing the user data? And how did it decide which types of data to hash and which types to randomize?

Furthermore, how was the data hashed? Dropbox didn’t say, but that’s an important question. I’d like to believe that a company like Dropbox wouldn’t use an insecure, deprecated hashing algorithm like MD5 or SHA-1, but there’s plenty of evidence those algorithms are still used by many organizations today.
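For illustration only (to be clear, we don't know what Dropbox actually did), a keyed hash such as an HMAC is one common way to pseudonymize identifiers more safely than a bare digest, because an outsider can't recompute it from guessed inputs:

```python
# Sketch: pseudonymizing identifiers with a keyed hash rather than a bare digest.
import hashlib
import hmac

SECRET_KEY = b"held-by-the-data-owner-and-never-shared"  # assumption: stays internal

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# A bare sha256("alice@example.com") can be recomputed from a list of known
# email addresses; the HMAC can't be reproduced without the key.
print(pseudonymize("alice@example.com"))
```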

The Dropbox spokesperson also told WIRED it grouped the dataset into “wide ranges” so no identifying information could be derived. But Dropbox’s explanation of the process is short on details. As a number of people in the infosec community have pointed out this week, anonymized data may not always be truly anonymous. And while some techniques work better than others, the task of de-anonymization appears to be getting easier.

And these are just the issues relating to the anonymized data; there are also serious questions about Dropbox’s privacy policy. The company claims its privacy policy covers the academic research, which has since sparked a debate about the requirements of informed consent. The policy states Dropbox may share customer data with “certain trusted third parties (for example, providers of customer support and IT services) to help us provide, improve, protect, and promote our services,” and includes a list of those trusted third parties like Amazon, Google and Salesforce. NICO, however, is not on the list. It’s also not entirely clear whether the anonymized data was given to NICO to improve the Dropbox service or to advance scientific research.

And while this isn’t close to the gross abuse of personal data we’ve seen with the Cambridge Analytica scandal, it’s nevertheless concerning. These types of questionable decisions regarding data usage and sharing can lead to accidental breaches, which can be just as devastating as any malicious attack that breaches and exposes user data. If companies in the business of storing and protecting data — like Dropbox — don’t have clear policies and procedures for sharing and anonymizing data, then we’re in for plenty more unforced errors.

The post How Dropbox dropped the ball with anonymized data appeared first on Security Bytes.

Insurance Occurrence Assurance?

You may have seen my friend Brian Krebs’ post regarding the lawsuit filed last month in the Western District of Virginia after $2.4 million was stolen from The National Bank of Blacksburg in two separate breaches over an eight-month period. Though the breaches are concerning, the real story is that the financial institution is suing its insurance provider for refusing to fully cover the losses.

From the article:

In its lawsuit (PDF), National Bank says it had an insurance policy with Everest National Insurance Company for two types of coverage or “riders” to protect it against cybercrime losses. The first was a “computer and electronic crime” (C&E) rider that had a single loss limit liability of $8 million, with a $125,000 deductible.

The second was a “debit card rider” which provided coverage for losses which result directly from the use of lost, stolen or altered debit cards or counterfeit cards. That policy has a single loss limit of liability of $50,000, with a $25,000 deductible and an aggregate limit of $250,000.

According to the lawsuit, in June 2018 Everest determined both the 2016 and 2017 breaches were covered exclusively by the debit card rider, and not the $8 million C&E rider. The insurance company said the bank could not recover lost funds under the C&E rider because of two “exclusions” in that rider which spell out circumstances under which the insurer will not provide reimbursement.

Cyber security insurance is still in its infancy and issues with claims that could potentially span multiple policies and riders will continue to happen – think of the stories of health insurance claims being denied for pre-existing conditions and other loopholes. This, unfortunately, is the nature of insurance. Legal precedent, litigation, and insurance claim issues aside, your organization needs to understand that cyber security insurance is but one tool to reduce the financial impact on your organization when faced with a breach.

Cyber security insurance cannot and should not, however, be viewed as your primary means of defending against an attack.

The best way to maintain a defensible security posture is to have an information security program that is current, robust, and measurable. An effective information security program will provide far more protection for the operational state of your organization than cyber security insurance alone. To put it another way, insurance is a reactive measure whereas an effective security program is a proactive measure.

If you were in a fight, would you want to wait and see what happens after a punch is thrown to the bridge of your nose? Perhaps you would like to train to dodge or block that punch instead? Something to think about.

SN 673: The Data Transfer Project

This week we examine still another new Spectre processor speculation attack, some news on DRAM hammering attacks and mitigation, the consequences of freely available malware source code, the reemergence of concern over DNS rebinding attacks, Venmo's very public transaction log, more Russian shenanigans, the emergence of flash botnets, Apple's continuing move of Chinese data to China, another (the 5th) Cisco secret backdoor found, an optional missing Windows patch from last week, a bit of Firefox news and a piece of errata... and then we look at "The Data Transfer Project" which, I think, marks a major step of maturity for our industry.

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Bandwidth for Security Now is provided by CacheFly.


I read the news today, oh boy: social sharing and the dangers of false information

We’ve all done it: shared a post on social media in the belief that it’s spreading an important message or helping someone in need. But how many of us check to see whether it’s genuine? Earlier today I appeared on East Coast FM Radio in Ireland to talk about this problem.

The interview came after a message circulated widely on social media in Ireland, warning about a child abduction gang supposedly active in south Dublin. The message shows a photo of a clearly identifiable foreign man who is wrongly accused of being in the gang. The photo has already been shared more than 2,000 times on Facebook.

The Irish police, An Garda Siochana, felt the need to intervene. They urged people not to share the warnings on social media or on WhatsApp groups, while confirming they’re not investigating any such kidnapping group.

Bad news travels fast; false news travels even faster

Interestingly, researchers have tracked this very phenomenon and came to some interesting conclusions. A recent MIT study, “The spread of true and false news online”, examined 12 years’ worth of data from Twitter. The researchers found that fake reports promising some new, juicy detail spread far faster and wider, often reaching more than 10,000 Twitter users. Verifiably true news, on the other hand, rarely reached more than 1,000 users.

What’s more, bots aren’t the problem: people are. The research also found that bots spread news equally whether it was true or not, “implying that false news spreads more than the truth because humans, not robots, are more likely to spread it”. Science Magazine has a good writeup of the main findings.

Trust, but verify

In the radio interview, I said it was important for people not to accept all stories at face value, even when they come from a perceived trusted source. When it’s an update from a friend on social media, people are more inclined to spread it. Even if you’re sharing with good intentions, it’s still worth taking some simple steps before hitting the ‘forward’ button:

  • Apply critical thinking to the situation: don’t just believe what you read
  • Look at other trusted sources to verify the information in the message
  • Check with local police if you have a genuine concern relating to a possible crime.

A simple web search will usually be enough to debunk a story. It’s also worth bookmarking an independent fact-checking website like Snopes.com. The Garda website has a useful page that answers frequently asked questions about typical cyber scams that many people encounter online.

We’ve written about online frauds many times on the BH Consulting blog. The ‘gotcha’ behind CEO fraud or phishing is that criminals want to trick you through social engineering. There’s usually a financial motive. But it’s becoming clear just how insidious false news stories can be. They create fear and promote mistrust and xenophobia, since they often target ethnic groups or foreign nationals. They also undermine people’s trust in media sources.

The internet is a breeding ground for urban myths and untruths. Every time we unthinkingly share false news, we’re helping them to grow and spread. We might as well be feeding weeds with fertiliser – and we all know what that’s made from.

The post I read the news today, oh boy: social sharing and the dangers of false information appeared first on BH Consulting.

Upcoming cybersecurity events featuring BH Consulting

Here is a summary of upcoming cybersecurity events, conferences, webinars and training programmes where BH Consulting staff will deliver presentations about issues relating to cybersecurity, data protection, GDPR, and privacy. Each listing includes links for more information and registration.

BAM2018: Bristol, 4-6 September

Valerie Lyons will speak at the British Academy of Management annual conference at Bristol Business School, University of the West of England. The overall theme of the event is about driving productivity in uncertain and challenging times. Follow this link to find out more, and to register to attend.

CIO & IT Leaders Summit: Dublin, 12 September

Valerie Lyons will host a round-table debate on privacy at the upcoming CIO and IT Leaders Summit. She will chair a discussion on the challenges involved in balancing the tensions between data use and data privacy. The day-long conference is for senior IT leaders and decision makers, as well as technology service providers. It will take place at Croke Park on Wednesday September 12. For more details and to book tickets, visit here.

BSides Belfast 2018: 27 September

BH Consulting is sponsoring the Belfast edition of the popular BSides security conference, which is now in its third year. As ever, the event will feature a mix of discussions, demos and talks from local and international experts. The conference will take place at the Europa Hotel. Check the event website for updates on speakers and presentations, along with a review of previous years. There is a nominal fee of £10 per ticket and you can book via this link.

COSAC and the SABSA World Congress: Naas, 30 September-4 October

Valerie Lyons will speak about leading information security teams at the COSAC World Congress at Killashee Hotel in Naas, Co Kildare. Now in its 25th year, this information security symposium includes the SABSA World Congress. The event prohibits any sales content, and focuses purely on practicing professionals sharing their experience and debating issues. For more details, and registration options, visit here.

IP EXPO Europe: London, 3-4 October

Brian Honan will deliver a keynote at IP EXPO Europe, which takes place in London this October. The event comprises multiple tracks including one dedicated to cybersecurity. In his presentation, Brian will look at how car safety has evolved and how cybersecurity needs to do the same. For more details about the two-day conference, and to book tickets, go to the IP EXPO website.

BruCON: Brussels, 3-5 October

Once again, BH Consulting is delighted to sponsor BruCON. The organisers describe their event as being for security researchers, hackers, nerds and other beings with a creative and critical view of life. The conference runs for three days in the Belgian capital. More details and a link to buy tickets are available here.

Dublin Information Sec 2018: 15 October

Brian Honan is among a host of prominent security commentators lined up to present at this annual conference. The agenda promises to cover a range of topics, from regulations like GDPR to emerging security problems, and gauging your organisation’s security to looking at new technology with potential use for security. The all-day conference will take place on Monday 15 October at the RDS. Visit here for details and ticket booking.

IRISSCON: Dublin, 22 November

The tenth annual IRISSCERT Cyber Crime Conference has one of its strongest lineups yet. Confirmed speakers include Wendy Nather, Dave Lewis, Andrew Hay, Jack Daniel, Javvad Malik, Martijn Grooten, Quentyn Taylor, Robert McArdle and Eoin Keary. IRISSCON will take place at the Ballsbridge Hotel on Thursday 22 November, running all day. Staff from BH Consulting will be there on the day. Check the conference website for more updates. Tickets cost just €50 and you can book via this link.

The post Upcoming cybersecurity events featuring BH Consulting appeared first on BH Consulting.

Pen testing: why do you need it, and five steps to doing it right

Penetration testing can contribute a lot to an organisation’s security by helping to identify potential weaknesses. But for it to be truly valuable, it needs to happen in the context of the business.

I asked Brian Honan, CEO of BH Consulting, to explain the value of pen testing and when it’s needed. “A pen test is a technical assessment of the vulnerabilities of a server, but it needs the business to tell you which server is most important. Pen testing without context, without proper scoping and without regular re-testing has little value,” he said.

Steps to do pen testing right

Some organisations feel they need to conduct a pen test because they have to comply with regulations like PCI DSS, or to satisfy auditors, or because the board has asked for it. Those are often the worst reasons to start. To do it right, a business should:

  • Dedicate appropriate budget and time to the test
  • Carry out a proper scoping exercise first
  • Set proper engagement parameters
  • Run it regularly – preferably quarterly and more than just once a year
  • Use pen testing to check new systems before they go into production.

Absent those key elements, the test will not fail as such, but the exercise becomes box-ticking from the start. That’s why a one-off test will tell you little about how secure a system is. “A pen test is only a point-in-time assessment of a particular system, and there are ways to game the test. We have done pen tests where a client told us ‘these systems are out of scope’ – but they would be in scope for a criminal,” said Brian.

Prioritising business risks

The reason for running a pen test before systems go into production is that criminals may target them once they are live. It’s especially important if the new system will be critical to the business. “The value of doing a good pen test within the context of the business is that it will identify vulnerabilities and issues that the organisation can prioritise based on the business impact,” said Brian.

Pen testing, though valuable, is only one element of good security. “Unfortunately, many people think that if they run a pen test against their website, and it finds nothing, therefore their security is OK,” Brian said. “Just because you have car insurance doesn’t mean you won’t have an accident. There are many other factors that come into play: road conditions, other drivers on the road, confidence and experience of the driver.”

Brian warned against the risk of using pen testing as a replacement for a comprehensive security programme. If organisations have limited budget, spending it on a pen test arguably won’t make them any more secure. “Just doing it once a year to keep an auditor happy is not the best approach. It’s not a replacement for a good security programme,” he said.

The post Pen testing: why do you need it, and five steps to doing it right appeared first on BH Consulting.

Microsoft detected Russian phishing attacks on three 2018 campaigns

Russia is still launching cyberattacks against the US, a Microsoft exec has revealed, contradicting what the President claimed just a few days ago. According to Microsoft VP for customer security and trust Tom Burt, his team discovered a spear-phishing campaign targeting three candidates running for office in 2018. Burt announced his team's findings while speaking on a panel at the Aspen Security Forum, where he also revealed that they traced the new campaign to a group believed to be operated by the GRU, Russia's largest foreign intelligence agency. In other words, those three candidates are being targeted by the same organization that infiltrated the DNC and Hillary Clinton's presidential campaign in 2016.

Source: Buzzfeed News, Aspen Institute (YouTube)

SN 672: All Up in Their Business

This week we look at even MORE, new, Spectre-related attacks, highlights from last Tuesday's monthly patch event, advances in GPS spoofing technology, GitHub's welcome help with security dependencies, Chrome's new (or forthcoming) "Site Isolation" feature, when hackers DO look behind the routers they commandeer, the consequences of deliberate BGP routing misbehavior... and reading between the lines of last Friday's DOJ indictment of the US 2016 election hacking by 12 Russian operatives -- the US appears to really have been "all up in their business."

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Bandwidth for Security Now is provided by CacheFly.


Is banning USB drives the key to better security behaviour?

Convenience often beats security where users are concerned. Take USB keys, for example. They’re a very handy way to transfer files between computers, but they’re also a huge security risk. IBM recently took the drastic step of banning all removable portable storage devices (e.g. USB keys, SD cards, flash drives). Should others follow suit?

To explore this issue deeper, I spoke to Neha Thethi, senior cybersecurity analyst at BH Consulting. She said for an attacker who has physical access to the victim’s machine, USB sticks are an effective way to install malicious software on a device or a network. Human nature being what it is, unsuspecting users will often plug unknown drives into their computers. From there, attackers have multiple ways to compromise a victim’s machine.

In fact, a classic tactic for security experts to test an organisation’s security awareness levels is to drop infected USB drives in a public area as part of a ‘red team’ exercise. If a percentage of employees picks up a key and plugs it into their machine, it’s a useful indicator of gaps in that organisation’s security.

Alternatives for file sharing

In Neha’s experience, given the current file sharing technologies available, many employees don’t need to use USBs for general tasks anyway. “We have found that restricting USB keys can definitely work. Most users in an organisation don’t really need access to those ports,” she said. Even where colleagues might need to share documents, it’s easier and safer to use a cloud service approved by their organisation.

But before banning USBs (or other removable media) outright, Neha recommends taking these five steps:

  • Discover what data you have
  • Know where you are storing the data
  • Classify the data according to its importance
  • Carry out a risk assessment for the most important data
  • Protect the data based on the level of risk – including encryption if necessary.

A company can take some of the steps by itself, but it’s best to use the experience of a security specialist within the company or a third party to carry out the security risk assessment. “The assessment should be conducted with the help of an expert team based on the type of industry and service you provide. Otherwise, you end up with an inaccurate picture of the security risks the organisation faces,” she said.

Prepare for pushback

If a USB ban is identified as a risk treatment measure, be prepared for pushback from some employees. Some of that will stem from company culture. Is the organisation reliant on rules, or do staff expect a degree of freedom? “Not everyone will give a round of applause for more security, because it is a hindrance and an extra step,” Neha warned. “Expect and anticipate pushback and therefore put in place incentives for blocking USBs. If people aren’t happy and are not on board with the change, it leads to them bending the rules.”

In some cases, there may be genuine exceptions to a no-USB rule. IBM itself faced pushback and is reportedly considering a few exemptions. Neha also gave the example of a media company that uses high-quality digital photographs in its work. While it restricted USB ports for all employees, it made an exception for its media person, who needed to transfer high-quality images from the camera to a company device. Their specific role meant they got formal approval to keep their USB port enabled.

Banning USB sticks should be workable in many cases, because better, more convenient and secure alternatives exist in the form of cloud sharing platforms. But as with most security measures, it always helps to be prepared and to plan for multiple scenarios.

The post Is banning USB drives the key to better security behaviour? appeared first on BH Consulting.

Finding Connections in the Global Village

To commemorate F-Secure’s 30th year of innovation, we’re profiling 30 of our fellows from our more than 25 offices around the globe.

The global village can be a pretty great place to live. But it can still be a challenge working across distances or boundaries – whether cultural, logistical, linguistic, or anything else. It’s a challenge that F-Secure’s Jani Kallio has taken on since he joined the company through F-Secure’s acquisition of cyber security firm nSense in 2015.

“At nSense I started building our Security & Risk Management consulting practice as the Global Practice Leader. But during the 2 years since we joined F-Secure, I left that responsibility and my great team of 15 consultants, and transitioned to a business development role,” explains Jani. “Now, one of my key tasks is to prepare expansion opportunities for our consulting business through M&A.”

And Jani’s favorite part of this is finding common ground between different people – what he calls “a common language”. It’s something he’s been doing while based in London, since F-Secure acquired Digital Assurance Consulting – a small but reputable penetration testing company. Part of that process was his relocation to the city with his wife and daughter in summer 2017.

It’s a skill Jani uses to help F-Secure consulting expand further. “The best part of my job is when I’m able to identify similarities in cultures and untapped potential which we could address together for mutual benefit,” said Jani. “When trying to motivate entrepreneurs into selling their company, it’s all about people. I need to sell the idea of F-Secure as a new home. It’s actually kind of a matchmaking process, and it doesn’t work if you are not able to sell a joint vision.”

The cyber security business is still all about security, and you have to know what it means to customers, but Jani thinks it might surprise some people how differently that is seen across industries.

“In a single day, I can be in talks about the technical details of a security vulnerability with a consultant, discounted cash flow variables with a M&A advisor, business strategies with a senior entrepreneur, and coverage of cyber insurance with a broker or underwriter. It is a privilege, really, to be involved in so many different fields with different people.”

In terms of career advice, Jani has a few suggestions on how people could progress in the cyber security business.

“Choose a team who you can learn from, preferably strong in other areas than your own background. Choose a boss who believes in you. Share what you know with the people around you. And never stop learning, because this world will never slow down to wait for you to catch up.”

And if you’re looking for a career that will take you to different countries, remember to find things you have in common with the people around you, because there are bound to be differences. Some of them are really small, like one example Jani discovered while working in London.

“Londoners have lunch over their keyboards and eat a good 2 hours later than I’m used to. Most restaurants do not even serve proper lunch before 12:00 in Soho, which is one thing I’m still getting used to.”

After Jani’s interview, F-Secure announced the acquisition of MWR InfoSecurity, a privately held cyber security company with close to 400 employees and operating globally from its HQ in London and other offices in the UK, the US, South Africa and Singapore.

And check out our open positions if you want to join Jani and the hundreds of other great fellows fighting to keep internet users safe from online threats.

SN 671: STARTTLS Everywhere

This week we discuss another worrisome trend in malware, another fitness-tracking mapping incident and mistake, something to warn our friends and family to ignore, the value of periodically auditing previously-granted web app permissions, when malware gets picky about the machines it infects, another kinda-well-meaning Coinhive service getting abused, the implications of D-Link losing control of its code-signing cert, some good news about Android apps, iOS v11.4.1's new "USB Restricted Mode"... but is it?, a public service reminder about the need to wipe old thumb drives and memory cards, and what about those free USB fans that were handed out at the recent North Korea / US summit?... and then we take a look at email's STARTTLS system and the EFF's latest initiative to increase its usefulness and security.

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Bandwidth for Security Now is provided by CacheFly.


Apple’s new USB security feature has a major loophole

Apple's new USB Restricted Mode, which dropped with the iOS 11.4.1 release yesterday, may not be as secure as previously thought. The feature is designed to protect iPhones against USB devices used by law enforcement to crack your passcode, and works by disabling USB access after the phone has been locked for an hour. Computer security company ElcomSoft, however, has found a loophole.

Source: ElcomSoft

Timehop admits attacker stole 21 million users’ data

Timehop, a popular app that reminds you of your social media posts from the same day in past years, is the latest service to suffer a data breach. The attacker struck on July 4th, and grabbed a database which included names and/or usernames along with email addresses for around 21 million users. About 4.7 million of those accounts had phone numbers linked to them, which some people use to log in with instead of a Facebook account.

Via: The Register

Source: Timehop

Olympic hackers may be attacking chemical warfare prevention labs

The team behind the 2018 Winter Olympics hack is still active, according to security researchers -- in fact, it's switching to more serious targets. Kaspersky has discovered that the group, nicknamed Olympic Destroyer, has been launching email phishing attacks against biochemical warfare prevention labs in Europe and Ukraine as well as financial organizations in Russia. The methodology is extremely familiar, including the same rogue macros embedded in decoy documents as well as extensive efforts to avoid typical detection methods.

Via: Wired

Source: Securelist

Fraudster caught using OPM hack data from 2015

Way back in 2015, the US Office of Personnel Management (OPM) was electronically burgled, with hackers making off with 21.5 million records. That data included social security numbers, fingerprints, usernames, passwords and data from interviews conducted for background checks. Now, a woman from Maryland has admitted to using data from that breach to secure fraudulent loans through a credit union.

Via: Reuters

Source: Department of Justice

Cortana can be used to hack Windows 10 PCs

Cortana might be super helpful at keeping track of your shopping lists, but it turns out it's not so great at keeping your PC secure. Researchers from McAfee have discovered that by activating Cortana on a locked Windows 10 machine, you can trick it into opening up a contextual menu which can then be used for code execution. This could deploy malicious software, or even reset a Windows account password.

Via: The Verge

Source: McAfee

Meet Sunder, a New Way to Share Secrets

The moment a news organization is given access to highly sensitive materials—such as the Panama Papers, the NSA disclosures or the Drone Papers—the journalist and their source may be targeted by state and non-state actors, with the goal of preventing disclosures. How can whistleblowers and news organizations prepare for the worst?

The Freedom of the Press Foundation is requesting public comments and testing of a new open source tool that may help with this and similar use cases: Sunder, a desktop application for dividing access to secret information between multiple participants.

Sunder is not yet ready for high stakes use cases. It has not been audited and is alpha-quality software. We are looking for early community feedback, especially from media organizations, activists, and nonprofits.

While Sunder is a new tool that aims to make secret-sharing easy to use, the underlying cryptographic algorithm is far from novel: Shamir's Secret Sharing was developed in 1979 and has since found many applications in security tools. It divides a secret into parts, where some or all parts are needed to reconstruct the secret. This enables the conditional delegation of access to sensitive information. The secret could be social media account credentials, or the passphrase to an encrypted thumb drive, or the private key used to log into a server.
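To make the threshold idea concrete, here is a minimal sketch of Shamir's scheme over a prime field in Python. It is illustrative only – Sunder itself builds on the RustySecrets library – and the constants are chosen for readability, not production use.

```python
# Minimal sketch of Shamir's Secret Sharing over a prime field.
# Illustrative only -- Sunder itself uses the RustySecrets library.
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than our toy secret

def split_secret(secret, n_shares, threshold):
    """Split `secret` into n_shares points; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def evaluate(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, evaluate(x)) for x in range(1, n_shares + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x=0 yields the polynomial's constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split_secret(31337, n_shares=8, threshold=5)
assert recover_secret(shares[:5]) == 31337  # any 5 of the 8 shares suffice
assert recover_secret(shares[3:]) == 31337  # ...any 5, not just the first 5
```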

Sunder is currently available for Mac and Linux, and in source code form. See the documentation for installation and usage instructions. We also invite you to complete a short survey which will influence the future direction of this tool.

If you are interested in getting involved in development, we welcome your contributions! Please especially take a look at issues marked "easy" or "docs". Sunder is based on the open source RustySecrets library, which is also open to new contributors.

[Screenshot: Sunder allows you to divide a secret into shares, a certain number of which are required to reconstruct it.]


How could Sunder be useful for journalists, activists and whistleblowers?

Until a quorum of participants agrees to combine their shares (the number is configurable, e.g., 5 out of 8), the individual parts are not sufficient to gain access, even by brute force methods. This property makes it possible to use Sunder in cases where you want to disclose a secret only if certain conditions are met.

The most frequently cited example is disclosure upon an adverse event. Let's say an activist's work is threatened by powerful interests. She provides access to an encrypted hard drive that contains her research to multiple news organizations. Each receives a share of the passphrase, under the condition that they only combine the shares upon her arrest or death, and that they take precautions to protect the shares until then.

Secret sharing can also be used to protect the confidentiality of materials over a long-running project. An example would be a documentary film project accumulating terabytes of footage that have to be stored safely. By "sundering" the key to an encrypted drive containing archival footage, the filmmaking team could reduce the risk of accidental or deliberate disclosure.

But most importantly, we want to hear what you think. Please give Sunder a spin by downloading one of the releases and following the documentation, and please take our survey!


Disclaimer

As noted above, Sunder is still alpha quality software. It's very possible that this version has bugs and security issues, and we do not recommend it for high stakes use cases. Indeed, Sunder and the underlying library have not received a third party audit yet.

Furthermore, any secret sharing implementation is only as robust as the operational security around it. If you distribute or store shares in a manner that can be monitored by an adversary (e.g., online without the use of end-to-end encryption) this could compromise your security.


Inquiries

For inquiries, please contact us at sunder@freedom.press.



Credits

Sunder was primarily developed by Gabe Isman and Garrett Robinson. Conor Schaefer has acted as a maintainer and release manager; Lilia Kai recently also joined the project as a maintainer. RustySecrets was developed by the RustySecrets team. Conversations between Ed Snowden and Frederic Jacobs were the original impetus for the project.

Algorithmic discrimination: A coming storm for security?

“If you don’t understand algorithmic discrimination, then you don’t understand discrimination in the 21st century.”

Bruce Schneier’s words, which came at the end of his wide-ranging session at RSA Conference last week, continued to echo in my ears long after I returned from San Francisco. Schneier, the well-known security expert, author and CTO of IBM Resilient, was discussing how technologists can become more involved in government policy, and he advocated for joint computer science-law programs in higher education.

“I think that’s very important. Right now, if you have a computer science-law degree, then you become a patent attorney,” he said. “Yes, it makes you a lot of money, but it would be great if you could work for the ACLU, the Southern Poverty Law Center and the NAACP.”

Those organizations, he argued, need technologists that understand algorithmic discrimination. And given some recent events, it’s hard to argue with Schneier’s point. But with all of the talk at RSA Conference this year about the value of machine learning and artificial intelligence, just as in previous years, I wondered if the security industry truly does understand the dangers of bias and discrimination, and what kind of problems will come to the surface if it doesn’t.

Inside the confines of the Moscone Center, algorithms were viewed with almost complete optimism and positivity. Algorithms, we’re told, will help save time and money for enterprises that can’t find enough skilled infosec professionals to fill their ranks.

But when you step outside the infosec sphere, it’s a different story. We’re told how algorithms, in fact, won’t save us from vicious conspiracy theories and misinformation, or hate speech and online harassment, or any number of other negative factors afflicting our digital lives.

If there are any reservations about machine learning and AI, they are generally limited to a few areas such as improper training of AI models or how those models are being used by threat actors to aid cyberattacks. But there’s another issue to consider: how algorithmic discrimination and bias could negatively impact these models.

This isn’t to say that algorithmic discrimination will necessarily afflict cybersecurity technology in a way that reveals racial or gender bias. But for an industry that so often misses the mark on the most dangerous vulnerabilities and persistent yet preventable threats, it’s hard to believe infosec’s own inherent biases won’t somehow be reflected in the machine learning and AI-based products that are now dominating the space.

Will these products discriminate against certain risks over more pressing ones? Will algorithms be designed to prioritize certain types of data and threat intelligence at the expense of others, leading to data discrimination? It’s also not hard to imagine racial and ethnic bias creeping into security products with algorithms that demonstrate a predisposition toward certain languages and regions (Russian and Eastern Europe, for example). How long will it take for threat actors to pick up on those biases and exploit them?

It’s important to note that in many cases outside the infosec industry, the algorithmic havoc is wreaked not by masterful black hats and evil geniuses but by your average internet trolls and miscreants. They simply spent enough time studying how, for example, YouTube functions on a day-to-day basis, then flooded the system with content until they figured out how to weaponize search engine optimization.

If Google can’t construct algorithms to root out YouTube trolls and prevent harassers from abusing the site’s search and referral features, then why do we in the infosec industry believe that algorithms will be able to detect and resolve even the low-hanging fruit that afflicts so many organizations?

The question isn’t whether the algorithms will be flawed. These machine learning and AI systems are built by humans, and flaws come with the territory. The question is whether they will be – unintentionally or purposefully – biased, and if those biases will be fixed or reinforced as the systems learn and grow.

The world is full of examples of algorithms gone wrong or nefarious actors gaming systems to their advantage. It would be foolish to think infosec will somehow be exempt.

The post Algorithmic discrimination: A coming storm for security? appeared first on Security Bytes.

5 Common Sense Security and Privacy IoT Regulations

F-Secure invites our fellows to share their expertise and insights. For more posts by Fennel, click here

For most of human history, the balance of power in commercial transactions has been heavily weighted in favour of the seller. As the Romans would say, caveat emptor – buyer beware!

However, there is just as long a history of people using their collective power to protect consumers from unscrupulous sellers, whose profits are too often based on externalising costs that are then borne by society. Probably the earliest known consumer safety law is found in Hammurabi’s Code, nearly 4,000 years ago – and it is quite a harsh example:

If a builder builds a house for someone, and does not construct it properly, and the house which he built falls in and kills its owner, then that builder shall be put to death.

However, consumer safety laws as we know them today are a relatively new invention. The Consumer Product Safety Act became law in the USA in 1972. The Consumer Protection Act became law in the UK in 1987.

Today’s laws provide for stiff penalties – for example, the UK’s CPA makes product safety issues into criminal offences punishable by up to 6 months in prison and unlimited fines. These laws also mandate enforcement agencies to set standards, buy and test products, and to sue sellers and manufacturers.

So if you sell a household device that causes physical harm to someone, you run some serious risks to your business and to your personal freedom. The same is not true if you sell a household device that causes very real financial, psychological, and physical harm to someone by putting their digital security at risk. The same is not true if you sell a household device that causes very real psychological harm, civil rights harm, and sometimes physical harm to someone by putting their privacy rights at risk. In those cases, your worst case risk is currently a slap on the wrist.

This situation may well change at the end of May 2018, when the EU General Data Protection Regulation (GDPR) goes into force across the EU, and for all companies with any presence or doing business in the EU. The GDPR provides two very welcome threats that can be wielded against would-be negligent vendors: the possibility of real fines – up to 2% of worldwide turnover; and a presumption of guilt if there is a breach – it will be up to the vendor to show that they were not negligent.

However, the GDPR does not specifically regulate digital consumer goods – in other words, Internet of Things (IoT) “smart” devices. Your average IoT device is a disaster in terms of both security and privacy – as Mikko Hypponen‘s eponymous Law states: “smart device” = “vulnerable device”, or if you prefer the Fennel Corollary: “smart device” = “vulnerable surveillance device”.

The current IoT market is like the household goods market before consumer safety laws were introduced. This is why I am very happy to see initiatives like the UK government’s proposed Secure by Design: Improving the cyber security of consumer Internet of Things Report. While the report has many issues, there is clearly a need for the addition of serious consumer protection laws in the security and privacy area.

So if the UK proposal does not go far enough, what would I propose as common sense IoT security and privacy regulation? Here are 5 things I think are mandatory for any serious regulation in this area:

  1. Consumer safety laws largely work due to the severe penalties in place for any company (and their directors) who provide consumers with goods that place their safety in danger, as well as the funding and willingness of a governmental consumer protection agency to sue companies on consumers’ behalf. The same rigorous, severe, and funded structure is required for IoT goods that place consumers’ digital and physical security in danger.
  2. The danger to consumers from IoT goods is not only in terms of security, but also in terms of privacy. I believe similar requirements must be put in place for Privacy by Design, including severe penalties for any collecting, storing, and selling (whether directly, or indirectly via profiling for targeting of advertising) of consumers’ personal data if it is not directly required for the correct functioning of the device and service as seen by the consumer.
  3. Similarly, the requirements should include a strict prohibition on any backdoor, including government or law enforcement related, to access user data, usage information, or any form of control over the devices. Additionally, the requirements should include a strict prohibition on vendors providing any such information or control via “gentleman’s agreements” with a governmental or law enforcement agency/representative.
  4. In terms of the requirements for security and privacy, I believe that any requirements specifically written into law will always be outdated and incomplete. Therefore I would mandate independent standards agencies in a similar way to other internet governing standards bodies. A good example is the management of TLS certificate security rules by the CA/Browser Forum.
  5. Requirements must also deal with cases of IoT vendors going out of business or discontinuing devices and/or software updates. There must be a minimum software update duration, and in the case of discontinuation of support, vendors should be required to provide the latest firmware and update tool as Open Source to allow support to be continued by the user or a third party.

Just as there will always be ways for a determined person to hack around any physical or software security controls, people will find ways around any regulations. However, it is still better to attempt to protect vulnerable consumers than to pretend the problem doesn’t exist; or even worse, to blame the users who have no real choice and no possibility to have any kind of informed consent for the very real security and privacy risks they face.

Let’s start somewhere!

Finding subdomains for open source intelligence and pentest


Many of us work in security consulting, bug bounties, or network intelligence, and now and then come across the need to find subdomains. The requirement can come from either side of the table - a consultant assessing a client's internet presence, or a company validating its own digital footprint. In more than a decade, I have seen it happen so many times: people are not aware of what old assets they are running, and those assets can be exploited to damage the brand image or the actual networks. These assets can also be used as proxies or hops to gain access to otherwise well-guarded data.

The most common way to search for subdomains (that I have used so far) is old-school Google search with dorks: site:example.com. To dig deeper, iterate by excluding the subdomains already visible in the results, i.e. site:example.com -www or site:example.com -www -test, which filters out www.example.com, then test.example.com, and so on. Later I found some more tools, like Pentest-Tools' subdomain finder, DNSdumpster, Cloudpiercer, Netcraft etc. All these tools are either expensive or don't do the job well. Meh!

Finally, during a conversation with the SPYSE team (the astounding squad behind the CertDB project), I got to know about their new project - FindSubDomains, a free and fantastic tool to find subdomains for a given domain. Last time I covered their CertDB project in detail, and now, after being impressed by FindSubDomains, it was time to review this excellent tool and share it with you! It not only lists subdomains but also a whole lot of intelligence behind them, such as:

  1. IP Addresses
  2. DNS records
  3. Countries
  4. Subnets
  5. AS Blocks
  6. Organization names etc.

Any of these parameters can be used to filter or search the list of subdomains - I mean, it's terrific!

But how does it stack up against the commonly known tools? Let's find out. For the sake of testing, let's take the domain apple.com and try finding its subdomains with different tools. Let's start with old-school Google search,


After only 4-5 searches/iterations, it became a tedious process. And when you try to automate it, Google simply pops up a reCAPTCHA challenge. In general, it's worthwhile for searching a few targeted domains, but useless for querying subdomains at scale. Not recommended for such tasks!

How about using the Pentest-Tools service? First things first: it is not free and requires you to buy credits. I performed a free search, and the results were not convincing,


After the search, it could only find 87 subdomains of apple.com, and the details included only the subdomain and its respective IP address. Netcraft and DNSdumpster had similarly disappointing results - the former found 180 records with no way to download or filter them, and the latter was capped at 150 results with a lousy UI/UX. To summarise, none of the tools could deliver a straightforward and intelligent list of subdomains.

FindSubDomains: Is it any different; any better?

To give you a straight answer - hell yes! Kudos to the SPYSE team; it is way better than the tools I was using before.
The same apple.com subdomain search performed via FindSubDomains returned 1,900+ results. It is remarkable!

I mean, where others failed miserably to provide even 200 results, FindSubDomains just nailed it with 1,900+. Bravo!


All of these 1,900+ results are at your disposal without paying a single cent - no pop-up advertisements, credits or caps. You can not only list the results in the UI but also download them as a TXT file. Also, you can view the IP address, geographical region, IP segment and respective AS block details for each subdomain. That is some remarkable open source intelligence, delivered in a second without scripts or endless iterations!

To me, the SPYSE team 100% justify their project objective,

FindSubDomains is designed to automate subdomains discovering. This task often comes before system security specialists who study the company in the form of a black box in order to search for vulnerabilities, as well as for marketers and entrepreneurs whose goal is to competitively analyze other players on the market in the hope of quickly learning about the emergence of a new vector in the development of a competitor, as well as obtaining information about the internal infrastructure.

FindSubDomains: Search and Filter

On top of the search, their filters are amazing if you have to find specific information on a domain, a subdomain or its respective fields, as discussed. They also have some pre-filtered results and trivia points,

  1. Top 100 sites and their subdomains: https://findsubdomains.com/world-top
  2. Sites with the highest number of subdomains: https://findsubdomains.com/top-subdomain-count
  3. Top countries with the highest number of subdomains: https://findsubdomains.com/countries
    = UNITED STATES: https://findsubdomains.com/world-top?country=US
    = INDIA: https://findsubdomains.com/world-top?country=IN
    = CHINA: https://findsubdomains.com/world-top?country=CN
  4. Top names for subdomains (my favourite) or most common subdomains: https://findsubdomains.com/top-subdomain-name-count

The last one is very convenient when surveying a client's network, or shocking a client with the extent of their digital footprint.

FindSubDomains: Dashboard and Custom Tasks

Now, when I signed in (sign-up is easy) I was welcomed by a dashboard which shows Total, Ongoing and Remaining tasks. I can start a new task using either a domain or a word to search; the word search is great if I don't know the complete domain name. This task-execution capability supplements anything you don't find on the main page or in the existing database (which, believe me, is huge). Each task can list up to 50,000 subdomains and takes around 6 minutes (you can set up an alert, and the platform will notify you via email on completion).

To execute the task of finding subdomains, it uses various techniques,

  1. Many subdomains can be identified by crawling the site and analyzing its pages, as well as resource files;
  2. AXFR (DNS zone transfer) requests for some domains often reveal a lot of valuable information about them (see the sketch after this list);
  3. Searching and analyzing historical data often turns up matches for the search term.
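The AXFR check (technique 2 above) is easy to try for yourself. Here is a minimal sketch using the dnspython library (the resolve() call assumes dnspython 2.x); it is not FindSubDomains' own code, and example.com is a placeholder - only test domains you are authorised to assess.

```python
# Minimal AXFR (DNS zone transfer) check with dnspython (pip install dnspython).
# Properly configured name servers will refuse the transfer.
import dns.query
import dns.resolver
import dns.zone

domain = "example.com"  # placeholder -- test only domains you are authorised to assess

for ns in dns.resolver.resolve(domain, "NS"):
    ns_host = str(ns.target).rstrip(".")
    ns_ip = str(dns.resolver.resolve(ns_host, "A")[0])
    try:
        zone = dns.zone.from_xfr(dns.query.xfr(ns_ip, domain, timeout=5))
        for name in zone.nodes:  # names are relative; "@" is the zone apex
            print(f"{name}.{domain}")
    except Exception as exc:
        print(f"{ns_host}: transfer refused ({exc.__class__.__name__})")
```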

While the tool is impressive - and I can't say that enough - I would have appreciated the ability to execute tasks via an API, and some programmable way to automate it from the command line/terminal. I know I may find ways to do it with curl, but an API key would have made things more comfortable and convenient.

FindSubDomains: Usage Scenarios

Here are some scenarios where I can use this tool,

  1. During the pentest reconnaissance phase, collecting information on the target network.
  2. As a supporting tool to gather network intelligence on firms and their respective domains.
  3. Assessing your own company's network and digital footprint. Many times you will be surprised to find wide, unaccounted-for exposure.
  4. Keeping track of external-facing subdomains - UAT, SIT, STAGING etc. - which ideally should be either locked down or white-listed. Imagine how insecure these platforms are, when they often even contain production data.

To summarize, this is yet another amazing tool after CertDB that shows the potential of the SPYSE team. FindSubDomains has made my searches much easier and more efficient, and I highly recommend that readers use it for finding subdomains.

Cover Image Credit: Photo by Himesh Kumar Behera

Cloudflare Quad 1 DNS is privacy-centric and blazing fast


This year I have witnessed plenty of DNS stories, ranging from government censorship programs to privacy-centric secure DNS (DNS over TLS) designed to protect customers' queries from profiling or profiteering businesses. Some DNS services attempt to block malicious sites (IBM Quad9 DNS and SafeDNS) while others try to give unrestricted access to the world (Google DNS and Cisco OpenDNS) at low or no cost.

Yesterday, I read that Cloudflare has joined the race with its DNS service (Quad1, or 1.1.1.1), and before I dig further (#punintended) let me tell you - it's blazing fast! Initially I thought it was a classic April Fools' prank, but then Quad1 - 1.1.1.1, or 4/1 - made sense. This is not a prank, and it works just as proposed. Now, this blog post shall summarize some speed tests, and highlight why it's best to use the Cloudflare Quad1 DNS.

Quad1 DNS 1.1.1.1 Speed Test

To test the query times (in milliseconds, or ms), I shall resolve 3 sites - cybersins.com, my girlfriend's website palunite.de and my friend's blog namastehappiness.com - against four existing DNS services - Google DNS (8.8.8.8), OpenDNS (208.67.222.222), SafeDNS (195.46.39.39) and IBM Quad9 DNS (9.9.9.9) - plus Cloudflare Quad1 (1.1.1.1).

Website                 Google DNS   OpenDNS   IBM Quad9   SafeDNS   Cloudflare
cybersins.com                  158       187          43       238            6
palunite.de                    365       476         233       338            3
namastehappiness.com           207       231         178       336            3


These numbers looked so unrealistic that I had to execute the tests again to verify - and they are indeed true.
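If you want to reproduce the measurements, here is a rough sketch using the dnspython library (the resolve() call assumes dnspython 2.x); your numbers will vary with location, caching and network load.

```python
# Rough per-resolver DNS timing check with dnspython (pip install dnspython).
import time
import dns.resolver

RESOLVERS = {
    "Google DNS": "8.8.8.8",
    "OpenDNS": "208.67.222.222",
    "IBM Quad9": "9.9.9.9",
    "SafeDNS": "195.46.39.39",
    "Cloudflare": "1.1.1.1",
}

def query_ms(nameserver, domain):
    resolver = dns.resolver.Resolver(configure=False)  # ignore /etc/resolv.conf
    resolver.nameservers = [nameserver]
    start = time.monotonic()
    resolver.resolve(domain, "A")
    return (time.monotonic() - start) * 1000

for name, ip in RESOLVERS.items():
    print(f"{name:<12} {query_ms(ip, 'cybersins.com'):8.1f} ms")
```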

Privacy and Security with Quad1 DNS 1.1.1.1

This is the key element that has not been addressed for quite a while. The existing DNS services are slow, but they also store logs and can profile a user based on the domains they query. They run on UDP port 53 and are vulnerable to MITM (man-in-the-middle) attacks. Also, your ISP has visibility into this clear-text traffic and can censor or monetize you, if required. In a blog post last weekend, Matthew Prince, co-founder and CEO of Cloudflare, mentioned:

The web should have been encrypted from the beginning. It's a bug it wasn't. We're doing what we can to fix it ... DNS itself is a 35-year-old protocol and it's showing its age. It was never designed with privacy or security in mind.

The Cloudflare Quad1 DNS overcomes this by supporting both DNS over TLS and DNS over HTTPS, which means you can set up your internal DNS server and then route queries to Cloudflare over TLS or HTTPS. To address the story behind the Quad1, or 1.1.1.1, choice, Matthew Prince explained:

But DNS resolvers inherently can't use a catchy domain because they are what have to be queried in order to figure out the IP address of a domain. It's a chicken and egg problem. And, if we wanted the service to be of help in times of crisis like the attempted Turkish coup, we needed something easy enough to remember and spraypaint on walls.

Kudos to Cloudflare for launching this service and committing to the privacy and security of end users by keeping only short-lived logs. Cloudflare confirmed that they see no need to write customers' IP addresses to disk, or to retain logs for more than 24 hours.
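If you want to try the DNS-over-HTTPS side yourself, Cloudflare also exposes a JSON endpoint at cloudflare-dns.com. Here is a quick sketch using Python's requests library (the endpoint and response format are as Cloudflare documented them at launch):

```python
# Query Cloudflare's DNS-over-HTTPS JSON endpoint (pip install requests).
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "cybersins.com", "type": "A"},
    headers={"Accept": "application/dns-json"},  # ask for the JSON format
    timeout=5,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["TTL"], answer["data"])
```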

Cheers and be safe.

What Were the CryptoWars ?

F-Secure invites our fellows to share their expertise and insights. For more posts by Fennel, click here

In a previous article, I mentioned the cryptowars against the US government in the 1990s. Some people let me know that it needed more explanation. Ask and thou shalt receive! Here is a brief history of the 1990s cryptowars and cryptography in general.

Crypto in this case refers to cryptography (not crypto-currencies like Bitcoin). Cryptography is a collection of clever ways for you to protect information from prying eyes. It works by transforming the information into unreadable gobbledegook (this process is called encryption). If the cryptography is successful, only you and the people you want can transform the gobbledegook back to plain English (this process is called decryption).

People have been using cryptography for at least 2500 years. While we normally think of generals and diplomats using cryptography to keep battle and state plans secret, it was in fact used by ordinary people from the start. Mesopotamian merchants used crypto to protect their top secret sauces, lovers in ancient India used crypto to protect their messages, and mystics in ancient Egypt used crypto to keep more personal secrets.

However, until the 1970s, cryptography was not very sophisticated. Even the technically and logistically impressive Enigma machines, used by the Nazis in their repugnant quest for Slavic slaves and Jewish genocide, were just an extreme version of one of the simplest possible encryptions: a substitution cipher. In most cases simple cryptography worked fine, because most messages were time sensitive. Even if you managed to intercept a message, it took time to work out exactly how the message was encrypted and to do the work needed to break that cryptography. By the time you finished, it was too late to use the information.
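To see just how simple a substitution cipher is, here is a toy Python sketch (illustrative only, nothing like Enigma's rotor machinery). Because each plaintext letter always maps to the same ciphertext letter, letter frequencies leak straight through, which is exactly why frequency analysis breaks it given enough intercepted text.

```python
# Toy monoalphabetic substitution cipher -- its weakness is that each
# plaintext letter always maps to the same ciphertext letter, so letter
# frequencies survive encryption and frequency analysis breaks it.
import random
import string

alphabet = string.ascii_lowercase
key = "".join(random.sample(alphabet, len(alphabet)))  # the secret mapping

encrypt = str.maketrans(alphabet, key)
decrypt = str.maketrans(key, alphabet)

ciphertext = "attack at dawn".translate(encrypt)
print(ciphertext, "->", ciphertext.translate(decrypt))
```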

World War II changed the face of cryptography for multiple reasons – the first was the widespread use of radio, which meant mass interception of messages became almost guaranteed instead of a matter of chance and good police work. The second reason was computers. Initially computers meant women sitting in rows doing mind-numbing mathematical calculations. Then later came the start of computers as we know them today, which together made decryption orders of magnitude faster. The third reason was concentrated power and money being applied to surveillance across the major powers (Britain, France, Germany, Russia) leading to the professionalization and huge expansion of all the relatively new spy agencies that we know and fear today.

The result of this huge influx of money and people to the state surveillance systems in the world’s richest countries (i.e. especially the dying British Empire, and then later America’s growing unofficial empire) was a new world where those governments expected to be able to intercept and read everything. For the first time in history, the biggest governments had the technology and the resources to listen to more or less any conversation and break almost any code.

In the 1970s, a new technology came on the scene to challenge this historical anomaly: public key cryptography, invented in secret by British spies at GCHQ and later in public by a growing body of work from American university researchers Merkle, Diffie, Hellman, Rivest, Shamir, and Adleman. All cryptography before this invention relied on algorithm secrecy in some aspect – in other words, the cryptography worked by having a magical secret method known only to you and your friends. If the baddies managed to capture, guess, or work out your method, decrypting your messages would become much easier.

This is what is known as “security by obscurity” and it was a serious problem from the 1940s on. To solve this, surveillance agencies worldwide printed thousands and thousands of sheets of paper with random numbers (one-time pads) to be shipped via diplomatic courier to embassies and spies around the world. Public key cryptography changed this: the invention meant that you could share a public key with the whole world, and share the exact details of how the encryption works, but still protect your secrets. Suddenly, you only had to guard your secret key, without ever needing to share it. Suddenly it didn’t matter if someone stole your Enigma machine to see exactly how it works and to copy it. None of that would help your adversary.
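Here is that core idea in miniature, sketched with Python's cryptography package (chosen purely for illustration; nothing here is specific to the original 1970s schemes): the public key can be published to the whole world, and only the guarded private key can decrypt.

```python
# Public-key crypto in miniature (pip install cryptography): the public key
# can be shared with the whole world; only the private key decrypts.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # safe to publish anywhere

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
ciphertext = public_key.encrypt(b"meet at the usual place", oaep)
print(private_key.decrypt(ciphertext, oaep))
```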

And because this was all normal mathematical research, it appeared in technical journals, could be printed out and go around the world to be used by anyone. Thus the US and UK governments’ surveillance monopoly was in unexpected danger. So what did they do? They tried to hide the research, and they treated these mathematics research papers as “munitions”. It became illegal to export these “weapons of war” outside the USA without a specific export license from the American government, just like for tanks or military aircraft.

This absurd situation persisted into the early 1990s when two new Internet-age inventions made their continued monopoly on strong cryptography untenable. Almost simultaneously, Zimmermann created a program (PGP) to make public key cryptography easy for normal people to use to protect their email and files, and Netscape created the first SSL protocols for protecting your connection to websites. In both cases, the US government tried to continue to censor and stop these efforts. Zimmermann was under constant legal threat, and Netscape was forced to make an “export-grade” SSL with dramatically weakened security. It was still illegal to download, use, or even see, these programs outside the USA.

But by then the tide had turned. People started setting up mirror websites for the software outside the USA. People started putting copies of the algorithm on their websites as a protest. Or wearing t-shirts with the working code (5 lines of Perl is all that’s needed). Or printing the algorithm on posters to put up around their universities and towns. In the great tradition of civil disobedience against injustice, geeks around the world were daring the governments to stop them, to arrest them. Both the EFF (Electronic Frontier Foundation) and the EPIC (Electronic Privacy Information Center) organizations were created as part of this fight for our basic (digital) civil rights.

In the end, the US government backed down. By the end of the 1990s, the absurd munitions laws still existed but were relaxed sufficiently to allow ordinary people to have basic cryptographic protection online. Now they could be protected when shopping at Amazon without worrying that their credit card and other information would be stolen in transit. Now they could be protected by putting their emails in an opaque envelope instead of sending all their private messages via postcard for anyone to read.

However, that wasn’t the end of the story. As in so many cases, “justice too long delayed is justice denied”. The internet has only become systematically protected by encryption in the last two years, thanks to the amazing work of Let's Encrypt. However, we spent almost 20 years sending most of our browsing and search requests via postcard, and that “export-grade” SSL the American government forced on Netscape in the 1990s is directly responsible for the existence of the DROWN attack, which puts many systems at risk even today.

Meanwhile, thanks to the legal threats, email encryption never took off. We had to wait until the last few years for the idea of protecting everybody’s communications with cryptography to become mainstream with instant messaging applications like Signal. Even with this, the US and UK governments continue to lead the fight to stop or break this basic protection for ordinary citizens, despite the exasperated mockery from everyone who understands how cryptography works.

Measure Security Performance, Not Policy Compliance

I started my security (post-sysadmin) career heavily focused on security policy frameworks. It took me down many roads, but everything always came back to a few simple notions, such as that policies were a means of articulating security direction, that you had to prescriptively articulate desired behaviors, and that the more detail you could put into the guidance (such as in standards, baselines, and guidelines), the better off the organization would be. Except, of course, that in the real world nobody ever took time to read the more detailed documents, Ops and Dev teams really didn't like being told how to do their jobs, and, at the end of the day, I was frequently reminded that publishing a policy document didn't translate to implementation.

Subsequently, I've spent the past 10+ years thinking about better ways to tackle policies, eventually reaching the point where I believe "less is more" and that anything written and published in a place and format that isn't "work as usual" will rarely, if ever, get implemented without a lot of downward force applied. I've seen both good and bad policy frameworks within organizations. Often they cycle around between good and bad. Someone will build a nice policy framework, it'll get implemented in a number of key places, and then it will languish from neglect and inadequate upkeep until it's irrelevant and ignored. This is not a recipe for lasting success.

Thinking about it further this week, it occurred to me that part of the problem is thinking in the old "compliance" mindset. Policies are really to blame for driving us down the checkbox-compliance path. Sure, we can easily stand back and try to dictate rules, but without the adequate authority to enforce them, and without the resources needed to continually update them, they're doomed to obsolescence. Instead, we need to move to that "security as code" mentality and find ways to directly codify requirements in ways that are naturally adapted and maintained.

End Dusty Tomes and (most) Out-of-Band Guidance

The first daunting challenge of security policy framework reform is to throw away the old, broken approach with as much gusto and finality as possible. Yes, there will always be a need for certain formally documented policies, but overall an organization Does. Not. Need. large amounts of dusty tomes providing out-of-band guidance to a non-existent audience.

Now, note a couple things here. First, there is a time and a place for providing out-of-band guidance, such as via direct training programs. However, it should be the minority of guidance, and wherever possible you should seek to codify security requirements directly into systems, applications, and environments. For a significant subset of security practices, it turns out we do not need to repeatedly consider whether or not something should be done, but can instead make the decision once and then roll it out everywhere as necessary and appropriate.

Second, we have to realize and accept that traditional policy (and related) documents only serve a formal purpose, not a practical or pragmatic purpose. Essentially, the reason you put something into writing is because a) you're required to do so (such as by regulations), or b) you're driven to do so due to ongoing infractions or the inability to directly codify requirements (for example, requirements on human behavior). What this leaves you with are requirements that can be directly implemented and that are thus easily measurable.
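Codifying a requirement can be as mundane as turning a sentence of policy into an automated check. Here is a minimal, hypothetical sketch in Python - the services.json inventory and its field names are placeholders, not a real schema - in which "all services must enforce TLS 1.2+" lives as a test that runs on every build rather than in a dusty tome:

    import json

    MIN_TLS = (1, 2)

    def test_all_services_enforce_min_tls():
        # Stand-in for however your environment inventories service configs.
        with open("services.json") as f:
            services = json.load(f)
        offenders = [s["name"] for s in services
                     if tuple(int(x) for x in s["min_tls_version"].split(".")) < MIN_TLS]
        # A failing test blocks the build -- the requirement enforces itself.
        assert not offenders, "services below TLS 1.2: %s" % offenders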

KPIs as Policies (et al.)

If the old ways aren't working, then it's time to take a step back and think about why that might be and what might be better going forward. I'm convinced the answer to this query lies in stretching the "security as code" notion a step further by focusing on security performance metrics for everything and everyone instead of security policies. Specifically, if you think of policies as requirements, then you should be able to recast those as metrics and key performance indicators (KPIs) that are easily measured, and in turn are easily integrated into dashboards. Moreover, going down this path takes us into a much healthier sense of quantitative reasoning, which can pay dividends for improved information risk awareness, measurement, and management.

Applied, this approach scales very nicely across the organization. Businesses already operate on a KPI model, and converting security requirements (née policies) into specific measurables at various levels of the organization means ditching the ineffective, out-of-band approach previously favored for directly specifying, measuring, and achieving desired performance objectives. Simply put, we no longer have to go out of our way to argue for people to conform to policies, but instead simply start measuring their performance and incentivize them to improve to meet performance objectives. It's then a short step to integrating security KPIs into all roles, even going so far as to establish departmental, if not whole-business, security performance objectives that are then factored into overall performance evaluations.

Examples of security policies-become-KPIs might include metrics around vulnerability and patch management, code defect reduction and remediation, and possibly even phishing-related metrics rolled up to the department or enterprise level. When creating security KPIs, think about the policy requirements as written and take time to truly understand the objectives they're trying to achieve. Convert those objectives into measurable items, and there you are on the path to KPIs as policies. For more thoughts on security metrics, I recommend checking out the CIS Benchmarks as a starting point.
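As a concrete (and deliberately simplified) sketch, here is the patch-management example recast as KPIs in Python; the findings records are stand-ins for whatever your scanner or ticketing system actually exports:

    from datetime import date
    from statistics import mean

    SLA_DAYS = 15  # e.g., "high-severity findings remediated within 15 days"

    findings = [
        {"severity": "high", "opened": date(2018, 9, 1), "closed": date(2018, 9, 10)},
        {"severity": "high", "opened": date(2018, 9, 5), "closed": date(2018, 9, 28)},
        {"severity": "high", "opened": date(2018, 9, 20), "closed": None},  # still open
    ]

    closed = [f for f in findings if f["closed"]]
    days_to_fix = [(f["closed"] - f["opened"]).days for f in closed]

    kpis = {
        "mean_days_to_remediate": mean(days_to_fix),
        "pct_closed_within_sla": 100 * sum(d <= SLA_DAYS for d in days_to_fix) / len(days_to_fix),
        "open_past_sla": sum((date.today() - f["opened"]).days > SLA_DAYS
                             for f in findings if not f["closed"]),
    }
    print(kpis)  # numbers like these feed straight into a dashboard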

Better Reporting and the Path to Accountability

Converting policies into KPIs means that nearly everything is natively built for reporting, which in turn enables executives to have better insight into the security and information risk of the organization. Moreover, shifting the focus to specific measurables means that we get away from the out-of-band dusty tomes, instead moving toward achieving actual results. We can now look at how different teams, projects, applications, platforms, etc., are performing and make better-informed decisions about where to focus investments for improvements.

This notion also potentially sparks an interesting future for current GRC-ish products. If policies go away (mostly), then we don't really need repositories for them. Instead, GRC products can shift to being true performance monitoring dashboards, allowing those products to broaden their scope while continuing to adapt other capabilities, such as those related to the so-called "SOAR" market (Security Orchestration, Automation, and Response). If GRC products are to survive, I suspect it will be by either heading further down the information risk management path, pulling in security KPIs in lieu of traditional policies and compliance, or it will drive more toward SOAR+dashboards with a more tactical performance focus (or some combination of the two). Suffice to say, I think GRC as it was once known and defined is in its final days of usefulness.

There's one other potentially interesting tie-in here, and that's to overall data analytics, which I've noticed slowly creeping into organizations. A lot of the focus has been on using data lakes, mining, and analytics in lieu of traditional SIEM and log management, but I think there's a potentially interesting confluence with security KPIs, too. In fact, thinking about pulling in SOAR capabilities and other monitoring and assessment capabilities and data, it's not unreasonable to think that KPIs become the tweakable dials CISOs (and up) use to balance risk vs. reward in providing strategic guidance for addressing information risk within the enterprise. At any rate, this is all very speculative and unclear right now, but something to nonetheless watch. But I have digressed...

---
The bottom line here is this: traditional policy frameworks have generally outlived their usefulness. We cannot afford to continue writing and publishing security requirements in formats that aren't easily accessible as part of "work as usual". In an Agile/DevOps world, "security as code" is imperative, and that includes converting security requirements into KPIs.

Security is not a buzz-word business model, but our cumulative effort


This article conveys my personal opinion towards security and its underlying revenue model; I recommend reading it with a pinch of salt (+ tequila, while we are at it). I shall cover both sides of the coin: heads, where pentesters try to give you a heads-up on underlying issues, and tails, where businesses still think they can address security at the tail end of their development.

A recent conversation with a friend who's in information security prompted me to address the elephant in the room. He works at a security services firm that provides intelligence feeds and alerts to clients. He shared a case where his firm didn't share the right feed at the right time, even though the client was "vulnerable", because the client's subscription tier was different. I understand business is essential, but isn't security a collective argument? When this client gets attacked tomorrow, are you just going to turn a blind eye because it didn't pay you well? I understand that remediation always costs money (or more effort), but withholding an alert about an attack you witnessed in the wild, based on how much money a client is paying you, is hard to defend.

I don't dream of a utopian world where security is obvious, but we can surely walk in that direction.

What is security to a business?

Is it a domain, a pillar, or, with all the buzz these days, insurance? Information security and privacy, while being the talk of the town, still come in only where the business requirements end. I understand there is a paradigm shift to the left, a movement towards the inception of your "bright idea", but we are still far from an ideal world, the utopia so to speak. I have experience from either side of the table - the one where we put ourselves in the shoes of hackers, and the contrary, where we hold hands with the developers to understand their pain points and work together to build a secure ecosystem. I would say it's been very few times that a business pays attention to "security" from day zero (yeah, this tells you the kind of clients I deal with, and why we are in business). Often business owners say: develop this application based on these requirements, discuss the revenue model and maintenance costs, and yeah! Check whether we need these security add-ons, or whether we adhere to compliance checks, as no one wants auditors knocking at the door for all the wrong reasons.

This troubles me. Why don't we treat information security as a pillar as important as the revenue model itself?


How is security as a business?

I have many issues with how "security" is tossed around as a buzz-word to earn dollars while very few respect the gravity or the very objective of its existence. Whether it's information, financial, or life security - they all have very real and quantifiable effects on someone's well-being. Every month I see tens (if not hundreds) of reports and advisories where the quality is embarrassingly bad. When you dig for the reasons, it's either that the "good" firms are costly, or someone has a comfort zone with existing firms, or, worst of all, the business neither cares nor pressures firms for better quality. In the end, it's just a plain and straightforward business transaction, or a compliance check to make the auditor happy.

Have you ever asked yourself these questions?

  1. You did a pentest, justifying the money paid with your quality; tomorrow that hospital gets hacked, or patients die. Would you say you didn't put your best consultants on it because they were too expensive for the cause? That you didn't walk the extra mile because the budgeted hours ran out?
  2. Now, to you, Mr Business CEO - you want to cut costs on security because you would prefer a more prominent advertisement or a better car in your garage, while security expenditure seems dubious to you. Next time, check how much companies have lost after getting breached. Just because it's not an urgent problem doesn't mean it can't become one; and if it does, chances are it's too late. These issues are like symptoms: if you can see them, you are already in trouble! Security doesn't always have an immediate ROI, I understand, but don't make it the epitome of "out of sight, out of mind". That's a significant risk you are taking with your revenue, employees, and customers.

Now, having touched both sides of the problem in this short article, I hope you got the message (fingers crossed). Please do take security seriously, and not only as a business transaction! Every time you do something that involves security, on either side, think: would you invest your next big cryptocurrency stake in an exchange/market that gets hacked because of its lack of due diligence? Would you want your medical records made public because someone didn't perform a good pentest? Or to lose your savings because your bank didn't do a thorough "security" check of its infrastructure? If you think you are untouchable because of your home router's security, you, my friend, are living in an illusion. And my final rant goes to the firms with good consultants where the reporting, or the seriousness in delivering the message to the business, is so fcuking messed up that all their efforts go in vain. Take your deliverable seriously; it's the only window the business has to peep into the issues (existing or foreseen) and plan remediation in time.

That's all, my friends. Stay safe and be responsible; security is a cumulative effort, and everyone has to be vigilant because you never know where the next cyber-attack will be.

How to filter and query SSL/TLS certs for intelligence


Recently I noticed a new service/project that is turning a few heads among my peers in the security community - CertDB. It is one of its kind: it indexes domains' SSL certificates along with their details, IP records, geo-location, timelines, common names, etc. They describe themselves as an internet-wide search engine for digital certificates. They have a unique business statement once you understand the different components (search vectors) they incorporate in this project. I know there are a few transparent cert registries like Certificate Search, but as per their website,

Examining the data hidden in digital certificates provides a lot of insight about business activity in a particular geography or even collaboration between 2 different companies.

I know and agree that these insights come in handy while performing reconnaissance during a security assessment, or while validating the SSL/TLS certificates for your client. They can reveal, for example, that a certificate is about to expire, or that new domains have been registered under the same certificate (e.g., as Subject Alternative Name DNS entries). But when I browsed their project website, I was surprised by the way they articulated their USP (unique selling point),

For example, the registration of a new unknown domain in Palo Alto hints at a new start-up; switching from the "Wildcard" certificate to "Let's Encrypt" tells us about the organization's budget constraints; issuing a certificate in an organization with domains of another organization speaks about collaboration between companies, or even at an acquisition of one company by another.

Now, I am intrigued to do a detailed article on their services, business model, filters and even an interview with their project team.
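In the meantime, if you want to experiment with this kind of certificate intelligence yourself, here is a minimal sketch against crt.sh (the Certificate Search registry mentioned above, not CertDB itself); the JSON endpoint and field names reflect the service as I've seen it and may change:

    import json
    from urllib.request import urlopen

    domain = "example.com"
    # %25 is the URL-encoded '%' wildcard: match the domain and its subdomains.
    url = "https://crt.sh/?q=%25." + domain + "&output=json"

    with urlopen(url) as resp:
        certs = json.load(resp)

    # Spot soon-to-expire certificates and newly added SAN entries at a glance.
    for cert in certs[:10]:
        print(cert["not_before"], cert["not_after"], cert["name_value"])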

Question: Are you curious/interested, and what would you like to ask them? Do leave a comment.


Restrict Certificate Authorities (CA) to issue SSL certs. Enable CAA record in DNS


It's been a long time since I last audited someone's DNS zone file, but recently, while checking a client's DNS configuration, I was surprised to find the CAA records set randomly, "so to speak". I discussed it with the administrator and was surprised again: he had no clue what CAA is, how it works, or why it is so important to configure it correctly. That made me wonder how many of us actually know about it, and how it can be a savior if someone attempts to get an SSL certificate issued for your domain.

What is CAA?

CAA, or Certification Authority Authorization, is a record that identifies which CAs (certificate authorities) are allowed to issue certificates for the domain in question. It is declared via the CAA record type in DNS, is publicly viewable, and can be verified by a certificate authority before it issues a certificate.

Brief Background

While the first draft was documented by Phillip Hallam-Baker and Rob Stradling back in 2010, the work accelerated in the last five years due to issues with CAs and the hacks around them. The first notable CA subversion was in 2001, when VeriSign issued two certificates to an individual claiming to represent Microsoft; these were named "Microsoft Corporation". These certificates could have been used to spoof identity and provide malicious updates. Later, in 2011, fraudulent certificates were issued by Comodo[1] and DigiNotar[2] after they were attacked by Iranian hackers (more on the Comodo and Dutch DigiNotar attacks in the references), with evidence of their use in a MITM attack in Iran.

Then, in 2012, Trustwave issued[3] a sub-root certificate that was used to sniff SSL traffic in the name of transparent traffic management. So it's high time CAs are restricted, or whitelisted, at the domain level.

What if no CAA record is configured in DNS?

Simply put, the CAA record should be configured to announce which CAs (certificate authorities) are permitted to issue a certificate for your domain. If no CAA record is provided, any CA can issue a certificate for your domain.

CAA is a good practice to restrict your CA presence, and their power to legally issue certificates for your domain. It's like whitelisting them in your domain!

The process now mandates[4] that a certificate authority (yes, it's mandatory now!) query DNS for your CAA record; a certificate can only be issued for your hostname if either no record is available or the CA in question has been "whitelisted". The CAA record sets the rules for the parent domain, and the same rules are inherited by sub-domains (unless otherwise stated in the DNS records).

Certificates authorities interpret the lack of a CAA record to authorize unrestricted issuance, and the presence of a single blank issue tag to disallow all issuance.[5]

CAA record syntax/ format

The CAA record has the format <flag> <tag> <value>, where the fields have the following meaning:

  • flag: an integer flag with values 0-255, as defined in RFC 6844[6]. It is currently used to signal the critical flag.[7]
  • tag: an ASCII string (issue, issuewild, or iodef) which identifies the property represented by the record policy.
  • value: the value of the property defined in the <tag>.

The tags defined in the RFC have the following meanings with respect to CA records (a query sketch follows the list):

  • issue: explicitly authorizes a single certificate authority to issue any type of certificate for the domain in scope.
  • issuewild: explicitly authorizes a single certificate authority to issue only wildcard certificates for the domain in scope.
  • iodef: specifies where certificate authorities should report violations, i.e., certificate requests or issuances that breach the CAA policy defined in the DNS records (the value can be a mailto:, http://, or https:// URL).
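To see what a given domain currently publishes, here is a minimal query sketch assuming the dnspython (2.x) package; note that it checks only the exact name, while real CA processing also climbs to parent domains:

    import dns.resolver  # pip install dnspython

    def print_caa(domain):
        try:
            answers = dns.resolver.resolve(domain, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            print(domain, "has no CAA record -> any CA may issue")
            return
        for rdata in answers:
            # flags/tag/value map onto the <flag> <tag> <value> format above.
            print(domain, rdata.flags, rdata.tag.decode(), '"%s"' % rdata.value.decode())

    print_caa("cybersins.com")  # e.g.: 0 issue "letsencrypt.org", plus Cloudflare's CAs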
DNS Software Support

As per an excerpt from Wikipedia[8]: CAA records are supported by BIND (since version 9.10.1B), Knot DNS (since version 2.2.0), ldns (since version 1.6.17), NSD (as of version 4.0.1), OpenDNSSEC, PowerDNS (since version 4.0.0), Simple DNS Plus (since version 6.0), tinydns, and Windows Server 2016.
Many hosted DNS providers also support CAA records, including Amazon Route 53, Cloudflare, DNS Made Easy, and Google Cloud DNS.

Example: (my own website DNS)

As per my policy, I have configured ONLY "letsencrypt.org", but due to Cloudflare Universal SSL support, the following certificate authorities get configured as well:

  • 0 issue "comodoca.com"
  • 0 issue "digicert.com"
  • 0 issue "globalsign.com"
  • 0 issuewild "comodoca.com"
  • 0 issuewild "digicert.com"
  • 0 issuewild "globalsign.com"


Also, I configured iodef for violation reports: 0 iodef "mailto:hello@cybersins.com"

How's the WWW doing with CAA?

After this auditing exercise I was curious to know how the top 10,000 Alexa websites are doing with CAA, and strangely enough I was surprised by the results: only 4% of the top 10K websites have a CAA DNS record.

[Pie chart: CAA DNS record adoption across the Alexa top 10,000 websites]

[Update 27-Feb-18]: This pie chart was updated with correct numbers. Thanks to Ich Bin Niche Sie for identifying the calculation error.

We still have a long way to go with new security flags and policies like the CAA DNS record, the security.txt file, etc., and I shall keep covering these topics to evangelize security by all possible means without disrupting business. Remember to always work hand in hand with the business.

Stay safe, and tuned in.


  1. Comodo CA attack by Iranian hackers: https://arstechnica.com/information-technology/2011/03/independent-iranian-hacker-claims-responsibility-for-comodo-hack/ ↩︎

  2. Dutch DigiNotar attack by Iranian hackers: https://arstechnica.com/information-technology/2011/08/earlier-this-year-an-iranian/ ↩︎

  3. Trustwave Subroot Certificate: http://www.h-online.com/security/news/item/Trustwave-issued-a-man-in-the-middle-certificate-1429982.html ↩︎

  4. CAA Checking Mandatory (Ballot 187 results) 2017: https://cabforum.org/pipermail/public/2017-March/009988.html ↩︎

  5. Wikipedia Article: https://en.wikipedia.org/wiki/DNS_Certification_Authority_Authorization ↩︎

  6. IETF RFC 6844 on CAA record: https://tools.ietf.org/html/rfc6844 ↩︎

  7. The confusion of critical flag: https://tools.ietf.org/html/rfc6844#section-7.3 ↩︎

  8. Wikipedia Support Section: https://en.wikipedia.org/wiki/DNS_Certification_Authority_Authorization#Support ↩︎

New World, New Rules: Securing the Future State

I published an article today on the Oracle Cloud Security blog that takes a look at how approaches to information security must adapt to address the needs of the future state (of IT). For some organizations, it's really the current state. But, I like the term future state because it's inclusive of more than just cloud or hybrid cloud. It's the universe of Information Technology the way it will be in 5-10 years. It includes the changes in user behavior, infrastructure, IT buying, regulations, business evolution, consumerization, and many other factors that are all evolving simultaneously.

As we move toward that new world, our approach to security must adapt. Humans chasing down anomalies by searching through logs is an approach that will not scale and will not suffice. I included a reference in the article to a book called Afterlife. In it, the protagonist, FBI Agent Will Brody says "If you never change tactics, you lose the moment the enemy changes theirs." It's a fitting quote. Not only must we adapt to survive, we need to deploy IT on a platform that's designed for constant change, for massive scale, for deep analytics, and for autonomous security. New World, New Rules.

Here are a few excerpts:
Our environment is transforming rapidly. The assets we're protecting today look very different than they did just a few years ago. In addition to owned data centers, our workloads are being spread across multiple cloud platforms and services. Users are more mobile than ever. And we don’t have control over the networks, devices, or applications where our data is being accessed. It’s a vastly distributed environment where there’s no single, connected, and controlled network. Line-of-Business managers purchase compute power and SaaS applications with minimal initial investment and no oversight. And end-users access company data via consumer-oriented services from their personal devices. It's grown increasingly difficult to tell where company data resides, who is using it, and ultimately where new risks are emerging. This transformation is on-going and the threats we’re facing are morphing and evolving to take advantage of the inherent lack of visibility.
Here's the good news: The technologies that have exacerbated the problem can also be used to address it. On-premises SIEM solutions based on appliance technology may not have the reach required to address today's IT landscape. But, an integrated SIEM+UEBA designed from the ground up to run as a cloud service and to address the massively distributed hybrid cloud environment can leverage technologies like machine learning and threat intelligence to provide the visibility and intelligence that is so urgently needed.
Machine Learning (ML) mitigates the complexity of understanding what's actually happening and of sifting through massive amounts of activity that may otherwise appear to humans as normal. Modern attacks leverage distributed compute power and ML-based intelligence. So, countering those attacks requires a security solution with equal amounts of intelligence and compute power. As Larry Ellison recently said, "It can't be our people versus their computers. We're going to lose that war. It's got to be our computers versus their computers."
Click to read the full article: New World, New Rules: Securing the Future State.

DevSecOps is coming! Don’t be afraid of change.


There has been a lot of buzz about the relationship between security and DevOps, as if we are debating their happy companionship. To me they are soulmates, and DevSecOps - applied wisely - is a workable, scalable, and quantifiable fact, not a magic button.

What is DevOps?

The development cycle has undergone considerable change in the last few years. Customers and clients have evolving requirements, and the market demands speed and quality. The relationship between developers and operations has grown much closer to address this change. IT infrastructure has evolved in parallel to cater to quick timelines and release cycles. The old legacy infrastructure with multiple tollgates is drifting away, and fast, responsive APIs are taking its place to spawn and scale vast instances of software and hardware.

Developers, who were slowly getting closer to the operations team, have now decided to wear both hats and skip a 'redundant' hop. This integration has helped organisations achieve quick releases with better application stability and response times. Now the demands of the customer or end-user can be addressed and delivered directly by the DevOps team. Sometimes people confuse Agile and DevOps, which is natural with the ever-changing landscape.

Simply put, Agile is a methodology and is about processes (scrums, sprints, etc.), while DevOps is about technical integration (CI/CD, tooling, and IT automation).

While Agile talks about the SDLC, DevOps also integrates operations and brings fluidity to Agile. It focuses on being closer to the customer, not just committing working software. DevOps has many tools in its arsenal that support release, monitoring, management, virtualisation, automation, and orchestration, making the different parts of delivery fast and efficient. It's the need of the hour, given the constant changes in requirements and the ecosystem: teams have to evolve and release ongoing updates to keep up with the pace of customer and market demands. It's not a mono-directional water flow; instead, it's like an omnidirectional tube of water flowing in a gravity-free ecosystem.

What is DevSecOps?

The primary objective of DevSecOps is to integrate security into the early stages of development on the process side, and to make sure everyone on the team is responsible for security. It evangelises security as a strong glue holding the bond between development and operations, as a single task force. In DevSecOps, security ought to be part of the automation, via tools, controls, and processes.

The traditional SDLC (software development life cycle) often perceives security as a tollgate at the end, validating the effort against visible threats. In DevSecOps, security is everywhere, at all stages and phases of development and operations. It is embedded right into a life cycle that has continuous integration between the drawing pad, security tools, and the release cycle.
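To make "embedded in the life cycle" concrete, here is a minimal, hypothetical pipeline gate in Python. It shells out to two real Python-ecosystem tools - bandit for static analysis and pip-audit for known-vulnerable dependencies - and fails the build on findings; the tool choice is illustrative, not prescriptive:

    import subprocess
    import sys

    CHECKS = [
        ["bandit", "-r", "src/"],  # static analysis of our own code
        ["pip-audit"],             # known CVEs in installed dependencies
    ]

    failed = False
    for cmd in CHECKS:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            failed = True

    sys.exit(1 if failed else 0)  # a non-zero exit marks the CI stage red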

As Gartner documents, DevSecOps can be depicted graphically as the rapid and agile iteration from development into operations, with continuous monitoring and analytics at the core.

[Figure: DevSecOps depicted as rapid, agile iteration from development into operations, with continuous monitoring and analytics at the core. Image via Redmine]

Another key driving factor for DevSecOps is the fact that perimeter security is failing to adjust to increasing integration points and the blurring of trust boundaries. It is getting ever fuzzier where the perimeter even lies in this cyber ecosystem. It is evident that software has to be inherently secure in itself, without relying on border security controls. Rapid development and releases also shorten the supply-chain timeline available to implement custom controls like filters, policies, and firewalls.

I have tried to make the terms well understood in this series; there are many challenges faced by organizations, and possible solutions to them, which I shall cover in the next article.
Stay tuned.

An Interview by Timecamp on Data Protection


A few months back I was featured in an interview on data protection tips with Timecamp. It was only a handful of questions, but they are well articulated for any organisation that is proactive and wants to address security in the corporation, along with its employees' and customers' responsibilities.

--

How do you evaluate people's awareness regarding the need to protect their private data?

This is an exciting question, as we have often faced challenges during data protection training in evaluating with certainty that a person understood the importance of data security and is not just cramming for the test.

Enterprise Security is as closely related to the systems as with the people interacting with them.

One way to perform evaluations is to include surprise checks and discussions within the teams. A team of security-aware individuals is trained and then asked to carry out such inspections. For example, if a laptop is found logged in and unattended for long, the team confiscates it and submits it to a C-level executive (e.g. CIO or COO). As a consultant, I have also worked on an innovative solution that uses such awareness questions as a "second level" check while logging into intranet applications. And we are all aware of phishing campaigns that management can run against all employees to measure their receptiveness to such emails. But these must be followed up with training on how an individual can detect such an attack, and what they can do to avoid falling prey to such scammers in the future. We must understand that while data protection is vital, awareness training and assessment should not cause speed bumps in the daily schedule.

These awareness checks must be performed regularly without adding much stress for the employee. The more effort they demand, the more the employee will try to bypass or avoid them. Security teams must work with employees and support their understanding of data protection. Data protection must function as the inception of understanding security, not as a forced argument.

Do you think that an average user pays enough attention to the issue of data protection?

Data protection is an issue which can only be dealt with through cumulative effort, and though each one of us cares about privacy, few do so collectively within an enterprise. It is critical to understand that security is a culture, not a product. It needs an ongoing commitment to provide a resilient ecosystem for the business. Social engineering is on the rise, with phishing attacks, USB drops, and fraudulent calls and messages. An employee must understand that their casual approach towards data protection can bring the whole business to ground zero. And the core business must be cautious when doing data identification and classification. The business must discern the scope of its applications and specify the direct and indirect risks if the data gets breached. A data breach is not only an immediate loss of information but a ripple effect leading to disclosure of the enterprise's inner sanctum.

Now, how close are we to achieving this? Unfortunately, we are far from the point where the "average user" accepts data protection as a cornerstone of success in a world where information is the asset. Businesses consider security a tollgate which everyone wants to bypass, because they like neither riding with it nor being assessed by it. Reliable data protection can be achieved when it's not a one-time effort but the base on which we build our technology.

Until we can use the words "security" and "obvious" in the same line, positively, it will always be a challenge which the "average user" would rather deceive than achieve.

Why is the introduction of procedures for the protection of federal information systems and organisations so important?

Policies and procedures are essential for the protection of federal or local information, as they harmonise security with usability. We should understand that security is a long road, and when we attempt to protect data, it often has quirks which confuse or discourage an enterprise from evolving. I have witnessed many Fortune 500 firms safeguarding their assets and getting absorbed into the effort like it's a black hole: they invest millions of dollars and still don't reach par with the scope and requirements. Therefore, it becomes essential to understand the needs of the business, the data it handles, and which procedures apply in its range. Specifically, procedures help keep teams aligned on how to implement a technology or a product for the enterprise. Team experts or SMEs usually have telescopic vision in their own domain, but a blind eye for the broader defence in depth. Their skills tunnel their view, but a procedure helps them stay in sync with the current security posture and the projected roadmap. A procedure also reduces the probability of error while aligning with a holistic approach towards security. It dictates what to do and how to do it, thereby leaving a minimal margin of misunderstanding in implementing sophisticated security measures.

Are there any automated methods to test the data susceptibility to cyber-attacks, for instance, by the use of frameworks like Metasploit? How reliable are they in comparison to manual audits?

Yes, there are automated methods to perform audits, and to some extent they are well devised to detect the low-hanging fruit. In simpler terms, an automated assessment has three key phases: information gathering, tool execution to identify issues, and report review. Security-aware companies, and those that fall under strict regulations, often integrate such tools into their development and staging environments. This CI (continuous integration) keeps the code clean and checks for vulnerabilities and bugs on a regular basis. It also helps smooth out errors that may have crept in through reuse of existing code or outdated functions. On the other side, there are tools which validate the sanity of the production environment and also perform regular checks on the infrastructure and data flows.

Are these automated tools enough? No. They are not "smart" enough to replace manual audits.

They can validate configurations and issues in the software, but they can't evolve with the threat landscape. Manual inspections, on the other hand, provide peripheral vision while verifying the ecosystem's resilience. It is essential to have manual audits, and to use the feedback to assess, and even further tune, the tools. If you are working in a regulated and well-observed domain like finance, health, or data collection, the compliance officer will always rely on manual audits for final assurance. The tools are still there to support, but remember: they are only as good as they are programmed and configured to be.

How to present procedures preventing attacks in one's company, e.g., to external customers who demand an adequate level of data protection?

This is a paramount concern, and thanks for asking it. External clients need to "trust you" before they can share data or plug you into their organisation. The best approach that has worked for me is assurance through what you have, and how well you are prepared for the worst. The cyber world is very fragile; earlier we used to say "if things go bad...", but now we say "when things go bad...".

This means we have accepted the fact that an attack is imminent if we are dealing with data and information. Someone is watching for the chance to strike at the right time, especially if you are a successful firm. Now, assurance can be achieved by demonstrating the policies you have in place for information security and enterprise risk management. These policies must be supplemented with standards which identify the requirements, and with procedures as the how-to documents for implementation. In most cases, if you have to assure the client of your defence in depth, the security policy, the architecture, and a previous third-party assessment/audit suffice. In rare cases, a client may ask to perform its own assessment of your infrastructure, which is at your discretion. I would recommend making sure that your policy covers not only security but also incident response, to reflect your preparedness for a breach or attack.

On the other hand, if your end customers want assurance, you can reflect that by being proactive on your product, blog, media, etc. about how dedicated you are to securing their data. For example, the kind of authentication you support reflects your commitment to protecting the vault. Whether it's mandatory or not depends on usability and UI, but supporting it at all shows your commitment to addressing security-aware customers and understanding the need of the hour.

--
Published at https://www.timecamp.com/blog/index.php/2017/11/data-protection-tips/ with special thanks to Ola Rybacka for this opportunity.

Don’t be a security snob. Support your business team!


Many a time, access controls have been discussed in meetings related to web development. In an interconnected world of APIs, it is very important to understand the authentication of these endpoints. One of the best approaches, which I always vouch for, is mutual authentication with SSL certificates (or 2-way SSL). Most of the time it is viable, but it fails when either party can't support it (hence not mutual). So, what do you do when the business can't implement your "security requirement"?
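Before answering, it helps to be concrete about what we're asking for. A minimal server-side sketch of 2-way SSL using Python's standard ssl module (the file names are placeholders):

    import socket
    import ssl

    # The server presents its own certificate AND demands a valid client
    # certificate signed by a CA we trust -- that's the "mutual" part.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")
    context.verify_mode = ssl.CERT_REQUIRED  # reject clients without a cert
    context.load_verify_locations(cafile="trusted_clients_ca.pem")

    with socket.create_server(("0.0.0.0", 8443)) as sock:
        with context.wrap_socket(sock, server_side=True) as ssock:
            conn, addr = ssock.accept()  # handshake fails without a client cert
            print("mutually authenticated connection from", addr)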

The role of security is not to hinder the business, but to support it. It has to act as a pillar, and not a tollgate. We all know, that's audit!

Are you a security snob?
The rules and regulations made by us, auditors, and regulators are there to make sure the architecture, implementation, and roll-out are secure and the information is tightly controlled. They are in no way meant to add to the miseries of developers at the final stage before go-live. The security requirements must be clear right from the design phase. There must be a security architect appointed to work in accordance with industry standards and the security nitty-gritty. Sometimes the security team learns that a few important requirements have not been considered and the project is now at its final stage. What should security do - bring the business to a grinding halt? Send the developers back to the drawing board? No and no! Don't be a snob!

Look forward and figure out workarounds: strong mitigation steps that find a way to lower the risk. As long as you can lower the risk to a minimum by using a WAF, access controls, whitelisting, etc., the business can plan to "fix" it in the next release. Make sure the business understands the risk - brand or financial - and then, if the risk is too high, involve the C-suite executives; but support the business instead of bashing them with "you didn't do this, or that". That is counter-productive and doesn't help either party.

In most cases the "business" accounts for the IT security paychecks, and it's your (the security team's) job to make security look not like an overhead, but like an investment!
IT security is NOT generating money. So don't point fingers, but hold hands!

Now, in the case of mutual authentication - what if 2-way SSL is not available? Is IP whitelisting combined with API credentials a possible option? Yes, if the IP is not shared by the whole network and the traffic travels over a secure channel. It's a strong measure that restricts the participating parties to talking 1:1 on an encrypted channel. But then I have been asked: what if there is IP spoofing? Come on, guys! IP spoofing doesn't work the way you think. TCP requires a handshake; how do you expect the handshake to succeed when the spoofed IP never ACKs the SYN-ACK? Remember, the "actual IP" is not expecting the SYN-ACK, and the return traffic will not go to the "malicious IP". So IP spoofing over the internet is out of the picture.

As a security specialist, try to understand that there are various ways to strengthen security without being a pain in the ass. There are ways to implement compensatory controls: making sure the traffic is encrypted, access controls are tightly restricted, and the risk is lowered significantly. If you can do this, you can definitely help the business go live and give them time to manage security expectations more constructively.

Cheers, and be safe.

Design For Behavior, Not Awareness

October was National Cybersecurity Awareness Month. Since today is the last day, I figured now is as good a time as any to take a contrarian perspective on what undoubtedly many organizations just did over the past few weeks; namely, wasted a lot of time, money, and good will.

Most security awareness programs and practices are horrible BS. This extends out to include many practices heavily promoted by the likes of SANS, as well as the current state of "best" (aka, failing miserably) practices. We shouldn't, however, be surprised that it's all a bunch of nonsense. After all, awareness budgets are tiny, the people running these programs tend to be poorly trained and uneducated, and in general there's a ton of misunderstanding about the point of these programs (besides checking boxes).

To me, there are three kinds of security awareness and education objectives:
1) Communicating new practices
2) Addressing bad practices
3) Modifying behavior

The first two areas really have little to do with behavior change so much as they're about communication. The only place where behavior design comes into play is when the secure choice isn't the easy choice, and thus you have to build a different engagement model. Only the third objective is primarily focused on true behavior change.

Awareness as Communication

The vast majority of so-called "security awareness" practices are merely focused on communication. They tell people "do this" or "do that" or, when done particularly poorly, "you're doing X wrong idiots!" The problem is that, while communication is important and necessary, rarely are these projects approached from a behavior design perspective, which means nobody is thinking about effectiveness, let alone how to measure for effectiveness.

Take, for example, communicating updated policies. Maybe your organization has decided to revise its password policy yet again (woe be to you!). You can undertake a communication campaign to let people know that this new policy is going into effect on a given date, and maybe even explain why the policy is changing. But that's about it. You're telling people something theoretically relevant to their jobs, but not much more. This task could be done just as easily by your HR or internal communication team as anyone else. What value is being added?

Moreover, the best part of this is that you're not trying to change a behavior, because your "awareness" practice doesn't have any bearing on it; technical controls do! The password policy is implemented in IAM configurations and enforced through technical controls. There's no need for cognition by personnel beyond "oh, yeah, I now have to construct my password according to new rules." It's not like you're generally giving people the chance to opt out of the new policy, and there's no real decision for them to make. As such, the entire point of your "awareness" is communicating information, but without any requirement for people to make better choices.

Awareness as Behavior Design

The real role of a security awareness and education program should be on designing for behavior change, then measuring the effectiveness of those behavior change initiatives. The most rudimentary example of this is the anti-phishing program. Unfortunately, anti-phishing programs also tend to be horrible examples because they're implemented completely wrong (e.g., failure to benchmark, failure to actually design for behavior change, failure to get desired positive results). Yes, behavior change is what we want, but we need to be judicious about what behaviors we're targeting and how we're to get there.

I've had a strong interest in security awareness throughout my career, including having built and delivered awareness training and education programs in numerous prior roles. However, it's only been the last few years that I've started to find, understand, and appreciate the underlying science and psychology that needs to be brought to bear on the topic. Most recently, I completed BJ Fogg's Boot Camp on behavior design, and that's the lens through which I now view most of these flaccid, ineffective, and frankly incompetent "awareness" programs. It's also what's led me to redefine "security awareness" as "behavioral infosec" in order to highlight the importance of applying better thinking and practices to the space.

Leveraging Fogg's models and methods, we learn that Behavior happens when three things come together: Motivation, Ability, and a Trigger (aka a prompt or cue). When designing for behavior change, we must then look at these three attributes together and figure out how to specifically address Motivation and Ability when applying/instigating a trigger. For example, if we need people to start following a better, preferred process that will help reduce risk to the organization, we must find a way to make it easy to do (Ability) or find ways to make them want to follow the new process (Motivation). Thus, when we tell them "follow this new process" (aka Trigger), they'll make the desired choice.

In this regard, technical and administrative controls should be buttressed by behavior design whenever a choice must be made. However, sadly, this isn't generally how security awareness programs view the space, and thus they just focus on communication (a type of Trigger) without much regard for also addressing Motivation or Ability. In fact, many security programs experience frustration and failure because what they're asking people to do is hard, which means the average person is not able to do what's asked. Put a different way, the secure choice must be the easy choice, otherwise it's unlikely to be followed. Similarly, research has shown time and time again that telling people why a new practice is desirable will greatly increase their willingness to change (aka Motivation). Seat belt awareness programs are a great example of bringing together Motivation (particularly focused on negative outcomes from failure to comply, such as the reality of death or serious injury, as well as fines and penalties), Ability (it's easy to do), and Triggers to achieve a desired behavioral outcome.

Overall, it's imperative that we start applying behavior design thinking and principles to our security programs. Every time you ask someone to do something different, you must think about it in terms of Motivation, Ability, and Trigger, and then evaluate and measure effectiveness. If something isn't working, rather than devolving into a blame game, look at these three attributes and determine if perhaps a different approach is needed. And, btw, this may not necessarily mean making the secure choice easier so much as making the insecure choice more difficult (for example, someone recently noted on Twitter that they simply added a wait() to their code to force deprecation over time).
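As a toy illustration of that last nudge (all names hypothetical), the deprecated path can be made progressively more painful while the secure path stays fast:

    import time
    import warnings
    from datetime import date

    _DEPRECATED_SINCE = date(2018, 11, 1)

    def legacy_transfer(data):
        """Old, insecure code path: it still works, but gets slower every week."""
        warnings.warn("legacy_transfer() is deprecated; use secure_transfer()",
                      DeprecationWarning)
        weeks_old = (date.today() - _DEPRECATED_SINCE).days // 7
        time.sleep(min(weeks_old * 0.1, 5.0))  # friction grows weekly, capped at 5s
        ...  # original behavior continues here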

Change Behavior, Change Org Culture

Another interesting aspect of this discussion on behavior design is this: organizational culture is the aggregate of behaviors and values. That is to say, when we can change behaviors, we are in fact changing org culture, too. The reverse, then, is also true. If we find bad aspects of org culture leading to insecure practices, we can factor those back into the respective behaviors, and then start designing for behavior change. In some cases, we may need to break the behaviors into chains of behaviors and tackle things more slowly over time, but looking at the world through this lens can be quite enlightening. Similarly, looking at the values ensconced within org culture also lets us better understand motivations. People generally want to perform their duties, and do a reasonably decent job at it. This is generally how performance is measured, and those duties and performance measures are typically aligned against outcomes and - ultimately - values.

One excellent lesson that DevOps has taught us (there are many) is that we absolutely can change how the org functions... BUT... it does require a shift in org culture, which means changing values and behaviors. These sorts of shifts can be done either top-down or bottom-up, but the reality is that top-down is much easier in many regards, whereas bottom-up requires that greater consensus and momentum be built to achieve a breakthrough.

DevOps itself is cultural in nature and focuses heavily on changing behaviors, ranging from how dev and ops function, to how we communicate and interact, and so on. Shortened feedback loops and creating space for experimentation are both behavioral, which is why so many orgs struggle with how to make them a reality (that is, it's not simply a matter of better tools). Security absolutely should be taking notes and applying lessons learned from the DevOps movement, including investing in understanding behavior design.

---
To wrap this up, here are three quick take-aways:

1) Reinvent "security awareness" to be "behavioral infosec" toward shifting to a behavior design approach. Behavior design looks at Motivation, Ability, and Triggers in affecting change.

2) Understand the difference between controls (technical and administrative) and behaviors. Resorting to basic communication may be adequate if you're implementing controls that take away choices. However, if a new control requires that the "right" choice be made, you must then apply behavior design to the project, or risk failure.

3) Go cross-functional and start learning lessons from other practice areas like DevOps and even HR. Understand that everything you're promoting must eventually tie back into org culture, whether it be through changes in behavior or values. Make sure you clearly understand what you're trying to accomplish, and then make a very deliberate plan for implementing changes while addressing all appropriate objectives.

Going forward, let's try to make "cybersecurity awareness month" about something more than tired lines and vapid pejoratives. It's time to reinvent this space as "behavioral infosec" toward achieving better, measurable outcomes.

WAF and IPS. Does your environment need both?


I have been in a fair number of discussions with management on the need for a WAF and an IPS; people often confuse the two and their basic purposes. The question usually comes up after a pentest or vulnerability assessment: if I can't fix this vulnerability, shall I just put an IPS or a WAF in front to protect against intrusion and exploitation? Sometimes these products are even considered a silver bullet to ward off attackers instead of fixing the bugs. So let me tell you - this is not good!

Security products are well suited to protect against something "unknown", or something that you have "unknowingly missed". They are not a silver bullet, nor an excuse to keep systems and applications unpatched.

Security shouldn't be an and/or case. The more the merrier - but only if the products have been configured properly and each one has a distinct role to play under the flag of defense in depth! So, while I started this article as WAF vs. IPS, it's time to understand that it's WAF and IPS. The ecosystem of your production environment is evolving, and so is the threat landscape; it's more complex to protect than it was 5 years ago. Attackers are running at your pace, if not faster and a step ahead. These adversaries also piggy-back on existing threats to launch their exploits; often something that starts as simply as a DDoS to overwhelm your networks ends in an application-layer attack. So network firewalls, application firewalls, anti-malware, IPS, SIEM, etc. all have an important task and should be omnipresent, with bells and whistles!

Nevertheless, whether it's a WAF or an IPS, each has its own purpose, and though they can't replace each other, they have overlapping gray areas where some of your risks can rest. This post tries to address those gray areas, and the associated differences, to make life easier when it comes to the WAF (Web Application Firewall) versus the IPS (Intrusion Prevention System). The assumption is that both are modern products and that the IPS has deep-packet-inspection capabilities. Now, let's try to understand the infrastructure, environment, and scope of your golden eggs before we take a call on the best way to protect the data:

  1. If you are protecting only "web applications" running on HTTP sockets, then a WAF is enough; an IPS will be the cherry on the cake.
  2. If you are protecting all sorts of traffic - SSH, FTP, HTTP, etc. - then a WAF is of less use, as it can't inspect non-HTTP traffic. I would recommend a deep-packet-inspection IPS.
  3. A WAF must not be considered an alternative to a traditional network firewall. It works at the application layer and hence is primarily useful for HTTP, SSL (decryption), JavaScript, AJAX, ActiveX, and session-management kinds of traffic.
  4. A typical IPS does not decrypt SSL traffic, and is therefore insufficient for packet inspection on HTTPS sessions.
  5. There is a wide difference in traffic visibility and base-lining for anomalies. While a WAF has an "understanding" of the traffic - HTTP GET, POST, URLs, SSL, etc. - the IPS only sees it as network traffic and can therefore do layer 3/4 checks: bandwidth, packet size, raw protocol decoding and anomalies, but not the GET/POST or session management.
  6. An IPS is useful in cases where RDP, SSH, or FTP traffic has to be inspected before it reaches the box, to make sure the protocol is not tampered with or wrapped inside another TCP packet.

Both technologies have matured and share many gray areas, but understand this: a WAF knows and captures the contents of HTTP traffic to see if there is SQL injection, XSS, or cookie manipulation, whereas an IPS has very little or no understanding of the underlying application and therefore can't do much with the traffic contents. An IPS can't raise an alarm if someone is exfiltrating confidential data, or even sending a harmful parameter to your application - it will let it through as long as it's a valid HTTP packet.
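A toy sketch (nowhere near a real WAF) of why that application-layer context matters: an application-layer filter can parse the request into named parameters and reason about them, while a pure L3/L4 device sees only opaque payload bytes:

    import re
    from urllib.parse import parse_qs, urlparse

    SQLI_PATTERN = re.compile(r"('|--|\bunion\b|\bor\b\s+1=1)", re.IGNORECASE)

    def looks_suspicious(url):
        # Split the query string into parameters -- application context that
        # a device without HTTP understanding never gets to see.
        params = parse_qs(urlparse(url).query)
        return any(SQLI_PATTERN.search(v)
                   for values in params.values() for v in values)

    print(looks_suspicious("/items?id=42"))             # False
    print(looks_suspicious("/items?id=42' OR 1=1 --"))  # True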

Now, with the information I just shared, try to have a conversation with your management on how to provide the best layered approach to security - how to make sure the network and the application are resilient to complex attacks and to the threats lurking at your perimeter, or inside it.

Be safe.

I know I haven’t patched yet, and there’s a zero-day knocking at my door


Patching is important, but let's agree it takes time: time to test and validate the patch in your environment, and to check the application's compatibility with the software and the underlying services. And then, one fine day, an adversary just hacks your server through that unpatched code while you are still testing. It breaks my heart, and I wonder what can be done in the delta period while the team is testing the patch. The adversary, on the other hand, is busy either reversing the patch or using a zero-day to attack the systems. Once a patch is released, it's a race:

Either bad guys reverse it and release a working exploit, OR good guys test, verify and update their environment. A close game, always.

Technically, I wouldn't blame the application security team, or the team managing the vulnerable server. They have their SLAs for applying updates to the OS or application servers. In my experience, a high-severity patch has to be applied within 15 days, medium within 30 days, and low within 45 days. If the issue is critical enough, it should be managed within 24 to 48 hours, with enough testing of functionality, compatibility, and test cases alongside the application or server-management team. Now, what to do when there is a zero-day exploit lurking in your backyard? It used to be a low-probability gamble, but it's getting more realistic and frequent. The recent Apache Struts vulnerability did enough damage to big companies like Equifax. I have already addressed this issue, and the need for alternatives such as a WAF in the secure SDLC, in a previous blog post.

What shall I do if there's a 0-day lurking in my backyard?

Yes, I know there's a zero-day for your web application or underlying server, and you are busy patching - but what other security controls do you have in place?
Ask yourself these questions,

  1. Do I have an understanding of the zero-day exploit? Is it affecting my application, or a particular feature?
  2. Do I have a product/ tool for prevention at the application layer for the network perimeter that can filter bad requests - a network WAF (Web Application Firewall), network IPS (Intrusion Prevention System) etc.?
  3. Do I have a product/ tool for prevention at the application layer for the host - a host-based IPS, WAF etc.?
  4. Can I just take the application offline while I patch?
  5. What's the threat model, and the risk appetite if exploitation is successful?
  6. Can I brace for impact by lowering the interaction with other components, or by preventing it from spreading across my environment?

Let's understand how these answers will support your planning to develop a resilient environment,

>> Understanding of the zero-day exploit

You know there's an exploit in the wild; but did your security team or devops folks take a look at it? Did they find the exploit and understand the impact on your application? It is very important to understand what you are dealing with before you plan to secure your environment. Not all exploits are in scope for your environment, due to limitations, frameworks, plugins etc. So, do a bit of research, ask questions, and plan your timelines accordingly. Best case, understand the exact pattern you have to protect your application from.

>> Prevention at the application layer for network perimeter

If you know what's coming to hit you, you can plan a strategy to block it as well. Blocking is more effective when it's at the perimeter - the earlier, the better. And, if you have done good research on the exploit, or the threat vector that can affect you, take note of the pattern and find a way to block it at the perimeter while you patch the application.
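
As a hedged illustration of "block the pattern while you patch": in the Struts case mentioned earlier, the exploit traffic carried an OGNL expression in the Content-Type header. A minimal sketch in C (the header handling and pattern are simplified for illustration; a production WAF rule would be far more careful about false positives):

#include <stdbool.h>
#include <string.h>

/* Drop any request whose Content-Type header carries an OGNL-style
   "%{" expression - the telltale of the Struts exploit traffic. */
bool should_block(const char *content_type_header) {
    return content_type_header != NULL &&
           strstr(content_type_header, "%{") != NULL;
}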

>> Prevention at the application layer for host

There are times when, even though you know the pattern and the details of the exploit, the network perimeter is incapable of blocking it - for example, when SSL offload happens on the server or load balancer. In this case, make sure the host knows what is expected and blocks everything else, including anomalies. This can be achieved with host-based protection: an IPS, or a WAF.
Even a small tool like Tripwire can monitor directories and files, to make sure the attacker is not able to create files - or so that you get an alert at the right time to react. This can make a huge difference!
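
For a flavor of what such monitoring looks like under the hood, here is a minimal sketch using the Linux inotify API; the watched path is hypothetical, and real file-integrity tools like Tripwire also checksum file contents rather than just watching for events:

#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    int fd = inotify_init();
    if (fd < 0) { perror("inotify_init"); return 1; }
    /* watch the web root (hypothetical path) for new or modified files */
    if (inotify_add_watch(fd, "/var/www/html", IN_CREATE | IN_MODIFY) < 0) {
        perror("inotify_add_watch"); return 1;
    }
    for (;;) {
        ssize_t len = read(fd, buf, sizeof(buf));
        if (len <= 0) break;
        for (char *p = buf; p < buf + len; ) {
            struct inotify_event *ev = (struct inotify_event *) p;
            if (ev->len > 0)
                printf("ALERT: file activity: %s\n", ev->name);
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
    return 0;
}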

Note: Make sure the IPS (network/ host) is capable of in-depth packet filtering. If the pattern can be blocked on the WAF with a quick rule, do it - and make sure it doesn't generate false positives, which can impact your business. Also, monitor the WAF for alerts that can tell you whether there have been failed attempts by adversaries. Remember, attackers won't directly use their best weapon; usually it starts with "information gathering", uploading files, or executing known exploits, before customizing the attack for their needs.

You have a very good chance of detecting adversaries while they are gathering insights about you. Keep a keen eye on any alert from the production environment.

>> Taking the application offline

Is it possible to take the application offline while you patch the software? This depends on the application's exposure, its CIA (Confidentiality, Integrity and Availability) rating, and the business impact assessment that has been performed. If you think that taking it offline can speed up the process and also reduce the exposure without hurting your business, do it. Better safe than sorry.

>> Threat model and risk appetite

You have to assess the application and perform threat modeling on it. The reason this is required is that not every risk is high. Not every application needs the same attention, and the vulnerable application may well be internal, which substantially reduces the exposure and the underlying impact! Do ask your team - is the application Internet facing, how many users are using it, what kind of data is it dealing with etc. - and act accordingly.

>> Brace for impact

Finally, if things still look blurred, start prepping yourself for impact. Try to minimize it by validating and restricting access to the server. You can perform some sanity checks and implement controls like:

  1. Least privilege accounts for application use
  2. Least interaction with the rest of production environment
  3. Restricted database requests and response to limit the data ex-filtration
  4. Keep the incident management team on high-alert.

>> Incident management - Are you sure you are not already breached?

Now, what are the odds that while you are reading this blog, trying to answer all the questions and getting ready, you haven't already been compromised? Such statements about incidents used to begin with "What if..."; now they begin with "When...". So, yes - make sure all your monitoring systems are reporting anomalies, and that someone is monitoring them well. These tools are only good if a human being is responsibly validating the alerts. Once an alert is flagged red, a process should be triggered to analyze and minimize the impact.
Read more about incident monitoring failures in my earlier blogpost. Don't be one of them.

Now, once you address these questions, you should have a fairly resilient environment that can either mitigate or absorb the impact. Be safe!

5 Ways to Future-Proof IoT Devices

The absence of regulation is what has resulted in the innovation of software we see today. But as hardware and software merge, as the shelf life of software becomes the shelf life of hardware, we are going to need a number of guarantees to ensure that the benefits keep outweighing the risks.

I have never replaced my thermostat. Nor the control system for the lights in my flat. People buy a new oven or refrigerator every 10 years, a new car every 20 years maybe. And these are all run by software - old software, with bugs. And that is “fine” (mind the quotes), to the extent that someone takes responsibility for the system or solution as a whole - the collection of all these parts, with a single brand name on the box that is legally responsible. Now think IoT. Thousands of individual vendors who sit mostly abroad; offshore code development with, for the most part, no teams, unity, or any other form of structure - or legal jurisdiction, for that matter. Low to no profit margins for technology sold by the lowest bidder, where neither the buyer nor the seller has any interest in security.

The chip-maker of the device says they just sell chips; the manufacturer says they just implemented the chips and put them on the board; the software makers build the software for maybe hundreds of chips, ignoring some of the extra features and weaknesses that come with certain components. The product ships, and problems are found at a later stage - either design errors, or implementation errors introduced while integrating a piece of software that has vulnerabilities. And this is where we are today.

Not a single snowflake feels responsible for the avalanche.

So, five things I would like to see as part of a basic set of guarantees when purchasing some of these products in the future:

  1. Guaranteed life expectancy
    When IoT vendors say they offer “lifetime support,” it is not your life, or the product’s life. It is the life of the company. We saw this with Revolv last year. Guaranteeing a certain number of years of product focus, updates, community support (e.g. forums), as well as guaranteeing that the device will work, is paramount. This means tracking the life cycle of the technology inside the devices, and ensuring that whatever cloud services are being used will still be there and cannot be interrupted or hijacked afterwards.
  2. Privacy and data handling transparency
    Inform the consumer where the data is being saved (i.e. which physical country), how long the data will be kept, as well as what data is being saved and to what level of detail. Give the consumer the option to remove all data produced by the device if they can prove ownership of the device. I have no problems waiving some of my rights when telling the IoT vendor and potentially the world I like to make something that needs the pizza setting of my IoT oven Sunday morning, but inform me first. Will my data go to a European cloud or a US cloud, and what laws can be enforced upon my data and the correlation based on my data?
  3. Technology transparency
    To the extent possible, inform the consumer about what technology is being used, with regards to e.g. open source software and licensed software. Food manufacturers have to ensure the correct labeling of their product as far as ingredients go. Why not technology, for the individual parts or software components - at least to some extent, so that consumers can make informed choices about what it is they can and want to use?
  4. Security feature transparency
    Is the product allowing management through a cloud service with two-factor authentication? Or only Bluetooth, or Wifi? Will it detect your neighbor trying to log on to your device? Can someone break into my device remotely? What features the device has will hopefully, in the future, start influencing the buying behavior of the consumer. If you want all devices to only use the cloud for remote control, then that should be a choice that can be made by looking at the box.
  5. Planned obsolescence
    A more difficult one, but an important one. For IoT that is more sensitive or even vital, a shutdown process should be explored, to be able to shut down the device when it has exceeded its life or has been declared end of life. When reliance becomes dependence, planning is required to ensure that the benefits and added value of the product can be sustained. This is easier with pacemakers and other devices that receive a lot of care and tracking. But for devices that are basically enable-and-forget, this implies being able to signal the remaining lifetime to the owner, and thus implies knowing who the owner is. That last part might be the more difficult issue; similar things have been tried before, for example tying domain names to people for the purpose of reporting abuse cases. Not only that, it would mean another potential privacy problem if the information is leaked. This is a sensitive topic, and more discussion is needed to see how devices can be categorized and what the possibilities are. It can also lead to abuse from the vendor side: printer and printer-ink vendors jumped on the planned obsolescence track very quickly, flagging ink cartridges as empty early and forcing the customer to buy more. More discourse on this subject is needed from all sides: designers, vendors, suppliers and consumers.

Quit Talking About "Security Culture" – Fix Org Culture!

I have a pet peeve. Ok, I have several, but nonetheless, we're going to talk about one of them today. That pet peeve is security professionals wasting time and energy pushing a "security culture" agenda. This practice of talking about "security culture" has arisen over the past few years. It's largely coming from security awareness circles, though that's not always the case (looking at you, anti-phishing vendors intent on selling products without the means and methodology to make them truly useful!).

I see three main problems with references to "security culture," not the least of which is that it continues the bad old practices of days gone by.

1) It's Not Analogous to Safety Culture

First and foremost, you're probably sitting there grinding your teeth saying "But safety culture initiatives work really well!" Yes, they do, but here's why: safety culture can - and often does - achieve a zero-incident outcome. That is to say, you can reduce safety incidents to ZERO. That fact is excellent for when you're around construction sites or going to the hospital. However, I have very bad news for you. Information (or cyber, or computer) security will never get to zero. Until the entirety of computing is revolutionized, removing humans from the equation, you will never prevent all incidents. Just imagine your "security culture" sign by the entrance to your local office environment, forever emblazoned with "It Has Been 0 Days Since Our Last Incident." That's not healthy or encouraging. That sort of thing would be outright demoralizing!

Since you can't be 100% successful through preventative security practices, you must then shift your mindset to a couple of things: better decisions and resilience. Your focus, which most of your "security culture" programs are trying to address (or should be), is helping people make better decisions. Well, I should say, some of you - the few, the proud, the quietly isolated - have this focus. But at the end of the day/week/month/year you'll find that people - including well-trained and highly technical people - will still make mistakes or bad decisions, which means you can't bank on "solving" infosec through better decisions.

As a result, we must still architect for resiliency. We must assume something will break down at some point, resulting in an incident. When that incident occurs, we must be able to absorb the fault and continue to operate despite degraded conditions, while recovering to "normal" as quickly, efficiently, and effectively as possible. Note, however, that this focus on resiliency doesn't really align well with the "security culture" message. It's akin to telling people "Safety is really important, but since we have no faith in your ability to be safe, here's a first aid kit." (Yes, that's a bit harsh, to prove a point, which hopefully you're getting.)

2) Once Again, It Creates an "Other"

One of the biggest problems with a typical "security culture" focus is that it once again creates the wrong kind of enablement culture. It says "we're from infosec and we know best - certainly better than you." Why should people work to make better decisions when they can just abdicate that responsibility to infosec? Moreover, since we're trying to optimize resiliency, people can go ahead and make mistakes, no big deal, right?

Part of this is ok, part of it is not. On the one hand, from a DevOps perspective, we want people to experiment, be creative, be innovative. In this sense, resilience and failure are a good thing. However, note that in DevOps, the responsibility for "fail fast, recover fast, learn fast" is on the person doing the experimenting!!! The DevOps movement is diametrically opposed to fostering enablement cultures where people (like developers) don't feel the pain from their bad decisions. It's imperative that people have ownership and responsibility for the things they're doing. Most "security culture" dogma I've seen and heard works against this objective.

We want enablement, but we don't want enablement culture. We want "freedom AND responsibility," "accountability AND transparency," etc, etc, etc. Pushing "security culture" keeps these initiatives separate from other organizational development initiatives, and more importantly it tends to have at best a temporary impact, rather than triggering lasting behavioral change.

3) Your Goal Is Improving the Organization

The last point here is that your goal should be to improve the organization and the overall organizational culture. It should not be focused on point-in-time blips that come and go. Additionally, your efforts must be aimed toward lasting impact and not be anchored around a cult of personality.

As a starting point, you should be working with org dev personnel within your organization, applying behavior design principles. You should be identifying what the target behavior is, then working backward in a piecemeal fashion to determine whether that behavior can be evoked and institutionalized through one step or multiple steps. It may even take years to accomplish the desired changes.

Another key reason for working with your org dev folks is because you need to ensure that anything "culture" that you're pursuing is fully aligned with other org culture initiatives. People can only assimilate so many changes at once, so it's often better to align your work with efforts that are already underway in order to build reinforcing patterns. The worst thing you can do is design for a behavior that is in conflict with other behavior and culture designs underway.

All of this is to underline the key point that "security culture" is the wrong focus, and can in some cases even detract from other org culture initiatives. You want to improve decision-making, but you have to do this one behavior at a time, and glossing over it with the "security culture" label is unhelpful.

Lastly, you need to think about your desired behavior and culture improvements in the broader context of organizational culture. Do yourself a favor and go read Laloux's Reinventing Organizations for an excellent treatise on a desirable future state (one that aligns extremely well with DevOps). As you read Laloux, think about how you can design for security behaviors in a self-managed world. That's the lens through which you should view things, and this is where you'll realize a "security culture" focus is at best distracting.

---
So... where should you go from here? The answer is three-fold:
1) Identify and design for desirable behaviors
2) Work to make those behaviors easy and sustainable
3) Work to shape organizational culture as a whole

Definitionally, here are a couple starters for you...

First, per Fogg, Behavior happens when three things come together: Motivation, Ability (how hard or easy it is to do the action), and a Trigger (a prompt or cue). When Motivation is high and it's easy to do, then it doesn't take much prompting to trigger an action. However, if it's difficult to take the action, or the motivation simply isn't there, you must then start looking for ways to address those factors in order to achieve the desired behavioral outcome once triggered. This is the basis of behavior design.

Second, when you think about culture, think of it as the aggregate of behaviors collectively performed by the organization, along with the values the organization holds. It may be helpful, as Laloux suggests, to think of the organization as its own person that has intrinsic motivations, values, and behaviors. Eliciting behavior change from the organization is, then, tantamount to changing the organizational culture.

If you put this all together, I think you'll agree with me that talking about "security culture" is anathema to the desired outcomes. Thinking about behavior design in the context of organizational culture shift will provide a better path to improvement, while also making it easier to explain the objectives to non-security people and to get buy-in on lasting change.

Bonus reference: You might find this article interesting as it pertains to evoking behavior change in others.

Good luck!

Confessions of an InfoSec Burnout

Soul-crushing failure.

If asked, that is how I would describe the last 10 years of my career, since leaving AOL.

I made one mistake, one bad decision, and it's completely and thoroughly derailed my entire career. Worse, it's unclear if there's any path to recovery as failure piles on failure piles on failure.

The Ground I've Trod

To understand my current state of career decrepitude, as well as how I've seemingly become an industry pariah...

I have worked for 11 different organizations over the past 10 years. I left AOL in September 2007, right before a layoff (I should have waited for the layoff and gotten a package!). I had been there for more than 3.5 years and I was miserable. It was a misery of my own making in many ways. My team manager had moved up the ranks, leaving an opening. All my teammates encouraged me to throw my hat in the ring, but I demurred, telling myself I simply wasn't ready to manage. Oops. Instead, our new manager came through an internal process, and immediately made life un-fun. I left a couple months later.

When I left AOL, it was to take a regional leadership role in BT-INS (BT Global Services - they bought International Network Services to build-out their US tech consulting). A month into the role as security lead for the Mid-Atlantic, where I was billable on day 1, the managing director left and a re-org merged us in with a different region where there was already a security lead. 2 of 3 sales reps left and the remaining person was unable and unwilling to sell security. I sat on the bench for a long time, traveling as needed. An idle, bored Ben is a bad thing.

From BT I took a leadership role with this weird tech company in Phoenix. There was no budget and no staff, but I was promised great things. They let me start remote for a couple months before relocating. I knew it was a bad fit and not a good company before we made the move. I could feel it in my gut. But, I uprooted the family in the middle of the school year (my wife is an elementary teacher) and went to Phoenix, ignoring my gut. 6 months later they eliminated the position. The fact is that they'd hired a new General Counsel who also claimed a security background (he had a CISSP), and thus they made him the CISO. The year was 2009, the economy was in tatters after the real estate bubble had burst. We were stranded in a dead economy and had no place to go.

Thankfully, after a month of searching, someone threw me a life-line and I promptly started a consulting gig with Foreground Security. Well, that was a complete disaster and debacle. We moved back to Northern Virginia and my daughter immediately got sick and ended up in the hospital (she'd hardly had a sniffle before!). By the time she got out of the hospital I was sicker than I'd ever been before. The doctors had me on a couple different antibiotics and I could hardly get out of bed. This entire time the president of the company would call and scream at me every day. Literally, yelling at the top of his lungs over the phone. Hands-down the most unprofessional experience I'd had. The company partnership subsequently fell apart and I was sacked in the process. I remember it clearly to this day: I'm at my parents' house in NW MN over the winter holidays and the phone rings. It's the company president, who starts out by telling me they'd finally had the kid they were expecting. And, they're letting me go. Yup, that's how the conversation went ("We had a baby. You're termed.").

Really, being out of Foreground was a relief given how awful it had been. Luckily they relocated us no strings attached, so I didn't owe anything. But, I once again was out of a job for the second time in 3 months. I'd had 3 employers in 2009 and ended the year unemployed.

In early 2010 I was able to land a contract gig, thinking I'd try a solo practice. It didn't work out. The client site was in Utah, but they didn't want to pay for a ton of travel, so I tried working remotely, but people refused to answer the phone or emails, meaning I couldn't do the work they wanted. The whole situation was a mess.

Finally, I connected with Peter Hesse at Gemini Security Solutions to do a contract-to-hire tryout. His firm was small, but had a nice contract with a large client that helped underpin his business. He brought me in to do a mix of consulting and biz dev, but after a year+ of trying to bring in new opportunities (and have them shot down internally for various reasons), I realized that I wasn't going to be able to make a difference there. Plus, being reminded almost daily that I was an expensive resource didn't help. I worked my butt off but in the end it was unappreciated, so I left for LockPath.

The co-founders of LockPath had found me when I was in Phoenix thanks to a paper I'd written on PCI for some random website. They came out to visit me and told me what they were up to. I kept in touch with them over the years, including through their launch of Keylight 1.0 on 10/10/10. I somewhat forced my way into a role with them, initially to build a pro svcs team, but that got scrapped almost immediately and I ended up more in a traveling role, presenting at conferences to help get the name out there, as well as doing customer training. After a year-and-a-half of doing this, they hired a full-time training coordinator who immediately threw me under the bus (it was a major wtf moment). They wanted to consolidate resources at HQ and moving to Kansas wasn't in the cards, so seeing the writing on the wall I started a job search. Things came to an end in mid-May while I was on the road for them. I remember it clearly, having dropped my then-3yo daughter with the in-laws the night before, I had just gotten into my hotel room in St. Paul, MN, ahead of Secure360 and the phone rang. I was told it was over, but he was going to think about it overnight. I asked "Am I still representing the company when I speak at the conference tomorrow?" and got no real answer, but was promised one first thing the next morning. That call never came, so I spoke to a full room the next morning and worked the booth all that day and the morning after that. I met my in-laws for lunch to pick-up my kiddo, and was sitting in the airport awaiting our flight home when the call finally came in delivering the final news. I was pretty burned-out at that time, so in many ways it was welcome news. Startup life can be crazy-intense, and I thankfully maintain a decent relationship with the co-founders today. But those days were highly stressful.

The good news was that I was already in-process with Gartner, and was able to close on the new gig a couple weeks later. Thus started what I thought would be one of my last jobs. Alas, I was wrong - as was much about my time there.

Before I go any further, it bears noting an important observation: the onboarding experience is all-important. If you screw it up, then it sets a horrible tone for the entire gig, and the likelihood of success drops significantly. If onboarding is professional and goes smoothly, then people will feel valued and able to contribute. If it goes poorly, then people will feel undervalued from the get-go and they will literally start from an emotional hole. Don't do this to people! I don't care if you're a startup or a Fortune 50 large multi-national. Take care of people from Day 1 and things will go well. Fail at it and you might as well stop and release them asap.

Ok, anyway... back to Gartner. It was a difficult beginning. I was assigned a mentor, per their process, but he was gone 6 of the first 9 weeks I was there. I was sent to official "onboarding training" the end of August (the week before Labor Day!) despite having been there for 2 months by that time. I was not prepped at all before going to onboarding, and as it turns out I should have been. Others showed up with documents to be edited and an understanding of the process. I showed up completely stressed out, not at all ready to do the work that was expected, and generally had a very difficult time. It was also the week before Labor Day, which at the time meant it was teacher workshops, and I was on the road for it with 2 young kids at home. Thankfully, the in-laws came and helped out, but suffice to say it was just really not good all-around.

I really enjoyed the manager I worked for initially, but all that changed in February 2014 when my former mentor, with whom I did not at all get along, became the team manager. The stress levels immediately spiked as the focus quickly shifted to strong negativity. I had been struggling to get paper topics approved and was fighting against the reality that the target audience for Gartner research is not the leading edge of thinking, but the middle of the market. It took me nearly a full year to finally get my feet under me and start producing at an appropriate pace. My 1 yr mark roughly corresponded with the mid-year review, which was highly negative. By the end of the year I finally found my stride and had a ton of research in the pipeline (most of which would publish in early 2015). Unfortunately, the team manager, Captain Negative, couldn't see that and gave me one of the worst performance reviews I've ever received. It was hands-down the most insulted I'd ever been by a manager. It seemed very clear from his disrespectful actions that I wasn't wanted there, and so I launched an intensive job search. Meanwhile, I published something like 4 papers in 6 weeks while also having 4 talks picked up for that year's Security & Risk Management Conference. All I heard from my manager was negativity despite all that progress and success. I felt like shit, a total failure. There were no internal opportunities, so outward I looked, eventually landing at K12.

Oh, what a disaster that place was. K12 is hands-down the most toxic environment I've ever seen (and I've seen a lot!). Literally, all 10 people with whom I'd interviewed had lied to me - egregiously! I'd heard rumblings of changes in the executive ranks, but the hiring manager assured me there was nothing that would affect me. A new CIO - my manager's boss - started the same day I did. Yup, nothing that would affect me. Ha. Additionally, it turns out that they already had a "security manager" of sorts working in-house. He wasn't part of the interview process for my "security architect" role. They said they were doing DevOps, but it was just a side pilot that wasn't getting anywhere. Etc. Etc. Etc. Suffice to say, it was really bad. I frankly wondered how they were still in business, especially in light of the constant stream of lawsuits emanating from the states where they had "online public schools." Oy...

Suffice to say, I started looking for work on Day 1 at K12. But, there wasn't much there, and recruiters were loath to talk to me given such a short stint. Explanations weren't accepted, and I was truly stuck. The longer I was there, the worse it looked. Finally, my old manager from AOL reached out as he was starting a CISO role at Ellucian. He rescued me and in October 2015 I started with them in a security architect role.

There's not much I can say about my experience at Ellucian. Things seemed ok at first, but after a CIO change a few months in, plus a couple other personnel issues, things got wonky, and it became clear my presence was no longer desired. When your boss starts cancelling weekly 1-on-1 meetings with you, it becomes pretty clear that he doesn't really want you there. New Context reached out in May 2016 and offered me an opportunity to do research and publishing for them, so I jumped at it and got the heck out of dodge. It turns out, this was a HUGE mistake, too...

There's even less I can say about New Context... we'll just put it at this: Despite my best efforts, I was never able to get things published due to a lack of internal approvals. After a year of banging my head against the wall, my boss and I concluded it wasn't going to happen, and they let me go a couple weeks later.

From there, I launched my own solo practice and signed what was to be a 20-week contract with an LA-based client. They had been chasing me for several months to come help them out in a consulting (staff augmentation, really) capacity. I closed the deal with them and started on July 31st of this year. That first week was a mess, with them not being ready for me on day 1, then sending me a botched laptop build on day 2, and then finally getting me online on day 3. I flew to LA to be on-site with them the following week and immediately locked horns with the other security architect. That first week on-site was horribly stressful. Things had finally started leveling off last week, and then yesterday (Monday 8/28/17) they called and cancelled the contract. While I'm disappointed, it's also a bit of a relief. It wasn't a good fit, it was a very difficult client experience, and overall I was actively looking for new opportunities while I did what I could for them.

Shared Culpability or Mea Culpa?

After all these years, I'm tired of taking the blame and being the seemingly constant punchline to some joke I don't get. I'm tired, I'm burned-out, I'm frustrated, I'm depressed, and more than anything I just don't understand why things have gone so completely wrong over the past 10 years. How could one poor decision result in so much career chaos and heartache? It's astonishing. And appalling. And depressing.

I certainly share responsibility in all of this. I tend to be a fairly high-strung person (less so over the years) and onboarding is always highly stressful for me. Increasingly, employers want you engaged and functional on Day 1, even though that is incredibly unrealistic. Onboarding must be budgeted for a minimum of 3-6 months. If a move is involved, then even longer! Yet nobody is willing to allow that any more. I don't know if it's mythology or downward pressure or what... but the expectations are completely unreasonable.

But I do have a responsibility here, and I've certainly not been Mr. Sunshine the past few years, which means I tend to come off as extremely negative and sarcastic, which can be off-putting to people. Attitude is something I need to focus on when starting, and I need to find ways to better manage all the stress that comes with commencing a new gig.

That said, I also seem to have a knack for picking the wrong jobs. This even precedes my time at AOL, which is really a shining anchor in the middle of a turbulent career. Coming into the workforce just before the DOT-COM bubble burst, I've been through lots of layoffs and turmoil. I simply have a really bad track record of making good employment choices. I'm not even sure how to go about fixing that, short of finding people to advise me on the process.

However, lastly, it's important for companies to realize that they're also failing employees. The onboarding process is immensely important. Treating people respectfully and mindfully from Day 1 is immensely important. Setting reasonable expectations is immensely important. If you do not actively work to set your personnel up for success, then it is extremely unlikely that they'll achieve it! And even in this day and age where companies really, truly don't value personnel (except for execs and directors), it must be acknowledged that there is a significant cost in lost productivity, efficiency, and effectiveness that can be directly tied to employee turnover. This includes making sure managers are reasonably well trained and are actually well-suited to being managers. You owe it to your employees to treat them as humans, not just replaceable cogs in a machine.

Where To Go From Here?

The pull of deep depression is ever stronger. Resistance becomes ever more difficult with each successive failure. I feel like I cannot buy a break. My career is completely off-track and I decreasingly see a path to recovery. Every morning is a struggle to get up and look for work yet again. I feel like I've been doing this almost constantly for the past 10 years. I've not been settled anywhere since AOL (maybe BT).

I initially launched a solo practice, Falcon's View Consulting, to handle some contracts. And, that's still out there if I need it. However, what I really need is a full-time job. With a good, stable company. In a role with a good manager. A role that eventually has upward mobility (in order to get back on track).

Where that role is based I really do not care (my family might). Put me in a leadership role, pay me a reasonable salary, and relocate me to where you need me. At this point, I'm willing to go to bat and force the family to move, but you gotta make it easy and compelling. Putting me into financial hardship won't get it done. Putting me into a difficult position with no support won't get it done. Moving me and not being committed to keeping me onboard through the most stressful times won't get it done.

I'm quite seriously at the end of my rope. I feel like I have about one more chance left, after which it'll be bankruptcy and who knows what... I've given just about everything I can to this industry, and my reward has been getting destroyed in the process. This isn't sustainable, it isn't healthy, and it's altogether stupid.

I want to do good work. I want to find an employer that values me, one I can stay with for a reasonable period of time. I've never gone into any FTE role thinking "this is just a temporary stop while I find something better." I throw my whole self into my work, which is - I think - why it is so incredibly painful when rejection and failure finally happen. But I don't know another way to operate. Nor should anyone else, for that matter.

Two roads diverged in the woods / And I... I took the wrong one / And that has made all the difference

Google Begins Campaign Warning Forms Not Using HTTPS Protocol

In August 2014, Google released an article sharing their thoughts on how they planned to focus on their “HTTPS everywhere” campaign (originally initiated at their Google I/O event). The premise of...

The post Google Begins Campaign Warning Forms Not Using HTTPS Protocol appeared first on PerezBox.

On Titles, Jobs, and Job Descriptions (Not All Roles Are Architects)

Folks: Please stop calling every soup-to-nuts, everything-but-the-kitchen-sink security job a "security architect" role. It's harmful to the industry and it's doing you no favors trying to find the right resources. In fact, please stop posting these "one role does everything security under the sun" positions altogether. It's hurting your recruitment efforts, and it makes it incredibly difficult to find positions that are a good fit. Let me explain...

For starters, there are generally three classes of security people, management and pentesters aside:
- Analysts
- Engineers
- Architects

(Note that these terms tend to be loaded due to their use in other industries. In fact, in some states you might even have to come up with a different equivalent term for positions due to legal definitions (or licensing) of roles. Try to bear with me and just go with the flow, eh?)

Analysts are people who think about stuff and write about stuff and sometimes help initiate actions, but they are not the implementers of security tools or practices. An analyst may or may not be particularly technical, depending on the nature of the role. For example, there are tons of entry-level SOC analyst positions today that can provide a first taste of infosec work life. You rarely need to have a lot of technical skills, at least initially, to land one of these gigs (this varies by org). Similarly, there are GRC analyst roles that tend not to be technical at all (despite often including "technical writing," such as for policies, in the workload). On the far end of the spectrum, you may have incident response (IR) analysts who are very technical, but again note the nature of their duties: thinking about stuff, writing about stuff, and maybe initiating actions (such as the IR process or escalations therein).

Engineers are people who do most of the hands-on work. If you're looking for someone to do a bunch of implementation work, particularly around security tools and tech, then you want a security engineer, and that should be clearly stated in your job description. Engineers tend to be people who really enjoy implementation and maintenance work. They like rolling up their sleeves and getting their hands dirty. You might also see "administrator" used in this same category (though that's muddy water, as sometimes a "security administrator" might be more like an analyst - less technical, skilled in one kind of tool, like adding and removing users in Active Directory or your IAM of choice). In general, if you're listing a position that has implementation responsibilities, then you need to be calling it an engineer role (or equivalent), not an analyst and certainly not an architect.

Architects are not your implementers. And, while they are thinkers who may do a fair amount of technical writing, the key differentiators here are that 1) they tend to be way more technical than the average analyst, 2) they see a much bigger picture than the average analyst or engineer, and 3) they've often risen to this position through one or both of the other roles, but almost certainly with considerable previous hands-on implementation experience as an engineer. It's very important to understand that your architects, while likely having backgrounds in engineering, are unlikely to want to do much hands-on implementation work. What hands-on work they are willing or interested to do is likely focused heavily on proofs of concept (POCs) and testing new ideas and technologies. Given their technical backgrounds, they'll be able to go toe-to-toe on technical topics with just about anyone in the organization, even though they may not be able to sit down and crank out a bunch of server builds in short order any more (or, maybe they can!). A good security architect provides experiential, context-relevant guidance on how to design /secure/ systems and applications, as well as providing guidance on technology purchasing decisions, technical designs, etc. Where they differ from, say, GRC/policy analysts is that when they provide a recommendation on something, they can typically back it up with more than a flaccid reference to "best practices" or some other lame appeal to authority; they can instead point to proven experiences and technical rationale.

Going all the way back to before my Gartner days, I've long told SMBs that their first step should not be hiring a security manager, but rather a security architect who reports up through the IT food chain, preferably directly to the IT manager/director or CIO (depending on the size and structure of the org). The reason for this recommendation is that small IT shops already have a number of engineers/administrators and analysts, but what they oftentimes lack is someone with broad AND deep technical expertise in security who can provide all sorts of guidance and value to the organization. Part and parcel to this is that SMBs especially do not need to build out a "security team" or "security department"! (In fact, I often argue only the largest enterprises should ever go this route, and only to improve efficiency and effectiveness. Status quo and conventional wisdom be damned.) Most small IT shops just need someone to help out with decisions and evaluations to ensure that the organization is making smart security decisions. This security architect role should not be focused on implementation or administration, but instead should be serving in an almost quasi-EA (enterprise architect) role that cuts across the entire org. In many ways, a security architect is a counselor who works with teams to improve their security decisions. It's common in larger organizations for security architects to focus on one part of the business, simply as a matter of scale and supportability.

So that's it. Nothing too crazy, right? But, I think it's important. Yes, some of you may debate and question how I've defined things, and that's fine, but the main takeaway here, hopefully, is that job descriptions need to be reset again around some standard language. In particular, orgs need to stop listing a ton of implementation work for "security architect" roles because that's misleading and really not what a security architect does. Properly titling and describing roles is very important, and will help you more readily find your ideal candidates. Calling everything a "security architect" does not do anything positive for you, and it serves to frustrate and disenfranchise your candidate pools (not to mention wasting your time on screening).

fwiw. ymmv. cheers!

Hacking the Universe with Quantum Encraption

Ladies and Gentlemen of the Quantum Physics Community:

  I want you to make a Pseudorandom Number Generator!

  And why not!  I’m just a crypto nerd working on computers; I only get a few discrete bits and a handful of mathematical operations.  You have such an enormous bag of tricks to work with!  You’ve got a continuous domain, trigonometry, complex numbers, eigenvectors…you could make a PRNG for the universe!  Can you imagine it?  Your code could be locally hidden in every electron, proton, fermion, boson in creation.

  Don’t screw it up, though.  I can’t possibly guess what chaos would (or would fail to) erupt, if multiple instances of a PRNG shared a particular seed, and emitted identical randomness in different places far, far away.  Who knows what paradoxes might form, what trouble you might find yourself entangled with, what weak interactions might expose your weak non-linearity.  Might be worth simulating all this, just to be sure.

  After all, we wouldn’t want anyone saying, “Not even God can get crypto right”.

—–

  Cryptographically Secure Pseudorandom Number Generators are interesting.  Given a relatively small amount of data (just 128 bits is fine) they generate an effectively unlimited stream of bits completely indistinguishable from the ephemeral quantum noise of the Universe.  The output is as deterministic as the digits of pi, but no degree of scientific analysis, no amount of sample data will ever allow a model to form for what bits will come next.

  In a way, CSPRNGs represent the most practical demonstration of Gödel’s First Incompleteness Theorem, which states that for a sufficiently complex system, there can be things that are true about it that can never be proven within the rules of that system.  Science is literally the art of compressing vast amounts of experimentally derived output on the nature of things, to a beautiful series of rules that explains it.  But as much as we can model things from their output with math, math can create things we can never model.  There can be a thing that is true — there are hidden variables in every CSPRNG — but we would never know.

  And so an interesting question emerges.  If a CSPRNG is indistinguishable from the quantum noise of the Universe, how would we know if the quantum noise of the universe was not itself a CSPRNG?  There’s an infinite number of ways to construct a Random Number Generator, what if Nature tried its luck and made one more?  Would we know?

  Would it be any good?

   I have no idea.  I’m just a crypto nerd.  So I thought I’d look into what my “nerds from another herd”, Quantum Physicists, had discovered.

—–

  Like most outsiders diving into this particular realm of science, I immediately proceeded to misunderstand what Quantum Physics had to say.  I thought Bell’s Theorem ruled out anything with secret patterns:

“No physical theory of local hidden variables can ever reproduce all the predictions of quantum mechanics.”  

  I thought that was pretty strange.  Cryptography is the industrial use of chaotic systems with hidden variables.  I had read this to mean, if there were ever local hidden variables in the random data that quantum mechanics consumed, the predictions would be detectably different from experimental evidence.

  Quantum Physics is cool, it’s not that cool.  I have a giant set of toys for encrypting hidden variables in a completely opaque datastream, what, I just take my bits, put them into a Quantum Physics simulation, and see results that differ from experimental evidence?  The non-existence of a detection algorithm distinguishing encrypted datastreams from pure quantum entropy, generic across all formulations and levels of complexity, might very well be the safest conjecture in the history of mathematics.  If such a thing existed, it wouldn’t be one million rounds of AES we’d doubt, it’d be the universe.

  Besides, there’s plenty of quantum mechanical simulations on the Internet, using JavaScript’s Math.random.  That’s not exactly a Geiger counter sitting next to a lump of Plutonium.  This math needs uniform distributions, it does not at all require unpredictable ones.

  But of course I completely misunderstood Bell.  He based his theorem on what are now called Bell Inequalities.  They describe systems that are in this very weird state known as entanglement, where two particles both have random states relative to the universe, but opposite states relative to each other.  It’s something of a bit repeat; an attacker who knows a certain “random” value is 1 knows that another “random” value is 0.  But it’s not quite so simple.  The classical interpretation of entanglement is often demonstrated in relation to the loss of a shoe (something I’m familiar with, long story).  You lose one shoe, the other one is generally identical.

  But Bell inequalities, extravagantly confirmed in experiments for decades, demonstrate that’s just not how things work down there, because the Universe likes to be weird.  Systems at that scale don’t have a ground truth, as much as a range of possible truths.  Those two particles that have been entangled, it’s not their truth that is opposite, it’s their ranges.  Normal cryptanalysis isn’t really set up to understand that — we work in binaries, 1’s and 0’s.  We certainly don’t have detectors that can be smoothly rotated from “detects 1’s” to “detects 0’s”, and if we did we would assume as they rotated there would be a linear drop in 1’s detected matching a linear increase in 0’s.

  When we actually do the work, though, we never see linear relationships.  We always see curves, cos^2 in nature, demonstrating that the classical interpretation is wrong.  There are always two probability distributions intersecting.

—–

  Here’s the thing, and I could be wrong, but maybe I’ll inspire something right.  Bell Inequalities prove a central thesis of quantum mechanics — that reality is probabilistic — but Bell’s Theorem speaks about all of quantum mechanics.  There’s a lot of weird stuff in there!  Intersecting probability distributions is required, the explanations that have been made for them are not necessarily necessary.

  More to the point, I sort of wonder if people think it’s “local hidden variables” XOR “quantum mechanics” — if you have one, you can’t have the other.  Is that true, though?  You can certainly explain at least Bell Inequalities trivially, if the crystal that is emitting entangled particles emits equal and opposite polarizations, on average.  In other words, there’s a probability distribution for each photon’s polarization, and it’s locally probed at the location of the crystal, twice.

  I know, it would seem to violate conservation of angular momentum.  But, c’mon.  There’s lots of spare energy around.  It’s a crystal, they’re weird, they can get a tiny bit colder.  And “Nuh-uh-uh, Isaac Newton!  For every action, there is an equal and opposite probability distribution of a reaction!” is really high up on the index of Shit Quantum Physicists Say.

Perhaps more likely, of course, is that there’s enough hidden state to bias the probability distribution of a reaction, or to fully describe the set of allowable output behaviors for any remote unknown input.  Quantum Physics biases random variables.  It can bias them more.  What happens to any system with a dependency on random variables that suddenly aren’t?  Possibly the same thing that happens to everything else.

  Look.  No question quantum mechanics is accurate, it’s predictive of large chunks of the underlying technology the Information Age is built on.  The experiment is always right, you’re just not always sure what it’s right about.  But to explain the demonstrable truths of probability distribution intersection, Quantum Physicists have had to go to some pretty astonishing lengths.  They’ve had to bend on the absolute speed limit of the universe, because related reactions were clearly happening in multiple places in a manner that would require superluminal (non-)communication.

  I guess I just want to ask, what would happen if there’s just a terrible RNG down there — non-linear to all normal analysis, but repeat its seed in multiple particles and all hell breaks loose?  No really, what would happen?

   Because that is the common bug in all PRNGs, cryptographically secure and otherwise.  Quantum mechanics describes how the fundamental unstructured randomness of the universe is shaped and structured into probability distributions.  PRNGs do the opposite — they take structure, any structure, even fully random bits limited only by their finite number — and make them an effectively unbounded stream indistinguishable from what the Universe has to offer.

  The common PRNG bug is that if the internal state is repeated, if the exact bits show up in the same places and the emission counter (like the digit of pi requested) is identical, you get repeated output.

  I’m not saying quantum entanglement demonstrates bad crypto.  I wouldn’t know.  Would you?

  Because here’s the thing.  I like quantum physics.  I also like relativity.  The two fields are both strongly supported by the evidence, but they don’t exactly agree with one another.  Relativity requires nothing to happen faster than the speed of light; Quantum Physics kind of needs its math to work instantaneously throughout the universe.  A sort of detente has been established between the two successful domains, called the No Communication theorem.  As long as only the underlying infrastructure of quantum mechanics needs to go faster than light, and no information from higher layers can be transmitted, it’s OK.

   It’s a decent hack, not dissimilar to how security policies never seem to apply to security systems.  But how could that even work?  Do particles (or waves, or whatever) have IP addresses?  Do they broadcast messages throughout the universe, and check all received messages for their identifier?  Are there routers to reduce noise?  Do they maintain some sort of line of sight at least?  At minimum, there’s some local hidden variable even in any non-local theory, because the system has to decide who to non-locally communicate with.  Why not encode a LUT (Look Up Table) or a function that generates the required probability distributions for all possible future interactions, thus saving the horrifying complexity of all particles with network connections to all other particles?

  Look, one can simulate weak random number generators in each quantum element, and please do, but I think non-locality must depend on some entirely alien substrate, simulating our universe with a speed of light but choosing only to use that capacity for its own uses.  The speed of light itself is a giant amount of complexity if instantaneous communication is available too.

  Spooky action at a distance, time travel, many worlds theories, simulators from an alien dimension…these all make for rousing episodes of Star Trek, but cryptography is a thing we actually see in the world on a regular basis.  Bad cryptography, even more so.

—–

  I mentioned earlier, at the limit, math may model the universe, but our ability to extract that math ultimately depends on our ability to comprehend the patterns in the universe’s output.  Math is under no constraint to grant us analyzable output.

  Is the universe under any constraint to give us the amount of computation necessary to construct cryptographic functions?  That, I think, is a great question.

  At the extreme, the RSA asymmetric cipher can be interpreted symmetrically as F(p,q)==n, with p and q being large prime numbers and F being nothing more than multiply.  But that would require the universe to support math on numbers hundreds of digits long.  There’s a lot of room at the bottom but even I’m not sure there’s that much.  There’s obviously some mathematical capacity, though, or else there’d be nothing (and no one) to model.

  It actually doesn’t take that much to create a bounded function that resists (if not perfectly) even the most highly informed degree of relinearizing statistical work, cryptanalysis.  This is XTEA:

#include <stdint.h>

/* take 64 bits of data in v[0] and v[1] and 128 bits of key[0] - key[3] */

void encipher(unsigned int num_rounds, uint32_t v[2], uint32_t const key[4]) {
    unsigned int i;
    uint32_t v0=v[0], v1=v[1], sum=0, delta=0x9E3779B9;
    for (i=0; i < num_rounds; i++) {
        v0 += (((v1 << 4) ^ (v1 >> 5)) + v1) ^ (sum + key[sum & 3]);
        sum += delta;
        v1 += (((v0 << 4) ^ (v0 >> 5)) + v0) ^ (sum + key[(sum>>11) & 3]);
    }
    v[0]=v0; v[1]=v1;
}

  (One construction for PRNGs, not the best, is to simply encrypt 1,2,3… with a secret key.  The output bits are your digits, and like all PRNGs, if the counter and key repeat, so does the output.)
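
  As a quick hedged sketch of that construction (the key and round count below are arbitrary placeholders), driving the encipher() routine above in counter mode:

#include <stdint.h>
#include <stdio.h>

void encipher(unsigned int num_rounds, uint32_t v[2], uint32_t const key[4]); /* defined above */

int main(void) {
    uint32_t key[4] = {0xDEADBEEF, 0x01234567, 0x89ABCDEF, 0x42424242}; /* the secret seed */
    for (uint32_t ctr = 1; ctr <= 3; ctr++) {
        uint32_t v[2] = {0, ctr};          /* "encrypt 1, 2, 3..." */
        encipher(32, v, key);
        printf("%08x%08x\n", (unsigned) v[0], (unsigned) v[1]); /* 64 pseudorandom bits per counter */
    }
    /* Re-run with the same key and counters and you get byte-identical
       "randomness" - the repeated-state bug this post keeps circling. */
    return 0;
}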

  The operations we see in encipher() are:

  1. The use of a constant.  There are certainly constants of the universe available at 32 bits of detail.
  2. Addition.  No problem.
  3. Bit shifts.  So that’s two things — multiplication or division by a power of two, and quantization loss of some amount of data.  I think you’ve got that, it is called quantum mechanics after all.
  4. XOR and AND.  This is tricky.  Not because you don’t have exclusion available — it’s not called Pauli’s Let’s Have A Party principle — but because these operations depend on a sequence of comparisons across power of two measurement agents, and then combining the result.  Really easy on a chip, do you have that kind of magic in your bag of tricks?  I don’t know, but I don’t think so.

  There is a fifth operation that is implicit, because this is happening in code.  All of this is happening within a bitvector 32 bits wide: % 2**32 if you think in adds, GF(2) bit by bit if you think in XORs, depending on which community you call home.  Basically, all summation will loop around.  It’s OK, given the proper key material there’s absolutely an inverse function that will loop backwards over all these transformations and restore the original state (hint, hint).
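
  Since I’m hinting: here’s that inverse, the standard XTEA decipher.  Same rounds, walked backwards, every += flipped to a -=:

void decipher(unsigned int num_rounds, uint32_t v[2], uint32_t const key[4]) {
    unsigned int i;
    uint32_t v0=v[0], v1=v[1], delta=0x9E3779B9, sum=delta*num_rounds;
    for (i=0; i < num_rounds; i++) {
        v1 -= (((v0 << 4) ^ (v0 >> 5)) + v0) ^ (sum + key[(sum>>11) & 3]);
        sum -= delta;
        v0 -= (((v1 << 4) ^ (v1 >> 5)) + v1) ^ (sum + key[sum & 3]);
    }
    v[0]=v0; v[1]=v1;
}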

  Modular arithmetic is the math of clocks, so of course you’d expect it to exist somewhere in a world filled with things that orbit and spin.  But, in practical terms, it does have a giant discontinuity as we approach 1 and reset to 0.  I’m sure that does happen — you either do have escape velocity and fly off into the sunset, or you don’t, crash back to earth, and *ahem* substantially increase your entropy — but modular arithmetic seems to mostly express at the quantum scale trigonometrically.  Sine waves can possibly be thought of as a “smoothed” mod, that exchanges sharp edges for nice, easy curves.
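
  You can eyeball the difference in a few lines: run a ramp through a hard mod and through a sine, and watch the sawtooth’s cliff become a curve.  Just a sketch:

#include <math.h>
#include <stdio.h>

int main(void) {
    const double PI = 3.14159265358979323846;
    for (double x = 0.0; x < 3.0; x += 0.25) {
        double hard = fmod(x, 1.0);                /* sawtooth: sharp reset at 1 */
        double soft = (sin(2 * PI * x) + 1) / 2;   /* smooth loop, same period */
        printf("x=%.2f  mod=%.3f  sine=%.3f\n", x, hard, soft);
    }
    return 0;
}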

  Would trig be an improvement to cryptography?  Probably not! It would probably become way easier to break!  While the universe is under no constraint to give you analyzable results, it’s also under no constraint not to.  Crypto is hard even if you’re trying to get it right; randomly throwing junk together will (for once) not actually give you random results.

  And not having XOR or AND is something notable (a problem if you’re trying to hide the grand secrets of the universe, a wonderful thing if you’re trying to expose them).  We have lots of functions made out of multiply, add, and mod.  They are beloved by developers for the speed at which they execute.  Hackers like ‘em too: they can be predicted, and exploited for remote denial of service attacks.  A really simple function comes from the legendary Dan Bernstein:

unsigned djb_hash(void *key, int len)
{
    unsigned char *p = key;
    unsigned h = 0;
    int i;

    for (i = 0; i < len; i++)
    {
        h = 33 * h + p[i];
    }

    return h;
}
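
  Predictable how?  Bump one byte up by 1 and the next byte down by 33, and the hash doesn’t move.  Scale that trick up and you have hash flooding.  A quick demo, assuming it’s linked against the djb_hash above:

#include <stdio.h>

unsigned djb_hash(void *key, int len);   /* the function above */

int main(void) {
    char a[] = "ah", b[] = "bG";         /* 'b' = 'a'+1, 'G' = 'h'-33 */
    printf("%u\n", djb_hash(a, 2));      /* 33*'a' + 'h' = 3305 */
    printf("%u\n", djb_hash(b, 2));      /* 33*'b' + 'G' = 3305: collision */
    return 0;
}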

  You can see the evolution of these functions at http://www.eternallyconfuzzled.com/tuts//jsw_tut_hashing.aspx .  What should be clear is that there are many ways to compress a wide distribution into a small one, with various degrees of uniformity and predictability.

  Of course, Quantum Physicists actually know what tools they have to model the Universe at this scale, and their toolkit is vast and weird.  A very simple compression function though might be called Roulette — take the sine of a value with a large normal or Poisson distribution, and emit the result.  The output will be mostly (but not quite actually) uniform.
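
  A toy Roulette, under my own assumptions.  Box-Muller normals built from rand(), which is itself a terrible generator, but this is about the shape of the output, not its quality:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const double PI = 3.14159265358979323846;
    int bins[10] = {0};
    for (int i = 0; i < 100000; i++) {
        /* Box-Muller: two uniforms in (0,1) -> one normal sample, scaled wide */
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double n  = sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2) * 1000.0;
        double r  = (sin(n) + 1.0) / 2.0;   /* Roulette: compress into [0,1] */
        bins[(int)(r * 10) % 10]++;
    }
    for (int i = 0; i < 10; i++)            /* eyeball how uniform it isn't */
        printf("bin %d: %d\n", i, bins[i]);
    return 0;
}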

  Now, such a terrible RNG would be vulnerable to all sorts of “chosen plaintext” or “related key” attacks.  And while humans have learned to keep the function static and only vary the keys when we want consistent behavior, wouldn’t it be tragic if two RNGs shipped with identical inputs, one configured for sine waves, the other for cosine?  And then the results were measured against one another?  Can you imagine the unintuitive inequalities that might form?

  Truly, it would be the original sin.

-----

  I admit it.  I’m having fun with this (clearly).  Hopefully I’m not being too annoying.  Really, finally diving into the crazy quantum realm has been incredibly entertaining.  Have you ever heard of Young’s experiment?  It was something like 1801, and he took a pinhole of sunlight coming through a wall and split the light coming out of it with a note card.  Boom!  Interference pattern!  Proved the existence of some sort of wave nature for light, with paper, a hole, and the helpful cooperation of a nearby stellar object.  You don’t always need a particle accelerator to learn something about the Universe.

  You might wonder why I thought it’d be interesting to look at all this stuff.  I blame Nadia Heninger.  She and her friends discovered that about (actually, at least) one in two hundred private cryptographic keys were actually shared between systems on the Internet, and were thus easily computed.  Random number generation had been shown to have not much more than two nines of reliability in a critical situation.  A lot of architectures for better RNG had been rejected, because people were holding out for hardware.  Now, of course, we actually do have decent fast RNG in hardware, based on actual quantum noise.  Sometimes people are even willing to trust it.

  Remember, you can’t differentiate the universe from hidden variable math, just on output alone.

  So I was curious what the de minimis quantum RNG might look like.  Originally I wanted to exploit the fact that LEDs don’t just emit light, they generate electricity when illuminated.  That shouldn’t be too surprising, they’re literally photodiodes.  Not very good ones, but that’s kind of the charm here.  I haven’t gotten that working yet, but what has worked is:

  1. An Arduino
  2. A capacitor
  3. There is no 3

  It’s a 1 Farad, 5V capacitor.  It takes entire seconds to charge up.  I basically give it power until 1.1V, and let it drain to 1.0V.  Then I measure, with my nifty 10-bit ADC, just how much the voltage changes over a small number of microseconds.
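
  In Arduino-flavored C, the whole loop is about this big.  Pin numbers and thresholds are my illustrative assumptions, not a schematic; you’d also want a current-limiting resistor rather than charging a 1 Farad cap straight off a pin:

/* Hypothetical sketch of the capacitor TRNG loop: charge the cap,
   let it sag, and time the decay with the 10-bit ADC. */
#define CHARGE_PIN 7       /* drives the capacitor (assumed wiring) */
#define SENSE_PIN  A0      /* 10-bit ADC: 0..1023 over 0..5V */

void setup() {
  Serial.begin(115200);
  pinMode(CHARGE_PIN, OUTPUT);
}

void loop() {
  /* charge until ~1.1V: 1.1/5.0 * 1023 is about 225 counts */
  digitalWrite(CHARGE_PIN, HIGH);
  while (analogRead(SENSE_PIN) < 225) { /* still charging */ }
  digitalWrite(CHARGE_PIN, LOW);

  /* time the drain down to ~1.0V (about 205 counts) */
  unsigned long t0 = micros();
  while (analogRead(SENSE_PIN) > 205) { /* still draining */ }
  unsigned long dt = micros() - t0;

  /* the low bits of the decay time are the raw, unwhitened sample */
  Serial.println(dt & 0xFF);
}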

  Most, maybe all, TRNGs come down to measuring a slow clock with a fast clock.  Humans are pretty good at keeping rhythm at the scale of tens of milliseconds.  Measure us to the nanosecond, and that’s just not what our meat circuits can do consistently.

   How much measurement is enough?  10 bits of resolution to model the behavior of trillions of electrons doesn’t seem like much.  There’s structure in the data of course, but I only need to think I have about 128 bits before I can do what you do, and seed a CSPRNG with the quantum bits.  It’ll prevent any analysis of the output that might be, you know, correlated with temperature or power line conditions or whatnot.

  And that’s the thing with so-called True RNGs, or TRNGs.  Quantum Physics shapes the fundamental entropy of the universe whether you like it or not, but everything that sits between that entropy and your measurement acts as a gateway filter: it takes the data you are most confident lacks any predictable structure, and adds predictable structure.  So whenever we build a TRNG, we always overcollect, and very rarely directly expose.  The great thing about TRNGs is — who knows what junk is in there?  The terrifying thing about TRNGs is, not you either.
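
  A hedged sketch of overcollect-and-rarely-expose, gluing together pieces from earlier: fold far more raw samples than you need into an XTEA key, then only ever hand out the enciphered counter stream.  read_raw_sample() is a hypothetical stand-in for the capacitor rig, and the fold is illustrative, not a vetted extractor:

#include <stdint.h>

extern uint8_t read_raw_sample(void);    /* hypothetical hardware hook */
void encipher(unsigned int num_rounds, uint32_t v[2], uint32_t const key[4]);

void seed_and_emit(uint32_t out[2], uint64_t counter) {
    uint32_t key[4] = {0, 0, 0, 0};
    /* overcollect: 1024 raw samples folded into 128 bits of key */
    for (int i = 0; i < 1024; i++)
        key[i & 3] = key[i & 3] * 33 + read_raw_sample();  /* djb-style fold */
    out[0] = (uint32_t)counter;
    out[1] = (uint32_t)(counter >> 32);
    encipher(32, out, key);              /* expose only the whitened output */
}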

  In researching this post, I found the most entertaining paper:  Precise Monte Carlo Simulation of Single Photon Detectors (https://arxiv.org/pdf/1411.3663.pdf).  It had this quote:

Using a simple but very demanding example of random number generation via detection of Poissonian photons exiting a beam splitter, we present a Monte Carlo simulation that faithfully reproduces the serial autocorrelation of random bits as a function of detection frequency over four orders of magnitude of the incident photon flux.

  See, here is where quantum nerds and crypto nerds diverge.

  Quantum nerds:  “Yeah, detectors suck sometimes, universe is fuzzy whatcha gonna do”

  Crypto nerds:  “SERIAL AUTOCORRELATION?!  THAT MEANS YOUR RANDOM BITS ARE NOT RANDOM”

  Both are wrong, both are right, damn superposition.  It might be interesting to investigate further.
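
  If you want to side with the crypto nerds for a moment, the complaint is a one-function check: lag-1 serial autocorrelation, which should hover near zero for independent bits.  A sketch, not a test suite:

/* Lag-1 serial autocorrelation of a bit sequence (values 0 or 1).
   Persistent nonzero output means structure in your "random" bits. */
double lag1_autocorr(const unsigned char *bits, int n) {
    double mean = 0, num = 0, den = 0;
    for (int i = 0; i < n; i++) mean += bits[i];
    mean /= n;
    for (int i = 0; i < n - 1; i++)
        num += (bits[i] - mean) * (bits[i + 1] - mean);
    for (int i = 0; i < n; i++)
        den += (bits[i] - mean) * (bits[i] - mean);
    return den == 0 ? 0 : num / den;
}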

-----

  You may have noticed throughout this post that I use the phrase randomness, instead of entropy.  That is because entropy is a term that cryptographers borrowed from physicists.  For us, entropy is just an abstract measure of how much we’d have to work if we threw up our hands on the whole cryptanalysis enterprise and just tried every possibility.  For experimental physicists, entropy is something of a thing, a condition, that you can remove from a system like coal on a cart powered by a laser beam.

  Maybe we should do that.  Let me explain.  There is a pattern, when we’re attacking things, that the closer you get to the metal the more degrees of freedom you have to mess with its normal operations.  One really brutal trick involves bypassing a cryptographic check, by letting it proceed as expected in hardware, and then just not providing enough electrons to the processor at the very moment it needs to report the failure.  You control the power, you control the universe.

   Experimental physicists control a lot of this particular universe.  You know what sort of cryptographic attack we very rarely get to do?  A chosen key attack.

  Maybe we should strip as much entropy from a quantum system as physically possible, and see just how random things are inside the probability distributions that erupt upon stimulation.  I don’t think we’ll see any distributional deviations from quantum mechanics, but we might see motifs (to borrow a phrase from bioinformatics) — sequences of precise results that we’ve seen before.  Coarse grain identity, fine grain repeats.
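
  The motif hunt itself is bog-standard computing: hash every window of quantized results, count repeats, then go verify any bucket that fires twice.  Window length and table size below are assumptions:

#include <stdint.h>
#include <stdio.h>

#define W 8                  /* motif window length, an assumption */
#define TABLE (1 << 20)

static uint32_t counts[TABLE];

/* Count repeated windows of measurement results.  A real pass would
   verify matches, since distinct windows can share a bucket. */
void scan_motifs(const uint16_t *samples, int n) {
    for (int i = 0; i + W <= n; i++) {
        uint32_t h = 0;
        for (int j = 0; j < W; j++)
            h = 33 * h + samples[i + j];      /* djb-style window hash */
        counts[h % TABLE]++;
    }
    for (uint32_t b = 0; b < TABLE; b++)
        if (counts[b] > 1)
            printf("bucket %u seen %u times\n", (unsigned)b, (unsigned)counts[b]);
}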

  Worth taking a look.  Obviously, I don’t need to tell physicists how to remove entropy from their system.  But it might be worth mentioning: if you take the dimensions that aren’t specified to matter and make them prime integer multiples of a size the system is known to care about, you might see unexpected peaks, as integer relationships in unknown equations expose themselves by sharing factors with your experimental setup.  I’m not quite sure you’ll find anything, and you’ll have to introduce some slop (and compensate for things like signals propagating at different speeds, as photons in free space versus electronic vibrations within objects).  But maybe, if this isn’t already common exploratory experimental practice, you’ll find something cool.

   I know, I’m using the standard hacker attack patterns where they kind of don’t belong.  Quantum Physics has been making some inroads into crypto though, and the results have been interesting.  If you think input validation is hard now, imagine if packet inspection was made illegal by the laws of the Universe.  There was actually this great presentation at CCC a few years ago that achieved 100% key recovery on common quantum cryptographic systems — check it out.

   So maybe there’s some links between our two worlds, and you’ll grant me some leeway to speculate wildly (if you’ve read this far, I’m hoping you already have).  Let’s imagine for a moment, that in the organization I’ll someday run with a small army dedicated to fixing the Internet, I’ve got a couple of punk experimentalist grad students who know their way around an optics table and still have two eyes.  What would I suggest they do?

  I see lots of experiments providing positive confirmation of quantum mechanics, which is to be expected because the math works.  But you know, I’d try something else.  A lot of the cooler results from Quantum Physics show up in the two slit experiment, where coherent light is shined through two slits and interferes as waves on its way to a detector.  It’s amazing, particularly since it shows up even when there’s only one photon, or one electron, going through the slits.  There’s nothing else to interfere with!  Very cool.

  There’s a lot of work going on in showing interference patterns in larger and larger things.  We don’t quite know why the behaviors correctly predicted by Quantum Physics don’t show up in, like, baseballs.  The line has to be somewhere, we don’t know why or where.  That’s interesting work!  I might do something else, though.

  There exists an implemented behavior:  An interference pattern.  It is fragile, it only shows up in particular conditions.  I would see what breaks that fragile behavior, that shouldn’t.  The truth about hacking is that as creative as it is, it is the easy part.  There is no human being on the planet that can assemble a can of Coca-Cola, top to bottom.  Almost any person can destroy a can though, along with most of the animal kingdom and several natural processes.

  So yes.  I’m suggesting fuzzing quantum physics.  For those who don’t know, a lot of systems will break if you just throw enough crap at the wall.  Eventually you’ll hit some gap between the model a developer had in his mind for what his software did, and what behaviors he actually shipped.

  Fuzzing can be completely random, and find lots of problems.  But one of the things we’ve discovered over the years is that understanding what signals a system is used to processing, and composing them in ways a system is not used to processing, exposes all sorts of failure conditions.  For example, I once fuzzed a particular web browser.  Those things are huge!  All sorts of weird parsers, that can be connected in almost but not quite arbitrary ways.  I would create these complex trees of random objects, would move elements from one branch to another, would delete a parent while working on a child, and all the while, I’d stress the memory manager to make sure the moment something was apparently unneeded, it would be destroyed.
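
  The skeleton of that kind of fuzzer is almost insultingly simple; all the value is in the composition.  A toy version, minus the browser and the memory manager:

#include <stdio.h>
#include <stdlib.h>

#define N 64

typedef struct { int parent; int type; int alive; } Node;

int main(void) {
    Node nodes[N];
    srand(1234);                                /* fixed seed: reproducible chaos */
    nodes[0] = (Node){ -1, 0, 1 };              /* root */
    for (int i = 1; i < N; i++)                 /* random starting tree */
        nodes[i] = (Node){ rand() % i, rand() % 7, 1 };

    for (int step = 0; step < 1000; step++) {
        int i = 1 + rand() % (N - 1);
        switch (rand() % 3) {
        case 0: nodes[i].parent = rand() % N; break;  /* reparent, cycles allowed */
        case 1: nodes[i].alive = 0; break;            /* delete a node out from under its children */
        case 2: nodes[i].type = rand() % 7; break;    /* mutate in place */
        }
        /* a real target would re-render here; we just walk toward the root */
        int hops = 0;
        for (int j = i; j >= 0 && hops < N; j = nodes[j].parent, hops++)
            if (!nodes[j].alive) break;               /* a live child holding a dead ancestor */
    }
    printf("survived\n");
    return 0;
}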

  I tell you, I’d come to work the next day and it’d be like Christmas.  I wonder what broke today!  Just because it can compose harmlessly, does not at all mean it will.  Shared substrates like the universe of gunk lashing a web browser together never entirely implement their specifications perfectly.  The map is not the territory, and models are always incomplete.

  Here’s the thing.  We had full debuggers set up for our fuzzers.  We would always know exactly what caused a particular crash.  We don’t have debuggers for reality at the quantum scale, though wow, I wish we did.  Time travel debugging would be awesome.  

  I want to be cautious here, but I think this is important to say.  Without a debugger, many crashes look identical.  You would not believe the number of completely different things that can cause a web browser to give up the ghost.  Same crash experience every time, though.  Waves, even interference waves, are actually a really generic failure mode.  The same slits that will pass photons, will also pass air molecules, will also pass water molecules.  Stick enough people in a stadium and give them enough beer and you can even make waves out of people.

  They’re not the same waves, they don’t have the same properties, that’s part of the charm of Quantum Physics.  Systems at different scales do behave differently.  The macro can be identical, the micro can be way, way different.

  Interference is fairly intuitive for multi-particle systems.  Alright, photons spin through space, have constructive and destructive modes when interacting in bulk, sure.  It happens in single photon and electron systems too, though.  And as much as I dislike non-locality, the experiment is always right.  These systems behave as if they know all the paths they could take, and choose one.

  This does not necessarily need to be happening for the same reasons in single photon systems, as it is in long streams of related particles.  It might be!  But, it’s important to realize, there won’t just be waves from light, air, and water.  Those waves will have similarities, because while the mechanisms are completely different, the ratios that drive them remain identical (to the accuracy of each regime).

  Bug collisions are extremely annoying.

  I know I’m speaking a bit out of turn.  It’s OK.  I’m OK with being wrong, I just generally try to not be, you know.  Not even wrong.  What’s so impressive about superposition is that the particle behaves in a manner that belies knowledge it should not have.  No cryptographic interpretation of the results of Quantum Physics can explain that; you cannot operate on data you do not have.  Pilot wave theory is a deterministic conception of quantum physics, not incompatible at all with this cryptographic conjecture, but it too has given up on locality.  You need to have an input, to account for it in your output.

  But the knowledge of the second slit is not necessarily absent from the universe as perceived by the single photon.  Single photon systems aren’t.  It’s not like they’re flying through an infinitely dark vacuum.  There’s black body radiation everywhere, bouncing off the assembly, interfering through the slits, making a mess of things.  I know photons aren’t supposed to feel the force of others at different wavelengths, but we’re talking about the impact on just one.  Last I heard, there’s a tensor field of forces everything has to go through, maybe it’s got a shadow.  And the information required is some factor of the ratio between slits, nothing else.  It’s not nothing but it’s a single value.

  The single particle also needs to pass through the slits.  You know, there are vibratory modes.  Every laser assembly I see isolates the laser from the world.  But you can’t stop the two slits from buzzing, especially when they’re being hit by all those photons that don’t miss the assembly.  Matter is held together by electromagnetic attraction; a single photon versus a giant hunk of mass has more of an energy differential than myself and Earth.  There doesn’t need to be much signal transfer there, to create waves.  There just needs to be transfer of the slit distance.

Might be interesting to smoothly scale your photon count from single photon in the entire assembly (not just reaching the photodetector), through blindingly bright, and look for discontinuities.  Especially if you’re using weak interactions to be trajectory aware.

  In general, change things that shouldn’t matter.  There are many other things that have knowledge of the second photon path.  Reduce the signal so that there’s nothing to work on, or introduce large amounts of noise so it doesn’t matter that the data is there.  Make things hot, or cold.  Introduce asymmetric geometries: make a photon entering the left slit see a different (irrelevant) reality than the photon entering the right.  As in, there are three slits, and nothing will even reach the middle slit because it’s blocked by a mirror routing it to the right slit, but the vibratory mode between left and middle is different than that for middle and right.  Or at least use different shapes between the slits, so that the vibratory paths are longer than crow-flies distance.  Add notch filters and optical diodes where they shouldn’t do anything.  Mirrors and retroreflectors too.  Use weird materials: ferromagnetic, maybe, or anti-ferromagnetic.  Bismuth needs its day in the sun.  Alter density; I’m sure somebody’s got some depleted uranium around, and gravity’s curvature of space might not be so irrelevant.

  Slits are great, they’re actually not made out of anything!  You know what might be a great thing to make two slits out of?  Three photodetectors!  Actually, cell phones have gotten chip sensors to be more sensitive than the human eye, which in the right conditions is itself a single photon detector.  I wonder just what a Sony ISX-017 (“Starvis”) can do.

You know what’s not necessarily taking nanoseconds to happen?  Magnetization!  It can occur in femtoseconds and block an electron from the right slit while the left slit is truly none the wiser.  Remember, you need to try each mechanism separately, because the failure mode of anything is an interference pattern.

   Just mess with it!  Professors, tell your undergrads, screw things up.  Don’t set anything on fire.  You might not even have to tell them that.

  And then you go set something on fire, and route your lasers through it.  Bonus points if they’re flaming hoops.  You’ve earned it.

  I’ll be perfectly honest.  If any of this works, nobody would be more surprised than me.  But who knows, maybe this will be like that time somebody suggested we just send an atomic clock into space to unambiguously detect time dilation from relativity.  A hacker can dream!  I don’t want to pretend to be telling anyone how the universe works, because how the heck would I know.  But maybe I can ask a few questions.  Perhaps, strictly speaking, this is a disproof of Bell’s Theorem that is not superdeterminism.  Technically a theory does not need to be correct to violate his particular formulation.  It might actually be the case that this… Quantum Encraption is a local hidden variable theory that explains all the results of quantum mechanics.

–Dan

P.S. This approach absolutely does not predict a deterministic universe.  Laser beams eventually decohere, just not immediately.  Systems can absolutely have a mix of entropy sources, some good, some not.  It takes very, very little actual universal entropy to create completely unpredictable chaos, and that’s kind of the point.  The math still works just as predictably even with no actual randomness at all.  Only if all entropy sources were deterministic at all scales could the universe be as well.  And even then, the interaction of even extremely weak cryptosystems is itself strongly unpredictable over the scale of, I don’t know, billions of state exchanges.  MD5 is weak, a billion rounds of MD5 is not.  So there would be no way to predict or influence the state of the universe even given perfect determinism without just outright running the system.

[edit]P.P.S. “There is no outcome in quantum mechanics that cannot be handled by encraption, because if there was, you could communicate with it.”  I’m not sure that’s correct, but you know what passes the no communication theorem really easily?  No communication.  Also, please, feel free to mail me privately at dan@doxpara.com or comment below.

Diving into the Issues: Observations from SOURCE and AtlSecCon

Last week I had the pleasure of presenting three times, at two conferences, in two different countries: SOURCE in Boston, MA and at the Atlantic Security Conference (AtlSecCon) in Halifax, NS, Canada.

The first event of my week was SOURCE Boston. This year marked the tenth anniversary of SOURCE Conference and it continues to pride itself on being one of the only venues that brings business, technology and security professionals together under one roof to focus on real-world, practical security solutions for some of today’s toughest security issues. Though I was only there for the first day, I was able to catch up with friends, play some Hacker Movie Trivia with Paul Asadoorian (@securityweekly), and chat with attendees on some of the biggest challenges we face around detecting and mitigating ransomware attacks.

After my presentation, I rushed off to Logan Airport to sit in what I now choose to call the “Air Canada Ghetto” – a small three-gate departure area segregated from the rest of the airport and its amenities. A minor four-hour delay later, I was on my way to Halifax for AtlSecCon.

Between meetings and casual conversations I was enlightened by several presentations. Raf Los (@Wh1t3Rabbit), managing director of solutions research & development at Optiv, discussed Getting Off the Back Foot – Employing Active Defence, which laid out an outcome-oriented and capabilities-driven model for more effective enterprise security.

After his talk, Aunshul Rege (@prof_rege), an assistant professor with the Criminal Justice department at Temple University, gave a very interesting talk entitled Measuring Adversarial Behavior in Cyberattacks. With a background in criminology, Aunshul presented her research from observations and interviews conducted at the Industrial Control Systems Computer Emergency Response Team’s (ICS-CERT) Red/Blue cybersecurity training exercise held at Idaho National Laboratory. Specifically, she covered how adversaries might engage in research and planning, offer team support, manage conflict between group members, structure attack paths (intrusion chains), navigate disruptions to their attack paths, and how limited knowledge bases and self-induced mistakes can possibly impact adversaries.

The last presentation I caught was Mark Nunnikhoven’s (@marknca) Is Your Security Team Set up To Fail? Mark, the VP of cloud research at Trend Micro and a personal friend, examined the current state of IT security programs and teams, delving into the structure, goals, and skills prioritized by the industry.

The second day of the conference was filled with meetings for me but I was able to sit through Michael Joyce’s talk entitled A Cocktail Recipe for Improving Canadian Cybersecurity.  Joyce described the goals and objectives of The Smart Cybersecurity Network (SERENE-RISC) – a federally funded, not-for-profit knowledge mobilization network created to improve the general public’s awareness of cybersecurity risks and to empower all to mitigate them through knowledge. He was an excellent presenter, and his talk served as a call to action for those looking to help communicate the need for cybersecurity to all Canadians.

At both conferences I presented my latest talk entitled The Not-So-Improbable Future of Ransomware which explored how thousands of years of human kidnap and ransom doctrine have served as a playbook for ransomware campaign operators to follow. It was well received by both audiences and sparked follow-up conversations and discussions throughout the week. The SOURCE version can be found here and the AtlSecCon version here.

The talk received some early praise on the SOURCE session, in addition to written pieces by Bill Brenner (@billbrenner70) from Sophos and Taylor Armerding (@tarmerding2) from CSO.

At AtlSecCon I joined a panel entitled Security Modelling Fundamentals: Should Security Teams Model a SOC Around Threats or Just Build Layers? Chaired by Tom Bain (@tmbainjr1), VP of marketing at CounterTack, the session served as a potpourri of security threats and trends ranging from ransomware, to regulation, to attack mitigation. It was quite fun and a great way to end the day.

Though it was a long series of flights home to the Bay Area I thoroughly enjoyed both conferences. I would highly recommend attending and/or speaking at both next year if you are provided with the opportunity.

Next up, (ISC)² CyberSecureGov 2017 in Washington, D.C. and the Rocky Mountain Information Security Conference (RMISC) in Denver, CO. Perhaps I’ll see some of our readers there!

The post Diving into the Issues: Observations from SOURCE and AtlSecCon appeared first on LEO Cyber Security.

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies." and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard or impossible at times. And there's some truth to that. There are way too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented it. Or, the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when in reality, it was a simple phishing attack where credentials were simply handed over.

In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Access Security Brokers can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to implement an advanced crypto attack which would take enormous resources and time to conduct and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the security stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).
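
Here is a hedged sketch of that app-tier protection using libsodium (my choice of library, an assumption; any vetted crypto library works): the password gets a one-way hash, and the SSN gets authenticated encryption under a key that lives with the application, so even a fully authorized database login only ever sees ciphertext.

#include <sodium.h>
#include <string.h>

/* Sketch only: hash the password, encrypt the SSN at the app tier.
   Assumes the SSN fits in 16 bytes and the caller manages the key. */
int protect_fields(const char *password, const char *ssn,
                   const unsigned char key[crypto_secretbox_KEYBYTES],
                   char pw_hash_out[crypto_pwhash_STRBYTES],
                   unsigned char ssn_ct_out[crypto_secretbox_NONCEBYTES +
                                            crypto_secretbox_MACBYTES + 16]) {
    if (sodium_init() < 0 || strlen(ssn) > 16) return -1;

    /* passwords: one-way hash, never encrypt */
    if (crypto_pwhash_str(pw_hash_out, password, strlen(password),
                          crypto_pwhash_OPSLIMIT_INTERACTIVE,
                          crypto_pwhash_MEMLIMIT_INTERACTIVE) != 0)
        return -1;

    /* SSN: authenticated encryption; the nonce rides along with the ciphertext */
    unsigned char *nonce = ssn_ct_out;
    randombytes_buf(nonce, crypto_secretbox_NONCEBYTES);
    return crypto_secretbox_easy(ssn_ct_out + crypto_secretbox_NONCEBYTES,
                                 (const unsigned char *)ssn, strlen(ssn),
                                 nonce, key);
}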

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes.