Category Archives: IT security

Application Security and Software Development

Due to faster connectivity and the lower barriers to application development with open source software, the number of applications and data held by organizations has continued to grow. As a

The post Application Security and Software Development appeared first on The Cyber Security Place.

Why You Need a Concrete Incident Response Plan (Not Strategy)

Recently, I had the privilege to be part of a four-person discussion panel at a security event in London where the topic was about incident response. The panel was hosted by another security professional, and over 50 professionals from the industry were present in the audience. I’ve worked in information security for 15 years, and […]… Read More

The post Why You Need a Concrete Incident Response Plan (Not Strategy) appeared first on The State of Security.

Why the CISO’s Voice Must be Heard Beyond the IT Department

In a recent company board strategy meeting, the CFO presented the financial forecast and outcome and made some interesting comments about fiscal risks and opportunities on the horizon. The COO

The post Why the CISO’s Voice Must be Heard Beyond the IT Department appeared first on The Cyber Security Place.

The Importance of “S” in “CISO”

A Chief Information Security Officer is the brigadier general of the security force of an organization. While the c-suite normally looks at the financial and overall management of an organization,

The post The Importance of “S” in “CISO” appeared first on The Cyber Security Place.

How Corporate Boards Can Be More Proactive Mitigating Cyber Risks

Many corporate boards have made significant progress in understanding the importance of cyber security to the competitive health and sustainability of the companies they oversee. They’ve certainly gotten the message

The post How Corporate Boards Can Be More Proactive Mitigating Cyber Risks appeared first on The Cyber Security Place.

What is the challenge in embracing multi-factor authentication?

Only 20% of IBM mainframe customers are embracing multi-factor authentication to protect data and applications, according to findings from a new poll of 81 mainframe users conducted by Macro 4

The post What is the challenge in embracing multi-factor authentication? appeared first on The Cyber Security Place.

Center for Connected Medicine Polls Top Health Systems About 2019 Priorities

Cybersecurity is still the big one. But interoperability and telehealth are not far behind for leading organizations’ technology goals. The Center for Connected Medicine polled IT executives across 38 health

The post Center for Connected Medicine Polls Top Health Systems About 2019 Priorities appeared first on The Cyber Security Place.

Security vs. Compliance: What’s the Difference?

Security and compliance are often said in the same breath as if they are two sides of the same coin, two members of the same team or two great tastes that go great together. As much as I would like to see auditors and developers (or Security Analysts) living in harmony like a delicious Reese’s […]… Read More

The post Security vs. Compliance: What’s the Difference? appeared first on The State of Security.

Securing your company’s supply chain with objective information

By Ewen O’Brien, EMEA Director at BitSight. Understanding the risk posed by third- and fourth-party companies can help mitigate security problems. In light of the almost daily news of companies

The post Securing your company’s supply chain with objective information appeared first on The Cyber Security Place.

Cyber Security Implementation: Firms Want It, But Less Do It, Finds Survey

Most respondents to a survey say cyber security implementation is critical, but only half think they are resilient enough to protect against cyber attacks. Despite 99% of respondents stating that

The post Cyber Security Implementation: Firms Want It, But Less Do It, Finds Survey appeared first on The Cyber Security Place.

Strategising for the New Year: What Will IT Security Teams Face in 2019

As the stores start to stock Christmas-related goods, the radio stations slowly introduce festive music and social media is awash with countdown memes, it can mean only one thing. It is time for those 2019 IT security trend prediction articles, blogs and reports. Who are we to buck a tradition? Infinigate UK has had a […]… Read More

The post Strategising for the New Year: What Will IT Security Teams Face in 2019 appeared first on The State of Security.

Improve Security by Thinking Beyond the Security Realm

It used to be that dairy farmers relied on whatever was growing in the area to feed their cattle. They filled the trough with vegetation grown right on the farm. They probably relied heavily on whatever grasses grew naturally and perhaps added some high-value grains like barley and corn. Today, with better technology and knowledge, dairy farmers work with nutritionists to develop a personalized concentrate of carbohydrates, proteins, fats, minerals, and vitamins that gets added to the natural feed. The result is much healthier cattle and more predictable growth.

We’re going through a similar enlightenment in the security space. To get the best results, we need to fill the trough that our Machine Learning will eat from with high-value data feeds from our existing security products (whatever happens to be growing in the area) but also (and more precisely for this discussion) from beyond what we typically consider security products to be.

In this post to the Oracle Security blog, I make the case that "we shouldn’t limit our security data to what has traditionally been in-scope for security discussions" and how understanding Application Topology (and feeding that knowledge into the security trough) can help reduce risk and improve security.

Click to read the full article: Improve Security by Thinking Beyond the Security Realm

QuickTip: How to secure your Mikrotik/RouterOS Router and especially Winbox

I didn’t post anything about the multiple security problems in the Mikrotik Winbox API, as I thought that whoever leaves the management of a router open to the Internet should not configure routers at all. Of course it is common sense to open the management interface only on internal network interfaces and only to the source IP addresses you manage the routers from. But as this is a quick tip, I’ll show you how I have configured my Mikrotiks for years.

/ip service
set telnet address=0.0.0.0/0 disabled=yes
set ftp address=0.0.0.0/0 disabled=yes
set www address=0.0.0.0/0 disabled=yes
set ssh address=10.7.0.0/16
set api disabled=yes
set winbox address=127.0.0.1/32
set api-ssl disabled=yes

As you can see, I’ve only enabled ssh and winbox, and winbox is only listening on localhost. ssh is additionally protected by the firewall so that it is only reachable from my admin network. I also disable the weak ciphers:

/ip ssh set strong-crypto=yes

And I’ve configured public key authentication for the ssh access. Now your question is: how do you access the router with Winbox? Simple, use ssh port forwarding. That way the Winbox API is only accessible to users who have a valid ssh logon – and ssh is much more robust and secure than Winbox. On Linux the port forwarding is done like this:

ssh -L 8291:127.0.0.1:8291 admin@<mikrotik>

On Windows you can do the same with PuTTY. In Winbox, just connect to localhost.

Convergence is the Key to Future-Proofing Security

I published a new article today on the Oracle Security blog that looks at the benefits of convergence in the security space as the IT landscape grows more disparate and distributed.

Security professionals have too many overlapping products under management and it's challenging to get quick and complete answers across hybrid, distributed environments. It's challenging to fully automate detection and response. There is too much confusion about where to get answers, not enough talent to cover the skills requirement, and significant hesitation to put the right solutions in place because there's already been so much investment.

Here are a couple of excerpts:
Here’s the good news: Security solutions are evolving toward cloud, toward built-in intelligence via Machine Learning, and toward unified, integrated-by-design platforms. This approach eliminates the issues of product overlap because each component is designed to leverage the others. It reduces the burden related to maintaining skills because fewer skills are needed and the system is more autonomous. And, it promotes immediate and automated response as opposed to indecision. While there may not be a single platform to replace all 50 or 100 of your disparate security products today, platforms are emerging that can address core security functions while simplifying ownership and providing open integration points to seamlessly share security intelligence across functions.
 ...
 Forward-looking security platforms will leverage hybrid cloud architecture to address hybrid cloud environments. They’re autonomous systems that operate without relying on human maintenance, patching, and monitoring. They leverage risk intelligence from across the numerous available sources. And then they rationalize that data and use Machine Learning to generate better security intelligence and feed that improved intelligence back to the decision points. And they leverage built-in integration points and orchestration functionality to automate response when appropriate.
Click to read the full article: Convergence is the Key to Future-Proofing Security

Some VPN providers leak your IPv6 IP address

Just a short note. Today a friend called and asked if I could help him get TV streaming from US TV stations running. When I looked at it, he had even selected a VPN provider which offers servers in the US to circumvent the geo restrictions, but it still didn’t work. He showed me the NBC website, where the first ad was shown and then the screen stayed black. Having no experience with VPN providers and TV streaming sites, I first checked the OpenVPN configuration and made sure that the routing table was correct (sending all non-local traffic through the VPN). It looked good, so I opened the developer tools in the browser and saw the following request repeating.

 

Searching the Internet did not provide an answer … so I just tried to download the file with wget and got the following:

$ wget http://nbchls-prod.nbcuni.com/tve-adstitch/4421/xxxx-1.ts
--2018-08-10 19:20:20-- http://nbchls-prod.nbcuni.com/tve-adstitch/4421/xxxx-1.ts
Resolving nbchls-prod.nbcuni.com (nbchls-prod.nbcuni.com)... 2600:1406:c800:495::308, 2600:1406:c800:486::308, 104.96.129.98
Connecting to nbchls-prod.nbcuni.com (nbchls-prod.nbcuni.com)|2600:1406:c800:495::308|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2018-08-10 xx:xx:xx ERROR 403: Forbidden.

Seeing this it hit me … it’s using IPv6 … so I did a quick check with

% wget -4 http://nbchls-prod.nbcuni.com/tve-adstitch/4421/xxxx-1.ts
--2018-08-10 19:20:30-- http://nbchls-prod.nbcuni.com/tve-adstitch/4421/xxxx-1.ts
Resolving nbchls-prod.nbcuni.com (nbchls-prod.nbcuni.com)... 104.96.129.98
Connecting to nbchls-prod.nbcuni.com (nbchls-prod.nbcuni.com)|104.96.129.98|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 242520 (237K)

So with an IPv4 request it worked. His VPN provider was leaking the IPv6 traffic to the Internet – that is potentially a security/privacy problem, as many people use a VPN provider to hide themselves! So make sure to check this before relying on a VPN for security/privacy.
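If you want a quick check for this kind of leak on a client, here is a minimal Python sketch (my own quick hack, not part of any VPN product). It only asks the kernel whether there is a global IPv6 route at all and which source address it would use – no packet is sent. If it prints an address, make sure that route really goes through the tunnel:

import socket

# Does this host have a global IPv6 route that could bypass an IPv4-only VPN tunnel?
# connect() on a UDP socket sends no packets; it only asks the kernel for the route
# and source address it would pick. 2001:4860:4860::8888 is just an arbitrary
# global IPv6 target (Google DNS); nothing is actually sent to it.
s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
try:
    s.connect(("2001:4860:4860::8888", 53))
    print("IPv6 egress via", s.getsockname()[0], "- check that this goes through the VPN")
except OSError:
    print("no global IPv6 route - nothing to leak over IPv6")
finally:
    s.close()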

New World, New Rules: Securing the Future State

I published an article today on the Oracle Cloud Security blog that takes a look at how approaches to information security must adapt to address the needs of the future state (of IT). For some organizations, it's really the current state. But, I like the term future state because it's inclusive of more than just cloud or hybrid cloud. It's the universe of Information Technology the way it will be in 5-10 years. It includes the changes in user behavior, infrastructure, IT buying, regulations, business evolution, consumerization, and many other factors that are all evolving simultaneously.

As we move toward that new world, our approach to security must adapt. Humans chasing down anomalies by searching through logs is an approach that will not scale and will not suffice. I included a reference in the article to a book called Afterlife. In it, the protagonist, FBI Agent Will Brody says "If you never change tactics, you lose the moment the enemy changes theirs." It's a fitting quote. Not only must we adapt to survive, we need to deploy IT on a platform that's designed for constant change, for massive scale, for deep analytics, and for autonomous security. New World, New Rules.

Here are a few excerpts:
Our environment is transforming rapidly. The assets we're protecting today look very different than they did just a few years ago. In addition to owned data centers, our workloads are being spread across multiple cloud platforms and services. Users are more mobile than ever. And we don’t have control over the networks, devices, or applications where our data is being accessed. It’s a vastly distributed environment where there’s no single, connected, and controlled network. Line-of-Business managers purchase compute power and SaaS applications with minimal initial investment and no oversight. And end-users access company data via consumer-oriented services from their personal devices. It's grown increasingly difficult to tell where company data resides, who is using it, and ultimately where new risks are emerging. This transformation is on-going and the threats we’re facing are morphing and evolving to take advantage of the inherent lack of visibility.
Here's the good news: The technologies that have exacerbated the problem can also be used to address it. On-premises SIEM solutions based on appliance technology may not have the reach required to address today's IT landscape. But, an integrated SIEM+UEBA designed from the ground up to run as a cloud service and to address the massively distributed hybrid cloud environment can leverage technologies like machine learning and threat intelligence to provide the visibility and intelligence that is so urgently needed.
Machine Learning (ML) mitigates the complexity of understanding what's actually happening and of sifting through massive amounts of activity that may otherwise appear to humans as normal. Modern attacks leverage distributed compute power and ML-based intelligence. So, countering those attacks requires a security solution with equal amounts of intelligence and compute power. As Larry Ellison recently said, "It can't be our people versus their computers. We're going to lose that war. It's got to be our computers versus their computers."
Click to read the full article: New World, New Rules: Securing the Future State.

Hyperbole in Breach Reporting

While reading the news this morning about yet another successful data breach, I couldn't help but wonder if the hyperbole used in reporting about data breaches is stifling our ability to educate key stakeholders on what they really need to know.

Today's example is about a firm that many rely on for security strategy, planning, and execution. The article I read stated that they were "targeted by a sophisticated hack" but later explains that the attacker compromised a privileged account that provided unrestricted "access to all areas". And, according to sources, the account only required a basic password with no two-step or multi-factor authentication. That doesn't sound too sophisticated, does it? Maybe they brute-forced it, or maybe they just guessed the password (or found it written down in an office?)

It reminded me of an attack on a security vendor back in 2011. As I recall, there was a lot of talk of the sophistication and complexity of the attack. It was called an Advanced Persistent Threat (and maybe some aspects of it were advanced). But, when the facts came out, an employee simply opened an email attachment that introduced malware into the environment - again, not overly sophisticated in terms of what we think a hack to be.

The quantity, availability, and effectiveness of attack techniques are enough to make anyone uncomfortable with their security posture. I previously wrote about a German company who, in a breach response, wrote that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." CISOs are being told that they should expect to be breached. The only questions are about when and how to respond. It makes you feel like there's no hope; like there's no point in trying.

However, if you look at the two examples above that were described as highly sophisticated, they may have been avoided with simple techniques such as employee education, malware detection, and multi-factor authentication. I don't mean to over-simplify. I'm not saying it's all easy or that these companies are at-fault or negligent. I'm just calling for less hyperbole in the reporting. Call out the techniques that help companies avoid similar attacks. Don't describe an attack as overly sophisticated if it's not. It makes people feel even more helpless when, perhaps, there are some simple steps that can be taken to reduce the attack surface.

I'd also advocate for more transparency from those who are attacked. Companies shouldn't feel like they have to make things sound more complicated or sophisticated than they are. There's now a growing history of reputable companies (including in the security industry) who have been breached. If you're breached, you're in good company. Let's talk in simple terms about the attacks that happen in the real world. An "open kimono" approach will be more effective at educating others in prevention. And again, less hyperbole - we don't need to overplay to emotion here. Everyone is scared enough. We know the harsh reality of what we (as security professionals) are facing. So, let's strive to better understand the real attack surface and how to prioritize our efforts to reduce the likelihood of a breach.

A security-minded guy forced to buy a WiFi-enabled cleaning robot

First I want to tell you that I wanted a vacuum cleaning robot without an Internet connection, but I couldn’t find one which fulfilled my requirements. At first I thought the DEEBOT M81 from ECOVACS would be such a device (a vacuum and mop combo that can be carried between rooms, as it works randomly), but don’t buy it if you have stairs. On its first day alone at home it went two floors down; somehow it looked okay and still worked after the kamikaze. We just needed to search for it through the whole house. After that I did some tests and found out that it stops six times at the stairs and falls down the seventh or eighth time. Searching the Internet showed me that I’m not the only one. The second problem was that configuring the timer differently for some days (like not cleaning on weekends) was not possible. After losing my last chance for a non-Internet-connected device, I went for the DEEBOT M81 Pro, which needs an Android or iPhone app and WiFi if you want to configure the timer for not cleaning on weekends. This is my story about – I guess – a typical IoT device.

 

The App – ECOVACS
After unpacking and charging the robot, I installed the app on my test mobile. Why not on my real mobile? Take a look at the required permissions:

I thought this was just an app to control my vacuum robot … guess not. Anyway, I installed it on my test system and created a dummy user. Of course I took a look at the traffic. First it connects to ecosphere-app.ecovacs-japan.com,

where it does an HTTPS connect. Hm, maybe that’s better than I thought. The TLS config of the server is bad, but at least it’s encrypted – so there is still hope.

Looking at the other traffic I saw an XMPP/Jabber connection (lbat.ecouser.net / 47.91.78.247), which was encrypted, but sadly with a self-signed certificate. I thought I’d take a look at that traffic via MitM later; let’s get it to work first.

Getting it to work
It looks like the robot creates an SSID for the app on the mobile to connect to after you press the WiFi button for more than 3 seconds, so the exchange of the WiFi password seems secure enough. But it took me almost an hour to get the robot to connect to my IoT network, and I didn’t find any information or tips online. I changed the following on my side to get it to work; maybe that helps somebody else:

  • I enabled the location services (which I disable by default) on the mobile, as I remembered that the WiFi Analyser app always tells me to enable them to see WiFi networks.
  • I needed to change my IoT network to support legacy WiFi modes. My normal setup is:

    I needed to change it to the following in order for the robot to be able to connect:

Robot traffic

The first request from the robot after getting an IP address is an HTTP connection to lbo.ecouser.net (47.91.78.247) on port 8007.

Hey, we know that IP address and port – that’s the Jabber server the app also connects to. But before the robot connects to the Jabber server, it does a second HTTP request, this time to an IP address (47.88.193.19:8005) and not a DNS name. That’s interesting:

That looks like a check for newer firmware … firmware updates over unencrypted HTTP … what could possibly go wrong here? As the request currently returns no new firmware, I can’t look at that more closely – something for the future. Checking the Shodan info on that IP address is interesting: it runs a portmapper and NTP server reachable from the Internet … is someone already using that as a DDoS amplifier? And I’m not even talking about the unconfigured nginx, which also leaks IP addresses in its certificate: IP Address:120.26.244.107, IP Address:121.41.41.198, IP Address:47.88.193.19

Let’s go back to the Jabber server the robot connects to. The app uses a channel “protected” by a self-signed certificate, but the robot connects completely in the clear – that’s nice, so I don’t need to do a MitM attack. The Wireshark trace is so full of information that I’m really not sure what I can show you without making it too easy for you to control my robot.

The following is shown in the screenshot (which shows only a part of the communication):

  • The logon to the server via PLAIN authentication, which consists of
    • username: the serial number of the device, which is also printed on the box the device is sold in.
    • password: looks like an MD5 hash of something, as it’s 32 hex chars – something to investigate
  • It shares its online status (presence status in Jabber terms) with the app
  • It gets asked for a version – I guess the firmware version – which it returns as 0.16.46; hope that’s already stable

Looking at later traffic, the following requests are issued by the app:

  • GetDeviceInfo
  • SetTime
  • GetChargeState
  • GetBatteryInfo
  • GetWKVer
  • GetError
  • GetOnOff
  • GetSched
  • GetLifeSpan

I didn’t control the device via the app; otherwise there would be many more commands.

Questions and thoughts

I don’t really see a pairing mechanism which makes sure that only the right app can control a robot, so it may be possible to control other robots. As the user ID used on the Jabber server is just the serial number with @141.ecorobot.net/atom added, it should be easy to guess additional user IDs. There is no need to know the password of the robot. On the other hand, it should be possible to create your own Jabber server and redirect the traffic to it. Also, writing a DIY app without all those app permissions should be possible and not too hard. The robot I bought is not that interesting for an attacker, as it cannot provide room layouts like the more expensive models do. The screenshots of the app show what is possible:

I guess I’ll wait for the next versions of the robots that provide a microphone and/or a camera – then it gets really interesting.

As I was able to configure the schedules via the app and set the time, I’ll check whether that still works when the robot is not able to connect to the Internet. If so, I’ll go that route and enable the Internet connection only when I need to change the schedules.

PS: you should really have a separate IoT network.

WannaCry happened and nobody called me during my vacation – I tell you why

I had been on a biking trip through Austria and Bavaria since last Wednesday when, on Friday, reading the mainstream media, the world broke down with WannaCry. OK, I thought, sensationalism by the mainstream media – but now that I’m home, I cannot believe what I read in tech blogs and the IT media. I won’t link all of them here, just the one I plainly can’t understand and with which I disagree in the strongest way possible – plainly telling us that patching is hard and we can’t do anything.

Let’s start with how a WannaCry infection spreads through a company.

  • The malware needed to get into the company network – be it via SMB ports (445 TCP) open to the Internet or via email. As I read through the articles it’s not 100% clear how the infection started – let’s assume both methods have been used for this post.
  • In the second case a user clicks on the attachment and the malware gets executed.
  • Then it searches through the company network and tries to infect other PCs with an RCE (remote code execution) exploit.
  • It encrypts the local hard drive.

Now let’s talk about why this should not have been a problem in your organization:

  • Port 445 TCP reachable from the Internet? Really? If you’re unsure, quickly go to Shodan and type net:"xxx.xxx.xxx.xxx/yy" into the search field, with xxx.xxx.xxx.xxx being your IP address range followed by the yy subnet mask, and check whether you know about all the open ports and services.
  • And now let’s take a look at all the stuff I wrote over a year ago about what you should have done before the Locky malware happened (yes, this is not the first ransomware making big waves) in order not to be affected: Stop panicking about the Locky ransomware [Update 2]
    • For the email infection vector:
      • Block EXE attachments in emails
      • Remove active code from Word, Excel and PowerPoint files by default
      • Block EXE downloads on the proxy
    • For both infection vectors:
      • Use application whitelisting – we moved to whitelisting for firewall rules a long time ago; it’s past time to do that for applications. Guess why there is not much iPhone malware – Apple is effectively whitelisting software.
      • Block client-to-client traffic – even if that is not possible on a day-to-day basis, it should be prepared so it can be enabled in case of an emergency.

    With either of the last two alone, a widespread infection would not have been possible.

  • Microsoft provided a patch on March 14 and called the vulnerability critical. Let’s take a look at when Microsoft calls a vulnerability critical and when important. The difference is that with important the user gets asked and then infected; with critical there is nothing, just infection. So important is remote code execution and critical is wormable remote code execution.
    And finally take a look at the following text from Microsoft: “Mitigating Factors: Microsoft has not identified any mitigating factors for this vulnerability.” To make it short: if you read about such a vulnerability in Windows and know that an exploit is in the wild, drop everything and start patching that hole at once.

Looking at the above, the malware/worm could have easily been blocked. Anyway, I now really want to take a look at the post linked above from the SMBlog by Steven M. Bellovin.

  • Because patching is very hard and very risky, and the more complex your systems are, the harder and riskier it is.
    That’s not true in this case.

    • Port 445 open to the Internet, no real network separation, a deactivated local Windows Firewall and SMB1 still activated on Windows client systems (see the Microsoft recommendation from 2016) – that’s not a patching problem, that’s a security policy failure (e.g. base hardening of operating systems)
    • standard client PCs (for the normal employee) not patched – I’m not talking about special systems – we patch thousands of PCs every month after the Windows patches are released and have had no problems in years. The special systems needed to get infected by something after all.
    • If non-company-managed client PCs got connected to the network and infected special systems, it’s a failure in network access control – plain and simple.
    • no mitigation prepared for a case like a worm outbreak – just to make a point, we prepared client-to-client block ACLs for all edge switches, which can be activated within a few minutes, back in 2011 – as you never can know. This is a missing emergency plan as required by ISO 27001.
    • a two-month window for patching a wormable remote code execution vulnerability – if it was not possible to patch something like that within two months, then the company has a high technical/security debt. This is a management failure.
    • still running unsupported software – that is a management failure, either by not making correct contracts with the vendor or by ignoring the problem like an ostrich.
  • So—if you’re the CIO, what do you do? Break the company, or risk an attack? (Again, this is an imaginary conversation.)
    That’s the wrong question – if the CIO is at that point, he has done a bad job before:

    • All your critical software should have a maintenance contract which specifically handles security updates (and especially the timeline) for the underlying operating system and the software itself, and there must be a contractual penalty in it. I have done that for years now with calls for bids – big IT companies provide that security handling without you asking for it – so this is mainly for special software.
    • If the IT department does not have the time to patch everything, a triage needs to be done. The vulnerabilities with the highest probability and potential for damage need to be patched first – this vulnerability must have been at the top of any list.
    • The systems are not what is called “Stand der Technik” in German, which can be translated as “state of the art” – a Windows XP system is not state of the art, no meaningful network separation is not state of the art, …
  • That patching is so hard is very unfortunate. Solving it is a research question. Vendors are doing what they can to improve the reliability of patches, but it’s a really, really difficult problem.
    • OK, patching might not be as easy as it could be, but
      • a big institution that got caught by this malware left the back door open and is now complaining that a herd of wild boars went through the house and did damage.
      • the security and IT departments just failed at their job – just do a postmortem without finger pointing and fix the problems. I’m quite sure the affected IT departments also got caught by the Locky malware and didn’t learn a thing.
      • vendors are not doing enough – sure, in this case Microsoft did patch it, but especially with IoT devices vendors do nothing.
      • searching Google for problems after installing MS17-010 reveals only a few posts after billions of updated PCs –> there are no problems with this patch –> no reason not to install it

So that’s my view of the WannaCry stuff after being on vacation … tell me your views – did I miss something?

Mitigating application layer (HTTP(S)) DDOS attacks

DDOS attacks seem to be the new norm on the Internet. Years ago only big websites and web applications got attacked, but nowadays rather small and medium companies or institutions get attacked too. This makes it necessary for administrators of smaller sites to plan for the time they get attacked. This blog post shows you what you can do yourself and for what you need external help. As you’ll see later, you most likely can only mitigate DDOS attacks against the application layer by yourself and need help for all other attacks. One important part of a successful defense against a DDOS attack, which I won’t explain here in detail, is a good media strategy. E.g. if you can convince the media that the attack is no big deal, they may not report sensationally about it and make the attack appear bigger and more problematic than it was. A classic example is a DDOS against a website that shows only information and has no impact on day-to-day operations. But there are better blogs for this non-technical topic, so let’s get into the technical part.

different DDOS attacks

From the point of view of an administrator of a small website or web application there are basically two types of attacks:

  • An attack that saturates your Internet connection or your provider’s Internet connection (bandwidth and traffic attack)
  • Attacks against your website or web application itself (application attack)

saturation attacks

Let’s take a closer look at the first type of attack. There are many different variations of these connection saturation attacks, and the exact variant does not matter for the SME administrator: you can’t do anything against it by yourself. Why? You can’t do anything on your server, because the good traffic can’t reach your server while your Internet connection or a connection/router of your Internet Service Provider (ISP) is already saturated with attack traffic. The mitigation needs to take place on a system before the part that is saturated. There are different methods to mitigate such attacks.

Depending on the type of website it is possible to use a Content Delivery Network (CDN). A CDN basically caches the data of your website in multiple geographically distributed locations. This way each location only gets attacked by a part of the attacking systems. This is a nice way to also guard against many application layer attacks, but it does not work (or not easily) if the content of your site is not the same for every client/user. E.g. an information website with some downloads and videos is easily changed to use a CDN, but an application like a webmail system or an accounting system will be hard to adapt and will not gain 100% protection even then. Another problem with CDNs is that you must protect each website separately; that’s OK if you have only one big website that is the core of your business, but it will be a problem if the attacker can choose from multiple sites/applications. A classic example is a company that protects its homepage with a CDN, but the attacker finds the webmail of the company’s Exchange server via Google. Instead of attacking the CDN, he attacks the Internet connection in front of the webmail. The problem will now most likely be that the VPN site-to-site connections to the remote offices of the company are down and working with the central systems is no longer possible for the employees in the remote locations.

So let’s assume for the rest of the document that using a CDN is not possible or not feasible. In that case you need to talk to your ISPs. The following are possible mitigations a provider can deploy for you:

  • Using a dedicated DDOS mitigation tool. These tools take all traffic and filter most of the bad traffic out. For this to work, the mitigation tool needs to know your normal traffic patterns, and the DDOS needs to be small enough that the Internet connections of the provider are able to handle it. Some companies sell on-premise mitigation tools; don’t buy them, it’s a waste of money.
  • If the DDOS attack is against an IP address which is not mission critical (e.g. the attack is against the website, but the web application is the critical system), let the provider block all traffic to that IP address. If the provider has an agreement with its upstream provider, it is even possible to filter that traffic before it reaches the provider, so this also works if the ISP’s Internet connection cannot handle the attack.
  • If you have your own IP space, it is possible for your provider(s) to stop announcing your IP addresses/subnet to every router in the world and e.g. only announce it to local providers. This helps to minimize the traffic to an amount which can be handled by a mitigation tool or by your Internet connection. This is an especially good mitigation method if your main audience is local, e.g. 90% of your customers/clients are from the same region or country as you – during an attack you don’t care about IP addresses from x (x = a faraway foreign country).
  • A special variant of the last point is to connect to a local Internet exchange, which may also help to reduce your Internet costs but in any case raises your resilience against DDOS attacks.

This covers the basics, which allows you to understand and talk with your providers eye to eye. There is also a subset of saturation attacks which does not saturate the connection but the server or firewall (e.g. SYN floods), but as most small and medium companies will have at most a one Gbit Internet connection, it is unlikely that a decent server (and its operating system) or firewall is the limiting factor; most likely it’s the application on top of it.

application layer attacks

Which is a perfect transition to this chapter about application layer DDOS. Let’s start with an example to describe this kind of attack. Some years ago a common attack was to use the pingback feature of WordPress installations to flood a given URL with requests. I’ve seen such an attack which hit a specific URL on a target system that did something CPU and memory intensive, which led to a successful DDOS against the application with less than 10 Mbit of traffic. All requests were valid requests, and as the URL was an HTTPS one (which is more likely than not today), a mitigation in the network was not possible. The solution was quite easy in this case, as the HTTP User-Agent was WordPress, which was easy to filter on the web server and had no side effects.

But this was a specific mitigation which would be easy to bypass if the attacker sees it and changes the requests on his botnet. Which also leads to the main problem with this kind of attack: you need to be able to block the bad traffic and let the good traffic through. Persistent attackers commonly change the attack mode – an attack is done with method 1 until you’re able to filter it out, then the attacker changes to the next method. This can go on for days. To make it harder for an attacker it is a good idea to implement some kind of human vs. bot detection method.

I’m human

The “I’m human” button from Google is quite well known, and the technique behind it is that it rates the connection (source IP address, cookies from login sessions to Google, …) and with that information it decides whether the request is from a human or not. If the system is sure the request is from a human you won’t see anything. In case it’s slightly unsure, a simple green check-mark will be shown; if it’s more unsure or thinks the request is from a bot, it will show a CAPTCHA. So the question is: can we implement something similar ourselves? Sure we can, let’s dive into it.

peace time

During peace time, set a special DDOS cookie whenever a user authenticates correctly. I’ll describe the data in the cookie in detail later.

war time

So let’s say we detected an attack, manually or automatically, e.g. by checking the number of requests against the login page. In that case the bot/human detection gets activated. Now the web server checks for each request whether the DDOS cookie is present and whether it can be decoded correctly. All requests which don’t contain a valid DDOS cookie get temporarily redirected to a separate host, e.g. https://iamhuman.example.org, with the originally requested URL as the referrer. This host runs on a different server (so if it gets overloaded it does not affect the normal users). It shows a CAPTCHA, and if the user solves it correctly, the DDOS cookie will be set for example.org and a redirect to the original URL will be sent.

Info: If you have requests from trusted IP ranges, e.g. internal IP addresses or IP ranges of partner organizations, you can exclude them from the redirect to the CAPTCHA page.
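How you flip that “war time” switch is up to you. As a rough idea, here is a minimal Python sketch of an automatic detector; the URL, window and threshold are made-up values you would have to tune to your own login traffic:

import time
from collections import deque

WINDOW_SECONDS = 60          # sliding window size
THRESHOLD = 500              # requests per window that we consider an attack
hits = deque()

def on_access_log_line(line):
    # Feed every web server access log line in here; returns True once the
    # bot/human detection should be activated.
    if "POST /login" not in line:        # adjust to your real login URL
        return False
    now = time.time()
    hits.append(now)
    while hits and hits[0] < now - WINDOW_SECONDS:
        hits.popleft()
    return len(hits) > THRESHOLD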

sophistication ideas and cookie

An attacker could obtain a cookie and use it for his bots. To guard against that, write the IP address of the client encrypted into the cookie. Also put the timestamp of the creation of the cookie encrypted into it. Storing the username as well, if the cookie got created by the login process, is a good idea so you can check which user got compromised.

Encrypt the cookie with an authenticated encryption algorithm (e.g. AES-128-GCM) and put the following into it:

  • NONCE
  • typ
    • L for Login cookie
    • C for Captcha cookie
  • username
    • Only if login cookie
  • client IP address
  • timestamp

The key for the encryption/decryption of the cookie is static and does not leave the servers. The cookie should be set for the whole domain to be able to protect multiple websites/applications. Also make it HttpOnly to make stealing it harder.
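To make this more concrete, here is a minimal Python sketch of how such a cookie could be created and checked with the cryptography package. The field names, the hex encoding, the 30-day lifetime and generating the key with os.urandom() at start are my own choices for the example – in production the static key would be read from a protected file shared by all web servers:

import json, os, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY = os.urandom(16)   # example only - use the static key shared by your servers

def make_ddos_cookie(typ, client_ip, username=None):
    # typ is "L" (set after a login) or "C" (set after a solved CAPTCHA)
    payload = json.dumps({"typ": typ, "user": username,
                          "ip": client_ip, "ts": int(time.time())}).encode()
    nonce = os.urandom(12)
    return (nonce + AESGCM(KEY).encrypt(nonce, payload, None)).hex()

def check_ddos_cookie(value, client_ip, max_age=30 * 86400):
    # Returns the decoded payload if the cookie is valid for this client, else None
    try:
        raw = bytes.fromhex(value)
        data = json.loads(AESGCM(KEY).decrypt(raw[:12], raw[12:], None))
    except Exception:
        return None        # not hex, tampered with, or encrypted with another key
    if data["ip"] != client_ip or time.time() - data["ts"] > max_age:
        return None
    return data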

implementation

On the normal web server which checks the cookie, the following implementations are possible:

  • The Apache web server provides the mod_session_* modules, which provide some of the needed functionality, but not all.
  • The Apache directive RewriteMap (https://httpd.apache.org/docs/2.4/rewrite/rewritemap.html) with “prg: External Rewriting Program” should allow everything (a sketch of such a program follows below). Performance may be an issue.
  • Your own Apache module

If you know about any other method, please write a comment!
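To give you an idea of the RewriteMap variant: a “prg:” map is just an external program that reads one lookup key per line from stdin and writes one answer per line to stdout. A hypothetical helper could look like the following – the names and the answers "OK"/"CAPTCHA" are my own invention, and the actual cookie check would reuse the logic sketched above:

#!/usr/bin/env python3
# Hypothetical RewriteMap "prg:" helper: Apache feeds it the value of the DDOS
# cookie as the lookup key, one per line, and the rewrite rule redirects every
# request answered with "CAPTCHA" to the CAPTCHA host.
import sys

def cookie_is_valid(value):
    # placeholder - decrypt and verify the AES-GCM DDOS cookie as sketched above
    return bool(value)

for line in sys.stdin:
    sys.stdout.write("OK\n" if cookie_is_valid(line.strip()) else "CAPTCHA\n")
    sys.stdout.flush()   # prg maps must answer unbuffered, exactly one line per lookup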

The CAPTCHA issuing host is quite simple.

  • Use any minimalistic website with PHP/Java/Python to create the cookie (a small Python sketch follows below)
  • Create your own CAPTCHA or integrate a solution like reCAPTCHA
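As a rough illustration, here is a minimal sketch of the CAPTCHA issuing host as a tiny Python WSGI app. All names are made up, the CAPTCHA verification is stubbed, and the cookie value is a placeholder – in reality you would call make_ddos_cookie("C", …) from the sketch above and plug in a real CAPTCHA:

from wsgiref.simple_server import make_server
from urllib.parse import parse_qs
from secrets import token_hex

def captcha_solved(form):
    # placeholder - integrate reCAPTCHA or your own CAPTCHA check here
    return form.get("captcha", [""])[0] == "expected-answer"

def app(environ, start_response):
    length = int(environ.get("CONTENT_LENGTH") or 0)
    form = parse_qs(environ["wsgi.input"].read(length).decode()) \
        if environ["REQUEST_METHOD"] == "POST" else {}
    # the originally requested URL, passed along by the redirecting web server
    target = parse_qs(environ.get("QUERY_STRING", "")).get("url", ["/"])[0]
    if captcha_solved(form):
        cookie = token_hex(16)   # placeholder - use make_ddos_cookie("C", ...) here
        start_response("302 Found", [
            ("Set-Cookie", "DDOS=%s; Domain=example.org; Path=/; Secure; HttpOnly" % cookie),
            ("Location", target)])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b'<form method="POST">CAPTCHA goes here <input name="captcha"><button>Submit</button></form>']

if __name__ == "__main__":
    make_server("", 8080, app).serve_forever()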

pros and cons

  • Pro
    • Users that have authenticated within the last few weeks won’t see the DDOS mitigation. Most likely these are your power users / biggest clients.
    • It’s possible to step up the protection gradually, e.g. the IP address binding is only needed when the attacker is using valid cookies.
    • The primary web server does not need any database or external system to check the cookie.
    • The most likely case in an attack is that the cookie is not set at all, which takes very few CPU resources to check.
    • Sending a 302 to the bot creates only a few bytes of traffic, and if the bot requests the redirect URL, it hits another server and creates no load on the server we want to protect.
    • No change to the applications is necessary.
    • The operations team does not need to be experts in mitigating attacks against the application layer. Simply activating it is enough.
    • Traffic stays local and is not sent to an external provider (which may be a problem for a bank or with data protection laws in Europe).
  • Cons
    • How to handle automatic requests (API)? Make exceptions for these or block them in case of an attack?
    • Problems with non-browser clients like ActiveSync clients.
    • Multiple domains need multiple cookies

All in all I see this as a good mitigation method for application layer attacks, and I hope the blog post helps you and your business. Please leave feedback in the comments. Thx!

Implementing IoT securely in your company – Part 3

This is part 3 of the series on implementing IoT securely in your company; click here for part 1 and here for part 2. As it is quite common that new IoT devices are ordered and also maintained by the relevant department and not by the IT department, it is important that there is a policy in place.

This policy is especially important in this case, as most non-IT departments don’t think about IT security and maintaining the system. They are used to buying a device that will then run for years, often even longer, without doing much. We in IT, on the other hand, know that buying is the easy part; maintaining it is the hard one.

Extend existing security policies

Most companies won’t need to start from scratch, as they most likely have policies for common stuff like passwords, patching and monitoring. The problem here is the scope of the policies and whether you’re currently able to technically enforce many of them:

  • Most passwords are typically maintained by an identity management system and the password policy is therefore enforced for the whole company. The service/admin passwords are typically configured and used by members of the IT department. For IoT devices that may not be true, as the devices are managed by the using department and technically enforcing the policy may not be possible.
  • Patching of the software is typically centrally done by the IT department, be it the client or server team. But who is responsible for updating the IoT devices? Who monitors that updates are really done? How does he monitor that? What happens if a department does not update their devices? What happens if a vendor stops providing security updates for a given device?
  • Services provided centrally by the IT department are generally monitored by the IT department. Is the IT department responsible for monitoring the IoT devices? Who is responsible for looking into problems?

You should look at this and write it down as a policy which is accepted by the other departments before deploying IoT devices. In the beginning they will say, sure, we’ll update the devices regularly and replace them before the vendor stops providing security updates – and often they can’t remember that some years later.

Typical IoT device problems

Besides extending the policies to cover IoT devices, it’s also important to check whether the policies fit the IoT space and cover typical problems. I’ll list some of them here which I’ve seen done wrong in the past. Sure, some of them also apply to normal IT servers/services, but they are maybe considered so basic that everyone just does them right, so they may not be covered by your policy.

  • No update is possible
    Yes, there are devices out in the wild that can’t be updated. What does your policy say?
  • Default logins
    Many IoT devices come with a default login, and as the management of the devices is done via a central (cloud) management system, it is often forgotten that the devices may also have a local administration interface. What does your policy say?
  • Recover from IoT device loss
    Let’s assume that an attacker is able to get into one IoT device or that the IoT device gets stolen. Is the same password used on the server? Do all devices use the same password? Will the IT department get informed at all? What does your policy say?
  • Naming and organizing things
    For IT devices it’s clear that we use the DNS structure – it works for servers, switches and PCs. Make sure the same gets used for IoT devices. What does your policy say?
  • Replacing IoT devices
    Think about more than 100 IoT devices running for 4 years; now some break down and the devices are end of sale. Can you connect new models to the old ones? Does someone keep spare parts? What does your policy say?
  • Self-signed certificates
    If the system/device uses TLS (e.g. HTTPS), it needs to be able to use your internal PKI certificates. Self-signed certificates are basically the same as unencrypted traffic. What does your policy say?
  • Disable unused services
    IoT devices often enable all services by default; I had a device providing an FTP and a telnet server, but for administration only HTTP was ever used. What does your policy say?

I hope this article series helps you to implement IoT devices somewhat securely.

 

Implementing IoT securely in your company – Part 2

After part 1, which focused on setting up your network for IoT, this post focuses on making sure that the devices are the right ones and that they work in your network. The first can be accomplished by asking basic security questions and only continuing to talk with the more secure vendors. In my experience that also leads to the better vendors, which know IT and will make your life easier in the long run. There are plenty of vendors out there for whom the whole IT part is new, as they are an old vendor in a given field which now needs to do the “network thing” and don’t have the employees for it. Johannes B. Ullrich at SANS ISC InfoSec came up with the idea to preselect IoT vendors with 5 questions (you can read more on his reasoning behind each question in his post):

5 preselect questions

  1. For how long, after I purchase a device, should I expect security updates?
    This time frame will show us how long we can plan to use the device in our network, as using devices which get no security updates will be a compliance violation in most companies.
  2. How will I learn about security updates?
    Responsible vendors will add you to a security mailing list where you will get informed on all security related stuff via email.
  3. Can you share a pentest report for your device?
    If the vendor cares at all about security, he has had an external expert do a pentest, which will at least find the worst and most stupid security holes. If the vendor is able to show you such a report, you should really take that vendor into consideration.
  4. How can I report vulnerabilities?
    We have often found security holes in programs or devices, and sometimes it is really hard to report them to the vendor in a way that he accepts it and fixes the hole in a reasonable time frame. Sometimes we needed to go via our local Austrian CERT, and sometimes even that was not enough, as the vendor was in the US and only did something after their CERT asked them pointed questions. So a direct connection to the guy(s) responsible for the security of the device is important.
  5. If you use encryption, then disclose what algorithms you use and how it is implemented
    If the vendor tells you something about "proprietary" encryption, run away from the product! If you read that they use MD5 or RC4, the software on the device seems a little bit dated.

After selecting the best vendors, ranked by the preselect questions, you should make sure that the devices will run in your network. If you’re new to this kind of work, you will not believe what garbage some vendors deliver. Some points depend on your network and how it will look in the future.

  • The device needs to support DHCP!
    • Use DHCP reservations to provide fixed IP addresses
    • A special case in a secure network is to disable ARP learning on the layer 3 switches (which makes MitM attacks a lot harder). In this case DHCP is used for filling the ARP table.
  • Check if the device works flawlessly with MAC or 802.1x authentication
    • Some devices only send a packet if queried, which won’t work if the device got de-authenticated, e.g. by an idle timeout or a network problem. The device needs to send a packet every so often so the switch sees the MAC address and can make a RADIUS request.
  • The device needs to support routing
    • We had devices that were only able to talk within their subnet. In some cases we were not sure whether the product really didn’t support it or the technician was just unable to configure it.
    • As the PCs and servers need to be separated via a firewall (see part 1), this feature is a deal breaker.
  • It should be possible to configure a local NTP server
    • If not, the device time drifts, or you need to allow the device to connect to the Internet, which can get complicated or insecure if you have different devices each using a different NTP server.
  • The device needs to support automatic restart of services after a power or network outage
    • We had some devices which needed manual intervention to reconnect to the servers after a network problem.
  • Embedding of external resources should be looked at. E.g. if a device needs jQuery for its web GUI and lets the browser load it via jquery.org, it will not work if your Internet is down. In some cases that does not matter, in others it’s a deal breaker.
  • Support for 1 Gbit Ethernet connections
    • Sure, I know that IoT devices do not need 1 Gbit, but the devices will maybe run 10 years and you’ll have 10 Gbit switches by then. It is not certain that 100 Mbit will be supported or work flawlessly. E.g. some current Broadcom 10 Gbit chipsets don’t support 100 Mbit half duplex anymore. You need another chipset which is a little bit more expensive … and you know which one the switch vendors will pick? 😉

So far for part 2 of this series … the next part will be about some policy stuff you need to agree on with the departments wanting those devices.

Virtual Directory as Database Security

I've written plenty of posts about the various use-cases for virtual directory technology over the years. But, I came across another today that I thought was pretty interesting.

Think about enterprise security from the viewpoint of the CISO. There are numerous layers of overlapping security technologies that work together to reduce risk to a point that's comfortable. Network security, endpoint security, identity management, encryption, DLP, SIEM, etc. But even when these solutions are implemented according to plan, I still see two common gaps that need to be taken more seriously.

One is control over unstructured data (file systems, SharePoint, etc.). The other is back door access to application databases. There is a ton of sensitive information exposed through those two avenues that isn't protected by the likes of SIEM solutions or IAM suites. Even DLP solutions tend to focus on perimeter defense rather than who has access. STEALTHbits has solutions to fill the gaps for unstructured data and for Microsoft SQL Server so I spend a fair amount of time talking to CISOs and their teams about these issues.

While reading through some IAM industry materials today, I found an interesting write-up on how Oracle is using its virtual directory technology to solve the problem for Oracle database customers. Oracle's IAM suite leverages Oracle Virtual Directory (OVD) as an integration point with an Oracle database feature called Enterprise User Security (EUS). EUS enables database access management through an enterprise LDAP directory (as opposed to managing a spaghetti mapping of users to database accounts and the associated permissions.)

By placing OVD in front of EUS, you get instant LDAP-style management (and IAM integration) without a long, complicated migration process. Pretty compelling use-case. If you can't control direct database permissions, your application-side access controls seem less important. Essentially, you've locked the front door but left the back window wide open. Something to think about.

Performing Clean Active Directory Migrations and Consolidations


Active Directory Migration Challenges

Over the past decade, Active Directory (AD) has grown out of control. It may be due to organizational mergers or disparate Active Directory domains that sprouted up over time, but many AD administrators are now looking at dozens of Active Directory forests and even hundreds of AD domains wondering how it happened and wishing it was easier to manage on a daily basis.

One of the top drivers for AD Migrations is enablement of new technologies such as unified communications or identity and access management. Without a shared and clearly articulated security model across Active Directory domains, it’s extremely difficult to leverage AD for authentication to new business applications or to establish the related business rules that may be based on AD attributes or security group memberships.

Domain consolidation is not a simple task. Whether you're moving from one platform to another, doing some AD security remodeling, or just consolidating domains for improved management and reduced cost, there are numerous steps, lots of unknowns and an overwhelming feeling that you might be missing something. Sound familiar?

One of the biggest fears in Active Directory migration projects is that business users will lose access to their critical resources during the migration. To reduce the likelihood of that occurring, many project leaders choose to enable a dirty migration; they enable historical SIDs which carry old credentials and group memberships from the source domain and apply them to the new domain. Unfortunately, enabling historical SIDs proliferates one of the main challenges that initially drove the migration project. The dirty migration approach maintains the various security models that have been implemented over the years making AD difficult to manage and near impossible to understand who has what rights across the environment.

Clean Active Directory Migrations

The alternative to a dirty migration is to disallow historical SIDs and thereby enable a clean migration, where rights are applied as needed within an easy-to-manage, well-articulated security model. Security groups are applied to resources according to an intentional model that is defined up front, and permissions are limited to a least-privilege model where only those who require rights actually get them.
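
One way to keep a migration honest is to verify that no account in the target domain is still carrying sIDHistory, the attribute a dirty migration depends on. Here's a minimal sketch using Python's ldap3 library; the server name, credentials, and base DN are placeholders.

    from ldap3 import Server, Connection, SUBTREE

    conn = Connection(Server("ldaps://dc01.target.example.com"),
                      user="TARGET\\svc_audit", password="***", auto_bind=True)
    conn.search("DC=target,DC=example,DC=com",
                "(&(objectClass=user)(sIDHistory=*))",
                search_scope=SUBTREE,
                attributes=["sAMAccountName", "sIDHistory"])

    for entry in conn.entries:
        # Each hit is an account whose access may still depend on SIDs from a
        # source domain -- a candidate for re-permissioning and cleanup.
        print(entry.sAMAccountName, len(entry.sIDHistory.values), "historical SID(s)")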

Not all consolidation or migration projects are the same. The motivations differ, the technologies differ, and the Active Directory organizational structure and assets differ wildly. Most solutions on the market provide point-A-to-point-B migrations of Active Directory assets. That type of migration often makes the problem worse over time. There's nothing wrong with using an Active Directory tool to help you perform an AD forest or domain migration, but knowing which assets to move and how to structure or even restructure them in the target domain is critical.

Enabling a clean migration and transforming the Active Directory security model requires a few steps. It starts with assessment and cleanup of the source Active Directory environments. You should assess what objects are out there, how they’re being used, and how they’re currently organized. Are there dormant user accounts or unused computer objects? Are there groups with overlapping membership? Are there permissions that are unused or inappropriate? Are there toxic or high-risk conditions in the environment? This type of intelligence gives you visibility into which objects you need to move, how they're structured, how the current domain compares to the target domain, and where differences exist in GPO policies, schema, and naming conventions. The dormant and unused objects, as well as any toxic or high-risk conditions, can then be remediated so that they aren’t propagated to the target environment.
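
As one illustration of the assessment step, here's a hedged sketch of a dormant-account query in Python with ldap3. It filters on lastLogonTimestamp (a Windows FILETIME value that AD replicates with up to roughly two weeks of slack), and all server names and DNs are hypothetical; accounts that have never logged on carry no lastLogonTimestamp and would need a separate check.

    from datetime import datetime, timedelta, timezone
    from ldap3 import Server, Connection, SUBTREE

    DORMANT_AFTER = timedelta(days=180)

    # lastLogonTimestamp is a FILETIME: 100-nanosecond intervals since 1601-01-01.
    cutoff = datetime.now(timezone.utc) - DORMANT_AFTER
    filetime_cutoff = int(
        (cutoff - datetime(1601, 1, 1, tzinfo=timezone.utc)).total_seconds() * 10**7)

    conn = Connection(Server("ldaps://dc01.example.com"),
                      user="EXAMPLE\\svc_audit", password="***", auto_bind=True)
    conn.search("DC=example,DC=com",
                f"(&(objectClass=user)(objectCategory=person)"
                f"(lastLogonTimestamp<={filetime_cutoff}))",
                search_scope=SUBTREE,
                attributes=["sAMAccountName", "lastLogonTimestamp"])

    for entry in conn.entries:
        print("Dormant candidate:", entry.sAMAccountName)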

Once the initial assessment and cleanup are complete, a gap analysis should be performed to understand where the current state differs from the intended model. Where possible, the transformation should be automated. Security groups can be created, for example, based on historical user activity so that group membership is determined by actual need. That least-privilege discipline is a key requirement under numerous regulations.
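
To show what activity-driven group design might look like, here's a simplified sketch: membership in a new resource group is proposed only for users who actually touched the resource during the review window, and anyone holding rights today without matching activity is flagged for removal. The log format, share paths, and current-rights data are all assumptions.

    from collections import defaultdict

    # (user, resource) access events harvested from file-activity auditing.
    access_log = [
        ("alice", r"\\fs01\finance"),
        ("bob",   r"\\fs01\finance"),
        ("carol", r"\\fs01\engineering"),
    ]

    proposed_membership = defaultdict(set)
    for user, resource in access_log:
        proposed_membership[resource].add(user)

    # Users with rights today but no observed activity are least-privilege
    # removal candidates.
    current_rights = {r"\\fs01\finance": {"alice", "bob", "dave"}}
    for resource, users in current_rights.items():
        stale = users - proposed_membership[resource]
        print(resource, "-> remove:", sorted(stale))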

The next step is to perform a deep scan into the Active Directory forests and domains that will be consolidated, looking at server-level permissions and infrastructure across Active Directory, File Systems, Security Policies, SharePoint, SQL Server, and more. This enables the creation of business rules that transform existing effective permissions into the target model while adhering to the new naming conventions and group-usage standards. Much of this transformation should be automated to avoid human error and reduce effort.
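
A business rule of that kind can be as simple as a naming-convention transform. The sketch below assumes legacy groups named like "FinanceRW"/"FinanceRO" and a hypothetical target convention of RES-<resource>-<access level>; anything that doesn't match the pattern is flagged for manual review rather than migrated blindly.

    import re

    def target_group_name(source_group: str, domain_tag: str) -> str:
        """Map legacy group names such as 'FinanceRW' to 'RES-FINANCE-MODIFY'."""
        match = re.match(r"(?P<res>[A-Za-z]+?)(?P<lvl>RO|RW)$", source_group)
        if not match:
            # Doesn't fit the known pattern -- route to manual review.
            return f"RES-{domain_tag}-{source_group.upper()}-REVIEW"
        level = "READ" if match.group("lvl") == "RO" else "MODIFY"
        return f"RES-{match.group('res').upper()}-{level}"

    print(target_group_name("FinanceRW", "CORP"))   # RES-FINANCE-MODIFY
    print(target_group_name("LegacyAcl", "CORP"))   # RES-CORP-LEGACYACL-REVIEW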

Maintaining a Clean Active Directory

Once the migration or consolidation project is complete and adherence to the intended security model has been enforced, it’s vital that a program is in place to keep Active Directory in that clean state. A few capabilities can help achieve this goal.

First, a mandatory periodic audit should be enforced. Security group owners should confirm that groups are being used as intended. Resource owners should confirm that the right people have the right level of access to their resources. Business managers should confirm that their people have access to the right resources. These reviews should be automated and tracked to ensure they are completed thoroughly and on time.
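
Tracking can be lightweight. Below is a minimal sketch of flagging overdue attestations, assuming each review record carries a group, an owner, a due date, and a completion date; the field names and data are hypothetical, and in practice the records would come from a governance or workflow tool.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class AccessReview:
        group: str
        owner: str
        due: date
        completed: Optional[date] = None

    reviews = [
        AccessReview("RES-FINANCE-MODIFY", "cfo@example.com", date(2019, 3, 31), date(2019, 3, 20)),
        AccessReview("RES-HR-READ", "hrdir@example.com", date(2019, 3, 31)),
    ]

    # Escalate anything past its due date with no completion recorded.
    overdue = [r for r in reviews if r.completed is None and date.today() > r.due]
    for r in overdue:
        print(f"Escalate: {r.group} review owned by {r.owner} is overdue")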

Second, tools should be implemented that provide visibility into the environment, answering questions as they come up. When a security administrator needs to see how a user is being granted rights to something they perhaps shouldn’t have, they’ll need tools that provide answers in a timely fashion.

Third, a system-wide scan should be conducted regularly to identify any toxic or high-risk conditions that arise over time. For example, if a user account becomes dormant, a notification should be sent out according to business rules. Or if a group is nested within itself, perhaps ten layers deep, you want an automated solution to discover that condition and provide related reporting.
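
Circular nesting is easy to detect once group-to-group membership has been pulled out of AD. Here's a short sketch, assuming a dictionary of each group's directly nested groups (the sample data is made up); it walks the nesting graph and reports any path that loops back on itself.

    def find_nesting_cycles(nested):
        """Return group-nesting paths that loop back on themselves."""
        cycles, visiting, visited = [], set(), set()

        def walk(group, path):
            if group in visiting:                 # looped back onto the current path
                cycles.append(path[path.index(group):] + [group])
                return
            if group in visited:
                return
            visiting.add(group)
            for child in nested.get(group, set()):
                walk(child, path + [group])
            visiting.remove(group)
            visited.add(group)

        for group in nested:
            walk(group, [])
        return cycles

    groups = {"GroupA": {"GroupB"}, "GroupB": {"GroupC"}, "GroupC": {"GroupA"}}
    print(find_nesting_cycles(groups))   # [['GroupA', 'GroupB', 'GroupC', 'GroupA']]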

Finally, to ensure adherence to Active Directory security policies, a real-time monitoring solution should be put in place to enforce rules, prevent unwanted changes via event blocking, and maintain an audit trail of critical administrative activity.

Complete visibility across the entire Active Directory infrastructure enables a clean AD domain consolidation while making life easier for administrators, improving security, and enabling adoption of new technologies.

About the Author

Matt Flynn has been in the Identity & Access Management space for more than a decade. He’s currently a Product Manager at STEALTHbits Technologies where he focuses on Data & Access Governance solutions for many of the world’s largest, most prestigious organizations. Prior to STEALTHbits, Matt held numerous positions at NetVision, RSA, MaXware, and Unisys where he was involved in virtually every aspect of identity-related projects from hands-on technical to strategic planning. In 2011, SYS-CON Media added Matt to their list of the most powerful voices in Information Security.

Active Directory Unification

[This is a partial re-post of an entry on the STEALTHbits blog. I think it's relevant here and open for discussion on the concepts surrounding clean migrations and AD unification.]

It’s no secret that over the past decade, Active Directory has grown out of control across many organizations. It’s partly due to organizational mergers or disparate Active Directory domains that sprouted up over time, but you may find yourself looking at dozens or even hundreds of Active Directory domains and realize that it's time to consolidate. And it probably feels overwhelming. But despite the effort in front of you, there’s an easy way and a right way.

Domain consolidation is not a simple task. Whether you're moving from one platform to another, trying to implement a new security model, or just consolidating domains for improved management and reduced cost, there are numerous steps, lots of unknowns, and an overwhelming feeling that you might be missing something. Sound familiar?

According to Gartner analyst Andrew Walls, “The allure of a single AD forest with a simple domain design is not fool’s gold. There are real benefits to be found in a consolidated AD environment. A shared AD infrastructure enables user mobility, common user provisioning processes, consolidated reporting, unified management of machines, etc.”

Walls goes on to discuss the politics, cost justification, and complexity of these projects, noting that “An AD consolidation has to unite and rationalize the ID formats, password policy objects, user groups, group policy objects, schema designs and application integration methods that have grown and spread through all of the existing AD environments. At times, this can feel like spring cleaning at the Augean stables. Of course, if you miss something, users will not be able to log in, or find their file shares, or access applications. No pressure.”

Walls offers advice on how to avoid some of the pain: “You fight proliferation of AD at every turn and realize that consolidation is not a onetime event. The optimal design for AD is a single domain within a single forest. Any deviation from this approach should be justified on the basis of operational requirements that a unified model cannot possibly support.”

What does this mean for you? Well, the most significant takeaway from Walls’ advice is that consolidation is not a one-time event. AD unification is an ongoing effort. You don’t simply move objects from point A to point B and then pack it in for the day. The easy way fails to meet the core objectives of an improved security model, simplified management, reduced cost, and a common provisioning process (think integration with Identity Management solutions).

If you take everything from three source domains and simply move it all to a target domain, you haven’t achieved any of the objectives other than now having a single Active Directory. There’s a good chance that your security model will remain fragmented, management will become more difficult, and your user provisioning processes will require additional logic to accommodate the new mess. On a positive note, if this model is your intent, there are numerous solutions on the market that will help.

STEALTHbits, of course, embraces the right way. “Control through Visibility” is about improving your security posture and your ability to manage IT by increasing your visibility into the critical infrastructure.


If you'd like to learn more about the solution, you can start by reading the rest of this blog entry or contact STEALTHbits.