Monthly Archives: June 2016

Great List of Hack Sites

The SANS Pen Test group has a poster they hand out at SANS conferences and put in mailings that has an awesome mind map of hacking challenge/skills sites.

Won't be going to a conference soon, or not on the mailing list? No worries, the poster can be downloaded here and all of the SANS posters are located here, including DFIR, Threat Intelligence, CIS and some general information posters.

If you're not familiar with SANS posters, they're not just marketing tools (though of course they advertise events too); they're full of good, useful information, like the hack sites mind map.


CSRF Referer header strip


Most of the web applications I see are kinda binary when it comes to CSRF protection: either they have protection implemented using CSRF tokens (more or less covering the different functions of the web application), or there is no protection at all. Usually it is the latter. However, from time to time I see applications checking the Referer HTTP header.

A couple of months ago I had to deal with an application that was checking the Referer as a CSRF prevention mechanism, but when this header was stripped from the request, the CSRF PoC worked. By the way, it is common practice to accept an empty Referer, mainly to avoid breaking functionality.
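To make that concrete, here is a minimal, framework-agnostic sketch of the kind of Referer check described above (the hostname `app.example.com` is made up for illustration). Note how a missing header passes:

```python
from urllib.parse import urlparse

# Naive CSRF defense: reject cross-origin Referers, but accept a missing
# Referer so that privacy proxies and some redirects don't break the app.
def referer_check_passes(referer, allowed_host="app.example.com"):
    if not referer:                 # missing/empty Referer is accepted
        return True
    return urlparse(referer).hostname == allowed_host

print(referer_check_passes("https://app.example.com/form"))   # True
print(referer_check_passes("https://evil.example.net/csrf"))  # False
print(referer_check_passes(None))                             # True <- the hole a stripped Referer exploits
```

That last case is exactly what the tricks below abuse: if the attacker can make the browser send no Referer at all, the check waves the forged request through.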

The OWASP Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet tells us that this defense approach is a baaad omen, but finding a universal and simple solution on the Internetz to strip the Referer header took somewhat more time than I expected, so I decided that the stuff that I found might be useful for others too.

Solutions for Referer header strip

Most of the techniques I have found were way too complicated for my taste. For example, when I start reading a blog post from Egor Homakov to find a solution to a problem, I know that I am going to:
  1. learn something very cool;
  2. end up with a serious headache from all the new info.
This blog post from him is a bit lighter and covers some useful theoretical background, so make sure you read it first before you continue with this post. He shows a few nice tricks to strip the Referer, but I was wondering: maybe there is an easier way?

Rich Lundeen (aka WebstersProdigy) made an excellent blog post on stripping the Referer header (again, make sure you read that one first before you continue). The HTTPS to HTTP trick is probably the most well-known one, general and easy enough, but it quickly fails the moment you have an application that only runs over HTTPS (this was my case).

The data: URI method is not browser independent, but the about:blank trick works well for some simple requests. Unfortunately, in my case the request I had to attack with CSRF was too complex, and I wanted to use XMLHttpRequest. He mentions that in theory there is an anonymous flag for CORS, but he could not get it to work. I also tried it, but... it did not work for me either.

Krzysztof Kotowicz also wrote a blog post on Referer strip, coming to similar conclusions as Rich Lundeen, mostly using the data method.

Finally, I bumped into Johannes Ullrich's ISC diary on the Referer header, and that led me to the W3C's Referrer Policy. So, just to make a dumb little PoC and show that relying on the Referer is not a good idea, you can simply use the "referrer" meta tag (yes, that is two "r"-s there).

The PoC would look something like this:
<meta name="referrer" content="never">
<form action="" method="POST">
<input type="hidden" name="param1" value="1" />
<input type="hidden" name="param2" value="2" />
</form>
<script>document.forms[0].submit();</script>

As you can see, there are quite a lot of ways to strip the Referer HTTP header from the request, so it really should not be considered a good defense against CSRF. My preferred way to make this PoC is with the meta tag, but hey, if you have any better solution, use the comment field down there and let me know! :)

Bad analogy, bad. No biscuit.

If you use the “If I leave my door unlocked, you don't have the right to walk in…” analogy when discussing web disclosures, you really need to stop.  Bad analogies are bad.

You know the cases, folks find things on the Internet that people didn’t mean to make public, and a storm ensues and all kinds of people say all kinds of naïve stuff, including people who should know better.

Your website is not a house, and not just because of the physical vs. virtual difference.  If we have to use this analogy, let’s at least get it more accurate.

You live on a road; it may be public or it may be private, but either way it is open to the public — no, let's say public use is encouraged.  That’s why you put your house there: good access in and out to the rest of the world.  You put sensitive data on signs in your yard, visible from the road.  There might even be a sign that says “only read your own data”, but it is all visible.  Someone drives by and reads someone else’s sign from the road.  Maybe they take pictures of the signs.

Still imperfect, but much more accurate.  And so convoluted it doesn’t help make any point.  These issues are not simple and misrepresenting them and oversimplifying things does not help.

Note that I have not made any judgements about who exposed what where, and who drove by and looked at it.  If it is your house and you post my data in an irresponsible manner, you are being irresponsible.  If someone feels the need to copy everything to prove a point, that causes problems, even when their intentions are good.

Without picking any specific cases, most of the ones that make the news are a combination of errors on both sides.  You should act like sensitive data is, I don’t know, sensitive.  And when you stumble across things like that (and you will if you use the Internet and pay attention), you should think about how folks will react, and keep the CFAA in mind.  Right or wrong, that’s the world we live in.  I think the CFAA is horrible and horribly out of date, as is the DMCA; but while they are the law and enforced, ignore them at your peril.  It is worth considering that when people find stuff that shouldn’t be posted publicly, it generally doesn’t require downloading the entire dataset to report the problem; in fact, that is likely to create problems for everyone.

And yes, that’s a gross oversimplification from me in a post where I decry gross oversimplification.  Literary license or something.

And because I actually care about this mess we’re in, I’ll make an offer I hope I don’t regret: if you stumble across things which are exposed and you really don’t know how to handle it, please pause and reach out to me.  I’ll ask friends in law enforcement for guidance for you if you wish to remain anonymous, or I’ll try to help you find the right folks to work with.  If you are outside of the US, I’m unlikely to be of much help, but I’ll still make inquiries.

Note that if you are on any side of one of these situations and act like a dumbass, I reserve the right to call you a dumbass.  I’ll still try to help, but I’m calling you a dumbass if you deserve it.  That’s as close to idealistic as you’ll get from me.



WordPress Mobile Detector Incorrect Fix Leads To Stored XSS

Recently, the WordPress Mobile Detector plugin was in the news for a "Remote Code Execution" vulnerability found inside the resize.php file. The vulnerability allowed an external attacker to upload arbitrary files to the server, as no validation was performed on the type of file retrieved from an external source.

Soon after the vulnerability became public, the plugin was taken down from the WordPress plugin directory until the issue was fixed. However, as per my analysis, the fix is incomplete and leads to stored XSS.

The Vulnerability

Let's discuss the initial vulnerability first. The following PHP code takes input via the src parameter (GET or POST) and checks for the existence of the file. If it exists, the appropriate Content-Type header is set.


if (isset($_REQUEST['src'])) {
    $path = dirname(__FILE__) . "/cache/" . basename($_REQUEST['src']);
    file_put_contents($path, file_get_contents($_REQUEST['src']));
}


It then utilizes the file_get_contents function to fetch the file contents from a URL and upload them to the webhost under the cache directory.  Please note that this is only possible if allow_url_fopen is enabled on the server, which limits the effectiveness and impact of the exploit.  The problem with the above code is that it does not check which extensions are allowed.  So if an attacker can fetch a file containing PHP (or other server-executable) code, the result is remote code execution.

The (incomplete) Fix

The following fix was implemented, which defines a whitelist of acceptable extensions (primarily images). The code checks whether the requested file ends with a whitelisted extension before it is fetched and uploaded.

$acceptable_extensions = ['png','gif','jpg','jpeg','jif','jfif','svg'];
$info = pathinfo($_REQUEST['src']);
// Check file extension
if (isset($info['extension']) && in_array($info['extension'], $acceptable_extensions)) {
    file_put_contents($path, file_get_contents($_REQUEST['src']));
}


The problem with the above fix is that it whitelists the "svg" extension. It is a widely known fact that SVG images can execute JavaScript.
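As a quick illustration (a Python re-implementation of the fix's whitelist logic, not the plugin's actual code), an SVG URL sails straight through the extension check:

```python
import os

ACCEPTABLE_EXTENSIONS = {"png", "gif", "jpg", "jpeg", "jif", "jfif", "svg"}

def passes_whitelist(src):
    # Mirrors the pathinfo()-style check: only the extension is inspected.
    ext = os.path.splitext(src)[1].lstrip(".").lower()
    return ext in ACCEPTABLE_EXTENSIONS

print(passes_whitelist("http://attacker.example/xss.svg"))    # True - yet SVG can carry JavaScript
print(passes_whitelist("http://attacker.example/shell.php"))  # False
```

The check does exactly what it promises, but the promise is wrong: "is it an image extension?" is not the same question as "is it safe to serve from our origin?".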

Using SVG To Trigger Stored XSS

In order to demonstrate the finding, the following SVG file would be hosted on a remote server.


<?xml version="1.0" standalone="no"?><!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" ""><svg onload="alert(1)" xmlns=""><defs><font id="x"><font-face font-family="y"/></font></defs></svg>

Once requested via the "src" parameter, this image is saved to the cache directory. Upon visiting the uploaded image, the script executes in the victim's browser, resulting in stored XSS.

Other Attack Possibilities

i) Where the requested path merely ends with an allowed extension, there is still an attack possibility.

ii) In older versions of PHP, it is possible to append a null byte and trick the server into uploading a malicious PHP file.

iii) If display_errors is set to true in the php.ini file, the file_get_contents() function can be utilized for Server Side Request Forgery. A similar issue was discovered by me back in 2013; you can refer to the following blog post: phpThumb Server Side Request Forgery.

iv) Even just allowing external users to fetch and upload images can introduce issues: someone can host porn images and tarnish the company's reputation; someone can deliberately upload a copyrighted image and sue the company; and since there is no limit on the number of images one can upload, one can attempt to exhaust server resources by uploading tons of images.

Suggested Fix For Vendor 

i) The suggested fix is removing the "svg" extension from the whitelist:

$acceptable_extensions = ['png','gif','jpg','jpeg','jif','jfif'];

ii) File names should be rewritten after they are uploaded, so that their location cannot be guessed, and directory listing should also be disabled.

Suggested Fix For Webmasters

iii) Server administrators should modify the .htaccess file to serve only the allowed extensions and prevent access to other files.

iv) The X-Content-Type-Options: nosniff header should be set to prevent, for example, exploiting the site using an SWF file served with a .jpg extension.

v) The Content-Disposition header should be utilized so that fetched files are downloaded rather than rendered inline.
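Item iii can be sketched as an .htaccess fragment for the plugin's cache directory (this assumes an Apache server; the extension list mirrors the whitelist above, and should be adjusted to the site's needs):

```
# Deny everything in the cache directory by default...
Order deny,allow
Deny from all
# ...then allow only the expected image extensions
<FilesMatch "\.(png|gif|jpe?g|jif|jfif)$">
    Allow from all
</FilesMatch>
# and disable directory listing
Options -Indexes
```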

Thanks to Soroush Dalili from NCC Group and Daniel Cid from Sucuri for tipping this off.

One reason why InfoSec sucked in the past 20 years – the "security tips" myth

From time to time I get disappointed at how much effort and money is put into securing computers, networks, mobile phones... and yet here we are in 2016, where not much has changed on the defensive side. There are many things I personally blame for this situation, and one of them is "security tips".

The goal of these security tips is that if the average user follows a few easy-to-remember rules, their computer will be safe. Unfortunately, by the time people integrate these rules into their daily life, the rules have either become outdated, or they were so oversimplified that they were never true in the first place. Some of these security tips might sound ridiculous to people in InfoSec nowadays, but this is exactly what people still remember, because we told them so for years.

PDF is safe to open

This is an oldie. I think this started at the time of macro viruses. Still, people think opening a PDF from an untrusted source is safer than opening a Word file. For details on why this is not true, check:
On an unrelated note, people still believe PDF is integrity protected because the content cannot be changed (compared to a Word document).
Image stolen from Kaspersky

Java is secure

One of the best ones. Oracle marketed Java as a safe language, where buffer overflows, format strings and pointer-based vulnerabilities are gone. Unfortunately, they forgot to tell the world that instead of "unsafe programs developed by others", they installed their own unsafe program on 3 billion devices.

Stay away from rogue websites and you will be safe

This is a very common belief I hear from average people: "I only visit some trusted news sites and social media, I never visit those shady sites." I have some bad news. In the age of malvertising and infected legitimate websites, you don't have to visit shady sites anymore to get infected.

Don't use open WiFi

I have a very long explanation of why this makes no sense; see here. Actually, the whole recommendation is moot, as people will connect to public WiFi no matter what we (InfoSec) recommend.

The password policy nightmare

I have covered this topic in two blog posts, see here and here. Long story short: use a password manager and two-factor authentication wherever possible. Let the password manager choose the password for you. And last but not least, corporate password policy sux.

Sites with a padlock are safe

We have been telling people for years that communication with HTTPS sites is safe, and that you can be sure it is HTTPS by finding a randomly changing padlock icon somewhere next to the URL. What people hear is that sites with padlocks are safe, whatever that means. Same goes for WiFi: a network with a padlock is safe.

Use Linux, it is free from malware

For years, people told Windows users that if only they used Linux, they wouldn't have so much malware. Thanks to Android, now everyone in the world can enjoy malware on their Linux machine.

OSX is free from malware

It is true that there is significantly less malware on OSX than on Windows, but this is an "economic" question rather than a "security" one: the more people use OSX, the better a target it becomes. Some people even believe they are safe from phishing because they are using a Mac!

Updated AV + firewall makes me 100% safe

There is no such thing as 100% safe, and unfortunately most malware nowadays is written for profit, which means it can bypass these basic protections for days (or weeks, months, years). The more proactive protection is built into the product, the better!

How to backup data

Although this is one of the most important security tips, it is rarely followed. My problem here is not the "back up your data" advice itself, but how we as a community have failed to provide easy-to-use ways to do it. Now that crypto-ransomware is a real threat to every Windows (and some OSX) user, even people who keep backups on their NAS can find those backups lost. The only hope is that OSX at least has Time Machine, which is not targeted yet and is the only backup solution that really works.
The worst part is that we even created NAS devices which can be infected via worms...

Disconnect your computer from the Internet when not used

There is no need to comment on this. Whoever recommends things like that clearly has a problem.

Use (free) VPN to protect your anonymity

First of all, there is no such thing as a free service: if it is free, you are the product. On the other hand, non-free VPNs can introduce new vulnerabilities, and they won't protect your anonymity either; they just replace one ISP with another (your VPN provider). Even Tor cannot guarantee anonymity by itself, and VPNs are much worse.

The corporate "security tips" myth

"Luckily" these toxic security tips have infected the enterprise environment as well, not just the home users.

Use robots.txt to hide secret information on public websites

It is 2016 and somehow web developers still believe in this nonsense. And that is why robots.txt is usually the first thing penetration testers (and attackers) check on a website.
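A tiny sketch of why: the very file meant to keep paths out of search engines hands an attacker a tidy list of them (the robots.txt content below is made up for illustration):

```python
# Extract every "hidden" path a robots.txt advertises.
robots_txt = """\
User-agent: *
Disallow: /admin/
Disallow: /backup/
Disallow: /secret-report.pdf
"""

hidden_paths = [line.split(":", 1)[1].strip()
                for line in robots_txt.splitlines()
                if line.lower().startswith("disallow:")]
print(hidden_paths)  # ['/admin/', '/backup/', '/secret-report.pdf']
```

Anything that must stay secret belongs behind authentication, not in a world-readable exclusion list.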

My password policy is safer than ever

As previously discussed, passwords are bad. Very bad. And they will stick with us for decades ...

Use WAF, IDS, IPS, Nextgen APT detection hibber-gibber and you will be safe

Companies should invest more into people and less into magic blinking devices.

  • Instead of shipping computers with bloatware, ship them with exploit protection software
  • Teach people how to use a password manager
  • Teach people how to use 2FA
  • Teach people how to use common sense


Computer security is complex, hard, and the risks change every year. Is this our fault? Probably. But these kinds of security tips won't help us save the world.

Is it the End of Angler ?

Everyone looking at the drive-by landscape is seeing the same thing: as Nuclear disappeared around April 30th, Angler EK totally vanished on June 7th. We first thought about vacation, as in January 2016, or maybe an infrastructure move. But something else is going on.

On the weekend of the 4th-5th of June, I noticed that the ongoing malvertising from SadClowns was redirecting to the Neutrino exploit kit (dropping Cerber).

EngageBDR malvertising redirecting to SadClowns infra pushing traffic to Neutrino to Drop Cerber Ransomware
On the 6th I noticed several groups migrating to RIG, Neutrino or even Sundown.
But I was speechless when I noticed that GooNky had switched to Neutrino to spread their CryptXXX U000001 and U000006.
They had been sticking exclusively to Angler EK for years, and their vacations were synchronized with Angler's in January.

Checking all the infection paths known to me, I could hardly find any Angler... the last ones were behind the EITest infection chain on the night of the 6th to 7th of June.

Last Angler pass I captured on 2016-06-07
EITest into Angler dropping CryptXXX 3.200 U000017
On June 7th, around 5:30 AM GMT, my tracker recorded its last Angler hit:

Last Hit in my Angler tracker.

After that... RIG and Neutrino instead of Angler almost everywhere. [Side note: Magnitude is still around... but as mentioned earlier, it has been a one-actor operation for some time.]
Aside from SadClowns and GooNky, here are two other big (by traffic volume) groups whose transitions have not been covered yet:

"WordsJS"  (named NTL/NTLR by RiskIQ) into Neutrino > CryptXXX U000010
"ScriptJS" (Named DoublePar by RiskIQ and AfraidGate by PaloAlto) into Neutrino > CryptXXX U000011
This gang was historically dropping Necurs, then Locky Affid13, before moving to CryptXXX.
Illustrating with a picture of words and some arrows:

MISP : select documented EK pass with associated tags.
One arrow where you would have found Angler several days before.
(+ SadClowns + GooNky not featured in that selection)

With the recent 50 arrests tied to Lurk in mind, and knowing that the infection vector for Lurk was the "Indexm" variant of Angler between 2012 and the beginning of 2016... we might think there is a connection, and that some actors are stepping back.

Another hint that this is probably not "only" vacation for Angler is that Neutrino changed its conditions on June 9th. From $880 per week on a shared server and $3.5k per month on dedicated, Neutrino doubled the price to $7k, dedicated only (no more weekly deals). A similar move was seen in reaction to the arrest of Blackhole's coder (Paunch) in October 2013.

So is this the End of Angler ? The pages to be written will tell us.

“If a book is well written, I always find it too short.” 
― Jane Austen, Sense and Sensibility

Post publication notes:

RIG: mentioned they were still alive and would not change their prices.
Maybe unrelated to RIG's statement, Neutrino updated its thread (as announced previously underground), but the conditions were revised:
------Google translate:-----
Rate per week on a shared server:
Rent: $1500
Limit: 100k hosts per day
One-time daily discharge limit: $200

Rate per month on a dedicated server:
Rent: $4000
Limits: 500k hosts per day, and more on an individual basis
One-time daily discharge limit: $200

So now only the weekly price is doubled, and the monthly rate is up ~20%.


Thanks to Will Metcalf (Emerging Threats/Proofpoint) who made the replay of SadClowns' malvertising possible. Thanks to EKWatcher and Malc0de for their help on several points.

Read More :
XXX is Angler EK - 2015-12-21
Russian hacker gang arrested over $25m theft - 2016-06-02 - BBC News
Neutrino EK and CryptXXX - 2016-06-08 - ISCSans
Lurk Banker Trojan: Exclusively for Russia - 2016-06-10 - Securelist - Kaspersky

How we helped to catch one of the most dangerous gangs of financial cybercriminals - 2016-08-30 - SecureList


There's a good article on Google Dorking on Dark Reading here. If you're not sure what Google Dorking is: in essence, it's using Google (you can do the same with other search engines) with advanced search operators to find information on the Internet that shouldn't be exposed. You can find files, passwords, user accounts, open webcams and all sorts of other data. The concept was made popular by Johnny Long, who now resides in Uganda with his family, helping educate needy kids (Hackers For Charity is his organization, and a worthy place to donate funds, equipment or time).
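For a flavor of what those advanced operators look like, here are a few classic dork patterns (illustrative queries, not aimed at any real target):

```
site:example.com filetype:xls "password"    -> spreadsheets with credentials on one site
intitle:"index of" "backup"                 -> open directory listings of backups
inurl:"viewerframe?mode=motion"             -> exposed network cameras
filetype:log inurl:access.log               -> web server logs left world-readable
```

The operators themselves (site:, filetype:, intitle:, inurl:) are documented search features; the "dork" is the creative combination that surfaces things nobody meant to publish.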

Of Data Sharing and Statistics Being Removed


As most of you may know, The Shadowserver Foundation is a non-profit organization in both the US and the EU.  We survive through donations and sponsorships, as well as project work, to expand what we are able to do.  We share our data at no cost with the direct network owners.  From our last few posts you can get an idea of how many drives we go through and the possible cost of maintaining all the work we have been doing.  We do not ask for credit, only the occasional support.

This is Why We Cannot Have Anything Nice

When we started doing our Internet-level scans, we provided all the statistical results by country and ASN to anyone who wanted to download the data.  Our hope was that someone would come back and share some new and interesting way of visualizing that data, or some trending that would blow our minds.  Instead, it seems that this data has been used as a basis for making sales to organizations, and in some cases the statistical data itself was sold.  While we are occasionally wild-eyed firebrands who believe the world is a better place, we are normally rooted in a more robust reality.

So, this is just a note to let everyone know that these statistics have been removed and will not be made available in the future.

Where Does This Lead To?

Well, besides some kumbaya songs by the campfire?  We have been working on creating newer (and older) sets of statistics for the purpose of putting together a wondrous page of visual amazement for everyone sometime in the future.  Of course, finding a studio to help build this beast is one of the first hurdles, as is finding funding to put the whole thing together, but we believe we will get to this soon.  We are hoping to have some form of delivery in 2016.  So look forward, and make your offers to help!

A Close Look at TeslaCrypt 3.0 Ransomware

TeslaCrypt is yet another ransomware taking the cyber world by storm. It is mostly distributed via spear-phishing email and through the Angler exploit kit, which exploits vulnerabilities in Adobe Flash and, on success, downloads a variant of the ransomware.

TeslaCrypt 3.0 introduces various updates, one of which renders encrypted files irrecoverable via normal means.

Infection Indicator/s
Machines infected by TeslaCrypt will usually have the following files present in almost every directory:

  • +REcovER+[Random]+.html
  • +REcovER+[Random]+.txt
  • +REcovER+[Random]+.png

The recovery instructions for the encrypted files can be found inside these files.

TeslaCrypt ransom note

TeslaCrypt ransom note

Technical Details
Note: The file used for this analysis has an MD5 value of 1028929105f1e6118e06f8b7df0b3381.

The malware starts by ensuring it’s in its intended directory. For this sample, it checks if it is located in the Documents directory. If it’s not, it copies itself to that directory and executes its copy from there. It deletes itself after executing the copy.

The ransomware creates multiple threads that do the following:

  • Monitors processes and terminates those that contain the following strings:
    • taskmg
    • regedi
    • procex
    • msconfi
    • cmd
  • Contacts the C&C server and sends certain information, such as system details and the unique system ID.
  • Runs the file encryption routine.

TeslaCrypt is not above recycling code from older malware families. The initial code is an encrypted, compressed binary; upon decryption, the malware calls the RtlDecompressBuffer API and finally writes the decompressed data into its own memory.

Call to RtlDecompressBuffer

Call to RtlDecompressBuffer

The malware also obscures API calls: instead of storing readable API names, it stores a hash of each name and passes it to a function that retrieves the API address.

The malware passes an API hash to a function that returns the procedure address of the API.

The same code but labeled properly in a disassembler

The same code but labeled properly in a disassembler.
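The hash-based resolution idea can be sketched in a few lines of Python. The hash below is the classic ROR-13 used by many loaders; the analysis above does not name the algorithm TeslaCrypt itself uses, so treat this purely as an illustration of the technique:

```python
def ror13_hash(name):
    # Rotate-right-13 hash over the ASCII bytes of an export name.
    h = 0
    for c in name.encode("ascii"):
        h = ((h >> 13) | (h << 19)) & 0xFFFFFFFF  # ROR 13 on 32 bits
        h = (h + c) & 0xFFFFFFFF
    return h

# Stand-in for walking a DLL's export table: map hash -> address
# (the export name plays the role of the address here).
exports = {ror13_hash(n): n for n in
           ["RtlDecompressBuffer", "CreateMutexW", "HttpSendRequestA"]}

def resolve(api_hash):
    # The binary only ever carries the hash, never the readable API name,
    # which is what defeats naive string-based analysis.
    return exports.get(api_hash)

print(resolve(ror13_hash("RtlDecompressBuffer")))  # RtlDecompressBuffer
```

Because only 32-bit constants appear in the binary, an analyst has to precompute hashes of known export names to label the calls, which is exactly what the properly labeled disassembly above reflects.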

File Encryption
TeslaCrypt uses AES encryption and sends one part of the key to its C&C server; without that part, the files cannot be recovered.

It will start by checking if the system already has its own recovery key. If not, it will begin generating the necessary encryption keys. These keys will be used for the encryption routine.

Figure 5

Checks if the recovery key already exists and generates it if it doesn’t.

TeslaCrypt will traverse all fixed, remote and removable drives for files with the following extensions:


The exception, however, is if the file path contains the string “recove” or if the file is found in the following directories:

  • %WINDIR% (C:\Windows)
  • %PROGRAMFILES% (C:\Program Files)
  • %COMMONAPPDATA% (C:\Documents and Settings\All Users\Application Data for Windows XP and C:\ProgramData for Windows Vista and above)
  • %LOCALAPPDATA%\Temporary Internet Files (C:\Documents and Settings\[USERNAME]\Local Settings for Windows XP and C:\Users\[USERNAME]\AppData\Local for Windows 7 and above)
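A small Python sketch of that skip logic (the "recove" rule and directory list as described above; the directory list is abbreviated and Windows-style paths are treated as plain strings):

```python
# Files are left alone if their path contains "recove" (which protects the
# ransom notes themselves) or if they live under an excluded system directory.
EXCLUDED_DIRS = [r"C:\Windows", r"C:\Program Files", r"C:\ProgramData"]

def should_skip(path):
    lowered = path.lower()
    if "recove" in lowered:
        return True
    return any(lowered.startswith(d.lower() + "\\") for d in EXCLUDED_DIRS)

print(should_skip(r"C:\Users\bob\+REcovER+abc+.txt"))      # True
print(should_skip(r"C:\Windows\system32\kernel32.dll"))    # True
print(should_skip(r"C:\Users\bob\Documents\thesis.docx"))  # False
```

Skipping system directories keeps the machine bootable so the victim can still read the ransom note and pay.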
Figure 6

Checking for fixed, removable and remote drives

Once a file passes the extension check, the malware proceeds with encryption. It first checks for its encryption header; only files that are not yet encrypted are processed.

Encrypted files’ headers contain data that includes – but isn’t limited to – the global recovery key, the global public key, the original file size and the encrypted data itself.

Sample of an encrypted file

Sample of an encrypted file

C&C Servers
The malware tries to connect to one of the following domains:

  • hxxp://
  • hxxp://
  • hxxp://
  • hxxp://
  • hxxp://
  • hxxp://

If it manages to connect to a server, it then sends a POST request using encoded data. The data it will send includes the following:

  • The shared key for the encryption
  • Bitcoin address
  • OS version
  • TeslaCrypt version
  • Unique ID for the infected system
HttpSendRequest with the encrypted data

HttpSendRequest with the encrypted data

Other Details
To ensure only one instance of the malware runs, it creates a mutex named “8_8_8_8”.

Figure 9

CreateMutex function

It creates an autostart registry entry to ensure execution at every startup.

Autostart registry

Autostart registry

It also adds a policy in the registry to remove permission restrictions on network drives, essentially allowing any user to access these network drives.

EnableLinkedConnections registry value

EnableLinkedConnections registry value

Interestingly enough, it appears the gang behind TeslaCrypt has had a change of heart and has publicly shared their master decryption key. Before shutting down, the now-defunct payment site required a minimum of $500 in bitcoin.

TeslaCrypt payment page

TeslaCrypt payment page

Advanced threat defense products like those used in this analysis help avoid ransomware infection by catching emerging threats before they can do any damage. You’ve got two great lines of defense: the first is email, and the next is your network.

Advanced email defense solutions like ThreatSecure Email are designed to catch malware that evades traditional defenses. It’s a great tool to help stop attacks by detecting phishing links and exploits that deliver ransomware. That can stop TeslaCrypt from encrypting and taking the data from you.

The next stop is bolstering your network. Adding an advanced defense solution that identifies and correlates discovered threats with anomalous network activity is an invaluable tool to guard your data. ThreatTrack’s ThreatSecure Network, for instance, provides end-to-end network visibility and real-time detection to catch traffic hitting known malicious IPs associated with ransomware distribution and C&C.


The post A Close Look at TeslaCrypt 3.0 Ransomware appeared first on ThreatTrack Security Labs Blog.

Acunetix Website Hack And Lessons Learnt

Update: Acunetix has just released an official response about the incident, read it here.

Last night, the website of Acunetix (a well-known automated web application scanner vendor) was hacked by Croatian hackers. Since then the website has been taken offline, and the Acunetix team is reviewing the root cause of the hack. Currently the homepage displays a "403 Forbidden" error; either the attacker deleted all the files, or the developers deliberately took the site down to review the files for any backdoor that might have been injected.


Lessons Learnt 

The cause of the hack remains unknown, as Acunetix has yet to acknowledge it. However, the hack offers the following important general lessons:

i) Defense is more difficult than offense. For defense you have to find and close a hundred doors an attacker could use to get into the server; for offense, the attacker has to find one single way in.

ii) Web applications nowadays have become extremely complex, with new features added on a daily basis. It's almost impossible to achieve that complexity and security at the same time.

iii) Automated scanners and web application firewalls won't necessarily protect your web applications, as neither understands the business logic of the application. The defense-in-depth principle should be followed, with security ensured at every layer. You can refer to my article "Secure Application Development And Modern Defenses".

iv) Security is not a one-time job; it's an ongoing process. There is no checklist that, once met, guarantees 100% security.

One of the arguments people would use is, "How can their tool ensure our web application's security when they cannot protect themselves from getting hacked?" The answer is that absolutely nothing can ensure 100% security. We have seen many security products fail to ensure their own security; examples can be found here (Imperva SecureSphere Web Application Firewall MX 9.5.6 - Blind SQL Injection), here (So Who Hacked EC-Council Three Times This Week?) and here (Tavis Ormandy finds vulnerabilities in Sophos Anti-Virus products), and it's perfectly normal.

The problem comes when these product owners, instead of acknowledging and responding to the breach, choose to remain silent, thereby losing credibility even further in the eyes of customers as well as the infosec community. Customers have the right to know whether their data was compromised in the breach, to what extent, and, if passwords were compromised, how the passwords were being stored.

With that being said, I would like to highlight that they will not necessarily go out of business after this hack. EC-Council has been hacked multiple times and they are still in business.